Everything posted by Joel Bodenmann
-
Glad to hear that you got the pixmaps working! The required memory size can be calculated with this equation:

mem_bits = (pixels_width * pixels_height * num_bits_per_pixel) + overhead

Note that this equation doesn't account for pixel packing and similar things (which most likely won't be an issue in your case). So if you're using the RGB565 format (16 bits, i.e. 2 bytes per pixel) you need 400 * 240 * 2 = 192'000 bytes of memory for a full framebuffer, plus the overhead. The overhead is totally negligible though, as it's just a couple of bytes. Furthermore, µGFX allows you to use a different pixel/color format for each individual display driver. As pixmaps are implemented using the real display driver interface, this means that you could theoretically use a different color format for your pixmap. However, as you'd have to push the pixels from the pixmap to your physical display at some point, each pixel would need to be converted, which is very slow and would prohibit using DMA unless you have dedicated hardware for that. Depending on the application you wouldn't want one or more full-sized pixmaps; instead you'd use partial, context-dependent pixmaps. But that really depends on what your application does and how you implement other things.
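To put numbers on that, here's a minimal sketch (assuming a 400x240 RGB565 target and GDISP_NEED_PIXMAP enabled in gfxconf.h; the size constants are just this example's, while gdispPixmapCreate()/gdispPixmapDelete() are the real pixmap API):

    #include <stddef.h>
    #include "gfx.h"

    #define PIX_WIDTH   400
    #define PIX_HEIGHT  240

    /* RGB565: 16 bits per pixel -> 2 bytes per pixel.
       400 * 240 * 2 = 192000 bytes, plus a few bytes of header overhead. */
    static const size_t requiredBytes = (size_t)PIX_WIDTH * PIX_HEIGHT * 2;

    void createFullFramebufferPixmap(void) {
        // Allocates the pixel buffer (plus overhead) from the µGFX heap
        GDisplay* pixmap = gdispPixmapCreate(PIX_WIDTH, PIX_HEIGHT);

        // ... render into the pixmap here ...

        // Free the memory again once you're done with it
        gdispPixmapDelete(pixmap);
    }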
-
I gave it a quick try on a bare project and I can confirm that there's a bug in the Keil-specific context switching code for the Cortex-M7 processor. We'll look into this ASAP.
-
Thank you, we appreciate it. Please don't forget to check the things that we mentioned in this topic (e.g. being 100% sure that RTX is no longer running, setting the correct CPU and compiler in the configuration file, ...).
-
We did a lot of tests on a multitude of different platforms and setups, and the RAW32 (bare-metal) port of µGFX 2.7 runs fine. We wouldn't have released it if there were a general problem with it. We'll put together an official demo for exactly your specific setup, but as mentioned, that will take some time as we currently have a lot on our plate. Our recommendation is that you start following the advice we give you. Unfortunately, just saying "doesn't work" all the time doesn't allow us to help you efficiently. Other than that, you either have to wait until we have the time to create a ready-to-run example project for you or contact us for commercial support.
-
There's currently no API to do that, as you discovered. I added this feature request to our ToDo list. Of course, you can always add that function yourself if you're in a hurry; the existing functions of the list widget show how to access items in the list.
-
Let us know whether that works; we'll consider adding a proper high-level API for that.
-
There's currently no high-level API for that. I didn't check the code and I don't have everything in my head, but I guess you should be able to manually modify the keyset member of the GKeyboardObject directly and then issue a redraw. This would look something like this (untested code):

    GHandle ghVirtualKeyboard;

    ghVirtualKeyboard = gwinKeyboardCreate(...);
    ((GKeyboardObject*)ghVirtualKeyboard)->keyset = <your new keyset>;
    gwinRedraw(ghVirtualKeyboard);
-
Okay, I'm a bit confused about what your actual question/problem is. Before we continue: is your question how to change a key set of the existing pre-defined built-in English1 layout, or how to change the currently displayed key set programmatically?
-
The disabled state refers to the state of the entire widget (which you can control through gwinSetEnabled() as well as the corresponding gwinEnable() and gwinDisable() wrappers). With the implementation of the default built-in list widget it's not possible to enable/disable individual list items. If you need such functionality, we recommend implementing a custom widget. In this particular case you can simply copy the /src/gwin/gwin_list.h and /src/gwin/gwin_list.c files to your project and modify them accordingly. Please don't hesitate to ask if you have any further questions. We're happy to help.
-
I assume you are hitting the limitation of the single-file-inclusion mechanism as noted here. In that case you'd have to either use the make build system or fall back to the good old adding-each-file-individually technique. Note that if you're using the high-level project makefiles that we supply, it might be a simple case of setting GFXSINGLEMAKE to no (see the sketch below). If you're not using the single-file-inclusion mechanism: make sure that you do a clean build. If it still doesn't work, let us know.
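For reference, a minimal sketch of what that looks like in a project Makefile that pulls in the supplied µGFX makefiles (the GFXLIB path is illustrative; adjust it to your tree):

    # Project Makefile fragment
    GFXLIB        = ../ugfx          # path to your µGFX checkout (adjust)
    GFXSINGLEMAKE = no               # build each source file individually
    include $(GFXLIB)/gfx.mk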
-
Note: Don't forget to still explicitly set the CPU that you are using by setting GFX_CPU to the correct value, as mentioned by inmarket (see the sketch below).
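A minimal sketch for a Cortex-M7 target such as the STM32F746 (assuming your gfxconf.h; check the GOS options header of your µGFX version for the exact constant names available):

    /* In gfxconf.h */
    #define GFX_CPU    GFX_CPU_CORTEX_M7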
-
There's more to it than simply changing the used OS abstraction in the configuration file. You also have to disable Keil RTX, ensure that the package is no longer loaded, and so on. Then, if there are still problems, one has to debug them, and so on. Unfortunately this is not a two-minute job. We'd recommend creating a new bare-metal "Hello World" (aka blinking LED) project for the STM32F746G-Discovery board in Keil and then adding µGFX as per the step-by-step guide, using the bare-metal port instead of RTX. The first thing you want to do when facing issues is debugging, to figure out whether it's just halting somewhere or ending up in a hard fault or similar. You definitely want to make sure that the threading of the bare-metal port is working properly before you continue. You can do that by either manually creating two or three threads or by using the corresponding GOS demo; a minimal sketch follows below.
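Here's what such a manual threading test could look like (the stack size and the LED toggling are placeholder assumptions; the GOS macros and gfxThreadCreate() are the real µGFX API):

    #include "gfx.h"

    static DECLARE_THREAD_STACK(waBlinkThread, 512);

    static DECLARE_THREAD_FUNCTION(blinkThread, param) {
        (void)param;
        while (1) {
            // toggle an LED here to prove the thread is being scheduled
            gfxSleepMilliseconds(500);
        }
        THREAD_RETURN(0);
    }

    int main(void) {
        gfxInit();
        gfxThreadCreate(waBlinkThread, sizeof(waBlinkThread), NORMAL_PRIORITY, blinkThread, 0);

        while (1) {
            // the main "thread" must keep running (and yielding) as well
            gfxSleepMilliseconds(1000);
        }
    }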
-
Glad to hear that you got it working! And thank you for the feedback regarding the documentation. Don't hesitate to ask if you have any other questions. We're happy to help wherever we can!
-
We'll put together an official ready-to-run demo project as per your request. However, that might take a while; we're currently very busy with customer projects and with getting the next release put together. If this is for a time-sensitive commercial project, I'd recommend contacting us via e-mail regarding commercial support, which would speed things up.
-
This is largely discouraged. Access to the external SDRAM is a lot slower than access to the internal SRAM. You will lose not just a bit, but tons of performance.

That is theoretically possible but maybe a waste of memory. Caching images only makes sense if you frequently need to (re)draw those images. Of course, if you have the memory to spare, you can always cache all images. For a finished product you'd usually convert all your images to the NATIVE image format so that there is zero CPU overhead when rendering an image (if you can spare the memory required to store it, as it will be completely uncompressed). In that case caching an image is only helpful if the access time to the storage location of the image is a lot longer than to your external SDRAM (which would for example be the case if the file comes off an SD card). Other than that, caching really only makes sense if you need to redraw the same image over and over again, or if it's an image format that is very CPU-intensive (and slow) to decode, such as PNG.

My personal recommendation in your case is to use the external SDRAM to render stuff into pixmaps. Depending on the GDISP driver that you are using, all GDISP rendering functions are directly piped through to the display controller. The exceptions are drivers that maintain a framebuffer themselves and only flush to the actual display when asked to (or automatically), but that will most likely not be the case in your particular setup. When you render complex things such as complete widgets, where certain parts get overdrawn, or anti-aliased fonts, where some pixels get drawn multiple times and sometimes even require pixel read-back, there's a lot of access to the framebuffer, which can be very slow. Pixmaps are virtual displays (dynamic and with arbitrary sizes) that allow you to render in your own memory and only copy the final result to the much slower framebuffer once you're finished. As a bonus, you can render the same pixmap to multiple places on the same real display if your application calls for that.

Another thing to keep in mind is the memory bus bandwidth. Your display is connected to the same bus as your SDRAM. As the display controller maintains its own framebuffer, you have to copy every single pixel you want to change from your own memory (either the internal SRAM or, in your case, the external SDRAM) to the display controller's framebuffer. The FMC interface has a maximum bandwidth, and copying data from the external SDRAM to the display controller's framebuffer will be slower because of bus turn-around times and similar things. What might sound a bit confusing can be simplified to this: using the external SDRAM, which is connected to the same bus as your display, means that the maximum frame rate you can get out of your display will be lower. You can either access the SDRAM or your display framebuffer, but never the two at the same time. However, when using pixmaps to render in RAM first, you gain additional performance, because display controllers that maintain their own framebuffers and hook up to the FMC interface usually don't expose the actual framebuffer to the memory bus. Instead, you tell the display controller whether the data you're currently sending should be interpreted as commands or as actual pixel data, then you set up a certain window in which you will operate, and after that you can send the actual pixel values.

Using a pixmap doesn't have all this overhead: each pixel can be addressed directly, there are no commands that need to be passed first, no window setups and so on. Each pixel is part of the memory map of your microcontroller and you can just set it to a different color value directly.

I guess it's getting confusing now; what I'm trying to say is that you have to figure out what works best for you in the end. There are many factors to consider: some I left out because they are only marginally important, some because I don't have a whole lot of time right now, and some because I forgot about them. But nevertheless, my advice stays the same: use pixmaps if your application can actually benefit from them.

Right now the more important thing for you to know is that µGFX currently doesn't provide a proper memory pooling interface. This means that you cannot tell gfxAlloc() where it should take the memory from. Therefore, it would be up to you to properly set up the additional memory segments in your linker script and get things to where you want them to be. In the particular case of pixmaps it might even require one or two lines of modification in the creation function as, if I remember correctly, it doesn't take a GDisplay* parameter that would allow you to directly pass your own object located in the external SDRAM. But I'm not sure, so don't quote me on that.

Sorry for the crowded text; I wrote this over the course of several hours, jumping from one meeting to another (and yes, with some coffee breaks in between, to be honest). Just let us know if you have any additional questions. We're happy to help!
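To make the pixmap workflow concrete, here's a minimal sketch (sizes, coordinates and colors are illustrative; gdispPixmapCreate(), gdispPixmapGetBits(), gdispBlitArea() and gdispPixmapDelete() are the real pixmap API, and GDISP_NEED_PIXMAP must be enabled in gfxconf.h):

    #include "gfx.h"

    void renderViaPixmap(void) {
        // Create a 100x50 off-screen pixmap (a virtual display in RAM)
        GDisplay* pix = gdispPixmapCreate(100, 50);

        // Render into fast RAM instead of the slow physical framebuffer
        gdispGFillArea(pix, 0, 0, 100, 50, Blue);
        gdispGDrawLine(pix, 0, 0, 99, 49, White);

        // Copy the finished result to the real display in one go
        gdispBlitArea(20, 20, 100, 50, gdispPixmapGetBits(pix));

        gdispPixmapDelete(pix);
    }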
-
Hello and welcome to the community! The GUI on the DE0-Nano board on the µGFX home page is based on µGFX. The hexagon-shaped buttons are custom widgets that were implemented specifically for that particular application. You can write any custom widget you like by following the corresponding documentation; there are two or three crude examples available in the download section. Maybe we should think about adding those hexagonal buttons too...
-
We just pushed a fix for this. The cause was a missing __cpp() wrapper macro around the C function call to gfxThreadExit() inside the inline assembly code. Note that we haven't tested this ourselves yet; that will have to wait a couple of days. We'd appreciate it if you could grab the latest master and let us know whether it works for you now.
-
Working on it. I can confirm that there's a build issue with the latest master regarding the missing symbol definition.
-
We are successfully running µGFX on a bare-metal platform with a Keil µVision project ourselves, so this really shouldn't be more than a configuration issue. Can you please do what @inmarket told you and ensure that you updated all the files? On a side note, we just pushed a change/fix for the RAW32 memory manager, so you may want to upgrade to the latest master branch at this point. If you keep having these problems, please attach the complete compilation output log from a clean build as a text file.
-
gdispPixmapDelete won't return all memory allocated
Joel Bodenmann replied to ErikI's topic in Support
And thank you for bringing this to our attention! This fix came just in time to make it into the upcoming release of version 2.7.
-
I just checked the corresponding documentation and you are right: the supplied parameter is interpreted as words, not bytes. µGFX currently doesn't provide a high-level API for memory pool management, so you'd have to do some of that work yourself. Please don't hesitate to create a new forum topic if you have questions about that.
-
Ah, looks like I was a few seconds too late. Glad to hear that you got it working!
-
Are you sure about that x4? I'm pretty confident that it's really just 256 bytes, which is not a whole lot if you run GUI stuff in it. But then again, it depends on whether FreeRTOS adds the space required for task-switching information on top of that or whether that is part of those 256 bytes. Note that there are two different things at play here: stack and heap. The stack is where code actually executes: each time you call a function, stuff gets put on the stack, including backed-up CPU registers, local variables and so on. The heap is the piece of memory used for dynamic allocation; gfxAlloc() uses heap, not stack. If I remember correctly, the heap size in FreeRTOS is configured by something called configTOTAL_HEAP_SIZE or similar (see the sketch below). You might have to increase that. However, keep in mind that by increasing the heap size you sacrifice memory that would otherwise be available for stacks. Don't worry, we are happy to help wherever we can. But of course we never say "No" to a beer.
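For reference, a minimal sketch of where that lives (the 32 KiB value is purely illustrative; tune it to your application):

    /* In FreeRTOSConfig.h */
    #define configTOTAL_HEAP_SIZE    ((size_t)(32 * 1024))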
-
When you're using the FreeRTOS port, the GFX_OS_HEAP_SIZE macro has no effect. Instead, gfxAlloc() is simply a wrapper macro around pvPortMalloc(). In that case you have to properly configure your memory settings on the FreeRTOS side of things. Don't forget to make sure that your FreeRTOS task has a sufficient stack size as well. Most likely your problem right now is that your stack sizes are so large that the heap managed by FreeRTOS is simply too small, so with all the stuff you're doing you run out of memory and can't allocate the gdispImage object for your image anymore. See the sketch below for the task-creation side of this.
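A minimal sketch of the task-creation side (the task name, the 1024-word stack depth and the priority are illustrative assumptions; note that xTaskCreate() takes the stack depth in words, not bytes):

    #include "FreeRTOS.h"
    #include "task.h"
    #include "gfx.h"

    static void guiTask(void *pvParameters) {
        (void)pvParameters;
        gfxInit();
        for (;;) {
            // GUI work goes here
            gfxSleepMilliseconds(10);
        }
    }

    void startGui(void) {
        // Stack depth is in words: 1024 words = 4 KiB on a 32-bit MCU
        xTaskCreate(guiTask, "GUI", 1024, NULL, tskIDLE_PRIORITY + 1, NULL);
    }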
-
As the image doesn't get cached (unless you explicitly ask for that by using gdispImageCache()), this means that there wasn't enough memory to allocate the gdispImage object. The decoder itself is part of the program memory. The GDISP_IMAGE_ERR_UNRECOVERABLE that you get along with it just indicates that it's an error that can't be "fixed" automatically. If you are using the built-in memory manager (e.g. when using the RAW32 port to run bare-metal without an underlying operating system), it's usually just a matter of increasing the heap size that you specify in the configuration file with GFX_OS_HEAP_SIZE.
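For reference, a minimal sketch (the 40 KiB value is purely illustrative; size it to your images and widgets):

    /* In gfxconf.h - only used by the built-in (RAW32) memory manager */
    #define GFX_OS_HEAP_SIZE    40960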