Posts posted by inmarket

  1. This might be a problem with the amount of memory allocated for the history buffer. There is a configuration variable that changes how that memory is allocated: in one case it allocates enough memory to cover the entire window; in the other it allocates a proportion of that, based on a fill percentage, given that windows are seldom 100% full of characters.

    Note that this calculation is done when the window is created. If you change the window font to a smaller font then the memory allocated may be insufficient for a full screen.

    In any case, the buffer not being large enough will not cause crashes. It will just forget earlier data (contents at the top of the window).
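    As a minimal sketch, assuming the usual gfxconf.h console options (the option names below are from memory and may differ between uGFX versions - check gfxconf.example.h in your tree):

        /* gfxconf.h - history buffer behaviour for GWIN console windows */
        #define GWIN_CONSOLE_USE_HISTORY        TRUE    /* keep a scroll-back/history buffer */
        #define GWIN_CONSOLE_HISTORY_AVERAGING  TRUE    /* allocate only a proportion of a full
                                                           window, assuming lines are seldom
                                                           100% full of characters */
        #define GWIN_CONSOLE_HISTORY_ATCREATE   TRUE    /* size the buffer when the window is
                                                           created - a later change to a smaller
                                                           font can leave it too small */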

     

  2. There is no other repository with images etc. We simply used the demo image programs in the repository, changed the image to display, and then visually compared the result with the sample output provided by the image suite. From memory there are good test suites for BMP, GIF and PNG; a simple Google search should find them.

    Building the demo programs is development-platform and target-platform specific, which is why no build makefile is included with the demos even though the source code for most demos is platform independent. Instead we have a generic gmake-based makefile that will work for most platforms with minor tweaks per platform. For example, look in /boards/base/stm32f429/examples. You will see our generic makefile. It can be customised for any platform (development host, target device and embedded OS) very simply; in particular, there is a single variable to set in order to build any particular demo for the specified platform. All the complex supporting files are in /tools/gmake_scripts. There is a good wiki article on the website on doing a first-time build under Windows.

    Of course, using a makefile is only one way to build uGFX. Many people use our "single-file-make" system, particularly with IDEs. This method, however, does have some functionality limitations (to be fixed in v3).

    So, for automated building and testing, something based around our generic makefile would be a good start. The test development platform and target platform would need to be specified - my suggestion is that it should support Linux/Linux and Win32/Win32, as it would be hard to verify the correctness of programs built for a different target device.

    I hope that helps

  3. Thank you for your kind comments!

    With regard to the image decoders, yes, the JPG decoder is the odd one out. It was recently added based on contributed code. It is the only decoder that currently requires full in-memory decoding (rather than decoding as the image is being displayed), and it is also the least tested of the decoders. It is basically there for completeness until it can be rewritten to be more consistent with the other decoders. Whilst my primary effort is currently on the upcoming uGFX V3, which is concentrating on the GDisplay/GDriver interface, I hope over Christmas to be able to clean up the JPG decoder.

    With regard to test suites for the image decoders, all the decoders except JPG (and native) were run against image suites during development. These image suites were not added to the uGFX library itself for a few reasons: 1. they are of little use to most uGFX users, 2. including them raised licensing questions, and 3. they are freely available elsewhere on the internet.

    Automated unit testing is something that is sorely missing in uGFX at the moment. Whilst we have a good range of "demo" apps, they do not constitute proper unit tests. The reason for this is simply resourcing: uGFX has grown quickly from a hobby project, so there are still many gaps.

    With regard to your ideas for contribution - we would love it! We are happy to provide assistance wherever we can, and happy to change aspects of the project in order to achieve improved development rigor.

    Regarding alpha blending and APNG, one of the current difficulties is the lack of alpha support at the GDISP/GDisplay interface. Whilst manual alpha blending is possible by reading back from the display surface, many displays do not support read-back. To date we have tended to cheat in these situations by only alpha blending against a constant background colour, and using an alpha cliff in other situations. In uGFX V3 we are adding alpha channel support to the GDISP/GDisplay/GDriver interfaces, which should make alpha blending and APNG much more easily obtainable. The intention is that some time after uGFX v3.0 the image decoders will get some rework to use the new GDISP capabilities, such as streaming support, in order to further reduce resource requirements and improve speed.

    Any help you can provide would absolutely be appreciated!

  4. uGFX doesn't decode the whole image into RAM (unlike most image libraries), except for the JPG decoder. Instead it decodes on the fly, straight from the encoded image in flash.

    Each of the image calls returns detailed error information if it fails. Use your debugger to determine what the error code is (there is a small sketch of checking it at the end of this post).

    I suspect that your problem is not enough heap memory. GIF images need a minimum of 12k of RAM while they are decoding (due to the memory requirements of the compression algorithm), so when you only have 16k of total heap things are VERY tight, which is why you can get it to work under RAW32 but not with the overhead of FreeRTOS. That 12k is only used while an image is decoding, so you can reuse the same 12k to decode one image after another with no problems.

    I would suggest that you use BMP images, perhaps RLE encoded and with the minimum number of bits per pixel possible for your image, in order to save space.

    GIF, PNG and JPG images will all require too much decoding RAM for your situation.
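    A minimal sketch of checking the returned error (the gdispImage calls below are the standard image API from memory, and the file name is only an example; this assumes GDISP_NEED_IMAGE, the relevant decoder and GFILE support are enabled in gfxconf.h):

        #include "gfx.h"

        static gdispImage myImage;

        void showLogo(void) {
            gdispImageError err;

            /* Open straight from the file - nothing is decoded into RAM yet. */
            err = gdispImageOpenFile(&myImage, "logo.bmp");
            if (err != GDISP_IMAGE_ERR_OK) {
                /* Inspect err in the debugger: bad format, unsupported feature,
                 * out of memory, file not found, ... */
                return;
            }

            /* Decoding happens on the fly during the draw; the decoder's working
             * memory is taken from the heap only while this call runs. */
            err = gdispImageDraw(&myImage, 0, 0, myImage.width, myImage.height, 0, 0);

            gdispImageClose(&myImage);
        }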

  5. Another thing you might want to look at...

    There are some forum posts from a person who updated the STM32 LTDC driver to support a second page that uGFX drew into, rather than drawing to the primary display surface. They then used the display sync period to DMA the data from the second buffer to the primary display surface using the DMA2D controller in the STM32F746. This prevented tearing and allowed uGFX to work without the need for pixmaps.

  6. Pixmaps cannot be built with single-file make and the current uGFX - the build requirements for multiple displays (and a pixmap is effectively a second display) are just too complex for single-file building. One such issue is the symbol space: most drivers use similar or identical function names, making it impossible to build two or more drivers in the same compilation unit.

    All hope is not lost, however: in the upcoming uGFX v3 these issues will be resolved by a redesigned driver interface. In v3, makefile builds will internally use single-file make.

    Having said that, using makefile builds in Eclipse is very easy - just start a makefile project. This is actually our preferred and best-supported mechanism. Creating a project using the Eclipse automatic makefile building process and integrating a custom uGFX makefile is, I believe, possible but NOT simple, requiring careful modification of the automatic makefile.

  7. uGFX, like all modern graphics systems, assumes a 1:1 pixel aspect ratio, i.e. the width of a pixel is the same as its height.

    Historically this hasn't always been true, e.g. I think EGA had non-square pixels. The big advantage of VGA when it came out was its (comparatively) high resolution and its square pixels.

    Unfortunately it looks like your display does not have square pixels. I am guessing it is a cheaper, lower-quality display - certainly it has a design problem, and shapes will be warped in whatever orientation you draw; it is probably just more obvious in portrait mode.

    The only way around this is to take the distortion into account when you draw, e.g. use an appropriately sized ellipse rather than a circle.
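    For example, a minimal sketch assuming (hypothetically) that your panel's pixels are 4 units wide by 3 units high - substitute the real ratio for your display, and note this needs GDISP_NEED_ELLIPSE enabled in gfxconf.h:

        #include "gfx.h"

        #define PIXEL_W 4   /* assumed relative pixel width  */
        #define PIXEL_H 3   /* assumed relative pixel height */

        /* Draw something that *looks* like a circle of radius r on a display with
         * non-square pixels by drawing an ellipse with a corrected horizontal radius. */
        static void drawVisualCircle(coord_t x, coord_t y, coord_t r, color_t c) {
            coord_t a = (r * PIXEL_H) / PIXEL_W;    /* horizontal radius in pixels */
            coord_t b = r;                          /* vertical radius in pixels   */
            gdispDrawEllipse(x, y, a, b, c);
        }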

  8. We will look at adding something like a GFLASH directive in uGFX v3.

    Note the ESP8266 has similar problems, as its flash is only 32-bit addressable, not byte addressable, even for byte-level data. To work around this, the compiler places everything in RAM.

  9. When gcc compiles, it places static const data into its own linker segment. It is up to the linker, the link map and the flash loader to ensure that it is put in flash rather than RAM.

    On most platforms that results in it being in flash. An example where it does not is the ESP8266: for that processor everything is put into RAM because of addressing restrictions and alignment issues with the flash.

    On your processor you will need to examine the linker scripts to see where static const data is put. In uGFX we define fonts as static const data and therefore expect them to end up in flash. If they do not on your platform, you may need to play with your linker scripts.
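    As a rough illustration (the symbol name here is hypothetical, and the exact tool names depend on your toolchain):

        /* Declared static const, so gcc emits it into the .rodata section.
         * Whether .rodata ends up in flash or RAM is decided by your linker
         * script and memory map, not by the C code itself. */
        static const unsigned char myFontData[] = {
            0x00, 0x3C, 0x42, 0x42, 0x3C, 0x00,     /* example read-only bytes */
        };

    After building, search the linker map file for the symbol (or run something like arm-none-eabi-nm -S yourapp.elf and look up its address) and check that it falls within your FLASH region rather than RAM.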

  10. GTIMER runs its jobs on its own thread. This means that if one timer job takes a long time then it will definitely delay other timer jobs that are ready to run. For a cooperative OS like RAW32, the same also applies to work done in any other thread.

    The key point here, however, is that any delay only happens after a task is ready to run; it doesn't change when the task becomes ready to run. E.g...

    A timer task at 35000ms system time is set to run in 100ms. This means the task is "ready" to run at 35100ms system time. No matter what else is happening in other tasks that doesn't change. The timer task will then be run at the first available opportunity after 35100ms system time. Having other busy tasks will only delay that first opportunity. 

    The advantage of this is that if you set a repeating timer, the cycle rate will average out to that period even if a specific timer event is delayed due to the CPU being busy doing other things.
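    A minimal sketch of a repeating timer (the GTimer calls are from memory; check gtimer.h for the exact signatures in your uGFX version):

        #include "gfx.h"

        static GTimer blinkTimer;

        /* Runs on the shared GTIMER thread: keep it short, because a long-running
         * callback delays every other timer job that becomes ready in the meantime. */
        static void blinkCallback(void* param) {
            (void)param;
            /* toggle an LED, post an event, etc. */
        }

        void startBlinker(void) {
            gtimerInit(&blinkTimer);
            /* Periodic timer: becomes "ready" every 100ms of system time; if the CPU
             * is busy an individual callback may run late, but the average rate over
             * time stays at one callback per 100ms. */
            gtimerStart(&blinkTimer, blinkCallback, 0, TRUE, 100);
        }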

  11. This is a problem in the SDL mouse driver.

    If you look at line 209 in drivers/multiple/SDL/gdisp_lld_SDL.c you will see that explicit touch support has been turned off with a #if 0.

    It looks like the reason this has been turned off is because on some equipment (e.g. OSX touchpads) the coordinates returned are completely wrong. It also looks like the coordinates can come in different forms (0 to 1.0, or 0 to screen size) depending on the SDL version, although the code looks like it can handle that.

    You can try re-enabling that code - maybe it will work for you on your hardware.

     

     

  12. Another way is to create a separate thread: either run your other tasks in the new thread, or put geventEventWait into the new thread and use the main thread for your other tasks.

    If you use this thread approach and are using a cooperative scheduler (like in the RAW32 or Arduino ports), don't forget to call gfxYield periodically to give other threads a chance to run.
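    A rough sketch of that approach under uGFX v2 (the thread-declaration macros, NORMAL_PRIORITY and the 512-byte stack size are from memory and differ slightly between versions; check gos.h):

        #include "gfx.h"

        static DECLARE_THREAD_STACK(waEventThread, 512);

        static DECLARE_THREAD_FUNCTION(eventThread, param) {
            GListener* gl = (GListener*)param;

            while (TRUE) {
                GEvent* pe = geventEventWait(gl, TIME_INFINITE);
                (void)pe;
                /* ... handle the event ... */
            }
            THREAD_RETURN(0);
        }

        void runApplication(GListener* gl) {
            gfxThreadCreate(waEventThread, sizeof(waEventThread), NORMAL_PRIORITY, eventThread, gl);

            while (TRUE) {
                /* ... do your other tasks here ... */

                /* On a cooperative scheduler (RAW32, Arduino) give other threads,
                 * including the one above, a chance to run. */
                gfxYield();
            }
        }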

  13. The second parameter to geventEventWait is a timeout: you can specify, in milliseconds, how long geventEventWait should wait for an event. If it hasn't found an event within that time it returns NULL rather than an event pointer.

    By placing geventEventWait in a loop with a suitable timeout period, and by handling the NULL that can now be returned, you can do other things in the loop as well.
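    A minimal sketch of that pattern (creating the listener and attaching it to your widgets/sources is omitted; doOtherWork is a hypothetical application function):

        #include "gfx.h"

        extern void doOtherWork(void);      /* hypothetical application hook */

        void uiLoop(GListener* gl) {
            while (TRUE) {
                /* Wait up to 50ms for a GUI event; NULL means the timeout expired. */
                GEvent* pe = geventEventWait(gl, 50);

                if (pe) {
                    /* handle the event, e.g. switch on pe->type */
                }

                /* Timeout or event, we get back here regularly for other work. */
                doOtherWork();
            }
        }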

  14. I am not sure why doing it on the command line would differ from doing it as a system call. Perhaps one was setting the date forward and the other was setting it backwards?

    Having said that, geventEventWait has a timeout parameter. You will probably need to debug into it to find out what is happening. It sounds like you are using Linux (or some other Unix platform), so it is entirely possible that changing the date has affected the timing calls used by uGFX.

    Places to look...

    geventEventWait - obviously

    The sleep and other semaphore wait functions in the GOS layer

    The time handling in gtimer.

  15. Just a short comment on what you said about size...

    uGFX has multiple modules which provide different features, e.g. GDISP is just basic drawing while GWIN is a full windowing system. As individual modules can be turned on and off, and many sub-features can also be turned on and off, it is possible to strike a compromise between functionality and resource usage. Unfortunately, when you want the advanced features, the price in resources must be paid. At one extreme, GDISP is small (particularly if only the required features are turned on and only the UI2 font is used) and does not require any other modules. At the other, GWIN requires just about every other module in uGFX, although again its features can still be controlled.

    We have spent a lot of effort keeping resource usage to a minimum, as uGFX is designed for embedded platforms, and we believe it is the lightest in the industry. Still, running GWIN on an ATmega8 is not really practical.
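    As a rough illustration of that trade-off, a stripped-down gfxconf.h might enable only GDISP (the option names below are the usual v2 switches; gfxconf.example.h in your tree is the authoritative list):

        /* gfxconf.h - a small build: basic drawing only, one built-in font */
        #define GFX_USE_GDISP            TRUE
        #define GDISP_NEED_TEXT          TRUE
        #define GDISP_INCLUDE_FONT_UI2   TRUE

        /* Heavier modules stay off so they cost nothing */
        #define GFX_USE_GWIN             FALSE
        #define GFX_USE_GINPUT           FALSE
        #define GFX_USE_GEVENT           FALSE
        #define GFX_USE_GTIMER           FALSE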

    Studio is built on top of GWIN. As it produces generated code it is never going to be as efficient as hand-optimised code (although, again, we try hard). Unfortunately, there is a price to pay for the power that Studio brings, and that takes it beyond the capabilities of some devices.

    That is the thing about embedded programming - it is always a compromise between resources and the power of the provided interface.

    If you (or anyone else) have ideas on how we can offer the same functionality with less resource usage, we would love to hear them. Just create a new forum topic; we need all the help and ideas we can get. :)

     
