
Help with flushing into SSD1306


thexeno


Hi,

I've managed to integrate uGFX with my libraries and it's almost working with the SSD1306 display. I read that I no longer need to flush the modifications to the controller manually if autoflush is enabled (which it is by default when including the SSD1306 driver).

The display is configured (at least, I receive the acknowledge on the I2C bus during the initialization transactions). The problem is that the display data is never sent, even when calling gdispFlush(). Basically, this check

// Don't flush if we don't need it.
		if (!(g->flags & GDISP_FLG_NEEDFLUSH))
			return;

is always true. Could there be a mismatch in the defines? GDISP_FLG_NEEDFLUSH is defined as

#define GDISP_FLG_NEEDFLUSH			(GDISP_FLG_DRIVER<<0)

and it is not clear to me where g->flags gets set. It always seems to be zero; looking through the ugfx/src folder I can't find where it would ever get a different value.
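From what I can tell reading the driver, the flag is supposed to be set by the driver itself whenever something is drawn and cleared again on flush, roughly like this (paraphrased from memory, not the verbatim driver code):

LLDSPEC void gdisp_lld_draw_pixel(GDisplay *g) {
	// ... write the pixel into the driver's shadow buffer ...
	g->flags |= GDISP_FLG_NEEDFLUSH;	// mark the display as needing a flush
}

LLDSPEC void gdisp_lld_flush(GDisplay *g) {
	if (!(g->flags & GDISP_FLG_NEEDFLUSH))
		return;				// nothing changed since the last flush
	// ... send the shadow buffer to the controller over I2C ...
	g->flags &= ~GDISP_FLG_NEEDFLUSH;	// clean again
}

But in my build the drawing side never seems to run, which would explain why the flag stays at zero.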

What am I missing?


3 hours ago, Joel Bodenmann said:

The flag you're referring to is part of the GDISP driver layer as @inmarket mentioned. High-level flushing behavior gets defined by GDISP_NEED_AUTOFLUSH and GDISP_NEED_TIMERFLUSH. If both are set to false, you shouldn't get any flushing happening without manually calling gdispFlush().

I know, but I did set GDISP_NEED_AUTOFLUSH to true. As you said, no flushing should happen if both are false, but I have them set to true. If I have time later I will try to debug again, but the flag in my code seems to stay 0. Could it be a problem with the defines? The configuration file is definitely picked up by the compiler, because if I set GDISP to false, GDISP is not initialized at all.

Later I will try to trace the assignments of this flag in gdisp_lld_SSD1306.c, so that I can give you something more precise.
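For reference, the relevant lines in my gfxconf.h look roughly like this (everything unrelated trimmed; GFX_USE_GDISP is the master GDISP switch I was referring to above):

#define GFX_USE_GDISP			TRUE
#define GDISP_NEED_AUTOFLUSH		TRUE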


OK, here is what is happening. In this function:

static GFXINLINE void drawpixel(GDisplay *g) {

	// Best is hardware accelerated pixel draw
	#if GDISP_HARDWARE_DRAWPIXEL
		#if GDISP_HARDWARE_DRAWPIXEL == HARDWARE_AUTODETECT
			if (gvmt(g)->pixel)
		#endif
		{
			gdisp_lld_draw_pixel(g);
			return;
		}
	#endif

	// Next best is cursor based streaming
	[...]
}

gdisp_lld_draw_pixel(g) should be the one defined in gdisp_lld_SSD1306.c because of the VMT, if I have understood correctly. But gvmt(g)->pixel is always false. In other words, the only call that should happen is the one shown in the snippet, but it gets skipped, and since all the later fallbacks are compiled out by the configuration, drawpixel() just returns without doing anything. How can I check that g is initialized correctly?

In gfxconf.h I had

#define GDISP_DRIVER_LIST                            GDISPVMT_SSD1306

and in gdisp_lld_SSD1306.c I had

#define GDISP_DRIVER_VMT			GDISPVMT_SSD1306

 


Ahh. That is what is going on. You are including the driver the wrong way.

Unless you are using multiple displays do not define GDISP_DRIVER_LIST in your gfxconf.h file.

If you are using the standard makefile you just specify the required driver in the makefile. It is clearly documented how to do that in the standard makefile.

If you are using an IDE or some other build system you need to add the driver directory to your include path and the driver source file to the build list. Both steps are necessary.

With what you are currently doing it is not correctly adding the driver. Note that if you are altering any source within the uGFX directory then you are doing something wrong.

By the way, I think from memory the SSD1306 driver is a window region based driver. I therefore wouldn't expect it to have a setpixel routine.
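From memory the makefile part looks something like this (adjust GFXLIB to wherever your ugfx directory actually lives; the comments in the standard makefile show the exact form):

GFXLIB = ../ugfx
include $(GFXLIB)/gfx.mk
include $(GFXLIB)/drivers/gdisp/SSD1306/driver.mk

For an IDE build the equivalent is adding drivers/gdisp/SSD1306 to your include path and drivers/gdisp/SSD1306/gdisp_lld_SSD1306.c to the list of compiled sources.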


13 hours ago, inmarket said:

Ahh. That is what is going on. You are including the driver the wrong way.

[...]

But I thought the only file I need to modify is gfxconf.h; I did not modify anything else (attached for your information).
Now, with your suggestions (everything else was already correct in terms of inclusions), it turned out the driver was not being updated for another reason, related to clipping. I think this is connected to what you said about it being a window region based driver, so I assume I need to use other APIs, but I wonder what I should use instead.

Here is why the graphics memory does not get updated:
[Screenshot: the clip check in the debugger (clip_check.PNG)]

But when I disable GDISP_NEED_VALIDATION, the CLIP option is disabled and I actually send some data. Of course it does not display anything yet, but that needs some additional debugging, which may involve my I2C library. For additional information, I have also attached my main.c.


But then I realized something bad: the code only sends data if I enable basic optimization. With no optimization at all it gets stuck in this loop because of the weird values shown below, which do not appear with basic optimization. And, with no optimization, if I tweak those values in the debugger so that the loop exits, the code continues to work. See the pictures:

[Screenshots: the loop the code gets stuck in with optimization disabled, and the unexpected variable values (no_opt_bug.PNG, no_opt_bug_vals.PNG)]

Those values never appear with the Optimize (-O1) option active, which is the minimal optimization level, and in that case the code works and never gets stuck. I thought this might be a bug related to the gcc used in Atmel Studio 7. Sorry if I seem naive, but have you tried this driver with Atmel Studio or a gcc compiler?

 

Attachments: gfxconf.h, main.c


We always compile with -O1 on gcc. If it works with that but doesn't with a higher optimisation level then you have struck (yet another) gcc optimisation bug.

Unfortunately the optimiser in gcc has lots and lots of bugs once you turn the optimisation level up. An optimiser is supposed to convert your C code to assembly more efficiently; if it changes what the code does, that is a bug. Unfortunately we cannot cater for gcc-specific optimiser bugs as uGFX is cross-platform and cross-compiler. Try a later gcc version and use -O1, or try a different compiler.

All code released is tested and known working with gcc.

By the way, GDISP_NEED_VALIDATION prevents pixels from being drawn outside the screen area. It stops the gdisp drivers being fed out-of-bounds pixel data. It should never be turned off and the setting is likely to disappear in V3. If that is culling your drawing, it is likely you are trying to draw offscreen.

The gdisp public API can be used regardless of what type of controller you connect - it just internally does operations differently depending on the type of controller. None of that detail however is anything you really need to worry about.
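For example, an application-level test looks the same no matter which controller is underneath (just a sketch):

// Application-level sketch - the same calls work regardless of the controller.
gfxInit();
gdispClear(Black);
gdispDrawLine(0, 0, gdispGetWidth() - 1, gdispGetHeight() - 1, White);
gdispFlush();	// only needed when autoflush is turned off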

gfxconf.h is a user supplied file, part of the user project. We give examples for our demos but it is not a part of the uGFX library itself as it is user project specific.


1 hour ago, inmarket said:

We always compile with -O1 on gcc. If it works with that but doesn't with a higher optimisation level then you have struck (yet another) gcc optimisation bug.

 

But the weird thing is that this happens when no optimization at all is applied; with -O1 it works.


Hi @thexeno,

I'm currently reading this on my phone on the train so I can't dive into any details. But to answer your earlier question: yes, the SSD1306 driver is a well-tested driver that is used by a lot of commercial customers and community members. I have used it myself both in customer projects and in hobby projects, and successfully with different compilers as well.

I know that information doesn't help you resolve your problem right now, but it should give you some confidence on this front.


Hi, I've got it working!

-> I had to sort out the correctness of my I2C driver where it interfaces with the library (as expected). But something was still wrong: the display was badly configured and no useful data was reaching the GRAM.
-> So I also had to handle, in the write-command and write-data routines (through an appropriate change in my I2C APIs), the initial send of the command or data control byte (0x00 or 0x40); I initially thought that was managed by the driver (see the sketch after this list). After that I get the display animations and drawings, but the screen gets dirty with random pixels.
-> Then I had to increase the heap memory, and now everything works fine.
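To make the second point concrete, my command/data wrappers now look roughly like this (i2c_send() and SSD1306_ADDR are placeholders for my own I2C library and module address, not uGFX names):

#include <stdint.h>
#include <string.h>

// Placeholders for my own I2C driver, not uGFX:
extern void i2c_send(uint8_t addr, const uint8_t *buf, uint16_t len);
#define SSD1306_ADDR	0x3C		// my module's 7-bit I2C address

static void write_cmd(uint8_t cmd) {
	uint8_t buf[2] = { 0x00, cmd };	// 0x00 = control byte: "a command follows"
	i2c_send(SSD1306_ADDR, buf, 2);
}

static void write_data(const uint8_t *data, uint16_t len) {
	uint8_t buf[1 + len];		// VLA, just for the sketch
	buf[0] = 0x40;			// 0x40 = control byte: "GRAM data follows"
	memcpy(&buf[1], data, len);
	i2c_send(SSD1306_ADDR, buf, (uint16_t)(1 + len));
}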

But now, the heap of 1024 bytes was not enough and was corrupting memory. After a few trials it works with 1212 bytes; maybe it could be a bit less. Still, uGFX alone (with 1212 B of heap) now occupies 1460 B of RAM. Is that normal? (And, not for practical reasons but out of curiosity: is native support for external memory possible?)
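In case it's relevant, the heap I'm talking about is the one handed to uGFX in gfxconf.h, i.e. (if I remember the option name correctly):

#define GFX_OS_HEAP_SIZE		1212	// 1024 was not enough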
 


Great work at getting it working! Thank you for sharing your findings with us!

The reason your memory requirement is so high is that the SSD1306 driver has to maintain an internal framebuffer. That's simply due to the fact that the SSD1306 requires you to write entire chunks of pixels each time - so that's a limitation imposed by the display controller you're using.
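To put a number on it (assuming the usual 128x64 module): 128 x 64 pixels at one bit per pixel is 128 * 64 / 8 = 1024 bytes for the framebuffer alone.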

µGFX is capable of using external memory. That's no problem at all. You can locate your framebuffer anywhere you want. In fact, some drivers, such as the STM32LTDC, almost always use external memory for that.


5 hours ago, Joel Bodenmann said:

The reason your memory requirement is so high is that the SSD1306 driver has to maintain an internal framebuffer. That's simply due to the fact that the SSD1306 requires you to write entire chunks of pixels each time - so that's a limitation imposed by the display controller you're using.

I just have one last question. Is the heap allocation only for the framebuffer, or for other things as well? I was wondering whether there is a way to know exactly the minimum heap size I can safely set, because if I corrupt memory I won't necessarily see it as dirt on the screen; it may stay latent and cause bugs later.


The heap is used for other things as well. For example, the display driver structure itself is allocated on the heap.

Working out how much heap is currently in use is possible but not that easy. Probably the easiest way would be to change the definition of gfxAlloc to record the memory blocks being allocated. You could probably do this most easily with a debugger by setting a breakpoint on gfxAlloc/gfxFree and then manually adding up the sizes as they occur.
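If you would rather do it in code than with breakpoints, a small counting wrapper is enough; something along these lines (not part of uGFX, just an idea - you still have to route the allocations you care about through it, e.g. by changing the definition of gfxAlloc as described above):

#include "gfx.h"

// Rough idea only: tally the sizes requested through this wrapper.
static size_t total_requested;

void *traced_alloc(size_t sz) {
	total_requested += sz;		// note: excludes the allocator's own per-block overhead
	return gfxAlloc(sz);
}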

With regard to possible corruption, on most CPUs the heap competes for space with the CPU stack. Typically the heap grows up from the end of the program variables and the stack grows down from the top of memory. If the two meet you end up with corruption and other problems. Each CPU and operating system takes a different approach to dealing with the issue, from simply ignoring it to full memory segmentation and protection. As this is completely outside uGFX's scope, it is something we can't help you with. It is best to ask in a forum specific to your CPU and operating system.


On 13/10/2017 at 23:40, inmarket said:

Working out how much heap is currently in use is possible but not that easy. Probably the easiest way would be to change the definition of gfxAlloc to record the memory blocks being allocated. You could probably do this most easily with a debugger by setting a breakpoint on gfxAlloc/gfxFree and then manually adding up the sizes as they occur.

I tried this approach. gfxAlloc() is called only twice: once with size sz = 40 (decimal) and once with 1024. But the display works properly with 1068 bytes of heap available. Do you know why?
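(That is 40 + 1024 = 1064 bytes requested in total, so 1068 leaves only 4 bytes of slack, which is what puzzles me.)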


The argument you pass to gfxAlloc() is the number of bytes you want to have available for your own use. If you request 40 bytes then you'll be given a zone where you can use all 40 bytes as you please. However, the memory manager still needs to store some information (let's call this meta information) about the allocated memory - most importantly the size of the block: when you call gfxFree(), the memory manager needs to know how many bytes are being freed. It does that by putting a header in front of the memory that you get a pointer to. That's why you need a few bytes more than just what gfxAlloc() is given as a parameter.
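As a purely hypothetical illustration (this is not the actual µGFX memory manager, just the general shape of such a header):

#include <stddef.h>

// Hypothetical allocator header - for illustration only, not µGFX's real structure.
typedef struct block_header {
	struct block_header	*next;	// link used by the allocator's block list
	size_t			size;	// usable bytes that follow this header
} block_header;

// A request for 40 bytes therefore consumes 40 + sizeof(block_header) bytes of heap,
// which is why the total heap has to be a little larger than the sum of your requests.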

