
Should DMA variables be volatile

Carl_G
Senior II

I am using the DMA in a few places, and I am wondering whether these variables should be volatile. As it is, the compiler has no idea when data will be read from or written into these variables. When that is the case, shouldn't you make the variable volatile? I don't think I have seen them marked volatile in the example code, however. Am I missing something here?

TDK
Guru

If using the data cache, DMA buffers should be cleaned before the DMA reads them (transmit) and invalidated before the CPU reads what the DMA has written (receive). Volatile doesn't help with that.
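
As a rough sketch of that cache maintenance (assuming a Cortex-M7 class part with the D-cache enabled, CMSIS headers, and a UART driven through the HAL with DMA; the header, handle and buffer names are placeholders, not code from this thread):

/* Sketch only: not from this thread. Assumes CMSIS + HAL are available. */
#include "stm32h7xx_hal.h"           /* whichever family header applies */

#define BUF_LEN 64                   /* multiple of the 32-byte cache line */
static uint8_t txBuf[BUF_LEN] __attribute__((aligned(32)));
static uint8_t rxBuf[BUF_LEN] __attribute__((aligned(32)));

extern UART_HandleTypeDef huart1;    /* placeholder handle */

void startTransfer(void)
{
  /* CPU filled txBuf: clean the D-cache so the data is really in RAM
     before the DMA reads it. */
  SCB_CleanDCache_by_Addr((uint32_t *)txBuf, BUF_LEN);
  HAL_UART_Transmit_DMA(&huart1, txBuf, BUF_LEN);

  HAL_UART_Receive_DMA(&huart1, rxBuf, BUF_LEN);
}

void onReceiveComplete(void)         /* e.g. called from the RxCplt callback */
{
  /* DMA wrote rxBuf behind the cache: invalidate so the CPU re-reads
     RAM instead of stale cache lines. */
  SCB_InvalidateDCache_by_Addr((uint32_t *)rxBuf, BUF_LEN);
  /* ... process rxBuf ... */
}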

Variables should be marked volatile if their value can change outside the compiler's view while a function is running and the function needs the updated value. For example, if you set a flag in an interrupt which you then check for and process in the main loop. (This is a little more complicated, since link-time optimization can cause functions to become combined, so to speak.)
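
A minimal sketch of that flag case (the ISR name is just an example, not from this thread):

#include <stdbool.h>

static volatile bool dmaDone = false;    /* set by the ISR, read by main() */

void DMA1_Stream0_IRQHandler(void)       /* example ISR name */
{
  /* ... clear the DMA interrupt flags ... */
  dmaDone = true;
}

int main(void)
{
  /* ... start a DMA transfer here ... */

  while (!dmaDone)
  {
    /* 'volatile' forces a fresh read of dmaDone on each pass; without it,
       the optimizer may read it once and spin forever. */
  }

  /* ... process the completed transfer ... */
  for (;;) { }
}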

 

CTapp.1
Senior II

If a buffer is initialised (which is always the case in C if it's not an automatic variable), then the optimizer can make assumptions about the value it holds. If that buffer is used to hold (say) an SPI command sequence and is also used to receive the response (replacing the original content), then the optimizer may assume the value has not changed when the receive transfer takes place via DMA:

void f1( void )
{
  uint8_t buffer = 0xa5; // Command value

  spiDMASendReceive( &buffer );

  switch ( buffer )
  {
    case 0x00: action00(); break;  // Eliminated by optimization
    case 0x01: action01(); break;  // Eliminated by optimization
    default:   break;              // Only feasible value without 'volatile'
  }
}

You can see the effect here: no code is generated for f1(), but it is for f2(), where the 'volatile' qualification has been added.
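
f2() itself isn't pasted above; presumably it is the same function with the buffer qualified volatile, roughly like this (a sketch, not the original code):

void f2( void )
{
  volatile uint8_t buffer = 0xa5; // Command value

  spiDMASendReceive( (uint8_t *)&buffer ); // cast drops 'volatile' for the driver

  switch ( buffer )                // compiler must re-read buffer,
  {                                // so all of the cases survive
    case 0x00: action00(); break;
    case 0x01: action01(); break;
    default:   break;
  }
}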

And yes, caching (hardware) and optimization (compile time) are not the same, but both can appear to "break" the code.

Edited to make it clear that I was referring to hardware caching, not run time caching of calculated values.


@CTapp.1 wrote:

And yes, caching (runtime) and optimization (compile time) are not the same


But the optimisation might be to use "caching" (unlikely in the case of a buffer, but quite possible for a simple int, say).

See the PS to my previous post.


That's where we get into terminology and definitions! I was only considering hardware, so will update my reply to reflect that ;)

The language standard doesn't help either. It does mention "caching" (generally to mean that a computed value may be stored in a static buffer* so the result can be reused), but it has nothing to say about optimization other than that it must not change the program's observable behaviour.

* this is generally "design time" optimization made during compiler implementation, whereas the elimination of a read would be made during compilation.

Instead of actually reading from memory each time, the compiler might optimise by reading once into a register, and using that.

IIRC this is called register allocation or register optimization. Not caching.
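
A minimal sketch of that effect (illustrative names only): without volatile the compiler may load the value into a register once and never re-read memory; with volatile it must re-read on every pass.

#include <stdint.h>

uint32_t dmaCount;             /* updated by DMA/ISR, not by this code */
volatile uint32_t dmaCountV;   /* same, but volatile */

void waitForCount(uint32_t target)
{
  /* The compiler may load dmaCount into a register once and compare
     against that register forever - an endless loop if the memory
     changes behind its back. */
  while (dmaCount < target) { }
}

void waitForCountV(uint32_t target)
{
  /* 'volatile' forces a load from memory on every iteration. */
  while (dmaCountV < target) { }
}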