
SPI + DMA Chip Select Data Synchronization?

Fabian21
Associate III

Hi everyone,

I am currently working with an STM32G071. It receives data over SPI in slave mode. The data is packet-based; each packet currently has 4 bytes, but that will differ in the future. A single chip-select frame may contain multiple packets, but a single packet is never split across multiple CS frames.

The SPI is configured to work with the DMA to transfer the 4 bytes of a packet, and I get an interrupt from the DMA once it has finished. Everything works as expected.
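
(For reference, a minimal sketch of such a setup with the Cube HAL could look as follows; the handle name hspi1, the PACKET_SIZE constant and the re-arming in the callback are my assumptions, not details taken from the post.)

    #include "stm32g0xx_hal.h"

    #define PACKET_SIZE 4u

    extern SPI_HandleTypeDef hspi1;          /* SPI in slave mode, RX via DMA */
    static uint8_t rx_packet[PACKET_SIZE];

    void start_packet_reception(void)
    {
        /* Arm the DMA for exactly one packet; HAL_SPI_RxCpltCallback() fires
           after PACKET_SIZE bytes have arrived, regardless of what CS does. */
        HAL_SPI_Receive_DMA(&hspi1, rx_packet, PACKET_SIZE);
    }

    void HAL_SPI_RxCpltCallback(SPI_HandleTypeDef *hspi)
    {
        if (hspi == &hspi1) {
            /* process rx_packet here, then re-arm for the next packet */
            start_packet_reception();
        }
    }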

However, I would like to make sure that the receiver is robust even against wrong data on the bus, in particular an incorrect number of bytes. Currently, the DMA transfers 4 bytes into RAM while ignoring CS. If one byte is missing, the DMA receives only the 3 remaining bytes. Although CS is then de-asserted, the DMA waits for the next transmission to get the last byte. From that point on, all data is shifted by one byte. That is not acceptable for me; I want the DMA to re-sync at some point.

As far as I can see, there is no way to get notified of a CS event by the SPI peripheral. There is no interrupt or status flag for being selected/unselected. Is that correct?

The only solution I can think of is to set up an additional GPIO EXTI interrupt on the CS line and use it to re-initialize the DMA every time CS is asserted or de-asserted. If I implement it for the assertion case, I have to make sure that the ISR is fast enough to be ready before data arrives over the SPI, which might not be possible. If I implement the interrupt for the de-assertion, I have to make sure that the DMA has finished transferring the data and its interrupt has already run, which might be a bit tricky.
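
(A sketch of the de-assertion variant, assuming an active-low CS routed to an EXTI line with a rising-edge interrupt; the pin name CS_Pin is made up, it reuses hspi1/PACKET_SIZE/start_packet_reception() from the sketch above, and the race between the DMA transfer-complete interrupt and this EXTI interrupt is ignored. The STM32G0 HAL provides separate rising- and falling-edge EXTI callbacks, which is assumed here.)

    void HAL_GPIO_EXTI_Rising_Callback(uint16_t GPIO_Pin)
    {
        if (GPIO_Pin == CS_Pin) {
            /* CS went high: the frame is over. If the DMA still expects bytes,
               the last packet was short - abort and re-arm so the next frame
               starts on a packet boundary again. */
            if (__HAL_DMA_GET_COUNTER(hspi1.hdmarx) != PACKET_SIZE) {
                HAL_SPI_Abort(&hspi1);        /* stops the SPI and its RX DMA */
                start_packet_reception();     /* re-arm for a fresh packet */
            }
        }
    }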

Am I missing something obvious? How would you implement this? Should be a "standard" problem I guess 😉

Thanks a lot!


6 REPLIES

Use circular DMA and the chipselect interrupt only.

JW

Fabian21
Associate III

Hi,

can you elaborate on your answer a bit?

You mean the asserting-interrupt? So at the beginning of a transfer?

What should the interrupt exactly do? Reset the DMA channel?

Fabian

> You mean the asserting-interrupt?

Yes.

> So at the beginning of a transfer?

No, at the end.

> What should the interrupt exactly do? Reset the DMA channel?

No, process the received data, as you do in the DMA interrupt now (which you don't need anymore). There is no need to reset the DMA; you run it circular, forever.
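
(A rough sketch of that idea, instead of the per-packet DMA shown earlier: a circular RX buffer sized for the longest possible frame, consumed from the CS de-assertion interrupt. The buffer size, pin name and handle names are illustrative assumptions, not something specified in this thread.)

    #define FRAME_BUF_LEN 64u                  /* assumed upper bound per CS frame */

    extern SPI_HandleTypeDef hspi1;
    static uint8_t  frame_buf[FRAME_BUF_LEN];  /* DMA channel set to circular mode */
    static uint32_t rd_pos;                    /* read index into frame_buf */

    void start_circular_reception(void)
    {
        /* Started once; the circular DMA then wraps around frame_buf forever. */
        HAL_SPI_Receive_DMA(&hspi1, frame_buf, FRAME_BUF_LEN);
    }

    /* CS de-assertion (rising edge for an active-low CS) ends a frame. */
    void HAL_GPIO_EXTI_Rising_Callback(uint16_t GPIO_Pin)
    {
        if (GPIO_Pin == CS_Pin) {
            /* Current DMA write position = buffer length minus remaining count. */
            uint32_t wr_pos = FRAME_BUF_LEN - __HAL_DMA_GET_COUNTER(hspi1.hdmarx);

            /* Consume everything received since the previous frame; the DMA
               itself is never stopped or re-initialised. */
            while (rd_pos != wr_pos) {
                /* feed frame_buf[rd_pos] into the packet parser ... */
                rd_pos = (rd_pos + 1u) % FRAME_BUF_LEN;
            }
        }
    }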

JW

Fabian21
Associate III

Sorry but I don't get it.

Please correct me if I am wrong: for me, assertion means the selection of the chip. In the case of an active-low signal this is the falling edge - the beginning of the transmission.

The de-assertion interrupt (de-selection) fires only when the transmission is complete, which may have included several packets (an unknown number). So I could only process the last one, as the others got overwritten. Even if I chose a larger DMA buffer to hold multiple packets, how do I know where a packet begins?

There's no single "best" solution to a problem such as this (that's the major fallacy of schemes like HAL, assuming that you can abstract away the details in real-world applications). The possibilities are numerous and they depend on the detailed circumstances.

You gave us only a vague description of the master, and hinted at a possibly missing byte, which might indicate unreliable hardware or potentially crappy software allowing overruns.

The basic description said that packets are framed by chip select and that you want to discriminate between 3- and 4-byte packets (yes, you mentioned multiple packets per frame, but did not go into further detail). That alone sounds like a quite decent basis, and I replied outlining what I would do. Your additional requirements change the picture: if you don't know how long the packets are and how they are combined within the frame given by chip select, how do you want to discriminate between good and bad packets?

Do you have control over the master? Then plan the scheme for an optimal process on both sides. If not, present the exact requirements imposed by the master; it may well turn out that there are only inferior solutions.

Or, to answer your questions directly:

> how do I know where a packet begins?

Where the previous packet ended.

JW

Fabian21
Associate III

Thanks for your elaboration.

I was more or less deliberately vague, as I have control over both master and slave and am thinking about a protocol where I need to transfer packets of constant and known size. Yes, I could just use circular DMA mode, get an interrupt once the buffer is filled, and this would probably work. However, if there is some sort of fault, glitch, ESD event, or whatever in the system, I need the slave to notice it and recover at some point. In that case it does not matter if a few more packets get lost, as this event should be very unlikely. But it should never break the system permanently.

However, I was surprised that there is no built-in hardware-based mechanism to achieve some sort of synchronization with the CS signal. It seems you have confirmed my initial findings on that.

Yes, I can solve this problem by using some sort of interrupt, but it will never reach the performance I intended. So I have to live with that or switch to an FPGA.

Another way I came up with to solve this at a higher level is to add a CRC to the packet and check it after reception. If the CRC check fails, something is probably wrong and I simply reset everything. It costs a bit more data, but a CRC check is a good idea in this application anyway.
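
(A sketch of that check: the last byte of each packet carries a CRC-8 over the preceding bytes. The polynomial 0x07 and the packet layout are illustrative choices, not something specified in this thread; on a mismatch the receiver would re-initialise its SPI/DMA state to re-synchronise.)

    #include <stdint.h>
    #include <stddef.h>
    #include <stdbool.h>

    /* Plain bitwise CRC-8, polynomial 0x07, initial value 0x00, MSB first. */
    static uint8_t crc8(const uint8_t *data, size_t len)
    {
        uint8_t crc = 0x00u;
        for (size_t i = 0u; i < len; i++) {
            crc ^= data[i];
            for (int bit = 0; bit < 8; bit++) {
                crc = (crc & 0x80u) ? (uint8_t)((crc << 1) ^ 0x07u)
                                    : (uint8_t)(crc << 1);
            }
        }
        return crc;
    }

    /* True if the last packet byte matches the CRC over the bytes before it. */
    static bool packet_is_valid(const uint8_t *pkt, size_t pkt_len)
    {
        return crc8(pkt, pkt_len - 1u) == pkt[pkt_len - 1u];
    }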

Thanks a lot for your input and consider the question as answered 😉