
Problem with CRC using 16 bit SPI with DMA on STM32F103

Question asked by robinson.henry on Jan 7, 2014
Latest reply on Jan 8, 2014 by robinson.henry

I have two processors on a single board, connected together using SPI. The pin configuration is OK: I am successfully shunting a block of data to and fro.

nSS on P2(Slave) is connected to a GPIO pin on P1 (Master).

The SPI configuration code is almost identical (cut and pasted), with the only differences being:

- Different pins (P1 uses remapped SPI1, P2 uses the default SPI1 pins)
- P1 is Master, P2 is Slave
- NSS is configured as Soft on P1 and Hard on P2

I am sending identical blocks of data: 500 bytes with values 3, 4, 5, 6, 7 and so on up to 0xF6 (that is, 3..0x1F6 wrapped to 8 bits). These are byte values in an array, but I treat the array as a block of 250 16-bit words for DMA purposes.
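For reference, the test pattern can be produced like this (a minimal sketch; the buffer name SPI1_Outbuf is illustrative rather than taken from my actual code):

    #include <stdint.h>

    #define SPI_BLOCK_BYTES 500u

    /* Test buffer: bytes 3, 4, 5, ... wrapping at 8 bits (3..0x1F6 becomes 0x03..0xF6).
       The same memory is later handed to DMA as 250 half-words. */
    static uint8_t SPI1_Outbuf[SPI_BLOCK_BYTES];

    static void FillTestPattern(void)
    {
        uint16_t i;
        for (i = 0; i < SPI_BLOCK_BYTES; i++) {
            SPI1_Outbuf[i] = (uint8_t)(i + 3);   /* wraps naturally at 0xFF */
        }
    }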

The SPI clock speed is generally around 1 MHz, although I have run it quite happily at 8 MHz with the same result as described below; slowing it down lets me see the timings more accurately.

The transaction is controlled by P1: periodically it calls KickRXTX():

    uint16_t ix;

    SPI_Cmd(SPI1, ENABLE);
    PullDownNSS();

    ix = SPI1->DR;                      // read DR so as to read any stale CRC and clear RXNE

    // wait: give P2 a chance to set up its DMA and initialise its CRC
    for (ix = 0; ix < 500; ix++) {}

    // tried putting these two lines before enabling SPI; makes no difference to the output:
    SPI_CalculateCRC(SPI1, DISABLE);    // reset the CRC calculation...
    SPI_CalculateCRC(SPI1, ENABLE);     // ...and re-arm it for this transfer

    ConfigureDMASPI1();                 // set up the DMA RX and TX channels

    // set up a DMA interrupt on transfer complete (DMA1_Channel3 = SPI1_TX)
    DMA_ITConfig(DMA1_Channel3, DMA_IT_TC, ENABLE);
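For completeness, ConfigureDMASPI1() amounts to something along these lines (a sketch only, assuming the usual SPL DMA init; the buffer names SPI1_Inbuf/SPI1_Outbuf are illustrative). On the F103, DMA1_Channel2 serves SPI1_RX and DMA1_Channel3 serves SPI1_TX:

    // Sketch of ConfigureDMASPI1(): 250 half-word transfers each way.
    static void ConfigureDMASPI1(void)
    {
        DMA_InitTypeDef dma;

        DMA_DeInit(DMA1_Channel2);              // SPI1_RX
        DMA_DeInit(DMA1_Channel3);              // SPI1_TX

        DMA_StructInit(&dma);
        dma.DMA_PeripheralBaseAddr = (uint32_t)&SPI1->DR;
        dma.DMA_PeripheralDataSize = DMA_PeripheralDataSize_HalfWord;
        dma.DMA_MemoryDataSize     = DMA_MemoryDataSize_HalfWord;
        dma.DMA_MemoryInc          = DMA_MemoryInc_Enable;
        dma.DMA_BufferSize         = 250;       // 500 bytes as 16-bit words

        // RX first, then TX (the same order as on the slave)
        dma.DMA_DIR            = DMA_DIR_PeripheralSRC;
        dma.DMA_MemoryBaseAddr = (uint32_t)SPI1_Inbuf;
        DMA_Init(DMA1_Channel2, &dma);

        dma.DMA_DIR            = DMA_DIR_PeripheralDST;
        dma.DMA_MemoryBaseAddr = (uint32_t)SPI1_Outbuf;
        DMA_Init(DMA1_Channel3, &dma);

        DMA_Cmd(DMA1_Channel2, ENABLE);
        DMA_Cmd(DMA1_Channel3, ENABLE);

        SPI_I2S_DMACmd(SPI1, SPI_I2S_DMAReq_Rx | SPI_I2S_DMAReq_Tx, ENABLE);
    }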

 

When the DMA transfer-complete interrupt fires, P1 waits for SPI TXE to set, then for SPI BSY to clear, and then calls Stop():

    bool CRCerror = FALSE;
    uint16_t c;
    uint16_t crc_tx, crc_rx;

    // ************ DUMMY READ can be done here ***********

    crc_tx = SPI_GetCRC(SPI1, SPI_CRC_Tx);
    crc_rx = SPI_GetCRC(SPI1, SPI_CRC_Rx);
    Trace("CRCs T = %02X, R = %02X\r\n", crc_tx, crc_rx);

    CRCerror = (SPI_I2S_GetFlagStatus(SPI1, SPI_FLAG_CRCERR) == SET) ? TRUE : FALSE;
    Trace("CRC error = %u\r\n", CRCerror);

    // ************ DUMMY READ ***********
    c = SPI1->DR;     // read DR so as to read the received CRC and clear RXNE
    Trace("Rx CRC = %02X\r\n", c);

    SPI1_ReleaseCS();
    SPI_Cmd(SPI1, DISABLE);
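For reference, the TXE/BSY wait before Stop() is just a polling loop along these lines (a minimal sketch using the standard SPL flag names):

    // After the DMA TC interrupt: drain the SPI before calling Stop().
    // Wait until the last word has left the TX buffer...
    while (SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_TXE) == RESET) {}
    // ...then until the shift register has finished clocking everything out.
    while (SPI_I2S_GetFlagStatus(SPI1, SPI_I2S_FLAG_BSY) == SET) {}
    Stop();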

 

On a logic analyzer, the resulting transaction looks fine.

On the slave side, P2 prepares to receive by calling Prep2RXTX():

    ConfigureDMASPI1();    // set up DMA on the RX and TX channels (in that order)

             

Then it reacts to interrupts on the falling and rising edges of NSS:

On the falling edge:

    Trace("SPI start\r\n");
    SPI_CalculateCRC(SPI1, DISABLE);    // reset the CRC calculation...
    SPI_CalculateCRC(SPI1, ENABLE);     // ...and re-arm it for this transfer
    SPI_Cmd(SPI1, ENABLE);              // set SPE

On the rising edge:

    Trace("nSS!\r\n");

    // DMA will already be complete, as it's faster than this interrupt

    // check the data
    Trace("RX byte 1 = %02X\r\n", SPI1_Inbuf[1]);
    Trace("RX byte 499 = %02X\r\n", SPI1_Inbuf[499]);

    crc_tx = SPI_GetCRC(SPI1, SPI_CRC_Tx);
    crc_rx = SPI_GetCRC(SPI1, SPI_CRC_Rx);
    Trace("CRCs T = %02X, R = %02X\r\n", crc_tx, crc_rx);

    CRCerror = (SPI_I2S_GetFlagStatus(SPI1, SPI_FLAG_CRCERR) == SET) ? TRUE : FALSE;
    Trace("CRC error = %u\r\n", CRCerror);

    c = SPI1->DR;     // read DR so as to read the received CRC and clear RXNE
    Trace("Rx CRC = %02X\r\n", c);

    SPI_Cmd(SPI1, DISABLE);
    DMA_Cmd(DMA1_Channel2, DISABLE);
    DMA_Cmd(DMA1_Channel3, DISABLE);

    m_SPI1state = _SPI1_idle_;
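(For context, the NSS edge interrupts are an ordinary EXTI line configured to trigger on both edges; a sketch, assuming NSS arrives on PA4, the default SPI1 NSS pin:)

    // Sketch: interrupt on both edges of NSS (assumes NSS on PA4; adjust
    // the port/pin/EXTI line to the real wiring).
    static void ConfigureNSSEdgeInterrupt(void)
    {
        EXTI_InitTypeDef exti;
        NVIC_InitTypeDef nvic;

        RCC_APB2PeriphClockCmd(RCC_APB2Periph_AFIO, ENABLE);    // EXTI routing lives in AFIO
        GPIO_EXTILineConfig(GPIO_PortSourceGPIOA, GPIO_PinSource4);

        exti.EXTI_Line    = EXTI_Line4;
        exti.EXTI_Mode    = EXTI_Mode_Interrupt;
        exti.EXTI_Trigger = EXTI_Trigger_Rising_Falling;        // fire on both edges
        exti.EXTI_LineCmd = ENABLE;
        EXTI_Init(&exti);

        nvic.NVIC_IRQChannel                   = EXTI4_IRQn;
        nvic.NVIC_IRQChannelPreemptionPriority = 1;
        nvic.NVIC_IRQChannelSubPriority        = 0;
        nvic.NVIC_IRQChannelCmd                = ENABLE;
        NVIC_Init(&nvic);
    }

In the EXTI4 handler, reading the pin level tells you which edge occurred; clear the pending bit with EXTI_ClearITPendingBit(EXTI_Line4) and run the falling- or rising-edge code above.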

So now we have identical waveforms in both directions (MISO and MOSI). Both sides report reading CRC = 0x1648, in agreement with the logic analyzer, and both report calculated values of 0x1648 in both directions. But the CRC error bit is set on the Master and clear on the Slave. WHY???

Now the really weird part.

If I move the dummy read of DR from the // ************ DUMMY READ *********** position up to the // ************ DUMMY READ can be done here *********** position, the Slave starts reporting a CRC error. The Slave!!! I haven't touched its code, and neither have I altered the waveforms or timing in any way; on the logic analyzer there is no discernible change. The Trace outputs (serial to terminal) are unchanged except that the Slave side now reads CRC error = 1 where it previously read CRC error = 0. All the other reported values are the same.

I note in the datasheet that timings are critical: spurious SPI clock edges (e.g. when enabling/disabling SPI) can mess up the CRC. I've been careful to keep them well out of the way.

Any ideas?  Is the hardware flaky?  Has anyone else seen this?  I’m getting pretty close to writing a separate CRC and inserting it into the data before sending, but if the hardware version can be made to work, it would be more efficient.
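If I do end up with the software fallback, it would just be a bitwise CRC16 over the 250 words, appended as a final word. A minimal sketch, assuming the polynomial is chosen to match whatever is programmed into SPI_CRCPR (0x1021, the CCITT polynomial, is shown purely as an example):

    #include <stdint.h>
    #include <stddef.h>

    // Bitwise CRC16 over 16-bit words, MSB first, zero initial value.
    // This only matches the SPI peripheral's result if 'poly' equals the
    // value programmed into SPI_CRCPR.
    static uint16_t SoftCRC16(const uint16_t *data, size_t nwords, uint16_t poly)
    {
        uint16_t crc = 0;
        size_t   i;
        int      bit;

        for (i = 0; i < nwords; i++) {
            crc ^= data[i];
            for (bit = 0; bit < 16; bit++) {
                crc = (crc & 0x8000) ? (uint16_t)((crc << 1) ^ poly)
                                     : (uint16_t)(crc << 1);
            }
        }
        return crc;
    }

    // e.g.: crc = SoftCRC16((const uint16_t *)SPI1_Outbuf, 250, 0x1021);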

It would be nice if ST could provide a more detailed explanation of the CRC process: for example, a line on the diagram in Section 25.3 (Figure 236, p. 675) of the STM32F103 reference manual showing exactly where the data for the CRC is tapped off into the "Communications Control" box. My assumption is that it's taken from the input to the shift register, i.e. directly from the MISO/MOSI pins.



 
