
STM32F4 Ethernet access after FLASH Erase

cwparker
Associate II
Posted on August 27, 2014 at 17:20

I have a project that needs to erase FLASH sectors so that configuration data can be updated. The device is being accessed via Ethernet. In the past, I was just polling the DMA status to access an incoming packet. This was all working. I just changed the code to use the Ethernet interrupt to queue packets as they are received. This also works, with the exception of when I erase the FLASH. After the erase, packets are either not received or are received with a large delay (> 0.5 seconds). I have tried disabling the Ethernet DMA interrupt (DMAIER) and the NVIC ETH_IRQn, separately and together, but neither solves the problem.

Before the erase: DMASR = 0x00660404, DMAIER = 0x00000000

After the erase: DMASR = 0x006804C4 (if a packet is received) and DMAIER = 0x00010040

I have tried resetting the DMA buffers/status after the erase, but this also has had no effect.

The code to erase the FLASH is in FLASH, but a different sector from the one being erased.

Again, all of this was/is working if I don't set up an interrupt to queue the incoming packets, but manually poll the DMA status to queue them.

Thanks for any help.

- Clint

#stm32f4-flash-ethernet
7 REPLIES
Posted on August 27, 2014 at 17:31

Well erasing flash can take a significant amount of time, stalling the processor, perhaps your IRQ handler needs to expect more than a single transaction to be pending.

If you run your code completely from RAM (including the vector table and IRQ handlers), do you still see the phenomenon?
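To illustrate what I mean by expecting more than a single transaction: the handler ought to look something like this (untested sketch; queue_rx_frames() stands in for whatever routine walks your RX descriptor ring and hands completed frames to your application):

    void ETH_IRQHandler(void)
    {
        if (ETH->DMASR & ETH_DMASR_RS)          // one or more frames completed
        {
            ETH->DMASR = ETH_DMASR_RS;          // write 1 to clear the receive status bit
            queue_rx_frames();                  // drain ALL completed descriptors, not just one
        }

        if (ETH->DMASR & ETH_DMASR_RBUS)        // receive buffer unavailable: RX DMA suspended
        {
            ETH->DMASR = ETH_DMASR_RBUS;        // clear the flag
            queue_rx_frames();                  // hand descriptors back to the DMA
            ETH->DMARPDR = 0;                   // receive poll demand: resume reception
        }

        ETH->DMASR = ETH_DMASR_NIS;             // clear the normal interrupt summary bit
    }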
cwparker
Associate II
Posted on August 27, 2014 at 17:48

Not really possible to run program from RAM. It's too large. I could try loading the code that erases the FLASH into RAM, but I don't know why that would help. Not sure what you mean by "perhaps your IRQ handler needs to expect more than a single transaction to be pending". I don't really care if I lose packets during the erase. The process for updating configuration data uses handshaking, i.e. the master doesn't send the next command until after the STM32F4 sends a response saying that it is done with the erase. The problem seems to be related to packets that I don't care about (such as an ARP) being received while the erase is in progress. With the Ethernet interrupt disabled, they are not actually processed, just buffered by the DMA controller. I don't know why that would be a problem, since that all works if I don't use an interrupt routine in the first place and just poll for incoming packets.

I can post the code for the ethernet interrupt routine and the FLASH erase routine if that would help.

Regards,

- Clint

cwparker
Associate II
Posted on August 27, 2014 at 18:07

init code:

    // Enable the Ethernet global interrupt
    NVIC_PriorityGroupConfig(NVIC_PriorityGroup_2);
    ethernet_started = 0x1234;
    ethernet_recv_interrupt(ENABLE);

    void ethernet_recv_interrupt(FunctionalState state)
    {
        NVIC_InitTypeDef NVIC_InitStructure;

        NVIC_InitStructure.NVIC_IRQChannel = ETH_IRQn;
        NVIC_InitStructure.NVIC_IRQChannelPreemptionPriority = 2;
        NVIC_InitStructure.NVIC_IRQChannelSubPriority = 0;
        NVIC_InitStructure.NVIC_IRQChannelCmd = state;
        NVIC_Init(&NVIC_InitStructure);

        if (state == DISABLE)
            ETH->DMAIER &= ~(ETH_DMA_IT_NIS | ETH_DMA_IT_R);    // turn off ethernet receive interrupt
        else
            ETH->DMAIER |= (ETH_DMA_IT_NIS | ETH_DMA_IT_R);     // turn on ethernet receive interrupt
    }

    void STM_FlashErase(uint32_t sector)
    {
        ethernet_recv_interrupt(DISABLE);

        while ((FLASH->SR & FLASH_SR_BSY) != 0) ;               // wait for any pending flash operation

        STM_FlashUnlock();
        FLASH->CR = (FLASH_CR_PSIZE_32 | FLASH_CR_SER) | (sector << 3);  // sector number goes in the SNB field
        FLASH->CR |= FLASH_CR_STRT;

        while ((FLASH->SR & FLASH_SR_BSY) != 0) ;               // wait for the erase to complete

        STM_FlashLock();
        ethernet_recv_interrupt(ENABLE);
    }
Posted on August 27, 2014 at 19:05

Not really possible to run program from RAM. It's too large.

Then you'll want to consider making it smaller, to the point where you can demonstrate the problem concisely to those who might care.

I don't really care if I lose packets during the erase.

But it's not about what you care about, it's whether the peripheral cares about it, and suspends reception due to a lack of resources/descriptors. The IRQ Handler will need to unravel whatever mess the peripheral is in after it's been stalled for a second or so. You could take the flash/erase out of the equation by simply stalling the interrupt handling for a similar period of time, and observe if the peripheral is jammed up then too.
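For example, something like this stalls the interrupt handling for roughly a second without going anywhere near the flash (the loop count is only a rough guess for a 168 MHz part):

    __disable_irq();                                      // hold off the ETH interrupt, like a flash-stalled core does
    for (volatile uint32_t i = 0; i < 20000000; i++) ;    // crude busy loop, roughly a second at 168 MHz
    __enable_irq();                                       // let any pending ETH interrupt fire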

You could also try erasing everything before you start.
cwparker
Associate II
Posted on August 28, 2014 at 13:54

Thank you for all the help, but it doesn't address the problem. Let me try to restate it as simply as I can:

1) I NEED to be able to erase FLASH sectors while my application is running.

2) I NEED to have an interrupt processing Ethernet input from the DMA in my application so that I do not lose packets if there is a quick burst of messages.

3) I DON'T CARE if I lose any packets while the FLASH is being erased or programmed.

4) My application is TOO LARGE to run totally out of RAM. That is just not possible!

I can already get 1 and 3 by simply polling the Ethernet DMA. That WORKS! What I need to know is what has to be done so that I can use an interrupt routine to service the Ethernet DMA alongside the erase/programming. The interrupt routine WORKS just fine as long as I don't erase or program the FLASH. I have tried disabling the Ethernet interrupts during the FLASH erase/programming, but that didn't work. The reference manual does not document what happens if the processor is suspended during those operations. So what is required to shut things down during the erase/programming and start it all going again afterwards?

Thank you,

Clinton Parker
Posted on August 28, 2014 at 17:33

I don't think you understand what the underlying problem is, restating what you want to achieve doesn't get you there. I don't have time to invest in your problem.

I'm pretty sure that disabling the Ethernet interrupt isn't the way to go, you need to give it enough resources to work with, and you need to review how to get it out of suspended mode when it runs out of resources.
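For what it's worth, the receive process state is visible in the RPS field of ETH->DMASR, and once descriptors have been handed back to the DMA a write to the receive poll demand register restarts it. Untested, but the general shape (using the CMSIS bit names) is:

    if ((ETH->DMASR & ETH_DMASR_RPS) == ETH_DMASR_RPS_Suspended)   // RX DMA parked: out of descriptors
    {
        ETH->DMASR = ETH_DMASR_RBUS;   // clear receive-buffer-unavailable (write 1 to clear)
        ETH->DMARPDR = 0;              // any write here resumes the receive DMA
    }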
cwparker
Associate II
Posted on August 28, 2014 at 18:46

clive1, I'd like to thank you for the time you take to answer posts in general, but in this case I feel you're off base.

I was trying to solve a real problem - an application that uses an interrupt to service incoming Ethernet packets and erase/program the FLASH while that application is running. I don't feel that being able to do that is an unreasonable expectation.

The problem is that the STM32F4 Reference Manual does not give any details, at least anywhere that I could find, of what happens when it suspends 'functions' during a FLASH erase/program operation executing out of FLASH. Basically, I was just trying to get information about what that is and what has to be done to deal with it, since it is not documented. It is general information and should not require me having to post code or any such thing. It's not like there was a bug in my code that I was trying to figure out.

Just as you said, "you need to review how to get out of suspend mode", that is what I was trying to determine since it is not documented.

Now back to the problem ...

I have figured out how to get it all working. Yes, it requires disabling the Ethernet interrupt as I posted previously, but I also needed to disable the DMA transfer as well:

    ETH->DMAOMR &= ~ETH_DMAOMR_SR;   // Stop DMA reception
    wait(10);                        // wait 10us to allow any in-flight DMA to finish
    ETH->MACCR &= ~ETH_MACCR_RE;     // Disable the receive state machine

and then re-enable it after the erase/programming is done:

    ETH->MACCR |= ETH_MACCR_RE;      // Enable the receive state machine
    ETH->DMAOMR |= ETH_DMAOMR_SR;    // Start DMA reception
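Putting it all together, the sequence ends up looking something like this (the wrapper name is just for illustration; wait() is my own microsecond delay helper and STM_FlashErase() is the routine posted above, which already toggles the Ethernet interrupt itself):

    void flash_erase_with_rx_stopped(uint32_t sector)
    {
        ETH->DMAOMR &= ~ETH_DMAOMR_SR;   // stop DMA reception
        wait(10);                        // let any in-flight DMA transfer finish (~10 us)
        ETH->MACCR &= ~ETH_MACCR_RE;     // disable the MAC receive state machine

        STM_FlashErase(sector);          // stalls the core while it executes from flash

        ETH->MACCR |= ETH_MACCR_RE;      // re-enable the MAC receive state machine
        ETH->DMAOMR |= ETH_DMAOMR_SR;    // restart DMA reception
    }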

I still do not understand why not doing this affects the timing of the Ethernet interrupt, but it does.

Hope this helps anyone who might have to deal with this in the future.

Regards,

Clinton Parker