Hello,
As part of my internship, I am tasked with writing a device driver for the Ethernet programmable interface. I am currently trying to understand the behaviour of the interface under receive overflow conditions.
Two things puzzle me:
- The cumulative number of frames discarded by Rx queue overflows is HIGHER than the number of frames I actually fail to receive on the application side. I compute the number of frames lost to overflows by periodically adding the value of ETH_MTLRXQMPOCR.OVFPKTCNT to an internal counter. Since ETH_MTLRXQMPOCR.OVFPKTCNT clears itself each time the register is read, I believe double-counting should not be the cause of my issue. As for the number of actually lost frames, I am in control of the frames I send to the interface, and simply set an increasing index as their payload (the 1st frame carries idx = 1, the 2nd carries idx = 2, etc.), which I then use to determine which frames are missing on the receive side.
- The "Rx queue overflow" interrupt fires a long time after ETH_MTLRXQMPOCR.OVFPKTCNT becomes > 0, and likewise long after I start seeing dropped frames on the application side. By "a long time" I mean enough time to receive dozens of frames and "Receive complete" interrupts.
I'm afraid I fundamentally misunderstand how the interface handles receive overflows. Is this expected behaviour?
Thanks.