2016-09-01 09:24 AM
Is there a benefit to using 16-bit filters versus 32-bit filters in terms of MCU efficiency? I am writing a driver for DeviceNet CAN, and the only frames I expect have standard (11-bit) identifiers, not extended. For this, 16-bit filters seem just fine, as I have don't-cares only for the EXID[17:15] bits. With 32-bit filters, I would extend the don't-cares to all EXID bits.
The question is: is it any less desirable to use don't-cares for 18 EXID bits (32-bit filters) versus 3 bits (16-bit filters)? Factors could include filtering speed, delay, processor resources, power, etc. I just don't know enough about the internal filter-matching implementation to make a judgement. (Or even know whether it really matters in the long run...) Thanks, all. #bxcan #can-filter #can

2016-09-01 11:38 AM
The problem is that it is third-party IP, so ST might not have a clear perspective on the internals.
I don't think it makes that much difference; the CAN peripheral is clocking at much higher speeds internally, so it outpaces the wire speed. I would suspect it is a state machine doing matches (AND-and-compare) rather than some logic-dense CAM implementation. I'd tend to assume each compare has some unit cost whether it is 16- or 32-bit; it might even use the same comparator and take two passes. Anyway, ST's strategy seems to be minimal logic (small/efficient) rather than maximal (large/fast), to keep the gate count down and allow for prototyping in FPGA equivalents.

2016-09-02 03:54 AM
The choice of 16 vs. 32 has more to do with the protocol being used. With 29-bit IDs, unless the header fits easily into a 16-bit mask you pretty much have to go with the larger filter. With other protocols there are advantages to 16-bit. Not sure about DeviceNet, but in CANopen, which is also restricted to 11-bit IDs (except for one rare case), being able to split the filter works out well. The CANopen 4-bit function groups in the COB-ID fit nicely into the 14 filter banks, with separate RX and TX (remote transmit request) filters available when splitting into two 16-bit masks. I can set up a receive filter for a group but block RTR requests with the transmit filter if the group doesn't support them. That saves space in the RX FIFOs and avoids passing a message to the net-stack listener task that I know will be rejected.
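As an illustration of the split described above, here is a minimal sketch of building one 16-bit bxCAN filter half-word for a CANopen function-code group. The field layout (STID[10:0] in bits 15:5, RTR in bit 4, IDE in bit 3, EXID[17:15] in bits 2:0) follows the bxCAN 16-bit filter scale; the helper names are mine, not a real HAL API.

```c
#include <stdint.h>

/* Build one 16-bit bxCAN filter half-word matching a CANopen 4-bit
 * function code, with any node-ID, for a given RTR value.
 * 16-bit filter field layout: STID[10:0] bits 15:5, RTR bit 4,
 * IDE bit 3, EXID[17:15] bits 2:0.
 * (Helper names are illustrative, not part of any vendor library.)
 */
static uint16_t co_filter_id(uint8_t fcode, uint8_t rtr)
{
    /* CANopen COB-ID: function code in bits 10:7, node-ID in bits 6:0 */
    uint16_t stid = (uint16_t)((fcode & 0xFu) << 7);
    return (uint16_t)((stid << 5) | ((rtr & 1u) << 4));  /* IDE = 0 */
}

static uint16_t co_filter_mask(void)
{
    /* Care bits: function code, RTR, IDE; don't-care: node-ID, EXID */
    return (uint16_t)(((0xFu << 7) << 5) | (1u << 4) | (1u << 3));
}
```

With two such id/mask pairs packed into one filter bank (16-bit scale, mask mode), one half can accept data frames for a group (RTR = 0) while the other rejects or routes RTR frames, which is the RX/TX split described above.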
Jack Peacock

2016-09-02 08:34 AM
Thanks, Jack. For my little Dnet slave, I'm restricting incoming frames to those of the Predefined Master/Slave Connection Set, and I'm only supporting explicit messages and I/O poll, so my masks for the standard identifier are
10maaaac100, 10maaaac101, 10maaaac110, and 10maaaac111, which reduce to 10maaaac1xx (x = don't care; 'maaaac' are the MAC-ID bits). Further, I can restrict the IDE bit to 0 and don't-care the extended-ID and RTR bits. So I could simply use a single 32-bit mask: 10maaaac|1xxxxxxx|xxxxxxxx|xxxxx0xx

I could do this with four 16-bit masks instead, which would give me the flexibility of putting, say, polled messages in one FIFO and explicit messages in another. If my stack couldn't keep up with pulling from a single FIFO, I might consider the 16-bit option, but my driver's RX ISR simply puts all the messages into an RTOS queue, so I'm pretty much indemnified against FIFO overrun (one has to stress-test message storms, of course!). And I'm not using the FIFO ID or filter match index to sort messages. Trying to KISS.

It was the MCU's efficiency of using that one 32-bit filter vs. four 16-bit filters that got me questioning strategy. As Clive mentions, it probably doesn't matter.
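For the record, the single 32-bit mask above can be sketched as register values for the bxCAN filter bank (FiRx layout: STID[10:0] in bits 31:21, EXID[17:0] in bits 20:3, IDE bit 2, RTR bit 1, bit 0 unused). The function name and structure are illustrative assumptions, not a vendor API.

```c
#include <stdint.h>

/* Build the 32-bit bxCAN filter ID and mask register values for the
 * DeviceNet Predefined Master/Slave Connection Set pattern
 * 10maaaac1xx (group-2 messages 4..7 addressed to our MAC-ID).
 * 32-bit filter layout: STID[10:0] bits 31:21, EXID[17:0] bits 20:3,
 * IDE bit 2, RTR bit 1, bit 0 unused.
 * mac_id is the slave's 6-bit DeviceNet MAC address (0..63).
 * (Helper name is illustrative, not part of any vendor library.)
 */
static void dnet_group2_filter(uint8_t mac_id, uint32_t *fr_id, uint32_t *fr_mask)
{
    /* STID pattern: bit10 = 1, bit9 = 0, bits 8:3 = MAC-ID, bit2 = 1,
     * bits 1:0 = message ID (don't care, accepts message IDs 4..7). */
    uint16_t stid      = (uint16_t)((1u << 10) | ((mac_id & 0x3Fu) << 3) | (1u << 2));
    uint16_t stid_mask = 0x7FCu;  /* care about STID bits 10:2 only */

    *fr_id   = (uint32_t)stid << 21;                       /* IDE = 0, EXID/RTR = 0 */
    *fr_mask = ((uint32_t)stid_mask << 21) | (1u << 2);    /* IDE must match (= 0);
                                                              EXID and RTR don't care */
}
```

A frame with identifier rx is then accepted when (rx_reg & *fr_mask) == (*fr_id & *fr_mask), which is the usual id/mask semantics in mask mode.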