2025-01-27 06:41 AM
Hi,
I have a piece of CRC-calculating code that I nicked from somewhere a couple of decades ago, and I've been wondering about a few lines in it:
*CRCVal ^= (*CRCVal << 8) << 4;
*CRCVal ^= ((*CRCVal & 0xFF) << 4) << 1;
(CRCVal is a pointer to a uint16_t variable.)
What I'm curious about is the slightly weird left-shifting notation. Shouldn't it give exactly the same result if it were written like this?
*CRCVal ^= (*CRCVal << 12);
*CRCVal ^= (*CRCVal & 0xFF) << 5;
Does anyone have a hunch why the shifts are split up like this? Could it be something related to a specific compiler or device that handles half-byte shifts in a certain way?
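For what it's worth, here's a quick throwaway test I put together to convince myself the two spellings really are equivalent (this is just my scratch code, not the original CRC routine). It runs both formulations over every possible 16-bit value and compares the results:

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    for (uint32_t v = 0; v <= 0xFFFF; v++) {
        uint16_t a = (uint16_t)v;
        uint16_t b = (uint16_t)v;

        /* original formulation with the split shifts */
        a ^= (a << 8) << 4;
        a ^= ((a & 0xFF) << 4) << 1;

        /* simplified formulation */
        b ^= b << 12;
        b ^= (b & 0xFF) << 5;

        if (a != b) {
            printf("mismatch at 0x%04X: 0x%04X vs 0x%04X\n",
                   (unsigned)v, (unsigned)a, (unsigned)b);
            return 1;
        }
    }
    printf("identical for all 16-bit values\n");
    return 0;
}

It prints "identical for all 16-bit values", so the question is purely about why the original author chose to split the shifts up.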
By the way, it would be great if this forum had a "General programming" board. I prefer to ask questions here, as I think that the community has a nice vibe.
2025-01-28 12:39 AM
@EThom.3 wrote: Or perhaps an 8051, if that even had a C compiler 20-30 years ago.
Keil C51 was well-established 20-30 years ago.
Keil was established as an 8051 tools company long before ARM bought them.
2025-01-28 12:44 AM
Interesting. Thanks.
Thus, my best guess is that the piece of code was originally written for either a PIC or an 8051.
2025-01-28 12:55 AM
Maybe it was written by someone trying to write assembler in C: they didn't realise that the compiler can work out how to do a 12-bit shift on its own, so they decomposed it by hand.
Used to see a lot of that.
Also people thinking that cramming as much as possible into a single source line would produce less object code!
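To illustrate what I mean by hand-decomposing: the split spelling is basically "move the low byte into the high byte, then shift the remaining 4 bits", which is how an 8-bit CPU would do a 16-bit shift by 12 anyway. A rough side-by-side sketch (the function names are mine, purely for illustration):

#include <stdint.h>

/* the straightforward spelling */
uint16_t shift12_plain(uint16_t x)
{
    return (uint16_t)(x << 12);
}

/* the hand-decomposed spelling from the original code:
 * shift by a whole byte first, then by the remaining 4 bits */
uint16_t shift12_decomposed(uint16_t x)
{
    return (uint16_t)((x << 8) << 4);
}

Any reasonable modern compiler produces the same code for both, but a simple compiler of that era might well have turned the plain << 12 into a shift-one-bit-twelve-times loop, which is probably exactly what the original author was trying to avoid.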