2014-03-21 12:28 AM
Dear all,
According to the datasheet of the STM32F103T6, it is an ARM Cortex-M3 MCU with 32 Kbytes of Flash. I am not sure whether any other STM32 ICs have a similar issue.
However, today I downloaded a program of more than 32K (it is 51K) with J-Flash from SEGGER, and the MCU works normally. So I wonder whether the real size of the STM32F103T6's flash is more than 32K, maybe 64K. Can any FAE validate this issue? Thanks.
2014-03-21 10:06 AM
I guess it has more to do with economies of scale than with technicalities.
It may be just a trick to cut costs. I wouldn't count on having more than 32k in every STM32F103T6.
2014-03-21 11:08 AM
Can any FAE validate this issue?
You'll need to contact your own FAE, or rep. I don't believe there are any here offering support. It should be simple enough for you to validate how much memory is in your part; the CPU will Hard Fault if you read beyond the end of the flash on the die.
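Something along these lines would let you check. This is only a rough sketch: it assumes the STM32F10x flash size register documented at 0x1FFFF7E0 in RM0008 (value in Kbytes) and flash mapped at 0x08000000, and the macro/function names are made up for illustration, so verify them against your part and toolchain.

#include <stdint.h>

/* Assumptions: flash size register at 0x1FFFF7E0 (RM0008 "Memory size
 * registers", factory-programmed size in Kbytes); flash at 0x08000000. */
#define FLASH_SIZE_KB    (*(volatile const uint16_t *)0x1FFFF7E0u)
#define FLASH_BASE_ADDR  0x08000000u
#define DOCUMENTED_KB    32u   /* what the STM32F103T6 datasheet promises */

int flash_larger_than_documented(void)
{
    uint16_t reported_kb = FLASH_SIZE_KB;   /* factory-programmed size */

    /* Reading past the real end of the array will Hard Fault on a part
     * that genuinely has only 32 Kbytes, so only probe past 32K if the
     * size register already claims more (or install a fault handler first). */
    if (reported_kb > DOCUMENTED_KB) {
        volatile uint32_t probe =
            *(volatile const uint32_t *)(FLASH_BASE_ADDR + DOCUMENTED_KB * 1024u);
        (void)probe;                        /* read succeeded */
        return 1;
    }
    return 0;
}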
2014-03-21 01:28 PM
An easy mistake to make is assuming that the size of a hex file is roughly the size of the flash it will occupy. A hex file is actually more than double the size of the binary image it represents (assuming there are no blank regions), since every byte is encoded as two ASCII characters plus record overhead. I don't know if you made this mistake, but if you did, I'll just say "been there, done that".
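For what it's worth, a quick way to check is to add up the data-record lengths in the hex file rather than looking at the file size. A rough sketch in C, assuming standard Intel HEX records (":LLAAAATT<data>CC", type 00 = data); "firmware.hex" is just a placeholder name:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("firmware.hex", "r");          /* placeholder file name */
    if (!f) { perror("firmware.hex"); return 1; }

    char line[600];
    unsigned long file_bytes = 0, flash_bytes = 0;

    while (fgets(line, sizeof line, f)) {
        unsigned len, addr, type;
        /* Record layout: ':' + byte count + address + record type + data + checksum */
        if (sscanf(line, ":%2x%4x%2x", &len, &addr, &type) == 3 && type == 0x00) {
            flash_bytes += len;                    /* only data records occupy flash */
            (void)addr;                            /* address not needed for the total */
        }
        file_bytes += strlen(line);                /* rough file size in text mode */
    }
    fclose(f);

    printf("hex file size : %lu bytes\n", file_bytes);
    printf("flash payload : %lu bytes\n", flash_bytes);
    return 0;
}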
I would not be surprised if ST make their chips with more FLASH than they advertise, and then, on testing, switch out any blocks that are found to be bad. (Quite what happens to spare good blocks of Flash is up to ST.) - Danish
2014-03-25 12:51 AM
2014-03-25 01:10 AM
2014-03-25 02:40 AM
ST guarantee that for the 32k part there is 32k of FLASH in the area specified in the documentation, and that this FLASH is good enough to last the specified number of erase/write cycles (subject to the specified failure rate).
ST probably make several different memory-sized parts from the same die. When testing them, ST might decide that some blocks of FLASH are not good enough to guarantee the lifetime specified in the datasheet (even if they are good enough to work for the first few programming cycles). So they map the believed-to-be-good memory to where the documentation says there is working memory for a small-memory part. They don't guarantee that there won't be FLASH elsewhere in the memory map.

Although we assume things are nicely digital, the real world is analog. FLASH works by storing a tiny electric charge on a capacitor and hoping that the developed voltage is enough to turn a transistor on. You're relying on the capacitor being sufficiently big, and sufficiently non-leaky, for the charge to remain there after many cycles of applying voltages high enough to break down the insulator, i.e. programming/erasing. ST can't predict that a particular memory bit won't last just because it works now, but they probably do some analog tests on each bit, and if it is outside their expected behaviour range they mark the entire block of memory as bad.

I know of one competitor who visibly used error-correction coding in their FLASH, such that even if one bit in a 128-byte row of FLASH is wrong, the system corrects it and passes the correct word to the processor (a toy illustration of the idea follows at the end of this post).

Why do I expect problems in the FLASH memory? I have read that fewer than 50% of chips as complex as a microprocessor fully work when they are made (ref: Bob Pease, "What's all this quality stuff, anyhow"). Where a manufacturer can say "this one isn't good enough to sell as the 64k part, but we can sell it as a 32k part", surely they will do that.

In short, you might find some 32k parts which seem to have more good FLASH. ST don't guarantee that the extra FLASH will work long-term, and ST don't guarantee that other 32k parts will have extra FLASH at all. You might be lucky. But as a manufacturer of goods I sell for profit, I dare not take that risk. - Danish
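To illustrate the error-correction idea mentioned above, here is a toy single-error-correcting Hamming(7,4) code in C. It is only a sketch of the principle; real flash ECC (such as the 128-byte-row scheme described) uses much wider codes implemented in hardware, and none of this is specific to ST parts.

#include <stdint.h>
#include <stdio.h>

/* Encode 4 data bits into a 7-bit Hamming(7,4) codeword.
 * Codeword positions 1..7 hold: p1 p2 d1 p3 d2 d3 d4 (bit i = position i+1). */
static uint8_t hamming74_encode(uint8_t data)            /* data in bits 0..3 */
{
    uint8_t d1 = (data >> 0) & 1, d2 = (data >> 1) & 1,
            d3 = (data >> 2) & 1, d4 = (data >> 3) & 1;
    uint8_t p1 = d1 ^ d2 ^ d4;   /* parity over positions 1,3,5,7 */
    uint8_t p2 = d1 ^ d3 ^ d4;   /* parity over positions 2,3,6,7 */
    uint8_t p3 = d2 ^ d3 ^ d4;   /* parity over positions 4,5,6,7 */
    return (uint8_t)(p1 << 0 | p2 << 1 | d1 << 2 | p3 << 3 |
                     d2 << 4 | d3 << 5 | d4 << 6);
}

/* Decode a codeword, correcting at most one flipped bit; returns the 4 data bits. */
static uint8_t hamming74_decode(uint8_t cw)
{
    uint8_t s1 = ((cw >> 0) ^ (cw >> 2) ^ (cw >> 4) ^ (cw >> 6)) & 1;
    uint8_t s2 = ((cw >> 1) ^ (cw >> 2) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t s3 = ((cw >> 3) ^ (cw >> 4) ^ (cw >> 5) ^ (cw >> 6)) & 1;
    uint8_t syndrome = (uint8_t)(s1 | s2 << 1 | s3 << 2);  /* 1-based error position */
    if (syndrome)
        cw ^= (uint8_t)(1u << (syndrome - 1));              /* flip the bad bit back */
    return (uint8_t)(((cw >> 2) & 1)      | ((cw >> 4) & 1) << 1 |
                     ((cw >> 5) & 1) << 2 | ((cw >> 6) & 1) << 3);
}

int main(void)
{
    uint8_t cw = hamming74_encode(0xB);     /* store data 1011 */
    cw ^= 1u << 5;                          /* simulate one failing flash bit */
    printf("recovered data = 0x%X\n", hamming74_decode(cw));   /* prints 0xB */
    return 0;
}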