Am I reinventing the wheel making my own non-blocking delay function?

HTD
Senior III

I use interrupts in my projects a lot, but not every event in the system generates a hardware interrupt - so I need to be able to wait for events, where the waiting itself is started from a hardware interrupt.

That means anything called from an interrupt handler can't spin-wait, or it will crash (or badly lag) the entire system. So that's why I need a non-blocking delay.

I tried to google how to do it and found nothing useful, so I made my own function. It's in the delay_async.c file of this gist:

https://gist.github.com/HTD/e36fb68488742f27a737a5d096170623

However, I wonder - isn't this a kind of reinventing the wheel? Doesn't the STM32 HAL or, I don't know, maybe FreeRTOS have something virtually identical to what I made? I just can't find it. I committed that thing because I didn't have time and needed a solution ASAP, but now I'm curious: maybe it already existed and should be used instead?
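For the curious, a non-blocking delay of this kind typically boils down to something like the following sketch: a counter decremented from a 1 ms timer interrupt, with a callback fired when it reaches zero. All names here are hypothetical and this is only a rough outline, not necessarily what the gist does.

#include <stdint.h>

typedef void (*delay_callback_t)(void);

typedef struct {
    volatile uint32_t remaining; /* ticks left; 0 = idle */
    delay_callback_t  callback;  /* called when the delay expires */
} async_delay_t;

static async_delay_t s_delay;

/* Start a delay; safe to call from an interrupt handler because it
 * never blocks, it only arms the counter. */
void delay_async_start(uint32_t ms, delay_callback_t cb)
{
    s_delay.callback  = cb;  /* set the callback before arming */
    s_delay.remaining = ms;
}

/* Call this from the 1 ms timer interrupt
 * (e.g. from HAL_TIM_PeriodElapsedCallback). */
void delay_async_tick(void)
{
    if (s_delay.remaining && --s_delay.remaining == 0) {
        s_delay.callback(); /* runs in interrupt context -- keep it short */
    }
}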

9 REPLIES
JPeac.1
Senior

What you describe is a common situation in embedded programming. There are plenty of tools to manage this, based on using an RTOS and dividing your application into separate tasks. Take a look at FreeRTOS or one of the alternatives ST supports.

An RTOS (real-time operating system) solves a lot of the problems you describe (and some others you didn't anticipate... look up priority inversion and deadlocks) in a well-defined and reliable way. Your code will be cleaner and easier to follow if context switching and task scheduling are abstracted from the application code.

Short answer: yes, you are reinventing a wheel that's been around for more than 60 years. My personal recommendation is FreeRTOS. It's not that complicated, it has excellent interrupt support, and it's rock solid for reliability. I've used it on a number of commercial products based around ST controllers.

You can, of course, rely solely on the ST HAL. I've never used it, but assuming you have the time to debug other people's undocumented code, use your own judgement.
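For illustration, here is a minimal sketch of the interrupt-to-task handoff FreeRTOS provides via direct task notifications; the IRQ handler name and the task are placeholders, not tied to any particular board:

#include "FreeRTOS.h"
#include "task.h"

static TaskHandle_t s_worker = NULL; /* set when the task is created */

/* Interrupt handler: just notify the task and return.
 * The NVIC priority of this interrupt must allow FreeRTOS API calls
 * (i.e. be at or below configMAX_SYSCALL_INTERRUPT_PRIORITY). */
void EXTI0_IRQHandler(void)
{
    BaseType_t woken = pdFALSE;
    /* ...clear the peripheral's pending flag here... */
    if (s_worker != NULL) {
        vTaskNotifyGiveFromISR(s_worker, &woken);
    }
    portYIELD_FROM_ISR(woken); /* switch now if the task has higher priority */
}

/* Task: blocks without spinning until the interrupt fires. */
static void worker_task(void *arg)
{
    (void)arg;
    for (;;) {
        ulTaskNotifyTake(pdTRUE, portMAX_DELAY); /* sleep until notified */
        /* ...handle the event here, outside interrupt context... */
    }
}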

Jack Peacock

Javier1
Principal

What about using the SysTick? It has an interrupt handler.

Available for consulting/freelancing, hit me up in https://github.com/javiBajoCero
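A minimal sketch of that idea, using the HAL tick (which the SysTick interrupt increments every 1 ms by default) rather than a dedicated TIM; the function names are hypothetical:

#include <stdbool.h>
#include <stdint.h>
#include "stm32f4xx_hal.h" /* adjust to your device family */

static uint32_t s_start;

void wait_start(void)
{
    s_start = HAL_GetTick(); /* capture the current 1 ms tick count */
}

bool wait_elapsed(uint32_t ms)
{
    /* Unsigned subtraction handles the 32-bit tick wrap-around correctly. */
    return (HAL_GetTick() - s_start) >= ms;
}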

Here's what I've found:

"A call to vTaskDelay will put your task to sleep (blocked from getting any CPU) for the number of FreeRTOS ticks specified. You can't use it for precise timing, but it's fine for a task they needs to wake up now and then to do something."

Now, I have a DS18B20 digital thermometer in parasite power mode. It's guaranteed to complete a temperature conversion in, let's say, 750 ms. When I explicitly use a TIM configured for a 1 millisecond period to get an event and count 750 of these events - is that the same precision? I don't expect it to be microsecond-exact, but let's say 10% would most probably be good enough for such use cases. Is that what vTaskDelay does? Can I count on this precision?

And one more question - if I wait for exactly 1 tick, will it provide roughly a 1 ms delay (if that's the tick time), or is it possible that it would often be more like 2 ms or even more? I would guess that a timer waiting for 1 tick gives a very precise 1 ms delay, unless there is a bug in an interrupt handler that introduces unacceptable lag. Again, does FreeRTOS vTaskDelay work the same?
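For the DS18B20 case above, the FreeRTOS version would look roughly like the sketch below; the ds18b20_* driver calls are hypothetical placeholders. Note that the wake-up happens on a tick boundary, so the real delay can be up to one tick shorter than requested (plus scheduling latency); adding one extra tick guarantees the minimum.

#include "FreeRTOS.h"
#include "task.h"

extern void  ds18b20_start_conversion(void);  /* hypothetical driver call */
extern float ds18b20_read_temperature(void);  /* hypothetical driver call */

void ds18b20_read_task(void *arg)
{
    (void)arg;
    for (;;) {
        ds18b20_start_conversion();
        /* Sleep for at least 750 ms; the +1 tick covers quantization. */
        vTaskDelay(pdMS_TO_TICKS(750) + 1);
        float t = ds18b20_read_temperature();
        (void)t; /* ...use the reading... */
    }
}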

It would be very similar to what I have - just reusing the same timer FreeRTOS already uses, which I've read somewhere is not recommended. SysTick was actually my first guess. Anyway, the only difference in the code would be the timing source; extra code (which needs to live in a separate source file) would still be required to implement multiple delays within a function.

My goal is to make the waiting code as simple as possible (that part is done), but without creating a new dependency (not done - I include an extra file). A dependency on what I already have in, say, a HAL / TouchGFX project is OK though.

I think I could use the register-callback feature to make it as clean as it gets, but it would still be a lot of code for a simple thing like "in 1 ms, check if the register bit is set". There's no difference for the MCU, but it's a huge difference for me when trying to ensure a big function is designed correctly. When the code is simple, it's easy to tell it's correct just by looking at it. When microscopic functionality like waiting for a tick takes five long lines of code, it gets harder and harder. The delays usually sit inside state machines that are already complicated enough.
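To make that concrete, here's roughly how the "in 1 ms check if the register bit is set" case looks inside a polled state machine; start_operation and ready_bit_set are hypothetical stand-ins for the actual hardware access:

#include <stdbool.h>
#include <stdint.h>
#include "stm32f4xx_hal.h" /* for HAL_GetTick(); adjust to your family */

extern void start_operation(void); /* hypothetical: starts the hardware op */
extern bool ready_bit_set(void);   /* hypothetical: reads the status bit */

typedef enum { ST_IDLE, ST_WAIT_BIT, ST_DONE } state_t;

static state_t  s_state = ST_IDLE;
static uint32_t s_t0;

/* Called continuously from the main loop; never blocks. */
void fsm_poll(void)
{
    switch (s_state) {
    case ST_IDLE:
        start_operation();
        s_t0 = HAL_GetTick();
        s_state = ST_WAIT_BIT;
        break;
    case ST_WAIT_BIT:
        if (HAL_GetTick() - s_t0 >= 1) {  /* at least ~1 ms elapsed */
            if (ready_bit_set())
                s_state = ST_DONE;
            else
                s_t0 = HAL_GetTick();     /* check again in another 1 ms */
        }
        break;
    case ST_DONE:
        /* result available; consume it and return to ST_IDLE if desired */
        break;
    }
}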

Pavel A.
Evangelist III

TL;DR: the ST "HAL" libraries are only so-so suitable for real-life projects.

You get what you pay for, this is fine.

For many "almost" real-life projects I do the following for I2C devices (a sketch follows the list):

  • Before talking to the device, call HAL_I2C_IsDeviceReady with the shortest practical timeout
  • If the device responds, send the command with a reasonable timeout
  • Else, if there is a reason for it not to respond (e.g. busy completing a recent command), schedule the next poll
  • Else the device has failed and needs recovery
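A minimal sketch of that sequence, assuming a CubeMX-generated hi2c1 handle and a made-up device address:

#include "stm32f4xx_hal.h" /* adjust to your device family */

extern I2C_HandleTypeDef hi2c1; /* from CubeMX-generated code */
#define DEV_ADDR (0x48 << 1)    /* hypothetical 7-bit address, shifted */

HAL_StatusTypeDef send_command(uint8_t cmd)
{
    /* 1. Probe with the shortest practical timeout (1 trial, 1 ms). */
    if (HAL_I2C_IsDeviceReady(&hi2c1, DEV_ADDR, 1, 1) != HAL_OK) {
        /* 3./4. Not ready: schedule a later poll, or start recovery
         * if the device has no reason to be busy. */
        return HAL_BUSY;
    }
    /* 2. Device responded: send the command with a reasonable timeout. */
    return HAL_I2C_Master_Transmit(&hi2c1, DEV_ADDR, &cmd, 1, 10);
}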

If a project deserves specially designed wheels, then of course invent them without remorse.

I'm very used to Open Source projects, and I see the HAL libraries are also Open Source. I think I'll be able to contribute soon; for now I'm receiving hard-core front-line training. I started learning embedded programming about a month ago. I figured that if I made C64 demos as a kid, I might have a chance to actually complete my first commercial project, and so far it's going surprisingly well ;) The STM32 hardware is not that much different from the good old C64, except it's much faster ;)

You can never be sure that two devices that count time are always in sync. An ordinary quartz crystal generally has an accuracy of 20 to 100 ppm, and it is unlikely that your two devices drift identically. You would have to wait 751 or 752 ms as a precaution (100 ppm of 750 ms is only 75 µs, so one or two extra milliseconds is ample margin).

If you "wait for 1 tick", that means waiting "until the next tick". So the delay can be anywhere from a few µs to nearly 1 ms. Depending on the interrupt priority and the other tasks' priorities, it can be longer: a task with higher priority than the waiting task can be woken up by the same tick.

And this is the expected behavior with an RTOS that manages priorities.

It is not possible to be sure of a timing unless you use the highest-priority interrupt to wake up the highest-priority task (and even then, critical sections can add jitter!).
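One related FreeRTOS note: for periodic work, vTaskDelayUntil schedules each wake-up relative to the previous one instead of "now", so the period does not drift over time (per-wakeup jitter still depends on priorities, as described above). A minimal sketch:

#include "FreeRTOS.h"
#include "task.h"

void periodic_task(void *arg)
{
    (void)arg;
    TickType_t last_wake = xTaskGetTickCount(); /* reference point */
    for (;;) {
        /* Wake every 10 ms relative to the previous wake-up, not to "now",
         * so late wake-ups do not accumulate into long-term drift. */
        vTaskDelayUntil(&last_wake, pdMS_TO_TICKS(10));
        /* ...periodic work here... */
    }
}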

HTD
Senior III

That sheds some light. From what you explain, it seems that if two interrupts (one with the same or higher priority than the other) occurred at the same time, my waiting task would be delayed by 1 tick? Are you sure? I would guess the system handles the interrupts in order of priority, sub-priority, or whichever came first. That seems pretty straightforward. But then again - why the whole tick delay? 1 millisecond is a very long time. If the higher-priority handler takes, say, 200 ns to complete, would the lower-priority handler be executed in the next tick, or in the same tick, just 200 ns later?

Just 200 ns later (interrupts are not related to RTOS ticks, except the SysTick):

If two interrupts occur at the same time, the lower-priority one will be delayed by the execution time of the higher-priority handler routine (probably much less than a tick, because it's good practice to keep interrupt routines short). This is interrupt handling.

If the interrupt is the SysTick interrupt, and it wakes up both your task and another task with higher priority, then your task will be delayed until the higher-priority task goes to sleep (waiting for something and releasing the CPU). That can be arbitrarily long. This is task scheduling.

So if you wait for exactly 1 tick, the actual delay depends on many things and will not be constant (although on average you will be close to the desired value).