How can I reduce the time for interrupt callback function execution?
2024-06-11 02:13 AM - last edited on 2024-06-11 02:28 AM by mƎALLEm
I have a timer configured to generate an interrupt every 15.625 µs, i.e., when its period elapses. The period-elapsed callback contains the code below. The requirement is that new data be transmitted via SPI every 15.625 µs, but a single iteration of this callback currently takes 17.95 µs. The clock speeds are already at the maximum frequency I can achieve with the internal clock, and I've trimmed the code as much as I can (there's no code in while (1)). I believe that using an external oscillator to reach a higher SYSCLK frequency would reduce the callback execution time, but I wanted to know if there's anything else I can try before redesigning the board.
void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim == &htim2) {
        DAC_EN;
        HAL_SPI_Transmit(&hspi1, &wave_data[index], 3, 1);
        DAC_DIS;
        index += 3;
        if (index >= 1535)
        {
            index = 0;
        }
    }
}
Solved! Go to Solution.
- Labels:
  - Interrupt
  - STM32F1 Series
  - TIM
Accepted Solutions
2024-07-05 05:34 AM - edited 2024-07-07 12:29 AM
@ki01 wrote: This is my first STM project so I am not very familiar with all the possibilities it offers. Do you mind elaborating on how I can use interrupts more efficiently?
Sure. The point of using interrupts is to reduce the load on the CPU by handling events as needed instead of the MCU polling for them. You're doing that. But you've also placed very time-consuming blocking code in your callback. That means the MCU spends virtually all its time inside the interrupt callback - which is upside-down. While the MCU is inside an interrupt callback, lower-priority interrupts are not serviced. That means response times to other interrupts will suffer. In the worst case, other interrupts will not be serviced at all. Also, if you ever need your MCU to do any work except send this data over SPI, this way of doing it is very limiting.
In general, you want to keep interrupt callbacks as short as possible. In your case, the standard way to accomplish your goal would be to use the interrupt callback to fire off a DMA transfer from memory into SPI, and then immediately return. On the next callback, verify that the previous transfer has completed (it must, unless you've designed your system poorly - see my previous reply), and then fire off the next chunk transfer to the SPI peripheral.
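To make that concrete, here is a minimal sketch of the shape of it, reusing the hspi1/htim2/wave_data/index/DAC_EN names from the original post and assuming a TX DMA channel has been linked to SPI1 (e.g. in CubeMX). It illustrates the approach, it is not a drop-in solution:

void HAL_TIM_PeriodElapsedCallback(TIM_HandleTypeDef *htim)
{
    if (htim == &htim2) {
        /* The previous 3-byte frame must already be finished; if it isn't,
           the SPI clock is too slow for the 15.625 µs period by design. */
        if (HAL_SPI_GetState(&hspi1) != HAL_SPI_STATE_READY) {
            return;   /* count or flag the overrun here */
        }

        DAC_EN;                                               /* select the DAC, as before */
        HAL_SPI_Transmit_DMA(&hspi1, &wave_data[index], 3);   /* returns immediately       */

        index += 3;
        if (index >= 1535) {
            index = 0;
        }
    }
}

/* The DAC_DIS that used to follow the blocking call now belongs in the
   transfer-complete callback, after the last byte has actually gone out. */
void HAL_SPI_TxCpltCallback(SPI_HandleTypeDef *hspi)
{
    if (hspi == &hspi1) {
        DAC_DIS;
    }
}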
That way, instead of near 100% MCU load, the MCU will have something around 0% load. Instead of wasting MCU cycles waiting for the SPI transfer to complete so you can send the next byte, you offload that work to dedicated hardware (the DMA peripheral). Your MCU will now just be sitting in the main while() loop doing nothing most of the time. This lets it do any other work that needs to be done while the transfer is ongoing or, if there isn't any, switch to a low-power mode.
But OK, in this project you may not need your MCU to do anything but transfer data to the SPI. As these things go, you'll probably want to add more eventually even if you don't think so now, but say you don't. Say that, as long as it works, you don't care if the MCU spends 100% of its time doing blocking SPI calls. In that case, there was no need to use interrupts at all - it may even be (slightly) less efficient. You could have just done your blocking sends inside the main loop, polling in a tight loop for the moment to start the next SPI transfer (preferably still using a timer peripheral in time-base mode and reading its registers, although polling the global tick counter, with its 1 ms resolution, might even be enough).
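For completeness, a sketch of that polling variant (same names as the original post, TIM2 started as a plain time base with HAL_TIM_Base_Start, update interrupt not enabled):

while (1)
{
    /* Busy-wait for the next 15.625 µs period boundary. */
    while (__HAL_TIM_GET_FLAG(&htim2, TIM_FLAG_UPDATE) == RESET) { }
    __HAL_TIM_CLEAR_FLAG(&htim2, TIM_FLAG_UPDATE);

    DAC_EN;
    HAL_SPI_Transmit(&hspi1, &wave_data[index], 3, 1);
    DAC_DIS;

    index += 3;
    if (index >= 1535) {
        index = 0;
    }
}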
If this is your first project, getting anything to work is an accomplishment. But if you plan to do more projects, you might as well use this one to learn how to use the hardware efficiently and how to design your software to do so. If you don't, you'll soon find that your current approach, while the simplest, is also a dead end. And once you have several more interrupt sources active, you might end up creating surprising and difficult-to-debug bugs (like lower-priority interrupt callbacks not getting called).
- Please post an update with details once you've solved your issue. Your experience may help others.
2024-06-11 02:28 AM
Hello and welcome to the community.
Please, next time, use the </> button to insert your code for better readability.
Thank you for your understanding.
PS:
1 - This is NOT online support (https://ols.st.com) but a collaborative space.
2 - Please be polite in your reply. Otherwise, it will be reported as inappropriate and you will be permanently blacklisted from my help.
2024-06-11 03:59 AM
You can speed it up by not using HAL and instead writing your own code to handle the interrupt from the timer directly. HAL is too complex and time-inefficient for this. The same applies to the SPI calls made through HAL in an interrupt.
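For illustration only, a hand-written handler for a TIM2-update-only setup could look roughly like this (it replaces the HAL_TIM_IRQHandler call in stm32f1xx_it.c; a sketch, not tested against your project):

void TIM2_IRQHandler(void)
{
    if (TIM2->SR & TIM_SR_UIF) {
        TIM2->SR = ~TIM_SR_UIF;          /* clear only the update flag */
        /* ...do the minimal per-period work here... */
    }
}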
2024-06-11 04:47 AM
Do less? Do it directly in the Handler?
Use TIM+DMA to pace output to SPI->DR?
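Very roughly, and only as an untested sketch (assumes an F103, where the RM0008 request table maps TIM2_UP to DMA1 Channel 2, SPI1 already running as master, and the wave_data buffer from the original post; note each update event moves one byte, so the timer would have to tick three times per 15.625 µs frame):

__HAL_RCC_DMA1_CLK_ENABLE();

DMA1_Channel2->CPAR  = (uint32_t)&SPI1->DR;     /* destination: SPI1 data register */
DMA1_Channel2->CMAR  = (uint32_t)wave_data;     /* source: waveform buffer in RAM  */
DMA1_Channel2->CNDTR = sizeof(wave_data);       /* total buffer length in bytes    */
DMA1_Channel2->CCR   = DMA_CCR_MINC             /* step through the buffer         */
                     | DMA_CCR_CIRC             /* wrap around at the end          */
                     | DMA_CCR_DIR              /* memory-to-peripheral            */
                     | DMA_CCR_EN;

TIM2->DIER |= TIM_DIER_UDE;                     /* update event -> DMA request     */
HAL_TIM_Base_Start(&htim2);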
Up vote any posts that you find helpful, it shows what's working..
2024-06-11 07:57 PM
Using HAL is unfortunately the requirement in this case.
2024-06-11 11:01 PM
If HAL is a requirement, there is no other way than to use a much more powerful MCU.
2024-06-12 01:21 AM
I modified the code in the HAL_TIM_IRQHandler function to only contain the code relevant to the timer channel in use, and this did the trick. Not sure if this is going to lead to any issues, but I'm guessing it won't, since this is the only timer interrupt I am using.
2024-06-12 03:49 AM
Yes, but then you didn't meet the condition of using the HAL drivers - in essence you made your own driver by modifying it. In that case, wouldn't it be better to do it properly, without the rest of the ballast that is in HAL?
2024-06-27 07:25 AM - edited 2024-06-27 07:41 AM
Why are you using the blocking variant of HAL_SPI_Transmit in an interrupt? Of course you have a problem.
You should use either the interrupt or the DMA versions of the API.
Yes, the HAL Timer IRQHandler is a bloated mess, but the real problem is that you're not using the facilities the HAL offers to operate more efficiently.
I'm concerned by the fact that your stated timer period is 15.625us, when the SPI transfer seems to take about that or even more. It suggests you haven't designed your system properly. You should either increase the speed of your SPI channel, or make your timer fire more slowly. Or both. Otherwise, by design your system can't meet the stated deadline. If you cannot do either, you really do need to redesign your system. But then, your bottleneck is not the HCLK frequency, it's the SPI frequency.
As long as you ensure (by design) that the SPI transfer completes well before the next timer event, you should have no overruns. So just verify that the transfer has completed at the start of the callback and you should be good.
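That check at the top of the callback is just a state query, something like this (a sketch; the non-blocking kick-off itself is shown in the accepted solution above):

if (HAL_SPI_GetState(&hspi1) != HAL_SPI_STATE_READY) {
    /* overrun: the previous frame did not finish within 15.625 µs */
    return;
}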
You should try to get as much timing margin as possible. For various reasons the transfer duration is not completely deterministic, so design conservatively. Aim for a 2x margin (say, get the SPI transfer done in 8 µs) or better if you can. At the very worst, I would not feel comfortable with less than 10-15% slack.
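For a rough sense of scale (illustrative numbers, not measured on your board): each frame is 3 bytes = 24 SPI clock cycles on the wire, so the raw shift time is 24 / f_SPI. At 1.5 MHz that is already 16 µs, over budget before any software overhead; at 3 MHz it is exactly the 8 µs mentioned above; at 9 MHz it is about 2.7 µs, leaving a comfortable margin within the 15.625 µs period.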
It would be wise to measure the transfer duration accurately with a logic analyzer (you could do it in software, but that would introduce more overhead to your already-borderline design).
- Please post an update with details once you've solved your issue. Your experience may help others.
2024-06-30 08:40 AM - edited 2024-06-30 04:31 PM
I've been on this forum a few months now, and it's getting to be very annoying that people post their question, get a response, and then just disappear without a word. Anyone tempted to respond with advice is forced to wonder whether it will be a total waste of time and effort.
I now feel very guilty about the handful of times I've committed this very crime over the years, elsewhere.
Thank you. Thank you, avatar-clad strangers, for this moral education you are providing me with.
- Please post an update with details once you've solved your issue. Your experience may help others.
