CPU Utilization

Vmere.1
Senior

"Could you implement a monitor task that ideally should be executed at the beginning of each time slice and determine the utilization factor during the previous time slice?

If the utilization factor is too high it should give a warning. If we can use idle time, then we can measure the utilization.

In order to set up this monitor, we should first perform a one-time estimate of how many idle operations can be performed in a time slice.

I think you can use this estimate. The monitor task should print a warning on the output if the utilization factor is higher than 90%."

This is the requirement I got for my internship. I implemented one, but when I check it with the SEGGER SystemView tool it shows a small difference. Has anyone done something like this?

volatile float referencePoint = 1512.493f;  /* idle-loop count measured once in an otherwise empty time slice */
volatile uint32_t tickhookcurrent = 0UL;
volatile float utilization_monitor = 0;

void vApplicationIdleHook( void )
{
	/* This hook function does nothing but increment a counter. */
	tickhookcurrent++;
}

void vApplicationTickHook( void )
{
	/* Utilization = 100% minus the fraction of the idle budget that was used. */
	utilization_monitor = 100 - ( (float) tickhookcurrent * 100 / referencePoint );
	tickhookcurrent = 0;
}
 
void monitor_task(void *argument){
	char s[40];  /* sprintf needs real storage, not an uninitialized pointer */
	while(1){
		sprintf(s, "Utilization %f", utilization_monitor);
		print_function(s);
		vTaskDelay(pdMS_TO_TICKS(1000));  /* block so lower-priority tasks (and idle) get to run */
	}
}
 
uint32_t lpt_profiler;
void task1_lowpriority(void *argument)
{
	TickType_t timestamp;
	while(1){
		timestamp = xTaskGetTickCount() * portTICK_PERIOD_MS;  /* elapsed time in ms */
		/* Busy-wait ~25 ms. The difference is already in ms, so compare against
		   25 directly; scaling by portTICK_PERIOD_MS again double-converts. */
		while( (lpt_profiler = ( (xTaskGetTickCount() * portTICK_PERIOD_MS) - timestamp )) < 25 );
		SEGGER_SYSVIEW_PrintfHost("LPT task from task1");
	}
	vTaskDelete(NULL);  /* never reached; the loop above does not exit */
}

Also, if I have any task with a priority higher than the idle task, the scheduler doesn't call the idle task at all. I think the requirement is flawed. Can you give me some advice on whether I can do this?

Accepted solution: Piranha (Chief II)

5 REPLIES
JPeac.1
Senior

First of all, you might clarify one of your requirements. By utilization do you mean overall execution time for tasks vs. an elapsed interval or are you trying to pinpoint utilization rate by individual task? If the goal is a type of watchdog then you only need to track idle time, since by definition when idle isn't running you are consuming task execution cycles. You will have to add a highest priority task to routinely poll some global to detect the fault or modify the scheduler code to do the same thing. This assumes you don't adopt the programming model so beloved by ST and load up a high percentage of processing in ISR callbacks.

You didn't specify if or what RTOS you are using. FreeRTOS has a mechanism to profile task execution time through a macro style hook in the pre-emptive task scheduler. The hook tracks when task context changes, basically the point in time when an unblocked task with a higher (or round robin) priority is scheduled for the next slice. I use this to track elapsed time (using a spare 16-bit 100usec hardware timer) spent in each task (which can be less than a full slice). This doesn't track time in interrupts so it's not perfect, but it is relatively lightweight in terms of scheduler overhead. Per task stack overflow can be trapped at the same time.

Since the macro hook is in the scheduler, not a task, it runs at the RTOS priority level. This has the benefit of not impacting very high interrupts running above the RTOS priority level. ISR processing time is distributed across whatever task is active, but profiling ISRs in situ can be difficult when latency can impact the application.

Obviously, this technique isn't ideal when normal usage is 97% and fault is 99%. Measurements fall within the margin of error, yielding false positives. But if you're looking for 99% usage (i.e. failed spinlock or infinite loop) when the application should be running at much lower levels, I've found it works quite well. I've used it since FreeRTOS added in the scheduler hooks.

BTW the problem IS poorly worded. It looks at one time slice at a time instead of a time series analysis. Many applications employ burst processing for critical events, which may reach 100% over several slices, followed by a return to idle levels when the event has completed. The problem is worded to imply a time-sharing system, where every user is guaranteed the same amount of time, rather than prioritized execution based on "real time" reaction to events. I sense the Linux influence here (and spare the indignant recriminations, I know there are Linux versions patched up to support real time operation).

Bear in mind no time-slice based monitoring will detect loop type faults in interrupts, especially if spinlocks are present (extremely poor programming practice in an ISR). That's why there are hardware watchdogs. Nor does it handle tasks containing critical sections (i.e. interrupts disabled) with loops that may not terminate, or priority inversion triggering task deadlocks.

Jack Peacock

KnarfB
Principal III

> when I check with segger systemview tool it is showing little difference

difference to what?

> Also If I have any task which is greater than idle task

True only if you always have a *ready* (not blocked) task with a priority higher than the idle task.

hth

KnarfB

I'm trying to measure the CPU utilization using the above code. As a reference, I'm comparing my values to SEGGER SystemView.

> You will have to add a highest priority task to routinely poll some global to detect the fault or modify the scheduler code to do the same thing.

My requirement is to calculate the total CPU utilization, which I assume can be calculated as 100% minus the idle percentage.

So, from the quoted statement above, I should create a task with the highest priority and do a taskYIELD() so that it runs once every SysTick interrupt.

Yeah, now I have a good picture of why we cannot use this technique when normal CPU usage and fault usage are close.

Thank you for the comprehensive answer and exhaustive data.

Piranha
Chief II

There is a much simpler and more accurate solution for a total CPU load measurement:

https://community.st.com/s/question/0D50X0000AId4NsSQJ/i-know-sounds-dumb-but-how-do-you-configure-systick