
Can I implement a nanosecond-delayed signal output with the STM32H723 chip?

JinJin
Associate

I would like to implement a delayed signal output with an STM32 MCU.

So I feed the original signal into an EXTI interrupt input.

The EXTI interrupt's callback starts a timer.

When the timer callback fires after the desired delay, it sets a GPIO pin HIGH.

The desired delay is roughly achieved, but the moment the pin goes HIGH varies each time, and there is about 1.5 to 1.8 µs between the EXTI event and the timer output.

I would like to know whether this idea can be implemented on the STM32 MCU series.

 

also posting my Clock Configuration pic

 

JinJin_0-1765879474780.png

 

 

9 REPLIES

The MCU core seems to be running at 500 MHz and the timers at 250 MHz.
That's a 2 ns and a 4 ns clock period, respectively.
Then there is interrupt latency, synchronization between peripheral and core, flash latency, and the overhead of the I/O functions. You might be able to get a latency of about 100 CPU cycles, give or take; I doubt it can be lower than 50.
So you can get perhaps a 200 ns delay using your approach.
There might be a way to route an input to an output using peripherals only, but you are still limited by the 4 ns clock period.
And there is the delay of the GPIO pin's push-pull output stage. It cannot switch on or off instantly.
A 1 ns delay is impossible using any clocked logic in the MCU, unless the MCU has a delay line you can use (I know there are internal programmable delay lines for the quad-SPI and octo-SPI flash interfaces, but I don't think they are available externally).

You can delay a digital signal externally with some logic gates, e.g. an even number of inverters connected in series, or an RC circuit followed by a Schmitt trigger.


What is your use case?

Kudo posts if you have the same problem and kudo replies if the solution works.
Click "Accept as Solution" if a reply solved your problem. If no solution was posted please answer with your own.
Gyessine
ST Employee

Hello @JinJin 
If I understood your question correctly: based on Table 54 of your product's datasheet,

Gyessine_0-1765891883348.png

if you choose "VERY HIGH" as the GPIO maximum output speed, you can reach toggling times in the nanosecond range, but you will need a very clean implementation.
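A minimal HAL sketch of that setting, assuming pin PA8 as the output (substitute whatever pin you actually use):

```c
/* Hedged sketch: configure a GPIO pin for "very high" output speed
 * with the STM32 HAL. PA8 is an arbitrary example pin. */
#include "stm32h7xx_hal.h"

void output_pin_init(void)
{
    GPIO_InitTypeDef gpio = {0};

    __HAL_RCC_GPIOA_CLK_ENABLE();

    gpio.Pin   = GPIO_PIN_8;
    gpio.Mode  = GPIO_MODE_OUTPUT_PP;        /* push-pull output */
    gpio.Pull  = GPIO_NOPULL;
    gpio.Speed = GPIO_SPEED_FREQ_VERY_HIGH;  /* the "VERY HIGH" slew setting */
    HAL_GPIO_Init(GPIOA, &gpio);
}
```

Note this only affects the pin's slew rate (rise/fall time), not the interrupt latency discussed above.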

Hope that answers your question.
Gyessine

To give better visibility on the answered topics, please click on Accept as Solution on the reply which solved your issue or answered your question.

> A 1 nanosecond delay is impossible. 

I would add "impossible to manage via core instructions". But it remains correct.
A few nanoseconds could be achieved in hardware, with more or less effort.

But I would ask the OP @JinJin what he needs this delay for, i.e. what the delayed and un-delayed signals are supposed to achieve in combination.

LCE
Principal II

yep, the OP didn't exactly say "1 ns", so maybe he's happy with 100 ns?

But as char[] said, in the "few ns" area, I'd try with hardware gates.

With a few more ns, maybe this can be done with a timer using an external trigger / clock via a pin.

RobK1
Senior

OP needs to provide a far more detailed description of what he wants to achieve before any useful answers can be given.

JinJin
Associate

I have already completed the code implementation using the EXTI → Timer → GPIO HIGH flow.
With the prescaler configuration shown in the attached clock configuration image, I am able to achieve a nominal delay resolution of approximately 4 ns.

However, I suspect that the latency introduced by the EXTI interrupt and the timer interrupt is not constant on each occurrence. As a result, the GPIO rising edge timing varies within a range of approximately 500 ns.

Therefore, my main question is whether there is any method to minimize this jitter within the STM32 MCU architecture, or if this behavior is an inherent limitation of STM32 when using an interrupt-based approach.

Any clarification on whether this level of jitter can be reduced further, or whether a different hardware-based approach is required, would be appreciated.

 

I appreciate all the efforts and suggestions to help clarify this issue.

JinJin
Associate

PS: the GPIO output speed is already set to very high.

MasterT
Lead

Configure the timer in slave trigger mode, with the ETRF pin as the trigger and one-pulse PWM. There'd be a latency of about 20-30 ns, so that's the minimum delay, with 2 ns resolution. No interrupts; it's all in hardware.

An example of code I have for an STM32G474RE:

static void tim3_config(void)
{
  TIM_HandleTypeDef       htim234             = {0}; /* zero-init so unused HAL fields are defined */

  TIM_ClockConfigTypeDef  sClockSourceConfig  = {0};
  TIM_MasterConfigTypeDef sMasterConfig       = {0};
  TIM_OC_InitTypeDef      sConfigOC           = {0};
  TIM_SlaveConfigTypeDef  sSlaveConfig        = {0};

    __HAL_RCC_TIM3_CLK_ENABLE();

    htim234.Instance                = TIM3;
    
    htim234.Init.Prescaler          = 0;
    htim234.Init.CounterMode        = TIM_COUNTERMODE_UP;
    htim234.Init.Period             = 352;
    htim234.Init.ClockDivision      = TIM_CLOCKDIVISION_DIV1;
    htim234.Init.AutoReloadPreload  = TIM_AUTORELOAD_PRELOAD_ENABLE;

    HAL_TIM_Base_Init(&htim234);

    sClockSourceConfig.ClockSource  = TIM_CLOCKSOURCE_INTERNAL;
    HAL_TIM_ConfigClockSource(&htim234, &sClockSourceConfig);
    
    HAL_TIM_PWM_Init(&htim234);
    HAL_TIM_OnePulse_Init(&htim234, TIM_OPMODE_SINGLE);

    sSlaveConfig.SlaveMode            = TIM_SLAVEMODE_TRIGGER;
    sSlaveConfig.InputTrigger         = TIM_TS_ETRF;
    sSlaveConfig.TriggerPolarity      = TIM_TRIGGERPOLARITY_INVERTED;
    sSlaveConfig.TriggerPrescaler     = TIM_TRIGGERPRESCALER_DIV1;
    sSlaveConfig.TriggerFilter        = 0;    
    HAL_TIM_SlaveConfigSynchro(&htim234, &sSlaveConfig);

    sMasterConfig.MasterOutputTrigger = TIM_TRGO_OC4REF; // TIM_TRGO_UPDATE;
    sMasterConfig.MasterSlaveMode     = TIM_MASTERSLAVEMODE_DISABLE;
    
    HAL_TIMEx_MasterConfigSynchronization(&htim234, &sMasterConfig);
    
    sConfigOC.OCMode      = TIM_OCMODE_PWM2;
    sConfigOC.Pulse       = 1; 
    sConfigOC.OCPolarity  = TIM_OCPOLARITY_LOW;
    sConfigOC.OCFastMode  = TIM_OCFAST_DISABLE;
    
    HAL_TIM_PWM_ConfigChannel(&htim234, &sConfigOC, TIM_CHANNEL_4);
    HAL_TIM_Base_Start(&htim234);
    HAL_TIM_PWM_Start(&htim234, TIM_CHANNEL_4);
}

 

Put the code and vector table in RAM, don't use other interrupts, and the interrupt jitter should be a couple of cycles at most. Of course, writing to the GPIO has latency of its own.
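If the interrupt-based approach is kept at all, the handler should do as little as possible: write the pin via the BSRR register directly instead of going through the HAL callback chain. A hedged sketch, assuming EXTI line 0 (input on PA0) and PB1 as the output pin, neither of which is from the thread:

```c
/* Hedged sketch of a minimal, low-latency EXTI handler.
 * EXTI line 0 and output pin PB1 are arbitrary example choices.
 * Place this handler (and the vector table) in RAM for constant fetch time. */
#include "stm32h7xx_hal.h"

void EXTI0_IRQHandler(void)
{
    GPIOB->BSRR = GPIO_PIN_1;              /* drive the output HIGH first... */
    __HAL_GPIO_EXTI_CLEAR_IT(GPIO_PIN_0);  /* ...then clear the pending flag */
}
```

Even so, Cortex-M7 interrupt entry alone costs on the order of a dozen cycles, plus bus synchronization, so this bounds the jitter rather than eliminating it; the all-hardware timer-trigger approach above avoids the core entirely.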

This would be better implemented on a simpler chip such as the F4, where you could eliminate the jitter.

Probably there are better ways of implementing this than a CPU-based approach.

If you feel a post has answered your question, please click "Accept as Solution".