FreeRTOS Support Archive
The FreeRTOS support forum is used to obtain active support directly from Real
Time Engineers Ltd. In return for using our top quality software and services for
free, we request you play fair and do your bit to help others too! Sign up
to receive notifications of new support topics then help where you can.
This is a read-only archive of threads posted to the FreeRTOS support forum.
The archive is updated every week, so it will not always contain the very latest posts.
Use these archive pages to search previous posts. Use the Live FreeRTOS Forum
link to reply to a post, or start a new support thread.
Microsecond delay within task
Posted by pugglewuggle on December 24, 2014

Is there any method of doing this with FreeRTOS 8.x that does not require using a hardware timer peripheral and an ISR? I'm thinking of something similar to the Arduino delayMicroseconds() function. That would be very nice. I know the kernel tick rate, together with the clock rate of the processor, affects the resolution of a microsecond delay. It would be good to have a version that works by actually allowing the scheduler to switch to other tasks on fast processors, and perhaps one that disables interrupts and just waits within the current task, disallowing switching (for low-microsecond delays on slower processors). Thoughts? Am I missing something?
Microsecond delay within task
Posted by richard_damon on December 24, 2014

There is nothing preventing you from using a delayMicroseconds() function like those used in non-threaded programs (one that just spins on a counter).
I would be wary of disabling interrupts for this, as that will add the delay into the latency of ALL interrupts in the system, so unless this time delay is more important than the timing of all the other interrupts, I wouldn't disable the interrupts.
Unless the delay is very many microseconds, you wouldn't be able to shift to another task, and even that would require something to generate an interrupt at the end to force the switch back.
Microsecond delay within task
Posted by pugglewuggle on December 24, 2014

Gotcha. So I'd have to set up a semaphore and give/take it in the delayMicroseconds function and an ISR?
Microsecond delay within task
Posted by richard_damon on December 24, 2014

A semaphore that is given in an ISR and waited for in the delay function is one good way to implement delays/periods shorter than a tick. If the operation to be done is simple/quick enough, it could even be done in the ISR.
This works well for delays on the order of 100us or longer (depending on the speed of your processor). Note that the actual delay in the function will be a bit longer than the timer setting, as you have the ISR processing delay and the subsequent task switch (and possible additional delays if a higher-priority task becomes ready).
This doesn't work well for short (like 5us) delays where you really need a precise value. That isn't the time frame an RTOS is aimed at. For that you either need hardware to do it, or you disable interrupts and run precisely timed stand-alone code.
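The give-from-ISR pattern described above can be sketched with the standard FreeRTOS API. The function name, the timer-start helper, and the ISR name below are all hypothetical, and the hardware timer setup is target-specific and omitted; only the semaphore calls are real FreeRTOS API:

```c
#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xDelaySem;  /* binary semaphore, created once at startup */

/* Hypothetical sub-tick delay: arm a one-shot hardware timer for the
 * requested period, then block on the semaphore until its ISR fires. */
void vDelayMicroseconds(uint32_t ulMicroseconds)
{
    vStartOneShotHardwareTimer(ulMicroseconds);  /* hypothetical, target-specific */
    xSemaphoreTake(xDelaySem, portMAX_DELAY);    /* block until the ISR gives */
}

/* Hooked to the hardware timer's expiry interrupt. */
void vTimerExpiryISR(void)
{
    BaseType_t xHigherPriorityTaskWoken = pdFALSE;
    xSemaphoreGiveFromISR(xDelaySem, &xHigherPriorityTaskWoken);
    /* Request an immediate switch back to the waiting task if it is now
     * the highest-priority ready task. */
    portYIELD_FROM_ISR(xHigherPriorityTaskWoken);
}
```

As noted above, the observed delay is the timer period plus ISR latency plus the task-switch time, so this suits delays comfortably longer than that overhead.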
Microsecond delay within task
Posted by pugglewuggle on December 24, 2014

Are you aware of any resources explaining the actual processor-time overhead of such actions in FreeRTOS (ISR/task context switches, semaphore give/take, etc.)? I'm sure it's really a function of clock cycles, with the real time then following from your core clock rate. What I'm trying to figure out is at what point it becomes necessary to disable interrupts (for low-value us timing) and when it is not (long us delay periods).
Microsecond delay within task
Posted by pugglewuggle on December 24, 2014

Another question... at what point does it become a bad idea to have the kernel handle delays instead of hardware? Such as vTaskDelay(1/portTICK_PERIOD_MS) to get 1 millisecond. If my kernel tick rate is set to 500 Hz, does that mean my 1 millisecond delay will be at minimum 2 milliseconds? Just not sure how that works.
Microsecond delay within task
Posted by rtel on December 24, 2014

The tick rate you set using configTICK_RATE_HZ sets the resolution of time you can use with FreeRTOS API functions. Therefore if you set a tick period of 2ms (500 Hz) and request a delay of 1 tick you will get a delay of between just over 0ms (if the request to delay came immediately before a tick interrupt, so the next tick comes very quickly after the delay request) up to just under 2ms (if the request to delay came immediately after a tick interrupt, so you have nearly an entire tick period before the next tick interrupt).
RTOS ticks are just periodic interrupts generated from a timer - their behaviour is exactly as any tick interrupt you would have in a system, whether the system is using an RTOS or not.
Regards.
Microsecond delay within task
Posted by heinbali01 on December 24, 2014

There is one thing I miss in this interesting discussion, i.e. the question: does it matter if occasionally the delay lasts too long?
Some protocols are immune to clock stretching, like I2C and SPI. If an interrupt occurs while executing delayMicroseconds(), the peer will just wait a little longer, but no error occurs.
Regards,
Hein
Microsecond delay within task
Posted by richard_damon on December 24, 2014

One thing to watch out for: if portTICK_PERIOD_MS is 2, then 1/portTICK_PERIOD_MS = 1/2 = 0 (for normal integer math), so you will get no delay.
As to your other question, if you need "precise" delays (where too long is bad also), you need to use the "disable interrupts" method in order to keep the system from losing time to doing something else. If you can tolerate being somewhat longer (how much depends on priorities and the needs of interrupts and other higher-priority tasks), then using the RTOS is a possibility. The big problem with trying to quantify the delay under FreeRTOS scheduling is that it is somewhat system dependent, on things like:
Processor & Speed
Compiler used (and options, particularly optimization)
Max Critical section length
Interrupt usage/code.
Microsecond delay within task
Posted by rtel on December 24, 2014

In recent versions pdMS_TO_TICKS() is preferred over portTICK_RATE_MS.
Regards.
Copyright (C) Amazon Web Services, Inc. or its affiliates. All rights reserved.