kernel: idle: introduce idle enter/exit hooks
To allow for custom SoC idle/sleep/pm behavior in a scalable manner,
introduce hooks for implementing idle enter and idle exit
(post idle enter) behavior.

The current idle thread implementation is moved to "default" enter
and exit hooks, selected by the same criteria as the existing ifdefs,
but with the selection logic moved to Kconfig to ensure non-conflicting
hook implementations.

With these hooks, default behavior should be unchanged. SoCs can now
select the Kconfig options IDLE_ENTER_HOOK_CUSTOM and
IDLE_EXIT_HOOK_CUSTOM and implement the hooks themselves.

The next step will be to move these "default" hooks to their respective
owners where possible, e.g. moving the PM hook to the PM subsystem.

Signed-off-by: Bjarki Arge Andreasen <[email protected]>
bjarki-andreasen committed Jan 9, 2025
1 parent 5aeda6f commit f74d2f2
Showing 4 changed files with 163 additions and 63 deletions.
17 changes: 16 additions & 1 deletion include/zephyr/platform/hooks.h
@@ -19,7 +19,6 @@
* directly from application code but may be freely used within the OS.
*/
/**
* @brief SoC hook executed at the beginning of the reset vector.
*
@@ -78,4 +77,20 @@ void board_early_init_hook(void);
*/
void board_late_init_hook(void);

/**
* @brief Hook executed when idle thread is entered.
*
* This hook is implemented by the SoC and can be used to perform any
* SoC-specific idle enter logic.
*/
void idle_enter_hook(void);

/**
* @brief Hook executed when idle thread is exited.
*
* This hook is implemented by the SoC and can be used to perform any
* SoC-specific idle exit logic.
*/
void idle_exit_hook(void);

#endif
1 change: 1 addition & 0 deletions kernel/Kconfig
@@ -1078,3 +1078,4 @@ endmenu
rsource "Kconfig.device"
rsource "Kconfig.vm"
rsource "Kconfig.init"
rsource "Kconfig.idle"
52 changes: 52 additions & 0 deletions kernel/Kconfig.idle
@@ -0,0 +1,52 @@
# Copyright (c) 2025 Nordic Semiconductor ASA
# SPDX-License-Identifier: Apache-2.0

menu "Kernel idle options"

choice IDLE_ENTER_HOOK
prompt "Idle enter hook implementation"
help
The implementation of the optional idle_enter_hook()
hook executed when idle thread is entered.
default IDLE_ENTER_HOOK_PM
default IDLE_ENTER_HOOK_LOOP
default IDLE_ENTER_HOOK_CPU_IDLE

config IDLE_ENTER_HOOK_CUSTOM
bool "Custom idle enter hook implementation"

config IDLE_ENTER_HOOK_LOOP
bool "Relaxed loop idle enter hook implementation"
depends on SMP
depends on !SCHED_IPI_SUPPORTED

config IDLE_ENTER_HOOK_CPU_IDLE
bool "CPU idle idle enter hook implementation"

config IDLE_ENTER_HOOK_PM
bool "PM idle enter hook implementation"
depends on PM

endchoice # IDLE_ENTER_HOOK

choice IDLE_EXIT_HOOK
prompt "Idle exit hook implementation"
depends on IDLE_ENTER_HOOK
help
The implementation of the optional idle_exit_hook()
hook executed after idle_enter_hook() returns.
default IDLE_EXIT_HOOK_YIELD

config IDLE_EXIT_HOOK_CUSTOM
bool "Custom idle exit hook implementation"
help
SoC will implement idle_exit_hook()

config IDLE_EXIT_HOOK_YIELD
bool "Yield idle exit hook implementation"
depends on !PREEMPT_ENABLED
depends on !USE_SWITCH || SPARC

endchoice # IDLE_EXIT_HOOK

endmenu
156 changes: 94 additions & 62 deletions kernel/idle.c
@@ -1,5 +1,6 @@
/*
* Copyright (c) 2016 Wind River Systems, Inc.
* Copyright (c) 2025 Nordic Semiconductor ASA
*
* SPDX-License-Identifier: Apache-2.0
*/
@@ -27,71 +28,102 @@ void idle(void *unused1, void *unused2, void *unused3)
__ASSERT_NO_MSG(arch_current_thread()->base.prio >= 0);

while (true) {
#ifdef CONFIG_IDLE_ENTER_HOOK
idle_enter_hook();
#endif /* CONFIG_IDLE_ENTER_HOOK */

#ifdef CONFIG_IDLE_EXIT_HOOK
idle_exit_hook();
#endif /* CONFIG_IDLE_EXIT_HOOK */
}
}

#ifdef CONFIG_IDLE_ENTER_HOOK_LOOP
void idle_enter_hook(void)
{
/*
* SMP systems without a working IPI can't actually
* enter an idle state, because they can't be notified
* of scheduler changes (i.e. threads they should
* run). They just spin instead, with a minimal
* relaxation loop to prevent hammering the scheduler
* lock and/or timer driver. This is intended as a
* fallback configuration for new platform bringup.
*/
for (volatile int i = 0; i < 100000; i++) {
/* Empty loop */
}
z_swap_unlocked();
}
#endif /* CONFIG_IDLE_ENTER_HOOK_LOOP */

#ifdef CONFIG_IDLE_ENTER_HOOK_CPU_IDLE
void idle_enter_hook(void)
{
/* Note weird API: k_cpu_idle() is called with local
* CPU interrupts masked, and returns with them
* unmasked. It does not take a spinlock or other
* higher level construct.
*/
(void)arch_irq_lock();

k_cpu_idle();
}
#endif /* CONFIG_IDLE_ENTER_HOOK_CPU_IDLE */

#ifdef CONFIG_IDLE_ENTER_HOOK_PM
void idle_enter_hook(void)
{
/* Note weird API: k_cpu_idle() is called with local
* CPU interrupts masked, and returns with them
* unmasked. It does not take a spinlock or other
* higher level construct.
*/
(void)arch_irq_lock();

_kernel.idle = z_get_next_timeout_expiry();

/*
* Call the suspend hook function of the soc interface
* to allow entry into a low power state. The function
* returns false if low power state was not entered, in
* which case, kernel does normal idle processing.
*
* This function is entered with interrupts disabled.
* If a low power state was entered, then the hook
* function should enable interrupts before exiting.
* This is because the kernel does not do its own idle
* processing in those cases i.e. skips k_cpu_idle().
* The kernel's idle processing re-enables interrupts
* which is essential for the kernel's scheduling
* logic.
*/
if (k_is_pre_kernel() || !pm_system_suspend(_kernel.idle)) {
k_cpu_idle();
}
}
#endif /* CONFIG_IDLE_ENTER_HOOK_PM */

#ifdef CONFIG_IDLE_EXIT_HOOK_YIELD
void idle_exit_hook(void)
{
/*
* A legacy mess: the idle thread is by definition
* preemptible as far as the modern scheduler is
* concerned, but older platforms use
* CONFIG_PREEMPT_ENABLED=n as an optimization hint
* that interrupt exit always returns to the
* interrupted context. So in that setup we need to
* explicitly yield in the idle thread otherwise
* nothing else will run once it starts.
*/
if (_kernel.ready_q.cache != arch_current_thread()) {
z_swap_unlocked();
}
}
#endif /* CONFIG_IDLE_EXIT_HOOK_YIELD */

void __weak arch_spin_relax(void)
{
