path: root/kernel/irq_work.c
author: Peter Zijlstra <>	2014-02-11 16:01:16 +0100
committer: Thomas Gleixner <>	2014-02-21 21:49:07 +0100
commitcd578abb24aa67ce468c427d3356c08ea32cf768 (patch)
tree974a97cebfc368e8bee9c1beccbbd9bda00d89ef /kernel/irq_work.c
parent90ed5b0fa5eb96e1cbb34aebf6a9ed96ee1587ec (diff)
perf/x86: Warn to early_printk() in case irq_work is too slow
On Mon, Feb 10, 2014 at 08:45:16AM -0800, Dave Hansen wrote:

> The reason I coded this up was that NMIs were firing off so fast that
> nothing else was getting a chance to run. With this patch, at least the
> printk() would come out and I'd have some idea what was going on.

It will start spewing to early_printk() (which is a lot nicer to use from
NMI context too) when it fails to queue the IRQ-work because it's already
enqueued. There is a false positive when two CPUs trigger the warning
concurrently, but that should be rare, and some extra clutter in the
early_printk output shouldn't be a problem.

Cc:
Cc:
Cc:
Cc: Dave Hansen <>
Cc:
Fixes: 6a02ad66b2c4 ("perf/x86: Push the duration-logging printk() to IRQ context")
Signed-off-by: Peter Zijlstra <>
Link:
Signed-off-by: Thomas Gleixner <>
Diffstat (limited to 'kernel/irq_work.c')
1 file changed, 4 insertions, 2 deletions
diff --git a/kernel/irq_work.c b/kernel/irq_work.c
index 55fcce6065cf..a82170e2fa78 100644
--- a/kernel/irq_work.c
+++ b/kernel/irq_work.c
@@ -61,11 +61,11 @@ void __weak arch_irq_work_raise(void)
  * Can be re-enqueued while the callback is still in progress.
  */
-void irq_work_queue(struct irq_work *work)
+bool irq_work_queue(struct irq_work *work)
 	/* Only queue if not already pending */
 	if (!irq_work_claim(work))
-		return;
+		return false;

 	/* Queue the entry and raise the IPI if needed. */
@@ -83,6 +83,8 @@ void irq_work_queue(struct irq_work *work)
+
+	return true;