Patchwork [3.5.y.z,extended,stable] Patch "ring-buffer: Fix NULL pointer if rb_set_head_page() fails" has been added to staging queue

Submitter Herton Ronaldo Krzesinski
Date Jan. 7, 2013, 8:36 p.m.
Message ID <>
Permalink /patch/210181/
State New


Herton Ronaldo Krzesinski - Jan. 7, 2013, 8:36 p.m.
This is a note to let you know that I have just added a patch titled

    ring-buffer: Fix NULL pointer if rb_set_head_page() fails

to the linux-3.5.y-queue branch of the 3.5.y.z extended stable tree 
which can be found at:;a=shortlog;h=refs/heads/linux-3.5.y-queue

If you, or anyone else, feels it should not be added to this tree, please 
reply to this email.

For more information about the 3.5.y.z tree, see



From acec2b351ba58a30c7592bb91e937b7ae709b5ca Mon Sep 17 00:00:00 2001
From: Steven Rostedt <>
Date: Thu, 29 Nov 2012 22:27:22 -0500
Subject: [PATCH] ring-buffer: Fix NULL pointer if rb_set_head_page() fails

commit 54f7be5b831254199522523ccab4c3d954bbf576 upstream.

The function rb_set_head_page() searches the list of ring buffer
pages for the page that has the HEAD page flag set. If it does
not find it, it will do a WARN_ON(), disable the ring buffer and
return NULL, as this should never happen.

But if this bug happens to happen, not all callers of this function
can handle a NULL pointer being returned from it. That needs to be
fixed.
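
The fix is plain defensive NULL checking at each call site: give the
result a safe default and only dereference the page returned by
rb_set_head_page() once it is known to be non-NULL. As a rough
standalone sketch of that pattern (the struct page_entry, find_head()
and read_timestamp() names below are made up for illustration; this is
not the kernel code):

	#include <stdio.h>
	#include <stddef.h>

	struct page_entry {
		unsigned long timestamp;
	};

	/*
	 * Toy stand-in for rb_set_head_page(): a lookup that can
	 * legitimately fail and return NULL (here it always fails,
	 * to exercise the check).
	 */
	static struct page_entry *find_head(struct page_entry *pages, size_t n)
	{
		(void)pages;
		(void)n;
		return NULL;
	}

	/*
	 * Mirrors the pattern applied in ring_buffer_oldest_event_ts():
	 * default the result, dereference only on a successful lookup.
	 */
	static unsigned long read_timestamp(struct page_entry *pages, size_t n)
	{
		struct page_entry *head = find_head(pages, n);
		unsigned long ret = 0;

		if (head)
			ret = head->timestamp;

		return ret;
	}

	int main(void)
	{
		struct page_entry pages[2] = { { 100 }, { 200 } };

		printf("timestamp: %lu\n", read_timestamp(pages, 2));
		return 0;
	}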

Signed-off-by: Steven Rostedt <>
Signed-off-by: Herton Ronaldo Krzesinski <>
---
 kernel/trace/ring_buffer.c |    9 +++++++--
 1 file changed, 7 insertions(+), 2 deletions(-)



diff --git a/kernel/trace/ring_buffer.c b/kernel/trace/ring_buffer.c
index db6dff1..35bf8f7 100644
--- a/kernel/trace/ring_buffer.c
+++ b/kernel/trace/ring_buffer.c
@@ -1396,6 +1396,8 @@  rb_insert_pages(struct ring_buffer_per_cpu *cpu_buffer)
 		struct list_head *head_page_with_bit;

 		head_page = &rb_set_head_page(cpu_buffer)->list;
+		if (!head_page)
+			break;
 		prev_page = head_page->prev;

 		first_page = pages->next;
@@ -2934,7 +2936,7 @@  unsigned long ring_buffer_oldest_event_ts(struct ring_buffer *buffer, int cpu)
 	unsigned long flags;
 	struct ring_buffer_per_cpu *cpu_buffer;
 	struct buffer_page *bpage;
-	unsigned long ret;
+	unsigned long ret = 0;

 	if (!cpumask_test_cpu(cpu, buffer->cpumask))
 		return 0;
@@ -2949,7 +2951,8 @@  unsigned long ring_buffer_oldest_event_ts(struct ring_buffer *buffer, int cpu)
 		bpage = cpu_buffer->reader_page;
 	else
 		bpage = rb_set_head_page(cpu_buffer);
-	ret = bpage->page->time_stamp;
+	if (bpage)
+		ret = bpage->page->time_stamp;
 	raw_spin_unlock_irqrestore(&cpu_buffer->reader_lock, flags);

 	return ret;
@@ -3256,6 +3259,8 @@  rb_get_reader_page(struct ring_buffer_per_cpu *cpu_buffer)
 	 * Splice the empty reader page into the list around the head.
 	 */
 	reader = rb_set_head_page(cpu_buffer);
+	if (!reader)
+		goto out;
 	cpu_buffer->reader_page->list.next = rb_list_head(reader->list.next);
 	cpu_buffer->reader_page->list.prev = reader->list.prev;