
[SRU,Kinetic] Revert "epoll: autoremove wakers even more aggressively"

Message ID 20221122195726.104087-4-khalid.elmously@canonical.com
State New
Series [SRU,Kinetic] Revert "epoll: autoremove wakers even more aggressively"

Commit Message

Khalid Elmously Nov. 22, 2022, 7:57 p.m. UTC
BugLink: https://bugs.launchpad.net/bugs/1996678

This reverts commit aeb95eabcd5f9e5074095e54636258bce48c2eb4.

That commit is suspected of causing regressions, such as the one reported in the BugLink,
as well as other containerd/runc regressions reported on Azure and elsewhere, e.g.:

https://canonical.lightning.force.com/lightning/r/Case/5004K00000OnSZDQA3/view
https://github.com/opencontainers/runc/issues/3641
https://www.spinics.net/lists/kernel/msg4565924.html


Investigation is ongoing, but there is high confidence that
bcf91619e32fe584ecfafa49a3db3d1db4ff70b2 is indeed the problem.

Signed-off-by: Khalid Elmously <khalid.elmously@canonical.com>
---
 fs/eventpoll.c | 22 ----------------------
 1 file changed, 22 deletions(-)

Patch

diff --git a/fs/eventpoll.c b/fs/eventpoll.c
index 8b56b94e2f56..e2daa940ebce 100644
--- a/fs/eventpoll.c
+++ b/fs/eventpoll.c
@@ -1747,21 +1747,6 @@  static struct timespec64 *ep_timeout_to_timespec(struct timespec64 *to, long ms)
 	return to;
 }
 
-/*
- * autoremove_wake_function, but remove even on failure to wake up, because we
- * know that default_wake_function/ttwu will only fail if the thread is already
- * woken, and in that case the ep_poll loop will remove the entry anyways, not
- * try to reuse it.
- */
-static int ep_autoremove_wake_function(struct wait_queue_entry *wq_entry,
-				       unsigned int mode, int sync, void *key)
-{
-	int ret = default_wake_function(wq_entry, mode, sync, key);
-
-	list_del_init(&wq_entry->entry);
-	return ret;
-}
-
 /**
  * ep_poll - Retrieves ready events, and delivers them to the caller-supplied
  *           event buffer.
@@ -1843,15 +1828,8 @@  static int ep_poll(struct eventpoll *ep, struct epoll_event __user *events,
 		 * normal wakeup path no need to call __remove_wait_queue()
 		 * explicitly, thus ep->lock is not taken, which halts the
 		 * event delivery.
-		 *
-		 * In fact, we now use an even more aggressive function that
-		 * unconditionally removes, because we don't reuse the wait
-		 * entry between loop iterations. This lets us also avoid the
-		 * performance issue if a process is killed, causing all of its
-		 * threads to wake up without being removed normally.
 		 */
 		init_wait(&wait);
-		wait.func = ep_autoremove_wake_function;
 
 		write_lock_irq(&ep->lock);
 		/*
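For reviewers who want the reverted semantics at a glance: the removed
helper behaves like autoremove_wake_function(), except that it unlinks
the wait entry even when the wakeup fails, i.e. when
default_wake_function()/ttwu reports the thread was already running.
Below is a minimal user-space sketch of that difference; the list
handling and all names (wait_entry, try_wake, ...) are simplified,
illustrative stand-ins, not kernel code.

#include <stdio.h>
#include <stdbool.h>

/* Stand-in for wait_queue_entry: a node on a circular doubly-linked
 * wait queue, plus a flag standing in for "task already runnable". */
struct wait_entry {
	struct wait_entry *prev, *next;
	bool already_woken;
};

/* Unlink the entry and re-point it at itself, so that a second
 * unlink (e.g. by the waiter's own cleanup path) is harmless. */
static void list_del_init(struct wait_entry *e)
{
	e->prev->next = e->next;
	e->next->prev = e->prev;
	e->prev = e->next = e;
}

/* Stand-in for default_wake_function()/ttwu: the wakeup "fails"
 * (returns 0) if the task was already woken. */
static int try_wake(struct wait_entry *e)
{
	if (e->already_woken)
		return 0;
	e->already_woken = true;
	return 1;
}

/* autoremove_wake_function() semantics: unlink only on success. */
static int autoremove_wake(struct wait_entry *e)
{
	int ret = try_wake(e);

	if (ret)
		list_del_init(e);
	return ret;
}

/* ep_autoremove_wake_function() semantics (the code being reverted):
 * unlink unconditionally, because ep_poll() never reuses the entry
 * between loop iterations. */
static int ep_autoremove_wake(struct wait_entry *e)
{
	int ret = try_wake(e);

	list_del_init(e);
	return ret;
}

int main(void)
{
	struct wait_entry head = { &head, &head, false };
	struct wait_entry w = { .already_woken = true };

	/* Queue w on the wait queue while its task is already woken. */
	w.prev = &head;
	w.next = &head;
	head.next = &w;
	head.prev = &w;

	autoremove_wake(&w);
	printf("autoremove_wake:    still queued = %d\n", head.next != &head);

	ep_autoremove_wake(&w);
	printf("ep_autoremove_wake: still queued = %d\n", head.next != &head);
	return 0;
}

With the plain autoremove behaviour, the failed wakeup leaves the entry
queued (output "still queued = 1") until ep_poll() removes it explicitly;
the reverted helper removes it regardless ("still queued = 0").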