diff mbox series

[4/9] monitor: move need_resume flag into monitor struct

Message ID 20180704084507.14560-5-peterx@redhat.com
State New
Series monitor: enable OOB by default

Commit Message

Peter Xu July 4, 2018, 8:45 a.m. UTC
The flag was put into the request object to show whether we'll need to
resume the monitor after dispatching the command.  Move it into the
monitor struct instead, so that it can also be used in other places in
the future, e.g., out-of-band message flow control.

One thing to mention: previously no lock was needed when accessing the
flag, since the request object was always owned by a single thread.
After the move into the monitor struct, the flag can be accessed by
multiple threads and therefore needs protection.  Rename qmp_queue_lock
to qmp_lock so that it protects the flag as well.

No functional change.

Signed-off-by: Peter Xu <peterx@redhat.com>
---
 monitor.c | 72 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 41 insertions(+), 31 deletions(-)

Comments

Markus Armbruster July 5, 2018, 8:51 a.m. UTC | #1
Peter Xu <peterx@redhat.com> writes:

> [...]

Marc-André's "[PATCH v3 04/38] monitor: no need to save need_resume" and
"[PATCH v3 05/38] monitor: further simplify previous patch" also mess
with need_resume.  Marc-André, could you have a look at this patch and
the next one?
Peter Xu July 5, 2018, 9:49 a.m. UTC | #2
On Thu, Jul 05, 2018 at 10:51:33AM +0200, Markus Armbruster wrote:
> Peter Xu <peterx@redhat.com> writes:
> [...]
> Marc-André's "[PATCH v3 04/38] monitor: no need to save need_resume" and
> "[PATCH v3 05/38] monitor: further simplify previous patch" also mess
> with need_resume.  Marc-André, could you have a look at this patch and
> the next one?

Sorry, I should have looked at those beforehand.  I think I was
waiting for another post splitting the patches in two (after
Marc-André poked me on that thread), but then I forgot about it.

So now I suspect we'd better keep the flag, since in the next
patch the suspend operation can happen conditionally.

Thanks,
Markus Armbruster July 5, 2018, 11:09 a.m. UTC | #3
Peter Xu <peterx@redhat.com> writes:

> On Thu, Jul 05, 2018 at 10:51:33AM +0200, Markus Armbruster wrote:
>> [...]
>
> Sorry, I should have looked at those beforehand.  I think I was
> waiting for another post splitting the patches in two (after
> Marc-André poked me on that thread), but then I forgot about it.
>
> So now I suspect we'd better keep the flag, since in the next
> patch the suspend operation can happen conditionally.

Could you two together figure out how to combine your work?  Would take
me off this critical path...
Marc-André Lureau July 5, 2018, 11:32 a.m. UTC | #4
Hi

On Thu, Jul 5, 2018 at 1:09 PM, Markus Armbruster <armbru@redhat.com> wrote:
> [...]
>>
>> Sorry, I should have looked at those beforehand.  I think I was
>> waiting for another post splitting the patches in two (after
>> Marc-André poked me on that thread), but then I forgot about it.
>>
>> So now I suspect we'd better keep the flag, since in the next
>> patch the suspend operation can happen conditionally.
>
> Could you two together figure out how to combine your work?  Would take
> me off this critical path...
>

With Peter's patches, the monitor is also suspended when the command
queue grows too long (when OOB is enabled). Thus the variable now
covers either of two different cases.

My patches only covered simplifying the first case (the legacy /
!oob-enabled suspend).

The second case could probably use a similar simplification based on
the queue length, but at this point I'd prefer to keep the variable
for clarity and sanity checking of the state.
Peter Xu July 5, 2018, 12:01 p.m. UTC | #5
On Thu, Jul 05, 2018 at 01:32:46PM +0200, Marc-André Lureau wrote:
> Hi
> 
> [...]
> 
> With Peter's patches, the monitor is also suspended when the command
> queue grows too long (when OOB is enabled). Thus the variable now
> covers either of two different cases.
> 
> My patches only covered simplifying the first case (the legacy /
> !oob-enabled suspend).
> 
> The second case could probably use a similar simplification based on
> the queue length, but at this point I'd prefer to keep the variable
> for clarity and sanity checking of the state.

Thanks for confirming that.

Meanwhile, I'm not 100% sure the same trick can be played here (e.g.,
resume when the length of either queue drops to N-1).  Without the
flag, there would be a very tricky condition (both queues at length
N-1) in which we might resume the monitor twice when we should only
do it once.
Regards,

Patch

diff --git a/monitor.c b/monitor.c
index 80cada1162..215029bc22 100644
--- a/monitor.c
+++ b/monitor.c
@@ -176,14 +176,20 @@  typedef struct {
     bool capab_offered[QMP_CAPABILITY__MAX]; /* capabilities offered */
     bool capab[QMP_CAPABILITY__MAX];         /* offered and accepted */
     /*
-     * Protects qmp request/response queue.
+     * Protects qmp request/response queue, and need_resume flag.
      * Take monitor_lock first when you need both.
      */
-    QemuMutex qmp_queue_lock;
+    QemuMutex qmp_lock;
     /* Input queue that holds all the parsed QMP requests */
     GQueue *qmp_requests;
     /* Output queue contains all the QMP responses in order */
     GQueue *qmp_responses;
+    /*
+     * Whether we need to resume the monitor afterward.  This flag is
+     * used to emulate the old QMP server behavior that the current
+     * command must be completed before execution of the next one.
+     */
+    bool need_resume;
 } MonitorQMP;
 
 /*
@@ -261,12 +267,6 @@  struct QMPRequest {
      */
     QObject *req;
     Error *err;
-    /*
-     * Whether we need to resume the monitor afterward.  This flag is
-     * used to emulate the old QMP server behavior that the current
-     * command must be completed before execution of the next one.
-     */
-    bool need_resume;
 };
 typedef struct QMPRequest QMPRequest;
 
@@ -367,7 +367,7 @@  static void qmp_request_free(QMPRequest *req)
     g_free(req);
 }
 
-/* Caller must hold mon->qmp.qmp_queue_lock */
+/* Caller must hold mon->qmp.qmp_lock */
 static void monitor_qmp_cleanup_req_queue_locked(Monitor *mon)
 {
     while (!g_queue_is_empty(mon->qmp.qmp_requests)) {
@@ -375,7 +375,7 @@  static void monitor_qmp_cleanup_req_queue_locked(Monitor *mon)
     }
 }
 
-/* Caller must hold the mon->qmp.qmp_queue_lock */
+/* Caller must hold the mon->qmp.qmp_lock */
 static void monitor_qmp_cleanup_resp_queue_locked(Monitor *mon)
 {
     while (!g_queue_is_empty(mon->qmp.qmp_responses)) {
@@ -385,12 +385,23 @@  static void monitor_qmp_cleanup_resp_queue_locked(Monitor *mon)
 
 static void monitor_qmp_cleanup_queues(Monitor *mon)
 {
-    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_lock(&mon->qmp.qmp_lock);
     monitor_qmp_cleanup_req_queue_locked(mon);
     monitor_qmp_cleanup_resp_queue_locked(mon);
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_unlock(&mon->qmp.qmp_lock);
 }
 
+/* Try to resume the monitor if it was suspended due to any reason */
+static void monitor_qmp_try_resume(Monitor *mon)
+{
+    assert(monitor_is_qmp(mon));
+    qemu_mutex_lock(&mon->qmp.qmp_lock);
+    if (mon->qmp.need_resume) {
+        monitor_resume(mon);
+        mon->qmp.need_resume = false;
+    }
+    qemu_mutex_unlock(&mon->qmp.qmp_lock);
+}
 
 static void monitor_flush_locked(Monitor *mon);
 
@@ -525,9 +536,9 @@  static void qmp_queue_response(Monitor *mon, QDict *rsp)
          * Push a reference to the response queue.  The I/O thread
          * drains that queue and emits.
          */
-        qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+        qemu_mutex_lock(&mon->qmp.qmp_lock);
         g_queue_push_tail(mon->qmp.qmp_responses, qobject_ref(rsp));
-        qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+        qemu_mutex_unlock(&mon->qmp.qmp_lock);
         qemu_bh_schedule(qmp_respond_bh);
     } else {
         /*
@@ -548,9 +559,9 @@  static QDict *monitor_qmp_response_pop_one(Monitor *mon)
 {
     QDict *data;
 
-    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_lock(&mon->qmp.qmp_lock);
     data = g_queue_pop_head(mon->qmp.qmp_responses);
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_unlock(&mon->qmp.qmp_lock);
 
     return data;
 }
@@ -769,7 +780,7 @@  static void monitor_data_init(Monitor *mon, bool skip_flush,
 {
     memset(mon, 0, sizeof(Monitor));
     qemu_mutex_init(&mon->mon_lock);
-    qemu_mutex_init(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_init(&mon->qmp.qmp_lock);
     mon->outbuf = qstring_new();
     /* Use *mon_cmds by default. */
     mon->cmd_table = mon_cmds;
@@ -789,7 +800,7 @@  static void monitor_data_destroy(Monitor *mon)
     readline_free(mon->rs);
     qobject_unref(mon->outbuf);
     qemu_mutex_destroy(&mon->mon_lock);
-    qemu_mutex_destroy(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_destroy(&mon->qmp.qmp_lock);
     monitor_qmp_cleanup_req_queue_locked(mon);
     monitor_qmp_cleanup_resp_queue_locked(mon);
     g_queue_free(mon->qmp.qmp_requests);
@@ -4151,9 +4162,9 @@  static QMPRequest *monitor_qmp_requests_pop_any(void)
     qemu_mutex_lock(&monitor_lock);
 
     QTAILQ_FOREACH(mon, &mon_list, entry) {
-        qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+        qemu_mutex_lock(&mon->qmp.qmp_lock);
         req_obj = g_queue_pop_head(mon->qmp.qmp_requests);
-        qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+        qemu_mutex_unlock(&mon->qmp.qmp_lock);
         if (req_obj) {
             break;
         }
@@ -4176,26 +4187,26 @@  static QMPRequest *monitor_qmp_requests_pop_any(void)
 static void monitor_qmp_bh_dispatcher(void *data)
 {
     QMPRequest *req_obj = monitor_qmp_requests_pop_any();
+    Monitor *mon;
     QDict *rsp;
 
     if (!req_obj) {
         return;
     }
 
+    mon = req_obj->mon;
     if (req_obj->req) {
         trace_monitor_qmp_cmd_in_band(qobject_get_try_str(req_obj->id) ?: "");
-        monitor_qmp_dispatch(req_obj->mon, req_obj->req, req_obj->id);
+        monitor_qmp_dispatch(mon, req_obj->req, req_obj->id);
     } else {
         assert(req_obj->err);
         rsp = qmp_error_response(req_obj->err);
-        monitor_qmp_respond(req_obj->mon, rsp, NULL);
+        monitor_qmp_respond(mon, rsp, NULL);
         qobject_unref(rsp);
     }
 
-    if (req_obj->need_resume) {
-        /* Pairs with the monitor_suspend() in handle_qmp_command() */
-        monitor_resume(req_obj->mon);
-    }
+    monitor_qmp_try_resume(mon);
+
     qmp_request_free(req_obj);
 
     /* Reschedule instead of looping so the main loop stays responsive */
@@ -4244,10 +4255,9 @@  static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens)
     req_obj->id = id;
     req_obj->req = req;
     req_obj->err = err;
-    req_obj->need_resume = false;
 
     /* Protect qmp_requests and fetching its length. */
-    qemu_mutex_lock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_lock(&mon->qmp.qmp_lock);
 
     /*
      * If OOB is not enabled on the current monitor, we'll emulate the
@@ -4257,11 +4267,11 @@  static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens)
      */
     if (!qmp_oob_enabled(mon)) {
         monitor_suspend(mon);
-        req_obj->need_resume = true;
+        mon->qmp.need_resume = true;
     } else {
         /* Drop the request if queue is full. */
         if (mon->qmp.qmp_requests->length >= QMP_REQ_QUEUE_LEN_MAX) {
-            qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+            qemu_mutex_unlock(&mon->qmp.qmp_lock);
             /*
              * FIXME @id's scope is just @mon, and broadcasting it is
              * wrong.  If another monitor's client has a command with
@@ -4281,7 +4291,7 @@  static void handle_qmp_command(JSONMessageParser *parser, GQueue *tokens)
      * etc. will be delivered to the handler side.
      */
     g_queue_push_tail(mon->qmp.qmp_requests, req_obj);
-    qemu_mutex_unlock(&mon->qmp.qmp_queue_lock);
+    qemu_mutex_unlock(&mon->qmp.qmp_lock);
 
     /* Kick the dispatcher routine */
     qemu_bh_schedule(qmp_dispatcher_bh);