
[00/13] ibmvfc: initial MQ development

Message ID: 20201126014824.123831-1-tyreld@linux.ibm.com

Message

Tyrel Datwyler Nov. 26, 2020, 1:48 a.m. UTC
Recent updates in pHyp Firmware and VIOS releases provide new infrastructure
towards enabling Subordinate Command Response Queues (Sub-CRQs) such that each
Sub-CRQ is a channel backed by an actual hardware queue in the FC stack on the
partner VIOS. Sub-CRQs are registered with the firmware via hypercalls and then
negotiated with the VIOS via new Management Datagrams (MADs) for channel setup.

This initial implementation adds the necessary Sub-CRQ framework and implements
the new MADs for negotiating and assigning a set of Sub-CRQs to associated VIOS
HW-backed channels. The event pool and locking still leverage the legacy single
queue implementation, and as such lock contention becomes problematic as the
number of queues increases. Even so, this initial work demonstrates a 1.2x
increase in IOPS when configured with two HW queues despite the lock contention.

Tyrel Datwyler (13):
  ibmvfc: add vhost fields and defaults for MQ enablement
  ibmvfc: define hcall wrapper for registering a Sub-CRQ
  ibmvfc: add Subordinate CRQ definitions
  ibmvfc: add alloc/dealloc routines for SCSI Sub-CRQ Channels
  ibmvfc: add Sub-CRQ IRQ enable/disable routine
  ibmvfc: add handlers to drain and complete Sub-CRQ responses
  ibmvfc: define Sub-CRQ interrupt handler routine
  ibmvfc: map/request irq and register Sub-CRQ interrupt handler
  ibmvfc: implement channel enquiry and setup commands
  ibmvfc: advertise client support for using hardware channels
  ibmvfc: set and track hw queue in ibmvfc_event struct
  ibmvfc: send commands down HW Sub-CRQ when channelized
  ibmvfc: register Sub-CRQ handles with VIOS during channel setup

 drivers/scsi/ibmvscsi/ibmvfc.c | 460 ++++++++++++++++++++++++++++++++-
 drivers/scsi/ibmvscsi/ibmvfc.h |  37 +++
 2 files changed, 493 insertions(+), 4 deletions(-)

Comments

Hannes Reinecke Dec. 2, 2020, 12:03 p.m. UTC | #1
On 11/26/20 2:48 AM, Tyrel Datwyler wrote:
> [...]
>
> This initial implementation adds the necessary Sub-CRQ framework and implements
> the new MADs for negotiating and assigning a set of Sub-CRQs to associated VIOS
> HW-backed channels. The event pool and locking still leverage the legacy single
> queue implementation, and as such lock contention becomes problematic as the
> number of queues increases. Even so, this initial work demonstrates a 1.2x
> increase in IOPS when configured with two HW queues despite the lock contention.
> 
Why do you still hold the host lock during submission?
An initial check on the submission code path didn't reveal anything 
obvious, so it _should_ be possible to drop the host lock there.
Or at least move it into the submission function itself to avoid lock 
contention. Hmm?

Cheers,

Hannes
Tyrel Datwyler Dec. 2, 2020, 5:19 p.m. UTC | #2
On 12/2/20 4:03 AM, Hannes Reinecke wrote:
> On 11/26/20 2:48 AM, Tyrel Datwyler wrote:
>> [...]
>
> Why do you still hold the host lock during submission?

Proof of concept.

> An initial check on the submission code path didn't reveal anything obvious, so
> it _should_ be possible to drop the host lock there.

It's used to protect the event pool and the event free/sent lists. This could
probably have its own lock instead of using the host lock.

> Or at least move it into the submission function itself to avoid lock
> contention. Hmm?

I have a followup patch to do that, but I didn't see any change in performance.
I've got another patch I'm finishing that provides dedicated event pools for
each subqueue such that they will no longer have any dependency on the host lock.

-Tyrel
