
[libgpiod] core: Basic port to uAPI v2

Message ID 20201002063148.32667-1-warthog618@gmail.com
State New

Commit Message

Kent Gibson Oct. 2, 2020, 6:31 a.m. UTC
Port existing implementation from GPIO uAPI v1 to v2.
The libgpiod external interface remains unchanged, only the internal
implementation switches from uAPI v1 to v2.

This is a minimal port - uAPI v2 features are only used where it
simplifies the implementation, specifically multiple events on a bulk can
now be handled directly by the kernel in a single v2 line request rather
than being emulated by multiple v1 event requests.

Signed-off-by: Kent Gibson <warthog618@gmail.com>
---
 include/gpiod.h |   2 +
 lib/core.c      | 421 ++++++++++++++++++++++++------------------------
 2 files changed, 211 insertions(+), 212 deletions(-)

Comments

Andy Shevchenko Oct. 2, 2020, 1:48 p.m. UTC | #1
On Fri, Oct 2, 2020 at 9:32 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> Port existing implementation from GPIO uAPI v1 to v2.
> The libgpiod external interface remains unchanged, only the internal
> implementation switches from uAPI v1 to v2.

Cool!

> This is a minimal port - uAPI v2 features are only used where it
> simplifies the implementation, specifically multiple events on a bulk can
> now be handled directly by the kernel in a single v2 line request rather
> than being emulated by multiple v1 event requests.

...

> +       if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_DISABLE)
> +               bias_flags++;
> +       if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP)
> +               bias_flags++;
> +       if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_DOWN)
> +               bias_flags++;
> +       if (bias_flags > 1)
> +               return false;

Seems to me like an emulation of hweight(), although I don't know if
the standard libraries have such a thing.
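For the record, a minimal sketch of what that check could look like using the compiler's population-count builtin - the BIAS_* values below are hypothetical stand-ins, not the real GPIOD_LINE_REQUEST_FLAG_* constants from gpiod.h:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical single-bit stand-ins for the GPIOD_LINE_REQUEST_FLAG_BIAS_*
 * constants - the real values live in gpiod.h. */
#define BIAS_DISABLE	(1 << 0)
#define BIAS_PULL_UP	(1 << 1)
#define BIAS_PULL_DOWN	(1 << 2)
#define BIAS_MASK	(BIAS_DISABLE | BIAS_PULL_UP | BIAS_PULL_DOWN)

/* Valid if at most one bias flag is set.  __builtin_popcount() is the
 * GCC/Clang population-count builtin Andy alludes to; it counts the set
 * bits in its argument. */
static bool bias_flags_valid(unsigned int flags)
{
	return __builtin_popcount(flags & BIAS_MASK) <= 1;
}
```

Standard C only gained a portable equivalent with C23's stdc_count_ones(); on GCC and Clang the builtin compiles down to a single instruction where the hardware supports it.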

...

> -       if (values) {
> -               for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
> -                       data.values[i] = (uint8_t)!!values[i];
> +       for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++) {
> +               lines_bitmap_set_bit(&lv.mask, i);
> +               lines_bitmap_assign_bit(&lv.bits, i, values && values[i]);
>         }

Hmm... What about
       for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
               lines_bitmap_set_bit(&lv.mask, i);

  if (values) {
       for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
               lines_bitmap_assign_bit(&lv.bits, i, values[i]);
  }

?

...

>         /*
>          * 16 is the maximum number of events the kernel can store in the FIFO
>          * so we can allocate the buffer on the stack.
> +        *
> +        * NOTE: This is no longer strictly true for uAPI v2.  While 16 is
> +        * the default for single line, a request with multiple lines will

for a single

> +        * have a larger buffer.  So need to rethink the allocation here,

So we (I, ...?) need

> +        * or at least the comment above...
>          */
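To make the note concrete, a back-of-the-envelope sketch of the v2 sizing, assuming the kernel scales the default event FIFO with the number of requested lines (16 per line, per the comment above, with GPIO_V2_LINES_MAX lines per request):

```c
#include <assert.h>

#define EVENTS_PER_LINE		16	/* default FIFO depth per line, per the comment */
#define GPIO_V2_LINES_MAX	64	/* cap on lines in a single v2 request */

/* Default number of buffered events for a request covering num_lines
 * lines, assuming the kernel scales the default FIFO with line count. */
static unsigned int default_event_buffer_len(unsigned int num_lines)
{
	return num_lines * EVENTS_PER_LINE;
}
```

So a stack buffer sized for 16 events still covers a single-line request, but the worst multi-line case is an order of magnitude larger.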
Kent Gibson Oct. 2, 2020, 4:06 p.m. UTC | #2
On Fri, Oct 02, 2020 at 04:48:02PM +0300, Andy Shevchenko wrote:
> On Fri, Oct 2, 2020 at 9:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > Port existing implementation from GPIO uAPI v1 to v2.
> > The libgpiod external interface remains unchanged, only the internal
> > implementation switches from uAPI v1 to v2.
> 
> Cool!
> 

Not terribly - I needed a libgpiod-based testbed for my kernel
changes, and this was the easiest way to do it.
By happy coincidence that aligns with the path Bart wants to take for
libgpiod 2.0, so here we are.

> > This is a minimal port - uAPI v2 features are only used where it
> > simplifies the implementation, specifically multiple events on a bulk can
> > now be handled directly by the kernel in a single v2 line request rather
> > than being emulated by multiple v1 event requests.
> 
> ...
> 
> > +       if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_DISABLE)
> > +               bias_flags++;
> > +       if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP)
> > +               bias_flags++;
> > +       if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_DOWN)
> > +               bias_flags++;
> > +       if (bias_flags > 1)
> > +               return false;
> 
> Seems to me like an emulation of hweight(), although I don't know if
> we have in standard libraries such.
> 

Yup, same here, and it was quicker at the time to just write this than go
looking...


> ...
> 
> > -       if (values) {
> > -               for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
> > -                       data.values[i] = (uint8_t)!!values[i];
> > +       for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++) {
> > +               lines_bitmap_set_bit(&lv.mask, i);
> > +               lines_bitmap_assign_bit(&lv.bits, i, values && values[i]);
> >         }
> 
> Hmm... What about
>        for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
>                lines_bitmap_set_bit(&lv.mask, i);
> 
>   if (values) {
>        for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
>                lines_bitmap_assign_bit(&lv.bits, i, values[i]);
>   }
> 
> ?
> 

I find mine easier to read.
Yours may be slightly faster though.  And maybe larger?

> ...
> 
> >         /*
> >          * 16 is the maximum number of events the kernel can store in the FIFO
> >          * so we can allocate the buffer on the stack.
> > +        *
> > +        * NOTE: This is no longer strictly true for uAPI v2.  While 16 is
> > +        * the default for single line, a request with multiple lines will
> 
> for a single
> 
> > +        * have a larger buffer.  So need to rethink the allocation here,
> 
> So we (I, ...?) need
> 

The purpose of this patch is to move what work I'd already done on
libgpiod in my tree, in order to test the kernel changes, into Bart's
libgpiod tree, where it can be used as the basis of 2.0 development.

So that note is a reminder for Bart, or whoever gets there first...
But leaving it vague is still grammatically correct.

Cheers,
Kent.
Bartosz Golaszewski Oct. 5, 2020, 10:27 a.m. UTC | #3
On Fri, Oct 2, 2020 at 6:06 PM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Fri, Oct 02, 2020 at 04:48:02PM +0300, Andy Shevchenko wrote:
> > On Fri, Oct 2, 2020 at 9:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> > > Port existing implementation from GPIO uAPI v1 to v2.
> > > The libgpiod external interface remains unchanged, only the internal
> > > implementation switches from uAPI v1 to v2.
> >
> > Cool!
> >
>
> Not terribly - I needed a libgpiod-based testbed for my kernel
> changes, and this was the easiest way to do it.
> By happy coincidence that aligns with the path Bart wants to take for
> libgpiod 2.0, so here we are.
>

Indeed. For now I'll apply this to a staging branch called
for-linux-v5.10-rc1, and once v5.10-rc1 is tagged we'll apply it to
master and make that kernel version a requirement on the master
branch. From there, we'll work on designing the v2 library API. I'm
not giving myself any deadlines - let's just get it right and it'll be
ready when it's done.

As I explained in my other email: I explicitly want to break the API
and ABI to improve the user-space interface. The question now is just
how to handle this. I'm in favor of keeping the library name as is and
keeping the public header in the same place as well. I'm not too
concerned about two versions of the library living on the same system.
I don't think this library is big or significant enough to warrant
dealing with versioning hell.

Bartosz

[snip]
Bartosz Golaszewski Oct. 12, 2020, 10:37 a.m. UTC | #4
On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> Port existing implementation from GPIO uAPI v1 to v2.
> The libgpiod external interface remains unchanged, only the internal
> implementation switches from uAPI v1 to v2.
>
> This is a minimal port - uAPI v2 features are only used where it
> simplifies the implementation, specifically multiple events on a bulk can
> now be handled directly by the kernel in a single v2 line request rather
> than being emulated by multiple v1 event requests.
>
> Signed-off-by: Kent Gibson <warthog618@gmail.com>

Hi Kent,

I don't want to break the master branch before v5.10-rc1 is tagged so
I applied this patch to a separate branch I made available at:

    git://git.kernel.org/pub/scm/libs/libgpiod/libgpiod.git  for-linux-v5.10-rc1

I'll make this my base for any v2.0 development for the next two weeks.

Best Regards,
Bartosz Golaszewski
Bartosz Golaszewski Oct. 13, 2020, 7:08 a.m. UTC | #5
On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> Port existing implementation from GPIO uAPI v1 to v2.
> The libgpiod external interface remains unchanged, only the internal
> implementation switches from uAPI v1 to v2.
>
> This is a minimal port - uAPI v2 features are only used where it
> simplifies the implementation, specifically multiple events on a bulk can
> now be handled directly by the kernel in a single v2 line request rather
> than being emulated by multiple v1 event requests.
>
> Signed-off-by: Kent Gibson <warthog618@gmail.com>
> ---

Hi Kent!

I just noticed that this broke the tests in the Python and C++
bindings. Something to do with the RISING and FALLING edge values and
how they're translated in the bindings, it seems. Let me know if you'd
have time to take a look at it and see if it's something obvious.
Otherwise I'll investigate that tomorrow.

Bartosz
Kent Gibson Oct. 13, 2020, 8:30 a.m. UTC | #6
On Tue, Oct 13, 2020 at 09:08:44AM +0200, Bartosz Golaszewski wrote:
> On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > Port existing implementation from GPIO uAPI v1 to v2.
> > The libgpiod external interface remains unchanged, only the internal
> > implementation switches from uAPI v1 to v2.
> >
> > This is a minimal port - uAPI v2 features are only used where it
> > simplifies the implementation, specifically multiple events on a bulk can
> > now be handled directly by the kernel in a single v2 line request rather
> > than being emulated by multiple v1 event requests.
> >
> > Signed-off-by: Kent Gibson <warthog618@gmail.com>
> > ---
> 
> Hi Kent!
> 
> I just noticed that this broke the tests in Python and C++ bindings.
> Something to do with the RISING and FALLING edge values and how
> they're translated in the bindings, it seems. Let me know if you'd
> have time to take a look at it and see if it's something obvious.
> Otherwise I'll investigate that tomorrow.
> 

Hmmm, that's odd - I'll take a look at it shortly.

Cheers,
Kent.
Kent Gibson Oct. 14, 2020, 3:17 a.m. UTC | #7
On Tue, Oct 13, 2020 at 09:08:44AM +0200, Bartosz Golaszewski wrote:
> On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > Port existing implementation from GPIO uAPI v1 to v2.
> > The libgpiod external interface remains unchanged, only the internal
> > implementation switches from uAPI v1 to v2.
> >
> > This is a minimal port - uAPI v2 features are only used where it
> > simplifies the implementation, specifically multiple events on a bulk can
> > now be handled directly by the kernel in a single v2 line request rather
> > than being emulated by multiple v1 event requests.
> >
> > Signed-off-by: Kent Gibson <warthog618@gmail.com>
> > ---
> 
> Hi Kent!
> 
> I just noticed that this broke the tests in Python and C++ bindings.
> Something to do with the RISING and FALLING edge values and how
> they're translated in the bindings, it seems. Let me know if you'd
> have time to take a look at it and see if it's something obvious.
> Otherwise I'll investigate that tomorrow.
> 

Lots of problems here, so where to start...

Firstly, I foolishly assumed that the coverage of the CXX and Python
tests was aimed at testing the bindings, and so wouldn't provide any
additional coverage of the core API beyond that provided by the C
tests.  Not the case :(.

Most of the problems are due to the switch from an event request per
line to a single line request.  This breaks tests that assume an fd per
line and test the fds using poll.  As they are now the same, all line fds
in the bulk become ready to read at once, where previously it was only one.
These are only present in CXX and Python test cases :(.
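The crux is that under v2 the whole bulk shares one request fd, so waiting for events means polling that single fd - a rough sketch (wait_for_bulk_event() is a hypothetical helper; in testing, a pipe can stand in for the request fd):

```c
#include <assert.h>
#include <poll.h>
#include <unistd.h>

/* With one v2 line request per bulk there is a single fd for the whole
 * bulk, so a reader waits on that one fd rather than on one fd per line.
 * Returns >0 if an event is ready, 0 on timeout, <0 on error. */
static int wait_for_bulk_event(int req_fd, int timeout_ms)
{
	struct pollfd pfd = { .fd = req_fd, .events = POLLIN };

	return poll(&pfd, 1, timeout_ms);
}
```

Any test that polls per-line fds independently will see all of them become readable at once, since they are all the same underlying file.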

I also misunderstood the use case for gpiod_line_event_wait_bulk(),
and used the event from the fd to determine the line (incorrectly as it
turns out).  As a consequence tests that then read the event from the
line fd find the subsequent event, or the wrong number of events.
Again, these are only present in CXX and Python tests :(.

And I misunderstood the offset passed to gpiod_line_bulk_get_line(),
using the chip offset rather than the bulk offset (there are too many
damn offsets here), so it only happened to work in the C tests because
ALL lines on the test chip were requested, making the chip and bulk
offsets match.  Changing the test to mismatch those offsets makes the
C tests fail as well :).
To fix that we'd need to peek into the file to get the event offset
without actually removing the event from the file, and be able to find
the line in a bulk based on its chip offset.

Oh, and gpiod_line_bulk_get_line() doesn't do range checking, so will
happily return garbage if offset >= bulk.num_lines.  It happened to
return NULL in my testing, resulting in an exception when trying to
add the line to the bulk.  But if it had returned something else...
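A range-checked variant of the accessor could look like the sketch below - the line and bulk structs are simplified, hypothetical stand-ins for the libgpiod internals, not the real definitions:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical, simplified stand-ins for libgpiod's internal line and
 * bulk types - just enough to show the missing range check. */
struct line {
	unsigned int offset;
};

#define BULK_MAX 64

struct line_bulk {
	struct line *lines[BULK_MAX];
	unsigned int num_lines;
};

/* Range-checked accessor: returns NULL instead of reading past the end
 * of the used portion of the array when index >= num_lines. */
static struct line *bulk_get_line(struct line_bulk *bulk, unsigned int index)
{
	if (index >= bulk->num_lines)
		return NULL;

	return bulk->lines[index];
}
```

Callers then get a consistent NULL for any out-of-range index, rather than whatever happens to be on the stack.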

So, in short, the switch to using one line request for all events in
the bulk conflicts with exposing a per-line fd, and the current
implementation of gpiod_line_event_wait_bulk() is broken.

Where to go from here depends on where you want to go with the API.
As mentioned yesterday, my preference would be for
gpiod_line_event_wait_bulk() to return the next event that occurred on the
bulk rather than a bulk of lines with events.

And it needs to be made clear that the fd returned by
line.event_get_fd() applies to the bulk - assuming we switch to the one
request per bulk.

Cheers,
Kent.
Bartosz Golaszewski Oct. 14, 2020, 7:37 a.m. UTC | #8
On Wed, Oct 14, 2020 at 5:17 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Tue, Oct 13, 2020 at 09:08:44AM +0200, Bartosz Golaszewski wrote:
> > On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> > > Port existing implementation from GPIO uAPI v1 to v2.
> > > The libgpiod external interface remains unchanged, only the internal
> > > implementation switches from uAPI v1 to v2.
> > >
> > > This is a minimal port - uAPI v2 features are only used where it
> > > simplifies the implementation, specifically multiple events on a bulk can
> > > now be handled directly by the kernel in a single v2 line request rather
> > > than being emulated by multiple v1 event requests.
> > >
> > > Signed-off-by: Kent Gibson <warthog618@gmail.com>
> > > ---
> >
> > Hi Kent!
> >
> > I just noticed that this broke the tests in Python and C++ bindings.
> > Something to do with the RISING and FALLING edge values and how
> > they're translated in the bindings, it seems. Let me know if you'd
> > have time to take a look at it and see if it's something obvious.
> > Otherwise I'll investigate that tomorrow.
> >
>
> Lots of problems here, so where to start...
>
> Firstly, I foolishly assumed that the coverage of the CXX and Python
> tests was aimed at testing the binding, and that wouldn't provide
> additional coverage to the core API to that tested by the C tests.
> Not the case :(.
>
> Most of the problems are due to the switch from an event request per
> line to a single line request.  This breaks tests that assume an fd per
> line and test the fds using poll.  As they are now the same, all line fds
> in the bulk become ready to read at once, where previously it was only one.
> These are only present in CXX and Python test cases :(.
>
> I also misunderstood the use case for gpiod_line_event_wait_bulk(),
> and used the event from the fd to determine the line (incorrectly as it
> turns out).  As a consequence tests that then read the event from the
> line fd find the subsequent event, or the wrong number of events.
> Again, these are only present in CXX and Python tests :(.
>
> And I misunderstood the offset passed in gpiod_line_bulk_get_line(),
> using the chip offset not the bulk offset (there are too many damn
> offsets here), so that only happened to work in the C tests as ALL lines
> on the test chip were requested so the chip and bulk offsets match.
> Changing the test to mismatch those offsets makes the C tests fail as
> well :).
> To fix that we'd need to peek into the file to get the event
> offset without actually removing the event from the file, and be able to
> find the line in a bulk based on chip offset.
>
> Oh, and gpiod_line_bulk_get_line() doesn't do range checking, so will
> happily return garbage if offset >= bulk.num_lines.  It happened to
> return NULL in my testing, resulting an exception when trying to add the
> line to the bulk.  But if it had returned something else...
>
> So, in short, the switch to using one line request for all events in the
> bulk is problematic with the exposing of the line fd, and the
> current implementation of gpiod_line_event_wait_bulk() is broken.
>
> Where to go from here depends on where you want to go with the API.
> As mentioned yesterday, my preference would be for
> gpiod_line_event_wait_bulk() to return the next event that occured on the
> bulk rather than a bulk of lines with events.
>
> And it needs to be made clear that the fd returned by
> line.event_get_fd() applies to the bulk - assuming we switch to the one
> request per bulk.

Thanks for the detailed explanation. It all makes sense now. I
thought it was possible to just use the new uAPI while keeping the old
library interface for now. That does not seem to be the case (at least
for events), so I'll just back out your port and start working on a
new API bit by bit, changing separate parts and introducing
incompatible changes where required.

If you want to work on this too - just let me know which parts so we
don't do the same thing twice.

Bartosz
Kent Gibson Oct. 14, 2020, 8:51 a.m. UTC | #9
On Wed, Oct 14, 2020 at 09:37:38AM +0200, Bartosz Golaszewski wrote:
> On Wed, Oct 14, 2020 at 5:17 AM Kent Gibson <warthog618@gmail.com> wrote:
> >
> > On Tue, Oct 13, 2020 at 09:08:44AM +0200, Bartosz Golaszewski wrote:
> > > On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> > > >
> > > > Port existing implementation from GPIO uAPI v1 to v2.
> > > > The libgpiod external interface remains unchanged, only the internal
> > > > implementation switches from uAPI v1 to v2.
> > > >
> > > > This is a minimal port - uAPI v2 features are only used where it
> > > > simplifies the implementation, specifically multiple events on a bulk can
> > > > now be handled directly by the kernel in a single v2 line request rather
> > > > than being emulated by multiple v1 event requests.
> > > >
> > > > Signed-off-by: Kent Gibson <warthog618@gmail.com>
> > > > ---
> > >
> > > Hi Kent!
> > >
> > > I just noticed that this broke the tests in Python and C++ bindings.
> > > Something to do with the RISING and FALLING edge values and how
> > > they're translated in the bindings, it seems. Let me know if you'd
> > > have time to take a look at it and see if it's something obvious.
> > > Otherwise I'll investigate that tomorrow.
> > >
> >
> > Lots of problems here, so where to start...
> >
> > Firstly, I foolishly assumed that the coverage of the CXX and Python
> > tests was aimed at testing the binding, and that wouldn't provide
> > additional coverage to the core API to that tested by the C tests.
> > Not the case :(.
> >
> > Most of the problems are due to the switch from an event request per
> > line to a single line request.  This breaks tests that assume an fd per
> > line and test the fds using poll.  As they are now the same, all line fds
> > in the bulk become ready to read at once, where previously it was only one.
> > These are only present in CXX and Python test cases :(.
> >
> > I also misunderstood the use case for gpiod_line_event_wait_bulk(),
> > and used the event from the fd to determine the line (incorrectly as it
> > turns out).  As a consequence tests that then read the event from the
> > line fd find the subsequent event, or the wrong number of events.
> > Again, these are only present in CXX and Python tests :(.
> >
> > And I misunderstood the offset passed in gpiod_line_bulk_get_line(),
> > using the chip offset not the bulk offset (there are too many damn
> > offsets here), so that only happened to work in the C tests as ALL lines
> > on the test chip were requested so the chip and bulk offsets match.
> > Changing the test to mismatch those offsets makes the C tests fail as
> > well :).
> > To fix that we'd need to peek into the file to get the event
> > offset without actually removing the event from the file, and be able to
> > find the line in a bulk based on chip offset.
> >
> > Oh, and gpiod_line_bulk_get_line() doesn't do range checking, so will
> > happily return garbage if offset >= bulk.num_lines.  It happened to
> > return NULL in my testing, resulting an exception when trying to add the
> > line to the bulk.  But if it had returned something else...
> >
> > So, in short, the switch to using one line request for all events in the
> > bulk is problematic with the exposing of the line fd, and the
> > current implementation of gpiod_line_event_wait_bulk() is broken.
> >
> > Where to go from here depends on where you want to go with the API.
> > As mentioned yesterday, my preference would be for
> > gpiod_line_event_wait_bulk() to return the next event that occured on the
> > bulk rather than a bulk of lines with events.
> >
> > And it needs to be made clear that the fd returned by
> > line.event_get_fd() applies to the bulk - assuming we switch to the one
> > request per bulk.
> 
> Thanks for the detailed explanation. It all makes sense now. I thought
> it's possible to just use the new uAPI while keeping the old library
> interface for now. It does not seem to be the case (at least for
> events) in which case I'll just back out your port and I'll start
> working on a new API bit by bit, changing separate parts and
> introducing non-compatible changes where it's required.
> 
> If you want to work on this too - just let me know which parts so we
> don't do the same thing twice.
> 

I originally did the port in multiple steps, so I should be able to
put together a patch that retains the old event behaviour without much
trouble.  That would switch you totally over to uAPI v2 while still
doing everything as per v1.  Then you could work out the best way to
integrate the new features from there.

Cheers,
Kent.
Bartosz Golaszewski Oct. 14, 2020, 8:52 a.m. UTC | #10
On Wed, Oct 14, 2020 at 10:51 AM Kent Gibson <warthog618@gmail.com> wrote:
>
> On Wed, Oct 14, 2020 at 09:37:38AM +0200, Bartosz Golaszewski wrote:
> > On Wed, Oct 14, 2020 at 5:17 AM Kent Gibson <warthog618@gmail.com> wrote:
> > >
> > > On Tue, Oct 13, 2020 at 09:08:44AM +0200, Bartosz Golaszewski wrote:
> > > > On Fri, Oct 2, 2020 at 8:32 AM Kent Gibson <warthog618@gmail.com> wrote:
> > > > >
> > > > > Port existing implementation from GPIO uAPI v1 to v2.
> > > > > The libgpiod external interface remains unchanged, only the internal
> > > > > implementation switches from uAPI v1 to v2.
> > > > >
> > > > > This is a minimal port - uAPI v2 features are only used where it
> > > > > simplifies the implementation, specifically multiple events on a bulk can
> > > > > now be handled directly by the kernel in a single v2 line request rather
> > > > > than being emulated by multiple v1 event requests.
> > > > >
> > > > > Signed-off-by: Kent Gibson <warthog618@gmail.com>
> > > > > ---
> > > >
> > > > Hi Kent!
> > > >
> > > > I just noticed that this broke the tests in Python and C++ bindings.
> > > > Something to do with the RISING and FALLING edge values and how
> > > > they're translated in the bindings, it seems. Let me know if you'd
> > > > have time to take a look at it and see if it's something obvious.
> > > > Otherwise I'll investigate that tomorrow.
> > > >
> > >
> > > Lots of problems here, so where to start...
> > >
> > > Firstly, I foolishly assumed that the coverage of the CXX and Python
> > > tests was aimed at testing the binding, and that wouldn't provide
> > > additional coverage to the core API to that tested by the C tests.
> > > Not the case :(.
> > >
> > > Most of the problems are due to the switch from an event request per
> > > line to a single line request.  This breaks tests that assume an fd per
> > > line and test the fds using poll.  As they are now the same, all line fds
> > > in the bulk become ready to read at once, where previously it was only one.
> > > These are only present in CXX and Python test cases :(.
> > >
> > > I also misunderstood the use case for gpiod_line_event_wait_bulk(),
> > > and used the event from the fd to determine the line (incorrectly as it
> > > turns out).  As a consequence tests that then read the event from the
> > > line fd find the subsequent event, or the wrong number of events.
> > > Again, these are only present in CXX and Python tests :(.
> > >
> > > And I misunderstood the offset passed in gpiod_line_bulk_get_line(),
> > > using the chip offset not the bulk offset (there are too many damn
> > > offsets here), so that only happened to work in the C tests as ALL lines
> > > on the test chip were requested so the chip and bulk offsets match.
> > > Changing the test to mismatch those offsets makes the C tests fail as
> > > well :).
> > > To fix that we'd need to peek into the file to get the event
> > > offset without actually removing the event from the file, and be able to
> > > find the line in a bulk based on chip offset.
> > >
> > > Oh, and gpiod_line_bulk_get_line() doesn't do range checking, so will
> > > happily return garbage if offset >= bulk.num_lines.  It happened to
> > > return NULL in my testing, resulting an exception when trying to add the
> > > line to the bulk.  But if it had returned something else...
> > >
> > > So, in short, the switch to using one line request for all events in the
> > > bulk is problematic with the exposing of the line fd, and the
> > > current implementation of gpiod_line_event_wait_bulk() is broken.
> > >
> > > Where to go from here depends on where you want to go with the API.
> > > As mentioned yesterday, my preference would be for
> > > gpiod_line_event_wait_bulk() to return the next event that occured on the
> > > bulk rather than a bulk of lines with events.
> > >
> > > And it needs to be made clear that the fd returned by
> > > line.event_get_fd() applies to the bulk - assuming we switch to the one
> > > request per bulk.
> >
> > Thanks for the detailed explanation. It all makes sense now. I thought
> > it's possible to just use the new uAPI while keeping the old library
> > interface for now. It does not seem to be the case (at least for
> > events) in which case I'll just back out your port and I'll start
> > working on a new API bit by bit, changing separate parts and
> > introducing non-compatible changes where it's required.
> >
> > If you want to work on this too - just let me know which parts so we
> > don't do the same thing twice.
> >
>
> I originally did the port in multiple steps, so I should be able to put
> togther a patch that retains the old event behaviour without much
> trouble.  That would switch you totally over to uAPI v2 while still
> doing everything as per v1.  Then you could workout the best way to
> integrate the new features from there.
>

Yes, that would be great! Thanks!

Bart

Patch

diff --git a/include/gpiod.h b/include/gpiod.h
index 3477f9d..a6e34ae 100644
--- a/include/gpiod.h
+++ b/include/gpiod.h
@@ -1485,6 +1485,8 @@  struct gpiod_line_event {
 	/**< Best estimate of time of event occurrence. */
 	int event_type;
 	/**< Type of the event that occurred. */
+	int offset;
+	/**< Offset of line on which the event occurred. */
 };
 
 /**
diff --git a/lib/core.c b/lib/core.c
index b964272..64b5940 100644
--- a/lib/core.c
+++ b/lib/core.c
@@ -403,26 +403,49 @@  bool gpiod_line_needs_update(struct gpiod_line *line GPIOD_UNUSED)
 	return false;
 }
 
+static int line_info_v2_to_info_flags(struct gpio_v2_line_info *info)
+{
+	int iflags = 0;
+
+	if (info->flags & GPIO_V2_LINE_FLAG_USED)
+		iflags |= GPIOLINE_FLAG_KERNEL;
+
+	if (info->flags & GPIO_V2_LINE_FLAG_OPEN_DRAIN)
+		iflags |= GPIOLINE_FLAG_OPEN_DRAIN;
+	if (info->flags & GPIO_V2_LINE_FLAG_OPEN_SOURCE)
+		iflags |= GPIOLINE_FLAG_OPEN_SOURCE;
+
+	if (info->flags & GPIO_V2_LINE_FLAG_BIAS_DISABLED)
+		iflags |= GPIOLINE_FLAG_BIAS_DISABLE;
+	if (info->flags & GPIO_V2_LINE_FLAG_BIAS_PULL_UP)
+		iflags |= GPIOLINE_FLAG_BIAS_PULL_UP;
+	if (info->flags & GPIO_V2_LINE_FLAG_BIAS_PULL_DOWN)
+		iflags |= GPIOLINE_FLAG_BIAS_PULL_DOWN;
+
+	return iflags;
+}
+
 int gpiod_line_update(struct gpiod_line *line)
 {
-	struct gpioline_info info;
+	struct gpio_v2_line_info info;
 	int rv;
 
 	memset(&info, 0, sizeof(info));
-	info.line_offset = line->offset;
+	info.offset = line->offset;
 
-	rv = ioctl(line->chip->fd, GPIO_GET_LINEINFO_IOCTL, &info);
+	rv = ioctl(line->chip->fd, GPIO_V2_GET_LINEINFO_IOCTL, &info);
 	if (rv < 0)
 		return -1;
 
-	line->direction = info.flags & GPIOLINE_FLAG_IS_OUT
+	line->direction = info.flags & GPIO_V2_LINE_FLAG_OUTPUT
 						? GPIOD_LINE_DIRECTION_OUTPUT
 						: GPIOD_LINE_DIRECTION_INPUT;
-	line->active_state = info.flags & GPIOLINE_FLAG_ACTIVE_LOW
+
+	line->active_state = info.flags & GPIO_V2_LINE_FLAG_ACTIVE_LOW
 						? GPIOD_LINE_ACTIVE_STATE_LOW
 						: GPIOD_LINE_ACTIVE_STATE_HIGH;
 
-	line->info_flags = info.flags;
+	line->info_flags = line_info_v2_to_info_flags(&info);
 
 	strncpy(line->name, info.name, sizeof(line->name));
 	strncpy(line->consumer, info.consumer, sizeof(line->consumer));
@@ -508,86 +531,154 @@  static bool line_request_direction_is_valid(int direction)
 	return false;
 }
 
-static __u32 line_request_direction_to_gpio_handleflag(int direction)
+static void line_request_type_to_gpio_v2_line_config(int reqtype,
+		struct gpio_v2_line_config *config)
 {
-	if (direction == GPIOD_LINE_REQUEST_DIRECTION_INPUT)
-		return GPIOHANDLE_REQUEST_INPUT;
-	if (direction == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT)
-		return GPIOHANDLE_REQUEST_OUTPUT;
+	if (reqtype == GPIOD_LINE_REQUEST_DIRECTION_AS_IS)
+		return;
 
-	return 0;
+	if (reqtype == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT) {
+		config->flags |= GPIO_V2_LINE_FLAG_OUTPUT;
+		return;
+	}
+	config->flags |= GPIO_V2_LINE_FLAG_INPUT;
+
+	if (reqtype == GPIOD_LINE_REQUEST_EVENT_RISING_EDGE)
+		config->flags |= GPIO_V2_LINE_FLAG_EDGE_RISING;
+	else if (reqtype == GPIOD_LINE_REQUEST_EVENT_FALLING_EDGE)
+		config->flags |= GPIO_V2_LINE_FLAG_EDGE_FALLING;
+	else if (reqtype == GPIOD_LINE_REQUEST_EVENT_BOTH_EDGES)
+		config->flags |= (GPIO_V2_LINE_FLAG_EDGE_RISING |
+				  GPIO_V2_LINE_FLAG_EDGE_FALLING);
 }
 
-static __u32 line_request_flag_to_gpio_handleflag(int flags)
+static void line_request_flag_to_gpio_v2_line_config(int flags,
+		struct gpio_v2_line_config *config)
 {
-	int hflags = 0;
+	if (flags & GPIOD_LINE_REQUEST_FLAG_ACTIVE_LOW)
+		config->flags |= GPIO_V2_LINE_FLAG_ACTIVE_LOW;
 
 	if (flags & GPIOD_LINE_REQUEST_FLAG_OPEN_DRAIN)
-		hflags |= GPIOHANDLE_REQUEST_OPEN_DRAIN;
-	if (flags & GPIOD_LINE_REQUEST_FLAG_OPEN_SOURCE)
-		hflags |= GPIOHANDLE_REQUEST_OPEN_SOURCE;
-	if (flags & GPIOD_LINE_REQUEST_FLAG_ACTIVE_LOW)
-		hflags |= GPIOHANDLE_REQUEST_ACTIVE_LOW;
+		config->flags |= GPIO_V2_LINE_FLAG_OPEN_DRAIN;
+	else if (flags & GPIOD_LINE_REQUEST_FLAG_OPEN_SOURCE)
+		config->flags |= GPIO_V2_LINE_FLAG_OPEN_SOURCE;
+
 	if (flags & GPIOD_LINE_REQUEST_FLAG_BIAS_DISABLE)
-		hflags |= GPIOHANDLE_REQUEST_BIAS_DISABLE;
-	if (flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_DOWN)
-		hflags |= GPIOHANDLE_REQUEST_BIAS_PULL_DOWN;
-	if (flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP)
-		hflags |= GPIOHANDLE_REQUEST_BIAS_PULL_UP;
+		config->flags |= GPIO_V2_LINE_FLAG_BIAS_DISABLED;
+	else if (flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_DOWN)
+		config->flags |= GPIO_V2_LINE_FLAG_BIAS_PULL_DOWN;
+	else if (flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP)
+		config->flags |= GPIO_V2_LINE_FLAG_BIAS_PULL_UP;
+}
 
-	return hflags;
+static void line_request_config_to_gpio_v2_line_config(
+		const struct gpiod_line_request_config *reqcfg,
+		struct gpio_v2_line_config *lc)
+{
+	line_request_type_to_gpio_v2_line_config(reqcfg->request_type, lc);
+	line_request_flag_to_gpio_v2_line_config(reqcfg->flags, lc);
 }
 
-static int line_request_values(struct gpiod_line_bulk *bulk,
-			       const struct gpiod_line_request_config *config,
-			       const int *default_vals)
+static bool line_request_config_validate(
+		const struct gpiod_line_request_config *config)
 {
-	struct gpiod_line *line;
-	struct line_fd_handle *line_fd;
-	struct gpiohandle_request req;
-	unsigned int i;
-	int rv, fd;
+	int bias_flags = 0;
 
 	if ((config->request_type != GPIOD_LINE_REQUEST_DIRECTION_OUTPUT) &&
 	    (config->flags & (GPIOD_LINE_REQUEST_FLAG_OPEN_DRAIN |
-			      GPIOD_LINE_REQUEST_FLAG_OPEN_SOURCE))) {
-		errno = EINVAL;
-		return -1;
-	}
+			      GPIOD_LINE_REQUEST_FLAG_OPEN_SOURCE)))
+		return false;
 
 	if ((config->flags & GPIOD_LINE_REQUEST_FLAG_OPEN_DRAIN) &&
 	    (config->flags & GPIOD_LINE_REQUEST_FLAG_OPEN_SOURCE)) {
+		return false;
+	}
+
+	if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_DISABLE)
+		bias_flags++;
+	if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_UP)
+		bias_flags++;
+	if (config->flags & GPIOD_LINE_REQUEST_FLAG_BIAS_PULL_DOWN)
+		bias_flags++;
+	if (bias_flags > 1)
+		return false;
+
+	return true;
+}
+
+static void lines_bitmap_set_bit(__u64 *bits, int nr)
+{
+	*bits |= _BITULL(nr);
+}
+
+static void lines_bitmap_clear_bit(__u64 *bits, int nr)
+{
+	*bits &= ~_BITULL(nr);
+}
+
+static int lines_bitmap_test_bit(__u64 bits, int nr)
+{
+	return !!(bits & _BITULL(nr));
+}
+
+static void lines_bitmap_assign_bit(__u64 *bits, int nr, bool value)
+{
+	if (value)
+		lines_bitmap_set_bit(bits, nr);
+	else
+		lines_bitmap_clear_bit(bits, nr);
+}
+
+static int line_request(struct gpiod_line_bulk *bulk,
+			const struct gpiod_line_request_config *config,
+			const int *vals)
+{
+	struct gpiod_line *line;
+	struct line_fd_handle *line_fd;
+	struct gpio_v2_line_request req;
+	unsigned int i;
+	int rv, fd, state;
+
+	if (!line_request_config_validate(config)) {
 		errno = EINVAL;
 		return -1;
 	}
 
 	memset(&req, 0, sizeof(req));
 
-	req.lines = gpiod_line_bulk_num_lines(bulk);
-	req.flags = line_request_flag_to_gpio_handleflag(config->flags);
-
-	if (config->request_type == GPIOD_LINE_REQUEST_DIRECTION_INPUT)
-		req.flags |= GPIOHANDLE_REQUEST_INPUT;
-	else if (config->request_type == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT)
-		req.flags |= GPIOHANDLE_REQUEST_OUTPUT;
-
+	req.num_lines = gpiod_line_bulk_num_lines(bulk);
+	line_request_config_to_gpio_v2_line_config(config, &req.config);
+	if (req.config.flags & (GPIO_V2_LINE_FLAG_EDGE_RISING |
+				GPIO_V2_LINE_FLAG_EDGE_FALLING))
+		state = LINE_REQUESTED_EVENTS;
+	else
+		state = LINE_REQUESTED_VALUES;
 
-	gpiod_line_bulk_foreach_line_off(bulk, line, i) {
-		req.lineoffsets[i] = gpiod_line_offset(line);
-		if (config->request_type ==
-				GPIOD_LINE_REQUEST_DIRECTION_OUTPUT &&
-		    default_vals)
-			req.default_values[i] = !!default_vals[i];
+	gpiod_line_bulk_foreach_line_off(bulk, line, i)
+		req.offsets[i] = gpiod_line_offset(line);
+
+	if (config->request_type == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT &&
+	    vals) {
+		req.config.num_attrs = 1;
+		req.config.attrs[0].attr.id = GPIO_V2_LINE_ATTR_ID_OUTPUT_VALUES;
+		gpiod_line_bulk_foreach_line_off(bulk, line, i) {
+			lines_bitmap_assign_bit(
+				&req.config.attrs[0].mask, i, 1);
+			lines_bitmap_assign_bit(
+				&req.config.attrs[0].attr.values,
+				i, vals[i]);
+		}
 	}
 
 	if (config->consumer)
-		strncpy(req.consumer_label, config->consumer,
-			sizeof(req.consumer_label) - 1);
+		strncpy(req.consumer, config->consumer,
+			sizeof(req.consumer) - 1);
 
 	line = gpiod_line_bulk_get_line(bulk, 0);
 	fd = line->chip->fd;
 
-	rv = ioctl(fd, GPIO_GET_LINEHANDLE_IOCTL, &req);
+	rv = ioctl(fd, GPIO_V2_GET_LINE_IOCTL, &req);
 	if (rv < 0)
 		return -1;
 
@@ -596,11 +687,11 @@  static int line_request_values(struct gpiod_line_bulk *bulk,
 		return -1;
 
 	gpiod_line_bulk_foreach_line_off(bulk, line, i) {
-		line->state = LINE_REQUESTED_VALUES;
+		line->state = state;
 		line->req_flags = config->flags;
-		if (config->request_type ==
-				GPIOD_LINE_REQUEST_DIRECTION_OUTPUT)
-			line->output_value = req.default_values[i];
+		if (config->request_type == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT)
+			line->output_value = lines_bitmap_test_bit(
+				req.config.attrs[0].attr.values, i);
 		line_set_fd(line, line_fd);
 
 		rv = gpiod_line_update(line);
@@ -613,72 +704,6 @@  static int line_request_values(struct gpiod_line_bulk *bulk,
 	return 0;
 }
 
-static int line_request_event_single(struct gpiod_line *line,
-			const struct gpiod_line_request_config *config)
-{
-	struct line_fd_handle *line_fd;
-	struct gpioevent_request req;
-	int rv;
-
-	memset(&req, 0, sizeof(req));
-
-	if (config->consumer)
-		strncpy(req.consumer_label, config->consumer,
-			sizeof(req.consumer_label) - 1);
-
-	req.lineoffset = gpiod_line_offset(line);
-	req.handleflags = line_request_flag_to_gpio_handleflag(config->flags);
-	req.handleflags |= GPIOHANDLE_REQUEST_INPUT;
-
-	if (config->request_type == GPIOD_LINE_REQUEST_EVENT_RISING_EDGE)
-		req.eventflags |= GPIOEVENT_REQUEST_RISING_EDGE;
-	else if (config->request_type == GPIOD_LINE_REQUEST_EVENT_FALLING_EDGE)
-		req.eventflags |= GPIOEVENT_REQUEST_FALLING_EDGE;
-	else if (config->request_type == GPIOD_LINE_REQUEST_EVENT_BOTH_EDGES)
-		req.eventflags |= GPIOEVENT_REQUEST_BOTH_EDGES;
-
-	rv = ioctl(line->chip->fd, GPIO_GET_LINEEVENT_IOCTL, &req);
-	if (rv < 0)
-		return -1;
-
-	line_fd = line_make_fd_handle(req.fd);
-	if (!line_fd)
-		return -1;
-
-	line->state = LINE_REQUESTED_EVENTS;
-	line->req_flags = config->flags;
-	line_set_fd(line, line_fd);
-
-	rv = gpiod_line_update(line);
-	if (rv) {
-		gpiod_line_release(line);
-		return rv;
-	}
-
-	return 0;
-}
-
-static int line_request_events(struct gpiod_line_bulk *bulk,
-			       const struct gpiod_line_request_config *config)
-{
-	struct gpiod_line *line;
-	unsigned int off;
-	int rv, rev;
-
-	gpiod_line_bulk_foreach_line_off(bulk, line, off) {
-		rv = line_request_event_single(line, config);
-		if (rv) {
-			for (rev = off - 1; rev >= 0; rev--) {
-				line = gpiod_line_bulk_get_line(bulk, rev);
-				gpiod_line_release(line);
-			}
-
-			return -1;
-		}
-	}
-
-	return 0;
-}
 
 int gpiod_line_request(struct gpiod_line *line,
 		       const struct gpiod_line_request_config *config,
@@ -692,34 +717,14 @@  int gpiod_line_request(struct gpiod_line *line,
 	return gpiod_line_request_bulk(&bulk, config, &default_val);
 }
 
-static bool line_request_is_direction(int request)
-{
-	return request == GPIOD_LINE_REQUEST_DIRECTION_AS_IS ||
-	       request == GPIOD_LINE_REQUEST_DIRECTION_INPUT ||
-	       request == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT;
-}
-
-static bool line_request_is_events(int request)
-{
-	return request == GPIOD_LINE_REQUEST_EVENT_FALLING_EDGE ||
-	       request == GPIOD_LINE_REQUEST_EVENT_RISING_EDGE ||
-	       request == GPIOD_LINE_REQUEST_EVENT_BOTH_EDGES;
-}
-
 int gpiod_line_request_bulk(struct gpiod_line_bulk *bulk,
 			    const struct gpiod_line_request_config *config,
-			    const int *default_vals)
+			    const int *vals)
 {
 	if (!line_bulk_same_chip(bulk) || !line_bulk_all_free(bulk))
 		return -1;
 
-	if (line_request_is_direction(config->request_type))
-		return line_request_values(bulk, config, default_vals);
-	else if (line_request_is_events(config->request_type))
-		return line_request_events(bulk, config);
-
-	errno = EINVAL;
-	return -1;
+	return line_request(bulk, config, vals);
 }
 
 void gpiod_line_release(struct gpiod_line *line)
@@ -772,7 +777,7 @@  int gpiod_line_get_value(struct gpiod_line *line)
 
 int gpiod_line_get_value_bulk(struct gpiod_line_bulk *bulk, int *values)
 {
-	struct gpiohandle_data data;
+	struct gpio_v2_line_values lv;
 	struct gpiod_line *line;
 	unsigned int i;
 	int rv, fd;
@@ -782,32 +787,20 @@  int gpiod_line_get_value_bulk(struct gpiod_line_bulk *bulk, int *values)
 
 	line = gpiod_line_bulk_get_line(bulk, 0);
 
-	if (line->state == LINE_REQUESTED_VALUES) {
-		memset(&data, 0, sizeof(data));
+	memset(&lv, 0, sizeof(lv));
 
-		fd = line_get_fd(line);
+	for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
+		lines_bitmap_set_bit(&lv.mask, i);
 
-		rv = ioctl(fd, GPIOHANDLE_GET_LINE_VALUES_IOCTL, &data);
-		if (rv < 0)
-			return -1;
+	fd = line_get_fd(line);
 
-		for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
-			values[i] = data.values[i];
+	rv = ioctl(fd, GPIO_V2_LINE_GET_VALUES_IOCTL, &lv);
+	if (rv < 0)
+		return -1;
 
-	} else if (line->state == LINE_REQUESTED_EVENTS) {
-		for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++) {
-			line = gpiod_line_bulk_get_line(bulk, i);
+	for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
+		values[i] = lines_bitmap_test_bit(lv.bits, i);
 
-			fd = line_get_fd(line);
-			rv = ioctl(fd, GPIOHANDLE_GET_LINE_VALUES_IOCTL, &data);
-			if (rv < 0)
-				return -1;
-			values[i] = data.values[0];
-		}
-	} else {
-		errno = EINVAL;
-		return -1;
-	}
 	return 0;
 }
 
@@ -823,7 +816,7 @@  int gpiod_line_set_value(struct gpiod_line *line, int value)
 
 int gpiod_line_set_value_bulk(struct gpiod_line_bulk *bulk, const int *values)
 {
-	struct gpiohandle_data data;
+	struct gpio_v2_line_values lv;
 	struct gpiod_line *line;
 	unsigned int i;
 	int rv, fd;
@@ -831,22 +824,22 @@  int gpiod_line_set_value_bulk(struct gpiod_line_bulk *bulk, const int *values)
 	if (!line_bulk_same_chip(bulk) || !line_bulk_all_requested(bulk))
 		return -1;
 
-	memset(&data, 0, sizeof(data));
+	memset(&lv, 0, sizeof(lv));
 
-	if (values) {
-		for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
-			data.values[i] = (uint8_t)!!values[i];
+	for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++) {
+		lines_bitmap_set_bit(&lv.mask, i);
+		lines_bitmap_assign_bit(&lv.bits, i, values && values[i]);
 	}
 
 	line = gpiod_line_bulk_get_line(bulk, 0);
 	fd = line_get_fd(line);
 
-	rv = ioctl(fd, GPIOHANDLE_SET_LINE_VALUES_IOCTL, &data);
+	rv = ioctl(fd, GPIO_V2_LINE_SET_VALUES_IOCTL, &lv);
 	if (rv < 0)
 		return -1;
 
 	gpiod_line_bulk_foreach_line_off(bulk, line, i)
-		line->output_value = data.values[i];
+		line->output_value = lines_bitmap_test_bit(lv.bits, i);
 
 	return 0;
 }
@@ -866,7 +859,7 @@  int gpiod_line_set_config_bulk(struct gpiod_line_bulk *bulk,
 			       int direction, int flags,
 			       const int *values)
 {
-	struct gpiohandle_config hcfg;
+	struct gpio_v2_line_config hcfg;
 	struct gpiod_line *line;
 	unsigned int i;
 	int rv, fd;
@@ -880,24 +873,30 @@  int gpiod_line_set_config_bulk(struct gpiod_line_bulk *bulk,
 
 	memset(&hcfg, 0, sizeof(hcfg));
 
-	hcfg.flags = line_request_flag_to_gpio_handleflag(flags);
-	hcfg.flags |= line_request_direction_to_gpio_handleflag(direction);
+	line_request_flag_to_gpio_v2_line_config(flags, &hcfg);
+	line_request_type_to_gpio_v2_line_config(direction, &hcfg);
 	if (direction == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT && values) {
-		for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++)
-			hcfg.default_values[i] = (uint8_t)!!values[i];
+		hcfg.num_attrs = 1;
+		hcfg.attrs[0].attr.id = GPIO_V2_LINE_ATTR_ID_OUTPUT_VALUES;
+		for (i = 0; i < gpiod_line_bulk_num_lines(bulk); i++) {
+			lines_bitmap_assign_bit(&hcfg.attrs[0].mask, i, 1);
+			lines_bitmap_assign_bit(
+				&hcfg.attrs[0].attr.values, i, values[i]);
+		}
 	}
 
 	line = gpiod_line_bulk_get_line(bulk, 0);
 	fd = line_get_fd(line);
 
-	rv = ioctl(fd, GPIOHANDLE_SET_CONFIG_IOCTL, &hcfg);
+	rv = ioctl(fd, GPIO_V2_LINE_SET_CONFIG_IOCTL, &hcfg);
 	if (rv < 0)
 		return -1;
 
 	gpiod_line_bulk_foreach_line_off(bulk, line, i) {
 		line->req_flags = flags;
 		if (direction == GPIOD_LINE_REQUEST_DIRECTION_OUTPUT)
-			line->output_value = hcfg.default_values[i];
+			line->output_value = lines_bitmap_test_bit(
+				hcfg.attrs[0].attr.values, i);
 
 		rv = gpiod_line_update(line);
 		if (rv < 0)
@@ -985,45 +984,37 @@  int gpiod_line_event_wait_bulk(struct gpiod_line_bulk *bulk,
 			       const struct timespec *timeout,
 			       struct gpiod_line_bulk *event_bulk)
 {
-	struct pollfd fds[GPIOD_LINE_BULK_MAX_LINES];
-	unsigned int off, num_lines;
+	struct pollfd pfd;
 	struct gpiod_line *line;
+	struct gpiod_line_event event;
 	int rv;
 
 	if (!line_bulk_same_chip(bulk) || !line_bulk_all_requested(bulk))
 		return -1;
 
-	memset(fds, 0, sizeof(fds));
-	num_lines = gpiod_line_bulk_num_lines(bulk);
-
-	gpiod_line_bulk_foreach_line_off(bulk, line, off) {
-		fds[off].fd = line_get_fd(line);
-		fds[off].events = POLLIN | POLLPRI;
-	}
+	line = gpiod_line_bulk_get_line(bulk, 0);
+	pfd.fd = line_get_fd(line);
+	pfd.events = POLLIN | POLLPRI;
 
-	rv = ppoll(fds, num_lines, timeout, NULL);
+	rv = ppoll(&pfd, 1, timeout, NULL);
 	if (rv < 0)
 		return -1;
 	else if (rv == 0)
 		return 0;
 
-	if (event_bulk)
-		gpiod_line_bulk_init(event_bulk);
-
-	for (off = 0; off < num_lines; off++) {
-		if (fds[off].revents) {
-			if (fds[off].revents & POLLNVAL) {
-				errno = EINVAL;
-				return -1;
-			}
-
-			if (event_bulk) {
-				line = gpiod_line_bulk_get_line(bulk, off);
-				gpiod_line_bulk_add(event_bulk, line);
-			}
+	if (pfd.revents) {
+		if (pfd.revents & POLLNVAL) {
+			errno = EINVAL;
+			return -1;
+		}
 
-			if (!--rv)
-				break;
+		if (event_bulk) {
+			gpiod_line_bulk_init(event_bulk);
+			rv = gpiod_line_event_read(line, &event);
+			if (rv)
+				return rv;
+			line = gpiod_line_bulk_get_line(bulk, event.offset);
+			gpiod_line_bulk_add(event_bulk, line);
 		}
 	}
 
@@ -1082,8 +1073,13 @@  int gpiod_line_event_read_fd_multiple(int fd, struct gpiod_line_event *events,
 	/*
 	 * 16 is the maximum number of events the kernel can store in the FIFO
 	 * so we can allocate the buffer on the stack.
+	 *
+	 * NOTE: This is no longer strictly true for uAPI v2.  While 16 is
+	 * the default for a single line, a request with multiple lines has
+	 * a larger buffer, so the allocation here, or at least the comment
+	 * above, needs to be rethought.
 	 */
-	struct gpioevent_data evdata[16], *curr;
+	struct gpio_v2_line_event evdata[16], *curr;
 	struct gpiod_line_event *event;
 	unsigned int events_read, i;
 	ssize_t rd;
@@ -1109,11 +1105,12 @@  int gpiod_line_event_read_fd_multiple(int fd, struct gpiod_line_event *events,
 		curr = &evdata[i];
 		event = &events[i];
 
-		event->event_type = curr->id == GPIOEVENT_EVENT_RISING_EDGE
+		event->offset = curr->offset;
+		event->event_type = curr->id == GPIO_V2_LINE_EVENT_RISING_EDGE
 					? GPIOD_LINE_EVENT_RISING_EDGE
 					: GPIOD_LINE_EVENT_FALLING_EDGE;
-		event->ts.tv_sec = curr->timestamp / 1000000000ULL;
-		event->ts.tv_nsec = curr->timestamp % 1000000000ULL;
+		event->ts.tv_sec = curr->timestamp_ns / 1000000000ULL;
+		event->ts.tv_nsec = curr->timestamp_ns % 1000000000ULL;
 	}
 
 	return i;