Message ID: 20230808171931.944154-1-u.kleine-koenig@pengutronix.de
Series: pwm: Fix lifetime issues for pwm_chips
Hello Thierry,

On Tue, Aug 08, 2023 at 07:17:50PM +0200, Uwe Kleine-König wrote:
> this series addresses the issues I reported already earlier to this list[1]. It is based on pwm/for-next and several patches I already sent out, too. Maybe some of these have to be reworked (e.g. Thierry already signalled not to like the patches dropping runtime error messages) but in the expectation that I will have to create a v2 for this series, too and it actually fixes a race condition, I sent the patches out for review anyhow. For the same reason I didn't Cc: all the individual maintainers.
>
> If you want to actually test I suggest you fetch my complete history from
>
>     https://git.pengutronix.de/git/ukl/linux pwm-lifetime-tracking
>
> In the end drivers have to allocate their pwm_chip using pwmchip_alloc(). This is important for the memory backing the pwm_chip being able to have a longer life than the driver.
>
> The motivation for this series is to prepare the pwm framework to add a character device for each pwm_chip for easier and faster access to PWMs from userspace compared to the sysfs API. For such an extension proper lifetime tracking is important, too, as such a device can still be open if a PWM disappears.

I wonder how this topic will continue. This series fixes a lifetime issue that can result in a userspace triggered oops and it builds the base for my efforts to create a /dev/pwmchipX for faster control of PWMs from userspace (compared to sysfs). (Currently in the prototype stage.)

I'd like to get this in during the next merge window, please tell me what needs to be done to make this happen.

Best regards
Uwe
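[Editor's note] The pwmchip_alloc() idea quoted above — one framework-owned allocation carrying both the chip and the driver's private data, so the memory can outlive the driver — can be sketched outside the kernel. This is a hypothetical model under the series' described design, not the real kernel API; `pwm_chip_sketch`, `pwmchip_alloc_sketch()` and `pwmchip_get_drvdata_sketch()` are illustrative names:

```c
#include <stdlib.h>

/*
 * Hypothetical model of the pwmchip_alloc() scheme: the framework
 * allocates a single chunk that holds the chip struct followed by
 * sizeof_priv bytes of driver-private data.  Because the framework
 * owns the allocation, it can keep it alive until the last reference
 * is gone, independent of the driver's ->remove().
 */
struct pwm_chip_sketch {
	unsigned int npwm;	/* number of PWM lines of this chip */
	/* driver-private data follows in the same allocation */
};

static struct pwm_chip_sketch *pwmchip_alloc_sketch(unsigned int npwm,
						    size_t sizeof_priv)
{
	/* one allocation for chip + private data */
	struct pwm_chip_sketch *chip = calloc(1, sizeof(*chip) + sizeof_priv);

	if (!chip)
		return NULL;
	chip->npwm = npwm;
	return chip;
}

static void *pwmchip_get_drvdata_sketch(struct pwm_chip_sketch *chip)
{
	/* the driver's private data sits directly behind the chip */
	return chip + 1;
}
```

A converted driver would then call the alloc function in .probe and reach its private data through the accessor, instead of embedding the chip in its own devm-allocated structure.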
Hello again,

On Tue, Sep 26, 2023 at 12:06:25PM +0200, Uwe Kleine-König wrote:
> On Tue, Aug 08, 2023 at 07:17:50PM +0200, Uwe Kleine-König wrote:
> > [...]
>
> I wonder how this topic will continue. This series fixes a lifetime issue that can result in a userspace triggered oops and it builds the base for my efforts to create a /dev/pwmchipX for faster control of PWMs from userspace (compared to sysfs). (Currently in the prototype stage.)
>
> I'd like to get this in during the next merge window, please tell me what needs to be done to make this happen.

One problem I noticed yesterday is that this series depends on patch "drm/ssd130x: Print the PWM's label instead of its number" that currently waits in drm-misc-next for getting in the main line. The series could for sure be reworked to not rely on this patch, but I'd prefer to wait until after the next merge window instead of reworking it.

Still, getting some feedback here in the mean time would be nice. The questions I wonder about myself are:

- In patch #1, devm_pwmchip_alloc() could get another parameter for the .ops member. This would save a line per driver like

      chip->ops = &pwm_clk_ops;

  in return for an additional parameter that yields longer lines in the drivers.

- In patch #101, instead of using &pwm_lock a per-pwmchip lock could be used for pwm_apply_state(). This would allow to parallelize pwm calls for different chips; I don't know how much this matters. Maybe the sensible option here is to keep it simple for now (i.e. how I implemented it now) until someone complains? (But see also the next item.)

- A further complication is the requirement of pwm-ir-tx for faster pwm_apply_state() calls, see

      https://lore.kernel.org/linux-pwm/ZRb5OWvx3GxYWf9g@gofer.mess.org
      https://lore.kernel.org/linux-pwm/1bd5241d584ceb4d6b731c4dc3203fb9686ee1d1.1696156485.git.sean@mess.org

  This complicates the locking scheme, I didn't try to address that yet.

Best regards
Uwe
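[Editor's note] The per-pwmchip lock question raised above boils down to whether apply calls on different chips may proceed in parallel, or whether they all contend on one subsystem-wide lock. A minimal userspace sketch of the per-chip variant, with POSIX mutexes standing in for the kernel's locks and all names invented for illustration:

```c
#include <pthread.h>

/*
 * Hypothetical sketch: each chip carries its own lock, so applying a
 * state on chip A does not serialize against chip B.  With a single
 * subsystem-wide lock (the &pwm_lock variant), both apply calls below
 * would contend on the same mutex.
 */
struct chip_sketch {
	pthread_mutex_t lock;		/* per-chip, replaces a global lock */
	unsigned long duty_cycle;
	unsigned long period;
};

static int chip_init(struct chip_sketch *chip)
{
	chip->duty_cycle = 0;
	chip->period = 0;
	return pthread_mutex_init(&chip->lock, NULL);
}

static void apply_state_sketch(struct chip_sketch *chip,
			       unsigned long duty_cycle, unsigned long period)
{
	/* only callers targeting *this* chip are serialized */
	pthread_mutex_lock(&chip->lock);
	chip->duty_cycle = duty_cycle;
	chip->period = period;
	pthread_mutex_unlock(&chip->lock);
}
```

The pwm-ir-tx requirement mentioned in the last bullet pushes in the same direction: a fast chip should not have to wait behind a slow one just because they share a subsystem lock.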
On Tue, Aug 08, 2023 at 07:17:50PM +0200, Uwe Kleine-König wrote:
> Hello,
>
> [...]
>
> In the end drivers have to allocate their pwm_chip using pwmchip_alloc(). This is important for the memory backing the pwm_chip being able to have a longer life than the driver.

Couldn't we achieve the same thing by just making sure that drivers don't use any of the device-managed APIs to do this? That seems like a slightly less intrusive way of doing things.

> The motivation for this series is to prepare the pwm framework to add a character device for each pwm_chip for easier and faster access to PWMs from userspace compared to the sysfs API. For such an extension proper lifetime tracking is important, too, as such a device can still be open if a PWM disappears.

As I mentioned before, I'd like to see at least a prototype of the character device patches before I merge this series. There's a whole lot of churn here and without the character device support it's hard to justify.

I have a couple more detailed comments, but I'll leave those in response to some of the other patches for better context.

Thierry
On Sun, Oct 01, 2023 at 01:10:24PM +0200, Uwe Kleine-König wrote:
> Hello again,
>
> On Tue, Sep 26, 2023 at 12:06:25PM +0200, Uwe Kleine-König wrote:
> > [...]
>
> One problem I noticed yesterday is that this series depends on patch "drm/ssd130x: Print the PWM's label instead of its number" that currently waits in drm-misc-next for getting in the main line. The series could for sure be reworked to not rely on this patch, but I'd prefer to wait until after the next merge window instead of reworking it.
>
> Still, getting some feedback here in the mean time would be nice. The questions I wonder about myself are:
>
> - In patch #1, devm_pwmchip_alloc() could get another parameter for the .ops member. This would save a line per driver like
>
>       chip->ops = &pwm_clk_ops;
>
>   in return for an additional parameter that yields longer lines in the drivers.
>
> - In patch #101, instead of using &pwm_lock a per-pwmchip lock could be used for pwm_apply_state(). This would allow to parallelize pwm calls for different chips; I don't know how much this matters. Maybe the sensible option here is to keep it simple for now (i.e. how I implemented it now) until someone complains? (But see also the next item.)
>
> - A further complication is the requirement of pwm-ir-tx for faster pwm_apply_state() calls, see
>
>       https://lore.kernel.org/linux-pwm/ZRb5OWvx3GxYWf9g@gofer.mess.org
>       https://lore.kernel.org/linux-pwm/1bd5241d584ceb4d6b731c4dc3203fb9686ee1d1.1696156485.git.sean@mess.org
>
>   This complicates the locking scheme, I didn't try to address that yet.

Frankly, I think per-chip locking is the only way to support slow busses. If we use the subsystem-wide lock, effectively all chips become slow and unusable for things like pwm-ir-tx.

Perhaps we can draw some inspiration from how the IRQ infrastructure deals with this? IRQ chips have irq_bus_lock() and irq_bus_sync_unlock() callbacks to deal with this. We could have something similar for PWM chips. Perhaps that's even something that could be handled in the core by checking a "might_sleep" flag for the chip.

Thierry
Hello Thierry,

On Fri, Oct 06, 2023 at 11:20:03AM +0200, Thierry Reding wrote:
> On Tue, Aug 08, 2023 at 07:17:50PM +0200, Uwe Kleine-König wrote:
> > [...]
> >
> > In the end drivers have to allocate their pwm_chip using pwmchip_alloc(). This is important for the memory backing the pwm_chip being able to have a longer life than the driver.
>
> Couldn't we achieve the same thing by just making sure that drivers don't use any of the device-managed APIs to do this? That seems like a slightly less intrusive way of doing things.

The device-managed APIs are not the problem here. If you allocate in .probe and free in .remove there is a problem. (And that's exactly what the device managed APIs do.) So no, you cannot.

The thing is the struct pwm_chip must stay around until the last reference is dropped. So you need some kind of reference counting. The canonical way to do that is using a struct device.

You can try to hide it from the low-level drivers (as gpio does) at the cost that you have the driver allocated structure separate from the reference counted memory under framework control. The cost is more overhead because you have >1 memory chunk (memory fragmentation, less cache locality) and more pointers. IMHO the cleaner way is to not hide it and have the explicit handling needed in the drivers be not error prone (as spi does).

I agree the switch suggested here is intrusive, but the "new way" a driver has to look like is fine, so I'd not hesitate here.

> > The motivation for this series is to prepare the pwm framework to add a character device for each pwm_chip for easier and faster access to PWMs from userspace compared to the sysfs API. For such an extension proper lifetime tracking is important, too, as such a device can still be open if a PWM disappears.
>
> As I mentioned before, I'd like to see at least a prototype of the character device patches before I merge this series. There's a whole lot of churn here and without the character device support it's hard to justify.

The character device stuff is only one reason to get the lifetime tracking right. See that oops I can trigger today that is fixed by this series.

> I have a couple more detailed comments, but I'll leave those in response to some of the other patches for better context.

Earlier today I managed to get this character device support working enough to be able to trigger pwm_apply_state() calls. My test case looks (simplified) like this:

	state.period = 50000;
	for (state.duty_cycle = 0; state.duty_cycle <= state.period; ++state.duty_cycle)
		pwm_apply_state(pwm, &state);

With a naive sysfs backend that takes approx. a minute. With an optimized version (that only writes the duty_cycle file in the above case) it goes down to 20s. With the character device it takes 4.7s.

Best regards
Uwe
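[Editor's note] The lifetime rule argued above — the chip's memory may only be freed when the *last* reference is put, which can be well after the driver's .remove() ran — can be modelled in plain C. This is a hypothetical sketch of the refcounting idea (in the kernel a struct device and put_device() provide it); `refchip_*` are invented names:

```c
#include <stdlib.h>

/*
 * Hypothetical model of the lifetime rule: the chip is freed when the
 * last reference is put, not when the driver is removed.  A plain
 * counter stands in for what embedding a struct device buys you in
 * the kernel (put_device() -> release callback -> kfree()).
 */
struct refchip_sketch {
	unsigned int refcount;
	int removed;	/* set once the driver's ->remove() ran */
};

static struct refchip_sketch *refchip_new(void)
{
	struct refchip_sketch *chip = calloc(1, sizeof(*chip));

	if (chip)
		chip->refcount = 1;	/* the driver's own reference */
	return chip;
}

static void refchip_get(struct refchip_sketch *chip)
{
	chip->refcount++;
}

/* returns 1 if this put actually freed the chip */
static int refchip_put(struct refchip_sketch *chip)
{
	if (--chip->refcount == 0) {
		free(chip);
		return 1;
	}
	return 0;
}
```

A user (say, an open character device) takes a reference with refchip_get(); the driver's remove path does one refchip_put(), and the memory survives until the user's final put. Allocating in .probe and unconditionally freeing in .remove — which is what devm_kzalloc() amounts to — skips exactly this step.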
On Fri, Oct 06, 2023 at 12:41:02PM +0200, Uwe Kleine-König wrote:
> Hello Thierry,
>
> On Fri, Oct 06, 2023 at 11:20:03AM +0200, Thierry Reding wrote:
> > [...]
> >
> > Couldn't we achieve the same thing by just making sure that drivers don't use any of the device-managed APIs to do this? That seems like a slightly less intrusive way of doing things.
>
> The device-managed APIs are not the problem here. If you allocate in .probe and free in .remove there is a problem. (And that's exactly what the device managed APIs do.)

Heh... so you're saying that device-managed APIs are the problem here.

Yes, without device-managed APIs you still need to make sure you don't free while the PWM devices are still in use, but at least you then have that option. My point was that with device-managed APIs, freeing memory at ->remove() comes built-in. Hence why it's confusing to see that this series keeps using device-managed APIs while claiming to address lifetime issues.

> So no, you cannot. The thing is the struct pwm_chip must stay around until the last reference is dropped. So you need some kind of reference counting. The canonical way to do that is using a struct device.
>
> You can try to hide it from the low-level drivers (as gpio does) at the cost that you have the driver allocated structure separate from the reference counted memory under framework control. The cost is more overhead because you have >1 memory chunk (memory fragmentation, less cache locality) and more pointers. IMHO the cleaner way is to not hide it

Single-chunk allocations can also lead to less cache locality, depending on the size of your allocations. For instance if you primarily use small driver-specific data structures, those may fit into the cache while a combined data structure that consists of the core structure plus driver-private data may need to be split across multiple cache lines, and you may not end up using something like the core structure a lot of the time.

Anyway, I'm not sure PWM is the kind of subsystem where cache locality is something we need to be concerned about.

> it and have the explicit handling needed in the drivers be not error prone (as spi does).
>
> I agree the switch suggested here is intrusive, but the "new way" a driver has to look like is fine, so I'd not hesitate here.
>
> The character device stuff is only one reason to get the lifetime tracking right. See that oops I can trigger today that is fixed by this series.

My recollection is that you need to inject a 5 second sleep into an apply function in order to trigger this, which is not something you would usually do, or that someone could trigger by accident.

Yes, it might be theoretically possible to run into this, but so far nobody has reported this to be an actual problem in practice.

Also, as you have mentioned before, other mechanisms should already take care of tearing things down properly. If that's not happening for some reason, that's probably what we should investigate. It'd probably be less intrusive than 100 patches with churn in every driver.

I'm not saying we don't need this, just trying to put it into perspective.

Thierry
Hello Thierry,

On Fri, Oct 06, 2023 at 12:15:55PM +0200, Thierry Reding wrote:
> On Sun, Oct 01, 2023 at 01:10:24PM +0200, Uwe Kleine-König wrote:
> > [...]
>
> Frankly, I think per-chip locking is the only way to support slow busses. If we use the subsystem-wide lock, effectively all chips become slow and unusable for things like pwm-ir-tx.
>
> Perhaps we can draw some inspiration from how the IRQ infrastructure deals with this? IRQ chips have irq_bus_lock() and irq_bus_sync_unlock() callbacks to deal with this. We could have something similar for PWM chips. Perhaps that's even something that could be handled in the core by checking a "might_sleep" flag for the chip.

I'll invest some time to work on that, also considering that we might not want sleep at all for "fast" PWMs.

Best regards
Uwe
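[Editor's note] One way to read the "might_sleep flag checked in the core" idea: each chip declares whether its apply path may sleep (e.g. because it sits behind an I2C bus), and the core enforces the rule once instead of every driver open-coding it. A hypothetical, userspace-sized sketch of that dispatch — none of these names exist in the kernel:

```c
/*
 * Hypothetical sketch of a core-side "might_sleep" check: chips behind
 * slow busses set might_sleep, and the core rejects callers that are
 * in atomic context for such chips, instead of each driver having to
 * enforce this itself.
 */
enum apply_ctx {
	CTX_CAN_SLEEP,	/* caller may block */
	CTX_ATOMIC,	/* caller must not block (e.g. pwm-ir-tx hot path) */
};

struct chip_caps_sketch {
	int might_sleep;	/* set for chips behind slow busses */
};

/* returns 0 on success, -1 if the combination is not allowed */
static int core_apply_sketch(const struct chip_caps_sketch *chip,
			     enum apply_ctx ctx)
{
	if (chip->might_sleep && ctx == CTX_ATOMIC)
		return -1;	/* a sleeping bus cannot serve atomic callers */
	/* ... otherwise the core would call the driver's apply hook ... */
	return 0;
}
```

This also matches the "we might not want sleep at all for fast PWMs" remark: a chip that never sleeps can be driven from atomic context, and only the slow ones need the heavyweight path.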
Hello Thierry,

On Fri, Oct 06, 2023 at 01:50:57PM +0200, Thierry Reding wrote:
> On Fri, Oct 06, 2023 at 12:41:02PM +0200, Uwe Kleine-König wrote:
> > [...]
> >
> > The device-managed APIs are not the problem here. If you allocate in .probe and free in .remove there is a problem. (And that's exactly what the device managed APIs do.)
>
> Heh... so you're saying that device-managed APIs are the problem here.

No, I'm just saying that replacing devm_kzalloc by kzalloc in .probe and free in .remove (and .probe's error path) doesn't help, because that's what devm_kzalloc does under the cover.

> Yes, without device-managed APIs you still need to make sure you don't free while the PWM devices are still in use, but at least you then have that option.

The big downside I see then is that each driver must cope to get the free at the right place. Using devm_pwmchip_alloc() puts that complexity into a single place in the core instead.

> My point was that with device-managed APIs, freeing memory at ->remove() comes built-in. Hence why it's confusing to see that this series keeps using device-managed APIs while claiming to address lifetime issues.

This is only part of the preparations. In the end there is no devm_kzalloc() any more. (And the cleanup handler of devm_pwmchip_alloc() calls put_device() which only triggers kfree() once all references are gone.)

> Single-chunk allocations can also lead to less cache locality, depending on the size of your allocations. For instance if you primarily use small driver-specific data structures, those may fit into the cache while a combined data structure that consists of the core structure plus driver-private data may need to be split across multiple cache lines, and you may not end up using something like the core structure a lot of the time.

I think this is wrong. Two memory chunks of size A and B are never better than a single memory chunk of size A+B for caching. Those small driver-specific data structures all live in the B part. For cache locality of this driver struct it doesn't matter if A is directly in front of it or somewhere else. But with these in a single chunk you might benefit from better caching when accessing both A and B, and you get simpler page handling and less memory management overhead.

> Anyway, I'm not sure PWM is the kind of subsystem where cache locality is something we need to be concerned about.

pwm-ir-tx is concerned about timing, and my customer controlling a motor wants little overhead, too. I don't have hard numbers, but I tend to invest that effort.

> My recollection is that you need to inject a 5 second sleep into an apply function in order to trigger this, which is not something you would usually do, or that someone could trigger by accident.
>
> Yes, it might be theoretically possible to run into this, but so far nobody has reported this to be an actual problem in practice.

It is theoretically possible, and in my book that's a good enough reason to fix it. A use-after-free is relevant for the security of the whole system, so nobody having told us they hit that problem isn't an excuse.

> Also, as you have mentioned before, other mechanisms should already take care of tearing things down properly. If that's not happening for some reason, that's probably what we should investigate. It'd probably be less intrusive than 100 patches with churn in every driver.

Let's do some work sharing: I continue with the lifetime stuff (that we need for the character device stuff anyway) and the character device code, and you check why device links don't work as expected? I cannot cope with every potential bug I notice, at least not always at the front of my todo list :-)

Best regards
Uwe