Message ID: 20181001102928.20533-1-vsementsov@virtuozzo.com
Series: fleecing-hook driver for backup
On 10/1/18 5:29 AM, Vladimir Sementsov-Ogievskiy wrote:
> v2 was "[RFC v2] new, node-graph-based fleecing and backup"
>
> Hi all!
>
> This series introduces the fleecing-hook driver. It's a filter node, which
> does the copy-before-write operation. Mirror uses a filter node for handling
> guest writes; let's move to a filter node (from write-notifiers) for
> backup too (patch 18).
>
> The proposed filter driver is complete and separate: it can be used
> standalone, as a fleecing provider (instead of backup(sync=none)).
> (Old-style fleecing based on backup(sync=none) is supported too;
> look at patch 16.)

I haven't had time to look at this series in any sort of depth yet, but it reminds me of a question I just ran into with my libvirt code:

What happens if we want to have two parallel clients both reading off different backup/fleece nodes at once? Right now, 'nbd-server-start' is hard-coded to at most one NBD server, and 'nbd-server-add' is hard-coded to adding an export to the one-and-only NBD server. But it would be a lot nicer if you could pick different ports for different clients (or even mix TCP and Unix sockets), so that independent backup jobs could operate in parallel via different NBD servers, both under control of the same qemu process, instead of the second client having to wait for the first client to disconnect so that the first NBD server can stop. In the meantime, you can be somewhat careful about which export names are exposed over NBD, but even with nbd-server-start using "tls-creds", all clients can see one another's exports via NBD_OPT_LIST, and you are relying on the clients being well-behaved, vs. the nicer ability to spawn multiple NBD servers, then control which exports are exposed over which servers, where distinct servers could even have different tls-creds.
To get to that point, we'd need to enhance nbd-server-start to return a server id, and allow nbd-server-add and friends to take an optional server-id parameter (for back-compat, if the server id is not provided, they operate on the first server).
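A hypothetical QMP exchange for such a multi-server extension might look like the following sketch. Note that the returned "id" field on nbd-server-start and the "server" parameter on nbd-server-add are assumptions for illustration only; neither exists in qemu as of this thread, and "fleece0" is a made-up node name:

```json
-> { "execute": "nbd-server-start",
     "arguments": { "addr": { "type": "inet",
                              "data": { "host": "::", "port": "10809" } } } }
<- { "return": { "id": "server0" } }

-> { "execute": "nbd-server-start",
     "arguments": { "addr": { "type": "unix",
                              "data": { "path": "/run/qemu-backup.sock" } },
                    "tls-creds": "tls1" } }
<- { "return": { "id": "server1" } }

-> { "execute": "nbd-server-add",
     "arguments": { "device": "fleece0", "server": "server1" } }
<- { "return": {} }
```

With something like this, each backup client could get its own server (and its own tls-creds), and NBD_OPT_LIST on one server would not reveal the other server's exports.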
02.10.2018 23:19, Eric Blake wrote:
> On 10/1/18 5:29 AM, Vladimir Sementsov-Ogievskiy wrote:
>> [...]
>
> What happens if we want to have two parallel clients both reading off
> different backup/fleece nodes at once? Right now, 'nbd-server-start'
> is hard-coded to at most one NBD server, and 'nbd-server-add' is
> hard-coded to adding an export to the one-and-only NBD server. [...]
> To get to that point, we'd need to enhance nbd-server-start to return
> a server id, and allow nbd-server-add and friends to take an optional
> parameter of a server id (for back-compat, if the server id is not
> provided, it operates on the first one).

Good thing. I don't see any problems from the block layer side: if we want to export the same fleecing node through several different servers, they can all share that one node. Moreover, with the new approach we could even set up several fleecing nodes for one active disk.. any benefits? For example, we could start a second external backup of the same disk while the first one is still in progress.. Not sure that's a real use case.
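For context, the fleecing node that such servers would share is set up roughly like this in the old-style (backup(sync=none)) scheme; this is a sketch assuming an existing active node "disk0", a running NBD server, and made-up names for the temporary image and job:

```json
-> { "execute": "blockdev-add",
     "arguments": { "driver": "qcow2", "node-name": "fleece0",
                    "file": { "driver": "file",
                              "filename": "/tmp/fleece.qcow2" },
                    "backing": "disk0" } }
<- { "return": {} }

-> { "execute": "blockdev-backup",
     "arguments": { "job-id": "fleece-job0", "device": "disk0",
                    "target": "fleece0", "sync": "none" } }
<- { "return": {} }

-> { "execute": "nbd-server-add",
     "arguments": { "device": "fleece0" } }
<- { "return": {} }
```

After this, "fleece0" presents a point-in-time view of "disk0", and any number of NBD servers could in principle export it read-only, since the copy-before-write job is the only writer.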
02.10.2018 23:19, Eric Blake wrote:
> [...] But it would be a lot nicer if you could pick different ports
> for different clients (or even mix TCP and Unix sockets), so that
> independent backup jobs can both operate in parallel via different
> NBD servers both under control of the same qemu process [...]
> To get to that point, we'd need to enhance nbd-server-start to return
> a server id, and allow nbd-server-add and friends to take an optional
> parameter of a server id (for back-compat, if the server id is not
> provided, it operates on the first one).

Hmm, about different ports: it's funny, but the NBD spec directly advises against using different ports with new-style negotiation:

    A client who wants to use the new style negotiation SHOULD connect on
    the IANA-reserved port for NBD, 10809. The server MAY listen on other
    ports as well, but it SHOULD use the old style handshake on those.

Also, the next sentence is strange too:

    The server SHOULD refuse to allow oldstyle negotiations on the
    newstyle port.

Refuse? Refuse to whom? It's the server that chooses the negotiation type. Or does this mean the server should refuse to start at all, so it refuses the server admin, not the NBD client? Sounds strange. Should it be "The server SHOULD NOT use the old style on port 10809"?