
Make socket paths and TMPDIR configurable?

Message ID 20170831140759.436kek42e27kqebj@MD1KR9XC.ww002.siemens.net
State Changes Requested
Delegated to: Stefano Babic
Series Make socket paths and TMPDIR configurable?

Commit Message

Storm, Christian Aug. 31, 2017, 2:07 p.m. UTC
Hi,

I'd like to see the various socket locations be configurable so that I
can put them into different directories and, e.g., give progress clients
access to the progress IPC socket only but not to, say, the control IPC
socket, including the ability to "see" the control socket at all.
Currently, they're all residing in /tmp/, hard-coded.

As a quick hack, consider something along the lines of the following:
```
```

This, however, breaks progress clients if they're compiled out-of-tree
without a proper include/generated/autoconf.h defining, e.g.,
CONFIG_SOCKET_CTRL_PATH. Hence, progress clients should get a command
line parameter -s /path/to/socket specifying the path to the progress
IPC socket (and a "sane" default as fallback which is the current
hard-coded location in /tmp). The same is true for the other sockets.
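For illustration, such a progress client could look roughly like the
following minimal sketch; the -s option, the SOCK_STREAM assumption and
the fallback path are choices made for this example only, not existing
SWUpdate code:
```
/*
 * Minimal sketch (not actual SWUpdate code) of a progress client that
 * takes "-s <path>" and falls back to the current hard-coded default.
 * Reading and decoding the progress messages is left out.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

#define DEFAULT_PROGRESS_PATH "/tmp/swupdateprog"	/* today's default */

int main(int argc, char **argv)
{
	const char *path = DEFAULT_PROGRESS_PATH;
	struct sockaddr_un addr;
	int opt, fd;

	/* -s /path/to/socket overrides the compiled-in default */
	while ((opt = getopt(argc, argv, "s:")) != -1) {
		if (opt == 's')
			path = optarg;
	}

	fd = socket(AF_UNIX, SOCK_STREAM, 0);
	if (fd < 0)
		return 1;
	memset(&addr, 0, sizeof(addr));
	addr.sun_family = AF_UNIX;
	strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
	if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
		fprintf(stderr, "cannot connect to %s\n", path);
		close(fd);
		return 1;
	}
	/* ... read progress messages from fd in a loop ... */
	close(fd);
	return 0;
}
```
Invoked without -s, such a client would behave exactly as it does today.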


Carrying things further, TMPDIR as defined in include/util.h:115 and
used in various locations may also be split according to the concern at
hand so that not all temporary files regardless of their concern reside
in /tmp.


What do you think about this?


As a side note, what do you think about including systemd's socket-based
activation as an optional feature? Then, if a process wants to talk to
the control IPC socket created by systemd, systemd spawns SWUpdate and
hands over the socket to SWUpdate...



Kind regards,
   Christian

Comments

Stefano Babic Aug. 31, 2017, 3:01 p.m. UTC | #1
Hi Christian,

On 31/08/2017 16:07, Christian Storm wrote:
> Hi,
> 
> I'd like to see the various socket locations be configurable so that I
> can put them into different directories and, e.g., give progress clients
> access to the progress IPC socket only but not to, say, the control IPC
> socket, including the ability to "see" the control socket at all.
> Currently, they're all residing in /tmp/, hard-coded.
Right, they are hard-coded.

> 
> As a quick hack, consider something along the lines of the following:
> ```
> --- a/Kconfig
> +++ b/Kconfig
> @@ -110,6 +110,25 @@ config SW_VERSIONS_FILE
>  	  but in some cases it can be required to do it. Having a check,
>  	  the risky-component is not always updated.
>  
> +config SOCKET_CTRL_PATH
> +	string "SWUpdate control socket path"
> +	default "/tmp/sockinstctrl"
> +	help
> +	  Path to SWUpdate's IPC socket.
> +
> +config SOCKET_PROGRESS_PATH
> +	string "SWUpdate progress socket path"
> +	default "/tmp/swupdateprog"
> +	help
> +	  Path to the socket progress information is sent to.
> +

The question is how to export this to the clients. We have an API, that is
the install interface (sockinstctrl) and progress. Both can be used by
external programs to drive SWUpdate. We could require that the client
interface "must" be linked into one's own software, but that remains a
strange situation. We should still guarantee that software built outside
of SWUpdate works.


> +config SOCKET_REMOTE_HANDLER_DIRECTORY
> +	string "SWUpdate remote handler socket directory"
> +	default "/tmp/"
> +	help
> +	  Directory where sockets to remote handler processes
> +	  are expected to be found.
> +
> 
> --- a/handlers/remote_handler.c
> +++ b/handlers/remote_handler.c
> @@ -167,7 +167,7 @@ static int install_remote_image(struct img_type *img,
>  	struct RHmsg RHmessage;
>  	char bufcmd[80];
>  
> -	len = strlen(img->type_data) + strlen(TMPDIR) + strlen("ipc://") + 4;
> +	len = strlen(img->type_data) + strlen(CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY) + strlen("ipc://") + 4;

This is more difficult because this interface allows binding an external
handler, maybe under a closed license.

>  
>  	/*
>  	 * Allocate maximum string
> @@ -177,7 +177,7 @@ static int install_remote_image(struct img_type *img,
>  		ERROR("Not enough memory");
>  		return -ENOMEM;
>  	}
> -	snprintf(connect_string, len, "ipc://%s%s", TMPDIR,
> +	snprintf(connect_string, len, "ipc://%s%s", CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY,
>  			img->type_data);
>  
>  	ret = zmq_connect(request, connect_string);
> 
> 
> --- a/include/network_ipc.h
> +++ b/include/network_ipc.h
> @@ -24,7 +24,7 @@
>  #include "swupdate_status.h"
>  
>  #define IPC_MAGIC		0x14052001
> -#define SOCKET_CTRL_PATH	"/tmp/sockinstctrl"
> +#define SOCKET_CTRL_PATH	CONFIG_SOCKET_CTRL_PATH
>  
>  typedef enum {
>  	REQ_INSTALL,
> 
> 
> --- a/include/progress.h
> +++ b/include/progress.h
> @@ -24,7 +24,7 @@
>  
>  #include <swupdate_status.h>
>  
> -#define SOCKET_PROGRESS_PATH "/tmp/swupdateprog"
> +#define SOCKET_PROGRESS_PATH CONFIG_SOCKET_PROGRESS_PATH
>  
>  /*
>   * Message sent via socket
> ```
> 
> This, however, breaks progress clients if they're compiled out-of-tree
> without a proper include/generated/autoconf.h defining, e.g.,

Right, but it is bad to export autoconf.h. We should find another way.

> CONFIG_SOCKET_CTRL_PATH. Hence, progress clients should get a command
> line parameter -s /path/to/socket specifying the path to the progress
> IPC socket (and a "sane" default as fallback which is the current
> hard-coded location in /tmp). The same is true for the other sockets.
> 
> 
> Carrying things further, TMPDIR as defined in include/util.h:115 and
> used in various locations may also be split according to the concern at
> hand so that not all temporary files regardless of their concern reside
> in /tmp.

ok

> 
> 
> What do you think about this?

Possible, but I do not know how to guarantee API consistency.

Another way is to put the socket paths into the configuration file instead
of defining them via CONFIG_. Then they are no longer hard-coded.

This can be read by the client, too - but clients would then also have to
link libconfig to parse it.
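For comparison, a rough sketch of what the libconfig route would add to
every client; the configuration file path and the setting name are
invented for this example, not an existing SWUpdate configuration layout:
```
/*
 * Sketch of the libconfig alternative: every client would need code like
 * this plus a link against libconfig. File path and setting name are
 * examples only.
 */
#include <stdio.h>
#include <libconfig.h>

int main(void)
{
	config_t cfg;
	const char *path = "/tmp/swupdateprog";	/* fallback default */
	const char *value;

	config_init(&cfg);
	if (config_read_file(&cfg, "/etc/swupdate.cfg") == CONFIG_TRUE &&
	    config_lookup_string(&cfg, "globals.progress_socket", &value) == CONFIG_TRUE)
		path = value;

	printf("progress socket: %s\n", path);
	config_destroy(&cfg);
	return 0;
}
```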

> 
> 
> As a side note, what do you think about including systemd's socket-based
> activation as an optional feature? Then, if a process wants to talk to
> the control IPC socket created by systemd, systemd spawns SWUpdate and
> hands over the socket to SWUpdate...

I do not know if that currently works. SWUpdate monitors the processes
listening on the interfaces, and that means it is the spawner. With
systemd, I understand that these processes would run independently of the
SWUpdate "core", which then no longer monitors them. That only works
with external processes using the client library. Or can you detail this
proposal further?

Regards,
Stefano
Storm, Christian Sept. 1, 2017, 9:04 a.m. UTC | #2
Hi Stefano,

> > I'd like to see the various socket locations be configurable so that I
> > can put them into different directories and, e.g., give progress clients
> > access to the progress IPC socket only but not to, say, the control IPC
> > socket, including the ability to "see" the control socket at all.
> > Currently, they're all residing in /tmp/, hard-coded.
> Right, they are hard-coded.
> 
> > 
> > As a quick hack, consider something along the lines of the following:
> > ```
> > --- a/Kconfig
> > +++ b/Kconfig
> > @@ -110,6 +110,25 @@ config SW_VERSIONS_FILE
> >  	  but in some cases it can be required to do it. Having a check,
> >  	  the risky-component is not always updated.
> >  
> > +config SOCKET_CTRL_PATH
> > +	string "SWUpdate control socket path"
> > +	default "/tmp/sockinstctrl"
> > +	help
> > +	  Path to SWUpdate's IPC socket.
> > +
> > +config SOCKET_PROGRESS_PATH
> > +	string "SWUpdate progress socket path"
> > +	default "/tmp/swupdateprog"
> > +	help
> > +	  Path to the socket progress information is sent to.
> > +
> 
> The question is how to export this to the clients. We have an API, that is
> the install interface (sockinstctrl) and progress. Both can be used by
> external programs to drive SWUpdate. We could require that the client
> interface "must" be linked into one's own software, but that remains a
> strange situation. We should still guarantee that software built outside
> of SWUpdate works.

Yes, I absolutely agree. It boils down to whether the *location* of the
socket is considered as part of the API or not.
If the answer is yes, then we shouldn't do it :)
If the answer is no or don't care, then we may as well introduce a
command line flag to the progress clients specifying the socket
location. If we're doing so, at least the progress socket doesn't have
to be co-located with the other sockets. And we may define a sane
default for the progress socket resembling the current location. So, it
won't break, but it gives you the possibility to place the progress
socket somewhere else. In that case, it's on you to configure the
software stack to work together, as you deviated from the default. If
you're sticking to the default, then everything works as it does now
because both SWUpdate and progress clients "agree" on the location by
convention, and this remains the default.


> > +config SOCKET_REMOTE_HANDLER_DIRECTORY
> > +	string "SWUpdate remote handler socket directory"
> > +	default "/tmp/"
> > +	help
> > +	  Directory where sockets to remote handler processes
> > +	  are expected to be found.
> > +
> > 
> > --- a/handlers/remote_handler.c
> > +++ b/handlers/remote_handler.c
> > @@ -167,7 +167,7 @@ static int install_remote_image(struct img_type *img,
> >  	struct RHmsg RHmessage;
> >  	char bufcmd[80];
> >  
> > -	len = strlen(img->type_data) + strlen(TMPDIR) + strlen("ipc://") + 4;
> > +	len = strlen(img->type_data) + strlen(CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY) + strlen("ipc://") + 4;
> 
> This is more difficult because this interface allows binding an external
> handler, maybe under a closed license.

Yes, but again having a sane default and the option to configure it
would be nice in my opinion. You have to tie the external handler to
SWUpdate by means of integration work anyway. Then, while integrating,
you're left with the choice to go with the default or choose a different
location. Or am I missing the point here?
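For illustration, the agreement boils down to the endpoint string; a
rough sketch of the external handler side, with an invented handler name
and the default /tmp/ directory:
```
/*
 * Rough sketch of the external remote handler side: it binds a REP socket
 * in the directory SWUpdate builds its "ipc://<dir><name>" endpoint from.
 * "myhandler" and the /tmp/ directory are example values.
 */
#include <stdio.h>
#include <zmq.h>

int main(void)
{
	void *ctx = zmq_ctx_new();
	void *rep = zmq_socket(ctx, ZMQ_REP);

	/* must match SWUpdate's configured directory plus the handler name */
	if (zmq_bind(rep, "ipc:///tmp/myhandler") != 0) {
		fprintf(stderr, "zmq_bind: %s\n", zmq_strerror(zmq_errno()));
		return 1;
	}
	/* ... receive image chunks from SWUpdate, reply with a status ... */
	zmq_close(rep);
	zmq_ctx_term(ctx);
	return 0;
}
```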


> >  	/*
> >  	 * Allocate maximum string
> > @@ -177,7 +177,7 @@ static int install_remote_image(struct img_type *img,
> >  		ERROR("Not enough memory");
> >  		return -ENOMEM;
> >  	}
> > -	snprintf(connect_string, len, "ipc://%s%s", TMPDIR,
> > +	snprintf(connect_string, len, "ipc://%s%s", CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY,
> >  			img->type_data);
> >  
> >  	ret = zmq_connect(request, connect_string);
> > 
> > 
> > --- a/include/network_ipc.h
> > +++ b/include/network_ipc.h
> > @@ -24,7 +24,7 @@
> >  #include "swupdate_status.h"
> >  
> >  #define IPC_MAGIC		0x14052001
> > -#define SOCKET_CTRL_PATH	"/tmp/sockinstctrl"
> > +#define SOCKET_CTRL_PATH	CONFIG_SOCKET_CTRL_PATH
> >  
> >  typedef enum {
> >  	REQ_INSTALL,
> > 
> > 
> > --- a/include/progress.h
> > +++ b/include/progress.h
> > @@ -24,7 +24,7 @@
> >  
> >  #include <swupdate_status.h>
> >  
> > -#define SOCKET_PROGRESS_PATH "/tmp/swupdateprog"
> > +#define SOCKET_PROGRESS_PATH CONFIG_SOCKET_PROGRESS_PATH
> >  
> >  /*
> >   * Message sent via socket
> > ```
> > 
> > This, however, breaks progress clients if they're compiled out-of-tree
> > without a proper include/generated/autoconf.h defining, e.g.,
> 
> Right, but it is bad to export autoconf.h. We should find another way.

Yes, I mentioned autoconf.h to illustrate where the #define is coming
from. I don't mean to export the whole SWUpdate config to progress
clients just for this one #define, this is indeed bad :)
As said, I'd like to have a sane default resembling the current
locations so as not to break things, but with the option to configure
them elsewhere - then, of course, the obligation of giving the
progress clients the right path is on my shoulders.


> > CONFIG_SOCKET_CTRL_PATH. Hence, progress clients should get a command
> > line parameter -s /path/to/socket specifying the path to the progress
> > IPC socket (and a "sane" default as fallback which is the current
> > hard-coded location in /tmp). The same is true for the other sockets.
> > 
> > 
> > Carrying things further, TMPDIR as defined in include/util.h:115 and
> > used in various locations may also be split according to the concern at
> > hand so that not all temporary files regardless of their concern reside
> > in /tmp.
> 
> ok
> 
> > 
> > 
> > What do you think about this?
> 
> Possible, but I do not know how to guarantee API consistency.
> 
> Another way is to put the socket paths into the configuration file instead
> of defining them via CONFIG_. Then they are no longer hard-coded.
> 
> This can be read by the client, too - but clients would then also have to
> link libconfig to parse it.

Yes, I think this makes them needlessly complicated. You have to
supply code for doing so to every progress client implementation. As
it's just the location, I think a command line flag specifying the
socket path and a sane default fall-back would do the trick. Besides,
the argumentation above applies here as well: You're supplying the full
SWUpdate configuration just for one parameter, which is, in my opinion,
a bit too much and the progress clients shouldn't be concerned with the
other configuration for SWUpdate.
API consistency is given if you're running the default configuration, as
nothing really changes. You still have the flexibility to do otherwise,
but then, of course, you have to do the integration work yourself. I
guess that's fair :)


> > As a side note, what do you think about including systemd's socket-based
> > activation as an optional feature? Then, if a process wants to talk to
> > the control IPC socket created by systemd, systemd spawns SWUpdate and
> > hands over the socket to SWUpdate...
> 
> I do not know if that currently works. SWUpdate monitors the processes
> listening on the interfaces, and that means it is the spawner. With
> systemd, I understand that these processes would run independently of the
> SWUpdate "core", which then no longer monitors them. That only works
> with external processes using the client library. Or can you detail this
> proposal further?

Well, this came to my mind while I was thinking about the socket
locations and I thought that systemd may as well provide the sockets to
SWUpdate, if you're using systemd as init, that is. If systemd is not
used as init, of course, the current behavior remains.
So, with the optional feature systemd enabled, there's a swupdate.socket
unit specifying the sockets and systemd creates those sockets if the
unit is enabled and started. Upon accessing a socket, systemd starts the
related swupdate.service unit that executes SWUpdate and hands over
the sockets systemd created to SWUpdate. Of course, you can start
swupdate.service right away to start SWUpdate without using socket-based
activation.
With this, SWUpdate still spawns and monitors the subprocesses but may
get started in two different ways: either by accessing a socket or
explicitly.
Given that systemd created the sockets, one has to configure the socket
locations in systemd parlance. If this configuration sticks to the
defaults, then nothing breaks. Otherwise, one has to give the socket
locations to the clients. For this, SWUpdate has to be adapted to spawn
those clients with the corresponding socket paths, or systemd is used for
spawning and monitoring the clients.
As systemd is very much capable of spawning and supervising the clients,
it may as well be used for this purpose on systems having systemd. For
systems running other, less capable init systems, I would use
SWUpdate's spawn-and-supervise capabilities.
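A rough sketch of the SWUpdate side of such socket activation, assuming
libsystemd; create_ctrl_socket_as_today() is a placeholder for the
current setup code, not an existing function:
```
/*
 * Sketch only: pick up a control socket handed over by systemd, or fall
 * back to creating it as SWUpdate does today when no fd was passed.
 */
#include <systemd/sd-daemon.h>

extern int create_ctrl_socket_as_today(void);	/* placeholder */

int get_ctrl_socket(void)
{
	int n = sd_listen_fds(0);	/* number of fds inherited from systemd */

	if (n > 0)
		return SD_LISTEN_FDS_START;	/* first passed fd */

	return create_ctrl_socket_as_today();	/* no socket activation */
}
```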



So, what do you think about all this?


Kind regards,
   Christian
Stefano Babic Sept. 1, 2017, 11:22 a.m. UTC | #3
Hi Christian,

On 01/09/2017 11:04, Christian Storm wrote:
> Hi Stefano,
> 
>>> I'd like to see the various socket locations be configurable so that I
>>> can put them into different directories and, e.g., give progress clients
>>> access to the progress IPC socket only but not to, say, the control IPC
>>> socket, including the ability to "see" the control socket at all.
>>> Currently, they're all residing in /tmp/, hard-coded.
>> Right, they are hard-coded.
>>
>>>
>>> As a quick hack, consider something along the lines of the following:
>>> ```
>>> --- a/Kconfig
>>> +++ b/Kconfig
>>> @@ -110,6 +110,25 @@ config SW_VERSIONS_FILE
>>>  	  but in some cases it can be required to do it. Having a check,
>>>  	  the risky-component is not always updated.
>>>  
>>> +config SOCKET_CTRL_PATH
>>> +	string "SWUpdate control socket path"
>>> +	default "/tmp/sockinstctrl"
>>> +	help
>>> +	  Path to SWUpdate's IPC socket.
>>> +
>>> +config SOCKET_PROGRESS_PATH
>>> +	string "SWUpdate progress socket path"
>>> +	default "/tmp/swupdateprog"
>>> +	help
>>> +	  Path to the socket progress information is sent to.
>>> +
>>
>> The question is how to export this to the clients. We have an API, that is
>> the install interface (sockinstctrl) and progress. Both can be used by
>> external programs to drive SWUpdate. We could require that the client
>> interface "must" be linked into one's own software, but that remains a
>> strange situation. We should still guarantee that software built outside
>> of SWUpdate works.
> 
> Yes, I absolutely agree. It boils down to whether the *location* of the
> socket is considered as part of the API or not.
> If the answer is yes, then we shouldn't do it :)
> If the answer is no or don't care, then we may as well introduce a
> command line flag to the progress clients specifying the socket
> location.

That seems to me a reasonable solution. I would avoid forcing clients
to link libconfig or whatever, and try to keep the interface simple.
With a command line parameter and a fallback to the default we already
reach the goal.

> If we're doing so, at least the progress socket doesn't have
> to be co-located with the other sockets. And we may define a sane
> default for the progress socket resembling the current location. So, it
> won't break, but it gives you the possibility to place the progress
> socket somewhere else. In that case, it's on you to configure the
> software stack to work together, as you deviated from the default. If
> you're sticking to the default, then everything works as it does now
> because both SWUpdate and progress clients "agree" on the location by
> convention, and this remains the default.

Fine with me.

> 
> 
>>> +config SOCKET_REMOTE_HANDLER_DIRECTORY
>>> +	string "SWUpdate remote handler socket directory"
>>> +	default "/tmp/"
>>> +	help
>>> +	  Directory where sockets to remote handler processes
>>> +	  are expected to be found.
>>> +
>>>
>>> --- a/handlers/remote_handler.c
>>> +++ b/handlers/remote_handler.c
>>> @@ -167,7 +167,7 @@ static int install_remote_image(struct img_type *img,
>>>  	struct RHmsg RHmessage;
>>>  	char bufcmd[80];
>>>  
>>> -	len = strlen(img->type_data) + strlen(TMPDIR) + strlen("ipc://") + 4;
>>> +	len = strlen(img->type_data) + strlen(CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY) + strlen("ipc://") + 4;
>>
>> This is more difficult because this interface allows binding an external
>> handler, maybe under a closed license.
> 
> Yes, but again having a sane default and the option to configure it
> would be nice in my opinion. You have to tie the external handler to
> SWUpdate by means of integration work anyway. Then, while integrating,
> you're left with the choice to go with the default or choose a different
> location. Or am I missing the point here?

No, it is ok.

> 
> 
>>>  	/*
>>>  	 * Allocate maximum string
>>> @@ -177,7 +177,7 @@ static int install_remote_image(struct img_type *img,
>>>  		ERROR("Not enough memory");
>>>  		return -ENOMEM;
>>>  	}
>>> -	snprintf(connect_string, len, "ipc://%s%s", TMPDIR,
>>> +	snprintf(connect_string, len, "ipc://%s%s", CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY,
>>>  			img->type_data);
>>>  
>>>  	ret = zmq_connect(request, connect_string);
>>>
>>>
>>> --- a/include/network_ipc.h
>>> +++ b/include/network_ipc.h
>>> @@ -24,7 +24,7 @@
>>>  #include "swupdate_status.h"
>>>  
>>>  #define IPC_MAGIC		0x14052001
>>> -#define SOCKET_CTRL_PATH	"/tmp/sockinstctrl"
>>> +#define SOCKET_CTRL_PATH	CONFIG_SOCKET_CTRL_PATH
>>>  
>>>  typedef enum {
>>>  	REQ_INSTALL,
>>>
>>>
>>> --- a/include/progress.h
>>> +++ b/include/progress.h
>>> @@ -24,7 +24,7 @@
>>>  
>>>  #include <swupdate_status.h>
>>>  
>>> -#define SOCKET_PROGRESS_PATH "/tmp/swupdateprog"
>>> +#define SOCKET_PROGRESS_PATH CONFIG_SOCKET_PROGRESS_PATH
>>>  
>>>  /*
>>>   * Message sent via socket
>>> ```
>>>
>>> This, however, breaks progress clients if they're compiled out-of-tree
>>> without a proper include/generated/autoconf.h defining, e.g.,
>>
>> Right, but it is bad to export autoconf.h. We should find another way.
> 
> Yes, I mentioned autoconf.h to illustrate where the #define is coming
> from. I don't mean to export the whole SWUpdate config to progress
> clients just for this one #define, this is indeed bad :)
> As said, I'd like to have a sane default resembling the current
> locations so as not to break things, but with the option to configure
> them elsewhere - then, of course, the obligation of giving the
> progress clients the right path is on my shoulders.

Ok, we agree. I think that for sockinstctrl we could even consider
exporting the client library, and this has the right configuration for
"sockinstctrl". IMHO this is better than simply exporting the socket,
because the internal API is not needed by the external installer.

> 
> 
>>> CONFIG_SOCKET_CTRL_PATH. Hence, progress clients should get a command
>>> line parameter -s /path/to/socket specifying the path to the progress
>>> IPC socket (and a "sane" default as fallback which is the current
>>> hard-coded location in /tmp). The same is true for the other sockets.
>>>
>>>
>>> Carrying things further, TMPDIR as defined in include/util.h:115 and
>>> used in various locations may also be split according to the concern at
>>> hand so that not all temporary files regardless of their concern reside
>>> in /tmp.
>>
>> ok
>>
>>>
>>>
>>> What do you think about this?
>>
>> Possible, but I do not know how to guarantee API consistency.
>>
>> Another way is to put the socket paths into the configuration file instead
>> of defining them via CONFIG_. Then they are no longer hard-coded.
>>
>> This can be read by the client, too - but clients would then also have to
>> link libconfig to parse it.
> 
> Yes, I think this makes them needlessly complicated.

And I would like to avoid this.

> You have to
> supply code for doing so to every progress client implementation. As
> it's just the location, I think a command line flag specifying the
> socket path and a sane default fall-back would do the trick.

Right, I agree.

> Besides,
> the argumentation above applies here as well: You're supplying the full
> SWUpdate configuration just for one parameter, which is, in my opinion,
> a bit too much and the progress clients shouldn't be concerned with the
> other configuration for SWUpdate.
> API consistency is given if you're running the default configuration, as
> nothing really changes. You still have the flexibility to do otherwise,
> but then, of course, you have to do the integration work yourself. I
> guess that's fair :)
> 
> 
>>> As a side note, what do you think about including systemd's socket-based
>>> activation as an optional feature? Then, if a process wants to talk to
>>> the control IPC socket created by systemd, systemd spawns SWUpdate and
>>> hands over the socket to SWUpdate...
>>
>> I do not know if that currently works. SWUpdate monitors the processes
>> listening on the interfaces, and that means it is the spawner. With
>> systemd, I understand that these processes would run independently of the
>> SWUpdate "core", which then no longer monitors them. That only works
>> with external processes using the client library. Or can you detail this
>> proposal further?
> 
> Well, this came to my mind while I was thinking about the socket
> locations and I thought that systemd may as well provide the sockets to
> SWUpdate, if you're using systemd as init, that is. If systemd is not
> used as init, of course, the current behavior remains.
> So, with the optional feature systemd enabled, there's a swupdate.socket
> unit specifying the sockets and systemd creates those sockets if the
> unit is enabled and started. Upon accessing a socket, systemd starts the
> related swupdate.service unit that executes SWUpdate and hands over
> the sockets systemd created to SWUpdate. Of course, you can start
> swupdate.service right away to start SWUpdate without using socket-based
> activation.
> With this, SWUpdate still spawns and monitors the subprocesses but may
> get started in two different ways: either by accessing a socket or
> explicitly.
> Given that systemd created the sockets, one has to configure the socket
> locations in systemd parlance. If this configuration sticks to the
> defaults, then nothing breaks. Otherwise, one has to give the socket
> locations to the clients. For this, SWUpdate has to be adapted to spawn
> those clients with the corresponding socket paths, or systemd is used for
> spawning and monitoring the clients.
> As systemd is very much capable of spawning and supervising the clients,
> it may as well be used for this purpose on systems having systemd. For
> systems running other, less capable init systems, I would use
> SWUpdate's spawn-and-supervise capabilities.
> 
> 
> 
> So, what do you think about all this?

Currently, I would try to avoid having different runtime behaviour.
If we have processes sometimes spawned by SWUpdate, sometimes by
systemd, we increase the number of constellations to test and the
possible race conditions. And internal processes use an anonymous
socket to communicate with the installer that is not visible from
outside.

Best regards,
Stefano

Patch

--- a/Kconfig
+++ b/Kconfig
@@ -110,6 +110,25 @@  config SW_VERSIONS_FILE
 	  but in some cases it can be required to do it. Having a check,
 	  the risky-component is not always updated.
 
+config SOCKET_CTRL_PATH
+	string "SWUpdate control socket path"
+	default "/tmp/sockinstctrl"
+	help
+	  Path to SWUpdate's IPC socket.
+
+config SOCKET_PROGRESS_PATH
+	string "SWUpdate progress socket path"
+	default "/tmp/swupdateprog"
+	help
+	  Path to the socket progress information is sent to.
+
+config SOCKET_REMOTE_HANDLER_DIRECTORY
+	string "SWUpdate remote handler socket directory"
+	default "/tmp/"
+	help
+	  Directory where sockets to remote handler processes
+	  are expected to be found.
+

--- a/handlers/remote_handler.c
+++ b/handlers/remote_handler.c
@@ -167,7 +167,7 @@  static int install_remote_image(struct img_type *img,
 	struct RHmsg RHmessage;
 	char bufcmd[80];
 
-	len = strlen(img->type_data) + strlen(TMPDIR) + strlen("ipc://") + 4;
+	len = strlen(img->type_data) + strlen(CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY) + strlen("ipc://") + 4;
 
 	/*
 	 * Allocate maximum string
@@ -177,7 +177,7 @@  static int install_remote_image(struct img_type *img,
 		ERROR("Not enough memory");
 		return -ENOMEM;
 	}
-	snprintf(connect_string, len, "ipc://%s%s", TMPDIR,
+	snprintf(connect_string, len, "ipc://%s%s", CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY,
 			img->type_data);
 
 	ret = zmq_connect(request, connect_string);


--- a/include/network_ipc.h
+++ b/include/network_ipc.h
@@ -24,7 +24,7 @@ 
 #include "swupdate_status.h"
 
 #define IPC_MAGIC		0x14052001
-#define SOCKET_CTRL_PATH	"/tmp/sockinstctrl"
+#define SOCKET_CTRL_PATH	CONFIG_SOCKET_CTRL_PATH
 
 typedef enum {
 	REQ_INSTALL,


--- a/include/progress.h
+++ b/include/progress.h
@@ -24,7 +24,7 @@ 
 
 #include <swupdate_status.h>
 
-#define SOCKET_PROGRESS_PATH "/tmp/swupdateprog"
+#define SOCKET_PROGRESS_PATH CONFIG_SOCKET_PROGRESS_PATH
 
 /*
  * Message sent via socket