
[v2,bpf-next,3/5] bpf: Introduce bpf_sk_{,ancestor_}cgroup_id helpers

Message ID c65795a13e69b7e4aa61a8e37aa340f2484f6c8a.1589405669.git.rdna@fb.com
State: Changes Requested
Delegated to: BPF Maintainers
Series: bpf: sk lookup, cgroup id helpers in cgroup skb

Commit Message

Andrey Ignatov May 13, 2020, 9:38 p.m. UTC
With the ability to look up sockets in cgroup skb programs, it becomes
useful to access the cgroup id of a retrieved socket so that policies
can be implemented based on the socket's origin cgroup.

For example, a container running in a cgroup can have a cgroup skb
ingress program that looks up the peer socket sending packets to a
process inside the container and decides whether those packets should
be allowed or denied based on the cgroup id of the peer.

More specifically, such an ingress program can implement the intra-host
policy "allow incoming packets only from this same container and not
from any other container on the same host" without relying on source IP
addresses, since containers quite often share the same IP address on
the host.

Introduce two new helpers for this use-case: bpf_sk_cgroup_id() and
bpf_sk_ancestor_cgroup_id().

These helpers are similar to the existing bpf_skb_{,ancestor_}cgroup_id
helpers, the only difference being that sk is used to get the cgroup id
instead of skb, and they share code with them.

See documentation in UAPI for more details.

Signed-off-by: Andrey Ignatov <rdna@fb.com>
---
 include/uapi/linux/bpf.h       | 35 +++++++++++++++++++-
 net/core/filter.c              | 60 +++++++++++++++++++++++++++++-----
 tools/include/uapi/linux/bpf.h | 35 +++++++++++++++++++-
 3 files changed, 119 insertions(+), 11 deletions(-)
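
For illustration only (this sketch is not part of the patch or its
selftests), a cgroup skb ingress program implementing the intra-host
policy described in the commit message could look roughly as follows.
It relies on bpf_sk_lookup_tcp() being available to cgroup skb programs
(earlier patches in this series), handles only TCP over IPv4, fails open
on parse errors, and uses a placeholder ALLOWED_CGROUP_ID; all names are
hypothetical.

/* Hypothetical sketch, not part of the patch: allow TCP/IPv4 ingress
 * traffic only when the sending (peer) socket on the same host belongs
 * to a given cgroup. ALLOWED_CGROUP_ID and all names are placeholders;
 * in practice the allowed id would come from a map populated by the
 * container manager.
 */
#include <linux/bpf.h>
#include <linux/in.h>
#include <linux/ip.h>
#include <linux/tcp.h>
#include <bpf/bpf_helpers.h>

#define ALLOWED_CGROUP_ID 12345ULL	/* placeholder cgroup v2 id */

SEC("cgroup_skb/ingress")
int ingress_same_container(struct __sk_buff *skb)
{
	struct bpf_sock_tuple tuple = {};
	struct bpf_sock *peer_sk;
	struct tcphdr tcph;
	struct iphdr iph;
	int allow = 1;

	/* cgroup skb programs see the packet starting at the IP header;
	 * fail open if the headers cannot be read or it is not TCP.
	 */
	if (bpf_skb_load_bytes(skb, 0, &iph, sizeof(iph)))
		return 1;
	if (iph.protocol != IPPROTO_TCP)
		return 1;
	if (bpf_skb_load_bytes(skb, iph.ihl * 4, &tcph, sizeof(tcph)))
		return 1;

	/* Swap src/dst so the lookup matches the peer (sending) socket
	 * on this host rather than the local receiving one.
	 */
	tuple.ipv4.saddr = iph.daddr;
	tuple.ipv4.daddr = iph.saddr;
	tuple.ipv4.sport = tcph.dest;
	tuple.ipv4.dport = tcph.source;

	peer_sk = bpf_sk_lookup_tcp(skb, &tuple, sizeof(tuple.ipv4),
				    BPF_F_CURRENT_NETNS, 0);
	if (!peer_sk)
		return 1;	/* peer not found on this host */

	if (bpf_sk_cgroup_id(peer_sk) != ALLOWED_CGROUP_ID)
		allow = 0;

	bpf_sk_release(peer_sk);
	return allow;
}

char _license[] SEC("license") = "GPL";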

Comments

Yonghong Song May 14, 2020, 3:16 p.m. UTC | #1
On 5/13/20 2:38 PM, Andrey Ignatov wrote:
> With having ability to lookup sockets in cgroup skb programs it becomes
> useful to access cgroup id of retrieved sockets so that policies can be
> implemented based on origin cgroup of such socket.
> 
> For example, a container running in a cgroup can have cgroup skb ingress
> program that can lookup peer socket that is sending packets to a process
> inside the container and decide whether those packets should be allowed
> or denied based on cgroup id of the peer.
> 
> More specifically such ingress program can implement intra-host policy
> "allow incoming packets only from this same container and not from any
> other container on same host" w/o relying on source IP addresses since
> quite often it can be the case that containers share same IP address on
> the host.
> 
> Introduce two new helpers for this use-case: bpf_sk_cgroup_id() and
> bpf_sk_ancestor_cgroup_id().
> 
> These helpers are similar to existing bpf_skb_{,ancestor_}cgroup_id
> helpers with the only difference that sk is used to get cgroup id
> instead of skb, and share code with them.
> 
> See documentation in UAPI for more details.
> 
> Signed-off-by: Andrey Ignatov <rdna@fb.com>

Ack with one nit below.
Acked-by: Yonghong Song <yhs@fb.com>

> ---
>   include/uapi/linux/bpf.h       | 35 +++++++++++++++++++-
>   net/core/filter.c              | 60 +++++++++++++++++++++++++++++-----
>   tools/include/uapi/linux/bpf.h | 35 +++++++++++++++++++-
>   3 files changed, 119 insertions(+), 11 deletions(-)
> 
> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> index bfb31c1be219..e3cbc2790cdf 100644
> --- a/include/uapi/linux/bpf.h
> +++ b/include/uapi/linux/bpf.h
> @@ -3121,6 +3121,37 @@ union bpf_attr {
>    * 		0 on success, or a negative error in case of failure:
>    *
>    *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
> + *
> + * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
> + *	Description
> + *		Return the cgroup v2 id of the socket *sk*.
> + *
> + *		*sk* must be a non-**NULL** pointer that was returned from
> + *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same

It should also include bpf_skc_lookup_tcp(), right?

> + *		as in **bpf_skb_cgroup_id**\ ().
> + *
> + *		This helper is available only if the kernel was compiled with
> + *		the **CONFIG_SOCK_CGROUP_DATA** configuration option.
> + *	Return
> + *		The id is returned or 0 in case the id could not be retrieved.
> + *
> + * u64 bpf_sk_ancestor_cgroup_id(struct bpf_sock *sk, int ancestor_level)
> + *	Description
> + *		Return id of cgroup v2 that is ancestor of cgroup associated
> + *		with the *sk* at the *ancestor_level*.  The root cgroup is at
> + *		*ancestor_level* zero and each step down the hierarchy
> + *		increments the level. If *ancestor_level* == level of cgroup
> + *		associated with *sk*, then return value will be same as that
> + *		of **bpf_sk_cgroup_id**\ ().
> + *
> + *		The helper is useful to implement policies based on cgroups
> + *		that are upper in hierarchy than immediate cgroup associated
> + *		with *sk*.
> + *
> + *		The format of returned id and helper limitations are same as in
> + *		**bpf_sk_cgroup_id**\ ().
> + *	Return
> + *		The id is returned or 0 in case the id could not be retrieved.
>    */
>   #define __BPF_FUNC_MAPPER(FN)		\
>   	FN(unspec),			\
> @@ -3250,7 +3281,9 @@ union bpf_attr {
>   	FN(sk_assign),			\
>   	FN(ktime_get_boot_ns),		\
>   	FN(seq_printf),			\
> -	FN(seq_write),
> +	FN(seq_write),			\
> +	FN(sk_cgroup_id),		\
> +	FN(sk_ancestor_cgroup_id),
>   
[...]
Andrey Ignatov May 14, 2020, 4:55 p.m. UTC | #2
Yonghong Song <yhs@fb.com> [Thu, 2020-05-14 08:16 -0700]:
> On 5/13/20 2:38 PM, Andrey Ignatov wrote:

> > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > index bfb31c1be219..e3cbc2790cdf 100644
> > --- a/include/uapi/linux/bpf.h
> > +++ b/include/uapi/linux/bpf.h
> > @@ -3121,6 +3121,37 @@ union bpf_attr {
> >    * 		0 on success, or a negative error in case of failure:
> >    *
> >    *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
> > + *
> > + * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
> > + *	Description
> > + *		Return the cgroup v2 id of the socket *sk*.
> > + *
> > + *		*sk* must be a non-**NULL** pointer that was returned from
> > + *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same
> 
> It should also include bpf_skc_lookup_tcp(), right?

From what I see it should not.

cgroup id is available from sk->sk_cgrp_data that is a field of `struct
sock', i.e. `struct sock_common' doesn't have this field.

bpf_skc_lookup_tcp() returns RET_PTR_TO_SOCK_COMMON_OR_NULL and it can
be for example `struct request_sock` that has only `struct sock_common`
member, i.e. it doesn't have cgroup id.
Yonghong Song May 14, 2020, 5:24 p.m. UTC | #3
On 5/14/20 9:55 AM, Andrey Ignatov wrote:
> Yonghong Song <yhs@fb.com> [Thu, 2020-05-14 08:16 -0700]:
>> On 5/13/20 2:38 PM, Andrey Ignatov wrote:
> 
>>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>>> index bfb31c1be219..e3cbc2790cdf 100644
>>> --- a/include/uapi/linux/bpf.h
>>> +++ b/include/uapi/linux/bpf.h
>>> @@ -3121,6 +3121,37 @@ union bpf_attr {
>>>     * 		0 on success, or a negative error in case of failure:
>>>     *
>>>     *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
>>> + *
>>> + * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
>>> + *	Description
>>> + *		Return the cgroup v2 id of the socket *sk*.
>>> + *
>>> + *		*sk* must be a non-**NULL** pointer that was returned from
>>> + *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same
>>
>> It should also include bpf_skc_lookup_tcp(), right?
> 
>  From what I see it should not.
> 
> cgroup id is available from sk->sk_cgrp_data that is a field of `struct
> sock', i.e. `struct sock_common' doesn't have this field.
> 
> bpf_skc_lookup_tcp() returns RET_PTR_TO_SOCK_COMMON_OR_NULL and it can
> be for example `struct request_sock` that has only `struct sock_common`
> member, i.e. it doesn't have cgroup id.

So you can do bpf_skc_lookup_tcp() and then do tcp_sock() to get a full
socket which will have cgroup_id. I think maybe this is the reason you
added bpf_skc_lookup_tcp() in patch #1, right?

If this is the case, maybe rewording a little bit for the description
to include bpf_skc_lookup_tcp() + bpf_tcp_sock() as another input
to bpf_sk_cgroup_id()?
Andrey Ignatov May 14, 2020, 6:01 p.m. UTC | #4
Yonghong Song <yhs@fb.com> [Thu, 2020-05-14 10:24 -0700]:
> 
> 
> On 5/14/20 9:55 AM, Andrey Ignatov wrote:
> > Yonghong Song <yhs@fb.com> [Thu, 2020-05-14 08:16 -0700]:
> > > On 5/13/20 2:38 PM, Andrey Ignatov wrote:
> > 
> > > > diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
> > > > index bfb31c1be219..e3cbc2790cdf 100644
> > > > --- a/include/uapi/linux/bpf.h
> > > > +++ b/include/uapi/linux/bpf.h
> > > > @@ -3121,6 +3121,37 @@ union bpf_attr {
> > > >     * 		0 on success, or a negative error in case of failure:
> > > >     *
> > > >     *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
> > > > + *
> > > > + * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
> > > > + *	Description
> > > > + *		Return the cgroup v2 id of the socket *sk*.
> > > > + *
> > > > + *		*sk* must be a non-**NULL** pointer that was returned from
> > > > + *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same
> > > 
> > > It should also include bpf_skc_lookup_tcp(), right?
> > 
> >  From what I see it should not.
> > 
> > cgroup id is available from sk->sk_cgrp_data that is a field of `struct
> > sock', i.e. `struct sock_common' doesn't have this field.
> > 
> > bpf_skc_lookup_tcp() returns RET_PTR_TO_SOCK_COMMON_OR_NULL and it can
> > be for example `struct request_sock` that has only `struct sock_common`
> > member, i.e. it doesn't have cgroup id.
> 
> So you can do bpf_skc_lookup_tcp() and then do tcp_sock() to get a full
> socket which will have cgroup_id. I think maybe this is the reason you
> added bpf_skc_lookup_tcp() in patch #1, right?
> 
> If this is the case, maybe rewording a little bit for the description
> to include bpf_skc_lookup_tcp() + bpf_tcp_sock() as another input
> to bpf_sk_cgroup_id()?

Yeah, this bpf_skc_lookup_tcp() + bpf_tcp_sock() combination should also
return a full socket that can be used with the helper.

bpf_sk_fullsock() is one more way to get it.

I'm not sure it's worth listing all possible ways to get full socket
since it's 1) easy to miss something; 2) easy to forget to update this
list if a new way to get full socket is being added.

What about rephrasing to highlight that it has to be full socket and
**bpf_sk_lookup_xxx**\ () is an example of getting it?

For example:

	*sk* must be a non-**NULL** pointer to full socket, e.g. one
	returned from **bpf_sk_lookup_xxx**\ () or
	**bpf_sk_fullsock**\ ().

Will it be better?
Yonghong Song May 14, 2020, 6:15 p.m. UTC | #5
On 5/14/20 11:01 AM, Andrey Ignatov wrote:
> Yonghong Song <yhs@fb.com> [Thu, 2020-05-14 10:24 -0700]:
>>
>>
>> On 5/14/20 9:55 AM, Andrey Ignatov wrote:
>>> Yonghong Song <yhs@fb.com> [Thu, 2020-05-14 08:16 -0700]:
>>>> On 5/13/20 2:38 PM, Andrey Ignatov wrote:
>>>
>>>>> diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
>>>>> index bfb31c1be219..e3cbc2790cdf 100644
>>>>> --- a/include/uapi/linux/bpf.h
>>>>> +++ b/include/uapi/linux/bpf.h
>>>>> @@ -3121,6 +3121,37 @@ union bpf_attr {
>>>>>      * 		0 on success, or a negative error in case of failure:
>>>>>      *
>>>>>      *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
>>>>> + *
>>>>> + * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
>>>>> + *	Description
>>>>> + *		Return the cgroup v2 id of the socket *sk*.
>>>>> + *
>>>>> + *		*sk* must be a non-**NULL** pointer that was returned from
>>>>> + *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same
>>>>
>>>> It should also include bpf_skc_lookup_tcp(), right?
>>>
>>>   From what I see it should not.
>>>
>>> cgroup id is available from sk->sk_cgrp_data that is a field of `struct
>>> sock', i.e. `struct sock_common' doesn't have this field.
>>>
>>> bpf_skc_lookup_tcp() returns RET_PTR_TO_SOCK_COMMON_OR_NULL and it can
>>> be for example `struct request_sock` that has only `struct sock_common`
>>> member, i.e. it doesn't have cgroup id.
>>
>> So you can do bpf_skc_lookup_tcp() and then do tcp_sock() to get a full
>> socket which will have cgroup_id. I think maybe this is the reason you
>> added bpf_skc_lookup_tcp() in patch #1, right?
>>
>> If this is the case, maybe rewording a little bit for the description
>> to include bpf_skc_lookup_tcp() + bpf_tcp_sock() as another input
>> to bpf_sk_cgroup_id()?
> 
> Yeah, this bpf_skc_lookup_tcp() + bpf_tcp_sock() combination should also
> return a full socket that can be used with the helper.
> 
> bpf_sk_fullsock() is one more way to get it.
> 
> I'm not sure it's worth listing all possible ways to get full socket
> since it's 1) easy to miss something; 2) easy to forget to update this
> list if a new way to get full socket is being added.
> 
> What about rephrasing to highlight that it has to be full socket and
> **bpf_sk_lookup_xxx**\ () is an example of getting it?
> 
> For example:
> 
> 	*sk* must be a non-**NULL** pointer to full socket, e.g. one
> 	returned from **bpf_sk_lookup_xxx**\ () or
> 	**bpf_sk_fullsock**\ ().
> 
> Will it be better?

This should be fine. Thanks!
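
To illustrate the point settled above (a hypothetical fragment, not part
of the patch), a program starting from bpf_skc_lookup_tcp() could obtain
a full socket with bpf_sk_fullsock() before calling the new helper. It
assumes the same includes and a prepared IPv4 tuple as in the earlier
sketch:

/* Hypothetical helper: given a prepared IPv4 bpf_sock_tuple, look up the
 * peer with bpf_skc_lookup_tcp() and return its cgroup v2 id, or 0 if it
 * cannot be determined.
 */
static __always_inline __u64
peer_cgroup_id(struct __sk_buff *skb, struct bpf_sock_tuple *tuple)
{
	struct bpf_sock *skc, *sk;
	__u64 cgroup_id = 0;

	skc = bpf_skc_lookup_tcp(skb, tuple, sizeof(tuple->ipv4),
				 BPF_F_CURRENT_NETNS, 0);
	if (!skc)
		return 0;

	/* The result may be a request or timewait socket without
	 * sk_cgrp_data; bpf_sk_fullsock() returns NULL in that case.
	 */
	sk = bpf_sk_fullsock(skc);
	if (sk)
		cgroup_id = bpf_sk_cgroup_id(sk);

	bpf_sk_release(skc);
	return cgroup_id;
}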

Patch

diff --git a/include/uapi/linux/bpf.h b/include/uapi/linux/bpf.h
index bfb31c1be219..e3cbc2790cdf 100644
--- a/include/uapi/linux/bpf.h
+++ b/include/uapi/linux/bpf.h
@@ -3121,6 +3121,37 @@  union bpf_attr {
  * 		0 on success, or a negative error in case of failure:
  *
  *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
+ *
+ * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
+ *	Description
+ *		Return the cgroup v2 id of the socket *sk*.
+ *
+ *		*sk* must be a non-**NULL** pointer that was returned from
+ *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same
+ *		as in **bpf_skb_cgroup_id**\ ().
+ *
+ *		This helper is available only if the kernel was compiled with
+ *		the **CONFIG_SOCK_CGROUP_DATA** configuration option.
+ *	Return
+ *		The id is returned or 0 in case the id could not be retrieved.
+ *
+ * u64 bpf_sk_ancestor_cgroup_id(struct bpf_sock *sk, int ancestor_level)
+ *	Description
+ *		Return id of cgroup v2 that is ancestor of cgroup associated
+ *		with the *sk* at the *ancestor_level*.  The root cgroup is at
+ *		*ancestor_level* zero and each step down the hierarchy
+ *		increments the level. If *ancestor_level* == level of cgroup
+ *		associated with *sk*, then return value will be same as that
+ *		of **bpf_sk_cgroup_id**\ ().
+ *
+ *		The helper is useful to implement policies based on cgroups
+ *		that are upper in hierarchy than immediate cgroup associated
+ *		with *sk*.
+ *
+ *		The format of returned id and helper limitations are same as in
+ *		**bpf_sk_cgroup_id**\ ().
+ *	Return
+ *		The id is returned or 0 in case the id could not be retrieved.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3250,7 +3281,9 @@  union bpf_attr {
 	FN(sk_assign),			\
 	FN(ktime_get_boot_ns),		\
 	FN(seq_printf),			\
-	FN(seq_write),
+	FN(seq_write),			\
+	FN(sk_cgroup_id),		\
+	FN(sk_ancestor_cgroup_id),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
diff --git a/net/core/filter.c b/net/core/filter.c
index f88df77d0ad4..648bbce74861 100644
--- a/net/core/filter.c
+++ b/net/core/filter.c
@@ -4003,16 +4003,22 @@  static const struct bpf_func_proto bpf_skb_under_cgroup_proto = {
 };
 
 #ifdef CONFIG_SOCK_CGROUP_DATA
+static inline u64 __bpf_sk_cgroup_id(struct sock *sk)
+{
+	struct cgroup *cgrp;
+
+	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
+	return cgroup_id(cgrp);
+}
+
 BPF_CALL_1(bpf_skb_cgroup_id, const struct sk_buff *, skb)
 {
 	struct sock *sk = skb_to_full_sk(skb);
-	struct cgroup *cgrp;
 
 	if (!sk || !sk_fullsock(sk))
 		return 0;
 
-	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
-	return cgroup_id(cgrp);
+	return __bpf_sk_cgroup_id(sk);
 }
 
 static const struct bpf_func_proto bpf_skb_cgroup_id_proto = {
@@ -4022,16 +4028,12 @@  static const struct bpf_func_proto bpf_skb_cgroup_id_proto = {
 	.arg1_type      = ARG_PTR_TO_CTX,
 };
 
-BPF_CALL_2(bpf_skb_ancestor_cgroup_id, const struct sk_buff *, skb, int,
-	   ancestor_level)
+static inline u64 __bpf_sk_ancestor_cgroup_id(struct sock *sk,
+					      int ancestor_level)
 {
-	struct sock *sk = skb_to_full_sk(skb);
 	struct cgroup *ancestor;
 	struct cgroup *cgrp;
 
-	if (!sk || !sk_fullsock(sk))
-		return 0;
-
 	cgrp = sock_cgroup_ptr(&sk->sk_cgrp_data);
 	ancestor = cgroup_ancestor(cgrp, ancestor_level);
 	if (!ancestor)
@@ -4040,6 +4042,17 @@  BPF_CALL_2(bpf_skb_ancestor_cgroup_id, const struct sk_buff *, skb, int,
 	return cgroup_id(ancestor);
 }
 
+BPF_CALL_2(bpf_skb_ancestor_cgroup_id, const struct sk_buff *, skb, int,
+	   ancestor_level)
+{
+	struct sock *sk = skb_to_full_sk(skb);
+
+	if (!sk || !sk_fullsock(sk))
+		return 0;
+
+	return __bpf_sk_ancestor_cgroup_id(sk, ancestor_level);
+}
+
 static const struct bpf_func_proto bpf_skb_ancestor_cgroup_id_proto = {
 	.func           = bpf_skb_ancestor_cgroup_id,
 	.gpl_only       = false,
@@ -4047,6 +4060,31 @@  static const struct bpf_func_proto bpf_skb_ancestor_cgroup_id_proto = {
 	.arg1_type      = ARG_PTR_TO_CTX,
 	.arg2_type      = ARG_ANYTHING,
 };
+
+BPF_CALL_1(bpf_sk_cgroup_id, struct sock *, sk)
+{
+	return __bpf_sk_cgroup_id(sk);
+}
+
+static const struct bpf_func_proto bpf_sk_cgroup_id_proto = {
+	.func           = bpf_sk_cgroup_id,
+	.gpl_only       = false,
+	.ret_type       = RET_INTEGER,
+	.arg1_type      = ARG_PTR_TO_SOCKET,
+};
+
+BPF_CALL_2(bpf_sk_ancestor_cgroup_id, struct sock *, sk, int, ancestor_level)
+{
+	return __bpf_sk_ancestor_cgroup_id(sk, ancestor_level);
+}
+
+static const struct bpf_func_proto bpf_sk_ancestor_cgroup_id_proto = {
+	.func           = bpf_sk_ancestor_cgroup_id,
+	.gpl_only       = false,
+	.ret_type       = RET_INTEGER,
+	.arg1_type      = ARG_PTR_TO_SOCKET,
+	.arg2_type      = ARG_ANYTHING,
+};
 #endif
 
 static unsigned long bpf_xdp_copy(void *dst_buff, const void *src_buff,
@@ -6159,6 +6197,10 @@  cg_skb_func_proto(enum bpf_func_id func_id, const struct bpf_prog *prog)
 		return &bpf_skb_cgroup_id_proto;
 	case BPF_FUNC_skb_ancestor_cgroup_id:
 		return &bpf_skb_ancestor_cgroup_id_proto;
+	case BPF_FUNC_sk_cgroup_id:
+		return &bpf_sk_cgroup_id_proto;
+	case BPF_FUNC_sk_ancestor_cgroup_id:
+		return &bpf_sk_ancestor_cgroup_id_proto;
 #endif
 #ifdef CONFIG_INET
 	case BPF_FUNC_sk_lookup_tcp:
diff --git a/tools/include/uapi/linux/bpf.h b/tools/include/uapi/linux/bpf.h
index bfb31c1be219..e3cbc2790cdf 100644
--- a/tools/include/uapi/linux/bpf.h
+++ b/tools/include/uapi/linux/bpf.h
@@ -3121,6 +3121,37 @@  union bpf_attr {
  * 		0 on success, or a negative error in case of failure:
  *
  *		**-EOVERFLOW** if an overflow happened: The same object will be tried again.
+ *
+ * u64 bpf_sk_cgroup_id(struct bpf_sock *sk)
+ *	Description
+ *		Return the cgroup v2 id of the socket *sk*.
+ *
+ *		*sk* must be a non-**NULL** pointer that was returned from
+ *		**bpf_sk_lookup_xxx**\ (). The format of returned id is same
+ *		as in **bpf_skb_cgroup_id**\ ().
+ *
+ *		This helper is available only if the kernel was compiled with
+ *		the **CONFIG_SOCK_CGROUP_DATA** configuration option.
+ *	Return
+ *		The id is returned or 0 in case the id could not be retrieved.
+ *
+ * u64 bpf_sk_ancestor_cgroup_id(struct bpf_sock *sk, int ancestor_level)
+ *	Description
+ *		Return id of cgroup v2 that is ancestor of cgroup associated
+ *		with the *sk* at the *ancestor_level*.  The root cgroup is at
+ *		*ancestor_level* zero and each step down the hierarchy
+ *		increments the level. If *ancestor_level* == level of cgroup
+ *		associated with *sk*, then return value will be same as that
+ *		of **bpf_sk_cgroup_id**\ ().
+ *
+ *		The helper is useful to implement policies based on cgroups
+ *		that are upper in hierarchy than immediate cgroup associated
+ *		with *sk*.
+ *
+ *		The format of returned id and helper limitations are same as in
+ *		**bpf_sk_cgroup_id**\ ().
+ *	Return
+ *		The id is returned or 0 in case the id could not be retrieved.
  */
 #define __BPF_FUNC_MAPPER(FN)		\
 	FN(unspec),			\
@@ -3250,7 +3281,9 @@  union bpf_attr {
 	FN(sk_assign),			\
 	FN(ktime_get_boot_ns),		\
 	FN(seq_printf),			\
-	FN(seq_write),
+	FN(seq_write),			\
+	FN(sk_cgroup_id),		\
+	FN(sk_ancestor_cgroup_id),
 
 /* integer value in 'imm' field of BPF_CALL instruction selects which helper
  * function eBPF program intends to call
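
As a final, hedged illustration of the ancestor_level semantics
documented above (placeholder names and ids, not part of the patch): if
the peer socket's cgroup is /sys/fs/cgroup/runtime/pods/pod-a, the root
cgroup is at level 0, "runtime" at level 1, "pods" at level 2 and
"pod-a" at level 3, so comparing at level 2 matches any pod under
"pods", whereas bpf_sk_cgroup_id() would only match pod-a itself:

/* Hypothetical fragment: peer_sk is a full socket obtained as in the
 * sketches above; PODS_CGROUP_ID is the placeholder id of
 * /sys/fs/cgroup/runtime/pods, i.e. ancestor level 2 of the peer's cgroup.
 */
if (bpf_sk_ancestor_cgroup_id(peer_sk, 2) != PODS_CGROUP_ID)
	allow = 0;	/* peer is outside the "pods" subtree */

/* With ancestor_level equal to the peer's own cgroup level (3 here),
 * the result equals bpf_sk_cgroup_id(peer_sk).
 */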