diff mbox

[net-next,v5] mpls: support for dead routes

Message ID 1448407342-61181-1-git-send-email-roopa@cumulusnetworks.com
State Changes Requested, archived
Delegated to: David Miller

Commit Message

Roopa Prabhu Nov. 24, 2015, 11:22 p.m. UTC
From: Roopa Prabhu <roopa@cumulusnetworks.com>

Adds support for RTNH_F_DEAD and RTNH_F_LINKDOWN flags on mpls
routes due to link events. Also adds code to ignore dead
routes during route selection.

Unlike ip routes, mpls routes are not deleted when they go dead.
This is the current mpls behaviour and this patch does not change
it. With this patch, however, routes are marked dead. Dead routes
are not notified to userspace (this is consistent with ipv4
routes).

dead routes:
-----------
$ip -f mpls route show
100
    nexthop as to 200 via inet 10.1.1.2  dev swp1
    nexthop as to 700 via inet 10.1.1.6  dev swp2

$ip link set dev swp1 down

$ip link show dev swp1
4: swp1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN mode
DEFAULT group default qlen 1000
    link/ether 00:02:00:00:00:01 brd ff:ff:ff:ff:ff:ff

$ip -f mpls route show
100
    nexthop as to 200 via inet 10.1.1.2  dev swp1 dead linkdown
    nexthop as to 700 via inet 10.1.1.6  dev swp2

linkdown routes:
----------------
$ip -f mpls route show
100
    nexthop as to 200 via inet 10.1.1.2  dev swp1
    nexthop as to 700 via inet 10.1.1.6  dev swp2

$ip link show dev swp1
4: swp1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
state UP mode DEFAULT group default qlen 1000
    link/ether 00:02:00:00:00:01 brd ff:ff:ff:ff:ff:ff

/* carrier goes down */
$ip link show dev swp1
4: swp1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc pfifo_fast
state DOWN mode DEFAULT group default qlen 1000
    link/ether 00:02:00:00:00:01 brd ff:ff:ff:ff:ff:ff

$ip -f mpls route show
100
    nexthop as to 200 via inet 10.1.1.2  dev swp1 linkdown
    nexthop as to 700 via inet 10.1.1.6  dev swp2

Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
---

RFC to v1:
        Addressed a few comments from Eric and Robert:
        - remove support for weighted nexthops
        - use rt_nhn_alive in the rt structure to keep a count of alive
        nexthops.
        What I have not done: sort nexthops on link events.
        I am not comfortable recreating or sorting nexthops on
        every carrier change. This leaves scope for optimizing in the
        future.

v1 to v2:
        Fix dead nexthop checks as suggested by Dave

v2 to v3:
        Fix duplicated argument reported by kbuild test robot

v3 to v4:
        - removed per route rt_flags and derive it from the nh_flags during dumps
        - use kmemdup to make a copy of the route during route updates
          due to link events

v4 to v5:
	- if kmemdup fails, modify the original route in place. This is a
	corner case; the only side effect is that, in the unlikely event
	of kmemdup failure, the changes will not be atomically visible
	to the datapath.
	- replace for_nexthops with change_nexthops in a bunch of places.
	- fix indent


 net/mpls/af_mpls.c  | 250 ++++++++++++++++++++++++++++++++++++++++++++--------
 net/mpls/internal.h |   2 +
 2 files changed, 215 insertions(+), 37 deletions(-)

Comments

Robert Shearman Nov. 25, 2015, 9:51 a.m. UTC | #1
On 24/11/15 23:22, Roopa Prabhu wrote:
> From: Roopa Prabhu <roopa@cumulusnetworks.com>
>
> Adds support for RTNH_F_DEAD and RTNH_F_LINKDOWN flags on mpls
> routes due to link events. Also adds code to ignore dead
> routes during route selection.
>
> Unlike ip routes, mpls routes are not deleted when the route goes
> dead. This is current mpls behaviour and this patch does not change
> that. With this patch however, routes will be marked dead.
> dead routes are not notified to userspace (this is consistent with ipv4
> routes).
>
...
>

Acked-by: Robert Shearman <rshearma@brocade.com>

> Signed-off-by: Roopa Prabhu <roopa@cumulusnetworks.com>
--
To unsubscribe from this list: send the line "unsubscribe netdev" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
David Miller Nov. 25, 2015, 4:18 p.m. UTC | #2
From: Roopa Prabhu <roopa@cumulusnetworks.com>
Date: Tue, 24 Nov 2015 15:22:22 -0800

> v4 -v5
> 	- if kmemdup fails, modify the original route in place. This is a
> 	corner case and only side effect is that in the remote case
> 	of kmemdup failure, the changes will not be atomically visible
> 	to datapath.

I really don't like this.

Either you need to make the changes appear atomic to the data path,
and you therefore must fail the operation if kmemdup() fails, or it
doesn't matter and you should just always change the route in-place.

As far as I can tell it can't possibly matter.  The alive counter is
never modified by the data path, it is only tested to make a multipath
decision.  Likewise it's rather harmless to send a frame or two via a
device currently going down.

But if you're convinced it matters, then it matters, and you can't
fake things when kmemdup() fails.  And in that case I would recommend
that you use a two pass algorithm, one pass allocates all of the
new routes, and the second fills them in, inserts them, and frees
the old ones.

That is the only way you can unwind and fail cleanly.

And oh yeah, that's right, you can't really fail this and make the
ifdown not proceed.

So you're stuck, right?

That's why this has to be designed in a way where memory allocations
are not necessary.  These notifiers aren't really designed to facilitate
situations that require resource acquisitions that can fail.

Roopa Prabhu Nov. 28, 2015, 6:15 a.m. UTC | #3
On 11/25/15, 8:18 AM, David Miller wrote:
> From: Roopa Prabhu <roopa@cumulusnetworks.com>
> Date: Tue, 24 Nov 2015 15:22:22 -0800
>
>> v4 -v5
>> 	- if kmemdup fails, modify the original route in place. This is a
>> 	corner case and only side effect is that in the remote case
>> 	of kmemdup failure, the changes will not be atomically visible
>> 	to datapath.
> I really don't like this.
>
> Either you need to make the changes appear atomic to the data path,
> and you therefore must fail the operation if kmemdup() fails, or it
> doesn't matter and you should just always change the route in-place.
>
> As far as I can tell it can't possibly matter.  The alive counter is
> never modified by the data path, it is only tested to make a multipath
> decision.  Likewise it's rather harmless to send a frame or two via a
> device currently going down.
>
> But if you're convinced it matters, then it matters, and you can't
> fake things when kmemdup() fails.  And in that case I would recommend
> that you use a two pass algorithm, one pass allocates all of the
> new routes, and the second fills them in, inserts them, and frees
> the old ones.
>
> That is the only way you can unwind and fail cleanly.
>
> And oh yeah, that's right, you can't really fail this and make the
> ifdown not proceed.
>
> So you're stuck, right?
>
> That's why this has to be designed in a way where memory allocations
> are not necessary.  These notifiers aren't really designed to facilitate
> situations that require resource acquisitions that can fail.
>
I get your point. I am not convinced that the atomic update matters much for the transient case.
I was trying to accommodate comments I got during the review, and it seemed ok to cover both
cases by optionally doing the atomic update when we can. But I get your position on this.

Robert Shearman Nov. 30, 2015, 10:32 p.m. UTC | #4
On 28/11/15 06:15, roopa wrote:
> On 11/25/15, 8:18 AM, David Miller wrote:
>> From: Roopa Prabhu <roopa@cumulusnetworks.com>
>> Date: Tue, 24 Nov 2015 15:22:22 -0800
>>
>>> v4 -v5
>>> 	- if kmemdup fails, modify the original route in place. This is a
>>> 	corner case and only side effect is that in the remote case
>>> 	of kmemdup failure, the changes will not be atomically visible
>>> 	to datapath.
>> I really don't like this.
>>
>> Either you need to make the changes appear atomic to the data path,
>> and you therefore must fail the operation if kmemdup() fails, or it
>> doesn't matter and you should just always change the route in-place.
>>
>> As far as I can tell it can't possibly matter.  The alive counter is
>> never modified by the data path, it is only tested to make a multipath
>> decision.  Likewise it's rather harmless to send a frame or two via a
>> device currently going down.
>>
>> But if you're convinced it matters, then it matters, and you can't
>> fake things when kmemdup() fails.  And in that case I would recommend
>> that you use a two pass algorithm, one pass allocates all of the
>> new routes, and the second fills them in, inserts them, and frees
>> the old ones.
>>
>> That is the only way you can unwind and fail cleanly.
>>
>> And oh yeah, that's right, you can't really fail this and make the
>> ifdown not proceed.
>>
>> So you're stuck, right?
>>
>> That's why this has to be designed in a way where memory allocations
>> are not necessary.  These notifiers aren't really designed to facilitate
>> situations that require resource acquisitions that can fail.
>>
> Get your point. I am not convinced that the atomic update matters much for the transient case.
>   I was trying to accommodate comments i got during the review and it seemed ok to cover both
> cases by optionally doing the atomic update when we can. But, I get your position on this.

My comment on atomic updates on v2 was referring to whether partial
updates could be seen by the forwarding code and, if they could, whether
it matters. I don't think it matters for nh_flags, but I think it does
for rt_nhn_alive.

Thanks,
Rob

Patch

diff --git a/net/mpls/af_mpls.c b/net/mpls/af_mpls.c
index c70d750..2248015 100644
--- a/net/mpls/af_mpls.c
+++ b/net/mpls/af_mpls.c
@@ -96,22 +96,15 @@  bool mpls_pkt_too_big(const struct sk_buff *skb, unsigned int mtu)
 }
 EXPORT_SYMBOL_GPL(mpls_pkt_too_big);
 
-static struct mpls_nh *mpls_select_multipath(struct mpls_route *rt,
-					     struct sk_buff *skb, bool bos)
+static u32 mpls_multipath_hash(struct mpls_route *rt,
+			       struct sk_buff *skb, bool bos)
 {
 	struct mpls_entry_decoded dec;
 	struct mpls_shim_hdr *hdr;
 	bool eli_seen = false;
 	int label_index;
-	int nh_index = 0;
 	u32 hash = 0;
 
-	/* No need to look further into packet if there's only
-	 * one path
-	 */
-	if (rt->rt_nhn == 1)
-		goto out;
-
 	for (label_index = 0; label_index < MAX_MP_SELECT_LABELS && !bos;
 	     label_index++) {
 		if (!pskb_may_pull(skb, sizeof(*hdr) * label_index))
@@ -165,7 +158,37 @@  static struct mpls_nh *mpls_select_multipath(struct mpls_route *rt,
 		}
 	}
 
-	nh_index = hash % rt->rt_nhn;
+	return hash;
+}
+
+static struct mpls_nh *mpls_select_multipath(struct mpls_route *rt,
+					     struct sk_buff *skb, bool bos)
+{
+	u32 hash = 0;
+	int nh_index = 0;
+	int n = 0;
+
+	/* No need to look further into packet if there's only
+	 * one path
+	 */
+	if (rt->rt_nhn == 1)
+		goto out;
+
+	if (rt->rt_nhn_alive <= 0)
+		return NULL;
+
+	hash = mpls_multipath_hash(rt, skb, bos);
+	nh_index = hash % rt->rt_nhn_alive;
+	if (rt->rt_nhn_alive == rt->rt_nhn)
+		goto out;
+	for_nexthops(rt) {
+		if (nh->nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN))
+			continue;
+		if (n == nh_index)
+			return nh;
+		n++;
+	} endfor_nexthops(rt);
+
 out:
 	return &rt->rt_nh[nh_index];
 }
@@ -354,17 +377,24 @@  struct mpls_route_config {
 	int			rc_mp_len;
 };
 
+static int mpls_route_alloc_size(int num_nh, u8 max_alen_aligned)
+{
+	struct mpls_route *rt;
+
+	return (ALIGN(sizeof(*rt) + num_nh * sizeof(*rt->rt_nh),
+		      VIA_ALEN_ALIGN) + num_nh * max_alen_aligned);
+}
+
 static struct mpls_route *mpls_rt_alloc(int num_nh, u8 max_alen)
 {
 	u8 max_alen_aligned = ALIGN(max_alen, VIA_ALEN_ALIGN);
 	struct mpls_route *rt;
 
-	rt = kzalloc(ALIGN(sizeof(*rt) + num_nh * sizeof(*rt->rt_nh),
-			   VIA_ALEN_ALIGN) +
-		     num_nh * max_alen_aligned,
+	rt = kzalloc(mpls_route_alloc_size(num_nh, max_alen_aligned),
 		     GFP_KERNEL);
 	if (rt) {
 		rt->rt_nhn = num_nh;
+		rt->rt_nhn_alive = num_nh;
 		rt->rt_max_alen = max_alen_aligned;
 	}
 
@@ -393,7 +423,8 @@  static void mpls_notify_route(struct net *net, unsigned index,
 
 static void mpls_route_update(struct net *net, unsigned index,
 			      struct mpls_route *new,
-			      const struct nl_info *info)
+			      const struct nl_info *info,
+			      bool notify)
 {
 	struct mpls_route __rcu **platform_label;
 	struct mpls_route *rt;
@@ -404,7 +435,8 @@  static void mpls_route_update(struct net *net, unsigned index,
 	rt = rtnl_dereference(platform_label[index]);
 	rcu_assign_pointer(platform_label[index], new);
 
-	mpls_notify_route(net, index, rt, new, info);
+	if (notify)
+		mpls_notify_route(net, index, rt, new, info);
 
 	/* If we removed a route free it now */
 	mpls_rt_free(rt);
@@ -536,6 +568,16 @@  static int mpls_nh_assign_dev(struct net *net, struct mpls_route *rt,
 
 	RCU_INIT_POINTER(nh->nh_dev, dev);
 
+	if (!(dev->flags & IFF_UP)) {
+		nh->nh_flags |= RTNH_F_DEAD;
+	} else {
+		unsigned int flags;
+
+		flags = dev_get_flags(dev);
+		if (!(flags & (IFF_RUNNING | IFF_LOWER_UP)))
+			nh->nh_flags |= RTNH_F_LINKDOWN;
+	}
+
 	return 0;
 
 errout:
@@ -570,6 +612,9 @@  static int mpls_nh_build_from_cfg(struct mpls_route_config *cfg,
 	if (err)
 		goto errout;
 
+	if (nh->nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN))
+		rt->rt_nhn_alive--;
+
 	return 0;
 
 errout:
@@ -577,8 +622,8 @@  errout:
 }
 
 static int mpls_nh_build(struct net *net, struct mpls_route *rt,
-			 struct mpls_nh *nh, int oif,
-			 struct nlattr *via, struct nlattr *newdst)
+			 struct mpls_nh *nh, int oif, struct nlattr *via,
+			 struct nlattr *newdst)
 {
 	int err = -ENOMEM;
 
@@ -681,11 +726,13 @@  static int mpls_nh_build_multi(struct mpls_route_config *cfg,
 			goto errout;
 
 		err = mpls_nh_build(cfg->rc_nlinfo.nl_net, rt, nh,
-				    rtnh->rtnh_ifindex, nla_via,
-				    nla_newdst);
+				    rtnh->rtnh_ifindex, nla_via, nla_newdst);
 		if (err)
 			goto errout;
 
+		if (nh->nh_flags & (RTNH_F_DEAD | RTNH_F_LINKDOWN))
+			rt->rt_nhn_alive--;
+
 		rtnh = rtnh_next(rtnh, &remaining);
 		nhs++;
 	} endfor_nexthops(rt);
@@ -764,7 +811,7 @@  static int mpls_route_add(struct mpls_route_config *cfg)
 	if (err)
 		goto freert;
 
-	mpls_route_update(net, index, rt, &cfg->rc_nlinfo);
+	mpls_route_update(net, index, rt, &cfg->rc_nlinfo, true);
 
 	return 0;
 
@@ -790,7 +837,7 @@  static int mpls_route_del(struct mpls_route_config *cfg)
 	if (index >= net->mpls.platform_labels)
 		goto errout;
 
-	mpls_route_update(net, index, NULL, &cfg->rc_nlinfo);
+	mpls_route_update(net, index, NULL, &cfg->rc_nlinfo, true);
 
 	err = 0;
 errout:
@@ -875,34 +922,114 @@  free:
 	return ERR_PTR(err);
 }
 
-static void mpls_ifdown(struct net_device *dev)
+static bool mpls_route_dev_exists(struct mpls_route *rt,
+				  struct net_device *dev)
+{
+	for_nexthops(rt) {
+		if (rtnl_dereference(nh->nh_dev) != dev)
+			continue;
+		return true;
+	} endfor_nexthops(rt);
+
+	return false;
+}
+
+static void mpls_ifdown(struct net_device *dev, int event)
 {
 	struct mpls_route __rcu **platform_label;
 	struct net *net = dev_net(dev);
-	struct mpls_dev *mdev;
+	struct mpls_route *rt_new;
 	unsigned index;
 
 	platform_label = rtnl_dereference(net->mpls.platform_label);
 	for (index = 0; index < net->mpls.platform_labels; index++) {
 		struct mpls_route *rt = rtnl_dereference(platform_label[index]);
+
 		if (!rt)
 			continue;
-		for_nexthops(rt) {
+
+		if (!mpls_route_dev_exists(rt, dev))
+			continue;
+
+		rt_new = kmemdup(rt, mpls_route_alloc_size(rt->rt_nhn,
+							   rt->rt_max_alen),
+				 GFP_KERNEL);
+		if (!rt_new) {
+			pr_warn("mpls_ifdown: kmemdup failed\n");
+			rt_new = rt;
+		}
+
+		change_nexthops(rt_new) {
 			if (rtnl_dereference(nh->nh_dev) != dev)
 				continue;
-			nh->nh_dev = NULL;
-		} endfor_nexthops(rt);
+			switch (event) {
+			case NETDEV_DOWN:
+			case NETDEV_UNREGISTER:
+				nh->nh_flags |= RTNH_F_DEAD;
+				/* fall through */
+			case NETDEV_CHANGE:
+				nh->nh_flags |= RTNH_F_LINKDOWN;
+				rt_new->rt_nhn_alive--;
+				break;
+			}
+			if (event == NETDEV_UNREGISTER)
+				RCU_INIT_POINTER(nh->nh_dev, NULL);
+		} endfor_nexthops(rt_new);
+
+		if (rt_new != rt)
+			mpls_route_update(net, index, rt_new, NULL, false);
 	}
 
-	mdev = mpls_dev_get(dev);
-	if (!mdev)
-		return;
+	return;
+}
+
+static void mpls_ifup(struct net_device *dev, unsigned int nh_flags)
+{
+	struct mpls_route __rcu **platform_label;
+	struct net *net = dev_net(dev);
+	struct mpls_route *rt_new;
+	unsigned index;
+	int alive;
+
+	platform_label = rtnl_dereference(net->mpls.platform_label);
+	for (index = 0; index < net->mpls.platform_labels; index++) {
+		struct mpls_route *rt = rtnl_dereference(platform_label[index]);
+
+		if (!rt)
+			continue;
+
+		if (!mpls_route_dev_exists(rt, dev))
+			continue;
 
-	mpls_dev_sysctl_unregister(mdev);
+		rt_new = kmemdup(rt, mpls_route_alloc_size(rt->rt_nhn,
+							   rt->rt_max_alen),
+				 GFP_KERNEL);
+		if (!rt_new) {
+			pr_warn("mpls_ifup: kmemdup failed\n");
+			rt_new = rt;
+		}
 
-	RCU_INIT_POINTER(dev->mpls_ptr, NULL);
+		alive = 0;
+		change_nexthops(rt_new) {
+			struct net_device *nh_dev =
+				rtnl_dereference(nh->nh_dev);
 
-	kfree_rcu(mdev, rcu);
+			if (!(nh->nh_flags & nh_flags)) {
+				alive++;
+				continue;
+			}
+			if (nh_dev != dev)
+				continue;
+			alive++;
+			nh->nh_flags &= ~nh_flags;
+		} endfor_nexthops(rt_new);
+
+		rt_new->rt_nhn_alive = alive;
+		if (rt_new != rt)
+			mpls_route_update(net, index, rt_new, NULL, false);
+	}
+
+	return;
 }
 
 static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
@@ -910,9 +1037,9 @@  static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
 {
 	struct net_device *dev = netdev_notifier_info_to_dev(ptr);
 	struct mpls_dev *mdev;
+	unsigned int flags;
 
-	switch(event) {
-	case NETDEV_REGISTER:
+	if (event == NETDEV_REGISTER) {
 		/* For now just support ethernet devices */
 		if ((dev->type == ARPHRD_ETHER) ||
 		    (dev->type == ARPHRD_LOOPBACK)) {
@@ -920,10 +1047,39 @@  static int mpls_dev_notify(struct notifier_block *this, unsigned long event,
 			if (IS_ERR(mdev))
 				return notifier_from_errno(PTR_ERR(mdev));
 		}
-		break;
+		return NOTIFY_OK;
+	}
 
+	mdev = mpls_dev_get(dev);
+	if (!mdev)
+		return NOTIFY_OK;
+
+	switch (event) {
+	case NETDEV_DOWN:
+		mpls_ifdown(dev, event);
+		break;
+	case NETDEV_UP:
+		flags = dev_get_flags(dev);
+		if (flags & (IFF_RUNNING | IFF_LOWER_UP))
+			mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN);
+		else
+			mpls_ifup(dev, RTNH_F_DEAD);
+		break;
+	case NETDEV_CHANGE:
+		flags = dev_get_flags(dev);
+		if (flags & (IFF_RUNNING | IFF_LOWER_UP))
+			mpls_ifup(dev, RTNH_F_DEAD | RTNH_F_LINKDOWN);
+		else
+			mpls_ifdown(dev, event);
+		break;
 	case NETDEV_UNREGISTER:
-		mpls_ifdown(dev);
+		mpls_ifdown(dev, event);
+		mdev = mpls_dev_get(dev);
+		if (mdev) {
+			mpls_dev_sysctl_unregister(mdev);
+			RCU_INIT_POINTER(dev->mpls_ptr, NULL);
+			kfree_rcu(mdev, rcu);
+		}
 		break;
 	case NETDEV_CHANGENAME:
 		mdev = mpls_dev_get(dev);
@@ -1237,9 +1393,15 @@  static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 		dev = rtnl_dereference(nh->nh_dev);
 		if (dev && nla_put_u32(skb, RTA_OIF, dev->ifindex))
 			goto nla_put_failure;
+		if (nh->nh_flags & RTNH_F_LINKDOWN)
+			rtm->rtm_flags |= RTNH_F_LINKDOWN;
+		if (nh->nh_flags & RTNH_F_DEAD)
+			rtm->rtm_flags |= RTNH_F_DEAD;
 	} else {
 		struct rtnexthop *rtnh;
 		struct nlattr *mp;
+		int dead = 0;
+		int linkdown = 0;
 
 		mp = nla_nest_start(skb, RTA_MULTIPATH);
 		if (!mp)
@@ -1253,6 +1415,15 @@  static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 			dev = rtnl_dereference(nh->nh_dev);
 			if (dev)
 				rtnh->rtnh_ifindex = dev->ifindex;
+			if (nh->nh_flags & RTNH_F_LINKDOWN) {
+				rtnh->rtnh_flags |= RTNH_F_LINKDOWN;
+				linkdown++;
+			}
+			if (nh->nh_flags & RTNH_F_DEAD) {
+				rtnh->rtnh_flags |= RTNH_F_DEAD;
+				dead++;
+			}
+
 			if (nh->nh_labels && nla_put_labels(skb, RTA_NEWDST,
 							    nh->nh_labels,
 							    nh->nh_label))
@@ -1266,6 +1437,11 @@  static int mpls_dump_route(struct sk_buff *skb, u32 portid, u32 seq, int event,
 			rtnh->rtnh_len = nlmsg_get_pos(skb) - (void *)rtnh;
 		} endfor_nexthops(rt);
 
+		if (linkdown == rt->rt_nhn)
+			rtm->rtm_flags |= RTNH_F_LINKDOWN;
+		if (dead == rt->rt_nhn)
+			rtm->rtm_flags |= RTNH_F_DEAD;
+
 		nla_nest_end(skb, mp);
 	}
 
@@ -1419,7 +1595,7 @@  static int resize_platform_label_table(struct net *net, size_t limit)
 
 	/* Free any labels beyond the new table */
 	for (index = limit; index < old_limit; index++)
-		mpls_route_update(net, index, NULL, NULL);
+		mpls_route_update(net, index, NULL, NULL, true);
 
 	/* Copy over the old labels */
 	cp_size = size;
diff --git a/net/mpls/internal.h b/net/mpls/internal.h
index bde52ce..732a5c1 100644
--- a/net/mpls/internal.h
+++ b/net/mpls/internal.h
@@ -41,6 +41,7 @@  enum mpls_payload_type {
 
 struct mpls_nh { /* next hop label forwarding entry */
 	struct net_device __rcu *nh_dev;
+	unsigned int		nh_flags;
 	u32			nh_label[MAX_NEW_LABELS];
 	u8			nh_labels;
 	u8			nh_via_alen;
@@ -74,6 +75,7 @@  struct mpls_route { /* next hop label forwarding entry */
 	u8			rt_payload_type;
 	u8			rt_max_alen;
 	unsigned int		rt_nhn;
+	unsigned int		rt_nhn_alive;
 	struct mpls_nh		rt_nh[0];
 };