author     Philippe Guibert <philippe.guibert@6wind.com>  2023-09-01 17:14:06 +0200
committer  Philippe Guibert <philippe.guibert@6wind.com>  2023-10-18 09:41:02 +0200
commit     d162d5f6f538e60385290fddf8ed256d2e15f628 (patch)
tree       3e3cf1151c8f207c2ac662d99d969140b73ee4b8 /bgpd/bgp_labelpool.c
parent     bgpd: rewrite 'bgp label vpn export' command (diff)
bgpd: fix hardset l3vpn label available in mpls pool
Today, when configuring BGP L3VPN MPLS, the operator may
use the following command to hardset a label value:
> router bgp 65500 vrf vrf1
> address-family ipv4 unicast
> label vpn export <hardset_label_value>
Today, BGP uses this value without any check, leading to potential
conflicts with other control planes like LDP. For instance, if
LDP starts with a label chunk of [16;72] and BGP also uses label
value 50, a conflict arises.
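
To make the conflict concrete, here is a small self-contained C
illustration (not FRR code; the struct and helper are simplified
stand-ins) showing the hardset value landing inside the chunk granted
to LDP:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Simplified stand-in for a label-manager chunk; not FRR's struct. */
struct label_chunk {
	uint32_t start; /* first label in the chunk */
	uint32_t end;   /* last label in the chunk */
};

/* A hardset label conflicts when it falls inside an allocated chunk. */
static bool label_conflicts(uint32_t label, const struct label_chunk *chunk)
{
	return label >= chunk->start && label <= chunk->end;
}

int main(void)
{
	struct label_chunk ldp_chunk = { 16, 72 }; /* chunk granted to LDP */
	uint32_t bgp_hardset = 50; /* 'label vpn export 50' */

	if (label_conflicts(bgp_hardset, &ldp_chunk))
		printf("label %u conflicts with LDP chunk [%u;%u]\n",
		       (unsigned)bgp_hardset, (unsigned)ldp_chunk.start,
		       (unsigned)ldp_chunk.end);
	return 0;
}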
The 'label manager' service in zebra oversees label allocations.
While all the control plane daemons use it, BGP doesn't when a
hardset label is in place.
This update fixes the problem: when a hardset label is configured for
l3vpn export, BGP now asks the label manager for approval, ensuring no
conflict with other daemons. As a consequence, some existing BGP
configurations may become non-operational if their hardset label falls
within a chunk already allocated to another daemon, even if that daemon
does not actually use the label. A sketch of the request flow follows.
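
Below is a minimal standalone sketch of the assumed request flow.
bgp_zebra_request_label_range() is stubbed here; in FRR it sends a
label-chunk request to zebra. The meaning of the new third boolean
argument and the MPLS_LABEL_BASE_ANY value are assumptions inferred
from the diff below: true is read as "the label manager may pick the
base", false as "reserve exactly the requested base" (the hardset case).

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MPLS_LABEL_BASE_ANY 0 /* assumed sentinel: no preferred base */

/* Stub: in FRR this sends a label-chunk request to zebra's label
 * manager, which refuses ranges overlapping another daemon's chunks. */
static bool bgp_zebra_request_label_range(uint32_t base, uint32_t chunk_size,
					  bool label_auto)
{
	printf("request base=%u size=%u auto=%d\n", (unsigned)base,
	       (unsigned)chunk_size, label_auto);
	return true; /* pretend zebra approved the request */
}

int main(void)
{
	/* Dynamic pool growth: any base, the label manager chooses. */
	if (!bgp_zebra_request_label_range(MPLS_LABEL_BASE_ANY, 128, true))
		return 1;

	/* Hardset 'label vpn export 50': ask for exactly that label. */
	if (!bgp_zebra_request_label_range(50, 1, false))
		return 1;

	return 0;
}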
note: Labels below 16 are reserved and won't be checked for consistency
by the label manager.
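
For illustration only, a guard matching that note might look like the
snippet below; MPLS_LABEL_UNRESERVED_MIN is defined locally to keep the
snippet standalone (labels 0-15 are the MPLS reserved range):

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define MPLS_LABEL_UNRESERVED_MIN 16 /* first non-reserved MPLS label */

/* Reserved labels (0-15) bypass the label-manager consistency check. */
static bool label_needs_lm_check(uint32_t label)
{
	return label >= MPLS_LABEL_UNRESERVED_MIN;
}

int main(void)
{
	printf("label 3 checked: %d\n", label_needs_lm_check(3));   /* 0 */
	printf("label 50 checked: %d\n", label_needs_lm_check(50)); /* 1 */
	return 0;
}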
Fixes: ddb5b4880ba8 ("bgpd: vpn-vrf route leaking")
Signed-off-by: Philippe Guibert <philippe.guibert@6wind.com>
Diffstat (limited to 'bgpd/bgp_labelpool.c')
-rw-r--r--  bgpd/bgp_labelpool.c  5
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/bgpd/bgp_labelpool.c b/bgpd/bgp_labelpool.c
index 883338610..d33f14ac4 100644
--- a/bgpd/bgp_labelpool.c
+++ b/bgpd/bgp_labelpool.c
@@ -448,7 +448,7 @@ void bgp_lp_get(
 	if (lp_fifo_count(&lp->requests) > lp->pending_count) {
 		if (!bgp_zebra_request_label_range(MPLS_LABEL_BASE_ANY,
-						   lp->next_chunksize))
+						   lp->next_chunksize, true))
 			return;

 		lp->pending_count += lp->next_chunksize;
@@ -650,7 +650,8 @@ void bgp_lp_event_zebra_up(void)
 	 */
 	list_delete_all_node(lp->chunks);

-	if (!bgp_zebra_request_label_range(MPLS_LABEL_BASE_ANY, labels_needed))
+	if (!bgp_zebra_request_label_range(MPLS_LABEL_BASE_ANY, labels_needed,
+					   true))
 		return;

 	lp->pending_count = labels_needed;
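
Note that both call sites in this file pass true as the new third
argument: the label pool's dynamic requests use MPLS_LABEL_BASE_ANY, so
the label manager remains free to choose the base. Presumably the
hardset l3vpn export path passes false to reserve the exact configured
label; that part of the change lives outside bgpd/bgp_labelpool.c and
is not shown in this diffstat-limited view.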