path: root/net
Commit message (Author, Age, Files, Lines)
* net/sched: taprio: add extack messages in taprio_init (Vladimir Oltean, 2022-09-20, 1 file, -2/+6)

    Stop contributing to the proverbial user unfriendliness of tc, and
    tell the user what is wrong wherever possible.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
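    For illustration, the pattern such a patch adds in taprio_init() looks
    like the sketch below (the condition and message text are
    representative, not necessarily quoted from the patch):

        static int taprio_init(struct Qdisc *sch, struct nlattr *opt,
                               struct netlink_ext_ack *extack)
        {
                struct net_device *dev = qdisc_dev(sch);

                /* ... */
                if (!netif_is_multiqueue(dev)) {
                        /* tell the user why, instead of a bare errno */
                        NL_SET_ERR_MSG(extack, "Multi-queue device is required for taprio");
                        return -EOPNOTSUPP;
                }
                /* ... */
                return 0;
        }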
* net/sched: taprio: stop going through private ops for dequeue and peek (Vladimir Oltean, 2022-09-20, 1 file, -46/+12)

    Since commit 13511704f8d7 ("net: taprio offload: enforce qdisc to
    netdev queue mapping"), taprio_dequeue_soft() and taprio_peek_soft()
    are de facto the only implementations for Qdisc_ops :: dequeue and
    Qdisc_ops :: peek that taprio provides.

    This is because in full offload mode, __dev_queue_xmit() will select a
    txq->qdisc which is never the root taprio qdisc. So if nothing is
    enqueued in the root qdisc, it will never be run and nothing will get
    dequeued from it.

    Therefore, we can remove the private indirection from taprio, and
    always point Qdisc_ops :: dequeue to taprio_dequeue_soft (now simply
    named taprio_dequeue) and Qdisc_ops :: peek to taprio_peek_soft (now
    simply named taprio_peek).

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
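    For illustration, the resulting ops wiring looks roughly like this
    (a sketch with the field list abridged, not the full patch):

        /* dequeue/peek now point straight at the former "soft"
         * implementations, with no per-mode indirection left in taprio. */
        static struct Qdisc_ops taprio_qdisc_ops __read_mostly = {
                .id      = "taprio",
                .dequeue = taprio_dequeue,   /* was taprio_dequeue_soft */
                .peek    = taprio_peek,      /* was taprio_peek_soft */
                /* ... remaining ops unchanged ... */
        };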
* net/sched: taprio: remove redundant FULL_OFFLOAD_IS_ENABLED check in taprio_enqueue (Vladimir Oltean, 2022-09-20, 1 file, -3/+3)

    Since commit 13511704f8d7 ("net: taprio offload: enforce qdisc to
    netdev queue mapping"), __dev_queue_xmit() will select a txq->qdisc
    for the full offload case of taprio which isn't the root taprio qdisc,
    so qdisc enqueues will never pass through taprio_enqueue().

    That commit already introduced one safety precaution check for
    FULL_OFFLOAD_IS_ENABLED(); a second one is really not needed, so
    simplify the conditional for entering into the GSO segmentation logic.
    Also reword the comment a little, to appear more natural after the
    code change.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* net/sched: taprio: use rtnl_dereference for oper and admin sched in taprio_destroy() (Vladimir Oltean, 2022-09-20, 1 file, -4/+8)

    Sparse complains that taprio_destroy() dereferences q->oper_sched and
    q->admin_sched without rcu_dereference(), since they are marked as
    __rcu in the taprio private structure.

        1671:28: warning: incorrect type in argument 1 (different address spaces)
        1671:28:    expected struct callback_head *head
        1671:28:    got struct callback_head [noderef] __rcu *
        1674:28: warning: incorrect type in argument 1 (different address spaces)
        1674:28:    expected struct callback_head *head
        1674:28:    got struct callback_head [noderef] __rcu *

    To silence that build warning, do actually use rtnl_dereference(),
    since we know the rtnl_mutex is held at the time of q->destroy().

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* net/sched: taprio: taprio_dump and taprio_change are protected by rtnl_mutex (Vladimir Oltean, 2022-09-20, 1 file, -10/+4)

    Since the writer-side lock is taken here, we do not need to open an
    RCU read-side critical section; instead, we can use rtnl_dereference()
    to tell lockdep we are serialized with concurrent writes.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* net/sched: taprio: taprio_offload_config_changed() is protected by rtnl_mutex (Vladimir Oltean, 2022-09-20, 1 file, -8/+2)

    The locking in taprio_offload_config_changed() is wrong (but also
    inconsequentially so). The current_entry_lock does not serialize
    changes to the admin and oper schedules, only to the current entry.

    In fact, the rtnl_mutex does that, and that is taken at the time when
    taprio_change() is called.

    Replace the rcu_dereference_protected() method with the proper RCU
    annotation, and drop the unnecessary spin lock.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* tcp: Introduce optional per-netns ehash. (Kuniyuki Iwashima, 2022-09-20, 6 files, -8/+128)

    The more sockets we have in the hash table, the longer we spend
    looking up the socket. While running a number of small workloads on
    the same host, they penalise each other and cause performance
    degradation.

    The root cause might be a single workload that consumes much more
    resources than the others. It often happens on a cloud service where
    different workloads share the same computing resource.

    On EC2 c5.24xlarge instance (196 GiB memory and 524288 (1Mi / 2) ehash
    entries), after running iperf3 in different netns, creating 24Mi
    sockets without data transfer in the root netns causes about 10%
    performance regression for the iperf3's connection.

        thash_entries    sockets    length    Gbps
               524288          1         1    50.7
                            24Mi        48    45.1

    It is basically related to the length of the list of each hash bucket.
    For testing purposes to see how performance drops along the length, I
    set 131072 (1Mi / 8) to thash_entries, and here's the result.

        thash_entries    sockets    length    Gbps
               131072          1         1    50.7
                             1Mi         8    49.9
                             2Mi        16    48.9
                             4Mi        32    47.3
                             8Mi        64    44.6
                            16Mi       128    40.6
                            24Mi       192    36.3
                            32Mi       256    32.5
                            40Mi       320    27.0
                            48Mi       384    25.0

    To resolve the socket lookup degradation, we introduce an optional
    per-netns hash table for TCP, but it's just ehash, and we still share
    the global bhash, bhash2 and lhash2. With a smaller ehash, we can look
    up non-listener sockets faster and isolate such noisy neighbours. In
    addition, we can reduce lock contention.

    We can control the ehash size by a new sysctl knob. However, depending
    on workloads, it will require very sensitive tuning, so we disable the
    feature by default (net.ipv4.tcp_child_ehash_entries == 0). Moreover,
    we can fall back to using the global ehash in case we fail to allocate
    enough memory for a new ehash. The maximum size is 16Mi, which is
    large enough that even if we have 48Mi sockets, the average list
    length is 3, and regression would be less than 1%.

    We can check the current ehash size by another read-only sysctl knob,
    net.ipv4.tcp_ehash_entries. A negative value means the netns shares
    the global ehash (per-netns ehash is disabled or failed to allocate
    memory).

        # dmesg | cut -d ' ' -f 5- | grep "established hash"
        TCP established hash table entries: 524288 (order: 10, 4194304 bytes, vmalloc hugepage)

        # sysctl net.ipv4.tcp_ehash_entries
        net.ipv4.tcp_ehash_entries = 524288  # can be changed by thash_entries

        # sysctl net.ipv4.tcp_child_ehash_entries
        net.ipv4.tcp_child_ehash_entries = 0  # disabled by default

        # ip netns add test1
        # ip netns exec test1 sysctl net.ipv4.tcp_ehash_entries
        net.ipv4.tcp_ehash_entries = -524288  # share the global ehash

        # sysctl -w net.ipv4.tcp_child_ehash_entries=100
        net.ipv4.tcp_child_ehash_entries = 100

        # ip netns add test2
        # ip netns exec test2 sysctl net.ipv4.tcp_ehash_entries
        net.ipv4.tcp_ehash_entries = 128  # own a per-netns ehash with 2^n buckets

    When more than two processes in the same netns create per-netns ehash
    concurrently with different sizes, we need to guarantee the size in
    one of the following ways (option 1 is sketched in code after this
    entry):

      1) Share the global ehash and create per-netns ehash

         First, unshare() with tcp_child_ehash_entries==0. It creates
         dedicated netns sysctl knobs where we can safely change
         tcp_child_ehash_entries and clone()/unshare() to create a
         per-netns ehash.

      2) Control write on sysctl by BPF

         We can use BPF_PROG_TYPE_CGROUP_SYSCTL to allow/deny read/write
         on sysctl knobs.

    Note that the global ehash allocated at the boot time is spread over
    available NUMA nodes, but inet_pernet_hashinfo_alloc() will allocate
    pages for each per-netns ehash depending on the current process's
    NUMA policy. By default, the allocation is done in the local node
    only, so the per-netns hash table could fully reside on a random node.
    Thus, depending on the NUMA policy the netns is created with and the
    CPU the current thread is running on, we could see some performance
    differences for highly optimised networking applications.

    Note also that the default values of two sysctl knobs depend on the
    ehash size and should be tuned carefully:

        tcp_max_tw_buckets  : tcp_child_ehash_entries / 2
        tcp_max_syn_backlog : max(128, tcp_child_ehash_entries / 128)

    As a bonus, we can dismantle netns faster. Currently, while destroying
    netns, we call inet_twsk_purge(), which walks through the global
    ehash. It can be potentially big because it can have many sockets
    other than TIME_WAIT in all netns. Splitting ehash changes that
    situation, where it's only necessary for inet_twsk_purge() to clean up
    TIME_WAIT sockets in each netns.

    With regard to this, we do not free the per-netns ehash in
    inet_twsk_kill() to avoid UAF while iterating the per-netns ehash in
    inet_twsk_purge(). Instead, we do it in tcp_sk_exit_batch() after
    calling tcp_twsk_purge() to keep it protocol-family-independent.

    In the future, we could optimise ehash lookup/iteration further by
    removing netns comparison for the per-netns ehash.

    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
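    For illustration, option 1 above could look like this minimal
    userspace sketch (the value 1024 is an arbitrary example; error
    handling trimmed; needs CAP_SYS_ADMIN):

        #define _GNU_SOURCE
        #include <fcntl.h>
        #include <sched.h>
        #include <unistd.h>

        int main(void)
        {
                int fd;

                /* enter a netns with dedicated sysctl knobs; the global
                 * ehash is still shared because the knob is 0 here */
                if (unshare(CLONE_NEWNET))
                        return 1;

                fd = open("/proc/sys/net/ipv4/tcp_child_ehash_entries",
                          O_WRONLY);
                if (fd < 0 || write(fd, "1024", 4) != 4)
                        return 1;
                close(fd);

                /* the next netns created from here allocates its own
                 * ehash, rounded up to a power of 2 */
                if (unshare(CLONE_NEWNET))
                        return 1;

                return 0;
        }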
* tcp: Save unnecessary inet_twsk_purge() calls. (Kuniyuki Iwashima, 2022-09-20, 3 files, -2/+17)

    While destroying netns, we call inet_twsk_purge() in
    tcp_sk_exit_batch() and tcpv6_net_exit_batch() for AF_INET and
    AF_INET6. These commands trigger the kernel to walk through the
    potentially big ehash twice even though the netns has no TIME_WAIT
    sockets:

        # ip netns add test
        # ip netns del test

    or

        # unshare -n /bin/true >/dev/null

    When tw_refcount is 1, we need not call inet_twsk_purge() at least for
    the net. We can save such unneeded iterations if all netns in
    net_exit_list have no TIME_WAIT sockets. This change eliminates the
    tax incurred by the additional unshare() described in the next patch
    to guarantee the per-netns ehash size.

    Tested:

        # mount -t debugfs none /sys/kernel/debug/
        # echo cleanup_net > /sys/kernel/debug/tracing/set_ftrace_filter
        # echo inet_twsk_purge >> /sys/kernel/debug/tracing/set_ftrace_filter
        # echo function > /sys/kernel/debug/tracing/current_tracer
        # cat ./add_del_unshare.sh
        for i in `seq 1 40`
        do
            (for j in `seq 1 100` ; do unshare -n /bin/true >/dev/null ; done) &
        done
        wait;
        # ./add_del_unshare.sh

    Before the patch:

        # cat /sys/kernel/debug/tracing/trace_pipe
        kworker/u128:0-8 [031] ...1. 174.162765: cleanup_net <-process_one_work
        kworker/u128:0-8 [031] ...1. 174.240796: inet_twsk_purge <-cleanup_net
        kworker/u128:0-8 [032] ...1. 174.244759: inet_twsk_purge <-tcp_sk_exit_batch
        kworker/u128:0-8 [034] ...1. 174.290861: cleanup_net <-process_one_work
        kworker/u128:0-8 [039] ...1. 175.245027: inet_twsk_purge <-cleanup_net
        kworker/u128:0-8 [046] ...1. 175.290541: inet_twsk_purge <-tcp_sk_exit_batch
        kworker/u128:0-8 [037] ...1. 175.321046: cleanup_net <-process_one_work
        kworker/u128:0-8 [024] ...1. 175.941633: inet_twsk_purge <-cleanup_net
        kworker/u128:0-8 [025] ...1. 176.242539: inet_twsk_purge <-tcp_sk_exit_batch

    After:

        # cat /sys/kernel/debug/tracing/trace_pipe
        kworker/u128:0-8 [038] ...1. 428.116174: cleanup_net <-process_one_work
        kworker/u128:0-8 [038] ...1. 428.262532: cleanup_net <-process_one_work
        kworker/u128:0-8 [030] ...1. 429.292645: cleanup_net <-process_one_work

    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
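    The idea, as a rough C sketch (not the verbatim patch; control flow
    simplified): walk the exiting batch first, and only pay for the ehash
    walk when at least one netns still holds TIME_WAIT sockets.

        static void tcp_sk_exit_batch(struct list_head *net_exit_list)
        {
                struct net *net;
                bool purge_needed = false;

                list_for_each_entry(net, net_exit_list, exit_list) {
                        /* tw_refcount stays at 1 for a netns without
                         * timewait sockets */
                        if (refcount_read(&net->ipv4.tcp_death_row.tw_refcount) > 1) {
                                purge_needed = true;
                                break;
                        }
                }

                if (purge_needed)
                        inet_twsk_purge(&tcp_hashinfo, AF_INET);
                /* ... */
        }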
* tcp: Access &tcp_hashinfo via net. (Kuniyuki Iwashima, 2022-09-20, 14 files, -71/+100)

    We will soon introduce an optional per-netns ehash. This means we
    cannot use tcp_hashinfo directly in most places. Instead, access it
    via net->ipv4.tcp_death_row.hashinfo.

    The access will be valid only while initialising tcp_hashinfo itself
    and creating/destroying each netns.

    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* tcp: Set NULL to sk->sk_prot->h.hashinfo. (Kuniyuki Iwashima, 2022-09-20, 5 files, -15/+14)

    We will soon introduce an optional per-netns ehash. This means we
    cannot use the global sk->sk_prot->h.hashinfo to fetch a TCP hashinfo.

    Instead, set sk->sk_prot->h.hashinfo to NULL for TCP and get a proper
    hashinfo from net->ipv4.tcp_death_row.hashinfo.

    Note that we need not use sk->sk_prot->h.hashinfo if DCCP is disabled.

    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* tcp: Don't allocate tcp_death_row outside of struct netns_ipv4. (Kuniyuki Iwashima, 2022-09-20, 6 files, -24/+13)

    We will soon introduce an optional per-netns ehash and access hash
    tables via net->ipv4.tcp_death_row->hashinfo instead of &tcp_hashinfo
    in most places. It could harm the fast path because dereferences of
    two fields in net and tcp_death_row might incur two extra cache line
    misses. To save one dereference, let's place tcp_death_row back in
    netns_ipv4 and fetch hashinfo via net->ipv4.tcp_death_row.hashinfo.

    Note tcp_death_row was initially placed in netns_ipv4, and commit
    fbb8295248e1 ("tcp: allocate tcp_death_row outside of struct
    netns_ipv4") changed it to a pointer so that we can fire TIME_WAIT
    timers after freeing net. However, we don't do so after commit
    04c494e68a13 ("Revert "tcp/dccp: get rid of inet_twsk_purge()""), so
    we need not define tcp_death_row as a pointer.

    Also, we move refcount_dec_and_test(&tw_refcount) from tcp_sk_exit()
    to tcp_sk_exit_batch() as a debug check.

    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* tcp: Clean up some functions. (Kuniyuki Iwashima, 2022-09-20, 5 files, -46/+45)

    This patch adds no functional change and cleans up some functions that
    the following patches touch around so that we make them tidy and easy
    to review/revert. The changes are:

      - Keep reverse christmas tree order
      - Remove unnecessary init of port in inet_csk_find_open_port()
      - Use req_to_sk() once in reqsk_queue_unlink()
      - Use sock_net(sk) once in tcp_time_wait() and tcp_v[46]_connect()

    Signed-off-by: Kuniyuki Iwashima <kuniyu@amazon.com>
    Reviewed-by: Eric Dumazet <edumazet@google.com>
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* net: rtnetlink: Enslave device before bringing it up (Phil Sutter, 2022-09-20, 1 file, -7/+7)

    Unlike with bridges, one can't add an interface to a bond and set it
    up at the same time:

        | # ip link set dummy0 down
        | # ip link set dummy0 master bond0 up
        | Error: Device can not be enslaved while up.

    Of all drivers with ndo_add_slave callback, bond and team decline if
    IFF_UP flag is set, vrf cycles the interface (i.e., sets it down and
    immediately up again) and the others just don't care.

    Support the common notion of setting the interface up after enslaving
    it by sorting the operations accordingly.

    Signed-off-by: Phil Sutter <phil@nwl.cc>
    Reviewed-by: David Ahern <dsahern@kernel.org>
    Link: https://lore.kernel.org/r/20220914150623.24152-1-phil@nwl.cc
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* seg6: add NEXT-C-SID support for SRv6 End behavior (Andrea Mayer, 2022-09-20, 1 file, -3/+332)

    The NEXT-C-SID mechanism described in [1] offers the possibility of
    encoding several SRv6 segments within a single 128 bit SID address.
    Such a SID address is called a Compressed SID (C-SID) container. In
    this way, the length of the SID List can be drastically reduced.

    A SID instantiated with the NEXT-C-SID flavor considers an IPv6
    address logically structured in three main blocks: i) Locator-Block;
    ii) Locator-Node Function; iii) Argument.

                               C-SID container
        +------------------------------------------------------------------+
        |          Locator-Block          |Loc-Node|        Argument        |
        |                                 |Function|                        |
        +------------------------------------------------------------------+
        <--------------- B --------------><- NF --><----------- A ---------->

      (i)   The Locator-Block can be any IPv6 prefix available to the
            provider;
      (ii)  The Locator-Node Function represents the node and the function
            to be triggered when a packet is received on the node;
      (iii) The Argument carries the remaining C-SIDs in the current C-SID
            container.

    The NEXT-C-SID mechanism relies on the "flavors" framework defined in
    [2]. The flavors represent additional operations that can modify or
    extend a subset of the existing behaviors.

    This patch introduces the support for flavors in SRv6 End behavior
    implementing the NEXT-C-SID one. An SRv6 End behavior with NEXT-C-SID
    flavor works as an End behavior but it is capable of processing the
    compressed SID List encoded in C-SID containers.

    An SRv6 End behavior with NEXT-C-SID flavor can be configured to
    support user-provided Locator-Block and Locator-Node Function lengths.
    In this implementation, such lengths must be evenly divisible by 8
    (i.e. must be byte-aligned), otherwise the kernel informs the user
    about invalid values with a meaningful error code and message through
    netlink_ext_ack.

    If Locator-Block and/or Locator-Node Function lengths are not provided
    by the user during configuration of an SRv6 End behavior instance with
    NEXT-C-SID flavor, the kernel will choose their default values, i.e.,
    32-bit Locator-Block and 16-bit Locator-Node Function.

    [1] - https://datatracker.ietf.org/doc/html/draft-ietf-spring-srv6-srh-compression
    [2] - https://datatracker.ietf.org/doc/html/rfc8986

    Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
    Reviewed-by: David Ahern <dsahern@kernel.org>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
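    For illustration, the byte-aligned shift performed by the NEXT-C-SID
    flavor can be sketched as follows (simplified; the real implementation
    in net/ipv6/seg6_local.c handles arguments and edge cases not shown):

        #include <string.h>

        /* Advance a 128-bit C-SID container in place: the next C-SID at
         * the front of the Argument slides left into the Locator-Node
         * Function position, and the freed tail is zero-filled. The
         * Locator-Block (first b_bytes) is untouched. */
        static void next_csid_advance(unsigned char sid[16],
                                      unsigned int b_bytes,
                                      unsigned int nf_bytes)
        {
                unsigned int arg_off = b_bytes + nf_bytes;

                memmove(sid + b_bytes, sid + arg_off, 16 - arg_off);
                memset(sid + 16 - nf_bytes, 0, nf_bytes);
        }

    With the default 32-bit Locator-Block and 16-bit Locator-Node
    Function, this would be next_csid_advance(sid, 4, 2).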
* seg6: add netlink_ext_ack support in parsing SRv6 behavior attributes (Andrea Mayer, 2022-09-20, 1 file, -16/+28)

    An SRv6 behavior instance can be set up using mandatory and/or
    optional attributes. In the setup phase, each supplied attribute is
    parsed and processed. If the parsing operation fails, the creation of
    the behavior instance stops and an error number/code is reported to
    the user.

    In many cases, it is challenging for the user to figure out exactly
    what happened by relying only on the error code. For this reason, we
    add support for netlink_ext_ack in parsing SRv6 behavior attributes.
    In this way, when an SRv6 behavior attribute is parsed and an error
    occurs, the kernel can send a message to userspace describing the
    error through a meaningful text message in addition to the classic
    error code.

    Signed-off-by: Andrea Mayer <andrea.mayer@uniroma2.it>
    Reviewed-by: David Ahern <dsahern@kernel.org>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net-next: gro: Fix use of skb_gro_header_slow (Richard Gobert, 2022-09-20, 1 file, -1/+1)

    In the cited commit, the function ipv6_gro_receive was accidentally
    changed to use skb_gro_header_slow, without attempting the fast path.
    Fix it.

    Fixes: 35ffb6654729 ("net: gro: skb_gro_header helper function")
    Signed-off-by: Richard Gobert <richardbgobert@gmail.com>
    Link: https://lore.kernel.org/r/20220911184835.GA105063@debian
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: allow masters to join a LAG (Vladimir Oltean, 2022-09-20, 5 files, -10/+298)

    There are 2 ways in which a DSA user port may become handled by 2 CPU
    ports in a LAG:

    (1) its current DSA master joins a LAG

            ip link del bond0 && ip link add bond0 type bond mode 802.3ad
            ip link set eno2 master bond0

        When this happens, all user ports with "eno2" as DSA master get
        automatically migrated to "bond0" as DSA master.

    (2) it is explicitly configured as such by the user

            # Before, the DSA master was eno3
            ip link set swp0 type dsa master bond0

    The design of this configuration is that the LAG device dynamically
    becomes a DSA master through dsa_master_setup() when the first
    physical DSA master becomes a LAG slave, and stops being so through
    dsa_master_teardown() when the last physical DSA master leaves.

    A LAG interface is considered as a valid DSA master only if it
    contains existing DSA masters, and no other lower interfaces.
    Therefore, we mainly rely on method (1) to enter this configuration.

    Each physical DSA master (LAG slave) retains its dev->dsa_ptr for when
    it becomes a standalone DSA master again. But the LAG master also has
    a dev->dsa_ptr, and this is actually duplicated from one of the
    physical LAG slaves, and therefore needs to be balanced when LAG
    slaves come and go.

    To the switch driver, putting DSA masters in a LAG is seen as putting
    their associated CPU ports in a LAG. We need to prepare cross-chip
    host FDB notifiers for CPU ports in a LAG, by calling the driver's
    ->lag_fdb_add method rather than ->port_fdb_add.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: propagate extack to port_lag_join (Vladimir Oltean, 2022-09-20, 3 files, -2/+4)

    Drivers could refuse to offload a LAG configuration for a variety of
    reasons, mainly having to do with its TX type. Additionally, since DSA
    masters may now also be LAG interfaces, which will translate into a
    call to port_lag_join on the CPU ports, there may be extra
    restrictions there. Propagate the netlink extack to this DSA method in
    order for drivers to give a meaningful error message back to the user.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: suppress device links to LAG DSA masters (Vladimir Oltean, 2022-09-20, 1 file, -6/+8)

    These don't work (print a harmless error about the operation failing)
    and make little sense to have anyway, because when a LAG DSA master
    goes away, we will introduce logic to move our CPU port back to the
    first physical DSA master. So suppress these device links in
    preparation for adding support for LAG DSA masters.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: suppress appending ethtool stats to LAG DSA masters (Vladimir Oltean, 2022-09-20, 1 file, -0/+9)

    Similar to the discussion about tracking the admin/oper state of LAG
    DSA masters, we have the problem here that struct dsa_port *cpu_dp
    caches a single pair of orig_ethtool_ops and netdev_ops pointers. So
    if we call dsa_master_setup(bond0, cpu_dp) where cpu_dp is also the
    dev->dsa_ptr of one of the physical DSA masters, we'd effectively
    overwrite what we cached from that physical netdev with the pointers
    taken from the bonding interface.

    We don't need DSA ethtool stats on the bonding interface when used as
    DSA master; it's good enough to have them just on the physical DSA
    masters, so suppress this logic.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: don't keep track of admin/oper state on LAG DSA masters (Vladimir Oltean, 2022-09-20, 1 file, -0/+12)

    We store information about the DSA master's state in
    cpu_dp->master_admin_up and cpu_dp->master_oper_up, and this assumes a
    bijective association between a CPU port and a DSA master.

    However, when we have CPU ports in a LAG (and DSA masters in a LAG
    too), the way in which we set up things is that the physical DSA
    masters still have dev->dsa_ptr pointing to our cpu_dp, but the
    bonding/team device itself also has its dev->dsa_ptr pointing towards
    one of the CPU port structures (the first one).

    So logically speaking, that first cpu_dp can't keep track of both the
    physical master's admin/oper state, and of the bonding master's state.

    This isn't even needed; the reason why we keep track of the DSA
    master's state is to know when it is available for Ethernet-based
    register access. For that use case, we don't even need LAG; we just
    need to decide upon one of the physical DSA masters (if there is more
    than 1 available) and use that.

    This change suppresses dsa_tree_master_{admin,oper}_state_change()
    calls on LAG DSA masters (which will be supported in a future change),
    to allow the tracking of just physical DSA masters.

    Link: https://lore.kernel.org/netdev/628cc94d.1c69fb81.15b0d.422d@mx.google.com/
    Suggested-by: Christian Marangi <ansuelsmth@gmail.com>
    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: allow the DSA master to be seen and changed through rtnetlink (Vladimir Oltean, 2022-09-20, 7 files, -1/+354)

    Some DSA switches have multiple CPU ports, which can be used to
    improve CPU termination throughput, but DSA, through
    dsa_tree_setup_cpu_ports(), sets up only the first one, leading to
    suboptimal use of hardware.

    The desire is to not change the default configuration but to permit
    the user to create a dynamic mapping between individual user ports and
    the CPU port that they are served by, configurable through rtnetlink.
    It is also intended to permit load balancing between CPU ports, and in
    that case, the foreseen model is for the DSA master to be a bonding
    interface whose lowers are the physical DSA masters.

    To that end, we create a struct rtnl_link_ops for DSA user ports with
    the "dsa" kind. We expose the IFLA_DSA_MASTER link attribute that
    contains the ifindex of the newly desired DSA master.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net: dsa: introduce dsa_port_get_master() (Vladimir Oltean, 2022-09-20, 5 files, -27/+26)

    There is a desire to support DSA masters in a LAG. That configuration
    is intended to work by simply enslaving the master to a bonding/team
    device. But the physical DSA master (the LAG slave) still has a
    dev->dsa_ptr, and that cpu_dp still corresponds to the physical CPU
    port.

    However, we would like to be able to retrieve the LAG that's the upper
    of the physical DSA master. In preparation for that, introduce a
    helper called dsa_port_get_master() that replaces all occurrences of
    the dp->cpu_dp->master pattern. The distinction between LAG and
    non-LAG will be made later within the helper itself.

    Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com>
    Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
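    For illustration, the helper's initial shape is essentially this
    sketch (the LAG-aware resolution is layered on top in later changes):

        /* single place to resolve a user port's master, replacing
         * open-coded dp->cpu_dp->master accesses */
        static inline struct net_device *dsa_port_get_master(struct dsa_port *dp)
        {
                return dp->cpu_dp->master;
        }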
* flow_offload: Introduce flow_match_l2tpv3 (Wojciech Drewek, 2022-09-20, 1 file, -0/+7)

    Allow offloading L2TPv3 filters by adding flow_rule_match_l2tpv3.
    Drivers can extract L2TPv3-specific fields from now on.

    Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
    Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* net/sched: flower: Add L2TPv3 filter (Wojciech Drewek, 2022-09-20, 1 file, -0/+16)

    Add support for matching on L2TPv3 session ID. Session ID can be
    specified only when ip proto was set to IPPROTO_L2TP.

    Example filter:

        # tc filter add dev $PF1 ingress prio 1 protocol ip \
            flower \
            ip_proto l2tp \
            l2tpv3_sid 1234 \
            skip_sw \
            action mirred egress redirect dev $VF1_PR

    Acked-by: Guillaume Nault <gnault@redhat.com>
    Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
    Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* flow_dissector: Add L2TPv3 dissectors (Wojciech Drewek, 2022-09-20, 1 file, -0/+28)

    Allow dissecting the L2TPv3-specific field, which is:
      - session ID (32 bits)

    L2TPv3 might be transported over IP or over UDP; this implementation
    only covers L2TPv3 over IP. The IP protocol carries L2TPv3 when
    ip_proto is IPPROTO_L2TP (115).

    Acked-by: Guillaume Nault <gnault@redhat.com>
    Signed-off-by: Wojciech Drewek <wojciech.drewek@intel.com>
    Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
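    The dissector key implied by the description above is just the 32-bit
    session ID, in network byte order (sketch):

        struct flow_dissector_key_l2tpv3 {
                __be32 session_id;
        };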
* openvswitch: Change the return type for vport_ops.send function hook to int (Nathan Huckleberry, 2022-09-20, 2 files, -2/+2)

    All usages of the vport_ops struct have the .send field set to
    dev_queue_xmit or internal_dev_recv. Since most usages are set to
    dev_queue_xmit, the function hook should match the signature of
    dev_queue_xmit.

    The only call to vport_ops->send() is in net/openvswitch/vport.c and
    it throws away the return value.

    This mismatched return type breaks forward-edge kCFI since the
    underlying function definition does not match the function hook
    definition.

    Reported-by: Dan Carpenter <error27@gmail.com>
    Link: https://github.com/ClangBuiltLinux/linux/issues/1703
    Cc: llvm@lists.linux.dev
    Signed-off-by: Nathan Huckleberry <nhuck@google.com>
    Reviewed-by: Nathan Chancellor <nathan@kernel.org>
    Acked-by: Eelco Chaudron <echaudro@redhat.com>
    Link: https://lore.kernel.org/r/20220913230739.228313-1-nhuck@google.com
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
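    For illustration, the hook now matches dev_queue_xmit()'s int return
    (sketch; other fields elided, previous return type assumed from the
    description above):

        struct vport_ops {
                /* ... */
                /* matches int dev_queue_xmit(struct sk_buff *skb), so
                 * indirect calls type-check under forward-edge kCFI */
                int (*send)(struct sk_buff *skb);
                /* ... */
        };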
* Merge tag 'batadv-next-pullrequest-20220916' of git://git.open-mesh.org/linux-merge (Jakub Kicinski, 2022-09-20, 4 files, -43/+1)

    Simon Wunderlich says:

    ====================
    This cleanup patchset includes the following patches:

      - bump version strings, by Simon Wunderlich
      - drop unused headers in trace.h, by Sven Eckelmann
      - drop initialization of flexible ethtool_link_ksettings, by Sven Eckelmann
      - remove unused struct definitions, by Marek Lindner

    * tag 'batadv-next-pullrequest-20220916' of git://git.open-mesh.org/linux-merge:
        batman-adv: remove unused struct definitions
        batman-adv: Drop initialization of flexible ethtool_link_ksettings
        batman-adv: Drop unused headers in trace.h
        batman-adv: Start new development cycle
    ====================

    Link: https://lore.kernel.org/r/20220916161454.1413154-1-sw@simonwunderlich.de
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
| * batman-adv: remove unused struct definitions (Marek Lindner, 2022-09-15, 1 file, -39/+0)

    Signed-off-by: Marek Lindner <mareklindner@neomailbox.ch>
    Signed-off-by: Sven Eckelmann <sven@narfation.org>
    Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
| * batman-adv: Drop initialization of flexible ethtool_link_ksettings (Sven Eckelmann, 2022-08-20, 1 file, -1/+0)

    The commit 94dfc73e7cf4 ("treewide: uapi: Replace zero-length arrays
    with flexible-array members") changed various structures from using
    0-length arrays to flexible arrays:

        net/batman-adv/bat_v_elp.c: note: in included file:
        ./include/linux/ethtool.h:148:38: warning: nested flexible array
        net/batman-adv/bat_v_elp.c:128:9: warning: using sizeof on a flexible structure

    In theory, this could be worked around by using {} as initializer for
    the variable on the stack. But this variable doesn't have to be
    initialized at all by the caller of __ethtool_get_link_ksettings -
    everything will be initialized by the callee when no error occurs.

    Signed-off-by: Sven Eckelmann <sven@narfation.org>
    Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
| * batman-adv: Drop unused headers in trace.h (Sven Eckelmann, 2022-08-17, 1 file, -2/+0)

    The commit 9abc291812d7 ("batman-adv: tracing: Use the new __vstring()
    helper") removed the usage of WARN_ON_ONCE and __dynamic_array in this
    file, but the headers were not adjusted accordingly (dropping the ones
    that are no longer used).

    Signed-off-by: Sven Eckelmann <sven@narfation.org>
    Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
| * batman-adv: Start new development cycle (Simon Wunderlich, 2022-08-17, 1 file, -1/+1)

    This version will contain all the (major or even only minor) changes
    for Linux 6.1.

    The version number isn't a semantic version number with major and
    minor information. It is just encoding the year of the expected
    publishing as Linux -rc1 and the number of published versions this
    year (starting at 0).

    Signed-off-by: Simon Wunderlich <sw@simonwunderlich.de>
* | net: rds: add missing __init/__exit annotations to module init/exit funcs (Xiu Jianfeng, 2022-09-20, 3 files, -4/+4)

    Add missing __init/__exit annotations to module init/exit funcs.

    Signed-off-by: Xiu Jianfeng <xiujianfeng@huawei.com>
    Link: https://lore.kernel.org/r/20220909091840.247946-1-xiujianfeng@huawei.com
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* | rxrpc: remove rxrpc_max_call_lifetime declaration (Gaosheng Cui, 2022-09-20, 1 file, -1/+0)

    rxrpc_max_call_lifetime has been removed since commit a158bdd3247b
    ("rxrpc: Fix call timeouts"), so remove it.

    Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com>
    Link: https://lore.kernel.org/r/20220909064042.1149404-1-cuigaosheng1@huawei.com
    Signed-off-by: Jakub Kicinski <kuba@kernel.org>
* | Merge tag 'linux-can-next-for-6.1-20220915' of git://git.kernel.org/pub/scm/linux/kernel/git/mkl/linux-can-next (David S. Miller, 2022-09-16, 6 files, -66/+111)

    Marc Kleine-Budde says:

    ====================
    Sept. 15, 2022, 8:19 a.m. UTC

    Hello Jakub, hello David,

    this is a pull request of 23 patches for net-next/master.

    The first 2 patches are by me and fix a typo in the rx-offload helper
    and the flexcan driver.

    Christophe JAILLET's patch cleans up the error handling in the
    rcar_canfd driver's probe function.

    Kenneth Lee's patch converts the kvaser_usb driver from kcalloc() to
    kzalloc().

    Biju Das contributes 2 patches to the sja1000 driver which update the
    DT bindings and support for the RZ/N1 SJA1000 CAN controller.

    Jinpeng Cui provides 2 patches that remove redundant variables from
    the sja1000 and kvaser_pciefd driver.

    2 patches by John Whittington and me add hardware timestamp support
    to the gs_usb driver.

    Gustavo A. R. Silva's patch converts the etas_es58x driver to make
    use of DECLARE_FLEX_ARRAY().

    Krzysztof Kozlowski's patch cleans up the sja1000 DT bindings.

    Dario Binacchi fixes his invalid email in the flexcan driver
    documentation.

    Ziyang Xuan contributes 2 patches that clean up the CAN RAW protocol.

    Yang Yingliang's patch switches the flexcan driver to dev_err_probe().

    The last 7 patches are by Oliver Hartkopp and add support for the next
    generation of the CAN protocol: CAN with eXtended data Length (CAN XL).
    ====================

    Signed-off-by: David S. Miller <davem@davemloft.net>
| * | can: raw: add CAN XL support (Oliver Hartkopp, 2022-09-15, 1 file, -12/+43)

    Enable CAN_RAW sockets to read and write CAN XL frames analogue to the
    CAN FD extension (new CAN_RAW_XL_FRAMES sockopt).

    A CAN XL network interface is capable of handling Classical CAN, CAN
    FD and CAN XL frames. When CAN_RAW_XL_FRAMES is enabled, the CAN_RAW
    socket checks whether the addressed CAN network interface is capable
    of handling the provided CAN frame.

    In opposite to the fixed number of bytes for
      - CAN frames (CAN_MTU = sizeof(struct can_frame))
      - CAN FD frames (CANFD_MTU = sizeof(struct canfd_frame))
    the number of bytes when reading/writing CAN XL frames depends on the
    number of data bytes.

    For efficiency reasons the length of the struct canxl_frame is
    truncated to the needed size for read/write operations. This leads to
    a calculated size of CANXL_HDR_SIZE + canxl_frame::len which is
    enforced on write() operations and guaranteed on read() operations.

    NB: Valid length values are 1 .. 2048 (CANXL_MIN_DLEN .. CANXL_MAX_DLEN).

    Acked-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
    Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
    Link: https://lore.kernel.org/all/20220912170725.120748-8-socketcan@hartkopp.net
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
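    For illustration, a userspace sketch of the new sockopt and the
    truncated write described above (interface name and payload are
    arbitrary examples; error handling abridged):

        #include <linux/can.h>
        #include <linux/can/raw.h>
        #include <net/if.h>
        #include <string.h>
        #include <sys/ioctl.h>
        #include <sys/socket.h>
        #include <unistd.h>

        int send_xl(const char *ifname)
        {
                int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);
                int on = 1;
                struct ifreq ifr = {0};
                struct sockaddr_can addr = { .can_family = AF_CAN };
                struct canxl_frame cfx = {
                        .flags = CANXL_XLF,  /* mandatory for CAN XL */
                        .len   = 128,        /* 1 .. CANXL_MAX_DLEN (2048) */
                };

                if (s < 0)
                        return -1;
                if (setsockopt(s, SOL_CAN_RAW, CAN_RAW_XL_FRAMES,
                               &on, sizeof(on)))
                        return -1;

                strncpy(ifr.ifr_name, ifname, IFNAMSIZ - 1);
                ioctl(s, SIOCGIFINDEX, &ifr);
                addr.can_ifindex = ifr.ifr_ifindex;
                if (bind(s, (struct sockaddr *)&addr, sizeof(addr)))
                        return -1;

                memset(cfx.data, 0xAA, cfx.len);
                /* write length is CANXL_HDR_SIZE + cfx.len, not sizeof(cfx) */
                return write(s, &cfx, CANXL_HDR_SIZE + cfx.len) < 0 ? -1 : 0;
        }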
| * | can: canxl: update CAN infrastructure for CAN XL frames (Oliver Hartkopp, 2022-09-15, 1 file, -1/+24)

    - add new ETH_P_CANXL ethernet protocol type
    - update skb checks for CAN XL
    - add alloc_canxl_skb() which now needs a data length parameter
    - introduce init_can_skb_reserve() to reduce code duplication

    Acked-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
    Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
    Link: https://lore.kernel.org/all/20220912170725.120748-6-socketcan@hartkopp.net
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
| * | can: set CANFD_FDF flag in all CAN FD frame structures (Oliver Hartkopp, 2022-09-15, 1 file, -0/+5)

    To simplify the testing in user space, all struct canfd_frame's
    provided by the CAN subsystem of the Linux kernel now have the
    CANFD_FDF flag set in canfd_frame::flags.

    NB: Handcrafted ETH_P_CANFD frames introduced via PF_PACKET socket
    might not set this bit correctly. During the check for sufficient
    headroom in PF_PACKET sk_buffs the uninitialized CAN sk_buff data
    structures are filled. In the case of a CAN FD frame the CANFD_FDF
    flag is set accordingly.

    As the CAN frame content is already zero initialized in
    alloc_canfd_skb(), the obsolete initialization of cf->flags in the CTU
    CAN FD driver has been removed as it would overwrite the already set
    CANFD_FDF flag.

    Acked-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
    Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
    Link: https://lore.kernel.org/all/20220912170725.120748-4-socketcan@hartkopp.net
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
| * | can: skb: unify skb CAN frame identification helpers (Oliver Hartkopp, 2022-09-15, 5 files, -45/+24)

    Replace open coded checks for sk_buffs containing Classical CAN and
    CAN FD frame structures as a preparation for CAN XL support. With the
    added length check, the unintended processing of CAN XL frames having
    the CANXL_XLF bit set can be suppressed even when the skb->len fits to
    non CAN XL frames.

    The CAN_RAW socket needs a rework to use these helpers. Therefore the
    use of these helpers is postponed to the CAN_RAW CAN XL integration.

    The J1939 protocol gets a check for Classical CAN frames too.

    Acked-by: Vincent Mailhol <mailhol.vincent@wanadoo.fr>
    Signed-off-by: Oliver Hartkopp <socketcan@hartkopp.net>
    Link: https://lore.kernel.org/all/20220912170725.120748-2-socketcan@hartkopp.net
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
| * | can: raw: use guard clause to optimize nesting in raw_rcv() (Ziyang Xuan, 2022-09-06, 1 file, -7/+6)

    We can use a guard clause to reduce nesting in raw_rcv(), replacing
    code of the form

        if (condition) {
                ...
        } else {
                return;
        }

    with an early return.

    Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
    Link: https://lore.kernel.org/all/0170ad1f07dbe838965df4274fce950980fa9d1f.1661584485.git.william.xuanziyang@huawei.com
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
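    For illustration, the guard-clause shape in generic C (the helpers
    here are hypothetical stand-ins, not the driver's actual conditions):

        #include <stdbool.h>

        /* hypothetical helpers standing in for the real checks/work */
        bool can_accept(const void *skb);
        void process(const void *skb);

        void handle(const void *skb)
        {
                if (!can_accept(skb))   /* guard clause: reject early */
                        return;

                /* main work continues at the top nesting level */
                process(skb);
        }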
| * | can: raw: process optimization in raw_init() (Ziyang Xuan, 2022-09-06, 1 file, -3/+11)

    Currently, the notifier is registered only after the proto has been
    registered successfully. A raw socket can be created and socket
    options set as soon as the proto is registered, so it is possible to
    miss notifier events before the notifier is registered, although this
    is a low-probability scenario. Move the notifier registration in front
    of the proto registration, like done in j1939.

    In addition, register_netdevice_notifier() may fail; checking its
    result is necessary.

    Signed-off-by: Ziyang Xuan <william.xuanziyang@huawei.com>
    Link: https://lore.kernel.org/all/7af9401f0d2d9fed36c1667b5ac9b8df8f8b87ee.1661584485.git.william.xuanziyang@huawei.com
    Signed-off-by: Marc Kleine-Budde <mkl@pengutronix.de>
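    For illustration, the reordering sketched in C (error paths abridged;
    identifier names assumed to follow net/can/raw.c):

        static __init int raw_module_init(void)
        {
                int err;

                /* notifier first, so no netdev event is missed once
                 * sockets can be created */
                err = register_netdevice_notifier(&canraw_notifier);
                if (err)
                        return err;

                err = can_proto_register(&raw_can_proto);
                if (err < 0) {
                        pr_err("can: registration of raw protocol failed\n");
                        unregister_netdevice_notifier(&canraw_notifier);
                }

                return err;
        }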
* | | vsock/vmci: fix repeated words in comments (Jilin Yuan, 2022-09-16, 1 file, -1/+1)

    Delete the redundant word 'that'.

    Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
    Reviewed-by: Stefano Garzarella <sgarzare@redhat.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | | rtnetlink: advertise allmulti counter (Nicolas Dichtel, 2022-09-16, 1 file, -0/+3)

    Like what was done with IFLA_PROMISCUITY, add IFLA_ALLMULTI to
    advertise the allmulti counter. The flag IFF_ALLMULTI is advertised
    only if it was directly set by a userland app.

    Signed-off-by: Nicolas Dichtel <nicolas.dichtel@6wind.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | | mptcp: account memory allocation in mptcp_nl_cmd_add_addr() to user (Thomas Haller, 2022-09-15, 1 file, -1/+1)

    Now that non-root users can configure MPTCP endpoints, account the
    memory allocation to the user.

    Signed-off-by: Thomas Haller <thaller@redhat.com>
    Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* | | mptcp: allow privileged operations from user namespaces (Thomas Haller, 2022-09-15, 1 file, -9/+9)

    GENL_ADMIN_PERM checks that the user has CAP_NET_ADMIN in the initial
    namespace by calling netlink_capable(). Instead, use
    GENL_UNS_ADMIN_PERM which uses netlink_ns_capable(). This checks that
    the caller has CAP_NET_ADMIN in the current user namespace.

    See also commit 4a92602aa1cd ("openvswitch: allow management from
    inside user namespaces") which introduced this mechanism. See also
    commit 5617c6cd6f84 ("nl80211: Allow privileged operations from user
    namespaces") which introduced this for nl80211.

    Signed-off-by: Thomas Haller <thaller@redhat.com>
    Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
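    For illustration, the flag swap in genl_ops terms (an abridged sketch
    of the op table, not the full one in net/mptcp/pm_netlink.c):

        static const struct genl_ops mptcp_pm_ops[] = {
                {
                        .cmd   = MPTCP_PM_CMD_ADD_ADDR,
                        .doit  = mptcp_nl_cmd_add_addr,
                        .flags = GENL_UNS_ADMIN_PERM, /* was GENL_ADMIN_PERM */
                },
                /* ... */
        };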
* | | mptcp: add do_check_data_fin to replace copied (Geliang Tang, 2022-09-15, 1 file, -3/+4)

    This patch adds a new bool variable 'do_check_data_fin' to replace the
    original int variable 'copied' in __mptcp_push_pending(), and checks
    it to determine whether to call __mptcp_check_send_data_fin().

    Suggested-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: Geliang Tang <geliang.tang@suse.com>
    Reviewed-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
* | | mptcp: add mptcp_for_each_subflow_safe helper (Matthieu Baerts, 2022-09-15, 3 files, -4/+6)

    Similar to mptcp_for_each_subflow(): this is clearer now that the
    _safe version is used in multiple places.

    Reviewed-by: Mat Martineau <mathew.j.martineau@linux.intel.com>
    Signed-off-by: Matthieu Baerts <matthieu.baerts@tessares.net>
    Signed-off-by: Paolo Abeni <pabeni@redhat.com>
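    The helper presumably mirrors mptcp_for_each_subflow() with the _safe
    list iterator, so entries can be removed while walking (sketch; field
    names as in net/mptcp/protocol.h):

        #define mptcp_for_each_subflow_safe(__msk, __subflow, __tmp)      \
                list_for_each_entry_safe(__subflow, __tmp,                \
                                         &((__msk)->conn_list), node)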
* | | net: openvswitch: fix repeated words in comments (Jilin Yuan, 2022-09-09, 1 file, -1/+1)

    Delete the redundant word 'is'.

    Signed-off-by: Jilin Yuan <yuanjilin@cdjrlc.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | | net: sched: act_vlan: get rid of tcf_vlan_walker and tcf_vlan_search (Zhengchao Shao, 2022-09-09, 1 file, -19/+0)

    tcf_vlan_walker() and tcf_vlan_search() do the same thing as the
    generic walk/search functions, so remove them.

    Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
    Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>
* | | net: sched: act_tunnel_key: get rid of tunnel_key_walker and tunnel_key_search (Zhengchao Shao, 2022-09-09, 1 file, -19/+0)

    tunnel_key_walker() and tunnel_key_search() do the same thing as the
    generic walk/search functions, so remove them.

    Signed-off-by: Zhengchao Shao <shaozhengchao@huawei.com>
    Acked-by: Jamal Hadi Salim <jhs@mojatatu.com>
    Signed-off-by: David S. Miller <davem@davemloft.net>