path: root/drivers/net/eql.c
Age | Commit message | Author | Files | Lines
2022-12-15 | igc: Use strict cycles for Qbv scheduling | Vinicius Costa Gomes | 1 | -9/+2
Configuring strict cycle mode in the controller forces better-behaved transmissions when taprio is offloaded. With strict_cycle and strict_end set, a transmission is not started if the whole packet cannot be completed before the end of the Qbv cycle. Fixes: 82faa9b79950 ("igc: Add support for ETF offloading") Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Signed-off-by: Aravindhan Gunasekaran <aravindhan.gunasekaran@intel.com> Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-15 | igc: Enhance Qbv scheduling by using first flag bit | Vinicius Costa Gomes | 3 | -29/+151
The I225 hardware has a limitation that packets can only be scheduled in the [0, cycle-time] interval. So, scheduling a packet to the start of the next cycle doesn't usually work. To overcome this, we use the Transmit Descriptor first flag to indicates that a packet should be the first packet (from a queue) in a cycle according to the section 7.5.2.9.3.4 The First Packet on Each QBV Cycle in Intel Discrete I225/6 User Manual. But this only works if there was any packet from that queue during the current cycle, to avoid this issue, we issue an empty packet if that's not the case. Also require one more descriptor to be available, to take into account the empty packet that might be issued. Test Setup: Talker: Use l2_tai to generate the launchtime into packet load. Listener: Use timedump.c to compute the delta between packet arrival and LaunchTime packet payload. Test Result: Before: 1666000610127300000,1666000610127300096,96,621273 1666000610127400000,1666000610127400192,192,621274 1666000610127500000,1666000610127500032,32,621275 1666000610127600000,1666000610127600128,128,621276 1666000610127700000,1666000610127700224,224,621277 1666000610127800000,1666000610127800064,64,621278 1666000610127900000,1666000610127900160,160,621279 1666000610128000000,1666000610128000000,0,621280 1666000610128100000,1666000610128100096,96,621281 1666000610128200000,1666000610128200192,192,621282 1666000610128300000,1666000610128300032,32,621283 1666000610128400000,1666000610128301056,-98944,621284 1666000610128500000,1666000610128302080,-197920,621285 1666000610128600000,1666000610128302848,-297152,621286 1666000610128700000,1666000610128303872,-396128,621287 1666000610128800000,1666000610128304896,-495104,621288 1666000610128900000,1666000610128305664,-594336,621289 1666000610129000000,1666000610128306688,-693312,621290 1666000610129100000,1666000610128307712,-792288,621291 1666000610129200000,1666000610128308480,-891520,621292 1666000610129300000,1666000610128309504,-990496,621293 1666000610129400000,1666000610128310528,-1089472,621294 1666000610129500000,1666000610128311296,-1188704,621295 1666000610129600000,1666000610128312320,-1287680,621296 1666000610129700000,1666000610128313344,-1386656,621297 1666000610129800000,1666000610128314112,-1485888,621298 1666000610129900000,1666000610128315136,-1584864,621299 1666000610130000000,1666000610128316160,-1683840,621300 1666000610130100000,1666000610128316928,-1783072,621301 1666000610130200000,1666000610128317952,-1882048,621302 1666000610130300000,1666000610128318976,-1981024,621303 1666000610130400000,1666000610128319744,-2080256,621304 1666000610130500000,1666000610128320768,-2179232,621305 1666000610130600000,1666000610128321792,-2278208,621306 1666000610130700000,1666000610128322816,-2377184,621307 1666000610130800000,1666000610128323584,-2476416,621308 1666000610130900000,1666000610128324608,-2575392,621309 1666000610131000000,1666000610128325632,-2674368,621310 1666000610131100000,1666000610128326400,-2773600,621311 1666000610131200000,1666000610128327424,-2872576,621312 1666000610131300000,1666000610128328448,-2971552,621313 1666000610131400000,1666000610128329216,-3070784,621314 1666000610131500000,1666000610131500032,32,621315 1666000610131600000,1666000610131600128,128,621316 1666000610131700000,1666000610131700224,224,621317 After: 1666073510646200000,1666073510646200064,64,2676462 1666073510646300000,1666073510646300160,160,2676463 1666073510646400000,1666073510646400256,256,2676464 1666073510646500000,1666073510646500096,96,2676465 
1666073510646600000,1666073510646600192,192,2676466 1666073510646700000,1666073510646700032,32,2676467 1666073510646800000,1666073510646800128,128,2676468 1666073510646900000,1666073510646900224,224,2676469 1666073510647000000,1666073510647000064,64,2676470 1666073510647100000,1666073510647100160,160,2676471 1666073510647200000,1666073510647200256,256,2676472 1666073510647300000,1666073510647300096,96,2676473 1666073510647400000,1666073510647400192,192,2676474 1666073510647500000,1666073510647500032,32,2676475 1666073510647600000,1666073510647600128,128,2676476 1666073510647700000,1666073510647700224,224,2676477 1666073510647800000,1666073510647800064,64,2676478 1666073510647900000,1666073510647900160,160,2676479 1666073510648000000,1666073510648000000,0,2676480 1666073510648100000,1666073510648100096,96,2676481 1666073510648200000,1666073510648200192,192,2676482 1666073510648300000,1666073510648300032,32,2676483 1666073510648400000,1666073510648400128,128,2676484 1666073510648500000,1666073510648500224,224,2676485 1666073510648600000,1666073510648600064,64,2676486 1666073510648700000,1666073510648700160,160,2676487 1666073510648800000,1666073510648800000,0,2676488 1666073510648900000,1666073510648900096,96,2676489 1666073510649000000,1666073510649000192,192,2676490 1666073510649100000,1666073510649100032,32,2676491 1666073510649200000,1666073510649200128,128,2676492 1666073510649300000,1666073510649300224,224,2676493 1666073510649400000,1666073510649400064,64,2676494 1666073510649500000,1666073510649500160,160,2676495 1666073510649600000,1666073510649600000,0,2676496 1666073510649700000,1666073510649700096,96,2676497 1666073510649800000,1666073510649800192,192,2676498 1666073510649900000,1666073510649900032,32,2676499 1666073510650000000,1666073510650000128,128,2676500 Fixes: 82faa9b79950 ("igc: Add support for ETF offloading") Signed-off-by: Vinicius Costa Gomes <vinicius.gomes@intel.com> Co-developed-by: Aravindhan Gunasekaran <aravindhan.gunasekaran@intel.com> Signed-off-by: Aravindhan Gunasekaran <aravindhan.gunasekaran@intel.com> Co-developed-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Signed-off-by: Muhammad Husaini Zulkifli <muhammad.husaini.zulkifli@intel.com> Signed-off-by: Malli C <mallikarjuna.chilakala@intel.com> Tested-by: Naama Meir <naamax.meir@linux.intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
2022-12-15 | net: dsa: mv88e6xxx: avoid reg_lock deadlock in mv88e6xxx_setup_port() | Vladimir Oltean | 1 | -5/+4
In the blamed commit, it was not noticed that one implementation of chip->info->ops->phylink_get_caps(), called by mv88e6xxx_get_caps(), may access hardware registers, and in doing so, it takes the mv88e6xxx_reg_lock(). Namely, this is mv88e6352_phylink_get_caps(). This is a problem because mv88e6xxx_get_caps(), apart from being a top-level function (method invoked by dsa_switch_ops), is now also directly called from mv88e6xxx_setup_port(), which runs under the mv88e6xxx_reg_lock() taken by mv88e6xxx_setup(). Therefore, when running on mv88e6352, the reg_lock would be acquired a second time and the system would deadlock on driver probe. The things that mv88e6xxx_setup() can compete with in terms of register access are the IRQ handlers and MDIO bus operations registered by mv88e6xxx_probe(). So there is a real need to acquire the register lock. The register lock can, in principle, be dropped and re-acquired pretty much at will within the driver, as long as no operations that involve waiting for indirect access to complete (essentially, callers of mv88e6xxx_smi_direct_wait() and mv88e6xxx_wait_mask()) are interrupted with the lock released. However, I would guess that in mv88e6xxx_setup(), the critical section is kept open for such a long time just in order to optimize away multiple lock/unlock operations on the registers. We could, in principle, drop the reg_lock right before the mv88e6xxx_setup_port() -> mv88e6xxx_get_caps() call, and re-acquire it immediately afterwards. But this would look ugly, because mv88e6xxx_setup_port() would release a lock which it didn't acquire, but the caller did. A cleaner solution to this issue comes from the observation that struct mv88e6xxx_ops methods generally assume they are called with the reg_lock already acquired, whereas mv88e6352_phylink_get_caps() is the exception rather than the norm, in that it acquires the lock itself. Let's enforce the same locking pattern/convention for chip->info->ops->phylink_get_caps() as well, and make mv88e6xxx_get_caps(), the top-level function, acquire the register lock explicitly, so that this one implementation that accesses registers for port 4 works properly. This means that mv88e6xxx_setup_port() will no longer call the top-level function, but the low-level mv88e6xxx_ops method which expects the correct calling context (register lock held). Compared to chip->info->ops->phylink_get_caps(), mv88e6xxx_get_caps() also fixes up the supported_interfaces bitmap for internal ports, since that can be done generically and does not require per-switch knowledge. That code will no longer execute in this path; however, mv88e6xxx_setup_port() doesn't need it, it just needs to look at the mac_capabilities bitmap. Fixes: cc1049ccee20 ("net: dsa: mv88e6xxx: fix speed setting for CPU/DSA ports") Reported-by: Maksim Kiselev <bigunclemax@gmail.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Tested-by: Maksim Kiselev <bigunclemax@gmail.com> Link: https://lore.kernel.org/r/20221214110120.3368472-1-vladimir.oltean@nxp.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
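A minimal sketch of the locking convention described above, assuming the existing mv88e6xxx_reg_lock()/mv88e6xxx_reg_unlock() helpers; this is an abbreviated illustration, not the exact upstream diff:

  /* Top-level dsa_switch_ops method: it owns the register lock. */
  static void mv88e6xxx_get_caps(struct dsa_switch *ds, int port,
                                 struct phylink_config *config)
  {
          struct mv88e6xxx_chip *chip = ds->priv;

          mv88e6xxx_reg_lock(chip);
          chip->info->ops->phylink_get_caps(chip, port, config);
          mv88e6xxx_reg_unlock(chip);

          /* ... generic fixup of supported_interfaces for internal ports ... */
  }

  /* mv88e6xxx_setup_port() already runs with reg_lock held, so it calls the
   * low-level op directly instead of the top-level function above. */
  chip->info->ops->phylink_get_caps(chip, port, &config);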
2022-12-15 | ravb: Fix "failed to switch device to config mode" message during unbind | Biju Das | 1 | -1/+1
This patch fixes the error "ravb 11c20000.ethernet eth0: failed to switch device to config mode" during unbind: registers are being accessed after pm_runtime_put_sync(). Cleanup is usually done in the reverse order of init, but in remove() the pm_runtime_put_sync() call is currently not in reverse order:

  Probe:
    reset_control_deassert(rstc);
    pm_runtime_enable(&pdev->dev);
    pm_runtime_get_sync(&pdev->dev);

  Remove:
    pm_runtime_put_sync(&pdev->dev);
    unregister_netdev(ndev);
    ..
    ravb_mdio_release(priv);
    pm_runtime_disable(&pdev->dev);

Consider the call chain unregister_netdev() -> unregister_netdevice_queue() -> rollback_registered_many(), which calls the functions below that access the registers after pm_runtime_put_sync(): 1) ravb_get_stats 2) ravb_close Fixes: c156633f1353 ("Renesas Ethernet AVB driver proper") Cc: stable@vger.kernel.org Signed-off-by: Biju Das <biju.das.jz@bp.renesas.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/20221214105118.2495313-1-biju.das.jz@bp.renesas.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
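A hedged sketch of the teardown ordering the fix aims for (abbreviated, not the literal diff): everything that may still touch registers runs before pm_runtime_put_sync().

  static int ravb_remove(struct platform_device *pdev)
  {
          struct net_device *ndev = platform_get_drvdata(pdev);
          struct ravb_private *priv = netdev_priv(ndev);

          unregister_netdev(ndev);         /* may call ravb_close()/ravb_get_stats() */
          ravb_mdio_release(priv);
          /* ... remaining register-level teardown ... */
          pm_runtime_put_sync(&pdev->dev); /* only after the last register access */
          pm_runtime_disable(&pdev->dev);
          free_netdev(ndev);
          return 0;
  }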
2022-12-15 | net: stmmac: fix errno when create_singlethread_workqueue() fails | Gaosheng Cui | 1 | -0/+1
We should set the return value to -ENOMEM explicitly when create_singlethread_workqueue() fails in stmmac_dvr_probe(), otherwise we'll lose the error value. Fixes: a137f3f27f92 ("net: stmmac: fix possible memory leak in stmmac_dvr_probe()") Signed-off-by: Gaosheng Cui <cuigaosheng1@huawei.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/20221214080117.3514615-1-cuigaosheng1@huawei.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
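A minimal sketch of the error-path change described above; names follow stmmac_dvr_probe(), the label name is assumed and the surrounding code is omitted:

          priv->wq = create_singlethread_workqueue("wq");
          if (!priv->wq) {
                  dev_err(priv->device, "failed to create workqueue\n");
                  ret = -ENOMEM;          /* was missing: a stale ret could be returned */
                  goto error_wq_init;
          }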
2022-12-15 | r6040: Fix kmemleak in probe and remove | Li Zetao | 1 | -1/+4
There is a memory leak reported by kmemleak:

  unreferenced object 0xffff888116111000 (size 2048):
    comm "modprobe", pid 817, jiffies 4294759745 (age 76.502s)
    hex dump (first 32 bytes):
      00 c4 0a 04 81 88 ff ff 08 10 11 16 81 88 ff ff  ................
      08 10 11 16 81 88 ff ff 00 00 00 00 00 00 00 00  ................
    backtrace:
      [<ffffffff815bcd82>] kmalloc_trace+0x22/0x60
      [<ffffffff827e20ee>] phy_device_create+0x4e/0x90
      [<ffffffff827e6072>] get_phy_device+0xd2/0x220
      [<ffffffff827e7844>] mdiobus_scan+0xa4/0x2e0
      [<ffffffff827e8be2>] __mdiobus_register+0x482/0x8b0
      [<ffffffffa01f5d24>] r6040_init_one+0x714/0xd2c [r6040]
      ...

The problem occurs in the probe process as follows:

  r6040_init_one:
    mdiobus_register
      mdiobus_scan    <- alloc and register phy_device,
                         the reference count of phy_device is 3
    r6040_mii_probe
      phy_connect     <- connect to the first phy_device, so the reference
                         count of the first phy_device is 4, others are 3
    register_netdev   <- fault injection succeeded, goto error handling path

  // error handling path
  err_out_mdio_unregister:
    mdiobus_unregister(lp->mii_bus);
  err_out_mdio:
    mdiobus_free(lp->mii_bus);  <- the reference count of the first phy_device
                                   is 1, so it is not released, while the other
                                   phy_devices are released

  // similarly, the remove process has the same problem

The root cause is that the phy_device is not disconnected when an r6040 device is removed in r6040_remove_one() or on the error handling path after r6040_mii_probe() succeeded. In r6040_mii_probe(), a net ethernet device is connected to the first PHY device of the mii_bus so that the connected driver is notified when the link status changes, which is the default behavior of the PHY infrastructure. Therefore the phy_device should be disconnected when an r6040 device is removed or on the error handling path. Fix it by adding phy_disconnect() when removing an r6040 device or on the error handling path after r6040_mii_probe() succeeded. Fixes: 3831861b4ad8 ("r6040: implement phylib") Signed-off-by: Li Zetao <lizetao1@huawei.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/20221213125614.927754-1-lizetao1@huawei.com Signed-off-by: Paolo Abeni <pabeni@redhat.com>
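A sketch of the error-path shape after the fix; the new label name is an assumption for illustration, the other labels follow the ones quoted above:

  err_out_phy_disconnect:
          phy_disconnect(dev->phydev);    /* drop the reference taken by phy_connect() */
  err_out_mdio_unregister:
          mdiobus_unregister(lp->mii_bus);
  err_out_mdio:
          mdiobus_free(lp->mii_bus);

  /* and, in the same spirit, in r6040_remove_one(): */
          phy_disconnect(dev->phydev);
          mdiobus_unregister(lp->mii_bus);
          mdiobus_free(lp->mii_bus);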
2022-12-15 | unix: Fix race in SOCK_SEQPACKET's unix_dgram_sendmsg() | Kirill Tkhai | 1 | -2/+9
There is a race as a result of which an alive SOCK_SEQPACKET socket may change its state from TCP_ESTABLISHED to TCP_CLOSE: unix_release_sock(peer) unix_dgram_sendmsg(sk) sock_orphan(peer) sock_set_flag(peer, SOCK_DEAD) sock_alloc_send_pskb() if !(sk->sk_shutdown & SEND_SHUTDOWN) OK if sock_flag(peer, SOCK_DEAD) sk->sk_state = TCP_CLOSE sk->sk_shutdown = SHUTDOWN_MASK After that, socket sk remains almost normal: it is able to connect, listen, accept and recvmsg, while it can't sendmsg. Since this is the only way for an alive SOCK_SEQPACKET socket to change its state like this, we should fix this strange and potentially dangerous corner case. Note that we will return EPIPE here, as is normally done in sock_alloc_send_pskb(). The originally used ECONNREFUSED looks strange, since it is odd to return a specific retval depending on a race inside the kernel, which the user cannot influence. Also, move the TCP_CLOSE assignment for SOCK_DGRAM sockets under the state lock to fix a race with unix_dgram_connect(): unix_dgram_connect(other) unix_dgram_sendmsg(sk) unix_peer(sk) = NULL unix_state_unlock(sk) unix_state_double_lock(sk, other) sk->sk_state = TCP_ESTABLISHED unix_peer(sk) = other unix_state_double_unlock(sk, other) sk->sk_state = TCP_CLOSE This patch fixes both of these races. Fixes: 83301b5367a9 ("af_unix: Set TCP_ESTABLISHED for datagram sockets too") Signed-off-by: Kirill Tkhai <tkhai@ya.ru> Link: https://lore.kernel.org/r/135fda25-22d5-837a-782b-ceee50e19844@ya.ru Signed-off-by: Paolo Abeni <pabeni@redhat.com>
2022-12-15 | nfc: pn533: Clear nfc_target before being used | Minsuk Kang | 1 | -0/+4
Fix a slab-out-of-bounds read that occurs in nla_put() called from nfc_genl_send_target() when target->sensb_res_len, which is duplicated from an nfc_target in pn533, is too large as the nfc_target is not properly initialized and retains garbage values. Clear nfc_targets with memset() before they are used. Found by a modified version of syzkaller. BUG: KASAN: slab-out-of-bounds in nla_put Call Trace: memcpy nla_put nfc_genl_dump_targets genl_lock_dumpit netlink_dump __netlink_dump_start genl_family_rcv_msg_dumpit genl_rcv_msg netlink_rcv_skb genl_rcv netlink_unicast netlink_sendmsg sock_sendmsg ____sys_sendmsg ___sys_sendmsg __sys_sendmsg do_syscall_64 Fixes: 673088fb42d0 ("NFC: pn533: Send ATR_REQ directly for active device detection") Fixes: 361f3cb7f9cf ("NFC: DEP link hook implementation for pn533") Signed-off-by: Minsuk Kang <linuxlovemin@yonsei.ac.kr> Reviewed-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Link: https://lore.kernel.org/r/20221214015139.119673-1-linuxlovemin@yonsei.ac.kr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
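A minimal illustration of the idea, assuming a target-parsing path in pn533 with illustrative variable names: zero the nfc_target before only some of its fields are filled, so length fields such as sensb_res_len never carry garbage.

          struct nfc_target *nfc_tgt = &dev->targets[i];  /* illustrative */

          memset(nfc_tgt, 0, sizeof(*nfc_tgt));   /* clear before partial init */
          /* ... copy only the fields actually present in the chip's response ... */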
2022-12-15 | net: enetc: avoid buffer leaks on xdp_do_redirect() failure | Vladimir Oltean | 1 | -27/+8
Before enetc_clean_rx_ring_xdp() calls xdp_do_redirect(), each software BD in the RX ring between index orig_i and i can have one of 2 refcount values on its page. We are the owner of the current buffer that is being processed, so the refcount will be at least 1. If the current owner of the buffer at the diametrically opposed index in the RX ring (i.o.w, the other half of this page) has not yet called kfree(), this page's refcount could even be 2. enetc_page_reusable() in enetc_flip_rx_buff() tests for the page refcount against 1, and [ if it's 2 ] does not attempt to reuse it. But if enetc_flip_rx_buff() is put after the xdp_do_redirect() call, the page refcount can have one of 3 values. It can also be 0, if there is no owner of the other page half, and xdp_do_redirect() for this buffer ran so far that it triggered a flush of the devmap/cpumap bulk queue, and the consumers of those bulk queues also freed the buffer, all by the time xdp_do_redirect() returns the execution back to enetc. This is the reason why enetc_flip_rx_buff() is called before xdp_do_redirect(), but there is a big flaw with that reasoning: enetc_flip_rx_buff() will set rx_swbd->page = NULL on both sides of the enetc_page_reusable() branch, and if xdp_do_redirect() returns an error, we call enetc_xdp_free(), which does not deal gracefully with that. In fact, what happens is quite special. The page refcounts start as 1. enetc_flip_rx_buff() figures they're reusable, transfers these rx_swbd->page pointers to a different rx_swbd in enetc_reuse_page(), and bumps the refcount to 2. When xdp_do_redirect() later returns an error, we call the no-op enetc_xdp_free(), but we still haven't lost the reference to that page. A copy of it is still at rx_ring->next_to_alloc, but that has refcount 2 (and there are no concurrent owners of it in flight, to drop the refcount). What really kills the system is when we'll flip the rx_swbd->page the second time around. With an updated refcount of 2, the page will not be reusable and we'll really leak it. Then enetc_new_page() will have to allocate more pages, which will then eventually leak again on further errors from xdp_do_redirect(). The problem, summarized, is that we zeroize rx_swbd->page before we're completely done with it, and this makes it impossible for the error path to do something with it. Since the packet is potentially multi-buffer and therefore the rx_swbd->page is potentially an array, manual passing of the old pointers between enetc_flip_rx_buff() and enetc_xdp_free() is a bit difficult. For the sake of going with a simple solution, we accept the possibility of racing with xdp_do_redirect(), and we move the flip procedure to execute only on the redirect success path. By racing, I mean that the page may be deemed as not reusable by enetc (having a refcount of 0), but there will be no leak in that case, either. Once we accept that, we have something better to do with buffers on XDP_REDIRECT failure. Since we haven't performed half-page flipping yet, we won't, either (and this way, we can avoid enetc_xdp_free() completely, which gives the entire page to the slab allocator). Instead, we'll call enetc_xdp_drop(), which will recycle this half of the buffer back to the RX ring. Fixes: 9d2b68cc108d ("net: enetc: add support for XDP_REDIRECT") Suggested-by: Lorenzo Bianconi <lorenzo.bianconi@redhat.com> Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20221213001908.2347046-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-14 | wireguard: timers: cast enum limits members to int in prints | Jiri Slaby (SUSE) | 1 | -4/+4
Since gcc13, each member of an enum has the same type as the enum. And that is inherited from its members. Provided "REKEY_AFTER_MESSAGES = 1ULL << 60", the named type is unsigned long. This generates warnings with gcc-13: error: format '%d' expects argument of type 'int', but argument 6 has type 'long unsigned int' Cast those particular enum members to int when printing them. Link: https://gcc.gnu.org/bugzilla/show_bug.cgi?id=36113 Cc: Martin Liska <mliska@suse.cz> Signed-off-by: Jiri Slaby (SUSE) <jirislaby@kernel.org> Signed-off-by: Jason A. Donenfeld <Jason@zx2c4.com> Link: https://lore.kernel.org/all/20221213225208.3343692-2-Jason@zx2c4.com/ Signed-off-by: Jakub Kicinski <kuba@kernel.org>
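A sketch of the kind of change involved, assuming a print along the lines of the existing timer messages; the cast keeps %d valid now that the enum's underlying type is unsigned long:

          net_dbg_ratelimited("%s: Retrying handshake because we stopped hearing back after %d seconds\n",
                              dev->name, (int)(KEEPALIVE_TIMEOUT + REKEY_TIMEOUT));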
2022-12-14 | igb: Initialize mailbox message for VF reset | Tony Nguyen | 1 | -1/+1
When a MAC address is not assigned to the VF, that portion of the message sent to the VF is not set. The memory, however, is allocated from the stack meaning that information may be leaked to the VM. Initialize the message buffer to 0 so that no information is passed to the VM in this case. Fixes: 6ddbc4cf1f4d ("igb: Indicate failure on vf reset for empty mac address") Reported-by: Akihiko Odaki <akihiko.odaki@daynix.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Reviewed-by: Akihiko Odaki <akihiko.odaki@daynix.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Link: https://lore.kernel.org/r/20221212190031.3983342-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
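A minimal sketch of the fix: zero-initialize the on-stack mailbox buffer so the unset MAC portion cannot leak stack contents to the VM (function context abbreviated):

          u32 msgbuf[3] = {};             /* was: u32 msgbuf[3]; (uninitialized stack) */
          u8 *addr = (u8 *)(&msgbuf[1]);  /* the MAC is copied here only when assigned */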
2022-12-14 | mISDN: hfcmulti: don't call dev_kfree_skb/kfree_skb() under spin_lock_irqsave() | Yang Yingliang | 1 | -6/+13
It is not allowed to call kfree_skb() or consume_skb() from hardware interrupt context or with hardware interrupts disabled. skb_queue_purge() is called under spin_lock_irqsave() in handle_dmsg() and hfcm_l1callback(), and kfree_skb() is called from it. To fix this, use skb_queue_splice_init() to move dch->squeue to a local free queue, also enqueue the tx_skb and rx_skb there, and finally call __skb_queue_purge() to free the SKBs after unlocking. Fixes: af69fb3a8ffa ("Add mISDN HFC multiport driver") Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-14 | mISDN: hfcpci: don't call dev_kfree_skb/kfree_skb() under spin_lock_irqsave() | Yang Yingliang | 1 | -4/+9
It is not allowed to call kfree_skb() or consume_skb() from hardware interrupt context or with hardware interrupts disabled. skb_queue_purge() is called under spin_lock_irqsave() in hfcpci_l2l1D(), and kfree_skb() is called from it. To fix this, use skb_queue_splice_init() to move dch->squeue to a local free queue, also enqueue the tx_skb and rx_skb there, and finally call __skb_queue_purge() to free the SKBs after unlocking. Fixes: 1700fe1a10dc ("Add mISDN HFC PCI driver") Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-14 | mISDN: hfcsusb: don't call dev_kfree_skb/kfree_skb() under spin_lock_irqsave() | Yang Yingliang | 1 | -4/+8
It is not allowed to call kfree_skb() or consume_skb() from hardware interrupt context or with hardware interrupts disabled. dev_kfree_skb_irq() or dev_consume_skb_irq() should be used instead. The difference between them is the free reason: dev_kfree_skb_irq() means the SKB was dropped due to an error, while dev_consume_skb_irq() means the SKB was consumed normally. skb_queue_purge() is called under spin_lock_irqsave() in hfcusb_l2l1D(), and kfree_skb() is called from it. To fix this, use skb_queue_splice_init() to move dch->squeue to a local free queue, also enqueue the tx_skb and rx_skb there, and finally call __skb_queue_purge() to free the SKBs after unlocking. In tx_iso_complete(), dev_kfree_skb() is called to consume the transmitted SKB, so replace it with dev_consume_skb_irq(). Fixes: 69f52adb2d53 ("mISDN: Add HFC USB driver") Signed-off-by: Yang Yingliang <yangyingliang@huawei.com> Reviewed-by: Alexander Duyck <alexanderduyck@fb.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
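A sketch of the pattern these three mISDN fixes share (variable names follow the D-channel code, abbreviated): splice the pending SKBs to a local queue while holding the lock, and purge that queue only after the lock is released.

          struct sk_buff_head free_queue;
          unsigned long flags;

          __skb_queue_head_init(&free_queue);
          spin_lock_irqsave(&hw->lock, flags);
          skb_queue_splice_init(&dch->squeue, &free_queue);
          if (dch->tx_skb) {
                  __skb_queue_tail(&free_queue, dch->tx_skb);
                  dch->tx_skb = NULL;
          }
          if (dch->rx_skb) {
                  __skb_queue_tail(&free_queue, dch->rx_skb);
                  dch->rx_skb = NULL;
          }
          spin_unlock_irqrestore(&hw->lock, flags);
          __skb_queue_purge(&free_queue);          /* safe: IRQs are enabled again */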
2022-12-14 | selftests: bonding: add bonding prio option test | Liang Li | 2 | -1/+247
Add a test for bonding prio option. Here is the test result:

  ]# ./option_prio.sh
  TEST: prio_test (Test bonding option 'prio' with mode=1 monitor=arp_ip_target and primary_reselect=0) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=1 monitor=arp_ip_target and primary_reselect=1) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=1 monitor=arp_ip_target and primary_reselect=2) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=1 monitor=miimon and primary_reselect=0) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=1 monitor=miimon and primary_reselect=1) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=1 monitor=miimon and primary_reselect=2) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=5 monitor=miimon and primary_reselect=0) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=5 monitor=miimon and primary_reselect=1) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=5 monitor=miimon and primary_reselect=2) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=6 monitor=miimon and primary_reselect=0) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=6 monitor=miimon and primary_reselect=1) [ OK ]
  TEST: prio_test (Test bonding option 'prio' with mode=6 monitor=miimon and primary_reselect=2) [ OK ]

Signed-off-by: Liang Li <liali@redhat.com> Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-14 | bonding: do failover when high prio link up | Hangbin Liu | 1 | -9/+15
Currently, when a high-prio link is enslaved, or when the current link goes down, the high-prio port can be selected. But when a high-prio link comes up, new active slave reselection is not triggered. Fix it by checking the link's prio when it comes up. Do the failover after looping over all slaves, as multiple high-prio slaves may be up. Reported-by: Liang Li <liali@redhat.com> Fixes: 0a2ff7cc8ad4 ("Bonding: add per-port priority for failover re-selection") Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-14 | bonding: add missed __rcu annotation for curr_active_slave | Hangbin Liu | 1 | -1/+1
There is one direct access to bond->curr_active_slave in bond_miimon_commit(). Protect it with rcu_access_pointer(), since the rest of this function already accesses the pointer that way. Signed-off-by: Hangbin Liu <liuhangbin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
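A minimal illustration of the annotation, assuming the access only compares the pointer value (so the lockdep-checked rcu_dereference() is not required); the follow-up action shown is hypothetical:

          /* before: bond->curr_active_slave was read directly */
          if (slave == rcu_access_pointer(bond->curr_active_slave))
                  do_failover = true;     /* hypothetical follow-up action */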
2022-12-14 | net: macsec: fix net device access prior to holding a lock | Emeel Hakim | 1 | -13/+21
Currently the macsec offload selection update routine accesses the net device before taking the relevant lock. Fix this by holding the lock prior to the device access. Fixes: dcb780fb2795 ("net: macsec: add nla support for changing the offloading selection") Reviewed-by: Raed Salem <raeds@nvidia.com> Signed-off-by: Emeel Hakim <ehakim@nvidia.com> Link: https://lore.kernel.org/r/20221211075532.28099-1-ehakim@nvidia.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | netfilter: conntrack: document sctp timeouts | Sriram Yagnaraman | 1 | -0/+33
The SCTP conntrack timeouts are exposed through sysctl; update the documentation to describe the SCTP states and their default timeouts. Signed-off-by: Sriram Yagnaraman <sriram.yagnaraman@est.tech> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
2022-12-13 | ipvs: add a 'default' case in do_ip_vs_set_ctl() | Li Qiong | 1 | -0/+5
It is better to have a default switch case that returns '-EINVAL', in case new commands are added; otherwise an uninitialized value of ret could be returned. Signed-off-by: Li Qiong <liqiong@nfschina.com> Reviewed-by: Simon Horman <horms@verge.net.au> Acked-by: Julian Anastasov <ja@ssi.bg> Signed-off-by: Pablo Neira Ayuso <pablo@netfilter.org>
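A minimal sketch of the added branch, assuming the surrounding command switch in do_ip_vs_set_ctl() (existing cases elided):

          switch (cmd) {
          case IP_VS_SO_SET_STARTDAEMON:
                  /* ... existing cases unchanged ... */
                  break;
          default:
                  ret = -EINVAL;  /* unknown command: never return an uninitialized ret */
                  break;
          }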
2022-12-13 | ipvs: fix type warning in do_div() on 32 bit | Jakub Kicinski | 1 | -1/+2
32-bit platforms without a 64-bit div generate the following warning:

  net/netfilter/ipvs/ip_vs_est.c: In function 'ip_vs_est_calc_limits':
  include/asm-generic/div64.h:222:35: warning: comparison of distinct pointer types lacks a cast
    222 |         (void)(((typeof((n)) *)0) == ((uint64_t *)0)); \
        |                                   ^~
  net/netfilter/ipvs/ip_vs_est.c:694:17: note: in expansion of macro 'do_div'
    694 |                 do_div(val, loops);
        |                 ^~~~~~
  include/asm-generic/div64.h:222:35: warning: comparison of distinct pointer types lacks a cast
    222 |         (void)(((typeof((n)) *)0) == ((uint64_t *)0)); \
        |                                   ^~
  net/netfilter/ipvs/ip_vs_est.c:700:33: note: in expansion of macro 'do_div'
    700 |                 do_div(val, min_est);
        |                 ^~~~~~

The first argument of do_div() should be unsigned. We can't just cast, as do_div() also updates it, so we need an lvalue. Make val unsigned in the first place; all paths already check that the values they assign to this variable are non-negative. Fixes: 705dd3444081 ("ipvs: use kthreads for stats estimation") Signed-off-by: Jakub Kicinski <kuba@kernel.org> Link: https://lore.kernel.org/r/20221213032037.844517-1-kuba@kernel.org Signed-off-by: Paolo Abeni <pabeni@redhat.com>
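A minimal illustration of the constraint: do_div() needs an unsigned 64-bit lvalue as its first argument because the macro both type-checks it and updates it in place (variable names here are illustrative):

          u64 val = sum;          /* keep the dividend u64 rather than s64 */

          do_div(val, loops);     /* divides 'val' in place, returns the remainder */
          do_div(val, min_est);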
2022-12-13 | net: lan966x: Remove a useless test in lan966x_ptp_add_trap() | Christophe JAILLET | 1 | -2/+0
vcap_alloc_rule() can't return NULL, so remove some dead code. Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/27992ffcee47fc865ce87274d6dfcffe7a1e69e0.1670873784.git.christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | livepatch: Call klp_match_callback() in klp_find_callback() to avoid code duplication | Zhen Lei | 1 | -22/+11
The implementation of function klp_match_callback() is identical to the partial implementation of function klp_find_callback(). So call function klp_match_callback() in function klp_find_callback() instead of the duplicated code. Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com> Acked-by: Song Liu <song@kernel.org> Reviewed-by: Petr Mladek <pmladek@suse.com> Suggested-by: Petr Mladek <pmladek@suse.com> Signed-off-by: Luis Chamberlain <mcgrof@kernel.org>
2022-12-13 | net: ipa: add IPA v4.7 support | Alex Elder | 8 | -1/+922
Add the necessary register and data definitions needed for IPA v4.7, which is found on the SM6350 SoC. Co-developed-by: Luca Weiss <luca.weiss@fairphone.com> Signed-off-by: Luca Weiss <luca.weiss@fairphone.com> Signed-off-by: Alex Elder <elder@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | dt-bindings: net: qcom,ipa: Add SM6350 compatible | Luca Weiss | 1 | -0/+1
Add support for SM6350, which uses IPA v4.7. Signed-off-by: Luca Weiss <luca.weiss@fairphone.com> Signed-off-by: Alex Elder <elder@linaro.org> Acked-by: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bnxt: Use generic HBH removal helper in tx path | Coco Li | 1 | -1/+25
Eric Dumazet implemented Big TCP, which allows bigger TSO/GRO packet sizes for IPv6 traffic. See the patch series: 'commit 89527be8d8d6 ("net: add IFLA_TSO_{MAX_SIZE|SEGS} attributes")' This reduces the number of packets traversing the networking stack and usually improves performance. However, it also inserts a temporary Hop-by-Hop IPv6 extension header. Using the HBH header removal method from the previous patch, the extra header can be removed in the bnxt driver to allow it to send big TCP packets (bigger TSO packets) as well. Tested: Compiled locally. To further test functional correctness, update the GSO/GRO limits on the physical NIC: ip link set eth0 gso_max_size 181000 ip link set eth0 gro_max_size 181000 Note that if there are bonding or ipvlan devices on top of the physical NIC, their GSO sizes need to be updated as well. Then, IPv6/TCP packets with sizes larger than 64k can be observed. Signed-off-by: Coco Li <lixiaoyan@google.com> Reviewed-by: Michael Chan <michael.chan@broadcom.com> Tested-by: Michael Chan <michael.chan@broadcom.com> Link: https://lore.kernel.org/r/20221210041646.3587757-2-lixiaoyan@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | IPv6/GRO: generic helper to remove temporary HBH/jumbo header in driver | Coco Li | 2 | -23/+37
IPv6/TCP and GRO stacks can build big TCP packets with an added temporary Hop-by-Hop header. If GSO is not involved, then the temporary header needs to be removed in the driver. This patch provides a generic helper for drivers that need to modify their headers in place. Tested: Compiled and ran with ethtool -K eth1 tso off; could send Big TCP packets. Signed-off-by: Coco Li <lixiaoyan@google.com> Link: https://lore.kernel.org/r/20221210041646.3587757-1-lixiaoyan@google.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
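A hedged sketch of how a driver's xmit path would use the helper introduced here when GSO is not splitting the packet; the exact call site and error label vary per driver:

          /* Strip the temporary Hop-by-Hop/jumbo header before mapping the
           * frame; if it cannot be removed, the packet has to be dropped. */
          if (unlikely(ipv6_hopopt_jumbo_remove(skb)))
                  goto tx_drop;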
2022-12-13 | selftests: forwarding: Add bridge MDB test | Ido Schimmel | 2 | -0/+1165
Add a selftest that includes the following test cases:
1. Configuration tests. Both valid and invalid configurations are tested across all entry types (e.g., L2, IPv4).
2. Forwarding tests. Both host and port group entries are tested across all entry types.
3. Interaction between user installed MDB entries and IGMP / MLD control packets.

Example output:

  INFO: # Host entries configuration tests
  TEST: Common host entries configuration tests (IPv4) [ OK ]
  TEST: Common host entries configuration tests (IPv6) [ OK ]
  TEST: Common host entries configuration tests (L2) [ OK ]
  INFO: # Port group entries configuration tests - (*, G)
  TEST: Common port group entries configuration tests (IPv4 (*, G)) [ OK ]
  TEST: Common port group entries configuration tests (IPv6 (*, G)) [ OK ]
  TEST: IPv4 (*, G) port group entries configuration tests [ OK ]
  TEST: IPv6 (*, G) port group entries configuration tests [ OK ]
  INFO: # Port group entries configuration tests - (S, G)
  TEST: Common port group entries configuration tests (IPv4 (S, G)) [ OK ]
  TEST: Common port group entries configuration tests (IPv6 (S, G)) [ OK ]
  TEST: IPv4 (S, G) port group entries configuration tests [ OK ]
  TEST: IPv6 (S, G) port group entries configuration tests [ OK ]
  INFO: # Port group entries configuration tests - L2
  TEST: Common port group entries configuration tests (L2 (*, G)) [ OK ]
  TEST: L2 (*, G) port group entries configuration tests [ OK ]
  INFO: # Forwarding tests
  TEST: IPv4 host entries forwarding tests [ OK ]
  TEST: IPv6 host entries forwarding tests [ OK ]
  TEST: L2 host entries forwarding tests [ OK ]
  TEST: IPv4 port group "exclude" entries forwarding tests [ OK ]
  TEST: IPv6 port group "exclude" entries forwarding tests [ OK ]
  TEST: IPv4 port group "include" entries forwarding tests [ OK ]
  TEST: IPv6 port group "include" entries forwarding tests [ OK ]
  TEST: L2 port entries forwarding tests [ OK ]
  INFO: # Control packets tests
  TEST: IGMPv3 MODE_IS_INCLUE tests [ OK ]
  TEST: MLDv2 MODE_IS_INCLUDE tests [ OK ]

Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | selftests: forwarding: Rename bridge_mdb test | Ido Schimmel | 2 | -1/+1
The test is only concerned with host MDB entries and not with MDB entries as a whole. Rename the test to reflect that. Subsequent patches will add a more general test that will contain the test cases for host MDB entries and remove the current test. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Support replacement of MDB port group entries | Ido Schimmel | 2 | -5/+98
Now that user space can specify additional attributes of port group entries such as filter mode and source list, it makes sense to allow user space to atomically modify these attributes by replacing entries instead of forcing user space to delete the entries and add them back. Replace MDB port group entries when the 'NLM_F_REPLACE' flag is specified in the netlink message header. When a (*, G) entry is replaced, update the following attributes: Source list, state, filter mode, protocol and flags. If the entry is temporary and in EXCLUDE mode, reset the group timer to the group membership interval. If the entry is temporary and in INCLUDE mode, reset the source timers of associated sources to the group membership interval. Examples: # bridge mdb replace dev br0 port dummy10 grp 239.1.1.1 permanent source_list 192.0.2.1,192.0.2.2 filter_mode include # bridge -d -s mdb show dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.2 permanent filter_mode include proto static 0.00 dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.1 permanent filter_mode include proto static 0.00 dev br0 port dummy10 grp 239.1.1.1 permanent filter_mode include source_list 192.0.2.2/0.00,192.0.2.1/0.00 proto static 0.00 # bridge mdb replace dev br0 port dummy10 grp 239.1.1.1 permanent source_list 192.0.2.1,192.0.2.3 filter_mode exclude proto zebra # bridge -d -s mdb show dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.3 permanent filter_mode include proto zebra blocked 0.00 dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.1 permanent filter_mode include proto zebra blocked 0.00 dev br0 port dummy10 grp 239.1.1.1 permanent filter_mode exclude source_list 192.0.2.3/0.00,192.0.2.1/0.00 proto zebra 0.00 # bridge mdb replace dev br0 port dummy10 grp 239.1.1.1 temp source_list 192.0.2.4,192.0.2.3 filter_mode include proto bgp # bridge -d -s mdb show dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.4 temp filter_mode include proto bgp 0.00 dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.3 temp filter_mode include proto bgp 0.00 dev br0 port dummy10 grp 239.1.1.1 temp filter_mode include source_list 192.0.2.4/259.44,192.0.2.3/259.44 proto bgp 0.00 Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Allow user space to specify MDB entry routing protocol | Ido Schimmel | 3 | -2/+15
Add the 'MDBE_ATTR_RTPORT' attribute to allow user space to specify the routing protocol of the MDB port group entry. Enforce a minimum value of 'RTPROT_STATIC' to prevent user space from using protocol values that should only be set by the kernel (e.g., 'RTPROT_KERNEL'). Maintain backward compatibility by defaulting to 'RTPROT_STATIC'. The protocol is already visible to user space in RTM_NEWMDB responses and notifications via the 'MDBA_MDB_EATTR_RTPROT' attribute. The routing protocol allows a routing daemon to distinguish between entries configured by it and those configured by the administrator. Once MDB flush is supported, the protocol can be used as a criterion according to which the flush is performed. Examples: # bridge mdb add dev br0 port dummy10 grp 239.1.1.1 permanent proto kernel Error: integer out of range. # bridge mdb add dev br0 port dummy10 grp 239.1.1.1 permanent proto static # bridge mdb add dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.1 permanent proto zebra # bridge mdb add dev br0 port dummy10 grp 239.1.1.2 permanent source_list 198.51.100.1,198.51.100.2 filter_mode include proto 250 # bridge -d mdb show dev br0 port dummy10 grp 239.1.1.2 src 198.51.100.2 permanent filter_mode include proto 250 dev br0 port dummy10 grp 239.1.1.2 src 198.51.100.1 permanent filter_mode include proto 250 dev br0 port dummy10 grp 239.1.1.2 permanent filter_mode include source_list 198.51.100.2/0.00,198.51.100.1/0.00 proto 250 dev br0 port dummy10 grp 239.1.1.1 src 192.0.2.1 permanent filter_mode include proto zebra dev br0 port dummy10 grp 239.1.1.1 permanent filter_mode exclude proto static Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Allow user space to add (*, G) with a source list and filter mode | Ido Schimmel | 2 | -0/+150
Add new netlink attributes to the RTM_NEWMDB request that allow user space to add (*, G) with a source list and filter mode. The RTM_NEWMDB message can already dump such entries (created by the kernel) so there is no need to add dump support. However, the message contains a different set of attributes depending if it is a request or a response. The naming and structure of the new attributes try to follow the existing ones used in the response. Request: [ struct nlmsghdr ] [ struct br_port_msg ] [ MDBA_SET_ENTRY ] struct br_mdb_entry [ MDBA_SET_ENTRY_ATTRS ] [ MDBE_ATTR_SOURCE ] struct in_addr / struct in6_addr [ MDBE_ATTR_SRC_LIST ] // new [ MDBE_SRC_LIST_ENTRY ] [ MDBE_SRCATTR_ADDRESS ] struct in_addr / struct in6_addr [ ...] [ MDBE_ATTR_GROUP_MODE ] // new u8 Response: [ struct nlmsghdr ] [ struct br_port_msg ] [ MDBA_MDB ] [ MDBA_MDB_ENTRY ] [ MDBA_MDB_ENTRY_INFO ] struct br_mdb_entry [ MDBA_MDB_EATTR_TIMER ] u32 [ MDBA_MDB_EATTR_SOURCE ] struct in_addr / struct in6_addr [ MDBA_MDB_EATTR_RTPROT ] u8 [ MDBA_MDB_EATTR_SRC_LIST ] [ MDBA_MDB_SRCLIST_ENTRY ] [ MDBA_MDB_SRCATTR_ADDRESS ] struct in_addr / struct in6_addr [ MDBA_MDB_SRCATTR_TIMER ] u8 [...] [ MDBA_MDB_EATTR_GROUP_MODE ] u8 Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Add support for (*, G) with a source list and filter mode | Ido Schimmel | 2 | -3/+132
In preparation for allowing user space to add (*, G) entries with a source list and associated filter mode, add the necessary plumbing to handle such requests. Extend the MDB configuration structure with a currently empty source array and filter mode that is currently hard coded to EXCLUDE. Add the source entries and the corresponding (S, G) entries before making the new (*, G) port group entry visible to the data path. Handle the creation of each source entry in a similar fashion to how it is created from the data path in response to received Membership Reports: Create the source entry, arm the source timer (if needed), add a corresponding (S, G) forwarding entry and finally mark the source entry as installed (by user space). Add the (S, G) entry by populating an MDB configuration structure and calling br_mdb_add_group_sg() as if a new entry is created by user space, with the sole difference that the 'src_entry' field is set to make sure that the group timer of such entries is never armed. Note that it is not currently possible to add more than 32 source entries to a port group entry. If this proves to be a problem we can either increase 'PG_SRC_ENT_LIMIT' or avoid forcing a limit on entries created by user space. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Avoid arming group timer when (S, G) corresponds to a source | Ido Schimmel | 2 | -1/+2
User space will soon be able to install a (*, G) with a source list, prompting the creation of a (S, G) entry for each source. In this case, the group timer of the (S, G) entry should never be set. Solve this by adding a new field to the MDB configuration structure that denotes whether the (S, G) corresponds to a source or not. The field will be set in a subsequent patch where br_mdb_add_group_sg() is called in order to create a (S, G) entry for each user provided source. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Add a flag for user installed source entries | Ido Schimmel | 2 | -1/+3
There are a few places where the bridge driver differentiates between (S, G) entries installed by the kernel (in response to Membership Reports) and those installed by user space. One of them is when deleting an (S, G) entry corresponding to a source entry that is being deleted. While user space cannot currently add a source entry to a (*, G), it can add an (S, G) entry that later corresponds to a source entry created by the reception of a Membership Report. If this source entry is later deleted because its source timer expired or because the (*, G) entry is being deleted, the bridge driver will not delete the corresponding (S, G) entry if it was added by user space as permanent. This is going to be a problem when the ability to install a (*, G) with a source list is exposed to user space. In this case, when user space installs the (*, G) as permanent, then all the (S, G) entries corresponding to its source list will also be installed as permanent. When user space deletes the (*, G), all the source entries will be deleted and the expectation is that the corresponding (S, G) entries will be deleted as well. Solve this by introducing a new source entry flag denoting that the entry was installed by user space. When the entry is deleted, delete the corresponding (S, G) entry even if it was installed by user space as permanent, as the flag tells us that it was installed in response to the source entry being created. The flag will be set in a subsequent patch where source entries are created in response to user requests. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Expose __br_multicast_del_group_src() | Ido Schimmel | 2 | -3/+9
Expose __br_multicast_del_group_src() which is symmetric to br_multicast_new_group_src() and does not remove the installed {S, G} forwarding entry, unlike br_multicast_del_group_src(). The function will be used in the error path when user space was able to add a new source entry, but failed to install a corresponding forwarding entry. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Expose br_multicast_new_group_src() | Ido Schimmel | 2 | -1/+4
Currently, new group source entries are only created in response to received Membership Reports. Subsequent patches are going to allow user space to install (*, G) entries with a source list. As a preparatory step, expose br_multicast_new_group_src() so that it could later be invoked from the MDB code (i.e., br_mdb.c) that handles RTM_NEWMDB messages. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Add a centralized error path | Ido Schimmel | 1 | -4/+6
Subsequent patches will add memory allocations in br_mdb_config_init() as the MDB configuration structure will include a linked list of source entries. This memory will need to be freed regardless if br_mdb_add() succeeded or failed. As a preparation for this change, add a centralized error path where the memory will be freed. Note that br_mdb_del() already has one error path and therefore does not require any changes. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Place netlink policy before validation functions | Ido Schimmel | 1 | -6/+6
Subsequent patches are going to add additional validation functions and netlink policies. Some of these functions will need to perform parsing using nla_parse_nested() and the new policies. In order to keep all the policies next to each other, move the current policy to before the validation functions. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Split (*, G) and (S, G) addition into different functions | Ido Schimmel | 1 | -49/+96
When the bridge is using IGMP version 3 or MLD version 2, it handles the addition of (*, G) and (S, G) entries differently. When a new (S, G) port group entry is added, all the (*, G) EXCLUDE ports need to be added to the port group of the new entry. Similarly, when a new (*, G) EXCLUDE port group entry is added, the port needs to be added to the port group of all the matching (S, G) entries. Subsequent patches will create more differences between both entry types. Namely, filter mode and source list can only be specified for (*, G) entries. Given the current and future differences between both entry types, handle the addition of each entry type in a different function, thereby avoiding the creation of one complex function. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | bridge: mcast: Do not derive entry type from its filter mode | Ido Schimmel | 1 | -6/+3
Currently, the filter mode (i.e., INCLUDE / EXCLUDE) of MDB entries cannot be set from user space. Instead, it is set by the kernel according to the entry type: (*, G) entries are treated as EXCLUDE and (S, G) entries are treated as INCLUDE. This allows the kernel to derive the entry type from its filter mode. Subsequent patches will allow user space to set the filter mode of (*, G) entries, making the current assumption incorrect. As a preparation, remove the current assumption and instead determine the entry type from its key, which is a more direct way. Signed-off-by: Ido Schimmel <idosch@nvidia.com> Acked-by: Nikolay Aleksandrov <razor@blackwall.org> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
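A sketch of what "determine the entry type from its key" means in practice, assuming a helper along the lines of the bridge's existing br_multicast_is_star_g(): a (*, G) key is simply one whose source address is unspecified.

          static inline bool br_multicast_is_star_g(const struct br_ip *ip)
          {
                  switch (ip->proto) {
                  case htons(ETH_P_IP):
                          return !ip->src.ip4;            /* no IPv4 source -> (*, G) */
                  case htons(ETH_P_IPV6):
                          return ipv6_addr_any(&ip->src.ip6);
                  default:
                          return false;
                  }
          }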
2022-12-13 | qlcnic: Clean up some inconsistent indenting | Jiapeng Chong | 1 | -1/+1
No functional modification involved. drivers/net/ethernet/qlogic/qlcnic/qlcnic_ethtool.c:714 qlcnic_validate_ring_count() warn: inconsistent indenting. Link: https://bugzilla.openanolis.cn/show_bug.cgi?id=3419 Reported-by: Abaci Robot <abaci@linux.alibaba.com> Signed-off-by: Jiapeng Chong <jiapeng.chong@linux.alibaba.com> Link: https://lore.kernel.org/r/20221212055813.91154-1-jiapeng.chong@linux.alibaba.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | net: dsa: tag_8021q: avoid leaking ctx on dsa_tag_8021q_register() error path | Vladimir Oltean | 1 | -1/+10
If dsa_tag_8021q_setup() fails, for example due to the inability of the device to install a VLAN, the tag_8021q context of the switch will leak. Make sure it is freed on the error path. Fixes: 328621f6131f ("net: dsa: tag_8021q: absorb dsa_8021q_setup into dsa_tag_8021q_{,un}register") Signed-off-by: Vladimir Oltean <vladimir.oltean@nxp.com> Link: https://lore.kernel.org/r/20221209235242.480344-1-vladimir.oltean@nxp.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
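A minimal sketch of the error path being added, abbreviated from the shape of dsa_tag_8021q_register():

          err = dsa_tag_8021q_setup(ds);
          if (err)
                  goto err_free;

          return 0;

  err_free:
          kfree(ctx);     /* the tag_8021q context no longer leaks on failure */
          return err;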
2022-12-13 | i40e: allow toggling loopback mode via ndo_set_features callback | Tirthendu Sarkar | 4 | -4/+61
Add support for NETIF_F_LOOPBACK. This feature can be set via: $ ethtool -K eth0 loopback <on|off> This sets the MAC Tx->Rx loopback. This feature is used for the xsk selftests, and might have other uses too. Signed-off-by: Tirthendu Sarkar <tirthendu.sarkar@intel.com> Reviewed-by: Alexander Lobakin <alexandr.lobakin@intel.com> Reviewed-by: Leon Romanovsky <leonro@nvidia.com> Tested-by: Magnus Karlsson <magnus.karlsson@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Link: https://lore.kernel.org/r/20221209185553.2520088-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | i40e: Fix the inability to attach XDP program on downed interface | Bartosz Staszewski | 1 | -12/+24
Whenever trying to load an XDP program on a downed interface, i40e_xdp() was passing the vsi->rx_buf_len field to i40e_xdp_setup(), and that field was equal to 0. i40e_open() calls i40e_vsi_configure_rx(), which configures that field, but that only happens when the interface is up. When it is down, i40e_open() is not called, thus vsi->rx_buf_len is not set. The solution is to calculate the buffer length in a newly created function, i40e_calculate_vsi_rx_buf_len(), that returns the actual buffer length. The buffer length is calculated based on the same rules previously applied in i40e_vsi_configure_rx(). Fixes: 613142b0bb88 ("i40e: Log error for oversized MTU on device") Fixes: 0c8493d90b6b ("i40e: add XDP support for pass and drop actions") Signed-off-by: Bartosz Staszewski <bartoszx.staszewski@intel.com> Signed-off-by: Mateusz Palczewski <mateusz.palczewski@intel.com> Tested-by: Shwetha Nagaraju <Shwetha.nagaraju@intel.com> Reviewed-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com> Reviewed-by: Saeed Mahameed <saeed@kernel.com> Link: https://lore.kernel.org/r/20221209185411.2519898-1-anthony.l.nguyen@intel.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | net: failover: use IFF_NO_ADDRCONF flag to prevent ipv6 addrconf | Xin Long | 1 | -3/+3
Similar to Bonding and Team, net failover also needs to prevent ipv6 addrconf for its slave ports by setting IFF_NO_ADDRCONF in slave_dev->priv_flags. Note that dev_open(slave_dev) is called in .slave_register, which is called after the IFF_NO_ADDRCONF flag is set in failover_slave_register(). Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | net: team: use IFF_NO_ADDRCONF flag to prevent ipv6 addrconf | Xin Long | 1 | -0/+2
This patch uses the IFF_NO_ADDRCONF flag to prevent ipv6 addrconf for Team ports. The flag is set in team_port_enter(), which is called before dev_open(), and cleared in team_port_leave(), which is called after dev_close() and on the error path in team_port_add(). Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
2022-12-13 | net: add IFF_NO_ADDRCONF and use it in bonding to prevent ipv6 addrconf | Xin Long | 3 | -8/+17
Currently, bonding reuses the IFF_SLAVE flag, which is checked in ipv6 addrconf to prevent ipv6 addrconf. However, it is not a proper flag for suppressing ipv6 addrconf: to use it, bonding has to move the IFF_SLAVE flag setting ahead of dev_open() in bond_enslave(). Also, IFF_MASTER/SLAVE are historical flags used in bonding and eql; as Jiri mentioned, newer devices like Team and Failover do not use them. So, as Jiri suggested, this patch adds IFF_NO_ADDRCONF to the device's priv_flags to indicate no ipv6 addrconf, uses it in bonding, and moves the IFF_SLAVE flag setting back to its original place. Signed-off-by: Xin Long <lucien.xin@gmail.com> Signed-off-by: Jakub Kicinski <kuba@kernel.org>
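A sketch of how the new flag is meant to be consumed on the addrconf side (illustrative; the exact call sites are in the ipv6 addrconf event handling):

          /* previously: if (dev->flags & IFF_SLAVE) ... skip autoconf */
          if (idev->dev->priv_flags & IFF_NO_ADDRCONF)
                  return;         /* skip ipv6 address autoconfiguration */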
2022-12-13 | stmmac: fix potential division by 0 | Piergiorgio Beruto | 2 | -2/+3
When the MAC is connected to a 10 Mb/s PHY and the PTP clock is derived from the MAC reference clock (default), the clk_ptp_rate becomes too small and the calculated sub second increment becomes 0 when computed by the stmmac_config_sub_second_increment() function within stmmac_init_tstamp_counter(). Therefore, the subsequent div_u64 in stmmac_init_tstamp_counter() operation triggers a divide by 0 exception as shown below. [ 95.062067] socfpga-dwmac ff700000.ethernet eth0: Register MEM_TYPE_PAGE_POOL RxQ-0 [ 95.076440] socfpga-dwmac ff700000.ethernet eth0: PHY [stmmac-0:08] driver [NCN26000] (irq=49) [ 95.095964] dwmac1000: Master AXI performs any burst length [ 95.101588] socfpga-dwmac ff700000.ethernet eth0: No Safety Features support found [ 95.109428] Division by zero in kernel. [ 95.113447] CPU: 0 PID: 239 Comm: ifconfig Not tainted 6.1.0-rc7-centurion3-1.0.3.0-01574-gb624218205b7-dirty #77 [ 95.123686] Hardware name: Altera SOCFPGA [ 95.127695] unwind_backtrace from show_stack+0x10/0x14 [ 95.132938] show_stack from dump_stack_lvl+0x40/0x4c [ 95.137992] dump_stack_lvl from Ldiv0+0x8/0x10 [ 95.142527] Ldiv0 from __aeabi_uidivmod+0x8/0x18 [ 95.147232] __aeabi_uidivmod from div_u64_rem+0x1c/0x40 [ 95.152552] div_u64_rem from stmmac_init_tstamp_counter+0xd0/0x164 [ 95.158826] stmmac_init_tstamp_counter from stmmac_hw_setup+0x430/0xf00 [ 95.165533] stmmac_hw_setup from __stmmac_open+0x214/0x2d4 [ 95.171117] __stmmac_open from stmmac_open+0x30/0x44 [ 95.176182] stmmac_open from __dev_open+0x11c/0x134 [ 95.181172] __dev_open from __dev_change_flags+0x168/0x17c [ 95.186750] __dev_change_flags from dev_change_flags+0x14/0x50 [ 95.192662] dev_change_flags from devinet_ioctl+0x2b4/0x604 [ 95.198321] devinet_ioctl from inet_ioctl+0x1ec/0x214 [ 95.203462] inet_ioctl from sock_ioctl+0x14c/0x3c4 [ 95.208354] sock_ioctl from vfs_ioctl+0x20/0x38 [ 95.212984] vfs_ioctl from sys_ioctl+0x250/0x844 [ 95.217691] sys_ioctl from ret_fast_syscall+0x0/0x4c [ 95.222743] Exception stack(0xd0ee1fa8 to 0xd0ee1ff0) [ 95.227790] 1fa0: 00574c4f be9aeca4 00000003 00008914 be9aeca4 be9aec50 [ 95.235945] 1fc0: 00574c4f be9aeca4 0059f078 00000036 be9aee8c be9aef7a 00000015 00000000 [ 95.244096] 1fe0: 005a01f0 be9aec38 004d7484 b6e67d74 Signed-off-by: Piergiorgio Beruto <piergiorgio.beruto@gmail.com> Fixes: 91a2559c1dc5 ("net: stmmac: Fix sub-second increment") Reviewed-by: Andrew Lunn <andrew@lunn.ch> Link: https://lore.kernel.org/r/de4c64ccac9084952c56a06a8171d738604c4770.1670678513.git.piergiorgio.beruto@gmail.com Signed-off-by: Jakub Kicinski <kuba@kernel.org>
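An illustrative guard only (an assumption, not the actual patch): whatever form the fix takes, the sub-second increment derived from clk_ptp_rate must not reach div_u64() as zero.

          if (!sec_inc) {
                  /* hypothetical guard: clk_ptp_rate too low for the counter */
                  netdev_err(priv->dev, "invalid PTP sub-second increment\n");
                  return -EINVAL;
          }
          temp = div_u64(1000000000ULL, sec_inc);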
2022-12-13 | octeontx2-af: cn10k: mcs: Fix a resource leak in the probe and remove functions | Christophe JAILLET | 1 | -1/+5
In mcs_register_interrupts(), a call to request_irq() is not balanced by a corresponding free_irq(), either in the error handling path or in the remove function. Add the missing calls. Fixes: 6c635f78c474 ("octeontx2-af: cn10k: mcs: Handle MCS block interrupts") Signed-off-by: Christophe JAILLET <christophe.jaillet@wanadoo.fr> Link: https://lore.kernel.org/r/69f153db5152a141069f990206e7389f961d41ec.1670693669.git.christophe.jaillet@wanadoo.fr Signed-off-by: Jakub Kicinski <kuba@kernel.org>
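A hedged sketch of the balancing call: every request_irq() issued by mcs_register_interrupts() needs a matching free_irq() on the probe error path and in remove(); the vector bookkeeping field name below is illustrative only.

          /* in the probe error path and in the driver's remove() callback: */
          free_irq(pci_irq_vector(mcs->pdev, mcs->hw->ip_vec), mcs);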