path: root/drivers/net/ethernet
Commit message (Author, Date, Files changed, Lines -/+)
* ravb: make ravb_ptp_interrupt() *void* (Sergei Shtylyov, 2016-04-14, 3 files, -10/+9)
  When we have the ISS.CGIS bit set, we already know that the gPTP interrupt has happened, so an extra GIS register check at the end of ravb_ptp_interrupt() seems superfluous. We can model the gPTP interrupt handler like all other dedicated interrupt handlers in the driver and make it *void*.
  Signed-off-by: Sergei Shtylyov <sergei.shtylyov@cogentembedded.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* qed* - bump driver versions to 8.7.1.20 (Yuval Mintz, 2016-04-14, 2 files, -3/+3)
  Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* qede: add Rx flow hash/indirection support. (Sudarsana Reddy Kalluru, 2016-04-14, 3 files, -10/+283)
  Adds support for the following via ethtool:
  - UDP configuration of RSS based on 2-tuple/4-tuple.
  - RSS hash key.
  - RSS indirection table.
  Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
  Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
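  As a general illustration of what such ethtool RSS plumbing involves (not qede's actual code; the example_* names, sizes and private-struct layout below are invented), a driver typically wires up the rxfh/rxnfc ethtool callbacks:

    #include <linux/ethtool.h>
    #include <linux/netdevice.h>
    #include <linux/string.h>

    #define EXAMPLE_RSS_IND_TABLE_SIZE 128
    #define EXAMPLE_RSS_KEY_SIZE       40

    struct example_priv {
        u32 rss_ind_table[EXAMPLE_RSS_IND_TABLE_SIZE];
        u8 rss_key[EXAMPLE_RSS_KEY_SIZE];
    };

    /* backs "ethtool -x": report indirection table, hash key and hash function */
    static int example_get_rxfh(struct net_device *dev, u32 *indir, u8 *key,
                                u8 *hfunc)
    {
        struct example_priv *priv = netdev_priv(dev);

        if (hfunc)
            *hfunc = ETH_RSS_HASH_TOP; /* Toeplitz */
        if (indir)
            memcpy(indir, priv->rss_ind_table, sizeof(priv->rss_ind_table));
        if (key)
            memcpy(key, priv->rss_key, sizeof(priv->rss_key));
        return 0;
    }

    static const struct ethtool_ops example_ethtool_ops = {
        .get_rxfh = example_get_rxfh,
        /* .set_rxfh writes the table/key back to the device;
         * .get_rxfh_indir_size / .get_rxfh_key_size report the sizes above;
         * .get_rxnfc / .set_rxnfc cover the 2-tuple vs 4-tuple UDP hashing */
    };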
* qed: add Rx flow hash/indirection support. (Sudarsana Reddy Kalluru, 2016-04-14, 1 file, -16/+1)
  Adds the required API for passing RSS-related configuration from qede.
  Signed-off-by: Sudarsana Reddy Kalluru <sudarsana.kalluru@qlogic.com>
  Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* qed*: remove version dependency (Rahul Verma, 2016-04-14, 5 files, -32/+2)
  Inbox drivers don't need a versioning scheme to guarantee compatibility, as both qed and qede are compiled from the same codebase.
  Signed-off-by: Rahul Verma <rahul.verma@qlogic.com>
  Signed-off-by: Yuval Mintz <Yuval.Mintz@qlogic.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: bcmgenet: add BQL support (Petri Gynther, 2016-04-14, 1 file, -1/+14)
  Add Byte Queue Limits (BQL) support to the bcmgenet driver.
  Signed-off-by: Petri Gynther <pgynther@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
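  For reference, the hooks a driver adds for BQL typically look like the minimal sketch below (the example_* functions and queue-index plumbing are illustrative, not bcmgenet's actual code); on a ring re-initialisation the accounting is cleared with netdev_tx_reset_queue().

    #include <linux/netdevice.h>
    #include <linux/skbuff.h>

    /* transmit path: account the bytes just handed to the hardware queue */
    static void example_tx_queued(struct net_device *dev, unsigned int q_idx,
                                  struct sk_buff *skb)
    {
        struct netdev_queue *txq = netdev_get_tx_queue(dev, q_idx);

        netdev_tx_sent_queue(txq, skb->len);
    }

    /* TX-reclaim path: report what completed; BQL may re-wake the queue */
    static void example_tx_reclaimed(struct net_device *dev, unsigned int q_idx,
                                     unsigned int pkts_compl,
                                     unsigned int bytes_compl)
    {
        struct netdev_queue *txq = netdev_get_tx_queue(dev, q_idx);

        netdev_tx_completed_queue(txq, pkts_compl, bytes_compl);
    }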
* net: bcmgenet: use __napi_schedule_irqoff() (Florian Fainelli, 2016-04-14, 1 file, -4/+4)
  bcmgenet_isr1() and bcmgenet_isr0() run in hard IRQ context, so we do not need to block IRQs again.
  Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
  Signed-off-by: Eric Dumazet <edumazet@google.com>
  Acked-by: Petri Gynther <pgynther@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
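  The pattern in a hard-IRQ handler is roughly this sketch (the example_* names are placeholders):

    #include <linux/interrupt.h>
    #include <linux/netdevice.h>

    struct example_priv {
        struct napi_struct napi;
    };

    static irqreturn_t example_isr(int irq, void *dev_id)
    {
        struct example_priv *priv = dev_id;

        /* ... mask the device's interrupt source here ... */

        /* Hard-IRQ context: local interrupts are already disabled, so the
         * _irqoff variant skips a redundant local_irq_save()/restore pair. */
        if (likely(napi_schedule_prep(&priv->napi)))
            __napi_schedule_irqoff(&priv->napi);

        return IRQ_HANDLED;
    }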
* net: bcmgenet: use napi_complete_done() (Eric Dumazet, 2016-04-14, 1 file, -1/+1)
  By using napi_complete_done(), we allow fine-tuning of /sys/class/net/ethX/gro_flush_timeout for higher GRO aggregation efficiency on a Gbit NIC. See commit 24d2e4a50737 ("tg3: use napi_complete_done()") for details.
  Signed-off-by: Eric Dumazet <edumazet@google.com>
  Cc: Petri Gynther <pgynther@google.com>
  Cc: Florian Fainelli <f.fainelli@gmail.com>
  Acked-by: Florian Fainelli <f.fainelli@gmail.com>
  Acked-by: Petri Gynther <pgynther@google.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
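  A generic NAPI poll routine using it looks like the sketch below (the cleanup stub stands in for the driver's real RX/TX work); the timeout itself is then tuned from user space, e.g. echo 20000 > /sys/class/net/eth0/gro_flush_timeout (value in nanoseconds).

    #include <linux/netdevice.h>

    /* placeholder for the driver's real RX/TX cleanup work */
    static int example_clean_rings(struct napi_struct *napi, int budget)
    {
        return 0;
    }

    static int example_poll(struct napi_struct *napi, int budget)
    {
        int work_done = example_clean_rings(napi, budget);

        if (work_done < budget) {
            /* Reporting the real work_done (rather than plain napi_complete())
             * lets the core honour gro_flush_timeout and batch GRO flushes. */
            napi_complete_done(napi, work_done);
            /* ... re-enable the device interrupt here ... */
        }

        return work_done;
    }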
* drivers/net/ethernet/jme.c: Deinline jme_reset_mac_processor, save 2816 bytes (Denys Vlasenko, 2016-04-14, 1 file, -1/+1)
  This function compiles to 895 bytes of machine code. Clearly, this isn't a time-critical function. For one, it has a number of udelay(1) calls.
  Signed-off-by: Denys Vlasenko <dvlasenk@redhat.com>
  CC: David S. Miller <davem@davemloft.net>
  CC: linux-kernel@vger.kernel.org
  CC: netdev@vger.kernel.org
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: ethernet: stmmac: GMAC4.xx: Fix TX descriptor preparation (Alexandre TORGUE, 2016-04-14, 1 file, -8/+1)
  On GMAC4.xx each descriptor contains 2 buffers of 16KB each. Initially, both buffers were filled in dwmac4_rd_prepare_tx_desc(), but that is actually not needed: the stmmac driver supports frames up to 9000 bytes (jumbo), so only one buffer is needed.
  Reported-by: Dan Carpenter <dan.carpenter@oracle.com>
  Signed-off-by: Alexandre TORGUE <alexandre.torgue@st.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: do not set the QID field in the TX DMA descriptors (John Crispin, 2016-04-13, 1 file, -2/+1)
  The QID field gets set to the MAC id. This made the DMA linked list queue the traffic of each MAC on a different internal queue. However, during long-term testing we found that this causes traffic stalls, as the multi-queue setup requires a more complete initialisation which is not part of the upstream driver yet. This patch removes the code setting the QID field, so all traffic ends up in queue 0, which works without any special setup.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: move the pending_work struct to the device generic struct (John Crispin, 2016-04-13, 2 files, -10/+7)
  The worker always touches both netdevs. It is ethernet-core specific and not MAC specific. We only need one worker, and it belongs in the ethernet core struct.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: fix mtk_pending_work (John Crispin, 2016-04-13, 1 file, -8/+20)
  The driver supports 2 MACs. Both run on the same DMA ring. If we hit a TX timeout we need to stop both netdevs before restarting them again. If we don't do this, mtk_stop() won't shut down DMA, and the subsequent call to mtk_open() won't restart DMA and enable IRQs.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: fix TX locking (John Crispin, 2016-04-13, 1 file, -10/+10)
  Inside the TX path there is a lock inside the tx_map function, but this is too late. The patch moves the lock to the start of the xmit function, right before the free count check of the DMA ring happens. If we do not do this, the code becomes racy, leading to TX stalls and dropped packets. This happens because there are 2 netdevs running on the same physical DMA ring.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
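  The resulting ordering is roughly the sketch below; the lock and structure names are illustrative, not the driver's exact identifiers.

    #include <linux/netdevice.h>
    #include <linux/spinlock.h>

    struct example_eth {
        spinlock_t tx_lock; /* serialises the one DMA ring shared by both MACs */
    };

    static netdev_tx_t example_start_xmit(struct sk_buff *skb,
                                          struct net_device *dev)
    {
        struct example_eth *eth = netdev_priv(dev); /* simplified */

        /* Take the lock before the free-count check: checking outside the
         * lock is racy because two netdevs feed the same physical ring. */
        spin_lock(&eth->tx_lock);

        /* if (free descriptors < descriptors needed for this skb) {
         *         netif_stop_queue(dev);
         *         spin_unlock(&eth->tx_lock);
         *         return NETDEV_TX_BUSY;
         * }
         * ... map and queue the skb (tx_map no longer takes the lock) ...
         */

        spin_unlock(&eth->tx_lock);
        return NETDEV_TX_OK;
    }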
* net: mediatek: fix stop and wakeup of queue (John Crispin, 2016-04-13, 1 file, -10/+27)
  The driver supports 2 MACs. Both run on the same DMA ring. If we go above/below the TX ring's threshold value, we always need to wake/stop the queue of both devices. Not doing so can cause TX stalls and packet drops on one of the devices.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: remove superfluous reset call (John Crispin, 2016-04-13, 1 file, -4/+0)
  HW reset is triggered in the mtk_hw_init() function. There is no need to also reset the core during probe.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: mtk_cal_txd_req() returns bad value (John Crispin, 2016-04-13, 1 file, -1/+1)
  The code used to also support the PDMA engine, which had 2 packet pointers per descriptor. Because of this we had to divide the result by 2 and round it up. This is no longer needed as the code only supports QDMA.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* net: mediatek: watchdog_timeo was not set (John Crispin, 2016-04-13, 1 file, -0/+1)
  The original commit failed to set watchdog_timeo. This patch sets watchdog_timeo to HZ.
  Signed-off-by: John Crispin <blogic@openwrt.org>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* drivers: net: cpsw: drop host_port field from struct cpsw_priv (Grygorii Strashko, 2016-04-11, 1 file, -18/+12)
  The host_port field is always assigned 0 and this value has never changed since the cpsw driver was introduced. Moreover, assigning a non-zero value to this field would break current driver functionality. Hence, there is no reason to keep maintaining the host_port field; it can be removed, and the HOST_PORT_NUM and ALE_PORT_HOST defines can be used instead.
  Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* drivers: net: cpsw: fix port_mask parameters in ale calls (Grygorii Strashko, 2016-04-11, 1 file, -13/+9)
  ALE APIs expect to receive port masks as input values for the arguments port_mask, untag, reg_mcast and unreg_mcast. But there are a few places in the code where port masks are passed left-shifted by cpsw_priv->host_port, like below:

      cpsw_ale_add_vlan(priv->ale, priv->data.default_vlan,
                        ALE_ALL_PORTS << priv->host_port,
                        ALE_ALL_PORTS << priv->host_port, 0, 0);

  and cpsw still works only because priv->host_port == 0 and has never been changed. Hence, fix the port_mask parameters in ALE API calls and drop "<< priv->host_port" from all places where it's used to shift a valid port mask.
  Signed-off-by: Grygorii Strashko <grygorii.strashko@ti.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
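  After the fix, the call quoted above simply passes the mask as-is:

      cpsw_ale_add_vlan(priv->ale, priv->data.default_vlan,
                        ALE_ALL_PORTS, ALE_ALL_PORTS, 0, 0);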
* bnxt_en: Add async event handling for speed config changes. (Michael Chan, 2016-04-11, 1 file, -0/+16)
  On some dual port cards, link speeds on both ports have to be compatible. Firmware will inform the driver when a certain speed is no longer supported if the other port has linked up at a certain speed. Add logic to handle this event by logging a message and getting the updated list of supported speeds.
  Signed-off-by: Michael Chan <michael.chan@broadcom.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bnxt_en: Call firmware to approve VF MAC address change. (Michael Chan, 2016-04-11, 3 files, -4/+34)
  Some hypervisors (e.g. ESX) require the VF MAC address to be forwarded to the PF for approval. In Linux PF, the call is not forwarded and the firmware will simply check and approve the MAC address if the PF has not previously administered a valid MAC address for this VF.
  Signed-off-by: Michael Chan <michael.chan@broadcom.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bnxt_en: Shutdown link when device is closed. (Michael Chan, 2016-04-11, 1 file, -0/+16)
  Let firmware know that the driver is giving up control of the link so that it can be shut down if no management firmware is running.
  Signed-off-by: Michael Chan <michael.chan@broadcom.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* bnxt_en: Disallow forced speed for 10GBaseT devices. (Michael Chan, 2016-04-11, 3 files, -0/+10)
  10GBaseT devices must autonegotiate to determine master/slave clocking. Disallow forced speed in ethtool .set_settings() for these devices.
  Signed-off-by: Michael Chan <michael.chan@broadcom.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* Merge git://git.kernel.org/pub/scm/linux/kernel/git/davem/net (David S. Miller, 2016-04-09, 4 files, -14/+31)
|\
| * stmmac: fix adjust link call in case of a switch is attached (Giuseppe CAVALLARO, 2016-04-06, 1 file, -12/+10)
  While initializing the PHY, the stmmac driver sets PHY_IGNORE_INTERRUPT, so the PAL won't call the adjust hook that is needed on some platforms (e.g. STi) to invoke the glue logic. The patch allows the PAL to poll stmmac_adjust_link() just one time when a switch is attached, setting the PHY_IGNORE_INTERRUPT flag only afterwards. With this kind of logic moved inside adjust_link, it also makes sense to move up the check for EEE, which will never be initialized in this scenario.
  Reported-by: Gabriel Fernandez <gabriel.fernandez@linaro.org>
  Signed-off-by: Giuseppe Cavallaro <peppe.cavallaro@st.com>
  Tested-by: Gabriel Fernandez <gabriel.fernandez@linaro.org>
  Cc: Alexandre TORGUE <alexandre.torgue@st.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
| * cxgb4: Add pci device id for chelsio t520-cr adapter (Hariprasad Shenai, 2016-04-06, 1 file, -0/+1)
  Signed-off-by: Hariprasad Shenai <hariprasad@chelsio.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
| * e1000: Double Tx descriptors needed check for 82544 (Alexander Duyck, 2016-04-06, 1 file, -1/+18)
  The 82544 has code that adds one additional descriptor per data buffer. However, we weren't taking that into account when determining the descriptors needed for the next transmit at the end of the xmit_frame path. This change takes that into account by doubling the number of descriptors needed for the 82544, so that we can avoid a potential issue where we could hang the Tx ring by loading frames with xmit_more enabled and then stopping the ring without writing the tail. In addition, it adds a few more descriptors to account for some additional workarounds that have been added over time.
  Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
  Tested-by: Aaron Brown <aaron.f.brown@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
| * e1000: Do not overestimate descriptor counts in Tx pre-check (Alexander Duyck, 2016-04-05, 1 file, -1/+1)
  The current code path is capable of grossly overestimating the number of descriptors needed to transmit a new frame. This specifically occurs if the skb contains a number of 4K pages. The issue is that the logic for determining the descriptors needed is ((S) >> (X)) + 1. When X is 12, this means we were indicating that we required 2 descriptors for each 4K page when we only needed one. This change corrects that by adding (1 << (X)) - 1 to the S value instead of adding 1 after the fact. This way we get an accurate descriptor count, as we are essentially doing a DIV_ROUND_UP().
  Reported-by: Ivan Suzdal <isuzdal@mirantis.com>
  Signed-off-by: Alexander Duyck <aduyck@mirantis.com>
  Tested-by: Aaron Brown <aaron.f.brown@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
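  Expressed as macros (the names here are illustrative), the estimate changes roughly like this:

    /* old: one descriptor per 2^X chunk plus one, so an exact 4096-byte
     * page (X = 12) was counted as needing 2 descriptors */
    #define TXD_USE_COUNT_OLD(S, X)  (((S) >> (X)) + 1)

    /* fixed: round up instead, so a 4096-byte page needs exactly 1;
     * this is equivalent to DIV_ROUND_UP((S), 1 << (X)) */
    #define TXD_USE_COUNT_NEW(S, X)  (((S) + (1 << (X)) - 1) >> (X))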
| * i40e: fix errant PCIe bandwidth message (Jesse Brandeburg, 2016-04-05, 1 file, -0/+1)
  There was an error introduced with commit 3fced535079a ("i40e: X722 is on the IOSF bus and does not report the PCI bus info"), where code was added but the enabling flag is never set.
  CC: Anjali Singhai Jain <anjali.singhai@intel.com>
  CC: Stefan Assman <sassman@redhat.com>
  Fixes: 3fced535079a ("i40e: X722 is on the IOSF bus ...")
  Reported-by: Steve Best <sbest@redhat.com>
  Signed-off-by: Jesse Brandeburg <jesse.brandeburg@intel.com>
  Tested-by: Andrew Bowers <andrewx.bowers@intel.com>
  Signed-off-by: Jeff Kirsher <jeffrey.t.kirsher@intel.com>
* | ibmvnic: Enable use of multiple tx/rx scrqs (John Allen, 2016-04-09, 2 files, -20/+37)
  Enables the use of multiple transmit and receive scrqs, allowing the ibmvnic driver to take advantage of multiqueue functionality. To achieve this, the driver must implement the process of negotiating the maximum number of queues allowed by the server. Initially, the driver will attempt to login with the maximum number of tx and rx queues supported by the server. If the server fails to allocate the requested number of scrqs, it will return partial success in the login response. In this case, we must reinitiate the login process from the request capabilities stage and attempt to login requesting fewer scrqs.
  Signed-off-by: John Allen <jallen@linux.vnet.ibm.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | mlxsw: reg: Fix SBPM register name (Jiri Pirko, 2016-04-08, 1 file, -2/+2)
  Fix a copy-and-paste error and state the name of the SBPM register correctly.
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Reviewed-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | mlxsw: reg: Share direction enum between SBPR, SBCM, SBPM (Jiri Pirko, 2016-04-08, 2 files, -26/+17)
  Same field, same values, so share the same enum.
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Reviewed-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | mlxsw: Do not pass around driver_priv directly (Jiri Pirko, 2016-04-08, 4 files, -24/+31)
  Instead of that, pass mlxsw_core and use a helper to get driver priv from driver code. Looks much cleaner that way.
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Reviewed-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | mlxsw: Pass mlxsw_core as a param of mlxsw_core_skb_transmit* (Jiri Pirko, 2016-04-08, 4 files, -19/+9)
  Instead of passing around driver priv, pass struct mlxsw_core * directly.
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Reviewed-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | mlxsw: Move devlink port registration into common core code (Jiri Pirko, 2016-04-08, 5 files, -41/+55)
  Remove devlink port reg/unreg from spectrum and switchx2 code and rather do the common work in core. That also ensures code separation where devlink is only used in core.c.
  Signed-off-by: Jiri Pirko <jiri@mellanox.com>
  Reviewed-by: Ido Schimmel <idosch@mellanox.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: allow ring size reconfiguration at runtime (Jakub Kicinski, 2016-04-08, 3 files, -21/+136)
  Since most of the required changes have already been made for changing the MTU at runtime, let's use them for ring size changes as well.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: pass ring count as function parameter (Jakub Kicinski, 2016-04-08, 1 file, -9/+14)
  Soon ring resize will call these functions with values different from the current configuration, so we need to explicitly pass the ring count as a parameter.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: convert .ndo_change_mtu() to prepare/commit paradigm (Jakub Kicinski, 2016-04-08, 1 file, -6/+102)
  When changing the MTU on a running device, first allocate new rings and buffers and, once that succeeds, proceed with changing the MTU. Allocating new rings is not really necessary for this operation - it's done to keep the code simple, and because the size of the extra ring memory is quite small compared to the size of the buffers. The operation can still fail midway through if FW communication times out. In that case we retry with the old MTU (rings).
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
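  The prepare/commit split looks roughly like this sketch (the helpers are placeholders invented for illustration, not nfp's actual functions):

    #include <linux/errno.h>
    #include <linux/netdevice.h>

    /* placeholder helpers standing in for the real ring management code */
    static void *example_alloc_rings(struct net_device *netdev, int mtu)
    {
        return netdev; /* dummy non-NULL value */
    }
    static int example_swap_rings_and_reconfig(struct net_device *netdev,
                                               void *new_rings, int mtu)
    {
        return 0;
    }
    static void example_free_rings(void *rings)
    {
    }

    static int example_change_mtu(struct net_device *netdev, int new_mtu)
    {
        void *new_rings;
        int err;

        /* prepare: allocate rings/buffers sized for new_mtu; if this fails
         * the device keeps running untouched with the old MTU and rings */
        new_rings = example_alloc_rings(netdev, new_mtu);
        if (!new_rings)
            return -ENOMEM;

        /* commit: swap the rings in and reconfigure the FW; on a FW timeout
         * the real driver retries the reconfiguration with the old MTU */
        err = example_swap_rings_and_reconfig(netdev, new_rings, new_mtu);
        if (err) {
            example_free_rings(new_rings);
            return err;
        }

        netdev->mtu = new_mtu;
        return 0;
    }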
* | nfp: propagate list buffer size in struct rx_ring (Jakub Kicinski, 2016-04-08, 2 files, -8/+19)
  The free list buffer size needs to be propagated to a few functions as a parameter and added to struct nfp_net_rx_ring, since soon some of these functions will be reused to manage rings with buffers of a size different from nn->fl_bufsz.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: sync ring state during FW reconfiguration (Jakub Kicinski, 2016-04-08, 1 file, -29/+16)
  FW reconfiguration in .ndo_open()/.ndo_stop() should reset/restore queue state. Since we need IRQs to be disabled when filling rings on the RX path, we have to move disable_irq() from .ndo_open() all the way up to IRQ allocation. nfp_net_start_vec() becomes trivial now, so it's inlined.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: slice .ndo_open() and .ndo_stop() up (Jakub Kicinski, 2016-04-08, 1 file, -82/+136)
  Divide .ndo_open() and .ndo_stop() into logical, callable chunks. No functional changes.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: move filling ring information to FW config (Jakub Kicinski, 2016-04-08, 1 file, -18/+32)
  nfp_net_[rt]x_ring_{alloc,free} should only allocate or free ring resources without touching the device. Move setting parameters in the BAR to separate functions. This will make it possible to reuse the alloc/free functions to allocate new rings while the device is running.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: preallocate RX buffers early in .ndo_open (Jakub Kicinski, 2016-04-08, 1 file, -23/+11)
  We want .ndo_open() to have the following structure:
  - allocate resources;
  - configure HW/FW;
  - enable the device from the stack's perspective.
  Therefore filling the RX rings needs to be moved to the beginning of .ndo_open().
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
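  That target structure, as a generic sketch (the phase helpers are placeholders, not nfp's real functions):

    #include <linux/netdevice.h>

    static int example_alloc_resources(struct net_device *netdev)  { return 0; }
    static int example_configure_hw_fw(struct net_device *netdev)  { return 0; }
    static void example_free_resources(struct net_device *netdev)  { }

    static int example_ndo_open(struct net_device *netdev)
    {
        int err;

        /* 1. allocate resources: rings, pre-filled RX buffers, IRQ vectors */
        err = example_alloc_resources(netdev);
        if (err)
            return err;

        /* 2. configure HW/FW using the resources allocated above */
        err = example_configure_hw_fw(netdev);
        if (err) {
            example_free_resources(netdev);
            return err;
        }

        /* 3. enable the device from the stack's perspective */
        netif_tx_start_all_queues(netdev);
        return 0;
    }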
* | nfp: reorganize initial filling of RX rings (Jakub Kicinski, 2016-04-08, 1 file, -41/+78)
  Separate the allocation of buffers from giving them to the FW; thanks to this, it will be possible to move the allocation earlier on the .ndo_open() path and reuse buffers during runtime reconfiguration. Similar to the TX side, clean up the spill of functionality from flush into freeing the ring. Unlike on the TX side, RX ring reset does not free buffers from the ring. Ring reset means only that FW pointers are zeroed and buffers on the ring must be placed in positions [0, cnt - 1).
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: cleanup tx ring flush and rename to reset (Jakub Kicinski, 2016-04-08, 1 file, -44/+37)
  Since we never used flush without freeing the ring later, the functionality of the two operations is mixed. Rename flush to ring reset and move into it all the things which have to be done after the FW ring state is cleared. While at it, do some clean-ups.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: allocate ring SW structs dynamically (Jakub Kicinski, 2016-04-08, 3 files, -17/+37)
  To be able to switch rings more easily on config changes, allocate them dynamically, separately from the nfp_net structure.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: make *x_ring_init do all the init (Jakub Kicinski, 2016-04-08, 1 file, -10/+18)
  The nfp_net_[rt]x_ring_init functions used to be called from the probe path only, and some of their functionality was spilled to the call site. In order to reuse them for ring reconfiguration, we need them to do all the init.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: break up nfp_net_{alloc|free}_rings (Jakub Kicinski, 2016-04-08, 1 file, -79/+47)
  nfp_net_{alloc|free}_rings contained a strange mix of allocations and vector initialization. Remove it, declare vector init as a separate function and handle the allocations explicitly.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>
* | nfp: move link state interrupt request/free calls (Jakub Kicinski, 2016-04-08, 1 file, -11/+12)
  We need to be able to disable the link state interrupt when the device is brought down. We used to just free the IRQ at the beginning of .ndo_stop(). As we now move towards more ordered .ndo_open()/.ndo_stop() paths, LSC allocation should be placed in the "allocate resource" section. Since the IRQ can't be freed early in .ndo_stop(), it is disabled instead.
  Signed-off-by: Jakub Kicinski <jakub.kicinski@netronome.com>
  Signed-off-by: David S. Miller <davem@davemloft.net>