Linux: Configuring RX POLLING (NAPI)

My CentOS / RHEL based server uses Intel PRO/1000 network interface cards, and NAPI (Rx polling mode) is supported by the e1000 driver. I have multiple CPUs. How do I enable NAPI to decrease interrupts and improve overall server network performance?

NAPI is enabled or disabled based on the configuration of the kernel. The e1000 driver does support NAPI.

Enable NAPI

Download the latest driver version by visiting the following URL:

  1. Linux kernel driver for the Intel(R) PRO/100 Ethernet devices, Intel(R) PRO/1000 gigabit Ethernet devices, and Intel(R) PRO/10GbE devices.

To enable NAPI, compile the driver module, passing in a configuration option:
make CFLAGS_EXTRA=-DE1000_NAPI install
The install target builds and installs the new module; reload the driver afterwards so the NAPI-enabled module takes effect.
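The full build-and-reload sequence can be sketched as follows (the tarball name and version are illustrative assumptions; substitute the version you actually downloaded):

```shell
# Unpack the e1000 driver source (version number is illustrative)
tar xzf e1000-8.0.35.tar.gz
cd e1000-8.0.35/src

# Build the module with NAPI support and install it
make CFLAGS_EXTRA=-DE1000_NAPI install

# Reload the driver so the NAPI-enabled module takes effect
rmmod e1000
modprobe e1000
```

Note that reloading the module briefly takes the interface down, so run this from the console or a scheduled maintenance window, not over the NIC you are rebuilding.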

Decreasing Interrupts

If you have a multiprocessor system, consider binding the interrupts of the network interface card to a specific CPU to gain additional performance. To find out the IRQ of the network card, run:
# ifconfig eth0
# ifconfig eth0 | grep -i Interrupt
Sample output (the Interrupt: line shows the IRQ; in this example it is 179):

The smp_affinity file takes a hexadecimal CPU bitmask, where each bit selects one CPU. To bind interrupt 179 of eth0 to the third processor in the system (CPU 2, mask 0x4), enter:
# echo 4 > /proc/irq/179/smp_affinity
Add the above command to /etc/rc.local so it persists across reboots. See the Intel e1000 documentation for more information (kernel v2.6.26 and above turn on NAPI support by default). The Broadcom tg3 driver also supports NAPI, and the latest version comes with built-in NAPI support.
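Because smp_affinity takes a bitmask rather than a CPU number, it is easy to get the value wrong by hand. A small sketch that computes the mask from a zero-based CPU index (the cpu variable and the IRQ number 179 are illustrative assumptions):

```shell
# Zero-based index of the CPU that should handle the NIC's interrupts
# (illustrative; pick a lightly loaded core on your own system)
cpu=2

# Each CPU corresponds to one bit: CPU 0 -> 1, CPU 1 -> 2, CPU 2 -> 4, ...
mask=$(printf '%x' $((1 << cpu)))
echo "mask for CPU $cpu is 0x$mask"

# Writing the mask requires root and your actual IRQ number, e.g.:
# echo "$mask" > /proc/irq/179/smp_affinity
```

For CPU 2 this prints mask 0x4, matching the echo command above; for CPU 7 it would print 0x80.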

  • Jeff Schroeder Nov 28, 2009 @ 3:50

    This is not the proper way to do this. First off, what if you have an SSD putting most of the writes on CPU core 3? What if you have some virtualization solution driving the interrupts on core 3 sky high? This will significantly lower performance if you do it incorrectly. How do you know? Blindly pinning a NIC to some random CPU core is a BAD idea unless you actually know what you’re doing. You should first watch the interrupts:
    watch -n 1 cat /proc/interrupts

    Then you can find a free CPU core to pin the NIC to. It will be the obvious one where the number of interrupts is not going up as quickly or isn’t as high as the other ones. You should also look at running irqbalance unless you actually know what you’re doing.

    This might also help if you want to script this stuff:
    intr=$(ifconfig eth0 | awk -F: '/Interrupt:/{print $NF}')

    Additionally, take a look at network card interrupt coalescing. Run ethtool -c eth0 to see the current coalescing settings for eth0; you can change them with ethtool -C. That can change the number of interrupts drastically, by batching large numbers of packets into one interrupt instead of a single interrupt per packet. The tradeoff is latency vs throughput. Pick one or the other.

    My linux blog
