How To Find and Overcome Shell Command Line Length Limitations

While using the mv or rm command I get a command line length error ("Argument list too long"). How do I find out the current shell command line length limit? How do I overcome this limit when writing UNIX / BSD / Linux shell utilities?

Every shell has a limit on command line length. UNIX / Linux / BSD systems limit how many bytes can be used for a command's arguments and environment variables. When you start a new process or type a command, these limits apply, and you will see an error message like the following on screen:

Argument list too long

How do I find out current command line length limitations?

Type the following command (works under Linux / UNIX / BSD operating systems):
$ getconf ARG_MAX

BSD operating systems also support the following command:
$ sysctl kern.argmax

To get a more accurate picture of the limit (the environment counts against it too), type the following command (hat tip to Jeff):
$ echo $(( $(getconf ARG_MAX) - $(env | wc -c) ))
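The one-liner above can be expanded into a small script that prints each figure separately. This is just a sketch of the same arithmetic; the output values will differ from system to system:

```shell
#!/bin/sh
# Estimate how many bytes remain for command-line arguments
# once the current environment is accounted for.
arg_max=$(getconf ARG_MAX)     # kernel limit for args + environment
env_bytes=$(env | wc -c)       # rough size of the current environment
headroom=$((arg_max - env_bytes))
echo "ARG_MAX:       $arg_max"
echo "environment:   $env_bytes"
echo "headroom:      $headroom"
```

Bear in mind this is only an estimate: pointers and alignment padding also consume part of the ARG_MAX budget.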


How do I overcome the shell command line length limit?

You have the following options to get around this limit:

  • Use the find or xargs command
  • Use a shell for / while loop

find command example to get rid of “argument list too long” error

$ find /nas/data/accounting/ -type f -exec ls -l {} \;
$ find /nas/data/accounting/ -type f -exec /bin/rm -f {} \;
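As commenter Mel points out below, terminating -exec with `+` instead of `\;` batches as many filenames as fit into each invocation, avoiding one fork per file. A sketch using the article's example path:

```shell
# Batch files into as few rm invocations as possible.
# The `+` terminator is standard POSIX find behavior.
find /nas/data/accounting/ -type f -exec rm -f {} +
```

This is usually dramatically faster than `\;` on large directories, since the command runs once per batch rather than once per file.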

xargs command example to get rid of “argument list too long” error

$ echo /nas/data/accounting/* | xargs ls -l
$ echo /nas/data/accounting/* | xargs /bin/rm -f

This works because echo is a shell builtin, so the expanded file list never passes through the kernel's exec() limit. Note, however, that xargs splits its input on whitespace, so these one-liners will break on filenames that contain spaces or newlines.
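A more robust variant pairs find's -print0 with xargs -0, delimiting filenames with NUL bytes so that spaces and newlines in names are handled safely (both options are supported by GNU and BSD find/xargs):

```shell
# Null-delimited pipeline: safe for any legal filename.
find /nas/data/accounting/ -type f -print0 | xargs -0 rm -f
```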

while loop example to get rid of “argument list too long” error

ls -1 /nas/data/accounting/ | while read -r file; do mv "/nas/data/accounting/$file" /local/disk/ ; done

Alternatively, you can combine the above methods:

find /nas/data/accounting/ -type f |
   while read -r file
   do
     mv "$file" /local/disk/
   done

Note that find already prints the full path, so do not prepend /nas/data/accounting/ again inside the loop; doing so would duplicate the path and make mv fail (or, worse, move the wrong file).
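For completeness, here is a bash sketch of the same find-plus-loop idea using NUL delimiters, which survives even filenames containing newlines. The paths are the article's examples; `read -d ''` is a bash extension, not plain sh:

```shell
#!/bin/bash
# Move every regular file, one mv per file, null-delimited.
find /nas/data/accounting/ -type f -print0 |
while IFS= read -r -d '' file
do
    mv "$file" /local/disk/
done
```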

time command – report resource usage

Use the time command to find out the exact system resource usage of each command:
$ time find /nas/data/accounting/ -type f
$ time ls -1 /nas/data/accounting/ | while read -r file; do echo "$file"; done

Further reading:

  • Your shell documentation
  • man pages for ksh, bash, getconf, sysconf, sysctl, find, and xargs

Comments
  • Mel May 18, 2008 @ 10:20

    When using ‘+’ as the terminator for find(1), you emulate xargs behavior, in that it only passes the arguments to the command when the argument list is saturated, getting rid of numerous forks.

  • Timothy Hallbeck Jul 16, 2010 @ 21:03

    Brief & precise, very helpful and just what I was looking for. Thanks.

  • Jeff Schroeder Aug 6, 2010 @ 14:20

    It is a small world… I just googled something unrelated and found this page. Then I noticed that command and realized you linked my website. Thanks!

  • felix021 Aug 22, 2012 @ 1:57

    Hi, thanks for your post. But I guess the “echo * | xargs cmd” would fail because the asterisk will be replaced with all file names and then the echo command will be faced with an “argument list too long” error. Again the find command helps: find -type f | xargs rm -f

  • Mike Sep 3, 2012 @ 12:05

    Another way of staying under the limit is to use your resources wisely. Don’t forget that “$ /bin/rm /nas/data/accounting/*” will, after globbing, prepend every filename with “/nas/data/accounting/”, wasting 21 characters per filename, dramatically increasing the length of your command line. Instead, “$ pushd /nas/data/accounting/; /bin/rm *; popd”. Also, I cannot stress enough how important it is to actually test your examples. In addition to the problem with your xargs example, neither of the while loop samples will work, because you specified “/nas/data/accounting” in both the find/ls and the rm. This will duplicate the path in the rm and cause the rm to fail, if you’re lucky. If you’re unlucky it will succeed, and delete the wrong file.

  • mohammed nv Dec 28, 2012 @ 5:42

    With Linux 2.6.23, ARG_MAX is not hardcoded anymore. It is limited to a 1/4-th of the stack size (ulimit -s), which ensures that the program still can run at all.

    getconf ARG_MAX might still report the former limit (be careful about applications or glibc not catching up, but especially because the kernel still defines it).



  • Gumnos Feb 19, 2016 @ 21:45

    For the common case, GNU `find` (but not BSD `find`) also offers a `-delete` parameter/command. So your

    $ find /nas/data/accounting/ -type f -exec /bin/rm -f {} \;

    example can become

    $ find /nas/data/accounting/ -type f -delete

