How To Find and Overcome Shell Command Line Length Limitations

May 17, 2008 · LAST UPDATED April 3, 2012


While using the mv or rm command I get a command line length error (Argument list too long). How do I find out the current shell's command line length limitations? How do I overcome these limitations while writing UNIX / BSD / Linux shell utilities?

All shells have a limit on the length of the command line. UNIX / Linux / BSD systems limit how many bytes may be used for a command's arguments and environment variables. When you start a new process or type a command, these limits apply, and if you exceed them you will see an error message as follows on screen:

Argument list too long
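For example, trying to remove a very large number of files with a single glob may fail like this (the path is only an illustration, and the exact wording of the message depends on your shell):
$ /bin/rm -f /nas/data/accounting/*
Sample error:

bash: /bin/rm: Argument list too long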

How do I find out current command line length limitations?

Type the following command (works under Linux / UNIX / BSD operating systems):
$ getconf ARG_MAX
Sample output:

262144

BSD operating systems also support the following command:
$ sysctl kern.argmax
Sample output:

kern.argmax=262144

To get a more accurate picture of the limit, subtract the space already used by your environment variables (hat tip to Jeff):
$ echo $(( $(getconf ARG_MAX) - $(env | wc -c) ))
Output:

261129

How do I overcome the shell command line length limit?

You have the following options to get around these limitations:

  • Use the find or xargs command
  • Use a shell for / while loop

find command example to get rid of "argument list too long" error

$ find /nas/data/accounting/ -type f -exec ls -l {} \;
$ find /nas/data/accounting/ -type f -exec /bin/rm -f {} \;
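If your find supports the POSIX + terminator (modern Linux and BSD versions do), it batches as many file names as fit into each invocation, much like xargs, instead of forking one process per file:

$ find /nas/data/accounting/ -type f -exec ls -l {} +
$ find /nas/data/accounting/ -type f -exec /bin/rm -f {} +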

xargs command example to get rid of "argument list too long" error

$ echo /nas/data/accounting/* | xargs ls -l
$ echo /nas/data/accounting/* | xargs /bin/rm -f
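The echo trick works because echo is a shell builtin, so the expanded glob never passes through execve() and is not checked against ARG_MAX; xargs then splits the list into chunks that fit. It will break, however, on file names containing spaces or newlines. With GNU find and xargs you can pass null-delimited names instead, which handles any file name safely:

$ find /nas/data/accounting/ -type f -print0 | xargs -0 ls -l
$ find /nas/data/accounting/ -type f -print0 | xargs -0 /bin/rm -f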

while loop example to get rid of "argument list too long" error

ls -1 /nas/data/accounting/ | while IFS= read -r file; do mv /nas/data/accounting/"$file" /local/disk/ ; done

Alternatively, you can combine the above methods. Note that find prints the full path for each file, so do not prepend the directory again inside the loop:

find /nas/data/accounting/ -type f |
   while IFS= read -r file
   do
     mv "$file" /local/disk/
   done
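If you have GNU coreutils, mv's -t (--target-directory) option puts the destination first, so xargs can append the source file names at the end and move them in large batches without any loop (a sketch assuming GNU mv, find, and xargs):

find /nas/data/accounting/ -type f -print0 | xargs -0 mv -t /local/disk/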

time command - gives resource usage

Use the time command to find out the exact system resource usage for each command:
$ time find blah blah
$ time ls -1 blah | while read file; do blah "$file"; done
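For example, to compare the per-file -exec form with the batched + form on the same directory (paths are illustrative):
$ time find /nas/data/accounting/ -type f -exec ls -l {} \; > /dev/null
$ time find /nas/data/accounting/ -type f -exec ls -l {} + > /dev/null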

Further reading:

  • Your shell documentation
  • man pages ksh, bash, getconf, sysconf, sysctl, find, and xargs

Comments

1 Mel May 18, 2008 at 10:20 am

When using ‘+’ as end for find(1), you emulate xargs behavior, in that it will only pass the arguments to the command when the argument list is saturated, getting rid of numerous forks.

2 Timothy Hallbeck July 16, 2010 at 9:03 pm

Brief & precise, very helpful and just what I was looking for. Thanks.

3 Jeff Schroeder August 6, 2010 at 2:20 pm

It is a small world… I just googled something unrelated and found this page. Then I noticed that command and realized you linked my website. Thanks!

4 felix021 August 22, 2012 at 1:57 am

Hi, thanks for your post. But I guess the “echo * | xargs cmd” would fail because the asterisk will be replaced with all file names and then the echo command will be faced with an “argument list too long” error. Again the find command helps: find -type f | xargs rm -f
:)

5 Mike September 3, 2012 at 12:05 pm

Another way of staying under the limit is to use your resources wisely. Don’t forget that “$ /bin/rm /nas/data/accounting/*” will, after globbing, prepend every filename with “/nas/data/accounting/”, wasting 21 characters per filename, dramatically increasing the length of your command line. Instead, “$ pushd /nas/data/accounting/; /bin/rm *; popd”. Also, I cannot stress enough how important it is to actually test your examples. In addition to the problem with your xargs example, neither of the while loop samples will work, because you specified “/nas/data/accounting” in both the find/ls and the rm. This will duplicate the path in the rm and cause the rm to fail, if you’re lucky. If you’re unlucky it will succeed, and delete the wrong file.

6 mohammed nv December 28, 2012 at 5:42 am

With Linux 2.6.23, ARG_MAX is not hardcoded anymore. It is limited to a 1/4-th of the stack size (ulimit -s), which ensures that the program still can run at all.

getconf ARG_MAX might still report the former limit (being careful about applications or glibc not catching up, but especially because the kernel still defines it)

reference: http://www.in-ulm.de/~mascheck/various/argmax/

7 GarryBrown March 11, 2013 at 5:27 pm

Thanks for the information. Also try Long Path Tool. It helped me with Error 1320 in Win 7. :)
