Bash Shell Read a Line Field By Field

Last updated March 15, 2010

How do I read a file field-by-field under UNIX / Linux / BSD Bash shell? My sample input data file is as follows:



For each line I need to construct and execute a shell command as follows:
/path/to/deviceMaker --context=$1 -m $permissions $device $deviceType $major $minor
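The original sample file is not reproduced above, so as a purely hypothetical illustration (device names, types, and numbers assumed), each comma-separated record could carry the fields device, deviceType, major, minor, and permissions:

```shell
# Hypothetical sample records in the form: device,deviceType,major,minor,permissions
cat <<'EOF'
xvd1,b,3,0,660
xvd2,b,3,1,660
tty9,c,4,9,620
EOF
```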

You can use a while loop along with the read command, the Internal Field Separator (IFS), and a here string as follows:

[ $# -eq 0 ] && { echo "Usage: $0 arg1"; exit 1; }
arg="$1"                      # context passed on the command line
input="/path/to/data.txt"     # CSV input file (path assumed)
cmd="/path/to/deviceMaker"
while read -r line
do
	# split the comma-separated record into five fields
	IFS=, read -r f1 f2 f3 f4 f5 <<<"$line"
	# quote fields so values containing whitespace survive
	"$cmd" --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
done <"$input"
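The key step is the here string feeding read: because IFS is set to a comma for that single read command (leaving the global IFS untouched), the record is split on commas. A minimal, self-contained sketch with a hypothetical record:

```shell
# Split one hypothetical CSV record into named fields via IFS and a here string
line="xvd1,b,3,0,660"
IFS=, read -r device deviceType major minor permissions <<<"$line"
echo "device=$device type=$deviceType major=$major minor=$minor perms=$permissions"
# → device=xvd1 type=b major=3 minor=0 perms=660
```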


Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin and trainer in the Linux operating system and Unix shell scripting. He has worked with global clients and in various industries, including IT, education, defense and space research, and the nonprofit sector. Follow him on Twitter, Facebook, Google+.

5 comments

  1. $cmd --context="$arg" -m $f5 $f1 $f2 $f3 $f4

    That line will fail if any of the arguments contain whitespace. They should be quoted:

    $cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"

    The example can be coded more efficiently:

    while IFS=, read -r f1 f2 f3 f4 f5
    do
        $cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
    done < "$input"
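    To see why the quoting matters, here is a small sketch (value hypothetical) showing how an unquoted expansion word-splits an argument that contains whitespace:

    ```shell
    # Unquoted vs quoted expansion of a value containing a space
    f="hello world"
    set -- $f        # unquoted: word-splits into two arguments
    echo "$#"        # → 2
    set -- "$f"      # quoted: remains a single argument
    echo "$#"        # → 1
    ```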
  2. If the file is longer than X lines, an awk script will be faster because it doesn’t have to interpret a loop for every line.

    X may be anywhere from a dozen to a gross (144) lines, or perhaps more or less.
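    As a sketch of the awk alternative (context value and record below are hypothetical), the same per-record command line can be built without interpreting a shell loop for every line:

    ```shell
    # Build the deviceMaker command line for each CSV record in awk
    printf 'xvd1,b,3,0,660\n' |
      awk -F, -v ctx="system_u" \
        '{ printf "/path/to/deviceMaker --context=%s -m %s %s %s %s %s\n", ctx, $5, $1, $2, $3, $4 }'
    # → /path/to/deviceMaker --context=system_u -m 660 xvd1 b 3 0
    ```

    Piping awk's output to sh would actually execute the generated commands.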

    > Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?

    This is based upon my own experience. awk always performed well when we skipped the ksh [read/while] loop for file sizes > 1G. We used the time command to get exact values. See the thread at the nixCraft forum about awk vs shell for counting IPs. Also, code in C will not help (or is not worth your time) if awk cannot perform well; awk is a real winner when it comes to processing large files. I have tested this on real UNIX (HP-UX v11.0), not on GNU/Linux (no offense, Linux is wonderful too, but the oil industry likes to spend its money on HP stuff), so YMMV.
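    One way to reproduce such a comparison is a rough harness like the following (dummy data generated on the fly; the record format and file path are assumed). Wrap each processing stage in the time builtin to get the actual figures:

    ```shell
    # Generate dummy CSV data, then process it with a shell loop and with awk
    n=1000
    seq "$n" | awk '{ print "dev" $1 ",b,3,0,660" }' > /tmp/bench.csv
    count=0
    while IFS=, read -r f1 f2 f3 f4 f5; do count=$((count+1)); done < /tmp/bench.csv
    echo "shell loop processed $count records"
    awk -F, 'END { print "awk processed " NR " records" }' /tmp/bench.csv
    # prefix each stage with `time` to compare wall-clock cost
    ```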
