
Bash Shell: Read a File Field by Field

How do I read a file field-by-field under UNIX / Linux / BSD Bash shell? My sample input data file is as follows:

device1,deviceType,major,minor,permissions
device2,deviceType,major,minor,permissions
...
....
.
deviceN,deviceTypeN,major,minor,permissions

For each line I need to construct and execute a shell command as follows:
/path/to/deviceMaker --context=$1 -m $permissions $device $deviceType $major $minor

You can use a while loop together with the read command, the Internal Field Separator (IFS), and a here string, as follows:

#!/bin/bash
input=/path/to/data.txt
[ $# -eq 0 ] && { echo "Usage: $0 arg1"; exit 1; }
arg="$1"
cmd=/path/to/deviceMaker
while read -r line
do
	IFS=, read -r f1 f2 f3 f4 f5 <<<"$line"
	# quote each field so values containing whitespace survive word splitting
	$cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
done <"$input"
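To see how the IFS=, read here-string split behaves on its own, here is a minimal standalone check; the sample record values (sda,block,8,0,660) are invented purely for illustration:

```shell
#!/bin/bash
# Split one sample CSV record into named fields using a here string.
# The field values below are made up for demonstration only.
line="sda,block,8,0,660"
IFS=, read -r device dtype major minor perms <<<"$line"
echo "device=$device type=$dtype major=$major minor=$minor perms=$perms"
```

Note that prefixing the assignment (IFS=,) applies it to the read command only, so the shell's normal word splitting elsewhere is unaffected.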


5 comments

  • Chris F.A. Johnson March 15, 2010, 10:15 am

    $cmd --context="$arg" -m $f5 $f1 $f2 $f3 $f4

    That line will fail if any of the arguments contain whitespace. They should be quoted:

    $cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"

    —————
    The example can be coded more efficiently:

    while IFS=, read -r f1 f2 f3 f4 f5
    do
       	$cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
    done < "$input"
    
  • Vinod Dham March 15, 2010, 11:56 am

    > The example can be coded more efficiently:

    Agreed, but consider using AWK instead if you get performance problems.

  • Philippe Petrinko March 15, 2010, 4:13 pm

    @Vinod
    Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?

  • Chris F.A. Johnson March 15, 2010, 5:44 pm

    If the file is longer than X lines, an awk script will be faster because it doesn’t have to interpret a loop for every line.

    X may be anywhere from a dozen to a gross, or perhaps more or less.

  • Vinod Dham March 16, 2010, 7:59 am

    > Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?

    This is based upon my own experience. awk always performed well when we skipped the ksh [read/while] loop for file sizes > 1G. We used the time command to get exact values. See the thread at the nixcraft forum about awk vs shell for counting IPs. Also, code in C will not help or speed things up (or is not worth your time) if awk cannot perform well. awk is a real winner when it comes to processing large files. I have tested and worked on real UNIX (HP-UX v11.0), not on GNU/Linux tools (no offense, Linux is wonderful too, but the oil industry likes to spend its money on HP stuff), so YMMV.
