Bash Shell Read a Line Field By Field

March 14, 2010 · 5 comments · Last updated March 15, 2010

How do I read a file field-by-field under UNIX / Linux / BSD Bash shell? My sample input data file is as follows:

device1,deviceType,major,minor,permissions
device2,deviceType,major,minor,permissions
...
deviceN,deviceTypeN,major,minor,permissions

For each line I need to construct and execute a shell command as follows:
/path/to/deviceMaker --context=$1 -m $permissions $device2 $deviceType $major $minor

You can use a while loop along with the read command, the Internal Field Separator (IFS), and a here string as follows:

#!/bin/bash
input=/path/to/data.txt
[ $# -eq 0 ] && { echo "Usage: $0 arg1"; exit 1; }
arg="$1"
cmd=/path/to/deviceMaker
while read -r line
do
	# split the line on commas; quote the fields so values
	# containing whitespace are passed as single arguments
	IFS=, read -r f1 f2 f3 f4 f5 <<<"$line"
	"$cmd" --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
done <"$input"
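To preview the commands before running them for real, you can substitute echo for the binary and feed in a small inline sample file (a sketch; the device values below are made up for illustration):

```shell
#!/bin/bash
# Dry run: print each constructed command instead of executing it.
# The sample data file is created inline; field values are hypothetical.
input=$(mktemp)
printf '%s\n' 'sda,b,8,0,0660' 'null,c,1,3,0666' > "$input"

while IFS=, read -r f1 f2 f3 f4 f5
do
    # echo stands in for /path/to/deviceMaker; fields are quoted
    echo /path/to/deviceMaker --context="myctx" -m "$f5" "$f1" "$f2" "$f3" "$f4"
done < "$input"

rm -f "$input"
```

Once the printed lines look right, replace echo with the real command path.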


5 comments

1 Chris F.A. Johnson March 15, 2010 at 10:15 am

"$cmd --context="$arg" -m $f5 $f1 $f2 $f3 $f4"

That line will fail if any of the arguments contain whitespace. They should be quoted:

$cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"

—————
The example can be coded more efficiently:

while IFS=, read -r f1 f2 f3 f4 f5
do
   	$cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
done < "$input"


2 Vinod Dham March 15, 2010 at 11:56 am

> The example can be coded more efficiently:

Agreed, but consider using AWK instead if you get performance problems.
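For illustration, a rough awk equivalent might look like this (a sketch: it prints the command lines rather than executing them; the field order and the deviceMaker path follow the question above, and the sample values are hypothetical):

```shell
# Sketch of an awk version of the same loop: one printf per input line.
# Pipe the output to sh, or use awk's system(), to actually run the commands.
printf '%s\n' 'sda,b,8,0,0660' |
awk -F, -v arg="myctx" -v cmd="/path/to/deviceMaker" '
{
    # $1=device  $2=deviceType  $3=major  $4=minor  $5=permissions
    printf "%s --context=%s -m %s %s %s %s %s\n", cmd, arg, $5, $1, $2, $3, $4
}'
```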


3 Philippe Petrinko March 15, 2010 at 4:13 pm

@Vinod
Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?


4 Chris F.A. Johnson March 15, 2010 at 5:44 pm

If the file is longer than X lines, an awk script will be faster because it doesn’t have to interpret a loop for every line.

X may be anywhere from a dozen to a gross, or perhaps more or less.


5 Vinod Dham March 16, 2010 at 7:59 am

> Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?

This is based upon my own experience. awk always performed well when we skipped the ksh [read/while] loop for file sizes > 1G. We used the time command to get exact values. See the thread at the nixcraft forum about awk vs shell for counting IPs. Also, rewriting in C will not help (or is not worth your time) if awk cannot perform well; awk is a real winner when it comes to processing large files. I have tested and worked on real UNIX (HP-UX v11.0), not on GNU/Linux (no offense, Linux is wonderful too, but the oil industry likes to spend its money on HP stuff), so YMMV.
