Bash Shell Read a Line Field By Field

How do I read a file field-by-field under UNIX / Linux / BSD Bash shell? My sample input data file is as follows:



For each line I need to construct and execute a shell command as follows:
/path/to/deviceMaker --context=$1 -m $permissions $device $deviceType $major $minor
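For illustration, assume each line of the input file holds five comma-separated fields in the order device, device type, major number, minor number, permissions (the device names below are hypothetical, not from the original sample):

```shell
# Create a hypothetical sample input file (device,type,major,minor,permissions)
cat <<'EOF' > /tmp/devices.csv
ttyS2,c,4,66,660
ttyS3,c,4,67,660
EOF
# For the first line, the constructed command would be:
#   /path/to/deviceMaker --context=$1 -m 660 ttyS2 c 4 66
```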

You can use a while loop along with the read command, Internal Field Separator (IFS), and HERE STRINGS as follows:

#!/bin/bash
[ $# -eq 0 ] && { echo "Usage: $0 arg1"; exit 1; }
arg="$1"
cmd="/path/to/deviceMaker"   # command to run for each line
input="/path/to/data.txt"    # your input data file
while read -r line
do
	IFS=, read -r f1 f2 f3 f4 f5 <<<"$line"
	# quote fields if needed
	$cmd --context="$arg" -m $f5 $f1 $f2 $f3 $f4
done < "$input"
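Here is a self-contained run of the same pattern, with echo standing in for deviceMaker (the file path, field values, and context value are illustrative):

```shell
# Demo: parse CSV lines with IFS, read, and a here string
input=/tmp/fields.csv
printf '%s\n' 'sda,disk,8,0,660' 'sdb,disk,8,16,660' > "$input"
arg=system_u
while read -r line
do
    # IFS=, splits the here-string line on commas into five fields
    IFS=, read -r f1 f2 f3 f4 f5 <<<"$line"
    echo "deviceMaker --context=$arg -m $f5 $f1 $f2 $f3 $f4"
done < "$input"
```

Note that setting IFS only on the read command keeps the change local, so the rest of the script sees the default field separator.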


5 comments… add one
  • Chris F.A. Johnson Mar 15, 2010 @ 10:15

    $cmd --context="$arg" -m $f5 $f1 $f2 $f3 $f4

    That line will fail if any of the arguments contain whitespace. They should be quoted:

    $cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"

    The example can be coded more efficiently:

    while IFS=, read -r f1 f2 f3 f4 f5
    do
        $cmd --context="$arg" -m "$f5" "$f1" "$f2" "$f3" "$f4"
    done < "$input"
  • Vinod Dham Mar 15, 2010 @ 11:56

    > The example can be coded more efficiently:

    Agreed, but consider using AWK instead if you get performance problems.

  • Philippe Petrinko Mar 15, 2010 @ 16:13

    Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?

  • Chris F.A. Johnson Mar 15, 2010 @ 17:44

    If the file is longer than X lines, an awk script will be faster because it doesn’t have to interpret a loop for every line.

    X may be anywhere from a dozen to a gross, or perhaps more or less.

  • Vinod Dham Mar 16, 2010 @ 7:59

    > Do you have any evidence that a [read/while loop] script would be less efficient than an awk script?

    This is based upon my own experience. awk always performed well when we skipped the ksh [read/while] loop for file sizes > 1G. We used the time command to get exact values. See the thread at the nixcraft forum about awk vs shell for counting IPs. Also, code in C will not help or speed things up (or is not worth your time) if awk cannot perform well. awk is a real winner when it comes to processing large files. I have tested and worked on real UNIX (HP-UX v11.0), not on GNU/Linux (no offense, Linux is wonderful too, but the oil industry likes to spend its money on HP gear), so YMMV.
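The awk alternative the commenters describe can be sketched like this, printing the constructed command instead of executing it (the input path and context value are illustrative):

```shell
# Build the deviceMaker command line with awk instead of a read loop
input=/tmp/devices.csv
printf '%s\n' 'sda,disk,8,0,660' > "$input"
# -F, sets the comma field separator; -v passes the context value in
awk -F, -v arg="system_u" '{
    printf "/path/to/deviceMaker --context=%s -m %s %s %s %s %s\n",
           arg, $5, $1, $2, $3, $4
}' "$input"
```

Because awk reads and splits every line in a single process, it avoids re-interpreting a shell loop body per line, which is where the speedup on large files comes from.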
