Output of scripts is... well, opinions differ.

While I prefer to have no output while everything works, I want information when something fails. Scripts that run automatically, triggered by jobs, often either throw all output away or handle their own logfiles.

But why? I’ve got no idea.

There’s a whole directory for logs (usually /var/log) and tools like logrotate available to handle this kind of stuff.

So why not log to syslog (and the console, of course) instead?

This line is pretty handy to get the job done:

exec 1> >(logger -s -t "$(basename "$0")") 2>&1

Just to give you a quick idea of what it does:

  • exec:

    If command is specified, it replaces the shell. […] If command is not specified, any redirections take effect in the current shell, and the return status is 0.

    There is no command in this case, so this redirects all output somewhere else.

  • 1> >(command): Everything going to STDOUT shall go into command instead.
  • 2>&1: Everything from STDERR shall go into STDOUT (which, in turn, goes to command).
  • logger -s -t $(basename $0): logger reads the redirected output on its STDIN and writes it to syslog; -s additionally echoes every line to STDERR, so it still shows up on the console, and -t tags each line with the script's name.

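The mechanics are easier to see without touching syslog at all. In this sketch, sed stands in for logger (the [myscript] prefix and the demo function are mine, purely for illustration); the command substitution waits until sed has seen EOF and flushed:

```shell
#!/usr/bin/env bash
# Demo of the same redirection shape, with sed instead of logger
# so the effect is visible without a syslog daemon.

demo() {
    # identical structure to the one-liner: STDOUT into the process
    # substitution, then STDERR merged into STDOUT
    exec 1> >(sed 's/^/[myscript] /') 2>&1
    echo "normal output"   # goes to sed via STDOUT
    echo "an error" >&2    # goes to sed via the merged STDERR
}

# run in a subshell; $( ) only returns once sed has exited
captured=$(demo)
printf '%s\n' "$captured"
# prints:
# [myscript] normal output
# [myscript] an error
```

Both streams come back prefixed, which is exactly what logger does with its tag.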
So far, so good.

There is one downside, though:

By redirecting STDOUT and STDERR into logger we lose the ability to separate STDOUT and STDERR in the script. If this is important, you cannot use this one-liner.

Another solution would be to split the logging into two functions (like log() and err()) and handle the problem that way. But then you have to call these functions explicitly for anything to reach syslog. Not perfect either.
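A minimal sketch of that split, assuming the function names log()/err() and the syslog priorities user.info/user.err (both my choice, not prescribed anywhere):

```shell
#!/usr/bin/env bash
# Explicit logging helpers: messages reach syslog, but STDOUT and
# STDERR stay separate, unlike with the exec one-liner.

tag=$(basename "$0")

log() {
    # informational messages: syslog (user.info) plus STDOUT
    # logger errors are swallowed on systems without a syslog socket
    logger -t "$tag" -p user.info "$*" 2>/dev/null || :
    printf '%s\n' "$*"
}

err() {
    # errors: syslog (user.err) plus STDERR
    logger -t "$tag" -p user.err "$*" 2>/dev/null || :
    printf '%s\n' "$*" >&2
}

log "backup finished"
err "could not reach server"
```

The upside is the separated streams and the per-message priorities; the downside, as said, is that every message has to go through one of these functions.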