New Messages
(E-Media Production Manager's note: Figures and Listings referenced within the New Messages/Letters can be found by
scrolling to the bottom of the current page.)
Editor's Note: A particularly happy feature of Sys Admin is the responsiveness of our readers. Some write to elaborate on, improve, or correct code that we've published; others write to share something they've done that is perhaps too short for an article; and many write to help out fellow readers who've written in with a problem. Space is usually short in the magazine, and so we often are not able to publish all of the mail we receive: with this issue, however, we set aside several extra pages to try to catch you up on what your colleagues have been saying.
Subject: UUCP + Pager
To: saletter@rdpub.com
In "UUCP + Pager = Automated Warning System," Carlos Francisco Gomez described a method for using UUCP to send him messages via pager. I, likewise, have been using UUCP to keep me informed of our systems' status. The method I chose, though not as elegant as Mr. Gomez's, has proven to be a very useful warning system.
Simply put, I just use the cu command to send a message to my pager. With a working "ACU" device defined in your "Devices" file, all you need to do to call your pager is: "cu 5555555---12", where "5555555" is the pager's phone number, and "12" is some status code. (If you prefer to keep it in C, a similar method will also work with the "dial" function.) The only trick is setting how long UUCP should wait after dialing the number before sending the status code. (Each minus sign between the pager number and the status code represents a four-second delay.)
As Mr. Gomez stated in his caveats, UUCP wants to connect to another modem or it believes it has failed. I find this to be an advantage. At least on SV3.2, cu will try to call a second time if it fails to detect a successful connection. This generally gives me two pages about 4-5 minutes apart. I think of it as an added safeguard in case there is a true failure.
I have added the cu (or dial) command to a variety of programs. In general I just look for a bad return code, then page away! Additionally, I have set up any programs which use the paging system to read the pager number from a file. This allows us to change who is "on call" from week to week without much trouble (in fact, we change the pager number via cron).
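A minimal sketch of that idea follows; the file path, pager number, and status code here are made-up examples, not taken from any particular system:

```shell
# build_dialstr - compose the cu dial string for the on-call pager
#   $1 = file holding the current pager number (first line)
#   $2 = status code to send after the pauses
# Each "-" in the dial string asks cu/UUCP to pause four seconds
# before the status code is sent as touch tones.
build_dialstr() {
    pagerno=`head -1 "$1"` || return 1
    echo "${pagerno}---$2"
}

# dialing then becomes, e.g.:
#   cu "`build_dialstr /usr/local/lib/pagerno 12`"
```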
As for status codes, I use a two-part code. The first part identifies the machine reporting the error; the second part identifies the actual error. For example, if my pager reads "9000 1", it means that on my computer called "9000" there was a "1" error. This might translate to mean my HP9000 failed while verifying its backup. The status codes are flexible enough that you can do whatever you need (of course, you'll have to walk around with a Post-it note in your wallet until you remember what the code numbers represent).
I would have to say the main advantage of using cu alone as a paging warning system lies in its ease of implementation. (And if you're wondering . . . I was inspired to set up this paging system after reading the book The Cuckoo's Egg by Clifford Stoll.)
Kevin J. Brandich
Systems Analyst, USDC OHN
kbran@ohnd.uscourts.gov
Editor,
In the January/February 1994 issue of Sys Admin, you printed a letter from Ben Thomas in which he discussed the changes needed to make the "Revised sudo" run on his system with a shadow password file. In that letter Mr. Thomas made a statement which I believe to be incorrect.
Mr. Thomas states, "So this code works with both shadow and non-shadow passwords." While the code Mr. Thomas submitted will work on a system with shadow passwording, I do not believe it will work on a system without it. Specifically, I believe that on a system without shadowing (not even optionally), the compiler/loader will not be able to resolve the reference to getspnam and the #include <shadow.h> (and hence the struct spwd *spw, *getspnam(); declaration). To make his revised code work on such a system, it would be necessary to create a dummy system include file (i.e., <shadow.h>) and a dummy routine (i.e., getspnam()).
I agree with Mr. Thomas's implied intention of having a single version of code that will work with or without shadowing. Such a version could be made by isolating all code pieces dealing with the shadow password file inside #ifdef / #ifndef blocks. A version of Mr. Thomas's code revised in such a manner is shown in Figure 1 (revised sections only; note that the "struct group" line is not included since it is not needed in the find_passwd routine). A sample compilation command line is shown in Figure 2. The scripts referenced in the compilation line (gen and lib) are shown in Figure 3 and Figure 4, respectively.
The -Dshadow definition determines whether the code dealing with the shadow password file is included during the compilation. The -lsec library inclusion brings in the optional security library (the library with the getspent, getspnam, getspuid, ... routines). This example has been tested on these types of systems:
-- Sun 4/280 running SunOS: no shadowing (neither -Dshadow nor -lsec)
-- Pyramid MIS-4T/2 running OSx: shadowing optional and enabled (-Dshadow and -lsec)
-- Pyramid MIS-12S/3 running DC/OSx: shadowing by default (-Dshadow but no -lsec, since shadowing is standard)
Michael J. Chinni
US Army ARDEC
Picatinny Arsenal, NJ
MIL: mchinni@pica.army.mil
UUCP: ...!uunet!pica.army.mil!mchinni
Editor,
In the January/February 1994 issue of Sys Admin, you printed an article by Larry Reznick entitled "Weeding Your System." In it, Mr. Reznick suggests setting up cron jobs to have cron automatically "weed" your system. While the method of individual cron jobs to do the weeding will work well on moderately sized systems, on a large system with many things to weed and many other things that need to be run by cron, the crontab file can get to be quite lengthy and difficult to keep track of.
What we in the main ARDEC UNIX Systems Administration and Customer Service group have done here to deal with this problem is to have a single cron job act as a sort of weeder controller. This program controls all weeding actions that are to be done. We call this program daily. The specifications for daily are shown in Figure 5. A Korn shell script which implements daily is shown in Listing 1.
Michael J. Chinni
US Army ARDEC
Picatinny Arsenal, NJ
MIL: mchinni@pica.army.mil
UUCP: ...!uunet!pica.army.mil!mchinni
[Editor's Note: In the March/April issue we published a letter from Steve Kaplan asking for help. Here is the portion of Steve's letter that states his problem, along with two replies from readers.]
I work for the Federal courts and we are in the process of converting from a Unisys (95/5000) minicomputer to a high-end 80486 PC running Interactive Unix. On the Unisys, we had a dedicated console port, to which we attached a dumb terminal (as you will read, this "dumb" terminal was not so dumb after all). We set the console to "auto print" and attached a simple dot-matrix printer to the console's printer port. This allowed us to capture on paper all of the console messages, hardware panics, diagnostics, bootup and shutdown progress, and lots of other goodies (such as failed logins).
The new PC is just that -- a PC with a VGA/monitor port that drives a standard VGA monitor. My question is: what do I need to do to also capture the console messages onto a printer? We are aware of the "alternate console" concept with UNIX, but there will still be a plethora of messages that are not directed to /dev/console but will only appear on the "PC's monitor."
Are there any hardware (or software) solutions that will send the console monitor's signal to both the console monitor AND a printer? I know devices exist to split the signal and send one signal to LCD/overhead viewers or to convert the signal into video (for TVs), but this doesn't solve our problem.
Steve Kaplan, Systems Manager
steve@cadc.uscourts.gov
U.S. Court of Appeals for the D.C. Circuit
Steve,
I'm not sure about device names, etc. under Interactive UNIX, nor what flavor it is, but it sounds like what you need is something like this:
1. Make sure your syslog setup is running.
2. Add an entry to /etc/syslog.conf that says:
*.debug /var/adm/all_log.log
(Note: if you change your /etc/syslog.conf while syslogd is running, you have to issue a kill -HUP to the syslogd process so it re-reads your /etc/syslog.conf.)
3. Add a line to the beginning of your /etc/rc.local (_AFTER_ the section where the lpd stuff gets started) that says:
tail -f /var/adm/all_log.log > /dev/lp1 &
The tail -f (filename) command will spew out each line as it is added to (filename). Check man tail for more info.
WARNING: Be careful with this. Try the tail -f ... line while the system is up and running to make sure it works before sticking it into /etc/rc.local.
As I said, I have no idea what kind of printer it is, nor what the device name is, so /dev/lp1 may be way out of whack.
Jeff Blaine
CIESIN Systems and Operations (UNIX)
Steve,
With reference to your letter published in the March/April 1994 issue of Sys Admin magazine, there is a software solution to your problem. Have you looked at the syslog facility? This is the mechanism which sends the messages to the console. By default, not all the messages are logged.
By changing the information in the config file /etc/syslog.conf, you can control what information is sent where. For example, it is possible to send key information to a log file, send vital messages to specified users, send key messages to a remote machine, and send a copy of everything to a specific port. If you attach a printer to that port, then you will get a copy of the console output. The only thing you might miss would be the section of the boot output from the start of boot to the moment that the syslog daemon starts. Use of /usr/etc/dmesg will help to reduce this to a minimum.
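As an illustrative sketch, entries along these lines in /etc/syslog.conf would route copies to both a log file and a serial port; the device name and selectors here are assumptions, the fields are usually tab-separated, and syslogd must be sent a HUP to re-read the file:

```
# everything at info and above to a log file
*.info				/var/adm/messages
# kernel and daemon messages copied to a serial port with a printer attached
kern.debug;daemon.debug		/dev/ttya
```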
Hope this helps. Feel free to email me if you would like to discuss this further.
John Woodgate
Senior Consultant,
Meersbrook Technical Services
john@meertech.demon.co.uk
(Working on site for British Telecom)
From: Steve Wertz - Programmer <stevew@clbooks.com>
To: saletter@rdpub.com
Subject: Useful Utility
I don't know if this little script [see Listing 2] has a place in your magazine or not, but here it is.
I have worked only on an SCO system, and am not aware whether a similar command exists on any other platform. This little script changes the mode, owner, and group of a file in one command. This saves the user the hassle of having to invoke three separate commands (chown, chmod, and chgrp). Once you start to use it, you'll realize the benefits of such a command.
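The effect can be sketched as a small portable shell function; this is only an illustration of the idea, not the listing itself, and error handling is minimal:

```shell
# chmog - set mode, owner, and group of files in one call (sketch)
# usage: chmog MODE OWNER.GROUP FILE...
chmog() {
    mode=$1
    owner=`echo "$2" | cut -f1 -d.`
    group=`echo "$2" | cut -f2 -d.`
    shift 2
    # apply all three changes to each file, stopping on the first failure
    for f in "$@"
    do
        chmod "$mode" "$f" && chown "$owner" "$f" && chgrp "$group" "$f" || return 1
    done
}
```

Usage would be, for example, `chmog 644 steve.users report.txt` in place of three separate chmod/chown/chgrp invocations.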
The script works fine under SCO, but I do not have other platforms on which to test it. I'm not an expert at shell programming, so it may be written more efficiently/compactly by someone more experienced. It's so simple and easy that I can't figure out why it's not a standard UNIX command. I understand that this is not your in-depth, technical overview of something as complicated as sendmail, but I'm sure readers will use the command frequently. We all know how often System Administrators deal with file permissions.
Feel free to do anything you wish with it. Enjoy.
Steve Wertz
Computer Literacy Bookshops, Inc.
San Jose, CA 95164-1897
stevew@clbooks.com
Hi Larry,
Saw your article in Sys Admin called "Timing Out Idle Users." I wound up tweaking your killidle script a little bit to get it working properly on an SVR4 Pyramid (DC/OSx) and to suit my needs [see Listing 3]. You've likely got a really elaborate version running at your site, but I thought I'd pass along my changes in case you were interested. If you're not interested, just redirect this to /dev/null.
In particular, I found that awk didn't want to work quite right, so I switched to nawk. The PATH variable was added so I could be sure that the shell would have the appropriate pathing when run from cron, as many machines have little or no set-up for variables when run from cron.
I added redirection to a logfile and added a second variable to be passed to nawk (TIME) so that my logfile would tell me when the person was logged out.
Finally, instead of simply calling kill -X once, I got to thinking about the write-up in the article and the merits of doing one over the other. I thought it might be best to try to kill the processes by asking nicely first, a la kill -1, then kill -15, and finally, if they were still lingering around, a kill -9. This seems to provide the best of all possible worlds. If the silly process won't yield when asked nicely -- then stop asking nicely.
The redirection of stderr to /dev/null is to keep the error messages from kill (about not finding such-and-such a PID when it has already been killed) from showing up in the log or in mail.
Michael Faurot
mfaurot@bogart.UUCP
Larry Reznick responds: Thank you for writing. I'm very interested in what other administrators are doing with my ideas and other Sys Admin writers' ideas.
About awk vs. nawk: if I recall correctly, awk on SCO is hard-linked to nawk. On some systems, the old awk is named oawk and awk is hard-linked to whichever you prefer. I can't think of why someone might want awk to point to oawk, but there it is. I don't think SCO provides oawk. So, I never ran into the problems you may have had requiring nawk. On the rare system I've used that has awk leading to the old awk, I rename it oawk and hard-link awk to nawk.
Adding the PATH variable can't hurt. I've done it in several other cron shell scripts. cron does get an initial PATH setting, though: the current directory, /bin, /usr/bin, and /usr/lbin. Because most standard programs run by cron jobs are in /bin and /usr/bin, a PATH setting usually isn't needed. Sometimes I put a PATH variable in the script, sometimes I force full pathnames into the program's invocation, depending on how many programs the script uses outside of /bin and /usr/bin.
I like your additions of the timestamping and logfile capturing of the killed processes. I originally thought about trying the less punitive kill signals, but settled for the simple, sure way. You're right, of course, that asking nicely before terminating with extreme prejudice is worth doing. The one flaw with the way you put it together is that your script uses all three kill signals. The kill(1) man page doesn't claim an exit status. I wonder if there isn't a way to OR those three together so that subsequent signals aren't necessary if an earlier one succeeds?
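One way to get that effect, if the escalation is moved outside awk, is to probe with kill -0 between attempts: kill -0 delivers no signal but reports whether the process still exists. The function below is only a sketch of that idea, not part of either script:

```shell
# soft_kill - escalate signals only while the target process survives
# usage: soft_kill PID
soft_kill() {
    for sig in 1 15 9
    do
        # kill -0 sends nothing; it just tests for the process's existence
        kill -0 "$1" 2>/dev/null || return 0   # already gone, stop escalating
        kill -$sig "$1" 2>/dev/null
        sleep 2
    done
}
```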
To: saletter@rdpub.com
Subject: Not Finding What One is Looking For
You have published both an article ("checkcron: Checking for the Unexpected") and a letter (from T. Mike Howeth, in your March/April 1994 edition) regarding screening out the grep command itself when using it against the output of ps. The article's solution was:
ps -furoot | sed '/grep/d' | grep cron
The letter suggested this was overkill, and suggested the following:
ps -furoot | grep '[c]ron'
Although I agree the above is elegant, I wish the writer had explained why it works (the explanation given, "This method simply uses metacharacters to force an expression to not match itself," didn't help).
Although it is not as elegant, I have always used the following, which is at least simple and easy to remember:
ps -furoot | grep cron | grep -v grep
This uses the -v option of grep itself to screen out the grep command.
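For the record, the bracketed version works because the regular expression [c]ron still matches the text "cron", while the ps line for the grep process itself shows the literal five characters "[c]ron", which that pattern does not match. A quick demonstration, with echoed lines standing in for ps output:

```shell
# The bracket expression [c] matches only the letter "c", so the
# pattern matches a real "cron" line...
echo "root  99  cron daemon"  | grep '[c]ron'
# ...but not the line showing grep's own argument, which contains
# the literal text "[c]ron" rather than "cron":
echo "root 100 grep [c]ron"   | grep '[c]ron'
```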
Mary Straus
Exemplar Logic, Inc.
mary@exemplar.com
Figures 1-5:
Figure 1: Michael Chinni's revision to Ben Thomas's code
#include <pwd.h>
#ifdef shadow
#include <shadow.h>
#endif
char *find_passwd(int user_id)
{
    struct passwd *pw, *getpwuid();
#ifdef shadow
    struct spwd *spw, *getspnam();
#endif
    if ((pw = getpwuid(user_id)) == NULL)
        return ("unknown");
#ifndef shadow
    return (pw->pw_passwd);
#else
    if ((spw = getspnam(pw->pw_name)) != NULL)
        return (spw->sp_pwdp);
    else
        return ("unknown");
#endif
}
Figure 2: Michael Chinni's sample compilation command line
cc -O `./gen` sudo.c `./lib`
Figure 3: Michael Chinni's gen script
#!/bin/sh
#
# gen
#
INCL_SHADOW=/usr/include/shadow.h
LIBSEC=/usr/lib/libsec.a
SHADOW_FILE=/etc/shadow
OPT=""
# check whether <shadow.h> should be included in the program
if [ -s ${INCL_SHADOW} ] && [ -r ${INCL_SHADOW} ]
then
if [ \( -s ${LIBSEC} -a -r ${LIBSEC} \) -o -s ${SHADOW_FILE} ]
then
OPT="-Dshadow"
fi
fi
echo $OPT
Figure 4: Michael Chinni's lib script
#!/bin/sh
#
# lib
#
INCL_SHADOW=/usr/include/shadow.h
LIBSEC=/usr/lib/libsec.a
OPT=''
SHADOW_FILE=/etc/shadow
# check if it should link the library /usr/lib/libsec.a
if [ -s ${INCL_SHADOW} ] && [ -r ${INCL_SHADOW} ]
then
if [ \( -s ${LIBSEC} -a -r ${LIBSEC} \) -o -s ${SHADOW_FILE} ]
then
OPT="-lsec"
fi
fi
echo $OPT
Figure 5: Specifications for Michael Chinni's "daily" script
1. Run certain commands (with certain optional arguments). A list of
commands will be located in /local/lib/daily/cmds_2_be_run:
/etc/sa -s > /dev/null
# check and archive file system sizes to track growth
/local/bin/dfdaily
# trim the wtmp file
/local/bin/wtmptrim -f /dev/null 45
2. Copy certain files to certain archival directories.
3. Clean out certain archival directories at certain intervals
(smallest unit = 1 day).
4. Combine the above 2 items into a database, /local/lib/daily/db
(db.local supersedes db.global entries where 3rd fields match; db.global is
the master localnet-wide db file; db.local is the db file specific to this
system). Entries will be in the following format:
Full Path of File to Archive; What to Archive to (f=file, d=dir); Where
to Archive; How Many Archive Files to Keep
Entries for the database will (minimally) include:
/usr/adm/security;d;/archive/adm/security;45
/usr/adm/messages;d;/archive/adm/messages;31
;d;/archive/adm/df;45
/usr/adm/pacct;f;/dev/null;0
/usr/adm/savacct;f;/archive/adm/savacct;1
/usr/adm/usracct;f;/archive/adm/usracct;1
Blank lines are allowed in the database, as are comment lines (prefixed
with a "#" sign). An entry without a field1 indicates that there is a separate
program to do the archiving; "daily" is to do only step 3.
Listings 1-3:
Listing 1: Michael Chinni's "daily" script
#!/bin/ksh
# FILE: daily
#
# PROGRAM: daily
#
# DESCRIPTION:
# Daily Accounting and File Roll-Out Program.
#
# OPTIONS:
# None
#
# ARGUMENTS
# None.
#
# EXIT CODES:
# None
#
# ORIGINAL AUTHOR:
# Michael J. Chinni <mchinni@pica.army.mil>
#
# HISTORY OF PROGRAM CHANGES:
#
# DATE VERSION DONE BY DESCRIPTION
# -------- ------- -------- ----------------------------------------------
# 12/13/93 1.0.00 mchinni Original release.
#
# Set PATH
PATH=/bin:/usr/5bin:/usr/bin:/usr/ucb:/usr/ccs/bin:/usr/mmdf:/local/bin
export PATH
# Set independent variables:
field1=""
field2=""
field3=""
field4=""
NULL="/dev/null"
PROG=`basename $0` # name of this program
THISHOST=`hostname` # name of this host
# Set dependent variables:
LIB="/local/lib/${PROG}"
DB=${LIB}/db
CMDS2BERUN=${LIB}/cmds_2_be_run
# Set dependent vars:
TMP=/tmp/${PROG}$$ # dir for temp files
# Create the dir for temp. files:
rm -rf ${TMP}
mkdir ${TMP}
chmod 0700 ${TMP}
# Set trap for signals:
trap '\
EXITCODE=$? ; rm -rf ${TMP} ; echo "${PROG}: Exiting" >&2 ; exit ${EXITCODE} \
' 01 02 03 04 05 15
#
# script operation is broken into 3 primary segments, with the work for each
# segment being done by a separate function. Functions are:
# run_cmds archive_data cleanup_archive
#
# in addition to the 3 primary functions, there is one utility function:
# set_fields
#
#
# RUN_CMDS
# ========
#
# execute CMDS2BERUN stdout>/dev/null stderr>${TMP}/cmds_report
# if there were any errors from cmds_2_be_run
# then
# set subject of email message
# send email to root using SUBJECT as a subject and ${TMP}/cmds_report as
# the body of the message
# endif
#
run_cmds() {
/bin/ksh ${CMDS2BERUN} 1>$NULL 2>${TMP}/cmds_report
if [ -s ${TMP}/cmds_report ]
then
SUBJECT="${PROG}: ${THISHOST}: cmds error report"
mail -s "${SUBJECT}" root < ${TMP}/cmds_report
fi
}
#
# SET_FIELDS
# ==========
#
# "set_fields" is used as a utility function
# it takes a single input line (i.e. $1) and parses it into global variables:
# field1, field2, field3, and field4.
#
set_fields() {
field1=`echo $1 | cut -f1 -d';'`
field2=`echo $1 | cut -f2 -d';'`
field3=`echo $1 | cut -f3 -d';'`
field4=`echo $1 | cut -f4 -d';'`
}
#
# ARCHIVE_DATA
# ============
#
# while input still remains
# do
# set CHAR1 to be the 1st character of the current line of input
# parse fields from the current line of input
# if the current line of input is not a comment and field1 is not empty
# then
# if field2 is a 'd' (file cp'd into a directory)
# then
# set today to be today's date in YYYYMMDD format (using SysV date cmd.)
# set archive_name to be ${field3}/${today} (path NOT division)
# cp the file to be archived ($field3) to archive_name
# chmod 644 archive_name
# chgrp sys archive_name
# null out the file to be archived
# else (field2 is an 'f') (file cp'd over its archive)
# cp the file to be archived on top of its archive
# null out the file to be archived
# fi
# fi
# done
#
archive_data() {
while read line
do
CHAR1=`echo ${line} | cut -c1`
set_fields "${line}"
# test if current line is blank or comment
if [ "${CHAR1}" != "#" -a "${CHAR1}" != " " -a \
     -n "${CHAR1}" -a -n "${field1}" ]
then
if [ "${field2}" = "d" ]
then
today=`att date '+%Y%m%d'`
archive_name="${field3}/${today}"
cp ${field1} ${archive_name}
chmod 644 ${archive_name}
chgrp sys ${archive_name}
cp ${NULL} ${field1}
else
cp ${field1} ${field3}
cp ${NULL} ${field1}
fi
fi
done
}
#
# CLEANUP_ARCHIVE
# ===============
#
# while input still remains
# do
# set CHAR1 to be the 1st character of the current line of input
# parse fields from the current line of input
# if the current line of input is not a comment and field1 is not empty
# then
# cd $field3 (archive directory to be cleaned)
# keep the 1st ${field4} files (based on time modified (latest 1st))
# force the removal of any others
# fi
# done
#
cleanup_archive()
{ while read line
do
CHAR1=`echo ${line} | cut -c1`
set_fields "${line}"
# test if current line is blank or comment
if [ "${CHAR1}" != "#" -a "${CHAR1}" != " " -a \
     -n "${CHAR1}" -a "${field2}" = "d" ]
then
cd ${field3}
let num=${field4}+1
for file in `ls -t | grep -v '^\.' | tail +${num}`
do
rm -f "${file}"
done
fi
done
}
#
# MAIN BODY
#
#
# Run standard 'daily' command file
run_cmds
#
# Archive data using daily/database (DB)
cat ${DB} | archive_data
#
# Clean up archive directory (keep the 1st field4 files)
cat ${DB} | cleanup_archive
#
# Done, so cleanup TMP
# cd to TMP and remove all files
# cd to .. and remove TMP
cd ${TMP} && rm -f *
cd .. && rmdir ${TMP}
exit 0
Listing 2: Steve Wertz's chmog
# Program : chmog
# Author : S. Wertz
# Purpose : Changes mode, owner, and group of a file
if [ $# -lt 3 ]; then
    echo "Usage: $0 mode owner.group filename..."
    exit 1
fi
if echo "$2" | grep '\.' > /dev/null; then :
else
    echo "Usage: $0 mode owner.group filename..."
    exit 1
fi
MODE=$1
OWNER=`echo $2|cut -f1 -d.`
GROUP=`echo $2|cut -f2 -d.`
if cut -f1 -d: /etc/passwd | grep "^$OWNER\$" > /dev/null; then :
else
    echo "$0: No such user \"$OWNER\""
    exit 2
fi
if cut -f1 -d: /etc/group | grep "^$GROUP\$" > /dev/null; then :
else
    echo "$0: No such group \"$GROUP\""
    exit 3
fi
shift 2
for FNAME do
    chmod $MODE "$FNAME" && chgrp $GROUP "$FNAME" && chown $OWNER "$FNAME"
done
Listing 3: Michael Faurot's killidle
:
# killidle
#
# Kill any user login idle for too long
PATH="/usr/bin:/sbin:/usr/sbin:/usr/local/bin:"
export PATH
LOGFILE=/var/adm/killidle.log
TIME=`date +"%a %b %e %T"`
IDLEOUT=${1:-50}
if [ $IDLEOUT -lt 1 ]
then
IDLEOUT=50
fi
who -u | nawk -v "IDLEOUT=$IDLEOUT" -v "TIME=$TIME" '{
name = $1;
terminal = $2;
idle = $6;
pid = $7;
if (idle != ".") {
split (idle, idletime, ":");
if (idletime[2] >= IDLEOUT) {
print "Timeout Warning:", \
name, "on", terminal, \
"idle for", idle, \
"minutes at", TIME
system ("/bin/kill -1 " pid);
system ("sleep 2");
system ("/bin/kill -15 " pid);
system ("sleep 2");
system ("/bin/kill -9 " pid);
}
}
}' >> $LOGFILE 2> /dev/null