Questions and Answers
Amy Rich
Q I'm writing a bash script
that is supposed to sort a text file that includes a date, a process,
and a time (measured by the time program). The file is a set of
space-separated records, but the process names sometimes have spaces
in them as well. I want to sort on the time field, but I'm
not sure how, since it's not always in the same field number
when the command has arguments. Here's an example file:
04/05/03 ps -auxwww 0.012
04/02/03 ls 0.003
04/01/03 uptime 0.023
A Since the time isn't always in the same field, you can use awk
and its NF variable to get the last field. Probably the easiest way
to do this is to print the last field (the one you want to sort on)
followed by the whole record, pass that to sort, and then cut off
the first column you added:
awk '{print $NF,$0}' file | sort -n | cut -f 2- -d" "
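For the sample file above, this pipeline produces the records ordered
by the time field:
04/02/03 ls 0.003
04/05/03 ps -auxwww 0.012
04/01/03 uptime 0.023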
Q I'm trying to get an A1000 disk
array working with an Ultra 10 running Solaris 8 02/02. I've
installed a Symbios dual differential SCSI card and connected the
A1000 (fully populated with 12 disks) to it. The machine is running
Raid Manager 6.22, Rev 01.54. I've rebooted the machine to configure
the devices, and I receive the following error at boot:
Mar 28 10:39:41 philly /usr/lib/osa/bin/fwutil: [ID 136482
user.error] The controller firmware version for controller c3t5d0
(module: mod2) is 2.05.02. The recommended firmware version for
this application is 3.00.00 or higher.
In the hope that this error message wouldn't prevent me from
getting the RAID configured, I started rm6. I can see the module,
but I can't seem to do anything with it. I tried a health check,
and it claims that it can't scan the module. I checked the current
configuration with raidutil, and that seems to have worked just fine:
/usr/sbin/osa/raidutil -c c3t5d0 -i
LUNs found on c3t5d0.
LUN 0 RAID 0 40960 MB
LUN 1 RAID 0 40960 MB
LUN 2 RAID 0 40960 MB
LUN 3 RAID 0 40960 MB
Vendor ID Symbios
ProductID StorEDGE A1000
Product Revision 0205
Boot Level 02.05.01.00
Boot Level Date 12/02/97
Firmware Level 02.05.02.14
Firmware Date 07/13/98
raidutil succeeded!
Do you have any ideas on why I can't configure my A1000 when
it's clearly up and running and can read the existing configuration?
I don't much care about the existing data since it isn't
ours.
A To begin, you'll want to
upgrade the firmware on the A1000 just to make sure you're
not running into incompatibility issues. See the RAID Manager 6.2.2
Installation and Support Guide at:
http://docs-pdf.sun.com/805-7756-10/805-7756-10.pdf
for information on how to do this.
I suspect that the real issue is that you (or the prior owner
of the A1000) did not clear the existing LUNs when you detached
the A1000 from the previous system. You show four existing LUNs,
and the module name is not a standard default. This will definitely
cause difficulties when you move the A1000 to a new machine. To
reset the array to the default, try:
/usr/sbin/osa/raidutil -c c3t5d0 -X
If you now run /usr/sbin/osa/lad or raidutil -c c3t5d0 -i, you
should see only one default 10-MB LUN 0 (the full reset-and-verify
sequence is sketched at the end of this answer). You can delete this
default placeholder and create your own LUNs. Always make sure you
put a LUN at 0, though, or you may experience issues with the A1000. If this
doesn't work, you might want to try the suggestion at:
http://groups.google.com/groups?selm=701578d3.0301091619.36c8956d%40posting.google.com&rnum=1
This person had a problem very similar to yours and resetting the
array by removing all the disks seemed to work for him.
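For reference, here is the reset-and-verify sequence mentioned above,
collected in one place (a sketch reusing only the commands already
shown; c3t5d0 is the controller name taken from your raidutil output):
/usr/sbin/osa/raidutil -c c3t5d0 -X    # reset the array configuration to the default
/usr/sbin/osa/lad                      # list the array; expect a single module
/usr/sbin/osa/raidutil -c c3t5d0 -i    # confirm only the default 10-MB LUN 0 remains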
Q I'm trying to use the access.db
feature of sendmail. My sendmail.cf contains:
Kaccess hash /etc/mail/access.db
My /etc/mail/access contains:
aol.com ERROR:"550 We don't accept mail from aol.com due to excessive spam."
verified_user1@aol.com OK
verified_user2@aol.com OK
...
verified_userN@aol.com OK
This should, in theory, allow mail from verified_user[1-N]@aol.com,
but it bounces with the following error:
Action: failed
Status: 5.0.0
Diagnostic-Code: smtp;550 5.0.0 We don't accept mail from aol.com
due to excessive spam.
A You need to use FEATURE(`delay_checks') to override blocking
the SMTP client (aol.com) when you have valid users
(verified_userN@aol.com) in your access database. From
cf/README:
By using FEATURE(`delay_checks') the rulesets check_mail
and check_relay will not be called when a client connects or issues
a MAIL command, respectively. Instead, those rulesets will be called
by the check_rcpt ruleset; they will be skipped if a sender has
been authenticated using a "trusted" mechanism, i.e.,
one that is defined via TRUST_AUTH_MECH(). If check_mail returns
an error then the RCPT TO command will be rejected with that error.
If it returns some other result starting with $# then check_relay
will be skipped. If the sender address (or a part of it) is listed
in the access map and it has a RHS of OK or RELAY, then check_relay
will be skipped. This has an interesting side effect: if your domain
is my.domain and you have:
my.domain RELAY
in the access map, then all e-mail with a sender address of <user@my.domain>
gets through, even if check_relay would reject it (e.g., based on
the hostname or IP address). This allows spammers to get around
DNS-based blacklists by faking the sender address. To avoid this problem
you have to use tagged entries:
To:my.domain RELAY
Connect:my.domain RELAY
if you need those entries at all (class {R} may take care of them).
FEATURE(`delay_checks') can take an optional argument:
FEATURE(`delay_checks', `friend')
enables spamfriend test
FEATURE(`delay_checks', `hater')
enables spamhater test
If such an argument is given, the recipient will be looked up in the
access map (using the tag Spam:). If the argument is `friend',
then the default behavior is to apply the other rulesets and make
a SPAM friend the exception. The rulesets check_mail and check_relay
will be skipped only if the recipient address is found and has RHS
FRIEND. If the argument is `hater', then the default behavior
is to skip the rulesets check_mail and check_relay and make a SPAM
hater the exception. The other two rulesets will be applied only if
the recipient address is found and has RHS HATER.
This allows for simple exceptions from the tests, e.g., by activating
the friend option and having:
Spam:abuse@ FRIEND
in the access map, mail to abuse@localdomain will get through (where
"localdomain" is any domain in class {w}). It is also possible
to specify a full address or an address with +detail:
Spam:abuse@my.domain FRIEND
Spam:me+abuse@ FRIEND
Spam:spam.domain FRIEND
Note: The required tag has been changed in 8.12 from To: to Spam:.
This change is incompatible to previous versions. However, you can
(for now) simply add the new entries to the access map, the old ones
will be ignored. As soon as you removed the old entries from the access
map, specify a third parameter (`n') to this feature and
the backward compatibility rules will not be in the generated .cf
file.
So, to allow your verified users, you ideally want the following
in your /etc/mail/access file:
Spam:verified_user1@aol.com FRIEND
Spam:verified_user2@aol.com FRIEND
...
Spam:verified_userN@aol.com FRIEND
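For completeness, here is a minimal sketch of the rebuild steps,
assuming you generate sendmail.cf from an m4 .mc file (the file names
and include path below are assumptions; adjust them to your
installation). Your existing Kaccess line suggests you already use
FEATURE(`access_db'), so add delay_checks alongside it in the .mc file:
FEATURE(`access_db')dnl
FEATURE(`delay_checks', `friend')dnl
Then regenerate sendmail.cf, rebuild the access database, and restart
sendmail:
m4 ../m4/cf.m4 sendmail.mc > /etc/mail/sendmail.cf
makemap hash /etc/mail/access < /etc/mail/access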
Q I'm trying to audit my system
and find all files that are links, either soft or hard. I know the
-type l option to find will give me all of the soft
links, but there doesn't seem to be a similar option for hard
links. Is there a better tool to use, or am I overlooking an option?
A Hard links are not an option to the file type argument of find.
Most versions of find accept the following file types:
b block special
c character special
d directory
f regular file
l symbolic link
p FIFO
s socket
One way to check for hard links is to look at the link count of each
file. A hard link to a file is indistinguishable from the original
directory entry, and any changes to the file are effectively
independent of the name used to reference it. Hard links normally may
not refer to directories and may not span file systems. If you define
a hard-linked file as one with a link count greater than one, you can
use the -links option to find (-links +1 matches files with more than
one link). Since you want only files, be sure to exclude the directory
file type:
find / \! -type d \( -links +1 -o -type l \) -print
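As a quick sanity check (a hypothetical example under /tmp), create a
hard link and confirm the command reports both names; once you have a
suspect file, find's -inum option will list every name on that file
system that shares its inode:
touch /tmp/original
ln /tmp/original /tmp/hardlink                 # both names now have a link count of 2
find /tmp \! -type d \( -links +1 -o -type l \) -print
ls -li /tmp/original                           # the first column is the inode number
find /tmp -xdev -inum 12345 -print             # substitute the inode number reported above
The -xdev option keeps find from crossing into other file systems,
where the same inode number would refer to a different file.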
Q Due to a merging of namespaces, we
have two legacy domain names (domain1.com and domain2.com) that contain
records for all of the same IP addresses. Our /etc/named.conf specifies
separate zone files for the two domains as such:
zone "domain1.com" in {
type master;
file "domain1.com";
};
zone "domain2.com" in {
type master;
file "domain2.com";
};
The zone files both contain (where domainX is either domain1 or domain2):
$TTL 2D
domainX IN SOA primary root.domain2.com. (
2002012004 ; serial
1D ; refresh
2H ; retry
1W ; expiry
2D ) ; minimum
IN NS ns1.domain2.com.
IN NS ns2.domain2.com.
IN MX 10 mx1.domain2.com.
IN MX 20 mx2.domain2.com.
localhost IN A 127.0.0.1
www IN CNAME host2.domain2.com.
host1 IN A 192.168.1.1
host2 IN A 192.168.1.2
...
hostN IN A 192.168.1.N
Is there a way I can keep only one file for both domains, or is there
a script to make changes to multiple zones automatically? Updating
both by hand whenever there's a change is a pain and highly prone
to error.
A You can use one zone file for both domains. All you need to do is
replace the zone name (domainX) with the @ symbol, which is shorthand
for the current origin: the zone name inherited from the zone
statement in named.conf. As long as the zone contents are exactly the
same, you never use a fully qualified name that should differ between
the two zones, and you don't receive dynamic updates, it will work
just fine.
In your case, the only thing you need to change in your zone file
is the SOA. The rest will remain the same. Reference this same zone
file from both named.conf zone declarations on the master. There
won't be any changes to the slave, as each zone will still
have its own separate file there. So, on your master, you'll
have the following in /etc/named.conf:
zone "domain1.com" in {
type master;
file "domain2.com";
};
zone "domain2.com" in {
type master;
file "domain2.com";
};
and the zone file domain2.com will contain:
$TTL 2D
@ IN SOA primary root.domain2.com. (
2002012004 ; serial
1D ; refresh
2H ; retry
1W ; expiry
2D ) ; minimum
IN NS ns1.domain2.com.
IN NS ns2.domain2.com.
IN MX 10 mx1.domain2.com.
IN MX 20 mx2.domain2.com.
localhost IN A 127.0.0.1
www IN CNAME host2.domain2.com.
host1 IN A 192.168.1.1
host2 IN A 192.168.1.2
...
hostN IN A 192.168.1.N
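Once both zone statements point at the shared file, bump the serial
number, reload named, and spot-check that each zone answers with its
own name (a sketch; use rndc reload instead if you're running BIND 9,
and host1 is taken from your example data):
ndc reload
dig @localhost host1.domain1.com a
dig @localhost host1.domain2.com a
Both queries should return the same 192.168.1.1 address record.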
Q We have a number of text files that
are structured based on line number (e.g., the first line is the file
name, the second line is the owner, the third line is the last modification
date, etc.). Unfortunately, there's no common data between the
same number lines of different files. Line 2 in file1 may be Jack,
and line 2 in file2 may be Jill. What I want is to be able to grep
just line 2 of every file so I can get a listing of all of the owners.
Since there isn't necessarily any commonality, though, I'm
not sure how to go about this.
A In this case, grep isn't
going to help you any, because it expects some common string or
regular expression. If you want to get specific numbered lines from
a file, then head, tail, awk, or sed will be the best tools. To
get line two of a file, you can do one of the following:
head -2 file | tail -1
awk 'NR==2{print}' file
sed '2q;d' file
The last option has the benefit of stopping once you've reached
the record you want. If your data files are very large, this will
save you a lot of time because you won't be parsing the entire
file.
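To pull the owner line out of every file at once, wrap any of the
above in a loop (a sketch assuming your files match a glob such as
file*; adjust the pattern to your naming scheme):
for f in file*; do
    printf '%s: ' "$f"
    sed '2q;d' "$f"
done
For the example above, this prints lines such as file1: Jack and
file2: Jill.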
Amy Rich, president of the Boston-based Oceanwave Consulting,
Inc. (http://www.oceanwave.com), has been a UNIX systems
administrator for more than 10 years. She received a BSCS at Worcester
Polytechnic Institute, and can be reached at: qna@oceanwave.com.