Questions
and Answers
Amy Rich
Q I'm going through my Solaris
8 machine and cleaning out accumulated cruft in the root directory.
I've come across the directory /xfn, which contains a bunch of things:
# ls -al
total 31
dr-xr-xr-x 11 root root 11 Oct 15 22:50 .
drwxr-xr-x 28 root root 9728 Oct 15 22:38 ..
dr-xr-xr-x 1 root root 1 Oct 16 09:07 ...
dr-xr-xr-x 1 root root 1 Oct 16 09:07 _dns
dr-xr-xr-x 1 root root 1 Oct 16 09:07 _orgunit
dr-xr-xr-x 1 root root 1 Oct 16 09:07 _thisens
dr-xr-xr-x 3 root root 3 Oct 16 09:07 _thisorgunit
dr-xr-xr-x 1 root root 1 Oct 16 09:07 _x500
dr-xr-xr-x 1 root root 1 Oct 16 09:07 org
dr-xr-xr-x 1 root root 1 Oct 16 09:07 orgunit
dr-xr-xr-x 1 root root 1 Oct 16 09:07 thisens
dr-xr-xr-x 3 root root 3 Oct 16 09:07 thisorgunit
I'm very suspicious of the "..." directory, since I've seen that as
the signature of a cracked system before. I tried to remove the "..."
directory and got the following:
# rm -rf ...
rm: Unable to remove directory ...: Device busy
Is this a sign that I've been hacked? How can I tell how they got
in?
A The /xfn directory is used by
the Federated Naming Service (FNS). FNS provides a method for hooking
up, or federating, multiple naming services
under a single, uniform interface for basic naming and
directory operations. In the Solaris environment, the FNS implementation
consists of a set of naming services with specific policies and
conventions for naming organizations, users, hosts, sites, and services,
as well as support for global naming services such as DNS and X.500.
In other words, it's a way to tie together NIS, NIS+, files, DNS,
X.500, LDAP, and other naming services.
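If you'd like further reassurance that FNS, and not an intruder, created
these entries, you can list the bindings in the FNS initial context with
the fnlist(1) command from the SUNWfns package (a quick sanity check;
the exact output depends on your naming-service configuration):
# fnlist
The names it reports should correspond to the entries you see under /xfn.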
The files and directories you see under /xfn are typical of a
running FNS, as is the fact that you cannot remove them while the
service is running. If you don't want FNS, you can stop or remove it.
If you want to run automounter but not use FNS, comment out the
appropriate lines in /etc/auto_master and run automount -v.
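On a stock Solaris 8 install, the FNS entry in /etc/auto_master is the
/xfn line; comment it out with a leading # and rerun automount -v:
#/xfn           -xfn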
If you want to stop automounter altogether, you can remove /etc/rc2.d/S74autofs.
If you want to remove the FNS software from the system entirely, do:
pkgrm SUNWfnsx SUNWfns SUNWfnsx5
You can also add SUNWatfsu and SUNWatfsr to the above line if you
want to remove automounter from the system as well.
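For example, to remove both FNS and the automounter in one pass:
# pkgrm SUNWfnsx SUNWfns SUNWfnsx5 SUNWatfsu SUNWatfsr
pkgrm will ask you to confirm each package before removing it.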
Q I'd like to learn a scripting
language that will help me do a number of our automated system and
Web-oriented tasks. I've heard of Perl and Python and am wondering
if there's anything else out there that I should take a look at
before picking one language to go with.
A If you're looking for a scripting
language to support both Web applications and systems administration
tasks, then there's always good old Bourne shell (or some derivative
thereof). If you're looking for a more robust language, you might
also want to take a look at Ruby:
http://www.ruby-lang.org/
I don't think Ruby has made much headway into the sys admin world
yet, but I've had some programmers tell me it's a great language.
Ruby's strength is that it has been a pure and complete OO language
right from the start; object orientation was not bolted on as an
afterthought. The OO model
is simple but extensible and features true closures. Ruby has a
very simple syntax and good exception handling features. It also
has good garbage collection and the ability to dynamically load
extension libraries. Another advantage is that variables need no
declaration; scope is indicated by a simple naming convention
(plain 'var' = local variable, '@var' = instance variable, '$var'
= global variable).
When trying to decide which language to learn, be sure to consider
the availability and clarity of documentation, support, established
code base, and any legacy code you may need to support. Also take
a look at the development cycle of the language you choose and whether
it will meet your needs. Do you want a language that's fairly static
in syntax and concepts, or do you want one that's explosively adding
new features and changing syntax with each major revision? How big
of an in-house codebase will you need to support, and can you find
someone with the skills to do that when you move on?
Q I was reading through Sun's Web
site and found the following concerning DiskSuite state database
replicas:
Note -- If you created two replicas on each disk in a two-disk
configuration, DiskSuite will still function if one disk fails.
But because you must have one more than half of the total replicas
available in order for the system to reboot, you will be unable
to reboot in this state.
So if I'm mirroring my boot disk and have 50% of the state database
replicas on each disk, DiskSuite is useless because I won't be able
to boot the machine after I lose a disk? Am I reading this wrong?
A When you lose a disk, the ideal
situation is that you're notified and can replace it before the machine
goes down, or at least remove the replicas on the bad disk so that
100% of them are on the good disk. If the machine does go down before you
have a chance to fix or modify things, you can always boot single-user
and remove the state database replicas from the failed disk
to continue booting. Let's assume that disk0 has failed and your
working mirror is at disk1. From the ok prompt, type:
boot disk1
You'll see the following output:
boot device: /pci@1f,4000/scsi@3/disk@1,0 File and args:
SunOS Release 5.8 Version Generic_108528-07 64-bit
Copyright 1983-2001 Sun Microsystems, Inc. All rights reserved.
WARNING: md: d10: /dev/dsk/c0t0d0s0 needs maintenance
WARNING: forceload of misc/md_trans failed
WARNING: forceload of misc/md_raid failed
WARNING: forceload of misc/md_hotspares failed
configuring IPv4 interfaces: hme0.
Hostname: sluggy
metainit: sluggy: stale databases
Insufficient metadevice database replicas located.
Use metadb to delete databases which are broken.
Ignore any "Read-only file system" error messages.
Reboot the system when finished to reload the metadevice database.
After reboot, repair any broken database replicas which were deleted.
Type control-d to proceed with normal startup,
(or give root password for system maintenance):
At this point, type in the root password to enter single-user mode.
Next, determine the location of the bad state database replicas
and remove them:
# metadb -i
flags first blk block count
M unknown unknown /dev/dsk/c0t0d0s7
M unknown unknown /dev/dsk/c0t0d0s7
a m lu 16 1034 /dev/dsk/c0t1d0s7
a m lu 1050 1034 /dev/dsk/c0t1d0s7
M unknown unknown /dev/dsk/c0t0d0s7
a m lu 2084 1034 /dev/dsk/c0t1d0s7
o - replica active prior to last mddb configuration change
u - replica is up to date
l - locator for this replica was read successfully
c - replica's location was in /etc/lvm/mddb.cf
p - replica's location was patched in kernel
m - replica is master, this is replica selected as input
W - replica has device write errors
a - replica is active, commits are occurring to this replica
M - replica had problem with master blocks
D - replica had problem with data blocks
F - replica had format problems
S - replica is too small to hold current data base
R - replica had device read errors
# metadb -d c0t0d0s7
metadb: sluggy: /etc/lvm/mddb.cf.new: Read-only file system
Be sure that the dead state database replicas are gone:
# metadb
flags first blk block count
a m lu 16 1034 /dev/dsk/c0t1d0s7
a m lu 1050 1034 /dev/dsk/c0t1d0s7
a m lu 2084 1034 /dev/dsk/c0t1d0s7
Finally, reboot the machine so it will come up automatically in multi-user
mode.
# reboot -- disk1
If you're only using DiskSuite to two-way mirror your boot drive,
you can also put the following into /etc/system as of DiskSuite 4.2.1:
set md:mirrored_root_flag=1
This will allow you to boot with only 50% of the replicas available.
If you're doing more than mirroring your boot drive, spread your
replicas across all of your disks. You'll still run into issues if
you have only two controllers and split your replicas evenly between
them: lose an entire controller, and you've lost 50% of your replicas again.
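As a concrete example, the following creates two replicas on each of
three disks (the device names are hypothetical, and slice 7 on each disk
is assumed to be a small slice reserved for replicas); losing any one
disk then still leaves four of the six replicas, which is more than half:
# metadb -a -f -c 2 c0t0d0s7 c0t1d0s7 c1t0d0s7
The -f flag is only needed when you're creating the very first replicas.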
Q I've just set up a sendmail server
on my Linux system. When mail is sent to any account added since
install time, it fails with a "user unknown" message. Mail to accounts
like "root" and "daemon" work just fine, though. I suspect it's
the way I'm creating the accounts that's causing mail to fail, though
it's pretty vanilla. Here's a cut-and-paste of one of my adduser
sessions. Am I doing something wrong here?
root@www:~# adduser
Login name for new user []: JDoe
User id for JDoe [ defaults to next available]:
Initial group for JDoe [users]:
Additional groups for JDoe (separated with commas, no spaces) []:
JDoe's home directory [/home/JDoe]:
JDoe's shell [/bin/bash]: /bin/tcsh
JDoe's account expiry date (YYYY-MM-DD) []:
OK, I'm about to make a new account. Here's what you entered so far:
New login name: JDoe
New UID: [Next available]
Initial group: users
Additional groups: [none]
Home directory: /home/JDoe
Shell: /bin/tcsh
Expiry date: [no expiration]
This is it... if you want to bail out, hit Control-C. Otherwise, press
ENTER to go ahead and make the account.
Making new account...
Changing the user information for JDoe
Enter the new value, or press return for the default
Full Name []: John Doe
Room Number []:
Work Phone []:
Home Phone []:
Other []:
Changing password for JDoe
Enter the new password (minimum of 5, maximum of 127 characters)
Please use a combination of upper and lower case letters and numbers.
New password:
Re-enter new password:
Password changed.
Done...
A Sendmail does not preserve case for
usernames by default. You can either avoid uppercase in your usernames
(preferred) or work around the uppercase names. If you want
sendmail to remain case insensitive, you can create alias entries
of the form:
jdoe: JDoe
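After adding that entry to your aliases file (commonly /etc/aliases or
/etc/mail/aliases), rebuild the alias database so sendmail sees the change:
# newaliases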
You can also change the behavior of sendmail to preserve case. If
you choose to do this, you may have issues with people who send your
users mail with alternate capitalization. To make the modification,
put the following in your .mc file and rebuild your .cf file (this
assumes that you're running a version of sendmail that will understand
this macro):
MODIFY_MAILER_FLAGS(`LOCAL', `+u')dnl
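A typical rebuild from the sendmail source distribution looks like the
following (the paths and .mc file name vary by installation, so adjust
them to match your setup), after which you should restart sendmail:
# cd cf/cf
# m4 ../m4/cf.m4 yourhost.mc > /etc/mail/sendmail.cf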
Q Our site has both internal and external
DNS servers with different zone files. The internal server maps names
and IPs for our internal namespace and is used by people behind our
firewall. The external nameserver serves the Internet at large and
contains no information about our internal hosts. Due to consolidation
and cutbacks, we will now be left with one set of machines to run
DNS on. What's the best way to do split DNS only using one machine?
How do we reconcile different zone file contents for the same domain
name? Is running two named processes in different chrooted directories
the best way to go, or is there something different/better?
A While you can use one machine
with two named processes, the most elegant way to achieve your goal
is to use BIND 9 and views. This will integrate your split DNS zones
under one named process. A view is a namespace selected by client IP
address or TSIG key, rather than by separate listening sockets. Note
that named matches clients against views in the order the views are
defined. A configuration example of views split by IP (if your internal
range is 192.168.1.0/24) would be:
view "internal" {
match-clients { 192.168.1.0/24; };
recursion yes;
zone "your.domain" {
type master;
file "/var/named/internal/your.domain";
};
};
view "external" {
match-clients { !192.168.1.0/24; };
recursion no;
zone "your.domain" {
type master;
file "/var/named/external/your.domain";
};
};
You'll also need one hint zone statement per view. When you
want to transfer both views to a slave server, both the slave and
the master should have two sets of IP addresses. You can then request
a transfer based on the source IP. See the FAQ that comes with the
BIND 9 source for specific examples.
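As a sketch of the slave side (the addresses are hypothetical:
192.168.1.1 is the master's internal address, and 192.168.1.2 is the
slave's internal-facing address), use transfer-source so the master can
tell from the source IP which view is being requested:
view "internal" {
    match-clients { 192.168.1.0/24; };
    zone "your.domain" {
        type slave;
        masters { 192.168.1.1; };
        // transfer from the internal address so the master's
        // internal view answers the request
        transfer-source 192.168.1.2;
        file "/var/named/internal/your.domain";
    };
};
The external view on the slave would be analogous, with transfer-source
set to the slave's external address.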
Q I have been using the following
guide from SANS on securing Solaris 8:
http://www.sans.org/y2k/practical/Jeff_Campione_GCUX.htm
This guide recommends that separate filesystems be made for /dev and
/devices at install time. I did not do the original install, so I'm
trying to modify this after the fact. The one time I tried it, followed
by boot -r, the boot just hung. Can you detail how this should be
done after the initial install, or do I need to reinstall the machine
from scratch?
A You can NOT put /dev and /devices
on anything but the root partition. In fact, there are several incorrect
pieces of information in this document, and I would suggest not
using it at all. If you're looking for information on securing your
Solaris 8 machines, take a look at the Sun Blueprints article:
http://www.sun.com/blueprints/0401/security-updt1.pdf
Amy Rich, president of the Boston-based Oceanwave Consulting, Inc.
(http://www.oceanwave.com), has been a UNIX systems administrator
for more than five years. She received a BSCS at Worcester Polytechnic
Institute, and can be reached at: qna@oceanwave.com.