Questions and Answers

Amy Rich
Q I'm trying to install BIND 9.2.1 into a chrooted jail environment on a Solaris 8 machine. I've done the following so far:
              
              Create directories for the Bind stuff:
              
             
mkdir /var/bindjail
cd /var/bindjail
mkdir -p dev usr var etc
cd var
mkdir -p run log named
cd ../usr
mkdir -p lib local/etc local/lib share/lib/zoneinfo platform/SUNW,UltraSPARC-IIi-Engine/lib
Create a password and group entry for the Bind user:  
             
groupadd bind
useradd -d /var/bindjail -s /bin/false -g bind -c "bind user" -m bind
I've installed BIND with /var/bindjail as its root directory, and copied in a bunch of system files:
             
cd /etc
cp syslog.conf netconfig nsswitch.conf resolv.conf TIMEZONE \
  /var/bindjail/etc
cd /usr/local/lib
cp liblwres.so.1 libdns.so.5 libcrypto.so.0.9.6 libisccfg.so.0 \
  libisccc.so.0 libisc.so.4 /var/bindjail/usr/local/lib
cd /usr/lib
cp libnsl.so.1 libsocket.so.1 libpthread.so.1 libc.so.1 \
  libdl.so.1 libmp.so.2 /var/bindjail/usr/lib
cp /usr/platform/SUNW,UltraSPARC-IIi-Engine/lib/libc_psr.so.1 \
  /var/bindjail/usr/platform/SUNW,UltraSPARC-IIi-Engine/lib
Then I set up my named.conf in /var/bindjail/etc and zone files in 
            /var/bindjail/var/named. I also created the empty log files:  
             
touch var/log/bindlog var/run/named.pid
I set up all the permissions in the jail:  
             
chown -R root.bind /var/bindjail
chmod -R go-w /var/bindjail
cd /var/bindjail
chmod 770 var/named var/run var/log
chown bind var/log/bindlog var/run/named.pid
chmod -R o-rwx usr var/run var/log
After getting everything set up, I try to run named, and I get the 
            following error:  
             
/var/bindjail/usr/local/sbin/named -t /var/bindjail -u bind
named: user 'bind' unknown.
I obviously missed a step somewhere. Any help would be appreciated.

A  You seem to have missed creating the group, shadow, and password files in /var/bindjail/etc. However, you may also be missing other pieces needed for a full jail, and a full jail may not be necessary anyway, since BIND 9 doesn't fork for zone transfers the way BIND 8 did. Instead of setting up a full jail under Solaris, the easier route might be just to run named with the -t flag so that it does its own chroot after reading in all of the necessary libraries. That way you only have to copy in the files that BIND actually needs while it's running.
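For the specific error you're seeing, a minimal sketch of populating those files would be something like the following (this assumes the bind entries already exist in the real /etc files, as created by your groupadd/useradd steps above):

# Copy just the entries named needs to resolve the bind user and group
grep '^bind:' /etc/passwd > /var/bindjail/etc/passwd
grep '^bind:' /etc/group > /var/bindjail/etc/group
grep '^bind:' /etc/shadow > /var/bindjail/etc/shadow
chmod 400 /var/bindjail/etc/shadow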
              Q I have a file full of lines that 
              need to be folded onto one single line. I was going to use sed 
              to splice all of the lines together with the following:
              
             
sed -e :1 -e 'N;$!b1' -e 's/\n/ /g'
Unfortunately, this is slow, and sed has a pattern length limitation 
            that becomes a problem when I'm working with really large files. 
            Is there a way to get around the pattern match issue with sed, 
and maybe speed this up some?

A  This wheel has already been invented
              in C, so you won't have to worry about shell globbing from 
              the command line. Take a look on your system for a program called 
              paste. By default, paste will join lines with a tab 
              character, but you can modify that with the -d switch. The syntax 
              you'll want is:
              
             
paste -sd' ' infile > outfile
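For example, a quick check with a small throwaway file (the file name is just a placeholder):

printf 'one\ntwo\nthree\n' > infile
paste -sd' ' infile      # prints: one two three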
Q I'm trying to take a look at 
            SMTP transactions by using snoop on my Ultra 30 running Solaris 8. 
            As root, I've run the following command:  
             
snoop -ta -V | grep SMTP
I can see all of the packets just fine. I want to leave this running 
            for a while, so I redirected the output to a log file by doing the 
            following:  
             
snoop -ta -V | grep SMTP > /tmp/maillog
I don't seem to be getting any output to the log file when I 
            send myself a test message. Can I not redirect the output from snoop 
in this manner?

A  The output of grep is
              buffered when you redirect it to a file or a pipe. Until you have 
              enough data to flush the buffer (matched lines from your snoop output), 
you won't see any entries in your log file. If you want to see entries sooner, set up a loop to send yourself a few dozen test messages; that should generate enough matching output to flush the I/O buffer to the log file.
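A rough sketch of such a loop (substitute your own address; mailx is assumed to be available):

# Send 50 messages through the local MTA so snoop sees SMTP traffic
i=0
while [ $i -lt 50 ]; do
  echo "buffer test $i" | mailx -s "buffer test $i" you@yourdomain.com
  i=`expr $i + 1`
done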
Q We just purchased a brand new Sun Fire 280R with two 900-MHz CPUs. I've noticed that I'm
              logging the following errors via syslog, and I'm rather concerned:
              
             
SUNW,UltraSPARC-III+: [ID 471235 kern.notice] [AFT1] errID \
  0x000005d7.e5cba350 Two Bits were in error
SUNW,UltraSPARC-III+: [ID 697273 kern.info] [AFT2] errID \
  0x000005d7.e5cba350 PA=0x00000000.24000f80
E$tag 0x00000000.90124924 E$state_6 Modified
Do you know what the cause of these errors might be? They're 
only listed as notice and info, but they sound ominous.

A  I've seen other people report
              similar errors, and it turned out to be a bad DIMM. You might want 
              to run your machine through extended diagnostics during POST and 
              see if all the memory looks okay physically.
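One way to follow up (the OBP variable names are worth double-checking against your platform documentation):

# From a root shell, see whether any memory has already been flagged:
/usr/platform/`uname -i`/sbin/prtdiag -v

# From the ok prompt, turn on extended diagnostics for the next boot:
setenv diag-switch? true
setenv diag-level max
reset-all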
Q I am writing a Perl script to do some processing on two files, but one file matches and the other
              doesn't. The difference between the two files is an extra newline 
              at the beginning of the second file.
              
             
file1 contains:
 a
 z
   
file2 contains:
 <blank line>
 a
 z
The relevant bits of the script are:  
             
#!/usr/bin/perl -n
undef $/;
if (m/(a.*z)/s)
 {
  print "$1\n";
 }
When I run the script on file1, there is no output. For some reason
            the match isn't happening correctly. When I run the script on 
            file2, the output is, as expected: 
             
a
z
A  Your problem is that the undef line 
            is only being executed after the Perl script reads the first line. 
            This means that file1 will never match because the undef happens after 
            "a" is read. In the second file, the blank line is read 
            first, then the undef happens and the "a" is read (with 
            the subsequent "z"), giving you your match.  If you want the undef to be executed before the -n loop starts, 
              put it in a BEGIN block:
              
             
BEGIN { undef $/; }
I think the BEGIN block is more readable, but you can also add the 
            -0777 argument to the Perl call instead: 
             
#!/usr/bin/perl -n -0777
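Putting the BEGIN form together with the rest of your script, the corrected version would look something like this:

#!/usr/bin/perl -n
# Undefine the record separator before the implicit -n loop reads anything,
# so each file is slurped whole into $_
BEGIN { undef $/; }
if (m/(a.*z)/s)
 {
  print "$1\n";
 }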
Q I've just inherited a Mandrake 
            7.1 machine running sendmail. This is the mail gateway for a small 
            company, and it needed a bunch of software upgrades for security purposes. 
            After doing all of these upgrades, I've somehow broken sendmail. 
            When I try to update the access database, I get a suspicious error:  
             
# makemap hash /etc/mail/access < /etc/mail/access
makemap: error opening type hash map /etc/mail/access: Invalid argument
What did I break, and how do I fix it?

A  Your syntax to makemap looks fine, and I'm presuming, from the hash-mark prompt, that you're running it as root. This error is commonly seen as a result of database type/version mismatches. If you've recently done a bunch of upgrading and built things from source, it's possible that you have an old copy of, for example, Sleepycat's Berkeley DB software installed somewhere. If you can track down the old version, remove it, then rebuild and reinstall sendmail. Once you have the new version
              of sendmail in place, try moving and recreating the hash file from 
              the flat text file:
              
             
cd /etc/mail
mv access.db access.db.orig
makemap hash access < access
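If you're not sure which Berkeley DB library your binaries were built against, something like the following can help narrow it down (the binary paths are assumptions; adjust for your installation):

# See which libdb each binary is actually linked against
ldd `which makemap`
ldd /usr/sbin/sendmail | grep -i db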
Q I'm trying to debug a DNS problem 
            that only appears intermittently. I'd like to see a trace of 
            all of the DNS servers that are queried when I do an nslookup, but 
            I don't see any built-in options to nslookup that will do that. 
Are there other third-party programs that might help?

A  Nslookup is no longer the preferred method of doing DNS lookups. Instead of using nslookup, use dig. Not only does it report information more accurately, but it also has a load of other very verbose features, including the one that you want: trace. To look up the A record for the host www.foo.bar and see the chain of servers queried along the way, do the following with the dig program as shipped with BIND 9.2.1:
              
             
dig +trace www.foo.bar
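The same form works for other record types; just name the type after the host (MX here is only an example):

dig +trace foo.bar MX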
Q Another department has set up an internal 
            Web server on which the whole company is supposed to keep their documentation. 
            Because the machine is behind a firewall, they decided to leave telnet 
            open. This seems foolish to me, even if the machine is behind 
            a firewall. I don't want my password going over the wire in clear 
            text, so I requested that they install OpenSSH on this machine as 
            well. They complied, but I haven't been able to successfully 
            log in yet. When trying to use ssh or slogin, I get the following 
            error:  
             
ssh_exchange_identification: Connection closed by remote host
Sadly, I don't have access to the box at all, so I can't 
            debug from that side. The other department sent me the sshd_config 
            file, and everything looks fine as far as I can tell. Any hints on 
what may be broken or missing on that machine?

A  This indicates that you got a TCP connection to the server on port 22, but the server immediately closed it before ssh could begin its handshake. This is a common error message when OpenSSH was compiled with libwrap support and your host is not allowed by /etc/hosts.allow (or is explicitly denied by /etc/hosts.deny). If the people in the other department look in their logs (presuming that they're logging failed connection attempts), then they should see that TCP wrappers (libwrap) is denying the connection.
              If they explicitly allow your host (or your range of hosts inside 
              the firewall), then you should be able to get things to work. If 
              you're still having problems, see if you can convince the other 
              department to start sshd with the -d flag and send you the output.
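As a hypothetical example, an entry along these lines in the server's /etc/hosts.allow would admit an internal subnet (the network numbers are placeholders):

# /etc/hosts.allow on the Web server
sshd: 192.168.1.0/255.255.255.0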
              Q I've set up a password-protected 
              directory using a .htaccess file under Apache 1.3.27. My users want 
              to be able to access multiple accounts during the same session, 
              and they're finding this to be a problem. Once they're 
              logged in as User A, they can't change and log in as User B. 
              Is there some way to log out User A and let the person log in as 
              User B?
              A  There is no good way to enable 
              logging in and out with basic authentication (use of the .htaccess 
              file). There are a couple hacks you could try, but they're 
              not elegant and rely on features of specific browsers. Instead of 
              using basic authentication, use your favorite scripting language 
              to manage cookies for your directory. Then you can provide a logout 
              link and have the cookie removed from the client browser. To get 
              started, take a look at some of the Perl cookie modules:
              
             
http://search.cpan.org/search?mode=all&query=cookie
Or take a look at the PHP documentation for setcookie():  
             
http://www.php.net/manual/en/function.setcookie.php
Q I have a FreeBSD 4.7-STABLE machine 
            with a very odd disk space problem. According to df, / is over 
            100% full. When I do a du of everything on the root partition, 
            though, it shows up as only being about 50% full. Do you know where 
my missing space could have gone?

A  I can think of two possibilities off the top of my head. The first is that you have a file somewhere on the root partition that was deleted while it was still open, and so is still taking up space. If those directories live on the root filesystem, use fstat to look for open files in places like /var/log, /var/run, /tmp, and /var/tmp. For example:
              
             
fstat -m -f /tmp
The other possibility is that, at some point, your machine booted without mounting all of its partitions correctly. While the machine was booted in this state, things got written to their "normal" places, which wound up being on the root partition since the proper partition wasn't mounted. Now that the problem is fixed, you have mounted over the directories where the space is being taken up, so you can't see the files that are consuming the space. To fix this, boot from a rescue floppy and mount the root partition on its own. Look for files in any directory that should just be an empty mount point.
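For example, from the rescue environment (the device name below is only an assumption; substitute your actual root slice):

# Mount only the root slice, then see which directories are eating the space
mount /dev/ad0s1a /mnt
du -xk /mnt | sort -n | tail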
Q I have a Sun Ultra 60 running
              Solaris 8 with DiskSuite 4.2.1 installed. I have four partitions 
              on the disk -- /, /var, /usr, and /local. I have two identical 
              disks that are identically formatted to be mirrored for the above 
              four filesystems and swap. I ran metaroot on d0 so that it should 
              be able to boot from either side of the mirror. I've also enabled 
UFS logging on all four metadevices so that recovery time after a crash is minimal.
              Here are my /etc/vfstab entries:
              
             
fd      -       /dev/fd fd      -       no    -
/proc   -       /proc   proc    -       no    -
/dev/md/dsk/d0  /dev/md/rdsk/d0 /       ufs   1    no   logging
/dev/md/dsk/d1  -    -          swap    -     no   -
/dev/md/dsk/d3  /dev/md/rdsk/d3 /usr    ufs   1    no   logging
/dev/md/dsk/d4  /dev/md/rdsk/d4 /var    ufs   1    no   logging
/dev/md/dsk/d5  /dev/md/rdsk/d5 /local  ufs   2    yes  logging
swap    -       /tmp    tmpfs   -       yes   -
When I went to do testing, I powered off the machine without doing 
            a shutdown. Now, every time the machine boots, I get the following 
            message:  
             
The / file system (/dev/rdsk/c0t0d0s0) is being checked.
Can't roll the log for /dev/rdsk/c0t0d0s0.
/dev/rdsk/c0t0d0s0: FREE BLK COUNT(S) WRONG IN SUPERBLK (SALVAGED)
/dev/rdsk/c0t0d0s0: 4336 files, 75857 used, 880412 free
/dev/rdsk/c0t0d0s0: (365 frags, 1017307 blocks, 0.0% fragmentation)
Why can't the root filesystem be rolled back? Did I set something 
up incorrectly?

A  Looking at the device that's being fscked, I would guess that you have a corrupted /etc/mnttab file. If the file were correct, you'd be trying to fsck /dev/md/rdsk/d0, not /dev/rdsk/c0t0d0s0. /etc/mnttab is probably a stale, real file rather than the mount table maintained by the kernel. To fix this, you can boot off of a CD-ROM and remove the
              offending file. After shutting down the machine, start from the 
              ok prompt:
              
             
boot cdrom -s
fsck /dev/dsk/c0t0d0s0
mount /dev/dsk/c0t0d0s0 /a
rm /a/etc/mnttab
Now reboot your machine from the disk. If you try testing the machine by pulling the plug now, the UFS logging should work correctly.
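Once the machine is back up, a quick sanity check is to confirm that root is again reported as the metadevice and that the kernel is maintaining the mount table (under Solaris 8, /etc/mnttab should itself show up as an mntfs mount):

df -k /                    # should list /dev/md/dsk/d0, not c0t0d0s0
grep mntfs /etc/mnttab     # should show /etc/mnttab mounted as mntfs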
Amy Rich, president of the Boston-based Oceanwave Consulting, Inc. (http://www.oceanwave.com), has been a UNIX systems administrator for more than five years. She received a BSCS at Worcester Polytechnic Institute, and can be reached at: [email protected].