Fw: Re: [clue-tech] Possible troubles with SELinux built into rescue kernel ?

mike havlicek mhavlicek1 at yahoo.com
Tue Nov 18 14:26:59 MST 2008




--- On Tue, 11/18/08, Poler Dan <dan at redhat.com> wrote:

> From: Poler Dan <dan at redhat.com>
> Subject: Re: [clue-tech] Possible troubles with SELinux built into rescue kernel ?
> To: mhavlicek1 at yahoo.com, "CLUE tech" <clue-tech at cluedenver.org>
> Date: Tuesday, November 18, 2008, 11:24 AM
> Mike,
> 
> I don't believe my replies to the list get through due
> to an E-Mail address problem. Feel free to contact me
> directly if you'd like, or cross-post this back to the
> list.
> 
> I strongly doubt that SELinux is causing you grief here.
> Its state on the filesystem is irrelevant when you're
> booted in rescue mode; rescue mode will be either
> permissive or disabled. It's not causing the behavior
> you're describing with kernel panics, seg faults, etc.
> The restore is failing on the SELinux stuff because the
> dump contains SELinux info that the rescue kernel, with
> no SELinux support turned on, cannot process. I have
> absolutely no idea if there is a way around this. You
> could try doing a 'setenforce 1' and then the restore to
> see if that works, but I am not hopeful. I suspect that
> you will need to reinstall the OS and then do the restore
> on top of that...
> 
> What does the output of 'getenforce' tell you? If
> SELinux is on (which would be really weird in rescue mode),
> issue 'setenforce 0' (or 1) and try again.
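For reference, the getenforce/setenforce check suggested above looks like this in a shell. This is only a sketch: both commands need root for mode changes, and neither does anything useful if the running (rescue) kernel was booted without SELinux support at all.

```shell
# Print the current SELinux mode: Enforcing, Permissive, or Disabled.
getenforce

# Switch modes at runtime, no reboot needed:
#   setenforce 0  -> permissive (denials are logged but allowed)
#   setenforce 1  -> enforcing
# This fails if the running kernel has no SELinux support compiled in
# or it was disabled at boot.
setenforce 0
```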
> 
> All that being said, though -- SELinux would not cause
> the behavior you're describing with regard to kernel
> panics, seg faults, etc. That's badness right there.
> 
> d
> 
> --
> Dan Poler
> Global Professional Services
> Red Hat, Inc.
> dan at redhat.com
> 
> 
> On Nov 18, 2008, at 10:16 AM, mike havlicek wrote:
> 
> > Hello all,
> > 
> > Problem:
> > 
> > I have a RHEL AS installation where I mirrored the
> > boot and root partitions across two disks. I want to
> > replace one of the disks, but I have been having
> > problems using the tools I want in rescue-mode boots
> > from two different installation CDs. I admit several
> > deficiencies: 1) lack of SELinux knowledge, 2) lack of
> > mdadm savvy, 3) I have not tried the exact CD #1 that
> > was used in the initial build... (I suspect correcting
> > #3 would lead to the same dilemma expressed herein;
> > besides, up2date has been run.)
> > 
> > Pre-rescue:
> > 
> > Dumps of the system's mount points were made to an
> > NFS-mounted volume.
> > 
> > Rescue boot scenarios:
> > 1) booting from CD #1 of RHEL4-U3 i386, circa 07/06
> >    (boot: linux rescue), enabling the network and NOT
> >    enabling the search for installations...
> > 2) booting from CD #1 of CentOS 5.0 i386, circa 08/07
> >    (boot: linux rescue), enabling the network and NOT
> >    enabling the search for installations...
> > 
> > Symptoms:
> > 
> > In rescue boot scenario 1, I get segmentation faults
> > with some of the network-related tools, ultimately
> > leading to kernel panics. E.g., running nslookup
> > results in a segmentation fault, and NFS-mounting (via
> > the server's IP) the volume that contains the
> > above-mentioned dumps ultimately results in a kernel
> > panic...
> > 
> > Scenario 2 led to a restore failure for /etc/mdadm.conf
> > from a dump file (I am able to extract that file from
> > the same dump file with an NFS mount on a third
> > "healthy" Linux system). The failure occurs with
> > SELinux error messages... unfortunately I get:
> > 
> > # restore -xv -f file.dmp mdadm.conf
> > (SNIP)
> > restore:.:EA set security.selinux:system_u:object_r:root_t failed: Operation not supported
> > #
> > 
> > I can tell you that the mdadm.conf file extracts fine
> > from a non-rescue boot environment with SELinux
> > disabled. From my analysis thus far I tend to believe
> > that the way SELinux is enabled in the distro-based
> > rescue boots I have tried could be the source of some
> > of the nuisances I have encountered, and I would not be
> > surprised to see more gotchas with the actual mdadm
> > procedures.
> > 
> > System Info (this is the system in the operating room):
> > 
> > $ cat /etc/redhat-release
> > Red Hat Enterprise Linux AS release 4 (Nahant Update 6)
> > ---------------------------------------------------------------------
> > $ df -k
> > Filesystem           1K-blocks      Used Available Use% Mounted on
> > /dev/mapper/VolGroup00-LogVol00
> >                         547409    161356    357794  32% /
> > /dev/md1                124323     27084     90820  23% /boot
> > none                    258100         0    258100   0% /dev/shm
> > /dev/mapper/VolGroup01-LogVol00
> >                       20642428   9623252   9970600  50% /home
> > /dev/mapper/VolGroup01-LogVol01
> >                       20642428  11220032   8373820  58% /usr
> > /dev/mapper/VolGroup00-LogVol01
> >                        1773912   1011180    672620  61% /var
> > ike:/export/archive   41300992  23213664  17880832  57% /archive
> > ---------------------------------------------------------------------
> > 
> > So, /boot is part of a raid1, as is
> > /dev/mapper/VolGroup00-LogVol00 (/dev/md0). The disks
> > are /dev/hda and /dev/hdb, and I intend to replace
> > /dev/hdb. As shown below, I overlooked potential future
> > problems when initially creating the mirrors by not
> > exactly matching the partition tables of the mirror
> > components. Getting ahead of myself: the replacement
> > disk for hdb has now been partitioned with units
> > matching hda where I think they need to...
> > 
> > This is output from "fdisk -l" representing the
> > original disk layout:
> > 
> > Disk /dev/hda: 120.0 GB, 120034123776 bytes
> > 255 heads, 63 sectors/track, 14593 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > 
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/hda1   *           1          16      128488+  fd  Linux raid autodetect
> > /dev/hda2              17         311     2369587+  fd  Linux raid autodetect
> > /dev/hda3             312         376      522112+  82  Linux swap
> > /dev/hda4             377       14593   114198052+  8e  Linux LVM
> > 
> > 
> > Disk /dev/hdb: 2559 MB, 2559836160 bytes
> > 128 heads, 63 sectors/track, 620 cylinders
> > Units = cylinders of 8064 * 512 = 4128768 bytes
> > 
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/hdb1   *           1          32      128992+  fd  Linux raid autodetect
> > /dev/hdb2              33         620     2370816   fd  Linux raid autodetect
> > ---------------------------------------------------------------------
> > 
> > Replacement:
> > 
> > Disk /dev/hdb: 13.7 GB, 13701316608 bytes
> > 255 heads, 63 sectors/track, 1665 cylinders
> > Units = cylinders of 16065 * 512 = 8225280 bytes
> > 
> >    Device Boot      Start         End      Blocks   Id  System
> > /dev/hdb1   *           1          16      128488+  fd  Linux raid autodetect
> > /dev/hdb2              17         311     2369587+  fd  Linux raid autodetect
> > 
> > ---------------------------------------------------------------------
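As an aside, one common way to make a replacement disk's partition table match the surviving disk, instead of re-entering it by hand, is to clone it with sfdisk. A sketch only, and destructive to the target disk; note that it would copy all four of hda's partitions (including the swap and LVM ones), so the dump file would need editing first if only the two raid partitions are wanted:

```shell
# Dump the surviving disk's partition table as text, then replay it
# onto the new disk. DESTRUCTIVE to /dev/hdb -- double-check the
# device names before running.
sfdisk -d /dev/hda > /tmp/hda.layout   # text dump of hda's layout
sfdisk /dev/hdb < /tmp/hda.layout      # write the same layout to hdb
```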
> > 
> > On the running system, before restructuring the mirror,
> > /proc/mdstat and mdadm report:
> > 
> > $ cat /proc/mdstat
> > Personalities : [raid1]
> > md1 : active raid1 hdb1[1] hda1[0]
> >      128384 blocks [2/2] [UU]
> > 
> > md0 : active raid1 hdb2[1] hda2[0]
> >      2369472 blocks [2/2] [UU]
> > 
> > unused devices: <none>
> > 
> > $ sudo /sbin/mdadm --detail /dev/md0
> > /dev/md0:
> >        Version : 00.90.01
> >  Creation Time : Wed Mar  9 14:40:17 2005
> >     Raid Level : raid1
> >     Array Size : 2369472 (2.26 GiB 2.43 GB)
> >    Device Size : 2369472 (2.26 GiB 2.43 GB)
> >   Raid Devices : 2
> >  Total Devices : 2
> > Preferred Minor : 0
> >    Persistence : Superblock is persistent
> > 
> >    Update Time : Tue Nov 18 08:31:59 2008
> >          State : active
> > Active Devices : 2
> > Working Devices : 2
> > Failed Devices : 0
> >  Spare Devices : 0
> > 
> >           UUID : ecc4fb96:7a48327b:e5ca87f0:126e2a29
> >         Events : 0.15919474
> > 
> >    Number   Major   Minor   RaidDevice State
> >       0       3        2        0      active sync   /dev/hda2
> >       1       3       66        1      active sync   /dev/hdb2
> > 
> > $ sudo /sbin/mdadm --detail /dev/md1
> > /dev/md1:
> >        Version : 00.90.01
> >  Creation Time : Wed Mar  9 14:40:34 2005
> >     Raid Level : raid1
> >     Array Size : 128384 (125.40 MiB 131.47 MB)
> >    Device Size : 128384 (125.40 MiB 131.47 MB)
> >   Raid Devices : 2
> >  Total Devices : 2
> > Preferred Minor : 1
> >    Persistence : Superblock is persistent
> > 
> >    Update Time : Sat Nov 15 19:15:38 2008
> >          State : clean
> > Active Devices : 2
> > Working Devices : 2
> > Failed Devices : 0
> >  Spare Devices : 0
> > 
> >           UUID : 1c9f148c:413770b5:eafd4ff9:4c3d525b
> >         Events : 0.4957
> > 
> >    Number   Major   Minor   RaidDevice State
> >       0       3        1        0      active sync   /dev/hda1
> >       1       3       65        1      active sync   /dev/hdb1
> > 
> >
> ---------------------------------------------------------------------
> > 
> > I think I have the information needed to reconstruct
> > the mirror using the replacement drive, but this is
> > where my lack of mdadm savvy comes into play. My gut
> > tells me that having /etc/mdadm.conf handy in rescue
> > mode might make things easier, which is why I would
> > have liked to put it in place in rescue mode from an
> > NFS-mounted dump file...
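For what it's worth, the usual mdadm sequence for replacing one half of each mirror looks roughly like this. A sketch only, using the device names from the arrays above; it assumes the new disk already carries matching type-fd partitions, and if /boot (md1) must remain bootable from either disk, the boot loader also has to be installed on the new disk afterwards:

```shell
# Mark the old disk's halves as failed and pull them from the arrays.
mdadm /dev/md1 --fail /dev/hdb1 --remove /dev/hdb1
mdadm /dev/md0 --fail /dev/hdb2 --remove /dev/hdb2

# ...power down, swap the physical disk, partition it, then re-add:
mdadm /dev/md1 --add /dev/hdb1
mdadm /dev/md0 --add /dev/hdb2

# Watch the mirrors resync.
cat /proc/mdstat
```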
> > 
> > Here is the content of mdadm.conf from the normally
> > running system that is going to the operating room:
> > 
> > $ cat /etc/mdadm.conf
> > 
> > # mdadm.conf written out by anaconda
> > DEVICE partitions
> > MAILADDR root
> > ARRAY /dev/md0 super-minor=0
> > ARRAY /dev/md1 super-minor=1
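Since the file is this small, one workaround that sidesteps restore's EA failure entirely is to recreate it by hand in the rescue shell (written to /tmp here purely as an illustration; on a running system `mdadm --examine --scan` can usually regenerate the ARRAY lines from the superblocks):

```shell
# Recreate the anaconda-written mdadm.conf verbatim.
cat > /tmp/mdadm.conf <<'EOF'
# mdadm.conf written out by anaconda
DEVICE partitions
MAILADDR root
ARRAY /dev/md0 super-minor=0
ARRAY /dev/md1 super-minor=1
EOF

# Sanity check: both arrays are declared.
grep -c '^ARRAY' /tmp/mdadm.conf   # -> 2
```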
> > 
> > I think I am getting close to collecting the pieces of
> > the puzzle, but... back to my suspicion about how
> > SELinux is implemented in the kernel I get to work with
> > in the rescue-mode boots: I haven't figured out how to
> > put SELinux into disabled mode. I should think that the
> > fact that it is in permissive mode should mean that, at
> > most, it would just bark at any attempted operations it
> > objects to... but I don't get the file I want from the
> > restore in rescue mode.
> > 
> > The read-only filesystem in rescue mode contains
> > /etc/selinux/config with:
> > 
> > SELINUX=permissive
> > SELINUXTYPE=targeted
> > 
> > I suppose one might be able to rig a reread of the
> > configuration from files created elsewhere, but that
> > type of approach seems to make the procedure a little
> > complex.
> > 
> > I would like to simply turn off SELinux in rescue mode
> > to rule it out as a problem. On that thought I found:
> > 
> > boot: linux rescue selinux=0
> > 
> > but it doesn't seem to help...
> > 
> > Any insight would be greatly appreciated.
> > 
> > Thanks,
> > -Mike
> > 
> > 
> > 
> > 
> > 
> > _______________________________________________
> > clue-tech mailing list
> > clue-tech at cluedenver.org
> > http://www.cluedenver.org/mailman/listinfo/clue-tech

