Sunday, 17 April 2022

Linux Interview Questions 2022

Q1. How do you scan newly added LUNs in Linux?

1. Count the existing disks:
fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l

2. Identify the FC host adapters:

# ls /sys/class/fc_host
 host0  host1  host2  host3

 
3. Scan the LUNs on each adapter (a loop covering all adapters is sketched after this list):
 echo "- - -" > /sys/class/scsi_host/host0/scan

 echo "- - -" > /sys/class/scsi_host/host1/scan

 
4. Count the disks again; the difference from step 1 gives the newly added LUNs:
fdisk -l 2>/dev/null | egrep '^Disk' | egrep -v 'dm-' | wc -l
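
The example above scans only host0 and host1, while four adapters were found in step 2. A small loop over everything under /sys/class/scsi_host covers all of them; this is a minimal sketch, and the host names may differ on your system:

for host in /sys/class/scsi_host/host*; do
    # "- - -" rescans all channels, targets and LUNs on this adapter
    echo "- - -" > "${host}/scan"
done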

Q2. How do you create a software RAID 1 array?

1. Identify the disks in the system. Here /dev/sdd and /dev/sde are used for the RAID 1 array; RAID 1 provides redundancy through mirroring.
[root@linux01 ~]# fdisk -l



Disk /dev/sdd: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sdd doesn't contain a valid partition table

Disk /dev/sde: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes

Disk /dev/sde doesn't contain a valid partition table

2. Create the RAID 1 array using the mdadm command:
[root@linux01 ~]# mdadm --create /dev/md0 --level=raid1 --raid-devices=2 /dev/sdd /dev/sde
mdadm: array /dev/md0 started.
[root@linux01 ~]#

To verify the RAID status:
[root@linux01 ~]# cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sde[1] sdd[0]
      2097088 blocks [2/2] [UU]

unused devices: <none>
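
Beyond /proc/mdstat, mdadm can report the array state (sync status, member devices) directly; for example:

mdadm --detail /dev/md0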

We can now format and mount /dev/md0.
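For example, assuming the ext3 filesystem used elsewhere in this post and /mnt as the mount point, a minimal sequence would be:

mkfs -t ext3 /dev/md0
mount /dev/md0 /mnt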

Q3. How do you create an LVM partition on a software RAID 1 array?
We can create an LVM partition on /dev/md0. Refer to Q2 for creating the RAID 1 array.
[root@linux01 ~]# pvcreate /dev/md0
  Physical volume "/dev/md0" successfully created
[root@linux01 ~]# pvs
  /dev/hdc: open failed: No medium found
  PV         VG         Fmt  Attr PSize PFree
  /dev/md0              lvm2 a-   2.00G 2.00G
  /dev/sda2  VolGroup00 lvm2 a-   9.88G    0
[root@linux01 ~]# vgcreate -n raidvg /dev/md0
vgcreate: invalid option -- n
  Error during parsing of command line.
[root@linux01 ~]# vgcreate raidvg /dev/md0
  Volume group "raidvg" successfully created
[root@linux01 ~]# lvs
  LV       VG         Attr   LSize Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao 7.88G
  LogVol01 VolGroup00 -wi-ao 2.00G
[root@linux01 ~]# vgs
  VG         #PV #LV #SN Attr   VSize VFree
  VolGroup00   1   2   0 wz--n- 9.88G    0
  raidvg       1   0   0 wz--n- 2.00G 2.00G
[root@linux01 ~]# pvs
  PV         VG         Fmt  Attr PSize PFree
  /dev/md0   raidvg     lvm2 a-   2.00G 2.00G
  /dev/sda2  VolGroup00 lvm2 a-   9.88G    0
[root@linux01 ~]# lvcreate -L 500M -n raidlv01 /dev/raidvg
  Logical volume "raidlv01" created
[root@linux01 ~]# lvs
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao   7.88G
  LogVol01 VolGroup00 -wi-ao   2.00G
  raidlv01 raidvg     -wi-a- 500.00M
[root@linux01 ~]# mkfs -t ext3 /dev/raidvg/raidlv01
mke2fs 1.39 (29-May-2006)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
128016 inodes, 512000 blocks
25600 blocks (5.00%) reserved for the super user
First data block=1
Maximum filesystem blocks=67633152
63 block groups
8192 blocks per group, 8192 fragments per group
2032 inodes per group
Superblock backups stored on blocks:
        8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409

Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 28 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@linux01 ~]# lvs
  LV       VG         Attr   LSize   Origin Snap%  Move Log Copy%  Convert
  LogVol00 VolGroup00 -wi-ao   7.88G
  LogVol01 VolGroup00 -wi-ao   2.00G
  raidlv01 raidvg     -wi-a- 500.00M
[root@linux01 ~]# mount /dev/raidvg/raidlv01 /mnt
[root@linux01 ~]# df -hP
Filesystem            Size  Used Avail Use% Mounted on
/dev/mapper/VolGroup00-LogVol00  7.7G  6.4G  915M  88% /
/dev/sda1              99M   14M   81M  15% /boot
tmpfs                 440M     0  440M   0% /dev/shm
none                  440M  104K  440M   1% /var/lib/xenstored
/dev/mapper/raidvg-raidlv01  485M   11M  449M   3% /mnt
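
To make the mount persistent across reboots, an entry can be added to /etc/fstab; this is a sketch assuming the /mnt mount point used above:

/dev/raidvg/raidlv01    /mnt    ext3    defaults    0 2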
