[Linux Basics] 24. Disk Management - Configuring RAID with the mdadm Command



01. The mdadm Command


a) Before You Begin…


■ Checking RAID-related kernel modules

# list the loaded device-mapper (dm_*) modules and their descriptions
$ {
    for y in `lsmod | grep dm | cut -d " " -f 1`
    do
    echo $y
    modinfo $y | grep description
    echo
    done
}

dm_mirror
description:    device-mapper mirror target

dm_region_hash
description:    device-mapper region hash

dm_log
description:    device-mapper dirty region log

dm_mod
description:    device-mapper driver
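
The modules listed above belong to device-mapper (dm_*). The MD driver that mdadm manages uses its own personality modules; a similar check for those might look like the sketch below (module names such as linear, raid0, raid1 and raid456 are assumptions here, and on some kernels they are built in rather than loadable):

$ {
    for m in linear raid0 raid1 raid456
    do
    echo $m
    modinfo $m 2>/dev/null | grep description
    echo
    done
}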


■ Adding disks for RAID testing

$ lsblk | grep disk
sda           8:0    0   20G  0 disk

# With the virtual machine still running, add six 10 GB virtual disks, then run the commands below
$ lspci | grep -i scsi
00:10.0 SCSI storage controller: Broadcom / LSI 53c1030 PCI-X Fusion-MPT Dual Ultra320 SCSI (rev 01)

$ find /sys/devices -name scan
/sys/devices/pci0000:00/0000:00:10.0/host0/scsi_host/host0/scan
/sys/devices/pci0000:00/0000:00:07.1/ata1/host1/scsi_host/host1/scan
/sys/devices/pci0000:00/0000:00:07.1/ata2/host2/scsi_host/host2/scan

# rescan the SCSI host the new disks were attached to
$ echo "- - -" > /sys/devices/pci0000:00/0000:00:10.0/host0/scsi_host/host0/scan

$ lsblk | grep 10G
sdb           8:16   0   10G  0 disk
sdc           8:32   0   10G  0 disk
sdd           8:48   0   10G  0 disk
sde           8:64   0   10G  0 disk
sdf           8:80   0   10G  0 disk
sdg           8:96   0   10G  0 disk
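
If it is not obvious which host the new disks hang off, a common shortcut is simply to rescan every SCSI host; a minimal sketch (writing "- - -" to a host with nothing new attached is harmless):

$ {
    for h in /sys/class/scsi_host/host*/scan
    do
    echo "- - -" > $h
    done
}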


b) The mdadm Command


■ Command syntax

$ mdadm [mode] <RAID-DEVICE> [options] <component-devices>

Option                            Description
--query DEVICE                    Show brief information about a device.
--detail RAID-DEVICE              Show detailed information about a RAID device.
--create                          Create a RAID device.
--level=LEVEL                     Set the RAID level. The LEVEL values are listed below; this post covers RAID 0, 1, 5 and 6.
                                  - linear: same as RAID 0 concatenation
                                  - stripe: RAID 0 striping
                                  - mirror: RAID 1
                                  - 5: RAID 5
                                  - 6: RAID 6
--raid-devices=N                  Set the number of component devices in the array (N is a number).
--fail COMPONENT                  Mark a component of a running RAID device as faulty before replacing it.
--remove COMPONENT | RAID-DEVICE  Remove a component from a RAID device, or remove the RAID device itself.
--add COMPONENT                   Add a component to a RAID device.
--grow                            Change the size (or layout) of a RAID device.
--stop                            Switch a RAID device to the inactive state.
--run                             Switch a RAID device to the active state.
--spare-devices=N DEV             Set the number of hot spares (N) and the devices to use as spares.
--detail --scan RAID-DEVICE       Print the RAID configuration, e.g. to save it to a file.
--version                         Print the mdadm version.
--zero-superblock                 Overwrite the MD superblock on a device with zeros, if one is present.
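
As a quick illustration of how these options combine, the command below (purely hypothetical device names) would create a RAID 5 array from three member disks plus one hot spare in a single step:

$ mdadm --create /dev/md9 --level=5 --raid-devices=3 --spare-devices=1 /dev/sdw /dev/sdx /dev/sdy /dev/sdz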


02. Configuring RAID Linear (RAID 0 Concatenation)



■ Configuring RAID Linear

$ mdadm --version
mdadm - v4.2 - 2021-12-30 - 6


$ mdadm --query
mdadm: No devices given.


$ cat /proc/mdstat
Personalities :
unused devices: <none>

# Create the RAID array
$ mdadm --create /dev/md0 --level=linear --raid-devices=2 /dev/sdd /dev/sdc
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.


$ mdadm --query /dev/md0
/dev/md0: 19.98GiB linear 2 devices, 0 spares. Use mdadm --detail for more detail.


$ mdadm --query /dev/sdc
/dev/sdc: is not an md array
/dev/sdc: device 1 in 2 device active linear /dev/md0.  Use mdadm --examine for more detail.
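
Following the hint in that output, --examine reads the MD superblock stored on the member disk itself, whereas --detail reports on the assembled array:

$ mdadm --examine /dev/sdc    # (output omitted)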


$ mdadm --detail /dev/md0

~(omitted)~

Consistency Policy : none

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b41f11a:0907b642:a0e324b3:ae5d6398
            Events : 0

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       32        1      active sync   /dev/sdc


$ mdadm --detail --scan /dev/md0
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398


$ cat /proc/mdstat
Personalities : [linear]
md0 : active linear sdc[1] sdd[0]
      20953088 blocks super 1.2 0k rounding


$ mkfs -t xfs /dev/md0


$ lsblk /dev/md0
NAME MAJ:MIN RM SIZE RO TYPE   MOUNTPOINTS
md0    9:0    0  20G  0 linear


$ blkid /dev/md0
/dev/md0: UUID="57ae3fcb-8493-4f87-8d15-89273c8f1992" TYPE="xfs"

$ mkdir /datafile01

$ mount /dev/md0 /datafile01

$ df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         20G  175M   20G   1% /datafile01

$ touch /datafile01/file1

$ ls -l /datafile01
total 0
-rw-r--r--. 1 root root 0 Apr 25 13:30 file1


■ Expanding RAID Linear capacity (adding a device)

$ mdadm --verbose --detail --scan /dev/md0
ARRAY /dev/md0 level=linear num-devices=2 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
   devices=/dev/sdc,/dev/sdd


# Add a device
$ mdadm /dev/md0 --grow --add /dev/sdb


$ mdadm --verbose --detail --scan /dev/md0
ARRAY /dev/md0 level=linear num-devices=3 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
   devices=/dev/sdb,/dev/sdc,/dev/sdd


$ df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         20G  175M   20G   1% /datafile01


# Resize the file system to use the added space
$ xfs_growfs /datafile01


$  df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G  247M   30G   1% /datafile01


■ Saving the configuration of a newly created RAID device

  • After creating a new RAID device, be sure to add its information to the /etc/mdadm.conf file.

  • If you reboot without saving it, the array you created as /dev/md0 will come back up as /dev/md127 after the reboot.

# Save the RAID information
$ mdadm --detail --brief /dev/md0 >> /etc/mdadm.conf

# Reboot
$ reboot

# Verify the information
$ mdadm --detail --scan
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398

$ mdadm --detail /dev/md0

~(omitted)~

Consistency Policy : none

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 5b41f11a:0907b642:a0e324b3:ae5d6398
            Events : 1

    Number   Major   Minor   RaidDevice State
       0       8       48        0      active sync   /dev/sdd
       1       8       32        1      active sync   /dev/sdc
       2       8       16        2      active sync   /dev/sdb


03. Configuring RAID 0 (Stripe)



■ Configuring RAID 0 (Stripe)

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=linear num-devices=3 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
   devices=/dev/sdb,/dev/sdc,/dev/sdd

$ lsblk | grep 10G
sdb           8:16   0   10G  0 disk
sdc           8:32   0   10G  0 disk
sdd           8:48   0   10G  0 disk
sde           8:64   0   10G  0 disk
sdf           8:80   0   10G  0 disk
sdg           8:96   0   10G  0 disk

# Create the RAID array
$ mdadm --create /dev/md1 --level=stripe --raid-devices=2 /dev/sde /dev/sdf
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md1 started.

# Save the RAID information
$ mdadm --detail --brief /dev/md1 >> /etc/mdadm.conf

$ more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=8f7a4d2b:65d235fe:267768b7:c5ca0bab

$ mkfs -t xfs /dev/md1

$ mkdir /datafile02

$ mount /dev/md1 /datafile02

$ df -h /datafile02
Filesystem      Size  Used Avail Use% Mounted on
/dev/md1         20G  176M   20G   1% /datafile02

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=linear num-devices=3 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
   devices=/dev/sdb,/dev/sdc,/dev/sdd
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=localhost.localdomain:1 UUID=8f7a4d2b:65d235fe:267768b7:c5ca0bab
   devices=/dev/sde,/dev/sdf

# Add entries to /etc/fstab
$ vi /etc/fstab
/dev/md0  /datafile01  xfs defaults 0 0
/dev/md1  /datafile02  xfs defaults 0 0
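
Mounting by file-system UUID instead of the /dev/mdX name is a commonly used alternative, since it keeps working even if the md device is ever renumbered. A sketch (the UUID for /dev/md0 is the one blkid reported earlier; the one for /dev/md1 is a placeholder):

$ blkid /dev/md0 /dev/md1

# equivalent /etc/fstab entries using UUIDs
UUID=57ae3fcb-8493-4f87-8d15-89273c8f1992  /datafile01  xfs  defaults  0 0
UUID=<UUID-of-md1>                         /datafile02  xfs  defaults  0 0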

# Check after rebooting
$ shutdown -r now

$ df -h | grep datafile
/dev/md0              30G  247M   30G   1% /datafile01
/dev/md1              20G  176M   20G   1% /datafile02

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=linear num-devices=3 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
   devices=/dev/sdb,/dev/sdc,/dev/sde
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=localhost.localdomain:1 UUID=8f7a4d2b:65d235fe:267768b7:c5ca0bab
   devices=/dev/sdd,/dev/sdg


■ Removing the RAID devices

# Unmount the mounted file systems
$ umount /datafile01

$ umount /datafile02

$ vi /etc/fstab

/dev/md0                /datafile01               xfs    defaults        1 2   #<== remove
/dev/md1                /datafile02               xfs    defaults        1 2   #<== remove

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=linear num-devices=3 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398
   devices=/dev/sdb,/dev/sdc,/dev/sde    # <== note the member devices
ARRAY /dev/md1 level=raid0 num-devices=2 metadata=1.2 name=localhost.localdomain:1 UUID=8f7a4d2b:65d235fe:267768b7:c5ca0bab
   devices=/dev/sdd,/dev/sdg             # <== note the member devices

# Deactivate the RAID devices
$ mdadm --stop /dev/md0

$ mdadm --stop /dev/md1

# Remove the RAID devices (wipe the MD superblocks from the member disks)
$ mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sde

$ mdadm --zero-superblock /dev/sdd /dev/sdg

# Remove the corresponding entries from /etc/mdadm.conf
$ vim /etc/mdadm.conf

ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=5b41f11a:0907b642:a0e324b3:ae5d6398 #<== remove
ARRAY /dev/md1 metadata=1.2 name=localhost.localdomain:1 UUID=8f7a4d2b:65d235fe:267768b7:c5ca0bab #<== remove

# Check after rebooting
$ shutdown -r now

$ mdadm --detail --scan --verbose


04. Configuring RAID 1 (Mirroring)



■ Configuring RAID 1 (Mirroring)

$ lsblk | grep 10G
sdb           8:16   0   10G  0 disk
sdc           8:32   0   10G  0 disk
sdd           8:48   0   10G  0 disk
sde           8:64   0   10G  0 disk
sdf           8:80   0   10G  0 disk
sdg           8:96   0   10G  0 disk

# Create the RAID array
$ mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
mdadm: Note: this array has metadata at the start and
    may not be suitable as a boot device.  If you plan to
    store '/boot' on this device please ensure that
    your boot-loader understands md/v1.x metadata, or use
    --metadata=0.90
Continue creating array? y      # <== enter y
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

# Check the created RAID information
$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=localhost.localdomain:0 UUID=9508e678:6e22715b:bcc78336:76240fd5
   devices=/dev/sdb,/dev/sdc

$ mkfs -t xfs -f /dev/md0

$ mount /dev/md0 /datafile01

$ df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         10G  104M  9.9G   2% /datafile01

$ mdadm --detail /dev/md0

~(omitted)~

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

$ mdadm --detail --brief /dev/md0 >> /etc/mdadm.conf

$ more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=9508e678:6e22715b:bcc78336:76240fd5


■ Adding a Hot Spare to RAID 1

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 name=localhost.localdomain:0 UUID=9508e678:6e22715b:bcc78336:76240fd5
   devices=/dev/sdb,/dev/sdc

$ mdadm /dev/md0 --add /dev/sdf
mdadm: added /dev/sdf

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid1 num-devices=2 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=9508e678:6e22715b:bcc78336:76240fd5
   devices=/dev/sdb,/dev/sdc,/dev/sdf

$ more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 name=localhost.localdomain:0 UUID=9508e678:6e22715b:bcc78336:76240fd5

$ mdadm --detail --brief /dev/md0
ARRAY /dev/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=9508e678:6e22715b:bcc78336:76240fd5

# Overwrite the file with the updated information
$ mdadm --detail --brief /dev/md0 > /etc/mdadm.conf

$ mdadm --detail /dev/md0

~(omitted)~

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 18

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc

       2       8       80        -      spare   /dev/sdf


■ Inducing and verifying a disk failure on RAID 1


# When I/O hits the file system built on the RAID device after one of the member disks has been removed
# (e.g. detached from the running VM), the missing disk is detected and the array is flagged as degraded.
# The hot-spare disk then takes over for the failed one and a data resynchronization begins.
# Once the resync finishes the array runs normally again; in other words, the data service survives a single-disk failure.
$ mdadm --monitor /dev/md0
Apr 25 15:28:42: Fail on /dev/md0 /dev/sdb
Jan 16 18:08:19: SpareActive on /dev/md0 /dev/sdf
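
Besides --monitor, the rebuild can be followed in /proc/mdstat, which shows a recovery progress line while the spare is being synchronized (a quick sketch):

$ cat /proc/mdstat

$ watch -n 1 cat /proc/mdstat     # refresh every second; stop once the recovery line disappears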

$ cd /datafile01/

$ touch file2

$ touch file2

$ mdadm --detail /dev/md0 | tail

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 41

    Number   Major   Minor   RaidDevice State
       2       8       80        0      spare rebuilding   /dev/sdf
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb


# The mdadm command shows whether the data is still resynchronizing; once the resync finishes,
# the array runs as active sync again, exactly as before the failure.
$ mdadm --detail /dev/md0 | tail

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 48

    Number   Major   Minor   RaidDevice State
       2       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb


■ Replacing a member disk in RAID 1

$ mdadm --detail /dev/md0 | tail

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 48

    Number   Major   Minor   RaidDevice State
       2       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb


$ mdadm --manage /dev/md0 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0


$ mdadm --detail /dev/md0 | tail

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 48

    Number   Major   Minor   RaidDevice State
       2       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc

       0       8       16        -      faulty   /dev/sdb


$ mdadm --manage /dev/md0 --remove /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0


$ mdadm --detail /dev/md0 | tail

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 49

    Number   Major   Minor   RaidDevice State
       2       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc


$ mdadm --manage /dev/md0 --add /dev/sdg
mdadm: added /dev/sdg


$ mdadm --detail /dev/md0 | tail

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 9508e678:6e22715b:bcc78336:76240fd5
            Events : 50

    Number   Major   Minor   RaidDevice State
       2       8       80        0      active sync   /dev/sdf
       1       8       32        1      active sync   /dev/sdc

       3       8       96        -      spare   /dev/sdg


■ Deleting the RAID 1 device

$ df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         10G  104M  9.9G   2% /datafile01

$ umount /datafile01

$ mdadm --stop /dev/md0

$ mdadm --remove /dev/md0

$ mdadm --zero-superblock /dev/sdf /dev/sdc /dev/sdg

$ cat /dev/null > /etc/mdadm.conf

$ more /etc/mdadm.conf

$ mdadm --detail --scan


06. Configuring RAID 5



■ Configuring RAID 5

$ lsblk | grep 10G
sdb           8:16   0   10G  0 disk
sdc           8:32   0   10G  0 disk
sdd           8:48   0   10G  0 disk
sde           8:64   0   10G  0 disk
sdf           8:80   0   10G  0 disk
sdg           8:96   0   10G  0 disk

# Create the RAID array
$ mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.

# Check the RAID information
$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=865c444b:1f69832f:17b844a2:cf80abed
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde
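
The spares=1 shown right after creation is expected: mdadm normally builds a RAID 5 array in degraded mode and then recovers onto the last member, which is reported as a spare until that initial sync completes. The progress is visible in /proc/mdstat:

$ cat /proc/mdstat

# once the initial sync is done, the scan output no longer reports spares=1
$ mdadm --detail --scan --verbose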

# Create the file system
$ mkfs -t xfs /dev/md0

$ blkid /dev/md0
/dev/md0: UUID="b56b3095-e47b-4cd2-9833-af71ccfde8e0" TYPE="xfs"

$ mount /dev/md0 /datafile01

$ df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G  247M   30G   1% /datafile01

# Create a file in /datafile01
$ dd bs=1024 if=/dev/zero of=/datafile01/file1 count=409600

409600+0 records in
409600+0 records out
419430400 bytes (419 MB, 400 MiB) copied, 4.80376 s, 87.3 MB/s

$ ls -l /datafile01
total 409600
-rw-r--r--. 1 root root 419430400 Apr 25 15:58 file1

$ df -h /datafile01
Filesystem      Size  Used Avail Use% Mounted on
/dev/md0         30G  647M   30G   3% /datafile01


■ Adding a Hot Spare to RAID 5

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 name=localhost.localdomain:0 UUID=865c444b:1f69832f:17b844a2:cf80abed
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde

$ mdadm /dev/md0 --add /dev/sdg
mdadm: added /dev/sdg

$ mdadm --detail --scan --verbose
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=865c444b:1f69832f:17b844a2:cf80abed
   devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde,/dev/sdg

$ mdadm --detail /dev/md0 | tail -n 7
    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       5       8       96        -      spare   /dev/sdg

$ mdadm --detail --brief /dev/md0 >> /etc/mdadm.conf

$ more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=865c444b:1f69832f:17b844a2:cf80abed


■ Inducing and verifying a disk failure on RAID 5


$ cd /datafile01


$ touch file1 file2


# Check the resynchronization status
$ mdadm --detail /dev/md0 | tail -n 13
    Rebuild Status : 37% complete

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 865c444b:1f69832f:17b844a2:cf80abed
            Events : 35

    Number   Major   Minor   RaidDevice State
       5       8       96        0      spare rebuilding   /dev/sdg
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb


# Once the resync finishes, the array runs as active sync again, just as before the failure
$ mdadm --detail /dev/md0 | tail -n 13
Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 865c444b:1f69832f:17b844a2:cf80abed
            Events : 48

    Number   Major   Minor   RaidDevice State
       5       8       96        0      active sync   /dev/sdg
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb


■ Replacing a member disk in RAID 5

$ mdadm --detail /dev/md0 | tail -n 7
    Number   Major   Minor   RaidDevice State
       5       8       96        0      active sync   /dev/sdg
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       0       8       16        -      faulty   /dev/sdb


$ mdadm --manage /dev/md0 --fail /dev/sdb
mdadm: set /dev/sdb faulty in /dev/md0


$ mdadm --manage /dev/md0 --remove /dev/sdb
mdadm: hot removed /dev/sdb from /dev/md0


$ mdadm --detail /dev/md0 | tail -n 7
            Events : 49

    Number   Major   Minor   RaidDevice State
       5       8       96        0      active sync   /dev/sdg
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde


■ Expanding RAID 5 capacity

$ mdadm --manage /dev/md0 --add /dev/sdf
mdadm: added /dev/sdf

$ mdadm --detail /dev/md0 | tail -n 7
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde

       5       8       80        -      spare   /dev/sdf

$ mdadm --grow --raid-devices=5 /dev/md0

$ mdadm --detail /dev/md0 | tail -n 15
    Reshape Status : 21% complete
     Delta Devices : 1, (4->5)

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 332a0617:d945db45:71dd9bc6:dc90c0af
            Events : 58

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       6       8       96        4      active sync   /dev/sdf

# Once the reshape completes, the Reshape Status and Delta Devices fields disappear and the new layout is in place.
$ mdadm --detail /dev/md0 | tail -n 15

Consistency Policy : resync

              Name : localhost.localdomain:0  (local to host localhost.localdomain)
              UUID : 332a0617:d945db45:71dd9bc6:dc90c0af
            Events : 116

    Number   Major   Minor   RaidDevice State
       0       8       16        0      active sync   /dev/sdb
       1       8       32        1      active sync   /dev/sdc
       2       8       48        2      active sync   /dev/sdd
       4       8       64        3      active sync   /dev/sde
       6       8       96        4      active sync   /dev/sdf
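
Growing the array only enlarges the md device itself; as in the RAID Linear example, the file system still has to be resized before the additional space is usable. Assuming /dev/md0 is still mounted on /datafile01:

$ xfs_growfs /datafile01

$ df -h /datafile01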

# Refresh /etc/mdadm.conf
$ mdadm --detail --brief /dev/md0 > /etc/mdadm.conf

$ more /etc/mdadm.conf
ARRAY /dev/md0 metadata=1.2 spares=1 name=localhost.localdomain:0 UUID=332a0617:d945db45:71dd9bc6:dc90c0af


■ Deleting the RAID 5 device

$ umount /datafile01

$ mdadm --stop /dev/md0
mdadm: stopped /dev/md0

$ mdadm --remove /dev/md0
mdadm: error opening /dev/md0: No such file or directory

$ mdadm --zero-superblock /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf

$ cat /dev/null > /etc/mdadm.conf

$ mdadm --detail --scan 
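
As a final check, --examine can be run on the freed disks; after --zero-superblock each of them should report that no md superblock is detected:

$ mdadm --examine /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf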
