If you still want to use mdadm then I had a play. It does work. Again I've used a thumb drive (and lvm so I can move stuff around with the least fuss). It's GPT partitioned, which is a lot happier having its partitions fiddled with whilst the disk is live. I used gparted to create two extra partitions. I forgot the 'update-initramfs' initially. It still booted, but the array came up as /dev/md127. Next reboot it came up as /dev/md1.
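I did the partition juggling in gparted, but if you'd rather script it, something along these lines should give the same two raid partitions. This is only a sketch: the 5G size and partition numbers match my layout in the parted listing further down, and fd00 marks them as Linux RAID, which makes the later 'set N raid' step redundant.

Code:
# sketch: non-interactive equivalent of the two partitions I created in gparted
sgdisk -n 3:0:+5G -c 3:md1 -t 3:fd00 /dev/sda   # new 5GiB partition, named md1, type Linux RAID
sgdisk -n 4:0:+5G -c 4:md1 -t 4:fd00 /dev/sda
partprobe /dev/sda                              # make the kernel re-read the table

Anyway, here's what I actually ran: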
Code:
root@pi20:~# parted -s /dev/sda set 3 raid
root@pi20:~# parted -s /dev/sda set 4 raid
apt-get install mdadm
root@pi20:~# parted /dev/sda p free
Model: USB SanDisk 3.2Gen1 (scsi)
Disk /dev/sda: 123GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name  Flags
        17.4kB  4194kB  4177kB  Free Space
 1      4194kB  541MB   537MB   fat32              msftdata
 2      541MB   3842MB  3301MB                     lvm
        3842MB  3843MB  1048kB  Free Space
 3      3843MB  9212MB  5369MB               md1   raid
 4      9212MB  14.6GB  5369MB               md1   raid
        14.6GB  123GB   108GB   Free Space

mdadm --create /dev/md1 -n2 -l1 /dev/sda3 /dev/sda4 #wait resync
pvcreate /dev/md1
pvs
  PV         VG   Fmt  Attr PSize  PFree
  /dev/md1        lvm2 ---  <5.00g <5.00g
  /dev/sda2  pi20 lvm2 a--   3.07g      0
vgextend pi20 /dev/md1
pvmove /dev/sda2 /dev/md1 -i 30
vgreduce pi20 /dev/sda2
root@pi20:~# mdadm --detail /dev/md1 | grep UUID
           UUID : 8ec48ba8:a4ae5347:b037853d:04f89b3f
echo 'ARRAY /dev/md/1 level=raid1 num-devices=2 UUID=8ec48ba8:a4ae5347:b037853d:04f89b3f' >> /etc/mdadm/mdadm.conf
update-initramfs -u
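If you don't fancy hand-typing that UUID, mdadm will generate the ARRAY line for you. Same end result, just less typo-prone:

Code:
# let mdadm write the array definition itself
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
update-initramfs -u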
Insert another thumb drive and move one device onto the new thumb drive. This needs care because the /dev/sdX names can (and do) get mixed around (consider them random on each boot), so take care not to destroy the source disk thinking it's the target; a quick way to confirm which stick is which is shown after the next code block. Aside from that it's painless.
Code:
sgdisk /dev/sda -R /dev/sdb
sgdisk -G /dev/sdb
#rebooted - dev names did indeed swap! (now sda<->sdb)
mdadm --add /dev/md1 /dev/sda3
mdadm --fail /dev/md1 /dev/sdb4 --remove /dev/sdb4 #wait resync
mdadm --add /dev/md1 /dev/sdb4
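Worth double-checking which stick is which before the add/fail steps; model and serial give it away:

Code:
# confirm which /dev/sdX is which before touching anything
lsblk -o NAME,SIZE,MODEL,SERIAL,MOUNTPOINT
ls -l /dev/disk/by-id/ | grep -i usb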
Now it runs using both disks.

Code:
foo@pi20:~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[0] sdb4[3](S) sda3[2]
      5237760 blocks super 1.2 [2/2] [UU]

unused devices: <none>
I've added /dev/sda4 back in because I'm about to yank the new thumb drive.

Code:
foo@pi20:~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sdb3[0] sdb4[3]
      5237760 blocks super 1.2 [2/1] [U_]
      [=>...................]  recovery =  7.8% (411648/5237760) finish=7.9min speed=10107K/sec

unused devices: <none>
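If you'd rather let the rebuild finish before doing anything else, mdadm can simply block until the resync completes:

Code:
# wait for the recovery on md1 to finish (or just keep an eye on it)
mdadm --wait /dev/md1
watch -n 5 cat /proc/mdstat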
What the heck, I'll reboot it before it's finished.

Code:
foo@pi20:~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda4[3] sda3[0]
      5237760 blocks super 1.2 [2/1] [U_]
      [=====>...............]  recovery = 29.8% (1563264/5237760) finish=6.0min speed=10116K/sec

unused devices: <none>
Once the resync finished:

Code:
foo@pi20:~ $ cat /proc/mdstat
Personalities : [linear] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md1 : active raid1 sda4[3] sda3[0]
      5237760 blocks super 1.2 [2/2] [UU]

unused devices: <none>
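For a final sanity check, the array detail plus an optional scrub doesn't hurt:

Code:
# confirm both members are active and in sync
mdadm --detail /dev/md1
# optional: kick off a consistency check, then see how many mismatches were found
echo check > /sys/block/md1/md/sync_action
cat /sys/block/md1/md/mismatch_cnt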
Seems mdadm is perfectly serviceable should you still want to use it.

Statistics: Posted by swampdog — Thu Jun 27, 2024 5:47 am