
My Book Live Duo: rebuild.sh 

The Rebuild Script

Note: This script is designed to (re)format two hard drives and install the standard Western Digital firmware on them, to bring a Duo back to life. Any existing data will be destroyed. It assumes that any necessary data recovery on the drives has been carried out beforehand.

Example of usage:

sudo rebuild.sh sda sdb rootfs.img

The script:

  1. Checks that the input parameters have been supplied and that they exist; otherwise it shows a help screen.
  2. Checks that it is running as 'root' and so has the necessary power to do what it wants.
  3. Checks that the mdadm package has been installed.
  4. Prints out the details of the two specified disk drives and asks for confirmation that these are the ones you are intending to use.
  5. Requests final confirmation before proceeding.
  6. If the disks have been used previously then the existing raid partitions on them may have been detected on boot. So the script:
    1. Unmounts any mounted Raid file systems.
    2. Stops any use of the Swap partition.
    3. Stops any raid md* devices.
  7. Zaps the beginning of each drive to remove any existing partition tables.
  8. Creates the new partition table on each drive.
  9. On each drive zaps the beginning of each partition to remove any trace of what was there.
  10. Creates the mount points for our raid arrays.
  11. Clears out any old raid Superblock data. If you don't do this you'll find that the arrays may not be assembled properly when you first boot the system. The mdadm tool does not do this reliably at this point, so we use our own program ('superblock') to do it.
  12. Uses mdadm to create the new raid arrays:
    • md0 - Holds the OS.
    • md1 - Holds another copy of the OS (overkill!)
    • md2 - Swap partition
    • md3 - Data Volume.
  13. Formats the md2 (swap) and md3 (Data Volume) file systems. (The script also runs mkfs on md0 and md1, although strictly there is no need, as we restore a mirror-image copy of those partitions in the next step.)
  14. Copies rootfs.img to md0 and md1
  15. Mounts md0 and md1 and copies over the boot scripts to the root partitions.
  16. Unmounts md0 and md1
  17. Adjusts the byte ordering in the raid superblocks to suit the Duo's CPU. (A quick way to sanity-check the finished arrays is sketched just after this list.)
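
Before the drives go back into the Duo it can be reassuring to check what the script has made. This is not part of rebuild.sh; it is just a minimal check, run on the build machine while the arrays are still assembled, and it assumes the sda/sdb names from the example above. (Once step 17 has byte-swapped the version 0.90 superblocks, examining partitions 1-3 directly from a PC may no longer give sensible results, so the check sticks to the running arrays and to the version 1.0 superblocks on partition 4.)

# Array state as seen by the kernel; this works even after the on-disk
# superblocks have been byte-swapped, because the kernel already has the
# arrays assembled.
for md in /dev/md0 /dev/md1 /dev/md2 /dev/md3; do
    echo "== $md =="
    mdadm --detail "$md" | grep -E 'Version|State :|Devices'
done

# The Data Volume uses version 1.0 metadata, which is stored little-endian on
# every platform, so its on-disk superblocks can still be examined directly.
mdadm --examine /dev/sda4 /dev/sdb4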

Notes: I have an issue with the md3 Data Volume raid array, in that it is not recognised by the Duo when the system is first booted. Whilst it would be nice to get this sorted, in practice it doesn't matter, as doing an immediate 'Reset to factory settings' fixes things.

If you have any interest, I think the problem is a combination of two things:

  1. The Data Volume uses version 1.0 of the Raid Superblock. Further work is needed to work out exactly what the superblock program needs to do when dealing with a version 1.0 superblock.
  2. Time has moved on since 2015. The modern mkfs creates ext4 file systems with attributes that are not recognised by the Duo's version of Linux. I started to investigate which additional command-line parameters were needed to produce a compatible file system, but early on found a combination that consistently crashed the kernel every time the system booted. As the work-around (doing a reset-to-factory-settings) was so simple, I left this alone. (The sort of mkfs invocation involved is sketched just after this list.)
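
If anyone wants to pick this up: the usual approach is to switch off the ext4 features that post-date the Duo's kernel when formatting the Data Volume. The following is only a sketch of that idea, using the same mkfs.ext4 line as the script plus a standard -O feature list; which features (if any) the Duo will actually accept is exactly the open question above, so treat it as a starting point rather than a tested recipe.

# Untested sketch: create the Data Volume with some of the newer ext4 features
# switched off so that an older kernel stands a chance of mounting it. Other
# features may also need disabling.
mkfs.ext4 -F -b 65536 -m 0 -O ^metadata_csum,^64bit /dev/md3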

Download rebuild.sh


#!/bin/bash
#
# The purpose of the script is to (re)install the operating system (debrick) on 
# the hard drives that are to be fitted to a WD MyBook Live Duo. It makes no attempt
# to preserve any data.
#
# Based on the Script "debrick.sh" with some of the sophistication of that
# removed and a slightly different ending.  Rejigged for two disks.
#
# Also based on various scripts in the WD firmware /usr/local/sbin directory

#help screen
if  [ $# = 1 -a "$1" = "--help" ]; then
echo "
standard use of script is:
  sudo ./rebuild.sh <drive1> <drive2> <firmware_file>
    
  <drive?> is the drive that will be formatted: (sda/sdb/sdc...)
 
  <firmware_file> is the path to the image file extracted from the standard
             WD update package:  e.g. /mnt/sdb1/rootfs.img

example
    sudo ./rebuild.sh sda sdb /mnt/sdc1/rootfs.img
"
exit 1
fi

echo

# Check that the basic requirements are fulfilled

if [ "$(id -u)" != "0" ]; then
    echo -e "this script must be run as root.\n"
    exit 1
fi
if ! which mdadm > /dev/null; then
    echo -e "this script requires the mdadm package.\n"
    exit 1
fi

# Parse the arguments to extract drive and file

disk1=notset
disk2=notset
image=notset

for (( arg=1; arg<=${#}; arg++ ))
do
    case ${!arg} in
    sd?)    if [ $disk1 == notset ]; then
                disk1=${!arg}
            elif [ $disk2 == notset ]; then
                disk2=${!arg}
            else
                echo "Two drives already specified"
                exit 1
            fi;;
    *.img)  image=${!arg};;
    *)      echo "unknown argument: ${!arg}"
            exit 1;;
    esac
done

# Check all the parameters have been given

if [ $disk1 == notset ]; then
    echo "No disks specified"
    exit 1
fi
if [ $disk2 == notset ]; then
    echo "Two disks must be specified"
    exit 1
fi

if [ $image == notset ]; then
    echo "The .img file must be specified"
    exit 1
fi

# Check that what we've been given exist

err=0;
if [ ! -e /dev/$disk1 ]; then
        echo "$disk1 does not exist."
        err=1;
fi
if [ ! -e /dev/$disk2 ]; then
        echo "$disk2 does not exist."
        err=1;
fi
if [ ! -e $image ]; then
        echo "Can't find $image"
        err=1;
fi
if [ $err = 1 ]; then
    echo "Usage: rebuild.sh sd? sd? /path/to/file.img"
    exit 1;
fi

# Say what we're about to do and get confirmation to proceed

echo "Rebuild will use $disk1 and $disk2:"
echo -e "Disk details: \n"
parted --script /dev/$disk1 print
parted --script /dev/$disk2 print

read -p "Are these REALLY the disks you want? [y] " -n 1
if ! [[ $REPLY =~ ^[Yy]$ ]]; then
    echo -e "\nNo confirmation, stopping.\n"
    exit 1;
fi

echo
read -p "This is the point of no return, continue? [y] " -n 1
echo
if ! [[ $REPLY =~ ^[Yy]$ ]]; then
    exit 1;
fi

# If the drives are directly from a Duo the Raid arrays may have been noticed and mounted
# during power-up. We need to ensure everything to do with the old system is not running.

# Unmount any mounted Raid devices

mountedMds=(`grep -o "/dev/md[0-9]" /proc/mounts`)
for mountedMd in "${mountedMds[@]}"
do
   echo "unmounting $mountedMd"
   umount $mountedMd
   if [ $? -ne 0 ]; then
    echo "Can't umount $mountedMd - exiting."
    exit 1
   fi
done
sync

swapoff -a

echo "Stopping any md* devices"

# stop any existing md (raid) devices

runningMds=(`ls /dev/md[0-9]* 2>/dev/null`)
for runningMd in "${runningMds[@]}"
do
    sleep 2
    mdadm --stop $runningMd
    mdadm --wait $runningMd
done
sleep 2
sync

# The partition table has four entries:
# 1. Root File System
# 2. Root File System (copy)
# 3. Swap Partition
# 4. Data Partition.

# Raid stuff
# md0 is the OS and comprises /dev/sda1 /dev/sdb1 
# md1 is a backup copy of the OS and comprises /dev/sda2 /dev/sdb2 
# md2 is Swap
# md3 is the Data Volume.

# Destroy any existing partition table and then write the new one.

echo "Removing old partition table"

dd if=/dev/zero of=/dev/$disk1 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/$disk2 bs=1M count=32 2>/dev/null
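# (only the first 32 MiB of each disk is cleared here; any old backup GPT at
#  the end of the disk simply gets replaced when the new label is written below)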
sync
sleep 2

# Get the new table (re)read.
partprobe /dev/$disk1
partprobe /dev/$disk2

echo "creating new partition tables"

parted /dev/$disk1 --align optimal <<EOP
mklabel gpt
mkpart primary ext3 528M  2576M
mkpart primary ext3 2576M 4624M
mkpart primary linux-swap 16M 528M
mkpart primary ext4 4624M -1M
set 1 raid on
set 2 raid on
set 3 raid on
set 4 raid on
quit
EOP

parted /dev/$disk2 --align optimal <<EOP
mklabel gpt
mkpart primary ext3 528M  2576M
mkpart primary ext3 2576M 4624M
mkpart primary linux-swap 16M 528M
mkpart primary ext4 4624M -1M
set 1 raid on
set 2 raid on
set 3 raid on
set 4 raid on
quit
EOP
sync
sleep 2

# Get the new table (re)read.
partprobe /dev/$disk1
partprobe /dev/$disk2

# Now we've got the new partition table in place Zap what might have been there

echo "Clearing partitions"

dd if=/dev/zero of=/dev/${disk1}1 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/${disk1}2 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/${disk1}3 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/${disk1}4 bs=1M count=32 2>/dev/null

dd if=/dev/zero of=/dev/${disk2}1 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/${disk2}2 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/${disk2}3 bs=1M count=32 2>/dev/null
dd if=/dev/zero of=/dev/${disk2}4 bs=1M count=32 2>/dev/null
sync
sleep 1

# Make sure the mount points for the Raid arrays are available

test -d "/mnt" || mkdir -p "/mnt"

test -d "/mnt/md0" || mkdir -p "/mnt/md0"
test -d "/mnt/md1" || mkdir -p "/mnt/md1"
test -d "/mnt/md2" || mkdir -p "/mnt/md2"
test -d "/mnt/md3" || mkdir -p "/mnt/md3"

# clear out any old md superblock data

echo "Purging any old superblock data"

# mdadm --zero-superblock --force --verbose /dev/${disk1}1 >/dev/null
# mdadm --zero-superblock --force --verbose /dev/${disk1}2 >/dev/null
# mdadm --zero-superblock --force --verbose /dev/${disk1}3 >/dev/null
# mdadm --zero-superblock --force --verbose /dev/${disk1}4 >/dev/null

# mdadm --zero-superblock --force --verbose /dev/${disk2}1 >/dev/null
# mdadm --zero-superblock --force --verbose /dev/${disk2}2 >/dev/null
# mdadm --zero-superblock --force --verbose /dev/${disk2}3 >/dev/null
# mdadm --zero-superblock --force --verbose /dev/${disk2}4 >/dev/null

superblock /dev/${disk1}1 zero >/dev/null
superblock /dev/${disk1}2 zero >/dev/null
superblock /dev/${disk1}3 zero >/dev/null
# superblock /dev/${disk1}4 zero >/dev/null

superblock /dev/${disk2}1 zero >/dev/null
superblock /dev/${disk2}2 zero >/dev/null
superblock /dev/${disk2}3 zero >/dev/null
# superblock /dev/${disk2}4 zero >/dev/null


sync
sleep 1

echo "Creating Raid Arrays"

mdadm --create /dev/md0 --verbose --metadata=0.9 --raid-devices=2 --level=raid1 --run /dev/${disk1}1 /dev/${disk2}1
mdadm --wait /dev/md0
mdadm --create /dev/md1 --verbose --metadata=0.9 --raid-devices=2 --level=raid1 --run /dev/${disk1}2 /dev/${disk2}2
mdadm --wait /dev/md1
mdadm --create /dev/md2 --verbose --metadata=0.9 --raid-devices=2 --level=raid1 --run /dev/${disk1}3 /dev/${disk2}3
mdadm --wait /dev/md2
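# The Data Volume is the one array created with a version 1.0 superblock,
# to match what the Duo uses for md3.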
mdadm --create /dev/md3 --verbose --metadata=1.0 --raid-devices=2 --level=raid1 --run /dev/${disk1}4 /dev/${disk2}4
mdadm --wait /dev/md3
sync
sleep 1

# create the swap partition
mkswap /dev/md2
if [ $? -ne 0 ]; then
    echo "mkswap failed - exiting."
    exit 1
fi

echo "Creating md0 file system"

# format the rootfs raid mirror file system
mkfs.ext3 -b 4096 /dev/md0
if [ $? -ne 0 ]; then
    echo "mkfs.ext3 failed - exiting."
    exit 1
fi

echo "Creating md1 file system"

# format the rootfs raid mirror file system
mkfs.ext3 -b 4096 /dev/md1 
if [ $? -ne 0 ]; then
    echo "mkfs.ext3 failed - exiting."
    exit 1
fi

# format the data volume 
# Blocksize of 64K required (but not liked by mkfs)
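# (-F forces mkfs to accept a block size larger than the host's page size; a
#  64K-block ext4 volume can't be mounted on a typical PC, but it is what the
#  Duo expects for its Data Volume)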

mkfs.ext4 -F -b 65536 -m 0 /dev/md3
if [ $? -ne 0 ]; then
    echo "mkfs.ext4 failed - exiting."
    exit 1
fi

sync
sleep 2

#write the OS image to the raid disks

echo "Writing System Image to md0"
dd if=$image of=/dev/md0 bs=1M
sync
echo "Writing System image to md1"
dd if=$image of=/dev/md1 bs=1M
sync
sleep 2

# Mount file systems and copy over boot script

echo "Mounting file systems and copying over boot scripts"

mount /dev/md0 /mnt/md0
mount /dev/md1 /mnt/md1
cp /mnt/md0/usr/local/share/bootmd0.scr /mnt/md0/boot/boot.scr
cp /mnt/md1/usr/local/share/bootmd1.scr /mnt/md1/boot/boot.scr
sync
sleep 1

umount /mnt/md0
umount /mnt/md1

echo "Adjusting Superblock data to suit the Duo"

superblock /dev/${disk1}1 swap
superblock /dev/${disk1}2 swap
superblock /dev/${disk1}3 swap
#superblock /dev/${disk1}4 swap

superblock /dev/${disk2}1 swap
superblock /dev/${disk2}2 swap
superblock /dev/${disk2}3 swap
#superblock /dev/${disk2}4 swap

sync
sync

echo
echo "all done! Transfer drives to the Duo"
echo

Any comments? email me. Added May 2020