One-at-a-time
The approach documented elsewhere, based on the debrick.sh script, was derived from earlier work on the single-disk My Book Live. It was amended for the Duo, and the suggestion was to run it twice, once on each of the drives.
I tried that, and I also tried rebuilding just one drive and inserting a blank one next to it. The theory was that if a disk fails, a RAID 1 array will carry on running in degraded mode, and once you replace the failed drive the array should rebuild and get you back to square one.
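For reference, the normal way to do that rebuild by hand is to re-add the new drive's partitions to the degraded arrays with mdadm and let the kernel resync them. This is only a rough sketch, assuming the replacement disk has already been partitioned to match and that the surviving drive is sda and the new one sdb:

    # Add the new partitions back into the degraded mirrors;
    # the kernel resyncs them in the background
    mdadm --manage /dev/md0 --add /dev/sdb1
    mdadm --manage /dev/md1 --add /dev/sdb2

    # Watch the resync progress
    cat /proc/mdstat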
In each case I ended up with a system that booted up and was accessible via the web interface, but with a Data Volume of zero size. That was solved by logging on to the web interface and doing a 'Factory Reset'. Having done that, the system rebooted and came up running in RAID 0 mode with the expected 2TB of storage.
However, looking at the console output while the system booted, I found entries similar to the following whilst the RAID arrays were being built:
    md: considering sdb2 ...
    md: adding sdb2 ...
    md: adding sdb1 ...
    md: adding sda2 ...
    md: adding sda1 ...
    md: created md0
    md: bind<sda1>
    md: bind<sda2>
    md: bind<sdb1>
    md: bind<sdb2>
    md: running: <sdb2><sdb1><sda2><sda1>
    md0: WARNING: sdb2 appears to be on the same physical disk as sdb1.
    md0: WARNING: sda2 appears to be on the same physical disk as sda1.
    True protection against single-disk failure might be compromised.
    raid1: raid set md0 active with 2 out of 2 mirrors
    md0: detected capacity change from 0 to 2047803392
    md: ... autorun DONE.
So it was apparently lumping all four copies of the system partition into a single array, parking two of them (from the same drive) as surplus to requirements, and using the other two, once again from the same drive, as the array md0.
If you look at the Western Digital start-up code, it expects to boot from md0, with a fallback to md1 if necessary. Here the RAID software has constructed just md0 and, having grabbed all four OS partitions for that purpose, has left no md1 at all.
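If you want to see for yourself what the autorun code has built once the box is up, the standard tools will show it (assuming you have console or SSH access and mdadm is present on the running system):

    # Which member partitions each md device was assembled from
    cat /proc/mdstat

    # Fuller detail for the system array, including its UUID
    mdadm --detail /dev/md0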
Two-at-a-time
I must admit that I am no expert on exactly how these RAID arrays work in Linux, but it struck me that you really need to have both disks present when you create a RAID array. As part of constructing the array, the same UUID is written into the superblock of every array member. When the system is subsequently booted, the RAID software uses that UUID to assemble the various arrays from the available partitions. I guess that if the UUIDs are all different then, as a fallback, it has to start guessing which partition belongs with which array.
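The UUID in question can be inspected directly in each member's superblock, so a quick check along these lines (device names purely illustrative) would show whether the four OS partitions actually share an array UUID or not:

    # Print the RAID superblock of a member partition;
    # the 'UUID' line is what assembly is matched on
    mdadm --examine /dev/sda1
    mdadm --examine /dev/sdb1

    # Or summarise every array found on the attached disks
    mdadm --examine --scan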
So I moved on to writing a suitable script to initialise both drives at the same time.
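To give a flavour of what that means in practice, the core of it is simply creating each mirror with both of its members present, one partition from each drive, in a single mdadm call. This is only a sketch, assuming md0 pairs the first OS partition on each drive and md1 the second, which may not match WD's exact layout; the real script also has to partition the drives and copy the OS image across:

    # Create both OS mirrors with both drives present, so each pair of
    # members shares one array UUID. 0.90 metadata is used because the
    # kernel's boot-time autorun only recognises that superblock format.
    mdadm --create /dev/md0 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda1 /dev/sdb1
    mdadm --create /dev/md1 --level=1 --raid-devices=2 --metadata=0.90 /dev/sda2 /dev/sdb2

    # Let the initial sync finish before writing anything to the arrays
    cat /proc/mdstat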