Sometimes the removed device is re-added before the writes
get all the way to the md device - so the array doesn't need
any recovery and the test fails.
So flush first to be safe.
Signed-off-by: NeilBrown <neilb@suse.com>
Newer versions of mkfs.extX ask before creating a filesystem
on a device which appears to already have a filesystem.
We don't want that, so add the -F flag.
Also be explicit about the fs type, as one shouldn't depend on defaults.
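For illustration, the change amounts to something like the sketch below. The device name and the choice of ext3 are hypothetical, and a file-backed image stands in for the array device:

```shell
# A file-backed image stands in for the test's md device (hypothetical).
img=$(mktemp)
truncate -s 16M "$img"

# Old style: 'mkfs $dev' prompts if a filesystem already appears to
# exist, and silently depends on the default fs type.
# New style: -F forces creation, and the type is named explicitly.
mkfs.ext3 -F -q "$img"
```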
Signed-off-by: NeilBrown <neilb@suse.com>
revert-inplace would sometimes find that the original reshape had
finished.
So slow down the reshaping during --stop (it still needs to be a
little bit fast so that stop doesn't time out waiting) and don't wait
quite so long before stopping.
Signed-off-by: NeilBrown <neilb@suse.de>
This checks that raid6check finds no errors in a newly created array
with each of the different layouts.
(it doesn't...)
Signed-off-by: NeilBrown <neilb@suse.de>
1/ Use the correct data-offset for cmp - that has changed.
2/ flushbufs on the block device before reading, to avoid cache issues.
Signed-off-by: NeilBrown <neilb@suse.de>
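For illustration, point 1/ can be sketched with cmp's --ignore-initial option, and point 2/ presumably refers to `blockdev --flushbufs` (the BLKFLSBUF ioctl). Toy files stand in for the member devices, and the 4-byte offset is hypothetical; real offsets come from `mdadm --examine`:

```shell
# Point 1/: compare past the data offset instead of from byte 0.
a=$(mktemp); b=$(mktemp)
printf 'HDRApayload' > "$a"     # 4-byte "metadata" differs ...
printf 'HDRBpayload' > "$b"     # ... but the payload is identical
cmp --ignore-initial=4 "$a" "$b" && echo "payloads match"

# Point 2/: on a real block device, drop cached pages first, e.g.:
#   blockdev --flushbufs "$dev"
```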
I don't really know why this is needed, but there is a delay
between the reshape finishing and the level/etc changing.
So add some sleeps.
Signed-off-by: NeilBrown <neilb@suse.de>
The current sleep/wait doesn't seem long enough,
particularly when two arrays are being reshaped in the one
container.
So wait a bit more...
Signed-off-by: NeilBrown <neilb@suse.de>
This can report non-zero if there was nothing to do,
and that isn't really an error.
If the array doesn't get started, something else
will complain.
Signed-off-by: NeilBrown <neilb@suse.de>
"--wait" will return non-zero status if it didn't need to wait.
This is not a reason to fail a test.
So ignore the return status from those commands.
Signed-off-by: NeilBrown <neilb@suse.de>
When a DDF array is assembled with missing devices, those devices
are now always marked as 'missing' and cannot simply re-appear in the
array and be working again.
The test must be changed to acknowledge this.
Signed-off-by: NeilBrown <neilb@suse.de>
Change how suddenly-degraded devices should appear.
We don't record a failure; we record that the device isn't there.
Signed-off-by: NeilBrown <neilb@suse.de>
If we assemble a newly-degraded array, the missing devices must be marked
as 'failed' so we don't expect them in future.
Signed-off-by: NeilBrown <neilb@suse.de>
Getting the major number from the hex device number should use
all but the last two digits, rather than just the first two digits.
Signed-off-by: NeilBrown <neilb@suse.de>
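A sketch of the corrected extraction, assuming the traditional encoding with the minor number in the low byte (variable names and the sample value are hypothetical):

```shell
# The minor lives in the low byte, i.e. the last two hex digits, so
# the major is everything *before* them - a right shift by 8, not the
# first two characters of the string.
devno=10305                    # hypothetical hex dev number: major 0x103, minor 0x05
major=$(( 0x$devno >> 8 ))     # 0x103 = 259 (the first-two-digits bug gave "10")
minor=$(( 0x$devno & 0xff ))   # 0x05  = 5
echo "$major:$minor"
```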
This is a test simulating two temporary missing disks. These will
have less recent meta data than the other disks in the container.
When the array is reassembled, we expect mdadm to detect that
and react to it by using the meta data of the more recent disks
as reference.
This test FAILS with mdadm 3.3 for DDF.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This is a test case for handling incremental
assembly correctly after disks had been missing once.
This test is the basis for other similar but more tricky
test cases involving inconsistent meta data.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
A test for my recent patch "Monitor: write meta data in readonly state,
sometimes". Test that a faulty disk is recorded in the meta data.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This is similar to 10ddf-fail-readd. The difference is that the
array is stopped and incrementally assembled before the disk is
re-added.
Signed-off-by: NeilBrown <neilb@suse.de>
Some DDF test scripts assume that /dev/sda is always present.
That's wrong, e.g. on VMs. Use a more general approach.
Signed-off-by: NeilBrown <neilb@suse.de>
If a disk fails and a new array is created simultaneously, a race
condition may arise because the meta data on disk doesn't reflect
the disk failure yet. This is a test for that case.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This is one more unit test for failure/recovery, this time with
double redundancy, which isn't covered by the other tests.
Signed-off-by: NeilBrown <neilb@suse.de>
This test has some randomness because it is not always deterministic
which of the two arrays gets the spare and which remains degraded.
Handle it.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
Helper functions to determine the list of devices in an array,
etc.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
I forgot to check in this helper script, similar to the one for IMSM.
It is needed by tests/10ddf-create-fail-rebuild.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
This test adds a new unit test similar to 009imsm-create-fail-rebuild.
With the previous patches, it actually succeeds on my system.
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>
Let the first created array be RAID5 rather than RAID0. This makes
the test harder than before, because everything after the first
Create has to be done indirectly through mdmon.
Signed-off-by: NeilBrown <neilb@suse.de>
This patch adds RAID10 support to the DDF test script.
It actually passes!
Signed-off-by: Martin Wilck <mwilck@arcor.de>
Signed-off-by: NeilBrown <neilb@suse.de>