mdadm/tests/18imsm-r0_2d-takeover-r10_4d
Krzysztof Wojcik e53d022c72 FIX: Tests: raid0->raid10 without degradation
A raid0->raid10 transition needs at least 2 spare devices.
After the level change to raid10, recovery is triggered onto the
failed (missing) disks. At the end of the recovery process we have
a fully operational (not degraded) raid10 array.

Initially it was possible to migrate raid0->raid10 without triggering
recovery (which resulted in a degraded raid10). This is no longer
possible. This patch adapts the tests to mdadm's new behavior.

Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
2011-03-24 10:11:58 +11:00
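For context, the takeover behaviour the commit message describes is driven through mdadm's grow mode roughly as follows. This is only a sketch, not part of the commit; the container, volume, and disk names are placeholders.

# Hypothetical IMSM example; /dev/md/imsm0, /dev/md/vol0, /dev/sdc, /dev/sdd are placeholders.
mdadm --add /dev/md/imsm0 /dev/sdc /dev/sdd   # the two spares the takeover requires
mdadm --grow /dev/md/vol0 --level=10          # raid0 -> raid10 takeover; recovery starts onto the spares
cat /proc/mdstat                              # the array is no longer degraded once recovery completes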

. tests/env-imsm-template
# RAID 0 volume, 2 disks change to RAID 10 volume, 4 disks
# POSITIVE test
num_disks=2
device_list="$dev0 $dev1"
spare_list="$dev2 $dev3"
# Before: RAID 0 volume, 2 disks, 128k chunk size
vol0_level=0
vol0_comp_size=$((5 * 1024))
vol0_chunk=128
vol0_num_comps=$num_disks
vol0_offset=0
# After: RAID 10 volume, 4 disks, 128k chunk size
vol0_new_level=10
vol0_new_num_comps=$vol0_num_comps
vol0_new_chunk=128
. tests/imsm-grow-template 0 1
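The actual mdadm calls are made by tests/env-imsm-template and tests/imsm-grow-template; this file only sets their parameters. As a rough, hypothetical sketch of the array those parameters describe (the device names and the exact template logic are assumptions, not taken from the templates):

# Sketch only: an IMSM container plus a 2-disk raid0 volume matching the
# "Before" variables; /dev/md/imsm0 and /dev/md/vol0 are placeholder names.
mdadm -C /dev/md/imsm0 -e imsm -n $num_disks $device_list
mdadm -C /dev/md/vol0 -l $vol0_level -n $vol0_num_comps \
      --chunk=$vol0_chunk --size=$vol0_comp_size /dev/md/imsm0
# The grow template then adds $spare_list and requests the level change to
# $vol0_new_level, after which recovery brings the raid10 volume to a clean state.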