From: Dan Williams <dan.j.williams@intel.com>
Added curr_state as a parameter to set_disk. Handlers look at this to
record component failures and set the global 'degraded' or 'failed'
status.
When the state is read as faulty:
1/ mark the disk failed in the metadata
2/ write '-blocked' to the rdev state to allow the kernel's failure
mechanism to advance
3/ the kernel will take away the drive's role in remove_and_add_spares()
4/ once the disk no longer has a role, writing 'remove' to the rdev
state will get the disk out of the array.
There is a window after writing '-blocked' where the kernel will return
-EBUSY to remove requests. We rely on the fact that the disk will
continue to show faulty, so we lazily wait until the kernel is ready to
remove the disk. If the manager thread needs to get the disk out of the
way, it can ping the monitor and wait, just like the replace_array()
case.
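In sysfs terms the whole sequence is roughly the sketch below. The
state tokens and the rdev state file are the kernel's interface, but
sysfs_write(), the path, and the polling retry are illustrative (the
monitor waits lazily for the next event rather than spinning):

    #include <errno.h>
    #include <fcntl.h>
    #include <string.h>
    #include <unistd.h>

    /* illustrative helper: write one token to a sysfs attribute */
    static int sysfs_write(const char *path, const char *val)
    {
        int fd = open(path, O_WRONLY);
        ssize_t n;
        int saved;

        if (fd < 0)
            return -1;
        n = write(fd, val, strlen(val));
        saved = errno;
        close(fd);
        errno = saved;
        return n == (ssize_t)strlen(val) ? 0 : -1;
    }

    /* path names the rdev state file, e.g.
     * /sys/block/md0/md/dev-sda/state (names illustrative) */
    static void retire_faulty_disk(const char *path)
    {
        /* 1/ the metadata handler has already marked the disk failed */
        sysfs_write(path, "-blocked");  /* 2/ let the failure advance */
        /* 3/ the kernel strips the role in remove_and_add_spares() */
        while (sysfs_write(path, "remove") < 0 && errno == EBUSY)
            usleep(100000);     /* 4/ retry until the role is gone */
    }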
[buglet fix: swap the parameters of attr_match in read_dev_state]
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
If they are later reassembled, they will be replaced and deallocated
via replace_array.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
mdadm handles setting resync_start; the monitor uses this value to
determine whether to set the 'active' or 'read-auto' state.
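The decision reduces to something like the sketch below, assuming the
convention from the resync_start patch later in this series (0 means
dirty and needing a full resync, ~0ULL / MaxSector means in sync); the
helper name is made up:

    /* assumption: MaxSector in resync_start means the array is in
     * sync and may stay 'read-auto'; anything else means resync must
     * run, so the array has to go fully 'active' */
    static const char *pick_array_state(unsigned long long resync_start)
    {
        return resync_start == ~0ULL ? "read-auto" : "active";
    }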
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
1/ Block attempts to add/remove devices from container members
2/ Forward add/remove requests to containers
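Telling a member apart from a container can be done from sysfs: a
member reports a metadata_version like "external:/md127/0", while a
container reports "external:ddf" or "external:imsm". The sketch below
(function name and buffer sizes made up) extracts the parent so a
request against the member can be refused and re-issued against the
container instead:

    #include <stdio.h>
    #include <string.h>

    /* returns 1 and copies the parent container name (e.g. "md127")
     * if the array at /sys/block/<dev>/md is a container member */
    static int container_member(const char *dev, char *parent, size_t len)
    {
        char path[256], vers[64] = "";
        FILE *f;

        snprintf(path, sizeof(path),
                 "/sys/block/%s/md/metadata_version", dev);
        f = fopen(path, "r");
        if (!f)
            return 0;
        if (!fgets(vers, sizeof(vers), f))
            vers[0] = '\0';
        fclose(f);
        if (strncmp(vers, "external:/", 10) != 0)
            return 0;       /* native array, or a container itself */
        snprintf(parent, len, "%.*s",
                 (int)strcspn(vers + 10, "/\n"), vers + 10);
        return 1;
    }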
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
Metadata handlers set mdinfo.resync_start depending on the state of the
array. By default mdadm assumes the array is dirty and needs a full
resync.
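The convention, in sketch form ('clean' stands for whatever test the
handler applies to its own metadata):

    /* value for mdinfo.resync_start: 0 = assume dirty, resync
     * everything; ~0ULL = in sync, no resync needed */
    static unsigned long long resync_start_for(int clean)
    {
        return clean ? ~0ULL : 0;
    }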
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
This should probably be made into a generic 'external' capability rather
than hardcoding 'ddf' and 'imsm'.
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
From: Dan Williams <dan.j.williams@intel.com>
The following now work:
--examine
--examine --brief
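For example (device name illustrative):

    mdadm --examine /dev/sda
    mdadm --examine --brief /dev/sda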
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Create a ddf array by naming the device /dev/ddf* or
specifying metadata 'ddf'.
If ddf is specified with no level, assume a container (indeed,
anything else would be wrong).
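For example, following the rules above, either of these creates a ddf
container (names and device list illustrative):

    mdadm --create /dev/ddf0 -n4 /dev/sd[a-d]
    mdadm --create /dev/md0 -e ddf -n4 /dev/sd[a-d]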
**Need to use text_Version to set external metadata...
More ddf support
Load a ddf container. Now
--examine /dev/ddf
works.
super-ddf: fix compile warning
From: Dan Williams <dan.j.williams@intel.com>
super-ddf.c:723: format %lu expects type long unsigned int, but argument 3 has type unsigned int
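The class of fix, in sketch form (the real change at that line may
adjust the format or cast the argument instead):

    unsigned int count = 0;
    printf("%lu\n", count);     /* mismatched: %lu vs unsigned int */
    printf("%u\n", count);      /* fixed: format matches the type */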
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
The current model for creating arrays involves writing
a superblock to each device in the array.
With containers (as with DDF), that model doesn't work.
Every device in the container may need to be updated
for an array made from just some of the devices in the container.
So instead of calling write_init_super for each device,
we call it once for the array and have it iterate over
all the devices in the array.
To help with this, ->add_to_super is now passed an 'fd' and a name for
the device. These get saved for use by write_init_super. So
add_to_super takes ownership of the fd, and write_init_super will
close it.
This information is stored in the new 'info' field of supertype.
As part of this, write_init_super now removes any old traces of raid
metadata rather than doing this in common code.
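In sketch form the new shape is below. This is minimal and
self-contained: the real callbacks take more arguments, the real
supertype is far larger, and write_metadata() stands in for the
per-format work:

    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    struct supertype { void *info; };   /* the real one is larger */

    struct saved_dev {                  /* illustrative list node */
        int fd;
        char *devname;
        struct saved_dev *next;
    };

    /* stands in for the per-format metadata writer (not shown) */
    static int write_metadata(int fd, struct supertype *st)
    {
        (void)fd; (void)st;
        return 0;
    }

    /* record the device; ownership of fd passes to the supertype */
    static void add_to_super(struct supertype *st, int fd, char *devname)
    {
        struct saved_dev *d = malloc(sizeof(*d));

        d->fd = fd;
        d->devname = strdup(devname);
        d->next = st->info;
        st->info = d;
    }

    /* called once per array: write fresh metadata (clobbering any
     * old raid superblock traces) to every recorded device, then
     * close the fds that add_to_super took ownership of */
    static int write_init_super(struct supertype *st)
    {
        struct saved_dev *d = st->info, *next;
        int err = 0;

        for (; d; d = next) {
            next = d->next;
            if (write_metadata(d->fd, st) < 0)
                err = 1;
            close(d->fd);
            free(d->devname);
            free(d);
        }
        st->info = NULL;
        return err;
    }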
Now that validate_geometry opens and checks the device,
we don't need to do it as much in top level Create.
We only need it to check for old array or filesystem info,
so we only open the device at that one place.
From: Bill Nottingham <notting@redhat.com>
mdadm --incremental doesn't really do any locking. If you get multiple
events in parallel for the same device (one that has not yet started),
they will all go down the path to create the array. One will succeed;
the rest will have SET_ARRAY_INFO fail with -EBUSY ("md: array mdX
already has disks!") and will exit without adding the disk.
Original bug report is: https://bugzilla.redhat.com/show_bug.cgi?id=433932
This is solved by adding very rudimentary locking: Incremental() now
opens the device with O_EXCL to ensure only one invocation is frobbing
the array at once. A simple loop retries the open five times a second
for up to 5 seconds. If the array stays locked that long, you probably
have bigger issues.
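The loop is essentially the following sketch (Incremental()'s real
error handling differs):

    #include <errno.h>
    #include <fcntl.h>
    #include <unistd.h>

    static int open_excl(const char *devname)
    {
        int fd, i;

        for (i = 0; i < 25; i++) {      /* 5 tries/sec for 5 seconds */
            fd = open(devname, O_RDONLY | O_EXCL);
            if (fd >= 0 || errno != EBUSY)
                return fd;              /* got it, or a real error */
            usleep(200000);             /* busy: wait 200ms, retry */
        }
        return -1;                      /* still locked: give up */
    }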
There is still a problem: if the array is partially assembled and
started read-only, the last device doesn't get added properly. That is
probably a kernel problem.
This did not work before because we couldn't mark the array clean:
some parity blocks would be out of sync, and raid6 will not assemble a
dirty degraded array.
So make such arrays doubly degraded (the last device becomes a spare)
and clean.