This is a better match for reshape_array() and means that
"mdadm --grow --continue" will run in the foreground, which
makes more sense.
Signed-off-by: NeilBrown <neilb@suse.de>
If "--assemble" or "--incremental" is started by udev, then
monitoring the reshape in the background won't work.
So try asking systemd to start a grow-continue.
If that fails, just do it the old way.
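A minimal sketch of that logic, assuming a systemd template unit named
mdadm-grow-continue@.service and systemctl at /usr/bin/systemctl (both
names are illustrative):

    /* Try to hand the reshape monitor to systemd; return non-zero
     * on success so the caller can skip the old background fork. */
    #include <stdio.h>
    #include <sys/wait.h>
    #include <unistd.h>

    static int try_systemd_continue(const char *devnm)
    {
        char unit[64];
        pid_t pid;
        int status;

        snprintf(unit, sizeof(unit),
                 "mdadm-grow-continue@%s.service", devnm);
        pid = fork();
        if (pid < 0)
            return 0;
        if (pid == 0) {
            execl("/usr/bin/systemctl", "systemctl", "start",
                  unit, (char *)NULL);
            _exit(1);               /* exec failed */
        }
        if (waitpid(pid, &status, 0) < 0)
            return 0;
        return WIFEXITED(status) && WEXITSTATUS(status) == 0;
    }

If this returns 0, the caller forks the monitor into the background
itself, exactly as before.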
Signed-off-by: NeilBrown <neilb@suse.de>
A subsequent patch will allow the background part of "mdadm --grow" to
be run from systemd. This can require the passing of a backup file
name.
To do this, store that name as a symlink in /run/mdadm (or MAP_DIR)
and look for it when appropriate.
It might be useful to also store the name across reboot, but that
would be a different patch. We would need to use the uuid to identify
it, and store it in stable storage.
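A sketch of the store/look-up pair; the /run/mdadm location matches the
text above, while the "backup_file-<dev>" naming and the helpers are
illustrative assumptions:

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Record the backup file name as a symlink under /run/mdadm. */
    static void store_backup_name(const char *devnm, const char *backup)
    {
        char link[PATH_MAX];

        snprintf(link, sizeof(link), "/run/mdadm/backup_file-%s", devnm);
        unlink(link);               /* replace any stale entry */
        symlink(backup, link);
    }

    /* Recover the name later, e.g. from grow-continue. */
    static int load_backup_name(const char *devnm, char *buf, size_t len)
    {
        char link[PATH_MAX];
        ssize_t n;

        snprintf(link, sizeof(link), "/run/mdadm/backup_file-%s", devnm);
        n = readlink(link, buf, len - 1);
        if (n < 0)
            return -1;
        buf[n] = '\0';              /* readlink doesn't terminate */
        return 0;
    }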
Signed-off-by: NeilBrown <neilb@suse.de>
1/ when unfreezing, make sure the array is frozen first.
If it isn't, we might end up interrupting a reshape; see the sketch below.
2/ When the child finishes, don't call abort_reshape() as that
will interrupt the reshape. Just set suspend_* etc
explicitly.
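A sketch of the first point, assuming the usual md sysfs layout: only
write "idle" when sync_action currently reads "frozen".

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static void unfreeze_if_frozen(const char *devnm)
    {
        char path[256], buf[32];
        int fd, n;

        snprintf(path, sizeof(path),
                 "/sys/block/%s/md/sync_action", devnm);
        fd = open(path, O_RDWR);
        if (fd < 0)
            return;
        n = read(fd, buf, sizeof(buf) - 1);
        if (n > 0) {
            buf[n] = '\0';
            /* Only unfreeze a frozen array; anything else
             * (e.g. "reshape") must not be disturbed. */
            if (strncmp(buf, "frozen", 6) == 0) {
                lseek(fd, 0, SEEK_SET);
                write(fd, "idle\n", 5);
            }
        }
        close(fd);
    }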
Signed-off-by: NeilBrown <neilb@suse.de>
Since:
commit 84d11e6c6a
Author: NeilBrown <neilb@suse.de>
Date: Thu Aug 1 11:16:14 2013 +1000
Grow: exit background thread cleanly on SIGTERM.
removed the setting of "sync_max" from abort_reshape(), we need
to do it explicitly here.
Signed-off-by: NeilBrown <neilb@suse.de>
If the mdadm thread that monitors a reshape gets SIGTERM it should
exit cleanly and clear the 'suspended' region of the array.
However it mustn't clear 'sync_max' as that would allow the
reshape to continue unmonitored.
If the thread ever does get killed, the array should really be
shut down soon after if possible.
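A sketch of that exit path; write_md_attr() is a stand-in for mdadm's
sysfs helpers:

    #include <fcntl.h>
    #include <signal.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static volatile sig_atomic_t got_sigterm;

    static void term_handler(int sig)
    {
        got_sigterm = 1;            /* polled by the monitor loop */
    }

    static void write_md_attr(const char *devnm, const char *attr,
                              const char *val)
    {
        char path[256];
        int fd;

        snprintf(path, sizeof(path), "/sys/block/%s/md/%s",
                 devnm, attr);
        fd = open(path, O_WRONLY);
        if (fd >= 0) {
            write(fd, val, strlen(val));
            close(fd);
        }
    }

    static void clean_exit(const char *devnm)
    {
        /* Clear the suspended region so normal I/O resumes... */
        write_md_attr(devnm, "suspend_lo", "0");
        write_md_attr(devnm, "suspend_hi", "0");
        /* ...but deliberately leave sync_max untouched, so the
         * kernel cannot run the reshape unmonitored. */
    }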
Signed-off-by: NeilBrown <neilb@suse.de>
Coverity discovered a possible double close(fd2) in Grow.c. Avoided by
invalidating fd2 after the first close.
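The fix follows the standard pattern (sketched; the variable name is
from the report):

    close(fd2);
    fd2 = -1;       /* a later cleanup path can no longer
                     * close the same descriptor twice */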
Signed-off-by: Jes Sorensen <Jes.Sorensen@redhat.com>
Signed-off-by: NeilBrown <neilb@suse.de>
If we will need to change the array level when a reshape completes, a copy
of mdadm waits in the background.
Currently this copy holds the device (/dev/mdX) open. This prevents
the array from being stopped.
So close the file descriptor and re-open after the reshape completes.
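Sketched, with the wait itself elided:

    #include <fcntl.h>
    #include <unistd.h>

    static int reopen_after_reshape(int fd, const char *devname)
    {
        close(fd);      /* stop holding /dev/mdX: --stop can now work */

        /* ... wait for the reshape to complete ... */

        return open(devname, O_RDONLY | O_EXCL);
    }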
Signed-off-by: NeilBrown <neilb@suse.de>
Having a fixed time for a wait is clumsy and can make us
wait much too long.
So use mdstat_wait and keep the mdstat_fd open.
This requires an 'mdstat_close' so it doesn't stay open
forever.
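A sketch of the pair, relying on /proc/mdstat's behaviour of flagging
changes as "exceptional" select() events:

    #include <fcntl.h>
    #include <sys/select.h>
    #include <unistd.h>

    static int mdstat_fd = -1;

    static void mdstat_wait_sketch(int seconds)
    {
        fd_set fds;
        struct timeval tv = { .tv_sec = seconds };
        char buf[4096];

        if (mdstat_fd < 0)
            mdstat_fd = open("/proc/mdstat", O_RDONLY);
        /* consume current contents so the next change is fresh */
        lseek(mdstat_fd, 0, SEEK_SET);
        while (read(mdstat_fd, buf, sizeof(buf)) > 0)
            ;
        FD_ZERO(&fds);
        FD_SET(mdstat_fd, &fds);
        select(mdstat_fd + 1, NULL, NULL, &fds, &tv);
    }

    static void mdstat_close_sketch(void)
    {
        if (mdstat_fd >= 0)
            close(mdstat_fd);
        mdstat_fd = -1;
    }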
Signed-off-by: NeilBrown <neilb@suse.de>
--stop now tries to wait for a reshape to be at just the right spot.
However for a reducing reshape, mdadm will be running in the
background watching, and might adjust sync_max and mess things up.
So teach "progress_reshape" to notice when "sync_max" is modified, and
leave it alone.
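One way to notice the external change, sketched: remember the last value
we wrote and compare before every adjustment. read_md_num() is a
hypothetical helper, and a sync_max value of "max" would need extra
handling:

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    static long long read_md_num(const char *devnm, const char *attr)
    {
        char path[256], buf[32];
        int fd, n;

        snprintf(path, sizeof(path), "/sys/block/%s/md/%s",
                 devnm, attr);
        fd = open(path, O_RDONLY);
        if (fd < 0)
            return -1;
        n = read(fd, buf, sizeof(buf) - 1);
        close(fd);
        if (n <= 0)
            return -1;
        buf[n] = '\0';
        return strtoll(buf, NULL, 10);
    }

    /* In the monitor loop:
     *   if (read_md_num(devnm, "sync_max") != last_written)
     *       someone else (e.g. --stop) is driving sync_max,
     *       so stop adjusting it.
     */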
Signed-off-by: NeilBrown <neilb@suse.de>
progress_reshape() may not set reshape_completed if the reshape is
interrupted, so we need to initialize it to the current value
beforehand, so the value used afterwards is credible.
Signed-off-by: NeilBrown <neilb@suse.de>
If the restarted reshape needs a backup file and we don't have one,
that should be reported before we try to start the array.
Also we shouldn't say "Cannot grow" but rather "Cannot complete".
Signed-off-by: NeilBrown <neilb@suse.de>
To be able to revert a reshape of raid4/5/6 which is changing
the number of devices, the reshape must have been stopped on a multiple
of both the old and new stripe sizes.
The kernel only enforces the new stripe size multiple.
So we enforce the old-stripe-size multiple by careful use of
"sync_max" and monitoring "reshape_position".
Signed-off-by: NeilBrown <neilb@suse.de>
We have several places that wait for activity on a sysfs
file. Combine most of these into a single 'sysfs_wait' function.
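The core of such a function, sketched; sysfs reports attribute changes
as "exceptional" select() events after a read:

    #include <sys/select.h>
    #include <unistd.h>

    /* Wait (up to timeout_sec) for the sysfs attribute behind
     * 'fd' to change.  Returns like select(). */
    static int sysfs_wait_sketch(int fd, int timeout_sec)
    {
        fd_set fds;
        struct timeval tv = { .tv_sec = timeout_sec };
        char buf[64];

        lseek(fd, 0, SEEK_SET);
        read(fd, buf, sizeof(buf));     /* arm the notification */
        FD_ZERO(&fds);
        FD_SET(fd, &fds);
        return select(fd + 1, NULL, NULL, &fds, &tv);
    }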
Signed-off-by: NeilBrown <neilb@suse.de>
For RAID10, we must have head/tail space for reshape.
For RAID4/5/6 we can use a spare or a backup file.
So make that distinction.
Signed-off-by: NeilBrown <neilb@suse.de>
When changing the chunksize of an array, the new chunksize must
divide the device size.
If it doesn't we report a very brief message.
Make this message a bit longer and suggest a way forward, namely
reducing the size of the array.
Reported-by: Mark Knecht <markknecht@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.de>
There are now 3 places which change level.
And they all do it slightly differently with different
messages etc.
Make a single function for this and use it.
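At its core the shared helper is one write to the md "level" attribute
plus common error reporting; a simplified sketch with illustrative
naming:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int set_level_sketch(const char *devnm, const char *level)
    {
        char path[256];
        int fd, ok;

        snprintf(path, sizeof(path), "/sys/block/%s/md/level", devnm);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        ok = write(fd, level, strlen(level)) == (ssize_t)strlen(level);
        close(fd);
        if (!ok)
            fprintf(stderr, "mdadm: %s: cannot set level to %s\n",
                    devnm, level);
        return ok ? 0 : -1;
    }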
Signed-off-by: NeilBrown <neilb@suse.de>
When converting to RAID0, all spares and non-data drives
need to be removed first.
It is possible that the first HOT_REMOVE_DISK will fail because the
personality hasn't let go of it yet, so retry a few times.
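The retry loop, sketched; five attempts with a short back-off is an
illustrative choice:

    #include <errno.h>
    #include <sys/ioctl.h>
    #include <sys/types.h>
    #include <unistd.h>
    #include <linux/raid/md_u.h>    /* HOT_REMOVE_DISK */

    static int hot_remove_retry(int fd, dev_t devid)
    {
        int tries = 5;

        while (ioctl(fd, HOT_REMOVE_DISK, (unsigned long)devid) < 0) {
            if (errno != EBUSY || --tries == 0)
                return -1;
            usleep(200000);     /* let the personality release it */
        }
        return 0;
    }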
Signed-off-by: NeilBrown <neilb@suse.de>
After changing the level, the meaning of layout numbers changes,
so keeping a new_layout value around can cause later confusion.
Signed-off-by: NeilBrown <neilb@suse.de>
This means it will be set for a "--data-offset"-only reshape, so that
case doesn't complain that the array is getting smaller.
Signed-off-by: NeilBrown <neilb@suse.de>
1/ ignore failed devices - obviously
2/ We need to tell the kernel which direction the reshape should
progress even if we didn't choose the particular data_offset
to use; see the sketch below.
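For the second point, md exposes a "reshape_direction" attribute that
takes "forwards" or "backwards"; a sketch of setting it explicitly:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int set_reshape_direction(const char *devnm, int backwards)
    {
        const char *dir = backwards ? "backwards" : "forwards";
        char path[256];
        int fd, rv;

        snprintf(path, sizeof(path),
                 "/sys/block/%s/md/reshape_direction", devnm);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        rv = write(fd, dir, strlen(dir)) < 0 ? -1 : 0;
        close(fd);
        return rv;
    }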
Signed-off-by: NeilBrown <neilb@suse.de>
Setting new_offset can fail if the v1.x "data_size" is too small.
So if that happens, try increasing it first by writing "0".
That can fail on spare devices due to a kernel bug, so if that
doesn't work, try writing the correct number of sectors.
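The fallback, sketched; the per-device "dev-<name>/size" path and the
helper are illustrative assumptions, and the value must be in whatever
units the kernel expects for that attribute:

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    static int set_dev_attr(const char *devnm, const char *dev,
                            const char *attr, unsigned long long val)
    {
        char path[256], buf[32];
        int fd, rv;

        snprintf(path, sizeof(path), "/sys/block/%s/md/dev-%s/%s",
                 devnm, dev, attr);
        snprintf(buf, sizeof(buf), "%llu", val);
        fd = open(path, O_WRONLY);
        if (fd < 0)
            return -1;
        rv = write(fd, buf, strlen(buf)) < 0 ? -1 : 0;
        close(fd);
        return rv;
    }

    static int grow_data_size(const char *devnm, const char *dev,
                              unsigned long long val)
    {
        /* "0" asks the kernel to use all available space... */
        if (set_dev_attr(devnm, dev, "size", 0) == 0)
            return 0;
        /* ...but a kernel bug can reject that on spares, so
         * fall back to writing the exact value. */
        return set_dev_attr(devnm, dev, "size", val);
    }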
Signed-off-by: NeilBrown <neilb@suse.de>
Some people want to create truly enormous arrays.
As we sometimes need to hold one file descriptor for each
device, this can hit the NOFILE limit.
So raise the limit if it ever looks like it might be a problem.
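A sketch of the check, raising the soft limit toward the hard limit
only when needed; the headroom of 16 descriptors is illustrative:

    #include <sys/resource.h>

    static void maybe_raise_nofile(unsigned int devs_needed)
    {
        struct rlimit lim;

        if (getrlimit(RLIMIT_NOFILE, &lim) != 0)
            return;
        if (devs_needed + 16 <= lim.rlim_cur)
            return;                     /* enough headroom already */
        lim.rlim_cur = lim.rlim_max;    /* soft limit up to hard */
        setrlimit(RLIMIT_NOFILE, &lim);
    }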
Signed-off-by: NeilBrown <neilb@suse.de>
It is possible that the devices in an array have different sizes, and
different data_offsets. So the 'before_space' and 'after_space' may
be different from drive to drive.
Any decisions about how much to change the data_offset must work on
all devices, so must be based on the minimum available space on
any device.
So find this minimum first, then do the calculation.
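The two-pass rule, sketched with an illustrative structure:

    struct dev_space {
        unsigned long long before;  /* free sectors before the data */
        unsigned long long after;   /* free sectors after the data */
    };

    /* Pass 1: the offset change must fit on every device, so take
     * the minimum of each space over all members. */
    static void min_space(const struct dev_space *devs, int n,
                          unsigned long long *before,
                          unsigned long long *after)
    {
        int i;

        *before = devs[0].before;
        *after = devs[0].after;
        for (i = 1; i < n; i++) {
            if (devs[i].before < *before)
                *before = devs[i].before;
            if (devs[i].after < *after)
                *after = devs[i].after;
        }
    }

Pass 2 (the actual data_offset calculation) then uses only these minima.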
Signed-off-by: NeilBrown <neilb@suse.de>
If space_after and space_before are zero (the default) then assume that
metadata doesn't support changing data_offset.
Signed-off-by: NeilBrown <neilb@suse.de>
If we can modify the data_offset, we can avoid doing any backups at all.
If we can't, fall back on the old approach - but not if --data-offset
was requested.
Signed-off-by: NeilBrown <neilb@suse.de>
raid10 currently uses the 'backup_blocks' field to store something
else: a minimum offset change.
This is bad practice, and we will shortly need both for RAID5/6,
so make a separate field.
Signed-off-by: NeilBrown <neilb@suse.de>