If you change the size of a member of an array (e.g. it might be a
dm device that can be resized, or on a smart storage device), md
doesn't notice and so the space cannot be used without explicitly
telling md that the device is bigger.
This change makes "mdadm --grow --size=..." ensure that each
component device makes at least that much space available, if it
can.
Normally "--size=max" will cause all devices to make the maximum
space available, and md will use as much of that as it can.
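A rough sketch of the per-component update (sysfs_set_num() is mdadm's
sysfs helper; the loop context and the handling of 'max' are illustrative
assumptions):
    /* Ask each component to make at least 'size' KB available;
     * writing 0 is taken to mean "make all available space usable". */
    struct mdinfo *sd;
    for (sd = sra->devs; sd; sd = sd->next) {
            unsigned long long want = size_is_max ? 0 : size;
            if (sysfs_set_num(sra, sd, "size", want) < 0)
                    break;  /* this device cannot provide that much */
    }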
Signed-off-by: NeilBrown <neilb@suse.de>
During a reshape restart, info->array.raid_disks contains the new
number of raid disks. It cannot be compared against the old disk
count; such a check will always fail.
On restart, check the raid_disks field against the final disk count.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When restarting a reshape, the value of 'raid_disks' is the *new*
value. The old value is found by subtracting delta_disks.
So before calling analyse_change we must set raid_disks to be the
old value, and then reset it afterwards.
All other fields are cleanly separated with the main field being
the 'old' value and a new_* field available.
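Schematically (analyse_change()'s exact signature is an assumption here):
    int new_disks = info->array.raid_disks;
    if (restart)
            /* present the old value while analysing the change */
            info->array.raid_disks -= info->delta_disks;
    msg = analyse_change(info, &reshape);
    info->array.raid_disks = new_disks;  /* put the new value back */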
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Return values greater than 0 mean an error. On error we exit the
loop with an empty super-block pointer while the sd pointer is still
valid, so the check condition cannot detect this as an error.
We certainly should not continue in an error state: doing so leads
to a crash (with a core file) when the metadata handler tries to
access the non-existent super-block.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When a raid0 expansion is restarted, mdadm refuses to assemble the
array because the critical section cannot be restored from the backup file.
mdadm exits with the message:
mdadm: Failed to restore critical section for reshape - sorry.
For raid0 the new level is 0 while the current array level is 4, and
the Grow_restart() function doesn't allow a level change.
Grow_restart really shouldn't be checking for level changes:
as they are always instantaneous they should never appear
in the metadata, so checking for them is meaningless.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When we add spares that have been targeted at a specific slot,
we need raid_disks to be bigger than the slot number.
But currently we don't increase raid_disks until after we add
these spares.
So introduce an early increase of raid_disks to allow the spares
to be added.
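A minimal sketch of the early increase, assuming it is done via sysfs
(the slot bookkeeping is illustrative):
    /* Make raid_disks exceed the highest targeted slot before the
     * spares are added; the final value is set later as usual. */
    if (highest_slot >= array.raid_disks)
            sysfs_set_num(sra, NULL, "raid_disks", highest_slot + 1);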
Signed-off-by: NeilBrown <neilb@suse.de>
When the result of devnum2devname() is used as input to ping_monitor(),
the returned string pointer must be passed to free() to release the memory.
This is not done in several places. This use case should have a helper
function so the memory leak is avoided.
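A sketch of such a helper (the wrapper name is hypothetical;
devnum2devname() and ping_monitor() are the existing mdadm functions):
    int ping_monitor_by_devnum(int devnum)
    {
            char *name = devnum2devname(devnum);
            int err;

            if (!name)
                    return -1;
            err = ping_monitor(name);
            free(name);  /* callers no longer leak the name */
            return err;
    }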
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Unfreeze the array on success only.
rv is initialized from the restart variable, so we have two cases:
1. Regular reshape start (rv == restart == 0): a real error
(returned by the reshape) could leave the container frozen;
if the array has not been touched by the reshape it can be unfrozen.
2. During a reshape restart, even an untouched array under reshape
was left unfrozen.
If the reshape has started, do not unfreeze the array on error either.
This allows the user to take repair action on the array
(mdmon will not change the array state).
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When an array using native metadata is increasing in size, we don't
need to keep monitoring it after the initial 'critical section'.
So detect that case.
If a final level-change is still needed mdadm will wait for that,
otherwise it will simply exit.
Signed-off-by: NeilBrown <neilb@suse.de>
We now test ->reshape_active, but don't set it in a common case.
So just zero out the whole structure to be on the safe side.
Signed-off-by: NeilBrown <neilb@suse.de>
This is done via conversion to RAID4 and back.
To grow the array, extra devices will be needed; these cannot
already be present as spares, so allow a list of new devices
to be included in a grow request which changes the number of devices.
Signed-off-by: NeilBrown <neilb@suse.de>
When a RAID0 is started using the SET_ARRAY_INFO ioctl, the
component_size will be zero.
This confused the code for reshaping a RAID0 via RAID4.
So if that seems to be the case, fake a believable component_size.
Signed-off-by: NeilBrown <neilb@suse.de>
During assembly, an array in a container that is not being reshaped is
left in the 'auto-read-only' state. In that state it is not possible to
set a disk's slot, so a reshape cannot be started later either. To move
the array from 'auto-read-only' to 'active', a write of 'active' to
sysfs is added. This allows the disks to be configured and the reshape
to run.
When restarting a reshaped array this write is disabled by a condition
on the restart variable.
When a reshape is starting, writing 'active' to an already-active
array should not matter.
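The added write, roughly (sysfs_set_str() is mdadm's sysfs helper;
the guard mirrors the description above):
    if (!restart)
            /* leave 'auto-read-only' so slots can be set and
             * the reshape can start */
            sysfs_set_str(sra, NULL, "array_state", "active");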
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When starting an array that is in the middle of a migration,
we need to start mdmon, just as we do for arrays which are not
in the middle of a migration.
Reported-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
st->sb is NULL; this is the cause of the crash.
reshape_container() expects the super-block to have been loaded.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
This is a bit of a hack - probably analyse_change needs to be
rewritten a bit to handle this properly.
However when the metadata deduced the intermediate state for a
reshaping array, the 'new_level' it sets should not be used to
interpret the 'delta_disks' number.
So in that case, hide the new_level while calling analyse_change.
Signed-off-by: NeilBrown <neilb@suse.de>
The 'raid_disks' for a container is zero, so subtracting it
from the given raid_disks to get delta_disks doesn't make sense.
Rather set delta_disks to UnSet and set raid_disks to the requested
number of disks. This then gets passed to reshape_super() which
can use it as required.
Signed-off-by: NeilBrown <neilb@suse.de>
The check that the array info is already in 'native format' is
only relevant when restarting a growth, so only perform it then.
Signed-off-by: NeilBrown <neilb@suse.de>
Normally when reshape_array is called with restart == 0,
info->array is the same as the 'array' read from the kernel
(via ioctl) so both have the same level.
However when called from reshape_container, info->array was
generated by the metadata so it will have 'level' set to the
intermediate (or final) level already.
So to test if we need to change the level, we need to compare the
desired level with that which was loaded from the kernel (array.level)
rather than that which was read from metadata (info->array.level).
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
The check for 'enough spares' doesn't apply to RAID0 as we don't
mind it going degraded. But add a test that there are enough spares
to actually produce a working array.
Signed-off-by: NeilBrown <neilb@suse.de>
Some grow operations must be applied to a whole container. These
are performed one array at a time, so only one array appears to
be reshaping.
When re-assembling such an array, we need to make sure that
when the reshape finishes, we move on to the next array.
So require metadata to set ->reshape_active = 2 in that case,
and use reshape_container to complete the reshape.
Signed-off-by: NeilBrown <neilb@suse.de>
Now that the external metadata handler must provide an md-compatible
old/new geometry, sysfs_set_array can do all of the array set-up for
an array that is undergoing reshape.
That leaves less for reshape_array to do.
Also clean up how reshape_array tells whether the reshape has started
or not.
Don't use ->reshape_active as that doesn't tell us anything consistent
at this stage, only use the 'restart' flag passed in.
Signed-off-by: NeilBrown <neilb@suse.de>
When assembling an array using assemble_container_content() in the
external metadata case, the array is already in the 'readonly' state.
It is not necessary to duplicate this operation.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When calling reshape_array() for external metadata, the 'container name'
parameter has to be passed.
Find and pass the container name in the external metadata case.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When changing level with a new number of raid disks explicitly
specified, we must make sure that the change implied by the
change in level is properly incorporated into the final result.
So explicitly track the change in the number of parity disks
(delta_parity) and use it together with delta_disks to determine
the final data_disks.
Also set info->delta_disks so other code doesn't need to mirror
this analysis.
And add errors for the cases where a new number of disks was
requested but is not currently supported.
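A sketch of the accounting (the helper is illustrative, not mdadm's
exact code):
    static int parity_disks(int level)
    {
            switch (level) {
            case 4:
            case 5:
                    return 1;
            case 6:
                    return 2;
            default:
                    return 0;
            }
    }

    /* delta_parity = parity_disks(new_level) - parity_disks(old_level);
     * final data_disks = raid_disks + delta_disks
     *                    - parity_disks(new_level) */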
Reported-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
The raid5 to raid0 transition was not covered in analyse_change().
The missing case is added.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
For external metadata, the number of spares cannot be obtained via
the GET_ARRAY_INFO ioctl for a particular array (which is how the
info variable is initialized). In md this information is held by the
container object, not the array.
So the number of spare disks has to be taken from the external metadata.
This information is required by the reshape_array() function to decide
whether the number of spare disks satisfies the operation's requirements.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
delta_disks can be set to the UnSet value.
This can cause a wrong parameter to be passed to reshape_super().
To avoid such situations, the raid_disks and delta_disks parameters
have to be passed to reshape_super() separately.
It is then up to reshape_super() to validate and use these
parameters, avoiding invalid values.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
After a level migration the array is left frozen.
When the process was not forked externally, reshape_array() should
unfreeze the array before exit (this is not a container operation).
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
The problem occurs when we want to expand a single-disk raid0 array.
This is done via a degraded two-disk raid4 array. When the new spare disk
for the reshape is added to the array, md immediately initiates recovery
before mdadm can configure and start the reshape. This is due to the fact
that a two-disk raid4/5 array is a special case in md.
Mdmon does nothing here because the container is blocked.
The cause is that after takeover the array is not in the frozen state in md.
Put the array into the frozen state after takeover to allow mdadm to finish
configuration before the reshape is executed in md.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When a container operation fails before the child process starts,
the array can be left frozen, because reshape_container() doesn't
perform the unfreeze() operation in all error cases, as it is
responsible for doing.
Add an unfreeze() operation for the error-case scenarios in
reshape_container().
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
reshape_prepare_fdlist and child_monitor currently have slightly
different ideas of the 'old number of raid devices' which can cause
major confusion.
So settle on one value, and assign it to odisks early and always use
it.
Signed-off-by: NeilBrown <neilb@suse.de>
The kernel requires sync_max to be a multiple of the current
chunk size. This is not really 'correct', but we need to work
with it. So round down.
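Roughly (variable names and units are illustrative; 'chunk' is the
current chunk size in the same units as sync_max):
    unsigned long long max = wanted_sync_max;
    max -= max % chunk;  /* the kernel wants a chunk multiple */
    sysfs_set_num(sra, NULL, "sync_max", max);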
Signed-off-by: NeilBrown <neilb@suse.de>
Due to a miscalculation we didn't initialise the whole file.
There is 4K (8 sectors) for the metadata, then the data.
Signed-off-by: NeilBrown <neilb@suse.de>
When restarting a reshape with internal metadata, the new geometry
is already set and the reshape has been started (but has not been
allowed to continue yet).
So in that case, don't set things and don't ask for a reshape.
Signed-off-by: NeilBrown <neilb@suse.de>
The value of 'array' might not be current, so SET_ARRAY_INFO
could fail. Just refresh it before setting raid_disks.
Signed-off-by: NeilBrown <neilb@suse.de>
A problem has been observed when a raid10<->raid0 takeover operation
is executed.
In the code in the reshape_array function that updates layout,
raid_disks and chunk_size for non-restriping operations, the
SET_ARRAY_INFO ioctl call did not succeed.
The takeover process finished with an error and mdadm showed the message:
"mdadm: failed to set disks"
The cause is that the SET_ARRAY_INFO ioctl requirements were not met:
- only one parameter may be changed at a time
- the level in the current array info and the new info must be the same
This patch introduces a solution for the issue.
At the beginning of the affected code we read the current information
about the array and compare it with the new values to be set.
If a particular value differs (and should be set),
we overwrite only that one field in the array info and then call the ioctl.
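The pattern, as a self-contained sketch (GET_ARRAY_INFO/SET_ARRAY_INFO
and mdu_array_info_t are the real md ioctl interface; the function
itself is illustrative):
    #include <sys/ioctl.h>
    #include <linux/raid/md_u.h>

    static int set_raid_disks_only(int fd, int new_raid_disks)
    {
            mdu_array_info_t array;

            if (ioctl(fd, GET_ARRAY_INFO, &array) < 0)
                    return -1;  /* read the current state first */
            if (array.raid_disks == new_raid_disks)
                    return 0;   /* nothing to change */
            array.raid_disks = new_raid_disks;  /* change one field only */
            return ioctl(fd, SET_ARRAY_INFO, &array);
    }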
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
For the raid10 -> raid0 takeover operation we should reject the disks
in the mirror by marking them as 'failed' and then remove them from
the array by writing "remove" to the disk state.
For external metadata the second action is performed by mdmon.
According to the description in monitor.c:175, when the monitor detects
"faulty" in a device's state, it blocks the device, marks it as failed
in the metadata, unblocks the device and finally writes "remove"
to the device state.
So for the external case, writing "remove" to the device state in mdadm
is unnecessary and harmful.
It may cause the following issues:
1. The "remove" operation in mdadm does not finish successfully,
because the monitor may have blocked the device, or the disk may
already have been removed by the monitor.
2. If mdadm removes the disk before mdmon catches the "failed" state,
the metadata is not properly updated - the disk is not marked as failed.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
After the loop it can happen that, due to a 0 value reported by the
kernel, the completed variable holds 0.
This is wrong: we are interested in the real completion point.
A 0 value means that we reached the sync point set in md,
so we can set the completed variable to the point just reached.
That point's value is stored in the max_progress variable.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
mdadm should verify that the reshape has started before it goes
into the check-pointing machinery.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
The problem occurs when we want to expand a single-disk raid0 array.
This is done via a degraded two-disk raid4 array. When a new spare
is added to the array, md immediately initiates recovery before
mdadm can configure and start the reshape. This is due to the fact
that a two-disk raid4/5 array is a special case in md. Mdmon does
nothing here because the container is blocked.
Putting the array into the frozen state allows mdadm to finish
configuration before the reshape is executed in md.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When analyse_change sets level=1, data_disks is meaningless
as is layout.
So don't set them, and make sure we ignore them.
Signed-off-by: NeilBrown <neilb@suse.de>
Add support for raid1 to raid0 takeover operation in user space.
This patch includes support for native and imsm metadata.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When the reshape has finished in md, progress_reshape() hangs
in select().
This is because 'sync_completed' is reset to zero before 'sync_action'
becomes 'idle', and we don't look for notification on 'sync_action'.
So if completed becomes zero after reshape_progress has advanced,
deduce that the reshape has finished.
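In outline (variable names are illustrative):
    /* 'sync_completed' dropping back to 0 after progress has been
     * seen means the kernel has finished the reshape. */
    if (completed == 0 && made_progress)
            return 1;  /* done - do not block in select() again */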
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When a container holds both raid0 and raid5 arrays, and the reshape order is:
1. raid0 array
2. raid5 array
mdadm cannot set the new raid_disks for the raid0 array. For this action md
has to handshake with mdmon. We have the following conditions:
1. the raid0 array is not monitored
2. the raid0 array has just been taken over to raid4/5 (so it has to be monitored)
3. the monitor has to start monitoring the new raid4/5 array
4. the monitor is not started for it (it is started for the second, raid5, array)
In such a situation ping_monitor() is required to let mdmon know about the
new array (not only in the case where the monitor is being started).
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
This patch introduces takeover from level 10 to level 0 for imsm
metadata. It contains the procedures for preparing
and applying the metadata update during a 10 -> 0 takeover.
When performing a 10 -> 0 takeover, mdmon must update the external
metadata (due to the disk slot and level changes).
To achieve that, mdadm calls reshape_super(), which prepares
an "update_takeover" metadata update.
The prepared update is processed by mdmon in process_update().
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
- when reshaping a container, ->reshape_active is already set
even though it isn't really active yet, so we need to set
the new geometry even when reshape_active is set. This is safe.
- When restarting a reshape, make sure the reshape_position is set
appropriately when external metadata is used.
Signed-off-by: NeilBrown <neilb@suse.de>
Only adjust reshape_progress based on the backup that was found
if the backup covered the current reshape_progress point.
Signed-off-by: NeilBrown <neilb@suse.de>
When reshaping backwards we only back up from backup_blocks to
the start, so initialise backup_point appropriately.
Signed-off-by: NeilBrown <neilb@suse.de>
I've seen mdadm spinning while failing to read 'degraded'.
This doesn't really fix it, but is a reminder that it needs to be
fixed.
Signed-off-by: NeilBrown <neilb@suse.de>
When reshaping from the end of the array to the start, for times
when the number of data devices is decreasing, the handling of the
backup area isn't a simple mirror of the handling of low-to-high
reshapes, as the backup area is always low in the array.
So re-write that to make it work.
Signed-off-by: NeilBrown <neilb@suse.de>
When reshaping it is correct to open containers exclusively, but not
arrays. The array could very easily be in use, e.g. by a mounted
filesystem.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
For non-restriping transitions the array must be unfrozen
before the end of processing.
For restriping transitions we normally let the child
unfreeze the array, but in this case there is no child.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
The reshape.after.data_disks field must be initialized
for the raid0<->raid10 transition.
Otherwise the spares_needed variable calculated in the reshape_array
function has a random value.
Signed-off-by: Krzysztof Wojcik <krzysztof.wojcik@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When mdadm hits the "reduce size" error on a taken-over array, it jumps
to 'release' and tries to execute the backward takeover. At this point
the sra pointer is not initialized and a core dump is generated.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When we restart an array in the middle of a reshape, we reuse a lot of
the code for starting the reshape, but it needs to know that
circumstances are slightly different.
So add a 'restart' arg which is used:
- skip checking and adding spares
- activate the array (rather than start reshape)
- allow the backup file to already exist
Signed-off-by: NeilBrown <neilb@suse.de>
When restarting an array that is in the middle of a reshape,
sync_min cannot be set. So just ignore any errors we get
when trying to set it.
Signed-off-by: NeilBrown <neilb@suse.de>
Particular problem was that we didn't unfreeze if a reshape
wasn't needed.
But all that 'rv' stuff isn't needed and some of it was wrong,
so simplify it all.
Signed-off-by: NeilBrown <neilb@suse.de>
We only 'goto release' on error, but that branch contained handling
for non-error conditions: reloading metadata. Obviously that doesn't
work.
So re-arrange the code to make it more of a straight line that is
easier to follow and reload the metadata if that might be at all
needed.
Signed-off-by: NeilBrown <neilb@suse.de>
Everything other than the 'child' part of the 'switch(fork)' returns
quickly, so leave them inside the switch but move the other large bit
out so as to make the flow of code more natural.
Signed-off-by: NeilBrown <neilb@suse.de>
1/ don't pass 'frozen' as an arg to unfreeze - just use it
to conditionally call 'unfreeze'.
2/ Always unfreeze at end of reshape_container
3/ Only unfreeze at end of reshape_array if not 'forked'. So when
   reshape_array is called from reshape_container it doesn't unfreeze,
   but it does when called directly.
Signed-off-by: NeilBrown <neilb@suse.de>
When a container is passed to grow_reshape(), the load_container()
function has to be used to get all required information from the metadata.
So load_super is never correct here - in particular, cfd is a
'container fd' so we must 'load_container' on it.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
When reshape_super() is called from reshape_container() with size set to
info->component_size, size inside reshape_super() becomes -2 due to an
unsigned/signed conversion (info->component_size is not initialized).
As the size is not changed during a container reshape, a '-1' value is
passed to indicate this.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
reshape_array uses text_version to reload the container content,
so make sure it is available.
Signed-off-by: Marcin Labun <marcin.labun@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
Adding disks fails due to an empty sys name field.
sysfs_init() fills in the fields required for the add-disk operation.
Signed-off-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
To reshape a RAID0 we convert to RAID4 first. This makes it look
like it could be degraded and so we are tempted to ensure there are
enough spares. However this is not appropriate for RAID0, so
explicitly exclude new_level == RAID0 in this check.
Reported-by: Adam Kwolek <adam.kwolek@intel.com>
Signed-off-by: NeilBrown <neilb@suse.de>
child_monitor was designed to perform 'manage_reshape' for native
arrays. So change the signature of ->manage_reshape to match
child_monitor and move the call to the same place that child_monitor
is called from.
Also give super-intel a manage_reshape handler which simply calls
child_monitor.
Signed-off-by: NeilBrown <neilb@suse.de>
child_monitor has overall responsibility for backups using the generic
bsb, so move all handling under its control.
Signed-off-by: NeilBrown <neilb@suse.de>
We currently suspend rather large sections of
the array which can take a while to process.
Possibly smaller sections are better. Possibly it should be
adjusted on a timeout basis.
Signed-off-by: NeilBrown <neilb@suse.de>
The array might not be a multiple of the chosen backup size, so
the last bit to be backed up might be a bit smaller. Handle that
correctly.
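Schematically (names are illustrative):
    unsigned long long len = backup_blocks;
    if (progress + len > array_blocks)
            len = array_blocks - progress;  /* short final backup */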
Signed-off-by: NeilBrown <neilb@suse.de>
At some points we need to perform two backups at once so as to
start the 'double buffering' approach. So rather than assuming
what the next backup should be, examine suspend_point and back up
as much as possible up to there.
Signed-off-by: NeilBrown <neilb@suse.de>
1/ We need to clean up the backup file after the reshape finishes.
2/ We need to remove the suspended region and clear the resync
controls after the resync finishes.
Signed-off-by: NeilBrown <neilb@suse.de>
The array_size we need to consider is the largest possible size of the
array, which is a different calculation depending on whether the array
is growing or shrinking.
Signed-off-by: NeilBrown <neilb@suse.de>
'sync_completed' can sometimes have a value which is slightly high.
So round the relevant values down to the new chunk size - that gives
what we want.
Subtract from component_size after scaling down rather than before as
that is easier.
Make sure max_progress never goes negative when reshaping backwards.
Signed-off-by: NeilBrown <neilb@suse.de>
The current code is not right.
Instead compute where we might eventually need to back up to, and
then compare that to how far we have progressed.
Also move suspend_point up towards where we might need to backup to,
rather than just as far as max_progress - as max_progress can never
exceed where we are currently suspended to.
Signed-off-by: NeilBrown <neilb@suse.de>
It isn't needed as we always work in multiples of full
destination stripes.
Also multiply by 'after' disks, not before.
We can make progress until the point where we would then write lines
up with where we now read.
We read now from:
    array address: reshape_progress,  device address: read_offset
So we would then write to:
    device address: read_offset,  array address: read_offset * after.disks
Signed-off-by: NeilBrown <neilb@suse.de>
The 'blocks' number computed by analyse_change is the number of
blocks that it makes sense to back-up at a time.
It is the smallest number of blocks that is a whole number of
stripes in both the old and the new layout.
However we are also using it as the smallest amount of progress
that can be made at a time, which is wrong as it is always valid
to progress a single stripe in the new layout.
So change 'blocks' to be called 'backup_blocks' to make it more clear.
And pass new_chunk size down so it can be used for 'minimum forward
progress' calculations.
Also set 'stripes' (the amount actually backed up) from the
possibly-scaled 'blocks' number rather than ignoring it and using
backup_blocks.
Finally, get rid of 'read_range' as it isn't used (or needed).
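For reference, backup_blocks is the smallest span that is a whole number
of stripes in both layouts; schematically (the inputs are illustrative
and assumed non-zero):
    unsigned long long old_stripe = old_chunk * old_data_disks;
    unsigned long long new_stripe = new_chunk * new_data_disks;
    unsigned long long backup_blocks = old_stripe;

    /* first multiple of old_stripe that is also whole new stripes */
    while (backup_blocks % new_stripe)
            backup_blocks += old_stripe;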
Signed-off-by: NeilBrown <neilb@suse.de>
Once we have called reshape_container or reshape_super we have handed
on the responsibility for unfreezing the array, so Grow_reshape
shouldn't call unfreeze.
Signed-off-by: NeilBrown <neilb@suse.de>
When converting to RAID6, the new layout should match the old
layout, not the RAID6 version of the old layout.
Signed-off-by: NeilBrown <neilb@suse.de>
We want start_reshape to work no matter what the current values
of suspend_lo/suspend_hi are. So initialise suspend_lo very high
as this allows suspend_hi to be set to anything.
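Roughly (sysfs_set_num() is mdadm's helper; the constant is an
illustrative "very high" value):
    /* Park suspend_lo beyond any realistic array size so that any
     * suspend_hi the reshape wants to set is acceptable. */
    sysfs_set_num(sra, NULL, "suspend_lo", 0x7FFFFFFFFFFFFFFFULL);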
Signed-off-by: NeilBrown <neilb@suse.de>