Added a new type of line to mdadm.conf which allows specifying values of
sysfs attributes for MD devices that should be applied after the array is
assembled. Each line is interpreted as a list of structures containing the
sysname of an MD device (md126 etc.) and a list of sysfs attributes with
their values.
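For illustration, such a line in mdadm.conf might look roughly like this
(the keyword and the attribute names are assumptions for illustration, not
taken from the patch):
	SYSFS name=md126 sync_speed_max=500000 group_thread_cnt=4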
Signed-off-by: Mariusz Dabrowski <mariusz.dabrowski@intel.com>
Signed-off-by: Krzysztof Smolinski <krzysztof.smolinski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Shut up some gcc9 errors by using put_unaligned() accessors. Not pretty,
but better than it was.
Also switch to the correct swap macros.
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
In some cases, if more than 6 OROMs exist, the resource for a particular
controller may not be found. Change the method of storing
adapter_rom_resources from an array to a list.
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If a member drive disappears and is set faulty by the kernel during
mdmon startup, after ss->load_container() but before manage_new(), mdmon
will try to re-add the faulty drive to the array and start rebuilding.
Metadata on the active drive is updated, but the faulty drive is not
removed from the array and is left in a "blocked" state, and any write
request to the array will block. If the faulty drive reappears in the
system, e.g. after a reboot, the array will not assemble because the
metadata on the drives will be incompatible (at least on imsm).
Fix this by adding a new option for sysfs_read(): "GET_DEVS_ALL". This
is an extension for the "GET_DEVS" option and causes all member devices
to be returned, even if the associated block device has been removed.
Use this option in manage_new() to include the faulty device on the
active_array's devices list. Mdmon will then properly remove the faulty
device from the array and update the metadata to reflect the degraded
state.
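A sketch of the resulting sysfs_read() call in manage_new() (the set of
option flags shown here is illustrative; GET_DEVS_ALL is the relevant new
bit):
	/* Include member devices whose block device has already been
	 * removed; GET_DEVS_ALL is an extension of GET_DEVS. */
	mdi = sysfs_read(-1, dev->devnm,
			 GET_LEVEL | GET_CHUNK | GET_DISKS | GET_COMPONENT |
			 GET_SAFEMODE | GET_DEVS_ALL | GET_OFFSET | GET_SIZE);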
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
These rules create a link under /dev/disk/by-partuuid/ for MD device
partitions, which makes it possible to specify root=PARTUUID=XXX to boot
the rootfs.
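A sketch of such a udev rule (key names follow the usual by-partuuid
handling in udev and are assumptions, not copied from the patch):
	ENV{DEVTYPE}=="partition", ENV{ID_PART_ENTRY_UUID}=="?*", \
		SYMLINK+="disk/by-partuuid/$env{ID_PART_ENTRY_UUID}"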
Signed-off-by: Liwei Song <liwei.song@windriver.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When the passed size is smaller than the chunk size, mdadm rounds it down
to 0, but 0 there means the maximum available space.
Block this for every metadata type. Remove the same check from the imsm routine.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
During spare activation get_extents() calculates the metadata reserved space
based on the smallest active RAID member, or it takes the defaults. Since patch
611d9529 ("imsm: change reserved space to 4MB") the default is larger. If the
array was created prior to that patch, the reserved space is smaller. In the
case of matrix RAID the spare is activated in each array one-by-one, so the
drive is a spare for the first activation, but treated as "active" during the
second one.
When adding a spare drive of the same size as an already existing member drive
to an old matrix RAID, the routine will take the defaults during the second
run and mdmon will refuse to rebuild the second volume, claiming that the
drive does not have enough free space.
Add a parameter to get_extents() so that during spare activation the reserved
space is always based on the smallest active drive - even if the given drive
is already active in some other array of the matrix RAID.
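A sketch of the interface change (the parameter name is an assumption):
	/* When get_minimal_reservation is set (spare activation), base the
	 * reserved space on the smallest active member rather than the
	 * defaults, even if this drive is already active in another array
	 * of the matrix RAID. */
	static struct extent *get_extents(struct intel_super *super,
					  struct dl *dl,
					  int get_minimal_reservation);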
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Within the output of "mdadm --examine", there are three sizes reported
on adjacent lines. For example:
	$ sudo mdadm --examine /dev/md3
	[...]
	Avail Dev Size : 17580545024 (8383.06 GiB 9001.24 GB)
	    Array Size : 17580417024 (16765.99 GiB 18002.35 GB)
	 Used Dev Size : 11720278016 (5588.66 GiB 6000.78 GB)
	[...]
This can be confusing, since the first and third line are in 512-byte
sectors, and the second is in KiB.
Add units to avoid ambiguity.
(I don't particularly like the "KiB" notation, but it is at least
unambiguous.)
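For illustration, the same output with units added might read (exact
labels assumed):
	Avail Dev Size : 17580545024 sectors (8383.06 GiB 9001.24 GB)
	    Array Size : 17580417024 KiB (16765.99 GiB 18002.35 GB)
	 Used Dev Size : 11720278016 sectors (5588.66 GiB 6000.78 GB)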
Signed-off-by: Corey Hickey <bugfood-c@fatooh.org>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If the array was stopped during reshape initialization, there might be a
"0" checkpoint recorded in the metadata. If an array in such a condition
(reshape with position 0) is passed to the kernel, it will refuse to start
the array.
Treat such an array as normal during assembly; Grow_continue() will
reinitialize and start the reshape.
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Since the patch c76242c5 ("mdmon: get safe mode delay file descriptor
early"), safe_mode_delay is set properly by the initrd mdmon. But in some
cases, with filesystem traffic from the very start of the system, it might
take a while to transition to the clean state. Because the new mdmon does
not wait for the old one to exit, it may happen that the new one switches
safe_mode_delay back to seconds before the old one exits. As a result, two
mdmons are running concurrently on the same array.
Wait for the old mdmon to exit, pinging it with the SIGUSR1 signal just in
case it is sleeping.
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When mdmon gets a SIGTERM, it stops managing arrays that are clean. If
there is more than one array in the container and one of them is dirty
while the clean one is still present in mdstat, mdmon will treat the clean
one as a new array and start managing it again. This leads to a cycle of
remove_old() / manage_new() calls for the clean array, until the other
one also becomes clean.
Prevent this by not calling manage_new() if sigterm is set. Also, remove
a check for sigterm in manage_new() because the condition will never be
true.
Signed-off-by: Artur Paszkiewicz <artur.paszkiewicz@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
This creates a raid1 device with the failfast option and checks that all
slaves have the failfast flag. It then assembles and grows the raid1
device and checks that failfast works fine.
Signed-off-by: Gioh Kim <gi-oh.kim@cloud.ionos.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
...when not changing the number of disks.
This patch needs context to explain. These are the relevant parts of
the original code (condensed and annotated):
	if (dir > 0) {
		/* Increase data offset (reshape backwards) */
		if (data_offset < sd->data_offset + min) {
			pr_err("--data-offset too small on %s\n",
				dn);
			goto release;
		}
	} else {
		/* Decrease data offset (reshape forwards) */
		if (data_offset < sd->data_offset - min) {
			pr_err("--data-offset too small on %s\n",
				dn);
			goto release;
		}
	}
When this code is reached, mdadm has already decided on a reshape
direction. When increasing the data offset, the reshape runs backwards
(dir==1); when decreasing the data offset, the reshape runs forwards
(dir==-1).
The conditional within the backwards reshape is correct: the requested
offset must be larger than the old offset plus a minimum delta; thus the
reshape has room to work.
For the forwards reshape, the requested offset needs to be smaller than
the old offset minus a minimum delta; to do this correctly, the
comparison must be reversed.
Also update the error message.
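The corrected forwards-reshape check would then look roughly like this
(error-message wording is an assumption):
		/* Decrease data offset (reshape forwards): the new offset
		 * must be at least 'min' sectors below the old one. */
		if (data_offset > sd->data_offset - min) {
			pr_err("--data-offset too large on %s\n",
				dn);
			goto release;
		}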
Note: I have tested this change on a RAID 5 on Linux 4.18.0 and verified
that there were no errors from the kernel and that the device data
remained intact. I do not know if there are considerations for different
RAID levels.
Signed-off-by: Corey Hickey <bugfood-c@fatooh.org>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Commit b9c9bd9bac ("Detail: ensure --export names are acceptable as
shell variables") duplicates mdi->sys_name into the sysdev string with
	char *sysdev = xstrdup(mdi->sys_name + 1);
which skips the first character of mdi->sys_name. Then when running
mdadm --detail <md device> --export, the output looks like,
MD_DEVICE_ev_sda2_ROLE=1
MD_DEVICE_ev_sda2_DEV=/dev/sda2
The first character of md device (between MD_DEVICE and _ROLE/_DEV)
is dropped. The expected output should be,
MD_DEVICE_dev_sda2_ROLE=1
MD_DEVICE_dev_sda2_DEV=/dev/sda2
This patch removes the '+ 1' from the xstrdup() call in Detail(), which
restores the dropped first character.
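In other words, using only the line quoted above:
	char *sysdev = xstrdup(mdi->sys_name);	/* was: mdi->sys_name + 1 */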
Reported-by: Arvin Schnell <aschnell@suse.com>
Fixes: b9c9bd9bac ("Detail: ensure --export names are acceptable as shell variables")
Signed-off-by: Coly Li <colyli@suse.de>
Cc: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If a reshape is performed on drives larger than 2 TB, the migration
checkpoint area that is calculated exceeds a 32-bit value.
This checkpoint area is a reserved space treated as backup
during reshape - at the end of the drive, right before the metadata.
As a result the wrong space is used, and any data that may exist there
is overwritten.
Add an additional field to the migration record to track the high-order
32 bits of the pba of this area. Three other fields that may exceed a
32-bit value for large drives are added as well.
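A minimal sketch of the idea (field names are assumptions, not copied
from the patch):
	#include <stdint.h>
	/* The checkpoint-area PBA is split into low and high 32-bit halves
	 * so it can address drives larger than 2 TB. */
	struct migr_record_sketch {
		uint32_t ckpt_area_pba;		/* existing: low 32 bits */
		uint32_t ckpt_area_pba_hi;	/* new: high 32 bits */
	};
	static inline uint64_t migr_ckpt_area_pba(const struct migr_record_sketch *r)
	{
		return ((uint64_t)r->ckpt_area_pba_hi << 32) | r->ckpt_area_pba;
	}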
Signed-off-by: Pawel Baldysiak <pawel.baldysiak@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Commit d7a1fda276 ("imsm: update metadata correctly while raid10 double
degradation") resolves the main IMSM double degradation problems but
omits one case. Currently the metadata hangs in the rebuilding state if
the drive under rebuild is removed during recovery from double degradation.
The root cause of this problem is comparing the new map_state with the
current one and, if both are degraded, assuming that nothing new has
happened.
Don't rely on map states; just check whether the device has failed. If the
drive under rebuild fails, finish the migration; in other cases update the
map state only (a second failure means that the destination map state
can't be normal).
To avoid problems with reassembling, move end_migration() (called after a
successful recovery from double degradation) after the check whether
recovery has really finished; for details see 7ce057018 ("imsm: fix:
rebuild does not continue after reboot").
Remove the redundant code responsible for finishing the rebuild process;
end_migration() does exactly the same. Set last_checkpoint to 0 to prepare
it for the next rebuild.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
After cd72f9d (policy: support devices with multiple paths.), compilation
with older compilers fails because "‘p’ may be used uninitialized
in this function".
Initialize it to NULL to prevent this.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
paths could be NULL, and accesses to paths[0] should be guarded by a NULL
pointer check.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Gioh Kim <gi-oh.kim@cloud.ionos.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When using "--grow --chunk=" to change chunk
size, the old chunksize is reported instead of the new.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
With a chunk size of 16Meg and a data drive count of 8, this calculation
can easily overflow the 'int' type that is used for the multiplications.
So force it to use "long" instead.
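A minimal illustration of the pattern of the fix (the names and the exact
expression are hypothetical):
	/* All operands are int, so the product is computed in 32-bit int
	 * and can overflow; casting the first operand to long forces the
	 * whole chain of multiplications into long arithmetic. */
	blocks = (long)chunk_sectors * data_disks * stripes_per_unit;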
Reported-and-tested-by: Ed Spiridonov <edo.rus@gmail.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
If devices[].i.disk.state has the MD_DISK_FAILFAST or MD_DISK_WRITEMOSTLY
flag set, it will never be chosen as the most recent device. Both flags
should be masked out before checking the state.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Gioh Kim <gi-oh.kim@cloud.ionos.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Mdmon calls end_migration() when the map state changes from normal to
degraded. This is not valid, because in the raid10 double degradation case
mdmon stops checkpointing while the array is still rebuilding.
In this case mdmon has to mark the map as degraded and continue recording
the recovery checkpoint in the metadata. The migration can be finished only
if the newly failed device is the rebuilding device.
Catch the double degraded to degraded transition: the migration is
finished but the map state doesn't change, the array is still degraded.
Update failed_disk_num correctly. If double degradation happens, the
rebuild will start on the lowest slot, but this variable points to the
first failed slot. If a second failure happens during rebuild, this
variable shouldn't be updated until the rebuild has finished.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
"mdadm --monitor --oneshot" can be used to get a warning
if there are any degraded arrays. It can be helpful to get
this warning periodically while the condition persists.
This patch adds a systemd service and timer which can
be enabled with
systemctl enable mdmonitor-oneshot.service
and will then provide daily warnings.
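A sketch of what such a timer unit could contain (contents assumed; the
unit shipped with mdadm may differ):
	# mdmonitor-oneshot.timer (illustrative sketch)
	[Unit]
	Description=Daily check for degraded MD arrays
	[Timer]
	OnCalendar=daily
	[Install]
	WantedBy=timers.target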
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Having the mdcheck script is of no use if it is never run.
This patch adds systemd unit files so that it can easily
be run on the first Sunday of each month for 6 hours,
then on every subsequent morning until the check is
finished.
The units still need to be enabled with
systemctl enable mdcheck_start.timer
The timer will only actually be started when an array
which might need it becomes active.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
As new releases of Linux sometimes change the name of
a path, some distros keep "legacy" names as well. This
is useful, but it confuses mdadm, which assumes each device has
precisely one path.
So change this assumption: allow a disk to have several
paths, and allow any to match when looking for a policy
which matches a disk.
Reported-and-tested-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
PART-POLICY has been accepted in mdadm.conf since the same
time that POLICY was accepted, but it was never documented.
So add the missing documentation.
Also fix a bug which would have stopped it from working if
anyone had ever tried to use it.
Signed-off-by: NeilBrown <neilb@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Before updating the superblock of the slave disks, the desired_state value
is set to the target state of the slave disks. But it forgets
to check the MD_DISK_FAILFAST and MD_DISK_WRITEMOSTLY flags. Then
start_arrays() calls the ADD_NEW_DISK ioctl and passes the state
without MD_DISK_FAILFAST and MD_DISK_WRITEMOSTLY.
Currently this does not cause any problem, because the kernel does not
care about the MD_DISK_FAILFAST or MD_DISK_WRITEMOSTLY flags there.
Reviewed-by: NeilBrown <neilb@suse.com>
Signed-off-by: Gioh Kim <gi-oh.kim@profitbricks.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When IMSM_NO_PLATFORM is exported, mdadm allows creating an array with
partitions or adding a partition to an existing array, but there is no
possibility to assemble it after stopping; see commit 691c6ee1b6
("IMSM/DDF: don't recognise these metadata on partitions.").
When searching for HBA capabilities, test the device first and print a
corresponding error if it is a partition.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Since load_devices() frees "devices" when it can't find any
device, we should set it to NULL to avoid a double free,
which can be reproduced with the steps below:
mdadm -CR /dev/md/vol -l0 -e 1.2 -n2 /dev/sd[b-c] --assume-clean
mdadm -Ss
mdadm -A /dev/md127 /dev/sd[b-c] --update metadata
Reported-by: Tkaczyk Mariusz <mariusz.tkaczyk@intel.com>
Tested-by: Tkaczyk Mariusz <mariusz.tkaczyk@intel.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Like other failure cases in load_devices, we need
to free those resources as well.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
In some scenarios mdadm --detail-platform shows duplicated info about one
of the controllers. Block it.
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
It is possible to create a RAID with an empty name. Block it. Also
remove trailing and leading whitespace from the given name.
Signed-off-by: Roman Sobanski <roman.sobanski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When Kill() cannot open the device or find a superblock, it returns the
same error, and mdadm ignores it.
Change the error handling in the Kill() function: return an error if the
device is busy, and ignore it only when the superblock doesn't exist -
assume that the metadata is zeroed.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
The kernel returns EBUSY when failing a device would cause the array to
fail. With external metadata, if the kernel returns EBUSY, mdadm doesn't
stop the member arrays but tries to stop the container directly. That
fails because the container still has working arrays, so a udev remove is
triggered.
Try to set the faulty state on the device in the member arrays first. If
the kernel returns EBUSY, stop that array. After that, remove the device
from the container.
With external metadata it is mdmon that has to remove faulty devices from
degraded arrays, so just remove the device from the container there.
A raid5 array doesn't return EBUSY; it allows removing every device, and
mdadm shouldn't block that.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When the array is frozen but there is no recovery/reshape in mdstat,
check_idle() will not return an error and grow-continue can keep working.
Check whether the array is frozen. Do not use the sysfs sync_action
parameter, because it doesn't exist for raid0; simply check
metadata_version in mdstat.
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Instead of /usr/bin/sh and /usr/bin/echo, use /bin/sh and the shell
built-in echo respectively. This makes
udev-md-raid-safe-timeouts.rules compatible with both usr-merged
and split-usr systems.
Signed-off-by: Dimitri John Ledkov <xnox@ubuntu.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
For the HA product, the RA (resource agent) assembles a cluster raid
by calling the command below:
	$MDADM --assemble $mddev --config=$RAIDCONF $MDADM_HOMEHOST
Sometimes a node can't assemble the array because all the nodes
need to contend for the dlm lock, which causes node fencing in automatic
testing.
And in fact we don't need the protection, since the assemble
command called by the RA doesn't change the superblock, so revert
commit 76781701a4 ("Assemble:
provide protection when clustered raid do assemble") to remove the
unnecessary protection.
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
We can see "double free or corruption" with the steps below,
as reported by Mariusz:
export IMSM_NO_PLATFORM=1
export IMSM_DEVNAME_AS_SERIAL=1
mdadm --zero-super /dev/sd*
mdadm -C /dev/md/imsm -n2 -eimsm /dev/sdb /dev/sdc --run
mdadm -C /dev/md/r1 -n2 -z15G -eimsm /dev/sdb /dev/sdc -l1 --run --assume-clean
mdadm -f /dev/md126 /dev/sdb
mdadm -Ss
It is caused by Manage_stop calling map_remove() and map_unlock(),
but *mapp is not set to NULL after map_remove() -> map_free(),
so map_unlock() will call map_free() again.
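A sketch of the fix as described above (only names already mentioned are
used; the exact location may differ):
	/* After freeing the map, clear the caller's pointer so a later
	 * map_unlock() cannot call map_free() on it again. */
	map_free(*mapp);
	*mapp = NULL;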
Reported-by: Tkaczyk Mariusz <mariusz.tkaczyk@intel.com>
Tested-by: Tkaczyk Mariusz <mariusz.tkaczyk@intel.com>
Signed-off-by: Guoqing Jiang <gqjiang@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Tests should calculate the expected array_size according to the raid level.
Tests should also take into account the rounding to the nearest MB
introduced in b53bfba6 ("imsm: use rounded size for metadata
initialization").
Expect the proper size in the tests. Simplify the 09imsm-overlap test by
creating the array with a size which does not get rounded; the main
purpose of this test is checking something else.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
When a test tries to change the RAID level from RAID5 to RAID0, mdadm
responds with an error about an unsupported operation.
Make the 16imsm-r5_3d-migrate-r0_3d and 16imsm-r5_5d-migrate-r0_5d tests
negative.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Since a3b831c9 ("Grow.c: Block any level migration with chunk size change")
it is not possible to perform a level and chunk-size migration in
one operation. When any test tries to do this, an error message is printed
and the test finishes with a failure.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
In some migration tests, the variable new_num_disks should be set to the
expected number of disks after migration. This is required for proper
calculation of the expected size.
Pass new_num_disks variable during test execution for:
- 16imsm-r0_3d-migrate-r5_4d
- 18imsm-r1_2d-takeover-r0_1d
- 16imsm-r0_5d-migrate-r5_6d
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
The chunk size read from sysfs should be divided by 1024 before comparing
it with the expected chunk size.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
Since 611d9529 (imsm: change reserved space to 4MB) the gap between RAID
volumes has changed. Tests should expect the correct offset in size
calculations.
Fix the expected offset for the tests.
Signed-off-by: Michal Zylowski <michal.zylowski@intel.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>