path: root/clustermd_tests
* tests: increase sleeps from 1s to 2s (Mateusz Kusiak, 2024-12-13; 2 files, -3/+3)
The issue here is that some of the tests sporadically fail because things are still being processed. The default 1s delays have proven insufficient for the newly created CI, as tests tend to fail occasionally. This patch increases the default 1s sleep to 2s, to hopefully get rid of the sporadic failures.
Signed-off-by: Mateusz Kusiak <mateusz.kusiak@intel.com>
* mdadm/clustermd_tests: adjust test cases to support md module changes (Heming Zhao, 2024-07-10; 4 files, -5/+7)
Since kernel commit db5e653d7c9f ("md: delay choosing sync action to md_start_sync()") delays the start of the sync action, clustermd array sync/resync jobs can happen on any leg of the array. This commit adjusts the test cases to follow the new kernel-layer behavior.
Signed-off-by: Heming Zhao <heming.zhao@suse.com>
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
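In practice this means a test can no longer assume the resync runs on the node that created the array. A minimal sketch of an either-leg check, not taken from the test suite; the node names node1/node2 are hypothetical and passwordless ssh between the cluster nodes is assumed:

    # Pass if either node reports the resync in /proc/mdstat.
    found=0
    for node in node1 node2; do
            ssh "$node" grep -q resync /proc/mdstat && found=1
    done
    [ "$found" -eq 1 ] || echo "no node is resyncing" >&2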
* mdadm/clustermd_tests: add some APIs in func.sh to support running the tests without errors (Heming Zhao, 2024-07-10; 1 file, -0/+60)
clustermd_tests/func.sh lacks some APIs needed to run; this patch makes clustermd_tests runnable from the test suite.
Signed-off-by: Heming Zhao <heming.zhao@suse.com>
Signed-off-by: Mariusz Tkaczyk <mariusz.tkaczyk@linux.intel.com>
* Don't need to check recovery after re-add when no I/O writes to raid (Xiao Ni, 2019-09-30; 1 file, -2/+0)
If there is no write I/O between removing a member disk and re-adding it, there is no recovery after the disk is re-added.
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
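The reasoning: with no writes while the member was out, the write-intent bitmap stays clean, so a re-add has nothing to resync. A rough sketch of the sequence, with hypothetical device names, not the test script itself:

    # Fail and remove a member, then re-add it without writing in between.
    mdadm --manage /dev/md0 --fail /dev/sdb --remove /dev/sdb
    mdadm --manage /dev/md0 --re-add /dev/sdb
    # With a clean bitmap, /proc/mdstat should not show a recovery pass.
    grep recovery /proc/mdstat && echo "unexpected recovery" >&2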
* Init devlist as an array (Xiao Ni, 2019-09-30; 1 file, -0/+3)
devlist is a string. It changes to an array only when one of the disks is an sbd disk: if a device is sbd, the code runs devlist=(), and that line turns devlist from a string into an array. If there is no sbd device, that line never runs, so devlist stays a string. The later code needs an array rather than a string, so init devlist as an array to fix this problem.
Signed-off-by: Xiao Ni <xni@redhat.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
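For illustration only (this is not the actual func.sh code): initializing the variable as an array up front makes the later array operations safe regardless of whether an sbd device was seen.

    devlist=()                       # always start as an array
    for d in /dev/sd[b-e]; do
            devlist+=("$d")          # array append works in every code path
    done
    echo "first device: ${devlist[0]}"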
* mdadm/test: mdadm needn't make install on the system (Zhilong Liu, 2018-06-01; 1 file, -7/+4)
Fixes: beb71de04d31 ("mdadm/test: enable clustermd testing under clustermd_tests/")
clustermd_tests/func.sh: remove the unnecessary 'make install' and just ensure 'make everything' has been done. The original idea was to make the /sbin/mdadm version the same as ./mdadm; that breakage was pointed out by commit 59416da78fc6 ("tests/func.sh: Fix some total breakage in the test scripts").
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test switch-recovery against cluster-raid10 (Zhilong Liu, 2018-03-08; 1 file, -0/+21)
03r10_switch-recovery: Create a new array with 2 active disks and 1 spare, then set 1 active disk as 'fail'; this triggers recovery and the spare disk replaces the failed disk. Stop the array on the node doing the recovery; the other node takes it over and continues to complete the recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
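A rough outline of the switch-recovery idea, not the test case itself; device and node names are made up, and the array is assumed to be assembled on both cluster nodes:

    mdadm --manage /dev/md0 --fail /dev/sdb     # spare starts rebuilding here
    grep recovery /proc/mdstat                  # recovery runs on this node
    mdadm --stop /dev/md0                       # stop the array mid-recovery
    ssh node2 grep recovery /proc/mdstat        # the other node carries on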
* clustermd_tests: add test case to test switch-recovery against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+21)
03r1_switch-recovery: Create a new array with 2 active disks and 1 spare, then set 1 active disk as 'fail'; this triggers recovery and the spare disk replaces the failed disk. Stop the array on the node doing the recovery; the other node takes it over and continues to complete the recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test switch-resync against cluster-raid10 (Zhilong Liu, 2018-03-08; 1 file, -0/+18)
03r10_switch-resync: Create a new array; 1 node does the resync while the other node stays PENDING. Stop the array on the resyncing node; the other node takes it over and continues to complete the resync.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test switch-resync against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+18)
03r1_switch-resync: Create a new array; 1 node does the resync while the other node stays PENDING. Stop the array on the resyncing node; the other node takes it over and continues to complete the resync.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test manage_re-add against cluster-raid10 (Zhilong Liu, 2018-03-08; 1 file, -0/+18)
02r10_Manage_re-add: 2 active disks in the array; set 1 disk 'fail' and 'remove' it from the array, then re-add the disk back to the array, which triggers recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test manage_re-add against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+18)
02r1_Manage_re-add: 2 active disks in the array; set 1 disk 'fail' and 'remove' it from the array, then re-add the disk back to the array, which triggers recovery.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test manage_add-spare against cluster-raid10 (Zhilong Liu, 2018-03-08; 1 file, -0/+30)
02r10_Manage_add-spare: it covers 2 scenarios for manage_add-spare.
1. 2 active disks in the md array; use add-spare to add a spare disk.
2. 2 active disks and 1 spare in the array; add-spare 1 new disk into the array, then check the spares.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
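A minimal sketch of adding a spare and checking it, with hypothetical device names; the real test case uses its own helpers in func.sh:

    mdadm --manage /dev/md0 --add-spare /dev/sdd
    mdadm --detail /dev/md0 | grep -i spare     # confirm the spare shows up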
* clustermd_tests: add test case to test manage_add-spare against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+30)
02r1_Manage_add-spare: it covers 2 scenarios for manage_add-spare.
1. 2 active disks in the md array; use add-spare to add a spare disk.
2. 2 active disks and 1 spare in the array; add-spare 1 new disk into the array, then check the spares.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test manage_add against cluster-raid10 (Zhilong Liu, 2018-03-08; 1 file, -0/+33)
02r10_Manage_add: it covers 2 scenarios for manage_add.
1. 2 active disks in the md array; set 1 disk 'fail' and 'remove' it from the array, then add 1 clean disk into the array.
2. 2 active disks in the array; add 1 new disk into the array directly, in which case 'add' is equal to 'add-spare'.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
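The distinction between the two scenarios, as a hedged sketch (device names are illustrative, not from the test case): on a degraded array --add starts a rebuild, while on a complete array it behaves like --add-spare.

    mdadm --manage /dev/md0 --fail /dev/sdb --remove /dev/sdb
    mdadm --manage /dev/md0 --add /dev/sdd      # degraded array: rebuild starts
    mdadm --manage /dev/md0 --add /dev/sde      # complete array: disk becomes a spare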
* clustermd_tests: add test case to test manage_add against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+33)
02r1_Manage_add: it covers 2 scenarios for manage_add.
1. 2 active disks in the md array; set 1 disk 'fail' and 'remove' it from the array, then add 1 clean disk into the array.
2. 2 active disks in the array; add 1 new disk into the array directly, in which case 'add' is equal to 'add-spare'.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* clustermd_tests: add test case to test grow_add against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+68)
01r1_Grow_add: It contains 3 kinds of growing the array.
1. 2 active disks in the md array; grow and add a new disk into the array.
2. 2 active disks and 1 spare disk in the md array; grow and add a new disk into the array.
3. 2 active disks and 1 spare disk in the md array; grow the device number so the spare disk becomes an active disk in the array.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
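A minimal sketch of the third kind, growing the device number so an existing spare becomes active (device name is illustrative):

    mdadm --grow /dev/md0 --raid-devices=3      # promote the spare to an active leg
    grep recovery /proc/mdstat                  # the new leg gets rebuilt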
* clustermd_tests: add test case to test switching bitmap against cluster-raid10 (Zhilong Liu, 2018-03-08; 1 file, -0/+51)
01r10_Grow_bitmap-switch: It tests switching the bitmap among three modes, clustered, none and internal; this case tests the clustered raid10.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
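Bitmap switching is done with --grow; a hedged sketch of the idea (the exact sequence in the test case may differ, and the device name is illustrative):

    mdadm --grow /dev/md0 --bitmap=none         # drop the current bitmap first
    mdadm --grow /dev/md0 --bitmap=internal     # plain internal bitmap
    mdadm --grow /dev/md0 --bitmap=none
    mdadm --grow /dev/md0 --bitmap=clustered    # back to the clustered bitmap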
* clustermd_tests: add test case to test switching bitmap against cluster-raid1 (Zhilong Liu, 2018-03-08; 1 file, -0/+51)
01r1_Grow_bitmap-switch: It tests switching the bitmap among three modes, clustered, none and internal; this case tests the clustered raid1.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/clustermd_tests: delete meaningless commands in check (Zhilong Liu, 2018-03-08; 1 file, -2/+0)
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/clustermd_tests: add nobitmap in check (Zhilong Liu, 2018-03-08; 1 file, -0/+7)
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/test: add do_clean to ensure each case only catch its own testlog (Zhilong Liu, 2018-03-08; 1 file, -2/+7)
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/test: add disk metadata infos in save_log (Zhilong Liu, 2018-03-08; 1 file, -0/+2)
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/clustermd_tests: add test case to test grow_resize cluster-raid10 (Zhilong Liu, 2018-01-21; 1 file, -0/+38)
01r10_Grow_resize:
1. Create a clustered raid10 with a smaller size, then resize the mddev to the max size, and finally change back to the smaller size.
2. Create a clustered raid10 with a smaller chunk-size, then resize it to a larger one and trigger a reshape.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
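A minimal sketch of the two resize operations (device name and sizes are illustrative; --size is given in KiB):

    mdadm --grow /dev/md0 --size=max            # grow each member to the maximum
    mdadm --grow /dev/md0 --size=10000          # shrink back to a smaller size
    mdadm --grow /dev/md0 --chunk=128           # larger chunk triggers a reshape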
* mdadm/clustermd_tests: add test case to test creating cluster-raid10 (Zhilong Liu, 2018-01-21; 1 file, -0/+50)
00r10_Create: It contains 4 scenarios of creating clustered raid10.
1. General creation: the master node does the resync and the slave node stays Pending.
2. Creating clustered raid10 with --assume-clean.
3. Creating clustered raid10 with a spare disk.
4. Creating clustered raid10 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
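A minimal creation sketch covering the general and --assume-clean scenarios; device names are illustrative and the second node is assumed to assemble the array itself:

    mdadm --create /dev/md0 --bitmap=clustered --level=10 \
          --raid-devices=2 /dev/sdb /dev/sdc
    # or skip the initial resync entirely:
    mdadm --create /dev/md0 --bitmap=clustered --level=10 \
          --raid-devices=2 --assume-clean /dev/sdb /dev/sdc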
* mdadm/clustermd_tests: add test case to test grow_resize cluster-raid1 (Zhilong Liu, 2018-01-21; 1 file, -0/+23)
01r1_Grow_resize: Create a clustered raid1 with a smaller size, then resize the mddev to the max size, and finally change back to the smaller size.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/clustermd_tests: add test case to test creating cluster-raid1 (Zhilong Liu, 2018-01-21; 1 file, -0/+50)
00r1_Create: It contains 4 scenarios of creating clustered raid1.
1. General creation: the master node does the resync and the slave node stays Pending.
2. Creating clustered raid1 with the --assume-clean parameter.
3. Creating clustered raid1 with a spare disk.
4. Creating clustered raid1 with --name.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
* mdadm/test: add '--testdir=' to switch choosing test suite (Zhilong Liu, 2018-01-21; 1 file, -2/+0)
By now, mdadm has two test suites, covering traditional soft-raid testing and clustermd testing; the '--testdir=' option selects which suite to test, tests/ or clustermd_tests/.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
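Typical invocations with the new option, as a sketch assuming the top-level ./test wrapper is run from the source tree:

    ./test --testdir=tests              # traditional soft-raid suite
    ./test --testdir=clustermd_tests    # clustered md suite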
* mdadm/test: enable clustermd testing under clustermd_tests/ (Zhilong Liu, 2018-01-21; 2 files, -0/+365)
For clustermd testing, the user needs to deploy the basic cluster manually; the test scripts do not cover auto-deploying the cluster because Linux distributions differ too much. Then complete the configuration in cluster_conf; please refer to the detailed comments in 'cluster_conf'.
1. 'func.sh' source file: it implements the feature functions for clustermd testing.
2. 'cluster_conf' configuration file: it contains two parts as the input of the testing.
Signed-off-by: Zhilong Liu <zlliu@suse.com>
Signed-off-by: Jes Sorensen <jsorensen@fb.com>
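Clustered md generally depends on a running corosync/dlm stack on every node; a hedged pre-flight sketch (not part of the suite) that checks the usual prerequisites before starting the tests:

    # Run on each cluster node before invoking the suite.
    pgrep corosync >/dev/null || echo "corosync is not running" >&2
    pgrep dlm_controld >/dev/null || echo "dlm_controld is not running" >&2
    lsmod | grep -q md_cluster || modprobe md-cluster   # clustered bitmap support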