Releases: linux-system-roles/storage
Version 1.9.4
[1.9.4] - 2022-12-16
New Features
- none
Bug Fixes
- none
Other Changes
- Tests - use a threshold of 2 percent in volume size check (#313)
There seems to be an issue calculating the expected size and the
actual size of the volume. On some systems, the difference is
greater than 1% but less than 2%. We are working on a better, more
reliable method of calculating the expected and actual sizes. In
the meantime, make the threshold 2%.
Version 1.9.3
[1.9.3] - 2022-12-06
New Features
- none
Bug Fixes
- Thin pool test with large size volume fix (#310)
Fixed the size calculation for large thin pools in the test.
Modified the provision.fmf disk size to simulate larger disks in the tests.
Other Changes
- none
Version 1.9.2
[1.9.2] - 2022-11-01
New Features
- none
Bug Fixes
- Master thin support size fix (#299)
Fixed calculation of relative thinp sizes:
- a percent-specified 'size' of a thin pool volume is now properly calculated from the size of its parent thin pool
Fixed size and percentage handling for thin pools:
- a percentage-size thin volume now correctly references its parent device for size calculation
- percentage values are now accepted for the thin pool size
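Under the fix above, percentage sizes might be used like this (a sketch; pool and volume names, values, and the mount point are illustrative, and the thin options follow the role's documented `storage_pools` format):

```yaml
storage_pools:
  - name: pool1
    disks: [sdb]
    volumes:
      - name: tvol
        thin: true               # volume lives in a thin pool
        thin_pool_size: "50%"    # a percentage is now accepted for the thin pool size
        size: "60%"              # calculated from the size of the parent thin pool
        mount_point: /mnt/thin   # hypothetical mount point
```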
Other Changes
- Add disks_needed for raid test cases (#300)
Creating a RAID will fail if we don't have enough unused disks, so set
disks_needed earlier.
Set disks_needed=2 for tests_swap.yml.
- use block instead of end_play (#302)
Do not use `end_play` with a conditional `when` which uses variables for
the condition. The problem is that `end_play` is executed in a
different scope where the variables are not defined, even when using
`set_fact`. The fix is to instead use a `block` and a `when`.
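A minimal sketch of the fix (task names and the variable are hypothetical):

```yaml
# Problematic: end_play runs in a scope where the condition variables
# may be undefined
# - meta: end_play
#   when: storage_test_skip | d(false)

# Fix: guard the remaining tasks with a block and a when
- name: Run the remaining tasks only when not skipping
  when: not (storage_test_skip | d(false))
  block:
    - name: Remaining task
      debug:
        msg: "the condition variables are evaluated in normal task scope here"
```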
- Modified lvmvdo check
The VDO check was failing due to an issue in `vdostats`.
Modified the VDO testing so that `lvs` is used to get the data instead.
Version 1.9.1
[1.9.1] - 2022-07-26
New Features
- none
Bug Fixes
- Update README.md with latest changes (#290)
- LVM thin provisioning support.
- Support for adding/removing disks to/from existing pools.
- Cache can now be attached to a pre-existing volume.
Fixes: #287
Fixes: #288
Fixes: #289
Other Changes
- changelog_to_tag action - Use GITHUB_REF_NAME for main branch name
Version 1.9.0
[1.9.0] - 2022-07-19
New Features
- Add support for attaching LVM cache to existing LVs (#273)
Fixes: #252
- Add support for managing pool members (#264)
For LVM pools this adds support for adding and removing members
(PVs) from the pool (VG).
- Do not allow removing members from existing pools in safe mode
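Managing pool members could look like this (a sketch; disk and volume names are illustrative, and the `storage_pools` format follows the role's README):

```yaml
# Re-running the role with an updated disks list adds or removes PVs in the VG
storage_pools:
  - name: pool1
    disks: [sdb, sdc]   # adding sdc here extends the existing VG with a new PV
    volumes:
      - name: lv1
        size: "5 GiB"
        mount_point: /mnt/lv1
```

Note that, per the feature note above, removing members from an existing pool is not allowed in safe mode.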
Bug Fixes
- loop variables are scoped local - no need to reset them (#282)
If you use
    loop_control:
      loop_var: storage_test_pool
then the variable `storage_test_pool` is scoped local to the task
and is undefined after the task. In addition, referencing the
variable after the loop causes this warning:
    [WARNING]: The loop variable 'storage_test_pool' is already in use. You should
    set the `loop_var` value in the `loop_control` option for the task to something
    else to avoid variable collisions and unexpected behavior.
- support ansible-core-2.13 (#278)
Looks like ansible-core-2.13 (or the latest Jinja2) does not support
constructs like this:
    var: "{{ [some list] }} + {{ [other list] }}"
Instead, the entire thing has to be evaluated in the same Jinja
evaluation context:
    var: "{{ [some list] + [other list] }}"
In addition, it is an Ansible antipattern to use
    - set_fact:
        var: "{{ var + item }}"
      loop: "{{ some_list }}"
so that was rewritten to use filters instead.
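A hedged sketch of the filter-based rewrite (the changelog does not show the exact code; variable names and the pattern are illustrative):

```yaml
# Antipattern: accumulating a list with set_fact in a loop
# - set_fact:
#     matched: "{{ matched + [item] }}"
#   loop: "{{ some_list }}"

# Rewrite: build the whole result in one Jinja expression with filters
- set_fact:
    matched: "{{ some_list | select('match', '^storage_') | list }}"
```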
Other Changes
- ensure role works with gather_facts: false (#277)
Ensure tests work when using ANSIBLE_GATHERING=explicit
- ensure cryptsetup is available for testing (#279)
- make min_ansible_version a string in meta/main.yml (#281)
The Ansible developers say that `min_ansible_version` in meta/main.yml
must be a string value like `"2.9"`, not a float value like `2.9`.
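For illustration, the quoted value in meta/main.yml looks like this (other metadata fields omitted):

```yaml
galaxy_info:
  min_ansible_version: "2.9"   # a string like "2.9", not the float 2.9
```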
- Skip the entire test_lvm_pool_members playbook with old blivet (#280)
Multiple bugs in blivet were fixed in order to make the feature
work and without the correct version even the most basic test to
remove a PV from a VG will fail so we should skip the entire test
with old versions of blivet.
Skip the test on el7 if the blivet version is too old.
Add support for is_rhel7.
Refactor the EL platform and version checking code.
Add a name for the `end_play` task.
- Add CHANGELOG.md (#283)
- check for thinlv name before assigning to thinlv_params (#276)
Only set the thinlv_params name if the thinlv has a 'name'.
Thin pool support; deprecate support for "striped" RAID
Thin pool support (#269)
- Argument validator extension
Thin provisioning requires the YAML input parameters to be checked for
various unsupported combinations.
This commit provides a framework that allows searching for and denying
predefined parameter combinations.
Checks are performed at the beginning of the role run, before blivet
initialization.
- Thin pool support (WIP)
Storage role support for thin provisioning.
Volumes under an LVM pool can now use three new options:
- 'thin' - type bool
- setting to True puts the volume into a thin pool
- 'thin_pool_name' - type str
- specifies the name of the thin pool for this volume
- selects thin pool with the same name if it exists
- tries to create new thin pool if it does not
- if omitted and no existing thin pool is present, the name is generated automatically
- if omitted and exactly one thin pool is present, the volume is put into the existing thin pool
- if omitted and more than one thin pool is present, an exception is raised
- 'thin_pool_size' - type Size
- specifies size of the thin pool
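The three options above could be combined like this (a sketch; names, sizes, disks, and the mount point are illustrative):

```yaml
storage_pools:
  - name: pool1
    disks: [sdb]
    volumes:
      - name: data
        thin: true                # put this volume into a thin pool
        thin_pool_name: tpool1    # use the thin pool tpool1, creating it if needed
        thin_pool_size: "10 GiB"  # size of the thin pool itself
        size: "20 GiB"            # virtual size of the thin volume
        mount_point: /opt/data    # hypothetical mount point
```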
LVM RAID raid0 level support (#272)
- Add workaround for missing LVM raid0 support in blivet
Blivet supports creating LVs with segment type "raid0" but it is
not in the list of supported RAID levels. This will be fixed in
blivet, see storaged-project/blivet#1047
- Add a test for LVM RAID raid0 level
- README: Remove "striped" from the list of supported RAID for pools
We use MD RAID for RAIDs on the pool level, which doesn't support the
"striped" level.
- README: Clarify supported volume RAID levels
We support different levels for LVM RAID and MD RAID.
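Using the raid0 support above might look like this (a sketch; the `raid_level` volume option is per the role's README, while names, sizes, and disks are illustrative):

```yaml
storage_pools:
  - name: pool1
    disks: [sdb, sdc]
    volumes:
      - name: striped_lv
        size: "4 GiB"
        raid_level: raid0          # LVM RAID with segment type raid0
        mount_point: /mnt/striped  # hypothetical mount point
```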