Add optional availabilityZone.from:Machine to disks.#263
Previously, CAPO defaulted to creating disks in the failureDomain (availability zone) of the virtual machine that is created. In some clouds this is a requirement for good performance, or even to be able to attach the disk at all. Other clouds have a global AZ for disks, so we need a parameter to control this. We therefore introduce three new boolean variables, all defaulting to false: controlPlaneRootDiskPin, workerRootDiskPin, and workerAdditionalBlockDevicesPin, which control whether we set availabilityZone/from: Machine for the respective disks. Signed-off-by: Kurt Garloff <kurt@garloff.de>
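For illustration, a sketch of how these flags and the resulting CAPO field might look. The file layout and the example size are assumptions; only the three variable names and the availabilityZone/from: Machine field come from the description above:

```yaml
# Hypothetical cluster-stack values fragment -- layout is an assumption,
# only the three variable names come from this PR's description.
controlPlaneRootDiskPin: false          # pin control-plane root disks to the machine's AZ
workerRootDiskPin: false                # pin worker root disks to the machine's AZ
workerAdditionalBlockDevicesPin: false  # pin additional worker block devices

# When a flag is set to true, the rendered machine spec would carry the
# CAPO field that pins the volume to the machine's availability zone:
#
#   rootVolume:
#     sizeGiB: 50            # example size, not from this PR
#     availabilityZone:
#       from: Machine
```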
|
This should address #262. |
|
OK, looks good so far. Use openstack-scs2-1-34-v0-git-f4ddbaf for testing.

linux@infra-mgmt(:openstack):~/scs-training-kaas-scripts [2]$ osc compute server list -f id -f name -f "os-ext-az:availability_zone" -f flavor -f status --sort-key display_name
┌──────────────────────────────────────┬──────────────────────────────┬────────┬──────────────┬─────────────────────────────┐
│ id ┆ name ┆ status ┆ flavor ┆ OS-EXT-AZ:availability_zone │
╞══════════════════════════════════════╪══════════════════════════════╪════════╪══════════════╪═════════════════════════════╡
│ a2cab588-d258-4429-a780-6aeaee2036ae ┆ zuul-md-2-7jjnp-7sxrb-gqhms ┆ ACTIVE ┆ SCS-8V-32 ┆ nbg6 │
│ ff09e6c2-ecc0-491e-806f-a6fc2a4ebc19 ┆ zuul-md-1-vq7j7-psw4m-xtwsk ┆ ACTIVE ┆ SCS-8V-32 ┆ nbg3 │
│ ea0d58d4-3215-4489-b74e-bfd0d7d7e86c ┆ zuul-md-0-bnr4n-7jxln-rtgd4 ┆ ACTIVE ┆ SCS-8V-32 ┆ nbg1 │
│ d978aefb-8990-4d12-9f7c-15268b0e3247 ┆ zuul-4rft9-kvppg ┆ ACTIVE ┆ SCS-2V-4-20s ┆ nbg1 │
│ 6a61679c-7cc2-42a6-8259-c39c7369c74a ┆ Infra-Mgmt ┆ ACTIVE ┆ SCS-2V-4 ┆ nbg6 │
│ 29ca3411-e55e-4855-8266-5cbc8c6eba48 ┆ infra-md-2-hxm9s-kdgq7-8hl6d ┆ ACTIVE ┆ SCS-4V-16 ┆ nbg6 │
│ 021ca9d1-f70e-4f8a-86ff-08d7afcc7239 ┆ infra-md-1-qb4l8-xqck4-4pt2z ┆ ACTIVE ┆ SCS-4V-16 ┆ nbg3 │
│ ab01ff3d-f865-4e4a-98a4-2fe25314b1b2 ┆ infra-md-0-z5cdp-xf9wc-vfhbt ┆ ACTIVE ┆ SCS-4V-16 ┆ nbg1 │
│ 5175315d-5429-43ac-965c-0c528fd1e584 ┆ infra-lmj8t-bgl6r ┆ ACTIVE ┆ SCS-2V-4-20s ┆ nbg1 │
│ 9aee5840-246d-4a74-a60f-a6b6e0597ed7 ┆ infra-lmj8t-46c9c ┆ ACTIVE ┆ SCS-2V-4-20s ┆ nbg3 │
│ cc5c012d-4b66-43ac-9527-084f5458bdff ┆ infra-lmj8t-2hhpw ┆ ACTIVE ┆ SCS-2V-4-20s ┆ nbg6 │
└──────────────────────────────────────┴──────────────────────────────┴────────┴──────────────┴─────────────────────────────┘ |
|
Reviews would be nice ... |
|
Is there a reason to use 3 separate variables instead of only one like volumesInSameAZ? |
|
I changed three different places and thus thought I needed three different variables. But you are right: I don't. Thanks for the review, @Nils98Ar! |
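Following the review suggestion, a consolidated variant could look like this (a sketch; only the name volumesInSameAZ comes from the comment above):

```yaml
# Hypothetical single-flag variant replacing the three separate booleans;
# the name volumesInSameAZ is taken from the review comment.
volumesInSameAZ: true   # pin all disks (control-plane root, worker root,
                        # worker additional block devices) to the machine's AZ
```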
Needs testing!