== Creating custom rules for the Validation service
The `Validation` service uses Open Policy Agent (OPA) policy rules to check the suitability of each virtual machine (VM) for migration. The `Validation` service generates a list of _concerns_ for each VM, which are stored in the `Provider Inventory` service as VM attributes. The web console displays the concerns for each VM in the provider inventory.

= Target VM scheduling options

By default, {virt} assigns the destination nodes during VM migration. However, you can use the target VM scheduling feature to define the destination nodes and to apply conditions that control when the VMs are switched from `pending` to `on`.

Starting with {project-first} 2.10, you can use the _target VM scheduling_ feature to direct {project-short} to migrate virtual machines (VMs) to specific nodes of {virt} as well as to schedule when the VMs are powered on. You can design and enforce the scheduling rules by using either the UI or the command-line interface (CLI).

Previously, when you migrated VMs to {virt}, {virt} automatically determined the node that the VMs were migrated to. Although this serves many customers' needs, in some situations it is useful to specify the target node of a VM, or the conditions under which the VM is powered on, regardless of the type of migration involved.

== Use cases
Target VM scheduling is designed to help you with the following use cases, among others:

* *Prioritizing critical workloads*: In many migrations, some VMs must be among the first migrated and powered up. Node selector rules let you ensure that those VMs are migrated first so that they can support other VMs that are migrated later.

* *Business continuity and disaster recovery*: You can use scheduling rules to migrate critical VMs to several sites that are in different time zones or otherwise separated by significant geographical distances, and to power them up there. This lets you deploy these VMs as strategic assets for business continuity purposes, such as disaster recovery.

* *Working with fluctuating demands*: In situations where demand for a service might vary significantly, rules that schedule when to spin up VMs based on demand allow you to use your resources more efficiently.

= Scheduling target VMs from the command-line interface

[role="_abstract"]
You can use the command-line interface (CLI) to tell {project-first} to migrate virtual machines (VMs) to specific nodes or workloads (pods) of {virt} as well as to schedule when the VMs are powered on.

The {project-short} CLI supports the following scheduling-related labels, all of which are added to the `Plan` CR:

`targetAffinity`: Implements placement policies such as co-locating related workloads or, for disaster recovery, ensuring that specific VMs are migrated to different nodes. This type of label uses hard (required) and soft (preferred) conditions combined with logical operators, such as `and`, `or`, and `not`, to provide greater flexibility than the `targetNodeSelector` label described below.

`targetLabels`: Applies organizational or operational labels to migrated VMs for identification and management.

`targetNodeSelector`: Ensures VMs are scheduled on nodes that are an exact match for key-value pairs you create. This type of label is often used for nodes with special capabilities, such as GPU nodes or storage nodes.

[IMPORTANT]
====
System-managed labels, such as migration, plan, VM ID, or application labels, override any user-defined labels.
====

.Prerequisites
Migrations that use target VM scheduling require the following prerequisites, in addition to the prerequisites for your source provider:

* {project-first} 2.10 or later.
* A version of {virt} that is compatible with your version of {project-short}. For {project-short} 2.10, the compatible versions of {virt} are 4.18, 4.19, and 4.20 only.
* `cluster-admin` or equivalent security privileges that allow managing `VirtualMachineInstance` objects and associated Kubernetes scheduling primitives.

.Procedure
. Create custom resources (CRs) for the migration according to the procedure for your source provider.
. In the `Plan` CR, add the following labels before `spec:targetNamespace`. All are optional.
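+
The following is a minimal, illustrative sketch of a `Plan` CR with all three labels set. The resource names and label values are examples only, the `apiVersion` assumes the {project-short} `forklift.konveyor.io` API group, and the `targetAffinity` stanza assumes the same structure as standard Kubernetes affinity:
+
[source,yaml]
----
apiVersion: forklift.konveyor.io/v1beta1
kind: Plan
metadata:
  name: my-plan                # example name
  namespace: openshift-mtv     # example namespace
spec:
  # Organizational labels applied to the migrated VMs (example values)
  targetLabels:
    tier: critical
  # Exact-match labels that the target node must have (example values)
  targetNodeSelector:
    dataCenter: east
  # Hard (required) placement condition, using standard Kubernetes affinity syntax
  targetAffinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: disk
            operator: In
            values:
            - ssd
  targetNamespace: my-namespace
  # ...the remaining Plan spec for your source provider...
----
+
As with the other migration CRs, you apply the edited plan with standard tooling, for example `oc apply -f plan.yaml`.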

= Scheduling target VMs from the user interface

[role="_abstract"]
You can use the {project-first} user interface, which is located in the {ocp} web console, to tell {project-first} to migrate virtual machines (VMs) to specific nodes or workloads (pods) of {virt} as well as to schedule when the VMs are powered on.

The *Virtualization* section of the {ocp} web console supports the following options for scheduling target VMs:

* *VM target node selector*: Ensures VMs are scheduled on nodes that are an exact match for key-value pairs you create. This type of label is often used for nodes with special capabilities, such as GPU nodes or storage nodes.

* *VM target labels*: Applies organizational or operational labels to migrated VMs for identification and management.

* *VM target affinity rules*: Implements placement policies such as co-locating related workloads or, for disaster recovery, ensuring that specific VMs are migrated to different nodes. This type of rule uses hard (required) and soft (preferred) conditions combined with operators, such as `Exists` or `DoesNotExist`, instead of the rigid key-value pairs used by a VM target node selector. As a result, target affinity rules are more flexible than target node selector rules.
+
{project-short} supports the following affinity rules:
+
** Node affinity rules
** Workload (pod) affinity and anti-affinity rules

You configure target VM scheduling options on the *Plan details* page of the relevant migration plan. The options apply to all VMs that are included in that migration.

Instructions for the VM target scheduling options are included in the procedures for creating migration plans. The same options are supported for all source providers (VMware vSphere, {rhv-full}, {osp}, Open Virtual Appliance (OVA), and {virt}).

// documentation/modules/creating-plan-wizard-vmware.adoc

When your plan is validated, the *Plan details* page for your plan opens in the *Details* tab.

The *Plan settings* section of the page includes settings that you specified on the *Other settings (optional)* page and some additional optional settings. The steps below refer to the additional optional settings, but you can edit any of the settings by clicking the {kebab}, making the change, and then clicking *Save*.

// Plan settings page
. Check the following items in the *Plan settings* section of the page:

.. *Volume name template*: Specifies a template for the volume interface name for the VMs in your plan.
+
The template follows the Go template syntax and has access to the following variables:
Changes you make on the *Virtual Machines* tab override any changes on the *Plan details* page.
====

.. *Raw copy mode*: By default, during migration, virtual machines (VMs) are converted using a tool named `virt-v2v` that makes them compatible with {virt}. For more information about the virt-v2v conversion process, see 'How {project-short} uses the virt-v2v tool' in _Migrating your virtual machines to Red Hat {virt}_. _Raw copy mode_ copies VMs without converting them. This allows for faster migrations, support for a wider range of guest operating systems, and migration of disks encrypted with Linux Unified Key Setup (LUKS) without needing the keys. However, VMs migrated using raw copy mode might not function properly on {virt}.

** To use raw copy mode for your migration plan, do the following:
*** Click the *Edit* icon.
*** Toggle the *Raw copy mode* switch.
*** Click *Save*.

.. *VM target node selector*, *VM target labels*, and *VM target affinity rules* are options that support VM target scheduling, a feature that lets you direct {project-short} to migrate virtual machines (VMs) to specific nodes or workloads (pods) of {virt} as well as to schedule when the VMs are powered on.
+
For more information on the feature in general, see TBD Target VM scheduling options. For more details on using the feature with the UI, see TBD Scheduling target VMs from the user interface.

* *VM target node selector* allows you to create mandatory, exact-match key-value label pairs that the target node must possess. If no node on the cluster has all the specified labels, the VM is not scheduled, and it remains in a `Pending` state until there is space on a node that matches the key-value label pairs.

** To use the VM target node selector for your migration plan, do the following:
*** Click the *Edit* icon.
*** Enter a key-value label pair. For example, to require that all VMs in the plan be migrated to your `east` data center, enter `dataCenter` as the *key* and `east` as the *value*.
*** To add another key-value label pair, click *+* and enter the new pair.
*** Click *Save*.
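+
For illustration, the `dataCenter: east` pair in this example expresses the same constraint as a standard Kubernetes node selector, along the lines of the following sketch. Where the constraint is applied on the target cluster is managed by {project-short}:
+
[source,yaml]
----
# Sketch only: an exact-match node selector equivalent to the
# key-value pair entered in the UI (example values)
nodeSelector:
  dataCenter: east
----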

* *VM target labels* allows you to apply organizational or operational labels to migrated VMs for identification and management. For example, you can specify a different scheduler for your migrated VMs by creating a specific target VM label for it.

** To use VM target labels for your migration plan, do the following:
*** Click the *Edit* icon.
*** Enter one or more VM target labels.
*** Click *Save*.
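+
For illustration, target labels entered here become standard Kubernetes labels on the migrated VMs, along the lines of the following sketch (example key and value):
+
[source,yaml]
----
# Sketch only: a user-defined target label as it might appear
# on a migrated VM
metadata:
  labels:
    tier: critical
----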

* *VM target affinity rules*: Target affinity rules let you use conditions to either require or prefer scheduling on specific nodes or workloads (pods).
+
Target anti-affinity rules let you prevent VMs from being scheduled to run on selected workloads (pods), or express a preference that they are not scheduled on them. These kinds of rules offer more flexible placement control than rigid node selector rules because they support conditionals such as `In` or `NotIn`. For example, you could require that a VM be powered on "only if it is migrated to node A _or_ if it is migrated to an SSD disk, but it _cannot_ be migrated to a node for which `license-tier=silver` is true."
+
Additionally, both types of rules allow you to include both _hard_ and _soft_ conditions in the same rule. A hard condition is a requirement, and a soft condition is a preference. The previous example used only hard conditions. A rule that states that "a VM can be powered on if it is migrated to node A _or_ if it is migrated to an SSD disk, but it is preferred not to migrate it to a node for which `license-tier=silver` is true" is an example of a rule that uses soft conditions.
+
{project-short} supports target affinity rules at both the node level and the workload (pod) level. It supports anti-affinity rules at the workload (pod) level only. A sketch of a rule that combines a hard condition and a soft condition follows the procedure below.

** To use VM target affinity rules for your migration plan, do the following:
*** Click the *Edit* icon.
*** Click *Add affinity rule*.
*** Select the *Type* of affinity rule from the list. Valid options: Node Affinity, Workload (pod) Affinity, Workload (pod) Anti-Affinity.
*** Select the *Condition* from the list. Valid options: Preferred during scheduling (soft condition), Required during scheduling (hard condition).
*** Soft condition only: Enter a numerical *Weight*. The higher the weight, the stronger the preference. Valid options: whole numbers greater than 0.
*** Enter a *Topology key*, the key of the node label that the system uses to denote the domain.
*** Optional: Select the *Workload labels* that you want to set by doing the following:
**** Enter a *Key*.
**** Select an *Operator* from the list. Valid options: `Exists`, `DoesNotExist`, `In`, and `NotIn`.
**** Enter a *Value*.
*** To add another label, click *Add expression* and add another key-value pair with an operator.
*** Click *Save affinity rule*.
*** To add another affinity rule, click *Add affinity rule*. Rules with a preferred condition stack with an `AND` relation between them. Rules with a required condition stack with an `OR` relation between them.
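+
The following sketch shows the kind of node affinity rule that these settings express: a hard (required) condition that demands SSD-backed nodes, combined with a soft (preferred) condition that avoids nodes labeled `license-tier=silver`. It assumes the standard Kubernetes affinity structure, and all keys and values are examples only:
+
[source,yaml]
----
# Sketch only: one hard and one soft condition in a single rule
nodeAffinity:
  requiredDuringSchedulingIgnoredDuringExecution:    # hard condition
    nodeSelectorTerms:
    - matchExpressions:
      - key: disk
        operator: In
        values:
        - ssd
  preferredDuringSchedulingIgnoredDuringExecution:   # soft condition
  - weight: 100        # a higher weight means a stronger preference
    preference:
      matchExpressions:
      - key: license-tier
        operator: NotIn
        values:
        - silver
----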
{project-short} validates any changes you made on this page.