Filter by Host Aggregate Metadata or by Image Extra Specs

Motivation

Cloud management systems like OpenStack were designed from scratch to take control of a group of homogeneous nodes and, on top of that, build a complex infrastructure of virtual machines interconnected with a mix of virtual and physical networks. Over time, more and more systems have been migrated to virtual infrastructure. The problem we now face is how to handle the disparate requirements that different clients demand from the same cloud manager.

Let’s take a look at the following example where we compare two common use cases:

Primary use cases    Enterprise apps            Telco VNFs
Networking           10 Gb, varied packets      40 Gb, small packets
Scale                Local or limited           Massively distributed
Regulation           Little or None             High
Hardware offload     ✗                          ✓
Software             Out-of-Box                 Custom
Why?                 Lowered costs, improved agility, ...

Table 1: Enterprise vs. Telco, source: "Accelerating your Cloud with DPDK" [1]

In the first use case (i.e., enterprise apps), the cloud manager will spawn CPU-hungry virtual machines, so spreading the virtual machines across the compute nodes of the cloud is a sensible strategy. In the second use case (i.e., Telco VNFs), however, the cloud manager will try to spawn virtual machines on the same node, in an effort to increase the network throughput between these VNFs.

How OpenStack Chooses the Most Suitable Host

OpenStack uses a scheduler to determine which compute node is suitable to host the new virtual machine to be spawned. In a nutshell, this scheduler uses a set of filters [2] and weights to select the compute node.

The filtering process is very simple. All the filters are executed in a fixed order; each filter outputs a set of suitable hosts, and this set is evaluated by the next filter. At the end of this chain, the final result is a reduced set of hosts matching all the conditions imposed by the filters.
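As a rough sketch of that chain (a simplified illustration with hypothetical names, not Nova's actual classes), each filter keeps only the hosts it accepts and hands the reduced set to the next filter:

def run_filters(filters, hosts, request_spec):
    # Apply every filter in order; each filter is a callable
    # (host, request_spec) -> bool and only sees the hosts
    # accepted by the previous filters.
    for host_passes in filters:
        hosts = [h for h in hosts if host_passes(h, request_spec)]
        if not hosts:
            break  # no host satisfies all the conditions so far
    return hosts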

At present, most of the filters are defined in a monolithic way: one input variable is read and compared with a fixed value, and the result of this comparison determines whether the host is suitable or not. This simplicity is welcome because it reduces compute time, but it limits configuration flexibility. Moreover, in some environments the administrator will need to create a complex chain of filters to achieve the expected configuration.

AggregateInstanceExtraSpecsFilter (Current Implementation) vs AggregateInstanceTypeFilter (New Implementation)

To avoid any confusion due to the long names, I'll refer to AggregateInstanceExtraSpecsFilter as the current filter and to AggregateInstanceTypeFilter as the new filter presented in this article.

At present, the filter scheduler allows operators to associate an instance type (flavor) with a host aggregate [3] via the AggregateInstanceExtraSpecsFilter [4]. This filter enforces that the host aggregate satisfies all the conditions defined as "extra specs" in the flavor; the values these conditions are checked against are set as metadata on the aggregate.
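For example (the "ssd" key name here is only illustrative, shown in the same notation used in the use cases below), a flavor requirement scoped with the "aggregate_instance_extra_specs" prefix is matched against the corresponding aggregate metadata key:

flavor extra specs: {"aggregate_instance_extra_specs:ssd": "true"}
aggregate metadata: {"ssd": "true"}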

However, the operator must include in the "extra specs" every value to be matched in the aggregate metadata, and can't specify a wildcard that accepts any value [use case 1]. Also, each variable included in the extra specs must be present in the aggregate metadata; the operator can't make a variable optional [use case 2] or require the absence of a variable [use case 3]. (See the use cases outlined below in the "Example is Better than Precept" section.)

Another limitation of the current implementation of AggregateInstanceExtraSpecsFilter is the logic it imposes: an injective mapping from the flavor extra specs to the aggregate metadata. That means every element present in the flavor must be present in the aggregate for the host to pass the filter. If the flavor has, for example, three extra specs, the aggregate must have those three keys, and their values must satisfy the conditions given in the flavor extra specs.

AggregateInstanceTypeFilter [5] offers, as an additional option, a surjective logic. That means that, instead of forcing the aggregate to satisfy the extra spec conditions present in the flavor, the flavor is now required to satisfy the conditions defined in the aggregate metadata. In this case, all metadata elements must be present in the flavor extra specs and must satisfy the given conditions [use case 4].
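A minimal sketch of the two matching directions, assuming flavor extra specs and aggregate metadata are plain dictionaries and using a simplified equality check (the real filters also handle the sentinels and operators described below):

def match(required, actual):
    # Simplified: the real filters also accept sentinels such as
    # "*" and "<or>" value lists (see the use cases below).
    return actual == required

def injective_pass(extra_specs, metadata):
    # Current filter: every flavor extra spec must be satisfied by
    # a key present in the aggregate metadata.
    return all(key in metadata and match(value, metadata[key])
               for key, value in extra_specs.items())

def surjective_pass(extra_specs, metadata):
    # New filter with "force_metadata_check": every aggregate
    # metadata key must be satisfied by the flavor extra specs.
    return all(key in extra_specs and match(value, extra_specs[key])
               for key, value in metadata.items()
               if key != "force_metadata_check")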

To add more flexibility to this new filter, the new sentinels it defines can also be used in the aggregate metadata [use case 5].

Another limitation is the inability to use the operators defined in AggregateInstanceExtraSpecsFilter [use case 6] in the host aggregate metadata.

Finally, the last limitation detected is how namespaced variables are filtered. Only those variables using a defined scope are actually used to filter the hosts; the rest are skipped and take no part in the filtering [use case 7].

Example is Better than Precept

Use case 1

An operator wants to filter hosts having a key in their aggregate metadata, independently of its value, e.g.:

flavor extra specs: {"key": "*"}

All hosts inside a host aggregate containing this key, regardless of the value, will pass this check.

Use case 2

An operator wants to filter hosts having a specific value, but if the aggregate doesn’t have this key, the host should pass anyway, e.g.:

flavor extra specs: {"key": "<or> 1 <or> ~"}
aggregate 1 metadata: {"key": "1"}
aggregate 2 metadata: {"key": "2"}
aggregate 3 metadata: {}

Hosts in aggregates 1 and 3 will pass this filter.

Use case 3

In this case, the operator wants to exclude any host inside an aggregate containing a given key, e.g.:

flavor extra specs: {"key": "!"}
aggregate 1 metadata: {"key": "1"}
aggregate 2 metadata: {}

Only hosts in aggregate 2 will pass.
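A minimal sketch of how these three sentinels could be evaluated for a single key (hypothetical helper, not the filter's actual code):

def key_passes(requirement, metadata, key):
    # "!" : the key must be absent from the aggregate metadata
    # "*" : the key must be present, any value is accepted
    # "~" : inside an "<or>" list, a missing key is accepted
    values = [v.strip() for v in requirement.split("<or>") if v.strip()]
    if requirement == "!":
        return key not in metadata          # use case 3
    if key not in metadata:
        return "~" in values                # use case 2
    if requirement == "*":
        return True                         # use case 1
    return metadata[key] in values

# key_passes("*", {"key": "7"}, "key")          -> True
# key_passes("<or> 1 <or> ~", {}, "key")        -> True
# key_passes("!", {"key": "1"}, "key")          -> False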

Use case 4

This use case can be used to force a flavor to contain a set of keys present in the aggregate metadata. This constraint is added on top of the normal filter process, which tries to match the keys present in the flavor against the keys in the aggregate metadata. To activate this new verification logic in the filter, a new metadata key is introduced: {"force_metadata_check": "True"}.

E.g.

flavor 1 extra specs: {"key": "1"}
flavor 2 extra specs: {"key": "2"}
flavor 3 extra specs: {}
aggregate metadata: {"key": "1", "force_metadata_check": "True"}

In this example, hosts in this aggregate will pass only with flavor 1. Without the key "force_metadata_check" set to "True", flavor 3 would also allow the use of hosts in the aggregate.
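For instance, feeding this example into the surjective sketch shown earlier (hypothetical code with the simplified equality match):

metadata = {"key": "1", "force_metadata_check": "True"}
# surjective_pass({"key": "1"}, metadata) -> True   (flavor 1)
# surjective_pass({"key": "2"}, metadata) -> False  (flavor 2)
# surjective_pass({}, metadata)           -> False  (flavor 3, "key" missing)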

Use case 5

If the key "force_metadata_check", explained in the previous use case, is set, the administrator can also use the sentinels in the aggregate metadata values, e.g.:

flavor 1 extra specs: {"key": "1"}
flavor 2 extra specs: {"key": "2"}
flavor 3 extra specs: {}
aggregate metadata: {"key": "*", "force_metadata_check": "True"}

In this example, flavors 1 and 2 will be allowed on hosts belonging to this aggregate. Using this example, if the key "force_metadata_check" is removed (or set to "False"), the only accepted flavor will be flavor 3.

E.g., without "force_metadata_check":

flavor 1 extra specs: {"key": "1"}
flavor 2 extra specs: {"key": "2"}
flavor 3 extra specs: {}
flavor 4 extra specs: {"key": "*", "key2": "2"}
aggregate metadata: {"key": "*"}

In this example:

  • Flavor 1's key value, "1", doesn't match the literal string "*".
  • The same behavior applies to flavor 2.
  • Because flavor 3 doesn't have any requirement, it's accepted in this host aggregate; any flavor without extra specs would be accepted as well.
  • Flavor 4 won't pass because "key2" doesn't exist in the aggregate metadata.

A third example, using the "!" sentinel in the aggregate metadata:

flavor 1 extra specs: {"key": "1"}
flavor 2 extra specs: {"key": "2"}
flavor 3 extra specs: {}
aggregate metadata: {"key": "!", "force_metadata_check": "True"}

In this third example, only flavor 3 will be allowed on hosts in the aggregate. This additional logic is backwards compatible with the existing one.

Use case 6

Again, if the key "force_metadata_check" is set in the aggregate metadata, the operator can use the "<or>" operator to define multiple values for a key. This change doesn't break the logic of the old filter: the aggregate metadata checked inside the filter is a set of values combining the data contained in the corresponding key of each aggregate's metadata; this set will now also contain the values inside the "<or>" junction. E.g.:

flavor 1 extra specs: {"key": "1"}
flavor 2 extra specs: {"key": "2"}
flavor 3 extra specs: {"key": "<or> 2 <or> 3"}
flavor 4 extra specs: {}
aggregate metadata: {"key": "<or> 1 <or> 2", "force_metadata_check": "True"}

In this example, only flavor 4 will be rejected for hosts inside the aggregate. It should be noted that if the key "force_metadata_check" is not set, the strings contained in the aggregate metadata values are checked literally. Using the last example, if the key "force_metadata_check" is removed (or set to "False"), the filter will use the aggregate metadata value strings without the new logic added by this filter, to maintain backwards compatibility. E.g.:

flavor 1 extra specs: {"key": "1"}
flavor 2 extra specs: {"key": "2"}
flavor 3 extra specs: {"key": "<or> 2 <or> 3"}
flavor 4 extra specs: {}
flavor 5 extra specs: {"key": "<or> 1 <or> 2"}
aggregate metadata: {"key": "<or> 1 <or> 2"}

In this second example, no flavor will pass the filter. The fifth flavor has the same string value in "key", but the current filter, AggregateInstanceExtraSpecsFilter, compares each value in the flavor's key, "1" and "2", independently against the literal metadata string.
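A rough sketch of the two behaviors for the aggregate-side values (hypothetical helper, reduced to a single metadata value):

def aggregate_values(raw, force_metadata_check):
    # With "force_metadata_check" the "<or>" junction in the aggregate
    # metadata is expanded into a set of accepted values; without it
    # the string is kept literally, as the old filter does.
    if force_metadata_check:
        return {v.strip() for v in raw.split("<or>") if v.strip()}
    return {raw}

# aggregate_values("<or> 1 <or> 2", True)  -> {"1", "2"}
# aggregate_values("<or> 1 <or> 2", False) -> {"<or> 1 <or> 2"}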

Use case 7

The use of namespaced variables could be extended, allowing the operator to filter hosts by these values. To maintain backward compatibility, any namespaced key outside the scope used by the old filter, "aggregate_instance_extra_specs", will be considered optional: if the key is not present in the aggregate metadata, the filter skips it; if the key is present in the aggregate metadata, the value is checked like a regular key. E.g.:

flavor extra specs: {"hw:cpu_policy": "shared"}
aggregate 1 metadata: {"hw:cpu_policy": "shared"}
aggregate 2 metadata: {"hw:cpu_policy": "dedicated"}
aggregate 3 metadata: {}

In this example, hosts in aggregates 1 and 3 will pass, but hosts in aggregate 2 won't, because the namespaced key is present in both the extra specs and the metadata and the values differ. This new feature could collide with the old behavior.

If the key "force_metadata_check" is set in the aggregate metadata, all keys, with or without a namespace, will be checked. This check allows the operator to define host aggregates with restrictions on the virtual machines that can be spawned on their hosts. With this extension, the operator can easily define sets of hosts with specific properties and usage restrictions by modifying only the aggregate metadata, instead of the flavor definitions.
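A minimal sketch of this optional handling of namespaced keys when "force_metadata_check" is not set (hypothetical helper, not the filter's actual code):

def namespaced_key_passes(key, required, metadata):
    # Namespaced keys outside the "aggregate_instance_extra_specs"
    # scope are optional: if the aggregate does not define the key
    # it is skipped, otherwise it is checked like a regular key.
    if key not in metadata:
        return True
    return metadata[key] == required

# namespaced_key_passes("hw:cpu_policy", "shared", {"hw:cpu_policy": "shared"})    -> True
# namespaced_key_passes("hw:cpu_policy", "shared", {})                             -> True
# namespaced_key_passes("hw:cpu_policy", "shared", {"hw:cpu_policy": "dedicated"}) -> False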

References

[1] Accelerating your Cloud with DPDK

[2] OpenStack Nova Scheduler filters

[3] OpenStack Nova host aggregates

[4] AggregateInstanceExtraSpecsFilter

[5] AggregateInstanceTypeFilter