What’s new#

Rackspace Private Cloud Powered By OpenStack (RPCO) release 14.0 (r14) is based on the OpenStack-Ansible (OSA) project. For OSA release notes, see OpenStack-Ansible Newton Release Notes.

Major new releases of OpenStack such as Newton typically include many changes, enhancements, and new features. RPCO is a tested configuration of a subset of all available OpenStack services.

This page lists some of the significant upstream OpenStack changes and is provided for your awareness. It is not a statement of support. For more information about supported features and configurations, contact your Rackspace sales team or support specialist.

Block Storage service (cinder)#


  • Generic volume groups

    • This feature adds the ability to create groups of volumes and create snapshots of groups of volumes. Volume groups are more generic than consistency groups, which are only supported by a small number of backends. Volume groups allow operations to be executed against a number of volumes at the same time.

    • The create/delete/update/list/show APIs are now supported for groups.

  • Volume encryption, retype support

    • This feature adds the ability to migrate or retype volumes between encrypted and unencrypted states. You can reconfigure an environment to follow encryption best practices.

    • To use this feature, a customer must have a previously unencrypted volume to be encrypted, or vice versa.

  • Supported driver checks

    • This feature adds a flag for each driver in use. If a driver no longer meets the Block Storage community testing standards, the driver supported flag is set to False.

    • Supported status is checked when the cinder-volume service starts. If the status is False, an error is logged. If enable_unsupported_driver = False is set and the driver status is False, the driver will not start.
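
An illustrative cinder.conf fragment; the backend section name is an assumed example, not a required value:

```ini
# "netapp-backend" is an assumed example backend section name.
[netapp-backend]
# Refuse to start this driver if its supported flag is False.
# This is the default behavior.
enable_unsupported_driver = False
```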


  • [Not supported in RPCO] Hybrid aggregate support for NetApp cDot

    • Hybrid aggregates (Flash Pools) are now available when using a NetApp FAS with cDot as a Block Storage backend.

    • To use this feature, the customer must have a NetApp FAS with both SSDs and HDDs.

Compute service (nova)#


  • API microversion 2.37 requires that the server create request body include networks. Specifying networks: auto is equivalent to not requesting specific networks when creating a server before version 2.37.
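
For example, a version 2.37 server create request body might look as follows; the server name and the image and flavor references are placeholder values:

```json
{
    "server": {
        "name": "demo-server",
        "imageRef": "70a599e0-31e7-49b7-b260-868f441e862b",
        "flavorRef": "1",
        "networks": "auto"
    }
}
```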

  • Compute requires the Glance v2 API.


  • The default value of the live_migration_tunnelled configuration option (in the libvirt section) is now False. When upgrading nova to Newton, all live migrations become non-tunneled, unless live_migration_tunnelled is set to True.

    • By default, the migration traffic does not go through libvirt and therefore is not encrypted.
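
To restore the pre-Newton behavior of tunnelling (and therefore encrypting) live-migration traffic through libvirtd, the option can be set explicitly; a minimal nova.conf sketch:

```ini
[libvirt]
# Tunnel live-migration traffic through libvirtd so it can be
# encrypted. The Newton default is False (non-tunnelled).
live_migration_tunnelled = True
```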


  • All APIs that proxy to other services are deprecated.

    • These APIs return 404 on microversion 2.36 or higher.

    • Use the other services' native APIs instead of the nova proxy APIs.

    • The quotas and limits related to network resources including fixed_ips, floating ips, security_groups, security_group_rules, and networks are filtered out of the os-quotas and limit APIs. Manage those quotas through the OpenStack network service.

    • For nova-network, you can use these APIs and manage their quotas only at microversion 2.36 or lower.

    • The deprecated os-fping API relates only to nova-network, and its availability depends on the deployment.

  • The use_usb_tablet option is deprecated.

    • The pointer_model configuration option and the hw_pointer_model image property replace the use_usb_tablet option. You can specify different pointer models for input devices. The default value for pointer_model is usb_tablet.


  • The nova-lxd hypervisor is not supported in RPCO.

  • [Not supported in RPCO] Cells v2 now supports booting instances for one v2 cell only. Multi-cell support is planned for Ocata. You can prepare for Ocata now by creating a v2 cell by using the nova-manage commands. Configuring Cells v2 is optional for Newton.


  • Perf event support for the libvirt driver

    • You can use this feature by adding the enabled_perf_events configuration option in the libvirt section of nova.conf. This feature requires libvirt>=2.0.0.
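
A minimal nova.conf sketch; the event names shown (Intel CMT cache and memory-bandwidth counters) are examples, not a required set:

```ini
[libvirt]
# Enable selected perf events for instances. Requires libvirt >= 2.0.0.
enabled_perf_events = cmt, mbml, mbmt
```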

  • Two new oslo.policy CLI scripts

    • The first script, when called as oslopolicy-list-redundant --namespace nova, outputs a list of policy rules in policy.[json|yaml] that match the project defaults. You can remove these redundant rules from the policy file.

    • The second script, when called as oslopolicy-policy-generator --namespace nova --output-file policy-merged.yaml, populates the policy-merged.yaml file with the effective policy. This policy is the merged result of project defaults and configuration file overrides.

  • nova-manage quota refresh

    • Newton adds a nova-manage command to refresh the quota usages for a project or user. You can use this command when the usages in the quota-usages database table are out of sync with the actual usages. For example, if a resource usage is at the limit in the quota_usages table, but the actual usage is less, nova does not allow VMs to be created for that project or user. Use the nova-manage command to re-sync the quota_usages table with the actual usage.

  • New reserved_huge_pages option

    • You can use the reserved_huge_pages option to reserve the amount of huge pages used by third-party components.
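
A minimal nova.conf sketch, assuming a host with huge pages on two NUMA nodes; the counts and sizes are example values:

```ini
[DEFAULT]
# Reserve 64 pages of 2048 KiB on NUMA node 0 and one 1 GiB page on
# NUMA node 1 for third-party components.
reserved_huge_pages = node:0,size:2048,count:64
reserved_huge_pages = node:1,size:1GB,count:1
```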

  • New soft-affinity and soft-anti-affinity policies

    • These policies are implemented for the server-group Compute feature. A POST /v2.1/{tenant_id}/os-server-groups API resource now accepts soft-affinity and soft-anti-affinity as values of the policies request body key.
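
For example, a request body for creating a server group with the new soft-anti-affinity policy might look as follows; the group name is a placeholder:

```json
{
    "server_group": {
        "name": "web-heads",
        "policies": ["soft-anti-affinity"]
    }
}
```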

  • API policy defaults defined in code like configuration options

    • The sample policy.json file shipped with nova is empty. Editing this file is only necessary if you want to override the API policy from the defaults in the code.

    • To generate the policy file, type:

      # oslopolicy-sample-generator --config-file=etc/nova/nova-policy-generator.conf

Dashboard (horizon)#



  • Allowed Address Pairs tab

    • The port-details page has a new tab for managing allowed address pairs. This tab and its features are only available when this extension is active in Neutron. The Allowed Address Pairs tab enables creating, deleting, and listing address pairs for the current port.

  • Support for managing neutron L3 agent hosts

    • The Admin screen for system information now provides links and views showing which routers reside on what hosts. In addition, the Admin view of routers also provides a list of where the router is hosted and the link to see which other routers are sharing the same host.

    • For more information, see https://blueprints.launchpad.net/horizon/+spec/admin-neutron-l3-agent.

  • Scheduler hints added to Launch Instance

    • You can use this feature to add scheduler hints to an instance at launch. In addition to adding custom key-value pairs, you can also choose from properties in the glance metadata definitions catalog that have the OS::Nova::Server resource type and the scheduler_hints properties target.

  • Administrator ability to graphically manage floating IPs

  • Cinder consistency groups

    • This feature adds two tabs to the Project -> Volumes panel.

    • The first tab displays consistency groups. The second tab displays consistency group snapshots. Consistency groups (CG) contain existing volumes and allow users to perform actions on the volumes in one step. Actions include: create/update/delete CGs, snapshot all volumes in a CG, clone all volumes in a CG, and create a new CG and volumes from a CG snapshot.

    • Policies associated with consistency groups exist in the cinder policy file. By default, all actions are disabled.

    • For more information, see https://blueprints.launchpad.net/horizon/+spec/cinder-consistency-groups.

  • Boot sources restriction in Launch Instance

  • Glance V2 support

    • Dashboard adds complete support for glance v2. Dashboard no longer depends on having a glance v1 endpoint in the keystone catalog. The feature also provides code compatibility between glance v1 and v2.

  • Dashboard runs without nova or glance

    • Nova and glance are no longer required to run Dashboard. As long as keystone is present, Dashboard runs correctly.

  • Support for network IP availability

    • The admin network dashboard now displays IP availability. Two columns in the admin network subnets table display the allocated IPs in a given subnet and unallocated free IPs for each subnet in the network.

  • Limiting Overview panel scope

    • This feature adds a new OVERVIEW_DAYS_RANGE setting. This setting defines the default date range in the Overview panel meters - either today minus n days (if the value is integer n), or from the beginning of the current month until today (if set to None).

    • You can use this setting to limit the amount of data fetched by default when rendering the Overview panel. The default value is 1. The default differs from the past behavior, which caused serious lags on large deployments.
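
A minimal sketch of the setting as it might appear in the Dashboard local_settings.py file; the seven-day range is an example value:

```python
# Fetch only the last 7 days of usage data for the Overview panel.
# Set to None to restore the old month-to-date behavior.
OVERVIEW_DAYS_RANGE = 7
```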

  • New logout behavior setting

    • Deployers can set the TOKEN_DELETE_DISABLED variable to control whether a user’s token is revoked on logout. When the variable is set to True, the token is not deleted on logout.

Identity service (keystone)#



  • LDAP mapping table pre-population

    • The keystone-manage mapping_populate command pre-populates a mapping table with all LDAP users. Pre-population improves future query performance. Use this feature when an LDAP is first configured, or after calling the keystone-manage mapping_purge command and before making any domain queries.

    • For help with command options, type keystone-manage mapping_populate --help.


      Pre-population causes the keystone database to grow as large as the customer-configured LDAP tree. If many of those users never log in to OpenStack, pre-population can be counterproductive and can permanently degrade keystone database performance.

  • Add cache_on_issue attribute to validation cache

    • This feature enables placing issued tokens into the validation cache, which reduces the first validation time.

    • Caching might make debugging a running system more difficult, since data might be stale.

    • For more information, see the keystone sample configuration file.

  • Add password_expires_at attribute to user response object

    • PCI-DSS support requires this attribute in the user response object.

    • For more information, see the list-users API operation.


  • The keystone-manage domain_config_upload option is deprecated in favor of using the domain configuration API.


    RPCO does not use the new domain configuration API.

  • If you customize the policy.json file, update it to use these new variables:

    • identity:list_projects_for_user replaces identity:list_projects_for_groups.

    • identity:list_domains_for_user replaces identity:list_domains_for_groups.


Image service (glance)#


  • Database downgrade ability removed

    • At a project level, OpenStack is dropping support for downgrading databases. In this release, Image service removed the capability to downgrade its database to an older version.


It is highly recommended to back up the database before upgrading Image service. Backups provide the only method to revert to a previous version of the database.




  • Glance v2 API support

    • This release adds complete support for the glance v2 API. Dashboard no longer depends on having a glance v1 endpoint in the Identity catalog. This release also provides code compatibility between glance v1 and v2.

    • New installations of RPCO should use only glance v2.


    The glance v1 API is deprecated and marked for removal in OpenStack Pike. All existing RPCO installations must move to v2. See https://github.com/rcbops/u-suk-dev/issues/624.

  • Image service not required by Dashboard

    • With this release, Dashboard does not require Compute or Image service.

Networking service (neutron)#


  • The min_l3_agents_per_router router configuration option is deprecated. The cache_url option is replaced by setting enable = True in the [cache] group.

  • Scheduling of new HA routers is always allowed.

  • The global_physnet_mtu, path_mtu, and physical_network_mtus variables replace network_device_mtu.

  • The is_default variable replaces default_ipv4_subnet_pool and default_ipv6_subnet_pool.


  • RPCO does not support OVS.

  • RPCO does not support the external_network_bridge and ipam_driver variables.

  • RPCO does not support creating VLAN-aware virtual machines. VLAN-aware virtual machines use “trunks” of neutron ports that carry VLAN tags from the VM out through virtual switches.

Object Storage (swift)#



  • Object versioning

    • Object versioning supports a “history” mode in addition to the older “stack” mode. The modes differ in how DELETE requests are handled. For more information, see: http://docs.openstack.org/developer/swift/overview_object_versioning.html.

    • Deletion of objects is handled in two different ways. With the X-History-Location header, the current object version is removed but can still be recovered from the archive container. With the X-Versions-Location header, the current version is removed by copying the previous version over it, and the copy in the archive container is deleted. If there are multiple versions of an object, multiple deletes must be performed to remove the object.

  • Concurrent bulk deletes

    • This feature allows concurrent bulk deletes for server-side deletes of static large objects.

    • Before this feature, deletes were single-threaded and each DELETE executed serially. The new delete_concurrency value defaults to 2 in the [filter:slo] and [filter:bulk] sections of the proxy server configurations. This value controls the concurrency used to perform the DELETE requests for referenced segments. The default value is recommended. Setting the value to 1 restores the previous behavior.
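
An illustrative proxy-server.conf fragment showing the new default; setting the value to 1 restores the previous serial behavior:

```ini
[filter:slo]
# Number of concurrent DELETE requests for referenced segments.
delete_concurrency = 2

[filter:bulk]
delete_concurrency = 2
```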

  • TempURL responses now include an Expires header with the expiration time embedded in the URL.

  • The object auditor deletes old tombstones. This ensures that they are reclaimed and do not waste inodes.

    • To save disk I/O, Object Storage uses a hash to determine what changed so that the replicator knows what needs to be synced. The replicator reclaims deleted objects after the reclaim_age time period. Deleted objects are empty tombstone files with the .ts extension. If the suffix hash for a tombstone file was calculated before the reclaim age and no object is placed in the same suffix, the replicator would never delete the file. This situation led to many .ts files consuming inodes. The object auditor now marks the suffix hash for reclaimable .ts files as dirty, so they can be cleaned and the hashes recalculated.

    • This feature potentially changes the load on an Object Storage cluster. For example, the initial object replicator run can cause a spike in disk I/O while old .ts files (which can be numerous) are reclaimed. The impact might be negligible on smaller clusters, but can regularly be high in large clusters.

    • For more information, see https://bugs.launchpad.net/swift/+bug/1301728 and https://review.openstack.org/#/c/346865/.

  • Atomic object creation in Linux

    • Object Storage uses the O_TMPFILE flag when opening a file instead of creating a temporary file and renaming it on commit. This makes the data path simpler and allows the filesystem to more efficiently optimize the files on disk, increasing performance.

    • The Linux kernel adds support for O_TMPFILE (on XFS, from kernel 3.15 onward), which enables atomic object creation in Linux. Previously, the object was written to a temporary folder and renamed after the write finished. With O_TMPFILE, the kernel creates an unnamed, write-only file in the destination filesystem and links it into the directory only after the write completes. Writing the object becomes an atomic operation.

    • This change corrects a performance problem discovered in XFS that allocated all objects created in the same device-specific temporary folder to the same allocation group. Storage nodes with kernels newer than 3.15 automatically use atomic object creation, but fall back to the non-atomic method if unsupported.

    • For more information, see https://review.openstack.org/#/c/162243/.

Orchestration (heat)#


  • Support for conditionals

    • This feature adds condition functions such as equals, not, and and or. When added to a conditions section, they define one or more conditions that are evaluated based on input parameter values provided when a user creates or updates a stack.

    • An optional condition key is available for resource and output definitions. Condition names defined in the conditions section, as well as condition functions, can be referenced by this key to conditionally create resources or conditionally produce stack outputs.

    • The feature includes a function to return corresponding values based on condition evaluation. You can use this function to conditionally set the value of resource properties and outputs.
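
A minimal HOT template sketch under assumed names (env_type and backup_volume are illustrative): the volume and its output exist only when the stack is created or updated with env_type=prod:

```yaml
heat_template_version: 2016-10-14

parameters:
  env_type:
    type: string
    default: test

conditions:
  # True when the stack is created or updated with env_type=prod.
  create_prod_volume: {equals: [{get_param: env_type}, prod]}

resources:
  backup_volume:
    type: OS::Cinder::Volume
    condition: create_prod_volume
    properties:
      size: 1

outputs:
  backup_volume_id:
    condition: create_prod_volume
    value: {get_resource: backup_volume}
```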

  • Enhanced support for Block Storage QoS and quotas

    • A new OS::Cinder::QoSSpecs resource plugin supports Block Storage QoS specs provided by the qos-specs API extension.

    • A new cinder.qos_specs constraint supports validation of the QoS specs attribute.

    • A new OS::Cinder::Quota resource is added to manage Block Storage quotas. These quotas are operational limits to projects on Block Storage resources. These include gigabytes, snapshots, and volumes.

  • Support for DNS resolution using internal or external DNS services for neutron resources

    • This feature supports internal DNS resolution and integration with external DNS services for neutron resources. Template authors can use the dns_name and dns_domain properties of neutron resource plugins.

  • New max_server_name_length configuration option

    • This feature adds the max_server_name_length configuration option. The option defaults to the prior upper bound (53). Users can change the value if necessary. For example, lower the value to comply with LDAP or other internal name limit restrictions.

  • New template directory configuration option

    • This feature adds the template_dir configuration option. Typically, Orchestration uses the /etc/heat/templates directory. This option makes it configurable.

  • New map_replace function

    • A new map_replace function accepts two arguments: an input map and a map containing keys and values maps. Key or value substitutions are performed on the input map based on the second argument.
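
An illustrative output definition; the map contents are made-up example values. The first argument is the input map, and the keys and values submaps of the second argument drive the substitutions:

```yaml
outputs:
  renamed:
    value:
      map_replace:
        - flavor: m1.tiny                 # input map
          image: cirros
        - keys: {flavor: flavor_name}     # rename the flavor key
          values: {cirros: cirros-0.3.4}  # replace the cirros value
```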

  • New yaql function

    • The yaql function accepts two arguments: an expression of type string and data of type map. The function evaluates expressions on a given data set.
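
A sketch of the yaql function summing a supplied list; the expression and data are example values:

```yaml
outputs:
  total_size:
    value:
      yaql:
        # $.data refers to the map supplied in the data argument.
        expression: $.data.sizes.sum()
        data:
          sizes: [1, 2, 4]
```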