Rackspace Cloud Orchestration Templates User Guide
This guide provides tutorials for writing Orchestration templates for the Rackspace Cloud Orchestration service, built on OpenStack’s Heat project.
Topics covered include bootstrapping software configuration, such as using Ansible, as well as information about interacting with Rackspace Cloud services that have resources provided through Orchestration templates.
Generic software config
Brief summary
If you have ever needed to configure a server with Heat, chances are you have written a user_data script inside an OS::Nova::Server resource. The OS::Heat::SoftwareConfig resource is another way to configure a server. It provides several advantages over defining a user_data script.
One SoftwareConfig resource can be associated with many servers. Each time it is triggered, it can be run with different parameters. In addition, a SoftwareConfig resource can be updated and rerun without causing the server to be rebuilt.
SoftwareConfig can also be used to configure your server using configuration management software, such as Ansible or Puppet. In this tutorial, we will configure the server with a simple shell script.
Pre-reading
The following introductory material should give you enough background to proceed with this tutorial.
- Application software configuration using Heat
- HOT guide - Software configuration
- Software Config example templates
Example template
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
A template which demonstrates doing boot-time installation of the required
files for script based software deployments.
This template expects to be created with an environment which defines
the resource type Heat::InstallConfigAgent such as
../boot-config/fedora_pip_env.yaml
parameters:
resources:
outputs:
Parameters section
Add a template parameter for the server image:
image:
type: string
Resources section
Add an OS::Heat::SoftwareConfig resource, which will be used to define a software configuration.
config:
type: OS::Heat::SoftwareConfig
properties:
group: script
inputs:
- name: foo
- name: bar
outputs:
- name: result
config: |
#!/bin/sh -x
echo "Writing to /tmp/$bar"
echo $foo > /tmp/$bar
echo -n "The file /tmp/$bar contains \`cat /tmp/$bar\` for server $deploy\_server\_id during $deploy\_action" > $heat\_outputs_path.result
echo "Written to /tmp/$bar"
echo "Output to stderr" 1>&2
The “group” property is used to specify the type of SoftwareConfig hook that will be used to deploy the configuration. Other SoftwareConfig hooks are available in the openstack/heat-templates repository on GitHub.
Add an OS::Heat::SoftwareDeployment resource, which will be used to associate a SoftwareConfig resource and a set of input values with the server to which it will be deployed.
deployment:
type: OS::Heat::SoftwareDeployment
properties:
signal_transport: TEMP_URL_SIGNAL
config:
get_resource: config
server:
get_resource: server
input_values:
foo: fooooo
bar: baaaaa
The “signal_transport” is set to “TEMP_URL_SIGNAL” because Rackspace’s deployment of Heat does not support the other transports at this time. Since this is the default transport on the Rackspace Cloud, it is also safe to omit the property entirely.
Add an InstallConfigAgent resource, which will be mapped via the environment to a “provider” resource:
boot_config:
type: Heat::InstallConfigAgent
The purpose of this resource is to provide output for the user_data section that will be used to install the config agent on the Server resource below. See the Usage section below for more information on using this resource.
Add a Nova server key pair resource as a way to access the server to confirm deployment results:
ssh_key:
type: OS::Nova::KeyPair
properties:
name: private_access_key
save_private_key: true
Finally, add the OS::Nova::Server resource and reference the boot_config resource in the user_data section:
server:
type: OS::Nova::Server
properties:
image: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04
flavor: 2 GB Performance
key_name: { get_resource: ssh_key }
software_config_transport: POLL_TEMP_URL
user_data_format: SOFTWARE_CONFIG
user_data: {get_attr: [boot_config, config]}
config_drive: True
Outputs section
Add the following to your outputs section:
result:
value:
get_attr: [deployment, result]
stdout:
value:
get_attr: [deployment, deploy_stdout]
stderr:
value:
get_attr: [deployment, deploy_stderr]
status_code:
value:
get_attr: [deployment, deploy_status_code]
server_ip:
value:
get_attr: [server, accessIPv4]
private_key:
value:
get_attr: [ssh_key, private_key]
This will show the actual script output from the SoftwareConfig resource.
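Once the stack has been created (see the Usage section below), these outputs can be read back with python-heatclient. For example, assuming the stack name used later in this tutorial and the same authentication options shown there:
heat output-show generic-software-config1 result
heat output-show generic-software-config1 stdout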
Full template
heat_template_version: 2014-10-16
description: |
A template which demonstrates doing boot-time installation of the required
files for script based software deployments.
This template expects to be created with an environment which defines
the resource type Heat::InstallConfigAgent such as
../boot-config/fedora_pip_env.yaml
parameters:
image:
type: string
resources:
config:
type: OS::Heat::SoftwareConfig
properties:
group: script
inputs:
- name: foo
- name: bar
outputs:
- name: result
config: |
#!/bin/sh -x
echo "Writing to /tmp/$bar"
echo $foo > /tmp/$bar
echo -n "The file /tmp/$bar contains \`cat /tmp/$bar\` for server $deploy\_server\_id during $deploy\_action" > $heat\_outputs_path.result
echo "Written to /tmp/$bar"
echo "Output to stderr" 1>&2
deployment:
type: OS::Heat::SoftwareDeployment
properties:
signal_transport: TEMP_URL_SIGNAL
config:
get_resource: config
server:
get_resource: server
input_values:
foo: fooooo
bar: baaaaa
boot_config:
type: Heat::InstallConfigAgent
ssh_key:
type: OS::Nova::KeyPair
properties:
name: private_access_key
save_private_key: true
server:
type: OS::Nova::Server
properties:
image: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04
flavor: 2 GB Performance
key_name: { get_resource: ssh_key }
software_config_transport: POLL_TEMP_URL
user_data_format: SOFTWARE_CONFIG
user_data: {get_attr: [boot_config, config]}
config_drive: True
outputs:
result:
value:
get_attr: [deployment, result]
stdout:
value:
get_attr: [deployment, deploy_stdout]
stderr:
value:
get_attr: [deployment, deploy_stderr]
status_code:
value:
get_attr: [deployment, deploy_status_code]
server_ip:
value:
get_attr: [server, accessIPv4]
private_key:
value:
get_attr: [ssh_key, private_key]
Usage
Before you create the stack, you need an environment file that will define a Heat::InstallConfigAgent resource to tell Heat how to install the config agent on Ubuntu 14.04.
First, clone the heat-templates repository:
git clone https://github.com/openstack/heat-templates.git
The environment file you will use is located at heat-templates/hot/software-config/boot-config/ubuntu_pip_env.yaml. It will supply the image parameter to the template. A ready-made InstallConfigAgent resource for Fedora also exists in the heat-templates repository in case you want to use Fedora.
Then, issue the stack-create command with the template and environment file just created using python-heatclient:
heat --heat-url=https://dfw.orchestration.api.rackspacecloud.com/v1/$RS_ACCOUNT_NUMBER --os-username $RS_USER_NAME --os-password $RS_PASSWORD --os-tenant-id $RS_ACCOUNT_NUMBER --os-auth-url https://identity.api.rackspacecloud.com/v2.0/ stack-create -f generic-software-config.yaml -e heat-templates/hot/software-config/boot-config/ubuntu_pip_env.yaml generic-software-config1
Next, edit the template and perform a stack-update. Edit the SoftwareDeployment parameters in the template:
sed -i.bak -e 's/fooooo/fooooo1/' -e 's/baaaaa/baaaaa1/' generic-software-config.yaml
Issue the stack-update command:
heat --heat-url=https://dfw.orchestration.api.rackspacecloud.com/v1/$RS_ACCOUNT_NUMBER --os-username $RS_USER_NAME --os-password $RS_PASSWORD --os-tenant-id $RS_ACCOUNT_NUMBER --os-auth-url https://identity.api.rackspacecloud.com/v2.0/ stack-update -f generic-software-config.yaml -e heat-templates/hot/software-config/boot-config/ubuntu_pip_env.yaml generic-software-config1
Notice that the config agent re-runs the script without rebuilding the server. In a couple of minutes, a new file, /tmp/baaaaa1, should exist alongside the original /tmp/baaaaa, containing fooooo1.
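If you would rather confirm this on the server itself, one option is to log in over SSH. This is only a sketch: it assumes you have saved the key from the private_key output to a local file (the file name here is arbitrary) and substituted the address from the server_ip output:
# /tmp/baaaaa remains from the first run; /tmp/baaaaa1 is written by the update
ssh -i private_access_key.pem root@<server_ip> 'ls -l /tmp/ && cat /tmp/baaaaa1'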
Reference documentation
Bootstrapping software config
Brief summary
In the Generic software config tutorial, you learned how to use Heat’s generic software configuration mechanism to treat the configuration of compute instances the same way you treat any other resource in your template. This tutorial goes into more detail about bootstrapping a pristine image for use with software config, and also shows how you can then create your own image with the necessary tools pre-installed for easier use in future stacks.
Pre-reading
Make sure you have completed the previous tutorial, Generic software config, first, as we will be using that example template as a basis for this one. You will also need a very basic understanding of Heat template composition and Environments.
Following along
You will probably want to clone this repository (https://github.com/rackerlabs/rs-heat-docs/) in order to easily follow along. Otherwise, you may need to modify some of the commands to point to the correct locations of various templates and environments. You may also have to modify the environment file to point to the correct bootconfig_all.yaml.
Modifying the example template
We have started by making a copy of the original example template and saving it as software_config_custom_image.yaml. We then removed resources from the resources section except for the parts that bootstrap the instance as well as the server itself. The following resources were removed from the template:
config
deployment
other_deployment
We revised the outputs section so that we can easily access the server’s IP address and root credentials (we will explain a little more in the next section):
outputs:
server_ip:
value: { get_attr: [ server, addresses, public, 0, addr ] }
description: IP address of the server
admin_password:
value: { get_attr: [ admin_password, value ] }
description: Root password to the server
We left the parameters, description, and heat_template_version sections as-is.
Modify the server
We added an OS::Heat::RandomString resource to generate a random root password for the instance so that we can log into the instance after the stack is complete. This lets us make some small modifications later if we want to create an image we can reuse the next time we want to apply software config to a server.
admin_password:
type: OS::Heat::RandomString
Since we are not actually deploying any software config to the instance, we can just use cloud-init to do our installation. To do this, we clean up the server resource by removing the software_config_transport property and changing the user_data_format to RAW. We will also pass in the generated password to the instance:
server:
type: OS::Nova::Server
properties:
name: { get_param: "OS::stack_name" }
admin_pass: { get_attr: [ admin_password, value ] }
image: { get_param: image }
flavor: 2 GB Performance
user_data_format: RAW
user_data: {get_attr: [boot_config, config]}
Your template should now look like:
heat_template_version: 2014-10-16
description: |
A template that creates a server bootstrapped for use
with Heat Software Config
parameters:
image:
type: string
resources:
boot_config:
type: Heat::InstallConfigAgent
admin_password:
type: OS::Heat::RandomString
server:
type: OS::Nova::Server
properties:
name: { get_param: "OS::stack_name" }
admin_pass: { get_attr: [ admin_password, value ] }
image: { get_param: image }
flavor: 2 GB Performance
user_data_format: RAW
user_data: {get_attr: [boot_config, config]}
outputs:
server_ip:
value: { get_attr: [ server, addresses, public, 0, addr ] }
description: IP address of the server
admin_password:
value: { get_attr: [ admin_password, value ] }
description: Root password to the server
The Heat::InstallConfigAgent resource
You will notice that this resource has no real properties or other configuration. That is because we use the Environment and Template Resource features of Heat so that we can create several bootstrap configurations and use them for different base images as required.
The configuration template
First, look at the template that we will use to provide the underlying definition for the boot_config resource. Since this template is a bit large, it will not be included in its entirety here, but it can always be found in the templates directory of this repository as bootconfig_all.yaml.
In Generic Software Config, we used the same mechanism to bootstrap our clean instance using a template provided by the OpenStack Heat project. While that works well, the repository used is laid out for maximum reusability, so it can be hard to follow what is actually going on in the template. For this tutorial, we’ve “de-normalized” the bootstrap template to more easily explain the different sections and what they do.
Before we dive in, also note that there is nothing special about this template. Heat allows for and encourages template composition so that you can abstract and reuse parts of your application architecture. Having said that, we will not talk at all about basic things like descriptions or versions, but rather go over the resources and how they prepare the instance for use with Heat Software Config.
Install the basics
The first resource is the most complex; it uses cloud-init to lay down the needed software, scripts, and configuration. Since there is a lot going on here, we will break down the actual cloud-config rather than the resource wrapping it.
First, we install the supporting software packages:
apt_upgrade: true
apt-sources:
- source: "ppa:ansible/ansible"
packages:
- python-pip
- git
- gcc
- python-dev
- libyaml-dev
- libssl-dev
- libffi-dev
- libxml2-dev
- libxslt1-dev
- python-apt
- ansible
- salt-minion
The next section writes several files. The first four are fairly generic and configure the base OpenStack agents os-collect-config, os-apply-config, and os-refresh-config. Note that these agents are actually installed in a separate section described later. You can read more about these agents in the reference sections. Their job is to coordinate the reading, running, and updating of the software configuration that will be sent via Heat.
Following are a few files that tell the generic OpenStack agents how to handle configurations received from Heat. The script written to /opt/stack/os-config-refresh/configure.d/55-heat-config is executed when a config is to be applied or refreshed. It is this script that decides which config handler agent to call to apply the configuration (shell script, Ansible, Puppet, Salt, and so forth).
The script written to /var/lib/heat-config/hooks/script is the default config handler agent that executes the configuration in the default group and assumes the configuration is a shell script.
The other available agent handlers are written similarly, using the same root hooks directory (/var/lib/heat-config/hooks) and using the name of the config group handled as the file name. In our example, we have included handlers for using configurations in the default, Ansible, Salt, and Puppet config groups. You can customize this for your needs by removing handlers you do not want or adding additional ones from https://github.com/openstack/heat-templates/tree/master/hot/software-config/elements. Note that you may also need to add required packages to the packages or runcmd sections of the cloud-config if you add additional handlers.
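For example, once the Ansible handler is present, a configuration can target it simply by setting its group to ansible. The resource name and playbook below are only an illustrative sketch, not part of the original template:
ansible_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: ansible
    config: |
      ---
      - hosts: localhost
        tasks:
          - name: write a marker file
            copy: content="configured via the ansible hook" dest=/tmp/ansible_marker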
The final section installs puppet for the puppet group handler and then runs the commands that bootstrap the generic OpenStack agents.
runcmd:
- wget https://apt.puppetlabs.com/puppetlabs-release-trusty.deb
- dpkg -i puppetlabs-release-trusty.deb
- apt-get update
- apt-get install puppet
- os-collect-config --one-time --debug
- cat /etc/os-collect-config.conf
- os-collect-config --one-time --debug
Install the generic agents
The actual generic OpenStack agents are installed using Python pip since there aren’t any reliable packages for them on the Ubuntu operating system.
install_agents:
type: "OS::Heat::SoftwareConfig"
properties:
group: ungrouped
config: |
#!/bin/bash
set -eux
pip install os-collect-config os-apply-config os-refresh-config dib-utils
Configure the agents service
Next, we declare a config resource to create the service configuration (upstart or systemd) that will start the collection agent and ensure that it runs on boot:
start:
type: "OS::Heat::SoftwareConfig"
properties:
group: ungrouped
config: |
#!/bin/bash
set -eux
if [[ `systemctl` =~ -\.mount ]]; then
# if there is no system unit file, install a local unit
if [ ! -f /usr/lib/systemd/system/os-collect-config.service ]; then
cat <<EOF >/etc/systemd/system/os-collect-config.service
[Unit]
Description=Collect metadata and run hook commands.
[Service]
ExecStart=/usr/bin/os-collect-config
Restart=on-failure
[Install]
WantedBy=multi-user.target
EOF
cat <<EOF >/etc/os-collect-config.conf
[DEFAULT]
command=os-refresh-config
EOF
fi
# enable and start service to poll for deployment changes
systemctl enable os-collect-config
systemctl start --no-block os-collect-config
elif [[ `/sbin/init --version` =~ upstart ]]; then
if [ ! -f /etc/init/os-collect-config.conf ]; then
cat <<EOF >/etc/init/os-collect-config.conf
start on runlevel [2345]
stop on runlevel [016]
respawn
# We're logging to syslog
console none
exec os-collect-config 2>&1 | logger -t os-collect-config
EOF
fi
initctl reload-configuration
service os-collect-config start
else
echo "ERROR: only systemd or upstart supported" 1>&2
exit 1
fi
Combine and expose the configs
Finally, the configurations are all combined into a single multipart MIME document so that they can be output as a single file for use in user_data:
install_config_agent:
type: "OS::Heat::MultipartMime"
properties:
parts:
- config: { get_resource: configure }
- config: { get_resource: install_agents }
- config: { get_resource: start }
outputs:
config:
value: { get_resource: install_config_agent }
The environment file
The environment file that we will send as part of our stack-create call is quite simple:
# Installs software-config agents for the Ubuntu operating system with pip install
parameters:
image: Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
resource_registry:
"Heat::InstallConfigAgent": bootconfig_all.yaml
This sets the image parameter value to “Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)” and maps the resource namespace Heat::InstallConfigAgent to the template resource we created in the previous section. If you have used another file name or want to use the one included in this repository, be sure to change this mapping to point to the appropriate location.
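For instance, if you keep the bootstrap template in the templates directory of this repository, the mapping might look like this (the path is an assumption, adjust it to your layout):
resource_registry:
  "Heat::InstallConfigAgent": templates/bootconfig_all.yaml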
Deploy the bootstrapped instance
All that is left to do is to deploy the template:
heat stack-create -f templates/software_config_custom_image.yaml -e templates/bootconfig.all.env.yaml sw_config_base
Wait for the stack to reach CREATE_COMPLETE, and you have a basic VM configured for use with Heat software config. You can stop here and modify this template to actually deploy software configurations to your server using OS::Heat::SoftwareConfig and OS::Heat::SoftwareDeployment with “clean” images. However, you may prefer to continue directly to the next section, since it explains how you can use this bootstrapped instance to create your own image pre-configured for use with Heat software config. Future advanced tutorials, such as Using Ansible with Heat later in this guide, will also make use of this pre-bootstrapped image, which is another reason to continue to the next section.
Custom Image
Remove cloud-init artifacts
In order for cloud-init to run on machines booted from the new image, we need to remove some artifacts left over on the current VM from the initial bootstrapping. First, retrieve the root password from the stack:
heat output-show sw_config_base admin_password
Now, log into the server via ssh by issuing the following command:
ssh root@$(heat output-show sw_config_base server_ip)
Enter the password you retrieved previously.
Once logged into the server, run the following commands to remove the artifacts created by cloud-init when it bootstrapped this server:
rm /var/lib/cloud/instance
rm -rf /var/lib/cloud/instances/*
rm -rf /var/lib/cloud/data/*
rm /var/lib/cloud/sem/config_scripts_per_once.once
rm /var/log/cloud-init.log
rm /var/log/cloud-init-output.log
Snapshot your bootstrapped server
Now we can create an image of our server. First, log into the Rackspace Cloud control panel and, under Orchestration, find the sw_config_base stack. Viewing the details, you should see the server listed in the Infrastructure section. Select that server to view its details. Under the Actions button, select Create an Image and name it “Ubuntu 14.04 LTS (HEAT)”.
Once this process is complete, you are all done!
Using your new image
We will make use of this new image in our future tutorials on using Heat software config, but in summary, you can omit the Heat::InstallConfigAgent resource once you have this image. Instead, set the image property of any servers you want to configure this way to “Ubuntu 14.04 LTS (HEAT)” and the user_data_format property to “SOFTWARE_CONFIG”, and it should just work!
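As a rough sketch (the resource name and flavor here are only examples, not a prescribed configuration), such a server could be declared like this:
server:
  type: OS::Nova::Server
  properties:
    image: Ubuntu 14.04 LTS (HEAT)
    flavor: 2 GB Performance
    user_data_format: SOFTWARE_CONFIG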
Reference documentation
- OS::Heat::SoftwareConfig
- OS::Heat::SoftwareDeployment
- Template Composition
- Environment Guide
- os-collect-config
- os-refresh-config
- os-apply-config
Customizing Rackspace supported templates for RackConnect V3 customers
Note: This document assumes that the reader is familiar with the HOT specification. If that is not the case, see the Reference section at the end of this tutorial for the HOT specification link.
Brief summary
Rackspace-supported templates do not currently work for RackConnect v3 customers. This document outlines the steps needed to make a template work in a RackConnect v3 account.
Prerequisite
Some of the Rackspace supported templates use the ChefSolo resource. If you are customizing a template that contains the ChefSolo resource, make sure that rackconnected servers can access the internet. This is required because the ChefSolo resource downloads chef from the internet. Please contact the RackConnect Customer Service to update outbound NAT for your RackConnect account.
Customizing a template
1. Clone the template repository that you want to customize into your personal public GitHub account. This repository must be accessible to the public without any authentication.
2. A template repository may have multiple template files (a template can have multiple child templates). Find all the template files ending with .yaml (except rackspace.yaml).
3. In the template files, find all the places where the OS::Nova::Server resource is being used and provide servicenet and RackConnect networks to that server resource.
server:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
networks:
- network: <rackconnect_network_name>
- uuid: 11111111-1111-1111-1111-111111111111
4. Find all the references to the OS::Heat::ChefSolo resource and use the ServiceNet/private IP of the rackconnected server instead of the public IP (a template sketch follows this list).
5. Inside the template, if any rackconnected server is connecting/communicating with other rackconnected servers, then use the rackconnected IP instead of the servicenet or public IP.
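As a sketch of what using the ServiceNet/private IP means in template terms, the server's ServiceNet address can be read from the standard networks attribute of OS::Nova::Server; the output name and the resource-name placeholder below are only examples:
servicenet_ip:
  value: { get_attr: [ <server_resource_name>, networks, private, 0 ] }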
Example (customized template)
For example, consider customizing the MongoDB template.
1. The Rackspace-supported MongoDB template, which does not work for RackConnect v3 customers, is available at https://github.com/rackspace-orchestration-templates/mongodb-replset
2. The cloned and customized template repository is available at https://github.com/vikomall/mongodb-replset
3. The list of changes made to the original template can be seen at https://github.com/rackspace-orchestration-templates/mongodb-replset/compare/master...vikomall:master
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- RackConnect compatibility information
- Orchestration support for RackConnect v3
Rackspace Cloud Files CDN using Heat
Brief summary
A CDN-enabled container is a public container that is served by the Akamai content delivery network. The files in a CDN-enabled container are publicly accessible and do not require an authentication token for read access. However, uploading content into a CDN-enabled container is a secure operation and does require a valid authentication token. (Private containers are not CDN-enabled, and the files in a private container are not publicly accessible.)
You can download the full template for this example from this repository’s templates directory.
Prerequisite(s):
You should be familiar with general Heat template authoring and resource usage.
Example Template
This is a simple template that will illustrate using the Rackspace::Cloud::CloudFilesCDN resource to enable CDN functionality on a Cloud Files container.
As always, we start with a basic template outline:
heat_template_version: 2015-10-15
description: |
Test Cloud Files CDN
resources:
outputs:
Resources section
We only need two simple resources for this template:
resources:
container:
type: OS::Swift::Container
properties:
name: { get_param: "OS::stack_name" }
container_cdn:
type: Rackspace::Cloud::CloudFilesCDN
properties:
container: { get_resource: container }
ttl: 3600
The container resource simply creates a new Cloud Files container, while the container_cdn resource activates CDN functionality for that container. The container property defines the container to enable, and the ttl property tells the CDN service how long to cache objects.
Outputs section
We will use the outputs section to get relevant information from the CDN configuration:
outputs:
show:
value: { get_attr: [ container_cdn, show ] }
description: |
Show all attributes of the CDN configuration for the
container.
cdn_url:
value: { get_attr: [ container_cdn, cdn_uri ] }
description: |
The URI for downloading the object over HTTP. This URI can be combined
with any object name within the container to form the publicly
accessible URI for that object for distribution over a CDN system.
ssl_url:
value: { get_attr: [ container_cdn, ssl_uri ] }
description: The URI for downloading the object over HTTPS, using SSL.
streaming_url:
value: { get_attr: [ container_cdn, streaming_uri ] }
description: |
The URI for video streaming that uses HTTP Dynamic Streaming from Adobe.
ios_url:
value: { get_attr: [ container_cdn, ios_uri ] }
description: |
The URI for video streaming that uses HTTP Live Streaming from Apple.
Full example template
heat_template_version: 2015-10-15
description: |
Test Cloud Files CDN
resources:
container:
type: OS::Swift::Container
properties:
name: { get_param: "OS::stack_name" }
container_cdn:
type: Rackspace::Cloud::CloudFilesCDN
properties:
container: { get_resource: container }
ttl: 3600
outputs:
show:
value: { get_attr: [ container_cdn, show ] }
description: |
Show all attributes of the CDN configuration for the
container.
cdn_url:
value: { get_attr: [ container_cdn, cdn_uri ] }
description: |
The URI for downloading the object over HTTP. This URI can be combined
with any object name within the container to form the publicly
accessible URI for that object for distribution over a CDN system.
ssl_url:
value: { get_attr: [ container_cdn, ssl_uri ] }
description: The URI for downloading the object over HTTPS, using SSL.
streaming_url:
value: { get_attr: [ container_cdn, streaming_uri ] }
description: |
The URI for video streaming that uses HTTP Dynamic Streaming from Adobe.
ios_url:
value: { get_attr: [ container_cdn, ios_uri ] }
description: |
The URI for video streaming that uses HTTP Live Streaming from Apple.
Reference
SwiftSignal and SwiftSignalHandle
Brief summary
SwiftSignal can be used to coordinate resource creation with notifications or signals coming from sources external or internal to the stack. It is often used in conjunction with the SwiftSignalHandle resource.
SwiftSignalHandle is used to create a temporary URL, which applications and scripts can use to send signals. The SwiftSignal resource waits on this URL for a specified number of signals within a given time.
Example template
In the following example template, we will set up a single node Linux server that signals success/failure of user_data script execution at a given URL.
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
Single node linux server with swift signaling.
resources:
outputs:
Resources section
Add a SwiftSignalHandle resource
SwiftSignalHandle is a resource to create a temporary URL to receive notification/signals. Note that the temporary URL is created using Rackspace Cloud Files.
signal_handle:
type: "OS::Heat::SwiftSignalHandle"
Add a SwiftSignal resource
The SwiftSignal resource waits for a specified number of “SUCCESS” signals (the number is provided by the count property) on the given URL (the handle property). The stack will be marked as failed if the specified number of signals is not received within the given timeout, or if a non-“SUCCESS” signal, such as a “FAILURE”, is received. A data string and a reason string may be attached along with the success or failure notification. The data string is an attribute that can be pulled as template output.
wait_on_server:
type: OS::Heat::SwiftSignal
properties:
handle: {get_resource: signal_handle}
count: 1
timeout: 600
Here the SwiftSignal resource waits up to 600 seconds to receive 1 signal on the handle.
Add a Server resource
Add a Linux server with a bash script in the user_data property. At the end of the script execution, send a success/failure message to the temporary URL created by the above SwiftSignalHandle resource.
linux_server:
type: OS::Nova::Server
properties:
image: 4b14a92e-84c8-4770-9245-91ecb8501cc2
flavor: 1 GB Performance
user_data:
str_replace:
template: |
#!/bin/bash -x
# assume you are doing a long running operation here
sleep 300
# Assuming long running operation completed successfully, notify success signal
wc_notify --data-binary '{"status": "SUCCESS", "data": "Script execution succeeded"}'
# Alternatively if operation fails a FAILURE with reason and data may be sent,
# notify failure signal example below
# wc_notify --data-binary '{"status": "FAILURE", "reason":"Operation failed due to xyz error", "data":"Script execution failed"}'
params:
# Replace all occurances of "wc_notify" in the script with an
# appropriate curl PUT request using the "curl_cli" attribute
# of the SwiftSignalHandle resource
wc_notify: { get_attr: ['signal_handle', 'curl_cli'] }
Outputs section
Add the Swift signal URL to the outputs section.
#Get the signal URL which contains all information passed to the signal handle
signal_url:
value: { get_attr: ['signal_handle', 'curl_cli'] }
description: Swift signal URL
#Obtain data describing script results. If nothing is passed, this value will be NULL
signal_data:
value: { get_attr: ['wait_on_server', 'data'] }
description: Data describing script results
server_public_ip:
value: { get_attr: [ linux_server, accessIPv4 ] }
description: Linux server public IP
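Once the stack reaches CREATE_COMPLETE, the data attached to the success signal can be read back from these outputs with python-heatclient; the stack name below is just an example:
heat output-show swift-signal-stack signal_data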
Full example template
heat_template_version: 2014-10-16
description: |
Single node linux server with swift signaling.
resources:
signal_handle:
type: "OS::Heat::SwiftSignalHandle"
wait_on_server:
type: OS::Heat::SwiftSignal
properties:
handle: {get_resource: signal_handle}
count: 1
timeout: 600
linux_server:
type: OS::Nova::Server
properties:
image: 4b14a92e-84c8-4770-9245-91ecb8501cc2
flavor: 1 GB Performance
user_data:
str_replace:
template: |
#!/bin/bash -x
# assume you are doing a long running operation here
sleep 300
# Assuming long running operation completed successfully, notify success signal
wc_notify --data-binary '{"status": "SUCCESS", "data": "Script execution succeeded"}'
# Alternatively if operation fails a FAILURE with reason and data may be sent,
# notify failure signal example below
# wc_notify --data-binary '{"status": "FAILURE", "reason":"Operation failed due to xyz error", "data":"Script execution failed"}'
params:
wc_notify: { get_attr: ['signal_handle', 'curl_cli'] }
outputs:
#Get the signal URL which contains all information passed to the signal handle
signal_url:
value: { get_attr: ['signal_handle', 'curl_cli'] }
description: Swift signal URL
#Obtain data describing script results. If nothing is passed, this value will be NULL
signal_data:
value: { get_attr: ['wait_on_server', 'data'] }
description: Data describing script results
server_public_ip:
value: { get_attr: [ linux_server, accessIPv4 ] }
description: Linux server public IP
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- Cloud-init format documentation
- Swift TempURL
Rackspace Cloud Server
Note: This document assumes that the reader is familiar with the HOT specification. If that is not the case, see the Reference section listed at the end of this tutorial for the HOT specification link.
Brief summary
Rackspace Cloud Servers can be created, updated, and deleted using Cloud Orchestration.
A basic template to create a server is shown below:
heat_template_version: 2014-10-16
resources:
test_server:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
OS::Nova::Server properties
The complete list of properties that can be provided to a server resource follows below:
admin_pass: {Description: The administrator password for the server., Type: String}
admin_user: {Description: 'Name of the administrative user to use on the server.
The default cloud-init user set up for each image (e.g. "ubuntu" for Ubuntu
12.04+, "fedora" for Fedora 19+ and "cloud-user" for CentOS/RHEL 6.5).', Type: String}
availability_zone: {Description: Name of the availability zone for server placement.,
Type: String}
block_device_mapping: {Description: Block device mappings for this server., Type: CommaDelimitedList}
block_device_mapping_v2: {Description: Block device mappings v2 for this server.,
Type: CommaDelimitedList}
config_drive:
AllowedValues: ['True', 'true', 'False', 'false']
Description: If True, enable config drive on the server.
Type: Boolean
diskConfig:
AllowedValues: [AUTO, MANUAL]
Description: Control how the disk is partitioned when the server is created.
Type: String
flavor: {Description: The ID or name of the flavor to boot onto., Type: String}
flavor_update_policy:
AllowedValues: [RESIZE, REPLACE]
Default: RESIZE
Description: Policy on how to apply a flavor update; either by requesting a server
resize or by replacing the entire server.
Type: String
image: {Description: The ID or name of the image to boot with., Type: String}
image_update_policy:
AllowedValues: [REBUILD, REPLACE, REBUILD_PRESERVE_EPHEMERAL]
Default: REBUILD
Description: Policy on how to apply an image-id update; either by requesting a
server rebuild or by replacing the entire server
Type: String
key_name: {Description: Name of keypair to inject into the server., Type: String}
metadata: {Description: Arbitrary key/value metadata to store for this server. Both
keys and values must be 255 characters or less. Non-string values will be serialized
to JSON (and the serialized string must be 255 characters or less)., Type: Json}
name: {Description: Server name., Type: String}
networks: {Description: 'An ordered list of nics to be added to this server, with
information about connected networks, fixed ips, port etc.', Type: CommaDelimitedList}
personality:
Default: {}
Description: A map of files to create/overwrite on the server upon boot. Keys
are file names and values are the file contents.
Type: Json
reservation_id: {Description: A UUID for the set of servers being requested., Type: String}
scheduler_hints: {Description: Arbitrary key-value pairs specified by the client
to help boot a server., Type: Json}
security_groups:
Default: []
Description: List of security group names or IDs. Cannot be used if neutron ports
are associated with this server; assign security groups to the ports instead.
Type: CommaDelimitedList
software_config_transport:
AllowedValues: [POLL_SERVER_CFN, POLL_SERVER_HEAT, POLL_TEMP_URL]
Default: POLL_TEMP_URL
Description: How the server should receive the metadata required for software
configuration. POLL_SERVER_CFN will allow calls to the cfn API action DescribeStackResource
authenticated with the provided keypair. POLL_SERVER_HEAT will allow calls to
the Heat API resource-show using the provided keystone credentials. POLL_TEMP_URL
will create and populate a Swift TempURL with metadata for polling.
Type: String
user_data: {Default: '', Description: User data script to be executed by cloud-init.,
Type: String}
user_data_format:
AllowedValues: [HEAT_CFNTOOLS, RAW, SOFTWARE_CONFIG]
Default: HEAT_CFNTOOLS
Description: How the user_data should be formatted for the server. For HEAT_CFNTOOLS,
the user_data is bundled as part of the heat-cfntools cloud-init boot configuration
data. For RAW the user_data is passed to Nova unmodified. For SOFTWARE_CONFIG
user_data is bundled as part of the software config data, and metadata is derived
from any associated SoftwareDeployment resources.
Type: String
Known behaviors/issues
- A RackConnect customer must provide the RackConnect network ID in the networks property to create a server in a RackConnect region.
- A RackConnect Managed Operations customer must provide the ServiceNet ID in the networks property if the server is created in a RackConnect region (RackConnect compatibility information).
- If a shell script is provided in the user_data property, the user_data_format property must be set to RAW.
- To inject data into the file system of the cloud server instance, provide the file name and contents in the personality property (a sketch follows this list).
- Provide key_name to authenticate via key-based authentication instead of password-based authentication.
- Rackspace::Cloud::WinServer is very similar to OS::Nova::Server, but it does not work with RackConnected accounts (both RackConnect v2 and v3).
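For instance, a small file can be injected at boot time with the personality property; the file path and contents below are only an example:
test_server:
  type: "OS::Nova::Server"
  properties:
    name: test-server
    flavor: 2 GB General Purpose v1
    image: Debian 7 (Wheezy) (PVHVM)
    personality:
      /root/motd.txt: "Injected by Heat at boot time"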
Example template-1
In the following example template, we will create a single Linux server using the Orchestration service. For the sake of simplicity, we will not use template parameters in this example.
heat_template_version: 2014-10-16
description: |
Creating Rackspace cloud server using orchestration service.
resources:
test_server:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
outputs:
server_ip:
value:
get_attr: [test_server, accessIPv4]
Example template-2
In the following example template, we will create a single Linux server and provide user_data that is executed when the server boots.
heat_template_version: 2014-10-16
description: |
Creating Rackspace cloud server with user_data.
resources:
test_server:
type: "OS::Nova::Server"
properties:
name: test-server
admin_pass: password1
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
user_data_format: RAW
user_data: |
#!/bin/bash -x
echo "hello world" > /root/hello-world.txt
outputs:
server_ip:
value:
get_attr: [test_server, accessIPv4]
This template creates a server in the Rackspace cloud, and during boot the script provided in the user_data property is executed. Here the user_data script creates a hello-world.txt file with ‘hello world’ as its contents. You can log in to the cloud server using admin_pass and verify whether the ‘hello-world.txt’ file exists. Please note that if there is any error during execution of the script provided as user_data, it will be silently ignored and stack creation will continue. To handle error scenarios, take a look at the SwiftSignal resource documentation.
Example template-3
In the following example template, we will create a single Linux server, providing a private key for SSH access.
heat_template_version: 2014-10-16
description: |
Creating Rackspace cloud server with SSH access private key.
resources:
ssh_key:
type: OS::Nova::KeyPair
properties:
name: private_access_key
save_private_key: true
test_server:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
key_name: { get_resource: ssh_key }
outputs:
server_ip:
value:
get_attr: [test_server, accessIPv4]
private_key:
value:
get_attr: [ssh_key, private_key]
This template first creates a Nova server key pair. Instead of using a username and password, the private_key output can be used to access the server.
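For example, one way to do this (the key file name and stack name are arbitrary, and depending on your python-heatclient version you may need to strip quoting from the output) is:
heat output-show teststack private_key > test_server_key.pem
chmod 600 test_server_key.pem
ssh -i test_server_key.pem root@<server_ip>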
Example template-4
This template creates a single Linux server and installs the WordPress application on the server.
heat_template_version: 2014-10-16
description: |
Create a Rackspace cloud server and install wordpress application.
resources:
wordpress_server:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
user_data_format: RAW
user_data:
str_replace:
template: |
#!/bin/bash -v
yum -y install mysql-server httpd wordpress
sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
sed --in-place --e "s/localhost/%dbhost%/" --e "s/database_name_here/%dbname%/" --e "s/username_here/%dbuser%/" --e "s/password_here/%dbpass%/" /usr/share/wordpress/wp-config.php
/etc/init.d/httpd start
chkconfig httpd on
/etc/init.d/mysqld start
chkconfig mysqld on
cat << EOF | mysql
CREATE DATABASE %dbname%;
GRANT ALL PRIVILEGES ON %dbname%.* TO "%dbuser%"@"localhost"
IDENTIFIED BY "%dbpass%";
FLUSH PRIVILEGES;
EXIT
EOF
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables-save > /etc/sysconfig/iptables
params:
"%dbhost%": localhost
"%dbname%": wordpress
"%dbuser%": admin
"%dbpass%": test_pass
outputs:
server_public_ip:
value:
get_attr: [wordpress_server, accessIPv4]
description: The public ip address of the server
website_url:
value:
str_replace:
template: http://%ip%/wordpress
params:
"%ip%": { get_attr: [ wordpress_server, accessIPv4 ] }
description: URL for Wordpress wiki
Please note that to keep the template simple, all the values were hard coded in the above template.
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- Cloud-init format documentation
- Cloud servers getting started guide
- Cloud servers API developer guide
- Cloud servers FAQs
- Cloud servers How to articles and other resources
SecurityGroup and SecurityGroupAttachment
Brief summary
SecurityGroupAttachment is used to attach a security group to a port.
The SecurityGroupAttachment resource is used in the following cases:
- The user wants to attach a security group to an operator-created port.
- The user created a port outside of a template and wants to attach a security group to the port as part of a template.
Limitations / Known Issues
- In Rackspace cloud you cannot apply security groups to a port at boot time.
- Security groups can be applied to Rackspace Cloud Servers on Public and ServiceNet Neutron ports. They are not supported for Isolated Networks.
- Applying Security Groups to outbound traffic, or egress direction, is supported via the API only (via curl or neutron client).
- Limited to no more than 5 security groups per Neutron port. When a Neutron port has multiple security groups applied, the rules from each security group are effectively aggregated to dictate the rules for access on that port.
- RackConnect v3 customers are able to use Security Groups if they plan on using Cloud Load Balancers as part of their RackConnected environment. To enable Security Groups on RackConnect v3, please contact Rackspace Support.
Example template
In the following example template, we will create a Linux server and attach a security group to the public network port of the server.
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
A linux server with security group attached to public port.
resources:
outputs:
Resources section
Add a Server resource
Add a Linux server to the template.
server:
type: OS::Nova::Server
properties:
image: 4b14a92e-84c8-4770-9245-91ecb8501cc2
flavor: 1 GB Performance
This creates a server with the given image and flavor, and by default also attaches the public and ServiceNet networks to the server instance.
Add SecurityGroup resource
A security group is a named container for security group rules, which provide Rackspace Public Cloud users the ability to specify the types of traffic that are allowed to pass through, to, and from ports (Public/ServiceNet) on a Cloud server instance.
security_group:
type: OS::Neutron::SecurityGroup
properties:
name: the_sg
description: Ping and SSH
rules:
- protocol: icmp
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 5000
port_range_max: 5000
Here we added rules to the security group allowing ICMP (ping), SSH (port 22), and TCP port 5000 traffic.
Add SecurityGroupAttachment resource
Now attach the security group to the public network port of the server instance.
security_group_attachment:
type: Rackspace::Neutron::SecurityGroupAttachment
properties:
port: { get_attr: [ server, addresses, public, 0, port ] }
security_group: {get_resource: security_group}
Here we attached the security group to the public port of the server instance.
Full Example Template
heat_template_version: 2014-10-16
description: |
A linux server with security group attached to public port.
resources:
server:
type: OS::Nova::Server
properties:
image: 4b14a92e-84c8-4770-9245-91ecb8501cc2
flavor: 1 GB Performance
security_group:
type: OS::Neutron::SecurityGroup
properties:
name: the_sg
description: Ping and SSH
rules:
- protocol: icmp
- protocol: tcp
port_range_min: 22
port_range_max: 22
- protocol: tcp
port_range_min: 5000
port_range_max: 5000
security_group_attachment:
type: Rackspace::Neutron::SecurityGroupAttachment
properties:
port: { get_attr: [ server, addresses, public, 0, port ] }
security_group: {get_resource: security_group}
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- Cloud networks getting started documentation
- Cloud networks API documentation
Rackspace Cloud Load Balancer
Note: This document assumes that the reader is familiar with the HOT specification. If that is not the case, see the Reference section listed at the end of this tutorial for the HOT specification link.
Brief summary
A load balancer is used to distribute workloads between multiple back-end systems or services based on the criteria defined as part of its configuration.
Example load balancer template
A simple load balancer template is listed below.
heat_template_version: 2014-10-16
description: |
Creating Rackspace cloud loadbalancer using orchestration service.
resources:
api_loadbalancer:
type: Rackspace::Cloud::LoadBalancer
properties:
name: test_load_balancer
metadata:
rax-heat: { get_param: "OS::stack_id" }
protocol: HTTPS
port: 80
algorithm: ROUND_ROBIN
nodes: []
virtualIps:
- type: SERVICENET
ipVersion: IPV4
Load balancer properties
The complete list of load balancer properties that can be provided to the resource follows.
accessList: {Type: CommaDelimitedList}
algorithm:
AllowedValues: [LEAST_CONNECTIONS, RANDOM, ROUND_ROBIN, WEIGHTED_LEAST_CONNECTIONS,
WEIGHTED_ROUND_ROBIN]
Type: String
connectionLogging:
AllowedValues: ['True', 'true', 'False', 'false']
Type: Boolean
connectionThrottle: {Type: Json}
contentCaching:
AllowedValues: [ENABLED, DISABLED]
Type: String
errorPage: {Type: String}
halfClosed:
AllowedValues: ['True', 'true', 'False', 'false']
Type: Boolean
healthMonitor: {Type: Json}
httpsRedirect:
AllowedValues: ['True', 'true', 'False', 'false']
Default: false
Description: Enables or disables HTTP to HTTPS redirection for the load balancer.
When enabled, any HTTP request returns status code 301 (Moved Permanently),
and the requester is redirected to the requested URL via the HTTPS protocol
on port 443. Only available for HTTPS protocol (port=443), or HTTP protocol
with a properly configured SSL termination (secureTrafficOnly=true, securePort=443).
Type: Boolean
metadata: {Type: Json}
name: {Type: String}
nodes:
Type: CommaDelimitedList
Required: True
port:
Type: Number
Required: True
protocol:
AllowedValues: [DNS_TCP, DNS_UDP, FTP, HTTP, HTTPS, IMAPS, IMAPv4, LDAP, LDAPS,
MYSQL, POP3, POP3S, SMTP, TCP, TCP_CLIENT_FIRST, UDP, UDP_STREAM, SFTP]
Type: String
Required: True
sessionPersistence:
AllowedValues: [HTTP_COOKIE, SOURCE_IP]
Type: String
sslTermination: {Type: Json}
timeout: {MaxValue: 120, MinValue: 1, Type: Number}
virtualIps:
MinLength: 1
Type: CommaDelimitedList
Required: True
Example template with load balancer
In the following example template, we will create a multi node WordPress application with two Linux servers, one Trove (DBaaS) instance, and one load balancer.
First add a database instance resource (OS::Trove::Instance) to the template.
heat_template_version: 2014-10-16
description: |
Creating a Rackspace cloud database instance using the orchestration service.
resources:
db:
type: OS::Trove::Instance
properties:
name: wordpress
flavor: 1GB Instance
size: 30
users:
- name: admin
password: admin
databases:
- wordpress
databases:
- name: wordpress
This template creates a database instance named wordpress, with admin as both the username and password.
Now add two server resources and install WordPress application.
heat_template_version: 2014-10-16
description: |
Creating wordpress web nodes and a database instance.
resources:
web_nodes:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
user_data:
str_replace:
template: |
#!/bin/bash -v
yum -y install mysql-server httpd wordpress
sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
sed --in-place --e "s/localhost/%dbhost%/" --e "s/database_name_here/%dbname%/" --e "s/username_here/%dbuser%/" --e "s/password_here/%dbpass%/" /usr/share/wordpress/wp-config.php
/etc/init.d/httpd start
chkconfig httpd on
/etc/init.d/mysqld start
chkconfig mysqld on
cat << EOF | mysql
CREATE DATABASE %dbname%;
GRANT ALL PRIVILEGES ON %dbname%.* TO "%dbuser%"@"localhost"
IDENTIFIED BY "%dbpass%";
FLUSH PRIVILEGES;
EXIT
EOF
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables-save > /etc/sysconfig/iptables
params:
"%dbhost%": { get_attr: [ db, hostname ] }
"%dbname%": wordpress
"%dbuser%": admin
"%dbpass%": admin
db:
type: OS::Trove::Instance
properties:
name: wordpress
flavor: 1GB Instance
size: 30
users:
- name: admin
password: admin
databases:
- wordpress
databases:
- name: wordpress
Here a ResourceGroup of type ‘OS::Nova::Server’ is added to the template. The user_data property contains a script to install the WordPress application. Please note that the database instance hostname is passed to the script.
Finally, add the load balancer resource and provide the server addresses to the load balancer. Given below is the complete template that can be used to create a load balanced multi node WordPress application.
Full Template
heat_template_version: 2014-10-16
description: |
Create a loadbalanced two node wordpress application.
resources:
lb:
type: "Rackspace::Cloud::LoadBalancer"
properties:
name: wordpress_loadbalancer
nodes:
- addresses: { get_attr: [ web_nodes, privateIPv4 ] }
port: 80
condition: ENABLED
protocol: HTTP
halfClosed: False
algorithm: LEAST_CONNECTIONS
connectionThrottle:
maxConnections: 50
minConnections: 50
maxConnectionRate: 50
rateInterval: 50
port: 80
timeout: 120
sessionPersistence: HTTP_COOKIE
virtualIps:
- type: PUBLIC
ipVersion: IPV4
healthMonitor:
type: HTTP
delay: 10
timeout: 10
attemptsBeforeDeactivation: 3
path: "/"
statusRegex: "."
bodyRegex: "."
contentCaching: ENABLED
web_nodes:
type: OS::Heat::ResourceGroup
properties:
count: 2
resource_def:
type: "OS::Nova::Server"
properties:
name: test-server
flavor: 2 GB General Purpose v1
image: Debian 7 (Wheezy) (PVHVM)
user_data:
str_replace:
template: |
#!/bin/bash -v
yum -y install mysql-server httpd wordpress
sed -i "/Deny from All/d" /etc/httpd/conf.d/wordpress.conf
sed -i "s/Require local/Require all granted/" /etc/httpd/conf.d/wordpress.conf
sed --in-place --e "s/localhost/%dbhost%/" --e "s/database_name_here/%dbname%/" --e "s/username_here/%dbuser%/" --e "s/password_here/%dbpass%/" /usr/share/wordpress/wp-config.php
/etc/init.d/httpd start
chkconfig httpd on
/etc/init.d/mysqld start
chkconfig mysqld on
cat << EOF | mysql
CREATE DATABASE %dbname%;
GRANT ALL PRIVILEGES ON %dbname%.* TO "%dbuser%"@"localhost"
IDENTIFIED BY "%dbpass%";
FLUSH PRIVILEGES;
EXIT
EOF
iptables -I INPUT -p tcp --dport 80 -j ACCEPT
iptables-save > /etc/sysconfig/iptables
params:
"%dbhost%": { get_attr: [ db, hostname ] }
"%dbname%": wordpress
"%dbuser%": admin
"%dbpass%": admin
db:
type: OS::Trove::Instance
properties:
name: wordpress
flavor: 1GB Instance
size: 30
users:
- name: admin
password: admin
databases:
- wordpress
databases:
- name: wordpress
outputs:
wordpress_url:
value:
str_replace:
template: "http://%ip%/wordpress"
params:
"%ip%": { get_attr: [ lb, PublicIp ] }
description: Public URL for the wordpress blog
Please note that to keep the template simple, all the values were hard coded in the above template.
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- Cloud Load Balancer Getting Started guide
- Cloud Load Balancer API Developer Guide
Cloud Monitoring resources for Heat
Brief summary
The Rackspace Cloud Monitoring resources allow you to configure monitoring on resources that you create with Heat.
In this tutorial, you create a web server that is monitored with a web site check and a CPU check.
Pre-reading
The following introductory material should give you enough background to proceed with this tutorial.
- Rackspace Cloud Monitoring Getting Started Guide (especially “How Rackspace Cloud Monitoring works”)
- Getting Started With Rackspace Monitoring CLI
- Rackspace Cloud Monitoring Checks and Alarms
Example template
Start by adding the top-level template sections:
heat_template_version: 2013-05-23
description: |
Test template using Cloud Monitoring
resources:
Resources section
Add an OS::Nova::Server resource and configure it to install the Cloud Monitoring agent:
server:
type: OS::Nova::Server
properties:
image: Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
flavor: 2 GB Performance
name: { get_param: "OS::stack_name" }
user_data_format: RAW
config_drive: true
user_data:
str_replace:
template: |
#!/bin/bash
echo "deb http://stable.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64 cloudmonitoring main" > /etc/apt/sources.list.d/rackspace-monitoring-agent.list
curl https://monitoring.api.rackspacecloud.com/pki/agent/linux.asc | sudo apt-key add -
apt-get -y update
apt-get -y install rackspace-monitoring-agent apache2
echo "monitoring_token " > /etc/rackspace-monitoring-agent.cfg
service rackspace-monitoring-agent restart
params:
"": { get_resource: token }
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
It is possible to monitor one or more servers by creating a Rackspace::CloudMonitoring::Entity resource. Entities are automatically created for cloud servers, so we will refer to the server resource above as our Cloud Monitoring entity.
Add a Rackspace::CloudMonitoring::AgentToken resource that will create a token used by the monitoring agent to authenticate with the monitoring service:
token:
type: Rackspace::CloudMonitoring::AgentToken
properties:
label: { get_param: "OS::stack_name" }
Add a Rackspace::CloudMonitoring::Check resource and configure it to check that the web service on the server entity is responsive:
webcheck:
type: Rackspace::CloudMonitoring::Check
properties:
entity: { get_resource: server }
type: remote.http
details:
url:
str_replace:
template: http://server_ip/
params:
server_ip: { get_attr: [ server, accessIPv4 ] }
label: webcheck
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
period: 120
timeout: 10
monitoring_zones_poll:
- Northern Virginia (IAD)
- Chicago (ORD)
target_hostname: { get_attr: [ server, accessIPv4 ] }
target_receiver: IPv4
Add another Rackspace::CloudMonitoring::Check resource and configure it to check the server’s CPU resources via the monitoring agent:
cpucheck:
type: Rackspace::CloudMonitoring::Check
properties:
entity: { get_resource: server }
type: agent.cpu
label: cpu_check
details: {}
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
period: 30
timeout: 10
The actual alarm criteria for the CPU check will be defined in the Rackspace::CloudMonitoring::Alarm resource below.
Add a Rackspace::CloudMonitoring::Notification resource that will send an email to [email protected] whenever it is triggered:
email_notification_1:
type: Rackspace::CloudMonitoring::Notification
properties:
label: email_ops_team
type: email
details:
address: "[email protected]"
Add a similar Rackspace::CloudMonitoring::Notification resource that will send an email to [email protected] whenever it is triggered:
email_notification_2:
type: Rackspace::CloudMonitoring::Notification
properties:
label: email_ops_team_2
type: email
details:
address: "[email protected]"
Add a Rackspace::CloudMonitoring::NotificationPlan resource to configure Cloud Monitoring to trigger the email_notification_1 notification whenever an alarm enters the WARNING or CRITICAL state and email_notification_2 whenever an alarm enters the OK state:
notify_ops_team:
type: Rackspace::CloudMonitoring::NotificationPlan
properties:
label: { get_param: "OS::stack_name" }
warning_state:
- { get_resource: email_notification_1 }
critical_state:
- { get_resource: email_notification_1 }
ok_state:
- { get_resource: email_notification_2 }
Finally, add a Rackspace::CloudMonitoring::Alarm resource that will configure the alarm to enter the WARNING state when CPU usage is over 85% for 5 consecutive checks, the CRITICAL state when CPU usage is over 95% for 5 consecutive checks, and the OK state otherwise:
alert_ops:
type: Rackspace::CloudMonitoring::Alarm
properties:
label: test_cpu_alarm
check: { get_resource: cpucheck }
plan: { get_resource: notify_ops_team }
criteria: |
:set consecutiveCount=5
if (metric['usage_average'] > 95) {
return new AlarmStatus(CRITICAL, 'CPU usage is #{usage_average}%');
}
if (metric['usage_average'] > 85) {
return new AlarmStatus(WARNING, 'CPU usage is #{usage_average}%');
}
return new AlarmStatus(OK);
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
Full template
The following template is a combination of all of the snippets above. It will create a web server that is monitored with a web site check and a CPU check.
heat_template_version: 2013-05-23
description: |
Test template using Cloud Monitoring
resources:
server:
type: OS::Nova::Server
properties:
image: Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
flavor: 2 GB Performance
name: { get_param: "OS::stack_name" }
user_data_format: RAW
config_drive: true
user_data:
str_replace:
template: |
#!/bin/bash
echo "deb http://stable.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64 cloudmonitoring main" > /etc/apt/sources.list.d/rackspace-monitoring-agent.list
curl https://monitoring.api.rackspacecloud.com/pki/agent/linux.asc | sudo apt-key add -
apt-get -y update
apt-get -y install rackspace-monitoring-agent apache2
echo "monitoring_token " > /etc/rackspace-monitoring-agent.cfg
service rackspace-monitoring-agent restart
params:
"": { get_resource: token }
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
token:
type: Rackspace::CloudMonitoring::AgentToken
properties:
label: { get_param: "OS::stack_name" }
webcheck:
type: Rackspace::CloudMonitoring::Check
properties:
entity: { get_resource: server }
type: remote.http
details:
url:
str_replace:
template: http://server_ip/
params:
server_ip: { get_attr: [ server, accessIPv4 ] }
label: webcheck
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
period: 120
timeout: 10
monitoring_zones_poll:
- Northern Virginia (IAD)
- Chicago (ORD)
target_hostname: { get_attr: [ server, accessIPv4 ] }
target_receiver: IPv4
cpucheck:
type: Rackspace::CloudMonitoring::Check
properties:
entity: { get_resource: server }
type: agent.cpu
label: cpu_check
details: {}
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
period: 30
timeout: 10
email_notification_1:
type: Rackspace::CloudMonitoring::Notification
properties:
label: email_ops_team
type: email
details:
address: "[email protected]"
email_notification_2:
type: Rackspace::CloudMonitoring::Notification
properties:
label: email_ops_team_2
type: email
details:
address: "[email protected]"
notify_ops_team:
type: Rackspace::CloudMonitoring::NotificationPlan
properties:
label: { get_param: "OS::stack_name" }
warning_state:
- { get_resource: email_notification_1 }
critical_state:
- { get_resource: email_notification_1 }
ok_state:
- { get_resource: email_notification_2 }
alert_ops:
type: Rackspace::CloudMonitoring::Alarm
properties:
label: test_cpu_alarm
check: { get_resource: cpucheck }
plan: { get_resource: notify_ops_team }
criteria: |
:set consecutiveCount=5
if (metric['usage_average'] > 95) {
return new AlarmStatus(CRITICAL, 'CPU usage is #{usage_average}%');
}
if (metric['usage_average'] > 85) {
return new AlarmStatus(WARNING, 'CPU usage is #{usage_average}%');
}
return new AlarmStatus(OK);
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
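To try the full template, save it to a file and create a stack with the standard heatclient command; the file name and stack name below are examples only:
heat stack-create -f cloud_monitoring_test.yaml my-monitoring-stack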
Reference documentation
The following is an example of a Rackspace::CloudMonitoring::Entity resource definition:
entity:
type: Rackspace::CloudMonitoring::Entity
properties:
label: { get_param: "OS::stack_name" }
metadata:
rax-heat: { get_param: "OS::stack_id" }
stack-name: { get_param: "OS::stack_name" }
ip_addresses:
web_server: { get_attr: [ server, accessIPv4 ] }
Rackspace Cloud Backup
Brief summary
The Rackspace cloud backup configuration resource enables you to select and back up specific files and folders from a cloud server using Cloud Orchestration.
Prerequisite(s):
The cloud backup agent must be installed on the server from which you want to back up files and folders.
Installing cloud backup agent on the server
Option-1
If the server from which you want to create a backup was created as part of a Heat stack, then pass ‘{build_config: backup_agent_only}’ as metadata to OS::Nova::Server or Rackspace::Cloud::WinServer. For example:
wordpress_server:
type: "Rackspace::Cloud::WinServer"
properties:
name: wordpress-server
flavor: 4GB Standard Instance
image: Windows Server 2012
metadata: {build_config: backup_agent_only}
Option-2
If the server was not created as part of a Heat stack, then follow the links given below to install backup agent manually.
- Install cloud backup agent on Linux server
- Install cloud backup agent on Windows server
- Install cloud backup agent on Windows server (silent installation)
Example template
In the following example template, we will set up a single node WordPress web application (on a Windows server) with a cloud backup resource. For the sake of simplicity, we will not use template parameters in this example.
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
Wordpress application on a Windows server with cloud backup enabled.
resources:
outputs:
Resources section
Add a Rackspace::Cloud::WinServer resource that will create a Windows server and install the WordPress web application.
wordpress_server:
type: "Rackspace::Cloud::WinServer"
properties:
name: wordpress-server
flavor: 4GB Standard Instance
image: Windows Server 2012
metadata: {build_config: backup_agent_only}
user_data:
str_replace:
template: |
$source = "http://download.microsoft.com/download/7/0/4/704CEB4C-9F42-4962-A2B0-5C84B0682C7A/WebPlatformInstaller_amd64_en-US.msi"
$destination = "webpi.msi"
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($source, $destination)
Start-Process msiexec -ArgumentList "/i webpi.msi /qn" -NoNewWindow -Wait
echo DBPassword[@]%dbpassword% DBAdminPassword[@]%dbadminpassword% > test.app
$tmpprofile = $env:userprofile
$env:userprofile = "c:\users\administrator"
$wpicmd = "C:\Program Files\Microsoft\Web Platform Installer\WebPICMD.exe"
Start-Process $wpicmd -ArgumentList "/Install /Application:[email protected] /MySQLPassword:%dbadminpassword% /AcceptEULA /Log:.\wpi.log" -NoNewWindow -Wait
$env:userprofile = $tmpprofile
params:
"%dbpassword%": testpassword_123
"%dbadminpassword%": testpassword_123
The above resource creates a Windows server and installs the WordPress application. Please note that ‘{build_config: backup_agent_only}’ was passed as metadata so that the cloud backup agent is installed.
Cloud backup config resource
Add a Rackspace::Cloud::BackupConfig resource to back up the WordPress application installed in the c:\inetpub\wwwroot\wordpress folder.
rax_backup_config:
properties:
BackupConfigurationName: wordpress-daily-backup
DayOfWeekId: null
Frequency: Daily
StartTimeHour: 11
StartTimeMinute: 30
StartTimeAmPm: PM
HourInterval: 1
IsActive: true
Enabled: true
NotifyFailure: true
NotifyRecipients: [email protected]
NotifySuccess: false
TimeZoneId: Eastern Standard Time
VersionRetention: 60
host_ip_address: { get_attr: [wordpress_server, accessIPv4] }
Inclusions:
- {"FilePath": "c:\\inetpub\\wwwroot\\wordpress", "FileItemType": "Folder" }
type: Rackspace::Cloud::BackupConfig
In the above backup resource, the cloud backup service is configured to create a backup of the ‘c:\inetpub\wwwroot\wordpress’ folder daily at 11:30 PM and to retain each backup for 60 days. It is also configured to send a notification to the given email address if any error occurs during backup creation. Note that host_ip_address is the IP address of the cloud server from which files and folders will be backed up; here, the IP address of the Windows server created in the earlier resource example is passed. If the server was created outside of the stack, make sure that a backup agent is installed on that server and pass its IP address to host_ip_address.
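If the server lives outside of the stack, one option is to accept its address as a template parameter instead of using get_attr. This is a minimal sketch; the parameter name backup_server_ip is hypothetical:
parameters:
  backup_server_ip:
    type: string
    description: IP address of an existing server that already has the backup agent installed
# Then, inside the rax_backup_config resource:
#   host_ip_address: { get_param: backup_server_ip }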
Outputs section
Add the WordPress website URL to the outputs section.
website_url:
value:
str_replace:
template: http://%ip%/wordpress
params:
"%ip%": { get_attr: [ wordpress_server, accessIPv4 ] }
description: URL for Wordpress site
Full Example Template
heat_template_version: 2014-10-16
description: |
HEAT template for installing Wordpress on Windows Server
resources:
rax_backup_config:
properties:
BackupConfigurationName: wordpressbackup
DayOfWeekId: null
Frequency: Daily
StartTimeHour: 7
StartTimeMinute: 30
StartTimeAmPm: PM
HourInterval: null
IsActive: true
Enabled: true
NotifyFailure: true
NotifyRecipients: [email protected]
NotifySuccess: true
TimeZoneId: Eastern Standard Time
VersionRetention: 60
host_ip_address: { get_attr: [rs_windows_server, accessIPv4] }
Inclusions:
- {"FilePath": "c:\\inetpub\\wwwroot\\wordpress", "FileItemType": "Folder" }
type: Rackspace::Cloud::BackupConfig
rs_windows_server:
type: "Rackspace::Cloud::WinServer"
properties:
name: wordpress-server
flavor: 4GB Standard Instance
image: Windows Server 2012
metadata: {build_config: backup_agent_only}
user_data:
str_replace:
template: |
$source = "http://download.microsoft.com/download/7/0/4/704CEB4C-9F42-4962-A2B0-5C84B0682C7A/WebPlatformInstaller_amd64_en-US.msi"
$destination = "webpi.msi"
$wc = New-Object System.Net.WebClient
$wc.DownloadFile($source, $destination)
Start-Process msiexec -ArgumentList "/i webpi.msi /qn" -NoNewWindow -Wait
echo DBPassword[@]%dbpassword% DBAdminPassword[@]%dbadminpassword% > test.app
$tmpprofile = $env:userprofile
$env:userprofile = "c:\users\administrator"
$wpicmd = "C:\Program Files\Microsoft\Web Platform Installer\WebPICMD.exe"
Start-Process $wpicmd -ArgumentList "/Install /Application:[email protected] /MySQLPassword:%dbadminpassword% /AcceptEULA /Log:.\wpi.log" -NoNewWindow -Wait
$env:userprofile = $tmpprofile
params:
"%dbpassword%": testpassword_123
"%dbadminpassword%": testpassword_123
outputs:
website_url:
value:
str_replace:
template: http://%ip%/wordpress
params:
"%ip%": { get_attr: [ rs_windows_server, accessIPv4 ] }
description: URL for Wordpress site
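As with the other examples, you can deploy this template and read back the output with heatclient; the file name and stack name below are examples only:
heat stack-create -f wordpress_backup.yaml wordpress-backup-stack
heat output-show wordpress-backup-stack website_url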
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- Cloud-init format documentation
- Cloud backup getting started guide
- Cloud backup API developer guide
- Install cloud backup agent on Linux server
- Install cloud backup agent on Windows server
- Install cloud backup agent on Windows server (silent installation)
Rackspace Cloud Databases
Brief summary
Rackspace Cloud Databases can be created, updated, and deleted using the OS::Trove::Instance resource. Cloud Databases instances can also be created as replicas of other Cloud Databases instances.
Example template
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
Create a Rackspace Cloud Database instance and make a replica.
resources:
outputs:
Resources section
Add an OS::Trove::Instance resource with a list of databases and users:
db:
type: OS::Trove::Instance
properties:
name: db
flavor: 1GB Instance
size: 10
databases:
- name: my_data
users:
- name: john
password: secrete
databases: [ my_data ]
This resource will create your Cloud Databases instance.
Add another OS::Trove::Instance, but this time leave out the databases and users and specify a replica_of property:
db_replica:
type: OS::Trove::Instance
properties:
name: db_replica
flavor: 1GB Instance
size: 10
replica_of: { get_resource: db }
This will create a replica of your first Cloud Databases instance. Alternatively, you can add a template parameter for the UUID of the database instance that you want a replica of and pass in the UUID upon stack creation.
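A minimal sketch of that alternative is shown below; the parameter name replica_source_id is hypothetical:
parameters:
  replica_source_id:
    type: string
    description: UUID of an existing Cloud Databases instance to replicate
resources:
  db_replica:
    type: OS::Trove::Instance
    properties:
      name: db_replica
      flavor: 1GB Instance
      size: 10
      replica_of: { get_param: replica_source_id }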
Outputs section
Add the following to your outputs section:
"DB ID":
value: { get_resource: db }
description: Database instance ID.
"DB hostname":
value: { get_attr: [db, hostname] }
description: Database instance hostname.
"DB href":
value: { get_attr: [db, href] }
description: Api endpoint of the database instance.
"DB replica ID":
value: { get_resource: db_replica }
description: Database replica ID.
"DB replica hostname":
value: { get_attr: [db_replica, hostname] }
description: Database replica hostname.
"DB replica href":
value: { get_attr: [db_replica, href] }
description: Api endpoint of the database replica.
Full template
heat_template_version: 2014-10-16
description: |
Test template using Trove with replication
resources:
db:
type: OS::Trove::Instance
properties:
name: db
flavor: 1GB Instance
size: 10
databases:
- name: my_data
users:
- name: john
password: secrete
databases: [ my_data ]
db_replica:
type: OS::Trove::Instance
properties:
name: db_replica
flavor: 1GB Instance
size: 10
replica_of: { get_resource: db }
outputs:
"DB ID":
value: { get_resource: db }
description: Database instance ID.
"DB hostname":
value: { get_attr: [db, hostname] }
description: Database instance hostname.
"DB href":
value: { get_attr: [db, href] }
description: Api endpoint of the database instance.
"DB replica ID":
value: { get_resource: db_replica }
description: Database replica ID.
"DB replica hostname":
value: { get_attr: [db_replica, hostname] }
description: Database replica hostname.
"DB replica href":
value: { get_attr: [db_replica, href] }
description: Api endpoint of the database replica.
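To create the stack and read back individual outputs, you could run commands such as the following (the file name and stack name are examples):
heat stack-create -f trove_replica.yaml my-db-stack
heat output-show my-db-stack "DB hostname"
heat output-show my-db-stack "DB replica hostname"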
Rackspace Cloud Databases and Scheduled Backups
Brief summary
Cloud Databases allows you to create a schedule for running a weekly backup of your database instance. An incremental backup runs at the end of every day, and a full backup runs on the day defined by the backup schedule. A backup can always be restored to a new database instance.
Cloud Orchestration allows you to create, update, and delete these backup schedules by using the Rackspace::CloudDatabase::ScheduledBackup resource.
Example template
Start by adding the top-level template sections:
heat_template_version: 2015-10-15
description: |
Simple template to illustrate creating a scheduled backup
for a Cloud Database instance
resources:
outputs:
Resources section
We first add an OS::Heat::RandomString resource to generate a password for the database user we’ll create later:
# generate a password for our db user
db_pass:
type: OS::Heat::RandomString
Next, we add an OS::Trove::Instance resource with a test database and user. Note we’ve set the user’s password to the value of the OS::Heat::RandomString resource we defined earlier:
service_db:
type: OS::Trove::Instance
properties:
name: trove_test_db
datastore_type: mariadb
datastore_version: 10
flavor: 1GB Instance
size: 10
databases:
- name: test_data
users:
- name: dbuser
password: { get_attr: [ db_pass, value ] } # use generated password
databases: [ test_data ]
Lastly, we add the Rackspace::CloudDatabase::ScheduledBackup resource and configure it to back up our instance every Monday at 5:45pm and to retain the last 15 full backups:
backup:
type: Rackspace::CloudDatabase::ScheduledBackup
properties:
source:
id: { get_resource: service_db }
type: instance
day_of_week: 1 # Monday (0-6 Sunday to Saturday)
hour: 17 # 5pm (24hr clock with 0 being midnight)
minute: 45 # 5:45pm
full_backup_retention: 15
Outputs section
As a convenience, we’ll output the user’s generated password so we can log in to the database if needed:
"Database Password":
value: { get_attr: [ db_pass, value ] }
description: Database password for "dbuser"
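Assuming you have saved the full template locally as cloud_db_backups.yaml (it is linked in the reference list below), you could create the stack and retrieve the generated password as follows; the stack name is only an example:
heat stack-create -f cloud_db_backups.yaml db-backup-stack
heat output-show db-backup-stack "Database Password"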
Reference documentation
- You can always view the full template for this guide at https://github.com/rackerlabs/rs-heat-docs/tree/master/templates/cloud_db_backups.yaml
- OS::Trove::Instance
- Rackspace::CloudDatabase::ScheduledBackup
- Rackspace Cloud Databases Developer Guide
Event and schedule-based auto scaling with Heat
Brief summary
Rackspace Auto Scale supports both schedule-based and event-based auto scaling. Schedule-based auto scaling can be used to scale your application up or down at a specific time of the day, whereas event-based auto scaling can be used to automatically scale your application up or down according to load. In this tutorial, we will use Rackspace Orchestration to automate the configuration of both event-based and schedule-based auto scaling.
Pre-reading
If you are just getting started with Rackspace Auto Scale, Cloud Monitoring, or Cloud Orchestration, the following introductory material should give you enough background to proceed with this tutorial.
Auto Scale
- Easily Scale Your Cloud With Rackspace Auto Scale gives a brief overview of the different types of scaling policies for Auto Scale.
- Auto Scale concepts
Cloud Monitoring
- How Rackspace Cloud Monitoring works
- How the monitoring agent works
- Developer Guide: Available check types and fields
- Developer Guide: Server-Side agent configuration YAML file examples
Schedule-based auto scaling
Schedule-based auto scaling can be useful if there is a particular time when your application experiences a higher load. By scheduling a scaling event, you will be able to proactively scale your application.
In this example, we will create a scaling group that scales up by one node each Monday at 1:00 AM and down by one node each Saturday at 1:00 AM.
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
Rackspace Cloud Monitoring and schedule-based scaling using Rackspace Cloud Autoscale
resources:
outputs:
Resources section
Add an OS::Nova::KeyPair resource that will generate an SSH key pair, which you can use to log in to your web servers if you need to:
access_key:
type: OS::Nova::KeyPair
properties:
name: { get_param: "OS::stack_name" }
save_private_key: true
Add a Rackspace::Cloud::LoadBalancer resource that will balance the load between web servers.
scaling_lb:
type: Rackspace::Cloud::LoadBalancer
properties:
name: { get_param: "OS::stack_name" }
protocol: HTTP
port: 80
algorithm: ROUND_ROBIN
nodes: []
virtualIps:
- type: PUBLIC
ipVersion: IPV4
Add a Rackspace::AutoScale::Group resource:
scaled_servers:
type: Rackspace::AutoScale::Group
properties:
groupConfiguration:
name: { get_param: "OS::stack_name" }
maxEntities: 5
minEntities: 1
cooldown: 120
launchConfiguration:
type: launch_server
args:
loadBalancers:
- loadBalancerId: { get_resource: scaling_lb }
port: 80
server:
name: { get_param: "OS::stack_name" }
flavorRef: performance1-1
imageRef: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
key_name: { get_resource: access_key }
networks:
- uuid: 11111111-1111-1111-1111-111111111111
This resource will be responsible for creating/destroying Cloud Servers based on the auto scaling policy. The maxEntities and minEntities properties above ensure that the group will create at least 1 server but not more than 5 servers.
Add a Rackspace::AutoScale::ScalingPolicy for scaling up:
scale_up_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale up policy
params:
stack: { get_param: "OS::stack_name" }
args:
cron: "0 1 * * 1"
change: 1
cooldown: 600
type: schedule
This resource will create a scaling policy that scales the auto scaling group up by one server every Monday at 1:00 AM.
Finally, add a Rackspace::AutoScale::ScalingPolicy for scaling down:
scale_down_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale down policy
params:
stack: { get_param: "OS::stack_name" }
args:
cron: "0 1 * * 6"
change: -1
cooldown: 600
type: schedule
Similarly, this resource will scale the auto scaling group down by one server every Saturday at 1:00 AM.
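The cron expressions above use the standard five-field format (minute, hour, day of month, month, day of week); for reference:
# "0 1 * * 1"  -> 01:00 every Monday   (scale up by one server)
# "0 1 * * 6"  -> 01:00 every Saturday (scale down by one server)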
Outputs section
Add the private SSH key to the outputs section. You will be able to log into your scaling group servers using this SSH key.
"Access Private Key":
value: { get_attr: [ access_key, private_key ] }
description: Private key for accessing the scaled server instances if needed
To see the stack outputs, issue a heat stack-show <stack name> command on the created stack.
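For example, assuming you named the stack schedule-scaling-demo (a hypothetical name):
heat stack-show schedule-scaling-demo
heat output-show schedule-scaling-demo "Access Private Key"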
Full template
heat_template_version: 2014-10-16
description: |
Rackspace Cloud Monitoring and schedule-based scaling using Rackspace Cloud Autoscale
resources:
access_key:
type: OS::Nova::KeyPair
properties:
name: { get_param: "OS::stack_name" }
save_private_key: true
scaling_lb:
type: Rackspace::Cloud::LoadBalancer
properties:
name: { get_param: "OS::stack_name" }
protocol: HTTP
port: 80
algorithm: ROUND_ROBIN
nodes: []
virtualIps:
- type: PUBLIC
ipVersion: IPV4
scaled_servers:
type: Rackspace::AutoScale::Group
properties:
groupConfiguration:
name: { get_param: "OS::stack_name" }
maxEntities: 5
minEntities: 1
cooldown: 120
launchConfiguration:
type: launch_server
args:
loadBalancers:
- loadBalancerId: { get_resource: scaling_lb }
port: 80
server:
name: { get_param: "OS::stack_name" }
flavorRef: performance1-1
imageRef: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
key_name: { get_resource: access_key }
networks:
- uuid: 11111111-1111-1111-1111-111111111111
scale_up_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale up policy
params:
stack: { get_param: "OS::stack_name" }
args:
cron: "0 1 * * 1"
change: 1
cooldown: 600
type: schedule
scale_down_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale down policy
params:
stack: { get_param: "OS::stack_name" }
args:
cron: "0 1 * * 6"
change: -1
cooldown: 600
type: schedule
outputs:
"Access Private Key":
value: { get_attr: [ access_key, private_key ] }
description: Private key for accessing the scaled server instances if needed
Event-based auto scaling
To configure your web application running on the Rackspace Cloud to automatically scale up or down according to load, Rackspace Auto Scale can be used in conjunction with Rackspace Cloud Monitoring. The Cloud Monitoring agent monitors various resources on the servers inside the scaling group and makes calls to the Auto Scale API when it is time to scale up or down.
In the following example template, we will set up a web application with a load balancer and a scaling group that contains between 2 and 10 web servers. For the sake of simplicity, we will not use template parameters in this example.
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
Rackspace Cloud Monitoring and Event-based scaling using Rackspace Cloud Autoscale
resources:
outputs:
Resources section
Add an OS::Nova::KeyPair resource and a Rackspace::Cloud::LoadBalancer as in the previous example:
access_key:
type: OS::Nova::KeyPair
properties:
name: { get_param: "OS::stack_name" }
save_private_key: true
Add a Rackspace::Cloud::LoadBalancer resource that will balance the load between web servers.
scaling_lb:
type: Rackspace::Cloud::LoadBalancer
properties:
name: { get_param: "OS::stack_name" }
protocol: HTTP
port: 80
algorithm: ROUND_ROBIN
nodes: []
virtualIps:
- type: PUBLIC
ipVersion: IPV4
Autoscale resources
Add the Rackspace::AutoScale::Group resource, which will contain at least 2 servers and not more than 10 servers:
scaled_servers:
type: Rackspace::AutoScale::Group
properties:
groupConfiguration:
name: { get_param: "OS::stack_name" }
maxEntities: 10
minEntities: 2
cooldown: 120
launchConfiguration:
type: launch_server
args:
loadBalancers:
- loadBalancerId: { get_resource: scaling_lb }
port: 80
server:
name: { get_param: "OS::stack_name" }
flavorRef: performance1-1
imageRef: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
key_name: { get_resource: access_key }
config_drive: true
networks:
- uuid: 11111111-1111-1111-1111-111111111111
user_data:
str_replace:
template: |
#cloud-config
apt_upgrade: true
apt_sources:
- source: deb http://stable.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64 cloudmonitoring main
key: | # This is the apt repo signing key
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.10 (GNU/Linux)
mQENBFAZuVEBCAC8iXu/UEDLdkzRJzBKx14cgAiPHxSCjV4CPWqhOIrN4tl0PVHD
BYSJV7oSu0napBTfAK5/0+8zNnnq8j0PNg2YmPOFkL/rIMHJH8eZ08Ffq9j4GQdM
fSHDa6Zvgz68gJMLQ1IRPguen7p2mIEoOl8NuTwpjnWBZTdptImUoj53ZTKGYYS+
OWs2iZ1IHS8CbmWaTMxiEk8kT5plM3jvbkJAKBAaTfYsddo1JqqMpcbykOLcgSrG
oipyiDo9Ppi+EAOie1r6+zqmWpY+ScANkOpaVSfLjGp8fo4RP7gHhl26nDiqYB1K
7tV1Rl3RMPnGuh4g/8YRkiExKd/XdS2CfO/DABEBAAG0jFJhY2tzcGFjZSBDbG91
ZCBNb25pdG9yaW5nIEFnZW50IFBhY2thZ2UgUmVwbyAoaHR0cDovL3d3dy5yYWNr
c3BhY2UuY29tL2Nsb3VkL2Nsb3VkX2hvc3RpbmdfcHJvZHVjdHMvbW9uaXRvcmlu
Zy8pIDxtb25pdG9yaW5nQHJhY2tzcGFjZS5jb20+iQE4BBMBAgAiBQJQGblRAhsD
BgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRCghvB30Fq5FCo6B/9Oel0Q/cX6
1Lyk+teFywmB2jgn/UC51ioPZBHnHZLIjKH/CA6y7B9jm3+VddH60qDDANzlK/LL
MyUgwLj9+flKeS+H5AL6l3RarWlGm11fJjjW2TnaUCUXQxw6A/QQvpHpl7eknEKJ
m3kWMGAT6y/FbkSye18HUu6dtxvxosiMzi/7yVPJ7MwtUy2Bv1z9yHvt4I0rR8L5
CdFeEcqY4FlGmFBG200BuGzLMrqv6HF6LH3khPoXbGjVmHbHKIzqCx4hPWNRtZIv
fnu/aZcXJOJkB3/jzxaCjabOU+BCkXqVVFnUkbOYKoJ8EVLoepnhuVLUYErRjt7J
qDsI4KPQoEjTuQENBFAZuVEBCACUBBO83pdDYHfKe394Il8MSw7PBhtxFRHjUty2
WZYW12P+lZ3Q0Tqfc5Z8+CxnnkbdfvL13duAXn6goWObPRlQsYg4Ik9wO5TlYxqu
igtPZ+mJ9KlZZ/c2+KV4AeqO+K0L5k96nFkxd/Jh90SLk0ckP24RAYx2WqRrIPyX
xJCZlSWSqITMBcFp+kb0GdMk+Lnq7wPIJ08IKFJORSHgBbfHAmHCMOCUTZPhQHLA
yBDMLcaLP9xlRm72JG6tko2k2/cBV707CfbnR2PyJFqq+zuEyMdBpnxtY3Tpdfdk
MW9ScO40ndpwR72MG+Oy8iM8CTnmzRzMHMPiiPVAit1ZIXtZABEBAAGJAR8EGAEC
AAkFAlAZuVECGwwACgkQoIbwd9BauRSx0QgApV/n2L/Qe5T8aRhoiecs4gH+ubo2
uCQV9W3f56X3obHz9/mNkLTIKF2zHQhEUCCOwptoeyvmHht/QYXu1m3Gvq9X2F85
YU6I2PTEHuI/u6oZF7cEa8z8ofq91AWSOrXXEJiZUQr5DNjO8SiAzPulGM2teSA+
ez1wn9hhG9Kdu4LpaQ3EZHHBUKCLNU7nN/Ie5OeYA8FKbudNz13jTNRG+GYGrpPj
PlhA5RCmTY5N018O51YXEiTh4C7TLskFwRFPbbexh3mZx2s6VlcaCK0lEdQ/+XK3
KW+ZuPEh074b3VujLvuUCXd6T5FT5J6U/6qZgEoEiXwODX+fYIrD5PfjCw==
=S1lE
-----END PGP PUBLIC KEY BLOCK-----
write_files:
- path: /etc/rackspace-monitoring-agent.conf.d/load.yaml
content: |
type: agent.load_average
label: Load Average
period: 60
timeout: 10
alarms:
load_alarm:
label: load average alarm
notification_plan_id: {notification_plan}
criteria: |
:set consecutiveCount=3
if (metric['5m'] > 0.85){
return new AlarmStatus(CRITICAL);
}
if (metric['15m'] < 0.3){
return new AlarmStatus(WARNING);
}
return new AlarmStatus(OK);
- path: /etc/rackspace-monitoring-agent.cfg
content: |
monitoring_token {agent_token}
packages:
- rackspace-monitoring-agent
- apache2
params:
"{notification_plan}": { get_resource: scaling_plan }
"{agent_token}": { get_resource: agent_token }
In the resource above, the Cloud Monitoring agent is installed and configured via the user_data section (using the cloud-config format). The alarm is configured to trigger a WARNING state when the 15-minute load average is below 0.3 and a CRITICAL state when the 5-minute load average is above 0.85, in both cases for three consecutive checks. We use the WARNING state here to trigger scale-down events in lieu of an alternative alarm status.
The scaling_plan and agent_token resources referenced in the user_data section will be defined below.
Next, define a Rackspace::AutoScale::ScalingPolicy resource for scaling up:
scale_up_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale up policy
params:
stack: { get_param: "OS::stack_name" }
change: 1
cooldown: 600
type: webhook
Add a Rackspace::AutoScale::WebHook resource:
scale_up_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale up hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_up_policy }
The webhook resource generates a URL that will be used to trigger the scale-up policy above.
Similar to the previous two resources for scaling up, we will add another Rackspace::AutoScale::ScalingPolicy and Rackspace::AutoScale::WebHook resource for scaling down:
scale_down_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale down policy
params:
stack: { get_param: "OS::stack_name" }
change: -1
cooldown: 600
type: webhook
scale_down_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale down hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_down_policy }
Cloud Monitoring resources
Add a Rackspace::CloudMonitoring::AgentToken resource that will create a token used by the monitoring agent to authenticate with the monitoring service:
agent_token:
type: Rackspace::CloudMonitoring::AgentToken
properties:
label:
str_replace:
template: stack monitoring agent token
params:
stack: { get_param: "OS::stack_name" }
Add a Rackspace::CloudMonitoring::Notification resource that will call the scale-up webhook created above:
scaleup_notification:
type: Rackspace::CloudMonitoring::Notification
properties:
label:
str_replace:
template: stack scale up notification
params:
stack: { get_param: "OS::stack_name" }
type: webhook
details:
url: { get_attr: [ scale_up_webhook, executeUrl ] }
Below, the notification resource will be associated with an alarm state using a notification plan.
Add another Rackspace::CloudMonitoring::Notification resource that will call the scale-down webhook:
scaledown_notification:
type: Rackspace::CloudMonitoring::Notification
properties:
label:
str_replace:
template: stack scale down notification
params:
stack: { get_param: "OS::stack_name" }
type: webhook
details:
url: { get_attr: [ scale_down_webhook, executeUrl ] }
Finally, create a Rackspace::CloudMonitoring::NotificationPlan and Rackspace::CloudMonitoring::PlanNotifications resource.
scaling_plan:
type: Rackspace::CloudMonitoring::NotificationPlan
properties:
label:
str_replace:
template: stack scaling notification plan
params:
stack: { get_param: "OS::stack_name" }
plan_notifications:
type: Rackspace::CloudMonitoring::PlanNotifications
properties:
plan: { get_resource: scaling_plan }
warning_state: # scale down on warning since this is configured for low load
- { get_resource: scaledown_notification }
critical_state:
- { get_resource: scaleup_notification }
The scaling_plan resource was referenced in the Cloud Monitoring agent configuration inside of the user_data section of the Rackspace::AutoScale::Group resource above. It tells the monitoring agent how to respond to certain alarm states.
The Rackspace::CloudMonitoring::PlanNotifications resource is a way to update an existing NotificationPlan resource. This allows us to associate the alarm state with the Notification resource while avoiding circular dependencies.
This notification plan will trigger a scale up event when any of the load_alarm alarms configured in the scaling group (via cloud-init) issue a CRITICAL alarm state. This plan also triggers a scale down event when any of the load_alarm alarms configured in the scaling group issue a WARNING alarm state.
Outputs section
Add the private SSH key and, optionally, the webhook URLs to the outputs section. You can use the webhooks to manually scale your scaling group up or down.
"Access Private Key":
value: { get_attr: [ access_key, private_key ] }
description: Private key for accessing the scaled server instances if needed
"Scale UP servers webhook":
value: { get_attr: [ scale_up_webhook, executeUrl ] }
description: Scale UP API servers webhook
"Scale DOWN servers webhook":
value: { get_attr: [ scale_down_webhook, executeUrl ] }
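For example, you can trigger a scale-up manually by sending a POST request to the scale-up webhook URL reported in the stack outputs:
curl -X POST -I -k <webhook_scale_up_address_from_stack_outputs>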
Full template
heat_template_version: 2014-10-16
description: |
Rackspace Cloud Monitoring and Event-based scaling using Rackspace Cloud Autoscale
resources:
access_key:
type: OS::Nova::KeyPair
properties:
name: { get_param: "OS::stack_name" }
save_private_key: true
scaling_lb:
type: Rackspace::Cloud::LoadBalancer
properties:
name: { get_param: "OS::stack_name" }
protocol: HTTP
port: 80
algorithm: ROUND_ROBIN
nodes: []
virtualIps:
- type: PUBLIC
ipVersion: IPV4
scaled_servers:
type: Rackspace::AutoScale::Group
properties:
groupConfiguration:
name: { get_param: "OS::stack_name" }
maxEntities: 10
minEntities: 2
cooldown: 120
launchConfiguration:
type: launch_server
args:
loadBalancers:
- loadBalancerId: { get_resource: scaling_lb }
port: 80
server:
name: { get_param: "OS::stack_name" }
flavorRef: performance1-1
imageRef: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
key_name: { get_resource: access_key }
config_drive: true
networks:
- uuid: 11111111-1111-1111-1111-111111111111
user_data:
str_replace:
template: |
#cloud-config
apt_upgrade: true
apt_sources:
- source: deb http://stable.packages.cloudmonitoring.rackspace.com/ubuntu-14.04-x86_64 cloudmonitoring main
key: | # This is the apt repo signing key
-----BEGIN PGP PUBLIC KEY BLOCK-----
Version: GnuPG v1.4.10 (GNU/Linux)
mQENBFAZuVEBCAC8iXu/UEDLdkzRJzBKx14cgAiPHxSCjV4CPWqhOIrN4tl0PVHD
BYSJV7oSu0napBTfAK5/0+8zNnnq8j0PNg2YmPOFkL/rIMHJH8eZ08Ffq9j4GQdM
fSHDa6Zvgz68gJMLQ1IRPguen7p2mIEoOl8NuTwpjnWBZTdptImUoj53ZTKGYYS+
OWs2iZ1IHS8CbmWaTMxiEk8kT5plM3jvbkJAKBAaTfYsddo1JqqMpcbykOLcgSrG
oipyiDo9Ppi+EAOie1r6+zqmWpY+ScANkOpaVSfLjGp8fo4RP7gHhl26nDiqYB1K
7tV1Rl3RMPnGuh4g/8YRkiExKd/XdS2CfO/DABEBAAG0jFJhY2tzcGFjZSBDbG91
ZCBNb25pdG9yaW5nIEFnZW50IFBhY2thZ2UgUmVwbyAoaHR0cDovL3d3dy5yYWNr
c3BhY2UuY29tL2Nsb3VkL2Nsb3VkX2hvc3RpbmdfcHJvZHVjdHMvbW9uaXRvcmlu
Zy8pIDxtb25pdG9yaW5nQHJhY2tzcGFjZS5jb20+iQE4BBMBAgAiBQJQGblRAhsD
BgsJCAcDAgYVCAIJCgsEFgIDAQIeAQIXgAAKCRCghvB30Fq5FCo6B/9Oel0Q/cX6
1Lyk+teFywmB2jgn/UC51ioPZBHnHZLIjKH/CA6y7B9jm3+VddH60qDDANzlK/LL
MyUgwLj9+flKeS+H5AL6l3RarWlGm11fJjjW2TnaUCUXQxw6A/QQvpHpl7eknEKJ
m3kWMGAT6y/FbkSye18HUu6dtxvxosiMzi/7yVPJ7MwtUy2Bv1z9yHvt4I0rR8L5
CdFeEcqY4FlGmFBG200BuGzLMrqv6HF6LH3khPoXbGjVmHbHKIzqCx4hPWNRtZIv
fnu/aZcXJOJkB3/jzxaCjabOU+BCkXqVVFnUkbOYKoJ8EVLoepnhuVLUYErRjt7J
qDsI4KPQoEjTuQENBFAZuVEBCACUBBO83pdDYHfKe394Il8MSw7PBhtxFRHjUty2
WZYW12P+lZ3Q0Tqfc5Z8+CxnnkbdfvL13duAXn6goWObPRlQsYg4Ik9wO5TlYxqu
igtPZ+mJ9KlZZ/c2+KV4AeqO+K0L5k96nFkxd/Jh90SLk0ckP24RAYx2WqRrIPyX
xJCZlSWSqITMBcFp+kb0GdMk+Lnq7wPIJ08IKFJORSHgBbfHAmHCMOCUTZPhQHLA
yBDMLcaLP9xlRm72JG6tko2k2/cBV707CfbnR2PyJFqq+zuEyMdBpnxtY3Tpdfdk
MW9ScO40ndpwR72MG+Oy8iM8CTnmzRzMHMPiiPVAit1ZIXtZABEBAAGJAR8EGAEC
AAkFAlAZuVECGwwACgkQoIbwd9BauRSx0QgApV/n2L/Qe5T8aRhoiecs4gH+ubo2
uCQV9W3f56X3obHz9/mNkLTIKF2zHQhEUCCOwptoeyvmHht/QYXu1m3Gvq9X2F85
YU6I2PTEHuI/u6oZF7cEa8z8ofq91AWSOrXXEJiZUQr5DNjO8SiAzPulGM2teSA+
ez1wn9hhG9Kdu4LpaQ3EZHHBUKCLNU7nN/Ie5OeYA8FKbudNz13jTNRG+GYGrpPj
PlhA5RCmTY5N018O51YXEiTh4C7TLskFwRFPbbexh3mZx2s6VlcaCK0lEdQ/+XK3
KW+ZuPEh074b3VujLvuUCXd6T5FT5J6U/6qZgEoEiXwODX+fYIrD5PfjCw==
=S1lE
-----END PGP PUBLIC KEY BLOCK-----
write_files:
- path: /etc/rackspace-monitoring-agent.conf.d/load.yaml
content: |
type: agent.load_average
label: Load Average
period: 60
timeout: 10
alarms:
load_alarm:
label: load average alarm
notification_plan_id: {notification_plan}
criteria: |
:set consecutiveCount=3
if (metric['5m'] > 0.85){
return new AlarmStatus(CRITICAL);
}
if (metric['15m'] < 0.3){
return new AlarmStatus(WARNING);
}
return new AlarmStatus(OK);
- path: /etc/rackspace-monitoring-agent.cfg
content: |
monitoring_token {agent_token}
packages:
- rackspace-monitoring-agent
- apache2
params:
"{notification_plan}": { get_resource: scaling_plan }
"{agent_token}": { get_resource: agent_token }
scale_up_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale up policy
params:
stack: { get_param: "OS::stack_name" }
change: 1
cooldown: 600
type: webhook
scale_up_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale up hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_up_policy }
scale_down_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale down policy
params:
stack: { get_param: "OS::stack_name" }
change: -1
cooldown: 600
type: webhook
scale_down_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale down hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_down_policy }
agent_token:
type: Rackspace::CloudMonitoring::AgentToken
properties:
label:
str_replace:
template: stack monitoring agent token
params:
stack: { get_param: "OS::stack_name" }
scaleup_notification:
type: Rackspace::CloudMonitoring::Notification
properties:
label:
str_replace:
template: stack scale up notification
params:
stack: { get_param: "OS::stack_name" }
type: webhook
details:
url: { get_attr: [ scale_up_webhook, executeUrl ] }
scaledown_notification:
type: Rackspace::CloudMonitoring::Notification
properties:
label:
str_replace:
template: stack scale down notification
params:
stack: { get_param: "OS::stack_name" }
type: webhook
details:
url: { get_attr: [ scale_down_webhook, executeUrl ] }
scaling_plan:
type: Rackspace::CloudMonitoring::NotificationPlan
properties:
label:
str_replace:
template: stack scaling notification plan
params:
stack: { get_param: "OS::stack_name" }
plan_notifications:
type: Rackspace::CloudMonitoring::PlanNotifications
properties:
plan: { get_resource: scaling_plan }
warning_state: # scale down on warning since this is configured for low load
- { get_resource: scaledown_notification }
critical_state:
- { get_resource: scaleup_notification }
outputs:
"Access Private Key":
value: { get_attr: [ access_key, private_key ] }
description: Private key for accessing the scaled server instances if needed
"Scale UP servers webhook":
value: { get_attr: [ scale_up_webhook, executeUrl ] }
description: Scale UP API servers webhook
"Scale DOWN servers webhook":
value: { get_attr: [ scale_down_webhook, executeUrl ] }
Auto-scaling using webhooks
If you decide to use a monitoring system other than Rackspace Cloud Monitoring, you can remove the monitoring agent configuration from the Rackspace::AutoScale::Group resource and remove the Rackspace::CloudMonitoring resources. Be sure to include the webhooks in the output values, as they will be needed when configuring your monitoring system.
Here is an example template for auto scaling with webhooks alone:
heat_template_version: 2014-10-16
description: |
Rackspace Cloud Monitoring and Event-based scaling using Rackspace Cloud Autoscale
resources:
access_key:
type: OS::Nova::KeyPair
properties:
name: { get_param: "OS::stack_name" }
save_private_key: true
scaling_lb:
type: Rackspace::Cloud::LoadBalancer
properties:
name: { get_param: "OS::stack_name" }
protocol: HTTP
port: 80
algorithm: ROUND_ROBIN
nodes: []
virtualIps:
- type: PUBLIC
ipVersion: IPV4
scaled_servers:
type: Rackspace::AutoScale::Group
properties:
groupConfiguration:
name: { get_param: "OS::stack_name" }
maxEntities: 10
minEntities: 2
cooldown: 120
launchConfiguration:
type: launch_server
args:
loadBalancers:
- loadBalancerId: { get_resource: scaling_lb }
port: 80
server:
name: { get_param: "OS::stack_name" }
flavorRef: performance1-1
imageRef: 6f29d6a6-9972-4ae0-aa80-040fa2d6a9cf # Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
key_name: { get_resource: access_key }
config_drive: true
networks:
- uuid: 11111111-1111-1111-1111-111111111111
scale_up_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale up policy
params:
stack: { get_param: "OS::stack_name" }
change: 1
cooldown: 600
type: webhook
scale_up_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale up hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_up_policy }
scale_down_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: scaled_servers }
name:
str_replace:
template: stack scale down policy
params:
stack: { get_param: "OS::stack_name" }
change: -1
cooldown: 600
type: webhook
scale_down_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale down hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_down_policy }
outputs:
"Access Private Key":
value: { get_attr: [ access_key, private_key ] }
description: Private key for accessing the scaled server instances if needed
"Scale UP servers webhook":
value: { get_attr: [ scale_up_webhook, executeUrl ] }
description: Scale UP API servers webhook
"Scale DOWN servers webhook":
value: { get_attr: [ scale_down_webhook, executeUrl ] }
Reference documentation
- Cloud Monitoring API Developer Guide
- Auto Scale API Developer Guide
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT) Specification
- Cloud-init format documentation
Auto-scaling Heat stacks
Brief summary
In the Event and schedule-based auto-scaling with Cloud Orchestration tutorial, we used the launch_server launchConfiguration type to scale the Cloud Servers in your web application. Rackspace Auto Scale also supports a launch_stack launchConfiguration type, where the unit of scale is a stack instead of a server.
In this tutorial, we will learn how to scale using a group of Heat stacks, which allows us to scale configurations more complex than a single server. In this example, we will use a stack containing both a server and an OS::Heat::SoftwareConfig resource to configure new instances in the group.
Pre-reading
The Auto Scale and Cloud Orchestration pre-reading from the Event and schedule-based auto-scaling with Cloud Orchestration tutorial will be useful in this tutorial as well. In addition, please look over the new “launchConfiguration.args” body parameters in the Otter docs.
Example template
Start by creating a new template with the following top-level template sections:
heat_template_version: 2014-10-16
description: |
Rackspace Cloud Monitoring and Event-based scaling using Rackspace Cloud Autoscale
parameters:
resources:
Add parameters for key_name, flavor, and image:
key_name:
type: string
description: Name of a key pair to enable SSH access to instances.
default: my_key
flavor:
type: string
description: Flavor to use for the WordPress server.
constraints:
- custom_constraint: nova.flavor
default: 4 GB Performance
image:
type: string
description: >
Name or ID of the image to use for the WordPress server.
The image must have the software config agents baked-in.
default: f4bbbce2-50b0-4b07-bf09-96c175a45f4b
It is important that the image being used has the base OpenStack agents (os-collect-config, os-apply-config, and os-refresh-config) baked in.
Next, add a Rackspace::AutoScale::Group resource with a launchConfiguration type of launch_stack:
lamp_asg:
type: Rackspace::AutoScale::Group
properties:
groupConfiguration:
name: { get_param: "OS::stack_name" }
metadata:
rax-heat: { get_param: "OS::stack_id" }
maxEntities: 3
minEntities: 1
cooldown: 120
launchConfiguration:
type: launch_stack
args:
stack:
template_url: https://raw.githubusercontent.com/rackerlabs/rs-heat-docs/master/templates/launch_stack_template.yaml
disable_rollback: False
parameters:
flavor: {get_param: flavor}
image: {get_param: image}
key_name: {get_param: key_name}
timeout_mins: 30
The template referenced by URL in the template_url property is the template for the stack being scaled. It is a simple template that creates a LAMP server using OS::Nova::Server, OS::Heat::SoftwareDeployment, and OS::Heat::SoftwareConfig resources. Please read through the template before proceeding.
Next, add Rackspace::AutoScale::ScalingPolicy and Rackspace::AutoScale::WebHook resources to create webhooks for scaling up and down:
scale_up_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: lamp_asg }
name:
str_replace:
template: stack scale up policy
params:
stack: { get_param: "OS::stack_name" }
change: 1
cooldown: 600
type: webhook
scale_up_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale up hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_up_policy }
scale_down_policy:
type: Rackspace::AutoScale::ScalingPolicy
properties:
group: { get_resource: lamp_asg }
name:
str_replace:
template: stack scale down policy
params:
stack: { get_param: "OS::stack_name" }
change: -1
cooldown: 600
type: webhook
scale_down_webhook:
type: Rackspace::AutoScale::WebHook
properties:
name:
str_replace:
template: stack scale down hook
params:
stack: { get_param: "OS::stack_name" }
policy: { get_resource: scale_down_policy }
Finally, add the following to the outputs section so that the webhooks created above are displayed in the stack outputs:
"Scale UP webhook":
value: { get_attr: [ scale_up_webhook, executeUrl ] }
"Scale DOWN webhook":
value: { get_attr: [ scale_down_webhook, executeUrl ] }
The full template can be found at https://raw.githubusercontent.com/rackerlabs/rs-heat-docs/master/templates/launch_stack.yaml
Create stack and scale group
Create the parent stack with the following heatclient command:
heat stack-create -f https://raw.githubusercontent.com/rackerlabs/rs-heat-docs/master/templates/launch_stack.yaml launch_stack
The scaling group will be created within a few seconds (the stack will show as CREATE_COMPLETE), and the group will then begin scaling to the specified minimum number of entities (one stack, in our case). In a few minutes, the PHP info page should become available at the URL shown on the stack details page in the MyRackspace Portal for the child stack that was created by Auto Scale.
To see the AutoScale group scale, view the “Scale UP webhook” in the stack outputs and then trigger the webhook using curl:
curl -X POST -I -k <webhook_scale_up_address_from_stack_outputs>
You should now have two LAMP stacks and associated LAMP servers.
Using Ansible with Heat
Brief summary
In this tutorial, we will show you how to leverage Ansible via Heat and software config to bootstrap an instance with a fully configured Nginx server.
Pre-reading
- You should prepare a bootstrapped image according to the Bootstrapping Software Config tutorial as we will be making use of the image pre-configured with all the required agents for software config.
- This tutorial borrows heavily from An Ansible Tutorial. Reading this guide will give you a good idea of what we will be installing/configuring so you can focus on how we use Orchestration to integrate with Ansible rather than using Ansible itself.
Following along
You will probably want to clone this repository (https://github.com/rackerlabs/rs-heat-docs/) in order to easily follow along. Once cloned, change to the ansible directory of the repository.
Otherwise, you may need to modify some of the commands to point to the correct locations of various templates and/or environments. Full templates can always be found in the templates directory.
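For example:
git clone https://github.com/rackerlabs/rs-heat-docs.git
cd rs-heat-docs/ansible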
Basic template
As with all Heat templates, we start with the basic version and description sections:
heat_template_version: 2014-10-16
description: |
Deploy Nginx server with Ansible
For the parameters section, we will define a single parameter that will tell Orchestration which image to use for our server. This allows us some flexibility should the image name change or if we have several images to choose from:
parameters:
image:
type: string
Resources
Now for the details. First we create a random password for accessing the server:
server_pw:
type: OS::Heat::RandomString
Next we specify the playbook that will install Nginx:
nginx_config:
type: OS::Heat::SoftwareConfig
properties:
group: ansible
config: |
---
- name: Install and run Nginx
connection: local
hosts: localhost
tasks:
- name: Install Nginx
apt: pkg=nginx state=installed update_cache=true
notify:
- Start Nginx
handlers:
- name: Start Nginx
service: name=nginx state=started
We then use an OS::Heat::SoftwareDeployment to tell Orchestration we want to run the playbook on our server (which we will define in a while):
deploy_nginx:
type: OS::Heat::SoftwareDeployment
properties:
signal_transport: TEMP_URL_SIGNAL
config:
get_resource: nginx_config
server:
get_resource: server
Finally we will define the server the playbook will run on:
server:
type: OS::Nova::Server
properties:
image: { get_param: image }
admin_pass: { get_attr: [ server_pw, value ] }
flavor: 2 GB Performance
software_config_transport: POLL_TEMP_URL
user_data_format: SOFTWARE_CONFIG
Notice that we have to specify the user_data_format as “SOFTWARE_CONFIG” so that Orchestration knows to set up the proper signal handling between itself and the server. It is also good practice to specify software_config_transport; while “POLL_TEMP_URL” is the only value supported on the Rackspace Cloud, it is also the default for Cloud Orchestration and can safely be omitted.
Outputs
The outputs defined in this template give us ready access to the results of the deployment and show how software config makes it easier to see the state of your configuration, the results, and any errors or output it may have generated, without having to remotely log in to your servers and search through logs. The description property of these outputs tells you what each represents.
outputs:
stdout:
description: Ansible Output
value:
get_attr: [ deploy_nginx, deploy_stdout ]
stderr:
description: Ansible Error Output
value:
get_attr: [ deploy_nginx, deploy_stderr ]
status_code:
description: Exit Code
value:
get_attr: [ deploy_nginx, deploy_status_code ]
server_ip:
description: Server IP Address
value:
get_attr: [ server, accessIPv4 ]
server_password:
description: Server Password
value:
get_attr: [ server_pw, value ]
Deploy the basic template
Before you deploy, you will need to have created an image that already has the needed agents for software config. The Bootstrapping Software Config tutorial walks you through it. Alternatively, you can use the information in that and previous tutorials to add the appropriate bootstrapping to this template.
To deploy this template, simply issue the standard command:
heat stack-create -f templates/software_config_ansible.yaml -P "image=Ubuntu 14.04 LTS (HEAT)" my_nginx_simple
Once the stack is CREATE_COMPLETE, you can visit your new Nginx home page by checking the stack output for the IP address and entering it into your web browser:
heat output-show my_nginx_simple server_ip
You can also check the results of the playbook by checking the other outputs:
heat output-show my_nginx_simple status_code # Ansible return code
heat output-show my_nginx_simple stdout # Ansible output
heat output-show my_nginx_simple stderr # Error details (if any; should be empty)
Advanced Template with Role
While the basic template gives a good idea of how Orchestration integrates with Ansible, we will look at a slightly more advanced usage leveraging Ansible roles. We will tweak the previous template a small amount, so make a copy and call it “software_config_ansible_role.yaml”.
The role and its components can be found in this repository (https://github.com/rackerlabs/rs-heat-docs/) under the roles directory.
New resources
We will add two new resources to pull down the role we want to use and put it in a place Ansible can access it:
pull_role_config:
type: OS::Heat::SoftwareConfig
properties:
group: script
config: |
#!/bin/bash
git clone https://github.com/rackerlabs/rs-heat-docs.git
cp -r rs-heat-docs/ansible/roles /etc/ansible/roles
# needed dependency by one of the Ansible modules
apt-get install -y python-pycurl
This is a simple script that clones this repository and copies the role to the right place. It also installs a dependency needed by one of the modules used in the role.
We’ll also deploy that script to the server:
deploy_role:
type: OS::Heat::SoftwareDeployment
properties:
signal_transport: TEMP_URL_SIGNAL
config:
get_resource: pull_role_config
server:
get_resource: server
Modify playbook
Since we’re using roles to do all of the heavy lifting, we will modify our nginx_config
resource to simply apply the role:
nginx_config:
type: OS::Heat::SoftwareConfig
properties:
group: ansible
config: |
---
- name: Apply Nginx Role
hosts: localhost
connection: local
roles:
- nginx
We will also need to modify the deployment of the playbook to depend on the deploy_role resource, since we will need the role installed before we can apply it:
deploy_nginx:
type: OS::Heat::SoftwareDeployment
depends_on: deploy_role
properties:
signal_transport: TEMP_URL_SIGNAL
config:
get_resource: nginx_config
server:
get_resource: server
Modify outputs
Our script for pulling the role definition is not very sophisticated. We are not capturing or writing any output, but we can examine the exit code of our script. We will add that to the outputs section so we can check it if we need to:
role_status_code:
description: Exit Code returned from deploying the role to the server
value:
get_attr: [ deploy_role, deploy_status_code ]
Deploy the advanced template
Deploying the new template is the same as above; only the template name changes:
heat stack-create -f templates/software_config_ansible_role.yaml -P "image=Ubuntu 14.04 LTS (HEAT)" my_nginx_role
We can also check outputs the same way, by simply changing the stack name:
heat output-show my_nginx_role status_code # Ansible return code
heat output-show my_nginx_role stdout # Ansible output
heat output-show my_nginx_role stderr # Error details (if any; should be empty)
heat output-show my_nginx_role role_status_code # Exit code of the role script
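When you are finished experimenting, either stack can be removed with stack-delete, shown here for the role-based stack:
heat stack-delete my_nginx_role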
Reference documentation
Using Chef with Heat
Brief summary
In this tutorial, we will show you how to leverage Chef via Heat and software config to bootstrap an instance with a fully configured Redis server.
Pre-reading
- You should already be familiar with Generic Software Config
- You should already be familiar with Chef, its usage and features.
Basic template
As with all Heat templates, we start with the basic version and description sections:
heat_template_version: 2014-10-16
description: |
Using Orchestration software config and Chef
There will be no parameters or outputs in this example, but you can add them, along with appropriate calls to intrinsic functions (get_attr, get_resource, and so on), if you want to make the example more configurable.
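As a rough sketch of what such additions could look like (the flavor parameter is purely illustrative, while the server and redis_password names match the resources defined later in this tutorial):
parameters:
  flavor:
    type: string
    default: 2 GB Performance
    description: Flavor to use for the Redis server

outputs:
  server_ip:
    description: Public IP address of the Redis server
    value: { get_attr: [ server, accessIPv4 ] }
  redis_password:
    description: Generated Redis password
    value: { get_attr: [ redis_password, value ] }
To actually use the flavor parameter, you would also replace the hard-coded flavor in the server resource with { get_param: flavor }.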
Resources
The first resource we’ll add is the server we want to configure with Chef:
server:
type: OS::Nova::Server
properties:
name:
str_replace:
template: stack-server
params:
stack: { get_param: "OS::stack_name" }
metadata:
rax-heat: { get_param: "OS::stack_id" }
image: Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM) (Orchestration)
flavor: 2 GB Performance
config_drive: true
software_config_transport: POLL_TEMP_URL
user_data_format: SOFTWARE_CONFIG
This server uses an image pre-configured with all of the agents needed to run software configuration. You can also use your own custom image or bootstrap a “pristine” image in your template. See Bootstrapping Software Config if you are unfamiliar with that process.
Also note that we’ve set up config_drive, software_config_transport, and user_data_format as required.
Next we specify a random password to use when accessing the Redis service on the instance:
redis_password:
type: OS::Heat::RandomString
properties:
length: 16
sequence: lettersdigits
We then use an OS::Heat::SoftwareConfig resource to define what attributes, recipes, and repository the Chef software config agent should apply:
redis_config:
type: OS::Heat::SoftwareConfig
properties:
group: chef
inputs:
- name: redisio
type: Json
config: |
["recipe[apt]",
"recipe[build-essential]",
"recipe[redisio::default]",
"recipe[redisio::enable]" ]
options:
kitchen: https://github.com/rackspace-orchestration-templates/redis-single
kitchen_path: /opt/heat/chef/kitchen
Use the group chef to tell Orchestration which agent should process the configuration. For this agent, it is important to specify the top-level elements of any attribute overrides you plan to use in the inputs section to ensure that this information is formatted correctly when sent to the agent.
The config property simply defines the run-list you want applied to the instance. Additionally, the Chef agent allows for an input named environment of type String that you can use to specify which environment to use when applying the config. You do not have to explicitly declare this input in the config resource. We don’t use this input in this example, but it is included as a comment in a following section to illustrate its use.
The options property allows you to optionally specify both the source location and the local path to the kitchen containing the roles, recipes, attributes, and other elements needed to converge the instance. The kitchen_path property defaults to /var/lib/heat-config/heat-config-chef/kitchen if it is not specified.
The kitchen option allows you to specify the URL of a GitHub repository that contains your kitchen. Here, we re-use a repository from one of the existing Rackspace curated templates. If you do not specify a kitchen to clone, you will need to make sure that your kitchen is available at the specified kitchen_path by other means, such as another OS::Heat::SoftwareConfig resource, user_data, a custom image, or some other “manual” step.
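For example, a minimal sketch of staging your own kitchen with a script-group config might look like the following; the repository URL is a placeholder, and the target path matches the kitchen_path used above:
pull_kitchen_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: script
    config: |
      #!/bin/bash
      # Placeholder repository; substitute the Git URL of your own kitchen.
      git clone https://github.com/example-org/my-kitchen.git /opt/heat/chef/kitchen
You would then deploy this config with its own OS::Heat::SoftwareDeployment and make the Chef deployment depend on it, much like the Ansible role example earlier in this guide depends on deploy_role.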
Finally we deploy the configuration to the instance:
deploy_redis:
type: OS::Heat::SoftwareDeployment
properties:
signal_transport: TEMP_URL_SIGNAL
input_values:
# environment: production -- This isn't used in this example
redisio:
default_settings:
requirepass: { get_attr: [redis_password, value] }
servers:
- name:
str_replace:
template: stack-server
params:
stack: { get_param: "OS::stack_name" }
port: 6379
version: "2.8.14"
config:
get_resource: redis_config
server:
get_resource: server
Note that the input values take the form of a dictionary, just as they would for any other node. Also note, as mentioned earlier, that we’ve commented out the environment input since it’s not actually used in the recipes applied here.
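Assuming you save the template as software_config_chef.yaml (the filename is arbitrary), you can create the stack with the usual command:
heat stack-create -f software_config_chef.yaml my_redis_chef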
References
Rackspace shared IP resource
Brief summary
You can use the Rackspace shared IP (SharedIP) and AssociateSharedIP resources to create a shared IP address and associate it with two or more virtual server instances.
Setup process
The following steps describe the process to set up and use a shared IP address between servers.
For additional information, see Share IP address between servers in the Rackspace Cloud Networks documentation.
- Create two or more servers in the same publicIPzoneId and write down their public IP address port IDs.
- Create a shared IP address with the given network ID and port IDs.
- Associate the shared IP address with the servers.
The following example template provides the code to create a shared IP address and associate it with two server instances. For the sake of simplicity, assume that two servers were already created in the same publicIPzoneId.
Example template
Start by adding the top-level template sections:
heat_template_version: 2014-10-16
description: |
Shared IP example template.
resources:
outputs:
Resources
The following sections provide information about the resources, outputs, and an example of the full template to set up a shared IP address between servers.
SharedIP resource
Add a Rackspace::Cloud::SharedIP resource to create a shared IP address.
shared_ip:
properties:
network_id: 00000000-0000-0000-0000-000000000000
ports: [55xxfxx6-cxx7-4xxb-8xx3-3cxxd12xxe0d, 17xxfxxca-exx2-4xxe-bxx7-91xxf6xxbb2]
type: Rackspace::Cloud::SharedIP
The network_id property provides the value for the public network ID, 00000000-0000-0000-0000-000000000000. The ports property specifies a list of public port IDs, 55xxfxx6-cxx7-4xxb-8xx3-3cxxd12xxe0d and 17xxfxxca-exx2-4xxe-bxx7-91xxf6xxbb2.
For information about creating a server and getting port IDs, see the Setup process.
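Alternatively, if the servers are created in the same template, you can skip the manual port lookup and pass the servers' public port attributes directly, as the full example at the end of this section does:
ports: [{ get_attr: [ server1, addresses, public, 0, port ] }, { get_attr: [ server2, addresses, public, 0, port ] }]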
AssociateSharedIP resource
Add a Rackspace::Cloud::AssociateSharedIP resource to associate a shared IP address with the given server instances.
associate_shared_ip:
properties:
shared_ip: {get_attr: [shared_ip, shared_ip_address, ip_address, id]}
servers: [62cxx03b-axx7-4xxb-bxxb-f1axx14370b4, 6exx610f-1xx2-4xx9-9xx5c-bxx2c735e463]
type: Rackspace::Cloud::AssociateSharedIP
The servers property specifies a list of the server instance IDs: 62cxx03b-axx7-4xxb-bxxb-f1axx14370b4 and 6exx610f-1xx2-4xx9-9xx5c-bxx2c735e463. Note that these values are not port IDs.
Outputs section
Add the shared IP address to the outputs section.
shared_ip_address:
value:
get_attr: [shared_ip, shared_ip_address, ip_address, address]
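After the stack is created, you can retrieve this output with heat output-show; the stack name shown here is an example:
heat output-show my_shared_ip_stack shared_ip_address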
Full example template
heat_template_version: 2014-10-16
description: |
Shared IP example template.
outputs:
shared_ip_address:
value:
get_attr: [shared_ip, shared_ip_address, ip_address, address ]
resources:
server1:
type: OS::Nova::Server
properties:
image: Ubuntu 18.04 LTS (Bionic Beaver) (PVHVM)
flavor: 2 GB General Purpose v1
server2:
type: OS::Nova::Server
properties:
image: Ubuntu 18.04 LTS (Bionic Beaver) (PVHVM)
flavor: 2 GB General Purpose v1
shared_ip:
properties:
network_id: 00000000-0000-0000-0000-000000000000
ports: [{ get_attr: [ server1, addresses, public, 0, port ] }, { get_attr: [ server2, addresses, public, 0, port ] }]
type: Rackspace::Cloud::SharedIP
associate_shared_ip:
properties:
shared_ip: {get_attr: [shared_ip, shared_ip_address, ip_address, id]}
servers: [{get_resource: server1}, {get_resource: server2}]
type: Rackspace::Cloud::AssociateSharedIP
Reference
- Cloud Orchestration API Developer Guide
- Heat Orchestration Template (HOT)
- Share IP address between servers
- Shared IP address operations
Updating stacks
Overview
Rackspace Orchestration can modify running stacks using the update stack operation, which lets you add, edit, or delete resources in a stack. In the python-heatclient CLI, this operation is called heat stack-update.
Create a stack
For this tutorial, we will create a simple stack with one server:
heat_template_version: 2013-05-23
resources:
hello_world:
type: "OS::Nova::Server"
properties:
flavor: 1GB Standard Instance
image: 5b0d5891-f80c-412a-9b73-cc996de9d719
config_drive: "true"
user_data_format: RAW
user_data: |
#!/bin/bash -xv
echo "hello world" > /root/hello-world.txt
outputs:
public_ip:
value: { get_attr: [ hello_world, accessIPv4 ] }
description: The public ip address of the server
Save this template as stack-update-example.yaml and create a stack using the following heatclient command:
heat stack-create -f stack-update-example.yaml stack-update-example
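You can confirm that the stack reaches CREATE_COMPLETE before moving on, for example:
heat stack-list | grep stack-update-example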
Update the stack without replacement
Next, edit the template file and change the server flavor to “2GB Standard Instance”:
heat_template_version: 2013-05-23
resources:
hello_world:
type: "OS::Nova::Server"
properties:
flavor: 2GB Standard Instance
image: 5b0d5891-f80c-412a-9b73-cc996de9d719
config_drive: "true"
user_data_format: RAW
user_data: |
#!/bin/bash -xv
echo "hello world" > /root/hello-world.txt
outputs:
public_ip:
value: { get_attr: [ hello_world, accessIPv4 ] }
description: The public ip address of the server
Modifying some resource properties will trigger a delete and rebuild of that resource. Because some architectures are less tolerant of nodes being rebuilt, check the Template Guide to see which properties trigger a replacement. For example, each property in the OS::Nova::Server documentation is marked either “Updates cause replacement” or “Can be updated without replacement”.
Alternatively, you can preview what will happen when the stack is updated by adding the “-y” option to the “heat stack-update” command:
heat stack-update -y -f stack-update-example.yaml stack-update-example
The hello_world resource should show up in the “updated” section, since resizing can be done without replacement.
To actually update the stack, resubmit the modified template:
heat stack-update -f stack-update-example.yaml stack-update-example
If there were any parameters or flags passed to the original stack-create, they need to be passed unmodified to the stack-update command (unless you are changing them as part of the stack-update). Leaving them out may result in unexpected changes to the stack.
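For example, if a stack had originally been created with a parameter, you would repeat it on update; the parameter name and value below are purely illustrative, since this tutorial's template defines no parameters:
heat stack-update -f stack-update-example.yaml -P "flavor=2GB Standard Instance" stack-update-example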
Update the stack with replacement
In the next example, we will modify a property that will cause the server to be rebuilt. Change “hello world” to “foo” in the user_data section:
heat_template_version: 2013-05-23
resources:
hello_world:
type: "OS::Nova::Server"
properties:
flavor: 2GB Standard Instance
image: 5b0d5891-f80c-412a-9b73-cc996de9d719
config_drive: "true"
user_data_format: RAW
user_data: |
#!/bin/bash -xv
echo "foo" > /root/hello-world.txt
outputs:
public_ip:
value: { get_attr: [ hello_world, accessIPv4 ] }
description: The public ip address of the server
The stack-update preview output with this template should result in the hello_world resource being in the “replaced” section:
heat stack-update -y -f stack-update-example.yaml stack-update-example
Issue the update as before:
heat stack-update -f stack-update-example.yaml stack-update-example
Update the stack to add a resource
In this example, we will add a resource to a stack. Add another server to the template:
heat_template_version: 2013-05-23
resources:
hello_world:
type: "OS::Nova::Server"
properties:
flavor: 2GB Standard Instance
image: 5b0d5891-f80c-412a-9b73-cc996de9d719
config_drive: "true"
user_data_format: RAW
user_data: |
#!/bin/bash -xv
echo "foo" > /root/hello-world.txt
hello_world2:
type: "OS::Nova::Server"
properties:
flavor: 2GB Standard Instance
image: 5b0d5891-f80c-412a-9b73-cc996de9d719
config_drive: "true"
user_data_format: RAW
user_data: |
#!/bin/bash -xv
echo "bar" > /root/hello-world.txt
outputs:
public_ip:
value: { get_attr: [ hello_world, accessIPv4 ] }
description: The public ip address of the server
public_ip2:
value: { get_attr: [ hello_world2, accessIPv4 ] }
description: The public ip address of the server
The stack-update preview output with this template should result in the hello_world2 resource being in the “added” section, and the hello_world resource being in the “unchanged” section:
heat stack-update -y -f stack-update-example.yaml stack-update-example
Issue the update to create the other server:
heat stack-update -f stack-update-example.yaml stack-update-example
Writing templates using CloudFormation format
Overview
Rackspace Orchestration supports templates written in the AWS CloudFormation (CFN) format. Compatible implementations exist for most of the CFN intrinsic functions (condition functions are not supported).
In addition, the following four CFN resources are supported:
- AWS::EC2::Instance
- AWS::ElasticLoadBalancing::LoadBalancer
- AWS::CloudFormation::WaitCondition
- AWS::CloudFormation::WaitConditionHandle
An AWS::EC2::Instance resource will result in a Rackspace Cloud Server being created, and an AWS::ElasticLoadBalancing::LoadBalancer resource will result in a Rackspace Cloud Load Balancer being created. The wait condition resources are used internally as a signaling mechanism and do not map to any cloud resources.
Writing a CFN template
CFN templates are written in JSON format. Here is an example of a template that creates a server and executes a bash script on it:
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Hello world",
"Parameters" : {
"InstanceType" : {
"Description" : "WebServer EC2 instance type",
"Type" : "String",
"Default" : "1GB Standard Instance",
"AllowedValues" : [ "1GB Standard Instance", "2GB Standard Instance" ],
"ConstraintDescription" : "must be a valid EC2 instance type."
}
},
"Resources" : {
"TestServer": {
"Type": "AWS::EC2::Instance",
"Properties": {
"ImageId" : "4b14a92e-84c8-4770-9245-91ecb8501cc2",
"InstanceType" : { "Ref" : "InstanceType" },
"UserData" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -v\n",
"echo \"hello world\" > /root/hello-world.txt\n"
]]}}
}
}
},
"Outputs" : {
"PublicIP" : {
"Value" : { "Fn::GetAtt" : [ "TestServer", "PublicIp" ]},
"Description" : "Public IP of server"
}
}
}
Notice that the “InstanceType” must be a valid Cloud Server flavor (“m1.small”, for example, is not). Also, the “ImageId” property must be a valid Cloud Server image ID or image name.
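A CFN template is launched with the same client commands used for HOT templates; the filename here is an example:
heat stack-create -f cfn-hello-world.json -P "InstanceType=2GB Standard Instance" my-cfn-stack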
Using the CFN resources
It is possible to use a CFN resource in a HOT template. In this example, we will create an AWS::EC2::Instance, keep track of the user_data script’s progress using AWS::CloudFormation::WaitCondition and AWS::CloudFormation::WaitConditionHandle, and add the server to an AWS::ElasticLoadBalancing::LoadBalancer.
heat_template_version: 2014-10-16
description: |
Test template for AWS supported resources
resources:
aws_server1:
type: AWS::EC2::Instance
properties:
ImageId: 753a7703-4960-488b-aab4-a3cdd4b276dc # Ubuntu 14.04 LTS (Trusty Tahr) (PVHVM)
InstanceType: 4 GB Performance
UserData:
str_replace:
template: |
#!/bin/bash
apt-get update
apt-get -y install curl
sleep 2
curl -i -X PUT --data-binary '{"status": "SUCCESS", "reason": "AWS Signal"}' "wc_notify"
params:
wc_notify: { get_resource: aws_handle }
aws_handle:
type: AWS::CloudFormation::WaitConditionHandle
aws_wait_condition:
type: AWS::CloudFormation::WaitCondition
properties:
Handle: { get_resource: aws_handle }
Timeout: 600
elastic_load_balancer:
type: AWS::ElasticLoadBalancing::LoadBalancer
properties:
AvailabilityZones: []
Instances: [ get_resource: aws_server1 ]
Listeners: [{
LoadBalancerPort: 8945,
InstancePort: 80,
Protocol: "HTTP"
}]
HealthCheck:
Target: "HTTP:80/"
HealthyThreshold: 3
UnhealthyThreshold: 10
Interval: 10
Timeout: 60
outputs:
"AWS Server ID":
value: { get_resource: aws_server1 }
description: ID of the AWS::EC2::Instance resource
"AWS EC2 Server AvailabilityZone":
value: { get_attr: [ aws_server1, AvailabilityZone ] }
description: AWS EC2 Server AvailabilityZone
"AWS EC2 Server PrivateDnsName":
value: { get_attr: [ aws_server1, PrivateDnsName ] }
description: AWS EC2 Server PrivateDnsName
"AWS EC2 Server PrivateIp":
value: { get_attr: [ aws_server1, PrivateIp ] }
description: AWS EC2 Server PrivateIp
"AWS EC2 Server PublicDnsName":
value: { get_attr: [ aws_server1, PublicDnsName ] }
description: AWS EC2 Server PublicDnsName
"AWS EC2 Server PublicIp":
value: { get_attr: [ aws_server1, PublicIp ] }
description: AWS EC2 Server PublicIp
"AWS Cloud Formation Wait Condition":
value: { get_attr: [ aws_wait_condition, Data ] }
description: AWS Cloud Formation Wait Condition data
"AWS ElasticLoadBalancer CanonicalHostedZoneName":
value: { get_attr: [ elastic_load_balancer, CanonicalHostedZoneName ] }
description: details the CanonicalHostedZoneName
"AWS ElasticLoadBalancer CanonicalHostedZoneNameID":
value: { get_attr: [ elastic_load_balancer, CanonicalHostedZoneNameID ] }
description: details the CanonicalHostedZoneNameID
"AWS ElasticLoadBalancer DNSName":
value: { get_attr: [ elastic_load_balancer, DNSName ] }
description: details the DNSName
Likewise, you can use HOT resources in a CFN template. In this example, an OS::Nova::Server resource is embedded in a CFN template.
{
"AWSTemplateFormatVersion" : "2010-09-09",
"Description" : "Hello world",
"Parameters" : {
"InstanceType" : {
"Description" : "WebServer EC2 instance type",
"Type" : "String",
"Default" : "1GB Standard Instance",
"AllowedValues" : [ "1GB Standard Instance", "2GB Standard Instance" ],
"ConstraintDescription" : "must be a valid EC2 instance type."
}
},
"Resources" : {
"TestServer": {
"Type": "OS::Nova::Server",
"Properties": {
"image" : "4b14a92e-84c8-4770-9245-91ecb8501cc2",
"flavor" : { "Ref" : "InstanceType" },
"config_drive" : "true",
"user_data_format" : "RAW",
"user_data" : { "Fn::Base64" : { "Fn::Join" : ["", [
"#!/bin/bash -v\n",
"echo \"hello world\" > /root/hello-world.txt\n"
]]}}
}
}
},
"Outputs" : {
"PublicIP" : {
"Value" : { "Fn::GetAtt" : [ "TestServer", "accessIPv4" ]},
"Description" : "Public IP of server"
}
}
}
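This mixed template is launched the same way; again, the filename is an example:
heat stack-create -f cfn-with-hot-server.json my-mixed-stack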