Rackspace Auto Scale FAQ
Last updated on: 2020-09-23
Authored by: Stephanie Fillmon
No. You cannot migrate your configurations from other providers.
Authentication is required to create a scaling group; you must send an X-Auth-Token header with most API requests. Authentication is not required to execute policies via anonymous webhooks.
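As a minimal sketch of the distinction above: authenticated API calls carry an X-Auth-Token header, while anonymous webhook executions omit it. The token value below is a placeholder, not a real credential.

```python
# Sketch: attaching the X-Auth-Token header to an Auto Scale API request.
# The token string below is a placeholder, not a real credential.

def auth_headers(token):
    """Headers required for authenticated Auto Scale API calls."""
    return {
        "X-Auth-Token": token,      # obtained from the Identity service
        "Content-Type": "application/json",
    }

# An authenticated call (for example, creating a scaling group) needs the header:
headers = auth_headers("example-token")

# Anonymous webhook executions omit X-Auth-Token entirely:
webhook_headers = {"Content-Type": "application/json"}
```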
Auto Scale works by horizontally scaling a particular tier of an application, such as the web tier, so you need to know which servers you want to scale. To get started, configure a server image with all of the applications and settings it needs, so that the server is ready for service as soon as it boots. You can ensure your servers deploy fully ready for service by using configuration management tools such as Chef, Puppet, and Salt.

Some of the actions Auto Scale takes on your behalf are deferred, such as when you set a schedule to create additional servers.
Auto Scale is available at no cost to Rackspace Cloud customers, although you do pay for the servers created by a scale-up until they are removed.
No. Even if you add the autoscale-group-id metadata to the server, the Auto Scale back-end service does not recognize the server as part of the group. Auto Scale manages only servers that it created.
Auto Scale currently does not track what happens to servers outside of the Auto Scale system. If you delete a server outside of the system, Auto Scale continues to treat the server as if it still exists. If you try to delete the server through Auto Scale (for example, by scaling down), no problems should occur.
Rackspace added a new API endpoint to Auto Scale, DELETE server, that allows you to remove a specific server from a scaling group. You can use this endpoint to bring Auto Scale back in sync with the correct number of servers in a group when a server has been deleted through the API or the Cloud Control Panel. For more information, see the Rackspace Auto Scale Developer’s Guide Delete server from scaling group section.
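The delete call described above takes the group and server identifiers in the URL path. The following sketch only builds that URL; the regional endpoint, tenant ID, and resource IDs are illustrative placeholders (see the Developer's Guide for the exact request format).

```python
# Sketch: building the DELETE request URL that removes one server from a
# scaling group. All values below are placeholders.

def delete_server_url(endpoint, tenant_id, group_id, server_id):
    """URL for removing a specific server from a scaling group."""
    return f"{endpoint}/v1.0/{tenant_id}/groups/{group_id}/servers/{server_id}"

url = delete_server_url(
    "https://dfw.autoscale.api.rackspacecloud.com",
    "123456", "group-uuid", "server-uuid",
)
# You would send this with an authenticated DELETE request, for example:
# requests.delete(url, headers={"X-Auth-Token": token})
```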
You can remove a server from an Auto Scale group and keep it on your Cloud account for observation; Auto Scale automatically replaces it with a new server.
Newly created servers receive new IP addresses. However, if the servers are created in a scaling group that uses a load balancer, clients can continue to use the load balancer's stable address.
A server may be created for a scale-up operation and then be immediately deleted if there is a problem with the load balancer associated with the scaling group. The load balancer problems that can cause this are:
- The load balancer is misconfigured.
- The load balancer is at its limit.
- The load balancer has been deleted.
If any of these problems is present, Auto Scale immediately deletes the newly created server so that you are not billed for servers that are not behind the load balancer.
One possibility is that you tried to scale up or down beyond the configured minimum or maximum value, so no servers could be created or deleted. The error can also mean that you tried to set the desired capacity equal to the group's current capacity, so the execution would result in no change.
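The capacity rule described above can be sketched as follows: the desired capacity is clamped to the configured minimum and maximum, and an execution that would leave the group at its current size is reported as a no-op. This is an illustrative model, not the service's actual implementation.

```python
# Sketch of the capacity rule: desired capacity is clamped to
# [min_entities, max_entities], and a policy execution that would leave
# the group at its current size raises the "no change" error.

def apply_policy(current, change, min_entities, max_entities):
    desired = max(min_entities, min(max_entities, current + change))
    if desired == current:
        raise ValueError("policy execution would result in no change")
    return desired

apply_policy(current=4, change=2, min_entities=1, max_entities=10)  # -> 6
```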
No. The server is removed from the load balancer before the delete command is sent. At present, connections are not drained.
The maximum cooldown is 86400 seconds, which is equal to 24 hours.
Zero seconds. However, we recommend setting the group cooldown to around 5 minutes (300 seconds).
Cooldown timers are built into the scaling group and the individual scaling policies so that you can prevent too many servers from being created or deleted too quickly.
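As a minimal sketch of how a cooldown gate works: a policy may not fire again until its cooldown (in seconds) has elapsed since its last execution. The field names below mirror the Auto Scale policy format, but the values and the check itself are illustrative.

```python
import time

# Illustrative scaling policy with a per-policy cooldown of 300 seconds.
policy = {
    "name": "scale-up-on-load",
    "change": 1,
    "cooldown": 300,        # seconds that must pass between executions
}

def cooldown_elapsed(last_executed, cooldown, now=None):
    """Return True if enough time has passed to allow another execution."""
    now = time.time() if now is None else now
    return (now - last_executed) >= cooldown

cooldown_elapsed(last_executed=1000.0, cooldown=300, now=1200.0)  # -> False
```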
The system triggers scheduled policies at their scheduled times; it triggers other policies through webhooks. A webhook is a named "handle" for a policy: a unique URL endpoint that you call to execute the policy.
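Executing a policy through its webhook is an empty, anonymous POST to the webhook's capability URL. The URL below is a placeholder; the real one is returned when you create the webhook. Note that no X-Auth-Token header is needed.

```python
# Sketch: triggering a policy via its anonymous webhook. The capability
# URL below is a placeholder for the one returned when you create the
# webhook. The request is built but not sent here.
from urllib.request import Request

webhook_url = "https://autoscale.example.com/v1.0/execute/1/abcdef123456"

req = Request(webhook_url, method="POST")  # an empty POST executes the policy
# urllib.request.urlopen(req) would send it.
```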
For information on the parameters used with the Auto Scale Control Panel, see the Create a scaling group section in the Rackspace Auto Scale Control Panel User Guide.
No. There are no specific rules within Auto Scale for monitoring specific servers. However, you can do this through Monitoring configurations, which are documented in the Cloud Monitoring API Developer’s Guide.
Yes. However, if you need to scale beyond 25 servers with a Cloud Load Balancer, we recommend creating multiple Auto Scale groups and creating a tree of load balancers.
There is no maximum number of servers in a scaling group. However, a scaling group used with a Cloud Load Balancer instance is limited to 50 servers per load balancer group, and your account has overall Cloud Servers limits on how many servers you can create without requesting a quota increase. If you reach a Cloud Load Balancer limit, Auto Scale fails to add more servers. If you are running up against Cloud Load Balancer limits, consider creating multiple scaling groups with a tree of load balancers to service requests, or use RackConnect for a higher-capacity hardware load balancer solution. For more information on RackConnect, see How do I get started with RackConnect?
Yes. A scaling policy is associated with a specific group. The system manages all of the scaled-up servers for health and monitoring in aggregate, so they need to be part of a group.
Yes. You can add servers later.
A scaling group is a construct that contains the configuration for creating individual servers, has zero or more servers associated with it, and has one or more associated scaling policies that describe what actions to take when the policy is activated.
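The three parts of a scaling group described above can be sketched as a single configuration. The structure mirrors the Auto Scale create-group request body, but all names and values here are illustrative placeholders; see the Developer's Guide for the exact schema.

```python
# Sketch of a scaling group's three parts; all values are placeholders.
scaling_group = {
    "groupConfiguration": {          # the group itself
        "name": "web-tier",
        "minEntities": 2,            # zero or more associated servers
        "maxEntities": 10,
        "cooldown": 300,
    },
    "launchConfiguration": {         # how each new server is created
        "type": "launch_server",
        "args": {"server": {"imageRef": "image-uuid",
                            "flavorRef": "flavor-id"}},
    },
    "scalingPolicies": [             # what to do when a policy is activated
        {"name": "scale up", "change": 1, "cooldown": 300},
    ],
}
```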
No. Auto Scale does not scale up servers or load balancers in a particular order.
Yes. You don’t need a load balancer as part of the launch configuration. However, you do need to configure how your servers receive requests.
No, all resources must be in the same data center. There is a different Auto Scale endpoint for each data center, and each endpoint orchestrates only within that data center. The Auto Scale Control Panel refers to data centers as Regions.
No, you must create separate scaling groups for different data centers.
Auto Scale is service agnostic and API based, so it works well with these services but does not explicitly integrate with them.