Auto Scale: FAQs
- Getting Started -
No. Your configurations cannot be migrated from other providers.
Authentication is required to create a scaling group; you must send an x-auth-token header with most API requests. Authentication is not required to execute policies via anonymous webhooks.
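As a sketch, the difference between the two request types comes down to the headers you send. The token value below is a placeholder, not a real credential:

```python
# Sketch: headers for an authenticated Auto Scale API request vs. an
# anonymous webhook execution. The token value is a placeholder.
def build_headers(auth_token):
    """Most API requests must carry an x-auth-token header."""
    return {
        "x-auth-token": auth_token,
        "Content-Type": "application/json",
    }

# Anonymous webhook executions carry no auth token at all:
anonymous_headers = {"Content-Type": "application/json"}
```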
Auto Scale works by horizontally scaling a particular tier of an application; for example, the web tier. You need to know which servers you want to scale. To get started, you need a server image configured with all needed applications and settings, so that each server is ready for service as soon as it starts. You can ensure your servers deploy fully ready for service by using configuration management tools such as Chef, Puppet, and Salt.
Not currently. A history resource on the Auto Scale API endpoint, which will show scaling history, triggers, and user changes, will be available in a future release.
Some of the actions Auto Scale takes on your behalf are deferred; for example, when you set a schedule to create additional servers. Auto Scale will soon have an advanced audit log to track when Rackspace takes actions on your behalf. You will be able to access this through the history resource on the Auto Scale API endpoint.
Auto Scale is available at no cost to Rackspace Cloud customers, although you do pay for the servers created by a scale-up until they are removed.
Auto Scale currently does not track what happens to servers outside of the Auto Scale system. If a server is deleted outside of the system, Auto Scale will continue to treat the server as if it still exists. If you then try to delete the server through Auto Scale (for example, by scaling down), no problems should occur.
A new API endpoint has been added to Auto Scale, DELETE server, that allows you to remove a specific server from a scaling group. You can use this endpoint to bring Auto Scale back in sync with the correct number of servers in a group when a server has been deleted through the API or the Cloud Control Panel. For more information, see the Rackspace Auto Scale Developer's Guide Delete server from scaling group section.
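As a sketch, the DELETE request targets a server-specific path within the group; the path format here is an illustration of the documented v1.0 API, and the endpoint and IDs are placeholders:

```python
# Sketch: building the URL for the DELETE-server call. The path format is
# illustrative of the v1.0 API; the endpoint and all IDs are placeholders.
def delete_server_url(endpoint, tenant_id, group_id, server_id):
    return f"{endpoint}/v1.0/{tenant_id}/groups/{group_id}/servers/{server_id}"

url = delete_server_url(
    "https://dfw.autoscale.api.rackspacecloud.com",
    "123456", "group-uuid", "server-uuid",
)
# An authenticated DELETE to `url` removes that server from the group.
```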
You can remove a server from an Auto Scale group and keep it on your Cloud account for observation. Auto Scale will automatically replace it with a new server.
Newly created servers have different IP addresses. If the scaling group is configured with a load balancer, clients connect through the load balancer's stable address rather than the individual server addresses.
No. Even if you add the autoscale-group-id metadata to the server, the Auto Scale back end service will not know the server belongs in the group. Auto Scale manages only servers created by Auto Scale.
Scaling and High Availability
Yes. You can add servers later.
A scaling group is a construct that contains the configuration for creating individual servers, has zero or more servers associated with it, and has one or more associated scaling policies that describe what actions to take when the policy is activated.
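The three parts of that definition map onto the JSON payload used to create a group. As a sketch, with every value a placeholder:

```python
# Sketch of a scaling-group creation payload: a group configuration, a
# launch configuration, and one scaling policy. All values are placeholders.
group = {
    "groupConfiguration": {
        "name": "web-tier",
        "minEntities": 2,    # never scale below this many servers
        "maxEntities": 10,   # never scale above this many servers
        "cooldown": 300,     # seconds between group-level scaling actions
    },
    "launchConfiguration": {
        "type": "launch_server",
        "args": {"server": {"imageRef": "image-uuid", "flavorRef": "2"}},
    },
    "scalingPolicies": [
        {"name": "scale up by one", "change": 1,
         "cooldown": 300, "type": "webhook"},
    ],
}
```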
No. Auto Scale does not scale up servers or load balancers in a particular order.
Yes. A load balancer is not required as part of the launch configuration. However, you do need to configure how your servers get requests.
No, all the resources must be in the same data center. There is a different Auto Scale endpoint for each data center, and each endpoint orchestrates only within that data center. In the Auto Scale control panel, data centers are called Regions.
No, you must create separate scaling groups for different data centers.
Auto Scale is service agnostic and API based, so it works well with these services but does not explicitly integrate with them.
A server may be created for a scale-up operation and then be immediately deleted if there is a problem with the load balancer associated with the scaling group. The load balancer problems that can cause this are:
- The load balancer is misconfigured
- The load balancer is at its limit
- The load balancer has been deleted
If any of these problems is present, Auto Scale immediately deletes the newly created server so that the customer is not billed for servers that are not in the load balancer.
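The decision above can be sketched as a simple check; the state names and function are illustrative, not the service's internal API:

```python
# Sketch of the post-scale-up decision: if the group's load balancer is
# unusable, the new server is deleted rather than billed. The state names
# and this function are illustrative, not the service's internal API.
def keep_new_server(lb_state, lb_node_count, lb_node_limit):
    if lb_state == "deleted":          # load balancer has been deleted
        return False
    if lb_state == "misconfigured":    # load balancer is misconfigured
        return False
    if lb_node_count >= lb_node_limit:  # load balancer is at its limit
        return False
    return True
```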
One possibility is that you tried to scale up or down beyond the configured minimum or maximum value. As a result, no servers could be created or destroyed. The error message could also mean that you are trying to set the needed capacity equal to what Auto Scale thinks is already there.
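Both cases reduce to the same arithmetic: the desired capacity is clamped to the configured minimum and maximum, and a delta of zero means nothing to do. A minimal sketch, with an illustrative function name:

```python
# Sketch of why a scaling request can be a no-op: the desired capacity is
# clamped to [min_entities, max_entities]; a delta of zero means no servers
# are created or destroyed. The function name is illustrative.
def apply_policy(current, change, min_entities, max_entities):
    desired = max(min_entities, min(max_entities, current + change))
    return desired, desired - current

# Already at the maximum of 10, so scaling up by 5 changes nothing:
desired, delta = apply_policy(current=10, change=5,
                              min_entities=2, max_entities=10)
# desired == 10, delta == 0 -> the "no change" error
```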
No. The server is removed from the load balancer before the delete command is sent. At present, connections are not drained.
The maximum is 86400 seconds, equal to 24 hours.
Zero seconds. As a starting point, we recommend setting the group cooldown to around 5 minutes (300 seconds).
Cooldown timers are built in to the scaling group and the individual scaling policies, so that you can prevent too many servers from being created or deleted too quickly.
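The cooldown check itself is simple time arithmetic. A minimal sketch, with illustrative names:

```python
import time

# Sketch of a cooldown check: a policy (or the group as a whole) will not
# execute again until its cooldown window has elapsed. Names are illustrative.
def can_execute(last_executed, cooldown_seconds, now=None):
    now = time.time() if now is None else now
    return (now - last_executed) >= cooldown_seconds

# With a 300-second cooldown, a trigger 60 seconds after the last
# execution is ignored; one 400 seconds later is allowed.
```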
Scheduled policies are triggered at the time they are scheduled. Other policies are triggered through a construct called a webhook: a unique URL endpoint that serves as the name or "handle" for the policy, and that you call to invoke the policy execution.
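As a sketch, a webhook is just a capability URL; an anonymous POST to it triggers the policy. The URL format shown is illustrative, and the hash is a placeholder:

```python
# Sketch: a webhook is a unique capability URL, and executing the policy is
# an anonymous POST to it (no x-auth-token needed). The URL format is
# illustrative and the hash is a placeholder.
def webhook_execute_url(endpoint, capability_hash):
    return f"{endpoint}/v1.0/execute/1/{capability_hash}"

url = webhook_execute_url(
    "https://dfw.autoscale.api.rackspacecloud.com", "abc123",
)
# POSTing to `url` would invoke the associated scaling policy.
```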
For information on the parameters used with the Auto Scale API, see the Auto Scale API Developer's Guide, Group Configurations and the Auto Scale API Developer's Guide, Launch Configuration.
For information on the parameters used with the Auto Scale Control Panel, see the Auto Scale User Guide, Creating Scaling Groups.
No. There are no specific rules within Auto Scale for monitoring specific servers. However, you can do this through monitoring configurations; see the Cloud Monitoring API Developer's Guide for details.
Yes. However, if you need to scale beyond 25 servers with a cloud load balancer, we recommend creating multiple Auto Scale groups and a tree of load balancers.
There is no maximum number of servers in a scaling group. However, a scaling group used with a cloud load balancer is limited to 50 servers per load balancer, and your Cloud Servers account may limit the number of servers you can create without having your quota raised. If you reach cloud load balancer limits, Auto Scale fails to add additional servers. If you are running up against those limits, consider creating multiple scaling groups with a tree of load balancers to service requests, or use RackConnect for a higher-capacity hardware load-balancing solution. For more information on RackConnect, see How do I get started with RackConnect?
Yes. A scaling policy is associated with a specific group. All of the scaled-up servers are managed for health and monitoring in aggregate so they need to be a part of a group.