Last updated: Feb 11, 2022
This documentation serves as a quick reference for Rackspace customers who have questions about Rackspace Kubernetes-as-a-Service.
Rackspace Kubernetes-as-a-Service (KaaS) is a managed service that enables Rackspace deployment engineers to provision Kubernetes® clusters in supported cloud provider environments. Kubernetes, an open-source container orchestration tool, enables system administrators to manage containerized applications in an automated manner. Efficiently running containerized applications is a complex task that typically requires a team of experts to architect, deploy, and maintain your cluster in your specific environment. Rackspace KaaS does these things for you so you can focus on what is vital for your business.
The Rackspace KaaS product includes the following features:
A recent conformant open-source version of Kubernetes
Your Kubernetes cluster runs the latest stable community Kubernetes software and is compatible with all Kubernetes tools. In a default configuration, three Kubernetes worker nodes are created to ensure high availability and fault tolerance for your cluster.
Logging and monitoring
Based on widely used monitoring tools, such as Prometheus®, Elasticsearch™, and Grafana®, the Rackspace KaaS solution provides real-time analytics and statistics across your cluster.
Private container image registry
While using public container image registries, such as Docker Hub and Quay®, is still an option, some of your images might require an additional level of security. A private container image registry enables you to store and manage your container images in a protected location that restricts public access.
Advanced network configuration
Rackspace KaaS uses Calico® for network policies to enable you to configure a flexible networking architecture. Many cloud environments require a complex networking configuration that isolates one type of network traffic from another. Network policies provide answers to many networking issues.
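As an illustration, a minimal sketch of a Kubernetes NetworkPolicy manifest (all names here are hypothetical) that restricts ingress traffic to a set of pods:

```yaml
# Hypothetical example: allow ingress to "backend" pods only from "frontend" pods
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-backend
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: backend        # policy applies to these pods
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Calico enforces the policy at the network layer, so traffic to the selected pods from any other source is dropped.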
Backup and recovery
Rackspace KaaS integrates VMware® Velero to create snapshots automatically of your data and restore your persistent volumes and cluster resources with minimum downtime in the event of an emergency. Velero enables you to move cluster resources between cloud providers, as well as create replicas of your production environment for testing purposes.
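As a sketch, Velero can automate recurring backups through a Schedule resource; the namespace names, cron expression, and retention period below are hypothetical:

```yaml
# Hypothetical example: nightly backup of the "production" namespace
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: nightly-production
  namespace: velero
spec:
  schedule: "0 2 * * *"      # cron: every day at 02:00
  template:
    includedNamespaces:
      - production
    snapshotVolumes: true    # also snapshot persistent volumes
    ttl: 720h                # keep backups for 30 days
```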
Broad platform support
Rackspace KaaS enables IT operators to run Kubernetes clusters on top of the following new or existing environments:
- Rackspace Private Cloud powered by OpenStack (RPCO)
- Rackspace Private Cloud powered by Red Hat (RPCR)
- Amazon® Elastic Kubernetes Services (EKS)
Rackspace built KaaS on top of various open-source projects, such as Kubespray, Terraform®, and others. Because the installer exists outside the scope of what a user accesses or operates, it has no impact on initiatives to avoid vendor lock-in.
The current offering does not account for a Build, Operate, and Transfer (BOT) model. This feature is on the current roadmap. If this is a requirement or concern, contact your Rackspace Account Manager to discuss available options.
The current Rackspace KaaS offering consumes the following OpenStack components:
- OpenStack® Compute service (nova)
- OpenStack Networking service (neutron)
- OpenStack Load Balancing service (octavia)
- OpenStack Identity service (keystone)
- OpenStack Block Storage service (cinder)
- OpenStack DNS service (designate)
- OpenStack Object Storage service (swift)
The Rackspace KaaS offering deploys an authentication bridge and a user interface on the physical servers that are also known as the OpenStack Infrastructure nodes.
Also, Rackspace KaaS on OpenStack requires Cinder backed by Ceph®. We use Ceph because Cinder’s default Logical Volume Manager (LVM) backend does not support data replication, which is required for data volume failover and the resiliency of the Kubernetes worker nodes.
The current Rackspace KaaS offering consumes the following EKS components:
- EC2 (Compute)
- VPC (Networking)
- ELB (Load Balancing)
- IAM (Roles and Identity)
- EBS (Block Storage)
- Route53 (DNS)
- S3 (Storage)
The choice of load balancers and Ingress Controllers depends on the underlying cloud platform capabilities.
Rackspace KaaS on OpenStack leverages a highly available instance of the OpenStack Octavia Load Balancing as a Service (LBaaS) with NGINX® Ingress Controllers preconfigured and deployed. This configuration enables application developers to deploy Kubernetes applications with native type: LoadBalancer support for service exposure.
Rackspace KaaS on EKS uses Elastic Load Balancers with the same NGINX Ingress Controller pre-configuration.
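A minimal sketch of a Service manifest using this support (the application name and ports are hypothetical):

```yaml
# Hypothetical example: expose an application through the cloud load balancer
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer         # fulfilled by Octavia on OpenStack or ELB on EKS
  selector:
    app: my-app              # route to pods carrying this label
  ports:
    - port: 80               # port exposed on the load balancer
      targetPort: 8080       # port the container listens on
```

When this Service is created, the cloud provider integration provisions a load balancer and publishes its address in the Service status.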
Rackspace KaaS leverages several types of storage, depending on the use case. At the object store level, we require access to a Swift object API or Ceph Rados Gateway (RGW). The object store stores backups, snapshots, and container images pushed to the private container registry (Harbor). These object store APIs are also exposed to application developers to use within their applications.
Rackspace KaaS uses object storage for Velero backups and Harbor image storage. At this time, we do not offer configurable storage for these services.
Using an object store for backups, snapshots, container image storage, and versioning is the Kubernetes community standard. By using the object store native features, Rackspace KaaS enables support for storage and versioning of disaster recovery binary large objects (blobs) over months and years.
To support OpenStack, Rackspace KaaS requires an end-to-end, highly available architecture.
By default, Cinder does not support volume replication. If a single Cinder host fails, the data stored on that block device is lost. In turn, Kubernetes cannot failover data volumes to Kubernetes nodes. By using Ceph’s volume replication, we ensure that all failure scenarios result in a volume or block device that can fulfill Kubernetes failover semantics.
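As a hedged sketch, persistent volumes in such a deployment are typically requested through a StorageClass; the provisioner name below assumes the OpenStack Cinder CSI driver and may differ in a given cluster:

```yaml
# Hypothetical example: a StorageClass for Cinder volumes backed by Ceph
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-backed
provisioner: cinder.csi.openstack.org   # assumes the Cinder CSI driver is installed
reclaimPolicy: Delete
allowVolumeExpansion: true
```

PersistentVolumeClaims that reference this class receive Ceph-replicated block devices, which is what allows Kubernetes to reattach volumes to healthy worker nodes after a failure.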
Support for other IaaS platforms is something that we are currently examining and scoping on our product roadmap. If you have an urgent requirement to support a specific IaaS platform, such as a hybrid/burst scenario, contact your Rackspace Account Manager to raise the priority to our product team.
The minimum requirements for a highly available Kubernetes cluster include the following items:
- 3 x Kubernetes Master nodes (VMs): According to the OpenStack Compute service anti-affinity policy, each Kubernetes Master node exists on a separate nova host.
- 5 x etcd nodes (VMs): According to the OpenStack Compute service anti-affinity policy, each etcd node exists on a different nova host.
The specific deployment needs and workloads determine the number of Kubernetes worker nodes. Ideally, two Kubernetes worker nodes should not be hosted on the same OpenStack compute node. However, Rackspace KaaS does not enforce this placement because doing so might dramatically increase the total count of OpenStack compute nodes in some deployments.
Therefore, you need a minimum of five OpenStack compute hosts to set affinity rules correctly for the etcd cluster.
By default, Ceph requires a minimum of three nodes for data replication and resiliency.
We work with the Fanatical AWS team to design an architecture that aligns with customer needs. Depending on the requirements, this might include autoscaling groups, multiple availability zones, or other options.
In a typical OpenStack deployment, you have a control plane and a data plane. The control plane consists of the nodes that serve the OpenStack services. The data plane consists of the aggregated physical hosts where your workloads, or virtual machines (VMs), run.
Because Rackspace KaaS on RPCO and RPCR runs within the context of OpenStack Compute nodes, it runs within the data plane of your OpenStack deployment. However, supporting services, such as authentication and others, run on the same control plane nodes as other OpenStack services.
When Kubernetes needs a new node, it issues a nova API call to create one; the Kubernetes provisioner then configures and installs Kubernetes and supporting software and adds that node to the cluster.
Kubernetes supports multiple container runtimes and image formats. After you deploy a Kubernetes cluster or add a node, Kubernetes schedules work on the worker nodes of the cluster. In a Kubernetes environment, the scheduled units are pods, deployments, and services rather than individual Docker containers.
For information about how Kubernetes schedules work, see the official Kubernetes documentation.
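As an illustration of these primitives, a minimal sketch of a Deployment manifest (the names and image are hypothetical) that asks the scheduler to run two pod replicas:

```yaml
# Hypothetical example: a Deployment that schedules two replicas of a web pod
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 2                 # the scheduler places two pods across worker nodes
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: nginx:1.21   # any OCI-compliant image works
          ports:
            - containerPort: 80
```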
Each Rackspace KaaS deployment includes a fully configured Elasticsearch®, Fluentd®, and Kibana® (EFK) stack. Your application developers can use EFK for centralized application logging for the services and applications that you deploy.
These services are open-source, upstream software with a rational default configuration for application development use.
When running applications in a Kubernetes cluster, you can use the kubectl logs command to collect the entire output of an application. Rackspace KaaS preconfigures your Kubernetes cluster to aggregate all of your application logs and make them searchable by using Elasticsearch and Kibana. You can access these logs by using the Ingress Controller that came with your cluster.
To view the logs, complete the following steps:
- In your browser, navigate to
- When prompted, enter your Identity Provider credentials.
By default, we deploy Rackspace KaaS with three Elasticsearch containers. If you need more Elasticsearch instances, you can ask your Account Manager to increase the replica count of the Elasticsearch containers.
Yes. Every Rackspace KaaS installation uses nova VMs that run a hardened Linux® OS (Container Linux) that has log rotation enabled. However, Rackspace KaaS does not expose this functionality to end-users.
Currently, the EFK stack includes Fluentd as a fully managed service. If you need to customize your deployment, contact your Account Manager to provide your use case and work with Support to enable the required customization.
Yes. Rackspace offers best practices and assistance with creating the various YAML files that are used by the Kubernetes primitives. Rackspace employees do not replace a team with Kubernetes knowledge but augment it.
Integrated Authentication allows Rackspace to configure clusters to connect and authenticate against customer-provided Identity Services, such as ones that use Security Assertion Markup Language (SAML), Lightweight Directory Access Protocol (LDAP), and so on. Examples of configurable Identity backends include Okta, Ping, and ADFS.
Using Integrated Authentication enables users to authenticate to Rackspace managed services by using a customer identity from a centralized service.
For more information, see Integrated authentication.
Currently, Rackspace support personnel add and recover nodes. To request the addition of worker nodes to your Kubernetes cluster, submit a ticket in your account Control Panel.