Internal Release Notes
Rackspace Kubernetes-as-a-Service
Last updated: Feb 11, 2022
Release: v6.0.2
Release notes:
- Rackspace KaaS Internal Release Notes
- Release 6.0.2
- Release 6.0.1
- Release 6.0.0
- Release 5.1.0
- Release 5.0.0
- Release 4.0.0
- Release 4.0.0-beta.1
- Release 3.0.0
- Release 2.1.0
- Release 2.0.4
- Release 2.0.3
- Release 2.0.2
- Release 2.0.1
- Release 2.0.0
- Release 1.5.0
- Release 1.4.0
- Release 1.3.0
- Release 1.2.0
- Release 1.1.0
- Release 1.0.0
- Release 0.10.1
- Release 0.10.0
- Release 0.9.0
- Release 0.8.0
- Release 0.7.0
- Release 0.6.0
- Release 0.5.3
- Release 0.5.2
- Release 0.5.1
- Release 0.5.0
- Release 0.4.0
- Release 0.3.0
- Release 0.2.0
Rackspace KaaS Internal Release Notes
These release notes describe new, updated, and deprecated features in each version of Rackspace Kubernetes-as-a-Service (KaaS). This document is for an internal audience only and must not be shared with customers. For the external release notes, see developer.rackspace.com.
Release 6.0.2
Patch Changes
- Managed Services
  - harbor
    - harbor has been upgraded to v1.6.1 (appVersion 2.2.1) (#2340)
Release 6.0.1
Patch Changes
- Managed Services
  - harbor
    - harbor has been upgraded to v1.6.0 (appVersion 2.2.0) (#2323)
  - nginx-ingress
    - nginx-ingress has been upgraded to v3.23.0 (appVersion 0.44.0) (#2326)
- CI
- Updated Checkmarx URL and password (#2321)
- Added missing ‘bats/bats’ Docker image for the retagger script (#2319)
Release 6.0.0
The 6.0.0 Release of KaaS adds support for Kubernetes 1.19.
Minor Changes
- Managed Services
  - cert-manager-certs
    - Updated the name and namespace of the webhook configuration in the CRDs to match our release name and namespace, respectively (#2291)
  - dex
    - Updated the Ingress apiVersion to networking.k8s.io/v1beta1 to support Kubernetes 1.19 (#2301); an example manifest appears at the end of this release's notes
  - fluentd-elasticsearch
    - fluentd-elasticsearch has been upgraded to v3.1.0 (#2286)
  - kam
    - Updated the Ingress apiVersion to networking.k8s.io/v1beta1 to support Kubernetes 1.19 (#2299)
  - kubernetes-dashboard
    - kubernetes-dashboard has been upgraded to v2.0.5 (#2300)
- CI
- Added a workaround to handle errors when running the retagger script (#2289)
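For reference, the networking.k8s.io/v1beta1 Ingress shape that the dex and kam charts moved to looks like the sketch below. This is a minimal illustration only; the host, backend service name, and port are hypothetical placeholders, not values from our charts.

```yaml
# Minimal Ingress using the networking.k8s.io/v1beta1 API accepted by
# Kubernetes 1.19. The host and backend values are hypothetical.
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: dex
  namespace: rackspace-system
spec:
  rules:
    - host: dex.example.com
      http:
        paths:
          - path: /
            backend:
              serviceName: dex    # v1beta1 still uses serviceName/servicePort
              servicePort: 5556
```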
Release 5.1.0
Major Changes
- Managed Services
  - cert-manager-certs
    - cert-manager, cainjector, and webhook have been upgraded to v1.0.3 (#2278)
Minor Changes
- Managed Services
  - harbor
    - harbor has been upgraded to v2.1.1 (#2282)
  - monitoring
    - Removed the limitation restricting the single Prometheus instance to specific matching expressions on ServiceMonitors, Rules, and PodMonitors (#2277); see the sketch below
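An illustrative sketch of what removing that limitation amounts to in a prometheus-operator Prometheus resource; empty selectors match all objects. The metadata values below are hypothetical, not taken from our charts.

```yaml
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: prometheus              # hypothetical name and namespace
  namespace: rackspace-monitoring
spec:
  # Empty ({}) selectors place no matching-expression restriction, so all
  # ServiceMonitors, Rules, and PodMonitors are picked up by this instance.
  serviceMonitorSelector: {}
  ruleSelector: {}
  podMonitorSelector: {}
```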
Release 5.0.0
The 5.0.0 Release of KaaS adds support for Kubernetes 1.17.
Major Changes
- Managed Services
  - harbor
    - harbor has been upgraded to v2.1.0 (#2268)
  - monitoring
    - prometheus-customer has been removed; only a single Prometheus instance will be used (#2253)
  - nginx-ingress
    - nginx-ingress has been upgraded to v0.35.0 (#2259)
Release 4.0.0
The 4.0.0 Release of KaaS adds support for Kubernetes 1.16.
Major Changes
- Added preliminary support for Managed Services deployment on EKS (#2199)
- Due to an issue with how Helm handles deprecated Kubernetes APIs, some charts require special consideration when upgrading:
  - cert-manager-certs
    - cert-manager, cainjector, and webhook have been upgraded to v0.15.2 (#2129)
  - dex
    - dex has been upgraded to v2.24.0 (#2110)
  - harbor
    - harbor has been upgraded to v1.10.2 (#2109)
  - kubernetes-dashboard
    - kubernetes-dashboard has been upgraded to v2.0.3
  - logging
    - elasticsearch has been upgraded to v7.6.2 (#2190)
  - monitoring
    - monitoring has been upgraded to v0.38.1 (#2103)
    - grafana has been upgraded to 7.0.3 (#2103)
    - kube-state-metrics has been upgraded to v1.9.6 (#2103)
    - prometheus-node-exporter has been upgraded to v1.0.0 (#2103)
  - oauth2-proxy
    - oauth2-proxy has been upgraded to v5.1.0 (#2108)
Minor Changes
- Managed Services
  - external-dns
    - external-dns has been upgraded to v0.7.2 (#2132)
  - fluentd-elasticsearch
    - fluentd-elasticsearch has been upgraded to v2.8.0 (#1879)
  - hardening
    - hardening has been upgraded to v1.0.2 (#2156)
  - kam
    - Removed the kam sidebar link and added Prometheus customer (#2117)
    - Added Copy to Clipboard buttons for the KAM kubeconfig and credentials (#2204)
  - kube2iam
    - kube2iam has been upgraded to v0.10.9 (#2155)
  - logging
    - kibana has been upgraded to v7.6.2 (#2190)
  - metrics-server
    - metrics-server has been upgraded to v0.3.6 (#2023)
  - monitoring
    - Disabled webhook creation for Prometheus customer (#2047)
    - Added High CPU and Memory Usage alerts (#2048)
    - Scraped kubeControllerManager and kubeScheduler metrics in Prometheus for RKE clusters (#2074)
    - Added a default null receiver to the Alertmanager config (#2125); a sketch appears at the end of this release's notes
    - Added the cluster name to the Prometheus externalLabels for use in Alertmanager notifications (#2135)
  - nginx-ingress
    - nginx-ingress has been upgraded to v0.32.0 (#2134)
  - oauth2-proxy
    - Fixed the CA path (#2073)
  - velero
    - velero has been upgraded to v1.4.0 (#2126)
    - Added a Prometheus alert to check backup status (#2045)
  - Allow deployments without the managed services ca.[crt|key] (#2113)
- Cluster
  - Removed Kubespray functionality (#2084)
  - Removed EKS cluster lifecycle operations (#2095)
  - Fixed a bug with alerting on non-Rackspace namespaces (#2206)
  - Added the rackspace.com/monitored: true label to the rackspace-system namespace (#2206)
- kaasctl
  - Upgraded the kaasctl image to Ubuntu 18.04 (#2201)
  - Fixed a panic in services defaults (#2054)
  - Added a LoadServices method to ServiceManager to allow services defaults to load the chart values without coalescing user-defined overrides (#2128)
  - Fixed a bug in kaasctl logging (#2180)
  - Added yq to the kaasctl image (#2196)
  - Updated kaasctl tools versions (#2198)
    - Upgraded docker to 19.03.12
    - Upgraded kubectl to 1.16.13
    - Upgraded kubeadm to 1.16.13
    - Upgraded dumb-init to 1.2.2
- CI
- Switched cluster creation from Kubespray to Terraform-RKE (#2076)
- Added a flag to run Go tests one at a time (#2094)
- Replaced the CheckMarx scanner URL with one in the kaas Amazon S3 production bucket (#2102)
- Added a job to publish the cleanup script to the kaas Amazon S3 bucket (#2142)
- Updated terraform-rke CI names to be consistent with best practices (#2152)
- Documentation
- Documented the process for updating Helm charts (#2040)
- Added 2019 PCI compliance to docs (#2079)
- Added a stand-alone resource cleanup application (#2068)
- Automated releases for kaas (#2161)
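For reference, the default null receiver added in #2125 follows the standard Alertmanager pattern sketched below; the exact routing tree in our chart values may differ.

```yaml
# Sketch of an Alertmanager config with a catch-all "null" receiver:
# alerts that match no other route go to a receiver that has no
# notification integrations, i.e. they are deliberately dropped.
route:
  receiver: "null"
  routes: []              # real routes for pages/tickets would go here
receivers:
  - name: "null"          # no webhook/email/Slack configs attached
```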
Release 4.0.0-beta.1
Major Changes
- Due to an issue with how Helm handles deprecated Kubernetes APIs, some charts require special consideration when upgrading:
  - cert-manager-certs
    - cert-manager, cainjector, and webhook have been upgraded to v0.15.2 (#2129)
  - dex
    - dex has been upgraded to v2.24.0 (#2110)
  - harbor
    - harbor has been upgraded to v1.10.2 (#2109)
  - logging
    - elasticsearch has been upgraded to v7.6.1 (#2121)
  - monitoring
    - monitoring has been upgraded to v0.38.1 (#2103)
    - grafana has been upgraded to 7.0.3 (#2103)
    - kube-state-metrics has been upgraded to v1.9.6 (#2103)
    - prometheus-node-exporter has been upgraded to v1.0.0 (#2103)
  - oauth2-proxy
    - oauth2-proxy has been upgraded to v5.1.0 (#2108)
Minor Changes
- Managed Services
  - external-dns
    - external-dns has been upgraded to v0.7.2 (#2132)
  - fluentd-elasticsearch
    - fluentd-elasticsearch has been upgraded to v2.8.0 (#1879)
  - hardening
    - hardening has been upgraded to v1.0.2 (#2156)
  - kam
    - Removed the kam sidebar link and added Prometheus customer (#2117)
  - kube2iam
    - kube2iam has been upgraded to v0.10.9 (#2155)
  - logging
    - kibana has been upgraded to v7.6.1 (#2121)
  - metrics-server
    - metrics-server has been upgraded to v0.3.6 (#2023)
  - monitoring
    - Disabled webhook creation for Prometheus customer (#2047)
    - Added High CPU and Memory Usage alerts (#2048)
    - Scraped kubeControllerManager and kubeScheduler metrics in Prometheus for RKE clusters (#2074)
    - Added a default null receiver to the Alertmanager config (#2125)
    - Added the cluster name to the Prometheus externalLabels for use in Alertmanager notifications (#2135)
  - nginx-ingress
    - nginx-ingress has been upgraded to v0.32.0 (#2134)
  - oauth2-proxy
    - Fixed the CA path (#2073)
  - velero
    - velero has been upgraded to v1.4.0 (#2126)
    - Added a Prometheus alert to check backup status (#2045)
  - Allow deployments without the managed services ca.[crt|key] (#2113)
- Cluster
- Removed Kubespray functionality (#2084)
- Removed EKS cluster lifecycle operations (#2095)
- kaasctl
  - Fixed a panic in services defaults (#2054)
  - Added a LoadServices method to ServiceManager to allow services defaults to load the chart values without coalescing user-defined overrides (#2128)
- CI
- Switched cluster creation from Kubespray to Terraform-RKE (#2076)
- Added a flag to run Go tests one at a time (#2094)
- Replaced the CheckMarx scanner URL with one in the kaas Amazon S3 production bucket (#2102)
- Added a job to publish the cleanup script to the kaas Amazon S3 bucket (#2142)
- Documentation
- Documented the process for updating Helm charts (#2040)
- Added 2019 PCI compliance to docs (#2079)
- Added a stand-alone resource cleanup application (#2068)
- Automated releases for kaas (#2161)
Release 3.0.0
This release is a significant update that adds a number of major new features.
- Cloud Provider
- OpenStack
  - Uses Kubespray 2.10 with Rackspace-specific additions to support Designate and Octavia.
  - Upgrades Kubernetes to 1.14.10.
  - Upgrades etcd to 3.3.10.
- EKS
  - Adds limited availability of EKS clusters on Amazon using EKS Kubernetes 1.14.
- Managed Services
  - All managed services are now deployed using Helm v3. KaaS now imports Helm and supports deploying and upgrading managed services using kaasctl, while offloading the heavy lifting to Helm. kaasctl provides reasonable auto-generated defaults on a per-cluster basis, while allowing support staff to customize deployments based on the customer's needs.
  - Cluster authentication goes through Dex for all configurations. Dex allows group retrieval and mapping using Kubernetes RBAC.
    - Backend groups (Dex or your configured Dex Connector) can be mapped to Kubernetes RBAC Groups through Kubernetes Custom Resource Definition GroupMap resources:

    ```yaml
    apiVersion: kaas.rackspace.com/v1
    kind: GroupMap
    metadata:
      name: kaas-cluster-admins
      namespace: rackspace-system
    spec:
      idGroup: my-group-name-from-LDAP-backend
      k8sGroup: kaas-cluster-admins
      managedServices: []
    ```

  - The Rackspace Kubernetes Access Manager (KAM) replaces the Rackspace Control Panel. With this change, there is no longer any management capability for Roles, RoleBindings, or PodSecurityPolicies. We expect customers to use declarative configuration for these resources moving forward.
  - kube-apiserver is now configured for OIDC Authentication (OpenStack) or IAM Authentication (EKS). Cluster users get temporary credentials through KAM to access Kubernetes (a sketch of the relevant kube-apiserver flags appears at the end of this release's notes).
- The following managed services have been upgraded:
- ElasticSearch 6.4.0 -> 7.4.1
- External DNS 0.5.6 -> 0.5.17
- Grafana 5.2.2 -> 6.2.5
- Harbor 1.5.2 -> 1.8.2
- Kibana 6.4.0 -> 7.2.0
- Kubernetes Dashboard 1.10.1 -> 2.0.0-rc5
- Nginx Ingress 0.14.0 -> 0.28.0
- Prometheus Operator 0.25.0 -> 0.31.1
- Prometheus 2.4.3 -> 2.10.0
- Velero (formerly Ark) 0.9.2 -> 1.2.0
- The following managed services have been added:
- AlertManager 0.17.0
- Dex 2.17.0-rackspace.1
- Cert-Manager 0.11.0
- Metrics-Server 0.3.4
- Rackspace Kubernetes Access Manager (KAM) 3.0.0
- The following managed services have been removed:
- Node problem detector
- Rackspace Control Panel
- Container Linux Update Operator
- Rackspace MaaS Agent (replaced by AlertManager)
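The kube-apiserver OIDC configuration mentioned above is driven by the standard upstream flags; the sketch below shows the general shape, with a hypothetical Dex issuer URL, paths, and claim names rather than our actual values.

```yaml
# Fragment of a kube-apiserver static pod spec (all flag values hypothetical):
spec:
  containers:
    - name: kube-apiserver
      command:
        - kube-apiserver
        - --oidc-issuer-url=https://dex.example.com   # Dex endpoint
        - --oidc-client-id=kubernetes
        - --oidc-username-claim=email
        - --oidc-groups-claim=groups                  # enables group mapping
        - --oidc-ca-file=/etc/kubernetes/pki/dex-ca.pem
```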
Release 2.1.0
This release introduces new tuning and configuration for FluentD.
- Managed services:
- K8S-2413: Updated the FluentD configuration file to enable Prometheus monitoring.
- kaasctl:
  - Fixed a bug with urllib3 dependencies not being fulfilled by explicitly installing urllib3 v1.24.2.
Release 2.0.4
This release focuses on bug fixes and continuous integration (CI) improvements:
- Cloud provider (openstack):
  - K8S-2176: Update the API LB monitor to an HTTPS health check.
  - Set the Octavia timeout to ten minutes for RPCR.
- Cluster manager:
  - K8S-2389: Change the CA extension from .crt to pem. (#1150)
  - K8S-2395: Conditionally set the CA for the auth token webhook. (#1158)
- Managed services:
- K8S-2423: Allow external-dns to use the system CA. (#1157)
- Tests:
- K8S-2412: Populate the Domain field in the OpenStack Dashboard login.
- K8S-2411: Use vlanNetworkName to obtain the master IP address. (#1144)
- K8S-2424: Do not force the configuration file to have a CA. (#1147)
- K8S-2330: Fix for the test_horizon_login QC test in OSP12. (#1156)
- CI:
- Add unit tests to the pipeline.
- Add --no-confirm to the CI cleanup make target.
- Expose the Jenkinsfile per-TestSuite runFrequencies setting.
Release 2.0.3
This release focuses on bug fixes and continuous integration (CI) improvements.
- Kubernetes
- K8S-2368 Update the Kubernetes version to 1.11.8, which addresses CVE-2019-1002100.
- Managed Services:
- Fix an idempotency issue with the Managed Services certificate manager that was discovered during feature work.
- kaasctl:
  - K8S-2213 Add the CoreOS build ID and version to kaasctl cluster versions.
  - K8S-2316 Add support for passing tags to kaasctl cluster test [integration|qc].
- CI:
- K8S-2147 Refactor cluster lifecycle management testing into pytest.
Release 2.0.2
This release focuses on bug fixes and continuous integration (CI) improvements.
- Managed Services:
- K8S-2331: Preset 365-day validity for the Managed Services certificate authority (CA).
- K8S-2325: Install the Managed Kubernetes certificate on Mac.
- K8S-2205: Add cron jobs, namespaces, and secrets collectors to kube-state-metrics.
- kaasctl:
  - K8S-2196: Remove Flannel from the kaasctl cluster update command logic.
  - Implement a flag on kaasctl cluster test conformance to run Heptio™ Sonobuoy serially to submit results to the Cloud Native Computing Foundation (CNCF).
- Minor bug fixes and test improvements.
Release 2.0.1
This release is focused on bug fixes.
- kubespray submodule
  - K8S-2114 Ensure Terraform instances don't get destroyed when user_data is updated.
- OpenStack provider
- K8S-2198 Enable customers to provide CA certificates to be installed on all cluster nodes immediately after provisioning.
- K8S-2212 Fix a bug in SSHHealthChecks by improving discrimination between private and floating IPs on cluster nodes.
Release 2.0.0
This release is focused on improving internal deployment and lifecycle management tools.
kaasctl
The legacy Kubernetes Installer has been replaced with a new tool called kaasctl (pronounced "kas-cuttle") that introduces many changes to improve the KaaS deployment process. The goal of changing the internal components of the Kubernetes Installer is to move KaaS to a new level of scalability, add functionality that enables enhanced cluster lifecycle management, and enable multi-cloud support.
The new kaasctl tool is not an identical replacement for the old Kubernetes Installer but rather a new approach to cluster deployment and lifecycle management. The new approach enables not only provisioning of Kubernetes clusters but also easier upgrades, node replacement, and multi-cloud support. With the old tool, deployment engineers had to adjust the workflow for each particular deployment based on unique customer requirements. The new kaasctl tool is designed to eliminate the need for such customer-specific workflows.
Primary differences between the old Kubernetes Installer and the new kaasctl include:
- Kubespray deployment workflows replace many legacy Tectonic workflows. Kubespray is an open-source tool for deploying Kubernetes clusters on various cloud platforms, both in the cloud and on-premises. Kubespray resolves many issues associated with day 2 cluster operations and enables you to deploy Kubernetes on top of multiple cloud platforms, including Amazon® EKS, Google® Compute Engine, Microsoft® Azure®, and others.
- Quay.io access is required to deploy a Kubernetes cluster.
- kaasctl implements a new command-line interface (CLI) and, therefore, replaces the old commands that were based on Makefile targets.
NOTE: All kaasctl commands must be executed within a Docker interactive session.
Commands comparison
Action | Old make command | New kaasctl command |
---|---|---|
Initialize a provider’s directory | N/A | kaasctl provider init <provider-type> [options] |
Configure a cluster project | make deploy | kaasctl cluster init <cluster-name> [options] |
Deploy a cluster | make deploy | kaasctl cluster create <cluster-name> [options] |
Deploy managed services | N/A | kaasctl services deploy <cluster-name> [options] |
Validate a cluster | make conformance | kaasctl cluster test conformance <cluster-name> [options] |
Replace a Kubernetes node | make replace-node node=<node-name> | kaasctl cluster replace-node <cluster-name> [options] |
View the list of versions | N/A | kaasctl cluster hotfix <cluster-name> --list-versions |
View the list of nodes | N/A | kaasctl cluster list-nodes <cluster-name> |
Update a cluster | N/A | kaasctl cluster update <cluster-name> [options] |
- Kubernetes
- The Kubernetes version is upgraded to 1.11.5.
- Switch from Flannel to Calico as the container networking provider.
- Switch from kube-dns to core-dns for cluster DNS.
- Switch the kube-proxy backend from iptables to ipvs; a configuration sketch appears at the end of this release's notes.
- Managed Services
- Heptio Velero upgraded to 0.10.1.
- KaaS Control Panel
- K8S-2038 Manage control panel session cookie server-side.
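The kube-proxy backend switch noted above is controlled by the upstream KubeProxyConfiguration API; a minimal sketch follows (not our exact deployed config).

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"   # kube-proxy defaults to the iptables backend when unset
```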
Release 1.5.0
This release is focused on bug fixes and general stability.
- Continuous Integration (CI):
- Make integration tests more reliable by waiting for all dependencies to be healthy before running tests.
- Improve debugging workflow for failed builds by outputting Kubernetes events when a build fails.
- Persistently retry operations that are prone to intermittent failures, for example, in components that access the Internet.
- Security:
- Implement and document patches and support scripts to address CVE-2018-1002105.
- Prune the scope of the default cluster admin role binding to only the intended users. Other services that had also been leveraging this role binding have each been given their own appropriately restrictive roles and role bindings.
- KaaS Control Panel:
- The control panel and auth service are now deployed inside the cluster, and no longer act as a single, shared dependency for all clusters in an environment. The auth service is no longer publicly accessible, and the control panel is accessed through the ingress controller, the same as any of the other managed services with web frontends.
Release 1.4.0
This release is focused on bug fixes and minor improvements across all the KaaS subprojects. One of the significant improvements includes support for RPCO Queens v17.1 and Kubernetes version upgrade to 1.10.8.
- Continuous integration (CI):
- Cleaned up Jenkins configuration management (#573).
- Added prevention of CI failure on merge-triggers (#545).
- Automated MNAIO base build (#500).
- KaaS Control Panel:
- Added support for CRUD operations for namespaces and roles (#564).
- DNS:
- Simplified the cluster domain name format (#569) (#555):
  - For example, in previous versions users accessed Kibana by using the kubernetes-<slug>.<slug>.mk8s.systems/kibana URL. This version of KaaS changed the URL to kibana.<slug>.mk8s.systems.
- Disaster recovery and backups:
- Added support for Heptio Ark. Enabled the creation of volume snapshots, support for application backup and restore operations, and environment replication (#581).
- Kubernetes Installer:
- Fixed the issue with the ‘openstack purge project’ command that was occasionally failing to delete networks (#603).
- Completed the first iteration of refactoring tools/installer/hack/apply-managed-services.sh into Golang (#590).
- Created a CLI tool for Day 2 operations (#589).
- Optimized the Kubernetes Installer Dockerfile (#594).
- Added ntpd and timesyncd to etcd nodes (#550).
- Fixed the apply-managed-services and installer version build flags (#543).
- Enabled sending of system logs to a separate Elasticsearch instance if specified (#536).
- Golang:
- Migrated from Glide to Go modules (#558).
- Kubernetes:
- Upgraded to version 1.10.8 (#567).
- Changed heapster to access kubelet on port 10250 (#556).
- Switched the hyperkube image version to upstream 1.10.7 (#553).
- Added ntpd and timesyncd to etcd nodes.
- Monitoring as a Service (MaaS):
- Added a MaaS check for control plane connectivity issues (#544).
- Fixed the MaaS ConfigMap for monitoring kubelet (#538).
- RPC:
- Added compatibility with RPCO Queens v17.1 (#501).
- Fixed quota for RPCR (#547).
- Terraform:
- Consolidated 100% of cluster configuration options into CONFIG_FILE (#577).
- Consolidated 100% of cluster configuration options into
- Test improvements:
- Fixed a flaky ntp test in QC (#607) (#576) (#592).
- Tested access to the OpenStack Dashboard (#605).
- Made the VMware Harbor QC test cases optional (#574).
- Increased resiliency for the Cinder integration test (#600).
- Added failfast for Python integration tests (#604).
- Tested the KaaS Control Panel login through the UI (#548).
- Caught the timeout on the volume attach operation for a node failure (#539).
Release 1.3.0
This release focused largely on bug fixes and minor improvements across all the KaaS subprojects.
Changes
- General:
  - Rearrange the /docs directory structure.
- Continuous integration (CI):
  - Don't run installer PR branch CI jobs on timer triggers.
- Auth:
  - Query backend info when listing users.
  - Refactor to use gorilla/mux for HTTP routing.
  - Fix Jenkins TCP port contention by using a shared internal Docker network in e2e tests.
  - Move README.md to /docs.
- Control panel:
  - Improve handling of empty and error states for namespace & PSP lists.
  - Optimize main-nav to avoid excessive re-renders.
  - Replace basic lists with new List components.
  - Alphabetize volume types in the UI VolumeList component.
  - Display info from the backend when listing users.
  - Move README.md to /docs.
- Ephemeral token service:
  - Don't forward empty tokens in ETA.
  - Add monitoring for ETA.
  - Fix the docker build command for ETG.
  - Move README.md to /docs.
- Upgrader:
  - Initial implementation of upgrader tooling; handles upgrades from K8S 1.9.6 to 1.10.5.
  - Move README.md to /docs.
- Installer:
  - Internal changes:
    - Fix the data path for Elasticsearch.
    - Ensure Grafana data persistence with a PVC.
    - Change ingress/registry DNS to wildcard.
    - Move the Harbor registry behind the nginx ingress.
    - Move the k8s Prometheus behind OAuth.
    - Remove Alertmanager.
    - Rewrite the rebuild-api-lb support script in Golang.
    - Add monitoring for ETA.
    - Add monitoring for ingress and registry endpoints.
    - Add a beta cert-manager deployment.
    - Adjust node memory eviction thresholds.
    - Fix kubelet monitoring by using the correct port.
  - Testing:
    - Test host-based ingress.
    - Test that Elasticsearch data is persistent.
    - Fix PSP test flakiness.
    - Fix the test_pod_status test module by skipping Job pods.
    - Fix the sonobuoy-conformance make target.
    - Move README.md to /docs.
  - External changes:
    - Update fluentd to 2.2.0.
    - Update Harbor to 1.5.2; adds native OAuth support.
    - Update kube-state-metrics to 1.3.1.
    - Update ES/Kibana to 6.4.0 and disable the xpack trial.
- OAuth Proxy:
  - Clean up usage of escaped quotes.
  - Move README.md to /docs.
- RPC Ansible:
  - Add a deploy script to correct an LXC container issue.
  - Remove the git clone dependency for LXC setup.
  - Fix systemd reload order.
  - Specify the Python interpreter for Queens.
  - Ensure mk8s_service_account_project_id is populated in the ETP playbook.
  - Fix service account user creation.
  - Move README.md to /docs.
Release 1.2.0
This release focused largely on improvements to the deployment process while fixing deeply baked-in assumptions about the use of a CA. It also includes a variety of improvements across the board, including CI, the Installer, the Control Panel, and integration testing.
Changes
- Overall:
  - Improvements to the release process and documentation.
- CI:
  - Various improvements to Jenkinsfile conditionals.
- Bundler:
  - The bundler is no more.
- Auth
- Control panel:
  - Add links to managed services to the UI.
  - Add PodSecurityPolicy management to the UI.
- Ephemeral token service:
  - Monorepo best-practices refactor.
  - Return 401 from ETP when there is no auth token in the request.
  - Fix a bug in the RPCO metadata request schema.
- Installer:
  - Internal changes:
    - Refactor cluster creation shell functions.
    - Add retry to bash manifest deploy functions.
    - Only use the Docker entrypoint script when running as non-root on Linux.
    - Add NTP monitoring.
    - Bump the checkpointer image version.
    - Increase the Prometheus retention period to 1 week.
    - Update the Kubernetes client-go dependency to 8.0.0.
  - Testing:
    - Add a Golang unit test stage to CI.
    - Add a Golang unit test - test installer config validation.
    - Add a Python integration test - verify services are resilient to node failures.
    - Add a Python integration test - verify that Prometheus is pulling etcd data.
    - Add a Python integration test - verify Pods & Containers are in the Running state.
    - Add a Python integration test - verify PodSecurityPolicy.
    - Fix a Python integration fixture - include system certs in REQUESTS_CA_BUNDLE.
    - Cleanly separate pytest and qtest execution.
  - External changes:
    - Update to K8S 1.10.5.
    - Don't require a CA to be provided during deployment.
    - Deploy oauth-proxy in front of the Kubernetes Dashboard.
    - Fix the label used to point the Service to the Grafana Pod.
    - Fix the URI in the ES exporter.
    - Add an interactive prompt for cluster/admin configs with missing required fields during validation.
    - Add sonobuoy-quick and sonobuoy-conformance make targets and document how to use them.
    - Add our own .toolbox rc for cluster nodes.
    - Implement dry-run mode for resource purge.
- OAuth Proxy:
  - Improve flexibility with HTTPS backends.
- RPC Ansible:
  - Create playbooks to set up Designate on RPCR.
  - Add tags to selectively run portions of playbooks.
  - Turn the undercloud RC file into a user variable.
  - Add an external network for the named listener.
  - Enable iptables rules persistence.
  - Disable DNS recursion.
  - Set the admin role on the mk8s project instead of the admin project.
  - Fix a bug involving use of the wrong CoreOS image directory.
  - Add oidc-signing-key to the control panel deployment.
  - Octavia script improvements.
  - Add a Makefile used for the RPCR k8s install.
  - Update ETP_PROXY_IP to reference a DNS name during deployment.
  - Validate services after the mk8s RPCR deploy.
  - Switch Designate to use worker instead of pool-manager.
  - Ensure git is installed in LXC.
  - Add mk8s_oauth_signing_key to mk8s_user_secrets.yml.
- Miscellaneous:
  - Numerous documentation improvements.
Release 1.1.0
This release added preliminary support for RPCR, added initial OAuth integration, and improved the stability and reliability of cluster deployments.
Changes
- Bundler:
  - "Dockerized" the bundler.
  - Added a CI pipeline.
  - Push the bundle image on tag.
- Control panel:
  - Add oauth-proxy integration.
- Ephemeral token service:
  - Added sequence diagrams.
  - Added CI pipelines for all components (ETS/ETP/ETG).
  - "Dockerize" ETG/ETP.
  - Add build flags to support RPCR differences.
- Installer:
  - Internal changes:
    - Make the volume backend name configurable.
    - Monorepo refactoring.
    - Validate the cluster name on creation.
    - Add a config option to disable auto OS updates.
    - Add ASC team integration (qtest, etc.).
    - Add Python linting to CI.
    - Fix formatting errors in managed service manifests.
    - Remove the version entry from the installer config.
    - Remove useETA from the installer config.
    - Add node/pod info to monitoring alerts.
  - Testing:
    - Add test - internal registry with a bad secret.
    - Add test - internal registry with user creds.
    - Add test - internal registry with admin creds.
    - Add test - internal registry project fixture.
    - Add test - internal registry uses Ceph.
    - Add test - validate the MaaS endpoint.
    - Network policy test rewritten in Python.
    - etcd snapshot test rewritten in Python.
    - Elasticsearch replica count test rewritten in Python.
  - External changes:
    - Change etcd node names to be consistent with masters/workers.
    - Show node names in monitoring dashboards rather than IPs.
    - K8S version updated to 1.9.6_rackspace.1.
    - Add the k8s dashboard.
- OAuth proxy:
  - Add an OAuth proxy service to allow SSO for managed services.
- RPC Ansible:
  - Added support for RPCR.
  - Stage the CoreOS image on install.
- Miscellaneous:
  - Numerous documentation improvements.
Caveats
- The OAuth proxy currently only works for Kibana and Prometheus. Other services will be added in the future.
- The Docker registry is not highly available, but Kubernetes will restart it if it fails.
- All Kubernetes masters currently have a floating IP assigned, making them publicly accessible. However, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled/optional in a later release.
Release 1.0.0
This is the first General Availability (GA) release. This release is focused on stability and deployment process improvements.
Repo consolidation
As part of this release, the various GitHub projects associated with the Rackspace Kubernetes-as-a-Service (KaaS) product were merged into one “monorepo”. As a result, the scope of these release notes has expanded to include these other projects.
Changes
- Auth service
  - None.
- Bundler
  - Copy assets to the expected location in LXC containers on deployment.
  - Support the monorepo.
  - Build Go binaries as part of the bundler tasks.
- Control panel
  - Switch from Godep to Dep.
- Ephemeral token service
  - Switch from Godep to Dep.
- Installer
  - Internal changes
    - Upgrade Terraform to 0.11.7.
    - Upgrade the Terraform OpenStack plugin to 1.5.0.
    - Upgrade Go to 1.10.
    - Upgrade the Docker client to 18.03.1.
    - Ported most existing tests to Python.
    - Disable Container Linux auto-updates for CI clusters.
    - Delete the cluster from auth during purge.
    - Don't delete the cluster-admin role on cluster clean.
    - The installer image may now run from an image registry and does not require the mk8s repo to be cloned.
    - Add quality control (QC) checks to verify that key cluster components are functional after deployment.
  - External changes
    - Add VLAN as a networking option for k8s clusters.
    - Services that use a Persistent Volume (PV) now use the StatefulSet resource type for more reliable lifecycle management.
    - Add default Pod Security Policies (PSP) that restrict the actions a user may take on a k8s cluster.
    - Upgrade nginx-ingress-controller to 0.14.0.
    - Upgrade the Container Linux Update Operator to 0.6.0.
- Jenkins
  - Numerous updates to support the switch to a monorepo.
  - CI jobs are now scoped by subdirectory rather than including the entire repository.
- Miscellaneous
  - Product name changed from Managed Kubernetes to Kubernetes-as-a-Service.
  - Numerous documentation improvements.
Caveats
- The Docker registry is not highly available, but Kubernetes will restart it if it fails.
- All Kubernetes masters currently have a floating IP assigned, making them publicly accessible. However, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled/optional in a later release.
Release 0.10.1
This release focuses on improvements to managed services.
- Managed service updates
  - Add a new service to periodically capture etcd snapshots.
  - Configure the registry to use Ceph RGW rather than block storage.
  - Added additional checks to the monitoring service.
- Caveats
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
  - All Kubernetes masters currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
Release 0.10.0
This release focuses on stability improvements.
- Update to Kubernetes 1.9.6
- Improve monitoring of external etcd
- Managed service updates
  - Fix the registry deployment affected by CVE-2017-1002101.
  - Update the nginx-ingress-controller image to v0.12.0.
  - Reduce Elasticsearch memory requirements.
- Caveats
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
  - All Kubernetes masters currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
Release 0.9.0
This release focuses on stability improvements.
- Update to Kubernetes 1.9.3
- Switch Cinder backend to Ceph for better reliability
- Switch from self-hosted to external etcd
- Switch VM image channel from Container Linux Beta to Stable
- Add a Prometheus instance to monitor customer workloads
- Reduce Elasticsearch memory requirements
- There are a few single points of failure (SPoF) to be aware of.
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking
  - All Kubernetes masters currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
Release 0.8.0
This release focuses on stability and systems integration.
- Update to Kubernetes 1.9.2
- Update monitoring deployments and dashboards
  - Update to Prometheus 2.0
- Switch to the Octavia provider for OpenStack load balancers
- Update images for managed services
  - Update container-linux-update-operator to v0.5.0
  - Update nginx-ingress-controller to v0.9.0
  - Update node-exporter to v0.15.0
  - Update prometheus-operator to v0.15.0
  - Update elasticsearch-exporter to 1.0.2
- Configure monitoring agents on cluster VMs
- Change the authentication method to use bearer tokens and cluster roles rather than admin TLS certs
- There are a few single points of failure (SPoF) to be aware of.
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking
  - All Kubernetes masters currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
- Storage
  - Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
Release 0.7.0
This release focuses on stability.
- Enhanced the user interface to allow downloading a KUBECONFIG with a bearer token pre-configured.
- Added support for provisioning Octavia load balancers.
- Configured Harbor to use the Rackspace KaaS authentication backend, allowing the use of bearer tokens with Harbor.
- Significant improvements to user documentation.
- Updated managed services to use the CA certificate from Kubernetes.
- Significant monitoring improvements.
- There are a few single points of failure (SPoF) to be aware of.
  - The OpenStack load balancers are not highly available if you do not configure them to use Octavia.
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking
  - All Kubernetes masters currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
- Storage
  - Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
Release 0.6.0
This release focuses on systems integration and core functionality.
New Features
- Updated to Kubernetes 1.8.2.
- Updated container-linux-update-operator to v0.4.1.
- Updated defaultbackend to v1.4.
- Updated nginx-ingress-controller to v0.9.0-beta.15.
- Added the ability to provision clusters that are connected to the Rackspace KaaS Control Panel.
- Added the ability for OpenStack credentials managed by Kubernetes to be hardened with a series of backend services.
- Switched Cluster DNS to be using OpenStack Designate, removing AWS Route 53 for Kubernetes clusters on OpenStack.
- Set an eviction threshold on kubelet.
- Added monitoring for Harbor Registry.
- Integrated Harbor into Rackspace Kubernetes-as-a-Service Authentication.
- Added UI tooling allowing cluster Administrators to manage cluster access through the UI.
- Added DNS records for Managed services including the registry during cluster provisioning.
- Added
node-problem-detector
as a managed service. - Enabled Network Policy (Calico or Canal) by default.
Changes from the previous version
A bug was fixed in Kubernetes 1.8.2 so that an explicit annotation is no longer required for a LoadBalancer Service to get a public IPv4 address.
Known Issues & Limitations
- There are a few single points of failure (SPoF) to be aware of.
  - The OpenStack load balancers are not highly available.
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking
  - All Kubernetes masters currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
- Storage
  - Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
Release 0.5.3
This release focuses on stability improvements and bug fixes.
New Features
- Updated to Kubernetes 1.8.1
- Set a memory eviction threshold so that Kubernetes will evict workloads to prevent nodes from crashing when they run out of memory. This protects against malicious applications and memory leak bugs (a sketch follows this list).
- Reduced memory requirements and startup time for the managed Kibana service.
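In current Kubernetes, this kind of threshold is expressed in the kubelet configuration as sketched below. The value shown is illustrative, not the threshold KaaS shipped; in the 1.8-era clusters this was set via kubelet flags such as --eviction-hard.

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
evictionHard:
  memory.available: "500Mi"   # illustrative: evict pods before the node itself starves
```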
Additional Notes
Clusters are currently affected by an upstream bug in Kubernetes 1.8. This bug caused stability problems on previous 0.5.x releases and has been remediated by the eviction threshold change. Until the underlying bug is fixed, the kube-apiserver pods will continue to consume available memory until they are evicted and automatically restarted. The kube-apiserver pods only run on Master nodes, not Worker nodes, so customer workloads should not be affected.
Release 0.5.2
This release continues towards a broader availability through improvements to stability and performance of managed services.
New Features
- The ingress controller and registry are now exposed behind load balancers with associated DNS records
Release 0.5.1
This minor release updates to Kubernetes 1.8.0 official.
Release 0.5.0
This release continues towards a broader availability through improvements to stability, performance, managed services, as well as upgrading Kubernetes.
New Features
This update of Rackspace Kubernetes-as-a-Service includes the following new features and bug fixes:
- Kubernetes 1.8.0-rc.1: Kubernetes components have been upgraded to this version. For information on what is new in Kubernetes, see the upstream CHANGELOG.
- The Docker image registry is now using Harbor as the image registry
- Image security scanning is available in the registry via Clair
- LoadBalancers now delete cleanly in all cases
- Added resource limits to services in the rackspace-system and rackspace-monitoring namespaces
- Configured Kubernetes to reboot a node when in an out-of-memory situation
- Re-enabled the CoreOS Reboot Update operator
Changes from the previous version
Due to a regression in Kubernetes 1.8.0-rc.1, LoadBalancer requires an annotation to allocate a public IPv4 address:
```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx
  annotations:
    service.beta.kubernetes.io/openstack-internal-load-balancer: "false"
spec:
  type: LoadBalancer
  ports:
    - port: 80
      protocol: TCP
      name: http
  selector:
    k8s-app: nginx
```
- Disabled Calico support as this is broken in Kubernetes 1.8.0-rc.1
Known Issues & Limitations
- There are a few single points of failure (SPoF) to be aware of.
  - The OpenStack load balancers are not highly available.
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking
  - All Kubernetes masters/nodes currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
- Storage
  - Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
Release 0.4.0
Major themes for this release were HA improvements, operational expertise, and scaling.
New Features
This update of Rackspace KaaS includes the following new features:
- Kubernetes 1.7.3: Kubernetes components have been upgraded to this version. For information on what is new in Kubernetes, see the upstream CHANGELOG.
- Added Rackspace CA to Container Linux instances
- New dashboards for cluster monitoring:
  - Selection by Namespace, Deployments, and Pods
  - Request rate, error rate, and response times are also included on the App Metric Dashboard
Changes from the previous version
- Port 80 can now be used for Services
- Bumped the following managed service images to the latest supported versions:
  - ingress controller
  - elasticsearch
  - fluentd
  - kibana
  - kube-state-metrics
  - prometheus
Known Issues & Limitations
- There are a few single points of failure (SPoF) to be aware of.
  - The OpenStack load balancers are not highly available.
  - The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking
  - All Kubernetes masters/nodes currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
  - Kubernetes does not correctly delete the cloud load balancer for a Kubernetes Service of type: LoadBalancer that has 2 or more spec.ports. To avoid problems, limit Kubernetes Service objects of type: LoadBalancer to only 1 port (a sketch follows this list).
- Storage
  - Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
  - If a pod with an attached Cinder persistent volume is rescheduled to another node, it will fail trying to detach/attach to the other node.
Release 0.3.0
Major themes for this release were HA improvements, operational expertise, and scaling.
New Features
The 0.3.0 update of Rackspace KaaS includes the following new features:
- Kubernetes 1.7.1: Kubernetes components have been upgraded to version 1.7.1. For information on what is new in Kubernetes 1.7, check out their release notes here: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG.md#major-themes
- Horizontal Scaling: Adding additional worker nodes to existing clusters is now supported
- HA Kubernetes nodes: Kubernetes nodes are always placed on different physical hardware now, so a single hardware failure on a node is tolerated
- Custom DNS resolution: Clusters now support using custom DNS nameservers
Changes from 0.2.0
- Smaller master nodes: Kubernetes controllers are now using smaller flavors to optimize resource usage
- Improved logging and monitoring: Additional system and managed services have been configured for logging and monitoring including etcd and Elasticsearch
Known Issues & Limitations
- There are a few single points of failure (SPoF) to be aware of.
- The OpenStack load balancers are not highly available.
- The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking: All Kubernetes masters/nodes currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
- Storage: Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
- Kubernetes does not correctly delete the cloud load balancer for a Kubernetes Service of type: LoadBalancer that has 2 or more spec.ports. To avoid problems, limit Kubernetes Service objects of type: LoadBalancer to only 1 port.
Release 0.2.0
Major themes for this release were PersistentVolume support, laying the foundation for unified authentication and security controls, and self-hosted etcd for easier future upgrades.
Also in this release, we moved to a more robust networking configuration with private Neutron networks and floating IPs, and introduced Prometheus for monitoring and metrics, as well as persistent storage backends for all managed services.
New Features
The 0.2.0 update of Rackspace KaaS includes the following new features:
- Persistent Volume Support: As of 0.2.0 we have full support for Persistent Volume Claims (PVCs). This supports static and dynamic claims and is backed by Cinder block devices (a sketch of a claim follows this list). Please see "Known Issues & Limitations" for known issues.
- Docker Registry: As part of the Kubernetes cluster, a Docker registry is deployed into the cluster as a managed service. This allows you to now perform docker “push/pull/build” directly against your Kubernetes cluster and store your custom application images for the cluster to consume.
- Aggregated Metrics and Alerts: In 0.2.0 we have added a new managed service for monitoring and metrics via Prometheus. Prometheus is pre-configured with basic cluster health monitors and a customizable Grafana dashboard. This is meant to not only monitor your cluster, but also for application developers deploying Kubernetes applications into the cluster to set up custom checks.
- Improved unified authentication system: Kibana, Prometheus, and the Docker Registry all share a single basic authentication.
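A minimal claim of the kind this enables is sketched below; the name and size are hypothetical, and the Cinder-backed storage class is assumed to be the cluster default.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: app-data              # hypothetical
spec:
  accessModes:
    - ReadWriteOnce           # Cinder block devices attach to one node at a time
  resources:
    requests:
      storage: 10Gi
```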
Changes from 0.1.0
What did we change?
- Networking: The networking layout and design has changed to be in line with Kubernetes best practices, including floating IP addresses and private networks.
- ElasticSearch has been configured to only retain 3 days of aggregated logs in order to prevent running out of volume space; in the future this will be user configurable.
- Improved HA configuration for the Kubernetes API, the Kubernetes Ingress controller, and the etcd cluster.
- All managed services now run within the “rackspace-system” namespace.
Known Issues & Limitations
- There are a few single points of failure (SPoF) to be aware of.
- The OpenStack load balancers are not highly available.
- The Docker Registry is not highly available, but Kubernetes will restart it if it fails.
- Networking: All Kubernetes masters/nodes currently have a floating IP assigned, making them publicly accessible; however, they are still behind the firewall and all services are behind authentication. Publicly accessible IP addresses will be disabled or made optional in a later release.
- Storage: Cinder-backed persistent volumes are not fault tolerant and may introduce read/write performance penalties.
- Kubernetes does not correctly delete the cloud load balancer for a Kubernetes Service of type: LoadBalancer that has 2 or more spec.ports. To avoid problems, limit Kubernetes Service objects of type: LoadBalancer to only 1 port.