What’s in OpenShift 4?


OpenShift, arguably the most popular Kubernetes distribution for the hybrid cloud, has recently received its 4th major release! The release is the result of Red Hat’s (now IBM’s) acquisition of CoreOS and merges two leading Kubernetes distributions, Tectonic and OpenShift. Both platforms had their advantages, large open source communities, and solid standing in the cloud-native space.

  • CoreOS Tectonic: the operator framework, the quay.io container build and registry service, and a stable, tiny Linux distribution with Ignition bootstrapping and a transaction-based update engine.
  • OpenShift: wide enterprise adoption, security, and multi-tenancy features.

What do we get as an outcome of such a merge?

Short answer: OpenShift 4 is built on top of Kubernetes 1.13 and comes with three main features:

  • Self-Managing Platform
  • Application Lifecycle Management
  • Automated Infrastructure Management

However, the devil is in the details, so let’s have a closer look!

New installer

The new openshift-install tool, together with operators, replaces the old Ansible scripts and is the first significant difference you notice compared to OpenShift v3.

The install experience is straightforward; the whole process can be done in one command, as sketched below, and requires minimal infrastructure knowledge, since the tool follows a “success first, tweak later” principle.

https://docs.openshift.com
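For illustration, here is a hedged sketch of the install-config.yaml that `openshift-install create install-config` generates and `openshift-install create cluster` consumes; the domain, cluster name, region, and credentials below are all placeholders:

```yaml
# install-config.yaml -- generated and then consumed by openshift-install.
# Every value here is a placeholder, not from a real cluster.
apiVersion: v1
baseDomain: example.com            # DNS zone the cluster lives under
metadata:
  name: demo                       # cluster name, prefixed to DNS records
compute:
- name: worker
  replicas: 3
controlPlane:
  name: master
  replicas: 3
platform:
  aws:
    region: eu-west-1
pullSecret: '...'                  # image pull secret from cloud.redhat.com
sshKey: ssh-ed25519 AAAA...        # optional key for debugging nodes
```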

It took me 40 minutes from start to a ready-to-use platform. The new installer implements the Cluster API and “static pods” concepts, which means using the Kubernetes API for Kubernetes lifecycle management, e.g., bootstrap, upgrades, and configuration management. The whole process may sound weird; however, such an architecture lets you work with Kubernetes cluster rollouts the same way as with any cloud-native application, using the same tooling, API, and expertise. Additionally, the logic used by the installer is reused as part of automated upgrades, which is vital for avoiding configuration drift in the future.

The installer has two modes: installer-provisioned and user-provisioned infrastructure. The first is recommended, since it enables end-to-end automated cluster management; the second leaves infrastructure maintenance to a third party.

Under the hood, at a high level, the install process looks like this:

  1. The bootstrap node starts and hosts the resources needed by the control plane
  2. The control plane nodes start an etcd cluster
  3. The bootstrap node starts a temporary control plane, which uses the etcd cluster and schedules the permanent control plane
  4. The bootstrap node hands over to the newly created control plane and shuts down
  5. The permanent control plane creates the remaining resources

After the installer finishes, further platform maintenance is handled by self-hosted operators; this is where self-management comes in.

Operators end to end!

In OpenShift 4, the “operators” concept goes to the next level and forms the core of the platform. The hierarchy of operators, with the cluster-version operator at the top, is the single entry point for configuration changes and is responsible for reconciling the system to the desired state. For example, if you break a critical cluster resource directly, the system automatically recovers itself. Also, the OpenShift control plane and the OS are tightly linked and holistically managed, which allows transaction-based maintenance flows like upgrades and automatic certificate rotation.
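To make that concrete, here is a hedged sketch of the ClusterVersion resource that the top-level operator reconciles; the channel, cluster ID, and target version are example values, not from a real cluster:

```yaml
# The cluster-version operator reconciles this single resource; editing
# spec.desiredUpdate is how an upgrade is requested.
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  channel: stable-4.1                                # update channel to follow
  clusterID: 00000000-0000-0000-0000-000000000000    # placeholder UUID
  desiredUpdate:
    version: 4.1.2                                   # example target version
```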

Similarly to cluster maintenance, the Operator Framework is used for applications. As a user, you get an SDK, OLM (the Operator Lifecycle Manager), and an embedded OperatorHub.
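As a minimal sketch of the application side, installing an operator through OLM boils down to creating a Subscription; the package, channel, and catalog names below are illustrative (a community etcd operator from OperatorHub):

```yaml
# Subscribing to an operator from an OperatorHub catalog; OLM then installs
# the operator and keeps it upgraded along the chosen channel.
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: etcd
  namespace: openshift-operators
spec:
  name: etcd                         # operator package to install
  channel: alpha                     # release channel to track
  source: community-operators       # catalog source
  sourceNamespace: openshift-marketplace
```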

At the node level

RHEL CoreOS is the result of merging CoreOS Container Linux and Red Hat Atomic Host functionality and is currently the only supported OS for hosting OpenShift 4.

Some notes about the OS:

  1. Node provisioning with Ignition, which came from CoreOS Container Linux (see the sketch after this list)
  2. Atomic host updates with rpm-ostree
  3. CRI-O as the container runtime
  4. SELinux enabled by default
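To give a feel for how node configuration flows through Ignition, here is a hedged sketch of a MachineConfig that the machine-config operator would roll out to the worker pool; the chrony file content and object name are just examples:

```yaml
# A MachineConfig carries an Ignition snippet; the machine-config operator
# merges it and rolls it out to the targeted node pool transactionally.
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  name: 99-worker-chrony             # example name
  labels:
    machineconfiguration.openshift.io/role: worker   # target the worker pool
spec:
  config:
    ignition:
      version: 2.2.0
    storage:
      files:
      - path: /etc/chrony.conf
        filesystem: root
        mode: 420                    # 0644 in octal
        contents:
          source: "data:,pool%200.rhel.pool.ntp.org%20iburst"
```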

Machine API

Kubernetes is often considered a cloud-agnostic layer, since it contains tons of abstraction mechanisms covering many aspects of *aaS solutions. OpenShift 4 introduces a set of new machine* resources provided by the Machine API (an implementation of the upstream Cluster API).

https://github.com/openshift/machine-api-operator

It allows creating, scaling, and maintaining cloud VM instances using Kubernetes objects, which simplifies writing custom controllers for cluster scaling and provisioning.
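For example, here is a hedged sketch of an AWS MachineSet, roughly a ReplicaSet for worker VMs; the name, labels, AMI, and placement values are placeholders:

```yaml
# A MachineSet keeps a desired number of Machines (cloud VMs) running;
# scaling it adds or removes worker nodes. All values are placeholders.
apiVersion: machine.openshift.io/v1beta1
kind: MachineSet
metadata:
  name: demo-worker-eu-west-1a
  namespace: openshift-machine-api
spec:
  replicas: 2
  selector:
    matchLabels:
      machine.openshift.io/cluster-api-machineset: demo-worker-eu-west-1a
  template:
    metadata:
      labels:
        machine.openshift.io/cluster-api-machineset: demo-worker-eu-west-1a
        machine.openshift.io/cluster-api-machine-role: worker
    spec:
      providerSpec:
        value:
          apiVersion: awsproviderconfig.openshift.io/v1beta1
          kind: AWSMachineProviderConfig
          instanceType: m4.large
          ami:
            id: ami-00000000         # placeholder RHCOS image
          placement:
            region: eu-west-1
            availabilityZone: eu-west-1a
```

Scaling workers then looks just like scaling a Deployment, e.g. `oc scale machineset demo-worker-eu-west-1a --replicas=3 -n openshift-machine-api`.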

TL;DR

OpenShift 4 has impressed me with its mix of self-maintenance features for on-prem and cloud IaaS-based rollouts; it is a mature solution and an excellent building block for hybrid cloud infrastructure. However, nowadays most organizations use more than one cloud and more than one Kubernetes cluster simultaneously, so it was disappointing to see the lack of multi-cluster support such as centralized identity, RBAC, monitoring, and federation. Red Hat is looking in that direction too; for example, the new cluster manager gives you a holistic list of clusters across your organization, and I dare to predict that we will see more and more of these features in upcoming releases, either as part of OpenShift or as a separate product.
