Drupal on OpenShift: Enhancing the developer experience
Authored by: Lakshmi Narsimhan
Learn how to automate all the pieces once you deploy your first Drupal 8 site on OpenShift.
In the previous post, we walked through a detailed Drupal 8 deployment on OpenShift, but we only scratched the surface and didn’t explore all of its features. In this post, we will build on that setup and enhance the developer experience of working with OpenShift.
Introducing OpenShift Build configs
We saw what a service and a deployment config are in the previous post. Allow me to introduce a new resource called the build config. A deployment config tells OpenShift what to deploy (by specifying the image and tag, the number of replicas, etc.), when to deploy (when the image changes, or when the config changes, such as the number of replicas) and where to deploy (using node selectors). Similarly, a build config tells OpenShift how to build the image required for the deployment. A build config typically uses one of two strategies: a Docker based build or a source-to-image build. We will look at the Docker based build first.
Docker based build config
We previously passed a pre-built container image to our Drupal deployment config. We can take it a step further and pass this job to OpenShift. To do this, we create a build config resource with a “docker” strategy.
apiVersion: v1
kind: BuildConfig
metadata:
  name: drupal-8-docker-bc
  labels:
    app: drupal-8
spec:
  source:
    git:
      uri: https://github.com/badri/drupal-8-composer/
    type: Git
    contextDir: .
  output:
    to:
      kind: ImageStreamTag
      name: "drupal-openshift:latest"
      namespace: openshift
  strategy:
    dockerStrategy:
      dockerfilePath: deploy/Dockerfile
  triggers:
  - type: ImageChange
The docker strategy of the build config indicates that a new image named drupal-openshift with the latest tag will be built and pushed to the internal Docker registry of our cluster. The rest of the workflow remains the same. This newly built image will automatically trigger a new deployment, which will create new pods with the new image and so on. We’ve just offloaded the first step of the workflow to OpenShift.
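For reference, the deployment picks up this freshly pushed image through an image change trigger on the deployment config from the previous post. A minimal sketch of that trigger section (the container name drupal-8 is an assumption based on our earlier setup):
triggers:
- type: ConfigChange
- type: ImageChange
  imageChangeParams:
    automatic: true
    containerNames:
    - drupal-8   # assumed container name from the previous post's deployment config
    from:
      kind: ImageStreamTag
      name: drupal-openshift:latest
      namespace: openshift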
The docker strategy also tells OpenShift which Dockerfile to use to build the resulting image. In our case, I’ve checked in a Dockerfile in a top-level directory called deploy/.
Unlike our previous Docker builds, we don’t need to clone the repository; we just copy the source code over to the docroot.
FROM lakshminp/drupal-openshift-base:1.0
# COPY --chown=1001:0 . /code/
# Above command won't work on older docker versions.
# We do a chown as root and step down as non-root user.
COPY . /code/
USER root
RUN chown -R 1001:0 /code
USER 1001
# Cleanup git related stuff and run composer.
RUN cd /code && rm -rf .git && composer install
Create the docker strategy based build config resource in OpenShift.
$ oc apply -f drupal-docker-bc.yml
Fire a new build.
$ oc start-build drupal-8-docker-bc
Tail the logs to see how the image is built.
$ oc logs -f bc/drupal-8-docker-bc
You will have a new app up in minutes.
One aspect I dislike about the docker strategy is that it requires some level of DevOps literacy from your team, and it spills some deployment context over into your code base. We can minimize both of these pain points by using a source strategy.
Source based build config
A source based strategy also builds a container image and pushes it to the cluster’s registry, but in a completely different style. It contains information about your source code, i.e. the git repository URL, which branch to build, etc.
When you trigger a source strategy based build, OpenShift fetches this code, assembles it on top of a base image and creates a new image. This base image is called, in OpenShift lingo, an s2i image. The s2i abbreviation stands for source-to-image. I’ve covered how s2i works and how to build a custom s2i image extensively in a previous post. In a nutshell, here’s what an s2i workflow looks like:
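In short, the builder image ships a small set of conventional s2i scripts which OpenShift invokes during the build. A rough sketch of the layout (the exact path and script contents are assumptions about the drupal-openshift-s2i image; the script names themselves are the s2i convention):
/usr/libexec/s2i/
├── assemble        # copies the injected source into the image and runs composer install
├── run             # starts the application process (PHP-FPM in our case)
└── save-artifacts  # streams reusable build artifacts (e.g. vendor/) to stdout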
And our new build config looks like this:
apiVersion: v1
kind: BuildConfig
metadata:
  name: drupal-8-s2i-bc
  labels:
    app: drupal-8
spec:
  source:
    git:
      uri: https://github.com/badri/drupal-8-composer/
    type: Git
  output:
    to:
      kind: ImageStreamTag
      name: "drupal-openshift:latest"
      namespace: openshift
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: drupal-openshift-s2i:v1.0
        namespace: openshift
      incremental: true
    type: Source
  triggers:
  - github:
      secret: "xuqEEupGL5RQfZnlkgLieMluPvyaOKuQWBzcg3Rq"
    type: GitHub
Some things of note here.
1. You can use any public GitHub repository for this, as long as:
   - it follows the drupal-composer project format,
   - the docroot is web,
   - you make it 12-factor compatible by reading the DB settings from the environment. Check my previous post on how to do this; there is also a sketch right after this list.

   I’ll write a future post on how to make this work with GitLab, Bitbucket and private repositories. I’ll explain the GitHub trigger secret later in this post.
2. You need to have access to push the resulting image into your namespace. It is currently openshift, but change it to your own namespace.
3. The incremental: true line in the build config will help us speed up builds. We’ll see that in a moment.
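As a reference for the 12-factor point in item 1, here is a minimal sketch of how the DB settings can be injected as environment variables into the Drupal deployment config; settings.php then reads them with getenv(). The secret name drupal-8-mariadb and the variable names are assumptions based on the previous post’s setup:
spec:
  template:
    spec:
      containers:
      - name: drupal-8
        env:
        - name: DB_HOST
          value: mariadb              # assumed MariaDB service name
        - name: DB_NAME
          valueFrom:
            secretKeyRef:
              name: drupal-8-mariadb  # assumed secret created in the previous post
              key: database-name
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: drupal-8-mariadb
              key: database-password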
Let’s put this new build config in action.
$ oc apply -f drupal-s2i-bc.yml
You can start a build using the oc start-build command. But here’s what makes build configs cool: they can trigger builds automatically the moment you git push your code, just like Heroku!
A source based build config achieves this feat by exposing a build trigger endpoint, which can be added as a webhook in your repository’s settings on GitHub.
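You can look up the webhook endpoint for the build config with oc describe (depending on the OpenShift version, the secret may be shown as a placeholder that you substitute yourself):
$ oc describe bc/drupal-8-s2i-bc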
Now, it is highly likely that you are running this OpenShift setup in a Minishift cluster. Your webhooks won’t work in that case, as your Minishift URL is not publicly exposed. We work around this by using a clever tool called Ultrahook, a free service which forwards webhook requests to your localhost. Once you sign up with Ultrahook, you’ll get a namespace and an API key. We configure our setup so that when a push happens, GitHub fires a webhook to the Ultrahook URL, and a local Ultrahook client captures it and forwards it to our Minishift cluster.
NOTE that you don’t need this setup if you are running an OpenShift cluster on the internet instead of on localhost.
Setting up Ultrahook
The Ultrahook client is Ruby based, and chances are you don’t have Ruby set up on your machine. I’ve created a minimal Docker image for running Ultrahook, which we will use for this demo.
Assuming you have Docker installed on your laptop, have signed up at Ultrahook and have your namespace and API key, run this command:
$ docker run lakshminp/ultrahook -k <your-api-key> github https://192.168.42.176:8443/apis/build.openshift.io/v1/namespaces/openshift/buildconfigs/drupal-8-s2i-bc/webhooks/xuqEEupGL5RQfZnlkgLieMluPvyaOKuQWBzcg3Rq/github
This will run in the foreground. Now go to GitHub and add the http://github.<your-namespace>.ultrahook.com/ URL as a webhook.
Trigger a new build from git push
Let’s make a code change.
$ composer require drupal/admin_toolbar
$ git commit -a -m "Added admin toolbar"
$ git push origin master
This will trigger a webhook, and you will notice that your Ultrahook client receives it and forwards it to OpenShift. The random set of characters in the URL is the GitHub webhook secret we configured in the build config earlier. It is used to differentiate between apps in the same cluster which use the same codebase.
You can see that a new build is triggered by running,
$ oc get builds
This will push a new drupal-openshift image, built this time using s2i, and will trigger a new deployment.
Incremental builds
OpenShift allows you to cut build times by saving artifacts across builds. In our case, this is the vendor/ directory that gets generated when you run composer install. This is an optional step which you can bake into your s2i image. Note that most of the save-artifacts scripts I’ve encountered didn’t work on Debian images, so I wrote my own in Python instead of the usual tar-and-shell approach. If you are using the regular tar tool and are not able to generate artifacts, you can use this instead.
#!/bin/bash
# save-artifacts: stream the vendor/ directory to stdout as a tar archive,
# so that OpenShift can reuse it in the next incremental build.
pushd /code >/dev/null
python - <<-EOF
import sys, tarfile
tf = tarfile.open(fileobj=sys.stdout, mode="w|")
tf.add("vendor")
tf.close()
EOF
popd >/dev/null
Also, you have to set incremental: true in the source strategy spec of your build config for the builder to call save-artifacts. On your first successful build, it will save a tar of your vendor/ directory. The next time a build runs, it will restore this tar archive instead of downloading all the dependencies again.
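For the saved artifacts to actually be reused, the assemble script has to restore them. Here is a minimal sketch of that step, assuming the s2i convention of extracting saved artifacts into /tmp/artifacts (the real assemble script in the builder image may differ):
# Inside the s2i assemble script: reuse vendor/ from a previous build if present.
if [ -d /tmp/artifacts/vendor ]; then
  echo "---> Restoring vendor/ from the previous build"
  mv /tmp/artifacts/vendor /code/vendor
fi
cd /code && composer install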
NOTE that you can achieve git push based deployment using the docker strategy also, although I’ve not tried this method.
OpenShift templates
So far, we have rolled up all the resource YAML definitions into a single file and run oc apply -f <filename>. Every time we want to launch a new application, we have to open this file, edit things like the source repository and application name, and generate new secrets. This is:
- Tedious and error-prone, because we have to edit a 500-line YAML file and remember to indent everything correctly. We also have to take care to update the app name in every place, failing which events and resource creation do not trigger correctly. I recall spending hours tracking down a problem where I had updated the app name but forgot to update the generated image name in the trigger section of the build config.
- Not scalable. We can’t create apps in a scalable manner; there is a lot of friction if you try to use this all-in-one resource file from, say, a CI setup.
OpenShift provides a resource type called template. A template is the same as an all-in-one resource file, except that it can be parameterized and namespaced. A template also lets you create an app from the UI, which we will see shortly.
Creating an OpenShift Drupal template
Let’s try and create a new OpenShift template for our Drupal app. We first need to create a template file by adding a template resource definition at the very top.
kind: Template
apiVersion: v1
message: |-
  The following service(s) have been created in your project: Drupal 8, MariaDB.
  For more information about using this template, including OpenShift considerations, see https://github.com/badri/ubuntu-drupal-8-s2i/blob/master/README.md.
metadata:
  name: drupal-8
  annotations:
    description: An example PHP 7.1 application running on Ubuntu with a MySQL database, built for Drupal 8. For more information about using this template, including OpenShift considerations, see https://github.com/badri/ubuntu-drupal-8-s2i/blob/master/README.md.
    iconClass: icon-drupal
    openshift.io/display-name: Drupal 8
    openshift.io/documentation-url: https://www.shapeblock.com/drupal-on-openshift-enhancing-the-developer-experience
    openshift.io/long-description: This template defines resources needed to develop a Debian based Drupal 8 setup running on PHP 7.1 using FPM and Nginx. It also includes resources required to create a MariaDB instance.
    openshift.io/provider-display-name: Lakshmi Narasimhan
    openshift.io/support-url: https://www.shapeblock.com
    tags: quickstart,php,drupal 8
    template.openshift.io/bindable: "false"
labels:
  template: drupal-8
  app: ${NAME}
Then we create an objects section, where we pull in all the resources from the all-in-one file, remove the YAML separator (---) between the resource definitions, and list them under this objects section.
objects:
- apiVersion: v1
  kind: DeploymentConfig
  metadata:
    name: ${NAME}
  spec:
    ...
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: ${NAME}-files
  spec:
    ...
Supplying template parameters
If you observe closely, you can see that I’ve used a variable, ${NAME}, instead of a hardcoded name. This is furnished to me by the template as a parameter and will be substituted with the value I provide for NAME when generating a processed template.
I have to remember to add 3 more resource definitions to this template file.
- The build config, which is parameterized before adding.
apiVersion: v1
kind: BuildConfig
metadata:
  name: ${NAME}
spec:
  source:
    git:
      ref: ${SOURCE_REPOSITORY_REF}
      uri: ${SOURCE_REPOSITORY_URL}
    type: Git
  output:
    to:
      kind: ImageStreamTag
      name: ${NAME}:latest
      namespace: openshift
  strategy:
    sourceStrategy:
      from:
        kind: ImageStreamTag
        name: drupal-openshift-s2i:v1.0
        namespace: ${NAMESPACE}
      incremental: true
    type: Source
  triggers:
  - github:
      secret: ${GITHUB_WEBHOOK_SECRET}
    type: GitHub
You can see that this build config contains 5 variables, which are parameters that can be supplied while processing the template, before the app is created.
- The route resource. This exposes your service to the outside world with a URL. We didn’t create any route resource manually, but we can export a route which is already present and modify it; a parameterized sketch follows after this list.
$ oc get route # find out which route you want to export
$ oc get route <route-name> -o yaml > my-route.yml
- An imagestream for your app. This was a gotcha for me at first. When you create a new app, your deployment config expects an imagestream to be present in the registry under the same namespace as the app. Otherwise, a deployment will not happen.
apiVersion: v1
kind: ImageStream
metadata:
  name: ${NAME}
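Coming back to the route: once exported and cleaned up, a minimal parameterized sketch might look like this (the APPLICATION_DOMAIN parameter is an assumption; if you leave host out, OpenShift generates one for you):
apiVersion: v1
kind: Route
metadata:
  name: ${NAME}
spec:
  host: ${APPLICATION_DOMAIN}   # assumed parameter; omit to let OpenShift generate a host
  to:
    kind: Service
    name: ${NAME}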
Then, you add the parameters under the parameters section.
parameters:
- description: The name assigned to all of the frontend objects defined in this template.
  displayName: Name
  name: NAME
  required: true
  from: 'drupal-8-[a-f0-9]{6}'
  generate: expression
- description: The OpenShift Namespace where the ImageStream resides.
  displayName: Namespace
  name: NAMESPACE
  required: true
  value: openshift
...
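Following the same pattern, the GITHUB_WEBHOOK_SECRET used in the build config can be declared as a generated parameter, so that a fresh secret is produced for every new app. A sketch of what that entry might look like (the exact expression is an assumption):
- description: GitHub trigger secret. A difficult to guess string encoded as part of the webhook URL.
  displayName: GitHub Webhook Secret
  name: GITHUB_WEBHOOK_SECRET
  from: '[a-zA-Z0-9]{40}'
  generate: expression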
There are 2 kinds of parameters: plain ones and generated ones. NAME in the code above is an example of a generated parameter, which appends a 6-digit hex value to the base drupal-8-. All parameter values can be overridden when processing, and fall back to their default values if nothing is given. Let’s process the template and generate a ready-to-deploy file.
$ oc process -f drupal-8-template.yml -p NAMESPACE=drupal
This will generate a JSON blob which we can readily pass to oc apply. I mentioned before that a template is also a type of resource in the OpenShift ecosystem. You can do:
$ oc create -f drupal-8-template.yml
$ oc get template
NAME DESCRIPTION PARAMETERS OBJECTS
drupal-8 An example PHP 7.1 application running on Ubuntu with a MySQL database, built... 14 (2 blank) 10
$ oc process drupal-8 -p NAMESPACE=drupal
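If you forget which parameters a template accepts, oc process can also list them along with their descriptions and defaults:
$ oc process --parameters drupal-8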
Creating a new app from the OpenShift template
Let’s create a new Drupal 8 site from the above template on the command line. Before you do that, you have to ensure 2 things.
- The s2i imagestream is present in your namespace/registry.
$ oc import-image lakshminp/drupal-openshift-s2i:v1.0 --confirm
- The Nginx image is present in your namespace/registry.
$ oc tag openshift/nginx-openshift:1.0 nginx-openshift:latest -n drupal
Now for the actual app creation, where we pipe the processed output to oc apply.
$ oc process drupal-8 -p NAMESPACE=drupal | oc apply -f -
The first build will still be triggered manually.
$ oc get bc
NAME TYPE FROM LATEST
drupal-8-7c36d6 Source Git 0
$ oc start-build drupal-8-7c36d6
After a few minutes, the route will be up and you will be well on your way to installing Drupal.
Later builds will be automatically triggered when you do a git push.
Using templates in the OpenShift web console
You can make use of templates via the UI as well, if that’s your thing. When you are logged in to the web console, you will see an option to add a new template to the catalog. Go ahead and add your template; you will get an acknowledgment once it has been imported.
And create a new app from the comfort of your UI!
Next steps
We have automated all the parts of a Drupal 8 workflow in OpenShift, but there’s still scope for improvement. For example, how can I deploy a Drupal 8 site with a custom docroot other than web? How do I run cron jobs? How do I add another service, like Memcache or Elasticsearch? And then there’s everyone’s favorite PaaS feature: review environments, i.e. per-branch instances of your Drupal site. We will address all of these and more in the next post!