Creative Docker Deployments For Your PHP Apps

Image credit: Tri Eptaroka Mardiana

A couple of years ago (2018) I was asked by the company I was working for to build a new version of our REST API from scratch using PHP. My team and I had a lot of freedom in how to design it, write it and document it, including the deployment.

Since we were already using Docker deployments for another project, I tried to explore that path first but I was not satisfied. The two common solutions for deploying PHP applications at the time were:

  1. start a bunch of containers with Nginx and PHP-FPM on a Linux server and deploy the code on a shared volume on the host machine
  2. package the whole app using the PHP-Apache Docker image

We were already using an approach similar to the first one on the other project but we weren’t satisfied because it was creating a bunch of problems.

Solution number 2 could have worked but it generated a 700-800MB artifact to move around for every deployment and it wasn’t ideal.

We ended up using a standard LAMP server provisioned with Ansible for that project, and Docker was set aside, until…

PHP UK Conference 2019

In his talk Massively Scaled High Performance Web Services with PHP, Demin Yin explained in detail how they were using Docker to package and deploy their web services whilst keeping them flexible and performant. The interesting part was that they were packaging their API in a single Docker container with Nginx and PHP-FPM installed, using Supervisor as the main process to keep everything running in the background. The Docker base image of choice was Alpine Linux, which allowed them to keep the size of the final image pretty low.

I was intrigued, so as soon as I had some spare time (which happened many months later, as you can see from the post’s date above…) I decided to experiment with this technique and share it on GitHub so that it can be useful to others.

The base images

The first step was to create two base Docker images: one for console applications and one for web applications. Each image is built upon Alpine GNU/Linux to keep it as small as possible, and is tagged with the PHP version used instead of the latest tag, so we can have your-user/php-web:7.3, your-user/php-cli:7.4, etc.

One important difference between my process and Demin’s is that he builds his images by custom-compiling both Nginx and PHP, while I’m sticking to the out-of-the-box versions. The other difference is that Demin didn’t go into the fine details of what goes into each image, so I’m trying to use my common sense here. Each image contains general and security settings and the most common dependencies.

The CLI Image

As you can see from the Dockerfile, the image for CLI applications is pretty standard. It’s built on top of the official php:7.4-cli-alpine, installs the latest version of Composer and uses an unprivileged user to run.
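A minimal sketch of such a base image could look like this (the user name and the Composer installation method are my assumptions, not necessarily what the repository uses):

```dockerfile
# Hypothetical sketch of the CLI base image
FROM php:7.4-cli-alpine

# Install the latest Composer by copying it from the official image
COPY --from=composer:latest /usr/bin/composer /usr/local/bin/composer

# Create and switch to an unprivileged user
RUN addgroup -S app && adduser -S app -G app
USER app
```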

The Web Image

I’ve built the Web image starting from php:7.4-fpm-alpine. Nginx and Supervisor are installed along with the basic dependencies, and Supervisor is the main process, run as root. This allows Supervisor to launch Nginx as the www-data user and start the php-fpm service with the default unprivileged www user.
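The Supervisor side of this setup could be sketched with a configuration like the following (program names and flags are a common pattern for this technique, not necessarily the exact config used in the repository):

```ini
; Hypothetical supervisord.conf sketch: Supervisor stays in the
; foreground as the container's main process (PID 1)
[supervisord]
nodaemon=true

; php-fpm in foreground mode; it drops privileges itself via its pool config
[program:php-fpm]
command=php-fpm -F
autorestart=true

; Nginx in foreground mode so Supervisor can track it
[program:nginx]
command=nginx -g 'daemon off;'
autorestart=true
```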

Dockerising your apps

Once the base images are in place, either locally or deployed to your public or private Docker repository, they can be used as a base to build your console or web applications.

Each image has a separate build script and Dockerfile. The build shell script computes a version tag from the current date/time and builds the image locally, so you will have your/webapp:YYYYMMDDHHMMSS and your/cli-app:YYYYMMDDHHMMSS in your Docker image list.
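The build script boils down to something like this sketch (the image name and version-file name are placeholders):

```shell
#!/usr/bin/env bash
# Sketch of a build script; "your/webapp" is a placeholder image name.

# Compute a version tag from the current date/time
TAG=$(date +%Y%m%d%H%M%S)

# Build the image locally and record the tag in a version file
# that will travel with the artifact
docker build -t "your/webapp:${TAG}" .
echo "${TAG}" > web.version

echo "Built your/webapp:${TAG}"
```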

Building a simple CLI app

As you can see from the example, there’s not much to do here once the base image is in place. I’m copying the source files for the console app, defining the working directory, running Composer, and finally the binary for our PHP app is set as the main command. This small example is a silly background worker, but common workers are rarely more complicated than that if carefully designed.
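The app Dockerfile follows those steps closely; a sketch under the assumption that the base image is published as your-user/php-cli and the app entry point lives in bin/worker:

```dockerfile
# Hypothetical base image name
FROM your-user/php-cli:7.4

WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY composer.json composer.lock ./
RUN composer install --no-dev --optimize-autoloader

# Copy the application sources and set the worker as the main command
COPY . .
CMD ["php", "bin/worker"]
```

Copying composer.json/composer.lock before the rest of the sources is a common layer-caching optimisation: dependencies are only re-installed when they actually change.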

Building a simple Web app

The web application is almost as simple as the CLI one. After defining specific Nginx paths and copying the source files, we switch to the www-data user to run Composer and then back to root to let the base image start Supervisor.
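As a sketch (base image name and paths are placeholders):

```dockerfile
# Hypothetical base image name
FROM your-user/php-web:7.4

# Copy the app sources into the web root
COPY . /var/www/html
WORKDIR /var/www/html

# Run Composer as the unprivileged web user...
USER www-data
RUN composer install --no-dev --optimize-autoloader

# ...then switch back to root so the base image can start Supervisor
USER root
```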

Deploying your apps

Cool, how do we deploy then? Well, it depends on your project strategy. A very good solution could be pushing the images to your private Docker registry and tag them as latest. You can then restart whatever service is in use on your deployment machine and force it to pull the latest update.

Ideally, all this will happen within a CI/CD environment and the deployment system will take care of performing a blue/green deploy, which means making sure the new containers are up and running before destroying the old ones. If you are on AWS you can build a similar strategy:

  • use AWS CodeBuild linked to your Github/Bitbucket repo to
    • build the Docker images
    • push the images to Amazon ECR (Elastic Container Registry)
  • run your containers using AWS Fargate, which is basically a managed Docker-as-a-service
  • at the end of the build, tell Fargate to “update” the running service
  • wait and see the magic happen!
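The CodeBuild side of the steps above could be sketched with a buildspec like this (cluster, service, and image names are placeholders; $ECR_REGISTRY is assumed to be provided as an environment variable):

```yaml
# Hypothetical buildspec.yml sketch for AWS CodeBuild
version: 0.2

phases:
  pre_build:
    commands:
      # Authenticate Docker against your ECR registry
      - aws ecr get-login-password | docker login --username AWS --password-stdin "$ECR_REGISTRY"
  build:
    commands:
      - docker build -t "$ECR_REGISTRY/your/webapp:latest" .
  post_build:
    commands:
      - docker push "$ECR_REGISTRY/your/webapp:latest"
      # Tell Fargate to roll out the new image
      - aws ecs update-service --cluster your-cluster --service your-service --force-new-deployment
```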

The Poor Man’s Deploy

What I’m about to describe below is a simple scenario for small deployments:

  • a single Linux server is used to run the containers
  • the service is managed by Docker Compose
  • the compiled images are transferred directly from the build machine, which may be either your computer or a CI pipeline, to the destination server, without relying on a Docker registry

Server Setup

The application is installed on the destination server in a place like /usr/local/apps/yourservice. Within that working directory we have a docker-compose.yml that describes your service with staging or production configuration.
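The Compose file could look roughly like this (service and image names are placeholders; the port mapping matches the localhost:8000 upstream used by the host Nginx proxy described below):

```yaml
# Hypothetical docker-compose.yml sketch
version: "3"

services:
  web:
    # WEB_VERSION is read from the web.version file by the utility script
    image: "your/webapp:${WEB_VERSION}"
    restart: unless-stopped
    ports:
      - "8000:80"   # the host Nginx proxies to localhost:8000

  worker:
    image: "your/cli-app:${WORKER_VERSION}"
    restart: unless-stopped
```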

The working directory also contains a nice utility script such as appctl, which wraps Docker commands with a nice syntax like appctl start|stop|restart. It could be something like this:


#!/usr/bin/env bash

# Resolve the script's real directory, following a symlink if needed
if [[ -L $0 ]]; then
    path=$(readlink "$0")
else
    path=$0
fi
DIR="$(cd "$(dirname "$path")" && pwd)"

if [ $# == 0 ]; then
    echo "Usage: ./appctl <start|stop|logs|ps>"
    exit 1
fi

if [ "$1" == "start" ]; then
    echo "Starting..."
    WEB_VERSION=$(if [ -f "$DIR/web.version" ]; then cat "$DIR/web.version"; fi) \
        WORKER_VERSION=$(if [ -f "$DIR/worker.version" ]; then cat "$DIR/worker.version"; fi) \
        docker-compose -f "$DIR/docker-compose.yml" -f "$DIR/docker-compose.db.yml" up $3 -d $2
    exit $?
fi

if [ "$1" == "stop" ]; then
    echo "Stopping..."
    if [ "$2" ]; then
        # Stop a single service
        docker-compose -f "$DIR/docker-compose.yml" -f "$DIR/docker-compose.db.yml" stop "$2"
    else
        # Stop and remove all containers
        docker-compose -f "$DIR/docker-compose.yml" -f "$DIR/docker-compose.db.yml" down
    fi
    exit $?
fi

if [ "$1" == "logs" ]; then
    echo "Reading logs..."
    docker-compose logs -f "$2"
    exit $?
fi

if [ "$1" == "ps" ]; then
    docker-compose -f "$DIR/docker-compose.yml" -f "$DIR/docker-compose.db.yml" ps
fi

We also need a cleanup script to get rid of older versions of artifacts, images and containers.




#!/usr/bin/env bash

# Image names and version files; adjust to your project
WEB_IMAGE_NAME="your/webapp"
WORKER_IMAGE_NAME="your/cli-app"
WEB_VERSION_FILE="web.version"
WORKER_VERSION_FILE="worker.version"

# Remove every tag of each image except the currently deployed one,
# e.g. docker images <image-name> --format "{{.Repository}}:{{.Tag}}"
if [[ -f $WEB_VERSION_FILE ]]; then
    WEB_IMAGE_TAG=$(cat "$WEB_VERSION_FILE")
    echo "Current $WEB_IMAGE_NAME is $WEB_IMAGE_TAG"
    for tag in $(docker images ${WEB_IMAGE_NAME} --format "{{.Tag}}"); do
        if [[ $tag != "$WEB_IMAGE_TAG" ]]; then
            docker rmi "${WEB_IMAGE_NAME}:$tag"
        fi
    done
fi

if [[ -f $WORKER_VERSION_FILE ]]; then
    WORKER_IMAGE_TAG=$(cat "$WORKER_VERSION_FILE")
    for tag in $(docker images ${WORKER_IMAGE_NAME} --format "{{.Tag}}"); do
        if [[ $tag != "$WORKER_IMAGE_TAG" ]]; then
            docker rmi "${WORKER_IMAGE_NAME}:$tag"
        fi
    done
fi

# Finally, remove any dangling images
docker image prune --force

There is an Nginx server on the host machine, listening on ports 80 and 443, that acts as a proxy between the running containers and the external world.

The app working directory contains an nginx.conf file that tells the proxy how to serve your application.

# Example Nginx configuration file for proxying your PHP Docker App

# Internal host and port where your app is listening
upstream dockerphp {
    server    localhost:8000;
}

# Redirect everything to HTTPS
server {
    listen       80;
    listen       [::]:80;
    server_name  your.domain.com; # use your server name here
    return       301 https://$host$request_uri;
}

server {
    listen       443 ssl;
    listen       [::]:443 ssl;

    ssl_certificate     /path/to/ssl/cert.pem; # use your custom path here
    ssl_certificate_key /path/to/ssl/privkey.pem; # use your custom path here

    ssl_session_cache   shared:SSL:10m;
    ssl_session_timeout 10m;

    server_name  your.domain.com; # use your server name here

    location / {
        proxy_pass http://dockerphp;
    }
}
The images are built from your local machine or from a CI/CD service like TravisCI or Bitbucket pipelines.

In this example I am not using the latest tag to run the containers, which would make things easier, because I assume we may want some control over which version is deployed. Instead, the build script creates a version file like image-name.version that is deployed along with our artifact (the Docker image).

Once the image for our app is built, the deploy script uses docker save to export the image to a gzipped artifact:

$ docker save image-name:image-tag | gzip > /path/to/artifact.gz

The image is then uploaded to the destination server using scp:

$ scp /path/to/artifact.gz deploy@myserver:/tmp/artifact.gz

On the remote server the image is then loaded into the Docker service:

$ ssh deploy@myserver "docker load -i /tmp/artifact.gz"

Once the artifact is successfully loaded the version file is also copied to the working directory using scp.

$ scp /path/to/image.version deploy@myserver:/deploy/path/image.version

At this point we can call our nice appctl script and tell Docker Compose to restart the services.

$ ssh deploy@myserver "cd /deploy/path && ./appctl stop web && ./appctl start web"

And finally we run the cleanup script to delete the old images and artifact leftovers.

$ ssh deploy@myserver "cd /deploy/path && ./"

This can also be achieved with a Bitbucket Pipeline or GitHub Action. I’ll leave that to you to experiment with. Happy Coding!