Ghost Backups with AWS S3

You don't have a backup until you've restored it. Until you actually test that your backups work, you can't know whether they will be of any use in the future. In a real production environment you would put the backup system under CI/CD too, but in this post we will talk about creating simple backups for the Ghost CMS with the help of Ansible.

Deployment process

First, let's talk about how I deploy this blog. I use docker-compose, so I can forget about installing or maintaining dependencies. The docker-compose.yml looks like this:

version: '3.1'

services:
  ghost:
    image: ghost:2
    restart: always
    volumes:
      - ./ghost/content:/var/lib/ghost/content
    environment:
      url: ""

The Ghost docker image uses SQLite3 by default and stores the database in /var/lib/ghost/content/data/ghost.db. Therefore, simply copying that file would back up the database.
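One caveat: copying a SQLite file while Ghost is writing to it can produce a corrupt copy. If you ever back up just the database, SQLite's online backup command takes a consistent snapshot. A minimal sketch (the source path matches the compose file above; the destination path is an arbitrary example):

```shell
#!/bin/sh
# Snapshot the live Ghost database with SQLite's online backup command
# instead of a raw cp, which can catch a half-written page.
# Source path follows the compose file above; destination is an example.
sqlite3 ./ghost/content/data/ghost.db ".backup '/tmp/ghost-snapshot.db'"
```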

But what we really want is to back up everything. To do that, we need to back up the content folder, and that's it! So, to make our lives easier, I decided to mount the host's ghost/content folder directly into the container.

Ansible script

In this context, the steps necessary to make the backup are:

1. Stop the blog with docker-compose down.
2. Compress the ghost folder into a timestamped .tar.gz file.
3. Restart the blog with docker-compose up.
4. Copy the archive to the local machine and delete the remote copy.
5. Upload the archive to S3.

This is the Ansible script for the previously described steps:

- name: check access to s3
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: aws s3 ls
      shell: aws s3 ls s3://your-backup-bucket/

- name: ensure python is installed in remote
  hosts: blog-servers
  gather_facts: no
  tasks:
    - name: check Python
      raw: test -e /usr/bin/python
      changed_when: false
      failed_when: false
      register: check_python
    - name: install Python
      raw: sudo apt-get install -y python
      when: check_python.rc != 0

- name: backup blog
  hosts: blog-servers
  gather_facts: no
  tasks:
    - name: docker-compose down
      shell: /snap/bin/docker-compose down
      args:
        chdir: /home/ubuntu
    - name: outfile name
      shell: echo "ghost_{{inventory_hostname}}_$(date '+%F-%T').tar.gz"
      register: backup_file_name
    - name: compress files
      shell: tar -czvf /tmp/{{backup_file_name.stdout}} ghost
      args:
        chdir: /home/ubuntu/
    - name: docker-compose up
      shell: /snap/bin/docker-compose up -d
      args:
        chdir: /home/ubuntu
    - name: copy backup to local machine
      block:
        - name: ensure backups folder exists
          connection: local
          file:
            path: ./backups
            state: directory
        - name: copy backup file to localhost
          fetch:
            src: /tmp/{{backup_file_name.stdout}}
            dest: ./backups/{{backup_file_name.stdout}}
            flat: true
    - name: delete temporary files
      shell: rm /tmp/ghost_{{inventory_hostname}}_*

- name: upload backup
  hosts: localhost
  connection: local
  gather_facts: no
  tasks:
    - name: aws s3 sync
      shell: aws s3 sync backups s3://your-backup-bucket/blog/


With the previous Ansible script, we created .tar.gz files with all of Ghost's files necessary to restore our site, and we uploaded them to S3. But how do we know that this actually works? Well:

We can run the original docker-compose.yml locally, with all the data decompressed in the same folder. Then we can simply check whether the site works.
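Part of that round trip can be scripted with nothing beyond tar (the /tmp paths below are throwaway examples, not part of the playbook): pack a content folder the same way the playbook does, unpack it somewhere else, and confirm the database file survived. The real test is then running docker-compose up -d next to the extracted ghost folder and browsing the site.

```shell
#!/bin/sh
# Round-trip sanity check for a backup archive: create an archive the
# way the playbook does, extract it to a fresh folder, and verify the
# database file is intact. All /tmp paths are throwaway examples.
set -e
mkdir -p /tmp/backup-demo/ghost/content/data
echo "pretend this is ghost.db" > /tmp/backup-demo/ghost/content/data/ghost.db
tar -czf /tmp/backup-demo/backup.tar.gz -C /tmp/backup-demo ghost
mkdir -p /tmp/backup-demo/restore
tar -xzf /tmp/backup-demo/backup.tar.gz -C /tmp/backup-demo/restore
test -f /tmp/backup-demo/restore/ghost/content/data/ghost.db && echo "archive ok"
```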

Obviously, this is a very minimalistic approach, but I think it is good enough for this type of website. After all, even big companies don't always automate or test their backup restoration process.