Docker

Run your backup jobs in a Docker container

Installation

Install Docker and pull the ElectricSheep.IO image from Docker Hub:

docker pull servebox/electric_sheep
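You can check that the image is available locally (the tag shown will depend on the version currently published):

docker images servebox/electric_sheep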

Running the container

The Dockerfile sets the container's WORKDIR to /electric_sheep and its ENTRYPOINT to electric_sheep. Mount a volume at /electric_sheep using the -v flag to make your configuration files available inside the container (Sheepfile, plus SSH and GPG keys if applicable).
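For instance, assuming the key files referenced in the Sheepfile below, the host directory you mount might look like this (file names are only indicative):

/path/to/config
├── Sheepfile
├── electric_sheep_rsa
└── electric_sheep.private.gpg.asc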

Configure the paths to your SSH and encryption keys according to the container filesystem (not to the host filesystem):

decrypt with: "/electric_sheep/electric_sheep.private.gpg.asc"

host "db",
	hostname: "db.example.com",
	private_key: "/electric_sheep/electric_sheep_rsa"

job "myapp-db" do
  
  schedule "daily"
  
  resource "database", host: "db", name: "myapp"
  
  remotely as: "operator" do
    mysql_dump user: "mysql-user", password: encrypted("XXXX")
    tar_gz delete_source: true
  end
  
  move to: "localhost", using: "scp", as: "operator"
  move to: "backups/bucket", region: "eu-west-1", using: "s3" #...
  
  notify via: "email" #...

end

Once you're ready, start the container (you may also execute jobs inline using the work command instead of start):

docker run -d \
-v /path/to/config:/electric_sheep \
--name backup-jobs \
servebox/electric_sheep start
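For example, to execute the jobs defined in your Sheepfile once and exit instead of running the scheduler, you could replace start with work; the --rm flag simply removes the container once the run completes:

docker run --rm \
-v /path/to/config:/electric_sheep \
servebox/electric_sheep work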


Persistence

Containers don't persist data across restarts. That's fine if you store your backups remotely, for example in an S3 bucket. But if you plan on keeping a copy of your backups on the host running the container, you'll have to mount a second volume to store the artifacts. You may simply map a host directory to a container volume, or create a data volume container.

Store backups in a Host Directory

Set the working directory ES.IO should use inside the container:

working_directory "/backups"

#...

Then mount the corresponding volume when starting the container:

docker run -d \
-v /var/backups:/backups \
-v /path/to/config:/electric_sheep \
--name backup-jobs \
servebox/electric_sheep start
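Once a job has run, its artifacts should show up in the mounted host directory (assuming the paths above), and you can follow the job output through the container logs:

ls -l /var/backups
docker logs backup-jobs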

This approach is straightforward, but to avoid container portability issues such as UID/GID mismatches, a better option is to create a data volume container.

Store backups in a Data Volume Container

Although containers themselves do not persist anything, volumes created by containers do, and you don't have to map them to a host directory. First, create the data container:

docker run \
-v /backups \
--name backups \
ubuntu echo "Creating data volume"

Tell ES.IO to use this volume as its working directory:

working_directory "/backups"

#...

Now start the ES.IO container and bind the volume from the previously created container:

docker run -d \
--name electric_sheep \
--volumes-from backups \
-v /path/to/config:/electric_sheep \
servebox/electric_sheep start

If you need to inspect or restore your backups, mount the volume inside another container:

docker run \
--volumes-from backups \
ubuntu du -h /backups
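Similarly, to copy artifacts out of the data volume, for instance as a first step towards restoring them, you could mount both the data volume and a host directory in a throwaway container (the /var/restore path here is only illustrative):

docker run --rm \
--volumes-from backups \
-v /var/restore:/restore \
ubuntu cp -r /backups /restore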

See the Docker documentation for more information on data volume containers.