A `rails migrate`-inspired approach to database schema migrations, but with plain SQL files. And much faster.
Let's say we have a simple docker-compose project setup with only a docker-compose.yml file
$ tree .
.
└── docker-compose.yml
0 directories, 1 file
That declares a postgres database (`mydb`) and an app (`myapp`) that uses the database somehow (in our contrived example, we're just listing the tables in our database with `\dt`)
version: '3'
services:
  mydb:
    image: "postgres"
    environment:
      - POSTGRES_DB=myapp_development
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=topsecret
  myapp:
    image: "postgres"
    command: >-
      sh -c '
      until pg_isready --host=mydb --port=5432 --timeout=30; do sleep 1; done;
      psql postgres://myuser:topsecret@mydb:5432/myapp_development -c \\dt
      '
    depends_on:
      - mydb

Let's see how our app runs
$ docker-compose up myapp
Creating dbmigrate-example_mydb_1 ... done
Creating dbmigrate-example_myapp_1 ... done
Attaching to dbmigrate-example_myapp_1
myapp_1 | mydb:5432 - no response
myapp_1 | mydb:5432 - no response
myapp_1 | mydb:5432 - accepting connections
myapp_1 | Did not find any relations.
dbmigrate-example_myapp_1 exited with code 0
The output simply shows that:
- our `myapp` waited until `mydb:5432` was ready to accept connections
- then it listed the tables inside the database and got `Did not find any relations` (which means there are no tables)
Now let's add `dbmigrate` to our `docker-compose.yml` (take special note that `myapp` also has a new entry under `depends_on`)
version: '3'
services:
  mydb:
    image: "postgres"
    environment:
      - POSTGRES_DB=myapp_development
      - POSTGRES_USER=myuser
      - POSTGRES_PASSWORD=topsecret
  myapp:
    image: "postgres"
    command: >-
      sh -c '
      until pg_isready --host=mydb --port=5432 --timeout=30; do sleep 1; done;
      psql postgres://myuser:topsecret@mydb:5432/myapp_development -c \\dt
      '
    depends_on:
      - mydb
      - dbmigrate
  # by default apply migrations with `-up` flag
  # we can run different commands by overwriting `DBMIGRATE_CMD` env
  # try DBMIGRATE_CMD="-h" to see what other flags dbmigrate can offer
  dbmigrate:
    image: "choonkeat/dbmigrate"
    environment:
      - DATABASE_URL=postgres://myuser:topsecret@mydb:5432/myapp_development?sslmode=disable
    volumes:
      - .:/app
    working_dir: /app
    command: ${DBMIGRATE_CMD:--up -server-ready 60s -create-db}
    depends_on:
      - mydb

And we can start creating a few migration scripts by running `docker-compose up dbmigrate` with a custom `DBMIGRATE_CMD` env variable
$ DBMIGRATE_CMD="-create users" docker-compose up dbmigrate
dbmigrate-example_mydb_1 is up-to-date
Creating dbmigrate-example_dbmigrate_1 ... done
Attaching to dbmigrate-example_dbmigrate_1
dbmigrate_1 | 2019/06/01 08:58:05 writing db/migrations/20190601085805_users.up.sql
dbmigrate_1 | 2019/06/01 08:58:05 writing db/migrations/20190601085805_users.down.sql
dbmigrate-example_dbmigrate_1 exited with code 0
$ DBMIGRATE_CMD="-create blogs" docker-compose up dbmigrate
dbmigrate-example_mydb_1 is up-to-date
Recreating dbmigrate-example_dbmigrate_1 ... done
Attaching to dbmigrate-example_dbmigrate_1
dbmigrate_1 | 2019/06/01 08:58:14 writing db/migrations/20190601085814_blogs.up.sql
dbmigrate_1 | 2019/06/01 08:58:14 writing db/migrations/20190601085814_blogs.down.sql
dbmigrate-example_dbmigrate_1 exited with code 0
After running those 2 commands, we see that we've generated 2 pairs of empty *.up.sql and *.down.sql files.
$ tree .
.
├── db
│   └── migrations
│       ├── 20190601085805_users.down.sql
│       ├── 20190601085805_users.up.sql
│       ├── 20190601085814_blogs.down.sql
│       └── 20190601085814_blogs.up.sql
└── docker-compose.yml
2 directories, 5 files
We add our SQL into the respective files
-- db/migrations/20190601085805_users.up.sql
CREATE TABLE IF NOT EXISTS users (
  "id" SERIAL primary key
);

-- db/migrations/20190601085805_users.down.sql
DROP TABLE IF EXISTS users;

-- db/migrations/20190601085814_blogs.up.sql
CREATE TABLE IF NOT EXISTS blogs (
  "id" SERIAL primary key
);

-- db/migrations/20190601085814_blogs.down.sql
DROP TABLE IF EXISTS blogs;

Let's see how our app runs after those changes
$ docker-compose up myapp
dbmigrate-example_mydb_1 is up-to-date
Recreating dbmigrate-example_dbmigrate_1 ... done
Recreating dbmigrate-example_myapp_1 ... done
Attaching to dbmigrate-example_myapp_1
myapp_1 | mydb:5432 - accepting connections
myapp_1 |          List of relations
myapp_1 |  Schema |        Name        | Type  | Owner
myapp_1 | --------+--------------------+-------+--------
myapp_1 |  public | blogs              | table | myuser
myapp_1 |  public | dbmigrate_versions | table | myuser
myapp_1 |  public | users              | table | myuser
myapp_1 | (3 rows)
myapp_1 |
dbmigrate-example_myapp_1 exited with code 0
Hey, looks like we have 3 tables now:
- `blogs` created by our `db/migrations/20190601085814_blogs.up.sql`
- `users` created by our `db/migrations/20190601085805_users.up.sql`
- `dbmigrate_versions` created by `dbmigrate` for itself to track migration history
  - every time `dbmigrate` runs, it checks the `dbmigrate_versions` table to know which files in `db/migrations` have already been applied (and skips them) and which files it has not seen before (and applies them) (see the query sketch below)
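For example, once the two migrations above have been applied, we can peek at that bookkeeping table ourselves. A minimal sketch (the schema of `dbmigrate_versions` is shown further down; the rows you get depend on which migrations have actually run):

-- list every migration version that dbmigrate has recorded as applied
SELECT version FROM dbmigrate_versions ORDER BY version;
--      version
-- ----------------
--  20190601085805
--  20190601085814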
We can look at the logs of the `dbmigrate` container to see what happened when `myapp` booted up just now
$ docker-compose logs dbmigrate
Attaching to dbmigrate-example_dbmigrate_1
dbmigrate_1 | 2019/06/01 08:59:32 [up] 20190601085805_users.up.sql
dbmigrate_1 | 2019/06/01 08:59:32 [up] 20190601085814_blogs.up.sql
Now, every time `myapp` starts up, since it declares `depends_on: dbmigrate`, our `dbmigrate` container will be run automatically to apply any new migration files in `db/migrations`. To add new migration files there, just run `docker-compose up dbmigrate` with a custom `DBMIGRATE_CMD` env variable (see above)
$ dbmigrate -create describe your change
2018/12/21 16:33:13 writing db/migrations/20181221083313_describe-your-change.up.sql
2018/12/21 16:33:13 writing db/migrations/20181221083313_describe-your-change.down.sql
This generates a pair of blank `.up.sql` and `.down.sql` files inside the `db/migrations` directory; configure the directory with the `-dir` command line flag.
The numeric prefix of the filename is the version, i.e. the version of the files above is `20181221083313`.
$ dbmigrate -up
2018/12/21 16:37:40 [up] 20181221083313_describe-your-change.up.sql
2018/12/21 16:37:40 [up] 20181221083727_more-changes.up.sql
- Connect to the database (defaults to the value of the `DATABASE_URL` env; configure with `-url`)
- Start a db transaction
- Pick up each `.up.sql` file in `db/migrations` and iterate through them in ascending order
  - if the file `version` is found in the `dbmigrate_versions` table, skip it
  - otherwise, execute the sql statements in the file
    - if it succeeds, insert an entry into the `dbmigrate_versions` table
    - if it fails, rollback the entire transaction and exit 1
- Commit the db transaction and exit 0

The `dbmigrate_versions` table itself is simply

CREATE TABLE dbmigrate_versions (
  version text NOT NULL PRIMARY KEY
);
$ dbmigrate -down 1
2018/12/21 16:42:24 [down] 20181221083727_more-changes.down.sql
- Connect to the database (defaults to the value of the `DATABASE_URL` env; configure with `-url`)
- Start a db transaction
- Pick up each `.down.sql` file in `db/migrations` and iterate through them in descending order
  - if the file `version` is NOT found in the `dbmigrate_versions` table, skip it
  - otherwise, execute the sql statements in the file
    - if it succeeds, remove the entry `WHERE version = ?` from the `dbmigrate_versions` table
    - if it fails, rollback the entire transaction and exit 1
- Commit the db transaction and exit 0
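Mirroring the `-up` sketch above, the bookkeeping side of a successful `-down` step is simply removing the version record, e.g. (using the version from the log output above):

-- after 20181221083727_more-changes.down.sql runs successfully
DELETE FROM dbmigrate_versions WHERE version = '20181221083727';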
You can migrate "down" more files by using a different number
$ dbmigrate -down 3
2018/12/21 16:46:45 [down] 20181221083313_describe-your-change.down.sql
2018/12/21 16:46:45 [down] 20181221055307_create-users.down.sql
2018/12/21 16:46:45 [down] 20181221055304_create-projects.down.sql
Prints a sorted list of versions that are found in `-dir` but do not have a record in the `dbmigrate_versions` table.
$ dbmigrate -versions-pending
20181222073750
20181222073900
20181222073901
PostgreSQL
We're using github.com/lib/pq, so the environment variables look like this
DATABASE_URL=postgres://user:password@localhost:5432/dbmigrate_test?sslmode=disable
or
DATABASE_DRIVER=postgres
DATABASE_URL='user=pqgotest dbname=pqgotest sslmode=verify-full'
NOTE: out of the box, this driver supports having multiple statements in one `.sql` file.
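Because of that, a single PostgreSQL migration file can hold several statements. For instance, a hypothetical `*.up.sql` (table and index names invented for illustration) could look like:

-- db/migrations/20190601090000_comments.up.sql (hypothetical)
CREATE TABLE IF NOT EXISTS comments (
  "id" SERIAL primary key,
  "blog_id" integer NOT NULL
);
CREATE INDEX IF NOT EXISTS comments_blog_id_idx ON comments ("blog_id");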
MySQL
We're using github.com/go-sql-driver/mysql, so the environment variables look like this
DATABASE_DRIVER=mysql
DATABASE_URL='user:password@tcp(127.0.0.1:3306)/dbmigrate_test'
NOTE: to have multiple SQL statements in each `.sql` file, you'd need to add `multiStatements=true` to the query string of your `DATABASE_URL`, i.e. `DATABASE_URL='user:password@tcp(127.0.0.1:3306)/dbmigrate_test?multiStatements=true'`. See the driver documentation for details and other available options.
When there's an error, we roll back the entire transaction, so you can edit your faulty `.sql` file and simply re-run
$ dbmigrate -up
2018/12/21 16:55:41 20181221083313_describe-your-change.up.sql: pq: relation "users" already exists
exit status 1
$ vim db/migrations/20181221083313_describe-your-change.up.sql
$ dbmigrate -up
2018/12/21 16:56:05 [up] 20181221083313_describe-your-change.up.sql
2018/12/21 16:56:05 [up] 20181221083727_more-changes.up.sql
$ dbmigrate -up
$
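In a session like the one above, the fix can be as small as making the statement re-runnable. The actual contents of `20181221083313_describe-your-change.up.sql` aren't shown in the log, so this before/after is only an illustration:

-- before: fails with `pq: relation "users" already exists`
CREATE TABLE users (
  "id" SERIAL primary key
);

-- after: idempotent, so the next `dbmigrate -up` succeeds
CREATE TABLE IF NOT EXISTS users (
  "id" SERIAL primary key
);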
PostgreSQL supports rollback for most data definition language (DDL)
one of the more advanced features of PostgreSQL is its ability to perform transactional DDL via its Write-Ahead Log design. This design supports backing out even large changes to DDL, such as table creation. You can't recover from an add/drop on a database or tablespace, but all other catalog operations are reversible. https://wiki.postgresql.org/wiki/Transactional_DDL_in_PostgreSQL:_A_Competitive_Analysis
MySQL does not support rollback for DDL
Some statements cannot be rolled back. In general, these include data definition language (DDL) statements, such as those that create or drop databases, those that create, drop, or alter tables or stored routines. https://dev.mysql.com/doc/refman/8.0/en/cannot-roll-back.html
If you're using MySQL, make sure each DDL statement (e.g. `CREATE TABLE ...`) goes in its own individual `*.sql` file.
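For example, rather than creating two tables from one MySQL migration file, split them into one file per DDL statement; that way, if the second statement fails, you can fix and re-run it without the first statement having already been auto-committed inside the same file. File names here are hypothetical:

-- db/migrations/20190601090000_create_authors.up.sql (hypothetical)
CREATE TABLE IF NOT EXISTS authors (
  id INT AUTO_INCREMENT PRIMARY KEY
);

-- db/migrations/20190601090100_create_posts.up.sql (hypothetical, a separate file)
CREATE TABLE IF NOT EXISTS posts (
  id INT AUTO_INCREMENT PRIMARY KEY,
  author_id INT NOT NULL
);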
The SQL command `CREATE DATABASE <dbname>` does not work well (at least in postgres) if `<dbname>` contains dashes `-`. The proper way would've been to quote the value when using it, but alas there doesn't seem to be a driver-agnostic way to quote that string in Go.
The workaround: if `-create-db` is needed, use underscores `_` in your dbname instead of dashes `-`.
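To see why dashes are a problem, here's a sketch against a PostgreSQL prompt (database names invented):

-- fails: the unquoted dash is parsed as an operator, yielding a syntax error
CREATE DATABASE my-app-development;

-- works unquoted, which is why underscores are the safe choice with -create-db
CREATE DATABASE my_app_development;

-- quoting works interactively, but there's no driver-agnostic way for dbmigrate to do this
CREATE DATABASE "my-app-development";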