What a brave new world Infrastructure-as-Code (IaC) is! It seems that every single day brings another tool for dealing with complex, heterogeneous infrastructures. Each level of abstraction comes with its own instruments and workflows. While they have not yet matured to the point of one-tool-fits-all, they are hot on its trail.
As experienced DevOps & SRE engineers, we all know — or even use — Terraform, Pulumi, and Crossplane. However, there is also room for many other tools targeting more specific cases. Ever wondered how you can manage database schema migrations declaratively in the modern DevOps world? This is exactly what Atlas is about.
Introduction and main features
Atlas is an open source tool written in Go that went live in November 2021. Today, it boasts over 3.7K stars, 20+ releases, and 60+ contributors on GitHub.
The project aims to help manage and migrate database schemas. The user defines the desired state of the database and lets Atlas compare the current state with the desired one, then generate and execute a migration plan. The database schema can be defined in SQL, via an ORM (such as GORM), or in the Atlas HCL format.
As for databases, Atlas and its operators support MySQL, MariaDB, PostgreSQL, TiDB, CockroachDB, and Microsoft SQL Server. It works quite well even with SQLite, which may come in handy for developers who prefer a simple and lightweight database like that. Unfortunately, there is no way to test them all in the scope of a single article, so we’ll focus on MySQL and PostgreSQL.
Atlas, among other things, allows you to customize the schema comparison (diff) policy: you can skip destructive changes (to protect a whole database or specific tables) or use concurrent_index (to create or drop indexes concurrently, with no table locks; this behavior is disabled by default). HCL column type customization is effortless and logical.
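For instance, such a policy can be declared in the project file (atlas.hcl). Below is a minimal sketch, assuming an environment named local; the skip and concurrent_index block names follow the policies mentioned above, but check the Atlas documentation for the exact set of options:

env "local" {
  diff {
    // never generate destructive changes for this environment
    skip {
      drop_schema = true
      drop_table  = true
    }
    // create and drop PostgreSQL indexes concurrently, without table locks
    concurrent_index {
      create = true
      drop   = true
    }
  }
}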
In Kubernetes, you can use Atlas not only as an operator but also as an init container for a particular database. On top of that, you can combine it with Argo CD. The latter option appears promising but calls for a rather lengthy article in its own right. Of course, you can limit yourself to the migration and import tools suggested on the official website if you wish. Furthermore, there is a nicely detailed integration available with Terraform — this guide might be a good starting point for those who use Terraform to spin up databases in the cloud.
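As a teaser of that Terraform integration, here is a rough sketch based on the ariga/atlas provider. Attribute names (such as dev_db_url) have changed between provider versions, so treat this as an illustration and consult the provider docs:

terraform {
  required_providers {
    atlas = {
      source = "ariga/atlas"
    }
  }
}

# normalize the handwritten schema against a scratch dev database
data "atlas_schema" "app" {
  dev_db_url = "mysql://root:pass@localhost:3307/app"
  src        = file("${path.module}/schema.hcl")
}

# apply the normalized schema to the target database
resource "atlas_schema" "app" {
  hcl        = data.atlas_schema.app.hcl
  url        = "mysql://root:pass@localhost:3306/app"
  dev_db_url = "mysql://root:pass@localhost:3307/app"
}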
In short, there are a number of ways to use Atlas:
- on regular hosts — as a tool or container;
- in Kubernetes — as a sidecar or init container for managing database schemas (regardless of the number of clusters, including those created using Custom Resources) or as part of the GitOps approach with Argo CD and other such tools.
Leveraging any of these methods will surely simplify incorporating DDL (Data Definition Language) into the infrastructure definition. Let’s see it in action!
Setting up the test environment
Atlas supports two different approaches:
- you can use a declarative description of the database schema (as in Terraform): the current state of the database is compared to the desired schema, and the resulting diff is then applied;
- you can use versioning: plan changes, test them, and eventually apply them using Atlas (see the command sketch right after this list).
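To make the distinction concrete, here is a quick sketch of how the two approaches look in the CLI; the URLs and the migration name are placeholders, and both workflows are demonstrated for real later in this article:

# declarative: compare the live database with the desired state and apply the diff
$ atlas schema apply -u "mysql://user:pass@localhost:3306/app" --to file://schema.hcl

# versioned: generate a migration file from the desired state, then apply pending migrations
$ atlas migrate diff add_users --dir file://migrations --to file://schema.hcl --dev-url docker://mysql/8/schema
$ atlas migrate apply --dir file://migrations -u "mysql://user:pass@localhost:3306/app"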
We, as DevOps engineers obsessed with the World-as-a-Code, favor a declarative approach. Well, let’s roll up our sleeves and get our hands dirty!
Firstly, we will install the tool on the host:
$ curl -sSf https://atlasgo.sh | sh
Install 'atlas-linux-amd64-latest' to '/usr/local/bin/atlas'? [y/N] y
Downloading https://release.ariga.io/atlas/atlas-linux-amd64-latest
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 69.0M 100 69.0M 0 0 51.5M 0 0:00:01 0:00:01 --:--:-- 51.5M
atlas version v0.12.1-972da10-canary
https://github.com/ariga/atlas/releases/latest
Installation successful!
Don’t forget to add auto-completion for Bash:
$ atlas completion bash > /etc/bash_completion.d/atlas
We will need the Docker Compose plugin to experiment with. Let’s make sure it’s installed:
$ docker compose version
Docker Compose version v2.19.1
1. Using Atlas with Docker Compose
We will start exploring Atlas with the help of Docker Compose. We will skip the Docker Compose 101 basics, such as installation and operating principles, as they are described well in the official documentation. Instead, we will focus on more complex cases — e.g., combining it with Kubernetes — later.
1.1. Experimenting with MySQL
MySQL will be our first DBMS to test. Here is the Docker Compose file we need to run it:
version: '3.8'
services:
  db:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: groot_password
    ports:
      - "3306:3306"
Now we can define (in SQL) a simple database for illustrative purposes:
DROP DATABASE IF EXISTS employees;
CREATE DATABASE IF NOT EXISTS employees;
USE employees;
SELECT 'CREATING DATABASE STRUCTURE' as 'INFO';
DROP TABLE IF EXISTS dept_emp,
dept_manager,
titles,
salaries,
employees,
departments;
/*!50503 set default_storage_engine = InnoDB */;
/*!50503 select CONCAT('storage engine: ', @@default_storage_engine) as INFO */;
CREATE TABLE employees (
emp_no INT NOT NULL,
birth_date DATE NOT NULL,
first_name VARCHAR(14) NOT NULL,
last_name VARCHAR(16) NOT NULL,
gender ENUM ('M','F') NOT NULL,
hire_date DATE NOT NULL,
PRIMARY KEY (emp_no)
);
CREATE TABLE departments (
dept_no CHAR(4) NOT NULL,
dept_name VARCHAR(40) NOT NULL,
PRIMARY KEY (dept_no),
UNIQUE KEY (dept_name)
);
CREATE TABLE dept_manager (
emp_no INT NOT NULL,
dept_no CHAR(4) NOT NULL,
from_date DATE NOT NULL,
to_date DATE NOT NULL,
FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE,
PRIMARY KEY (emp_no,dept_no)
);
CREATE TABLE dept_emp (
emp_no INT NOT NULL,
dept_no CHAR(4) NOT NULL,
from_date DATE NOT NULL,
to_date DATE NOT NULL,
FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON DELETE CASCADE,
PRIMARY KEY (emp_no,dept_no)
);
CREATE TABLE titles (
emp_no INT NOT NULL,
title VARCHAR(50) NOT NULL,
from_date DATE NOT NULL,
to_date DATE,
FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
PRIMARY KEY (emp_no, title, from_date)
);
CREATE TABLE salaries (
emp_no INT NOT NULL,
salary INT NOT NULL,
from_date DATE NOT NULL,
to_date DATE NOT NULL,
FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON DELETE CASCADE,
PRIMARY KEY (emp_no, from_date)
);
Let’s create this database in the MySQL container:
$ cat db.sql | docker exec -i root-db-1 mysql -pgroot_password
mysql: [Warning] Using a password on the command line interface can be insecure.
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
The database includes six tables and features a pretty basic structure:
$ docker exec -it root-db-1 mysql -pgroot_password -e 'use employees; show tables'
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------------+
| Tables_in_employees |
+---------------------+
| departments |
| dept_emp |
| dept_manager |
| employees |
| salaries |
| titles |
+---------------------+
It’s time to put Atlas into action:
$ atlas schema inspect -u "mysql://root:groot_password@localhost:3306/employees"
table "departments" {
schema = schema.employees
column "dept_no" {
null = false
type = char(4)
}
column "dept_name" {
null = false
type = varchar(40)
}
primary_key {
columns = [column.dept_no]
}
index "dept_name" {
unique = true
columns = [column.dept_name]
}
}
table "dept_emp" {
schema = schema.employees
column "emp_no" {
null = false
type = int
}
column "dept_no" {
null = false
type = char(4)
}
column "from_date" {
null = false
type = date
}
column "to_date" {
null = false
type = date
}
primary_key {
columns = [column.emp_no, column.dept_no]
}
foreign_key "dept_emp_ibfk_1" {
columns = [column.emp_no]
ref_columns = [table.employees.column.emp_no]
on_update = NO_ACTION
on_delete = CASCADE
}
foreign_key "dept_emp_ibfk_2" {
columns = [column.dept_no]
ref_columns = [table.departments.column.dept_no]
on_update = NO_ACTION
on_delete = CASCADE
}
index "dept_no" {
columns = [column.dept_no]
}
}
table "dept_manager" {
schema = schema.employees
column "emp_no" {
null = false
type = int
}
column "dept_no" {
null = false
type = char(4)
}
column "from_date" {
null = false
type = date
}
column "to_date" {
null = false
type = date
}
primary_key {
columns = [column.emp_no, column.dept_no]
}
foreign_key "dept_manager_ibfk_1" {
columns = [column.emp_no]
ref_columns = [table.employees.column.emp_no]
on_update = NO_ACTION
on_delete = CASCADE
}
foreign_key "dept_manager_ibfk_2" {
columns = [column.dept_no]
ref_columns = [table.departments.column.dept_no]
on_update = NO_ACTION
on_delete = CASCADE
}
index "dept_no" {
columns = [column.dept_no]
}
}
table "employees" {
schema = schema.employees
column "emp_no" {
null = false
type = int
}
column "birth_date" {
null = false
type = date
}
column "first_name" {
null = false
type = varchar(14)
}
column "last_name" {
null = false
type = varchar(16)
}
column "gender" {
null = false
type = enum("M","F")
}
column "hire_date" {
null = false
type = date
}
primary_key {
columns = [column.emp_no]
}
}
table "salaries" {
schema = schema.employees
column "emp_no" {
null = false
type = int
}
column "salary" {
null = false
type = int
}
column "from_date" {
null = false
type = date
}
column "to_date" {
null = false
type = date
}
primary_key {
columns = [column.emp_no, column.from_date]
}
foreign_key "salaries_ibfk_1" {
columns = [column.emp_no]
ref_columns = [table.employees.column.emp_no]
on_update = NO_ACTION
on_delete = CASCADE
}
}
table "titles" {
schema = schema.employees
column "emp_no" {
null = false
type = int
}
column "title" {
null = false
type = varchar(50)
}
column "from_date" {
null = false
type = date
}
column "to_date" {
null = true
type = date
}
primary_key {
columns = [column.emp_no, column.title, column.from_date]
}
foreign_key "titles_ibfk_1" {
columns = [column.emp_no]
ref_columns = [table.employees.column.emp_no]
on_update = NO_ACTION
on_delete = CASCADE
}
}
schema "employees" {
charset = "utf8mb4"
collate = "utf8mb4_0900_ai_ci"
}
To move further, we need to make a few changes to it. What happens if department names, first names, and last names in our application turn out to be rather long? Let’s save the inspected schema to employees.hcl and prepare a modified copy, employees_v2.hcl, with longer varchar columns. A sketch of the preparation (the file names match the commands below; the new lengths are our choice):
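$ atlas schema inspect -u "mysql://root:groot_password@localhost:3306/employees" > employees.hcl
$ cp employees.hcl employees_v2.hcl
# in employees_v2.hcl, bump three column types (the lengths match the diff below):
#   departments.dept_name: varchar(40) -> varchar(200)
#   employees.first_name:  varchar(14) -> varchar(50)
#   employees.last_name:   varchar(16) -> varchar(50)

Now let’s compare the HCL files for our old and new schemas: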
$ atlas schema diff -f file://employees.hcl --to file://employees_v2.hcl --dev-url docker://mysql/8/schema
-- Modify "departments" table
ALTER TABLE `departments` MODIFY COLUMN `dept_name` varchar(200) NOT NULL;
-- Modify "employees" table
ALTER TABLE `employees` MODIFY COLUMN `first_name` varchar(50) NOT NULL, MODIFY COLUMN `last_name` varchar(50) NOT NULL;
As we can see, this command results in a ready-to-apply diff.
Note: we need the --dev-url flag, which points to an empty dev database that Atlas uses to normalize, simulate, and verify schema changes. Specifying docker://mysql/8/schema makes Atlas spin up an empty temporary database in Docker.
You will get the same result if you compare the live database schema with the new file.
Now let’s add a second, empty database by copying the previous service in the Compose file and changing the published port. A sketch of the added service (the name db2 and host port 3307 match the container name and URLs used below):
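  db2:
    image: mysql:8.0
    command: --default-authentication-plugin=mysql_native_password
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: groot_password
    ports:
      - "3307:3306"

Let’s bring it up: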
$ docker compose up -d
[+] Running 2/2
✔ Container root-db2-1 Started 1.1s
✔ Container root-db-1 Running
$ cat db.sql | docker exec -i root-db2-1 mysql -pgroot_password
mysql: [Warning] Using a password on the command line interface can be insecure.
INFO
CREATING DATABASE STRUCTURE
INFO
storage engine: InnoDB
$ atlas schema diff -f mysql://root:groot_password@localhost:3306/employees --to mysql://root:groot_password@localhost:3307/employees --dev-url docker://mysql/8/schema
Schemas are synced, no changes to be made.
… and apply the schema file to bring the first database to the new desired state:
$ atlas schema apply -u mysql://root:groot_password@localhost:3306/employees --to file://employees_v2.hcl --dry-run
-- Planned Changes:
-- Modify "departments" table
ALTER TABLE `employees`.`departments` MODIFY COLUMN `dept_name` varchar(200) NOT NULL;
-- Modify "employees" table
ALTER TABLE `employees`.`employees` MODIFY COLUMN `first_name` varchar(50) NOT NULL, MODIFY COLUMN `last_name` varchar(50) NOT NULL;
$ atlas schema apply -u mysql://root:groot_password@localhost:3306/employees --to file://employees_v2.hcl
-- Planned Changes:
-- Modify "departments" table
ALTER TABLE `employees`.`departments` MODIFY COLUMN `dept_name` varchar(200) NOT NULL;
-- Modify "employees" table
ALTER TABLE `employees`.`employees` MODIFY COLUMN `first_name` varchar(50) NOT NULL, MODIFY COLUMN `last_name` varchar(50) NOT NULL;
✔ Apply
Finally, we will do the same with the second database (this time, using the first database as the source of the desired state rather than a schema file):
$ atlas schema apply -u mysql://root:groot_password@localhost:3307/employees --to mysql://root:groot_password@localhost:3306/employees --dry-run
-- Planned Changes:
-- Modify "departments" table
ALTER TABLE `departments` MODIFY COLUMN `dept_name` varchar(200) NOT NULL;
-- Modify "employees" table
ALTER TABLE `employees` MODIFY COLUMN `first_name` varchar(50) NOT NULL, MODIFY COLUMN `last_name` varchar(50) NOT NULL;
$ atlas schema apply -u mysql://root:groot_password@localhost:3307/employees --to mysql://root:groot_password@localhost:3306/employees --auto-approve
-- Planned Changes:
-- Modify "departments" table
ALTER TABLE `departments` MODIFY COLUMN `dept_name` varchar(200) NOT NULL;
-- Modify "employees" table
ALTER TABLE `employees` MODIFY COLUMN `first_name` varchar(50) NOT NULL, MODIFY COLUMN `last_name` varchar(50) NOT NULL;
Note that the --auto-approve flag allows the changes to be applied without user confirmation.
Cool! Next, let’s see how Atlas works with PostgreSQL.
1.2. Experimenting with PostgreSQL
To do so, we will need another short Docker Compose file:
version: '3.8'
services:
  db:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: jonhdoe42
    ports:
      - "5432:5432"
  db2:
    image: postgres
    restart: always
    environment:
      POSTGRES_PASSWORD: jonhdoe42
    ports:
      - "5433:5432"
Let’s run it:
$ docker compose up -d
[+] Running 3/3
✔ Network root_default Created 0.1s
✔ Container root-db2-1 Started 1.2s
✔ Container root-db-1 Started
Now let’s prepare a test DDL:
CREATE SCHEMA cd;
CREATE TABLE cd.facilities (
facid integer NOT NULL PRIMARY KEY,
name character varying(100) NOT NULL,
membercost numeric NOT NULL,
guestcost numeric NOT NULL,
initialoutlay numeric NOT NULL,
monthlymaintenance numeric NOT NULL
);
CREATE TABLE cd.members (
memid integer NOT NULL PRIMARY KEY,
surname character varying(200) NOT NULL,
firstname character varying(200) NOT NULL,
address character varying(300) NOT NULL,
zipcode integer NOT NULL,
telephone character varying(20) NOT NULL,
recommendedby integer,
joindate timestamp without time zone NOT NULL
);
… and apply it to the first database:
$ cat psql1.sql | docker exec -i root-db-1 psql postgresql://postgres:jonhdoe42@root-db-1
CREATE SCHEMA
CREATE TABLE
CREATE TABLE
Done! Now let’s prepare another, slightly modified DDL (it adds a self-referencing foreign key to members and a new bookings table):
CREATE SCHEMA IF NOT EXISTS cd;
CREATE TABLE IF NOT EXISTS cd.facilities (
facid integer NOT NULL PRIMARY KEY,
name character varying(100) NOT NULL,
membercost numeric NOT NULL,
guestcost numeric NOT NULL,
initialoutlay numeric NOT NULL,
monthlymaintenance numeric NOT NULL
);
CREATE TABLE IF NOT EXISTS cd.members (
memid integer NOT NULL PRIMARY KEY,
surname character varying(200) NOT NULL,
firstname character varying(200) NOT NULL,
address character varying(300) NOT NULL,
zipcode integer NOT NULL,
telephone character varying(20) NOT NULL,
recommendedby integer,
joindate timestamp without time zone NOT NULL,
FOREIGN KEY (recommendedby) REFERENCES cd.members(memid) ON DELETE SET NULL
);
CREATE TABLE IF NOT EXISTS cd.bookings (
bookid integer NOT NULL PRIMARY KEY,
facid integer NOT NULL,
memid integer NOT NULL,
starttime timestamp without time zone NOT NULL,
slots integer NOT NULL,
FOREIGN KEY (facid) REFERENCES cd.facilities(facid),
FOREIGN KEY (memid) REFERENCES cd.members(memid)
);
… and apply it to the second database:
$ cat psql2.sql | docker exec -i root-db2-1 psql postgresql://postgres:jonhdoe42@root-db2-1
CREATE SCHEMA
CREATE TABLE
CREATE TABLE
CREATE TABLE
We are ready to apply the changes from the second database to the first database:
$ atlas schema apply -u postgresql://postgres:jonhdoe42@localhost:5432/postgres?sslmode=disable --to postgresql://postgres:jonhdoe42@localhost:5433/postgres?sslmode=disable --dry-run
-- Planned Changes:
-- Modify "members" table
ALTER TABLE "cd"."members" ADD CONSTRAINT "members_recommendedby_fkey" FOREIGN KEY ("recommendedby") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE SET NULL;
-- Create "bookings" table
CREATE TABLE "cd"."bookings" ("bookid" integer NOT NULL, "facid" integer NOT NULL, "memid" integer NOT NULL, "starttime" timestamp NOT NULL, "slots" integer NOT NULL, PRIMARY KEY ("bookid"), CONSTRAINT "bookings_facid_fkey" FOREIGN KEY ("facid") REFERENCES "cd"."facilities" ("facid") ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT "bookings_memid_fkey" FOREIGN KEY ("memid") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE NO ACTION);
$ atlas schema apply -u postgresql://postgres:jonhdoe42@localhost:5432/postgres?sslmode=disable --to postgresql://postgres:jonhdoe42@localhost:5433/postgres?sslmode=disable --auto-approve
-- Planned Changes:
-- Modify "members" table
ALTER TABLE "cd"."members" ADD CONSTRAINT "members_recommendedby_fkey" FOREIGN KEY ("recommendedby") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE SET NULL;
-- Create "bookings" table
CREATE TABLE "cd"."bookings" ("bookid" integer NOT NULL, "facid" integer NOT NULL, "memid" integer NOT NULL, "starttime" timestamp NOT NULL, "slots" integer NOT NULL, PRIMARY KEY ("bookid"), CONSTRAINT "bookings_facid_fkey" FOREIGN KEY ("facid") REFERENCES "cd"."facilities" ("facid") ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT "bookings_memid_fkey" FOREIGN KEY ("memid") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE NO ACTION);
Everything is applied as expected!
On second thought, indexes might come in handy for us. Let’s add them to the second database! The SQL code we need is:
CREATE INDEX "bookings.memid_facid"
ON cd.bookings
USING btree
(memid, facid);
CREATE INDEX "bookings.facid_memid"
ON cd.bookings
USING btree
(facid, memid);
CREATE INDEX "bookings.facid_starttime"
ON cd.bookings
USING btree
(facid, starttime);
CREATE INDEX "bookings.memid_starttime"
ON cd.bookings
USING btree
(memid, starttime);
CREATE INDEX "bookings.starttime"
ON cd.bookings
USING btree
(starttime);
CREATE INDEX "members.joindate"
ON cd.members
USING btree
(joindate);
CREATE INDEX "members.recommendedby"
ON cd.members
USING btree
(recommendedby);
Applying it:
$ cat psql3.sql | docker exec -i root-db2-1 psql postgresql://postgres:jonhdoe42@root-db2-1
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
Now, let’s use migrations to propagate the indexes to the first database. First, we will create a migration named create_tables from the database without indexes:
$ atlas migrate diff create_tables --dir "file://migrations" --to "postgresql://postgres:jonhdoe42@localhost:5432/postgres?sslmode=disable" --dev-url "docker://postgres/15/test"
Let’s see what’s in the directory:
$ ls migrations/
20230712165510_create_tables.sql atlas.sum
$ cat migrations/20230712165510_create_tables.sql
-- Add new schema named "cd"
CREATE SCHEMA "cd";
-- Create "facilities" table
CREATE TABLE "cd"."facilities" ("facid" integer NOT NULL, "name" character varying(100) NOT NULL, "membercost" numeric NOT NULL, "guestcost" numeric NOT NULL, "initialoutlay" numeric NOT NULL, "monthlymaintenance" numeric NOT NULL, PRIMARY KEY ("facid"));
-- Create "members" table
CREATE TABLE "cd"."members" ("memid" integer NOT NULL, "surname" character varying(200) NOT NULL, "firstname" character varying(200) NOT NULL, "address" character varying(300) NOT NULL, "zipcode" integer NOT NULL, "telephone" character varying(20) NOT NULL, "recommendedby" integer NULL, "joindate" timestamp NOT NULL, PRIMARY KEY ("memid"), CONSTRAINT "members_recommendedby_fkey" FOREIGN KEY ("recommendedby") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE SET NULL);
-- Create "bookings" table
CREATE TABLE "cd"."bookings" ("bookid" integer NOT NULL, "facid" integer NOT NULL, "memid" integer NOT NULL, "starttime" timestamp NOT NULL, "slots" integer NOT NULL, PRIMARY KEY ("bookid"), CONSTRAINT "bookings_facid_fkey" FOREIGN KEY ("facid") REFERENCES "cd"."facilities" ("facid") ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT "bookings_memid_fkey" FOREIGN KEY ("memid") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE NO ACTION);
$ cat migrations/atlas.sum
h1:TBIl3WYjzvTYokioT+K3FOOlq9LSAzryhjxw5gAGz6U=
20230712165510_create_tables.sql h1:BWXssdgcYxJcYSNOaK7bYlhgB5LsumtrNXiBfkC3HiU=
Ok, let’s create a second migration, named create_indexes, from the database that does have the indexes:
$ atlas migrate diff create_indexes --dir "file://migrations" --to "postgresql://postgres:jonhdoe42@localhost:5433/postgres?sslmode=disable" --dev-url "docker://postgres/15/test"
Here is what we’ve come up with:
$ cat migrations/20230712165923_create_indexes.sql
-- Create index "bookings.facid_memid" to table: "bookings"
CREATE INDEX "bookings.facid_memid" ON "cd"."bookings" ("facid", "memid");
-- Create index "bookings.facid_starttime" to table: "bookings"
CREATE INDEX "bookings.facid_starttime" ON "cd"."bookings" ("facid", "starttime");
-- Create index "bookings.memid_facid" to table: "bookings"
CREATE INDEX "bookings.memid_facid" ON "cd"."bookings" ("memid", "facid");
-- Create index "bookings.memid_starttime" to table: "bookings"
CREATE INDEX "bookings.memid_starttime" ON "cd"."bookings" ("memid", "starttime");
-- Create index "bookings.starttime" to table: "bookings"
CREATE INDEX "bookings.starttime" ON "cd"."bookings" ("starttime");
-- Create index "members.joindate" to table: "members"
CREATE INDEX "members.joindate" ON "cd"."members" ("joindate");
-- Create index "members.recommendedby" to table: "members"
CREATE INDEX "members.recommendedby" ON "cd"."members" ("recommendedby");
Goodness… it’s beautiful!
And here’s the cherry on top. If we lint the last two migrations from the migrations directory against a temporary database in Docker, Atlas reports write locks on tables. We’re obviously not too fond of that, are we?
$ atlas migrate lint --dir file://migrations --dev-url "docker://postgres/15/test" --latest 2
20230712165923_create_indexes.sql: concurrent index violations detected:
L2: Creating index "bookings.facid_memid" non-concurrently causes write locks on the "bookings" table
L4: Creating index "bookings.facid_starttime" non-concurrently causes write locks on the "bookings" table
L6: Creating index "bookings.memid_facid" non-concurrently causes write locks on the "bookings" table
L8: Creating index "bookings.memid_starttime" non-concurrently causes write locks on the "bookings" table
L10: Creating index "bookings.starttime" non-concurrently causes write locks on the "bookings" table
L12: Creating index "members.joindate" non-concurrently causes write locks on the "members" table
L14: Creating index "members.recommendedby" non-concurrently causes write locks on the "members" table
Let’s manually fix the migration by adding the magic word CONCURRENTLY:
-- Create index "bookings.facid_memid" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.facid_memid" ON "cd"."bookings" ("facid", "memid");
-- Create index "bookings.facid_starttime" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.facid_starttime" ON "cd"."bookings" ("facid", "starttime");
-- Create index "bookings.memid_facid" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.memid_facid" ON "cd"."bookings" ("memid", "facid");
-- Create index "bookings.memid_starttime" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.memid_starttime" ON "cd"."bookings" ("memid", "starttime");
-- Create index "bookings.starttime" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.starttime" ON "cd"."bookings" ("starttime");
-- Create index "members.joindate" to table: "members"
CREATE INDEX CONCURRENTLY "members.joindate" ON "cd"."members" ("joindate");
-- Create index "members.recommendedby" to table: "members"
CREATE INDEX CONCURRENTLY "members.recommendedby" ON "cd"."members" ("recommendedby");
We can run the linter again:
$ atlas migrate lint --dir file://migrations --dev-url "docker://postgres/15/test" --latest 2
atlas.sum: checksum mismatch
Well, that was easy to predict: we edited the migration file by hand, so its checksum no longer matches atlas.sum. Recalculate the checksums to fix the problem:
$ atlas migrate hash
$ atlas migrate lint --dir file://migrations --dev-url "docker://postgres/15/test" --latest 2
20230712165923_create_indexes.sql: concurrent index violations detected:
L2: Creating index "bookings.facid_memid" non-concurrently causes write locks on the "bookings" table
L4: Creating index "bookings.facid_starttime" non-concurrently causes write locks on the "bookings" table
L6: Creating index "bookings.memid_facid" non-concurrently causes write locks on the "bookings" table
L8: Creating index "bookings.memid_starttime" non-concurrently causes write locks on the "bookings" table
L10: Creating index "bookings.starttime" non-concurrently causes write locks on the "bookings" table
L12: Creating index "members.joindate" non-concurrently causes write locks on the "members" table
L14: Creating index "members.recommendedby" non-concurrently causes write locks on the "members" table
That’s weird… the linter ignores our much-praised CONCURRENTLY!
To figure out why this was happening, we created an issue. The authors acted really quickly and fixed it the very same day.
Note that CONCURRENTLY is not a property of the index but of the statement that creates it. Also, keep in mind that a long-running transaction on a table and a concurrent index build on that same table don’t go well together.
Now that we are confident that our migrations are flawless, let’s apply them to the first database. Since that database already contains the tables from the first migration, we mark it as the baseline via the --baseline 20230712165510 parameter:
$ atlas migrate apply --dir "file://migrations" -u "postgresql://postgres:jonhdoe42@localhost:5432/postgres?sslmode=disable" --baseline 20230712165510
Migrating to version 20230712165923 from 20230712165510 (1 migrations in total):
-- migrating version 20230712165923
-> CREATE INDEX CONCURRENTLY "bookings.facid_memid" ON "cd"."bookings" ("facid", "memid");
pq: CREATE INDEX CONCURRENTLY cannot run inside a transaction block
-------------------------
-- 13.541209ms
-- 0 migrations ok (1 with errors)
-- 0 sql statements ok (1 with errors)
Error: sql/migrate: sql/migrate: execute: executing statement "CREATE INDEX CONCURRENTLY \"bookings.facid_memid\" ON \"cd\".\"bookings\" (\"facid\", \"memid\");" from version "20230712165923": pq: CREATE INDEX CONCURRENTLY cannot run inside a transaction block: sql/migrate: execute: write revision: pq: current transaction is aborted, commands ignored until end of transaction block
This time, the error was pretty much expected: CREATE INDEX CONCURRENTLY cannot run inside a transaction block, and by default, Atlas wraps each migration file in one. The self-evident fix is to disable transaction wrapping with --tx-mode none:
$ atlas migrate apply --dir "file://migrations" -u "postgresql://postgres:jonhdoe42@localhost:5432/postgres?sslmode=disable" --baseline 20230712165510 --tx-mode none
Migrating to version 20230712165923 from 20230712165510 (1 migrations in total):
-- migrating version 20230712165923
-> CREATE INDEX CONCURRENTLY "bookings.facid_memid" ON "cd"."bookings" ("facid", "memid");
-> CREATE INDEX CONCURRENTLY "bookings.facid_starttime" ON "cd"."bookings" ("facid", "starttime");
-> CREATE INDEX CONCURRENTLY "bookings.memid_facid" ON "cd"."bookings" ("memid", "facid");
-> CREATE INDEX CONCURRENTLY "bookings.memid_starttime" ON "cd"."bookings" ("memid", "starttime");
-> CREATE INDEX CONCURRENTLY "bookings.starttime" ON "cd"."bookings" ("starttime");
-> CREATE INDEX CONCURRENTLY "members.joindate" ON "cd"."members" ("joindate");
-> CREATE INDEX CONCURRENTLY "members.recommendedby" ON "cd"."members" ("recommendedby");
-- ok (140.526595ms)
-------------------------
-- 148.935351ms
-- 1 migrations
-- 7 sql statements
From now on, the database contains the atlas_schema_revisions.atlas_schema_revisions table, which stores the applied migrations:
20230712165923 | create_indexes | 2 | 7 | 7 | 2023-07-12 17:47:38.182729+00 | 3720354 | | | WQxGEQSGBr+QCZQB+/cRZ84Ei8Br2iJ9nbwC9/k2oW8= | ["h1:sro8dOuCifHn590KwRbuoLE1w38YVON6O2KtEA0pbMQ=", "h1:RlXV4ShxPexTzjxAJZTrOLohhHskcmEHg34yCE1UqAc=", "h1:3yLYUR5xmF7yOWqA01WQrXZDNSHz7iDMa2ln0ZE23EU=", "h1:ZykewJpaiOCFHTEQrJ5mwj1S4uxGjHpTZcb4lG+qTyc=", "h1:TL2rPzD9teKkY4RxGU//7PgInoonG81+UrPkB1ykRO0=", "h1:X1salktLooVmN73SG2G8dUjgrbOysmHah5bs3o2eahk=", "h1:+DMH0dQEBDmoGUVXJS0FS9TqXYK6Jd4UbqaMXBd40lQ="] | Atlas CLI v0.12.1-972da10-canary
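By the way, nothing stops you from peeking into that table with plain SQL, for example:

$ docker exec -i root-db-1 psql postgresql://postgres:jonhdoe42@root-db-1 -c 'SELECT * FROM atlas_schema_revisions.atlas_schema_revisions;'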
Well, let’s stop here and leave a more detailed look at Atlas’s migration functionality for later. It should be noted, though, that the tool covers most of the developer’s needs (maybe even all of them).
Atlas can even import all the well-known migration tool formats and lets you create your own. We have already mentioned GORM as a possible schema provider; Beego is also supported. If you need more flexibility and/or are brave enough, feel free to leverage the Atlas SDK and write your own provider.
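To give you an idea of how such providers plug in, recent Atlas versions can load the desired state from any program that prints SQL. A sketch of an atlas.hcl using the GORM loader (the module path and flags follow the atlas-provider-gorm README; adjust them to your project):

data "external_schema" "gorm" {
  // the program prints the desired schema to stdout
  program = ["go", "run", "-mod=mod", "ariga.io/atlas-provider-gorm", "load", "--path", "./models", "--dialect", "mysql"]
}

env "gorm" {
  src = data.external_schema.gorm.url
  dev = "docker://mysql/8/dev"
}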
That’s it for now. It’s time to take things to the next level.
2. Using Atlas in Kubernetes
Let’s create a test environment consisting of several databases in K8s and apply migrations to them using Atlas Operator:
$ helm install atlas-operator oci://ghcr.io/ariga/charts/atlas-operator -n atlas-operator --create-namespace
Pulled: ghcr.io/ariga/charts/atlas-operator:0.2.0
Digest: sha256:15c3d10510acf7ba7ba203a3d6c1ecf4a2d54549031eb74aef6452c33488a5e2
NAME: atlas-operator
LAST DEPLOYED: Wed Jul 12 21:46:56 2023
NAMESPACE: atlas-operator
STATUS: deployed
REVISION: 1
TEST SUITE: None
$ kubectl -n atlas-operator get po
NAME READY STATUS RESTARTS AGE
atlas-operator-77f748578b-rxhbr 1/1 Running 0 38s
We’ll use CloudNativePG (we have reviewed this operator in detail before) and the Bitpoke MySQL operator to fire up the databases, with the following db.yaml configuration file:
apiVersion: v1
kind: Secret
metadata:
namespace: databases
name: shop-secret
type: kubernetes.io/basic-auth
stringData:
username: shop
password: jonhdoe42
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: psql1
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:15
bootstrap:
initdb:
database: shop
owner: shop
secret:
name: shop-secret
storage:
size: 1Gi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: psql2
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:13
bootstrap:
initdb:
database: shop
owner: shop
secret:
name: shop-secret
storage:
size: 1Gi
---
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
name: psql3
spec:
instances: 1
imageName: ghcr.io/cloudnative-pg/postgresql:11
bootstrap:
initdb:
database: shop
owner: shop
secret:
name: shop-secret
storage:
size: 1Gi
---
apiVersion: v1
kind: Secret
metadata:
name: mysql-secret
type: Opaque
stringData:
# root password is required to be specified
ROOT_PASSWORD: groot_password
## application credentials that will be created at cluster bootstrap
DATABASE: employees
# USER:
# PASSWORD:
---
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
name: db1
spec:
replicas: 1
secretName: mysql-secret
mysqlVersion: "8.0"
volumeSpec:
persistentVolumeClaim:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
---
apiVersion: mysql.presslabs.org/v1alpha1
kind: MysqlCluster
metadata:
name: db2
spec:
replicas: 1
secretName: mysql-secret
mysqlVersion: "5.7"
volumeSpec:
persistentVolumeClaim:
accessModes: ["ReadWriteOnce"]
resources:
requests:
storage: 1Gi
Create a Kubernetes namespace and apply the db.yaml file:
$ kubectl create ns databases
namespace/databases created
$ kubectl -n databases apply -f db.yaml
secret/shop-secret created
cluster.postgresql.cnpg.io/psql1 created
cluster.postgresql.cnpg.io/psql2 created
cluster.postgresql.cnpg.io/psql3 created
secret/mysql-secret created
mysqlcluster.mysql.presslabs.org/db1 created
mysqlcluster.mysql.presslabs.org/db2 created
80 seconds later, all our databases are up and running:
$ kubectl -n databases get po
NAME READY STATUS RESTARTS AGE
db1-mysql-0 4/4 Running 0 81s
db2-mysql-0 4/4 Running 0 81s
psql1-1 1/1 Running 0 60s
psql2-1 1/1 Running 0 61s
psql3-1 1/1 Running 0 60s
Now we will prepare a manifest for our MySQL databases (migrations-mysql.yaml):
---
apiVersion: v1
kind: Secret
metadata:
name: mysql1-creds
type: Opaque
stringData:
url: "mysql://root:groot_password@db1-mysql.databases:3306/employees"
---
apiVersion: v1
kind: Secret
metadata:
name: mysql2-creds
type: Opaque
stringData:
url: "mysql://root:groot_password@db2-mysql.databases:3306/employees"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: migrationdir-mysql
data:
20230717152110_create_tables.sql: |
-- Create "employees" table
CREATE TABLE employees (emp_no int NOT NULL, birth_date date NOT NULL, first_name varchar(14) NOT NULL, last_name varchar(16) NOT NULL, gender enum('M','F') NOT NULL, hire_date date NOT NULL, PRIMARY KEY (emp_no)) CHARSET utf8mb4;
-- Create "departments" table
CREATE TABLE departments (dept_no char(4) NOT NULL, dept_name varchar(40) NOT NULL, PRIMARY KEY (dept_no), UNIQUE INDEX dept_name (dept_name)) CHARSET utf8mb4;
-- Create "dept_manager" table
CREATE TABLE dept_manager (emp_no int NOT NULL, dept_no char(4) NOT NULL, from_date date NOT NULL, to_date date NOT NULL, PRIMARY KEY (emp_no, dept_no), INDEX dept_no (dept_no), CONSTRAINT dept_manager_ibfk_1 FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON UPDATE NO ACTION ON DELETE CASCADE, CONSTRAINT dept_manager_ibfk_2 FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON UPDATE NO ACTION ON DELETE CASCADE) CHARSET utf8mb4;
-- Create "salaries" table
CREATE TABLE salaries (emp_no int NOT NULL, salary int NOT NULL, from_date date NOT NULL, to_date date NOT NULL, PRIMARY KEY (emp_no, from_date), CONSTRAINT salaries_ibfk_1 FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON UPDATE NO ACTION ON DELETE CASCADE) CHARSET utf8mb4;
-- Create "titles" table
CREATE TABLE titles (emp_no int NOT NULL, title varchar(50) NOT NULL, from_date date NOT NULL, to_date date NULL, PRIMARY KEY (emp_no, title, from_date), CONSTRAINT titles_ibfk_1 FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON UPDATE NO ACTION ON DELETE CASCADE) CHARSET utf8mb4;
-- Create "dept_emp" table
CREATE TABLE dept_emp (emp_no int NOT NULL, dept_no char(4) NOT NULL, from_date date NOT NULL, to_date date NOT NULL, PRIMARY KEY (emp_no, dept_no), INDEX dept_no (dept_no), CONSTRAINT dept_emp_ibfk_1 FOREIGN KEY (emp_no) REFERENCES employees (emp_no) ON UPDATE NO ACTION ON DELETE CASCADE, CONSTRAINT dept_emp_ibfk_2 FOREIGN KEY (dept_no) REFERENCES departments (dept_no) ON UPDATE NO ACTION ON DELETE CASCADE) CHARSET utf8mb4;
20230717152222_change_lengths.sql: |
-- Modify "departments" table
ALTER TABLE departments MODIFY COLUMN dept_name varchar(100) NOT NULL;
-- Modify "employees" table
ALTER TABLE employees MODIFY COLUMN first_name varchar(50) NOT NULL, MODIFY COLUMN last_name varchar(50) NOT NULL;
atlas.sum: |
h1:FfpuPZJBiN0hYk3K9vIDQfvpbN90jnDVXR53+8YtrBw=
20230717152110_create_tables.sql h1:CLp9rnUH4jHwWkpAiCScuDtUzYrXbSnJtxI8Mh8ntjQ=
20230717152222_change_lengths.sql h1:T0Hi658Y7ZsVhaGSD6mhIyFTgURYFpmEf83Yl9Nd4oA=
---
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
name: atlasmigration-mysql1
spec:
urlFrom:
secretKeyRef:
key: url
name: mysql1-creds
dir:
configMapRef:
name: migrationdir-mysql
---
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
name: atlasmigration-mysql2
spec:
urlFrom:
secretKeyRef:
key: url
name: mysql2-creds
dir:
configMapRef:
name: migrationdir-mysql
This YAML configuration includes:
- secrets with URLs that Atlas will use to find out where and how to connect — the namespace is specified, too, since the operator runs in a different namespace;
- a generic ConfigMap with the schema and the changes it makes (as migrations);
- an AtlasMigration custom resource that brings everything together and prompts the operator to migrate the databases.
Let’s see what happens. Apply the manifest:
$ kubectl -n databases apply -f migrations-mysql.yaml
secret/mysql1-creds created
secret/mysql2-creds created
configmap/migrationdir-mysql created
atlasmigration.db.atlasgo.io/atlasmigration-mysql1 created
atlasmigration.db.atlasgo.io/atlasmigration-mysql2 created
Well, the output of the get command could’ve been more informative:
$ kubectl -n databases get atlasmigrations.db.atlasgo.io
NAME AGE
atlasmigration-mysql1 2m59s
atlasmigration-mysql2 2m59s
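By the way, since the resource exposes a standard Ready condition (visible in the describe output below), a generic kubectl wait comes in handy for scripting:

$ kubectl -n databases wait --for=condition=Ready atlasmigration/atlasmigration-mysql1 --timeout=120s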
What about looking at the migration resource’s description? It’s better, since there is Reason: Applied:
$ kubectl -n databases describe atlasmigrations.db.atlasgo.io atlasmigration-mysql1
Name: atlasmigration-mysql1
Namespace: databases
Labels: <none>
Annotations: <none>
API Version: db.atlasgo.io/v1alpha1
Kind: AtlasMigration
Metadata:
Creation Timestamp: 2023-07-17T16:01:56Z
Generation: 1
Managed Fields:
API Version: db.atlasgo.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:annotations:
.:
f:kubectl.kubernetes.io/last-applied-configuration:
f:spec:
.:
f:dir:
.:
f:configMapRef:
f:urlFrom:
.:
f:secretKeyRef:
Manager: kubectl-client-side-apply
Operation: Update
Time: 2023-07-17T16:01:56Z
API Version: db.atlasgo.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:status:
.:
f:conditions:
f:lastApplied:
f:lastAppliedVersion:
f:observed_hash:
Manager: manager
Operation: Update
Subresource: status
Time: 2023-07-17T16:15:19Z
Resource Version: 53849536
UID: 43b829b4-c36d-4761-adff-aa304457c21a
Spec:
Dir:
Config Map Ref:
Name: migrationdir-mysql
URL From:
Secret Key Ref:
Key: url
Name: mysql1-creds
Status:
Conditions:
Last Transition Time: 2023-07-17T16:15:18Z
Message:
Reason: Applied
Status: True
Type: Ready
Last Applied: 1689610518
Last Applied Version: 20230717152222
observed_hash: c7911afe9a6f0918b5cdebfc41f85baf6d9bc08cd272aed87b25b3541be71a35
Events: <none>
… yet we don’t have enough information to know how everything went. All doubts fade away the moment we look into the database using the good old mysql CLI:
$ kubectl -n databases exec -i db1-mysql-0 -cmysql -- mysql -uroot -pgroot_password employees -e'show create table employees;'
mysql: [Warning] Using a password on the command line interface can be insecure.
Table Create Table
employees CREATE TABLE `employees` (\n `emp_no` int NOT NULL,\n `birth_date` date NOT NULL,\n `first_name` varchar(50) NOT NULL,\n `last_name` varchar(50) NOT NULL,\n `gender` enum('M','F') NOT NULL,\n `hire_date` date NOT NULL,\n PRIMARY KEY (`emp_no`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4 COLLATE=utf8mb4_0900_ai_ci
$ kubectl -n databases exec -i db2-mysql-0 -cmysql -- mysql -uroot -pgroot_password employees -e'show create table employees;'
mysql: [Warning] Using a password on the command line interface can be insecure.
Table Create Table
employees CREATE TABLE `employees` (\n `emp_no` int(11) NOT NULL,\n `birth_date` date NOT NULL,\n `first_name` varchar(50) NOT NULL,\n `last_name` varchar(50) NOT NULL,\n `gender` enum('M','F') NOT NULL,\n `hire_date` date NOT NULL,\n PRIMARY KEY (`emp_no`)\n) ENGINE=InnoDB DEFAULT CHARSET=utf8mb4
The same goes for PostgreSQL. Here’s our YAML manifest:
---
apiVersion: v1
kind: Secret
metadata:
name: psql1-creds
type: Opaque
stringData:
url: "postgres://shop:jonhdoe42@psql1-rw:5432/shop?sslmode=disable"
---
apiVersion: v1
kind: Secret
metadata:
name: psql2-creds
type: Opaque
stringData:
url: "postgres://shop:jonhdoe42@psql2-rw:5432/shop?sslmode=disable"
---
apiVersion: v1
kind: Secret
metadata:
name: psql3-creds
type: Opaque
stringData:
url: "postgres://shop:jonhdoe42@psql3-rw:5432/shop?sslmode=disable"
---
apiVersion: v1
kind: ConfigMap
metadata:
name: migrationdir-psql
data:
20230712165510_create_tables.sql: |
-- Add new schema named "cd"
CREATE SCHEMA "cd";
-- Create "facilities" table
CREATE TABLE "cd"."facilities" ("facid" integer NOT NULL, "name" character varying(100) NOT NULL, "membercost" numeric NOT NULL, "guestcost" numeric NOT NULL, "initialoutlay" numeric NOT NULL, "monthlymaintenance" numeric NOT NULL, PRIMARY KEY ("facid"));
-- Create "members" table
CREATE TABLE "cd"."members" ("memid" integer NOT NULL, "surname" character varying(200) NOT NULL, "firstname" character varying(200) NOT NULL, "address" character varying(300) NOT NULL, "zipcode" integer NOT NULL, "telephone" character varying(20) NOT NULL, "recommendedby" integer NULL, "joindate" timestamp NOT NULL, PRIMARY KEY ("memid"), CONSTRAINT "members_recommendedby_fkey" FOREIGN KEY ("recommendedby") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE SET NULL);
-- Create "bookings" table
CREATE TABLE "cd"."bookings" ("bookid" integer NOT NULL, "facid" integer NOT NULL, "memid" integer NOT NULL, "starttime" timestamp NOT NULL, "slots" integer NOT NULL, PRIMARY KEY ("bookid"), CONSTRAINT "bookings_facid_fkey" FOREIGN KEY ("facid") REFERENCES "cd"."facilities" ("facid") ON UPDATE NO ACTION ON DELETE NO ACTION, CONSTRAINT "bookings_memid_fkey" FOREIGN KEY ("memid") REFERENCES "cd"."members" ("memid") ON UPDATE NO ACTION ON DELETE NO ACTION);
-- Create "employees" table
20230712165923_create_indexes.sql: |
-- Create index "bookings.facid_memid" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.facid_memid" ON "cd"."bookings" ("facid", "memid");
-- Create index "bookings.facid_starttime" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.facid_starttime" ON "cd"."bookings" ("facid", "starttime");
-- Create index "bookings.memid_facid" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.memid_facid" ON "cd"."bookings" ("memid", "facid");
-- Create index "bookings.memid_starttime" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.memid_starttime" ON "cd"."bookings" ("memid", "starttime");
-- Create index "bookings.starttime" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.starttime" ON "cd"."bookings" ("starttime");
-- Create index "members.joindate" to table: "members"
CREATE INDEX CONCURRENTLY "members.joindate" ON "cd"."members" ("joindate");
-- Create index "members.recommendedby" to table: "members"
CREATE INDEX CONCURRENTLY "members.recommendedby" ON "cd"."members" ("recommendedby");
("recommendedby");
atlas.sum: |
h1:xTbw3AxHRH3YSW7dOGteMxvcigM3f/Y7pdkwwScmbGM=
20230712165510_create_tables.sql h1:BWXssdgcYxJcYSNOaK7bYlhgB5LsumtrNXiBfkC3HiU=
20230712165923_create_indexes.sql h1:WQxGEQSGBr+QCZQB+/cRZ84Ei8Br2iJ9nbwC9/k2oW8=
---
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
name: atlasmigration-psql1
spec:
urlFrom:
secretKeyRef:
key: url
name: psql1-creds
dir:
configMapRef:
name: "migrationdir-psql"
---
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
name: atlasmigration-psql2
spec:
urlFrom:
secretKeyRef:
key: url
name: psql2-creds
dir:
configMapRef:
name: "migrationdir-psql"
---
apiVersion: db.atlasgo.io/v1alpha1
kind: AtlasMigration
metadata:
name: atlasmigration-psql3
spec:
urlFrom:
secretKeyRef:
key: url
name: psql3-creds
dir:
configMapRef:
name: "migrationdir-psql"
Our migration failed because of the CONCURRENTLY keyword, which caused an error when creating the index:
Status:
Conditions:
Last Transition Time: 2023-07-17T17:18:37Z
Message: sql/migrate: sql/migrate: execute: executing statement "CREATE INDEX CONCURRENTLY \"bookings.facid_memid\" ON \"cd\".\"bookings\" (\"facid\", \"memid\");" from version "20230712165923": pq: CREATE INDEX CONCURRENTLY cannot run inside a transaction block: sql/migrate: execute: write revision: pq: current transaction is aborted, commands ignored until end of transaction block
Reason: Reconciling
Status: False
Type: Ready
Last Applied: 0
The documentation didn’t seem entirely clear about what needed to be done, so we created another issue. Thanks to the Atlas developers’ prompt help again, we fixed our migrations. This comment was our clue to success:
A file directive needs to be separated by two newlines from the first statement’s comment.
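In practice, this means putting the atlas:txmode file directive at the very top of the migration file in the ConfigMap, separated from the first statement’s comment by an empty line. A sketch of the fixed header (remember that editing the file changes its checksum, so atlas.sum has to be regenerated with atlas migrate hash):

-- atlas:txmode none

-- Create index "bookings.facid_memid" to table: "bookings"
CREATE INDEX CONCURRENTLY "bookings.facid_memid" ON "cd"."bookings" ("facid", "memid");

The rest of the file stays the same.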
Finally, all the indexes were added, and the migrations were applied! There’s definitely more to discover with Atlas, but the article seems to be big enough already.
Conclusion
We tried Atlas in a couple of simple combinations in two containerized environments, Docker and Kubernetes. The results turned out pretty good, and the tool handled all of our cases. To benefit from it, it’s essential to know:
- what exactly is in the DDL migrations;
- what result you expect to achieve in database states;
- what your overall expectations are of the migration tool and the operator managing it.
That means Atlas will be helpful when you have a containerized infrastructure used by different development teams across multiple environments. By using the tool collectively, DDL changes can be replicated from local developer environments to production with maximum security, reproducibility, and transparency for everyone involved in the process. Security is ensured both by Atlas itself (e.g., by calculating migration checksums) and by validating the migrations.
There is no doubt that Atlas’ functionality will continue to evolve, and it’s really exciting to keep an eye on its future!