Migrating from PlanetScale (free tier) to Supabase (free tier)
Recently, PlanetScale announced they will be discontinuing their free tier. Free tier accounts were hit with this banner:
Starting April 8th, the Hobby plan will no longer be available. You'll need to upgrade to Scaler Pro to continue using PlanetScale. Read more in our blog post
I was not ready to shell out $47 a month to host my tiny database on PlanetScale. Here are the steps I took to move to Supabase.
Disclaimer: This is not a comprehensive guide; these are the steps I took personally after trials and errors. Your steps will (most probably) differ. Remember to test, test, test.
The General Steps
I used pgloader to load my MySQL database from PlanetScale into a local Postgres database on my machine. I test locally and make the necessary changes to the code and database. When everything works, I push the schema and data to Supabase. When everything works, I push the code to a staging server. When everything works, I push everything to production.
Considerations
- PlanetScale runs MySQL, whereas Supabase runs Postgres.
- PlanetScale free tier included daily backups, kept for 2 days. Supabase free tier does NOT give you backups. (Backups are kept, but you'll have to upgrade to a paid plan to access them. Ref)
- PlanetScale gave you 5GB of storage. Supabase has a database storage limit of 500 MB.
- PlanetScale gave you schema branching for free. Branching is a paid-tier-only feature in Supabase.
- Both put your project to sleep after 7 days of inactivity.
More considerations
- This might be a permanent move to Postgres.
- Supabase might follow suit and get rid of their free tier in the future.
- Migration might fail.
Still convinced Supabase is the right choice? Me, too. Okay, here's how I did it.
Prerequisites
- Install pgloader
- Install postgresql (You will need the server, psql, and pg_dump)
- Install pscale (the PlanetScale CLI)
Basic PSQL commands
Terminal commands:
- `createdb` - creates a new database
- `psql postgres:///dbname` - connects to the database named dbname
PSQL commands:
- `\?` - lists commands
- `\l` - lists databases
- `\c <DB>` - connects to database
- `\dn` - lists schemas
- `\dt <schema>.*` - lists tables in schema
- `\q` - quits
Preparing to Migrate
Before anything, I create a backup of the database on PlanetScale (billed at $0.023/GB/month).
I use pgloader to load the whole SQL database from PlanetScale into my local Postgres database. My database was quite small. (If it were bigger, I would probably want to clone it into a local MySQL server first.)
pscale connect dbname main
In a new session, I run:
createdb dbname
pgloader mysql://username@127.0.0.1:3306/dbname postgresql:///dbname
table name errors rows bytes total time
-------------------------------- --------- --------- --------- --------------
fetch meta data 0 59 1.220s
Create Schemas 0 0 0.005s
Create SQL Types 0 0 0.003s
Create tables 0 30 0.067s
Set Table OIDs 0 15 0.004s
...
I encountered a few errors at first, but after some troubleshooting, I had loaded the MySQL database into Postgres.
Configuration Changes
I create a new .env file to configure the application to use my local pg server.
At this point, errors arose in my local instance. I then made changes to Drizzle's schema and to the codebase.
Note: pgloader has already created all the tables and imported the data, but I wanted to create the tables using the migration files from Drizzle instead of the ones created by pgloader.
A Fresh Drizzle
I made a new folder for Drizzle with the updated schema.ts file.
Changes to schema.ts
I change all table declarations to Postgres: `mysqlTable("table")` becomes `pgTable("table")`.
Since I wanted to use a custom schema (instead of the default "public" schema), my code became `pgSchema(SCHEMA).table("table")`, where SCHEMA is the database name I had in PlanetScale, such as "myapp". I did this for every table.
Then I searched for all occurrences of sql`...` and added the schema name in front of table names. For example, user became myapp.user.
Note: this is only for the rare occasions where I had subqueries. Also, I should probably avoid hardcoding the schema.
Column Types
- `int("column")` to `integer("column")`
- Auto-increment columns: `int("id").autoincrement()` to `serial("id")`
- Any `datetime` columns to `timestamp`
- Any sql`CURRENT_TIMESTAMP` to sql`now()`
- Any `double` to `doublePrecision`
Pay extra attention to the nuances of timezones when converting datetime to timestamp. I had saved my datetime values in UTC, so it was fine.
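To make the timezone concern concrete, here is a small sketch (the helper name toUtcIso is my own, not part of any library) showing why a naive MySQL datetime string should be tagged as UTC explicitly before handing it to anything date-aware:

```typescript
// Convert a naive MySQL DATETIME string (assumed to be stored in UTC)
// into an unambiguous ISO-8601 UTC timestamp.
// Without the trailing "Z", new Date() would parse the string in the
// server's local timezone, silently shifting every value.
function toUtcIso(mysqlDatetime: string): string {
  return new Date(mysqlDatetime.replace(" ", "T") + "Z").toISOString();
}

console.log(toUtcIso("2024-04-08 12:30:00")); // "2024-04-08T12:30:00.000Z"
```

If your datetimes were stored in a local timezone instead, you would need to convert them to UTC during the migration rather than just relabeling them.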
Code Changes
I change the way the application connects to the database, according to the docs: Connecting to Supabase
There are instances where I needed the ID of the last inserted row. MySQL would return the list of IDs inserted. For example, when adding a new user:
let insert_result = await db.insert(user).values({
...
})
if (insert_result.length && insert_result[0].insertId) {
// added user
}
In Postgres, nothing gets returned by default. You have to specify what you want `insert()` to return using the `returning()` function:
let insert_result = await db.insert(user).values({
...
}).returning({ insertId: user.id }) // <-- HERE
if (insert_result.length && insert_result[0].insertId) {
// added user
}
I was expecting insertId, so I specify that by passing an object into `returning()`. This allows for minimal code changes.
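If the same "get the first inserted ID" check appears in many places, a tiny helper (my own, not part of Drizzle) keeps the call sites close to the MySQL version:

```typescript
// Hypothetical helper: pull the first insertId out of the rows that
// returning() gives back, or undefined if nothing was inserted.
function firstInsertId(rows: Array<{ insertId: number }>): number | undefined {
  return rows.length > 0 ? rows[0].insertId : undefined;
}

console.log(firstInsertId([{ insertId: 42 }])); // 42
console.log(firstInsertId([])); // undefined
```

This keeps the "did the insert succeed" logic in one place instead of repeating the length check everywhere.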
Keep fixing bugs until all tests pass and everything works as it should.
New Migration File
I run `drizzle-kit generate:pg` to create a fresh migration file in my new migrations folder, named something like 0000_random_name.sql. This file contains the database schema. I now need the data from the local Postgres database.
I run pg_dump of the database:
pg_dump --data-only postgres:///dbname > dump.sql
Add this to the top of the file to avoid constraint violation errors:
SET session_replication_role = replica;
Note: remove any other commands in the file you do not want or need, such as the configuration commands near the top.
Important: near the end of the dump file, there are commands which set the current values of the sequences backing the auto-increment columns. You want to keep these.
-- For example
SELECT pg_catalog.setval('schema_name.user_id_seq', 2222, true); -- this is the last inserted user id
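If those setval lines get lost while editing the dump, they can be rebuilt: Postgres's pg_get_serial_sequence() looks up the sequence behind a serial column, and setval() can resync it with the column's current MAX. A sketch that generates such a statement (the function name is my own, and it assumes the identifiers come from your own migration code, never from user input):

```typescript
// Build a SQL statement that resyncs the sequence behind a serial column
// with the current MAX value of that column. The string building here is
// only illustrative; identifiers are assumed to be trusted.
function resyncSequenceSql(schema: string, table: string, column: string): string {
  const qualified = `${schema}.${table}`;
  return (
    `SELECT setval(pg_get_serial_sequence('${qualified}', '${column}'), ` +
    `COALESCE((SELECT MAX(${column}) FROM ${qualified}), 1));`
  );
}

console.log(resyncSequenceSql("myapp", "user", "id"));
```

Running the generated statement in psql after the data import ensures the next insert does not collide with an existing ID.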
Two-File Migration
I now have two SQL files to run: one with the schema and another with the data. I create a new database in my local Postgres instance, along with a new .env config, and test the migration process locally before running the files on Supabase.
Ready for Supabase
The next steps are fairly simple.
- Run the migration file with the database schema on Supabase:
  psql <connection_string> < 0000_filename.sql
- Run the migration file with the data on Supabase:
  psql <connection_string> < dump.sql
- Configure the application running on localhost to use Supabase as the database, for testing purposes. For example:
  dotenv -e .env.supabase npm run dev
- Configure a staging server to use Supabase as the database, for testing purposes. Run tests.
- Finally, configure the production server to use Supabase as the database.
Your Mileage May Vary
Again, your process might be different. You may encounter roadblocks I did not. Good luck.