Understanding the Architectural and Operational Gaps in Amazon DMS

When setting up a pre-production environment, we turned to Amazon DMS to replicate our Aurora database into a fresh RDS PostgreSQL instance. Tables were created, data came through, and the job reported success. We thought we were done. We were not.

The Problem

Once we deployed the application, errors started showing up immediately. When we inspected the target database, the issue was obvious: DMS had moved only the data. Everything else was silently missing:

  • Triggers
  • Functions and stored procedures
  • Sequences (breaking auto-increment columns)
  • Secondary indexes
  • Foreign key and check constraints

No errors in the DMS logs. Just a functionally broken database.
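One way to catch this earlier is a quick inventory of non-data objects on each side. Here's a minimal sketch, assuming psycopg2 and a reachable connection string (both illustrative, not part of our original setup), using the standard PostgreSQL catalogs:

```python
# Hedged sketch: count non-data objects via information_schema and pg_catalog.
# The psycopg2 dependency and the 'public' schema filter are assumptions.
OBJECT_COUNT_QUERIES = {
    "triggers":  "SELECT count(*) FROM information_schema.triggers",
    "routines":  "SELECT count(*) FROM information_schema.routines "
                 "WHERE routine_schema = 'public'",
    "sequences": "SELECT count(*) FROM information_schema.sequences",
    "indexes":   "SELECT count(*) FROM pg_indexes WHERE schemaname = 'public'",
}

def object_counts(dsn: str) -> dict:
    """Run each count query against the database at `dsn`."""
    import psycopg2  # imported here so the module loads without the driver
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        counts = {}
        for name, sql in OBJECT_COUNT_QUERIES.items():
            cur.execute(sql)
            counts[name] = cur.fetchone()[0]
        return counts
```

Run it against the source and the target right after the DMS task finishes; a target reporting zeros for triggers and routines is the smoking gun.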

Why DMS Doesn’t Migrate These

DMS is a data replication tool, not a schema migration tool. It’s built to move rows fast — ideally with minimal downtime using Change Data Capture. Objects like triggers and foreign keys can actually interfere with high-speed bulk loads, so DMS skips them by design. This is documented, but easy to miss until it bites you.
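To illustrate the mechanism: PostgreSQL has a session setting, `session_replication_role`, that suppresses ordinary triggers and foreign-key enforcement, and replication-style loaders lean on it for speed. A sketch (whether DMS uses exactly this setting on PostgreSQL targets is an assumption worth verifying against the AWS docs):

```python
# Sketch of the session setting replication-style loaders use to bypass
# triggers and FK enforcement during bulk apply. `cur` is any DB-API cursor.
BULK_APPLY_SETUP = "SET session_replication_role = replica"

def prepare_bulk_apply(cur) -> None:
    # With role 'replica', ordinary triggers do not fire and FK constraint
    # triggers are skipped -- rows load fast, and business logic is skipped too.
    cur.execute(BULK_APPLY_SETUP)
```

This is exactly why a load can "succeed" while every trigger-driven side effect silently never happens.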

What We Tried (And Why It Didn’t Work)

AWS Schema Conversion Tool (SCT) — Helped with some objects but required heavy manual review, didn’t handle all PostgreSQL syntax cleanly, and sequences still needed manual fixes.

Manual pg_dump + SQL scripts — Exporting missing objects individually worked but was fragile, hard to repeat, had no version control, and broke when objects had dependencies on each other.
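For context, the scripts were roughly built around pg_dump's section flags. A sketch of the commands involved (connection strings and file names are placeholders):

```python
# Hedged sketch of the pg_dump approach: build commands that export only the
# schema objects DMS skips. Per the pg_dump docs, --section=post-data covers
# indexes, triggers, and most constraints; pre-data covers table definitions,
# sequences, and functions. DSNs and file names here are placeholders.
def pre_data_dump_cmd(source_dsn: str, outfile: str) -> list:
    return [
        "pg_dump", "--schema-only",
        "--section=pre-data",     # table defs, sequences, functions
        "--file", outfile, source_dsn,
    ]

def post_data_dump_cmd(source_dsn: str, outfile: str) -> list:
    return [
        "pg_dump", "--schema-only",
        "--section=post-data",    # indexes, triggers, constraints
        "--file", outfile, source_dsn,
    ]
```

Even with the sections split correctly, replaying the resulting files in dependency order across environments is where this approach kept breaking for us.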

Neither approach was something we could reliably repeat across environments.

What Actually Worked: Alembic

We already had Alembic in our stack as our schema migration framework. The question became — why were we using DMS for environment setup at all?

We didn’t need live replication. We needed a repeatable, version-controlled way to build a complete database. So we dropped DMS from the process entirely and just ran our existing Alembic migrations on the fresh instance. Everything came through — tables, indexes, constraints, sequences, functions, and triggers — because it was all defined in code. For any legacy objects not yet in Alembic, we added a single bootstrap.sql committed to the repo.
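The resulting bootstrap is small enough to sketch in full. Names are illustrative, and it assumes Alembic reads its database URL from alembic.ini or env.py as usual:

```python
# Hedged sketch of the one-command environment bootstrap. Assumes an Alembic
# project on disk and psql on PATH; the db_url and file name are illustrative.
import subprocess

def bootstrap_plan(db_url: str, legacy_sql: str = "bootstrap.sql") -> list:
    """Ordered commands: migrations first, then legacy objects not yet in Alembic."""
    return [
        ["alembic", "upgrade", "head"],      # tables, indexes, constraints, ...
        ["psql", db_url, "-f", legacy_sql],  # triggers/functions not yet migrated
    ]

def bootstrap(db_url: str) -> None:
    for cmd in bootstrap_plan(db_url):
        subprocess.run(cmd, check=True)  # fail fast if any step errors
```

Because every object lives in version control, the same two steps produce the same database in every environment.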

The result: a fully functional database built from one command.

Outcome

  • Environment setup dropped from hours of debugging to a single command
  • No missing objects — everything defined in version control
  • Fully repeatable across dev, staging, and pre-prod
  • New developers could spin up a database reliably without any manual steps

Lessons Learned

A successful DMS job doesn’t mean a working database. It only means data moved. Always audit non-data objects after any migration by comparing triggers, functions, sequences, and indexes between source and target.
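That audit can be as simple as diffing per-type object name sets pulled from each side's catalogs. A minimal, pure-Python sketch (how you collect the name sets is up to you):

```python
# Minimal sketch: compare per-type object name sets between source and target.
def audit_gaps(source: dict, target: dict) -> dict:
    """Return, per object type, names present on the source but missing on the target."""
    return {
        obj_type: names - target.get(obj_type, set())
        for obj_type, names in source.items()
    }

# Illustrative inventories (names are made up):
src = {"triggers": {"audit_insert"}, "sequences": {"orders_id_seq"}}
tgt = {"triggers": set(), "sequences": {"orders_id_seq"}}
audit_gaps(src, tgt)  # -> {"triggers": {"audit_insert"}, "sequences": set()}
```

Any non-empty set in the result is an object the migration dropped on the floor.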

Use the right tool for the job. DMS is for replication. Alembic, Flyway, and Liquibase are for schema management. They solve different problems — don’t mix them up.

Triggers and functions fail silently. A missing column crashes your app instantly, but a missing trigger lets data insert fine while silently skipping the business logic. These bugs rarely surface in testing because nothing ever errors.

When Should You Use DMS?

Use it for live production migrations with large data volumes, cross-engine migrations (MySQL to PostgreSQL, etc.), or ongoing CDC replication. For environment setup and schema management, a proper migration tool like Alembic will serve you much better.

Final Thoughts

DMS is powerful, but the name is misleading. It moves data — it doesn’t migrate your full database. Before using it, list every object type in your schema and verify which ones DMS actually carries over. If you already have a schema migration framework, that might be all you need.

Have a different approach that worked for you? Share it in the comments.
