Cloud Migration Strategies
The 7 Rs framework gives every workload a migration path. The hard part isn't choosing a strategy. It's sequencing hundreds of workloads into waves that respect dependencies, network constraints, and team capacity.
The 7 Rs: what each migration strategy means with concrete examples
Cloud migration strategies fall into seven categories, commonly called the 7 Rs. Most enterprise migrations use at least four of them across their application portfolio.

Rehost means moving a workload to the cloud without changing its code or architecture. You take a VMware virtual machine running a Java application on Tomcat and convert it to an EC2 instance, Azure VM, or Compute Engine instance using AWS Application Migration Service (MGN), Azure Migrate, or Google's Migrate for Compute Engine. Rehosting is fast, typically 2-4 weeks per application, and works when the workload doesn't need architectural changes in the cloud.

Replatform means making targeted optimizations during migration without rewriting application code. Swapping a self-managed MySQL database for Amazon RDS or Azure Database for MySQL is replatforming, as is replacing an Apache web server with an Application Load Balancer. The application code stays the same, but you swap infrastructure components for managed equivalents.

Repurchase means replacing a self-hosted application with a SaaS equivalent: migrating from an on-premises Jira Server installation to Jira Cloud, or moving from a self-managed email server to Microsoft 365. This strategy eliminates the workload from your migration scope entirely.

Refactor means rewriting the application to use cloud-native services. Converting a monolithic .NET application into microservices running on EKS with DynamoDB instead of SQL Server is refactoring. This is the most expensive strategy but delivers the most benefit for applications that need elastic scaling or faster release cycles.

Retire means decommissioning the application entirely. Discovery assessments typically find that 10-20% of applications are unused, duplicated, or already replaced. Turning them off before migration reduces scope and cost.

Retain means keeping the workload on-premises, either permanently or until a future migration phase. Mainframe applications, hardware-dependent systems, and workloads with strict data sovereignty requirements fall into this category.

Relocate is the newest R, added when VMware Cloud on AWS, Azure VMware Solution, and Google Cloud VMware Engine made it possible to move entire VMware clusters to the cloud without converting individual VMs.
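To make the taxonomy concrete, a portfolio inventory can be tallied by strategy to see the 7-R mix at a glance. This is an illustrative sketch: the application names and strategy assignments below are hypothetical.

```python
from collections import Counter

# Hypothetical portfolio: each application tagged with one R during assessment.
portfolio = {
    "order-service": "rehost",
    "hr-portal": "rehost",
    "inventory-db": "replatform",
    "jira-server": "repurchase",
    "billing-monolith": "refactor",
    "legacy-reporting": "retire",
    "mainframe-batch": "retain",
    "vmware-cluster-03": "relocate",
}

# Count how many applications land on each strategy.
mix = Counter(portfolio.values())
print(mix["rehost"], len(mix))  # prints "2 7": two rehosts, seven strategies in use
```

Even a toy tally like this is useful in steering meetings: if 90% of the portfolio lands on refactor, the assessment was probably too ambitious.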
Assessing workloads: the criteria that determine migration strategy
Workload assessment evaluates each application against criteria that determine which of the 7 Rs applies. The assessment happens before you write a single migration runbook. Get it wrong, and you'll rehost an application that should've been retired, or attempt to refactor a workload that could've been replatformed in a week.

Latency requirements are the first filter. Applications that need sub-millisecond access to on-premises databases or hardware security modules can't simply rehost unless you also migrate their dependencies. A trading application that reads market data from a co-located feed handler with 200-microsecond latency won't tolerate 5-millisecond roundtrips to a cloud database. These workloads are retained, or you migrate the entire dependency chain together.

Data gravity determines which workloads migrate first. If your ERP system writes 50 TB of data that six other applications read, those downstream applications shouldn't migrate before the ERP system. Moving consumers first means they'd read data across a WAN link, adding latency and egress costs. Map the data dependencies and migrate from the center of gravity outward.

Compliance constraints narrow the target cloud and region. Workloads processing PCI cardholder data need environments with PCI DSS certification. Healthcare workloads need HIPAA-compliant configurations with Business Associate Agreements. Financial services workloads in the EU need to satisfy DORA operational resilience requirements. Document these constraints during assessment so the migration team doesn't discover them during implementation.

Team skill is the most underestimated criterion. A team of five developers who've maintained a Java monolith for eight years can't suddenly refactor it into Kubernetes microservices without training or hiring. Matching the migration strategy to the team's current capabilities prevents the pattern where leadership approves a refactoring project that delivers a half-migrated system 18 months behind schedule.

License analysis matters for commercial software. Oracle Database, SQL Server, and SAP have licensing models that change dramatically in the cloud. Oracle's per-core licensing effectively doubles on AWS and Azure because those providers use hyper-threaded vCPUs. Moving Oracle workloads to OCI avoids this multiplier, and SQL Server workloads with Software Assurance qualify for Azure Hybrid Benefit pricing.
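The criteria above can be sketched as a first-pass triage function. This is a deliberate simplification under assumed attribute names: a real assessment weighs licensing, data gravity, and team skill together rather than in a fixed priority order.

```python
def recommend_strategy(app: dict) -> str:
    """First-pass triage mapping assessment criteria to one of the 7 Rs.

    Simplified sketch: checks run in a fixed order, and every attribute
    name here is an illustrative assumption, not a standard schema.
    """
    if app.get("unused"):
        return "retire"            # drops out of migration scope entirely
    if app.get("saas_equivalent"):
        return "repurchase"        # e.g. Jira Server -> Jira Cloud
    if app.get("sub_ms_latency_to_onprem") or app.get("hardware_dependent"):
        return "retain"            # or migrate its whole dependency chain
    if app.get("needs_elastic_scaling") and app.get("team_can_refactor"):
        return "refactor"          # only when the team can actually do it
    if app.get("managed_service_available"):
        return "replatform"        # e.g. self-managed MySQL -> RDS
    return "rehost"                # default: lift-and-shift

print(recommend_strategy({"needs_elastic_scaling": True}))  # prints "rehost"
```

Note the last example: elastic-scaling demand without refactoring skill falls through to rehost, which mirrors the team-skill argument above.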
Migration wave planning: sequencing workloads and mapping dependencies
Wave planning organizes assessed workloads into sequential groups that migrate together. Each wave should be deployable independently, meaning the workloads in wave 3 shouldn't depend on workloads that don't migrate until wave 5.

The first wave is always the easiest workloads with the fewest dependencies. These are your proof-of-concept migrations: internal tools, development environments, static websites, and standalone reporting applications. The purpose of wave 1 isn't to migrate critical workloads. It's to validate your migration tooling, network connectivity, security configurations, and operational runbooks. If your Azure Migrate appliance can't discover a VMware VM in wave 1, you want to find that out before wave 4 when you're migrating the payment processing system.

Dependency mapping is the most time-consuming part of wave planning. Application teams often don't know their full dependency graph. A middleware application might connect to an LDAP server, a shared file server, a certificate authority, and a monitoring agent that reports to an on-premises SNMP server. Tools like AWS Application Discovery Service, Azure Migrate dependency analysis (using wire data from a network agent), or ServiceNow's CMDB can automate discovery, but they'll miss dependencies that don't generate network traffic during the observation window. Batch jobs that run monthly or quarterly are the classic blind spot.

Each wave should include a cutover weekend and a rollback plan. The cutover plan specifies the exact sequence: freeze changes on the source, perform final data sync, update DNS or load balancer targets, validate the application in the cloud, and confirm success. The rollback plan specifies what happens if validation fails: revert DNS, restart source servers, and document the failure for post-mortem.

Plan for 3-5 applications per wave for the first few waves, then scale to 10-20 per wave as the team builds confidence. Large enterprises with 500-plus applications typically run 15-25 waves over 12-18 months. The middle waves contain the bulk of the portfolio. The final waves contain the hardest workloads: databases with terabytes of data, applications with real-time failover requirements, and systems that require an extended parallel-run period where both on-premises and cloud environments serve traffic simultaneously.
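The core sequencing rule, that no workload migrates before its dependencies, is a topological sort, and the standard library can compute candidate waves directly. A minimal sketch with a hypothetical dependency graph (it ignores per-wave capacity limits and risk weighting, which real plans must add):

```python
from graphlib import TopologicalSorter

# Hypothetical graph: each app maps to the set of apps it depends on.
# Migrating "from the center of data gravity outward" means a consumer
# never moves before the system it reads from.
deps = {
    "erp": set(),
    "static-site": set(),
    "crm": {"erp"},
    "reporting": {"erp"},
    "dashboard": {"reporting", "erp"},
}

ts = TopologicalSorter(deps)
ts.prepare()
waves = []
while ts.is_active():
    ready = sorted(ts.get_ready())  # everything whose dependencies already migrated
    waves.append(ready)
    ts.done(*ready)

for i, wave in enumerate(waves, 1):
    print(f"wave {i}: {wave}")
# wave 1: ['erp', 'static-site']
# wave 2: ['crm', 'reporting']
# wave 3: ['dashboard']
```

The `prepare`/`get_ready`/`done` loop naturally produces batches where every member depends only on earlier batches, which is exactly the independence property each wave needs.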
Network connectivity during migration: VPN first, then dedicated circuits
Network connectivity between your on-premises data center and the cloud is the backbone of every migration. Without it, you can't replicate data, access migrated workloads from the corporate network, or run hybrid configurations during the transition period.

The standard approach is VPN first, dedicated circuit later. Site-to-site VPN tunnels can be provisioned in hours. AWS Site-to-Site VPN connects your on-premises router to an AWS Virtual Private Gateway or Transit Gateway over IPSec. Azure VPN Gateway does the same to a VNet. GCP Cloud VPN connects to a VPC. OCI IPSec VPN connects to a DRG. Each provides encrypted connectivity over the public internet with bandwidth up to 1.25 Gbps per tunnel.

For wave 1 and wave 2, VPN is sufficient. You're migrating lightweight workloads with modest data volumes. But by wave 3 or 4, you'll need dedicated connectivity. AWS Direct Connect, Azure ExpressRoute, GCP Cloud Interconnect, and OCI FastConnect provide private fiber connections with 1 Gbps, 10 Gbps, or 100 Gbps options. These circuits don't traverse the public internet, so latency is lower and more predictable. The lead time for provisioning is 2-8 weeks depending on the provider and colocation facility. Start the process during wave 1, and the circuit will be ready by wave 3. Order two circuits over different paths for redundancy.

During the migration, your network carries three types of traffic simultaneously: data replication from on-premises to cloud (high bandwidth, tolerant of latency), application traffic between migrated and non-migrated workloads (latency-sensitive), and user traffic from corporate offices to cloud-hosted applications. Prioritize application traffic using QoS policies on your on-premises routers.

DNS is the traffic routing mechanism during migration. As each application migrates, update its DNS record to point to the cloud endpoint. Use low TTL values (60-300 seconds) during cutover so changes propagate quickly. After migration stabilizes, increase TTL to reduce DNS query load. Some organizations use split-horizon DNS where internal clients resolve to cloud private IPs via VPN, while external clients resolve to public endpoints directly.
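The VPN-versus-dedicated-circuit decision often comes down to simple transfer arithmetic. A rough estimator, assuming a 0.7 efficiency factor for protocol overhead and link sharing (that factor is an assumption, not a measured value):

```python
def transfer_days(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough days to replicate data_tb over a link of link_gbps.

    efficiency discounts protocol overhead and contention; 0.7 is an
    illustrative assumption. Decimal TB: 1 TB = 8e12 bits.
    """
    bits = data_tb * 8e12
    seconds = bits / (link_gbps * 1e9 * efficiency)
    return seconds / 86400

# 50 TB over a single 1.25 Gbps VPN tunnel vs a 10 Gbps dedicated circuit:
print(round(transfer_days(50, 1.25), 1))  # 5.3 days
print(round(transfer_days(50, 10), 1))    # 0.7 days
```

Numbers like these make the sequencing advice concrete: a 50 TB replication that monopolizes a VPN tunnel for most of a week is a strong argument for ordering the dedicated circuit during wave 1.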
Database migration strategies and the tools each provider offers
Database migration is the riskiest part of any cloud migration because databases are stateful, latency-sensitive, and often poorly documented. The migration strategy depends on the database engine, the data volume, the acceptable downtime window, and whether you're changing engines during migration.

AWS Database Migration Service handles homogeneous migrations (MySQL to RDS MySQL, PostgreSQL to RDS PostgreSQL) and heterogeneous migrations (Oracle to PostgreSQL, SQL Server to MySQL). For heterogeneous migrations, you first use the AWS Schema Conversion Tool to translate stored procedures, triggers, and views from the source engine's SQL dialect to the target's. DMS supports continuous replication with change data capture, so you can run the source and target databases in parallel during a validation period. The ongoing replication lag is typically under one second for moderate write volumes.

Azure Database Migration Service focuses on migrations into Azure SQL Database, Azure SQL Managed Instance, and Azure Database for PostgreSQL. Its strongest feature is the online migration mode for SQL Server to Azure SQL Managed Instance, which uses the existing SQL Server backup chain and log shipping to minimize downtime. The cutover requires only the time to replay the final transaction log, typically seconds to minutes. Azure also offers the Data Migration Assistant for assessment and schema compatibility analysis before migration.

Google Cloud Database Migration Service supports migrations from MySQL, PostgreSQL, SQL Server, and Oracle into Cloud SQL, AlloyDB, and Cloud Spanner. For PostgreSQL migrations, it uses native logical replication, which means the source database needs PostgreSQL 10 or later with logical replication enabled. For Oracle migrations to PostgreSQL-compatible targets like AlloyDB, Google partners with Striim for real-time change data capture.

For large databases (multi-terabyte), network-based replication may not complete within an acceptable timeframe. AWS Snowball Edge, Azure Data Box, and Google Transfer Appliance provide physical devices that you load with data on-premises and ship to the cloud provider's data center. A Snowball Edge holds 80 TB of usable storage. For a 200 TB database, ship three devices, load the initial data, then use DMS or logical replication to sync changes accumulated during shipping. The cutover window shrinks to the time needed to replay the delta.
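The appliance-versus-network trade-off can be sketched the same way: bulk data ships on devices, and only the delta accrued in transit replicates over the network. The shipping time, daily change rate, and 0.7 efficiency factor below are illustrative assumptions, not provider SLAs.

```python
import math

def network_days(data_tb: float, link_gbps: float, efficiency: float = 0.7) -> float:
    """Rough days to replicate data_tb over a link, discounted for overhead."""
    return data_tb * 8e12 / (link_gbps * 1e9 * efficiency) / 86400

def appliance_plan(data_tb: float, device_tb: float = 80, ship_days: float = 7,
                   daily_change_tb: float = 0.5, link_gbps: float = 1.0):
    """Devices needed and total days for an appliance-based transfer.

    ship_days and daily_change_tb are hypothetical planning inputs;
    device_tb matches the 80 TB Snowball Edge figure above.
    """
    devices = math.ceil(data_tb / device_tb)
    delta_tb = daily_change_tb * ship_days          # change accumulated in transit
    total_days = ship_days + network_days(delta_tb, link_gbps)
    return devices, total_days

devices, days = appliance_plan(200)                 # the 200 TB example above
print(devices)                                      # 3 devices
print(round(network_days(200, 1.0)))                # 26 days if sent over 1 Gbps instead
```

With these assumptions the appliance path finishes in about a week while pure network replication over 1 Gbps takes nearly a month, which is why multi-terabyte databases usually ship the bulk load physically.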
Diagramming migration plans that communicate to stakeholders
Migration diagrams serve two audiences: technical teams who need dependency maps and cutover sequences, and executives who need to see progress across waves and understand risk. The best migration diagrams are not static architecture views. They show movement.

A migration wave diagram uses a timeline layout with vertical swim lanes for each wave. Within each lane, application boxes show what migrates, with dependency arrows connecting applications across waves to highlight sequencing constraints. Color-code each application by migration strategy: blue for rehost, green for replatform, orange for refactor, gray for retire, and red for retain. This gives executives an instant visual summary of the portfolio strategy.

The network connectivity diagram shows the on-premises data center, the VPN or dedicated circuit connection, and the cloud environment side by side. Draw the applications that have already migrated in the cloud zone, the applications waiting to migrate in the on-premises zone, and the hybrid traffic paths between them. Update this diagram after each wave to show progress.

The database migration diagram deserves its own view. Show each database instance, its replication path to the cloud target, the replication lag in the current state, and the expected cutover window. Mark which databases use DMS continuous replication, which use native logical replication, and which require a physical transfer appliance due to size.

Diagrams.so generates migration architecture diagrams from text descriptions. Describe your source environment, target cloud, migration waves, and network connectivity. The AI renders the current-state and target-state architectures with migration arrows showing the path for each workload. The output is native .drawio XML, so you can update the diagram as workloads move through each wave. Architecture warnings flag common migration risks like workloads with undocumented dependencies or databases that exceed network transfer capacity within the planned cutover window.