At Slalom, we have helped hundreds of organizations migrate to the cloud. Often, this means moving from on-premise applications and data storage to applications and storage running on Amazon Web Services (AWS).
In a typical AWS migration project, a customer will migrate on-premise data to an AWS data lake, then build a pipeline in AWS to transform the data and move it to a data warehouse, where it can be accessed for business intelligence through applications like Tableau or Qlik.
But there are lots of other scenarios where AWS can be helpful, including using Amazon S3 for data storage, running serverless functions with AWS Lambda or running a relational database with Amazon RDS.
Whether your AWS plans include data lakes and data warehouses or a different strategy, here are tips for making your migration a success.
Most AWS deployments begin with “lift and shift,” moving an on-premise application or capability to AWS and replicating on-premise services in the cloud. But this kind of straightforward replication doesn’t realize the full potential of the cloud for speed, scalability and efficiency. So plan to make adjustments and design changes after the lift-and-shift phase is complete.
The good thing about this? Once you’re operating in the cloud, changing architectures and systems is much easier. You haven’t had to invest in new hardware, and modern cloud services are highly configurable, so you can make changes and additions relatively quickly and easily.
When you ran your IT operations on-premise, you knew where the code resided and how to manage it. The cloud changes that picture, but it doesn’t eliminate code entirely.
Even with a low-code development platform running in the cloud, you’re still going to need some code to integrate with other on-premise systems and to run your AWS services. Plan for this code up front. Write and test as much of it as possible before you lift and shift.
You’ll likely have to write and manage far less code than you would with legacy on-premise systems, so overall, your development and code management workload should decrease.
For tips on preparing for any cloud migration, read our post on "5 Things to Do Before Any Cloud Migration Project."
AWS services are highly durable and highly available. Nonetheless, it’s important to have an availability and disaster recovery strategy, so that you’re always prepared for that rare but potentially business-critical service degradation or outage.
You’ll want to know ahead of time how you’re going to deploy across regions for availability and data locality. What’s your data retention policy for data stored in S3? What’s your cross-region replication strategy? Will running services without SLAs be reliable enough? If you’ve adopted SLAs, are they sufficient?
Plan for regional outages and disasters ahead of time. They’re rare, but you want to ensure your applications and data remain accessible even when a rare event does occur.
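As one sketch of the kind of decision worth settling before migration, a data retention policy for S3 can be expressed as a lifecycle configuration. The bucket prefix, rule name, 90-day transition and 365-day expiry below are hypothetical examples, not recommendations:

```python
# Sketch of an S3 lifecycle rule expressing a data retention policy.
# The prefix, rule ID, and day counts are hypothetical examples.
lifecycle_config = {
    "Rules": [
        {
            "ID": "retention-policy",           # hypothetical rule name
            "Filter": {"Prefix": "raw-data/"},  # hypothetical key prefix
            "Status": "Enabled",
            "Transitions": [
                # Move aging objects to cheaper archival storage first.
                {"Days": 90, "StorageClass": "GLACIER"}
            ],
            "Expiration": {"Days": 365},        # delete after one year
        }
    ]
}

# Once the policy is agreed on, it could be applied with boto3:
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-data-lake-bucket",  # hypothetical bucket name
#     LifecycleConfiguration=lifecycle_config,
# )
```

Writing the policy down as configuration, even before deployment, forces the retention and replication questions above to get concrete answers.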
To access AWS, you’ll need an AWS account. Most organizations use multiple accounts. It’s worth thinking about how you’ll manage accounts before your AWS deployment gets too large.
You can set up separate accounts for individual business units or regions, and for the different phases of IT activity, such as development, quality assurance (QA) and operations.
Regardless of how you choose to set up your AWS accounts, it’s important that people know which account to log in to. You need to understand who’s responsible for each service running on AWS.
You want to avoid a situation in which services run unattended, no one knows who they belong to or why they’re running, and meanwhile the meter keeps spinning. Fortunately, tagging is one good way to tie each running service to its owner.
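A lightweight way to make tagging stick is to agree on a set of required ownership tags and check resources against it. The tag keys below are a hypothetical convention for illustration, not an AWS requirement:

```python
# Sketch: validate that a resource's tags identify an owner and a purpose.
# The required keys are a hypothetical team convention, not an AWS rule.
REQUIRED_TAGS = {"Owner", "Team", "Purpose"}

def missing_tags(tags: dict) -> set:
    """Return the required tag keys a resource is missing."""
    return REQUIRED_TAGS - tags.keys()

# Example: a resource tagged at creation time, but incompletely.
ec2_tags = {"Owner": "jdoe@example.com", "Team": "analytics"}
print(missing_tags(ec2_tags))  # the resource still lacks a Purpose tag
```

A check like this can run in a deployment pipeline or a periodic audit script, so untagged services surface before the bill does.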
In general, you want your account structure to replicate your organization, as long as the account structure doesn’t become too complex. AWS includes a service called AWS Organizations to help with policy-based account management for enterprises with lots of AWS accounts. If you think you’ll have many AWS users, you might want to investigate the Organizations service.
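One thing AWS Organizations enables is attaching service control policies (SCPs) to groups of accounts. As a hedged sketch, an SCP restricting member accounts to approved regions might look like the following; the region list is a hypothetical example:

```python
import json

# Sketch of a service control policy (SCP), the kind of guardrail
# AWS Organizations can attach to accounts or organizational units.
# The approved-region list is a hypothetical example.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": "*",
            "Resource": "*",
            "Condition": {
                # Deny any request outside the approved regions.
                "StringNotEquals": {
                    "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
                }
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Guardrails like this keep a sprawling account structure governable without having to configure each account individually.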
Read Slalom's recommended best practices for keeping your cloud migration on the right path.
Using a low-code, cloud-based integration platform streamlines development and integration. If you move to AWS and find you have to make adjustments, working in a low-code platform makes them quick and easy.
Optimally, the platform should also include data quality management, so you can create golden records and ensure data consistency and availability. You don’t want to have to enforce data quality and data governance as an afterthought.
The Dell Boomi integration platform as a service (iPaaS) offers rapid, low-code development for integration and MDM, so we often recommend the Boomi platform to clients for AWS migration projects. Together, AWS and Boomi offer enterprises a fast, agile and cloud-based way of handling critical IT challenges.
Want to learn more about migrating to the cloud and integrating your hybrid IT infrastructure? Contact a Dell Boomi integration expert or reach out to our partner, Slalom.