So, a lot has happened in the six years since I last blogged on this website.
TL;DR: Cloud has taken over the world, and I’m lucky enough to have a career focused on designing cloud solutions.
This brings us to the year 2020, and one challenge I had to tackle recently was trying to find a relatively easy way to migrate Virtual Machines between projects within Google Cloud Platform (GCP).
Before we get into the solution, a little background…
Google Cloud organizes its resources into “projects” – think of these as similar to an Azure subscription/resource group organizational unit. Natively, once you have built Google Compute Engine (GCE) VMs, you cannot easily move those VM resources into another project. There are a few how-to guides for leveraging disk snapshots, cloning, etc., but a lot of those are difficult to operate at scale and open a plethora of potential issues in practice.
Therefore, I present to you: The Google Cloud Compute Engine Migrator.
I’ve developed a scriptable way to “move” VMs between projects, giving options on network connectivity with the ability to do both individual and bulk migrations. Now, the “move” is in quotes here, because what this script actually does is shuts down the target VM, takes a virtual machine image of that VM (which includes all disks), and then deploys a new VM instance from that image in the project of your choosing. So, while it’s a true 1:1 clone, it is not in fact the same virtual machine.
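Conceptually, the “move” described above boils down to three gcloud calls. Here is a minimal sketch of that flow – the project IDs, VM name, zone, and image name are example values, and the script’s actual flags and naming may differ. Commands go through a `run` wrapper that only echoes them, so nothing executes until you remove the echo:

```shell
SRC_PROJECT="sourceproject1"   # example source project ID
DST_PROJECT="destproject1"     # example destination project ID
VM="myvm1"                     # example VM name
ZONE="us-central1-a"           # example zone
IMAGE="${VM}-migration"        # assumed machine-image name

run() { echo "$@"; }           # dry-run guard: prints instead of executing

# 1. Shut down the source VM so its disks are consistent
run gcloud compute instances stop "$VM" --project "$SRC_PROJECT" --zone "$ZONE"

# 2. Capture a machine image (includes all attached disks)
run gcloud compute machine-images create "$IMAGE" \
    --project "$SRC_PROJECT" \
    --source-instance "$VM" --source-instance-zone "$ZONE"

# 3. Deploy a new instance from that image in the destination project
run gcloud compute instances create "$VM" \
    --project "$DST_PROJECT" --zone "$ZONE" \
    --source-machine-image "projects/${SRC_PROJECT}/global/machineImages/${IMAGE}" \
    --network default
```
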
I’ll also concede there is surely a more API-driven and UX-friendly way of doing this. But I’m a command-line script kid first, even in 2020. So what we have here is a bash script leveraging the gcloud command suite to somewhat intelligently migrate your VMs between projects. With that, let’s dive into the tech weeds…
- Disclaimer: This migration tool is provided without any warranty; review the script code and test it against your requirements. Always make sure you have backups and have validated recovery from those backups before running, especially in production environments. This cannot be stressed enough.
- Make sure you know the source project ID, the destination project ID, and which network within your GCP environment you would like to attach to.
- If you intend to keep the source VM’s same IP address, the VPC network must be shared between the source and destination projects.
- Keeping the same IP address requires removing the source VM after its machine image is taken. The script will not do this for you, but it will prompt you when it is time.
- If not using the same IP (i.e., attaching to a new VPC network at the destination), recognize that you will be left with two VM instances, and a cleanup should be performed on the source VM once the destination VM has been validated.
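A quick pre-flight sketch for confirming the project IDs and the networks visible in the destination project before migrating – “destproject1” is an example ID. The function only prints the gcloud commands so they can be reviewed before running:

```shell
# Print (not run) the pre-migration checks for a given destination project.
preflight() {
    echo "gcloud projects list --format=value(projectId)"
    echo "gcloud compute networks list --project $1 --format=value(name)"
    echo "gcloud compute networks subnets list --project $1"
}

preflight destproject1
```
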
Using the Migrator Script
- Make sure you install the Google Cloud SDK (or just use Cloud Shell within GCP)
- If running outside of Cloud Shell, authenticate by running:
gcloud auth login
- Clone the repository:
git clone https://github.com/vanberge/gce-migrator.git
- Change directory into the gce-migrator folder and run the script per the usage options below
- Use format:
./gce-migrate.sh -s <sourceproject id> -d <destproject id> -n <network> -m <migration type> [-S]
- -s <sourceproject id>: The project ID where VM currently lives
- -d <destproject id>: The project ID where VM will reside after migration
- -n <network>: The destination network that the new instance of the VM will be connected to. Values are the name of the destination network, or “static” to keep the existing IP.
- network name: If passing network name, the VM will be connected to the network specified with the next available IP address.
- static: If passing “static”, the script will retain the IP address of the VM instance.
NOTE: Setting the network to “static” requires deleting the source VM before the new instance is created in the destination project. The script will prompt you to do this, but you MUST have a backup and recovery plan in the event this does not work.
- -m <migration type>: Must pass a single VM name, or “bulk”.
- bulk – use the “bulk” argument to migrate all GCE instances in the source project into the destination project and network.
- Single VM – Pass -m <vmname> to migrate a single GCE instance
- -S: enable Secure/Shielded VM as part of the conversion. Only needed if source is NOT shielded, and you wish the destination to be shielded.
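For reference, enabling Shielded VM on instance creation typically uses the flags below – a hedged sketch of what -S implies; the exact flags the script passes may differ. The command is echoed as a dry-run guard (remove the leading echo to execute):

```shell
# Shielded VM options commonly passed to "gcloud compute instances create"
SHIELDED_FLAGS="--shielded-secure-boot --shielded-vtpm --shielded-integrity-monitoring"

# Example invocation (project, VM name, and zone are placeholders):
echo gcloud compute instances create myvm1 \
    --project destproject1 --zone us-central1-a $SHIELDED_FLAGS
```
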
./gce-migrate.sh -s sourceproject1 -d destproject1 -n default -m myvm1
- This will migrate the VM “myvm1” from sourceproject1 to destproject1 using the default VPC network
./gce-migrate.sh -s sourceproject1 -d destproject1 -n static -m myvm1
- This will migrate the VM “myvm1” from sourceproject1 to destproject1, keeping myvm1’s private IP address
- As noted above, this requires the VM’s VPC network/subnet to be shared with the destination project
./gce-migrate.sh -s sourceproject1 -d destproject1 -n default -m bulk
- Migrates all VMs in sourceproject1 to destproject1
- Attaches the VMs to the default network
./gce-migrate.sh -s sourceproject1 -d destproject1 -n default -m bulk -S
- This will migrate all VM instances in sourceproject1 to destproject1, connecting to the default VPC network.
- Enables shielded VM options on the VM as part of the migration
If moving to a new network, this migration script will leave a stopped GCE instance in the source project, as well as machine images for all migrated VMs. Once functionality is validated at the destination, these items should be cleaned up per best practice to avoid future interruptions.
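That cleanup might look like the following sketch. The image name “myvm1-migration” is an assumption – list what the script actually left behind first – and the delete commands are echoed as a dry-run guard (remove the echo to execute):

```shell
SRC_PROJECT="sourceproject1"   # example source project ID
VM="myvm1"                     # migrated VM name (example)
ZONE="us-central1-a"           # zone of the stopped source VM (example)
IMAGE="${VM}-migration"        # assumed machine-image name; verify with the list commands

# Review what is left behind before deleting anything:
echo gcloud compute instances list --project "$SRC_PROJECT"
echo gcloud compute machine-images list --project "$SRC_PROJECT"

# Delete the stopped source instance and its machine image:
echo gcloud compute instances delete "$VM" --project "$SRC_PROJECT" --zone "$ZONE" --quiet
echo gcloud compute machine-images delete "$IMAGE" --project "$SRC_PROJECT" --quiet
```
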
Hopefully this helps somebody who finds themselves with the challenge of moving GCE VMs between GCP projects!