Samuel Fajreldines

I am a specialist in the entire JavaScript and TypeScript ecosystem.

I am an expert in AI and in creating AI-integrated solutions.

I am an expert in DevOps and serverless architecture.

I am an expert in PHP and its frameworks.

+55 (51) 99226-5039 samuelfajreldines@gmail.com

Mastering Multi-Cloud with Terraform: Deploying Infrastructure on Google Cloud and AWS

The proliferation of cloud platforms has ushered in a new era of flexibility and opportunity for software professionals and organizations. While choosing a single provider can streamline certain operations, leveraging multiple clouds has become increasingly attractive for achieving redundancy, optimizing costs, and accessing the unique offerings of each provider. This is where Infrastructure as Code (IaC) tools like Terraform come into play. By using Terraform to deploy to both Google Cloud and AWS, teams can establish a scalable, version-controlled, and streamlined workflow that ensures the right resources are provisioned, managed, and maintained consistently, no matter which cloud solutions they opt for.

Why Use Terraform for Multi-Cloud Deployments?

Infrastructure as Code has revolutionized the way we approach infrastructure management. Rather than manually configuring servers and services through each provider’s console, IaC allows you to codify your infrastructure and provision it automatically. Terraform stands out for multi-cloud setups due to its:

  1. Provider Ecosystem: Terraform’s provider plugins support numerous platforms, including Google Cloud (GCP), AWS, Azure, and many more. This multi-cloud compatibility means you can unify your infrastructure definitions in a single tool.

  2. State Management: Terraform tracks the real-world state of your resources in a state file. This stateful approach allows Terraform to reconcile desired configurations with existing infrastructure, streamlining updates and minimizing the chance of configuration drift.

  3. Code Reusability: By adopting modules and code-organization best practices, you can encapsulate reusable configurations. This is especially valuable when you have overlapping resource requirements across providers.

  4. Visibility and Version Control: Because Terraform files are essentially code, they can be placed in source control systems like Git. This fosters easier collaboration and rollbacks, ensuring a stable code-driven workflow.

  5. Uniform Workflow: When working with heavily scripted or manual processes, combining multiple providers often entails context-switching between CLIs or APIs. Terraform eliminates that friction by unifying it all under “terraform apply” or “terraform plan,” making team collaboration more coherent.

Benefits of Multi-Cloud with Google Cloud and AWS

  1. Redundancy and Resilience: By deploying across Google Cloud and AWS, you protect your systems against provider-specific outages. If one platform experiences downtime, you can shift workloads to the other provider, maintaining business continuity.

  2. Cost Optimization: Each provider periodically offers incentives or discounted services. Using a multi-cloud strategy allows you to take advantage of cost savings in specific regions or for certain types of workloads, ensuring that you optimize your cloud budget.

  3. Access to Unique Services: Providers differentiate themselves through specialized services. For instance, AWS might lead in serverless offerings with AWS Lambda, while Google Cloud might be more appealing for its managed Kubernetes offering (GKE) or big data solutions. A multi-cloud strategy leverages the best of each.

  4. Gradual Migration: Many organizations begin by moving a portion of workloads to a new cloud without committing fully. Using Terraform helps you integrate parts of your architecture into GCP or AWS while maintaining a single source of truth. This is ideal for companies planning migrations or expansions.

Preparing Your Terraform Environment

Before diving into the code, there are several preliminary steps necessary to ensure a smooth Terraform experience for both Google Cloud and AWS.

1. Install Terraform

Download and install the latest Terraform binary from the official HashiCorp website. Make sure Terraform is accessible in your environment’s PATH by running:

terraform -version

2. Configure Cloud CLIs

You need the Google Cloud SDK (gcloud) and AWS CLI to authenticate and manage resources. Follow the official documentation for installing and configuring each:

  • Google Cloud SDK: After installing, authenticate by running gcloud init or gcloud auth login.
  • AWS CLI: Authenticate using aws configure to set your access key, secret key, and default region.

3. Set Up Service Accounts and Access Keys

You can choose to authenticate Terraform through your local environment credentials or by using dedicated service accounts. For production use, it’s often more secure and auditable to rely on specific credentials:

  • Google Cloud: Create a service account with the necessary roles (e.g., Compute Admin, Storage Admin). Download the JSON key file and reference it in your Terraform configurations or environment variables.
  • AWS: Generate an access key and secret key with limited permissions. This approach follows the principle of least privilege, ensuring that the Terraform user or role can only manage the resources it needs.

Structuring Your Terraform Project

A typical multi-cloud Terraform project might follow a directory structure like this:

.
├── modules
│   ├── gcp
│   │   └── compute_instance
│   └── aws
│       └── ec2_instance
├── environments
│   ├── dev
│   │   ├── main.tf
│   │   ├── variables.tf
│   │   └── terraform.tfvars
│   └── prod
│       ├── main.tf
│       ├── variables.tf
│       └── terraform.tfvars
├── provider.tf
├── versions.tf
└── README.md

Key Files

  • versions.tf: Lock down both the Terraform version and the providers to maintain consistency across environments.
  • provider.tf: Configure both the Google Cloud and AWS providers, specifying project/region details as needed.
  • main.tf: Reference modules for your environment or define resources inline, depending on how large your codebase is.
  • variables.tf: Declare variables with default values or set them in terraform.tfvars to keep sensitive information out of version control when possible.
  • modules/: Organized directories for reusable code, each containing its own main.tf, variables.tf, and outputs.tf.

This modular approach lets you version modules, test them in a staging environment, and then promote them to production.
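
As a brief illustration, a reusable module such as modules/gcp/compute_instance might expose a few variables and wrap a single resource. This is a minimal sketch; the variable names, defaults, and call site below are hypothetical, not a prescribed layout:

# modules/gcp/compute_instance/variables.tf (hypothetical example)
variable "instance_name" {
  type        = string
  description = "Name of the Compute Engine instance"
}

variable "machine_type" {
  type    = string
  default = "e2-micro"
}

variable "zone" {
  type    = string
  default = "us-central1-a"
}

# modules/gcp/compute_instance/main.tf
resource "google_compute_instance" "this" {
  name         = var.instance_name
  machine_type = var.machine_type
  zone         = var.zone

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
  }
}

# environments/dev/main.tf — consume the module with per-environment values
module "web_server" {
  source        = "../../modules/gcp/compute_instance"
  instance_name = "dev-web-server"
}

Because the module owns the resource shape and the environment only supplies values, dev and prod stay structurally identical while differing in configuration.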

Configuring the Providers

To create a multi-cloud Terraform configuration, you’ll need to declare both the GCP and AWS providers. A simple example might look like:

terraform {
  required_version = ">= 1.0.0"
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 4.0"
    }
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}

provider "google" {
  project = var.gcp_project
  region  = var.gcp_region
  credentials = file(var.gcp_credentials_file)
}

provider "aws" {
  region = var.aws_region
  access_key = var.aws_access_key
  secret_key = var.aws_secret_key
}

In a production environment, you may want to rely on environment variables (e.g., GOOGLE_APPLICATION_CREDENTIALS, AWS_ACCESS_KEY_ID, and AWS_SECRET_ACCESS_KEY) to avoid storing sensitive values in code. Additionally, you can explore more advanced features like assume roles in AWS or Workload Identity in Google Cloud for secure authentication flows.
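
As a sketch of that approach (the role ARN below is a placeholder), the providers can be configured with no static secrets in code at all:

# Credentials come from the environment (GOOGLE_APPLICATION_CREDENTIALS,
# AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY, or an instance profile),
# so nothing sensitive is committed to version control.
provider "google" {
  project = var.gcp_project
  region  = var.gcp_region
}

provider "aws" {
  region = var.aws_region

  # Assume a dedicated, least-privilege role for Terraform runs.
  assume_role {
    role_arn = "arn:aws:iam::123456789012:role/terraform-deployer" # placeholder ARN
  }
}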

Provisioning Resources on Google Cloud

Once your provider for Google Cloud is configured, you can provision various resources (e.g., Compute Engine Instances, Cloud Storage Buckets, Firestore Databases) directly in Terraform. For example, to create a Compute Engine instance:

resource "google_compute_instance" "web_server" {
  name         = "terraform-gcp-instance"
  machine_type = "e2-micro"
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  metadata = {
    ssh-keys = "terraform:YOUR_PUBLIC_SSH_KEY"
  }
}

This snippet provisions a basic virtual machine on GCP. You can customize machine types, boot disk images, or additional bootstrapping steps within this configuration.
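
As one example of such bootstrapping (the script contents and names here are illustrative), you can attach a startup script through the metadata_startup_script argument:

resource "google_compute_instance" "web_server_with_bootstrap" {
  name         = "terraform-gcp-instance-nginx"
  machine_type = "e2-small" # a slightly larger machine type
  zone         = "us-central1-a"

  boot_disk {
    initialize_params {
      image = "debian-cloud/debian-11"
    }
  }

  network_interface {
    network = "default"
    access_config {}
  }

  # Runs on first boot; here it installs and starts nginx.
  metadata_startup_script = <<-EOT
    #!/bin/bash
    apt-get update
    apt-get install -y nginx
    systemctl enable --now nginx
  EOT
}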

Provisioning Resources on AWS

Terraform allows you to define AWS resources in the same project. For instance, if you want to stand up an EC2 instance:

resource "aws_instance" "web_server" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t2.micro"

  tags = {
    Name = "terraform-aws-instance"
  }
}

data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical
  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-focal-20.04-amd64-server-*"]
  }
}

Here, we retrieve the latest matching Ubuntu AMI ID in the configured region and provision a t2.micro instance with that image. Beyond compute, Terraform can handle other AWS services like S3, RDS, or Elastic Load Balancers, always using a similar code-driven approach.
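
For instance, a minimal S3 bucket can live in the same configuration as the EC2 instance above. The bucket name below is a placeholder and must be globally unique:

resource "aws_s3_bucket" "assets" {
  bucket = "my-company-terraform-assets-example" # placeholder; S3 bucket names are global

  tags = {
    Name        = "terraform-assets-bucket"
    Environment = "dev"
  }
}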

Managing State

Multi-cloud setups can complicate state management if not well planned. Here are important considerations:

  1. Remote State Storage: Storing state in local files is risky, especially in collaborative environments. Instead, use a remote backend such as Amazon S3 with DynamoDB for state locking, or a Google Cloud Storage bucket, which provides built-in state locking. This ensures concurrency safety and centralized state; a minimal backend example follows this list.

  2. Separate States for Environments: You might maintain distinct state files for different environments (e.g., dev, staging, prod). This prevents changes intended for one environment from affecting another.

  3. Encryption and Access Control: Protect state files as they often contain sensitive information such as resource IDs and network configurations. Implement bucket policies (in S3) or ACLs (in GCS) to restrict read and write access.
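
Here is the minimal sketch of the S3 backend with DynamoDB locking mentioned above. The bucket and table names are placeholders and must exist before you run terraform init:

terraform {
  backend "s3" {
    bucket         = "my-terraform-state-bucket" # placeholder; create this bucket beforehand
    key            = "multi-cloud/dev/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks" # DynamoDB table with a LockID partition key
    encrypt        = true              # encrypt state at rest
  }
}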

Best Practices for Multi-Cloud Deployments

  1. Use Modules and DRY Principles: Don’t copy-paste. Instead, encapsulate references to GCP and AWS resources into modules. This fosters reusability and reduces the risk of errors.

  2. Leverage Workspaces: Terraform workspaces let you keep different states for different environments. This is faster to bootstrap for small projects than separate Terraform directories, though either approach can be used effectively (see the sketch after this list).

  3. Test in Lower Environments: Before rolling out changes to production, apply them in development or staging. Validate that your Terraform plans operate correctly with the resources you manage in both Google Cloud and AWS.

  4. Automate with CI/CD: Incorporate Terraform into your CI/CD pipelines. Tools like GitLab CI, GitHub Actions, or Jenkins can run terraform plan on pull requests and terraform apply once they are merged, ensuring consistent, automated deployments.

  5. Tag and Label Resources: Make it a habit to tag your resources in AWS and label them in GCP. This helps track usage, costs, and ownership across multiple clouds.

  6. Monitor and Observe: Use built-in monitoring offerings like Cloud Monitoring in GCP and Amazon CloudWatch in AWS. These tools let you keep an eye on resource utilization, performance metrics, and system logs across cloud providers.
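
To make the workspace idea concrete, here is a minimal sketch that derives sizing and naming from the built-in terraform.workspace value. The resource name, instance types, and tag values are illustrative:

# After `terraform workspace new dev` and `terraform workspace select dev`:
resource "aws_instance" "app" {
  ami = data.aws_ami.ubuntu.id

  # Larger instances in prod, cheap ones everywhere else.
  instance_type = terraform.workspace == "prod" ? "t3.large" : "t2.micro"

  tags = {
    Name        = "app-${terraform.workspace}" # e.g. app-dev, app-prod
    Environment = terraform.workspace
  }
}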

Handling Complex Multi-Cloud Scenarios

When organizations scale, multi-cloud setups can quickly grow in complexity. Some advanced strategies include:

  • Hybrid Kubernetes Clusters: If your deployment approach centers on container orchestration, you may manage cluster definitions in GCP’s GKE and AWS’s EKS. Terraform, combined with the Helm provider, can unify your container-based workflows.

  • Serverless Architectures: Cloud Functions in GCP and Lambda in AWS can be provisioned side by side for specific workloads. Terraform’s coverage extends to these serverless services, letting you version function configurations alongside your main infrastructure.

  • Data Layer Replication: If your application handles large volumes of data in both clouds, you may use cross-cloud replication strategies. For instance, syncing data between Google Cloud Storage and Amazon S3 to ensure availability and resilience is possible through Terraform plus cloud-native solutions or third-party replication services.

  • Network and Security: For multi-cloud environments, Virtual Private Clouds (VPCs) in AWS and VPC networks in GCP must be tracked carefully. Ensure your applications rely on robust identity and access management (IAM) across all clouds. Terraform can declare IAM roles, policies, and service accounts, standardizing security configurations across multiple providers, as sketched just below.
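
As a brief sketch of that last point, Terraform can declare IAM on both sides in one configuration. The role names, member, and project value below are placeholders, not recommendations:

# Grant a GCP service account read access to storage objects.
resource "google_project_iam_member" "storage_reader" {
  project = var.gcp_project
  role    = "roles/storage.objectViewer"
  member  = "serviceAccount:app@my-project.iam.gserviceaccount.com" # placeholder
}

# An AWS IAM role assumable by EC2, kept deliberately narrow.
resource "aws_iam_role" "app_role" {
  name = "app-role-example" # placeholder name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "ec2.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}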

Overcoming Common Pain Points

  1. Provider-Specific Quirks: Each provider has its unique features, settings, and occasional limitations. Familiarize yourself with the official documentation and keep an eye out for community modules that handle some complexities.

  2. Credential Management: Multi-cloud can mean multiple types of credentials. Consider vaulting solutions like HashiCorp Vault or AWS Secrets Manager to reduce clutter and centralize secret storage.

  3. Resource Naming Collisions: Some resource names must be globally unique, while others only need to be unique within a region or project. Align your naming conventions to avoid conflicts.

  4. Cost Overruns: Monitor resource usage to avoid hidden costs. Terraform can spin up resources quickly, but forgetting to terminate them can lead to surprising bills at month’s end, especially if you create ephemeral test environments across multiple cloud providers.

  5. Consistency in Infrastructure Patterns: Resist the temptation to manage Google Cloud and AWS resources in entirely different ways. Instead, adopt uniform standards so your team can jump from one environment to another without a steep learning curve.

Looking Ahead

Terraform is constantly evolving, just like the cloud providers themselves. New features, best practices, and recommended patterns emerge regularly. By adopting Terraform to manage Google Cloud and AWS, you position your projects to be more adaptable, portable, and scalable in a rapidly changing technology landscape. Your organization can adopt the best solutions offered by each cloud provider, minimize dependency risks, and foster an agile environment that’s ready to embrace emerging capabilities.

Moreover, refining your skillset in writing modular, well-organized Terraform code, influencing DevOps pipelines, and monitoring multi-cloud costs will make you invaluable in modern software projects. As multi-cloud strategies become more mainstream, these competencies will remain in high demand, especially as organizations look for robust disaster recovery, cost-optimized environments, and advanced serverless architectures.

By mastering Terraform’s multi-cloud capabilities, you gain a significant edge. From delivering consistent resource creation across regions and providers to mitigating single-provider lock-in, Terraform is a powerful ally in any DevOps toolkit. Explore, experiment, and fine-tune your IaC strategies to unlock the full potential of both Google Cloud and AWS. The future of multi-cloud lies in adopting the right tools, best practices, and workflows to deliver high-performing, resilient infrastructure — and Terraform is an essential part of that journey.


Resume

Experience

  • SecurityScorecard

    Nov. 2023 - Present

    New York, United States

    Senior Software Engineer

    I joined SecurityScorecard, a leading organization with over 400 employees, as a Senior Full Stack Software Engineer. My role spans developing new systems, maintaining and refactoring legacy solutions, and ensuring they meet the company's high standards of performance, scalability, and reliability.

    I work across the entire stack, contributing to both frontend and backend development while also collaborating directly on infrastructure-related tasks, leveraging cloud computing technologies to optimize and scale our systems. This broad scope of responsibilities allows me to ensure seamless integration between user-facing applications and underlying systems architecture.

    Additionally, I collaborate closely with diverse teams across the organization, aligning technical implementation with strategic business objectives. Through my work, I aim to deliver innovative and robust solutions that enhance SecurityScorecard's offerings and support its mission to provide world-class cybersecurity insights.

    Technologies Used:

    Node.js, Terraform, React, TypeScript, AWS, Playwright, and Cypress