Overview

Before deploying infrastructure via CI/CD, you need to bootstrap your AWS accounts and GitHub organization for Terraform management. This is a one-time setup that creates the foundation for all automation.

Decisions

This setup step requires deciding the following:

Namespace

The namespace is a short prefix (3-5 characters) used to generate unique names for all AWS resources: S3 buckets, IAM roles, EKS clusters, etc. Choose something that identifies your organization.
Examples   Description
acme       Company name abbreviation
myco       Short identifier
xyz        Project code
The default is ksk (“Kube Starter Kit”). You’ll use this namespace consistently across all configuration; it cannot be easily changed later.
This “namespace” refers to the naming convention from Cloud Posse’s terraform-null-label, not a Kubernetes namespace.
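As an illustration of how the namespace flows into resource names, here is a minimal terraform-null-label invocation; the version pin and input values are examples, not this kit's exact configuration:
module "label" {
  source  = "cloudposse/label/null"
  version = "0.25.0"

  namespace   = "acme"            # your chosen namespace
  environment = "gbl"             # environment/region abbreviation ("gbl" = global)
  stage       = "staging"
  name        = "bootstrap-admin"
}

# module.label.id renders as "acme-gbl-staging-bootstrap-admin",
# matching the role names used later in this guide.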

Primary AWS Region

Choose a primary AWS region for your infrastructure. This region will host:
  • The Terraform state S3 bucket
  • Your EKS clusters (staging and production)
  • Most other AWS resources
The default is us-east-2. Consider factors like latency to your users, service availability, and pricing when choosing.
You can deploy to multiple regions later, but the state bucket region cannot be changed without migrating state.

How Cross-Account Access Works

The Infrastructure account is the central hub for all Terraform operations. There are two access paths:
  1. CI/CD (GitHub Actions): Authenticates via OIDC, then assumes roles in target accounts
  2. Human admins (IAM Identity Center): Authenticates via SSO to the Infrastructure account, then assumes roles in target accounts
GitHub Actions               Human Admin (You)
       │                            │
       │ OIDC                       │ IAM Identity Center (SSO)
       ▼                            ▼
┌─────────────────────────────────────────────────────┐
│                Infrastructure Account               │
│                                                     │
│   ┌─────────────────┐     ┌────────────────────┐    │
│   │  GitHub OIDC    │     │   SSO Admin Role   │    │
│   │  Role           │     │   (via Leapp)      │    │
│   └─────────────────┘     └────────────────────┘    │
│             \                   /                   │
│              ▼                 ▼                    │
│          ┌──────────────────────┐                   │
│          │   Terraform State    │                   │
│          │   Bucket (S3)        │                   │
│          └──────────────────────┘                   │
└─────────────────────────────────────────────────────┘
          │                │               │          \
          │ sts:AssumeRole (cross-account) │           \
          ▼                ▼               ▼            ▼
┌──────────────┐  ┌──────────────┐  ┌──────────────┐  ┌──────────────┐
│  Management  │  │     ECR      │  │   Staging    │  │  Production  │
│   Account    │  │   Account    │  │   Account    │  │   Account    │
│ ------------ │  │ ------------ │  │ ------------ │  │ ------------ │
│   IAM Role   │  │   IAM Role   │  │   IAM Role   │  │   IAM Role   │
│   (trusts    │  │   (trusts    │  │   (trusts    │  │   (trusts    │
│    infra)    │  │    infra)    │  │    infra)    │  │    infra)    │
└──────────────┘  └──────────────┘  └──────────────┘  └──────────────┘
For CI/CD (GitHub Actions):
  1. GitHub Actions authenticates via OIDC to the GitHub OIDC role in the Infrastructure account
  2. That role assumes target account roles via cross-account IAM trust policies
  3. Terraform runs with credentials for the target account, state stored in Infrastructure account
For Human Admins:
  1. Admin authenticates to the Infrastructure account via IAM Identity Center (using Leapp)
  2. The SSO role in Infrastructure account can assume target account roles
  3. Admin runs Terraform locally with the same cross-account access as CI/CD
This pattern keeps credential management simple while maintaining proper account isolation.
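As a sketch of the cross-account half of this pattern, each target-account role carries a trust policy along these lines; the principal ARNs are placeholders for your Infrastructure account roles:
# Trust policy for a target-account Terraform role (illustrative).
data "aws_iam_policy_document" "terraform_role_trust" {
  statement {
    effect  = "Allow"
    actions = ["sts:AssumeRole"]

    principals {
      type = "AWS"
      identifiers = [
        # GitHub OIDC role in the Infrastructure account (CI/CD path)
        "arn:aws:iam::<INFRA_ACCOUNT_ID>:role/<NAMESPACE>-gbl-infra-bootstrap-github-oidc",
        # SSO admin role in the Infrastructure account (human path)
        "arn:aws:iam::<INFRA_ACCOUNT_ID>:role/aws-reserved/sso.amazonaws.com/<AWS_REGION>/AWSReservedSSO_AdministratorAccess_XXXXX",
      ]
    }
  }
}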

What Gets Created

Infrastructure Account (Central Hub)

Resource               Purpose
S3 bucket              Stores Terraform state for all accounts with versioning enabled
GitHub OIDC provider   Enables keyless authentication from GitHub Actions
GitHub OIDC IAM role   Role that GitHub Actions assumes; can then assume roles in other accounts
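For reference, a GitHub OIDC provider and its web-identity trust look roughly like the following; the repository filter is a placeholder, and the actual module used by this kit may differ:
resource "aws_iam_openid_connect_provider" "github" {
  url            = "https://token.actions.githubusercontent.com"
  client_id_list = ["sts.amazonaws.com"]

  # GitHub's published CA thumbprint at the time of writing; verify before use.
  thumbprint_list = ["6938fd4d98bab03faadb97b34396831e3780aea1"]
}

data "aws_iam_policy_document" "github_oidc_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.github.arn]
    }

    # Restrict which repositories may assume the role (placeholder org/repo).
    condition {
      test     = "StringLike"
      variable = "token.actions.githubusercontent.com:sub"
      values   = ["repo:<YOUR_ORG>/<YOUR_REPO>:*"]
    }
  }
}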

Each Target Account (Management, ECR, Staging, Production)

Resource              Purpose
Terraform IAM role    Admin role with trust policy allowing both the GitHub OIDC role (for CI/CD) and SSO admin role (for human admins) from the Infrastructure account to assume it
Route53 hosted zone   DNS zone for the environment (e.g., staging.example.com); not created in Management or ECR accounts

Prerequisites

Required Accounts

Based on the account structure, you need these AWS accounts:
Account          Purpose
Management       AWS Organizations, IAM Identity Center
Infrastructure   Terraform state, GitHub OIDC, CI/CD automation
ECR              Container registry (shared across environments)
Staging          Staging environment resources
Production       Production environment resources

Bootstrapping Overview

Bootstrapping solves a chicken-and-egg problem: Terraform needs IAM roles and an S3 bucket to run, but we want Terraform to manage those resources. The solution is to manually create minimal resources, then let Terraform import and manage them. Each account needs a bootstrap IAM role. The Infrastructure account additionally needs an S3 bucket for Terraform state.
The bootstrap scripts in terraform/bootstrap/ must be run manually with AWS CLI credentials before Terraform can take over.

Configure Leapp CLI

Before you can authenticate to AWS accounts, configure the Leapp CLI with your IAM Identity Center portal:
leapp integration create \
  --integrationType AWS-SSO \
  --integrationAlias "My Organization" \
  --integrationPortalUrl https://d-xxxxxxxxxx.awsapps.com/start \
  --integrationRegion <AWS_REGION>
Replace the portal URL with your IAM Identity Center URL (found in the AWS IAM Identity Center console under Settings > Identity source) and <AWS_REGION> with your primary region.
For a GUI experience, Leapp provides a desktop app for managing AWS SSO sessions. It discovers available accounts and permission sets automatically from your configured integration.

Bootstrap the Infrastructure Account

The Infrastructure account is special: it hosts the S3 state bucket that all other accounts depend on.
1. Log into the Infrastructure account

Start a Leapp session for the Infrastructure account:
leapp session start "Infrastructure"
aws sts get-caller-identity
2. Get your SSO role ARN

You’ll need this ARN for cross-account trust policies:
mise run //terraform/bootstrap:get-sso-role-arn
Save this ARN; it looks like:
arn:aws:iam::INFRA_ACCOUNT_ID:role/aws-reserved/sso.amazonaws.com/REGION/AWSReservedSSO_AdministratorAccess_XXXXX
3. Create the S3 state bucket

mise run //terraform/bootstrap:create-state-bucket \
  --bucket-name <NAMESPACE>-gbl-infra-bootstrap-state \
  --aws-region <AWS_REGION>
S3 bucket names are globally unique. Replace <NAMESPACE> with your namespace and <AWS_REGION> with your primary region.
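Once Terraform imports it, the bucket is managed like any other resource; a minimal sketch of the state bucket with versioning enabled (standard AWS provider resources, not necessarily this kit's exact module):
resource "aws_s3_bucket" "terraform_state" {
  bucket = "<NAMESPACE>-gbl-infra-bootstrap-state"
}

resource "aws_s3_bucket_versioning" "terraform_state" {
  bucket = aws_s3_bucket.terraform_state.id

  versioning_configuration {
    status = "Enabled"
  }
}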
4. Update Terraform configuration

Edit terraform/config.tm.hcl with your namespace and account details:
globals {
  namespace = "<NAMESPACE>"  # Your chosen namespace

  # Your SSO admin role in Infrastructure account (for local Terraform runs)
  sso_admin_assume_role_arn = "arn:aws:iam::<INFRA_ACCOUNT_ID>:role/aws-reserved/sso.amazonaws.com/<AWS_REGION>/AWSReservedSSO_AdministratorAccess_XXXXX"

  # GitHub OIDC role (will be created by Terraform)
  github_oidc_assume_role_arn = "arn:aws:iam::<INFRA_ACCOUNT_ID>:role/<NAMESPACE>-gbl-infra-bootstrap-github-oidc"

  # S3 backend configuration (must match the bucket you created above)
  backend_bucket = "<NAMESPACE>-gbl-infra-bootstrap-state"
  backend_region = "<AWS_REGION>"
}
Replace the placeholders with your values from Decisions.
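Terramate uses backend_bucket and backend_region to generate each stack's S3 backend; the generated block looks roughly like this (the key path shown is purely illustrative):
terraform {
  backend "s3" {
    bucket = "<NAMESPACE>-gbl-infra-bootstrap-state"
    key    = "staging/global/bootstrapping/terraform.tfstate"  # illustrative key layout
    region = "<AWS_REGION>"
  }
}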
5. Generate Terraform files

Propagate the configuration changes to generated files:
cd terraform
terramate generate
6. Apply the Infrastructure bootstrapping stack

This imports the S3 bucket and creates the GitHub OIDC provider and role:
cd terraform
terramate run --tags infra:bootstrapping --parallel 1 -- terraform init
terramate run --tags infra:bootstrapping -- terraform apply

Bootstrap Target Accounts

Each target account (Management, ECR, Staging, Production) needs an IAM role that can be assumed from the Infrastructure account.

Manual Role Creation

For each account:
1. Log into the target account

leapp session start "Staging"  # or Management, ECR, Production
aws sts get-caller-identity
2. Create the IAM role

cd terraform/bootstrap

mise run create-terraform-iam-role-in-target-account \
  --role-name <ROLE_NAME> \
  --infra-sso-role-arn "<INFRA_SSO_ROLE_ARN>"
Replace <INFRA_SSO_ROLE_ARN> with the SSO role ARN you recorded during Bootstrap the Infrastructure Account. Use the appropriate role name for each account:
Account      Role Name
Management   <NAMESPACE>-gbl-mgmt-bootstrap-admin
ECR          <NAMESPACE>-gbl-ecr-bootstrap-admin
Staging      <NAMESPACE>-gbl-staging-bootstrap-admin
Production   <NAMESPACE>-gbl-prod-bootstrap-admin
This creates an IAM role with:
  • AdministratorAccess policy attached
  • Trust policy allowing your SSO role from the Infrastructure account to assume it
Repeat these steps for each target account.

Terraform Takeover

Once the manual roles exist, Terraform can import and manage them. Terramate generates an import block in each root module to bring the manually-created role into Terraform state.

How Import Works

The Terramate template in terraform/imports/mixins/modules/bootstrapping.tm.hcl generates both the import block and module call for each bootstrapping stack:
# Generated in each root module (e.g., terraform/live/staging/global/bootstrapping/_main.tf)

import {
  to = module.bootstrapping.module.iam_role.aws_iam_role.this[0]
  id = "ksk-gbl-staging-bootstrap-admin"  # Uses globals: namespace-environment-stage
}

module "bootstrapping" {
  source = "../../../../modules/account-bootstrapping"
  # ...
}
The import block ID is constructed from your configured globals (namespace, environment, stage) to match the role name you created manually (e.g., "<NAMESPACE>-gbl-staging-bootstrap-admin").
Import blocks must be in root modules, not child modules. This is a Terraform requirement. The account-bootstrapping module itself does not contain the import block; it’s generated by Terramate at the root module level.
When you run terraform apply, Terraform:
  1. Imports the existing IAM role into state (instead of trying to create it)
  2. Updates the role’s trust policy to allow both your SSO role and the GitHub OIDC role to assume it
This enables CI/CD pipelines to manage infrastructure going forward.
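To make the takeover concrete, a target-account stack typically points its AWS provider at the bootstrap role; a sketch with placeholder values (the generated provider block in this kit may differ):
provider "aws" {
  region = "<AWS_REGION>"

  # Credentials come from the Infrastructure account (OIDC or SSO session);
  # Terraform then hops into the target account via this role.
  assume_role {
    role_arn = "arn:aws:iam::<STAGING_ACCOUNT_ID>:role/<NAMESPACE>-gbl-staging-bootstrap-admin"
  }
}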

Apply the Bootstrapping Stacks

cd terraform

# Log back into Infrastructure account
leapp session start "Infrastructure"

# Apply each account's bootstrapping stack
terramate run --tags management:bootstrapping -- terraform init
terramate run --tags management:bootstrapping -- terraform apply

terramate run --tags ecr:bootstrapping -- terraform init
terramate run --tags ecr:bootstrapping -- terraform apply

terramate run --tags staging:bootstrapping -- terraform init
terramate run --tags staging:bootstrapping -- terraform apply

terramate run --tags prod:bootstrapping -- terraform init
terramate run --tags prod:bootstrapping -- terraform apply
The import block was added in Terraform 1.5. It allows declarative imports without running terraform import commands manually. See the Terraform import block documentation for more details.

Configure Domain Nameservers

The bootstrapping stacks create Route53 hosted zones for Staging and Production (e.g., staging.example.com, prod.example.com). For DNS to work, you must configure your domain registrar to use the Route53 nameservers.
1. Get the Route53 nameservers

After applying the bootstrapping stacks, retrieve the nameservers for each hosted zone:
# Get staging nameservers
terramate run --tags staging:bootstrapping -- terraform output hosted_zone_nameservers

# Get production nameservers
terramate run --tags prod:bootstrapping -- terraform output hosted_zone_nameservers
You’ll see 4 nameservers like:
ns-123.awsdns-45.com
ns-678.awsdns-90.net
ns-111.awsdns-22.org
ns-333.awsdns-44.co.uk
2. Configure your domain registrar

How you configure nameservers depends on your setup.
If using a subdomain (e.g., staging.example.com):
  • Add NS records in your parent domain’s DNS pointing to the Route53 nameservers (if the parent zone is also in Route53, see the Terraform sketch after this list)
  • Example: Add NS records for the staging subdomain pointing to the 4 nameservers above
If using a dedicated domain (e.g., example-staging.com):
  • Update the domain’s nameservers at your registrar (Namecheap, GoDaddy, Route53, etc.)
  • Replace the default nameservers with the 4 Route53 nameservers
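If the parent domain is also hosted in Route53, the subdomain delegation itself can be captured in Terraform; a hypothetical sketch with placeholder zone ID and nameservers:
resource "aws_route53_record" "staging_delegation" {
  zone_id = "Z0123456789ABCDEFGHIJ"   # parent zone (example.com), placeholder
  name    = "staging.example.com"
  type    = "NS"
  ttl     = 300

  records = [
    "ns-123.awsdns-45.com",
    "ns-678.awsdns-90.net",
    "ns-111.awsdns-22.org",
    "ns-333.awsdns-44.co.uk",
  ]
}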
3. Verify DNS propagation

DNS changes can take up to 48 hours to propagate, but usually complete within minutes. Verify with:
dig NS staging.example.com +short
You should see the Route53 nameservers in the response.
If you skip this step, external-dns and cert-manager will not work. DNS records created in Route53 won’t resolve, and Let’s Encrypt DNS-01 challenges will fail.

Troubleshooting

“Bucket already exists” error

S3 bucket names are globally unique. If the bucket name is taken:
  1. Choose a different name with your organization prefix
  2. Update backend_bucket in terraform/config.tm.hcl
  3. Update any hardcoded references in bootstrapping stacks

State file in wrong location

All Terraform state is stored in the Infrastructure account’s S3 bucket, regardless of which account the resources are in. If you see state errors:
  1. Verify backend_bucket and backend_region in terraform/config.tm.hcl
  2. Ensure the GitHub OIDC role has S3 permissions in the Infrastructure account
  3. Check that the bucket exists and has the expected state files
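As a rough reference, state access boils down to listing the bucket and reading/writing its objects; a sketch of the minimum policy (your actual policy may be broader or scoped per key prefix):
data "aws_iam_policy_document" "state_access" {
  statement {
    actions   = ["s3:ListBucket"]
    resources = ["arn:aws:s3:::<NAMESPACE>-gbl-infra-bootstrap-state"]
  }

  statement {
    actions   = ["s3:GetObject", "s3:PutObject"]
    resources = ["arn:aws:s3:::<NAMESPACE>-gbl-infra-bootstrap-state/*"]
  }
}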

Next Steps

With accounts bootstrapped, proceed to Configure Access to set up CI/CD and user access for GitHub and AWS.