
Overview

EKS clusters in Kube Starter Kit are configured with private API endpoints by default for security. This means you can’t access the Kubernetes API directly from the internet; you need to go through the bastion host using a SOCKS5 proxy. This page covers:
  • Setting up SSH over AWS SSM Session Manager
  • Connecting to the cluster via SOCKS proxy
  • Configuring kubectl for persistent proxy access
If you prefer simpler access, you can enable the public API endpoint by setting endpoint_public_access = true in the EKS configuration (see Deploy Infrastructure - EKS Cluster). With a public endpoint, you can run aws eks update-kubeconfig and use kubectl directly without a proxy. However, this exposes your Kubernetes API to the internet; although it remains protected by IAM authentication, it increases your attack surface and may not meet compliance requirements.
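For comparison, with the public endpoint enabled the access flow reduces to two commands (a sketch; the cluster name and region below are placeholders):
# No SOCKS proxy needed when the API endpoint is public
aws eks update-kubeconfig --name staging --region us-east-2
kubectl get nodes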

Architecture

Your Machine                 AWS VPC
┌───────────┐               ┌───────────────────────────┐
│           │               │                           │
│ kubectl   │───SOCKS5─────>│ Bastion Host              │
│           │   proxy       │ (private subnet)          │
└───────────┘               │         │                 │
      │                     │         ▼                 │
      │ SSM Session         │  ┌───────────────┐        │
      └────────────────────>│  │ EKS API       │        │
        (via AWS APIs)      │  │ (private)     │        │
                            │  └───────────────┘        │
                            └───────────────────────────┘
The bastion host:
  • Lives in a private subnet (no public IP)
  • Uses AWS SSM Session Manager for access (no SSH keys to manage)
  • Acts as a SOCKS5 proxy for kubectl traffic

One-Time Setup

If you haven’t already, run mise install to install the required tools (AWS CLI, Session Manager plugin, kubectl).

Configure SSH for SSM

Add the SSM proxy configuration to your SSH config:
# View the required config
mise run //tools:bastion:setup-ssh-config
Add this to ~/.ssh/config:
# AWS SSM Session Manager SSH proxy
Host i-* mi-*
    User ec2-user
    ProxyCommand sh -c "aws ssm start-session --target %h --document-name AWS-StartSSHSession --parameters 'portNumber=%p'"
This allows SSH access to instances via SSM, using the instance ID as the hostname.
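To verify the SSM path works before setting up a tunnel, you can open a plain session against an instance (the instance ID below is a placeholder for your bastion's ID):
# Replace with your bastion's instance ID
aws ssm start-session --target i-0123456789abcdef0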

Connect to the Cluster

Step 1: Authenticate to AWS

Start a Leapp session for the target account:
leapp session start "Staging"
Verify authentication:
aws sts get-caller-identity
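The output should show an identity in the target account; for example (the account ID and role name below are placeholders):
{
    "UserId": "AROAEXAMPLEID:you@example.com",
    "Account": "123456789012",
    "Arn": "arn:aws:sts::123456789012:assumed-role/StagingAdmin/you@example.com"
}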

Step 2: Update kubeconfig

Get the cluster credentials:
mise run //tools:eks:get-credentials {cluster-name}
Or manually:
export CLUSTER_NAME="{cluster-name}"  # Your EKS cluster name
export REGION="us-east-2"             # Your AWS region

aws eks update-kubeconfig \
  --name ${CLUSTER_NAME} \
  --region ${REGION} \
  --alias staging
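You can confirm the context was added and switch to it:
kubectl config get-contexts
kubectl config use-context staging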

Step 3: Start the SOCKS proxy

In a separate terminal (with the same Leapp session active), start the proxy:
mise run //tools:eks:connect staging
This automatically looks up the bastion instance and starts a SOCKS5 proxy on localhost:1080. Keep this terminal open while accessing the cluster.
The task automatically pushes your SSH public key via EC2 Instance Connect (valid for 60 seconds) before establishing the SSH tunnel.
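If you want to understand what the task does, or need to run it by hand, the rough equivalent is sketched below (the bastion's Name tag and the SSH key path are assumptions; your setup may differ):
# 1. Look up the bastion instance ID (the Name tag value is an assumption)
INSTANCE_ID=$(aws ec2 describe-instances \
  --filters "Name=tag:Name,Values=bastion" "Name=instance-state-name,Values=running" \
  --query 'Reservations[0].Instances[0].InstanceId' --output text)

# 2. Push your SSH public key via EC2 Instance Connect (valid for ~60 seconds)
aws ec2-instance-connect send-ssh-public-key \
  --instance-id "$INSTANCE_ID" \
  --instance-os-user ec2-user \
  --ssh-public-key file://~/.ssh/id_ed25519.pub

# 3. Open a SOCKS5 proxy on localhost:1080 through the bastion
#    (SSH goes over SSM via the Host i-* ProxyCommand configured earlier)
ssh -D 1080 -N "$INSTANCE_ID"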

Step 4: Use kubectl with the proxy

Option A: Per-command (temporary)
HTTPS_PROXY=socks5://localhost:1080 kubectl get nodes
Option B: Update kubeconfig (persistent)
kubectl config set-cluster {cluster-name} --proxy-url=socks5://localhost:1080
Now kubectl commands work without the environment variable:
kubectl get nodes
kubectl get pods -A

Configure Persistent Access

To avoid passing the proxy URL on every command, set it on the cluster entry in your kubeconfig:
kubectl config set-cluster staging --proxy-url=socks5://localhost:1080
This modifies ~/.kube/config to include the proxy URL for that cluster.
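The resulting cluster entry in ~/.kube/config looks roughly like this (the server URL and CA data below are placeholders):
clusters:
- cluster:
    certificate-authority-data: <base64-encoded CA>
    server: https://ABCDEF1234567890.gr7.us-east-2.eks.amazonaws.com
    proxy-url: socks5://localhost:1080
  name: staging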

Production Access

The same process applies to production:
# Authenticate to production
leapp session start "Production"

# Update kubeconfig
mise run //tools:eks:get-credentials {cluster-name}

# Start proxy (in separate terminal)
mise run //tools:eks:connect production

# Configure persistent proxy
kubectl config set-cluster {cluster-name} --proxy-url=socks5://localhost:1080

# Use kubectl
kubectl get nodes
For production access, consider implementing additional access controls:
  • Require MFA for SSM sessions (an example policy sketch follows this list)
  • Use AWS CloudTrail to audit access
  • Implement just-in-time access with temporary permissions
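For the MFA requirement, one common approach is an IAM policy that denies starting sessions without MFA (a sketch, not taken from this repository; scope the Resource to your bastion as appropriate):
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenySSMSessionsWithoutMFA",
      "Effect": "Deny",
      "Action": "ssm:StartSession",
      "Resource": "*",
      "Condition": {
        "BoolIfExists": { "aws:MultiFactorAuthPresent": "false" }
      }
    }
  ]
}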

Next Steps

With cluster access configured, proceed to Deploy Kubernetes Baseline to bootstrap ArgoCD and deploy infrastructure components.