Generic Agent: AWS EKS Deployment
Deploy the Generic Agent on AWS using EKS
Prerequisites
- You have an AWS account with administrative access.
- Terraform (>= 1.9) is installed with AWS authentication configured.
- kubectl is installed for cluster access.
- You are an Account Owner in Monte Carlo.
This guide walks through deploying the Generic Agent on AWS using EKS (Elastic Kubernetes Service) with the monte-carlo-data/mcd-k8s-agent/aws Terraform module. The module provisions all required infrastructure (VPC, EKS cluster, S3 storage, Secrets Manager, and IAM) and deploys the agent via Helm.
Architecture
The diagram below shows the AWS-specific deployment architecture. For a general overview of how the Generic Agent works, see the Architecture section on the main page.
The agent runs as a pod in an EKS cluster and connects to:
- S3 Bucket: object storage for sampling and temporary data during operations.
- AWS Secrets Manager: stores the agent token and integration credentials, synced to the cluster via the External Secrets Operator.
- Customer Integrations: the data sources (e.g. Redshift, Snowflake) monitored by Monte Carlo.
Traffic to the Monte Carlo platform can optionally be routed over AWS PrivateLink instead of the public internet.
Provider Configuration
This module does not configure the aws provider; your root module must do so. At minimum, the provider must set the target region, which the module derives from the provider configuration:

```
provider "aws" {
  region = "us-east-1"
}
```

The module applies Monte Carlo agent tags to all resources it creates. To add your own tags alongside these, use the custom_default_tags variable.
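For example, to apply organization tags alongside the module defaults (the tag keys and values below are purely illustrative; see the module documentation for the exact type of custom_default_tags):

```
module "mcd_agent" {
  source              = "monte-carlo-data/mcd-k8s-agent/aws"
  backend_service_url = "<backend_service_url>"

  # Illustrative tags, merged alongside the Monte Carlo agent tags
  custom_default_tags = {
    Team       = "data-platform"
    CostCenter = "1234"
  }
}
```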
Steps
1. Register the Agent
- In Monte Carlo, navigate to Settings > Deployments and click Add.
- Select Generic, enter a name for the agent, and click Generate key.
- Copy the Key ID and Key Secret; these are used as the values for mcd_id and mcd_token in the deployment steps below.
2. Deploy the Agent
2.1 Configure the Terraform Module
Set backend_service_url to the Public endpoint shown in the Agent Service section of the Account Information page in Monte Carlo.
Check Docker Hub for the latest available chart version and update the helm.chart_version value accordingly.
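The token examples below pass the agent key pair in as Terraform variables (var.mcd_id and var.mcd_token). If you follow that pattern, declare the variables in your root module; a minimal sketch, with the `sensitive` flag as a recommended addition:

```
variable "mcd_id" {
  type        = string
  description = "Monte Carlo agent Key ID"
}

variable "mcd_token" {
  type        = string
  description = "Monte Carlo agent Key Secret"
  sensitive   = true
}
```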
Agent token secret
You must configure the agent token secret using one of two options:

Option 1 - Provide credentials at deploy time (recommended): the module creates the secret in AWS Secrets Manager and populates it for you.

```
token_credentials = {
  mcd_id    = var.mcd_id
  mcd_token = var.mcd_token
}
```

Define the variables in a terraform.tfvars file:

```
mcd_id    = "your-mcd-id"
mcd_token = "your-mcd-token"
```

Option 2 - Use a pre-existing secret: point the module to an existing secret in AWS Secrets Manager by name. The secret must be in the same region as the module deployment, and its value must be a JSON object with the following format:

```
{
  "mcd_id": "<your-mcd-id>",
  "mcd_token": "<your-mcd-token>"
}
```

```
token_secret = {
  create = false
  name   = "my-existing-secret-name"
}
```

Full deployment (new VPC and cluster)
This is the simplest option: the module creates all infrastructure from scratch.

```
provider "aws" {
  region = "us-east-1"
}

module "mcd_agent" {
  source              = "monte-carlo-data/mcd-k8s-agent/aws"
  backend_service_url = "<backend_service_url>"
  token_credentials = {
    mcd_id    = var.mcd_id
    mcd_token = var.mcd_token
  }
  helm = {
    chart_version = "0.0.2"
  }
}
```

Existing VPC
If you already have a VPC, provide its ID and at least two private subnet IDs. The subnets must have outbound internet access (e.g. via a NAT gateway) for pulling container images and reaching the Monte Carlo backend:

```
provider "aws" {
  region = "us-east-1"
}

module "mcd_agent" {
  source              = "monte-carlo-data/mcd-k8s-agent/aws"
  backend_service_url = "<backend_service_url>"
  helm = {
    chart_version = "0.0.2"
  }
  networking = {
    create_vpc                  = false
    existing_vpc_id             = "vpc-0123456789abcdef0"
    existing_private_subnet_ids = ["subnet-aaa111", "subnet-bbb222"]
  }
}
```

The module creates VPC endpoints for S3, Secrets Manager, STS, and EC2 by default. If your VPC already has these endpoints, set networking.create_vpc_endpoints to false to avoid conflicts. The existing VPC must have DNS hostnames enabled (enable_dns_hostnames = true) for VPC interface endpoints.
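For example, if the VPC already provides the S3, Secrets Manager, STS, and EC2 endpoints, the networking block becomes (IDs are placeholders):

```
networking = {
  create_vpc                  = false
  create_vpc_endpoints        = false
  existing_vpc_id             = "vpc-0123456789abcdef0"
  existing_private_subnet_ids = ["subnet-aaa111", "subnet-bbb222"]
}
```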
Existing EKS cluster
If you already have an EKS cluster, the module can deploy the agent into it:

```
provider "aws" {
  region = "us-east-1"
}

module "mcd_agent" {
  source              = "monte-carlo-data/mcd-k8s-agent/aws"
  backend_service_url = "<backend_service_url>"
  helm = {
    chart_version = "0.0.2"
  }
  cluster = {
    create                = false
    existing_cluster_name = "my-cluster"
  }
  networking = {
    create_vpc = false
  }
}
```

Infrastructure only (manual Helm deployment)
To provision AWS infrastructure without deploying the agent via Helm (useful if you want to manage the Helm release separately):

```
provider "aws" {
  region = "us-east-1"
}

module "mcd_agent" {
  source              = "monte-carlo-data/mcd-k8s-agent/aws"
  backend_service_url = "<backend_service_url>"
  helm = {
    chart_version = "0.0.2"
    deploy_agent  = false
  }
}

output "helm_values" {
  value     = module.mcd_agent.helm_values
  sensitive = true
}
```

After applying, use the helm_values output as your values.yaml and follow the Helm deployment steps in the Kubernetes deployment guide.
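One way to hand the output to Helm is to write it to disk with the hashicorp/local provider. A sketch, assuming helm_values is a YAML string (if the output is a map, wrap it in yamlencode()):

```
# Writes the module's Helm values to values.yaml for a manual helm install
resource "local_sensitive_file" "helm_values" {
  content  = module.mcd_agent.helm_values
  filename = "${path.module}/values.yaml"
}
```

Alternatively, `terraform output -raw helm_values > values.yaml` achieves the same from the CLI for a string-typed output.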
AWS PrivateLink (optional)
To route traffic to the Monte Carlo backend over AWS PrivateLink instead of the public internet, add the private_link block. The module creates an interface VPC endpoint, a security group, and a Route53 private hosted zone with DNS resolution.
Before deploying with PrivateLink, contact Monte Carlo at [email protected] to request that your AWS account be allowlisted. Include your Monte Carlo Account ID and AWS Account ID. Do not proceed until you receive confirmation.
You can find the region, VPC endpoint service name, and PrivateLink endpoint in the Agent Service section of the Account Information page in Monte Carlo, under AWS PrivateLink. When using PrivateLink, backend_service_url must use the PrivateLink endpoint (it must contain .privatelink.).
```
provider "aws" {
  region = "us-east-1"
}

module "mcd_agent" {
  source              = "monte-carlo-data/mcd-k8s-agent/aws"
  backend_service_url = "https://artemis.privatelink.getmontecarlo.com"
  token_credentials = {
    mcd_id    = var.mcd_id
    mcd_token = var.mcd_token
  }
  helm = {
    chart_version = "0.0.2"
  }
  private_link = {
    vpce_service_name = "<vpce_service_name>"
    region            = "us-east-1"
  }
}
```

After deploying, the VPC endpoint connection requires approval from Monte Carlo. Contact [email protected] and include the following output values:

```
terraform output vpce_id
terraform output vpce_dns_entry
```

The agent will not be able to communicate with the Monte Carlo backend until the connection is approved; you will see ConnectTimeout errors in the agent logs during this time. Once approved, restart the agent services and run the reachability test:

```
kubectl rollout restart deployment mcd-agent-deployment -n mcd-agent
kubectl rollout restart daemonset logs-collector metrics-collector -n mcd-agent
```

For more details on PrivateLink setup and DNS configuration, see the Agent Service AWS PrivateLink documentation.
2.2 Deploy
```
terraform init && terraform apply
```

Additional module inputs, options, and defaults can be found in the module documentation.
2.3 Verify
Configure kubectl access to the cluster:

```
aws eks update-kubeconfig --name <cluster_name> --region <region>
```

Check that the agent pod is running:

```
kubectl get pods -n mcd-agent
kubectl logs -n mcd-agent -l app=mcd-agent --tail=30
```

Run the reachability test to confirm the agent can communicate with the Monte Carlo platform:

```
kubectl exec -n mcd-agent deploy/mcd-agent-deployment -- \
  curl -s -X POST localhost:8080/api/v1/test/reachability
```

A successful response contains "ok": true.
3. Enable the Agent
After verifying the agent is running, click Enable on the agent registration screen.
If you've navigated away from the registration screen, go to Settings > Deployments, select your agent, and click Enable.
Once validations pass, click Enable in the validations dialog to activate the agent.

Outputs
The module exposes the following outputs:
| Name | Description |
|---|---|
| cluster_endpoint | Endpoint for the EKS control plane |
| cluster_name | EKS cluster name |
| storage_bucket_name | S3 bucket name for agent storage |
| pod_identity_role_arn | IAM role ARN for pod identity |
| eso_role_arn | IAM role ARN for the External Secrets Operator |
| mcd_secrets_access_role_arn | IAM role ARN for ESO to access Secrets Manager |
| vpce_id | ID of the PrivateLink VPC endpoint (when configured) |
| vpce_dns_entry | DNS entries for the PrivateLink VPC endpoint (when configured) |
| helm_values | Helm values for manual deployment (sensitive) |
FAQs
Can I use AWS PrivateLink instead of the internet?
Yes. The module supports optional AWS PrivateLink to route traffic to the Monte Carlo backend over a private connection. See the AWS PrivateLink section above for configuration details and the Agent Service AWS PrivateLink documentation for more information.
Does the agent require inbound network access?
No. The Generic Agent is egress-only. All connections are initiated from the agent to the Monte Carlo platform. No inbound connectivity to the agent is required.
Can I use an existing S3 bucket?
Yes. Set storage.create_bucket to false and provide storage.existing_bucket_name in the module configuration. The module automatically grants the agent's pod identity role the required S3 permissions on the specified bucket. Ensure the bucket does not have policies that deny access from the agent's IAM role. The following actions are required:

```
{
  "Action": [
    "s3:PutObject",
    "s3:GetObject",
    "s3:DeleteObject",
    "s3:ListBucket",
    "s3:GetBucketPublicAccessBlock",
    "s3:GetBucketPolicyStatus",
    "s3:GetBucketAcl"
  ],
  "Resource": [
    "arn:aws:s3:::<your-bucket-name>",
    "arn:aws:s3:::<your-bucket-name>/*"
  ],
  "Effect": "Allow"
}
```

Can I customize the Helm values?
Yes. Use the custom_values variable to merge additional values with the module-generated Helm configuration. It accepts any map matching the chart's values.yaml schema.
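For example, overriding the agent's thread count, a chart key shown in the scaling FAQ below (the value here is illustrative):

```
custom_values = {
  container = {
    # Chart key from the scaling FAQ; value chosen for illustration
    opsRunnerThreadCount = "24"
  }
}
```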
How do I add credentials for data source integrations?
- Create a secret in AWS Secrets Manager with the integration credentials. See the Self-Hosted Credentials documentation for the JSON format for each integration type.

```
aws secretsmanager create-secret --name mcd/integrations/<integration> \
  --secret-string '{"connect_args": { ... }}'
```

- Grant the agent's pod identity role read access to the secret. You can find the role ARN in the Terraform output pod_identity_role_arn.

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "secretsmanager:GetSecretValue",
      "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:mcd/integrations/<integration>-*"
    }
  ]
}
```

- Register the integration in Monte Carlo. You can do this via the UI (see Self-Hosted Credentials) or via the CLI:

```
montecarlo integrations add-self-hosted-credentials-v2 \
  --connection-type <integration> \
  --self-hosted-credentials-type AWS_SECRETS_MANAGER \
  --aws-secret <secret_name>
```
How do I connect the agent to my data sources privately?
When the agent is deployed in a separate VPC from your data sources, you need to establish network connectivity between them. Common options include:
- VPC Peering: connect two VPCs directly. See the AWS VPC Peering documentation.
- AWS Transit Gateway: connect multiple VPCs through a central hub. See the AWS Transit Gateway documentation.
- VPC Endpoints (PrivateLink): access AWS services (e.g. RDS, Redshift, S3) privately without traversing the internet. See the AWS PrivateLink documentation.
If your data sources are on-premises or in another cloud, you can use AWS Direct Connect or a Site-to-Site VPN to establish connectivity.
The agent itself does not require any special networking configuration; connectivity to data sources must be configured at the infrastructure level.
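As a sketch of the first option, a peering connection between the agent VPC and a data source VPC can itself be managed in Terraform (the IDs below are placeholders; the route table entries and security group rules that peering also requires are omitted here):

```
# Peering between the agent VPC and the data source VPC (placeholder IDs)
resource "aws_vpc_peering_connection" "agent_to_data" {
  vpc_id      = "vpc-0agent00000000000"
  peer_vpc_id = "vpc-0data000000000000"
  auto_accept = true # valid when both VPCs are in the same account and region
}
```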
How do I scale the agent?
There are several ways to scale the agent:
Manual replicas: set agent.replica_count in your module configuration:

```
agent = {
  replica_count = 3
}
```

Thread count: increase the number of concurrent operations a single replica can process via custom_values:

```
custom_values = {
  container = {
    opsRunnerThreadCount = "36"
  }
}
```

The default is 18. Increasing this value lets each replica handle more operations in parallel, which can be useful before adding more replicas.
Pod resources: set CPU and memory requests/limits to ensure replicas have adequate resources, especially when increasing the thread count:

```
custom_values = {
  container = {
    resources = {
      requests = {
        cpu    = "500m"
        memory = "512Mi"
      }
      limits = {
        cpu    = "2"
        memory = "2Gi"
      }
    }
  }
}
```

Autoscaling (HPA): enable the Horizontal Pod Autoscaler to scale replicas automatically based on CPU (and optionally memory) utilization:

```
custom_values = {
  autoscaling = {
    enabled                        = true
    minReplicas                    = 1
    maxReplicas                    = 5
    targetCPUUtilizationPercentage = 70
  }
}
```

When autoscaling is enabled, replica_count is ignored; the HPA manages replicas. container.resources.requests must be set for the HPA to calculate utilization, and metrics-server must be installed in the cluster (standard in EKS, AKS, and GKE).
How do I upgrade the agent?
To upgrade the agent, update both the Helm chart version and the agent image tag in your module configuration, then run terraform apply:

```
agent = {
  image = "montecarlodata/agent:1.0.0-generic"
}
helm = {
  chart_version = "0.1.0"
}
```

Available agent images are published to montecarlodata/agent on Docker Hub (use tags ending with -generic, e.g. latest-generic or 1.0.0-generic). Helm chart versions are available at montecarlodata/generic-agent-helm.
Monte Carlo does not send notifications when an update is available, but you can subscribe to the repositories for new releases. Keeping the agent up to date is part of your shared responsibility.
How do I monitor the agent?
You can monitor the agent using standard AWS monitoring tools such as Amazon CloudWatch Container Insights or Amazon Managed Service for Prometheus. The agent exposes a health endpoint at /api/v1/test/healthcheck that can be used for liveness and readiness probes.
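If you wire the health endpoint into the agent's probes via custom_values, a sketch might look like the following; the probe keys below are hypothetical and must be verified against the chart's values.yaml (the port matches the 8080 used by the reachability test above):

```
custom_values = {
  container = {
    # Hypothetical keys; check the chart's values.yaml before using
    livenessProbe = {
      httpGet = {
        path = "/api/v1/test/healthcheck"
        port = 8080
      }
      initialDelaySeconds = 10
      periodSeconds       = 30
    }
  }
}
```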
How do I rotate the agent token?
- Update the secret in AWS Secrets Manager:

```
aws secretsmanager update-secret --secret-id mcd/agent/token \
  --secret-string '{"mcd_id":"<new-mcd-id>","mcd_token":"<new-mcd-token>"}'
```

- Force a sync of the Kubernetes secret from the External Secrets Operator:

```
kubectl annotate externalsecret -n mcd-agent --all \
  force-sync=$(date +%s) --overwrite
```

- Restart the agent services:

```
kubectl rollout restart deployment mcd-agent-deployment -n mcd-agent
kubectl rollout restart daemonset logs-collector metrics-collector -n mcd-agent
```