EKS Audit Log Integration Using Terraform
tip
To integrate with Amazon EKS, you can use the Lacework CLI to generate Terraform code. You must also enable audit logging on the clusters that you want to integrate.
If you want to create the `main.tf` file and run Terraform manually, follow the steps for one of these scenarios: Integrate EKS Clusters Audit Logs in a Single Region or Integrate EKS Clusters Audit Logs Across Multiple Regions.
Overview
Lacework integrates with AWS to analyze EKS Audit Logs for monitoring EKS cluster security and configuration compliance. This topic describes how to integrate with AWS by running Lacework Terraform modules from any host supported by Terraform.
If you are new to the Lacework Terraform Provider, or Lacework Terraform Modules, read Terraform for Lacework Overview to learn the basics on how to configure the provider and more.
Resources Provisioned by Lacework Terraform Modules
To integrate AWS with Lacework, Lacework Terraform modules provision the following resources in the designated AWS account:
- IAM Cross-Account Role - A cross-account role is required to give Lacework access for assessments of cloud resource configurations and for analysis of CloudTrail events. The cross-account role is given the following policies:
- SecurityAudit - AWS Managed Policy to provide read-only access to cloud resource configurations.
- Lacework Custom IAM Policy - A custom policy that provides Lacework read-only access to ingest EKS Audit Logs.
- SNS Topic - An SNS topic is required for all EKS Audit Log integrations. Terraform will create a new topic in the designated account.
- S3 Bucket - An S3 bucket is required for all EKS Audit Log integrations. Lacework will create a new bucket in the designated account.
- S3 Bucket Notification - An S3 bucket notification is required for all EKS Audit Log integrations. It notifies the SNS topic when a new object has been created in the bucket.
- S3 Bucket Lifecycle Rule - An S3 bucket lifecycle rule specifies the number of days that logs are retained. Defaults to 180 days.
- S3 Bucket versioning - A resource for controlling versioning on an S3 bucket.
- Kinesis Firehose - A Kinesis Firehose is required for all EKS Audit Log integrations. Lacework will create a new Firehose in the designated account.
- IAM Lacework Firehose Role - A firehose role is required to assume the AWS firehose service role.
- Lacework Firehose IAM Policy - A firehose policy is required to allow the firehose to manage the contents of the S3 bucket.
- IAM Lacework CloudWatch Role - A CloudWatch role is required to assume the logs role for each region integrated.
- Lacework CloudWatch IAM Policy - A CloudWatch policy is required to allow the firehose to access the CloudWatch logs.
- CloudWatch Subscription Filter(s) - CloudWatch subscription filter(s) are required for each EKS cluster in order to notify when logs have been added to the CloudWatch Log group.
Requirements
- AWS Account Admin - The account used to run Terraform must have administrative privileges on every AWS account you intend to integrate with Lacework.
- AWS CLI - The Terraform provider for AWS leverages the configuration from the AWS CLI. Lacework recommends installing the AWS CLI and configuring it with API keys for the account being integrated.
- Lacework Administrator - A Lacework account with administrator privileges.
- Lacework CLI - Lacework leverages the configuration from the Lacework CLI. Installing and configuring the Lacework CLI is recommended.
- Terraform - `~> 0.15`, `~> 1.0`, or `~> 1.1`.
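The Terraform version requirement can be pinned in a `terraform` block in your configuration. The sketch below assumes the `~> 1.1` line and shows where the Lacework provider requirement would sit; adjust the constraint to match your environment:

```
terraform {
  # Pin Terraform to one of the supported version lines (sketch; adjust as needed)
  required_version = "~> 1.1"

  required_providers {
    lacework = {
      source = "lacework/lacework"
    }
  }
}
```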
Module Inputs Reference
For an up-to-date guide on all available configuration inputs, refer to the module documentation on the Terraform Registry.
Module Dependencies
Lacework Terraform modules for AWS have dependencies that are installed when running `terraform init`.
For detailed information on these dependencies, visit Lacework on the Terraform Registry.
Deployment Scenarios
Use the Lacework CLI for any supported EKS cluster audit log integration scenario.
To create the `main.tf` file and run Terraform manually, follow the steps for these deployment scenarios:
- Integrate EKS cluster(s) Audit Logs in a single region - This deployment scenario configures a new Lacework EKS Audit Log integration for cluster(s) in a single AWS region.
- Integrate EKS cluster(s) Audit Logs across multiple regions - This deployment scenario configures a new Lacework EKS Audit Log integration for clusters across multiple AWS regions.
Prerequisite
Audit logging must be enabled on the clusters that you want to integrate. You can do this via the AWS CLI using the following command:
```
aws eks --region <region> update-cluster-config --name <cluster_name> \
  --logging '{"clusterLogging":[{"types":["audit"],"enabled":true}]}'
```
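To confirm the change took effect, you can read the cluster's logging configuration back. This is a sketch using the standard AWS CLI `describe-cluster` command; substitute your own region and cluster name:

```
aws eks --region <region> describe-cluster --name <cluster_name> \
  --query "cluster.logging.clusterLogging"
```

The output should include an entry with `"types": ["audit"]` and `"enabled": true`.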
Use the Lacework CLI
For details about the `lacework generate k8s eks` command, see the CLI documentation.
- Ensure the Lacework CLI is installed and configured to the Lacework account you plan to integrate.
- Install the AWS CLI. For details on how to install the AWS CLI, see Configuring your AWS CLI.
- Ensure audit logging is enabled on the clusters that you want to integrate.
- From the Lacework CLI, generate the Terraform code by running the `lacework generate k8s eks` command.
- Answer the integration questions in the CLI to customize your Terraform code for your specific integration.
- For `Run Terraform plan now?`, enter `y` to install Terraform if it is not already installed, initialize the project, download the required modules, and display the plan of resources to apply.
- Select Continue with Terraform Apply to deploy the integration.
Validate the Configuration
After Terraform finishes applying changes, use the Lacework CLI or log in to the Lacework Console to confirm the integration is working.
To validate the integration using the CLI, open a Terminal and run the `lacework cloud-account list` command. EKS cloud account integrations are listed as `AwsEksAudit`.
To validate the integration using the Lacework Console, log in to your account and go to Settings > Integrations > Cloud accounts.
Scenario 1 - Integrate EKS Clusters Audit Logs in a Single Region
This scenario creates a new Lacework EKS Audit Log integration with a cross-account IAM role to provide Lacework access. This example targets cluster(s) in a single AWS region.
```
provider "lacework" {}

module "aws_eks_audit_log" {
  source             = "lacework/eks-audit-log/aws"
  version            = "~> 0.2"
  cloudwatch_regions = ["us-west-1"]
  cluster_names      = ["example_cluster"]
}
```
- Ensure the Lacework CLI is installed and configured to the Lacework account you plan to integrate.
- Open an editor of your choice (such as Vim) and create a new file called `main.tf`.
- Copy the code snippet above and paste it into the `main.tf` file.
- Validate the provider/module versions and the configuration as covered in the provider documentation.
- Open a Terminal, change directories to the directory that contains the `main.tf` file, and run `terraform init` to initialize the project and download the required modules.
- Run `terraform plan` to validate the configuration and review pending changes.
- After you review the pending changes, run `terraform apply` to execute the changes.
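The manual steps above amount to the following shell session (a sketch; it assumes `terraform` is on your PATH and the directory name is hypothetical):

```
cd eks-audit-log/    # hypothetical directory containing main.tf
terraform init       # initialize the project and download the required modules
terraform plan       # validate the configuration and review pending changes
terraform apply      # execute the changes
```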
note
Lacework Terraform modules provide a number of inputs for customization. Visit the documentation on the Terraform Registry for the complete list of inputs for each module.
Validate the Configuration
After Terraform finishes applying changes, use the Lacework CLI or log in to the Lacework Console to confirm the integration is working.
To validate the integration using the CLI, open a Terminal and run the `lacework cloud-account list` command. EKS cloud account integrations are listed as `AwsEksAudit`.
To validate the integration using the Lacework Console, log in to your account and go to Settings > Integrations > Cloud accounts.
Scenario 2 - Integrate EKS Clusters Audit Logs Across Multiple Regions
This scenario creates a new Lacework EKS Audit Log integration with a cross-account IAM role to provide Lacework access. This example targets clusters across multiple AWS regions.
Due to limitations on the `aws_cloudwatch_log_subscription_filter` resource, to make this integration multi-region, you must move the creation of the `aws_cloudwatch_log_subscription_filter` resources to the top level and additionally create an AWS provider for each region.
Create an AWS provider for each region that has a cluster that you wish to integrate.
```
provider "aws" {
  region = "us-west-2"
  alias  = "us-west-2"
}
```
When declaring the `aws_cloudwatch_log_subscription_filter` resource for each region, you must change two fields in the example to suit your environment:
- The list of clusters within the `for_each` assignment. Note that the `toset()` function is required when using a list of strings with `for_each`.
- The `provider` to use. Set this to the region where the cluster(s) are deployed.
```
resource "aws_cloudwatch_log_subscription_filter" "lacework_cw_subscription_filter-us-west-2" {
  // the following variables require input
  for_each = toset(["<list of clusters>"])
  provider = aws.your_region_aliased_provider

  // leave the following variables as is
  name            = "${module.aws_eks_audit_log.filter_prefix}-${each.value}"
  role_arn        = module.aws_eks_audit_log.cloudwatch_iam_role_arn
  log_group_name  = "/aws/eks/${each.value}/cluster"
  filter_pattern  = module.aws_eks_audit_log.filter_pattern
  destination_arn = module.aws_eks_audit_log.firehose_arn
  depends_on      = [module.aws_eks_audit_log]
}
```
The full multi-region Terraform example:
```
provider "lacework" {}

provider "aws" {
  region = "eu-west-1"
  alias  = "eu-west-1"
}

provider "aws" {
  region = "us-west-2"
  alias  = "us-west-2"
}

module "aws_eks_audit_log" {
  source                    = "lacework/eks-audit-log/aws"
  version                   = "~> 0.2"
  cloudwatch_regions        = ["eu-west-1", "us-west-2"]
  no_cw_subscription_filter = true
}

resource "aws_cloudwatch_log_subscription_filter" "lacework_cw_subscription_filter-eu-west" {
  for_each        = toset(["cluster-1", "cluster-2"])
  provider        = aws.eu-west-1
  name            = "${module.aws_eks_audit_log.filter_prefix}-${each.value}"
  role_arn        = module.aws_eks_audit_log.cloudwatch_iam_role_arn
  log_group_name  = "/aws/eks/${each.value}/cluster"
  filter_pattern  = module.aws_eks_audit_log.filter_pattern
  destination_arn = module.aws_eks_audit_log.firehose_arn
  depends_on      = [module.aws_eks_audit_log]
}

resource "aws_cloudwatch_log_subscription_filter" "lacework_cw_subscription_filter-us-west-2" {
  for_each        = toset(["cluster-3", "cluster-4"])
  provider        = aws.us-west-2
  name            = "${module.aws_eks_audit_log.filter_prefix}-${each.value}"
  role_arn        = module.aws_eks_audit_log.cloudwatch_iam_role_arn
  log_group_name  = "/aws/eks/${each.value}/cluster"
  filter_pattern  = module.aws_eks_audit_log.filter_pattern
  destination_arn = module.aws_eks_audit_log.firehose_arn
  depends_on      = [module.aws_eks_audit_log]
}
```
- Ensure you have the Lacework CLI installed and configured to the Lacework account you plan to integrate.
- Open an editor of your choice (such as Vim) and create a new file called `main.tf`.
- Copy the code snippet above and paste it into the `main.tf` file.
- Validate the provider/module versions and the configuration as covered in the provider documentation.
- Open a Terminal, change directories to the directory that contains the `main.tf` file, and run `terraform init` to initialize the project and download the required modules.
- Run `terraform plan` to validate the configuration and review pending changes.
- After you review the pending changes, run `terraform apply` to execute the changes.
note
Lacework Terraform modules provide a number of inputs for customization. Visit the documentation on the Terraform Registry for the complete list of inputs for each module.
Validate the Configuration
After Terraform finishes applying changes, use the Lacework CLI or log in to the Lacework Console to confirm the integration is working.
To validate the integration using the CLI, open a Terminal and run the `lacework cloud-account list` command. EKS cloud account integrations are listed as `AwsEksAudit`.
To validate the integration using the Lacework Console, log in to your account and go to Settings > Integrations > Cloud accounts.
AWS Security Token Service Limitations
The Lacework AWS module can fail due to limitations of STS. See the AWS STS documentation for more information. Customers using aws-vault, for example, must pass the `--no-session` flag. For more information, see the aws-vault documentation.
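For example, an aws-vault user could wrap the Terraform command as follows (a sketch; `my-profile` is a hypothetical profile name):

```
aws-vault exec --no-session my-profile -- terraform apply
```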