Deploy Linux Agent on Kubernetes
You can use the following methods to deploy the Lacework Linux agent on Kubernetes:
- Deploy with a Helm Chart - Helm is a package manager for Kubernetes that uses a packaging format called charts. A chart is a collection of files that describe a related set of Kubernetes resources. You can download the Lacework Helm chart and use it to deploy the agent.
- Deploy with a DaemonSet - DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework. You can use the DaemonSet method to deploy the agent onto any Kubernetes cluster, including hosted versions like AKS, EKS, and GKE.
- Deploy with Terraform - For organizations using Hashicorp Terraform to automate their environments, Lacework provides the terraform-kubernetes-agent module to create a Secret and DaemonSet for deploying the agent in a Kubernetes cluster.
After you install the agent, it takes 10 to 15 minutes for agent data to appear in the Lacework Console under Resources > Agents. You can also view your Kubernetes cluster in the Lacework Console under Resources > Kubernetes. If your cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
note
The datacollector pod uses privileged containers and requires access to host PID namespace, networking, and volumes.
Prerequisites
- A Kubernetes cluster on a supported Kubernetes environment. For more information, see Supported Kubernetes Environments.
- To enable the agent to read the cluster name:
  - If you created the Kubernetes cluster in a K8s orchestrator that supports machine tags, such as AKS, EKS, and GKE, add the `KubernetesCluster` machine tag for the cluster using the instructions at Add KubernetesCluster Machine Tag.
  - If you created your own Kubernetes cluster (rather than using EKS, AKS, GKE, or a similar orchestrator), specify the cluster name using the `KubernetesCluster` tag in the config.json file using the instructions at Set KubernetesCluster Agent Tag in config.json File.
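For the self-managed case, a minimal sketch of setting the `KubernetesCluster` tag in config.json (the cluster name here is a placeholder; the real file also contains your agent access token, as described in the linked instructions):

```shell
# Hypothetical cluster name; replace with your own.
CLUSTER_NAME="prod-cluster"

# Write a minimal config.json fragment that sets the KubernetesCluster tag.
cat > config.json <<EOF
{
  "tags": {
    "KubernetesCluster": "${CLUSTER_NAME}"
  }
}
EOF

cat config.json
```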
Supported Kubernetes Environments
The Lacework Linux agent supports the following Kubernetes versions, managed Kubernetes services, container network interfaces (CNI), service meshes, and container runtime engines:
Kubernetes Environment | Supported Versions/Names
---|---
Kubernetes versions | 1.9.x to 1.25
K8s orchestrators | AKS, EKS, EKS Fargate, ECS Fargate, GKE, OpenShift, Rancher, ROSA
CNI | Weave Net, Calico, Flannel, Cilium, kubenet
Service mesh | Linkerd 2.11
Container runtime engine | Docker, containerd, CRI-O
Install using Helm
Supported Versions
- EKS (Bottlerocket and Amazon Linux)
- Helm v3.1.x to v3.11.x
- Kops 1.20
- Kubernetes v1.10 to v1.25
- Ubuntu 20.04
Install using Lacework Charts Repository (Recommended)
Use Helm to Install the Agent (Charts Repository)
Helm Charts help you define, install, and upgrade Kubernetes applications.
Add the Lacework Helm Charts repository:
helm repo add lacework https://lacework.github.io/helm-charts/
Install the Helm charts or upgrade an existing Helm chart. If the tenant you are using is located outside North America, replace the values for `LACEWORK_AGENT_TOKEN` and `LACEWORK_SERVER_URL`.
note
`KUBERNETES_CLUSTER_NAME` and `KUBERNETES_ENVIRONMENT_NAME` are optional. Replace them with values from your setup. To change the `KUBERNETES_CLUSTER_NAME`, see How Lacework Derives the Kubernetes Cluster Name.
If you are using a tenant located in North America, run the following command:
helm upgrade --install --namespace lacework --create-namespace \
--set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
--set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
--set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
lacework-agent lacework/lacework-agent
If you are using a tenant located outside of North America, run the following command:
helm upgrade --install --namespace lacework --create-namespace \
--set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
--set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
--set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
--set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
lacework-agent lacework/lacework-agent
Verify the pods:
kubectl get pods -n lacework -o wide
After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
Install Using the Charts from the Lacework Release Page
note
Lacework recommends installing from the charts repository rather than from the Lacework Release Page when possible. Installing from the charts repository does not require editing the Chart.yaml file, whereas this method does.
Get the Helm Chart for the Agent
The Helm chart is available as part of the agent release tarball from the Lacework Agent Release GitHub repository (v2.12.1 or later).
The Helm chart includes the following:
./helm/
./helm/lacework-agent/
./helm/lacework-agent/Chart.yaml
./helm/lacework-agent/templates/
./helm/lacework-agent/templates/_helpers.tpl
./helm/lacework-agent/templates/configmap.yaml
./helm/lacework-agent/templates/daemonset.yaml
./helm/lacework-agent/values.yaml
Edit Chart.yaml
For Helm charts v4.2, in Chart.yaml, change the line version: 4.2.0.218 to version: 4.2.0.
Use Helm to Install the Agent (Release Page)
Replace the example text with your own values.
Install the charts or upgrade an existing deployment.
helm upgrade --install --namespace lacework --create-namespace \
--set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
--set laceworkConfig.serverUrl=${LACEWORK_SERVER_URL} \
--set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
--set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
lacework-agent helm.tar.gz
Verify the pods:
kubectl get pods -n lacework -o wide
After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
If you have an autogenerated or custom Helm deployment and these steps do not work, you can optionally:
- Change `"additionalProperties": true` in `values.schema.json`. Lacework supports this change, but it is not encouraged.
- Use Helm to install the agent from the charts repository instead.
Install on OpenShift
Install with the cluster-admin Role
Use the normal Helm installation instructions to install the Lacework agent on OpenShift.
Install with a Service Account
You can also install the Lacework agent using Helm charts and a service account.
Before deploying the Helm chart, ensure that the service account has permissions to create privileged pods by running the following command:
oc adm policy add-scc-to-user privileged -z ${SERVICE_ACCOUNT_NAME}
To install the agent using a Helm chart:
Specify a service account when installing the Helm chart by adding `laceworkConfig.serviceAccountName` to the Helm command:
--set laceworkConfig.serviceAccountName="${SERVICE_ACCOUNT_NAME}"
Modify the values.yaml file and add the service account:
# [Optional] Specify the service account for agent pods
serviceAccountName: ${SERVICE_ACCOUNT_NAME}
You can specify that the agent runs on all nodes in a cluster or in a subset of nodes in the cluster.
Enable the Lacework Agent on all Nodes
To run the Lacework agent on all nodes in your cluster, specify the following toleration during installation in one of the following ways:
Enter a command, such as:
--set "tolerations[0].effect=NoSchedule" --set "tolerations[0].operator=Exists"
Modify the values.yaml file and add data similar to the following:
tolerations:
# Allow Lacework agent to run on all nodes in case of a taint
- effect: NoSchedule
operator: Exists
Enable the Lacework Agent on a Subset of Nodes
To set multiple tolerations for the Lacework agent, set an array of desired tolerations in one of the following ways:
Enter a command such as the following and repeat for each scheduling condition, incrementing the array index each time:
--set "tolerations[0].effect=NoSchedule" --set "tolerations[0].key=node-role.kubernetes.io/master"
Modify the values.yaml file and add data similar to the following:
tolerations:
# Allow Lacework agent to run on nodes with these taints
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
- effect: NoSchedule
key: node-role.kubernetes.io/infra
Helm Configuration Options
Specify the Container Runtime that the Agent Uses to Discover Containers
By default, the agent automatically discovers the container runtime (`containerd` or `docker`). You can use the `containerRuntime` option to specify the runtime that you want the agent to use to discover containers.
To specify the container runtime that the agent uses, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.containerRuntime=docker
- Modify the values.yaml file and add data similar to the following:
containerRuntime: docker
note
If either the `containerRuntime` or the `containerEngineEndpoint` setting is wrong, the agent will not detect containers.
Specify the Endpoint that the Agent Uses to Discover Containers
By default, the agent uses the default endpoint for the system's container runtime. You can use the `containerEngineEndpoint` option to specify any valid URL, TCP endpoint, or Unix socket as the endpoint.
To specify the endpoint that the agent uses to discover containers, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.containerEngineEndpoint=unix:///run/docker.sock
- Modify the values.yaml file and add data similar to the following:
containerEngineEndpoint: unix:///run/docker.sock
note
If either the `containerRuntime` or the `containerEngineEndpoint` setting is wrong, the agent will not detect containers.
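Taken together, a values.yaml that pins both settings might look like the following sketch (the `laceworkConfig` nesting mirrors the `--set laceworkConfig.*` flags above; adjust keys to match your chart version):

```yaml
# Sketch only: pin the runtime and its endpoint explicitly so the
# agent does not rely on autodiscovery. Values are examples.
laceworkConfig:
  containerRuntime: docker
  containerEngineEndpoint: unix:///run/docker.sock
```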
Specify Nodes for Your Agent Deployment
Tolerations let you run the agent on nodes that have scheduling constraints such as master nodes or infrastructure nodes (for OpenShift).
By default, the Lacework agent is permitted to run on worker nodes and master nodes in your Kubernetes cluster. This is done by specifying the toleration as follows:
# Allow Lacework agent to run on all nodes including master node
tolerations:
- effect: NoSchedule
key: node-role.kubernetes.io/master
- effect: NoSchedule
key: node-role.kubernetes.io/control-plane
# Allow Lacework agent to run on all nodes in case of a taint
# - effect: NoSchedule
# operator: Exists
Specify CPU Requests and Limits
CPU requests specify the minimum CPU resources available to containers. CPU limits specify the maximum CPU resources available to containers. For more information, see Resource Management for Pods and Containers.
The default CPU request is 200m. The default CPU limit is 500m.
You can specify the CPU requests and limits in one of the following ways:
Enter a command such as the following on the command line:
--set resources.requests.cpu=300m --set resources.limits.cpu=500m
Modify the values.yaml file and add data similar to the following:
resources:
requests:
cpu: 300m
limits:
cpu: 500m
Specify Memory Requests and Limits
Memory requests specify the minimum memory available to containers. Memory limits specify the maximum memory available to containers. For more information, see Resource Management for Pods and Containers.
The default memory request is 512Mi. The default memory limit is 1450Mi.
You can specify the memory requests and limits in one of the following ways:
Enter a command such as the following on the command line:
--set resources.requests.memory=384Mi --set resources.limits.memory=512Mi
Modify the values.yaml file and add data similar to the following:
resources:
requests:
memory: 384Mi
limits:
memory: 512Mi
Specify a Proxy URL on Helm Charts
Proxy servers allow you to specify a URL to route agent traffic.
You can set the proxy server URL in your Lacework Helm charts in one of the following ways:
Enter a command such as `--set laceworkConfig.proxyUrl=${LACEWORK_PROXY_URL}` on the command line.
Modify the values.yaml file and add data similar to the following:
# [Optional] Specify a proxy server URL to use for routing agent traffic
proxyUrl: value
Configure File Integrity Monitoring Properties
Enable or Disable FIM
Enable FIM in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.mode=enable` on the command line.
Modify the values.yaml file and add data similar to the following:
mode: enable
Disable FIM in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.mode=disable` on the command line.
Modify the values.yaml file and add data similar to the following:
mode: disable
Specify the File Path
You can override default paths for FIM using this property in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.filePath={<path1>,<path2>, ...}` on the command line.
Modify the values.yaml file and add data similar to the following:
filePath: [<path1>, <path2>, ...]
Specify the File Path to Ignore
Alternatively, you can override default paths by specifying files to ignore for FIM in one of the following ways:
Enter a command such as `--set laceworkConfig.fileIgnore={<path1>,<path2>, ...}` on the command line.
Modify the values.yaml file and add data similar to the following:
fileIgnore: [<path1>, <path2>, ...]
Prevent the Access Timestamp from Being Used in Hash Computation
You can prevent the access timestamp from being used by utilizing this property in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.noAtime=true` on the command line.
Modify the values.yaml file and add data similar to the following:
noAtime: true
Alternatively, you can enable access timestamp to be used by utilizing this property in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.noAtime=false` on the command line.
Modify the values.yaml file and add data similar to the following:
noAtime: false
Specify the FIM Scan Start Time
You can specify a start time for the daily FIM scan using this property in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.runAt=<HH:MM>` on the command line.
Modify the values.yaml file and add data similar to the following:
runAt: <HH:MM>
Specify the FIM Scan Interval
You can specify the FIM scan interval using this property in one of the following ways:
Enter a command such as `--set laceworkConfig.fim.crawlInterval=<time_interval>` on the command line.
Modify the values.yaml file and add data similar to the following:
crawlInterval: <time_interval>
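As a consolidated sketch, the individual FIM flags above map to a values.yaml block like the following (nesting under `laceworkConfig.fim` mirrors the `--set laceworkConfig.fim.*` flags; paths and times are example values):

```yaml
laceworkConfig:
  fim:
    mode: enable
    filePath: [/etc, /usr/bin]   # example override of the default paths
    noAtime: true                # exclude access time from hash computation
    runAt: "02:30"               # daily scan start time (HH:MM)
    crawlInterval: "1440"        # example; see the agent docs for accepted interval formats
```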
Specify Package Scan Options
By default, package scan is enabled.
To disable package scan, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.packagescan.enable=false
- Modify the values.yaml file and add data similar to the following:
packagescan:
  enable: false
If package scan is disabled, do one of the following to enable it:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.packagescan.enable=true
- Modify the values.yaml file and add data similar to the following:
packagescan:
  enable: true
To specify interval (in minutes) between package scans, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.packagescan.interval=60
- Modify the values.yaml file and add data similar to the following:
packagescan:
  interval: 60
Specify Process Scan Options
By default, process scan is enabled.
To disable process scan, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.procscan.enable=false
- Modify the values.yaml file and add data similar to the following:
procscan:
  enable: false
If process scan is disabled, do one of the following to enable it:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.procscan.enable=true
- Modify the values.yaml file and add data similar to the following:
procscan:
  enable: true
To specify the interval (in minutes) between process scans, do one of the following:
- Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.procscan.interval=60
- Modify the values.yaml file and add data similar to the following:
procscan:
  interval: 60
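The package and process scan options above can be combined in one values.yaml block; a sketch with example values (the `laceworkConfig` nesting mirrors the `--set laceworkConfig.*` flags):

```yaml
laceworkConfig:
  packagescan:
    enable: true
    interval: 60   # minutes between package scans
  procscan:
    enable: true
    interval: 60   # minutes between process scans
```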
Filter Executables Tracked by the Agent
By default, the agent collects command-line arguments for all executables when it collects process metadata. You can use the `cmdlinefilter` option to selectively enable or disable collection of command-line arguments for executables.
To collect command-line arguments for specific executables only, do one of the following:
Use one of the following with the `helm install` or `helm upgrade` command:
- To collect data for one executable:
--set laceworkConfig.cmdlinefilter.allow=java
- To collect data for more than one executable, use a comma-separated list:
--set laceworkConfig.cmdlinefilter.allow=java,python
- To collect data for all executables, use the * wildcard. This is the default and recommended setting:
--set laceworkConfig.cmdlinefilter.allow=*
Use one of the following in the values.yaml file:
- To collect data for one executable:
cmdlinefilter:
  allow: java
- To collect data for more than one executable, use a comma-separated list:
cmdlinefilter:
  allow: java,python
- To collect data for all executables, use the * wildcard. This is the default and recommended setting:
cmdlinefilter:
  allow: "*"
To disable collection of command-line arguments for specific executables, do one of the following:
Use one of the following with the `helm install` or `helm upgrade` command:
- To disable collection of data for one executable:
--set laceworkConfig.cmdlinefilter.disallow=java
- To disable collection of data for more than one executable, use a comma-separated list:
--set laceworkConfig.cmdlinefilter.disallow=java,python
- To disable collection of data for all executables, use the * wildcard. This setting stops data collection for all executables and is not recommended:
--set laceworkConfig.cmdlinefilter.disallow=*
Use one of the following in the values.yaml file:
- To disable collection of data for one executable:
cmdlinefilter:
  disallow: java
- To disable collection of data for more than one executable, use a comma-separated list:
cmdlinefilter:
  disallow: java,python
- To disable collection of data for all executables, use the * wildcard. This setting stops data collection for all executables and is not recommended:
cmdlinefilter:
  disallow: "*"
important
Limiting the data collected by the agent reduces Lacework’s process-aware threat and intrusion detection in your cloud environment and limits the alerts that Lacework generates. If you must disable sensitive data collection in your environment, Lacework recommends disabling the smallest set of executables possible.
Specify Image Pull Secrets
Image pull secrets enable fetching the Lacework agent image from private repositories and bypassing registry rate limits.
You can configure image pull secrets in one of the following ways:
Modify your Helm install/upgrade command with the following option:
--set image.imagePullSecrets.name=<registrySecret>
Modify the values.yaml file and add data similar to:
# [Optional] imagePullSecrets.
# https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
imagePullSecrets:
- name: <registrySecret>
Where `<registrySecret>` is the name of the secret that contains the credentials necessary to fetch the Lacework agent image.
Specify an Existing Secret
Existing secrets allow you to store the Lacework access token outside of Helm.
You can use an existing secret in your Lacework Helm charts in one of the following ways:
Enter a command such as the following on the command line:
--set laceworkConfig.accessToken.existingSecret.key="lacework_agent_token" --set laceworkConfig.accessToken.existingSecret.name="lacework-agent-secret"
Modify the values.yaml file and add data similar to the following:
laceworkConfig:
# [Required] An access token is required before running agents.
# Visit https://<LACEWORK UI URL> for eg: https://lacework.lacework.net
accessToken:
existingSecret:
key: lacework_agent_token
name: lacework-agent-secret
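The referenced secret itself is created outside of Helm. A minimal sketch of such a Secret manifest, assuming the name and key shown above (the namespace must match the chart's install namespace):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: lacework-agent-secret
  namespace: lacework
type: Opaque
stringData:
  # Key name must match laceworkConfig.accessToken.existingSecret.key
  lacework_agent_token: <YOUR_ACCESS_TOKEN>
```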
Specify Custom Annotations on Helm Charts
Annotations are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionalities.
You can set annotations in your Lacework Helm charts in one of the following ways:
Enter a command such as `--set laceworkConfig.annotations.<key>=<value>` on the command line.
Modify the values.yaml file and add data similar to the following:
# [Optional] Define custom annotations to use for identifying resources created by these charts
annotations:
key: value
another_key: another_value
Specify Custom Labels on Helm Charts
Similar to custom annotations, custom labels are a way of adding non-identifying metadata to Kubernetes objects. They are used by external tools to provide extra functionalities.
You can set labels in your Lacework Helm charts in one of the following ways:
Enter a command such as `--set laceworkConfig.labels.<key>=<value>` on the command line.
Modify the values.yaml file and add data similar to the following:
# [Optional] Define custom labels to use for identifying resources created by these charts
labels:
key: value
another_key: another_value
Specify Tags to Categorize Agents
You can use the `tags` option to specify name/value tags to categorize your agents. For more information, see Adding Agent Tags.
To specify tags, do one of the following:
Use the following option with the `helm install` or `helm upgrade` command:
--set laceworkConfig.tags.<tagname1>=<value1> --set laceworkConfig.tags.<tagname2>=<value2>
For example:
--set laceworkConfig.tags.location=austin --set laceworkConfig.tags.owner=pete
Modify the values.yaml file and add data similar to the following:
tags:
  <tagname1>: <value1>
  <tagname2>: <value2>
For example:
tags:
  location: austin
  owner: pete
Specify the perfmode Property on Helm Charts
You can set the perfmode property in your Lacework Helm charts in one of the following ways:
Enter a command such as `--set laceworkConfig.perfmode=PERFMODE_TYPE` on the command line.
Modify the values.yaml file and add data similar to the following:
# [Optional] Set to one of the other modes like ebpflite, scan, or lite for load balancers.
perfmode: PERFMODE_TYPE
Where `PERFMODE_TYPE` can be one of the following values:
- `ebpflite` - The eBPF lite mode.
- `lite` - The lite mode.
- `scan` - The scan mode.
- `null` - Disables the perfmode property. The agent runs in normal mode.
Disable or Enable Logging to stdout
Logging to stdout is enabled by default for Lacework Helm charts. You can disable stdout logging in one of the following ways:
Enter a command such as `--set laceworkConfig.stdoutLogging=false` on the command line.
Modify the values.yaml file and add data similar to the following:
stdoutLogging: false
Install a Specific Version of the Lacework Agent Using Helm Charts
The Lacework Helm Charts Repository contains a Helm chart version for every agent version. By default, the latest version of the Lacework agent is installed when you use the Lacework Helm Charts Repository to install the agent. You can use the chart version corresponding to an agent version to install a specific version of the agent.
Add the Lacework Helm Charts repository:
helm repo add lacework https://lacework.github.io/helm-charts/
If the repository was already added on your machine, update the repository:
helm repo update lacework
View the chart versions available in the repository:
helm search repo lacework --versions
NAME CHART VERSION APP VERSION DESCRIPTION
lacework/lacework-agent 6.2.0 1.0 Lacework Agent
lacework/lacework-agent 6.1.2 1.0 Lacework Agent
lacework/lacework-agent 6.1.0 1.0 Lacework Agent
lacework/lacework-agent 6.0.2 1.0 Lacework Agent
lacework/lacework-agent 6.0.1 1.0 Lacework Agent
lacework/lacework-agent 6.0.0 1.0 Lacework Agent
lacework/lacework-agent 5.9.0 1.0 Lacework Agent
lacework/lacework-agent 5.8.0 1.0 Lacework Agent
lacework/lacework-agent 5.7.0 1.0 Lacework Agent
lacework/lacework-agent 5.6.0 1.0 Lacework Agent
lacework/lacework-agent 5.5.2 1.0 Lacework Agent
In this example, the 6.2.0 chart version corresponds to the 6.2.0 version of the agent.
Use the `--version` option to install the agent from a specific chart version. For example, run the following command to install the 6.2.0 version of the agent with the 6.2.0 chart version:
helm upgrade --install --version 6.2.0 --namespace lacework --create-namespace \
--set laceworkConfig.accessToken=${LACEWORK_AGENT_TOKEN} \
--set laceworkConfig.kubernetesCluster=${KUBERNETES_CLUSTER_NAME} \
--set laceworkConfig.env=${KUBERNETES_ENVIRONMENT_NAME} \
lacework-agent lacework/lacework-agent
Deploy with a DaemonSet
DaemonSet Visibility
When an agent is installed on a node as a DaemonSet pod or on the node itself, the agent has visibility into the following resources:
- Processes running on the host.
- Processes running in a container that make a network connection (server or client).
- Servers and processes inside containers that are actively listening on ports.
- File Integrity Monitoring (FIM) on the host.
- Host vulnerabilities.
DaemonSet Deployment Using a configmap
Download the Kubernetes Config (`lacework-cfg-k8s.yaml`) and Kubernetes Orchestration (`lacework-k8s.yaml`) files using the instructions in Create Agent Access Tokens and Download Linux Agent Installers.
Create the pods namespace:
kubectl create namespace lacework
note
Lacework recommends assigning a namespace to the DaemonSet config.
Using the kubectl command line interface, add the Lacework configuration file into the cluster in the newly created namespace.
kubectl create -f lacework-cfg-k8s.yaml -n lacework
Instruct the Kubernetes orchestrator to deploy an agent on all nodes in the cluster, including the master. To change the CPU and memory limits, see Change Agent Resource Installation Limits on K8s Environments.
kubectl apply -f lacework-k8s.yaml -n lacework
After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
Repeat the above steps for each Kubernetes cluster.
The config.json file is embedded in the lacework-cfg-k8s.yaml file. To customize FIM or add tags in a Kubernetes environment, edit the configuration section of the YAML file and push the revised lacework-cfg-k8s.yaml file to the cluster using the following command:
kubectl replace -f lacework-cfg-k8s.yaml -n lacework
DaemonSet Deployment Using a Secret
Download the Kubernetes orchestration file (lacework-k8s.yaml) using the instructions in Create Agent Access Tokens and Download Linux Agent Installers.
Edit the lacework-k8s.yaml file and make the following changes:
- Change `configMap` to `secret`
- Change `name` to `secretName`
Use the following command in the kubectl command line interface to create the Lacework access token secret. In the command, replace:
- `YOUR_ACCESS_TOKEN` with the agent access token. For more information, see Create Agent Access Tokens and Download Linux Agent Installers.
- `CLUSTER_NAME` with your Kubernetes cluster name.
- `SERVER_URL` with your Lacework server URL. For more information, see Agent Server URL.
kubectl create secret generic lacework-config --from-literal config.json='{"tokens":{"AccessToken":"YOUR_ACCESS_TOKEN"}, "serverurl":"SERVER_URL", "tags":{"Env":"k8s", "KubernetesCluster":"CLUSTER_NAME"}}' --from-literal syscall_config.yaml=""
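Because the secret embeds a JSON literal, a stray quote or brace silently produces an unusable config. A sketch of assembling and inspecting the literal first, using placeholder values:

```shell
# Placeholders; substitute your real token, server URL, and cluster name.
ACCESS_TOKEN="YOUR_ACCESS_TOKEN"
SERVER_URL="https://api.lacework.net"
CLUSTER_NAME="CLUSTER_NAME"

# Assemble the config.json literal that kubectl will store in the secret.
CONFIG="{\"tokens\":{\"AccessToken\":\"${ACCESS_TOKEN}\"}, \"serverurl\":\"${SERVER_URL}\", \"tags\":{\"Env\":\"k8s\", \"KubernetesCluster\":\"${CLUSTER_NAME}\"}}"
echo "$CONFIG"

# Then create the secret from it:
# kubectl create secret generic lacework-config \
#   --from-literal config.json="$CONFIG" \
#   --from-literal syscall_config.yaml=""
```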
You should see the message `secret/lacework-config created` if the secret is created successfully.
Instruct the Kubernetes orchestrator to deploy an agent on all nodes in the cluster, including the master. To change the CPU and memory limits, see Change Agent Resource Installation Limits on K8s Environments.
kubectl create -f lacework-k8s.yaml
You should see the message `daemonset.apps/lacework-agent created` if the DaemonSet is created successfully.
After you install the agent, it takes 10 to 15 minutes for the agent data to appear in the Lacework Console. In the Lacework Console, go to Resources > Kubernetes and click Clusters to verify that the cluster on which you installed the agent is displayed. If the cluster is not displayed, see How Lacework Derives the Kubernetes Cluster Name.
Repeat the above steps for each Kubernetes cluster.
To customize FIM or add tags in a Kubernetes environment, edit the configuration section of the YAML file and push the revised lacework-k8s.yaml file to the cluster using the following commands:
kubectl replace -f lacework-k8s.yaml
kubectl create namespace lacework
kubectl apply -f lacework-k8s.yaml -n lacework
You can confirm the DaemonSet status using the following command:
kubectl get ds
or
kubectl get pods --all-namespaces | grep lacework-agent
Deploy DaemonSet Using Terraform
Lacework maintains the terraform-kubernetes-agent module to create a Secret and DaemonSet for deploying the Lacework Datacollector Agent in a Kubernetes cluster.
If you are new to the Lacework Terraform Provider or Lacework Terraform Modules, read the Terraform for Lacework Overview article to learn the basics of configuring the provider.
This topic assumes familiarity with the Terraform Provider for Kubernetes maintained by Hashicorp on the Terraform Registry.
DaemonSets are an easy way to deploy a Kubernetes pod onto every node in the cluster. This is useful for monitoring tools such as Lacework. You can use the DaemonSet method to deploy Lacework onto any Kubernetes cluster, including hosted versions such as EKS, AKS, and GKE.
Run Terraform
The following code snippet creates a Lacework Agent Access token with Terraform and then deploys the DaemonSet to the Kubernetes cluster being managed with Terraform.
important
Before running this code, ensure that the following settings match the configurations for your deployment.
- `config_path`
- `config_context`
- `lacework_server_url`
terraform {
required_providers {
kubernetes = {
source = "hashicorp/kubernetes"
version = "~> 2.0"
}
lacework = {
source = "lacework/lacework"
version = "~> 1.0"
}
}
}
provider "kubernetes" {
config_path = "~/.kube/config"
config_context = "my-context"
}
provider "lacework" {
# Configuration options
}
resource "lacework_agent_access_token" "k8s" {
name = "prod"
description = "k8s deployment for production env"
}
module "lacework_k8s_datacollector" {
source = "lacework/agent/kubernetes"
version = "~> 1.0"
lacework_access_token = lacework_agent_access_token.k8s.token
# For deployments in Europe, overwrite the Lacework Server URL
#lacework_server_url = "https://api.fra.lacework.net"
# Add the lacework_agent_tag argument to retrieve the cluster name in the Kubernetes Dossier
lacework_agent_tags = {KubernetesCluster: "Name of the Kubernetes cluster"}
pod_cpu_request = "200m"
pod_mem_request = "512Mi"
pod_cpu_limit = "500m"
pod_mem_limit = "1024Mi"
}
note
Due to upstream breaking changes, version 1.0+ of this module discontinued support for version 1.x of the `hashicorp/kubernetes` provider. If version 1.x of the `hashicorp/kubernetes` provider is required, pin this module's version to `~> 0.1`.
- Open an editor and create a file called `main.tf`.
- Copy the code snippet above into the `main.tf` file and save the file.
- Run `terraform plan` and review the changes that will be applied.
- Once satisfied with the changes that will be applied, run `terraform apply -auto-approve` to execute Terraform.
Validate the Changes
After Terraform executes, you can use `kubectl` to validate that the DaemonSet is deployed successfully:
kubectl get pods -l name=lacework -o=wide --all-namespaces