Kubernetes Integration

Learn how to integrate gVisor with Kubernetes for enhanced container security and isolation in your cluster.

Overview

gVisor integrates with Kubernetes through the RuntimeClass API, allowing you to:

  • Run security-sensitive workloads with additional isolation
  • Mix gVisor and standard containers in the same cluster
  • Apply gVisor selectively based on workload requirements

Prerequisites

  • Kubernetes cluster (v1.20+ for the node.k8s.io/v1 RuntimeClass API used below)
  • Container runtime with gVisor support (containerd or CRI-O)
  • gVisor (runsc) installed on every node that will run sandboxed workloads
  • RuntimeClass feature enabled (on by default since v1.14; GA in v1.20)

Cluster Setup

Install gVisor on Nodes

Install gVisor on all nodes where you want to run secure workloads:

# On each node
curl -fsSL https://gvisor.dev/archive.key | sudo gpg --dearmor -o /usr/share/keyrings/gvisor-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/gvisor-archive-keyring.gpg] https://storage.googleapis.com/gvisor/releases release main" | sudo tee /etc/apt/sources.list.d/gvisor.list > /dev/null

sudo apt-get update
sudo apt-get install -y runsc
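
To confirm the install before wiring it into containerd, check the binary directly:

# Should print the installed runsc release
runsc --version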

Configure containerd

Update containerd configuration on each node:

# Edit /etc/containerd/config.toml
sudo vim /etc/containerd/config.toml

Add gVisor runtime configuration:

version = 2

[plugins."io.containerd.grpc.v1.cri"]
  [plugins."io.containerd.grpc.v1.cri".containerd]
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes]
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc]
        runtime_type = "io.containerd.runsc.v1"
      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc-kvm]
        runtime_type = "io.containerd.runsc.v1"
        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc-kvm.options]
          TypeUrl = "io.containerd.runsc.v1.options"
          ConfigPath = "/etc/containerd/runsc-kvm.toml"

Create KVM-specific configuration:

sudo tee /etc/containerd/runsc-kvm.toml <<EOF
[runsc_config]
platform = "kvm"
file-access = "shared"
network = "sandbox"
EOF
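
The kvm platform only works when the node exposes hardware virtualization; a quick sanity check (generic Linux commands, not gVisor-specific):

# /dev/kvm must exist and the kvm module must be loaded
ls -l /dev/kvm
lsmod | grep -E 'kvm_(intel|amd)'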

Restart containerd:

sudo systemctl restart containerd

Verify Runtime Installation

Check that gVisor runtimes are available:

# Pull a test image and run it under the runsc shim
sudo ctr --namespace k8s.io image pull docker.io/library/alpine:latest
sudo ctr --namespace k8s.io run --runtime io.containerd.runsc.v1 --rm docker.io/library/alpine:latest gvisor-test echo "gVisor test"

# Verify on Kubernetes node
kubectl get nodes -o wide

RuntimeClass Configuration

Basic RuntimeClass

Create a basic RuntimeClass for gVisor:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc

Apply the RuntimeClass:

kubectl apply -f gvisor-runtimeclass.yaml
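
Then confirm the class is registered and points at the expected handler:

kubectl get runtimeclass gvisor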

Advanced RuntimeClass with Overhead

Define per-pod resource overhead and scheduling constraints for gVisor workloads:

apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
overhead:
  podFixed:
    memory: "20Mi"
    cpu: "10m"
scheduling:
  nodeSelector:
    security: "secure-nodes"
  tolerations:
  - effect: NoSchedule
    key: gvisor
    value: "true"

Multiple RuntimeClasses

Create different RuntimeClasses for different use cases:

# High-performance gVisor with KVM
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor-kvm
handler: runsc-kvm
overhead:
  podFixed:
    memory: "30Mi"
    cpu: "15m"
---
# Standard gVisor
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc
overhead:
  podFixed:
    memory: "20Mi"
    cpu: "10m"
---
# Development gVisor with debugging
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor-debug
handler: runsc-debug
overhead:
  podFixed:
    memory: "50Mi"
    cpu: "20m"
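
Each handler above must map to a runtime entry in the node's containerd configuration; gvisor-kvm corresponds to the runsc-kvm entry added earlier, while a runsc-debug handler would need its own entry and shim config. A sketch, with illustrative paths and flag values:

# /etc/containerd/config.toml (additional runtime entry)
[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc-debug]
  runtime_type = "io.containerd.runsc.v1"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runsc-debug.options]
    TypeUrl = "io.containerd.runsc.v1.options"
    ConfigPath = "/etc/containerd/runsc-debug.toml"

# /etc/containerd/runsc-debug.toml
[runsc_config]
  debug = "true"
  debug-log = "/var/log/runsc/%ID%/gvisor.%COMMAND%.log"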

Using gVisor with Workloads

Simple Pod

Run a pod with gVisor:

apiVersion: v1
kind: Pod
metadata:
  name: gvisor-pod
spec:
  runtimeClassName: gvisor
  containers:
  - name: app
    image: nginx
    ports:
    - containerPort: 80
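
Once the pod is running, you can confirm it is actually sandboxed: inside a gVisor pod, dmesg reports the gVisor kernel rather than the host kernel (dmesg is available in most Debian-based images, including nginx):

kubectl exec gvisor-pod -- dmesg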

Deployment with gVisor

Create a deployment using gVisor:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
  labels:
    app: secure-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      runtimeClassName: gvisor
      containers:
      - name: app
        image: nginx:1.20
        ports:
        - containerPort: 80
        resources:
          requests:
            memory: "64Mi"
            cpu: "50m"
          limits:
            memory: "128Mi"
            cpu: "100m"

StatefulSet with gVisor

Run a database with enhanced isolation:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: secure-postgres
spec:
  serviceName: postgres
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      runtimeClassName: gvisor
      containers:
      - name: postgres
        image: postgres:13
        env:
        - name: POSTGRES_PASSWORD
          value: "secretpassword"
        - name: POSTGRES_DB
          value: "myapp"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-data
          mountPath: /var/lib/postgresql/data
        resources:
          requests:
            memory: "256Mi"
            cpu: "200m"
          limits:
            memory: "512Mi"
            cpu: "500m"
  volumeClaimTemplates:
  - metadata:
      name: postgres-data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi

Job and CronJob Integration

Secure Batch Jobs

Run batch processing with gVisor:

apiVersion: batch/v1
kind: Job
metadata:
  name: secure-data-processor
spec:
  template:
    spec:
      runtimeClassName: gvisor
      restartPolicy: Never
      containers:
      - name: processor
        image: data-processor:latest
        command: ["/bin/sh"]
        args: ["-c", "echo 'Processing sensitive data...' && sleep 30"]
        resources:
          requests:
            memory: "512Mi"
            cpu: "200m"
          limits:
            memory: "1Gi"
            cpu: "500m"
        env:
        - name: DATA_SOURCE
          value: "/data/input"
        - name: OUTPUT_PATH
          value: "/data/output"
        volumeMounts:
        - name: data-volume
          mountPath: /data
      volumes:
      - name: data-volume
        persistentVolumeClaim:
          claimName: data-pvc
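
To run the job and follow its output (the manifest file name is an example):

kubectl apply -f secure-job.yaml
kubectl wait --for=condition=complete job/secure-data-processor --timeout=120s
kubectl logs job/secure-data-processor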

Scheduled Secure Jobs

Run recurring maintenance tasks under gVisor with a CronJob:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: secure-backup
spec:
  schedule: "0 2 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          runtimeClassName: gvisor
          restartPolicy: OnFailure
          containers:
          - name: backup
            image: backup-tool:latest
            command: ["/backup.sh"]
            env:
            - name: BACKUP_TARGET
              value: "s3://my-secure-backups/"
            resources:
              requests:
                memory: "256Mi"
                cpu: "100m"
              limits:
                memory: "512Mi"
                cpu: "200m"

Service Integration

Exposing gVisor Services

Services work normally with gVisor pods:

apiVersion: v1
kind: Service
metadata:
  name: secure-app-service
spec:
  selector:
    app: secure-app
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  name: secure-app-lb
spec:
  selector:
    app: secure-app
  ports:
  - name: http
    port: 80
    targetPort: 80
  type: LoadBalancer

Ingress with gVisor Backends

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: secure-app-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: secure-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: secure-app-service
            port:
              number: 80

Multi-Tenant Configurations

Namespace-Level Security

Apply gVisor to entire namespaces using admission controllers:

apiVersion: v1
kind: Namespace
metadata:
  name: secure-tenant
  labels:
    security-level: high
    runtime: gvisor
---
# Admission controller configuration (example with OPA Gatekeeper)
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: requiregvisor
spec:
  crd:
    spec:
      names:
        kind: RequireGvisor
      validation:
        openAPIV3Schema:
          type: object
          properties:
            namespaces:
              type: array
              items:
                type: string
  targets:
  - target: admission.k8s.gatekeeper.sh
    rego: |
      package requiregvisor

      violation[{"msg": msg}] {
        input.review.kind.kind == "Pod"
        input.review.object.metadata.namespace == input.parameters.namespaces[_]
        not input.review.object.spec.runtimeClassName == "gvisor"
        msg := "Pods in secure namespaces must use gVisor runtime"
      }
---
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: RequireGvisor
metadata:
  name: secure-namespaces
spec:
  match:
    kinds:
    - apiGroups: [""]
      kinds: ["Pod"]
  parameters:
    namespaces: ["secure-tenant", "financial-data"]

Node Selection for gVisor

Use node selectors and affinity to control where gVisor workloads run:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: secure-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: secure-app
  template:
    metadata:
      labels:
        app: secure-app
    spec:
      runtimeClassName: gvisor
      nodeSelector:
        gvisor-enabled: "true"
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: security-zone
                operator: In
                values: ["high", "restricted"]
      tolerations:
      - key: "gvisor"
        operator: "Equal"
        value: "true"
        effect: "NoSchedule"
      containers:
      - name: app
        image: secure-app:latest
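
The label and taint referenced above must already exist on the gVisor-capable nodes; for example (node name is a placeholder):

kubectl label node <node-name> gvisor-enabled=true security-zone=high
kubectl taint node <node-name> gvisor=true:NoSchedule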

Monitoring and Observability

Monitoring gVisor Containers

Monitor gVisor workloads with Prometheus; the ServiceMonitor below assumes the Prometheus Operator is installed in the cluster:

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: gvisor-metrics
spec:
  selector:
    matchLabels:
      runtime: gvisor
  endpoints:
  - port: metrics
    interval: 30s
    path: /metrics
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: monitored-gvisor-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: monitored-app
  template:
    metadata:
      labels:
        app: monitored-app
        runtime: gvisor
    spec:
      runtimeClassName: gvisor
      containers:
      - name: app
        image: nginx
        ports:
        - name: http
          containerPort: 80
      - name: nginx-exporter
        image: nginx/nginx-prometheus-exporter:0.10.0
        args:
        - -nginx.scrape-uri=http://localhost/nginx_status
        ports:
        - name: metrics
          containerPort: 9113
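
A ServiceMonitor selects Services rather than Pods, so the deployment also needs a Service carrying the runtime: gvisor label and exposing the metrics port; a minimal sketch:

apiVersion: v1
kind: Service
metadata:
  name: monitored-gvisor-app
  labels:
    runtime: gvisor
spec:
  selector:
    app: monitored-app
  ports:
  - name: metrics
    port: 9113
    targetPort: metrics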

Logging Configuration

Configure structured logging for gVisor containers:

apiVersion: v1
kind: ConfigMap
metadata:
  name: fluentd-gvisor-config
data:
  fluent.conf: |
    <source>
      @type tail
      path /var/log/containers/*gvisor*.log
      pos_file /var/log/fluentd-gvisor.log.pos
      tag kubernetes.gvisor.*
      format json
      time_key timestamp
    </source>

    <filter kubernetes.gvisor.**>
      @type record_transformer
      <record>
        runtime gvisor
        security_level high
      </record>
    </filter>

    <match kubernetes.gvisor.**>
      @type elasticsearch
      host elasticsearch-logging.kube-system.svc.cluster.local
      port 9200
      index_name gvisor-logs
    </match>
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-gvisor
spec:
  selector:
    matchLabels:
      app: fluentd-gvisor
  template:
    metadata:
      labels:
        app: fluentd-gvisor
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        volumeMounts:
        - name: config
          mountPath: /fluentd/etc
        - name: varlog
          mountPath: /var/log
      volumes:
      - name: config
        configMap:
          name: fluentd-gvisor-config
      - name: varlog
        hostPath:
          path: /var/log

Performance Considerations

Resource Planning

Plan resources for gVisor workloads, keeping in mind that the RuntimeClass overhead defined earlier is added to each pod's requests and counts against quotas:

apiVersion: v1
kind: LimitRange
metadata:
  name: gvisor-limits
  namespace: secure-workloads
spec:
  limits:
  - default:
      memory: "256Mi"
      cpu: "200m"
    defaultRequest:
      memory: "128Mi"
      cpu: "100m"
    min:
      memory: "64Mi"
      cpu: "50m"
    max:
      memory: "2Gi"
      cpu: "1000m"
    type: Container
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: gvisor-quota
  namespace: secure-workloads
spec:
  hard:
    requests.cpu: "4"
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
    pods: "20"

Horizontal Pod Autoscaling

Configure HPA for gVisor workloads:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: gvisor-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: secure-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  behavior:
    scaleUp:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 25
        periodSeconds: 60

Troubleshooting

Common Issues and Solutions

RuntimeClass Not Found

# Check RuntimeClass exists
kubectl get runtimeclass

# Verify node supports runtime
kubectl describe node <node-name> | grep -i runtime

Pod Scheduling Issues

# Check pod events
kubectl describe pod <pod-name>

# Verify node selectors and tolerations
kubectl get nodes -l gvisor-enabled=true

Performance Issues

# Check resource usage
kubectl top pods --containers -n <namespace>

# Inspect memory as seen from inside the gVisor sandbox
kubectl exec -it <gvisor-pod> -- cat /proc/meminfo

Debug gVisor in Kubernetes

Debug logging for gVisor is enabled at the runtime level through the shim's runsc_config flags, not from inside the pod, so schedule the pod onto the debug handler defined earlier:

apiVersion: v1
kind: Pod
metadata:
  name: debug-gvisor-pod
spec:
  runtimeClassName: gvisor-debug
  containers:
  - name: debug-container
    image: alpine
    command: ["sleep", "3600"]
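
With a debug-log path set in the shim's runsc config (as in the earlier sketch), the sandbox logs are written on the node rather than to kubectl logs:

# On the node running the sandbox; the path matches the debug-log flag
sudo ls -R /var/log/runsc/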

Next Steps

With gVisor integrated into Kubernetes, explore: