Deploying WebRTC Applications in AWS EKS: A Step-by-Step Guide with LiveKit and STUNner

L7mp Technologies

We build Kubernetes goodies for WebRTC.

1. Introduction

Running WebRTC applications in Kubernetes has become increasingly popular as developers and engineers embrace cloud-native solutions for real-time communication. However, deploying these applications in AWS Elastic Kubernetes Service (EKS) presents unique challenges compared to other cloud providers. This blog aims to guide cloud engineers and WebRTC developers through setting up a fully functional architecture in EKS, leveraging STUNner for seamless WebRTC NAT traversal.

AWS remains the most popular cloud provider, yet setting up Kubernetes in AWS, particularly EKS, is often more complex than in Azure or Google Cloud. Configuring STUNner in EKS, for example, requires additional steps due to the way AWS handles networking and load balancing. These configurations, while manageable, can be daunting for those new to the field. This guide aims to demystify the process and provide a clear path to deploying STUNner in EKS.

To make the demonstration practical, we’ll deploy the popular LiveKit open-source WebRTC server, along with the LiveKit Meet example application. This setup will highlight the benefits of using STUNner as a TURN server in front of LiveKit while focusing on the AWS-specific steps required to make it work. The architecture that we’ll create is depicted in the figure below.

The architecture of running LiveKit behind STUNner in AWS EKS

By the end of this blog, you’ll not only have a working example of LiveKit running behind STUNner in EKS but also gain a deeper understanding of why STUNner configurations differ in AWS and how to address these challenges. Once you have STUNner properly running in EKS, you can just as easily deploy any other WebRTC media server (e.g., Mediasoup, Jitsi, Janus, or Elixir WebRTC) using the STUNner documentation — LiveKit is simply one example. Whether you’re new to WebRTC in Kubernetes or looking to streamline your AWS EKS deployments, this guide will set you on the right path.

2. Prerequisites

Before we dive into deploying WebRTC applications in AWS EKS, let’s make sure we have the necessary tools, services, and access configured. Here’s what you’ll need:

Tools Required

  1. AWS CLI: For interacting with AWS services from the command line.
  2. eksctl: A CLI tool specifically designed for creating and managing EKS clusters.
  3. kubectl: The Kubernetes CLI for interacting with your cluster.
  4. Helm: A package manager for Kubernetes used to install and manage applications.

AWS Account and API Key Setup

To follow along with this guide, you’ll need an AWS account. If you don’t already have one, you can sign up at aws.amazon.com.

Once you have an account:

  1. Log in to the AWS Management Console.
  2. Navigate to IAM (Identity and Access Management).
  3. Create a new IAM user with programmatic access.
  4. Assign the user the necessary permissions (AdministratorAccess is sufficient for this tutorial, though more restrictive permissions are recommended in production).
  5. Download the access key and secret key for this user.

With the API key ready, install and configure the AWS CLI:

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure

You’ll be prompted to enter:

  • Access Key ID
  • Secret Access Key
  • Default Region (e.g., us-west-2)
  • Output Format (default is json)
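For reference, `aws configure` persists these values into two files under `~/.aws`. A minimal sketch of what they end up looking like (the key values below are placeholders):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = eu-central-1
output = json
```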

You can run the following command to verify the AWS CLI is authenticated and working:

aws sts get-caller-identity
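If the CLI is configured correctly, this command returns a small JSON document identifying the caller, similar to the following (the IDs and user name are placeholders):

```json
{
    "UserId": "AIDAXXXXXXXXXXXXXXXXX",
    "Account": "123456789012",
    "Arn": "arn:aws:iam::123456789012:user/tutorial-user"
}
```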

Installing eksctl, kubectl, and Helm

Since eksctl, kubectl, and helm are all single Go binaries, installing them is straightforward. Follow these steps for each tool:

Install eksctl:

curl -L "https://github.com/weaveworks/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" -o eksctl.tar.gz
tar -xzf eksctl.tar.gz -C /usr/local/bin
rm eksctl.tar.gz
eksctl version

Install kubectl:

curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/$(uname -s | tr '[:upper:]' '[:lower:]')/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client

Install helm:

curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
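As a quick sanity check before moving on, the following sketch loops over the four required CLIs and reports whether each is on your PATH:

```shell
# Optional sanity check: confirm all four required CLIs are installed.
missing=""
for tool in aws eksctl kubectl helm; do
  if command -v "$tool" >/dev/null 2>&1; then
    echo "$tool: found"
  else
    echo "$tool: not found"
    missing="$missing $tool"
  fi
done
if [ -n "$missing" ]; then
  echo "Install the missing tools before continuing:$missing"
fi
```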

Once you’ve installed these tools and configured your AWS account, you’re ready to start setting up your EKS cluster and deploying WebRTC applications.

3. Setting Up an EKS Cluster

To deploy WebRTC applications in AWS EKS, the first step is to create an Elastic Kubernetes Service (EKS) cluster. In this section, we’ll walk you through the process using eksctl, a CLI tool specifically designed to simplify EKS cluster management.

Introduction to EKS and eksctl

Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service that makes it easier to run Kubernetes applications in the AWS cloud. While EKS handles much of the underlying complexity, setting up a cluster can still be time-consuming without the right tools.

This is where eksctl comes in. It’s a command-line utility originally built by Weaveworks (now fully managed by AWS) that significantly simplifies creating and managing EKS clusters. With eksctl, you can define your cluster in a YAML configuration file and create it with a single command.

Cluster Creation

To create an EKS cluster using eksctl, follow these steps:

  1. Create a YAML configuration file, such as cluster-config.yaml, with the following content:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig

metadata:
  name: stunner-livekit
  region: eu-central-1

iam:
  withOIDC: true

nodeGroups:
  - name: ng-1
    instanceType: t3.medium
    desiredCapacity: 3
    volumeSize: 100
    ssh:
      publicKeyPath: ~/.ssh/id_rsa.pub
  • Cluster Name: The cluster is named stunner-livekit.
  • Region: The cluster is deployed in the eu-central-1 region.
  • IAM with OIDC: Enables IAM roles for service accounts, which will be necessary later.
  • Node Group: Configures a node group with:
    — t3.medium instance type.
    — 3 desired nodes.
    — 100 GB of disk storage per node.
    — SSH key for secure access to the nodes.

Create the cluster using eksctl:

eksctl create cluster -f cluster-config.yaml

This command will:

  • Provision the necessary infrastructure (e.g., VPC, subnets, security groups).
  • Create the Kubernetes control plane and worker nodes.
  • Configure IAM roles and permissions.

Validating the Setup

After the cluster is created, verify that it’s running and that kubectl is configured to interact with it:

Check the cluster status:

eksctl get cluster --region eu-central-1

Test kubectl connectivity: Ensure kubectl is configured to use the new cluster by checking the nodes:

kubectl get nodes

You should see output listing the nodes in your cluster, similar to:

NAME                                             STATUS   ROLES    AGE     VERSION
ip-192-168-29-7.eu-central-1.compute.internal    Ready    <none>   3h15m   v1.30.7-eks-59bf375
ip-192-168-63-16.eu-central-1.compute.internal   Ready    <none>   3h21m   v1.30.7-eks-59bf375
ip-192-168-67-17.eu-central-1.compute.internal   Ready    <none>   3h21m   v1.30.7-eks-59bf375

4. Installing the AWS Load Balancer Controller

One of the most crucial steps in setting up your EKS cluster for any publicly reachable application is installing the AWS Load Balancer Controller. This component is essential for managing ingress traffic from the public Internet into your cluster, enabling automatic provisioning and management of AWS Application Load Balancers (ALBs) and Network Load Balancers (NLBs) for Kubernetes services.

However, this step can be quite cumbersome due to the integration between AWS IAM roles and Kubernetes service accounts. Without a proper understanding of both, the installation is prone to errors. This guide will walk you through the process step by step to ensure everything is configured correctly.

Purpose of the AWS Load Balancer Controller

The AWS Load Balancer Controller allows Kubernetes to manage AWS-specific load balancing resources for your applications. Specifically, it:

  • Provisions and configures ALBs or NLBs in response to Kubernetes ingress and service resources.
  • Ensures that your applications are exposed externally in a scalable and efficient way.
  • Provides necessary support for your EKS architecture by managing the traffic routing to your Kubernetes workloads.

Installation Steps

Follow these steps to set up the AWS Load Balancer Controller:

1. Create an IAM Role for the Controller

The controller requires an IAM role with permissions to manage AWS resources. Here’s how to set it up:

Download the required IAM policy: create a policy document (iam-policy.json) with the permissions required by the controller:

curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json

Apply the policy in AWS:

aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json

Create an IAM role for the controller and associate it with the policy:

POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AWSLoadBalancerControllerIAMPolicy`].Arn' --output text)
eksctl create iamserviceaccount \
  --cluster=stunner-livekit \
  --namespace=kube-system \
  --name=aws-load-balancer-controller \
  --attach-policy-arn="$POLICY_ARN" \
  --approve

This step creates a service account in the kube-system namespace and links it to the IAM role. You can double-check if the service account was correctly created in the Kubernetes cluster:

$ kubectl get serviceaccounts -n kube-system aws-load-balancer-controller
NAME                          SECRETS  AGE
aws-load-balancer-controller  0        30s
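Under the hood, eksctl links the service account to the IAM role with the `eks.amazonaws.com/role-arn` annotation. You can inspect it with `kubectl get sa -n kube-system aws-load-balancer-controller -o yaml`; it should look roughly like this (the account ID and role name below are placeholders, since eksctl generates the role name):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::123456789012:role/eksctl-stunner-livekit-addon-iamserviceaccount-Role1
```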

2. Install the AWS Load Balancer Controller with Helm

Once the IAM role is linked to the service account, you can proceed to install the controller using Helm.

Add the Helm repository:

helm repo add eks https://aws.github.io/eks-charts
helm repo update

Install the controller, substituting your own cluster name (stunner-livekit) and region (eu-central-1):

helm install aws-load-balancer-controller eks/aws-load-balancer-controller \
  --set clusterName=stunner-livekit \
  --set serviceAccount.create=false \
  --set serviceAccount.name=aws-load-balancer-controller \
  --namespace kube-system \
  --set region=eu-central-1

3. Validate the Installation

Check the controller pod status: You should see a pod named aws-load-balancer-controller running.

kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller

Verify that the controller is working: Check the logs of the controller pod to confirm it has started successfully:

kubectl logs -n kube-system deployment/aws-load-balancer-controller

At this point, the AWS Load Balancer Controller is installed and ready to manage traffic for your cluster. This step is critical for enabling STUNner to function in AWS EKS, as it requires the Load Balancer Controller to route traffic to its components.

4. Install Nginx Ingress and Cert-Manager for HTTP(S) Traffic and TLS Certificate Management

Although the Nginx Ingress and Cert-Manager aren’t specific to AWS or EKS, they are standard tools for managing ingress resources and TLS certificates in Kubernetes clusters, and we’ll need them for LiveKit. WebRTC requires a secure context (i.e., HTTPS) for the getUserMedia API to work in browsers, which makes securing the client-server signaling connection essential.

Note: if you’re familiar with AWS and EKS, you can replace this step with the AWS Application Load Balancer instead of Nginx, and configure Cert-Manager to use AWS-specific CAs to generate TLS certs for your ingress resources. However, setting this up is considerably more involved than using Nginx and Cert-Manager, and would distract from the focus of this blog.

To install Nginx and Cert-Manager simply execute the following:

kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/aws/deploy.yaml
kubectl annotate service -n ingress-nginx ingress-nginx-controller service.beta.kubernetes.io/aws-load-balancer-scheme=internet-facing
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
kubectl wait --for=condition=Ready -n cert-manager pod  -l app.kubernetes.io/component=webhook --timeout=90s

kubectl get services -n ingress-nginx

After checking the services in the ingress-nginx namespace, you should see the public IP (or hostname) that the AWS Load Balancer Controller created for you. Register this with your DNS provider, since it will route HTTP traffic into your cluster (AWS load balancers usually hand out a hostname, so you’ll need to define a CNAME record pointing from your own domain to it).
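The DNS step can also be scripted. With a live cluster you would pull the hostname straight from the Service’s status; the snippet below parses a captured sample of the same `kubectl get services` output instead, so it runs without cluster access (the hostname is a placeholder):

```shell
# With a live cluster you would use:
#   kubectl get svc -n ingress-nginx ingress-nginx-controller \
#     -o jsonpath='{.status.loadBalancer.ingress[0].hostname}'
# Here we parse a captured sample of the tabular output for illustration.
SVC_OUTPUT='NAME                       TYPE           CLUSTER-IP    EXTERNAL-IP
ingress-nginx-controller   LoadBalancer   10.100.1.23   k8s-ingressn-abc123.elb.eu-central-1.amazonaws.com'
# Column 4 of the LoadBalancer row is the external hostname.
LB_HOSTNAME=$(echo "$SVC_OUTPUT" | awk '$2 == "LoadBalancer" {print $4}')
echo "Create a CNAME record: meet.yourdomain.io -> $LB_HOSTNAME"
```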

We also need to create a ClusterIssuer for Cert-Manager that will use Let’s Encrypt to generate valid TLS certificates for your ingress hostnames. Apply the following YAML with kubectl:

apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    email: info@yourdomain.io # use your own domain
    privateKeySecretRef:
      name: letsencrypt-prod
    server: https://acme-v02.api.letsencrypt.org/directory
    solvers:
    - http01:
        ingress:
          class: nginx
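To see how this issuer gets used: any Ingress that carries the `cert-manager.io/cluster-issuer` annotation will have a certificate provisioned for it automatically. A minimal sketch is shown below (the hostname and backend service name are placeholders; the LiveKit manifest deployed later contains the real Ingress resources):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-prod
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - meet.yourdomain.io
      secretName: meet-tls   # Cert-Manager stores the issued certificate here
  rules:
    - host: meet.yourdomain.io
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: example-service
                port:
                  number: 80
```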

In the next section, we’ll focus on deploying STUNner and configuring it for NAT traversal in EKS.

5. Installing STUNner

With the AWS Load Balancer Controller and Cert-Manager in place, we can now focus on deploying STUNner, a vital component for enabling WebRTC applications to function seamlessly in Kubernetes. In this section, we’ll introduce STUNner, outline its purpose in WebRTC, and guide you through deploying it in your EKS cluster.

Introduction to STUNner

STUNner is a Kubernetes-based media gateway that simplifies the deployment of WebRTC applications in cloud-native environments. It acts as a STUN and TURN server, allowing WebRTC clients to establish peer-to-peer connections even when behind NATs or firewalls.

In this deployment, STUNner serves as a gateway between external WebRTC clients and media servers running in Kubernetes. By leveraging STUNner, you can:

  • Expose your WebRTC media servers through a single (TURN) port instead of thousands, which greatly reduces the security attack surface.
  • Avoid manual NAT configuration and firewall management.
  • Achieve seamless NAT traversal for WebRTC traffic.
  • Simplify scaling and operational management of WebRTC applications in Kubernetes.

STUNner is a perfect fit for EKS, and by integrating it with the AWS Load Balancer Controller, we’ll expose STUNner to handle traffic from external WebRTC clients.

STUNner Installation

To install STUNner in your EKS cluster, you can use either Helm or Kubernetes manifests. In this guide, we’ll use Helm for simplicity.

Add the STUNner Helm repository:

helm repo add stunner https://l7mp.io/stunner
helm repo update

Install STUNner into the namespace of your choice (e.g., stunner):

helm install stunner stunner/stunner-gateway-operator \
  --namespace stunner \
  --create-namespace

This command deploys the STUNner Gateway Operator, including a default dataplane. The Helm chart allows for easy customization; see the STUNner documentation for further configuration options.

Configuration

The next step in deploying STUNner is configuring it to function as a TURN server and ingress gateway for WebRTC traffic. This is achieved using a GatewayConfig, which defines the authentication method, as well as other settings like load balancer annotations tailored to the cloud provider — in this case, AWS.

In this example, we’ll use a basic username/password combination for TURN authentication. However, STUNner supports more advanced authentication methods, such as long-term credentials or third-party authentication backends, which can be integrated based on your specific requirements.

Additionally, the GatewayConfig is where we can include cloud provider–specific configurations for the load balancer using Kubernetes service annotations. These annotations are passed to the AWS Load Balancer Controller, ensuring that the service created for STUNner has the appropriate configuration for AWS-specific needs. Below is a sample configuration for AWS EKS:

apiVersion: stunner.l7mp.io/v1
kind: GatewayConfig
metadata:
  name: stunner-gatewayconfig
  namespace: stunner
spec:
  realm: stunner.l7mp.io
  authType: plaintext
  userName: "stunneruser"
  password: "stunnerpassword"
  loadBalancerServiceAnnotations:
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
    service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
    service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /live
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8086"
    service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP

Explanation of Load Balancer Annotations

Here’s a breakdown of the annotations used in the configuration and their specific roles:

service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing

This annotation specifies the scheme of the load balancer.

  • internet-facing: Makes the load balancer accessible from the public internet.
  • Alternatively, you could use internal for private access.

service.beta.kubernetes.io/aws-load-balancer-type: external

This defines the type of load balancer.

  • external: Creates an AWS Network Load Balancer (NLB) for external traffic.
  • AWS also supports other types, such as Application Load Balancers (ALBs), but STUNner requires an NLB for handling L4 TURN traffic.

service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip

Configures the NLB to target pods directly by their IPs instead of the node port.

  • This is essential for exposing STUNner pods directly to external clients.
  • Using IP target type improves scalability and avoids unnecessary hops through Kubernetes nodes.

service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"

Enables cross-zone load balancing, which distributes traffic evenly across all availability zones in the region.

  • This ensures consistent performance even when traffic spikes occur in specific zones.

service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip

Specifies source-IP-based sticky routing.

  • Since UDP is a connectionless protocol, the NLB will by default send a client’s traffic to a random STUNner endpoint. If that endpoint changes during a session, the TURN connection breaks, so we have to make sure that each client’s traffic is always sent to the same STUNner pod.

service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /live
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8086"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP

Defines the health check for STUNner.

  • STUNner is configured by default to expose this health-check endpoint.
  • The NLB’s UDP listeners need this configuration to determine whether STUNner endpoints are alive; otherwise, they will not forward any UDP traffic to STUNner.

By setting up these annotations, you ensure that the load balancer created for STUNner is optimized for WebRTC traffic in AWS EKS. The annotations handle crucial aspects like sticky routing, health checks, and cross-zone balancing, making the deployment more robust and efficient.

Once this configuration is applied, STUNner will automatically inherit these annotations when the AWS Load Balancer Controller provisions a load balancer for the service. This step is critical for ensuring that external WebRTC clients can reliably connect to the STUNner gateway.

Next, we need to create a GatewayClass that references the previous configuration. Apply the following YAML:

apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
  name: stunner-gatewayclass
spec:
  controllerName: "stunner.l7mp.io/gateway-operator"
  parametersRef:
    group: "stunner.l7mp.io"
    kind: GatewayConfig
    name: stunner-gatewayconfig
    namespace: stunner
  description: "STUNner is a WebRTC ingress gateway for Kubernetes"

Finally, let’s create a Gateway that tells STUNner on which ports it should listen for TURN traffic. In this example we create a UDP and a TCP listener. Notice that they are configured with different port numbers: a current limitation of the AWS Load Balancer Controller is that it can only create mixed-protocol services when the TCP and UDP ports differ.

apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  annotations:
    stunner.l7mp.io/enable-mixed-protocol-lb: "true"
  name: stunner-gateway
  namespace: stunner
spec:
  gatewayClassName: stunner-gatewayclass
  listeners:
    - name: udp-listener
      port: 3478
      protocol: TURN-UDP
    - name: tcp-listener
      port: 3480
      protocol: TURN-TCP

At this point, you’ve successfully installed STUNner in your EKS cluster. Once configured, STUNner will act as the gateway for WebRTC traffic, seamlessly bridging external clients with internal media servers. In the next section, we’ll deploy LiveKit and connect it with STUNner to complete the setup.

6. Deploying LiveKit

With STUNner configured and running in your AWS EKS cluster, we’re ready to deploy LiveKit, a powerful open-source platform for real-time video and audio communication. LiveKit provides the building blocks for WebRTC applications, enabling features like video conferencing, live streaming, and interactive collaboration.

Introduction to LiveKit

LiveKit simplifies the process of building scalable real-time communication systems. It serves as a media server that handles signaling, WebRTC transport, and media routing, allowing developers to focus on their applications rather than the complexities of WebRTC infrastructure.

While the official LiveKit documentation provides a Helm chart for deploying LiveKit in Kubernetes, it relies on host networking for media transport. Host networking allows pods to share the network interface of the Kubernetes node, which is often considered a Kubernetes anti-pattern. Host networking can lead to issues such as:

  • Reduced network isolation.
  • Port conflicts especially during upgrades, or even with other workloads on the node.
  • Compromised scalability in cloud environments.

Instead of using host networking, we’ll leverage STUNner to handle TURN traffic and expose LiveKit securely and efficiently. This architecture aligns better with Kubernetes best practices and provides the benefits of scalability, modularity, and proper network isolation.

Components to Deploy

To get LiveKit up and running, we’ll deploy several interconnected components, each with its own Deployment, Service, and Ingress configuration. These components include:

  1. **Redis**: Redis serves as a backend for LiveKit, managing session storage and state. It is a lightweight, in-memory database optimized for low-latency operations, making it an ideal choice for real-time communication platforms.
  2. **LiveKit Server**: The LiveKit server handles signaling, WebRTC sessions, and media routing. It is the core of the LiveKit architecture and interacts with STUNner to manage media transport for clients.
  3. **Demo Token Server**: Authentication in LiveKit requires generating JWT tokens that define user permissions and capabilities for a session. The demo token server is a simple service that generates these tokens based on predefined secrets. It’s a handy utility for testing and demonstration purposes.
  4. **LiveKit Meet Demo App**: To showcase the capabilities of LiveKit, we’ll deploy LiveKit Meet, a demo application that provides a fully functional video conferencing interface. This app is an excellent starting point for exploring LiveKit’s features or building your own WebRTC application.
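To demystify what the token server does, here is a hedged sketch of minting a LiveKit-style access token by hand: a JWT signed with HS256 using the API key/secret pair used throughout this guide (access_token / secret). The claim set is an assumption based on LiveKit’s documented token format (real tokens also carry `exp`/`nbf` timestamps); in practice you should use the official LiveKit server SDKs instead:

```shell
# Minimal HS256 JWT signed with openssl. Claims follow LiveKit's general
# token shape: iss = API key, sub = participant identity, video = grants.
b64url() { openssl base64 -A | tr '+/' '-_' | tr -d '='; }

API_KEY="access_token"   # matches "keys:" in the LiveKit ConfigMap
API_SECRET="secret"

HEADER=$(printf '{"alg":"HS256","typ":"JWT"}' | b64url)
PAYLOAD=$(printf '{"iss":"%s","sub":"demo-user","video":{"room":"test","roomJoin":true}}' "$API_KEY" | b64url)
SIG=$(printf '%s.%s' "$HEADER" "$PAYLOAD" | openssl dgst -sha256 -hmac "$API_SECRET" -binary | b64url)
TOKEN="$HEADER.$PAYLOAD.$SIG"
echo "$TOKEN"
```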

Deployment Process

All deployment files (manifests for Deployment, Service, and Ingress) for the above components have been prepared and are hosted in a single YAML file on GitHub. You can deploy all the components in one step by applying this file to your cluster.

To deploy LiveKit and its related components, run the following command:

kubectl create namespace livekit
kubectl apply -f https://raw.githubusercontent.com/megzo/aws-eks-stunner-livekit/refs/heads/main/manifests/livekit.yaml -n livekit

Please check this manifest file and modify it to your own use case. Specifically, there are two main points you should modify:

  • Check the hostnames in the Ingress resources and use your own domains. Make sure these domains point to your Nginx ingress load balancer hostname in your DNS provider.
  • Check the LiveKit ConfigMap and modify the TURN server hostnames to the public IP or hostname that the stunner-gateway service got from the AWS Load Balancer Controller. The ConfigMap should look like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: livekit-server
data:
  config.yaml: |
    keys:
      access_token: secret
    log_level: debug
    port: 7880
    redis:
      address: redis:6379
    rtc:
      port_range_start: 50000
      port_range_end: 60000
      tcp_port: 7801
      stun_servers: []
      turn_servers:
      - credential: stunnerpassword
        # use your load balancer hostname
        host: k8s-stunner-stunnerg-68f0a27aae-d7d3aa5970ce2e68.elb.eu-central-1.amazonaws.com
        port: 3478
        protocol: udp
        username: stunneruser
      - credential: stunnerpassword
        # use your load balancer hostname
        host: k8s-stunner-stunnerg-68f0a27aae-d7d3aa5970ce2e68.elb.eu-central-1.amazonaws.com
        port: 3480
        protocol: tcp
        username: stunneruser
      use_external_ip: false
    turn:
      enabled: false

Finally, we need to define a UDPRoute for STUNner to allow TURN clients to connect to LiveKit. This is a security feature of STUNner: it only permits connections to endpoints covered by a UDPRoute; otherwise, anyone holding valid TURN credentials could connect to any pod on any port inside the Kubernetes network. Also, don’t worry that the LiveKit service points to an HTTP (WebSocket) port while the WebRTC connection is UDP/RTP: STUNner does not care about the protocol type or port numbers, it simply discovers the endpoints behind the service. So apply the following YAML:

apiVersion: stunner.l7mp.io/v1
kind: UDPRoute
metadata:
  name: livekit
  namespace: stunner
spec:
  parentRefs:
    - name: stunner-gateway
  rules:
    - backendRefs:
        - name: livekit-server
          namespace: livekit

At this point, by deploying LiveKit alongside Redis, the token server, and the LiveKit Meet app, you’ll have a fully functional WebRTC platform running in Kubernetes, powered by STUNner for efficient and scalable media transport. This architecture not only avoids the pitfalls of host networking but also aligns with Kubernetes best practices, ensuring modularity, security, and cloud-native scalability.

7. Verifying the Complete Setup

Now that we’ve deployed all the components, it’s time to verify that everything is running correctly and accessible. This section will guide you through testing the setup to ensure it’s ready for real-time communication.

Accessing LiveKit Meet

To access the LiveKit Meet demo app, open the following link in your browser: https://meet.aws.stunner.cc (replace with your own ingress domain).

You should see the LiveKit Meet interface, where you can join or create a room and start testing real-time video and audio communication.

LiveKit Meet Frontend

Using the LiveKit Connection Test

LiveKit provides a connection test site that helps ensure your deployment is properly configured for WebRTC connections. Follow these steps to use the connection test:

  1. Open the connection test site: https://livekit.io/connection-test
  2. Enter the connection details: the LiveKit server URL (the wss:// address of your own ingress domain) and a valid access token (for example, one generated by the demo token server).

Now run the test and verify that the connection is established successfully.

If everything is set up correctly, you’ll see a success message indicating that your LiveKit deployment is functioning as expected.

Successful connection test on LiveKit’s site

With these tests complete, you can be confident that your LiveKit deployment in AWS EKS is operational and ready to support real-time communication. From here, you can start building and scaling your WebRTC applications on top of this robust foundation!

8. Conclusion

In this blog, we’ve walked through the complete process of deploying a WebRTC application in AWS EKS, powered by LiveKit and STUNner. From setting up an EKS cluster and configuring the AWS Load Balancer Controller to deploying STUNner and LiveKit, we’ve built a fully functional and scalable real-time communication platform in Kubernetes. I also created a GitHub repo to collect all the steps and Kubernetes manifests in one place.

However, this guide became quite lengthy due to the added complexity of running Kubernetes in AWS. Unlike other cloud providers, where STUNner often “just works” out of the box, setting up EKS with STUNner requires navigating AWS-specific challenges, such as configuring IAM roles, load balancer annotations, and integrating external services.

That said, what we’ve built here is a foundational setup. A true production-grade deployment would involve many additional steps, including:

  • Deploying multiple clusters for failover and geographic distribution.
  • Implementing autoscaling for STUNner, LiveKit, and the cluster itself to handle varying traffic loads.
  • Enhancing security with advanced authentication mechanisms and stricter policies.
  • Setting up persistent backends for Redis to ensure data durability.

If you’re looking to take your WebRTC applications to the next level or want guidance on building scalable, secure, and cloud-native WebRTC services in Kubernetes, don’t hesitate to reach out to us. With our expertise, we can help you streamline the process and achieve your goals faster. Get in touch — we’re here to help!
