Deploying WebRTC Applications in AWS EKS: A Step-by-Step Guide with LiveKit and STUNner
L7mp Technologies
We build Kubernetes goodies for WebRTC.
1. Introduction
Running WebRTC applications in Kubernetes has become increasingly popular as developers and engineers embrace cloud-native solutions for real-time communication. However, deploying these applications in AWS Elastic Kubernetes Service (EKS) presents unique challenges compared to other cloud providers. This blog aims to guide cloud engineers and WebRTC developers through setting up a fully functional architecture in EKS, leveraging STUNner for seamless WebRTC NAT traversal.
AWS remains the most popular cloud provider, yet setting up Kubernetes in AWS, particularly EKS, is often more complex than in Azure or Google Cloud. Configuring STUNner in EKS, for example, requires additional steps due to the way AWS handles networking and load balancing. These configurations, while manageable, can be daunting for those new to the field. This guide aims to demystify the process and provide a clear path to deploying STUNner in EKS.
To make the demonstration practical, we’ll deploy the popular LiveKit open-source WebRTC server, along with the LiveKit Meet example application. This setup will highlight the benefits of using STUNner as a TURN server in front of LiveKit while focusing on the AWS-specific steps required to make it work. The architecture that we’ll create is depicted in the figure below.
The architecture of running LiveKit behind STUNner in AWS EKS
By the end of this blog, you’ll not only have a working example of LiveKit running behind STUNner in EKS but also gain a deeper understanding of why STUNner configurations differ in AWS and how to address these challenges. Once you have STUNner properly running in EKS, you can just as easily deploy any other WebRTC media server (e.g., Mediasoup, Jitsi, Janus, or Elixir WebRTC) using the STUNner documentation — LiveKit is simply one example. Whether you’re new to WebRTC in Kubernetes or looking to streamline your AWS EKS deployments, this guide will set you on the right path.
2. Prerequisites
Before we dive into deploying WebRTC applications in AWS EKS, let’s make sure we have the necessary tools, services, and access configured. Here’s what you’ll need:
Tools Required
- AWS CLI: For interacting with AWS services from the command line.
- eksctl: A CLI tool specifically designed for creating and managing EKS clusters.
- kubectl: The Kubernetes CLI for interacting with your cluster.
- Helm: A package manager for Kubernetes used to install and manage applications.
AWS Account and API Key Setup
To follow along with this guide, you’ll need an AWS account. If you don’t already have one, you can sign up at aws.amazon.com.
Once you have an account:
- Log in to the AWS Management Console.
- Navigate to IAM (Identity and Access Management).
- Create a new IAM user with programmatic access.
- Assign the user the necessary permissions (AdministratorAccess is sufficient for this tutorial, though more restrictive permissions are recommended in production).
- Download the access key and secret key for this user.
With the API key ready, install and configure the AWS CLI:
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws configure
You’ll be prompted to enter:
- Access Key ID
- Secret Access Key
- Default Region (e.g., us-west-2)
- Output Format (default is json)
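For reference, `aws configure` stores these answers in two small INI files in your home directory. A sketch with placeholder values (the key IDs and region below are examples, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIAXXXXXXXXXXXXXXXX
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx

# ~/.aws/config
[default]
region = eu-central-1
output = json
```

Editing these files directly is equivalent to re-running `aws configure`, which can be handy when switching between accounts.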
You can run the following command to verify the AWS CLI is authenticated and working:
aws sts get-caller-identity
Installing eksctl, kubectl, and Helm
Since eksctl, kubectl, and helm are all single Go binaries, installing them is straightforward. Follow these steps for each tool:
Install eksctl:
curl -L "https://github.com/eksctl-io/eksctl/releases/latest/download/eksctl_$(uname -s)_amd64.tar.gz" -o eksctl.tar.gz
sudo tar -xzf eksctl.tar.gz -C /usr/local/bin
rm eksctl.tar.gz
eksctl version
Install kubectl:
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/$(uname -s | tr '[:upper:]' '[:lower:]')/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
kubectl version --client
Install helm:
curl -fsSL https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash
helm version
Once you’ve installed these tools and configured your AWS account, you’re ready to start setting up your EKS cluster and deploying WebRTC applications.
3. Setting Up an EKS Cluster
To deploy WebRTC applications in AWS EKS, the first step is to create an Elastic Kubernetes Service (EKS) cluster. In this section, we’ll walk you through the process using eksctl, a CLI tool specifically designed to simplify EKS cluster management.
Introduction to EKS and eksctl
Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service that makes it easier to run Kubernetes applications in the AWS cloud. While EKS handles much of the underlying complexity, setting up a cluster can still be time-consuming without the right tools.
This is where eksctl comes in. It’s a command-line utility originally built by Weaveworks (now fully managed by AWS) that significantly simplifies creating and managing EKS clusters. With eksctl, you can define your cluster in a YAML configuration file and create it with a single command.
Cluster Creation
To create an EKS cluster using eksctl, follow these steps:
- Create a YAML configuration file, such as cluster-config.yaml, with the following content:
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: stunner-livekit
  region: eu-central-1
iam:
  withOIDC: true
nodeGroups:
- name: ng-1
  instanceType: t3.medium
  desiredCapacity: 3
  volumeSize: 100
  ssh:
    publicKeyPath: ~/.ssh/id_rsa.pub
- Cluster Name: The cluster is named stunner-livekit.
- Region: The cluster is deployed in the eu-central-1 region.
- IAM with OIDC: Enables IAM roles for service accounts, which will be necessary later.
- Node Group: Configures a node group with:
— t3.medium instance type.
— 3 desired nodes.
— 100 GB of disk storage per node.
— SSH key for secure access to the nodes.
Create the cluster using eksctl:
eksctl create cluster -f cluster-config.yaml
This command will:
- Provision the necessary infrastructure (e.g., VPC, subnets, security groups).
- Create the Kubernetes control plane and worker nodes.
- Configure IAM roles and permissions.
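eksctl also merges a kubeconfig entry for the new cluster into ~/.kube/config. If you ever need to regenerate that entry (for example, on another machine), the AWS CLI can do it directly. A sketch, assuming the cluster name and region from cluster-config.yaml:

aws eks update-kubeconfig --name stunner-livekit --region eu-central-1

This rewrites the kubeconfig entry and sets it as the current context.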
Validating the Setup
After the cluster is created, verify that it’s running and that kubectl is configured to interact with it:
Check the cluster status:
eksctl get cluster --region eu-central-1
Test kubectl connectivity: Ensure kubectl is configured to use the new cluster by checking the nodes:
kubectl get nodes
You should see output listing the nodes in your cluster, similar to:
NAME                                             STATUS   ROLES    AGE     VERSION
ip-192-168-29-7.eu-central-1.compute.internal    Ready    <none>   3h15m   v1.30.7-eks-59bf375
ip-192-168-63-16.eu-central-1.compute.internal   Ready    <none>   3h21m   v1.30.7-eks-59bf375
ip-192-168-67-17.eu-central-1.compute.internal   Ready    <none>   3h21m   v1.30.7-eks-59bf375
4. Installing the AWS Load Balancer Controller
One of the most crucial steps in setting up your EKS cluster for any publicly reachable application is installing the AWS Load Balancer Controller. This component is essential for managing ingress traffic from the public Internet into your cluster, enabling automatic provisioning and management of AWS Application Load Balancers (ALBs) and Network Load Balancers (NLBs) for Kubernetes services.
However, this step can be quite cumbersome due to the integration between AWS IAM roles and Kubernetes service accounts. Without a proper understanding of both, the installation is prone to errors. This guide will walk you through the process step by step to ensure everything is configured correctly.
Purpose of the AWS Load Balancer Controller
The AWS Load Balancer Controller allows Kubernetes to manage AWS-specific load balancing resources for your applications. Specifically, it:
- Provisions and configures ALBs or NLBs in response to Kubernetes ingress and service resources.
- Ensures that your applications are exposed externally in a scalable and efficient way.
- Provides necessary support for your EKS architecture by managing the traffic routing to your Kubernetes workloads.
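To make this concrete, here is a minimal, hypothetical Service of type LoadBalancer (the name `my-app` and its ports are illustrations, not part of this deployment). Once the controller is installed, annotations like these on a Service tell it to provision an internet-facing NLB that targets the pods directly:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app
  annotations:
    # these annotations are reconciled by the AWS Load Balancer Controller
    service.beta.kubernetes.io/aws-load-balancer-type: external
    service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
    service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
```

STUNner uses the same mechanism: it attaches a similar set of annotations to the load balancer service it manages.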
Installation Steps
Follow these steps to set up the AWS Load Balancer Controller:
1. Create an IAM Role for the Controller
The controller requires an IAM role with permissions to manage AWS resources. Here’s how to set it up:
Download the required IAM policy: create a policy document (iam-policy.json) with the permissions required by the controller:
curl -o iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/v2.11.0/docs/install/iam_policy.json
Apply the policy in AWS:
aws iam create-policy \
  --policy-name AWSLoadBalancerControllerIAMPolicy \
  --policy-document file://iam-policy.json
Create an IAM role for the controller and associate it with the policy:
POLICY_ARN=$(aws iam list-policies --query 'Policies[?PolicyName==`AWSLoadBalancerControllerIAMPolicy`].Arn' --output text)
eksctl create iamserviceaccount \
--cluster=stunner-livekit \
--namespace=kube-system \
--name=aws-load-balancer-controller \
--attach-policy-arn="$POLICY_ARN" \
--approve
This step creates a service account in the kube-system namespace and links it to the IAM role. You can double-check if the service account was correctly created in the Kubernetes cluster:
$ kubectl get serviceaccounts -n kube-system aws-load-balancer-controller
NAME SECRETS AGE
aws-load-balancer-controller 0 30s
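Under the hood, eksctl created an IAM role and annotated the Kubernetes service account with its ARN, which is how pods using this service account obtain AWS credentials via OIDC. The resulting object looks roughly like this (the account ID and role name below are placeholders; your eksctl-generated names will differ):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    # placeholder ARN: substitute your own account ID and generated role name
    eks.amazonaws.com/role-arn: arn:aws:iam::111122223333:role/eksctl-stunner-livekit-iamserviceaccount-role
```

You can inspect the real annotation with kubectl get serviceaccount -n kube-system aws-load-balancer-controller -o yaml.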
2. Install the AWS Load Balancer Controller with Helm
Once the IAM role is linked to the service account, you can proceed to install the controller using Helm.
Add the Helm repository:
helm repo add eks https://aws.github.io/eks-charts
helm repo update
Install the controller: Replace 3. Validate the Installation Check the controller pod status: You should see a pod named aws-load-balancer-controller running. Verify that the controller is working: Check the logs of the controller pod to confirm it has started successfully: At this point, the AWS Load Balancer Controller is installed and ready to manage traffic for your cluster. This step is critical for enabling STUNner to function in AWS EKS, as it requires the Load Balancer Controller to route traffic to its components. 4. Install Nginx Ingress and Cert-Manager for HTTP(S) Traffic and TLS Certificate Management Although the Nginx Ingress and Cert-Manager aren’t specific to AWS or EKS, they are standard tools for managing ingress resources and TLS certificates in Kubernetes clusters which we’ll need for LiveKit. WebRTC requires a secure context (i.e., HTTPS) for the getUserMedia API to work in browsers, which makes securing the client-server signaling connection essential. Note: if you’re familiar with AWS and EKS you can replace this step to use the AWS Application Load Balancer instead of Nginx and configure Cert-Manager to use AWS specific CAs to generate TLS certs for your ingress resources. However, setting this up would be way more difficult than just using Nginx and Cert-Manager, and would totally distract the focus of this blog. To install Nginx and Cert-Manager simply execute the following: After checking the services for the ingress-nginx namespace you should see the public IP (or hostname) that the AWS Load Balancer Controller created for you. You should register this in your DNS provider since this will route the HTTP traffic into your cluster (AWS LoadBalancer will usually give you a hostname so you’ll need define a CNAME type DNS entry to you own domain). We also need to create a ClusterIssuer for Cert-Manager that will use Let’s Encrypt to generate a valid TLS certificate for your ingress hostnames. 
Apply the following yaml with kubectl: In the next section, we’ll focus on deploying STUNner and configuring it for NAT traversal in EKS. With the AWS Load Balancer Controller and Cert-Manager in place, we can now focus on deploying STUNner, a vital component for enabling WebRTC applications to function seamlessly in Kubernetes. In this section, we’ll introduce STUNner, outline its purpose in WebRTC, and guide you through deploying it in your EKS cluster. STUNner is a Kubernetes-based media gateway that simplifies the deployment of WebRTC applications in cloud-native environments. It acts as a STUN and TURN server, allowing WebRTC clients to establish peer-to-peer connections even when behind NATs or firewalls. In this deployment, STUNner serves as a gateway between external WebRTC clients and media servers running in Kubernetes. By leveraging STUNner, you can: STUNner is a perfect fit for EKS, and by integrating it with the AWS Load Balancer Controller, we’ll expose STUNner to handle traffic from external WebRTC clients. To install STUNner in your EKS cluster, you can use either Helm or Kubernetes manifests. In this guide, we’ll use Helm for simplicity. Add the STUNner Helm repository: Install STUNner: Replace This command deploys the STUNner Gateway Operator, including a default dataplane. The Helm chart allows for easy customization of configurations, you can find more information here. The next step in deploying STUNner is configuring it to function as a TURN server and ingress gateway for WebRTC traffic. This is achieved using a GatewayConfig, which defines the authentication method, as well as other settings like load balancer annotations tailored to the cloud provider — in this case, AWS. In this example, we’ll use a basic username/password combination for TURN authentication. 
However, STUNner supports more advanced authentication methods, such as long-term credentials or third-party authentication backends, which can be integrated based on your specific requirements. Additionally, the GatewayConfig is where we can include cloud provider–specific configurations for the load balancer using Kubernetes service annotations. These annotations are passed to the AWS Load Balancer Controller, ensuring that the service created for STUNner has the appropriate configuration for AWS-specific needs. Below is a sample configuration for AWS EKS: Here’s a breakdown of the annotations used in the configuration and their specific roles: service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing This annotation specifies the scheme of the load balancer. service.beta.kubernetes.io/aws-load-balancer-type: external This defines the type of load balancer. service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip Configures the NLB to target pods directly by their IPs instead of the node port. service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: “true” Enables cross-zone load balancing, which distributes traffic evenly across all availability zones in the region. service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip Specifies source IP based sticky routing. service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /live Defines the health check of STUNner. By setting up these annotations, you ensure that the load balancer created for STUNner is optimized for WebRTC traffic in AWS EKS. The annotations handle crucial aspects like sticky routing, health checks, and cross-zone balancing, making the deployment more robust and efficient. Once this configuration is applied, STUNner will automatically inherit these annotations when the AWS Load Balancer Controller provisions a load balancer for the service. 
This step is critical for ensuring that external WebRTC clients can reliably connect to the STUNner gateway. Next, we need to create a GatewayClass, where we refer to the previous configuration. Apply the following yaml. Finally, let’s create a Gateway that will tell STUNner on which ports it should listen to TURN traffic. In this example we create an UDP and a TCP listener. Notice that they are configured to different port numbers. This is due to the fact that there is a current limitation of the AWS Load Balancer Controller that it can only create mixed protocol services where the TCP and UDP ports are different. At this point, you’ve successfully installed STUNner in your EKS cluster. Once configured, STUNner will act as the gateway for WebRTC traffic, seamlessly bridging external clients with internal media servers. In the next section, we’ll deploy LiveKit and connect it with STUNner to complete the setup. With STUNner configured and running in your AWS EKS cluster, we’re ready to deploy LiveKit, a powerful open-source platform for real-time video and audio communication. LiveKit provides the building blocks for WebRTC applications, enabling features like video conferencing, live streaming, and interactive collaboration. LiveKit simplifies the process of building scalable real-time communication systems. It serves as a media server that handles signaling, WebRTC transport, and media routing, allowing developers to focus on their applications rather than the complexities of WebRTC infrastructure. While the official LiveKit documentation provides a Helm chart for deploying LiveKit in Kubernetes, it relies on host networking for media transport. Host networking allows pods to share the network interface of the Kubernetes node, which is often considered a Kubernetes anti-pattern. Host networking can lead to issues such as: Instead of using host networking, we’ll leverage STUNner to handle TURN traffic and expose LiveKit securely and efficiently. 
This architecture aligns better with Kubernetes best practices and provides the benefits of scalability, modularity, and proper network isolation. To get LiveKit up and running, we’ll deploy several interconnected components, each with its own Deployment, Service, and Ingress configuration. These components include: All deployment files (manifests for Deployment, Service, and Ingress) for the above components have been prepared and are hosted in a single YAML file on GitHub. You can deploy all the components in one step by applying this file to your cluster. To deploy LiveKit and its related components, run the following command: Please check this manifest file and modify it to your own use case. Specifically, there are two main points you should modify: Finally, we need to define an UDPRoute for STUNner to allow the TURN clients to connect to LiveKit. This is a security feature of STUNner that it will only permit connections to endpoints where there is an UDPRoute defined, otherwise with the right TURN credential you could just connect to any pod on any port in the Kubernetes internal network. Also, don’t worry about that the LiveKit service points to a HTTP (websocket) port, and the WebRTC connection is UDP/RTP, STUNner actually does not care about the protocol type and the port numbers, it will just discover the Endpoints behind the service. So apply the following yaml: At this point, by deploying LiveKit alongside Redis, the token server, and the LiveKit Meet app, you’ll have a fully functional WebRTC platform running in Kubernetes, powered by STUNner for efficient and scalable media transport. This architecture not only avoids the pitfalls of host networking but also aligns with Kubernetes best practices, ensuring modularity, security, and cloud-native scalability. Now that we’ve deployed all the components, it’s time to verify that everything is running correctly and accessible. 
This section will guide you through testing the setup to ensure it’s ready for real-time communication. To access the LiveKit Meet demo app, open the following link in your browser: https://meet.aws.stunner.cc (replace with your own ingress domain). You should see the LiveKit Meet interface, where you can join or create a room and start testing real-time video and audio communication. LiveKit Meet Frontend LiveKit provides a connection test site that helps ensure your deployment is properly configured for WebRTC connections. Follow these steps to use the connection test: Now run the test and verify that the connection is established successfully. If everything is set up correctly, you’ll see a success message indicating that your LiveKit deployment is functioning as expected. Successful connection test on LiveKit’s site With these tests complete, you can be confident that your LiveKit deployment in AWS EKS is operational and ready to support real-time communication. From here, you can start building and scaling your WebRTC applications on top of this robust foundation! In this blog, we’ve walked through the complete process of deploying a WebRTC application in AWS EKS, powered by LiveKit and STUNner. From setting up an EKS cluster and configuring the AWS Load Balancer Controller to deploying STUNner and LiveKit, we’ve built a fully functional and scalable real-time communication platform in Kubernetes. I also created a GitHub repo to collect all the steps and Kubernetes manifests at one place. However, this guide became quite lengthy due to the added complexity of running Kubernetes in AWS. Unlike other cloud providers, where STUNner often “ just works ” out of the box, setting up EKS with STUNner requires navigating AWS-specific challenges, such as configuring IAM roles, load balancer annotations, and integrating external services. That said, what we’ve built here is a foundational setup. 
A true production-grade deployment would involve many additional steps, including: If you’re looking to take your WebRTC applications to the next level or want guidance on building scalable, secure, and cloud-native WebRTC services in Kubernetes, don’t hesitate to reach out to us. With our expertise, we can help you streamline the process and achieve your goals faster. Get in touch — we’re here to help! Anatoly Belonog What are your thoughts? [ See more recommendations ](https://medium.com/?source=post_page---read_next_recirc--c94309af4ed8---------------------------------------) The action has been successfulhelm install aws-load-balancer-controller eks/aws-load-balancer-controller \
--set clusterName=stunner-livekit \
--set serviceAccount.create=false \
--set serviceAccount.name=aws-load-balancer-controller \
--namespace kube-system \
--set region=eu-central-1
kubectl get pods -n kube-system -l app.kubernetes.io/name=aws-load-balancer-controller
kubectl logs -n kube-system deployment/aws-load-balancer-controller
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/controller-v1.12.0/deploy/static/provider/aws/deploy.yaml
kubectl annotate service -n ingress-nginx ingress-nginx-controller service.beta.kubernetes.io/aws-load-balancer-scheme=internet-facing
kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.2/cert-manager.yaml
kubectl wait --for=condition=Ready -n cert-manager pod -l app.kubernetes.io/component=webhook --timeout=90s
kubectl get services -n ingress-nginx
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
name: letsencrypt-prod
spec:
acme:
email: info@yourdomain.io # use your own domain
privateKeySecretRef:
name: letsencrypt-prod
server: https://acme-v02.api.letsencrypt.org/directory
solvers:
- http01:
ingress:
class: nginx
5. Installing STUNner
Introduction to STUNner
STUNner Installation
helm repo add stunner https://l7mp.io/stunner
helm repo update
helm install stunner stunner/stunner-gateway-operator \
--namespace stunner \
--create-namespace
Configuration
apiVersion: stunner.l7mp.io/v1
kind: GatewayConfig
metadata:
name: stunner-gatewayconfig
namespace: stunner
spec:
realm: stunner.l7mp.io
authType: plaintext
userName: "stunneruser"
password: "stunnerpassword"
loadBalancerServiceAnnotations:
service.beta.kubernetes.io/aws-load-balancer-scheme: internet-facing
service.beta.kubernetes.io/aws-load-balancer-type: external
service.beta.kubernetes.io/aws-load-balancer-nlb-target-type: ip
service.beta.kubernetes.io/aws-load-balancer-cross-zone-load-balancing-enabled: "true"
service.beta.kubernetes.io/aws-load-balancer-target-group-attributes: stickiness.enabled=true,stickiness.type=source_ip
service.beta.kubernetes.io/aws-load-balancer-healthcheck-path: /live
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: "8086"
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTP
Explanation of Load Balancer Annotations
service.beta.kubernetes.io/aws-load-balancer-healthcheck-port: “8086”
service.beta.kubernetes.io/aws-load-balancer-healthcheck-protocol: HTTPapiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
name: stunner-gatewayclass
spec:
controllerName: "stunner.l7mp.io/gateway-operator"
parametersRef:
group: "stunner.l7mp.io"
kind: GatewayConfig
name: stunner-gatewayconfig
namespace: stunner
description: "STUNner is a WebRTC ingress gateway for Kubernetes"
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
annotations:
stunner.l7mp.io/enable-mixed-protocol-lb: "true"
name: stunner-gateway
namespace: stunner
spec:
gatewayClassName: stunner-gatewayclass
listeners:
- name: udp-listener
port: 3478
protocol: TURN-UDP
- name: tcp-listener
port: 3480
protocol: TURN-TCP
6. Deploying LiveKit
Introduction to LiveKit
Components to Deploy
**Redis serves as a backend for LiveKit, managing session storage and state. It is a lightweight, in-memory database optimized for low-latency operations, making it an ideal choice for real-time communication platforms.
**The LiveKit server handles signaling, WebRTC sessions, and media routing. It is the core of the LiveKit architecture and interacts with STUNner to manage media transport for clients.
**Authentication in LiveKit requires generating JWT tokens that define user permissions and capabilities for a session. The demo token server is a simple service that generates these tokens based on predefined secrets. It’s a handy utility for testing and demonstration purposes.
**To showcase the capabilities of LiveKit, we’ll deploy LiveKit Meet, a demo application that provides a fully functional video conferencing interface. This app is an excellent starting point for exploring LiveKit’s features or building your own WebRTC application.Deployment Process
kubectl create namespace livekit
kubectl apply -f https://raw.githubusercontent.com/megzo/aws-eks-stunner-livekit/refs/heads/main/manifests/livekit.yaml -n livekit
apiVersion: v1
kind: ConfigMap
metadata:
name: livekit-server
data:
config.yaml: |
keys:
access_token: secret
log_level: debug
port: 7880
redis:
address: redis:6379
rtc:
port_range_start: 50000
port_range_end: 60000
tcp_port: 7801
stun_servers: []
turn_servers:
- credential: stunnerpassword
# use your load balancer hostname
host: k8s-stunner-stunnerg-68f0a27aae-d7d3aa5970ce2e68.elb.eu-central-1.amazonaws.com
port: 3478
protocol: udp
username: stunneruser
- credential: stunnerpassword
# use your load balancer hostname
host: k8s-stunner-stunnerg-68f0a27aae-d7d3aa5970ce2e68.elb.eu-central-1.amazonaws.com
port: 3480
protocol: tcp
username: stunneruser
use_external_ip: false
turn:
enabled: false
apiVersion: stunner.l7mp.io/v1
kind: UDPRoute
metadata:
name: livekit
namespace: stunner
spec:
parentRefs:
- name: stunner-gateway
rules:
- backendRefs:
- name: livekit-server
namespace: livekit
7. Verifying the Complete Setup
Accessing LiveKit Meet
Using the LiveKit Connection Test
(Replace ms.aws.stunner.cc with your own domain.)
(Replace YourName and test-room with your preferred identity and room name.)*8. Conclusion