Scale Jitsi - make video chat more efficient. I run a Jitsi installation - as described previously - that gets used more and more. I had to learn that Jitsi does not use only peer-to-peer communication; part of the traffic also passes through the server. The initial claim that you can happily run it on small-scale hardware is true - but only if you never need more than about 10 people in one room.
- Scaling Up Your Jitsi with Jitsi Bridges: A Complete Guide to Jitsi Bridge Installation, Configuration and Testing. After you set up your own Jitsi installation you may need to scale up your system. The main resource requirement here is bandwidth.
- Riot isn't an alternative to Jitsi so much as a complement to it. Matrix is for text chat, and Riot uses Jitsi for calls; while you can embed a Jitsi client into Riot rooms, and it auto-shares the URL, AFAIK group calls are not E2EE even if the room is. One-on-one calls through Riot are encrypted, but only if made in a one-on-one direct chat.
MeetrixIO is an experienced company in WebRTC-related technologies. We provide commercial support for Jitsi Meet, Kurento, OpenVidu, BigBlueButton, Coturn Server and other WebRTC-related open-source projects.
One of the amazing features of Jitsi Meet is its built-in horizontal scalability. When you want to cater to a large number of concurrent users, you can spin up multiple videobridges to handle the load. If we set up multiple videobridges and connect them to the same shard, Jicofo, the conference manager, selects the least-loaded videobridge for each new conference.
Run Prosody on all interfaces
By default Prosody listens only on the local interface (127.0.0.1). To allow XMPP connections from external servers, we have to run Prosody on all interfaces. To do that, add the following line at the beginning of the Prosody configuration file.
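As a sketch - the file path and option name below are assumptions based on a standard Prosody setup, so check them against your installation:

```
-- Hypothetical example: at the top of the Prosody config
-- (commonly /etc/prosody/prosody.cfg.lua), bind components to all interfaces:
component_interface = "0.0.0.0"
```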
Allow inbound traffic for port 5222
Allow inbound traffic from the JVB server to port 5222 (TCP) on the Prosody server. But DO NOT open this port publicly.
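For example, with ufw on the Prosody server - the JVB address 10.0.0.12 is a placeholder for your bridge's private IP - the rules could look like:

```
# Hypothetical firewall rules; 10.0.0.12 stands in for the JVB server's
# private address. The allow rule is added before the general deny.
ufw allow from 10.0.0.12 to any port 5222 proto tcp
ufw deny 5222/tcp
```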
Install JVB on a separate server
Install a Jitsi Video Bridge on a different server.
Copy JVB configurations from the Jitsi Meet server
Replace /etc/jitsi/videobridge/sip-communicator.properties on the JVB server with the same file from the original Jitsi Meet server.
Update the JVB config file
In /etc/jitsi/videobridge/config, set XMPP_HOST to the IP address/domain of the Prosody server.
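A sketch of that edit - the hostname is a placeholder, and depending on your Jitsi version the file may name this variable differently (e.g. JVB_HOST):

```
# /etc/jitsi/videobridge/config (hypothetical value)
XMPP_HOST=prosody.example.com   # or the Prosody server's private IP
```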
In the /etc/jitsi/videobridge/sip-communicator.properties file, update the following properties:
<XMPP_HOST>: The IP address of the Prosody server. Prefer the private IP address if it is reachable from the JVB server.
<JVB_NICKNAME>: A unique string used by Jicofo to identify each JVB.
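In JVB versions that join Jicofo's brewery MUC, these placeholders map onto the shard properties. A hypothetical example - domain, password and nickname are all made up, and the exact property names may differ with your JVB version:

```
# /etc/jitsi/videobridge/sip-communicator.properties (illustrative values)
org.jitsi.videobridge.xmpp.user.shard.HOSTNAME=10.0.0.5
org.jitsi.videobridge.xmpp.user.shard.DOMAIN=auth.meet.example.com
org.jitsi.videobridge.xmpp.user.shard.USERNAME=jvb
org.jitsi.videobridge.xmpp.user.shard.PASSWORD=changeme
org.jitsi.videobridge.xmpp.user.shard.MUC_JIDS=JvbBrewery@internal.auth.meet.example.com
org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=jvb-2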
You can add the following line at the beginning of /usr/share/jitsi/jvb/jvb.sh to generate a unique nickname for the JVB at each startup. This is useful if you are using an auto-scaling mechanism.
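A minimal sketch of such a snippet, assuming the JVB uses a MUC_NICKNAME property in sip-communicator.properties (an assumption about your JVB version); for safe experimentation it falls back to a temporary file when PROPS is unset:

```shell
# Hypothetical snippet for the top of jvb.sh: rewrite MUC_NICKNAME with a
# fresh UUID so each bridge announces a unique identity to Jicofo.
# PROPS would normally be /etc/jitsi/videobridge/sip-communicator.properties;
# here it falls back to a temp file so the snippet can be tried safely.
PROPS="${PROPS:-$(mktemp)}"
grep -q '^org\.jitsi\.videobridge\.xmpp\.user\.shard\.MUC_NICKNAME=' "$PROPS" \
  || echo 'org.jitsi.videobridge.xmpp.user.shard.MUC_NICKNAME=placeholder' >> "$PROPS"
NICK="jvb-$(cat /proc/sys/kernel/random/uuid)"
sed -i "s/^\(org\.jitsi\.videobridge\.xmpp\.user\.shard\.MUC_NICKNAME=\).*/\1${NICK}/" "$PROPS"
```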
Looking for commercial support for Jitsi Meet? Please contact us via [email protected]
In the last 48 hours, several Scaleway teams were on the front line to launch, in less than a day, a complete videoconferencing solution: Ensemble.
Free, open-source and sovereign, Jitsi VideoConferencing powered by Scaleway will be available for the duration of the Covid-19 crisis!
You will be able to use this solution to keep in touch with your family and friends, maintain your business, interact with your customers, meet your patients or prepare your exams with other students.
How it All Started?
Monday 8am we decided to join the collective effort against #COVID19. By deploying Jitsi Meet, an open-source video conferencing solution providing secured virtual rooms with high video and audio quality, on more than one hundred Scaleway instances, we aimed to facilitate remote communication for all amid the COVID-19 pandemic.
How Does it Work?
In a nutshell, ensemble.scaleway.com provides Jitsi servers. With so many people in need of a scalable videoconferencing solution at the moment, it was our responsibility to offer an alternative able to handle a significant load of videobridge requests.
As shown in the architecture diagram below, all Jitsi instances are constantly monitored to keep track of their capacity. This allows us to ensure that each user is provided with the least-used instance to create a virtual room and start a call.
The stateless API is composed of a front website in React and an API that queries Prometheus (every 30 seconds) to get a list of all available Jitsi servers and their current CPU usage.
The web application then selects the Jitsi server that has the most CPU available and returns the URL to the user. With that URL, a user can easily connect to the Jitsi server and start enjoying the call with an optimal sound and video quality.
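The selection step itself is simple. A sketch with made-up server names and idle-CPU ratios - in production these numbers would come from the Prometheus query:

```shell
# Pick the server with the most idle CPU. The table below is illustrative
# sample data standing in for a Prometheus/node_exporter query result.
BEST=$(printf '%s\n' \
  'jitsi-1 0.35' \
  'jitsi-2 0.82' \
  'jitsi-3 0.51' \
  | sort -k2 -nr | head -n1 | awk '{print $1}')
echo "$BEST"
```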
All Jitsi servers are deployed on Scaleway Elements Instances which can hold a large number of concurrent video bridges.
Now that we explained the general architecture and the typical user workflow of this application, let's see how it is deployed using infrastructure as code technologies.
Scaleway Terraform Module
Terraform is an infrastructure tool that manages cloud resources in a declarative paradigm. We decided to use the Scaleway Terraform Provider to manage all our infrastructure from a single versioned place. All changes applied to our infrastructure are tracked in a git repository.
To ensure consistency across concurrent Terraform execution, the terraform state is persisted in a Scaleway Database PostgreSQL managed instance. For that, we used the pg backend in Terraform.
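The backend declaration itself is small; a hypothetical sketch, with a placeholder connection string:

```
terraform {
  backend "pg" {
    conn_str = "postgres://user:password@db.example.scw.cloud:5432/terraform?sslmode=require"
  }
}
```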
We created all the required instances to make this application run:
- The most important are the Jitsi servers. At the moment, we have created more than 100 of those (DEV1-L type). These instances run the Jitsi videoconferencing solution.
- The instance running Prometheus. Prometheus scrapes the state of each Jitsi instance and, in particular, the CPU usage of each Jitsi server.
- The instances running the APIs. The API instances query Prometheus to determine the CPU usage of all Jitsi servers and return it to the web application.
These constitute the infrastructure of ensemble.scaleway.com. Now we are going to complete this Terraform module by enabling those instances to serve our application.
Scaleway Jitsi Image
When creating an instance, you have to select or create an image. In each cloud deployment, instances are booted with a specific cloud image that is designed to meet the specific requirements of the instance.
First, we created a base image called base, which was the starting point for all the others. On this base image, we installed the requirements to run containers with Docker,
docker-compose and a
node_exporter that is used by our Prometheus monitoring system to know, among other information, the CPU usage of the machine.
From the base image, we then created a Jitsi image using the official docker compose distribution:
docker-jitsi-meet. We also added an
Nginx Prometheus exporter on
docker-jitsi-meet docker-compose for monitoring purposes.
When a Jitsi instance boots with this image, docker-compose starts and the Jitsi server, running as a container, automatically starts working as well.
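One common way to wire that up is a systemd unit baked into the image. This sketch assumes the compose file lives in /opt/docker-jitsi-meet; both the path and the unit itself are illustrative:

```
[Unit]
Description=Jitsi Meet stack (docker-compose)
Requires=docker.service
After=docker.service

[Service]
WorkingDirectory=/opt/docker-jitsi-meet
ExecStart=/usr/local/bin/docker-compose up
ExecStop=/usr/local/bin/docker-compose down
Restart=always

[Install]
WantedBy=multi-user.target
```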
Note that the base and Jitsi images are created with Ansible playbooks, which makes it easy to recreate images when needed.
Finally, we created a front container image which bundles the web application code (React) and the API code (Node.js). docker-compose pulls this image from a private Scaleway registry and runs it as a container.
The base image provides the system-wide requirements such as the operating system and basic components like Docker and node_exporter. However, we needed to be able to deploy new versions of our applications without rebooting an instance with a new image. As a result, we decoupled the base image from the containerized application.
The API and the React website which are bundled in the same container image are hosted on a Scaleway private registry. Once stored on the registry, images can be pulled in the instance by the docker daemon controlled by
docker-compose to run the application. That comes in very handy when we need to deploy a new version of our API after a bug fix or a feature enhancement as we only need to push the new container image to the registry and tell
docker-compose to use the new version.
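Conceptually the rollout is just a tag bump followed by a pull. A sketch with an illustrative compose file - the registry path and tags are made up:

```shell
# Simulate a deploy: point the compose file at the new image tag.
# In production this would be followed by `docker-compose pull front`
# and `docker-compose up -d front` on each API instance.
COMPOSE="${COMPOSE:-$(mktemp)}"
cat > "$COMPOSE" <<'EOF'
services:
  front:
    image: rg.fr-par.scw.cloud/ensemble/front:v1
EOF
sed -i 's|\(ensemble/front:\)v1|\1v2|' "$COMPOSE"
grep 'image:' "$COMPOSE"
```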
Scaling Its Infrastructure
Now that our applications are deployed, let's see how we can make our API server reliable using a Load Balancer.
Scaleway Load Balancer
Load Balancers are highly available and fully-managed instances that allow you to distribute the workload among your various services. They ensure the scaling of all your applications while securing their continuous availability, even in the event of heavy traffic. Load Balancers are built to use an Internet-facing front-end server to shuttle information to and from backend servers. They provide a dedicated public IP address and forward requests automatically to one of the backend servers based on resource availability.
In the context of Jitsi, we used our Load Balancer to automatically forward requests to our API servers based on resource availability. Our API servers are the ones providing information about the current load of each Jitsi sever to ensure that the user is provided with the most available instance.
The Load Balancer also allows us to add more API instances if the existing ones are too busy to handle the load. In addition, it can evict a faulty API server that can no longer answer requests, for whatever reason. We even added an extra reliability guarantee on our API instances by using Scaleway Placement Groups.
Scaleway Placement Groups
Placement Groups allow you to organize instances into groups, distributing the load, and ensuring maximum availability. Placement groups have two operating modes. The first one is called
max_availability. It ensures that all the compute instances that belong to the same group will not run on the same underlying hardware. The second one is called
low_latency and does the exact opposite: it brings compute instances closer together to achieve higher network throughput. In the context of our application, we want to ensure that the API servers and Jitsi servers are as available as possible.
We enable the
max_availability mode across our two API servers so they are not on the same underlying hypervisor. Now that we have a reliable deployment of our API server, let's see how to secure the connection to the different ports available on those machines using Scaleway Security Groups.
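In Terraform terms, following the Scaleway provider's schema - the resource names and instance type here are illustrative:

```
resource "scaleway_instance_placement_group" "api" {
  name        = "api-servers"
  policy_type = "max_availability"
}

resource "scaleway_instance_server" "api" {
  count              = 2
  name               = "api-${count.index}"
  type               = "DEV1-L"
  image              = "ubuntu_focal"
  placement_group_id = scaleway_instance_placement_group.api.id
}
```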
Scaleway Security Groups
Security groups enable you to create rules that either drop or allow incoming traffic from or to certain ports of your server. Typically, a security group establishes a barrier between a trusted internal network and an untrusted external network, such as the Internet. The security group configuration is based on a set of inbound and outbound rules. We applied security groups to all the components of our architecture.
On the API instances, we only allowed HTTP/HTTPS connections and SSH remote access.
On the Jitsi instances, only SSH and ports that are required for Jitsi to work are allowed. All the others are blocked.
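A sketch of the Jitsi-instance rules using the Scaleway provider - the port list follows Jitsi Meet's standard requirements (HTTP/HTTPS for the web interface, UDP 10000 for media), while the resource itself is illustrative:

```
resource "scaleway_instance_security_group" "jitsi" {
  name                    = "jitsi"
  inbound_default_policy  = "drop"
  outbound_default_policy = "accept"

  inbound_rule {
    action   = "accept"
    port     = 22
    protocol = "TCP" # SSH
  }
  inbound_rule {
    action   = "accept"
    port     = 80
    protocol = "TCP" # HTTP
  }
  inbound_rule {
    action   = "accept"
    port     = 443
    protocol = "TCP" # HTTPS
  }
  inbound_rule {
    action   = "accept"
    port     = 10000
    protocol = "UDP" # WebRTC media
  }
}
```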
Let's complete this deployment by adding human understandable domain names for each component of this application.
Finally, we wanted to manage the DNS record for each of the instances (API, Jitsi servers and Prometheus). DNS records make a domain name such as
h-5660.ensemble.scaleway.com resolve to the correct Jitsi instance across all users.
We use Scaleway Domains to manage the DNS records for the whole ensemble.scaleway.com solution. When provisioning a new Jitsi server in Terraform, we automatically generate a DNS record for this Jitsi instance.
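As a sketch - the record name reuses the h-5660 example above, and the resource reference is made up:

```
resource "scaleway_domain_record" "jitsi" {
  dns_zone = "ensemble.scaleway.com"
  name     = "h-5660"
  type     = "A"
  data     = scaleway_instance_server.jitsi.public_ip
  ttl      = 300
}
```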
We generated a wildcard certificate for all subdomains of ensemble.scaleway.com. Each Jitsi server gets its certificate that is used by their Nginx server to handle HTTPS connections.
While Scaleway Domains is currently in early access, you can already register for it.
It's Working like a Charm
The ensemble.scaleway.com solution was built and deployed in a flash thanks to our joint efforts, and the first reactions are already very positive. The solution is widely used and the number of rooms created keeps increasing. In the last 8 hours, we monitored more than 800 active Jitsi connections, 1700 Jitsi rooms created and more than 6000 page views of ensemble.scaleway.com from all over the world.
Scaling Its Business
We are still working actively on this project to provide support to as many people as possible in this challenging period. In particular, we will work to make this project and the code used to create this infrastructure available to all as soon as possible.
In the meantime, if you would like to set up your own Jitsi server, feel free to check our tutorials on how to install Jitsi on your server, whether you are using Debian 10 or Ubuntu 18.04.
If you want to know more about infrastructure as code at Scaleway, we invite you to read the following articles:
Infrastructure as Code @Scaleway (1/3) - Overview
Infrastructure as Code @Scaleway (2/3) - Internal usage
Infrastructure as Code @Scaleway (3/3) - Supported tools