Automated testing of WebRTC applications

As you probably know, we run Talky, a free videochat service powered by WebRTC. Since WebRTC is still evolving quickly, we add new features to Talky roughly every two weeks. So far, each deploy has required manual testing in Chrome, Opera, and Firefox to verify that the changes work. Because the goal of any deploy is to avoid breaking the system, every change also runs through a post-commit set of unit tests, as well as an integration test driven by a browser test-runner script, as outlined in this post.

All that manual testing is pretty old-fashioned, though. Since WebRTC is supposed to be for the web, we decided it was time to apply modern web testing methods to the problem.

The trigger was reading two recent blog posts by Patrik Höglund of the Google WebRTC team, describing how they do automated interop testing between Chrome and Firefox. This motivated me to spend some time on the post-deploy testing process we use for Talky. The result is now available on GitHub.

Let's review how Talky works and what we need to test. Basically, we need to verify that two browsers can connect to our signaling service and establish a direct connection. The test consists of three simple steps:

* determine a room name to test against by generating a random number to use in the room URL
* start two browsers
* verify that the peer-to-peer connection is up and that video is running
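The first step can be sketched in a few lines of Node; the URL format shown here is just an assumption for illustration, not necessarily what our script uses:

```javascript
// Generate a throwaway room URL for this test run.
// The talky.io URL scheme here is an assumption for illustration.
function makeTestRoomUrl() {
  const suffix = Math.floor(Math.random() * 1e9);
  return 'https://talky.io/testroom-' + suffix;
}

console.log(makeTestRoomUrl());
```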

If the process fails in the staging area, our ops team will not deploy the new version to the main Talky site.

Although step one is easy, starting the two browsers is more complicated. When a user goes directly to a videochat room, we show a "check-your-hair" screen which requires a user action to join. It's already possible to skip this via a localStorage setting. This means we need to start both browsers with a clean profile and pre-seed the localStorage database with the appropriate settings.

To get away from all that manual testing, we want to run these tests on servers and machines that don't have any webcams or microphones attached. Fortunately, this is pretty easy to achieve because the browser vendors provide special ways to simulate webcams and microphones for testing purposes. In Chrome, this is done by adding --use-fake-device-for-media-stream as a command line argument when starting the browser. In Firefox, a special fake:true variable needs to be set in the getUserMedia calls (as explained here).

Since we don't want user interaction, we also need to do something similar to skip the security prompt. In Chrome, that is achieved with the --use-fake-ui-for-media-stream flag; in Firefox, this is done by setting the preference media.navigator.permission.disabled to true.

Next, we need to actually start the browsers in a way that doesn't require any visible windows and works on headless servers as well. Fortunately, there is a Linux tool for this called Xvfb. It is even used by a sample script that is part of Google's WebRTC code, available on GitHub. After starting the two browsers, we need to wait for them to become connected. This is relatively easy to determine by listening for the iceconnectionstatechange events of the WebRTC PeerConnection API. Check the SimpleWebRTC demo page for a basic example.
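In the page, that connectivity check might look roughly like this; `pc` is assumed to be the RTCPeerConnection the application created:

```javascript
// Call onConnected once the ICE connection reaches a connected state.
function watchConnection(pc, onConnected) {
  pc.addEventListener('iceconnectionstatechange', () => {
    if (pc.iceConnectionState === 'connected' ||
        pc.iceConnectionState === 'completed') {
      onConnected();
    }
  });
}
```

In the browser this would be wired up as something like `watchConnection(pc, () => console.log('P2P connected'))`.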

We wait for this event to happen and then write something to the logs. In Chrome this is relatively easy, since normal console.log calls are written to the log file on disk. In Firefox this turned out to be slightly more complicated: we need to set the preference browser.dom.window.dump.enabled to true and then use a window.dump call to write to standard output. For Talky, we log the string P2P connected. Other applications, such as Jitsi Meet, can be tested the same way. Our shell script then searches the logs for that string and, if found, waits another five seconds before declaring the test a success and exiting.

Sounds pretty simple, eh? It's just a matter of putting a bunch of pieces together, building on the work Patrik Höglund has done and pushing it slightly further. The test saves us lots of time on each deploy and allows us to deploy changes to our new service without headaches. We can even run it continuously to check whether our service is up. We're also integrating this technique into the software development process for all the Otalk WebRTC modules.

Want to get started with WebRTC? Check out our WebRTC consulting services.

Comment directly to Philipp Hancke @HCornflower.

Enjoy this post? We'd love to invite you to join our mailing list, &you, where we connect with our community and share the latest we're learning.
