WebRTC & mobile battery consumption
Chris Koehncke
We’ve all been there, sitting on a dirty airport concourse floor, praying to a power outlet as it imparts its charge wisdom to our Oracle (otherwise known as our mobile phone). To any road warrior, power is indeed a god.
I wanted to understand the implications of WebRTC applications on battery life. Unfortunately, finding a mobile developer in San Francisco is tough; finding one with knowledge of communications, difficult; and finding one who would talk to me, nearly impossible. Somehow I managed to find a few willing to speak.
Before I discuss mobile battery drain and WebRTC, I need to tell you about likely the largest flaw with mobile Internet Communications.
Here’s the problem (if you’ve not yet experienced it): if you’re talking on virtually any Internet comms program (e.g. Skype, Viber) and someone happens to call your mobile phone (the real one), here’s what happens.
On iOS, the phone side of your mobile takes priority and your IP application is suddenly moved to a background task; the phone starts ringing in your ear, your IP party is sent into silence, and you’re left bewildered as to what to do next.
On Android, the experience is similar: you will suddenly hear the phone ringing in your ear, but this time over the top of your conversation, with no clear indication of what your options are (often you end up hanging up on everybody).
The short of it is that both iOS and Android give the phone “side” of your mobile device priority.
Apple has imposed militant rules for what an application can do to pass muster for iTunes. This is actually a good policy. One of the rules is that an iOS application in the background can only use the radio every 10 minutes. This is clearly a power-saving rule.
But waking your application while it’s in the background (say you want to reach out and “call” your application) requires some sleight of hand. Most applications resort to some trickery of sending a signal to the application (in the hope it’s in the foreground or otherwise active) and using a push notification to wake the application up (if the application is in deep background). Push notifications on iOS allow for a custom sound (which is often a phone-ringing type sound, though unique for each application).
Push notifications are one-way, hence you don’t know if the device got the message. Thus you have more trickery: your server hopes you got the message, the app can notify the user, the user can bring the application to the foreground, and the server can connect to the app. All of this while the other party is waiting for the call to connect (time becomes precious).
Since push notifications are one-way, you have another problem if you’ve loaded the application on multiple devices (phone & tablet, for example): it’s hard to tell a device that didn’t answer the call to stop ringing. If the app sends a push notification, all of your devices will start to “ring”. Answer on one, and the others have no idea and keep ringing. This requires your application server to recognize this and do some trickery to shut down the ringing on the other devices (note that in my own testing, many applications haven’t found a way to implement this).
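To make the bookkeeping concrete, here’s a minimal sketch of the server-side logic described above: fan a “ring” push out to every registered device, and when one device answers, push a “stop” to the rest. All names here are hypothetical, and the push service is stubbed out with a callback; a real implementation would sit on APNs/FCM and also need timeouts, since (as noted) those pushes are one-way and may never arrive.

```python
# Hypothetical sketch of multi-device ring fan-out and cancellation.
# push_send stands in for a real push service (APNs/FCM) so the sketch
# stays self-contained and testable.

class CallServer:
    def __init__(self, push_send):
        self.push_send = push_send
        self.ringing = {}  # call_id -> set of device_ids still ringing

    def start_call(self, call_id, device_ids):
        """Fan out a one-way 'ring' push to every device the user owns."""
        self.ringing[call_id] = set(device_ids)
        for dev in device_ids:
            self.push_send(dev, {"call": call_id, "action": "ring"})

    def answer(self, call_id, device_id):
        """First device to answer wins; tell the others to stop ringing."""
        others = self.ringing.pop(call_id, set()) - {device_id}
        for dev in others:
            # The 'stop' push is just as one-way and unreliable as the
            # original ring -- which is exactly the problem the post raises.
            self.push_send(dev, {"call": call_id, "action": "stop"})
        return device_id
```

Answering on the phone, for example, leaves exactly one “stop” push queued for the tablet; if that push is delayed or dropped, the tablet keeps ringing, which matches the behavior I saw in testing.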
I haven’t even gotten to battery power yet. But recognize all of this trickery (above) requires power.
As a developer, you could choose to save power and hope push notifications work, or take a dual approach that consumes more power. It’s a balance.
Apple knows that consumers will blame the phone and not “an” app for battery drain, so with iOS 8, Apple intelligently starts to deal with push notifications and background activity. Basically, if you haven’t been using your application for a while (meaning you didn’t run it explicitly), Apple will push the app’s priority down the queue. Hence, if you look at the Wall Street Journal app daily, iOS will ensure that notifications to that app appear with priority. That game app you played once and is now hidden deep in a folder may not get its notifications without delays. This makes total sense, but it’s hellish if you’re using a push notification to wake up your communications application AND the user hasn’t used it in a while.
Your phone’s power is optimized for voice communications. The telco voice codecs are moderate complexity and low bandwidth, and with the onboard hardware encoders you get a nice balance of power and bandwidth usage. This is why you can talk for hours on your mobile phone and barely use any power. The “phone” side of your mobile device can access specialized onboard hardware to encode voice; an application, though, can’t access this hardware. The phone lives in a protected area.
Thus a WebRTC or packet communication session introduces some design challenges. Since your app can’t use the specialized on-board hardware encoder/decoder, you have to implement a software encoder/decoder in your app (which by default isn’t as efficient, hence a little more battery). Second, the codecs (Opus or G.711/G.722) either use a ton of bandwidth (requiring more radio power) or are computationally intensive (requiring more of the generic CPU’s power).
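A quick back-of-envelope calculation shows the bandwidth side of that trade-off. Assuming 20 ms packets and roughly 40 bytes of IP/UDP/RTP headers per packet (no header compression), G.711’s 64 kbps payload lands around 80 kbps on the wire, while Opus at a typical ~20 kbps voice bitrate lands around 36 kbps; Opus pays for that savings in CPU cycles.

```python
# Back-of-envelope wire bitrate: codec payload plus per-packet header
# overhead. Assumptions: 20 ms packetization, 40 bytes of IPv4/UDP/RTP
# headers per packet, no header compression.

def wire_bitrate_kbps(codec_kbps, ptime_ms=20, header_bytes=40):
    """Codec payload bitrate plus header overhead, in kbps."""
    packets_per_sec = 1000 / ptime_ms                  # 50 packets/s at 20 ms
    header_kbps = packets_per_sec * header_bytes * 8 / 1000
    return codec_kbps + header_kbps

print(wire_bitrate_kbps(64))   # G.711: 64 kbps payload -> 80.0 kbps on the wire
print(wire_bitrate_kbps(20))   # Opus voice (~20 kbps) -> 36.0 kbps on the wire
```

Note how the fixed header overhead hits the low-bitrate codec proportionally harder: it nearly doubles Opus’s radio cost while adding only a quarter to G.711’s.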
What I have found interesting, though, is that mobile developers mostly aren’t too worried about battery life.
Apple though is worried about battery life. With the Apple development environment, the developer will get warning messages if they implement functionality that will impact battery drain. But these are only warnings that can be ignored. However, at least Apple is trying.
Android is the wild wild west: as long as the user agrees to the permissions (that long list of things Android asks if you’re “OK” with), an application has pretty much free rein to do whatever it wants. As a result, you’re more likely to see a misbehaving app sucking your power away on Android than on iOS.
The mobile phone ringing and overruling your application will require both Android and iOS to become more communications-aware. I’m sure we’ll get there, but your device is actually two devices for the moment, and they’re mostly not aware of each other.
The encoding/decoding of media can be done smartly, in which case battery impact is minimal, recognizing that video does tend to burn up the CPU.
Generating an outbound media connection is pretty straightforward to develop. Unfortunately, crafting an application that needs to “wake up” needs careful architecting and likely a lot of trial and error to get right. Viber seems to have nailed it, while even Skype continues to struggle.
Mobile developers usually aren’t thinking about battery management during development (at least not in the first version).
Any bi-directional communications app can misbehave (a recent Facebook app was caught draining people’s battery) so this isn’t limited solely to WebRTC.