TrueConf tested H.264 hardware implementations and Android mobile devices came up short.
One of the recent discussions in our industry concerns which of the two modern video codecs – H.264 or VP8 – should be adopted as mandatory for the new WebRTC (Web Real Time Communications) standard. Even with the recent decision from the IETF’s RTCWeb group, interoperability woes are far from resolved.
Two years ago, video conferencing provider TrueConf put its stakes in VP8, and we have been monitoring this debate ever since.
One of the main pro-H.264 arguments is the widespread hardware implementation of H.264, which is supposed to benefit mobile devices. On a powerful desktop computer, it doesn’t matter much whether you encode in hardware or software. On a mobile device, however, computationally expensive operations like video encoding consume a lot of energy and severely compromise battery life.
After Google improved their Android hardware API, we were eager to try and test the hardware implementations of H.264 on every mobile device for which it is available. We acquired some of the newest Android devices and got our compilers ready.
So what did we end up with? Let’s take a look:
Some implementations look like no one ever tested them. Really: when you initialize the codec, you get a built-in message like “Oh my god, someone finally launched this!” Others simply don’t work. Some start up but stop after a second, performing some action yet never returning the output requested. Software developers producing work of comparable quality would hardly survive two weeks at any job if this were the sum total of their output.
Most of the devices we tested mixed proprietary junk in with the standard-specified bitstream, producing output which could hardly be decoded by any other vendor’s decoder.
Once we dealt with the errors, the next problem emerged:
Video encoder speed is not the same as the delay the encoder introduces. If the encoder outputs one frame in 10 ms, that does not mean the first frame you feed in comes back 10 ms later. A software encoder typically returns the first frame with a delay of one frame, but here we saw delays of six frames. Frames arrive from the camera at a fixed rate, so when you are capturing at 20 fps a new frame arrives every 1/20 of a second (50 ms). If the encoder has a pipeline delay of six frames, you will receive your first frame no earlier than 300 ms in!
If you are using your camera for video recording (which also uses the H.264 hardware encoder) it’s no big deal if the resulting file appears in your phone memory a little bit later than when it was actually filmed. But in the case of real-time communication, you will add 300ms to your total delay, including any delay from your network — and that is very noticeable.
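The arithmetic above can be sketched as a small calculation (a hypothetical helper for illustration, not TrueConf’s code):

```python
def first_frame_latency_ms(fps: float, pipeline_depth_frames: int) -> float:
    """Latency added before the encoder emits its first frame.

    Frames arrive from the camera every 1/fps seconds, so an encoder
    that buffers `pipeline_depth_frames` frames internally cannot emit
    output until that many capture intervals have elapsed.
    """
    frame_interval_ms = 1000.0 / fps
    return pipeline_depth_frames * frame_interval_ms

# A typical software encoder with a one-frame pipeline at 20 fps:
print(first_frame_latency_ms(20, 1))  # 50.0 ms
# The six-frame hardware pipelines we observed:
print(first_frame_latency_ms(20, 6))  # 300.0 ms
```

Note that this latency is structural: it is paid once per stream at startup and, more importantly, it shifts every subsequent frame by the same amount, which is why it adds directly to the end-to-end call delay.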
Now, we have come to the main issue: video quality. Even if you are implementing the same profile of the same standard video encoder, you have a spectrum of quality options. You can make it faster with lower quality, or slower with better quality. Normally you choose something in between.
In hardware terms, speed is not the issue: you generally choose between a lower-quality implementation with fewer logic gates and a higher-quality implementation with more gates. More logic gates mean more precious chip space and therefore more money spent, so it’s no wonder that chip manufacturers often try to implement simpler versions … but what we found in mobile devices is that all the manufacturers selected the lowest quality possible. Their H.264 quality is easily comparable with H.263, and maybe even with the H.261 video codec released 20 years ago! It is only about half as efficient as a normal software encoder, meaning that you need twice the bandwidth to get the same quality as other comparable codecs.
In real-time communication over the Internet, you only have limited channel bandwidth. If you can’t increase the channel bandwidth, you will have to live with suboptimal quality.
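To put the “twice the bandwidth” point in numbers, here is a rough sketch. The bitrate figures are illustrative, assuming a hardware encoder with roughly half the compression efficiency of a good software encoder:

```python
def required_bitrate_kbps(reference_bitrate_kbps: float, efficiency: float) -> float:
    """Bitrate needed to match a reference encoder's quality.

    `efficiency` is the encoder's compression efficiency relative to
    the reference: 1.0 means equal, 0.5 means it needs twice the bits
    to reach the same visual quality.
    """
    return reference_bitrate_kbps / efficiency

# If a good software H.264 encoder delivers acceptable quality at
# 512 kbps, a hardware encoder with half the efficiency needs:
print(required_bitrate_kbps(512, 0.5))  # 1024.0 kbps
```

On a fixed channel the relationship inverts: you cannot double the bitrate, so you halve the effective quality instead.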
We were considering using hardware-based H.264 on every device where it is available, perhaps blacklisting the few devices with major bugs. After testing, we thought we might invert the strategy: turn hardware H.264 off by default and enable it only on specific devices where quality does not raise serious concerns. But guess how many devices made it onto that whitelist? Almost none!
So, in reality, we believe there is no such thing as a widespread, good-quality hardware H.264 encoder implementation at this stage. In my opinion, that alone makes the adoption of H.264 into WebRTC unwarranted.
You should not blame your telco for battery drain during video calls and for poor video call quality. Blame the phone manufacturer.
If you do record video with your phone, consider re-encoding it with one of the good H.264 software encoders, like x264. You can save roughly half the storage space without losing any visible quality.
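As a sketch, such a re-encode could be driven through ffmpeg’s libx264 backend. The file names and CRF value here are illustrative; CRF around 23 with a slow preset is a common quality/size tradeoff:

```python
def x264_reencode_command(src: str, dst: str, crf: int = 23) -> list:
    """Assemble an ffmpeg command line that re-encodes `src` with libx264.

    CRF mode targets constant perceived quality; the audio track is
    copied untouched so only the video is re-encoded.
    """
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx264",   # software H.264 encoder (x264)
        "-preset", "slow",   # slower preset -> better compression
        "-crf", str(crf),    # constant-quality mode, lower = better
        "-c:a", "copy",      # leave the audio track as-is
        dst,
    ]

# Run it with, e.g.:
# subprocess.run(x264_reencode_command("in.mp4", "out.mp4"), check=True)
```

Building the command as a list (rather than one shell string) avoids quoting problems with file names containing spaces.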
H.265 (HEVC) hardware in phones is pointless for now. Before jumping to a new codec, manufacturers could gain better quality simply by using a reasonable implementation of what they already ship: a good H.264 implementation alone would deliver a significant quality boost, and IP cores with good implementations have been available on the market for years.