Monday, November 29, 2021

Google AI is improving Duo’s call quality on poor internet

(Pocket-lint) – Google’s AI division has been working to improve Google Duo call quality and reliability on low bandwidth connections.

According to a post on the Google AI blog, the team has developed a new audio codec, called Lyra, with the goal of compressing speech to a lower bitrate. At 3kbps, Lyra uses far less data than most other codecs. Google AI noted that codecs capable of operating at bitrates comparable to Lyra's suffer from "increased artifacts and result in a robotic-sounding voice".

Currently, the open-source Opus codec, the most widely used codec for VoIP applications, achieves transparent speech quality at 32kbps. Although Opus can be used in more bandwidth-constrained environments, even down to 6kbps, its audio quality starts to degrade noticeably at those rates, Google AI said. That makes Opus less preferable than Lyra at 3kbps.
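To put those bitrates in perspective, here is a back-of-the-envelope sketch of the data each codec would consume per minute of speech, assuming "kbps" means kilobits per second (1 kb = 1,000 bits), as is standard for audio codecs. The function name is illustrative, not from any codec API.

```python
def bytes_per_minute(kbps: float) -> float:
    """Convert a bitrate in kilobits per second to bytes per minute."""
    # kilobits/s -> bits/s (x1000) -> bits/min (x60) -> bytes/min (/8)
    return kbps * 1000 * 60 / 8

lyra = bytes_per_minute(3)    # Lyra at 3kbps
opus = bytes_per_minute(32)   # Opus at its "transparent quality" rate

print(f"Lyra: {lyra / 1000:.1f} KB/min")   # Lyra: 22.5 KB/min
print(f"Opus: {opus / 1000:.1f} KB/min")   # Opus: 240.0 KB/min
print(f"Opus uses about {opus / lyra:.1f}x more data per minute")
```

In other words, Lyra at 3kbps needs roughly a tenth of the data of transparent-quality Opus, which is why it matters on very slow or unreliable connections.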

To hear how Lyra and Opus compare inside Google Duo, check out Google AI's blog post, which includes audio samples of both codecs.

Google AI said it used a combination of existing codec technology and “advances in machine learning with models trained on thousands of hours of data” – including speakers in over 70 languages from open source libraries – when working on Lyra. The team plans to continue developing the codec and is hoping it’ll be embraced by developers and apps beyond Google Duo.

In the meantime, Google AI plans to roll out Lyra in Google Duo to improve audio calls on very low bandwidth connections. There's no word on when the new Lyra codec will be widely available to Duo users, though it'll likely arrive as a quiet background update.


If you're a Google Duo user on Android or iOS with a poor or unreliable internet connection, you may soon notice a drastic improvement to your Duo calls, in terms of both quality and stability. It's just unclear when that will begin to happen.

Writing by Maggie Tillman.
