
Android Studio to get Gemini 1.5 Pro upgrade, code suggestions and more



Google’s Android development environment is getting a Gemini makeover of sorts. The company is announcing that Gemini 1.5 Pro, which offers a longer context window and multimodal inputs, will arrive in Android Studio later this year. Moreover, developers can now use Gemini to generate code suggestions, analyze crash reports and get recommendations on the next steps to rectify development issues.

“Android is uniquely positioned to bring all of Google’s AI innovations to the broader app ecosystem,” Matthew McCullough, Google’s vice president of product management for Android developer tools and experience, says during a press conference ahead of the company’s Google I/O developer conference. “That’s why we continue to invest in tools and APIs that are easy to use and meet developers where they are and where we can have the most impact.”

“One way we do this is by offering developers multiple ways to leverage Gemini models in their Android apps,” he continues. “Another part of our commitment is to use AI to make hard tasks easier for developers. Since launching AI features in Android Studio last year, we continue to refine our underlying models, integrate developers’ feedback and expand availability to more countries. And our goal remains to help developers leverage AI in their workflow and be more productive.”

From Gemini 1.0 to Gemini 1.5 Pro

Google announced weeks ago that Android Studio would be powered by Gemini 1.0 Pro. The model was available as a preview for developers to use for free. However, at some point this year, the company plans to evolve its AI offering, swapping out Gemini 1.0 Pro for its top-of-the-line model, Gemini 1.5 Pro. With a larger context window (1 million tokens vs. 32,000), it can provide better-quality responses than before.


Offering developers better AI solutions is vital for Google, especially on mobile. It’ll help the company stay ahead of, or at least remain competitive with, Apple, which is rumored to be revamping Siri with OpenAI’s ChatGPT. And let’s not forget about the AI wearable market, with the Ray-Ban Meta Smart Glasses growing in popularity and the rise of devices such as the Humane AI Pin and Rabbit r1. Though the latter category may not be faring as well, it highlights an emerging trend of mobile AI use cases beyond the phone. Google can’t afford to ignore those who are building on Android.

Google Assistant was once the AI everyone knew on Android devices. It opened up to developers in 2016 with the launch of Actions on Google. But the days of tapping into Assistant are over. The inclusion of Gemini affords developers greater freedom to bake AI into their apps in a more native way.

Code suggestions and crash reporting

Code suggestions with Gemini. Image credit: Google

At Google I/O 2023, the company introduced Studio Bot for Android Studio, an AI coding assistant powered by Google’s Codey text-to-code foundation model, a descendant of PaLM 2. Developers could pose questions about Android development or ask Studio Bot to fix errors in their existing code.

Fast-forward a year, and Studio Bot has been rebranded as Gemini in Android Studio. When enabled, developers can prompt the model to perform various tasks, from simplifying complex code and performing specific transformations such as “make this code idiomatic” to generating new functions from a description. New name, improved model and enhanced capabilities.
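
As a rough illustration of what a “make this code idiomatic” prompt is after (the snippet below is hypothetical and not taken from Google’s demo), consider an imperative Kotlin loop and the collection-operator rewrite an assistant would be expected to produce:

    // Hypothetical code selected in the editor: an imperative loop.
    data class User(val name: String, val age: Int)

    fun namesOfAdults(users: List<User>): List<String> {
        val result = ArrayList<String>()
        for (user in users) {
            if (user.age >= 18) {
                result.add(user.name)
            }
        }
        return result
    }

    // The kind of rewrite the prompt aims for: the same behavior expressed
    // with Kotlin's standard collection operators.
    fun namesOfAdultsIdiomatic(users: List<User>): List<String> =
        users.filter { it.age >= 18 }.map { it.name }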

McCullough highlighted the new code suggestion capabilities during a brief demo last week, showcasing in one instance how Gemini can take a selected piece of code and explain its purpose. This can help developers determine whether they’re editing the right part of an app or how a change might impact other areas. In addition, he showed off how Gemini could translate parts of code into other languages.

It’s unclear whether Studio Bot will live on in its current form, but suffice it to say Google is infusing Gemini directly into its products rather than shipping standalone assistants. The company isn’t the only one offering coding assistants: there’s Microsoft Copilot, GitHub Copilot, Oracle Code Assist, Amazon CodeWhisperer, Tabnine and others.

Gemini API starter app template. Image credit: Google

It’s not code generation, but Google has also updated its Gemini API to provide a starter app template within Android Studio. Developers can run prompts directly through this API, using images as inputs, and render the responses on screen. Some may find this helpful as a starting point when creating an Android app. It’s similar to how Wix, Squarespace or WordPress.com work for websites: you select a template and customize it to your needs. The difference is that with Android Studio, you can instruct Gemini to build it out for you.
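
The template builds on Google’s generative AI client SDK for Kotlin. A minimal sketch of the image-plus-text prompt flow it scaffolds might look like the following; the model name, API key handling and prompt text are assumptions, and the actual template also wires the response into the app’s UI state.

    import android.graphics.Bitmap
    import com.google.ai.client.generativeai.GenerativeModel
    import com.google.ai.client.generativeai.type.content

    // Sketch: send a multimodal prompt (image + text) to Gemini and return the reply.
    suspend fun describeImage(bitmap: Bitmap, apiKey: String): String? {
        val model = GenerativeModel(
            modelName = "gemini-1.5-pro", // assumed; use whichever Gemini model is enabled for your key
            apiKey = apiKey
        )
        val prompt = content {
            image(bitmap)                 // image input
            text("Describe what is shown in this picture.")
        }
        return model.generateContent(prompt).text
    }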

Gemini for recommendations on crash reports. Image credit: Google

Lastly, developers can now use Gemini to better understand why their Android apps crash. The AI model will analyze the crash reports, provide insights, generate a crash summary, and suggest recommendations on the next steps, including sample code to correct issues and links to relevant documentation. All this can be accessed within the App Quality Insights tool in Android Studio after Gemini is activated.

This feature builds on Android Studio’s integration with Firebase Crashlytics, made years ago. That move was described at the time as “a big step in how Android developers can improve their app stability.” That integration, plus data from Android Vitals, resulted in the creation of Android Studio’s App Quality Insights (AQI). However, deciphering that data remains a manual and often time-consuming task for developers. Google hopes Gemini will tackle the more laborious parts, freeing up developers to improve the overall app experience.
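
For context, the crash data that AQI, and now Gemini’s summaries and suggestions, works from typically reaches Firebase through standard Crashlytics calls in the app. A minimal sketch, with the custom key and the checked condition purely illustrative:

    import com.google.firebase.crashlytics.FirebaseCrashlytics

    // Illustrative Crashlytics usage: attach context and record a non-fatal error
    // so it appears in Crashlytics and, through it, in App Quality Insights.
    fun syncAccount(accountId: String) {
        val crashlytics = FirebaseCrashlytics.getInstance()
        crashlytics.setCustomKey("account_id", accountId) // extra context attached to reports
        try {
            check(accountId.isNotBlank()) { "accountId must not be blank" }
            // ... perform the sync ...
        } catch (e: IllegalStateException) {
            crashlytics.recordException(e) // recorded as a non-fatal, without crashing the app
        }
    }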


