Will Google Assistant be powered by artificial intelligence soon?

Google’s digital assistant may soon gain AI integration that expands its capabilities.

Google Assistant may soon get advanced features based on artificial intelligence, similar to chatbots like Bard and ChatGPT. In an email seen by Axios, a Google executive told employees that the company sees “a huge opportunity to explore what a supercharged Assistant, powered by the latest LLM technology, would look like.”

Now, the company hasn’t gone into the details of what these “supercharged” capabilities might look like, but a quick look at Bard’s feature set gives a good idea of what we can expect.

Bard was originally based on Google’s Language Model for Dialogue Applications (LaMDA) technology and is now built on top of the Pathways Language Model 2 (PaLM 2). In addition to answering questions based on information from the Internet, Bard recently gained the ability to analyze images using the same technology as Google Lens. It can also cite its sources and will soon be available in Adobe Express thanks to an integration with Adobe’s generative AI, Firefly.

But these features have little in common with Google Assistant, which today is best at fetching web search results or performing app-related tasks on the device, such as setting an alarm or playing music. Bard, on the other hand, may be the smarter AI chatbot, but it can’t perform any meaningful tasks on your phone, like playing music or setting an alarm. Built into the Assistant, however, that intelligence could greatly expand Google Assistant’s capabilities. Interestingly, Google has already given a teaser of what’s coming next.


A big leap forward for digital assistants

In May 2023, the Google AI team published a report titled “Enabling Conversational Interaction with Mobile UI using Large Language Models,” which described testing large language model queries against a phone’s user interface. In other words, it explored integrating large language models with graphical user interfaces (GUIs), that is, the applications and software that run on your phone’s screen.

It discusses four areas of application in detail, including summarizing the content on the screen, answering questions based on the content shown on the display and, most importantly, mapping language prompts to user interface actions.

For example, the language model can look through the user interface and automatically generate contextual questions from the information it conveys. Having gathered those details in advance, it can respond quickly when the user actually asks one of them.
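
As a rough illustration, here is a minimal Python sketch of that question-generation step; the call_llm placeholder and the serialized screen format are assumptions for illustration, not Google’s actual implementation.

```python
# Hypothetical sketch of screen question generation: the model reads a
# serialized UI and proposes questions the on-screen details can answer.

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM completion API; not a real library call.
    raise NotImplementedError("connect an LLM provider here")

def serialize_screen(elements: list[dict]) -> str:
    # Flatten on-screen UI elements into plain text the model can read.
    return "\n".join(f"- {e['type']}: {e['text']}" for e in elements)

# Example screen contents (illustrative, not a real app):
screen = [
    {"type": "header", "text": "Flight BA 249, London to Rio de Janeiro"},
    {"type": "label", "text": "Departs 22:10 from Gate 14"},
]

prompt = (
    "Here is the content of a phone screen:\n"
    + serialize_screen(screen)
    + "\nList the questions a user might ask that this screen can answer."
)
# questions = call_llm(prompt)  # e.g. "What time does flight BA 249 depart?"
```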

Another notable feature is “screen question answering.” For example, when a blog post is open in a web browser, the AI can pull out information such as the title, the author’s name, the publication date, and more.
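
A comparable sketch of screen question answering might look like this; the page text, names and the call_llm placeholder are again hypothetical.

```python
# Hypothetical sketch of screen question answering: the serialized page and
# the user's question go into one prompt, and the model answers only from
# what is on screen.

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM completion API; not a real library call.
    raise NotImplementedError("connect an LLM provider here")

# Illustrative page content (the author name is a made-up example):
page_text = (
    "Title: A big leap forward for digital assistants\n"
    "Author: Jane Doe\n"
    "Published: May 2023"
)

def answer_from_screen(question: str, screen_text: str) -> str:
    prompt = (
        "Using only the screen content below, answer the question.\n"
        f"Screen content:\n{screen_text}\n"
        f"Question: {question}"
    )
    return call_llm(prompt)

# answer_from_screen("Who wrote this article?", page_text)  -> "Jane Doe"
```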

But the most promising area of application is “mapping instructions to UI actions.” Essentially, this lets you control your phone with prompts (both voice and text). With improved conversational capabilities, the virtual assistant could be asked to open an app, change phone settings such as mobile data, and more.
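
One plausible way to sketch that mapping step, loosely following the report’s idea of pairing on-screen elements with language prompts, is shown below; the tile labels and the CLICK <index> action format are illustrative assumptions, not the paper’s exact protocol.

```python
# Hypothetical sketch of mapping a voice/text instruction to a UI action:
# interactive elements are numbered, and the model picks the one to tap.

def call_llm(prompt: str) -> str:
    # Placeholder for any LLM completion API; not a real library call.
    raise NotImplementedError("connect an LLM provider here")

def map_instruction_to_action(instruction: str, elements: list[str]) -> str:
    numbered = "\n".join(f"{i}: {label}" for i, label in enumerate(elements))
    prompt = (
        "These controls are on screen:\n"
        f"{numbered}\n"
        f'User instruction: "{instruction}"\n'
        "Reply with the single action to take, in the form CLICK <index>."
    )
    return call_llm(prompt)

# Quick-settings tiles (illustrative):
tiles = ["Wi-Fi", "Bluetooth", "Mobile data", "Flashlight"]
# map_instruction_to_action("turn on mobile data", tiles)  -> "CLICK 2"
```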

It’s not clear exactly when the “supercharged” Google Assistant will arrive, but it would be a real leap in its capabilities. Interestingly, Apple is also rumored to be experimenting with generative AI tools, reportedly an internal chatbot dubbed “Apple GPT,” to improve Siri.

