Google returned this week with its annual developer conference, Google I/O. The 2020 edition of the event was canceled because of the pandemic, but in 2021 Google I/O came back as a virtual event with a host of exciting releases, especially in the field of artificial intelligence (AI).
In 2017, Google declared itself an AI-first company, and every year since it has brought more clarity to that mission by delivering tools designed to increase knowledge, success, health and happiness, and by making them available to everyone.
The solutions and technology that Google presented this year were focused more on solving real problems, bringing AI closer to mainstream adoption. Here are some of the AI-based announcements at this year’s Google I/O developer conference.
LaMDA – Aiding natural conversations with machines
Google has made a new leap in developing conversational AI models. During the first-day keynote, it announced LaMDA, short for Language Model for Dialogue Applications, which aims to make conversations with AI feel less robotic and more like natural dialogue.
The model is built on Transformer, a neural network architecture open-sourced by Google Research in 2017. Google CEO Sundar Pichai said that over the past few years Google has been working tirelessly to improve how it organizes and provides access to the heaps of information conveyed by the written and spoken word.
Google believes today's interactions with AI are narrow and limited, rather than free-flowing and open-ended like real conversation. “That meandering quality can quickly stump modern conversational agents (commonly known as chatbots), which tend to follow narrow, pre-defined paths. But LaMDA — short for ‘Language Model for Dialogue Applications’ — can engage in a free-flowing way about a seemingly endless number of topics, an ability we think could unlock more natural ways of interacting with technology and entirely new categories of helpful applications,” the company said.
AI in Google Maps
Google Maps received several updates at Google I/O 2021, including more detailed street maps, an expansion of the existing Live View feature, and better navigation routes powered by machine learning.
According to the company, Google Maps uses AI to show users details such as sidewalks and crosswalks while they plan their route. When a user requests directions, the service calculates multiple route options to the destination based on several factors; for example, it can surface eco-friendly routes that offer the most fuel-efficient path to the destination.
Google Maps will also use its Live View feature to aid users exploring a new neighborhood. Live View uses AR signs to help users with real-world navigation, and Google is expanding its use cases to make it more personalized. For example, Maps can now more prominently display breakfast places and coffee shops in the morning to help users who are stepping out at that time.
Google Cloud’s new AI platform – Vertex AI
Pichai introduced Vertex AI, Google Cloud's new unified ML platform that brings AutoML and AI Platform together under a single API, client library, and user interface. With it, teams can build, deploy, and scale ML models faster, using both pre-trained and custom tooling within one platform. Google says Vertex AI's custom model tooling requires nearly 80% fewer lines of code to train a model.
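To make the "unified API" idea concrete, here is a minimal sketch of what an AutoML training workflow on Vertex AI can look like with the `google-cloud-aiplatform` Python client library. This is an illustrative example, not code from the announcement: the project ID, region, Cloud Storage path, budget, and display names are all placeholders.

```python
# Hypothetical sketch of a Vertex AI workflow with the
# google-cloud-aiplatform client library. All identifiers
# (project, region, paths, display names) are placeholders.

def train_automl_image_model(project: str, region: str, gcs_csv: str):
    """Create a dataset, train an AutoML image classifier, and deploy it."""
    # Imported inside the function so the sketch can be defined and read
    # without Google Cloud credentials configured on the machine.
    from google.cloud import aiplatform

    # One init call sets the project and region for every later request --
    # this is the "unified" entry point shared by AutoML and custom training.
    aiplatform.init(project=project, location=region)

    # Import labeled images from a CSV manifest stored in Cloud Storage.
    dataset = aiplatform.ImageDataset.create(
        display_name="demo-images",
        gcs_source=gcs_csv,
        import_schema_uri=(
            aiplatform.schema.dataset.ioformat.image.single_label_classification
        ),
    )

    # AutoML handles architecture search and tuning; we only set a budget.
    job = aiplatform.AutoMLImageTrainingJob(
        display_name="demo-training",
        prediction_type="classification",
    )
    model = job.run(dataset=dataset, budget_milli_node_hours=8000)

    # Deploying the trained model returns an endpoint for online predictions.
    endpoint = model.deploy(machine_type="n1-standard-4")
    return endpoint
```

The same `aiplatform` namespace also exposes custom training jobs and pre-trained models, which is what lets one client library cover the workflows that previously required separate AutoML and AI Platform APIs.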
Project Starline: “magic mirror” using AI
Pichai also unveiled a new project called Starline, which combines advances in hardware and software to enable friends, families and coworkers to connect even when they are far away from one another. The project combines computer vision, machine learning, spatial audio and real-time compression.
Google has developed a breakthrough light-field display system that creates a sense of volume and depth, letting users hold video calls in which the other person appears as a 3D hologram, with no additional glasses or headsets required. “It’s as close as we can get to the feeling of sitting across from someone,” Pichai said.
Smart Canvas for the hybrid work model
Fifteen years after launching Google Docs and Sheets, Google introduced Smart Canvas as its next big step in collaboration, arriving as the shift to hybrid work gives new urgency to existing collaboration challenges.
The tech giant also introduced new smart chips in Docs for recommended files and meetings. Javier Soltero, a senior Google executive, introduced the ability to start Google Meet video calls directly inside other tools like Docs and Sheets, a direct challenge to Microsoft's Teams product and offerings from Zoom Video Communications. To insert a smart chip, users simply type “@” to see a list of recommended people, files, and meetings. Smart chips will come to Sheets in the coming months.
AI Computing advances
The search giant also unveiled its new Quantum AI campus in Santa Barbara, California. The campus houses Google's first quantum data center, quantum hardware research laboratories, and in-house quantum processor chip fabrication facilities. Among other advances, Google previewed AI-assisted screening tools for health, including an AI-powered dermatology tool for identifying skin conditions and machine-learning work in radiology.
Google also announced the next generation of its custom Tensor Processing Unit (TPU) AI chips. The fourth-generation chips, the tech giant said, are twice as fast as the previous version, and Pichai stated that they are brought together in pods of 4,096 v4 TPUs. Google uses these custom chips to run many of its own machine-learning services, but it will also make the new generation available to developers through the Google Cloud platform.