India’s search behaviour takes centre stage in Google’s strategic outlook, says VP Pandu Nayak

India’s unique search needs are emerging as a key factor in shaping Google’s future strategy for its search business, a top executive said at the CNBC-TV18 and Moneycontrol Global AI Conclave on December 16.

“What’s neat about India is that it forces us to think about a lot of interesting problems. Voice-first (nature) is a great example. Another important problem that India really highlights is different languages,” said Pandu Nayak, vice president of search at Google, in a fireside chat at the event.

Nayak said Google has done significant work in search to tackle these problems, using products such as Google Translate.

“For several years now, we’ve been trying to improve the search experience for our Hindi users and our local language users. And we’ve done that both by making sure that voice recognition for Hindi is improved and becomes as good as English voice recognition on the input side and improving the search experience for Hindi users in terms of content sparsity as compared to English users,” he said.

For instance, Nayak said Google has found ways to automatically translate high-quality, locally relevant English documents, from Wikipedia to other local sites, and surface them in the Hindi search experience.

He also cited Project Vaani, a joint initiative between Google and the Indian Institute of Science (IISc) aimed at collecting and transcribing open-source, anonymised speech data from all of the country’s 773 districts. As part of the first phase, data has been collected from 85 districts.

“There’s also the multi-modality, the photo-first things that are central to the things that users do here,” Nayak said. Multi-modality refers to combining multiple content types, including text, images, video and audio, in a single query.

He said the future of search is multi-modal, as Google increasingly sees people wanting to search using combinations of content types, such as an image and text, that together capture the user’s intent, for instance by pointing their phone camera at an object and asking a question about it. “This is a very natural way to ask a bunch of questions,” Nayak said.

Earlier this month, Google also unveiled Gemini, its newest and most advanced artificial intelligence (AI) model.

Gemini, the first AI model released after the merger of the company’s AI research units DeepMind and Google Brain, has been built from the ground up to be “multimodal”, meaning it can understand and work with different types of information, including text, code, audio, images and video, at the same time.

At launch, Google said it will be using Gemini across all its products. The company’s AI chatbot Bard will use a fine-tuned version of Gemini Pro for more advanced reasoning, planning, and understanding.

Gemini is also being used to make Google’s generative AI search offering, the Search Generative Experience (SGE), faster for users. The company said it saw a 40 percent reduction in latency for English queries in the United States, alongside improvements in quality.

Gemini will also be integrated into more of the company’s products and services in the coming months, including Search, Ads, Chrome and Duet AI, the company said at the time.

“I am really looking forward to seeing how we’re going to bring it to our users in search,” Nayak said.

Palo Alto Networks presents the CNBC-TV18 Moneycontrol Global AI Conclave, in collaboration with EY as the Knowledge Partner and Google, Yotta Infrastructure, and Reliance Industries Limited as Associate Partners, with Townhall serving as the Technology Partner.



