Google has finally launched its latest innovation in the world of artificial intelligence: Gemini. The new model is said to be designed to be more capable, flexible, and optimised. Interestingly, it has created a buzz by scoring over 90% on the Massive Multitask Language Understanding (MMLU) benchmark, believed to be the highest score achieved by an AI model yet. Before this, OpenAI held the first rank with 86.5% on the MMLU benchmark, which assesses how well an AI model understands language and tackles problem-solving tasks.
Gemini has been launched in three sizes – Ultra, Pro, and Nano – each with capabilities suited to different power and performance requirements. The new Google AI model gives developers greater flexibility in their AI-related tasks, particularly those building for Android and enterprise use.
In terms of implementation, Gemini Nano will make its debut on the Pixel 8 Pro smartphone, most likely via an upcoming update. Developed for on-device AI tasks, this size is designed to run directly on smartphones, supporting features such as ‘Summarise’ in the Recorder app and ‘Smart Reply’ in Gboard. Initially, Smart Reply will roll out in WhatsApp, with more apps to follow.
Gemini Pro will be available to developers and enterprises from December 13. The Pro model currently powers Google’s chatbot, Bard, and is said to provide more advanced reasoning, planning, and understanding when responding to queries.
Lastly, Gemini Ultra is currently undergoing red-team testing. It is said to roll out with ‘Bard Advanced’, launching in early 2024, and will offer the most advanced AI experience of the three. This is the model that scored 90.0% on the MMLU benchmark, surpassing OpenAI’s GPT-4.
Sundar Pichai, the CEO of Google and Alphabet said, “Every technology shift is an opportunity to advance scientific discovery, accelerate human progress, and improve lives. I believe the transition we are seeing right now with AI will be the most profound in our lifetimes, far bigger than the shift to mobile or the web before it.”
Sissie Hsiao, Vice President and General Manager for Assistant and Bard at Google explained, “Before bringing it to the public, we ran Gemini Pro through several industry-standard benchmarks. In six out of eight benchmarks, Gemini Pro outperformed GPT-3.5, including in MMLU (Massive Multitask Language Understanding), one of the key leading standards for measuring large AI models, and GSM8K, which measures grade school math reasoning.”