San Francisco: Google unveiled on Tuesday the latest additions to its Gemini family of AI models at Google I/O 2024, the company's annual developer conference.
The company announced a private preview of a new version of Gemini 1.5 Pro, its current flagship model, which can accept inputs of up to 2 million tokens. That capacity doubles the previous maximum and gives the new version the largest input window of any commercially available model.
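For developers with preview access, working with the expanded context window looks much like using the existing Gemini API. The Python sketch below uses Google's google-generativeai SDK to count the tokens in a large input before sending it to the model; the model identifier and the hard-coded limit are assumptions, since the 2-million-token version was only reachable through the private preview at the time.

```python
# Illustrative sketch only: the 2M-token Gemini 1.5 Pro was in private
# preview, so the model name and the limit below are assumptions.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")

# Standard Gemini 1.5 Pro identifier; access to the larger context window
# was granted via waitlist rather than a separate public model name.
model = genai.GenerativeModel("gemini-1.5-pro")

ASSUMED_CONTEXT_LIMIT = 2_000_000  # tokens, per the announcement

with open("large_corpus.txt") as f:
    document = f.read()

# Count tokens client-side so oversized inputs are caught before the call.
token_count = model.count_tokens(document).total_tokens
if token_count <= ASSUMED_CONTEXT_LIMIT:
    response = model.generate_content(
        ["Summarize the key themes of this document.", document]
    )
    print(response.text)
else:
    print(f"Input is {token_count} tokens, over the assumed limit.")
```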
"Today, all of our 2-billion user products use Gemini ... More than 1.5 million developers use Gemini models across our tools ... We've also been bringing Gemini's breakthrough capabilities across our products, in powerful ways," Google CEO Sundar Pichai told the conference.
"The power of Gemini -- with multimodality, long context and agents -- brings us closer to our ultimate goal: making AI helpful for everyone," he added.
The company is bringing the improved version of Gemini 1.5 Pro to all developers globally. In addition, Gemini 1.5 Pro with a 1-million-token context window is now directly available to consumers in Gemini Advanced, which can be used in 35 languages, Google said.
At the conference, Pichai also announced Trillium, the company's sixth generation of TPUs. "Trillium is our most performant and most efficient TPU to date ... We'll make Trillium available to our Cloud customers in late 2024," he said.