Gemini Ultra benchmarks compared with GPT-4

Google pushed a chart with a side-by-side comparison of Gemini Ultra’s and GPT-4’s performance on a number of tests. In almost every category, Google comes out on top.
Graphic: Google

Google is generally reserved when it takes shots at competitors, but the company didn’t mince words. According to Google, Gemini has OpenAI beat on almost every measure.


“With a score of over 90%, Gemini is the first AI model to outperform human experts on the industry-standard benchmark MMLU,” said Eli Collins, Vice President of Product at Google DeepMind, speaking at a press conference. “It’s our largest and most capable AI model.” MMLU, short for Massive Multitask Language Understanding, measures AI capabilities using standardized tests across 57 subjects, including math, physics, history, law, medicine, and ethics.

“Gemini’s performance also exceeds current state-of-the-art results on 30 out of 32 widely used industry benchmarks,” Collins said.
