“It’s a new industrial revolution,” Huang notes, powered by Blackwell and working alongside NIMs – all operating in conjunction with the Omniverse.
And that’s a wrap! A whole lot to experience and digest – but some huge announcements when it comes to Blackwell, the future of AI and much more – we’ll be publishing our thoughts and write-ups shortly, so stay tuned to TechRadar Pro for more soon!
For his big finish, Huang is joined by a whole host of Project GR00T (Generalist Robot 00 Technology) robots – and then by two famous friends from Star Wars as well!
Huang says the model will learn from watching human examples, as well as from a “library” that helps it understand the real world – not creepy at all…
We’re then treated to a video that shows off these humanoid robots in development, across factories, healthcare and science – and in operation in the real world, so the future may not be too far off after all.
Back to robotics – and Nvidia has over 1,000 robotics developers, Huang notes.
The AI future has often conjured up visions of Terminator-esque robots wiping out humanity, but the reality is often a little more mundane.
There are new SDKs too: Isaac Manipulator, aimed at robotic arms (known as “manipulators”), and Isaac Perceptor, for autonomous mobile robots – giving such machines far more insight and intelligence.
Nvidia is looking to advance the development of humanoid robots with Project GR00T, a new general-purpose foundation model.
Huang announces that Omniverse Cloud will now stream to the Apple Vision Pro!
He says he’s used it himself, and it works – even if it is slightly embarrassing.
Product design and customization is another potentially huge market for AI – and now we’re seeing how Nissan works with Nvidia to offer the ultimate personalization options when choosing a new car.
It’s also being used in marketing and advertising campaigns – and there’s even a little hint of the Apple Vision Pro…
Huang introduces a demo of a virtual warehouse that blends together robotics and “human” systems, showing how the technology could work to boost productivity and efficiency.
It’s a slightly disturbing, but comprehensive look at how industries of the future could work – Nvidia has already signed up industrial giant Siemens to implement it.
The “ChatGPT moment” for robotics could be just around the corner, Huang says, and Nvidia wants to be up to speed and ready to roll when it arrives.
“We need a simulation engine that represents the world digitally for a robot,” he says – that’s the Omniverse.
Now we move on to robotics – or, as Huang calls it, “physical AI”.
Robotics goes along with AI and Omniverse/digital twin work as a key pillar for Nvidia, all working together to get the most out of the company’s systems, he says.
“The enterprise IT industry is sitting on a goldmine…they’re sitting on a lot of data,” Huang notes, saying these companies could turn it into chatbots.
Nvidia is going to work with SAP, Cohesity, Snowflake and ServiceNow to simplify building of such chatbots, so it’s clear this is only the beginning.
Huang runs us through how Nvidia is using one such NIM to create an internal chatbot designed to solve common problems encountered when building chips.
Although he admits there were some hiccups, it has greatly improved knowledge and capabilities across the board.
To make it easier to deliver, deploy and operate software, Nvidia has a new offering – NIM.
Nvidia Inference Microservice, or NIM, brings together models and dependencies in one neat package – optimized depending on your stack, and connected with APIs that are simple to use.
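For a rough flavour of what that could look like in practice, here’s a minimal sketch of calling a NIM that’s been deployed locally, assuming it exposes an OpenAI-compatible chat endpoint – the port and model name below are placeholders of our own, not anything announced on stage.

```python
# Hedged sketch: query a locally running NIM via an OpenAI-compatible
# REST endpoint. The URL and model identifier below are illustrative only.
import requests

response = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "meta/llama3-8b-instruct",  # hypothetical model name
        "messages": [
            {"role": "user", "content": "Summarize today's GTC keynote in one sentence."}
        ],
        "max_tokens": 64,
    },
    timeout=30,
)
print(response.json()["choices"][0]["message"]["content"])
```

The idea being: the model, its runtime and its optimizations are all packaged inside the microservice, and your application just talks to a familiar web API.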
NIMs can be downloaded and used anywhere, via the new ai.nvidia.com hub – a single location for all your AI software needs.
“This is how we’re going to write software in the future,” Huang notes – by assembling a bunch of AIs.
Now, on to healthcare. Huang highlights the vast amounts of work the company has already done, from imaging to genomics and drug discovery.
Today, Nvidia is going a step further, building models for researchers around the world, taking a lot of the background heavy lifting and making drug discovery faster than ever.
So what else can generative AI do for us? It’s time to hear about some of the ways the technology can benefit us today.
First up – weather forecasting. Extreme weather costs countries around the world billions, and Nvidia has today launched Earth-2, a set of APIs that can be used to build more accurate, higher-resolution models, giving better forecasts that can save lives.
Nvidia has already partnered with The Weather Company for better predictions and forecasts.
After all of those pretty math and tech-heavy announcements, it’s time for something different, as we enter hour two of this keynote.
It’s time for some digital twins. Nvidia has already created a digital twin of everything in its business, Huang notes, using Omniverse APIs to ensure successful factory rollouts and more.
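Omniverse and its digital twins are built on OpenUSD, so for the curious, here’s a minimal sketch of authoring a scene with the open-source USD Python library – this is our own illustration of the underlying format, not the Omniverse Cloud APIs themselves, and the “factory” prims are purely hypothetical.

```python
# Hedged sketch: build a tiny USD stage of the kind a digital twin is composed from.
from pxr import Usd, UsdGeom

# Create a new stage (the scene file the twin lives in)
stage = Usd.Stage.CreateNew("factory_twin.usda")

# Add placeholder prims standing in for a factory and a piece of equipment
factory = UsdGeom.Xform.Define(stage, "/Factory")
conveyor = UsdGeom.Cube.Define(stage, "/Factory/Conveyor")
conveyor.GetSizeAttr().Set(2.0)  # a 2-unit placeholder box

stage.SetDefaultPrim(factory.GetPrim())
stage.GetRootLayer().Save()
```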
“Blackwell will be the most successful product launch for us in our history,” Huang declares.
AWS, Google Cloud, Microsoft Azure and Oracle Cloud have already signed up for Blackwell, as have a whole host of other companies, from AI specialists to computing OEMs and telco powerhouses.
“Blackwell is going to be just an amazing system for generative AI,” Huang notes, adding that he believes future data centers will primarily be factories for generative AI.
“The excitement for Blackwell is genuinely off the charts.”
Blackwell is designed for trillion-parameter generative AI models, so unsurprisingly, it trounces Hopper in inference – with up to 30x greater output.
These are the systems that will train massive GPT-esque models, Huang notes, with thousands of GPUs coming together into one huge pool of compute – but at a fairly low energy cost.
When it comes to inference (that is, generating output from trained models such as LLMs), though, Huang says Blackwell-powered units can again bring down computing costs and energy demands.
There’s also a new NVLink Switch Chip, offering the chance to let all GPUs talk to each other at the same time.
It can be housed in the new GB200 NVL72 – essentially one giant GPU, with some ridiculous numbers behind it, including 720 petaflops of training performance and 1.44 exaflops of inference. This is for seriously heavy lifting, able to move 130TB/s of data – more than the entire bandwidth of the whole internet.
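To give a sense of what “all GPUs talking to each other at the same time” means in software terms, here’s a hedged sketch using PyTorch’s NCCL-backed all-reduce – the kind of collective operation that hardware like NVLink accelerates. This is our own generic multi-GPU example, not Nvidia’s code, and it assumes a machine with more than one CUDA GPU.

```python
# Hedged sketch: every GPU contributes a value and all of them receive the
# summed result via an NCCL all-reduce - the kind of GPU-to-GPU chatter
# that NVLink exists to speed up.
import os
import torch
import torch.distributed as dist
import torch.multiprocessing as mp


def worker(rank: int, world_size: int):
    os.environ["MASTER_ADDR"] = "127.0.0.1"
    os.environ["MASTER_PORT"] = "29500"
    dist.init_process_group("nccl", rank=rank, world_size=world_size)
    torch.cuda.set_device(rank)

    # Each GPU starts with its own number...
    local = torch.full((1,), float(rank + 1), device="cuda")
    # ...and after the all-reduce, every GPU holds the total across all GPUs.
    dist.all_reduce(local, op=dist.ReduceOp.SUM)
    print(f"GPU {rank} sees {local.item()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    world_size = torch.cuda.device_count()
    mp.spawn(worker, args=(world_size,), nprocs=world_size)
```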
Some more frankly ridiculous numbers concerning Blackwell.
Blackwell also offers secure AI, with a 100% in-system self-testing RAS (reliability, availability and serviceability) service and full-performance encryption – keeping data secure not just in transit, but at rest and while it is being computed.
Secure AI – that’s a major step forward all round.
Here’s the future, people.
Installing Blackwell is as simple as sliding out existing Hopper units, with new platforms scaling from a single board up to a full data center.
Huang talks about Blackwell like a proud parent – it’s clear this is a huge upgrade for Nvidia, with some frankly staggering numbers behind it.
But wait – there’s more!
Say hello to Blackwell – “the engine of the new industrial revolution”.
The “AI superchip” is twice the size of Hopper, fitting in 208 billion transistors by putting two dies together so that they behave as one. “It’s just one giant chip.”
“People think we make GPUs, but GPUs don’t look the way they used to,” Huang notes.
Bigger GPUs! That’s the answer, as Nvidia has been putting GPUs together for some time.
Following Selene and Eos, the company is now working on the next step – “we have a long way to go, we need larger models,” Huang notes.
“I’d like to introduce you to a very big GPU,” he teases…
Huang wants to shout out to some partners and customers – Ansys, Synopsys and Cadence among others.
All of these partners, and many others, are demanding much more power and efficiency, he notes – so what can Nvidia do?
It ends with a shot of Earth-2, a new climate mapping and weather tracking model, before Huang re-enters.
“We’ve reached a tipping point,” he says. “We need a new way of computing…accelerated computing is a massive speeding up.”
Simulations can help drive up the scale of computing, he notes, with digital twins allowing for much more flexibility and efficiency.
“But to do that, we need to accelerate an entire industry,” Huang notes.
We’re now getting an insight into “the soul” of Nvidia – the Omniverse.
Everything we’re going to see today is generated and simulated by Nvidia’s own systems, he notes.
This is kicked off with a video showing off designs from Adobe, coding from Isaac, PhysX, and animation from Warp – all running on Nvidia.
It’s been a rollercoaster few years, Huang reminds us – and there’s a long way to go.
“It is a brand new category,” he notes, “it’s unlike anything we’ve ever done before.”
“A new industry has emerged.”
And with that, it’s time for the main man himself – Nvidia founder and CEO Jensen Huang.
Breathe a sigh of relief, everyone – he is indeed wearing his now iconic leather jacket!
“At no conference in the world is there such a diverse collection of researchers,” he notes – there’s a healthy showing from life sciences, healthcare, retail, logistics companies and so much more. Companies representing $100 trillion of the world’s industries are here at GTC, he says.
“There’s definitely something going on,” he says, “the computer is the most fundamental part of society today.”
The lights go down, and it’s time for the keynote.
An introductory video runs us through some of the possibilities AI can bring, from weather forecasting to healthcare, and shows off some of the amazing growth Nvidia has had in recent months.
With just minutes to go, we’re being treated to an incredible display from AI artist Refik Anadol, who has taken over the entire wall with an amazing visual piece.
The DJ is currently playing “Under Pressure” by David Bowie & Queen…hopefully not a taste of things to come.
In case you were wondering, the tagline for Nvidia GTC 2024 is “The Conference for the Era of AI”…
We’re nearly there! Half an hour to go…
Well, we’re in and seated at the keynote…45 minutes before the start, and it’s completely heaving! This is the hottest ticket here in San Jose and the tech world as a whole…
This also means the Wi-Fi is very sketchy at best – so bear with us while we get settled.
Ironically, it’s quite a quiet start to the day, with Nvidia CEO Jensen Huang set to take to the stage for his keynote at 1pm local time.
If you’d like a video accompaniment to my live updates, you can sign up to watch here.
It’s a fresh and sunny day here in San Jose – we’ve just been to the convention center to pick up our badge (and multiple drinks tokens for some reason) and it’s already packed with people – should be a good day!
Good morning from San Jose, where we’re all set for day one of Nvidia GTC.
Today sees the conference kick off with a keynote from Nvidia CEO and leather jacket enthusiast Jensen Huang, and we’ll be intrigued to see just what the company has on offer.
TechRadar Pro is live at Nvidia GTC 2024, and what a first day it’s been.
Today saw Nvidia CEO Jensen Huang take to the stage for a two-hour keynote that covered everything from the future of AI to incredibly powerful new hardware – and even dancing robots.
There was a whole host of news and announcements, so have a look through our live blog to make sure you didn’t miss anything big.