
Will NVIDIA’s ‘Oppressive’ World of AI Chip Begin With B200?


NVIDIA rolled out the B200, touted as the world’s most powerful AI chip. The latest AI chip, the B200, is a processor that delivers five times the performance of its Hopper predecessor and is capable of training LLMs with trillions of parameters. Aimed at tapping the generative AI market, the AI chip B200 marries two squares of silicon into a single component. The ground-breaking AI chip promises unparalleled performance in tasks such as chatbot responses.

The newest NVIDIA AI chip is reported to deliver a 30-fold increase in speed compared with its predecessor.

The unveiling of the AI chip B200 was led by NVIDIA Chief Executive Jensen Huang, with the launch held centre stage in Silicon Valley before an audience gathered in a hockey arena.

NVIDIA did not pass up the chance to apprise the live audience of the chip’s performance in data-intensive tasks and of the company’s growth trajectory, striving to assert its dominance in the chip-driven future of the AI sector.

On the heels of the AI chip B200, other rollouts included a suite of software tools catering to the growing needs of developers deploying AI models.

Some mainstream media reported that, with the chip and software innovations showcased at GTC 2024, NVIDIA opens a new chapter aimed at retaining its roughly 80% share of the AI chip market.

The GTC 2024 event also opened a new dialogue about NVIDIA’s transition from gaming-chip behemoth to a company competing openly with Microsoft, Apple, Qualcomm and Samsung.

NVIDIA envisions, and further hints at, collaborations with tech giants such as Amazon, Google, Microsoft, OpenAI and Oracle, where its new AI chip B200 will be deployed.

NVIDIA, as reported by The National News, underlines that Blackwell delivers up to 20 petaflops of performance, compared to Hopper’s 4 petaflops. A petaflop is equivalent to one quadrillion operations per second. The Blackwell architecture comprises 208 billion transistors, compared to Hopper’s more than 80 billion, and can support trillions of parameters for training large language models, the underlying technology powering generative AI.
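To put those figures in perspective, here is a quick back-of-envelope sketch (plain Python, using only the numbers quoted above; the variable names are ours) that converts the petaflop ratings into raw operations per second and compares the two generations:

# Back-of-envelope comparison using only the figures quoted above.
PETAFLOP = 1e15  # one quadrillion operations per second

blackwell_pflops, hopper_pflops = 20, 4
blackwell_transistors, hopper_transistors = 208e9, 80e9

print(f"Blackwell ops/second:   {blackwell_pflops * PETAFLOP:.1e}")                  # 2.0e+16
print(f"Peak throughput ratio:  {blackwell_pflops / hopper_pflops:.0f}x")            # 5x
print(f"Transistor count ratio: {blackwell_transistors / hopper_transistors:.1f}x")  # 2.6x

The 5x throughput ratio matches the "five times the performance" claim made at the launch.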

NVIDIA plans to package the Blackwell chips into larger setups: the GB200 NVL72, presented at the event, is made up of 36 CPUs and 72 GPUs, delivering a total of 720 petaflops of AI power, in addition to more than 3.2 km of cables.

The Blackwell, which will be combined with NVIDIA’s central processing unit Grace, also puts two circuit boards together to make it appear that “it’s just one chip”, Huang said.

The price point of the Blackwell GPUs was not mentioned, though the earlier Hopper chips are available for between $25,000 and $40,000 each, with entire systems reportedly priced at as much as $200,000.

The company affirmed that the new B200 AI Chip has been architected to deliver extreme power efficiency. In its official release, the chip major said that 2,000 Blackwell GPUs consume 4 megawatts of power, compared to 15 megawatts for 8,000 Hopper GPUs.
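Those efficiency numbers can be unpacked with simple arithmetic. The sketch below (plain Python) assumes, as NVIDIA’s comparison implies, that the two clusters are handling a comparable workload:

# Power figures from NVIDIA's release: 2,000 Blackwell GPUs at 4 MW
# versus 8,000 Hopper GPUs at 15 MW, assumed to cover a comparable workload.
blackwell_gpus, blackwell_mw = 2_000, 4
hopper_gpus, hopper_mw = 8_000, 15

print(f"Total power reduction:   {hopper_mw / blackwell_mw:.2f}x")                     # 3.75x
print(f"GPU count reduction:     {hopper_gpus / blackwell_gpus:.0f}x")                  # 4x
print(f"Power per Blackwell GPU: {blackwell_mw * 1e6 / blackwell_gpus / 1e3:.2f} kW")  # 2.00 kW
print(f"Power per Hopper GPU:    {hopper_mw * 1e6 / hopper_gpus / 1e3:.2f} kW")        # 1.88 kW

In other words, the per-GPU draw is broadly similar; the headline saving comes from needing roughly a quarter as many GPUs, which cuts total power for the job by about 3.75 times.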

NVIDIA founder and CEO Jensen Huang, in his keynote introducing the company’s new Blackwell computing platform, outlined the major advances that increased computing power can deliver for everything from software and services to robotics, medical technology and more.

“We need another way of doing computing — so that we can continue to scale, so that we can continue to drive down the cost of computing, so that we can continue to consume more and more computing while being sustainable. Accelerated computing is a dramatic speedup over general-purpose computing, in every single industry,” noted Huang.

Huang also presented NVIDIA NIM (NVIDIA Inference Microservices), which is touted to redefine how software is packaged and delivered, as it connects developers with hundreds of millions of GPUs to deploy custom AI of all kinds.
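NIM packages models as containers that expose standard inference endpoints, so deploying a custom model is largely a matter of pulling a container and calling its API. The snippet below is a minimal, illustrative sketch: it assumes a NIM container is already running locally and serving an OpenAI-compatible chat endpoint on port 8000, and the model identifier shown is purely a placeholder (it was not named in the keynote):

import requests

# Assumed local NIM deployment; URL, port and model name are illustrative.
NIM_URL = "http://localhost:8000/v1/chat/completions"

payload = {
    "model": "meta/llama3-8b-instruct",  # placeholder model identifier
    "messages": [
        {"role": "user", "content": "Summarize the Blackwell launch in one sentence."}
    ],
    "max_tokens": 64,
}

response = requests.post(NIM_URL, json=payload, timeout=60)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])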

During the event, Huang affirmed the ambition of bringing AI into the physical world and introduced Omniverse Cloud APIs to deliver advanced simulation capabilities.
