Everyone will have an AI assistant in the future: Nvidia CEO

Image created by DALL·E 3.

Artificial intelligence has become a revolutionary force for businesses and entire industries, spanning applications from chatbots to ChatGPT — and soon, everyone may have their own AI assistant.

This was one of the key points Nvidia CEO Jensen Huang shared during a fireside chat at SIGGRAPH 2024, where he discussed the company’s current AI strategy and the future of the enterprise landscape over the next few years.

“Every single company, every single job within the company, will have AI assistance,” Nvidia’s Chief Executive said.

Huang made this remark as Nvidia announced the launch of digital agents, or digital AI, during the event. According to him, digital agents enhance every role in a company, with customer service being the most obvious example.

“Every single company or industry has customer service. Today, it’s humans doing customer service. In the future, it’s still going to be humans, but with AI in the loop. The benefit is that you’ll be able to retain the experiences of all the customer service agents you have, capturing that institutional knowledge, which you can then run through analytics and use to create better services for your customers,” he explained.

Digital helping hand

The technology, essentially a microservice hosted in the cloud, is connected to a digital human front end: an input/output (IO) layer for the AI that can speak, make eye contact with an actual human, and animate empathetically, Huang said.

“You could decide to connect your ChatGPT or your AI to the digital human, or you can connect your digital human to our retrieval-augmented generation customer service AI. You chat with the AI, it generates text, and that text is then translated to sound, so text-to-speech, and it’s that sound that animates the face. Then, RTX path tracing does the rendering of the digital human,” the CEO added.
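In outline, the chain Huang describes runs from text generation through speech synthesis to audio-driven animation and rendering. The sketch below is a minimal, illustrative version of that flow in Python; every function name is a hypothetical placeholder, not Nvidia’s actual API.

```python
# A minimal, illustrative version of the pipeline described above:
# text generation -> text-to-speech -> audio-driven facial animation -> rendering.
# All function names here are hypothetical placeholders, not Nvidia's APIs.

def answer_query(prompt: str) -> str:
    """Stand-in for the retrieval-augmented customer-service model: returns a text reply."""
    return f"Here is what I found about: {prompt}"

def text_to_speech(text: str) -> bytes:
    """Stand-in for a TTS service: turns the reply into synthesized audio."""
    return text.encode("utf-8")  # placeholder "audio"

def animate_face(audio: bytes) -> dict:
    """Stand-in for audio-driven facial animation (lip sync, gaze, expression)."""
    return {"frames": len(audio), "expression": "empathetic"}

def render_digital_human(animation: dict) -> None:
    """Stand-in for the path-traced renderer that draws the digital human."""
    print(f"Rendering {animation['frames']} frames ({animation['expression']})")

# Wiring the stages together in the order Huang describes:
reply = answer_query("Where is my order?")
audio = text_to_speech(reply)
animation = animate_face(audio)
render_digital_human(animation)
```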

Addressing concerns that such lifelike digital assistants might pose a threat to jobs in the future, Huang assured that the technology is “still pretty robotic.”

“We’re going to be robotic for some time. We’ve made this digital human technology quite realistic, but it’s still a robot, and I think that’s not a horrible way to do it,” he remarked.

Huang further elaborated: “There are many different applications where having a human or near-human representation is much more engaging than a text box. Perhaps someone needs a companion, or healthcare requires a way to advise outpatients, or a tutor needs to educate a child — all these different applications are better off having someone who is much more human and able to connect with the audience.”

Generative AI breakthrough

According to Huang, three key breakthroughs in ChatGPT have paved the way for Nvidia’s digital assistant.

The first one was reinforcement learning from human feedback: “This was a way of using humans to produce the best answers to align the AI with our core values or the skills that we want it to perform.”
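At the core of that feedback loop is a reward model trained on human preference pairs. The snippet below is an illustrative sketch of a single training step, assuming PyTorch and randomly generated stand-in embeddings; it is not Nvidia’s or OpenAI’s code.

```python
# Illustrative sketch of the reward-modelling step that underpins RLHF, assuming
# PyTorch and random stand-in embeddings. This is not Nvidia's or OpenAI's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RewardModel(nn.Module):
    """Scores a candidate answer; trained so human-preferred answers score higher."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.scorer = nn.Sequential(nn.Linear(embed_dim, 64), nn.ReLU(), nn.Linear(64, 1))

    def forward(self, response_embedding: torch.Tensor) -> torch.Tensor:
        return self.scorer(response_embedding)

reward_model = RewardModel()
optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-4)

# One training step on a single human preference pair: `preferred` and `rejected`
# are embeddings of two candidate answers, where an annotator chose the first.
preferred = torch.randn(1, 128)
rejected = torch.randn(1, 128)

# Pairwise (Bradley-Terry style) loss: push the preferred score above the rejected one.
loss = -F.logsigmoid(reward_model(preferred) - reward_model(rejected)).mean()
optimizer.zero_grad()
loss.backward()
optimizer.step()
```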

Next is guard railing, which keeps the AI’s focus and responses within a particular domain.

Nvidia CEO Jensen Huang speaks during a fireside chat at SIGGRAPH 2024, discussing the future of AI and technological advancements. Image courtesy of Nvidia.

“The purpose of guard railing is to prevent it from wandering off and pontificating about all kinds of stuff you ask it about. It will focus only on what it’s been trained to do, aligned to perform, and has deep knowledge of,” Huang said.
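In its simplest form, a guardrail is just a check that sits in front of the model and refuses prompts outside the assistant’s domain. The toy sketch below uses a keyword check as a stand-in for a real topic classifier; it is not Nvidia’s guardrail implementation.

```python
# Toy sketch of a topical guardrail: refuse prompts outside the assistant's domain.
# The keyword check stands in for a real topic classifier; this is not Nvidia's
# guardrail implementation.
import re

ALLOWED_TOPICS = {"order", "refund", "shipping", "warranty", "return"}

def within_domain(prompt: str) -> bool:
    """Crude topical check: does the prompt touch anything this agent is trained on?"""
    words = set(re.findall(r"[a-z]+", prompt.lower()))
    return bool(words & ALLOWED_TOPICS)

def guarded_answer(prompt: str) -> str:
    if not within_domain(prompt):
        return "I can only help with questions about orders, shipping, and returns."
    return f"(model answer about: {prompt})"  # hand off to the underlying model

print(guarded_answer("Give me some stock tips"))   # refused: off-domain
print(guarded_answer("Where is my refund?"))       # answered: in-domain
```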

Lastly, there’s retrieval-augmented generation, where data is vectorised, that is, embedded in a form that captures its meaning.

“It might be all of the articles that you’ve ever written, so now the AI becomes authoritative about you. It could essentially be a chatbot version of you. Everything I’ve ever written or ever said could be vectorised and turned into a semantic database. Before responding, the AI would look at your prompt, search the appropriate content from the vector database, and then augment it in this generative process,” the CEO detailed.
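The mechanics behind that description are straightforward: embed the documents, embed the query the same way, retrieve the closest matches, and prepend them to the prompt before generation. The sketch below illustrates the flow with a toy hash-based embedding standing in for a real embedding model; the documents and query are made up.

```python
# Illustrative sketch of the retrieval-augmented generation flow described above:
# embed ("vectorise") the documents, embed the query the same way, retrieve the
# closest matches, and prepend them to the prompt before generation. The
# hash-based embedding is a toy stand-in for a real embedding model.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    """Toy embedding: bag-of-words hashed into a fixed-size unit vector."""
    vec = np.zeros(dim)
    for word in text.lower().split():
        vec[hash(word) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

# "Everything I've ever written" becomes a small semantic database.
documents = [
    "Our return policy allows refunds within 30 days of purchase.",
    "Standard shipping takes three to five business days.",
    "Warranty claims require the original receipt.",
]
index = np.stack([embed(doc) for doc in documents])

def retrieve(query: str, k: int = 1) -> list:
    """Return the k documents most similar to the query (cosine similarity)."""
    scores = index @ embed(query)
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

query = "How long does shipping take?"
context = "\n".join(retrieve(query))
augmented_prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(augmented_prompt)  # this augmented prompt is what gets sent to the generator
```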

Power concerns

Amid all the innovations, concerns remain about generative AI’s power consumption. Studies indicate that a single ChatGPT query consumes about 10 times the electricity of a Google search. With data centres taking up between 3% and 6% of the global energy supply, how can generative AI be justified as an enterprise investment?

“Let’s suppose that generative AI consumes 1% of the world’s energy. Even if data centres consume 4% of global energy, remember that the goal of generative AI isn’t training but inference. Ideally, we create new models to predict weather, discover new materials, optimise our supply chains, and reduce the energy consumed and the gasoline wasted as we deliver products. Hence, the goal is actually to reduce the energy consumed by the other 96%. Very importantly, you have to think about generative AI from a longitudinal perspective,” Huang noted.
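The arithmetic behind that argument is simple: if generative AI’s share of global energy is small, even a modest AI-driven efficiency gain across the other 96% outweighs it. A back-of-envelope version, where the 3% savings rate is an assumed figure rather than one Huang cited:

```python
# Back-of-envelope version of Huang's argument, with illustrative numbers:
# the 1% AI share and the "other 96%" are his figures; the 3% efficiency gain
# that AI enables elsewhere is an assumed value, not one he cited.
ai_share = 0.01         # share of global energy spent on generative AI
other_share = 0.96      # share of global energy used outside data centres
assumed_savings = 0.03  # assumed AI-driven efficiency gain across that 96%

net_change = ai_share - other_share * assumed_savings
print(f"Net change in global energy use: {net_change:+.2%}")  # -1.88%, i.e. a net saving
```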

Moreover, the end of CPU scaling and the rise of accelerated computing mark a significant paradigm shift, one that Huang believes will save substantial energy and resources.

“Everyone is moving from CPUs to accelerated computing because they want to save energy. Accelerated computing can help you save 20 to 50 times the energy while doing the same processing tasks. Therefore, the first thing we need to do as a society is accelerate every application. If you’re doing Spark data processing, run it with accelerated Spark to reduce the energy required by 20 times. If you’re doing SQL processing, use accelerated SQL to cut the power consumption by 20 times,” he recommended.
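Taken at face value, the claimed factor translates directly into energy per job. A trivial illustration, using a made-up workload size and the low end of Huang’s 20 to 50 times figure:

```python
# Trivial illustration of the claimed reduction from moving a job to accelerated
# computing. The 20x factor is the low end of Huang's 20-50x claim; the job's
# baseline energy figure is made up.
cpu_energy_kwh = 1_000    # hypothetical energy for a CPU-only Spark job
reduction_factor = 20     # low end of the claimed 20-50x saving
accelerated_energy_kwh = cpu_energy_kwh / reduction_factor

print(f"Accelerated run: {accelerated_energy_kwh:.0f} kWh "
      f"(saves {cpu_energy_kwh - accelerated_energy_kwh:.0f} kWh per job)")
```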

AI reality

AI has been a game-changer for companies like Nvidia, Huang said, transforming not only the way they work but also the solutions they deliver to people and enterprises.

“We invent tools here, and these tools either accelerate our work, collaborate with us to do better or larger-scale work, or enable us to accomplish tasks that were previously impossible. What you’re going to see is that generative AI will become more controllable than before. Now we’re using Omniverse with generative AI to better control generative images and reduce hallucinations,” he said.

Pretty soon, all jobs, including his own, will be modified but not replaced by AI, Nvidia’s Chief Executive acknowledged.

“I’m going to be prompting a whole bunch of AIs. We will all soon have AI assistants. Our software programmers already have AIs that help them program. All of our software engineers have AIs that help them debug software. We have AIs that help our chip designers in designing chips. None of the work that we do would be possible anymore without generative AI,” Huang concluded.