HPE’s AI CTO on the future of swarm learning


Every industry has its unique set of recurring challenges. For the financial sector, it’s fraud. For healthcare, it’s getting one step ahead of diseases.

Over the years, data has been instrumental in solving some of these problems. At the same time, to keep people’s personal information safe, enterprises must adhere to data privacy laws wherever they operate.

From an operational standpoint, this has hindered many opportunities for data collaboration among industry peers. But what if there were a way to facilitate collective learning without violating data regulations?

Dr Eng Lim Goh, Senior Vice President and Chief Technology Officer, AI at Hewlett Packard Enterprise (HPE), believes that this is where swarm learning (SL), which utilises blockchain, can make a huge splash.

“Let’s take credit card companies, for example. They don’t want to share customer data, but they want to share fraud prevention measures. They want to fight fraud together. The question is, who is going to be the central coordinator here?” Dr Goh said during the HPE Discover More Singapore 2022 conference.

Why blockchain?

Realising that territorial jurisdictions posed a huge roadblock to SL, HPE started working in 2018 on replacing the central coordinator with blockchain.

“(There’s) no more central coordinator, because all of you are equal. All you have to do is agree on the smart contract and the private permissioned blockchain,” Dr Goh said.

“But how do you rotate? It should be random, and it should be a round robin, and if the round-robin entity doesn’t respond, who is the next (one)? You just set the rules and launch it as a smart contract, and basically this network goes off on its own, the same way blockchain goes off on its own,” he revealed.
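HPE’s actual smart contract is not spelled out here, but the selection rule Dr Goh describes, a pick every peer can compute for itself plus a round-robin fallback when the chosen peer is silent, can be sketched in a few lines of Python. The peer names, function names, and responsiveness check below are illustrative assumptions, not HPE code:

```python
import hashlib

PEERS = ["bank_a", "bank_b", "bank_c", "bank_d"]  # hypothetical participants

def pick_leader(peers, round_no):
    # Every peer hashes the shared round number, so all of them
    # independently arrive at the same "random" pick with no coordinator.
    digest = hashlib.sha256(str(round_no).encode()).hexdigest()
    return int(digest, 16) % len(peers)

def leader_with_fallback(peers, round_no, is_responsive):
    # If the chosen peer does not respond, fall back round-robin
    # to the next one, per the rules agreed up front.
    start = pick_leader(peers, round_no)
    for offset in range(len(peers)):
        candidate = peers[(start + offset) % len(peers)]
        if is_responsive(candidate):
            return candidate
    raise RuntimeError("no responsive peer this round")

# Example: bank_b is offline, so the rules skip to the next peer.
print(leader_with_fallback(PEERS, round_no=7, is_responsive=lambda p: p != "bank_b"))
```

Because the rule is deterministic given the round number, no single party ever needs to be trusted to run the draw; each peer can verify the outcome on its own.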

The CTO described SL as a “decentralised machine-learning approach that unites edge computing, blockchain-based peer-to-peer networking, and coordination while maintaining confidentiality.”

According to him, SL goes beyond federated learning because it does away with the central coordinator altogether.
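To make the contrast concrete: in federated learning, a central server gathers and averages every participant’s model updates; in swarm learning, that merge is performed by whichever peer the blockchain rules elect for the round. A minimal numpy sketch of the merge step itself, with the peer count and weight shapes made up for illustration:

```python
import numpy as np

def merge(peer_weights):
    # The elected leader averages parameters from all peers.
    # Only model weights cross the network; the raw customer
    # data that produced them never leaves each peer.
    return np.mean(peer_weights, axis=0)

# Three peers each train locally on their own confidential data...
local_weights = [np.random.default_rng(i).normal(size=8) for i in range(3)]
# ...then share just the weights for this round's merge.
merged = merge(local_weights)
```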

The evolution of AI

Meanwhile, enterprises are also increasingly exploring new AI use cases in conjunction with other technologies.

Dr Goh, however, sounded a note of caution about the data-first modernisation drive that is currently gaining ground.

“We need to first understand the differences between what machines are and what humans are. Both have their issues, and both have their positives, but we need to understand the differences so that as we build these machines, we use them in a way that is acceptable to humans,” the CTO noted.

“To understand the differences, (look at) the data (we) feed the machines. We are very careful with the textbooks we develop for grade schools and with children’s books, because we could influence (young people). We should treat this machine as very young. You have to be very careful about what you feed it, in the same way we’re very careful with such textbooks,” he remarked.

On the flipside, Dr Goh also observed that a lot of organisations are exploring ways to deploy AI more efficiently and sustainably.

“These huge models require massive amounts of energy to learn and to train. Now, they also need a lot of energy to give you answers. To be practical and deploy this in a more edge kind of way, for example, ‘Okay, my model is large now, can (it) stay the same (size) and get smarter?’ Not just by learning more data, but (by being) broader. Now, they’re (businesses) looking at ways like (how) the human brain works. Between ages three and 30, our brain prunes the connections we don’t use,” he shared.
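The pruning Dr Goh alludes to has a direct machine-learning analogue in magnitude pruning, where the smallest, least-used connections are zeroed out so the model does more with the same size. A toy numpy sketch, with the layer size and pruning fraction chosen arbitrarily for illustration:

```python
import numpy as np

def prune(weights, fraction=0.5):
    # Zero out the smallest-magnitude fraction of connections,
    # loosely analogous to the brain discarding unused synapses.
    threshold = np.quantile(np.abs(weights), fraction)
    mask = np.abs(weights) >= threshold
    return weights * mask, mask

layer = np.random.default_rng(0).normal(size=(256, 256))
pruned, mask = prune(layer, fraction=0.5)
print(f"connections kept: {mask.mean():.0%}")  # roughly 50%
```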

Aside from this, enterprises are also looking for ways to avoid training AI models from scratch.

“When we learn something, we don’t learn from scratch. When I was young, I learned from looking at square and round objects. I didn’t really learn from scratch (what) round objects (were), because I have a concept of objects. In a lot of these multimodal LLMs (large language models), in the recent past, we learned from scratch,” Dr Goh explained.

“You don’t leverage what you’ve learned before to learn a new task. To learn a new task, I learned from scratch. Now they (businesses) are saying, ‘Is there a way to start, not from scratch, but from what you’ve learned, albeit in a different task, and grow from that?’ So, they save energy in learning also. These companies are looking at ways to make their models smarter and more capable, without having to grow the model exponentially as we’ve done in the last two years,” he continued.
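What he is describing is commonly called transfer learning: reuse what a model has already learned on one task and train only a small new portion for the next. A sketch of one common pattern using PyTorch and torchvision, where the backbone choice, class count, and learning rate are illustrative assumptions rather than anything HPE-specific:

```python
import torch
import torch.nn as nn
from torchvision import models

# Reuse a backbone already trained on a previous task (ImageNet here).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze what was learned before: these weights are reused, not retrained.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer for the new task (say, 10 classes).
model.fc = nn.Linear(model.fc.in_features, 10)

# Only the new head is optimised, so training costs a fraction of
# the energy of learning the whole model from scratch.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
```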

Ultimately, for companies like HPE, constant conversations with their customers centre around the evolution and effectiveness of the technology deployed, and whether the technology can be further developed to meet future needs.

“We develop AI algorithms where we see a need, like swarm learning, blockchain, (and) federated learning, but most of our energy is in deployment with services, HPE GreenLake Services, deploying AI technology and then working with (customers). Then, after the successful deployment and production of all these AI technologies, after a while, we would sit back and have such a discussion,” said Dr Goh.

“‘What does this mean?’ This keeps going for the next five years, because the corporation needs to know, and the employees want to know. ‘Is it getting to a point where it is affecting some jobs?’ These are the kinds of questions they have to look ahead to answer. And these would be the discussions we normally have with them,” he concluded.