Generative AI has been a game-changer for many enterprises, especially in the areas of automation and customer experience. However, many concerns still surround the technology, from hallucination to biases resulting from manipulation.
One company, Anthropic, aims to define the future of work through an honest AI engine. In early 2023, the company launched its AI assistant, Claude, after a comprehensive review process and with help from the communication platform Slack.
During a fireside chat at Salesforce’s most recent Dreamforce event, Anthropic Co-Founder and CEO Dario Amodei shared insights into the company’s journey in creating Claude, what enterprises should consider before deploying AI, and how the future of work is evolving.
Covering all bases
According to Amodei, Anthropic had developed an early version of Claude by mid-2022 but postponed the launch to enhance its safety features.
“Some of the things we worked on have included a method called constitutional AI for specifying the values of an AI system. We’ve implemented a lot of methods to make our AI system Claude safer. There was a recent test where people tried adversarially to get AI models to do or say bad things, or help with illegal activities. Other models like Bard and ChatGPT were susceptible to this around 40% of the time, while Claude was susceptible only around 1% or 2% of the time. So our investment in safety has really paid off,” he said.
A common concern with AI systems, Amodei noted, arises when a prompt calls for a choice or value judgement, such as asking the AI what it thinks of a particular political candidate.
“Constitutional AI is sort of our solution to that, where we have an explicit set of principles that describes what the AI system should do. Some examples of principles in our constitution are: the AI system should respect human rights; and the AI system shouldn’t have a political point of view, but should offer information about the different possible views and arguments that people might make for them,” he explained.
To train its AI engine, the company drew on the UN Universal Declaration of Human Rights, Apple’s terms of service, and other references previously used by chatbot developers.
“That was just our first attempt to come up with things that are hard to object to. We’re doing a second revision of that, where we intend to include some notion of democratic participation, something to get at the idea of legitimacy, that society should have a say in these things,” the CEO shared.
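In rough terms, the critique-and-revision loop at the heart of constitutional AI might look like the sketch below. This is a minimal illustration, not Anthropic’s implementation: the `generate` parameter is a hypothetical stand-in for any language-model call, the principle texts paraphrase Amodei’s examples above, and the subsequent training step that turns these revisions into reinforcement-learning preference data is omitted.

```python
# Minimal sketch of a constitutional-AI critique-and-revision loop.
# `generate` is a hypothetical stand-in for a language-model call;
# the principles paraphrase the examples Amodei gives above.

PRINCIPLES = [
    "The response should respect human rights.",
    "The response should not take a political point of view; it should "
    "present the different possible views and the arguments for them.",
]

def constitutional_revision(prompt: str, generate) -> str:
    response = generate(prompt)
    for principle in PRINCIPLES:
        # Ask the model to critique its own answer against the principle...
        critique = generate(
            f"Principle: {principle}\n"
            f"Prompt: {prompt}\nResponse: {response}\n"
            "Identify any way the response violates the principle."
        )
        # ...then to rewrite the answer so it satisfies the principle.
        response = generate(
            f"Prompt: {prompt}\nResponse: {response}\nCritique: {critique}\n"
            "Rewrite the response to address the critique."
        )
    return response
```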
Deployment strategy
While many businesses are fascinated by what AI can offer them, there is an equal amount of fear and uncertainty, which, if left unchecked, can become a roadblock to innovation.
To this end, Amodei listed three factors enterprises should consider before deploying AI:
- Privacy and security.
- Ensuring the AI system behaves safely.
- Scoping of use cases.
For privacy and security, Anthropic is collaborating with cloud providers AWS and Google.
“These methods ensure privacy and security, providing all necessary cloud certifications and access to our model. You can even have single-tenant access. It’s similar to when your company just spins up an AWS instance, it just so happens that we’ve shipped a copy of our model onto it. That has a number of these privacy and security benefits: we don’t train on enterprise data, unless enterprises particularly want that and are allowed to do that for their particular use case. We don’t retain data upon request, unless there’s a trust and safety violation that we have to monitor for. We’ve worked very hard on that, and we’re always working to make the security measures better,” he elaborated.
In terms of ensuring the AI system behaves safely, Amodei said that around 20 to 25% of his company is currently working on making the models more honest.
“Hallucination is a big problem. If you think of medical or legal applications, or just any kind of professional or knowledge-work-based area, precision is very important,” he remarked.
Finally, clearly defining the application areas will ensure an enterprise can anticipate future problems and setbacks, and can respond to them accordingly.
“Let’s say you’re a company deploying something in the finance, legal, or health and medical sectors. You have to consider what happens if the model is wrong 1% of the time, as is likely. How do you build the infrastructure around it? A deployment that’s perfectly safe and ethical, with the right guardrails, can become unethical if the approach is, ‘We’ve provided the model to our customers and what they get is what they get.’ Thus, it’s critical to build a strong architecture to support the deployment,” Amodei added.
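As a toy illustration of that last point, a “strong architecture” might wrap the model in a review gate rather than passing answers straight to customers. The risk score, threshold, and handler functions below are hypothetical assumptions, not anything Anthropic describes:

```python
# Hypothetical review gate around a model that, per Amodei, may be
# wrong roughly 1% of the time. High-stakes answers (legal, medical,
# financial) go to a human reviewer instead of straight to the user.

RISK_THRESHOLD = 0.3  # illustrative cut-off, tuned per use case

def deliver_answer(answer: str, risk_score: float,
                   send_to_customer, queue_for_human_review):
    """Route a model answer based on an upstream risk score (0.0-1.0)."""
    if risk_score > RISK_THRESHOLD:
        # Don't let "what they get is what they get" stand for
        # high-stakes output; a person signs off first.
        queue_for_human_review(answer)
    else:
        send_to_customer(answer)
```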
No slacking off
During Claude’s development, researchers initially communicated with the model through Slack. The company built a Slack bot and discovered it could interact with Claude in more ways than one.
“You could have threaded conversations with the model. You could have different people respond to and talk to the model, and have them participating in a conversation. You could have the model edit its own responses, or give two possible responses and have humans react,” Amodei said.
Before launching a direct-to-consumer product, the first iteration of Claude was a Slack bot, which the company found very helpful internally.
“As the company grew, the number of Slack channels proliferated, and it became increasingly harder to keep track of all the information. We use Claude in Slack to summarise the content of Slack channels. Every day, there’s a special channel in our Slack called the Anthropic Times, which scans through all the hundreds of Slack channels that new and long-standing employees, including me, have a hard time monitoring, and it summarises them and says these are the top 20 things that happened today. You can just read the bulletin. The human doesn’t have to read everything,” the CEO recalled.
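A bot along the lines of the Anthropic Times could be sketched with the public slack_sdk and anthropic Python packages, as below. The channel IDs, model ID, and prompt wording are illustrative assumptions rather than Anthropic’s internal code:

```python
# Hypothetical sketch of a daily "Anthropic Times" style digest using
# the public slack_sdk and anthropic packages. Channel IDs, the model
# ID, and the prompt are illustrative, not Anthropic's internal code.
import os
from slack_sdk import WebClient
from anthropic import Anthropic

slack = WebClient(token=os.environ["SLACK_BOT_TOKEN"])
claude = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

def daily_bulletin(channel_ids: list[str]) -> str:
    digests = []
    for channel_id in channel_ids:
        # Pull recent messages from each channel to summarise.
        history = slack.conversations_history(channel=channel_id, limit=200)
        text = "\n".join(m.get("text", "") for m in history["messages"])
        digests.append(f"Channel {channel_id}:\n{text}")

    summary = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model ID
        max_tokens=1024,
        messages=[{
            "role": "user",
            "content": "Summarise the top things that happened today in "
                       "these Slack channels:\n\n" + "\n\n".join(digests),
        }],
    )
    return summary.content[0].text

# Post the bulletin to a dedicated channel each morning.
slack.chat_postMessage(channel="#anthropic-times",
                       text=daily_bulletin(["C0123456789"]))
```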
Future forward
With Anthropic determined to further improve its Claude platform, one of the biggest missing pieces, according to Amodei, is the idea of connecting to external content. For him, this is key to making generative AI a central part of the future of work.
“Now you have a chatbot that can answer generic questions. With all the Slack channels that we have, how do I take everything that the company knows and use it to innovate? The ability to connect to all these external resources, both from an AI perspective—allowing the model to integrate that information—and from an interface perspective—providing a platform for this integration—is key,” he said.
On the AI side, Anthropic is doubling down on retrieval-augmented generation, where the model is able to take external content into account.
“We’re working hard on search, the model learning how to write code, and just different types of fine-tuning. So just this whole zoo of different ways to interact with the model,” Amodei explained.
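In its simplest form, retrieval-augmented generation fetches the passages most relevant to a question from an external corpus and conditions the model’s answer on them. The sketch below shows that shape; the `search` helper is a hypothetical stand-in for whatever embedding model and vector store a deployment actually uses:

```python
# Minimal retrieval-augmented generation sketch. `search` is a
# hypothetical stand-in for an embedding model plus vector store;
# only the overall shape reflects the technique described above.
from anthropic import Anthropic

claude = Anthropic()

def answer_with_context(question: str, search) -> str:
    # Retrieve the passages most relevant to the question from an
    # external corpus (a company wiki, Slack history, contracts, ...).
    passages = search(question, top_k=5)
    context = "\n\n".join(passages)

    response = claude.messages.create(
        model="claude-3-5-sonnet-latest",  # illustrative model ID
        max_tokens=512,
        messages=[{
            "role": "user",
            "content": "Answer the question using only the context below.\n\n"
                       f"Context:\n{context}\n\nQuestion: {question}",
        }],
    )
    return response.content[0].text
```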
On the interface side, Anthropic is banking on Slack as a natural platform to enable the AI model to behave like a co-worker, a virtual assistant, or even a Chief of Staff.
“I really see a lot of potential here where two years from now, you have a bunch of employees, but also you have Claude that specialises in legal contract content that helps your lawyers out; Claude that specialises in AI research; and Claude that specialises in recruiting and keeping track of the number of candidates,” the CEO continued.
To conclude, Amodei shared a final piece of advice for enterprises embarking on their AI journey.
“You should build for where the technology is going to be a year from now, because the pace of progress is very fast. It’s easy to build something, but to really build out an ambitious deployment of the model within your company takes a long time. By the time you do it, there’s going to be a better model we can swap in,” he said.