The chatbot Claude has been sitting in the back of the class while other AI chatbots like ChatGPT have fielded teachers' questions, even if those bots' answers are often garbled or outright wrong. Now Claude is ready to speak up, sticking a "2" next to its name while adding an interface for anybody to use.

In an announcement post published Tuesday, Claude developer Anthropic said its new chatbot model, Claude 2, is available for anybody to try. One of several consumer-facing AI chatbots, Claude 2 is billed as an evolution of the company's earlier, less capable "helpful and harmless" language assistants. Anthropic said the new model responds faster and gives longer answers. The chatbot is also now available through an API and a new beta website; previously, the beta was only accessible to a handful of users.
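If you'd rather poke at the API than the beta site, a minimal call through Anthropic's Python SDK looks something like the sketch below. This assumes you've installed the `anthropic` package and set an API key; the prompt is our own example, not one from Anthropic's announcement.

```python
# Minimal sketch of calling Claude 2 via the anthropic Python SDK's
# text-completions interface. Assumes ANTHROPIC_API_KEY is set in the
# environment; the prompt text is a hypothetical example.
from anthropic import Anthropic, HUMAN_PROMPT, AI_PROMPT

client = Anthropic()  # picks up ANTHROPIC_API_KEY from the environment

completion = client.completions.create(
    model="claude-2",             # the new model announced Tuesday
    max_tokens_to_sample=300,     # cap on the length of Claude's reply
    prompt=f"{HUMAN_PROMPT} Summarize this article in two sentences.{AI_PROMPT}",
)

print(completion.completion)      # the model's text response
```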

Now Anthropic claims its AI is even better. The company said Claude 2 scored 76.5% on the multiple-choice section of the bar exam, compared to Claude 1.3's 73%. The new version also scored in the 90th percentile on the GRE reading and writing exams. The extra emphasis on the chatbot's test-taking ability is similar to claims OpenAI made when it released its GPT-4 large language model.

The company said Claude 2 also writes code better than previous versions. Users can upload documents to Claude, and the developers gave the example of the AI adding interactivity to a static map based on a block of static code.

Anthropic was funded by Google back in February to the tune of $300 million to work on its more "friendly" AI. The biggest claim about Claude is that the chatbot is less likely to produce harmful outputs or otherwise "hallucinate," that is, spit out incoherent, wrong, or otherwise illegitimate answers. The company has tried to position itself as the "ethical" player among the corporate AI kingdoms. Anthropic even has its own "constitution" meant to keep its chatbots from running amok.

Is Claude 2 Safer, or Does It Just Limit Itself More?

With Claude 2, the company is still trying to claim it's the more considerate player compared to all the other corporate AI integrations. The devs said Claude is even less likely to offer harmful responses than before. Gizmodo tried inputting several prompts asking it to create bullying nicknames, but the AI refused. We also tried a few classic prompt injection techniques to convince the AI to override its restrictions, but it simply reiterated that the chatbot was "designed to have helpful conversations." Previous versions of Claude could write poetry, but Claude 2 flat out refuses.

With that, it's hard to test some of Claude 2's capabilities, since it refuses to provide even basic information. Previous tests of Claude from AI researcher Dan Elton showed it would fabricate a fake chemical when prompted. Now it will simply refuse to answer that same question. That could be purposeful: ChatGPT maker OpenAI and Meta have been sued by multiple groups claiming the AI makers stole works used to train their chatbots. ChatGPT recently lost users for the first time in its lifespan, so it may be time for others to offer an alternative.

The chatbot also refused to write anything longform, like a fiction story or a news article, and would even refuse to offer information in anything other than a bullet-point format. It could write some content in a list, but as with all AI chatbots, it would still provide some inaccurate information. If you ask it for a chronological list of all the Star Trek movies and shows along with their years in the timeline, it will complain that it does not have "enough context to provide an authoritative chronological timeline."

Still, there's not a lot of info about what was included in Claude's training data. The company's white paper on its new model mentions that the chatbot's training data now includes data from websites as recent as 2022 and early 2023, though even with that newer data "it may still generate confabulations." The training sets used for Claude were licensed from a third-party business, according to the paper. Beyond that, we don't know what kinds of sites were used to train Anthropic's chatbot.

Anthropic said it tested Claude by feeding it 328 "harmful" prompts, including some common "jailbreaks" found online that try to get the AI to defeat its own constraints. In four of those 300-plus cases, Claude 2 gave a response the devs deemed harmful. And while the model was on the whole less biased than Claude 1.3, the developers did note that it may only score as more accurate because Claude 2 simply refuses to answer certain prompts.

As the company has expanded Claude's ability to comprehend data and answer with longer outputs, it has also cut off its ability to respond to some questions or fulfill some requested tasks entirely. That sure is one way to limit an AI's harms. As reported by TechCrunch based on a leaked pitch deck, Anthropic wants to raise close to $5 billion to create a massive "self-teaching" AI that still makes use of the company's "constitution." In the end, the company doesn't really want to compete with ChatGPT; it would rather make an AI that builds other AI assistants, ones that can generate book-length content.

Claude's newer, younger brother doesn't have what it takes to write a poem, but Anthropic wants Claude's children to write as much as they can, and then sell it for cheap.
