Google is facing accusations that it used Anthropic’s Claude AI to test its Gemini AI model without permission. According to internal correspondence seen by TechCrunch, contractors working on the Gemini project have been comparing its responses against those generated by Claude. Google has not confirmed whether it obtained Anthropic’s approval to use Claude in these tests.
In the competitive world of AI, companies often benchmark their models against rivals’, typically by running industry-standard tests. The Gemini team, however, appears to be comparing its model’s output directly with Claude’s: contractors evaluate the responses on factors such as truthfulness and verbosity, with up to 30 minutes to judge which model, Gemini or Claude, provides the better response.
Internal correspondence reveals that contractors recently noticed references to Claude in the testing process; one output even included the message, “I am Claude, created by Anthropic.” The contractors also observed that Claude placed a strong emphasis on safety. In one case, Claude refused to answer a prompt involving role-playing, while Gemini’s response to the same prompt was flagged for containing inappropriate content, including nudity and bondage.
Anthropic’s terms of service prohibit using Claude to develop competing products or train other AI models without its approval. Google, a major investor in Anthropic, has not disclosed whether it obtained such permission.
Shira McNamara, a spokesperson for Google DeepMind, which oversees the Gemini project, stated that the company does compare model outputs as part of its evaluation process. However, McNamara firmly denied any suggestion that Google is using Claude to train Gemini, calling such claims “inaccurate.”
The controversy comes on the heels of a report in which contractors working on Gemini raised concerns about the model’s accuracy on sensitive topics such as healthcare, with some worried that the AI could produce misleading information on critical subjects.