In a landmark ruling that could shape the future of artificial intelligence and intellectual property law, a US federal judge has decided that Anthropic did not break the law by using copyrighted books to train its system.
However, Anthropic – the company that developed the chatbot Claude – could still face severe penalties related to how it handled these books.
Anthropic Also Crossed Legal Limits
The ruling was issued by Judge William Alsup in San Francisco, who found that Anthropic's use of the work of authors Andrea Bartz, Charles Graeber and Kirk Wallace Johnson in training its AI model was valid under the principle of fair use.
This doctrine allows limited use of copyrighted content without consent, and played a central role in Alsup’s ruling – one of the few to address fair use in the era of generative AI.
“Like any reader who aspires to be a writer, Anthropic’s large language models are trained not to copy or replace, but to create something different,” Alsup wrote.
While the judge upheld Anthropic’s use of books to train its AI, he also noted that the company had broken the law by storing more than 7 million pirated titles in a “central library.” This action did not fall within the scope of fair use.
A trial is scheduled for December to determine how much, if anything, Anthropic should pay the authors. Under US copyright law, damages can reach $150,000 per work found to have been willfully infringed.
Anthropic has not commented publicly on the ruling, but the outcome splits the case into two parts: the training part, which is legally protected, and the storage part, which is not.
Anthropic Verdict: A Victory for the AI Industry?
The lawsuit is part of a wave of cases filed by authors and media organizations against companies such as OpenAI, Meta, and Microsoft for building AI systems on copyrighted material without permission. The core issue: do these companies have the right to exploit copyrighted content to create tools that compete with the original authors?
Alsup's ruling bolsters AI developers' argument that their models generate groundbreaking creative content and should not be required to pay every copyright owner whose work was used in the training process.
“Like any reader hoping to become a writer, Anthropic's model was trained on these books not to copy but to create entirely new content.”
– Alsup
Anthropic argued in court that copying books is essential to studying writing styles and extracting non-copyrighted elements, such as structure and tone, to help AI generate unique content.
The company believes that this form of learning fosters human creativity – the goal that copyright law aims to achieve.
Alsup, however, criticized Anthropic for collecting digital copies in violation of copyright. Although the company argued that the source of the material did not matter, the judge rejected that view.
In the ruling, he said: “This order casts doubt on the ability of any infringing party to demonstrate that downloading works from infringing sites, when they could have been lawfully purchased or accessed, was actually necessary for a subsequent legitimate use.”
In other words, while the ultimate goal may be defensible, Anthropic’s approach to sourcing the data is not. This distinction is likely to influence how AI companies collect training data in the future, encouraging more legal and ethical data-collection methods.
With a slew of copyright lawsuits targeting AI companies, the decision could set an important precedent. A December court hearing will decide whether Anthropic’s way of hosting content merits financial penalties, and how much.
Source: https://tintucbitcoin.com/phan-quyet-co-loi-cho-anthropic/