Binance Square

Latest news on Artificial Intelligence (AI) in the crypto sector

--

Meta Wins Legal Battle Over AI Training and Copyright Claims

According to ShibDaily, Meta, the company behind Facebook, has emerged victorious in a significant legal case concerning copyright infringement. A U.S. federal judge ruled that Meta did not violate copyright laws when it used works from 13 authors to train its AI systems without obtaining prior consent. U.S. District Judge Vince Chhabria granted summary judgment in favor of Meta, stating that the authors who filed the lawsuit failed to provide sufficient evidence that the company's use of their books caused them any harm.

In 2023, a group of authors, including comedian Sarah Silverman and writer Ta-Nehisi Coates, accused Meta of copyright infringement for allegedly using their published works to train its large language models without permission. However, Judge Chhabria found that the authors did not demonstrate that Meta's AI systems would lead to market dilution by generating content that directly competed with their work. Consequently, he determined that Meta's use of the copyrighted material qualified as "fair use," a legal principle that allows limited use of protected content without authorization, thereby shielding the company from copyright liability.

Despite ruling in Meta's favor, Judge Chhabria acknowledged that using copyrighted material without permission to train large language models, such as those powering tools like OpenAI's ChatGPT, could be unlawful in many circumstances. His remarks offered some reassurance to creative professionals who argue that such practices infringe on their intellectual property rights.

Victoria Aveyard, author of the bestselling Red Queen series, has been among the most vocal critics of Meta's AI training practices. In a March TikTok post, Aveyard accused the company of using her work without her consent, without compensation, and against her will, referencing a database published by The Atlantic that allows authors to check whether their books were included in AI training datasets.
Meanwhile, Meta welcomed the court’s decision, with a company spokesperson telling Reuters that it supports the principle of fair use, calling it a vital legal framework for building transformative AI technology. Artificial intelligence systems have recently become the focus of numerous legal disputes concerning alleged copyright infringement. Among these, OpenAI and The New York Times are engaged in an ongoing legal battle, with the Times asserting that OpenAI used its articles without authorization. OpenAI maintains that its activities fall within the bounds of fair use and emphasizes the significance of advancing AI technology. In December 2024, OpenAI CEO Sam Altman commented on the ongoing lawsuit, asserting that the news organization is on the wrong side of history.
--

AI Activity on Blockchains Sees Significant Growth in 2025

According to Cointelegraph, artificial intelligence activity on blockchains has experienced a substantial increase since the beginning of 2025, driven by heightened funding and user engagement with this emerging technology. Blockchain analytics platform DappRadar reports that AI-related onchain activity has surged by 86% this year, with approximately 4.5 million daily unique active wallets participating in AI decentralized applications (DApps).

The rise in daily users has expanded AI app market share from 9% at the start of the year to 19%, closely trailing blockchain gaming, which holds a 20% share. DappRadar analyst Sara Gherghelas emphasizes that the growth of AI is not merely a result of hype but signifies a fundamental shift in user interaction with decentralized applications. AI agents are increasingly serving as a new onchain interface layer, whether through DeFi copilots, social agents, or autonomous gaming assistants.

In May, DappRadar predicted that AI agent usage, which involves programs capable of autonomously executing blockchain actions like trading, would soon surpass gaming, a sector that has traditionally dominated the DApp ecosystem. The report highlights that AI agent funding has risen in 2025, with $1.39 billion raised by AI agent projects, marking a 9.4% increase compared to 2024. Although this figure is still lower than the funding received by companies like OpenAI, it is noteworthy that AI agent funding now rivals or exceeds other Web3 verticals such as blockchain gaming.

Gherghelas suggests that investors in the Web3 space increasingly view AI agents as a new primitive capable of reshaping user interactions with protocols, navigating DApps, and even automating personal financial strategies. She anticipates that 2025 could be the first year AI agents attract more capital than any other Web3 vertical.

DappRadar's data from January to June reveals that most AI DApp users are located in Europe, accounting for 26% of all interactions.
The largest share of users, at 33%, comes from unspecified regions and users utilizing VPNs or other anonymized sources. Asia closely follows Europe with just under 22% of users, while North America accounts for 15.8%. Gherghelas notes that the global distribution of AI users indicates that AI agents are not a localized phenomenon, with diverse demand spanning continents. Whether managing trades in Asia, representing users in Europe, or interacting with players in North America, AI agents are increasingly becoming a cross-continental presence.
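The figures above can be sanity-checked with some back-of-the-envelope arithmetic. The sketch below uses only numbers stated in the report (the $1.39 billion raised, the 9.4% year-over-year increase, and the 4.5 million daily wallets at a 19% market share); the 2024 baseline and the total daily-wallet estimate are derived values, not figures from the article.

```python
# Back-of-the-envelope check of the DappRadar figures cited above.
# Inputs come from the report; derived values are rough estimates only.

funding_2025 = 1.39e9   # USD raised by AI agent projects in 2025
growth = 0.094          # reported 9.4% increase vs. 2024

# Implied 2024 funding baseline (derived, not stated in the article)
implied_2024 = funding_2025 / (1 + growth)
print(f"Implied 2024 funding: ${implied_2024 / 1e9:.2f}B")  # ≈ $1.27B

# If 4.5M daily wallets represent a 19% market share, the implied
# total daily unique active wallets across all DApp categories is:
ai_wallets = 4.5e6
ai_share = 0.19
total_wallets = ai_wallets / ai_share
print(f"Implied total daily UAW: {total_wallets / 1e6:.1f}M")  # ≈ 23.7M
```

Both derived numbers are internally consistent with the report's percentages, which is a useful quick check when aggregated analytics figures are quoted second-hand.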
--

CoreWeave in Talks to Acquire Core Scientific Amid AI Infrastructure Expansion

According to Cointelegraph, CoreWeave, a company that transitioned from cryptocurrency mining to AI infrastructure provision, is reportedly in discussions to acquire Core Scientific. This follows an increased bid after a previous offer was rejected last year. The Wall Street Journal reported that the acquisition could be finalized in the coming weeks, although financial specifics remain undisclosed. Any new offer would need to reflect Core Scientific's significant growth over the past year.

The news of the potential acquisition led to a surge in Core Scientific's stock, which rallied over 23%, prompting a temporary halt in trading. Currently, Core Scientific holds a market capitalization of approximately $3.6 billion. The company's stock, listed under the ticker CORZ, experienced a substantial intraday rally.

Last year, CoreWeave proposed a takeover bid of $5.75 per share, valuing Core Scientific at around $1 billion. However, Core Scientific declined the offer, opting instead to strengthen its partnership with CoreWeave, which included a $1.225 billion agreement to bolster infrastructure support for Nvidia GPUs.

CoreWeave's stock, trading under the ticker CRWV, has seen a remarkable increase of nearly 300% this year, elevating its market cap to $78.4 billion. Core Scientific's decision to reject the initial offer proved beneficial, as its stock now trades at nearly three times the original bid. Cointelegraph reported that Core Scientific's first-quarter net income more than doubled, reaching $580 million, although revenue fell short of analyst expectations at $79.5 million, with $67.2 million derived from self-mining.

The company attributed its revenue and mining declines to the Bitcoin network's quadrennial halving in April 2024, which reduced mining rewards from 6.25 BTC to 3.125 BTC. Core Scientific is recognized as the 30th largest corporate Bitcoin holder, possessing 977 BTC according to industry data.
As the landscape of cryptocurrency mining evolves, Core Scientific's strategic decisions and partnerships continue to shape its trajectory in the sector.
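The halving mechanism mentioned above follows a simple schedule: Bitcoin's block subsidy is cut in half at fixed block intervals, which works out to roughly every four years. The sketch below assumes the standard protocol parameters (a 50 BTC initial subsidy and a halving every 210,000 blocks, neither of which is stated in the article) and reproduces the 6.25 → 3.125 BTC drop the article describes.

```python
# Sketch of Bitcoin's halving schedule, assuming the standard protocol
# parameters: 50 BTC initial subsidy, halving every 210,000 blocks.

INITIAL_SUBSIDY = 50.0      # BTC per block at genesis
HALVING_INTERVAL = 210_000  # blocks between halvings (~4 years)

def block_reward(height: int) -> float:
    """Mining reward in BTC at a given block height."""
    halvings = height // HALVING_INTERVAL
    return INITIAL_SUBSIDY / (2 ** halvings)

# The April 2024 halving occurred at block 840,000 (the fourth halving),
# cutting the reward from 6.25 to 3.125 BTC as the article notes:
print(block_reward(839_999))  # 6.25
print(block_reward(840_000))  # 3.125
```

Since miner revenue per block is halved overnight while operating costs are not, a post-halving decline in self-mining revenue of the kind Core Scientific reported is the expected outcome for any miner that has not offset the cut with hashrate or efficiency gains.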
--

Elon Musk's xAI Plans to Retrain Grok Model with Revised Knowledge Base

According to Cointelegraph, Elon Musk has announced that his artificial intelligence company, xAI, will undertake a significant retraining of its AI model, Grok, using a new knowledge base devoid of "garbage" and "uncorrected data." Musk revealed in a post on X that the forthcoming Grok 3.5 model will possess "advanced reasoning" capabilities and aims to "rewrite the entire corpus of human knowledge" by adding missing information and eliminating errors. He emphasized the necessity of this approach, citing the prevalence of "far too much garbage" in existing foundation models trained on uncorrected data.

Musk has consistently criticized rival AI models, such as OpenAI's ChatGPT, for being biased and omitting politically incorrect information. His vision for Grok is to create an "anti-woke" model, free from what he perceives as damaging political correctness. This aligns with his previous actions, such as relaxing content moderation on Twitter after acquiring the platform in 2022, which led to an influx of unchecked conspiracy theories, extremist content, and fake news. To combat misinformation, Musk introduced the "Community Notes" feature, enabling X users to provide context or debunk posts, displayed prominently under offending content.

Musk's announcement has sparked criticism from various quarters. Gary Marcus, an AI startup founder and professor emeritus at New York University, expressed concern over Musk's plan, likening it to a dystopian scenario reminiscent of George Orwell's "1984." Marcus criticized the idea of rewriting history to align with personal beliefs, suggesting it represents a dangerous precedent. Bernardino Sassoli de' Bianchi, a professor at the University of Milan, echoed these sentiments, warning against the manipulation of historical narratives by powerful individuals.
He argued that altering training data to fit ideological perspectives undermines innovation and constitutes narrative control.

In his efforts to reshape Grok, Musk has encouraged X users to contribute "divisive facts" for training the bot, specifying that these should be "politically incorrect, but nonetheless factually true." This call has resulted in a flood of conspiracy theories and debunked extremist claims, including Holocaust distortion, vaccine misinformation, racist pseudoscientific assertions about intelligence, and climate change denial. Critics argue that Musk's approach risks amplifying falsehoods and conspiracy theories under the guise of seeking factual accuracy.
--

OpenAI Reduces Dependency on Scale AI Amid Meta Acquisition

According to Cointelegraph, OpenAI is scaling back its contracts with Scale AI, a data labeling startup recently acquired by Meta. This decision comes shortly after Meta announced a $14.8 billion deal for a 49% ownership stake in Scale AI, marking Meta's second-largest acquisition. As part of the deal, Scale CEO Alexandr Wang will join Meta's experimental AI project, with the companies having announced the agreement on June 12.

Scale AI, founded in 2016 and backed by over 100 investors, provides labeled data crucial for training and enhancing artificial intelligence models. The startup has been a supplier to prominent AI companies such as Anthropic, Cohere, and Adept. In 2019, Scale AI raised $100 million in a Series C funding round, according to PitchBook.

However, OpenAI is now phasing out its reliance on Scale AI's data, seeking more specialized data sources for its AI models. An OpenAI spokesperson revealed that the company began reducing its contracts with Scale over the past year, noting that Scale accounted for only a small portion of OpenAI's data requirements. Google is reportedly another company moving away from contracts with Scale AI, driven by concerns that Meta's acquisition could provide insights into competitors' AI advancements. Reuters reported that this strategic shift is motivated by the potential competitive implications of Meta's involvement with Scale AI.

Despite these changes, Scale interim CEO Jason Droege emphasized that the startup remains an independent entity, asserting that its commitment to protecting customer data remains unchanged. OpenAI is now exploring alternative data suppliers, including emerging companies like Mercor, to support its operations.

Bloomberg highlighted that Scale AI initially employed a large number of contractors to label images and text for early AI systems. Over time, the company transitioned to hiring more educated contractors to contribute to the development of advanced AI models.
This evolution reflects the growing complexity and sophistication required in AI data labeling processes.
--

Study Suggests AI Chatbots May Impact Cognitive Abilities

According to Cointelegraph, a recent study conducted by researchers at the Massachusetts Institute of Technology's Media Lab indicates that artificial intelligence chatbots, such as OpenAI's ChatGPT, may be affecting cognitive abilities. The study involved 54 participants who completed essay writing tasks using three different methods: ChatGPT, search engines, and their own cognitive abilities. In a subsequent session, participants who initially used ChatGPT were asked to write without any tools, while those who relied solely on their brains were instructed to use the language model.

The findings were significant, revealing that over 83% of ChatGPT users struggled with memory recall, unable to quote from essays they had written just minutes earlier. Similarly, more than 80% of participants using language models faced difficulties recalling their own work. Alex Vacca, co-founder of sales tech agency ColdIQ, described these results as "terrifying," suggesting that AI might be leading to cognitive decline rather than enhancing productivity.

The researchers noted that brain connectivity diminished with increased reliance on external tools, with the brain-only group showing the strongest cognitive networks, followed by the search engine group, and finally the language model group exhibiting the weakest coupling. The study utilized electroencephalography (EEG) to monitor brain activity, assessing cognitive engagement and load during the tasks. The researchers warned of accumulating "cognitive debt" from repeated dependence on external systems like language models, which could replace the cognitive processes necessary for independent thinking.
This cognitive debt, while deferring mental effort in the short term, could lead to long-term consequences such as reduced critical inquiry, increased susceptibility to manipulation, and decreased creativity.

The paper, which is yet to undergo peer review, suggests that the use of AI language models might negatively impact learning, particularly among younger users. The researchers emphasized the need for "longitudinal studies" to fully understand the long-term effects of AI chatbots on human cognition before these tools are deemed beneficial for humanity. When approached for comment, ChatGPT responded that the study does not claim the chatbot is inherently harmful but cautions against excessive reliance without reflection or effort.