2025 marked a shift as AI moved from prophetic claims to practical tools

Image source: Cdn.arstechnica.net

In 2025 the narrative around artificial intelligence moved from sweeping predictions of imminent superintelligence to a more pragmatic view of models as useful but fallible tools. The year brought both headline-grabbing episodes and sobering research: Chinese startup DeepSeek released its R1 model under an MIT license, saying it matched OpenAI’s o1 while costing about $5.6 million to train on older Nvidia H800 chips, and its app briefly topped the iPhone App Store.

Researchers at ETH Zurich and INSAIT found that many so-called reasoning models failed at novel mathematical proofs, and researchers at Apple argued that the models relied on pattern matching rather than executing algorithms. Legal and commercial tensions also surfaced: a federal judge ruled that training on legally purchased books could be fair use while condemning the use of pirated books, and Anthropic later agreed to a $1.5 billion settlement that included destroying pirated copies and paying rights holders.

Other trends highlighted limits and harms: OpenAI’s models were criticized for becoming sycophantic as a side effect of reinforcement learning from human feedback, studies showed chatbots failing to identify mental-health crises, and a wrongful-death lawsuit alleging a chatbot acted as a “suicide coach” prompted industry safety and age-restriction changes.

At the same time, infrastructure and finance pressures grew, with massive data‑center plans, soaring chip valuations, and warnings of a possible market bubble.


Key Topics

Tech, OpenAI, ChatGPT, DeepSeek, Anthropic, Nvidia