Claude locks out GPT-5, Apple enters the fray, and Musk revives Vine, sort of.
Here is an overview of the latest: Anthropic locking OpenAI out of Claude, Apple's entry into AI, and Elon Musk's sort-of revival of Vine:
Claude Locks Out GPT-5
As detailed below, Anthropic revoked OpenAI's API access to its Claude models this week, saying OpenAI's use of Claude Code ahead of the GPT-5 launch violated terms of service that bar customers from using Claude to build competing products or train competing models. OpenAI calls cross-model evaluation standard industry practice, and the competition among AI companies to push the capabilities of large language models (LLMs) remains fierce.
Apple Enters the AI Fray
Apple has been quietly but powerfully advancing its AI strategy. Apple Intelligence, unveiled in 2024, features an on-device foundation large language model (LLM) designed for privacy, efficiency, and seamless integration across Apple devices such as iPhone, iPad, Mac, Apple Watch, and Apple Vision Pro; in 2025, Apple opened these on-device models to developers, fostering the creation of private, AI-powered apps. Apple's approach balances powerful AI with stringent privacy, running many features offline without data sharing. Apple is playing a slower, more deliberate AI game focused on user trust, hardware-software integration, and incremental feature improvements rather than quick hype, in contrast with the rapid rollouts of competitors like OpenAI and Google. Apple has also incorporated OpenAI technology for select features but keeps its core AI under tight, privacy-centered control.
Elon Musk Revives Vine, Sort Of
Elon Musk has revived Vine, the once-popular, now-defunct short-video platform, in spirit rather than as a direct relaunch of the old app. As detailed below, his new take is Grok Imagine, an AI video generator inside X's Grok assistant that Musk markets as "AI Vine"; he has also claimed the original Vine archive has been found and that X is working to restore user access. The move reflects Musk's continued interest in shaping social media with fresh takes on short-form video.
Summary
- Anthropic has cut off OpenAI's API access to Claude ahead of GPT-5, citing terms-of-service violations.
- Apple is making a controlled, privacy-first entry into AI with on-device LLMs for developers and users.
- Elon Musk is re-energizing the Vine concept with Grok Imagine, an AI short-video generator, alongside a promised restoration of the original Vine archive.
Together, these stories offer a snapshot of current strategies in AI and tech platforms from leading companies and figures.
Can the soul of AI survive a gold rush for power, persona, and platform control?
Anthropic just blocked OpenAI’s access to Claude, accusing it of overreach. As GPT-5 looms, the gloves are off and the industry’s trust fractures are showing. Behind the scenes, it’s not just model versus model. It’s ethics versus ambition.
Meanwhile, the science of steering AI gets real. A new method can dial models toward (or away from) traits like honesty, sycophancy, even malice. It’s a technical leap with existential consequences: Are we aligning AI, or reprogramming personality?
In Cupertino, Apple is racing to reclaim AI relevance. A secret team is building its own chatbot engine to displace ChatGPT inside Siri and Spotlight, marking a sharp turn from past skepticism.
And Elon Musk is playing to nostalgia with Grok Imagine—an “AI Vine” wrapped in six-second dreams and a $30/month paywall. Innovation or reanimation? Either way, the past is premium now.
The AI race isn’t just about scale. It’s about who holds the dial, and who dares to look in the mirror.
📌 In today’s Generative AI Newsletter:
Anthropic blocks OpenAI from Claude ahead of GPT-5
Researchers unveil AI “personality vectors”
Apple builds in-house ChatGPT rival with project AKI
Musk launches Grok Imagine, the “AI Vine”
OpenAI lost access to the Claude API this week after Anthropic claimed the company was violating its terms of service.

Anthropic revoked OpenAI’s API access to its models on Tuesday, multiple sources familiar with the matter tell WIRED. OpenAI was informed that its access was cut off due to violating the terms of service.
“Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5,” Anthropic spokesperson Christopher Nulty said in a statement to WIRED. “Unfortunately, this is a direct violation of our terms of service.”
According to Anthropic’s commercial terms of service, customers are barred from using the service to “build a competing product or service, including to train competing AI models” or “reverse engineer or duplicate” the services. This change in OpenAI’s access to Claude comes as the ChatGPT-maker is reportedly preparing to release a new AI model, GPT-5, which is rumored to be better at coding.
According to sources, OpenAI was plugging Claude into its own internal tools through developer API access rather than the regular chat interface. This allowed the company to run tests evaluating Claude's capabilities in areas like coding and creative writing against its own AI models, and to check how Claude responded to safety-related prompts involving categories like CSAM, self-harm, and defamation, the sources say. The results help OpenAI compare its own models' behavior under similar conditions and make adjustments as needed.
“It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them,” OpenAI’s chief communications officer Hannah Wong said in a statement to WIRED.
Nulty says that Anthropic will “continue to ensure OpenAI has API access for the purposes of benchmarking and safety evaluations as is standard practice across the industry.” The company did not respond to WIRED’s request for clarification on if and how OpenAI’s current Claude API restriction would impact this work.
Yanking API access from competitors has been a tactic among top tech companies for years. Facebook did the same to Twitter-owned Vine (which led to allegations of anticompetitive behavior), and last month Salesforce restricted competitors from accessing certain data through the Slack API. This isn't even a first for Anthropic: last month, the company restricted the AI coding startup Windsurf's direct access to its models after it was rumored OpenAI was set to acquire it. (That deal fell through.)
Anthropic’s chief science officer Jared Kaplan spoke to TechCrunch at the time about revoking Windsurf’s access to Claude, saying, “I think it would be odd for us to be selling Claude to OpenAI.”
A day before cutting off OpenAI’s access to the Claude API, Anthropic announced new rate limits on Claude Code, its AI-powered coding tool, citing explosive usage and, in some cases, violations of its terms of service.
Anthropic has blocked OpenAI's access to its Claude API just ahead of GPT-5's expected release. The conflict centers on Claude Code, Anthropic's AI coding assistant. Reports allege that OpenAI used its API privileges for more than standard benchmarking, engaging with Claude in ways that violated Anthropic's terms of service. OpenAI denies wrongdoing, framing its actions as an industry norm.
This isn’t just competitive tension. It’s a signal that foundational trust in the AI ecosystem is eroding. Dario Amodei, Anthropic’s CEO and a former OpenAI executive, has long warned about the risks of ambition unchecked by values. In a recent podcast, he argued that when motivations aren’t sincere, even good work contributes to harm. That belief seems to shape how Anthropic is drawing its lines.
The Claude freeze follows another block last month, when Anthropic restricted access to Windsurf amid reports OpenAI planned to acquire it. That deal collapsed after Google reportedly hired Windsurf’s leadership and absorbed its tech. Together, these actions paint a picture of an industry in high-stakes defense mode.
Claude Code is one of the most widely used tools among developers. Blocking access now is not just strategic; it's a statement. As AI companies sprint toward dominance, Anthropic is betting that ethics still matter.
The question isn’t whether GPT-5 will be powerful. It’s whether the people building these systems are accountable to something more than just scale. The AI race is on. But the real contest may be for integrity.
🧠 Steering AI’s Soul: Can We Really Control the Persona of a Model?
In a chilling twist on “know thyself,” researchers at Anthropic and UT Austin have proposed a way to engineer AI personalities. Their new paper introduces persona vectors, linear directions in a model’s activation space that correspond to traits like evil, sycophancy, or hallucination. The implications? Both thrilling and troubling.
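To make the idea concrete, here is a minimal sketch of how such a trait direction can be extracted, assuming a small PyTorch causal LM from Hugging Face transformers (the model choice, layer index, and contrastive prompt sets below are illustrative assumptions, not details from the paper): the vector is simply the difference between mean activations on trait-exhibiting and neutral text.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative setup: any causal LM that exposes per-layer hidden states works.
tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6  # transformer block whose output we probe (an assumption, not tuned)

@torch.no_grad()
def mean_activation(texts, layer=LAYER):
    """Average the last-token hidden state at `layer` over a set of texts."""
    acts = []
    for t in texts:
        ids = tok(t, return_tensors="pt")
        out = model(**ids, output_hidden_states=True)
        # hidden_states[i + 1] is the output of transformer block i
        acts.append(out.hidden_states[layer + 1][0, -1])
    return torch.stack(acts).mean(dim=0)

# Hypothetical contrastive examples for a "sycophancy" trait.
trait_texts = [
    "What a brilliant idea! You are absolutely right, as always.",
    "That is the best plan I have ever heard, truly genius!",
]
neutral_texts = [
    "The plan has some strengths, but the budget looks unrealistic.",
    "I see a few problems we should discuss before going ahead.",
]

# The persona vector: the direction separating the two activation means.
persona_vector = mean_activation(trait_texts) - mean_activation(neutral_texts)
```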
Imagine developing a finer control knob for artificial intelligence (AI) applications like Google Gemini and OpenAI's ChatGPT. Mikhail Belkin, a professor with UC San Diego's Halıcıoğlu Data Science Institute (HDSI), part of the School of Computing, Information and Data Sciences (SCIDS), has been working with a team that has done just that. Specifically, the researchers have discovered a method that allows for more precise steering and modification of large language models (LLMs), the powerful AI systems behind tools like Gemini and ChatGPT. Belkin said that this breakthrough could lead to safer, more reliable and more adaptable AI.
“Currently, while LLMs demonstrate impressive abilities in generating text, translating languages and answering questions, their behavior can sometimes be unpredictable or even harmful,” Belkin said. “They might produce biased content, spread misinformation or exhibit toxic language.”
The multi-institutional research team includes Belkin, Daniel Beaglehole (Computer Science and Engineering Department at UC San Diego Jacobs School of Engineering), Adityanarayanan Radhakrishnan (Broad Institute of MIT and Harvard SEAS) and Enric Boix-Adserà (MIT Mathematics and Harvard CMSA).
Belkin said that they tackled this challenge by developing a novel “nonlinear feature learning” method. This technique allowed them to identify and manipulate important underlying features within the LLM’s complex network.
Think of it like understanding the individual ingredients in a cake rather than just the final product. By understanding these core components, the researchers then guided the AI app’s output in more desirable directions.
“It’s like we’re gaining a deeper understanding of the AI app’s internal thought process,” Belkin explained. “This allows us to not only predict what kind of outputs the model will generate but also to actively influence it towards more helpful and less harmful responses.”
Their approach involved analyzing the internal activations of the LLM across different layers. This allowed them to pinpoint which features are responsible for specific concepts, such as toxicity or factual accuracy. Once these features were identified, the researchers adjusted them to encourage or discourage certain behaviors.
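As a hedged illustration of what "adjusting" a feature can look like at inference time (continuing the hypothetical GPT-2 setup sketched above; the forward-hook mechanics are standard PyTorch, but the layer and coefficient are invented for the example), one can shift a block's output along an identified direction to encourage or discourage the associated behavior:

```python
def steering_hook(direction, coeff):
    """Build a forward hook that shifts a block's output along `direction`."""
    unit = direction / direction.norm()

    def hook(module, inputs, output):
        # GPT-2 blocks return a tuple whose first element is the hidden state.
        hidden = output[0] if isinstance(output, tuple) else output
        hidden = hidden + coeff * unit
        return (hidden,) + output[1:] if isinstance(output, tuple) else hidden

    return hook

# A negative coefficient pushes generations *away* from the trait direction.
handle = model.transformer.h[LAYER].register_forward_hook(
    steering_hook(persona_vector, coeff=-4.0)
)

ids = tok("What do you think of my business plan?", return_tensors="pt")
steered = model.generate(**ids, max_new_tokens=40, pad_token_id=tok.eos_token_id)
print(tok.decode(steered[0], skip_special_tokens=True))

handle.remove()  # detach the hook to restore the model's default behavior
```

The hook approach leaves the model's weights untouched, which is part of the appeal: the same base model can be nudged in different directions per request simply by attaching or removing hooks.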
The team demonstrated the effectiveness of their method across a range of tasks, including detecting and mitigating hallucinations (instances where the AI generates false information), harmfulness and toxicity. They also showed that their technique could steer LLMs to better handle concepts across styles and registers, including Shakespearean English and poetic language.
“One of the significant benefits of this new method is its potential to make LLMs more efficient and cost-effective,” Belkin said. “By focusing on the crucial internal features, we believe that we can fine-tune these powerful models using less data and computational resources – this could, in turn make advanced AI technology more accessible.”
This type of research also has the potential of opening doors for creating more tailored AI applications. Imagine an AI assistant specifically designed to provide accurate medical information or a creative writing tool that avoids clichés and harmful stereotypes. The ability to precisely steer LLMs brings these possibilities closer to reality.
The researchers have made their code publicly available, encouraging further exploration and development in this critical area of AI safety and control; it can be found on Belkin's website.
“As LLMs become increasingly integrated into our daily lives, being able to understand and guide their behavior is paramount,” said Rajesh Gupta, who is the interim dean for SCIDS, the HDSI founding director and a distinguished professor with the Computer Science and Engineering Department at UC San Diego Jacobs School of Engineering. “This new research by Professor Belkin and team represents a significant step towards building more reliable, trustworthy and beneficial artificial intelligence for everyone.”
The research relies on recent work that has been published in Science and PNAS.
The research is supported by the U.S. National Science Foundation (NSF) and the Simons Foundation for the Collaboration on the Theoretical Foundations of Deep Learning (award nos. DMS-2031883 and 814639), the TILOS Institute (award no. NSF CCF-2112665) and the Office of Naval Research (ONR N000142412631). The work relied on Expanse at the San Diego Supercomputer Center at UC San Diego and Delta at the National Center for Supercomputing Applications at the University of Illinois; this was supported by NSF ACCESS (allocation no. TG-CIS220009).
In a weekend blitz of announcements, Elon Musk introduced Grok Imagine, a new feature within X’s AI assistant, Grok. Marketed as “AI Vine,” the tool can generate short videos faster than major competitors create a single image. Users describe a scene, and the AI returns a whimsical video clip in seconds. Think: “a cat breakdancing in Times Square.” It’s surreal, oddly charming, and perfectly engineered for a six-second attention span.
But here’s where the story turns. Musk also claimed that the original Vine archive has been found, and that X is working to restore user access, rekindling a cultural artifact long thought lost.
The buzz is real. But beneath the surface, a deeper question brews: Are we innovating, or reanimating?
The urgency to dominate AI video creation feels less like vision and more like valuation theater. As investors flood AI startups with capital, tools like Grok Imagine become symbols of speed—technological, cultural, and financial. But who benefits from nostalgia-fueled AI? Users? Creators? Or the platforms racing to monetize our collective memory?
Grok Imagine’s $30 per month SuperGrok paywall raises further ethical friction. Should resurrecting our digital past be locked behind a subscription?
Short-form video culture was born from spontaneity and public access. In Grok Imagine’s case, the past isn’t just prologue. It’s premium content. And that shift reveals more about where the industry is going than any AI-generated video ever could.