
OpenAI Drops Open Source Multi-Agent AI System as ChatGPT Raises Brain Damage Fears

OpenAI's Open-Source Multi-Agent AI Demo & MIT Study on ChatGPT's Brain Impact

News · AI Revolution · 38,685 views · Jun 20, 2025

Exploring OpenAI's new multi-agent customer service system, Midjourney's video model launch, YouTube's AI video integration, and MIT's findings on ChatGPT's cognitive effects.

Tags: OpenAI, Multi-Agent AI, Customer Service AI, Scale AI, Meta, Midjourney, Video Generation AI, Disney Lawsuit, Universal Lawsuit, YouTube Shorts, Google Veo 3, MIT Study, ChatGPT, Brain Activity, AI Cognitive Effects, Generative AI, AI Research, AI Ethics

Blurb

This video covers the latest developments in AI technology and research:

  • OpenAI releases an open-source multi-agent customer service demo showcasing real-time AI coordination with guardrails and tracing.
  • OpenAI and Google cut ties with Scale AI after Meta's major investment raises competition concerns.
  • Midjourney launches its first image-to-video AI model amid lawsuits from Disney and Universal.
  • YouTube integrates Google's Veo 3 video model into Shorts, reaching 200 billion daily views.
  • MIT researchers reveal a study showing ChatGPT use may reduce brain engagement and memory, raising questions about AI's cognitive impact.

The video blends technical insights, industry shifts, and scientific findings to highlight the rapid evolution and challenges of AI.


Highlighted Clips

1. OpenAI's Open-Source Multi-Agent Customer Service Demo

OpenAI releases a fully open-source airline customer service demo using its Agents SDK, featuring real-time agent coordination, modular design, and guardrails to prevent off-topic or harmful responses.

2. OpenAI and Google Cut Ties with Scale AI After Meta Investment

OpenAI and Google end contracts with Scale AI following Meta's purchase of nearly half the company, citing concerns over data privacy and competition.

3. Midjourney Launches First Video Generation Model Amid Lawsuits

Midjourney introduces its first image-to-video AI model, generating 5-second clips with dreamy aesthetics, while facing lawsuits from Disney and Universal over copyright issues.

4. YouTube Integrates Google's Veo 3 Video Model into Shorts

YouTube announces the rollout of Google's Veo 3 video model to Shorts, boosting AI-generated content and celebrating 200 billion daily views on the platform.

OpenAI’s Open-Source Multi-Agent Customer Service Demo

OpenAI has released a fully open-source airline-style customer service demo that you can run locally, showcasing how multiple AI agents coordinate in real time. The demo is built on OpenAI's Agents SDK, which the company has been teasing since spring. The backend is Python, served by Uvicorn, and the frontend uses Next.js to render both the chat interface and a live trace of agent interactions.

"This trace isn't some abstract log file, it's literally a step-by-step visualization that lights up every time... the triage agent passes control to the seat booking agent."

The demo simulates a realistic airline customer service experience with specialized agents handling tasks like seat booking, cancellations, flight status, and FAQs. Each agent is modular and guarded by two types of guardrails: a relevance guardrail that blocks off-topic requests (e.g., asking for a strawberry poem) and a jailbreak guardrail that detects attempts to access system instructions.

"Both events get highlighted in the trace so developers can see exactly when and why the conversation got blocked."

This modular design allows developers to add or swap agents, adjust routing logic, or enhance guardrails without rewriting the core system. OpenAI is essentially sharing the internal blueprint for composable agents, complete with prompt templates, tool wrappers, handoff logic, output schemas, and a tracing layer that aids debugging.

Key points:

  • OpenAI’s demo uses multiple AI agents working in real time with a transparent tracing system.
  • Agents specialize in different airline customer service tasks.
  • Two guardrails prevent off-topic or malicious prompts.
  • The system is modular and extendable, ideal for learning multi-agent architectures.
  • The demo runs locally with simple commands, making it accessible for developers.
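The triage-plus-handoff pattern with guardrails described above can be sketched in plain Python. All names below are hypothetical illustrations of the architecture, not the Agents SDK's actual API:

```python
# Sketch of a triage agent routing to specialists, with relevance and
# jailbreak guardrails and a simple handoff trace (hypothetical names).

AIRLINE_TOPICS = ("seat", "cancel", "flight", "baggage", "faq")

def relevance_guardrail(message: str) -> bool:
    """Block requests unrelated to airline support (e.g. poem requests)."""
    return any(topic in message.lower() for topic in AIRLINE_TOPICS)

def jailbreak_guardrail(message: str) -> bool:
    """Block attempts to extract system instructions."""
    banned = ("system prompt", "ignore previous instructions")
    return not any(phrase in message.lower() for phrase in banned)

def seat_booking_agent(message: str) -> str:
    return "Seat booking agent: which seat would you like?"

def cancellation_agent(message: str) -> str:
    return "Cancellation agent: I can cancel that reservation."

def triage_agent(message: str) -> str:
    """Route a request to a specialist, recording each handoff in a trace."""
    trace = ["triage"]
    if not (relevance_guardrail(message) and jailbreak_guardrail(message)):
        trace.append("guardrail_blocked")
        return f"[trace: {' -> '.join(trace)}] Sorry, I can only help with airline questions."
    if "cancel" in message.lower():
        trace.append("cancellation")
        return f"[trace: {' -> '.join(trace)}] {cancellation_agent(message)}"
    trace.append("seat_booking")  # default specialist in this sketch
    return f"[trace: {' -> '.join(trace)}] {seat_booking_agent(message)}"
```

In the real demo the guardrails are themselves model calls and the trace is rendered live in the Next.js frontend, but the control flow follows this shape: guardrails gate every turn, and the triage agent decides which specialist receives the handoff.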

OpenAI Cuts Ties with Scale AI Amid Meta’s Investment

Almost simultaneously with the demo release, Bloomberg reported that OpenAI is phasing out contracts with Scale AI after Meta purchased nearly half of Scale for $14.8 billion, Meta's second-largest deal ever. Scale's CEO, Alexandr Wang, is moving to Meta's experimental AI project, which raised concerns at OpenAI about sharing training data with a vendor now partially owned by a direct competitor.

"OpenAI doesn't love feeding training data through a vendor that now answers to a direct competitor."

OpenAI claims Scale only handled a small part of its data pipeline and is shifting to newer vendors like Mercor. Google is reportedly making similar moves, fearing that Meta could gain insights into rival AI models. Scale's interim CEO insists on independence, but the reality of Meta's stake complicates that claim.

"When your new majority partner just wrote a 14.5 billion dollar check, independence starts to look a little theoretical."

Scale originally relied on large armies of contract labelers for basic tagging but has moved toward specialized annotators as AI models advanced. This talent pool is now in high demand by OpenAI and Google, creating a real-time supply chain shakeup in AI data annotation.

Key points:

  • Meta’s $14.8B investment in Scale AI triggered OpenAI to cut ties.
  • OpenAI and Google worry about data privacy and competitive intelligence.
  • Scale’s CEO moves to Meta’s AI project, signaling shifting alliances.
  • The annotation workforce is a critical resource being reallocated.
  • This reflects broader corporate drama and supply chain realignment in AI.

Midjourney Launches First Video Generation Model Amid Lawsuits

Midjourney has entered the video generation space with its first image-to-video model, which takes a single image and produces four 5-second clips with a signature dreamy, otherworldly aesthetic. Unlike Midjourney's Discord-first image tools, the video model is currently web-only.

"Each clip is sort of that dreamy otherworldly vibe Midjourney is known for—not exactly photoreal but definitely compelling."

Pricing is a significant factor: video generation consumes eight times the credits of image generation. Users on the $10/month Basic plan burn through credits quickly, while the $60 Pro and $120 Mega tiers get unlimited generations, but only in the slower "relax" mode. Users can extend clips up to 21 seconds and control motion parameters or write custom movement instructions.

Midjourney’s CEO, David Holz, describes this version as a stepping stone toward real-time open-world simulations and 3D rendering models, hinting at ambitions akin to a generative Unreal Engine.

"They want three-dimensional rendering models then eventually real-time generative worlds—lofty stuff."

However, legal troubles loom: Disney and Universal filed lawsuits just a week before the launch, alleging the model can generate copyrighted characters like Homer Simpson and Darth Vader on demand.

"The lawsuits were filed literally a week before version one dropped because the models allegedly spit out Homer Simpson and Darth Vader on demand."

Despite this, early testers are positive, though no one claims it beats OpenAI's Sora or Runway's Gen-4 yet. Midjourney's video tool is a strong first step, with pricing and capabilities to be reassessed after the promised one-month review window.

Key points:

  • Midjourney’s first video model generates short, stylized clips from images.
  • Web-only at launch, with credit-heavy pricing impacting usage.
  • CEO envisions future real-time 3D generative worlds.
  • Disney and Universal filed lawsuits over copyright infringement.
  • Early feedback is positive but not yet competitive with top-tier video AI.

YouTube Integrates Google’s Veo 3 Video Model into Shorts

YouTube announced at the Cannes Lions festival that it is rolling out Google's Veo 3 video generation model to Shorts creators this summer. Some creators have already been using Veo 2 since February, but the newer model will now be more widely available.

"Expect a whole lot more AI generated 15-second science fiction skateboarding robots in your feed."

YouTube’s CEO, Neal Mohan, shared impressive stats: Shorts now receive over 200 billion daily views, a massive jump from 70 billion in March 2024. On the living room front, viewers watch a billion hours daily on connected TVs, with over half of the top 100 channels getting most of their traffic from TV audiences.

"Shorts now pull over 200 billion views every single day."

YouTube also highlighted a billion monthly podcast viewers and an autodubbing system used on 20 million videos, underscoring its investment in generative production tools. Its 20th-anniversary Culture and Trends report shows short-form growth accelerating, while long-form videos over 60 minutes are also rising, likely driven by TV viewers seeking documentaries and full episodes.

"If you want reach optimize for phones; if you want dwell time think TV."

Gaming remains a huge driver of watch time, with waves in ASMR, fitness, and long-form commentary also noted. YouTube’s strategy is clearly to dominate both short and long-form video consumption across devices.

Key points:

  • YouTube is expanding Google’s Veo 3 AI video model in Shorts.
  • Shorts hit 200 billion daily views, nearly tripling in months.
  • Connected TV viewing of YouTube hits a billion hours daily.
  • Podcasts and autodubbing tech are growing parts of the ecosystem.
  • Short-form content grows fast, but long-form is thriving on big screens.

MIT Study Raises Concerns About ChatGPT’s Cognitive Effects

Researchers at the MIT Media Lab conducted a study with 54 volunteers aged 18 to 39, dividing them into three groups for essay writing: brain-only, Google-assisted, and ChatGPT-assisted. Participants wrote multiple SAT-style essays while wearing 32-channel EEG caps to measure brain activity.

"The ChatGPT group showed the lowest neural engagement in alpha, theta, delta—pretty much every band linked to creativity and memory."

Two English teachers graded the essays and described the ChatGPT-assisted ones as "soulless," noting that wording and structure converged across different writers. By the third essay, many ChatGPT users simply pasted the prompt into the chatbot with minimal edits.

"The brain-only cohort lit up significantly more EEG regions and reported higher satisfaction claiming ownership over their work."

The Google group fell in the middle, as active searching still triggers cognitive planning and evaluation, unlike passively accepting AI-generated answers. After the initial round, the ChatGPT group had to rewrite an essay without AI, while the brain-only group was allowed to use ChatGPT. The AI-first writers struggled to recall their original work, showing weaker brain activity, indicating the essay hadn’t entered deep memory.

"Human first drafting followed by AI refinement might be the sweet spot."

Lead author Nataliya Kosmyna released the paper ahead of peer review, worried that policymakers might prematurely endorse AI use in education. She also embedded "traps" in the manuscript designed to trip up AI summarizers into hallucinating, underscoring how easily machine-generated summaries can go wrong.

The lab’s next study on programming suggests even worse neural decline when coders rely heavily on AI autocompletion. Though preliminary and small-scale, these findings align with other research showing generative AI boosts short-term productivity but may undermine intrinsic motivation.

Key points:

  • ChatGPT use correlates with reduced brain activity linked to creativity and memory.
  • Essays written with ChatGPT were judged less original and engaging.
  • Active searching (Google) maintains more cognitive involvement than passive AI use.
  • AI-first writers had trouble recalling their own work later.
  • Combining human drafting with AI refinement may be optimal.
  • Early evidence suggests heavy AI reliance could erode neural engagement, especially in coding.

Closing Thoughts

The video wraps up by emphasizing the rapid evolution of AI tools—from multi-agent systems to video generation—while cautioning about the hidden cognitive costs of overreliance on AI like ChatGPT. The creator invites viewers to engage with the content and stay tuned for more updates.

"That's plenty of brain food for one session... thanks for watching and I'll see you in the next one."


This video offers a rich snapshot of the current AI ecosystem, blending technical demos, corporate shifts, legal battles, platform expansions, and emerging scientific research on AI’s impact on human cognition.

Key Questions

What is OpenAI's new multi-agent demo?

It's an open-source airline customer service system built with OpenAI's Agents SDK, demonstrating how multiple AI agents coordinate tasks in real time with guardrails and a live trace visualization.
