
Wait... did OpenAI and Anthropic take a week off? 🤔
After a relatively quiet week of updates from AI's usual heavyweights, Anthropic and OpenAI, their competitors (and backers) picked up the slack.
↳ Meta is making AI chips but cutting jobs.
↳ NVIDIA is investing billions in Open Source AI.
↳ Perplexity is trying to bring back the Personal Computer.
↳ Google is dropping AI in your docs and your car.
And a whole lot more.
Don't waste hours each week trying to make sense of the AI developments. That's our job.
Meta’s making AI job cuts and investments, NVIDIA’s big plays, Google brings Gemini everywhere and more AI news -- An Everyday AI Chat with Jordan Wilson
Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion on LinkedIn: Thoughts on this? Join the convo on LinkedIn and connect with other AI leaders.
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: [email protected]
Connect with Jordan on LinkedIn
Topics Covered in This Episode:
Timestamps:
00:00 NVIDIA's AI Ambitions Expand
05:30 AI Evolution and NVIDIA's Edge
08:23 Meta Launches MTIA AI Chips
11:47 Meta Doubles Down on AI
15:23 Microsoft's Copilot Health Launches
20:23 AMI's Ambitious AI Vision
21:58 Perplexity Launches AI Personal Computer
25:52 Senate Approves Generative AI Use
30:57 Google Drive Gains AI Overview
32:43 Friday Features & AI Updates
36:53 Everyday AI Updates & Insights
Keywords:
NVIDIA, $26 billion AI investment, open weight AI models, open source AI, proprietary AI, AI hardware, AI software, OpenAI, Anthropic, DeepSeek, AMD, chip makers, Meta AI chips, MTIA 300, MTIA 400, MTIA 450,
Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)
This is the Everyday AI Show, the everyday podcast where we simplify AI and
bring its power to your fingertips. Listen daily for practical advice to boost
your career, business and everyday life.
While it was a relatively quiet week for two AI heavyweights, OpenAI and Anthropic, their competitors and financial backers made plenty of noise. NVIDIA was all over the AI news this week, and depending on how you look at it, it might have been for both good and, well, definitely bad reasons. NVIDIA made plenty of billion dollar splashes over the past few days as its annual GTC conference kicks off in hours, and Perplexity is trying to bring back the personal computer, while Google quietly shipped useful AI everywhere from your car to your Google Docs. I hope you're excited to get into all the AI news this
week. I am as well. And if you miss anything that happens in the AI world,
don't worry. That's what we're here for in our weekly AI news show on Monday
called AI News That Matters. Well, if you're brand new here, welcome to AI News That Matters on Everyday AI. My name is Jordan Wilson, and, well, Everyday AI is for you. If you're struggling to keep up but you want to get ahead with everything that's happening in the world of AI, you tune into our daily livestream podcast and free daily newsletter, helping everyday business leaders like you and me make sense of all of this. Getting ahead and growing our companies and careers starts here, like I said, Monday through Friday with the unedited and unscripted livestream podcast. But to be the smartest person in AI at your company, our website is your cheat code: youreverydayai.com.
All right, so in today's newsletter, we're going to have all the other AI
happenings, but let's get into the biggest AI news stories of the week.
Probably one of the biggest ones that no one was really talking about was one with a $26 billion, with a B, price tag on it. That's because, according to interviews and financial filings found by Wired, NVIDIA has just announced a $26 billion investment in open weight AI models. So open weight AI models, not OpenAI. They've invested money in OpenAI too, actually; more on that in a second. So NVIDIA is announcing plans to invest $26 billion over the next five years to develop open weight AI models, and that's according to Wired. And this massive investment positions NVIDIA as a direct competitor to leading AI firms like OpenAI, Anthropic, and DeepSeek as the company expands beyond its dominant role in AI hardware. So NVIDIA's move into open weight, or open source, AI models could accelerate innovation and lower barriers for companies and developers wanting to build on advanced AI technology.
The company's strategy raises questions about market competition, since NVIDIA both manufactures the actual hardware that's powering AI systems and is now planning to produce leading AI software that could compete against those very companies as well. So some industry voices worry this could give NVIDIA an unfair advantage, as it can optimize its own models to run better on its own hardware than rival models from companies like OpenAI, Google, or Anthropic. AMD's CEO has weighed in, suggesting open source approaches are key to remaining competitive in the AI market, signaling intensifying rivalry among chipmakers and AI software developers.
So this one is interesting for a couple of reasons. Well, number one, you can't overlook the fact that NVIDIA is a huge investor in some of those companies, like OpenAI and Anthropic, that are building these closed source, proprietary models. And there's always all these numbers that you hear in AI, right? All these valuations and funding rounds. But $26 billion is huge, right? So for NVIDIA to say that they're going to be investing $26 billion over the next five years to develop open weight AI models, that's no small feat. That is the equivalent of what you would be investing to get multiple state of the art frontier models, right? So investing that much money on the open source side is a really big deal, not just for open source, but also for closed source and proprietary models, because it sets an example, right? Whether its future version is called Nemotron or something else, right? That's the family they just released not too long ago.
Regardless, it's going to put a lot of pressure on the OpenAIs, Anthropics, and Googles of the world to build better proprietary models. Because as the technology shrinks, right? That's the other thing that people are overlooking. Like, you know, a one gigabyte hard drive 20 years ago probably took up 20 times the space it does today and cost 100 times as much. So you have to think the same will be true in three, five, ten years when it comes to AI models and GPUs, right? As an example, you could probably have something that's GPT-5 level or Gemini 3 level running on an iPhone, on an older iPhone, right? So it does make sense from NVIDIA's perspective. Well, they're going to be cashing checks with both hands, because the big AI labs, to keep up with whatever NVIDIA puts out on the open source end, are going to have to continue to invest in bigger, better models, which means using NVIDIA's GPU chips for inference and training. And then NVIDIA is going to be building the open platforms. And, well, you might be saying, okay, how does NVIDIA ultimately make money from that? Well, that's because just about everyone will probably be buying NVIDIA-specific hardware to run these new models. So if NVIDIA's open source models in the near future become the premier open source models, there's a good chance they're going to be optimized to run really well on NVIDIA's GPU chips and hardware. Like, you know, as an example, the DGX Sparks of three to five years from now. So, very interesting play here. I think NVIDIA's actually not just squeezing it on both sides; they could be winning in three different ways, right? One, companies like OpenAI and Anthropic are going to have to pay NVIDIA more to compete with NVIDIA. Weird, right? But that's also why those companies are starting to invest in their own infrastructure. So that's the number one way. Number two, well, there's the open source side. That's huge, right? By pushing that boundary, NVIDIA pushes all the other companies, like Meta as an example, which we have a related story on here in a second. All the other open source companies are going to have to pay more to compete with both the closed source and the open source models. And then, last but not least, you're going to have probably millions of new customers, mainly consumers, who are going to want to run these models locally. And they're probably going to be buying specialized NVIDIA hardware to do so.
All right, speaking of the AI chip race, our next piece of AI news: well, Meta is launching four of their own in-house AI chips, maybe so they don't have to pay NVIDIA so much in the future. So, according to reports, Meta has introduced four new processors. Here are the names and what the family stands for: it's the MTIA, the Meta Training and Inference Accelerator family. So the MTIA 300, 400, 450, and 500. Those are the new processors they introduced. So they are, obviously, designed for generative AI and recommendation models, and they can be scaled up in server racks with up to 72 chips, just like NVIDIA's NVL72 and AMD's Helios racks. So Meta claims the MTIA 400 is its first chip to deliver both cost savings and performance competitive with the top commercial products, directly targeting NVIDIA's and AMD's offerings. The 450 and 500 build on the MTIA 400, offering faster and higher capacity memory for more demanding AI workloads. So, according to Reuters reports, Meta has already started using some of these chips and plans broader deployment in 2026 and 2027, with all models sharing a unified infrastructure for easy upgrades. So Meta's move follows similar strategies by all the other big tech companies, like Google, Amazon, and Microsoft, who have developed their own chips to power their AI models and reduce dependence on third-party suppliers, mainly NVIDIA. So Google and Amazon also rent out their chips to companies like Anthropic and Meta, who recently signed a multi-billion dollar deal to use Google's processors as well. So, in 2026 alone, Amazon, Google, Meta, and Microsoft plan to spend a combined $650 billion on capital expenditures, with most of that going toward AI infrastructure. All right, more Meta news. So, the first one might have been a little positive, right? Oh, cool, Meta is building out all these AI chips, which can be great for the industry, great for local job production, right? Well, maybe not so much. That's because another recent Reuters report said that Meta is preparing for its largest workforce reduction ever as the company pivots heavily toward AI to streamline operations. So, according to reports, Meta is considering cutting 20% or more of its workforce, which could affect over 15,000 employees. The layoffs come as Meta plans to invest $600 billion in new data centers by 2028.
A move intended to support its AI ambitions. So, yeah, this is kind of similar to what we heard from Amazon about, oh, it was about four months ago, right? This kind of shift from OpEx to CapEx, from thousands of people to AI factories and chips, and it looks like Meta might be going down the same route. So, no official date has been set for the layoffs, and the final number
of cuts is still being determined, according to reports. CEO Mark Zuckerberg has been aggressively recruiting top AI talent, offering compensation packages reportedly worth hundreds of millions of dollars over four years. So Meta's shift toward AI is expected to create efficiencies, with projects previously requiring large teams now handled by fewer, highly skilled employees. The company also acquired Moltbook this past week. If you read our newsletter, you saw that one. That's a social networking platform for AI agents. And Meta spent about two billion dollars to buy Chinese AI startup Manus. So they've been acquiring a lot and spending a lot in the AI space, but, well, apparently not on their own employees. So Meta's previous restructuring in late 2022 and early 2023 resulted in a layoff of 21,000 employees, or about a quarter of its workforce at the time. So Meta's AI efforts follow setbacks with its Llama 4 models last year, including criticism of misleading benchmark results and the cancellation of its largest model, which we never got. And the company's new superintelligence team is working on a model called Avocado, but performance has not yet met expectations so far, and the model has reportedly now been delayed until May. So, what's Meta doing here? We're not sure, right? It's been now nearly a year since we got our last models from Meta. It's been nine or so months since Meta spent $15 billion to essentially acquire Scale AI's leadership team and its CEO. So I think most of the AI industry was expecting something, probably by the end of 2025, from Meta. So it's kind of surprising that, number one, not only have we not seen anything from these large AI investments aside from a couple of acquisitions, but now, reportedly, Meta's next model, codenamed Avocado for now, is delayed yet again. All right, something that is not delayed: Microsoft just launched
Copilot Health, a new feature inside its Copilot chatbot designed to help users better understand their medical records and wearable device data, and to help them make better decisions regarding their health. So Microsoft's move comes as the company's survey found that health-related questions are the most common topic for mobile Copilot users. So Copilot Health brings together data from smartwatches, fitness rings, and uploaded medical records, offering personalized insights and support, but it is not intended to diagnose or treat medical conditions. Yeah, that's always interesting when, you know, these companies come out specifically with health products, but they're like, yeah, this isn't, you know, to actually diagnose anything, right? It's like, yeah, you have to put that giant asterisk on it, even though that's 100% what people are going to be using this for. So the tool was developed with input from both Microsoft's in-house clinicians and an external panel of hundreds of doctors across 24 countries. So Copilot Health uses the National Academy of Medicine's standards for credible sources and includes information licensed from Harvard Medical School, starting in 2025.
Users can easily connect records from multiple doctors, hospitals, and labs through a third-party program called HealthX, and can delete their health data at any time with a simple toggle. Microsoft emphasized that the health information in Copilot Health is kept separate from regular chatbot conversations. It is not used to train AI models, but it is not protected under HIPAA privacy laws. So the tool helps users prepare for doctor visits by generating questions, breaking down lab results, and finding providers who accept their insurance, but it cannot diagnose or prescribe medication. So, right now, Copilot Health is launching first for adults in the US, with English as the only language, and interested users can sign up for a waitlist right now. This is no surprise, right? Health is obviously a huge play, especially on the consumer side. And in Microsoft's large-scale study that we talked about in our newsletter, I believe it was two weeks ago, well, they found, maybe a little bit surprisingly at the time, that this is overwhelmingly one of the most popular use cases for consumers using Copilot right now. And I know there are probably some doctors out there listening that aren't going to want to hear this, but I've been saying this for a long time: any profession that is high-priced, straight-up knowledge-based work, like health care and doctors, accounting, consulting, those are industries that are going to be disrupted fairly quickly as the models that we use get better at reasoning and become faster, more accurate, and more transparent. So, I would expect, right, we saw this from ChatGPT, they came out with their health product, Anthropic went more of a plug-in route, and now Microsoft is going all in with Copilot Health. So, an interesting space, and one that we'll continue to keep an eye on. All right, we're going to take a quick break for a word from our partners.
Here's a harsh truth. Your company is probably spending thousands or millions of dollars on AI
tools. They're being massively underutilized. Half of companies have AI tools, but only 12% use
them for business value. Most employees are still just using AI to summarize meeting notes.
If you're the one responsible for AI adoption at your company, you need Section.
Section is a platform that helps you manage AI transformation across your entire organization.
It coaches employees on real use cases, tracks who's using AI for business impact,
and shows you exactly where AI is and isn't creating value. The result? You go from rolling out
tools to driving measurable AI value. Your employees move from meeting summaries to solving
actual business problems, and you can prove the ROI. Stop guessing if your AI investment is working.
Check out Section at sectionai.com. That's s-e-c-t-i-o-n-a-i dot com.
All right, renowned AI scientist Yann LeCun has officially unveiled his AI startup after more than a decade at Meta, as he's now launched AMI, a French startup focused on building AI that understands the physical world. LeCun is Meta's former chief AI scientist and obviously a leading figure in the field, and he's co-founded AMI after leaving Meta. Obviously, we've been reporting on that for a couple of months now; he left Meta after 12 years to pursue the new project. So AMI stands for Advanced Machine Intelligence, and the company focuses on a fundamental shift in AI development, moving away from standard large language models toward world models. And they started with a pretty big splash, to the tune of $1 billion in its first funding round, marking one of Europe's largest early-stage investments in AI ever. Investors include five major funds and corporate giants such as Toyota, NVIDIA, and Samsung, along with tech leaders including former Google CEO Eric Schmidt and Amazon founder Jeff Bezos. So, AMI. Well, if you're wondering what the heck these world models are: they're AI systems that understand the environment the way humans and animals do, moving beyond text-based language models. So LeCun will serve as the company's non-executive chairman, while Alexandre Lebrun is CEO, and the team plans to hire 20 to 30 people immediately to accelerate research and development. So AMI's work continues research that LeCun started at Meta, including a new architecture called JEPA designed for real-world understanding. So within three to five years, AMI plans to deliver broadly capable AI for tasks including autonomous driving, robotics, and complex system analysis. Even French President Emmanuel Macron publicly praised LeCun's move, highlighting France's growing leadership in AI research. So, for the past few years, LeCun has argued that today's large language models are kind of a dead end in terms of a path toward human intelligence because they don't have enough data, and, you know, he's essentially called them powerful pattern matchers. So, you know, although he's obviously one of the most prominent names in AI, I think a lot of today's, you know, current researchers are kind of butting heads with LeCun, because they're saying, okay, well, these large language models are clearly more than just pattern matching, as they're able to produce economically viable work. But, well, he's betting with AMI and their world models that they'll be able to compete in areas where today's large language models aren't yet, such as robotics and manufacturing. So we'll see in the coming months and years if AMI is able to cash in on that.
All right. Well, speaking of cashing in, Perplexity is looking to cash in on the OpenClaw trend, as they've essentially, well, tried to resurrect the personal computer, and maybe go after that OpenClaw crowd a little bit here. So a new AI system was released from Perplexity, called Personal Computer, and they promise to automate complex tasks on Mac devices, potentially changing how a lot of people could do their work if they get their way. So Perplexity launched Personal Computer. It is an AI-powered autonomous agent that runs right now on Mac computers. So, unlike typical AI assistants that wait for user prompts, Personal Computer is one that operates persistently in the background on a local machine, but also uses Perplexity's hybrid architecture online. So it can just carry out these tasks independently once it's given a goal. So, we've covered Perplexity's Computer, which is its series of autonomous agents. It essentially uses 19 different frontier models to accomplish different tasks, and it can do so autonomously, right? But this is all done in the cloud. And, right, what's super hot and trending, and, well, makes sense right now, is using local machines. That's one of the reasons why OpenClaw has gone on to become, well, by definition, the most popular open source software ever. So Perplexity Computer, well, is trying to kind of get in on the game. So, using their very impressive tech, right, I was actually extremely impressed when I did a run-through of Perplexity Computer a couple of weeks ago on our AI at Work on Wednesdays series, this brings its capabilities, well, to your actual computer. So Perplexity here is trying to kind of redefine what the personal computer is now, which is funny in 2026, as personal computers have been around for decades. But, essentially, all this is, well, a couple of things: it's marrying this new technology of Computer, their hybrid autonomous architecture, with the local machine, right? So now Perplexity Computer, Personal Computer, will be able to still use the power of its hybrid cloud architecture, but also be able to run tasks locally, right, to be able to save files locally on a machine, to be able to read files locally on a machine. So it's, as Perplexity is saying, a more secure and sandboxed version of something like OpenClaw. So Perplexity says the system can handle long-running assignments, remaining active for hours or days until objectives are completed. So integrations right now come with productivity tools like Gmail, Slack, GitHub, and Notion, which means it can, well, kind of manage most people's day-to-day workflows. You don't need to buy new hardware right now, because it can work on any existing Mac, making advanced automation accessible without having to make that extra investment. And right now, unfortunately, this is only available to Perplexity users on their Max plan, and is waitlist only. All right. And one or two more
big pieces of AI news. Not a lot was written or said about this one, surprisingly. Maybe it's just my background, but I find it interesting, and I think you should know about it. That's because the US Senate has officially approved staff use of generative AI chatbots with Senate data, marking a major shift in government tech policy. So, according to FedScoop, Senate staff can now use Microsoft Copilot, Google's Gemini, and OpenAI's ChatGPT with official Senate data, following approval from the Senate's Sergeant at Arms Chief Information Officer. Each Senate employee will be eligible for one license to either Gemini or ChatGPT at no cost, with further details on licensing expected within 30 days. Microsoft Copilot is already integrated into the Senate's Microsoft 365 environment and can be accessed via mobile apps or Office tools like Word and Excel. So the Senate's AI policy includes a two-tier risk assessment system, with these approvals being the first for tier two, covering official Senate data. And the approval processes and full Senate AI policy will remain undisclosed, raising concerns about transparency and accountability among tech advocacy groups. So, multiple AI vendors are offering discounted access to federal agencies, but it's unclear if similar deals will apply to Congress. So, Copilot does not automatically access internal Senate resources right now. It only uses data explicitly shared in prompts, meeting federal cybersecurity requirements. Here's why this
is interesting, y'all. Number one, I hope, I hope the US government and the Senate take training seriously, because I'm just going to be honest here. Like a lot of people, if you don't follow government, and if you take off your politics hat, senators are not exactly always the smartest people in the room. They're not. You would think they are, but let's just look at some recent history, right? Especially when it comes to senators, many of them on the older side, let's just call it out, not really understanding technology. The reality is many members of Congress don't understand technology, let alone AI. You know, like when a senator thought the internet was a series of tubes, or when another senator didn't understand how Facebook made money, you know, with ads, or how, yes, this is real, a senator asked Google's CEO why his granddaughter was receiving notifications on her iPhone, not knowing that Google didn't make iPhones, right? So, right now the average age of a senator is 64 years old, and more than a third are 70 or older. So, I'm not saying that older generations shouldn't use AI. I think it's great that they do. But I think this is just going to create an onslaught of, essentially, workslop in the government, right? Which is what you don't necessarily want. In the same way that, I think, you know, AI slop has taken over social media, right? Yeah, I think it might unfortunately start making its way into politics, which unfortunately means it could start making its way into actual legislation, which is not always a good thing, especially if you do not prioritize and emphasize training. So, please, US government, actually train these senators on how to use AI. Please.
All right, our last big piece of AI news, saving it for last because I think it's a big deal: Google is launching new, powerful Gemini AI features across its Workspace apps. So Google has announced that Gemini will now integrate directly into Docs, Sheets, Slides, and Drive, making it easier to start and organize projects using information pulled from emails, chats, and files. So users can prompt Gemini to draft documents, spreadsheets, or slides by referencing specific emails, meeting notes, or files, reducing the need for manual information gathering. So, in Google Docs as an example, Gemini can generate first drafts, rewrite highlighted sections to match a desired tone or level of professionalism, and even format documents to align with referenced notes. Google Sheets users can ask Gemini to create checklists and contact lists and track quotes by pulling data directly from Gmail and Drive. Google Drive search now features an AI overview. So, if you've ever used those AI Overviews in Google Search, and you're like, oh, this is pretty cool, aside from that one time it recommended using glue on pizza or something like that, but it's since gotten much better, Google having that AI overview in Google Drive is pretty cool, because it can pull together relevant files, including citations, and users can ask Gemini questions about selected files, emails, or calendar entries, such as tax-related inquiries. So Gemini's features are accessible via a new prompt bar in each Workspace app. So, yeah, as an example, if you're staring at a blank Google Doc, well, it's not blank anymore, because you will see this new feature there at the bottom. So the new Gemini-powered tools are rolling out in beta, first to Google AI Ultra and Pro subscribers, with Docs, Sheets, and Slides features first rolling out globally in English, and then Drive features launching initially in the US for now. All right, so that is it for our main stories, but we have a
lot for what's new and what's next. So, yeah, for the most part, we on the main show bring you anywhere
from seven to 10 big AI news stories, but there's always a ton that's happening in the world of AI.
And hey, FYI, we just started a new series as well on Fridays. So let me just quickly tell you what the rundown is, right? Monday, we bring you the AI news that matters. Wednesday, we go deep with one new AI feature or a new large language model, right? Hands-on, very much in depth. And then Friday, we started something new, because what I realized is, right, aside from that one big in-depth dive on Wednesdays, most of what we talk about on the show ends up right here at the end of our Monday show, in the what's new and what's next, just these little bullet points. So, if you're hearing something in the what's new and what's next and you're like, oh my gosh, that's huge, I need to know about that for my company, well, tune in on Fridays, because that's what we're going to be doing now: going over, kind of, our Friday features. All right.
Anyway, here's what didn't make the AI news roundup, in our what's new and what's next. So, NVIDIA launched Nemotron, it's open source. So, the Nemotron 3 Super model for scalable agentic AI systems. Meta, like we said earlier, acquired Moltbook, the social network for OpenClaw agents. Google, yeah, their new cinematic Video Overviews are being rolled out to Pro users, not just Ultra. I actually just stumbled upon that myself a couple of hours ago. So NVIDIA is also reportedly launching NemoClaw, an OpenClaw platform for enterprise. So, yeah, I'm sure we'll hear more news out of that this week at NVIDIA GTC. Oracle is reportedly cutting up to 30,000 jobs amid costly AI data center expansion. Canva launched AI-powered Magic Layers for editable AI designs. GLM-5 Turbo was released, which is Z.ai's quicker version of GLM-5, built for agents like OpenClaw. So, yeah, if you're an OpenClaw user, you might want to check out GLM-5. Google launched a Gemini-powered Ask Maps chatbot for personalized navigation in the US and India. Yes, Google, literally bringing Gemini to your docs and your car. Anthropic launched their code review tool for Claude Code. NVIDIA and Thinking Machines partnered on a gigawatt-scale Rubin AI deployment starting in 2027. That is with former OpenAI CTO Mira Murati. Adobe's CEO resigned amid investor pressure over an unclear AI strategy. Next, the Pentagon is reportedly rolling out Gemini AI agents to automate tasks for more than 3 million federal employees. ChatGPT released a new feature that lets you interact with math and science visuals in real time. Claude now builds interactive charts and diagrams, and that's in beta right now on all plans, including free plans. Google released Gemini Embedding 2, which lets you search and analyze text, images, video, audio, and docs all at once. Anthropic launched the Anthropic Institute to research societal, economic, and legal AI risks. OpenAI is reportedly delaying the rollout of its adult mode. I'm fine with that; I don't know why people are so excited about that. Claude for Excel and PowerPoint now share full context and support reusable skills. So that's really cool, that Claude in Excel and PowerPoint can now talk to each other. YouTube expanded deepfake detection to protect politicians and journalists from AI impersonation. Runway launched internal incubator labs to explore generative video applications. Here's a fun one: Peacock launched an AI Andy Cohen avatar curating personalized Bravo short-form video feeds. And OpenAI will reportedly integrate Sora directly into ChatGPT's interface. We made it. I just lost my voice again at the end. I love still being sick
randomly, right? Yeah, I'm ready for it to be warm here in Chicago so I can stop being sick so often. But that is a wrap for all of the AI news that matters. Like I said, if you don't have hours every single day to keep up with the headlines, the releases, the features, hey, just join us on Mondays as we lay it all out for you. Wednesdays we're going to go pretty deep, hands-on, with probably one of these things, and then go over our features on Friday, and we'll obviously have other shows for you on Tuesday and Thursday as well. I hope this one was helpful. If so, please go to our website, youreverydayai.com, and sign up for the free daily newsletter. Thanks for tuning in. We'll see you back tomorrow and every day for more Everyday AI. Thanks, y'all.
And that's a wrap for today's edition of every day AI. Thanks for joining us.
If you enjoyed this episode, please subscribe and leave us a rating. It helps keep us going.
For a little more AI magic, visit youreverydayai.com and sign up to our daily newsletter so you don't
get left behind. Go break some barriers and we'll see you next time.

Everyday AI Podcast – An AI and ChatGPT Podcast

