
Apple’s decision to have Google power the next generation of Apple Intelligence is the clearest signal yet that the foundation model race is entering a new phase defined by alliances, tradeoffs, and positioning rather than raw model capability. This episode looks at how Apple, Google, OpenAI, Anthropic, and Meta are each staking out roles across assistants, healthcare, commerce, and infrastructure, and why the real competition is shifting toward distribution, integration layers, and control points. From Siri and Gemini to Claude’s healthcare push, agentic shopping standards, and Meta’s energy-backed compute expansion, the throughline is a rapidly consolidating AI landscape where every move is about who gets to be the default layer.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps. https://assemblyai.com/brief
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, we're talking about all of the big moves and jockeying for
positioning between the foundation model labs, with the headliner being that Apple has
made it official, and Google will power Apple's AI models.
Before that, in the headlines, well, more jockeying for positioning, but on a slightly
different level.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
All right, friends, quick announcements before we dive in.
First of all, thank you to today's sponsors, KPMG, Zencoder, AssemblyAI, and Superintelligent.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can
subscribe on Apple Podcasts.
If you are interested in sponsoring the show, send us a note at [email protected].
Now speaking of aidailybrief.ai, you can also navigate there to find out all sorts of
information about other things going on in the world.
You can get access to the results of our ROI survey, join our AI New Year's resolutions
challenge, sign up to join the beta of Superintelligent's AI Strategy Compass, or, and this is the one
that I'm thinking about right now, sign up for more information about AIDB Intelligence.
This is a new forthcoming research information and benchmarking service that I literally
could not be more excited about.
If you want to go straight to that, go to aidebintel.com, and with that out of the way, let's dive
in to today's episode.
Welcome back to the AI Daily Brief headlines edition, all the daily AI news you need in
around five minutes, although today it is a little jam-packed to kick off this second full
work week in January. As I said in the intro, you can feel all of the different labs right now,
really jostling for position. At this point, with the possible exception of people's affinity for
Opus 4.5 and Claude as a coding partner, there is incredible parity across the major foundation labs,
and a lot of why people are using different models really comes down to personal choice.
It makes sense then, as the different labs add new product and interface layers around specific
use cases, that other labs are thinking in similar terms. Last week we got a number of
announcements from OpenAI around their strategy for health and healthcare, and Anthropic is following
them into that space. In a blog post on Sunday, Anthropic announced the launch of Claude for
Healthcare. They describe it as a set of tools and resources that allow healthcare providers,
payers, and consumers to use Claude for medical purposes through HIPAA-ready products.
Now, they're actually positioning it as an expansion of the Claude for Life Sciences product
suite that they announced back in October. Claude for Life Sciences was designed as a
research partner for scientists and clinicians, so this is a sort of industry and process complement
to that that's a little bit more focused on the consumers of healthcare as well as the providers
of healthcare. Similar to OpenAI's product, Anthropic will allow users to share medical records
and data from fitness apps to inform health-related conversations. And in addition to data
connectivity, Anthropic is launching a range of new connectors for industry standard databases
in the healthcare industry. Connectors for those unfamiliar are Claude's way of getting access
to external information that can inform how the chatbot interacts with certain queries.
The new connectors cover a wide range of functions, including insurance, diagnosis,
and research, with Anthropic hoping to speed up numerous healthcare workflows.
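To make that pattern concrete, here is a minimal sketch of how a connector-style tool typically works: the connector advertises a tool description the model can see, and the host application routes the model's tool calls to a handler whose result flows back into the conversation. The tool name, schema, and insurance "database" below are all invented for illustration; this is not Anthropic's actual connector API.

```python
# Illustrative only: the tool name, schema, and data here are invented,
# not Anthropic's real connector interface.

def lookup_coverage(member_id: str) -> dict:
    """Pretend insurance-database lookup backing the connector."""
    plans = {"M-001": {"plan": "PPO", "deductible_remaining": 350}}
    return plans.get(member_id, {"error": "member not found"})

# A connector advertises a tool description the model can see...
COVERAGE_TOOL = {
    "name": "lookup_coverage",
    "description": "Look up a member's plan and remaining deductible.",
    "input_schema": {
        "type": "object",
        "properties": {"member_id": {"type": "string"}},
        "required": ["member_id"],
    },
}

# ...and the host application routes the model's tool calls to a handler,
# feeding the result back into the chat as context.
HANDLERS = {"lookup_cover" "age": lookup_coverage}

def dispatch(tool_name: str, tool_input: dict) -> dict:
    return HANDLERS[tool_name](**tool_input)

print(dispatch("lookup_coverage", {"member_id": "M-001"}))
```

The point of the pattern is that the model never touches the database directly; it only ever sees the schema and the handler's returned data.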
Eric Kauderer-Abrams, the Head of Life Sciences at Anthropic, said,
When navigating through health systems and health situations, you often have this feeling
that you're sort of alone and that you're tying together all this data from these different
sources, stuff about your health and your medical records, and you're on the phone all the time.
I'm really excited about getting to the world where Claude can just take care of that.
He added that the goal is to use Claude as the quote, orchestrator and to be able to navigate
the whole thing and simplify it for you. Now, while people have a sense, especially in the AI
space, of Anthropic and Claude being super focused on the enterprise context and on things like
coding, Koji Kubota argues that this move actually feels on brand for them. Koji writes,
Anthropic isn't talking about diagnosis or treatment. What it is going after instead is the
complexity around healthcare, scattered medical records, insurance rules, and systems that are
hard for patients to navigate. Seen this way, this is not an AI trying to practice medicine.
It looks more like an attempt to become an organizing layer underneath it.
Now, as you'll see in our main episode today, this idea of using Claude as, as Eric put it,
the orchestrator, and using it as an orchestrator for not just coding purposes,
is going to be a theme that I think we'll be seeing more of.
Ultimately, there is absolutely no denying that everything in and around health is,
A, a major consumer of everyone's time, B, something that basically no one enjoys as it's
currently organized, and C, something where having access to platforms that are incredibly good
at absorbing and interacting with huge amounts of information at once is likely to be very valuable.
One really interesting story of someone showing the power of AI and specifically Claude when
it comes to health comes from Shopify CEO Tobi Lütke. He wrote,
My annual MRI scan gives me a USB stick with the data, but you need this commercial Windows software
to open it. Ran Claude on the stick and asked it to make me an HTML-based viewer tool,
and it looks way better. One more prompt and it annotates everything with the findings.
Now Tobi from here articulated something which I'm increasingly finding as well,
when he writes, By the way, this is a good example of what I meant with reflexively
reaching for AI. You tinker with AI for a while and you just reach for this.
This was an obvious thing to try when I saw I needed to use Windows and was on my Mac.
You want to train your brain on this intuition.
Now, staying on the healthcare theme, Google on the other hand is winding down their support
of health-related AI queries. Specifically, Google AI overviews will no longer offer AI-generated
summaries for certain health-related searches. The decision comes shortly after an investigative
piece in the Guardian found that AI overviews were presenting incorrect health advice.
The article highlighted advice that people with pancreatic cancer should avoid high-fat foods,
which experts said was the opposite of what should be recommended.
In another example, AI overviews made an error in listing the normal range of liver function
tests, which could lead people with severe liver failure to think they were perfectly healthy.
Google said that they don't comment on individual search results, but that they have taken steps
to make broad improvements in the area. However, they noted their internal team of clinicians
reviewed the searches highlighted by the Guardian and found that quote,
in many instances, the information was not inaccurate and was also supported by high-quality
websites. Vanessa Hebditch, the Director of Communications and Policy at the British
Liver Trust, told the Guardian that the removal was good news, but that quote,
our bigger concern with all this is that it is nitpicking a single search result,
and Google can just shut off the AI overviews for that, but it's not tackling the bigger issues
of AI overviews for health. Now, I think that this actually shows something interesting about
consumer expectations and different forms of information dissemination. In the past for Google,
sure people wanted Google's search to index the best most accurate results,
but if someone searched for something on Google, and then they ended up on a website with
inaccurate results, Google didn't get the primary blame for that. It was, of course, the website
that had the faulty information that was the primary culprit. Now, however, because Google's AI
is in charge of curating, aggregating, and then reprinting that, it becomes Google's problem,
even if the root cause is the same garbage information that informs the AI overview.
There's also another interesting phenomenon, which is a little bit beyond the scope of this
particular episode to deal with, which is the differentiated expectation of AI information
to always be accurate, as opposed to some base understanding that information on the internet
is not always going to be accurate. And I think finally, there's a whole additional issue of the
difference between what Google is doing to serve accurate information when it's in the context of
Gemini, as opposed to their AI overviews, which I think are going to have very different expectations
and very different challenges. Now, staying on the theme of Google and all the labs figuring out
their strategies in a handful of fundamental areas, Google has launched their new
agentic shopping standard. The standard, which is called the Universal Commerce Protocol or UCP,
was developed in collaboration with retail partners, including Shopify, Etsy, Walmart, and Target.
Mastercard and Visa also had input from the payment side of the protocol.
The general idea is to have a standardized way for shopping agents to gather information about
products and navigate the checkout process, ensuring everything is completely interoperable.
The protocol is completely open-source and non-proprietary, so it can be used by everyone that's
building in the space. Alongside the protocol, Google released a new set of tools to help
merchants integrate with UCP. Grocery chain Kroger is already building with the tools,
with Chief Digital Officer Yael Cosset commenting, things are moving at a pace that if you're not
already deep into AI agents, you're probably creating a competitive barrier or disadvantage.
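To give a sense of what "standardized" buys you here, below is a hypothetical sketch of the kind of structured product-and-checkout data shopping agents could exchange under a shared protocol. Every field name is invented for illustration; the actual UCP schema isn't covered in this episode.

```python
# Hypothetical sketch of an interoperable shopping-agent exchange.
# Field names and shapes are invented; this is not the real UCP spec.
from dataclasses import dataclass

@dataclass
class Offer:
    merchant: str
    sku: str
    title: str
    price_cents: int
    currency: str = "USD"

@dataclass
class CheckoutRequest:
    offer: Offer
    quantity: int
    payment_token: str  # opaque credential from the payment network

def total_cents(req: CheckoutRequest) -> int:
    """The amount an agent would confirm with the user before checkout."""
    return req.offer.price_cents * req.quantity

offer = Offer(merchant="example-store", sku="SKU-42",
              title="Coffee grinder", price_cents=4999)
req = CheckoutRequest(offer=offer, quantity=2, payment_token="tok_demo")
print(total_cents(req))  # 9998
```

The value of a standard like this is that any agent can parse any merchant's offers and hand payment off through an opaque token, rather than scraping each storefront's bespoke checkout flow.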
Finally, Google announced plans to experiment with advertising within their AI experience.
Advertisers will be able to present exclusive offers to users shopping with Google's AI
mode. Now, these are not sponsored ad placements. Instead, they allow retailers to give AI
shoppers a unique deal. Vidhya Srinivasan, Google's VP of Ads and Commerce, said,
it's a new concept that moves beyond our traditional search ads model. It essentially gives
retailers the flexibility to deliver value to people shopping in AI mode, whether that's a lower
price, a special bundle, or free shipping, in the moment it matters most, to close the sale.
Wrote Google CEO Sundar Pichai, AI agents will be a big part of how we shop in the not-so-distant
future. To help lay the groundwork, we partnered with Shopify, Etsy, Wayfair, Target, and Walmart
to create the Universal Commerce Protocol, a new open standard for agents and systems to talk to
each other across every step of the shopping journey. And coming soon, UCP will power native
checkouts so you can buy directly in AI mode and the Gemini app. I continue to think that
agentic shopping and commerce is going to be one of the most ubiquitous AI developments of 2026.
Now moving over to a different aspect of foundation lab competition,
Anthropic has banned xAI from accessing their models as part of a broader crackdown on
unauthorized use of Claude Code. Kylie Robinson of Core Memory reported that xAI researchers
were abruptly cut off from using Anthropic models within Cursor last week.
xAI co-founder Tony Wu told staff that he'd been told this is a new policy that
Anthropic is enforcing for all of their competitors. In a Slack message, Wu wrote,
this is both good and bad news. We will get a hit on productivity, but it really pushes us to
develop our own coding product and models. We're at a time in which AI is now a critical technology
for our own productivity. This coming year is going to be really wildly exciting for all of us.
The team is rapidly developing our own models and product. We will have something to share with
everyone soon. In the meantime, you may still try all different kinds of models in Grok Build.
Elon Musk had, by the way, nodded to progress earlier in the week posting,
major upgrade to Grok Code coming next month. It will one-shot many complex coding tasks.
Simultaneously, Anthropic implemented a new set of technical controls to prevent third-party
applications from spoofing Claude Code to gain access to more favorable usage limits.
The change affected multiple services, most notably the popular open-source coding agent OpenCode.
A lengthy discussion on Hacker News used the analogy of a buffet,
noting that à la carte API pricing could be as much as $1,000 a month for heavy users.
One commentator wrote,
everything about this is ridiculous and it's all Anthropic's fault. Anthropic shouldn't have
an all-you-can-eat plan for $200 when their pay-as-you-go plan would cost more than $1,000 for
comparable usage. Their subscription plans should just sell you API credits at like 20% off.
Others thought this crackdown was inevitable, with programmer Andrew Remnick writing,
Anthropic just reminded us that they are in fact a corporation. They are now actively
blocking OSS harnesses from using Claude subscriptions. This is why I keep harping on the importance
of building for independence from a single provider. Let's not forget that their incentives are
not fully aligned with customers, and we need to actively build for provider independence and
open source. Going back to the buffet analogy, however, some found the move pretty justified.
Berkeley student Ayyosh posted,
I understand where Anthropic was coming from for the OpenCode stuff. It's like bringing
Tupperware containers to the all-you-can-eat buffet. Ultimately, whatever you think of the situation,
it is clear that Opus 4.5 tokens are just about the hottest AI commodity right now,
which gives Anthropic a lot of power in the space.
Honestly, we have even more than we could get into in this headlines edition, including some news about
forthcoming models, but for now, that is where we're going to wrap the headlines. Next up,
the main episode. Sure, there's hype about AI, but KPMG is turning AI potential into business value.
They've embedded AI in agents across their entire enterprise to boost efficiency,
improve quality, and create better experiences for clients and employees. KPMG has done it
themselves, now they can help you do the same. Discover how their journey can accelerate yours
at www.kpmg.us/agents. That's www.kpmg.us/agents.
If you're using AI to code, ask yourself, are you building software or are you just playing
prompt roulette? We know that unstructured prompting works at first, but eventually it leads to
AI slop and technical debt. Enter Zenflow. Zenflow takes you from vibe coding to AI first engineering.
It's the first AI orchestration layer that brings discipline to the chaos. It transforms
freeform prompting into spec-driven workflows and multi-agent verification, where agents
actually cross-check each other to prevent drift. You can even command a fleet of parallel
agents to implement features and fix bugs simultaneously. We've seen teams accelerate
delivery 2x to 10x. Stop gambling with prompts. Start orchestrating your AI. Turn raw speed into
reliable production-grade output at zencoder.ai/zenflow. If you're building anything with voice AI,
you need to know about AssemblyAI. They've built the best speech-to-text and speech
understanding models in the industry, the quiet infrastructure behind products like Granola,
Dovetail, Ashby, and Cluely. Now as I've said before, voice is one of the most important
modalities of AI. It's the most natural human interface, and I think it's a key part of where
the next wave of innovation is going to happen. AssemblyAI's models lead the field in accuracy
and quality so you can actually trust the data your product is built on. And their speech understanding
models help you go beyond transcription, uncovering insights, identifying speakers, and surfacing key
moments automatically. It's developer-first: no contracts, pay only for what you use, and it scales
effortlessly. Go to assemblyai.com/brief, grab $50 in free credits, and start building
your voice AI product today. Today's episode is brought to you by Superintelligent.
Superintelligent is a platform that very simply put is all about helping your company figure
out how to use AI better. We deploy voice agents to interview people across your company,
combine that with proprietary intelligence about what's working for other companies,
and give you a set of recommendations around use cases and change management initiatives
that add up to an AI roadmap that can help you get value out of AI for your company.
But now we want to empower the folks inside your team who are responsible for that transformation
with an even more direct platform. Our forthcoming AI strategy compass tool is ready to start
to be tested. This is a power tool for anyone who is responsible for AI adoption or AI
transformation inside their companies. It's going to allow you to do a lot of the things that we do
at Super Intelligent, but in a much more automated self-managed way and with a totally different
cost structure. If you're interested in checking it out, go to aidailybrief.ai/compass,
fill out the form and we will be in touch soon.
Welcome back to the AI Daily Brief. One of the things to know about this show
is that given how fast moving this industry is, it is very often the case that I make last
minute pivots to change what I'm covering on any given show. Today I was fully planning on
covering Anthropic's new Claude Cowork, basically Claude Code, but for everything that's not
code. Given how much Claude Code has been on people's minds for non-code use cases,
after I got a little advance notice that this was coming, it seemed like a very obvious focus.
As it turns out, I just wanted to do it a little bit more in depth than I would have been
able to this afternoon after it was announced. So instead we're pivoting and we're going to stay
on the same theme that we started in the headlines, which is the competition and jockeying for
position among the big players. Now, if the jockeying and competition that we heard about in the
first part of the episode and the headlines was little skirmishes, the news that we're talking
about in the main episode is the big stuff. And the main story is that after a decade of complaining
and internet memes and even Larry David yelling at Siri, swearing at it and smashing it against his
car in utter frustration in one of the most relatable moments of Curb Your Enthusiasm,
Apple is finally going to fix Siri. And more broadly, it seems maybe take AI seriously
and they're going to do so with Google as their partner.
Now this has been rumored for a while, but it became official today. The companies released a
short two paragraph joint statement in which they said, Apple and Google have entered into a
multi-year collaboration under which the next generation of Apple foundation models will be
based on Google's Gemini models and Cloud technology. These models will help power future Apple
intelligence features, including a more personalized Siri coming this year. After careful evaluation,
Apple determined that Google's AI technology provides the most capable foundation for Apple
foundation models and is excited about the innovative experiences it will unlock for Apple users.
Apple intelligence will continue to run on Apple devices and private cloud compute while maintaining
Apple's industry-leading privacy standards. Reuters called it a major win for Alphabet,
writing that the deal marks a major vote of confidence for them. As they point out, while Google's
technology already drives much of Samsung's Galaxy AI, the Siri deal unlocks a large market with
Apple's install base of more than two billion active devices. The saga of Siri has been a long one.
As the Verge wrote, Apple spent most of the past year working on an AI-upgraded version of Siri,
but just couldn't get it there. Now as part of those efforts, Bloomberg reported that Apple also
considered using a custom version of Gemini for AI-powered features in Siri. And of course,
along the way, we got a big shake-up in Apple's AI team. Specifically, their head of AI, John
Giannandrea, stepped down last month, paving the way for this big news to kick off the year.
Now, for many, the focus on the news was as much about what it said negatively about OpenAI
as what it said positively about Google. Reuters quoted Parth Talsania, the CEO of Equicides
Research, who said, Apple's decision to use Google's Gemini models for Siri shifts OpenAI into a
more supporting role, with ChatGPT remaining positioned for complex opt-in queries,
rather than the default intelligence layer. The Verge reports that Apple had also explored working
with Anthropic and Perplexity, and Apple has always said that they plan to launch more
integrations with more AI companies over time. Yuchen Jin thought the deal made sense,
writing, Gemini leads in multimodality, and OpenAI's devices and personalized ChatGPT are
direct competitors of Apple. Benjamin De Kraker says, OpenAI does seem very Apple-coded,
Google obviously does not, polar opposites really. So how did OpenAI blow the Apple deal so badly?
Yet others think it's the fact that OpenAI is a little bit too close to Apple for comfort
that made this deal happen the way that it did. Robert Scoble writes,
This makes sense because OpenAI is trying to become a products company. In other words,
OpenAI is going after Apple, and it would make no sense for Apple to help a new competitor.
In a longer post, he expanded on those thoughts. He wrote, OpenAI is making a variety of new
products and going after Apple. Apple didn't want to give OpenAI any more data to help a potential
new competitor. The real problem for this OpenAI effort is that we're about to move to glasses.
People don't believe me that we're about to move to glasses, but you should because I just got
back from CES and there was a ton of glasses there. For OpenAI to really get somewhere, they
need to add a camera to an earphone. While I don't see that in this latest report, I wouldn't be
shocked to see a camera show up somewhere eventually. It's cameras that add understanding of
the real world, which can lead to many new features that Apple's current AirPods can't match.
I believe Apple is developing such a product to go with their glasses, which makes a lot of sense.
Also, Google's AI models are better at multimodality. This means they can use cameras in a much
better way than even OpenAI's models can. This is why in Silicon Valley robotics companies,
a lot of them use Google Gemini because robots need multimodality. Apple's glasses, which are
expected in 2027, have some significant advantages over the others. First, they have eye
sensors in them, so it knows where the user is looking. It can also tell what the user is touching,
holding or gesturing towards. This new capability will give Siri a significant parlor trick.
It will let Siri answer questions that no other search engine has been able to answer before.
And he goes on from that, and basically the whole argument amounts to the fact that as AI
becomes embodied, OpenAI and Apple are just on an absolute collision course that makes a collaboration
extremely difficult. Certainly the fact that OpenAI went and tapped a huge number of Apple
staffers, most notably Jony Ive, shows that there is some similar DNA in the companies that may
well account for why they were too close to make it work. Now, another dimension of the conversation
is about the privacy dimension. Max Weinbach writes, remember this is Gemini technology, but as they
said, it still runs on private cloud compute. That means Apple Silicon and Apple's hardware
are not Google's. Google is just licensing the technology to Apple. Yuchen Jin also notes,
however, that he doesn't think a future Apple-only model is a foregone conclusion. He writes,
I don't think Apple is giving up building its own foundation models. They've spent billions
on GPUs, hired many AI researchers and have the iPhone as a massive distribution channel.
Once they've collected enough data via the new Gemini-powered Siri, switching to Apple's own
models is always an option. Now for others outside the privacy question, there was a competitiveness
question. Brasser X writes, this is deeply concerning. Google has long pushed towards monopoly
power, and in search they effectively already have it. With Android, Chrome, and now Apple's foundation
models, this concentration of power demands serious antitrust scrutiny. Elon Musk agreed, responding
to the news from Google's post on X, where he wrote, this seems like an unreasonable concentration of
power for Google given that it also has Android and Chrome. Leihepner writes, how is this different
than Google wrapping search data and underlying models in Apple Safari browser? Didn't the Justice
Department just win a historic antitrust trial about this? Still mostly people are just looking at
this as a huge win for Google. Prince writes, this seems to imply that Apple has agreed not to use
its own foundation models in its ecosystem for the next few years. What a disaster if true.
Although, as Dan McAteer writes, loving this as an iPhone owner, can't wait to get actually good
Siri. Tim Halterson writes, Apple giving up in the AI race and giving Google a massive distribution
advantage. Jim Kramer summed up the market's feelings when he wrote, the Apple Google partnership
is very strong. Google pays little and Siri gets better. Both stocks should be higher.
Jim is frequently known as something of a counter signal, but in this case I think he's right,
and I think it is just one more piece of evidence supporting my and many other people's prediction
that at some point in 2026 before the year is done, Alphabet will be the biggest company by
market cap on the planet. Josh Woodward of Google did also note that Nano Banana Pro has now crossed
a billion images created in the Gemini app in just 53 days of the model being out. That's pretty cool,
but given that about 900 million of those were me, I was a little bit less impressed.
Now, if the Google Apple news was the biggest in some ways, there was also still some very big
news out of Meta as well. First of all, they announced that they were expanding their nuclear
strategy with three new power deals. Vistra has contracted supply capacity from their existing
power plants, while small nuclear reactor startups Oklo and TerraPower have signed agreements to
build multiple reactors. The Vistra deal is expected to deliver 2.1 gigawatts of power from
a pair of plants in Ohio, where Meta is building their Prometheus supercluster, and overall the
three deals are expected to deliver 6.6 gigawatts for Meta's data centers by 2035. At the moment,
most of the large data center projects are one-gigawatt scale, so these deals set Meta up to
significantly expand their footprint. The new deals add to Meta's 2025 agreement with Constellation
Energy to extend the life of an Illinois power plant. Meta's chief global affairs officer Joel
Kaplan called the deals one of the most significant corporate purchases of nuclear energy in American
history. He added, state-of-the-art data centers and AI infrastructure are essential to securing
America's position as a global leader in AI. Meta was also careful to address the growing negative
sentiment around data centers driving up the cost of electricity, saying that they will quote,
pay the full costs for energy used by our data centers so consumers don't bear these expenses.
Patient Investor writes, My opinion: this is big tech telling the market they want firm power
for the next decade, not just more chips. Abu writes, we're shifting from compute-constrained
to energy-constrained. If you don't control the power, you don't control the model. And that
gets us to our second announcement from Meta. Earlier on Monday, Zuck posted, today we're establishing
a new top-level initiative called Meta Compute. Meta is planning to build tens of gigawatts this
decade, and hundreds of gigawatts or more over time. How we engineer, invest, and partner to
build this infrastructure will become a strategic advantage. The effort will be led by Santosh
Janardhan and Daniel Gross. Santosh will continue to lead our technical architecture, software stack,
silicon program, developer productivity, and building and operating our global data center fleet
network. Daniel will lead a new group responsible for long-term capacity strategy, supplier partnerships,
industry analysis, planning, and business modeling. They will work closely with Dina Powell
McCormick, who just joined Meta as president and vice chairman, to work on partnering with
governments and sovereigns to build, deploy, invest in, and finance Meta's infrastructure.
In their Why It Matters section, Axios wrote,
the announcement coming shortly after the firm named prominent banking executive and former
Republican official Dina Powell McCormick as president suggests Zuckerberg sees Meta's ability
to build out AI infrastructure as a strategic long-term advantage over its big tech peers.
One commenter writes, so Zuck's going after GCP, AWS, and Azure now? Makes sense, all this extra compute
can easily be monetized with the seemingly unrelenting demand. Meta becomes a cloud play.
We'll have to see if that's exactly how it plays out, but it certainly seems like the big cloud
providers just got a new player in the space. So like I said, quite a bit of strategic
repositioning on this Monday. For now, that is going to do it for today's AI Daily Brief.
Like I said, tomorrow we will dive deep into the new Claude Cowork product, which again,
is Claude Code for everything that's not code, and I am super excited for that. For now,
I appreciate you listening or watching as always, and until next time, peace!

The AI Daily Brief: Artificial Intelligence News and Analysis
