
The AI discourse is absolutely frenetic right now — everything from Karpathy's misinterpreted jobs visualization to a viral dog cancer cure story that's both less and more than it seems. NLW's argument: we're in AI's Second Moment, the agentic equivalent of the original ChatGPT shock, but with bigger capabilities, billions more people in the conversation, higher economic stakes, and an industry that's had three years to get worse at explaining itself. In the headlines: a preview of NVIDIA's GTC, SEC filings quietly listing AI agents as a material risk, and ByteDance shelving its video model over copyright disputes.
Learn more about AGENT MADNESS: Our 64-Bracket tournament to find the coolest Agent of 2026 https://www.agentmadness.ai/
Brought to you by:
KPMG – Agentic AI is powering a potential $3 trillion productivity shift, and KPMG’s new paper, Agentic AI Untangled, gives leaders a clear framework to decide whether to build, buy, or borrow—download it at www.kpmg.us/Navigate
Mercury - Modern banking for business and now personal accounts. Learn more at https://mercury.com/personal-banking
AIUC-1 - Get your agents certified to communicate trust to enterprise buyers - https://www.aiuc-1.com/
Blitzy - Want to accelerate enterprise software development velocity by 5x? https://blitzy.com/
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Our Newsletter is BACK: https://aidailybrief.beehiiv.com/
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief: all about that guy who used AI to cure his dog's cancer
and what it says about the discourse in AI's second moment. Before that in the headlines,
a preview of NVIDIA's GTC. The AI Daily Brief is a daily podcast and video about the
most important news and discussions in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors: KPMG, Blitzy, AIUC, and PromptQL. To get
an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can subscribe
on Apple Podcasts. To learn about sponsoring the show, send us a note at sponsors@aidailybrief.ai,
and while you are at aidailybrief.ai you can find out about all the various things
going on in this ecosystem.
The big one this week is of course Agent Madness, it's a March Madness style bracket where
we will be having live, human, and agentic voting on the coolest things that you have
vibe coded and built this year.
In addition to bragging rights, I will feature these agents on the show, so if you are interested
in that check out Agent Madness.ai.
Currently submissions are slated to close on March 18th, that is Wednesday of this week.
So again, get on over to Agent Madness.ai.
It is a big week for NVIDIA as their GTC developer conference kicks off in San Jose.
CEO Jensen Huang was scheduled to deliver his keynote on Monday morning, so we'll likely
know more by the time this episode goes out.
In the lead-up to the event, much of the speculation was around a new chip system developed
in collaboration with Groq; that is Groq with a Q, not Grok; Groq with the Q is the one that is
not an Elon Musk company.
NVIDIA acquired the chip-making startup in December and is expected to announce the
first collaborative product this week.
The Information described the new product as integrating Groq's language processing chips
into NVIDIA's rack-scale servers.
If that's the case, this will be NVIDIA's first attempt to directly address inference
demand.
Until now, NVIDIA's chips have been world-leading in AI training, but haven't been particularly
focused on efficient inference.
That's where Groq steps in, delivering a chip tailored exclusively to inference workloads.
NVIDIA is expected to announce OpenAI as a buyer of the new chip.
Sources said that production has been ramping up at Samsung's chip foundry and mass production
is expected to begin in the second half of the year.
Notably, this will be the first time NVIDIA has manufactured an AI chip outside
of TSMC, potentially diversifying supply chains out of Taiwan.
The new servers also use Intel CPUs rather than NVIDIA CPUs, according to sources, which
suggests that NVIDIA's chips don't integrate well with Groq chips at this stage.
The sources added that multiple generations of hardware are being planned, with the potential
to build Groq's technology into NVIDIA's Feynman GPUs, the next generation
following Rubin, later this year.
NVIDIA's NeoCloud partners are stepping up operations.
The Information reports that Nscale is in negotiations to acquire a huge data center
site in West Virginia.
The site has cleared regulatory hurdles, and is targeting two gigawatts of capacity by
2027.
Now, the deal is a little unusual for a NeoCloud provider; they have typically rented data centers
in the past.
It would also immediately make UK-based Nscale a major player in the US market as it
moves towards an IPO.
New documents surfaced by The Information said that the acquisition would triple Nscale's
revenue projections to $30 billion for 2027.
They are reportedly in talks to rent the capacity to ByteDance, but could also rent their
servers back to NVIDIA.
Writes Moor Insights & Strategy CEO and chief analyst Patrick Moorhead: NVIDIA is
no longer a chip company. As GTC 2026 opens, the company plans to present itself as a full-stack,
heterogeneous AI infrastructure platform, spanning training, prefill-decode inference, and agent orchestration.
Next up, while many software CEOs have been downplaying the AI disruption risks to their
company this year, SEC filings are telling a different story.
So far this year, 27 firms have listed AI agents as a material risk to their business
model, up from just 7 this time last year.
The list of companies warning about agents includes Figma, Workday, and HubSpot, whose
CEOs have all recently dismissed concerns.
During their most recent earnings call, Figma CEO Dylan Field said, I think it is the
case that humans will continue to use software, and increasingly agents will too, and I'm
excited about that.
However, he added, I think right now, if you're willing to hand off mission critical
work to agents, and just let them do it unsupervised, you're a very brave person.
Meanwhile, Figma's 10-K filing, released the same day, acknowledged that agentic AI may,
quote, change how people access and interact with digital products in ways that reduce
reliance on traditional software applications.
Now keep in mind, SEC filings should not be taken too literally.
Issuers are required to discuss any material risk to their business, which often leads
to disclosures of fanciful or unlikely risks.
Still, while individual disclosures don't tell us all that much, the volume is another
signal that we've moved past the tipping point on agents.
The idea that agents were capable of disrupting SaaS barely registered in the first half of
last year, and yet disclosure volume rapidly increased in the second half and into the beginning
of this year as the technology became more viable.
If nothing else, the shift means software executives are taking the threat of disruption
more seriously, or at least their legal departments are.
Next up, ByteDance has paused the global launch of their cutting-edge video model due to
copyright disputes.
The Information reports the global release of Seedance 2.0 has been mothballed due to
a series of copyright disputes with Hollywood studios.
Seedance 2.0 was released in China last month, gathering a huge online reaction.
You might recall this viral clip with Tom Cruise and Brad Pitt in a fist fight, which demonstrated
an incredibly high fidelity replication of real world actors.
The new model led to outrage in Hollywood, with companies including Disney, Warner Brothers,
Paramount, and Netflix sending cease-and-desist notices to ByteDance.
Motion Picture Association CEO Charles Rivkin said in a statement at the time: Seedance 2.0
has engaged in unauthorized use of US copyrighted works on a massive scale.
ByteDance had planned to make the model available globally in mid-March.
The plan included API access through their cloud platform BytePlus, as well as a new consumer
app designed for a foreign audience.
Those plans are now reportedly on hold.
Chinese users, meanwhile, are reporting that the model is far more tightly controlled than it
was at launch, to the point of rejecting prompts with no relation to copyrighted content.
Enterprise customers have complained that model access is limited to Chinese companies
with no intention of distributing content internationally.
One source said they'd been unable to negotiate terms without committing to spending around
1.5 million on the model.
Interestingly, it seems like the major hold-up is not so much about implementing guardrails,
but instead about refining them so that they don't block too much unrelated content.
We've seen this with OpenAI's release of Sora 2 as well: while it is relatively
straightforward to block copyrighted content, doing so without frustrating the user with
too many refused prompts is a much more difficult engineering problem.
And speaking of difficult engineering problems, a new AI startup led by former Anthropic researchers
is raising money to push the frontier of AI-enhanced scientific research.
The new company, called Mirendil, is in talks to raise $175 million at a billion-dollar valuation,
and if successful, the round would make Mirendil the latest AI startup to establish unicorn
status in its seed round.
The company is led by former Anthropic researchers Betemnishabra and Harsh Mehta, who spent their
time at Anthropic working on things like long-horizon scientific reasoning with AI agents
and automated AI research.
Both founders also have experience at Google.
Now, exactly what the company plans to do is not yet known, but sources say the new company
aims to conduct AI-enhanced scientific research in fields including biology and materials
science.
This area of AI research is quickly gathering interest and investment dollars as multiple
new labs focus on AI for science.
I would expect this to be a trend that continues throughout the year.
Speaking of Google, Google Maps is getting an AI twist with a new conversational interface.
The new feature called Ask Maps allows users to tap into a Gemini-powered chatbot to help
them navigate the world.
The feature is designed to answer questions about landmarks and help schedule travel.
Google gave small practical examples like being able to ask for a nearby location to
charge a phone or find a public tennis court with lights for an evening match.
The feature can also help with trip planning, with Google offering the example of building
a multi-stop trip to the Grand Canyon.
Writes Google: previously, finding this information meant lots of research and sifting through
reviews, but now you can just tap the Ask Maps button and get your questions answered
conversationally, with a customized map to help you visualize your options.
The feature integrates with Gemini's memory, so if you ask maps for a restaurant recommendation,
it can tap into what Gemini already knows about your preferences.
Google is also leveraging Gemini to launch a new visualization mode for navigation in
maps.
The update adds a 3D view that depicts buildings, overpasses, and surrounding terrain.
Once again, Google flexing its multi-modality and the integration of its entire ecosystem.
Lastly today, sort of a bridge topic to our main episode:
ServiceNow CEO Bill McDermott has warned that AI could send unemployment soaring above
30% for young professionals.
In an interview with CNBC, McDermott said that unemployment for college graduates could
quote easily go into the mid-30s in the next couple of years.
So much of the work is going to be done by agents, he continued, so it's going to be challenging
for young people to differentiate themselves in the corporate environment.
Now according to data from the Federal Reserve, unemployment for recent college graduates
currently stands at 5.6%, which is far lower than the 7.8% unemployment rate for young
people without a college degree.
However, 42.5% of college graduates are classified as under-employed, meaning they don't
have enough work or are working in roles that don't require a college degree.
This is the highest level of under-employment for college grads since 2020.
Computer science majors have among the highest unemployment rates at 7%, but their under-employment
rate is relatively low at 19.1% compared to other majors.
Now just why this type of discourse is so potent right now is in fact the topic of our
main episode, so with that, we will close the headlines and move on over to the main.
Agentic AI is powering a $3 trillion productivity revolution, and leaders are hitting a real
decision point.
Do you build your own AI agents, buy off the shelf, or borrow by partnering to scale faster?
KPMG's latest thought leadership paper, Agentic AI Untangled: Navigating the Build, Buy, or
Borrow Decision, does a great job cutting through the noise with a practical framework
to help you choose based on value, risk, and readiness, and how to scale agents with the
right trust, governance, and orchestration foundation.
Don't lock in the wrong model.
You can download the paper right now at www.kpmg.us/navigate; again, that's www.kpmg.us/navigate.
With the emergence of AI code generation in 2022, NVIDIA master inventor and Harvard
engineer Sid Pardeshi took a contrarian stance:
inference-time compute and agent orchestration, not pre-training, would be the key to unlocking
high-quality AI-driven software development in the enterprise.
He believed the real breakthrough wasn't in how fast AI could generate code, but in
how deeply it could reason to build enterprise-grade applications.
While the rest of the world focused on copilots, he architected something fundamentally different:
Blitzy, the first autonomous software development platform leveraging thousands of agents,
purpose-built for enterprise-scale codebases.
Fortune 500 leaders are unlocking 5X engineering velocity and delivering months of engineering
work in a matter of days with Blitzy.
Transform the way you develop software.
Discover how at Blitzy.com.
That's B-L-I-T-Z-Y dot com.
There's a new standard that I think is going to matter a lot for the enterprise AI agent
space.
It's called AIUC-1, and it bills itself as the world's first AI agent standard.
It's designed to cover all the core enterprise risks, things like data and privacy, security,
safety, reliability, accountability, and societal impact, all verified by a trusted third party.
One of the reasons it's on my radar is that ElevenLabs, who you've heard me talk about
before and is just an absolute juggernaut right now, just became the first voice agent
to be certified against AIUC-1 and is launching a first-of-its-kind insurable AI agent.
What that means in practice is real-time guardrails that block unsafe responses and protect against
manipulation plus a full safety stack.
This is the kind of thing that unlocks enterprise adoption.
When a company building on ElevenLabs can point to a third-party certification and say our
agents are secure, safe, and verified, that changes the conversation.
Go to aiuc-1.com to learn about the world's first standard for AI agents.
That's aiuc-1.com.
If you're an operator, your day is a non-stop stream of decisions, and most of them require
you to look at the data.
You don't need another dashboard.
You need answers you can trust fast, but the bottleneck is always the same.
The data isn't ready.
It's scattered.
It's messy.
Definitions aren't clear.
You're waiting on your data team or waiting on domain experts for clarification and confirmation.
That's the bottleneck today's sponsor, PromptQL, is built to break.
PromptQL is a trusted AI analyst for high-frequency decision making.
It connects across warehouses, databases, SaaS, and internal APIs.
No massive data prep or centralization required.
It's built for multiplayer input.
Teammates can jump into a thread, correct assumptions, add nuance, flag edge cases.
PromptQL turns everyday conversations into a shared context.
And if something is ambiguous, it doesn't guess.
It escalates to the right expert, captures the correct logic, and gets it right next
time.
That's how it delivers trust and accuracy.
Over time, PromptQL specializes to your business, like that veteran employee who
just knows things.
From simple what is questions to complex what if scenarios, you can model impact and
stress test decisions before you commit, all through a simple natural language prompt.
PromptQL, the trusted AI analyst for teams with shared context and messy data.
Welcome back to the AI Daily Brief.
Today's episode is nominally about this guy who used AI to cure his dog's cancer, or
at least that's what everyone was talking about online.
But more broadly, it's about the state of the AI discourse.
And I think that the starting question that we need to ask, taking a big step back from
all of the headlines is what the heck is going on right now.
The AI discourse out there is absolutely frenetic right now.
You've got Bernie Sanders dropping nine-minute-long videos about x-risk, and CEOs like Bill
McDermott from ServiceNow dropping insanely terrifying statistics all over the mainstream
media, in this case a casual prediction that AI is going to cause recent college graduate unemployment
over 30%.
Every time a poll comes out in America, it shows just increasingly negative sentiment around
AI, which who knows maybe has something to do with all these media outlets publishing
these scary predictions.
But then on the flip side, you've got normal people who haven't coded before managing
teams of a dozen agents or more doing all of this work that was never possible for them
before.
The divergence, in other words, between mainstream perception and actual capability has never
been higher, and yet both of them are in this incredibly heightened state.
So what is going on?
The short of it is, and this is a concept that I imagine we'll end up exploring a lot in
the near term: I think that we are in AI's second moment.
Obviously in this case, I'm using AI as shorthand for generative AI.
And the first moment was the ChatGPT moment at the end of 2022 and beginning of 2023.
This moment is the Claude Code, Opus 4.5, Codex 5.2, et cetera, moment.
And if you want to be really reductive about it, it's the AI moment and the agents moment.
At the beginning of the month, Ethan Mollick tweeted: from an AI user perspective, the
four big leaps so far in ability: one, GPT-3.5 (ChatGPT), November 2022; two, GPT-4, spring 2023;
three, reasoners (starting with o1-preview, but the real deal was o3), spring 2025; four, workable
agentic systems (harness plus good reasoner models), December 2025.
But really, I think his first two and his second two were all part of one thing.
And remember, in and around that first moment, we also got some really heightened, frenetic
discourse.
You might remember that in May of 2023, which was the second month of this show, Time magazine
dropped an issue called The End of Humanity, a special report on how real is the risk.
So the point that I'm making is that if this really is AI's second moment, it makes sense
that the cloud of dust being kicked up around it is proportionally bigger and more heightened
and more dramatic than even the important conversations we've had in between these two
moments.
And to some extent, I think part of what we're experiencing is just a resurfacing of
everything that came up in the wake of the first moment with some key differences now.
The first difference is that there's obviously been a huge increase in capabilities.
ChatGPT with 3.5 was amazing.
You combine that with some of the image generation capabilities of the models that were coming
out around then, and people who were trying these tools absolutely felt like wizards.
You didn't really have to convince most people if they tried these tools, they realized
that something big was changing.
And yet, even in those early days, there was still this idea of something even bigger.
The first episode that I ever had go viral, at least in terms of a show like this
on YouTube, was about an early prototype agent.
We had experiments like AutoGPT and BabyAGI, and GPT Engineer, which would form the seeds
of what would go on to be Lovable.
And so two years later, as agents really come online, that big increase in capabilities
has, I think, proportionally heightened the discourse once again.
A second big change between the first moment and the second moment is that there are now
many more people in the conversation.
Around the ChatGPT moment, these tools were some of the fastest growing we'd ever seen.
Remember, ChatGPT got its first 100 million users in its first five weeks, beating the
previous record of eight months for TikTok.
But now we have literally billions of people using these tools every week.
Even people who don't like the tools are using the tools.
So there are just far more people in the conversation.
A third difference between the first moment and the second moment is higher economic stakes.
And in this case, I'm not even really talking about theoretical future job displacement things.
I'm talking about right here and right now.
Wall Street's interaction with SaaS companies, AI infrastructure build out deals and the private
financing thereof, valuations for private companies that are building AI, etc, etc, etc.
Anthropic wasn't even a blip on the radar to most people then.
And now it's at a $19 billion run rate, taking down industries every time it announces
a new feature.
A fourth key difference between AI's first moment and second moment has nothing to do with
AI itself, but has to do with the evolution of the market between 2022 and 2026.
AI is now useful as a corporate fall guy, specifically in the context of companies trying
to undo over hiring in the post-COVID period.
Investor Chamath Palihapitiya writes: what if AI doesn't need to show an immediate ROI,
but instead is the plausible deniability companies use to RIF 50% of the workforce they
already knew did nothing.
Number five, no matter what you think of the politics of the moment, I think it's
fairly inarguable that finally as a difference between the first and second moment, this
is happening in the context of generally increased political volatility.
In other words, AI isn't the only thing happening in the world.
It's now interacting with things like war and Iran.
There is a last difference I would point out, which is that we've now had three and
a half years of the AI industry doing a completely awful job of explaining itself and talking
about the future in any way that's going to be even remotely resonant to the average
person.
Not Boring's Packy McCormick recently tweeted: AI is very weird for me because normally
I'd be the guy who'd argue that it's crazy we're not more excited about this miracle
technology, but I completely get the negative sentiment; AI companies have clearly botched
telling the story.
That's a big piece of this. Telling people, we built this thing that is definitely going
to take your job, and hopefully we can figure out how to give you handouts or something
on the other side, or come up with even better jobs or whatever, say thank you, is clearly
terrible messaging.
Anyways, it's a much longer tweet, but I think that the incredibly poor messaging from
the AI industry is absolutely another thing that has changed between the first and the
second moment.
Not that there was good messaging around that first moment, mind you, there just hadn't
been as much time for us to shoot ourselves in the foot over and over yet.
The point of this is right now everything around the AI discourse is incredibly heightened.
The whole conversation is at an 11 all the time, and basically has been since we all returned
to work at the beginning of 2026.
There were two conversations that really demonstrated this this weekend.
The first was around a weekend project from developer Andrej Karpathy that became an absolute
firestorm.
At 5 p.m. Eastern time on Saturday night, Kaito on X tweeted: five minutes ago, Andrej Karpathy
just dropped karpathy/jobs. He scraped every job in the US economy, 342 occupations
from BLS, scored each one's AI exposure zero to 10 using an LLM, and visualized it as
a treemap.
If your whole job happens on a screen, you're cooked.
Average score across all jobs is 5.3 out of 10; software devs 8 to 9, roofers zero to
1, medical transcriptionists 10 out of 10, skull emoji.
It pointed to the link karpathy.ai/jobs, which is the full chart.
Instantly, Twitter was flooded with takes like this one from Tukki.
Siren emoji: do you understand what Karpathy just did?
He didn't write an opinion piece; he scraped every single job in America, ran it through
AI, and scored how replaceable you are on a scale of 1 to 10. Not a prediction, a diagnosis.
Accountants scored 9, paralegals 9, copywriters cooked; radiologists reading scans, the AI already
does it faster.
The only jobs that scored lower: the ones that require you to physically touch something.
In 2015, learn to code was the answer to everything; in 2025, code writes itself.
The people who listened are now the most replaceable generation in history.
I guess your degree didn't prepare you for a career.
Even people who aren't usually schlock merchants like that started to veer into the same sort
of sensationalist territory.
Chubby (@kimmonismus) writes: Karpathy is by no means interested in hyper-exaggeration.
Using AI, he concluded that out of 143 million people working in the US, approximately 57
million are at high to very high risk of their jobs being negatively impacted by AI.
That's almost 40%, let that sink in and consider what it means.
Now at this point, if you listen frequently, you're probably waiting for the yes but
where's the nuance here.
Well, first of all, if you actually go read the page that Karpathy posted, which I don't
think most of the people who were tweeting about it did, he has a very important caveat
on the AI exposure scores.
He writes, these are rough LLM estimates, not rigorous predictions.
A high score does not predict the job will disappear.
Software developers scored 9 out of 10 because AI is transforming their work, but demand for
software could easily grow as each developer becomes more productive.
The score does not account for demand elasticity, latent demand, regulatory barriers, or social
preferences for human workers.
Many high exposure jobs will be reshaped, not replaced.
Indeed, Karpathy himself was frustrated by the response.
When someone on that original tweet from Kaito said they couldn't find it, Andrej responded:
this was a Saturday morning, two-hour vibe-coded project inspired by a book I was reading.
I thought the code and data might be helpful to others, to explore the BLS data set visually,
or to score it in different ways with different prompts, or to build their own visualizations.
It's been wildly misinterpreted, which I should have anticipated even despite the
readme doc, so I took it down.
In another tweet, he wrote: the quote-unquote exposure was scored by an LLM based on how
digital the job is; this has no bearing on what actually happens to these occupations,
which has to do with demand elasticity and a lot more.
People are sensationalizing the visualization tool and putting words in my mouth.
Now, there was some interesting, nuanced conversation about this.
The Update newsletter's Stephen Schubert wrote: many seem to take this as a reason to believe
that the overall pace of automation will be high, but I don't think that makes any sense.
Even more to the point, and more insistently phrased, was Chicago Booth economist Alex
Imas, who wrote: exposure does not mean threat of displacement.
It can literally mean the opposite.
AI exposed jobs may increase hiring and attract higher wages.
It all depends on a, elasticity of consumer demand and b, number of AI exposed tasks in a job.
Anthropic's Peter McCrory added: I agree strongly with Alex here.
And my read is that Claude usage patterns clearly point toward uneven labor market implications.
Our recently introduced observed exposure measure aims to identify cases where exposure is
more likely to transform into actual displacement.
i.e. Claude is used in automated ways for work-related purposes on tasks that are
conceptually feasible for LLMs.
But no exposure measure is perfect or has monotone predictions.
And even when much of a job is automated, the remaining bottleneck tasks may ultimately
increase demand for complementary human skills, even among highly exposed roles.
Toronto economist Kevin Bryan said: I bet $1,000 that from now to 2030, most quote-unquote susceptible
jobs see an increased share of labor. In the model these types of charts are based on,
it is explicitly not that AI can substitute, but that AI is related.
AI is a complement too; who doesn't want to code right now, for instance?
And I think that's all true, and obviously we will continue to discuss the real, no-BS
labor market implications of AI. But the point, relative to our larger conversation, is
this frenetic tone to the discourse. Not helping was the fact that literally within
one minute of Kaito posting that thing about Karpathy's research, the Kobeissi Letter posted:
breaking, Meta is planning sweeping layoffs that could affect 20% or more of the company.
Like I said, right now the conversation goes to 11.
But it wasn't just the negative side of AI that was at 11.
Google DeepMind's Séb Krier shared an article link from The Australian that went hyperviral
with nearly 13 million views. Vittorio summed it up this way:
This is actually insane. Be tech guy in Australia. Adopt cancer-riddled rescue dog, months to live.
Pay $3,000 to sequence her tumor DNA, feed it to ChatGPT and AlphaFold,
zero background in biology, identify mutated proteins, match them to drug targets,
design a custom mRNA cancer vaccine from scratch, genomics professor is gobsmacked
that some puppy lover did this on his own, need ethics approval to administer it,
red tape takes longer than designing the vaccine, three months later finally approved,
drive 10 hours to get Rosie her first injection, tumor halves,
coat gets glossy again, dog is alive and happy.
Professor, if we can do this for a dog, why aren't we rolling this out to humans?
One man with a chatbot and $3,000 just outperformed the entire pharmaceutical discovery
pipeline. We are going to cure so many diseases. I don't think people realize how good things
are going to get. So here's the story: Australian entrepreneur Paul Coiningham has a dog named
Rosie. In 2024, Rosie was diagnosed with cancer that ended up being non-responsive to chemotherapy
or surgery; the tumors just kept growing. When Paul turned to ChatGPT for help,
it suggested that he should get Rosie's DNA sequenced and then use Google DeepMind's
AlphaFold to look for mutations that could be a target for immunotherapy.
When a drug maker wouldn't provide an off-the-shelf immunotherapy treatment,
Coiningham turned to Pall Thordarson, the director of the RNA Institute at the University of New
South Wales. Thordarson used Rosie's DNA to develop a bespoke mRNA vaccine in less than two months.
He told the press, this is the first time a personalized cancer vaccine has been designed for a dog.
This is still at the frontier of where cancer immunotherapeutics are.
And ultimately, we're going to use this for helping humans.
What Rosie is teaching us is that personalized medicine can be very effective and done in a
time-sensitive manner with mRNA technology. Now, as you can tell, there is a lot more to this
process than simply prompting ChatGPT to cure cancer. And indeed, even the treatment itself
wasn't entirely successful. Yes, some of Rosie's tumors have shrunk, but it would certainly be going
too far to call it a cancer cure. On top of that, it's arguably a story about how revolutionary
the Nobel Prize-winning AlphaFold model is, rather than a story about ChatGPT.
Pall Thordarson ended up turning to X to explain some nuances of the story.
The nuances include the fact that this was less about a cure and more about buying time,
and the fact that it's difficult to estimate the real costs, as lots of people donated time and
resources to this. A third nuance is that regulation of vet research and treatment is obviously
quite different than human health. But ultimately, Pall says, in the human health space,
Rosie's story demonstrates that we can democratize the process of designing a cancer vaccine.
While genomic analysis and RNA production will continue to be specialized,
they could turn into pure service provision, especially as automation increases.
This then begs the question, do we need to overhaul the regulatory regimes with this in mind?
And can we ensure equitable access?
Now, of course, there were tons of people who were skeptical on spec when they saw the story,
even before all that nuance was shared. And honestly, I personally find it a little bit
refreshing to have people excited about the positive disruptive potential of AI
rather than just constantly looking at the negative.
But the point is that these are still two sides of the same coin.
We are in the midst of the transition into AI's second moment.
And for a little while, until we all get used to the new paradigm that we're living in,
it's going to be weird. All I can promise is that if you hang out around here,
you will feel at least slightly less like you're taking crazy pills.
For now that is going to do it for today's AI Daily Brief,
I appreciate you listening or watching, as always. Until next time, peace!

The AI Daily Brief: Artificial Intelligence News and Analysis
