
Today’s episode explains the AI capabilities overhang: the growing gap between what AI can already do and how little of it is actually being used. This mismatch is becoming one of the defining risks and opportunities of the moment for individuals, institutions, and nations alike, and the core argument is that closing the gap is now more about access, incentives, and organizational change than better models. In the headlines: Claude Code breaks into the mainstream, Anthropic’s funding round reportedly grows, xAI hits a gigawatt of compute, U.S. AI optimism lags globally, and Elon Musk escalates his lawsuit against OpenAI.
Brought to you by:
KPMG – Discover how AI is transforming possibility into reality. Tune into the new KPMG 'You Can with AI' podcast and unlock insights that will inform smarter decisions inside your enterprise. Listen now and start shaping your future with every episode. https://www.kpmg.us/AIpodcasts
Zencoder - From vibe coding to AI-first engineering - http://zencoder.ai/zenflow
Optimizely Opal - The agent orchestration platform built for marketers - https://www.optimizely.com/theaidailybrief
AssemblyAI - The best way to build Voice AI apps - https://www.assemblyai.com/brief
LandfallIP - AI to Navigate the Patent Process - https://landfallip.com/
Robots & Pencils - Cloud-native AI solutions that power results https://robotsandpencils.com/
The Agent Readiness Audit from Superintelligent - Go to https://besuper.ai/ to request your company's agent readiness score.
The AI Daily Brief helps you understand the most important news and discussions in AI. Subscribe to the podcast version of The AI Daily Brief wherever you listen: https://pod.link/1680633614
Interested in sponsoring the show? [email protected]
Today on the AI Daily Brief, the AI capabilities overhang and what to do about it.
Before that in the headlines, why Claude Code is officially breaking into the mainstream.
The AI Daily Brief is a daily podcast and video about the most important news and discussions
in AI.
Alright friends, quick announcements before we dive in.
First of all, thank you to today's sponsors Zencoder, Landfall IP, Robots & Pencils, and Superintelligent.
To get an ad-free version of the show, go to patreon.com/AIDailyBrief, or you can subscribe to the ad-free podcast feed for just $3 a month.
I really tried to price it in a way where anyone who really doesn't want to hear those
ads can get it for less than the price of a cup of coffee at this point.
But if, on the other end of the spectrum, you are interested in sponsoring the show, send us a note at [email protected].
And lastly before we dive in, another quick call to check out aidebintel.com. A little later this week, people who sign up are going to start finding out what this cool-looking chart is. I promise you, you won't want to miss it. Again, that's aidebintel.com.
With that out of the way, let's dive in.
Welcome back to the AI Daily Brief headline edition, all the daily AI news you need in
around 5 minutes.
The story of this January so far has been the slow but steady settling in of the notion of a shift in capabilities, centered upon Opus 4.5, Claude Code, and now more recently Claude Cowork.
Turns out the change is not just among the highly enfranchised AI users on X; the Wall Street Journal declared over the weekend that Claude is taking the AI world by storm, and even non-nerds are blown away.
The WSJ writes, they call it getting Claude-pilled. It's the moment software engineers, executives, and investors turn their work over to Anthropic's Claude AI and then witness a thinking machine of shocking capability, even in an age awash in powerful AI tools.
The article noted the huge wave of positivity on social media, with many non-technical people using Claude Code to develop their first piece of software without knowing the first thing about coding.
It also noted that Claude Code is being deployed for a range of other use cases, including health data analysis and expense report compiling.
The Atlantic had a similar take, writing: move over, ChatGPT. The article says that though Claude Code is technically an AI coding tool, hence its name, the bot can do all sorts of computer work: book theater tickets, process shopping returns, order DoorDash.
People are using it to manage their personal finances and to grow plants.
I don't know what it says about the Atlantic that the first example they reached for is booking theater tickets, but there you go.
The author remarked that they used vibe coding tools for the first time in preparation for the article and were astonished that they could create a new personal website in minutes without any coding.
They went on to spin up a dozen additional projects over the next few days.
They texted a friend to try it out and received the response: It just does stuff. ChatGPT is like if a mechanic just gave you advice about your car. Claude Code is like if the mechanic actually fixed it.
To be honest, I don't really think that that does it justice.
I think it's more like Claude Code is like if, when you dropped off your car at the mechanic,
you could request any other car, and all of a sudden a few minutes later it would just
be there waiting for you.
A user named Alex Lieberman was profiled for the piece and claimed that in terms of implication this was even bigger than the ChatGPT moment. However, he added, Pandora's box hasn't been opened for the rest of the world yet.
That might not be the case for long, however, with major publications now raving about Anthropic's product lineup.
Claude Code creator Boris Cherny remarked on the overnight success that was years in the making, saying: glad to see Claude Code starting to break through. It's been a year of very hard work, and we're just getting started.
Ajanie Mido writes: The front page of the Wall Street Journal today is about everyday people using a command line interface. If you are a business leader and not revisiting major operating assumptions about the world, you are doing yourself and the people who depend on you a massive disservice.
One other Anthropic story: the latest on their reported fundraising. That round that we've been hearing about, which values Anthropic at $350 billion, is apparently getting supersized, up to potentially $25 billion. That includes about $15 billion from Microsoft and Nvidia and another $10 billion from VCs and other investors.
Among those VCs is apparently Sequoia.
Feels like we're probably within a few weeks of this closing, so I'm sure we'll get more
news soon.
Now despite the Wall Street Journal writing so glowingly about Claude, US users clearly
remain concerned about the technology overall.
According to a new survey commissioned by Google and conducted by Ipsos, AI users are now
in the majority.
66% of respondents said they had used AI in the past 12 months compared to 48% in the
2024 poll and 28% in 2023.
This was the third year of the longitudinal survey, which was conducted in late September last year, so relatively up to date. The survey polled around 1,000 adults from each of 21 countries.
The respondents were evenly split when it comes to AI job disruption, with 50% saying
AI in the workplace will create jobs and 50% saying it will eliminate jobs.
Still the majority of survey participants were in favor of fostering advancements using
AI at 58%, compared to 41% who wanted to protect industries that might be disrupted
by AI.
Not surprisingly, AI optimism was closely tied to AI use: 70% of those who said they've used AI are optimistic about its benefits, and of those who use AI a lot, 86% were excited.
There's also a strong correlation between countries with high levels of AI use and high
levels of optimism.
The US, however, ranked low on both use and optimism. Just 40% of US survey participants said they have used AI in the past year, making the US the only country without a majority of AI users.
As a point of comparison, the UK was at 56%, Mexico was at 66%, while the UAE, Nigeria
and India were all north of 80%.
Only 33% of US respondents said they were mostly excited about the technology, the worst
national result in the survey, and vastly beneath the overall result of 57%.
Now I actually think this is a major national issue.
The implications are not just the user numbers for OpenAI and Anthropic; it's about the seriousness with which people are taking the potential disruption from this technology and preparing themselves for it.
I believe there continues to be a strand of people who are hoping to just wait it out
and return to the world that once was, and obviously I do not think that that's going
to happen.
Moving over to an issue that has become part of the political cannon fodder around AI, which is of course data centers: xAI's Colossus 2 has now reached one gigawatt of capacity, becoming the first training cluster to cross that threshold.
The data center is now drawing more power than the city of San Francisco. For comparison, the first Colossus cluster has a total capacity of 300 megawatts, while OpenAI recently disclosed that they have 1.9 gigawatts across their entire training and inference fleet.
Construction began in March of last year, so this milestone was 9 months in the making.
The only other cluster that's close is Anthropic's, in Amazon's New Carlisle data center, which is expected to hit one gigawatt sometime in the first quarter of this year.
OpenAI's Stargate Abilene is expected to come online over the summer.
For now, xAI is the only company with access to this much compute, which is exactly what we discussed as the big potential opportunity that could translate into differentiation for Grok in the year to come.
Colossus 2 is also using Blackwell GPUs, making it one of the first training clusters to run Nvidia's latest hardware and the only one at that scale.
The cluster reportedly contains 550,000 GPUs as currently configured.
As Amateo Kaplan put it, Gigawatt Grok has arrived.
Now staying in Musk world for a moment, Elon Musk is seeking up to $134 billion in damages from OpenAI and Microsoft as his lawsuit heads to trial. A trial date has been set for late April, and during a hearing on Friday, Musk's lawyers quantified the damages.
Their argument is that Elon is entitled to a portion of OpenAI's current $500 billion
valuation due to the $38 million in seed funding he donated to the nonprofit in 2015.
Musk's lawyer wrote in court filings that just as an early investor in a startup company may realize gains many orders of magnitude greater than the investor's initial investment, the wrongful gains that OpenAI and Microsoft have earned, and which Mr. Musk is now entitled to disgorge, are much larger than Mr. Musk's initial contributions. Which is a very legalese way of saying that if that $38 million had been an investment into a for-profit startup, it would have been worth a heck of a lot more than $38 million by now.
The filing also says that Musk plans to seek punitive damages as well as an unspecified
injunction.
OpenAI's lawyers rejected the approach, stating that his, quote, methodology is made up, his results unverifiable, his approach admittedly unprecedented, and his proposed outcome, the transfer of billions of dollars from a nonprofit corporation to a donor-turned-competitor, implausible on its face.
OpenAI for their part continue to deny the premise of the lawsuit outside of the courtroom.
In a statement they said, Mr. Musk's lawsuit continues to be baseless and a part of his
ongoing pattern of harassment, and we look forward to demonstrating this at trial.
This latest, unserious demand is aimed solely at furthering this harassment campaign.
Now on X, the discussion centered around pages from Greg Brockman's private notes that
were revealed in the new filing.
One especially frequently shared passage from 2017 read: this is the only chance we have to get out from Elon. You see, the glorious leader that I would pick, we truly have a chance to make this happen. Financially, what will take me to $1 billion?
Deedy Das at Menlo Ventures said, deep down it really is about the money.
Now several other quotes from the filing paint OpenAI in a very poor light. Of the quotes, Altman said: Elon is cherry-picking things to make Greg look bad. Not great, but the full story is that Elon was pushing for a new structure, and Greg and Ilya spent a lot of time trying to figure out if they could meet his demands.
Altman continued, I remembered a lot of this, but here is a part I had forgotten.
Elon said he wanted to accumulate $80 billion for a self-sustaining city on Mars, and that
he needed and deserved majority equity.
He said that he needed full control since he'd been burned by not having it in the past,
and when we discussed succession, he surprised us by talking about his children controlling
AGI.
Altman continues after that quote: I appreciate people saying what they want, and it can enable people to resolve things or not, but Elon saying he wants the above is important context for Greg trying to figure out what he wants.
With the trial less than three months away, the story is unfortunately going to be a big
overhang for OpenAI as they try to execute on a pivotal year.
Either way, this is going to make a lot of people look greedy and ugly. Hopefully we won't have to spend too much time on this.
I'll probably err on the side of only sharing the really big highlights where it becomes a major, inescapable point of conversation, as it has been for the last couple of days.
This show, however, will not become a play-by-play court drama, as interesting and salacious as it
might be.
For now, that is going to do it for today's headlines.
Next up, the main episode.
If you're using AI to code, ask yourself: are you building software, or are you just playing prompt roulette? We know that unstructured prompting works at first, but eventually it leads to AI slop and technical debt.
Enter ZenFlow.
ZenFlow takes you from vibe coding to AI first engineering.
It's the first AI orchestration layer that brings discipline to the chaos.
It transforms freeform prompting into spec-driven workflows and multi-agent verification,
where agents actually cross-check each other to prevent drift.
You can even command a fleet of parallel agents to implement features and fix bugs simultaneously.
We've seen teams accelerate delivery 2x to 10x.
Stop gambling with prompts. Start orchestrating your AI, and turn raw speed into reliable, production-grade output at zencoder.ai/zenflow.
If you're listening to this, you already know how fast AI is writing the rules for innovation,
disruption, and value creation.
And this new era demands a new kind of patent law firm.
Landfall IP was built from the ground up to operate differently, orchestrating how human
expertise and AI work together for better patents at founder speed.
Created by world-class patent attorneys who saw a better way, Landfall IP lets AI execute
the repeatable while attorneys elevate to create the exceptional.
Landfall isn't adapting to AI, they were built for it.
Have a new idea?
Try the discovery agent for free.
It's a confidential tool that helps innovators synthesize their inventions and instantly
see patentable insight.
Visit landfallip.com to learn more, that's landfallip.com.
Today's episode is brought to you by Robots & Pencils, a company that is growing fast. Their work as a high-growth AWS and Databricks partner means that they're looking for elite talent ready to create real impact at velocity.
Their teams are made up of AI native engineers, strategists, and designers who love solving
hard problems and pushing how AI shows up in real products.
They move quickly using RobotWorks, their agentic acceleration platform, so teams can deliver meaningful outcomes in weeks, not months.
They don't build big teams, they build high-impact nimble ones.
The people there are wicked smart, with patents, published research, and work that's helped shape entire categories.
They work in velocity pods and studios that stay focused and move with intent.
If you're ready for career-defining work with peers who challenge you and have your back, Robots & Pencils is the place. Explore open roles at robotsandpencils.com/careers. That's robotsandpencils.com/careers.
Today's episode is brought to you by Superintelligent. Superintelligent is a platform that, very simply put, is all about helping your company figure out how to use AI better. We deploy voice agents to interview people across your company, combine that with proprietary intelligence about what's working for other companies, and give you a set of recommendations around use cases and change management initiatives that add up to an AI roadmap that can help you get value out of AI for your company.
But now we want to empower the folks inside your team who are responsible for that transformation
with an even more direct platform.
Our forthcoming AI Strategy Compass tool is ready to start being tested.
This is a power tool for anyone who is responsible for AI adoption or AI transformation inside
their companies.
It's going to allow you to do a lot of the things that we do at Super Intelligent, but
in a much more automated, self-managed way, and with a totally different cost structure.
If you are interested in checking it out, go to aidailybrief.ai/compass, fill out the form, and we will be in touch soon.
Welcome back to the AI Daily Brief.
Today we are talking about something called the AI capabilities overhang.
Now, this is something I think about a lot, but the specific context for it was an article that came out as part of the broader set of assets around OpenAI's announcement that ads are coming to ChatGPT, with them basically saying that part of the issue is access, and ads are going to help them with that access issue.
Now in that blog post, called AI for Self-Empowerment, OpenAI defines the capability overhang as the gap between what AI systems can do now and the value most people, businesses, and countries are actually capturing from them at scale.
In other words, the delta between AI's current capabilities and society's current usage
of them.
And what's important about this concept is this is not about some future state.
This is not, in other words, a debate about AGI or superintelligence or anything like that.
It is instead a discussion of the current state of play and how far behind different types
of people and groups are in taking advantage of it.
So what I want to do today is talk about the AI capabilities overhang across six different groups: individuals, communities, municipalities, educators, businesses, and sovereigns.
For each of those groups, I want to talk a little bit about what the capabilities overhang looks like at the moment, what some of the answers to that overhang might be, and how we (and this is the royal we: I could mean society, I could mean the listeners of this podcast) could support tackling that capabilities overhang and improving the way that people are taking advantage of what's possible right now.
So let's talk first about individuals.
Now this is admittedly a wildly all-encompassing category with a huge range of different
levels of this particular overhang.
There are very, very few people who could claim that they don't experience that overhang at all. In fact, even as someone who spends basically all of my time on this, I think that there are entire categories of what's possible that I don't take nearly full enough advantage of.
Most people fall somewhere on the spectrum from barely taking advantage to only just starting
to take advantage.
In fact, I think part of the reason that you're seeing so much excitement around Claude
Code and see it moving into the mainstream in the Wall Street Journal and things like that
is that for people who are picking it up, it is radically and directly undercutting that
capabilities overhang by massively accelerating what people can do.
But the implications of the capabilities overhang are dramatic.
Skills that took years to develop can now be augmented or replicated in hours.
Now this sort of commoditization of knowledge work creates displacement risk, of course,
but it also creates incredible opportunity in terms of the massive leverage that it can
give people.
One of the implications of the capabilities overhang, however, when it comes to that individual focus, is that personal economic moats are eroding faster than people realize. In other words, the gap between "I should learn this AI stuff" and "I needed it yesterday" is closing.
So what are some of the challenges?
Information is one, and by that I don't just mean information about what's possible
with AI, although that's part of it, but also I think we have a real issue in the way
that we discuss AI.
Every survey that comes out shows that, to be a little bit reductionist but honestly not all that much, Eastern and lower-income countries are extremely enthusiastic about AI, while people in Western and higher-income countries are less so.
There are all sorts of reasons for this, but what it means is that in addition to just
a general information gap, you also have a massive enthusiasm gap, which means that people
who don't like AI or wish it didn't exist are getting farther and farther behind, kind
of hoping that it just goes away.
Go on any social media platform and you will be able to find myriad posts from people enthusiastically, quote-unquote, waiting for the end of the bubble so things can go back to normal. This despite the incontrovertible fact, known by everyone who is listening to this particular show, that there is no such thing as going back.
So improving the availability of information about what you can do with AI, but also to some extent about the inescapability of some of the changes coming because of it, is a key part of overcoming this overhang.
Another part is of course access.
People who can pay more right now have better access to AI.
However, the gap isn't necessarily as big as it seems. Although ChatGPT data shows that the typical power user of their system uses seven times more compute than a typical user, there's still incredible capacity available to anyone, even in the free versions of these tools.
One of the things that's been super interesting to me watching people interact with the
New Year's AI resolution, which is the 10 week self-education program that came out
of my New Year's episode, is that a lot of folks, despite being on the very high end
of enfranchised users, are seeing how much they can get out of the free versions of
these tools.
I actually think that that's incredibly valuable, and in many ways more instructive to the average user than some insane person like me who's going to pay for the Ultra or Max subscription to every single tool that comes along.
I will say that I think even with this, equality of access is going to continue to be an issue, and probably one that gets worse.
As much as we might not like the experience of ads in something like ChatGPT, I do believe that it extends access and keeps access democratized in a way that a non-ad-supported model just couldn't.
However, ads from the platforms are certainly not the only way to ensure access.
There might be a role for government here, and frankly, it's one of the reasons that
some of the ideas around things like stopping data center construction are so wrong-headed
because they are likely to have the exact opposite impact where they actually further
restrict access to only the people who can pay for it.
So how can we support individuals overcoming the capability overhang?
One is a different conversation about AI and an acknowledgement that it's coming.
Two is continuing to look for ways to democratize access, whether that's from the platforms themselves through ads or other models, or through public-private partnerships or some other larger type of initiative. And the last piece, something that we will certainly be trying to do a lot of this year with this show, is self-education opportunities.
So many people are engaging so deeply with this New Year's AI resolution that I am 100%
sure that we will release other similar time-bound, but ultimately self-directed types of programs.
Next up, let's talk about communities.
Communities hold many of the assets that AI can't replicate.
Trust networks, local context, physical gatherings, shared identity, accountability structures,
and the overhang actually increases the value of these assets.
As digital interactions become AI-mediated, and in many cases for people harder to trust,
in-person community becomes a premium good.
In fact, as we think about the individual overhang, local institutions are sitting on distribution
and trust infrastructure that could be leveraged to help members navigate the AI transition,
but of course most aren't thinking this way yet.
Indeed, the challenge for communities is that community institutions tend to be the most
strapped for hard resources, and so the people who are involved in leading communities
have to trade time for everything.
Basically, when there isn't money, you require people to volunteer time and service.
That means less time for keeping up with all of these opportunities.
But if we can position community institutions as the human layer in an increasingly AI-mediated
world, there is a ton of potential power in these institutions taking on renewed importance
in this new age we're moving into.
To support that, I think we have to start by supporting their leaders.
We need dedicated resources and leadership support and training for these particular
types of institutions that are not necessarily just about the same things that are going
on with individuals, but are really about how to become a node for disseminating and supporting
transition among constituencies.
Closely related to communities is the capabilities overhang for municipalities.
Municipalities are, of course, the public and governmental complement to many of those community institutions.
Like communities, they're strapped for resources, and they have old patterns of doing things that can be very, very difficult to change. Yet these groups are potentially some of the biggest beneficiaries of the efficiency gains that come with AI. One study found that 30 to 50% of municipal staff time is spent on tasks that are already automatable or dramatically acceleratable right now.
And it takes about five seconds to think of some of the examples: changing review times for permitting and land use, or moving constituent services from hold times, phone trees, and manual routing to instant intake, automated routing, and proactive follow-up.
There are potential implications for public works, for social services, for records, for courts, for revenue and finance, for public health. You name it. Municipalities are so full of opportunity for AI efficiencies.
And for that reason, I think we should be spending way more time, energy, and resources
on trying to fix this particular category of capabilities overhang.
What does that look like?
Well, of course, I don't know, but I think that some opportunities include different types
of public-private partnerships.
I've got to say, right now, the model labs and many AI startups in general don't necessarily have the best brand and reputation.
The AI industry as a whole is suffering from a broader sense among many people, especially in the West, that tech no longer exists to serve people to the extent it ever did, but instead serves only to enrich the people who create that technology.
Seems like a pretty good time to try to engage in some public-private partnerships that
actually bring the benefits of AI to a wider audience.
Frankly, there also strikes me as being an opportunity for perhaps a different class of business that has different incentives.
I think there are very clear, profit-motivated business opportunities in AI-ifying how municipalities work. But I also think there's an opening to find a new generation of entrepreneurs, maybe some of whom have backgrounds in that type of municipal service, who are designing leaner, more capital-efficient providers and who are going to be able to offer municipalities contracts and services that provide that transitional support without gouging them. It just strikes me that this could be a great moment for a new class of civic-minded entrepreneur to really do some damage in the best possible way.
When it comes to educators and education, goodness gracious, where to even begin. This is something we've talked about a lot on the show, although not for a little while, believe it or not. But by and large, education is stuck being concerned that students can now cheat on the test, when the real problem is that in the future we're moving into, the test doesn't matter.
We need nothing short of a radical re-evaluation of everything that we teach.
To be wildly oversimplified and reductive in a way that's going to have the educators among you cringing your faces off, I apologize in advance: let's start by separating everything into three buckets.
First, the skills that are definitely still relevant, of which, by the way, there are many: critical thinking, ethical judgment, creative problem solving, human interaction and empathy. Relevant, relevant, relevant, relevant; in fact, more so. Interestingly enough, that's a set of skills, ones we've often pejoratively called soft skills, that we haven't had nearly enough emphasis on in our education system for a very long time.
So we've got that one bucket of definitely still relevant.
Then we have the things that are definitely changing in relevance: subjects that are absolutely and undeniably being transformed by AI tools, like writing and composition, research and information synthesis, and programming.
We don't have to fully throw out the baby with the bathwater to recognize that we're talking about a lot more than going from adding with an abacus to adding with a calculator when it comes to how dramatically this set of skills is changing in terms of how humans are going to interact with them.
And then of course there's perhaps the biggest category, which we will generously call
who the hell knows.
There is going to be so much in this category where we simply do not know how AI is going to impact it, and having the humility to understand that some stuff we're going to teach could be irrelevant, but we just don't know and have to hedge a little bit, is I think a reasonable way to proceed.
And of course there is a fourth bucket, new things that have become relevant.
Some of that's just AI-specific skills, but a lot of that's going to be in and around
management and organization and basically the things that help people take advantage
of the fact that each of them will in the future have access to talent that every corporation
in the world would kill for today.
And then from there we redesign the curriculum around a balance of these things.
We run big experiments, we get okay with failure.
This area of disruption and change is going to be some of the easiest to talk about and the hardest to actually do in practice, and the best way I think we can support it is to create space for real change: not incrementalism, but true actual disruption.
The second-to-last category we'll talk about today is the capabilities overhang for businesses.
And once again, just as individuals are incredibly diverse, so too is the capabilities overhang for companies. As we saw with our AI ROI survey, there is a spectrum of capabilities overhang for companies at every different level. And while it may be the case that certain sizes of companies have different types of advantages over one another, I don't believe that, en masse, there is one category or type or size of business that is experiencing dramatically less overhang than the others. Companies are pretty much all either dealing with going from no AI to figuring out how to use AI for efficiency, or they are dealing with the challenge of moving from efficiency to actually leveraging AI for new opportunity.
In years of doing this and seeing thousands and thousands of executive interviews, I will say confidently that I have never seen a single company of any size, including my own startup, which is as incentivized as anyone to get this right, that doesn't experience the AI capabilities overhang in some way.
We just had a Superintelligent offsite where all of us sat down and basically tried to tear through everything we do and ask how we could AI-ify it even more. And the amount that we are not doing is immense.
The big problems of the capabilities overhang for businesses involve really common patterns. The thing we hear about most at Superintelligent is creating time to redesign. There is this whole new set of skills that people are expected to learn, but they are expected to learn them while also doing their normal jobs. The classic and quintessential conundrum of the moment is that we don't have time to learn the thing that could save us so much time.
There's also the challenge of our normal disposition to wait for the future rather than
to go invent it, which of course gets into the idea of new opportunities.
We don't know what it's going to look like when every single member of every single company
can use software to deliver on their KPIs and invent new ones, but we're going to start
to find out.
Certainly, something that we could help with is that the farther we get beyond the AI efficiency era, the worse and worse the resources to support people's education become. There are still plenty of prompt engineering courses out there, but actual really strong resources on how to use coding tools for non-coders, how to build and manage agents, or how to think about automations more systematically? These are much fewer and farther between.
Now hopefully the market incentive for this changes that in short order because goodness
gracious is there a lot of market opportunity there, but anything we can do to provide more
resources for education or self-education, the better.
The last group we'll talk about are the sovereigns.
And honestly, this is the group that might be the most aware of their capabilities overhang of anyone.
The overhang in this case is a national security issue.
The delta between what's possible and what's deployed represents strategic vulnerability.
It also represents a challenge in terms of who gets to define the future.
Not only is there this strategic vulnerability, but sovereigns are also dealing with the question of what it means to have everyone's understanding of the world mediated by LLMs that are reliant on a specific set of sources, which may not take into account the full complexity and cultural legacy of your particular nation.
So there is so much in the AI capabilities overhang for sovereigns.
Certainly in the medium term, first mover advantages in AI capability could create some
seriously durable geopolitical asymmetries.
Now like I said, this is the group that I think is most cognizant of this challenge.
And it's why you see so many nations treating AI infrastructure in the form of compute, talent
and data as critical national assets.
It's why you're seeing massive geopolitical realignment around this stuff.
And it's going to be fascinating to see how this continues to interact with the geopolitical
conversation in the years to come.
Anyways, friends, that is our quick tour of the AI capabilities overhang.
The gap between what AI can do right now and how society in all of its various manifestations
is actually taking advantage of it.
I think we all in many ways have incentives to try to close the capabilities overhang in every type of group, not only those that we're participating in, but those around us.
Hopefully identifying it as a challenge is a good starting point.
And that's what I've tried to do here and will continue to do on this show.
For now, that's going to do it for today's AI Daily Brief. Appreciate you listening or watching as always, and until next time, peace!
