
Scott Wu and Russell Kaplan, co-founders of Cognition, are leading one of the fastest-growing, talent-dense AI companies. Their mission: make expert software engineering ubiquitous. What does a world of software abundance look like? How is Cognition delivering massive productivity gains inside some of the largest companies and organizations? And can AI finally modernize the broken, $100 billion government IT systems?
We discuss these and other timely topics with Scott and Russell. Scott was a three-time gold medalist at the International Olympiad in Informatics and world champion at age 17. After high school, we hired him at Addepar, where he became a top software engineer. Russell began his career as a machine learning engineer on Tesla’s Autopilot team before selling his video data company, Helia, to Scale AI. In 2023, Scott and Russell co-founded Cognition, and a year later, they shocked the technology world with the release of Devin, the first AI software agent.
We begin our conversation by discussing the incredible collection of young talent at Cognition, and why the next generation has new advantages in the AI era. Next, we catch up on Cognition’s explosive growth: Devin usage in the first few months of 2026 already surpassed all of 2025. Scott reveals that Cognition engineers no longer write code and explains how they’re able to test and ship new products faster than ever before. Then, we dive into the new era of software abundance and what it means if everyone has access to high-quality engineering, from modernizing large legacy enterprises to supercharging small businesses. We also discuss Cognition’s recent foray into government services and its work to modernize complex outdated systems. Finally, we explore the talent flywheel that has drawn so many former founders to Cognition, and why Scott and Russell believe we’re moving from Minecraft “survival mode” to “creative mode” — where the only limit to building is imagination itself.
00:00 Episode intro
01:35 Why technical talent & execution matter in AI
06:10 Do young people have an edge in the AI era?
08:26 Cognition’s rapid growth
11:55 The new era of software abundance
14:30 Cognition engineers don’t type code anymore
19:20 “Never sleep while Devin is idling”
21:25 The case for AI disinflation
23:50 How Devin generates 12X productivity gains
28:25 Cognition for government / taking on complex, broken systems
36:40 The AI race / competition with Anthropic
39:00 Forward deployed engineers?
43:40 How fast are LLMs improving?
47:10 AI-led small business explosion
you're really only limited to your ideas
and to your imagination,
where you can kind of just turn things into reality.
You were the three-time gold medalist
of the IOI, the top programming competition
in the world. Now you're running one of the top AI companies here.
When we launched Devin in March of '24,
it was the first autonomous agent.
One hour of human time spent managing Devin
was worth like six to 12 hours
of that human time doing the work themselves.
Elon had this phrase that he really drilled into us
which is everyone is chief engineer.
Let's talk about this new era of software abundance.
For us at Cognition, for example,
our engineers don't type code anymore.
You really can just turn your ideas into reality.
The engineer and the designer and the product manager
all look at each other and say,
I don't need you guys anymore.
Scott Wu was a three-time global gold medalist
in programming.
I worked with him in the past.
He's now running one of the fastest growing
AI companies in the world helping to usher
in this era of software abundance.
He and his co-founder Russell met up with us.
We played some games, not going to tell you who won.
These are pretty smart guys,
but it's always really interesting to hear
from Scott and Russell about the cutting edge of AI,
how the world's changing,
and what we can create in this new era of abundance.
Scott and Russell built Devin,
the very first AI programming agent, two years ago.
They're already launching
in all sorts of other areas.
They're now transforming how governments work as well.
Excited to see where they're headed next.
Welcome to American Optimist.
We have back Scott Wu, the CEO and founder of Cognition,
and your co-founder Russell.
And to remind people, Scott,
you were the three time gold medalist
of the IOI, the top programming competition in the world.
I think you were the one time world champion at 17.
And we worked together at Addepar after that.
And now you're running one of the top AI companies
here in the world.
Russell, I think you started your career at Tesla
as an ML engineer.
You sold a company to Scale.
You guys are both in your 20s, still, right?
No, I'm out now, I'm 30.
You're 30, oh.
I just turned 30 this year, so, yeah.
So you're turning 30, okay.
Well, that's okay.
You're getting old like me.
You're still right in the heart of it here.
What's the average age on the team?
Actually, I'm curious.
I think it's probably, so on engineering, it's about 25.
And then obviously on go-to-market, it's a little bit older.
But yeah.
Well, go-to-market is different.
That's fair.
You're running more go-to-market stuff.
Yeah, I think the engineering team, we have 17 year olds,
we have 18 year olds, we have really young folks.
And then, but we'll take anyone at any age
as long as they're ready to grind
and ready to have a big impact.
And you guys have like huge numbers of people
who've won gold medals globally in programming.
This is a very advanced technical team here
cutting edge of AI.
Yeah, I mean, some of our favorite people,
I would say are people who are like 17 or 18
and like finishing high school,
but they've already played around
with building a ton of agents themselves,
like working with AI, training models, and so on.
And it's obviously, I mean, it's-
I actually want to ask you about this really briefly
because you were a gold medalist in the world at 15,
world champion at 17.
It's obvious people can be really, really good
at these things at a young age.
Is there something about AI,
where like a young person's brain
that kind of grows up and forms using it
can somehow be more ahead of anyone
who doesn't, or something like that?
How do you think about that?
It's a good question.
Yeah, no, I mean, funnily enough,
so I went through what's called the USACO,
which is the USA Computing Olympiad.
And from there, there are the training camps
and all the selection camps that choose the national team
that goes to represent at the international Olympiad.
And every year, there were about 20 kids,
like the top 20 kids around the US.
I was from Louisiana.
And most people were from like, you know,
California, or like New York, or like, you know,
around like Massachusetts, like around MIT
and Harvard and stuff like that.
But in my year, actually, there were a ton of others
who all kind of went into AI.
And so obviously, Steven and Andrew,
who started Cognition with us, but also a ton of others.
And so Alexandr Wang, who started Scale, Demi Guo,
who started Pika. Let's see, who else?
Daniel Ziegler, who is one of the co-inventors of RLHF,
Alex Wei, who is now running a lot of the reasoning
efforts at OpenAI, Johnny Ho, who started Perplexity.
So we were all the same year, actually,
out of that like group of 20 people.
And it was kind of an interesting one.
I mean, I think there are a few things there.
I think for one, obviously, I think entrepreneurship
is infectious, you know, and I think, I mean,
Alexandr was, I would say, the first to really start
a company and to see real success with the company.
He left college freshman year to start Scale.
And he sold Scale, obviously, for like 14 billion
to Facebook, or whatever, some funny structure.
But yeah, he's successful.
And that probably inspired other people
who would say, wait, I'm smart, too.
I can do this, too.
Yeah, so I think that was a big motivator.
And then, you know, we all kind of like came up together
and kind of got to go through some of these things together.
And I think that was a big deal.
I think the other thing I would say about AI,
particularly, is I think in AI, what you see is
that really excelling on the technical aspects
just matters much more, I think, in AI
than some of these other fields.
And I think there's been lots and lots of businesses
in the past that have been very, I'll say,
like very intense logistics businesses
or very tough kind of like marketplaces
to get started.
Or for example, businesses where a lot of your edge
is just how you figure out pure distribution
or how you kind of, you know,
make the right little
addicting loop and so on.
And AI has a ton of these, too.
Obviously, and then, you know,
I think all of those same skills are still necessary.
However, I think in AI, in a lot of these sectors
and a lot of these verticals, you know, what you see is that,
obviously, with pure technical execution,
for every level that you push it,
there's still another level to go and hit.
And a lot of the best companies that we see
in the valley are the ones that are just able
to roll out like technology pushes or breakthroughs
that others have not.
And to push you guys on this,
there's a 17-year-old today who's like the Scott Wu
of today, who's a world champion,
who maybe you're hiring.
Do they have some special edge having grown up
in this world where AI is already possible,
where they're using it?
Like, is it accelerating things further?
Yeah, I mean, everyone starts with the same level
of experience with AI for software engineering, right?
Which is basically none.
I mean, every three months,
you have to throw out your previous experience
and build new experience because the tools
get so much better.
That's probably harder for someone who's my age, I'm 43,
than someone who's like 18 and still learning.
Yeah, so because some people say,
oh, you know, it's going to be really hard
for junior engineers now because, you know,
the entry level tasks are being done automatically by AI.
But I think a lot of what we see internally,
it's kind of the opposite in some way,
where if you're coming in with no preconceptions
about how things are supposed to be done
or how things are supposed to work,
then you can just go all in just really embracing
this completely new way of working.
But I think the AI technical depth piece,
it's actually not just in the sort of modern generative AI era.
You know, when I was at Autopilot,
I was a machine learning scientist
working on the vision neural network.
And Elon had this phrase that he really drilled into us,
which is, you know, everyone is chief engineer, you know,
everyone on the autopilot team has to understand
how the full stack worked.
And this is actually extra important in AI
because what happens is the abstraction boundaries
between different teams start to break down, you know,
the sort of classical way that the self-driving system worked
was you had, you know, a perception team,
you had a planning team, you had a controls team,
and they had these like thin interface boundaries between them.
But the nice thing about AI is you can optimize systems end to end.
So if you want to actually optimize systems end to end,
you have to have an accurate mental model
of how each of those pieces works.
And so I think more sort of technical breadth
than depth across the entire stack is becoming increasingly relevant.
That is an interesting way Elon does things,
which I've seen some really top people,
not too many, but some, do as well.
It's that in order to really be the best,
you have to understand everything going on.
So it's really breadth and depth in a way.
And so you're saying
AI makes that a lot easier to do,
because it can give you some of that breadth
you wouldn't have otherwise.
Yeah, I mean, the way we onboard new people
onto our own codebase, you just ask Devin
all the questions of what's going on,
why is this done this way,
what's the historical context?
Do you tell them they're also like the equivalent
of the chief engineer, where they have to learn everything?
Or does it seem like it takes a while
to learn? I mean, it's a big codebase now.
Yeah, I mean, I think in practice,
so much of it is all obviously very connected.
And so I mean, the simple example of this is,
we bought Windsurf, you know, seven, eight months ago
at this point. But we don't have a distinction of,
oh, this person is an engineer working on Devin
or this person is an engineer working on Windsurf.
A lot of them are the same people,
people who are kind of, you know,
working across both of these.
So catch us up since we last talked.
Like what's the state of cognition?
Where are we now?
You're probably not giving out revenue numbers,
but you've grown a lot.
Like what can you tell us?
Yeah, no, it's, I mean, it's,
we've had a ton of growth over the last,
I guess it's just under a year since we last talked.
And obviously, you know, back in July,
we bought Windsurf, but I think over the last several months,
both Devin and Windsurf have grown exponentially.
One of the fun stats, actually, is today's March 9th.
And at this point, we've already
done more Devin sessions with our customers
in 2026 than we had in 2025.
And so basically over the last two months and change,
we've already done more Devin usage in total
than we had in all of 2025.
So it's more than 6X or something.
Yeah, yeah.
And obviously, you know, we're working on making sure
that that growth trend continues.
And so we'll see how that goes.
But no, I mean, I think the business has grown a lot.
We've been working with a lot of the biggest companies
in the world, you know, Citibank, Santander, and so on,
on figuring out how we really, you know,
transform their engineering efforts.
Yeah, one of the interesting developments since last year
is, you know, when we launched Devin in March of '24,
it was the first autonomous agent, right?
It was like very early for the form factor.
And it was, I would describe it as kind of like just
at the edge of possible then in terms of doing,
real useful work.
It made a lot more mistakes back then, obviously.
Yeah, yeah, it was much less reliable, you know,
our infrastructure around it, the connectivity
with the rest of the codebase was a lot less mature.
I mean, it took us until summer of 2024 for Devin
to become the number one contributor to its own codebase,
which was like the first real milestone,
and then towards the sort of the end of 2024
to really get deployed in production at large scale
at meaningful companies.
But one of the things we learned is that
if you look at sort of where the technology was in 2024,
it was not reliable enough to be used
as the primary source of software engineering for most tasks,
right?
You actually still needed a tighter feedback
loop back then between AI and humans for most things.
And so one of the sort of first niches we found
where this was actually already really useful back then
was in these very large code bases
that just have tons of existing technical debt complexity.
And you want to do a large scale refactor
across, you know, 10,000 services.
If you're doing it the sort of normal way as a human engineer,
you might have to define some new architecture
and then manually go implement thousands and thousands of changes.
And these changes would be complex enough
that you couldn't just write a regex,
but not so complex that AI couldn't help.
And so that's kind of one of the earliest places
I think we got to a real strong product market fit
is inside these very large complex code bases.
And that's one of the reasons now,
if you look at the state of our business,
we're deployed at a lot of the largest
most complex organizations in the world,
like most of the top health insurers, retailers, banks,
government agencies, and these places with large,
complex amounts of code have been actually surprisingly
early adopters of agentic software.
Well, these guys just have like massive amounts of code
that it's been doing the same thing for 30 years
on a very old architecture.
And you can go in there and pretty easily accurately fix that, I guess.
Totally.
It's like, to your point on, is this a new skill for engineering?
You know, the mindset of an architect
inside one of those organizations has become,
you know, you're almost like a CTO of an agent fleet.
Now, you define the problem space of what you want to go solve,
and then you just spin up your army of Devins
to actually go do the implementation work
and then you kind of review the results.
It's like a very different way of programming.
There's all these memes in San Francisco
where like, nerdy guys are on dates,
but they're too distracted watching their fleet of agents
to talk to the girl.
It's like, it's a problem.
I have gotten in trouble with my wife for that.
Let's talk about this new era of software abundance,
as you call it.
What does software abundance mean?
What's cognitions role there?
Yeah, you know, I think the simple way to put it is,
I mean, to Russell's point, it's all of these
traditional industries. You know, in Silicon Valley,
the way we think about software engineers is, obviously,
these tech startups or these tech companies
that are building. But the reality is,
every company in the world has software, right?
I mean, software is, in many ways, I think,
the premier knowledge work job of this century.
And so, you know, you're, you're talking about like CVS
or you're talking about Walmart,
or you're talking about UHG,
or you're talking about, you know, Goldman Sachs, like,
all of these places have so many, so many software engineers
and they have so much software that they're building.
Because obviously, so much of what we do in the world now
happens, you know, over the web or, yeah,
with computers, right?
And I think what we, what we kind of think of when we think
of software abundance is just getting to a point
where you really can just turn your ideas into reality
and build what you want to build.
And, and, and one way that I like to think about this
is, you know, you can kind of think about software products
on a scale of like, let's say a log scale based on
how much reach they have or how many users they have, right?
And you think about the best products in the world
and the products that everybody uses all the time.
And this is like, you know, YouTube or Instagram
or TikTok or something like that.
And it's, you know, these are incredibly good products.
And it makes sense.
I mean, they have billions of users.
And so they have, you know, 100,000 software engineers
or tens of thousands of software engineers
that are working on them, right?
And you feel it in the product, you know,
it's like the experience is perfect.
Like there's never bugs.
It doesn't go down.
It's streaming you like gigabytes of data.
The algorithm is super addicting, right?
Like they've really perfected
the software.
I know I myself use two or three of the ones you mentioned.
They're too addicting.
Yeah, they're too addicting.
And then you go to the next tier of like, okay,
instead of billions of users,
let's talk about the products that have hundreds
of millions of users, right?
And you're thinking about like, you know, banks
or you're thinking about like, you know, apps like Uber
or DoorDash or you're thinking about like, you know,
a lot of these various other kind of services
and products that we all use, right?
And it's similarly, it's like, you know,
you feel the software and it's obviously built quite well.
But it's already, you know, I think you notice
a different level of like how much execution there is.
And then there's next level and next level
and next level, and it goes all the way down to, you know,
your kid's school website,
which is from like 2001 or something.
It's like an elementary school site,
and it has like one picture, and it's super outdated,
and has no other information.
Right.
And so like maybe one way to put this is, you know,
I think software abundance means making it much easier
for everyone with every idea or every product
or use case that they want to serve
to be able to climb that ladder and build products
as well as the best products in the world
are built right now.
And for a lot of listeners
who are not in the AI world, they might be CEOs
of a bank or something, or running other businesses.
Like what is actually changing
about how engineers work, right?
Like what does a great engineer's workflow look like now?
Yeah, you know, it's a great question.
And the simple way that I put this is that
for us at Cognition, for example,
our engineers don't type code anymore.
Like that's just reality.
And this is as of the last several months.
And this is within the last, yeah,
three to six months, honestly,
that the shift has really happened.
And there are other steps, to be clear, obviously.
But I think at this point,
you know, maybe one way to put it
is like, I mean, you used to work with punch cards,
for example.
And now in many ways, the medium has shifted away
from code and a lot more of it has become basically English,
right?
And so, you know, we obviously use a ton of Devon internally.
We use, you know, Windsurf and then the agents
inside Windsurf internally as well.
But at this point, either way, whatever tools you're using,
you know, it's not really you typing out the lines
of code yourself.
It's you looking, understanding what it is
that you want to do, thinking about,
okay, how do I want to handle this case
or this behavior?
And you just tell the agent what you want it to do
in English, right?
And it goes and builds it all.
Yeah.
And in terms of then, like what the impact of that is,
especially, you know, if it's like a CEO level objective,
what we're seeing across the board,
across the customer base is people are just getting
more ambitious, you know, like it used to be the case that
you have this entire software development lifecycle.
And every step of that cycle is oriented around
not wasting the super precious time
of engineers writing code.
And now you have suddenly this overflowing abundance
of the ability to generate code from ideas, from prompts.
So you can iterate on ideas faster, try stuff more.
You can try stuff a lot faster.
You know, you don't necessarily need to spend
three weeks on design before you then hand it off
to engineering, because engineering is so fast
that the cycle time is much tighter.
And so, you know, increasingly, especially at the executive
level, it's like, oh, do we have to choose between A or B?
Let's just try both.
I mean, could the product people themselves
create some things now or how does that work?
I mean, we see it every, there's like a joke
where it's sort of, you know, the engineer
and the designer and the product manager all look at each
other and say, I don't need you guys anymore.
Because they're all just doing it all themselves, right?
It's like every person is empowered to do the other aspects
of the product development lifecycle.
And I think it's really rewarding people who are, you know,
actually personally highly agentic.
And thinking about, okay, what's the impact I can go have?
And be sort of self-reliant in that way.
And I know, you know, with Devin, a lot of product managers,
one of the very first things they started using Devin for
was actually to not bother the engineers with questions,
with silly questions, you know, how often have, you know,
you're a new employee to a company,
you don't understand what's going on somewhere
and you're a little nervous to say, hey, can you explain this to me?
You know, Devin is very non-judgmental,
you know, you'll ask a question, you get the answer.
And I actually ask it a lot of dumb questions, too.
It's like, I'm the boss, I'm supposed to know,
and it's like, crap, what is that acronym again?
But you know, it doesn't judge, it does it right away.
Totally, yeah, you know, and it will
sort of actually cite the source code alongside it, too.
And, you know, it kind of puts everyone actually
more on the same playing field in a way,
where everyone has the same context.
By the way, agency is something I'm thinking a lot about
for society recently. It seems like for highly agentic people,
this is like a big advantage, right?
But along those lines, like, what are the habits or instincts
of someone who's going to be particularly good
at leveraging Devon, like are there certain things you're seeing?
Yeah, yeah, for sure.
No, I mean, one way to put it is, you know,
I think for people who've grown up doing programming
and so on, who have been through all these previous eras,
you know, usually the way to think about what software engineering
has looked like is: why do people who love coding love coding, right?
And I think the answer is like 10% of that job is basically
this really fun part of just pure problem solving,
thinking about what you want to build,
being creative, like understanding the different solutions,
deciding, okay, what architecture makes sense here?
How exactly am I going to, you know, achieve my goals here?
What do I want to build, right?
And then 90% of the job is, once you've figured out all
of those parts, just doing all the dirty work
of the implementation to go make that happen, right?
And there's like a million bugs
that your customers have reported for you,
and you have to deal with this messy migration
or this upgrade to make your stuff still work,
or like, you have to go implement all the little cases
and all the little details and like write all that,
you know, the front-end code that serves this thing
that you just built, right?
And I think what we're seeing is that the best engineers
are just doing 10 times more of the first part
because you don't have to do that 90%, right?
You have an agent,
and that's gonna go and do that for you.
And I think to your point, that obviously means that,
having high agency is really, really important.
We think about this ourselves internally.
It's one of the jokes is like,
we're a reasoning lab building agents,
and so the things that we really value are agency and reasoning.
And I think to your point, a lot of the skills
that really matter are you the kind of person
that's gonna think about, okay, this should be this way
instead of that way, or like,
what is the right way to solve this problem?
Or what do we wanna do here?
And also just like internally embracing
the abundance mindset, too, where,
a lot of times you might be in your head,
oh, should we do it this way or that way?
I think a lot of our best engineers,
they just rip all of the options at the same time.
And then you get a bunch of Devins come back,
and then you can sort of analyze the results
and compare in parallel.
It reminds me again, it's kind of like the mindset
of actually being a good machine learning researcher
is now relevant for every type of software engineering.
If you look back a few years ago,
what were machine learning researchers doing?
So at Tesla, we had one mantra,
which was, never go to sleep while the GPUs are idling.
If you let your cluster idle overnight,
that's just a huge waste of resources.
Just kick off some experiments.
Before you go to bed, so you wake up,
you get some more data.
And it's the same for all of software now.
Why would you go to sleep while the Devins are idling, right?
You could just be ripping a ton.
You could be ripping a ton of these.
I'm sure it's working while you're sleeping.
Exactly, exactly.
And so I think there's actually a lot of parallels in my head
where it's not just about that.
It's also some comfort with non-determinism.
A lot of engineering, it's an incredibly precise craft.
And it's sort of naturally uncomfortable, unnatural,
to not have full control over everything that's going on.
But if you get a little bit more willing to embrace
the non-determinism, like, okay, it doesn't matter exactly
how this gets implemented,
as long as we can validate system performance end to end
and I can actually understand the results.
I think machine learning went through that lesson
years ago and now we're going through it
for the rest of the software.
So I mean, every once in a long while,
someone would probably check the machine learning output,
you know, sort of like checking the actual
machine assembly code on something
if they're really trying to optimize something.
Are there still people checking
the actual regular code on things
if they really have to these days?
How do you think about that?
Yeah, for sure.
And for what it's worth, I think we are still in the midst
of a lot of this change.
And so I think while people are producing code
with just English, often you do come back
and you're still reviewing that code,
you're making sure that it looks right
or you're looking at the code as it is
to understand what's going on.
And I think to your point, it is kind of like,
in the right cases, when you want to peel back
the layer of abstraction.
And I think what we'll see over the next 12, 18 months
is that we will continue to get further and further
to this point where you can just use English
for almost everything.
You don't have to peel that back anymore.
And what does this mean?
So obviously, a lot's happening a lot faster.
We just covered this.
And this is money movement.
This is the military.
This is healthcare, it's flights, it's maintenance,
it's scheduling, building new things,
and permitting. Everything in the world,
a lot more people realize, is software.
And we're now going, what, five times faster,
10 times faster soon, on software.
It's almost like we're getting multiple years done
in one year, right?
So what does that mean for society?
How do you think about that?
Yeah, I think about this a lot.
Have you ever seen the graph of inflation
by different sectors?
And you can see, you know, the sort of highly
regulated, constrained sectors at the top,
you know, like tuition, healthcare spending.
The costs of these sectors are outpacing everything else.
Right, and then the plasma screen TVs are just way down.
Jevons taught us what to do.
Exactly.
Yeah, I kind of think that that chart is going to happen
before all of society, basically.
And in particular, if you think about where is software,
we're going to enter this hyper-deflationary cycle
of software, where it's so easy to build, it's so abundant.
That doesn't mean that people are going to stop producing,
software, they're actually going to produce thousands
of times more software, right?
And so you're going to have this total software abundance,
but basically any problem that's kind of solvable
in the digital realm should just get solved
in the digital realm.
And so I think what you're left behind with is,
you know, now how do we direct a lot of this new energy
to going improving the physical world,
going improving everything else?
You actually have to do things with real operations,
the real people in the real world.
Now I guess robots could change that too.
But for now anyway, with the robots aside,
the returns to actually doing things in the real world
should go up by comparison because it's harder
to make those cheaper, right?
Yeah, we think about that a lot at Cognition:
how do we not just improve things,
improve software for software's sake,
but how do those improvements basically translate
into real-world benefits for everyone?
Ultimately, these things are tools and service
of our own lives, our companies, our businesses,
and our livelihood.
The real world's messy, and you have to interact with them.
It's super messy, it's super messy.
And I think a lot of engineers fall into the trap of,
oh, I want to solve this very pure problem, you know?
And there's nothing wrong with that;
it's actually really fun to solve.
You know, a lot of our team,
they spend their entire programming careers
just optimizing their ability to solve,
you know, the purest algorithmic programming problems
in the world.
But it's an interesting contrast to, you know,
one of the things that really differentiates cognition
in the market right now, what we keep hearing from customers
is, you know, we have this forward-deployed
engineering team that will go partner really deeply
with large organizations and say, hey,
it's not just about the tools we're delivering,
but how do we get into the weeds together
to actually go drive like big structural changes?
Tell me a bit more about that.
You guys have obviously deployed
with a lot of the biggest companies in the world.
What are some of the more impressive results
you guys have seen in the wild?
Yeah, so the earliest results where we saw,
oh, there's something here,
started with basically modernization programs.
So people that had large legacy systems
that they needed to transform.
And if you sort of did the math to scope out how long it would take,
maybe it would be like a two year project.
And you know, relatively quickly by like late 2024,
we were measuring, you know,
somewhere between a six to 12 X productivity gain
for those types of projects,
meaning that, you know, one hour of human time
spent managing Devon was worth like six to 12 hours
of that human time doing the work themselves.
And so that was like a big early result.
We started doing lots of engagements
where our customers would use Devin
to refactor, migrate, and modernize large systems.
Now the interesting trend that we're seeing is a shift
from really reactive to proactive engineering work.
So if you think of the early days of the internet,
most of the packets that were sent on the internet,
it was like a human clicking a button
or visiting a link or initiating some requests.
And then at some point it totally flipped
and now most of the packets are initiated
by machines talking to other machines.
And I think we're now seeing sort of the flippening
there for software itself,
in the process of deciding what code to write.
It used to be an entirely human-scoped thing.
Now we have people, you know,
wiring up Devin to all sorts of events and alerts
inside their organizations to say,
hey, let's just start the engineering work
right away when something happens.
I'll give you one example.
One of the largest regulated firms in the world,
they do very thorough sort of security vulnerability scanning
on their code.
And they make sure to understand,
oh, is this potentially insecure here or there?
And there are great existing tools for scanning that, you know,
SonarQube, Veracode, Snyk,
there's this whole market of tools.
And what they did was they hooked up all of those alerts
they were getting from those tools
and they just started piping them to Devin,
saying, Devin, can you take the first-pass triage?
Because if you're a human engineer at a company,
you're drowning in these alerts, it's not,
it's honestly not a really fun part of your job.
And they're remediating 70% of these automatically now
in production with Devin.
And so it's really, you know,
kind of flipping that dynamic.
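The alert-triage workflow described here, scanner alerts piped to an agent for a first pass, can be sketched roughly like this. This is a hypothetical illustration, not Cognition's actual API; the severity/known-fix policy stands in for the agent's judgment:

```python
# Hypothetical sketch: route security-scanner alerts (e.g. from a
# SonarQube/Veracode/Snyk webhook) to an agent for first-pass triage,
# so humans only see the cases that need real judgment.
from dataclasses import dataclass

@dataclass
class TriageResult:
    alert_id: str
    action: str   # "auto_remediate" or "escalate_to_human"
    reason: str

def triage(alert: dict) -> TriageResult:
    """First-pass triage for a single scanner alert.

    In a real deployment the agent would read the affected code and
    open a fix PR; here a simple severity/known-fix policy stands in.
    """
    if alert["severity"] in {"low", "medium"} and alert.get("has_known_fix"):
        return TriageResult(alert["id"], "auto_remediate", "known fix available")
    return TriageResult(alert["id"], "escalate_to_human", "needs human judgment")

alerts = [
    {"id": "CVE-1", "severity": "low", "has_known_fix": True},
    {"id": "CVE-2", "severity": "critical"},
]
results = [triage(a) for a in alerts]
```

In the 70%-remediation example, the equivalent of the auto-remediate branch would be the agent shipping a reviewed fix, with the remainder escalated to humans.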
I've been thinking a little bit about the pyramids
with regard to this lately,
where it's almost impossible to see how people
with like no tools at all built the pyramids.
It does seem amazing, if you look at the software
that we built up until like a couple of years ago,
that all of this exists, all built by people.
And I feel like that's something you'd look at 20 years from now,
when everyone just uses these tools,
and say, I don't know how you could build all of that
by hand. How do you even imagine that?
It seems totally impossible.
Yeah, yeah, no, I think to your point too,
I mean, I think there's sometimes a question
about, you know, we're talking about, okay,
this is gonna get way easier to build.
It's gonna be way cheaper to build these things
and so then what happens?
And I think the reality is, we have so much more to build.
I mean, I think about this even today, you know,
it's like you wake up and, let's say,
you're logging in to check your medical records,
and it's not a very good experience.
You're logging into your bank,
and it's not a very good experience.
So much to fix, so much to fix.
So many of these things and to your point,
I think the reality is there's so much more
that we can do with software.
I mean, I think some of the things
that people started talking more about,
which is I think what we're gonna get to
over the next little bit are things like,
even like generative UIs or like single-use software, right?
If you get to a point where, you know,
so much of the work that you want to do
can be done in code.
It just never made sense before to write code
for something that you're only gonna do one time,
or even something that you're only gonna do
like 10 or 20 times, right?
I love that the AI can actually write code
for this one instance coming up and give you the right UI
for what's going on.
It gives you exactly the right thing and builds it.
Yeah, and so, I mean,
people talk about Jevons paradox, and I think
there's nowhere that it is more true
than in software, right?
As we have produced more and more
as it's gotten easier,
the reality is that actually demand has only gone up.
That's actually a really fun idea.
Like, right now luxury is, like, vicuña,
it's really nice, its stitching is right,
but what if luxury was a perfect UI for you,
just for this instance in your life
where you happen to need it, right?
It's kind of a fun idea.
No, either that, or actually luxury might flip the other way,
because artisanal handcrafted software
is gonna be so rare.
It's like, you know, in the early days
of the Industrial Revolution, it was like,
oh great, mechanized labor for goods.
That was the higher-status good,
because of course the machines were more precise.
The bags were, you know, more perfectly made.
And now it's totally flipped if something's handmade
that's obviously much higher status
because it's so rare and like,
how could you expend the resources for that?
This site was handmade.
Oh, you can tell, because of all the little bugs.
I think we're actually gonna,
I think we're gonna see that in software, yeah.
It's like handcrafted artisanal code.
It's like the Amish, except for software.
Yeah, made by hand, it's like a 2020s society frozen in time.
People making you handmade software, which is very silly.
So speaking of things that are broken
and that there's like an infinite need to fix,
government to me is one of those areas
where, if you were to graph everything in terms
of getting better or worse, more efficient or less efficient,
it probably, unfortunately, comes off
as really, really messed up.
And I'm also starting to hear some pretty interesting things
in government.
My friend Jared Kushner, who's obviously,
you know, been involved in these administrations,
he worked with Qatar with Elad.
And they built something where permits
will only take 120 minutes there.
So if you want a permit to build something,
it gets back to you in two hours, it's pretty cool, right?
So there's like, there's always ways government
could do things better with software.
You guys just launched Cognition for Government.
Like what's the goal with that?
What's going on?
Yeah, I mean, at a high level,
we talk about places where there's
so much more software to build, so many things to fix.
And government is an obvious example
of that, in all of these departments.
Similarly, you know, things that we used to do by hand,
not even that long ago, honestly.
I mean, 20, 30 years ago, lots of these things
were done by hand at the Treasury.
Now so much of that is software,
and yet so much of that software, obviously,
still has such a long way to go.
And I think from our perspective, you know,
when we think about how we make sure
that the US keeps the pace it needs to,
how we make sure that, you know,
the breakthroughs that are coming through in AI
in the private sector are also coming through
in the public sector,
that's a really important problem for us to work on.
I also think there's something poetic about it,
because I think people don't appreciate this,
but the government is really responsible
for a lot of modern technical innovation,
you know, going back decades and centuries,
Scott drew the analogy earlier to, you know, punch cards.
And I think people don't appreciate this,
but really the first wide-scale production use
of punch cards was the 1890 census, okay?
In the 1880s, they did the census by hand,
they tallied it, it took like seven years,
and they were doing the math
and realized that in 1890,
if they did the census the same way,
it was gonna take longer than 10 years.
And the census happens every 10 years, so they were screwed.
And so the government basically put out this call
for technology help:
can we solve this problem with technology?
And there was a guy by the name of Hollerith,
who invented what later became the Hollerith machine,
to use punch cards for actually running the 1890 census,
and it was on time, it was under budget,
and it kind of kickstarted a lot of modern computing.
Even where I studied, at Stanford,
a lot of the Silicon Valley ecosystem
was actually really invested in by the government
back decades ago, partially for defense.
And now we're at sort of this flipping point.
Even with self-driving,
I mean, DARPA doesn't get enough credit,
I think, for really kickstarting the self-driving revolution
with the DARPA Grand Challenge.
And now we're kind of in this state of the world
where the government spends $100 billion a year
on just IT modernization.
We have huge amounts of government systems still written in COBOL,
with the people who wrote them gone, or no one left who understands
how the code works, and it's really holding us back.
And I mean, you feel it as a citizen day to day.
Think about your interaction with the DMV
or trying to pay your taxes.
Or waiting for a permit to come through.
Or waiting for a permit?
Yeah, I mean, these things have real world consequences.
And that's one of the things I'm personally really excited for
is you can bring technology like this
across the private and public sector.
It's like a great equalizer for engineering work.
Can you deal with the COBOL stuff?
Is that something Devin does?
COBOL is actually a massive use case for us.
Yeah, one of the early bets and investments we made
was in what we call codebase intelligence.
So if you think about a modern language model,
there's a limited context window, right?
But a lot of the larger organizations in the world,
they have very, very complex code bases
that do not fit inside a single context window.
We've done a lot of work both on the model training
and RL side, but also on the sort of harness engineering
around that and the indexing around that
to figure out how do you work with really messy custom
languages.
And so COBOL, it's actually not even the hardest one for us.
One of the examples I like is Goldman Sachs
has invented some of their own programming languages.
People don't know this, but Goldman has a pretty insane
internal engineering team.
And they've literally written their own programming languages.
And they've been able to sort of customize and tune
Devin to work on those internal languages too.
So COBOL was actually easier in some sense than that.
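The codebase-intelligence idea mentioned here, working with repositories far too large for one context window, comes down to indexing the code and retrieving only the slices relevant to a task. A toy sketch of that retrieval step (illustrative only; the line-window chunking and lexical scoring are my assumptions, not Cognition's implementation):

```python
# Toy sketch of retrieval over a codebase that exceeds a context window:
# split files into chunks, score each chunk against the task description,
# and keep only the top-k chunks to feed the model.
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    return re.findall(r"[A-Za-z_]+", text.lower())

def chunks(source: str, lines_per_chunk: int = 20) -> list[str]:
    lines = source.splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

def retrieve(task: str, files: dict[str, str], k: int = 3) -> list[str]:
    """Return the k chunks most lexically similar to the task description."""
    task_terms = Counter(tokenize(task))
    scored = []
    for path, source in files.items():
        for chunk in chunks(source):
            terms = Counter(tokenize(chunk))
            overlap = sum(min(n, terms[t]) for t, n in task_terms.items())
            scored.append((overlap, path, chunk))
    scored.sort(key=lambda item: -item[0])
    return [chunk for _, _, chunk in scored[:k]]
```

A production system would use embeddings and syntax-aware chunking rather than raw line windows and keyword overlap, and that is where COBOL and custom internal languages get hard: the indexer has to parse languages most tooling rarely sees.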
So in the government, I mean, they spend 100 billion
a year on IT.
It's ironic because you're right.
Sometimes government in the past, especially when
innovation was really expensive, they
pushed some new things that otherwise wouldn't have happened.
Today, most of that $100 billion is spent on special interests.
They don't seem to be using the money well.
So it's a giant mess.
What are the types of projects you're working on?
Totally.
I mean, the incentives are obviously super screwed up
for a bunch of reasons that your listeners are probably
familiar with.
One of the less known ones that I think we actually
might be able to just sidestep is that the government
is a really unique buyer of software, for a bunch of reasons.
But one of them is that a lot of times they
want to own the IP of the software they're using.
And this has really big implications
for most SaaS businesses.
If you make scheduling software and your business is a
SaaS business, you don't want the government to own your IP.
You want them to have a license to it to use it for scheduling.
And actually, that desire is literally incompatible
with how a lot of government contracting has worked
and happened historically.
So you end up in a situation where the government
says (and this scheduling provider is a real example),
oh, this provider has great SaaS that
can do scheduling, but I can't use it because I wouldn't own the IP.
So I have to go work with the systems integrator
and completely custom build my own, right?
Insane.
Now, could we lobby and go try
to convince the government to change their policies? Sure.
But actually, I think it's easier for us to just sidestep the problem
and say, look, Devin can just write this thing for you.
You know, because of AI agents,
we're in a regime where now everyone
can own their own IP more easily than before.
So I think that's one of the ways
we're sort of trying to sidestep some of this.
Can you be your own government contractor?
Are you going to have your own government
contract doing this then?
Are you just going to power others?
How are you thinking about that?
Yeah, I mean, right now, we have dozens
of FedRAMP deployments of Cognition, both with agencies
and with primes. You know, we work with the US Army, US Navy,
the Treasury, we work with folks like Palantir,
with Anduril, and other primes.
And so we just want to build sort of the most capable,
most useful agentic software engineering platform,
and then work like heck to get it in the hands
of people and make it useful.
And then Cognition for Government,
this is a fast-growing business for you right now?
I would say government is one of those things
where it really takes some time to kind of build
and be compliant and work in the way that people want to work.
And then once you're there,
it's much easier to be helpful.
And so over the past year,
we've really done a lot of the legwork of, you know,
how do you get your FedRAMP certification?
How do you understand the needs of these agencies,
which are actually pretty different in a lot of ways?
Even just in the civilian sector,
they're operating with a completely different set of trade-offs,
right?
A lot of the software they use is actually
by statute not allowed for them to write themselves.
Talk about regulatory capture in action.
You know, there are literally laws saying,
you government agencies are not allowed
to maintain your own website.
You have to bid this out to a contractor.
And so it kind of sounds insane,
but you know, that's the way the system works.
And I guess my experience working on AI and technology
is that a lot of times it's actually easier
to solve like a frontier science or engineering problem
than it is to sort of change the molasses
of the existing world, right?
When we were working on self-driving
at Tesla, a lot of folks would ask us,
hey, why don't you make the cars talk to each other?
Wouldn't that be way easier for self-driving
if all the cars could talk to each other?
And the answer is, yeah, it totally would,
if you got every car talking to every other car.
But until then, you're gonna have to deal
with human-driven cars.
And so then you have to actually solve the much harder
science problem of predicting the motion
of all those human cars.
I think it's the same for our work with government.
So yeah, I mean, right now,
we have dozens of these deployments,
you know, I think folks are giving us really good feedback
on how much it's accelerating their work.
And some of the missions are really exciting,
you know, I think of an organization like NASA JPL.
You know, who doesn't want to help us get back
to space faster?
I love it.
Yeah, we actually just interviewed
Jared Isaacman, who's running NASA.
So he's a great guy to partner with.
It is true in government.
A lot of times people say, well,
this solution would work
if we just put this thing in the middle
and made everything talk to it.
And I'm like, sure, that's what
everyone always tries to do in government.
But these guys are never gonna all work together.
They're all gonna debate.
And so you actually have to design it
knowing it's distributed.
It's very interesting.
Like the real world's messy
and you deal with it as it is.
Not necessarily.
Yeah, it's like the XKCD comic.
Like, we have a dozen standards,
this is a mess, we need one more.
And now, you know, you have 13, yeah.
Exactly.
So it's very honorable to go work in government.
America needs it, and you're fixing it.
Obviously, you're growing even faster
in the enterprise in general.
So these are both big businesses.
This is obviously a very competitive time right now.
There's a lot of the smartest people in the world.
You have a lot of them here.
There's some of them in other places as well.
I think very famously,
some of the big labs like Anthropic,
I think they've doubled into the tens of billions
in the last few months.
It's obviously the highest growth thing right now
in the general area.
You guys obviously, you know,
don't say your exact revenue,
but you're likely to get into the billions soon
if you're not already there.
What's the competition like?
Do people use them alongside you?
Does it help you when they do well?
Like, how do you think about these things?
Yeah, yeah, for sure.
So no, I mean, it's an exciting one obviously.
And software engineering and code is just so big
that I think there's a ton to do.
I mean, we've seen a ton of growth as well.
I mean, our usage of Devin, for example,
among our customers has, I think,
roughly tripled already since the start of this year.
So that's just inside existing customers alone, that's right.
But what I would say,
I think a couple of thoughts here.
One, again, there's so much different work
to go do in code.
And so you'll see a lot of others who are, for example,
building products that will help you make a quick
little website or something like that,
or build something fun, and, you know,
you can use Devin for that,
but I think that's not necessarily what we specialize in.
On the other hand, a lot of what we really, really focus on
to Russell's point is working with enterprises,
governments, regulated industries,
working with massive, massive code bases
and trying to make sense of that
and work in all of those systems, right?
And so a lot of the problems that you have to go deal with
and work on are, how do you absorb
all of the messy context and the knowledge of this code base?
How do you work with a massive,
something that has hundreds of thousands of different files
and work across that?
How do you test and iterate against your existing
unit-testing framework, or how do you click around
and use these products yourself
and make sure that the edits that you made were good?
And that's a lot of what we've always focused on
at Cognition.
And so, basically all the guys that you mentioned here
in terms of the foundation labs and so on,
we actually partner with them
and we work very closely with them.
I think what we tend to find is that, obviously,
for their kind of base research
and the work that they do with models,
there's a ton of interesting work for us to do together.
Whereas I think for a lot of this work
with really enterprise transformation,
we want to be like on the ground working very deeply
with folks and figuring out with them
how we use software to help them achieve their goals.
So at Palantir, originally we created this thing
called forward-deployed engineers,
which didn't really make sense to people 10 or 15 years ago.
And it turns out there are certain types of workflows
where you could build a product that does a lot,
but then the forward-deployed engineer would actually have to
understand the business value and kind of connect the dots,
and then take things that they created,
which oftentimes go back into the core,
so the core gets better and could do it next time.
Do you have something similar to this?
Do you have a lot of these?
Is it a big part of the value you're providing?
Yeah, so it's interesting.
There's some similarities and some differences for us
with the sort of the Palantir model.
So one of the interesting differences
is that a lot of times our customers,
they just use our product on their own,
even without us talking to them or without us discovering them.
People bring in our tools and they just start using them immediately.
But once Devin is inside an organization,
the ceiling of what you can accomplish
if you're a world class agent manager
versus a new engineer who's just sort of learned the tools,
it's an enormous delta.
And so there are kind of some parallels
in our business that are more like a
Databricks or Snowflake type thing,
where you just kind of get in
and then people start using your products more
and consuming more.
On the other hand, if you actually want to go drive major outcomes,
like real business change that results
in something structural, you know,
oh, I can launch this entire product line
that I wouldn't have had the capacity to otherwise,
for that type of outcome, you know,
our forward-deployed engineers are among the best in the world
at managing agents at really high scale,
because that's exactly what they
focus on and do every day.
So at Palantir, we had these frameworks of, like,
building an ontology of all the data,
making it talk to each other, an ontology of the processes.
And we had all sorts of different frameworks over time,
conceptual framings we would use when we go in.
Do you guys have your own conceptual frameworks
for business value? Do you have something like ontologies?
Not to get at the secret sauce.
Oh, yeah.
But other things like this, you could tell us.
We really look at it from the perspective
of the software development lifecycle.
So we go inside an organization,
they have a way of doing things, right?
Of developing software, starting from planning
and deciding what they even want to write,
understanding all of their existing code
and process to then maybe scoping it out
and maybe writing some code, testing it, fixing it
when it's wrong, iterating on it,
deploying it in production, monitoring it.
You know, there's a pretty standardized
software development lifecycle at this point.
And what's happening is agents are just eating more
and more of the cycle.
And it kind of started with the writing of the code.
And now we're like well past that, right?
And so, in fact, one of the more recent products we shipped
is called Devin Review,
because we observed that there's this totally new bottleneck
in the software development lifecycle
that wasn't there previously,
which is, there's this abundance of code being written by AI now.
How can humans even keep up with it all
to understand what's going on?
Again, we work with a lot of regulated, large,
complex organizations that are running mission-critical systems,
and you can't just sort of vibe code,
you know, your way through and, like, YOLO-merge.
Those systems can't just be vibe-coded.
No, that's actually really important.
And there's the question of, eventually, you know,
are we gonna have English as the source of truth,
and people are just gonna be collaborating on specs?
Yes.
I think, you know, March 2026.
No, you still need to understand the code that you're merging.
And so, you know, a big thing we care about at Cognition
is we're building tools that are like future-looking,
but still meet people where they are, right?
We still wanna meet people where they are.
And so Devin Review, it's not just having an AI
auto-comment on every PR and say,
oh, this was good or not good.
It's actually tools for humans to really deeply understand
huge quantities of AI-generated code.
Is there some sense that at some point,
one of the AI models, if it gets a little too tricky,
could sneak something into the things
that are being built, to do something
we didn't want it to do?
Yeah, so it's a great question.
Obviously, I think with a lot of these things,
the simple answer in software is,
you wanna be working with the same review process
and the same QA processes that we all have, right?
And so any big enough engineering organization,
frankly, already has to think about this
with their humans, you know?
And it doesn't have to be on purpose, obviously.
Maybe you accidentally introduced a security risk
or so on, right?
And that's why you have review and that's why you have QA
and that's why you have user testing
and that's why you have release cycles
and all of these other things.
And, you know, I do think this is one of the
things that often comes up,
which is, how do you make sure that your agent
behaves correctly, which of course is a very important
problem.
The reality is that I think in software,
we have actually a lot easier or maybe at least
a lot more grounded of a path to do that
because we have the same thing already
with all of our humans, right?
And so when you work with Devin, for example,
Devin is making commits, submitting, you know,
code diffs and so on.
Devin is not allowed to go and deploy your code
to production by itself or anything like that, right?
It works in all those same systems
and has the same guardrails.
So let's just fast-forward for fun,
a few years in the future.
I talked to some of the people running the top labs,
and they're pretty convinced things are going to keep getting
better at a pretty high pace for at least two or three years.
It's like with Moore's Law,
you can't really see out what's going to happen in
five years, but it feels like things are going to change
a lot.
So tell us about 2028, 2029.
Are there certain things that just look very different,
or certain things that we have to do now
that we won't have to do at all then?
Like, what are the unsolved problems for Devin
to just be doing a massive project
on its own in three years?
Yeah, I think a couple shifts.
I mean, one of the obvious ones which I'll just call out
is just much more widespread usage of all of this
and just good knowledge on how to use these things.
I think right now, you know, you have this core group
of, we'll call it, agent-forward engineers, right?
Or agent-forward companies that understand how to use this,
and they are seeing these, you know, 5X, 10X
productivity gains as a result.
And obviously, you know, all of these big organizations
or companies or governments or things like that
are seeing those results and realizing,
wait, we can't just sit here and be five times slower.
And you know, we have to go learn how to do this right now.
And so that's really happening.
I mean, even this year, I would say.
And in terms of the continued capability gains
that we're going to see?
Yeah, I mean, I think the models are going to get better
and better.
You know, one of the stats that people talk about a lot
is this METR report, which basically measures,
for each different model that comes out, roughly
how much human work it can do in an automated fashion
before you have to go interrupt it and say,
oh, that was wrong.
Let's go do this, right?
And so it's like, you know, just two or three years ago,
the answer was like 10 seconds or something.
You know, you would have it write one line.
And then it's like, all right, the next line is already wrong.
Okay, let's stop here, right?
And at this point, it's already gotten
to the scale of, you know, 10, 20 hours
for the latest ones.
I think Opus 4.6, for example, I think was around 18 hours
as well.
This is always very weird to me,
because it's measuring the model's work in human time, though,
right?
Which is so weird.
So how do they measure it?
Yeah, so the models do typically do the tasks
in less time than a human would.
And then the 18 hours is basically
how long it would take a human to do that amount
of work in between each of the interruption points, right?
And the AI might be doing that in one hour or two hours
or something like that, right?
And the thing that's really crazy about the stat
is you just see it very consistently double.
And I think for the last few years,
it's doubled about four or five times every year,
you know, which is insane.
Which means, you know, you wait two or three months
and it's already doing twice as much work.
So the whole world is changing over two or three months
in terms of what's possible.
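The arithmetic behind that doubling claim is easy to work out. A quick sketch, taking the transcript's figures at face value (an 18-hour horizon and four doublings a year, i.e. roughly one every three months):

```python
# Compound the "task horizon" stat discussed above: start from an
# 18-hour horizon and assume a doubling roughly every 3 months
# (the "four or five times every year" pace mentioned here).
def horizon_hours(start_hours: float, months: float,
                  doubling_months: float = 3.0) -> float:
    return start_hours * 2 ** (months / doubling_months)

after_one_quarter = horizon_hours(18, 3)   # one doubling: 36 hours
after_one_year = horizon_hours(18, 12)     # 18 * 2**4 = 288 hours
```

At that pace, 288 hours is about seven 40-hour work-weeks of human-equivalent work between interruptions, which is why the form factor of the tools changes so quickly.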
Yeah.
And this is what we've kind of seen as well.
And I think we're going to see even more of that.
And to some of Russell's previous points,
the thing that's kind of interesting for us is
that the form factor of what you want to deliver
or how you want to work with the AI changes a lot
as you're going through that, right?
And so when we're saying, okay,
it does 10 seconds of work, obviously the answer is,
you as a human need to be staring at your file of code
and, like, shepherding it and hand-holding it
with every single step, right?
If you're talking about it doing days of work or weeks of work,
now you're actually giving it whole, you know,
output-level tasks of, like, hey, you know,
we really need to go make this app much faster.
Can you go and run this whole thing
and then, you know, do a smoke test
and make sure all the changes look right?
And it's going to go off and do that entire project, right?
Or even bigger initiatives, like, yeah,
can you auto-respond to all of the upgrades
or the potential vulnerabilities
that are coming in with our reporting
and just take care of all of that, right?
So you're going to see a lot more, I think, proactive work.
You're going to see a lot more, basically,
event-driven kickoff work,
where it doesn't have to be a human
that's moderating every step of the work.
And then in terms of what that means
for society, or where it goes,
one thing I'll go on the record with
is I think there's going to be an explosion
in small businesses.
I think AI is actually an extremely
small-business-enabling technology in particular.
Think about what's hard
about starting a small business:
you know, you don't have the resources
of a large company, that specialization of labor
in each part of the process.
And AI is extraordinarily enabling, right?
Think about, you know, the quality
of the quick legal gut check you can get from, you know,
a chatbot from a frontier lab.
The quality of the analysis of your financials,
the quality of software that you build, right?
It's all coming together
to empower each individual person again,
if you exercise that agency,
to do so much more on their own.
I love this.
I actually have a small thing on the side
where I'm trying to help create
10,000 small business owners.
So I'm totally aligned with this.
This is a really good theme right now
for us to pursue.
One last thing I want to ask you guys
about the business that I'm just so impressed by.
So I hired a lot of the first 300 people
at Palantir.
I've spent a lot of time on talent.
Obviously, we even hired Scott at one point,
a long time ago, at one of my companies,
with Vlad's help.
How did he do it?
He's very, very impressive.
I did not see him at the time
as someone who was like a CEO person.
So he really grew a lot, which is good.
I mean, he's learning and growing as he goes,
and he definitely seems like a CEO and founder now.
But one thing you guys have done is that a significant
percent of the Cognition hires are actually former founders.
So not only are you hiring the best people in the world,
you're hiring a ton of former founders.
Why are you doing that?
How are you doing that?
Tell us a little bit about the talent stuff.
Yeah, yeah, for sure.
I think the reality is we just have such a massive problem
that we're going after: we're
solving all of code.
And it's even how we started this company:
Russell was a founder before this,
I was a founder before this,
all of us in our initial crew were.
And the idea for us was,
let's make this one the big one.
We're going to go for it all.
We're going to go for the most ambitious thing,
and the play of solving software engineering
feels like a big enough one
that we can all do that together.
And I think that's a lot of what it comes down to,
honestly: are we working on something
that's exciting for folks who,
to your point,
could very easily go off
and start their own companies and get funded
and build their teams and so on?
And the question for us
has always been, how do we make this the place
that makes more sense for them?
And it's honestly easier to do that now
than ever before.
As a company, for example,
we have one team,
they're called special projects engineers,
and basically every person on that team
is a former founder, every single one.
And they do
a really interesting mix of engineering work,
of product work, of talking to customers,
of driving commercial outcomes.
The problem space, to your point, is so big
that if you're, again, a high-agency person,
you're going to take initiative, and you're working
at a small, fast-growing company
with a problem space so big
that you're only constrained
by your own ambition.
As a, as a company, like I'm thinking,
you know, we have one team of the company,
they're, they're called special projects engineers.
And basically every person on that team
is a former founder, like every single one.
And, and they, they do, they do, you know,
a really interesting mix of engineering work,
of product work, of talking to customers,
of driving commercial outcomes, you know,
the problem space to, to your point is so big
that actually if you're, again, a high agency person,
you're going to take initiative and you're working
at a small fast growing company with, with a,
you know, with the problem space so big
that, you know, you're only constrained
like your own ambition.
Well, it does seem like there's just a renaissance,
a revolution going on, in a world where the capabilities
are doubling every two or three months.
This is a place you can come and be around
some of the smartest people in the world
who are part of growing something with that,
which I guess is pretty fun for people.
We have a good time.
One thing about the interview process,
or the selection process, to your point,
which I think is interesting to call out:
one or two years ago,
a lot of people had this mentality
about how you interview, of like,
okay, there are all these AI tools.
How do we make sure that people
aren't using AI while we're interviewing them?
And I think that has totally flipped.
Honestly, I think that was wrong, right?
If you're asking the question of,
how do we evaluate people on exactly the thing
that AI can already do, that's kind of the wrong question.
And so for us,
and this has always been the case,
our interview process has always been:
you can use as much AI as you want.
We're going to give you a few hours;
go build your own whole product surface, right?
A lot of these are projects that, frankly,
if you were trying to do them by hand,
you would not be able to get done in a few hours.
So you kind of have to use AI for this, right?
But in reality, in addition to how familiar
you are with these tools, what we actually want to test
is: what do you think is the right thing to build?
How do you make these product decisions?
How do you make these trade-offs?
How do you decide, or collect information
about, what you should be doing?
And we found that that's helped us a lot.
I love it.
Well, we started the American Optimist podcast
to try to push back on cynicism
and pessimism in our country.
What's the best case for an optimistic AI future?
What inspires you about what you're seeing?
Yeah, it's a great question.
Funnily enough, we recently
had this whole Satrini piece come out,
which I thought was frankly ridiculous.
I think it gets a lot
of the basic economics wrong, is maybe a simple way
to put it. It starts from
some of the same things that we are calling out,
which is that it's going to be easier to build things,
it's going to be cheaper, and so on,
and then somehow concludes that the outcome
is going to be much worse for all of us.
From an economics perspective,
there's a distinction between real versus nominal
deflation, which is maybe one thing to call out here:
of course prices are going to get cheaper,
but why should that mean that we're all worse off, right?
But the simple thing I'd call out about AI
is that right now, so many of the things that we want to build
and so many of the things that we want to do
are hard-bottlenecked by pure execution, right?
And I think we're pretty quickly getting to a point
where that's not the case.
My favorite line on this is from our co-founder
Walden, who says that for so long,
we've all been living in Minecraft survival mode,
and now we're going to be in creative mode.
And I think that's what we're going
to see over the next five or 10 years:
getting to a point where you're really only limited
by your ideas and your imagination,
where you can just turn things into reality.
And I think that's going to be a great future.
Awesome.
Well, that's an inspiring note to leave it on this Thanksgiving.
Cool. Thanks for having us.
Thanks for having us.
Yeah.

Joe Lonsdale: American Optimist