
What is the name of the song that you're going to be singing?
Do you?
Yo. What is the song?
Good evening, good evening.
It's like mid-afternoon.
Okay.
4 p.m.
I was chatting with someone and I'm like,
I said we should have a call in the afternoon
and they booked it for 1 p.m.
And I was like,
dude, wasn't that a bit keen?
I was like, I do want to,
I do want to, I do want to like,
yeah, on your toes in the afternoon,
you know, Friday or whatever.
And I'm like, dude,
I'm pretty sure Portuguese afternoon doesn't start
till about 8 p.m.
Yeah, that's fair.
Have you actually noticed a difference in that?
I know it's a Spanish thing, right?
To have dinner at like 10 p.m.
or something.
Is there a thing in Portugal too?
No.
Well, not 10.
I think it's 9.
Which seems kind of late to me as well.
The...
Okay, so do you actually know,
do you all know something about the Spanish thing?
Yeah.
Okay, so this comes from an Instagram reel.
So I don't know if it's entirely correct.
But I need to do it.
I need to fact check it.
But I'm going to tell you anyway.
And it's the thought process behind
why the Spanish have their time
the way they do.
And apparently they're, like,
on a completely wrong time zone.
So, of course, yeah, yeah.
There they are.
So when Germany was going rampant around Europe,
they put everyone on their own time zone.
And...
Yeah, yeah, yeah.
Franco, who was in Spain,
was so enamored by the other dictators,
he was like, dude,
I'm going to do it to myself out of solidarity.
And so, he popped the whole of Spain
onto Central European time,
which is probably an hour ahead of where they should be
if you look at the maps.
If you look at the maps,
it's weird that they're actually like
an hour ahead of the UK.
And they're actually almost further west,
somewhere totally.
Yeah, yep, yep.
And so, there's some thought that that's actually why
Spain has this like a really messed up
late evening thing,
or at least it contributes to it,
because they're actually just on the wrong time zone.
Yeah, yeah, yeah.
No, it actually makes total sense, yeah.
And funnily enough,
I think I actually saw the same reel.
Probably,
because I actually talked about it with her.
And she was like,
oh, you mean this one?
I was like, yeah.
We've all seen it.
We've all seen it.
We've also seen it.
But that makes me think the other way around
must be kind of interesting,
because like Poland,
actually even further,
like I'm pretty sure Latvia is right here.
They're all on,
they're all on UTC plus one,
or whatever it is.
Yeah, UTC plus one.
At least I think they are.
So, yeah, you're right.
Sweden, like all the way up Norway,
it was all on,
but that's not that much,
because you're north.
It's more like,
when you go further west,
what happens?
I know you do have very early days.
And the Spanish
have very late days.
How does that work?
I think the wildest time zone
is the China one, right?
Because everyone's on the same time.
So, if you're in all the way,
that's crazy.
What is it, the west of China?
I think you're getting up at like 10 am.
Or you wake up at 10 am.
And then, you know,
it's like completely shifted day.
That's wild.
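For a rough sense of the numbers being thrown around here: the sun covers 15 degrees of longitude per hour, so you can estimate how far a city's clocks run ahead of solar time from its longitude and its UTC offset. A minimal sketch (longitudes are approximate; everything here is my own illustration, not anything from the conversation):

```python
# Back-of-the-envelope check on "everyone in China is on Beijing time".
# A zone's reference meridian is utc_offset * 15 degrees; the gap between
# that meridian and a city's actual longitude tells you how far its clocks
# run ahead of the sun.

def solar_offset_hours(longitude_deg: float, utc_offset_hours: float) -> float:
    """Hours the local clock runs ahead of mean solar time (negative = behind)."""
    reference_meridian = utc_offset_hours * 15.0
    return (reference_meridian - longitude_deg) / 15.0

if __name__ == "__main__":
    # Kashgar, far-western Xinjiang (~76 E), on China Standard Time (UTC+8):
    print(round(solar_offset_hours(76.0, 8.0), 1))   # ~2.9 h ahead of the sun
    # Madrid (~3.7 W) on CET (UTC+1), the Franco story from earlier:
    print(round(solar_offset_hours(-3.7, 1.0), 1))   # ~1.2 h ahead of the sun
```

So in far-western China, solar noon lands near 3 p.m. on the clock, which is why a 10 a.m. wake-up there is perfectly normal.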
Good time zone chat to kick off the pod.
All right, let me roll the intro.
You've been a bad,
everything is changing,
total game changer,
new model truck,
from top singer.
Welcome to the bad age in podcast
with Matt and Will.
It's so catchy.
So, I was like,
singing it in my head yesterday.
Also, funnily enough,
chat with my brother a few weeks ago.
And he was like,
can you please send me the prompts
because I've tried ElevenLabs Music,
and I cannot get it to generate something good.
Like, how did you make this intro?
I actually have no idea how you made it.
I'm convinced you've commissioned it
because I also tried ElevenLabs.
It was really hard.
It was a lucky strike.
But yeah, I've,
it's been very hit or miss with me
with ElevenLabs, to be honest.
Yeah, I don't know.
Yeah, I mean, all of that stuff.
It's like a,
it's all like a lottery, isn't it?
Also, actually,
well, we're on this topic.
I know there's lots of licensing stuff around music.
I am,
one of the things I've been building
into my personal assistant app thing.
It's like a lyric learning thing.
And,
because I'm really, really, really bad at song lyrics.
Like, they just don't register for me.
I'm all about vibes with music.
And, but Jess knows, like,
the lyrics to every bloody song.
It's unbelievable.
So, I've been trying to make an effort,
well, I've been trying to try to make an effort with
learning these lyrics.
Let's, let's sing songs.
Let's, let's sing songs.
Yeah, and obviously, you know,
my personal AI chat app has, like,
Anki-style,
like, what's it called?
Like,
repeated,
there's a name for it.
I should know this.
I was just working on it.
No idea.
I have no idea what you're talking about.
It's like, when you don't learn?
Ah, whatever.
It's like, when you don't learn a thing,
and then,
or when you, when you,
you definitely know what this is.
It's like,
when you repeat something,
spaced repetition.
That's what it's called.
Like, a spaced repetition thing.
So you can, like, actually,
remember things, like, well.
Okay.
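For reference, the classic scheduling rule behind Anki-style spaced repetition is SM-2. A minimal sketch of it; the function and field names are my own, not anything from the app being described:

```python
# Minimal SM-2-style spaced-repetition scheduler (the algorithm Anki's
# scheduler descends from). After each review, the interval to the next
# review grows by an "ease" factor that drifts with answer quality.

from dataclasses import dataclass

@dataclass
class Card:
    interval_days: int = 0   # days until the next review
    repetitions: int = 0     # consecutive successful reviews
    ease: float = 2.5        # growth factor for the interval

def review(card: Card, quality: int) -> Card:
    """Update a card after a review graded 0 (blackout) to 5 (perfect)."""
    if quality < 3:
        # Failed recall: restart the interval ladder, keep the ease history.
        card.repetitions = 0
        card.interval_days = 1
    else:
        if card.repetitions == 0:
            card.interval_days = 1
        elif card.repetitions == 1:
            card.interval_days = 6
        else:
            card.interval_days = round(card.interval_days * card.ease)
        card.repetitions += 1
    # SM-2 ease update, floored at 1.3 so intervals never stop growing.
    card.ease = max(1.3, card.ease + 0.1 - (5 - quality) * (0.08 + (5 - quality) * 0.02))
    return card
```

Three perfect reviews take a card from 1 day to 6 days to roughly two weeks, which is the "remember things well" effect being described.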
And it just built, like, a,
I just built, like,
the lyric learning thing into it.
But,
models have this hard-coded thing at the end,
where it refuses to output certain content.
So, like, a model will never output
you full song lyrics,
or at least the Claude models won't.
But I think this is across the models.
So,
they will link you to a page that has the lyrics.
They will fetch, like,
they will execute a tool call that fetches the lyrics.
But they will never actually return you full lyrics
because they're, like, licensed or whatever.
But what you can tell them is, like,
okay,
use your, like, browser-use skill
to fetch the full lyrics from a website.
And then just save them to a file.
And then don't tell me,
tell me the lyrics,
but just work with them in the background,
and, like, copy them around,
and the code, whatever.
Because obviously,
if you want, if I want these lyrics to exist in my app,
they ultimately need to exist in the code in some way,
or they need to be imported in some file.
But, like, the lyrics can never pass through tokens.
Like, they can never pass through the tokens of the model.
So, if it, like, wants to write,
like, hard code the lyrics in, like,
a TSX file or something,
that's not allowed.
But it can, like, do some weird stuff to reference them.
And then the model is also, like,
the model is, like,
oh, that's such a smart way to get around the model limitations.
Let me help you with that.
Because the model itself is happy to do it.
But, like, there's some layer of content rules,
like, that get applied,
just as the model is about to reply to you, right?
So, like, there's, like, a non, like,
a deterministic, non-model layer,
baked into the API or something.
But the model itself is happy to do it,
which is just kind of hilarious.
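The pattern being described, content fetched by a tool and written straight to disk so it never passes through the model's output tokens, can be sketched as a small helper. The function name and metadata shape here are my own invention; the actual fetch would be done by whatever browser-use tool the agent has:

```python
# Sketch of the "don't pass content through the model's tokens" pattern:
# a tool writes fetched text straight to a file and returns only metadata,
# so the content itself never appears in the model's reply.

import hashlib
from pathlib import Path

def save_text_out_of_band(text: str, dest: Path) -> dict:
    """Persist text to a file; report only its shape, never its content."""
    dest.write_text(text, encoding="utf-8")
    return {
        "path": str(dest),
        "lines": text.count("\n") + 1,
        "chars": len(text),
        "sha256": hashlib.sha256(text.encode()).hexdigest()[:12],
    }
```

The app code can then import the file directly, and the model only ever sees the path and the checksum.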
I think this also applies to, like, certain open-source licenses.
Like, I think I saw a thing from Mario Zechner, the Pi guy,
on Twitter where it was refusing to output the, like,
AGPL license.
That's quite funny.
That's quite funny.
Oh, see.
Okay.
Okay. Speaking of the Pi Guy,
speaking of Mario, have you tried Pi?
Briefly.
Yep.
It's sick.
Yeah.
It's like, I like it over everything.
I prefer it to Claude Code and OpenCode when I'm sat in front of it.
I haven't tried that much for, like, remote stuff,
but sat in front of it.
The tree abstraction is so good,
and the extensions are beautiful.
People have written some extensions
so we can use, like, our internal AI gateway.
It's just so easy to write.
Like, you can basically, it just writes itself.
Like, the API structure is so nice.
And the auto-research extension that Toby made for it,
the DriftFi CEO off the...
Well, I don't know if he made it,
but it was made, basically, like,
Karpathy did some auto-research thing
and then they made an extension for Pi
that does the same thing.
It's insane.
Over the last, like, six hours,
I think it's been running.
Oh, for you?
Oh, no.
Yeah, for mine.
And I just took, you know,
you know, I'm obsessed with Git, right?
Yeah.
I took LibGit2.
Okay.
And I just made it a full Zig,
like, a native Zig version of Git.
Oh, amazing.
From LibGit2.
And it works.
It's 46% faster than the Rust implementation.
No way.
Yeah.
It's insane.
It's like, up to...
Is the one in Rust,
at the moment,
like, the canonical?
Oh, okay.
It's in C.
But there's loads of other...
There's loads of random shit in LibGit2.
And, like, that...
I guess that's the point.
It won't be fully feature-complete.
But my Git hashes match, like,
everything matches,
and I'm running it on the...
That's cool.
React repo.
Like, the biggest repo I could think of.
Yeah.
Nice.
And for context for people who don't know what LibGit2 is,
I happen to know this randomly,
but I feel like it's not very well-known.
Well, no.
But it's like the native library
that, like, any other application, right,
that ends up interfacing with Git uses
to actually, like, do stuff with Git.
So, for example, in the GitHub code base,
the stuff that actually merges pull requests.
Like, merges branches under the hood
will call out to LibGit2.
Is that...
Yeah.
Match your understanding?
Yeah.
LibGit2 is, like, what a Git server uses.
And there are a bunch of implementations of a Git server,
but they normally rely on LibGit2 under the hood,
via bindings in different languages.
There are a few others.
There's this rust one.
And there's isomorphic Git,
which is in pure JavaScript.
Which is kind of cool as well.
It's a bit slow.
Interesting.
It's kind of cool.
But yeah, I wanted it in pure Zig
so I could run it in Wasm.
So it's, like, completely portable.
I can run it everywhere.
And you can't run LibGit2 in Wasm
because it has some stuff
that doesn't convert to Wasm.
Yeah, it makes sense.
It makes sense.
Yeah.
So...
Is this your first slop fork?
This is, like, a mega slop fork.
Like this is hectic.
If you're not slop forking right now, what are you doing?
You're getting a little behind.
If you don't have at least three slop forks.
What are you doing?
Just taking stuff that's already existed in the ecosystem
and just, like, moving it between different tools.
And so this is the pie auto research tool
that's doing this, by the way, right now.
Just basically trying to get the full coverage
and get as fast as possible.
Because Git itself is a very simple...
It's actually a very, very simple specification.
Like, it's not long at all.
It's not big.
You know, most data model...
Yeah.
Like, it relies on compressing and decompressing objects.
And so, like, Zig is, like, the perfect language
for, like, the manual memory stuff.
And the models find it so easy,
because they can look at the standard library directly
of Zig, which is tiny, and they can just be like,
oh, this is how the standard library compresses stuff.
Maybe I should do the same thing.
Or we should use the standard library version.
Oh, should we try and tweak the standard library version
as long as there's a good feedback loop for speed?
It can just customize the hell out of it.
Yeah.
It's insane.
Have you...
Yeah, what have been some of the things
that you learned or that surprised you?
Is it the same thing that, like,
the test coverage of LibGit2 is really, really good.
So, as long as you get the tests passing,
like, is that, like, the main feedback loop that's driving it?
No, I'm having to make my own tests
and my own bench scripts for, like, common things.
So, I will be missing loads of stuff.
Yeah, I'll be missing loads of stuff.
But, like, comparing it against LibGit2 and the Rust version,
and just, like, testing...
Because Git doesn't have that, like,
the internals of Git don't have that many processes,
like, at all.
There's, like, I'm not gonna list them out now,
but there's, like, six of them.
It's, like, really not very big.
It's, like, walking a tree,
passing stuff, creating diffs,
compressing and decompressing objects,
like, passing commits.
Like, there's not a huge amount of stuff there.
So, yeah.
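The hash-matching check mentioned earlier works because Git object IDs are fully deterministic: a blob's ID is the SHA-1 of a tiny header plus the raw bytes, and the on-disk form is just that, zlib-compressed. A minimal sketch, reproducible against `git hash-object`:

```python
# Git object IDs are deterministic: a blob's SHA-1 covers a small header
# ("blob <size>\0") plus the raw bytes. That's why comparing hashes against
# the real Git or LibGit2 is such a strong correctness check for a
# reimplementation: if every object ID matches, the data model matches.

import hashlib
import zlib

def git_blob_oid(content: bytes) -> str:
    """Same ID that `git hash-object` prints for a file with these bytes."""
    store = b"blob %d\x00" % len(content) + content
    return hashlib.sha1(store).hexdigest()

def git_blob_object(content: bytes) -> bytes:
    """The zlib-compressed form Git writes under .git/objects/."""
    return zlib.compress(b"blob %d\x00" % len(content) + content)

if __name__ == "__main__":
    print(git_blob_oid(b"hello\n"))
    # ce013625030ba8dba906f756967f9e9ca394464a
```

Trees and commits work the same way, just with different header types, so matching hashes across a whole repo like React transitively verifies the tree walking, diffing, and compression layers at once.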
I probably won't have support for all of the crazy...
Or the main things, yeah.
Yeah.
Most of the crazy, like, command-line arguments
are just there in the CLI.
That's why Zaggy was so cool,
because Zaggy just replaced the CLI
and then called out to LibGit2 underneath
and was, like, double as fast,
half the time for most commands.
Because the CLI had, like, some Perl in it
and some crazy shit in it,
and all those arguments are, like,
a remnant of a bygone era.
Yeah, yeah.
But you can add all of those if you really want to,
but the actual fundamentals of Git are quite small.
It's quite pure.
Yeah, interesting stuff going forward.
So, are you saying that you don't even need, like,
a crazy feedback loop to make this work well?
Or, like...
Not really, because it's just running LibGit2
and the Rust version,
and just seeing what it gets...
I see. So, it's running LibGit2.
And these other implementations in parallel,
or, like, to compare...
Right, right.
And then just comparing the time,
and then comparing what we're getting out of them,
and then just checking the hashes of each time,
because Git's based on hashes, like,
if stuff's changed.
So, it's...
I'm pretty sure it's quite provable.
Like, it's wild that...
Yeah, models are this good that this is possible.
I mean, trying to work out the implications,
because I don't feel like this was possible
six months ago.
What model are you using?
4.6.
Nice, nice, nice.
Opus?
Yeah, I think, yeah, Opus 4.6.
I think...
I think 5.4 Codex
would be good as well.
I'm not sure how to access it via my gateway,
but this is Opus 4.6 on the...
Max plan, actually.
Pretty good.
Nice. That's cool.
One interesting thing that I've noticed in recent use.
So I make quite heavy use of this like second opinion skill that I have, which is the most straightforward thing ever, right?
So my main driver is Opus and Claude code or my own wrapper around Claude code.
So Opus 4.6 is the default for everything.
But then I just tell it, oh, you should get a second opinion from
GPT and Gemini. And Gemini 3 used to be the default for that, and I think it used to be the smartest model when we were in the like
5.2 days. I think it was the smarter model.
Then I started, you know, okay, 5.2 and now we have 5.4 and it's really, really interesting to see the evolution because
what used to happen is that I would say, yeah, I'll get a second opinion and then it would call out to Gemini and GPT and
and be like, yeah, you know, some good other takes, let me incorporate them.
And by the way, I think this is a really powerful workflow. I think everyone should do this.
Like I think just getting other model opinions in is real.
Like it just, they will find random crap where like Opus went off the rails and did something weird.
It's like a good check, but it also genuinely brings in other interesting things.
Definitely.
Definitely.
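A minimal sketch of that second-opinion loop, with the reviewer models injected as plain callables so no particular SDK or gateway is assumed; all names here are my own:

```python
# Sketch of the "second opinion" pattern: fan the same plan out to several
# reviewer models and collect their takes for the primary model to weigh.
# Each reviewer is a plain callable (a thin wrapper over whatever client or
# gateway you actually use), so the sketch stays SDK-agnostic.

from typing import Callable

def second_opinions(plan: str, reviewers: dict[str, Callable[[str], str]]) -> dict[str, str]:
    """Ask each reviewer model for a critique of the same plan."""
    prompt = (
        "Review this plan. Flag anything wrong, risky, or missing; "
        "be blunt and concrete.\n\n" + plan
    )
    results = {}
    for name, ask in reviewers.items():
        try:
            results[name] = ask(prompt)
        except Exception as exc:  # one flaky reviewer shouldn't sink the round
            results[name] = f"(reviewer failed: {exc})"
    return results
```

The primary model then reads the collected critiques and decides what to incorporate, which is exactly the cross-model check described above.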
But the most interesting thing that's been happening recently is when I asked it to use GPT 5.4,
Opus is always incredibly complimentary of what GPT 5.4 outputs.
So it'll be like, wow, I'm sorry, I'll quote it directly to not misquote it, but it'll say like
excellent response from GPT 5.4 or it'll say like that's really like the opinion from GPT 5.4
is the best and most comprehensive or stuff like that.
So I think like Opus really backs GPT 5.4 for like this use case.
And to me, it feels also like 5.4, especially for like architecture review or like planning.
Even maybe for coding, I know some people prefer using the codex model, like 5.3 codex for coding still.
It just feels like a really good model and a step change in some form.
And of course the different families have different strengths.
Like I think Opus and Claude are better at like front end stuff, for example.
But in terms of this like planning and thinking and architecture and review,
5.4 feels like we're kind of in a new league.
Well, according to my opinion and according to Opus as well.
Yeah, as well.
I remember back in the day like you'd get a GPT model to review another GPT model stuff
and it was like crazy sycophantic and I was like, what crack are they smoking?
Like to each other.
That must mean just something where they recognize each other or something.
It recognizes, yeah, its own writing.
Yeah, it's interesting, huh?
Yeah, I feel like I'm really, I feel like I want like Opus 4.7 or whatever, like I want
I feel like I need the next hit.
Yeah, I need the next hit from the crack dealer.
Exactly, yeah.
And then also Gemini feels like almost dated now, even though you know when it came out,
it was like, wow, this is a major step change.
So what a time to be alive.
I didn't like it that much.
I never really loved it; I wasn't the biggest fan.
Yeah.
I mean, I'm a Claude person, I feel like I,
yeah, I think other people like a lot more than I like Gemini 3.
But I always liked it because it felt like it would catch things,
like I never use it as a daily driver for calling tools or whatever.
I used, I used it for the review stuff mostly and it was useful for that, but I think,
and it's still useful for that, but I think the other models are just even better now.
So that's the brief weather report of where we are at March 20th.
And this is, yeah, 2026.
Wild, isn't it?
It is wild. I like, I'm still struggling to deal with like the change recently of what's
possible and like what we should be building and it seems like there's so much stuff
that's possible, especially the last, last couple of days, I've just been quite,
like really quite struggling with that, getting a little bit into my own head about what I should
build because I can theoretically build anything now, even if like at the moment I don't know how it
works or even like what it is. So I feel like spending more time like imagining the future I think
is actually better, I think, but it's been breaking my brain recently.
Yeah, massively breaking the brain. I completely agree. You should do a think week.
Yeah, a long think weekend or something like that.
I'm overjoyed actually as well.
A long think weekend. I have some, yeah, I have some fun ideas there. I was also listening to
Dwarkesh as well. Oh yeah. The past like, the past like a couple of weeks I've been trying to get
through the one with the semi analysis guy. Oh, I started that, yeah. It's wild.
I didn't realize how far up shit creek we were when it comes to GPUs.
As in like, you think the future is a bit bleak and we're very constrained. Oh, we're definitely
constrained. Actually, I feel like I had a similar takeaway from that. I think back to,
so I have, I've listened to maybe the first quarter or something, but it made me like,
oh, I'm glad that I can get tokens out of these models right now. Like, who knows in like a
years time, like, will I be important enough to like actually be able to use these things?
Or will it just be too expensive or like, there'll be too many, like, I don't know, very valuable
enterprise use cases that take priority for, like, I don't know, Claude 6 or whatever.
And mere mortals won't be able to use it because there's just not enough GPUs to run all this stuff.
Yeah. Yeah. No, like, that's literally how I think about as well. I'm like, what's
what, what, what, yeah, I'm actually, I'm just kind of struggling with it too. I'm like, we are
massively compute constrained for this new world that we're entering. But every, there's also
another thing where like, this compute is expensive. People are paying two-ish dollars an hour for H100s,
which is more than we were paying two years ago, for a chip that is now almost six years old.
Like, that's wild. And that's happening because Blackwell, the next generation is coming out too
slowly because of a company, like a bunch of supply chain things, but a big one is a company in
the Netherlands. ASML, which only produces 60 of these lithography machines a year when they
should be producing thousands of them. And that's somewhat due to a lack of lenses that they can
get from Carl Zeiss in Germany. It's all like kind of like, it's just this weird thing where
there are sort of like four manufacturers that control all of this stuff. And as you get further
down, further down the stack, everyone's less and less AGI-pilled. And at Carl Zeiss, they're
not AGI-pilled. Yeah, it's very new territory, this, for us software people, right? This like,
oh, what do you mean, you can't just turn on the thing? Just drag the Heroku slider to more.
Just deploy to region Earth, you know? Yeah, I want more, I want more, I want more, but then like,
there's a problem. There's a problem on the software side as well, where
how do I phrase this? Like, stuff. Okay, so stuff has got a lot cheaper to make. People are making
more stuff. And it's got cheaper in terms of like human capacity hours. And obviously you see
this a lot in, like, San Francisco and things, where you guys are paid, like, shit tons of money,
lots of money, for, like,
for software. But for the rest of the world, there's going to be, like, this, like,
tipping point, where maybe, well, maybe there's not, maybe the model price goes
down more and more and more. But like, there's going to be a point where we're using so many tokens
that maybe it's cheaper to have humans, or maybe it's not, maybe that is ridiculous,
because these AIs are actually producing more than humans would anyway. So maybe the AIs,
the AIs are better. But yeah, I just, I find it, I find it very, very interesting.
It is very interesting. It's very mind-blowing. Yeah, this whole GPU and like, what is the,
what is the price of work these days? I saw Steve Ruiz posted on, like, the Teal Draw Founder
posted on Twitter. It's kind of mind-boggling to me now that, like, I have to pay to code.
And that never, that was never happening previously. Yeah, yeah, yeah. It also breaks like the
contract of the internet a little bit. Yeah, like, yeah. Previously, use the internet for free.
The internet's always been free. Like, software has been kind of free. At least in, like, a
hacky way, software has been free. Yeah. It just takes someone's time. To be a good engineer,
you don't, in theory, need an expensive computer, or like, if you have the skill, you can work.
But now, you need, like, five Max subscriptions. Yeah, it starts getting, it starts getting really
weird. Or like, even five Max subscriptions to learn now, because Stack Overflow is dead. Like,
right. Yep, yep, yep. Like, there's a thing there as well. I'm like, these, I just wonder how the
economic model of these models is going to like carry on. And you do have like, open-source.
Free models. And cheap models, right? And the open-source ones are, do they get anything?
Yeah. Like, we just, we just published, uh, Kimi K2.5 on Workers AI. It's a pretty
banging model. Like, it's decent. It's sort of, it's sort of like, if it had come out in the
Claude 3.5 or Claude 4 era, I think we'd have said this is, like, the best
model we've ever seen. Now, it's like, not quite the best, but it's really good. I was going to ask you
like, what, do you use any open-source models like day-to-day in, in your workflows? Because I met
someone last week who was like, yeah, the Chinese models, like, Kimi, and I actually don't know
which, or Qwen, they're like, they're actually state-of-the-art. Like, they're totally on
par with the frontier models. And I'm like, no way that's true. Yeah, but I actually haven't
tried them. But I do have a bunch of models on Workers AI, so I use them for
demos and stuff. And they're perfectly capable, especially the, actually, the smaller ones are like
mega fast. And so it's really nice to use them for demos. I think when I get my
home lab fully set up, I will do some more of that, like, with, like, a personal assistant open
core thing. I'm just a bit lazy, I'm not going to lie, in doing all of that, setting all of that up.
But I'll probably use, like, Qwen 3.5 or something like this. I heard it's good. Yeah, yeah,
nice, nice, nice. I started using an open source model actually on Wednesday two days ago. I went
for a meetup. This is actually a meetup here that I love because it has very similar vibes to AI
demo days. Yeah. Really interesting speakers. Really good crowd. And like, and it's, I think it's
happened three times. I missed the first edition. Went in January and then it was on again this week.
It's called agents anonymous. Okay. And you know, you spend like, what, like, two, three hours
at this meetup. And I just remember working home, being like, holy crap. I just learned so many,
like every conversation with someone else, you learned something, which is not true at all for
most conferences, right? Like most, yeah, anything. Like when, and I think it's a reflection on
the quality of the meetup, but also on just the time we're in right now, there's so much happening.
And so everyone's trying to figure it out. So the meetup solid, but like, I was just chatting to
this guy and he told me about parakeet, which is this Nvidia speech to text model, which I hadn't
heard about actually, maybe because I'm off Twitter still. And I was like, oh, I thought this was
like solved, everyone just uses Whisper, and he was like, no, this model is actually really
good, especially if you just need like English, like maybe whisper is still the best for like
multi-language stuff, but this is like really, really good. And you can run it locally. And I was like,
whoa, can I run it on my Mac mini? He said, yeah. So then actually I recorded a voice message
into my Slack where my like Mac mini chat agent is listening and sent it off. I mean, like, hey,
build me this whole, like, voice pipeline. And it was like, okay, you sent a file, I can't read it.
It doesn't have any voice capabilities. And I was like, yeah, that was a voice recording. Please
like use parakeet to like figure it out. And then, and I kid you not, it built the whole thing.
It downloaded Parakeet. Like, the whole model is, like, a gigabyte or whatever.
Downloaded the whole thing, wired it all up, then played my transcription back. And I was like, oh,
okay, cool. I can hear it now. It's just like so mind-blowing. It's so mind-blowing that you can
and I'm like texting it throughout the meetup talks like on Slack, like looking at the progress
and like giving it like other stuff to do or whatever. But they just figured out the whole
thing. Then it built a whole voice thing into my mobile app. And I just tried it out. Like,
and the model is fast, right? Like so it can transcribe or for me at least it transcribed like
30 seconds of voice recording in like 700 milliseconds. Wild. That is fast. Yeah. And that was just like
the first thing in the meetup. Like I need to tell you about the other stuff as well or like
at some point later in this. Like there's just like real Steve Faulkner gave a talk actually.
Oh yeah. Yeah. He was there. He talked about, because we talked about him last week, right? He
talked about his V-neck slop-forking adventures. And one of the funniest things he did actually,
I really like his humor. I think he has a very cool sense of humor. So funny. Yeah. He's really
funny. But he said, yeah, hey, I didn't really have time to prepare for this meetup talk,
but I was on a podcast this morning. So I just fed the podcast transcript into like a vibe-slide
generator. And it made these slides. Yeah, we have, yeah, that's an internal tool.
It starts to offer nourishes. But I also, I build slides with it as well. It's really good.
It's called Let It Slide. That's great. Yeah. But he was, I think it's public. I think it slides.cloudflare.com I think.
Oh no, it's not. I like.
Yeah. But he just said like, okay, so we're seeing these slides together for the first time.
Like, I haven't seen these slides. Let's go through it together. And it was really, really good.
And then as he was saying this, I was like, damn, I do a podcast. Why don't we feed our transcripts
into a thing and let it generate slides. And then post these as like, you know these like
LinkedIn PDF carousel posts. Oh, no. God, no. Yeah, I know exactly. But do you know what,
I built the whole thing, Matt. Like, I haven't sent it to you, but
that whole pipeline works. Yeah, yeah, yeah. And it's not that bad. And it's great because when
we post the pod somewhere, obviously it's a commitment to download a pod and whatever. And now we can
like put our most rage-baity, controversial stuff in these slides. And it'll make people click on
the pod. No, I've got to be careful what I say now. Okay. So you're not on Twitter.
You know, you're not on Twitter. Well, I actually posted on Twitter that React has completely
replaced Google Slides for me in 2026. And it wasn't, like, wasn't on my bingo card ever. And
I got a bunch of replies, but like, it's completely true. Like, I did this talk for Node Congress.
And someone, Rita, I think, sent me, sent me some themes that she, she made her slides with.
Okay. And I was like, cool, borrow those. Just plug them into, plug them into Claude Code. And
was like, dude, can I have, can I have slides for this? And I had a whole, like, you know,
there's voice mode in Claude Code. I just absolutely, like, rambled on about what I wanted to talk
about, talked through like all of the different repos that all of these examples live in. And
they all live on my laptops. It's like, where, where they all are. How I want to arrange them. And
then just went away for lunch and came back and came back to pretty much, not quite perfect.
It took like another couple of hours of like playing around with some stuff. But honestly,
considering 20 slides normally took, like, two days, a day and a half, like a good few hours,
in the tens of hours, to do before, it's now probably 45 minutes to a couple of hours. I would
say even less, maybe. And all my slides are mega interactive now. So I have, like, I embed my demos
on my slides, because it's react. Right, because it's just, okay, interesting. I want to ask you
about this because it makes sense to me in theory how it could speed things up. But when you talk
about like slides, 20 slides take like two days, most of the hardest thing of like doing a talk
right is figuring out like, okay, you want to communicate this thing that you understand. But like,
where's the audience at? How do you want to communicate it? What are like the main points? What are
like, like how do you meet people where they are? I know that's the hard part. But I know how I
want to do that. I'm going to show you my slides, they're beautiful, by the way. Look, can you see them?
Oh my god, yeah, they look very beautiful. And look, they just like, like the like the motions,
like it's stunning. But just for me to understand, I mean, it looks incredibly beautiful. Yeah,
and it looks like a very coherent story. But like, do you think sandbox look like if I turn?
Yeah, that's incredible. But you can just like chat to it.
Your project management CLI. Yeah, I want to listen to this talk. Yeah, it's good, right? And then
this is like another way that you might do it. Yeah, very, very cool. Yeah, it's like the top
tier, demos as slides. This is clearly the way, okay, I'm convinced. Look at this. Okay,
and now this is, this is one on code mode. So look, I can list a project. Oh, this is, this is
calling a running backend. It's calling, like, my project management SaaS that's out there. Look,
I've just listed all my projects. We created your projects. Okay, this is crazy. Okay, so for
people who can't see this, this is what I'm looking at is basically like an incredibly well-designed,
beautiful, useful, interactive website that just happens to look like slides. Yeah, and look how it
persists moving between different pages, right? That is beautiful. We just created, we just created
a lot of projects in the SaaS backend using some, using some generated code. And now we can
create a task. Should we create a task? Okay — the point I was just trying to make is that you're not some consultant who's been told to give a talk about X and has then just generated 20 slides that otherwise you would have actually thought about. This is exactly what you want to say, and now you're just making it beautiful. And even expressing it in this way — like, I could not have made this, like, this pretty, personally; it would have been so hard. And the cool thing about CSS, right, is the alignment is kind of done for you. Yeah, yeah — which it really is not in slides. Like, it can be a pain. And, like, look at this: this is one slide that I can, like, do stuff with and play with. And oh yeah, using CSS for alignment is obviously such a better way to do it than slides. It used to be harder; now you can just tell it, make it aligned.
I think I'm showing off, like, global outbound and dynamic workers here — I just call this endpoint. And this is live code, right? I can, like, console.log. Yeah, that's incredible. Console.log — actually, let's just return it. Let's just return it. I can go on this as well, right? This looks like it's live on the internet. It is — well, it's not really; it wouldn't be live when someone else comes to look at it. But yeah, I'm right at the end. There's, like, a whole — just, like — let's leave it there. I'm glad you showed me. It's fun, right? Oh yeah, it's fun. Oh, I found a bug. Oh yeah? I found a bug on your slides: when you hit the arrow key to go back — like, go to the left — the animation still makes it look like you went to the right. But yeah, wow. I wish everyone could see this.
And you're giving this talk at MCP Dev Summit as well, right? In New York. I'm giving a similar
talk at MCP Dev Summit. That's awesome. Yeah. So what's your plan for this trip? Do you end up
coming out to SF or no? I don't think so, dude. Because I got into AI engineer Europe as well.
So it's MCP Dev Summit the one week. And then the next week is AI engineer Europe. Oh, wow,
yeah. So that's a stacked schedule. Yeah, fair enough. Yeah, another time.
Another time. I want to come to SF. I do like it there. It's fun. But I'm already away,
I'm already away for two and a half weeks because then I've got a friend's wedding
the weekend after AI engineer Europe. And then the week I'm going to stay in the UK a little bit
because then the Wednesday after is Cloudflare Connect. So if anyone's around in London,
Cloudflare Connect is pretty fun. Well, it's a good thing to connect at. Yeah, all good things. That's cool. Yeah. Another really interesting thing at this
meetup was — so Cursor, the Cursor people, gave a talk. And they seem to be, like, all in on background agents now. Yeah, like that is very much the focus. But it's interesting — I think the cool thing about these talks is not just, like, the actual content, but also you see how people speak about a thing, how they seem to feel about it, right? And it's interesting, because Cursor used to be such the, like, obvious winner in all of this — they had all the users super early. And now it feels a bit more like they're trying to catch up, or figure out what their place is, or trying to convince us — or trying to, yeah, pitch their vision for background agents. Which feels like a kind of compelling, reasonable sales pitch, but it's just kind of far removed from the raw hype and excitement of the initial stages of using AI for coding. That hype is now kind of more in the, like, OpenClaw world of things. That's where it feels like, wow, like, holy crap, what's happening, let's do it all — rather than... I guess maybe there's just different stages of the hype cycle or whatever, but it feels like a very different kind of pitch for the Cursor product than maybe it was a year
ago. Yeah, well — looking at all of this from the outside at the moment, I find it all kind of strange, and it kind of hurts my head a touch. But I can imagine what they're feeling is a bit of an existential crisis, right? The models almost got too good. They started with this idea, which was like: oh, maybe we can use AI for coding, and we can have this amazing autocomplete that jumps lines; oh, we can have agents inside, working with you. Oh shit — the agents can do it all? Fuck, what do we do now? Because now you don't need the IDE. Now what happens? Okay, right, now we have to invest in background agents, we have to do that whole agent-hosting infra. Yeah, like, there's no other option. Do you use background agents in your day-to-day at the moment?
Do I use background agents? Not really, no. Yeah, I don't know. I spend most of my day in front of a laptop, and during that day I have, like, six agents running at maximum at any one time, all the time. But whether I'm doing background agents — like, they're always running in a shell on my computer. So no, it's not background so much as I'm collaborating and overseeing in a way that is very much still on my computer. Yeah, yeah. That's the same for me, to be honest. That doesn't
mean it won't change soon. I just haven't found a good experience where I can feel like I can still do everything and know what's going on. And, like, a big thing — I've said it loads of times — a big thing for me is I run all of my coding agents in my top-level code directory, because I work across multiple repos at once. Like, I'll have a POC in one repo, I'll have a reproduction for a customer where something was broken in another repo, and then I'll have the main Agents SDK, and then I'll have, like, a consumer of the Agents SDK, like the Cloudflare MCPs repo. And across those four repos I'll be looking at one that then does something with the other, that then I need to make a fix for in agents, and then I need to demo and check the fix in Cloudflare MCP. And I just — that workflow, I really don't think it works very well with a singular repo. Right — if a background agent is tied to just, like, one repo, then that kind of falls apart. Yeah, it all breaks, because the background agent doesn't have enough context. Yeah, yeah,
until — no, I think that's how it goes. Until I can have, like, a programmable computer in the cloud, which is coming very, very soon, right? It's not going to be far off. Oh, your Mac mini's arriving, finally. No, no. Like — okay, so there's a bunch of sandbox companies that do something like this that I'd be kind of keen on. Even — not even, but, like, Cloudflare Sandbox is 100% going to do this, or something similar to this that I would be keen on. It's just the whole dev experience of being connected to a remote machine that I haven't quite got my head around. And some
people have been super ahead of this already. Like, I remember Dax talking about this ages ago, where he's like, yeah, all my dev-ing happens on this machine next to me, not on the machine that I'm working on. Like, I'm always SSH'd into my dev box. Yeah, I've never done that. And yeah, I've also never done that. I guess, like, when he was learning to do that, I was still in pajamas. I was still in, like, nappies, you know. Yeah. Yeah. So I think I probably need to learn.
Yeah, I either need to learn how to be productive in that workflow, or I need to build some tooling that allows me to be productive in that workflow — because I want to, like, mount an external file system onto my — yeah. I mean, what's the benefit — coding agent — sorry. Like, what would be the benefit of you adopting that workflow? I guess the main benefit is that I could turn my laptop off and it would still run. Oh, yeah. Like, if I SSH into a dev box and then I run my agents in the dev box, then I don't need to — it can all just be running the whole time.
I will say — so, I mean, to bring the Mac mini back into the picture: people ask, like, oh, what's the best thing you've built with your agent? And it's, like, lots of little things, and I've shared a bunch of them on the pod, right? But one of the most fun things — like, a very neatly packaged workflow that people seem to like — is building something on the go: talking to the agent on Slack while I'm, like, at the meetup, and I just have an idea and I can fire it off straight away to get it to build, and obviously the code runs and does all the stuff on the Mac mini. And same thing in the gym, you know — I feel like I spend a little longer in the gym now, because in between stuff I'm just chatting to my agent on Slack, and it's building things, and then it's maybe updating the mobile app, and I'm refreshing the mobile app to see the latest changes. And, like, that is a really, really cool experience. Yeah, that would distract me too much in the gym. I can't start doing that, otherwise I won't do any exercise — like, it aids the distractions. Yeah, there do tend to be long breaks between some sets if I get too distracted. It's funny — even as a break it's just, like, hanging out texting. It's a weird time. Wait, I was going to mention the — oh yeah, just on the point around background
agents — the Cursor people mentioned that one of the things they learned is that it's really, really crucial to let users SSH, or, like, connect into what the agent is doing — into the box — at any time. Like, that's something that really worked well for them, which, I don't know, maybe that's an interesting takeaway. And obviously the great premise of background agents is that — if you have a thousand tickets open, right, you can only get through them so fast locally, and you might need some crazy local worktree setup, and I'm a bit bearish on worktrees now; I just think it's clunky to make all that happen. But if you have, like, a background agent that's well set up, and has a feedback loop, and has its little environment configured, you can just spin up a thousand background agents and they can churn through all these tickets at the same time. That's kind of the premise — that really high throughput. Not that I've done this, but that's
kind of how they were pitching it. Yeah — the worktree thing, I've kind of worked out a way that I like doing it. I have a slash command that's /work, and then I just basically dump all my context after that. And what /work does is it says: in the next message there will be a repo that is spoken about that's somewhere in this directory; whichever repo you're meant to be working on, make a new worktree on that repo, but keep it at the high level — don't put it under .claude, or, like, don't do anything nasty like that; keep it at that high level in the code workspace — and then just work on that from then on. And that works pretty well. Like, because I'm normally working on four or five different things at once, even on the same repo, I just open a worktree for each one. And then I need to do some pruning — at the moment I've got, like, 20 open for agents — but then that just all works. And then I can just ask for the worktree to be destroyed. I leave that thread open, normally, for the lifetime of the session — of the feature — until it's merged. Yeah, and then I destroy the worktree. It's pretty good.
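(For anyone following along: that /work flow maps onto plain git worktree commands. A rough sketch below — the repo and branch names are made up, and the actual slash command is a Claude Code prompt that wraps something like this, not this script itself:)

```shell
# Made-up names; this just sketches the worktree lifecycle described above:
# one worktree per task, kept at the top level of the code workspace
# (a sibling folder, not nested under the repo or a hidden directory).
set -e
workspace=$(mktemp -d)   # stand-in for the top-level code directory
cd "$workspace"

git init -q myrepo
git -C myrepo -c user.email=a@b.c -c user.name=demo \
    commit -q --allow-empty -m "init"

cd myrepo
# new worktree + branch for the feature, as a top-level sibling folder
git worktree add -b fix-login ../myrepo-fix-login

# ...agent works in ../myrepo-fix-login until the feature is merged...

# once merged: destroy the worktree, then prune stale worktree metadata
git worktree remove ../myrepo-fix-login
git worktree prune
```

Each worktree shares the repo's object store but has its own checked-out branch, which is why 20 of them at once stays cheap.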
Interesting, yeah. I feel like I've just used separate clones — just make a new folder that's its own clone, with its whole own .git in there, and it can do its thing. The worktree stuff, yeah, I've just not found it... I guess I haven't invested that much time. Well, with separate clones you need them to be in different top-level folders, don't you? Like, if you do, in your code folder, a git clone of something, and then you do git clone again of the same thing, even under a different name, it will still freak out, right? You want different top levels. I mean, that's how I've done it — I've not tried cloning things in, like, subfolders, I guess. Yeah, yeah, okay. I have a little thing about that, but it's just, I don't know, a bit of a simpler mental model or whatever in my mind. But yeah, I actually haven't — yeah, I don't know. Worth more investigation for me, for sure.
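(Side note: in practice git is happy with two clones of the same repo under different top-level directory names — the target directory is just the optional second argument to git clone, and each clone gets its own fully independent .git. A quick sketch, with made-up names:)

```shell
# Two clones of one source repo under different directory names coexist
# fine; each has its own .git and knows nothing about the other.
set -e
workspace=$(mktemp -d)
cd "$workspace"

git init -q upstream
git -C upstream -c user.email=a@b.c -c user.name=demo \
    commit -q --allow-empty -m "init"

git clone -q upstream copy-one   # target dir name is the 2nd argument
git clone -q upstream copy-two   # same source, different name: no conflict
```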
Yeah — did you see, speaking of Cursor, did you see they released Composer 2? That's Kimi K2.5. I saw an announcement or something, but I have no idea — I actually can't get back on Twitter, man. Do you want to hear the beef? Oh yeah, go on. So — I actually missed it a little bit for the first time, I think, this week. I was, like — I think I wasn't that productive or happy or whatever, so I was just like, oh, give me some distraction, and I went looking into Composer 2. Composer 2 is Kimi. It came out that they hadn't changed some of the model IDs, and that it's basically, like, an RL'd version of Kimi K2.5 — which makes a lot of sense, right? Cursor have, like, probably one of the best RL training sets in the world, at all, because they're doing background agents and they can see whether the changes have merged or not. Like, they must literally have the best training data ever. And they are using —
Yes — they are using Kimi K2.5. But the problem with that is that it's, like, a modified MIT license. So it's MIT, but if your company has over 20 million MRR, you have to, like, state that you're using Kimi K2.5 — and you have to be pretty prominent about it; there's some wording, I can't remember the exact wording, but you basically have to say that you're using Kimi K2.5 on any model that is derived from Kimi K2.5. And Cursor fully renamed it Composer 2, which is in breach of that license. And then a bunch of the Kimi folks were posting on Twitter about it, and then they all deleted their posts, so I'm assuming they just got paid off. But, like, crazy. That is some spicy gossip. Yeah — so Kimi K2.5 is the one that has this license, the modified MIT license? It's, like, a modified MIT, yeah. I've got more spicy gossip, actually. I just forget that you're not on Twitter anymore. Yeah, go on, hit me.
Did you see that Astral got acquired by OpenAI? Yeah, yeah, yeah — big news. Yeah, where did you see that? I saw it on Hacker News. Oh, okay. Well, yeah — you know, it was inevitable. Did you see how much they got acquired for? No. Supposedly — a VC posted on LinkedIn, in their, like, thought-leadership post, and I think they were pretty right about all the other numbers, some of which weren't public as well. They listed, like — they were like, who says open source can't make money: Prompt Food or whatever that company's called, bought by OpenAI for 200 million; Bun, bought by Anthropic for 350 million; Astral, bought by OpenAI for 700 million. No! So that's how much? Is that true? Yeah — I don't know if it's true. But — so this wasn't a VC that was involved in the deal, this was someone speculating on LinkedIn? They might have been involved in the deal — like, okay, I don't know. They were basically blowing their own trumpet about investing in open source. Yeah, yeah, I see, I see, okay. I mean, that is wild — that is
an insane amount of money. It's an insane amount of money, yeah. Yeah — so obviously I follow, like, lots of Python people on other social networks, or on email — well, I don't speak to people on email. Oh, actually — did you... I actually implemented all the fixes we discussed last time. I actually pointed my clanker at our podcast transcript and was like, Matt had all these gripes with it; I think he's right about how the page needs to load faster; please investigate. And then it figured it out and it fixed it. Sick. There are some fundamental things I need to figure out still, but, I mean, it's ready, it's out there, it's used. I'll — yeah, I have some more thoughts on it, but maybe another time.
But yeah, I feel like there's a lot of people who comment on this Astral thing. By the way, everything this company makes is hard to pronounce, right? So — we think it's called uv, right? I swear there was a time people were calling it "ov". That's horrific. It's uv. It's uv — are you sure it's not "ov"? And then I thought it was "astral"... I don't know, man, this company. But obviously it's Astral, like astral, constellation — it's named after something, you can't change the name. So do you think it's a "jif"? Oh, Jesus Christ. Okay, let's not go there. Yeah, I don't know. But Ruff is pretty easy — does Ruff stand for anything? And then there's ty. Yeah, and then there's ty — "tie", maybe it's called. I always look at it and think "thank you", which kind of breaks my brain. Oh, sorry — I thought you were thanking them for making ty. No, I wasn't thanking them for making it. Yeah — but Astral, especially uv, has been incredible
for the Python world. Like, it's been a fundamental piece of tech. Yeah, a fundamental piece of tech. And I think a lot of people are very doom-and-gloom about it — like, the Hacker News comments were incredibly negative across the board: you know, "I'm going to stop using them immediately", and, "you know, they raised money, I knew this wasn't going to end well". But what would they use — what would you use instead? Uh, just go back to raw-dogging it. Go back to, like — which, to be fair, is what Claude does; it's really hard to get Claude to use uv, even if you tell it about it five hundred million times. You know the answer to that? The answer to that is to stop using Python. That's your answer to how to get around not using uv — you just stop using Python? Stop using Python, yeah. No more betting on the silly, silly scripting language when you could use something that's actually decent. Can't argue with the masses — the crowd has decided. We're all speaking
English, aren't we? Is that the best language? I don't know, man — I'm in a country where not that many people speak English. Well, I feel like — dude, you've got no leg to stand on: me and you are speaking English, but you're in a country where, isn't it something crazy, like 50-odd percent of people don't speak English? What, in America? 50 percent? No way. Isn't it something mad? I saw some mad stat. I think we're in, like, the sixth nested tangent now. Okay, no idea what I was trying to say. Wait, we need to unwind these — walk the tree back five parents, and then — right, I'm looking up what percentage of people speak English on Earth. We should all be speaking Esperanto or something.
Oh, crazy. But that's not going to happen. Oh — okay, so we're going to use Python. Okay, okay. So I was kind of right and kind of wrong. Yeah, kind of right and kind of wrong: so 21.6% of the population speak a language other than English at home — but that doesn't mean they can't speak English. Then you have 8.4% of the population who supposedly speak English less than well. Okay — and it's less than 1% essentially speaking no English, which makes sense. Yeah, that makes sense. Yeah, that makes sense. But I think 4% of the population speak English "less than well". Less than well, yeah — I didn't realize that; that's a bit higher than I thought, that's quite crazy. And yeah, it's only 70% of the population who speak only English at home, which is interesting. So you have, like — yeah, 22% of the population speak another language at home, which is, like, a large amount of the US. Yeah, yeah — it's a very varied place. I mean, that's, like, the whole allure of the US, right? There's people from all different walks who've come together, and all that craziness. The point I was trying to make, I think, is that I feel hopeful about this news for
Astral. Like, they were going to have to make money at some point, and they have been such an unbelievable asset to the Python community, with what they've built. And I feel like there are so many open source projects that start out really well but then fizzle out, or don't improve much — like, literally, other Python packaging projects — and then we have this splintering of the ecosystem. And, like, everyone complains about open source funding, or open source not getting funded, and this path — a company raises VC money, puts out incredible open source, and then gets acquired for a lot — like, it obviously has downsides, but people are — or some people are — making it out to be a more worrying path than it seems to be. And I don't know, I feel like it's going to keep being good. And uv specifically — I think it's just going to keep getting better. uv is going to keep getting better, and it's not going to be killed off tomorrow, it's not going to be killed off in a year. It's going to be good. Okay — so the way I see it, it's like a
self-perpetuating prophecy, where VCs invest in open source because, with open source, it's relatively easy to make a big splash and get, like, a few thousand stars on GitHub, and that seems like traction to VCs. So they invest in open source. They also have a history of companies that have done well in open source, so they invest, right. And then, when you're a VC-backed company that has had money thrown at you, and all you're doing is open source, you're probably not living, like, a crazy lavish lifestyle — because you're open source, and that's not really those people's style. On the other hand, you're not spending all your time looking for customers at all, because you're open source: they come to you. That's literally the point — you're making the thing for them and they'll come to you, and the community is actually almost overwhelming more than anything else. So you're not spending all this time looking for customers; you can spend that time shipping, and you can have much smaller teams, so you spend less money. So that bit's self-perpetuating already — you've already got a little whirlpool inside this whole big circle. And then, if you're
good at writing software — which you'd hope, because after some time you're going to get good anyway; if you put enough grind in, you're going to get good — if you're good at writing software, then other people will see your software and be like, holy shit, these guys are good at writing software. And because you're an open source team that has to do almost no logistics other than write software — they don't sell anything, they don't have any marketing people, they don't have any sales people, they hardly have any ops people at all, right; they might have one person for a team of 30 who organizes all of that stuff, and everyone else is engineering, or in the engineering teams — because you don't have all of that extra company apparatus that most companies need, you're actually really lean. Yeah. And people then look at you as this really good engineering company, because you'll ship way beyond the number of people you have, because you're very engineering-heavy. And then — I guess the crux of it is, people will see that all the stuff you're doing is really good, and they're like, holy shit, these people could do the same stuff but for us; we should pay them money, right? Right, right. Like — and they're building some genuinely critical infrastructure that OpenAI's developers and customers need and use and rely on and make better; we should definitely bring that in-house. And honestly, these guys are shipping so much, they must be killers — we don't get that sort of performance from teams we have at double the size. Right, right, right. And so these people seem very mission-critical, and they're external, and at some point some exec signs it off: buy them for 200 mil, and everyone's happy as Larry, you know. Yeah. And so it's self-perpetuating, because then the next open source company that comes along, the VC's like, oh my god, the last one I just made 200 mil on — let's go. I wonder — yeah, I feel like it's also interesting to
maybe think about, like, the counterfactual, right? Could something as good as uv is today — and it's really uncontroversial that uv is, like, really good — yeah, could something as good as uv have been created in another world where none of the VC downsides exist? Like, could there have been a team of this caliber working on something so intensely for so long under some other model — broadly more community-driven, or more donation-based? It seems really unlikely. Like, everyone's always saying we need to solve open source funding, and then the best next thing we can come up with is some kind of donations thing where, if you're lucky, a project gets, like, 100k a year — can maybe pay two people and buy some stickers. And that's just a completely different — like, if that's the best other thing we can do, then obviously we don't get uv.
Yeah — so Zig is a really interesting one there, where they are profitable; they make enough money. I think the way they fund it is donations, but also some sponsorship, and then also they do some contracting, I think, for other companies. But all of this is distraction from what they're meant to be doing, and they only really have a couple of people working on it full-time, right — actual full-time team members. And so, if you're building something on Zig, what are you paying for? Or what are you — I guess, like — staking your flag to? You're staking your flag to: Andrew Kelley, the creator of Zig, is going to keep on coding for another 20 years, right, and is going to find a good successor when he's done. Yep, yep. So it is a bit tough. But it's definitely a different angle — because, on the other hand, would you want to invest in a programming language — like, specifically a programming language — that you thought was going to get bought by a competitor? Right, yeah — that would also be tough. So I think it only works for a very specific set of
tooling, where it's kind of an ecosystem play. Yeah, yeah. That's interesting — like, who are the losers out of this uv-OpenAI acquisition, right? Like, if you were a hypothetical competitor to OpenAI who built a bunch of stuff around Python and around uv, are you now, like, worried? That feels like a very reasonable thing — but probably not that worried. Like, I'm sure there were bidders on this. There's the fact that it's double the price of Bun's acquisition. Yeah — that is wild, yeah. I did not realize it was like that. It's so much money. I mean, it's ubiquitous in Python. And, like — everyone at OpenAI... I think they have a massive Python codebase, right? Yep, I think I've heard that as well. Yep, yep — yeah, I heard it's a massive monorepo. So if you're contributing to that, and there's the option to buy, like, your number-one piece of tooling, you're probably going to buy it. And Astral had all the other stuff as well — it wasn't just uv; like, ty, Ruff, all of this stuff, and they had a few other bits as well, I forget what they were. Yeah. And there's a history of this, right? Like, I
think Facebook hired all sorts of open source folks who built powerful stuff that they were using a lot — like Mercurial, and, like, HHVM — I might be getting the details wrong there. Yeah, but it's less common, I think. I don't know — I'd actually like to see which one has more numbers, because I'm not entirely sure about this. But is it more common — maybe someone can, like, message me or comment underneath, because I would be really interested — is it more common for companies to buy in open source, and actually acquire the open source company or the open source developers, like OpenAI did with Astral, for instance? Or is it more common for something like Facebook and React, where React was invented at Facebook — like Atom was invented at GitHub, right? Right, right. Like, which one's more common? Yeah, which one is more common — because then they went on to do Zed. So I think it's probably less common for something to be invented in a big company, and that's actually, like, a mega bullish signal on the state of the company — if stuff is being invented there that is used outside of the company — because I think that type of thing is not super prioritized. Yeah, yeah, totally. Yeah, yeah — interesting,
interesting to chat through this, and appreciate all the context. Yeah — I feel like, I don't know how useful our ramblings and our thought leadership actually are to people, but I think people like this type of stuff. I mean, people outside of tech just don't understand open source. When I talk about my work — and maybe you can chime in as well — when I talk about my work, I'm like, yeah, so for the last, sort of, three years, 90-odd percent of the code I've written has been given away for free. And people are like, what? And I'm like, yeah — totally, yeah — community, and getting more people onto a platform, and empowering developers and stuff. And people are just like: that makes no sense. Yeah, that makes no sense. Actually, one of my favorite moments of this happening
was a Twitter interaction involving Guido van Rossum, the Python creator, right. I think one of the Monty Python people — I forget who it was — posted on Twitter saying, like, hey, I'm playing around with Python. And then Guido replied: actually, Python's named after Monty Python, and I've spent the last, I don't know, 40 years of my life building it and giving the language away for free. And then the Monty Python guy replied, like: how do you make money if you give it away for free?
I mean, it's fair — like, it's a fair question. But I guess, as we saw with — yeah, I don't know — with AI it's so confusing; like, for instance, the numbers are so much bigger. Like, vLLM — no, not LLVM, LLM... LLVM... what is it? Do you mean the — the French one? vLLM, sorry — so many acronyms. vLLM is, like, an inference stack that I think a lot of people use — super popular, came out of a university, UC Berkeley, I think, right? I'm pretty sure it did, yeah. And — yeah, like, if that was a VC-funded company... oh, there are VC-funded companies that are taking that sort of angle: like, ZML is, like, a French version, and they have taken that angle. They're like, we're going to build the best inference stack, we're going to raise money, and hopefully at some point it gets used by everyone and then we sell to a big lab, I guess. I guess that's — yeah, yeah, yeah. But I don't know if the window for all of this is shrinking or
increasing. Because, yeah — to be bought by a lab, or to be bought by anyone, I guess you have to show that you're better at this stuff than they will ever be; that you have that experience, that competency, right. One interesting thing, maybe, with this is: if we are at a time where, you know, some people say software engineers are done, software engineering is solved — it's interesting that, clearly, software is still valuable enough that a big lab like OpenAI is happy to spend 700 million, or at least hundreds of millions; I don't know what the confirmed number is, but I feel like it'll be something like that, right — it's over 200 and it's less than a billion, right. That is still a big amount of money, even for a company like OpenAI. And what are they acquiring? They're acquiring people, right — and, yeah, maybe technology, but it's software and software people. So clearly there's some value left. And it's — okay, okay, okay. So, for people who are painting a
very bad picture I'm just saying this is a good but people have been taught when we talk about
this for ages right like literally the last sort of like three ever since I've known you right we've
been taught about this where yeah as the models get smarter um the bit underneath the models is
commoditized and the last bit that the models can't do like or or don't have the the ability to do
without someone next to them who does know how to do it, right. There is a distinction, because I think the models can do a lot, but they need probing, they need prodding in the right way by someone who is actually an expert. Those experts become immensely more valuable. And so, like, you saw for Bun it was sort of around 10 million per engineer. I don't know how big Astral was, but I can't imagine they were 70 people, so, yeah, if it is 700 million then it's a lot
more per engineer. And I remember being told a couple of years ago that the average price per engineer was around one and a half million. So the price per engineer, to me, in these companies that are getting acquired, is going up and up and up, because these engineers have some knowledge of a stack that is very, very valuable. Yeah, that's why, that's why I'm like, Git, man, Git's not going away. Except when it does, yeah. Nice. No, it is interesting to talk about,
and I think you're right, this is on people's minds. I always just, I'm like, um, I feel like,
back when I worked at a company and had like internal information about what was actually going on
seeing like the public speculation on hacker news about something you knew very well about it was
just like so completely different like it just felt like so stupid to see these comments and people
being convinced that like some evil bad thing was about to happen or whatever and you just saw
the internal narrative was so much more boring or so completely different so that's why I'm like
I feel like this podcast it's most valuable I think in real terms when we are like oh we did a
thing this is what worked for me and it's like least valuable when we're like oh yeah let me tell
you what the strategy is behind this acquisition. But I think you're right, yeah, yeah, that probably it is, I don't know, some entertainment, it's like our reality-TV equivalent of podcast production. But it's weird. Okay, so
you might think of this as reality TV but like I think a lot of people feel very strongly about
this type of thing because this is our livelihood it's our career it's like something that we're
that we're trying to get better at and improve at and like the skills that you want to learn are
changing. And I had this amazing interaction with Kent... Kent Dodds, I don't know how he says his last name, you know, the JavaScript guy, Kent C. Dodds, yeah, that guy. Amazing interaction with him on Twitter where I'm like, stuff's moving, and he's like, yeah, I have no idea what to teach at the moment, right. And it's like, he's an educator and he's struggling to work out what to teach people, because he doesn't know what's going to be defunct in like six months, six weeks, six days. Totally. But as an educator he sits in a really interesting spot,
because people are probably always going to need to be educated. I think education was actually quite low down on Anthropic's list of things that are going to be automated away. Yeah, which kind of makes sense to me. Oh, this reminds me of what I was going to talk about, sorry. Speaking of these Anthropic reports, there was a really cool one that came out a few days ago called 81k Interviews. Did you see this? No, go on, point me at it. Everyone should, everyone should look at this, it's really, really fascinating, because
this is some research Anthropic did, I think maybe last summer or something like that, and they wanted to research how people are actually feeling about AI: is it useful, are they worried, what are they using it for. I'll send you the link, and it has beautiful visualizations as well, this report. But instead of just looking at transcripts, which are obviously maybe hard to interpret sometimes, like if someone didn't continue a conversation, is that because Claude helped resolve the thing for them, or the opposite, that Claude didn't help them and they stepped away because it wasn't useful, right? It can be hard to tell. So they built this AI interviewer where, when using Claude, you would get a little pop-up that was like, hey, can we interview you about your AI usage, and then it was an interactive thing where you and Claude would discuss your experience using Claude. Which, essentially... oh, it is from December. Did you see one of these pop-ups? I didn't actually. I regret not doing it, because I could have been part of the largest qualitative study in history, but I didn't do it. I think I saw it, but I never replied to any of that stuff. Yeah, so not last summer, it was last December, and published
um this week but the insights are fascinating like really really fascinating um and I would
encourage everyone to check out uh the report and it's just a beautiful experience I'll highlight
just two things that I found really interesting. One is that this tension of, AI is useful but I'm also worried about it, doesn't exist within distinct groups of people. There aren't people who are really worried about it and then people who think it's really useful. Everyone has the tension, basically, the tension exists within every person. They're both finding it useful and they're also concerned about various aspects of it. So I thought that was really, really interesting. And then
the second one is that the less Western you are, and the maybe less economically developed your country is, the more you're excited about the opportunities and potential from AI. So if you're in sub-Saharan Africa or in India or in Latin America, you're more excited about AI, more excited about the potential, more excited about what it can do for you. And if you're in, especially, Germany, the UK, South Korea, Australia, you're less excited about AI and more worried.
That's unintuitive, but it actually kind of fits with my own lived experience,
actually, yeah. I think it's fascinating, really, really fascinating. So yeah, those are like the two things I thought were cool from that report, but yeah, it's really, really
incredible work, yeah. Can we end on something else that's a little bit off-piste? Go on. So I'm gonna bring it up, but there was a study that was done
relatively recently, I'm just gonna find the date for you, about the effects of radio frequency radiation on mice, and another one that came out on the effects of radio frequency radiation on rats. I'm just gonna find the date for you, and this is a little bit off topic. When was the date? I think it was really quite recently... okay, I'm struggling with the date, but let's go with recently. Ah, okay, so it came out in 2018. It's part of the National Toxicology Program in the US, an NTP report of the toxicology and carcinogenesis studies in mice exposed to whole-body radio frequency radiation at a frequency of 1900 megahertz and modulations GSM and CDMA, basically stuff used by cell phones. So the thought process is like: I wear my AirPods for quite a lot of the day, am I giving myself brain cancer due to radiation from my AirPods, due to like 2.4 gigahertz radiation? Oh no, it's not even a whole gigahertz, basically the lower-level radiation, the stuff in the megahertz range, is kind of similar to Bluetooth and phones and so on. Yeah, like, is that a concern?
And what they found in this 2018 study, and I need to see if it's been replicated since, because this was the only one I found of it, and it did pop up because someone posted about it on Twitter, so I need to actually go into it a little bit more, so hold the thought, see if it's valid. But the study on mice and the study that came out on rats, so they were two different studies but they came to the same conclusions, was that this type of radiation, in huge doses, something like 10 times the amount you get from your AirPods, and consistently for the whole day, actually extended the mice's and rats' lives, by something like the low 20 percents, above 10 percent, which... that's not where I thought this was going. Yeah, which was actually wild. So you're saying my AirPods make me live longer? Thank you, Tim Apple. So I'm saying, currently, I am less worried about wearing AirPods, but I will check back in with you as my thought progresses. That's beautiful. I was about
to hit you with like a quote from Chernobyl, the "3.6 roentgen, not great, not terrible" one, but no. Damn, that's some positive news to end on, love it. Yeah, we'll see if
it gets... we'll see if there are some more studies and stuff like that, and I need to do some more googling, but from first impressions I am less worried about AirPods than I was for about 10 minutes. Nice. Yeah, awesome, good place to wrap it. Look out for a LinkedIn PDF slop presentation shortly from me that breaks down this transcript into some catchy slides. I love that you've got off Twitter and you're like, I must engage with people, so where can I engage? LinkedIn, the greatest social network. LinkedIn and Strava. Yeah, Strava's good, actually, Strava's good. Cool, all right, happy Friday. Bye, dude, love it. Bye bye, big love.

