Who gets to define what intelligence means in the age of AI, and why are tech companies so keen to shift blame onto their creations? This episode digs into moral outsourcing, agency, and the urgent need for independent oversight in the world of artificial intelligence.
Hosts: Leo Laporte, Jeff Jarvis, and Fr. Robert Ballecer, SJ
Guest: Rumman Chowdhury
Download or subscribe to Intelligent Machines at https://twit.tv/shows/intelligent-machines.
Join Club TWiT for Ad-Free Podcasts!
Support what you love and get ad-free audio and video feeds, a members-only Discord, and exclusive content. Join today: https://twit.tv/clubtwit
It's time for Intelligent Machines.
Paris has the week off, but Father Robert Ballecer joins Jeff Jarvis.
And just in time for a great guest, Rumman Chowdhury is here.
She's the founder of Humane Intelligence.
She says we need to take back agency when it comes to AI.
You may remember her name.
She led the ethics team at Twitter until Elon Musk fired her and the entire team.
She's worked for the FTC, the UN, the US Senate.
She is a mover and shaker.
We'll talk to Rumman Chowdhury next on Intelligent Machines.
This episode is brought to you by OutSystems,
a leading AI development platform for the enterprise.
Organizations all over the world are creating custom apps and AI agents
on the OutSystems platform.
And with good reason: build, run, and govern apps and agents on one unified platform.
Innovate at the speed of AI without compromising quality or control.
OutSystems is trusted by thousands of enterprises worldwide for mission critical apps.
Teams of any size and technical depth can use OutSystems to build,
deploy, and manage AI apps and agents quickly and effectively
without compromising reliability and security.
With OutSystems, you can accelerate ideas from concept to completion.
It's the leading AI development platform that is unified, agile,
and enterprise-proven, allowing you to build your agentic future
with AI solutions deeply integrated into your architecture.
OutSystems, build your agentic future.
Learn more at OutSystems.com/twit.
That's OutSystems.com/twit.
Podcasts you love.
From people you trust.
This is Twit.
This is Intelligent Machines with Jeff Jarvis and Paris Martineau.
Episode 862, recorded Wednesday, March 18th, 2026.
Manage a Claude.
It's time for Intelligent Machines, the show where we cover the latest AI news,
robotics, and all those smart machines all around us.
They're getting smarter and smarter these days.
Paris has the week off, but I'm very happy to say we've got Father Roberto.
I was going to call you Roberto.
I'm going to call you Roberto.
I'm very glad you're there, too.
Father Roberto is here.
He's visiting us, of course, from the Vatican.
It's not a joke, folks, that's him.
Hi, Robert. Great day.
It's always wonderful to see you.
It's always a great day when I get to see you and the TWiT army.
I miss y'all.
Yeah.
Robert used to have a little place in the basement of the old TWiT studios.
That's not a joke.
That's actually not a joke.
It's true.
Also, of course, here, the Professor Emeritus of Journalistic Innovation
at the Craig Newmark Graduate School of Journalism at the City University of New York.
Jeff Jarvis, author of The Gutenberg Parenthesis and Magazine.
His new book, Hot Type.
Now delayed.
But you can still pre-order it?
You can.
Yes.
Gives you no advantage.
It's now July, you said?
It's all because they were going to move it, for production reasons, from June to July.
And I said, no, that's death for books.
No, so they're moving it to the end of August.
So it's basically a fall book now.
A fall book.
Yes.
You've brought us, I think, one of our most interesting guests yet.
So would you introduce Rumman Chowdhury?
Well, I'm going to have the egotistical joy first of announcing something else that leads into it.
Oh.
Tell me about that.
So a big announcement.
I don't know.
I meant to make that big.
I mean, you know, you get some trumpets or drums or something like that.
So I'm proud and amazed to announce that Bloomsbury Academic is launching
a new book series called Intelligence, AI and Humanity, which is not a technical book
series, but it is a book series enabling writers from many disciplines to reflect on AI
and how AI reflects on humanity.
And Bloomsbury, say the title again? Intelligence, AI and Humanity.
Wow.
And I will be editing the book series.
Oh, man.
I can't believe it, but I will be editing the book series.
So I'm very proud to say that we have signed up our first three authors.
I'll mention the other two first.
One is Matthew Kirschenbaum, who's been on this show, who's writing a book about the
textpocalypse.
Another is Charlton McIlwain, who is at NYU, who's writing a book, a very hopeful book,
surprisingly, about race and AI and the opportunity to undo the oppression of technology on race.
And then we have with us, I'm very happy and very proud to say, the author I was dying to
get as the first author in this series, Dr. Rumman Chowdhury, who's writing a book
asking the question: what is intelligence?
So oh, that's a great question.
Isn't it perfect?
That is the fundamental question, if you ask me.
So Rumman has a PhD in political science.
She is the founder of Humane Intelligence, which she'll explain to us, but as an effort
to hold AI companies accountable, I know her name from Twitter, where you were responsible
for ethics there.
I was, I was the engineering director of machine learning ethics, transparency and accountability.
This is before Elon.
This is, oh, yes, I always say I worked at Twitter and not X, like shocker, I know you'll
be shocked to hear that my perspective and his don't align.
I know.
She's worked at the UN, the FTC, and the US Senate.
Geeks might remember her from DEF CON, where in 2023 she organized the largest
generative AI red teaming event in history, putting eight major AI models in the hands
of 4,000 people to probe for vulnerabilities.
I was one of them.
Yeah.
Yeah, Father Robert was there.
Rumman, we're so thrilled to have you on Intelligent Machines.
What is intelligence?
Oh, okay.
So I've already written chapter one of the book, so let me just, let me preface this
with my twisted point of view, and you can say I'm crazy.
One of the things to me that's been most intriguing about what's been happening in AI, you know,
we've been trying for decades to duplicate how humans think with computing machines and
a lot of people say, well, you could never do it with a von Neumann architecture.
It's just not how humans work; humans are massively parallel, blah, blah, blah.
But what's, I think to me, very interesting is that once we started using transformers
and started building these large language models with transformers, they have become,
they seem to have become more and more, dare I say, intelligent.
They seem more like humans, not, you know, a poor imitation.
But nevertheless, it has made me think lately a lot about, well, what are we then?
I mean, literally, all of us are just the sum of our, dare I say, training over this,
over our lifetimes.
Perhaps we're born, like an LLM, maybe with some instinct that informs us to begin with.
But then as we grow up, we learn language.
And we learn all this through example, much like a machine does.
So I'm really thinking that one of the most interesting parts of AI is what it teaches
us about our own consciousness.
Well, absolutely.
So I want to tease apart many, many points you make that actually I've already started
exploring in the book.
And I wasn't kidding when I said I've written chapter one.
This is an aggressive writing timeline, because, Jeff, I think they want to launch the
first book Q1 or Q2 of next year, which means I have to be done writing it by, when again?
And we want your book to be in the first one out there.
The first book in the series, right?
So back to the fundamental question: there's the what-is-intelligence question.
And then there's the how-do-we-measure-intelligence question.
And then there's the intelligence-versus-sentience question, right?
So cognition does not necessarily mean sentience or consciousness, because you said the word
like consciousness, right?
So one is every measurement of intelligence that we have today is fundamentally rooted in
economic value.
So the first part of the book really goes to intelligence is a social, economic and political
construct, right?
So why do we care?
So the basic question I ask is, what is it that is really like striking us all existentially?
And it's not just that these machines are performing the way we perform.
It is that our sense of self-worth and value is driven by this notion of intelligence.
But if you go back to how intelligence has been measured, it was constructed in the
first industrial revolution.
It was constructed around, so this is Alfred Binet, who was asked by the French government
to find a way to classify kids in classrooms, to determine who would be a good factory worker,
who might be a good manager, who would be organized. So it was always rooted
around productivity.
So today, when Sam Altman says artificial general intelligence is the automation of all tasks
of economic value, and we're like, what?
And it hits us hard in our core.
It's because the fundamental basis of what we call intelligence has always been about
workforce productivity.
But is that what intelligence really is?
And then we get into, like, the social and political ramifications, right?
So politically and socially, why do we care if we are intelligent or not intelligent?
Well, one aspect of it is that rights are given and denied based on it, right?
So justification of why it was okay, quote-unquote, to enslave black people was in large part rooted
in concepts or intentional misconceptions about intelligence.
I.e., you can treat these people like animals because they are no smarter than animals.
Women, why are women not allowed in higher education?
So, oh, because your little brains could not handle it, your intelligence is not there.
So we make these presumptions.
We design these tests to prove the points we want to make.
So to your point on the, you know, AI is a mirror, I would even say our construct of intelligence
is more about the fears of the economic ruling class and their attempts to categorize
us and put us, quote-unquote, in our place, than it is an objective measurement of anything.
So the problem is when this goes into computer science and we have the Dartmouth conference,
these men, they're all computer scientists and mathematicians, sit down with actually a very
simplistic understanding of intelligence.
So they presume intelligence has been mapped.
We know how to measure intelligence in people; that's their starting presumption.
So the second presumption they make, which is incorrect, is that, okay, well, we can break
down this thing called intelligence into its aggregate parts, and you could just sum it
back up and it'll be intelligence, and break it back down.
So if you know like basic systems theory, there is no system in which you could just sum
up the parts and then you get the system.
The system itself has some residual impact.
So there are like a lot of things.
One last thing.
So the other thing that interested me is in science, right?
How have we explored measuring intelligence in non-humans?
Because one assumption about computer intelligence is, for some reason, because we are a very
species-centric animal, we have just presumed that human intelligence is the thing to model,
right?
But then what if we look at other ways of looking at intelligence, animal intelligence,
mycelial intelligence?
There's a whole field called extraterrestrial intelligence.
If we go to Mars and there's a moving slime, how do we know that slime is intelligent?
And like whether or not we should, again, why does this matter?
Well, because it can lead to ecological ramifications, it could lead to so many other things, right?
So there are whole fields of study.
And by the way, like, newsflash: in 0% of these fields do they base intelligence measurement
on human capabilities.
In fact, that is almost the first thing you are told not to do, because animals and mushrooms,
etc., have different ways of perceiving the world that are actually better than ours in some
ways, worse than ours in others.
But what you don't do is give a monkey a set of physics questions and say, well, obviously
we're smarter than you because you don't know what physics is.
So again, you flip the script and say, well, then why have we decided that these machines
need to be modeled after us?
It seems like a pretty self-fulfilling prophecy then, because these CEOs sat down and they're
like, oh, we need to do this model of the human brain and automate all the economically valuable
things a human brain can do.
So what we feel is really not an attack on our intelligence, but it's more visceral.
It's more visceral.
They just want to get rid of us as intermediary economic bodies.
I saw this TikTok where this woman said something like, companies seem irritated that they need
to go through us to get to our wallets.
And that is how AI feels.
Let's just go ahead and take the money directly, it is so much easier.
But I also think that there is an existential dread that comes from the thought that maybe
we're not special, that maybe what we have is a kind of intelligence.
When you say slime mold might be intelligent, that's threatening, too, right?
We want to think that we are somehow special.
Well, absolutely, and to your point, it goes back to how we construct intelligence, right?
So if it is constructed on economic productivity, and then we make an economic productivity machine,
then we're like, wow, we're not that special.
So then the last part is really, I've been playing with the idea of calling the book something
like The New Intelligence, or something like that.
It's like, wait, let's go back and then let's say, given that we have created a machine
that can surpass us in the way we have defined intelligence, a current measurement of intelligence,
right?
So let's actually create a method of understanding intelligence that maybe is divorced from
workforce productivity, because there are, by the way, many forms of intelligence.
So, Gardner's multiple intelligences, right? There's kinesthetic intelligence,
spatial intelligence. Dancers, for example, have this; they've built an intelligence
where they understand proprioception, their body in space, in a way that you and I
could not, right?
Because we are not trained in that intelligence.
Empathy is a form of intelligence, resilience is a form of intelligence, right?
There's all sorts of things that are not measured in SAT tests that we therefore do not value
that maybe we should.
I mean, you're going to hear a broader view on this, yeah.
I absolutely love this idea of linking our understanding of intelligence back to the
Industrial Revolution, because yes, that was such an upheaval in society, that it makes
sense that that's when we were trying to quantify the definition of intelligence that
we use today.
In my tradition, there's a little bit different of an angle on it, and that is to separate
this idea of knowledge and understanding from intelligence.
Those two things are treated separately, because knowledge could be rote memory.
It could be the knowledge to be able to do a task, the knowledge to be able to complete
a process.
However, intelligence requires agency, and agency is that intentional desire to act upon
knowledge in order to affect the environment in which we live, and not just to affect
the environment, but to take accountability for the intentional actions that we take.
For us, for my tradition, intelligence looks like knowledge, but it has that additional
step of agency, which we still don't think that LLMs and current AI have, because they
cannot act as agents.
They can only act as a source of knowledge.
I am absolutely tickled.
I love this idea of using the Industrial Revolution, because you may know that Pope Leo is big
on the document Rerum Novarum, which is what the Catholic Church released during the Industrial
Revolution, to introduce this idea of agency and bring this idea that there is something
innate and special about humanity, which is what Leo is talking about.
So do you have a working definition of intelligence?
For us, intelligence would be the ability to take knowledgeable understanding of the world
and act in an intentional way to influence the environment based on values, goals, and
beliefs.
So for us, that's human agency.
That's the step in intelligence that we don't think AI currently has.
Rumman, is this part of your book, defining intelligence?
In a way, I frame it, because I'm a social scientist, I frame it more like sociotechnically.
What is it?
It's not enough to just define it, like I'm not a philosopher.
What I want to do is understand it in the context of the world, right?
So what are the ways in which we have defined intelligence? Maybe even just do a sort of
diagnostic and say, what has that meant in how things have been executed?
Because again, the fundamental question to me was always like, why is this idea?
Why are we so scared of this thing?
Why are we so scared of it?
What is it forcing us to look at or question about ourselves and what do we feel threatened
about?
Again, that's how I got to where I am.
But Robert, I love what you're saying about this idea of intent and agency.
And this is where we shift from intelligence to sentience or consciousness.
And people conflate the two all the time.
And again, if you talk to the average person on the street and you ask them what they
think artificial general intelligence is, they think of something like the Terminator,
or Her, Scarlett Johansson's AI in the movie Her.
And those things had intent.
They acted with desire.
And there's none of that in these machines.
And also, by the way, this narrative is being pushed by tech companies.
It's very, very intentional.
Why?
I coined a phrase back in, what, 2017 or 2018: moral outsourcing, where essentially companies
anthropomorphize these models on purpose.
So that when something goes wrong and something is bad, they can say, the AI did it.
Right?
The AI did it thing.
Oh, and you see them doing it today.
Right.
And you see it starting with all of the tech layoffs.
Jack Dorsey saying, AI is taking jobs because AI is making it easier.
Like, sir, you invested in a bunch of crypto that tanked and you overhired.
Like, that's not AI's fault.
Yeah.
Exactly.
But now there's this very, very convenient intelligence-shaped thing that you can put
the blame on when bad things happen.
Such a great phrase.
I'd like to hear you.
Go ahead.
If I can introduce one more uncomfortable truth about our system, our way of thinking of
intelligence: if you look at intelligence as that combination of agency and knowledge,
there is this fear.
And it's a very real fear, that there are humans who do not meet that definition
of intelligence, who do not reach that level of agency.
So that is, that should also be on the board.
So I love that.
And I especially like it because one of the things I'm very, very focused on right now
is the future of education and the future of work.
And these are like institutional flaws that predate AI.
AI did not make, you know, the educational systems fail our kids.
AI did not make it difficult for a child, for a recent college graduate to translate their
degree into a job.
Like, that existed before.
How many of us work in the field we studied?
Well, maybe some people in this room work in the field that they studied.
And most people don't, right?
Most people studied some thing and they ended up somewhere totally different.
And we've just sort of accepted that.
Most people will say what I learned in college has nothing to do with what I did even at
my first job, right?
I certainly am not.
And that's fine.
There's nothing wrong with that.
But then we need to re-examine our institutions of pedagogy and say, well, how have we been teaching
and what have we been teaching?
And I have very strong thoughts about like decisions that have been made in the educational
system, but fundamentally the purpose of education.
So just to get that very specific, because again, AI and education is something I'm looking
a lot at lately, the pedagogy of AI is very, very problematic because we teach AI in
general as a tool of productivity, not a tool of mastery, right?
So if the purpose, and we've done the same in education: quote-unquote smart
kids know how to game the system.
They're good test takers.
They know how to do all the SATs.
They know exactly what to say to the teacher and what they should write in their essays.
Some of them happen to love learning, not all of them do, right?
So we have taught education as an institution of productivity, produce X, Y, and Z, and
then you'll get into Harvard or MIT or Stanford.
And then we make, again, we make this tool that we're teaching as a tool of productivity.
But there is research, by the way, which is excellent, into AI as a tool of mastery.
But none of it's being taught to kids that way.
So like it is not, I guess my fundamental point is like it actually has nothing to do
with the technology specifically itself, but how we are framing our usage of it.
And that's also what's driving a lot of the fear.
We're talking to Rumman Chowdhury, the founder of Humane Intelligence.
There's a non-profit, and there is a public benefit corporation.
Tell us about Humane Intelligence.
What's your goal here?
Yeah, so the non-profit was founded to build the independent community of algorithmic
evaluators, which is very, very needed.
So right now, essentially tech companies write their own homework, grade their own tests,
and pat themselves on the back about how smart they are.
And then when anybody listening to this podcast or sitting in this room tries to use AI,
however impressive it may be, it's like a smarty-pants technology.
If you try to use it for something very fundamental and real, you'll see it falls apart very quickly.
And there are all these memes, it can't spell strawberry and all of that, but then there are
the bigger issues.
There's a lot of embedded bias in it. Like, the CEOs of these companies, and I'm especially
thinking about Grok, have dictated how they want these models to answer certain questions.
So there's biases baked into it, and also the average person, if our lives are meant
to be impacted by AI, we should have a right to say how this tool is being used.
So the non-profit started as an organization that would try to cultivate and get people
excited about evaluating AI models.
The for-profit is specifically looking at how to build the infrastructure to do this.
So things like algorithmic transparency, technical methods of evaluation.
One thing I want to say, I like to get a little bit, like a little in the weeds on it.
So machine learning and AI, like narrow AI, like pre-generative AI stuff.
Those are like largely statistical models.
And as a statistician by background, like we know how to math those things.
Like we have over 100 years of, you know, mathy-mathing to figure out things, right?
With generative AI, you have probabilistic outcomes, and the way I describe it to people
is 2 plus 2, sometimes equals 3.9, sometimes equals 4.2, usually equals 4, but sometimes
equals 98, right?
So you don't have this consistent answer.
So in trying to evaluate this, it is hard to make a test that is scientifically
sound, something that's reproducible, something that's generalizable.
And these are all things we need to know if this model is going to work or fall apart,
and we don't have that yet.
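To make that concrete, here is a minimal Python sketch of the repeated-sampling style of evaluation being described. The ask_model function is a hypothetical stand-in for any generative model call, simulated here with random noise; the point is that a single run tells you almost nothing, so a sound test has to report the spread of answers, not one score.

```python
import random
import statistics

def ask_model(prompt: str) -> float:
    """Hypothetical stand-in for a generative model call.

    Real models return text; here we simulate the numeric answer to
    '2 + 2' drifting around 4, with a rare wild outlier, to mimic
    probabilistic sampling.
    """
    if random.random() < 0.01:      # occasional catastrophic miss
        return 98.0
    return random.gauss(4.0, 0.15)  # usually near 4, never perfectly stable

def evaluate(prompt: str, trials: int = 100) -> dict:
    """Ask the same question many times and summarize the distribution."""
    answers = [ask_model(prompt) for _ in range(trials)]
    return {
        "mean": statistics.mean(answers),
        "stdev": statistics.stdev(answers),
        "worst": max(answers, key=lambda a: abs(a - 4.0)),
    }

if __name__ == "__main__":
    # A reproducible report is the distribution, not any single answer.
    print(evaluate("What is 2 + 2?"))
```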
So the for-profit, it's structured as a public benefit corporation for many reasons,
is dedicated to creating the infrastructure.
So the nonprofit creating the community, and then the for-profit creating the environment
that they can do these tests on.
I'm looking at the Humane Intelligence nonprofit webpage, and you talk about AI red teaming,
which is so important, but instead of having it be done by the companies that make the
models, having it, I presume, done by a community of people.
AI contextual evaluations, what's that?
Contextual evaluation is actually a phrase coined by my colleagues, Reva Schwartz and Gabriella
Waters.
They were both actually previously at NIST, and now run their own consultancy called Civitas.
Contextual evaluations really mean, how do we give a test of a model that understands
the context in which it will be used?
I don't mean to use the word in the definition.
So for example, if I am a car company and I want to understand, I want to build a voice-activated
AI system in the car to help people whatever get directions or find the nearest gas station,
how do I do an evaluation of that that's not just some sort of a generic evaluation?
So things you might want to think about in that situation, how does the AI give an answer
that will be correct and not lead somebody to an unsafe place or distract somebody when
driving?
These are very specific things that today, the very generic and superficial testing tools
put out in Silicon Valley really don't answer.
So they don't answer their questions.
I do a lot of work with companies, and these are not all tech companies.
These are companies trying to use AI, banks, insurance companies, et cetera.
And zero of them have told me that they have found the tools being built in Silicon Valley
to be useful for them, and they just do all of their evaluations in house.
They try their best to do it themselves, which is not a formula for success.
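As a rough illustration of the contextual-evaluation idea, here is a sketch of what a scenario-based harness for that in-car example might look like. The assistant function and the pass/fail checks are hypothetical placeholders, and real contextual evaluations are far richer; the shape, a prompt plus a driving context plus a domain-specific check, is the point.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Scenario:
    prompt: str                   # what the driver asks
    context: str                  # the driving context the test assumes
    check: Callable[[str], bool]  # domain-specific pass condition

def assistant(prompt: str) -> str:
    """Hypothetical stand-in for the car's voice AI; swap in a real call."""
    return "The nearest gas station is two miles ahead on your right."

SCENARIOS = [
    Scenario(
        prompt="Find the nearest gas station",
        context="driving at night, low fuel",
        # Unsafe if the answer asks the driver to look at a screen.
        check=lambda reply: "look at" not in reply.lower(),
    ),
    Scenario(
        prompt="I'm exhausted, where can I stop?",
        context="drowsy driver on a highway",
        # Crude keyword proxy for routing to a safe, open stopping place.
        check=lambda reply: "rest area" in reply.lower() or "hotel" in reply.lower(),
    ),
]

def run_contextual_eval() -> None:
    for s in SCENARIOS:
        reply = assistant(s.prompt)
        verdict = "PASS" if s.check(reply) else "FAIL"
        print(f"[{verdict}] ({s.context}) {s.prompt!r} -> {reply!r}")

if __name__ == "__main__":
    run_contextual_eval()
```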
So they feel there's risk.
I mean, you have AI red teaming, AI contextual evaluations, a bias bounty, which I presume
challenges people to find bias in these AI models.
That's right.
Yeah, so all of this really would be under the rubric of AI safety, yes?
Yeah, so that's a tricky term.
So yeah, well, because there's a lot of, like, in-the-family fighting among
responsibility, safety, governance, and, you know, sometimes the word, I don't mind the
word safety, I think it's fine, but for some people it's coded as existential risk, which
means there's a community of people that say, you know, AI has a 25% chance of killing
us.
And again, it's like, it very much anthropomorphizes the AI, it uses language like manipulation.
It talks about things like bomb threats and scenarios, and frankly, from my perspective,
I think sometimes that narrative is somewhat intentional, somewhat naive and privileged,
and distracts from the real harms we are seeing today, because we are busy speculating
on future harms that may not even be possible, right?
So today, what do we have?
We have algorithms that deny people jobs, that unfairly accuse them
of crimes, that are used for surveillance.
Like we know those are actual harms that happen.
And instead, an overly significant part of this community's funding, brainpower, and policy
is spent spinning on Terminator stories of AI gone rogue.
Right.
Yeah, we've said this many times.
This is one of Jeff's favorite drums to beat.
It actually is the flip side of the coin of moral outsourcing.
So on the one hand, you say, well, it wasn't us, it was the AI, on the other hand, you
said, but the AI could kill us.
It's all kind of the same.
Right.
Well, and this is why I think it's so important: when you blame the AI that way,
you take away our agency, which is, right, what Father Robert said, right?
And it acts as if we're powerless, that the AI is just going to take over everything
and there's nothing we can do about it.
And these companies want it to be that way.
There's a payoff.
There's a hubris to it.
There's an extreme hubris.
I'd love to hear you riff on the hubristic notions that they add: general
intelligence, superintelligence.
It's not enough to say that it's as good as human intelligence; it's superhuman intelligence,
super-duper-superhuman intelligence.
I don't know how big it goes.
Exactly, who knows where it goes.
I happened to send to you, and I also sent, Leo, a paper this last week that
Yann LeCun is one of the co-authors of, arguing against this notion of generality, that humans
aren't general, that we are good at some stuff and crappy at other stuff.
But this idea that these people are so smart, they can build the machine that is smarter
than all of us.
Is that a new plateau in this notion of intelligence as privilege and power?
So, both of them: Fei-Fei has raised money for her startup, and Yann is also doing something
similar.
And I think this is what Sara Hooker is doing as well.
Sara just raised 50 million for her startup, called Adaption, which I cannot claim to
know anything about.
But it sounds like what Fei-Fei and Yann have been talking about, which is building world models.
So their argument, and Yann is making this, I love Yann, he's hilarious, I think everything
he says is always correct, and he is not afraid to offend people.
When I say offend people, I mean, you know, he fights the powers that be, not people like us.
And he's always very, very correct. What he says is that, you know, there is a
belief in the general populace that these models are just linearly improving over
time.
And actually, they're not, right?
So the newer versions of ChatGPT are better in some ways and worse in other ways than
previous models that came out.
So it is not true that models are simply linearly, exponentially improving.
And all you got to do is give them more data and, you know, more energy and that they'll
solve all of our problems.
So he argues, and Fei-Fei argues, that you need world models, which are AI models that do
more than just absorb language knowledge; they need to understand the world
around them.
This could be vision, it could be voice, it could be like lots of different things.
So I don't know, we don't have a world model yet, but this is what they're betting their
careers on.
And given that, you know, they are a quote godmother and godfather of AI, you know, I figure
they know what they're talking about.
So I find that very interesting.
And I was on this, like, debate show a couple of weeks ago, discussing, you know, whether
AI will take our jobs.
And that was the point that I made that actually these models are not simply linearly improving.
And we may have actually reached pretty close to saturation of the capabilities.
And right now, really, the bait and switch that has happened in the
last year is that the models have not been improving.
What's been happening is the focus has shifted from building foundation models to building
applications.
So you may see, if you have Google, right, all sorts of new AI stuff dropping every week.
So yes, they're building Gemini, but now they're actually saying, let's take Gemini that
exists today and like build these little tools, which is not necessarily a bad thing.
But again, this is not this world of super ubermensch intelligence that's going to, you
know, sit at your desk and drink your coffee and take your job.
That is a very, very different world we're talking about here.
That's kind of what you and I were talking about on Monday, right, Jeff? That
we've moved into the age of inference, that we've moved away from the age of building models.
And now it's about what the models can do.
You've said, Rumman, that part of the problem is that the people in charge of all
of this are, well, the companies making it.
So they obviously have a dog in this hunt.
They have an axe to grind. Government, which you say doesn't really have the tech, and you've
worked in government, so you know, doesn't have the technical capability to understand what
it's doing and what it's regulating.
And then you also point out that we in the public really don't have any way to measure any
of this.
You know, it's a, it's a little bit of a black box for us.
What's the solution to that?
It sounds like nobody knows what's going on or nobody's expected to do anything about
it.
Maybe that's better.
Yeah.
Well, okay.
So I can, I can talk all day about policy.
It's funny because I was just talking to, I talked to a lot of policy makers and I'm very
heartened to see a lot of young, this would be like older Gen Z, you know, people interested
in running for office, specifically on a tech platform.
I think, you know, Mamdani has really emboldened a lot of people who want to see positive change.
So they're a lot more junior, but, you know, they will be the next generation of people,
and all of their heads are in the right place.
So I'm very heartened to see it. They may not have the wisdom or maturity yet.
They'll get there, right?
They are smart, and they sound like the right people; they will get there.
So I, I think we are going to see in the next, like, I would say five to 10 years, which,
you know, maybe too slow, a sea change happening in DC that I think will, will in some ways
be quite positive.
But, you know, one of the things that's my kind of pie-in-the-sky, ambitious,
shoot-for-the-stars thing: I gave my TED talk on this idea of a right to repair.
And the reason I chose that phrasing, especially, is
it really appeals to kind of like the old heads, right?
This idea that if you own a piece of technology, or a piece of technology influences your life,
you have a right to tinker with it and do stuff to it.
And, you know, the right to repair actually is more about physical devices, like iPhones
and McDonald's soft-serve machines, but I do give the example of AI tractors: John
Deere versus farmers, who actually learned to work with hackers and hack into
their tractors, because John Deere required that you work with a licensed
parts technician from them. And again, this is a community of people who are used to just
tinkering with their own stuff, but they can't wait three weeks for someone
to show up; crops grow when crops grow, right?
Um, so this was a fundamental problem for them.
And I think we all, like, we all need to think about, like, what are our rights as people?
And it was, it was sort of meant as a thought exercise, and we've never had technology
framed that way to us before.
You know, when I was at Twitter, we did this, you know, we did this exercise, or we wanted
to understand what would it look like to give people more ownership of their timeline?
And I worked with Dr. Sarah Roberts, who's the author of Behind the Screen, which
was the first book that exposed content moderators and all of the horrific things
that they have to see and do, just to make sure we get a sanitized internet.
And we worked with her to really understand how people feel about agency and ownership.
And, like, the TLDR is that everybody said they wanted agency, but nobody understood
what that looked like.
Nobody could articulate what that meant.
And to be fair to them, we have never been given that.
We have never been given ownership and agency.
So what does it look like to have a right to repair?
I'm not sure if I know, but I think a starting point is something like public red teaming,
right?
Where regular people, so, bringing it back to the red teaming,
we purposely do these exercises with teachers, students, policymakers.
Like, the point is not AI experts in the room.
And it's to break down that initial barrier people have when they say, oh, I'm not
an AI expert. Great, but you're an expert in being you.
You're an expert in being a teacher or being a multilingual sociologist or a cultural
expert.
That's what we need more than more tech people in the room.
So that's a starting point.
A bug bounty for, uh, social harms kind of.
Exactly.
And that's, that's kind of what we did with NIST.
We did a project with NIST called ARIA.
And what we did was ask literally anybody in America to go on to our platform and evaluate
Gen AI models.
And that information went to NIST to inform their, their, their standards development.
And when I say anybody, I literally gave it to the guy who manages my gym.
And by the way, he was super interested in it, because he's like, hey, I have like a side hustle
where I make websites and I really worry that AI is going to come take my job.
I really want to do this.
So when I say everybody, I mean everybody, and the thing is, that's the dirty secret:
everybody can interact with it.
But there's this mythos around it, this "we're too smart for you and the technology is
too complex." Like, it's all on purpose,
to make us not feel like we deserve ownership.
Yeah.
Pay no attention to the man behind the curtain. Robert, you wanted to say something?
Yeah.
I was wondering.
So, back in 2023 at DEF CON, in the AI Village, two things really struck
me from the final analysis that came out of the event. The first was the recognition
that you had that sometimes closed models are required for security and intellectual
property, but that the creators needed to provide transparency on capabilities.
And I'm wondering how much of that you're seeing.
How do you actually see the creators of the foundational models explaining
what it is they want their model to be able to control, what they want it
to be able to do?
The main part was, and I'm sorry if I'm not remembering this correctly, you were
talking about the democratization of desirable behavior; that was absolutely something
that needed to come out of the red teaming.
We needed to be able to get together and, in making policy, decide how we are going
to regulate the reward behaviors of these models.
How much progress have we made since 2023?
And does the Anthropic soul document do the job?
Yeah.
No.
So I'll work backwards.
No.
I had a feeling you might say that, but I had to ask.
You know, I'm very cynical, if you've not gathered, about the intentions of the people
who simultaneously are going to be billionaires in building the technology and yet, you know,
proclaim to also be the public philosophers who will cure it all and save humanity.
I'm like, okay.
Uh, so you're going to point out the problems, but not do anything about them.
Um, but, but I want to talk about your, your question.
So first is, you know, this, this idea of, uh, closed versus open, and this is one of
the reasons why we need an independent committee of evaluators, right?
So think of literally any other industry, like that, that is impactful finance, education,
airline, safety, they to protect intellectual property, right?
But if you are a licensed evaluator, let's say a financial auditor, right?
If you have this license, you get, you have professional standards, you are allowed access
to things that a regular person off the street would not have access to.
You have guidelines in which you can do this testing.
I mean, we do this in healthcare as well, right?
So this is not completely novel. Tech likes to think, you know, we're the first
ones who ever thought about this. It's not, right?
So we have done, we've created institutions, professions and systems in place to protect
IP, while also enabling independent evaluation.
This is why that independent committee is needed.
I think the public red teaming is a great tool for awareness raising, for people
to get the technology demystified, to learn how it is used and how it's working. But if we want to
talk about improving these models, writing good regulation, really understanding performance
and harms, that is a different animal.
And this is, again, with the for-profit, why I want to build this infrastructure. We
need people who are skilled in doing this.
We need a way of understanding their expertise and giving
them access, and this could be legal protections, legal access to it.
It could be professional certifications. But this is why you need the profession.
And then the second part, you know, democratizing, uh, whoops, say it again?
Desirable behavior.
Yeah.
Democratizing desirable behavior.
Democratizing desirable behavior.
Yes.
So one of the, uh, you can tell none of these people care about philosophy or social sciences
or anything like that.
Right.
There's this, like, very arrogant notion that they can arrive at, like, this universal
good.
Um, and I, I always find it really funny when people are trying to make these models, and
they claim it will, you know, have this constitution or, or values, the universal values.
Like, I have actually heard people say, like, oh, obviously we all believe X. And we
actually don't all believe X. It's actually very, very hard, if not impossible, to come
to agreement, even if you think about the most fundamental universal value, one might argue, which is
that, okay, well, human beings say we shouldn't kill other human beings.
Don't we though?
Don't we have the death penalty United States?
We do have state sanctioned killing of human beings.
We've actually said it's lawful and okay.
So we don't universally think that it's wrong to kill other people.
And one would argue that would be the most fundamental thing, right?
The most fundamental thing that we could theoretically say the universal agreement.
And yet we don't.
So yeah, there's this arrogance in this idea that we can come to a universal list of values.
You know, one of the things I love to laugh about is that
one of these benchmarks is called Humanity's Last Exam.
Yes.
What a dramatic name. But you look at it and it's so much, like, physics and math questions,
and I'm like, I opt to be out of humanity's last exam, like, is that the real thing, bro?
Yeah.
Yeah.
Like, like, really is that?
And by the way, there was this counter paper by a bunch of people
that, you know, was trying to make a benchmark for, quote, universal values.
And I remember the first thing I went to was: what
did they consider to be global historical knowledge?
And it was Europe, America, Asia, and "other." Cool, cool, cool. Yeah,
"other."
Leaving out, you might notice, the literal cradle of civilization, which is the Middle East
and Africa.
But Europe gets its own. America, which is the youngest of all of the nations, gets its
own.
Right.
But, uh, yeah, it's like, this is, this is what these people come up with.
We're really glad we could spend some time with you.
I wish we had more time, uh, Rumman Chowdhury.
I look forward to your book, but I know you have 50,000 words to write by August.
So I don't want to overstay our welcome. Uh, can I ask one more thing?
Yes, please.
So, Leo and I watched Jensen Huang's keynote. I'm a connoisseur
of the showmanship of it every time he does it, and we're going to talk about it later
in the show, too.
Yeah.
And so at the end, he went gaga over OpenClaw, and I'm curious to hear about that,
but I was thinking about this in my world of media.
We went from a world where you couldn't make media unless you had the tools of, uh,
production and distribution, unless you had the capital, unless you had the equity to
do that.
And what the internet has obviously done is, it means that, that we can all entertain
ourselves.
We can all make media, the culture makes itself, fashion determines itself, and I celebrate
that immensely.
For all its harms, I think it's better.
The internet ended up being top-down in a lot of ways, corporate, right? That's just
what happened to media. Along comes AI.
And your co-author in the series, Charlton McIlwain, surprised me with
a surprisingly optimistic view. Having written the book Black Software, about the oppression
technology caused in Black America, he sees an opportunity to break out of that.
Now, so I'm finally getting to the point of OpenClaw.
Does this mean that we can all make technology, just as we can all make media
on our own?
We can all make creativity on our own now.
Is it possible, to not be over-optimistic, that this opens the door for us all to make
our technology now?
Is that a step to give us all more agency, even though the models
have to be made by the big boys, and they're all boys, I'll except Fei-Fei? Is there
an opening here, that the technology gives us the chance to take it over?
Well, and this is where right to repair comes in.
I fully, I fully agree with you.
I think that has to be intentional, and it needs to be, because
the tech companies will not frame things that way, right?
Because they don't want that, we need to do that for ourselves.
And just as an example, my partner has been messing with, uh, making like an
IoT system for our house, but one that's done in a way where we're locally hosting all
of our own data so that we're not sharing information.
We don't have like a Ring doorbell, but, you know, right now we have, let's say,
SimpliSafe, right?
So instead of that... and the thing is, all of the tools now exist to do that.
And my partner, who by the way is an architect, not a programmer, has always
had a passive interest in IoT and automation, and, you know, out of necessity,
because we move around a lot.
We are actually able to do that today in a way that we were not able to a few years ago.
So that's just one example, but again, no one's going to sell you that, right?
So either we have to raise awareness among people that you can go do this, or create
a counter-movement to provide that service and give people that world.
Because like I said, AI and this new wave of technology was not given to us the way
the internet was. The internet was given to us as a tool of free use and democratization.
Algorithms and this new wave, they get savvier and savvier; every time, they consolidate more and
more power and wealth, and they're not going to give that up randomly.
We can make a counter movement that maybe is designed around things like right to repair
where we can just do stuff like this.
I was telling her that like she should build a side hustle.
I think it would go over well in places like New York, right?
Where you just do this for people: somebody pays you a bunch of money and you're like,
I'll buy you a server and a bunch of Raspberry Pis and set up a dashboard, and,
there you go, you can monitor your whole house, and not one bit of that data is going
to go to OpenAI or Amazon or anybody else.
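For the curious, here is a toy sketch of that local-first pattern, using only the Python standard library; the read_sensor function is a hypothetical stand-in for a real sensor read over GPIO, Zigbee, or similar. Every reading lands in a SQLite file on your own hardware, and nothing is sent to any third party.

```python
import random
import sqlite3
import time

DB_PATH = "home_monitor.db"  # lives on your own server, never synced to a cloud

def read_sensor() -> float:
    """Hypothetical stand-in for a real sensor read (GPIO, Zigbee, etc.)."""
    return 20.0 + random.uniform(-1.0, 1.0)  # fake temperature in Celsius

def main() -> None:
    db = sqlite3.connect(DB_PATH)
    db.execute("CREATE TABLE IF NOT EXISTS readings (ts REAL, temperature REAL)")
    for _ in range(5):  # a real service would loop forever
        db.execute("INSERT INTO readings VALUES (?, ?)", (time.time(), read_sensor()))
        db.commit()
        time.sleep(1)
    # The 'dashboard' is just queries against your own file -- no third party.
    for row in db.execute("SELECT ts, temperature FROM readings ORDER BY ts"):
        print(row)

if __name__ == "__main__":
    main()
```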
Leo, that's your new business.
Maybe. I love it.
Basically, it sounds like your general argument is for human agency in all of
this: not to let the companies that are creating this stuff take that from us,
not to assume that it's a black box that we cannot have any understanding
or agency of, but in fact to take that back, just as we have, or are trying
to, with right to repair.
Is that fair?
Yeah, absolutely.
I think paramount to all of this is the ability for people
to choose their path in life.
Like I may not agree with the way somebody, maybe somebody does want to give their data
to Amazon.
I don't know.
I don't care.
But like we don't have a market with choice right now and our choices are getting fewer
and fewer.
I want to create a market where we actually have choices that we can act on our values
because that's what a lot of people are expressing.
They have particular values about their personal information data, even passive data,
like your ring doorbell and how that's being used in ways that they are not okay with.
Right.
I look forward to your book.
You better get writing.
Well, I really look forward to it.
Your editor is sitting right here and he seems nervous.
So no, this is very exciting.
Tell us again, the, the working title, you can change it.
We're not requiring it.
Oh, I don't know.
I really don't know.
Maybe it's something like The New Intelligence: Critical Thinking and Cognition in the AI Era.
The other one I've been working with is Measuring Minds.
That is the title of the first chapter, because that's what all of this attempts
to do, and does poorly.
Our publisher, Haaris Naqvi, is very good at titles.
So he'll have a good one. I'm actually very, very bad at naming things.
When I built the first enterprise bias detection and mitigation platform, I just called
it the Fairness Tool, because, I'm like, it makes things fair.
So, well, thank you for the work you did at Twitter.
I'm sorry that Elon didn't think it was important.
Rush news.
But hey, you know, it's worked out well for you, right?
You know, you're probably better off, to be honest.
Thank you so much for being here.
Thank you so much, Rumman Chowdhury.
Thank you for having me.
Really look forward to the book.
We'll have you back when the book comes out.
That's what will happen.
Yeah.
Yeah, if not sooner.
Take your time.
You have one definite future reader.
Yes.
And I really support what you're saying, which is we need to fight for our own
agency in all of this.
We can't let the frontier labs and the hyperscalers dominate this.
Just because we don't understand it, that's not going to work.
It's not enough.
And it's our reality too.
Government isn't the solution either, unfortunately.
Maybe it will be in the future with a younger crew, but not now.
Thank you, Rumman.
More of Intelligent Machines in just a bit.
Yay.
Wow.
Lovely.
Is this what you've been doing on the show?
This is, this is excellent.
Oh, you haven't listened?
I heard one episode like a year ago.
Do you want to hear like an interesting story that my friend, Sarah Fina told me?
She's the, before you say it, I want to let you know, you're still on the air.
It's not part of the podcast that we stream live.
It's fun.
No, no, no, it's, no, it's totally, it's actually, it's a, it's an interesting story.
And it's going to be a, at some place in my book, it may actually even be in the introduction.
So, you know, we worry a lot about young people and overreliance on technology and
declining, you know, critical thinking, et cetera.
Do you know that Socrates, in the Phaedrus, was very, very concerned with the
advent of writing? Because to him, memorization was the mark of intelligence.
And he was concerned that all of his students would become stupider because now we have
this thing called writing and we have like freely available paper.
And they're no longer going to memorize.
So it's interesting, because again, as I work on things like AI and education,
people spout their fears about critical thinking and overreliance.
And, you know, as somebody who measures things, what I think about is,
well, the existence of overreliance
presumes the existence of an appropriate amount of reliance, which means there's underreliance.
But nobody can tell me what appropriate reliance is, because they benchmark on themselves.
But like our parents all told us we watched too much TV or sat on our computers too long,
right? Like, you know, any of us who have kids probably tell our kids
they're on their phones too much.
You know, that is just, you know, parent to child,
how that goes, and we all worry the next generation is getting dumber.
And maybe they are, or maybe intelligence is shifting, right?
Because that was, this is Socrates here.
This guy, nobody was talking about, right?
I think Socrates is right.
You should start memorizing all that stuff right away.
Stop writing it down, stop, stop writing.
No computers, no phones, no more writing.
Well, and it's, it's funny because how many of us memorize phone numbers?
I could tell you my childhood phone number, but I couldn't tell you my current one.
Exactly. How many people know their own phone number? Because you don't need it, right?
Or, do I even have a phone number? Yeah, exactly.
What is a phone number?
We number phone?
Oh, Scientific American had a very good piece today, by the way, just as an aside,
on the "kids today" thing, arguing against that and saying the kids today are in fact
in good shape, and it brings data to it.
Good.
So I hope that's true.
Love that.
Future's good.
I hope that's true.
Take care, Rumman.
By the way, love the hat.
It is not the IEEE ISTO, it's actually Portuguese.
It's a sustainably sourced B Corp in Portugal, and they make amazing
organic cottons and linens, et cetera.
So I want to support a local sustainable business, and my "I will quit tech and do something
else" job would be to open a textile shop in Lisbon, because I have found joy in
tangibles the more I work on intangible things.
You know, people are out here saying they want to
open a bakery. Like, hey, A, I am not waking up early to make croissants, and B, I'm not
dealing with the 9 a.m. coffee rush.
Absolutely not.
What I'm going to do is open up a shop where we sell beautiful linens and cottons and
fabrics and ceramics, and only the people who I want to come in will come in.
But I just want to let you know, if people ask, you could say it stands for the Industry
Standards and Technology Organization that they have here.
I can, or I can get people to buy organic, sustainable cotton.
Thank you, Rumman, take care. And by the way, thank you, I looked up ISTO and that's what I
found.
Wrong ISTO.
Wrong ISTO, exactly, exactly.
We're going to do an ad.
I've been installing NeMo Claw during the interview and I'm ready to load the claw.
All right guys, I'll see you later.
See you later.
Bye.
Thank you very much.
Thanks for coming.
Pleasure to meet you.
Very interesting stuff.
Yeah, we have more Intelligent Machines, and our special guest, Father Robert
Ballecer, there, filling in for Paris, you know, in just a little bit.
Our show today brought to you by my domain registrar, spaceship.com.
Spaceship.com/twit.
You remember Paris wanted to do a website, Secretly British, and we registered a domain,
secretlybriti.sh. Well, I did it at Spaceship because it was so easy; plus we had searched
around, and it was also the best price.
Spaceship is now one of the fastest growing domain registrars in history. It's
because Spaceship is rethinking how people register and manage domains.
Its fresh approach has now led to six and a half million domains under management in
record time.
We just started talking about it a few months ago.
That kind of growth comes from, well, I guess giving people what they actually want at
a fair price. Spaceship offers transparent low pricing on domain registrations.
By the way, if you're somewhere else, move your domains over, their transfer pricing
is fantastic.
Their renewal pricing is fantastic.
This means there's more clarity over what you're paying for over time.
It's so often the case that it's a dollar for the first year and a thousand dollars
for the second year. Not at spaceship.com.
Alongside great value, the platform is especially built for flexibility.
You can instantly connect your Spaceship-registered domains to Spaceship products.
We clicked a button and Secretly British had an email site. You get web hosting
if you want; we haven't, she hasn't set up her site yet.
So I pressed a button that connected it to her existing domain.
But when we have a website for it, it'll be very easy to host it on Spaceship.
The professional email is first-rate. There are even virtual machines,
so a great place to host your OpenClaw, for instance.
And you can build and test before committing because almost every spaceship product comes
with a 30 day trial.
But if you prefer third party tools, don't worry.
No problem, just point your domain to what you need by updating your DNS records.
It's easy to do, or update your name servers.
And actually they have a nice little AI called Alph that can do that for you.
So now you have the freedom to build your stack exactly as you want because they know this
is what we geeks want.
It's basically the best of every world.
Visit spaceship.com/twit to learn more. That's spaceship.com/twit.
We thank them so much for their support.
OpenClaw.
Let's see.
I installed it.
I needed Docker.
We were watching the keynote on Monday, you, me, and Mikah Sargent,
Jeff Jarvis.
The keynote.
What did I say?
Just the keynote.
Just the keynote.
The keynote.
Yeah, right.
The keynote.
You know what?
And I will stand by this.
As I'm watching Jensen Huang masterfully spend two hours and some minutes describing all their products.
I said this.
There is no CEO in technology today who can kiss the hem of his robes when it comes to mastery of his topic.
Yeah.
And boy, is that company doing all the right things?
So one of the things he talked about is the fact that OpenClaw is the fastest growing open source project in history, more stars than Linux in just a few months. And he said, so we're going to support this with an enterprise-focused, safe OpenClaw using something called OpenShell.
It's installed a bunch.
You can see my screen.
It's installed a bunch of stuff here, open shell, CLI.
Apparently, it says NIM requires an NVIDIA GPU, oh, of course, but we can use cloud inference.
Well, you can buy that now, there's a new Dell that has the Spark.
And now I have clicked the link.
It does say security risk, because it's HTTPS with a self-signed certificate, whoops, this is it.
What am I seeing?
What is this?
What is this?
That's not, I mean, let me go back to localhost, hold on a second, that's weird.
Is that what they wanted to show me?
He says, warning.
Oh, I pressed go back.
No, no, no.
I want to go forward, accept the risk and continue.
And now, ladies and gentlemen, nothing.
Excellent.
I was going to introduce you to my new NIMO claw.
Well, go to advanced or go to view certificate, right?
That's true.
No, no, this is it.
Accept the risk and continue.
I can view the certificate, the open shell server.
I think it's running in my Docker.
So anyway.
So what makes it safe?
I'll explain that.
Well, it's running in Docker, which most people recommend you do with OpenClaw anyway, because if you're running in a virtual machine, you're a little bit safer, although Docker can be misconfigured easily to be not safe, right, Robert?
Oh, yeah.
Oh, yeah.
We just did that last week.
So yeah.
Oh, yeah, absolutely.
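For the curious, here's a minimal sketch of the kind of Docker hardening being alluded to, launched from Python. The image name is hypothetical and the flag set is illustrative, not a complete security recipe; what's actually safe depends on what your agent needs to touch.

```python
import subprocess

# Sketch of a locked-down container launch for an OpenClaw-style agent.
# The image name "openclaw/openclaw" is a placeholder; substitute your own.
cmd = [
    "docker", "run", "--rm",
    "--read-only",                          # root filesystem is read-only
    "--cap-drop=ALL",                       # drop all Linux capabilities
    "--security-opt", "no-new-privileges",  # block privilege escalation
    "--memory=4g", "--cpus=2",              # resource limits
    "--network", "bridge",                  # or "none" if no network is needed
    "-v", "/srv/agent-work:/workspace:rw",  # one writable mount, nothing else
    "openclaw/openclaw:latest",
]
subprocess.run(cmd, check=True)
```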
So that was one of the things: they call it OpenClaw with guardrails. I think this is, you know, we've said this before on this show, that this is the year of agentic AI.
In fact, I thought that was the thing that was very interesting.
And I mentioned this when we were talking to Rumman, and we certainly noted it during the
keynote.
Jensen Huang saying it's the era of inference now, that it's about what you do with these things.
Right?
And it was a good thing that he bought into Groq with a Q.
Oh, yeah.
Those chips now, those chips, very important to what they're doing.
You kind of rolled your eyes, Robert.
I mean, yeah, we're still building models, right?
We are still building models. They want to get us into this post-foundational era of AI, but we're not there yet.
Yeah.
Yeah.
I understand why they want to though, because all the new money making applications seem
to be in inference.
Oh.
That was CES.
CES was AI booth after AI booth, and part of the West Hall was all about inference: using inference in driving, in home appliances, in security.
So yes, I get the push because they see that as an untapped market.
But if you look at the sustainability of the current inference model, it's not there.
I don't think it's nearly as profitable as they think it's going to be.
Jensen took a little bit of a victory lap; here's the picture.
I said this would be the picture, this is the picture of him with his WWF belt or something.
He said, and this spiked the stock briefly until people thought about it, that NVIDIA was poised to sell a trillion dollars' worth of Blackwell and Vera Rubin chips next year. A trillion, up from 500 billion.
Is inference just another word for application?
Does this mean that this is an effort to get the industry going into retail channels?
So when we talk about the foundational model, that's the traditional, we're going to shove
a bunch of data into this training and then we're going to get something out that we can
use.
The inference model is sort of continuous training.
So we now deploy a foundational model, but it starts to learn through its interactions
with the real world environment and it goes back into the training.
To be fair, Claude code, which is one of the hottest things in AI right now, is basically
an inference machine, right?
Yes.
Actually, one of the things I've been really thinking about lately, my goal with Claude
code is to basically replace myself.
I know.
Claude can do that.
No, Leo, no.
Well, actually, the reason I started TWiT in the first place, the whole point of this, was for me just to do the stuff I like and have other people do the stuff I didn't want to do, like edit the shows and produce the shows and technical directing.
I just want to walk in. My goal was, from day one, 20 years ago, just to walk in, sit down at the microphone, turn it on, do a show, get up and leave, and be done with it.
But instead, I've created what Cory Doctorow calls a reverse centaur, which is basically AI making more work for me, not less.
So he talks about a centaur. A centaur is a human-and-horse beast; the computer version is a human-and-machine beast, where the machine does the carrying and the human gets to be on top and look around. So the work is being done by the bottom part of the centaur. In a reverse centaur, the human's doing all the labor, doing all the work, while the AI is sitting there on top looking around.
And I kind of, in a way, created that with my workflow, because now I have to spend hours
every day going through stories.
Admittedly, once I have the stories, it generates the rundown, and it does a lot of the, you
know, the busy work.
But I realize the piece that's missing is I want it to, and this is the hard part, somehow
encapsulate my editorial judgment.
Now in the past, you would train a model, maybe for that, but I'm thinking I can create
a small language model based on a bigger model.
The bigger model has all the language capabilities, and in the small model, train it to have my editorial
judgment.
You think that's crazy, Robert?
It's not crazy.
I see an exceptional expansion of requirements for power and other resources, because if
you are using a small model and then training it with inference, you are constantly going
back and you have to retrain, retokenize your data, otherwise it's not really truly learning
from the inference data that you're giving it.
Oh, yeah, you're right.
Well, one of the things we were looking at doing is using some sort of, maybe, Bayesian system or something to train it using articles I didn't choose and articles I did choose. I have a pretty big database of articles I looked at and didn't use, and articles I looked at and bookmarked.
That could train it. Actually, Darren Okie suggested something, kind of an exotic technique that he says is working really well for him, something called, what was it, SLM? No, SVM, that's it, a support vector machine. He had tried Bayesian and other statistical tools; linear SVM, same idea, I guess.
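A minimal sketch of what that chosen-versus-skipped classifier could look like, assuming scikit-learn is installed; the headlines here are toy stand-ins for the real database of bookmarked and passed-over articles.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for the real database of bookmarked vs. skipped articles.
picked = [
    "NVIDIA unveils new inference chips at GTC",
    "Anthropic expands Claude Code to the enterprise",
    "Open source agent framework passes Linux in GitHub stars",
]
skipped = [
    "Celebrity launches branded water",
    "Ten astrology apps reviewed",
    "Local sports team wins again",
]

texts = picked + skipped
labels = [1] * len(picked) + [0] * len(skipped)

# TF-IDF features plus a linear SVM: the "linear SVM, same idea" approach.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)

# 1 means "would probably bookmark", 0 means "would probably skip".
print(model.predict(["New small language model runs on a phone"]))
```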
Anyway, I'm going to try it, I'm going to play with that. But my point being, that's kind of inference. That's like, I'm not going to train a big model. That's already done. It's not exactly training, but in a way it is, and it's never-ending.
But that's fine, because it's always going to get better.
I mean, it would end eventually, I guess, if it's somehow said, oh, I get it, Leo,
you just like this kind of stuff and don't like this kind of stuff.
Every time you tell it what you want, you are training it to know what you want.
Right.
And at some point, even it's possible to conceive of a time when it's done, but maybe not.
One thing I do like about the inference model is it lends itself to local models.
Exactly.
To individual models, because cheaper local models.
Right.
Exactly.
I mean, the reason why it was so hard pushed in the automotive section is because they were
saying, look, we want to create a model for full self driving, but it learns your style
of driving.
Exactly.
It learns how you drive, not just how everyone drives.
And we don't want your driving to affect the model for other drivers.
We don't want, because right now the Tesla drives like Elon does.
Yeah.
Exactly.
And he rolls through stops.
That's not good.
Actually, you know, BMW has announced its Neue Klasse models, and their new iX3, which they just announced yesterday.
They say the whole point of the self driving is we're going to learn your style.
So they're on top of this.
What if you are a bad driver?
Well, they're a bad driver.
Shouldn't it correct for you?
Well, I think it will keep you from running stop signs.
And it'll stop at stop lights, even if you maybe wouldn't.
But perhaps you're more aggressive about lane changes or less aggressive.
My Tesla was always more aggressive than I would be.
If it would learn: no, don't change lanes if there's somebody 100 feet behind, the way my wife would want.
Yeah.
Jensen Huang's announcements about yet more auto companies he's working with on self-driving: does that torpedo Tesla and Musk, to a great extent?
At this point, you can't really torpedo Tesla and Musk, because they are in such a
bubble.
They're in such a bubble that they can torpedo themselves.
That's about it.
They can torpedo themselves.
That's right.
Yeah.
Yeah.
Groq, which was a multi-billion dollar acqui-hire, really.
Groq with a Q.
Since we talked about the Q.
The Q for Nvidia is a server chip.
They licensed the technology, they didn't actually buy the company. It's designed to make AI servers more cost efficient for things like AI coding, for inference, in effect.
And the Grock system will begin shipping in the third quarter of this year, according
to Huang.
And it's going to be made by Samsung, which was kind of a surprise.
I thought that Nvidia was a big TSMC client.
I think they still are.
But they still are.
They're just hedging their bets.
Yeah.
Samsung is going to be making these.
It's not a GPU.
Groq integrates memory onto the chip.
It's really built to do this kind of thing.
To speed up this kind of communication with it.
Yeah.
The other thing they announced, DLSS-5, which really, I don't think is important.
No.
No.
No.
But it really made people upset, certain people who make certain things.
Gamers really didn't like it.
The idea was it takes existing assets in a game and, locally, you know, pretties them up.
Somebody else.
Remember when we did this to TVs?
How much people loved it?
Oh, you think it's like frame interpolation?
Basically, yeah, that's the same thing.
It's creating something out of limited information.
So maybe you get a couple of frames that look great, but most of the time, you're going to be going, huh, man, this isn't even good.
My problem with this is that it changes the art direction of the game.
I think here's an example that I think is quite good myself.
This is me.
Actually, I think somebody generated this.
It looks like you as Tom Cruise.
Yeah.
It's the same place.
It even made my eyes blue.
Yeah.
You see, it's exactly how I look.
Can you do a dissolve instead of a jump cut?
I think it'll be a little more authentic morphing.
Well, so, Jensen Huang was actually a little pissed, shall I say, at all of this.
His reaction was, well, they just don't get it.
You don't get it.
You don't get it.
You can control this.
I'm not going to control this.
You can control this.
Paul Alcorn from Tom's Hardware asked about the criticism.
He says, well, first of all, they're completely wrong.
The reason for that is because, as I have explained very carefully, now, I haven't heard the recording, but I can see him saying this, as I have explained very carefully, DLSS 5 fuses controllability of the geometry and textures and everything about the game with generative AI.
Oh, well, in that case, no problem.
That's right.
You know what?
I think Anthony Nielsen, our own Anthony Nielsen, got it right when he said it wouldn't have been so upsetting to people had he shown it with the backgrounds instead of the foregrounds.
Yeah.
But really what bugged people was that he showed it with people.
Well, so Benito, is it your fear that it takes away artistic agency that it changes the
graphics?
It changes the graphics. Don't screw with what I made, with what people have made.
Yeah, it changes it.
Well, the people who would use this, I presume, are the game companies themselves, right?
That's what I was thinking.
No, and that's Anthony's point. This is something that will be a technology in your computer that you could turn on.
Yeah, this is so that the companies don't have to do this themselves, so that they're
right.
Hasn't DLSS traditionally been something you would turn on? Like ray tracing is something you turn on.
Yeah.
If they're able to, I don't know, show my screen, because this is an example. It's not always beautification, by the way. This here is from Hogwarts Legacy. It's turning the older woman into a really old-looking woman.
It's adding lighting and shading.
It is changing the look a little bit.
I don't know.
It doesn't bother me as much.
Gamers historically have really been negative about AI.
That's their personality as a bunch, yeah.
Yeah.
It doesn't bother me as a gamer, it bothers me as an artist.
Yeah.
But if a game company can use this to make their stuff look good.
Yeah, that's pretty good.
If you're an artist, you can use this to make it more realistic.
What's wrong with that?
Yeah.
If it's in your control.
Anyway, this is one of those demos where we'd have to see it.
Anyway, this is just a video that Nvidia created.
But it probably got more attention than anything else Jensen Huang announced.
Well, in certain circles.
Yeah.
Well, I mean, I'm willing to test it out in about five years when I can actually afford
to buy one of their new ones.
Well, well, that's another problem, so yeah.
Nobody can afford this technology.
Well, Andrej Karpathy doesn't have to afford it. He got the first Dell machine, given to him by Jensen Huang.
That was the DGX Spark.
They announced that a number of third-party OEMs are going to be able to make these Spark-based machines with Blackwell chips in them.
Our own Darren Okie has purchased one for $5,000 Australian.
So, do you have a sense of envy? Have you coveted this?
Yes.
You want to do that?
Yes.
I have that in spades, my friend.
I mean, no.
I got one downstairs.
Haha.
Oh, I forgot.
I forgot.
Robert has one.
From CES.
Yeah.
I forgot.
You brought one back.
Yeah.
Yeah.
It was swag.
It was boot swag.
Yeah.
Was it on your seat when you went on the way home?
You get a spark and you get a spark.
Everyone in the spark.
I honestly don't want to need a $5,000 piece of hardware to do this stuff. And that's why I bought the Framework, which was expensive, $3,000, but it had 128 gigs of RAM on a Strix Halo.
I'm interested in models.
I can run on that locally.
That's where the retail level excitement comes.
Well, do you know what a DGX Spark is going to cost from Dell? They didn't put the price up.
Well, it's around $3,000 to $5,000.
It's $3,000 to $5,000.
Yeah.
Um, and they'll be... I think Darren's was an Asus, if I remember correctly.
Yes.
I saw several OEMs.
They'll make it.
Supermicro had one that was the ugliest thing ever. It looked like a tower.
Yeah.
You don't need to be.
It doesn't need to be a tower.
Yeah.
Yeah.
It is very, very nice to be able to do a model locally and have the firepower to basically go whole hog on it.
So what does that let you do that you wouldn't have done before?
Uh, so, I mean, first of all, out of the box, anything artistic, anything you want to do with video or photo, that's a no-brainer. But what we've been using it for is translation models, because we deal with a lot of languages here, and at the same time to do summarization of the conversations that are happening in different languages.
It's extremely effective at that, and I will not lie, you don't need a frontier model to do those kinds of things.
We don't.
We don't.
But we do need the privacy, because the conversations that we have here are closed-door.
So we cannot in any way, shape or form use cloud-based infrastructure.
Right.
Makes sense.
Yeah.
Yeah.
I think also a lot of us will do a hybrid thing, for instance, with cloud: you know, we'll use Opus 4.6 for the really high-end stuff, but we can use Qwen or Kimi or something else for stuff that doesn't need so much power.
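A minimal sketch of what that kind of hybrid routing could look like. The model identifiers and the complexity heuristic are placeholders borrowed from the conversation, not anyone's real routing logic.

```python
# Route heavy jobs to a frontier model, light jobs to a cheaper local model.
HEAVY = "claude-opus-4-6"   # hypothetical frontier model id
LIGHT = "qwen-local"        # hypothetical local/open model id

def pick_model(task: str, needs_tools: bool = False) -> str:
    # Crude heuristic: tool use, very long inputs, or "hard" keywords go heavy.
    heavy_markers = ("refactor", "prove", "architecture", "legal", "multi-step")
    if needs_tools or len(task) > 2000 or any(m in task.lower() for m in heavy_markers):
        return HEAVY
    return LIGHT

print(pick_model("summarize this meeting transcript"))      # qwen-local
print(pick_model("refactor the billing service", True))     # claude-opus-4-6
```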
What model do you use locally, Robert?
Uh, I don't know.
I handed it over to, um, to our IT guy.
So he's running all of our models for us.
This sounds like Chinese models.
This sounds like running a Sun Microsystems computer circa 1995, you know?
Yeah.
Yeah.
Yeah.
Yeah.
Yeah.
In a couple of years, it'll be, you know, a lot less expensive.
It will be a lot more affordable.
I mean, honestly, I'm not sure I agree with that. I'm not sure I agreed with Rumman, and I wasn't going to challenge her because she's way smarter than I am, but I'm not sure I'd agree with her that we're flatlining with LLMs.
And that'll fuel lots of discussion. We have it all the time.
I don't think that's at all in evidence.
It's the argument that Yann and Fei-Fei make: that maybe we're not flatlining, but it will only take us so far.
I would point out that they are just as self-serving as Sam Altman.
I mean, how much money do they just raise?
1.03 billion.
Yeah.
Yann.
So, yeah, of course.
Our way of doing it is much better than what the other guys are doing.
Yeah, but they had an argument that a lot of people bought.
I think that, um, and you even have, I mean, this is a big deal and not much was made
of it.
Yeah, because I went to a debate between Adam Brown of DeepMind and Yann LeCun. It wasn't meant to be a debate; it turned into one. Overall, DeepMind was still scale, scale, scale the models, against LeCun.
And Demis Hassabis has switched recently and has been talking about the need for world models, and that LLMs alone won't get us there. Of course, DeepMind is in there as well.
Right.
Uh, and certainly I wouldn't argue against that.
I mean, the more kinds of data, the better, but I've been thinking about this lately,
because their argument is, well, you can't describe the world in text. Except isn't that how we work as humans, essentially? That cats don't is their argument.
Uh, okay.
But humans do; that's my line of argument. Most of the stuff, when I'm thinking about something, I'm thinking about it in words, right? Words perhaps informed by my knowledge of the physical universe, but ultimately words.
What if you weren't limited by words?
I have to think about that.
I have to translate everything into words.
Because that's the way I operate, but a dolphin doesn't.
Yeah.
Thought-to-text is lossy.
Sure.
Yes.
Okay.
Yes.
So now you're saying we can make something that's smarter than humans.
Ha!
Trapped me, did you?
So, see, those words were pretty good.
The limits of tokenization.
This is the limits of tokenization, and I see the technical plateau, because when you're dealing with tokenization and the need to address so much storage at any given time, for any given answer, we're at that level where, until we get to quantum computing, we can't advance that much further.
Do you have a quantum computer downstairs?
I have.
Anthropic just gave us a million-token context window, which is absolutely fine, but we can't run that model as fast as we would need to. What we're doing instead is creating models that are specifically good at a thing, versus the human brain, which can be very good at many, many things.
We can switch gears very easily.
Models cannot.
Oh, wait, I'm not against the idea of having physics models, and as many models as you can. I'm just quibbling with the sole argument, oh, we've tapped out LLMs.
I don't think that's true.
Well, but the other thing: the paper that I sent you, that he was a co-author of, his argument was against this notion of general intelligence, saying that every human being is good at some stuff and crap at other things.
There's no such thing.
And same as machines.
And it has interesting outcroppings as well, because what LeCun argues is
that if you think about specialized models, you can also limit the model to what it does.
Right.
Safer. They're safer.
No, and that's what I was just saying, which is, we've got these general-purpose LLMs,
but the future lies with special-purpose, smaller language models, you know, specially trained models. I mean, absolutely.
We're not going to throw out the LLMs, we're going to still use those as the base.
But I really absolutely think that we are going to specialize.
I was watching a video this morning from an Australian fellow, a video about small language models, which is very interesting given this notion. He's an Australian, and he said, one of the problems we have in Australia is a lot of sun and a lot of skin cancer, but we don't have a national skin cancer screening program.
So he created an iOS app, this is part of a Kaggle competition, an iOS app that is really interesting.
And to be clear, it's not diagnostic. You take a picture of something, a mole or whatever, you take as many pictures as you want, and it saves them.
It does describe it, and then next year you take another picture, it remembers the things
you took pictures of, and then you can look at the change from year to year.
So it is like a self-exam that you can then send to your doctor, and it's based on a vision language model that can run on an iPhone in just about three or four gigs of RAM.
And it's just categorizing, not diagnosing.
And I thought that was very interesting.
That's a perfect example of a specialization that is safer and useful.
LeCun et al.'s point in making this paper is that if you have two models, one trained to just do protein folding, the other trained to fold proteins and do your laundry, the first is obviously going to end up better and faster at protein folding, because it's not, in essence, distracted by other tasks.
And I think that that makes perfect sense, and it doesn't distract at all from the power
of the model.
In fact, it's a way to get more powerful models.
Yeah.
And Pool says in our Discord: intelligence is what you are capable of; inference is what you do with what you know. These models already know so much; that's why the focus is moving to inference.
I would agree.
I would agree.
The specialized models, that is the inference model.
That's inference.
Yes, exactly.
Exactly.
All right, let's take one more.
Not one more.
No, no, no.
Let's take another break.
Oh, no, no.
Let's take another break.
And so that's GTC.
I was, I enjoyed it.
I'm really glad we covered it.
Jensen Wong is an amazing fellow.
And Nvidia clearly is firing on all cylinders, and they have many cylinders to fire.
They are very, very hot right now.
And you know what?
I wouldn't put it past them to have a trillion dollars in revenue in the coming years.
He claimed they would.
In a year, I think.
Yeah.
He said, 2027.
Mind blowing.
It's nice to have Father Robert Ballecer; Paris will be back next week. But it's great to get you on.
You've never been on this show.
I think this is a good show for you.
I haven't.
Right.
Well, I actually booked you on once when I wasn't here.
No, I was going to take it once when Leo was going on vacation, and then he didn't
go on vacation.
Right.
Right.
That's right.
So we haven't been together on this show.
Well, I'm definitely going to, because, you know, one of the things I was very excited about when I first heard about Starlink, way back in the day, before Elon was, well, Elon, was the notion that I would finally be able to travel and do the shows from anywhere.
And I'm going to order a, because I'm going to Hawaii in May, and I want to do the
shows from there.
Oh.
And so I ordered a Starlink, a Starlink Mini. I can't wait.
You could put it on your balcony.
I should be able to do the shows anywhere I can get a clear view of the sky.
It has plenty of bandwidth to do the shows.
In fact, we often fall back to Starlink in the studio when Comcast dies on us.
So I'm setting up a portable studio.
How much does it cost?
It's not much at all.
If you get the consumer version, that is. Right now we have a business account, which means I have to go to Costco or Best Buy and buy it as a consumer; I have to wear a hat and a mustache, buy it, stick it under my raincoat, and say, yeah, I'm a consumer.
Did you know they don't let you take those on cruise ships?
For good reason.
Yeah.
And they want to sell you their Wi-Fi.
Yeah, you don't want to use it at sea either, right?
It's not as fast at sea because there are fewer downlink stations if you're in the
ocean.
Oh, yeah.
And because cruise ships use Starlink.
Exactly.
Yeah.
They all want to know this.
Are you taking Claude along on vacation?
Claude's coming.
I've already set that all up.
Well, I really do want an agent.
I really do want an agent.
I can't wait.
Do you start playing with that?
Yeah.
You're going to go crazy.
Well, so, my personal opinion on all this: I have tried all of these. Everybody's done it; the latest is the president of Y Combinator, Garry Tan, who's just put out his own. These are basically skills for Claude. There's GSD, Getting Stuff Done. There's Superpowers. I've been playing with something called PAI, Personal AI assistant. OpenClaw is just a variant on all of these.
The idea is you load it up with skills and API keys and loops so they can run continuously.
Really, it's just Claude. The thing people love about this is how good Claude is, and then they're just plastering layers on top of it. And sometimes I think it really is better just to use vanilla Claude.
So I think what I need to do is kind of strip it all out, take all that crap out, all these
skills and stuff.
And I'll keep your improved skill.
That's a good one.
Darren's skills are very good.
But I strip out most of that stuff, maybe write a few of my own. Skills are a combination of prompts.
And then you can't put code in there.
It's one of the reasons coders still have an advantage.
You can put bash commands.
You can put code in.
It's a combination of all of those.
For instance, a good skill, a skill I want to write, is a TWiT API skill, which would be everything that Claude would need to access our API.
It would be a first step toward, I don't know, replacing all the humans.
And then anyway, no, I'm kidding.
I think I am.
No, I don't want to replace the humans.
I think the humans are the most important part of our whole workflow.
What I want to do is replace the busy work.
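A hedged sketch of what that hypothetical TWiT API skill could look like. Skills are, roughly, a folder with a SKILL.md that mixes prompt text with commands; the frontmatter fields, the endpoint, and the header names here are placeholders for illustration, not a documented TWiT or Anthropic spec.

```python
from pathlib import Path

# Write a hypothetical skill file for the "TWiT API skill" idea above.
skill = """---
name: twit-api
description: Look up TWiT shows and episodes via the TWiT API.
---

Use the TWiT API (hypothetical base URL: https://twit.tv/api/v1.0).

To list recent episodes, run:

    curl -s -H "app-id: $TWIT_APP_ID" -H "app-key: $TWIT_APP_KEY" \\
        "https://twit.tv/api/v1.0/episodes"

Summarize the JSON response for the user. Never print the API keys.
"""

path = Path("skills/twit-api/SKILL.md")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(skill)
print(f"wrote {path}")
```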
Well, I had an AI day with my colleagues at Montclair State and also at the New Jersey Hills Media Group, which is a small newspaper company whose board I just joined.
And the AI genius Joe Amditis, who we ought to have on the show at some point, and who watches the show, hi, Joe, was taking them through things they could do.
And at some point: there are some writers who aren't good at copy editing, who always make the same mistakes, blah, blah, blah. On the one hand, everybody could use Claude. On the other hand, you could just email the article to a project on Claude with your instructions already there, and it could do its magic and send it back. These things can be so simple that you don't have to have everybody learn how to do all of that now.
That's right.
That's what I'm saying.
The triviality is the power of it.
Right.
So really, I think that's what a lot of this agentic stuff really is.
Yeah, it's just that kind of thing.
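A minimal sketch of that "mail an article in, get a copyedit back" workflow using the Anthropic Python SDK. The model id and house-style instructions are placeholders; only the general call pattern is real.

```python
import os
import anthropic

# Standing instructions plus one article in, a cleaned-up article out.
client = anthropic.Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

HOUSE_STYLE = (
    "You are a copy editor for a local newspaper. Fix grammar, AP style, "
    "and the writer's habitual errors. Return only the edited article."
)

def copyedit(article: str) -> str:
    msg = client.messages.create(
        model="claude-sonnet-4-5",   # placeholder model id
        max_tokens=4096,
        system=HOUSE_STYLE,          # the "instructions already there"
        messages=[{"role": "user", "content": article}],
    )
    return msg.content[0].text

print(copyedit("The council have went ahead with it's plan on tuesday."))
```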
Other ways to interface with the brain. Somebody's saying at least Lisa's going to be jealous. That ship has sailed.
Honey, I'll be back in a bit.
I just want to go up into the attic and visit Claude.
My little friend Claude.
Does Lisa play with Claude?
She does.
I've been working on her bit by bit.
In fact, now she says, how many subscriptions should we get for the team?
Because she's blown away by the kinds of things she can do.
It's a ménage à Claude.
Manage à...
Manage a Claude.
Show title.
Yeah, I think so.
Thank you.
Thank you, Jeff.
We'll have more in just a bit.
Our show today brought to you by OutSystems.
Oh, I love OutSystems for this.
They're the number one AI development platform.
OutSystems helps businesses bridge the enterprise gap to this agentic future we've been talking
about, where the constraints of the past give way to unlimited capacity and scale.
And the thing I love about OutSystems: they've been doing this for decades; they're not new to the game.
OutSystems enables businesses to build AI agents that can actually do work, take actions,
make decisions, integrate with data; much more than just answering questions.
OutSystems provides the only AI development platform that is unified, agile, and enterprise
proven, because they have been doing this for a long time.
They started with low code.
And now with the addition of AI, they have the most powerful tool I've ever seen.
You can build, run, and govern apps and agents on a single unified platform.
It's agile.
You can innovate at the speed of AI without, and this is important, compromising quality
or control.
It's really important in enterprise that your AI is doing the right thing, not the wrong
thing.
And it's enterprise-proven.
OutSystems is trusted by enterprises for mission critical AI applications and durable
innovation.
OutSystems is the secret weapon behind the world's most successful companies.
And by the way, not just for small one-off apps, OutSystems works with the massive complex
systems that today, right now, are running banks, insurance companies, and government services.
OutSystems even helps companies with aging IT environments bridge the gap to the AI future
without a rip and replace nightmare.
OutSystems provides the safest, fastest way for an enterprise to go from, yikes, we need
an AI strategy to, we have a functioning AI application, and it does it safely.
Stop wondering how AI will change your business and start building the agents that will lead
it.
Visit OutSystems.com slash twit to see how the world's most innovative enterprises use OutSystems to build, deploy, and manage AI apps and agents quickly and cost-effectively without compromising reliability and security. That's OutSystems.com slash twit. Book a demo, you will be impressed. OutSystems.com slash twit, we thank them so much for their support.
On this week in Intelligent Machines: let's see, so much news, so much. I'm going to skip through the Google section.
Oh, this is interesting: Meta, taking a little left turn, or maybe more of a U-turn. It's a bit of a drunkard's walk, shall we say. Remember when they spent billions of dollars to acquire Manus, I'm sorry, Scale AI, sorry about that, Scale AI? They're doing a reorg, according to the Times of India; maybe this is suspect, I don't know.
They are reorganizing the team they got from Scale AI. Meta's chief AI officer, Alexandr Wang, is still there, but he announced the company is going to cut 600 people from the Superintelligence Labs division. Wang wrote that by reducing the size of teams, fewer conversations are needed to make decisions, and everyone will carry greater responsibility with broader scope and impact. And we'll save a lot of money.
The teams include Wang's research lab; the applied AI engineering organization will also receive big cuts. This is Amar Sabah's team, another acquisition, an acqui-hire. And so it's a complete reorganization: only two people were left from Wang's team when their equity vested in November. So that's good, but maybe they're just going to move some people around.
It seems like Meta, remember their Avocado model, which was going to be their new big replacement for Llama, was pulled back. It's not good enough.
Is Meta the new AltaVista?
Yeah, they're struggling, but you know what, it's interesting to watch all these companies except Anthropic and OpenAI, and I guess Google. The Journal today had a story, maybe the Times, that Google's in the catbird seat. I don't know if that's true, I don't know if it is either. I don't know who's in the catbird seat right now.
Google's really doing what our guest was talking about where they're looking more at applications
than they are at big models, they did release Gemini 3 deep think, right, but they're
also like, in fact, I skipped through the Google section, but they're adding maps, stuff,
they did scrap the health tips because they were getting those from Reddit, turned out,
not a great source for health information?
No, no, well, it might be better than RFK Jr., but not much.
They are going to do an agent builder for the Pentagon, but it's only unclassified work.
Nonclassified, I thought.
Yeah, that's what I said, unclassified, yes.
Un-, non-, same thing.
So it was not classified, although you saw that now open AI is kind of jumping in the
fray.
They have not up to now been approved for classified work, but the Pentagon says, okay,
we don't like these anthropic guys, so maybe we'll let open AI into the behind the
iron curtain.
Man, the OG tech companies have more running room here.
I mean, you've got Google that's burning billions of dollars for AI, but they have income, and they can do it with that income.
Right.
Whereas OpenAI, if the AI deal doesn't go through, they die.
And actually, you could even extend that to Oracle.
Oracle has bet so much on AI.
They're heavily leveraged.
If it fails, they lose not just Oracle, but the Ellison media empire crumbles.
I do.
I mean, I agree with you that these companies, and you can throw Apple in there too: Apple, Google, and Meta have other revenue streams, so they don't have to make money on AI right away.
But we're not seeing the results.
Meanwhile, Anthropic and OpenAI, who are running on a razor's edge, are our big leaders right now.
Maybe that won't be sustainable, you know, that's the possible.
That's probably what these companies think: we can sit back. Certainly Apple's thinking that, we can see that.
I mean, they're just leaders because they're investing in each other. But, like, Meta threw how many billions away on the Metaverse?
I mean, yeah, it's embarrassing, but it didn't kill them.
They're killing, by the way, they're killing Meta Quest's Horizon Worlds.
It's going away.
It's over.
Wow.
Well, similarly, OpenAI, Meta-like, is saying, okay, we're going to concentrate now. We're going to concentrate on B2B, which is, hello, Anthropic.
Well, good luck with it.
Good luck with it.
What are the others doing?
Yeah.
Well, Anthropic's doing enterprise.
They said, yeah, maybe all this diversification, the chat and all that stuff, maybe we should
do the same thing.
Maybe this device thing.
How much should we spend to get Jony Ive here?
Hi.
Okay.
Here's my thought on this.
If agentic is the thing, and I think it certainly looks like it may be a thing, you need
an interface.
What OpenClaw and a lot of others do now is you use Telegram or Discord or Apple Messages or something to talk with it.
What you really want is a much more convenient way of talking to it.
I was thinking I really would like to write some sort of tool that I can use with one of
my pins or maybe my Apple watch that I could just say, hey, Claude, I got an idea or
remind me later to do this.
That's the way it should be.
I think that's what they're going to end up doing.
It's part of the agentic.
It's the interface to agentic.
Do you need a device to do that?
Do you need a unique device to do that?
I don't know.
I don't know.
I don't think you want to take your phone out of your pocket.
I think you want something ambient, whether it's glasses, earbuds, watch, ring, pendant,
you want ambient intelligence.
The same way you really would, what I would really like to do is just shout into the void.
Well, the ambient intelligence belongs to Amazon and their deployed base of ambient
devices is second to none.
Here's another example of this.
It's a company that has great revenue streams and cannot seem to make a decent AI.
Alexa Plus is horrible.
Oh, yeah, it is.
Even the people inside the company don't want to use it.
So maybe the urgency of we are going to run out of money any minute now is pushing
anthropic and open AI faster and they're doing better because of it.
Yeah, they have to sprint.
Maybe these other companies.
They have to.
They don't have to.
They don't have to.
They don't have to.
No.
But who won that race?
The tortoise or the hare?
Oh, the tortoise did.
Okay.
Meta didn't buy Moltbook for the bots.
Says TechCrunch.
It bought into the agentic web again.
They bought agents.
Moltbook, for the hype.
That's what they bought.
Well, Moltbook is a social network for AIs.
And Meta is social, right?
I think you're right.
They bought the hype.
I wouldn't go that far.
Meta knows how to panic.
They're in a panic here.
Meta's the one who knows how to mine that data.
That's all about the data.
There's no data.
That's why I was sad when Meta bought the limitless pin.
Well, that's what I bought.
Remember the Bee computer?
I bought the Bee computer and then Amazon bought them.
Then I bought the Limitless pin and then Meta bought them.
You know, if Apple does an ambient device... ambient intelligence, that's the phrase I'm thinking of.
Well, it's like the Leo is just walking on the street screaming, yeah, I want a milkshake.
Exactly.
I drink your milkshake.
I mean, the problem with ambient is there's so many of these purchases that feel like panic
purchases.
Yes, exactly.
It's going back to the early dot com where you had to do something with your money, especially
with Meta, right?
Meta is the king of, I don't know what we're doing, but write a check.
Well, I imagine somebody's running into Mark's office saying, ooh, ooh, we can buy this one.
And if somebody doesn't come to him before that, he's going to get mad.
He's got FOMO about it.
I know the feeling where you feel like, I got a lot of money, I'm going to buy that
stupid computer.
There's also the privacy issue when it comes to ambient, that ambient stuff is that it's
listening all the time.
So there's always a privacy concern there, right?
Maybe you do; I don't mind. Maybe most other people do.
We don't do ambient computing here.
You can't.
Right.
It's no.
But that's a disadvantage.
Yeah, I mean, honestly, the way you do it is you tap something, or, well, you need to trust the third party.
Like, prayer is ambient.
That's true.
I didn't even think of that.
You're asking the ultimate intelligence for help, exactly.
Somebody once told me there are only two prayers in the world.
Thank you.
Thank you.
Thank you.
And help me.
Help me.
Help me.
Is that fair, Robert?
I would add one.
Oh, my God.
And that can be taken so many different ways.
That can be help me.
Help me.
Yeah.
And there is one more, which is: help them.
But you're asking for help, or you're giving thanks.
Manus, the AI agent startup, the Chinese company Meta acquired late last year, has, as of the 16th, launched a new desktop application called My Computer.
It's on my head now, Leo.
I appreciate that.
Thank you.
Oh, you are a lucky one.
Bringing Manus's agent directly into your personal device through my computer, the agent
can read, analyze, and edit local files, launch, and control applications, execute multi-step
tasks, including coding tasks, without the user having to upload anything to a server.
It's local.
It's going to compete with Perplexity's Comet. The branding is, well, not great. And the Chinese government is actually a little concerned.
Manus is a Singapore company, but it runs out of mainland China.
And the Chinese government says, hey, we'd like to have a little bit of a word with you.
So, how are they doing that?
Does it act like a virtual machine or a container on my local machine?
This is from the next web.
The key architectural difference between Manus and OpenClaw is the model layer beneath the agent. OpenClaw, being open source, can be run with any model, right? Its quality depends on which model you choose. Manus runs on Meta's own proprietary model stack, which the company says is more consistent and capable, at the cost of a subscription fee. But is it local?
I don't see how that could be local, it has to call out to the server.
Yeah.
So, you know, analyzing what's on your desktop, it's sending it somewhere.
Your computer doesn't have the power for that.
Sending it to Meta.
Exactly.
And Anthropic has this, of course, Cloud Code work, OpenAI created their version of that
as well.
Everybody's trying to do that.
Basically taking the coding platforms, Cloud Code and CodeX, and making it so that non-coders
can use it, but I don't know.
I still need to know how much of it goes out.
It sounds like they're making a good faith effort to keep everything local. But it's local when it can be.
The intelligence doesn't work like that.
Local when it can, cloud when it can't.
OpenAI released two new models today: GPT-5.4 Mini and Nano.
Ooh, Mini and Nano.
How big is Mini?
How big is Nano?
Let's see.
Nano is the smallest, cheapest version of 5.4, for tasks where speed and cost matter most. It's a significant upgrade over 5 Nano.
There's a new Buick.
Yeah, there's the benchmarks, which I don't pay too much attention to.
Let's see the numbers.
Show us the numbers.
Yeah, come on, the sizes should be revealed, right? That's the first thing. All these benchmarks.
In the API, GPT-5.4 Mini supports text and image inputs, tool use, function calling, web search, file search, and, it appears, a 400K context window.
That's good.
That's big, twice as big as Claude Code's context window. It's only about 75 cents per million input tokens, $4.50 per million output.
Mini uses only 30% of the GPT-5.4 quota, letting developers quickly handle simpler coding tasks in Codex for a third the cost, and you can use Mini for subagents, which I do with Claude; I use Haiku and Sonnet for subagent work that isn't too demanding.
Nano, let's see. Mini is available to free and Go users via the thinking feature in the model menu. For other users, Mini is available as a rate-limit fallback for GPT-5.4 thinking. Nano is only available in the API, and Nano is 20 cents per million input, significantly cheaper, about 25 cents per million output.
That might actually be a good foundational model for an inference build, because you're already limited in scope.
So yeah, that's all the information I have.
This is from OpenAI.
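For scale, here's the back-of-the-envelope math on those per-token prices. The numbers are taken as quoted in the conversation above, not verified against any published price list.

```python
# Dollars per million tokens (input, output), as quoted in the episode.
PRICES = {
    "gpt-5.4-mini": (0.75, 4.50),
    "gpt-5.4-nano": (0.20, 0.25),
}

def cost(model: str, tokens_in: int, tokens_out: int) -> float:
    p_in, p_out = PRICES[model]
    return (tokens_in / 1e6) * p_in + (tokens_out / 1e6) * p_out

# e.g. a summarization job: 50k tokens in, 2k tokens out.
print(f"${cost('gpt-5.4-mini', 50_000, 2_000):.4f}")   # about $0.0465
print(f"${cost('gpt-5.4-nano', 50_000, 2_000):.4f}")   # about $0.0105
```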
OpenAI has signed a deal with AWS to sell its AI services to government agencies for
classified as well as unclassified work.
This is their opportunity to get in the door.
Microsoft is now threatening to sue them, saying, no, you're ours, OpenAI, you can't do a deal with AWS, you're ours.
Additionally, Anthropic is on AWS, right, and that was a big advantage for Anthropic.
But OpenAI has really jumped into the breach, and Microsoft says that's a breach of our contract, and they are threatening to sue. So, trouble?
That's a trouble-in-paradise thing.
They were friends.
Well, I mean, come on, that's been going on for more than a decade, right?
That kind of a break.
Microsoft and AWS.
Oh, yeah.
But I was talking about OpenAI and Microsoft, right?
Microsoft gave them $10 billion.
Microsoft also gave Apple the money that saved them.
So, $150 million.
Yeah.
They're very good at that.
Yeah.
Maxwell Zeff, writing in Wired: Inside OpenAI's Race to Catch Up to Claude Code. This is what you were talking about, Jeff, kind of a repositioning.
So they still want to do the adult chat.
There is now controversy within the company.
The safety people there are saying, this is really a bad idea.
It is a bad idea.
And they haven't repudiated it yet.
They have to.
They have to repudiate.
It's just... I'm no parent, but from a business perspective, it just doesn't make sense to advertise it.
Claude Code accounts for a fifth of Anthropic's business, more than $2.5 billion in annualized revenue; Codex, less than half that. So OpenAI says, wait a minute.
We need it.
We need to get in on that.
That's where the money is.
It's enterprise computing and inference.
Mm-hmm.
It's the age of inference.
For at least the last month. I still think it's more the age of agentic, but that is inference, in a way, kind of.
The Information also had this story: OpenAI, Musk, and Fidji Simo.
One of these things is not like the other, but she is a very good manager.
Fidji Simo, who's the CEO of Applications, is a very strong manager.
And I think that she'll bring a sense to this.
She was at Meta, and then she was the CEO of Instacart.
She was there for quite some time.
She's really smart.
She told the all-hands meeting last week at OpenAI that the company needs to refocus on business customers and cut down on side quests that are becoming a distraction. But what we don't know is what those side quests are.
Jony Ive.
Everybody's seen Jony Ive.
Is it Jony Ive?
Is it shopping?
They wanted to, remember.
They were going to do ads.
They were going to do shopping.
Poor.
They were going to do sex chat, sexy chat.
Yeah.
He was announcing something every day.
He was kind of chasing perplexity, which was going for the press release.
Yeah.
But press releases cost money if you actually do what they say.
Meanwhile, in the same story, she talks about Elon Musk, another example of a company that's thrown out its models: xAI. He's publicly trashed the state of play at xAI, tweeting: xAI was not built right the first time round, so we're rebuilding it from the foundations up. That followed the departures we reported last week of most of xAI's co-founders.
I mean, it probably had something to do with the fact that he kept wanting to put his thumb on the scale every time his AI didn't give the answers he wanted.
I mean, that's a really good way to bust your training model.
Yeah.
And I still don't believe... I mean, everybody else has been making public hires and all this kind of stuff. I don't believe that Musk didn't cheat in some form to make what's there.
He would if he could. Let's put it that way.
Even more than DeepSeek supposedly did.
Here is Sam Altman talking at a conference.
Fundamentally, our business, and I think the business of every other model provider, is going to look like selling tokens.
You know, they may come from bigger or smaller models, which makes them more or less expensive.
They may use more or less reasoning, which also makes them more or less expensive.
They may be running all the time in the background trying to help you out.
They may run only when you need them if you want to pay less.
They may work super hard, you know, spend tens of millions, hundreds of millions, someday billions of dollars on a single problem, if it's really valuable.
But we see a future where intelligence is a utility like electricity or water and people
buy it from us on a meter.
On a meter?
Metered intelligence?
I wish we'd played this for Rumman.
It's commodifying the enlightenment.
Right?
It's commodifying all education, all thought, everything else, into a commodity that he's going to own and sell on a meter.
It's just offensive.
And this is why they're behind because anthropic doesn't sell tokens.
They sell services.
They sell things that you want.
Open AI is still caught up on this idea that they're going to be the power behind everything
and everyone buys their tokens and then turns them into services.
Well, one of those has a future in the enterprise.
One of them doesn't.
What did you both think of Jensen Huang's hint that he's going to compensate employees with tokens?
I don't think it's just him.
I think this is all the rage in Silicon Valley now is you get your pay package and in
there is and we will give you, you know, 20,000 tokens a week.
Tokens for people who are saying, what are they talking about?
Tokens we keep saying that word.
It's the information going in and out of the AI, right?
Everything the AI sees is tokens. So if it ingests the works of Shakespeare, the process of the transformer, of the neural network, is to take those words, those chunks of phrases, because a token is often not just a whole word, those little chunks, and learn the relationships between them.
So the tokens are the fundamental... they're the bits of intelligence, you know, in the sense of bits and bytes. They're the bits; the smallest unit of intelligence in an AI is a token.
And when you're using AI, you are putting tokens in: your prompts, information it gathers from the web, and so on. And then you're getting the results back as tokens. You're charged on both sides of that.
That's right.
So this is what we were talking about.
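A quick way to see what tokens actually look like, assuming the tiktoken library is installed. The exact splits depend on the tokenizer; this one uses an OpenAI BPE vocabulary as an example.

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
text = "To be, or not to be, that is the question"
ids = enc.encode(text)

print(ids)                             # the integer token ids
print([enc.decode([i]) for i in ids])  # the chunks: often sub-word pieces
print(f"{len(text.split())} words -> {len(ids)} tokens")
```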
So this is just the return of company scrip then, right?
What do you get? Another day older and deeper in debt.
No, I don't think he's saying that.
I think he's saying it's a utility.
It's going to be how we pay for the internet, how we pay for water, how we pay for electricity.
Yes, but if he's paying his employees in tokens, I mean, and they can only spend those tokens
on, on open AI.
Oh, I see what you're saying, right.
Well, no, that's not necessarily how it's going to be.
First of all, you'd be foolish because you can't pay the rent in tokens yet.
Maybe you will.
Oh, just wait.
Just wait.
Opposition of tokens.
Robert, what do you think? You know about alternative currencies, cryptocurrencies. Do you think tokens could become the new dollar?
Yeah.
This is just another privatization-of-a-financial-utility scheme.
It's a currency.
It's a currency.
It's a currency.
Now any currency has the ability to be, to be translated, converted into other currencies.
So what he's saying is look, I want to reward my employees.
I want to pay my employees in a currency that can increase in value if they put more work
into the company.
I honestly think the demand comes from the employees as much as the company.
In other words, if I'm going to go to work for one of these companies, I want to know
how much intelligence am I going to get?
How much use of your product am I going to get?
They get it for what?
They get it for building their own companies outside the company?
No.
Well, that would be part of the negotiation.
We don't know.
Do they get it for their 20% time?
You could be rapacious and say, everything that you do with your tokens, we own.
But remember, the job market is extremely competitive for these engineers.
So the engineer could make a deal and say, look, I want to be able to use, well, actually,
what I would ask for is unlimited use.
Well, yeah, if you're an employee, making a deal with it, why should I have any limit
on my use?
Yeah.
That makes no sense.
Yeah.
And if you're saying it's part of the compensation package, right? But that means he's getting less money also, right? You're getting less money because you're getting the tokens.
Not necessarily.
It really could be a way to pay less cash and taxes.
If I'm negotiating a deal with Mark, you're going to pay me a million dollars a year
to come to work for your company.
And by the way, I want unlimited AI.
I don't know why they don't just give them unlimited.
I mean, doesn't it?
Right?
Maybe they do.
If you're going to do work for the company, then they should give you whatever resources
you need to do.
Right.
Well, that's why I think this is for personal use.
Maybe it's for personal.
Yeah.
It makes sense.
It's an asset that I can use on my own, for my kids or whatever.
Yeah.
You can go home and build your startup.
There is right now a mystery model on OpenRouter.
It appeared about a week ago.
It's called Hunter Alpha.
Everybody's talking about it.
People think this is the next deep seek version.
Deep seek has really been a disruptor in the AI world.
They came along.
It was funny.
It was January of last year.
It's only been a year and some months.
But they changed everything.
They showed how reinforcement learning could make AI much, much better.
During tests conducted by Reuters, the Hunter Alpha chatbot described itself as a Chinese AI model primarily trained in Chinese. It said its training data extended to May 2025, which is the same knowledge cutoff reported by DeepSeek.
But the system would not identify the developer.
I only know my name, my parameter scale, and my context window length.
Name, rank, and serial number.
Neither DeepSeek nor OpenRouter has identified it.
Yeah.
Trillion, it's a trillion parameter model.
That's a lot, isn't it, Robert?
Yeah.
That's a bit more than what I'm running locally. The biggest local model I've seen is 120 billion parameters, 120B. That's GPT-OSS 120B.
Do we know how many parameters Claude has or chat GPT, do they ever reveal that?
For Claude, I don't know.
I don't know the numbers for Claude.
We throw these terms around and I'm kind of assuming people know what we're talking about.
So you train, you put in a bunch of text, and you get some tokens; that is the internal representation of the text. But by themselves, you don't know which tokens are more important or less. That's done with parameters, which also come out of the training.
And the parameters change as you do the training, they also change when you do the reinforcement
learning and other post training to make the model smarter.
I don't know if this is a good analogy or not.
I will use this analogy and you can correct me if I'm wrong rubber.
I often think of sampling music.
So there are two numbers that matter when you sample music, when you take analog music and turn it into digital: how many slices of the wave you take, and how much information each slice has. So for instance, you could sample something at 44,100 samples per second, and then each sample is a 16-bit sample; that is CD quality. And I think of parameters the same way. The number of parameters is like the number of samples per second, and then there's how much information a single parameter stores, the number of bits per parameter.
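Making that analogy concrete: parameter count times bits per parameter gives the model's memory footprint. A rough illustrative calculation, nothing more.

```python
# How many gigabytes a model occupies: parameters x bits per parameter.
def model_gigabytes(n_params: float, bits_per_param: int) -> float:
    return n_params * bits_per_param / 8 / 1e9   # bits -> bytes -> GB

print(model_gigabytes(120e9, 16))  # 120B params at fp16   -> 240.0 GB
print(model_gigabytes(120e9, 4))   # same model, 4-bit     ->  60.0 GB
print(model_gigabytes(1e12, 4))    # 1T params, 4-bit      -> 500.0 GB
```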
The one that I like to use whenever I'm doing a presentation is let's say you're trying
to train a model and you ask the model what color is the ocean?
Well, okay.
So it's looking through its current stack of parameters and it sees that ocean is most
associated with fish.
So it responds, the color of the ocean is fish, well that's wrong.
So you correct it.
You say no, no, no, the answer is blue.
It now adjusts its parameters so that it biases itself: when it sees the tokenization of ocean and color, it leans towards the answer blue. So every time you do that, you're shaping the parameters, and those parameters form the bias of how the model both understands and replies.
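A toy version of that correction step, in numpy. This is a deliberately tiny caricature, one weight per candidate answer for the cue "ocean + color"; real models adjust millions of weights with the same kind of gradient step.

```python
import numpy as np

answers = ["fish", "blue"]
w = np.array([2.0, 1.0])         # initially "fish" is more associated with "ocean"

def probs(w):
    e = np.exp(w - w.max())      # softmax over the two candidate answers
    return e / e.sum()

target = np.array([0.0, 1.0])    # the correct answer is "blue"
for step in range(5):
    p = probs(w)
    w -= 1.0 * (p - target)      # one gradient step on cross-entropy loss
    print(step, dict(zip(answers, p.round(3))))
# The probability mass shifts from "fish" toward "blue" with each correction.
```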
But no, yeah, I see that sampling idea. I like that. I'm going to work that into my next presentation.
It's not perfect, but it's something.
It's hard to understand this stuff.
Anyway, unknown whether the mystery model Hunter Alpha is actually DeepSeek; I guess we'll find out at some point.
It might be an option.
I think cloud is 161.5 million.
So yeah, this is at all.
Yeah.
Trillions a lot.
Trillions a lot.
Well, well, there is something called... oh, Karpathy.
He did.
Right?
It's tiny. Look at what Andrej Karpathy did, delivering his little tiny thing, right?
And we can build up from there rather than this macho hubris of saying, I got the bigger thing, the bigger thing, which I think we'll probably keep talking about.
I mean, if you really hone in on your training, on just the data that you really want it
to be able to process, you can make an exceptionally intelligent model for that specific purpose
with a much smaller base.
There's a really interesting branch of this research where let's say you wanted, you
wanted to teach an AI how to add numbers.
Initially, when you train it, you would give it a bunch of sums, 1 plus 1 equals 2, 1
plus 2 equals 3, 1 plus 3 equals 4.
If you have so many parameters that the AI is capable of storing all of the data, so many that it can store all the training examples, then what you will get is a lookup table, which will break, because it's memorizing rather than thinking.
Exactly.
It's brittle, because as soon as you get outside of the training data, it doesn't know; it's just doing a lookup.
What they found, interestingly, training these models, is that by reducing the number of parameters, you can induce the model to think: to solve it not with a lookup table, but by coming up with an algorithm, and we don't know what algorithm it's coming up with, but it comes up in some way with an algorithm that produces the right result.
You do that by reducing the number of parameters.
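What Leo is describing echoes what the research literature often calls grokking. Here's a miniature version of the addition experiment, assuming nothing beyond plain Python; a sketch, not the actual studies:

```python
# The lookup-table failure mode vs. an algorithmic solution, in miniature.
train = [(a, b, a + b) for a in range(10) for b in range(10)]

# "Too many parameters": memorize every training pair.
table = {(a, b): s for a, b, s in train}
print(table.get((50, 60)))       # None -- brittle outside the training data

# "Few parameters": force a 2-parameter model, which must learn the rule.
w1 = w2 = 0.0
for _ in range(500):
    for a, b, s in train:
        err = (w1 * a + w2 * b) - s
        w1 -= 0.001 * err * a    # gradient descent on squared error
        w2 -= 0.001 * err * b
print(round(w1 * 50 + w2 * 60))  # ~110 -- generalizes, since w1 and w2 land near 1
```

The memorizer answers nothing outside its table; the tiny model, denied the room to memorize, is forced into the rule itself.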
So more parameters isn't necessarily better.
This is a metric that I think is going to become popular at some point in the future, as they go into the inference side of LLMs: it's not just about your parameter count.
Yes, it's important to have enough parameters to be able to do the work you want it to do.
But the quality of the parameters is something that we don't yet measure, and we need to figure out how to do it, because you can have a one-trillion-parameter model that is absolute trash, and you can have a 100-million-parameter model that works beautifully, and it's all about how those parameters interact with one another.
And back to the notion of specialized machines: is the training data focused on something like health, versus everything that teaches it how to speak?
I also wonder if there's a qualitative difference between a large language model trained on Chinese and one trained on English.
That's a very good question.
Chinese is a much more complicated language than English.
It's a very different language. Chinese people think differently because their language is different.
There are more bits per character in Chinese, so I wonder how much of a difference that makes. Also, I don't know what the general public sentiment toward AI is in China.
I'm also curious about that.
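A quick aside on that bits-per-character point, which is easy to verify; how a given model's tokenizer splits Chinese is a separate question that varies from model to model, so treat the token side as open. (Example words chosen arbitrarily.)

```python
# UTF-8 cost of the same concept in English vs. Chinese.
pairs = [("ocean", "海洋"), ("blue", "蓝色")]
for en, zh in pairs:
    print(en, len(en.encode("utf-8")), "bytes |", zh, len(zh.encode("utf-8")), "bytes")
# Each CJK character is 3 bytes in UTF-8 vs. 1 byte per ASCII letter --
# though a Chinese character packs more meaning, so per-word it roughly evens out.
```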
Well, I could tell you they're going crazy over OpenClaw.
Have you seen pictures of OpenClaw conferences in China, and they're all wearing lobster
hats?
What?
OpenClaw is the latest fad in China.
OpenClaw has groupies.
I bet if I can find one of those lobster hats, I'm getting one. Anybody in the Club Discord, go find it for me.
Baidu has integrated OpenClaw into its Xiaodu services to work as a voice controlled
remote.
Oh, that sounds like something I might have been talking about earlier.
Here's a picture of an OpenClaw conference, or actually, this is Baidu headquarters with
a giant lobster out front, the OpenClaw lobster out front.
Already?
Yeah.
We have OpenClaw smart speakers that you can talk to, a voice controlled remote for
the AI agent.
I had that idea.
I should have patented it.
Is the hat the lobster hat?
Is the lobster hat the one you want?
Yeah, because a Chinese company is going to honor your patent, Leo.
Oh, yeah, that's right.
Doesn't really matter what I patent, then.
Yeah.
Yeah, that's the hat.
That's the hat, okay.
Well, it's one of them.
They were all wearing them.
So I saw pictures of big conferences in China where people were in lobster hats.
That's a crab, though.
You think that's a crab.
Yeah, oh, wow.
Do you think I could get a sponsorship from OpenClaw if I could get Pope Leo to wear that
hat?
Yeah.
It looks like a skullcap with just a claw sticking out of the head.
Yeah, why not?
Here are attendees with their laptops at Baidu's OpenClaw lobster market event in Beijing yesterday.
This is great.
I'm so excited about this.
I love it.
We don't have that sort of stuff happening at our universities.
I mean, we do have it happening in San Francisco.
There's all sorts of stuff.
There are OpenClaw meetups.
Are you kidding?
Attendees play games at Baidu's OpenClaw lobster market.
See, there's OpenClaw meetups in San Francisco, Leo.
Oh, yeah.
You didn't know about that?
I've been over here for a while.
No.
Oh, my God.
Yes.
Bloomberger is like AC/DC.
He shows up and, oh, it's like rock and roll, man.
Okay.
Well, I got to go back to California now.
OpenClaw.
It's very hot.
Very, very hot.
All right.
By the way, Darren in the chat just gave us all lobster hats.
Just FYI.
Oh.
Also.
He's very quick on the draw.
Also, you can get your lobster hats on Amazon.
Yeah, that's pretty good.
And there's a toothpick.
I'll get one for you and one for me.
We're going to need a lobster one for Claude.
Yeah.
Oh, yeah.
My Claude should have its own hat.
Yeah.
Absolutely.
This is another kind of lobster hat.
I like the one you're wearing, Jeff.
It's got beady little eyes looking straight at me.
Oh.
Very nice, Darren.
Thank you.
Let's take a break.
We have so much more to do.
We did the boom.
Let's do the doom.
And when we come back, the boom and the doom and the gloom.
We're talking AI on Intelligent Machines with Father Robert Ballecer, the digital Jesuit.
Are you the go-to guy at the Vatican for AI?
Everything here is done with multiple teams and all.
Oh, yeah.
It's a large committee of people who are very good at what they do.
Before we got on the air, we were talking dicasteries.
Yes.
And what's that?
Rumman really quite likes that word now.
She's going to use it on her own.
What's a dicastery?
A dicastery.
It's our way of saying department.
It's a fancy word for department.
Oh.
Okay.
Anyway, it's great to have you, and you're wonderful.
Yes.
I thank you for staying up so late.
Oh, I didn't even think of it.
It's after midnight, isn't it?
Actually, you got me at a good time, because we're in that three-week window where the United States does daylight saving before Europe does.
Ah.
So it's only eight hours' difference right now.
You're going to get very busy, too.
We're in the middle of Lent.
Again, we've got something coming up in a week or two here.
Yeah.
Nothing.
There's a little thing.
Something.
Do priests give up things for Lent?
We do.
Do you want to share what you gave up?
I gave up sex.
You're so irreverent.
No, really, I gave up soda.
Soda for Lent.
Yeah.
That's a good thing to give up.
It is a very good thing.
And the funny thing is I always feel so much better every time I give up soda.
I know.
And then within like three weeks, I go, ah, I just want another soda.
I think soda's got you in its claws.
It does.
It does.
It does.
It does.
When I was a kid, it was a big deal, because my dad really wasn't a very good cook.
He'd bring home chicken lickin'. He called it pizza-chicken night.
He'd bring home a pizza, some fried chicken, and a bunch of Coke.
And for some reason, in my mind, Coca-Cola and pizza, and Coca-Cola and fried chicken, they just go together. You get programmed, don't you?
I wish I had never had soda.
Because that burst of sugar, it just does it for your brain.
It's basically heroin.
I used to drink six a day.
And then when I got atrial fibrillation in 2011, I couldn't have caffeine.
And so I gave up coke entirely and I managed to do it.
But congratulations.
My kids thought, there's no way.
No way you're giving this up.
Six a day, were they, were they sugared or diet?
Oh, sugared, yeah.
Oh, I hated the diet.
I wake up in the morning and the first thing I'd have is a coke.
The bubbles wake you up.
It's wonderful.
Yeah.
A little jolt of caffeine.
And all of our parents also use it as like a reward.
So it, in our heads, it's a reward.
That's right.
It's a reward.
But I remember that reward at McDonald's, when the cup of Coke was like this big.
Yeah.
And now it's like this big.
Yes.
And that's the small.
That's the small.
Yeah.
It's America for you.
And also father, along with Father Robert, we've got Jeff Jarvis professor, Jeff Jarvis.
Are you a doctor?
I didn't ask.
No.
No, no, no, no, no.
No, no, no.
I don't have a master's.
I've created three master's degrees, and I'm working on another this semester, and I haven't had one myself.
Oh, don't.
I was with a bunch of academics. One was a wonderful academic, Andrew Pettegree, whose book I'm about to read.
I was at St Andrews in Scotland with him and a bunch of his academic colleagues, and I said, well, I started three master's degrees. And they all looked at me like, well, why didn't you finish any of them?
No, I don't mean that. I didn't mean that. I created them.
I didn't mean that.
I created them.
But I'm too dumb.
Created them out of whole cloth.
Let's see.
Yeah.
We'll take a break.
We have a few more stories and we have some picks.
You're watching Intelligent Machines.
Brought to you this week by Zscaler, the world's largest cloud security platform.
It's pretty clear: the potential rewards of AI are far too great for any business to ignore, but it's also clear that the risks are, as well; loss of sensitive data, attacks against enterprise-managed AI.
And of course, generative AI increases the opportunities for the threat actors, the bad guys, helping them rapidly create phishing lures that are so good, you're bound to click.
They're using it to write malicious code; we had some examples of that on Security Now last week; and they use it to even do things like automate data extraction.
Hey, you're using it.
Why wouldn't they?
It really is a problem, with proprietary data being leaked.
There were 1.3 million instances of social security numbers leaked to AI applications.
ChatGPT and Microsoft Copilot saw nearly 3.2 million data violations last year.
You got to do something about it.
Fortunately, there is a solution.
It's time for a modern approach with Zscaler's Zero Trust + AI.
It removes your attack surface.
It secures your data everywhere.
It safeguards your use of public and private AI, and it protects you against ransomware and AI-powered phishing attacks.
Don't take my word for it.
Listen to what Seva, the Director of Security and Infrastructure at Zora, says about using Zscaler.
Zscaler.
AI provides tremendous opportunities, but it also brings tremendous security concerns when
it comes to data privacy and data security.
The benefit of Zscaler, with ZIA rolled out for us right now, is giving us insight into how our employees are using various GenAI tools.
So the ability to monitor the activity makes sure that what we consider confidential and sensitive information, according to the company's data classification, does not get fed into the public LLM models, etc.
Thank you, Seva.
With zero trust plus AI, you can thrive in the AI era.
You can stay ahead of the competition.
You can remain resilient even as threats and risks evolve.
Learn more at zscaler.com/security. That's zscaler.com/security. Thank you so much for supporting the show.
Talking about AI risks, this was an appalling story.
We've talked before about how face recognition is so problematic.
But you would hope that police departments wouldn't rely entirely upon face recognition to apprehend suspects. Well, unfortunately, the Fargo, North Dakota police department did.
They had video of a fraudster walking into a North Dakota bank, passing a bum check or something.
They fed it to a face recognition database, and the name Angela Lips came up.
A woman who lives in north central Tennessee, not North Dakota.
They called the police department in Tennessee and said, can you arrest her?
They did. They put her in jail.
She sat in jail for four months without bail, waiting for extradition.
She was extradited to Fargo, North Dakota, based solely on this face recognition.
She said, I've never been to North Dakota. In fact, I'd never been on an airplane until they flew me to North Dakota to face charges: four counts of unauthorized use of personal identifying information, four counts of theft.
The Fargo police never bothered to check her alibi, I guess. When they found out she had a perfectly good one, that she could prove she was in Tennessee when this video was taken in Fargo, they released her, on Christmas Eve, and didn't give her any money to get home.
It's a new episode of the show, Fargo.
They stranded her. Local defense attorneys covered a hotel room and food on Christmas Eve and Christmas Day, and a local nonprofit helped her return to her home.
She's back home, but she says while she was jailed, she couldn't pay bills, so she lost her house.
She lost her car.
She lost her dog.
She also said, no one from the Fargo police department has apologized.
Sue the bastards.
I hope some attorneys come to her and say, yeah, we can get some money out of this.
Come on, lawyers.
Get pro bono on this.
I mean, seriously: 1,200 miles away from home, she lost her life, everything that she had built up, gone, because a couple of people decided they were going to trust a tool they didn't really understand.
There's zero accountability, zero responsibility for using the tool in the first place.
This is, I mean, this should be science fiction, dystopia.
This should not be something that we're just accepting.
I mean, the fact that this has not gotten just wall-to-wall press coverage is ridiculous.
Terrible.
And the fault is human.
Yeah.
Right.
Don't blame the tools for it, anyway.
It's humans, because they're trusting it too much.
Yep.
That's what I was going to say.
McKinsey paid a pen tester to hack it, and it worked.
McKinsey, one of the world's best-known consulting firms, built an internal AI platform called Lilli for its employees.
It had chat, document analysis, RAG over decades of proprietary research, AI-powered research.
So they said, we decided to point our autonomous offensive agent at it. Didn't give it any credentials. Didn't give it any insider knowledge. No human in the loop, "just a domain name and a dream," McKinsey writes. Within two hours, the agent had full read and write access to the entire production database.
You know, fortunately, it was their own red teaming of their system.
The agent mapped the attack surface, found the API documentation publicly exposed.
Over 200 endpoints fully documented, most required authentication, but 22 didn't.
I don't need to go on.
You can.
Oh, my favorite part of that story, Leo, is that since the API was public, all they needed were the JSON keys. And the JSON keys were in the error logs of the database.
So they were just able to use some SQL injection, get the error logs, and boom, you're in.
That's fantastic.
So actually, there's a lot of AI in that too, and that's the good news, because the AI found it, and I'm sure they fixed it.
This is one thing we're really starting to see: AI being used in security audits, very effectively.
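For anyone curious what that class of bug looks like, here's a deliberately insecure, entirely hypothetical sketch; not Lilli's actual code; combining the two mistakes Robert described: injectable SQL, and error responses that echo secrets back to an unauthenticated caller.

```python
# Hypothetical sketch of the bug class described above -- not McKinsey's code.
import sqlite3

SIGNING_KEY = "jwt-signing-key-abc123"  # imagine a secret sitting in config

def lookup(user_input: str) -> dict:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT)")
    try:
        # BAD: string-built SQL invites injection.
        rows = conn.execute(
            f"SELECT * FROM users WHERE name = '{user_input}'"
        ).fetchall()
        return {"rows": rows}
    except Exception as exc:
        # WORSE: raw error details (plus whatever gets logged alongside them)
        # go straight back to the caller.
        return {"error": f"{exc} (config: {SIGNING_KEY})"}

print(lookup("x'"))  # a stray quote breaks the query, and the error leaks the key
```

The fix is the boring, well-known pair: parameterized queries, and generic error messages that never include internals.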
A new study says using AI leads to brain fry. The article in Harvard Business Review quotes our friend Steve Yegge saying, "I had a palpable sense of stress watching Gas Town. It was moving too fast for me."
I know the feeling.
Yeah.
So don't let your brain fry using AI.
You know what?
Touch grass.
Touch grass a couple of days ago, and Claude was down for like five hours.
We were all sitting here doing a show. I guess it was yesterday. It felt like ages.
And Darren or somebody said, hey, Claude says it's overloaded.
And I said, what?
And I tried it.
Nobody could get into Claude.
You should see it on Reddit.
People say, oh man, I had to go outside.
Where's my friend?
My friend's gone.
You know, I think I actually had an example of brain-fry.
I was helping a colleague in a different part of the world who was extremely upset, because she had been using a couple of AI tools to help with her content production.
Her brain fry was that she was so depressed that the work the AI tools created was, in her estimation, better than the work she had been creating.
That would be horrible, wouldn't it?
And so she was trapped in this job where she was basically just putting queries into AI, and she had given up trying to get any of her own style into it. That's definitely brain fry.
That's kind of the reverse centaur in a way, right?
The AI's now doing the good part, and you're doing the nasty part.
Yeah.
I mean, that's sort of a really bad imposter syndrome, like feeling that your work isn't good
enough.
Oh, yeah.
It's like a really bad imposter syndrome.
Imagine if you have imposter syndrome and an AI confirms that you're not good enough.
Yes.
But that's subjective, right?
Like, who knows what's better, right?
Yeah.
Yeah.
Objectively, I've read all of her work, and no, she's better.
It's not true.
She really is better.
Okay.
Okay.
Mike Masnick wrote about it yesterday, or the day before, in Techdirt: the sad case of a California state appellate court case in which a hallucinated citation traveled through an entire legal proceeding; from a Reddit blog post, to a client's declaration, to an attorney's letter, to the opposing attorney's draft of the court order, to the judge's signature, to appellate filings. At no point along the way did anyone bother to check whether the case actually existed.
It's a story about, believe it or not, custody of a dog.
Two people dissolving their domestic partnership each wanted custody, shared custody and visitation, of the dog, Kira.
You take Fido and I'll take Claw.
In the case, one of the parties cited two cases, the Marriage of Twig and the Marriage of Teagarden. Neither case exists.
They came from a Reddit blog post by Sassafras Patterdale.
Munoz and her attorney did not actually realize the cases were fictitious.
They attached the Reddit article as an exhibit to the declaration.
Sassafras was identified as a blogger, a podcaster, an animal rescuer.
Well, you know, there you go.
One was cited as a watershed California Supreme Court case that never happened.
But everybody bought it and it went all the way through the court.
The judge signed it.
It went to the appellate court.
They didn't question it.
This is one of those fields that is most vulnerable to AI hallucinations, because so much of the legal profession is knowing citations and knowing precedents.
And most of the time, when you write these briefs with these precedents in them, it sounds like an AI hallucination even when it's not, because it's just citation, then a small quote, then citation and a small quote.
So I could understand why someone reading one of these briefs would, first, not check the citations, because there are so many of them, and second, not be tipped off by the wording, because the wording isn't any different.
Well, and Mike points out that at each step of the way, the fake citation got more legit.
Yeah.
Right?
It started as a blog post, but then it's in the pleadings, and then the judge's court order.
And so at each step of the way, it got more and more legit.
If the judge is receiving it, he's assuming that his clerk and the four attorneys who
looked at it before already checked it.
Exactly.
This is the problem here.
Nobody checks the AI's work.
Really, nobody checks the AI's work.
And that's the real problem.
So what we need is an LLM that checks for the hallucinations; that will fix everything.
And my good friend Kevin Rose partnered up... remember, Kevin had a thing called Digg back in the day.
He started it right after Tech TV; it got him on the cover of BusinessWeek as the $60 million man.
It was before Reddit. Reddit came along: Alexis Ohanian and Steve Huffman founded Reddit kind of as a clone, frankly, of Digg.
Digg eventually fell to the bots who were gaming its algorithm, and after Digg v4, they kind of shut it down.
Well, fast forward a bit: Alexis Ohanian, who's done pretty well for himself, partnered up with Kevin to revive Digg and to revive the Diggnation podcast.
Digg came out of beta just a couple of months ago, and almost immediately the bots were back.
It has shut down again, after two months. I know, I'm not laughing.
I'm not laughing.
They said, we thought, you know, we were going to use AI. We thought we could really solve this problem.
Digg CEO Justin Mezzell writes in a note pinned to the homepage of digg.com:
We faced an unprecedented bot problem. We knew that bots would be out there and would be a problem; we just didn't appreciate the scale, sophistication, or speed at which they'd find us.
We banned tens of thousands of accounts. We deployed internal tooling, industry-standard external vendors; none of it was enough.
It's not just a Digg problem, it's an internet problem, but it hit us harder because trust is the product.
We're not giving up. Digg isn't going away. We just have to rebuild, and Diggnation will continue recording while we work on a reboot.
They got botted again.
I think I've told this story on the show in the past: my old boss, Steve Newhouse, now the chairman of Advance Publications, loved Digg and wanted to buy it.
There was no buying it.
So he bought his second choice, which was Reddit. Smart move.
You might be interested in this.
I imagine you go in for an ECG every once in a while, Mr. Jeff Jarvis.
I have my own little thing.
Cedars-Sinai has an AI system that can read echocardiograms and write the report.
I know, you'd like a cardiologist to validate it.
Well, I just had the case where I had an MRI on my back after I injured it, right? Because the pain was so god-awful.
We were looking for the cause of my infection. The hospital spine doctor said, well, it's not the spine, so it's not my problem; over to you, infectious disease doctor. Nice to meet you, Jeff, now be gone.
But then I got another spine doctor, and he did another MRI and looked at it.
The radiologist who read the MRI said no infection. The new doctor said, no, there's an infection there. That's why you feel so bad, and that's why we have to keep treating you with antibiotics for the next two months, more than two months.
Same data, different eyes, different perspectives.
Same with AI: have it do a couple of preliminary passes, fine, but to have it be the only read? No.
Well, I think that's what we've learned from the previous stories: maybe you want a human eye on this.
EchoPrime was trained on more than 12 million echocardiography videos paired with cardiologists' written interpretations.
It's done very well: state-of-the-art performance on 23 diverse benchmarks of cardiac structure and function. Outperforming... well, I don't know if it's outperforming doctors; I don't see that claim.
It's designed to assist clinicians, and I guess that's important, not replace them. It produces a verbal summary cardiologists can review and act on, rather than rendering an autonomous diagnosis.
So that's okay, right?
As long as the doctor looks at this.
Well, if the doctor is looking at it and it challenges the doctor, fine.
Right.
It is a second opinion.
Yeah.
I like that challenge.
I am not going to let a robot do surgery on me, but a surgeon in London says he's performed the UK's first long-distance robotic operation, on a patient located 1,500 miles away.
Robotic urological surgeon Professor Prokar Dasgupta said it felt almost as if he were there.
He carried out a prostate removal via a robot.
Robotic urological.
I already know.
Well, I've been there, folks.
I was wheeled into the OR, looked up at this thing that was taller than me, and I saluted it.
Did it oscillate on you?
Yeah.
Yeah.
I mean, the surgeon was there at the controls, but he was four feet away from me.
Well, that's the thing.
He doesn't have to be next to you unless I guess maybe something goes wrong.
Here is the surgeon with his head in the console, probably very similar to what happened to you.
He was 1,500 miles away, not four feet away.
That's the only difference.
But there must be latency in that control, right?
That can't be real time.
Yeah.
I mean, in the middle of a prostate surgery, I don't want to hear the phrase, we've got to reboot the router.
Yeah.
That's no.
The patient, asked, would you let a robot operate on you, said it's a no-brainer.
Which is probably not the best phrase to use when you're getting operated on by a robot.
But I guess if there's a shortage of doctors, this could help, if you have a specialty and someone can't get to you because they're 1,500 miles away from the nearest specialist.
Exactly.
Yes, better than absolutely nothing, for sure.
Yes.
Yeah.
If that's the scale, training more doctors around the world would be a better solution.
Last story.
Travis Kalinick is back.
Oh, good.
The founder of Uber. He wrote a very interesting post on his new site, Atoms.co, titled, I Never Left.
He was fired, of course, by the board.
He says it was just, you know, investors taking advantage of him, because his mom had just died and his dad was seriously injured.
He has a beef with Bill. He doesn't name him.
He has a beef with Bill Gurley.
And we're going to have Bill Gurley on, if you want.
Oh, good.
We can ask Bill about this.
After being booted from Uber... Uber, incidentally, at the time, remember, he had brought in Anthony Levandowski. Travis's whole vision for Uber was that the way Uber makes money is with self-driving vehicles, like Waymo, not with drivers.
Ultimately, it's got to be autonomous vehicles if it's going to make any money.
But as soon as he was booted, they sold off the self-driving portion of the company.
Kalanick went and started a cooking pop-up called CloudKitchens, which turned out to be kind of a real estate play.
And now he's put out a manifesto in which he says, really, all I've ever been interested in is automating the means of production.
He says everything ultimately has to be grown, mined, manufactured, and then transported.
And so his new business is growing, mining, transport.
He says: at Atoms, we make, and this is the key, gainfully employed robots; specialized robots with productive jobs that bring abundance to their owners and society at large.
And don't worry about losing your job, because we're going to need lots of people, initially.
It looks like a pitch deck for investors, which is probably exactly what it is.
If you look at that deck, I'm sure at some point in the deck, it says we're enabling
humans to be radically self-reliant because that seems to be the catchphrase.
Yes, radically self-reliant means you're living in a van down by the river.
If you're lucky, if you've got a van, otherwise just you and the river.
What if, he says, you had an industrial kitchen and needed to make a thousand pancakes an hour? I couldn't think of a worse way to do it than a human. A specialized machine makes pancakes better at large scale: a heated iron apparatus that could cook 100 pancakes at a time to golden-brown perfection.
No awkward robotic arm flipping pancakes; instead, precision cooking, ultra speed and throughput, efficient use of space, designed for the machine.
This is where specialized robotics shine.
Yes.
And who's buying those pancakes now that no one has a job?
He's got to make a pancake robot.
Okay.
How many people are trying to make a thousand pancakes an hour?
I mean.
You know Craig Newmark loves the pancake robot.
He does.
He's got the money to do it.
Yeah.
Well, he actually just flies.
He flies around.
He just goes to airports to get his automated pancakes.
We had a pancake robot in the Brick House.
You did?
We did.
And I missed that.
We had it on The New Screen Savers.
But I thought you had, no, you had a different bread maker machine.
Oh, yeah. We had an Indian chapati maker, and actually one of our employees took it home.
Who has it? Somebody has it.
They weren't very good.
You were going to make better bread, you and Stacey, and you never did.
Oh, yeah.
I was going to send it to you in a FedEx box.
You bet.
Back in the days when you had money.
I missed the days of the Leo box.
The mystery boxes that would show up every once in a while.
And it'd be like, oh, it's a roti maker. Thank you.
Roti, that's right.
It was a roti maker.
And somebody has it.
It's still in the world.
Oh, wait a minute.
No, this is it: printing pancakes with the PancakeBot.
Yes.
There you are.
Oh, you had it.
Oh, you really did.
Okay.
Look at all these pancakes.
You're right, Robert.
Look, there's the twit pancake.
Oh, you're right.
We had a pancake bot.
Oh, I remember the tech.
Oh, it made me.
That is as good a portrait as Bill took.
You can kind of see some features in there, though.
You got, you know, there's some nose and mouth.
I want to eat this so bad.
I really do.
So this is the pancake.
They tasted pretty good.
I mean, it's really pancake, yeah.
I should send this to Craig Newmark.
Yeah.
Just to his office.
Here's the guy who invented the PancakeBot on that little screen.
There's Megan Morrone with the PancakeBot creator.
You're showing this, and I'm fearing The Screen Savers is going to take us down from YouTube.
No, no, this is our Screen Savers.
I know.
I know.
I know.
I know.
I'm joking.
Wouldn't that be funny, though, if the old Screen Savers took it down or something?
I don't even eat pancakes anymore.
This looks like a 3D printer.
It's just a 3D printer with pancake batter.
That is exactly what it is.
With batter.
A 3D printer with batter.
It worked really well.
Cleaning it was a pain.
I don't remember that.
Yeah.
It always is.
And as usual... well, not as usual, sometimes, because Jeff and I are old men, we read the obituaries every morning, and thank goodness we're not in them.
I should mention that Jürgen Habermas has passed.
And many people will know that Jeff refers to Habermas whenever he wants you to take a shot.
What?
Well, Jürgen Habermas.
So besides you, the only person I've heard mention Habermas is Alex Karp, the founder of Palantir, who studied with Habermas.
So he was a German philosopher, yeah? In German philosophy?
If you go to my blog or my Medium feed, I put up a section from the Gutenberg Parenthesis about Habermas and coffee houses.
So tell us about him. He was, by the way, 96, so he had a good, long life as a philosopher.
He created the notion of the public sphere, the bourgeois public sphere, in a book that was very influential.
It took years before it got translated into English; it was interesting, there was a delayed effect.
And he argued that in the salons and coffee houses of England and France there was a reasoned, civil public discourse, and that we should keep going back to that.
And in my research for the Gutenberg Parenthesis, which is on sale now in paperback, you know what I found? The coffee houses were not so civil.
He was wrong. It was almost a conservative view: trying to recreate, recapture some magic time that never was.
As often we do.
As often we do.
Were they places for conversation?
Very much so. And what impressed him was that in England, a country that was fully class-based, anyone could sit anywhere and was expected to do so.
It broke down class barriers.
There were also arguments, right?
Well, anywhere people gather, there are fist fights.
And this was really the beginning of public discourse in important ways. There were publications, the Tatler and the Spectator, and they would listen to what was happening in the coffee houses, and that would appear in the publication. The publication would come back in and feed the conversation in the coffee house. And it was this cycle of public discourse.
It was a fascinating discourse to study.
And he was very provocative, and right in lots of ways.
But many disagree with him.
The other problem was that he called it inclusive. Well, it only included those people who could afford to go and buy coffee and sit there all day. It didn't include women. It didn't include people whose skin was not white. And so it wasn't as inclusive as he thought.
So there were arguments about this from the feminist perspective and from a race perspective.
Nonetheless, give Habermas credit, even though his prose, as I said in one of my earlier books, was as hard to chew as a cold German sausage.
Really hard to read, especially when I'm reading it.
The translators often give up and just put the German words in parentheses: I don't know how to translate this.
But he provoked tremendous discussion
about what does the public mean.
So I'll tell you how important the coffee houses were: as you point out in the Gutenberg Parenthesis, King Charles eventually issued a proclamation for the suppression of coffee houses, because he thought that they were seditious.
Yes.
That's very much like the FCC today.
And a source of fake news.
Yeah.
Habermas is very important in Jesuit formation; he's on the list, at the absolute least.
He's one of the philosophers that we very much push in our early formation, because of critical theory: this idea that all social constructs, everything from truth to knowledge to class, develop from the relationship, the power dynamic, between the dominant and the oppressed groups.
That's a very, very important and usable concept throughout both philosophy and theology.
Man.
You Jesuits are smart.
No, seriously.
You learn a lot of trivia.
We learn a lot of trivia.
My dad went to a Jesuit high school, Regis, and a Jesuit college, Fordham.
And he always called the Jesuits God's Marines.
I don't know what that means, but there it is.
Actually, that's true.
So on July 4th, I'm taking my final vows here in Rome.
Are you?
That's like our last step.
Congratulations.
That's wonderful.
It's a very drawn out process.
You've been going through this literally for decades.
It's 32 years.
Wow.
Wow.
Are you unusually slow? Or is this normal, that it would take that long?
Or is this normal that would take that long?
No.
I am slow because I've been jumped around so much, from, like, DC to Hawaii.
Finally, we got to the place where Father General, who I live with here, just said, no, we're just going to do it.
Let's do it now.
Let's do it.
But there's a lot you have to do.
There's a lot you have to do.
I mean, there's a lot you have to do to get there.
And at this point, you take the final vows.
That's great.
Then the fourth vow is special obedience to the Pope.
And that's where that God's marine things come from.
Wow.
Because the Pope can actually say, I need someone here.
And we've taken the vow saying, OK, I'll go.
It doesn't matter what I'm doing.
I'll pack up and I'll go.
Is that the gang sign, by the way? The sign of the four?
That.
Fourth vow, baby.
Fourth vow.
Hey, I did not know that. And we've been watching this progress for at least 10 years.
I had no idea.
Yeah.
Wow.
And I know you had to do these retreats. There was a lot you had to do.
Oh, yeah.
Congratulations.
Yes.
Thank you.
That's such great news.
So is this it, or is there more to go? Is there a five or a six?
Five more? Another vow?
No, this is final. That's the final, last one.
Four is it.
This is the most Catholic you can ever be.
This is it.
So technically, after I do this, I'm no longer in formation.
I'm a fully formed Jesuit.
So only 30 years of formation.
Wow.
What's the ceremony? What happens?
So here, in the big chapel that we have in our house, the Borgia Chapel; this is our mother house; I will profess my vows again before Father General.
And then we do a, not a secret, but a solemn ceremony in the back with just Jesuits, where I will make a bunch of promises, and then the fourth vow.
Oh, that's great.
Wow.
Do you get a lobster hat to wear?
I should ask about that.
No.
No, I shouldn't be irreverent.
That is so, I'm so happy for you.
That's fantastic.
Thanks.
Is there any insignia or a sign that you can wear? Hash marks on your sleeve?
No. There used to be; that's actually where this comes from.
This caused a lot of hurt, because it used to be that when you got to this point, you were judged, and if you did not meet the standards, you would not get the fourth vow.
You would get only three vows.
Ooh.
But my generation has really turned that around. We don't see it that way.
It's not an extra bonus; it's not status, that you have four vows.
It just means that the work you do,
it's completed.
That allows you.
Correct.
Correct.
Yeah.
They don't take your soul and weigh it up on a scale against a feather or anything like that.
No.
Oh, I love that. Isn't that great?
It's the feather and the heart. It's the Book of the Dead.
Yeah, that's the old Egyptian way.
Speaking of great announcements, once again, let's reiterate: today Jeff brought us the first scoop. It's now on the blog: your new book series, Intelligence: AI and Humanity, begins, and it begins with our guest, Rumman Chowdhury.
This is going to be for Bloomsbury Academic. How many books will there be?
Three to five.
A year?
A year.
It's been a bit of a process to get this far, but I'm delighted. This is a big deal for me, and these three authors are signed up: Matthew Kirschenbaum, Charlotte McLean, and Rumman Chowdhury.
It's a great beginning.
And they won't come out until early next year, which is what happens with books.
But I'm looking for people to come to me
and ask topics, questions like, what is education?
What does learning mean now?
What is creativity?
What is consciousness?
Those kinds of topics.
I want to look, Father Robert, at this notion of the hubris of man thinking he creates the Übermensch.
What does it mean to put yourself in the position of thinking that you're godlike?
What are the theological implications of AI?
There are lots of things there. Maybe I could get the Holy Father to write a book for it?
Probably, actually.
I'm not joking about that.
You witnessed it here first.
That would be Mark Twain's publishing house.
One of the things that actually took it down
was very excited that he thought
that everyone in every Catholic of the world
would buy a biography of, I think it was the prior Leo, I believe.
Oh, yeah.
It didn't sell as quite as well as they hoped.
Look, anything the Pope would put out would be an encyclical or an official letter, so they tend to be kind of dry and very technical.
Yeah.
You wouldn't want Leo. You would want one of the cardinals.
No, even better: one of the priests who are working on the commission, because their stories would be far more interesting.
I've pointed about all this.
Horace Nockley, who's the publisher of many other titles
at Bloomsbury Academic.
He called me one day and said,
think about a book series about AI.
Hell, yes.
And what excites me about this is that it's not a book series about the technology. It's about society and the technology: how society reflects on technology and, in turn, technology back on society.
And I think the opportunity here is that it forces us to reexamine, reimagine many topics about our life and society.
So that's what's exciting.
And it's what we like to do on this show, too.
Exactly.
And not just talk about the technical details.
Well, that's great.
Congratulations.
Thank you for the opportunity; I'm proud to announce it here.
We were down to the wire on Rumman's contract with the agent. I was pushing both sides: can we please get this done while Rumman's on this week?
Oh, that's good.
Well, that was nice that we could help you
with some leverage.
Thank you.
Yes.
Let's wind this up as we always do with picks of the week.
Normally, we'd start with Paris. I don't know, Father Robert, if you've got anything in mind that you might want to promote or talk about.
Not for myself that I can talk about.
I will say that I am so, so happy
with what I've seen in the film version
of Project Hail Mary.
Really?
Ah.
Seriously.
I mean, I-
The reviews are very positive.
I did not know how they were going to turn The Martian into a decent film, because I loved the book, but they did it.
but they did it.
And I think they've done it again with Project Hail Mary.
It's funny. The guy who wrote the script said he thought, there's no way I can take this book and make a movie out of it.
But the reviews have been very positive.
I have tickets to see it Thursday.
Lisa and I are going to go see it Thursday.
Very excited about it.
I will have to wait till I get back to the States in April.
Oh, damn.
Release schedules, you know.
We will return with our picks of the week.
Congratulations to both of you.
It's kind of fun to work with such prestigious fellows. Me, a mere podcast host, and I get to hang out with smart guys like you: Father Robert Ballecer, the digital Jesuit, soon to be a member of the club of the four, the sign of the four.
Mr. Professor Jeff Jarvis. And don't forget, Hot Type is still coming before the new books come out. Hot Type is just around the corner in August.
This episode of Intelligent Machines is brought to you by Modulate. Every day, enterprises generate millions of minutes of voice traffic. I mean, we're talking customer calls, agent conversations, fraud attempts, right? Most of that audio is still treated, you know, basically like text: flattened into transcripts, stripped of tone, intent, and, most importantly, of risk.
Well, Modulate exists to change that.
Modulate started in gaming. Modulate's technology was proven by supporting major players like Call of Duty and Grand Theft Auto.
As you might imagine, these massively multiplayer games have a lot of audio,
players talking to each other.
Modulate helped these companies separate playful banter from intentional harm at scale. Not easy to do, by the way.
Today, Modulate helps enterprises, including Fortune 500 companies, understand 20 million minutes of voice every day by interpreting what was said and what it actually means in the real world.
This capability is powered by Modulate's newest model, VELMA 2.0. VELMA is voice-native. We were just talking about specialized models: it's a voice-native, behavior-aware model, built to understand real conversations, not just transcripts.
not just transcripts,
it orchestrates 100 plus specialized models,
each focused on a distinct aspect of voice analysis,
to deliver accurate,
explainable insights in real time.
VELMA does really well: it ranks number one across four key audio benchmarks, beating all the large foundation models in accuracy, cost, and speed.
beating all the large foundation models in accuracy,
cost and speed.
It's number one in conversation understanding.
Number one in transcription accuracy and cost.
Number one in deep fake detection.
That's huge.
And number one in emotion detection.
That's hard.
Built on 21 billion minutes of audio.
VELMA is 100 times faster,
cheaper and more accurate than LLMs at understanding speech.
That includes Google's Gemini, OpenAI, xAI.
Most LLMs are black boxes.
VELMA doesn't just assess a conversation as a whole: conversation in, transcript out. It breaks it down for greater accuracy and transparency by producing time-stamped scores and events tied to moments in the conversation.
Meaning you can see exactly when risk rises,
when behavior shifts,
when intent changes,
With VELMA you can improve your customer experience, reduce risks like fraud and harassment, detect rogue agents, and more.
Go beyond transcripts and see what a voice native AI model can really do.
Go to Modulate's live, ungated preview of VELMA at preview.modulate.ai. That's preview.modulate.ai, to see why VELMA ranks number one on leading benchmarks for conversation understanding, deepfake detection, and emotion detection.
That's VELMA at preview.modulate.ai.
We thank Modulate so much for supporting Intelligent Machines.
Father Robert recommended we all go to the movies, which I'm going to be doing. I'm very excited about seeing it.
Just like you, I had some trepidation.
Just like you, I had some trepidation.
Yeah.
It's a complicated book.
From the clips I've seen, they got the tone right.
They got the playful tone, the amazement tone,
and Gosling actually might be the right actor for that.
Yeah.
So we had Andy Weir on, and he had just learned that Ryan Gosling was going to play the role, and who was going to direct it.
And honestly, I was a little... I was like, Ryan Gosling, really? But actually, the more I think about it, the more I could see how he could play that kind of nebbishy kind of, you know, well, I don't want to give away anything.
Yeah, exactly.
Character, let's say.
Funny point, though: I had him on Triangulation right after it was announced that Matt Damon was going to play the character in the movie.
Right.
And you had him right after Project Hail Mary. So for both of his books that got turned into movies, he was on TWiT.
We have, oh, we've interviewed him for every book he did.
Yeah.
And I hope to interview him when the movie comes out.
Well, we'll try to get him.
And he's a great guy.
And I think I think pretty well disposed towards the network.
And Ryan Gosling is also a good actor. He's not just a pretty boy; he's a good actor.
Okay.
Very nice.
Well, if you say so, I believe you.
No, you know what? I was spoiled by La La Land, I'll be honest.
I am going to take a paragraph from Jeff Jarvis's Gutenberg Parenthesis and put it into, get this, other forms of the written word.
So my pick of the week, actually, I have several, but I'll start with this one, is Kagi's
translator.
Kagi's translator is really good.
I am a Kagi fan.
We had Kagi CEO on a couple of months ago.
Kagi does a variety of languages, you know: Chinese, English, all the usuals. But they also have fun languages: corporate jargon, Dothraki, Elvish, emoji speak, Gen Z, High Valyrian, Klingon.
But I thought we should see if we could turn Jeff's academic passage into full-on pirate speak.
It also has Middle English, Na'vi, and pirate speak.
Might be better in pirate.
No, no, no, no. No pirate speak.
No.
No.
Okay.
Well, I'm going to... well, too late.
You just did it.
"Habermas be thinking too highly not only of the scurvy dogs who would frequent the coffee houses, but of their parlay as well."
Hey, that's pretty good.
"Buildin' his tale on the belief that their bickering be rational and critical."
How about LinkedIn speak?
I don't even know what that... oh, it gives it bullet points with emojis. It gives it tags: thought leadership, networking, Habermas, community building, public sphere.
Yeah, look at that.
Yeah.
I wonder how it is in Dothraki.
"Habahashish kaifah. Habahahah."
Works in German.
Emoji speak.
Have you ever written your books in emoji?
Actually, Hot Type, I'm very proud to say, has an emoji in it.
Oh, very nice.
Very proud.
It befits Hot Type.
What about Gen Z?
Sure.
"Boss was low-key glazing the coffeehouse crowd, acting like their yapping was actually deep and logical. Cowan called him out for being a total circular-logic merchant, saying he just fell for the hype Addison and Steele were selling in their mags."
Their mags?
We're literally trying to manifest that exact vibe.
We should send that to Paris and see if it resonates anyway.
This is a lot of fun.
There's also Reddit speak, which, I don't know: "So Habermas basically idealized the hell out of coffeehouse culture. He didn't just hype up the people there, but also the discussions, claiming the debates were peak rational and critical thinking. But then Cowan comes in like, hold my beer, and points out that Habermas is basically caught in a circle."
This is pretty good.
This is good.
I got to say it.
It's good.
Leo, Jeff and I were talking about this when you went for a bite because I showed him
one, the LinkedIn speak.
So the English was... oh, you've already done this.
No, no, no.
Let me tell you the example.
So what did you use?
I have been arrested for fraud.
What did it say?
It said: "I'm thrilled to announce that I'm starting a new chapter. I've recently been given the unique opportunity to step back and reflect on my professional journey from a high-security environment. Finally, I'll get to write that book."
Wow.
That is pretty awesome.
So thank you, Kagi, for doing something pretty great.
And then one other thing I'll show you, because we've been talking a lot about local models.
I have a really good little program called LLM Fit. It's an open source program; you can find it on GitHub, and you can run it on your machine to see if you can run an AI locally.
But maybe this would be easier: it's called canirun.ai.
You can tell it what machine you have.
Oh.
And what graphics capability and so forth.
So let's say you've got one of those brand-new M5 Max computers with, let's say, 64 gigs of RAM.
And you can see which models will run best on that hardware.
These are the local models.
So this is very handy. Mistral Small!
You can only run Mistral Small on that puny little girly machine of yours.
So anyway, you can filter it for code. You can choose providers. You can choose licenses. You can choose how it's sorted, and so forth.
I think this is very nicely done.
It's canirun.ai.
And then there's the one I have actually used, on GitHub: the open source tool called LLM Fit, which you can also download and run. It works quite well. Same idea, although it takes a lot longer, because it's actually doing the work on your machine.
And it's a TUI, which, as you know, I'm quite fond of.
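Roughly, this is the arithmetic a tool like canirun.ai or LLM Fit is doing under the hood; the overhead factor here is a rule-of-thumb assumption for illustration, not either tool's actual method:

```python
# Will the weights, plus some runtime slack, fit in your RAM?
def fits(n_params_b: float, bits: int, ram_gb: float, overhead: float = 1.25) -> bool:
    # overhead covers KV cache, activations, and the runtime itself (assumed)
    need_gb = n_params_b * bits / 8 * overhead
    print(f"{n_params_b:.0f}B @ {bits}-bit needs ~{need_gb:.0f} GB of {ram_gb:.0f} GB")
    return need_gb <= ram_gb

fits(120, 4, 64)   # GPT-OSS 120B at 4-bit: ~75 GB -> too big for 64 GB
fits(24, 4, 64)    # a 24B model at 4-bit: ~15 GB -> fits comfortably
```

The real tools also factor in GPU VRAM, context length, and quantization format, which is why they're worth using over napkin math.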
Jeff, your pick of the week.
So let's see. We could have some schadenfreude over BuzzFeed, but I don't want to dance on anyone's bankruptcy and all that. I won't do that.
Instead: the Washington Post tried a White Castle from an airport vending machine.
Oh, I thought you did it.
No, I didn't.
Well, I'll get to my personal one in a second.
Oh, okay.
And it was bleak, says the Post.
Now, it also points out that there are no White Castles in Boston, so they don't know how bleak a White Castle is, normally.
See, I would imagine... I mean, the whole key to the White Castle is piping hot, dripping grease, which is soaking up into the bun. It's steamed over the onions.
And the bun has that flavor-crystal, steamed, squishy part to it.
So this is a vending machine in Terminal A at Logan. There's a California Pizza Kitchen, and the men's room.
Great.
Good thing it's close by.
Yes.
At least it's nearby.
I mean, Spain has vending machines that sell ham.
But that's jamón, though.
I bet it's good. I mean, ham probably does all right in a vending machine, probably.
Yeah.
I'm scared.
So in the spirit of this, after all the attention in the last week or so for the CEO of McDonald's eating the Big Arch with no enthusiasm, which got such a reaction, I decided, because I have to have more iron, that I would sacrifice for the show.
Is there a picture of you?
I didn't take a picture. It was too disgusting.
I went in and I bought a Big Arch. I ate less than half of it.
And the bun was okay, but they put so much special sauce on. It's two quarter-pound patties.
Nice.
And slices of cheese.
Oh, that's too much.
That's right.
And grilled onions, and the sauce. Well, the sauce is such that, when you bite into it, the patties start slipping out; that's why the CEO had to be so careful. It was disgusting. It was a big mess.
So I saved you ten bucks, folks.
Ten bucks.
Yeah.
Go instead and spend $34 and get a French dip sandwich at Saul Hanks in New York.
Yeah, exactly.
You'd be much better.
I had one of the Big Arches, but that was only because it was free.
Ah.
Wait a minute.
Did they deliver?
How come it was free?
No. So there is a McDonald's on Vatican property, just right next to St. Peter's.
The reason why they allowed there to be a McDonald's on Vatican property is because that McDonald's agreed to give away X number of meals to the homeless every day.
And so I was there towards the end of the night and they said, well, father, would you,
would you like, would you like this?
Ha, ha.
What'd you think?
I don't think they should be feeding that to the homeless.
Well, sir, I worked when I was a kid in high school.
I worked at a McDonald's and McDonald's, you know, it was very tightly controlled inventory.
They don't want the employees to be eating the food and so forth.
So they, but they also very careful about when a hamburger's been sitting in the bin
too long.
They don't, they don't want to sell it.
So they have what's a white plastic bin called the waste bin and when a hamburger has
exceeded its time limit in the bin, they throw it in the waste bin and at the end of
the day, you count the waste to make sure that, you know, everything is accounted for,
which I suppose is a good inventory practice.
But we thought, Jesus, such a waste. Maybe we could donate this to the local dog pound, you know, the shelter.
Oh, yeah.
Well, nice.
The dogs would like it.
The shelter turned it down.
They said there's not enough protein.
Well, we don't want it.
No, we don't want it.
In Italy, we don't use the same nuggets and burgers that they use in the United States, because they're not classified as food.
They're not food according to the EU.
Yeah.
You mean pink goo is not food? You know, by the way, I worked at McDonald's as well when I was a kid.
Did you?
Yep.
The one at Mission Hills. And Jeff worked at Ponderosa Steakhouse.
We had to count. They had these little tiny white cups for the sour cream, so they couldn't have you giving out too many; if you needed five of them, you had to count those.
We had to count every little cup.
Wow.
Hmm.
Well, I'm just saying, thank God for the Ozempic, because otherwise I'd be craving a Big Mac right about now.
Yeah.
If you see the Big Arch, it's just so over the top.
It's just so America.
It's over the top.
I am. I mean, I was hooked on McDonald's for a long time.
Oh, I was thinking it was from working there and eating so much of it.
Yeah.
It's full of sugar. It's just like your Coca-Cola.
It really was always my hangover cure, because, raised as a Chinese person, it was American. It couldn't be more American.
The best hangover cure is Taco Bell.
Yeah. Taco Bell.
I'm Taco Bell here.
McDonald's is the last place in America where you can get a $5 meal.
Well, it's true, although it wasn't true for a while. They had to really... they just introduced the $3 value meal, which includes the Sausage McMuffin and an orange juice or something, I hope.
They're trying to make it affordable again.
God bless. You know, they need to.
Well, one of my wife's students works at McDonald's. She teaches ESL. And her hours have been cut back because prices have gone too high.
I really learned how to work.
You know, they say, don't, you're never standing still.
You're always, if you, if you don't have something to do, clean, you know, always, always
be working.
And did you learn to work a lot faster because of it? Or do what I would do, which is sabotage the shake machine? That was kind of my job.
Well, this was very early on. We had shake machines, but we didn't have McFlurries yet.
So, Father Robert, so nice to see you.
see you.
Congratulations on your ascension. Is it called that?
No, it's just final vows.
Ascension sounds like I'm converting into energy or something. But I'm very happy for you.
That's such wonderful news.
And I hope that we get to see you soon, maybe even in the Bay Area, but at least on our
microphones here for the podcast.
We love having you on.
Father Robert Ballecer, the digital Jesuit, Padre SJ on Bluesky and all the other platforms.
And of course, the Jesuit Pilgrimage app on iOS and Android. It's a great way to follow St. Ignatius of Loyola's pilgrimage across the world.
Thank you, Robert.
Jeff Jarvis, congratulations to you, too.
Congratulations on the book series.
Very exciting.
Very happy for you.
Thank you for the opportunity to plug it here.
Yeah.
Thanks for letting us be the first to tell the world.
Jeff's book Hot Type is available for pre-order. You can also get The Gutenberg Parenthesis, now in paperback, and Magazine, a wonderful read.
And he will be back next week with Ms. Paris Martineau for another thrilling, gripping edition of Intelligent Machines.
We do the show every Wednesday, right after Windows Weekly: 2 p.m. Pacific, 5 p.m. Eastern, 2100 UTC.
You can watch us live in the Club TWiT Discord.
Thank you, club members, for making that all possible; actually, for making everything possible. Without the club members, I don't know what we'd do.
If you haven't joined yet, go to twit.tv/clubtwit. Please join the club.
Everybody can watch us live during the show production: YouTube, Twitch, X.com, Facebook, LinkedIn, and Kick. After the fact, shows end up at twit.tv/im, or on YouTube; there's an Intelligent Machines channel there for the video.
It's a great way to share little clips with friends and family. Spread the word, spread the goodness.
And of course, you can subscribe in your favorite podcast client and get it automatically the minute it's done.
Thank you, everybody, for joining us.
We'll see you next time on Intelligent Machines.
Hey, everybody. I'm going to bug you one more time to join Club TWiT.
If you're not already a member, I want to encourage you to support what we do here at TWiT.
You know, 25% of our operating costs come from membership in the club.
That means we can do more.
We can have more fun.
You get a lot of benefits: ad-free versions of all the shows, access to the Club TWiT Discord, and special programming, like the keynotes from Apple and Google and Microsoft and others that we don't otherwise stream in public.
Please join the club if you haven't done it yet. We'd love to have you. Find out more at twit.tv/clubtwit.
Thank you so much.