
In this episode, we explore Anthropic's new AI code review tool designed to check AI-generated code for bugs and security risks. We also hear a personal message from the host regarding a birthday request for podcast reviews.
Chapters
00:00 Anthropic's New Code Review Tool
00:48 Birthday Request and Review Segment
04:23 The Problem with AI-Generated Code
08:28 How Code Review Works
10:13 Multi-Agent Architecture and Pricing
12:27 Impact on the Software Industry
Tyler Reddick here from 23XI Racing, another checkered flag for the books, time to celebrate
with Chumba.
Jump in at chumbacasino.com. Let's Chumba.
No purchase necessary. VGW Group. Void where prohibited by law. See T&Cs. 21-plus. Sponsored
by Chumba Casino.
Welcome to the podcast.
I'm your host, Jaden Schaefer.
Today on the podcast, we're talking about a new tool Anthropic has just launched.
Basically, we have this issue where 70 percent of code at some companies, and 90 percent
at others, is being generated by AI.
Anthropic has just launched a new code review tool that is going to be able to check this
massive flood of AI-generated code to see what's good and what's not, and I think this is going
to be awesome for developers, but also for all of us users of the software; there's a lot
of cool implications and a lot of stuff that I am excited about.
So, I want to break down everything going on here, because I think we're about to get
a lot less buggy software, and a lot of software is going to get a lot more usable.
There's obviously going to be rejoicing, but there are also some drawbacks to all of
this.
So, I'm going to talk about all of that.
Before we do, I actually have a request to make.
This week is actually my birthday week.
I am turning 30.
I'm super excited.
It's crazy.
It feels weird turning 30.
But there is one request I would ask for my birthday if you would not mind, and this
is something that I'm not going to beg you for the rest of my life for, but for my birthday
week, this is the thing.
I'm not going to plug my company AI box.
I'm just going to ask for this.
If you could leave a rating and review on this show for my birthday week, it would be
amazing.
I've spent the last three years of my life uploading a podcast episode to this show
almost every day.
So, if you've gotten any value at any point in the last three years, if you're a new listener,
if you haven't already, this is the time to do it.
It is my birthday week.
I'm turning 30.
I would super, super appreciate a review on the podcast.
And as a celebration, and I don't know what you want to call this, but as a fun way
to say thank you, I will actually be reading the most recent reviews, the good and the
bad, the five-star and the one-star reviews that I've gotten, and I'll give you a quick
response to each.
This is something I don't usually do; especially if I get a one-star review, I'm not
going to sit there and argue with the person.
If you want to move on from the show, that's cool.
If you get value out of it, that's cool too.
But because we're doing this for this one week, which I'm dubbing review week, I'm going
to read them.
So, we're kicking this off with one of my most recent reviews I got.
This was on March 2nd, and it is a one-star review.
So, fair warning.
This is a one-star review.
And this is what it said.
It's from Hemacham, and he says: stop the Islamophobia.
When was the last time you heard about Saudi Arabia being an enemy to the U.S.?
This is a one-star review.
I think this review is specifically responding to my episode, OpenAI steals $200 million
contract in Anthropic versus Pentagon battle, and basically what happened, you guys all know.
I think emotions are high.
We have Anthropic that has this whole battle with the Pentagon, and then OpenAI comes
in, jumps in, and steals it, and this is right before Iran gets invaded.
I'm not exactly sure what I said in this podcast that, I don't know, made this person
so upset as to call it Islamophobic.
I mean, evidently from this, I was probably criticizing the country of Saudi Arabia,
which, by the way, I think is generally a good partner to the U.S. as an ally, even if
you hate them because of how their government is set up.
We buy their oil, we use their oil, so we get a lot of value out of that partnership.
We send them a lot of military supplies; they're kind of an ally in that region, so generally
I'm happy with that.
I actually almost took funding from a huge Saudi Arabian incubator over there and almost
went and moved to Saudi Arabia for three months.
We've got a few kids, so my wife at the end of the day didn't want to move into an apartment
in Saudi Arabia for a few months for that program, so never mind, I didn't do it, but you
know, I've considered it, and I think Saudi Arabia is generally good.
The only response I'll say on that is that obviously whatever I said wasn't Islamophobic,
since I'm not Islamophobic; I think all people, with all their beliefs and religions,
are awesome, since I have my own beliefs.
But what I will say is I would just encourage that person, or anyone listening: don't
misconstrue me criticizing the country of Saudi Arabia, especially when I'm typically
criticizing countries in relation to AI policy, as being Islamophobic or disliking your
culture or whatever; I just think that's a pretty shallow take.
I'm going to criticize every government if I think they're not doing something smart,
including the US government. My goal is to be unbiased and academically honest.
All right, thanks for listening to my rant.
If you could leave a rating or review during this review week for my birthday, I would
super appreciate it.
Let's get into the episode.
So I think right now peer feedback has been one of the most important, but kind of tricky,
safeguards in software development.
It helps teams catch bugs early.
And you can also keep your consistency across your whole code base.
You can improve the overall quality of all of the software that you're shipping.
This is something that we see with my startup AI box all the time.
I think right now we're doing all of this vibe coding, even myself, I have tons of vibe
coded projects on the side.
It's sometimes hard to productize them because of tricky nasty bugs in there.
And if you're not a developer, it's hard to catch, find, and fix them.
And so, with developers using a lot of AI tools to generate code, like, you know, Claude
Code or other players like Codex from OpenAI, we're generating tons of code right now.
And that's really cheap and really fun and really fast.
However, beyond just speeding up development, I think a lot of these tools can also
introduce a whole bunch of hidden bugs, security risks, and basically code that developers
don't fully understand.
So then it's hard to catch all of those kinds of hidden bugs and security risks.
And Anthropic is building something they think is going to be the solution for this,
which personally I'm super stoked about.
I use Claude Code on my startup AI Box.
And so this is a new AI that can review the AI-generated code.
They're calling this code review.
It's built inside of Claude Code, and it's essentially designed to automatically analyze
these pull requests.
And then it's going to flag any potential risks or issues before they actually make it
into production.
This is what they said about it.
This is Anthropic's head of product, Cat Wu, who said: we've seen a lot of growth in Claude
Code, especially within the enterprise.
One of the questions we keep hearing from enterprise leaders is, now that Claude Code is
generating a huge number of pull requests, how do we review them efficiently?
Pull requests are basically just the way that developers submit code changes for review
before they're merged into a project.
But Wu says that AI-assisted coding has dramatically increased the volume of those requests,
which is kind of creating a new bottleneck.
And to be honest, I actually have heard this.
It was funny.
There was a moment where OpenClaw, you know, went mega viral.
It's kind of this agent that can run on its own computer and take over and do all these
tasks for you.
OpenClaw's founder, like a one-man team running this thing, gets acquired by OpenAI
because it went super mega viral and so many people were using it.
And it's funny, because even after the acquisition, I remember seeing him post on X and
say, hey guys, you're submitting so much, because it was open source, right?
So anyone can kind of submit code to make improvements to the project, which is super
cool, you know, super cool that he built it that way.
But he was saying, look guys, it went so viral, I'm getting so bogged down trying to
review all of the code you guys are submitting.
And he had a certain number of pull requests he said he was able to review every day,
but he was going, you know, full speed trying to get as many done as he possibly could.
And it was a huge struggle.
Basically, it's very, very difficult.
In any case, this is definitely a huge problem for a lot of people, especially when you
look at some of this open source stuff; some open source communities won't even allow
AI-generated code.
I don't think that's the most common stance, but I think it's just hard for them to
always know what's going to have bugs or issues, and to properly review it all, because
people can just push so much.
So this new feature is going to launch in a research preview for Claude for Teams and
Claude for Enterprise customers.
It's going to, I think, come at a pretty important moment for Anthropic.
Obviously, like I was mentioning earlier in the podcast, they have this big, high-profile
dispute with the US Department of Defense.
They've been designated a supply chain risk.
They filed a couple lawsuits to kind of, I don't know, fight that.
So Anthropic has a big moment right now.
A lot of people are looking at them.
I think at the same time, Anthropic is saying that their enterprise business is booming.
Subscriptions have quadrupled since the start of this year; they are on an absolute tear.
Claude Code's run-rate revenue has already passed $2.5 billion, which is insane, because
it was actually one of their developers over at Anthropic that kind of built it as a side
project.
And now, you know, it's doing more than $2.5 billion in run-rate revenue.
According to Wu, code review is going to be aimed at, for the most part, large engineering
organizations that are already using Claude Code; companies like Uber, Salesforce, and
Accenture are already using it.
The engineering leads are going to be able to enable the feature for their teams, which
basically allows it to automatically analyze every pull request once you turn it on.
And then the system is going to integrate with GitHub, and it's going to leave comments
directly on the code, which is going to point out any issues and basically suggest fixes.
So, instead of a human developer having to manually code review all these things
themselves, they're just going to see that Claude has come through, skimmed it, written
a code review, highlighted any issues, pointed them out, and given notes, and they can
go review just those notes or any points of interest or concern that it might have.
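
To make that concrete, here's a rough sketch of what leaving a review comment on a pull
request looks like through GitHub's REST API. To be clear, this is just an illustration of
the general mechanism, not Anthropic's actual integration; the repo, token, commit, and
finding text below are all made up.

    import requests

    # Hypothetical values; a real integration would get these from its GitHub app config.
    TOKEN = "ghp_example"
    OWNER, REPO, PR_NUMBER = "acme", "webapp", 42

    # GitHub's endpoint for attaching a review comment to a specific line of a PR.
    url = f"https://api.github.com/repos/{OWNER}/{REPO}/pulls/{PR_NUMBER}/comments"
    payload = {
        "body": "Possible logic error: this loop never terminates when items is empty.",
        "commit_id": "abc123",   # head commit of the pull request (hypothetical)
        "path": "src/cart.py",   # file the comment attaches to
        "line": 58,              # line in the diff
        "side": "RIGHT",         # comment on the new version of the file
    }
    resp = requests.post(url, json=payload, headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    })
    resp.raise_for_status()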
So, I think unlike a lot of other automated code tools that mostly focus heavily on
formatting or style, Anthropic is intentionally designing code review to focus on logical
errors, which is interesting.
Wu was commenting on this and said: that's really important.
A lot of developers have seen automated feedback before and they get annoyed when it's
not immediately actionable.
We decided to focus purely on logic errors, so we're catching the highest priority problems.
I think when the AI identifies an issue, it basically explains its reasoning step by step.
So, it's going to actually outline what it believes the problem is, and then it's going
to say: this is why it matters, and this is how it can be fixed.
And by doing this, issues are also going to be labeled by severity, with color coding:
red is the critical problems, yellow is potentially an issue, and purple is bugs that are
tied to historical or legacy code.
You're going to have this color coding, so you can skim through it.
They're trying to make this fast and easy for developers, to make their workflow basically
as streamlined as possible.
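
As a rough mental model, and this is purely my own sketch rather than Anthropic's actual
schema, you can picture each finding as a small structured record: what's wrong, why it
matters, how to fix it, and how severe it is.

    from dataclasses import dataclass
    from enum import Enum

    class Severity(Enum):
        RED = "critical problem"
        YELLOW = "potential issue"
        PURPLE = "legacy/historical bug"

    @dataclass
    class Finding:
        path: str            # file where the issue lives
        line: int            # line the review comment attaches to
        problem: str         # what the reviewer believes is wrong
        why_it_matters: str  # step-by-step reasoning, condensed
        suggested_fix: str
        severity: Severity

    # A hypothetical example finding:
    finding = Finding(
        path="src/auth.py",
        line=112,
        problem="Token expiry is compared against local time, not UTC.",
        why_it_matters="Sessions can outlive their expiry in some time zones.",
        suggested_fix="Use datetime.now(timezone.utc) for the comparison.",
        severity=Severity.RED,
    )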
I think under the hood, the system is going to use this multi agent architecture, which
is important, right?
It's not just one agent.
They have multiple agents running through this.
A couple of the AI agents are going to analyze the code base in parallel.
So it's not like you run this thing once and have to wait for it to finish; there are
multiple agents running through different parts of this at the same time, examining the
pull request from different perspectives.
Then there's going to be a final agent that aggregates the findings.
It's going to remove any duplicates, right?
Because if two agents are running through and they both see a security finding, maybe
related to two different sections, and they both report it, there's going to be one agent
that kind of, you know, merges those two together.
It's going to remove the duplicates, and then it's going to rank the most important issues.
The tool is also performing kind of a light security analysis.
I think they intentionally want to say, look guys, this is a quote-unquote light security
analysis; they don't want people to get overly confident that it's going to fix every
security issue that could ever come from AI-generated code.
But yeah, I think it is important that we're starting to have this conversation, because
this absolutely is an issue in the industry.
Engineering teams are then going to be able to customize any sort of additional checks
based on their own internal standards, which is cool, right?
It's beyond just like, hey, we built a tool that can do this for you.
It's like, well, if you have anything that you, you know, frequently need to check
inside of your code or your industry, you can go add those checks to it.
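
Just to illustrate the idea of team-specific checks, you could imagine something like the
following; the format here is entirely hypothetical on my part, not Claude's actual
configuration syntax.

    # Hypothetical team-specific review rules, expressed in plain language and
    # appended to the reviewer's instructions.
    CUSTOM_CHECKS = [
        "All database queries must use parameterized statements, never string formatting.",
        "Any function touching payment data must log an audit event.",
        "Public API endpoints must validate request bodies against a schema.",
    ]

    def build_review_prompt(diff, checks=CUSTOM_CHECKS):
        rules = "\n".join(f"- {c}" for c in checks)
        return (
            "Review this pull request for logic errors and for violations of "
            f"these internal standards:\n{rules}\n\nDiff:\n{diff}"
        )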
And then I think for deeper security reviews, Anthropic also has a separate product called
Claude Code Security that can go even deeper on all of that.
I think because the system is running, you know, multiple agents simultaneously, code
review can be pretty computationally intensive; it's going to use a lot of compute.
The pricing follows the same token-based structure that they use for all of their AI services.
So basically, the costs are going to depend on the size and complexity of the code that's
being analyzed.
They're kind of estimating right now that the average review is going to cost $15 to $25.
And of course, the argument there is that this is some added cost, but I mean, come on:
if you were to go hire an analyst, a developer, or a security researcher to do this, it
would cost hundreds or thousands or tens of thousands of dollars, not $15 or $25.
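
To give a feel for where a number like that could come from, here's some back-of-the-envelope
math. The token volumes below are purely my own illustrative assumptions; the per-million-token
rates are roughly in line with published Claude Sonnet API pricing, but Anthropic hasn't
published this breakdown.

    # Illustrative only: assumed token volumes for a multi-agent review of a large PR.
    input_tokens = 3 * 1_500_000   # e.g., 3 parallel agents each reading ~1.5M tokens of code/context
    output_tokens = 100_000        # findings, reasoning, and suggested fixes

    # Assumed per-million-token rates.
    input_rate, output_rate = 3.00, 15.00

    cost = (input_tokens / 1e6) * input_rate + (output_tokens / 1e6) * output_rate
    print(f"Estimated review cost: ${cost:.2f}")   # about $15.00, at the low end of the range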
So this brings the cost down significantly.
Again, there are a couple interesting thoughts from Wu, who said: this is coming from an
enormous amount of market demand as engineers build with Claude Code.
The friction to create new features drops dramatically, but the need for code review increases.
Our goal is to help enterprises build faster than ever while shipping far fewer bugs.
I'm excited for this personally.
I think a lot of these kinds of vibe coding tools know this is an issue.
One tool I use a lot to vibe code things has a built-in security feature where it scans
your whole project and highlights different security issues, and you can go apply fixes
and have it fix some of those issues, or it tells you what to do to fix them.
I think this is incredibly useful.
So I'm excited that Claude and Claude Code are going to be integrating this.
I mean, of course, because I use Claude Code a lot at my startup, but also, I think just
broadly for the whole industry, we're going to see a lot fewer bugs.
And hopefully, with Claude doing it, it's kind of setting the standard for the whole
market, and hopefully we can see more of the other players in the space doing similar
things.
So excited to see where this kind of goes in the future.
Guys, thank you so much for tuning into the podcast.
Remember, if you haven't already left a review, I would really, really appreciate a review
on the podcast.
We are past 150 reviews, and I would love to get to 200 before I turn 30 this week.
Guys, it's my birthday.
If you can leave me a review, I would appreciate it.
I hope you guys all have a fantastic rest of your day.
VGW Group. Void where prohibited by law. 21-plus. Terms and conditions apply.



