
John Melas-Kyriazi is the co-founder and CEO of Standard Metrics, which powers portfolio management for more than 150 venture capital firms and 10,000 companies. He runs a high-growth SaaS company at the leading edge of the AI wave. How is AI transforming how investors source and diligence deals? How are agents parsing pitch decks, prepping board meetings, and building powerful new workflows? And as competitive advantages shift, what are the new moats for SaaS companies in the AI age?
We discuss these and other timely topics with John. At an early age, he fell in love with science fiction and built vacuum-tube amplifiers in his parents' basement, before studying physics and materials science at Stanford. There he became a research scientist until pivoting to investing, first at StartX and later Spark Capital. Born out of firsthand experience, he co-founded Standard Metrics alongside the 8VC Build team to create a better solution for portfolio management software.
We begin our conversation with John’s path from academia to investor and founder. Next, we explore how Standard Metrics centralizes data, improves portfolio intelligence, and powers smarter investment decisions. Then we dive into the new possibilities with AI, from technical diligence copilots and investment stress-testing to new workflows and internal AI analysts. In an era of new agentic tools and shifting competitive advantages, we discuss new moats for existing software companies built around network effects, data, and more. The world of finance is changing quickly; John offers a unique perspective on the AI wave and how top investors are leveraging new workflows to get ahead.
00:00 Episode intro
02:10 Academia to investor and founder
08:00 The pain point that led to building Standard Metrics
12:10 How AI is transforming venture capital
14:20 AI as a technical copilot
17:10 AI Analyst vs human in the loop
19:20 Parsing pitch decks and new AI tools
23:10 How AI search is changing marketing
28:30 New capabilities and workflows
33:47 How quickly is everything changing?
AI as a technical copilot for diligence is actually one of the most interesting use cases for AI as an investor right now. If you come across a really interesting company, you can get really, really deep within a matter of hours and then pull in human experts to help you go the last mile. The mission for our business is to accelerate innovation in the private markets. If people had access to better data, they could make better decisions. Is it changing how venture capital firms operate? They're using tools like Claude and ChatGPT to poke holes in them: what am I missing? Where's my argument weakest? Investors for decades have not had access to the kinds of tools that they need to really operate their businesses in a data-driven way. But let's talk about how AI is changing things right now. You can go into Excel, hook it up with Claude, hook up Claude with Standard Metrics, and then ask it to do a discounted cash flow, and it will just build it for you. It's amazing. People are going to be able to spend more time on what they're truly passionate about. John Melas-Kyriazi is CEO of Standard Metrics. Over 10,000 innovative companies use its technology with hundreds of venture firms to do their work and figure out what's going on, how businesses are doing, how to interact with them. John is on the leading edge. He's running a really high-growth SaaS company that's becoming an AI company. It's really interesting to hear from John about how AI is transforming business, how it's transforming finance, and what's going on in the innovation world. Welcome to American Optimist. Today we have my friend John Melas-Kyriazi.
John, thanks for joining us. Thanks for having me. John, you're the co-founder and CEO of Standard Metrics. I think it's a good example of what's happening with AI and agents right now, with all sorts of other things, so I want to talk about that. But first, tell us a bit about yourself. Where'd you grow up? I grew up in Brookline, Massachusetts, right outside of Boston. What were you into as a kid?
What kind of stuff? So I was really into reading fiction, and then over time more and more science fiction. Fiction and then science fiction, because you studied physics eventually? Eventually, yeah. I think science fiction was maybe a precursor to that. And your master's was in materials science.
That's a different area than you're working on now. Is there cool stuff happening in these areas
you studied? Is that something you eventually want to go back to? You know, I don't know if I'll
go back. I loved it though. And it definitely inspired me a lot, I think, as a kid. Being interested
in science, I got really into electric guitar in high school. And then that drew me into electronics
and circuits. And I think I freaked out my parents because I ended up setting up a soldering iron
and printed circuit board etching station in my basement. That kind of was my first foray into
getting into engineering. What were you trying to build in your basement? I built a tube amplifier,
a vacuum tube amplifier. And I also built a bunch of guitar pedals to help with reverb and distortion
and stuff like that. So you're an engineering kid, a physics nerd, went to Stanford, you started
a PhD, you dropped out of a PhD. How come? That's right. You know, I was really lucky in my sophomore
year in undergrad. I ended up in the materials science lab of Mike McGehee, who's now a professor at CU Boulder. He was researching next-generation photovoltaics. And I had this incredible PhD mentor, a guy named I-Kang Ding, who kind of took me under his wing. We ended up publishing a bunch
of papers together. And it was this really cool experience. Papers on what? The first one was around
a new manufacturing process for a new type of solar cell we were working on. And the second was more
of a device physics paper. So it was understanding really the loss mechanisms in the solar cell. Like
when the sun shone on the solar cell, what happened to all the electrons and holes, which are like the counterparts to electrons? And where were they getting lost, and what was degrading the
performance? So it was a really cool experience. And I thought, gee, like, this is awesome. I should
keep going. I should get a PhD and become an academic. So that was where I was originally kind of
pointing myself. And then, you know, I had a bunch of friends that went and worked at startups.
I went and worked at a startup for a summer. And when I got back to my PhD, I realized that I
wanted to be doing startups instead. It's interesting. You know, I think this is a really common thing in
our society right now, where I think it used to be maybe 50 years ago, 100 years ago, the smartest
people who wanted to pursue the most intellectually demanding and interesting things would stay in
academia. Because it was just more interesting there. And there's more challenges there. And there's
things you could do there. And now I feel like the last 30 years, especially the last decade,
startups have become so much more compelling. And there's like so many smart people there. And you
can build and solve so many problems. You know, Aristotle would say that one of the most satisfying things for man is the highest use of his faculties, successfully applied. And I feel like that's now found more in the startup world sometimes than in academia. That was definitely your experience. That was my experience. And look, there are
aspects of academia that I absolutely loved. I think that there was an element of intellectual
freedom exploration. That was incredible. It's actually something I love about being an investor too.
Like, post academia, I spent a couple of years helping run this early-stage startup accelerator called StartX, and then became a VC. And I was a VC for six years. You were at Spark. I was at Spark. And that was one of the things that I loved about being at Spark, this idea of
being able to explore many different things, being able to go very, very broad and then choose
when you went deep. It's very different from being a founder, which I also love, for very different reasons. It's interesting, because you liked that about academia, but then at Spark you were able to do even more things, it sounds like, and go even broader, right?
Yeah. I think as an academic as a researcher, I felt like I had a lot of freedom to explore
different areas within a certain field. And part of the goal was to become the world's expert
in that field or in a subset of that field. As a VC, and this is actually something that AI is now changing, which I'll talk about with you, it's possible to have a much, much broader scope
in terms of the types of businesses that you're looking at, but then kind of gear up and start to
get really deep technically where needed. The AI thing is actually interesting. I feel like I've
always had a little bit of an advantage because I do go pretty deep in a lot of areas, maybe not as deep as the PhDs, but I'd go deep. And now with AI, everyone can just pretend, right? And by the way, it's useful for me too, because I'm like, oh, how does that really nuanced deep-tech thing work again? And you just ask it, and then all of a sudden you're asking really good questions. It's, yeah, dangerous right now. It is. Yeah, I think AI as a
technical co-pilot for diligence is actually one of the most interesting use cases for AI as an
investor right now. I don't think it's a replacement for human experts necessarily, but it certainly
is a huge help. If you're, if you come across a really interesting company in a field that you
know a little bit about, but you're not an expert in, you can get really, really deep within a matter of hours and then pull in human experts to help you go the last mile. Whereas,
you know, five years ago, it would have been this mad scramble of calling people, trying to assemble folks, which might have taken days or weeks. That can be done much faster now.
I actually, one of the hacks that I really enjoy when I have free time is to ask the AI,
like, what are the most interesting and important or cited papers or new papers or controversial
papers in a field? And then explain these to me as if I only have an undergraduate degree. Like,
so dumb it down a little bit from the most sophisticated PhD level, and explain how it works. And you actually can pretty quickly learn what's going on
in pretty much any area. For sure. Yeah, I think that as a VC having the ability to tailor the
output of a model to meet you where you're at from an understanding perspective is one of the most
useful aspects. Opening up a 10, 15 page long academic paper full of citations is extremely
intimidating. These papers are so annoying, and I think they purposely sometimes make them so you can't understand them. I always felt that about Immanuel Kant and some of these philosophers; I'm pretty convinced they're purposely doing it so that you can't follow if you're not one of the initiated. There's a proof of work there. You need to put in the time and effort, and they like that. They're purposely putting the barrier in, and the AI can remove some of that. It's dangerous, though, because you probably are missing a few things still. Yes. So you were helping people build things. You were an investor,
and then you went on, and I was involved, obviously, in founding what's now called Standard Metrics, which is this awesome, really quickly growing company. You have over 10,000 companies now reporting on Standard Metrics. That's right. What was the idea behind the company? Yeah. So
when I was at Spark, you know, when I joined the firm in 2014, the firm had already been
fabulously successful: it had taken Twitter public, and Wayfair; Oculus had sold to Facebook.
It was a really interesting time to join the firm because it was a group of very technical people
who were really interested in investing in cutting edge products. At the same time, the firm
didn't really have core software that it was running to operate day to day. And you know,
when I got to the firm, I was actually surprised by that. I went to talk to friends in the industry
and started realizing that very few investors in the private markets really had access to great
software. It was this kind of cottage industry that had suddenly grown a lot and become very
competitive and very global, but software hadn't yet caught up. And so I started going on this quest, and actually ended up meeting Ray and Shubham from Affinity as part of it, looking for software for our firm that we could use. We ended up buying that software and using it to try to run and improve the way we were doing relationship intelligence. So I kind of discovered firsthand some
of the stuff that I went through when I was there. For example: I have an LP meeting coming up. How do I collect data from over a hundred companies, clean all that data, have it be apples to apples, and use it for a bunch of modeling and reporting exercises?
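The apples-to-apples exercise John describes, normalizing reported numbers so growth and runway can be compared across a portfolio, can be sketched in a few lines of pandas. The column names, sample figures, and cash threshold here are purely illustrative, not Standard Metrics' actual schema:

```python
import pandas as pd

# Hypothetical raw submissions: one row per company per quarter.
raw = pd.DataFrame({
    "company": ["Acme", "Acme", "Birch", "Birch"],
    "quarter": ["2024Q1", "2024Q2", "2024Q1", "2024Q2"],
    "revenue_usd": [1_000_000, 1_300_000, 400_000, 440_000],
    "cash_usd": [5_000_000, 4_200_000, 900_000, 600_000],
})

# Normalize to a comparable metric: quarter-over-quarter revenue growth.
raw = raw.sort_values(["company", "quarter"])
raw["qoq_growth"] = raw.groupby("company")["revenue_usd"].pct_change()

# Flag companies that look low on cash in their latest reported quarter.
latest = raw.groupby("company").tail(1)
low_cash = latest[latest["cash_usd"] < 1_000_000]["company"].tolist()
print(low_cash)  # ['Birch']
```

In practice the hard part is upstream of this: getting every company's submission into one clean, consistently defined table in the first place.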
When you say apples to apples, for people, it's basically: you want the same metrics so you can actually compare the companies' progress to each other. That's right. For example, let's examine revenue growth rates across the entire portfolio. Let's examine, you know, gross profit margin
across the entire portfolio. And so going through those exercises, even things like, hey, let's
keep a list of the companies that are pretty low on cash to make sure that we're jumping in to help
them. Those things were incredibly manual and time consuming. And those experiences really inspired
me, kind of gave me a prepared mind. Then when I ended up meeting you, Alex, and Danny and the team at 8VC, who were also working on this problem, it kind of gave me the inspiration to go and start this business and try to build a platform to automate and improve portfolio reporting and management for the private markets. It sounds pretty esoteric, great portfolio reporting and management for private markets, but there are literally hundreds of thousands of the smartest people trying to solve these problems. And so you're kind of supporting, how many customers do you have now? How many people use this? Over 150 firms. 150 firms, yeah, and well over 10,000
companies now on the other side of the platform. And it's neat because, obviously, we built to solve one part of the problem, and this is probably the other biggest part of the problem for all these firms. And it started in venture. Are you doing private equity now as well? We're doing some private equity now as well, I'd say especially in growth equity, which is kind of an interesting in-between: it looks venture-style in some ways, but they also do buyouts. You know, the mission for our business is to accelerate
innovation in the private markets. We think that ultimately a lot of folks spend a lot of time
doing things that ought to be faster and more data driven and more automated. And we also
think that if people had access to better data, they could make better decisions. So portfolio
companies, for example, when they report data on Standard Metrics to their investors, they get
access to aggregated and anonymized benchmarks that are relevant to companies at their stage and
their sector. The 10,000 companies actually in some ways are kind of your customers as well.
Absolutely. Absolutely. Yeah. And from our perspective, if we can help those companies to build
better relationships with their investors, have a better dialogue with their investors around
their numbers and also have access to better data, we think that they're going to improve their
probability of success. And then on the other side, you know, investors for decades have, we think
not had access to the kinds of tools that they need to really operate their businesses in a data
driven way, which extends from the, you know, the back office and processes like, you know,
working through the audit and valuations and LP reporting, all the way to: hey, our portfolio company is coming in, they're raising another round. Should we invest more? Should we invest less? Should we participate? A lot of my stuff right now, it's like, I was with the CEO last week of a new AI company that's, you know, a couple of years old. He's like, Joe, we're not just
doubling revenue this quarter, we're tripling revenue this quarter. Yeah. And that always sounds
crazy, like, really good. But I'd be interested to know, okay, well, I'm sure there are another 50 that are doing that too. How does it compare based on the stage it's at? What are those raising rounds at? Based on the algorithm, what is this worth? I actually don't even know. It's all changing so quickly, even for me, in the middle of it probably as much as anyone: what is that worth? And you would know a lot better than me, because you have all the data. So how does that workflow work? Yeah, for sure. So I think it's a couple things. I think one is
getting all of the data in one place. And I think the way we think about it is that firms need a mechanism to collect and store all their information on their portfolio. As we said
before, kind of in an apples-to-apples format that's clean, that's auditable, that's traceable back to where the data came from. They need a way to access that data.
We think about it as, you know, we have this deep commitment to interoperability at our company.
So we think that when an investor has their data, it's their data. They should be able to use it,
you know, with their own AI agents that they're bringing to the table or with software that they're
building on top of Standard Metrics, or in a spreadsheet, or on our platform. And then lastly, robust tools that help them to leverage that data, as well as the corpus of data that exists in aggregated, anonymized form. Yeah, because everyone asks what that's worth. I need to use your aggregate data to compare to other things. Absolutely. So we think that the most powerful
mechanism is if a firm has great intelligence on their portfolio company, they understand all
the numbers. They also have written artifacts, notes, partner commentary. And they
also have access to what's going on broadly in the market and how that's changing over time.
They can take all of that data to bear to make the right decision for their firm. And that's
something that we think is an opportunity for firms ultimately to generate more alpha. Not just making really smart initial investment decisions, which is obviously mission critical, but also, you know, many firms take 20 to 50% of the capital in their fund and reserve it to invest in their existing portfolio companies. And we want to help them do
that better. So let's talk about like how AI is changing things right now for firms. Obviously,
you guys are seeing what's going on. You guys are seeing the numbers. Yeah. Is it changing how
venture capital firms operate? How are people using AI in general? Yeah. It's
changing how firms operate, all the way from the very, very earliest stages of sourcing and identifying companies, all the way through things like portfolio management and LP reporting on the other side. It's actually really fun right now. I'd say, when we started Standard Metrics with you guys, in the initial phases, a lot of what we were doing was evangelizing to firms: hey, you should use software. Software will help you to run your firm better. And now that's still happening, but there's also a lot more
that we're learning from firms. They're experimenting a lot. So for example, on the sourcing side,
for the last decade plus, there've been a bunch of firms that have built data-driven sourcing
engines. They're looking at things like GitHub stars, a number of employees on LinkedIn,
social media followers. We're now seeing people use large language models to do a lot more qualitative research that complements that quantitative data. For example: what's the tone of what people are saying on Reddit about this company? Or much more nuanced things. Is it good if they're attacking it? I mean, those guys are pretty crazy. Maybe. Maybe. There's
also a lot that people are doing around the analysis of people. So for example, if you take someone's
GitHub profile for their kind of public facing work and you take their LinkedIn profile,
can you start to build a map of the type of founder that you want to identify early, one that matches your investment thesis? So we're seeing a lot of really interesting experimentation
there. On the diligence side, as we discussed before, AI as a technical copilot is super powerful. Another thing we're seeing is that firms are writing investment memos and then using tools like Claude and ChatGPT to poke holes in them. What am I missing?
Where's my argument weakest? And then using that to help guide the diligence process and feeding it back in again, so you start to get into an iterative mode. On the portfolio management
side. I mean, we have customers that are doing wild things right now. For example, we have one
customer in Singapore that's using an agent orchestration tool and building this wide variety of AI agents that interface with Standard Metrics, grab data, and fulfill a bunch of really critical internal reporting workflows. For example, an LP asks a question; an agent automatically gets triggered, reaches into Standard Metrics, composes an email, flags it for a human, and then sends it off to the LP. Things that you wouldn't have even considered a few years ago
are now starting to get built. It's just all automatic, basically. Yes. People are talking to agents
more than they realize. Oftentimes with humans in the loop. And I think human in the loop has
been a really important part of our business, particularly on the data processing side and making
sure that data is correct. And then, yeah, when it comes to the sort of technical diligence side, this is another area where I think AI is really helping people to push deeper into deep tech. Can you not teach the AI to use a common-sense check to make sure the data is correct? Like, how long are people going to have an advantage in that? Isn't there a common-sense framework you can give another agent to do the check? I think
there will be. I think there will be. And we're certainly seeing even just the core models, the reasoning models, get better and better at identifying issues. I think about the future, when data is really well organized; for example, the internal AI agent that we've built, which we call our AI Analyst. The analyst that's part of Standard Metrics, that a firm could use if they have Standard Metrics? That's right. Effectively, you have a name for it. We call it AI Analyst. Not the most creative name in the world. I guess for Standard Metrics there's no, like, little Standy or something. You know, like Addepar has Addison, right? That's right. Yeah, with AI Analyst we just decided to not anthropomorphize the AI. It's not a person. It's an agent. That's fair. But maybe we could do some branding help there. But the AI Analyst, when it has extremely structured context with really clear harnesses and guardrails,
it's unbelievably accurate at pulling data across large amounts of it. So for example, you could have hundreds of portfolio companies and ask a question like: make me a table with my top 10 fastest-growing companies where I own at least 5%, that have at least $10 million in revenue, and include these metrics. And it'll produce it with very, very high fidelity. We do want our customers to always go and double-check the data, but that works really well. What's tougher is feeding in a completely unknown document and having AI extract data well out of it. We do that currently, but we use humans in the loop for QA. It's not a fully solved problem yet; we think it will be in the future. For now, having humans in
the loop dramatically improves accuracy. So this is where a lot of the cutting-edge stuff's happening. Like, one of our companies works with real hand-drawn landscaping documents, and they can get it right, even though it's different every time, because it's smart enough to figure out that they're applying it to this type of plan, given what they're doing. But it's a very hard problem, right? So you have to have a person check for most of these things. Yeah. By the way, Chris thinks Stan is a good name for us. Stan? Okay.
Cool. We'll take that into consideration. So internally on your team, this is where you
are all living in the AI world. I think there are probably a few hundred thousand of us that are part of this in a lot of ways, and obviously everyone in society is part of it, but a lot of people listening aren't building things the way you are right now. How's it different for you?
Like, tell us about this. Like, you have a team. Obviously, you have a lot of technical people on your
team. Like, how is the AI changing how your company works internally? Yeah, dramatically.
And it really feels like the rate of change is increasing. Every day, the product releases, the conversations we're having, it's super exciting. And it's a lot to stay on top of. I'd say the first place where we saw it change the most on our team had to do with data operations. As I mentioned before, we ended up building a human team to go and work with our customers and help them to process their data. And the first place we launched AI agents in our product was
to assist the data operations team. Now, AI is doing the vast majority of the work. So that's
been a big, big shift. Have you cut people, or can you do new things now? How does that work? Our team's grown a lot; we've actually continued to grow that team, but we're just handling a much larger volume of data. So you didn't need to grow the team as much as you would have to do the same thing. Exactly. And also, this team, which we call the data solutions team, is oftentimes focused a little bit more on breaking new ground. So we're really good
at processing data from financial statements. We just launched board deck parsing with AI, and so our humans are much more involved in running that. So someone sends me a board deck; like, right now I have agents helping us with emails and stuff too, of course, with a human in the loop, obviously; when you hear back from me, it's not the agent, but it's helping. So if a board deck comes in, that's actually smart: there should be something where our agent knows, if it gets one, to make sure Standard Metrics automatically gets it, and it just automatically goes in. That'd be a good workflow. And the data is extracted. But doing that from a board
deck is really hard, right? From a financial statement it's actually quite hard, because of the range of different formats and currencies and fiscal calendars. Board decks are even harder in some ways; at least a financial statement usually should be labeled. But oftentimes data is in charts, so then you need to use AI to start extracting approximate values out of those charts. So a different set of technical issues crops up. Well, it's super annoying when it's a chart that's not labeled perfectly or something; then it's also kind of a guess. But so data operations
is the first area. The second big area for us is engineering, product, and design. What do you guys use? We use a bunch of different tools. On the engineering side we use Cursor, we use Claude Code, we use this tool called Amp, which in many ways is also agentic, similar to Claude Code. What we've ended up seeing is
engineers adopt some of these tools. Product managers and designers say, hey, that looks cool.
I should try this too. And suddenly designers and product managers are doing very different things
than they used to. So for example, we're redesigning kind of a core part of our app. And one of our
designers built a fully functioning prototype using Claude Code. I love it. I love how you can just vibe-code something that actually works as a designer. It's amazing. And it's not just a mock that you can look at; it's actually something you can play with. You play with it and get feedback on it. You can start to identify edge cases and corner cases that aren't being handled necessarily.
So that's changed a lot. Velocity has increased overall and also the ability for people to be creative
and contribute to areas outside of their traditional core domain has dramatically changed,
particularly in product and design: being able to reach in a little bit more and contribute a small PR, a pull request, for example when something looks visually off and there's a little bug in
the platform. That might be something a designer could identify and fix and then engineering needs
to review at the end versus adding it to the stack of all the different tasks that the engineering
team is working on. And then of course, there's a bunch of other places in marketing. One fun thing
that we've seen is that the way people find Standard Metrics is also changing. Interesting. So are there agents finding it now? Yes. Well, I don't know to what extent it's agents, but certainly it's people using, you know, ChatGPT and Gemini and Claude: how do I do this workflow? Oh, well, this thing out here does it. It's like our friends at Vercel; Guillermo, I guess they power, like, tens of millions of sites, and the vast majority of the time it's used as infrastructure now, by an agent that knows it's the best thing. And then there's a really interesting question of how you put things online to make sure the agents totally identify and know it. So you're thinking about this? Absolutely. Yeah. And I actually vibe-coded my personal website with Vercel.
And it was a ton of fun. So yeah. We actually use software that helps us here. It runs hundreds of prompts per day across four or five of the major LLMs. It helps us to identify exactly where Standard Metrics is getting mentioned, which pieces of content linked to Standard Metrics the LLMs are referring to, and helps us identify gaps, to improve the rate at which we're getting cited. So you're actually creating some content out there to make sure that they can see it. We're creating content specifically for LLMs that's also human-readable and useful for humans, but it's really engineered to make sure that LLMs understand what we do. I love it. It's really helpful, by the way. It's a brave new world of marketing to the computers now.
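The kind of LLM-visibility monitoring John describes, running prompts across several models and checking which answers mention the brand, boils down to a loop like this sketch. The `ask_llm` function is a placeholder; a real tool would call each provider's API and would also parse out which cited pages the model is drawing from:

```python
def ask_llm(model: str, prompt: str) -> str:
    """Placeholder for a real provider call (OpenAI, Anthropic, Google APIs)."""
    return "For portfolio reporting, many firms use Standard Metrics and similar tools."

PROMPTS = [
    "What software do VC firms use for portfolio reporting?",
    "How do I collect quarterly metrics from portfolio companies?",
]
MODELS = ["model-a", "model-b"]
BRAND = "standard metrics"

# Share of prompts, per model, whose answers mention the brand.
mentions = {}
for model in MODELS:
    hits = sum(BRAND in ask_llm(model, p).lower() for p in PROMPTS)
    mentions[model] = hits / len(PROMPTS)

print(mentions)  # {'model-a': 1.0, 'model-b': 1.0}
```

Tracking that mention rate over time, per prompt and per model, is what surfaces the content gaps John mentions.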
Yes. And then, you know, people talked about the SaaS-pocalypse; it was a big term last month. Yeah. I mean, this is obviously a high-growth SaaS company, but you're also moving into AI, and you're an AI company as well. So should people be afraid of the SaaS-pocalypse? Are there certain companies you're actually bearish on because of it? How do you think about it? Yeah. I think that with the SaaS-pocalypse, like most things, there's a lot about it that's correct, and there are certain things that are probably overblown. I think that the thing that's most correct
about the SaaS pop clips is getting us to question all of our fundamental assumptions around competitive
barriers to entry for software companies. Historically, the number one, you know, competitive
barrier to entry for software companies has been switching costs. You buy software,
you build a bunch of workflows on it, you get a bunch of users using it, you get a bunch of data
into the system, and then it's very painful and expensive to switch to another platform.
Because of AI, that's changing. It's cheaper for competitors to build software. It's cheaper for
people to vibe code their own software. And it's also easier to migrate data because of AI agents.
So I think switching costs have gone down, and I think it's right for the way that software
companies are valued to change in light of that. It turns out
though that there are other competitive barriers other than switching costs. So for example,
for our business, one of the major competitive barriers that we have is around network effects.
Specifically, you have 10,000 companies. Exactly. We have companies that use our product, so when
we bring on a new investor, when we're talking to a new investor: we just closed a new customer
yesterday, and they had like 40% of their companies already using us. So a new customer comes on
and, for a ton of their companies, our data is already right there. Exactly. It's really
easy for them to onboard. It's also really nice for their portfolio companies to be able to use a
tool with more than one investor. Of course. So that's one of the KPIs that we track internally.
It's how many companies are sharing data with multiple investors in a given month.
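That KPI is simple to compute once the sharing records are centralized. A minimal sketch, with made-up companies, funds, and field names:

```python
# Network-effect KPI: companies that shared data with more than one
# investor in a given month. All records below are invented.
from collections import defaultdict

shares = [  # (company, investor, month)
    ("Acme", "Fund A", "2024-06"),
    ("Acme", "Fund B", "2024-06"),
    ("Beta", "Fund A", "2024-06"),
    ("Gamma", "Fund B", "2024-06"),
    ("Gamma", "Fund C", "2024-06"),
]

def multi_investor_companies(shares, month):
    """Companies whose data went to 2+ investors in `month`."""
    investors = defaultdict(set)
    for company, investor, m in shares:
        if m == month:
            investors[company].add(investor)
    return [c for c, inv in investors.items() if len(inv) > 1]

multi = multi_investor_companies(shares, "2024-06")
```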
So network effects is one. Another one is data. For us, as I've mentioned before, we leverage
aggregated and anonymized data for the benefit of our users. We help to offer insights to our
users from that data. That's something that's also quite hard to assemble and quite hard to build.
You can't like vibe code a network. You can't vibe code a new data set. And people are starting to
use this, obviously. I know we use it. How easy is it? I'm going to push you a little on this. How
easy is it to, like, check what valuations might be, based on data? Yeah. So that's a workflow we
can make really easy. Right now our AI analyst will do that for you if you ask it to. I feel like
you should make this a thing that's just automatic. I know. I know. We had a conversation
with an auditor recently about this too, who was really excited about it. I think that there's
going to be a lot of opportunity for innovation and automation around valuations. Leveraging AI plus
benchmarks plus proprietary company data, as well as public market comps and more. 100%. And I feel
like this is super controversial, because everyone gets pissed if you give them a
valuation they don't want. So you probably have to have multiple models: under this framework
it's this; under that framework it's this. It would be really nice just to have a starting point.
Agreed. You need multiple models, and you also need auditability and traceability. So I
think that's actually an example where having a more opinionated, deeper harness for the agent
is really helpful. There are some cases where the harness Claude has built is so good that it
can just do a lot of stuff out of the box. Like, one thing we published recently:
you can go into Excel, hook it up with Claude, hook up Claude with Standard Metrics, and then ask
it to do a discounted cash flow for a portfolio company. It'll just build it for you. It's amazing.
Which, of course, by the way, has nothing to do with valuation in venture these days.
That may be the case. It might in 5 or 10 years. That's right. That's right.
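For reference, the discounted cash flow John mentions is a small calculation once the projections exist; the hard part Claude automates is assembling the inputs. A minimal sketch with invented cash flows and discount rate:

```python
# Minimal discounted cash flow: present value of projected cash flows
# plus an optional terminal value. All inputs below are made up.
def dcf(cash_flows, rate, terminal_value=0.0):
    """PV of year-indexed cash flows discounted at `rate`."""
    pv = sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))
    pv += terminal_value / (1 + rate) ** len(cash_flows)
    return pv

value = dcf([100.0, 120.0, 150.0], rate=0.10)  # about 302.78
```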
But for something like valuations, you could imagine building something much more opinionated.
Like, Joe is involved: plus five. This other person is involved: minus one.
I think it's exciting, though. I mean, I don't know if your other portfolio
companies... I'd actually be curious if there are other examples. But one of the things
we're starting to see in our product development is that we're also discovering a lot of
capabilities in our product that we didn't know it was capable of. Interesting.
It's pretty cool that way. Yeah. We're like, oh, we should try this and see if it
works, and then it does work. And then we tell our customers about it and they start using it.
There's probably a lot more product marketing that could exist in our world that doesn't
right now. I'm seeing it with some of my companies, because there are just all these capabilities,
all these things, and no one even knows. Totally. And we're much more actively
investing there. We're doing a lot more visual content for users. We also created an MCP. I don't
know how much you guys or other companies are using MCP, the sort of protocol that exists now. Explain
that to our listeners. Yeah. So we think of MCP effectively as a protocol that allows AI,
large language models or agents, to really effectively use an API. For Standard Metrics,
for example, we have a really robust public API. We also have an MCP. And so if you want to hook
up Standard Metrics to a large language model, it can figure out how to use our product really easily.
And you're just letting anyone plug in? Do you have to be careful that someone's going to try
to plug in and just, like, steal stuff? So yeah. Obviously the data
privacy and security side is critical, and making sure all of that works really well. There's
an opportunity with MCP for large language models to be able to go and get super
creative with how they use products. You can also start to combine different products together.
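To make the pattern concrete, here is a toy illustration of the idea behind MCP: a server advertises tools with machine-readable descriptions, and a model calls them with JSON arguments. This is not the real Model Context Protocol wire format or Standard Metrics' actual MCP, just a sketch of the shape; the tool and its fields are made up.

```python
# Toy version of the MCP idea: publish tool descriptions a model can
# read, then dispatch the model's tool calls. Not the real protocol.
import json

TOOLS = {
    "get_company_metric": {
        "description": "Fetch a metric for a portfolio company.",
        "params": {"company": "string", "metric": "string"},
        "fn": lambda company, metric: {"company": company,
                                       "metric": metric, "value": 1.2e6},
    }
}

def list_tools():
    """What the server advertises to the model (names and schemas only)."""
    return {name: {k: v for k, v in t.items() if k != "fn"}
            for name, t in TOOLS.items()}

def call_tool(name, args_json):
    """Dispatch a model's tool call; arguments arrive as JSON text."""
    return TOOLS[name]["fn"](**json.loads(args_json))

result = call_tool("get_company_metric",
                   '{"company": "Acme", "metric": "ARR"}')
```

The key design point John makes is that the descriptions are natural language the model interprets, rather than a rigid contract the caller must encode exactly.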
Is it Model Context Protocol? That's right. Yeah, Model Context Protocol. I should have
looked it up. Which is, by the way, very funny. Because in the past, APIs were like these
super rigid programming interfaces. Super rigid, yeah. It's like a
program, and it's defined exactly: you submit this and you get this, and it works in exact
ways. It's a programming language. Whereas with Model Context Protocol, models are funny;
you just talk to them. It just explains what's going on here, and they
look at it and use that to kind of figure it out, which is a fun example. Yeah. So
when we started exploring this, we had this hypothesis that we could do a
bunch of really crazy, interesting new workflows. One of them was: okay, what if we hook up
Notion Calendar and Standard Metrics, and then you ask a question that's probably relevant to you:
What are all of my upcoming board meetings over the next month? Create a Notion page for
each of them. Go into Standard Metrics, pull all the latest data, and build me a quick
summary and an agenda for each of those board meetings. And it'll go, and it'll understand what
a board meeting is. It'll go into your calendar. It'll look at it and pick out: okay,
these are the six board meetings coming up over the next month. It'll go create those Notion
pages. It'll grab the data and it'll build those reports. Now that might need to be tweaked by
someone on your team, or by you, to get it right. This would save my chief of staff a lot of time. It could
save the chief of staff hours through a single prompt. So the ability to be creative like that,
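The workflow John walks through can be sketched as plain orchestration code. Every function here stubs a real integration (calendar, Notion, Standard Metrics), and all names and data are hypothetical:

```python
# Sketch of the agent workflow: find upcoming board meetings, then
# build a page with an agenda from the latest data for each one.
import datetime as dt

def upcoming_board_meetings(events, today, days=30):
    """Calendar events that look like board meetings in the next `days`."""
    end = today + dt.timedelta(days=days)
    return [e for e in events
            if "board meeting" in e["title"].lower()
            and today <= e["date"] <= end]

def build_agenda(meeting, metrics):
    # Stub for "create a Notion page, pull the latest data, draft an agenda".
    return {"page": f"{meeting['title']} ({meeting['date']})",
            "agenda": [f"{k}: {v}" for k, v in metrics.items()]}

events = [
    {"title": "Acme Board Meeting", "date": dt.date(2024, 6, 10)},
    {"title": "1:1 with CFO", "date": dt.date(2024, 6, 12)},
]
meetings = upcoming_board_meetings(events, today=dt.date(2024, 6, 1))
pages = [build_agenda(m, {"ARR": "$1.2M", "Runway": "18 months"})
         for m in meetings]
```

In the agentic version, the model writes and sequences these steps itself from the single prompt; the human then reviews the drafts, as John notes.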
I think is one of the really new, fun things about building software. And how quickly is everything
getting better? Like, give us a sense of this. There's some metric where, like, Claude
now, the agents are able to work twice as long every three months or so,
which is kind of insane if you think about it, because eventually they're going to be able to work
forever. And I guess they're also getting faster. What does this mean? Over the next
year, how does your business change with this? Yeah, for sure. There is that metric that's
sort of: there's a task that has a 50% chance of successful completion;
how long is that task for a given model? I think Claude is now at like 20 hours or something
like that, and it was at like, you know, one hour a year ago. Yeah, it's pretty remarkable.
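The arithmetic behind those figures is worth spelling out: if the task length an agent can complete at a 50% success rate went from 1 hour to 20 hours in a year, the implied doubling time is 12 / log2(20), roughly 2.8 months. The 1-hour and 20-hour figures are John's rough recollections, not exact benchmark numbers.

```python
# Implied doubling time for an exponentially growing capability:
# going from `start` to `end` over `months` months.
import math

def doubling_time_months(start, end, months):
    return months / math.log2(end / start)

t = doubling_time_months(start=1.0, end=20.0, months=12.0)  # ~2.78 months
```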
I think for us, one of the biggest areas where the models getting better improves our
business has to do with the extraction of more and more structured data from a longer
and longer tail of different types of written communications. You're still able to bring everything
together. Centralizing everything and giving the AI analyst access to it as context is
actually relatively straightforward now because of the size of the context windows,
how much data the model can be fed. What's trickier is the extraction
of structured, trustworthy data from that. So that's going to be an area where we think there'll
be rapid improvement. And then we also think that the tooling around everything is going to get
better and better. It's going to be easier to orchestrate work. Some of that orchestration will
happen within applications like ours, and some of it will happen outside of applications like
ours and will reach into these apps as a key data interface. Some people are referring to this
as like a headless model. And I think it's really important for companies to be thinking and building
with that in mind, and making sure that if their users are extremely sophisticated in how they're
thinking about building their own internal AI agents, your company can be a partner for them
and not be in sort of a combative position. Sounds like work is going to get a lot more efficient
and effective. You know, it's like the famous supposition of, you know, Keynes, when he saw
all the productivity and wealth coming over the next century; I guess that was 90 years ago now. He thought
we would only work 10 hours a week because we'd be so wealthy, we wouldn't need to work. And it
seems even more so that way right now, although I guess there are reasons why
people might want to work harder and still make tens of millions of dollars. But he's
saying that you're not going to need to do as much work to get as much done, right? Maybe. Yeah,
it feels like the opportunity cost of not working is so large right now, though, because there's
so much happening. That's the other side of it: with a little bit of time you
can create so much. The opportunity cost says, oh, you should actually work more, which actually
is my life. I'm obviously working really hard, because with every little bit of time I could do something
to help cure disease or help fix this thing that's broken over here. And so
you're kind of compelled to work more, because you get so much done. For sure. I think the other
side of it, something that gives me a lot of excitement about the future, is that a lot of
the infrastructure around running a business, the infrastructure that's almost
frustrating and annoying to deal with, the things that most small business owners don't want to spend
time on but want to be done right, like, you know, accounting or setting up a
website or, you know, customer support queries that are coming in: you can
make those things just happen now. So I think what we're seeing now, which is really exciting,
is this explosion of small businesses getting started. And I think that people are going to be able
to spend more time on what they're truly passionate about, probably spend more time on things that
directly interface with other humans, whether it's producing products for them or working with
them or helping them with something versus, you know, jobs that were more about paper-pushing
that are going to become more automated. So that is something that gives me a sense of excitement
about the future. I love it. Well, it's a very optimistic case for the future; it pushes back
against some of the pessimism we're seeing from others. John, thank you for joining us. Thanks for
having me.

Joe Lonsdale: American Optimist

