
we're entering a period where there is going to be
much more hardware testing.
So I actually think that the pendulum is going to swing back.
I think we are coming to grips with how little we actually
understand about how physical systems operate in the world
and how lacking we are from a data perspective.
It's going to be a race to try to collect this data
and actually develop these models.
I always think of it as if you have AGI designing
like a video game for your child.
You might let them play it without it being rigorously
tested; it's just a video game.
But if you had AGI building a toy for your child,
you would really want to make sure that it wasn't physically
dangerous.
It's like the physical world will just always
be different, because it's what we live in.
[MUSIC]
Cameron, Jason, thank you so much for joining us today.
Thanks for having us.
Nominal is the all-in-one data and AI
platform for hardware engineering.
You are used by amazing companies, from Anduril all the way
to the Corvette racing team, in industries
including aerospace and defense, robotics, autonomy,
and more.
And one notable stat: you just shared with me
that you're used by four of the top five
defense primes in the US.
Congratulations on everything so far, including the recent race.
Thanks so much.
Yeah, thank you.
Let's jump right in.
You've talked about how we're entering a new age of hardware
and that America is rapidly reindustrializing
its industrial base.
Can you just discuss that?
Yeah.
I think we talk a lot about this.
There are a few macro tailwinds
that Nominal is now really benefiting from
in terms of reindustrialization and hardware development.
Really, when we think about that,
there's been this huge compression in timelines.
People are trying to build and field hardware products
faster than they ever have before.
We think hardware testing plays a particular part in that.
And we can cover that in much more detail.
With reindustrialization more broadly,
more money is going into building hardware products
really, really rapidly, particularly in an area
where we're very prevalent: aerospace and defense.
And the paradigm of how hardware
is being developed is shifting really, really rapidly.
I'll give a little vignette of how we think about it.
If you look back at core software development
and what happened over the past two decades,
one really good way to think about it
is to use GitHub as an example,
GitHub being a version control system, a VCS.
If you go back and do a little history,
companies that were building pure software
used to locally manage the versions
of the software they developed.
Eventually they started to centralize that internally,
but still within the company, all internally managed.
And then eventually it got so good that it became
productized and outsourced, and venture dollars poured in.
That was, I think, the genesis
of something like GitHub, and of all of the CI/CD
and DevOps tools that we take for granted today.
The software testing problem really is a solved problem.
But that same luxury does not exist
for the hundreds of thousands of hardware engineers
who are now at the frontier of software-defined hardware,
autonomy, and robotics.
And that's really the space that Nominal is playing in.
What do you think is driving the... you know,
hardware feels hot again.
I have so many hardware companies on my calendar every week.
It seems like there's a whole generation of founders
that feels regalvanized.
They've been trained at the likes of Anduril, SpaceX, Tesla.
What do you think is driving it?
It feels like there's something in the water
in the reindustrialization startup community.
Yeah, there's probably a positive-frame answer
to that question and a negative-frame answer to it.
The positive frame is that I think humanity
is sort of reconciling, in these big oscillations,
that a lot of the ambition
and a lot of the things we want to exist in the world
are in the physical world.
And people are coming around again, after
two decades of the SaaSification of the world.
I think people are just excited
to build real things again.
And particularly, companies like SpaceX,
like Anduril, like Tesla have proven
that if you make investments in the infrastructure
and the tools to do this type of hardware development,
it's a massive competitive advantage.
That's a positive frame.
The opposite framing, and we can talk about AI
and how it's impacting many, many worlds here,
is that hardware is still a world where there is
defensibility in itself, because hardware is hard, right?
It's capital intensive.
It's difficult to bend metal and steel and electronics;
all of this world is very difficult.
And so I think people are excited about it
from an investment perspective as well.
Awesome.
So one of the things that we've always observed
is that there's a big gap between what works in simulation
and what works in real life.
What's that gap today?
How do you help founders with that gap?
And how do you make it concrete for them?
Yeah, I'll start and then I'll pass to you
as more of the expert.
But I lived this problem very viscerally
in my time at Anduril.
I got there around the 2018, 2019 timeframe.
The company was very early, and it was very in vogue,
particularly then, to try and simulate everything.
And I think the real power comes
from blending simulation and model outputs
with the real-world telemetry, sensor data, and logs
coming off of physical systems.
The advantage is being able to do that
continuously and very iteratively.
I saw the pendulum swing to "let's do everything in simulation."
Let's get as early as we can in the design lifecycle;
we can solve problems there.
But we sort of always joke: physics gets a vote.
It still gets a vote.
Yeah, physics gets a vote.
And we particularly have started
with hardware testing as the narrow wedge
that we've built Nominal around in these early years,
because that is where software-defined hardware
touches reality for the first time.
It's the tip of the spear for how software is going
to impact physical AI
and the development of these systems.
Eventually, I think we will spread more and more
into the simulation and design worlds.
But being able to merge those two
is actually where the advantage comes from.
But yeah, I was going to say, part of the reason
our customers have an appetite to partner
with someone like Nominal is that these hardware
organizations developed their model for solving
these things 25 or 30 years ago,
in kind of a fragmented way.
The people who were building your simulation
would be different from the people doing
the first prototype, who would be different from the people
doing the manufacturing.
And as it all becomes more connected,
the lack of a common data platform
or infrastructure really becomes obvious.
Recently I talked to someone
at one of the traditional primes
who over 30 years has become a specialist
in their specific proprietary simulation technology.
And while the lengths they've gone to are amazing,
it's all getting disrupted very quickly
by players like Anduril.
And so to move at the speed
that people are expecting nowadays,
you have to make sure that the engineer
who is involved at the early stage of the lifecycle
can actually take the logic, the validation,
that they're building on a tool like Nominal
and apply it much, much later,
when something's actually out in the field
and they're monitoring a production use case.
Before you guys started Nominal,
what did the primes use?
What did Anduril use?
What did SpaceX use to do all this testing
and monitoring and learning to change the product?
Well, SpaceX is really interesting to us
because, unlike other players,
they decided from the beginning that they had hired
some of the most talented, intelligent,
hardworking engineers on the planet,
and they wanted to empower those engineers.
They said the existing software that people use
for testing, especially test data analysis,
wasn't good enough, and they started to build
something proprietary in-house.
When we were starting the company
and studying that, we said,
hey, this is a huge reason for their eventual success;
it actually led to this acceleration.
But for the thousand companies being started
this year and next year,
it doesn't make sense for all of them
to build a platform like that.
So that's part of the motivation behind Nominal.
And I'll give an example for the many companies
that are not SpaceX or Anduril or Tesla.
The status quo in the industry
for test data management is pretty shocking.
It is still an area where, for most hardware development,
data is stored locally by default.
There's a lot of network-accessible storage;
it is still a world where the cloud is not common.
It is engineers downloading data
from a central drive to their local machine,
their laptop, to run their own individual MATLAB
or Python or, you know, insert other parsing
or analysis software,
to come to their individual result.
I'm an avionics engineer, Jason's a GNC engineer,
you're a thermal engineer:
we're all doing our work independently.
And then we're trying to find a mechanism
to post those insights and results back,
often via screenshot.
PDFs and PowerPoint engineering are still the bleeding edge
for many, many of these companies.
We often talk about how in the early days
of Nominal we were trying to drag the industry
from, you know, 2003 to 2019, 2020:
just good software practices, sound data engineering.
Jason often talks about how Nominal has had to get
10 or 11 really, really hard software problems right
to empower our users.
And now we're on a very exciting journey,
I think, of going from 2020, 2021
into the world we're living in today for our users,
which is pushing the frontier.
Yeah.
How much are you educating the incumbents?
Like you said, you're working with four out of the five
defense primes.
How much are they really adopting AI,
and how much are you educating them
on what they need to do to improve their products?
It still looks like it takes many, many years for them
to make any change to their hardware products.
Well, as Cameron was saying, the state of the art here
is kind of behind.
And so catching them up
is the necessary first step to using AI.
As someone who uses AI tools every day,
you might think it's natural for a hardware
engineer to ask a question like, hey,
what happened in the last 50 tests that I ran
that's relevant to the test I'm looking at now?
But that kind of assumes that the data
from the last 50 tests even lives in one place.
And that's the problem that needs to be solved.
The primes are interested in solving that;
they recognize the value there.
And some of them are getting, I would say,
tired of trying to build it in-house themselves
and have an appetite to work with a partner like us.
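(To make that concrete, here is a minimal sketch, in Python, of the kind of query that becomes trivial once test data lives in one indexed store. The TestRun model and all field names are invented for illustration, not Nominal's actual API.)

```python
# Hypothetical sketch: querying centralized test data.
# The TestRun model and field names are invented, not Nominal's API.
from dataclasses import dataclass
from datetime import datetime

@dataclass
class TestRun:
    run_id: str
    started_at: datetime
    vehicle: str
    channels: dict[str, list[float]]  # channel name -> samples

def recent_runs(runs: list[TestRun], vehicle: str, n: int = 50) -> list[TestRun]:
    """Return the last n runs for a vehicle, newest first."""
    matching = [r for r in runs if r.vehicle == vehicle]
    return sorted(matching, key=lambda r: r.started_at, reverse=True)[:n]

def peak(run: TestRun, channel: str) -> float:
    """Peak absolute value of one telemetry channel in a run."""
    return max(abs(x) for x in run.channels[channel])

# "What happened in my last 50 tests?" becomes one line of analysis
# instead of 50 files scattered across 50 laptops:
# peaks = [peak(r, "battery_temp_c") for r in recent_runs(all_runs, "drone-7")]
```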
A conversation I often have with the chief engineer,
the CIO, the CTO across the table
is this concept that they're well aware
there are insights trapped in their hardware systems.
This is the real world of data acquisition systems,
test stands, lab testing, power supplies, instrumentation.
That is their bread and butter
for bringing their hardware products to life.
And such a small percentage of that data
is ultimately making it into some central repository
where it can be structured with metadata,
organized, catalogued: just that basic step.
It used to be "digital engineering."
I think that was the term of art
that was very in vogue.
Now the conversation is rhyming more with physical AI.
But I think the building blocks for getting these
organizations ready to build AI capabilities
and applications on top really start with
the sort of semantic layer that Nominal provides
in the way we catalog this hardware data
for our customers.
And I'll say the ambition of AI here
gets me really excited, because sometimes it's
asking really interesting questions like:
okay, is there something that my team didn't catch
when they did all the review?
If you have 10,000 sensors that are each producing
a million points a second, that's a ton of data
where automation can maybe surface things
we wouldn't otherwise notice.
But we should recognize that some of it is also
just going to accelerate the more tedious parts
of data ingestion and data review.
So right now it might be the case that
one of our hardware engineering users
wants to automate something: hey, this data check
should happen every single time we do a flight test.
Even once I'm no longer involved in it,
because we're doing that testing in a remote location
and there's a flight operator doing it
in my place,
I still want that data check to happen.
Maybe the friction is that they don't want to learn
a custom domain-specific language
for encoding that check.
If they could use an English-to-code prompt
in a tool like Nominal,
that might be the thing that unlocks them
actually getting it across the line.
And then they can focus on the more creative,
more judgment-driven, human aspects of designing hardware systems.
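(As a flavor of what such an encoded check might look like, here is an illustrative Python sketch. The function name, channel, and threshold are made up; the point is that the check runs after every flight test, whether or not its author is on site.)

```python
# Illustrative sketch of an automated flight-test data check.
# The function name, channel, and 60 C threshold are invented.
def check_battery_temp(samples: list[float], limit_c: float = 60.0) -> dict:
    """Flag any battery-temperature excursion above the limit."""
    worst = max(samples)
    return {
        "check": "battery_temp_under_limit",
        "passed": worst <= limit_c,
        "worst_value_c": worst,
        "limit_c": limit_c,
    }

result = check_battery_temp([41.2, 44.8, 58.9, 52.3])
assert result["passed"]  # 58.9 C is under the 60.0 C limit
```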
You mentioned the GitHub analogy earlier.
If you map out the hardware design life cycle, so to speak,
I'd imagine there's the design of the thing,
there's the testing of the thing,
there's the manufacturing of the thing,
and there's the monitoring of the thing in production.
That's what my simple brain kind of maps it onto.
Is that fair?
Yes, yeah.
Why start with testing?
You mentioned it's one of the only categories
that's still being served by PDFs.
Design tools, manufacturing:
these each have their systems of record.
So why has testing been neglected to date?
Yeah, one answer to the question would be:
start with it because it has been neglected
and it doesn't really have its system of record.
One way to frame Nominal is that we can be a form
of system of record for testing in particular.
There's also a quick business reason
for starting with it, which is that
it's an area where
demonstrating ROI with speed is just so clear
for a customer.
There's this sort of mantra
in hardware development that testing is a function
where there are incremental improvements you can make:
saved seconds that compound into minutes and hours.
That's real value for a customer
that's trying to field a product in a competitive market.
But there's always this long tail of risk
that everyone who's been on a major hardware program knows.
There's always something hidden in the data
that they can't figure out,
and it becomes an all-hands-on-deck effort;
it can hold up programs.
And so we've been able to help customers
surface insights there.
The other answer, though,
is just that testing is, by definition,
iterative. That is what testing is.
It's the most classic experimental,
independent-variable science, right?
It's iterative in nature,
which is exactly what Nominal wants to be aligned with:
how can we drive that iteration?
And when you look at the hardware development
lifecycle, testing is a really good place to start.
And then we have this vision, and our customers
are pulling us in this direction already:
if I use a software platform, a data platform, in testing,
I develop all of the validation logic
that governs that system's performance
on a specific test.
That should be the exact same set of logic
that is easily dispersed across an organization
to the production manufacturing, end-of-line
quality test, where I'm just automatically running it.
At Nominal we call them checklists,
but it's validation logic, essentially.
And then I should also be able to deploy that
to the edge, to the hardware system itself.
So Nominal's core product
almost becomes the authoring hub for all the logic
that governs the performance of physical systems,
and I can version control it
and deploy it at the edge.
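(A toy sketch of that "author once, run everywhere" idea: one version-controlled checklist reused in the test lab, at end-of-line QA, or at the edge. The structure below is invented for illustration, not Nominal's actual checklist format.)

```python
# Hypothetical sketch: one versioned checklist, reused across contexts.
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Check:
    name: str
    evaluate: Callable[[dict], bool]  # telemetry snapshot -> pass/fail

@dataclass(frozen=True)
class Checklist:
    name: str
    version: str  # versioned like code
    checks: tuple[Check, ...]

    def run(self, telemetry: dict) -> dict[str, bool]:
        return {c.name: c.evaluate(telemetry) for c in self.checks}

motor_acceptance = Checklist(
    name="motor-acceptance",
    version="3.1.0",
    checks=(
        Check("current_under_20a", lambda t: t["current_a"] < 20.0),
        Check("temp_under_80c", lambda t: t["temp_c"] < 80.0),
    ),
)

# Same logic during a flight test, on the production line, or on-vehicle:
print(motor_acceptance.run({"current_a": 14.2, "temp_c": 61.5}))
```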
And for people in the audience who are software engineers,
I just want to clarify, because hardware testing is so rich;
it's one of the things that I've come to really appreciate
as someone who comes from a more pure software background.
When I think of software testing,
there's something as basic as a unit test,
which is so simple and deterministic,
and even the richer kinds of end-to-end
or production-level testing in software
just pale in comparison.
If you are building an aircraft
and you're performing a flight test,
the test still involves a physical machine,
there are hundreds of people involved,
there's someone in it who's flying,
so you might do pre-tests to make sure that that's safe.
It actually becomes closer to what you might think of
as a quote-unquote production use case
coming from the world of software.
And to your earlier point
on physics gets a vote,
testing does seem like
where the rubber meets the road:
does the thing behave as expected?
Which is really all that matters.
My AI brain immediately leaps to
what an interesting data set you're collecting there, right?
Because you have data across customers
on different configurations, different design patterns,
and how they actually perform in tests.
So can you talk a little bit about whether
you have designs on going further into pushing
AI research in that space?
Yeah, I'd say Nominal is already in use
with companies that are doing
physical model development
and training these sorts of models.
And where Nominal started
to be really valuable for these customers,
which was an interesting insight for us,
is that, what's a good way to say it,
you have to be able to separate things out
when you are testing the performance of models
on hardware systems.
It turns out that one thing Nominal
is really good at
is automatically finding anomalies in data.
So for customers that are trying to figure out
whether they're collecting good data to inform
the development of their model in a robotic system,
let's take a robotic arm, a simple example:
there could be issues with the servo,
issues with the motor, issues with the physical performance
of that system that are actually going to make all
of the data you collect bad.
Nominal, running in the background,
can actually say: hey, of the 120-second test
where this robotic arm folded a piece of laundry,
only this percentage of the data
gave us high-fidelity confidence
that the actual physical telemetry and components
were performing within calibration, within standard.
Therefore, you can extract those pieces of data
to go into actually training a model.
And that's just the crawl step of this.
But yeah, I think we're getting more and more involved
in that with our customers,
and I think we'll be an integral part of that stack,
in an area where, frankly,
they don't see it as a differentiated capability
that they would want to build themselves.
Their proprietary IP is developing the model itself.
But the ability for us
to derive insights across many of those use cases
is, I think, going to be helpful for our customers.
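(A toy version of that filtering step: gate the robot-arm telemetry on hardware health so out-of-calibration actuator data never reaches model training. The channel name and thresholds below are invented for illustration.)

```python
# Illustrative sketch: keep only telemetry windows where the servo
# behaved as calibrated. All channel names and thresholds are made up.
import statistics

def in_calibration(window: list[float], expected: float, tol: float) -> bool:
    """A window is usable if its mean stays near the calibrated value."""
    return abs(statistics.mean(window) - expected) <= tol

def usable_windows(servo_current: list[float], window: int = 100,
                   expected_a: float = 1.5, tol_a: float = 0.3) -> list[int]:
    """Start indices of windows where the servo was in calibration."""
    return [
        i for i in range(0, len(servo_current) - window + 1, window)
        if in_calibration(servo_current[i:i + window], expected_a, tol_a)
    ]

# Of a 120-second laundry-folding test, maybe only a fraction of the
# windows pass; only those segments feed the training set.
```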
You know how in the coding space
there are verification agents?
It seems like you guys can also be
a verification agent that assists in each company's development
of its design agent, so to speak.
Yeah, this is the analogy that I'm the most excited about:
it would be amazing to have unit testing
for hardware. Part of why agents have gotten so good
in the world of coding is just that things are verifiable,
so that learning loop can go really fast.
It would be a huge dream to have that for hardware.
But I think it's necessary to build,
essentially, test and validation
infrastructure to get there.
Yeah, makes sense.
You brought up the robotic arm example.
So I have to ask: do most companies
that you observe today
have separate hardware and autonomy teams?
And do they have separate hardware and autonomy stacks?
Do you serve one side of the house or both sides?
Yeah, well, let's pick the robotic arm
and keep unpacking it.
We often see three teams
that have three different stacks.
If the company actually manufactures its own
robotic systems, there's a manufacturing stack
and there's a manufacturing team:
the people who are actually assembling the robot.
That digital thread
could even start at a supplier,
before they're onsite doing the final construction.
But there's a manufacturing team.
There's normally an R&D team
that does a lot of the prototyping and experimentation,
more of what we were talking about
with the model development use cases.
Then there's generally a customer-facing team,
fleet operations, trying to observe
how the robotic system is performing out in the wild
and collecting all of that
onboard telemetry.
So: three different teams and three completely different stacks.
And it's really interesting to come through
and work with customers to actually find the way
that Nominal spans all three of those use cases,
and how powerful that is.
We talk a lot about continuous hardware testing;
it's a term we use internally
at Nominal and externally.
It's being able to have that invisible thread
between an anomaly or an issue that happened
with a robotic system deployed in the field
and the R&D team,
who can quickly triage it,
and then, if it does derive from a physical component
malfunction or something that's out of calibration,
follow it all the way back.
I think that's a big area where Nominal plays.
Yeah, I would say a word
that our users care a ton about is traceability.
They always want to understand:
where did this part come from,
and what testing did it undergo?
And the cataloging of that
just gets really, really complex
at the scale of systems our customers are building.
If you're building an aircraft,
it's not the case that you can have every single subsystem
go through every single test all the time.
It's just too expensive;
you don't have enough budget, time, or resources.
So keeping track of that is fundamental
to doing good hardware engineering work.
So today you can have, basically, Claude Code
write substantial software.
What do you think is needed before we can have
an AI system design, manufacture, test, monitor,
and come up with new hardware from scratch?
Where it could just vibe code an airplane?
Maybe the airplane shouldn't be an airplane.
Maybe it should look like something different,
especially if we want vertical-lift airplanes.
It's one of the things that I talk about
when I'm trying to hire,
when I'm trying to say,
hey, if you're a software engineer,
come work on Nominal.
We've all spent so much time
building the internet, and the internet works pretty well,
but we're still really far away
from being able to vibe code an airplane.
I think about how, right now,
I have to assemble IKEA furniture myself at home, right?
It would be great to have that problem solved,
and that's such a microcosm of saying,
hey, can I design my own IKEA furniture at home?
So it feels like there are many, many steps
between where we are today and being able to
vibe code hardware. But a lot of them come back to,
whether it's the feedback loop
of "is this thing working or not,"
or even just how we get the training data sets
to do hardware AI research,
a lot of it comes back to the problem of
data collection, data cleaning, data standardization,
which is, again, really where we're focused.
But if a company uses Nominal,
if they integrate all the data,
they have the data from the tests,
they have data on how different designs perform,
they have all the context on how something was made.
Shouldn't it be able to learn from all of that?
Yeah, I think so.
I was talking to someone this week
about how, when a test is happening,
even just the audio of the operators
talking to each other during that test
is a really valuable data set to collect
and start to incorporate into a platform
like Nominal. Before AI tools,
that would have seemed like a little too much effort;
the bang for the buck wouldn't be there.
But now it's like, oh, of course we should do that.
That should all just be brought into one place.
And over the next couple of years,
I'm excited to see what's unlocked
just by having the data as it's collected.
There's a lot of really frontier work
happening on the modeling and simulation side,
CFD, fluid dynamics, that people are picking apart.
The testing world is one where we're doing it;
Nominal is the one that is going to do it.
And maybe I'll answer the question
by giving a vignette of some work that we're doing,
some pretty frontier work, with the US Air Force.
We are working with them, and with DARPA,
the Defense Advanced Research Projects Agency,
on this really cool effort called Cypher.
It stands for Cyber-Physical Systems Executing in Real time.
It wouldn't be defense if there weren't a lot of acronyms.
Essentially, for those listening,
a quick high level on what test engineering looks like
for a major airplane or weapon system development:
it's this giant matrix of very deterministic test points
that need to be satisfied.
My system needs to be between this value and this value
during this condition.
It's literally this giant matrix that gets burned down
very sequentially, often over the course of years.
What this effort is getting at is actually involving AI agents,
paired with digital twins running faster than real time,
that recommend the next best test condition,
the knowledge-maximizing next test condition,
extremely quickly.
So rather than run a flight: go fly, collect data,
see if I met one discrete deterministic test point,
land, look at the data, say yes, do it again...
Now that the systems themselves are autonomous,
you can have really high endurance.
And so, in real time or faster than real time,
you change the paradigm of testing from a matrix
that I discretely work through to something like
a gradient curve,
where I'm always adjusting my vector extremely
quickly, retraining my model
and updating the digital twin, a physics-
informed surrogate model of what the world is.
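(A toy sketch of "knowledge-maximizing" test selection in the spirit of the effort described above. This is generic active learning, not the actual DARPA implementation: keep a crude surrogate of the system, and always test the candidate condition we know least about.)

```python
# Toy active-learning loop over a tiny test "matrix". Everything here
# is invented for illustration; a real effort would use a physics-
# informed surrogate (e.g. a digital twin), not nearest-point distance.
import math

# Candidate test conditions: (airspeed m/s, altitude m).
candidates = [(v, h) for v in (50, 100, 150, 200) for h in (1000, 3000, 5000)]
observed: dict[tuple, float] = {}  # condition -> measured response

def uncertainty(cond: tuple) -> float:
    """Crude surrogate uncertainty: distance to the nearest tested point."""
    if not observed:
        return float("inf")
    return min(math.dist(cond, seen) for seen in observed)

def next_test() -> tuple:
    """Pick the untested condition the surrogate is least sure about."""
    remaining = [c for c in candidates if c not in observed]
    return max(remaining, key=uncertainty)

def fly(cond: tuple) -> float:
    """Stand-in for an actual flight test or digital-twin run."""
    v, h = cond
    return 0.1 * v - 0.001 * h  # fake measured response

for _ in range(5):  # each loop replaces one slow, sequential test card
    cond = next_test()
    observed[cond] = fly(cond)
    print(f"tested {cond}; surrogate now has {len(observed)} points")
```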
That's really cool.
And I think that is the nirvana
we're heading towards.
We're seeing it in sort
of the earlier design phases again,
but it's just been really hard to do in the test world.
The fact that we're working
hand in hand with the government on this,
where they have access to test ranges
and infrastructure that make this stuff possible,
is really exciting for us.
How advanced is our defense department
on the use of AI or not advanced?
Yeah, it's interesting.
This administration in particular
has been very forward-leaning on AI.
It used to be that AI
was almost a disqualifier
for some contracts, some opportunities,
just because, as we talk about at Nominal,
these are the epitome of mission-critical applications,
where you don't want experimentation.
Sometimes, and Jason knows this, we'll post
in a Slack channel; we use coding agents
and tools as well.
And they're really good for a lot of front-end,
you know, React components and different things.
But some of the recommendations for the back-end
things, our team will laugh at
and say, if we had merged that,
it would have been really bad for the customer.
So I think there's good reason
to have some skepticism,
but that's changing quickly.
The department
is really leaning into more and more experimentation here.
The Collaborative Combat Aircraft programs,
the autonomous aircraft platforms,
are really pushing the frontier.
We worked closely with Anduril
and some other vendors on that project.
So I'm inspired by, no pun intended,
the gradient of where we're going.
Okay, simplify your business to collecting data,
visualizing it, analyzing it, iterating on it,
reporting on it.
Isn't that perfect for agentic AI?
Yeah. If you think about the loop
that is hardware testing,
every single point in that process
could be accelerated.
Earlier I talked about
the tedious aspects of data review,
and I would say one of them is reporting.
Once you already have the data analyzed,
if you, the electrical engineer
who's designing a battery subsystem,
have already done the interesting parts,
the parts that extract from your brain
the things only you know:
okay, how do I take these input channels
and synthesize whether the system performed
the way I think it should have,
in a way that my team can understand,
that the VP can understand.
At that point, other people might want to ask that question
and get it into a certain PowerPoint slide format
so they can disseminate it. Or literally,
in some cases, there's a PDF
that I must ship to our customer,
someone who's purchasing this.
Yes, AI can accelerate all of that.
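(A toy sketch of that last mile: turning structured check results, like the battery-temp check sketched earlier, into a shareable report automatically. The format and field names are invented; a real pipeline might emit PowerPoint or PDF instead of plain text.)

```python
# Toy sketch: auto-assembling a test report from check results.
# The report format and result fields are invented for illustration.
def render_report(test_name: str, results: list[dict]) -> str:
    """Render check results as a plain-text summary."""
    lines = [f"Flight Test Report: {test_name}", "=" * 40]
    for r in results:
        status = "PASS" if r["passed"] else "FAIL"
        lines.append(
            f"- {r['check']}: {status} "
            f"(worst {r['worst_value_c']}, limit {r['limit_c']})"
        )
    return "\n".join(lines)

results = [{"check": "battery_temp_under_limit", "passed": True,
            "worst_value_c": 58.9, "limit_c": 60.0}]
print(render_report("hover-endurance-12", results))
```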
Yeah, I get excited by the shift
in the paradigm. We sometimes talk internally about
how it used to be that 50 humans would be involved
in the testing and validating of one physical
hardware product.
Right now that ratio has changed
to something like one to one.
But how do we get to a world where one human,
using agentic tools in this space,
using Nominal, can be doing it in parallel
for 50 systems? And what does that look like?
We've already built really, really interesting
and powerful things in our system,
where you can have that chat interface,
an LLM interface, where you're saying things like:
hey, plot the kinematics of the drone.
That's a really simple example,
but built on the building blocks Nominal has,
and users' eyes light up when a task
that used to be extremely manual just happens.
But there are still these areas where human insight
has been really key. One way to look at
what we're trying to build
is a massive data set
of human-enriched data:
mechanical engineering masters and PhDs
enriching this data.
And having them do it in Nominal is a powerful asset.
What inning do you think you're in, in terms of AI in your product?
And if you really zoom out to the AI nirvana for Nominal,
what does that look like?
I'd say it's still early innings, just thinking
about how much has changed even in the last three months.
And 12 months from now,
I hope we still think that we're in the early innings,
because if we don't, then we're probably not humble enough
about what's coming around the corner.
But I think about the features we've added to date,
and I think we could have twice as many software engineers
at Nominal building AI capabilities
and still be discovering new things
that our users might find exciting.
One thing I was joking about earlier
is: do we need a Moneyball for hardware testing?
If you're watching a sports game,
there are always these very obscure stats:
oh, if this person completes this play,
then they'll be the third best...
I obviously don't watch a lot of sports.
But seriously, when we talk to our customers,
one of the reasons they like Nominal is that
we're putting more data in front of more eyeballs
than they're used to having in their organizations.
And what that leads to is someone notices something,
and when you catch it in that moment,
it only takes you 30 minutes to address something
that's going wrong. Whereas if it went unnoticed,
it could lead to something exploding,
and then it's two days of the entire company
being shut down from their most critical test campaign.
And the volume of data is only going up
as these systems get more complex,
so it makes a lot of sense to have agents monitoring,
almost as pair programmers in your control room,
as you're doing these high-scale tests, saying:
hey, you're not looking at this,
but relative to the last 50 times you've done this,
it's out of family, and it's probably worth
someone investigating.
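(A minimal sketch of what an "out of family" monitor could reduce to: compare today's value of a metric against its distribution over the last 50 runs. A real agent would watch thousands of channels; the channel name and numbers here are illustrative.)

```python
# Minimal sketch of an "out of family" check against run history.
# The channel name, history, and 3-sigma threshold are invented.
import statistics

def out_of_family(history: list[float], current: float, k: float = 3.0) -> bool:
    """Flag values more than k standard deviations from recent history."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return sigma > 0 and abs(current - mu) > k * sigma

last_50_peak_vibration = [0.92, 1.01, 0.97, 0.88, 1.05] * 10  # fake history
if out_of_family(last_50_peak_vibration, current=1.9):
    print("Peak vibration is out of family vs. the last 50 runs; investigate.")
```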
Yeah, got it.
I guess if you zoom out to this AGI future,
the hardware company of the future:
what does the hardware company of the future look like?
I have a thesis that we're entering a period
where there is going to be much more,
and obviously I believe this from a business perspective,
but much more hardware testing.
So I actually think the pendulum
is going to swing back.
I think we are coming to grips with how little we actually
understand about how physical systems operate in the world,
and how lacking we are from a data perspective.
Companies are building more and more hardware,
and it's going to be a race
to collect this data
and actually develop these models.
That's good for Nominal.
But I think eventually it's going to come full circle,
where the best way to build a hardware product
is to minimize the amount of real-world testing.
It's a world where you have
AI agents working along that very simple set of
steps you laid out in hardware product development:
optimizing each of those steps,
then optimizing the steps between them,
and actually being able to link the design space
to the test space with agentic reasoning.
How do I optimize testing of the system
in the smallest amount of time possible,
and preferably do it only once?
Pre-train, pre-simulate everything,
then run that agentic test agent
across my physical system
and hopefully get 100% satisfaction.
I think we're far away from that.
And to get there,
there's going to be this huge explosion
in the need for more testing and more fusion
of real-world test data and model outputs.
I always think of it as:
if you have AGI designing a video game for your child,
you might let them play it without it being
rigorously tested; it's just a video game.
But if you had AGI building a toy for your child,
you would really want to make sure
that it wasn't physically dangerous.
The physical world will just always be different,
because it's what we live in.
Yeah, do you think all hardware companies
will become like physical AI companies?
I think yes, in the sense that,
at least I hope, as the design,
even the generation, the manufacturing,
as all of these things
hopefully become accelerated by more sophisticated AI tooling,
people's creativity
gets unlocked in the physical world in the same way
that it is in the software world right now.
Because most hardware just does one thing
and one thing well, it should be a lot more flexible.
Yes, yeah, I think that's a really good point.
The ability to unlock more versatility.
And I'll give a present-day example,
which is that if you talk to people, they'll often cite,
I think it's the F-18,
I'll give another federal example,
the F-18 jet: the limitations
and inefficiencies of that vehicle
as a result of the process in which it was tested.
There's all this extra stuff on it;
the way the rear fins are mounted
is something any aviator would call
very inefficient.
And I think that's just an interesting example
of what you get when you have the worst test process.
But close your eyes and squint:
when you have the best test process,
I think you can actually build a lot more flexibility
and versatility into the end product,
which will be really, really interesting.
That's fascinating.
Why not just take all that data,
all the reasons that it became inefficient,
feed that into an AI model, and say:
strip out all the things you don't need
from that F-18?
Yeah, I don't actually
know this to be true,
but I've talked to some pretty emboldened people
who I think are trying to do that type of work,
by example, to showcase what's possible.
And I think that fits in line with some of the efforts
we're working on. As much as we've talked about
what the status quo tools are,
there are people pushing the frontier there right now,
both at the primes and other places.
You both graduated from MIT.
Why should a person graduating from MIT join Nominal?
I mean, Cameron talked about 20 years of
SaaSification, and one of the things I'm really passionate
about right now is that for our customers' use cases,
the running of software
has to respect the laws of physics.
Physics gets a vote not just in
whether this hardware system worked or not:
if you have a scale of data that is too expensive
to ship to AWS, and crunching
that data is necessary to determine
whether your physical system worked or not,
you have to operate with a set of software
and computing principles that a lot of people have moved away
from. And if we're really ambitious,
and it sounds like this room is, about where
physical AI is going in the next 10 or 20 years,
a lot of people are going to spend a lot of time
thinking about these problems.
At Nominal, I think we're on the leading edge
of where software engineers
are going to disproportionately be spending their time
in the next decade.
Are you guys ever going to build hardware yourself?
I think yes.
I'll just give my take, and Cameron's smiling already:
we shouldn't play all of our cards,
but the supply chain of hardware data is really
what we spend a lot of time thinking about.
The source of the data would be a sensor,
and then it goes all the way to crunching it
and giving reports to people who can actually
apply their human judgment to whether it's safe to launch
this satellite.
Now, how do you get better and better at
managing that supply chain?
Probably by touching every part of it.
I always say that we have to
earn the right to capture data.
We have to make our users' lives better.
We can't just say, hey, you have to use this tool
because it gets the data cataloged in the right way.
We say, hey, you should use this tool
because it will shave an hour off your day.
Oh, and by the way, it also catalogs your data in a way
that's organizationally beneficial.
And when I think about those workflows
and pull the thread all the way,
how do you reduce the number of steps involved
in this person's labor?
It eventually gets to hardware.
I was smiling just because, as Jason said,
we don't want to play all our cards,
but it's something that I think is going to be
happening sooner rather than later.
So our partner Sean would be beaming right now.
He constantly reminds us that hardware is the only moat.
And not only do you guys sell to hardware companies,
it sounds like there might be some interesting things
up your sleeve.
We have a lot of, I think, very unique insights there,
and yeah, we're further along there
than we might be letting on.
Wonderful.
Well, I think it's an incredibly exciting time
for hardware, for the physical world,
for physical AI.
And it's inspiring to see you all build a company around it
and, you know, build the GitHub equivalent
that's going to radically transform the professionalism,
the reliability, the speed of all the engineers
who are now inspired and galvanized to go after this space.
So congratulations on everything you've done,
and we're excited to see what you continue to build.
Thanks so much.
We say all systems nominal.
All systems nominal.
Thank you.
Thank you.
Training Data



