
AI isn’t coming.
It’s already here — and it’s moving faster than anyone expected.
In this episode, Michael, co-founder of the AI4 Conference, breaks down what’s actually happening inside the AI world — from self-driving cars and humanoid robots to AI drone swarms, large world models, and why the skills parents are teaching kids today may be irrelevant in 10 years.
We talk about:
• AI drone swarms and existential risk
• Why humans won’t be driving cars much longer
• Self-driving safety vs human error
• Humanoid robots entering daily life
• Large World Models vs ChatGPT
• Why learning to code may no longer matter
• How education is being rewritten by AI
• Open-source vs closed-source AI
• Why AGI goalposts keep moving
• The one skill that still matters
This isn’t fear-mongering.
It’s a reality check on where the world is heading.
Chapters
00:00 AI Drone Swarms Explained
01:00 Inside One of the Fastest-Growing AI Conferences
02:25 Why ChatGPT Changed Everything
03:00 Self-Driving Cars vs Human Drivers
03:58 Why Humans Won’t Be Driving in 16 Years
04:17 AI in Education & Drug Discovery
05:44 Autonomous Weapons & Ethical Limits
07:41 Delivery Drones That Save Lives
08:38 Large World Models vs Language Models
15:49 The Only Skill Kids Will Still Need
🎙️ APPLY OR CONNECT
👉 Apply to be on the podcast: https://www.digitalsocialhour.com/application
📩 Business inquiries / sponsors: [email protected]
👤 GUEST:
Michael Weiss - https://www.instagram.com/ai4conferences/
💼 SPONSORS
QUINCE: https://quince.com/dsh
🥗 Fuel your health with Viome: https://buy.viome.com/SEAN
Use code “Sean” at checkout for a discount!
🎧 LISTEN ON
🍏 Apple Podcasts: https://podcasts.apple.com/us/podcast/digital-social-hour/id1676846015
🎵 Spotify: https://open.spotify.com/show/5Jn7LXarRlI8Hc0GtTn759
📸 Sean Kelly Instagram: @seanmikekelly
⚠️ DISCLAIMER
The views and opinions expressed by guests on Digital Social Hour are solely those of the individuals appearing on the podcast and do not necessarily reflect the views or opinions of the host, Sean Kelly, or the Digital Social Hour team.
While we encourage open and honest discussions, Sean Kelly is not legally responsible for any statements, claims, or opinions made by guests during the show.
Listeners are encouraged to form their own opinions and seek professional advice where appropriate. The content shared is for entertainment and informational purposes only — it should not be taken as legal, medical, financial, or professional advice.
We strive to present accurate and reliable information; however, we make no guarantees regarding its completeness or accuracy. The views expressed are solely those of the speakers and do not necessarily represent those of the producers or affiliates of this program.
🔥 Stay tuned for more episodes featuring top creators, founders, and innovators shaping the digital world!
🔑 Keywords
ai drone swarm, ai existential risk, ai4 conference interview, michael ai4, self driving cars future, humanoid robots, large world models ai, agi debate explained, open source ai vs closed, ai education future
See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.
Visit Our Website at https://digital-social-hour.simplecast.com/
Presented by https://podgo.io/
Hosted by Simplecast, an AdsWizz company. See pcm.adswizz.com for information about our collection and use of personal data for advertising.
The concept of a drone swarm is crazy and is possible now.
A drone swarm.
Yeah, have you heard that concept?
No, what is that?
It's like one of the sort of existential threat ideas in the AI space.
Imagine a bunch of small drones, like literally, you know, this big or even smaller.
But each one is like a little tiny explosive.
And then imagine having like computer vision model on that drone
and then imagine saying to your drone swarm,
we want you to go these people of a certain profile or even an individual person
and then you send your drones out there and the drones just zip really quick
and they hit the target.
All right, guys, we got Michael here, co-founder of the AI4 conference, where we're filming right now, man.
You ready for this week?
Ready to go.
Yeah, I mean tired as hell, but ready to go.
The growth of this thing has been crazy, man.
So congrats, first of all.
Yeah, yeah, thanks.
Yeah, I think we're probably one of the fastest growing tech events in the world.
We started in 2018 as a 300 person event at this little hotel in Williamsburg.
That doesn't exist anymore.
Wow.
And now this year we'll have around 8,000 people from 85 countries.
Did you expect that growth?
No, I mean, no.
I mean, in 2018 when we started AI was cool, but it was mostly people,
you know, making their own custom machine learning models
on their own data for like pretty niche use cases.
And then it was really 2022, when ChatGPT came out
and the sort of foundation model era began,
that, yeah, this event and just the industry in general started growing like crazy.
Yeah, over 600 speakers at this one,
250 exhibitors, 85 plus countries from the attendees.
It's crazy, dude.
I mean, the first one we ever did was literally just 300 people, almost all in New York.
Now I see people from Australia and Dubai and every corner of the world flying here to be here.
It's really nuts.
Man, the innovation's been crazy.
I just had someone that's, like, bringing back deceased loved ones.
He's an exhibitor over there.
Reflected.
Reflected.
So cool.
I mean, that would have never been possible a few years ago.
No, I mean, the number of things that are going to be possible
or that are possible is insane.
From your Reflected guy, to tomorrow you're talking to the woolly mammoth guy
and how they're using AI in their stuff.
To even, yeah, I live in Austin, and just seeing the Waymos drive around
is not really, you know, an LLM thing, but it's still a new and sick example of AI.
Those are in Austin now? Because when I was in San Fran, I wanted one.
Yeah, so they're in San Francisco, Austin, and then I think Arizona somewhere.
Scottsdale, maybe.
Self-driving is going to be all AI in a few years, I think.
Yeah, I mean, yeah, I was telling you before we started.
I just had my first kid and, you know, she's literally one month old now.
But when she's, you know, 16, when normally you'd learn to drive,
I think the odds that she learns to drive are very slim.
Oh, by then, 16 years from now?
Yeah, there's no way that a majority of people,
or the majority of cars are being driven by humans.
Yeah, there's just no way.
Well, it's more dangerous to drive a car than be a passenger in a plane.
It's way more dangerous, and it's more dangerous now to ride in a human-driven car than
a self-driving car.
If you look at the stats,
you know, Waymo released something, and they said that across 50 million Waymo miles,
the safety data is much better for their cars than for human-driven cars.
Wow, yeah, that's crazy.
Yeah, I've been in an Uber that got in an accident before.
Oh yeah, and you'll see stuff online, like, you know,
oh, the Waymo did this crazy maneuver to avoid the person.
Humans just aren't doing that.
We're just, we're not superhuman AI drivers.
We're just not.
We're tired.
We're texting.
We're distracted.
We're stressed.
Absolutely.
What other sectors and industries, other than driving,
are you really excited about for AI integration?
So, you know, what's unique now with this technology is that
every industry is having to, like, figure it out.
So we, at this conference, there's literally 55 different track themes,
covering applications across pretty much every industry.
You know, back in the day in 2018-19,
it was really the bigger industries were adopting finance, health care, retail.
But now it's every industry is adopting.
Stuff that I'm excited about, the education stuff is really cool.
Yeah.
The idea of just completely personalized learning experiences
for each person that goes at their exact pace, I think is amazing.
Man, I would have loved that, because I hated public school for that reason.
Yeah, I was, I didn't really like being a student either.
I think the health care stuff is sick, particularly the drug discovery stuff.
So there's a lot of really cool applications around using AI to, not automate it yet,
but make the drug discovery process way more efficient, way higher throughput.
And then you have companies like Isomorphic Labs, owned by Alphabet,
you know, kind of positioning themselves as like digital biology companies,
but the goal is to literally turn, you know, the wet lab into a digital process using AI.
And there's some really sick stuff that's happening there.
So, you know, the promise of that would be turning a
10-year, two-to-three-billion-dollar process to find a new drug into hopefully a year,
or hopefully, hopefully eventually, it's literally immediate, personalized just to you.
Wow.
Well, Viome's kind of doing that, right?
Yeah, Viome, yeah, with the gut stuff.
Yeah, they're kind of doing that.
Yeah, I don't know how deep they are in terms of really building, yeah, like, AI models for the gut stuff, but they seem cool.
So the health care stuff is cool, the education stuff is cool, the driving stuff is cool.
The defense stuff is cool, honestly.
What's the defense stuff?
Like the autonomous weapon stuff, autonomous drone stuff.
Yeah, it's just cool.
Obviously, there's a lot of implications there around using it responsibly,
but the concept of a drone swarm is crazy and is possible now.
A drone swarm.
Yeah, have you heard of that concept?
No, what is that?
A drone swarm.
So it's like one of the sort of existential threat ideas in the AI space.
Imagine a bunch of small drones, like literally, you know, this big or even smaller,
but each one is like a little tiny explosive.
And then imagine having like a computer vision model on that drone and then imagine saying
to your drone swarm, we want you to go, you know, kill these people
of a certain profile or even an individual person and then you send your drones out there.
And the drones just zip really quick and they hit the target.
Wow.
And you could imagine, you know, there's this one group, the Future of Life Institute,
that thinks about this a lot.
They're a nonprofit focused on like AI alignment stuff.
You know, you could imagine that like what if you had a swarm of a million drones or a billion
drones, like it is kind of like an existential threat.
That is an application of AI.
I didn't know you could make a bomb that small that it could fly around and just explode.
I don't know what the bomb innovation, what the bomb status is, honestly,
but I just know the drone swarm concept is one that, you know, people stress about
as a long-term threat.
Well, there's some already really advanced drones in the military, right?
Yeah, like this is what Palantir does.
Yeah.
Like it can enter a house and then target someone.
Wow, that's crazy.
Yeah, it makes sense.
And then on a lighter note, the delivery drones are also cool.
Yeah, let's mention the good ones too.
Yeah, like Amazon using drones for delivery or there's this one company I think they're here.
They use drones to deliver goods, especially medical goods, in places where there just aren't roads.
Oh, that's cool.
Which is great, because it's like, someone's sick.
We need to get them supplies.
We can't really drive there in a timely fashion.
Let's just use a drone, and they'll use AI to deliver it.
Yeah, there's so many, I mean, there's so many applications that are cool.
Yeah, it's truly endless.
I mean, you know, we built civilization using human intelligence and now we're just starting to
rely more and more on artificial intelligence to progress it.
But the sky, or beyond the sky, is the limit, honestly.
I feel like I'm finding out new things all the time about AI that just blows my mind.
Yeah, yeah, it's not going to stop.
I mean, the rate of progress of these models and just how capable they are is not slowing down.
I mean, the one thing I'm also super excited about is the large world models.
Have you heard of that concept?
Yeah.
So, like, an LLM would be like a ChatGPT or Claude or Gemini or whatever.
These are all large language models.
They're trained mostly on text data.
And now they're multimodal.
So there's text and there's image and some have video now.
But now you have people working on what are called large world models.
And so on day three of the conference, on Wednesday, we have Fei-Fei Li speaking.
Fei-Fei became kind of famous, in the AI industry at least,
for creating the ImageNet competition, which kind of popularized deep learning.
And then Geoff Hinton, who's speaking tomorrow, won her competition.
But anyway, so she's always been super into computer vision.
She started a company called World Labs recently.
She raised like 230 million bucks or something.
And they're focused on building large world models.
So the concept is really just a foundation model for navigating the physical world.
Navigating the physical world.
Yeah, so right now, OpenAI has their, like, GPT series of models,
GPT-5 being the most recent one they released.
And then that's called a foundation model.
Because a lot of other people can just tap into that API and build off of that
foundation model without having to train.
Got it.
Their own model, which is a super expensive and time-consuming part, honestly.
So now you have that foundation model, which becomes really just the foundation of
all these other apps that are built off the API.
But you can't, for example, take GPT-5 and build an application to get a humanoid robot
to navigate the world.
You can't.
You can use GPT-5 as a foundation model to build an application,
to build a chatbot on your website; that you can do.
But you can't take that foundation model and teach Boston Dynamics'
humanoid robot to go pick up your groceries and bring them to your house.
Got it.
So I see what you're saying.
So nothing physical.
Yeah.
But the vision of this large world model concept, which is going to work,
is to create, you know, an analogous model to the foundation model
of an LLM like GPT, just for navigating the physical world.
So then people can just start building companies, people and companies can start building
applications to just get robots to navigate physical space to do the million trillion
things we would want them to do in physical space.
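The application-on-a-foundation-model pattern Michael describes, tapping a shared model's API instead of training your own, can be sketched in a few lines. This is a toy illustration only: `call_model` is a stub standing in for a real provider SDK call, and the function names and prompt wording are invented for the example.

```python
# Toy sketch of "building on a foundation model": the application layer is
# just prompt wrapping around a shared model API, with no training involved.
# call_model is a stub standing in for a real provider SDK (e.g. an
# OpenAI-style chat endpoint); everything here is illustrative.

def call_model(prompt: str) -> str:
    """Stub for a foundation-model API call; a real app would hit a provider here."""
    return f"[model response to: {prompt}]"

def website_chatbot(question: str) -> str:
    """An 'application' built on the foundation model: just prompt engineering."""
    prompt = (
        "You are a helpful chatbot for a website. "
        f"Answer the visitor's question.\nQuestion: {question}"
    )
    return call_model(prompt)

print(website_chatbot("How do I reset my password?"))
```

The point of the sketch is the division of labor: the expensive part (the model) is shared behind an API, and each app only supplies prompts and glue code.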
That'd be a big step because that's a huge thing right now, right?
It would be a big step.
I mean, shout out to today's sponsor, Quince.
As the weather cools, I'm swapping in the pieces that actually get the job done,
that are warm, durable, and built to last.
Quince delivers every time with wardrobe staples that'll carry you through the season.
They have fall staples that you'll actually want to wear, like the 100%
Mongolian cashmere for just $60.
They also got classic-fit denim and real leather and wool outerwear that looks sharp
and holds up.
By partnering directly with ethical factories and top artisans, Quince cuts out the middleman,
so it's able to deliver premium quality at half the cost of similar brands.
They've really become a go-to across the board.
You guys know how I love linen and how I've talked about it on previous episodes.
I picked up some linen pants and they feel incredible.
The quality is definitely noticeable compared to other brands.
Layer up this fall with pieces that feel as good as they look.
Go to quince.com slash DSH for free shipping on your order and 365-day returns.
They're also available in Canada, too.
Yeah, it would be a huge step.
I mean, human civilization, we built it for the human form.
And so now you have these companies like Tesla, you know, with their Tesla
robot, Optimus, or whatever they call it now, Boston Dynamics, that company Figure.
We have a company here, actually, with a humanoid robot that's going to be on stage tomorrow.
I just had them on the show.
Oh, nice.
They're going to do a really fun demo tomorrow going to, you know.
They went viral at CES, right?
Yeah, yeah.
What's their name again?
I should know this.
They came on a few days ago.
I should know too.
Yeah, I'm, like, just blanking.
Yeah, but yeah, I mean, these humanoid robots, like, the hardware is there.
Like these robots are so capable.
They can lift stuff.
They can walk.
They can do backflips.
What?
Yeah, yeah.
Look up the videos of, like, Boston Dynamics' Atlas robot, a humanoid robot doing backflips.
Wow.
But, yeah, so the hardware is there.
Now it's just about getting the software to catch up and these, you know,
large world models could be the answer to that.
And then, you know, there's a world where, like, five years from now,
a lot of us literally just have humanoid robots in our houses,
delivering groceries, doing the last mile for delivery.
Yeah, it'd be like an assistant or a maid, almost.
Yeah, an assistant, house manager, drive the kids to school.
And in your human, non-AI car,
so you don't have to replace it to get an AI-powered car.
It's like a black mirror episode.
It is like a black mirror episode.
I mean, this whole thing is going to kind of be like that.
This whole AI transition.
And it's just, you know, it's happening, it is happening quickly.
In large part, because every new generation of models that comes out
is enabling the next generation.
So it's like, right now, it's something like, you know,
80% of new code that gets written is written using AI coding assistants.
Wow.
Like, you know, things like Replit or whatever.
And, you know, these AI coding assistants are powered by whatever the latest AI models are.
And so when the next generation comes out, i.e., GPT-5,
that is going to make, you know, AI-assisted coding better,
which is then going to write, you know, better, more efficient code.
Yeah, you're good.
There we go.
And it's just, it is kind of a vicious cycle in terms of it's getting faster and faster.
And not only is AI writing the code that is going to lead to the next generation,
it's also helping design the chips.
So if you listen to like Nvidia or ARM, like they're using AI to help design these chips.
And so the next generation of chips is also becoming dependent on the current generation of AI.
So it's just like, it's a crazy spiral.
We don't know exactly where.
But that's why you have people like Emmett who you're talking to tomorrow,
who are thinking a lot about this alignment problem of how do we make sure humans and AI
play nice with each other.
Yeah, it's so nuts because as a, you're a parent now and as a soon to be parent,
I'm thinking about what skills do I want to teach my kids that are going to be relevant
when they're adults.
And I remember when I was growing up, my dad wanted me to learn programming.
But now that's like almost not needed, right?
Yeah, I mean, I wouldn't, yeah. With my kid, Daphne, would I be like, Daphne,
go learn to code?
No, I wouldn't, I would not say that.
Yeah, I think, yeah, what, yeah, the skills are changing.
Do you need to learn to drive?
No. Do you need to learn to code?
No.
Crazy.
Do you need to learn to think critically about all the things that we used to learn?
Like, the keynote this morning, the woman from the
teachers' union, she was talking about how a stress of hers is critical thinking.
And so she stressed, like, oh, these kids are going to just use ChatGPT
to do their assignments.
And then they're not going to learn to think critically.
Which like, I think is definitely something that needs to be thought through.
And, you know, valid concern.
But at the same time, I'm kind of like, if we have a technology that's reliable
that can do all the things that we currently think are critical thinking,
maybe we just let the technology do all those things and then find the new bucket of things
to think about.
Interesting.
Yeah, a good example is with Google Maps.
Before digital maps, or before even MapQuest, which was, like, the, you know,
printed version you would do, people didn't just know directions, they memorized directions.
Now, I live in Austin.
I can probably get to, like, 10 places myself.
The rest, I just plug it in.
Same in Vegas for me.
I get my Google Maps.
Yeah, that's good.
Yeah, I'd rather think about other stuff.
But I grew up in Jersey.
My dad could pull out a map and drive all the way to the shore, an hour and a half away.
Yeah, that's crazy.
It's nuts.
But was he as efficient as the Google Maps?
I don't know.
He probably made some wrong turns.
I had no idea.
Maybe he was having more fun.
Yeah.
I do get stressed sometimes, because, like, sometimes
you make a wrong turn off Google Maps and you're like, oh,
shit, I made a wrong turn.
This is horrible.
But it's like, who cares?
Yeah, that's like four minutes.
Yeah, exactly.
No, it's an interesting thought because growing up in school,
they taught us like not to use calculators, right?
Then we had calculators.
I know.
It's like, why, when we have calculators, aren't we using calculators?
And now they're teaching kids not to use AI, but I think that's kind of backwards.
It's definitely, it's definitely a little confusing.
But at the end of the day, we're just all figuring it out.
One good thing that education woman also mentioned is, like, so her union,
it's, like, a union that represents 1.8 million people.
It's, like, a huge teachers' union.
They partnered with, I think she said, Microsoft, OpenAI, and Anthropic, and they,
I guess those three companies funded, like, an AI education lab in New York,
where I think they're going to just be figuring out, like, how do we use AI
for education?
So people are asking the question and I think we'll have some pretty cool answers.
Wow, 1.8 million teachers.
Yeah, it's crazy.
It's incredible.
Yeah, it's, it's a huge.
And she's integrating AI.
That's impressive.
It's really cool.
Yeah.
Yeah, there's some great speakers here, man.
You did really well with this lineup.
I mean, you got Ben Lamm, Fei-Fei Li, the Godfather of AI.
I can't miss him.
Yeah, Jeff's the best.
People love Jeff.
Yeah, I mean, he, you know, he literally invented neural networks in the 80s.
And then for the next like, roughly 30 years, they weren't that cool.
Yeah.
And then when he, you know, eventually figured out deep learning,
just, you know, multi-layered neural networks and how valuable those were
compared to shallower networks, that's what really kicked off this whole wave, to be honest.
And he figured that out at Fei-Fei Li's ImageNet competition.
Wow.
It's a really cool connection.
That's the importance of events like this, right?
Bringing the smartest people together.
Totally.
Yeah, yeah, I mean, at the end of the day,
you know, this whole AI thing, whether it goes good or bad is up to us, humans.
And literally just talking about it and communicating about how we're approaching this huge transition
for society is critical.
Yeah, yeah.
I know the opportunity is massive.
Because when I see Mark Zuckerberg paying $250 million to hire one person for an AI role,
it's like, what is, what is going on here?
Yeah, I mean, it is wild.
You know, whether that's going to become the market rate or not, I don't know.
I don't think for everyone, but yeah, I think what he did with that, I mean,
it was a really interesting, like, marketing move to basically just be like,
Meta is really serious about, yeah, AI.
They made the investment in Scale, like 15 billion for 49% of it.
And then he started paying, yeah, these few researchers a shit ton of money.
And yes, like, the legitimacy of the salaries aside, it just shows, like,
okay, Meta is going to be really serious about AI moving forward.
Yeah.
And this, this was a strong signal.
He's trying to get the Avengers over there.
Pretty much.
Yeah.
Yeah, he's trying to get the Avengers over there.
Yeah, I wonder how Yann feels about it.
Who's Yann?
Yann LeCun. He's kind of been their head of AI for years.
Another very famous person in the AI community.
He actually, him and Hinton, did a lot of work together.
Yann, I'm pretty sure, was the lead inventor of convolutional neural networks,
which was originally critical for image recognition.
Got it.
But yeah, Metta, I'll be really interested to see what they come out with.
And if they stick with open source, because they've been, you know,
all open source with Llama, but maybe now, with all this investment,
they'll go more closed source to mimic the other, you know, big, big models.
Yeah, we'll see.
Where are you at on that whole debate of open versus closed source?
You don't care.
I mean, the coolest models that I use day to day and that, you know,
most people are talking about are the close source ones.
You know, ChatGPT, Claude, Gemini, Perplexity, etc., etc., etc.
But the open source ones are definitely powerful,
because they let people do whatever they want.
I mean, they let people do whatever they want.
Because right now, you can't, you know, build on top of GPT-5
whatever application you want.
They have guidelines and restrictions,
which does limit creativity inherently.
Yeah, like I'm sure you've noticed.
Sometimes you'll prompt, you know, like, ChatGPT or whatever LLM you use.
And you'll get like a, I can't answer that.
It's against my guidelines.
I've gotten that asking about Epstein Island.
Yeah, but like, you're right, but you should be able to talk about that
with your LM, if you want to.
It's not like who cares.
I literally asked for, can you name me some survivors from Epstein Island?
It said I can't.
Right, which it should be able to answer.
There's nothing wrong with that.
But like, at the end of the day, it still is software.
And it is still, you know, behaving based on its prompt,
its pre-prompt.
So that's a kind of an example.
Whereas maybe you want to make a model that knows all the Epstein stuff,
because people are super interested in it.
And you could really only build that off of an open-source model.
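The "pre-prompt" Michael refers to is usually called a system prompt: hidden instructions the application prepends before the model ever sees the user's message, which is where guideline refusals live. A toy sketch of that idea in software terms; the blocklist, topic name, and refusal wording here are all invented for illustration and don't reflect any provider's actual policy.

```python
# Toy illustration of a system prompt ("pre-prompt"): an instruction layer
# the software applies before answering, outside the user's control.
# The blocked topic and wording are made up for this example.

SYSTEM_PROMPT = "You are a helpful assistant. Refuse questions about blocked topics."
BLOCKED_TOPICS = {"blocked-topic-example"}

def respond(user_message: str) -> str:
    """Stub chat turn: the pre-prompt's rules, not the user, decide refusals."""
    if any(topic in user_message.lower() for topic in BLOCKED_TOPICS):
        return "I can't answer that. It's against my guidelines."
    # A real implementation would send [SYSTEM_PROMPT, user_message] to a model API.
    return f"[answer to: {user_message}]"

print(respond("Tell me about blocked-topic-example"))
```

An open-weights model lets you swap out or drop that instruction layer entirely, which is the flexibility being described here.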
No, I was trying to get one of them on the show,
just to hear their perspective.
And it wouldn't provide me any details.
Wow.
It was crazy.
Yeah, that is crazy.
Yeah.
Do you think this AGI thing is overhyped?
Or do you think it will actually change the world when it comes out?
I think the funny thing about it is just that the definition keeps changing.
The goal, the goalpost keeps moving.
So, you know, back in the day, so Alan Turing
was, you know, the original, the ultimate OG for this space.
Alan Turing essentially invented computer science, or formalized it at least.
And he essentially invented the concept of AI.
He wrote a paper, I believe it's a 1950 paper.
It's a super famous AI paper.
In it, he posed this concept of the Turing test, which I'm sure you've heard of.
Yeah.
Yeah.
And the whole premise was, you know,
as soon as we can have a human talk to a machine and the human can't decipher,
whether it's talking to a human or a machine,
AI has, like, arrived.
And we have, you know, super smart AGI type technology.
But now, like, we've passed that.
Turing test, obliterated.
And people are still now, like, we're still pursuing AGI.
So, the goalpost keeps changing.
And I think the reality is just like,
artificial intelligence, it's always going to look a little bit different than human intelligence.
And we're always going to experience it just a little bit differently.
Whereas, you know, we keep trying to compare it to ourselves with this AGI concept,
which like, you can't fault us for it because what else do we have to compare it to?
But I think it's always going to look a little bit different.
And I don't really think, I think we're just going to keep moving the goalpost.
To be perfectly honest.
And it's going to keep getting more, more powerful,
more general purpose, more amazing.
But I don't know if we're really going to get to this moment where we're like,
now it's, now it's arrived.
Develop a consciousness.
Yeah, I mean, like, yeah, like the whole conscious debate.
I mean, you could do that debate forever,
but we don't even have consciousness pinned down in humans.
Right.
We don't.
It's not defined fully.
It's not defined.
It hasn't been proven in humans either.
Yeah, we don't know.
We don't know.
We're just some physical pattern doing cool stuff.
Yeah.
And AI is going to be similar.
Some physical pattern doing cool stuff, a lot of which we can't do.
That's a good point.
I haven't heard that argument, but if we can't even prove it in humans,
how do you expect to prove AI's conscious?
Yeah.
Good luck.
But it's going to keep getting crazier.
That's for damn sure.
And it's going to keep doing, you know, just crazier and crazier stuff.
Like, one thing that, you know, Demis, the founder of DeepMind,
often points to is like, can we get AI to generate original science?
For example.
For original science?
Yeah.
Like people like Newton or Einstein, they derived their own laws and stuff.
Yeah, they, you know, observed our reality and extrapolated
math from it, essentially, that we could then
use to make predictions about how reality behaves.
And can we get an AI to do that?
Is a really interesting question.
And right now, it's like, unclear.
Like, there's some little early examples.
But we have, we definitely haven't figured out how to like automate physics,
like automate scientific discovery yet with AI.
But that would, that would be sick.
That would be nuts.
That would be nuts.
Because right now, it's all the prompting.
Like, the human still has to put in a lot of effort into the AI.
Humans have to put a lot of effort into the AI.
And there's just, there's no prompt right now that we can give
to really get like some original observation about reality out from AI.
So there's also just a model limitation,
that the model just doesn't understand, or isn't able to observe, our reality
sufficiently enough to really draw its own conclusions.
That makes sense.
Yeah, for sure.
Michael, what's next for AI4, man?
Where could people find you and come do a future event?
Yeah, thanks for asking.
We just keep getting bigger.
This year, yeah, around 8,000.
Next year, we'll probably have around 12,000 people.
And so we've fully outgrown the MGM now.
We've been here since '22.
And so we're moving next year to the Venetian.
Still in Las Vegas.
Nice.
But just a much bigger space.
So yeah, next year we'll be at the Venetian, August, I think,
3rd to 6th.
Yeah, bigger, bigger, better, more awesome.
We're going to have 1,000 speakers next year.
Holy crap.
That might be a record for a conference.
I don't know.
You got to call up Guinness.
I mean, I think there's just so much shit to talk about with AI.
It's like, with most conferences, there's not much to talk about.
But AI is literally touching every part of society.
And so we need to bring people from everywhere to make it a truly
AI-for-everything show.
Yeah, well, thanks for having me here.
I've met some great people.
And I look forward to getting to know more.
Yeah, good to meet you, too.
Check them out, guys.
Check out the AI4 conference.
See you next time.
I hope you guys are enjoying the show.
Please don't forget to like and subscribe.
It helps the show a lot with the algorithm.
Thank you.
Digital Social Hour
