
Support for the show comes from David Protein. Who doesn't enjoy a protein bar after a good workout?
Here's a tip.
David Protein bars.
All David Protein bars are designed to maximize protein while minimizing calories.
And they say that their bars deliver the highest protein per calorie ratio of any leading
bar on the market.
Their David Gold bar, for instance, delivers 75% calories from protein, and the David
Bronze bar delivers 53% calories from protein.
Head to DavidProtein.com slash ProfG, where they're offering a special deal for our listeners: buy four cartons and get your fifth free.
You can also use their store locator to find David in stores at a retailer near you.
Support for the show comes from VCX, the public ticker for private tech.
The US stock market started history's greatest wave of wealth creation.
From factory workers in Detroit to farmers in Omaha, anyone could own a piece of the great
American companies.
But today, our most innovative companies are staying private longer, which means everyday Americans are missing out. Until now.
Introducing VCX, a public ticker for private tech.
Visit GetVCX.com for more info.
That's GetVCX.com. Carefully consider the investment materials before investing, including objectives, charges, and expenses.
This and other information can be found in the fund's prospectus at GetVCX.com.
This is a paid sponsorship.
Support for this show comes from Indeed.
If you're looking to hire top tier talent with expertise in your field, Indeed says they
can help.
Indeed, sponsored jobs gives your job the best chance at standing out and grants you access
to quality candidates who can drive the results you need.
Spend more time interviewing candidates who check all your boxes.
Less stress, less time, more results, now with Indeed sponsored jobs.
And listeners of this show will get a $75 sponsored job credit to help get your job the premium status it deserves at Indeed.com slash voxbusiness.
Just go to Indeed.com slash voxbusiness right now and support our show by saying you heard about Indeed on this podcast. Indeed.com slash voxbusiness. Terms and conditions apply.
Hiring?
Do it the right way.
With Indeed.
Episode 386. Area code 386 serves north central parts of Florida. In 1986, Top Gun hit theaters.
True story: Tom Cruise is starring in a romantic comedy about body positivity.
He and his co-star both gained 300 pounds for the roles. The name of the film? Missionary Impossible.
Let's give it a second.
Welcome to the 386th episode of The Prof G Pod. What's happening? In today's episode, we speak with Meredith Whittaker, the president of the Signal Foundation and a leading voice on AI policy.
I first came across Meredith at South by Southwest.
She was on a panel.
I was bored and I walked in.
I almost never listen to panels, and I thought, who is this?
Who is this strange dark-haired woman speaking all sorts of truth and logic about AI?
And it ends up she runs the app, and I had lunch with her, and she struck me as really intelligent.
And I have been much more concerned, for the first time, I don't know if it's because I'm getting older, much more concerned about my own privacy, worried that at some point all of my AI queries will be made public, right?
Is that my prostate, question mark, expecting AI to answer?
I'm pretty sure every ailment I have is because of an enlarged prostate.
I'm convinced everything starts with the prostate.
Anyways, I don't know how I got here.
Anyway, she's an incredibly insightful, intelligent person, and I would argue probably the most well-liked person or CEO in tech right now, which isn't saying a lot. Very impressive, very intelligent.
And Signal is trying to, or is, I think, carving itself out as sort of the clean, well-lit part of the internet, and I'm fascinated with the trade-off between privacy and utility, and we'll speak more about that.
Anyways, here's our conversation with Meredith Whittaker.
Meredith, where does this podcast find you?
I'm in New York City.
In New York?
I thought you were, I thought you lived in Europe.
I'm in Europe a lot.
I go between Paris and New York. We're small, but we're spread across a lot of jurisdictions.
There you go.
So let's just get right into it.
I want to start with the basics.
Signal has been in the news a lot this year and we'll get to that in a moment.
We know it's widely used by journalists, public officials and people who are especially concerned
about privacy, but on a practical level, how does signal actually work and what makes
it different from other messaging apps?
At a practical level, Signal is the most widely used actually private communications platform.
We go out of our way to collect as close to no data as possible and that's really what
sets us apart because we exist in an ecosystem where, for better or for worse, in one way or
another, most of the time you make money in tech by collecting and monetizing data.
So you collect data about the users of your platform and then you sell access to different
types of users based on that data to advertisers or you collect data and you train your AI model
with it, et cetera, et cetera, et cetera.
That's kind of the economic engine of tech since the '90s, and maybe before. Signal is obsessed with maintaining the human right to communicate privately, and we have built, you know, an alternative communications platform that does just that.
We end up rewriting core pieces of the proverbial stack to enable us to do what is normal, to provide a basic and easily usable messaging platform, in a way that does not collect your data and thus does not put us in a position of being forced to turn it over if we get a subpoena, of having a breach expose your most intimate information, of, you know, violating the compacts that we make with the people who rely on us.
So that's it, in a nutshell. We're also open source, and open source matters here because that means you don't have to trust me.
You don't have to like me.
You can actually verify that, yeah, the thing that she or anyone says it does is what
it does because we can scrutinize the code.
We can prove it.
I think you're the most likable CEO in tech, which isn't saying a lot, but, um...
Yeah, well, the bar, the bar is pretty low.
So the term, I mean, the term encrypted is a loaded term.
Can you talk about the biggest misconception about encryption and messaging apps?
I mean, I think it's, it's a little bit like, you know, the way skin care ingredients
are like, I don't know, gold or something gets invoked, right?
Um, we can say, you know, both of these have encryption in them, but one has 10% encryption, or encryption is only applied to 10% of the data, whereas another is fully encrypted.
And so if you look at, say, WhatsApp and Signal: WhatsApp uses Signal's encryption protocol.
And this is the gold standard for encrypted messaging. It was released in 2013, has stood the test of time, and really advanced the field of privacy-preserving technology when it was introduced.
That's licensed by WhatsApp.
But WhatsApp only applies it to one layer of the WhatsApp layer cake, so to speak.
They use it to encrypt the contents of your messages.
So if I'm texting you, like, you know, hey, Scott, where are we going to meet at South by Southwest?
WhatsApp would not be able to see that.
WhatsApp does not encrypt intimate metadata, and metadata is a fuzzy little term, but it's, you know, it's actually pretty revealing data.
It's who you text.
It's who's in your contact list.
It's your profile photo.
It's when you started texting someone, your therapist, your oncologist, your FBI cutout,
whoever it is.
That's very revealing data.
And then of course, we're not owned by Meta, which means that, you know, there is not a bunch of Facebook and Instagram data you could then join that intimate metadata with to make profiles, et cetera, et cetera, et cetera.
So Signal is, you know, encrypted up and down the stack.
We encrypt the contents of your messages, but we also encrypt your profile photo, your contact list, who is texting whom, who is messaging with whom, who's in your groups.
So you can look at our website, signal.org slash big brother.
And we work to unseal any subpoena that we are forced to comply with.
And what you see there is a long list of requests for data.
That's normal.
That's what, you know, governments assume an average messenger is able to give up.
And then you see what we're actually able to give up, which is very close to nothing.
We can confirm that a phone number has an account.
We can confirm a handful of other things.
But we have gone out of our way to be, you know, unalloyed, you know, 100% encrypted, to use that slightly metaphorically, but, you know, you get the gist. We really take that extremely seriously.
We're not just sprinkling encryption dust on top of a, you know, ultimately non-private
infrastructure.
So I wanted to talk about something that gets no news.
So I don't know if you've heard of AI, but it's in the news recently.
AI.
AI, right.
Infrastructures.
There you go.
What is that?
It's a movie by Steven Spielberg.
So, AI agents specifically. You've been pretty vocal about the dangers of agentic AI, the danger it poses to our privacy and security.
Can you elaborate on the risks here, and what are most people not aware of?
Yeah.
The risks are the flip side of the promises, really.
And we actually started talking about this about a year ago, when we were seeing things like Microsoft Recall creep into, you know, the product updates, in this case for Windows, and really recognizing that, you know, Signal exists at the application layer, right?
Which means that we have to trust the operating system.
We build on top of iOS or Android or Windows.
And we have to trust that the operating system will be a reliable set of tools that we as developers can leverage to ensure that Signal works for the people who rely on us, and that, you know, the users of the device can rely on.
And our primary concern is that as agents get integrated into the operating systems by these, you know, AI companies, the people who maintain the operating system, and as they get leveraged beyond that in ways that give them very pervasive access to your life, it undermines our ability as Signal to guarantee the type of privacy that we guarantee at the application layer.
And I'll give, you know, that, that may sound a little bit arcane to people who don't,
you know, live in these waters with me.
But just a quick example: you know, if you have an agent running on your operating system, or sort of given deep access to your file system and other, other sort of data on your device, in order to do something like, you know, plan a work dinner, well, the agent will need access to your calendar.
It will need access to your browser, perhaps, to look for a restaurant, maybe your credit card or your EA's credit card in order to book that work dinner.
And in a scenario where you are, as you should be, all using Signal, it will also need access to your Signal and your Signal contacts to text them and coordinate dates and times.
All of that becomes a, you know, pretty frightening set of data access points and ultimately a security vulnerability, because instead of, you know, having to break our gold-standard encryption algorithm, which has been, you know, tested and is mathematically proven to be secure, you just have to leverage the type of access that these pervasive agents are being given into your applications, into your intimate data, in ways that, you know, are, from a security architecture perspective, very, very insecure.
And I'll note that right now, almost every agent that we're seeing kind of in the mainstream is relying on large language models, models that are too big to run on your device, which means that, you know, ultimately most of this data would need to be sent off your device to a cloud server to be processed for inference, you know, creating another security issue and potentially, you know, placing data in the hands of whatever company is running that agent.
So that's, you know, our concern. It's really coming from a privacy-integrity standpoint, and from a concern for the people who rely on Signal, prompted by the introduction of these tools, which can be useful for some things, but, you know, also pose this pretty significant risk that isn't getting the kind of attention I believe it should.
We'll be right back after a quick break.
Support for the show comes from BetterHelp.
This International Women's Day, BetterHelp wants to remind all the mothers, grandmothers,
aunts and sisters of the world that you deserve to take care of yourself as much as you take
care of people around you.
If you want help getting connected with a therapist, you could try BetterHelp.
BetterHelp does the initial matching work so you can focus on your therapy goals.
All you need to do is fill out a short questionnaire that helps identify your needs and preferences
and BetterHelp matches you with a licensed therapist operating under a strict code of conduct.
After 12-plus years of experience, BetterHelp says they have an industry-leading match fulfillment rate.
And if you aren't happy with your match, you can switch to a different therapist at any
time from their tailored recommendations.
With over 30,000 therapists, BetterHelp is the world's largest online therapy platform, having served over 6 million people globally. And out of over 1.7 million client reviews, BetterHelp's average rating is 4.5 out of 5 for a live session.
Your emotional well-being matters.
Find support and feel lighter in therapy.
Sign up and get 10% off at BetterHelp.com slash PropG.
That's BetterHELP.com slash PropG.
Support for the show comes from LinkedIn.
It's a shame when the best B2B marketing gets wasted on the wrong audience.
Imagine running an ad for cataract surgery on Saturday morning cartoons, or running a promo for this show on a video about Roblox or something. No offense to our Gen Alpha listeners, but that would be a waste of anyone's ad budget.
So when you want to reach the right professionals, you can use LinkedIn ads.
LinkedIn has grown to a network of over 1 billion professionals and 130 million decision
makers according to their data.
That's where it stands apart from other ad buys.
You can target buyers by job title, industry, company, role, seniority, skills, company revenue, all so you can stop wasting budget on the wrong audience.
That's why LinkedIn ads boast one of the highest B2B returns on ad spend of all online ad networks.
Seriously, all of them.
Spend $250 on your first campaign on LinkedIn ads and get a free $250 credit for the next
one.
Just go to LinkedIn.com slash Scott.
That's LinkedIn.com slash Scott, terms and conditions apply.
Support for the show comes from Square.
Think about your favorite small business: that coffee shop on your block, or the salon you've been going to for years, or the dog walker you always pass who seems to be having the time of her life.
Square makes it simple to run a small business, no matter what it is. Whether it's one brick-and-mortar, a pop-up, a mobile service, or franchises, Square can help track sales, manage inventory, and access reports in real time.
Square even has built-in tools like loyalty and marketing to help you connect with customers
and reward them for showing up again.
Square supports every major payment method, including tap to pay and offers instant access
to your earnings through square checking.
A lot of the local businesses I go to seem to be using Square, which actually makes me feel good about the brand.
With Square, you get all the tools to run your business with none of the contracts or
complexity.
And why wait?
Right now, you can get up to $200 off Square hardware at square.com slash go slash profg.
That's s-q-u-a-r-e.com slash go slash profg.
Run your business smarter with Square.
Get started today.
What do you think the risks are?
If you're using Claude or ChatGPT, what do you think realistically the risks are, over the next five or ten years, that your data is compromised by some bad actor, or that the LLMs themselves will have access to your private information and be able to link identity with it?
I mean, given the John Oliver segment on finding people's data on the dark web, including their search history, should people be cognizant of what they query these LLMs?
I mean, I think they absolutely should be cognizant.
Any query to an LLM that isn't sort of a specialized private inference setup, you know, kind of what Moxie, who founded Signal, is doing with Confer, or other similar setups: any, you know, general query to ChatGPT is sending that data to servers that are controlled by OpenAI, Microsoft servers. They retain that data. They could leak that data.
We know that when presented with a valid subpoena, they will turn that data over. In a world in which norms and laws and definitions of criminality shift from, you know, one year to the next, perhaps, it's good to be cognizant of where that data could go and what it could do in terms of, you know, marking you as one or another type of person.
Not to mention, I think, you know, with the introduction of advertising and, you know, increased targeting, at least the plans to introduce advertising in ChatGPT, I think there are also issues about what that can reveal about you, you know, in more mundane contexts, as a consumer or as a job seeker, and, you know, the kind of advantages or disadvantages that might accrue, and the power to define you based on data that is, you know, in the context of ChatGPT, often extremely intimate.
You've actually referenced AI as a marketing term. What did you mean by that?
Yeah, I mean, I think I'm being flatly literal, although I think that's sometimes taken to mean that I'm saying AI doesn't exist or it's not serious, which, you know, marketing is, in fact, very serious. You know, what I'm talking about there is just sort of denaturalizing AI as a technical term of art.
If you look back at the term AI, you know, it was created in, you know, 1955, 1956 by John McCarthy, who hosted the Dartmouth conference. Those of us in, you know, this world will be familiar with that kind of iconic conference, where a number of the quote-unquote fathers of AI gathered to try to create intelligent life, you know, in the form of a machine, over the course of a summer.
And John McCarthy created the term, in his own words in subsequent interviews, because he wanted to exclude Norbert Wiener from the convening; they didn't get along.
Norbert Wiener had, you know, created the term cybernetics and the field of cybernetics, and McCarthy, classically, did not want to be a disciple.
He wanted to be the father of his own thing, a very common academic urge.
And he also wanted grant money, and he thought artificial intelligence was a kind of flashy term with a, you know, a cool valence that would get some of that, you know, Cold War-era ARPA money flowing to his lab, which it did; it funded the conference.
But over the history of the term, it's almost 70 years now, we've seen it applied to very disparate technical modalities.
So McCarthy was invested in symbolic systems, which would look much more like decision trees, and was actually deeply skeptical of the neural approach, which predated, you know, the term by about 10 years and was, you know, McCulloch and Pitts, and neural networks sort of stemmed from that.
So what we see is a term that was invented primarily to describe an approach that's out of favor today. Now, you know, because of the specific resources available, and the recognition that, you know, neural networks can do interesting things with data and compute, and the type of business models we have, the term AI is applied to an approach that was not actually under its umbrella when McCarthy invented it.
And why is any of this important beyond it just being very interesting if you're a nerd?
I think it's important because it allows us to step back and actually recognize that this
is not a term of art.
And what we are describing are very particular approaches that have their own historical
and political-economic formulations, and that we can actually sort of have a bit more agency to define what we mean by intelligence, to choose the technologies that we are leveraging to produce intelligence-seeming outputs, and to be a bit more critical, and actually regain a bit more of our own agency in relationship to mythologies that kind of naturalize these systems as just a linear arc of technological and human progress.
There's been a lot of, I don't know if it's warnings or catastrophizing, from AI executives who have said, I'm scared of what I've built and I need to retreat to, you know, the Cotswolds and write poetry.
I'm curious what you think the threat level is of AI, whether it's been overstated or understated, where you see the biggest threats, and how we as a populace respond to it.
I think there are threats, particularly if we integrate these probabilistic, you know, generative and decision-making systems into high-stakes domains, you know, nuclear, defense, energy, and put them to tasks that they are ultimately not secured or suited for.
So you can have, you know, reward hacking, you can have emergent behavior.
All of those things are real.
Those aren't things that are simply going to sort of spring out of nowhere, you know, like Athena from Zeus's head, and suddenly we have these technologies running around without our control or delegation in some sense, right?
Those would need to be choices that are made by people and decision makers. And I do think, you know, in some sense, some of the fear has a bit of escape velocity from material reality and almost sounds a bit like religious fervor, rather than kind of a, you know, technically grounded concern about the rush to integrate technologies that are not fit for purpose and could have collateral consequences, which is where I land on it.
My primary fear, however, is the combination of the mythology of artificial intelligence, which is really framing these technologies as superior to human judgment, superior to human capabilities. Which, on some axis, measured in some ways, you know, sure: they do math much quicker. So does a calculator. They can, you know, produce things more efficiently, et cetera, et cetera.
Yes, but ultimately these are very centralized technologies that rely on huge amounts of data, data that is captured by an industry invested in what I'd call the surveillance business model, which is effectively, you know, collect all the data you can via your platforms and then, you know, train an AI model, sell it to advertisers, et cetera. And, you know, so it requires huge amounts of data. It requires huge amounts of infrastructure, and I don't have to go into the wildcat capex spending, the kind of, you know, Nvidia's picks and shovels, the monopoly on chips and the, you know, build-out of data centers. And it requires huge distribution networks, which often get left out of that calculus. But basically, if you're going to make money, if you're going to integrate this, you need, you know, either a large social media or marketplace platform, or you need a cloud business model, or you need to latch on to one somehow. So all of that redounds to an industry
that is highly concentrated in the hands of effectively the winners of the, you know, the last
tech boom, the platforms who were able to establish, you know, data pipelines and massive amounts
of data, large platforms, cloud infrastructures, global reach that were sort of cemented via
network effects and economies of scale, all, you know, classic communications network monopolies.
And so my concern with all of that is that what we're looking at is a significant concentration of power over infrastructure and decision-making that is then rebranded as a kind of godhead intelligence, in ways that are making us less critical than we need to be about how that power is
being leveraged. Well, let's drill down to specifics. What do you think, and nobody knows, but what is your best guess, with respect to AI and employment in, let's call it the West, in Europe and the US, over the short and the medium term? I've seen TikToks of economists and AI executives, or AI thought leaders, saying, you know, we're going to see massive destruction in the labor force. But the flip side is, so far it hasn't really manifested. You could potentially interpret that the job market is softening, but youth unemployment is about where it has been historically, at average. AI and the labor force: what is your best guess?
Yeah, and I've got to be careful here. This isn't really my lane, and I'm seeing a lot of kind of competing headlines. It does seem clear to me from some, some conversations that, at least in part, AI has been a handy pretext for job cuts. You know, boards and media and shareholders will accept that, you know, hey, we cut X number of people because this is part of our AI strategy; that doesn't look like weakening demand, that looks like innovation. And so I do think there's some AI wrapping of downsizing that is happening. And I've heard that firsthand from some folks.
I do think, you know, we are seeing at least a degradation of work, and, you know, degradation meaning, you know, there are people who maybe used to have a job as a copywriter or a translator, and we've seen this with translation, who are now just kind of editing AI output, right? And it's a less secure, maybe less fun, less rewarding job. But it's not removing the human; it's sort of removing the agency and power that a human would have in that job under different circumstances. I am really impressed with what I've seen of, you know, the new round of coding agents. They are very, very capable. And you know, I'm definitely seeing a lot of excitement across my industry there. You know, you can't deny that these are very useful and produce output that is, you know, pretty commensurate with, like, a junior programmer. But again, you still need a senior programmer. You still need somebody who understands how it works to review the code and maintain it. And so even though you're seeing advances in capabilities,
one thing that isn't being talked about enough is, you know, there are few things that many engineers I've worked with hate more than having to maintain someone else's shitty code. So you still need somebody who has an understanding at the systems level, who's bumped their head up against problems and understands them, you know, and can fix them, who understands how one, you know, pull request or kind of tranche of code might interact with another. And that's the place where I'm, you know, concerned about the kind of rapid outsourcing of some of the development work to agents. You know, I think some of that could backfire in, you know, a kind of technical debt that is very difficult to pay down, if what we're looking at is systems that are sort of, you know, built by agents or, you know, kind of coding AI, and not fully understood by the people, you know, the kind of skeleton crew, who are left to maintain them. So those are, you know, those are some reflections. I don't think I have a clear answer, because I think this is not just a question of AI. It's also, you know, where is there market will, you know, how is AI going to be used as a pretext? And then what happens when we do have the first significant issue with the reliance on these AI systems? And I, you know, I say that as I recognize that, you know, Amazon went down, apparently, because of an error made by an AI agent that they integrated. So, you know, we have already seen a kind of, you know, first wave of critical issues that are caused by a kind of dependence without human oversight. We'll be right back.
Support for the show comes from VCX, the public ticker for private tech. For generations, American companies have moved the world forward through their ingenuity and determination. And for generations, everyday Americans could be part of that journey through perhaps the greatest innovation of all, the US stock market. It didn't matter whether you were a factory worker in Detroit or a farmer in Omaha; anyone could own a piece of the great American companies. But now, that's changed. Today, our most innovative companies are staying private rather than going public. The result is that everyday Americans are excluded from investing and getting left further behind while a select few reap all the benefits. Until now. Introducing VCX, the public ticker for private tech. VCX by Fundrise gives everyone the opportunity to invest in the next generation of innovation, including the companies leading the AI revolution, space exploration, defense tech, and more. Visit getvcx.com for more info. That's getvcx.com. Carefully consider the investment materials before investing, including objectives, charges, and expenses. This and other information can be found in the fund's prospectus at getvcx.com. This is a paid sponsorship.
PDF space. You've got your docs, your plans, your specs, and then invite the crew to build
what's next. The talk of the team works. They think that this design could be a contender.
When somebody wonders, what's the next step? AI helps you finish the rest.
Bolts are tight. Now your plans will find when a smoother business when you're all aligned.
Do that with Acrobat. Learn more at adobe.com slash do that with Acrobat.
Support for the show comes from Shopify. Every worthwhile journey starts with a handful of what-ifs. But one day you'll be able to look back and realize that all those what-ifs were small steps toward turning your dream into a thriving business. Shopify can help you get there. Shopify is the commerce platform behind millions of businesses around the world, and 10% of all e-commerce in the US. Join them and turn those what-ifs into a thriving business with Shopify today. Sign up for your $1 per month trial today at shopify.com slash voxbusiness. Go to shopify.com slash voxbusiness. That's shopify.com slash voxbusiness.
We're back with more from Meredith Whittaker. So there's a tension between privacy and encryption,
and I think the potential weaponization of encryption and privacy by bad actors. And I would imagine, by virtue of your position, I think I understand where you would land on this, or at least have a bias or view on it. In London and New York, they say you can't go more than 12 or 15 feet outside without being on camera somewhere. And to a certain extent, I like that.
I think I like it more in Britain because I'm less worried about it being weaponized by
the administration here. But if you look at the decline in crime rates, I think some of it is
because of technology, and then court-ordered, mandated, if you will, violations of privacy: if there's enough evidence that this person is a bad actor, then we need to violate people's privacy to understand if something bad is about to happen. You must get this question all the time. That tension, where do you land on it? And is there ever a reason why people's privacy should be violated in the context of larger safety concerns?
I want to back this up to the fundamentals of encryption. When we're talking about
Signal, what we are talking about when we talk about encryption and the way that it
works is a technology that either works for everyone or it works for no one. If you undermine
the math of encryption, if you put a back door in there, if you have a, you know, not-actually-random
random number generator, that means you could basically perturb the encryption and decrypt it.
That's not just a back door. That's not just an error that only the good guys can
avail themselves of. That is effectively breaking encryption for everyone. So it really is a
scenario where the people you hate the most have to be able to use it to exercise that right,
so to speak, if the people you love the most are going to have access to it as well. It's an
all-eggs-in-one-basket situation, and that's at the level of the math. I'm not answering the question,
is it ever good or appropriate to undermine it? That's not actually what I'm talking
about. What I'm talking about is a world in which, over the last 30 years, we are surveilled within
an inch of our lives. You said every 12 feet we're recorded, great. And then you made the comment,
you know, I'm more comfortable with that under one regime than I am under another regime. Well,
that becomes the issue. You're not really in control of, you know, how the sands of that
regime shift. I mean, maybe you vote or, you know, whatever it is, but that data is indelible.
Those systems are pervasive. Meta is adding facial recognition to their
Rayban glasses, right? Where is that data going? Which governments will access that? And so
it is interesting to me that in, you know, a golden age of surveillance, when,
unprecedented in human history, our actions, our preferences, our communities, you know, who we date,
who we talk to, what we do for a living, how we spend our money, are surveilled and logged at a
level of detail unimaginable to the Stasi, that we are still pinpointing a tiny refuge,
the fundamental right to private communication, that is recognized as such, that is necessary for a
full and joyful and intellectually rigorous life that has intimacy and the ability to exercise
our opinions and dissents and blow the whistle and do journalism and all of that, that that one
right is presented as problematic, and as the barrier between stopping crime and allowing it to
run rampant, in a world where, you know, the issue is more often than not, you know, finding the
needle in the haystack of noise and the haystack of data, not getting access to an encrypted channel.
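The "either works for everyone or it works for no one" point above can be sketched in a toy example. This is emphatically not Signal's actual cryptography; the key-derivation function, the seed value, and the names below are all hypothetical, purely to illustrate why a predictable random number generator is a door for everyone, not just "the good guys":

```python
# Toy illustration only -- NOT Signal's real protocol or key generation.
# It shows why a backdoored (predictable) random number generator breaks
# encryption for every user, not only the intended surveillance targets.
import random

def keygen(rng: random.Random) -> bytes:
    # Derive a 16-byte key from the supplied generator (hypothetical scheme).
    return bytes(rng.randrange(256) for _ in range(16))

# Suppose the generator is secretly seeded with a value "the good guys" know.
BACKDOOR_SEED = 1234  # hypothetical known seed

alice_key = keygen(random.Random(BACKDOOR_SEED))

# Anyone else who learns the seed -- a criminal, a hostile government --
# regenerates the identical key, so the back door is a door for everyone.
attacker_key = keygen(random.Random(BACKDOOR_SEED))
assert attacker_key == alice_key
```

Real systems avoid this by drawing keys from a cryptographically secure source (for example, Python's `secrets` module), whose output cannot be reproduced from a guessable seed.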
So my stance on that is very, very clear, but I also think the framing of the problem needs to
be shifted a little bit. Yeah, my Pivot co-host said something that really struck me. She said that
people have the right to have secrets, and it really struck me that the smartest people I
know that also understand tech all use Signal. And I realized how promiscuous and careless I've been
with my own data. And I thought, what I do is just not that interesting. And most recently, when I hear
the Trump administration talking about assembling lists of people who are vocal, pretty outspoken,
against the Trump administration, I'm like, wow, I spoke too soon. And you have
advised the government; I know you worked with Lina Khan. If you were to advise
the administration or the FTC, maybe under a different administration, what
would be the most thoughtful regulation as it relates to privacy and encryption or AI? You know,
kind of magic wand time. What do you think is most needed from our governments right now?
Yeah, this is a bit of a tricky question for me because I've been not in the policy bubble for a while.
I do think, you know, something as simple as meaningful consent, and by which I do not mean
just a bunch of click-wrap and cookie banners, around whether or not a given company or institution
gets to create data about us at all, not what they do with our data, but whether they have the right
to tell my story, to know about me, would go a long way. Of course, it would wreck the entire
logic of the tech business model. But I do think the fundamental thing that needs to be done,
however the regulatory paintbrush would paint this, is to question and then take back the authority
to define who we are from a handful of companies that have naturalized their right to sort us and
order us and tell us our place in the world. That's a bit of a philosophical answer, but I do think
that's the core issue: the authority we've given tech companies, who create data for advertisers,
to sort and order our world and tell our stories for us. I'm just curious what you thought of
the Ring Super Bowl ad. My God. My God. I mean, I didn't expect it to become so flagrant so quickly,
I guess, and seeing that it was, I was like, who are they selling this to? Is it people who
would install this, or is it the government-contracts side, which recognizes exactly what this is selling
and wants to sign up for data access? I certainly wasn't the core demographic it was aimed at,
but it also felt like there was a tertiary market that was actually being addressed that wasn't
eager doorbell owners. When you look at the landscape, well, I'm going to ask a market question,
and my guess is you're going to tell me it's not your lane, but I just want to remind you
that it's never stopped us from opining on it. I'll talk all manner of topics. We have no domain
expertise in the markets. There's been a meltdown around SaaS companies from a valuation standpoint.
You essentially work for or run a software company, so I would think of it that way. I don't know if you'd
call it that, but at the end of the day, I would imagine it's code. I work for a software company.
There you go. So there's been an enormous destruction of value among SaaS companies, on the
belief that AI is going to come in and kick the crap out of these guys or make them obsolete.
When you see that happening, do you have any initial thoughts on the viability of these
software companies, some of which have lost 40, 60, 70 percent of their value?
I think ultimately when you're providing enterprise software, particularly to highly regulated
industries, it needs to be interoperable with legacy equipment. Even if you don't like that legacy
equipment, there's a superstructure there. There's a foundation. It needs to work with the data
that you have. Even if that data isn't great, that data needs to be cleaned and fungible.
You need to be able to account for the different determinations that are made,
depending on what kind of model you hook in there, that might not be possible,
particularly in financial services and other industries with high compliance burdens.
You need to have often human oversight that is personally liable or accountable for different
decisions. I do anticipate that AI in some form or fashion will be integrated, will have impacts
here. Fundamentally, this is not a magic wand. There's a lot of legacy infrastructure, regulatory
burdens, and labor processes and modes of work that need to be accounted for. I don't see
the SaaS software going away anytime soon, and I don't see AI doing anything to really
erase those other considerations. I think predictions of their demise are a bit self-interested
and far premature. Last question, Meredith. It strikes me, at least when I see
young people's actual behavior, that there's some consumer dissonance, in that people
talk a big game about privacy, and then I see people basically telling the world where they are,
what they are doing and who they're doing it with. It strikes me that,
if Uber were to ever get hacked, even a thin layer of AI on top could basically out who's having
affairs, who's terminating pregnancies, HIV status; it just wouldn't be that difficult to know
everything about someone with just their Uber data. Do you see the same dissonance I see, in that
as consumers have just decided to trade off massive privacy for utility, and do you have a
message for them? I do see some of that. I would shift it a little bit. I think ultimately,
humans want to be loved, and they want to be included. Even when we talk about Signal and privacy,
we're not talking about a vacuum. It's not Meredith by myself with none of my thoughts escaping
the echo chamber of my meaning-making. I am using Signal to share what I think with other people,
because I am a human and communication maps to human relationships and the desire to be connected
and to be included, et cetera, et cetera. I think we're in a world where ultimately we will
opt as human beings. I use these services too, because I want to go to the party. I want to see
what people are doing. I got to get somewhere. I want to participate in life while I'm living
as do I think most people. The ways to do that are things we're going to do, and I don't think
they represent actual choices about where we feel comfortable or uncomfortable with our data,
whatever our data might be. We don't really have access to it. We know we don't want someone to
share our mean DMs with our friend. We know we don't want our health data leaked to our insurance
in ways that would harm us, but that's also a place where we don't have that much control. In the meantime,
we got to get to work. We want to see what our friend posted. We want to be part of the popular
people and the ways of doing that have been slowly, we can say, colonized or
sort of instrumented by these tech services that advertise convenience, advertise connection,
advertise ease, and then below the surface have sort of hollowed out our privacy and our ability
to define ourselves and our place in the world. I would say what we're seeing is a natural human
inclination. We use what we can to be together, to connect with each other, to participate in life.
Those services have themselves, I think in some sense, betrayed us structurally,
and that doesn't mean we don't care about privacy. That means a meaningful choice around what
it would take to care about privacy has not really been given to us. We do see the number of people
using Signal going up and up and up. We do see people's understanding of why privacy is so important
becoming more acute and more felt at a personal level, when they see people's
social media posts being used at the border, when they see these collateral consequences that are
coming home. I think the issue then is, okay, what do we do about it? You can't say, well, the
choice is never to communicate with your friends because that's simply unrealistic and anti-human.
But you should use Signal. I'm not exaggerating, and this is my final plug.
The smartest people I know and the people who understand technology the most, that
Venn overlap, they all use Signal. It's almost like a badge of, like,
I get it. Anyways, my favorite quote from this is that people want to be loved and
included. Meredith Whittaker is the president of the Signal Foundation and a leading voice on AI
policy. She co-founded the AI Now Institute at NYU, advised FTC Chair Lina Khan, and was named
one of TIME's 100 most influential people in AI. She joins us from New York. Meredith, very
much appreciate your time and your good work. And I meant what I said, you're the bright, well-lit,
clean part of the AI technology bookstore. Let's put Meredith Whittaker in charge. Let's
consolidate all of it. I'll go raise $11 trillion, buy all of these companies, and put you in charge.
Deal? It's a deal, Scott. It's a deal. Yeah, looking forward to working with you and thank you
for having me on.
Algebra of happiness: a hack for young dads. It is striking to me how selfish kids can be.
I mean, I feel like I'm essentially a credit card that occasionally gets
to watch a football match with them sometimes. And let me just give you a hack. If you're a dad like
me who thinks that you're going to have all these hallmark moments with your child, you'll have
some of those. But for the most part, it's going to be mostly a one-way relationship. And I'm not
saying it's not amazing. But the hack I have implemented and it's helped me a lot is that
my favorite title, I've been a founder, you know, all these cool titles, CEO or whatever,
my favorite title in the world is dad. And that is every time my kids call me or say, oh, hi,
dad, or they call out dad, or, you know, I love you, dad, every time I hear the word dad,
I'm like one of those dogs that hears the word walk. And I train myself to just love that
term. It's the most important term in my life. And it's just more dopamine for me than anything
when these two things that kind of look, smell and feel like me call me dad. And what I've
decided and I started believing and training myself to believe five years ago is that when my
kids are awful, you know, they give me a hard time or they come home and expectorate their
emotions or they're unreasonable or they slam their door... And what you'll find is,
generally speaking, your kids don't behave that way outside of the house. If you're like 90% of us,
you're going to find that outside of the house, your kids are pretty reasonable, pretty good
citizens, pretty polite. And at home, they're fucking terrorists assessing the household for
vulnerability so they can strike when you're at your weakest. Now, why do they do that? Because
they're processing, they're emoting and they know what they can do with you because they know
you are there unconditionally. They know you love them unconditionally. Why? Because you're
their dad. And so what I have done and it's been a real unlock for me is that when my kids say
something inconsiderate or even mean to me or are disrespectful or aren't kind, I'm not saying I
let them roll right over me. But I assume they're saying one thing to me. They're saying, dad.
This episode was produced by Jennifer Sanchez and Laura Jenner. Cammy Reek is our
social producer, Bianca Rosario Miras is our video editor, and Drew Burrows is our technical director.
Thank you for listening to The Prof G Pod from Prof G Media.
This is advertiser content brought to you by Stonyfield Organic.
Our cows, them going out to pasture, they love it. They're so excited to go out every day.
They wait right at the door. In fact, we milk them and we just open up the laneway and let them
just go right out to pasture. I'm Rhonda Miller Goodrich and I'm a dairy farmer in Cabot Vermont.
Our farm is Molly Brook Farm. We're an organic dairy farm and we are a supplier to Stonyfield.
Molly Brook Farm has been in my husband's family since 1835. We started our organic transition
in 2015. We had 53 acres of corn ground and of course we had to use herbicides and pesticides and
the soil was dead, really, for all intents and purposes. We stopped growing corn and stopped using
herbicides and pesticides and we seeded that down to perennial grasses. After that we began to
see biodiversity in that soil again. To be organic certified, our cows need to be in pasture at
least 120 days. I think the organic practices really benefit our animals. You know, having good
feed, good water, a nice light area, that's what's important to us and that's what's important to
Stonyfield. Visit Stonyfield.com to find Stonyfield organic yogurt near you.
Support for the show comes from Shopify. Every worthwhile journey starts with a handful of
what ifs. But one day you'll be able to look back and realize that all those what ifs were small
steps towards turning your dream into a thriving business. Shopify can help you get there.
Shopify is the commerce platform behind millions of businesses around the world and 10% of all
e-commerce in the US. Join them and turn those what ifs into what is with Shopify today. Sign up for
your $1 per month trial today at Shopify.com slash Vox Business. Go to Shopify.com slash Vox Business.
That's Shopify.com slash Vox Business.
Do that, do that, do that with Acrobat. Learn more at Adobe.com slash do that with Acrobat.
The Prof G Pod with Scott Galloway



