
Jaeden sits down with Nick Frosst, co-founder of Cohere, to discuss the company's focus on enterprise AI, foundational models, and AI sovereignty. Nick shares why Cohere prioritizes practical and secure solutions over chasing AGI, and how businesses can avoid common mistakes when adopting AI technology.
Watch on YouTube: https://youtu.be/Qk9kXX0erTA
Conor's AI Course: https://www.ai-mindset.ai/courses
Get the top 80+ AI Models for $8.99 at AI Box: https://aibox.ai
Chapters
00:00 Introduction to Cohere and AI Background
03:31 Cohere's Unique Approach to AI for Enterprises
06:20 Real-World Applications of Cohere's Technology
09:12 The Evolution of AI Models and Their Utility
12:30 ROI vs AGI: A Pragmatic Approach to AI
16:14 Concerns in the AI Industry and Sovereignty
22:26 Capital Efficiency in AI Development
27:57 Common Mistakes in AI Adoption by Enterprises
30:28 The Future of Enterprise AI
Welcome to the AI Applied Podcast. I'm your host, Jaeden Schafer. Today, we're joined by Nick Frosst,
a co-founder of Cohere. We're talking a little bit about what Cohere's working on.
What makes them different? We're super excited. Cohere is obviously one of these
legendary companies; you know, some of your co-founders were on the original
Transformer paper. And so really this is, you know, one of the core AI companies, I would say,
out there in this AI revolution that we're seeing. Super excited to have you on the show today,
Nick. Would you mind telling everyone a little bit about your background and what got you into kind
of becoming one of the co-founders of Cohere? Yeah, happy to. Yeah, so I'm Nick Frosst. I'm one of
the co-founders here. Yeah, I guess I got into AI back in 2012. In 2013, I remember I was first
introduced to the topic of neural nets. I was at the University of Toronto as an undergrad,
and I took Geoff Hinton's course. And I remember learning about them in 2013 and thinking to myself,
oh, man, I really missed the boat on this whole thing. If only I had been here a few years earlier,
I really would have gotten in on the ground floor. Yeah, I think about that a lot. I think about how
I really was convinced, I really was convinced that I had missed the boat and that,
you know, this excitement had kind of just peaked. Yeah, it's really humbling
to think back to that, and to the many other ideas I have had that have been wrong.
But since learning about them in 2013, I've just been excited about neural networks
and what they can do and how they can change the way we use computers completely.
So after undergrad, I worked at Google for a while. I was a researcher there,
and I worked with Geoff Hinton for several years on foundational neural net research.
Was that how you got into Google? Like, he was there and pulled you in? No, no, no, I took his
undergrad course, but he did not remember me when we got there, which is funny. So then it was at Google
that I became a collaborator of his and got the privilege of learning from him and working with him.
And that's also where I met Aidan Gomez, who is CEO and co-founder of Cohere. He was one of the
authors of the Transformer paper, and he had met Ivan Zhang, who was a friend of mine from undergrad.
So the three of us got together in 2019, 2020 to create a company whose sole objective was to make
neural networks and transformers useful for businesses. So we've been making foundational models since then,
focused on the enterprise, focused on deploying them securely, customized into an enterprise
environment so they can get real work done. We've been doing that now for six years.
Okay, amazing. I think a lot of people have heard of Cohere. You guys have a great product.
You have your own foundational models. I have a startup called AI Box, where we let people get access
to like 80 different AI models. We have Cohere on there. And actually people have really good
reviews of the Cohere models. I guess my question to you is, kind of looking at the state
of AI today and the market, Cohere, it seems, has had a very different kind of objective from the
start, right, from, let's say, Anthropic or OpenAI trying to just go after this mass kind of
market. Talk to me a little bit about why you guys have done that from the beginning.
It feels pretty intentional, you know, your direction with kind of targeting enterprise
versus going mass market, and how you guys think about that. Yeah, I think it has been directional.
So just to set the stage a little bit, right? There's maybe, I don't know, maybe there's like 11
companies in the world that can make foundational models. Maybe 10. Every few months, a few
times a year, there's one new one and there's one that stops doing it. But it's been pretty
consistent for a few years now, in that around-10 ballpark. We are unique amongst that 10
in a few ways. We're unique in our singular focus on the enterprise. We're unique in that we're
not American and not Chinese, and that's pretty much all the others. So that puts us in a separate
place. And we're unique in our approach to building a technology that empowers and uplifts
and provides sovereignty for everybody who touches it. But because we don't have a consumer
application, we're not fighting for people's attention. You know, we don't have ads
in our models. We're not optimizing the models for sycophancy to, like, keep you engaged
in the conversation. We're just optimizing them to make somebody better at their job.
That's all we train them to do. And when we deploy them into an environment, we deploy them
securely, on-prem sometimes even, so we, like, drop our models into a customer's environment,
and then we don't see what goes in and out of that model. So that allows them to connect it to,
you know, real regulated data, like healthcare data or financial industries or things within
the government. Real data that people want to get real work done on. And that is a pretty
challenging way to deploy models and a challenging way to build them. I mean, you need to be thinking
about efficiency. The number of GPUs that our models take is quite small
for their performance. We think about that a lot. And we got there by being first and
foremost interested in how to make this stuff useful now. How to make it useful today. We don't
have a business model that requires AGI to be financially viable. We have a business model that
is financially viable. And that's just a commitment to pragmatism and utility that we have as
people. That's just part of who we are. And that's part of our focus. So we've always had
that objective. And I think, yes, very strategic. I mean, you guys have gone into this with a
solid objective, which I think actually sets you up in a really good way. You know, a lot of people
talk about AI bubbles and crazy valuations and all this kind of stuff goes on. But you guys have a,
you know, a real business model that's working today. You're in enterprise. And I think the
number one thing you want to see is, like, are companies seeing, you know, great results with you?
And that's basically how your company will stay stable. What are companies
today using you for? What are they most excited about? Like, talk to me about, you know, where
Cohere is kind of playing today. Yeah, people are seeing great results with the technology. And
they're using it for a really wide variety of things. This is a complicated conversation in AI.
People are always saying, Hey, like what's the best use case? What's the best thing? It's
kind of like asking, you know, what is the best use case for a word processor? What is the best
use case for a spreadsheet? You know, what's the best... Like, this is a fundamental efficiency
optimization workplace tool. And it is best used for whatever it is you are doing work on.
Right. So we build models. We build a search stack called Compass. We build
an agentic framework called North to use them. So that, you know, it's like a chat application,
but then it's also got automation, so you can systematize, regulate, and automate
consistent processes. But it's all, you know, stuff that people who
are paying attention to AI are probably familiar with. It's just that we can deploy it entirely
securely and customize it and drop it into an enterprise company. What they then use that for is
automating and augmenting whatever it is their knowledge workers are doing. And that's pretty
varied. But just to give a few specific examples, we have people within financial markets inside
banks using North to keep track of the quarterly earnings of way more companies than they used to be
able to. So that's a really, you know, tangible one: a financial markets specialist used to be able to
keep track of some number of companies and create reports on them. Now they can
keep track of two times that now, because they use a model to help summarize and help write their
reports, validate their theses, use it to bring in information from their internal sources,
cross-check it, and then they can be more efficient. We have people within customer support using
it to resolve tickets. That's a great example. Using it to resolve tickets. I think CoreWeave
was a customer of ours. They used to resolve tickets in like four days, and then they got that
down to two. Like, that is... yeah. And they got that while still keeping a person in there
validating, making sure that, hey, this is the right thing you're going to say, but they've been
able to decrease the average time considerably by automating and augmenting a large portion of it.
Yeah, it's really all over the place. Like, any work that you're doing behind a computer
that is kind of boring and you know how to do, it's just going to take you time, you can use
North to do it. A lot of our listeners are people working inside of organizations. They've tested out
a lot of different models before. Probably a lot of them use, like, Anthropic or ChatGPT.
Talk a little bit about what people could experience, because I feel like we kind of get
into the cycle where it's like every, you know, X amount of months, it's like, oh, it's Opus 4.7.
Oh, it's the new ChatGPT. Oh, it's the new... like, talk to me a little bit about where you guys
are at with, like, new models, or how you think about that, or if that's necessary. Because I mean,
I know for, like, consumer, it's also kind of like a marketing hype thing. Like, this model is
going to destroy all the security in the world, right? From, like, Anthropic. So, like, how do you guys
think about new models and model progress? Models have certainly gotten better over the past few years,
but it's pretty logarithmic. Right. And mostly the real step change that has happened
in utility of models over the past year has not been the underlying model capability but has been
the harness around the model. Yeah. And that was super interesting. That's like the whole coding
thing right now, like using LLMs for coding. We've been using LLMs for coding for, you know, years.
But it wasn't until the past six months that the harnesses were made so much better that it
became really viable. And not that there hasn't had to be an increase in the underlying model capability
to make that legitimate. Like, if you tried to use a model from two years ago in
a modern harness, it probably wouldn't work very well. But the difference between the model today
and the model back then is pretty minimal. And you might be able to, you know, take one of our
modern harnesses, rework it, and get one of the older models working better. I think there is a lot
of hype going on. I think there's a lot of disingenuous hype going on. I don't think... yeah, I think
a lot of the rhetoric around this is, like, pretty clearly not accurate and mostly marketing play.
But the utility of models has increased a lot. And that's in part because of improvements to the
model, and mostly because of improvements to harnesses, so the way it's plugged in and connected,
and what data sources you give it access to, and stuff like that. Now, we're always building
better models. And we released new models pretty consistently. But we've been focusing a lot on,
on efficiency. We're not trying to make the biggest best model out there. In part because we do
all these on-prem deployments. And you would be surprised at how compute-constrained even pretty
established players are. So we're not making a 5 trillion parameter model because a lot of our
customers do not have anywhere near the compute to serve that at scale. That's one of the reasons why
you've seen those MIT reports that came out that said so many people built demos but didn't get
into production with AI. Right. Cost was prohibitive. Security was an issue. They couldn't
deploy on-prem because they didn't have the right compute or because it wasn't being offered by
the provider they were going with. We have much closer to the inverse of that.
Everybody we work with pretty much is in production. Wow. That's because we target that right
from the beginning. But that does mean that we don't have the biggest, best model.
We have models that are optimized and efficient and, like, useful in a real-world production setting.
Which is, to be fair, a complaint I've actually heard from a lot of organizations: just how much
they spend on these models and the model tokens, and how the costs are just going up. And like,
it's crazy, you know, like a lot of organizations complain quite loudly about this. And so,
I mean, that is real, cost is certainly an issue. I would also say that
cost is an issue, but the utility you get from these things is pretty huge when it's deployed
correctly, and when you're not just making a cool demo but are instead, like, empowering your
employees to do stuff better, then the return you can get can be quite good. But it does need to be
measured, needs to be balanced. It's not a great strategy to put, you know, carte blanche onto
token spend and just say, like, let it rip, you know. Yeah. This is actually a concept that I
think I've heard you talk about before, which is kind of the contrast between AGI versus ROI.
Can you tell everyone, I guess, your take on that? Yeah. Yeah. You know, I said that
a while ago, that we're building ROI, not AGI. It's quite pithy, and so
it stuck around. But I think, yeah, when I say AGI, like, not AGI, you know, I don't think autoregressive
sequence models are like people. I think they're very not like people. I think you
are very much not an autoregressive sequence model. And I think no amount of improvement of an
autoregressive sequence model is going to make it like you. And so it's going to have very different
behavior. And I think at this stage... like, you know, a few years ago, I would ask people,
like, hey, do you think improving language models is going to get us to human level intelligence?
And a lot of people would say yes. These days, pretty much everybody kind of knows. Hey,
do you think just scaling LLMs is going to get us to AGI? The vast majority, whether
I ask that question at universities or in public halls or in AI companies or in banks, like,
anywhere I'm going, when I ask that question, the huge consensus is that no, autoregressive transformer-based
sequence models are not enough to get us to human-like intelligence. They are enough to get
us an intelligence that is incredible. And in many ways surpasses humans, right? In the same
way, previous models of computing surpassed human intelligence in a bunch of areas, just this time,
the areas are bigger and the technology is much more powerful. But I don't think we're going to
get to an intelligence that you treat, or expect to behave, as a person. And subsequently,
you know, we're not building a business model that requires that. We're building a business model
where instead, as LLMs get better, we can provide more utility, and I expect LLMs to continue
to improve. But we're not waiting around saying, hey, how can we reach profitability? Well,
we'll build the digital god and then we'll ask it, you know. Instead, we're
making something useful. We're going to ship it into our customers' environments.
We're going to make sure that they're getting ROI on it, that they're getting something useful.
We're going to keep improving our offering. I think that's a very level-headed, probably
more accurate than many, response to that. So I do appreciate that perspective on it. I mean,
it's an interesting one. I find like that really is the consensus. But if you spend any time on
Twitter or listening to, you know, a handful of podcasts, you would get the impression that
that's a fringe theory. Right. And, like, I've asked that question in master's classes
at MIT. I've asked that of, like, ML researchers at conferences. I've asked that of
random people on the street. It is the broad consensus across every discipline. And the reason
for that is that if you sit anybody down in front of an LLM and ask them to use it,
they're going to be blown away. Like, it's still the most incredible technology I have ever
interacted with. It's why I've been excited about neural nets for more than a decade now. And
why I've been excited about LLM since I first found out about them. Like, it's, it's an incredible
piece of technology. But you use it for a little bit and you realize, yeah, this is not a person.
This is doing something very different than a person. And in many ways better, in many ways better.
But also in many ways worse. And if you try to treat it like a person,
you're not going to get good use out of it. Instead, you have to treat it like a tool,
like a targeted tool that you can set at specific tasks guided by your own strategy.
One of the best uses of computers I think we've ever created. But it's got to be guided. It's got
to be directed. 100%. But from your perspective, with what you kind of see in AI,
is there anything that concerns you about maybe the direction of the industry? Yeah, totally.
I think there's a lot of unprincipled behavior going on right now. Yeah, I think there's a lot of,
I think if you're making a business and you're banking your finances on the creation of AGI,
you're in for a rough time. So, a question I get a lot, I just want to pitch this over to you
because I know you guys do a lot of, like, um, model sovereignty, which is huge right now.
We have the UAE, Saudi, France, Canada, India. A lot of different countries are talking about it.
They're kind of worried about this AI sovereignty question. Is something like AI sovereignty
really a durable market? Or is it kind of... some people are asking me, like,
oh, is this because governments are, you know, writing checks because they're afraid and maybe
they're just going to all run American models through like a VPC in the future. Where do we see this
AI sovereignty question playing out in the future? Yeah, that's a great question.
Let's spend some time talking about what we mean by sovereignty first. So when I think of sovereignty,
I think about it at like a few spatial scales. So I know that the word is often used just to think
about a country or something, but I think sovereignty actually can apply to an individual,
can apply to a company. It just means your ability to function independently. It's independence,
it's empowerment. It's, you know, are you the master of your own ship? Are you in control of what's
happening? For a person, that means like, are they uplifted by the technology or are they
pinned down by it? I think people over the past few decades, you know, have a pretty, at this stage,
openly hostile relationship with technology. Yeah. Like, people hate their phones. And so they
should. So they should. The technology promised to empower them, but instead fought for their attention,
tied them down, made them feel bad about themselves, ultimately. And there's lots of complicated
reasons for that, one of them being surveillance capitalism, like, complicated motivating factors.
And we're beginning to see that a little bit within LLMs already, right? When you see,
when you see OpenAI talking about adding ads into their chat, it's like they're speedrunning
that. For an enterprise, sovereignty means, like, can they function independent of the market?
Do they have something that's unique to them that's helpful for them? Are they just, you know,
getting table stakes for being engaged in the industry? Or do they have something that actually
allows them to execute on their vision independently in a self-directed manner? For a country, you know,
sovereignty means do they have technology that can't be shut off? Right? Like if you think about
energy sovereignty or something, like if you have a nuclear power plant in your country,
you have some energy sovereignty. And there's not cables that can be cut. There's no country that
can stop shipping you oil. And you have a thing that's powering your country that is under your
control. Co here is unique amongst the foundational model companies in our ability to provide sovereignty
to every one of those groups. So when we make a model, and we've done this, to give a concrete example,
we made a Japanese model for Fujitsu in Japan, a model very good at Japanese.
We dropped it down into the servers of Fujitsu. We did the same thing in Korea with LG. We've done this
in several places around the world. We take that model, we put it into their environments,
and then we cannot shut it off. And we don't see what goes in and out of it. Like, that is a sovereign
model that exists in that country. And they pay us via a contract and goodwill. And they,
you know, abide by the terms of service. But we don't see what it is,
and if we wanted to shut it off, we couldn't. There's a lot of countries looking
around at their relationships of technology right now and realizing that that's not the case
if they're buying technology from America, and so they're getting tech that doesn't really provide
sovereignty to them. Same with a company, you know, if they're just using what is out there,
but it's not connected into their data, it's not deployed securely. So they can't give it access
to their like real regulated stuff. You know, that's not differentiated. That's not giving them
independence. That's just giving them the lowest bar. Same with an individual. If they're just using
AI, but the AI is fighting for their attention, the AI is trying to pin them down or trying to say,
yeah, wow, that's actually a great idea. Wow. So amazing. You know, that's not providing sovereignty
to them. That's, again, just tying them down. So we're unique because of our business model,
because of our go-to-market strategy, because of the types of models we make, because of our
on-prem, secure, customized deployment. Like, I think we're the only ones that provide that. And do I
think that's a long-term thing? Yeah, I do. This is something I think the whole
world is realizing we need technology to do. We need technology to be uplifting and empowering us
and the companies we work for and the countries we live in. And we need it to be under the control
of each of those people. So yeah, I don't know. I'm really excited about this. And I love
that Cohere has an ability to deliver on it. I love that. For so many reasons, because, yes,
I actually think you gave me a bunch of thoughts I hadn't had before. You know, examples being,
I mean, I'm not trying to, like, bag on all the other models, but ChatGPT, if you go talk to it
lately, at the end of every message, it's like, do you want me to, like, choose your answer? Do
you want me to do this and this and this? And then you're like, okay, sure. Now, do you want me
to do this? It just feels like I'm on a never-ending, like, listicle ad on the side, where it's
like, you are... Yeah. And that's helpful. Yeah. 100%. And I think also on the other side, which
I know this is kind of, like, whatever, this is no sort of political statement on anything, but you
have kind of the whole blow-up between Anthropic and the US government lately. The US government,
I think, realizes there's probably choices they have to make if they want a model that's not
like that. And I mean, just all sorts of different areas, right? It feels unstable
to use a model that could turn on you or be cut off when you're using it for something critical,
right? And in so many different industries. I mean, finance and healthcare. Like, there's a lot of
industries where these things are critical. And, yeah, you need to be able to have your sovereignty. So,
okay, consider me sold on the AI sovereignty question, which I was skeptical of before. So I really
appreciate you bringing that up. Yeah. I'll count that as a win. Count it as a win. Yeah,
I'm an advocate for AI sovereignty now. So one thing that I'm also just curious about with
everything going on, we have these AI models that are raising ginormous, you know, $40 billion
rounds. Yeah, indeed. Cohere has not raised a $40 billion round. I mean, to be fair, most people
haven't, aside from like three or four companies total. But I think it's really important for
Cohere too: you guys have made a big point of being, you know, very capital efficient, right?
And focusing on what's important for you guys. What are things that you choose not to spend
money on that your competitors do? Like, how are you that capital efficient? That's a good question.
I just want to sketch that out a little bit. So, yeah, we have raised, um, by any normal metric,
just a crazy amount of money. By any normal metric, in any normal time, the amount of money
that Cohere has raised is nuts. When we started this company, I did not anticipate that we would
raise anywhere close to the amount of money we have raised. For a regular person to engage
with, the amount of money that we have raised is crazy, unfathomable. And yet it's less than, I don't know,
I don't know exactly how much, maybe a hundredth of what
OpenAI and Anthropic have raised. There's really three buckets you can spend on when you're making
foundational models. You've got data, you've got compute, you've got talent. And in the compute,
you've got the training compute, and then you've got the inference compute. We have trained competitive
models, models that are not the best, the top of the top of the leaderboard, but for their
size category are fantastic, with one one-hundredth of the compute of our competitors.
One one-hundredth. And that's because of the team we have. Like, they're great, they're
persevering, they're working so hard, they're brilliant, they're creative, they're collaborative.
That's the reason for that. So we've trained it with one one-hundredth of the compute.
It is a capital intensive, capital intensive industry, and we will continue to raise more.
But I think ultimately the sustainable viable long-term path, and we are on this for the long
term, we're trying to build a generational company, is one that takes a principled approach.
And would our lives be easier if we had raised more?
Yeah, we'd have more compute to spend with, and to play with, and that would maybe make our lives
easier, but I don't know if it would make us a better company. I don't know if it would make us
the principled, pragmatic, optimized company that we are today.
Do you think the current scale of AI infrastructure spending is sustainable, or do you think we're
heading towards a correction on that? In some ways, there has been a correction already.
There are many data centers which were announced and have not been built. It's like 50%
or something. Yeah, there was a brief moment there where everybody and their mom was saying,
we're making a data center. Just to give a bit of history on this: the hardware
that we use, GPUs, were originally created for graphics processing, to do matrix multiplication.
They were then used for Bitcoin stuff for a while, which wasn't super helpful in the long term,
but they were being used. Now, at this stage, the GPUs we're using are tailor-made for
neural network training, but they're still big matrix-multiplying machines.
That's, like, one piece of hardware that has been used for many separate things over its history.
I don't think the world is going to run out of things to compute. Even if we figure out how to
make LLM's way more efficient and the computing power goes down, the world having more compute,
to me, seems like a good idea. Provided they're built in a sustainable way,
they're using green energy, they're using closed-loop water systems,
like, they're built in a reasonable place, provided all those things. I think it's generally
a good idea to have more compute, and I'm not super worried about the world having too much.
We're not going to run out of things to compute. There's definitely, there are definitely
questions that need answering and computers can help us answer them. I think that's a good idea.
That being said, yeah, there's been lots of unprincipled approaches to building data centers,
and that's why you're seeing all of these big announcements just fizzle. They were only
rhetoric in the first place. That was my instinct on it, but I appreciate the insight.
A couple of things I am now seeing, just to go back to what I am seeing: I've seen,
suddenly, lots of other countries trying to build reasonable data centers for their environment,
and I think that's a really good idea. I'm seeing lots of countries try to figure out,
okay, cool, we need this capability in-house, within our borders, let's build a data center
of the appropriate size. I think that's a great direction for people to be going in. In the same way,
I think it's a great direction when I see countries building nuclear power plants inside their borders,
so I think that's a great idea. To be fair, I actually think it's a funny moment in time,
and everything in AI happens so fast that it happened and everyone's long forgotten. I remember
there was a moment a couple of years ago where there were all these articles going around saying
Sam Altman was trying to raise $7 trillion to build sovereign data centers in every single country,
in every country that would do it, and then it kind of disappeared, and I don't
know, I have no idea what the background of that story was. I just saw the headlines and thought
it was a wild story, and I think many people did. However, I think probably a lot of countries are
realizing they probably don't need OpenAI specifically to go build those data centers for them,
right? They can build their own data centers and have sovereignty over those as well, like you
mentioned with the model. But I think it did pique a lot of people's interest, it kind of put it in the
zeitgeist that, gee, this is probably something a lot of countries should look at and should think of.
Like, this is a good country-wide investment to make. Not getting tied into one ecosystem per
se, or into how anyone else is operating, but building it so you can run your own sovereign stuff.
What's one of the most common mistakes you see enterprises make when they try to adopt AI?
There's a few. One of them is to say, oh, we're not ready. We need to make sure our data
lake is in a good environment. Let's not give our employees access to AI. Let's instead
spend a bunch of time trying to figure out how to improve our data warehouse, our data storage.
Your data... as soon as you've done that, something else is going to have changed. You're going to be
collecting new data, and... there's just never, no, I've never met a company that's been
like, oh, yeah, our data is perfect. It's pristine. It's in all the right places. It never happens.
Never has. Some are better than others, and you should always be striving to improve it,
but that's an ongoing process. So I've seen companies, like, delay doing it. They could push it
off forever, saying we're going to clean up our data warehousing. The other is to take a top-down approach; I think
it works much better when you take a bottom-up approach. Ultimately, this technology is augmentative.
You know, everybody in any organization where they're working behind a computer,
an LLM can help them for some portion of their job. It probably can't do the entirety of any
person's job, but it can do a portion of everybody's job. And the best way to get utility out of this
is to give people access to it and say, you use this. You're in control. You figure out how to use
this, put it into your environment, make sure it's secure, make sure it's got access to the right
data, make sure the employees know how to use it, know what they can share. Hopefully, if it's
securely deployed, they can put any data they want in there. And then say, you are empowered
to figure out how to use this, and use it if you feel like it's helping. Or don't use it if
you feel like it's not. That's it. I find that works much better than trying, top down, to say,
we are going to identify this particular use case. We're going to automate that particular thing.
It's so challenging in a large organization to know what people are doing, when, and how.
Like, they know how to do it. And so they're going to know how to get an LLM to do it. And that's
going to be beneficial for everybody. That's why I think going with a bottom-up approach is often
more effective. Love that. I think that is phenomenal advice. By the way, it's been such a phenomenal
interview. And I'm super appreciative, Nick, for your time and all of your thoughts, because I
feel like I've learned a lot, and I'm sure all the listeners have as well. Looking forward into
the future, maybe in the next few years, what do you feel like enterprise AI actually looks like?
What are some of the shifts and changes that we're going to see that people should be expecting?
Yeah, I think, I mean, we're already seeing this a little bit, but I think largely enterprise AI
over the coming years is going to look like this. You're going to get into work, you're going to open
your computer, you're going to open up an email client, maybe, as you do, maybe you'll open up Slack,
and you'll open up an interface to use a language model. And then you're just going to ask it to do
stuff. And your work will feel a lot more like directing a machine. And then making changes to it
as you see fit and figuring out if it's got the real cultural nuance, if it's got the right strategy,
all these things, like, you know, making edits that are creative and strategic and supervising what
it's doing, rather than spending time doing things yourself. And we're already seeing that a little
bit. Like, that's the way a lot of coders feel right now. A lot of coders are, you know,
supervising, reviewing, strategizing, creating architecture docs and thinking about the
code rather than spending time writing boilerplate code anymore, because the LLMs
can do that for them. So I think we're going to see that in all knowledge work. And I think we're
beginning to see a little... yeah, we're seeing that already. And we're just going to be seeing that
in more facets of work behind the computer. Amazing. And I, for one, am super excited for that
future. And you're right. I'm starting to taste it a little bit, but I think we'll see that more
going forward. So we're really excited about that. Nick, thank you so much for hopping on the show
today and sharing all of your insights. If organizations or enterprises are
interested in contacting Cohere and working on on-prem deployments or sovereignty,
what's the best way for them to, you know, contact you? We've got a big contact button on our website.
Yeah, it's not going to be hard to find, all right? You can talk to us there.
Phenomenal. Thank you so much, Nick, for hopping on the show. For all of the listeners,
thank you so much for tuning into the podcast today. Make sure to leave a rating and review wherever
you get your episodes, and we'll catch you in the next show.
