
Welcome to Heavy Networking, the flagship podcast from the Packet Pushers Network of Fine Technical Content for Engineers. I am Ethan Banks with Drew Conry-Murray. Please follow us on LinkedIn, and join the Packet Pushers Community Slack group at PacketPushers.net slash community if you'd like to join our Slack like thousands of others from around the world. On today's episode, Cloud Portability, we are chatting with sponsor Fluid Cloud about how their product can
discover your existing cloud environment and then recreate it in another one. For example, you could
move from AWS to Azure or Azure to GCP and there are other situations Fluid Cloud can help you
with too, such as migrating off of VMware. Fluid Cloud is a robust product. It does a lot and it's
not simply for one-off migrations. And so joining us to get into the nuts and bolts of Fluid Cloud are Sherrod Kumar, CEO, and Harshad Omar, CTO. They are both co-founders, and we're going to chat with
them about not only what Fluid Cloud does, but how it does it. And along the way, we're going to cover
Fluid Cloud's large infrastructure model. This is an industry first that they have launched this month
in March, 2026. And guys, there's a lot of very cool stuff going on at Fluid Cloud. We had a
planning call with them a week or so ago and started digging into this. And this is one of those
conversations where the deeper you go, the deeper it gets. And so there's a lot here to be
interested in. So Sherrod, first question goes to you. Would you give us the 10,000 foot overview
of Fluid Cloud? Hey, thanks, Ethan. Thanks for having us on this podcast. So the 10,000-foot overview: primarily, the reason we started Fluid Cloud was to break people free from vendor lock-in. Right now, the way the industry has been practicing all this while is that a hyperscaler or a cloud provider will give you tons of credits. You get into the cloud provider and then you start getting yourself locked in. And after that, as the bills start coming, the pain starts increasing. And that's where I think we wanted this
industry to be vendor lock-in free because we felt this pain ourselves. This is our second startup
and third company together. And when our previous company was acquired, it took us almost eight
to nine months to move from one AWS account to another. Given that was a pain which I personally felt and Harsha personally felt, at that point in time we were really yearning for some automation around it. And then HashiCorp changed to the Business Source License, and most people started moving to OpenTofu. And then, big bang, the VMware acquisition happens. If 80% of the industry decides to move away, okay, bye-bye, Mr. VMware, let me move somewhere else, then who's going to do that? All the consulting companies are there, but the manual pain is so much. And that's the reason we started Fluid Cloud. Another thing about what we do at Fluid Cloud: beyond the purpose of building the company, which is to break enterprises free from vendor lock-in, we are also proud of our foundation, the Fluid Cloud Foundation. We always believe in giving back to the community. We have taken on a blind school as an undertaking, and we contribute 5% of whatever we get as revenue to the blind school to develop assets for the blind kids. Because one thing we have realized is they have a vision disability; they don't have a learning disability. They can go a long way if the right assets, the right infrastructure and resources, can be provided. So it's not just building on one side, for one industry; it's also giving back to the community. So we're talking public clouds, and I think Ethan mentioned AWS and
Azure as examples. Can you tell us the public clouds that you support where you can help with migrations?
Yeah, absolutely. So we currently support almost 10 cloud providers today. So we have AWS,
GCP, Azure, OCI, VMware, OpenShift, Nutanix, Vultr Cloud, and Hyper-V. And OVH. Yes, OVH as well. And the whole premise of this was, as we got going, first we started with the neoclouds, because we could cover 100% of a neocloud's services, from any cloud service provider to the neocloud. And then, as we started seeing the surge of sovereign cloud, OVH became one of our partners. Sovereign cloud is one of the big things in Europe and also other countries, given the current geopolitical scenarios, as we all know. And then, as we started talking to hyperscalers about the VMware exit, we started talking to Nutanix. There are a lot of things which Nutanix Move does, but there are a lot of things which it doesn't, so that's where we come in and help bridge that gap. And once we started talking to hyperscalers, they were like, wow. Now, one thing we also have to understand: all these hyperscalers have hundreds of billions of dollars of backlog. They have the bookings, but to recognize the revenue, you need to move. You're working with consulting companies, and they work on a T&M basis, time and materials. They get billed; the more complex the problem, the longer the project. And the hyperscalers wanted to move quickly. So that's where I think we found a very good fit. And I want to add a few things. First of all, when it comes to public cloud,
every developer wants to try out new things. I've been stuck with AWS for the past 10 years, and AWS is the slowest at innovation. If I have to spin up some GPUs, I have to wait literally nine months. Why would I have to wait nine months in AWS to get my GPUs? I can deploy things in GCP; they are giving something. There are new clouds coming up. Vultr Cloud is going crazy; they're giving you much cheaper GPUs. But in order to take advantage of that and use it, I have to deploy my Kubernetes clusters, I have to convert the firewall rules, I have to convert the permissions. It's a nightmare. So one of the reasons for adding support for more public clouds is to try out new innovations, new workloads, which may be better. Why be stuck with one cloud? Okay, hang on a second, because there's a lot of difference in the platforms that you guys have described. We've mentioned VMware and Nutanix in the same breath as GCP and Azure and Vultr and so on. These are different platforms that have different emphases and capabilities. So how is all of that coming under your umbrella? So yes, that's a great point,
How is all of that coming under your umbrella? So yes, that's a great point,
Ethan. In fact, there's always been a norm that the way you do things in a private cloud or on-prem and the way you do things in a public cloud are totally different. What Fluid Cloud has done is turn that into an abstraction, treating the hypervisor and the hyperscaler alike. We have literally dissolved the boundaries between public and private clouds. Think of it like this: if somebody's using VMware, all you're doing is spinning up VMs. You have some base hardware, you're spinning up VMs, you're setting up the networking, you're setting up the firewall rules, right? Those same services are available as AWS native or Azure native: network security groups, virtual machines. So people do understand there are similarities; it's just a matter of provisioning. How do you provision it? When you try to provision, say, a VM in VMware, you need to have a cluster, you need this, you need that, so many dependencies, right? When you spin up in Azure, you need to create a resource group, a network security group, a virtual network. So that's what Fluid Cloud brings: abstract infrastructure running anywhere, convert it into anywhere, and we give you the Terraform. Okay. So you've figured out that the basic needs are common across platforms. As long as the platform can deliver that basic need, you've got it. And as you just said, abstraction: you're abstracting away the specifics and delivering, to use a term common to networking people, intent. You're delivering a specific sort of intent into that cloud environment. Exactly, and we also go ahead and provision it. It's like a translation, right? A VMware configuration translated into Nutanix or into AWS; you can think of it like that.
So you're saying, if I've got a very complicated application set up in AWS and I decide for whatever
reason I want to move that application workload to an entirely different cloud, you essentially map all the services I'm using within AWS, figure out the infrastructure-as-code logic, and what you would need to actually duplicate that in Azure or wherever. And you provide
that to me so that my devs can just sort of copy and paste. Oh yes. Yes. Exactly. The way you said it is exactly how Fluid Cloud works. It's a copy. Okay, but there are limitations here, because not every platform has the same capability. AWS has a zillion different platform-as-a-service offerings that don't map one-to-one to other clouds. Azure's got Entra ID, and so on. So how do you deal with those situations where there isn't a mapping? Okay, that's a fair point. If something is not there, if it's humanly not possible, we cannot do anything about it either. Let's
say, for example, somebody's moving from AWS to Vultr Cloud and he's using Cognito in AWS; there's no Cognito alternative in Vultr Cloud, right? Somebody's using OpenSearch in AWS; there's no alternative for OpenSearch in Vultr or in OVH or in GCP, right? So the thing is,
it's a decision developers have to take: how deep, how many cloud-native PaaS services do I want to use? There are infrastructure-as-a-service offerings and there are platform-as-a-service offerings, like authentication services, email services, managed Kafka; those might not be available in all the different clouds. But the infrastructure-as-a-service offerings are pretty standard across all the clouds. And in fact, we've got you covered for PaaS as well. If there is an alternative in the other cloud, we can convert to that, because that's what DevOps guys do. They just go ahead and provision those services, right? Normally you would require a GCP expert to deploy things in GCP, and an AWS expert to deploy things on AWS. Fluid Cloud bridges that connection; it gives superpowers to the DevOps team.
So are you actually exporting this infrastructure as code in a language that developers might be
familiar with? We give you Terraform as well as the state file. So no matter how the resources were provisioned, we can just scan them and give you the Terraform as well as the state file. And once that is there for the source cloud, you convert it to a target cloud of your choice, and we give you the Terraform for that as well. What else do we do? We do these conversions in milliseconds; it just happens in a flash. One of our core patents is also the scanning. It just doesn't matter how you have provisioned your cloud infrastructure; you could have used click-ops, Chef, Ansible, Puppet, a CLI tool, or whatever. We scan your cloud APIs and generate the Terraform for it, and we do that in record time.
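To illustrate the kind of output such a scan-to-Terraform step produces, here is a minimal, hypothetical sketch in Python. The resource attributes and rendering logic are invented for the example; Fluid Cloud's actual generator is proprietary and far more complete.

```python
import json

def resource_to_hcl(rtype, name, attrs):
    """Render one discovered cloud resource as a Terraform resource block.
    Illustrative sketch only; real generators handle nested blocks, refs, etc."""
    lines = [f'resource "{rtype}" "{name}" {{']
    for key, value in attrs.items():
        # json.dumps gives us correctly quoted HCL-compatible scalars.
        lines.append(f"  {key} = {json.dumps(value)}")
    lines.append("}")
    return "\n".join(lines)

# Example: an instance discovered via a read-only describe call.
hcl = resource_to_hcl(
    "aws_instance", "web",
    {"ami": "ami-12345678", "instance_type": "t2.medium"},
)
print(hcl)
```

The point is only that discovered attributes, however they were provisioned, can be rendered back into declarative Terraform text.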
One of the best CSPM tools in the market, to scan 100,000 cloud resources, will take one to seven hours. We will take one minute, not more than that. Okay, we've got to get into this in more detail. But I've got a couple of other background questions first. One of those is use cases. So
when we were chatting with you guys before we recorded, one of the things that came up was, hey, okay, the obvious use case is migration: you're going to pick up and move to another cloud, and we're going to make that really easy. But Fluid Cloud isn't just about one-time migrations. Some people do that, but that's not even your primary use case, the way you guys describe it. What other use cases are there? How else are customers using Fluid Cloud? So the thing is, it's cloud cloning: you want to create a copy of your infrastructure somewhere else. We can help with the migration, but we are not just a migration tool from one cloud to another. Yes,
you can do that. In case of tenant sharding, right, you have a multi-tenancy architecture. One big
customer comes up and says, hey, I don't want my data to be shared with your other customers; please set up a separate environment for me. Now you have to replicate that as a separate deployment for that customer, in the same account, or in the same provider from one account to another. Or maybe you have dev and stage in different accounts and production in another. So you might be moving within the same provider, or there are compliance and data residency scenarios where you have to replicate the resources from one region to another in the same account. You can even go crazy: select one region, go to a different region, change provider or don't change provider. It doesn't matter. In fact,
we can also help you with a kind of rollback scenario. Imagine a DevOps team has a complex deployment where five different Terraform files run, and after that a Kubernetes deployment happens, then the API deployments. We have multiple state files. One bad deployment: how will you roll back? You need five rollbacks, because five deployments happened.
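Combining several Terraform state documents into one can be sketched roughly as follows. The structure here is a simplified, hypothetical subset of real Terraform state (which carries many more fields), purely to show the idea of merging resource lists while guarding against address collisions.

```python
def merge_states(states):
    """Combine several simplified Terraform state documents into one, so a
    multi-part deployment can be rolled back as a single unit. Illustrative
    only; production tooling must also reconcile serials, lineage, etc."""
    merged = {"version": 4, "serial": 1, "resources": []}
    seen = set()
    for state in states:
        for res in state.get("resources", []):
            key = (res.get("module", "root"), res["type"], res["name"])
            if key in seen:
                raise ValueError(f"duplicate resource address: {key}")
            seen.add(key)
            merged["resources"].append(res)
    return merged

state_a = {"version": 4, "resources": [
    {"type": "aws_instance", "name": "web", "instances": []}]}
state_b = {"version": 4, "resources": [
    {"type": "aws_s3_bucket", "name": "logs", "instances": []}]}
combined = merge_states([state_a, state_b])
print(len(combined["resources"]))  # 2
```

One merged state means one plan and one rollback instead of five.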
Fluid Cloud can combine all those into one state file. We can give you everything combined into one Terraform file, so you can select two different applications and bring them into one state file if you have to roll back. Super easy. It's basically version control for your entire cloud infrastructure. When I first thought about this, I figured it would take a lot of data, but not really; it's just state files. I guess there's a question of: do you guys also deal with the data living in my cloud, or just the infrastructure shell all of that lives in, if you will? No data. It's only the infrastructure. So you don't need to understand what my workload is doing; you just need to understand the configurations around it. You're not looking at data. Exactly. So you're not a backup-and-restore kind of product either.
We are sort of a backup of the Terraform for your infrastructure, though. In fact, I'll give you a crazy situation, Ethan. In case of an outage, in case something bad happens, in a BC/DR strategy, people are not able to get to an RTO or RPO in minutes. They might have the data backed up, but in order to spin up the critical services again as a backup in a different environment, that takes hours. So RPO can be reduced, but because your critical services are not spun up, your RTO itself is in hours. With Fluid Cloud, you can literally have RTO reduced down to minutes.
Because I've essentially got a backup of my infrastructure, sort of waiting to go. And if, say,
cloud provider one goes down, I can launch this into cloud provider two and spin up that application
to get it back online. Exactly, that's correct. And we recently covered DORA compliance, the Digital Operational Resilience Act, which started in Europe primarily for financial institutions. One of the most critical components of that is: if the whole provider goes down, what's your backup and restore strategy to get to another? You have to throw a lot of bodies at maintaining that kind of posture, which could now be automated using Fluid Cloud.
But again, just so I understand the scope of the problem myself: I can get all those infrastructure configurations into cloud B, but I assume you're not handling the DNS reconfiguration and redirecting all that traffic to cloud B; that's someone else. That would be a separate startup; someone should look into it. There's really no solution for it yet. So the listeners have a startup idea, right? Okay. And how are you guys licensed? So we charge you based
on the number of resources we discover. In your cloud infrastructure you might be using various services, like EC2, security groups, IAM: your 100 EC2 instances, 50 security firewall rules, 100 VPCs. Everything becomes a cloud asset, a resource. For every resource we discover, you get resilience, right? That resource is converted into every other cloud provider we support, so you're always ready to deploy: convert into Terraform and do the provisioning.
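As a toy illustration of per-resource, t-shirt-style pricing, a sketch like the following maps a discovered asset count onto a tier. The tier boundaries below are entirely invented for the example; Fluid Cloud's actual price bands are not stated in this conversation.

```python
def price_tier(resource_count: int) -> str:
    """Map a discovered-resource count to a t-shirt pricing tier.
    Boundaries are hypothetical, purely for illustration."""
    if resource_count <= 500:
        return "S"
    if resource_count <= 5_000:
        return "M"
    if resource_count <= 50_000:
        return "L"
    return "XL"

# A fleet of 100 EC2 instances, 50 firewall rules, and 100 VPCs = 250 assets.
print(price_tier(100 + 50 + 100))  # S
```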
So we charge at a per-resource level, and it's t-shirt-style pricing. Go ahead and do unlimited migrations, unlimited movements of infrastructure; we don't restrict anybody on the number of clonings you want to do. And we don't call it migration, because migration is taking it from here and putting it there. The reason we call it cloning is that you're just cloning it, and it exists in both worlds now; it's almost like a digital twin. So you can do unlimited cloning, unlimited moves within a region, across your accounts, or from cloud provider to
cloud provider. And as soon as people start plugging us in, and we've already started seeing the results with a few of the customers, within one or two months they're immediately able to see a 5 to 10% reduction in cost even if they're not changing their cloud provider. If they change their cloud provider, it goes to almost 30, 40, maybe 50%, depending on the cloud provider they choose. But even within the same cloud provider, they're able to see 5 to 10%. For example, if something is in us-west-1 and they want to go to us-east-1, that by itself is like a 10% reduction in your cost. And you can do that in minutes.
It's literally in minutes. So are you saying that in addition to understanding all of this
infrastructure, you're mapping it to how it's being priced by that cloud provider and other cloud
providers, and showing me that? So yes. Interestingly, there's a whole ecosystem around infrastructure. You have resources, then you have a lot of FinOps tools which give you cost estimation and analysis, and a lot of security tools which give you security analysis and fixes. Now, when you have that infrastructure, we can do that cost analysis and security analysis as well. Using our mapping, we scan your AWS account, convert that into Azure, and do the cost analysis for Azure. Convert that into GCP, do a cost analysis for GCP. Convert that into OCI. So you have an active infrastructure running on one cloud, which is AWS, and we give you an exact comparison of how it would look in Azure, GCP, OCI. It's really a multi-cloud cost comparison. And you can go deep: see how much each instance is going to cost, how much each volume, each GB of transfer, is going to cost me. All those cost factors are in the product, all by itself: just by scanning your environment in those 60 seconds, it gives you Terraform, cost analysis, everything. So it sounds like you have a pretty
straightforward ROI story when people invest in this product; there is an economic analysis that happens that means the product can more or less pay for itself. Another reason is that people are frustrated with consulting companies. They take time. Are they? Oh, I have so many friends. I can't speak freely, it's recorded, but there are so many people frustrated with consultants, because they just do a dump, just take all those things. I generally give this example: in my home when I was a bachelor in college, the clothes in the cupboard were never folded; you just grab all you can and put it in another cupboard. That's what consulting companies do. People want control. People want their shirts folded and put away. They don't just want a big laundry pile; they want automation folding their shirts.
So it sounds like you are getting deep into my cloud environments. How are you getting this information?
Where does it go? How are you storing and analyzing it?
This is very flexible. Users can choose where they want to save their data. We can put it into your S3 buckets, or you can use ours. We deploy as SaaS, so you can use our SaaS, or we can give you a private deployment. So it's fairly flexible. As for the configuration data, we have a database in our SaaS which is replicated across five different clouds, so our RTO and RPO are literally 15 minutes. If a whole region is down, we can come up in less than 15 minutes in Azure.
Right, because if the whole point here is to make sure I'm not victimized by a cloud outage, I don't want my Fluid Cloud platform to be victimized by that same cloud outage. So you spread things around. Yeah. One of our POCs was with the largest cybersecurity company in the world, and while we were doing it, they said, what if you go down? And we're like, yeah, that makes sense. So now we are kind of parallel, active-active, in five cloud providers, at the cost of backup and restore. So this was a requirement from one of the customers. But again, how are you getting that data? Are you just making API calls into
my infrastructure? What level of permissions do I need to give you to gather that information? Yeah, these are standard API calls. Each of these cloud providers has standard documentation, and you can get everything done with just read-only permission. So Fluid Cloud cannot make any changes.
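The read-only access described here would look something like the following minimal AWS IAM policy, rendered as a Python dict. This is an illustrative subset only; a real deployment would more likely attach AWS's managed ReadOnlyAccess policy or the provider's documented role.

```python
import json

# A minimal IAM policy granting only read/describe-style access, the level a
# discovery tool would need. Action list is a hypothetical subset for
# illustration, not Fluid Cloud's documented permission set.
read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:Describe*",
                "s3:List*",
                "s3:GetBucket*",
                "iam:Get*",
                "iam:List*",
            ],
            "Resource": "*",
        }
    ],
}
print(json.dumps(read_only_policy, indent=2))
```

Because no write or delete actions are granted, the scanner can inventory the account but cannot modify it.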
If you want, Fluid Cloud can make a lot of changes to your infrastructure, make it more resilient, make it more optimized. But with just read-only, we can always generate the Terraform, and people can run it through their CI/CD pipeline separately. Do I have to do anything more than point you at my cloud with
the appropriate credentials and you do the full discovery? Do I have to do more prompting,
give you more clues as to what's going on and what to look for? No, everything is automated. Zero friction; just give me read-only credentials and that's it. Yeah, when we started this whole journey of multi-cloud, the way I wanted this to happen was the way you book an Uber: not more than three clicks. That's exactly how we wanted this as well: scan, convert, and deploy. So you can scan, convert, then analyze your Terraform script, and then deploy. Scan, convert, analyze, and deploy. The analyze button is more like checking whether the price is right in Uber: can I choose Comfort versus Black? That's what you have to choose. Let's get into the nuts and bolts of how you guys
are doing your translation. Now, we've said, okay, there's going to be discovery, it's going to end up as a Terraform state file, and then there's conversion magic you do that can make that existing state file into a different state file for a different environment. What's going on under the hood? And along the way, I think we've got to talk about your large infrastructure model, this new thing you've announced, because I believe that's a pretty important part of the story. Yes. So first of all, on the conversions: as a CTO and as a developer, I am very skeptical about AI generating my code, and when it comes to infrastructure, there's even more skepticism. We wasted around four months trying to do this with AI. We tried almost every LLM model you can think of, but no success. So you probably had some success, but there were just enough errors and problems that it wasn't worth pursuing further? Zero success. I'll tell you, the conversion, in terms of accuracy, was less than 10%. Oh wow, okay. And we are talking about the biggest of the models, the ones that are best at coding. Less than 10% success rate. So did you then manually do one-to-one mappings by hand, essentially? Or how did you do it? Exactly. So what we did, just like I explained:
mappings by hand essentially or how did you exactly so what we did so just like I explained right
there are various services as a DevOps we would know how would you configure an EC to instance in AWS.
You will given VPC ID so before that you have to configure VPC in order to create VPC then you have
to create an account, create a user, create then create a subnet, create a security group,
give a file like inbound rules with IP address and port. The same way and then if a DevOps has to
replicate that in Azure you do the same thing you create a resource group, create a virtual network,
give a private IP then you know create a network security group with source address.
Now, in order to do the translation, we went resource by resource. There is service-level mapping: the EC2 instance service is equivalent to a Linux virtual machine in Azure, equivalent to Compute Engine in GCP, equivalent to an instance in OCI. The next level is attribute-level mapping: for example, to define an instance type, the attribute is called instance type here, VM size there, something else somewhere else. And finally there is value-level mapping, because these cloud providers are not going to make your life easy. If you want to spin up a VM with two CPUs and four GB of RAM, you cannot do that straightforwardly; you have to specify t2.medium or some such class. I can never remember which class; all the DevOps guys remember these classes. And when it comes to Azure, it's even weirder: Standard_B1s, Standard_B2s; I don't know what that means. In OCI it's E4, VM.Standard.E4; very weird. And everything translates to a certain CPU and memory config. So the idea is service, attribute, and value: a three-level nested mapping between all the cloud providers. And that's what our patent is. We developed it from the ground up, and it took almost more than a year.
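The three-level mapping just described (service, then attribute, then value) might be sketched like this. Every table entry below is a hypothetical example for illustration, not Fluid Cloud's actual mapping data.

```python
# Level 1: service-level mapping (source resource type -> target type).
SERVICE_MAP = {("aws_instance", "azure"): "azurerm_linux_virtual_machine"}
# Level 2: attribute-level mapping (same concept, different attribute name).
ATTRIBUTE_MAP = {("aws_instance", "instance_type", "azure"): "size"}
# Level 3: value-level mapping (same CPU/RAM shape, different SKU name).
VALUE_MAP = {("instance_type", "t2.medium", "azure"): "Standard_B2s"}

def translate(rtype, attrs, target):
    """Translate one resource through all three mapping levels."""
    new_type = SERVICE_MAP[(rtype, target)]
    new_attrs = {}
    for key, value in attrs.items():
        new_key = ATTRIBUTE_MAP.get((rtype, key, target), key)
        new_value = VALUE_MAP.get((key, value, target), value)
        new_attrs[new_key] = new_value
    return new_type, new_attrs

rtype, attrs = translate("aws_instance", {"instance_type": "t2.medium"}, "azure")
print(rtype, attrs)  # azurerm_linux_virtual_machine {'size': 'Standard_B2s'}
```

A real mapping must also handle dependencies (VPCs, resource groups) and the many attributes with no direct equivalent, which is where the nesting gets hard.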
One year, yeah. We started in August 2024, and by April 2025 we were able to add support for 30 services across AWS, Azure, and GCP. Today we've expanded that coverage to around 130 services across AWS, Azure, GCP, OCI, Vultr, OVH, Nutanix, VMware, and so on. Now, the large infrastructure model concept sounds like large language model, so is there an AI tie-in to this concept? Does it behave like AI? Is it trained like a model, like the ones many of us have become familiar with when we think about LLMs? In terms of experience, you'll get the same experience as talking to an LLM, but behind the scenes it's not an LLM. It's a ground-up, purpose-built machine learning algorithm which we have trained on our own data, using our mapping and such. Obviously, there is an LLM in the initial interaction to understand the user intent. So at the front end I can say in plain English, I want to convert this environment from this cloud to this cloud, or whatever plain-spoken request; that gets parsed and then handed off to, not an LLM, but as you said, a machine-learning-trained model. Yes.
So we have trained it over, I think, 9 billion tokens of data, and that's a beta launch at the moment. We are seeing a lot of advantages with that, because with it we are able to show you a compatibility score. Imagine you are using AWS Aurora DB. On the face of it, it looks like, okay, this is just RDS, right? I'm using the database service from AWS. But if you want that database service in Azure, good luck with that, because there's no Aurora DB; you have to convert it to Postgres. So internally, using our mapping, we are able to give you a compatibility score. We scan your environment and give you a compatibility score against various services, at each resource level, at each mapped attribute level, and then all the way up, grouped by the whole account, compared to other clouds. So maybe it will help somebody: okay, as a company executive, I'm thinking of going to Azure, but take a look at the compatibility score. You have a low compatibility score with Azure; try GCP, that's a better choice, you have more chance of a successful migration. So that's a very good use case which we have been able to come up with, with the help of our LIM, actually.
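A much-simplified sketch of a compatibility score like the one described: the real scoring is per attribute as well as per service, and the inventory and mapping sets below are hypothetical.

```python
def compatibility_score(resources, mappable):
    """Fraction of discovered resources that have an equivalent in the
    target cloud. Illustrative only; real scoring weighs attributes too."""
    if not resources:
        return 1.0  # nothing to migrate is trivially compatible
    supported = sum(1 for r in resources if r in mappable)
    return supported / len(resources)

# Hypothetical example: Cognito has no equivalent in the target cloud,
# while plain compute and networking do.
inventory = ["aws_instance", "aws_vpc", "aws_cognito_user_pool"]
target_mappable = {"aws_instance", "aws_vpc"}
print(round(compatibility_score(inventory, target_mappable), 2))  # 0.67
```

Comparing this number across candidate clouds is what lets an executive pick the target with the best chance of a clean migration.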
Yeah, and I can tell you one thing. The way AWS looks at it, if you want to go from any cloud provider, or even on-prem, to AWS, it's assess, mobilize, and migrate. The assess part, that whole compatibility assessment, is given to a consulting company, and they will cost out the move from source to destination; that takes four to five weeks or maybe more. With us, that is always readily available: how multi-cloud ready you are. So just from that perspective, there is no extra dollar anyone, even the hyperscalers, has to spend. As this tool is getting used, what is the appetite for actual
migration? Because it's not just the hassle of figuring out how to map a workload from cloud A to cloud B; there's also cost associated with doing that data migration. That's where in a lot of cases the cloud providers get you: oh, you want to leave? Okay, here's the exit fee. So it's great that you can make it easy, but if I'm going to get hit with that spend, and most organizations are now sort of multi-cloud already, what's the impetus, what's the value of
doing a migration? I mean, that's a great point, Drew, and most of the time this is an awareness we are trying to spread. The data egress and ingress charges are not there in every cloud; only AWS, Azure, and GCP. In fact, if you go to OCI... So people who are multi-cloud should choose one particular provider where they're going to reside the data, to avoid that data movement. There was a huge backlash from governments when data egress charges were super high, so they were lowered, and then there's the AWS and GCP partnership happening; I don't know where that goes. But there are solutions and workarounds. You can actually always create a private tunnel to avoid data egress charges
between the clouds. So the only reason somebody would stop is basically the engineering effort: I have to understand OCI. First of all, OCI has the worst Terraform syntax somebody can think of. I mean, it's very hard to learn one particular cloud provider and become an expert, and then you're handed a Terraform syntax where you have attributes that are 50 characters long; you can't even see them on the screen, you need a bigger monitor. So to understand the Terraform, to write those Terraforms: not every model is able to generate that Terraform, and that's a huge gap for
DevOps. Because if you look at DevOps, they always want to make modifications to the infrastructure based on three factors: reducing cost, fixing security, or increasing performance. So think of it like RGB: different weightage to different colors gives you a new color.
these three patterns cost security in performance give different weightage to these factors gives
you a new infrastructure design pattern okay so that is the basis of the LIM and which we are
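As a toy sketch of that RGB analogy, here is how three factor weights might be normalized into a single design-pattern mix. The function name and the normalize-to-1.0 scheme are my own illustration, not Fluid Cloud's actual LIM internals:

```python
# Hypothetical sketch: blending cost, security, and performance weights
# into a single "design pattern" mix, analogous to mixing RGB channels.
# The naming and the normalization scheme are assumptions for illustration.

def blend_pattern(cost: float, security: float, performance: float) -> dict:
    """Normalize three factor weights so they sum to 1.0, like RGB mixing."""
    total = cost + security + performance
    if total <= 0:
        raise ValueError("at least one factor must have positive weight")
    return {
        "cost": cost / total,
        "security": security / total,
        "performance": performance / total,
    }

# A security-heavy pattern: weights 1:2:1 normalize to 0.25 / 0.5 / 0.25.
pattern = blend_pattern(1, 2, 1)
print(pattern)  # {'cost': 0.25, 'security': 0.5, 'performance': 0.25}
```

Different weightings would then select different infrastructure design patterns, which is the gist of the RGB comparison above.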
Yeah, and as part of the tool we already support almost 1,300 policies right out of the box for all the cloud providers, all ten that we mentioned. This is also one of our core capabilities: we have cloud security posture management built into the tool, so we can give you the compliance code for GDPR, PCI DSS, SOC 2, HIPAA, NIST, or any benchmark the customer wants. Not only that, we also give remediation: if there is a misconfiguration in a particular part of the infrastructure as code, we give you the right recommendation, and if you allow us, we can auto-remediate it as well and give you the right deployment script.

Huh, so you're saying in addition to showing me cost comparisons and so on, you can also help me on the security side?

Oh yeah, that's correct. When we call it RGB, security, cost, performance, we truly mean security. Beyond this there's not much more out there on the cloud security side. Our previous company was acquired by Tenable, and we learned a lot of things on the CSPM side. We did really great things there, and the things we could have done better there are done best today.
Now, we said earlier on that Fluid Cloud does not handle the data migration portion of moving from one cloud to another. So how do your customers typically deal with that? Do you integrate with tools they might have that would help them, or what does that process look like once they've used Fluid Cloud to get that infrastructure shell ready to accept data?

Oh yes.
Each of these cloud providers has some kind of data migration tooling, which is great, and Fluid Cloud becomes very complementary to those kinds of solutions. Using Fluid Cloud you can create a landing zone, a bare-shell infrastructure, and then move the data and create your data pipelines using those tools. Think of it like this: you have some hundred S3 buckets in AWS. If you're going to Azure, you can create 100 storage containers in Azure, then use the Azure AzCopy tool to just move the data, or there are hundreds of open source tools; in fact, some of these cloud providers have S3-compatible APIs, so you just give the source S3 bucket and all the data gets copied automatically.
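A small, hypothetical taste of the mapping work hiding inside that "just create 100 storage containers" step: S3 bucket names and Azure container names follow different rules (Azure containers allow only lowercase letters, digits, and single hyphens, 3 to 63 characters), so even the names may need converting. This helper is my own sketch of that idea, not Fluid Cloud code, and the short-name padding rule is an arbitrary assumption:

```python
import re

def to_azure_container_name(s3_bucket: str) -> str:
    """Sketch: map an S3 bucket name onto a valid Azure container name.
    Azure containers: 3-63 chars, lowercase letters/digits/hyphens,
    no consecutive hyphens, must start and end alphanumeric."""
    name = s3_bucket.lower().replace(".", "-")   # dots are legal in S3, not Azure
    name = re.sub(r"[^a-z0-9-]", "-", name)      # squash any other characters
    name = re.sub(r"-{2,}", "-", name)           # no consecutive hyphens
    name = name.strip("-")                       # must start/end alphanumeric
    if len(name) < 3:
        name = (name + "c00")[:3]                # pad short names (assumption)
    return name[:63]

print(to_azure_container_name("My.Logs..2024"))  # my-logs-2024
```

Multiply that sort of translation across every property of every service and you get a sense of where the provisioning effort goes.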
The provisioning, the understanding of that infrastructure, that's where we come in. In order to create an S3 bucket, or to create a storage container, you need to understand Azure, the permissions, and other stuff which is not at all related to the data itself, and that's where the majority of the time goes.

So I'm getting the sense you can sort of map or match the compute and the RAM and all of that from provider A to provider B. What about the networking environment? Because that's where things can get really complicated, with VPCs and routing and load balancing and IPsec and so on. How good are you at doing that kind of mapping and cloning?
So we take care of all those cloud-native infrastructure services and map everything, including networking, storage, IAM permissions, compute, everything. I gave an example of an EC2 instance earlier; the instance type mapping is just one configuration, one property of that service. But really, if you go deeper, there are tags, encryption settings, snapshot settings; there are so many permissions, and people creating custom permissions and stuff. There's networking like credential chaining, where the role is assumed automatically if the Kubernetes cluster and the S3 bucket are in the same account, so you don't have to grant any permissions; now how would you configure the same thing in Azure? So, taking care of the networking: when we scan, we don't just give you the infrastructure footprint, we also take care of the dependencies, what service depends on what, what is mandatory, what is optional, and we bring all of that in. How would you convert that manually? We take care of it, taking care of the design patterns, and give you the result.
To give an example: if you have to make an EC2 instance public, what you typically do is create the EC2 instance, attach it to a VPC, and have a private subnet and a public subnet. Now how do you do that in Azure? You create a Linux virtual machine, attach it to a virtual network which has a private IP, then you create a public IP, and you do the Azure alternative to VPC peering. We do that conversion, because that is the way you deploy things in Azure. Taking care of the networking, another example: in Azure you can define deny rules. In a security group I can give an IP address and a particular port and say: if any request comes to this port from this source IP, deny it. But you cannot write a deny rule in AWS, so how does that get converted into AWS? We flip it: every source except that one is allowed on that port. That's how we convert it, which looks pretty crazy in the inbound rules; you'll see like ten different rules come up, but that is the only way to do it.
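To see why the converted ruleset "looks crazy," here is a sketch of the complement computation using Python's standard ipaddress module: emulating a deny of a single /32 source in an allow-only model means spelling out every CIDR block that is not the denied address. The function is my own illustration of the technique, not Fluid Cloud's converter:

```python
import ipaddress

def deny_to_allows(denied_cidr: str) -> list:
    """Emulate an Azure-style deny rule in AWS security-group terms by
    allowing every source range EXCEPT the denied one. AWS security groups
    have no deny action, so the complement must be written out explicitly."""
    everyone = ipaddress.ip_network("0.0.0.0/0")
    denied = ipaddress.ip_network(denied_cidr)
    # address_exclude yields the CIDR blocks of 0.0.0.0/0 minus the denied range.
    return [str(net) for net in everyone.address_exclude(denied)]

rules = deny_to_allows("203.0.113.5/32")
print(len(rules))  # 32 -- one complementary allow CIDR per prefix length
```

Carving a single /32 out of 0.0.0.0/0 produces 32 allow rules, one at each prefix length, which matches the "ten different rules" effect described above, only worse.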
One thing I would like to mention: a lot of listeners are also stuck with VMware, and one of the biggest issues they face is moving off NSX-T, whether from NSX-T to Flow on Nutanix, or from NSX-T to the native networking of a cloud service provider. We take care of that too. In fact, as we speak we are partnering with Nutanix and integrating our networking conversion into their tooling. So many of the listeners who are stuck on the networking side of VMware, we can help them out with these conversions. It typically takes a couple of months to do that; even with god-level expertise it takes a couple of weeks. We'll be able to do it in minutes using this tool.

So is this in an on-prem environment? It doesn't have to be public cloud? If I've got on-prem VMware and I want to migrate off, I can do that, or could it be either?

It could be either. It could be managed VMware on a cloud, or it could be on-prem VMware. It just doesn't matter for us; we treat them as equal.

Okay, a couple of other things to explore here. One of those: when we were planning this show, you talked about a scenario where some customers of yours maintain an active-active data center across clouds, and they use Fluid Cloud as a way to mitigate the risk of a cloud outage. How does that work? What's going on?
So it's pretty simple. You have infrastructure running on one cloud; scan it using Fluid Cloud; convert it into another cloud; do the Terraform deploy. Now you have two different Kubernetes clusters, two different databases, everything running twice. What about the data? You can always have a Kubernetes job to synchronize the data every 15 minutes, or people use distributed SQL like YugabyteDB or Spanner, which do that automatically: there's a primary database and a secondary database and everything gets synced. The services are how you access that data, and using Fluid Cloud you can convert that infrastructure and have it running. So now you have a standalone application in one cloud and an exact copy running in another cloud, both getting synced however you want, every 15 minutes, every 30 minutes. Your DNS resolves to one load balancer at a given moment; if something happens, you switch the DNS. As I said, that's a different sort of DR.

Okay, I would describe that as an active-standby data center, but replicated in near real time, assuming I've got my data synchronization going to keep my DBs in sync. And, I mean, man, as a disaster recovery, business resiliency sort of solution, that's awesome. I can literally cut DNS over and I'm ready to go. I might be behind one sync rev on the database side, depending on how I've chosen to do my database, or I might be exactly replicated in real time and ready to go. So I can maintain that environment, and that also means, okay, if I change something in my primary active data center, Fluid Cloud is going to pick that up and replicate it to my other data center environment too, so I'm always in sync between the two environments?

Absolutely. That's a good point; that's another feature we call Cloud Sync.
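The DNS cutover decision at the heart of that active/standby setup can be sketched very simply. The endpoint names and health-check inputs here are illustrative assumptions, not Fluid Cloud's API; a real version would take its inputs from load-balancer health probes and push the answer to a DNS provider:

```python
# Hypothetical sketch of the active/standby cutover described above:
# health-check both clouds and decide which load balancer DNS should resolve to.

def pick_dns_target(primary_healthy: bool, standby_healthy: bool,
                    primary: str = "lb.aws.example.com",
                    standby: str = "lb.azure.example.com") -> str:
    if primary_healthy:
        return primary        # normal case: DNS points at the primary cloud
    if standby_healthy:
        return standby        # primary down: cut DNS over to the standby cloud
    raise RuntimeError("both clouds unhealthy; manual intervention required")

print(pick_dns_target(False, True))  # lb.azure.example.com
```

The data sync (Kubernetes jobs or distributed SQL, as described) runs independently; this function only decides where traffic goes.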
In fact, this also comes up when somebody isn't doing multi-cloud active-active. Say, as a startup, we developed a solution on AWS, and my customer wants that solution in OCI, or in Azure; how do I do that? Customer deployments, yeah. Most startups face this. We always hear: hey, we'd like an air-gapped deployment, we use Azure, and you guys are on AWS, why don't you move yourself into Azure? That's two or three months of effort: do the whole refactor, move into another cloud, and then go for it. It's like, guys, you don't have to do all of this, just sign up with us and we should be fine. We can actually do that in, like, a day, or less than that. That's as fast as it could get.

One gotcha here: let's say my application is using cloud-native services. I'm calling S3 bucket APIs, I deploy that in Azure, but I'm still calling the S3 bucket APIs, because the application logic hasn't changed. That's something somebody would normally have to change manually to start using the Azure-native services. But we've got you covered: we have portable SDKs. You just use the Fluid Cloud S3 bucket interface, and if your infrastructure is pointing at AWS, we'll call the APIs of the AWS S3 bucket; if you change the deployment to Azure, just change the environment variable to Azure and we'll start calling the Azure APIs for you. So your application is literally zero code change. It's almost like building a supercloud.
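The environment-variable switch they describe might look something like this facade sketch. The class, the FLUID_CLOUD_PROVIDER variable name, and the returned URIs are all my own illustrative assumptions, not Fluid Cloud's real SDK surface; a real version would call the S3 and Azure Blob client libraries where the comments indicate:

```python
import os

# Hypothetical sketch of a "portable SDK": the application calls one
# object-storage facade, and an environment variable picks the cloud backend.

class ObjectStore:
    def __init__(self):
        # Illustrative variable name; defaults to AWS if unset.
        self.provider = os.environ.get("FLUID_CLOUD_PROVIDER", "aws")

    def put_object(self, bucket: str, key: str, data: bytes) -> str:
        if self.provider == "aws":
            return f"s3://{bucket}/{key}"  # a real SDK would call the S3 API here
        if self.provider == "azure":
            # ...and the Azure Blob Storage API here.
            return f"https://{bucket}.blob.core.windows.net/{key}"
        raise ValueError(f"unsupported provider: {self.provider}")

os.environ["FLUID_CLOUD_PROVIDER"] = "azure"
print(ObjectStore().put_object("logs", "app.log", b"hello"))
```

The application code never mentions a specific cloud; flipping the environment variable redirects every storage call, which is the "zero code change" property described above.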
So, you know, every time AWS has its conference it rolls out like 500 new services. How do you guys keep up? What's the differential between your current capability and whatever a cloud has just released?

Yes, I can start with that answer. Let me start with OpenAI. OpenAI came up with a ton of features, like every single day: hey, I have this integration, I have that configuration. But you have to narrow it down to what users actually use it for: can you rewrite this, tell me about that. Most people are using Codex and other tools for doing the coding, and sometimes they move to other tools. It's the same from the user perspective here: even the most sophisticated DevOps teams don't go beyond 30 services. AWS may have 1,000, but I have not personally seen people using 100 or 200 AWS services at once, even though the providers keep announcing 500 more. So we don't have to keep up with the next 500 coming in the next sprint from AWS; all we have to make sure is that we're covering the users with the maximum coverage we can give them. For an enterprise of the magnitude of, say, the largest insurance provider we're working with, they might use 35, 40, maybe 50 services. How much do we cover? Almost 45 of those, and the remaining five we can build on the go. So our focus is on the users and the services they're actually using, rather than, oh, they've come up with another 500, we have to build automation for another 500. It's a very user-centric approach.

Okay, so to put it another way, it sounds like you're letting your customers drive you toward the services you include in your mapping.

Yes, that's correct.
In fact, as we're partnering with the cloud service providers, they're telling us: hey, we have these 20 customers, do you support these services? We integrate very closely with OCI, for example, so OCI has given us a list: can we have these? We put it in the next sprint, crank up the engine, get the mappings for those, and boom, we have it. Also, most of the mappings are getting vetted by the cloud service providers; it's not that we're just doing it randomly. They're looking at it too, and they're like, yeah, this looks good, we can roll it out now. So we're working hand in hand with all these cloud service providers. That's where we find ourselves in a very privileged position, very lucky to be in a position where you're solving a problem and the whole industry is coming toward you to help: okay, you should be doing this or that. So it sounds very easy, hey, Fluid Cloud, scan, convert, deploy, but it's a whole lot of effort to maintain that accuracy. In fact, we've pointed out many times that AWS has changed the availability zone mappings for accounts, every month, and suddenly Terraform apply starts failing and you wouldn't know why, because that availability zone is no longer available for your account.
We've seen Vultr change the OS IDs; the OS ID is what determines the OS for your Vultr instance. So if Ubuntu was OS ID 2514, suddenly at some point you get "invalid OS ID"; okay, what's the new OS ID for the Ubuntu version I'm using? We've pointed those things out, because every day we're doing hundreds of migrations. You can think of every combination: AWS's 12 regions to Azure's 8 regions to OCI's 40 regions to Vultr's 32 regions. It's a nightmare for us to keep up with the services, and yeah, our cloud bill is quite high.

It's like what you said earlier: it sounds easy, and I'm going, no wonder no one does it; it sounds terrifying.

So yes, it's almost like an Apple iPhone: the user interface seems easy, scan, convert, deploy, but behind the scenes there's a lot of heavy lifting getting done. I think great technology should be like that: the front should be very easy to use, very intuitive, with the work happening in the background. And I can be very honest: every single time Anthropic comes up with a new model, there are a hundred companies dying out there in the market; we saw IBM's stock tanking and those kinds of things happening very recently. But that's why, as a startup, you pick a problem that hasn't been solved for a couple of decades; what we're solving is a 20-year-old problem. We tried using AI, and four months in we knew this wouldn't be solved as a purely AI-based problem, and then you get into it. It's definitely not easy, and that's the reason we'll be able to survive much longer than the average startup.
Yeah, you're tackling a very difficult problem, and if you can take away that pain for the customer, that's the whole point.

Okay, this has been a wonderful conversation, in that the complexity of cloud, and how you deal with it across multiple environments and so on, has been a problem the industry has been tackling through a lot of avenues and a lot of narrowly scoped products for a while. What you guys are doing stands out to me: you really went for it, and you're really going after some hard challenges. It also sounds like it might be a little too good to be true, so, I don't want to sound skeptical, but how do you address people whose natural response is, come on, you can't do all of those things?

Yes, we do get that kind of reaction very often, Ethan: it's too good to be true. Because our claims are super high: a nine-month task can be done in, like, one hour? Are you crazy? Yes, it is that crazy. My request to all the developers and DevOps folks is: just try it out. We're not doing something totally out of the box that isn't understandable; it's simple, straightforward Terraform conversion, generating Terraform, and we've taken on those super hard problems in a different way where we're able to provide value. So my request to the DevOps folks would be: just try it out, get a feel for the product, then you'll understand the power and make use of it. Just like OpenClaw: nobody believed OpenClaw could do those things, but when you try it you're amazed. For infrastructure, we want to be solving the super hard problems for DevOps, so when you think of cloud infrastructure, think of Fluid Cloud.

Well, if people have questions for you, Harshad Omar, how do they reach out to you?
Oh, you can come to our website, fluidcloud.com. Find me on LinkedIn, my name is Harshad Omar, and my email is harshad@fluidcloud.com, my first name at fluidcloud.com. There are various ways to reach out to me.

And then, Sharad Kumar, same question: how do people reach out to you?

Yeah, I'm on LinkedIn, Sharad Kumar, and my email is sharad@fluidcloud.com. Or just go to our website and book a demo. And to the point you raised: when people actually click on the demo and see it, the reaction, and I think we just had this reaction this morning from one of the customers, is like, wow, you guys were not doing hand gestures, it's real. I'm like, yeah, guys, it's a full-fledged thing. It's touch it to believe it, so yes, book a demo with us, reach out to me, reach out to Harshad, and you'll see the magic happening in the infrastructure world.

Well, thanks, guys, for joining us on Heavy Networking; this was fantastic. Fluidcloud.com if you're interested in what these guys are doing. They're also going to be at several events: NVIDIA GTC, HumanX, Nutanix .NEXT, and AWS Summit, both in LA and in DC. So if you're at any of those events and you see Fluid Cloud, swing by, take a look at what they're doing, and have a chat.

And thank you for listening to Heavy Networking today. If you do ring up Fluid Cloud to find out more, tell them you heard about them on Packet Pushers; that lets our sponsors know that we help you keep up with the industry, and therefore they should keep working with us. Like, share, and subscribe, and leave a comment wherever you're listening, Apple, Spotify, YouTube, wherever; all of that helps other people discover our podcasts and videos, and we really appreciate it. We have fun while we're doing this, but we don't do this just for fun, if you know what I'm saying; this is how we feed our families here at Packet Pushers. And for what you do to pay your bills and feed your families, thank you. Every network janitor, packet mechanic, and scruffy-looking switch herder matters. You matter, your work is important, we see you, and we appreciate you. Keep doing what you do to keep the world online, and until our next episode, just remember: too much networking would never be enough.
