Key Factors to Consider During Containerization with Travis Jeppson
Some of the highlights of the show include:
- How containerization enabled Nav to spread roughly 250 virtual machines across multiple environments, while drastically reducing infrastructure spend
- Travis’s thoughts on buying cloud native software tools versus building them, and what engineers should consider during this process
- The difficulty of finding security solutions that work inside of a cloud-native ecosystem
- Why companies should expect to encounter unique challenges when migrating to Kubernetes
- Why companies need to understand their end goal, and determine an overall objective before beginning a migration
- Travis’s must-have engineering tool, and why he can’t live without it
Links
- LinkedIn: https://www.linkedin.com/in/stmpy/
- Twitter: https://twitter.com/stmpy
Transcript

Announcer: Welcome to The Business of Cloud Native Podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.
Emily: Welcome to The Business of Cloud Native. I’m Emily Omier, your host. And today I’m here with Travis Jeppson. Travis is currently at Kasten, but he’s also going to talk about his time as a director of engineering at Nav.
Travis: At Nav, my role shifted quite a bit while I was there. I started as a software developer, writing Ruby back end applications for them, and then shifted into—actually within a month of being there, they shifted me over to the operational side because I had previous experience working with containerization, and also in infrastructure. So, they quickly moved me over into that realm and from there, I worked there for about a year until they told me, go spin up a team and get things moving. Help us move to containerization. Help us move to a more modern infrastructure and stuff. And so, about a year after that I became a director of engineering to where I had our ops team that had spun up, and then I also acquired both our QA team and our IT team that was there. And then, about a year after that, I ended up acquiring a little bit more than that. So, I ended up with a fair amount of our front end and some of our backend teams as well, and where they moved me into the senior director position. So, a day in the life, towards the end of when I was at Nav was a lot of working with the teams, helping them to do a lot of architectural perspective, and changes, and outlook to where we were trying to get as far as the company is concerned. We were building a product that we could address both first-party customers where they would log in to the Nav website directly, as well as working with partners so that we could issue out Nav functionality to those partners that they could incorporate to their pages as well. And so, we worked very hard to try to segment those two pieces together so that what we were building could be dispersed between both first-party customers and our third-party customers. And so, towards the end of my time there, it ended up being a lot of working within all of engineering to help facilitate those purposes. Then, just about six months ago, I ended up shifting my role over to a company called Kasten. 
And, Kasten is strictly working within the Kubernetes ecosystem. We do data management for Kubernetes-based applications, and I am the site lead in Utah for Kasten. My day-to-day is, kind of, all over the place. Sometimes it's working with engineering to help figure out some things going on there, sometimes it's working with brokers to help find office space, and sometimes it's dealing with insurance. It ends up being quite dynamic. But overall, I'd say most of my time is really spent more on the engineering side. Just from the perspective of having worked at Nav and having been a consumer of a lot of these technologies, I think that they really appreciate the insights that I'm able to give there. So, I end up working a lot with the engineers to help facilitate what we're doing.
Emily: Sounds like you end up serving as a bridge, having been an end user. But do you think there are common miscommunications that happen, or what do those conversations sound like? Why is that experience valuable?
Travis: Yeah, so I don't know if it's as much as a miscommunication as much as what are customers looking for? And what are they trying to achieve? And why are they purchasing different software solutions? And what makes sense for them, more than anything. And I think that, having been a consumer of those products, I was more or less on the front lines there. When I was building our operational team at Nav, that was basically what I was doing is trying to figure out what things are we going to spend time on? And what things are we going to build ourselves, or what things do we need to just go find a solution for and bring them in-house? And the funny thing is when I was doing that for Nav is actually when I was introduced to Kasten and to the CEO here. And so, that ended up changing the way my career went. But overall, I think what Kasten—what those conversations really end up becoming is what are customers trying to do, and where are they trying to go?
Emily: Yeah, and in fact, that is exactly what I want to talk about more on this podcast. So, tell me a little bit about what your experience at Nav was. What were you looking for? What did you want to prioritize? What was the company hoping to get out of moving to containers?
Travis: So, I would say maybe the piece that really facilitated a lot of the progress in that sense was starting to understand our infrastructure spend. And then, coupled with that, was also trying to become more agile. More agile in the sense of being able to push on demand, where previous to that, when we pushed our code, we did it every other week, and it was always very cumbersome. We have pictures from the early days of Nav where there would be 10 engineers around someone's desk, around the one person that was pushing the code into production, just waiting for the other shoe to drop, or waiting for something to happen. And so, when I started doing operational things for Nav, it started with addressing those two things. What can we do to help control our infrastructure, and to understand it a little bit better? And how can we also create more of a dynamic infrastructure? Nav is very much a US-based company, and so the traffic that we were getting onto our website was very, very much regional. And so, there would be periods where it would be very busy, and then there'd be periods where it wasn't. And the way that our infrastructure was designed, and a lot of times the way that they are designed, especially with virtual machines, is that you're building for capacity. You're building to be able to handle that load, and that has to stay there all the time, regardless of whether that capacity is being used or not. And so, that was one of the biggest questions. We were completely in the cloud, we were completely in AWS, but that bill continued to get more and more expensive every month. To the point where it warranted the executive team to come down and say, “This needs to be fixed.
This is going at an outrageous pace, and we need to be able to figure out how to control this.” And so, that's when they came to me and said, “Okay, get a team spun up, and let's figure out how to control this.” And so, I would say that those were some of the big pieces that really drove us to start looking at cloud-native technologies, containerization, and Kubernetes.
Emily: And do you think it was successful?
Travis: Yeah. So, I do, for a few reasons. And obviously, we learned some lessons along the way, but here's what we were able to do. With the infrastructure that we had growing, we were pushing close to 250 virtual machines across two different environments, that being our production environment and a development environment that we had. And when we moved to containerization, we were able to not only spin up more environments, but we were able to still decrease that overall spend as far as the infrastructure was concerned. What used to be, I think, about 100 VMs for our dev environment and then about 150 for our production environment crossed many different pieces, from the front end to the back end, but that was all compute, right? So, none of that even included the database resources that we were using inside of AWS. And we were able to shrink that down to a nine-node Kubernetes cluster where three of those nodes were part of the control plane, and the other six were part of the data plane. And then, we were using HashiCorp Vault, and we ended up moving that outside of the cluster just for sanity purposes. But we were able to drastically decrease the footprint of an environment, and on top of that, it also correlated to being able to decrease that spend. And so, once we started turning on everything and turning off all of the older infrastructure, it was something that we really liked. I wish I would have just taken a snapshot of those couple months within our Amazon bill and, like, posted it on a wall, because the spend was almost cut in half compared to what we had previous to that.
Emily: And then, you mentioned some lessons. What are some top three lessons that you learned along the way?
Travis: Oh, man. So, I would say probably the one that bit us the most was actually the telemetry—observability, being able to see what was happening within our environments, especially during a transitional time like that. Now, we did this a few years ago, and so the tools that are out now weren't necessarily as readily available back then. I'm not going to name companies, but we came to the company that we were using at that point in time and said, “We don't have this visibility, and this is hurting us. This is, kind of, a deal-breaker, and if we can't get this visibility, then we have to look elsewhere.” And they're like, “Well, it's something we've been talking about, but it's not something that we're doing right now.” And it's like, “Okay.” So, we moved on to a solution that was very much in our hands. We went from “we can't rely on a company” to “maybe we can just deal with it ourselves.” So, we did that, and then we realized this is actually a lot of work, and it takes a lot of time and a lot of effort. And so, we actually stayed on that one for about a year, and then we moved off of that one, even. We wanted control in certain areas, but we didn't want all of the control. And so, then we found a solution that helped us, kind of, meet in the middle, where we got the control we wanted, we got the flexibility of using the [unintelligible] tools, primarily Prometheus, and then we were able to hand off a lot of the management of the infrastructure for the metric system and telemetry to a vendor, so we didn't have to worry about that side of it. But we could pump over anything that we wanted, and we could aggregate the data any way we wanted, and that's exactly what we wanted to get out of it. So, that one, I think, was maybe one of the hardest ones, just because we put so much work into multiple different iterations of what that eventually became.
So, the one that we finally settled on to where it was, kind of, a happy medium is the one that the company is using to this day. It ended up being a much better solution, but it took us two years to figure it out.
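The middle ground Travis describes—keeping Prometheus for collection while a vendor manages the storage side—is commonly wired up with Prometheus's `remote_write` feature. A minimal sketch of that shape (the endpoint URL and credentials below are hypothetical placeholders, not anything Nav actually used):

```yaml
# prometheus.yml -- scrape everything locally, forward samples to a managed backend
global:
  scrape_interval: 30s

scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod        # discover pods to scrape via the Kubernetes API

remote_write:
  - url: https://metrics.vendor.example.com/api/v1/write   # hypothetical vendor endpoint
    basic_auth:
      username: nav-metrics                                # placeholder credentials
      password_file: /etc/prometheus/vendor-token
```

With this split, the team keeps full control over what is scraped and how it is labeled, while long-term storage and query infrastructure become the vendor's problem.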
I would say maybe the next hardest one after that really comes down to just being flexible. You always go in with a plan, and you always assume that that plan is going to work out, and that everything is going to be perfect. And most of the time, that doesn't end up being true. Most of the time you get to the point where you hit a snag, or you hit some issue where you realize that your plan is basically thrown out the window. And there was a point in time where we, kind of, just stuck to it. We're like, “Okay, just get it to work, just get it to work, just get it to work.” And we kept trying to slam that effort forward until we realized that doesn't work. We're burning time. There's no way we're going to get to the point where we need to be, and we're not getting the results that we want. And so, one day I grabbed my team and we sat down, and I just said, “Okay, we have this solution in place, but here are the problems with it. Of those problems, how many of them do we absolutely know how to solve right now?” And so, then we looked at the list and we talked about the ones that we knew we could solve. Of the ones that we didn't know how to solve, there was a fair amount still left over, and looking at that list, it's like, is it going to be worthwhile to continue addressing this unknown? Or should we adapt our plan to remove that unknown piece of it, so that we can actually get back on track to what makes sense for us and for our end goals? And so, we decided it might be best to scrap that idea and go back to the drawing board. So, we took two days for an offsite. Nav has a corporate apartment, and we just hung out there for two days, and we whiteboarded and put post-it notes—the giant post-it pad notes—all over the walls, and we went back to the drawing board.
And this was actually around our Kubernetes management layer—what to use to help us manage Kubernetes. The solution we had in place before just wasn't cutting it. And so, we went back and we literally tried everything that was available. We did a Google search; we went to any site that said, “Here's a Kubernetes management layer,” whether it was just a CLI to help you get the infrastructure spun up, or a GUI with the full management system baked in, or whatever it ended up being. So, we sat down and took that entire list, and then we took a list of the specific outcomes that we needed: we wanted to be able to do X, Y, and Z, and if any of those could not be done with one of those solutions, then that solution was cut. And so, we seriously just took hour chunks, two-hour chunks of time, and we would divide up that list of different offerings, and we started figuring out which ones would work and which ones wouldn't, until we got to the point where we literally had one left, and that one ended up being the solution that we used moving forward. But we were able to take that pause, readdress our plan, and say, “We still want a particular outcome, but the way that we're approaching it is not working. Can we actually readdress this and change our plans in order to still get us the desired outcome?” After we did that the one time, we got back on track a lot quicker than we thought we'd be able to, and then we started doing that a lot more with other issues that would arise. That's when we went back to our metrics and said, “Okay, maybe we need to do the same thing with our metrics.” And that's when we shifted that the final time as well.
But I think what that second lesson really was is this: you have to understand that the important things about creating a plan are the results of that plan, not necessarily how you get there. And if you're okay with changing the way that you get there, then you can actually achieve the goals of that plan much quicker.
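The elimination process Travis describes—list the must-have outcomes, then cut any candidate that misses even one—is simple enough to express directly. A toy sketch in Python (the tool names and capability labels here are invented for illustration, not the actual products Nav evaluated):

```python
# Must-have outcomes: any candidate missing even one is cut.
REQUIREMENTS = {"cluster provisioning", "rbac", "upgrades"}

# Hypothetical candidates mapped to the capabilities they cover.
candidates = {
    "tool-a": {"cluster provisioning", "rbac"},
    "tool-b": {"cluster provisioning", "rbac", "upgrades"},
    "tool-c": {"rbac", "upgrades"},
}

def shortlist(candidates, requirements):
    """Keep only tools whose capabilities cover every requirement."""
    return [name for name, caps in candidates.items()
            if requirements <= caps]   # set inclusion: all requirements met

print(shortlist(candidates, REQUIREMENTS))  # → ['tool-b']
```

The discipline is in the requirements list, not the code: once the outcomes are written down, the cut is mechanical, which is why the team could burn through the whole vendor landscape in hour-long chunks.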
Emily: Was there a third lesson that you learned that, sort of, stuck out?
Travis: Yeah. I'd say a third lesson is really understanding why you would want to shift over to a cloud-native infrastructure. Because at first, a lot of the reason that it started was we need to do this for cost savings. We want to be able to wrangle in our infrastructure and do all of that stuff. And it's like, okay, that's an okay reason. But at the end of the day, after I hired a team, and even after we did all the work to push everything out, were we in a net positive as far as the cost was concerned? Because there's a lot to incorporate there. And there's a lot of tools, as well, that you have to also consider, a lot of things that we ended up picking up later on that we weren't necessarily using beforehand. And so, while we were able to wrangle in and control the cost of our cloud spend, I don't know that it actually ended up being more cost-effective overall for the company. Now, that's like apples to apples, right? Let's look at our team size, and let's look at our infrastructure costs before and after. If you combine those two things together, were they less? And I don't know if that's true. But what I do think is true, is after going through all of this, we were able to move drastically faster in our pace inside of engineering. After we were done, all of the teams had their own services set up, they were able to deploy on-demand, things became very, very simplified for them. And on top of that, we even, for quite a while had a development environment that was using containerization, and it was very simple to be able to hire someone in, and just run a command, and you would have your development environment up and running. And we even had quite a few people, just the first day that they were on the job, be able to create a commit and contribute, which was a great thing. And so, if you're comparing apples to apples as far as like, what's the cost, then I don't think that that's a good reason to start addressing cloud-native infrastructure. 
But if you're looking at the overall cost that we had burned in engineering time trying to get development environments set up, or burned in infrastructure, trying to release a new service, or burned in many, many other ways, then we were absolutely net positive in a situation. So, releasing a service before we moved into containerization took about two weeks, and you had about four or five different people involved in order to get that service released. And it was very time consuming, and very costly. Afterwards, after we moved into containerization, it took a matter of minutes. Like, as soon as a developer wanted to release a new service, they just built out the profile in GitLab, and then they would push the code up, and it would go deploy, and everything would be up, and available, and ready to go. And so, our operating costs, I think is really what I'm coming to, is that those drastically changed. And so, at first when I was reporting about our progress and how things were going, and they kept saying, “Well, where's our cost? Where's our cost? Where’s our cost?” And so, I kept showing them, “Okay, well, this is what our infrastructure cost before and this is what it cost after.” And while there was some movement there, the thing that I started learning and started reporting back up toward the exec team later on was, “Okay, let me show you what the scenario is now as opposed to what the scenario was then.” And as soon as I was able to start painting a picture as to how much easier and faster we were able to move, they actually quit asking me. They're like, “Okay. We're good. We're sold on the fact this was successful.” And so, I think the third lesson that we learned is that it is important to understand why. And hopefully, you can figure that out before you start everything, but we didn't quite figure it out at that point in time. 
But we did figure it out soon enough to where we were able to make choices and adapt to that reason why to make it more beneficial for the company in the long run.
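The push-to-deploy flow Travis describes—a developer builds out a profile in GitLab, pushes code, and the service deploys—typically lives in a service's `.gitlab-ci.yml`. A minimal sketch of that shape (the job names, images, and deploy command are hypothetical stand-ins, not Nav's actual pipeline):

```yaml
# .gitlab-ci.yml -- push to the repo; the pipeline builds the image and rolls it out
stages:
  - build
  - deploy

build-image:
  stage: build
  image: docker:24
  services:
    - docker:24-dind                   # Docker-in-Docker for building images in CI
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest        # hypothetical image providing kubectl
  script:
    - kubectl set image deployment/my-service app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
```

`$CI_REGISTRY_IMAGE` and `$CI_COMMIT_SHORT_SHA` are GitLab's predefined CI variables; once a template like this exists, standing up a new service is largely copy-and-push, which is where the two-weeks-to-minutes difference comes from.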
Emily: And did you feel like there was anything that was lost in translation when you were talking with the executive team and, sort of, giving updates?
Travis: Um, no, not really. I've had a few conversations on that, and there are a lot of different things that they care about. Usually, for an executive team, you want to make sure that, with what's being produced, it's not only going to be able to facilitate product movement—being able to adapt to the changes of our customers—but that we're not doing it at a pace that is unmaintainable, which is, kind of, where we had been. And so, my conversations with them went from, “Let's stop looking at this one particular metric that you keep asking me about,” to looking at the bigger picture. How much quicker are we able to push code? How much quicker are the product owners able to adapt? How much quicker are they able to take feedback, and apply that, and put that into our product, and be able to version on top of that, and create iterations? And on top of that, also saying, “Well yes, of course, we still have this one metric that does still matter. But that aside, look at the overall operations that are happening now as opposed to the way that they were.” For the most part, sometimes it would take a little bit of explanation, and I'm not going to lie, there were a couple times where I had to make PowerPoints and lay things out in a different way, but I think that it ended up being so well received that there was even one point where I had to present to the entire company and tell them about our migration, and what happened, and the impact that it had on our development time, and on our infrastructure costs, and everything else. Because through a migration, there are going to be pains with that migration. And after it was all said and done, the executive team wanted the entire company to understand and know why we had to go through those pains, and why it was necessary to move forward. And so, yeah, I ended up talking to the entire company and illustrating to everyone why this was such a monumental move for us.
So, I don't think there was a lot lost in translation. I think it actually was very well received.
Emily: Tell me a little bit more—you were talking about how you basically were in charge of prioritizing which tools you were going to buy, what you were going to build internally. Tell me a little bit more about both what you were looking for, how you were making that decision, what some of the choices were that you made?
Travis: Yeah, for sure. So, let's, kind of, simplify that a little bit. It comes down to—and I was able to give a talk at a couple conferences about this specific thing—building versus buying. Why would you want to build, versus why would you want to just buy? And the conclusion that I came to, with a lot of help from reading a lot of information on the internet and also from some mentors, is that the most expensive resource that you have is the people that are on your team, that are working with you. Anything else above and beyond them is actually second. And so, the thing that you want to put your most expensive resource towards are the things that are going to end up evolving your company and progressing your company the fastest. And so, if there are solutions out there that you could buy, you could say, eh, I could save a few thousand dollars and we could build this ourselves, and blah, blah, blah. It's like, okay, you're looking at the purchase price of that software. But are you looking at the development time and hours that said company has put behind it, and the time and effort that you're going to end up putting behind it? Because I don't know about most companies, but no one on my team was free. There was a price behind them at the end of the day, and what they were spending that time and effort on, for me, needed to be absolutely necessary for the progression of the company. So, when I started looking at solutions, I started just deciding: if we build this ourselves, or if we take the time to do an open-source solution and we have to manage this ourselves, what is going to happen to the management of my team? What's going to happen to the overhead of my team? Because I can't just go hire more developers because I want to use a new open-source solution and I need someone to maintain it. And I don't necessarily have anything against open source, but a lot of times, that's what it ended up coming down to.
I think it's very valuable, and I think there are situations where it is the right way to go. Anyway, with those decisions, it really came down to if we implement this and we have to manage it, then that is time that my team is not going to be able to spend on these other projects, which those other projects are more important to me. So, then we would go and look for a vendor or a solution that could help step in and fill in the gap that we were missing. And so, given containerization, cloud-native, Kubernetes, and even Prometheus, all of those are all open source tools. But a lot of times, what we would do is use something that had that open source side to it, so that we could create a standardization, and use that standardization internally, and one that would be monitored and controlled by the community, which helped a ton. But then we have a solution on top of that, that would help bridge the gap between we don't want to manage it ourselves, or we want help managing it, or we want a solution that can step in, use this standardization, but still provide the functionality that we're looking for. And so, that is that's really where—when we were evaluating what we needed to do, then we, kind of, went through that process of can we find a way to standardize around a toolset using open source? And if so, that was great. Then we would take that and say, “Okay, now can we get help with it?” And then, that's typically the route that we would end up going.
Emily: Was there anything that you wanted to buy but couldn’t? Like, there wasn't something available?
Travis: Some of the really hard ones were actually more niche. So, I would say one of the ones that we really struggled with was on the security side. Security solutions that worked inside of a cloud-native ecosystem, as opposed to a virtual machine ecosystem, were not advancing nearly as fast as some of the infrastructure tools. And so, that side of things was actually very, very complicated and hard to work with. We found some startups that were starting to address this, and we were working with them, and we did purchase a solution from one of them, but we kept running into, “They only cover this piece; they don't cover all these other pieces.” Because you have intrusion detection and prevention, you also have network monitoring, you need to have forensics running against your logs, you also need runtime protection when your environment is up and going, and then you have the virus protection, too. And so, there wasn't anywhere that we could go to just say, “We need a full and complete security solution, and we want it to start now; go.” So, being able to facilitate that part of our infrastructure was actually very complicated, and we ended up having to poke around and use some antiquated services that we tried to update to facilitate our needs. And some of them—I hate to say this—were even just to check a box, because within the containerized world, as opposed to the VM world, you're not going to get the same kind of coverage. A big one, really, is virus protection. If you go to Docker's website and you read about virus protection, the only way to scan a Docker environment for viruses is to shut down Docker, which doesn't work. You can't ever shut down Docker, because that's your entire ecosystem, so you just can't do it. But you can use immutability. You can use the fact that you created your images yourself.
You can sign your images to verify that they came from a trusted source and stuff like that. And so, we ended up having to piecemeal a fair amount of that together. So, of anything, I would say that's the one thing that you can't just go out and buy right now.
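The immutability point—trusting images because you built them yourself—comes down to pinning content by digest rather than by a mutable tag: if a single byte changes, the digest no longer matches. A toy illustration of that check in Python (the "image" bytes here are a stand-in for registry content, not a real Docker interaction):

```python
import hashlib

# Stand-in for image content fetched from a registry.
image_bytes = b"FROM scratch\nCOPY app /app\n"

# Digest recorded at build time -- the "pin" you trust.
pinned_digest = "sha256:" + hashlib.sha256(image_bytes).hexdigest()

def verify(content: bytes, pin: str) -> bool:
    """Recompute the content digest and compare it against the pinned value."""
    return "sha256:" + hashlib.sha256(content).hexdigest() == pin

print(verify(image_bytes, pinned_digest))               # unmodified content passes
print(verify(image_bytes + b"tampered", pinned_digest)) # any change fails
```

Real registries apply the same idea at the image-manifest level (`image@sha256:…` references), and signing tools layer a cryptographic signature over that digest so you can also verify who produced it.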
Emily: I realized that we've talked a lot about pain points, but I also wanted to ask about pleasant surprises. Was there anything along the journey that went much better, was much easier than you expected?
Travis: One big one was actually the overall outcome, because we went in with one perspective—let's save money on infrastructure—but then realizing, through the journey, how much simpler a lot of the process became, especially for developers, was a very, very pleasant surprise. And on top of that, even the developer adoption of it. I know—and I hear a lot—that it doesn't go very well for some companies, and developers don't want to learn a new technology, or whatever else, but we put a lot of time and effort upfront into educating our developers. And the adoption actually went really well for us, and that was also a very pleasant surprise. I had my defenses up; I was ready to go to war and be like, “This is happening, regardless of whether you want it or not.” And I didn't ever have to do that. As soon as we sat down and showed them the differences in the workflow, and how much quicker it was to be able to adapt and make changes to their services, as well as push new services, they were just like, “Sign me up. I'm ready to go. This is way better than anything we're doing right now.” And so, that, for me, was also another very pleasant surprise.
Emily: Can you tell me a little bit more about how this experience informs your role now at Kasten?
Travis: Yeah, so I would say there's a few things. I'd say probably the primary one is that having gone through this with a company—watching the migration, and watching all of the different struggles and the different problems you have to solve to adopt a containerized workflow—has definitely influenced how I approach customers working with Kasten, but also engineering, and also the executive team here as well. Working with them, and helping them understand the things that mattered versus the things that didn't matter, and the things that are going to affect customers more than, maybe, they would think, just from my own experience having dealt with it.
Emily: Give me some examples. What are some things that do matter versus don't matter? And where do you think there's sometimes a disconnect?
Travis: Yeah. So, you know, I'll be frank here. Kasten is definitely a Kubernetes-based vendor, right? And I remember there were a couple times—and I don't know if I want my CEO to hear this, but if he does, it's okay—where I'd go to KubeCon conferences or different container-based conferences, look at the vendors, and just think, I don't know if I would ever want to do that. That never made sense to me. But when you go up and you talk to a vendor, you discuss the product that they're building or whatever else, and they like to show you all the flashy things, the things that really make them stand out. They're like, “Hey, we can take this process and make it crazy simpler,” or, “We can do this thing for you. We can add in this service mesh, and you're going to get all of this telemetry out of your system,” and all this craziness, or, “We can build an underlying data volume so that you can have stateful applications inside of Kubernetes. And we'll do all of this.” And most of the time—not every time—when I would talk to them, and they would give me their flashy approach and tell me, “Hey, this is all the craziness you can do,” I'd go back and talk to my team and say, “Does this make sense for us? Yeah, this is cool, but with the amount of work we're going to have to put in in order to adapt that, or to even use it and leverage it, what is it going to buy us? What advantage is it going to give us over what we're doing right now?” And a lot of times, it didn't end up giving a lot of advantage. It didn't make a huge difference.
Now, being at one of those vendors, one of the big differences—and this was, kind of, a long-running thing with me and Niraj, our CEO; we ended up having a ton of conversations around this—but the big difference that I see with Kasten, and one thing that I continue to push here, and I told him time and time again, this is why I joined this team, is because Kasten, while they have their—we do data management, we can do backups, we can do recoveries, we have data mobility, right? The thing about Kasten is it actually lets you attack a problem the way that you want to attack it, and that's stateful applications. A lot of times, you're going to go look into how to run stateful applications and you're going to get this big long—oh, you need a data layer. You need your data to be able to migrate across availability zones, or across regions, to be able to do this. And that adds so much complexity, where at the end of the day, how often does the data infrastructure actually go bad? We have these cloud providers now, and they have spent a lot of time on making sure that their data infrastructure is pretty robust. Why aren't we just using those? Why aren't we just using those and then accounting for disasters or issues coming up around that? And that's actually the way that Kasten has approached it: you can use your data, you can use whatever you want, and we're just here as a tool to help you facilitate that process. And so, kind of, getting back to your question of what I really feel makes a difference in this space: you have to understand what that customer is trying to do. And you have to understand how to facilitate their end goals and what they want. It's not about coming in and saying, “I can help you do all of this stuff.” And it's like, “Okay, but what does all that stuff get for me?
Because really, the problem that I'm dealing with right now is x, y, and z.” And as a vendor talking to customers, it's more about helping them. It's more about solving their problems, allowing them to focus on the tasks that are going to be more monumental for their company, instead of focusing on tasks that aren't. Not everyone is going to be a data management company, and rightfully so. You have other important things to be paying attention to. So, let me come in and help you address that need without causing you a lot of pain and a lot of hardship, so that you can just use a solution and move on. But it's about using a solution in your environment, in your way, to where I'm helping solve a problem instead of helping create another problem. For what benefit?
Emily: Do you think that most companies that are on this journey are essentially trying to solve similar problems?
Travis: On which side? On the vendor side or on the consumption side?
Emily: Oh, so, like Nav, the end users. Do you think essentially any company that's moving to containers, that's moving to Kubernetes, is going to run into essentially the same set of problems?
Travis: You know, no, I don't think so. I think that each journey is going to be a little bit different, and it's going to cause different problems. Because take a company like Nav, where we had to be PCI compliant. We had different regulations that we had to abide by. And that caused the solution set that worked for us to be drastically different from a company that may not have those issues. Take Kasten, for example. Even though we have a product in the Kubernetes space, we're still very much a consumer of those technologies as well. But our problems, and the things that we're addressing, are monumentally different from the ones we were addressing at Nav. And then, you also get into a lot of questions around what the important things are. Because sometimes your SLA is the most important thing, and that will cause your solution to differ. Sometimes your SLA can waver a little bit, but you absolutely have to provide for a different need for your customers. And so, while all of these tools kind of look the same—like if you look out at the freeway in the morning, you see all of these people in vehicles, and they're all traveling somewhere. Sometimes these people are moving large products. Sometimes these people are only moving themselves. And when you're only moving yourself, sometimes you're going to work, but sometimes you're going to play. The reason we all acquired a vehicle is because it helps facilitate that process, though our need for that vehicle is drastically different. And I think that in cloud-native and Kubernetes it's the same thing. The needs are so varying and so different, but yet you can use similar tools to help facilitate them in different ways.
Emily: Do you have one or two examples of how Nav and Kasten have different needs?
Travis: Yeah, absolutely. So, I would say that one of the foremost concerns at Nav is absolutely security. With the PCI regulation and everything else, protecting the identity of our customers, protecting the data for the company, it is a must. There are no ifs, ands, or buts about it; it has to happen. And the way that we ended up using Kubernetes had to facilitate that as well. So, like I mentioned earlier, we had six nodes that we were using for the compute side of things. The reason we had six is because we had to create a logical segregation within those nodes to protect the services running on them. So, we would only allow back end services that had access to confidential information on a subset of nodes. And we wouldn't allow anything else to run there. So, you could never run your front end service and a PCI-compliant service on the same node, ever. But if you look at what we do at Kasten, we are running quite a few environments, and being in the Kubernetes ecosystem, and being a vendor there, we end up having to work with every single cloud vendor out there. We're getting certified with all of them—I'm working with a few right now. But we have certifications within AWS, and Google, and Azure, and we're also working with VMware Pivotal. So, it's across the board, and that's something that's been crazy important for Kasten: being able to have that multi-cloud experience. Being able to take data and move it from one environment to another, whether on-premise or off-premise. That's one of our primary needs at Kasten. And so, we build around that need, whereas Nav builds around security.
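The node-segregation pattern Travis describes can be sketched, hypothetically, with Kubernetes node labels, taints, and tolerations. All names here (the node, the service, the label key, the image) are invented for illustration and are not from Nav's actual setup:

```yaml
# First, mark a subset of nodes as PCI-only (hypothetical node name):
#   kubectl label node node-1 workload-tier=pci
#   kubectl taint node node-1 workload-tier=pci:NoSchedule
# The taint keeps ordinary pods (e.g. front end services) off these nodes;
# only pods that both tolerate the taint and select the label land there.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: billing-backend        # hypothetical PCI-scoped service
spec:
  replicas: 2
  selector:
    matchLabels:
      app: billing-backend
  template:
    metadata:
      labels:
        app: billing-backend
    spec:
      nodeSelector:
        workload-tier: pci     # schedule only onto the labeled PCI nodes
      tolerations:
        - key: workload-tier
          value: pci
          effect: NoSchedule   # tolerate the taint that excludes other pods
      containers:
        - name: billing-backend
          image: registry.example.com/billing-backend:1.0
```

With this in place, the scheduler enforces the "never on the same node" rule automatically rather than relying on convention.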
Emily: Excellent. Anything else that you'd like to add that I maybe didn't think to ask, didn’t know to ask?
Travis: Oh, that's an open-ended question. I would say one thing, if nothing else: I am very much in agreement with the idea that almost every company out there is someday going to end up hearing the word Kubernetes, just the same as they ended up hearing VMware associated with virtual machines and stuff. It is. And there's a reason behind that, but the reason behind it, I don't think, is as important as understanding when and why it makes sense for you to start adapting and adopting those technologies. Because for every company, just as we've been talking about, it ends up being different, drastically different. And I think that it is very important to understand your end goal going into it. Look at the overall outcome of what you're trying to achieve and use that to help drive the movement forward. And the reason I say this is because if you look at a lot of—I don't know if you want to call them fads or movements within technology—you look at Agile, you look at microservices, even cloud-native. A lot of times people look at that and they're like, “Hey, look at all these good things that come out of this.” And they don't typically look at what the trade-off is. Because in a microservice infrastructure, if you've got two developers, then why do you need seven different microservices? It might actually be working against what your workflow is like. And I think that even containerization is that same way. There are situations where it makes sense to adapt your workflow to start using those technologies, and I think sometimes there aren't. You need to go into it and understand what the outcome is. And if you understand that outcome, then when you engage and start using those technologies, every decision you make will help drive towards that outcome.
And I think that that'll help you get through it a lot quicker and a lot easier, and it'll also help you just get rid of a lot of the other noise that's out there. And it'll help you, kind of, get specifically to the point of things that make sense to you and to your company so that you're able to get to that outcome and continue to drive forward and continue to help your company become successful.
Emily: All right, one last question. Actually two last questions. What is a can't-live-without engineering tool for you?
Travis: Oh, man. There's probably so many. But for me, probably one of—let me think. Is this, like, a tool that I use on my computer, or is this maybe something I use in the process? Or any of the above?
Emily: Any of the above. I mean, you could tell me Slack or something, anything that you can't imagine doing your work without.
Travis: Yeah, Slack, I think, actually deters me from getting work done. [laughs] But I would say for me, the one I cannot live without is a pipelining system. And for me, a lot of times it has come down to GitLab. I really love the workflow in GitLab, but any pipelining system, really, is the must-have. Because if you can automate the process of getting code from a developer's laptop into an environment, that process saves so much time and so many resources that I don't even care which system you end up using. But just having that process, having that CI/CD system, I think is an absolute must-have.
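As a loose sketch of the laptop-to-environment pipeline Travis describes, a minimal GitLab CI configuration might look like the following. The stage names, images, and the deployment target are illustrative assumptions, not details from the episode; `CI_REGISTRY_IMAGE` and `CI_COMMIT_SHORT_SHA` are GitLab's predefined pipeline variables:

```yaml
# .gitlab-ci.yml — minimal test/build/deploy pipeline (illustrative)
stages:
  - test
  - build
  - deploy

test:
  stage: test
  image: ruby:3.2              # hypothetical; match your app's runtime
  script:
    - bundle install
    - bundle exec rspec        # run the test suite on every push

build:
  stage: build
  image: docker:24
  services:
    - docker:24-dind           # Docker-in-Docker to build the image
  script:
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    # roll the new image out to a hypothetical existing deployment
    - kubectl set image deployment/app app="$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"
  environment: production
  only:
    - main                     # deploy only from the main branch
```

Every push then runs tests and builds an image automatically, and merges to main flow straight to the environment, which is the hands-off path from laptop to production that makes a pipeline the must-have tool.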
Emily: Excellent. And then, how can people connect with you?
Travis: Yeah, I'm on some social media, I don’t do it all, but I'm definitely on LinkedIn. You can just search for me by name on there. I'm also on Twitter. My callsign there is @stmpy. It's kind of a long story, but my friends make fun of me because my legs are short, and so they used to call me Stumpy. So, it's Stumpy without the U, so just S-T-M-P-Y on Twitter. And I think that those are probably the two best ways to get a hold of me.
Emily: Excellent. Well, thank you so much for chatting.
Travis: Yeah, thank you. I really appreciate it.
Announcer: Thank you for listening to The Business of Cloud Native podcast. Keep up with the latest on the podcast at thebusinessofcloudnative.com and subscribe on iTunes, Spotify, Google Podcasts, or wherever fine podcasts are distributed. We'll see you next time.
This has been a HumblePod production. Stay humble.