Why Companies Go Cloud-Native with Austin Adams and Zach Arnold

 

Some of the highlights of the show include:
  • The diplomacy that’s required between software engineers and management, and why influence is needed to move projects forward to completion.
  • Driving factors behind Ygrene’s Kubernetes migration, which included an infrastructure bottleneck, a need to streamline deployment, and a desire to leverage their internal team of cloud experts.
  • Management’s request to ship code faster, and why it was important to the organization.
  • How the company’s engineers responded to the request to ship code faster, and overcame disconnects with management.
  • How the team obtained executive buy-in for a Kubernetes migration.
  • Key cultural changes that were required to make the migration to Kubernetes successful.
  • How unexpected challenges forced the team to learn the “depths of Kubernetes,” and how it helped with root cause analysis.
  • Why the transition to Kubernetes was a success, enabling the team to ship code faster, deliver more value, secure more customers, and drive more revenue.
Transcript

Announcer: Welcome to The Business of Cloud Native podcast, where we explore how end users talk and think about the transition to Kubernetes and cloud-native architectures.
Emily: Welcome to The Business of Cloud Native. My name is Emily Omier, and I am here with Austin Adams and Zach Arnold, and we are here to talk about why companies go cloud-native.
Austin: So, I'm currently the CTO of a small Agrotech startup called HerdX. And that means I spend my days designing software, designing architecture for how distributed systems talk, and also leading teams of engineers to build proof-of-concepts and then production systems as they take over the projects that I've designed.
Emily: And then, what did you do at Ygrene?
Austin: I did the exact same thing, except without the CTO title. And I also had other higher-level engineers working with me at Ygrene, so we made a lot of technical decisions together. We all migrated to Kubernetes together, and Zach was a chief proponent of that, especially with the culture change. So, I focused on designing software that teams of implementation engineers could take over and actually build out for the long run. And I think Zach really focused on—oh, I'll let Zach say what he focused on. [laughs].
Emily: Go for it, Zach.
Zach: Hello. I'm Zach. I also no longer work for Ygrene, although I have a lot of admiration and respect for the people who do. It was a fantastic company. So, Austin called me up a while back and asked me to think about participating in a DevOps engineering role at Ygrene. And he sort of said at the outset, we don't really know what it looks like, and we're pretty sure that we just created a position out of a culture, but would you be willing to embody it?
And up until this point, I'd had cloud experience, and I'd had software engineering experience, but I hadn't really spent a ton of time focused on the actual movement of software from developers' laptops to production with as few hiccups, and as many tests, and as much safety as possible in between. So, I always told people the role felt like it was three parts: part IT automation expert, part software engineer, and then part diplomat. The diplomacy was mostly in between people who are more operations focused (support engineers, project managers, and people who were on-call day in and day out), and being a go-between for higher levels of management and the software engineers themselves, because there's this awkward, coordinated motion that has to happen at a fine-grained level in order to get DevOps to really work at a company.
What I mean by that is, essentially, Dev and Ops seem on the surface to have opposing goals: the operations staff's job is to maintain stability, and the development side's job is to introduce change, which invariably introduces instability. So, that dichotomy means that being able to simultaneously satisfy both desires is really the goal of DevOps, but it's difficult to achieve at an organizational level without dealing with some pretty critical cultural components. So, what do I spend my day on? The answer to that question is, yes. It really depends on the day. Sometimes it's cloud engineers, sometimes it's QA folks, sometimes it's management. Sometimes I'm heads-down writing software for integrations in between tools. And every now and again, I get to contribute to open source. So, a lot of different actual daily tasks take place in my position.
Emily: Tell me a little bit more about this diplomacy between software engineers and management.
Zach: [laughs]. Well, I'm not sure who's going to be listening in this amazing audience of ours, but I assume, because people are human, that they have capital-O Opinions about how things should work, especially as it pertains to the software development lifecycle, the ITIL process of introducing change into a datacenter or a cloud environment, compliance, or security. There are lots of, I'll call them, thought frameworks that have a very narrow focus on how we should be doing something with respect to software. So, diplomacy—well, I guess in true statecraft it's being able to work in between countries—but in this particular case, diplomacy is using relational equity, or influence, to have every group achieve a common and shared purpose.
At the end of the day, in most companies the goal is actually to be able to produce a product that people want to pay for, and to do so as quickly and as efficiently as possible. To do that, though, again requires a lot of people with differing goals to work together towards that shared purpose. So, the diplomacy, aside from just having way too many meetings, actually looks like being able to communicate other thought frameworks to different stakeholders and being able to synthesize all of the different narrow-focused frameworks into a common, shared, overarching process. I'll give you a concrete example, because it feels like I just spewed a bunch of buzzwords. Let's say a common feature is being delivered for ABC Company. That feature requires X number of hours of software development; X number of hours of testing; X number of hours of preparation, whether that's capacity planning, fleet-size recommendations, or some other form of operational pre-work; and then the actual deployment, running, and monitoring. In the company that I currently work for, we just described roughly 20 different teams that would have to work together in order to deliver that feature as rapidly as possible.
So, the process of DevOps, and the diplomacy of DevOps, for me looks like—aside from trying to automate as much as humanly possible—providing what I call interface guarantees, which are basically shared agreements of functionality between two teams. The way that the developers speak to the QA engineers is through Git: they develop new software, and they push it into shared code repositories. The way that the QA engineers speak to the people who are going to be handling the deployments, or to management in this particular case, is through a well-formatted XML test file. So, providing automation around those particular interfaces, and then ensuring that everyone's shared goals are met at the particular point in time where they're going to be invoked over the course of delivering that feature, is the “subtle art”—air quotes, you can't see, but—of DevOps diplomacy, to me. Does that kind of help?
Emily: Yeah, absolutely. Let's take, actually, just a little bit of a step back. Can you talk about what some of the business goals were behind moving to Kubernetes for Ygrene? Who was the champion of this move? Was it business stakeholders saying, “Hey, we really need this to change,” or engineering going to business stakeholders? Who needed a change?
Austin: I believe that the desire for Kubernetes came from a bottleneck of infrastructure. Not so much around performance, as in the applications weren't performing due to scale; we had projected scale coming where it would potentially cause a problem. But it was also about the ease of deployment. It had a very operations mindset, as Zach was saying: our infrastructure, for the core application set, was almost entirely managed by outsourcing. And so, we depended on them to innovate, and we depended on them to spin up new environments and services.
But we also had this internal, competing team that had a cloud background. And so, what we were trying to do was lessen the time between idea and deployment by utilizing platforms that were more scalable, more flexible, and all the things that Docker gives you with dev/prod parity: the ease of packaging your environment together so that a small team can ship an entire application. And so, I think our main goal was to take that team that already had a lot of cloud experience and give them more power to drive the innovation, and not be bottlenecked by what the outsourcing team could do. Which, by the way, just for the record, the outsourcing team was an amazing team, but they didn't have the Kubernetes or cloud experience, either.
So, in terms of a hero or champion of it, it just started as an idea between me and the new CIO who came in, talking about how we could ship code faster. One of the things that happened in my career was the desire for a rapid response team, which sounds like a buzzword or something, but it was this idea that Ygrene was shipping software fairly slowly, and we wanted to get faster. So, really, the CIO and one of the development managers were the big champions of, “Hey, let's deliver value to the business faster.” And they had the experience to ask their engineers how to make that happen, and then to trust Zach and me through this process of delivering Kubernetes, and Istio, and container security, and all these different things that eventually got implemented.
Emily: Why do you think shipping code faster matters?
Austin: I think, for this company, why it mattered was that the PACE financing industry is relatively new. And while financing has some old, established patterns, I feel like there's still always room for innovation. If you hear about the early days of the Bridgewater hedge fund, they were a source of innovation, and they used technology to deliver new types of assets and things like that. And so, our team at Ygrene was excellent because they wanted to try new things. They wanted to try new patterns of PACE financing, or ways of getting in front of the customer, or connections with different analytics so they could understand their customer better.
So, it was important to be able to try things and experiment to see what was going to be successful, to get things out into the real world and know: okay, this is actually going to work, or no, this isn't going to work. And then, also, one of the things within financing, especially newer financing, is that there are a lot of speed bumps along the way. Compliance laws can come into effect, and we work with cities and governments that have specialized rules and specialized things that they need—because everyone's an expert when it comes to legislation, apparently—and they decide that they need X, and they give us a date by which we have to get it done. So, we actually have another customer out there, which is the legislative bodies. We have to get the software, the features that are needed within the financing system, out by certain dates, or we're no longer eligible to operate in those counties. So, part of it was a core business risk: we needed to be able to deliver faster. The other part was, how can we grow the business?
Emily: Zach, this might be a question for you. Was there anything that was lost in translation as you were explaining what engineering was going to do in order to meet this goal of shipping code faster, of being more agile, when you were talking to C-level management? How did they understand it, and did anything get lost in translation?
Zach: One of the largest disconnects, both technically and in speaking to management at a high level, was explaining how we were no longer going to manage application servers as though they were pets. When you come from an on-premises setup, and you've got your VMware ESXi and you're managing virtual machines, the most important thing you have is backups, because you want to keep those machines exactly as they are, and you install new software onto those same machines. Kubernetes, on the other hand, says: I'm going to put your pods wherever they fit on the cluster, assuming it conforms with the scheduling pattern; if a node dies, that's totally fine, I'm going to spin a new one up for you, move pods around, and ensure that the application stays in its desired state. That switch in thinking, from infrastructure as pets to infrastructure as cattle, is difficult to explain to people who have spent their careers building and maintaining datacenters. And, while it's not guaranteed that this is across the board, if you want to talk about a generational divide, the people who usually occupy the C-level office chairs were familiar, in the heyday of their careers, with a datacenter-based setup. In a cloud-based consumption model, where it really doesn't matter and I can just spin up anything anywhere, you stop reasoning about your application as the servers it comprises and instead talk about it as the workload it comprises. You have to really, really concretely explain to people exactly how it's going to work, and that the entire earth will not come crashing down if you lose a server, or if you lose a pod, or if a container hiccups and gets restarted by Kubernetes on that node. I think that was the real key one.
And the reason that actually became incredibly beneficial for us is that once we had that executive buy-off of, “While I still may not understand, I trust that you know what you're doing and that this infrastructure really is replaceable,” it allowed us to get a little bit more aggressive with how we managed our resources. So, now, using Horizontal Pod Autoscaling, using the Kubernetes Cluster Autoscaler, and leveraging Amazon EC2 Spot Fleets, we were only ever paying for the exact amount of infrastructure that was required to run our business. And I think that is usually the thing that translates best to management and non-technical leadership. Because when everyone is aware that, by using this tool and a cloud-native approach to running the application, you are only ever paying for the computational resources you need in that exact minute to run the business, the budget discussions become a lot easier: everyone knows your exact run rate when it comes to technology. Does that make sense?
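To make that concrete for readers following along, here is a minimal sketch of a Horizontal Pod Autoscaler like the one Zach describes, written with the official Kubernetes Python client (in practice this is usually an equivalent YAML manifest applied with kubectl). The deployment name, namespace, and scaling bounds are hypothetical, not Ygrene's actual configuration.

```python
# Minimal sketch: create a Horizontal Pod Autoscaler with the official
# Kubernetes Python client. Names, namespace, and bounds are hypothetical.
from kubernetes import client, config

config.load_kube_config()  # use the local kubeconfig, as kubectl would

hpa = client.V1HorizontalPodAutoscaler(
    metadata=client.V1ObjectMeta(name="web", namespace="production"),
    spec=client.V1HorizontalPodAutoscalerSpec(
        scale_target_ref=client.V1CrossVersionObjectReference(
            api_version="apps/v1", kind="Deployment", name="web",
        ),
        min_replicas=3,                        # floor for steady-state traffic
        max_replicas=20,                       # ceiling during bursts
        target_cpu_utilization_percentage=70,  # scale out when pods run hot
    ),
)

client.AutoscalingV1Api().create_namespaced_horizontal_pod_autoscaler(
    namespace="production", body=hpa
)
```

With the Cluster Autoscaler and Spot Fleets handling the node side, the number of pods, and therefore the number of instances being paid for, tracks actual load rather than a fixed fleet size.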
Emily: Absolutely. How important was having that executive buy-in? My understanding is that a lot of companies, they think that they're going to get all these savings from Kubernetes, and it doesn't always materialize. So, I'm just curious, it sounds like it really did for Ygrene.
Zach: There were two things that really worked well for us when this transformation was taking place. The first was that Ygrene was still growing, so if the budget grew alongside the growth of the company, nobody noticed. That was one really incredible thing that happened that, now, having had different positions in the industry, I don't know if I appreciated enough, because if you're attempting to make a cost-neutral migration to the Cloud, or to adopt cloud-native management principles, you're probably going to move too little, too late. And when that happens, you run the risk of doing a really poor job of adopting cloud-native, and then scrapping the project because it never materialized the benefit that, as you just described, some people never experience. The other benefit we had, I think, was the fact that there were enough incredibly senior technical people working with us—and again, I learned everything from these people—and because we were all, for the most part, on the same page when it came to this migration, it was easy to have a unified front with our management, because every engineer saw the value of this new way of running our infrastructure and our application.
One non-monetary benefit that really helped get the buy-in—and this obviously helped with our engineers—was the fact that, with Kubernetes, our on-call SEV-1 pages went down by, I want to say, over 40 percent, which was insane, because Kubernetes was automatically intervening in the cases where servers went down. JVMs run out of memory, exceptions cause strange things, but a simple restart usually fixes the vast majority of them. Well, now Kubernetes was doing this, and we didn't need to wake somebody up in order to keep the machine running.
Emily: From when you started this transition to, I should say, when you left the company, what were some of the surprises, either surprises for you, or surprises for other people in the organization?
Austin: The initial surprise was the yes that we got. So, initially I pitched it and started talking about it, and then the culture started changing to where we realized we really needed to change, and bringing Zach on and then getting the yes from management was the initial surprise. And—
Emily: Why was that a surprise?
Austin: It was just surprising because, when you work as an engineer—I mean, none of us were C-suite, or dev managers, or anything; we were just highly respected engineers working in the HQ—it was a surprise that they went for what we felt was a semi-crazy idea at the time, because Kubernetes was still a little bit early. I mean, EKS wasn't even a thing from Amazon yet. We ran our Kubernetes clusters from the hip using kops, which is a great tool, but obviously it wasn't managed. It was managed by us, mainly by Zach and his team, to be honest.
So, that was a surprise, that they would trust a billion-dollar financing engine to run on the proposal of two engineers. And then, the next surprises were just how much single-server thinking, vertical scaling, and depending on running on the same server were baked into our applications. As we started to look at moving the core applications into a containerized environment, and into an environment that can be spun up and spun down, we had to look at the assumptions the applications were making around being on the same server, having specific IP addresses or hostnames, and things like that, and take those assumptions out to make things more flexible. So, we had to remove some stateful assumptions in the applications; that was a surprise. We also had to enforce more of the idea of idempotency, especially when introducing Istio, with retryable connections and retry logic around circuit breaking and service-to-service communication. So, some of the bigger surprises were in the paradigm shift from, “Okay, we've got this service that's always going to run on the same machine, and it's always going to have local access to its files,” to, “Now we're in a pod that's got a volume mounted, and there are 50 of them.” And it's just different. So, that was a big—[laughs], that was a big surprise for us.
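Because a service mesh like Istio can retry a failed call automatically, the same request may reach a service more than once, which is why Austin mentions idempotency. Here is a minimal, hypothetical Python sketch of the idea; the in-memory store and the payment example are illustrative stand-ins, not Ygrene's code.

```python
# Minimal sketch of an idempotent handler: if a retry delivers the same
# request twice, the side effect is applied only once. The in-memory dict
# stands in for a real database; all names are hypothetical.
from dataclasses import dataclass


@dataclass
class Payment:
    idempotency_key: str   # supplied by the caller and reused on retries
    amount_cents: int


_processed: dict[str, str] = {}   # idempotency_key -> result id


def apply_payment(payment: Payment) -> str:
    # A retried request carries the same key, so we return the original
    # result instead of applying the charge twice.
    if payment.idempotency_key in _processed:
        return _processed[payment.idempotency_key]

    result_id = f"txn-{len(_processed) + 1}"   # pretend we talked to the ledger
    _processed[payment.idempotency_key] = result_id
    return result_id


# A retry with the same key is now harmless:
first = apply_payment(Payment("abc-123", 5000))
second = apply_payment(Payment("abc-123", 5000))
assert first == second
```

In a real service, the check and the write would need to be atomic, for example via a unique constraint in the database, so that two concurrent retries cannot both slip through.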
Emily: Was there anything that you'd call a pleasant surprise? Things that went well that you anticipated to be really difficult?
Zach: Oh, my gosh, yes. When you read through Kubernetes for the first time, you tend to have—especially if somebody else told you, “Hey, we're going to do this”—this sinking feeling of, “Oh my god, I don't even know nothing,” because it's so immense in its complexity. It requires a retooling of how you think, but there have been lots of open-source community efforts to improve the cluster lifecycle management of Kubernetes, and one such project that really helped us get going—do you remember this, Austin?—was kops.
Austin: Yep. Yep, kops is great.
Zach: I want to say Justin Santa Barbara was the original creator of that project; it's still open source, and I think he still maintains it. But to have a production-ready—and we really mean production-ready: it was private, everything was isolated, the CNI was provisioned correctly, everything was in the right place—Kubernetes cluster ready to go within a few hours of learning about this tool in AWS was huge, because then we could start to focus on what we didn't even understand inside of the cluster. Because Kubernetes has two sides, and both of them are confusing: there's the infrastructure that participates in the cluster, and there are the actual components inside of the cluster which get orchestrated to make your application possible. So, not having to initially focus on the infrastructure that made up the cluster, so we could just figure out the difference between our butt and the hole in the ground when it came to our application inside of Kubernetes, was immensely helpful to us. I mean, there are a lot of tools these days that do that: GKE, EKS, AKS. But we got into Kubernetes right after it went GA, and this was huge to help with that.
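For readers who have not used kops, the workflow Zach describes looks roughly like the following. The commands are normally run straight from a shell; they are wrapped in Python's subprocess here only to keep the examples in one language, and the cluster name, state bucket, zones, and instance sizes are all made up.

```python
# Rough sketch of the kops workflow described above. Requires kops and AWS
# credentials to be configured; every name and size below is hypothetical.
import subprocess

STATE = "--state=s3://example-kops-state"   # kops keeps cluster state in S3
NAME = "--name=example.k8s.local"           # hypothetical cluster name

subprocess.run([
    "kops", "create", "cluster", NAME, STATE,
    "--zones=us-west-2a,us-west-2b,us-west-2c",  # spread nodes across AZs
    "--topology=private",                        # private subnets, as described
    "--networking=calico",                       # kops provisions the CNI
    "--node-count=3",
    "--node-size=m5.large",
], check=True)

# Nothing is built until you apply the generated configuration:
subprocess.run(["kops", "update", "cluster", NAME, STATE, "--yes"], check=True)
```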
Emily: Can you tell me also a little bit about the cultural changes that had to happen? And what were these cultural changes, and then how did it go?
Zach: As Austin said—and I don't want to offer this as a sweeping statement—I think the vast majority of the engineers that we had in Seattle, in San Jose, and in Petaluma, where the company was headquartered, even if they didn't understand what the word idempotent meant, understood more or less how that was going to work. The larger challenge for us was actually helping our contractors, who made up the vast majority of our labor force towards the end of my tenure there, understand how a lot of these principles worked in software. So, take a perfect example: part of the application is written in Ruby on Rails, and in Ruby on Rails there's a concept of one-off tasks called rake tasks. When you are running a single server and you're sending lots of emails that have attachments, those attachments have to be on the file system. And this is the phrase I always said to people as we refactored the code together; I repeated the statement, “You have to pretend this request is going to start on one server and finish on a different one, and you don't know what either of them are ahead of time.”
And I think using just that simple nugget really helped, culturally, to start reshaping people's skills, because when you can't use or depend on something like the file system, or you can't depend on still being on the same server, you begin to break your task into components, and you begin to store those components in either a central database or a central object store like Amazon S3. Adopting those parts of what I would call cloud-native engineering was critical to the cultural adoption of this tool. I think the other thing was, obviously, that lots of training had to take place, and a lot of operational handoff had to take place. I remember, for a fairly long stretch of time, I was on-call along with whoever else was on-call, because I had the vast majority of the operational knowledge of Kubernetes for that particular team. So, I think there was a good bit of reskilling and mindset shift, on the technical side, to be able to adopt a cloud-native approach to software building. Does that make sense?
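The actual refactoring was done in Ruby on Rails, but the principle translates directly. Here is a minimal Python sketch, with a made-up bucket name, of what “start on one server, finish on another” forces you to do with email attachments.

```python
# Minimal sketch: instead of writing an attachment to local disk (which a
# different pod will never see), push it to shared object storage and pass
# the key around. Bucket and key names are hypothetical; the real code was
# Ruby on Rails.
import uuid

import boto3

BUCKET = "example-attachments"
s3 = boto3.client("s3")


def store_attachment(pdf_bytes: bytes) -> str:
    """Runs wherever the request starts; returns a key any other pod can use."""
    key = f"attachments/{uuid.uuid4()}.pdf"
    s3.put_object(Bucket=BUCKET, Key=key, Body=pdf_bytes)
    return key


def load_attachment(key: str) -> bytes:
    """May run on a completely different node, minutes later."""
    return s3.get_object(Bucket=BUCKET, Key=key)["Body"].read()
```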
Emily: Absolutely. What do you think actually were some of the biggest challenges or the biggest pain points?
Zach: So, challenges of cultural shift, or challenges of specifically Kubernetes adoption?
Emily: I was thinking challenges of Kubernetes adoption, but I'm also curious about the cultural shift if that's one of the biggest pain points.
Zach: It really was for us. I think so because, now, if you wanted to take out Kubernetes and replace it with Nomad there, all of the engineers would know what you're talking about. It wouldn't take more than however long it takes to migrate your Kubernetes manifests to Nomad HCL files. So, I do think the reskilling and the mindset shift, culturally speaking, was probably the thing that helped solidify it from an engineering level. But as for Kubernetes adoption—or at least problems in Kubernetes adoption—there were a lot of migration horror stories that we encountered.
A lot of cluster instability in earlier versions of Kubernetes prevented any form of smooth upgrades. I had to leave—it was my brother's wedding, what was it—oh, rehearsal dinner, that's what it was. I had to leave his rehearsal dinner because the production cluster for Ygrene went down, and we needed to get it back up. So, lots of funny stories like that. Or—Nordstrom did a really fantastic talk on this at KubeCon in Austin in 2017—the [unintelligible] split-brain problem, where suddenly the consensus between all of the Kubernetes master nodes began to fail for one reason or another. And because they were serving incorrect information to the controller managers, the controller managers were acting on incorrect information and causing the schedulers to do really crazy things, like delete entire deployments, or move pods, or kill nodes, or lots of interesting things. I think we unnecessarily bit off a little bit too much when it came to trying to do tricky stuff with infrastructure. We introduced a good bit of instability with Amazon EC2 Spot that, all things considered, I would have revised the decision on, because we faced a lot of node instability, which translated into application instability, which would cause really, really interesting edge cases to show up, basically, only in production.
Austin: One of the more notable ones—and I think this is a symptom of one of the larger challenges—was during testing. One of our technical project managers, who also helped out on the testing side and whom we nicknamed the Edge Case Factory, somehow had this superpower, almost like she was anointed, to find the most interesting edge cases. Things that never went wrong for anyone else always went wrong for her, and it really helped us build more robust software, for sure; there are some people out there with mutant powers to catch bugs, and she was one of them. We had two kinds of clusters: the lower-environment clusters, and then the production cluster. The production cluster hosted two namespaces: the staging namespace, which is supposed to be an exact copy of production, and the production namespace, so that you can smoke-test against legitimate production resources, and so on. So, one time, we started to get some calls that, all of a sudden, people were getting the staging environment underneath the production URL.
Zach: Yeah.
Austin: And we were like, “Uh… excuse me?” We eventually figured it out; it was something within the networking layer. But it was this thing where, as we rolled along, we needed a deeper understanding of, okay, how does this—to use a term that Zach Arnold coined—this benevolent botnet, how does this thing even work, at the most fundamental and most detailed levels? And so, as problems and issues would occur, pre-production or even in production, we had to really learn the depths of Kubernetes. And I think the reason we had to learn it at that stage was because of how new Kubernetes was, all things considered. I think now, with a lot more of the managed systems, I would say it's not necessary, but it's definitely helpful to really know how Kubernetes works down in the depths. So, that was one of the big challenges: to put it succinctly, when an issue came up, really knowing what was going on under the hood really, really helped us as we discovered and learned things about Kubernetes.
Zach: And what you're saying, Austin, was really illuminated by the fact that the telemetry that we had in production was not sufficient, in our minds, at least until very recently, to adequately capture all the data necessary to accurately do root cause analysis on particular issues. In the early days, there was far too much root cause analysis by, “It was probably this,” and then we moved on. Now, having actually taken the time to instrument tracing, to instrument metrics, and to instrument logs with correlation—we eventually used Datadog—but while working our way through the various telemetry tools to achieve this, we really struggled to give accurate information to stakeholders about what was really going wrong in production. And I think Austin was probably the first person on the headquarters side of the company—I'm not entirely certain about some of our satellite dev offices—to really champion a data-driven way of running software. Which seems trivial now, because obviously that's how a lot of these tools work out of the box. But for us, it was really like, “Oh, I guess we really do need to think about the HTTP error rate.” [laughs].
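As a small illustration of the kind of instrumentation Zach is describing, here is a hypothetical sketch using Datadog's Python DogStatsD client; the metric names and tags are invented, and the real services would emit far more than this.

```python
# Minimal sketch of emitting request metrics to a node-local DogStatsD agent.
# Metric names and tags are hypothetical.
import time

from datadog import initialize, statsd

initialize(statsd_host="127.0.0.1", statsd_port=8125)  # the node-local agent


def handle_request(handler) -> None:
    start = time.monotonic()
    try:
        handler()
        statsd.increment("web.requests", tags=["service:quotes", "status:ok"])
    except Exception:
        # This is the counter a dashboard or monitor alerts on,
        # instead of "it was probably this."
        statsd.increment("web.requests", tags=["service:quotes", "status:error"])
        raise
    finally:
        statsd.histogram("web.request.duration_seconds",
                         time.monotonic() - start, tags=["service:quotes"])
```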
Emily: So, taking another step back here, do you think that Ygrene got everything that it expected, or that it wanted out of moving to Kubernetes?
Austin: I think we're obviously playing up some of the challenges that we had because they were our day-to-day, but I do believe that trust in the dev team grew. We were able to deploy code during the day; we could have done that in the beginning, even with vertically scaled infrastructure, but we would have done it with downtime. It really was that, as we started to show that Kubernetes and these cloud-native tools, like Fluentd, Prometheus, Istio, and other things like that, when you set them up properly, do take a lot of the risk out, it added trust in the development team. It gave more responsibility to the developers to manage their own code in production, which is the DevOps culture, the DevOps mindset. And I think, in the end, we were able to ship code faster, we were able to deliver more value, we were able to go into new jurisdictions and markets quicker, to get more customers, and to ultimately increase the amount of revenue that Ygrene had. So, it built a bridge between the data science side of things, the development side of things, the project management side of things, and the compliance side of things.
So, I definitely think they got a lot out of trusting us with this migration. I think that, were we to have continued, Zach and I, even to this day, would probably have been able to implement more, and more, and more. Obviously, I left the company, and Zach left the company, to pursue other opportunities, but I do believe we left them in a good spot to take this ecosystem that was put in place and run with it, to continue to innovate and do experiments to get more business.
Zach: Emily, I'd characterize it with an anecdote. After our Chief Information Officer left the company, our Chief Operating Officer actually took over the management of the technology group, and aside from basically giving dev management carte blanche authority to do as they needed to, there was so much trust there that we didn't have at the beginning of our journey with technology at Ygrene. It showed in the monthly calls we had with all of the regional account managers, who are basically our out-of-office sales staff. Generally, the project managers from our group would have to sit in those meetings and hear just about how terrible our technology was relative to the competition: either lacking in features, lacking in stability, lacking in design quality, lacking in user interface design, or way overdoing the amount of compliance we had to have.
And towards the end of my tenure, those complaints dropped to zero, which I think was really a testament to the fact that we were running things stably: the number of on-call pages went down tremendously, the number of user-impacting production outages was dramatically reduced, and I think the overall quality of the software increased with every release. And we were able to say that, as a finance company, we could deploy 10 times during the day if we needed to, not because it was an emergency, but because it was genuinely a value-added feature for customers. I think that really demonstrated that we reached a level of success adopting Kubernetes and cloud-native that really helped our business win. And we basically positioned them to run experiments: they decide what they think will work from a business sense, we implement the technology behind it, and then we find out whether or not we were right.
Emily: Let's go ahead and wrap up. We're nearing the top of the hour, but just two questions for both of you. One is, where could listeners find you or connect with you? And the second one is, do you have a can’t-live-without engineering tool?
Austin: Yeah, so I'll go first. Listeners can find me on Twitter @_austbot, or on LinkedIn. Those are really the only tools I use. And I can't really live without Prometheus and Grafana. I really love being able to see everything that's happening in my applications. I love instrumentation. I'm very data-driven on what's happening inside. So, obviously Kubernetes is there, but it's almost become that Kubernetes is the Cloud. I don't even think about it anymore. It's these other tools that help us monitor and create active monitoring paradigms in our application so we can deploy fast, and know if we broke something.
Zach: And if you want to stay in contact with me, I would recommend not using Twitter; I lost my password and I'm not entirely certain how to get it back. I don't have a blue checkmark, so I can't talk to Twitter about that. I probably am on LinkedIn… you know what, you can find me in my house. I'm currently working. The engineering tool that I really can't live without is, I think, my IDE. I use IntelliJ by JetBrains, and—
Austin: Yeah, it’s good stuff.
Zach: —I think I wouldn't be able to program without it. I fear for my next coding interview because I'll be pretending that there's type-ahead completion in a Google Doc, and it just won't work. So, yeah, I think that would be the tool I'd keep forever.
Austin: And if any of Zach's managers are listening, he's not planning on doing any coding interviews anytime soon.
Zach: [laughs]. Yes, obviously.
Emily: Well, thank you so much.
Zach: Emily Omier, thank you so much for your time.
Austin: Right, thanks.
Austin: And don't forget, Zach is an author. He and his team worked very hard on that book.
Emily: Zach, do you want to give a plug to your book?
Zach: Oh, yeah. Some really intelligent people who, for some reason, dragged me along, worked on a book. Basically, it started as an introduction to Kubernetes, and it turned into a master's course on Kubernetes. It's from Packt Publishing, and yeah, you can find it there, on amazon.com, or steal it on the internet. If you're looking to get started with Kubernetes, I cannot recommend the team that worked on this book enough. It was a real honor to be able to work with people I consider to be heavyweights in the industry. It was really fun.
Emily: Thank you so much.
Announcer: Thank you for listening to The Business of Cloud Native podcast. Keep up with the latest on the podcast at thebusinessofcloudnative.com and subscribe on iTunes, Spotify, Google Podcasts, or wherever fine podcasts are distributed. We'll see you next time.
This has been a HumblePod production. Stay humble.