Microservices, Latency and Alligators in the Pond
Disrupting in a Non-Disruptive Way
I had the good fortune to attend Cisco Live 2018 in Orlando, where Cisco gave an update on its partnership with Google Cloud. In the keynote speech, Cisco CEO Chuck Robbins brought Google Cloud CEO Diane Greene on stage to explain a little more in depth. Greene’s key message, which she repeated several times, was about “disrupting in a non-disruptive way”.
While that may seem to make as much sense as an eggless omelette (there is such a thing), it’s worthwhile to try to unravel these cryptic comments. Greene herself provided precious little detail, but the kind folks at Google’s booth explained that the focus is on hybrid clouds, which would allow the Kubernetes container orchestration software to work seamlessly both in the cloud and in on-prem deployments.
Earlier this year, Cisco had signaled its container-friendly intentions when it announced Kubernetes support for its AppDynamics (application performance monitoring) and CloudCenter (application deployment and management) products. A larger announcement on the Cisco-Google collaboration is expected next year, and I would be surprised if containers are not part of the message.
Containers are essentially receptacles for microservices, and anybody building a scalable, robust application these days should seriously consider a microservice architecture; that’s largely what it means to be “cloud native”. A cloud-native application is, in effect, an assembly of services, each housed in a container. Kubernetes provides the core container orchestration features of scheduling, cluster management and discovery, but it does not cover concerns like visibility, resiliency, traffic control, security and policy enforcement. This matters because these services interact with one another over a network, forming a service mesh, which is quite different from a traditional thick-client implementation, where most of an application’s components reside on a single machine.
This is where Istio steps in and attempts to fill some of these gaps. Backed by Google and IBM, Istio (“sail” in Greek) is an open platform to connect, manage and secure microservices. At the heart of Istio is the Envoy proxy, originally developed by Lyft, which handles service discovery, load balancing and traffic routing.
If we put ourselves in the shoes of a developer who has to decompose an application into a set of microservices, Greene’s comments about disrupting in a non-disruptive way start to make a lot more sense. We also begin to see how tightly applications will now be tied to the networks they run on. One of the standard criticisms of microservice architectures is that excessive latency will ultimately kill performance.
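To make the latency criticism concrete, here is a minimal, purely illustrative Python sketch (the service counts and millisecond figures are hypothetical, not measurements) of how per-hop network overhead compounds when work that once lived in a single process is split across a chain of services:

```python
# Illustrative only: compare end-to-end latency of a monolithic call
# with a chain of microservice hops, each adding network overhead.

def chained_latency(hops: int, service_ms: float, network_ms: float) -> float:
    """Total latency when each hop adds service time plus a network round trip."""
    return hops * (service_ms + network_ms)

# Hypothetical numbers: 50 ms of total application work, and 3 ms of
# network round-trip overhead per hop.
monolith = chained_latency(1, 50.0, 3.0)  # all work in one process, one hop
mesh = chained_latency(5, 10.0, 3.0)      # same work split across 5 services

print(f"monolith: {monolith:.0f} ms")        # 53 ms
print(f"microservices: {mesh:.0f} ms")       # 65 ms
```

The arithmetic is trivial, but it shows why the network a mesh runs on is no longer an implementation detail: every decomposition step multiplies the number of round trips a request pays for.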
Platforms like Istio will offer developers some solace in that they provide features like traffic routing and steering, as well as visibility into metrics such as latency, but that doesn’t solve the underlying problem of a slow network, or more generally, a slow cloud. Essentially, application performance is now joined at the hip with network infrastructure performance, and the two are becoming more intertwined than ever before.
Successful Transition to Cloud-Native Applications
And this is where companies such as Spirent step in, ensuring network infrastructure performance through tools that measure throughput, loss and latency by simulating network traffic with realism and precision. Similarly, Spirent helps verify cloud infrastructure performance by generating multi-dimensional workloads to improve the predictability and performance of an infrastructure, enable effective capacity planning, and provide bottleneck analysis.
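In the same spirit as the throughput, loss and latency testing described above, here is a minimal Python sketch of one of those measurements: timing a TCP handshake to estimate round-trip latency to a service. This is a toy illustration, not how a commercial test tool works; the endpoint in the usage comment is hypothetical, and real traffic generators rely on precise (often hardware-assisted) timestamping:

```python
# A minimal sketch: estimate round-trip latency by timing a TCP handshake.
# Real network test equipment uses calibrated, high-precision timestamping.
import socket
import time

def tcp_connect_latency_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the time in milliseconds to complete a TCP connection to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # the handshake completing is the event we are timing
    return (time.perf_counter() - start) * 1000.0

# Example usage (hypothetical endpoint; substitute a service you control):
# print(f"{tcp_connect_latency_ms('example.com', 80):.1f} ms")
```

Repeating such a probe over time, and across the paths a service mesh actually uses, is what turns “the network feels slow” into a number you can plan capacity around.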
Walking around the Orange County Convention Center, it was hard not to notice the signs around the ponds warning about alligators and snakes. While Spirent cannot protect you from these exotic but very real dangers, we can help you avoid getting bitten by poor network performance by enabling a successful transition to cloud-native applications.