Mass-market network functions virtualization (NFV) is becoming more of a reality day by day. It is an initiative to virtualize network services that have traditionally run on proprietary, dedicated hardware. With NFV, functions such as routing, load balancing and firewalls are packaged as virtual machines (VMs) running on commodity hardware. The future is bright, even if it is more complicated than we would like. From the implementations so far, here are a few lessons to take away, whatever your job role.
1. CXO
The big change for the CXO is the shift from CAPEX to OPEX. Cloud-based, on-demand, virtualized infrastructure is an OPEX play, while the traditional service provider model has been built on capital-intensive, hardware-oriented expenditure.
In our experience, this shift has not been fully reconciled within many service providers. While it may seem obvious to businesses that operate SaaS business models or the many businesses that have come to rely on SaaS, this is quite a major shift in thinking and processes for service providers. Do not underestimate how challenging it is to change “the way we’ve always done it”. It can be just as complicated as any technical challenge.
As time goes by, we expect 30% of the OPEX to go to NFV infrastructure and 70% to go to VNFs/VNFCs. This is the first lesson: the shift towards OPEX needs to get underway, or it will become an institutional barrier to successful NFV deployments.
2. CFO
From the CFO’s perspective, the lesson concerns ROI from NFV infrastructure. The traditional mindset has been ROI from physical infrastructure. A vast simplification is that if we add X physical infrastructure, we can deliver Y revenue. But this thinking is not sufficiently layered for an NFV world.
If the ROI calculation is based on physical infrastructure alone, NFV will do little better than break even. With NFV, the point is being able to do things better, rather than having better things. The true payback comes from changing processes as well as infrastructure.
3. Engineering
Engineering has quickly caught up to the need for new skillsets – there is no new lesson there. The real lesson is that you don’t actually need to be at the bleeding edge, as many assume.
This is fairly typical organizational thinking. When people within an organization realize that they are behind on something, the human ego often kicks in with something like “We’re terrible at this, but we should be the world’s most advanced at this!” And this thinking can filter down into KPIs and diktats. New skills ARE needed for NFVI, but you don’t have to be on the bleeding edge. Out in the real world, we find that most OpenStack operations are based on the older Mitaka release, rather than the newer Pike. And that’s okay.
4. Product Management
For product management, the lesson is about making a choice with eyes open. Across the implementations we’ve been involved in, the split has been 60% OpenStack and 40% VMware.
The decision here is much more expansive than can fit into a bullet point. But at its core, it’s an operational decision and a cost decision. To my mind, while being careful not to wade into the religious fervour of smartphone OS preferences, it has a parallel in the choice between iOS and Android, especially as it stood a few years ago.
Do we go for a more costly but mature system with advanced tools or a flexible but maturing open environment? That is a big decision for any Engineering team – and one that needs to be made understanding the full, long-term implications of both paths.
5. Ops
As for Ops, their lesson is both challenging and optimistic. Virtualization can and should have a dramatic, positive impact on provisioning speed. We consider that Ops should be aiming for 50%+ savings on process time for service setup and provisioning.
This benchmark will ensure that the realities of process improvement will start to positively impact the ROI from virtualization. Plus, it is not so ambitious that it will have a negative impact on the Ops team.
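As a rough illustration, the 50%+ benchmark is a simple percentage-reduction check. The baseline and virtualized figures below are hypothetical examples, not measurements from any deployment:

```python
# Back-of-the-envelope check of the 50%+ provisioning-time benchmark.
# The hour figures here are hypothetical, purely for illustration.

def process_time_saving(baseline_hours: float, virtualized_hours: float) -> float:
    """Return the fractional reduction in process time versus the baseline."""
    if baseline_hours <= 0:
        raise ValueError("baseline must be positive")
    return (baseline_hours - virtualized_hours) / baseline_hours

TARGET = 0.50  # the 50%+ savings benchmark suggested for Ops

baseline = 40.0      # hypothetical: hours to provision a service on physical kit
virtualized = 16.0   # hypothetical: hours after NFV-based automation

saving = process_time_saving(baseline, virtualized)
print(f"Process time saving: {saving:.0%}")         # 60%
print(f"Meets 50%+ benchmark: {saving >= TARGET}")  # True
```

Trivial arithmetic, but putting the benchmark in these terms makes it easy to track per service type and to feed directly into the ROI discussion above.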
6. CTO

To borrow from “Top Gun”, the CTO is going to feel the need, the need for speed. Services and cloud connectivity have to be low latency. It will not be acceptable for latency to increase within an NFV environment.
In the telecom world, where five-nines availability has been a mantra for decades, added latency certainly won’t be accepted in infrastructure that is meant to be an upgrade. The target for 5G environments is line speed. As an example, at Openwave Mobility our vGiLAN functions work at sub-5 ms latency, and we are constantly working to improve on that.
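It is worth remembering just how little slack five nines leaves. A quick sketch of the standard availability arithmetic shows why any instability introduced by an NFV layer is immediately visible:

```python
# How much downtime per year does a given availability actually allow?
# Standard availability arithmetic; 365-day year, leap years ignored.

MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600

def allowed_downtime_minutes(availability: float) -> float:
    """Minutes of downtime per year permitted by a given availability level."""
    return (1.0 - availability) * MINUTES_PER_YEAR

for nines, availability in [(3, 0.999), (4, 0.9999), (5, 0.99999)]:
    print(f"{nines} nines: {allowed_downtime_minutes(availability):.2f} min/year")
```

Five nines works out to roughly 5.26 minutes of downtime per year, which is the budget an NFV environment has to fit within if it is to be accepted as an upgrade.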
7. Customer Experience
And let’s not forget who telcos are doing this for: the people we all ultimately serve, the people who pay the bills – the subscribers themselves.
For the people whose role it is to ensure and improve the customer experience, their experience of NFV will be different again. For people in these roles, our advice would be to concentrate on three clear metrics. These will likely differ subtly for every operator, but examples include improved throughput, reduced RAN congestion, or improved Quality of Experience.
Whatever the metrics are, they need to be measurable and manageable. And from their perspective, an NFVI has to serve them, so they can better serve their subscribers.