Trumpeting A New Beginning In Financial Services


A multi-national bank plans to move its “production” global HR data warehouse to a new technology platform by Q4 2016

A leading wealth management company is building an Enterprise Reporting Data Hub – the first use case is already live “in production”

A large bank wants to improve operational efficiency in its anti-money laundering (AML) processing by using external data for entity resolution

Corporate IT in a bank wants to move its old data to a low-TCO archival mechanism while retaining rapid access to that data

All of the above are conversations that we are currently having with our clients. Different financial services lines of business and different use cases, but with a few common technology themes running across them. They are all about data – handling large volumes of mostly structured data, in multiple formats, from multiple data sources. Beyond the data handling itself, they all need to push data out for consumption by various users via reports, dashboards and other functionality.

We saw it, but did not know it was headed this way!

I have been in FinTech for more than 15 years now. I find it incredibly complex and beautiful, and I never cease to be amazed by the pace of innovation in this area. But like many of us, I can also claim to be a little jaded by, and skeptical of, new technology hype cycles.

A few of my colleagues and I got our first introduction to big data in late 2014. Someone in my then organization (an IT services firm) was conducting research into big data (there was no ongoing project, or even a POC, in our sights then), and we asked him to give us the lowdown on the topic. Most discourse on big data at that time centered on the ‘big’ part – the humongous volume of data potentially available that was not being used productively. Equal emphasis was laid on the ‘unstructured’ nature of most of that data – tweets, blog posts, images etc. – and how it could be handled at scale and made to contribute to analysis, something that had not really been possible before.

Most of our discussion session centered on big data use cases, as opposed to the toolsets made available by big data technology (mostly Hadoop). We started with the history of big data, then moved on to the usual suspects in terms of use cases (tweet analytics, that famous telecom use case) and generally struggled to relate the discussion to the financial services context.

We talked about how financial institutions (banks, custodians, brokerages etc.) were already handling massive volumes of structured data at scale using ‘traditional’ technology. We ended that session by saying that big data technology was probably not going to be very relevant to us anytime soon. Talking to peers in other organizations led me to believe that they had similar introductions to big data around that time too, and had reached similar conclusions.

In hindsight, our biggest mistake then was not asking for a tour of the big data toolset and relating it to what it could do for structured data that was not necessarily very ‘big’.

What work do we give that elephant?

There was another common theme in the examples that I had listed earlier. And you have most likely guessed what it is – all of these use cases are being executed using big data technology.

What a difference a year can make!

Big data technology is coming out of labs and proofs-of-concept, and becoming mainstream in financial services. The use cases under design and implementation today relate mostly to handling structured data, and center on providing an infrastructure for analytics and reporting – i.e., handling data and requests that are not mission critical and not real time from an operational standpoint.

In our conversations with clients and prospects, we have observed multiple drivers behind this adoption:

– cost (license, hardware, implementation – all important to different degrees for different organizations)

– development efficiency in some use cases (e.g. an analytics sandbox), and

– general industry buzz / marketing hype (!) around big data technology.

One surprising (though common) observation across these conversations is that these big data product and purchase decisions are often driven by the CTO’s office in the organization, and might be taken before the architecture and implementation teams have had a chance to orient themselves to the big data technology paradigm. Almost a “build it and they shall come” approach to use cases.

Let us see if we can make our saddle fit

Most of the executives responsible for architectural and implementation leadership at financial services firms come from a structured data technology background. Data management, to them, means RDBMS, ETL and ‘traditional’ data warehouses. Big data entered their conversations less than a year ago. Given this, it is not surprising that financial services technology executives are talking about the need to rapidly come up to speed on all things big data.

What has not changed in architectural and planning discussions, however, is the evaluation of what can still be reused from current technology investments when big data technology is brought into the stack – from both the product / platform and the skillset perspectives.

The implementations that I listed earlier had us putting together technology teams built around SQL and Java skills for our clients!

Big data technology vendors seem to be increasingly supporting these trends in their products too, based on the voice of their clients (i.e. banks and others). This approach helps executives achieve two critical objectives: i) continuing to leverage their existing investments in technology, and ii) deriving the benefits of big data technology in a context that they are familiar with.
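
To make this concrete, here is a minimal sketch (the table and column names are hypothetical) of the kind of everyday reporting SQL a warehouse team already writes, and which SQL-on-Hadoop engines such as Hive aim to run largely unchanged:

    -- Routine reporting query: trade volumes per desk for the year to date.
    -- Written for a traditional RDBMS, the same statement runs largely
    -- unchanged on a SQL-on-Hadoop engine such as Hive.
    SELECT desk_id,
           COUNT(*)      AS trade_count,
           SUM(notional) AS total_notional
    FROM   trades
    WHERE  trade_date >= '2016-01-01'
    GROUP  BY desk_id
    ORDER  BY total_notional DESC;

The point is not the query itself, but that the team that wrote it does not have to learn MapReduce to keep writing it.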

This ride should get much better after the first few miles

However, like any other new technology, adoption of big data technology has not been completely smooth.

Big data fintech companies themselves are fairly young (often less than 2 years old) and typically have small teams – we have seen them struggle to provide implementation support and training to their early clients. Documentation is also difficult to come by.

Also, while these products claim to support existing ways of doing things (like SQL), such claims can often be validated only when specific scenarios are encountered during actual usage. One product that we are working with in a client implementation claims to support ANSI SQL, but we discovered that it does not support all SQL constructs. This means that certain workarounds have to be adopted in order to get the desired output.
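
As a hypothetical illustration (the table and column names are made up, and this is not necessarily the construct our product stumbled on), correlated subqueries are a common gap in early SQL-on-Hadoop engines, and the typical workaround is to rewrite them as a join against a pre-aggregated derived table:

    -- ANSI SQL: accounts whose balance exceeds their branch average.
    -- A correlated subquery like this is standard, but some engines reject it.
    SELECT a.account_id, a.balance
    FROM   accounts a
    WHERE  a.balance > (SELECT AVG(b.balance)
                        FROM   accounts b
                        WHERE  b.branch_id = a.branch_id);

    -- Workaround: pre-aggregate per branch and join, which the same
    -- engines typically do accept.
    SELECT a.account_id, a.balance
    FROM   accounts a
    JOIN  (SELECT branch_id, AVG(balance) AS avg_balance
           FROM   accounts
           GROUP  BY branch_id) br
      ON   a.branch_id = br.branch_id
    WHERE  a.balance > br.avg_balance;

The two statements return the same rows; the difference is only in which shape of SQL the engine can plan and execute.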

Not surprisingly, IT teams in client organizations need to be prepared for a steep learning curve, coupled with longish implementation timeframes in the short term. Some of the above considerations also have a bearing on what the implementation team should look like – our observation is that staffing the team with people who have sound technology fundamentals and a willingness to try new things is an important determinant of success. Equally important is expectation management of internal clients (i.e. the business teams within the organization) – the plan should reflect the fact that the first few requests are likely to take longer to deliver, before a steady-state pace is achieved.

One can run, but not hide 

The increasing adoption of big data technology in financial services is here to stay. The next few years in financial technology are likely to be defined by how well organizations coax that yellow elephant to do their bidding.