5 Big Data Trends Impacting Financial Institutions in 2016

The adoption of industry standards and more mature platforms will shift big data's focus from IT-driven infrastructure projects to business-driven data solutions. Those who adopt big data strategies early and aggressively will realize operational efficiencies and top-line growth.

2015 was a pivotal year for big data in financial services. Most large organizations have realized that the information they capture in the normal course of business has enormous strategic and competitive value. And the value of this data is becoming even more important as financial technology startups threaten to disrupt established players.

Many large organizations have launched initiatives to capitalize on the value of their data. However, these initiatives have exposed new questions and identified new challenges, both business and technical, that must be overcome.

For example, what are the right business problems to address with big data? How do you align data goals and operations across business units to maximize business value? How do you aggregate data while preserving security and privacy? And what processes, organization and tools do you need to implement to deliver effective solutions?

In 2016, we will see solutions to many of these challenges and the emergence of powerfully differentiated strategies from organizations that leverage their big data assets. Here are some key developments to look for in 2016.

1. The Emergence of Powerful Big Data Use Cases

One of the challenges with big data solution adoption has been a disconnect between business and IT. In many instances, IT has led the way, building out a big data infrastructure and adopting a myriad of new tools, often without the context of a specific business problem. The result is frequently a solution looking for a problem to solve.

Smarter organizations have taken a different approach, building solutions to specific business problems or building a data-as-a-service offering that gives business users the flexibility to select the tools they need to solve their specific problems. In 2016, we will see much more of both approaches.

Some of the key use cases driving big data adoption include compliance, regulatory risk reporting, cyber security and trade surveillance. In 2016, we will see increased interest in revenue-generating use cases such as customer 360.

Manual data quality processes are the weak link in financial data governance, with many organizations still throwing people and spreadsheets at the problem. In the new year, we will also see big data solutions applied to automating these processes.
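
One way to move beyond spreadsheets is to encode quality rules as repeatable, automated checks that run against the data itself. Here is a minimal sketch of the idea in Python with pandas; the column names, rules and thresholds are illustrative assumptions, not taken from any particular platform.

```python
import pandas as pd

def check_trades(trades: pd.DataFrame) -> pd.DataFrame:
    """Return the rows that fail basic completeness and validity rules."""
    issues = pd.DataFrame(index=trades.index)
    issues["missing_counterparty"] = trades["counterparty_id"].isna()
    issues["nonpositive_notional"] = trades["notional"] <= 0
    issues["duplicate_trade_id"] = trades.duplicated("trade_id", keep=False)
    return trades[issues.any(axis=1)]

# Illustrative data: one missing counterparty, one duplicate ID, one bad notional.
trades = pd.DataFrame({
    "trade_id": ["T1", "T2", "T2", "T4"],
    "counterparty_id": ["C9", None, "C3", "C5"],
    "notional": [1_000_000, 250_000, 250_000, -50_000],
})
print(check_trades(trades))  # flags both T2 rows and the T4 row
```

Once rules live in code rather than spreadsheets, they can run on every data load, and exceptions become a measurable, auditable queue rather than ad hoc manual work.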

Big data as a service offers even more opportunity for business value. Why? Because it provides a single source of all enterprise data, with documented quality and provenance, and enables business users to select the data sets and tools necessary to solve their specific business problems.

This approach is both operationally efficient and strategically agile, as it does not predetermine how the data will be used.

2. The Smart (Semantic) Data Lake

In 2015, we saw the emergence of the data lake — a single store for all enterprise data characterized by the ability to collect vast amounts of data in its native, untransformed format at a very low cost.

The data lake offers much promise but it also has limitations. Cataloging data sources, harmonizing disparate data and adding meaning to the data continue to be challenging for many organizations.

Emerging vendors are attempting to solve pieces of this problem, but few offer the holistic, end-to-end solution that semantic technologies promise.
For example, semantic technology, based on open industry standards from the W3C, offers a standard way to describe and harmonize data from any source, structured or unstructured, using common, business-friendly models.
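
To make that concrete, here is a minimal sketch using Python and the open-source rdflib library, which implements the W3C RDF standards. The "fin" vocabulary is a made-up example namespace (not FIBO), and the source-field mappings are assumptions for illustration only.

```python
from rdflib import RDF, Graph, Literal, Namespace

FIN = Namespace("http://example.com/fin#")  # assumed common business model
g = Graph()

# System A calls the field "cust_name"; system B calls it "client_nm".
# Both are loaded into the same business concept: a Customer with a legal name.
for source_id, name in [("A-1001", "Acme Corp"), ("B-77", "Globex Ltd")]:
    customer = FIN[f"customer-{source_id}"]
    g.add((customer, RDF.type, FIN.Customer))
    g.add((customer, FIN.legalName, Literal(name)))

# Serialize the harmonized data in Turtle, a standard RDF text format.
print(g.serialize(format="turtle"))
```

The point of the standards-based model is that the mapping from source systems happens once, and everything downstream works in business terms.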

Smart data lake tools leverage the power of semantic technologies on top of big data tools like Hadoop HDFS and Apache Spark. By delivering a massively parallel, in-memory graph database that supports semantic standards, these tools let companies overcome one of the long-standing challenges with semantic technology: performance at scale. It is now possible to run interactive graph queries across enterprise data sets of tens of billions of triples.

These technologies also allow organizations to leverage industry-standard models like the Financial Industry Business Ontology (FIBO) from the EDM Council. When combined with enterprise-scale semantic tools to operationalize the model, smart data lakes provide a powerful path forward for the industry to benefit from its data assets.

3. Democratization of Data Access

Smart data lake tools also solve another challenge with data lakes: end-user access. Most data lake solutions require manual coding to transform and prepare data for consumption by BI tools. With a smart data lake, the semantic models used to add meaning to the data can also be used to provide critical end-user capabilities such as data cataloging, data meaning, data provenance and self-service analytics.

Data described by semantic models does not presuppose the queries and analytics it needs to support. The semantic descriptions enable end users to find the data they need and to query it in business terms, without any coding.
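
For illustration, this is the kind of standards-based SPARQL query a self-service tool might generate behind a business-terms interface, so the end user never writes it by hand. The model and data are the same made-up examples as in the earlier sketch.

```python
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix fin: <http://example.com/fin#> .
    fin:cust-1 a fin:Customer ; fin:legalName "Acme Corp" .
    fin:cust-2 a fin:Customer ; fin:legalName "Globex Ltd" .
""", format="turtle")

# "Show me every customer and their legal name," phrased in the model's
# business terms rather than in source-system column names.
for customer, name in g.query("""
    PREFIX fin: <http://example.com/fin#>
    SELECT ?customer ?name
    WHERE { ?customer a fin:Customer ; fin:legalName ?name . }
"""):
    print(customer, name)
```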

This democratization will extend access to enterprise data beyond a few data scientists to a much larger population of business analysts.

4. Broad Deployment of Big Data Solutions to Mid-Sized Organizations

Thus far, the complexity and immaturity of the tools required to implement big data solutions have kept them mainly in the domain of large, technically sophisticated organizations. There is also a perception that “big” is the most important dimension of big data. However, variety is an equally important dimension, and organizations of all sizes have variety in their data.

The adoption of the cloud, along with the emergence of more packaged solutions like the smart data lake and the accompanying democratization of data access, will open up big data opportunities to a much broader group of mid-tier organizations in 2016.

Better-defined use cases, low-cost cloud-hosted solutions and a more mainstream skill-set requirement will remove many of the existing barriers for these firms to harness the value of the diverse data in their enterprises.


5. The Rise of Big Data Governance

A recent survey by the EDM Council highlighted that we are at an inflection point for data governance in the financial services industry. Most organizations have recognized the need for enterprise data governance and have set, or are setting, their strategy. BCBS 239 is the key driver for these programs, but we are only in the infancy of measuring their business value. In 2016, big data initiatives will create even more demand for data governance, as processes, controls and security currently applied in data silos will need to be enforced on the shared enterprise data lake.

Emerging standards like The Data Management Capability Assessment Model (DCAM) from the EDM Council will enable competitive benchmarking and the adoption of standardized data management practices. In the EDMC’s own words, “The Data Management Capability Assessment Model was created by the EDM Council to represent the intersection of data management best practice and the reality of financial services operations. It documents 37 essential capabilities and 115 sub-capabilities associated with the development of a sustainable data management program.”

The EDM Council recently conducted a self-assessment benchmark across the financial services industry, and the results document and quantify what we see in practice. The industry needs standardized, objective measures of data quality. These will level the competitive playing field, identify priority issues and, in turn, put pressure on automating error-prone, limited-scale manual processes.

The good news is that harmonized, high-quality, well-governed data doesn’t just benefit regulatory compliance. It also enables customer-focused use cases that increase competitiveness in an industry challenged by fast-moving, disruptive new entrants.

Marty Loughlin is Vice President of Financial Services at Cambridge Semantics Inc. Prior to joining Cambridge Semantics, Marty was the managing director for EMC’s consulting business in Boston. His 25-year career has focused on helping clients leverage transformative technologies to drive business results, most recently in cloud and big data. Prior to joining EMC in 2005, Marty was co-founder and COO of Granitar, a web consultancy. Marty holds a bachelor’s degree in English from Dublin City University and a high-tech MBA from Northeastern University.
