John Duigenan, the global chief technology officer for Financial Services, has spent years as an IBM distinguished engineer helping some of the largest banks in the world solve their toughest business challenges. These days that means, among other things, implementing hybrid cloud strategies to conquer technical debt and address security and compliance issues. Recently, Industrious sat down with Duigenan to discuss one of his latest obsessions: helping banks make sense of the vast amounts of data they control, and how to wring the most value from it.

So we wanted to talk about data fabric, since you’d mentioned that’s something you’ve been excited about lately. But it sounds like what you’re really excited about right now is your new bank account. Not only is that not something we’re used to hearing, but the secret to your new favorite account may just be its data fabric?


Quite right. It’s a new consumer offering from a legacy bank. They weren’t in this space before, so when they approached consumer banking, they did so intentionally and said, I’d better know my clients. I’d better build my data systems so that they know all the different products each client has, all the ones they might need, their past preferences and purchases, their profile.

So even on day one, when John opens an account and drops in a few thousand dollars, or a few hundred thousand, we the bank know exactly what to do. We know how to take care of his money, where to invest it, what offers to provide. And that profile and those offers are all automated and customized. Everything my new bank is doing starts with AI.

Now compare that to my old bank, which I still value, and which is still working on modernizing and providing new offerings. But in their system, they still classify me as a student, despite their full knowledge of my age and money flows. And because of that oversight, they have never tried to upgrade my relationship tier, which would improve my experience and their revenue.

But it’s not really fair to compare banks, is it, especially those using legacy systems versus those running entirely new platforms?

Banks had better be ready for those kinds of comparisons—consumers make them every day. Let me give you another example: my credit cards. One consistently declines my transactions as fraudulent, which strongly discourages me from using that card. The other, which is very much an established brand, knows that I live for beautiful dining and travel experiences, which they proactively help arrange for me, and transactions are only ever declined when they are indeed fraudulent.

So with the right back end, any company can do this. And really, they have to. We’re about to experience the biggest transfer of wealth in history, from the baby boomers to their children. And they’re not going to approach their banking relationships like their parents did. They’ll all be asking, Why should I stay? Why should these relationships stay with the institutions my parents used? I don’t trust them, they don’t understand me and there are better options out there. If these established institutions don’t start meeting their clients’ needs in new ways, money’s going to flow out of them at high, high speed.

For banks to offer new services and experiences to customers, they need to understand those customers on a whole new level.

Many banks have been trying to do exactly that, as you’ve mentioned. So what gives?

It’s true, none of these firms is intentionally choosing to offer poor service. There are three main issues. First off, each of these companies one way or another grew through a series of acquisitions. So suddenly they’re dealing with data systems that were never integrated. Some of these acquisitions were made 30, 40 years ago and the data’s still not integrated. It’s why you have regulators handing out hundreds of millions of dollars in fines for failures in risk management and data governance. It’s nearly impossible when you have these data silos upon silos upon silos, to the point where viewing and using the data is virtually impossible outside of a very specific application or program.

Secondly, over time, each of these places said, I need to do analytics. I know I need to do some kind of analytical function on my clients. So what they did was spin up all of these data warehouses and data marts and data lakes using all different kinds of software and hardware. These things just multiplied like rabbits, and they’re everywhere. Each one of them does its own thing, and its own thing is useful, but it’s not widely used across the organization.

Now there are hundreds, if not thousands, of these analytics systems, disconnected from the rest of the organization and containing hundreds of copies of the same data. So you wind up with data duplicated across multiple, disparate, disconnected systems. Not only are you duplicating effort, you’re not connecting any of it, and you’re introducing risk. And that’s how you wind up where we started: misidentifying customers, mishandling them, and ultimately ill serving them.

The third issue is that when data has been multiplied across many processing destinations, both on- and off-premises, it can get so dispersed, it’s basically scattered to the winds. This creates a scenario where the data is basically lost to the rest of the organization, which may lose access to it entirely, and worse still, people who shouldn’t have access may gain it, whether those are bad actors inside or outside the organization. It creates some very high risks, both in terms of data breaches and policy controls.

And this is where our data fabric comes in, to stitch it all together?

Exactly. Some people also call it a data mesh, but the data fabric is the network that connects all those sources of data and makes them usable and accessible.

Developers especially value the freedom a data fabric offers to easily plug information into new projects.

So how does a data fabric work?

The data fabric, which we offer here at IBM as part of our Cloud Pak for Data, is going to do four primary things for the bank:

it’s going to connect and provide virtualized access,
it’s going to automate data discovery,
it’s going to build a data catalog,
it’s going to create an automated data pipeline.

Each of those is a crucial step. The fabric helps me connect to data irrespective of what that data is or where it sits, whether in a traditional data center or a hybrid cloud environment, in a conventional database or as unstructured files. And because access is virtualized, developers no longer need to write specific code to reach each source.
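To make that concrete, here is a minimal sketch in Python of what virtualized access can look like from a developer's point of view. It assumes the ibm_db_sa SQLAlchemy dialect and pandas are installed; the endpoint, credentials, schemas and table names are hypothetical stand-ins for whatever federation layer the fabric provides. The point is simply that one ordinary query spans sources without source-specific code.

```python
# Hypothetical example: one SQL query against a data virtualization layer
# that federates a core-banking customer table and a cloud data lake.
# The DSN, schema and table names are illustrative, not real endpoints.
import pandas as pd
from sqlalchemy import create_engine

# A single connection to the virtualization layer, regardless of where
# the underlying data physically lives.
engine = create_engine(
    "ibm_db_sa://analyst:secret@data-virtualization.example.com:50000/BLUDB"
)

# The join below crosses two "virtual" tables that may be backed by
# entirely different systems (core banking vs. a cloud data lake).
query = """
    SELECT c.customer_id,
           c.segment,
           SUM(t.amount) AS monthly_inflow
    FROM   CORE_BANKING.CUSTOMERS  AS c
    JOIN   CLOUD_LAKE.TRANSACTIONS AS t
           ON t.customer_id = c.customer_id
    WHERE  t.posted_date >= CURRENT_DATE - 30 DAYS
    GROUP  BY c.customer_id, c.segment
"""

# The developer writes ordinary SQL; no mainframe- or lake-specific code.
monthly_inflows = pd.read_sql(query, engine)
print(monthly_inflows.head())
```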

At the same time, cloud-based AI is helping to identify all the data out there and translate the existing data models and their metadata into useful information. And that leads to the data catalog. If someone in the organization needs to know whether there’s a customer propensity database, for example, the catalog is how they find out that it already exists and put it to use. If we can find something in the catalog with simple queries and natural language, that saves enormous work and resources and immediately plugs the data into whatever new use I was seeking it for.
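As an illustration, here is a small Python sketch of the kind of catalog lookup described above. The catalog, its entries and the keyword-matching search are all made up for this example; a real data fabric would populate the entries via automated discovery and support far richer, natural-language search.

```python
# Illustrative only: a toy metadata catalog and a simple keyword search.
from dataclasses import dataclass, field

@dataclass
class CatalogEntry:
    name: str                          # logical name of the data asset
    location: str                      # where the asset physically lives
    description: str                   # business-friendly description
    tags: list = field(default_factory=list)

catalog = [
    CatalogEntry(
        name="customer_propensity",
        location="CLOUD_LAKE.ANALYTICS.CUSTOMER_PROPENSITY",
        description="Likelihood scores for customers to buy each product",
        tags=["customers", "propensity", "marketing"],
    ),
    CatalogEntry(
        name="transactions_daily",
        location="CORE_BANKING.TRANSACTIONS",
        description="Daily posted transactions across all accounts",
        tags=["transactions", "payments"],
    ),
]

def search(query: str) -> list:
    """Return catalog entries whose name, description or tags match the query."""
    terms = query.lower().split()
    return [
        entry for entry in catalog
        if any(
            term in entry.name.lower()
            or term in entry.description.lower()
            or term in " ".join(entry.tags).lower()
            for term in terms
        )
    ]

# "Is there already a customer propensity dataset I can reuse?"
for hit in search("customer propensity"):
    print(hit.name, "->", hit.location)
```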

And then naturally, there’s more and more data coming in every day, both from our new data-enabled offerings and from a million other places. All of that has to be connected, which you do with the data pipeline, and that pipeline has to be automated to keep up with the sheer volume of new information out there.
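For a sense of what automation can mean here, below is a minimal Python sketch of a pipeline step that picks up newly arrived files, profiles them and registers them in the catalog so they are immediately findable. The directory layout, the profiling and the register_in_catalog helper are hypothetical; in practice the fabric would schedule and orchestrate this work itself.

```python
# A toy ingestion step: pick up new CSV drops, profile them, and register
# them in the catalog. Paths and register_in_catalog are hypothetical.
# Writing Parquet requires a parquet engine such as pyarrow.
from pathlib import Path
import pandas as pd

LANDING_ZONE = Path("/data/landing")    # where new files arrive (illustrative)
PROCESSED = Path("/data/processed")     # where ingested files are stored

def register_in_catalog(name: str, location: str, columns: list) -> None:
    # Stand-in for a catalog API call; here we just print the metadata.
    print(f"cataloged {name} at {location} with columns {columns}")

def ingest_new_files() -> None:
    PROCESSED.mkdir(parents=True, exist_ok=True)
    for path in LANDING_ZONE.glob("*.csv"):
        df = pd.read_csv(path)

        # Minimal profiling: capture column names and types as catalog metadata.
        columns = [f"{col}:{dtype}" for col, dtype in df.dtypes.astype(str).items()]

        # Store in an analytics-friendly format and register the result.
        target = (PROCESSED / path.name).with_suffix(".parquet")
        df.to_parquet(target)
        register_in_catalog(path.stem, str(target), columns)

        path.unlink()   # remove the raw drop once it has been ingested

if __name__ == "__main__":
    ingest_new_files()  # in practice this runs on a schedule or event trigger
```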

It almost sounds easy when you put it that way. Or it should be. Because at the end of the day, banks want to be in the financial services business, not the data management business.

I should say so. At the same time, in this era, you can’t have one without the other. Just ask my new bank. For them, data was never an afterthought—it was the starting point of their whole business. And that’s where IBM is always ready to help, solving these kinds of complex business problems.

