Our guest on this episode is Vishnu V Ram, Vice President of Engineering at Credit Karma. Tune in to hear how he thinks about the powerful AI recommendation systems Credit Karma builds to democratize access to capital – and the importance of transparency and data trust in AI.

Guest: Vishnu V Ram, VP of Engineering at Credit Karma

Vishnu has been instrumental in building and scaling Credit Karma's machine learning infrastructure, powering personalized recommendations at scale in pursuit of helping its nearly 130 million members make financial progress. In more than eight years with Credit Karma, Vishnu has led the development strategy and implementation of the company's deep-learning-based recommendation system, built on an internally developed machine learning platform that helps Credit Karma transform and manage data at scale – his team runs ~58 billion model predictions a day. Today, Vishnu oversees Credit Karma's newest Core engineering organization, responsible for helping the Credit Karma product reach a more cohesive, personalized and intelligent state in pursuit of Credit Karma members' goals. Prior to joining Credit Karma, Vishnu held CTO roles at Nykaa and Games 24×7.

Subscribe and listen to the Rev-Tech Revolution podcast series on:

Spotify | Apple Podcasts | iHeartRadio

Prefer reading over listening? We've got you covered!

Introduction (00:09): Welcome to the Rev-Tech Revolution podcast. Today’s episode is hosted by Betsy Peters, VP of Marketing and Product Strategy at Riva. She is joined by Vishnu V. Ram, Vice President of Engineering at Credit Karma. They talk about how Credit Karma created powerful AI recommendation systems, how Credit Karma thinks about data trust, and the importance of transparency in AI. All of this and more on the Rev-Tech Revolution podcast. 

Betsy (00:40): Vishnu, welcome to the Rev-Tech Revolution podcast. It's a thrill to have you here. We've been jockeying schedules for a while, so we're glad to have you.

Vishnu (00:50): Thanks a lot for inviting me, Betsy. It’s great to be here as well. 

Betsy (00:53): Thanks for taking the time. So, I would love to start out by understanding a little bit more about you. So, you’ve been with Credit Karma for nearly a decade. That’s a long stretch in the tech industry. So, tell us about Credit Karma, your role there and what keeps you sticking around. 

Vishnu (01:14): Definitely a long stint. It's been the longest stint in my career. When I joined Credit Karma in 2014, the company already had 25 million users. We were around a couple of hundred people, and the company was already hitting on all cylinders in terms of delivering on our mission for our members, and our mission for our members has been something that attracts a lot of people to the company. Interestingly, I won't say that was the thing that attracted me to Credit Karma at the time, but that is what has really kept me here for such a long time and is probably going to keep me here for much longer. I was listening to a couple of other podcasts on your channel, and I heard someone talking about international students coming over here and establishing credit and building their financial status within the country.

I was an international student when I came to the country. I applied for all kinds of random credit cards because they were giving out pizza or t-shirts, and got a bunch of declines in the first few months. Then I said, I'm going to wait it out. After that I applied for something and got it, and I was like, I don't know why I got declined in the first place and I don't know why I got accepted later. And that's just one group of users. I think what Credit Karma has been doing for the 15, 16 years it's been active is give every single American who's of age, who has a credit report and credit score with the bureaus, an opportunity to just look at where they are in their financial lives, figure out what they want to do and where they want to be, and see how a credit report and credit score is a set of tools that they can use for themselves and not a set of tools that gets used against them.

I think that was probably the biggest thing that I realized as I started working in the company. What attracted me to the company was the 25 million users, the scale at which it was operating, and obviously the founding team and the passion that the founding team has always had towards delivering on the mission. But what's kept me here is that the company has allowed me to be a significant part of delivering on the mission. Prior to Credit Karma, a lot of my background was in early stage startups, where I'm used to being in the middle of things and moving things on my own. Having the opportunity, in a company operating at that scale, 25 million users, to still be part of the team that is moving things forward has just been amazing.

Betsy (04:36): That’s great. And so for people who don’t know the mission of Credit Karma, and I’m just going to paraphrase here, please correct me, is it that concept that you are allowing people to use their credit score proactively for them instead of having other companies use it against them? Is that kind of how the mission boils down or how would you phrase that? 

Vishnu (05:00): The mission of the company is essentially to help every single person improve on their journey to financial progress, whatever that means to them for their phase of life, their stage, and what they care about with respect to their personal finances. That's really the mission. And like I said, these are all a set of tools. The way the rest of the world looks at you is by looking at some of the data that is part of your credit report, that is your credit score, and there might be other data points like your income and your employment status.

That's how the rest of the world looks at you, to decide whether to give you a place to rent, whether to give you a job, or whether to give you a line of credit to buy a home or a car or anything. So, it's more about making sure that we are allowing our users to get access to the things that they need so that they can have control over their own financial progress in the best way possible for their own needs and wants.

Betsy (06:10): That is much more eloquent than what I said, but I’ve heard in some of the podcasts you’ve been on that you call yourself a co-founder at heart and clearly part of the reason you’re able to stick there is the idea that you can be that at scale it sounds like, which is terrific. Do you have a good example of how that mentality has really changed your approach to leading at Credit Karma? 

Vishnu (06:36): Yeah, I think I would say when you're a co-founder at heart, you care about all aspects of where the company is and what it wants to do for its users and members, and you're not just caring about the things that your title says you care about. I'm in the engineering function, but I've actually been a co-founder, and when I was a co-founder, if there was a photo shoot or something happening and people just needed refreshments and there was no one around to do it, I said, okay, I'm going to go do that. I've done customer support at 1:00 AM, I've done customer support at 2:00 AM, because that's when I felt I was going to learn. If someone was sticking around on an app that I've built at 1:00 AM, well, they're trying to do something that they want to, and I can learn from them.

So, I think one aspect of it is to make sure that you are not allowing your title or your function to divert you away from doing what's really needed to help the company and help your users. The second part of it, like I already mentioned, is that you can learn from every single person in the company. Everyone brings a different set of perspectives, and if you can learn from those perspectives, then you are going to be able to use that to move the company forward and do a better job for our users. And a nice side effect of all of this is that you focus on what's good for you in the longer term and what's good for the company in the longer term. Some of the shorter-term pulls and pushes and pressures, you have an opportunity to look beyond, and focus on what's important for everyone in the longer term.

Betsy (08:51): That kind of holistic approach, with numerous criteria versus short-term results or even what's in your job description, I think is so critical to being a good leader and a good designer. So, I was interested to hear that you led the charge on the development of Credit Karma's recommendation systems. I would love to hear some stories about that in particular. At the end of the day, how much has been pure data and AI, and where does the human factor fit in?

Vishnu (09:29): We've gone through multiple iterations of our recommendation system internally. When I joined, the company already knew that the user's data was very, very valuable in helping identify the right set of financial products to put in front of the user. So, there was already some semblance of recommendations happening, but it was happening in different pockets in different ways. I think the first step for us when I came in was to make sure that we were able to centralize all of that into one single recommendation system that could then allow everybody in the company to use that system to drive all recommendations across various pages or sections of the app. Now, when you do that, just in the first phase, talking about year one, there were teams and individuals who had already invested in doing the best possible job they could in their own section, making the best possible recommendations they could think of.

So, if you do not sit with them and understand what they were trying to achieve and what they were trying to do well for the company and for the members, you are not going to be able to incorporate the best parts of what they were doing into the centralized recommender system. It's important to understand what they were trying to do and what their intentions were. Forget the implementation. If you just focus on the implementation, you're probably going to say, hey, they don't know what they're doing, but if you focus on what they were trying to do, what they intended to achieve, then you have an opportunity to make the centralized recommendation system stronger.

And just the act of sitting with them and understanding what they were trying to achieve also allows you to build a partnership with them. In that case, they're not going to look at you as a person who is reducing their impact. They're going to look at you as a partner, and when they have ideas, they're going to come back and talk to you and make your system better, make our system better, right?

Betsy (11:56): Tell me a little bit about the journey of creating this recommendation system. So, even from the very beginning, looking at the data sets that you had and the relative cleanliness of them and how much of a challenge was that for Credit Karma? Did you have your house in order in that regard? 

Vishnu (12:14): Yeah, one of our biggest and very, very early decisions, the bedrock of the company's foundation, was tight integration with the credit bureaus. What that means is that a lot of the important data our members need us to have, so that we can deliver accurate and relevant financial product recommendations to them, is based on credit bureau data, and the credit bureaus have invested decades of effort into collecting clean data and making sure the quality of that data meets high standards. Being able to operate on top of that allowed us to spend a lot of time, in the first few years I was here, making the right investments in the right systems as far as machine learning is concerned, and also investing quite a bit of time in our data pipelines for collecting behavioral data.

Because when you have the credit report data, you really only understand one part of the equation: if Vishnu were to apply for a particular financial product from a particular bank, what are the chances that Vishnu is going to get approved for that product by that bank? That's all we know. But by investing in the data pipelines to collect the behavioral data, we also get a sense of what Vishnu really wants. What does he want to achieve? Does he want to get a new car? Does he want to save for a home in California, or does he just want to make sure he's managing his debt appropriately? So, to get that information about what our users want and need, we needed to make that investment in what we call tracking data collection pipelines, and make that easier and easier, so that as new teams came in… The company kept growing.

So, when I joined, we were 200; now we are around 1,600 to 1,700. As the company keeps growing, there are new engineering teams coming in, and they're building new experiences for our users. When they build new experiences for our users, that's an opportunity to understand the user a little bit better. And we wanted to make sure that we did that in a way where, when new teams get added, they have an easy way of making sure they're collecting the data properly for us. But there's a lot more I can talk about: some of the other earlier systems we invested in, which continue to do a great job for the company, and I'm very, very proud of the teams that built them.
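The kind of tracking pipeline Vishnu describes, where new teams can easily emit well-formed behavioral events, can be sketched roughly like this. This is a minimal illustration only; the event fields, event types, and function names are assumptions, not Credit Karma's actual schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

# Hypothetical behavioral tracking event. All field names are
# illustrative assumptions, not a real production schema.
@dataclass
class TrackingEvent:
    user_id: str
    event_type: str   # e.g. "card_viewed", "offer_clicked"
    surface: str      # which page or section of the app emitted it
    timestamp: str

def make_event(user_id: str, event_type: str, surface: str) -> dict:
    """Validate and serialize an event before it enters the pipeline.

    Schema validation at the edge is what lets new teams add events
    without silently polluting the downstream data.
    """
    allowed = {"card_viewed", "offer_clicked", "score_checked"}
    if event_type not in allowed:
        raise ValueError(f"unknown event_type: {event_type}")
    event = TrackingEvent(
        user_id=user_id,
        event_type=event_type,
        surface=surface,
        timestamp=datetime.now(timezone.utc).isoformat(),
    )
    return asdict(event)

evt = make_event("user-123", "offer_clicked", "homepage")
print(evt["event_type"])  # offer_clicked
```

The design point is that validation happens where the event is created, so a new team's malformed events fail loudly instead of degrading the behavioral data the models are trained on.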

Betsy (15:17): I would love to hear one of your best stories about an early system that laid this foundation to take you from 600 to 1,600 or whatever your number of employees is now. 

Vishnu (15:32): Yeah, so 200 to 1,600. One of the earliest things we invested in was a model scoring system at scale. There are a lot of open source and cloud implementations available in the industry today, so someone starting now would just use something from one of the big cloud vendors or adopt something open source, but when we started, we didn't have that. We knew we had an amazing data science team, which was already building models that we knew could be very powerful if we were able to use them for our members appropriately, and we knew the lack of such a system was a big blocker. So, to enable our data science team to deliver that impact, we built a system that internally we call Prophecy. Essentially, it predicts: it takes in all the models that our data scientists have built, it takes in the user's data, and when users come to the app, it scores thousands of models for every single user, every single session.

And that helps drive our recommendation systems. This is an investment we made very, very early on because we knew there was a big gap; we were basically crippling our data science teams in terms of how we were operating. Once we deployed it in 2015, 2016, it unlocked the potential of our data scientists to impact our members' lives and the business. And we've continued to iterate on the system, we've continued to make it better, and it still operates. We do more than 50 billion predictions a day for our users. To be able to make that investment at the right time, and then make sure we had the right team to keep it going for 6, 7, 8 years, continuing to operate at such large scale, is something I'm going to hold in my head for a long time.
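The session-time scoring Vishnu describes, many models evaluated against one user's features in a single pass, can be sketched in miniature. This is an illustrative toy, not the actual Prophecy implementation; the model count, feature count, and the use of simple linear models are all assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a model store: one row of weights per model.
# Real systems would load trained models, batch requests, and cache.
n_models, n_features = 1000, 32
weights = rng.normal(size=(n_models, n_features))
biases = rng.normal(size=n_models)

def score_session(user_features: np.ndarray) -> np.ndarray:
    """Score every model against one user's feature vector at once.

    A single matrix-vector product evaluates all models in one pass,
    which is what makes per-session scoring of thousands of models
    tractable.
    """
    logits = weights @ user_features + biases
    return 1.0 / (1.0 + np.exp(-logits))  # sigmoid -> probability-like

user = rng.normal(size=n_features)
scores = score_session(user)
best = int(np.argmax(scores))  # e.g. the product model ranked highest
```

The key idea is vectorizing across models rather than looping, so the cost of scoring thousands of models per session is one linear-algebra call.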

Betsy (17:58): You should be very proud of all of that. Was there a hypothesis early on that new behavioral data nullified, something that was a surprise to you and your team, or any kind of aha moment you had when you added new data sources?

Vishnu (18:22): Again, this goes back to very, very early days. When you are a FinTech, when you're in the business of helping members find the right credit cards and personal loans, it's very easy to look at our members the way big banks look at them, because they're our partners; they're giving us information, advice, and guidance. When big banks are designing their products, whether it's credit cards or personal loans, they're designing for the prime user, for the subprime user, for a user between this credit score and that credit score. And I think naturally some of the experiences we built mimicked how these products were getting designed for particular segments of users. Then once you started collecting a lot of the behavioral data, and you combine the behavioral data with the credit data, you have an opportunity to know that someone might not have a great credit score at this point in time, but they are on a journey, aspiring to things that you normally would not put in front of them.

Maybe I'll just take a personal example. When I came into the country the second time around, in 2013, I had a really good credit score, because I had been in the country 10, 12 years back and had a few credit cards that had been open for a long time with no balances. So I had a thin credit report, but at the same time I had a good credit score and a good job with a good income. Now, if a particular product was designed for someone with a deep credit history, I wouldn't get approved for it. But as soon as I came into the country, I had a family, which means I had higher aspirations. So, it's not just today's credit score and credit report that matters; it's also the trajectory of the user, what they're aspiring towards, and being able to use all of that data.

If I'm interested in prime cards, because I think of myself as a prime member, while the banks might not see that in the data currently, then there's an opportunity for Credit Karma to help me find the path to those goals. And Credit Karma is also telling me, Vishnu, if you apply for these products, you're not going to get approved. Just by signaling back to me that I'm not going to get approved for these products, I'll back off. I'm not going to do the same thing I did when I came here as a student, where I just non-stop kept applying for products.

So, to be able to bring together the financial data that banks look at and the behavioral data that we collect, and use that to tell the user: we know you are interested in these products, we want to show them to you, but don't apply for them now; wait a little bit, improve your credit health, and then you're going to get approved. That opportunity is how we transitioned from a credit score, credit-range-based experience to an experience that is actually tailored to every single user's personal situation, using all the great machine learning techniques and the data to get there. I think that was a big aha moment.

Betsy (22:17): That is a great illustration, I think, of something that feels pretty unique about Credit Karma because your business is very data tech and intellect intensive, but at the end of the day, your customers are people and they come to you for help with financial matters that can be really emotional. So, are there any other examples of how Credit Karma takes that human factor in when designing products or when seeing opportunities in the marketplace that other people might not see because you have such rich data and such an interesting way to look at it? 

Vishnu (22:58): Yeah, I think one of the things we have invested in over the last few years is incorporating machine learning into our notifications. We send out millions of notifications to our members, and I remember when I joined the company, I would look at the number of notifications Credit Karma would send me, compare it with the number of notifications I used to get from a bunch of other companies, and say, I don't get a lot of emails from Credit Karma. What's going on? Is there some capability that we are missing? What do we do? Then I figured out that we had an artificial but necessary constraint to make sure we were only sending relevant notifications, emails or push notifications, to our members: you can only send X emails in a month, Y emails in a week.

So, we had those constraints. Now, the constraints were great for making sure we were doing the right thing for the user, that we were not spamming them just to pump up some fancy engagement metrics like MAUs and DAUs for the metrics' sake rather than for what was beneficial for the members. But at the same time, there were a lot of situations where I would much rather hear from Credit Karma, especially when it comes to my financial health.

I'll give you an example. I'm actually flying to India tomorrow, and my flight was supposed to be Tuesday night. My wife had booked the ticket, and a couple of days back she got an email from the agency saying, hey, your schedule has changed. She said, okay, normally I get these emails when the schedule changes from 8:00 AM to 8:15 AM, or 8:15 AM to 8:30 AM; we look at it on the day of travel. Then when we checked again yesterday, it turned out the flight was canceled. I'm traveling to India, my flight was canceled, and I didn't get a push notification or an email warning me, your flight is canceled, you need to go change your flight now.

If I'm in that situation, I need to be told immediately so that I can make alternate plans. Translate that to financial health: if someone has stolen your identity and they're trying to apply for a personal loan or a credit card using your identity, you want to know immediately. You don't want to wait two days because Credit Karma has this artificial constraint that it will only send you one email every week. You want to know immediately.

So, we need to understand the situation and the context well for the user and the things that are happening, and then help them know: this is important, take care of it right now; or, if something can wait, get the system to wait and then let the user know about it. To be able to do that, we needed to incorporate machine learning, and we've had a lot of success there. Our machine learning is able to drive value for our business and for our members while maintaining the longer-term health of the platform in a suitable way.

Betsy (26:42): Interesting. So, tell me about the application of machine learning when it comes to overcoming those more brittle business rules you had, like the artificial caps on email. How did you use machine learning to overcome that? What were some of the steps?

Vishnu (27:05): I think a lot of this comes down to understanding which of these rules are really hurting our members and the business, and having a conversation with the people who put these constraints in place. Going back to my earlier comment, the people who put these constraints in place did so for a reason. What were the reasons, or the intent, behind them? First, understand the intent, and then understand: if you're going to take that rule away, what's going to happen? Maybe you run an A/B test to find out. At our scale we are going to get these learnings very, very quickly, so then you know what's the worst that can happen, or the best that can happen, when you take that constraint away.

And then you are able to fine-tune your approach and understand how you can use machine learning. To use machine learning, we need to collect some amount of data. So, step one is the A/B test to understand what happens if you take the constraint away. Step two is deciding what kind of model or modeling technique you are going to bring in to make that decision machine-learning driven. And do you still need a soft rule, because models can take some time to improve? How do you keep your models operating well enough to meet the intent of the original constraint?

Afterward, you can basically think of it as a stepwise softening of the constraint: you soften the constraint, you collect the data, you build the model, you put that model in, and then you have an opportunity to understand how things are working with the softer constraint and the model together. In the next phase, you have an opportunity to take even that softer constraint away, because you know the model is doing what you want it to do. And sometimes you're not going to capture the objective of the initial constraint properly, so that's the reason you go through a couple of phases. Especially with something like this, which is very important and impactful for the company, we would much rather take a couple of phases before we get to a good place.
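The stepwise softening Vishnu describes, a hard cap replaced by a model score with a softer guardrail as backstop, can be sketched as follows. This is a minimal illustration under stated assumptions; the threshold values, cap, and function names are made up, not Credit Karma's actual rules:

```python
# Phase 2 of the "stepwise softening": the model decides, with a
# soft cap as a backstop for non-urgent messages. The numbers here
# are illustrative assumptions only.
URGENT_THRESHOLD = 0.9   # e.g. suspected identity theft: always send
SOFT_WEEKLY_CAP = 5      # softened guardrail while the model matures

def should_send(relevance_score: float, sent_this_week: int) -> bool:
    """Decide whether to send a notification.

    Phase 1 was a hard weekly cap alone. Here the model's relevance
    score drives the decision, but the soft cap still limits
    non-urgent volume until the model has proven itself.
    """
    if relevance_score >= URGENT_THRESHOLD:
        return True                  # urgent: bypass the cap entirely
    if sent_this_week >= SOFT_WEEKLY_CAP:
        return False                 # non-urgent and over the soft cap
    return relevance_score >= 0.5    # model-driven decision

assert should_send(0.95, 10)    # identity-theft-level urgency always sends
assert not should_send(0.6, 7)  # routine message blocked by the soft cap
```

The next phase he mentions would remove `SOFT_WEEKLY_CAP` once A/B results show the model alone preserves the original constraint's intent.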

Betsy (29:41): I’d love to know a little bit more about your teams when you’re designing a new data heavy product and how you balance data, intellect and human factors and emotion. It sounds like you are very well calibrated in both senses, but do you have folks from design teams, marketing teams? What is the org structure as you’re getting into a bit of an R&D on a new product development? 

Vishnu (30:12): There are things that are primarily user-facing and there are things that are happening in the background. Some of the notification work I talked about happens in the background, and in that situation we partner very, very heavily with our marketing teams to make sure we understand what they are trying to achieve. But when we are working on something that is user-facing, like the homepage you land on as soon as you open the app, design is very, very heavily involved in how we think about it. And I think it's important to understand that the intersection of design and AI is still super nascent at this stage. I come more from the AI and data side, and I don't really understand everything that the design team is thinking about.

So, it's important to just go in there with an open mind and understand what they are thinking about. For example, if I make everything machine-learning driven, then nothing is going to stay in the same place. As a user, when we just started this call, I think we were told, okay, at the top right is where you're going to see the recording. Now, if this whole navigation system were machine-learning driven, sometimes the leave-the-call button would be at the bottom right and sometimes it would be in the middle of the screen. Is that a good machine learning use case? Probably not, because you just want to understand the navigation. These are all sensibilities that the design team brings to the table; they understand what's really important and what can be made machine-learning driven.

And I keep following some of the research happening at the intersection of design and AI, and one of the questions is: how do you design products that are aware of AI being used to drive the product? Because once you use AI to put something in front of the user, let's say a feed, the act of the user going through that feed, experiencing, clicking, and scrolling, is itself behavioral data that you're collecting, which is going to be used in the next set of models.

So, how do you make sure the designer understands how each user's experience is going to evolve? Help them understand that, hey, we want to find the most relevant, most revenue-generating, most value-generating thing, whatever your objective is, and put it right on top. But the designer might say, sure, you want to put it right on top, but do you understand that, the way we have designed the product, users are going to click in the middle of the screen rather than at the top of the screen?

So, if you don't sit down with the design team and understand how they're thinking about the users and what they are learning from qualitative research, you're going to design the product wrong. But I think it's still a very young discipline. There are a lot of best practices coming out from some of the large companies which have been doing it for maybe half a decade or a decade. It's important to understand how you can take that, learn from it, and bring it to your own company and your own situation.

Betsy (33:58): How does Credit Karma think about data trust when it comes to both your employees trusting the data that they’re building on, but then also the customers that you have and what do you do to monitor it and check it over time? 

Vishnu (34:15): I think this is always a hard problem, especially when it comes to financial data. If we are putting financial data in front of the user, on which they're going to make some decisions, then it's important to get it right. Like I mentioned, one of our biggest advantages is working with the bureaus, which means there's a lot of data that comes with a lot of rigor the bureaus have put in, and we are able to bring that to bear when we put it in front of our users. At the same time, there are transformations of the data that we do internally, and I think the biggest aspect of it is making sure you are tracking all of those transformations.

As soon as a transformation changes, you also know why the transformation of the data is changing. And you figure out how to centralize a lot of these transformations in one place, because if you are able to centralize them, then you are able to keep adding new tools and technologies that are coming out in the market. We work very closely with Google Cloud, but we are also starting to look at other third-party vendors, like Monte Carlo, who can come and help us monitor some of the data quality. You want to get these transformations into a few well-defined places. For example, we really care a lot about the security of PII, which means we keep the PII data completely away from all the other data that some of my teams, like recommendation systems, data scientists and machine learning, deal with.

So, if you're doing transformations involving PII, that happens in a completely separate place; if you're doing transformations of non-PII data, you do that in a different place. But if you're able to localize these transformations in a few places, then you can bring some of these tools into play so that you automatically get some monitoring of the data. You can build rules or use some of the out-of-the-box anomaly detection pieces these tools come with. Once you're monitoring the data, the next part is having the right processes. If a particular data attribute is breaking, what are you going to do? Who's going to look at it? Who's going to fix it? Which team is responsible for fixing it?
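The kind of rule-based data monitoring Vishnu describes can be sketched with a simple check: flag a data attribute whose daily null rate drifts far from its recent history. This is an illustrative toy, not any vendor's actual anomaly detection; the attribute, window length, and z-score cutoff are all assumptions:

```python
import statistics

def is_anomalous(history: list, today: float, z_cutoff: float = 3.0) -> bool:
    """Flag today's null rate if it is a z-score outlier vs. recent days.

    `history` is the attribute's null rate over a recent window;
    `today` is the latest observation.
    """
    mean = statistics.fmean(history)
    stdev = statistics.stdev(history)
    if stdev == 0:
        return today != mean
    return abs(today - mean) / stdev > z_cutoff

# Last 14 days of null rates for a hypothetical "credit_score" attribute.
history = [0.010, 0.012, 0.011, 0.009, 0.010, 0.013, 0.011,
           0.010, 0.012, 0.009, 0.011, 0.010, 0.012, 0.011]

print(is_anomalous(history, 0.25))   # sudden spike in nulls -> True
print(is_anomalous(history, 0.011))  # within normal range -> False
```

In practice a check like this would run per attribute after each pipeline stage, and a `True` result would open the kind of incident and triage process described next.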

And then the triage of data quality issues: how do you quickly resolve it? Do you think of them as site incidents? Are these data incidents? We have gone through a lot of those stages. Initially, we used to have two separate incident tracks: we used to have data incidents and we used to have site incidents. Over a period of time, we merged them back together and said, hey, we need visibility from everyone. You can’t separate data incidents from site incidents.

So, we merged all of them back in and said, we really want these treated as site incidents, and then we want the right teams to get involved and fix them with the right level of urgency so that you are able to… And things break. When things break, what matters is how quickly you can react and bring it back to its original operating rhythm. And that’s also an important part of trust. One is preventing big breaks from happening, and second is, when some of these breaks happen, how do you react? How do you learn from it, and how do you get better?

Betsy (38:06): And this goes beyond Credit Karma: what do you see as the greatest need for improvement and evolution in AI right now?

Vishnu (38:12): I think we touched upon a couple of these pieces. I think the human factor, how do you look at the human factor? There are multiple aspects of it. One aspect, in a company like ours, is how you work with the right partner teams to do the right thing for the company on an ongoing basis. We are not after one-shot wins; we want sustainable wins with the teams involved. And the second aspect is that you want to use data, you want to use AI, to do something for the user. Now, if I show the number two product in number one’s place and the number one product in number two’s place in the order in which users see them, that’s not a big impact.

The user still gets to see what they will get approved for, just in the wrong order. So, the cost of a mistake in this kind of situation is not high, but there are many other situations where it is very high. And as someone who’s employing AI, we all know AI makes mistakes. When AI makes mistakes, what’s the backstop? What’s the cost of that mistake? What are you building into your systems to keep working on reducing those mistakes? And sometimes the way you catch and resolve those mistakes, and prevent them from happening, might not be the most elegant solution, but you still need to do it so that your members can trust the product they’re using. Your members don’t really care whether you’re using AI or rules or whatever behind the scenes.

What they really care about is: are you showing them the right thing? Are you giving them the right information at the right time? AI is something that we use behind the scenes to do a great job on a personalized basis, but you can’t make mistakes that are going to hurt our members. You need to make sure that you have other mechanisms, and that you’re investing in those mechanisms. Some of those mechanisms, and where you need them, we are still figuring out, but you still need them in place. You still need those audits and checks in place so that you are protecting your members and users from impact you don’t want to have on them. You’re trying to help them, not distract them or hurt them. So, it’s important to make sure that you are thinking about the investments you need to make so that when AI makes mistakes, you can catch it and prevent it from going forward.

Betsy (41:07): So, I’d like to dig in on both of those with some follow-on questions. The first would be, and it sounds like you’re very sensitive to this based on your own experience, that you’re only as good as the data set that you get. There are implicit historical biases in data sets. And so I’m curious to know, because you guys appear to strive to look more holistically at the data, how have you thought about that, and what are the things that you’ve done to try and make sure that the products are fair and not just reliant on historical data, which can have its skews?

Vishnu (41:50): Yeah, I think we’ve probably looked at it less from a data perspective, because as an organization, as a company, one of the things that we do is make available a choice of financial products to our members. And ultimately, the banks and the fintechs are the companies that are underwriting these financial products. So, one thing we focus on as a business is how we get a variety of financial partners on our platform for our members. And then the second part of it is how we keep working with the financial partners to help them in whatever ways we can, so that they’re able to design products that are needed for people who are eligible to get them. Like I talked about, when I came to the country, I thought of myself as a prime user, but from the data’s perspective, I was not a prime user.

So, there are situations where we need to help some other fintechs come onto our platform and give our members access to a lot of other financial products. I think we look at it more as an opportunity to provide access to our platform to financial partners who want to provide these financial products to our members who need them. That’s what we end up spending more time on. And in terms of other things, whether there is some bias that comes from the behavioral data that we collect, whether there is some bias that we introduce, we are very careful about that, and the way our business model is structured, we get paid only when the user gets the product. If they do not get the product, we are not getting paid. The constraints of the business model naturally force us to do the right thing for our members.

Betsy (44:02): So, your job is assembling a breadth of opportunity and then playing back the white space to the banks to say there are opportunities here and there’s a large audience, type of thing. Yeah, that’s great. That’s an interesting market maker there. The other question I was going to ask is about transparency. It’s probably a more internal question, but in terms of model building with AI, what are some of the steps that you take, or recommend taking, to make sure that the model has enough transparency so it can be audited, and people can see what’s inside the black box and make sure that you guys are doing the right thing?

Vishnu (44:48): Yeah, I think it starts out with laying out the objectives. What are you trying to get the model to do? Making sure that the objectives of what the model intends to do are put right out front for anybody to look at. And then the second aspect is how you are measuring success. What does success look like? What are the KPIs and metrics that you need to focus on to make sure that it’s in a good place, whether that’s accuracy, or revenue, or reducing opt-outs? So, just try to make sure that you identify the right metrics. And then the third part is that pretty much everything we do within Credit Karma, especially if it’s member-impacting, is done through experiments. So, one of the biggest investments that we made, which I would have talked about earlier if you’d asked, was the experimentation system.

And we again did that in 2016. The experimentation system allows our data scientists, our product engineers, every single team within the company to test out everything, to set up everything as an experiment. And when you set up everything as an experiment, you’re getting results from the experiment. Then you can always go back and audit how those experiments were set up, what those results were, and what the reasons were why something was or was not ramped. So, you get a lot of transparency, and it’s very convenient for anyone to set up an experiment and for anyone to view the results of an experiment. Transparency is built into the whole process.
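The auditable, set-everything-up-as-an-experiment approach described here typically relies on deterministic bucketing, which might look something like the sketch below. Function, member, and experiment names are invented for illustration; this is not Credit Karma's actual implementation:

```python
# Sketch of deterministic experiment assignment: each member is hashed
# into a stable bucket per experiment, so anyone can later reproduce
# and audit exactly who saw which variant.
import hashlib

def assign_variant(member_id, experiment_name, variants=("control", "treatment")):
    """Stable assignment: the same member + experiment always maps to
    the same variant, independent of when or where it is computed."""
    key = f"{experiment_name}:{member_id}".encode()
    bucket = int(hashlib.sha256(key).hexdigest(), 16) % len(variants)
    return variants[bucket]

# Assignment is repeatable, so results can be re-derived after the fact.
v1 = assign_variant("member-123", "new-card-ranking")
v2 = assign_variant("member-123", "new-card-ranking")
print(v1 == v2)  # True
```

Because assignment is a pure function of member and experiment identifiers, an auditor can reconstruct any past experiment's population without needing a log of every assignment event.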

Betsy (46:35): Tell me more about the experimentation process that you put in place. It sounds like it’s benefiting more than just the engineering teams.

Vishnu (46:45): Yes, definitely. We set up our experimentation system in a way where we knew that a lot of teams wanted to run experiments. We knew that a lot of teams wanted to look at the analysis, wanted to get an understanding like, okay, the engineering team and the product team decided to ramp this, but I want to know why, because it is taking away something that I used to use and now something else is coming in its place. How does that happen? To be able to do that, we invested a lot of effort. We still have a team which does a great job maintaining the front end, an internal tool that we’ve built on top of our experimentation system, so people are able to go and look at the results. And a lot of those results are also getting published on Slack.

People can easily share a link to a Looker dashboard that displays the results of an experiment. So we made these integrations with Looker, we made these integrations with Slack. Especially from a data science perspective, we put together a process where every week there’s a group of people coming together, looking at the current status of all these experiments and understanding what we want to do next. And we also made the investment in building a team of analysts who are focused just on experiments: how these experiments are set up, how they are designed, how we are evaluating them. Do we need a longer-term holdout to ensure the results are not a one-time newness effect? So there’s a team that is just focused on analysis of experiments, especially on the modeling side. I think there are process investments, tooling investments, and team investments that we had to make to make all this work.
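A longer-term holdout of the sort mentioned here is often implemented as a fixed slice of members excluded from every ramp, so that durable lift can later be separated from a one-time newness effect. A minimal sketch, with invented names and an illustrative percentage:

```python
# Sketch of a persistent long-term holdout: a fixed slice of members
# stays on the old experience regardless of which experiments ramp.
import hashlib

def in_longterm_holdout(member_id, holdout_pct=5):
    """Members hashing into the bottom `holdout_pct` percent are
    excluded from all ramps, forming a stable comparison group."""
    h = int(hashlib.sha256(f"holdout:{member_id}".encode()).hexdigest(), 16)
    return h % 100 < holdout_pct

members = [f"member-{i}" for i in range(10000)]
held_out = sum(in_longterm_holdout(m) for m in members)
# The held-out fraction lands close to the configured 5%.
print(0.03 < held_out / len(members) < 0.07)  # True
```

Comparing the ramped population against this stable holdout months later is what distinguishes a lasting improvement from a novelty bump.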

Betsy (48:48): Interesting. And just to give us a sense of the kind of scale, how many experiments might you be running at any one time? 

Vishnu (48:56): Way too many. I don’t really have the number in front of me, but at any point in time there are like hundreds to thousands. It’s a very, very rough range, but there are a lot of experiments going on. Maybe one way to think about it is that probably half the company uses the front end that we have built out.

Betsy (49:21): Wow, that’s great. That speaks a lot about your culture. We’ve been taking up a ton of your time, so I’ll just ask one more question. Vishnu, what is making you the most excited about what you guys might be able to accomplish in the next couple of years? 

Vishnu (49:37): Yeah, one of the things that we’ve always talked about, that our CEO has always talked about, is automation: helping our members do things that will help them make progress in their financial lives, and figuring out how we can employ more automation. For example, I know I’m probably [inaudible 00:50:06] for my auto insurance at this point in time. So, is that some automation that the company can invest in and build, which will allow me to just tell Credit Karma, hey, my auto insurance just gets auto-renewed right now, so it’s automated in favor of the auto insurance company, not in my favor. How can Credit Karma help bring some of this automation into the app so that it’s automated in favor of the member? That’s one of the things we are interested in making early steps toward, using all our data to help our members better. So, that’s probably something we are interested in doing.

Betsy (50:48): That is very exciting and sign me up. Vishnu, thanks again for being a part of the Rev-Tech Revolution. It was really nice having you on. I’m glad we finally got a chance to connect, and I appreciate your time very much. 

Vishnu (51:00): Thank you. Thanks a lot, Betsy. I had a great time as well. 

Speaker 1 (51:04): Thank you for tuning in to the Rev-Tech Revolution podcast. If you enjoyed this episode, please don’t forget to rate, review and share this with colleagues who would benefit from it. If you would like to learn more about how Riva can help you improve your customer data operations, check out rivaengine.com
