
Building Credit Karma’s Unclaimed Money Feature
As part of our mission to become the best financial assistant, here's how we built a feature to unlock $40 billion in unclaimed money.

If you're considering moving to cloud infrastructure, how do you plan for the future? We share our approach to migrating to Google Cloud using cloud-agnostic tools.

At Credit Karma, where you start isn't necessarily where you stay. Jeff talks about making the switch from QA to development, and the satisfaction (and scrambles) that come with a career change.

Finagle microservices deployed to production without at least some essential, sane default configuration lose most of the features that make them resilient, fault-tolerant, and highly performant. In our move to microservices, we learned how to get the most out of this tool.

Studies show that behavioral interviewing is the technique most strongly correlated with hiring success. We introduce the concept in the first of a three-part series on how behavioral interviewing is used at Credit Karma, including the inspiration a character like Dwight Schrute can bring to the interview process.

Meet Nita, a software engineer who's working on the new Credit Karma Tax product. Nita opens up about how she got into engineering, why she wanted to work at Credit Karma, and what keeps her motivated.

Learn how to use GraphQL to detect breaking changes in API designs, and the open source utilities that help ensure your schema supports all of your client requests.
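To make the idea concrete, here is a minimal sketch of breaking-change detection, comparing the fields each type exposes across two schema versions. This is an illustration of the concept only, not the open source utilities the article describes; the schema representation and function name are hypothetical.

```python
# Hypothetical sketch: a schema is modeled as a dict mapping
# type name -> {field name: field type}. A change is "breaking" if a
# field a client may query was removed or changed type.

def find_breaking_changes(old_schema, new_schema):
    """Return human-readable descriptions of breaking changes."""
    breaking = []
    for type_name, old_fields in old_schema.items():
        new_fields = new_schema.get(type_name, {})
        for field, field_type in old_fields.items():
            if field not in new_fields:
                breaking.append(f"{type_name}.{field} was removed")
            elif new_fields[field] != field_type:
                breaking.append(f"{type_name}.{field} changed type")
    return breaking

old = {"User": {"id": "ID!", "email": "String"}}
new = {"User": {"id": "ID!"}}
print(find_breaking_changes(old, new))  # ['User.email was removed']
```

Running a check like this in CI against the previously deployed schema is one way to catch a breaking change before any client request fails.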

Credit Karma leverages data from more than 60 million members to deliver a personalized user experience. To do this, we rely largely on Scala and Akka for the heavy lifting. Powerful tools, however, demand mastery to use well.

We’re excited to launch our open source exec-jar-plugin for Apache Maven, a new way to distribute Java/Scala applications.

When we considered how to push 700,000 events per minute from Kafka into Vertica, our data warehouse, we learned these lessons about choosing the best framework for high throughput.