Credit Karma leverages data for over 60 million members to deliver a personalized user experience. To do this, we rely largely on Scala and Akka to do the heavy lifting. Powerful tools, however, demand mastery to be used well.
We’re excited to launch our open source exec-jar-plugin for Apache Maven, a new way to distribute Java/Scala applications.
While working out how to push 700k events per minute from Kafka into Vertica, our data warehouse, we learned these lessons about choosing the right framework for high throughput.
As a programmer, I often over-quantified my own impact. I wanted opportunities where I could do big things, and that desire for impact was part of the reason I headed down a management track. What I didn’t realize at the time is that individual contributors can have as much impact as, or more than, any manager.
At Credit Karma, my team maintains a service powered by a multi-node Akka cluster. Akka is a framework for building distributed, concurrent, message-driven systems in Scala. It helps us easily scale our service and gives us some resilience when problems inevitably happen in production. In this post, I’ll cover two problems, auto-downing and quarantine, and share our lessons learned.
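Auto-downing is configured in Akka's HOCON settings. As a hedged sketch (the post's actual configuration isn't shown here), the relevant setting in classic Akka Cluster looks like this; turning it on makes a node automatically mark unreachable peers as down after a timeout, which is exactly the behavior that can split a cluster during a network partition:

```hocon
akka {
  cluster {
    # Automatically mark unreachable nodes as DOWN after this interval.
    # Convenient in development, but dangerous in production: during a
    # network partition, each side downs the other and the cluster
    # splits into two independent clusters.
    # auto-down-unreachable-after = 10s

    # The safer production default is to leave it off and handle
    # downing deliberately (e.g. with a split-brain resolver).
    auto-down-unreachable-after = off
  }
}
```

Note that `auto-down-unreachable-after` was deprecated in later Akka releases in favor of a split-brain resolver strategy, so the exact setting depends on the Akka version in use.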
Three best practices helped us successfully pull off a 24-hour coding event that accelerated the development of our proprietary prediction software.
In this video, Matt Greenberg, Senior Director of Engineering, talks about Coaching Career Growth at the San Francisco Engineering Leadership Community meetup.