As you conduct your interview, you need to give the interviewee a chance to answer each question well. Here is a high-level walkthrough of how to bring a candidate to the best answer they have. Our philosophy at Credit Karma is to get candidates comfortable. That is how they will be when they work here – and never forget that the candidate is interviewing us, too.
Last time, we introduced behavioral interviewing as a concept and looked broadly at what it involves. Today, we’re going to take a deeper dive and give you a simple rubric for engaging with a candidate if you’re tasked with conducting an interview.
Getting to what matters
For ease of memory, when you’re in a live situation, just think… CAR:
Context of the situation that the interviewee is describing.
Actions that the person took.
Result of their behaviors.
Once you understand the context of the situation, you can better inquire into the actions the candidate took. The actions are the meat and potatoes of our rubric: the majority of your time should be spent talking about their behaviors. Once you appreciate both of those steps, you’re well prepared to talk about the results they brought about and how successful their outcomes were.
Let’s try it out
Now let’s examine a trial transcript of how a behavioral interview might play out, so we can put this rubric to the test.
Interviewer: Tell me about a time when you thought a technical decision the team was making was not optimal.
Interviewee: Well, last year we were starting to design an awesome new feature for our users that would allow them to track delivery of their candy in real time. We wanted them to be able to track all of the details from start to finish. Before this, they had no visibility into the ordering process until their package was at their door. The team had three other developers (I was the most senior), plus a product manager and a designer working on the project.
The designer had a ton of great and detailed ideas about all sorts of different abstract things we might show to the user. The person who was assigned the story really dove into being able to build a semantic layer in the database to describe all these abstract concepts. I felt like the approach was too hard to maintain. I gave him some critical feedback on the approach when we were doing a design review and told him he should explore some other options. Instead of really considering my feedback, he went to other team members, including our infrastructure team, and started getting people spun up on a big, complex schema change in the relational DB. When I realized that they were spinning everything up, I did some research myself on alternatives and came up with an idea based on an eventing system that persisted with Apache Kafka.
I took the alternative to the team and talked about why I thought it was better. I made sure the product manager was accounting for the new approach and that we had the time to implement it. I told the developer that while I thought his idea could be implemented, and would work, that it was not the optimal technical solution and we sat in a room talking about it for almost two hours before he finally agreed that my approach was better. In the end we implemented a highly scalable solution based on Apache Kafka that really changed the way we interacted in scalable real-time technical design that ended up being used for three or four other features along the way.
Let’s apply CAR. Here, we got context, action, and results. We have a minimally viable answer and something we can evaluate to move the conversation forward. The candidate’s action led to an optimal solution for the problem they were trying to solve, one that was reused by other people. That’s a good sign. But when we assess their actions against the context they described, it sounds a little like they browbeat the other engineer into submission, which sounds less promising. They got a desirable result from potentially less-than-desirable behavior.
After you apply this rubric, you need to quickly assess whether you’ve learned what you wanted to from the answer. Behavioral interviews are still a conversation. Don’t think of them as discrete question-and-answer blocks, but rather as a broader conversation with a candidate that continues until you’ve unearthed what you’re looking for.
You can keep digging. From this example transcript, an interviewer would keep asking clarifying questions until they were satisfied. A good follow-up would be, “Interesting approach. Tell me more about what went into that conversation with the other developer,” or “I’d like to hear more about design reviews. Can you walk me through what went on in yours?”
Focus on helpfulness
Just remember, as you go back and forth with a candidate, that you’re not trying to trick anyone. A behavioral interview is an opportunity to make the interview a conversation and dig deep into an answer. The challenge in the moment is to ask open questions and let the interviewee keep talking about their behaviors, without telling them exactly what you want to hear – all while giving them every opportunity to deliver a great answer.
An open question deliberately seeks a long answer, as opposed to a closed question, which can generally be answered in a few words. A closed question asks someone whether they take walks at work to stay focused. An open question asks them how they stay focused at work. Open questions give the candidate room to keep clarifying what they’re saying and drill down to the specifics you want, without you guiding them directly to it.
Take this for a test drive
So far, we’ve learned about the purpose of behavioral questions and how to evaluate the answers. This is a good time to practice. Grab a friend, drive them toward CAR answers, and evaluate them on competencies. Can you guide the conversation to an evaluation you’re happy with? Will it help you hire great engineers?