People are extremely important to our company. They are the primary source of costs, product and sales. Given our positioning on the far side of B2B, we are not reliant on advertising, targeted campaigns, anonymous inbounds, partner referrals or open source contributors. Everything that happens, happens because of us and thanks to us. This puts us in greater control of our own destiny but with great power comes great responsibility.
We’ve been doing a lot of interviews and have roughly been investing our time like this:
From this, I think we have learned a few things:
All this makes it very hard to compare candidates objectively and methodically - particularly when you have many good options. So the goal here is to take what we have learned and devise a statistically coherent interview assessment that we will call SIP for short. This will also turn bad interviews into useful data to construct a distribution.
A candidate should satisfy 3 key attributes (just like a murder suspect):
I would also argue that they are in increasing order of importance: you would rather hire an eager person with fewer skills than a skilled person who is not enthusiastic.
Looking back at our previous process, it took us 5 hours just to decide if we saw ourselves working with that person - arguably the most important trait and probably the one we should evaluate first, i.e. stop being nice to people we don’t like. Moreover, evaluating each aspect (skills, enthusiasm, personality) separately led to offsetting +1/-1 situations that left candidates looking flat. We need more method so we can offload some of the workload and improve objectivity.
Now that we have 3 attributes we are looking to satisfy, let’s dig deeper to establish suitable tests for each. I also propose a score from 1 to 6 for each test: a ‘neutral’ score is impossible, which forces a definite judgement, while the 3 degrees of positivity and negativity on each side avoid a purely binary verdict.
Means: We observe that the candidate can lie regarding their skills and that the skills required are largely determined by the role. By designing a few critical, pre-defined questions for each role we can evaluate the candidate’s practical ability for the role, their theoretical technical understanding, and additional skills not required for the role but potentially valuable to other roles or to the company in the future.
Motive: It should be noted that the candidate can also lie regarding ‘why they want to work at Suade’, but it is harder to fake interest than to fake knowledge. We can also say that this is an attribute determined by the company, and hence this evaluation should apply uniformly to all roles. We can classify a candidate’s motivation with 3 simple metrics: Why a startup? Why FinTech? Why Suade?
Opportunity: It is safe to assume that the candidate is not a raging psychopath and that what you see is what you get (WYSIWYG). We also observe that this is purely a personality assessment, entirely determined by the interviewer. Culture fit might seem like the hardest intuition to quantify into a test, but a well-defined (and true) company culture can easily lead to actionable tests. Our company motto is Learn, Lead, Laugh, and hence we can easily construct 3 metrics to evaluate their ability to fulfil and propagate this motto.
      [1] [2] [3] [4] [5] [6]     … Practical Skills?
      [1] [2] [3] [4] [5] [6]     … Theoretical Understanding?
      [1] [2] [3] [4] [5] [6]     … Bonus Skills?
      [1] [2] [3] [4] [5] [6]     … Why a startup?
      [1] [2] [3] [4] [5] [6]     … Why FinTech?
      [1] [2] [3] [4] [5] [6]     … Why Suade?
      [1] [2] [3] [4] [5] [6]     … Ability to Learn?
      [1] [2] [3] [4] [5] [6]     … Ability to Lead?
      [1] [2] [3] [4] [5] [6]     … Ability to Laugh?
[ ___ / 54 ]
*N.B. The average is 31.5 (9 metrics × 3.5, the midpoint of the 1-6 scale)
I might also suggest one final criterion for recommendations/referrals, ranging from -3 to +3, which we would ask the recommender to provide.
[-3] [-2] [-1] [0] [+1] [+2] [+3]     … Referral Rating?
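The scorecard arithmetic above can be sketched in a few lines. This is a minimal illustration, not an existing tool; all names (`METRICS`, `total`, etc.) are hypothetical:

```python
# Sketch of the SIP scorecard arithmetic: nine metrics scored 1-6
# (no neutral option), totalled out of 54, with an optional
# referral rating of -3..+3 added on top of the metric total.

METRICS = [
    "Practical Skills", "Theoretical Understanding", "Bonus Skills",
    "Why a startup?", "Why FinTech?", "Why Suade?",
    "Ability to Learn", "Ability to Lead", "Ability to Laugh",
]

MAX_TOTAL = 6 * len(METRICS)    # 54
AVERAGE = 3.5 * len(METRICS)    # 31.5, the midpoint of the scale

def total(scores, referral=0):
    """Sum nine 1-6 metric scores, plus an optional referral rating."""
    if len(scores) != len(METRICS):
        raise ValueError("expected one score per metric")
    if any(not 1 <= s <= 6 for s in scores):
        raise ValueError("each metric is scored 1-6, no neutral option")
    if not -3 <= referral <= 3:
        raise ValueError("referral rating ranges from -3 to +3")
    return sum(scores) + referral

# Example: a candidate scoring 4 on every metric, with a +2 referral
print(total([4] * 9, referral=2))  # 38
```

A total above 31.5 then reads as a net-positive impression, below as net-negative.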
Now that we have a method of collecting data in place, we can try various approaches to applying it. Does everyone evaluate everything, do we take turns evaluating different sections, or do we re-evaluate with each new interaction in a Bayesian manner? Do we weight one section more heavily, favour initial or later impressions, or give more weight to evaluations from the people who will work closest with the candidate?
As a starting point, I think it would be simplest for every interviewer to complete a full scorecard for each interview.
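If every interviewer completes a full scorecard, the simplest aggregation is a mean of the per-interviewer totals, with optional weights to answer the "who works closest with them" question above. A minimal sketch, with an illustrative `combine` helper and made-up scores:

```python
# Sketch: combining full scorecards from several interviewers.
# A plain mean treats everyone equally; weights would let us favour
# the interviewers who will work closest with the candidate.

def combine(totals, weights=None):
    """Weighted mean of per-interviewer scorecard totals (each out of 54)."""
    if weights is None:
        weights = [1.0] * len(totals)
    if len(weights) != len(totals):
        raise ValueError("expected one weight per scorecard")
    return sum(w * t for w, t in zip(weights, totals)) / sum(weights)

# Three interviewers; the third will work closest with the hire.
print(combine([40, 35, 45]))                     # 40.0
print(combine([40, 35, 45], weights=[1, 1, 2]))  # 41.25
```

The unweighted version matches the "everyone completes a full scorecard" starting point; the weights are there only to show how the scheme could evolve.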