“What do you do with all that data?”
I recently got this question from a senior executive as I was briefing BAE Systems’ half-year hiring numbers. I was using a graph of the recruiting funnel, comparing it to the same period last year. If you have ever dug into a recruiting funnel, you know that it counts many things: applicants, resumes sent to hiring managers, interviews scheduled, offers, offers accepted, hires, etc.
That’s a lot of data. Getting that data requires a lot of technology. We leverage the statuses in our applicant tracking system and store the vast number of transactions in a data warehouse. Using an ETL (extract-transform-load) tool, we extract the data from the warehouse, manipulate it in a workflow, and then visualize it on a dashboard.
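For the curious, here’s a rough sketch of that flow in Python. The table and column names (`funnel_events`, `req_id`, `stage`) are invented for illustration; our actual warehouse schema and tooling are different.

```python
# Minimal extract-transform-load sketch over an in-memory stand-in
# for the warehouse. Names are hypothetical, not our real schema.
import sqlite3
from collections import Counter

def extract(conn):
    # Extract: pull the raw funnel transactions from the warehouse.
    return conn.execute("SELECT req_id, stage FROM funnel_events").fetchall()

def transform(rows):
    # Transform: count candidates sitting at each funnel stage.
    return Counter(stage for _, stage in rows)

def load(stage_counts):
    # Load: emit rows in a shape a dashboard tool could visualize.
    return list(stage_counts.most_common())

# Stand-in warehouse with a few sample transactions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE funnel_events (req_id TEXT, stage TEXT)")
conn.executemany("INSERT INTO funnel_events VALUES (?, ?)",
                 [("R1", "applied"), ("R1", "applied"), ("R1", "interview")])

print(load(transform(extract(conn))))  # → [('applied', 2), ('interview', 1)]
```

In production, each of the three steps is a separate scheduled job rather than three function calls, but the shape of the pipeline is the same.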
But the question still remains: what do we do with all of the information? What does it really tell you other than the fact that there’s a lot of recruiting activity happening? Can’t we just determine that by the number of jobs posted?
Back to the briefing. First, I shared the basics with the audience: how the recruitment funnel can reveal the health of the talent pipeline, and how conversion rates (the rate at which candidates move from one stage of the funnel to the next) can indicate the performance of the hiring team, as well as the competitiveness of our pay.
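To make “conversion rate” concrete, here’s the arithmetic on a toy funnel. The stage counts below are made up for illustration, not BAE Systems figures.

```python
# Illustrative funnel: (stage name, candidate count) from top to bottom.
funnel = [("applicants", 400), ("sent to hiring manager", 120),
          ("interviews", 60), ("offers", 15), ("hires", 12)]

def conversion_rates(funnel):
    # Each stage's count divided by the previous stage's count.
    return [(a[0], b[0], b[1] / a[1]) for a, b in zip(funnel, funnel[1:])]

for frm, to, rate in conversion_rates(funnel):
    print(f"{frm} -> {to}: {rate:.0%}")
# applicants -> sent to hiring manager: 30%
# sent to hiring manager -> interviews: 50%
# interviews -> offers: 25%
# offers -> hires: 80%
```

A sudden drop in any one of these ratios, period over period, is what points you at the team, the process step, or the pay band that needs attention.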
Then, I got to the exciting part of the answer. Over the past few months we’ve been applying machine learning to our talent acquisition data to build models that let us predict what should be happening in the funnel and by whom.
Our new yield rate model predicts how many applicants are needed at the top of the funnel to get a new hire at the bottom. By job. By location. By security clearance level. It also predicts how long each stage of the process will take. Pretty sweet! The model evaluates many factors across several years of historical data and tells the hiring team what to expect for each new requisition.

By continually analyzing the funnel, we will also be able to determine whether a requisition is getting sufficient organic traffic and, if not, whether it needs some help from the sourcing or recruitment marketing teams. If the requisition has attracted enough candidates, that’s another actionable data point: it tells us when to un-post the requisition, since having too many candidates creates a negative candidate experience and lowers recruiter efficiency. The model will also help highlight bottlenecks, such as hiring managers taking too long to review resumes or interviews taking too long to schedule, so we can intervene.
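Stripped to its core, the yield math works like this: multiply the historical stage-to-stage pass-through rates for a segment, and the inverse of the product is the applicants needed per hire. The rates below are invented for illustration, not our model’s actual parameters.

```python
# Hedged sketch of a yield-rate estimate. The pass-through rates for
# this hypothetical job/location/clearance segment are invented.
def applicants_per_hire(stage_rates):
    # Chain the stage-to-stage pass-through rates; the inverse of the
    # product is how many top-of-funnel applicants one hire requires.
    overall = 1.0
    for rate in stage_rates.values():
        overall *= rate
    return 1.0 / overall

rates = {"screen": 0.30, "interview": 0.50, "offer": 0.25, "accept": 0.80}
print(round(applicants_per_hire(rates)))  # → 33 applicants per hire
```

The real model conditions those rates on job, location, and clearance level, which is why the same requisition count can imply very different sourcing workloads in different segments.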
We also built a requisition load model, which helps our recruiting managers apportion work among recruiters. The model evaluates each requisition’s number of positions, job code, location, and other historical data, applying several weighting algorithms to give it a difficulty score. The scores are then aggregated for each recruiter and tagged as HIGH, MEDIUM, or LOW on the talent acquisition manager’s dashboard. This gives the manager a holistic, objective look at how work is assigned, enabling adjustments as needed.
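A simplified sketch of the idea: score each requisition as a weighted sum of its attributes, sum the scores per recruiter, and tag the total. The attributes, weights, and thresholds here are invented for illustration; they are not the actual model.

```python
# Toy requisition-load scoring. Weights and thresholds are made up.
WEIGHTS = {"positions": 1.0, "clearance": 2.0, "hard_to_fill_code": 1.5}

def difficulty(req):
    # Weighted sum of the requisition's attributes.
    return sum(WEIGHTS[k] * req.get(k, 0) for k in WEIGHTS)

def recruiter_load(assignments):
    # Aggregate difficulty per recruiter, then tag for the dashboard.
    totals = {}
    for recruiter, req in assignments:
        totals[recruiter] = totals.get(recruiter, 0.0) + difficulty(req)
    def tag(score):
        return "HIGH" if score > 10 else "MEDIUM" if score > 5 else "LOW"
    return {r: (round(s, 1), tag(s)) for r, s in totals.items()}

assignments = [
    ("alice", {"positions": 3, "clearance": 1, "hard_to_fill_code": 1}),
    ("bob",   {"positions": 1, "clearance": 0, "hard_to_fill_code": 0}),
]
print(recruiter_load(assignments))
# → {'alice': (6.5, 'MEDIUM'), 'bob': (1.0, 'LOW')}
```

The point of the tags isn’t precision; it’s giving the manager a defensible, at-a-glance basis for rebalancing work instead of counting raw requisitions.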
We are in the early phases of deploying these models and still need to do the change management bit with the recruiting teams to ensure adoption.
While the executive seemed happy with my response, the question got me thinking… “what could we do if we had even more data?”