Assessing employees’ added value may help to optimize companies’ recruitment policies. A newly appointed assistant professor at the Amsterdam Business School, Colin Lee, has a special fascination for careers.
His model predicts recruiters' decision to select someone for a job interview. The model's accuracy is remarkable. 'Where a vacancy did not require a cover letter, the model's prediction about the job interview was right in about 80% of cases.'
That accuracy makes the model attractive to companies. 'If recruiters can reliably identify which applicants can confidently be selected or rejected, they can spend more time on the border cases,' says Lee. 'However, this involves a risk: selecting candidates on the basis of past outcomes means that good candidates may be overlooked.'
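The triage idea Lee describes can be sketched as a simple three-way split on a screening model's predicted probability. This is an illustrative sketch, not Lee's actual model: the threshold values and candidate scores below are assumptions chosen for the example.

```python
# Hypothetical triage sketch: a screening model outputs a probability that a
# candidate advances to interview; only the uncertain "border cases" are
# routed to a human recruiter. Thresholds are illustrative, not from Lee.

def triage(probability, low=0.2, high=0.8):
    """Route a candidate based on the model's predicted interview probability."""
    if probability >= high:
        return "select"   # model is confident: advance to interview
    if probability <= low:
        return "reject"   # model is confident: decline
    return "review"       # border case: left to a human recruiter

# Example run with made-up candidate probabilities.
candidates = {"A": 0.91, "B": 0.05, "C": 0.55}
routes = {name: triage(p) for name, p in candidates.items()}
print(routes)  # {'A': 'select', 'B': 'reject', 'C': 'review'}
```

The appeal for recruiters is that the two confident bands are handled automatically, so human attention concentrates on the middle band where the model is least reliable.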
The risk with models of this type is that users rely blindly on their output and that selection based on specific characteristics becomes biased. For instance, if the model finds that hockey players perform better, on average, than football players, applicants from the latter group might not be selected. This can create a monoculture within the company: developments in the labour market are not taken into account, and the company may be blind to future high performers.
‘Of course, I make sure that this does not happen. First, on a practical level, demographic factors that could lead to discrimination must be excluded. Moreover, the model should be able to correct selection decisions taken on improper grounds.’ Lee would also exclude any data on hockey or football, for instance. ‘Even if such data have predictive value, it would be impossible to identify a valid reason to include them. The model should not become the kind of black box produced by certain machine-learning techniques, but should rather explain why it points to a certain outcome.’
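The two safeguards Lee mentions, stripping improper features and keeping the model explainable, can be illustrated with a minimal sketch. Everything here is an assumption for illustration: the field names, the hand-set weights, and the scoring function are hypothetical, not Lee's model; a transparent weighted sum simply stands in for any interpretable scorer.

```python
# Illustrative sketch (not Lee's actual model): demographic and hobby-type
# fields are dropped before scoring, and the score is a transparent weighted
# sum, so each feature's contribution can be reported as an explanation.

EXCLUDED = {"age", "gender", "nationality", "hobby"}  # assumed field names

WEIGHTS = {  # illustrative, hand-set weights; a real model would learn these
    "years_experience": 0.30,
    "education_level": 0.25,
    "skills_match": 0.45,
}

def explain_score(applicant):
    """Return a total score plus per-feature contributions (the 'explanation')."""
    features = {k: v for k, v in applicant.items() if k not in EXCLUDED}
    contributions = {k: WEIGHTS.get(k, 0.0) * v for k, v in features.items()}
    return sum(contributions.values()), contributions

score, why = explain_score(
    {"years_experience": 4, "education_level": 3, "skills_match": 0.8,
     "hobby": 1, "age": 29}  # excluded fields never reach the scorer
)
print(round(score, 2), why)  # 2.31 {'years_experience': 1.2, ...}
```

Because every contribution is visible in `why`, the output is the opposite of a black box: anyone can see exactly which feature drove the score, and the excluded fields provably played no role.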
Lee has been in touch with companies interested in using his model, but so far has only moved ahead in practice with an alternative model that estimates performance on the basis of data extracted from academic articles and of expert estimates. The model would have a much stronger basis if it could use real historical data, says Lee. ‘The current design comes close, but in fact we are not quite sure how accurate the estimates are. Validating them would require data on candidates’ long-term performance across a range of companies, positions and sectors.’
This is not a simple task. Talks with some large temporary employment agencies about data on workers’ careers have so far failed to produce results. ‘These companies often collect excellent information during the application process, but have little information about performance on the job. And attempts to link application data to on-the-job performance data run up against, among other things, privacy issues,’ says Lee. ‘Moreover, commercial interests may infringe on the independence of academic research in this kind of large-scale research project.’
Meanwhile, Lee has changed tack and begun to build a jobsite for volunteers. Designing extensive questionnaires that collect information from volunteers as well as their employers, while respecting their privacy, is a demanding job.
‘This research is primarily focused on a deeper understanding of the match between the individual and the job. This includes forecasting performance on the job, but also identifying which factors determine how long you will stay with the organisation or the project, and whether you’ll find the job fun and rewarding.’ Still, he also hopes to be able to test his earlier model. ‘In the best scenario, the model will ensure that companies can put the right person in the right place on the basis of the characteristics of the candidate.’ This raises the question whether a model based on information about volunteers can be applied in a wider context. ‘That is something we will need to find out.’