How Employers Keep AI Reliable in an Unpredictable World
AI is increasingly helping employers with the more strategic aspects of managing people, enabling leaders to anticipate what will happen next. The tech can produce a risk-of-exit score, identify an individual’s propensity for promotion, or even estimate how much time will be needed to meet DEI goals.
At first, it seems like magic. But a peek behind the scenes reveals that a lot of human effort goes into maintaining robust AI. Transparent platforms that leverage AI also allow business leaders to see the top factors that lead to a given prediction.
This kind of responsible approach to AI is crucial—particularly following periods of disruption. Consider how the US quits rate has reached historic highs. It’s something nobody would have predicted a year ago had they based their projections solely on past recessions. But applying past patterns to the future is how machine learning (the most common form of AI that companies leverage today) generally works.
In a recent Q&A with Pete DeBellis, Vice President, Total Rewards and People Analytics Research Leader at Deloitte, we explored the pandemic’s impact on the validity of AI-fueled predictions, and how the HR tech community is responding. Here’s what he had to say:
Are there types of AI-powered people decisions that are more vulnerable to disruptions in the historical data than others?
Pete DeBellis: Any model is vulnerable to disruptions in its underlying data. But models that require temporally long data sets to be reliable (e.g. a predictive attrition model built on years of historical data) or those that involve less frequent data refreshes (e.g. a compensation model informed by annual salary surveys) can be particularly susceptible to reduced fidelity in times of disruption.
Further, models built upon underlying assumptions that are upended by disruption (e.g. models of consumer behavior that assumed brick-and-mortar stores were open, or staffing models that assumed work only happened at a company facility) can quickly lose relevance when those assumptions no longer hold true, or when the economic conditions of the environment in which they are applied are dramatically different. Conversely, models that blend internal data with external data, use broader data sets and approaches, and explicitly include a human element to validate and activate insights may prove more resilient in the face of disruption.
What has the HR tech community done on a technical level to ensure predictive machine learning models are still accurate following dramatic shifts in the labor market?
Under normal circumstances, even the best predictive models warrant constant monitoring, care and feeding, and iterative improvement. But the current environment is anything but normal, so the challenges and responsibilities associated with predictive models have been magnified.
There is no silver bullet here, but the HR tech community is rallying to respond to dramatic shifts in the world of work fueled by the COVID-19 pandemic. They are testing and retesting models, and making sure that test and training data reflect the current environment. They are leveraging learnings across the ecosystem and throughout their own user communities. And, perhaps most importantly, responsible members of the HR tech community are staying close to their customers and promoting honest dialogue about which models and analytics may be most vulnerable to these disruptions.
It’s okay to not have all the answers at a time like this—but it’s not okay to act like you do and let your customers make decisions anchored in overestimations of model performance. The ultimate purpose of people analytics is to derive insights that in turn drive more informed action. Staying true to that North Star—turning insights into action—sometimes means acknowledging that tech may not hold all of the answers.
Under what conditions can business leaders and people managers trust that the AI behind their people decisions is still functioning post-pandemic?
This is no time for “black box” approaches and blind faith. It is incumbent upon leaders at all levels to understand the details of how technologies and algorithms are functioning and to ask the tough questions. What are the data sources? What are the calculations being used? How could changes in the underlying data impact our models? Do we need to widen the data set or exclude anomalous data? How are we evolving our testing approaches? Are we doing enough disruptive scenario planning?
And for those closer to the technology, this is a time to test. Then retest. Make sure that your training and test data reflect the current real-world environment. And make the extra effort to ensure that you effectively translate how the technologies and algorithms are actually functioning to the business stakeholders who rely upon them. Our recent people analytics research clearly demonstrates the importance of effective collaboration between people analytics practitioners and their customers—and times like these only raise the stakes for that collaboration.
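One concrete way practitioners check whether training data still reflects the current environment is a distribution-shift test on key model features. As a minimal sketch (the feature name, the synthetic data, and the Population Stability Index thresholds are illustrative assumptions, not from the interview), the snippet below compares the distribution of a tenure feature in pre-disruption training data against recent data:

```python
import math
import random

def population_stability_index(expected, actual, bins=10):
    """Population Stability Index (PSI): compares a feature's
    distribution in training ('expected') data against recent
    ('actual') data. Common rule of thumb: PSI < 0.1 = stable,
    0.1-0.25 = moderate shift, > 0.25 = significant shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0

    def proportions(values):
        counts = [0] * bins
        for v in values:
            idx = min(int((v - lo) / width), bins - 1)
            counts[idx] += 1
        # Small floor avoids log-of-zero for empty bins.
        return [max(c / len(values), 1e-4) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

random.seed(0)
# Hypothetical feature: employee tenure (years), before and after a disruption.
train_tenure = [random.gauss(5.0, 1.5) for _ in range(5000)]   # pre-disruption
recent_tenure = [random.gauss(3.5, 1.5) for _ in range(5000)]  # post-disruption

print(f"PSI vs. held-out training slice: {population_stability_index(train_tenure, train_tenure[:2500]):.3f}")
print(f"PSI vs. recent data:             {population_stability_index(train_tenure, recent_tenure):.3f}")
```

A high PSI on a core feature is exactly the kind of signal that should prompt the retesting, data-widening, and honest stakeholder dialogue DeBellis describes, rather than continued blind reliance on the model.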
Finally, for business folk and technologists alike, there is no time like the present for connecting with peers at other organizations and leveraging the expertise of your vendors to learn from the successes (and failures) of others facing similar conditions and challenges.