
AI: Human After All?

According to the 2022 IBM Global AI Adoption Index, 35% of companies worldwide are now using AI, and another 42% are on the verge. It will become ubiquitous. Computer scientist Kriti Sarma warns of the consequences when human biases slip into algorithms that ultimately decide whether you get a loan, an insurance policy or that new job. But she is no Cassandra, the mythical figure who only foretold bad things. As the founder of AI for Good, Kriti is optimistic: “This is our chance to remake a more equal society.”

As a kid, Kriti Sarma was what we would nowadays call a girl geek. “I used to build useless computers. Then I moved on to candy-fetching robots. I was in the business of automating unhealthy habits (laughs).” But she soon saw that technology could have a social impact: “I grew up in north-western India, witnessing a lot of inequality. I saw how access to knowledge, education and technology was limited to a small group of people. That was the spark for my main idea: use tech to solve problems too important to leave unsolved.”

About 10 years ago, Kriti started to focus on the power of technologists. “As a designer of technology, you have so much impact. But there was a complete lack of debate in the community. We just talked about solving cases, without taking a step back to see if there could be unintended consequences. What we needed was not just a code of ethics, but the ethics of code. Doctors have to abide by principles, lawyers have deontological guidelines, but for us technologists, there wasn’t even a class.”

Is your AI tool racist?

Asked if we need binding regulations on AI, Kriti has a simple answer in store. “Just apply the rules for humans to technology. When recruiting candidates, it’s illegal to discriminate on gender, race and other traits. A human who did would get exposed. But if an AI tool churns out unequal decisions, there is no culture of transparency. Explanations like ‘can’t tell, secret sauce’ are inexcusable. We should simply ask: ‘Is your AI racist or sexist?’ That makes it clear that it’s not about technological complexity, but about human biases that have crept in. What we do need more debate on is the permitted use cases of AI in general.”

“Growing up, I realised that tech can solve problems too important to leave unsolved.”

Kriti Sarma

But something is happening. “You see progress when it comes to data ethics or governance. And cybersecurity is top-of-mind. This hasn’t happened yet for decision ethics. Organisations are still looking for a home for it. But you do see that companies in general are taking their ethical commitments more seriously, hence the rise of ESG. I hope the conversation will broaden, that leaders will understand that business ethics should also include products and tech. Buyers of AI solutions have to set their standards.”

Why your HR department holds the key to the future

Data could improve human decision making. But the opposite also occurs: AI reproduces and amplifies human biases. “The most meaningful change comes from bringing in people with many different backgrounds to build these tools. Then the problem will solve itself. For instance: I have a traditional educational background as a computer scientist. But maybe technology shouldn’t be created entirely by geeks with mediocre social skills like me (laughs). A message to every talent acquisition manager reading this: go for the atypical profile once in a while!”

The pace at which AI develops is impressive. Kriti recalls a telling anecdote: “I once did a project where AI was introduced to take over mundane tasks. After one day it had automated some 50% of the workload. It didn’t take the technology that long to reach 80%. I was so proud. I thought the people at that organisation would love me for getting all that clutter out of their hands. But they hated it. They came up to me and said: ‘Tough challenges used to make up 20% of my time. Now it’s the other way round. It’s all I’m doing, every day.’”

What does it mean for HR in the long run? “How do you train someone, how do you think about learning and development, if you already know machines will outpace you sooner or later? How you manage talent and ambition in the long run should be a main issue for CEOs and HR leaders. This is why I am excited to talk to the HR community in Antwerp this fall: I am convinced they will have the most profound impact on how tech applies to humans and how we shape human potential and use our talents in the future.”