A senior technology executive has warned that the data used to build the algorithms driving artificial intelligence can lead to bias and sexism.
Emily Rich, managing director of startups at Microsoft Australia, told the Startup Grind conference in Melbourne on Monday that a lack of diversity in the data used for artificial intelligence raised serious issues.
“Uber’s self-driving cars were running red lights because they had misclassified data in the algorithms,” she said. “Last year the UK home office put in a passport facial recognition system that could only detect white skin. So these are the kind of catastrophic things that happen, that are actually happening every day, and it’s due to this data that is not diverse.”
Ms Rich said most companies, from startups to the world’s biggest listed companies, were investing more in AI than ever before, but the algorithms were not always correct.
Ms Rich pointed to online job searches, where results for the term ‘CEO’ display only 11 per cent women, and said that while this may be accurate in terms of the number of women CEOs, it means algorithms are surfacing executive job positions only to men.
“The algorithms are learning off data that is saying essentially women can’t be CEOs, that is where it becomes concerning,” she said.
Ms Rich said facial recognition software is 99 per cent accurate for white men, but accuracy drops significantly for women and people from other backgrounds because the largest facial data set in the world is based on 80 per cent men and 75 per cent white men.
“For me, I am not a white man, I still need to work in the world and have facial recognition work for me,” she said. “As soon as you are a woman of colour you are completely shut out of this.”
Skewed data sets were highlighted by Caroline Criado Perez in her book Invisible Women: Exposing Data Bias in a World Designed for Men, which outlines a statistical “silence” about half of humanity.
It’s a problem the government is looking to address with its AI Ethics Framework which includes the principle of fairness.
Last month the National Australia Bank, Commonwealth Bank, Telstra, Microsoft and Flamingo AI signed up to test the principles which state: “Throughout their lifecycle, AI systems should be inclusive and accessible, and should not involve or result in unfair discrimination against individuals, communities or groups.”
Ms Rich said Microsoft was working to create more diverse facial recognition data sets by overlaying existing data sets with new data.
She called on startups and product developers to instil a more diverse way of thinking in order to avoid bias in AI.
This could be done by hiring diverse teams, using and creating high-quality data sets, checking for bias, and auditing algorithms.
“It’s really, really, really crucial that you’re auditing any algorithm that you’re creating to make sure that it’s actually working for everyone,” she said. “Because we’re building products for everyone.”
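The kind of audit Ms Rich describes can be as simple as comparing a model’s accuracy across demographic subgroups rather than reporting a single overall figure. The sketch below illustrates the idea only; the data, group labels and numbers are hypothetical, not Microsoft’s method or any real system’s results.

```python
# A minimal sketch of a per-subgroup accuracy audit. All data below is
# hypothetical and constructed to illustrate the disparity the article
# describes, not drawn from any real facial recognition system.
from collections import defaultdict

def subgroup_accuracy(records):
    """records: iterable of (group, predicted, actual) tuples.
    Returns a dict mapping each group to its accuracy."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, predicted, actual in records:
        total[group] += 1
        if predicted == actual:
            correct[group] += 1
    return {group: correct[group] / total[group] for group in total}

# Hypothetical face-matching outcomes, skewed the way the article describes:
# the system performs well for one group and poorly for another.
results = [
    ("white_men", "match", "match"), ("white_men", "match", "match"),
    ("white_men", "match", "match"), ("white_men", "match", "match"),
    ("women_of_colour", "no_match", "match"), ("women_of_colour", "match", "match"),
    ("women_of_colour", "no_match", "match"), ("women_of_colour", "match", "match"),
]
rates = subgroup_accuracy(results)
print(rates)  # accuracy of 1.0 for one group, 0.5 for the other
```

An aggregate accuracy here would read 75 per cent and hide the gap entirely; breaking the metric out per group is what surfaces the failure.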
Source: smh.com