Machine learning bias, also known as AI bias, occurs when an AI system produces results that are systematically discriminatory or structurally biased because of errors in the machine learning process. Factors that affect the quality of an AI system include the quantity, quality, and variety of its training data.
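One simple way to check the variety of a training set is to measure how each demographic group is represented in it. The sketch below is a minimal, hypothetical example (the group names and counts are invented for illustration), not a real auditing tool:

```python
from collections import Counter

def group_representation(samples):
    """Return each group's share of the dataset as a fraction of the total."""
    counts = Counter(samples)
    total = sum(counts.values())
    return {group: count / total for group, count in counts.items()}

# Hypothetical demographic labels attached to a training set.
labels = ["group_a"] * 800 + ["group_b"] * 150 + ["group_c"] * 50
shares = group_representation(labels)
# group_a makes up 80% of the data, a red flag for representativeness.
```

A model trained on data like this will see far more examples of the dominant group, which is one of the mechanisms behind the accuracy gaps described below.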
Impacts of machine learning bias
Machine learning bias affects people’s everyday lives. For example, facial recognition systems are less accurate at recognizing the faces of women and people of colour than those of men and white people. Notably, research has consistently shown that facial recognition systems have the poorest accuracy for Black women aged 18 to 30. This leads to more prejudiced errors in law enforcement surveillance, employment decisions, and airport passenger screening, all common uses of facial recognition technology.
Marginalized communities are also affected by machine learning bias in the job market. Today, many large companies use recruiting algorithms to sift through the thousands of resumes they receive daily, but not all of these algorithms are fair. Amazon’s experimental recruiting tool, for example, reportedly penalized resumes containing certain word patterns and favoured men over women by downranking resumes that included the word “women’s”.
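The mechanism behind this kind of resume bias can be illustrated with a toy scorer. Everything here is hypothetical (the word weights are invented, and this is not Amazon's actual system); it only shows how a single learned word weight can penalize otherwise identical qualifications:

```python
# Hypothetical word weights a biased model might learn from historical
# hiring data. A negative weight on "women's" encodes gender bias directly.
WORD_WEIGHTS = {"engineer": 2.0, "leadership": 1.5, "women's": -3.0}

def score_resume(text):
    """Score a resume as the sum of the weights of its known words."""
    return sum(WORD_WEIGHTS.get(word, 0.0) for word in text.lower().split())

biased = score_resume("captain women's chess club engineer")
neutral = score_resume("captain chess club engineer")
# The same qualifications score lower when "women's" appears.
```

Because the model learned its weights from past hiring decisions, it reproduces the bias in that history rather than evaluating candidates on merit.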
How to prevent AI bias
Machine learning bias creates tangible harm by disadvantaging marginalized groups while elevating the privileged. These biases can be reduced through careful review and auditing of algorithms, more representative training data, and efforts to remove bias from the real-world settings that the data reflects.
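One concrete form of algorithm review is to disaggregate a model's accuracy by demographic group rather than reporting a single overall number. The sketch below assumes hypothetical prediction records of the form (group, predicted, actual); it is a minimal illustration of the idea, not a full fairness-auditing framework:

```python
def accuracy_by_group(records):
    """Compute per-group accuracy from (group, predicted, actual) tuples."""
    correct, total = {}, {}
    for group, predicted, actual in records:
        total[group] = total.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    return {group: correct[group] / total[group] for group in total}

# Hypothetical evaluation results for two groups.
records = [
    ("group_a", 1, 1), ("group_a", 0, 0), ("group_a", 1, 1), ("group_a", 1, 0),
    ("group_b", 0, 1), ("group_b", 1, 0),
]
per_group = accuracy_by_group(records)
# group_a is right 3 of 4 times; group_b is wrong on both examples.
```

An overall accuracy figure would hide this gap entirely, which is why disaggregated evaluation is a standard first step in reviewing an algorithm for bias.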