US Government Agencies Team Up to Fight AI Abuse


The US Department of Justice (DoJ), Consumer Financial Protection Bureau (CFPB), Equal Employment Opportunity Commission (EEOC), and Federal Trade Commission (FTC) have released a joint statement on fighting bias and discrimination arising from the use of artificial intelligence (AI).

As the statement notes,

These automated systems are often advertised as providing insights and breakthroughs, increasing efficiencies and cost-savings, and modernizing existing practices. Although many of these tools offer the promise of advancement, their use also has the potential to perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes.

Many automated systems, including those based on AI, are trained on huge amounts of data, which they scour to find patterns that are then used to perform tasks, make recommendations, and make predictions.

Unfortunately, these systems can create or perpetuate biases or otherwise produce faulty results.

According to an article in the Harvard Business Review,

Human biases are well-documented, from implicit association tests that demonstrate biases we may not even be aware of, to field experiments that demonstrate how much these biases can affect outcomes. Over the past few years, society has started to wrestle with just how much these human biases can make their way into artificial intelligence systems — with harmful results. At a time when many companies are looking to deploy AI systems across their operations, being acutely aware of those risks and working to reduce them is an urgent priority.

Bias in AI systems can arise for several reasons:

  • Unrepresentative or imbalanced datasets can skew outcomes.
  • Datasets may incorporate the historical biases of humans who made past decisions.
  • Models may reflect the biases of their developers.
  • Automated systems may be “black boxes” whose internal workings are unclear, making it impossible to determine whether they’re “fair”.
  • Developers may not understand the contexts in which their systems will be used.
  • Developers may use flawed assumptions about datasets, users, context, etc.

As this blog explains, an example of AI bias is a facial recognition algorithm that recognizes white faces more reliably than black faces because white faces appear far more often in its training data.
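
To make that mechanism concrete, here is a minimal, hypothetical sketch in Python (made-up numbers, no real facial data, and not any vendor’s actual system) of how a model trained mostly on one group’s examples can end up less accurate for an under-represented group:

```python
# A hypothetical toy model, not any real facial recognition system.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def make_group(n, pos_center, neg_center):
    """Generate a toy 1-D feature whose distribution differs by group."""
    x = np.concatenate([rng.normal(pos_center, 1.0, n // 2),
                        rng.normal(neg_center, 1.0, n // 2)])
    y = np.concatenate([np.ones(n // 2), np.zeros(n // 2)])
    return x.reshape(-1, 1), y

# Group A dominates the training data (9,000 vs. 1,000 examples),
# and its feature distribution differs from group B's.
xa, ya = make_group(9000, pos_center=2.0, neg_center=-2.0)
xb, yb = make_group(1000, pos_center=3.0, neg_center=0.5)

model = LogisticRegression().fit(np.vstack([xa, xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh samples from each group: accuracy is high for the
# over-represented group and noticeably lower for the under-represented one.
xa_test, ya_test = make_group(2000, 2.0, -2.0)
xb_test, yb_test = make_group(2000, 3.0, 0.5)
print("group A accuracy:", accuracy_score(ya_test, model.predict(xa_test)))
print("group B accuracy:", accuracy_score(yb_test, model.predict(xb_test)))
```

The specific numbers don’t matter; the point is that the model’s decision rule is dominated by the majority group’s data, so its errors fall disproportionately on the minority group.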

In 2019, researchers found that an algorithm used by US hospitals to predict which patients would require additional medical care favored white patients over black patients. The algorithm based its predictions on the patients’ past healthcare spending. However, the results were skewed in part because black patients tended to spend less on their medical care than white patients with the same conditions, so spending understated how much care they actually needed.
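
That kind of skew is easy to reproduce in a toy model. The sketch below uses hypothetical numbers (not the study’s actual data or algorithm): two groups have identical distributions of medical need, but one spends less for the same need, so a system that flags patients by spending under-enrolls that group.

```python
# Hypothetical, made-up numbers; not the study's actual data or algorithm.
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Both groups have the same distribution of true medical need.
need_a = rng.gamma(shape=2.0, scale=1.0, size=n)
need_b = rng.gamma(shape=2.0, scale=1.0, size=n)

# Group B spends less for the same level of need (e.g., unequal access to
# care), so past spending understates how sick its members are.
spend_a = need_a * 1000
spend_b = need_b * 700

# A system that flags the top 10% of patients by spending for extra care
# ends up enrolling far fewer group-B patients, despite equal need.
spending = np.concatenate([spend_a, spend_b])
group = np.array(["A"] * n + ["B"] * n)
selected = spending >= np.quantile(spending, 0.90)
print("share of flagged patients from group B:",
      round((group[selected] == "B").mean(), 3))  # well below 0.5
```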

In another example, a study found that Google ads displayed high-paying positions to men much more often than to women. This may have happened because employers targeted their ads at men, or because men were more likely to click on such ads, leading the system to show the ads to men even more often.
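
The second explanation is a feedback loop, which the toy simulation below illustrates. The click rates and the epsilon-greedy serving rule are assumptions for illustration only, not a description of how Google’s ad system actually works:

```python
# Hypothetical click rates and serving rule; not how any real ad system works.
import numpy as np

rng = np.random.default_rng(2)

true_ctr = {"men": 0.06, "women": 0.05}  # assumed click-through rates
clicks = {"men": 0, "women": 0}
shows = {"men": 0, "women": 0}

def observed_ctr(audience):
    return clicks[audience] / shows[audience] if shows[audience] else 0.0

for _ in range(100_000):
    # Epsilon-greedy serving: usually pick the audience with the higher
    # observed click rate, occasionally explore the other one.
    if rng.random() < 0.05:
        audience = rng.choice(["men", "women"])
    else:
        audience = max(true_ctr, key=observed_ctr)
    shows[audience] += 1
    clicks[audience] += rng.random() < true_ctr[audience]

print("ad impressions shown to men:  ", shows["men"])
print("ad impressions shown to women:", shows["women"])
# A one-percentage-point gap in click rates becomes a much larger gap in
# who sees the ad at all.
```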

It’s possible that such technology-driven biases can be addressed by technology designed with an awareness of those biases.

For example, Clearview AI has been awarded patents for its supposedly bias-free facial recognition algorithms. And as AP reported, Cangrade was awarded a patent for its “bias-free hiring and talent management solution.”

Categories: Patents