Bias in Machine Learning Projects

Should we fear that automated decisions made by Machine Learning models are unfair?

Luigi Saetta
Towards Data Science


Introduction.

While preparing a webinar, I was thinking about which of the many subjects that came to my mind would be interesting to cover.

I thought that, to be successful in delivering a Machine Learning project, one should really be aware of the pitfalls: the points during the project where you can make important mistakes.

One area of concern is surely bias. For this reason, I decided to devote some of my time to it and prepared the following notes.

What is Bias and why does it matter?

Well, let’s start with a definition. And I will take the one coming from Wikipedia: “Bias is a disproportionate weight in favor of or against an idea or thing, usually in a way that is closed-minded, prejudicial, or unfair. … People may develop biases for or against an individual, a group, or a belief. In science and engineering, a bias is a systematic error”.

So, we could start by saying that bias is a systematic error in our Machine Learning (ML) model.

When we develop a supervised model (for now, let us limit our discussion to this type of model), given a set of inputs X = (x1, …, xn) we want to be able to predict the most probable value of a related target y. We assume there is a meaningful relationship between X and y,

y = f(X)

and we want to train a model, starting from a large set of examples, where for each sample we give the input values (X) and the corresponding expected value y.

The model uses some very general algorithm (for example, a Neural Network), but it learns all its knowledge from the data used during training.

Usually, during the development of the model, one part of the available data is held out for validating the model. In this phase we normally want to verify whether the model has learned generalizable knowledge; in other words, whether it performs well on data unseen during training. But, before the model is ready to be used in production on real-world data, many tests should be done to verify that it is correct.
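To make the idea concrete, here is a minimal sketch of the hold-out approach, assuming scikit-learn and a synthetic dataset standing in for real data:

```python
# Minimal hold-out sketch: train on one part of the data, validate on the rest.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data, a placeholder for a real set of (X, y) examples.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# Hold out 20% of the samples for validation, keeping class proportions.
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42, stratify=y
)

model = RandomForestClassifier(random_state=42)
model.fit(X_train, y_train)

# Evaluate on data the model has never seen during training.
print("validation accuracy:", accuracy_score(y_valid, model.predict(X_valid)))
```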

We might discover that our model tends to give favorable predictions to some samples (which could correspond to people) and less favorable ones to others, in a systematic and unjustified way. It is good if we discover this issue before using the model in production.

One example: let’s imagine that we have developed a Deep Learning model that could help us in diagnosing melanoma, using as input a picture of a portion of the skin.

Based on our validation set, it seems that our model has a nice accuracy (90%?). But then we start a field evaluation study and we discover that the model doesn’t work well on people with dark skin: it systematically tends to have a higher error rate for this group of people. We have discovered a bias in our model.

It is definitely a mistake we have made somewhere. But it is a mistake that produces errors systematically, not randomly.

We do a careful analysis of the data collection process and we discover that our dataset contains a higher percentage of fair-skinned people, simply because it is well known that those people have a higher risk coming from exposure to the sun (which is one of the risk factors). Adding more examples of lesions from people with dark skin helps to improve the accuracy on this sub-population and reduce the bias in our model.
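As an illustration of how such a check could look, here is a small sketch with made-up data: the group label "skin_tone" and the error probabilities are hypothetical, but the idea of comparing error rates per sub-population is the general one.

```python
# Compare the error rate of a classifier across sub-populations.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Dummy validation results, standing in for the real ones.
results = pd.DataFrame({
    "skin_tone": rng.choice(["fair", "dark"], size=500, p=[0.9, 0.1]),
    "y_true": rng.integers(0, 2, size=500),
})

# Hypothetical predictions: the error probability is higher for one group,
# mimicking the melanoma example above.
p_err = np.where(results["skin_tone"] == "dark", 0.30, 0.10)
results["y_pred"] = np.where(
    rng.random(500) < p_err, 1 - results["y_true"], results["y_true"]
)

# Error rate per group: a large, systematic gap is a signal of bias.
error_by_group = (
    results.assign(error=results["y_true"] != results["y_pred"])
           .groupby("skin_tone")["error"]
           .mean()
)
print(error_by_group)
```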

Is this something that necessarily has to do with ethics, with what we think is fair or unfair?

Well, this is probably one of the big questions we would like to answer. But it is not the only one. For now, we can say yes and no.

Let me try to express my first idea: every time we develop a rule that in some way treats different groups of people differently, we should ask ourselves if it is fair. And the answer is not easy, because there is no universally accepted definition of what is fair and what is not.

Fairness cannot be measured mathematically (this doesn’t mean that we shouldn’t ask the question; it means that the answer is not easy).

But here the problem is often perceived as bigger, because we have an ancestral fear that computers, sooner or later, will make decisions instead of us. And, at the end of the story, an ML model is something that makes decisions.

First of all, the different sources of bias in ML.

As in every field of science, before drawing any conclusions we should gather all the relevant information (and in this process, we should be unbiased!).

So, we should first try to understand where bias could come from.

The development of an ML model is almost always a long process, with a series of steps. Let’s try to make a summary:

  1. Definition of the question (or questions) the model should answer.
  2. Identification of the sources of data
  3. Data collection
  4. Data extraction, transformation
  5. Generation of the train, validation, and test set
  6. Definition of the kind of algorithms we want to adopt
  7. Training of the algorithm
  8. Evaluation of the trained model
  9. Deployment of the model
  10. Documentation
  11. Monitoring of the model’s performance over time

Well, probably this is not the only way to summarize the development process. But I think it is a good one to support our discussion.

As we look at the steps detailed above, we should recognize that in every one of them we can make mistakes. And some of these mistakes could lead our model to produce systematic errors on one or more sub-populations of the samples.

Therefore, without any deliberate intent, we can introduce bias in our model.

What are the different types of bias?

As we said, we can introduce bias in different steps of the development of an ML model, and therefore we can have different types of bias. Having a classification can be useful to guide us in the process of identifying possible sources of bias in our model.

One useful classification has been proposed by two researchers in the article “A Framework for Understanding Unintended Consequences of Machine Learning Models” (H. Suresh, J. V. Guttag, MIT, Feb. 2020).

In the article, they identify these six categories:

  • Historical Bias
  • Representation Bias
  • Measurement Bias
  • Aggregation Bias
  • Evaluation Bias
  • Deployment Bias

The first three arise in data generation and collection; the last three in model building, evaluation, and deployment.

They also provide a nice diagram that helps us to link the different categories to the different steps in the development of an ML model:

(Source: H. Suresh, J. V. Guttag, MIT, Feb. 2020, https://arxiv.org/pdf/1901.10002.pdf)

I would add some considerations to the above picture.

  1. As you can see, there are multiple, different sources of bias
  2. There are also other ways to introduce bias: for example, when you split the available data between a training and a validation set, if the split is not done correctly you could end up evaluating and optimizing the model on a set that does not have the correct distribution. Or you could “leak” some information from the training set into the validation set and measure an accuracy that you won’t find in reality (a common case of leakage is sketched below)
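As an example of the leakage mentioned in point 2, here is a sketch (assuming scikit-learn and synthetic data) of a preprocessing step fitted on the full dataset versus on the training set only:

```python
# Leakage example: preprocessing fitted on the full dataset "sees" the
# validation samples, so the measured accuracy can be optimistic.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Wrong: the scaler is fitted on training AND validation data.
# scaler = StandardScaler().fit(X)

# Right: fit the scaler on the training set only, then apply it to both sets.
scaler = StandardScaler().fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_valid_scaled = scaler.transform(X_valid)
```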

To sum up, we must carefully check the entire process to avoid bias. Running multiple tests, on more and more data, helps and, as a personal suggestion, so does having a team of people with different areas of expertise and backgrounds.

Feedback loops.

Another problem, related to bias, is that ML models can create feedback loops.

What is a feedback loop? A feedback loop arises when the model wrongly produces the “next generation of input data”, amplifying a bias or some incorrect predictions produced by the model itself.

Let’s try to explain with a (somewhat realistic) example: imagine that a group of people wants to publicize a “fake theory”. They produce a nice, well-crafted video where they explain their theory. Then they publish this video on a social media platform whose recommendation system is based only on ratings from users. All the people in this group give very high ratings to the video.

The recommender engine, based on these ratings, will start to recommend this video to other people and, if these people give some more positive ratings, there is a chance that a positive loop starts, where more and more people will see the video.

Well, this problem is not hypothetical. For example, there have been articles in the past suggesting that the YouTube engine has created something of this sort. And we all know the debate about whether other social media platforms can contribute, for example, to the spread of fake news or hate speech.

One example comes from the New York Times article “YouTube unleashed a conspiracy theory boom. Can it be contained?”.

An interesting paper, titled “Feedback Loop and Bias Amplification in Recommender Systems”, can be found on arXiv. In this case, the bias created can be called “popularity bias”: the model tends to push items based on their popularity and not on their intrinsic quality, and this bias can be amplified by algorithms such as Collaborative Filtering, as analyzed in the article.
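To give an intuition of how such a loop can amplify popularity regardless of quality, here is a toy simulation. It is not the algorithm analyzed in the paper, just an illustration with made-up numbers: the system always recommends the currently most-viewed item, so the initially most popular item keeps accumulating views no matter its quality.

```python
# Toy popularity feedback loop: recommendations are driven only by view counts.
import numpy as np

rng = np.random.default_rng(0)

views = rng.integers(1, 10, size=5).astype(float)  # initial views of 5 items
quality = rng.random(5)  # intrinsic quality, never used by the recommender

for step in range(1000):
    recommended = int(np.argmax(views))  # recommend the most-viewed item
    if rng.random() < 0.5:               # some users watch the recommendation
        views[recommended] += 1

print("quality:", np.round(quality, 2))
print("views:  ", views)  # the initially popular item dominates, whatever its quality
```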

Model Explainability.

One of the reasons why bias can be difficult to identify is the fact that ML models are sometimes used as a (magic) black box. You verify that the model has a high enough accuracy on the validation and test sets, and that’s it. Then you ignore the fact that you are not able to explain why the model produces some predictions instead of others (why, for example, the model decides that it is too risky to give a loan to a particular customer).

This can happen, especially if you’re using large neural networks with many layers.

But, even if it is complicated to explain, we shouldn’t accept such a situation. First of all, because there is a real risk that the model contains bugs and maybe bias. Second, because this is unacceptable if the use of the model has social consequences. Third, because there may be regulations on the subject.

One area of active research in ML is Model Explainability and, in recent years, many techniques have been developed that can be employed to support the explanation of even complex models.

From a general point of view, we talk about:

  • Global explainability: when we want to describe in general how the model works; for example, which features are the most important and which are not so important (a sketch follows this list)
  • Local explainability: when we want to explain why the model has made a single, specific prediction (for example, for a single patient)
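As a sketch of the global view, here is one common technique, permutation importance, applied to a placeholder model and synthetic data (not a specific model from this article): it ranks features by how much validation accuracy drops when each one is shuffled.

```python
# Global explainability sketch: permutation feature importance with scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)

model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

# Shuffle each feature on the validation set and measure the accuracy drop.
result = permutation_importance(model, X_valid, y_valid, n_repeats=10, random_state=42)

for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f}")
```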

One technique used to explain even complex, non-linear models is LIME (Local Interpretable Model-agnostic Explanations), where we approximate the model locally (locally means: around the specific sample) with a linear one, in order to identify the effects of varying the features on the prediction. See, for example, https://christophm.github.io/interpretable-ml-book/lime.html
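Here is a minimal sketch of how LIME could be used, assuming the third-party lime package (pip install lime) and a placeholder model trained on synthetic data; the feature and class names are made up for the illustration.

```python
# Local explainability sketch: explain one prediction with LIME.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_valid, y_train, y_valid = train_test_split(X, y, random_state=42)
model = RandomForestClassifier(random_state=42).fit(X_train, y_train)

explainer = LimeTabularExplainer(
    X_train,
    feature_names=[f"f{i}" for i in range(X.shape[1])],  # hypothetical names
    class_names=["low risk", "high risk"],               # hypothetical labels
    mode="classification",
)

# Explain a single, specific prediction by fitting a local linear surrogate.
explanation = explainer.explain_instance(X_valid[0], model.predict_proba, num_features=5)
print(explanation.as_list())
```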

One last remark, interesting for people living in the EU: some people tend to incorrectly assert that the GDPR defines a “right to explanation”. This is not true, and it is not a matter that could be regulated so simply.

In general, there is an article (Art. 22) in the GDPR regarding the possibility of completely automated decisions. This article applies if three conditions are fulfilled:

  • There is a completely automated decision process
  • The process works on personal data
  • It has the goal of evaluating personal aspects of a natural person

In this case, the GDPR forbids a completely automated decision process if it produces legal effects on the person concerned or similarly significant effects on them.

It doesn’t mean that you can’t use an ML model but, in the scenario depicted above, the process requires human intervention (the decision cannot be made entirely by a machine).

Also, the GDPR does not mandate a “right to explanation”: it would be rather difficult, and probably in the majority of cases ineffective, to try to explain to people the inner workings of an ML model. But it does mandate a “right to information” and a “right to object”.

Finally, among people working on the development of ML models, we have the knowledge and the tools to understand why a model produces certain decisions, and therefore we shouldn’t accept that a model is used as “a black box”, without any comprehension of how it works.

Conclusions.

As you probably understand clearly by now, the risk of bias in ML models is a problem that can be analyzed and, if not removed, greatly reduced.

There is great concern regarding the adoption of models for “automated decisions”, especially because most people don’t understand how these models work and fear that those decisions may be biased and unfair.

But I think that, first of all, we should consider bias as a systematic error: as Data Scientists and ML Engineers, we should do our best to avoid or reduce these kinds of errors. Then, we should never treat a model as a “black box” and should always try to explain its decisions and outcomes, to see if they make sense.

Obviously, we shouldn’t forget that the outcomes of an ML model can have social and ethical consequences.

However, these kinds of problems should not prevent us from seriously considering the great benefits that can be realized, even in the social field, from the adoption of AI.

More information.

If you want more information, I suggest the articles listed in the “References” section, from which I have taken some information and pictures.

One book that I would greatly recommend is the one in ref. [3]. Chapter 3 of the book was authored by Dr. R. Thomas and contains a rather extensive examination of bias, feedback loops, and other subjects linked to data ethics.

References.

[1] A Framework for Understanding Unintended Consequences of Machine Learning Models, H. Suresh, J. V. Guttag, MIT, https://arxiv.org/pdf/1901.10002.pdf

[2] Feedback Loop and Bias Amplification in Recommender Systems, M. Mansoury, H. Abdollahpouri, M. Pechenizkiy, B. Mobasher, R. Burke, https://arxiv.org/pdf/2007.13019.pdf

[3] Deep Learning for Coders with fastai & PyTorch, J. Howard and S. Gugger, O’Reilly, 2020.

[4] Interpretable Machine Learning, https://christophm.github.io/interpretable-ml-book/

Originally published at https://luigisaetta.it on November 9, 2020.
