AI, Discrimination and Law: A First Look at Legal Research

Our blog author Saskia had a new experience when explaining her research to a student who visited her as part of the so-called "Berufspraktische Tage".

A couple of weeks ago, I found myself confronted with an unusual task. Many Austrian schools expect their 7th or 8th grade students (13-15 years old) to complete so-called “Berufspraktische Tage”, where they spend some days at a workplace to gain practical experience. One student decided to visit us at the Department of Innovation and Digitalisation in Law and get a first impression of legal research.

Since AI, discrimination, and law is one of the topics we are currently researching, I decided to discuss it with the student in a way that would ideally be useful for both of us.

As a starting point, we took the phrase “algorithmic discrimination” and tried to make sense of it. The student had not encountered the term “algorithm” before, so I tried to explain it. After struggling for a while, I realized my explanation had room for improvement. In the end, however, we settled on a common understanding: an algorithm is a sort of computer programme that offers solutions to a problem. I should add that we mixed up the terms algorithm, machine, computer programme and AI a lot, but I hope the student, and anyone reading, will forgive me this accuracy-explainability trade-off. For our purposes, the definition worked. We also discussed the term discrimination, and examples that the student was familiar with, before diving into algorithmic discrimination.

One story that gave us a really good basis for discussion was the debate over A Levels and GCSEs[1]: In 2020, due to the Covid-19 pandemic, students in British schools were unable to take their final exams[2]. The proposed solution was an algorithm that took into account teachers’ estimates of the marks students would have received, based on their past performance. With this, we already stumbled upon one of the core problems of machine learning: Can we predict the future by learning from past patterns? Or, put differently, what about the students who maybe didn’t do amazingly over the last couple of years, but who really stepped up in the final months? On the other hand, what if teachers, and therefore the input data, are biased?

The story became more controversial when it was decided to use an algorithm that, in addition to teachers’ estimates, used overall past performance of each school. This significantly lowered grades, but more importantly, it lowered grades unevenly. The algorithm downgraded students at state schools more than at private schools. This was not only because of past performance, but also because it was decided that in smaller classes, overall performance of the school would be weighted less, and teachers’ opinions more. Class sizes in private schools tend to be significantly smaller than in state schools. The algorithm was eventually scrapped after outrage over these differences.
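To make the weighting idea more concrete, the logic can be sketched roughly as follows. This is a deliberately simplified, hypothetical illustration in Python, not the actual Ofqual model: the function, weights and numbers are invented for the sake of explanation.

```python
# Simplified, hypothetical sketch of the weighting idea -- NOT the real Ofqual model.
def predicted_grade(teacher_estimate: float, school_historical_average: float,
                    class_size: int, small_class_threshold: int = 15) -> float:
    """Blend a teacher's estimate with the school's past results.

    The smaller the class, the more weight the teacher's estimate gets;
    large classes lean mostly on the school's historical performance.
    """
    if class_size <= small_class_threshold:
        teacher_weight = 0.9   # small class: trust the teacher almost entirely
    else:
        teacher_weight = 0.3   # large class: rely mainly on past school results
    return teacher_weight * teacher_estimate + (1 - teacher_weight) * school_historical_average

# Same teacher estimate, but a small class at a school with strong past results
# versus a large class at a school with weaker past results:
print(predicted_grade(teacher_estimate=85, school_historical_average=70, class_size=12))  # 83.5
print(predicted_grade(teacher_estimate=85, school_historical_average=60, class_size=30))  # 67.5
```

Even in this toy version, the same teacher estimate produces very different results depending on class size and the school’s past performance, which is essentially what drove the uneven downgrading.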

We also revisited the AMS algorithm story that made the news a couple of years ago in Austria. The Austrian “Arbeitsmarktservice” (Public Employment Service) introduced an algorithm in 2018 to calculate job seekers’ chances of finding employment, thus determining the type of support they should receive. Based on a score produced by the algorithm, job seekers were split into three groups, which directly affected the kind of support or training they were offered. Apart from criticism of a lack of transparency, several reports stated that the algorithm unfairly disadvantaged women with caring responsibilities (but not men), people with health issues, and older age groups.[3]
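The basic mechanics of such a score-based grouping can again be sketched in a few lines. This is a hypothetical Python illustration: the real AMS model’s features, coefficients and thresholds are not reproduced here, so all numbers below are invented.

```python
# Hypothetical sketch of a score-based grouping -- coefficients and thresholds are invented.
def employment_score(years_unemployed: float, age: int, has_caring_duties: bool,
                     has_health_issue: bool) -> float:
    """Return a rough 'chance of finding employment' score between 0 and 1."""
    score = 0.8
    score -= 0.05 * years_unemployed
    score -= 0.10 if age >= 50 else 0.0
    score -= 0.10 if has_caring_duties else 0.0   # in the reported model, counted for women only
    score -= 0.10 if has_health_issue else 0.0
    return max(0.0, min(1.0, score))

def support_group(score: float) -> str:
    """Split job seekers into three groups based on the score."""
    if score >= 0.66:
        return "high chances: little support needed"
    if score >= 0.25:
        return "medium chances: most support offered"
    return "low chances: limited support"

print(support_group(employment_score(years_unemployed=1, age=52,
                                     has_caring_duties=True,
                                     has_health_issue=False)))  # medium chances
```

The sketch shows how a fixed deduction for a characteristic such as caring responsibilities bakes the disadvantage directly into the score, and with it into the group a person ends up in.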

The student helped me gather some information on these stories, in particular on how they were publicly received, and on which legal instruments were mentioned in relation to the use of these algorithms.

We talked about different stages of the student’s life: when you are admitted to university, when you are hired, promoted or fired from a job, when you take out insurance or a mortgage, whether you receive a particular medical treatment… Would you always want a human being to make these decisions? Why? Isn’t the human being also biased?

Two key points that stood out to me were the following:

  • First, that both of us, before talking each situation through, had an instinctive preference for human-made decisions. Although this is already a much-researched topic,[4] it was interesting to discuss how we felt about this in practice. I had a quick look afterwards at whether younger generations still seem to have this “suspicion” towards automated decisions, and my first impression was that there are many interesting questions here that are not really in the spotlight at the moment. Surely, with all the hype around “Trustworthy AI”, we should be discussing what is trustworthy to the next generations growing up with AI?
  • Secondly, aside from the potential discriminatory aspects we had been focusing on, we discussed concerns about machine-made errors, and about to whom a complaint should be addressed when no human is involved in the decision-making process. This concern is reflected in many legal and policy documents. For example, the GDPR[5] limits the cases in which people may be subject to solely automated decisions. In the exceptional cases in which data subjects may be subjected to solely automated decision-making, they must always at least have the possibility to obtain human intervention and to contest the decision.

This brought us to the next issue: How does all this link to legal research? The link to law emerged most clearly by asking two questions: What do we want our society to look like? And how can we achieve this? Law, of course, isn’t the only tool to shape technology and society, and we could have had a much longer discussion on how technology and society also shape the law, but there is only so much we could cover in one day.

Overall, for me it was a really positive experience. It forced me to try and find effective (albeit not perfect) definitions for terms we use every day, to give very clear answers and examples, and to reflect on why critical perspectives on digital practices matter. That is something I want to keep thinking about going forward, not just when interacting with younger people.

Ironically, at the end of the day, I asked the student where their interest in Law actually stemmed from. As it turns out, the student had taken part in an automated skills test at school, and it had recommended that they might want to choose a legal career...


[1] https://www.bbc.com/news/explainers-53807730

[2] GCSE exams at age 16, and A Level exams at age 18

[3] https://algorithmwatch.org/de/wenn-algorithmen-ueber-den-job-entscheiden/ ; https://www.derstandard.at/story/2000114974300/ams-algorithmus-forscher-warnen-vor-diskriminierung-und-bemaengeln-fehlende-transparenz

[4] https://dl.acm.org/doi/abs/10.1145/3411764.3445570

[5] Article 22

Computer screen with digital data, blurred outlines, with a pair of glasses in front of it bringing the data into sharp focus

Photo by Kevin Ku on Unsplash