PhD Candidate

Cognitive Science
Abeba Birhane is a PhD candidate in cognitive science at University College Dublin and Lero. Her interdisciplinary research focuses on the dynamic and reciprocal relationship between ubiquitous technologies, personhood, and society. Specifically, she explores how technologies that are interwoven into our personal, social, political, and economic spheres are shaping what it means to be a person. Abeba is passionate about communicating research, not just to inform the public but also to effect change.

My background is in cognitive science, and an element of my work still revolves around the question of cognition, but cognition is not the centre of my research. Rather, I look at how our day-to-day lives and interactions are influenced by machine learning systems; how the machine learning systems that are pervasive in our lives are shaping and influencing what it means to exist. Most of the time, when we design and deploy machine learning systems, we create them in a way that perpetuates existing norms and historical injustices. The core of my research is to constantly question and interrogate these systems, to make sure that, from design to deployment, we don't embed biases, harms, and injustices in them.

There are many cases of this problem in practice. For example, the UK supermarket chain Co-Op are currently implementing a machine learning system in their supermarkets, and the idea is that the system will be able to identify a potential shoplifter or someone who might engage in “undesirable” behaviour. However, there is no scientific way of telling whether someone will shoplift, or whether someone is a suspicious person. The very premise that you can judge a person by the way they walk or by their facial expressions is rooted in long-discredited pseudoscience. What these systems end up doing is looking at historical data about the kinds of people who have been suspected of shoplifting or criminal behaviour in the past, and predicting potential criminals by the way they look. So the system simply picks up social stereotypes, and if you fit that stereotype you end up being suspected, or in some cases action might even be taken against you. In the United States, for example, we recently saw the third person to be wrongfully detained due to facial recognition systems. All three of these people are Black men, as these systems tend to disproportionately impact racial minorities, lower-class people, disabled people, and those generally at the margins of society. Of course, we know of three cases as of now, but it’s likely that many more people have fallen victim to wrongful arrests due to facial recognition systems; they may simply not have the platform to tell their stories, or even the means to contest a facial recognition system. The more these systems are implemented, the more we see these disastrous impacts on real people.

Even within the short time I have been involved with Lero, I have seen that it is a tightly-knit community that is super supportive. I have also come to appreciate the scientific values that people in leadership positions aspire to. There is a culture of openly questioning legitimacy and a push for higher standards of improvement, but most importantly there is a great initiative from the top to put accountability, transparency, and ethical software research at the top of the research agenda. For me, because my work is very critical of software research and Artificial Intelligence in general, the fact that Lero also aspires to an ethical and equitable approach to software really is just ideal.

"My hope for the future of my research area is for openness of critical thinking within the machine learning community. I am hoping for a culture that questions, criticises, and examines machine learning systems, instead of just blindly trusting them to do impossible things.

 

I have always been an advocate of communicating one’s research, not only to the scientific community but also to the general public. I believe that continually communicating and interacting with people is a key part of doing science. It’s really not sufficient just to run experiments, produce results, and write peer-reviewed papers. It’s also crucial to raise awareness of the increasing expansion of machine learning systems into the public sphere, because this is not just a matter of pure research; it is also a matter of people’s lives being impacted. A lot of these systems operate under the radar. Many banks here in Ireland, for example, are integrating machine learning into their systems. Many big organisations use machine learning in hiring, where machines sift through CVs before they are assessed by a human. But these things take place behind closed doors, and for the most part we are not aware of them. Whether you are applying for a job, getting a loan from the bank, or even just walking into a supermarket, you are likely in an environment where AI systems operate in the background and your behaviour and actions are assigned a score, all without your awareness or consent. For me, part of doing research is creating awareness that there are more and more AI systems embedded in our lives. Effectively communicating this and raising awareness is just as important as running experiments and writing peer-reviewed scientific papers.

My hope for the future of my research area is an openness to critical thinking within the machine learning community. I am hoping for a culture that questions, criticises, and examines machine learning systems, instead of just blindly trusting them to do impossible things. We put so much faith in machine learning systems. When we see that machine learning is involved, we tend to lose our critical faculties and resort to blindly trusting those systems, so I hope to see the elimination of this blind faith and a move toward critical questioning. Most importantly, I am hoping for tighter regulation of how these systems are implemented in society, and for much more transparent practice, where organisations and public spaces are open about using these systems, rather than leaving it to investigative journalists to find out and make it public.