Trinity College Dublin and ADAPT Establish AI Accountability Lab

28 November 2024

The AI Accountability Lab, led by Dr Abeba Birhane, will be housed in the ADAPT Research Ireland Centre in Trinity’s School of Computer Science and Statistics

A new research group designed to advance AI accountability research launches today at Trinity College Dublin. The AI Accountability Lab (AIAL) will be led by Dr Abeba Birhane, Research Fellow in the ADAPT Research Ireland Centre at the School of Computer Science and Statistics in Trinity. Its work will span broad topics, from the examination of opaque technological ecologies to audits of specific models and training datasets.

As AI technologies continue to shape society, the AIAL will examine their broader impacts, hold powerful entities accountable for technological harms, and advocate for policies rooted in evidence. Its research will address potential corporate capture of current regulatory processes, outline justice-driven model evaluation, and audit deployed models, particularly those used on vulnerable groups.

Speaking about the work of the AIAL, Dr Birhane said: “The AI Accountability Lab aims to foster transparency and accountability in the development and use of AI systems. And we have a broad and comprehensive view of AI accountability. This includes better understanding and critical scrutiny of the wider AI ecology – from systematic studies of possible corporate capture to the evaluation of specific AI models, tools, and training datasets.”

The AIAL is supported by a grant of just under €1.5 million from three funders: the AI Collaborative, an initiative of the Omidyar Group; Luminate; and the John D. and Catherine T. MacArthur Foundation.

AI technologies, despite their supposed potential, have been shown to encode and exacerbate existing societal norms and inequalities, disproportionately affecting vulnerable groups. In sectors such as healthcare, education, and law enforcement, deploying AI technologies without thorough evaluation can have subtle yet catastrophic impacts on individuals and groups, and can alter the social fabric. For example, in healthcare, a liver allocation algorithm used by the UK’s National Health Service (NHS) has been found to discriminate by age. No matter how ill, patients under the age of 45 currently seem unable to receive a transplant, owing to the predictive logic underlying the algorithm.

Additionally, incorporating AI algorithms without proper evaluation has direct and indirect impacts on people. For example, a decision-support algorithm deployed without formal evaluation by the Danish child protection services has been found to suffer from numerous issues, including information leakage, inconsistent risk scores, and age-based discrimination.

Furthermore, errors in facial recognition technologies have led to misidentification and the arrest of innocent people in the UK and the US. In education, the use of student data for purposes beyond schooling drew criticism in the United Kingdom: secret agreements allowing authorities to monitor benefit claims raised fears of increased surveillance, erosion of public trust in technology, and disproportionate targeting of low-income families (source: Schools Week). These examples illustrate the need for transparency, accountability, and robust oversight of AI systems, all central topics the AI Accountability Lab seeks to address through research and evidence-driven policy advocacy.

The AIAL will be housed in the School of Computer Science and Statistics at Trinity College Dublin. Professor Gregory O’Hare, Professor of Artificial Intelligence and Head of School of Computer Science & Statistics at Trinity College Dublin, said: “The new dawn of AI associated with generative AI has heralded a velocity of AI adoption hitherto not witnessed. The provenance of such systems is however fundamental. The AI Accountability Lab will be at the forefront of research that will examine such systems; through algorithmic auditability it will create a National and European Centre of Excellence in this space, delivering thought leadership and informing best practice.”

Professor John D Kelleher, Director of ADAPT and Chair of Artificial Intelligence at Trinity, commented: “We are proud to welcome the AI Accountability Lab to ADAPT’s vibrant community of multidisciplinary experts, all dedicated to addressing the critical challenges and opportunities that technology presents.  By integrating the AIAL within our ecosystem, we reaffirm our commitment to advancing AI solutions that are transparent, fair, and beneficial for society, industry, and government.  With the support of ADAPT’s collaborative environment, the Lab will be well-positioned to drive impactful research that safeguards individuals, shapes policy, and ensures AI serves society responsibly.” 

In its initial stages, the AIAL will leverage empirical evidence to inform policy; challenge and dismantle harmful technologies; hold responsible bodies accountable for the adverse consequences of their technology; and pave the way for a future marked by just and equitable AI. The group’s research objectives include addressing structural inequities in AI deployment, examining power dynamics within AI policy-making, and advancing justice-driven audit standards for AI accountability. The lab will also collaborate with research and policy organisations across Europe and Africa, such as Access Now, to strengthen international accountability measures and policy recommendations.