Software engineering researcher Ayushi Rastogi (University of Groningen) has been awarded the KHMW Langerhuizen Bate 2026 (€25,000) for a project investigating how artificial intelligence affects software professionals with disabilities. Her research asks a timely question: does AI make work more inclusive, or does it risk reinforcing existing inequalities?

Ayushi Rastogi | Photo: Bram Saeys

You were born, educated, and obtained your PhD in India. What motivated your move to the Netherlands?
My move to the Netherlands was initially for personal reasons, because at the time I was doing a postdoc in California, and my partner – now my husband – was starting his PhD here. But it turned out to be an excellent professional step as well. The Netherlands has one of the strongest software engineering research communities in the world, and being part of that environment has been very valuable for me. 

How did working in the Netherlands change you as a researcher?
The biggest difference for me has been the close collaboration between academia and industry. In India, especially when I was working there eight to ten years ago, many large software companies had their own internal research units, but collaboration with universities was less common. Here in the Netherlands, interaction with industry is much more integrated into academic research.
That has helped me understand the challenges people face in practice. Large companies, for example, often have access to extensive datasets, while smaller companies are more interested in learning from the experiences of others. Working with such a range of partners has shaped my perspective on how research can create value for different stakeholders.

Your research examines how artificial intelligence shapes software that affects everyday life, society, and the economy. Why is it important to study this impact systematically?
Software is deeply embedded in how we work, communicate, and organise our lives. Traditionally, we studied the people who developed software to understand the systems they produced. But with AI entering the picture, that relationship is changing rapidly. Software can now be generated much faster, and sometimes in ways that developers themselves do not fully understand.
There is compelling evidence that AI improves productivity, but we still lack a holistic understanding of its broader effects. Without that overview, we risk improving one dimension – such as efficiency – while overlooking societal or economic consequences in the long term. I believe this is one of the most important challenges we currently face in software engineering. 

What kinds of risks do you see?
One risk is that developers may lose insight into how software systems are created. Decisions that used to be carefully reasoned through can now be made very quickly, sometimes without full transparency. This leads to the accumulation of 'cognitive debt': understanding is deferred, and developers no longer fully grasp why certain technical choices were made.
That may not seem critical at first, but if software behaves unpredictably in sensitive environments – such as infrastructure or healthcare – the consequences can be serious. AI is an extremely powerful tool, but we need to understand how to use it responsibly. 

Do companies recognise these risks sufficiently?
Many companies clearly see the benefits of AI and are aware that risks exist. However, without systematic research it remains difficult to determine what those risks are. Organisations are trying to maintain guardrails, but the rapid pace of change means these are not always enough. That is why holistic research into the broader impact of AI is so important.

Your proposed project focuses on software professionals with disabilities. Why is it important to study the impact of AI on this group now?
Many digital tools are still designed with a ‘one-size-fits-all’ perspective. Even without intending to exclude anyone, this can disadvantage groups whose needs are not reflected in the data on which systems are based. Earlier research has shown, for example, that when companies tried to identify software experts automatically, they often selected mostly male developers – not because women were absent, but because the criteria for expertise reflected typical male working patterns.
Large language models risk reinforcing such effects. They are trained on existing data and practices, which reflect the behaviour of majority groups – often male developers without disabilities. If we do not explicitly take different working styles and needs into account, these systems may unintentionally reproduce and reinforce existing inequalities rather than reduce them. The world is benefiting immensely from the rapid development of AI. But the question is whether this benefit extends equally to all demographics. It is important that we take everyone along on this transformative journey.

How will you conduct this research?
My first step is to analyse large-scale development activity on GitHub, an open-source coding platform that currently hosts around 180 million developers worldwide. I will study how the programming patterns of developers with disabilities change before and after the introduction of AI tools. For example, I will examine how much code developers produce, how often revisions are needed, and how long development tasks take.
This allows us to identify measurable effects. In later stages, I hope to complement these findings with surveys or interviews to better understand the experiences behind the numbers. 

What role will the Langerhuizen Bate play in this project?
The funding will make it possible to hire a research assistant who can help analyse large open-source datasets. We will identify developers who self-report disabilities – such as neurodivergence, visual impairment or hearing impairment – and compare their activity patterns with those of other developers.
If time permits, we also hope to contact some of these developers directly. That would help us understand why certain patterns appear in the data and what kinds of support might be most effective in practice. 

AI is often presented as a way to make work more inclusive. Do you see opportunities or also risks for developers with disabilities?
I see both. AI has enormous potential to improve productivity and accessibility, but it does not automatically become inclusive. Because AI systems are trained on existing human behaviour, they can reproduce existing biases unless we actively address them.
If we want AI to support inclusion, we need to define explicitly what that means and how we want to achieve it. Otherwise, the benefits may remain unevenly distributed.

Do you expect AI companies to take such findings seriously?
I hope so. Earlier research on diversity and inclusion, including my collaboration with companies such as Microsoft and similar efforts at Google, has made clear that industry is interested in these questions. As researchers, it is our responsibility to formulate the right questions and provide evidence that can help guide future development.

The KHMW Langerhuizen Bate 2026 will be awarded on Monday, 29 June.