Women in AI: Urvashi Aneja is researching the social impact of AI in India
To give AI-focused women academics and others their well-deserved -- and overdue -- time in the spotlight, TechCrunch is launching a series of interviews focusing on remarkable women who’ve contributed to the AI revolution. We’ll publish several pieces throughout the year as the AI boom continues, highlighting key work that often goes unrecognized. Read more profiles here.
Urvashi Aneja is the founding director of Digital Futures Lab, an interdisciplinary research effort that seeks to examine the interaction between technology and society in the Global South. She's also an associate fellow with the Asia Pacific program at Chatham House, an independent policy institute based in London.
Aneja's current research focuses on the societal impact of algorithmic decision-making systems in India, where she's based, and on platform governance. Aneja recently authored a study on the current uses of AI in India, reviewing use cases across sectors including policing and agriculture.
Q&A
Briefly, how did you get your start in AI? What attracted you to the field?
I started my career in research and policy engagement in the humanitarian sector. For several years, I studied the use of digital technologies in protracted crises in low-resource contexts. I quickly learned that there's a fine line between innovation and experimentation, particularly when dealing with vulnerable populations. The lessons from this experience made me deeply concerned about the techno-solutionist narratives around the potential of digital technologies, particularly AI. At the same time, India had launched its Digital India mission and National Strategy for Artificial Intelligence. I was troubled by the dominant narratives that saw AI as a silver bullet for India's complex socio-economic problems, and by the complete lack of critical discourse around the issue.
What work are you most proud of (in the AI field)?
I'm proud that we've been able to draw attention to the political economy of AI production, as well as its broader implications for social justice, labor relations and environmental sustainability. Narratives about AI very often focus on the gains of specific applications and, at best, the benefits and risks of those applications. But this misses the forest for the trees -- a product-oriented lens obscures broader structural impacts such as AI's contribution to epistemic injustice, the deskilling of labor and the perpetuation of unaccountable power in the majority world. I'm also proud that we've been able to translate these concerns into concrete policy and regulation -- whether designing procurement guidelines for AI use in the public sector or delivering evidence in legal proceedings against Big Tech companies in the Global South.
How do you navigate the challenges of the male-dominated tech industry, and, by extension, the male-dominated AI industry?
By letting my work do the talking. And by constantly asking: why?
What advice would you give to women seeking to enter the AI field?
Develop your knowledge and expertise. Make sure your technical understanding of issues is sound, but don't focus narrowly on AI alone. Instead, study widely so that you can draw connections across fields and disciplines. Not enough people understand AI as a socio-technical system that's a product of history and culture.
What are some of the most pressing issues facing AI as it evolves?
I think the most pressing issue is the concentration of power within a handful of technology companies. While not new, this problem is exacerbated by new developments in large language models and generative AI. Many of these companies are now fanning fears around the existential risks of AI. Not only is this a distraction from existing harms, but it also positions these companies as necessary for addressing AI-related harms. In many ways, we're losing some of the momentum of the "tech-lash" that arose following the Cambridge Analytica episode.

In places like India, I also worry that AI is being positioned as necessary for socioeconomic development, presenting an opportunity to leapfrog persistent challenges. Not only does this exaggerate AI's potential, but it also ignores the fact that it isn't possible to leapfrog the institutional development needed to put safeguards in place.

Another issue that we're not considering seriously enough is the environmental impact of AI -- the current trajectory is likely to be unsustainable. In the current ecosystem, those most vulnerable to the impacts of climate change are unlikely to be the beneficiaries of AI innovation.
What are some issues AI users should be aware of?
Users need to be made aware that AI isn't magic, nor anything close to human intelligence. It's a form of computational statistics that has many beneficial uses, but it is ultimately only a probabilistic guess based on historical patterns. I'm sure there are several other issues users also need to be aware of, but I want to caution that we should be wary of attempts to shift responsibility downstream, onto users. I see this most recently with the use of generative AI tools in low-resource contexts in the majority world -- rather than urging caution about these experimental and unreliable technologies, the focus often shifts to how end users, such as farmers or front-line health workers, need to upskill.
What is the best way to responsibly build AI?
This must start with assessing the need for AI in the first place. Is there a problem that AI can uniquely solve, or are other means possible? And if we're going to build AI, is a complex, black-box model necessary, or might a simpler, logic-based model do just as well? We also need to re-center domain knowledge in the building of AI. In the obsession with big data, we've sacrificed theory -- we need to build a theory of change based on domain knowledge, and that should be the basis of the models we're building, not big data alone. This is of course in addition to key issues such as participation, inclusive teams, labor rights and so on.
How can investors better push for responsible AI?
Investors need to consider the entire life cycle of AI production -- not just the outputs or outcomes of AI applications. This would require looking at a range of issues, such as whether labor is fairly valued, what the environmental impacts are, whether the company's business model depends on commercial surveillance, and what internal accountability measures exist within the company. Investors also need to ask for better and more rigorous evidence about the supposed benefits of AI.