Governments must ensure automated social protection systems are fit for purpose and do not prevent people eligible for welfare from receiving it, Amnesty International said as it published a technical explainer on the underlying technology behind ‘Samagra Vedika’, an algorithmic system that has been used in Telangana state since 2016.
The technical explainer sheds light on Samagra Vedika’s human rights risks and its use of a technical process called “entity resolution”, in which machine-learning algorithms merge databases with the goal of assessing the eligibility of welfare applicants and detecting fraudulent or duplicate beneficiaries in social protection programmes.
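To illustrate the core idea behind entity resolution, the sketch below links records across two databases by scoring string similarity between fields. This is a minimal illustration only: the record contents, field weights, and the 0.85 threshold are hypothetical, and a production system like Samagra Vedika would use far richer features and trained models rather than simple string ratios.

```python
from difflib import SequenceMatcher

# Hypothetical records from two separate welfare databases.
# All names, IDs, and thresholds are illustrative, not drawn
# from the actual Samagra Vedika system.
ration_db = [
    {"id": "R1", "name": "Anita Kumari", "village": "Rampur"},
    {"id": "R2", "name": "S. Reddy", "village": "Karimnagar"},
]
pension_db = [
    {"id": "P1", "name": "Anitha Kumari", "village": "Rampur"},
    {"id": "P2", "name": "Suresh Reddy", "village": "Warangal"},
]

def similarity(a: str, b: str) -> float:
    """Normalized string similarity in [0, 1]."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resolve_entities(db_a, db_b, threshold=0.85):
    """Link records that likely refer to the same person.

    Combines weighted evidence from name and village similarity;
    pairs scoring at or above the threshold are treated as matches.
    """
    matches = []
    for rec_a in db_a:
        for rec_b in db_b:
            score = (0.7 * similarity(rec_a["name"], rec_b["name"])
                     + 0.3 * similarity(rec_a["village"], rec_b["village"]))
            if score >= threshold:
                matches.append((rec_a["id"], rec_b["id"], round(score, 2)))
    return matches

print(resolve_entities(ration_db, pension_db))
```

Note how a minor spelling variant (“Anita” vs “Anitha”) still links correctly, while the ambiguous pair “S. Reddy” / “Suresh Reddy” falls below the threshold and is left unmatched. The human rights risk lies exactly in these threshold choices: a false non-match can wrongly flag a genuine beneficiary as ineligible or duplicate.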
Publication of the technical explainer follows media reports blaming Samagra Vedika for allegedly excluding thousands of people from accessing social protection measures, including those related to food security, income, and housing.
A 2024 investigation published by Al Jazeera exposed how errors in the system, which consolidates individuals’ data from several government databases, led to thousands of families being denied vital benefits, raising serious human rights concerns about their right to social security.
“Automated decision-making systems such as Samagra Vedika are opaque, and they flatten people’s lives by reducing them to numbers using artificial intelligence (AI) and algorithms. In a regulatory vacuum and with no transparency, investigating the human rights impacts of such systems is extremely challenging,” said David Nolan, Senior Investigative Researcher at Amnesty Tech.
Amnesty International dedicated a year to designing – and attempting to carry out – an audit of the Samagra Vedika system. Despite these efforts, the audit remains incomplete due to challenges in accessing the underlying system and a blanket lack of transparency from the developers and deployers of this system.
Nevertheless, through embarking on this process, Amnesty International uncovered key methodological learnings and insights into the nascent field of algorithmic investigations. By sharing these, Amnesty International aims to enhance the collective capacity of civil society, NGOs, and journalists to conduct future research in this field.
“Governments must realize that there are real lives at stake here,” David Nolan said. The case of Samagra Vedika in Telangana is emblematic of governments increasingly relying on AI and automated decision-making (ADM) systems to administer social protection programmes. This trend often leads to unjust outcomes for already marginalized groups, such as exclusion from social security benefits, without adequate accountability, transparency, or remedy.