Testing 1-2-3: An evaluation framework for digital ID with lessons from India’s Aadhaar system

In the latest instalment of the ‘Digital ID Dispatches from Africa’ series, researchers from the Centre for Internet and Society (CIS) outline the evaluation framework CIS developed for digital ID, and assess how India’s Aadhaar system measures up against it. The framework can be widely adapted to the unique challenges faced by diverse and developing countries, write Yesha Tshering Paul and Shruti Trikanad.

The last few years have seen the deployment of many national digital ID programmes, particularly in the Global South. The stated objectives driving these programmes include more efficient governance, reduced corruption, improved welfare distribution, and stronger national security. Rarely do we ask whether these objectives are actually met by the plethora of digital ID ‘solutions’ being peddled to so-called developing countries.

Given the growing popularity and rapidly expanding geographic footprint of these programmes, it is concerning that the mandatory collection and storage of biometrics is often pushed as an easy solution to establish and authenticate identity in countries with weak civil registration and vital statistics (CRVS) systems and largely undocumented populations. The collection of sensitive data (and the frequent decision to store it in centralised databases with few access controls) puts that data at high risk of attack by malicious actors, and raises concerns around privacy, surveillance and adverse impacts on civil liberties. It is critical to adopt a rights-based approach and ensure strict safeguards in the development and implementation of these systems.

Aadhaar: India’s National ID Programme

India is home to Aadhaar, a national digital ID system built on the world’s largest biometric database. A random, unique 12-digit identification number (the “Aadhaar number”) is assigned to a resident of India upon successful enrolment. The uniqueness of the individual is established through demographic and biometric deduplication.

This data is collected by the Unique Identification Authority of India (“UIDAI”), an authority established in 2009 that gained statutory status only with the Aadhaar Act in 2016. Notably, the UIDAI collected data and issued Aadhaar numbers solely on the basis of an executive notification until the Act came into force. The system has thus faced multiple judicial challenges, culminating in a 2018 Supreme Court judgment that upheld the validity of Aadhaar despite petitioners’ concerns about privacy and exclusion (among other issues arising from the system). Although Aadhaar is not officially mandatory, it is often the only form of identification accepted to access essential services (including private services), and it remains de facto mandatory for purposes such as filing taxes.

Evaluating Aadhaar

The Evaluation Framework for Digital Identity developed by the Centre for Internet and Society seeks to critically examine existing and proposed digital ID systems and inform the trade-offs that must be made to ensure that rights are adequately protected and harms minimised at every stage. It is also the framework that we used to assess digital ID systems for our collaborative project with Research ICT Africa, as explained in an earlier post.

The framework comprises three kinds of tests: rule of law tests (to ensure that the digital ID system is governed by an appropriate rule of law framework), rights-based tests (to ensure that rights-based principles are upheld), and risk-based tests (to ensure that the system is prepared to address potential risks and harms that may emerge). A comprehensive analysis of the Aadhaar system against these tests by our colleague, Vrinda Bhandari, yielded the following findings:

(a) Rule of Law Tests

India’s Supreme Court determined that Aadhaar had a legitimate aim (“aimed at offering subsidies, benefits or services to the marginalised sections of the society for whom such welfare schemes have been formulated from time to time”). However, the system fails to meet several standards in our rule of law tests, not least because the Aadhaar project was launched in 2009 without any statutory backing.

The Aadhaar Act (“the Act”) does not clearly define the purposes for which Aadhaar can be used, nor the actors connected to it (including private actors). While certain purposes are mandated by the Act and other statutes, the Act is notably vague on “voluntary” uses of Aadhaar (including what these purposes may be, or which categories of actors may use it). It also lacks clarity on how the legitimacy of new uses or purposes is to be determined.

On the issue of user data, the Act allows the state to widen the scope of information that can be collected, makes no provision for notifying users of data breaches or third-party access, and gives users only limited rights to access their own information or to opt out of the system entirely. The only correction users can make to their data is to update their residence information.

The Act prescribes limited redressal mechanisms, and fails to provide any remedies to persons who suffer exclusion due to authentication failures. A further obstacle to accountability is the UIDAI’s dual role as both administrator and regulator of the system, with no independent body to hold it accountable (particularly in the absence of a data protection law in India).

(b) Rights-based Tests

While the Supreme Court held that Aadhaar has largely upheld data minimisation principles (collecting and holding data only when strictly necessary and only for the purposes intended), we found that the system needs stricter data minimisation practices around core biometric information and metadata storage.

The exclusionary impacts of Aadhaar are significant, particularly when we look at issues faced during enrolment or due to authentication failures, which occur for a multitude of reasons: biometric failure to enrol; issues with Aadhaar seeding and fingerprint recognition; lack of mobile or wireless connectivity and electricity; limited functional point of sale (POS) machines and server capacity; and age, manual labour and disabilities such as leprosy that affect biometric markers.

(c) Risk-based Tests

The Act lacks appropriate mechanisms to prevent and/or mitigate potential risks. The UIDAI has made preliminary attempts to incorporate privacy by design through the introduction of a Virtual ID (a temporary, revocable 16-digit random number mapped to the Aadhaar number that can be used in its place during authentication and e-KYC) and a UID Token (a 72-character alphanumeric string meant only for system use). However, there is no clearly articulated mitigation strategy in place in the event of a breach of the ID system. Moreover, the entire system continues to operate in the absence of a data protection law.

Adapting the Evaluation Framework for African contexts

The Aadhaar system in India differs from many of the digital ID systems we see in more developed countries. It was hastily implemented in the absence of a comprehensive regulatory framework, is fairly centralised and almost entirely managed by the state, relies heavily on biometric data, and now serves as a form of legal identity far beyond its original welfare-delivery purpose.

This is a trend we have noticed in several African countries in the course of our work with Research ICT Africa. As in India, digital ID systems in Africa have often been implemented quickly, without accounting for deeper structural issues: infrastructural requirements, the lack of existing identity documents, religious or cultural barriers that may stand in the way of enrolling women or certain communities, limited digital literacy, insufficient accountability mechanisms, and loosely drafted legislation that leaves room for executive discretion. Such ambitious programmes risk further excluding communities that have historically faced injustices at the hands of the state and may therefore be reluctant to submit voluntarily to state surveillance. The potential exclusions that arise from ignoring these local contexts are exacerbated by the fact that these systems are often made mandatory, with no acceptable alternative forms of identification. As a result, they bring with them challenges of privacy, surveillance and exclusion similar to those we experience in India.

Our experience with Aadhaar inspired us to develop this series of tests, which can be widely adapted to the unique challenges faced by diverse and developing countries. This is particularly valuable in Africa, where many countries’ ID systems are still at a preliminary stage and can be adapted to incorporate critical concerns from the outset. For instance, while deciding the constitutionality of the Huduma Namba system in Kenya, the Court stalled its implementation until an “appropriate and comprehensive regulatory framework” could be enacted. Echoing one of the first tests in our framework, the Court acknowledged that “a law that affects a fundamental right or freedom should be clear and unambiguous.”

We hope that this framework will ignite conversations around digital ID that can help shape more effective, inclusive, and rights-respecting systems.

In the next instalment of this series, we travel to Zimbabwe to hear more from our country partner there about what they found when they used the Evaluation Framework to assess digital ID in their country.

The opinions expressed in this article are those of the author(s) and do not necessarily reflect the views of SAIIA.

(Main image: A woman poses with her Aadhaar ID card in Chennai, India on 17 November 2016. – AFP/Arun Sankar via Getty Images)