21.02.2025

ADC Memorial expert comments on the St. Petersburg authorities' plans to detect ethnicity via a special configuration of the facial recognition system

The government of St. Petersburg has announced that the city's surveillance cameras will be configured to recognize ethnicity. The contract for the relevant services was signed by the Committee on Interethnic Relations. As Stefania Kulaeva, an expert of ADC Memorial, said on VOT TAK TV, ethnic profiling is unacceptable: it is a discriminatory practice aimed at persecuting certain groups, in this case migrants.

Ethnic profiling, including through the use of artificial intelligence, facial recognition technologies, etc., is condemned by international human rights bodies. General Recommendation No. 36 (2020) of the UN Committee on the Elimination of Racial Discrimination, on preventing and combating racial profiling by law enforcement officials, states the following:

  • 58. States should ensure that algorithmic profiling systems used for the purposes of law enforcement are in full compliance with international human rights law. To that effect, before procuring or deploying such systems States should adopt appropriate legislative, administrative and other measures to determine the purpose of their use and to regulate as accurately as possible the parameters and guarantees that prevent breaches of human rights. Such measures should, in particular, be aimed at ensuring that the deployment of algorithmic profiling systems does not undermine the right not to be discriminated against, the right to equality before the law, the right to liberty and security of person, the right to the presumption of innocence, the right to life, the right to privacy, freedom of movement, freedom of peaceful assembly and association, protections against arbitrary arrest and other interventions, and the right to an effective remedy.
  • 59. States should carefully assess the potential human rights impact prior to employing facial recognition technology, which can lead to misidentification owing to a lack of representation in data collection. Before national deployment, States should consider a pilot period under the supervision of an independent oversight body that is inclusive of individuals who reflect the diverse composition of the population, to mitigate against any potential instances of misidentification and profiling based on skin colour.
  • 60. States should ensure that algorithmic profiling systems deployed for law enforcement purposes are designed for transparency, and should allow researchers and civil society to access the code and subject it to scrutiny. There should be continual assessment and monitoring of the human rights impact of those systems throughout their life cycle, and States should take appropriate mitigation measures if risks or harms to human rights are identified. Those processes should examine potential and actual discriminatory effects of algorithmic profiling based on grounds of race, colour, descent, or national or ethnic origin and their intersection with other grounds, including religion, sex and gender, sexual orientation and gender identity, disability, age, migration status and work or other status. They should be conducted prior to the development or acquisition of such systems, wherever possible, and at the very least prior to and during the full period of the use of the systems. Such processes should include community impact assessments. Groups that are potentially or actually affected and relevant experts should be included in the assessment and mitigation processes.
  • 61. States should take all appropriate measures to ensure transparency in the use of algorithmic profiling systems. This includes public disclosure of the use of such systems and meaningful explanations of the ways in which the systems work, the data sets that are being used, and the measures in place to prevent or mitigate human rights harms.
  • 62. States should adopt measures to ensure that independent oversight bodies have a mandate to monitor the use of artificial intelligence tools by the public sector, and to assess them against criteria developed in conformity with the Convention to ensure they are not entrenching inequalities or producing discriminatory results. States should also ensure that the functioning of such systems is regularly monitored and evaluated in order to assess deficiencies and to take the necessary corrective measures. When the results of an assessment of a technology indicate a high risk of discrimination or other human rights violations, States should take measures to avoid the use of such a technology.