
Automation and Databases

Artificial Intelligence (AI) and Machine Learning (ML) methods can automate identity verification. The databases these technologies rely on are heterogeneous, partial and vulnerable to exploitation

A (optimist)

User-controlled private data enables faster, more reliable AI-based application processing and environmental monitoring

  • Shared guidelines and security measures are in place, adopted by everyone, and exceptionally user-friendly
  • Privacy-enabled freeware replaces big technology platforms. Data agency is given back to the user, ensuring data privacy and clear consent for data access
  • Organisations working in the field can draw on database information to facilitate access to the asylum procedure and speed it up (e.g. to verify whether an asylum application has merit)
  • Broad deployment of secure, accurate biometric systems (facial, iris, voice, and gait recognition)

B (pessimist)

Widespread use of flawed automated systems with suspect databases that have been attacked and exploited by malevolent regimes

  • Malevolent use of digital platforms and databases by third parties or (hostile) countries of origin, making refugee returns impossible
  • Use of drones and other surveillance tools to coordinate push-backs
  • Humanitarian resources are deployed on the basis of imperfect AI algorithms, making crises worse
  • Totalitarian regimes use biometrics to identify supporters and persecute dissidents
  • Identity databases are used as a bargaining chip in international negotiations
  • Individuals have no control over the data collected and stored by developers and monitoring entities
  • Databases are increasingly controlled by the private sector, which is not democratically accountable and not open about how data are collected or used

C1 (mediator)

  • AI has been implemented extensively, and is working well across various international protection activities and procedures
  • Private and public databases are used by automated systems for a variety of governmental tasks
  • Data privacy policies are ineffective at safeguarding personally identifiable information
  • Genetic biometrics are used to estimate relatedness in support of family reunification (raising security concerns)

C2 (mediator)

The application of automated AI systems is hindered by restricting their use to limited, secure databases

  • Databases do not include all information, so malevolent actors who manage to gain access cannot harm migrants to the same extent
  • Biometrics are used to identify asylum seekers and create efficiency gains in the management of populations in need

D (innovator)

Fully automated asylum processes that involve no human intervention

  • AI-based administrative systems enable quick decisions on asylum applications, clearing the backlog and allowing more people to obtain international protection
  • No human intervention needed in the processing of asylum claims
  • Persistent gaps remain in the ability to audit or review the performance of AI/ML systems, owing to limited insight into their decision-making algorithms