Using Facial Recognition Technology in Warfare


Date: June 21, 2022 14:00-16:00


  • Jen-Ran Chen, Chairman of Digital Transformation Association
  • Hsin-Hsuan Lin, Professor of Department of Law, Chinese Cultural University
  • Kuan-Ju Chou, Digital Rights Specialist of Taiwan Association of Human Rights
  • YJ Hsu, Professor of Department of Computer Science & Information Engineering, National Taiwan University
  • JH Hua, Chairman of CyberLink Corp.

Session Details

The Ukrainian government is using facial recognition software to help identify the bodies of Russian soldiers killed in combat and to track down their families and inform them of the deaths.  The free offer of a facial recognition solution from Clearview AI seems a righteous move; however, the company, which claims to have the largest facial recognition database in the world, has faced a string of legal challenges.  In this panel, experts in the field were invited to discuss questions such as: Can we ignore the controversy when facial recognition technology is applied in wartime?  What are the ethical issues in using the technology?  Is it possible to use facial recognition technology without privacy and other risks?


Session Highlights

The moderator, Mr. Jen-Ran Chen, opened the session by noting that digital technology has developed to the point that it is closely integrated with our daily lives, even changing or transforming people's lifestyles.  Among these developments, the use of biometric data in business, security, and access control is becoming more and more diversified.

Mr. JH Hua then described facial recognition technology as a double-edged sword: highly accurate, but with a high risk of violating privacy.  When the government uses facial recognition technology, he suggested, it should be mandatory to obtain people's consent in advance.  Mr. Hua believes that facial recognition technology does increase efficiency in applications such as customs entry control at airports, identification of lost elders, and insurance sales.  However, he reminded the audience that the controversy around Clearview AI is not about using the technology itself, but about combining the results with other personal information found on social media.  That is also the main reason the company has been sued in many countries.

Professor Hsin-Hsuan Lin addressed the issue from an international law perspective.  She began by describing the humanitarian and human rights crises caused by applying facial recognition technology and AI weapons in armed conflict, violent extremism, and counter-terrorism operations.  UN Security Council Resolution 2396 (2017) and the Madrid Guiding Principles were cited as legal instruments addressing these crises; however, the former is not detailed enough, and the latter proposes only a bottom line.  Although international regulations develop slowly, they provide a normative basis.  Professor Lin suggested that private companies should suspend their relationships with countries that may violate human rights.  Further, when companies have doubts about a government's request to hand over biometric data, they should seek judicial remedies.  She believes that establishing local law to regulate AI applications is urgent and suggested referencing Illinois's Biometric Information Privacy Act (BIPA).

Ms. Kuan-Ju Chou presented her views as a human rights activist.  By reviewing the open records of government procurements between 2006 and 2021, she found at least 107 cases of acquiring facial recognition solutions in Taiwan.  The buyers included libraries, schools, and police departments.  She also cited several facial recognition projects initiated by the public sector but cancelled due to high controversy.  In one example, a university applied eye-movement and facial-expression tracking in class to catch cheating students.  The Taiwan Railway Administration once installed surveillance cameras with facial recognition functions, attempting to identify suspects and people in need at train stations.

She further argued that these AI surveillance systems are prone to abuse or misuse, which is why human rights groups advocate banning facial recognition technology in the public sector and in public spaces.  In addition, there are currently no laws in the country to resolve possible disputes.  The application of the technology in schools is even worse, as students may grow used to living in an environment without privacy.  Finally, Ms. Chou suggested that biometric data be used only under the premise of protecting data owners, and noted that it may take a while for society to trust the technology.

Following Ms. Chou's comments, Professor YJ Hsu explained that it is not easy to regulate the use of facial recognition technology, because different stakeholders hold different views and values.  Most importantly, no one's rights should be determined by an automatic decision-making system.  Data subjects should also be clearly informed of how their personal data is collected, processed, stored, and used, and, more importantly, should be aware of the impact of that use.


Professor Hsu noted that the UK's data protection authority had imposed a £7.5 million fine on Clearview AI just a few days earlier.  There are still concerns that even a high penalty cannot stop Clearview AI from continuing what it is doing.  She suggested that all AI providers take privacy into consideration when designing their services or products, and keep in mind that AI cannot be 100% accurate.  Another focus should be raising public awareness and education.  It is hoped that multistakeholder discussion will urge lawmakers to produce well-rounded legislation.  Professor Hsu reminded the audience that AI is a crucial technology for national economic development, so forbidding the use of the technology may not be the best answer for the good of the country.  She suggested that engineering students should learn the concepts of ethics and human rights.

In the end, the panel moderator concluded that AI and other emerging technologies will clearly change people's ways of life.  Through the law, people may better use the technology rather than be manipulated by it.

A question was raised from the floor about the right of data usage and the possibility of setting aside data regulations in a state of emergency.

Professor Lin first made a conceptual clarification about the state of emergency.  In international law, a "state of emergency" refers to the launch of lethal attacks in an armed conflict.  She believed the question from the floor was about whether a pandemic or a severe disaster counts as a state of emergency.  She further explained that the term may broadly cover acts of war, but in international law two different systems apply: when two countries have officially declared war and entered a state of conflict, international humanitarian law is applicable; when there is no war, international human rights law is more applicable.

In cases of pandemic or severe disaster, she suggested referring to the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR) as the basis for determining applicability.  During the pandemic period in Taiwan, it was legally acceptable to allow the use of data, as collective public health may take higher priority.
