Artificial Intelligence (AI) provides the mechanisms that enable machines to learn. AI systems that use machine learning can detect patterns in enormous volumes of data and model complex, interdependent systems to generate outcomes that improve the efficiency of decision making. The use of AI (specifically machine learning) in physical threat detection solutions depends on three capabilities: the camera’s ability to see clearly, the software’s ability to gather image data, and the algorithm’s ability to interpret that data to determine whether a threat is present. As with any AI solution, there must be a responsible and ethical approach to its implementation, and having a standard to guide this approach is essential.
To put AI ethics into practice, several international standards on AI include ethical components, and the Organisation for Economic Co-operation and Development (OECD) AI standard is one that should be considered when an organization develops its own AI policy and standard. The international AI standard delivered by the OECD lays out general tenets for AI implementation focused on ethical adherence and the well-being of humanity. The OECD AI value-based principles and recommendations for public policy and international cooperation are intended to guide governments, organizations, and individuals in the design, development, and implementation of AI systems. Their purpose is to ensure that the best interests of the public and of AI users come first, and that the individuals designing, developing, and implementing AI systems are held accountable to the OECD AI standard. According to OECD Secretary-General Angel Gurría, these principles “will be a global reference point for trustworthy AI so that we can harness its opportunities in a way that delivers the best outcomes for all”.[i]
OECD AI Value-Based Principles
According to the OECD AI value-based principles, AI should benefit people and the planet by driving inclusive growth, sustainable development, and well-being; AI systems should be designed in a way that respects the rule of law, human rights, democratic values, and diversity, while applying appropriate safeguards to ensure a fair and just society; there should be transparency and responsible disclosure around AI systems, especially when they are engaging with the public; and AI systems must function in a robust, secure, and safe way throughout their lifetimes, while potential risks are continually assessed and managed.[ii] Organizations and individuals developing, deploying, or operating AI systems should be held accountable for their proper functioning in line with these value-based principles. We must keep in mind that when it comes to AI, we need both responsible use and responsible design.
Establishing Ethical AI Implementation Approach
The first step in establishing a responsible and ethical approach to AI is an organization’s AI policy and standard. As mentioned earlier, adopting one of the international AI standards, such as the OECD’s, is essential to accomplishing this. The next step is to apply a data ethics strategy for the identification and use of data in your AI solution. According to the OECD AI standard, applying data ethics improves what data is collected, and improving the ethicality of the data improves the ethicality of the algorithm interpreting it. Murat Durmus (CEO of AISOMA AG) states that “data ethics refers to the systematization, defense, and recommendation of concepts of right and wrong behavior in relation to data”. He identifies seven aspects of ethical data:
(1) start with a clear user need and public benefit,
(2) be aware of relevant legislation and codes of practice,
(3) use data that is proportionate to the user need,
(4) understand the limitations of the data,
(5) use robust practices and work within your skillset,
(6) make your work transparent and be accountable,
(7) embed data use responsibly.[iii]
Developing AI applications to be people (human)-centered is a must. People-centered AI applications support the inclusivity and well-being of the people they serve and respect human-centered values and fairness. They must be designed, developed, and implemented with transparency, be robust and safe, and be accountable for the results and decisions they produce and/or influence. Like any other application, an AI application must be monitored throughout its life. Building evidence-based metrics and KPIs to continually assess AI applications is important to ensure your AI standards, guidelines, and principles are being met.
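As a minimal illustration of the kind of evidence-based KPIs such monitoring might track, the sketch below computes precision and false-positive rate over human-reviewed alerts. The record layout and field names are assumptions for illustration, not any vendor’s actual data model or tooling.

```python
# Sketch of alert-review KPIs for monitoring a deployed detection
# system. Alert records and the 'confirmed' field are hypothetical.

def alert_kpis(alerts):
    """Compute precision and false-positive rate from reviewed alerts.

    Each alert is a dict with a boolean 'confirmed' flag set by a
    human reviewer (the human in the loop).
    """
    total = len(alerts)
    if total == 0:
        return {"precision": None, "false_positive_rate": None}
    true_alarms = sum(1 for a in alerts if a["confirmed"])
    return {
        "precision": true_alarms / total,
        "false_positive_rate": (total - true_alarms) / total,
    }

reviewed = [
    {"confirmed": True},   # threat correctly identified
    {"confirmed": False},  # alert triggered by mistake
    {"confirmed": True},
    {"confirmed": True},
]
print(alert_kpis(reviewed))  # {'precision': 0.75, 'false_positive_rate': 0.25}
```

Tracking these numbers over time, per camera or per site, gives a concrete way to verify that an AI application continues to meet the standards it was deployed under.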
An often-overlooked aspect of developing ethical AI applications is establishing a diverse AI product development team. A diverse team brings “diversity of thought” to the initiative. A team made up of people with different cultural backgrounds, covering multiple disciplines such as knowledge management, data science (analysts, architects, and engineers), machine learning, business, and domain subject matter expertise, as well as change management, brings cross-functional perspectives and innovation to the product being developed.
Scylla: AI in Physical Threat Detection Solutions
Scylla, an AI-powered physical threat detection solutions company established in 2018 with a global presence, including a US office in Austin, Texas, delivers a holistic threat detection solution with ethical AI at its core. Scylla uses AI to analyze data coming from stationary and portable cameras. It processes the acquired data to generate meaningful information about the activity contained in the video stream. The Scylla solution identifies behavioral characteristics of the subject, including facial recognition, to determine the potential threat. Finally, it provides alerts to law enforcement response units through web and mobile channels, with information about the threat, to facilitate a proper response.[iv]
The Scylla solution is powered by state-of-the-art algorithms that enable first responders and security teams to obtain instant identification and detection of security threats through a centralized dashboard. The centralized dashboard keeps the human in the loop to monitor the system’s results and to make decisions based on the information being received. Once the alarm and detection process is initiated, Scylla’s threat detection dashboard opens, showing security personnel statistical metrics that enhance intelligence on the current threat and displaying all the crucial information, compiled and delivered to the end users responsible for security.
The Scylla behavior recognition system can observe and detect a wide range of anomalous events and behavior from surveillance cameras, such as smoke and fire, fighting, and anomalous consumer behavior that may result in shoplifting. As Scylla observes your video stream, once a suspicious object is detected and identified, an alert is raised and escalated to your security staff. An alert is classified as a true alarm when the AI’s prediction corresponds to reality (i.e., the object of interest is correctly identified, the sought-after action is detected, etc.). A false positive is a case in which the alert is triggered by mistake. The elaborate AI and machine learning behind Scylla ODS can meet any level of production-grade industrial standards, including international AI standards.
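The detect-then-escalate flow described above can be sketched roughly as follows. The threat classes, confidence threshold, and notify() hook are placeholder assumptions for illustration; they are not Scylla’s actual API or operating parameters.

```python
# Illustrative sketch of a detect-then-escalate loop: detections of
# threat classes above a confidence threshold are turned into alerts
# and pushed to security staff. All names here are hypothetical.

THREAT_CLASSES = {"weapon", "fire", "smoke", "fight"}
CONFIDENCE_THRESHOLD = 0.8  # assumed tunable operating point

def escalate_alerts(detections, notify):
    """Raise an alert for each detection of a threat class whose
    confidence clears the threshold; return the alerts raised."""
    alerts = []
    for det in detections:
        if det["label"] in THREAT_CLASSES and det["confidence"] >= CONFIDENCE_THRESHOLD:
            alert = {"label": det["label"], "camera": det["camera"]}
            notify(alert)  # e.g., push to security staff via web/mobile
            alerts.append(alert)
    return alerts

raised = []
escalate_alerts(
    [{"label": "weapon", "confidence": 0.92, "camera": "lobby-1"},
     {"label": "person", "confidence": 0.99, "camera": "lobby-1"},
     {"label": "smoke", "confidence": 0.45, "camera": "dock-3"}],
    raised.append,
)
print(raised)  # [{'label': 'weapon', 'camera': 'lobby-1'}]
```

Note how the threshold embodies the true-alarm/false-positive trade-off: raising it suppresses mistaken alerts at the risk of missing real threats, which is why a human reviewer stays in the loop.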
Scylla’s Preventive Threat Detection solution is based on computer vision algorithms, and detection of a threat is based on visual content analysis. The algorithm is trained on diverse data sets, which enable it to identify and detect a wide range of objects from cameras, including people and a variety of weapons and vehicles. This capability enables the Scylla solution to address critical facial recognition issues by providing more accurate facial recognition that limits the number of false matches.
Scylla Incorporates Ethics in their Solutions
Scylla’s physical threat detection solutions meet the value-based principles and ethical AI implementation approach established by many AI standards. Scylla is focused on using its advanced AI-powered video analytics to protect human life. As an organization, Scylla understands that using AI comes with the responsibility to use these capabilities wisely and ethically. Scylla has intentionally made algorithm training, product design, and marketing decisions that ensure the use of ethical AI in accordance with its mission and ethics.
Scylla has successfully addressed the core principles of ethical AI. These core principles include (1) Be socially responsible and beneficial; (2) Avoid creating or reinforcing unfair bias; (3) Be built and tested for safety; (4) Be accountable to people; and (5) Incorporate privacy principles.[v],[vi]
The following indicates how Scylla has addressed these principles:
1. Be socially responsible and beneficial.
The expanded reach of new technologies increasingly touches society as a whole. In developing its AI technologies, Scylla considers its social responsibilities in providing AI solutions.
2. Avoid creating or reinforcing unfair bias.
AI algorithms and datasets can reflect, reinforce, or reduce unfair biases. Scylla recognizes this and provides unbiased datasets in their AI solutions. Scylla seeks to avoid unjust impacts on people, particularly those related to sensitive characteristics such as race, ethnicity, and gender.
3. Be built and tested for safety.
Scylla develops and applies strong safety and security practices to avoid unintended results that create risks of harm. Scylla designs their AI systems in accordance with their ethical principles and best practices in AI development, which includes monitoring their operation after deployment at client sites.
4. Be accountable to people.
Scylla designs their AI solutions with appropriate opportunities for feedback and relevant explanations. Scylla AI solutions are designed, developed, and implemented with appropriate human direction and control.
5. Incorporate privacy principles.
Scylla incorporates and adheres to privacy principles in the development and use of their AI solutions. Scylla AI solutions are designed, developed, and implemented with privacy safeguards, and provide appropriate transparency and control over the use of data.
Scylla also has a commitment to human rights, beginning with the data sets used to train their models. Scylla has deliberately built ethnically and gender-balanced datasets in order to eliminate bias in face recognition. Privacy is a fundamental human right. When information is gathered, it needs to be used, stored, and managed in line with legal requirements. Scylla does not store any data that can be considered personal. No footage or images are stored. In jurisdictions where face recognition is not permitted for privacy reasons, Scylla’s person search can be used to search for someone based on their general appearance. Scylla purposefully does not use their software to identify any ethnic or racial groups. Scylla will not sell software to any party that is involved in human rights breaches. Scylla undertakes due diligence on the ultimate use of all of their software. Most importantly, Scylla has very clear direction from their CEO to only sell their AI software to organizations that will use it for valid purposes.
For more information on how Scylla incorporates responsible and ethical AI in its solutions access the webinar: Scylla – Safe, Ethical and Effective Next-gen AI for Physical Security with Kris Greiner, VP of Sales North America at Scylla.
[i] OECD (2019). OECD Recommendation of the Council on Artificial Intelligence, OECD/LEGAL/0449. Retrieved from http://legalinstruments.oecd.org
[ii] OECD (2019). Artificial Intelligence in Society, OECD Publishing, Paris, https://doi.org/10.1787/eedfee77-en.
[iii] Murat Durmus (2020). Data Ethics: 7 Points to Consider. Retrieved from https://www.linkedin.com/pulse/data-ethics-7-points-consider-murat-durmus
[iv] Scylla (2021). Whitepaper: Report on the performance of various modules of Scylla AI Physical Threat Detection Solution
[v] Artificial Intelligence Principles at Google (2021). Retrieved from https://ai.google/principles/
[vi] DoD Adopts Ethical Principles for Artificial Intelligence (2020). Retrieved from https://www.defense.gov/Newsroom/Releases/Release/Article/2091996/dod-adopts-ethical-principles-for-artificial-intelligence/