May 13, 2024
Principal Author: Karen Jensen
As part of our ongoing Speaker Series, on May 13th the Global Ethics and Culture team welcomed an esteemed panel, moderated by our very own Nebahat Arslan, to discuss actionable steps that organisations can take to build Responsible AI solutions.
In 2023, the Global Ethics and Culture team launched a Speaker Series that focused on education and awareness of bias in AI, and our Global Hackathon event challenged organisations worldwide to design AI solutions that addressed the ongoing challenges of gender parity and equity.
This year, we’re building on our 2023 successes with a new Speaker Series that will identify actionable solutions organisations can implement to overcome bias and foster change in AI ecosystems and infrastructures.
Our global panelists for this session were Leslie Canavera, Manail Anis Ahmed, and Ayca Ariyoruk. (Please see the links below to our speakers’ profiles on LinkedIn.)
How to design policies for Responsible AI
Our panelists were asked a series of questions, along with questions submitted by our global audience, on how to design policies for Responsible AI. Their responses have been summarised here.
What practical recommendations do you have for organisations implementing policies for meaningful change in Responsible AI?
Build from a Value-based perspective: Organisations should define their values around AI by including both individual and organisational perspectives. These values should include, at a minimum, fairness, privacy, and regulatory compliance.
Design from a Human-centric viewpoint: AI development should prioritise human well-being and consider second- and third-order effects, such as the potential changes AI deployment will have on vulnerable populations and society at large.
Transparency and Explainability: The acquisition of data and the reasoning behind AI decisions should be transparent. This includes rigorously questioning the scientific validity and accuracy of data, while also recognising the potential for inherent bias in AI.
Stakeholder Involvement: Identify and include stakeholders within your organisational supply chain. By asking holistic questions from a diverse pool of internal and external stakeholders, organisations can design robust and inclusive solutions.
Measurable Outcomes: Processes should align with policies that define clear metrics for measuring success, and they should be subject to rigorous testing standards (one example metric is sketched below).
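To make “clear metrics” concrete, here is a minimal sketch of demographic parity difference, one commonly used fairness measure. This is our own illustration, not a metric the panel named, and the example predictions, group labels, and policy cap in the comments are hypothetical.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Gap in positive-prediction rates across groups (0.0 = perfect parity).

    y_pred: array of 0/1 model predictions.
    group:  array of group labels (e.g. a protected attribute).
    """
    y_pred, group = np.asarray(y_pred), np.asarray(group)
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

# Hypothetical example: positive-prediction rates of 0.75 vs 0.25 give a
# gap of 0.5, which would fail a policy that caps the gap at, say, 0.1.
print(demographic_parity_difference([1, 1, 0, 1, 0, 0, 0, 1],
                                    ["a", "a", "a", "a", "b", "b", "b", "b"]))
```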
Please provide some current strategies for successfully deploying policies and frameworks for Responsible AI.
Conduct risk and impact assessments: Like any other new implementation, an AI application should have a clear definition of its intended purpose and expected outcome(s). AI policies should consider the level of risk in deployment, the regulatory and legal frameworks for user consent, and how to measure whether the AI application is “fit for purpose” (works as intended).
Design with Transparency and Focus on the Needs of Human Capital: While there has been much discussion about how AI regulations can inhibit innovation, our panel offered an alternative perspective: regulation of AI fosters trust and helps to prevent, and overcome, the biases that were built into previous emerging technologies. Designing with the best practices of Algorithmic Justice helps to ensure that AI augments human capabilities and is inclusive of everyone, not just small groups.
Understandability and Explainable AI (XAI): Even proprietary ecosystems can acknowledge the challenges of the datafication and monetisation of AI deployment. By explaining more than just the inputs and resulting outputs of emerging technologies, we can make metrics understandable to both users and regulators. In this way, organisations can ensure that their technologies and applications align with their original intentions. (A minimal sketch of one explainability technique follows.)
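As one illustration of explaining more than just inputs and outputs, the sketch below uses the open-source SHAP library to attribute a model’s predictions to individual features. This is our own example, not a technique the panel prescribed; the dataset and random-forest model are stand-ins for any deployed AI system.

```python
import numpy as np
import shap  # open-source explainability library
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Stand-in model and data for any deployed AI system.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# SHAP attributes each prediction to the input features, exposing the
# reasoning between input and output rather than only the output itself.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X.iloc[:100])

# Mean absolute contribution per feature: a metric that is legible to
# users and regulators alike.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(X.columns, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.2f}")
```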
How can organisations, especially small businesses, navigate the complexities of Global AI policies and legislative frameworks?
Policies should focus on stakeholder input and global impacts: Inclusive policies for ethical AI encompass the value viewpoints of diverse stakeholders and identify impacts from a global perspective. Nebahat suggested that organisations align with the most stringent AI policy framework that applies to them. For example, if an organisation is building an AI application whose business location and/or deployment technology is in the European Union, it can choose the EU AI Act as its regulatory framework.
Ask the right questions: What problem are we trying to solve? Can we explain the results of our deployment clearly and transparently? Have we aligned with a regulatory framework applicable to our location, our industry, and our solution? What are the potential long-term effects of our AI deployment? These are just a few examples.
Questions from the audience
How can we protect vulnerable populations from any adverse consequences of AI deployment?
Scientific validity is important. Additionally, testing and reliability of outcomes, as well as human-to-human interactions, are critical.
How do you see AI enhancing industries?
Improved outcomes. For example, sea ice forecasting is currently a manual process with significant lag time. AI-based sea ice forecasting has significantly reduced that lag and, with improved forecasting capabilities, has the potential to reduce negative impacts on human life.
What are the AI surveillance risks of facial recognition?
In Western societies, we have seen far-reaching and highly damaging erosions of civil liberties. In patriarchal societies, where women are already more vulnerable to gender biases and socioeconomic barriers, the additional risks of exploitation through AI technologies may include reputational damage with severe repercussions.
How can we include Consumer and Customer protection in the use of AI?
Transparency in the disclosure of AI applications is critical. For chatbot use, organisations should select non-human names for AI applications to maintain trust and transparency.
Takeaways and Learning
Start the blueprint of your AI policy from a Human Rights perspective. Defining both individual and organisational values can expose and eliminate hidden biases.
Regulation should not be thought of as a barrier to innovation in the development of AI applications. Regulation ensures that fairness and safety are baked into the entire culture of emerging technologies.
Impact assessments and analyses are critical tools that organisations can use to view their policies through a three-dimensional lens, helping to identify unintended and harmful second- and third-order effects.
“Garbage in, garbage out” (Stenson, 2016) is still relevant when deploying ethical AI applications. Organisations should include ethical data acquisition strategies, as well as input and output validation protocols that can pass rigorous scientific testing (a minimal sketch of such checks follows).
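As one illustration of an input/output validation protocol, the sketch below rejects out-of-range inputs before inference and flags implausible outputs after it. This is a minimal sketch under assumptions of our own, not a protocol described at the event; the feature names, valid ranges, and output bounds are hypothetical.

```python
import math

# Hypothetical valid ranges, which in practice would come from the
# ethical data acquisition and scientific-validity review.
FEATURE_RANGES = {"age": (0, 120), "income": (0, 1e7)}
OUTPUT_RANGE = (0.0, 1.0)  # e.g. a probability score

def validate_input(record: dict) -> None:
    """Reject records with missing or out-of-range features ('garbage in')."""
    for feature, (lo, hi) in FEATURE_RANGES.items():
        value = record.get(feature)
        if value is None or not (lo <= value <= hi):
            raise ValueError(f"Invalid {feature!r}: {value}")

def validate_output(score: float) -> None:
    """Flag implausible model outputs before they reach users ('garbage out')."""
    lo, hi = OUTPUT_RANGE
    if not math.isfinite(score) or not (lo <= score <= hi):
        raise ValueError(f"Implausible model output: {score}")

validate_input({"age": 42, "income": 55000})  # passes
validate_output(0.87)                         # passes
```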
Event recording
You can view the recording of the event using this link.
Ethics & Culture Team
Please see the links below to our Team’s profiles on LinkedIn.
Reference
Stenson, R. (2016, March 14). Is This the First Time Anyone Printed, ‘Garbage In, Garbage Out’? Atlas Obscura.