
Responsible AI — Creating well-governed and value-driven insights

Evepsalti


During the last decade, artificial intelligence has enabled new business and personal experiences that improve operations, drive economic growth and enrich individual experiences. At the same time, these developments bring the need for governance, transparency and an ongoing process to ensure that AI frameworks are fair, ethical and aligned with organizational, governmental and societal values. Hence, responsible AI is becoming a paramount consideration for any entity designing or implementing AI models in its plans and operations. Learn what the pitfalls of AI are and how you can ensure that well-governed AI is at the core of your strategy, creating value-driven insights.

While artificial intelligence (AI) has existed for decades, big data, quantum computing and advances in data science have allowed models to be stress tested and made more reliable. Now that we’re in the throes of the fourth industrial revolution, AI and its related technologies are transforming virtually every aspect of every business. With large-scale applications, the economic opportunity around AI is significant. By 2030, it could be as much as $15.7 trillion, “making it the biggest commercial opportunity in today’s fast changing economy,” according to a recent report by PwC. We also see leaders recognizing the business growth and cost savings benefits of AI as they integrate it into their strategies: PwC’s CEO survey found that 85% of CEOs believe that AI will significantly change the way they do business in the next five years.

AI continues to mature as a powerful technology with expanding applications: it can automate routine tasks, such as our daily commute, while also augmenting human capacity with new insights. Combining human creativity and innovation with the scalability of machine learning is advancing our knowledge base and understanding at a significant pace, resulting in advancements in education, the environment, economic empowerment, health, hunger and crisis response, among others. As Alphabet’s CEO Sundar Pichai said in January, developing AI responsibly and with social benefit in mind can help avoid significant challenges and increase the potential to improve billions of lives.

At the same time, with great power comes great responsibility. AI can raise concerns on several fronts due to its potentially disruptive impact. These fears include workforce displacement, loss of privacy, potential biases in decision-making and lack of control over automated systems and robots. Others include intentional or unintentional uses of AI to harm others, for example through surveillance or weapons. While these issues are significant, they can be addressed with the appropriate planning, oversight and governance.

Not surprisingly, as AI underpins success for business and society, senior leaders are under the spotlight to ensure their companies comply with standards and laws around the responsible use of AI systems. According to Wharton, the majority (77%) of CEOs say that AI threatens to increase vulnerability and disruption to the ways they do business. At the same time, ensuring responsible AI practices can be challenging for many. So far, only about 25% of companies say that they definitely prioritize considering the ethical implications of an AI solution before investing in it, according to research by PwC. That number continues to rise, however, as AI applications evolve and leaders see the economic and productivity effects of infusing AI into their strategies and operations.

In general, ethical debates are well underway about what’s “right” and “wrong” when it comes to high-stakes AI applications, and about how we can infuse AI systems with human ethical judgment when moral values frequently vary by culture and can be difficult to encode in software.

In addition, bias can pervade AI tools. Take, for example, software to predict future criminals that is biased against African Americans, or a recruiting tool used by Amazon that was riddled with bias against women. Companies have a responsibility to commit to mitigating bias. This mitigation should not be an afterthought; it should guide AI development and product management processes. Before developing algorithms, AI designers and developers should be scrutinizing potential biases in data, identifying the potential ramifications of those biases, and then proactively taking steps to minimize them, as in the sketch below.
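To make that concrete, here is a minimal sketch of what scrutinizing a dataset before training might look like. It assumes a hypothetical hiring dataset with a gender column and a hired label, and the 0.8 threshold is only a common rule of thumb rather than a legal or universal standard:

```python
import pandas as pd

# Hypothetical historical hiring data; column names and values are illustrative only.
df = pd.DataFrame({
    "gender": ["F", "M", "F", "M", "M", "F", "M", "M"],
    "hired":  [0,    1,   0,   1,   1,   1,   1,   0],
})

# Selection rate per group: the share of positive outcomes in the raw labels.
rates = df.groupby("gender")["hired"].mean()

# Disparate-impact ratio: lowest selection rate divided by the highest.
# A common (though rough) rule of thumb flags ratios below 0.8 for review.
di_ratio = rates.min() / rates.max()

print(rates)
print(f"Disparate-impact ratio: {di_ratio:.2f}")
if di_ratio < 0.8:
    print("Warning: labels show a large gap between groups; investigate before training.")
```

A check like this does not fix anything by itself, but it surfaces skew in the historical labels early, while the team can still rethink the data and the problem framing.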

Minimizing bias also involves constructive dissent, a phrase embraced by Rumman Chowdhury, the Responsible AI lead for Accenture. Chowdhury has explained, “Successful governance of AI systems needs to allow ‘constructive dissent’ — that is, a culture where individuals, from the bottom up, are empowered to speak and be protected if they do so. It is self-defeating to create rules of ethical use without the institutional incentives and protections for workers engaged in these projects to speak up.” If people feel empowered with high levels of psychological safety to voice concerns, this will fuel more productive conversations pertaining to responsible AI and, especially, mitigating bias.

One core remedy for minimizing bias is enabling and prioritizing diversity. A recent report from the AI Now Institute revealed that only 15% of AI research staff at Facebook are women, a pattern that holds across other high tech companies. It’s not much better in academia, with recent studies showing that only 18% of authors at leading AI conferences are women and more than 80% of AI professors are male.

For African American workers, the picture is worse. For example, only 4% of the workforce in Fortune 100 companies is Black, and this percentage is not unique to high tech companies. The report unfortunately has no data on trans workers or other gender minorities. Racial diversity among AI researchers and industry leaders is minimal, so we need to be deliberate about bringing diverse perspectives and backgrounds to the table in order to mitigate bias and build responsible AI tools.

To mitigate some of these risks, there need to be clear guidelines around several areas, as demonstrated by Google’s responsible AI principles covering fairness, interpretability, privacy and security. In terms of fairness, it’s critical to design a model with inclusion in mind, use representative datasets to train and test the model, and continuously analyze performance to check the system for unfair bias. Interpretability is crucial to being able to question, understand, and trust AI systems. Recommended practices include treating interpretability as a core part of the user experience, deeply understanding the trained model, accounting for human psychology and its limitations, and of course ongoing and rigorous testing. It is also well known that AI depends on massive amounts of data, but this data can at times be quite sensitive (medical records, for example), so it’s important to collect and handle data responsibly by anonymizing and aggregating sensitive data, or better yet using public or non-sensitive data. Lastly, safety and security are paramount requirements for responsible AI, and identifying potential threats to the system while developing a systematic approach to combat them is always a best practice.
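As one illustration of the fairness practice above, here is a minimal sketch of slice-based evaluation, assuming synthetic data and a generic scikit-learn classifier; the feature, group and metric choices are placeholders rather than a prescribed recipe:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, recall_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in data: two numeric features plus a sensitive attribute (0/1).
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0
)

model = LogisticRegression().fit(X_tr, y_tr)
preds = model.predict(X_te)

# Slice evaluation: compare accuracy and recall per group rather than
# relying on a single aggregate number, which can hide unfair gaps.
for g in (0, 1):
    mask = g_te == g
    print(
        f"group={g}  "
        f"accuracy={accuracy_score(y_te[mask], preds[mask]):.3f}  "
        f"recall={recall_score(y_te[mask], preds[mask]):.3f}"
    )
```

The point of the sketch is the habit, not the specific metrics: evaluating every model release per slice makes unfair gaps visible before they reach users.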

In addition, it’s important to create the right governance framework, one that is anchored to your organization’s core values, ethical guardrails and regulatory constraints. Adherence to industry standards on how to design, build, train and test intelligent systems is critical. Explainability is also important, and it goes hand in hand with documentation throughout the entire AI lifecycle, from model design to implementation and use. Design and decision-making processes should be documented, and it should be clear when and why AI systems make mistakes. AI is inevitably going to fail. By making all aspects of AI development transparent, we can empower human judgment to kick in and avoid many negative fallouts.
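One lightweight way to keep that documentation next to the model is a simple record in the spirit of a model card. The sketch below is an illustration under assumed field names and placeholder values, not a standard schema or anyone’s official format:

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

# A minimal, illustrative "model card" record. The fields are assumptions,
# but they capture the kind of lifecycle documentation the text calls for:
# intent, training data, evaluation results, and known failure modes.
@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    training_data: str
    evaluation_summary: dict
    known_limitations: list = field(default_factory=list)
    date_created: str = field(default_factory=lambda: date.today().isoformat())

# Placeholder content for a hypothetical decision-support model.
card = ModelCard(
    model_name="loan-approval-classifier",
    version="0.3.1",
    intended_use="Decision support only; a human reviewer makes the final call.",
    training_data="Historical applications, anonymized and aggregated.",
    evaluation_summary={"accuracy": 0.91, "recall_group_0": 0.88, "recall_group_1": 0.84},
    known_limitations=["Under-represents younger applicants", "Not validated outside one market"],
)

# Persist the card next to the model artifact so reviewers and auditors can
# see when, how, and on what data the system was built.
with open("model_card.json", "w") as f:
    json.dump(asdict(card), f, indent=2)
```

Even this small amount of structure makes it much easier to answer, months later, why a system behaves the way it does and where it is known to fail.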

Trust should be at the core of new product development, and ongoing monitoring enables organizations to supervise and audit the performance of their algorithms for bias and security threats. To instill trust in AI systems, developers and leaders alike should be encouraged to look “under the hood” at the underlying models, explore the data used to train them, expose the reasoning behind each decision, and provide coherent explanations to all stakeholders in a timely manner. As IBM’s AI ethics policy states, “Allow for questions. A user should be able to ask why an AI is doing what it’s doing on an ongoing basis. This should be clear and up front in the user interface at all times.”
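For simple models, exposing the reasoning behind a single decision can be as direct as reading per-feature contributions. The following sketch assumes a small logistic regression over hypothetical loan features; real systems and nonlinear models typically need dedicated explanation tooling, so treat this only as an illustration of the idea:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Tiny illustrative setup: a linear model whose per-feature contributions
# can be read directly, which makes "why did the model say that?" answerable.
feature_names = ["income", "debt_ratio", "years_employed"]  # hypothetical features
X = np.array([[55.0, 0.30, 4.0], [32.0, 0.55, 1.0], [78.0, 0.20, 9.0], [41.0, 0.45, 2.0]])
y = np.array([1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

def explain(sample: np.ndarray) -> None:
    """Print each feature's contribution to the decision score (log-odds)."""
    contributions = model.coef_[0] * sample
    score = contributions.sum() + model.intercept_[0]
    for name, value, contrib in zip(feature_names, sample, contributions):
        print(f"{name:>15} = {value:6.2f}  contributes {contrib:+.3f}")
    print(f"{'log-odds score':>15}        = {score:+.3f}")

explain(X[1])  # ask "why?" for a single decision
```

The design choice worth noting is that the explanation is generated per decision and on demand, which is exactly the “allow for questions” behavior the IBM policy describes.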

Finally, monitoring brings accountability, both for protecting the end user and for providing better services to the market.

Overall, AI is here to stay, bringing limitless potential to push us forward as a society and a global economy. Used wisely, responsibly and with transparency, it can create extensive benefits for businesses, governments and individuals worldwide. And we all have a role to play in being vigilant, critical and inquisitive about AI models, their applications and their broader impact.
