Programs, Policies, Principles and People Can Move “Ethical AI” From a Nebulous Concept to a Meaningful Practice


Competitive pressures and the inherent desire to disrupt markets through innovation have technology companies moving fast to invest in advanced artificial intelligence (AI). As they do, many of these businesses — industry giants and startups alike — are racing to embrace “ethical AI.” They are acknowledging a need to demonstrate to their stakeholders and the general public that they view responsible development and application of AI technologies as a business priority and moral obligation.

Even though we’ve already seen some fast fails with initial efforts, the trend toward ethical AI — which involves data privacy and business ethics — is positive. It’s in keeping with the concept of being a responsible technology company and laying the groundwork for future success and accountability. As a recent report from the AI Now Institute explains, “ethical initiatives … provide developers, company employees, and other stakeholders a set of high-level value statements or objectives against which actions can be later judged.”

Because tech firms are at the forefront of shaping this rapidly evolving area of technology, they also have a prime opportunity to set the standard for ethical AI. And it’s likely many firms will be even more motivated to seize that opportunity once they recognize the connection between ethical AI practices and protecting their return on investment (ROI) in AI technologies. A global survey report we recently developed with ESI ThoughtLab, Competing in the Cognitive Age, shows that 28% of technology companies are already realizing significant value from their investments in advanced AI. That figure is well above the 16% figure for businesses worldwide, and it’s expected to increase to 76% within the next two years.

Overlooking the Social Implications of AI Creates Risk

The AI revolution that's now underway will change the course of business across all industries and turn data into the key driver of competitive advantage. But the data and privacy issues around AI carry social implications, and with them risk, for businesses and their stakeholders. AI systems learn from data generated by humans, and unfortunately, human behavior is not always good. Concerns about AI range from the potential for racial and gender bias in AI systems to how law enforcement and the military will use the technology. We've already seen employees at leading tech companies objecting loudly and in large numbers to AI-related projects with the U.S. government's Immigration and Customs Enforcement agency and the Pentagon.

A technology company stating its commitment to develop and apply AI ethically and responsibly won’t be enough to satisfy most stakeholders. However, clearly demonstrating ethical AI practices isn’t easy, either. There are questions about what ethical AI even means. And there is no government regulation (yet) that ensures oversight of and accountability for the ethical development and use of AI.

So, how can a tech firm, or any business that is innovating with AI, be confident it is doing so in a way that will not deceive or harm people? What can it do to earn the trust of users and the public at large? And how can company leadership be sure they're even taking the right approach with ethical AI? Focusing on four key areas — programs, policies, principles and people — can serve as a good start. These recommendations are derived, in part, from our Competing in the Cognitive Age survey report:

  • Programs — Businesses are developing and deploying AI systems and related technologies with few accountability mechanisms and little thought about their broader implications. Validating AI programs to ensure algorithms are accurate, free from bias and conceptually sound, and that they meet the organization's model validation criteria and standards, can reduce the risk of ethical missteps (one simple bias check of this kind is sketched after this list).
  • Policies — Data is the most important element in AI, and that has led to companies stockpiling data, including vast amounts of customer data, to drive their AI programs. Twenty-five percent of AI leaders surveyed for our recent report pointed to the need to consider data privacy as one of the most important lessons they have learned in working with AI. So, it can be a smart move for businesses to think critically about how they source their data and determine whether there are clear rules and policies in place to ensure that the data is clean and usable (a sketch of such rule checks also follows this list).
  • Principles — Strong principles help to shape good practices as well as effective processes and controls. Not surprisingly, we have seen leading technology companies — Google and Microsoft among them — establishing their own guiding principles for ethical AI. Tech firms can look to these businesses and others for inspiration on developing their principles. Other resources, like The Future of Life Institute, can also provide valuable guidance on ensuring beneficial (and not destructive) use of AI. Also, the European Commission has issued draft guidelines on "trustworthy AI" that can help to inform a company's approach to ethical AI.
  • People — Perhaps the biggest check-and-balance measure for AI is people. AI does not have a moral compass. It does not have feelings. And it will take in whatever information — and bias — it is fed. So, it is up to people to ensure that the business uses data responsibly and develops AI technologies that will create benefit, not cause harm. That will demand a lot of monitoring, questioning and pushing back, which businesses must encourage. Also, people must always be able to control AI. The E.U. states as much in one of the seven requirements it outlines for developing future AI systems — Human Agency and Oversight: "AI should not trample on human autonomy. People should not be manipulated or coerced by AI systems, and humans should be able to intervene or oversee every decision that the software makes." (A minimal sketch of such a human-in-the-loop gate appears after this list.)
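To make the Programs point concrete, here is a minimal sketch of one bias check a validation program might run: comparing a model's approval rates across groups and scoring the gap against the common "four-fifths" rule of thumb. The data, group names and threshold are illustrative assumptions, not a prescribed methodology.

```python
# A minimal sketch of one fairness check an AI validation program might run:
# demographic parity, measured as the ratio of group approval rates.
# The decisions, group labels and 0.8 threshold are illustrative assumptions.

def approval_rate(outcomes):
    """Share of positive (1) decisions in a list of model outputs."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_ratio(outcomes_by_group):
    """Ratio of the lowest group approval rate to the highest.
    A value near 1.0 suggests similar treatment across groups."""
    rates = [approval_rate(o) for o in outcomes_by_group.values()]
    return min(rates) / max(rates)

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],
    "group_b": [1, 0, 0, 1, 0, 0, 1, 0],
}

ratio = demographic_parity_ratio(decisions)
print(f"Demographic parity ratio: {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths" rule of thumb
    print("Potential disparate impact -- flag model for review.")
```

A check like this is only one input to a broader validation standard, alongside tests of accuracy and conceptual soundness.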
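For the Policies point, here is a sketch of the kind of rules a data pipeline might enforce before records feed an AI program. The field names and rules (a consent flag, documented provenance, required fields) are illustrative assumptions; real policies will depend on the business and on applicable privacy law.

```python
# A minimal sketch of policy rules enforced on data records before they are
# used to train or drive an AI program. Fields and rules are illustrative.

REQUIRED_FIELDS = {"customer_id", "source", "consent_given"}

def policy_violations(record):
    """Return a list of policy violations for one data record."""
    problems = []
    missing = REQUIRED_FIELDS - record.keys()
    if missing:
        problems.append(f"missing fields: {sorted(missing)}")
    if not record.get("consent_given", False):
        problems.append("no recorded customer consent")
    if not record.get("source"):
        problems.append("data provenance not documented")
    return problems

records = [
    {"customer_id": 1, "source": "crm_export", "consent_given": True},
    {"customer_id": 2, "source": "", "consent_given": False},
]

for rec in records:
    issues = policy_violations(rec)
    status = "OK" if not issues else "; ".join(issues)
    print(f"record {rec['customer_id']}: {status}")
```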
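And for the People point, here is a sketch of a human-in-the-loop gate in which low-confidence or high-impact AI decisions are routed to a reviewer rather than executed automatically, in the spirit of the E.U.'s Human Agency and Oversight requirement. The thresholds, fields and review queue are illustrative assumptions.

```python
# A minimal sketch of human oversight: only routine, high-confidence AI
# decisions are applied automatically; the rest are escalated to a person.
# The confidence floor and the notion of "high impact" are illustrative.

from dataclasses import dataclass

@dataclass
class Decision:
    subject: str
    action: str
    confidence: float  # model's confidence, 0.0 to 1.0
    high_impact: bool  # e.g., affects a person's access or finances

review_queue = []

def route(decision, confidence_floor=0.9):
    """Auto-apply only routine, confident decisions; escalate the rest."""
    if decision.high_impact or decision.confidence < confidence_floor:
        review_queue.append(decision)  # a human must intervene here
        return "escalated to human reviewer"
    return "applied automatically"

print(route(Decision("order #17", "approve refund", 0.97, False)))
print(route(Decision("loan app #4", "deny credit", 0.95, True)))
print(len(review_queue), "decision(s) awaiting human oversight")
```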

While many companies, including a host of technology firms, are still at the starting gate with AI, or only in the early stages of development, that will likely change soon. Our research for Competing in the Cognitive Age suggests we will see a rapid acceleration of AI investment and advancement within the next two years. For tech firms already working with AI, or planning to, now is the time to start thinking about how to move ethical AI from a nebulous concept to a meaningful practice — and make it a part of their journey toward becoming a responsible technology company of the future.

By Gordon Tucker and Madhumita Bhattacharyya
