Ethical AI – The Good, the Bad, and the Moral Imperative

Madhumita Bhattacharyya, Managing Director and Global Leader, Enterprise Data and Analytics

Artificial intelligence (AI) is revolutionizing the way companies do business. It is helping companies improve decision making, increase customer retention, bring new products to market more quickly, and experience exponential growth. A global Protiviti study found that companies leading the way with advanced AI are seeing a real competitive advantage.

As AI becomes integrated into how we do business, companies are discovering innovative and valuable ways to use the technology to improve the world of work – and the world in general. Google is integrating AI technology into its G Suite of office applications, enabling Google Drive to automatically group related files while using AI-enabled cybersecurity to flag suspicious activity in the tool. And Hartford Financial Services Group is attacking the opioid crisis head-on by using AI to scan workers’ compensation claims for signs of possible opioid addiction based on prescription medication claims. Working with nurses and pharmacists, the company has seen a 45% reduction in opioid prescriptions – more than double the national average of 21%.

But just as AI can be used to benefit business and society, it can also be used to do harm or introduce potentially negative consequences. Legendary physicist Stephen Hawking once famously warned, “The rise of powerful AI will be either the best, or worst, thing ever to happen to humanity.” In the wrong hands, or with the wrong intent, AI and machine learning (ML) can be used to exploit individual and corporate vulnerabilities. And even for companies without any ill intent, AI can open the door to unethical consequences, including erosion of privacy; lack of transparency on how AI systems make life-impacting decisions; and discrimination and bias in credit scoring, recruiting and judicial rulings.

It is important, then, to step back and consider the societal and moral implications of this powerful technology. Companies must consider not only what AI can do for them, but also how they can implement AI responsibly – for the good of business, society and humankind. The push for ethical use of artificial intelligence is global. More than 15 countries have released national strategies that include ethical standards for the use and development of AI, and several organizations and countries have expressed interest in regulating AI.

As companies begin to integrate AI into their business strategy, the importance of practicing ethical AI may become more significant than just “doing the right thing.” Pointing to a rapidly growing trend toward AI oversight, some industry experts are beginning to warn that businesses need to demonstrate responsible use of AI or face the consequences of government regulation. In Protiviti’s recent white paper, Artificial Intelligence: Can Humans Drive Ethical AI?, we share insight on several factors that companies should consider to bring any AI development closer to its ethical goal, including data quality and eliminating data bias; proper training, validation and testing of AI/ML algorithms; and consideration of societal issues such as job change or loss.

While any technology can be misused for unethical purposes, it’s early enough in the AI game for corporations to put the right checks and balances in place to ensure that their AI systems and applications follow a clear ethical path. Whether it’s ensuring that AI algorithms are trained on clean and unbiased data, preparing employees for possible job changes, or creating committees or advisory boards to set clear goals for ethical AI, organizations have the opportunity – and the moral imperative – to ensure that any AI development is done for positive outcomes with minimal risk for harm.

The Protiviti white paper, Artificial Intelligence: Can Humans Drive Ethical AI?, is available as a free download from our website.