Exploring the Challenges Posed by Artificial Intelligence, Part I: Ethics

Organizations must make a concerted effort to address the ethical questions raised by artificial intelligence if they are to realize the technology’s full potential.

According to an IDC report, by next year, 75 percent of enterprise applications will include some sort of artificial intelligence (AI) functionality, and global spending on AI will exceed $52 billion — a significant increase over the roughly $19 billion that was spent on AI in 2018. As IDC Research Director for Cognitive/Artificial Intelligence Systems David Schubmehl puts it, “Every industry and every organization should be evaluating AI to see how it will affect their business processes and go-to-market efficiencies.”

As I have explored previously, there is already ample evidence that feeding the right datasets into the right AI solutions can drive considerable value across the healthcare industry. In fact, according to a recent KPMG report, 89 percent of healthcare professionals claim that AI is currently creating efficiencies in their systems, and 91 percent agree that it is increasing patients’ access to care.

In the healthcare communications space specifically, AI-driven value can take any number of forms. Sometimes, this value looks like a more nuanced understanding of patients that enables messaging based on the intricate ways a patient’s past behavior, personal characteristics, and current position in their patient journey interact. Other times, it looks like a tool that integrates with healthcare providers’ electronic health records to help identify pockets of underdiagnosed patients. Still other times, it looks like the ability to make decisions about the next best marketing action based on predictive recommendations.

All of this is to say that there is no longer any doubt about the utility of AI in healthcare marketing, healthcare more broadly, or the business world at large. AI is not only the future; it is the present. However, just because AI solutions are increasingly capable of autonomous decision-making does not mean organizations should grant these solutions total operational autonomy — indeed, organizations must exercise as much oversight of AI solutions as of human workers, if not more.

Its clear upsides notwithstanding, AI continues to present challenges related to ethics, privacy and security, and end-to-end efficacy. Over the course of this three-part series, I will explore each of these challenges in depth. At the highest level, though, the takeaway is this: to get the most out of their AI, organizations must develop an understanding of these shortcomings and use it to inform approaches that treat AI not as a set-it-and-forget-it solution, but as a powerful augmentor of their existing capabilities.

The Bias Reinforcement Problem

Many AI solutions deliver value by identifying trends in massive datasets that would otherwise go unnoticed, and using these trends to make decisions or recommend actions. Depending on how the algorithms that power a solution are programmed and trained, organizations may or may not be able to determine how the solution arrived at a given decision or action. Organizations need adequate visibility into the mechanics of their AI solutions to, among other things, avoid unwittingly embedding racist, sexist, ageist, or otherwise unethical biases into the very heart of their operations. Without clear guidelines for the ethical design and deployment of AI solutions, organizations risk stumbling into the same pitfalls that even the largest, best-resourced companies in the world have encountered.

For instance, in 2014, Amazon rolled out a machine learning algorithm designed to vet the thousands of résumés the company receives every year. As one of the solution’s engineers summed it up, “[Amazon] literally wanted it to be an engine where [we’re] going to give [it] 100 résumés, it will spit out the top five, and we’ll hire those [people].” Unfortunately, since men hold 74 percent of Amazon’s managerial positions and 60 percent of its total positions, the majority of résumés used to populate the solution’s training data belonged to men. As a result, the solution “learned” that men were preferable candidates to women and penalized any résumé that featured the word “women’s.”
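To make the mechanics of this failure concrete, consider the following minimal sketch. It uses a tiny, entirely synthetic set of résumé snippets and a simple bag-of-words classifier; it is not Amazon’s data or model, just an illustration of how a skewed training set teaches a model to penalize a gendered token:

```python
# Hypothetical illustration of bias learned from skewed training data.
# The resumes and labels below are synthetic; this is not Amazon's system.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Most "hired" (label 1) examples come from men, mirroring a
# historically male-dominated applicant pool.
resumes = [
    ("captain of men's chess club, python developer", 1),
    ("men's rugby team, java engineer", 1),
    ("python developer, hackathon winner", 1),
    ("java engineer, hackathon winner", 1),
    ("captain of women's chess club, python developer", 0),
    ("women's rugby team, java engineer", 0),
]
texts, labels = zip(*resumes)

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(texts)
model = LogisticRegression().fit(X, labels)

# The learned weight on the token "women" comes out negative: the model
# has absorbed the historical skew and now penalizes the word itself.
idx = vectorizer.vocabulary_["women"]
print("weight on 'women':", model.coef_[0][idx])
```

Nothing in this code mentions gender explicitly; the skew lives entirely in the historical labels, which is precisely what makes this failure mode so easy to miss.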

Similarly, just last year, technology luminaries including David Heinemeier Hansson (creator of Ruby on Rails) and Steve Wozniak (cofounder of Apple) drew attention to a seemingly biased algorithm that was being used to set credit limits for consumers’ Apple Cards. Hansson and his wife, Jamie, file taxes jointly and live in a community-property state, yet even though Jamie has a better credit score, her credit limit was set considerably lower than her husband’s. Wozniak and his wife, Janet Hill, found themselves in the same situation, and while Goldman Sachs, the bank behind the Apple Card, claims that the algorithm was not programmed to use gender as an input in its decision-making processes, that may actually have been part of the problem.

Again, many AI solutions operate by identifying trends that are either imperceptible to the human eye or composed of data points that a human analyst might not consider, whether by choice or by oversight. Apple’s algorithm may not have been programmed to be sexist, but, depending on the training data it was fed, it may have identified that, historically, consumers who frequently shop at women’s clothing stores have been granted lower credit limits. Consequently, it may have learned to make creditworthiness decisions based on a proxy for gender, reinforcing existing gender-based economic inequalities.
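The sketch below shows how this kind of proxy discrimination can arise. It uses entirely invented numbers and a hypothetical womens_store_share feature; it is not Apple’s or Goldman Sachs’ actual model. Gender is never passed to the model, yet predicted limits diverge by gender because the historical limits the model trains on already encoded the bias:

```python
# Hypothetical sketch of proxy discrimination on synthetic data. The
# feature names and numbers are invented, not any real issuer's model.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 1000

# Gender never enters the model, but it drives a proxy feature:
# the share of card spending at women's clothing stores.
gender = rng.integers(0, 2, n)  # 0 = man, 1 = woman (held out of X)
womens_store_share = np.clip(0.7 * gender + rng.normal(0, 0.15, n), 0, 1)
income = rng.normal(80_000, 20_000, n)

# Historical limits already encode a gender gap via the proxy.
historical_limit = (
    0.2 * income - 15_000 * womens_store_share + rng.normal(0, 1_000, n)
)

# Train on income and the proxy only; there is no gender column.
X = np.column_stack([income, womens_store_share])
model = LinearRegression().fit(X, historical_limit)
pred = model.predict(X)

print("mean predicted limit, men:  ", round(pred[gender == 0].mean()))
print("mean predicted limit, women:", round(pred[gender == 1].mean()))
# The gap persists: the model reproduces the bias through the proxy.
```

Dropping the protected attribute from the inputs, in other words, is not enough; the bias re-enters through whatever correlates with it.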

Laying the Groundwork for Ethical AI

Herein lies the challenge of developing ethical AI solutions: if organizations do not work proactively to counteract long-standing human biases, such biases will likely end up being reinforced by artificial intelligence. And counteracting them becomes tremendously difficult if a solution’s decision-making processes are even slightly opaque.

In an effort to address the serious ethical questions raised by black-box AI solutions, intergovernmental bodies like the Organisation for Economic Co-operation and Development (OECD) have established guidelines for transparent, explainable AI solutions. The OECD guidelines dictate that AI actors have an obligation to:

  1. Foster a general understanding of AI systems.
  2. Make stakeholders aware of their interactions with AI systems, including in the workplace.
  3. Enable those affected by an AI system to understand the outcome.
  4. Enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors and the logic that served as the basis for the prediction, recommendation, or decision.
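The fourth obligation in particular lends itself to a concrete sketch. For a simple linear credit-limit model (hypothetical features and synthetic data, not any regulator-endorsed method), a per-decision explanation can be as direct as breaking the prediction into per-factor contributions:

```python
# Hypothetical per-decision explanation for a simple linear
# credit-limit model. Feature names and training data are invented.
import numpy as np
from sklearn.linear_model import LinearRegression

feature_names = ["income", "credit_history_years", "utilization_rate"]

# Synthetic training data standing in for a real credit dataset.
rng = np.random.default_rng(1)
X = rng.normal(size=(500, 3))
y = X @ np.array([5_000.0, 2_000.0, -3_000.0]) + 10_000

model = LinearRegression().fit(X, y)

def explain(applicant):
    """Break one prediction into plain-language factor contributions."""
    contributions = model.coef_ * applicant
    print(f"baseline limit: {model.intercept_:,.0f}")
    for name, c in zip(feature_names, contributions):
        direction = "raised" if c >= 0 else "lowered"
        print(f"  {name} {direction} the limit by {abs(c):,.0f}")

explain(np.array([1.2, -0.5, 0.8]))
```

For a linear model this decomposition is exact; for more complex models, attribution methods such as SHAP play the same role of turning an opaque score into factors a consumer can understand and challenge.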

Principles like these can be used to inform the type of AI ethics policies that, according to a separate KPMG report, roughly 90 percent of business stakeholders want. Against the backdrop of this broad consensus, it should be no surprise that, in addition to AI architects, AI product managers, data scientists, and software engineers, KPMG listed AI ethicists as one of the “top five hires companies need to succeed in 2019.”

Bringing Out the Best in Us All

AI ethicists will be particularly essential in highly regulated industries like healthcare. Absent the proper regulatory frameworks, the risk that an algorithm will learn to use proxies for, say, patients with certain comorbidities or pre-existing conditions in unethical ways is quite high.

That said, by working with an AI ethicist and taking it upon themselves to ensure their AI solutions are easy to understand, not only for themselves but also for end users, analytics professionals can minimize the risk of bias reinforcement while still tapping into AI’s immense value.

Ultimately, the question is no longer if AI will be deployed in healthcare, but how it will be deployed. It is incumbent upon each and every one of us working in the analytics space to do our part to ensure this “how” amplifies the best of humanity, not the worst.
