People's Newsroom

Moral license for Artificial Intelligence

The concept of social license to operate, where a firm openly works with the communities that will be affected by its actions to gain their trust and acceptance, offers an approach for crafting AI solutions that are acceptable to stakeholders.

Mining companies, like companies in many industries, have been struggling with the difference between having a legal license to operate and a moral one. The colloquial version of this is the distinction between what one could do and what one should do—just because something is technically possible and economically feasible doesn’t mean that the people it affects will find it morally acceptable. Without the acceptance of the community, firms find themselves dealing with “never-ending demands” from “local troublemakers,” hearing that “the company has done nothing for us”—all resulting in costs, financial and non-financial, that weigh projects down. A company can have the best intentions, investing in (what it thought were) all the right things, and still experience opposition from within the community. It may work to understand local mores and invest in the community’s social infrastructure—improving access to health care and education, upgrading roads and electricity services, and fostering economic activity in the region, resulting in bustling local businesses and a healthy employment market—to no avail.

Without the community’s acceptance, without a moral license, mining companies in New South Wales (NSW), Australia, found themselves struggling. This moral license is commonly called a social license, a phrase coined in the 1990s, and represents the ongoing acceptance and approval of a mining development by a local community. Since then, it has become increasingly recognized within the mining industry that firms must work with local communities to obtain, and then maintain, a social license to operate (SLO). The concept has developed over time and has been adopted by a range of industries that affect the physical environment they operate in, such as logging and pulp and paper production.

What has any of this to do with artificial intelligence (AI)? While AI may seem a long way from mining, logging, and paper production, organizations working with AI (which, these days, seems to be most firms) are finding that the technology’s use raises similar challenges around its acceptance by, and impact on, society. No matter how carefully an AI solution is designed, or how extensive user group testing has been, unveiling a solution to the public results in a wide range of reactions. A Bluetooth-enabled tampon can be greeted with both acclaim and condemnation, with some seeing the solution as a boon that will help them avoid embarrassment and health problems, while others see privacy and safety concerns or worry that the device could be hacked and leak personal information. Higher-stakes solutions result in more impassioned reactions, as has been the case with COMPAS, a tool for estimating a defendant’s risk of recidivism (or reoffending) in a criminal trial, and MiDAS, a solution intended to detect fraud and then automatically charge people with misrepresentation and demand repayment. Both solutions have been criticized as biased against less privileged groups, exacerbating structural inequalities in society and institutionalizing this disadvantage. Just as with building an oil rig, the fact that an AI solution is legally and economically feasible doesn’t imply that the community will find it morally or ethically acceptable, even if they stand to personally benefit.

AI, like all technology, can benefit as well as harm both individuals and society as a whole. How we use technology—how we transform it from an idea into a solution—determines whether potential benefits outweigh harms. As Melvin Kranzberg observed, “Technology is neither good nor bad; nor is it neutral.” What matters is how we use technology: to what ends, and by what means, as both require contemplation. There are choices to be made and compromises to be struck to ensure that the benefits are realized while minimizing, or suitably managing, the problems. Forgoing a technology due to potential problems might not be the most desirable option, though, as a “good enough” solution in an (already) imperfect world might, on balance, be preferable to the imperfect world on its own. The question is, however, what is “good enough”?

The challenge, then, is to discover what we should do. How do we identify these opportunities? What processes might be used to make compromises? And how can we ensure that the diverse voices in the community have their concerns listened to and accounted for?

Framing the challenge

AI enables solutions as diverse as machine translation, self-driving cars, voice assistants, character and handwriting recognition, ad targeting, product recommendations, music recognition, and facial recognition. AI is being used to instruct, advise, report measurements, provide information and analysis, report on work performed, report on its own state, run simulations, and render virtual environments. Solutions that seemed impossible a few years ago are now embedded in products and services we use every day.

Over this time, our view of AI has also changed. Hopes that AI-powered solutions would counter some of our human weaknesses have given way to fears that AI might be an existential threat. At first, it was thought that regulation could control how AI is used—open letters, with long lists of signatories attached, were sent to officials asking for regulation to be enacted. These efforts have largely failed to bear fruit. More recently, the focus has been on developing ethical principles to guide the development of AI-enabled solutions. These principles are useful distillations of what we want from AI (and what we’d like to avoid), but they are not enough, as they fall short of describing how particular solutions should adhere to them. The latest hope is that design (and design methodologies) will enable us to apply these principles, but it’s not clear that design will be enough either.

Our efforts to grapple with the challenge of realizing AI’s value while minimizing problems have been complicated by three challenges:

  • The definitional challenge of understanding what exactly AI is, and therefore, what the problems are
  • The challenge of aligning technical (AI) solutions with social norms
  • The challenge of bridging different social worlds—the different cultural segments of society that shape how their members understand and think about the world

We’ll deal with each of these in turn.

The definitional challenge: What is AI, and what are the problems?

There is no widely agreed-upon and precise definition of what AI is and what it isn’t. This is in part because AI is a broad church, home to a range of otherwise unrelated technologies. One widely quoted definition, from AI pioneer Nils Nilsson, is suitably broad:

“Artificial intelligence is that activity devoted to making machines intelligent, and intelligence is that quality that enables an entity to function appropriately and with foresight in its environment.”

While imprecise, this definition does capture the huge scope and ambition of what we might call the AI project. The lack of a precise definition might also have helped the field grow, as it has enabled AI to be something of a bowerbird, with its practitioners “borrowing” ideas and techniques from other fields in pursuit of their goals. A more cynical approach might be to define AI as “things that don’t quite work yet,” as many technologies stop being seen as AI once they are broadly adopted. Roboticist Rodney Brooks once complained: “Every time we figure out a piece of it, it stops being magical; we say, ‘Oh, that’s just a computation.’” There is a sense that AI is a label for the (currently) impossible.

More pragmatic would be to consider AI as an area of practice, a community working to replicate human cognitive (rather than just physical) achievements. AI technology is simply whatever technology the AI community uses to solve problems that they find interesting. AI can progress by applying old techniques to solve new problems just as much as it can by discovering new techniques to solve old problems. Indeed, a significant driver for the current wave of investment we’re seeing in AI is a confluence of cloud services, easy access to data, and low-cost, ubiquitous compute and networks enabling new solutions to be built from old technologies, rather than the development of new disruptive technologies. After several decades of steady progress, it seems that the discovery of new AI techniques might be stalling.

Regardless of where one draws the line between “intelligent” technologies and others, the growing concern for ethical AI is not due to new technology—such as the development of CRISPR or genetically modified organisms (GMOs)—that enables us to do new and unprecedented things. The concern is due to dramatic reductions in price-performance that enable existing technologies to be applied in a broad range of new contexts. The ethical challenges presented by AI are not due to some unique capability of the technology but to the ability to easily and cheaply deploy the technology at scale. It is the scale of this deployment that is disruptive.

As Kranzberg also warned, many of our technology-related problems arise because of the unforeseen consequences when apparently benign technologies are employed on a massive scale: many technical applications that seemed a boon to mankind when first introduced became threats when their use became widespread.

Thanks to the growing scale of AI deployment, society seems to be at a tipping point: a transition from a world containing some automated decisions to a world dominated by automated decisions. Society is formalizing decisions in algorithms, cementing them in software to automate them, and then connecting these decisions to each other and the operational solutions surrounding them. Where previously the digital landscape consisted of the isolated islands of enterprise applications and personal computing, the landscape today is one of always online, available, and interconnected cloud solutions and smartphones.

The technology used to automate decisions is less important than the volume of decisions being automated and the impact of connecting these automated decisions so that they affect each other. We’re also integrating these automated decisions with hardware that can affect the real world. And we’re doing this at scale, creating a landscape dominated by overlapping decisioning networks. It’s not that the individual decisions being automated are necessarily problematic on their own (though they may be, and we need guardrails to help ensure that this isn’t the case). Rather, problematic behavior often emerges when automated decisions are integrated and affect each other directly, something we might consider distributed stupidity—situations where emergent unintended consequences and clashes between automated decisions result in “smart” systems going bad.

A car rental firm, for example, might integrate the end-to-end rental process, from payments through to provisioning, reaching all the way into individual rental cars by using Internet of Things (IoT) sensors and effectors. This could enable the firm to track car locations, provide more tailored rental plans, and support renters on the road, while also reducing theft by immobilizing (stationary) cars should they be stolen. However, these systems might also lead the firm to inadvertently immobilize a long-term rental car while the renters are camping in a remote location with intermittent (at best) mobile phone coverage. A temporary fault with a payment gateway, progressively escalated by a series of automated decisions when the firm was unable to contact the renters via SMS or an outbound call center, could leave the firm believing the car to be stolen. The renters in this case would be left without a functioning vehicle in an isolated location and with limited resources, unable to walk out or contact help.

The point is that a bad (automated) decision can now have a cascading series of knock-on effects, triggering further bad decisions that escalate the problem. The unforeseen consequences Kranzberg warns of might well, in such instances, be the result of unintended interactions between previously manual decisions that have been automated and then integrated. These interactions could be highly contingent, as with the rental car example. They can also be prosaic, such as mistakenly adding a name to the list of redundancies after a merger, which could force a firm to terminate and then rehire an employee. Integrating payroll with operational and access control systems streamlines internal processes, but it also creates a network of automated decisions that, once started, the firm no longer controls.
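
To make the idea of a decisioning network concrete, here is a minimal sketch of the rental-car cascade described above. Everything in it (the rule names, the escalation order, the data fields) is hypothetical, invented purely to illustrate how individually reasonable automated decisions can compose into a bad outcome; it is not drawn from any real rental system.

    # Hypothetical decisioning chain for the rental-car example above.
    # Each rule is reasonable in isolation; the problem emerges from their composition.
    from dataclasses import dataclass

    @dataclass
    class RentalState:
        payment_ok: bool = True
        sms_acknowledged: bool = False
        call_answered: bool = False
        flagged_stolen: bool = False
        immobilised: bool = False

    def on_payment_failure(state: RentalState) -> None:
        """A transient gateway fault is indistinguishable from non-payment here."""
        state.payment_ok = False

    def escalate(state: RentalState) -> None:
        """Each step fires automatically when the previous one fails to resolve."""
        if state.payment_ok:
            return
        if not state.sms_acknowledged:      # renter is out of coverage
            if not state.call_answered:     # outbound call also fails
                state.flagged_stolen = True # non-payment plus no contact: assume theft
        if state.flagged_stolen:
            state.immobilised = True        # IoT effector disables the (stationary) car

    state = RentalState()
    on_payment_failure(state)  # temporary payment-gateway fault
    escalate(state)            # no human reviews the chain before the car is disabled
    print(state.immobilised)   # True: renters stranded in a remote location

Each rule could pass a human review on its own; the stranded renters are a property of the network, not of any single decision.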

This is a difference in degree, not in kind, with the low (and dropping) cost of technology shifting the question from “can we” to “should we.” We need to consider four questions: Are we doing the right things? Are we doing them the right way? Are we getting them done well? And are we getting the benefits?

The double edge here is that, because the cost to deploy and integrate these automated decisions is low and dropping, governance and oversight are often lowered as well, even as issues concerning privacy, persuasion, and consent come to the fore.

We need to focus on the system, rather than the technology, as it’s systems in use that concern us, not technology as imagined.

Aligning technical solutions with social norms

Our second challenge—the problem of aligning technical (AI) solutions with social norms—is one of not seeing the wood for the trees. The technical community, by nature of its analytical approach, focuses on details. The problem of creating an autonomous car becomes the problem of defining how the car should behave in different contexts: what to do when approaching a red light, what to do when a pedestrian stumbles in front of the car, and so on.

Designing “correct” car behavior is a question of identifying enough different contexts—different behavioral scenarios—and then crafting appropriate responses for each situation. Similarly, creating an unbiased facial recognition algorithm is seen as a question of ensuring that the set of behavioral scenarios (and responses) used to design the algorithm is suitably unbiased, trained on a demographically balanced set of images rather than relying on historical (and potentially biased) data sets.
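
As a rough illustration of what “demographically balanced” can mean in practice, the sketch below downsamples a labelled data set so that each group is equally represented. The function name, the group labels, and the balancing-by-downsampling strategy are all assumptions made for the example; real projects might instead reweight samples, collect additional data, or apply other debiasing techniques.

    # Illustrative sketch: balancing a labelled image set by demographic group
    # by downsampling every group to the size of the smallest one.
    import random
    from collections import defaultdict

    def balance_by_group(samples, group_of, seed=0):
        """Return a subset with an equal number of samples per group."""
        rng = random.Random(seed)
        by_group = defaultdict(list)
        for s in samples:
            by_group[group_of(s)].append(s)
        n = min(len(items) for items in by_group.values())  # size of the smallest group
        balanced = []
        for items in by_group.values():
            balanced.extend(rng.sample(items, n))           # sample without replacement
        rng.shuffle(balanced)
        return balanced

    # Usage (hypothetical field name):
    # balanced = balance_by_group(dataset, group_of=lambda s: s["group"])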

This reductionist approach is rightly seen as problematic, as whether a particular response is ethical is often an “it depends” problem. For autonomous cars, this manifests in the trolley problem, a thought experiment first posed in its modern form by Philippa Foot in 1967. The trolley problem proposes a dilemma where a human operator must choose whether or not to pull a lever that will change the track that a trolley is running down. The dilemma is that a group of people is standing on the first track, while a separate individual is on the second, so the operator is forced to choose between the group dying due to their inaction, or the individual dying due to their action. The point here is that there is no single “correct” choice; any choice made will be based on subjective values applied to the particular circumstances one finds oneself in, and one cannot refuse to choose. Many of the scenarios identified for our autonomous car will not have obvious responses, and reasonable individuals may disagree on what the most appropriate response is for a particular scenario. Similarly, attempting to align the training set for a facial recognition system with demographics leads to the question of which group of people will determine the demographic profile to be used.

The diverse and complex real world makes slicing any problem into a sufficient number of scenarios to ensure ethical behavior a Sisyphean task. There will always be another, sometimes unforeseen, scenario to consider; newly defined scenarios may well conflict with existing ones, largely because these systems work with human-defined (socially determined) categories and types that are, by their nature, fluid and imprecise. Changing the operating context of a solution can also undo all the hard work put into considering scenarios, as assumptions about demographics or the nature of the environment—and therefore, the applicable scenarios—might no longer hold. Autonomous cars designed in Europe, for example, can be confused by Australian wildlife. Or a medical diagnosis solution might succeed in the lab but fail in the real world.

The natural bias of practitioners leads them to think that “fair” or “ethical” can be defined algorithmically. This is not possible—a blind spot, generally, for the technologists.

Bridging social worlds

The third and final challenge is bridging different social worlds. All of us have our own unique lived experience, an individual history that has shaped who we are and how we approach the world and society. The generation that came of age in the Great Depression during the 1930s is a case in point: Failing banks during that time took countless individuals’ life savings with them, generating a lifelong distrust of banks among many people.

Disagreements in society are typically framed as differences in values or principles, differences in how we evaluate what we see around us. However, some of society’s deepest and most intractable disputes are not primarily about values and principles. Indeed, we can often agree on principles. The differences lie in the social worlds to which we apply these values and principles: the way we interpret what we see around us. We might agree with the principle that “it’s wrong to [unjustly] kill people,” for example, while disagreeing on what constitutes a person.

Progress on these most intractable disputes is difficult, as it’s common to assume that there is a single secular society (a fully normalized social world) against which to measure principles such as fairness. The assumption is that everyone sees the same world as we do ourselves and simply approaches it with different values, when this is not necessarily the case—a blind spot for many social commentators.

We can see these differences in social worlds come to the fore in some more recent and more controversial AI solutions. COMPAS, the recidivism-predicting tool mentioned earlier, is a good example. The team developing COMPAS took a utilitarian approach, creating a solution for a world where all individuals are treated equally and where harms (roughly, the proportion of incorrect predictions) are minimized for the greatest number of people. If we use a different measure and judge COMPAS according to the norms of a different world, one focused on equity, where all individuals experience similar outcomes in life no matter what circumstances they start under, then COMPAS is lacking, as the unintended harms it causes fall disproportionately on disadvantaged groups. This is the “fairness paradox,” as improving COMPAS’s performance in one world results in the solution performing worse in others (and vice versa).
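
A small worked example helps show why this paradox is structural rather than a flaw in any one team’s work. The base rates, precision, and true-positive rate below are illustrative numbers, not COMPAS’s actual figures; the point is only that if two groups have different underlying rates of reoffending, a score that is equally well calibrated for both groups must burden them with different false-positive rates.

    # Illustrative arithmetic (not COMPAS's actual figures): if a risk score has the
    # same precision (PPV) and the same true-positive rate in two groups whose base
    # rates of reoffending differ, the false-positive rates cannot also be equal.
    def false_positive_rate(base_rate, ppv, tpr):
        """FPR implied by a given base rate, precision, and true-positive rate."""
        true_pos = tpr * base_rate              # per person in the group
        false_pos = true_pos * (1 - ppv) / ppv  # predicted positives minus true positives
        return false_pos / (1 - base_rate)

    ppv, tpr = 0.7, 0.6  # identical "treatment" of both groups
    for name, base_rate in [("group A", 0.5), ("group B", 0.2)]:
        print(name, round(false_positive_rate(base_rate, ppv, tpr), 2))
    # group A 0.26, group B 0.06 -- equal calibration forces unequal error burdens

Whichever measure is equalized, the other diverges; choosing between them is a choice between social worlds, not a technical optimization.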

While we agree that our AI solutions should be ethical, adhering to principles such as fairness (promoting fair treatment and outcomes) and avoiding harm, we can still disagree on which trade-offs are required to translate these principles into practice: how the principles are enacted. Applying the same clearly defined principle in different social worlds can result in very different outcomes, and so it’s quite possible, in our open and diverse society, for different teams working from the same set of principles to create very different solutions. These differences can easily be enough for one group to consider a solution from another to be unethical.

It’s common at conferences to pose the (rhetorical) question: Who decides what is ethical? Any design decision is likely to disenfranchise or otherwise affect some demographic group or fail to address existing inequalities or disadvantages, so it’s implied that care must be taken to ensure that decisions are made by a suitably sensitive decision-maker. This is likely to be the wrong question, though, as focusing on who makes the decision means that we’re ignoring how this individual’s particular social world (which will be used to frame what is or is not ethical) was selected. A better question is: How can one build a bridge between the different social worlds that a particular solution touches? There are trade-offs to be made, but without such a bridge, one cannot begin to determine how to make them.

We might summarize the challenges of developing ethical AI solutions (moral decisioning networks) as being similar to thermodynamics in that you can’t win, you can’t break even, and you can’t leave the game. We can’t win, because if we choose to frame “ethical” in terms of a single social world—an assumed secular society—then we must privilege that social world over others. We can’t break even, because even if we can find a middle ground, a bridge between social worlds, our technical solution will be rife with exceptions, corner cases, and problems that we might consider unethical. Nor can we leave the game, banning or regulating undesirable technologies, because what we’re experiencing is a shift from a world containing isolated automated decisions to one largely defined by the networks of interacting automated decisions it contains.

If we’re to move beyond the current stalemate, we need to find a way to address all of these challenges: a method that enables us to address the concerns of all involved social worlds (rather than privileging one over others), that enables us to consider both the (proposed) system and the community it touches (rather than just the technology), and one that also provides us with a mechanism for managing the conflicts and uncertainty, the ethical lapses, that are inherent in any automated decisioning system. We need an inclusive dialogue.

Trust and acceptance

A successful AI solution—a successful automated decisioning network—is one that not only effectively performs its intended function, but is also accepted, approved, and ultimately trusted by the people it touches. While there will be challenges, management shouldn’t find itself dealing with “never-ending demands” from “local troublemakers,” hearing that “the company has done nothing for us” while incurring costs that weigh the project down. The relationship between management and community should be collaborative rather than adversarial, working together to understand when AI should be used. Unfortunately, we’re a long way from such a state of affairs.

The concept of a social license to operate for AI has the potential to address all three challenges—definitional, aligning a solution with social norms, and bridging social worlds—discussed above. An SLO puts the focus on the overall solution and the social and physical environment into which it is deployed rather than on the technology, avoiding the problem of centering our method on particular AI technologies. It also addresses the challenge of bridging social worlds by acknowledging that no solution can simply be declared ethical once and for all. While a firm might have the legal right to operate, it must also obtain, with the consent of the community, a moral license to operate, and this license must be maintained and renewed as both the solution and the community evolve and circumstances change.

The ongoing process of developing and maintaining an SLO enables a firm to build a bridge between the social world of the firm and the social world of the affected community—which itself may contain multiple social worlds that also need to be bridged. The SLO process does this by providing a framework within which the firm can work with the community to understand each other, the proposed solution, and each party’s goals, norms, and principles. The parties work together to develop a shared understanding of the proposed solution (focusing on the decisioning network rather than on the particular technologies) and then to determine how shared principles are enacted in real life—addressing the problem of aligning a solution with social norms—by identifying problems and opportunities and finding solutions.

Being open to this type of dialogue means being vulnerable, because honesty is required for the dialogue to be open and inclusive. Products and services must be fairly represented. Stakeholders need to be willing to trust that the technology has not been misrepresented and, where informed consent is given, that data will be stored and used as promised.
