
An intelligent hospital

Consider a case where a firm is developing a “smart” hospital. This hospital will have all the usual accouterments of a smart building: IoT sensor networks to track how inhabitants use the building—identifying patterns of room use and individual preferences—and automation to both optimize the building’s operation and tailor it to individuals, minimizing maintenance costs, reducing the building’s environmental footprint, and improving convenience and comfort for its users. Floor-by-floor and zone-by-zone air quality and staff presence data will enable air conditioning and heating to be optimized, reducing power and water use while improving comfort. Data on ambient light levels and staff activity can be used to minimize lighting. Plant equipment, such as backup generators and oxygen supply lines, can be instrumented to enable just-in-time maintenance. Smartphone apps will enable inhabitants to interact with these systems and personalize their experience. And so on.

AI will be used to string these systems together, transforming our smart hospital into an “intelligent” one. Voice assistants will be ubiquitous—installed in registration (including for the emergency room), patient and treatment rooms, surgery, and so on—providing staff, patients, and their guests with a more convenient way of interacting with hospital processes, calling for help, and bridging any language barriers. Staff, patients, and visitors are tracked from when they first approach the building and associated with records maintained in operational systems—patients should never go missing again, visitors will be directed to whomever they’re visiting via wayfinding, and staff can always find the nearest specialist in an emergency. Decision support tools speed diagnosis, highlighting potential problems on medical images and suggesting what a patient’s particular collection of symptoms might imply. All this information is fed into AI-powered situational awareness and planning systems that identify problems (possibly before they crystallize into emergencies) and present decision-makers with both potential problems and possible solutions. A patient’s mutterings, for instance, are correlated with unusual readings from bedside monitors and interpreted as advanced heart disease, resulting in the situational awareness and planning system dispatching a drone crash cart while alerting support staff and the nearest specialist, and suggesting a change to the operating room schedule to accommodate a potential emergency.

While a boon, our intelligent hospital will likely suffer from many of the problems associated with a large-scale AI deployment. Voice assistants, for example, must support a range of languages, but which dialects within each language should be supported to avoid biasing the solution, and how should the hospital support those who (for whatever reason) can’t talk? A tool that “reads” X-ray images and highlights lung damage or other signs of pneumonia, one that worked well in the hospital where it was developed, might be biased against one of the demographic groups that our intelligent hospital serves, producing an undesirably high rate of false negatives or positives. What should the situational awareness and planning system prioritize when confronted with conflicting needs for a scarce resource, such as a particular specialist or machine: Which patient gets priority, and should the system be empowered to make these decisions on its own? There is also the possibility of unexpected interactions between these systems causing problems via emergent distributed stupidity: A voice assistant in a patient room might consistently misrecognize a patient with an uncommon dialect and, exacerbated by biases in diagnosis recommendation solutions, cause situation analysis to generate a stream of erroneous low-level requests that staff soon dismiss, leading them to turn off decision support and so miss the patient’s underlying problem before it becomes critical.

Our intelligent hospital can also amplify existing discrimination, disadvantage, and privacy concerns. Flawed AI behavioral profiling derived from social media and smartphone data could, for example, influence medical risk profiles determining which treatments are offered. Data from medical devices pieced together by situation analysis—blood oxygen, heart rate, and so on—might provide accurate prognoses that are implicitly treated by staff as do not resuscitate (DNR) decisions, decisions that might not be in a patient’s best interest but represent the most efficient use of hospital resources.

AI enables the hospital to take data generated by the sensor network (security cameras, for example), identify individual people, profile them, and then discriminate between them, either individually or as groups, and treat them differently. This discrimination can be a boon—allowing the firm to adjust the building or medical treatment to people’s needs and preferences while smoothing their journey through the day. It can also be harmful—creating undue stress by enabling the firm to track toilet breaks, generating maps of who is talking to whom and using them to identify groups unrelated to work for union-busting purposes, determining what treatments are offered to a patient, or even determining which patient is treated when resources are scarce. This discrimination relies on a wealth of personal data (both captured and inferred) stored in operational systems, elevating the risks and consequences should those systems be hacked or leak personal data.

From acceptance through approval to trust

To understand how a firm might go about gaining a social license, it’s important to consider the major role that trust plays in this effort. The benefits of a social license to operate are the result of the community’s acceptance and approval of a solution—the intelligent hospital in our example—and this acceptance and approval stem from the community’s trust in the firm. If the firm is to realize the anticipated benefits of the intelligent hospital, it needs to ensure the acceptance and approval of the community that will be using it. Failure to do this is likely to result in disruptions that drive up the cost and prevent the benefits from being realized. These can range from the minor (small disobediences such as sabotaging the sensors on a floor or using patterned clothing to hinder AI profiling and location tracking) to the major (attempts to hack the system and render it inoperable, or protests). Unanticipated bias in voice assistants, for example, could lead to protests by affected community groups unable to engage with hospital systems. Prioritization decisions by the planning system that are not aligned with community norms, or simply surprising to many in the community, could result in the entire project being questioned.

The firm has a great deal of freedom in how AI is used to realize the intelligent hospital. While voice assistants will require some form of voice recognition technology, a range of audio and video techniques can be used to track inhabitants to similar effect. A number of different approaches—many configurations of sensors, decisions (potentially made by AI technologies), and (consequential) actions—are possible, though only some of them will be acceptable, and even fewer may be desirable, to both the people working in and using the hospital and the firm commissioning it.

What is important is what decisions are made, which of these decisions are automated and which are not, how these decisions affect the quality of the working and private lives of the people using the building, the effect of the decisions on the human dignity of the people they touch, and how the decisions align with community expectations. The firm needs a social license for the intelligent hospital. The community needs to trust the firm if it is to grant the license: trust that the firm will do (and is doing) what it says it will, and trust the firm’s ability to execute and deliver on its commitments.

Ultimately, trust is a relationship of reliance. It’s the belief that a counterpart will behave in certain ways, as well as the belief that the counterpart is dependable and competent, that they can be relied on. A firm that works collaboratively with the community, demonstrating integrity and competence in how it shapes the solution and manages operational risk, will likely be seen in a positive light. A firm that takes advantage of a community’s vulnerabilities, is seen as cynical or incompetent, or shows poor stewardship of its own vulnerabilities will be viewed poorly.

Trust-building enables members of the groups associated with the initiative to accept being vulnerable to one another (something many businesses may need to learn), and it also helps deescalate conflicts. Failure by the firm to meet community expectations, either for reasons beyond the firm’s control or because the results of the firm’s labors don’t align with community expectations, erodes trust. When trust breaks down, it is often replaced by a suspicion—a suspicion that results in “never-ending demands” from “local troublemakers.”

Within the context of social license to operate, trust relies on four factors: a firm’s (or its solution’s) impact on the community; the quantity of contact between the firm and the community; the quality of that contact; and the procedural fairness of decisions made regarding the solution. A firm can take action in all four of these areas to build trust with the community and so increase the community’s acceptance and approval of its actions.

Understanding a solution’s impact on the community entails recognizing that all solutions bring with them problems as well as benefits. Our intelligent hospital potentially has a smaller environmental footprint through more efficient energy use. It may facilitate more inclusive operations by enabling staff to support a broader range of languages. And diagnoses might be more accurate and swifter. However, the building also has the potential to increase work stress; introduce the privacy risks of sensitive personal data being leaked or otherwise misused; institutionalize undesirable biases, inequalities, or disadvantages; or fall prey to emergent distributed stupidity. Many of these benefits and problems, though, can be anticipated by firms, enabling them to bolster the benefits while mitigating the problems.

It is also important to consider how the community experiences a solution, and how individuals experience it personally. For instance, integrating Bluetooth-enabled medical devices directly into the intelligent hospital’s IoT network might be met with a similar response to the Bluetooth-enabled tampon discussed earlier. Or a desire to streamline operations by simplifying how staff can collaborate around an image recognition solution might not adequately address concerns about privacy and human dignity. It’s quite possible for different stakeholders within the community to have different expectations for a solution’s benefits and problems. Similarly, an unanticipated dialect could result in frustration or even exclusion of an individual unless the speech recognition failure is handled gracefully. This mismatch between the firm’s intention and the community’s expectations of a solution’s impact and benefits can be a significant source of unanticipated consequences for the firm.

The distinction between the impact of a smart hospital and an intelligent one, between a hospital without and with AI, is one of degree rather than kind. AI increases the potential benefits, but it also elevates the risks.

This brings us to the next two factors supporting trust: the quantity and quality of the firm’s contact with the community. Trust is the result of frequent positive contact between the firm and the community. The firm that builds our hospital needs to present a human face to the community (the firm, after all, is also a community), a face that the community can learn to trust and work with.

Contact should be frequent (quantity) and meaningful (quality). Practically, contact can range from formal impact studies that gauge how a solution will affect a community and its disposition toward the solution, to day-to-day contact in the field via community groups or between individuals and representatives of the firm, as well as contact with stakeholders who are not directly affected by the solution but who have an interest in influencing the outcome. Some of this contact might also be mandated via regulations such as the General Data Protection Regulation (GDPR) or those governing the industry in which the firm operates. Frequent, meaningful contact enables the community and the firm to learn about each other, reducing the unknowns (and the unexpected) by minimizing misinterpretation and avoiding the projection of one’s own belief systems onto the other.

The fourth factor influencing trust, procedural fairness, concerns the decision-making and dispute resolution processes that govern a solution’s development and operation. Individuals must perceive that they have a reasonable voice in the decision-making process, that the decision-makers have treated them respectfully, and that the procedure is one they regard as fair. They must also feel that power is balanced between the parties—community and firm—so that the dialogue between them is truthful.

For the community to accept our intelligent hospital and trust the firm behind it, they need to feel that their opinions are valued, that their point of view has been accounted for, that they are being treated respectfully and with dignity, and that their view is being integrated into the solution. End-of-life care or intensive care treatment augmented by AI, for instance, needs to support patients and treat them with dignity and respect, rather than be based on an economic calculus. It should be practical for individuals and groups, for example, to respond to the proposal to use voice assistants throughout the hospital, pointing out problems and suggesting alternatives. Both decision-making and dispute resolution processes need to be understandable and navigable by individuals so that they can see their views being accommodated and weighed not only against those of others in the community but also against technical and financial constraints and the firm’s own interests.

The missing parts

The concept of social license to operate can provide us with a solid foundation for a moral license for AI, but work needs to be done to adapt it to the needs of firms developing AI solutions. There are three questions that we’ve been skirting in this article so far that we need to address if we’re to move forward. These questions are:

  • How do we describe the (proposed) solution without unnecessary (and confusing, for many stakeholders) technical details or reverting to overly abstract concepts?
  • What constitutes “community” for our solution—that is, how do we identify our stakeholders?
  • How do we evolve the solution, working from a proposed solution to one that the stakeholders consider ethical, identifying where the trade-offs are to be made and how to make them?

Describing the solution

The first hurdle to overcome is to find a way to describe our solution, such as the smart hospital in our example. While our familiarity with voice assistants makes them easy to understand, it is more challenging to understand a situational awareness and planning solution due to its more nebulous nature, as it requires data to be sourced from around the hospital to drive a network of interconnected decisions that provides recommendations and triggers actions for a diverse range of (potential) patient problems. We need a language that the community and the people proposing it can use to discuss the shape the AI solution will take—how inhabitant location will be tracked and what the tracking data will be used for, how it will interact with situational awareness, what actions and processes situational awareness can drive, and so on—as well as the relative problems and benefits of alternative approaches to realizing this functionality. It’s AI’s ability to integrate this broad range of sensors and effectors, to transform our smart hospital containing isolated automated decisions into an intelligent hospital that contains an integrated automated decisioning network, that highlights this need.

Describing our solution involves solving what we might call the brewing problem. Brewing required the development of microbiology—a language integrating biology and chemistry—before it could transition from craft to engineering. This made it possible to fine-tune the brewing process and obtain more consistent results. Similarly, if we’re to fine-tune our AI solution, then we need to be able to describe and discuss it in a language that is accessible to both the community and the people proposing it, a language that encompasses both ethics and implementation, but without including too many technical details. To be both comprehensible and useful, this language needs to be more specific than our high-level ethical principles, but more general than implementation details. It should also avoid technical jargon, using straightforward and accessible terms to support a common understanding that contributes to building trust. We need to be able to describe the interconnected and aggregated set of decisions (the decisions and their relationships) in our proposed solution; which actor (human or machine) enacts each decision; what information drives the decision; the consequences (and information) resulting from a decision; and the impact of these actions (and changing information) on humans.

It can be important to distinguish between decisions made by a human and those made by AI, as humans and machines think (and decide) differently. As humans, we use our senses and lived experience when we make a decision, even if we’re making it unconsciously. We notice the unusual and unexpected and factor them into our deliberations. Machines, on the other hand, consider only the data that they’re designed to consider. If a decision is consequential—such as the decision to fire a missile, withdraw an individual’s social benefits, or move a lifesaving machine to a different patient—then it is common to prefer that the decision be made by a human, as only a human will consider an unusual factor, something unexpected but important enough to sway a decision. In some cases, regulation might require particular decisions to be made by a human (or even by a group) rather than algorithmically. However, while we want to distinguish between human and machine decisions, we might be less interested in how the machine decision is implemented.

Our intelligent hospital might be described in terms of what information is captured, the decisions that are informed by this information, the entities that make the decisions (human or machine), and the information and actions that spring from each decision. For instance, the description may specify that a temporary identification badge issued to a visitor (information) will be associated with video images and a voiceprint (information) to identify the visitor (via a machine decision) so that the hospital can track them as they move through the building. (The technology used to associate the two is less important than the fact that the association is made.) If the building determines that the visitor wanders into a prohibited area, then it notifies (a machine decision) security staff on the floor who will determine what to do (a human decision). A complete description of a solution could contain many of these information-decision-action threads covering our intelligent hospital’s operations (“Man is an animal suspended in webs of significance he himself has spun”), which will be evolved and refined in collaboration with the community.
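
To make this kind of description more concrete, the sketch below shows one way an information-decision-action thread might be captured in a structured, implementation-neutral form. It is a minimal illustration only: the field names, the human/machine actor distinction as an attribute of each decision, and the annotations are assumptions chosen for this example, not a prescribed notation.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List


class Actor(Enum):
    HUMAN = "human"      # decision made (or supervised) by a person
    MACHINE = "machine"  # decision automated by the solution


@dataclass
class Decision:
    name: str
    actor: Actor                # who enacts the decision
    inputs: List[str]           # information that drives the decision
    outputs: List[str]          # information or actions that result from it
    annotations: List[str] = field(default_factory=list)  # constraints, e.g. "must be explainable"


@dataclass
class Thread:
    """An information-decision-action thread in the solution description."""
    name: str
    decisions: List[Decision]


# The visitor-badge example from the text, expressed in this form.
visitor_tracking = Thread(
    name="Visitor tracking",
    decisions=[
        Decision(
            name="Associate badge with visitor",
            actor=Actor.MACHINE,
            inputs=["temporary ID badge", "video images", "voiceprint"],
            outputs=["visitor identity and location"],
            annotations=["association technique left unspecified"],
        ),
        Decision(
            name="Detect entry into a prohibited area",
            actor=Actor.MACHINE,
            inputs=["visitor identity and location", "floor plan"],
            outputs=["notification to security staff on the floor"],
        ),
        Decision(
            name="Decide how to respond",
            actor=Actor.HUMAN,  # security staff determine what to do
            inputs=["notification to security staff on the floor"],
            outputs=["action taken by security staff"],
        ),
    ],
)

for d in visitor_tracking.decisions:
    print(f"{d.actor.value:>7}: {d.name}")
```

The value of such a structure is that it records who decides, on what information, and with what consequences, while leaving the implementation of each machine decision unspecified.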

Defining the community

Before we begin any work, we need to delineate the social boundary of our system. We must establish who the stakeholders are, understand their dispositions, discover the social worlds at play, and identify our “experts,” gatekeepers, and informants.

“Community” may well be too narrow a term to capture the diverse set of stakeholders that a complex solution such as our intelligent hospital touches and whose lives it affects. It’s easy to assume a social license to be a single license granted by a well-defined community. This is not true in complex environments, where the community is composed of a diverse collection of subgroups drawn from different geographic areas and communities. In these cases, it’s more productive to think of a social license to operate as a continuum of multiple licenses across these subgroups, across multiple overlapping and interrelated communities.

An anthropologist might start by listing the different behaviors, thoughts, and attitudes that should be considered, along with demographic attributes such as employment status, income, gender, primary language, and so on—factors that describe differences in the community. These factors are mapped to a set of community factors, with each factor capturing a tension or difference in preference that might exist in the community. Obvious examples from our intelligent hospital are a worker’s attitude to gender (whether gender is considered strictly binary or if a broader definition is accommodated), the nature of their work (analytical and bureaucratic or manual), their educational attainment, their religious or belief system, socioeconomic (dis)advantage, or whether they work in the hospital regularly or only visit occasionally. A complete set of factors provides us with a mud map of the landscape our community might cover.
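
As a rough illustration of what such a “mud map” might look like in structured form, the sketch below lists the factors mentioned above as a simple morphological field: each factor paired with illustrative positions along its tension. The factor names and positions are assumptions drawn from the examples in this paragraph, not survey data.

```python
from itertools import product

# Community factors, each with illustrative positions (assumptions, not findings).
community_factors = {
    "attitude to gender": ["strictly binary", "broader definition accommodated"],
    "nature of work": ["analytical/bureaucratic", "manual"],
    "educational attainment": ["secondary", "tertiary", "postgraduate"],
    "belief system": ["religious", "secular"],
    "socioeconomic position": ["disadvantaged", "advantaged"],
    "relationship to hospital": ["works there regularly", "visits occasionally"],
}

# The "mud map": every combination of positions is a point in the landscape
# the community might cover (a real community occupies only part of it).
landscape = list(product(*community_factors.values()))
print(f"{len(landscape)} possible combinations of community factors")
```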

Firms can use a range of formal and informal methods to investigate community members’ behaviors, thoughts, and attitudes, such as observation, structured and semi-structured interviews, group discussions, diary studies, or workshops with members of the group being studied. The important thing is to establish an open dialogue where information flows back and forth between researchers and subjects. Participants can be selected from the community to ensure that all known factors are covered, with particular attention given to edge cases. The goal is to learn as much as possible about the community’s history and the individuals within it to develop a full understanding of the social worlds the community contains and how it functions.

What the firm learns can be captured in an actor-network—a web of human and nonhuman “actants,” their relationships, conflicts and alliances, and the processes that bind them together—which can be used to identify a set of representative community member profiles (and representative community members) and how they might relate to the proposed solution.

Refining the solution

Our last challenge is to work with our community to refine our solution. In an approach inspired by the technique of general morphological analysis (GMA), we can break this into four phases.

First, we take an idea, such as our intelligent hospital, and create a description of it. The building might use this data to drive these decisions, which in turn result in these actions. This is the language discussed earlier in the article, the information-decision-action threads that describe how the building will monitor visitors while in the building, support diagnosis, identify and help manage emergencies, and so on. The description can be kept general at this point by, for example, not concerning ourselves with whether a decision is made by a machine or a human.

Next, we refine our solution over two phases: eliminating the impossible, and then discovering what is allowable (and acceptable) to the community (as regulation lags behind ever-evolving social norms).

Eliminating the impossible involves enumerating all possible solution configurations—combinations of which information might feed which decision to trigger which action—and then eliminating the ones that are clearly impossible, such as those configurations that are technically infeasible or prevented by regulation. Regulation might require that a particular decision be made, or supervised, by a human, leading us to add “this decision is performed by a human” to our solution description. Our intelligent hospital, for example, might require that any decision to transfer a lifesaving machine to a higher-priority patient is made by a human. In cases where we want the benefits of both human and machine decision-making, we might split the decision in two: a machine suggestion that can be considered as part of a human decision. The situation analysis and planning solution could be restricted to providing recommendations to a human manager who is responsible for determining the course of action. Or a decision might be required to be made by a suitably qualified person, or one with a particular level of seniority, such as a medical specialist—a requirement that is noted in the description of the decision. We might also require that a machine decision be understandable by a human, noting in the decision’s specification that whatever technique is used must provide a rationale for the decisions it makes. Our planning engine, for example, might be better implemented via rule-based constraint satisfaction rather than machine learning, as this may simplify users interacting with and tweaking the solution’s reasoning.
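
A minimal sketch of how this elimination step might be mechanized, under heavily simplified assumptions, is shown below: enumerate the candidate configurations (here reduced to which actor enacts each decision) and drop those ruled out by technical infeasibility or regulation. The decisions and the constraint rule are illustrative only, not drawn from any real regulatory requirement.

```python
from itertools import product

# Decisions in a (much simplified) solution description and the actors that could enact them.
decisions = {
    "associate badge with visitor": ["machine"],                 # no realistic manual option
    "recommend a diagnosis": ["machine", "human"],
    "transfer lifesaving machine to another patient": ["machine", "human"],
}


def allowed(config: dict) -> bool:
    """Constraints that eliminate impossible configurations (illustrative only)."""
    # Assumed rule: transferring a lifesaving machine must be a human decision.
    if config["transfer lifesaving machine to another patient"] != "human":
        return False
    return True


names = list(decisions)
configurations = [dict(zip(names, combo)) for combo in product(*decisions.values())]
viable = [c for c in configurations if allowed(c)]

print(f"{len(configurations)} configurations enumerated, {len(viable)} remain after eliminating the impossible")
for c in viable:
    print(c)
```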

This phase of analysis will also determine when a piece of data represents personal data (such as gender) that can be used as an input to only a few specific decisions. Based on this analysis, our description of the solution can be evolved, either by changing elements—information, decisions, and actions, and their relationships—or by annotating them to restrict how each element might be used or implemented.

The next step, removing the unacceptable, is a similar process but must be done in consultation with the community. Working with community representatives (aligned with the representative community member profiles identified earlier), a firm can identify what outcomes and processes are more or less acceptable to the community.

This phase can also explore the solution’s benefit (to the community) and maturity, using a tool such as a Wardley map to expose assumptions, permit challenges, and create consensus. For example, if a particular decision is required to be “fair”—such as the choice in COMPAS between equality and equity, or the prioritization of patient needs in an emergency—then how fairness is to be enacted could be determined in collaboration with the community representatives and noted in the decision’s description. Groups of related components—such as an automated registration process that integrates voice and touch interfaces with image recognition—can be reviewed to ensure the ensemble as a whole will not disadvantage or otherwise negatively affect individuals even though particular AI components are not perfect. The community’s attitude to (potentially) controversial technologies can also be considered: The community may be uncomfortable with ubiquitous video surveillance, prompting our intelligent hospital’s owners to find a more acceptable way to track inhabitants as they move through the building. The role of situation analysis and planning might be questioned due to concerns (mentioned earlier) that accurate prognoses will be treated as implicit DNR recommendations that are not in a patient’s best interests. With challenging questions such as this, the firm may need to consult with many diverse groups in the community to develop a coherent approach that is acceptable to the community as a whole.

At the conclusion of eliminating the impossible and discovering what is allowable, we have a detailed outline of our solution—though not a complete specification, as it omits details that the firm and community do not consider pertinent. The algorithm used to maintain the temperature in a building zone, for example, will likely remain unspecified. Other details, on the other hand, might be quite tightly specified, such as the allowable uses for the video streams emanating from security cameras, how consequential recommendations from AI solutions (such as accurate prognoses) should be treated, the extent to which behavioral profiles can influence decision-making, which machine decisions are required to be understandable by a human, or how “fair” should be interpreted when dealing with conflicting patient priorities.

The processes of eliminating the impossible and discovering what is allowable enable the firm, in collaboration with the community, to determine how ethical principles (such as fairness or preventing harm) are enacted, documenting this in a shared description of the solution. We have what might be called an “ethical requirements architecture.”

The final, fourth phase is the technical challenge of taking the refined solution description and determining how it should be realized. It’s in this phase that the wealth of work on methodologies and techniques to create unbiased and ethical algorithms—“trustworthy AI”—is leveraged.

The need for a moral license for artificial intelligence

Work on ethical AI has focused on developing the principles, requirements, technical standards, and best practices needed to realize ethical AI. However, while there is a clear consensus that AI should be ethical and a global convergence around principles for ethical AI, there remain substantive differences on how these principles should be realized, on what “ethical AI” means in practice.

While this article is notionally about “ethical AI,” it never addresses the question of ethics and AI directly, taking a different tack. Rather than attempt to define what AI uses are and aren’t ethical, it proposes that firms need to work with the communities they touch and obtain and maintain a moral license for the AI-enabled solutions they want to operate. Moreover, firms should consider doing this for any solution that automates decisions and integrates them with other operational systems to create decisioning networks—not just solutions that contain what is currently considered AI technology.

This difference in approach is due to three observations:

  • That AI solutions cannot be made ethical through the development of “fair” or “ethical” algorithms or development methodologies
  • That there is no single secular society (a fully normalized social world, an objective standard) against which we can determine if a solution is ethical or good
  • That the importance of ethical AI is not due to the development of disruptive AI technology or an existential threat from isolated, self-aware AI solutions, but rather due to the widespread emergence of automated decisioning networks

Ethical AI—the development of regulation, techniques, and methodologies to manage the bias and failings of particular technologies and solutions—isn’t enough on its own. Our goal should be moral AI; ethics are merely the rules, actions, and behaviors we’ll use to get there. We must keep a clear view of our ends as well as our means. In a diverse, open society, the only way to determine whether we should do something is to work openly with the community that will be affected by our actions, gaining first their trust and then their acceptance of our proposal.
