
Much of the innovation in the health care system, including initiatives designed in whole or in part to advance health equity, is centered on cutting-edge technologies—wearables, sensors, digital applications, remote patient monitoring, artificial intelligence, and so on—which may amplify rather than alleviate disparities.

At least part of the value proposition of advanced technological solutions in health care rests on the premise that digital tools can serve as a great equalizer, overcoming long-standing biases while expanding the capabilities of an overburdened and inefficient system. While these technologies could indeed make health care more efficient and accessible, there is little evidence that the magnitude of investment devoted to tech-centric innovation produces positive, sustainable change for communities beset by health disparities. These problems, unlike the technologies too often used to address them, carry the weight and complexity of centuries marred by the marginalization and exploitation of oppressed people. Addressing the challenges that communities at the periphery face requires identifying and combating the systemic inequalities that corrode physical and mental health in those communities, including the inequalities embedded in the health care system itself. While advanced technology can be a powerful tool for achieving equity in health care, its irresponsible use can have devastating consequences for vulnerable communities.

How Technological Innovation Fails to Advance Equitable Health Care

One of the most glaring modern examples of disparity-perpetuating health-care tech deployed in the US isn’t a “new” technology; it’s a new version of old tech that produced worse health outcomes than its antecedent. Pulse oximeters, which measure blood oxygen levels by calculating how much light is absorbed by human tissue, were first developed in the 1970s by Hewlett-Packard, which took care to ensure the tool’s accuracy on varying skin tones by testing it among people of color. Modern pulse oximeters, however, which are now largely produced by a small biotech company, use optical color-sensing, which often fails to accurately measure blood oxygen levels in people with darker skin tones. Despite this known defect, when COVID-19 first hit, pulse oximeter readings were hailed as a key “biomarker” for triage and early hospitalization decisions. Disturbingly, some patients of color who told ER doctors they couldn’t breathe well were sent home because the device indicated they didn’t need oxygen.
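To see how such a device can systematically misread some patients, consider how pulse oximetry works in simplified form: the device compares how much red and infrared light pass through tissue, converts that comparison into a ratio, and maps the ratio to an oxygen-saturation percentage using a calibration curve fit to data from test subjects. If the calibration cohort lacks darker skin tones, extra light absorption by melanin can skew the ratio and the reading. The sketch below is purely illustrative; the coefficients, signal values, and the pigmentation effect are hypothetical simplifications, not taken from any real device.

```python
# Illustrative sketch only: a simplified "ratio-of-ratios" pulse-oximetry
# calculation showing why a calibration curve fit on a narrow cohort can
# misread patients outside that cohort. All numbers are hypothetical.

def spo2_estimate(ac_red, dc_red, ac_ir, dc_ir, a=110.0, b=25.0):
    """Map the red/infrared absorption ratio R to an SpO2 percentage
    using a linear calibration curve (coefficients a and b are assumed)."""
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return a - b * r

# Suppose heavier skin pigmentation attenuates the red channel slightly more
# than the infrared channel (a hypothetical, simplified effect).
reading_in_cohort  = spo2_estimate(ac_red=0.60, dc_red=50.0, ac_ir=0.80, dc_ir=50.0)
reading_off_cohort = spo2_estimate(ac_red=0.60, dc_red=55.0, ac_ir=0.80, dc_ir=51.0)

print(f"patient like the calibration cohort: {reading_in_cohort:.1f}%")   # ~91.2%
print(f"patient unlike the cohort:           {reading_off_cohort:.1f}%")  # ~92.6%, reads falsely reassuring
```

In this toy example the second patient’s true oxygen level is no better, yet the device reports a higher saturation, which is the kind of falsely reassuring reading that can send someone home without oxygen.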


Artificial intelligence (AI) and machine learning (ML) form another fast-growing category of technologies deployed to support or circumvent clinical decision-making that has been shown, at times, to exacerbate health disparities. These systems are often built on biased rules and homogeneous data sets that do not reflect the patient population at large or that misinterpret the data of minority populations. For instance, an algorithm developed to determine kidney transplant list placement puts Black patients lower on the list than white patients, even when all other factors remain identical. This is despite the fact that Black Americans are about four times as likely to have kidney failure as white Americans and make up more than 35 percent of people on dialysis, compared to their 13 percent share of the US population. After a study revealed these disparities, some institutions stopped using the algorithm and others have begun the work of replacing it. AI algorithms have also been used to guide clinical decisions and to determine which patients are most in need of medical care; in both applications, researchers have exposed significant racial disparities embedded in the algorithms that negatively affect patient care.
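The mechanism behind the kidney example is worth making concrete. Older estimated-GFR formulas, such as the since-revised 2009 CKD-EPI equation, multiplied a Black patient’s estimated kidney function by roughly 1.16, and transplant listing typically begins only once estimated GFR falls below about 20 mL/min/1.73 m². The sketch below is a minimal illustration of that interaction, not the actual listing algorithm; the base eGFR value and the hard threshold are simplifications.

```python
# Minimal sketch of how a race coefficient in an estimated-GFR formula can
# delay transplant listing. The 1.16 multiplier mirrors the race adjustment
# in the (since-revised) 2009 CKD-EPI equation; the base eGFR and the strict
# 20 mL/min/1.73m^2 cutoff are illustrative simplifications.

LISTING_THRESHOLD = 20.0  # eGFR below which transplant listing typically begins

def adjusted_egfr(base_egfr: float, identified_black: bool) -> float:
    """Apply the race multiplier used by older eGFR equations."""
    return base_egfr * 1.16 if identified_black else base_egfr

base = 18.0  # identical measured kidney function for two otherwise identical patients
for label, flag in [("white patient", False), ("Black patient", True)]:
    egfr = adjusted_egfr(base, flag)
    print(f"{label}: eGFR {egfr:.1f} -> listed now: {egfr < LISTING_THRESHOLD}")

# Output: the white patient qualifies at 18.0, while the identical Black
# patient's estimate is inflated to ~20.9 and listing is deferred.
```

Two patients with the same kidney function end up on different timelines solely because of a coefficient attached to race.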

Yet these disturbing outcomes have not stopped innovators from continuing to develop and deploy algorithms for other health-related purposes without proper safeguards to ensure they do not harm patients based on race, a social construction that has no rightful place in clinical decision-making. Many of these algorithms have not been rigorously evaluated or subjected to peer review, and they are not consistently or thoroughly monitored for their effects on health consumers, especially vulnerable ones.

Despite these mounting issues surrounding the use of artificial intelligence, machine learning, and other accelerated technologies, both industry and government continue to pour money into this mode of innovation. Industry has made seismic investments in the digital health market, which Grand View Research valued at $211 billion in 2022 and expects to grow as much as 18.6 percent each year through 2030. The US government is also making significant investments in digital health. The US Department of Health and Human Services has earmarked $80 million to strengthen US public health informatics and data science to address health and socioeconomic inequities that the COVID-19 pandemic has exacerbated. The recently announced Digital Health Security (DIGIHEALS) project carves another $50 million out of the federal budget to safeguard digital health data.

Other seemingly innocuous but ubiquitous technologies—smartphone apps, wearables, remote monitoring systems, and even telehealth services—are often pushed as health equity solutions, yet they fail underserved communities in several ways:

  • Access to the internet has become a super determinant of health, playing a larger role in determining health care outcomes than education, employment, and health care access. In addition to a lack of access, poverty, poor engagement with digital health, barriers to digital health literacy, and language barriers may render such solutions ineffective for geographically and/or socially isolated communities.
  • Self-monitoring applications rely on persuasion to nudge users toward healthier choices and behaviors rather than providing the resources marginalized communities need to realize healthier lifestyles. More problematically, well-meaning incentives for healthy behaviors may end up rewarding the rich and penalizing the poor.
  • Where “innovations” are thrust on vulnerable communities despite mismatches between the tech and local needs, values, capacity, or connectivity, there is a fundamental problem of waste. This can include abandoned equipment, incompatible computer programs, and ineffective policies. Poor communities simply cannot afford to misuse valuable resources that could have been aimed at more sustainable, proven health interventions.
  • Due to the potential for cost reduction and scalability, digital health innovations have increasingly replaced high-touch care with high-tech solutions. However, human interaction is still an important factor in health care, and high-touch models have been linked to improved access to preventive care for some vulnerable populations. While telehealth and other forms of digital health have some utility, these technologies should supplement rather than replace patient-provider interactions.

Shifting to Equity

How can we shift from high-cost technological innovation that further marginalizes the vulnerable to innovation that is equitable, human-centered, impactful, and sustainable for the underserved? Four core principles should lead the way:

Hold health-care organizations accountable: Creating a digital health ecosystem that works for everyone starts by holding health-care organizations accountable for building responsible and sustainable solutions that promote equity. This requires ensuring that health-care organizations make good on health equity commitments and rigorously test new digital health innovations from an equity perspective before the technologies are unleashed onto the public.

Incorporate diverse perspectives among key decision makers: Including diverse stakeholders who bring different lived experiences to the health-care innovation process is vital for creating equitable innovations. The people most likely to experience severe health disparities are often underrepresented in R&D, marginalized in the tech industry, and nearly excluded from senior and executive roles throughout the health-care industry. This leaves important voices out of the decision-making process when digital health innovations are created.

Include marginalized people in research and product testing: Equitable research paradigms—such as community-based participatory research, creating opportunities for people from underserved communities to co-create with researchers and designers, or simply diversifying the pool of research participants—are vital to forging a more equitable path for health care innovation. They create opportunities for marginalized groups to provide input and feedback that only they can provide throughout the design and testing of new digital health products.

Aim to replace costly, high-tech solutions with more affordable options: For all Americans, the cost burden of health care is already far too high. For minoritized communities, which on average have lower incomes and are further financially strained by a lack of generational wealth, the cost burden of advanced technologies is even more excessive. As corporations and the federal government continue to pour money into digital health solutions, they need to ensure that the public is not bombarded with an onslaught of overlapping and unnecessary tools that further raise the costs of an already exorbitantly expensive health care system.

Investing in Social Innovations That Promote Health Equity

The nonprofit sector will be pivotal in the effort to redefine innovation. Though examples are sparse, some academics and philanthropic organizations are creating digital solutions specifically designed to aid disadvantaged groups. For instance, researchers at the University of Southern California developed an algorithm to identify the best person in a specific homeless community to spread important information about HIV prevention among youth. And a nonprofit in Germany developed a mobile app that offers information on more than 750,000 locations across the world, color-coded to show users whether each is fully, partly, or not at all wheelchair accessible.

While tangible artifacts that address societal and structural needs are important, social innovation for health should be understood as innovation in social relations, power dynamics, and governance, and it may include institutional and systems transformations. To lay the foundation for an equitable health care system that responsibly uses emerging technologies, government as well as the for-profit and nonprofit sectors need to prioritize addressing the biases embedded within our current health care system. Better understanding implicit bias among health care professionals is necessary, for instance, before creating clinical decision tools that could amplify prejudice. And, to broaden participation in clinical trials, pilots, and other research efforts, all important stakeholders in digital health ecosystems need to build trust with underrepresented minority groups.

It is up to mission-driven and socially conscious organizations to further develop roadmaps, guidelines, and heuristics that advance the practice of social innovation in health care. Crucially, if these organizations can decolonize health-related research and development, rigorously test new technologies, and take stock of their effect on vulnerable groups, then all organizations can be held to account for creating the circumstances necessary to produce responsible digital health solutions.

While some digital health solutions have the potential to improve the health and well-being of marginalized populations, their overuse or misuse could add new problems to a health-care system already riddled with failures that have culminated in substantial health disparities. Strategic investments in health-focused social innovations—rather than dumping more public and private funding into tech solutions that have been shown either to contribute little to health equity or to make an already biased health care system even more unfair—are more likely to help those the US health care system fails the most. With large investments in health innovation on the horizon, we can either design a system that benefits all, or continue down the path of irresponsible tech that works to the detriment of minority communities. For the sake of everyone at the periphery of the US health-care system, let’s hope we choose the right path.

