A skinny robot looking at a butterfly. (Image generated in DALL-E by the author and her daughter, Ellery Born)

Until recently, the arrival of generative artificial intelligence seemed farther in the future. To date, most funders investing in artificial intelligence—including McGovern, Schmidt Futures, and Open Philanthropy—have focused primarily on understanding AI’s potential risks, or on supporting AI’s positive impacts on society, over the longer term. Others, like the Ford, MacArthur, and Hewlett Foundations, and Omidyar Network, have focused on building the capacity to address the risks and opportunities posed by a wide range of technologies, including, but not limited to, artificial intelligence. But because OpenAI’s release of GPT-4 caught the world by surprise, few funders have had time to think through how to address the immediate, non-existential risks—and astounding opportunities—posed by generative AI, or how to help groups currently working on public interest technology, cyber policy, or responsible technology build out their capacity to better address the moment.

The future is now. At this uncertain time, as the potential use cases of generative AI begin to become apparent, there are at least 10 things that funders can do to help the existing field of tech-related nonprofits—and society at large—better prepare.

Most obviously, funders working in specific issue areas—climate, health, education, or in my case, democracy—can support efforts downstream to prepare government and civil society in their respective sectors to take advantage of the opportunities, and mitigate the risks, that AI poses in their specific areas of concern. This might include:

1. Understanding, and developing guidelines and guardrails for, government use of AI. The discriminatory effects of predictive AI in prison sentencing decisions are now well understood, and judges and lawyers are already using generative AI to write opinions. Yet surprisingly little is known about how government is using AI beyond the justice system, much less what the guardrails are, or should be. A 2020 report from Stanford Law School and NYU School of Law researchers documented that nearly half of the 142 federal agencies surveyed had already experimented with AI applications, including to adjudicate disability benefits and communicate with the public. Helping leaders in government determine best practices around AI usage is critical for at least three reasons: government has a legal obligation to protect citizens’ civil rights; federal government adoption of these technologies will have impact at enormous scale; and, because government will be a dominant customer in this space, the procurement standards it sets will have significant knock-on effects.

The Executive Branch has already done a lot. In 2020, Executive Order 13960 required all civilian federal agencies to catalog their non-classified uses of AI (though the results were disappointing, and for most agencies little is known about even basic facts, such as whether an AI model was developed by an external contractor, as an estimated 33 percent of government AI systems are, or by the agency itself). For any issue area a given funder cares about, understanding how government is already using AI in that area is key, as are more specific guidelines and best practices to inform whether, when, and how Congress, the courts, and specific government agencies should deploy new technologies. This September, California’s governor took similar steps to better document, improve, and de-risk California’s use of AI. The National Institute of Standards and Technology (NIST) AI Risk Management Framework offers a good start, but it is by design “non-sector-specific” and will need to be significantly customized for different audiences. Meanwhile, the Office of Management and Budget was tasked with issuing draft guidance for federal agency development, procurement, and use of AI systems. Due this summer, that guidance is now several months behind schedule. Once it is available for public comment, it (alongside an Executive Order and a National AI Strategy/Framework expected later this fall) will require existing civil society groups to build out much greater AI-specific capacity if they are to provide meaningful input.

2. Building government (and civil society) capacity to use AI. Even with the right knowledge and guardrails in place, government leaders will still need to develop the capacity to meaningfully employ these technologies—especially at the state and local level. Without this, government and civil society will only fall farther behind the private sector in their ability to deliver for their constituencies and to protect the most vulnerable communities likely to be harmed by these technologies. We are already seeing groups like Climate Change AI exploring the use of AI to inform decisions regarding the design of “roads, power grids, and water mains [that] must be designed to account for the increasing frequency and severity of extreme weather events.” Climate Change AI is also looking to use AI to “pinpoint vulnerable areas, provide localized predictions, and incorporate historical or proxy data to identify what infrastructure is needed.” Both government and nonprofits in any field might soon use these technologies to assess policy options and to evaluate likely policy impacts. California Governor Gavin Newsom's new Executive Order rightly also calls for the development of government training materials on AI. To help government and nonprofits keep up, and to help mitigate the very likely discriminatory uses of these technologies, their staff will need to be trained quickly. Most of the basic courses, boot camps, and shared training modules developed so far are targeted at the private sector. The Partnership for Public Service is now training leaders at federal agencies, while the Stanford Institute for Human-Centered Artificial Intelligence (HAI) trains congressional staffers on AI policy issues through its annual summer bootcamp and is developing new work focused on civil society. Government will need more help keeping up. And civil society groups must at the very least be better equipped to respond and, in some cases, will need the capacity to use these tools themselves (when well aligned with—and not a distraction from—their missions).

* * *

The above work can easily be focused on any funder’s core issue area; for example, on understanding how the EPA is using AI and what guardrails should be in place, or on training government and civil society actors working on housing policy to improve their understanding of these technologies. Because philanthropy is generally organized by issue verticals like health, education, or the environment, it is easier for most funders to address problems downstream, once new technologies begin to affect their chosen issues.

However, efforts focused solely on our issue verticals leave funders in the position of cleaning up the mess downstream, year after year, for the foreseeable future. The field is less well organized to address cross-cutting issues like journalism, democracy, and technology. But generative AI is—like the internet itself—a foundational tool. It is already being embedded across thousands of application-layer tools.

For this reason, upstream interventions are needed to address challenges with AI that affect all issue areas. To address the problems at their core, several necessary—but not sufficient—foundational actions are required. Such efforts offer leveraged impact, improving the effects of these technologies across all issue areas.

3. Transparency and data access. First, and most essential, governments and civil society must have visibility into how AI tools are being used. This includes the degree of bias, explainability, and interpretability of inputs and outputs; the degree to which those outputs are “aligned” with, and accountable to, user (and societal) interests; the frequency of their “hallucinations”; and more. Data access will be a necessary, but not sufficient, condition for any efforts aimed at understanding impacts, holding companies accountable, and providing redress for individuals or communities harmed. Yet right now, largely because of privacy concerns, companies like OpenAI have adopted policies of deleting usage data after three months. This will make it nearly impossible for governments, academics, or civil society to understand how these tools are being used over time, and what their impacts are.

Work remains to be done to determine the right balance between protecting privacy and ensuring transparency, and to define exactly what kinds of transparency are required: regarding the parameters and training datasets, regarding the explainability and interpretability of outputs, or more. Clear transparency frameworks will be needed to ensure the right information is made available, akin to what Stanford’s Cyber Policy Center (where I previously worked) helped to develop for social media. Such transparency would enable the development of scorecards and other comparative tools to inform government, and consumer, choices regarding which AI products to use. That said, developing the technical infrastructure to enable transparency, across platforms, at scale is no joke. Groups like OpenMined are experimenting with designs here, but public pressure, or regulation, will be required if companies are going to share data.

4. Advocacy for research funding. Looking back at the disinformation field, philanthropy has invested over $100 million to build research centers devoted to understanding harms and (to a lesser degree) potential solutions. Over $100 million more has been invested by the US government, including by the National Science Foundation (NSF), the Global Engagement Center, and others. With respect to AI, last year the NSF announced its Technology, Innovation and Partnerships, or TIP, directorate (its first new directorate in 30 years)—however, the research it will support appears more likely to privilege commercially (rather than societally) beneficial applications of AI. In May, the NSF announced plans to invest almost $500 million in an AI Institutes research network, reaching almost every US state. Here, too, the emphasis appears much heavier on leveraging the opportunities than on mitigating the risks. Risk-oriented research on AI should also be funded, and there is a critical role for philanthropy in advocating that this research be financed not by philanthropy, but by the AI labs themselves. For example, a pooled research fund into which AI labs contribute 1 percent of annual profits—administered independently, overseen by a cohort of civil society leaders, and without the ability for the labs to pick and choose scholars or topics—would help us better understand the impacts of AI on society.

5. Formal collaborative institutions. There have been many recent calls for some form of multi-stakeholder table: a Christchurch Call on Algorithmic Outcomes similar to the original Christchurch Call, or an equivalent to the Global Internet Forum to Counter Terrorism (GIFCT), which combats terrorist content online. Such an entity would enable the AI labs to collaborate with each other as new vulnerabilities are discovered or new safeguarding best practices are developed; with the social media platforms that are the likely distributors of the content these labs generate; with the civil society leaders most likely to observe the harms caused to communities; and with governments working to represent the public’s interests. The Partnership on AI, founded in 2016 and now with almost 100 partners, has been working to secure both policy innovations and changes in public practices. OpenAI, together with Anthropic, Google, and Microsoft, just announced the new Frontier Model Forum, aiming to create “a new industry body to promote the safe and responsible development of frontier AI systems: advancing AI safety research, identifying best practices and standards, and facilitating information sharing among policymakers and industry.” But much remains to be done to determine the distinct roles of these different bodies, and to ensure meaningful participation from civil society in forums that are largely led, and heavily funded, by industry.

6. Informing voluntary industry best practices and codes of conduct. All industries have best practices. Many have voluntary codes of conduct, the Motion Picture Association’s voluntary ratings system being a prime example. Given the challenges in passing legislation in the United States, and the lack of direct applicability of at least some elements of the EU AI Act to the US context, codes of practice or industry norms offer a more immediate (albeit likely less transformative) path to impact. The Office of Science and Technology Policy’s (OSTP’s) AI Bill of Rights is a step in the right direction, though it remains relatively high-level. A growing number of groups are experimenting with applying concepts like citizens’ assemblies—enabling structured public input—to the design and governance of technology platforms. The Partnership on AI, too, is largely devoted to partnering with industry and civil society to develop self-regulatory frameworks. Other efforts, like the Collective Intelligence Project, are exploring work around “constitutional AI,” a voluntary method for companies to train their AI systems “using a set of rules or principles that act as a ‘constitution’ for the AI system.” Ongoing civil society input will be key.

7. Advocating for new models for AI in the public interest. The AI field is currently dominated by private companies with profit incentives. Different financial models warrant consideration. Rather than assuming that companies must lead, we could create a public option designed to serve the public interest. AI model development is phenomenally expensive and requires vast computational resources. In late July, a bipartisan bill was introduced to create the National Artificial Intelligence Research Resource (NAIRR). If enacted, it would give researchers from universities, nonprofits, and government access to “the powerful tools necessary to develop cutting-edge AI systems that are safe, ethical, transparent, and inclusive.” Greater public support will be necessary to see the bill passed.

8. Building government and civil society capacity to govern AI. Governments around the world have struggled to keep up with today’s pace of technological change, often failing to appreciate and mitigate the associated externalities until significant harm has been done. Here are (at least) three ways to help government better keep pace with the private sector:

  • Support audits and impact assessments. All AI companies are, presumably, doing some level of auditing to ensure the safety and trustworthiness of their models, but likely without the level of attention to the social and political risks these technologies might pose. Impact assessments also need to be conducted externally, by stakeholders in civil society, to capture any additional social or environmental harms the companies might not prioritize on their own. Today, however, the pool of experts who understand AI is vanishingly small: very few know enough to even design the requirements for such audits and assessments, much less administer them at scale. For that reason, support to develop standards is needed, as are trainings to upskill engineers, enabling those who are already technically proficient to more quickly develop the skills to audit these new technologies.
  • Improve consulting capacity. In the near term, there is a need for a nimble pool of AI experts whom governments and NGOs around the world can consult as they begin to develop regulations. But demand for experts who understand AI is so high that government and civil society groups will not be able to staff up on their own. Efforts like Data & Society’s Public Technology Leadership Collaborative, a peer learning collective of scholars, researchers, and government leaders, could help fill this gap. Additional consultative capacity could be built by groups like Tech Congress or the Integrity Institute, which could, for example, hire a pool of AI experts and lend out 10 percent of their time to nonprofits and governments around the world that are seeking advice on draft legislation or looking to stress-test their advocacy positions.
  • Improve educational infrastructure. In the longer term, academic institutions around the country—and the world—will need to update their curricula to create both bespoke courses and new degree programs. MIT, Stanford’s aforementioned Cyber Policy Center, Harvard’s Berkman Klein Center, Georgetown’s Center for Security and Emerging Technology, and others are already beginning to fill this gap. Through New America’s Public Interest Technology University Network, 64 institutions of higher education throughout the United States (and counting) have committed to ensuring that emerging technologists of all kinds are equipped to understand the sociotechnical complexity of their work, but they need more support.

9. Developing new legal theory. There is significant work to be done translating existing legal theory to apply to the societal harms posed by modern technologies. Most US legal theory, for example, emphasizes individual harm, whereas many of the harms associated with newer technologies—such as biased algorithmic decision-making that affects entire classes of people, or privacy violations associated with data collection and use—have collective components. By the same token, copyright and IP law are struggling to keep up with the ways generative AI reuses creative materials. US antitrust laws are outdated, privileging price harms (which proved hard to apply in the social media context, where platforms are often free). Antitrust may be similarly challenging to enforce in the case of AI if the AI labs decide to move from a paid subscription model to a free (advertising-supported) model. Similarly, free speech laws were developed long before the internet existed, and were thus predicated on a speech environment where the ability to speak was the primary concern, rather than the ability to listen, the ability to be heard, or the ability to make sense of the truth. Law schools will need more funding to keep up. And we will need much more legal aid and capacity to support those who have suffered AI-related harms.

10. Informing narrative change. The most upstream problem of all is the question of how we, as a society, view the role of technology in our lives. How do we tell the story of generative AI?

Not that long ago, it was projected that, given an aging population, robots would take over care for 80 percent of Japan’s elderly, and well over $300 million in government funding was invested toward that goal. This year, the US actors’ and writers’ strikes underscore questions about what role we want humans to play in the future of creativity and art. The rise of AI poses many important questions: Which aspects of the human experience are we willing to automate? Where do we want to draw the line? How much concentration should be allowed in a market where network effects are real? Should the private sector dominate a technology as far-reaching as AI, or must there be a public-interest option? What role should privacy play?

These are complex questions. But today, when new technologies are introduced, many around the world are expected to simply deal with them—rather than, as has been Europe’s approach, taking a more proactive stance in defining the boundaries that new technologies must respect and the public benefits they must deliver. These questions are particularly pressing given the risks associated with public-facing generative AI tools released just before a major US presidential election, at a moment in history when trust is low, democracies around the world are faltering, and US polarization and political violence are climbing.

The US AI Bill of Rights is a good start, moving from tinkering around the edges to asserting a positive set of rights that protect all people from the risks of generative AI. Groups like AI Now are similarly thinking through narrative change.

When it comes to foreign-born technologies like TikTok, the US government has been much more activist. But whether it is because of a lack of public pressure, because these are home-grown technologies, because they are engines of our economic growth and global power, or because the tech giants are now among the biggest lobbyists on the Hill (with Facebook and Amazon now ranking as the two biggest corporate lobbying spenders in the country, eclipsing past leaders from oil and tobacco), the United States has had a much harder time than its European counterparts protecting its own people from technological harm. An evolution in US public opinion will be essential, both to inform our own individual choices and as a necessary, if insufficient, precursor to any meaningful US government regulation.

* * *

There is a world of work to be done if we are to successfully maximize the benefits and minimize the harms associated with AI—new legal theories, new academic programs, and new standards for transparency, explainability, and more will be required. But most foundational of all, we need a clear vision and narrative to help Americans understand and determine the kind of economy and society we want to transition toward. We had the ingenuity to create generative artificial intelligence. We also have the ability to govern it in ways that support the public interest, human rights, and democracy.
