SMU DataArts - Cultural Data Profile


AI and the Future of Grantmaking: Lessons from Public Sector Research

  • Posted Feb 19, 2026

6-minute read

In recent discourse among funders, artificial intelligence (AI) in arts grantmaking has moved from theoretical possibility to practical consideration. Questions about AI’s role in grantmaking—from administrative support to the more consequential possibility of AI-assisted application review—are surfacing in conference rooms, professional networks, and strategy sessions with increasing urgency. The pressures driving these conversations are familiar: rising application volumes, limited staff capacity, and mounting expectations for speed and consistency in decision-making.

Long-time arts strategist, technologist, and program director Koven Smith captures both the interest and the ambivalence in a reflection piece summarizing a recent Grantmakers in the Arts (GIA) panel discussion that explored these very questions. What stood out in Smith’s reflection was not a rush toward adoption but a shared unease among panelists. Many of the scenarios discussed, such as using AI reactively during application surges or in response to board pressure, highlighted how easily these tools could be introduced without sufficient deliberation, literacy, or safeguards.

These concerns align closely with the work we’ve done in Responsible AI for Public Evaluation, a report we developed with the IBM Center for the Business of Government. That report used a government-funded arts grant program as its central case study, focusing on how AI could be used to evaluate decision-making processes. Since its release, one of the most common follow-up questions we’ve received is also the one we intentionally avoided answering directly:

What would it look like to use AI systems to review grant applications themselves?

As conversations like the one at GIA make clear, this question is no longer hypothetical.

A Crucial Distinction: Using Artificial Intelligence to Examine Decisions vs. Make Them

The core argument of our report is not that AI should be used to automate evaluation, but that it can be used to interrogate existing human decision-making. In the case study, AI systems are trained on historical human judgments to simulate how past processes operate at scale. This makes it possible to surface patterns, inconsistencies, or biases that are difficult to detect through manual review alone.

In other words, the AI is not positioned as a neutral decision-maker, but as a diagnostic tool, one that reflects the values, assumptions, and limitations embedded in prior human processes.

Grantmaking, as discussed in the GIA panel, is a particularly tempting domain for more direct AI use precisely because so much of the evaluative infrastructure already exists. Guidelines, rubrics, applications, reviewer notes, and scores are typically documented and archived.

The Risk of Scaling Yesterday’s Judgments

Several panelists in the GIA session raised concerns about fairness and bias—concerns that become sharper when AI is trained on historical data. Any system designed this way will inevitably encode the priorities and potential errors of past decision-making.

If prior review processes favored certain organizational sizes, artistic forms, geographic regions, or institutional norms, an AI system may simply reproduce those patterns more efficiently and with less visibility. This is especially concerning in arts grantmaking, where values, equity goals, and definitions of excellence have evolved significantly over time.

That raises an important question: What happens if the guidelines, rubrics, or criteria themselves are problematic or outdated?

This is where the discourse shifts from whether to use AI to what AI might reveal. AI does not resolve normative questions about what should be valued. But it can make explicit what has been valued—surfacing patterns in past decisions that might otherwise remain invisible. The question then becomes: do those patterns still align with a funder’s current goals and values? If problematic patterns aren’t identified and addressed during implementation, AI will simply amplify them.

Literacy Before Adoption

One theme implicit in the GIA discussion, but worth stating plainly, is the gap between the power of these tools and the level of literacy most grantmakers currently have about how they work.

There is a real danger that AI enters grantmaking through side doors: a temporary response to overwhelming application numbers, a vendor pitch promising efficiency, or a directive from leadership to “explore AI” without a clear problem definition. In these contexts, it becomes easy to treat the tool as an external authority rather than an extension of human judgment.

Responsible use requires resisting the impulse to say, “The system made the decision, not us.”

Instead, grantmakers need to understand that any AI system used in application review is, in effect, a compressed version of prior human choices: trained on historical data, guided by existing rubrics, and constrained by what was previously documented.

A Practical Bridge: Auditing Before Deploying

Most grantmakers, as noted in the GIA conversation, will not build bespoke AI systems. They will use off-the-shelf tools with limited transparency into their internal mechanics. That lack of visibility does not eliminate responsibility—but it does require different strategies.

One low-barrier approach, drawn directly from the auditing methods outlined in our report, is to test AI systems before applying them to live grant cycles.

This does not require code or technical expertise. For example, a funder could:

  • Submit fictional applications that differ in only one attribute (such as budget size or organizational age)
  • Re-run applications from prior years where outcomes are already known
  • Compare AI-generated scores or feedback with human expectations and program goals


These exercises function as a kind of stress test. If the tool produces results that feel misaligned (for example, systematically disadvantaging certain applicants, emphasizing criteria in unexpected ways, or offering opaque reasoning), that information is itself valuable. It may indicate that the tool, as configured, is not appropriate for the program, regardless of efficiency gains.
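To make the first of these exercises concrete, here is a minimal sketch of a counterfactual audit in Python. Everything in it is hypothetical: `score_application` is a placeholder standing in for whatever scoring call a given AI tool exposes, and the threshold and attributes are illustrative, not recommendations.

```python
# Hypothetical counterfactual audit: score paired fictional applications
# that differ in exactly one attribute, and flag pairs whose scores diverge.

def score_application(app: dict) -> float:
    """Stand-in for the AI tool's scoring call; replace with the real one.
    This placeholder heuristic just makes the sketch runnable end to end."""
    return 50.0 + (10.0 if app["budget"] > 1_000_000 else 0.0)

def audit_pairs(pairs, threshold=5.0):
    """Return (base, variant, gap) for pairs diverging beyond `threshold`."""
    flagged = []
    for base, variant in pairs:
        gap = score_application(variant) - score_application(base)
        if abs(gap) > threshold:
            flagged.append((base, variant, gap))
    return flagged

# Two fictional applications identical except for budget size.
base = {"name": "Org A", "budget": 250_000, "discipline": "dance"}
variant = {**base, "budget": 2_500_000}

for b, v, gap in audit_pairs([(base, variant)]):
    print(f"Score gap of {gap:+.1f} points when only budget changes")
```

Because the two applications differ only in budget, any large score gap is attributable to that single attribute, which is exactly the signal the audit is designed to surface. The same loop can be rerun with geography, organizational age, or artistic discipline as the varied attribute.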

This kind of audit reframes AI adoption from a binary choice to an ongoing evaluative process.

What “Responsible” Might Actually Mean

The growing interest in AI-assisted grantmaking brings both opportunity and risk. At a minimum, a responsible approach likely includes:

  • Treating AI as an assistive or diagnostic tool, not an unquestioned authority
  • Recognizing that systems trained on historical data encode historical values
  • Investing in staff literacy before deploying tools under pressure
  • Testing systems with fictional or historical applications before live use
  • Maintaining clear human accountability for outcomes


To help in this effort, we have put together a high-level checklist of things grantmakers should keep in mind when considering the use of AI tools to evaluate grant applications. Download The Practical Guide for AI Readiness for Grantmakers linked below.

AI may help grantmakers manage scale or surface patterns, but it cannot substitute for judgment about mission, equity, or public value. The most important question is not whether AI can review grant applications, but whether funders are prepared to take responsibility for what those systems reveal about their own processes.

AI tools were used to polish the text for this article. All content, ideas, and research are those of the author.

The Practical Guide for AI Readiness for Grantmakers

Drawing on a framework from Responsible AI for Public Evaluation (2025), this checklist is designed specifically for grantmakers seeking to pilot or deploy any AI tool to assist with their grant application review process.

Download Now