Grantmaking and the AI Bill of Rights: Safeguarding the Nonprofit Sector

Posted Aug 10, 2023 · 7-minute read

Beyond the hype, buzzwords, and doomsday predictions surrounding artificial intelligence (AI), nonprofit, philanthropic, and government agencies need to cut through the noise and develop a basic understanding of these systems so they can use AI technologies responsibly, especially in grantmaking. (For starters, the potential apocalypse caused by ever-advancing AI systems and promoted by some of the biggest tech companies is far from proven at this point, and that narrative obscures some of the current, and very real, issues in today’s AI systems.)

But as a sector with diverse knowledge bases, how do we explore and evaluate these complex systems?  

First, a primer on AI (such as this explainer from Carnegie Mellon University) provides the basic definitions and concepts that are foundational to many of the tools appearing on the market every day.

Second, we have to determine whom to trust in AI system development and how to apply these systems responsibly within our work. This is where nascent public policies come into play, empowering consumers to critically probe the systems they use and are evaluated by. It all starts with the Blueprint for an AI Bill of Rights.

[Image: areas of data science displayed in hexagonal shapes, with a person pointing to Machine Learning.]

Understanding the Blueprint for an AI Bill of Rights 

In 2022, the White House Office of Science and Technology Policy (OSTP) released the Blueprint for an AI Bill of Rights, which “identifie[s] five principles that should guide the design, use, and deployment of automated systems to protect the American public in the age of artificial intelligence.” The five principles serve as guideposts that nonprofit and philanthropic leaders can reference to assess how they intend to use AI systems:

  • Safe and Effective Systems 
  • Algorithmic Discrimination Protections 
  • Data Privacy 
  • Notice and Explanation 
  • Human Alternatives, Consideration, and Fallback 

Let’s break these down and frame the questions we should all ask when evaluating an AI system used in the nonprofit sector. As an example, we will use these frameworks to analyze an AI system we are developing at SMU DataArts to evaluate nonprofit grantmaking.


Safe and Effective Systems 

“You should be safe from unsafe or ineffective systems.” 

  • Who developed this system?
  • Do they have domain knowledge of the target population/sector?
  • What steps did they take to identify and mitigate risks? 

Before writing our first piece of code, our work in grantmaking evaluation started by studying the grant panel review process in great detail, drawing on our domain expertise in program evaluation and arts and culture grantmaking. This knowledge helped us identify issues and pain points in current human-driven systems and directed our attention to the areas that need the most care when developing a computational approach. Additionally, we submit our research proposals to Southern Methodist University’s Institutional Review Board (IRB) to ensure we are following best practices in research, especially for projects involving human subjects.


Algorithmic Discrimination Protections 

“You should not face discrimination by algorithms, and systems should be used and designed in an equitable way.” 

  • Does the model or data potentially reinforce historic bias towards any communities? 
  • Were the algorithms audited in any manner? 
  • Does the provider continually conduct tests and updates to mitigate any discrimination as it is identified? 

Many algorithmic issues arise in predictive and generative AI systems. For this reason, we are not developing our system to make predictions until we are confident that it does not perpetuate the historic biases that exist in nonprofit grantmaking. Instead, we are taking an evaluative approach, using the biases we identify in the system to inform potential changes to human-led panel grantmaking processes. This does not absolve us of critically studying and evaluating our own system, but it provides a framework aimed at identifying and mitigating future bias.
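
To make that evaluative approach concrete, here is a minimal sketch in Python of the kind of disparity check such an audit might run on panel-review data. The column names, values, and threshold are all hypothetical illustrations, not our actual model:

```python
import pandas as pd

# Hypothetical panel-review records: one row per application, with the
# panel's score and an attribute of the applicant community to audit.
reviews = pd.DataFrame({
    "applicant_id": [1, 2, 3, 4, 5, 6],
    "panel_score":  [82, 74, 90, 61, 78, 69],
    "budget_size":  ["small", "large", "small", "small", "large", "large"],
})

# Compare average panel scores across groups. A persistent gap does not
# prove discrimination, but it flags where human reviewers should dig deeper.
group_means = reviews.groupby("budget_size")["panel_score"].mean()
gap = group_means.max() - group_means.min()

print(group_means)
print(f"Score gap across groups: {gap:.1f} points")
if gap > 10:  # threshold chosen purely for illustration
    print("Flag for human review: possible systematic scoring disparity.")
```

The point of a check like this is not to automate a judgment but to surface patterns that human panel designers can then investigate and act on.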


Data Privacy 

“You should be protected from abusive data practices via built-in protections and you should have agency over how data about you is used.”  

  • Where did the data used to train the tool come from? 
  • Was it legally acquired, and did the data creators consent to its use?
  • Does the data contain personally identifiable information (PII)? 

Data privacy is critical to all areas of research. At SMU DataArts, we do not “out” any individual organization or person in our research without express consent from the subjects. In our evaluation project, we have worked very carefully with GPAC to protect the subjects in the data. This has included signing a nondisclosure agreement (NDA) that stipulates limitations on data usage and requirements for safe storage and destruction of data when required. Additionally, GPAC is removing all PII from the data to ensure we do not intentionally or unintentionally cause harm to any individual organization or person.
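
GPAC performs that de-identification itself, but for readers curious what such a step can look like in practice, here is a minimal Python sketch with hypothetical column names; it is not GPAC’s actual procedure:

```python
import pandas as pd

# Hypothetical applicant records as they might look before de-identification.
applications = pd.DataFrame({
    "org_name":      ["Arts Org A", "Arts Org B"],
    "contact_email": ["a@example.org", "b@example.org"],
    "ein":           ["12-3456789", "98-7654321"],
    "annual_budget": [250_000, 1_200_000],
    "panel_score":   [82, 74],
})

# Drop every column that identifies an organization or person, then
# substitute an opaque study ID so records can still be linked in analysis.
PII_COLUMNS = ["org_name", "contact_email", "ein"]
deidentified = applications.drop(columns=PII_COLUMNS)
deidentified.insert(0, "study_id", [f"ORG-{i:04d}" for i in range(len(deidentified))])

print(deidentified)
```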


Notice and Explanation 

“You should know that an automated system is being used and understand how and why it contributes to outcomes that impact you.” 

  • Was my organization assessed by an AI system in some way? 
  • What factors went into that assessment? 
  • What are the implications of the assessment? 

Notice and explanation are especially critical in predictive systems. In our research, we again avoid predictive uses of the system so that we do not create a grant management system that elevates applicants who fit a potentially biased historical mold. Rather than evaluate applicants, our focus is on evaluating how grantmakers evaluate their applicants. SMU DataArts and GPAC have agreed to share the results of our evaluative process publicly to ensure past and potential future applicants are fully aware of our findings and of any adjustments GPAC might make to its grantmaking processes based on the results.

One example where this principle isn’t fully honored in government grantmaking comes from a little-known entity within the U.S. Department of Health and Human Services called GrantSolutions, which creates risk scores for government grant applicants to assist funding agencies that distribute over $100 billion annually. Many applicants may not know that an algorithm is assessing the risk of their organization or application.


Human Alternatives, Consideration, and Fallback 

“You should be able to opt out, where appropriate, and have access to a person who can quickly consider and remedy problems you encounter.” 

  • Are there humans in the loop to continually assess and address identified issues?
  • Can your organization be removed from an AI assessment process?
  • What remedies are available for identified harms? 

The current paradigm of panel review in grantmaking is inherently human-driven, drawing on panelists’ knowledge and expertise to identify high-quality applications. While our system will not function as a reviewer or predict application success, it is critical that humans continually assess the results and determine which are relevant and appropriate for the field. If at any point our system seems to cause harm or reinforce biases, we will stop the process and determine whether there is a viable path forward.
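
As a minimal sketch of what such a human-in-the-loop gate might look like in code (the names and logic here are illustrative assumptions, not our production design):

```python
from dataclasses import dataclass

@dataclass
class Finding:
    """A single result surfaced by the evaluation system."""
    summary: str
    suspected_harm: bool  # set by ongoing bias and harm checks

def next_step(finding: Finding, human_approved: bool) -> str:
    """No finding moves forward without explicit human sign-off,
    and any suspected harm halts the process outright."""
    if finding.suspected_harm:
        return "halt: reassess whether a viable path forward exists"
    if not human_approved:
        return "hold: awaiting human review"
    return "proceed: share the finding with the field"

print(next_step(Finding("Scores diverge for small-budget applicants", True), human_approved=True))
print(next_step(Finding("Panelist agreement is high overall", False), human_approved=True))
```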


Even with mounting public and political pressure to regulate AI, the Blueprint for an AI Bill of Rights doesn’t yet have any real “teeth” to protect nonprofits or society at large. OpenAI, Microsoft, Google, and many other companies have stated the need for regulations and voluntary actions, but the public discussions are often watered down by private lobbying for less oversight.

So, as new AI technologies are put on the market every day (even including things developed by SMU DataArts!), nonprofit administrators and grantmakers should consider these questions and components of the Blueprint for an AI Bill of Rights. We at SMU DataArts will continue to scrutinize our work within these frameworks to provide insights to the sector and ensure we protect our organizations and communities.


Share Your Thoughts

We want to hear about your experiences interacting with AI systems in the nonprofit sector and/or grantmaking. Reach out directly to our research team.

