Can we use AI in the third sector?

Your organisation might already be using Artificial Intelligence (AI), be curious about AI, or have been told you should not be using AI. In this article, we'll give an overview of the things a third sector organisation might want to consider if it chooses to use a particular form of AI called 'Generative AI'.

What is Generative AI?

Generative AI has been the most widely discussed form of AI since the launch of ChatGPT in November 2022, which was followed by a raft of similar tools and platforms such as Google Bard, Claude, Bing AI and Hugging Face.

Generative AI (we’ll now call it AI) refers to a type of AI that can create content, such as text, images, music, or videos. Generative AI models learn patterns from training data and can generate new content based on this learning. The ability to quickly generate new and specific content that is clearly written and seemingly well-informed may alter how many industries and people work, including the third sector. 
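
To make this concrete, here is a minimal sketch of asking a generative AI to draft some text programmatically. It assumes the openai Python library and an API key set in your environment; the model name and prompt are illustrative, not a recommendation.

```python
# A minimal sketch of generating text with a hosted model, assuming the
# `openai` Python library and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # picks up the API key from the environment

# The model generates new text based on patterns learned from its training data.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Draft a short thank-you note to a new volunteer."}],
)
print(response.choices[0].message.content)
```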

Should my organisation use AI?

AI may dramatically reduce the time spent on some tasks, freeing up more time for in-depth work, which is great news for the third sector. Conversely, the underlying technology and data that inform AI are difficult to understand, which means AI could reproduce structural bias and affect the very people that organisations are trying to support.

Over recent years, ethical concerns about the role and use of digital technology have become more prominent. AI is a rapidly growing field of digital, and consequently there is an emerging range of ethical considerations around the technology, as well as a significant amount of hype, which often makes it difficult to sort fact from fiction. One source we recommend specifically for third sector information on AI is the Civic AI Observatory.

A brief starter of things to consider when using AI

To begin with, you may want to consider:

  • How would your use of AI benefit the people you work with and/or your staff?
  • How might AI negatively affect the people you support and/or your staff?
  • How will you notify people that you are using AI?
  • Is there more value in a task being performed by a person rather than by an AI?
  • Do you understand how AI might affect your organisation or the people using your service?
  • Will it help you achieve your goals?
  • Where should you be using/not be using AI?
  • What is the environmental impact of using AI?

The Catalyst have written a guide on how the third sector can ethically use AI.

Quality control

AI is capable of producing fluent, plausible text, but it may occasionally suffer from 'AI hallucinations', where it confidently states things that are not true. There are also some examples of plagiarism, where an AI has reproduced significant passages of text or artworks from copyrighted material. Some of the companies behind AIs currently offer legal protection against copyright infringement claims for users of their paid-for models. However, we would advise caution, as legal precedents around the use of AI are not clearly established.

When using AI to create content, it is helpful to have a process in place to review the output, check that it is accurate and, where possible, confirm that it does not reproduce copyrighted material.

Using AI with or on people? 

While AI itself does not have imagination, it is capable of combining ideas in novel ways and helping to expand a creative thought. However, at the moment there are not yet any clear examples of communities participating with AI to co-generate novel responses to societal issues. This is an emerging field, and it's important to share learning and be mindful of the need to develop processes (or learn from others) that involve communities in the use of new technology.

Regulation and AI

There have been high-profile international events on AI regulation, but third sector organisations have not had a prominent role in discussions on regulation. The UK government's white paper (August 2023) sets out its approach to regulating AI: there will be no new comprehensive set of AI laws and no new AI regulator. Instead, existing regulators, including the UK Information Commissioner's Office and the Financial Conduct Authority, will oversee how their industries use AI. This differs from the EU, which is preparing to deliver the EU AI Act.

Unfortunately, there are lots of unresolved issues with AI regulation, and because tech companies move so fast, governments are often playing catch-up.

The Centre for Digital Public Services Wales is seeking to further understand the use of AI across the public sector in Wales and is delivering a series of webinars on the subject.

Data analysis

AI presents a big opportunity for the third sector to make use of its data. Some AIs let you upload data and ask queries or create visualisations that previously would not have been possible without a data analyst. This could offer some exciting new insights to support communities or better explain complex realities.
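
As a rough illustration of what this can look like in practice, the sketch below summarises a spreadsheet locally with pandas and then asks a model to describe the trends in plain English. It assumes the same openai library as above; the filename and prompt are hypothetical.

```python
# A rough sketch: summarise data locally, then ask an AI to interpret it.
# Assumes the `pandas` and `openai` libraries; the CSV filename is hypothetical.
import pandas as pd
from openai import OpenAI

df = pd.read_csv("donations_2023.csv")  # hypothetical data file
summary = df.describe().to_string()     # aggregate statistics only, not raw records

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"In plain English, what trends stand out in these figures?\n\n{summary}",
    }],
)
print(response.choices[0].message.content)
```

Summarising the data before sending it is a deliberate choice here: only aggregate figures, not individual records, reach the AI provider.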

Should we have an AI policy?

It is likely that having a policy for AI will become more established as AIs are more widely adopted and understood. Currently, it may be easier to amend your communications policy to include considerations on how your organisation should use or experiment with AI.

Code Hope Labs have developed a sample policy which can be adapted to your needs.

Ethical considerations

Some AIs, like ChatGPT, operate a closed model, which means it is difficult for anyone to unpick how they actually work. ChatGPT may also at times use the data you input to further develop its models. Other AIs, such as Mistral, use an open source model, which typically will not use the data you input to develop the model; open models are often more efficient than closed models, which means they may have less environmental impact.
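
One practical consequence is that open models such as Mistral can be run on your own computer, so the data you type in never leaves it. Below is a minimal sketch, assuming the open-source Ollama tool and its Python library, with the model already downloaded (for example via `ollama pull mistral`).

```python
# A minimal sketch of querying an open model locally, assuming the `ollama`
# Python library and a locally downloaded Mistral model. Because the model
# runs on your own machine, the prompt is never sent to a third party.
import ollama

response = ollama.chat(
    model="mistral",
    messages=[{"role": "user", "content": "Summarise the key risks of using AI in a charity."}],
)
print(response["message"]["content"])
```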

Some AIs may have unknown biases, having been trained on biased or limited data sources, and may make incorrect assumptions or statements based on this data.
