Global Care recognises the potential of Generative AI to enhance creativity, efficiency, and communication. This policy sets out how AI tools can be used responsibly and ethically across the organisation, ensuring alignment with Global Care’s values and safeguarding the people we serve.
Purpose of the Policy
This policy provides a framework for using Generative AI tools (e.g. ChatGPT, ElevenLabs) to support Global Care’s work, while protecting authenticity, privacy, and human dignity.
Guiding Principles
- Values-Based Use
AI must reflect Global Care’s core values: Christ-centred, authentic, relational, enterprising, and serving. It must also support our unique strengths, such as grassroots partnerships and maximising support for vulnerable children.
- Human-Centred and Ethical
AI use must respect human rights, avoid bias, and comply with data protection laws. It should enhance — not replace — human creativity and decision-making.
- Transparency and Safety
AI-generated content must be clearly disclosed when used in public-facing materials. All outputs must be reviewed by a knowledgeable person to ensure accuracy and avoid misinformation.
- Safeguarding and Privacy
AI must never be used to create or alter images of children or partners. Sensitive media must only be processed using approved tools, and personal data must never be included in prompts.
Practical Use of AI
- AI tools may be used for tasks like content creation, reporting, and internal support — but only with approval from the AI Governance Team.
- All AI-generated content must undergo the same editorial checks as human-created content.
- An Approved AI Tools Register is maintained to guide safe and appropriate use.
Oversight and Accountability
- The AI Governance Team, chaired by the General Manager, oversees AI use and tool approval.
- Any new AI use must be proposed and reviewed before implementation.
- The policy is reviewed regularly to keep pace with technological developments.
You can read the full AI policy below.
1. Introduction
Generative AI is a type of artificial intelligence that can create images, text or audio output in response to an input, or ‘prompt’. Generative AI learns from many examples and then uses our prompting, together with what it has learned, to make something new and original. Imagine it as an artist who learns to paint in the style of many famous painters and then creates unique paintings in any style we ask for. This technology is exciting, but it also challenges us to ask questions such as “what counts as original work when a computer is the one creating it?”
The use of Generative AI is increasing rapidly, and it can be tempting to use it for a range of purposes, sometimes without considering the necessary safeguards. This Generative AI Policy explains Global Care’s position and sets a framework within which Generative AI can be used to further our work and partnerships.
This Generative AI Policy relates to Global Care and all its staff, workers, and volunteers, no matter where they are located.
2. Policy definition
This AI Policy is centred around the following principles:
- Global Care Values: When considering a new use of an AI tool, it is essential to ensure that it reflects Global Care’s values and Unique Selling Points (USPs). Examining a proposal in light of these values and USPs means potential pitfalls can be identified and addressed before associated activities begin. See the worked example below for an application of these values and USPs:
- Values:
- Christ-centred: does the proposed use resonate with Global Care’s Christian values?
- Authentic: does the proposed use maintain authenticity?
- Relational: does the proposed use promote and deepen relationships?
- Enterprising: does the proposed use widen the boundaries of what we can achieve as a charity?
- Serving: does the proposed use serve our partners, donors and the children we support?
- USPs:
- Grassroots Partnership: does the proposed use value Global Care’s grassroots partners?
- Valuing Relationships: does the proposed use demonstrate how Global Care values relationships?
- Maximising financial support of overseas projects: does the proposed use enable Global Care to steward resources effectively?
- Benefiting the most vulnerable children: does the proposed use help Global Care to do more for the most vulnerable children?
- Principles:
- Inclusive growth, sustainable development and well-being
- Proactively engage, wherever possible, in responsible stewardship of trustworthy AI in pursuit of beneficial outcomes for everyone.
- Recognise that Generative AI is not typically a substitute for roles undertaken by people, but people may use Generative AI to be even more creative and productive.
- Human-centred values and fairness
- Design and use Generative AI systems in ways that respect the rule of law, human rights, democratic values and diversity.
- Take into account situations where biases in training data can affect outputs (e.g. language, ethnicity, gender).
- Uphold the rights of data subjects under UK GDPR and other regulations, and ensure that every type of Generative AI use will be supported by a clear purpose, lawful basis, and an appropriate risk assessment. Data subject rights will be explained and honoured.
- Consider how Generative AI might be used in different scenarios such as audience-facing, organisation support and back-end reporting assistance.
- Transparency and explainability
- Commit to transparency and responsible disclosure regarding AI systems, and in particular where Generative AI is used (see 3.5).
- Foster a general understanding of AI systems, including their capabilities and limitations and potential risks.
- Make stakeholders aware of their interactions with AI systems, including in the workplace.
- Provide plain and easy-to-understand information to the consumers of AI generated output, where feasible and useful, on the sources of data/input, factors, processes and/or logic that led to the prediction, content, recommendation or decision, to enable those affected by an AI system to understand the output (see 3.5).
- Robustness, security and safety
- Ensure that AI systems function in robust, secure and safe ways throughout their lifetimes by continually assessing and managing potential risks (see 3.2).
- Consider restricting access to Generative AI tools to only those with a genuine and pre-approved business need, without compromising innovation.
- Outputs of Generative AI tools should always be validated by a real person with the relevant knowledge of the subject area in order to minimise the risk of misinformation and bias (see 3.6).
- Seek to protect the intellectual property and copyright of others.
- Never prompt Generative AI with messages or questions that contain personal data, copyrighted material, or material where someone else owns the intellectual property.
- Seek to protect the personal data, intellectual property, and copyright of Global Care’s donors, staff, partners and supported children in line with our privacy policy.
3. Uses of Generative AI
- Any use of Generative AI must align with the principles described above.
- Global Care will maintain an Approved AI Tools Register, which also contains guidelines on appropriate use cases for each tool. The register will be maintained by the AI Governance Team (AIGT), which is chaired by the General Manager. Any use of Generative AI must align with the register; otherwise, a proposal should be brought to the AIGT for consideration before any AI-related activities commence.
- When using AI tools for the processing of images and video media, only tools specifically approved for processing internal (not publicly available) data can be used (refer to the Approved AI Tools Register), until the media has been prepared for public-facing use. Special care must be taken, as this media may contain geolocation data that could create risk for Global Care, our partners, and our supported children. To ensure the safety of children and partners, employees should never upload any images or videos from our projects to AI software that has not been pre-approved.
- Generative AI should never be used to create images of children or partners, or substantially alter the appearance of a child or the context they live in. Photo editing must preserve an authentic representation of our supported children, our partners and our projects.
- Where Generative AI has been a substantial contributor to public-facing output, its use should be declared in an appropriate way to the consumer of that output. This applies in scenarios where Generative AI has been used to directly create elements of the output. Where Generative AI has been used as a “thought partner” or as a tool to enhance the work of a staff member or volunteer, rather than directly generating the output, it is not necessary to declare its use.
- All AI-generated content should go through the same human oversight and editorial diligence as human-generated content, with the same level of care taken to ensure details are accurate and correct. Special attention is needed where the AI tool may have generated details that are false or filled information gaps with its own invented content (sometimes called ‘hallucinations’).
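The geolocation risk mentioned above can also be checked mechanically before any media is shared. As a purely illustrative sketch (not part of the policy; it assumes JPEG images, and the helper name is hypothetical), the following Python function detects whether a JPEG file still carries an EXIF segment, which is where GPS coordinates are typically embedded:

```python
def has_exif_segment(jpeg_bytes: bytes) -> bool:
    """Return True if a JPEG byte stream contains an EXIF (APP1) segment,
    which may carry GPS coordinates and other metadata."""
    if jpeg_bytes[:2] != b"\xff\xd8":  # SOI marker missing: not a JPEG
        return False
    i = 2
    while i + 4 <= len(jpeg_bytes):
        if jpeg_bytes[i] != 0xFF:  # corrupt or unexpected data
            break
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # start of scan: no further metadata segments
            break
        # Segment length field covers itself (2 bytes) plus the payload.
        length = int.from_bytes(jpeg_bytes[i + 2:i + 4], "big")
        if marker == 0xE1 and jpeg_bytes[i + 4:i + 10] == b"Exif\x00\x00":
            return True
        i += 2 + length
    return False
```

A tool like this could flag files for review before upload, but it is a sketch only; in practice, media should be cleaned with an approved tool rather than an ad-hoc script.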
4. Approved generative AI tools
Alongside this policy, the Approved AI Tools Register provides details of which AI tools have been approved or prohibited for use by Global Care. It is reviewed regularly by the AI Governance Team (as new tools or requirements emerge, and as existing tools come up for scheduled review), and any changes will be communicated accordingly. All users of Generative AI tools should also check the register regularly to ensure the tools continue to be used appropriately.
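To make the approval workflow concrete, here is a minimal sketch of how a register lookup might gate a proposed use. All tool names, fields, and the helper function are hypothetical illustrations, not the contents of the actual register:

```python
# Hypothetical register entries for illustration only; the real
# Approved AI Tools Register is maintained by the AI Governance Team.
APPROVED_AI_TOOLS = {
    "ChatGPT": {"approved_uses": {"drafting", "summarising"}},
    "ElevenLabs": {"approved_uses": {"voice_generation"}},
}

def is_use_approved(tool: str, use_case: str) -> bool:
    """Return True only if the tool is registered and the use case is
    listed for it; anything else must go to the AIGT for review first."""
    entry = APPROVED_AI_TOOLS.get(tool)
    return entry is not None and use_case in entry["approved_uses"]
```

The design point is simply that approval is per tool *and* per use case: a tool approved for drafting is not thereby approved for, say, image editing.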
Worked Example
The below example demonstrates how this policy might be applied to a proposed use of AI tools.
The proposal
Use ChatGPT to turn Global Care’s published articles in Newsbrief into a podcast script. Then use ElevenLabs AI to create the audio of the podcast, ready for publishing on Spotify.
The benefits
- Enterprising and Maximising financial support of overseas projects: A podcast is a new medium which we could not previously afford; AI tools make it a cost-effective way to reach a new audience.
- Serving and Relational: We are bringing our projects and partners to the attention of a new audience. We are also reaching donors in a format some may prefer, building new relationships and deepening existing ones with donors who prefer listening to reading.
Points to consider
- Relational: we need to be careful that the podcast protects the identity and privacy of vulnerable children and our vulnerable partners. Scripts will need to be scrutinised even if the source articles have been previously checked.
- Grassroots & most vulnerable: the podcast needs to platform our grassroots approach, as well as our USP to reach the most vulnerable children. Each podcast needs to make a relevant call to action.
- Christ-centred: to authentically represent Global Care, the podcast needs to reflect our Christ-centred approach. It therefore should include Christian content (Bible verses, etc).
- Authentic: The question of authenticity is particularly important because the voices are AI-generated, which may cause listeners to doubt the podcast’s authenticity. This will be mitigated by: an intro declaring that the voices are AI-generated but the stories are absolutely real; an outro that explains why we do the work we do, pointing back to our key values; and a website page explaining the same in written form.

