This template is a preliminary guide to creating an acceptable use policy for generative AI at your nonprofit organization. It includes checklists for each step, along with examples and process questions to consider.
As you create your generative AI acceptable use policy, you may want to visit The Generative AI Glossary for Business Leaders (From A-Z) to review commonly used terms.
This section of your policy will contain value statements that identify your organization’s ethical AI use principles. Ethical AI principles help guide decisions, norms, and behaviours when using generative AI. Your value statements should closely align with your organization’s mission, core beliefs, norms, culture, and interactions with stakeholders, and should be based on an understanding of the benefits and challenges of AI use at your nonprofit. For examples of value statements that reflect an organization’s ethical AI principles, visit Google’s AI Principles page, or the Government of Ontario’s Principles for Ethical Use of AI.
Strive to accomplish the following as you create this section of your policy:
Articulate the core values guiding your organization’s AI use
Address the “dividend of time” concept: How will your organization reinvest time saved by AI?
Define your stance on human-centred AI use
Review the following examples of ethical AI principles and evaluate how they relate to your organization’s value statements:
Questions to consider:
How do these principles align with your organization’s mission?
What unique ethical considerations – perhaps not listed above – arise from your organization’s specific work?
Organizational norms are the shared expectations that guide staff actions and interactions. These norms influence how work is performed and how staff communicate and collaborate, contributing to the organization’s overall mission, culture, and operations. Your acceptable use policy should detail human-centred norms, use cases, and tool selection and provisioning.
Strive to incorporate the following as you create this section of your policy:
Establish criteria for selecting AI tools
Define how AI tools will be integrated into workflows
Specify approved and prohibited AI tools
Outline the process for tool provisioning and reimbursement
Define acceptable use cases for AI, such as the following:
Content creation and editing
Translation
Summarization
Meeting notes
Research
Brainstorming
Analysis
Examples of human-centred norms:
Questions to consider:
Which tasks should always involve human oversight?
How will you encourage staff to identify beneficial AI use cases?
How will you promote knowledge sharing about AI usage among staff?
Guardrails are rules that support the ethical principles in Section 1 and reduce risk. This section should provide specific guidance on how employees should use generative AI. This guidance should also be covered during onboarding and tool introductions, as well as in ongoing training.
Strive to establish clear guardrails for the following as you create this section of your policy:
Equity and access in AI use
Bias detection and mitigation
Privacy protection
Confidentiality preservation
Ensuring the accuracy of AI outputs
Respecting intellectual property
Transparency in AI use
Promoting sustainability
Supporting Fairwork practices
Any other guardrails that surface in your discussion
Below are examples of these commonly used guardrails:
Equity & Access
Bias
Privacy
Decide whether each app is default “on” or default “off”, and train staff on how to start or stop it
Confidentiality
Accuracy
Intellectual Property
Transparency
Sustainability
Fairwork
Questions to consider:
How do these guardrails support your organization’s ethical principles?
What unique risks does your organization face that require specific guardrails?
This section of your policy identifies your organization’s adoption strategy, including how you will roll out AI use across your organization. It will also cover opportunities for peer learning, training, experimentation, and open learning. Lastly, this section will spell out guidance for updating and reviewing your policy on a regular basis as additional use cases, platforms, and systems are adopted.
Below is a sample rollout plan for an acceptable use policy:
Identify a pilot group
Design basic training in AI prompting skills
Create opportunities for peer learning
Develop an internal AI playbook or hub:
Include workflow examples
Provide prompting tips
Create cheat sheets and other resources
Establish a policy review schedule
Create mechanisms for staff feedback
Define metrics for evaluating AI policy effectiveness
Plan for staying updated on legal and compliance issues
Allocate resources for continued adoption, including budget, staff time, and training
Additional questions to consider:
How will you encourage experimentation within safe boundaries?
What resources will you allocate for ongoing AI adoption and training?
How will you adapt the policy as AI technology evolves?
For a curated and annotated selection of articles, resources, and training materials to further develop your acceptable use policy, you may refer to this document.