What is an AI Use Policy?
An AI use policy is a formal document that outlines guidelines, rules, and expectations for how artificial intelligence technologies should be used within an organization. These policies help ensure that AI systems are used ethically, responsibly, and in compliance with relevant laws and regulations while aligning with the organization's values and mission.

Why Organizations Need AI Use Policies
Organizations implement AI use policies for several important reasons:
- Consistency and clarity: Provides clear guidelines so everyone understands what is permitted and prohibited
- Risk management: Helps prevent misuse that could lead to privacy breaches, copyright violations, or other legal issues
- Ethical alignment: Ensures AI use reflects the organization's values and ethical principles
- Accountability: Establishes who is responsible for AI-generated outputs and decisions
- Regulatory compliance: Helps meet legal requirements around data privacy, intellectual property, and other regulations
Key Components of an AI Use Policy
A comprehensive AI use policy typically addresses:
- Permitted and prohibited uses of AI tools
- Data privacy and security requirements when using AI
- Attribution and intellectual property guidelines for AI-generated content
- Human oversight expectations for AI-assisted decisions
- Transparency requirements about when and how AI is being used
- User responsibilities and accountability for AI outputs
- Training and awareness provisions for users
Example of an AI Use Policy
Drawing on the Redeemer University policy, here's an example of what an AI use policy might include:
Sample AI Use Policy
1. Purpose and Scope
This policy governs the use of artificial intelligence technologies by all employees within [Organization Name]. It applies to all AI tools, whether provided by the organization or accessed through third parties.
2. Appropriate Usage
- AI systems must only be used in ways that further the organization's mission and purposes
- AI should be used to enhance productivity and effectiveness, not replace critical human judgment
- AI usage must adhere to our organization's ethical standards and values
3. Accountability
- Despite the use of AI systems, accountability for decisions and products ultimately lies with the human user
- AI is a tool to aid decision-making, create documents, or speed up processes, not a replacement for professional judgment
- Users must review and verify all AI-generated outputs before using them in official capacities
4. Data Protection and Privacy
- Personal or sensitive information must not be input into AI systems without proper authorization
- Users must comply with all data protection regulations when using AI tools
- Confidential organizational information must be protected when using external AI systems
5. Copyright and Intellectual Property
- The use of AI in creating materials does not absolve users of the responsibility to properly cite sources
- Users must verify that AI-generated content does not infringe on copyright or intellectual property rights
- AI-generated content used for official purposes must be clearly attributed as such when appropriate
6. Compliance
- Use of AI technologies must comply with all applicable laws, regulations, professional standards, and internal policies
- Users must stay informed about policy updates and complete required training on responsible AI use
- Violations of this policy may result in disciplinary action
7. Ethical Considerations
- AI should be used in ways that promote fairness, avoid bias, and respect human dignity
- AI systems should enhance rather than replace human connections and community
- Users should regularly reflect on whether their AI use aligns with organizational values
Why I Built the Faculty/Student/Author AI Policy Alignment Tool
As AI tools have rapidly become integrated into our academic workflows, I've noticed a growing gap between our institutional policies and how these tools are actually being used day-to-day. Many faculty members have expressed confusion about what's permitted, what requires caution, and what practices might violate our university's principles.
This confusion is completely understandable. We're navigating new territory together, and the landscape is changing weekly. That's why I built the Faculty AI Policy Alignment Tool - a straightforward way for our Redeemer University community to self-assess their AI usage and receive personalized guidance.

Disclaimer: this tool is not live online or in use at my institution yet, but it is a solid MVP (minimum viable product) that gets across the intent of the tool.
What the Tool Does
The Faculty AI Policy Alignment Tool is an interactive assessment that helps you understand where your AI practices stand in relation to our institutional policies. Here's how it works:
- You select your role at the university (Faculty, Administrative Staff, Research, IT, etc.)
- You answer 10 thoughtful questions about how you're currently using AI
- The tool analyzes your responses across five key policy areas:
  - Appropriate Usage
  - Accountability
  - Data Protection & Privacy
  - Compliance with Laws and Regulations
  - Copyright and Attribution

The questions explore everything from how you handle accountability when using AI outputs to whether you've ever input university data or personal information into AI systems. The assessment is designed to be reflective rather than punitive - helping us all gain clarity in this complex area.
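To make the idea concrete, here is a minimal sketch of how an assessment like this could turn answers into per-area alignment indicators. This is not the tool's actual implementation; the scoring scale (0.0–1.0 per question), the question-to-area mapping, and the 0.7 threshold are all illustrative assumptions.

```python
def summarize_alignment(answers, question_areas, threshold=0.7):
    """Average each policy area's question scores and flag low areas.

    answers: {question_id: score between 0.0 (misaligned) and 1.0 (aligned)}
    question_areas: {question_id: policy area name} -- hypothetical mapping
    threshold: illustrative cutoff below which an area "needs review"
    """
    totals, counts = {}, {}
    for qid, score in answers.items():
        area = question_areas[qid]
        totals[area] = totals.get(area, 0.0) + score
        counts[area] = counts.get(area, 0) + 1
    return {
        area: {
            "score": round(totals[area] / counts[area], 2),
            "status": "aligned"
            if totals[area] / counts[area] >= threshold
            else "needs review",
        }
        for area in totals
    }


# Example: three answers mapped to two of the five policy areas.
areas = {
    1: "Accountability",
    2: "Accountability",
    3: "Data Protection & Privacy",
}
results = summarize_alignment({1: 1.0, 2: 0.5, 3: 0.2}, areas)
```

The real tool would also factor in the user's role and attach policy explanations and recommendations to each flagged area; this sketch only covers the scoring step.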
Guided by Our Values
What makes our approach unique is that it's firmly rooted in our institution's values. As the tool explains, we're committed to engaging with AI technologies in a manner that reflects our university's mission and its basis in the Reformed confessions, traditions and perspectives, including:
- Stewardship of God's Creation: Using AI in ways that promote justice, protect the vulnerable, and respect human dignity.
- Seeking Wisdom and Discernment: Acknowledging the complex ethical implications of AI and committing to pursue what is morally and ethically right.
- Fostering Community: Using AI to strengthen community, enhancing rather than replacing human interactions and collaboration.
What the User Receives
After completing the assessment, you'll receive a comprehensive analysis that includes:
- A personalized AI usage summary
- Visual indicators showing your alignment status in each policy area
- Specific recommendations for improvement where needed
- Clear explanations of university policy sections
- Practical guidance tailored to your role and usage patterns

The results are completely private - this is a tool for self-reflection and growth, not monitoring or evaluation. My hope is that it serves as a catalyst for thoughtful conversations about how we can harness AI's potential while staying true to our institutional mission.
Looking Forward
As AI continues to transform education, having clear guidelines and accessible tools for self-assessment becomes increasingly important. This initial version is just the beginning - I plan to refine the tool based on your feedback and evolving best practices in the field. But what are you currently doing to check whether you're following the policies you're expected to align with? And if you could, would you want a tool like this?
If you're interested in taking the assessment, reply to this email and I'll make it available to demo on my website soon. And if you have suggestions for improvement or questions about the tool, please don't hesitate to reach out to me directly.
Until then, happy (and responsible) AI experimenting!