This ethics statement is updated regularly as our understanding of AI and its use in professional fields evolves. This statement applies to UAN staff’s use of AI tools for administrative, educational, or communications purposes. Staff and partners are encouraged to report concerns or ethical questions about AI use to UAN’s Executive Director.
Our Commitment
At the Utah Afterschool Network, we are committed to youth development, community well-being, and advancing equitable access to educational resources. As artificial intelligence (AI) becomes increasingly integrated into our daily lives and professional tools, we believe it is essential to approach its use with care, transparency, and accountability.
Our Vision
We believe that artificial intelligence can be a tool for creativity, accessibility, and empowerment, but only if used thoughtfully and justly. Our responsibility is to shape how AI serves our work and our values — not to be shaped by its trends or technologies.
What We Mean by "AI"
We define AI as any computer system that makes predictions, generates content, or automates decision-making based on data. This includes tools like large language models (e.g. ChatGPT), image generators, chatbots, data analysis platforms, and predictive algorithms used in educational software.
Guiding Principles
1. We Acknowledge Our Limits
We recognize that no AI system — or human — is perfect. We may not always have perfect information, but we will:
- Be transparent when we are unsure.
- Favor tools that give us local control (e.g. local storage, opt-out features).
- Default to human-driven practices when risks are unclear.
2. Human-Centered, Youth-Centered
We use AI tools to support — not replace — human relationships, learning, and creativity. We believe individuals deserve connection with people, not algorithms acting in their place.
3. Transparency and Consent
We commit to clear communication about when and how AI is used in our work, as outlined in UAN policy. We will not use AI tools to collect or analyze personal data from youth, families, employees, or stakeholders without meaningful, informed consent.
4. Fair and Balanced Consideration
We recognize that AI systems can reflect and reinforce systemic biases. We prioritize tools and practices that mitigate bias, support marginalized communities, and reflect our values of inclusion, particularly for underrepresented groups.
5. Privacy and Safety
We protect the privacy and dignity of youth, families, and staff in all technology use. We will avoid tools that track, profile, or store sensitive data in ways that are opaque or unethical.
6. Critical Thinking Over Automation
We promote media and AI literacy for both staff and youth. We encourage questions like: Who built this tool? Whose values are embedded in it? What are its limits?
Practical Commitments
- UAN Leadership will provide training and guidance for staff on responsible AI use.
- We will vet AI tools for educational or operational use using these ethical criteria.
- We will include diverse staff voices in decisions about adopting and implementing technology tools.
- We will regularly revisit this statement as the technology and our understanding of it evolve.
How We Evaluate AI Tools
1. We Use a “Good Enough for Now” Ethics Rubric
We acknowledge that few AI tools are fully transparent or free of bias. Instead of aiming for perfection, we commit to using a clear set of guiding questions that help us assess whether a tool aligns with our values and where caution is warranted.
- Purpose: What is this tool for? Does it support human connection, creativity, access, or learning—or is it primarily automating, replacing, or surveilling?
- Data Use: Does the tool require us to upload personal information, identifiable data, or private documents? Where is that data stored? Can we opt out?
- Bias and Representation: Has the tool been evaluated for racial, gender, cultural, or linguistic bias? Does it allow for inclusive and affirming content generation?
- Transparency: Does the company disclose how the AI works (e.g. model type, data sources, limitations)? Do they publish audits, research, or third-party reviews?
- Accountability: Who is responsible when the tool gets something wrong? Is there a clear appeals or correction process?
- Cost and Power: Who profits from this tool? Does it widen or close equity gaps in our field?
Staff may use the checklist above internally to evaluate whether an AI tool is appropriate for a given use case.
2. We Consult External, Independent Sources
Because vendor marketing materials aren't enough, we look to trusted third parties for additional insight. The following is a partial list of sources for researching potential AI tools, subject to change as our understanding and the AI field evolve:
- Common Sense Media – AI Ratings (https://www.commonsense.org/education/)
- Reviews educational technology tools with attention to privacy, bias, and developmental appropriateness.
- Mozilla’s Foundations of Trustworthy AI (https://foundation.mozilla.org/en/)
- Offers toolkits and research on fair, transparent, and ethical technology.
- AI Now Institute (https://ainowinstitute.org/)
- Academic center focused on the social implications of AI, especially in education and public services.
- Algorithmic Justice League (https://www.ajl.org/)
- Advocates for equitable AI and offers community tools for understanding bias in algorithmic systems.
- ISTE Standards (https://iste.org/standards/)
- Standards for the effective use of technology in learning environments.
3. We Involve Staff in Testing
We believe the people most affected by a tool should be part of its evaluation. When possible, we:
- Pilot tools in low-risk settings.
- Collect qualitative feedback: Was this engaging? Did it make you feel safe? Was anything confusing or off-putting?
- Debrief with equity in mind: Whose experiences were centered? Who felt left out?
- Implement a peer-review process for all outward-facing documents created using AI tools.
This doesn’t replace expert review — but it does keep us accountable to real human impact.
4. We Maintain a Caution List and a Green List
Rather than evaluating each tool from scratch every time, we maintain:
- A "caution list" of tools that raise red flags around privacy, bias, or opacity.
- A "green list" of tools that are safer, well-reviewed, and used in equity-aligned education spaces.
These lists are maintained by the UAN Communications Committee and updated as needed following committee review.
