The P.A.T.H. Framework is designed to guide Colorado State University faculty in the responsible use of generative AI tools for grading. It emphasizes transparency, ethical considerations, and maintaining human oversight to ensure AI use enhances learning outcomes while protecting student rights and privacy.

What is the P.A.T.H. Framework?

The P.A.T.H. Framework helps faculty reflect on four key considerations when deciding whether and how to use AI in grading:

  • Principles: Align AI use with CSU’s ethical standards and values (1).
  • AI: Ensure responsible and secure use of AI tools.
  • Transparency: Be open with students about how AI is used.
  • Humanity: Keep human oversight and relationships central to the grading process.

Key Questions to Guide Your P.A.T.H.

1. Principles: Does AI Use Align with CSU's Values?
  • How will you ensure that AI use supports and validates student learning outcomes?
  • Does your use of AI align with CSU’s ethical standards and interim guidelines?
  • Have you consulted your department chair or leadership about the implications of using AI in grading?
  • Are you prioritizing AI tools approved by CSU, such as Microsoft Copilot or RamGPT?
2. AI: Are You Using AI Responsibly and Securely?
  • Have you tested the AI tool to ensure its outputs are accurate, reliable, and aligned with your grading rubrics?
  • Are you using AI tools that comply with CSU’s data governance policies and federal/state regulations (e.g., FERPA, HIPAA)?
  • Have you ensured that no private or restricted student data is entered into AI systems without prior approval from the student and the institution?
  • Does the AI tool secure student information and prevent unauthorized access to personally identifiable data?
3. Transparency: Are You Open About AI Use?
  • Have you disclosed your use of AI in grading to students in a way that is clear and accessible?
  • What will you tell students about why and how AI is being used?
  • Have you explained the limitations of the AI tool (e.g., potential inaccuracies or biases)?
  • What options will you provide for students who decline to opt in to AI-assisted grading? For students who do opt in, how will you have them affirm their understanding and agreement?
4. Humanity: Are You Keeping Human Oversight Central?
  • How will you ensure AI serves as a tool to assist you, rather than replace your judgment?
  • How are you supervising AI-assisted grading to ensure fairness and accuracy?
  • Is there a clear method for students to audit or challenge grades generated or informed by AI?
  • How might using AI in grading affect the authenticity of your relationships with students?
  • How will you evaluate the long-term impact of AI use on student learning and engagement?

Why These Questions Matter

The P.A.T.H. Framework ensures that AI tools are used thoughtfully and responsibly in grading. By answering these questions, faculty can:

  • Align AI use with CSU’s values and ethical standards.
  • Maintain transparency with students and respect their rights.
  • Protect student data and comply with privacy regulations.
  • Preserve the human connection that is central to education.

Next Steps

  • If you’re unsure about any of these questions, consult your department leadership, TILT, or CSU’s Division of IT for guidance.
  • Review CSU’s interim guidelines for AI use to ensure compliance with institutional policies (2).
  • Engage in ongoing discussions with students and colleagues to refine your approach to AI use in grading.

Final Thought

The P.A.T.H. Framework provides a clear roadmap for faculty to navigate the complex and evolving role of AI in education. By following this path, you can ensure that AI is used ethically, transparently, and in ways that enhance the learning experience.

Acknowledgments

This framework was coauthored using generative AI (OpenAI’s GPT-4) under the guidance of Colorado State University employees. The generative AI tool was used to assist with drafting, synthesizing, and organizing content based on the following sources:

  1. “Interim Guidelines for the Use of Artificial Intelligence in University Operations”, Colorado State University.
  2. “S.E.C.U.R.E. GenAI Use Framework for Staff”, secureframework.ai.
  3. “VALUES Framework for Faculty Use of AI in Education”, Marc Watkins.

The final content reflects human review, refinement, and approval to ensure alignment with professional standards and institutional values.

  1. Generally speaking, to the ethical standards and values articulated in the CSU General Catalog. Specifically, to the values articulated in the “Interim Guidelines for the Use of Artificial Intelligence in University Operations”: Beneficence, Equity & Fairness, Transparency & Documentation, Security & Privacy, Training & Education, Human Interaction, and Reliability & Accuracy.
  2. See the “Interim Guidelines for the Use of Artificial Intelligence in University Operations”, Colorado State University.