Policy Information
Issuing Authority: Office of the Provost/President’s Cabinet
Effective Date: 9/12/2025
Responsible Unit: Data Operations & Standards Committee
Revision Dates:
Contact Email: helpdesk@cedarcrest.edu (Information Technology Help Desk for general inquiries.)
Policy Summary:
A data privacy, security, and ethical-use compliance policy governing the College’s use of Generative Artificial Intelligence.
Policy:
1.1 Introduction
1.1.1 The Importance of AI Privacy and Security
The integration of Artificial Intelligence (AI) into higher education presents numerous opportunities to enhance learning, research, and administrative functions. At Cedar Crest College, the use of AI must align with our educational mission: supporting personalized, human-centered learning, equity of access, and the empowerment of students to think critically about technology and its ethical implications. At the same time, generative AI poses significant challenges to privacy and data security. Cedar Crest College must therefore maintain robust policies and practices to mitigate these risks and to ensure the ethical use of AI technologies.
1.1.2 Purpose of the Policy
This policy outlines Cedar Crest College’s commitment to protecting the privacy and security of students, faculty, and staff data in the context of AI use. It establishes principles and guidelines for the responsible and secure use of AI technologies and aims to:
- Safeguard sensitive data from unauthorized access and misuse.
- Ensure compliance with relevant privacy laws and regulations.
- Promote transparency and accountability in AI data handling practices.
- Foster a culture of responsible data stewardship among the College community.
1.2 Scope
This policy applies to all Cedar Crest College students, faculty, staff, administrators, trustees, and third parties who develop, implement, or interact with AI technologies used in the College environment. It covers all AI systems used for education, research, administration, and operations, including but not limited to:
- Generative AI models
- Intelligent tutoring systems
- Conversational agents
- Automation software
- Analytics tools
1.3 Principles
1.3.1 Data Minimization
The principle of data minimization is fundamental to protecting privacy in AI systems. It dictates that the College should only collect and retain the minimum amount of data necessary to achieve the specific purpose for which the AI system is being used. This means that data collection should be purposeful, and the College should avoid gathering extraneous information that is not directly relevant to the AI application's function. Furthermore, data should not be stored indefinitely; once it is no longer needed, it should be securely deleted or anonymized to prevent potential misuse. By adhering to data minimization, the College can reduce the risk of privacy breaches, limit the potential impact of such breaches if they occur, and demonstrate a commitment to responsible data handling. This principle aligns with ethical considerations and legal requirements, ensuring that individuals' privacy is respected and protected in the College's use of AI.
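For illustration, a data steward might encode minimization and retention rules along the following lines. This is a minimal sketch only: the retention period, field names, and function names are hypothetical examples, not requirements of this policy.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical retention window; actual periods would be set per data type
# by the responsible unit, consistent with this policy.
RETENTION_PERIOD = timedelta(days=365)

def minimize(record: dict, needed_fields: set[str]) -> dict:
    """Keep only the fields required for the AI application's stated purpose."""
    return {k: v for k, v in record.items() if k in needed_fields}

def is_past_retention(last_needed: datetime) -> bool:
    """True when a record has outlived its documented purpose.

    Expects a timezone-aware timestamp for when the data was last needed.
    """
    return datetime.now(timezone.utc) - last_needed > RETENTION_PERIOD
```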
1.3.2 Data Security
Data security is a critical aspect of ensuring the responsible use of AI. The College will implement appropriate technical and organizational measures to protect data against unauthorized access, use, or disclosure. These measures include the following (see the illustrative sketch after this list):
- Encryption of data at rest and in transit
- Access controls and authentication mechanisms
- Regular security assessments and vulnerability testing
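For illustration, encryption of data at rest could look like the following minimal sketch, which uses the Fernet recipe from the widely used Python cryptography library. The sample data is hypothetical, and a real deployment would obtain keys from a managed secrets store rather than generating them inline.

```python
from cryptography.fernet import Fernet

# Hypothetical key handling for this sketch only; production keys belong in a
# managed secrets store and are never hard-coded or committed to version control.
key = Fernet.generate_key()
cipher = Fernet(key)

ciphertext = cipher.encrypt(b"student advising notes")  # stored form (data at rest)
plaintext = cipher.decrypt(ciphertext)  # recovered by authorized systems only
```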
1.3.3 Data Security Classification
Data Classification Outline
Cedar Crest College classifies data used in AI tools and AI data environments (secured or unsecured) into the five levels outlined below, from the least restrictive, Level 1 “Public” data (widest AI-tool use), to the most restrictive, Level 5 “Restricted” data (AI-tool use prohibited). An illustrative lookup sketch follows Level 5. A video explanation of Cedar Crest’s AI Data Classification System is available.
Level 1: Public
- Description: Data that is intended for public dissemination and poses no risk if disclosed.
- Examples of Data: College website content (excluding internal portals), publicly available event schedules, directory information (name, title, department), general college news and announcements.
- Relevant Regulations: None typically applicable.
- Required Security Measures (Examples): Standard web security practices, regular backups.
- AI Use: Public AI Tools
Level 2: Internal Use
- Description: Data that is intended for internal college use and is generally not sensitive, but its unauthorized disclosure could cause minor disruption or reputational risk.
- Examples of Data: Internal email communications, staff directories with contact information, course catalogs, non-sensitive committee minutes, general operational procedures.
- Relevant Regulations: None typically applicable.
- Required Security Measures (Examples): Access control based on roles, secure storage on college-managed systems, regular backups.
- AI Use: Public AI Tools
Level 3: Confidential
- Description: Data that requires protection due to its sensitivity. Unauthorized disclosure could create a moderate risk of financial loss, reputational damage, or operational disruption.
- Examples of Data: Student grades (summary data), non-personally identifiable research data, internal financial reports, personnel contact information (beyond directory), vendor contracts.
- Relevant Regulations: Potentially relevant regulations depending on the specific data, such as FERPA (Family Educational Rights and Privacy Act) and others.
- Required Security Measures (Examples): Encryption in transit and at rest, strong access controls with authentication, audit logs, limited access based on need-to-know.
- AI Use: Sandbox AI Tools
Level 4: Highly Confidential
- Description: Data that is highly sensitive and requires a high level of protection. Unauthorized disclosure could create a significant risk of financial loss, legal liability, reputational damage, or harm to individuals.
- Examples of Data: Student records containing Personally Identifiable Information (PII) such as names, addresses, dates of birth, Social Security numbers (if collected), financial aid information, and academic transcripts with detailed course information; employee PII such as Social Security numbers, bank account information, and medical records; Protected Health Information (PHI) as defined by HIPAA (if applicable to the College’s operations); PII subject to GDPR for EU residents (names, email addresses, IP addresses, etc.); and research data involving human subjects with identifying information.
- Relevant Regulations: FERPA (Family Educational Rights and Privacy Act), GDPR (General Data Protection Regulation), HIPAA (Health Insurance Portability and Accountability Act, if applicable), and other state-specific privacy laws.
- Required Security Measures (Examples): Strong encryption in transit and at rest, multi-factor authentication, strict access controls with role-based access and regular reviews, comprehensive audit logs, data loss prevention measures, secure disposal procedures, incident response plan.
- AI Use: Sandbox AI Tools
Level 5: Restricted
- Description: Data that is extremely sensitive and subject to strict legal or regulatory requirements. Unauthorized disclosure would likely cause severe harm, significant legal penalties, or irreparable damage.
- Examples of Data: Specific categories of data as defined by contractual obligations or highly specific regulations (e.g., certain government-sponsored research data with specific security requirements, payment card information subject to PCI DSS if the College processes credit card payments directly).
- Relevant Regulations: Specific regulations and contractual agreements will dictate requirements.
- Required Security Measures (Examples): All measures from Level 4, plus potentially additional measures such as isolated network segments, enhanced monitoring, specialized security personnel, and adherence to specific compliance frameworks.
- AI Use: Not Recommended or Non-Cloud AI Tools
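For illustration, the classification outline above reduces to a simple lookup from data level to permitted AI-tool category. The sketch below mirrors the five levels and their AI Use entries in this policy; the names and structure are otherwise hypothetical.

```python
from enum import Enum

class DataLevel(Enum):
    PUBLIC = 1
    INTERNAL_USE = 2
    CONFIDENTIAL = 3
    HIGHLY_CONFIDENTIAL = 4
    RESTRICTED = 5

# Permitted AI-tool category per classification level (Section 1.3.3).
PERMITTED_AI_USE = {
    DataLevel.PUBLIC: "Public AI Tools",
    DataLevel.INTERNAL_USE: "Public AI Tools",
    DataLevel.CONFIDENTIAL: "Sandbox AI Tools",
    DataLevel.HIGHLY_CONFIDENTIAL: "Sandbox AI Tools",
    DataLevel.RESTRICTED: "Not Recommended or Non-Cloud AI Tools",
}

def permitted_tools(level: DataLevel) -> str:
    """Look up the AI-tool category permitted for a given data classification level."""
    return PERMITTED_AI_USE[level]
```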
1.3.4 Privacy Compliance
AI systems often involve the collection, use, and processing of personal data, making privacy compliance a paramount concern. The College must ensure that its AI systems comply with all applicable privacy laws and regulations, such as FERPA, HIPAA, and GDPR. This includes obtaining necessary consents for data collection and use, providing clear and transparent privacy notices, and respecting individuals' rights regarding their data. Privacy by Design principles should be incorporated into the development and implementation of AI systems, ensuring that privacy considerations are proactively addressed throughout the system's lifecycle. Regular audits and assessments should be conducted to verify ongoing compliance and identify potential privacy risks.
1.3.5 Transparency
Transparency will be communicated in ways that are accessible to students, faculty, staff, and other Cedar Crest community members across disciplines and administrative office functions, reflecting Cedar Crest’s emphasis on clear communication and ethical reasoning. College employees will receive annual briefings on AI data-handling best practices to ensure that transparency extends into the learning and administrative environments. This includes, but is not limited to, the following aspects of AI system use:
- What data is being collected
- How it will be used
- With whom it may be shared
1.3.6 Consent
Obtaining informed consent is a critical aspect of ethical AI implementation, particularly when dealing with sensitive or personally identifiable information (PII) that may be protected under relevant privacy and security laws and regulations (e.g., FERPA, HIPAA, or GDPR). The College will ensure that individuals are fully informed about how their data will be collected, used, and shared by AI systems, and that they have the right to grant or withhold their consent, consistent with relevant laws and regulations. Special consideration must be given to obtaining consent from parents or guardians when dealing with minors. The consent process will be designed to be clear, concise, and easily understandable, providing individuals with genuine choice and control over their data. The College will also establish mechanisms for individuals to withdraw their consent.
1.3.7 Ethical Use
The ethical use of AI is of paramount importance to Cedar Crest College. AI systems must be used in a manner that respects human dignity, promotes fairness, and avoids discrimination. The College will take proactive steps to identify and mitigate potential biases in AI algorithms and datasets, ensuring that AI systems do not perpetuate or amplify existing societal inequalities. These steps include:
- Establishing a cross-disciplinary AI Ethics Review Board, constituted as a subgroup of the Data Standards & Governance Committee and charged with assessing AI tools and projects for potential biases before implementation.
- Mandating AI bias-awareness training for all personnel involved in AI development or deployment.
- Implementing rigorous data auditing procedures to identify and correct biases in training datasets.
Furthermore, the College will actively seek out and utilize diverse datasets, to the extent practicable, that reflect the breadth of its community, and will require vendors to provide transparency regarding the data used to train their AI models. AI systems will be designed and used in ways that prioritize human well-being, safety, and autonomy. The College will also promote critical thinking and ethical reasoning skills among students, faculty, and staff, empowering them to make informed decisions about the use of AI technologies. This includes educating students and employees about the ethical implications of AI, including issues such as algorithmic bias, privacy, and the potential impact of AI on employment and society.
1.4 Responsibilities
1.4.1 College Responsibilities
The College will:
- Provide training and resources on data privacy and security to all members of the college community.
- Regularly audit AI systems to ensure compliance with this policy.
- Establish clear procedures for reporting and addressing privacy or security violations.
- Develop a data breach response plan.
1.4.2 User Responsibilities
All users of AI systems are responsible for:
- Understanding and adhering to this policy.
- Using AI systems in a secure and responsible manner.
- Reporting any suspected privacy or security violations.
1.4.3 Ethical Use
Consistent with Cedar Crest’s educational and research mission, ethical use will include the development of AI literacy and critical thinking skills so that students, faculty, and staff can thoughtfully engage with and utilize AI tools. AI adoption will be framed not only as a technical practice but as an opportunity for ethical inquiry, dialogue, and discovery across the college community.
1.5 Prohibited Activities
To ensure the responsible and ethical use of AI technologies, Cedar Crest College establishes the following list of prohibited activities. These prohibitions are essential to protect academic integrity, prevent misuse of AI tools, and uphold the College's ethical standards. Any violation of these prohibited activities will be subject to disciplinary action, as consistent with relevant Handbook policies for Students, Faculty, and Staff.
1.5.1 Unauthorized Access, Use, or Disclosure of Data
Unauthorized access, use, or disclosure of data is strictly prohibited. This includes attempting to access data that an individual is not explicitly authorized to view, use, or modify. It also encompasses the unauthorized sharing or distribution of data, whether intentionally or unintentionally, to individuals or entities without proper authorization. Users are responsible for ensuring that they handle data in compliance with college policies and all applicable laws and regulations. Special care must be taken to protect sensitive data, including student records, employee information, and research data.
1.5.2 Use of AI Systems to Discriminate, Harass, or Threaten Others
The use of AI systems to discriminate, harass, or threaten others is strictly prohibited. This prohibition applies to any use of AI that creates a hostile or intimidating environment, or that targets individuals or groups based on attributes such as race, ethnicity, gender, sexual orientation, religion, disability, nationality, or other protected classes. AI systems must not be used to generate or disseminate content that promotes hate speech, incites violence, or perpetuates harmful stereotypes. The College is committed to fostering a safe, inclusive, and respectful environment for all members of the community, and any use of AI that undermines this commitment will not be tolerated.
1.5.3 Circumventing Security Measures
Circumventing security measures or attempting to gain unauthorized access to AI systems or data is strictly prohibited. This includes any actions taken to bypass authentication protocols, exploit vulnerabilities in AI systems, or gain access to data or system functions beyond one's authorized permissions. Users must respect the security measures implemented by the College to protect AI systems and data. Any attempt to compromise these security measures will be treated as a serious violation of college policy.
1.5.4 Sharing Confidential or Personally Identifiable Information with Unauthorized AI Tools
Sharing confidential or personally identifiable information (PII) with unauthorized AI tools is strictly prohibited. This includes inputting sensitive data into AI systems or platforms that have not been officially approved by the College and do not have adequate data privacy and security measures in place. Confidential information and PII include, but are not limited to, student records, employee information, research data, and any other data that is protected by law or college policy. Users must exercise caution and ensure that they only use authorized AI tools when handling sensitive information.
1.6 Compliance and Enforcement
Violations of this policy may result in disciplinary action, as provided under the College’s Handbooks for students, faculty, and staff, up to and including termination of employment or, for students, expulsion.
1.7 Review and Updates
This policy will be reviewed and updated regularly to ensure its effectiveness and compliance with evolving laws, regulations, and technologies.
1.8 Additional Considerations
1.8.1 Data Ownership
When licensing with any approved third-party AI tool, the College will establish clear guidelines regarding data ownership and intellectual property rights related to AI-generated content and data.
1.8.2 Third-Party AI Tools
The College will assess and approve any third-party AI tools used in the College environment to ensure they meet the standards outlined in this policy.
1.8.3 AI and Research
Special attention will be given to student-led research and faculty-student collaboration for academic inquiry, consistent with Cedar Crest’s educational mission. For specific guidelines for the ethical and secure use of AI in research activities involving human subjects, please see relevant research policies and the College’s Institutional Review Board policies and procedures for research involving human subjects.
1.8.4 Alignment
The College will ensure that AI tool adoption and the associated data privacy and security practices align with Cedar Crest’s mission and strategic plan goals.
Revision History: