Artificial Intelligence Guidelines
Recent developments in artificial intelligence (AI) have generated much discussion about the role of AI in higher education. Many questions have been raised about risks related to security, privacy, and ethical considerations. Because this field is evolving quickly, guidance is needed to help the campus community understand and evaluate these risks. The following are key points to understand:
- California State University Bakersfield does not have a contract or agreement for most AI tools or platforms. This means that standard CSUB security, privacy, and compliance provisions are not in place when using these technologies.
- As with any other IT service or product without a CSUB contract or agreement, AI tools should only be used with institutional data classified as Level 3 (General Data). See the CSU Data Classification Levels for descriptions and examples of each data classification.
- Any exception allowing the use of sensitive data in a public AI system must be formally approved by the data owner before that data is used.
- AI tools can generate incomplete, incorrect, or biased responses, so a human should closely review and verify any output.
- AI-generated code should not be used for institutional IT systems and services unless a human reviews and verifies it (see the sketch following this list).
- Faculty, staff, and students should be aware that OpenAI’s Usage Policies prohibit the use of its products for certain specific activities.
- CSUB staff and faculty will be accountable for any issues arising from their elective use of generative AI as part of business processes, including, but not limited to, copyright violations, sensitive data exposure, poor data quality, and bias or discrimination in outputs.
- The laws regarding copyright and AI-generated images are still evolving and vary depending on the jurisdiction. However, it is generally recommended that users label AI-generated images as such to avoid confusion or misattribution.
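The following is a minimal, hypothetical sketch of why the human-review requirement for AI-generated code matters. The task (truncating text to a word limit) and the function names are illustrative assumptions, not CSUB code; the point is that an AI draft can look plausible while mishandling an ordinary input, and only careful review catches it.

```python
# Illustrative only: a hypothetical AI-suggested helper and a human-reviewed
# correction. The task and names are assumptions for demonstration, not CSUB code.

def truncate_words_ai_draft(text: str, limit: int) -> str:
    # Plausible-looking AI draft: splitting on a single space means tabs,
    # newlines, and repeated spaces produce empty "words", so the limit is
    # applied to the wrong items.
    words = text.split(" ")
    return " ".join(words[:limit])


def truncate_words_reviewed(text: str, limit: int) -> str:
    # Human-reviewed version: split() with no argument handles all whitespace
    # and discards empty strings, so the word limit behaves as intended.
    words = text.split()
    return " ".join(words[:limit])


if __name__ == "__main__":
    sample = "Campus   AI guidance:\tuse approved tools only"
    print(repr(truncate_words_ai_draft(sample, 3)))   # 'Campus  ' -- empty strings used up the limit
    print(repr(truncate_words_reviewed(sample, 3)))   # 'Campus AI guidance:'
```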
Any implementation of artificial intelligence is subject to applicable university policies and standards, including the Solutions Consulting review process. If you have questions about how to assess the attributes listed below for a given implementation of AI, please contact Information Security at informationsecurity@csub.edu.
The National Institute of Standards and Technology (NIST) has published an AI Risk Management Framework (AI RMF) to help organizations use a formal approach to managing AI risks. The Framework lists the following attributes of trustworthy AI:
- Valid and Reliable. Trustworthy AI produces accurate results within expected timeframes.
- Safe. Trustworthy AI produces results that conform to safety expectations for the environment in which the AI is used (e.g., healthcare, transportation).
- Fair, with Bias Managed. Bias can manifest in many ways; standards and expectations for bias minimization should be defined prior to using AI.
- Secure and Resilient. Security is judged according to the standard triad of confidentiality, integrity, and availability. Resilience is the degree to which the AI can withstand and recover from attack.
- Transparent and Accountable. Transparency refers to the ability to understand information about the AI system itself, as well as understanding when one is working with AI-generated (rather than human-generated) information. Accountability is the shared responsibility of the creators/vendors of the AI as well as those who have chosen to implement AI for a particular purpose.
- Explainable and Interpretable. These terms relate to the ability to explain how an output was generated and to understand the meaning of the output. NIST provides examples related to rental applications and medical diagnosis in NISTIR 8367, Psychological Foundations of Explainability and Interpretability in Artificial Intelligence.
- Privacy-enhanced. This refers to privacy from both a legal and ethical standpoint. This may overlap with some of the previously listed attributes.
Frequently Asked Questions
There are two primary categories of risk for AI usage: output risk, in which the information generated by the AI system proves too risky to use, and input risk, in which information provided to an AI model may itself be put at risk.
Output – A GPT model cannot reason as human beings do, and its knowledge about a topic is entirely derived from the data it has been given. Thus, inadequately prepared models may return murky, nonsensical, or flat-out wrong answers to user queries.
Input – Unless the campus has a specific contract with the AI tool provider, the data you provide to the tool will, in almost all cases, belong to the company to use in “training” the tool. Therefore, you should never enter confidential campus information or confidential research data into these platforms.
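As a purely illustrative sketch (not a CSUB tool or requirement), the snippet below shows one way a user could locally screen text for obviously confidential identifiers before pasting it into a public AI tool. The patterns and sample values are assumptions for demonstration; they do not replace CSU data classification rules, data-owner approval, or human judgment.

```python
# Purely illustrative: flag text that appears to contain confidential
# identifiers before it is pasted into a public AI tool. Patterns and sample
# values are assumptions for demonstration only.
import re

SENSITIVE_PATTERNS = {
    "possible SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "possible credit card number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def flag_sensitive(text: str) -> list:
    """Return the names of any sensitive patterns found in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


if __name__ == "__main__":
    prompt = "Summarize this note about user@example.edu, SSN 123-45-6789."
    findings = flag_sensitive(prompt)
    if findings:
        print("Do not submit to a public AI tool; found:", ", ".join(findings))
    else:
        print("No obvious identifiers found -- human judgment is still required.")
```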
AI tools currently available to the campus community include:
- Zoom AI Companion for Campus Staff & Faculty
- Adobe Express for Campus Staff, Faculty, & Student Workers
- Microsoft Copilot for Campus Students, Staff, & Faculty
Just as all campus technology purchases are evaluated for security and accessibility, all new artificial intelligence platforms must be reviewed by the ITS Solutions Consulting Group before use. The review of a proposed solution will cover the platform’s data privacy policies, the data the user intends to provide to the system, and a signed agreement from the data owner authorizing the information to be used as intended. See the ITS Service Consulting catalog item for more information.
While AI-generated content can be a valuable source of information, it is essential to remember that this content is only as reliable as the data it is based on. Hence, you should not rely solely on the output generated by any AI.
There is a phenomenon in the field of artificial intelligence called a hallucination (or delusion), in which a response generated by an AI presents false or misleading information as fact.
It’s essential to make sure that AI-generated content is accurate and reliable before it is used. The risk to the campus of going public with patently false AI outputs is enormous, both reputationally and financially.
Last updated 3/08/2024