AI Resources
Training
Policies
- AB-302 Department of Technology: High-risk Automated Decision Systems: Inventory
- Academic Senate of the CSU: Artificial Intelligence: Empowering CSU Faculty Colleagues
- CSU Generative AI Committee Report
- CSU Responsible Use Policy
- Guidelines for Safe and Responsible Use of Generative AI Tools
- OpenAI Supplier Code of Conduct
Faculty Resources
Research
At California State University, Bakersfield (CSUB), we are committed to fostering a dynamic research environment that empowers both students and faculty to push the boundaries of innovation and discovery. As a proud participant in the National Research Platform, CSUB provides cutting-edge resources and high-performance computing to support groundbreaking research across disciplines. Our faculty-led research spans fields such as Psychology and Computer Science, offering students unparalleled opportunities to engage in hands-on projects, collaborate with experts, and contribute to real-world solutions. By investing in advanced technology, interdisciplinary collaboration, and mentorship, CSUB is shaping the future of research and preparing the next generation of innovators.
Security and Guidelines
CSUB does not have a contract or agreement for most AI tools or platforms. This means that standard CSUB security, privacy, and compliance provisions are not in place when using these technologies. Therefore, like any other IT service or product without a CSUB contract or agreement, AI tools should only be used with institutional data classified as Level 3 (General Data). Any exception allowing sensitive data in public AI systems must be formally approved by the data owner before it occurs.
Understanding how the AI tool you wish to use will collect and maintain your data is crucial. You should never provide campus data that is not approved for use in the application, as this could compromise the security and privacy of the data.
Read our CSUB AI Guidelines.
Additional Resources
Term | Definition |
---|---|
Artificial Intelligence (AI) | The ability of a digital computer or computer-controlled robot to perform tasks commonly associated with intelligent beings. This includes learning from experience, reasoning, understanding language, recognizing patterns, and problem-solving. |
Generative Models | A type of machine learning model that creates new content by learning from training data and then generating new output based on, or similar to, that training set. Generative models learn patterns, structures, and features from the training data and can create content with similar characteristics. |
Generative Pre-trained Transformer (GPT) | A language model that uses deep learning to create realistic text. It is used in many applications, such as translation, question-answering, and text generation. This is the "GPT" in ChatGPT. |
Machine Learning (ML) | A subset of AI that involves the practice of using algorithms to parse data, learn from it, and then make a determination or prediction about something in the world. |
Deep Learning (DL) | A subset of machine learning that's based on artificial neural networks with representation learning. It can be supervised, semi-supervised, or unsupervised and aims to model high-level abstractions in data by using multiple processing layers. |
Natural Language Processing (NLP) | A subfield of AI that focuses on the interaction between computers and humans through natural language. The ultimate objective of NLP is to read, decipher, understand, and make sense of human language in a valuable way. |
Language Model (LM) | A type of model in NLP that predicts the next word or character in a sequence. These models are used in speech recognition, text generation, and other NLP tasks. |
Token | In the context of NLP, a token is a single unit that is a building block for a sentence or document, such as a word, a character, or a subword. |
Fine-Tuning | A process in machine learning where a pre-trained model (like GPT) is further trained on a new, smaller dataset. The purpose of fine-tuning is to adapt the general knowledge of the pre-trained model to a specific task. |
Text Classification | This involves assigning categories or labels to text. For example, sorting emails into "spam" and "not spam" is a form of text classification. |
Transfer Learning | The application of knowledge gained while solving one problem to a different but related problem. For example, knowledge gained while learning to recognize cars could apply when trying to recognize trucks. |
Prompt | In the context of AI, a prompt is an input given to a language model that it uses to generate a response or output. |
Training | This is the process of feeding data into AI software so that it begins the machine learning process. |
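The "Token" entry above can be made concrete with a short sketch. The function below is a minimal, illustrative word-level tokenizer written for this page; production language models such as GPT use learned subword tokenizers (e.g., byte-pair encoding), so this only conveys the basic idea of breaking text into units.

```python
import re

def tokenize(text: str) -> list[str]:
    """Split text into lowercase word-level tokens, dropping punctuation.

    Illustrative only: real NLP systems use learned subword tokenizers,
    so a word like "tokenization" may become several tokens in practice.
    """
    return re.findall(r"[a-z0-9']+", text.lower())

tokens = tokenize("AI tools should only be used with approved data.")
print(tokens)
# ['ai', 'tools', 'should', 'only', 'be', 'used', 'with', 'approved', 'data']
```

A language model then predicts the next token in such a sequence, which is what the "Language Model (LM)" entry above describes.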
- Generative AI Enhances Team Performance and Reduces Need for Traditional Teams
- This study examines the role of generative AI in enhancing or replacing traditional team dynamics using a randomized controlled experiment. It highlights that teams augmented with generative AI significantly outperformed those relying solely on human collaboration.
- The Impact of Generative AI on Labor Market Matching
- by Justin Kaashoek, Manish Raghavan, and John J. Horton
- Published: Mar 27, 2024
- Imagine applying for jobs by simply asking your artificial intelligence (AI) assistant to “please put together a resume and cover letter based on my experiences and submit the application to senior management positions at clean-energy start-ups with fewer...
- The Productivity Effects of Generative AI: Evidence from a Field Experiment with GitHub Copilot
- by Kevin Zheyuan Cui, Mert Demirer, Sonia Jaffe, Leon Musolff, Sida Peng, and Tobias Salz
- Published: Mar 27, 2024
- We are providing a preview of a project that analyzes two field experiments with 1,974 software developers at Microsoft and Accenture to evaluate the productivity impact of Generative AI. As part of our study, a random subset of developers was given...
- Bringing Worker Voice into Generative AI
- by Thomas A. Kochan, Ben Armstrong, Julie Shah, Emilio J. Castilla, Ben Likis, and Martha E. Mangelsdorf
- Published: Mar 27, 2024
- After conducting more than fifty interviews about generative AI with experts in business, academia, labor, government, and the AI development community, the authors summarize their findings and include recommendations for incorporating employees’ perspectives into AI development.
- AI in Higher Education Resource Hub
- ASU is the first university to enter into an agreement with OpenAI
- ChatGPT and the Rise of AI Writers: How Should Higher Education Respond?
- From Chapman's Department of Institutional Compliance: Compliance and ChatGPT
- Generative AI History and Modalities
- Generative AI Internal Policy Checklist To Guide Development Of Policies To Promote
- Google's Recommended Responsible AI Practices
- How About We Put Learning at the Center?
- Leatherby Libraries' Resources on Artificial Intelligence (AI) in Relation to Teaching, Learning, and Higher Education
- Microsoft's AI Fairness Checklist
- Responsible Employee Use Of Generative AI Tools
- Teaching Writing with Generative AI
- 10 Ways Artificial Intelligence Is Transforming Instructional Design
- 10 Ways Technology Leaders Can Step Up and In to the Generative AI Discussion in Higher Ed
- 2023 EDUCAUSE Horizon Action Plan: Generative AI
- 2024 EDUCAUSE AI Landscape Study
- 7 Things You Should Know About Generative AI
- Digital Transformation 2.0: The Age of AI
- EDUCAUSE Artificial Intelligence Resources
- Exploring the Opportunities and Challenges with Generative AI
- Leveraging Generative AI for Inclusive Excellence in Higher Education
- Student Perspectives on Using AI
- The Impact of AI on Accessibility