Definition and Purpose
Artificial Intelligence refers to the broad field of developing machines or software that can perform tasks that would typically require human intelligence, including learning, reasoning, problem-solving, perception, language understanding, and more.
Generative Artificial Intelligence (Generative AI) refers to a subset of AI technologies that can create new content, data, or information that resembles human-generated output. These technologies use machine learning models and deep learning algorithms to analyze and learn from existing data, thereby generating novel outputs that can include, but are not limited to, text, images, audio, and video.
Scope of AI
This guidance document outlines the ethical, responsible, and secure use of Artificial Intelligence (AI) within the University. This guidance aims to support the University’s mission of education, research, and community service while protecting all stakeholders' integrity, privacy, and rights. It covers the use of AI tools, including but not limited to machine learning models, natural language processing systems, and AI-based search and administrative tools.
This page focuses on guidance for the University's students, faculty, and staff. It is divided into the following sections:
- Key Considerations
- Educator Guidance
- Student Guidance
- Practical AI Applications in Research
- Intellectual Property Guidelines
- International Travel Considerations
- Procurement and Contracting
Key Considerations
- Privacy: AI models trained on personal data can inadvertently reveal sensitive information, leading to privacy breaches. In addition, realistic synthetic data generated by AI can sometimes be reverse-engineered to identify individuals, or misused for identity theft, surveillance, or other malicious activities. Personal and synthetic data should be handled with caution and awareness.
- Intellectual Property and Copyright Issues: Generative AI can produce content similar to, or directly copied from, existing works, potentially violating intellectual property rights. This can result in legal disputes and challenges in determining ownership of AI-generated content.
- Security Risks: Generative AI can create sophisticated phishing scams, deepfakes, and other forms of cyberattack. These can be challenging to detect and defend against, posing significant security risks.
- Bias: AI models can inherit biases in the data they are trained on, leading to unfair or discriminatory outcomes.
- Accuracy: AI users should always validate the accuracy of generated content against trusted first-party sources. Users are accountable for the content, code, images, and other media that AI tools produce, and should be wary of "hallucinations" (e.g., citations to publications or materials that do not exist) and other misinformation.
- Code Development: Generative AI can assist software developers with writing code. However, caution should be exercised when using AI to generate computer code, because the resulting code may be inaccurate, lack security precautions, and potentially damage software systems. All code should be reviewed, ideally by multiple people.
- Institutional Data: The acceptable use and institutional data governance policies govern the University's institutional data. These policies disallow uploading institutional data into AI products that the University has not sanctioned. In addition, AI users should exercise great care regarding what data they upload into any product.
- Business Process: AI offers many potential benefits and efficiencies for business process improvement and automation. Tools, processes, and outputs should be reviewed for the reliability, accuracy, consistency, and privacy of institutional data.
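To illustrate the Code Development caution above, here is a minimal sketch (the table, function names, and query are hypothetical, not taken from any University system) of a security flaw commonly found in AI-suggested code: building an SQL query by string interpolation, which is vulnerable to injection. A human review would catch this and replace it with a parameterized query.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Pattern often produced by AI assistants: interpolating user input
    # directly into SQL. A crafted input can alter the query's logic.
    cur = conn.execute(f"SELECT id FROM users WHERE name = '{username}'")
    return cur.fetchall()

def find_user_safe(conn, username):
    # Reviewed version: a parameterized query treats the input as data
    # only, so it cannot change the structure of the SQL statement.
    cur = conn.execute("SELECT id FROM users WHERE name = ?", (username,))
    return cur.fetchall()

# Small in-memory database for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)",
                 [(1, "alice"), (2, "bob")])

malicious = "x' OR '1'='1"
print(len(find_user_unsafe(conn, malicious)))  # matches every row: 2
print(len(find_user_safe(conn, malicious)))    # matches no rows: 0
```

The unsafe version returns every row when given the injection string, while the reviewed version correctly returns none. This is exactly the kind of defect that code review, ideally by multiple people, is meant to catch before AI-generated code reaches production.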