Guidelines for Use of Artificial Intelligence (AI)
At SUNY Downstate Health Sciences University, we prepare health professionals to uphold
the highest standards of academic integrity, clinical judgment, and ethical reasoning.
These guidelines outline expectations for the responsible use of artificial intelligence
(AI) tools in support of student learning, academic inquiry, and professional development.
AI tools, including generative systems such as ChatGPT and Copilot, image generators like DALL-E, code assistants, transcription software, and research summarizers like Elicit, are rapidly evolving. Used responsibly, they can support multi-step writing, research, and clinical preparation. However, indiscriminate use of AI can impede learning, obscure authorship, and violate ethical and legal obligations. These guidelines help ensure AI use aligns with the University's mission, values, and academic policies.
Because AI systems can produce inaccurate, incomplete, or biased output, all AI-generated content requires critical evaluation. All AI use must comply with SUNY Downstate's Academic Integrity Policy, the policies of the specific college or school, and course-specific instructions and learning objectives. These guidelines apply to all students across all SUNY Downstate academic programs and may be supplemented by additional guidance from specific schools, colleges, departments, or programs, or by course-level policies.
These guidelines will be reviewed annually to ensure they remain relevant in a rapidly evolving landscape.
Scope of Applicability
These guidelines apply to all SUNY Downstate students across all schools, programs, and levels of study. Faculty members and departments may issue additional, discipline-specific guidance. Students must always defer to course-specific instructions set by course faculty/instructors.
General Expectations for Students
- Determine Whether AI Use is Permitted
Check your course syllabus and institutional policies for statements about AI use. If no statement is present, email your instructor for clarification before using AI. When in doubt, assume that AI use is not permitted until explicitly approved.
- Disclose and Document AI Use
If permitted, you must submit the following with your assignment:
- Name and URL of the AI tool(s) used
- Date accessed
- Your original prompt(s) or query
- The AI-generated output
- A brief explanation of why you used the tool and how you evaluated its results
Failure to disclose AI use as required will be treated as a violation of the academic integrity policy. Instructors may use plagiarism detection software and other tools that identify AI-generated content to verify the originality of submitted work. Students should assume that their submissions will be reviewed with these tools to ensure compliance with academic integrity standards.
- Critically Review AI Output
AI tools can produce inaccurate or fabricated information ("hallucinations"); questionable or invented sources; or biased, offensive, or discriminatory content. Consider the following when AI use is permitted:
- Does the output meet the assignment's learning objectives?
- Is the information verifiable, complete, and supported by credible sources?
- What biases or assumptions may be embedded in the tool’s responses?
- How did you revise or improve the AI-generated text?
- Avoid Inappropriate Uses of AI
Students may not use AI to:
- Generate entire assignments or fulfill core academic tasks without explicit instructor permission
- Fabricate citations, data, or research findings
- Paraphrase or reword another's work to evade plagiarism detection
- Submit AI-generated work as their own without proper attribution
- Generate content that misrepresents their knowledge, skills, or clinical decision-making
- Use Caution with Tools That Incorporate AI
Tools like Grammarly, transcription services, and browser extensions may use AI components. Always check with your instructor before using writing aids, summarizers, or auto-transcription platforms. Be mindful of tools that might passively capture sensitive content without disclosure.
- Use Downstate-Issued Accounts When Possible
When available, use Downstate-issued products and Downstate-issued credentials when completing assignments or projects related to Downstate.
Data Privacy, Confidentiality, and Security
Never enter Protected Health Information (PHI), student records, clinical cases (even
fictionalized), or sensitive data into AI tools, even if anonymized. AI must not be
used for clinical documentation (e.g., patient notes), including exercises with standardized
patients or hypothetical scenarios, except where explicitly authorized.
Even anonymized or hypothetical patient cases may include elements of PHI and should
not be processed with AI systems without approval. Using AI platforms that store,
train on, or share submitted content may violate HIPAA, FERPA, New York State Privacy
laws, or institutional privacy standards. Check the terms of service and data handling
policies of any AI system before using it for coursework or research.
The State of New York has issued a directive specifically banning the use of the AI tool DeepSeek on government-managed devices or networks due to national security and privacy risks. SUNY Downstate likewise prohibits use of DeepSeek, even on personal devices, for coursework, research, or other purposes. Students must ensure any AI tool they use complies with relevant SUNY, New York State, and federal policies, especially when the devices running those tools connect to the SUNY Downstate network.
Clinical, Research, and Presentation Contexts
Do not use AI tools to create content for OSCEs, patient case notes, clinical assessments,
or board preparation materials unless specifically authorized. If AI is permitted
in group work or presentations, clarify whether disclosure must be collective or individual.
Use of AI in lab reports, abstracts, posters, or oral presentations must be fully
disclosed and aligned with academic integrity standards.
Ethics and Professionalism
Students are expected to demonstrate personal accountability and uphold the ethical
principles of their professions. Responsible use of AI includes:
- Maintaining transparency about how content is created.
- Ensuring that technology supports—not replaces—critical thinking.
- Avoiding reliance on AI tools for tasks requiring original analysis or clinical judgment.
- Upholding values of equity, objectivity, and scholarly rigor.
- Maintaining appropriate confidentiality of non-public/sensitive information.
Support and Reporting
For questions about AI use in a course, consult your instructor. For broader academic
integrity concerns, contact your program chair, dean, or the Office of Student Affairs.
Suspected misuse of AI may be reported through the existing channels for academic
misconduct.
Last Updated: August 2025