
AI Tools & Academic Integrity

Modern data science and medicine are deeply intertwined with AI — and so is your workflow. You are welcome to use AI assistants such as ChatGPT, Copilot, Gemini, Claude, or similar tools. However, using them wisely is part of your academic and professional responsibility.

This page explains what responsible AI use looks like in CACoM, what crosses the line, and how it affects your grade.


Guiding Principle

You may use AI tools to assist you — but you must remain the author of your work.

AI can help you write cleaner code, debug faster, or phrase ideas more clearly. It cannot think critically, design experiments, or understand clinical relevance. Those remain your tasks.

If your submission looks like AI wrote it for you rather than with you, it will show — and it will be graded accordingly.


✅ Responsible Use (Encouraged)

Good, transparent, and constructive use of AI includes:

Example | Why it's good
--------|--------------
Using ChatGPT to rephrase unclear sentences in your report. | Improves readability without changing substance.
Asking Copilot to generate boilerplate code or plotting functions that you review and adapt. | Saves time, still requires understanding.
Using an LLM to summarize papers that you later read and verify. | Aids comprehension but doesn't replace reading.
Asking an AI assistant to suggest alternative statistical tests or modeling ideas, then checking them manually. | Stimulates exploration, maintains intellectual control.
Using AI to help design a diagram, figure layout, or README template. | Supports presentation, not content creation.
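
For instance, the plotting boilerplate mentioned above might look like the sketch below. This is a hypothetical illustration (the function name and defaults are ours, not course material); the responsible part is reviewing labels, units, and defaults against your own data before the figure goes into a report.

```python
import matplotlib.pyplot as plt

# Hypothetical sketch of assistant-generated plotting boilerplate.
# Everything here is illustrative; review and adapt it to your data.
def plot_signal(time, values, title="Signal", ylabel="Amplitude"):
    """Plot a 1-D signal over time and return the figure for further tweaks."""
    fig, ax = plt.subplots(figsize=(8, 3))
    ax.plot(time, values, linewidth=1)
    ax.set_xlabel("Time (s)")  # check: does this match your sampling units?
    ax.set_ylabel(ylabel)
    ax.set_title(title)
    fig.tight_layout()
    return fig
```

Accepting such a suggestion is fine; submitting it unchecked, with generic labels that do not match your data, is where it stops being responsible use.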
Tip: You are encouraged to use AI tools as productivity and clarity enhancers, not as replacements for reasoning.


🚫 Irresponsible or Lazy Use

Certain patterns of AI use clearly indicate a lack of effort or understanding and will reduce your grade:

Example | Why it's bad
--------|-------------
“Write a report on fetal heart rate variability for me.” | You have no idea what the text means; superficial and unverifiable.
Submitting AI-generated paragraphs that sound fluent but contain factual or logical errors. | Demonstrates lack of comprehension.
Using AI-generated figures, pseudocode, or equations you can't explain. | You can't defend your own work.
Letting AI decide the direction of your project or the interpretation of results. | Abdication of intellectual responsibility.
Filling your report with verbose, meaningless text that reads beautifully but says nothing. | Style over substance; automatic low score for depth.
Caution: Naive AI usage is easy to recognize: elegant phrasing, generic structure, and no depth. Such work will be treated as minimal effort and graded accordingly.

Responsible use is about collaboration with AI, not delegation. The following expectations clarify what must still come from you.


🧠 What You Must Still Do Yourself

  • Understand and be able to defend every figure, result, and statement in your project.
  • Critically assess all AI suggestions — you remain the final reviewer.
  • Verify that generated content (text, code, or citations) is accurate and checkable (see the code sketch after this list).
  • Ensure your analysis and writing reflect your own understanding, not an AI's hallucination.
  • Maintain consistency: if AI introduces terminology or style shifts, correct them manually.
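
As a minimal sketch of what verifying generated code can look like (the function and numbers below are hypothetical examples, not course material), suppose an assistant proposed an implementation of SDNN, a standard heart rate variability metric. Check it against a case you can compute by hand before relying on it:

```python
import numpy as np

# Hypothetical assistant suggestion: SDNN, the standard deviation of RR intervals.
def sdnn(rr_intervals):
    rr = np.asarray(rr_intervals, dtype=float)
    # Sample standard deviation (ddof=1); a typical subtle AI slip is ddof=0.
    return rr.std(ddof=1)

# Verify against a hand-computed case before trusting the suggestion:
rr = [800.0, 810.0, 790.0, 805.0]   # RR intervals in milliseconds
assert np.isclose(sdnn(rr), 8.5391, atol=1e-3)
```

The specific metric does not matter; the habit does. A few minutes of manual checking is what separates reviewed AI output from blind delegation.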

Transparency

There are currently no universally agreed academic guidelines on how to use or disclose AI tools. In this course, we adopt a simple and practical convention — not as a moral stance, but as a matter of scientific transparency.

We do not judge you for using AI tools; in fact, much of the content on this very website was generated with careful prompting.
The difference lies not in whether you use AI, but in how consciously and responsibly you do it.

You don't have to log every prompt or completion, but if you used AI in a meaningful way, please mention it briefly, for example:

  • “ChatGPT was used to improve clarity of phrasing and generate code comments.”
  • “GitHub Copilot was used for code scaffolding; all logic and analysis were developed by the authors.”
  • “Claude was used to help draft the project abstract, which was then rewritten and verified by the team.”

This acknowledgment shows professionalism and intellectual integrity — nothing more, nothing less.


How It Affects Grading

  • Good AI use can improve clarity, readability, and efficiency — indirectly helping your score under Presentation & Professionalism.
  • Bad AI use (lazy or uncritical) will reduce your score under Results, Analysis & Reflection and Professionalism.
  • If AI-generated content conceals plagiarism or misrepresents your understanding, it will be treated as academic misconduct.

AI and Plagiarism

AI-generated text can inadvertently include copyrighted or unattributed material. If you use AI output, you are still responsible for verifying its originality and accuracy. Any plagiarized or unverifiable AI content will be treated as plagiarism under the Plagiarism & Citation Policy.


Quick Checklist

  • AI used to assist, not replace, your reasoning.
  • You understand and can explain everything you submit.
  • No unverifiable or fabricated references.
  • All generated text or code verified for factual and logical accuracy.
  • Major AI assistance acknowledged (e.g., in README, report, or poster).