Grading & Rubric
Your final CACoM grade reflects both scientific quality and professional conduct. We value clear research thinking, rigorous execution, reproducibility, and collaboration.
Rather than assigning fixed percentages, CACoM uses a priority-based evaluation model.
Each priority represents how central a given aspect is to the overall assessment — from foundational scientific quality to professional behavior.
The breakdown below is intended as a guide for orientation, not a point-by-point formula.
It shows which aspects of your work receive the most attention during evaluation, but the final grade is awarded according to the German grading system (1.0–5.0) and is not reported as separate component scores.
Think of this rubric as a transparent map of what matters most, not a mechanical point system.
Understanding the Priorities
CACoM grading follows a priority model rather than a fixed percentage formula.
Each priority reflects how much weight and attention a given aspect receives during evaluation.
- Priority 1 — Foundational: the core of your scientific contribution.
- Priority 2 — Central: critical for demonstrating depth, reasoning, and insight.
- Priority 3 — Essential: ensures clarity, technical quality, and reproducibility.
- Priority 4 — Professional: reflects independence, reliability, and conduct.
The priorities are hierarchical in emphasis, not mathematical in weight. Strong performance in Priorities 1 and 2 usually defines excellent projects, but neglecting later priorities (e.g., poor documentation or unprofessional behavior) can still significantly lower your final grade.
Priority 1
1. Research Question & Motivation
| Aspect | What we look for |
|---|---|
| Clarity | Is the question specific, answerable, and relevant to computational medicine? |
| Background | Does the team understand prior work and clinical context? |
| Motivation | Is the clinical or scientific importance of the problem clear? |
| Feasibility | Is the scope appropriate for a one-semester project? |
Projects that start with a focused and well-motivated question are far easier to execute — and almost always score higher.
Priority 2
2. Results, Analysis & Reflection
| Aspect | What we look for |
|---|---|
| Clarity of results | Are findings clearly presented and supported by evidence? |
| Critical reflection | Does the team discuss limitations and alternative explanations? |
| Consistency | Do the results align with the stated objectives? |
| Scientific maturity | Does the team show understanding of what their results mean (and what they do not mean)? |
Priority 3
3. Methods & Technical Implementation
| Aspect | What we look for |
|---|---|
| Methodological soundness | Are chosen methods appropriate and justified? |
| Execution quality | Is the implementation correct, documented, and reproducible? |
| Innovation | Does the team adapt or improve existing techniques thoughtfully? |
| Validation | Are evaluation metrics meaningful and properly applied? |
Reproducing and extending an existing method is fully acceptable — originality matters less than sound reasoning and careful validation.
4. Reproducibility & Documentation
| Aspect | What we look for |
|---|---|
| Organization | Are code, data, and instructions complete and coherent? |
| Transparency | Are all decisions and parameters documented? |
| Repeatability | Can the main results be regenerated from the materials provided? |
| Ethics & compliance | Are data handled responsibly (no leaks, proper attributions)? |
See Reproducibility Package for detailed requirements.
5. Presentation & Performance During Discussion
| Aspect | What we look for |
|---|---|
| Poster & Video | Visual clarity, focus on scientific content, and absence of promotional fluff. |
| Oral Presentation | Concise, engaging, and within the allotted time. |
| Discussion Handling | Can the team defend and explain their choices during questions and feedback? |
| Clarity of communication | Are ideas expressed in a way understandable to both technical and clinical audiences? |
Even though this category is listed under priority 3, the ability to clearly communicate your findings is non-negotiable. If you cannot convincingly explain what you did, why you did it, and what it means, your project cannot be considered successful — regardless of its technical depth.
Priority 4
6. Independence, Initiative & Professionalism
| Aspect | What we look for |
|---|---|
| Autonomy | How independently did the team plan, execute, and troubleshoot their work? |
| Proactivity | Did the team identify missing information, tools, or collaborators early on? |
| Effort & persistence | Is progress traceable through commits, drafts, or iterations (not last-minute work)? |
| Professional behavior | Was communication with teammates, instructors, and collaborators reliable, respectful, and timely? |
“Effort” is evaluated through evidence of sustained engagement — regular progress, iterative refinement, and learning.
Hard work alone doesn't guarantee success, but consistent, self-directed problem-solving does.
Unprofessional conduct, missed meetings, or loss of collaborator trust directly affect this score.
Outstanding independence and professionalism can significantly boost your overall evaluation.
Bonus & Penalties
While CACoM primarily rewards scientific depth and professionalism, there is room for outstanding work to shine — and for unprofessional behavior to impact your outcome. The table below summarizes possible adjustments applied at the instructors' discretion.
| Situation | Adjustment |
|---|---|
| ✅ Outstanding innovation or contribution: exceptional insight, novel analysis, or an unusually polished outcome. | ⬆️ Can positively influence the final grade. |
| ✅ Exceptional independence and professionalism in collaborations: sustained initiative, reliability, and constructive engagement with clinicians, industry partners, or other collaborators. | ⬆️ Can strongly enhance the final evaluation. |
| ⚠️ Late topic approval: topic approved after the official deadline but within the late window. | ⬇️ May slightly reduce the final grade. |
| ⚠️ Late submission (without prior notice): submitted after the deadline without a valid reason. | ⬇️ Severely affects the grade or may lead to non-acceptance of materials. |
| ❌ Unprofessional behavior in external collaborations (e.g. missed meetings, poor responsiveness, loss of partner trust, ignoring agreed directions) | ⬇️ Negatively impacts evaluation and reputation within the course. |
| ❌ Plagiarism or academic misconduct | 🚫 Automatic failure (grade = 5.0) — see Plagiarism & Citation Policy |
| ⚙️ AI misuse or superficial content: overreliance on AI tools resulting in shallow or unverifiable work. | 🔍 Reflected in the assessment under “Results & Reflection” — see AI Tools & Academic Integrity. |
Bonuses and penalties are qualitative modifiers, not fixed point additions.
They reflect the instructors' overall judgment of excellence, integrity, and professionalism.
Priority Summary
| Criterion | Priority |
|---|---|
| 1. Research Question & Motivation | 1 |
| 2. Results, Analysis & Reflection | 2 |
| 3. Methods & Technical Implementation | 3 |
| 4. Reproducibility & Documentation | 3 |
| 5. Presentation & Performance During Discussion | 3 |
| 6. Independence, Initiative & Professionalism | 4 |
Summary Checklist for a High Grade
- Your question is clear, clinically motivated, and feasible.
- Your methods are justified and executed correctly.
- Your results are reproducible and critically discussed.
- You can communicate and defend your work clearly.
- Your materials are organized, ethical, and well-documented.
- You demonstrated independence, professionalism, and consistent engagement.
- Any AI assistance was used thoughtfully and transparently.