// For each of the 6 exam tasks:
## 1. Walk Through Your Process
- "Here's how I diagnosed it..."
- "I found the root cause here..."
- "My fix was..."
- "I verified by..."
## 2. Show Your Work
- Git diff for each fix
- Test results
- Before/after screenshots
- Commit messages
## 3. Highlight AI Usage
- When did you use AI? Why?
- What did AI get right?
- What did AI get wrong?
- How did you verify AI's output?
## 4. Honest Assessment
- What took longest?
- What are you most confident about?
- What are you least confident about?
- What would you do differently with more time?
// Time: 15 minutes per person
// Audience: instructor + peers
// Code review framework:
FOR EACH OF THEIR 6 SOLUTIONS:
## Correctness
- Does the fix actually solve the root cause?
- Or does it mask the symptom?
- Are there edge cases missed?
## Completeness
- Is the fix tested?
- Does it handle error cases?
- Would it survive a code review at a real company?
## Quality
- Is the code clean and readable?
- Are the commit messages clear?
- Is the approach maintainable?
## Comparison
- Did they find the same root cause?
- Is their approach different?
- Which approach is better? Why?
// Write your review as comments:
// "Nice catch on the IDOR! I used
// the same $or approach. Did you
// consider adding an index on
// { userId: 1 } for the new query
// pattern?"
// Group discussion topics:
BUG 1 (CSS):
- Who found it fastest? How?
- DevTools Inspect vs searching CSS?
- Did anyone use responsive mode?
BUG 2 (Redo):
- Where exactly was the off-by-one?
- Did anyone add a unit test?
- Could TypeScript have prevented it?
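For the discussion, here's what the classic redo off-by-one looks like in a minimal history stack. This is purely illustrative, not the exam codebase; the buggy boundary check is shown in the comment:

```typescript
// Hypothetical undo/redo history: an array of states plus a pointer
// to the current one. The redo boundary is where off-by-ones hide.
class History<T> {
  private states: T[] = [];
  private index = -1; // points at the current state

  push(state: T): void {
    // A new edit invalidates everything beyond the current pointer.
    this.states = this.states.slice(0, this.index + 1);
    this.states.push(state);
    this.index = this.states.length - 1;
  }

  undo(): T | undefined {
    if (this.index <= 0) return undefined;
    return this.states[--this.index];
  }

  redo(): T | undefined {
    // Buggy version: `if (this.index >= this.states.length)` lets the
    // pointer advance one past the last state, so the final redo
    // returns undefined. The correct guard is `length - 1`:
    if (this.index >= this.states.length - 1) return undefined;
    return this.states[++this.index];
  }
}
```

Typing `index` as a number doesn't help here, which is a good prompt for the "could TypeScript have prevented it?" question: the bug is in the boundary logic, not the types.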
BUG 3 (Upload crash):
- Which file type triggered it?
- Defensive coding vs strict typing?
- Should we validate MIME types at the middleware level?
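A middleware-level MIME check can be reduced to a pure guard like the sketch below (allow-list and names are assumptions, not the exam's actual code). The point for discussion: a missing or malformed mimetype should fail closed instead of crashing:

```typescript
// Hypothetical upload guard: the kind of check you'd call from upload
// middleware before touching the file. Allow-list is illustrative.
const ALLOWED_MIME = new Set(['image/png', 'image/jpeg', 'image/gif']);

function isAllowedUpload(mimetype: unknown): boolean {
  // Defensive: undefined/non-string mimetypes must return false,
  // not throw "cannot read properties of undefined".
  if (typeof mimetype !== 'string') return false;
  return ALLOWED_MIME.has(mimetype.toLowerCase());
}
```

Strict typing (`mimetype: unknown` forces the runtime check) and defensive coding aren't rivals here; the type signature is what makes the defensive branch non-optional.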
SECURITY (IDOR):
- Did everyone find the same vuln?
- Authorization at route vs middleware?
- How would you prevent IDORs systematically, project-wide?
PERFORMANCE (N+1):
- $lookup vs application-level join?
- Did anyone add database indexes?
- How would you detect N+1 problems automatically in the future?
FEATURE (Bookmarks):
- Compare data models
- Compare API designs
- Who wrote the most complete tests?
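As a baseline for comparing data models, one common shape is a join document keyed by (userId, postId). This is purely illustrative, not a reference solution:

```typescript
// Hypothetical bookmark model: a join document per (user, post) pair.
// A unique compound index on { userId: 1, postId: 1 } would prevent
// duplicate bookmarks and serve the "list my bookmarks" query.
interface Bookmark {
  userId: string;
  postId: string;
  createdAt: Date;
}

// Illustrative helper: the natural uniqueness key for this model.
function bookmarkKey(b: Pick<Bookmark, 'userId' | 'postId'>): string {
  return `${b.userId}:${b.postId}`;
}
```

Alternatives worth contrasting in discussion: an embedded `bookmarkedPostIds` array on the user document (simpler, but unbounded growth) versus this separate collection (indexed both ways, easy to timestamp and paginate).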
// Instructor reviews each solution
// against the rubric:
## Diagnosis Quality /20
- Found root cause, not symptom?
- Used proper debugging tools?
- Systematic approach?
## Fix Quality /20
- Correct and complete?
- Handles edge cases?
- Doesn't break other things?
## Communication /20
- Clear commit messages?
- Good documentation?
- Explained reasoning?
## AI Maturity /20
- Justified AI usage?
- Verified AI output?
- Could explain without AI?
## Feature Quality /20
- Data model appropriate?
- API design clean?
- Tests comprehensive?
## Total /100
- 90+: Exceptional
- 80+: Strong pass
- 70+: Pass with notes
- <70: Areas to strengthen
// Write honest answers:
## What was the hardest problem?
// Why? What made it hard?
// What skill gap did it reveal?
## What was the easiest?
// Why? When did that skill click?
// How long ago would this have
// been hard for you?
## What surprised you?
// Did a "simple" bug take forever?
// Did a "hard" one resolve quickly?
## What would you do differently?
// With unlimited time?
// If you could restart the exam?
## How did AI help / hinder?
// When was it useful?
// When did it lead you astray?
// What's your AI workflow now vs
// when you started the course?
## Biggest growth moment?
// Looking back at all 125 sessions,
// when did the biggest shift happen?
// When did you stop feeling like
// a student and start feeling like
// an engineer?
Code review is one of the highest-leverage activities in software engineering.
Research at Google and elsewhere has found that review catches classes of defects testing misses, and the reviewer learns as much as the author. It's not overhead; it's core to the engineering process.