⚫ LVL 05 — SENIOR DEVELOPER · SESSION 119 · DAY 119

AUTO-ENHANCE DESIGN

🏢 THE ASSIGNMENT
Senior Ownership | End-to-End Feature

The CTO calls you into her office. "The #1 user request is AI-powered auto-enhance: upload a photo, click one button, and the image looks professional. I want YOU to own this feature end-to-end." Design the scope, UX, API, data model, and implementation plan. Present your design to the team tomorrow. This is what senior ownership looks like.
CONCEPTS.UNLOCKED
🔍
Product Research
What does "auto-enhance" actually mean? Brightness correction, contrast adjustment, color balance, sharpening, noise reduction. Study how Google Photos, Apple Photos, and Lightroom implement it. Understand the user expectation.
📐
Scope Definition
What's achievable in 3 sessions? MVP: analyze image statistics → apply corrective filters automatically. Not ML-based (yet). Math-based: histogram analysis, mean brightness, contrast range. Ship the 80% solution.
🎨
UX Design
Where does the button go? What feedback does the user see? One-click in the toolbar. Loading animation during analysis. Before/after slider to compare. "Adjust Result" panel for fine-tuning. The user stays in control.
📡
API Design
POST /api/enhance — accepts image, returns parameters. Not the enhanced image itself — the enhancement PARAMETERS. The client applies them. This is faster, reversible, and adjustable. Separation of analysis from application.
📋
Technical Design Doc
1-2 pages. Problem, approach, API, data model, plan. Written before code. Reviewed by peers. The document that aligns everyone. Senior engineers write design docs — junior engineers jump to code.
🎤
Design Presentation
Present to the team. Defend decisions. Incorporate feedback. "Why not use a pretrained ML model?" "What about already-perfect images?" "How do you handle black and white?" Every question improves the design.
HANDS-ON.TASKS
01
Research: What Is Auto-Enhance?
// Auto-enhance typically adjusts:

1. BRIGHTNESS
   Dark photos → lift shadows
   Overexposed → pull highlights
   Target: mean brightness ~128

2. CONTRAST
   Flat/hazy → increase range
   Already punchy → leave alone
   Target: full histogram spread

3. SATURATION
   Washed out → boost colors
   Oversaturated → pull back
   Target: natural-looking vibrancy

4. SHARPNESS
   Soft/blurry → apply unsharp mask
   Already crisp → minimal touch
   Target: edges defined, no halos

// MVP scope (3 sessions):
// ✅ Brightness, contrast, saturation
// ✅ Statistical analysis (histograms)
// ✅ Before/after comparison
// ✅ User-adjustable result
// ❌ ML-based (future iteration)
// ❌ Noise reduction (complex)
// ❌ Face detection (scope creep)
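The brightness rule above ("target: mean brightness ~128") can be sketched as a pure function. A minimal sketch: `suggestBrightness`, `TARGET_MEAN`, and the `MAX_SHIFT` clamp are illustrative names and values, not part of the lesson's codebase.

```typescript
// Sketch: suggest a brightness shift from pixel statistics.
const TARGET_MEAN = 128; // "properly exposed" midpoint (0-255 scale)
const MAX_SHIFT = 60;    // clamp to avoid over-correction (assumed value)

// luma values are 0-255, one per sampled pixel
function suggestBrightness(luma: number[]): number {
  const mean = luma.reduce((sum, v) => sum + v, 0) / luma.length;
  const shift = TARGET_MEAN - mean; // positive = lift shadows
  return Math.max(-MAX_SHIFT, Math.min(MAX_SHIFT, Math.round(shift)));
}

// A dark image (mean = 60) gets a positive lift:
suggestBrightness([40, 60, 80]); // → 60 (clamped from +68)
```

Contrast and saturation suggestions follow the same pattern: measure a statistic, compare against a target, clamp the correction.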
02
Design the UX Flow
// User flow:
1. User uploads / has an image open
2. Clicks "✨ Auto-Enhance" in toolbar
3. Button shows loading spinner (1-2s)
4. Image updates with enhancements
5. Before/after slider appears
   (drag to compare original vs result)
6. "Adjust Result" panel shows:
   - Brightness: +15 (slider)
   - Contrast: +20 (slider)
   - Saturation: +10 (slider)
   - User can tweak any value
7. "Apply" commits the change;
   "Revert" returns to original

// Key UX decisions:
// - Non-destructive: original preserved
// - Adjustable: user isn't stuck
// - Fast: Web Worker for analysis
// - Accessible: screen reader
//   announces "Enhancement applied:
//   brightness +15, contrast +20"
03
Design the API
// Option A: Server-side analysis
// POST /api/enhance
// Body: { imageData: base64 }
// Response: { suggestions: {...} }
// Pro: can use ML later
// Con: upload latency, server cost

// Option B: Client-side analysis ✅
// Analyze in Web Worker (no upload)
// Pro: instant, no server cost
// Con: limited to JS math
// (Perfect for our MVP)

// Decision: Client-side for MVP.
// API endpoint for analytics only:

// POST /api/enhance/analytics
// Body: {
//   imageId: string,
//   analysis: ImageAnalysis,
//   userAdjustments: Adjustments,
//   applied: boolean,
// }
// Purpose: track usage patterns
// to improve algorithm later.

// Future: POST /api/enhance/ml
// Server-side ML analysis
// Behind feature flag
// Rolled out to premium users first
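The analytics body above can be pinned down as types plus a server-side guard. A minimal sketch: `EnhanceAnalyticsEvent` and `isValidEvent` are assumed names, and the guard checks only a few representative fields.

```typescript
interface Adjustments { brightness: number; contrast: number; saturation: number; }

// Mirrors the POST /api/enhance/analytics body shown above.
interface EnhanceAnalyticsEvent {
  imageId: string;
  analysis: Record<string, { suggested: number }>;
  userAdjustments: Adjustments;
  applied: boolean; // false = user reverted
}

// Illustrative request guard before persisting the event.
function isValidEvent(body: unknown): body is EnhanceAnalyticsEvent {
  const e = body as EnhanceAnalyticsEvent;
  return (
    typeof e?.imageId === "string" &&
    typeof e?.applied === "boolean" &&
    typeof e?.userAdjustments?.brightness === "number"
  );
}
```

Typing the contract now means session 121's frontend and the analytics endpoint can be built against the same shape.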
04
Design the Data Model
// Enhancement history
interface EnhancementRecord {
  id: string;
  imageId: string;
  userId: string;
  timestamp: Date;

  // What the algorithm suggested
  analysis: {
    brightness: { mean: number; suggested: number };
    contrast: { range: number; suggested: number };
    saturation: { mean: number; suggested: number };
  };

  // What the user actually applied
  // (null if they reverted)
  applied: {
    brightness: number;
    contrast: number;
    saturation: number;
  } | null;

  // Did the user adjust our suggestions?
  userAdjusted: boolean;
}

// This data lets us measure:
// - How often is auto-enhance used?
// - How often do users adjust results?
// - What direction do they adjust?
//   (algorithm too aggressive? timid?)
// - Data-driven algorithm improvement.
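The measurements listed above map directly to the success metrics in the design doc. A sketch of the aggregation, using a trimmed-down record shape (`RecordLite` and `enhanceMetrics` are illustrative names):

```typescript
// Trimmed-down EnhancementRecord: just the fields the metrics need.
interface RecordLite {
  applied: { brightness: number } | null; // null = user reverted
  userAdjusted: boolean;
}

function enhanceMetrics(records: RecordLite[]) {
  const total = records.length;
  const reverted = records.filter((r) => r.applied === null).length;
  const adjusted = records.filter((r) => r.userAdjusted).length;
  return {
    revertRate: total ? reverted / total : 0, // success target: < 30%
    adjustRate: total ? adjusted / total : 0, // success target: < 50%
  };
}
```

Running this over stored records each week answers "is the algorithm too aggressive?" with data instead of opinions.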
05
Write the Design Document
# Technical Design: Auto-Enhance

Author: [You]
Date: 2026-02-XX

## Problem
Users manually adjust brightness, contrast, saturation. 73% of users apply the same basic corrections. One-click enhancement would save time and improve results.

## Approach
Client-side image analysis using histogram statistics. Suggest optimal corrections. User can adjust or revert.

## Scope (3 sessions)
- 120: Backend analysis engine + tests
- 121: Frontend integration + UX
- 122: Edge cases, a11y, polish

## Non-Goals (this iteration)
- ML-based enhancement
- Noise reduction
- Face-specific corrections

## API
- Analysis: client-side Web Worker
- Analytics: POST /api/enhance/analytics

## Data Model
EnhancementRecord (see above)

## Risks
- "Already good" images get worse
  → Detect and skip minimal changes
- Very dark/light images over-correct
  → Clamp suggestions to safe ranges
- User expectation exceeds algorithm
  → Set clear "auto" vs "AI" framing

## Success Metrics
- 40%+ of users try auto-enhance
- <30% revert (70%+ keep result)
- <50% manually adjust (algorithm is good enough for most)
06
Present & Defend

Present your design document to the team (instructor/peers). Defend decisions. Incorporate feedback.

// Anticipate these questions:

"Why not use a pretrained ML model?"
→ Scope. MVP ships in 3 sessions. ML is the future iteration. Client-side math ships NOW.

"What about already-perfect images?"
→ Detect: if all suggestions < 5%, show "Image looks great already!"

"How do you handle B&W photos?"
→ Skip saturation adjustment. Detect via saturation variance.

"What if users hate the result?"
→ Non-destructive. One-click revert. Adjustable sliders. Data collection for algorithm improvement.

"What's the performance impact?"
→ Web Worker. ~200ms for 4K image. Main thread never blocks.

// Incorporate feedback. Update the
// design doc. This is the process.
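The B&W answer above can be made concrete: if per-pixel saturation is uniformly near zero, skip the saturation step. A sketch, assuming the 0.05 threshold is an untuned guess and `isGrayscale` is an illustrative name:

```typescript
// saturation values in [0, 1], one per sampled pixel
function isGrayscale(saturations: number[], threshold = 0.05): boolean {
  const n = saturations.length;
  const mean = saturations.reduce((s, v) => s + v, 0) / n;
  const variance =
    saturations.reduce((s, v) => s + (v - mean) ** 2, 0) / n;
  // B&W: saturation is both low AND nearly uniform across the image
  return mean < threshold && variance < threshold * threshold;
}
```

Being ready with a two-line answer plus the sketch turns a tough review question into evidence the design was thought through.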
CS.DEEP-DIVE

Design docs are how senior engineers scale their impact.

Code solves problems for users. Design docs solve problems for the team. Alignment before implementation prevents costly rewrites.

// Why write before you code:

Cost of change over time:
  Design doc: $1 to change
  In development: $10 to change
  In code review: $100 to change
  In production: $1000 to change

// Design doc structure:
Problem   → WHY are we doing this?
Approach  → HOW will we solve it?
Scope     → What's IN and OUT?
API       → The contract
Data Model→ The schema
Risks     → What could go wrong?
Metrics   → How do we know it worked?

// Google, Amazon, and Stripe all
// require design docs before any
// significant engineering work.
// The document IS the thinking.
"Design Lab"
[A]Get feedback on your design doc from 2+ people. Document every piece of feedback and whether you incorporated it or why you didn't. The feedback log IS the evidence of good process.
[B]Research how Google Photos, Apple Photos, and Adobe Lightroom implement auto-enhance. What's similar? What's different? What can you learn from their UX decisions? Write a competitive analysis.
[C]Create a wireframe or mockup of the auto-enhance UX: button placement, loading state, before/after slider, adjustment panel. Use Figma, pen and paper, or HTML. Visual design communicates better than words.
REF.MATERIAL
ARTICLE
Malte Ubl
How Google writes design docs: context, goals, non-goals, design, alternatives considered. The template used by thousands of engineers.
DESIGN DOC · GOOGLE · ESSENTIAL
VIDEO
Computerphile
How brightness, contrast, and histogram equalization work mathematically. The computer science behind "make this photo look better."
IMAGE · ALGORITHMS
ARTICLE
Wikipedia
The classic auto-contrast algorithm: redistribute pixel intensities to fill the full range. The math behind auto-levels in Photoshop.
HISTOGRAM · ALGORITHM
ARTICLE
Joel Spolsky
Classic essay on why specs matter: "the single biggest unnecessary risk you can take in a software project." Writing before coding saves time.
SPECS · CLASSIC
VIDEO
Lenny Rachitsky
How engineers can think like product managers: user research, scoping, success metrics. The skillset that separates senior from staff.
PRODUCT · SENIOR
// LEAVE EXCITED BECAUSE
You designed a feature end-to-end before writing a single line of code. Scope defined. UX flow mapped. API designed. Data model planned. Risks identified. This is how senior engineers work. Tomorrow: implementation begins.