<aside> 😎
This conversation demonstrates using AI as a thought partner for pedagogical design—exploring trade-offs, anticipating implementation challenges, and iteratively refining assessment structures that prioritize authentic learning over performance metrics. The system addresses AI-enabled cheating not through surveillance or punishment, but by designing assessments where AI tools provide no advantage because the evaluation focuses on synthesis, application, and demonstrated thinking rather than information retrieval or text generation.
TL;DR overview first; the full chatlog (link to that block) demonstrates how I arrived at the first iteration of a new approach that disallows shortcutting the desirable difficulty in learning.
</aside>
The problem: Students are using AI (especially agentic browsers) to complete online quizzes autonomously, so traditional online assessments no longer reliably measure student learning.
The solution: Shift to assessment methods that AI can't fake: live conversation and handwritten work that require thinking, not just information retrieval.
What it is: A simple 0-5 score each week based on any evidence of understanding you observe: class discussion, questions during lecture, think-pair-share, office-hours conversations, or written work.
Why it works:
The key insight: You're not grading "participation"—you're documenting evidence that learning is happening.
What it is: Students fold a sheet of paper in half. Left side = textbook notes. Right side = class notes. They submit their book notes before the chapter starts (the timestamp proves they did the reading), then resubmit with class notes added after you finish the chapter.
Why it works: