An AI-supported assessment platform that makes student thinking visible.
Generative AI has irrevocably changed how students work. Universities now ask for "AI reflections" — but the reflection itself can be fabricated by the same tool that wrote the essay.
Educators receive the final essay and nothing else — the process is invisible.
Student over-reliance on GenAI — using it to substitute for engagement rather than enhance it — has become a recognised hurdle in higher-ed assessment.
The window for building assessment systems that meaningfully evaluate AI-assisted learning is closing — fast.
Many educators are moving from prohibition toward integration — embedding AI into assessment to prepare students for an AI-literate job market.
But the shift creates a validity problem: instructors still cannot tell whether a student engaged with the ideas or simply delegated cognition to a machine.
Assessment systems built only around final products cannot answer the question that now matters most: how was this made?
Friction is the resistance between intention and insight. We think that's where learning happens — and where assessment should look.
Friction is a web-based workspace that makes student–AI collaboration visible, structured, and pedagogically meaningful. A layered AI architecture observes the process and reports on it — without surveilling the student.
The student's primary workspace — a ChatGPT-style interface embedded beside a working document. Every prompt, response, paste event, and timing signal is captured as it happens.
An AI study partner — not an authority figure. It observes the full workflow and intervenes at pedagogically significant moments, asking the student to explain, verify, justify, or rethink.
On submission, a third agent synthesises the entire interaction record into a criteria-aligned report for the educator. Engagement quality, paste behaviour, working time, process depth — all in one published artefact.
Friction runs alongside existing assessment, not on top of it. The student experience stays familiar; the educator gets something they've never had before.
Essay on one side, AI beside it. Prompts, pastes, and edits captured in real time.
At pedagogically meaningful moments, the Peer asks the student to explain, verify, or rethink.
The response is logged — not graded — as evidence of engagement with the material.
The full interaction trace is sealed and passed to Layer 03 for synthesis.
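An interaction trace like the one described above could take roughly this shape. A minimal sketch with hypothetical names and fields, not Friction's actual schema:

```typescript
// Sketch of a captured interaction trace (hypothetical names, not
// Friction's actual schema). Every workspace action becomes a
// timestamped event in a discriminated union.
type WorkspaceEvent =
  | { kind: "prompt"; at: number; text: string }   // student -> AI
  | { kind: "response"; at: number; text: string } // AI -> student
  | { kind: "paste"; at: number; chars: number; source: "ai" | "external" }
  | { kind: "edit"; at: number; chars: number };   // manual edits to the document

// One derived signal among many: characters pasted from the AI panel.
function aiPastedChars(trace: WorkspaceEvent[]): number {
  let total = 0;
  for (const e of trace) {
    if (e.kind === "paste" && e.source === "ai") total += e.chars;
  }
  return total;
}
```

Because the union is discriminated on `kind`, downstream analysis can narrow each event safely without losing type information.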
Thirteen observed behaviours — copy-pasting, reflection laundering, prompt injection, answer extraction, and more.
A criteria-aligned account of process: working time, paste events, intervention responses, scaffold retention.
Each learning objective maps to observed behaviour — or, visibly, to "Not Observed".
Feedback the student can act on. Integrity flags where warranted, guidance where helpful.
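The objective-to-evidence mapping described above could be sketched like this. Names and structure are illustrative assumptions, not the real report generator; the key property is that objectives with no matching evidence surface explicitly as "Not Observed" rather than being silently dropped:

```typescript
// Hypothetical sketch of criteria mapping; the real report synthesis
// is more involved. Each learning objective is reported against its
// matching evidence, or explicitly marked "Not Observed".
type Evidence = { objectiveId: string; note: string };

function mapCriteria(
  objectiveIds: string[],
  evidence: Evidence[],
): Record<string, string> {
  const report: Record<string, string> = {};
  for (const id of objectiveIds) {
    const notes = evidence
      .filter((e) => e.objectiveId === id)
      .map((e) => e.note);
    report[id] = notes.length > 0 ? notes.join("; ") : "Not Observed";
  }
  return report;
}
```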
We treat each report as an academic artefact — criteria-aligned, citable, and designed to be read.
The Friction Report arrives with the submission. It doesn't replace the essay — it sits beside it, surfacing the process the essay conceals.
Generous typography, plain language, and a structure educators already recognise from journal articles. No scores. No leaderboards. No surveillance theatre.
A criteria-aligned account of process, not product.
Submission completed in 14 minutes of active working time across 3 sessions. 2,140 words pasted from AI across 7 events; 58% of final text retains the AI scaffold unchanged. No evidence of source verification or argument development.
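Figures like those in the excerpt above can be derived mechanically from the captured trace. A minimal sketch, assuming hypothetical session and paste records:

```typescript
// Hypothetical records, not Friction's actual data model.
type Session = { activeMinutes: number };
type PasteEvent = { words: number; retainedWords: number };

function summarise(sessions: Session[], pastes: PasteEvent[]) {
  const pastedWords = pastes.reduce((s, p) => s + p.words, 0);
  const retainedWords = pastes.reduce((s, p) => s + p.retainedWords, 0);
  return {
    activeMinutes: sessions.reduce((s, x) => s + x.activeMinutes, 0),
    sessionCount: sessions.length,
    pasteEvents: pastes.length,
    pastedWords,
    // Share of pasted text that survives unchanged into the final draft.
    retainedPct:
      pastedWords === 0 ? 0 : Math.round((100 * retainedWords) / pastedWords),
  };
}
```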
Friction won the challenge track at the 2026 EduX Oceania Hackathon — designed, built, and deployed in five days. The demo below was the winning submission.
We're aiming to open pilots soon with a small number of law schools and AI-forward faculties. One semester, one assessment, one subject.
Log in to the live prototype → · Or reach out · isam.elsheikh@monash.edu