Evolving "Practice" from a Feature to a Scalable Learning System.
At Allen, "Practice" isn't just a feature; it is 50% of the app experience. For students preparing for high-stakes exams (JEE/NEET), the volume and quality of practice are the primary drivers of success. Students who cross a threshold number of practice questions (500+ for JEE, 2,000+ for NEET) consistently demonstrate higher engagement, better renewal outcomes, and improved scores in the actual exams.
However, our practice ecosystem was fragmented, technically debt-ridden, and treated "learning" and "testing" as the same action.
I led the strategy to unify these fragmented experiences into a single, modular Practice Engine. While our initial proposal was deprioritised, a subsequent feature initiative ("Score Booster") became the vehicle for gathering deep behavioural insights. We pivoted from a purely "Exam Simulation" approach to a "Guided Learning" framework, shipping a scalable foundation that supports diverse student cohorts and technical constraints.
Duration
18 months (phased rollout)

Platform
Mobile, tablet, desktop

Outcome
Students who crossed the question threshold: 8K last year → 14.5K (↑ 81%)
Weekly power users: ~24K → ~40K (↑ 66%)
The Challenge : A Fragmented Ecosystem.
In April 2024, we received a requirement to “apply a quick fix to the Q-Reel experience to support complex question types.” This prompted a deeper audit of the existing practice experience and an analysis of the kinds of questions students encounter in JEE and NEET.
The audit revealed that practice existed across multiple contexts (Homework, Tests, Custom Practice), but relied on two legacy interfaces:
1. Third-party NTA web view
Functionally complete but hostile to mobile: non-responsive layouts, no image zoom, and mouse-centric interactions.
Third-party NTA experience :

2. Q-Reel interface
Designed for quick revision, not deep problem-solving. Auto-scrolling, timers, limited navigation, and constrained layouts made it unsuitable for complex questions.
Q-Reel experience :



The realisation :
We were asking students to prepare for the hardest exams of their lives on interfaces that couldn’t even let them properly view a table on a phone.
This made it clear that complex question types could not be force-fit into Q-Reel. A fundamentally new practice experience was required.
The False Start : The Exam Simulator.
My initial hypothesis was that students needed a high-fidelity simulation of the real exam environment. To validate this, I studied competitor apps such as PW, Unacademy, BYJU’S, and Aakash, focusing on how they supported complex question types, images, tables, and exam-like interactions.
The assumption was that a native, unified practice interface - mirroring the computer-based JEE/NEET experience with timers, question palettes, and mark-for-review - would increase engagement.
The proposal
We designed a robust “Exam Mode” for power users, combining student feedback with usability findings:
Portrait and landscape support, added based on explicit student requests and observed behaviour of rotating phones for dense questions.
A bookmark feature, requested by students to mark difficult questions for later review.
A split-screen interaction model using a bottom-sheet answer palette, introduced to solve a usability issue: long, data-heavy questions forced excessive scrolling, making it impossible to view the question and answer options simultaneously.
"Exam Mode" with bottom-sheet answer palette :
The outcome :
Deprioritised
Leadership viewed the solution as an optimisation rather than a growth lever. The feedback was pragmatic:
“The third-party web view is clunky, but students are using it. Why invest months of engineering effort in rebuilding it natively?”
My takeaway
I had optimised for usability hygiene, not learning impact. To secure stakeholder buy-in, I needed to demonstrate that the redesign would measurably improve learning outcomes - not just polish the interface.
The Pivot: Insights from "Score Booster".
A year later, we initiated "Score Booster"- a feature designed to improve mastery in weak topics. The behavioural study we conducted here unexpectedly became my Trojan horse to revisit the Practice architecture, armed with better research.
Our research uncovered a critical insight: students are not a single, uniform group. In high-stakes exam prep, practice behaviour varies significantly by confidence, proficiency, and intent.
We identified three distinct student cohorts, each with fundamentally different mental models of practice:
Toppers
Toppers practice for simulation and diagnosis.
They often attempt the hardest questions first, use timers consistently, and limit attempts before skipping and returning later.
They rely on tools like question palettes and mark-for-review, prefer dense information layouts, and are disrupted by hints or pop-ups.


Average performer
Average performers practice to build momentum and confidence.
They typically start with easier questions, seek immediate answer validation, and avoid timers during daily practice due to anxiety.
Progress indicators and small wins help them stay engaged, especially around medium-difficulty questions where they often stall.



Low performer
Low performers need structure and scaffolding.
They gravitate toward solved examples, feel overwhelmed by open-ended problems, and find binary “right/wrong” feedback discouraging.
For them, the system must behave more like a tutor offering hints, step-by-step guidance, and direct links to learning content.



The synthesis
A one-size-fits-all practice interface is fundamentally broken. What accelerates toppers creates anxiety for beginners, and what supports beginners slows experts down. This insight became the foundation for a modular practice engine that adapts its behaviour - acting as an examiner, coach, or tutor - based on the student’s proficiency and intent.
Foundations: Pedagogical Strategy & Comparative Analysis.
To ensure the redesign drove actual learning outcomes, we triangulated our strategy by combining established Learning Science principles with an analysis of habit-forming learning apps.
Grounding in Learning Science
We referenced three core frameworks to understand how students process complex information and progress toward mastery:



Cognitive Load Theory
The legacy experience created high "extraneous load" through constant scrolling and separated data.
The new interface needed to eliminate split attention, preserving the user's working memory for actual problem-solving.
Vygotsky’s Zone of Proximal Development
(Diagram: concentric zones - what the learner can do unaided, can do with guidance, and cannot yet do.)
The research highlighted the need for a "More Knowledgeable Other" - scaffolding that provides tiered support to bridge the gap between confusion and capability, eventually fading as the student improves.



Bloom’s Taxonomy
To mirror the cognitive journey, we recognised that the experience could not be static.
It had to support different cognitive stages: from Remembering & Understanding (learning concepts) to Applying (practice) and finally Evaluating (high-pressure exams).
Learning from Casual and Guided Apps
While direct competitors (PW, Aakash, BYJU’S) largely replicate high-stress exam conditions, we looked outside the industry to apps that excel at engagement and concept building, such as Duolingo, Brilliant, and Khan Academy.
Our analysis revealed a different approach to difficulty:
Progressive disclosure:
Breaking complex concepts into digestible steps to reduce intimidation.
Instructional Feedback:
Treating errors as learning opportunities with immediate context, rather than simple judgment.
Gentle gamification:
Using lightweight reinforcement to encourage consistency over intensity.
Strategic insight :
This research highlighted a critical gap in the market: existing tools were optimised for Assessment, but students needed a tool optimised for Learning.
Our solution needed to be hybrid and adaptive - capable of acting as a gentle, guided tutor for early-stage learning, while retaining the ability to transform into a rigorous, unassisted testing environment for exam prep.
The Strategy: The "Guided Learning" Framework.
We moved away from a binary "Practice vs. Test" model to a Progressive Disclosure model and defined the logic based on student mastery levels, ensuring the system adapts to the user's needs rather than forcing a one-size-fits-all interface.
Phase 1 : The Learning Phase (Concept Mastery)
Designed for new topics and low-to-average performers.
We minimised cognitive load by enforcing linear progression, hiding question lists, and serving only easy questions in small batches. Timers were off, feedback was immediate, and hints auto-surfaced. If a student struggled repeatedly, the system triggered an in-context concept intervention (micro-video), followed by a similar-question loop to confirm understanding.
When answered correctly :
When answered incorrectly, hint experience auto-surfaces :
Incorrect even after trying with hint - show solution and try similar question :
Incorrect even after hint and solution - the system could not resolve the misconception, so we suggest routing the question to a teacher as a doubt :
If 3 consecutive incorrect attempts occur on the same concept :
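The escalation ladder above can be sketched as a small decision function. This is an illustrative sketch with hypothetical names, not our production logic:

```typescript
// Intervention escalation for the Learning Phase (illustrative sketch).
// Each miss on a question moves the student one rung up the ladder; three
// consecutive misses within one concept trigger a concept-level intervention.

type Intervention =
  | "none"                  // answered correctly - continue
  | "auto_hint"             // first miss: hint auto-surfaces
  | "solution_and_similar"  // miss after hint: show solution, queue a similar question
  | "route_to_teacher"      // still stuck: suggest raising it as a doubt
  | "concept_video";        // 3 consecutive concept misses: in-context micro-video

function nextIntervention(
  missesOnQuestion: number,        // incorrect attempts on the current question
  consecutiveConceptMisses: number // consecutive misses across the same concept
): Intervention {
  if (consecutiveConceptMisses >= 3) return "concept_video";
  if (missesOnQuestion === 0) return "none";
  if (missesOnQuestion === 1) return "auto_hint";
  if (missesOnQuestion === 2) return "solution_and_similar";
  return "route_to_teacher";
}
```

Keeping this as a pure function makes each rung of the ladder independently testable and easy to tune per cohort.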
Phase 2 : The Practice Phase (Building Momentum)
Designed for Average Performers building proficiency. This phase is split into two distinct modes that unlock sequentially.
Untimed Practice (Sandbox):
Easy-to-medium questions, flashcard-led revision, visible question navigation, guided hints, and lightweight gamification to build confidence.
Timed Practice (Training Wheels):
Medium-difficulty questions with exam-like constraints - timers on, delayed feedback, no auto-hints, and solutions revealed only after submission, helping students transition toward test readiness.
Untimed practice experience with optional flashcards and question list view for easy navigation :
Phase 3: Revision / Test (The Arena)
For exam readiness.
Students face medium-to-hard questions in a full exam simulation: strict timers, visible question palette, zero assistance (no hints or interventions), and performance-only analytics - mirroring the high-stakes JEE/NEET environment.
Timed, with zero assistance / intervention :



The System Architecture: Building a Modular Engine.
To make this strategy viable for engineering, we couldn't build three separate apps. We created a Modular Practice Engine - a single codebase with configurable properties.
Feature            | Learning Phase           | Practice (untimed)       | Practice (timed)          | Revision Phase
Timer              | OFF                      | OFF                      | ON                        | ON
Question list view | Hidden (linear flow)     | Visible                  | Visible                   | Visible
Hint behaviour     | Auto-surface (proactive) | Auto-surface (proactive) | Voluntary (click-to-view) | Disabled
Feedback loop      | Immediate (per question) | Immediate (per question) | Batch (end of session)    | Batch (end of session)
Content injection  | Concept videos (auto)    | Flashcards (intro only)  | None                      | None
Error journey      | Similar-question loop    | Standard retry           | None                      | None
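The matrix above reduces to plain configuration data. A minimal sketch, assuming hypothetical property names (our actual engine's schema differs):

```typescript
// One codebase, four configurations: each surface picks a phase config
// rather than owning a bespoke UI.
type PhaseConfig = {
  timer: boolean;
  questionList: "hidden" | "visible";             // hidden forces linear flow
  hints: "auto" | "voluntary" | "disabled";
  feedback: "immediate" | "batch";                // per question vs end of session
  contentInjection: "concept_videos" | "flashcards" | "none";
  errorJourney: "similar_question_loop" | "standard_retry" | "none";
};

const PHASES: Record<string, PhaseConfig> = {
  learning:        { timer: false, questionList: "hidden",  hints: "auto",      feedback: "immediate", contentInjection: "concept_videos", errorJourney: "similar_question_loop" },
  practiceUntimed: { timer: false, questionList: "visible", hints: "auto",      feedback: "immediate", contentInjection: "flashcards",     errorJourney: "standard_retry" },
  practiceTimed:   { timer: true,  questionList: "visible", hints: "voluntary", feedback: "batch",     contentInjection: "none",           errorJourney: "none" },
  revision:        { timer: true,  questionList: "visible", hints: "disabled",  feedback: "batch",     contentInjection: "none",           errorJourney: "none" },
};

// Surfaces map onto phases instead of duplicating UI:
const SURFACES = {
  scoreBooster: PHASES.learning,
  homework: PHASES.practiceUntimed, // or practiceTimed, per teacher assignment
  mockTest: PHASES.revision,
};
```

Modelling phases as data is what lets a single rendering engine serve Score Booster, Homework, and Mock Tests without forked UI code.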
Why this matters
This modularity allowed us to scale the "Practice" experience across the entire app ecosystem:
Score Booster uses the Learning Phase configuration (Guided).
Homework uses the Practice Phase configuration (Untimed or Timed based on teacher assignment).
Mock Tests use the Revision Phase configuration.
Support for all question types :
















Landscape mode and all-device support :



Outcome.
What went live
We shipped the first phase of the Modular Practice Engine, standardising the core experience across learning and assessment workflows. This release included:
Practice modes with & without timer
Revision / Test mode with strict timing
Unified Question List View (palette)
Dedicated Solution View with structured explanations
Support for all devices, including landscape and portrait orientations on mobile and tablet
Product adoption
The new Practice Engine is now the default experience across multiple surfaces:
Custom Practice
Improvement book
Score Booster - Practice and Revision modes
Test Experiences
This allowed us to replace fragmented legacy flows with a single configurable system without rebuilding UI multiple times.
Early Impact (Post Launch)
Students who crossed the question threshold: 8K last year → 14.5K (↑ 81%)
Weekly power users: ~24K → ~40K (↑ 66%)
Why this mattered :
Clarifying intent (practice vs. revision vs. testing), improving navigation, and standardising the solution flow meaningfully reduced cognitive friction, encouraged more consistent practice and made it accessible to a wider student base.
This validated the system-first approach - proving value even before introducing guidance layers like hints and tutoring logic in the next phase.
Learnings.
Design for intent, not features
Practice isn’t a single workflow - students switch between learning, training, and testing. Treating these as distinct modes (not one feature) gives us a clearer foundation to design on.
One-size-fits-all systems silently exclude users
Unifying Practice into a configurable engine (timers, hints, navigation, feedback) reduced fragmentation and future tech debt.
Business alignment requires outcome framing, not UX framing
Our first redesign failed to get buy-in because it solved usability pain, not learning impact. Reframing around mastery + cohort needs created executive alignment, even for phased delivery.
Research should shape the roadmap, not just the UI.
Cohort insights (toppers vs. average vs. struggling learners) helped us sequence what to build now vs. later, ensuring guided learning can be layered in safely when ready.
Open to new ideas, learnings and coffee chats.
+91 9667319653
saraswathyp.design@gmail.com
Made with love on Framer
© SARASWATHY 2025