How to build an AI working prototype in 2 hours

What if a dyslexic child who writes only the bare minimum could become a storyteller?

A couple of weeks ago, I was at parents' evening with my wife, and this simple observation turned into a light-bulb moment. My daughter is bright, but every time she sits down to write, she scrapes out a sentence or two and then stops. "It's such a shame," my wife said, "she's learned to write the bare minimum to get by, and no one sees her intelligence or creativity." That moment made me think about cognitive economy: dyslexic children deliberately shrink their output because the act of writing drains every spare resource they have.

We were not alone

This isn't just an isolated family issue. Once I started talking to people and reading academic research, it became clear that dyslexic children develop a resistance to writing because the task demands coordination of ideas, language, and motor skills, all of which tax working memory. The result is a "confidence gap": the child thinks she can't write, teachers see a weak writer, and her creative ideas and intellect never surface. The literature shows that this loop feeds back into avoidance and learned helplessness. However, a 2024 study on speech-to-text (STT) found that children who used STT not only wrote more but also improved their reading decoding, showing that easing the transcription bottleneck can actually boost overall literacy.

When I set out to prototype a solution, I wanted to break that loop. I didn't want to build just a generic transcription app. I wanted to create a piece of personal software: a story-coach that turns dictation into a narrative journey, giving children instant confidence boosts while keeping the cognitive load manageable. In other words, a narrative-based confidence builder that helps them articulate their true voice.
How did I turn this idea into a prototype in just two hours?

I created a rapid-prototype sprint that looked like this:

Step 1 (0–5 mins): Clarify the core problem

Start by slowing down just enough to articulate the real idea and the problem it's trying to solve. Write it out in plain language.

How:
Tools: Some form of AI chat interface; I like Gemini for research and ChatGPT for knocking ideas around.

Output: A clear problem you can test, not a vague ambition.

Here's my original observation: "I have noticed that my dyslexic child is resistant to writing, and whenever she has to write, she actively writes the bare minimum. This means that she has learned to cope and conserve cognitive effort, but it also means that she doesn't give full voice to her brilliant ideas, and people who read her writing don't see the full breadth of her intelligence and creativity. This is a shame, and I am worried that if it continues, she will be put off sharing her ideas, and teachers will assume that she doesn't have any because of the effort it takes her to articulate her thoughts. This, in turn, will negatively impact her and reduce her future opportunities. I want to build a writing app that helps her build her confidence and articulate her thoughts, so she can overcome her barriers to writing regularly."

Step 2 (5–15 mins): Ground it in evidence

Do rapid research to make sure the idea isn't just intuition. Look for scientific, academic, or strategic backing.

How:
The Deep Research Prompt
To use it, copy the block above into any AI chat interface you like and customise it. Make sure you have deep research enabled before submitting.

Tools: Google Scholar, Perplexity, Gemini (for deep research) and academic search.

Step 3 (15–20 mins): Validate the real job-to-be-done

Sense-check the idea against what people are actually trying to achieve. I have a handy tool to do this; drop me a line if you'd like access, or do it manually using the jobs-to-be-done (JTBD) framework.

How:
Tools: Value by Design customer canvas (Email me)

Output: A confirmed target job and early signal of fit. Here's my output.

Step 4 (20–25 mins): Prepare the technical environment

Before designing anything, remove friction from building. There is an amazing repository created by Den Delimarsky called Spec Kit which will help you define and develop your idea without needing to know how to code (I really don't).

How:
Tools: Cursor + Spec Kit

Step 5 (25–35 mins): Explore naming and colour

Generate options, then narrow quickly. I used a combination of Looka, Khroma, and Namelix to explore ideas for the app name and to come up with a colour theme that I thought she would like.

How:
Tools: Namelix, Material Design themes, Khroma

Step 6 (35–40 mins): Create the design foundations

Now shape how the product will feel, not just what it does.

How:
Tools: Looka, Google Font Pairings

Output: An early visual identity you can iterate on.

Step 7 (40–45 mins): Capture the voice of the customer

Anchor everything in real language, not internal assumptions.

How:
Tools: Value by Design customer canvas (Email me)

Output: Authentic messaging and language. Tone-of-voice definition. Finalised design system.

Step 8 (45–55 mins): Act as product manager

Switch hats and decide what actually ships first. I found a brilliant methodology and prompt from a founder called Sean Kochel; you can access it from this Google Doc.

How:
Tools: Product Manager Prompt and Design First Approach; research synthesis + PM heuristics

Output: Clarity on priorities and first release. Here's what it should look like.

Step 9 (55–60 mins): Translate into screens

Turn strategy into something visual and concrete for your LLM to build. I used Sean's approach to design screens of the whole experience in Sketch and then only built the ones required for an MVP.

How:
Tools: Google Sketch

Step 10 (60–90 mins): Formalise the spec

The penultimate, but probably the most critical, stage. Bring everything together into something an LLM coding system (or developer) can act on.

How:
Tools: Cursor (or any IDE) + your LLM of choice (a frontier thinking model) + Spec Kit

Step 11 (90–120 mins): Build the functional prototype

Finally, move from thinking to making. It's as simple as writing "Now build this MVP using .codex/prompts/speckit.implement.md".

How:
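To make the build step concrete, here is a minimal, hypothetical Python sketch of the story-coach's core loop (capture a chunk, offer a nudge, celebrate). The function names and the simple nudge rules are my illustrative assumptions, standing in for the real speech-to-text service and LLM prompts; this is not the actual app's code.

```python
"""Hypothetical sketch of the story-coach core loop.

Assumes dictation has already been transcribed to text (in the real
prototype this would come from a speech-to-text service); the nudge
logic below is a deliberately simple stand-in for an LLM call.
"""


def make_nudge(sentence: str) -> str:
    """Return a friendly, low-pressure prompt to extend the story."""
    sentence = sentence.strip()
    if not sentence:
        return "Tap the microphone and tell me how your story starts."
    if "?" in sentence:
        return "Great question! What do you think the answer could be?"
    return f"Love it! What happens right after: '{sentence}'?"


def story_session(dictated_chunks: list[str]) -> dict:
    """Run one session: collect dictated chunks, nudge after each, award a badge."""
    story, nudges = [], []
    for chunk in dictated_chunks:
        story.append(chunk.strip())
        nudges.append(make_nudge(chunk))
    return {
        "story": " ".join(story),
        "nudges": nudges,
        "badge": "Story Starter" if story else None,
    }


if __name__ == "__main__":
    session = story_session(
        ["The dragon lost its roar.", "It asked the owl for help."]
    )
    print(session["story"])
    print(session["badge"])
```

The point of the sketch is the shape of the loop, not the logic: every dictated chunk earns an immediate, encouraging response, which is the confidence mechanic the research above points to.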
Tools: Cursor, Spec Kit, long-running LLM (Codex, Gemini, Sonnet 4.5 or similar)

The Impact

The transformation the prototype promises is clear. In the first minute, a child hears the microphone click, feels the instant release of transcription, and sees a bubble appear. In the next five minutes, the child picks a bubble, taps a prompt, and receives a concise suggestion that feels like a friendly nudge. By the end of a ten-minute session, the child has a fledgling story, can read it aloud, and sees a badge of completion. That simple, joyful loop mirrors the confidence arc that research identifies as essential: awareness, action, reflection, mastery. The child moves from "I can't write" to "I can write a story," and that shift is the promise of the prototype.

So, what's next?

If you'd like to try the demo, let me know. I'd love to hear what you think of it. If you'd like to share your child's first full paragraph, that would be amazing! I'd love to highlight a few stories each week. Finally, if you're excited to see the full product, sign up for the beta and be among the first to shape the next generation of writing tools for dyslexic learners.

Why not give this process a try yourself? In just two hours, we built a prototype that turns a whisper into a story. Think about what you could build, or what problem you could solve, for someone you love. If I can do it, you can too!

Have a great weekend!

Much love to you all,

C.