How I built an AI prototype in 2 hours (it all started at parents’ evening)



What if a dyslexic child who writes only the bare minimum could become a storyteller?

A couple of weeks ago, I was at parents' evening with my wife, and a simple observation turned into a light-bulb moment. My daughter is bright, but every time she sits down to write, she scrapes together a sentence or two and then stops.

“It’s such a shame,” my wife said, “she’s learned to write the bare minimum to get by, and no one sees her intelligence or creativity.”

That moment made me think about cognitive economy: dyslexic children deliberately shrink their output because the act of writing drains every spare resource they have.

We were not alone

This isn't an isolated family issue. Once I started talking to people and reading the academic research, it became clear that dyslexic children develop a resistance to writing because the task demands coordinating ideas, language, and motor skills, all of which tax working memory. The result is a “confidence gap”.

The child thinks they can't write, teachers think they are a weak writer, and the child's creative ideas and intellect never surface. The literature shows that this loop feeds back into avoidance and learned helplessness.

However, a 2024 study on speech‑to‑text (STT) found that children who used STT not only wrote more but also improved their reading decoding, showing that easing the transcription bottleneck can actually boost overall literacy.

When I set out to prototype a solution, I wanted to break that loop. I didn't want to build just a generic transcription app.

I wanted to create a piece of personal software: a story‑coach that turns dictation into a narrative journey, giving children instant confidence boosts while keeping the cognitive load manageable. In other words, a narrative‑based confidence builder that helps them articulate their true voice.

How did I turn this idea into a prototype in just two hours?

I created a rapid‑prototype sprint that looked like this:

Step 1 (0–5 mins): Clarify the core problem

Start by slowing down just enough to articulate the real idea and the problem it’s trying to solve. Write it out in plain language.

How:

  • Observe what’s bothering you (or your customer)
  • Write a single, clear problem statement

Tools: Any AI chat interface; I like Gemini for research and ChatGPT for knocking ideas around.

Output: A clear problem you can test, not a vague ambition. Here's my original observation:

"I have noticed that my dyslexic child is resistant to writing, and whenever she has to write, she actively writes the bare minimum. This means that she has already learned to cope and preserve the cognitive load, but this means that she doesn’t give full voice to her brilliant ideas and people who read her writing don’t see the full breadth of her intelligence and creativity. This is a shame, and I am worried that if this continues, she will be put off sharing her ideas, and teachers will assume that she doesn’t have any because of the effort it takes her to articulate her thoughts. This, in turn, will negatively impact her and reduce her future opportunities. I want to build a writing app that helps her build her confidence and helps articulate her thoughts to overcome her barriers to writing regularly."

Step 2 (5–15 mins): Ground it in evidence

Do rapid research to make sure the idea isn’t just intuition. Look for scientific, academic, or strategic backing.

How:

  • Scan relevant academic papers
  • Look for proven frameworks or comparable solutions
  • Kick off a deep research call using something similar to the prompt below:

The Deep Research Prompt

"Role: Act as a Lead Product Researcher and Cognitive Scientist specialising in Neurodiversity and EdTech.

Context: I am designing a writing application for a smart dyslexic child who has adopted a "cognitive economy" strategy. Because the act of transcribing thoughts is mentally exhausting, she writes the bare minimum, resulting in a large gap between her high oral/intellectual capability and her limited written output. This is leading to learned helplessness and a loss of confidence.

Objective: Conduct deep research to identify the academic foundations, pedagogical frameworks, and product "First Principles" that are necessary to build an app that reduces the cognitive load of writing and bridges the gap between ideation and text generation.

Please research and synthesise the following three domains:

1. The Cognitive Science of "Writing Resistance" in Dyslexia (2020–2025 Focus)
2. Pedagogical Frameworks & Interventions
3. Technological First Principles & Product Mechanisms.

Output Deliverables:
Executive Summary of Research:

  • Key insights on why she restricts her output.
  • The Framework List: 3–5 specific academic methodologies that the app should mechanise.
  • First Principles for Product Architecture: A set of core design rules to guide the build.

Feature Concepts: Based on the research, suggest 3 innovative features that specifically target the confidence gap."

To use it, copy the block above into any AI chat interface you like and customise it. Make sure you have deep research enabled before submitting.

Tools: Google Scholar, Perplexity, Gemini (for deep research) and academic search.
Output: An evidence-based foundation you can build on. Here's the research.

Step 3 (15–20 mins): Validate the real job-to-be-done

Sense-check the idea against what people are actually trying to achieve. I have a handy tool for this. Drop me a line if you'd like access, or work through the Jobs-to-be-Done (JTBD) framework manually.

How:

  • Create a Customer Value Canvas (Email me)
  • Identify the primary Job-to-be-Done
  • Check whether the problem is real and urgent

Tools: Value by Design customer canvas (Email me)
Output: A confirmed target job and early signal of fit. Here's my output.

Step 4 (20–25 mins): Prepare the technical environment

Before designing anything, remove friction from building. There is an amazing repository created by Den Delimarsky called SpecKit, which will help you define and develop your idea without needing to know how to code (I really don't).

How:

  • Set up your development environment: ask the LLM to do this using .codex/prompts/speckit.implement.md and explain what you are trying to do.
  • Define the basic tech stack

Tools: Cursor + Spec Kit
Output: A build-ready environment.

Step 5 (25–35 mins): Explore naming and colour

Generate options, then narrow quickly. I used a combination of Looka, Khroma, and Namelix to explore ideas for the app name and to come up with a colour theme that I thought she would like.

How:

  • Generate names and colour themes
  • Shortlist what feels aligned with the problem and audience

Tools: Namelix, Material Design themes, Khroma
Output: A tight shortlist of brand concepts. In the end I chose this:

Step 6 (35–40 mins): Create the design foundations

Now shape how the product will feel, not just what it does.

How:

  • Define early brand direction
  • Choose typography and layout principles
  • I refined naming options down to the following logo and fonts

Tools: Looka, Google Font Pairings
Output: An early visual identity you can iterate on.

Step 7 (40–45 mins): Capture the voice of the customer

Anchor everything in real language, not internal assumptions.

How:

  • Extract phrases and pain points from interviews or your customer value canvas, then use another research prompt to find the voice of the customer on forums like Reddit and specialist dyslexia discussion groups.
  • Note how customers describe the problem in their own words. I have my own product that does this; drop me an email if you'd like access to it.

Tools: Value by Design customer canvas (Email me)
Output: Authentic messaging and language, a tone-of-voice definition, and a finalised design system.

Step 8 (45–55 mins): Act as product manager

Switch hats and decide what actually ships first. I found a brilliant methodology and prompt from a founder called Sean Kochel; you can access it from this Google Doc.

How:

  • Define the MVP
  • Create a simple, honest roadmap
  • Decide what not to build yet

Tools: Product Manager Prompt and Design First Approach, plus research synthesis and PM heuristics
Output: Clarity on priorities and first release. Here's what it should look like.

Step 9 (55–60 mins): Translate into screens

Turn strategy into something visual and concrete for your LLM to build. I used Sean's approach to design screens of the whole experience in Sketch and then only built the ones required for an MVP.

How:

  • Design every screen of the experience using Sean's approach
  • Build only the screens required for the MVP
Tools: Google Sketch
Output: Front-end creative direction and functional intent, which aligns the front-end design and the back-end functionality of the app. ​See all screens here​.

Step 10 (60–90 mins): Formalise the spec

This is the penultimate, and probably the most critical, stage. Bring everything together into something an LLM coding system (or developer) can act on.

How:

  • Upload your documents (academic research, customer research, voice-of-customer research, design system, prototype screens, and MVP functional specification) into one place.
  • For me, this was Cursor, but you could use VSCode, Antigravity or any other environment suitable for building code.
  • Then I ran SpecKit on the product manager output and the features we defined earlier to get a granular, definitive spec to build.
  • Then I finalised the screens, design system, and comprehensive functional spec.

Tools: Cursor (or any IDE) + your LLM of choice (a frontier thinking model) + SpecKit
Output: Clear development direction. ​001 MVP Specification example​.

Step 11 (90–120 mins): Build the functional prototype

Finally, move from thinking to making. As simple as writing "Now build this MVP using .codex/prompts/speckit.implement.md"

How:

  • LLM builds the working prototype
  • LLM tests key flows for you
  • LLM validates any assumptions quickly

Tools: Cursor, Spec Kit, long-running LLM (Codex, Gemini, Sonnet 4.5 or similar)
Output: A testable prototype you can use, test, and learn from.

The Impact

The transformation the prototype promises is clear. In the first minute, a child hears the microphone click, feels the instant release of transcription, and sees a bubble appear.

In the next five minutes, the child picks a bubble, taps a prompt, and receives a concise suggestion that feels like a friendly nudge. By the end of a ten‑minute session, the child has a fledgling story, can read it aloud, and sees a badge of completion.

That simple, joyful loop mirrors the confidence arc that research identifies as essential: awareness, action, reflection, mastery. The child moves from “I can’t write” to “I can write a story,” and that shift is the promise of the prototype.

So, what’s next?

If you’d like to try the demo, let me know. I’d love to hear what you think of it.

If you'd like to share your child’s first full paragraph, that would be amazing! I'd love to highlight a few stories each week. Finally, if you’re excited to see the full product, sign up for the beta and be among the first to shape the next generation of writing tools for dyslexic learners.

Why not give this process a try yourself? In just two hours, we built a prototype that turns a whisper into a story. Think about what you could build, or what problem you could solve, for someone you love. If I can do it, you can too! Have a great weekend! Much love to you all, C.
