Week of February 16th
I've always believed that everyone can be a creator.
Not a "creative" in the narrow, artistic sense - but a creator. Someone who looks at a problem, feels the friction of it, and decides to make something to fix it. That belief has shaped how I teach, how I design professional development, and honestly, how I move through the world. But for most of my career, there was a gap between having an idea and building a thing. You had to know how to code. You had to know how to design. You had to know the right tools, the right syntax, the right stack.
That gap isn't gone, but it's gotten a lot smaller. And over these past few months, and specifically this week, I have felt that more than ever. This post is my attempt to document the projects from this past week.
Please note - these are MVPs (Minimum Viable Products), thought experiments, and half-finished ideas.
I wanted to give myself a chance to reimagine what that tool could be if it were designed around the things I care most about: documenting the process, not just showcasing the product. I think process over product will be the most important part of the future of education.
Process+ is a community space built around the idea that the journey of learning is as important as the destination. Students record and upload videos — but what sets Process+ apart is the scaffolding built around those videos.
Every video can have structured reflection prompts attached: What were you thinking when you started? What changed? What would you do differently? AI generates conversation threads from video content, giving students something to respond to beyond a blank comment box. Students can curate highlight moments from their own and their peers' videos, creating a record of growth over time. Eventually, I'd like an AI feedback coach to offer formative responses: not grades, but observations and questions that push thinking forward.
The name comes from what I most want this tool to do: keep the focus on Process, and the "+" signals that there's always more — more reflection, more community, more growth.
You can explore the current build here: Visit the Process+ MVP
I want to connect Process+ to my existing peer reflection work — where prompts help students articulate their thinking and document their creative process over time. The community feed could evolve into something more curated: teacher-selected "moments" that celebrate growth, not just polish. I'm also thinking about a "timeline view" where you can watch a student's understanding evolve across multiple videos over a unit or semester. That would be powerful for portfolio-based assessment.
There are some apps that are somewhat similar (or used to be similar). The AI features are the clearest path to making it genuinely new — but I need to be careful not to bolt AI on just to say it's there. The reflection prompts need to feel meaningful and the AI coach needs to feel supportive, not evaluative. Getting that tone right takes time and iteration.
What would make an AI feedback coach feel genuinely useful rather than just automated?
Is a "learning timeline" — seeing student growth across multiple videos — something teachers would actually use for assessment?
What community features would make students want to engage with each other's videos, not just post their own?
Every educator knows the scene. A student is supposed to be working on an assignment. You look over their shoulder and they've got six tabs open — none of them the assignment. Tab management in schools is mostly an admin problem: IT blocks sites at the network level, or teachers use tools like GoGuardian to monitor and restrict. But those tools work against students. What if a tool worked with them? I wanted to build something student-facing, task-driven, and a little silly. That's where the cat puns came in.
Tabby is a focus companion for students. The idea is simple: you set up a task, define which websites are allowed for that task, and set a timer. If you try to wander outside that, Tabby notices — and guilt-trips you back to work. Stay on task, and you unlock rewards. Lose focus and face a cat-astrophe.
The description I wrote for the MVP captures the vibe pretty well:
Stop kitten around! Tabby is the purr-fect focus companion that keeps your eyes on the prize. If you switch tabs, Tabby knows you're pro-cat-stinating and will guilt-trip you back to work. Stay focused to unlock rewards, or face a total cat-astrophe. It's the ultimate meow-tivation tool for when you really knead to get things done!
Right now the MVP lives as a Gemini-built web prototype; it's not yet a Chrome Extension, but it shows the concept working.
You can see it here: Try the Tabby MVP
The real vision for Tabby is a full Chrome Extension with true whitelist-based tab management. A teacher (or the student themselves, depending on the context) creates a "task" — let's say a science research assignment — and defines exactly which sites are allowed: Google Classroom, a specific encyclopedia, maybe one approved video resource. The extension enforces those limits with a timer, and the gamification layer (rewards, streaks, consequence animations) keeps it from feeling punitive.
A fully functional Chrome Extension with real whitelist enforcement will be complex. The MVP proves the concept, but building the extension version requires a deeper dive into browser extension architecture than I've taken on yet. The gamification layer is also something I want to get right: reward systems are easy to design poorly and hard to design well, especially for different age groups.
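To make the whitelist idea concrete, here's a minimal sketch of the core check the extension would need. Everything here is hypothetical (the `FocusTask` shape and function names are mine, not from any build); in a real Chrome Extension this logic would live in a background service worker listening to `chrome.tabs.onUpdated`, and the "Tabby notices" moment fires whenever the check returns false.

```typescript
// Sketch of Tabby's core whitelist check. All names are illustrative.
interface FocusTask {
  name: string;
  allowedHosts: string[]; // e.g. ["classroom.google.com", "en.wikipedia.org"]
}

// A URL is on-task when its hostname matches an allowed host exactly,
// or is a subdomain of one (e.g. "simple.wikipedia.org" under "wikipedia.org").
function isUrlAllowed(url: string, task: FocusTask): boolean {
  let host: string;
  try {
    host = new URL(url).hostname;
  } catch {
    return false; // unparseable URL: treat as off-task
  }
  return task.allowedHosts.some(
    (allowed) => host === allowed || host.endsWith("." + allowed)
  );
}

const scienceTask: FocusTask = {
  name: "Science research",
  allowedHosts: ["classroom.google.com", "en.wikipedia.org"],
};

console.log(isUrlAllowed("https://en.wikipedia.org/wiki/Photosynthesis", scienceTask)); // true
console.log(isUrlAllowed("https://www.youtube.com/watch?v=dQw4w9WgXcQ", scienceTask)); // false
```

The interesting design question isn't this check — it's what happens on a false result: a hard block feels like GoGuardian, while a playful nudge keeps the tool on the student's side.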
Would you want the whitelist set by the teacher, the student, or both?
What would make this feel rewarding rather than restrictive for middle/high schoolers?
Is there a version of this that works district-wide as a Chrome Extension policy, or does it need to be individually installed?
Differentiated instruction is one of those ideas that every educator agrees with in theory and most find exhausting in practice. Not because teachers don't want to differentiate — they do — but because making multiple versions of the same assignment for thirty students is a massive time lift. What if AI could do the heavy lifting while the teacher focused on what matters: knowing their students?
Mosaic starts with a class profile. You tell it about your students — not just their names, but their interests, passions, and learning needs. Johnny loves plants. Bob is obsessed with music. Tom loves Minecraft. (Keep in mind: we can eventually adjust the naming convention to use a number system instead of anything that includes PII.)
Then you write your assignment — or upload one you already have — and Mosaic generates individualized versions for each student. The core concept stays exactly the same (you're still teaching the skill or standard), but the context shifts to match each student's world. Johnny's version of the fractions assignment involves dividing a garden. Bob's involves dividing a musical phrase. Tom's involves blocks in Minecraft.
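Under the hood, this workflow is essentially one LLM call per student, each wrapping the same assignment in a different prompt. Here's a hypothetical sketch of that assembly step (the interfaces and wording are my assumptions, not Mosaic's actual implementation):

```typescript
// Illustrative sketch: one differentiation prompt per student.
// The skill stays fixed; only the context shifts per interest.
interface StudentProfile {
  id: string;       // number-based ID rather than a real name, to avoid PII
  interest: string; // e.g. "gardening", "music", "Minecraft"
}

function buildDifferentiationPrompt(assignment: string, student: StudentProfile): string {
  return [
    "Rewrite the following assignment so the underlying skill and standard",
    "are unchanged, but every example and scenario is themed around the",
    `student's interest: ${student.interest}.`,
    "Keep the reading level and length the same.",
    "",
    `ASSIGNMENT:\n${assignment}`,
  ].join("\n");
}

const roster: StudentProfile[] = [
  { id: "S01", interest: "gardening" },
  { id: "S02", interest: "music" },
  { id: "S03", interest: "Minecraft" },
];

const fractions = "Divide a whole into equal parts and express each part as a fraction.";
const prompts = roster.map((s) => buildDifferentiationPrompt(fractions, s));
console.log(prompts.length); // 3 — one prompt per student
```

Keeping the skill statement verbatim in every prompt is the important design choice: it's what lets the teacher trust that thirty different contexts are still assessing one standard.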
The output can go directly to students through Google Classroom as individual Google Docs (not sure how yet), through a Google Form where students select their name and are routed to their personalized version, or as a printable PDF packet.
You can see the light MVP here: Try the Mosaic MVP
The full version of Mosaic would include real account creation (likely Firebase on the backend) so teachers can save and update their class profiles over time rather than re-entering them. I'd also want to build in support for IEP/504 accommodations and language modifications. Currently, it handles interest-based differentiation, but learning-need differentiation is where the real impact lives for many classrooms.
The Google Classroom integration would need to be built properly, not just linked. And ideally, the form routing would be seamless — students never know they're getting a "different" version, they just get their assignment.
This is the one that excites me most and worries me most. The impact potential is real, and the workflow is something teachers actually need. The worry is cost. Mosaic is deeply API-dependent. Generating individualized assignments for a class of 30 means a lot of LLM calls, and at scale, that adds up fast. I haven't figured out the sustainable model yet — whether that's a freemium tier with limited class sizes, a per-use cost, or some kind of institutional licensing.
How would you handle API costs in a tool like this?
Would teachers trust AI-generated differentiated assignments enough to hand them directly to students, or would there need to be a review/editing step built in? (I always think the human needs to be in the loop as much as possible)
Is interest-based differentiation enough, or does this only become truly valuable when it includes learning-need modifications?
I found this fascinating. Not because it's a clever trick, but because it's a genuinely different way of assessing students. It's not testing whether you can write — it's testing whether you can think. Whether you can evaluate. Whether you have the judgment to know when an AI version of you is missing something essential.
That felt like the future of education to me. And I wanted to build a tool that made this kind of thinking accessible beyond one elite honors program.
LoopAI gives students an essay prompt and asks AI to write a first draft — not to cheat, but to create a starting point for critical analysis. The student then has to engage with that draft: annotate it, challenge it, identify where the AI got things right and where it fundamentally missed the mark. Then they write their own version, informed by that analysis.
The "Loop" in the name is intentional: the human is always in the loop. AI starts the process, the human evaluates and redirects, and the final product is genuinely their own — not despite the AI involvement, but because of how they engaged with it.
There are two MVP versions:
K-12 version: Try LoopAI for K-12 — more scaffolded, with guided annotation prompts and structured reflection questions
Higher Ed version: Try LoopAI for Higher Ed — more open-ended, reflecting the kind of intellectual independence expected at the college level
The K-12 version needs more scaffolding around the evaluation step — that's the hardest part for younger students. I'm thinking about sentence starters for critique ("The AI got this wrong because...", "Something the AI couldn't know about me is...") and visual annotation tools that let students mark up the AI draft directly. I also want to build in a feature that highlights common AI writing patterns — hedging language, vague generalizations, generic transitions — so students can learn to spot those moves and push back on them.
At the higher ed level, I'd love to see this used as a research and argumentation tool, not just for personal essays. Imagine using LoopAI to evaluate an AI-generated analysis of a primary source. That's a completely different kind of critical thinking exercise.
This is the app that still needs the most conceptual work. The mechanics are relatively straightforward — it's the pedagogy that's hard. How do you make the evaluation step genuinely rigorous rather than superficial? How do you prevent students from just agreeing with everything the AI wrote? How do you assess this kind of thinking? Those are questions I haven't fully answered yet, and I'd love to think through them with educators who've worked on AI literacy.
Have you used a process like this with students? What worked and what didn't?
How would you assess the student's reflective essay in a way that feels fair and meaningful?
Is the K-12/Higher Ed split the right framing, or would it make more sense to differentiate by subject area instead?
Here's what I keep coming back to as I look at these five projects: none of them would exist without AI assistance, and none of them are AI. They're ideas that came from years of watching students struggle to focus, teachers burn out trying to differentiate, and platforms that get acquired and disappear.
The ideas were always there. The frustrations were always there. What changed is that the gap between having an idea and building a thing got small enough for me to jump across.
I've always believed that everyone can be a creator. I believe that more now than ever — not because AI is magic, but because it's finally making good on a promise that creativity has always made: that if you have something to say, you should be able to say it. And if you have something to build, you should be able to build it.
Now it's your turn:
Find the thing that doesn't work and tell me about it. I built these all in a week, and I'd love your help figuring out which ones deserve the next month — or the next year. I'd love to hear from you wherever you are in the EdTech world. If any of these ideas spark something for you (as a teacher, a developer, or just someone who cares about what education could look like), let's talk.
Everyone can build. Let's build together.