Interspace / AI-Human Collaborative Platform + ITP Thesis
AI as a creative partner, not a vending machine.
I designed and built Interspace from scratch—a collaborative drawing platform where AI doesn't finish your ideas, it keeps them unfinished. Not just the UX, but the full-stack: real-time canvas, dual AI pipelines, edge functions. Design thinking meets vibe coding.
Role
Product Designer & Creative Technologist
Duration
Sep 2025 – May 2026 | NYU ITP Thesis
Skills
Concept & Research
Dual-System Design
Full-Stack Development
AI Pipeline Architecture

Overview
I built the AI creative tool I wished existed.
Every AI tool I tried worked the same way: prompt, wait, output, repeat. Efficient—but creativity doesn't work like that. Real breakthroughs come from dialogue, not transactions.
I designed and built Interspace from scratch—a platform where you draw on one screen while AI responds on another. No prompts. No waiting. Just real-time co-creation. The full stack: canvas, dual AI pipelines, edge functions. All me.
The problem
Generative AI tools produce finished images on demand.
That's exactly what ends exploration. Users get a polished result, stop asking questions, and iteration dies at the first output.
Users don't need AI to finish their ideas. They need AI to expand what they haven't imagined yet.
The solution
Two voices, one dialogue: Echo + Imagine.
System | Role | Output
Echo | Reflection | Mirrors your drawing and intent in real time
Imagine | Provocation | Reimagines your drawing through four selectable modes

Four provocation modes
Background
My ceramics background shaped this thinking: the most meaningful creative moments emerge from unexpected responses—when clay cracks in the kiln, when glaze runs in unplanned directions. These "accidents" become invitations to reinterpret. Could AI serve a similar role in digital creation?
Research & Process
The efficiency of prompt-based tools is also their ceiling.
A finished image on demand ends exploration at the first output—so I went looking for an interaction that keeps the loop open.
My hypothesis: drawings grow through dialogue.
When a friend looks at your sketch and says "I see this here," that outside perspective is often where the next idea comes from. I wanted AI to play that role.
V1 reflected the drawing in two parallel formats — ASCII art and keyword text — letting AI speak in its native perceptual language. This is how the machine sees your drawing.

V1
People know what they want to draw. What they don't know is what else they could draw.
I tested V1 at the Korean National Science Museum: 300–400 visitors, 10 interviews. Users arrived with strong mental images. Reflection, in any format, just mirrored what they already had. No counterweight. No surprise.


From V1 to V3
V1
What I Tried
Two parallel reflection formats: ASCII + keywords
What I Learned
Reflection alone can only mirror—no matter how many formats

V2
Introduced two voices in a 3-panel structure: Draw → Echo → Imagine. Equal-width panels, minimal interface.
The structure was right—two voices worked. But equal-width panels made drawing feel cramped, and Imagine only reimagined once, without giving users any say in how.

V3
Drawing as the center of gravity. Expanded canvas, Echo + Imagine compressed to the side. Echo now reads user intent (pre-selected keywords + drawing tempo). Imagine branches into four provocation modes: What if / The opposite / Stretch it / Question it.
Dialogue isn't just two voices—it's two voices the user can steer, with enough room to actually create.

Design Decisions
Reflection, provocation, and the space to create.
1. Drawing as the center of gravity. V2's three equal panels made drawing compete with AI. V3 gave the canvas the largest space—Echo and Imagine moved to the side. One act, accompanied.
2. Echo reads you, not just your drawing. Mirroring strokes wasn't enough. Echo now combines your pre-selected keyword (Dog) with your drawing tempo (Slow)—interpreting intent and behavior, not pixels.
3. You steer the provocation. V2's Imagine offered one reimagining, take it or leave it. V3 gives users four modes to direct the dialogue: What if / The opposite / Stretch it / Question it.
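The steering logic can be sketched as a simple mode-to-prompt mapping—the four mode names come from the design, but the prompt wording here is illustrative, not the shipped prompts:

```typescript
// Sketch of how V3's provocation modes could steer Imagine: the same
// drawing description gets a different framing per mode. Prompt text
// is an assumption for illustration only.
type ImagineMode = "what-if" | "the-opposite" | "stretch-it" | "question-it";

function buildPrompt(mode: ImagineMode, description: string): string {
  switch (mode) {
    case "what-if":
      return `What if ${description} existed in a different world? Reimagine it.`;
    case "the-opposite":
      return `Invert the core quality of ${description} and draw what remains.`;
    case "stretch-it":
      return `Exaggerate one element of ${description} far past its natural limit.`;
    case "question-it":
      return `Respond to ${description} with a visual question, not an answer.`;
  }
}
```

The point of the structure: the user picks the mode, so the provocation is directed rather than random.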
This is what the backend had to make possible →
Technical Architecture
Stack
Frontend: Vite + TypeScript + React (real-time drawing canvas)
Backend: Supabase Edge Functions (streaming, state sync, schema)
AI Pipelines: Groq (Llama 4 Scout) for Echo · Replicate for Imagine
Build tools: Claude Code, Cursor

Key architecture decisions
1. Dual pipelines, not one model with two prompts. Echo and Imagine had different latency profiles—Echo needed to feel instant, Imagine could take longer. Separating them let me optimize each for its role.
2. Edge functions over traditional backend. Streaming responses had to feel like drawing alongside someone, not waiting for a server. Edge functions kept the loop tight enough to preserve the simultaneous experience the design depended on.
3. Mid-project stack migration. Originally built on OpenAI + Replicate. Switched Echo to Groq + Llama 4 Scout mid-project to cut API costs dramatically without losing quality. The call meant weighing a business constraint (hosting the project could sustain past thesis) against an interaction-quality trade-off.
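Decisions 1 and 3 meet in the routing layer. A minimal sketch of the split—providers match the case study, but the config shape, field names, and timeout numbers are illustrative assumptions:

```typescript
// Sketch of the dual-pipeline split: Echo and Imagine get separate
// configs tuned to their latency profiles, rather than one model
// with two prompts. Values are illustrative, not production settings.
interface PipelineConfig {
  provider: "groq" | "replicate";
  streaming: boolean; // Echo streams so it feels instant
  timeoutMs: number;  // Imagine is allowed to take longer
}

const PIPELINES: Record<"echo" | "imagine", PipelineConfig> = {
  echo:    { provider: "groq",      streaming: true,  timeoutMs: 3_000 },
  imagine: { provider: "replicate", streaming: false, timeoutMs: 30_000 },
};

function selectPipeline(kind: "echo" | "imagine"): PipelineConfig {
  return PIPELINES[kind];
}
```

Keeping the two configs independent is what made the mid-project Groq migration a one-pipeline change instead of a rewrite.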
Building as a designer
What it cost, what it taught.
Building this myself wasn't smooth. That's the point.
Debugging across three systems. A failing response could be a frontend state bug, an edge function timeout, or an API rate limit. Learning to read logs across Supabase, Replicate, and the browser console became its own skill.
Documentation fluency. Each API (OpenAI, Replicate, Groq, Supabase) has its own docs patterns. Time-to-first-success varied wildly. This is the exact friction I'd later recognize as a DevEx problem—and it shaped how I think about designing for developers now.
Typing the unknown. TypeScript surfaced a class of bugs I would have shipped without it—shape mismatches between what Replicate returned and what my UI expected.
Knowing when to migrate vs. patch. The Groq migration wasn't planned—it was forced by cost. Making that call with no engineer safety net was the most "product lead" moment of the project.
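The "typing the unknown" lesson, as a minimal sketch: validate an API response at the boundary instead of trusting its shape. The field names here are hypothetical for illustration—not Replicate's actual schema:

```typescript
// What the UI expects from an image-generation response.
// (Hypothetical shape, for illustration only.)
interface ImageResult {
  url: string;
  width: number;
  height: number;
}

// Type predicate: narrows `unknown` to ImageResult only if every
// field actually has the expected runtime type.
function isImageResult(value: unknown): value is ImageResult {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return (
    typeof v.url === "string" &&
    typeof v.width === "number" &&
    typeof v.height === "number"
  );
}
```

Without a guard like this, a shape mismatch surfaces as a rendering bug far from its cause; with it, the failure is caught where the data enters.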
What this changed about how I design
I prototype in code as fluently as in Figma now. I understand why engineers push back on certain interactions. And I know the real cost of a "simple" feature—which makes my design decisions sharper, not softer.
Looking back
What I learned from Interspace.
Finished images end exploration.
When AI gives a polished answer, users stop asking questions. Unfinished outputs invite continuation. The opportunity isn't better prompts—it's keeping the loop open.
Reflection and expansion need each other.
The two systems are only meaningful together. Echo without Imagine feels like a mirror. Imagine without Echo feels like a slot machine. Dialogue requires both voices.
Dialogue needs direction, not just voices.
V1 had one voice. V2 had two. V3 works because users can steer the second voice—choosing what kind of provocation they want. Real creative partnership isn't just being heard and challenged. It's being challenged on your own terms.
