Rebuild Rebecca Yu’s Dining App: A Hands-On Micro-App Project Using Claude/ChatGPT and Node.js
A classroom-ready plan to rebuild Rebecca Yu's dining micro app with Claude/ChatGPT, Node.js, PWA and Raspberry Pi deployment.
Stop drowning in theory—build a dining micro app in a week with Claude/ChatGPT + Node.js
Students and teachers: if your course feels like a parade of disconnected exercises, here’s a cohesive, remixable classroom project that turns concepts into a portfolio-ready micro app. Inspired by Rebecca Yu’s Where2Eat, this plan walks a class through the full stack: architecture, prompt engineering for Claude and ChatGPT, UI patterns, and two deployment targets—PWA hosting and a Raspberry Pi classroom server. Use this to run a one-week sprint, a semester-long lab, or a portfolio assignment.
Why this project matters in 2026
Micro apps—small, focused web tools built by non-specialists—are now mainstream. In late 2025 and early 2026 we’ve seen three trends that make this project especially timely:
- On-device and local LLMs: Browsers and mobile apps increasingly support local inference (see Puma Browser and other Local AI efforts). That means students can prototype AI features without expensive cloud costs.
- Vibe-coding and rapid AI-assisted iteration: Tools like Claude and ChatGPT accelerate idea-to-prototype cycles. Prompt engineering is now a core development skill.
- Edge and Pi-hosting: Low-cost Raspberry Pi fleets and cheap edge VMs make it practical to run micro-apps for small groups, classrooms, and demos.
Learning outcomes
- Design a small system architecture for an AI-assisted web micro app
- Write Node.js server code that integrates with Claude/ChatGPT APIs
- Create a responsive UI and implement PWA features (manifest, service worker)
- Deploy to both a static PWA host and a Raspberry Pi with a reverse proxy
- Practice prompt engineering and evaluation for restaurant recommendations
Project overview: the dining micro app
The goal: recreate Rebecca Yu’s dining micro app concept as a classroom-ready repo. The app helps a small group choose a place to eat by collecting preferences, group vibes, and constraints, then recommending options.
Core features (MVP)
- User group creation (name + optional photo)
- Short preference capture (cuisines, price, vibe words)
- AI-generated ranked recommendations and short explanations
- Simple feedback loop: thumbs-up/down to refine suggestions
- Export share link or local QR code for classmates
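The MVP features above imply a small per-group data shape. A sketch of one group record (field names mirror the prompt template used later; the exact schema is up to each class):

```json
{
  "group": "CS201 lunch crew",
  "preferences": {
    "cuisines": ["mexican", "thai"],
    "price_level": "$",
    "time_of_day": "lunch",
    "vibe_words": ["casual", "quick"]
  },
  "location": { "city": "Austin" },
  "feedback": []
}
```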
Architecture: simple, modular, and remixable
Keep the architecture minimal so students can remix pieces independently. The recommended stack:
- Frontend: Static HTML/CSS/Vanilla JS or small framework (Svelte/React) if students are comfortable
- Backend: Node.js + Express for prompt orchestration and persistence
- Database: SQLite for local simplicity or lowdb (JSON) for ultra-light classrooms
- AI: Claude and ChatGPT via API (or local LLMs for on-device experiments)
- Deployment: PWA hosted on Netlify/Vercel for demos; optional Raspberry Pi (Raspbian/Ubuntu Server) with nginx + pm2 for in-class offline hosting
High-level diagram
Client (browser PWA) ⇄ Node.js API (Express) ⇄ AI API (Claude/ChatGPT) + DB. For Pi, add nginx reverse proxy and local DNS/QR share.
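For the ultra-light classroom option, persistence can be a single JSON file. A dependency-free sketch in the spirit of the lowdb choice above (the helper names like saveGroup are illustrative, not from the original repo; a real class build would use lowdb or SQLite as listed):

```javascript
// store.js — dependency-free JSON persistence sketch (stand-in for lowdb)
import { readFileSync, writeFileSync, existsSync } from 'node:fs'
import { join } from 'node:path'
import { tmpdir } from 'node:os'

// using the temp dir for the demo; swap for a repo-local path in class
const DB_FILE = join(tmpdir(), 'dining-db.json')

function loadDb () {
  if (!existsSync(DB_FILE)) return { groups: [], feedback: [] }
  return JSON.parse(readFileSync(DB_FILE, 'utf8'))
}

function saveDb (db) {
  writeFileSync(DB_FILE, JSON.stringify(db, null, 2))
}

// illustrative helper: append a group record and return the new count
function saveGroup (group) {
  const db = loadDb()
  db.groups.push({ ...group, createdAt: Date.now() })
  saveDb(db)
  return db.groups.length
}
```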
Prompt engineering: the classroom gold
Prompt engineering turns vague LLM outputs into consistent, useful recommendations. Teach students to craft prompts with role, constraints, examples, and an output format. Below is a production-ready prompt template your class can use and iterate on.
Prompt template (explainable, structured)
System: You are a concise restaurant recommender. Respond in JSON with keys: recommendations (list), rationale (short), and confidence (0-1).
User: Given the group’s preferences and context, suggest up to 5 restaurants. Preferences: {cuisines}, price_level: {price}, time_of_day: {time}, vibe_words: {vibes}, location: {lat,lng or city}. Avoid chains; prefer local spots. Keep each recommendation to: name, short_description, estimated_price, travel_estimate, reason.
Example output:
{
  "recommendations": [
    {
      "name": "Taco Bend",
      "short_description": "Authentic tacos, lively vibe",
      "estimated_price": "$",
      "travel_estimate": "10 min drive",
      "reason": "Matches 'casual' + taco preference"
    }
  ],
  "rationale": "Picked local, casual options aligned with budget",
  "confidence": 0.78
}
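The template above can be filled by a small helper. A sketch of the buildPrompt function that the server code references later (the original implementation is not shown, so this is an assumption; in the real module it would be exported):

```javascript
// buildPrompt.js — fill the recommender template with group data (sketch)
function buildPrompt (preferences, context) {
  const { cuisines = [], price = '$$', vibes = [] } = preferences
  const { time = 'dinner', location = 'unknown' } = context
  return [
    'You are a concise restaurant recommender. Respond in JSON with keys:',
    'recommendations (list), rationale (short), and confidence (0-1).',
    `Preferences: ${cuisines.join(', ')}, price_level: ${price},`,
    `time_of_day: ${time}, vibe_words: ${vibes.join(', ')}, location: ${location}.`,
    'Avoid chains; prefer local spots. Suggest up to 5 restaurants.',
    'Each recommendation: name, short_description, estimated_price, travel_estimate, reason.'
  ].join('\n')
}
```

Keeping the template in code (rather than scattered string literals) lets students version and diff their prompts like any other source file.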
Teaching tips:
- Start with structured outputs (JSON) to simplify client parsing.
- Include a few in-prompt examples (few-shot) for higher quality.
- Use system messages to set style and safety (e.g., no disallowed content).
- For Claude vs ChatGPT: test both on the same prompts; the models differ in reasoning style, latency, and cost, so have students compare outputs side by side and document which fits each use case.
Node.js wiring: minimal server to orchestrate LLM calls
Here’s a compact server skeleton that students can clone. It uses Express and node-fetch (or native fetch in Node 20+). Replace API calls with your Claude/ChatGPT client.
// server/index.js (simplified)
import express from 'express'
import { openAIChat } from './llmClient.js' // wrapper for Claude/ChatGPT

const app = express()
app.use(express.json()) // built into Express 4.16+, so no body-parser needed

app.post('/api/recommend', async (req, res) => {
  const { preferences, context } = req.body
  const prompt = buildPrompt(preferences, context) // from template
  try {
    const llmResp = await openAIChat(prompt)
    const data = JSON.parse(llmResp) // because we requested JSON
    res.json({ ok: true, data })
  } catch (err) {
    console.error(err)
    res.status(500).json({ ok: false, error: 'LLM failed' })
  }
})

app.listen(3000, () => console.log('Server running on http://localhost:3000'))
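The openAIChat wrapper imported above is not shown in the skeleton. A sketch against OpenAI's chat completions endpoint using native fetch (the model name is a placeholder; for Claude, Anthropic's /v1/messages API takes a similar messages-style body):

```javascript
// llmClient.js — minimal LLM wrapper sketch (endpoint/body follow OpenAI's public API)
function buildChatBody (prompt, model = 'gpt-4o-mini') {
  return {
    model,
    messages: [{ role: 'user', content: prompt }],
    // json_object mode requires the prompt itself to mention JSON (ours does)
    response_format: { type: 'json_object' }
  }
}

async function openAIChat (prompt) {
  const res = await fetch('https://api.openai.com/v1/chat/completions', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`
    },
    body: JSON.stringify(buildChatBody(prompt))
  })
  if (!res.ok) throw new Error(`LLM call failed: ${res.status}`)
  const json = await res.json()
  return json.choices[0].message.content
}
```

Keeping the request body in its own function makes it easy to unit-test prompt wiring without spending tokens.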
Additional server responsibilities to teach:
- Rate limiting and basic caching (avoid repeated API calls during iteration)
- Storing feedback (thumbs up/down) to refine prompts
- API key handling via environment variables and .env
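The caching responsibility above can be as simple as an in-memory map keyed by the prompt string. A minimal TTL cache sketch (names are illustrative):

```javascript
// llmCache.js — tiny TTL cache so repeated identical prompts skip the API (sketch)
const cache = new Map()
const TTL_MS = 10 * 60 * 1000 // keep entries for 10 minutes

function cacheGet (prompt) {
  const hit = cache.get(prompt)
  if (!hit) return null
  if (Date.now() - hit.at > TTL_MS) {
    cache.delete(prompt) // expired
    return null
  }
  return hit.value
}

function cacheSet (prompt, value) {
  cache.set(prompt, { value, at: Date.now() })
}

// usage inside the /api/recommend handler:
//   const cached = cacheGet(prompt)
//   const llmResp = cached ?? await openAIChat(prompt)
//   if (!cached) cacheSet(prompt, llmResp)
```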
Frontend: simple, remixable UI
Keep the UI small so students can customize it. Key screens:
- Landing / Create group
- Preference form (3-6 quick fields + vibe words)
- Results list with rationale and feedback buttons
- Share view with QR code and copy link
Design tips for classroom success
- Use cards for recommendations and include a one-line reason to teach explainability
- Show confidence score visually (badge or subtle color)
- Keep forms short—decision fatigue is the problem you are solving
- Include a “why this” modal that displays the raw LLM rationale for teaching evaluation
PWA features: offline-first demos
Turn the front-end into a Progressive Web App so classmates can install it on phones and test without constant hosting. Key steps:
- Add a manifest.json (name, icons, theme color)
- Register a service worker to cache shell assets and recent suggestions
- Provide a graceful offline UI (saved recommendations + local SQLite/IndexedDB cache)
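The first step above is a single JSON file served from the app root. A minimal manifest.json sketch (app name, colors, and icon paths are placeholders):

```json
{
  "name": "Where2Eat Classroom",
  "short_name": "Where2Eat",
  "start_url": "/",
  "display": "standalone",
  "theme_color": "#e85d3d",
  "background_color": "#ffffff",
  "icons": [
    { "src": "/icons/icon-192.png", "sizes": "192x192", "type": "image/png" },
    { "src": "/icons/icon-512.png", "sizes": "512x512", "type": "image/png" }
  ]
}
```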
// register service worker (client)
if ('serviceWorker' in navigator) {
  navigator.serviceWorker.register('/sw.js')
}
Teaching opportunities: explain cache-first vs network-first strategies and have students measure load times with/without service worker.
Classroom deployment: Netlify/Vercel + Raspberry Pi option
Offer two deployment paths: cloud PWA for public demos and a Raspberry Pi for offline classroom sessions.
Quick cloud PWA deploy
- Push repo to GitHub
- Connect to Vercel/Netlify (auto-build and simple environment variable setup)
- Add API environment variables securely (LLM keys)
- Publish demo URL and share QR
Raspberry Pi classroom host (offline demo)
Use a Raspberry Pi 4/5 with 4GB+ RAM for small groups. Steps:
- Install Raspberry Pi OS or Ubuntu Server (2025-2026 images work well)
- Install Node.js 20+ and pm2 to run the Node app as a service
- Use nginx as a reverse proxy and optionally a local DNS (dnsmasq) for easy discovery
- For local LLM experiments, run small quantized models on-device if the Pi has the capacity, or point the app at a nearby laptop running an LLM container
# sample pi commands
sudo apt update && sudo apt install -y nginx
# clone repo, install deps, build the front-end
npm install
npm run build
# use pm2 to keep the server running
sudo npm install -g pm2
pm2 start server/index.js --name dining-app
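The nginx reverse-proxy step boils down to one server block. A sketch for /etc/nginx/sites-available/dining-app (assumes the Node app listens on port 3000, as in the server skeleton):

```nginx
server {
    listen 80;
    server_name _;  # answer on the Pi's IP; pair with dnsmasq for a friendly hostname

    location / {
        proxy_pass http://127.0.0.1:3000;  # the Node app started by pm2
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Enable it with a symlink into sites-enabled and reload nginx.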
Classroom tip: set the Pi to host a captive portal page or print a QR that resolves to the Pi IP so students can join a Wi‑Fi + demo without Internet access.
Evaluation and grading rubric
Use a lightweight rubric that balances engineering with design and prompt craft:
- Architecture and code quality (30%)
- Prompt engineering and reproducible LLM outputs (25%)
- UI/UX and PWA features (20%)
- Deployment and shareability (15%)
- Creativity and documentation (10%)
Advanced extensions (for extra credit)
- Add collaborative group voting in real time (WebSockets) so classmates can watch suggestions evolve.
- Integrate a local knowledge base or vector store (e.g., Pinecone, Milvus) to store historical preferences and make personalized recommendations.
- Implement local LLM inference for privacy experiments—use lightweight models with quantization and test in-browser LLMs or on a beefy Pi/edge node.
- Build an analytics dashboard for instructors to see usage and prompt performance.
Real-world case study & experience
Rebecca Yu’s week-long build is a great example of how focused scope plus AI assistance yields a usable micro app quickly. In class, replicate that cadence: 1 day for ideation and prompts, 2 days for core server-client wiring, 1 day for refinement and testing, and 1 day for deployment & demos.
"Once vibe-coding apps emerged, I started hearing about people with no tech backgrounds successfully building their own apps." — Rebecca Yu (Substack), a model for rapid student prototyping
Practical tips and troubleshooting (common gotchas)
- LLM variability: Always request structured output (JSON) and validate on the server to avoid frontend crashes.
- API costs: Use short prompts and caching during development to reduce calls. Teach students to mock LLM responses for UI work.
- Local hosting: Pi network issues are common—reserve static IPs and test QR links on multiple devices.
- Privacy: Remind students to avoid sending sensitive PII to third-party LLMs; teach redaction techniques when needed.
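The first gotcha above deserves code: even when you request JSON, models sometimes wrap it in markdown fences or add stray prose, so parse defensively on the server. A sketch whose expected shape matches the prompt template's keys (helper name is illustrative):

```javascript
// parseLLM.js — defensively extract and validate the recommender JSON (sketch)
function parseRecommendations (raw) {
  // strip markdown code fences the model may add despite instructions
  const cleaned = raw.replace(/`{3}(?:json)?/g, '').trim()
  let data
  try {
    data = JSON.parse(cleaned)
  } catch {
    return { ok: false, error: 'not valid JSON' }
  }
  if (!Array.isArray(data.recommendations)) {
    return { ok: false, error: 'missing recommendations list' }
  }
  if (typeof data.confidence !== 'number' || data.confidence < 0 || data.confidence > 1) {
    data.confidence = null // tolerate a missing score rather than crash the UI
  }
  return { ok: true, data }
}
```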
Actionable starter checklist (class-ready)
- Clone the starter repo and run npm install
- Add your LLM API keys to a .env (instructors provide sandbox keys or use free tiers)
- Implement the prompt template and test the /api/recommend endpoint with Postman or curl
- Build the frontend preference form and fetch results
- Register a service worker and test PWA install on phones
- Deploy to Vercel/Netlify or run on a Raspberry Pi for offline demos
2026 trends to fold into your lessons
- Encourage experiments with local LLMs and browser-based inference—students should compare latency and privacy trade-offs.
- Teach cost-aware engineering: token budgeting, caching, and prompt length optimizations are real-world skills in 2026.
- Introduce edge deployment patterns: small Kubernetes or Docker Swarm clusters for student teams that want to scale beyond a Pi.
Actionable takeaways
- Start small: scope an MVP with 3-5 features and iterate with AI-assisted prototyping.
- Teach prompts as code: store prompt templates in the repo, version them, and evaluate outputs.
- Use PWA + Pi: offer both cloud deploy and an offline Pi option so students learn deployment trade-offs.
- Measure and iterate: collect simple feedback signals and show how they change LLM outputs.
Next steps & classroom resources
Include a starter GitHub repo with:
- Server skeleton and prompt templates
- Minimal frontend PWA scaffolding
- Deployment scripts for Vercel/Netlify and Pi (bash playbooks)
- Rubric and lab schedule
Call to action
Ready to run this in your class? Clone our starter repo, adapt the rubric, and run a week-long sprint. Invite students to remix the prompts and deploy to either a public PWA or a Raspberry Pi server so every group can demo a live micro app. If you want a ready-made lesson pack (slides, starter code, grading rubric), click to download the classroom kit and get step-by-step instructor notes.