At the beginning of the year, I tried using large language models to complete a full-stack project; see "Documenting One Attempt to Use AI to Build an Entire Full-Stack Project".
Back then, my view was that it was still very hard to use: even with existing code in other languages as a reference, reproducing results was difficult, and a human still had to stay heavily involved in writing the code, especially for compilation issues and runtime bugs.
However, this year's rapid progress in agent technology has fueled the rise of Vibe Coding, and together with my growing familiarity with the Next.js framework, the same task now feels much simpler.
As a researcher in video and image compression, I have long been frustrated by digging old experimental results out of past data and writing matplotlib code by hand to plot RD curves every time. I finally gave in and built an application to manage this experimental data and visualize it: RD Curve AI. Everyone is welcome to try it out and suggest improvements.

This Vibe Coding process actually took only one day to complete, and I personally found it quite smooth. The general workflow is as follows:
I first wrote a high-level project prompt and handed it to the AI, then made some minor manual adjustments; my revised version is included below.
Alongside the prompt, I also fed in some previously written code for password encryption and better-auth configuration, because in my experience this part usually takes several iterations before the AI produces code that meets expectations.
After some time, the AI successfully built the entire framework. At this point, I began addressing error messages step-by-step, feeding each one back to the AI for resolution.
Once the project ran successfully, I gradually refined both functionality and frontend design—this phase was the most time-consuming. For instance, I later decided to add a BD-Rate calculation feature. Through repeated iterations, I eventually arrived at the project I wanted.
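For the BD-Rate feature mentioned above, the classic Bjøntegaard approach fits log(bitrate) as a cubic polynomial of quality (e.g. PSNR) for each curve, integrates both fits over the overlapping quality range, and reports the average bitrate difference in percent. The sketch below is one minimal way to implement that; it is not the app's actual code, and production tools often use piecewise-cubic (PCHIP) interpolation instead of a single cubic fit.

```typescript
type RDPoint = { bpp: number; psnr: number };

// Least-squares cubic fit: coefficients [c0..c3] of c0 + c1*x + c2*x^2 + c3*x^3,
// via normal equations solved with Gaussian elimination (partial pivoting).
function cubicFit(xs: number[], ys: number[]): number[] {
  const n = 4;
  const A: number[][] = Array.from({ length: n }, () => new Array(n + 1).fill(0));
  for (let i = 0; i < xs.length; i++) {
    const p = [1, xs[i], xs[i] ** 2, xs[i] ** 3];
    for (let r = 0; r < n; r++) {
      for (let c = 0; c < n; c++) A[r][c] += p[r] * p[c];
      A[r][n] += p[r] * ys[i];
    }
  }
  for (let col = 0; col < n; col++) {
    let piv = col;
    for (let r = col + 1; r < n; r++) if (Math.abs(A[r][col]) > Math.abs(A[piv][col])) piv = r;
    [A[col], A[piv]] = [A[piv], A[col]];
    for (let r = col + 1; r < n; r++) {
      const f = A[r][col] / A[col][col];
      for (let c = col; c <= n; c++) A[r][c] -= f * A[col][c];
    }
  }
  const coef = new Array(n).fill(0);
  for (let r = n - 1; r >= 0; r--) {
    let s = A[r][n];
    for (let c = r + 1; c < n; c++) s -= A[r][c] * coef[c];
    coef[r] = s / A[r][r];
  }
  return coef;
}

// Definite integral of the fitted cubic on [a, b].
function integrate(coef: number[], a: number, b: number): number {
  const F = (x: number) =>
    coef[0] * x + (coef[1] * x ** 2) / 2 + (coef[2] * x ** 3) / 3 + (coef[3] * x ** 4) / 4;
  return F(b) - F(a);
}

// BD-Rate of `test` relative to `anchor`, in percent (negative = bitrate savings).
function bdRate(anchor: RDPoint[], test: RDPoint[]): number {
  const lo = Math.max(Math.min(...anchor.map(p => p.psnr)), Math.min(...test.map(p => p.psnr)));
  const hi = Math.min(Math.max(...anchor.map(p => p.psnr)), Math.max(...test.map(p => p.psnr)));
  const mid = (lo + hi) / 2; // center x values to keep the fit well-conditioned
  const fit = (pts: RDPoint[]) =>
    cubicFit(pts.map(p => p.psnr - mid), pts.map(p => Math.log(p.bpp)));
  const avgDiff =
    (integrate(fit(test), lo - mid, hi - mid) - integrate(fit(anchor), lo - mid, hi - mid)) / (hi - lo);
  return (Math.exp(avgDiff) - 1) * 100;
}
```

As a sanity check, a curve whose bitrate is exactly double the anchor's at every quality point yields a BD-Rate of +100%.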
Finally, I want to emphasize that Vibe Coding still cannot help beginners complete complex system tasks. While you don’t necessarily need to fully understand every line of AI-generated code, you must have a solid grasp of the system architecture. I’ll quote a bold statement: "AI (almost) cannot help you complete tasks you yourself cannot do". My familiarity with Next.js grew through self-study over the past year, aided by AI in building small projects or improving open-source ones. Only after accumulating this experience did AI become a powerful efficiency tool.
I am a researcher in video and image compression. Manually writing Python code to plot RD curves whenever new data arrives is extremely tedious. Please help me build a web app with the following features.
Use better-auth for email-based registration and login.
After logging in, users can create a dataset with a name and description.
Within a dataset, users can add a method (e.g., "JPEG") and provide a description.
Users can edit (including delete) a method or all its RD data points.
Users can upload files in any format (e.g., txt, csv) or paste raw text containing experimental results. An AI will parse this input and convert it into standardized JSON data in the format: {"name": "JPEG", "description": "some description", "data": [{"bpp": float, "psnr": float, "ms_ssim": float}, ...]}, which will be displayed for user review before being inserted into the database. If a method with the same name already exists, merge the data points by sorting them on bpp without updating other fields.
If the name or description cannot be parsed, prompt the user to enter them manually—the name must not be empty.
Each data point must have a bpp value representing the x-axis. PSNR, MS-SSIM, etc., are optional and represent the y-axis. Users may also define custom y-axis metrics such as LPIPS. The two default axes (PSNR, MS-SSIM) can be deleted by users.
Each y-axis metric has a mapping function that users must define, specifying how the numerical values should be displayed on the graph. For example, MS-SSIM ranges from 0 to 1, but when displayed on the graph, it should be transformed as -10*log10(1 - ms_ssim). Users can input this mapping function, but please prevent dangerous or malicious code.
AI uses OpenAI-style API endpoints for data parsing, configurable via environment variables (endpoint, API key, model, etc.).
On the dataset page, users can directly view RD curves for each method, with options to hide certain methods or specific data points. This display state can be saved locally in the browser for restoration upon next visit.
All API responses and frontend content must be in English.
The product name is "RD Curve AI".
Frontend should use shadcn/ui component library and lucide icon library.
Database should use Drizzle ORM connected to PostgreSQL.
Use pnpm as the package manager, and initialize shadcn/ui components via npx commands.
Use Zod for type validation and transformation, storing all shared types in lib/types.ts.
Environment variables should be managed and exported uniformly via lib/config.ts.
When necessary, you may use shadcn mcp to search the component library, lucide-icons mcp to search icons, and context7 mcp to retrieve up-to-date documentation and technical references for the technologies involved.
Re-use existing code in the lib folder whenever possible.
# RD Curve AI - Copilot Instructions
You are assisting with "RD Curve AI", a Next.js 16 application designed for video/image compression researchers to manage datasets and plot Rate-Distortion (RD) curves.
## Tech Stack & Key Libraries
- **Framework:** Next.js 16 (App Router), React 19
- **Database:** PostgreSQL, Drizzle ORM (`lib/schema.ts`, `lib/db.ts`)
- **Auth:** Better-Auth (`lib/auth.ts`, `lib/auth-client.ts`)
- **UI:** Tailwind CSS, shadcn/ui, Recharts (`components/rd-chart.tsx`)
- **Validation:** Zod (`lib/types.ts`)
- **AI:** OpenAI (for data parsing)
## Architecture Overview
### Data Model (`lib/schema.ts`)
The core hierarchy is **Dataset -> Method -> DataPoint**.
- **Dataset:** A collection of methods (e.g., "Kodak Dataset").
- **Method:** A specific algorithm (e.g., "JPEG", "HEVC").
- **DataPoint:** A single result with `bpp` (x-axis) and dynamic `metrics` (y-axis) stored in a `jsonb` column.
- **Metric:** Custom definitions for y-axis values (e.g., "PSNR", "MS-SSIM") with a `mapping_function` string for display transformation.
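A minimal sketch of how this hierarchy could look in `lib/schema.ts`; table and column names here are illustrative, not the project's actual schema:

```typescript
import { pgTable, uuid, text, jsonb, real } from "drizzle-orm/pg-core";

export const dataset = pgTable("dataset", {
  id: uuid("id").primaryKey().defaultRandom(),
  name: text("name").notNull(),
  description: text("description"),
});

export const method = pgTable("method", {
  id: uuid("id").primaryKey().defaultRandom(),
  datasetId: uuid("dataset_id")
    .notNull()
    .references(() => dataset.id, { onDelete: "cascade" }),
  name: text("name").notNull(),
  description: text("description"),
});

export const dataPoint = pgTable("data_point", {
  id: uuid("id").primaryKey().defaultRandom(),
  methodId: uuid("method_id")
    .notNull()
    .references(() => method.id, { onDelete: "cascade" }),
  bpp: real("bpp").notNull(), // x-axis value
  // Dynamic y-axis metrics, e.g. { "psnr": 32.1, "ms_ssim": 0.97 }
  metrics: jsonb("metrics").$type<Record<string, number>>().notNull().default({}),
});
```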
### Authentication Pattern
- **Server-side:** Use `auth.api.getSession({ headers: await headers() })` in API routes/Server Components.
- **Client-side:** Use `authClient` hooks (e.g., `useSession`).
- **Protection:** Always check `if (!session)` in API routes before sensitive operations.
### API Route Pattern (`app/api/**`)
Follow this pattern for Route Handlers:
1. **Auth Check:** Verify session immediately.
2. **Validation:** Parse request body with Zod schemas from `lib/types.ts`.
3. **DB Operation:** Use Drizzle to query/mutate.
4. **Response:** Return `NextResponse.json`.
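The four steps above could look roughly like this in a Route Handler; the route path, table, and the `createDatasetSchema` Zod schema are assumed names for illustration:

```typescript
// app/api/datasets/route.ts (hypothetical)
import { NextResponse } from "next/server";
import { headers } from "next/headers";
import { auth } from "@/lib/auth";
import { db } from "@/lib/db";
import { dataset } from "@/lib/schema";
import { createDatasetSchema } from "@/lib/types"; // assumed Zod schema

export async function POST(req: Request) {
  // 1. Auth check
  const session = await auth.api.getSession({ headers: await headers() });
  if (!session) return NextResponse.json({ error: "Unauthorized" }, { status: 401 });

  // 2. Validation
  const parsed = createDatasetSchema.safeParse(await req.json());
  if (!parsed.success) {
    return NextResponse.json({ error: parsed.error.flatten() }, { status: 400 });
  }

  // 3. DB operation (userId column assumed)
  const [row] = await db
    .insert(dataset)
    .values({ ...parsed.data, userId: session.user.id })
    .returning();

  // 4. Response
  return NextResponse.json(row, { status: 201 });
}
```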
## Critical Conventions
### Database & Drizzle
- **Schema:** Define all tables in `lib/schema.ts`.
- **IDs:** Use `uuid` with `.defaultRandom()`.
- **Relations:** Use `drizzle-orm` relations for cleaner queries.
- **JSONB:** Use the `metrics` jsonb column in `data_point` for flexible metric storage.
### Frontend Components
- **UI:** Use `components/ui` (shadcn) for primitives, install new ones with `npx shadcn@latest add <component>`.
- **Charts:** Use `Recharts` in `components/rd-chart.tsx`. Handle the `mapping_function` logic (likely via `lib/safe-eval.ts`) when rendering axes.
- **Icons:** Use `lucide-react`.
- **MCP:** Use `context7` mcp for documentation if needed. Use `shadcn` mcp for UI components. Use `lucide-icons` mcp for searching icons.
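One way `lib/safe-eval.ts` could handle user-supplied mapping functions is to tokenize the expression, reject any identifier outside a whitelist, and only then compile it. This is a minimal sketch assuming the metric value is exposed as a variable named `v`; the variable name and error messages are illustrative:

```typescript
const ALLOWED_FNS = new Set(["log10", "log2", "log", "sqrt", "abs", "exp", "pow", "min", "max"]);

function compileMapping(expr: string): (v: number) => number {
  // Tokenize into numbers, identifiers, operators, and whitespace; any other
  // character (e.g. '.', '[', backticks) makes the join check fail.
  const tokens = expr.match(/\d+(\.\d+)?|[A-Za-z_]\w*|[+\-*/%(),]|\s+/g) ?? [];
  if (tokens.join("") !== expr) throw new Error("Invalid characters in mapping function");
  for (const t of tokens) {
    if (/^[A-Za-z_]/.test(t) && t !== "v" && !ALLOWED_FNS.has(t)) {
      throw new Error(`Disallowed identifier: ${t}`);
    }
  }
  // Rewrite whitelisted names onto Math.* and compile. The compiled source
  // contains only sanitized tokens, so no property access or globals survive.
  const body = expr.replace(/[A-Za-z_]\w*/g, (name) => (name === "v" ? "v" : `Math.${name}`));
  const fn = new Function("v", `"use strict"; return (${body});`) as (v: number) => number;
  if (typeof fn(0.5) !== "number") throw new Error("Mapping must return a number");
  return fn;
}
```

With this, the MS-SSIM example from the spec compiles as `compileMapping("-10 * log10(1 - v)")`, while anything touching globals or properties is rejected before `Function` ever sees it.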
### Configuration
- **Env Vars:** Access via `lib/config.ts` (e.g., `CONFIG.OPENAI_API_KEY`), NOT `process.env` directly.
- **Types:** Keep shared Zod schemas and TS interfaces in `lib/types.ts`.
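A plausible shape for `lib/config.ts`: read `process.env` once, fail fast on missing required values, and export a single frozen object. Variable names and defaults below are placeholders:

```typescript
// lib/config.ts (sketch)
function required(name: string): string {
  const value = process.env[name];
  if (!value) throw new Error(`Missing required environment variable: ${name}`);
  return value;
}

export const CONFIG = Object.freeze({
  DATABASE_URL: required("DATABASE_URL"),
  OPENAI_BASE_URL: process.env.OPENAI_BASE_URL ?? "https://api.openai.com/v1",
  OPENAI_API_KEY: required("OPENAI_API_KEY"),
  OPENAI_MODEL: process.env.OPENAI_MODEL ?? "gpt-4o-mini", // placeholder default
});
```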
### Language
- All API responses and frontend text must be in English.
## Developer Workflows
- **Migrations:** Run `pnpm db:generate` and `pnpm db:migrate` when changing schema.
- **AI Parsing:** The parsing logic (converting text to JSON) should handle unstructured input and map it to the `data_point` structure.
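After the AI returns structured points, they still have to be merged into any existing method per the spec's "merge on bpp without updating other fields" rule. A minimal sketch of that post-processing step, assuming points with an identical bpp are combined (incoming metric values win); names are illustrative:

```typescript
type ParsedPoint = { bpp: number } & Record<string, number>;

function mergePoints(existing: ParsedPoint[], incoming: ParsedPoint[]): ParsedPoint[] {
  const byBpp = new Map<number, ParsedPoint>();
  for (const p of existing) byBpp.set(p.bpp, p);
  // Combine metrics at the same bpp; incoming values override existing ones.
  for (const p of incoming) byBpp.set(p.bpp, { ...byBpp.get(p.bpp), ...p });
  // The spec requires the merged list to be sorted on bpp.
  return [...byBpp.values()].sort((a, b) => a.bpp - b.bpp);
}
```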