
Overview

Blink lets you build an agent that acts as a feature developer. This type of agent can explore your codebase, understand your patterns, write code in your style, add tests, and create pull requests. It’s designed to help you ship small, well-scoped features faster. This page shows you what’s possible and provides prompts you can use in Edit Mode to build your own feature development agent.

Getting started

  1. Create a new agent directory and run blink init
  2. Start the dev server with blink dev
  3. Press Ctrl+T to toggle to Edit Mode
  4. Describe your feature development agent using the ideas below

What you can build

Here are examples of what you can ask your feature development agent to do once you’ve built it. These show the end result; scroll down to see how to build an agent that can handle these requests.
Implement user profile editing with avatar upload.

Tech stack: React + TypeScript, React Query, Tailwind, Jest + Testing Library
Constraints: Reuse existing Avatar and Form components from src/components/
Acceptance criteria:
- User can edit their name and upload an avatar
- Show optimistic updates while saving
- Include unit tests for form validation
- Add integration test for the full edit flow

Create a minimal implementation with all necessary files, tests, and a draft PR.
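For a request like this, the data-layer piece the agent produces might resemble the following sketch of an optimistic update with React Query. The hook name, endpoint, and query key are hypothetical, and the TanStack Query object syntax is assumed; the real code would reuse your existing API client and components.

```typescript
// Sketch only: updateProfile, Profile, and the ['profile'] query key are
// hypothetical placeholders for whatever exists in your codebase.
import { useMutation, useQueryClient } from '@tanstack/react-query';

interface Profile {
  name: string;
  avatarUrl: string;
}

async function updateProfile(update: Partial<Profile>): Promise<Profile> {
  const res = await fetch('/api/profile', {
    method: 'PATCH',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify(update),
  });
  if (!res.ok) throw new Error(`Profile update failed: ${res.status}`);
  return res.json();
}

export function useUpdateProfile() {
  const queryClient = useQueryClient();
  return useMutation({
    mutationFn: updateProfile,
    // Apply the change locally before the server confirms it.
    onMutate: async (update) => {
      await queryClient.cancelQueries({ queryKey: ['profile'] });
      const previous = queryClient.getQueryData<Profile>(['profile']);
      queryClient.setQueryData<Profile>(['profile'], (old) =>
        old ? { ...old, ...update } : old,
      );
      return { previous };
    },
    // Roll back to the snapshot if the request fails.
    onError: (_err, _update, context) => {
      if (context?.previous) {
        queryClient.setQueryData(['profile'], context.previous);
      }
    },
    // Refetch either way to reconcile with the server.
    onSettled: () => {
      queryClient.invalidateQueries({ queryKey: ['profile'] });
    },
  });
}
```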
Add a POST /api/notifications endpoint that creates user notifications.

Stack: Node.js + Express, Prisma, PostgreSQL
Requirements:
- Accept: userId, title, message, type (info/warning/error)
- Validate all fields (title max 100 chars, message max 500 chars)
- Store in notifications table with timestamp
- Return the created notification with 201 status
- Include error handling and input validation tests

Follow the patterns in src/api/routes/ and match the existing error handling style.
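For a prompt like this, the agent’s output might look roughly like the sketch below. The `notification` Prisma model, the inline validation, and the error responses are assumptions for illustration; the real implementation should mirror the patterns in src/api/routes/.

```typescript
// Sketch only: assumes JSON body parsing is configured and a Notification
// model exists in the Prisma schema. Mirror src/api/routes/ in real code.
import { Router, Request, Response } from 'express';
import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();
const router = Router();

const TYPES = ['info', 'warning', 'error'] as const;

router.post('/api/notifications', async (req: Request, res: Response) => {
  const { userId, title, message, type } = req.body ?? {};

  // Field-level validation with the limits from the request above.
  const errors: string[] = [];
  if (!userId) errors.push('userId is required');
  if (!title || title.length > 100) errors.push('title is required (max 100 chars)');
  if (!message || message.length > 500) errors.push('message is required (max 500 chars)');
  if (!TYPES.includes(type)) errors.push(`type must be one of ${TYPES.join(', ')}`);
  if (errors.length > 0) {
    return res.status(400).json({ errors });
  }

  try {
    const notification = await prisma.notification.create({
      data: { userId, title, message, type },
    });
    return res.status(201).json(notification);
  } catch (err) {
    // Placeholder: match the existing error-handling style instead.
    console.error('Failed to create notification', { userId, err });
    return res.status(500).json({ error: 'Internal server error' });
  }
});

export default router;
```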
Create a DataTable component that we can reuse across the app.

Stack: React + TypeScript, Tailwind
Features needed:
- Support sorting on any column
- Client-side filtering with search input
- Pagination (show 10, 25, 50, 100 rows)
- Loading and empty states
- Responsive on mobile (stack on small screens)

Make it generic so we can pass in any data shape. Include Storybook stories
showing 3 different use cases, plus tests for sorting and filtering.
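A good first artifact for a request like this is the component’s public contract. The sketch below is one possible shape; the type and prop names are illustrative rather than prescribed.

```typescript
// Sketch of a possible public API for the component; names are illustrative.
import { ReactNode } from 'react';

export interface Column<T> {
  key: string;
  header: string;
  // How to render a cell for a given row; defaults to the raw accessor value.
  render?: (row: T) => ReactNode;
  sortable?: boolean;
  // Value used for sorting and text filtering.
  accessor: (row: T) => string | number;
}

export interface DataTableProps<T> {
  data: T[];
  columns: Column<T>[];
  isLoading?: boolean;
  emptyMessage?: string;
  pageSizeOptions?: number[]; // e.g. [10, 25, 50, 100]
  initialPageSize?: number;
  // Enables the client-side search input when true.
  filterable?: boolean;
}

// Usage sketch: <DataTable<User> data={users} columns={userColumns} filterable />
```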
Refactor src/services/email-sender.ts to be more testable and maintainable.

Current issues:
- Direct dependency on SendGrid (hard to test)
- No retry logic for failed sends
- Error messages aren't logged properly

Please:
- Extract an EmailProvider interface
- Implement SendGrid and a MockProvider for tests
- Add retry logic with exponential backoff (max 3 attempts)
- Improve error logging with structured data
- Update all existing tests to use the new structure
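The core of that refactor might be sketched as follows. The `SendResult` shape, the logging call, and the backoff delays are illustrative assumptions, not the required design.

```typescript
// Sketch only: SendResult, the logging shape, and the backoff delays are
// illustrative assumptions rather than the final design.
export interface SendResult {
  id: string;
  accepted: boolean;
}

export interface EmailProvider {
  send(to: string, subject: string, body: string): Promise<SendResult>;
}

// Test double that records calls instead of hitting SendGrid.
export class MockProvider implements EmailProvider {
  public sent: Array<{ to: string; subject: string; body: string }> = [];

  async send(to: string, subject: string, body: string): Promise<SendResult> {
    this.sent.push({ to, subject, body });
    return { id: `mock-${this.sent.length}`, accepted: true };
  }
}

// Retry wrapper: up to 3 attempts with exponential backoff (1s, 2s, 4s).
export async function sendWithRetry(
  provider: EmailProvider,
  to: string,
  subject: string,
  body: string,
  maxAttempts = 3,
): Promise<SendResult> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await provider.send(to, subject, body);
    } catch (err) {
      lastError = err;
      // Structured logging with enough context to debug a failed send.
      console.error('Email send failed', { to, subject, attempt, err });
      if (attempt < maxAttempts) {
        await new Promise((r) => setTimeout(r, 1000 * 2 ** (attempt - 1)));
      }
    }
  }
  throw lastError;
}
```

A SendGrid-backed provider would implement the same `EmailProvider` interface, which is what makes the service testable with `MockProvider`.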
Users are reporting that the checkout flow fails ~5% of the time with a timeout error.

Stack: Next.js, Stripe, Redis for sessions
Error logs show: "Connection timeout to payment service"
What I know:
- Happens randomly, no pattern by time or user
- Session data exists in Redis
- Stripe API responds normally in test mode

Please investigate: explore the payment flow code, identify the likely root cause,
implement a fix, and add better error handling. Include tests that prevent this
regression.
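The real fix should come out of that investigation, but as an illustration of the kind of change such a request might produce, the agent could end up tightening the payment client’s timeout and retry settings. `timeout` and `maxNetworkRetries` are standard stripe-node client options; the values and function below are placeholders.

```typescript
// Illustrative only: the actual fix depends on what the investigation finds.
import Stripe from 'stripe';

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string, {
  timeout: 10_000,       // fail fast instead of hanging past the request budget
  maxNetworkRetries: 2,  // retry transient network errors with built-in backoff
});

export async function createCheckoutIntent(amount: number, sessionId: string) {
  try {
    return await stripe.paymentIntents.create({
      amount,
      currency: 'usd',
      metadata: { sessionId },
    });
  } catch (err) {
    // Log enough context to correlate with the Redis session and user report.
    console.error('Payment intent creation failed', { sessionId, err });
    throw err;
  }
}
```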
Migrate our class-based React components to functional components with hooks.

Start with src/components/Dashboard/ (5 components)
Requirements:
- Convert class components to functional components
- Replace componentDidMount/Update with useEffect
- Use useState instead of this.state
- Keep all existing functionality and tests working
- Update tests to use React Testing Library instead of Enzyme
- Open a draft PR with the changes

Match the patterns you see in src/components/NewFeatures/ for consistency.
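A single converted component from that batch might look like this before/after sketch; `DashboardStats` and `fetchStats` are hypothetical stand-ins for whatever the Dashboard components actually contain.

```tsx
// Hypothetical stand-in component illustrating the conversion pattern.
import { useEffect, useState } from 'react';

interface Stats {
  activeUsers: number;
}

// Placeholder data fetcher standing in for the component's real data source.
async function fetchStats(): Promise<Stats> {
  const res = await fetch('/api/dashboard/stats');
  return res.json();
}

// Before: componentDidMount() { fetchStats().then((s) => this.setState({ stats: s })); }
// After, with hooks:
export function DashboardStats() {
  const [stats, setStats] = useState<Stats | null>(null);

  useEffect(() => {
    let cancelled = false;
    fetchStats().then((result) => {
      // Ignore responses that arrive after unmount (see cleanup below).
      if (!cancelled) setStats(result);
    });
    return () => {
      cancelled = true;
    };
  }, []);

  if (!stats) return <span>Loading…</span>;
  return <span>{stats.activeUsers} active users</span>;
}
```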
Add tests for src/utils/date-helpers.ts which currently has no test coverage.

This file has 8 functions for date formatting and manipulation.
Please:
- Write unit tests covering all functions
- Include edge cases (leap years, timezones, invalid inputs)
- Test boundary conditions (start/end of year, etc.)
- Aim for 100% code coverage
- Run the test suite and make sure everything passes

Follow the testing patterns in src/utils/__tests__/ and use the same matchers.
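The generated suite might take a shape like the sketch below; `formatDate` and `addDays` are hypothetical names standing in for the real exports, and the actual tests should reuse the matchers and setup from src/utils/__tests__/.

```typescript
// Sketch only: formatDate and addDays are hypothetical stand-ins for
// whatever src/utils/date-helpers.ts actually exports.
import { formatDate, addDays } from '../date-helpers';

describe('date-helpers', () => {
  describe('formatDate', () => {
    it('formats a normal date', () => {
      expect(formatDate(new Date('2024-03-15T00:00:00Z'))).toBe('2024-03-15');
    });

    it('handles a leap day', () => {
      expect(formatDate(new Date('2024-02-29T00:00:00Z'))).toBe('2024-02-29');
    });

    it('rejects invalid input', () => {
      expect(() => formatDate(new Date('not a date'))).toThrow();
    });
  });

  describe('addDays', () => {
    it('rolls over a year boundary', () => {
      expect(addDays(new Date('2023-12-31T00:00:00Z'), 1).getUTCFullYear()).toBe(2024);
    });
  });
});
```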

Building your feature development agent

Switch to Edit Mode (Ctrl+T) and use these prompts to build your agent. Copy and customize them based on what you want your agent to do.
You don’t need to use all of these prompts. Start with 2-3 that match your needs, test the agent, then add more capabilities as needed.

Make it understand your codebase patterns

Give the agent file reading access. Before writing any code, it should explore
the codebase to understand: existing component patterns, state management
approach, testing libraries and conventions, error handling patterns, and code
style. It should match these patterns in all code it generates.

Make it write code incrementally

When implementing a feature, the agent should start with the smallest working
vertical slice. It should implement one piece at a time: basic structure first,
then data flow, then edge cases, then polish. After each piece, it should verify
the code works before moving to the next piece.

Make it generate tests automatically

Every code change must include tests. The agent should write: unit tests for
individual functions, integration tests for feature flows, and edge case tests
for error handling. Tests should follow existing patterns in the codebase and
achieve meaningful coverage, not just 100% for the sake of it.

Make it create descriptive pull requests

When the implementation is complete, the agent should create a draft PR with:
clear title describing what changed, bullet-point summary of changes, before/after
screenshots for UI changes, testing instructions, and any open questions or
follow-ups. The PR should be ready for human review.

Make it follow your code style

The agent should match the existing code style exactly: indentation, naming
conventions, file organization, import ordering, and comment style. It should
never introduce new patterns without asking first. When in doubt, it should
find similar code and copy that style.

Make it handle errors gracefully

All code should include proper error handling. The agent should: validate inputs,
catch and handle errors appropriately, log errors with useful context, show
user-friendly error messages, and never let errors crash the application. Error
handling should match patterns already in the codebase.
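One concrete shape this can take is sketched below. It is illustrative only; the helper names and responses should be replaced by whatever error-handling pattern the codebase already uses.

```typescript
// Illustrative only: adapt to the error-handling pattern the codebase uses.
import { Request, Response, NextFunction, RequestHandler } from 'express';

// Wrap async route handlers so rejected promises reach the error middleware
// instead of crashing the process.
export const asyncHandler =
  (fn: RequestHandler): RequestHandler =>
  (req: Request, res: Response, next: NextFunction) => {
    Promise.resolve(fn(req, res, next)).catch(next);
  };

// Central error middleware: log with useful context, respond with a
// user-friendly message rather than a stack trace.
export function errorMiddleware(
  err: Error,
  req: Request,
  res: Response,
  _next: NextFunction,
) {
  console.error('Unhandled request error', {
    path: req.path,
    method: req.method,
    message: err.message,
  });
  res.status(500).json({ error: 'Something went wrong. Please try again.' });
}
```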

Make it iterate based on feedback

When I provide feedback on generated code, the agent should update the code
and explain what changed. It should ask clarifying questions if feedback is
unclear. It should keep a running list of changes made during iteration so
nothing gets lost.

Make it work with existing tools

Give the agent bash execution access. It should: run the test suite after
changes, use linters and formatters, check TypeScript compilation, run the
dev server to verify changes, and execute any project-specific scripts
(like "npm run validate"). It should fix any issues these tools find.

Make it document as it goes

The agent should add code comments for: complex logic, non-obvious decisions,
TODO items for future work, and API contracts. For new components, it should
add JSDoc comments. For new APIs, it should update API documentation. Keep
comments concise and meaningful.

Make it scope work appropriately

The agent should push back if a feature request is too large. It should suggest:
breaking into smaller PRs, starting with an MVP, or deferring nice-to-have
features. Each implementation should be reviewable in under 30 minutes. If a
feature exceeds that, the agent should propose a phased approach.

How it works behind the scenes

When you use the prompts above, Edit Mode automatically implements the right tools for your agent. You don’t need to code anything; just describe what you want. Common tools include:
  • File reading and writing - Access and modify code files
  • Git operations - Create branches, commits, and pull requests
  • Bash execution - Run tests, linters, and build commands
  • Web search - Research libraries, frameworks, and best practices
  • Code analysis - Understand existing patterns and dependencies
Edit Mode handles all the technical implementation. Your job is to describe how the agent should write code and interact with your development workflow.

Iterating on your agent

As you test, you’ll discover what works and what needs adjustment. Use these Edit Mode prompts to refine your agent’s behavior:
The agent must analyze existing code more carefully before writing. It should:
find 2-3 examples of similar code, identify the style patterns (naming, structure,
imports), and match those patterns exactly. If multiple styles exist, ask which
to follow.

The agent should favor simplicity over cleverness. It should: use straightforward
solutions even if longer, avoid premature optimization, prefer explicit over
implicit, and only add abstraction when there are 3+ instances of duplication.
Ask "is there a simpler way?" before finalizing.

Every feature must include tests that would catch real bugs, not just tests that
pass. The agent should think: what could go wrong? What edge cases exist? What
would break if someone modifies this later? Write tests for those scenarios, not
just the happy path.

The agent should limit PRs to: maximum 400 lines of changes (excluding tests),
one primary concern per PR, under 30 minutes to review. If a feature exceeds
this, it must be split into multiple PRs with clear dependencies. The first PR
should add the minimal working version.

Before making changes, the agent should: run the full test suite to establish
a baseline, make incremental changes, run tests after each change, and immediately
revert if tests break. Never commit code with failing tests. If tests fail, fix
them or adjust the approach.

Advanced capabilities

Once your basic agent works well, consider these enhancements:
Connect the agent to your CI system (GitHub Actions, CircleCI, etc.). When it
creates a PR, it should: monitor CI checks, fix failures automatically if
possible, and report status back to me. If CI fails, it should analyze logs,
identify the issue, and push a fix.

Track feedback I give during code reviews. When similar situations arise, the
agent should remember past feedback and apply those lessons. For example, if I
always ask for null checks in a certain pattern, the agent should add them
proactively.

For any UI changes, the agent should: take screenshots before and after, generate
visual diffs highlighting changes, and include them in the PR description. Use
tools like Playwright or Percy for automated visual testing.
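As a rough illustration of the automated piece, a visual check with @playwright/test might look like this; the route and snapshot name are placeholders.

```typescript
// Sketch using @playwright/test; the route and snapshot name are placeholders.
import { test, expect } from '@playwright/test';

test('checkout page matches the stored visual baseline', async ({ page }) => {
  await page.goto('/checkout');
  // Compares against a committed baseline image and fails on visual drift;
  // run with --update-snapshots to refresh the baseline after intended changes.
  await expect(page).toHaveScreenshot('checkout.png');
});
```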
Before starting work, the agent should estimate how long the implementation will
take (in hours). Base estimates on: lines of code to change, number of files
touched, test complexity, and integration points. Track actual vs estimated time
to calibrate future estimates.

For each feature, track important decisions: why we chose approach A over B,
what trade-offs we made, what we deferred for later, and what risks we accepted.
Store this in a DECISIONS.md file in each feature directory. Reference past
decisions when working on related features.

Tips for success

When you’re building:
  • Start with basic code generation matching your style
  • Add test generation early so it becomes a habit
  • Give the agent access to your codebase to learn patterns
  • Test on small features before trying large ones
When you’re ready to scale:
  • Connect to GitHub for automated PR creation
  • Integrate with CI/CD to catch issues early
  • Add visual testing for UI components
  • Build in memory to learn from code reviews
Focus on getting code quality and style consistency right first. You can always add sophistication like CI integration or visual diffs later.