Blink lets you build an agent that acts as a feature developer. This type of agent can explore your codebase, understand your patterns, write code in your style, add tests, and create pull requests. It’s designed to help you ship small, well-scoped features faster. This page shows you what’s possible and provides prompts you can use in Edit Mode to build your own feature development agent.
Here are examples of what you can ask your feature development agent to do once you’ve built it. These show the end result: scroll down to see how to build an agent that can handle these requests.
Implement a complete feature with tests
Implement user profile editing with avatar upload.

Tech stack: React + TypeScript, React Query, Tailwind, Jest + Testing Library

Constraints: Reuse existing Avatar and Form components from src/components/

Acceptance criteria:
- User can edit their name and upload an avatar
- Show optimistic updates while saving
- Include unit tests for form validation
- Add integration test for the full edit flow

Create a minimal implementation with all necessary files, tests, and a draft PR.
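To make the acceptance criteria concrete, here is a minimal sketch of the kind of form-validation unit such a prompt might produce. Everything here is illustrative: the 80-character name limit, the 2 MB avatar cap, and the allowed MIME types are assumptions, not rules from any real codebase.

```typescript
// Hypothetical validation logic for the profile edit form.
// Field rules are illustrative; adjust them to your real requirements.
interface ProfileForm {
  name: string;
  avatarFile: { size: number; type: string } | null;
}

interface ValidationResult {
  valid: boolean;
  errors: string[];
}

const MAX_AVATAR_BYTES = 2 * 1024 * 1024; // 2 MB — an assumed limit
const ALLOWED_TYPES = ["image/png", "image/jpeg", "image/webp"];

export function validateProfileForm(form: ProfileForm): ValidationResult {
  const errors: string[] = [];
  if (form.name.trim().length === 0) errors.push("Name is required");
  if (form.name.length > 80) errors.push("Name must be 80 characters or fewer");
  if (form.avatarFile) {
    if (form.avatarFile.size > MAX_AVATAR_BYTES)
      errors.push("Avatar must be under 2 MB");
    if (!ALLOWED_TYPES.includes(form.avatarFile.type))
      errors.push("Avatar must be PNG, JPEG, or WebP");
  }
  return { valid: errors.length === 0, errors };
}
```

A pure function like this is easy to unit-test in isolation, which is exactly what the prompt's "unit tests for form validation" criterion asks for.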
Add a new API endpoint with validation
Add a POST /api/notifications endpoint that creates user notifications.

Stack: Node.js + Express, Prisma, PostgreSQL

Requirements:
- Accept: userId, title, message, type (info/warning/error)
- Validate all fields (title max 100 chars, message max 500 chars)
- Store in notifications table with timestamp
- Return the created notification with 201 status
- Include error handling and input validation tests

Follow the patterns in src/api/routes/ and match the existing error handling style.
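As a sketch of what the validation half of this endpoint might look like, here is the pure validation step with the Express and Prisma wiring omitted. The function shape and names are hypothetical; only the limits (title max 100, message max 500, three allowed types) come from the prompt above.

```typescript
// Hypothetical request validation for POST /api/notifications.
// Keeping it separate from the route handler makes it unit-testable.
type NotificationType = "info" | "warning" | "error";

interface CreateNotificationInput {
  userId: string;
  title: string;
  message: string;
  type: NotificationType;
}

type ValidationOutcome =
  | { ok: true; data: CreateNotificationInput }
  | { ok: false; errors: string[] };

export function validateNotification(body: Record<string, unknown>): ValidationOutcome {
  const errors: string[] = [];
  const { userId, title, message, type } = body as Partial<CreateNotificationInput>;
  if (typeof userId !== "string" || userId.length === 0)
    errors.push("userId is required");
  if (typeof title !== "string" || title.length === 0 || title.length > 100)
    errors.push("title is required and must be at most 100 characters");
  if (typeof message !== "string" || message.length === 0 || message.length > 500)
    errors.push("message is required and must be at most 500 characters");
  if (type !== "info" && type !== "warning" && type !== "error")
    errors.push("type must be one of: info, warning, error");
  if (errors.length > 0) return { ok: false, errors };
  return { ok: true, data: { userId, title, message, type } as CreateNotificationInput };
}
```

The route handler would call this first, respond 400 with the collected errors on failure, and only then hand the validated data to Prisma and return 201.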
Build a reusable component
Create a DataTable component that we can reuse across the app.

Stack: React + TypeScript, Tailwind

Features needed:
- Support sorting on any column
- Client-side filtering with search input
- Pagination (show 10, 25, 50, 100 rows)
- Loading and empty states
- Responsive on mobile (stack on small screens)

Make it generic so we can pass in any data shape. Include Storybook stories showing 3 different use cases, plus tests for sorting and filtering.
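The generic part of such a component is easiest to see in its data layer. Here is a sketch of the sorting, filtering, and pagination helpers the agent might extract, with rendering omitted entirely; all function names are hypothetical.

```typescript
// Hypothetical pure helpers behind a generic DataTable.
// Keeping them out of the component makes sorting/filtering unit-testable.
export function sortRows<T>(rows: T[], key: keyof T, dir: "asc" | "desc" = "asc"): T[] {
  const sign = dir === "asc" ? 1 : -1;
  return [...rows].sort((a, b) => {
    const av = a[key] as string | number;
    const bv = b[key] as string | number;
    return av < bv ? -sign : av > bv ? sign : 0;
  });
}

export function filterRows<T>(rows: T[], query: string): T[] {
  const q = query.trim().toLowerCase();
  if (q === "") return rows;
  // Naive match across all fields; fine for client-side filtering.
  return rows.filter((row) =>
    Object.values(row as Record<string, unknown>).some((v) =>
      String(v).toLowerCase().includes(q)
    )
  );
}

export function paginate<T>(rows: T[], page: number, pageSize: 10 | 25 | 50 | 100): T[] {
  return rows.slice(page * pageSize, (page + 1) * pageSize);
}
```

The component itself would hold the sort key, query, and page in state, pipe the rows through these three functions, and render the result with Tailwind classes.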
Refactor and improve existing code
Refactor src/services/email-sender.ts to be more testable and maintainable.

Current issues:
- Direct dependency on SendGrid (hard to test)
- No retry logic for failed sends
- Error messages aren't logged properly

Please:
- Extract an EmailProvider interface
- Implement SendGrid and a MockProvider for tests
- Add retry logic with exponential backoff (max 3 attempts)
- Improve error logging with structured data
- Update all existing tests to use the new structure
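Here is one plausible shape for the refactor the prompt describes: a provider interface plus retry logic with exponential backoff. The signatures and delay constants are assumptions, and the sleep function is injectable so tests can run without real delays.

```typescript
// Hypothetical target shape for the email-sender refactor.
export interface EmailProvider {
  send(to: string, subject: string, body: string): Promise<void>;
}

export async function sendWithRetry(
  provider: EmailProvider,
  to: string,
  subject: string,
  body: string,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms))
): Promise<void> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await provider.send(to, subject, body);
      return;
    } catch (err) {
      lastError = err;
      // Structured log with context — swap in your real logger.
      console.error(JSON.stringify({ event: "email_send_failed", attempt, to, subject }));
      if (attempt < maxAttempts) await sleep(100 * 2 ** (attempt - 1)); // 100ms, 200ms, ...
    }
  }
  throw lastError;
}
```

A MockProvider for tests is then just an object with a `send` that records calls, and the real SendGrid adapter implements the same interface behind the scenes.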
Debug and fix a production issue
Users are reporting that the checkout flow fails ~5% of the time with a timeout error.

Stack: Next.js, Stripe, Redis for sessions

Error logs show: "Connection timeout to payment service"

What I know:
- Happens randomly, no pattern by time or user
- Session data exists in Redis
- Stripe API responds normally in test mode

Please investigate: explore the payment flow code, identify the likely root cause, implement a fix, and add better error handling. Include tests that prevent this regression.
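One common mitigation the agent might land on for intermittent timeouts is wrapping the payment call with an explicit deadline and a bounded retry. This is only a sketch of that pattern, not a diagnosis of the actual root cause; the timeout value and retry count are placeholders.

```typescript
// Hedged sketch: race a task against an explicit timeout, retry once on failure.
export async function withTimeout<T>(task: () => Promise<T>, ms: number): Promise<T> {
  let timer!: ReturnType<typeof setTimeout>;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(() => reject(new Error(`Timed out after ${ms}ms`)), ms);
  });
  try {
    return await Promise.race([task(), timeout]);
  } finally {
    clearTimeout(timer); // avoid leaking the timer when the task wins
  }
}

export async function callWithRetry<T>(
  task: () => Promise<T>,
  ms: number,
  retries = 1
): Promise<T> {
  try {
    return await withTimeout(task, ms);
  } catch (err) {
    if (retries <= 0) throw err;
    return callWithRetry(task, ms, retries - 1);
  }
}
```

Retrying a payment call is only safe if the call is idempotent (for example, via an idempotency key), so in a real fix the agent should verify that before adding retries.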
Migrate code to a new pattern
Migrate our class-based React components to functional components with hooks.

Start with src/components/Dashboard/ (5 components)

Requirements:
- Convert class components to functional components
- Replace componentDidMount/componentDidUpdate with useEffect
- Use useState instead of this.state
- Keep all existing functionality and tests working
- Update tests to use React Testing Library instead of Enzyme
- Open a draft PR with the changes

Match the patterns you see in src/components/NewFeatures/ for consistency.
Generate comprehensive tests
Add tests for src/utils/date-helpers.ts which currently has no test coverage.

This file has 8 functions for date formatting and manipulation.

Please:
- Write unit tests covering all functions
- Include edge cases (leap years, timezones, invalid inputs)
- Test boundary conditions (start/end of year, etc.)
- Aim for 100% code coverage
- Run the test suite and make sure everything passes

Follow the testing patterns in src/utils/__tests__/ and use the same matchers.
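To illustrate the kinds of edge cases the prompt asks for, here are two hypothetical date helpers with the categories worth asserting against. Your real date-helpers.ts will contain different functions; the point is the coverage pattern, not these specific implementations.

```typescript
// Two illustrative date helpers, stand-ins for functions in date-helpers.ts.
export function isLeapYear(year: number): boolean {
  // Gregorian rule: divisible by 4, except centuries not divisible by 400.
  return (year % 4 === 0 && year % 100 !== 0) || year % 400 === 0;
}

export function daysInMonth(year: number, month: number): number {
  if (!Number.isInteger(month) || month < 1 || month > 12) {
    throw new RangeError(`Invalid month: ${month}`);
  }
  // Day 0 of the following month is the last day of this month.
  // month is 1-based here, but JS Date months are 0-based, so passing it
  // unchanged already points at the next month.
  return new Date(Date.UTC(year, month, 0)).getUTCDate();
}
```

Edge cases worth asserting: century years (1900 is not a leap year, 2000 is), February in leap vs non-leap years, and invalid inputs raising an error rather than returning garbage. Using UTC methods keeps the tests independent of the machine's timezone.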
Give the agent file reading access. Before writing any code, it should explore the codebase to understand: existing component patterns, state management approach, testing libraries and conventions, error handling patterns, and code style. It should match these patterns in all code it generates.
When implementing a feature, the agent should start with the smallest working vertical slice. It should implement one piece at a time: basic structure first, then data flow, then edge cases, then polish. After each piece, it should verify the code works before moving to the next piece.
Every code change must include tests. The agent should write: unit tests for individual functions, integration tests for feature flows, and edge case tests for error handling. Tests should follow existing patterns in the codebase and achieve meaningful coverage, not just 100% for the sake of it.
When the implementation is complete, the agent should create a draft PR with: a clear title describing what changed, a bullet-point summary of changes, before/after screenshots for UI changes, testing instructions, and any open questions or follow-ups. The PR should be ready for human review.
The agent should match the existing code style exactly: indentation, naming conventions, file organization, import ordering, and comment style. It should never introduce new patterns without asking first. When in doubt, it should find similar code and copy that style.
All code should include proper error handling. The agent should: validate inputs, catch and handle errors appropriately, log errors with useful context, show user-friendly error messages, and never let errors crash the application. Error handling should match patterns already in the codebase.
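A minimal sketch of that pattern, using a hypothetical query-parameter parser: validate the input, log a failure with structured context, and return a safe result instead of throwing into the caller. The function name and log shape are assumptions.

```typescript
// Hedged example of validate → log with context → return a safe result.
type Result<T> = { ok: true; value: T } | { ok: false; error: string };

export function parsePositiveInt(raw: string, field: string): Result<number> {
  const n = Number(raw);
  if (!Number.isInteger(n) || n <= 0) {
    // Structured log: machine-parseable context, not a bare string.
    console.error(JSON.stringify({ event: "validation_failed", field, raw }));
    return { ok: false, error: `${field} must be a positive integer` };
  }
  return { ok: true, value: n };
}
```

Callers branch on `ok` and show the user-friendly `error` message on failure, so malformed input never becomes an uncaught exception.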
When I provide feedback on generated code, the agent should update the code and explain what changed. It should ask clarifying questions if feedback is unclear. It should keep a running list of changes made during iteration so nothing gets lost.
Give the agent bash execution access. It should: run the test suite after changes, use linters and formatters, check TypeScript compilation, run the dev server to verify changes, and execute any project-specific scripts (like "npm run validate"). It should fix any issues these tools find.
The agent should add code comments for: complex logic, non-obvious decisions, TODO items for future work, and API contracts. For new components, it should add JSDoc comments. For new APIs, it should update API documentation. Keep comments concise and meaningful.
The agent should push back if a feature request is too large. It should suggest: breaking into smaller PRs, starting with an MVP, or deferring nice-to-have features. Each implementation should be reviewable in under 30 minutes. If a feature exceeds that, the agent should propose a phased approach.
When you use the prompts above, Edit Mode automatically sets up the right tools for your agent. You don’t need to write any code; just describe what you want. Common tools include:
File reading and writing - Access and modify code files
Git operations - Create branches, commits, and pull requests
Bash execution - Run tests, linters, and build commands
Web search - Research libraries, frameworks, and best practices
Code analysis - Understand existing patterns and dependencies
Edit Mode handles all the technical implementation. Your job is to describe how the agent should write code and interact with your development workflow.
As you test, you’ll discover what works and what needs adjustment. Use these Edit Mode prompts to refine your agent’s behavior:
If code doesn't match your style
The agent must analyze existing code more carefully before writing. It should: find 2-3 examples of similar code, identify the style patterns (naming, structure, imports), and match those patterns exactly. If multiple styles exist, ask which to follow.
If implementations are too complex
The agent should favor simplicity over cleverness. It should: use straightforward solutions even if longer, avoid premature optimization, prefer explicit over implicit, and only add abstraction when there are 3+ instances of duplication. Ask "is there a simpler way?" before finalizing.
If tests are insufficient
Every feature must include tests that would catch real bugs, not just tests that pass. The agent should think: what could go wrong? What edge cases exist? What would break if someone modifies this later? Write tests for those scenarios, not just the happy path.
If PRs are too large
The agent should limit PRs to: maximum 400 lines of changes (excluding tests), one primary concern per PR, under 30 minutes to review. If a feature exceeds this, it must be split into multiple PRs with clear dependencies. The first PR should add the minimal working version.
If the agent breaks existing code
Before making changes, the agent should: run the full test suite to establish a baseline, make incremental changes, run tests after each change, and immediately revert if tests break. Never commit code with failing tests. If tests fail, fix them or adjust the approach.
Once your basic agent works well, consider these enhancements:
Integrate with your CI/CD pipeline
Connect the agent to your CI system (GitHub Actions, CircleCI, etc.). When it creates a PR, it should: monitor CI checks, fix failures automatically if possible, and report status back to me. If CI fails, it should analyze logs, identify the issue, and push a fix.
Learn from code review feedback
Track feedback I give during code reviews. When similar situations arise, the agent should remember past feedback and apply those lessons. For example, if I always ask for null checks in a certain pattern, the agent should add them proactively.
Generate visual diffs for UI changes
For any UI changes, the agent should: take screenshots before and after, generate visual diffs highlighting changes, and include them in the PR description. Use tools like Playwright or Percy for automated visual testing.
Estimate implementation time
Before starting work, the agent should estimate how long the implementation will take (in hours). Base estimates on: lines of code to change, number of files touched, test complexity, and integration points. Track actual vs. estimated time to calibrate future estimates.
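One way the agent could turn those four inputs into a number is a simple weighted heuristic. The weights below are entirely made up for illustration; the whole point of tracking actual vs. estimated time is to replace them with calibrated values.

```typescript
// Toy estimation heuristic — every weight is a placeholder to calibrate.
interface WorkItem {
  linesToChange: number;
  filesTouched: number;
  testComplexity: 1 | 2 | 3; // 1 = simple, 3 = intricate
  integrationPoints: number;
}

export function estimateHours(item: WorkItem): number {
  const hours =
    item.linesToChange / 100 +  // assume roughly 100 lines per hour
    item.filesTouched * 0.25 +  // context-switch overhead per file
    item.testComplexity * 0.5 +
    item.integrationPoints * 1; // integrations dominate uncertainty
  return Math.round(hours * 2) / 2; // round to the nearest half hour
}
```

For example, a 200-line change across 4 files with moderate tests and one integration point comes out to 5 hours under these placeholder weights.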
Maintain a decision log
For each feature, track important decisions: why we chose approach A over B, what trade-offs we made, what we deferred for later, and what risks we accepted. Store this in a DECISIONS.md file in each feature directory. Reference past decisions when working on related features.