Blink lets you build an agent that acts as your PR review partner. Tag it on any pull request and it will review your code, provide constructive feedback, catch potential issues, and help you maintain high code quality standards—all before your teammates review.

This page shows you what’s possible and provides prompts you can use in Edit Mode to build your own PR review agent.
Here are examples of what you can ask your PR review agent to do once you’ve built it. These show the kinds of reviews and feedback your agent can provide.
Comprehensive PR review
Review PR #234 and provide constructive feedback.

Focus on:
- Code quality and maintainability
- Test coverage and edge cases
- Performance implications
- Security concerns
- Consistency with existing codebase patterns

Be objective, direct, and concise. Provide specific, actionable feedback with examples. Distinguish between blocking issues and nice-to-haves. Skip nitpicky comments.
Quick sanity check before requesting review
Give PR #456 a quick sanity check before I request review from the team.

Check for:
- Obvious bugs or logic errors
- Missing error handling
- Test coverage gaps
- Incomplete or unclear PR description
- Anything that would obviously fail code review

Flag anything that needs fixing before I tag human reviewers.
Security-focused review
Review PR #789 with a security focus.

This PR adds a new file upload endpoint. Check for:
- Input validation and sanitization
- Authentication and authorization
- File type restrictions and size limits
- Path traversal vulnerabilities
- Potential injection attacks
- Sensitive data exposure

Flag any security concerns with severity levels.
Performance analysis
Review PR #123 for performance issues.

This PR modifies our data fetching logic. Look for:
- Unnecessary re-renders or recomputations
- N+1 queries or inefficient database access
- Missing indexes or pagination
- Memory leaks or resource cleanup issues
- Blocking operations on the main thread

Provide specific suggestions for optimization.
Architecture and design review
Review PR #567 from an architecture perspective.

This PR adds a new feature. Evaluate:
- Does it follow existing patterns and conventions?
- Is the abstraction level appropriate?
- Are responsibilities properly separated?
- Is it extensible for future needs?
- Does it introduce unnecessary complexity?

Focus on design decisions and long-term maintainability.
Test quality review
Review the tests in PR #890.

Check for:
- Are critical paths covered?
- Are edge cases tested?
- Are tests testing the right things?
- Are test names clear and descriptive?
- Are tests brittle or tightly coupled?
- Are there integration tests where needed?

Suggest missing test cases and improvements.
Documentation and clarity review
Review PR #345 for clarity and documentation.

Check:
- Is the code self-documenting?
- Are complex algorithms explained?
- Is the PR description clear and complete?
- Are breaking changes documented?
- Are new APIs documented?
- Would a new team member understand this code?

Suggest where more documentation or clarity is needed.
When reviewing PRs, the agent should be objective, direct, and concise. Be specific (reference line numbers and files), suggest improvements with examples, explain the "why" behind suggestions, and distinguish between blocking issues and nice-to-haves. Skip nitpicky feedback and fluff. Focus on: correctness, maintainability, performance, security, and consistency.
The agent should flag common issues during review: null pointer exceptions, off-by-one errors, race conditions, missing error handling, unclosed resources, infinite loops or recursion, type coercion bugs, and incorrect async/await usage. For each issue, reference the specific line and explain why it's problematic with a suggested fix.
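For example, a dropped promise is the kind of issue the agent should call out with a concrete fix. A minimal TypeScript sketch, with hypothetical types and a stubbed `db` client purely for illustration:

```typescript
// Hypothetical types and database client, only to keep the example self-contained.
interface User { id: string; name: string; }
declare const db: { updateUser(user: User): Promise<void> };

// Before (what the agent flags): the promise is not awaited, so failures are
// silently swallowed and callers continue before the write completes.
function saveProfileBefore(user: User): void {
  db.updateUser(user); // missing await and error handling
}

// After (the suggested fix): await the call and surface failures to the caller.
async function saveProfileAfter(user: User): Promise<void> {
  try {
    await db.updateUser(user);
  } catch (err) {
    console.error("Failed to update user", user.id, err);
    throw err;
  }
}
```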
When reviewing a PR, the agent should analyze test coverage: identify untested code paths, flag missing edge cases, check that error cases are tested, verify integration tests for API changes, and ensure test names clearly describe what they test. Suggest specific test cases that should be added.
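A suggested test case from the agent might look like the following sketch (Jest-style syntax; the `parsePageSize` helper is made up so the example compiles):

```typescript
// Hypothetical helper under review, shown only so the tests have something to call.
function parsePageSize(raw: string | undefined, fallback = 20): number {
  const n = Number(raw);
  return Number.isInteger(n) && n > 0 ? n : fallback;
}

// Edge cases the agent might ask for: missing, non-numeric, and non-positive values.
describe("parsePageSize", () => {
  it("falls back when the query param is missing", () => {
    expect(parsePageSize(undefined)).toBe(20);
  });

  it("falls back on non-numeric input", () => {
    expect(parsePageSize("abc")).toBe(20);
  });

  it("rejects zero and negative sizes", () => {
    expect(parsePageSize("0")).toBe(20);
    expect(parsePageSize("-5")).toBe(20);
  });
});
```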
The agent should scan for security issues: SQL injection risks, XSS vulnerabilities, insecure authentication or authorization, exposed secrets or API keys, unsafe deserialization, CSRF vulnerabilities, and improper input validation. For each finding, explain the risk and provide a secure alternative.
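A SQL injection finding, for instance, should pair the risky code with a parameterized alternative. A rough sketch assuming a node-postgres-style `query(text, values)` client; the names are illustrative:

```typescript
// Hypothetical query client with a node-postgres-style signature.
declare const pool: { query(text: string, values?: unknown[]): Promise<unknown> };

// Risky (what the agent flags): user input is concatenated into the SQL string,
// so a crafted email value can change the query's meaning.
async function findUserRisky(email: string) {
  return pool.query(`SELECT * FROM users WHERE email = '${email}'`);
}

// Secure alternative: pass the value as a parameter so it is never parsed as SQL.
async function findUserSafe(email: string) {
  return pool.query("SELECT * FROM users WHERE email = $1", [email]);
}
```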
When reviewing code, the agent should identify performance concerns: inefficient algorithms or data structures, N+1 query patterns, unnecessary re-renders or recomputations, blocking operations in critical paths, memory leaks, and missing pagination or rate limiting. Suggest optimizations with expected impact.
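An N+1 finding would typically show the per-item query alongside a batched alternative and state the expected impact. A sketch with an assumed data-access layer:

```typescript
// Hypothetical data-access layer, only to keep the sketch self-contained.
interface Order { id: string; customerId: string; }
declare const db: {
  getOrders(): Promise<Order[]>;
  getCustomer(id: string): Promise<unknown>;
  getCustomersByIds(ids: string[]): Promise<unknown[]>;
};

// N+1 pattern (what the agent flags): one database round trip per order.
async function loadCustomersSlow() {
  const orders = await db.getOrders();
  return Promise.all(orders.map((o) => db.getCustomer(o.customerId)));
}

// Suggested optimization: batch the lookups into a single query.
// Expected impact: N+1 round trips become 2.
async function loadCustomersFast() {
  const orders = await db.getOrders();
  const ids = [...new Set(orders.map((o) => o.customerId))];
  return db.getCustomersByIds(ids);
}
```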
The agent should check for code quality issues: overly complex functions (high cyclomatic complexity), duplicated code, unclear variable names, missing documentation for complex logic, inconsistent formatting or style, and violations of SOLID principles. Focus on maintainability and readability.
Before reviewing code, the agent should check the PR description: Does it explain what changed and why? Are there screenshots for UI changes? Is there a test plan? Are breaking changes documented? Are risks or open questions mentioned? If the description is incomplete, ask for clarification before proceeding with the review.
The agent should learn your codebase conventions: how errors are handled, how components are structured, naming conventions, testing patterns, and architectural decisions. When reviewing, flag code that deviates from established patterns and suggest how to align with existing conventions.
The agent should categorize feedback as: blocking (must fix before merge), important (should fix but not blocking), nice-to-have (optional improvements), or question (seeking clarification). This helps PR authors prioritize what to address first. Always be clear about what's required vs. suggested.
When the agent identifies an issue, it should: show the problematic code, explain why it's a problem, provide a specific code example of how to fix it, and explain the benefits of the suggested approach. Avoid vague feedback like "this could be better"—make every comment actionable.
When you use the prompts above, Edit Mode automatically implements the right tools for your agent. You don’t need to code anything; just describe what you want. Common tools include:
As you test, you’ll discover what works and what needs adjustment. Use these Edit Mode prompts to refine your agent’s behavior:
If reviews are too nitpicky
The agent should focus on meaningful issues, not style preferences or minor nitpicks. Skip comments about: formatting (if linters pass), variable naming (unless truly confusing), personal preferences without clear benefits, and hypothetical future scenarios. Focus on bugs, security, performance, and maintainability issues that matter.
If reviews miss important issues
The agent should always check for: error handling (what happens when things fail?), edge cases (empty arrays, null values, boundary conditions), security (input validation, auth checks), performance (loops in loops, inefficient queries), and test coverage (are critical paths tested?). Never complete a review without checking these fundamentals.
If feedback is too vague
Every review comment must be specific and actionable. Instead of "this could be better," say "this function has O(n²) complexity because of the nested loops—consider using a hash map to reduce it to O(n)." Always include: what the issue is, why it matters, and a concrete suggestion for fixing it.
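To illustrate the kind of rewrite that example comment points to, here is a rough TypeScript sketch; the data shapes are invented for the example:

```typescript
interface Order { customerId: string; total: number; }
interface Customer { id: string; name: string; }

// O(n²): for every order we scan the whole customer list.
function attachNamesSlow(orders: Order[], customers: Customer[]) {
  return orders.map((order) => ({
    ...order,
    customerName: customers.find((c) => c.id === order.customerId)?.name,
  }));
}

// O(n): build a Map once, then do constant-time lookups per order.
function attachNamesFast(orders: Order[], customers: Customer[]) {
  const byId = new Map(customers.map((c) => [c.id, c] as const));
  return orders.map((order) => ({
    ...order,
    customerName: byId.get(order.customerId)?.name,
  }));
}
```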
If the agent is too harsh
The agent should be constructive, not critical. Frame feedback positively: "This looks good—one thing to consider is..." instead of "This is wrong." Acknowledge what's done well before suggesting improvements. Be objective about technical tradeoffs, not judgmental about decisions.
If reviews take too long
The agent should provide timely feedback. For small PRs (under 200 lines), review within minutes. For larger PRs, start with high-level feedback on architecture and approach, then dive into details. Don't block on minor issues—mark them as non-blocking and let the author decide whether to address them.
When you’re getting started:
Start with basic code reviews focusing on correctness and bugs
Test the agent on small PRs (under 200 lines) first
Calibrate feedback style—constructive but direct
Review the agent’s reviews to ensure quality
When you’re using it:
Tag your agent on PRs before requesting human review
Use it for quick sanity checks and early feedback
Ask for focused reviews (security, performance, etc.) when needed
Don’t skip human reviews—use the agent as a complement
When you’re ready to scale:
Build a knowledge base of your codebase patterns
Train it on past review comments from your team
Integrate with your CI/CD pipeline for automatic reviews (see the sketch after this list)
Track metrics on review quality and caught issues
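For the CI/CD item above, one possible starting point is a small script that runs on each new pull request and mentions your agent so it picks up the review. This is only a sketch: it assumes a GitHub repository, a hypothetical @blink-reviewer handle for your agent, and a CI-provided GITHUB_TOKEN; adapt the trigger and the mention to however your agent is actually invoked.

```typescript
import { Octokit } from "@octokit/rest";

// Assumption: GITHUB_TOKEN is available in the CI environment.
const octokit = new Octokit({ auth: process.env.GITHUB_TOKEN });

async function requestAgentReview(owner: string, repo: string, prNumber: number) {
  // A pull request is an issue for commenting purposes, so issues.createComment works here.
  await octokit.rest.issues.createComment({
    owner,
    repo,
    issue_number: prNumber,
    body: "@blink-reviewer please review this PR before human review.",
  });
}

// Example: call this from a CI job that runs on pull request events.
requestAgentReview("your-org", "your-repo", 123).catch((err) => {
  console.error("Failed to request agent review", err);
  process.exit(1);
});
```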
Focus on catching real bugs and security issues first. You can always add sophistication like learning from patterns or generating test suggestions later.