Blink lets you talk to Edit Mode and describe the agent you wish you had. For research and web browsing work, the goal is to craft a partner that can scan the web, compare findings, keep track of promising leads, and report back in the style you prefer. This page focuses on the creative side: what capabilities to request, what personality to give your agent, and concrete prompts you can copy and paste to get started quickly.
Here are examples of what you can ask your research agent to do once you’ve built it. These show the end result: scroll down to see how to build an agent that can handle these requests.
Landscape research
Do a landscape scan of the top 3 React state management libraries actively maintained in 2024. For each, include: GitHub stars, last update date, key differentiator, ideal use case, and major drawbacks. Provide links to official docs.
Tool comparison
Compare Docusaurus vs Mintlify for developer documentation. I'm a startup with 2 engineers. Focus on: setup time, customization options, cost, and ongoing maintenance burden. Make a recommendation.
Fact checking
I read that Next.js 14 improved performance by 40% with server actions. Can you verify this claim? Find the original source and check if other benchmarks confirm it.
Deep dive with follow-ups
Explain how Vercel's edge runtime works.
Then follow up with:
Can you find some example repos that use edge functions effectively?
And:
What are the main limitations I should know about?
Link verification
Check all the links you just shared. Let me know if any are dead, paywalled, or redirecting unexpectedly.
Export research
Summarize everything we've discussed about database options for real-time apps. Format it as a brief I can share with my team on Slack.
Add a web search tool so the agent can find current information online. It should search for 5-10 results and extract the full text content, not just titles and snippets.
Every claim the agent makes should include an inline citation like [1] [2], with full source details at the end. If I ask for elaboration on a specific point, it should cite additional sources.
Give the agent the ability to compare multiple sources on the same topic. It should create a comparison table showing: key claims, level of detail, recency, and whether sources agree or contradict each other.
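Under the hood, a comparison like this is just structured findings rendered as a table. As a rough illustration of what the agent ends up producing (function and field names here are hypothetical, not part of Edit Mode's output), the logic might resemble:

```python
# Hypothetical sketch: render per-source findings as a Markdown comparison table.
def comparison_table(rows):
    """rows: list of dicts sharing the same keys; returns a Markdown table string."""
    headers = list(rows[0].keys())
    lines = [
        "| " + " | ".join(headers) + " |",
        "| " + " | ".join("---" for _ in headers) + " |",
    ]
    for row in rows:
        lines.append("| " + " | ".join(str(row[h]) for h in headers) + " |")
    return "\n".join(lines)

table = comparison_table([
    {"Source": "Official docs", "Key claim": "40% faster", "Recency": "2024", "Agrees?": "Yes"},
    {"Source": "Blog post", "Key claim": "25% faster", "Recency": "2023", "Agrees?": "Partially"},
])
```

The point of asking for a fixed set of columns (claims, detail, recency, agreement) is that disagreements between sources become visible at a glance instead of being buried in prose.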
Give the agent a friendly, collaborative tone. It should act like a helpful research partner, not a know-it-all. Before diving into research, it should summarize its research plan and ask if the approach sounds right.
Add guidance so the agent double-checks any link before sharing it. It should warn me if a source looks outdated (more than 2 years old), paywalled, or from a questionable domain. For technical topics, it should prefer official documentation and established tech blogs.
The agent should ask clarifying questions before starting broad searches. It should confirm: target audience, timeframe (current info vs historical), desired level of technical depth, and whether I need sources I can share publicly.
Add a notes system that persists across messages. For each research topic, track: key findings, unanswered questions, best sources found so far, and recommended next actions. Let me ask "show me notes on [topic]" at any time.
When presenting findings, the agent should use this structure: 1) Executive summary (2-3 sentences), 2) Key findings as bullets, 3) Full sources list with titles and URLs, 4) Suggested next steps or open questions.
The agent should indicate confidence levels for findings: "High confidence" (3+ agreeing sources), "Medium confidence" (1-2 sources), or "Uncertain" (conflicting info or single questionable source). Always explain why.
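The rubric in this prompt is precise enough to state as a simple rule. A minimal sketch of that rule (thresholds and labels come from the prompt above; the function itself is hypothetical):

```python
def confidence_label(agreeing_sources, conflicting=False, questionable=False):
    """Apply the rubric: 3+ agreeing sources = High, 1-2 = Medium, else Uncertain."""
    if conflicting or questionable:
        return "Uncertain"  # conflicting info or a single questionable source
    if agreeing_sources >= 3:
        return "High confidence"
    if agreeing_sources >= 1:
        return "Medium confidence"
    return "Uncertain"
```

Spelling the rubric out this concretely in your prompt matters: a vaguer instruction like "indicate confidence" leaves the agent to invent its own inconsistent scale.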
Add slash commands: /summary for a quick overview of current research, /sources to list all references, /questions to see what's still unclear, /export to format everything as a shareable document.
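Slash commands like these typically boil down to a small dispatch table that maps a command prefix to a handler. A hedged sketch of that pattern (handler bodies and the notes structure are placeholders, and only three of the four commands are shown):

```python
# Hypothetical slash-command dispatcher; handler contents are illustrative placeholders.
COMMANDS = {
    "/summary":   lambda notes: "Overview: " + "; ".join(notes["findings"][:3]),
    "/sources":   lambda notes: "\n".join(notes["sources"]),
    "/questions": lambda notes: "\n".join(notes["open_questions"]),
}

def handle(message, notes):
    command = message.split()[0]
    if command in COMMANDS:
        return COMMANDS[command](notes)
    return None  # not a slash command; fall through to normal chat
```

Because the commands are an explicit, closed set, the agent can answer them instantly from its notes instead of launching a fresh search.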
When you use the prompts above, Edit Mode automatically implements the right tools for your agent. You don’t need to code anything; just describe what you want. Common tools include:
Web search - Find current information across the internet
Content extraction - Pull full text from URLs and PDFs
File operations - Read documents, save research summaries and reports
Bash execution - Run scripts to download or process data
Edit Mode handles all the technical implementation. Your job is to describe the agent’s behavior and capabilities.
As you test, you’ll discover what works and what needs adjustment. Use these Edit Mode prompts to refine your agent’s behavior:
If the agent is too verbose
The responses are too long. Keep summaries under 200 words and use bullet points for details. Only include the top 3 most relevant sources unless I ask for more.
If citations are missing
The agent isn't consistently citing sources. Make it mandatory: never make a factual claim without a citation. If no source exists, say "I couldn't verify this."
If the agent searches too broadly
Before searching, the agent should ask me to narrow the scope. For example: "Should I focus on solutions compatible with TypeScript?" or "Do you want enterprise or open-source options?"
If you want better source quality
Prioritize these sources: official documentation, established tech blogs (like the Vercel blog, GitHub blog), recent conference talks, and papers from known institutions. Avoid: random Medium posts, outdated Stack Overflow answers, marketing sites.
If the research feels shallow
For technical topics, the agent should read full documentation pages, not just skim landing pages. It should look for: architecture decisions, performance characteristics, known issues, and community sentiment.
Once your basic agent works well, consider these enhancements:
Multi-step research workflows
Create a research workflow: 1) Initial broad search to identify top options, 2) Deep dive on the top 3 with pros/cons, 3) Fact-check key claims, 4) Generate comparison table, 5) Make recommendation with rationale. Let me approve or adjust the direction after each step.
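The five steps in this prompt amount to a pipeline with an approval gate between stages. A rough sketch of that control flow (the step functions and their outputs are hypothetical stand-ins):

```python
def run_workflow(steps, approve):
    """Run research steps in order; `approve` is asked after each step and can halt."""
    results = []
    for name, step in steps:
        results.append((name, step()))
        if not approve(name):
            break  # user adjusted or stopped the direction
    return results

# Placeholder steps standing in for real searches and analysis.
steps = [
    ("broad search", lambda: ["option A", "option B", "option C"]),
    ("deep dive",    lambda: {"option A": "pros/cons..."}),
]
done = run_workflow(steps, approve=lambda name: True)
```

The approval gate is the important part of the prompt: without it, a multi-step agent will happily spend all five stages researching the wrong question.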
Source credibility scoring
Implement a credibility scoring system. Official docs = 10 points, established tech blogs = 8, GitHub repos with 1k+ stars = 7, recent Stack Overflow answers = 5, random blogs = 3. Only cite sources scoring 5+. Show scores in citations.
Automated monitoring
Add a monitoring mode: I should be able to say "track developments in [topic]" and the agent will periodically search for new information, then notify me of significant changes or announcements.
Research templates
Create templates for common research tasks: "framework evaluation", "vendor comparison", "security audit", "migration planning". Each template should define the structure, key questions to answer, and required source types.