Overview

Blink lets you choose which LLM you want to use within the platform. These models are currently available through Blink’s hosted service, and we will soon allow users to bring their own LLMs for use within Blink.

How to Switch Models

In the chat interface, the LLM selector in the chat field lets you switch between available models to suit different tasks and preferences. We’ve seen the most success with two primary models, each with distinct strengths:

Claude Sonnet 4

Best for: Rapid back-and-forth iteration on smaller, self-contained tasks
  • Speed: Optimized for quick responses and snappy feedback loops, delivering results in seconds rather than minutes to keep you in an active development flow.
  • Use cases: Ideal for repository research, code generation, and quick iterations where immediate feedback is more valuable than exhaustive analysis.
  • Cost: Very cost-effective for frequent interactions due to its efficient processing and lower computational requirements per request.
  • Ideal when: You need immediate feedback and want to maintain momentum in an active development session without waiting for lengthy processing times.

GPT-5 Max

Best for: Complex, longer-running tasks that benefit from deep reasoning
  • Reasoning: Provides maximum possible reasoning capabilities for thorough analysis, complex problem-solving, and multi-step logical processes that require deep understanding.
  • Quality: Consistently delivers higher quality outputs for complex tasks through more comprehensive analysis and consideration of edge cases and implications.
  • Cost: More cost-effective than Claude Sonnet 4 for extended work sessions due to its ability to handle complex tasks in fewer iterations.
  • Ideal when: You can initiate a task and step away for several minutes while Blink performs deep analysis, comprehensive reviews, or complex implementations.

Using Models Together

We recommend using GPT-5 Max and Claude Sonnet 4 together, leveraging the strengths of each model within the same project:

Example Workflow

  1. Quick Implementation: Use Claude Sonnet 4 to create a small feature quickly
  2. Deep Review: Switch to GPT-5 Max to review the implementation patterns
  3. Detailed Feedback: Get in-depth code review feedback and architectural insights

When to Use Each Model

Claude Sonnet 4 for:
  • Quick code generation when you need immediate results
  • Repository exploration and codebase research that requires fast navigation and understanding
  • Rapid prototyping sessions where speed of iteration is more important than perfection
  • Interactive debugging conversations where you need quick responses to maintain problem-solving momentum
  • Fast iterations on existing code where you’re making incremental improvements or fixes

GPT-5 Max for:
  • Complex architectural decisions that require deep analysis of trade-offs, scalability, and long-term implications
  • Comprehensive code reviews that need thorough examination of security, performance, and maintainability concerns
  • Long-form analysis and documentation tasks that benefit from detailed explanations and comprehensive coverage
  • Multi-step problem solving that involves breaking down complex issues into manageable components and solutions
  • Detailed documentation creation that requires thorough understanding of systems and clear, comprehensive explanations

Best Practices

  • Experiment: Try both models across different types of tasks to understand their unique capabilities and discover which model works best for your specific workflow patterns.
  • Match the task: Choose the model that aligns with your current workflow needs, considering both the complexity of the task and your time constraints.
  • Combine strengths: Use both models together within the same project to create comprehensive development workflows that leverage speed when needed and depth when required.
  • Consider timing: Use Claude Sonnet 4 when you need immediate responses to maintain development flow, and GPT-5 Max when you can afford to wait for deeper analysis and more thorough results.