Blink lets you choose which LLM you want to use within the platform. Currently, these LLMs are available through Blink's hosted service, but we will soon allow users to bring their own LLMs for use within Blink.
In the chat interface, you'll find an LLM selector in the chat field that lets you switch between available models to suit different tasks and preferences.
Claude Sonnet 4
Best for: Rapid back-and-forth iteration on smaller, self-contained tasks
Speed: Optimized for quick responses and snappy feedback loops, delivering results in seconds rather than minutes to keep you in an active development flow.
Use cases: Ideal for repository research, code generation, and quick iterations where immediate feedback is more valuable than exhaustive analysis.
Cost: Very cost-effective for frequent interactions due to its efficient processing and lower computational requirements per request.
Ideal when: You need immediate feedback and want to maintain momentum in an active development session without waiting for lengthy processing times.
GPT-5 Max
Best for: Complex, longer-running tasks that benefit from deep reasoning
Reasoning: Applies its deepest reasoning for thorough analysis, complex problem-solving, and multi-step logical processes that require deep understanding.
Quality: Consistently delivers higher quality outputs for complex tasks through more comprehensive analysis and consideration of edge cases and implications.
Cost: More cost-effective than Claude Sonnet 4 for extended work sessions due to its ability to handle complex tasks in fewer iterations.
Ideal when: You can initiate a task and step away for several minutes while Blink performs deep analysis, comprehensive reviews, or complex implementations.
Experiment: Try both models across different types of tasks to understand their unique capabilities and discover which model works best for your specific workflow patterns.
Match the task: Choose the model that aligns with your current workflow needs, considering both the complexity of the task and your time constraints.
Combine strengths: Use both models together within the same project, reaching for speed when you need momentum and depth when you need rigor.
Consider timing: Use Claude Sonnet 4 when you need immediate responses to maintain development flow, and GPT-5 Max when you can afford to wait for deeper analysis and more thorough results.
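The guidance above boils down to a simple decision rule, sketched below as a small Python helper. This is purely illustrative: the function name and the model identifier strings are hypothetical, not part of any Blink API — check the model selector in the chat field for the actual model names.

```python
# Illustrative sketch of the "match the task" heuristic described above.
# Model identifier strings are hypothetical, not official Blink names.

def choose_model(complex_task: bool, can_step_away: bool) -> str:
    """Pick a model per the guidance: use deep reasoning only when the
    task is complex AND you can afford to wait for thorough analysis;
    otherwise stay with the fast model to keep your feedback loop tight."""
    if complex_task and can_step_away:
        return "gpt-5-max"        # deep analysis, comprehensive reviews
    return "claude-sonnet-4"      # quick iterations, immediate feedback

# Quick fix you want feedback on right now -> fast model
print(choose_model(complex_task=False, can_step_away=False))  # claude-sonnet-4
# Large refactor you can leave running for several minutes -> deep model
print(choose_model(complex_task=True, can_step_away=True))    # gpt-5-max
```

Note the asymmetry: time pressure alone is enough to favor the fast model, since even a complex task loses value if waiting breaks your development flow.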