Overview

This page outlines practical guidance for working with AI (not just Blink) safely and responsibly. It focuses on protecting secrets and personal data, reducing misuse risks (like prompt injection), and setting up effective guardrails for day‑to‑day usage.

What Not to Share (Ever)

  • Secrets and credentials: API keys are like passwords that give access to paid services, while access tokens let applications act on your behalf. Private keys, SSH keys, and database credentials are digital keys that unlock systems and data. Never paste these into any chat or conversation, as they could be stored and potentially accessed by others.
  • Sensitive customer or employee data: Personal identifiers like social security numbers, health information protected by privacy laws, and financial details like credit card numbers should never be shared. This data is often legally protected and sharing it could violate privacy regulations.
  • Confidential business information: Unreleased intellectual property, legal documents under attorney-client privilege, incident postmortems that reveal security vulnerabilities, and internal security procedures should remain private to protect your organization.
  • Internal system details: URLs to internal systems, server addresses, and infrastructure details not meant for public access could help attackers target your organization’s systems.

If you accidentally paste a secret into any chat, rotate that credential immediately (i.e., replace it with a new one) and review where it may have been used to confirm no unauthorized access occurred.
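
A quick automated check before sharing text can catch the most obvious credentials. Below is a minimal sketch in Python; the pattern names and regexes are illustrative assumptions, and real scanners (such as gitleaks or truffleHog) use far larger rule sets:

```python
import re

# Illustrative patterns only -- a real secret scanner covers many more formats.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "generic_api_key": re.compile(r"(?i)(api[_-]?key|token)\s*[:=]\s*\S{16,}"),
}

def find_secrets(text: str) -> list[str]:
    """Return the names of patterns that match, so you can redact before sharing."""
    return [name for name, pattern in SECRET_PATTERNS.items() if pattern.search(text)]
```

Run this over anything you are about to paste; a non-empty result means stop and redact first.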

Core Principles

  • Minimize data sharing: Only include information that’s directly necessary for the AI to complete your specific task. Extra context might seem helpful, but it increases the risk of accidentally sharing sensitive information.
  • Keep humans involved in important decisions: AI should assist human decision-making, not replace it entirely, especially for choices involving legal compliance, financial commitments, or safety-critical situations.
  • Always verify AI outputs: Treat AI responses as first drafts that need human review. Check facts, test any code suggestions, and ensure outputs comply with your organization’s policies and brand guidelines.

Security and Privacy

  • Store credentials securely: Environment variables are settings stored on your computer or server that applications can read without the credentials being visible in your code. Secret managers are specialized tools designed to store and protect sensitive information. Never write credentials directly into code or paste them in chats.
  • Keep credentials on the server side: In web and mobile applications, sensitive credentials should only exist on backend servers, not in browsers or mobile apps where users could potentially access them.
  • Regularly update credentials: Periodically changing your API keys and passwords reduces the risk if they’re ever compromised. Monitor your usage and costs for any unexpected spikes that might indicate unauthorized access.
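
The environment-variable advice above can be sketched in a few lines of Python. `EXAMPLE_API_KEY` is a placeholder name for illustration, not a real service's variable:

```python
import os

def get_api_key() -> str:
    """Read the key from the environment instead of hardcoding it in source."""
    key = os.environ.get("EXAMPLE_API_KEY")
    if not key:
        # Failing loudly beats silently falling back to a key pasted in code.
        raise RuntimeError("EXAMPLE_API_KEY is not set; configure it via your secret manager or deployment environment.")
    return key
```

Because the value lives outside the codebase, it never appears in commits, diffs, or anything you might paste into a chat.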

Safer Prompting Patterns

  • Set clear boundaries upfront: Tell AI what topics to avoid, what safety rules to follow, and whether you need citations for factual claims. This helps prevent problematic outputs before they occur.
  • Provide good examples: Showing the AI 2-3 examples of what you want (called “few-shot prompting”) is usually more effective than vague descriptions of your desired output.
  • Summarize instead of dumping: Rather than pasting entire documents or datasets, provide concise summaries or only the most relevant excerpts to reduce the risk of sharing sensitive information.
  • Request uncertainty indicators: Ask AI to tell you when it’s not confident about something and to provide sources for factual claims so you can verify important information.
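
The few-shot pattern above can be sketched as a small message-building helper. The role/content dict layout follows the common chat-API convention; adapt it to whichever SDK you actually use:

```python
def build_fewshot_messages(task: str, examples: list[tuple[str, str]], query: str) -> list[dict]:
    """Assemble a chat-style message list: system instructions, then
    2-3 worked (input, output) examples, then the real query."""
    messages = [{"role": "system", "content": task}]
    for user_text, assistant_text in examples:
        messages.append({"role": "user", "content": user_text})
        messages.append({"role": "assistant", "content": assistant_text})
    messages.append({"role": "user", "content": query})
    return messages
```

The examples demonstrate the exact output format you want, which usually steers the model better than describing that format in words.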

Do’s and Don’ts

Do:
  • Use minimal access credentials: Give AI systems only the minimum permissions they need to function, and use temporary credentials that expire automatically rather than permanent ones.
  • Choose enterprise AI plans: Business and enterprise plans typically offer better privacy protections, configurable data retention policies, and options to prevent your data from being used to improve AI models.
  • Maintain change records: Keep track of significant changes made with AI assistance so you can review, audit, or reverse them if necessary.

Don’t:
  • Share sensitive information in chats: Never paste passwords, personal data, or confidential information into AI conversations, as this data might be stored or used in ways you don’t expect.
  • Put credentials in user-facing applications: Never include API keys or other sensitive credentials in websites, mobile apps, or other software that users can access directly.
  • Trust AI outputs without verification: Always fact-check important information, test code suggestions, and review AI-generated content before using it in important contexts.
  • Let AI make final decisions in critical areas: In regulated industries, safety-critical systems, or high-stakes business decisions, AI should inform human decision-makers rather than making autonomous choices.
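
The "temporary credentials that expire automatically" advice can be illustrated with a small wrapper that refuses to hand out a token past its expiry. `TemporaryCredential` is a hypothetical helper for this sketch, not a real library class; managed services (e.g., cloud-provider token services) do this for you in practice:

```python
from datetime import datetime, timedelta, timezone

class TemporaryCredential:
    """Hypothetical wrapper: holds a token with a fixed time-to-live."""

    def __init__(self, token: str, ttl_minutes: int = 60):
        self.token = token
        self.expires_at = datetime.now(timezone.utc) + timedelta(minutes=ttl_minutes)

    def get(self) -> str:
        # A leaked token is only useful until expiry, limiting the blast radius.
        if datetime.now(timezone.utc) >= self.expires_at:
            raise PermissionError("Credential expired; request a fresh one.")
        return self.token
```

Pairing short lifetimes with minimal permissions means a compromised credential is both weak and short-lived.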