Invariant AI  ·  Public pledges

The promises we make to everyone who uses Invariant.

Last updated: April 2026
Scope: All Invariant services
Reading time: ~5 minutes

We build capable, honest AI for everyone's long-term well-being. That promise is easy to make and harder to keep. These commitments are how we try to stay accountable to it — in public, where anyone can hold us to them.

01

Honest by default

Our models are built to tell you the truth even when the truth is inconvenient. If a model doesn't know, it says so. If an answer is uncertain, it's flagged. We will not ship features that trick users into believing the AI is more capable, more confident, or more human than it is.

  • No fabricated certainty. When a model is guessing, we make it sound like a guess.
  • No hidden personas. You always know you're talking to Invariant and which product (Polygon, Airo) is responding.
  • No dark patterns. We don't use manipulative UI — no fake scarcity, no nagging modals you can't dismiss, no checkboxes that default against your interest.

02

Transparent about what we are

We publish what Invariant does, what it can't do, and what it costs — in plain language, before you sign up. When something material changes, we say so at the top of the product, not buried in a changelog.

  • Underlying models disclosed. Our news feed lists which provider and model powers each product release, including when we switch.
  • Plain-language terms. Terms and Privacy are written to be readable. If they're not, that's a bug — file it.
  • Versioned policies. Every policy page shows its effective date. Previous versions are available on request.

03

Your data, your control

You own what you put into Invariant. We use your inputs to run the product and keep it working — not to build a profile of you, not to sell to data brokers, not to train public models on your private conversations.

  • We do not sell your data. Ever. Not anonymized, not aggregated, not "de-identified."
  • No training on private chats without consent. If we ever want to use your content to improve our own models, we'll ask first and give you a clear opt-out.
  • Deletion means deletion. Account deletion wipes your conversations, quizzes, and personal data from live systems within 30 days, and from backups on the normal rotation.
  • Data portability. You can export everything you've created — chats, quizzes, sources — in an open JSON format.

04

Evaluation before shipping

Every model change and every major feature goes through an evaluation pass before it reaches you. We test for correctness, for safety, and for regressions against the previous version — and we don't ship if the numbers move the wrong way.

  • Held-out benchmarks. Each product has a fixed suite it has to pass: quiz accuracy for Airo, tool-use correctness for Polygon, plus refusal and jailbreak resistance.
  • Human review on edge cases. Automated scores aren't enough. Real people sample responses in the categories that matter most — medical, legal, minors, crisis.
  • Rollback ready. If a deployment regresses in production, we revert first and investigate second.

05

Fair and reviewable moderation

We use automated systems to catch abuse at scale. They make mistakes. When they do, you have the right to have a human read your case.

  • Automated actions are labeled as automated. If a system restricted your account, we tell you it was a system and what signal triggered it.
  • Appeals are read by humans. File an appeal and a real person responds — not a canned reply, not a chatbot.
  • We publish enforcement numbers. Our news feed includes periodic transparency reports: how many accounts were actioned, how many appeals we received, how many we reversed.
  • No shadow bans. If we restrict your account, we tell you; it will never just silently stop working.

06

We own our mistakes in public

Things will break. Models will be wrong. Features will ship with bugs. When that happens, we write it down and tell you — we don't quietly patch and hope nobody noticed.

  • Incident posts. User-visible outages, security incidents, and significant regressions get a post on our news feed with what happened, what we did, and what we'll change.
  • Security disclosure. Found a vulnerability? Email security@invariant.ai. We respond within two business days and won't pursue legal action against researchers acting in good faith.
  • Live status page. Real-time availability and historical uptime are published on our status page.

07

Accessible to everyone

"AI for everyone" is in our one-liner. It has to be more than a tagline. We hold ourselves to keeping the platform usable regardless of budget, device, or disability.

  • A real free tier. The free tier has to be genuinely useful on its own — not a 15-message teaser designed to force an upgrade.
  • Keyboard- and screen-reader-friendly. We test with keyboard-only navigation and common screen readers before we ship UI.
  • Works on modest hardware. We care about bundle size and first-paint because a laptop from 2018 on a phone tether still deserves a fast experience.

08

Thoughtful about minors and education

Airo is used by students. Polygon is used in classrooms. We take that seriously and refuse to treat learners as a growth metric.

  • No advertising to minors. We don't run ads, period — and we especially don't target anyone based on being a student.
  • Academic integrity first. Airo is a study tool; our prompts and features are designed to help you learn, not to generate finished homework.
  • Honest about limits. We're clear when an AI output should be checked against a real source, especially for STEM and factual questions.

09

Safety that isn't theater

"Safe AI" often means models that refuse harmless questions to look responsible. That's not us. We aim for models that are genuinely helpful on hard questions and genuinely firm on dangerous ones — and we're transparent about the line.

  • No performative refusals. If the model refuses, there's a real reason — and we'll tell you what category triggered it.
  • Hard limits, clearly stated. Weapons of mass harm, sexual content involving minors, targeted harassment, credential theft — those are hard nos and always will be.
  • Dual-use handled with care. Security research, medical questions, legal questions — we answer, with appropriate caveats, because refusing to engage is its own kind of harm.

10

Hold us to this

Commitments without accountability are marketing. If we violate one of these, we want to hear about it — and we want you to be able to check our work.

  • Public changelog. Every commitment on this page gets a timestamp. If we revise or retire one, the diff is public in our news feed.
  • Direct line. Write to hello@invariant.ai if you think we're failing one of these. A founder reads every one.
  • Right to walk away. You can delete your account and export your data at any time, for any reason, without talking to sales.