Your Code. Your Models. Your Infrastructure. Always.

Governments can and do compel AI companies to retain data, break privacy agreements, and comply with national security mandates. Fabric is the only AI coding IDE that moves with you: reconfigure to run on-premise at any time, bring all your work with you, and switch models instantly.

Fabric Enterprise is an AI coding IDE built for sovereign deployment. It can be reconfigured to run entirely on-premise at any time — all conversation histories, context, and work come with you. Switch seamlessly between cloud models and self-hosted alternatives like GLM-5, Qwen, or Llama. No vendor lock-in, no data hostage scenarios. Available for macOS, Windows, and Linux.

Built for Security-First Organizations

Every architectural decision in Fabric prioritizes data sovereignty and compliance.

On-Premise in One Move

Reconfigure Fabric to run entirely on-premise at any time. All conversation histories, context, and work migrate with you. No data left behind, no vendor lock-in. Your work is yours — architecturally, not just contractually.

Seamless Model Switching

Switch instantly between cloud models (Claude, GPT, Gemini) and self-hosted alternatives (GLM-5, Qwen, Llama, Mistral). When geopolitics shift, your dev tools don't have to. One config change, zero downtime.
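Such a switch might look like the following configuration sketch. The file name, keys, and values here are illustrative assumptions, not Fabric's published schema; the point is that only the endpoint and model name change:

```yaml
# fabric.yaml (hypothetical) -- swap a cloud model for a self-hosted one
model:
  # Before: managed cloud provider
  # provider: anthropic
  # name: claude-sonnet
  # After: self-hosted model behind an OpenAI-compatible endpoint
  provider: openai-compatible
  base_url: https://llm.internal.example.com/v1  # your vLLM/Ollama gateway
  name: qwen-coder
  api_key_env: FABRIC_MODEL_KEY                  # key read from env, not stored in config
```

Because the self-hosted endpoint speaks the same API shape as the cloud provider, nothing else in the workspace needs to change.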

Sovereignty by Architecture

License terms can be overridden by government mandate. Fabric's sovereignty is architectural — your data physically cannot leave your environment in on-premise mode. No policy change, executive order, or acquisition can change that.

Your Models, Your Keys

Bring any model — self-hosted GLM-5, Qwen 3.5, Llama, or managed APIs from OpenAI, Anthropic, Google. Yes, local models trade peak performance for sovereignty. That trade-off is yours to make, not your vendor's.

End-to-End Encryption

All data encrypted in transit (TLS 1.3) and at rest (AES-256). API keys encrypted with AES-256-GCM. Hardware security module support available.

Zero Data Retention

No code, prompts, or artifacts are retained by Fabric's services, logged, or used for training; your histories live only where you choose to store them. In a world where AI companies are being compelled to retain user data, Fabric's zero-retention architecture means there's nothing to compel.

Built for Global Compliance

Whether you operate under GDPR, PIPEDA, or defense-grade classification requirements, Fabric's on-premise architecture meets you where your regulations are.

European Union & GDPR

GDPR restricts transfers of personal data outside the EU unless adequate safeguards exist. Most AI IDEs route code through US-based servers, a compliance risk under Schrems II. Fabric deploys on-premise within your EU infrastructure. No data crosses borders. No adequacy decisions required.

Canada & PIPEDA

Canadian organizations face increasing scrutiny on cross-border data transfers to US cloud providers, especially given recent court orders compelling US tech companies to retain user data. Fabric keeps your code and AI interactions entirely within Canadian infrastructure.

Defense & Classified Environments

Fabric supports fully air-gapped deployment for classified and restricted environments. No external network calls, no cloud dependencies. Compatible with ITAR, FedRAMP, and national security frameworks across NATO-allied nations.

Data Residency Guarantees

Choose exactly where your data lives. Fabric's on-premise deployment means your code, prompts, and AI context never leave your chosen jurisdiction. No reliance on a vendor's data center locations or their promises about data routing.

Enterprise Capabilities

Custom Fine-Tuned Models

Train models on your codebase, coding standards, and domain knowledge. Fabric's fine-tuning pipeline creates models that write code the way your team does — consistent, compliant, and context-aware.

Role-Based Access Control

Granular RBAC for teams of any size. Control who can access which models, view analytics, manage billing, and configure workspace rules. Integrates with your existing identity provider.

SSO & Identity Federation

SAML 2.0 and OIDC single sign-on. Integrate with Okta, Azure AD, Google Workspace, or any standards-compliant identity provider. Enforce MFA, conditional access, and session policies.

Usage Analytics & Reporting

Real-time dashboards showing AI usage by team, model, and project. Track token consumption, cost allocation, and productivity metrics. Export to your BI tools via API.

Ready to Deploy?

Talk to our team about air-gapped deployment, custom models, and enterprise pricing.

Contact Sales

Enterprise FAQ

Can Fabric run completely air-gapped with no internet access?

Yes. Fabric can be reconfigured at any time to run entirely on-premise with no external network calls. All your conversation histories, context, and work come with you. Run self-hosted models (GLM-5, Qwen, Llama, Mistral) locally — the IDE functions completely offline.

What happens if I need to move off cloud AI providers?

That's exactly what Fabric is built for. Switch from cloud models to self-hosted alternatives with a single configuration change. Your work, context, and history are stored locally — nothing is held hostage in a vendor's cloud. You trade some peak performance for definitive sovereignty, and that trade-off is yours to make whenever you want.

Why can't I just rely on my AI provider's privacy policy?

Because privacy policies are contractual promises, and governments can override contracts. OpenAI was ordered by a US court to retain conversations past their agreed deletion dates. AI companies face lawsuits, fines, and national security mandates that can change their data practices overnight. Fabric's sovereignty is architectural, not contractual: in on-premise mode, your data physically cannot leave your environment.

What models can I use on-premise?

Any model you can host: GLM-5, Qwen 3.5 (397B, 27B, 35B-A3B), Llama, Mistral, DeepSeek, or any model served via an OpenAI-compatible API (vLLM, Ollama, TGI). Yes, these models trade some peak performance against frontier 2T-parameter MoEs, but they give you definitive sovereignty.
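The OpenAI-compatible interface is what makes this portability practical: vLLM, Ollama, and TGI all accept the same request shape, so moving between a cloud provider and a self-hosted server is a matter of pointing the client at a different base URL. A minimal sketch (the URLs and model names below are illustrative, not Fabric's actual configuration):

```python
import json

def chat_request(base_url: str, model: str, prompt: str) -> tuple[str, bytes]:
    """Build an OpenAI-compatible /chat/completions request.

    Because vLLM, Ollama, and TGI all speak this API, only base_url
    and model change when moving between cloud and self-hosted.
    """
    url = f"{base_url.rstrip('/')}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

# Cloud endpoint vs. a self-hosted vLLM server: identical request shape.
cloud_url, cloud_body = chat_request("https://api.openai.com/v1", "gpt-4o", "hello")
local_url, local_body = chat_request("http://vllm.internal:8000/v1", "qwen-coder", "hello")
```

Nothing about the request body references the hosting provider, which is why switching backends does not require rewriting any tooling built on top of it.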

How is Fabric different from Cursor or Copilot for enterprise?

Cursor and Copilot require cloud connectivity and route your code through US-based servers subject to US jurisdiction. Fabric is the only AI coding IDE that you can reconfigure to run fully on-premise at any time, bringing all your work with you and switching models instantly. No other IDE offers this level of vendor independence.

What's the deployment architecture?

Fabric deploys as containers (Docker/Kubernetes) in your environment. The architecture includes the IDE client, an API gateway, model routing layer, and optional analytics collector. All components can run within a single VPC or across multiple availability zones.
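Those components could be laid out roughly as in the following sketch. The service names, images, and ports are hypothetical placeholders, not Fabric's published artifacts:

```yaml
# docker-compose.yml (hypothetical) -- the components named above
services:
  api-gateway:
    image: fabric/api-gateway:latest     # placeholder image name
    ports: ["443:8443"]
  model-router:
    image: fabric/model-router:latest
    environment:
      MODEL_BACKEND: http://vllm:8000/v1 # OpenAI-compatible, self-hosted
  vllm:
    image: vllm/vllm-openai:latest       # serves a local model
  analytics:
    image: fabric/analytics:latest       # optional collector
```

In an air-gapped deployment, the same topology applies with no routes to external networks; the model router targets only in-cluster endpoints.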