Getting Started with Fabric

Fabric is an AI coding IDE built for sovereignty, transparency, and model freedom. Setup takes minutes. This guide walks you through download, authentication, model configuration, and your first coding session.

Fabric is a sovereign AI coding IDE that can be reconfigured to run on-premise at any time. It supports any language model through BYOK, offers transparent AI reasoning that shows how the AI analyzes your codebase, and uses patented intelligent model routing to optimize cost and performance. All conversation history and context migrate with you across deployment modes.

Step 1: Download Fabric

macOS

Apple Silicon (M1/M2/M3/M4) and Intel. Universal binary — one download for both architectures.

Requires macOS 12 (Monterey) or later.

Windows

Windows 10 and Windows 11. Standard installer with optional PATH configuration.

Requires Windows 10 version 1903 or later.

Linux

AppImage and .deb packages available. Compatible with Ubuntu 20.04+, Fedora 38+, and other major distributions.

Requires glibc 2.31 or later.

Step 2: Sign In

Launch Fabric and sign in with your Google account. Authentication uses standard OAuth 2.0 — Fabric receives your name and email to create your account. No passwords are stored.

On first sign-in, Fabric creates your account and generates an encrypted API key for the LLM proxy. This key is encrypted at rest with AES-256-GCM and is never stored in plaintext. If you have a subscription, your key is configured automatically with the appropriate rate limits and model access.

Apple and GitHub OAuth sign-in options are also available. Enterprise teams can configure SSO with their identity provider.

Step 3: Choose Your Model

Fabric is model-agnostic. You choose how you want to access AI models based on your needs.

BYOK (Free Tier)

Bring your own API keys from any provider — OpenAI, Anthropic, Google, Mistral, DeepSeek, or any OpenAI-compatible endpoint. Paste your key into Fabric's settings, select your model, and start coding. No feature limits, no request caps. You pay the model provider directly.

Ideal for: Individual developers who already have API keys and want maximum flexibility.
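As an illustration, a BYOK configuration might look like the settings fragment below. The file location and every field name here are hypothetical, shown only to convey the shape of the setup; use Fabric's settings UI for the actual fields.

```
// hypothetical ~/.fabric/settings.json — field names are illustrative only
{
  "model_provider": "openai",
  "api_key": "sk-...",
  "model": "gpt-4o",
  "base_url": "https://api.openai.com/v1"
}
```

Any OpenAI-compatible provider follows the same pattern: swap the base URL, key, and model name.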

Fabric Hosted Models

Paid plans include pre-configured access to frontier models with intelligent routing. Fabric's proxy handles model selection, rate limiting, and cost optimization automatically. No API key management required — sign in and code.

Ideal for: Teams who want a managed experience with built-in cost optimization.

Self-Hosted Models

Point Fabric at your own model endpoint running vLLM, Ollama, or any OpenAI-compatible API. Use open-weight models like Qwen 3.5, Llama, Mistral, or GLM-5 on your own GPU infrastructure. Required for air-gapped and classified environments.

Ideal for: Enterprises requiring data sovereignty and air-gapped operation.
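As a sketch, serving an open-weight model on your own hardware with Ollama or vLLM might look like the commands below. The model names and port are examples, not recommendations; substitute whatever your infrastructure runs.

```shell
# Option A: Ollama (serves models on localhost:11434 by default)
ollama pull llama3.1
ollama serve

# Option B: vLLM (exposes an OpenAI-compatible API on the given port)
vllm serve meta-llama/Llama-3.1-8B-Instruct --port 8000

# Then point Fabric's endpoint setting at the server, e.g. http://localhost:8000/v1
```

Either option gives you an OpenAI-compatible endpoint, which is all Fabric needs to connect.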

Step 4: Start Coding

Chat Mode

Ask questions about your codebase, request explanations, or get implementation suggestions. Fabric analyzes your project context and provides answers with transparent reasoning — you see how it arrived at each response. Use this for exploration, planning, and understanding unfamiliar code.

Agentic Mode

Describe a task in natural language and let Fabric execute it: create files, modify code, run commands, and iterate. The AI works autonomously through multi-step tasks while showing its reasoning at each step. Review and approve changes before they are applied.

Autocomplete

Real-time code completion as you type. Fabric suggests completions based on your current file, open tabs, and project context. Completions adapt to your codebase's patterns and conventions. Works with any language and any configured model.

Voice-to-Text

Describe what you want to build by speaking naturally. Fabric converts speech to text and processes it as a prompt. Effective for rapid prototyping and hands-free coding workflows. Available on all platforms.

Configuring Intelligent Model Routing

Fabric's patented intelligent model routing automatically selects the optimal model for each task. Simple operations — autocomplete, boilerplate generation, test scaffolding — use fast, cost-effective models. Complex operations — multi-file refactoring, architectural decisions, security-critical code — use frontier reasoning models.

Routing is automatic by default. When you configure multiple models (or use Fabric's hosted access), the routing engine evaluates task complexity and selects the appropriate model. You can also override routing manually to force a specific model for any request.
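Fabric's actual routing logic is proprietary, but the idea can be sketched with a simple complexity heuristic. The model tier names, task labels, and thresholds below are illustrative only, not Fabric's real configuration:

```python
def route_model(task: str, files_touched: int) -> str:
    """Pick a model tier from rough task-complexity signals.

    Illustrative sketch only: a real router weighs many more signals
    (token counts, task type, security sensitivity, budget policy).
    """
    light_tasks = {"autocomplete", "boilerplate", "test-scaffold"}
    heavy_tasks = {"refactor", "architecture", "security-review"}

    if task in light_tasks and files_touched <= 1:
        return "fast-model"      # cheap, low-latency tier
    if files_touched > 3 or task in heavy_tasks:
        return "frontier-model"  # reasoning-heavy tier
    return "balanced-model"      # default mid tier
```

A manual override then amounts to bypassing this function and naming the model directly.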

For teams, administrators can configure routing policies at the organization level — setting budget thresholds, restricting model access by team or project, and monitoring usage analytics. This creates predictable costs at scale without sacrificing quality where it matters.
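An organization-level policy might be expressed as a configuration like the sketch below. Every field name is hypothetical and shown only to illustrate the kinds of controls described above:

```
// hypothetical routing policy — field names are illustrative only
{
  "routing_policy": {
    "monthly_budget_usd": 5000,
    "teams": {
      "platform": { "allowed_models": ["frontier-model", "balanced-model"] },
      "interns":  { "allowed_models": ["fast-model"] }
    },
    "fallback": "balanced-model"
  }
}
```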

Going On-Premise

For enterprise teams that need data sovereignty, Fabric can be reconfigured to run entirely on your infrastructure at any time.

Configure Self-Hosted Models

Deploy open-weight models (Qwen 3.5, Llama, Mistral, GLM-5) on your GPU infrastructure using vLLM or Ollama. Point Fabric at your endpoint URL. All AI processing happens on your hardware — zero external API calls.
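Whatever serves the model, the connection uses the standard OpenAI-compatible chat-completions API. A minimal sketch of building such a request with the Python standard library, assuming a placeholder endpoint URL and model name for your deployment:

```python
import json
from urllib import request

def build_chat_request(base_url: str, model: str, prompt: str) -> request.Request:
    """Construct an OpenAI-compatible /chat/completions request."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Placeholder endpoint and model name for a local deployment
req = build_chat_request("http://localhost:8000/v1", "my-local-model", "hello")
print(req.full_url)  # http://localhost:8000/v1/chat/completions
```

Because the request never leaves your network, this same shape works unchanged in air-gapped deployments.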

Migrate Your Work

All conversation history, project context, settings, and workflow configurations migrate seamlessly between cloud and on-premise deployment. No data is lost. No manual export/import required. The transition is a configuration change, not a migration project.

Air-Gapped Operation

For classified and restricted environments, Fabric operates with zero internet dependency. License validation, model serving, and all IDE features work completely offline. Deploy via approved media transfer to isolated networks.

Enterprise Support

Our enterprise team provides deployment architecture consultation, Kubernetes helm charts, GPU sizing guidance, and ongoing support for on-premise installations. Contact us for environment-specific deployment plans.

Next Steps

Explore Features

See the full feature set including transparent reasoning, agentic workflows, and voice-to-text.

View Pricing

Compare Free (BYOK), Teams, and Enterprise plans. Find the right fit for your organization.

Enterprise

On-premise deployment, SSO, RBAC, usage analytics, and dedicated support for your team.

Compare Tools

See how Fabric compares to Cursor, GitHub Copilot, and Windsurf on features, pricing, and sovereignty.

Frequently Asked Questions

Is Fabric free to use?

Yes. Fabric's BYOK (Bring Your Own Key) tier is completely free with no feature gates or request limits. You provide your own API keys from any model provider (OpenAI, Anthropic, Google, Mistral, or any OpenAI-compatible endpoint), and Fabric provides the IDE, transparent reasoning, and intelligent routing at no cost. Paid plans add team management, SSO, RBAC, analytics, and enterprise support.

Which operating systems does Fabric support?

Fabric is available for macOS (Apple Silicon and Intel), Windows (10 and 11), and Linux (Ubuntu 20.04+, Fedora 38+, and other major distributions). All platforms receive the same features and updates simultaneously.

Do I need my own API keys to use Fabric?

No. You can use Fabric's hosted model access on paid plans, which includes pre-configured access to frontier models with intelligent routing. Alternatively, the free BYOK tier lets you bring your own API keys from any provider. You can also connect to self-hosted models via vLLM or Ollama for complete sovereignty.

Can I switch between models during a coding session?

Yes. Fabric supports seamless, instantaneous model switching. You can change models mid-conversation without losing context. This is useful for directing simple tasks to cost-effective models and complex reasoning to frontier models. Intelligent routing can also do this automatically based on task complexity.

How do I set up Fabric for on-premise use?

For on-premise deployment, configure Fabric to point at your self-hosted model endpoint (vLLM, Ollama, or any OpenAI-compatible API). All AI processing happens on your infrastructure. Contact our enterprise team for deployment guides, Kubernetes helm charts, and architecture consultation for air-gapped environments.

Start Building with Fabric

Download Fabric and start coding with transparent AI reasoning, model freedom, and intelligent cost optimization. Free with your own API keys.