📑 Docs • 🌐 Website • 🤝 Contribute • ✍🏽 Blogs
✨ If you would like to help spread the word about Rig, please consider starring the repo!
> [!WARNING]
> Here be dragons! We plan to ship a torrent of features over the coming months, so future updates will contain breaking changes. As Rig evolves, we'll annotate changes and highlight migration paths as we encounter them.
- Table of contents
 - What is Rig?
 - High-level features
 - Who's using Rig in production?
 - Get Started
 - Integrations
 
Rig is a Rust library for building scalable, modular, and ergonomic LLM-powered applications.
More information about this crate can be found in the official documentation (docs.rig.rs) and the crate's API reference (docs.rs).
- Full support for LLM completion and embedding workflows
- Simple but powerful common abstractions over LLM providers (e.g. OpenAI, Cohere) and vector stores (e.g. MongoDB, SQLite, in-memory); see the sketch just below this list
- Integrate LLMs in your app with minimal boilerplate
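As a taste of these abstractions, here is a minimal sketch of an embedding and vector-search workflow against the in-memory store, modeled on the `vector_search` example in `rig-core/examples`. The exact builder and index methods have shifted between rig-core releases (newer versions may also need `use rig::prelude::*;`), so treat this as a sketch rather than a pinned API:

```rust
use rig::{
    embeddings::EmbeddingsBuilder,
    providers::openai::{Client, TEXT_EMBEDDING_ADA_002},
    vector_store::{in_memory_store::InMemoryVectorStore, VectorStoreIndex},
};

#[tokio::main]
async fn main() -> Result<(), anyhow::Error> {
    // Requires the `OPENAI_API_KEY` environment variable to be set,
    // plus the `anyhow` crate for error handling.
    let openai_client = Client::from_env();
    let model = openai_client.embedding_model(TEXT_EMBEDDING_ADA_002);

    // Embed a couple of documents...
    let embeddings = EmbeddingsBuilder::new(model.clone())
        .documents(vec![
            "Rig is a Rust library for building LLM-powered applications.".to_string(),
            "A flurbo is a fictional currency.".to_string(),
        ])?
        .build()
        .await?;

    // ...load them into the in-memory vector store and build a search index...
    let index = InMemoryVectorStore::from_documents(embeddings).index(model);

    // ...then retrieve the document closest to the query.
    let results = index.top_n::<String>("What is a flurbo?", 1).await?;
    println!("{results:?}");
    Ok(())
}
```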
 
Below is a non-exhaustive list of companies and people who are using Rig in production:
- Dria Compute Node - a node that serves computation results within the Dria Knowledge Network
 - The MCP Rust SDK - the official Model Context Protocol Rust SDK. Has an example for usage with Rig.
 - Probe - an AI-friendly, fully local semantic code search tool.
 - NINE - Neural Interconnected Nodes Engine, by Nethermind.
 - rig-onchain-kit - the Rig Onchain Kit. Intended to make interactions between Solana/EVM and Rig much easier to implement.
 - Linera Protocol - Decentralized blockchain infrastructure designed for highly scalable, secure, low-latency Web3 applications.
 - Listen - a framework aiming to become the go-to toolkit for AI portfolio management agents. It powers the Listen app.
 
Are you also using Rig in production? Open an issue to have your name added!
```bash
cargo add rig-core
```

```rust
use rig::{completion::Prompt, providers::openai};

#[tokio::main]
async fn main() {
    // Create OpenAI client and model
    // This requires the `OPENAI_API_KEY` environment variable to be set.
    let openai_client = openai::Client::from_env();

    let gpt4 = openai_client.agent("gpt-4").build();

    // Prompt the model and print its response
    let response = gpt4
        .prompt("Who are you?")
        .await
        .expect("Failed to prompt GPT-4");

    println!("GPT-4: {response}");
}
```

Note: using `#[tokio::main]` requires you to enable tokio's `macros` and `rt-multi-thread` features, or just `full` to enable all features (`cargo add tokio --features macros,rt-multi-thread`).
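If you prefer editing the manifest directly, the resulting `Cargo.toml` dependencies look roughly like this (versions illustrative; `cargo add` pins the latest releases for you):

```toml
[dependencies]
rig-core = "0"  # replace with the latest published version
tokio = { version = "1", features = ["macros", "rt-multi-thread"] }
```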
You can find more examples in each crate's `examples` directory (e.g. `rig-core/examples`). More detailed use-case walkthroughs are regularly published on our Dev.to blog and added to Rig's official documentation (docs.rig.rs).
Vector stores are available as separate companion crates:

- MongoDB: `rig-mongodb`
- LanceDB: `rig-lancedb`
- Neo4j: `rig-neo4j`
- Qdrant: `rig-qdrant`
- SQLite: `rig-sqlite`
- SurrealDB: `rig-surrealdb`
- Milvus: `rig-milvus`
- ScyllaDB: `rig-scylladb`
- AWS S3Vectors: `rig-s3vectors`
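Each of these crates plugs into the same `rig::vector_store::VectorStoreIndex` trait, so application code can stay backend-agnostic. A minimal sketch under that assumption (the `search` helper is hypothetical, and `top_n`'s signature has changed across rig-core releases):

```rust
use rig::vector_store::{VectorStoreError, VectorStoreIndex};

/// Hypothetical helper: generic over `VectorStoreIndex`, so it works the same
/// whether the index comes from the in-memory store, rig-mongodb, rig-qdrant, etc.
async fn search<I: VectorStoreIndex>(
    index: &I,
    query: &str,
) -> Result<Vec<(f64, String, String)>, VectorStoreError> {
    // `top_n` returns (similarity score, document id, deserialized document) triples.
    index.top_n::<String>(query, 3).await
}
```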
The following providers are available as separate companion crates:

- Fastembed: `rig-fastembed`
- Eternal AI: `rig-eternalai`