AI-powered transcript analysis microservice for the Jubal Personal Operating System ecosystem.
Grok provides intelligent, multi-step profile-based processing of audio transcripts with support for both local (Ollama) and cloud (OpenRouter) LLM providers. Built with FastAPI and designed for containerized deployment in microservices architectures.
- Docker Desktop 4.0+
- Docker Compose v2.0+
- Shared Jubal Network (for service registry)
```bash
# Clone repository
git clone https://github.com/arealicehole/grok.git
cd grok
# Set up environment variables
cp .env.example .env
# Edit .env with your configuration
# Start the service
docker compose up -d
# Verify service health
curl http://localhost:8002/health
```

Recent Accomplishments:
- ✅ Local LLM Integration (Ollama): Full async provider with model listing and completions
- ✅ Cloud LLM Integration (OpenRouter): Authentication, 10+ models, rate limiting
- ✅ Provider Abstraction: Base classes with intelligent fallback strategies (see the sketch after this list)
- ✅ Model Selection Engine: Smart provider selection with global overrides
- ✅ Comprehensive Testing: Full unit test coverage for all providers
- ✅ Docker Integration: Updated containers with new provider endpoints
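The provider layer itself is not shown in this README; as a rough sketch only (class and method names here are assumptions, not the actual API), the abstraction with a fallback strategy might look like:

```python
# Illustrative sketch of the provider abstraction; names are assumptions.
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface implemented by the Ollama and OpenRouter providers."""

    @abstractmethod
    async def is_available(self) -> bool: ...

    @abstractmethod
    async def complete(self, prompt: str, model: str,
                       temperature: float = 0.2, max_tokens: int = 2000) -> str: ...


async def complete_with_fallback(primary: LLMProvider, fallback: LLMProvider,
                                 prompt: str, model: str) -> str:
    # Prefer the configured provider (e.g. local Ollama); fall back to the cloud provider.
    provider = primary if await primary.is_available() else fallback
    return await provider.complete(prompt, model=model)
```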
Next Major Features:
- 🚧 Multi-step Profile Processing Engine
- 🚧 Built-in Analysis Profiles (Business, Project Planning, Personal)
- 🚧 Profile Definition System with Pydantic schemas
- 🚧 Step Execution Engine with data flow
| Endpoint | Method | Description | Status | 
|---|---|---|---|
| /health | GET | Service health check | ✅ |
| /capabilities | GET | Service capabilities declaration | ✅ |
| /services | GET | List registered Jubal services | ✅ |
| /providers/status | GET | LLM provider health and availability | ✅ |
| Endpoint | Method | Description | Status | 
|---|---|---|---|
| /profiles | GET | List available processing profiles | ✅ |
| /profiles/{id} | GET | Get specific profile details | ✅ |
| Endpoint | Method | Description | Status | 
|---|---|---|---|
| /process | POST | Process transcript with specified profile | ✅ |
```bash
# Health check
curl http://localhost:8002/health
# List available profiles
curl http://localhost:8002/profiles | jq .
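
# Check LLM provider availability
curl http://localhost:8002/providers/status | jq .

# Fetch a specific profile definition
curl http://localhost:8002/profiles/business_meeting | jq .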
# Process a transcript
curl -X POST http://localhost:8002/process \
  -H "Content-Type: application/json" \
  -d '{
    "job_id": "meeting-001",
    "data": {
      "type": "text/plain",
      "content": "John: Welcome to our meeting. Jane: Thanks, lets discuss the API requirements.",
      "encoding": "utf-8"
    },
    "metadata": {
      "profile_id": "business_meeting"
    }
  }' | jq .
```

Grok runs alongside the other Jubal services on the shared jubal-network:

```
┌──────────────────────────────────────────────────────────┐
│                      Jubal Network                       │
│  ┌───────────────┐  ┌─────────────────┐  ┌──────────┐    │
│  │  Jubal Core   │  │  Recall Adapter │  │   Grok   │    │
│  │    (8000)     │  │     (8001)      │  │  (8002)  │    │
│  └───────┬───────┘  └────────┬────────┘  └────┬─────┘    │
│          │                   │                │          │
│          └───────────────────┼────────────────┘          │
│                              │                           │
│  ┌─────────────────┐         │        ┌──────────────┐   │
│  │ Redis Registry  ├─────────┘        │   Supabase   │   │
│  │     (6379)      │                  │   (54325+)   │   │
│  └─────────────────┘                  └──────────────┘   │
└──────────────────────────────────────────────────────────┘
```
```
grok/
├── app/
│   ├── __init__.py
│   ├── main.py                 # FastAPI application entry
│   ├── config.py               # Settings management
│   ├── models/
│   │   ├── __init__.py
│   │   ├── jubal.py            # Jubal service contracts
│   │   └── profile.py          # Processing profile schemas
│   └── services/
│       ├── __init__.py
│       ├── registry.py         # Redis service registry
│       └── processor.py        # Profile processing engine
├── profiles/                   # Processing profile definitions
├── docs/                       # Project documentation
├── Dockerfile                  # Container definition
├── docker-compose.yml          # Service orchestration
├── requirements.txt            # Python dependencies
└── .env.example                # Environment configuration template
```
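For orientation, a minimal sketch of the FastAPI entry point in app/main.py (illustrative only; the real application also wires in configuration, the Redis registry, and the LLM providers, and the response fields beyond the service name are assumptions):

```python
# Sketch of app/main.py; the app title is inferred from GROK_OPENROUTER_APP_NAME.
from fastapi import FastAPI

app = FastAPI(title="Grok Intelligence Engine", version="1.0.0")


@app.get("/health")
async def health() -> dict:
    # Minimal health payload; the actual service also reports provider and registry status.
    return {"service": "grok-adapter", "version": "1.0.0", "status": "healthy"}
```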
```bash
# Service Configuration
GROK_SERVICE_PORT=8002
GROK_DEBUG=true
# Jubal Integration
GROK_JUBAL_CORE_URL=http://jubal-core:8000
GROK_REDIS_URL=redis://jubal-redis:6379/0
# LLM Providers
GROK_OLLAMA_URL=http://host.docker.internal:11434
GROK_OPENROUTER_API_KEY=your_openrouter_api_key_here
GROK_OPENROUTER_APP_NAME=grok-intelligence-engine
GROK_OPENROUTER_APP_URL=
# Default Models
GROK_DEFAULT_MODEL_PROVIDER=local
GROK_DEFAULT_LOCAL_MODEL=llama3.1:8b
GROK_DEFAULT_OPENROUTER_MODEL=openai/gpt-4o-mini
# Processing Configuration
GROK_MAX_CONCURRENT_JOBS=5
GROK_DEFAULT_TEMPERATURE=0.2
GROK_DEFAULT_MAX_TOKENS=2000
# File Management
GROK_PROFILES_DIR=./profiles
GROK_AUTO_PROCESS_NEW_FILES=false
```

- grok-adapter: Main application service (port 8002)
- External Dependencies:
  - jubal-redis: Service registry (shared)
  - jubal-network: Service communication network (shared)
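The GROK_-prefixed variables above are loaded by app/config.py. A minimal sketch of how that might look with pydantic-settings (an assumption, consistent with the project's use of Pydantic v2; field names are inferred from the .env.example keys):

```python
# Sketch of app/config.py; defaults mirror .env.example, actual fields may differ.
from pydantic_settings import BaseSettings, SettingsConfigDict


class Settings(BaseSettings):
    model_config = SettingsConfigDict(env_prefix="GROK_", env_file=".env")

    service_port: int = 8002
    debug: bool = False
    jubal_core_url: str = "http://jubal-core:8000"
    redis_url: str = "redis://jubal-redis:6379/0"
    ollama_url: str = "http://host.docker.internal:11434"
    openrouter_api_key: str = ""
    default_model_provider: str = "local"
    default_local_model: str = "llama3.1:8b"
    default_openrouter_model: str = "openai/gpt-4o-mini"
    max_concurrent_jobs: int = 5
    default_temperature: float = 0.2
    default_max_tokens: int = 2000
    profiles_dir: str = "./profiles"


settings = Settings()
```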
 
Grok uses multi-step processing profiles to analyze transcripts. Each profile defines a sequence of LLM operations with specific prompts and model configurations.
| Profile ID | Description | Steps | Estimated Tokens | 
|---|---|---|---|
| business_meeting | Extract entities, decisions, action items | 3 | 4,500 | 
| project_planning | Analyze requirements, timelines, risks | 4 | 5,200 | 
| personal_notes | Process notes for organization | 3 | 3,000 | 
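Profiles are validated with Pydantic schemas (see app/models/profile.py and the roadmap above). A minimal sketch of what those models might look like, with field names taken from the example profile below (the actual schema may differ):

```python
# Sketch only; the real models in app/models/profile.py may define more fields.
from typing import Literal
from pydantic import BaseModel


class LLMConfig(BaseModel):
    provider: Literal["local", "openrouter"] = "local"
    model: str = "llama3.1:8b"
    temperature: float = 0.2
    max_tokens: int = 2000


class ProfileStep(BaseModel):
    step_id: str
    name: str
    prompt_template: str              # e.g. "Extract people, companies, and dates from: {transcript}"
    llm_config: LLMConfig = LLMConfig()
    output_format: Literal["json", "text"] = "json"


class Profile(BaseModel):
    profile_id: str
    name: str
    description: str = ""
    steps: list[ProfileStep]
```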
For example, the business_meeting profile (abbreviated here to a single step) is defined as:

```json
{
  "profile_id": "business_meeting",
  "name": "Business Meeting Analysis",
  "description": "Extract entities, decisions, and action items from business meetings",
  "steps": [
    {
      "step_id": "extract_entities",
      "name": "Extract Key Entities",
      "prompt_template": "Extract people, companies, and dates from: {transcript}",
      "llm_config": {
        "provider": "local",
        "model": "llama3.1:8b",
        "temperature": 0.1
      },
      "output_format": "json"
    }
  ]
}
```

```bash
# 1. Clone repository
git clone https://github.com/arealicehole/grok.git
cd grok
# 2. Set up Python environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate
pip install -r requirements.txt
# 3. Configure environment
cp .env.example .env
# Edit .env with your configuration
# 4. Start dependencies (Redis, Supabase)
# Note: Use shared Jubal infrastructure
# 5. Run application
python -m uvicorn app.main:app --reload --host 0.0.0.0 --port 8002
```

```bash
# Install test dependencies
pip install pytest pytest-asyncio httpx
# Run unit tests
pytest tests/
# Integration testing with curl
./scripts/test_endpoints.sh
# End-to-end testing with sample data
curl -X POST http://localhost:8002/process \
  -H "Content-Type: application/json" \
  -d @test_data/business_meeting.json
```

```bash
# Build container
docker compose build grok-adapter
# Start services
docker compose up -d
# View logs
docker compose logs grok-adapter -f
# Rebuild and restart
docker compose up grok-adapter --build --force-recreate
```

Grok automatically registers with the Redis service registry on startup:

```json
{
  "service": "grok-adapter",
  "version": "1.0.0",
  "host": "grok-adapter",
  "port": 8002,
  "health_endpoint": "/health",
  "capabilities": ["analyze", "extract", "summarize"]
}
```
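A rough sketch of how app/services/registry.py might publish this record (illustrative only; the Redis key layout and TTL are assumptions, not the actual Jubal registry contract):

```python
# Sketch only; the real registry module may use a different key scheme and refresh loop.
import json
import redis.asyncio as redis

REGISTRATION = {
    "service": "grok-adapter",
    "version": "1.0.0",
    "host": "grok-adapter",
    "port": 8002,
    "health_endpoint": "/health",
    "capabilities": ["analyze", "extract", "summarize"],
}


async def register(redis_url: str = "redis://jubal-redis:6379/0") -> None:
    client = redis.from_url(redis_url)
    # Store the registration under a per-service key with a TTL so stale entries expire.
    await client.set("jubal:services:grok-adapter", json.dumps(REGISTRATION), ex=60)
    await client.aclose()
```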
All processing requests follow the Jubal envelope contract:

```json
{
  "job_id": "unique-job-identifier",
  "pipeline_id": "optional-pipeline-id",
  "data": {
    "type": "text/plain",
    "content": "transcript content here",
    "encoding": "utf-8"
  },
  "metadata": {
    "profile_id": "business_meeting",
    "overrides": {}
  },
  "trace": {}
}
```

The service responds with the same envelope structure:

```json
{
  "job_id": "unique-job-identifier",
  "status": "completed",
  "data": {
    "type": "application/json",
    "content": {
      "entities": {"people": ["John", "Jane"], "companies": ["Acme Corp"]},
      "summary": {"key_points": ["API requirements discussed"]},
      "processing_metadata": {"steps_completed": 3, "total_tokens": 150}
    },
    "encoding": "utf-8"
  },
  "error": null,
  "metadata": {
    "profile_id": "business_meeting",
    "timestamp": "2025-09-15T22:41:55.057669+00:00"
  }
}
```
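For programmatic callers, the same envelope can be sent with any HTTP client. A small sketch using httpx (already listed as a test dependency; the base URL and timeout are assumptions):

```python
# Sketch of a programmatic caller; assumes the service is reachable on localhost:8002.
import httpx

envelope = {
    "job_id": "meeting-001",
    "data": {
        "type": "text/plain",
        "content": "John: Welcome to our meeting. Jane: Thanks, lets discuss the API requirements.",
        "encoding": "utf-8",
    },
    "metadata": {"profile_id": "business_meeting"},
}

with httpx.Client(base_url="http://localhost:8002", timeout=120.0) as client:
    response = client.post("/process", json=envelope)
    response.raise_for_status()
    result = response.json()
    # The response follows the Jubal envelope: status plus structured analysis content.
    print(result["status"], result["data"]["content"])
```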
```bash
# Build production image
docker compose build grok-adapter
# Deploy with resource limits
docker compose -f docker-compose.prod.yml up -d
# Health check
curl https://your-domain.com/health
```

Akash Network deployment (coming in Phase 7):

```yaml
# deploy.yml for Akash Network
version: "2.0"
services:
  grok:
    image: grok-adapter:latest
    expose:
      - port: 8002
        as: 80
        to:
          - global: true
profiles:
  compute:
    grok:
      resources:
        cpu:
          units: 1.0
        memory:
          size: 2Gi
        storage:
          size: 5Gi
  placement:
    akash:
      attributes:
        host: akash
      signedBy:
        anyOf:
          - "akash1365yvmc4s7awdyj3n2sav7xfx76adc6dnmlx63"
      pricing:
        grok:
          denom: uakt
          amount: 1000
deployment:
  grok:
    akash:
      profile: grok
      count: 1
```

- Development Plan - Comprehensive roadmap and implementation strategy
- Project Scope - Overall Jubal ecosystem context
- Integration Guide - Integration with Jubal services
- API Reference - Detailed API documentation
- Fork the repository
- Create a feature branch: `git checkout -b feature/amazing-feature`
- Commit changes: `git commit -m 'Add amazing feature'`
- Push to branch: `git push origin feature/amazing-feature`
- Open a Pull Request
- Follow FastAPI best practices
- Use Pydantic v2 for data validation
- Maintain test coverage above 80%
- Follow semantic versioning
- Document all API changes
This project is part of the Jubal Personal Operating System ecosystem. See the Jubal repository for licensing information.
- Jubal Core - Personal Operating System framework
- Recall - Audio processing and transcription adapter
For support and questions:
- Open an issue in this repository
- Check the Jubal documentation
- Join the community discussions
Built with ❤️ for the Jubal ecosystem