SETUP

Everything you need to integrate NeuroFS into your AI infrastructure.

Quick Start

1. Sign in and generate an API key from the Dashboard
2. Ingest a prompt
curl -X POST https://api.neurofs.com/api/v1/ingest \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{"user_id":"user1","text":"How do I implement async in Rust?"}'
3. Read the routing plan from the response
{
  "adapter_plan":  { "adapters": [["rust_expert", 0.95]] },
  "expert_plan":   { "expert_groups": [["systems", 0.88]] },
  "tool_plan":     { "eligible_tools": ["compiler_check", "doc_search"] },
  "kv_cache_plan": { "confidence": 0.92, "prefetch_topics": ["async_rust"] }
}
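The plan fields in the response above can drive simple gating logic before you call your LLM. A minimal sketch, using only the field names shown in the sample response:

```python
def top_adapter(plan: dict) -> str:
    """Return the highest-weighted adapter name from an ingest response."""
    adapters = plan["adapter_plan"]["adapters"]  # list of [name, weight] pairs
    return max(adapters, key=lambda pair: pair[1])[0]

def tool_allowed(plan: dict, tool: str) -> bool:
    """Check whether a tool appears in the plan's eligible_tools list."""
    return tool in plan["tool_plan"]["eligible_tools"]

# Sample response from the Quick Start above
plan = {
    "adapter_plan":  {"adapters": [["rust_expert", 0.95]]},
    "expert_plan":   {"expert_groups": [["systems", 0.88]]},
    "tool_plan":     {"eligible_tools": ["compiler_check", "doc_search"]},
    "kv_cache_plan": {"confidence": 0.92, "prefetch_topics": ["async_rust"]},
}

print(top_adapter(plan))                 # rust_expert
print(tool_allowed(plan, "doc_search"))  # True
```

The integration examples below use the same pattern: pick the top adapter, then steer the downstream model call with it.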

OpenClaw Setup

Sign in to view your personalised OpenClaw setup instructions, with your API key and endpoint pre-filled.


Integration Examples

Route a prompt through NeuroFS first, then use the activation plan to steer your LLM call, automatically selecting the right system prompt, tools, or model variant.

OpenAI

import requests
from openai import OpenAI

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

def route_and_call(user_id: str, prompt: str) -> str:
    # 1. Route with NeuroFS
    plan = requests.post(
        f"{NEUROFS}/api/v1/ingest",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": prompt},
    ).json()

    # 2. Build system prompt from top adapter
    top_adapter = plan["adapter_plan"]["adapters"][0][0]
    system = f"You are a {top_adapter.replace('_', ' ')} assistant."

    # 3. Call OpenAI (client reads OPENAI_API_KEY from the environment)
    client = OpenAI()
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system},
            {"role": "user",   "content": prompt},
        ],
    ).choices[0].message.content

Claude

import anthropic, requests

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

def route_and_call(user_id: str, prompt: str) -> str:
    # 1. Route with NeuroFS
    plan = requests.post(
        f"{NEUROFS}/api/v1/ingest",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": prompt},
    ).json()

    # 2. Build system prompt from top adapter
    top_adapter = plan["adapter_plan"]["adapters"][0][0]
    system = f"You are a {top_adapter.replace('_', ' ')} assistant."

    # 3. Call Claude
    client = anthropic.Anthropic()
    return client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text

Gemini

import google.generativeai as genai
import requests

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

def route_and_call(user_id: str, prompt: str) -> str:
    # 1. Route with NeuroFS
    plan = requests.post(
        f"{NEUROFS}/api/v1/ingest",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": prompt},
    ).json()

    # 2. Build system prompt from top adapter
    top_adapter = plan["adapter_plan"]["adapters"][0][0]
    system = f"You are a {top_adapter.replace('_', ' ')} assistant."

    # 3. Call Gemini
    genai.configure(api_key="YOUR_GEMINI_KEY")
    model = genai.GenerativeModel("gemini-2.0-flash", system_instruction=system)
    return model.generate_content(prompt).text

API Reference

Ingestion

POST  /api/v1/ingest           Synchronous full routing
POST  /api/v1/ingest/stream    SSE progress streaming
POST  /api/v1/ingest/response  Response-aware activation
POST  /api/v1/ingest/code      Tree-sitter code parsing
POST  /api/v1/ingest/async     Background job (poll via /jobs/:id)
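For long inputs, the async endpoint returns a job you poll for completion. A minimal sketch: the poll path prefix (/api/v1/jobs/:id) and the `job_id` and `status` field names are assumptions, so check the actual response shape for your account.

```python
import time
import requests

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

def job_url(job_id: str) -> str:
    """Build the polling URL for a background ingest job (path prefix assumed)."""
    return f"{NEUROFS}/api/v1/jobs/{job_id}"

def ingest_async(user_id: str, text: str,
                 poll_every: float = 1.0, timeout: float = 30.0) -> dict:
    """Submit a background ingest job and poll until it finishes or times out."""
    job = requests.post(
        f"{NEUROFS}/api/v1/ingest/async",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": text},
    ).json()

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = requests.get(
            job_url(job["job_id"]),  # assumed field name
            headers={"x-api-key": NEUROFS_KEY},
        ).json()
        if result.get("status") in ("done", "failed"):  # assumed status values
            return result
        time.sleep(poll_every)
    raise TimeoutError("ingest job did not complete in time")
```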

State & History

GET   /api/v1/state/:user_id              Current activation state
GET   /api/v1/state/:user_id/history      Event replay with filters
GET   /api/v1/explain/:user_id/:event_id  Explainability tree
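These are plain authenticated GETs; a minimal sketch of fetching state and an explainability tree (response shapes not shown here, so inspect what comes back):

```python
import requests

NEUROFS = "https://api.neurofs.com"
HEADERS = {"x-api-key": "YOUR_API_KEY"}

def state_url(user_id: str) -> str:
    return f"{NEUROFS}/api/v1/state/{user_id}"

def explain_url(user_id: str, event_id: str) -> str:
    return f"{NEUROFS}/api/v1/explain/{user_id}/{event_id}"

def current_state(user_id: str) -> dict:
    """Fetch the user's current activation state."""
    return requests.get(state_url(user_id), headers=HEADERS).json()

def explain(user_id: str, event_id: str) -> dict:
    """Fetch the explainability tree for one routing event."""
    return requests.get(explain_url(user_id, event_id), headers=HEADERS).json()
```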

Feedback

POST  /api/v1/feedback  Record outcome feedback
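A minimal feedback sketch. The payload field names `event_id` and `outcome` are assumptions for illustration, not the documented schema:

```python
import requests

NEUROFS = "https://api.neurofs.com"

def feedback_payload(event_id: str, outcome: str) -> dict:
    """Build a feedback body; both field names here are assumed, not documented."""
    return {"event_id": event_id, "outcome": outcome}

def send_feedback(api_key: str, event_id: str, outcome: str) -> int:
    """POST outcome feedback and return the HTTP status code."""
    resp = requests.post(
        f"{NEUROFS}/api/v1/feedback",
        headers={"x-api-key": api_key},
        json=feedback_payload(event_id, outcome),
    )
    return resp.status_code
```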

Regions

GET   /api/v1/regions  List all regions

Realtime

WS    /ws/activations/:user_id  Live activation WebSocket stream
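A sketch of consuming the live stream with the third-party websockets package. The wss scheme, header-based auth over WebSocket, and the message format are all assumptions; also note the header keyword is `additional_headers` in recent websockets releases but `extra_headers` in older ones:

```python
import asyncio
import json

def ws_url(user_id: str) -> str:
    """Live activation stream URL; wss on api.neurofs.com is assumed."""
    return f"wss://api.neurofs.com/ws/activations/{user_id}"

async def watch_activations(user_id: str, api_key: str) -> None:
    import websockets  # third-party: pip install websockets

    # Sending the API key as an x-api-key header is an assumption,
    # mirroring how the HTTP endpoints authenticate.
    async with websockets.connect(
        ws_url(user_id), additional_headers={"x-api-key": api_key}
    ) as ws:
        async for raw in ws:
            print(json.loads(raw))  # message shape depends on the server

# asyncio.run(watch_activations("user1", "YOUR_API_KEY"))
```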

Sessions

POST  /api/v1/sessions             Create multi-agent session
POST  /api/v1/sessions/:id/ingest  Per-agent ingestion
POST  /api/v1/sessions/:id/merge   Merge agent branches
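The three session endpoints compose into a create, ingest-per-agent, merge flow. A minimal sketch: the session `id` field and the `agent_id` payload key are assumptions, so verify them against real responses:

```python
import requests

NEUROFS = "https://api.neurofs.com"
HEADERS = {"x-api-key": "YOUR_API_KEY"}

def session_ingest_url(session_id: str) -> str:
    return f"{NEUROFS}/api/v1/sessions/{session_id}/ingest"

def session_merge_url(session_id: str) -> str:
    return f"{NEUROFS}/api/v1/sessions/{session_id}/merge"

def run_two_agents(prompt_a: str, prompt_b: str) -> dict:
    """Create a session, ingest once per agent, then merge the branches."""
    session = requests.post(
        f"{NEUROFS}/api/v1/sessions", headers=HEADERS, json={}
    ).json()
    sid = session["id"]  # assumed field name

    for agent, text in (("agent_a", prompt_a), ("agent_b", prompt_b)):
        requests.post(
            session_ingest_url(sid),
            headers=HEADERS,
            json={"agent_id": agent, "text": text},  # assumed payload keys
        )

    return requests.post(session_merge_url(sid), headers=HEADERS).json()
```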

Authentication

All endpoints (except /api/v1/health) require an API key supplied via one of:

  • x-api-key: YOUR_API_KEY header
  • Authorization: Bearer YOUR_API_KEY header

Generate your API key from the Dashboard → API Keys section (requires sign-in). See pricing for per-plan rate limits.

Rate limits are enforced per API key using a sliding 60-second window. Free tier: 20 rpm · Starter: 60 rpm · Pro: 200 rpm.
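With a sliding 60-second window, a burst that exceeds your tier will get HTTP 429 until older requests age out. A minimal client-side retry sketch; whether the server sends a Retry-After header is an assumption, so the code falls back to exponential backoff:

```python
import time
import requests

def backoff_schedule(base: float, attempts: int) -> list:
    """Exponential delays used between retries: base, 2*base, 4*base, ..."""
    return [base * 2 ** i for i in range(attempts)]

def post_with_retry(url: str, *, headers: dict, json: dict,
                    max_attempts: int = 5, base_delay: float = 1.0):
    """POST, retrying on HTTP 429 with Retry-After (if sent) or exponential backoff."""
    for attempt in range(max_attempts):
        resp = requests.post(url, headers=headers, json=json)
        if resp.status_code != 429:
            return resp
        delay = float(resp.headers.get("Retry-After",
                                       base_delay * 2 ** attempt))
        time.sleep(delay)
    return resp  # still rate-limited after max_attempts
```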