SETUP
Everything you need to integrate NeuroFS into your AI infrastructure.
Quick Start
1. Sign in and generate an API key from the Dashboard
2. Ingest a prompt
curl -X POST https://api.neurofs.com/api/v1/ingest \
  -H "Content-Type: application/json" \
  -H "x-api-key: YOUR_API_KEY" \
  -d '{"user_id":"user1","text":"How do I implement async in Rust?"}'
3. Read the routing plan from the response
{
  "adapter_plan": { "adapters": [["rust_expert", 0.95]] },
  "expert_plan": { "expert_groups": [["systems", 0.88]] },
  "tool_plan": { "eligible_tools": ["compiler_check", "doc_search"] },
  "kv_cache_plan": { "confidence": 0.92, "prefetch_topics": ["async_rust"] }
}
OpenClaw Setup
Sign in to view your personalised OpenClaw setup instructions, including your API key and endpoint pre-filled.
Integration Examples
Route a prompt through NeuroFS first, then use the activation plan to steer your LLM call — selecting the right system prompt, tools, or model variant automatically.
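As a concrete illustration, a small helper can turn the plan into call parameters. This is a minimal sketch using only the fields shown in the Quick Start response; the `steer_from_plan` name and the 0.5 weight cutoff are illustrative choices, not part of the API.

```python
def steer_from_plan(plan: dict, min_weight: float = 0.5) -> dict:
    """Turn a NeuroFS activation plan into LLM call parameters.

    Uses only the fields shown in the Quick Start response; the
    weight cutoff is an illustrative choice, not part of the API.
    """
    # Keep adapters above the cutoff; the highest-weighted one
    # drives the system prompt.
    adapters = [a for a, w in plan["adapter_plan"]["adapters"] if w >= min_weight]
    top = adapters[0] if adapters else "general"
    return {
        "system": f"You are a {top.replace('_', ' ')} assistant.",
        "tools": plan["tool_plan"]["eligible_tools"],
    }

# Example with the Quick Start response:
plan = {
    "adapter_plan": {"adapters": [["rust_expert", 0.95]]},
    "expert_plan": {"expert_groups": [["systems", 0.88]]},
    "tool_plan": {"eligible_tools": ["compiler_check", "doc_search"]},
    "kv_cache_plan": {"confidence": 0.92, "prefetch_topics": ["async_rust"]},
}
params = steer_from_plan(plan)
# params["system"] == "You are a rust expert assistant."
```

The same selection logic appears inline in each provider example below.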
OpenAI
from openai import OpenAI
import requests

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

client = OpenAI()

def route_and_call(user_id: str, prompt: str) -> str:
    # 1. Route with NeuroFS
    plan = requests.post(
        f"{NEUROFS}/api/v1/ingest",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": prompt},
        timeout=10,
    ).json()
    # 2. Build system prompt from top adapter
    top_adapter = plan["adapter_plan"]["adapters"][0][0]
    system = f"You are a {top_adapter.replace('_', ' ')} assistant."
    # 3. Call OpenAI
    return client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": prompt},
        ],
    ).choices[0].message.content
Claude
import anthropic
import requests

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

def route_and_call(user_id: str, prompt: str) -> str:
    # 1. Route with NeuroFS
    plan = requests.post(
        f"{NEUROFS}/api/v1/ingest",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": prompt},
        timeout=10,
    ).json()
    # 2. Build system prompt from top adapter
    top_adapter = plan["adapter_plan"]["adapters"][0][0]
    system = f"You are a {top_adapter.replace('_', ' ')} assistant."
    # 3. Call Claude
    client = anthropic.Anthropic()
    return client.messages.create(
        model="claude-opus-4-6",
        max_tokens=1024,
        system=system,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text
Gemini
import google.generativeai as genai
import requests

NEUROFS = "https://api.neurofs.com"
NEUROFS_KEY = "YOUR_API_KEY"

def route_and_call(user_id: str, prompt: str) -> str:
    # 1. Route with NeuroFS
    plan = requests.post(
        f"{NEUROFS}/api/v1/ingest",
        headers={"x-api-key": NEUROFS_KEY},
        json={"user_id": user_id, "text": prompt},
        timeout=10,
    ).json()
    # 2. Build system prompt from top adapter
    top_adapter = plan["adapter_plan"]["adapters"][0][0]
    system = f"You are a {top_adapter.replace('_', ' ')} assistant."
    # 3. Call Gemini
    genai.configure(api_key="YOUR_GEMINI_KEY")
    model = genai.GenerativeModel("gemini-2.0-flash", system_instruction=system)
    return model.generate_content(prompt).text
API Reference
Ingestion
POST /api/v1/ingest · Synchronous full routing
POST /api/v1/ingest/stream · SSE progress streaming
POST /api/v1/ingest/response · Response-aware activation
POST /api/v1/ingest/code · Tree-sitter code parsing
POST /api/v1/ingest/async · Background job (poll via /jobs/:id)
State & History
GET /api/v1/state/:user_id · Current activation state
GET /api/v1/state/:user_id/history · Event replay with filters
GET /api/v1/explain/:user_id/:event_id · Explainability tree
Feedback
POST /api/v1/feedback · Record outcome feedback
Regions
GET /api/v1/regions · List all regions
Realtime
WS /ws/activations/:user_id · Live activation WebSocket stream
Sessions
POST /api/v1/sessions · Create multi-agent session
POST /api/v1/sessions/:id/ingest · Per-agent ingestion
POST /api/v1/sessions/:id/merge · Merge agent branches
Authentication
All endpoints (except /api/v1/health) require an API key supplied via one of:
x-api-key: YOUR_API_KEY (header)
Authorization: Bearer YOUR_API_KEY (header)
Generate your API key from the Dashboard → API Keys section (requires sign-in). See pricing for per-plan rate limits.
Rate limits are enforced per API key using a sliding 60-second window. Free tier: 20 rpm · Starter: 60 rpm · Pro: 200 rpm.
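When a key exceeds its window, requests will be rejected until the window slides. A minimal client-side retry sketch follows; the HTTP 429 status check and the 1s/2s/4s backoff schedule are assumptions here, not documented NeuroFS behaviour.

```python
import time

def with_backoff(call, max_retries=3, base_delay=1.0,
                 is_rate_limited=lambda resp: resp.status_code == 429):
    """Retry a zero-argument `call` with exponential backoff while
    rate limited. The 429 check and backoff schedule are assumptions."""
    for attempt in range(max_retries):
        resp = call()
        if not is_rate_limited(resp):
            return resp
        time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    return call()  # final attempt, returned even if still limited
```

Wrap the `requests.post` calls from the examples above, e.g. `with_backoff(lambda: requests.post(...))`.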