AI governance
Provider routing, budget controls, and usage oversight
Active provider: mistral. Fallback: ollama_local. Monthly budget: EUR 25.
Workspace modules
Switch the active provider among Mistral, a mock AI provider, local Ollama, self-hosted vLLM, and, optionally, OpenAI and Gemini.
Review estimated AI usage by feature: expert profiles, challenge briefs, board briefs, translations, embeddings, and anonymization risk checks.
Configure the monthly budget, warning thresholds at 50%, 80%, and 95%, and a hard stop when the budget is exhausted.
Test Mistral, Ollama, and private endpoint availability before enabling production workflows.
Disable individual AI features when budget, privacy, or provider status requires it.
Keep prompt logging disabled by default and retain only minimal operational usage metadata.
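The provider switching above can be sketched as a small router that tries the active provider and falls back once, mirroring the mistral → ollama_local pair in the status line. A minimal sketch; the call interface and the provider stubs are illustrative assumptions, not a real SDK.

```python
# Minimal sketch of provider routing with a single fallback.
# The call interface and provider stubs are illustrative assumptions.

class ProviderUnavailable(Exception):
    """Raised when a provider cannot serve the request."""

def route(prompt, providers, active="mistral", fallback="ollama_local"):
    """Try the active provider first; fall back once before giving up."""
    for name in (active, fallback):
        try:
            return name, providers[name](prompt)
        except ProviderUnavailable:
            continue
    raise ProviderUnavailable("all providers failed")

def mistral_down(prompt):
    # Stub simulating an outage of the active provider.
    raise ProviderUnavailable("simulated outage")

providers = {
    "mistral": mistral_down,
    "ollama_local": lambda prompt: f"echo: {prompt}",  # stub local model
}

name, answer = route("hello", providers)
# name == "ollama_local", answer == "echo: hello"
```

Returning the serving provider's name alongside the answer lets the caller record which backend actually handled the request, which feeds the usage review below.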
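The per-feature usage review could be backed by a simple aggregation over call records. A sketch under assumed record fields (`feature`, `tokens`); the real schema is not specified here.

```python
# Sketch of per-feature usage aggregation; the record shape is an assumption.
from collections import defaultdict

def usage_by_feature(records):
    """Sum token usage per feature from a list of call records."""
    totals = defaultdict(int)
    for rec in records:
        totals[rec["feature"]] += rec["tokens"]
    return dict(totals)

records = [
    {"feature": "embeddings", "tokens": 200},
    {"feature": "translations", "tokens": 1500},
    {"feature": "embeddings", "tokens": 300},
]
# usage_by_feature(records) == {"embeddings": 500, "translations": 1500}
```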
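The budget controls above (warnings at 50%, 80%, and 95%, plus a hard stop) can be sketched as a threshold check against the EUR 25 monthly budget from the status line. The function name and return values are illustrative assumptions.

```python
# Sketch of budget threshold checks: warnings at 50/80/95 % of the monthly
# budget and a hard stop at 100 %. Names and return values are assumptions.

WARN_THRESHOLDS = (50, 80, 95)  # percent of monthly budget

def budget_status(spent_eur, budget_eur=25.0):
    """Return 'hard_stop', 'warn_<pct>', or 'ok' for the current spend."""
    pct = 100 * spent_eur / budget_eur
    if pct >= 100:
        return "hard_stop"
    for threshold in reversed(WARN_THRESHOLDS):
        if pct >= threshold:
            return f"warn_{threshold}"
    return "ok"
```

Checking thresholds from highest to lowest returns the most severe applicable warning, e.g. EUR 21 of 25 (84%) yields `warn_80` rather than `warn_50`.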
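The pre-production availability test could look like the probe below. The endpoint URLs are assumptions for illustration, and the probe is injectable so the check can be exercised without network access; a 4xx response still counts as "reachable" since it proves the endpoint answers.

```python
# Sketch of a pre-flight availability check. Endpoint URLs are illustrative
# assumptions; the probe is injectable so the check is testable offline.
import urllib.error
import urllib.request

ENDPOINTS = {
    "mistral": "https://api.mistral.ai/v1/models",      # assumed health URL
    "ollama_local": "http://localhost:11434/api/tags",  # assumed health URL
}

def default_probe(url, timeout=5):
    """True if the endpoint answers at all (even 4xx counts as reachable)."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 500
    except urllib.error.HTTPError as err:
        return err.code < 500  # reachable but refused, e.g. auth required
    except OSError:
        return False

def check_endpoints(endpoints, probe=default_probe):
    """Return {name: reachable} without raising on failures."""
    return {name: probe(url) for name, url in endpoints.items()}
```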
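The per-feature disable switch could be modeled as a kill-switch gate checked before any AI call, so a disabled feature fails fast instead of spending budget. The feature names mirror the list above; the plain-dict storage is an assumption, not the real implementation.

```python
# Sketch of per-feature kill switches; the dict storage is an assumption.

FEATURES = {
    "expert_profiles": True,
    "challenge_briefs": True,
    "board_briefs": True,
    "translations": True,
    "embeddings": True,
    "anonymization_risk_checks": True,
}

def require_feature(name, features=FEATURES):
    """Gate an AI call; disabled or unknown features fail fast."""
    if not features.get(name, False):
        raise RuntimeError(f"AI feature disabled: {name}")

FEATURES["translations"] = False  # e.g. disabled for budget reasons
```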
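The privacy-minimal logging policy above could be enforced by recording only operational metadata and never the prompt or completion text. A sketch; the field names are illustrative assumptions.

```python
# Sketch of privacy-minimal usage logging: only operational metadata
# (feature, provider, token counts, timestamp). Field names are assumptions.
import time

def usage_record(feature, provider, prompt_tokens, completion_tokens):
    """Build a usage record that deliberately omits prompt and response text."""
    return {
        "feature": feature,
        "provider": provider,
        "prompt_tokens": prompt_tokens,
        "completion_tokens": completion_tokens,
        "ts": int(time.time()),
        # No "prompt" or "response" field: prompt logging stays off by default.
    }
```

Keeping the record schema free of text fields makes the default-off policy structural: there is simply no place to store a prompt.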