
Settings API

Base path: /api/v1/settings

Overview

The Settings API manages application configuration: LLM provider selection, model parameters, API keys for the supported providers (OpenAI, Anthropic, Aliyun, Volcengine, Ollama), embedding/reranker models, and other system settings. Effective values are merged from environment variables, with database overrides applied on top; API keys are masked in responses.
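The merge-and-mask behavior described above can be sketched as follows. This is an illustrative sketch, not the server's actual code; the helper names (`merge_settings`, `mask_key`) and the exact mask width are assumptions based on the `sk-12***abcd` example shown later.

```python
def merge_settings(env_defaults: dict, db_overrides: dict) -> dict:
    """Database overrides win; environment values fill the gaps."""
    merged = dict(env_defaults)
    merged.update({k: v for k, v in db_overrides.items() if v is not None})
    return merged


def mask_key(key: str) -> str:
    """Render e.g. 'sk-1234567890abcd' as 'sk-12***abcd'."""
    if not key:
        return ""
    if len(key) <= 9:
        return "***"  # too short to show any characters safely
    return key[:5] + "***" + key[-4:]
```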

Endpoints

| Method | Path | Description |
| --- | --- | --- |
| GET | `/settings` | Get all settings |
| PUT | `/settings` | Update settings (partial) |
| GET | `/settings/models` | List available models per provider |
| POST | `/settings/test-connection` | Test LLM provider connection |
| GET | `/settings/health` | Health check |

GET /api/v1/settings

Description: Return merged settings (DB overrides .env). API keys are masked (e.g. sk-12***abcd).

Response: ApiResponse[SettingsSchema]

SettingsSchema

| Field | Type | Description |
| --- | --- | --- |
| `llm_provider` | string | Default LLM provider (`openai`, `anthropic`, `aliyun`, `volcengine`, `ollama`, `mock`) |
| `llm_model` | string | Default model (overrides the provider default) |
| `llm_temperature` | float | Temperature (0.0–2.0) |
| `llm_max_tokens` | int | Max tokens |
| `openai_api_key` | string | OpenAI API key (masked) |
| `openai_model` | string | OpenAI model |
| `anthropic_api_key` | string | Anthropic API key (masked) |
| `anthropic_model` | string | Anthropic model |
| `aliyun_api_key` | string | Aliyun API key (masked) |
| `aliyun_base_url` | string | Aliyun base URL |
| `aliyun_model` | string | Aliyun model |
| `volcengine_api_key` | string | Volcengine API key (masked) |
| `volcengine_base_url` | string | Volcengine base URL |
| `volcengine_model` | string | Volcengine model |
| `ollama_base_url` | string | Ollama base URL |
| `ollama_model` | string | Ollama model |
| `embedding_model` | string | Embedding model name |
| `reranker_model` | string | Reranker model name |
| `data_dir` | string | Data directory path |
| `cuda_visible_devices` | string | CUDA device IDs |
| `semantic_scholar_api_key` | string | Semantic Scholar API key (masked) |
| `unpaywall_email` | string | Unpaywall email |

Get Settings Example

```bash
curl -X GET "http://localhost:8000/api/v1/settings"
```

```json
{
  "code": 200,
  "message": "success",
  "data": {
    "llm_provider": "openai",
    "llm_model": "gpt-4o-mini",
    "llm_temperature": 0.7,
    "llm_max_tokens": 4096,
    "openai_api_key": "sk-12***abcd",
    "openai_model": "gpt-4o-mini",
    "anthropic_api_key": "",
    "anthropic_model": "",
    "aliyun_api_key": "",
    "aliyun_base_url": "",
    "aliyun_model": "",
    "volcengine_api_key": "",
    "volcengine_base_url": "",
    "volcengine_model": "",
    "ollama_base_url": "http://localhost:11434",
    "ollama_model": "",
    "embedding_model": "BAAI/bge-m3",
    "reranker_model": "",
    "data_dir": "/data0/djx/omelette",
    "cuda_visible_devices": "",
    "semantic_scholar_api_key": "",
    "unpaywall_email": ""
  }
}
```

PUT /api/v1/settings

Description: Update user-configurable settings. Only non-null fields are applied; masked API key values (containing `***`) are skipped, so echoing a GET response back in a PUT does not overwrite stored secrets.

Request: SettingsUpdateSchema (partial, all fields optional)

| Field | Type | Constraints |
| --- | --- | --- |
| `llm_provider` | string | |
| `llm_model` | string | |
| `llm_temperature` | float | 0.0–2.0 |
| `llm_max_tokens` | int | 1–128000 |
| `openai_api_key` | string | |
| `openai_model` | string | |
| `anthropic_api_key` | string | |
| `anthropic_model` | string | |
| `aliyun_api_key` | string | |
| `aliyun_base_url` | string | |
| `aliyun_model` | string | |
| `volcengine_api_key` | string | |
| `volcengine_base_url` | string | |
| `volcengine_model` | string | |
| `ollama_base_url` | string | |
| `ollama_model` | string | |

Response: ApiResponse[SettingsSchema] (updated merged settings)
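The update rule above (apply only non-null fields, ignore masked secrets) can be sketched like this. The function name and the `_api_key` suffix check are assumptions for illustration, not the actual server implementation.

```python
def apply_update(current: dict, update: dict) -> dict:
    """Apply a partial settings update, skipping nulls and masked keys."""
    result = dict(current)
    for field, value in update.items():
        if value is None:
            continue  # field omitted from the update: keep current value
        if field.endswith("_api_key") and "***" in value:
            continue  # masked secret echoed back: keep the stored value
        result[field] = value
    return result
```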

Update Settings Example

```bash
curl -X PUT "http://localhost:8000/api/v1/settings" \
  -H "Content-Type: application/json" \
  -d '{"llm_provider": "openai", "llm_model": "gpt-4o-mini"}'
```

GET /api/v1/settings/models

Description: Return available LLM providers and their model lists.

Response: ApiResponse[list[ProviderModelInfo]]

ProviderModelInfo

| Field | Type | Description |
| --- | --- | --- |
| `provider` | string | Provider ID |
| `display_name` | string | Display name |
| `models` | string[] | List of model IDs |
| `requires_api_key` | bool | Whether an API key is required |
| `requires_base_url` | bool | Whether the base URL is configurable |
| `default_base_url` | string | Default base URL, if applicable |
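A small client-side sketch of how this schema might be used: given the `data` list from `GET /api/v1/settings/models`, pick the providers that work without an API key (e.g. a local Ollama). The sample payload is abbreviated for illustration.

```python
# Abbreviated sample of the endpoint's data list (illustrative only).
providers = [
    {"provider": "openai", "requires_api_key": True, "requires_base_url": False},
    {"provider": "ollama", "requires_api_key": False, "requires_base_url": True},
]

# Providers usable without configuring an API key.
keyless = [p["provider"] for p in providers if not p["requires_api_key"]]
```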

List Models Example

```bash
curl -X GET "http://localhost:8000/api/v1/settings/models"
```

```json
{
  "code": 200,
  "message": "success",
  "data": [
    {
      "provider": "openai",
      "display_name": "OpenAI",
      "models": ["gpt-4o", "gpt-4o-mini", "gpt-4.1", "gpt-4.1-mini", "gpt-4.1-nano", "o3-mini"],
      "requires_api_key": true,
      "requires_base_url": false,
      "default_base_url": ""
    },
    {
      "provider": "ollama",
      "display_name": "Ollama (Local)",
      "models": ["llama3", "llama3.1", "mistral", "qwen2", "deepseek-r1"],
      "requires_api_key": false,
      "requires_base_url": true,
      "default_base_url": "http://localhost:11434"
    }
  ]
}
```

POST /api/v1/settings/test-connection

Description: Test the current LLM configuration by sending a simple prompt. Uses merged settings from DB (no request body).

Response: ApiResponse[dict]

| Field | Type | Description |
| --- | --- | --- |
| `success` | bool | Whether the test succeeded |
| `response` | string | First 200 characters of the LLM response (on success) |
| `error` | string | Error message (on failure) |
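The truncation described above can be sketched as follows. This is a hedged sketch of the result shape, not the actual handler; the function name is hypothetical.

```python
def build_test_result(reply, error):
    """Shape the test-connection result: truncate success output to 200 chars."""
    if error is not None:
        return {"success": False, "error": error}
    return {"success": True, "response": (reply or "")[:200]}
```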

Test Connection Example

```bash
curl -X POST "http://localhost:8000/api/v1/settings/test-connection"
```

Success:

```json
{
  "code": 200,
  "message": "success",
  "data": {
    "success": true,
    "response": "OK."
  }
}
```

Failure:

```json
{
  "code": 500,
  "message": "Connection test failed",
  "data": {
    "success": false,
    "error": "Invalid API key"
  }
}
```

GET /api/v1/settings/health

Description: Simple health check endpoint.

Response: ApiResponse[dict]

| Field | Type | Description |
| --- | --- | --- |
| `status` | string | `"healthy"` |
| `version` | string | Application version |

Health Check Example

```bash
curl -X GET "http://localhost:8000/api/v1/settings/health"
```

```json
{
  "code": 200,
  "message": "success",
  "data": {
    "status": "healthy",
    "version": "0.1.0"
  }
}
```

Error Codes

| Code | Description |
| --- | --- |
| 200 | Success |
| 400 | Bad request (e.g. invalid temperature range) |
| 422 | Validation error (invalid request body) |
| 500 | Server error (e.g. connection test failure) |

Released under the MIT License.