[FEAT]: Add custom base URL for Anthropic provider and optional model discovery for Generic OpenAI #5234

@raucodes

Description

What would you like to see?

I would like to request two related improvements for self-hosted / compatible API backends:

  1. Add custom base URL / custom host support to the Anthropic provider
  2. Add optional model discovery / model selection support to the Generic OpenAI provider

Why is this needed?

There are now local/self-hosted backends that expose APIs compatible with OpenAI and/or Anthropic.

For example, some backends support:

  • OpenAI-compatible /v1/chat/completions
  • OpenAI-compatible /v1/models
  • Anthropic-compatible /v1/messages
  • streaming
  • reasoning / thinking
  • tool calling

A good example is oMLX, which advertises itself as a drop-in replacement for both OpenAI and Anthropic APIs.

Problem today

1. Anthropic provider

The Anthropic connector appears to be tied to Anthropic-hosted endpoints only, with no way to set a custom host / base URL.

That makes it impossible to use Anthropic-compatible self-hosted backends directly.

2. Generic OpenAI provider

The Generic OpenAI connector works with custom hosts, but it appears to require a manually entered model name and does not seem to offer the model-selection UX that more specific providers expose.

This means that even when a backend supports GET /v1/models, users cannot conveniently pick from available models in the UI.

Expected behavior

Anthropic provider

Please allow configuring:

  • API Key
  • Model
  • Base URL / Custom Host

Example base URLs:

  • https://api.anthropic.com
  • http://localhost:8080
  • https://my-self-hosted-backend.example.com
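
As a sketch of the requested behavior, the provider could resolve its Messages endpoint from an optional base-URL override, defaulting to the hosted API. The function name and structure below are illustrative, not AnythingLLM's actual internals:

```typescript
// Hosted Anthropic API, used when no custom base URL is configured.
const ANTHROPIC_DEFAULT = "https://api.anthropic.com";

// Resolve the Anthropic-compatible Messages endpoint from an optional
// user-supplied base URL (the feature requested in this issue).
function messagesEndpoint(baseUrl?: string): string {
  // Trim trailing slashes so a value like "http://localhost:8080/"
  // never produces "//v1/messages".
  const base = (baseUrl ?? ANTHROPIC_DEFAULT).replace(/\/+$/, "");
  return `${base}/v1/messages`;
}
```

For example, `messagesEndpoint("http://localhost:8080")` would yield `http://localhost:8080/v1/messages`, while omitting the argument keeps today's hosted behavior.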

Generic OpenAI provider

Please optionally support:

  • calling GET /v1/models
  • populating the model dropdown from that endpoint
  • falling back to manual model entry when model discovery is unavailable
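
A minimal sketch of that fallback logic, assuming the standard OpenAI `GET /v1/models` response shape (`{ data: [{ id: string }] }`); the helper name is hypothetical:

```typescript
// Shape of an OpenAI-compatible GET /v1/models response (only the fields used here).
interface ModelsResponse {
  data?: { id: string }[];
}

// Build the model dropdown from a discovery response, falling back to the
// user's manually entered model when discovery failed or returned nothing.
function availableModels(
  discovered: ModelsResponse | null,
  manualModel: string
): string[] {
  const ids = discovered?.data?.map((m) => m.id) ?? [];
  return ids.length > 0 ? ids : [manualModel];
}
```

With this shape, a backend that serves `/v1/models` populates the dropdown, and a backend that does not simply leaves the current manual-entry behavior in place.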

Why this would help

This would make AnythingLLM much more flexible for:

  • self-hosted backends
  • OpenAI-compatible servers
  • Anthropic-compatible servers
  • private/local inference infrastructure
  • advanced users who do not want to rely only on hosted APIs

It would also reduce friction when testing backends that already expose standards-compatible endpoints.

Additional context

Right now, Generic OpenAI is often the best workaround for self-hosted backends because it supports custom hosts, but it may miss provider-specific UX features such as model selection.

At the same time, the Anthropic provider could be a better fit for Anthropic-compatible servers, but only if custom base URLs are supported.
