LLM Connections
Choose the right provider route for cloud, local, or cross-device use.
Abolitus does not ship its own hosted model. Every response comes from a route that you choose.
The right route depends on what you care about most: setup speed, model variety, price control, hardware independence, or strict local privacy.
The Four Main Routes
OpenRouter
OpenRouter is the default recommendation for most users.
Why people choose it:
- One browser-safe cloud route for many model families.
- Fast setup.
- Good fit when you want to compare models without rewriting your workflow.
- Useful if you want access to current cloud families such as Claude Opus 4.7, GPT-5.5, Gemini 3.1 Pro, and other models exposed by OpenRouter at the time you connect.
Good for:
- First-time setup.
- Testing multiple model families quickly.
- Users who want a simple cloud route with strong catalog breadth.
Less ideal for:
- Users who want every part of the workflow to stay on local hardware.
NanoGPT
NanoGPT is a good alternative when you want a cloud route with a crypto-friendly payment posture.
Why people choose it:
- Browser-safe route.
- Good fit if you prefer that provider's payment model.
- Useful when you want cloud access but do not want your workflow tied to a conventional subscription setup.
Good for:
- Cloud usage with privacy-conscious payment preferences.
- Users who already manage NanoGPT balances.
Local OpenAI-Compatible Endpoint
This is the route for users who want full control over inference.
Common examples:
- Ollama.
- LM Studio.
- Other OpenAI-compatible endpoints that expose standard model listing and chat-completions routes.
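Whatever the server brand, the route shape is the same: a model-listing endpoint plus a chat-completions endpoint. A minimal sketch using only Python's standard library, assuming a local server at http://localhost:11434/v1 (Ollama's default) and a placeholder model name — substitute your own URL and a model your server actually lists:

```python
import json
import urllib.request

BASE_URL = "http://localhost:11434/v1"  # assumption: Ollama's default local address


def chat_payload(model, user_message):
    """Build a standard OpenAI-compatible chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }


def post_chat(base_url, payload):
    """POST the payload to the chat-completions route and return the reply text."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With a server running, `post_chat(BASE_URL, chat_payload("llama3", "Say hello."))` should return the model's reply as plain text ("llama3" here is a placeholder name, not a guarantee of what your server has installed).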
Good for:
- Maximum privacy.
- Low marginal cost once your hardware is ready.
- Open-source model workflows, including local families such as Gemma 4 when your local stack supports them.
Tradeoffs:
- You manage performance, VRAM limits, model downloads, and local server stability.
- Some weaker local models may need more prompt guidance than top cloud models.
Desktop Tunnel
Desktop Tunnel is for remote access to a desktop-hosted local model from another device.
Good for:
- Using your desktop model from a phone or tablet.
- Keeping heavy inference on one machine while reading and replying from another.
Important limit:
- This is a Premium feature.
Routes That Are Not Directly Supported
Direct browser calls to some vendor APIs are not a practical fit for Abolitus because those vendors block or restrict browser-origin usage.
In practice, this means:
- Use OpenRouter instead of direct OpenAI browser calls.
- Use OpenRouter instead of direct Anthropic browser calls.
- Use a supported compatible route if a direct vendor browser call is not available.
How to Choose Quickly
If you are undecided, use this shortcut:
- Want the easiest setup: choose OpenRouter.
- Want crypto-friendly cloud usage: choose NanoGPT.
- Want maximum privacy and local control: choose Local.
- Want to use a desktop model from another device: choose Desktop Tunnel.
Local Route Setup Notes
Local routes need two things:
- A running server.
- Browser access (CORS permission) from the Abolitus page origin.
Ollama
Typical default URL:
http://localhost:11434/v1
Typical checklist:
- Start the Ollama server.
- Make sure the server allows requests from your browser origin (Ollama controls this with the OLLAMA_ORIGINS environment variable).
- Confirm the model is actually installed.
- Add the local provider route in Abolitus.
- Verify that the model list loads.
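The last two checklist items can be verified from a short script; a sketch assuming the default URL http://localhost:11434/v1 and Python's standard library:

```python
import json
import urllib.request


def list_models(base_url="http://localhost:11434/v1"):
    """Fetch model IDs from an OpenAI-compatible /models route."""
    with urllib.request.urlopen(base_url + "/models") as resp:
        body = json.load(resp)
    # The OpenAI-compatible shape is {"object": "list", "data": [{"id": ...}, ...]}.
    return [entry["id"] for entry in body["data"]]


def model_installed(models, wanted):
    """Check that the model you plan to select is actually in the list."""
    return wanted in models
```

If `list_models()` returns an empty list or raises, fix the server before touching anything in Abolitus — the app can only show what this route serves.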
LM Studio
Typical checklist:
- Start the local server.
- Enable the OpenAI-compatible API surface.
- Enable browser access if required by your setup.
- Add the server URL in Abolitus.
- Confirm the model list loads.
Other OpenAI-Compatible Endpoints
If your route exposes compatible model-list and chat-completions endpoints, Abolitus can often use it even if it is not branded as Ollama or LM Studio.
Model Choice Advice
When you are deciding what model to use for roleplay, think in terms of behavior rather than hype.
Choose a higher-end cloud model when you need:
- Better scene coherence over long turns.
- Better instruction-following.
- Better reasoning around layered world rules.
- More stable character voice in complex scenes.
Choose a local model when you need:
- Strong privacy.
- Predictable cost.
- Maximum experimentation.
- A route that stays fully under your control.
Common Connection Problems
The model list does not appear
Check:
- Wrong base URL.
- Missing API key.
- Local server not running.
- Browser access blocked by route configuration.
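These causes produce different failure modes, so a quick probe can narrow things down. A sketch, assuming Python's standard library, that separates "no server answered" from "server answered but rejected the request":

```python
import urllib.error
import urllib.request


def diagnose(base_url):
    """Probe the /models route and report which failure class applies."""
    try:
        with urllib.request.urlopen(base_url + "/models", timeout=5):
            return "ok: server reachable and model list served"
    except urllib.error.HTTPError as err:
        # The server answered but refused: key or path problem, not connectivity.
        if err.code in (401, 403):
            return "auth problem: check the API key"
        return "server replied with HTTP %d: check the base URL path" % err.code
    except urllib.error.URLError:
        # Nothing answered at all: wrong host/port, or the server is not running.
        return "no server answered: check the URL and that the server is running"
```

A 401/403 points at the key, a 404 usually points at a wrong base URL path, and a connection error means the server itself is down or unreachable.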
The provider exists but replies fail
Check:
- Model selected but not actually available.
- Cloud balance or provider-side availability.
- Local model unloaded or out of memory.
- Tunnel host not active when using Desktop Tunnel.
Replies feel worse than expected
The route may be working correctly, but the model may simply not fit the job. Before changing all your prompts, test a different model family or adjust your sampler and wrapper settings.
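Sampler settings travel in the same chat-completions request body as the messages. A sketch of the commonly supported OpenAI-compatible fields — exact support varies by route and model, so treat these names as conventions rather than guarantees:

```python
def with_sampler(payload, temperature=0.8, top_p=0.95, max_tokens=512):
    """Attach common OpenAI-compatible sampler fields to a request payload.

    Lower temperature makes replies more deterministic; top_p trims the
    tail of unlikely tokens; max_tokens caps reply length. Routes ignore
    fields they do not support.
    """
    out = dict(payload)
    out.update({
        "temperature": temperature,
        "top_p": top_p,
        "max_tokens": max_tokens,
    })
    return out
```

Trying a lower temperature is often a cheaper first experiment than rewriting prompts, since it changes only how the same model samples.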