Local Server Setup
- RIFT supports local servers for private, fully offline transcription, and even LLM transformations.
- rift-local is the recommended server: it handles model downloads, server configuration, and supports multiple backends.
Quick Start
```shell
uv tool install rift-local && rift-local serve --open
```

That's it! Your browser will open with the Local source pre-selected. Click Enable Voice Input to start.
Don't have uv?
```shell
brew install uv
```

Or see the uv installation docs for other platforms. Requires Python 3.10+.
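If you're not on macOS or don't use Homebrew, uv also ships a standalone installer, documented in the uv installation docs:

```shell
# Standalone uv installer for Linux and macOS (from the uv docs).
# Review the script before piping it to a shell if you prefer.
curl -LsSf https://astral.sh/uv/install.sh | sh
```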
Alternative: install with pip
```shell
python3 -m venv .venv && source .venv/bin/activate
pip install rift-local
```

Advanced Usage
Available Models
sherpa-onnx
| Model | Params | Disk | Notes |
|---|---|---|---|
| nemotron-en | 0.6B | 600MB | Best accuracy (int8) |
| zipformer-en-kroko | ~30M | 68MB | Lightweight, fast |
moonshine
| Model | Params | Disk | Notes |
|---|---|---|---|
| moonshine-en-medium | 245M | 190MB | Default; best moonshine accuracy |
| moonshine-en-small | 123M | 95MB | Balanced |
| moonshine-en-tiny | 34M | 26MB | Fastest |
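To serve one of the models above instead of the default, a model selector is typically passed at serve time. The `--model` flag name here is an assumption, not confirmed by this guide; check `rift-local serve --help` for the actual option:

```shell
# Hypothetical: the --model flag is an assumption; verify with
# `rift-local serve --help`. Picks the smallest, fastest moonshine model.
rift-local serve --model moonshine-en-tiny --open
```

The tiny model trades some accuracy for a 26MB download and the lowest latency, which suits quick experiments; switch to moonshine-en-medium or nemotron-en when accuracy matters more.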
Alternative Server (without Python)
See the sherpa-onnx C++ server setup guide.