docs: Use pymdownx.snippets for easier docs management

This commit is contained in:
Andrei Betlen 2023-09-12 22:28:58 -04:00
parent 2787663a25
commit 109123c4f0
3 changed files with 9 additions and 94 deletions

docs/ Normal file

@@ -0,0 +1 @@
--8<-- ""


@@ -1,95 +1,5 @@
# Getting Started
title: Getting Started
## 🦙 Python Bindings for `llama.cpp`
[![PyPI - Python Version](](
[![PyPI - License](](
[![PyPI - Downloads](](
Simple Python bindings for **@ggerganov's** [`llama.cpp`]( library.
This package provides:
- Low-level access to the C API via a `ctypes` interface
- High-level Python API for text completion
- OpenAI-like API
- LangChain compatibility
## Installation
Install from PyPI:
pip install llama-cpp-python
## High-level API
>>> from llama_cpp import Llama
>>> llm = Llama(model_path="./models/7B/ggml-model.bin")
>>> output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
>>> print(output)
"id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
"object": "text_completion",
"created": 1679561337,
"model": "./models/7B/ggml-model.bin",
"choices": [
"text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
"index": 0,
"logprobs": None,
"finish_reason": "stop"
"usage": {
"prompt_tokens": 14,
"completion_tokens": 28,
"total_tokens": 42
## Web Server
`llama-cpp-python` offers a web server which aims to act as a drop-in replacement for the OpenAI API.
This allows you to use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.).
To install the server package and get started:
pip install llama-cpp-python[server]
export MODEL=./models/7B/ggml-model.bin
python3 -m llama_cpp.server
Navigate to [http://localhost:8000/docs](http://localhost:8000/docs) to see the OpenAPI documentation.
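Since the server aims to be a drop-in replacement for the OpenAI API, any OpenAI-style HTTP request works against it. A sketch of building such a request, assuming the default bind of `localhost:8000` and the OpenAI-style `/v1/completions` route (the actual send is commented out because it requires the server from the commands above to be running):

```python
import json

# Sketch: the JSON body an OpenAI-style completions endpoint expects.
# The route and field names follow the OpenAI API the server emulates;
# localhost:8000 is the default bind shown above.
URL = "http://localhost:8000/v1/completions"  # assumed default route

payload = {
    "prompt": "Q: Name the planets in the solar system? A: ",
    "max_tokens": 32,
    "stop": ["Q:", "\n"],
}
body = json.dumps(payload).encode("utf-8")

# Sending the request (requires the running server):
# import urllib.request
# req = urllib.request.Request(
#     URL, data=body, headers={"Content-Type": "application/json"}
# )
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```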
## Low-level API
The low-level API is a direct `ctypes` binding to the C API provided by `llama.cpp`.
The entire API can be found in [llama_cpp/]( and should mirror [llama.h](
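The binding layer itself is ordinary `ctypes`: load the shared library, declare each function's argument and return types, then call it like a Python function. A generic illustration of that pattern, shown against the C standard library rather than `libllama` (since a built `llama.cpp` is not assumed here):

```python
import ctypes
import ctypes.util

# Generic ctypes pattern, illustrated with libc instead of libllama:
# llama_cpp/ applies the same load/declare/call steps to
# every function declared in llama.h.
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)

# Declare the C signature: size_t strlen(const char *s);
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

length = libc.strlen(b"llama")
print(length)  # 5
```

Declaring `argtypes`/`restype` up front is what makes the calls type-safe; without it, `ctypes` guesses and can silently truncate pointers or integers.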
## Development
This package is under active development and I welcome any contributions.
To get started, clone the repository and install the package in development mode:
git clone
cd llama-cpp-python
git submodule update --init --recursive
# Will need to be re-run any time vendor/llama.cpp is updated
pip install --upgrade pip
pip install -e .[all]
## License
This project is licensed under the terms of the MIT license.
--8<-- ""


@@ -17,5 +17,9 @@ markdown_extensions:
line_spans: __span
pygments_lang_class: true
- pymdownx.inlinehilite
- pymdownx.magiclink:
repo_url_shorthand: true
user: abetlen
repo: llama-cpp-python
- pymdownx.snippets
- pymdownx.superfences
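With `pymdownx.snippets` enabled, a docs page can inline another file verbatim at build time using the "scissors" marker, which is what the new one-line pages in this commit do. A sketch of the syntax (the file name here is illustrative; the actual pages in this commit reference files elided above):

```markdown
<!-- contents of the named file are inlined when mkdocs builds the site -->
--8<-- "README.md"
```

This lets the repository README remain the single source of truth while the rendered docs site reuses it, which is why this commit deletes 94 duplicated lines.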