# Getting Started

## 🦙 Python Bindings for `llama.cpp`


Simple Python bindings for @ggerganov's `llama.cpp` library. This package provides:

- Low-level access to the C API via the `ctypes` interface
- High-level Python API for text completion
    - OpenAI-like API
    - LangChain compatibility

## Installation

Install from PyPI:

```bash
pip install llama-cpp-python
```

## High-level API

```python
>>> from llama_cpp import Llama
>>> llm = Llama(model_path="./models/7B/ggml-model.bin")
>>> output = llm("Q: Name the planets in the solar system? A: ", max_tokens=32, stop=["Q:", "\n"], echo=True)
>>> print(output)
{
  "id": "cmpl-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx",
  "object": "text_completion",
  "created": 1679561337,
  "model": "./models/7B/ggml-model.bin",
  "choices": [
    {
      "text": "Q: Name the planets in the solar system? A: Mercury, Venus, Earth, Mars, Jupiter, Saturn, Uranus, Neptune and Pluto.",
      "index": 0,
      "logprobs": None,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "prompt_tokens": 14,
    "completion_tokens": 28,
    "total_tokens": 42
  }
}
```
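The same call also supports streaming. A minimal sketch, assuming `stream=True` is passed to the call above (the return value then becomes an iterator of partial completion chunks rather than a single response dict):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/7B/ggml-model.bin")

# With stream=True, chunks are yielded as tokens are generated;
# each chunk follows the same shape as a completion response.
for chunk in llm(
    "Q: Name the planets in the solar system? A: ",
    max_tokens=32,
    stop=["Q:", "\n"],
    stream=True,
):
    print(chunk["choices"][0]["text"], end="", flush=True)
```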

## Web Server

`llama-cpp-python` offers a web server that aims to act as a drop-in replacement for the OpenAI API. This allows you to use llama.cpp-compatible models with any OpenAI-compatible client (language libraries, services, etc.).

To install the server package and get started:

```bash
pip install llama-cpp-python[server]
export MODEL=./models/7B/ggml-model.bin
python3 -m llama_cpp.server
```

Navigate to http://localhost:8000/docs to see the OpenAPI documentation.
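
For example, you can point the `openai` Python package at the local server. A minimal sketch, assuming the pre-1.0 `openai` client; the API key and model name here are placeholders, since the server serves whatever model `$MODEL` points at:

```python
import openai

# The local server does not validate the key, but the client requires one.
openai.api_key = "sk-no-key-required"
openai.api_base = "http://localhost:8000/v1"

completion = openai.Completion.create(
    model="text-davinci-003",  # placeholder; the server uses the model from $MODEL
    prompt="Q: Name the planets in the solar system? A: ",
    max_tokens=32,
    stop=["Q:", "\n"],
)
print(completion.choices[0].text)
```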

## Low-level API

The low-level API is a direct `ctypes` binding to the C API provided by `llama.cpp`. The entire API can be found in `llama_cpp/llama_cpp.py` and should mirror `llama.h`.
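
Below is a minimal sketch of tokenizing a prompt through the raw bindings. The exact function names and signatures track `llama.h` at the pinned `vendor/llama.cpp` revision, so treat this as illustrative:

```python
import llama_cpp

# Context parameters and initialization mirror llama_context_default_params
# and llama_init_from_file from llama.h.
params = llama_cpp.llama_context_default_params()
ctx = llama_cpp.llama_init_from_file(b"./models/7B/ggml-model.bin", params)

# char * parameters take bytes; array parameters take ctypes arrays.
max_tokens = params.n_ctx
tokens = (llama_cpp.llama_token * max_tokens)()
n_tokens = llama_cpp.llama_tokenize(
    ctx,
    b"Q: Name the planets in the solar system? A: ",
    tokens,
    max_tokens,
    llama_cpp.c_bool(True),  # add_bos
)

llama_cpp.llama_free(ctx)
```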

## Development

This package is under active development and I welcome any contributions.

To get started, clone the repository and install the package in development mode:

```bash
git clone git@github.com:abetlen/llama-cpp-python.git
cd llama-cpp-python
git submodule update --init --recursive
# Will need to be re-run any time vendor/llama.cpp is updated
python3 setup.py develop
```

## API Reference

::: llama_cpp.Llama
    options:
        members:
            - __init__
            - tokenize
            - detokenize
            - reset
            - eval
            - sample
            - generate
            - create_embedding
            - embed
            - create_completion
            - __call__
            - create_chat_completion
            - set_cache
            - token_bos
            - token_eos
        show_root_heading: true

::: llama_cpp.LlamaCache

::: llama_cpp.llama_cpp
    options:
        show_if_no_docstring: true

## License

This project is licensed under the terms of the MIT license.