Changelog

All notable changes to this project will be documented in this file.

The format is based on Keep a Changelog, and this project adheres to Semantic Versioning.

[Unreleased]

[0.2.15]

  • Update llama.cpp to ggerganov/llama.cpp@0a7c980b6f
  • Add support for LLaVA 1.5 multimodal models by @damian0815 and @abetlen in #821
  • Update OpenAI API compatibility to match the Dev Day update by @abetlen in #821
  • Add seed parameter to the completion and chat_completion functions of the Llama class by @abetlen in 86aeb9f3a1
  • Add JSON mode support to constrain chat completion to JSON objects by @abetlen in b30b9c338b (both shown in the sketch below)
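
A minimal sketch of the two additions above, assuming a local GGUF chat model (the model path, messages, and token id conventions are placeholders):

```python
from llama_cpp import Llama

# Load a local GGUF model (path is a placeholder).
llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")

# response_format constrains the reply to a valid JSON object (JSON mode),
# and the new seed parameter makes sampling reproducible across calls.
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You reply in JSON."},
        {"role": "user", "content": "List three primary colors."},
    ],
    response_format={"type": "json_object"},
    seed=42,
)
print(response["choices"][0]["message"]["content"])
```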

[0.2.14]

  • Update llama.cpp to ggerganov/llama.cpp@f0b30ef7dc
  • Add support for Hugging Face AutoTokenizer chat formats by @bioshazard and @abetlen in #790 and bbffdaebaa
  • Fix llama-2 chat format by @earonesty in #869
  • Add support for functionary chat format by @abetlen in #784
  • Migrate inference from the deprecated llama_eval API to llama_batch and llama_decode by @abetlen in #795

[0.2.13]

  • Update llama.cpp to ggerganov/llama.cpp@51b2fc11f7
  • Fix "name 'open' is not defined" exception when deleting model by @abetlen in 011b95d7f3
  • Fix tokenization of special characters by @antoine-lizee in #850

[0.2.12]

  • Update llama.cpp to ggerganov/llama.cpp@50337961a6
  • Fix missing n_seq_id in llama_batch by @NickAlgra in #842
  • Fix for shared libraries on Windows that start with lib prefix by @sujeendran in #848
  • Fix exception raised in __del__ when freeing models by @cebtenzzre in #846
  • Performance improvement for logit bias by @zolastro in #851
  • Fix suffix check arbitrary code execution bug by @mtasic85 in #854
  • Fix typo in function_call parameter in llama_types.py by @akatora28 in #849
  • Fix streaming not returning finish_reason by @gmcgoldr in #798
  • Fix n_gpu_layers check to allow values less than 1 for server by @hxy9243 in #826
  • Suppress stdout and stderr when freeing model by @paschembri in #803
  • Fix llama2 chat format by @delock in #808
  • Add validation for tensor_split size by @eric1932 in #820
  • Print stack trace on server error by @abetlen in d6a130a052
  • Update docs for gguf by @johnccshen in #783
  • Add chatml chat format by @abetlen in 305482bd41

[0.2.11]

  • Fix bug 'llama_model_params' object has no attribute 'logits_all' by @abetlen in d696251fbe

[0.2.10]

  • Fix bug 'llama_model_params' object has no attribute 'embedding' by @abetlen in 42bb721d64

[0.2.9]

  • Fix critical bug in pip installation of v0.2.8 caused by the .git directory being included in the package, in ac853e01e1

[0.2.8]

  • Update llama.cpp to ggerganov/llama.cpp@40e07a60f9
  • Add configurable chat formats by @abetlen in #711 (see the sketch after this list)
  • Fix rope scaling bug by @Josh-XT in #767
  • Fix missing numa parameter in server by @abetlen in d9bce17794
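
A minimal sketch of selecting a built-in chat format by name (the model path is a placeholder; "llama-2" is one of the registered format names):

```python
from llama_cpp import Llama

# chat_format selects a registered prompt template by name, so callers
# no longer need to hand-build model-specific prompts.
llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",  # placeholder path
    chat_format="llama-2",
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response["choices"][0]["message"]["content"])
```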

[0.2.7]

  • Update llama.cpp to ggerganov/llama.cpp@a98b1633d5
  • Install required runtime DLLs to the package directory on Windows by @abetlen in 8d75016549
  • Add openai-processing-ms to server response header by @Tradunsky in #748
  • Bump minimum version of scikit-build-core to 0.5.1 to fix msvc cmake issue by @abetlen in 1ed0f3ebe1
  • Update llama_types.py to better match the OpenAI API; old names are aliased to the new ones by @abetlen in dbca136fea

[0.2.6]

  • Update llama.cpp to ggerganov/llama.cpp@80291a1d02a07f7f66666fb576c5b1e75aa48b46

[0.2.5]

  • Fix docker images missing starlette-context dependency by @abetlen in 2291798900
  • Fix loading dll in Windows Isolation Containers by @abetlen in 8474665625
  • Fix build issue on m1 macs by @abetlen in dbd3a6d1ed
  • Update docs to gguf and add hw acceleration docs for server by @jasonacox in #688

[0.2.4]

  • Add NUMA support by @abetlen in f4090a0bb2. NOTE: low-level API users must call llama_backend_init at the start of their programs (see the sketch after this list)
  • Fix tensor_split server cli argument by @abetlen in c4c440ba2d
  • Made all Llama init parameters into keyword-only parameters by @abetlen in c8f9b8a734
  • Added server params for low_vram, main_gpu, lora_base, and lora_path by @abetlen in 2920c4bf7e
  • Removed server params for rms_norm_eps and n_gqa by @abetlen in 2920c4bf7e
  • Fix boolean cli options by @abetlen in c999325e8e and 0449d29b9f
  • Silence Pydantic Settings warnings about model_alias setting by @earonesty in #705
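
A minimal sketch of the required low-level initialization, assuming the single-boolean llama_backend_init signature of this era (it changed in later llama.cpp versions):

```python
import llama_cpp

# Low-level API users must initialize the backend once at program start.
# The numa flag shown here matches this era of the bindings; later
# versions moved NUMA setup into a separate call.
llama_cpp.llama_backend_init(numa=False)

# ... low-level llama_cpp.* calls go here ...

# Release backend resources on shutdown.
llama_cpp.llama_backend_free()
```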

[0.2.3]

  • Update llama.cpp to ggerganov/llama.cpp@71ca2fad7d
  • Add X-Request-ID request header for mirroring custom IDs by @devrimcavusoglu in #703
  • Add pyproject extra for scikit-build-core to ensure compatible pathspec version by @abetlen in 6cfc54284b
  • Fix issue with Literal and Optional cli arguments not working by @abetlen in #702

[0.2.2]

  • Fix bug in pip install of v0.2.1 due to scikit-build-core removing all .metal files in the source distribution (see #701)

[0.2.1]

  • Fix bug in pip install of v0.2.0 due to .git folder being included in the source distribution (see #701)

[0.2.0]

  • Migrated to scikit-build-core build system by @abetlen in #499
  • Use numpy views for LogitsProcessor and StoppingCriteria instead of Python lists by @abetlen in #499 (see the sketch after this list)
  • Drop support for end-of-life Python 3.7 by @abetlen in #499
  • Convert low-level llama.cpp constants to use basic Python types instead of ctypes types by @abetlen in #499
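
A minimal sketch of a numpy-based logits processor, assuming the LogitsProcessorList wrapper exported by the package (the token id and model path are placeholders):

```python
import numpy as np
from llama_cpp import Llama, LogitsProcessorList

def suppress_token(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
    # scores arrives as a numpy array over the logits, so the edit is
    # vectorized instead of round-tripping through Python lists.
    scores[1234] = -np.inf  # 1234 is a placeholder token id
    return scores

llm = Llama(model_path="./models/model.gguf")  # placeholder path
out = llm.create_completion(
    "Hello",
    max_tokens=16,
    logits_processor=LogitsProcessorList([suppress_token]),
)
print(out["choices"][0]["text"])
```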

[0.1.85]

  • Add llama_cpp.__version__ attribute by @janvdp in #684
  • Fix low level api examples by @jbochi in #680

[0.1.84]

  • Update llama.cpp

[0.1.83]

  • Update llama.cpp

[0.1.82]

  • Update llama.cpp

[0.1.81]

  • Update llama.cpp

[0.1.80]

  • Update llama.cpp

[0.1.79]

  • GGUF Support (breaking change requiring new model format)

[0.1.78]

  • Grammar-based sampling via LlamaGrammar, which can be passed to completions (see the sketch after this list)
  • Make n_gpu_layers == -1 offload all layers
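
A minimal sketch of both items, using a tiny GBNF grammar (the model path is a placeholder):

```python
from llama_cpp import Llama, LlamaGrammar

# A tiny GBNF grammar that only accepts the literal words "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

# n_gpu_layers=-1 now offloads every layer to the GPU.
llm = Llama(model_path="./models/model.gguf", n_gpu_layers=-1)  # placeholder path

out = llm.create_completion("Is water wet? Answer: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```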

[0.1.77]

  • (llama.cpp) Update llama.cpp to add support for LLaMa 2 70B
  • (server) Add temporary n_gqa and rms_norm_eps parameters required for LLaMa 2 70B

[0.1.76]

  • (llama.cpp) Update llama.cpp to add support for LLaMa 2 70B

[0.1.75]

  • Update llama.cpp

[0.1.74]

  • (server) OpenAI-style error responses

[0.1.73]

  • (server) Add rope parameters to server settings

[0.1.72]

  • (llama.cpp) Update llama.cpp; adds custom_rope for extended context lengths

[0.1.71]

  • (llama.cpp) Update llama.cpp
  • (server) Fix several pydantic v2 migration bugs

[0.1.70]

  • (Llama.create_completion) Revert change so that max_tokens is not truncated to context_size in create_completion
  • (server) Fix settings field names that changed in the pydantic v2 migration

[0.1.69]

  • (server) Streaming requests can now be interrupted before completion when a concurrent request is made; this behavior is controlled with the interrupt_requests setting (see the sketch after this list).
  • (server) Moved to fastapi v0.100.0 and pydantic v2
  • (docker) Added a new "simple" image that builds llama.cpp from source when started.
  • (server) Performance improvements by avoiding unnecessary memory allocations during sampling
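
A minimal sketch of starting the server with this setting disabled, assuming the Settings and create_app layout of the server module at this time (the model path is a placeholder):

```python
import uvicorn
from llama_cpp.server.app import Settings, create_app

# Settings fields mirror the server's CLI options; interrupt_requests
# is the new toggle named above. The model path is a placeholder.
settings = Settings(model="./models/model.gguf", interrupt_requests=False)
app = create_app(settings=settings)

uvicorn.run(app, host="localhost", port=8000)
```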

[0.1.68]

  • (llama.cpp) Update llama.cpp

[0.1.67]

  • Fix performance bug in Llama model by pre-allocating memory for tokens and logits.
  • Fix bug in Llama model where the model was not freed after use.

[0.1.66]

  • (llama.cpp) New model API
  • Fix performance issue during eval caused by a looped np.concatenate call
  • Fix state pickling issue when saving cache to disk

[0.1.65]

  • (llama.cpp) Fix struct misalignment bug

[0.1.64]

  • (llama.cpp) Update llama.cpp
  • Fix docs for seed. Set -1 for random.

[0.1.63]

  • (llama.cpp) Add full GPU utilisation in CUDA
  • (llama.cpp) Add get_vocab
  • (llama.cpp) Add low_vram parameter
  • (server) Add logit_bias parameter (see the sketch below)
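
A minimal sketch of passing logit_bias to a locally running server over the OpenAI-compatible HTTP API (the token id is a placeholder):

```python
import requests

# logit_bias maps token ids to additive biases in [-100, 100], following
# the OpenAI convention; 1234 is a placeholder token id.
resp = requests.post(
    "http://localhost:8000/v1/completions",
    json={
        "prompt": "The capital of France is",
        "max_tokens": 8,
        "logit_bias": {"1234": -100},
    },
)
print(resp.json()["choices"][0]["text"])
```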

[0.1.62]

  • Metal support working
  • Cache re-enabled

[0.1.61]

  • Fix broken pip installation

[0.1.60]

NOTE: This release was deleted due to a bug with the packaging system that caused pip installations to fail.

  • Truncate max_tokens in create_completion so the requested token count doesn't exceed the context size.
  • Temporarily disable cache for completion requests

[0.1.59]

  • (llama.cpp) k-quants support
  • (server) mirostat sampling parameters to server
  • Support both .so and .dylib for libllama on MacOS

[0.1.58]

  • (llama.cpp) Metal support on Apple Silicon

[0.1.57]

  • (llama.cpp) OpenLlama 3B support

[0.1.56]

  • (misc) Added first version of the changelog
  • (server) Use async routes
  • (python-api) Use numpy for internal buffers to reduce memory usage and improve performance.
  • (python-api) Fix performance bug in the stop sequence check that slowed down streaming.