
# Changelog

All notable changes to this project will be documented in this file.

The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/), and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).

## [Unreleased]

### Added

- (build-system) Migrate from scikit-build to scikit-build-core

### Fixed

- Truncate max_tokens in create_completion so the requested number of tokens does not exceed the context size (see the sketch below).
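
A minimal sketch of the clamping idea; the names (`n_prompt_tokens`, `n_ctx`) are illustrative assumptions, not the actual implementation:

```python
def clamp_max_tokens(max_tokens: int, n_prompt_tokens: int, n_ctx: int) -> int:
    """Clamp max_tokens so prompt plus completion fit in the context window."""
    available = n_ctx - n_prompt_tokens
    return max(0, min(max_tokens, available))

clamp_max_tokens(256, n_prompt_tokens=4000, n_ctx=4096)  # -> 96
```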

## [v0.1.59]

### Added

- (llama.cpp) k-quants support
- (server) Add mirostat sampling parameters to the server (see the sketch below)
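
For illustration, a request against a locally running server; the host/port and the exact field spellings (`mirostat_mode`, `mirostat_tau`, `mirostat_eta`) are assumptions based on llama.cpp's sampler options:

```python
import json
import urllib.request

payload = {
    "prompt": "Q: Name the planets in the solar system. A:",
    "max_tokens": 64,
    "mirostat_mode": 2,   # 0 = off, 1 = Mirostat, 2 = Mirostat 2.0
    "mirostat_tau": 5.0,  # target entropy
    "mirostat_eta": 0.1,  # learning rate
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",  # assumed host and port
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["text"])
```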

### Fixed

- Support both .so and .dylib suffixes for libllama on macOS (see the sketch below)
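
A sketch of the kind of suffix fallback this implies; the function name and lookup order are illustrative, not the library's actual loader:

```python
import ctypes
import pathlib
import sys

def load_libllama(lib_dir: pathlib.Path) -> ctypes.CDLL:
    """Try each shared-library suffix that can occur on the current platform."""
    if sys.platform == "darwin":
        suffixes = [".so", ".dylib"]  # both occur on macOS depending on the build
    elif sys.platform == "win32":
        suffixes = [".dll"]
    else:
        suffixes = [".so"]
    for suffix in suffixes:
        path = lib_dir / f"libllama{suffix}"
        if path.exists():
            return ctypes.CDLL(str(path))
    raise FileNotFoundError(f"libllama not found in {lib_dir} (tried {suffixes})")
```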

## [v0.1.58]

### Added

- (llama.cpp) Metal support for Apple Silicon

## [v0.1.57]

### Added

- (llama.cpp) OpenLLaMA 3B support

## [v0.1.56]

### Added

- (misc) Added the first version of the changelog
- (server) Use async routes
- (python-api) Use numpy for internal buffers to reduce memory usage and improve performance (see the sketch below).
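
A sketch of the idea: one contiguous numpy allocation up front instead of per-token Python list growth. The class, attribute names, and shapes are assumptions, not the library's internals:

```python
import numpy as np

class LogitsBuffer:
    """Preallocated buffer instead of growing Python lists token by token."""

    def __init__(self, n_ctx: int, n_vocab: int):
        # One contiguous allocation for the whole context window.
        self.scores = np.zeros((n_ctx, n_vocab), dtype=np.single)
        self.n_tokens = 0

    def append(self, logits: np.ndarray) -> None:
        # Writing into a preallocated row avoids per-token list/copy overhead.
        self.scores[self.n_tokens, :] = logits
        self.n_tokens += 1
```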

### Fixed

- (python-api) Performance bug in the stop-sequence check that slowed down streaming (see the sketch below).
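
A sketch of the fix's general shape: scan only the tail of the output that the newest chunk could have affected, rather than rescanning the whole string on every token. The names and exact logic are illustrative, not the actual patch:

```python
def find_stop(text: str, new_chunk: str, stop: list[str]) -> int:
    """Return the index of the earliest stop sequence in `text`, or -1.

    `text` is the full output so far, with `new_chunk` already appended.
    A new match can only involve characters touched by `new_chunk`, so
    scanning the tail is enough; rescanning all of `text` every token is
    what made streaming slow.
    """
    if not stop:
        return -1
    longest = max(len(s) for s in stop)
    start = max(0, len(text) - len(new_chunk) - longest + 1)
    hits = [start + i for i in (text[start:].find(s) for s in stop) if i != -1]
    return min(hits) if hits else -1
```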