# Changelog
All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## [Unreleased]
- Added: k-quants support
## [v0.1.58]
- Added: Metal (Apple Silicon) support
## [v0.1.57]
- Added: OpenLlama 3B support
## [v0.1.56]
### Added
- Added first version of the changelog
- Server: Use async routes
- Use numpy for internal buffers to reduce memory usage and improve performance
### Fixed
- Performance bug in the stop-sequence check that slowed down streaming