Commit graph

1377 commits

Author SHA1 Message Date
Andrei Betlen 3babe3512c Fix mirostat sampling 2024-01-19 08:31:59 -05:00
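A rough illustration of the code path this fix touches: mirostat sampling is driven by the `mirostat_mode`, `mirostat_tau`, and `mirostat_eta` completion parameters. A minimal usage sketch (the model path is a placeholder; the values shown are the library's documented defaults):

```python
# Hedged sketch: enable mirostat v2 sampling via the completion API.
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf")  # placeholder path
out = llm.create_completion(
    "Q: Name the planets in the solar system. A:",
    max_tokens=64,
    mirostat_mode=2,   # 0 = disabled, 1 = mirostat v1, 2 = mirostat v2
    mirostat_tau=5.0,  # target entropy
    mirostat_eta=0.1,  # learning rate
)
print(out["choices"][0]["text"])
```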
Andrei Betlen 141293a75b Fix python3.8 support 2024-01-19 08:17:49 -05:00
Andrei Betlen 656f3d8968 Bump version 2024-01-18 21:30:36 -05:00
Andrei Betlen 03ed547bfd Remove templates doc 2024-01-18 21:23:26 -05:00
Andrei Betlen 3ca86ab390 Update llama.cpp 2024-01-18 21:22:45 -05:00
Andrei Betlen be23404ed4 Cleanup pyproject 2024-01-18 21:22:19 -05:00
Andrei Betlen 89cce50f8c Update llama.cpp 2024-01-18 21:21:49 -05:00
Andrei Betlen b8fc1c7d83 feat: Add ability to load chat format from huggingface autotokenizer or tokenizer_config.json files. 2024-01-18 21:21:37 -05:00
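A sketch of how the feature above might be used, assuming the helper lives in `llama_cpp.llama_chat_format` under the name shown; treat the helper name and file paths as illustrative:

```python
# Hedged sketch: build a chat handler from a local tokenizer_config.json.
import json

from llama_cpp import Llama
from llama_cpp.llama_chat_format import (
    hf_tokenizer_config_to_chat_completion_handler,  # assumed helper name
)

with open("tokenizer_config.json") as f:  # placeholder path
    tokenizer_config = json.load(f)

chat_handler = hf_tokenizer_config_to_chat_completion_handler(tokenizer_config)
llm = Llama(model_path="./model.gguf", chat_handler=chat_handler)
```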
Andrei Betlen 48c3b77e6f Offload KQV by default 2024-01-18 11:08:57 -05:00
Austin 6bfe98bd80
Integration of Jinja2 Templating (#875)
* feat: Add support for jinja templating

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>

* fix: Refactor chat formatter and update interface for jinja templates

- Simplify the `llama2_template` in `llama_jinja_format.py` by removing unnecessary line breaks, improving readability without affecting functionality.
- Update `ChatFormatterInterface` constructor to accept a more generic `Optional[object]` type for the template parameter, enhancing flexibility.
- Introduce a `template` property to `ChatFormatterInterface` for standardized access to the template string.
- Replace `MetaSingleton` metaclass with `Singleton` for the `ChatFormatterFactory` to streamline the singleton implementation.

These changes enhance code readability, maintain usability, and ensure consistency in the chat formatter's design pattern usage.

* Add outline for Jinja2 templating integration documentation

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>

* Add jinja2 as a dependency with a version range for Hugging Face transformers compatibility

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>

* Update jinja2 version constraint for mkdocs-material compatibility

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>

* Fix attribute name in AutoChatFormatter

- Changed attribute name from `self._renderer` to `self._environment`

---------

Signed-off-by: teleprint-me <77757836+teleprint-me@users.noreply.github.com>
2024-01-17 09:47:52 -05:00
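A minimal sketch of the pattern the PR above describes: a formatter that compiles a jinja2 template once and renders a message list through it. The class and attribute names follow the commit notes (`AutoChatFormatter`, `self._environment`), but the body is illustrative, not the PR's exact code:

```python
from typing import Dict, List

import jinja2

class AutoChatFormatter:
    """Render chat messages through a compiled jinja2 template."""

    def __init__(self, template: str):
        self._template = template
        # A single Environment, per the attribute rename noted above.
        self._environment = jinja2.Environment(
            trim_blocks=True, lstrip_blocks=True
        )
        self._compiled = self._environment.from_string(template)

    @property
    def template(self) -> str:
        return self._template

    def __call__(self, messages: List[Dict[str, str]], **kwargs) -> str:
        return self._compiled.render(messages=messages, **kwargs)
```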
Andrei Betlen 52adc23115 Update llama.cpp 2024-01-17 09:27:40 -05:00
Andrei Betlen 7b46bb5a78 Re-order classes in llama.py 2024-01-17 09:16:13 -05:00
Andrei Betlen cc4630e66f Move helper classes to _internals submodule 2024-01-17 09:14:00 -05:00
Andrei Betlen 3b92419132 Move cache classes to llama_cache submodule. 2024-01-17 09:09:12 -05:00
Andrei Betlen 6981597835 Merge branch 'main' of https://github.com/abetlen/llama-cpp-python into main 2024-01-16 19:35:59 -05:00
Andrei Betlen d5dbb3f8de Update llama.cpp 2024-01-16 19:35:57 -05:00
Jerry Liu 84380fe9a6
Add llamaindex integration to readme (#1092) 2024-01-16 19:10:50 -05:00
Kyle Mistele 9c36688b33
fix(cli): allow passing n_ctx=0 to the OpenAI API server args to use the model's n_ctx_train field, per #1015 (#1093) 2024-01-16 18:54:06 -05:00
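The same zero-means-model-default convention is visible from the Python API; a small sketch (the server-flag spelling `--n_ctx 0` is inferred from the commit message, and the model path is a placeholder):

```python
# Hedged sketch: n_ctx=0 falls back to the model's n_ctx_train value.
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf", n_ctx=0)
print(llm.n_ctx())  # reports the context size taken from the model file
```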
anil cfb7da98ed
Support `Accept: text/event-stream` in chat and completion endpoints; resolves #1083 (#1088)
Co-authored-by: Anil Pathak <anil@heyday.com>
Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2024-01-16 12:52:52 -05:00
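An illustrative client for the event-stream support above, written with the `requests` library against the OpenAI-compatible server (host, port, and endpoint path are assumptions):

```python
import requests

resp = requests.post(
    "http://localhost:8000/v1/completions",  # assumed server address
    headers={"Accept": "text/event-stream"},
    json={"prompt": "Hello", "max_tokens": 32, "stream": True},
    stream=True,
)
for line in resp.iter_lines():
    if line:
        print(line.decode("utf-8"))  # each event arrives as a "data: {...}" line
```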
Andrei Betlen e39778f8eb Update llama.cpp 2024-01-16 11:56:44 -05:00
Andrei Betlen 4b11fa83c0 Bump version 2024-01-15 12:54:51 -05:00
Andrei Betlen 84615adbc6 Add split_mode option. Closes #1085 2024-01-15 12:49:20 -05:00
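A sketch of the new option; the constant name `LLAMA_SPLIT_LAYER` is an assumption (spellings have varied across releases), so check the `llama_cpp` module for the exact identifier:

```python
import llama_cpp
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",               # placeholder path
    n_gpu_layers=-1,                         # offload all layers
    split_mode=llama_cpp.LLAMA_SPLIT_LAYER,  # split layers across GPUs
)
```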
Phil H 76aafa6149
Implement GGUF metadata KV overrides (#1011)
* Implement GGUF metadata overrides

* whitespace fix

* Fix kv overrides.

* Fix pointer and pickle

* Match llama.cpp kv_overrides cli argument

---------

Co-authored-by: Andrei <abetlen@gmail.com>
2024-01-15 12:29:29 -05:00
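A sketch of the `kv_overrides` parameter this PR adds, which maps GGUF metadata keys to boolean, integer, or float replacements; the specific keys below are illustrative:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./model.gguf",  # placeholder path
    kv_overrides={
        "tokenizer.ggml.add_bos_token": False,  # bool override
        "llama.context_length": 8192,           # int override
    },
)
```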
yieldthought 7eff42c239
Avoid "LookupError: unknown encoding: ascii" when open() called in a destructor (#1012)
The existing code often causes "LookupError: unknown encoding: ascii" when open() called in a destructor. Saving open in self.open is not enough to avoid this. Instead, we can avoid reopening /dev/null every time by doing it once when the module is loaded.
2024-01-15 10:52:10 -05:00
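The pattern the commit describes, sketched: open os.devnull once at import time so no destructor ever calls open() during interpreter shutdown, when the codecs machinery may already be torn down. The class below is illustrative; the real suppressor also redirects C-level file descriptors:

```python
import os
import sys

# Opened once when the module is loaded, not on every suppression.
outnull_file = open(os.devnull, "w")
errnull_file = open(os.devnull, "w")

class suppress_stdout_stderr:
    def __enter__(self):
        self.old_stdout, self.old_stderr = sys.stdout, sys.stderr
        sys.stdout, sys.stderr = outnull_file, errnull_file
        return self

    def __exit__(self, *exc):
        sys.stdout, sys.stderr = self.old_stdout, self.old_stderr
```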
anil 1eaace8ea3
Fix low_level_api_chat_cpp example to match current API (#1086)
* Fix low_level_api_chat_cpp to match current API

* Fix low_level_api_chat_cpp to match current API

* Use None instead of an empty string so that the default prompt template can be used if no prompt is provided

---------

Co-authored-by: Anil Pathak <anil@heyday.com>
2024-01-15 10:46:35 -05:00
Mark Neumann c689ccc728
Fix Pydantic model parsing (#1087) 2024-01-15 10:45:57 -05:00
Andrei Betlen 5502ac8876 Update llama.cpp 2024-01-15 10:12:10 -05:00
Andrei Betlen 359ae73643 Update llama.cpp 2024-01-14 08:17:22 -05:00
Andrei Betlen 7c898d5684 Update llama.cpp 2024-01-13 22:37:49 -05:00
Andrei Betlen bb610b9428 Update llama.cpp 2024-01-11 22:51:12 -05:00
Andrei Betlen f0159663d9 Bump version 2024-01-10 02:51:17 -05:00
Stephen Hankinson df3be58d6c
Add ability to pass in penalize_nl param (#1068) 2024-01-10 02:46:27 -05:00
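A sketch of the option; the assumption here is that `penalize_nl` landed on the token-level `generate()` sampler alongside the other penalty parameters:

```python
from llama_cpp import Llama

llm = Llama(model_path="./model.gguf")  # placeholder path
tokens = llm.tokenize(b"Write a haiku:")
for tok in llm.generate(tokens, penalize_nl=False):  # don't penalize newlines
    if tok == llm.token_eos():
        break
    print(llm.detokenize([tok]).decode("utf-8", errors="ignore"), end="")
```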
Joseph Turian 2ddce7294e
print_grammar to stderr (#1052) 2024-01-10 02:46:03 -05:00
Andrei Betlen 431cb3ec81 Update llama.cpp 2024-01-09 15:32:39 -05:00
Andrei Betlen 1ae05c102b Update llama.cpp 2024-01-08 14:51:29 -05:00
Andrei Betlen 142a9e1bc3 Update llama.cpp 2024-01-05 16:20:50 -05:00
Andrei Betlen 75d0527fd7 Bump version 2024-01-04 18:30:12 -05:00
Andrei Betlen fffcd0181c Update llama.cpp 2024-01-04 18:26:00 -05:00
Fedor Moiseev 907b9e9d42
Add Saiga chat format. (#1050) 2024-01-04 18:12:58 -05:00
Caleb Hoff f766b70c9a
Fix: Correct typo in README.md (#1058)
In Llama.create_chat_completion, the property is `tool_choice`, with no "s" on the end.
2024-01-04 18:12:32 -05:00
xaviviro cf743ec5d3
Added ChatGLM chat format (#1059)
Co-authored-by: Xavier Vinaixa Rosello <xaviviro@MacBook-Pro-de-Xavier.local>
2024-01-04 18:12:02 -05:00
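Both of the chat formats added above are selected by name when constructing the model; "saiga" and "chatglm3" are assumptions about the registered identifiers:

```python
from llama_cpp import Llama

llm = Llama(model_path="./saiga-model.gguf", chat_format="saiga")
reply = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}],
)
print(reply["choices"][0]["message"]["content"])
```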
Andrei Betlen eb9c7d4ed8 Update llama.cpp 2024-01-03 22:04:04 -05:00
Andrei Betlen 011c3630f5 Bump version 2023-12-27 17:35:02 -05:00
Andrei Betlen 969ea6a2c0 Update llama.cpp 2023-12-27 17:33:26 -05:00
Andrei Betlen f952d45c2c Update llama.cpp 2023-12-24 01:34:36 -05:00
Andrei Betlen f6f157c06d Update bug report instructions for new build process. 2023-12-22 15:35:51 -05:00
Andrei Betlen 92284f32cb Add HIP_PATH to dll search directories for windows users. 2023-12-22 15:29:56 -05:00
Andrei Betlen 2b0d3f36fa Set llama_max_devices using library function 2023-12-22 15:19:28 -05:00
Andrei Betlen d9a1d90fd7 Fix typo 2023-12-22 15:12:27 -05:00
Andrei Betlen 37556bf9c4 Bump version 2023-12-22 14:55:58 -05:00