Commit graph

451 commits

Author SHA1 Message Date
Andrei Betlen 011b95d7f3 Fix name 'open' is not defined exception. Closes #860 2023-11-02 15:30:55 -04:00
Andrei Betlen fa83cc5f9c Update llama.cpp
Fix build examples

Exclude examples directory

Revert cmake changes

Try actions/checkout@v4

Try to update submodules

Revert
2023-11-02 14:28:15 -04:00
Antoine Lizee 4d4e0f11e2 fix: tokenization of special characters (#850)
It should behave like llama.cpp, where most out-of-the-box usages treat special characters accordingly.
2023-11-02 14:28:14 -04:00
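For context on the tokenization fix above: special tokens such as `<|im_start|>` should now be encoded as single token ids instead of being split into plain-text pieces, matching llama.cpp. A minimal sketch of the behavior, assuming a local GGUF model and the `special` flag on `Llama.tokenize` (path and flag are illustrative for builds from around this point in the history):

```python
from llama_cpp import Llama

# Illustrative path; substitute any GGUF model. vocab_only skips loading weights.
llm = Llama(model_path="./models/model.gguf", vocab_only=True)

text = b"<|im_start|>user"
# special=True parses the marker as one special token id;
# special=False tokenizes it as ordinary text pieces.
special_ids = llm.tokenize(text, add_bos=False, special=True)
plain_ids = llm.tokenize(text, add_bos=False, special=False)
print(len(special_ids) < len(plain_ids))  # expected: True
```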
Andrei Betlen 6b3aa7fc8f Bump version 2023-11-01 19:25:03 -04:00
Sujeendran Menon 7b136bb5b1 Fix for shared library not found and compile issues in Windows (#848)
* fix windows library dll name issue

* Updated README.md Windows instructions

* Update llama_cpp.py to handle different windows dll file versions
2023-11-01 18:55:57 -04:00
cebtenzzre eefd76fe81 llama: fix exception in Llama.__del__ (#846) 2023-11-01 18:53:57 -04:00
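The `Llama.__del__` fix above addresses teardown when `__init__` fails part-way, leaving attributes unset. A hedged sketch of the defensive pattern (class and attribute names are illustrative, not the library's internals):

```python
class Model:
    def __init__(self, path: str):
        self.ctx = None              # assign before anything that can raise
        self.ctx = open(path, "rb")  # may raise; __del__ must still be safe

    def __del__(self):
        # getattr guards against __init__ having failed before the
        # attribute was ever created.
        ctx = getattr(self, "ctx", None)
        if ctx is not None:
            ctx.close()
```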
David Ponce 3fc9147218 Iterate over tokens that should be biased rather than the entire vocabulary. (#851) 2023-11-01 18:53:47 -04:00
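The logit-bias change above swaps a pass over the entire vocabulary for a loop over only the token ids the caller actually biased. A minimal sketch of the idea (`logits` and `logit_bias` are illustrative stand-ins):

```python
import numpy as np

vocab_size = 32_000
logits = np.zeros(vocab_size, dtype=np.float32)  # one score per vocab token
logit_bias = {123: 5.0, 456: -100.0}             # token id -> additive bias

# Before: O(vocab_size) work per sampling step, touching every token.
# After: O(len(logit_bias)) per step, touching only the biased tokens.
for token_id, bias in logit_bias.items():
    logits[token_id] += bias
```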
Marko Tasic 9c8f4dca5f Fixed Llama._create_completion suffix check; it can be either None or a str instance (#854) 2023-11-01 18:52:50 -04:00
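The suffix fix above relaxes the type check so `suffix` may be either `None` or a `str`. Sketched as a standalone guard (a hypothetical helper, not the library's code):

```python
def check_suffix(suffix):
    # Accept None (no suffix) or a plain string; reject anything else.
    if suffix is not None and not isinstance(suffix, str):
        raise TypeError(f"suffix must be None or str, got {type(suffix).__name__}")
    return suffix
```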
Daniel Thuerck 5f8f369d1b Pass-Through grammar parameter in web server. (#855) Closes #778 2023-11-01 18:51:12 -04:00
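With the grammar pass-through above, a GBNF grammar supplied in a completion request reaches the model instead of being dropped. A hedged example request (server URL and exact field name assumed from the PR title; the grammar is a trivial GBNF rule):

```python
import requests

payload = {
    "prompt": "Answer yes or no: is water wet?",
    "max_tokens": 8,
    # Trivial GBNF grammar constraining output to "yes" or "no".
    "grammar": 'root ::= "yes" | "no"',
}
r = requests.post("http://localhost:8000/v1/completions", json=payload)
print(r.json()["choices"][0]["text"])
```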
Adam Katora 25cb710281 Update llama_types.py (#849)
Minor typo fix, funcion -> function
2023-11-01 18:50:11 -04:00
Andrei Betlen d808fd436c Update llama.cpp 2023-10-31 21:29:35 -04:00
Andrei Betlen 53861c9e53 Update llama.cpp 2023-10-24 03:13:32 -04:00
gmcgoldr 09a8406c83 Fix streaming not returning finish reason (#798)
When streaming, the yield that contains the finish reason can be skipped. This change ensures that yield isn't skipped.
2023-10-19 02:55:56 -04:00
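For the streaming fix above: the chunk carrying `finish_reason` could previously be dropped, so clients never learned why generation stopped. A simplified sketch of the invariant the fix enforces (not the server's actual loop):

```python
def stream_completion(tokens):
    # Simplified: assumes at least one token was generated.
    for tok in tokens[:-1]:
        yield {"text": tok, "finish_reason": None}
    # The final chunk must always be yielded so the client sees why
    # generation stopped; the bug was that it could be skipped.
    yield {"text": tokens[-1], "finish_reason": "stop"}

for chunk in stream_completion(["Hello", ",", " world"]):
    print(chunk)
```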
Andrei Betlen 28c2b884e2 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-10-19 02:55:31 -04:00
Andrei Betlen ff580031d2 Update llama.cpp 2023-10-19 02:55:08 -04:00
Xiaoyu Kevin Hu a315128d66 update value check for n_gpu_layers field (#826) 2023-10-18 18:25:25 -04:00
Pierre Alexandre SCHEMBRI 10304d75fc Make use of suppress_stdout_stderr when freeing model (#803) 2023-10-15 13:52:43 -04:00
Ma, Guokai a1ac199980 Fix repeat greeting (#808)
* fix repeated greeting

* remove separator between role and message
2023-10-15 13:52:21 -04:00
Eric Liu b50166500e Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES (#820)
* Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES

* reword
2023-10-15 13:51:51 -04:00
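The validation above rejects a `tensor_split` list longer than the device limit the binary was compiled with. A minimal sketch of the check (the placeholder constant stands in for the real LLAMA_MAX_DEVICES exported by the llama.cpp build):

```python
LLAMA_MAX_DEVICES = 1  # placeholder; the real value comes from the llama.cpp build

def validate_tensor_split(tensor_split):
    if tensor_split is not None and len(tensor_split) > LLAMA_MAX_DEVICES:
        raise ValueError(
            f"tensor_split has {len(tensor_split)} entries but this build "
            f"supports at most {LLAMA_MAX_DEVICES} devices"
        )

validate_tensor_split([0.5, 0.5])  # raises on a single-device build
```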
Andrei Betlen d6a130a052 Print traceback on server error 2023-10-10 15:56:04 -04:00
Andrei Betlen 43dfe1e2ab Update llama.cpp 2023-10-05 16:07:49 -04:00
Andrei Betlen a7d17b8ac9 Update llama.cpp 2023-10-03 15:23:35 -04:00
Andrei Betlen 305482bd41 Add chatml chat format 2023-09-30 21:01:34 -04:00
Andrei Betlen 5ef5280ef9 Log server exceptions to stdout 2023-09-30 19:13:36 -04:00
Andrei Betlen fab4bccc35 Bump version 2023-09-30 16:04:46 -04:00
Andrei Betlen d696251fbe Fix logits_all bug 2023-09-30 16:02:35 -04:00
Andrei Betlen 6ee413d79e Bump version 2023-09-30 13:23:09 -04:00
Andrei Betlen 42bb721d64 Fix bug in embedding 2023-09-30 13:20:22 -04:00
Andrei Betlen 5d62d55a82 Bump version 2023-09-30 00:07:06 -04:00
Andrei Betlen 386c88b68e Bump version 2023-09-29 20:07:31 -04:00
Andrei Betlen d9bce17794 Update server params 2023-09-29 19:59:12 -04:00
Andrei Betlen 3720c739d4 Update llama.cpp 2023-09-29 19:58:21 -04:00
Andrei 3bca7708fb Configurable Chat Formats (#711)
* Add configurable default chat completion format.

* Remove chat_template file to avoid circular import

* Update llama_types

* Add chat format
2023-09-29 19:52:04 -04:00
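Configurable chat formats (#711) let callers choose how chat messages are rendered into a prompt, including the chatml format added a few commits above. A hedged usage sketch (model path illustrative):

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf", chat_format="chatml")
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
)
print(response["choices"][0]["message"]["content"])
```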
Josh XT a945404b4a Fix rope scaling defaults (#767)
* Fix rope scale with backwards compatibility

* Fix defaults

* Fix op

* Remove backwards compatibility

* Check single val
2023-09-29 16:03:57 -04:00
Andrei Betlen 1a1c3dc418 Update llama.cpp 2023-09-28 22:42:03 -04:00
Andrei Betlen 4177ae6d34 Bump version 2023-09-25 14:38:38 -04:00
Viacheslav/Slava Tradunsky 3d5e5b1c04 Adds openai-processing-ms response header (#748) 2023-09-25 13:55:58 -04:00
Andrei Betlen dbca136fea Update llama_types and names to match openai api 2023-09-20 15:38:26 -04:00
Andrei Betlen 38e34c97f0 Update llama.cpp 2023-09-18 16:11:27 -04:00
Andrei Betlen 8d75016549 Install required runtime dlls to package directory on windows 2023-09-16 14:57:49 -04:00
Andrei Betlen acf18fcdf0 Bump version 2023-09-15 14:22:21 -04:00
Andrei Betlen b047b3034e Remove confusing helpstring from server cli args. Closes #719 2023-09-15 14:09:43 -04:00
Andrei Betlen 24fec0b242 Bump version 2023-09-14 18:33:08 -04:00
Andrei Betlen 8474665625 Update base_path to fix issue resolving dll in windows isolation container. 2023-09-14 14:51:43 -04:00
Andrei Betlen 507bcc7171 Bump version 2023-09-13 23:15:23 -04:00
Andrei Betlen 0449d29b9f Fix boolean env vars and cli arguments 2023-09-13 23:09:57 -04:00
earonesty 58a6e42cc0 Update app.py (#705) 2023-09-13 23:01:34 -04:00
Andrei Betlen f4090a0bb2 Add numa support; low-level API users must now explicitly call llama_backend_init at the start of their programs. 2023-09-13 23:00:43 -04:00
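Per the commit above, low-level API users must now initialize the backend themselves before any other call. A hedged sketch of the required setup (signatures as of this point in the bindings' history; later versions changed them):

```python
import llama_cpp

# Must run once, before any other low-level llama_cpp call.
# numa=True enables the NUMA optimizations introduced in this commit.
llama_cpp.llama_backend_init(numa=False)

# ... low-level calls (model loading, evaluation, sampling) go here ...

# Optional teardown once the process is done with the backend.
llama_cpp.llama_backend_free()
```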
Andrei Betlen c999325e8e Fix boolean cli flags 2023-09-13 22:56:10 -04:00
Andrei Betlen 4daf77e546 Format 2023-09-13 21:23:23 -04:00