Commit graph

518 commits

Author SHA1 Message Date
Josh XT a945404b4a
Fix rope scaling defaults (#767)
* Fix rope scale with backwards compatibility

* Fix defaults

* Fix op

* Remove backwards compatibility

* Check single val
2023-09-29 16:03:57 -04:00
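For reference, a minimal sketch of how the RoPE scaling parameters touched by the rope commits above are passed to the high-level Llama constructor; the model path and values below are placeholders.

    from llama_cpp import Llama

    # Placeholder model path; rope_freq_base / rope_freq_scale are the
    # RoPE arguments whose defaults the commit above adjusts.
    llm = Llama(
        model_path="./models/model.gguf",
        n_ctx=4096,
        rope_freq_base=10000.0,
        rope_freq_scale=0.5,  # scale < 1 stretches the usable context
    )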
Andrei Betlen 1a1c3dc418 Update llama.cpp 2023-09-28 22:42:03 -04:00
Andrei Betlen 4177ae6d34 Bump version 2023-09-25 14:38:38 -04:00
Viacheslav/Slava Tradunsky 3d5e5b1c04
Adds openai-processing-ms response header (#748) 2023-09-25 13:55:58 -04:00
Andrei Betlen dbca136fea Update llama_types and names to match openai api 2023-09-20 15:38:26 -04:00
Andrei Betlen 38e34c97f0 Update llama.cpp 2023-09-18 16:11:27 -04:00
Andrei Betlen 8d75016549 Install required runtime dlls to package directory on windows 2023-09-16 14:57:49 -04:00
Andrei Betlen acf18fcdf0 Bump version 2023-09-15 14:22:21 -04:00
Andrei Betlen b047b3034e Remove confusing helpstring from server cli args. Closes #719 2023-09-15 14:09:43 -04:00
Andrei Betlen 24fec0b242 Bump version 2023-09-14 18:33:08 -04:00
Andrei Betlen 8474665625 Update base_path to fix issue resolving dll in windows isolation container. 2023-09-14 14:51:43 -04:00
Andrei Betlen 507bcc7171 Bump version 2023-09-13 23:15:23 -04:00
Andrei Betlen 0449d29b9f Fix boolean env vars and cli arguments 2023-09-13 23:09:57 -04:00
earonesty 58a6e42cc0
Update app.py (#705) 2023-09-13 23:01:34 -04:00
Andrei Betlen f4090a0bb2 Add numa support; low-level API users must now explicitly call llama_backend_init at the start of their programs. 2023-09-13 23:00:43 -04:00
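A rough sketch of what this means for low-level API users, assuming the binding still takes a single NUMA flag as llama.cpp did at the time:

    import llama_cpp

    # Must now be called explicitly before any other low-level llama.cpp calls.
    llama_cpp.llama_backend_init(False)  # pass True to enable NUMA optimizations

    # ... use the low-level API (llama_load_model_from_file, etc.) here ...

    llama_cpp.llama_backend_free()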
Andrei Betlen c999325e8e Fix boolean cli flags 2023-09-13 22:56:10 -04:00
Andrei Betlen 4daf77e546 Format 2023-09-13 21:23:23 -04:00
Andrei Betlen 2920c4bf7e Update server params. Added lora_base, lora_path, low_vram, and main_gpu. Removed rms_norm_eps and n_gqa (deprecated in llama.cpp) 2023-09-13 21:23:13 -04:00
Andrei Betlen 6a20293fc2 Reorder init params to match llama.cpp order 2023-09-13 21:20:26 -04:00
Andrei Betlen c8f9b8a734 Explicitly make all init params other than model_path into keyword only params 2023-09-13 21:19:47 -04:00
Andrei Betlen a68f9e2791 Add kwargs to init to catch extra params 2023-09-13 21:19:02 -04:00
Andrei Betlen 9e345a47a2 remove print 2023-09-13 21:12:27 -04:00
Andrei Betlen 517f9ed80b Convert missed llama.cpp constants into standard python types 2023-09-13 21:11:52 -04:00
Andrei Betlen c4c440ba2d Fix tensor_split cli option 2023-09-13 20:00:42 -04:00
Andrei Betlen 203ede4ba2 Bump version 2023-09-13 18:07:08 -04:00
Andrei Betlen 759405c84b Fix issue with Literal and Optional cli arguments not working. Closes #702 2023-09-13 18:06:12 -04:00
Devrim da9df78db0
Add X-Request-ID request header for mirroring custom IDs. (#703) 2023-09-13 16:18:31 -04:00
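A hedged client-side sketch of using this header against a locally running server (assuming the default host/port and that the ID is echoed back unchanged):

    import requests

    resp = requests.post(
        "http://localhost:8000/v1/completions",
        json={"prompt": "Hello", "max_tokens": 8},
        headers={"X-Request-ID": "my-custom-id-123"},
    )
    # Per the commit above, the custom ID should be mirrored on the response.
    print(resp.headers.get("X-Request-ID"))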
Andrei Betlen 8e13520796 Bump version 2023-09-13 01:47:58 -04:00
Andrei Betlen 2787663a25 Bump version 2023-09-12 21:00:01 -04:00
Andrei Betlen 6e89775759 Bump version 2023-09-12 18:57:01 -04:00
Andrei Betlen bb4e67e7aa Using dynamic version 2023-09-12 18:56:36 -04:00
Andrei Betlen 1910793f56 Merge branch 'main' into v0.2-wip 2023-09-12 16:43:32 -04:00
Andrei Betlen c7901f1141 Bump version 2023-09-12 16:16:40 -04:00
janvdp 33ce931cce merge upstream 2023-09-09 21:21:04 +02:00
Andrei Betlen d3f63211ef Update llama.cpp 2023-09-09 12:12:32 -04:00
janvdp da0fdafc32 import version in __init__.py 2023-09-05 21:09:28 +02:00
janvdp 6e8e64d09a add version file 2023-09-05 21:09:08 +02:00
Andrei Betlen 186626d58e Update llama.cpp 2023-09-01 14:26:13 -04:00
Andrei Betlen 47de3ab104 Update llama.cpp 2023-08-29 07:36:20 -04:00
Andrei Betlen 3f76e1de52 cjk pr minor cleanup 2023-08-29 07:21:59 -04:00
Andrei bae44ec8bf
Merge pull request #309 from MeouSker77/fix-CJK
Fix CJK and emoji stream output
2023-08-29 06:58:10 -04:00
Andrei Betlen e0dcbc28a1 Update llama.cpp 2023-08-28 10:33:45 -04:00
Andrei Betlen 4887973c22 Update llama.cpp 2023-08-27 12:59:20 -04:00
Andrei Betlen 3a29d65f45 Update llama.cpp 2023-08-26 23:36:24 -04:00
Andrei Betlen 5de8009706 Add copilot-codex completions endpoint for drop-in copilot usage 2023-08-25 17:49:14 -04:00
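A sketch of calling the new endpoint; the exact route below is an assumption based on the commit message and the OpenAI engines-style URL that Copilot clients expect:

    import requests

    resp = requests.post(
        "http://localhost:8000/v1/engines/copilot-codex/completions",  # assumed path
        json={"prompt": "def fibonacci(n):", "max_tokens": 32},
    )
    print(resp.json()["choices"][0]["text"])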
Andrei Betlen ac47d55577 Merge branch 'main' into v0.2-wip 2023-08-25 15:45:22 -04:00
Andrei Betlen ef23d1e545 Update llama.cpp 2023-08-25 14:35:53 -04:00
Andrei Betlen 48cf43b427 Use _with_model variants for tokenization 2023-08-25 13:43:16 -04:00
Andrei Betlen 8ac59465b9 Strip leading space when de-tokenizing. 2023-08-25 04:56:48 -04:00
Andrei Betlen c2d1deaa8a Update llama.cpp 2023-08-24 18:01:42 -04:00
Andrei Betlen db982a861f Fix 2023-08-24 01:01:12 -04:00
Andrei Betlen 4ed632c4b3 Remove deprecated params 2023-08-24 01:01:05 -04:00
Andrei Betlen cf405f6764 Merge branch 'main' into v0.2-wip 2023-08-24 00:30:51 -04:00
Andrei Betlen bbbf0f4fc4 Update llama.cpp 2023-08-24 00:17:00 -04:00
Andrei Betlen e632c59fa0 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-17 20:53:04 -04:00
c0sogi a240aa6b25 Fix typos in llama_grammar 2023-08-17 21:00:44 +09:00
Andrei Betlen 620cd2fd69 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-14 22:41:47 -04:00
Andrei Betlen 5788f1f2b2 Remove unused import 2023-08-14 22:41:37 -04:00
Andrei 6dfb98117e
Merge pull request #600 from Vuizur/main
Add py.typed to conform with PEP 561
2023-08-14 22:40:41 -04:00
Andrei b99e758045
Merge pull request #604 from aliencaocao/main-1
Add doc string for n_gpu_layers argument and make -1 offload all layers
2023-08-14 22:40:10 -04:00
Andrei Betlen b345d60987 Update llama.cpp 2023-08-14 22:33:30 -04:00
Billy Cao c471871d0b
make n_gpu_layers=-1 offload all layers 2023-08-13 11:21:28 +08:00
Billy Cao d018c7b01d
Add doc string for n_gpu_layers argument 2023-08-12 18:41:47 +08:00
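A minimal sketch of the behaviour these two commits describe (placeholder model path):

    from llama_cpp import Llama

    # n_gpu_layers=-1 now offloads every layer to the GPU; any non-negative
    # value offloads only that many layers.
    llm = Llama(model_path="./models/model.gguf", n_gpu_layers=-1)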
Hannes Krumbiegel 17dd7fa8e0 Add py.typed 2023-08-11 09:58:48 +02:00
MeouSker77 88184ed217 fix CJK output again 2023-08-09 22:04:35 +08:00
Andrei Betlen 66fb0345e8 Move grammar to function call argument 2023-08-08 15:08:54 -04:00
Andrei Betlen 1e844d3238 fix 2023-08-08 15:07:28 -04:00
Andrei Betlen 843b7ccd90 Merge branch 'main' into c0sogi/main 2023-08-08 14:43:02 -04:00
Andrei Betlen d015bdb4f8 Add mul_mat_q option 2023-08-08 14:35:06 -04:00
Andrei Betlen f6a7850e1a Update llama.cpp 2023-08-08 14:30:58 -04:00
c0sogi 0d7d2031a9 prevent memory access error by llama_grammar_free 2023-08-07 17:02:33 +09:00
c0sogi b07713cb9f reset grammar for every generation 2023-08-07 15:16:25 +09:00
c0sogi 418aa83b01 Added grammar based sampling 2023-08-07 02:21:37 +09:00
c0sogi ac188a21f3 Added low level grammar API 2023-08-05 14:43:35 +09:00
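Taken together with the "Move grammar to function call argument" commit above, grammar-constrained sampling ends up looking roughly like this at the high level (the GBNF string and model path are illustrative):

    from llama_cpp import Llama, LlamaGrammar

    llm = Llama(model_path="./models/model.gguf")  # placeholder path

    # A tiny GBNF grammar that only allows the model to answer "yes" or "no".
    grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

    out = llm("Is the sky blue? Answer:", grammar=grammar, max_tokens=4)
    print(out["choices"][0]["text"])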
Andrei Betlen ce57920e60 Suppress llama.cpp output when loading model. 2023-07-28 14:45:18 -04:00
Andrei Betlen a9b9f0397c Format 2023-07-28 01:53:08 -04:00
Andrei Betlen abc538fcd5 fix: annoying bug where attribute exceptions were drowning out file not found exceptions 2023-07-28 01:43:00 -04:00
Shouyi Wang 426dbfe3f4 Change tensor_split from array to pointer 2023-07-25 18:29:59 +10:00
Andrei Betlen 078902a6fe Add llama_grammar_accept_token 2023-07-24 15:55:26 -04:00
Andrei Betlen bf901773b0 Add llama_sample_grammar 2023-07-24 15:42:31 -04:00
Andrei Betlen 1b6997d69f Convert constants to python types and allow python types in low-level api 2023-07-24 15:42:07 -04:00
Andrei Betlen 343480364f Merge branch 'main' into v0.2-wip 2023-07-24 15:26:08 -04:00
Andrei Betlen 11dd2bf382 Add temporary rms_norm_eps parameter 2023-07-24 14:09:24 -04:00
Andrei Betlen 8cd64d4ac3 Add rms_eps_norm 2023-07-24 13:52:12 -04:00
bretello 0f09f10e8c
add support for llama2 70b 2023-07-24 19:38:24 +02:00
Andrei Betlen 77c9f496b0 Merge branch 'main' into v0.2-wip 2023-07-24 13:19:54 -04:00
Andrei Betlen 401309d11c Revert "Merge pull request #521 from bretello/main"
This reverts commit 07f0f3a386, reversing
changes made to d8a3ddbb1c.
2023-07-24 13:11:10 -04:00
Andrei 07f0f3a386
Merge pull request #521 from bretello/main
raise exception when `llama_load_model_from_file` fails
2023-07-24 13:09:28 -04:00
Andrei Betlen d8a3ddbb1c Update llama.cpp 2023-07-24 13:08:06 -04:00
Andrei Betlen 985d559971 Update llama.cpp 2023-07-24 13:04:34 -04:00
bretello 8be7d67f7e
raise exception when llama_load_model_from_file fails 2023-07-24 14:42:37 +02:00
Andrei Betlen 436036aa67 Merge branch 'main' into v0.2-wip 2023-07-21 12:42:38 -04:00
Andrei Betlen b83728ad1e Update llama.cpp 2023-07-21 12:33:27 -04:00
Andrei Betlen 0538ba1dab Merge branch 'main' into v0.2-wip 2023-07-20 19:06:26 -04:00
Andrei Betlen 01435da740 Update llama.cpp 2023-07-20 18:54:25 -04:00
Andrei Betlen 28a111704b Fix compatibility with older python versions 2023-07-20 18:52:10 -04:00
Andrei Betlen d10ce62714 Revert ctypes argtype change 2023-07-20 18:51:53 -04:00
Andrei 365d9a4367
Merge pull request #481 from c0sogi/main
Added `RouteErrorHandler` for server
2023-07-20 17:41:42 -04:00
Vinicius a8551477f5
Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float] 2023-07-20 17:29:11 -03:00
Carlos Tejada 0756a2d3fb Now the last token is sent when stream=True 2023-07-19 22:47:14 -04:00
Andrei Betlen 0b121a7456 Format 2023-07-19 03:48:27 -04:00
Andrei Betlen b43917c144 Add functions parameters 2023-07-19 03:48:20 -04:00
Andrei Betlen 19ba9d3845 Use numpy arrays for logits_processors and stopping_criteria. Closes #491 2023-07-18 19:27:41 -04:00
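After this change, logits processors and stopping criteria receive numpy arrays rather than Python lists; a hedged sketch of a conforming logits processor:

    import numpy as np
    import numpy.typing as npt
    from llama_cpp import Llama, LogitsProcessorList

    llm = Llama(model_path="./models/model.gguf")  # placeholder path

    def no_early_eos(input_ids: npt.NDArray[np.intc],
                     scores: npt.NDArray[np.single]) -> npt.NDArray[np.single]:
        # Discourage the end-of-sequence token; input_ids holds the tokens so far.
        scores[llm.token_eos()] = -np.inf
        return scores

    out = llm("Once upon a time", max_tokens=16,
              logits_processor=LogitsProcessorList([no_early_eos]))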
shutup 5ed8bf132f expose RoPE param to server start 2023-07-18 16:34:36 +08:00
c0sogi 1551ba10bd Added RouteErrorHandler for server 2023-07-16 14:57:39 +09:00
Andrei Betlen 8ab098e49d Re-order Llama class params 2023-07-15 15:35:08 -04:00
Andrei Betlen e4f9db37db Fix context_params struct layout 2023-07-15 15:34:55 -04:00
Andrei Betlen f0797a6054 Merge branch main into custom_rope 2023-07-15 15:11:01 -04:00
randoentity 3f8f276f9f Add bindings for custom_rope 2023-07-10 17:37:46 +02:00
Andrei Betlen a86bfdf0a5 bugfix: truncate completion max_tokens to fit context length by default 2023-07-09 18:13:29 -04:00
Andrei Betlen 6f70cc4b7d bugfix: pydantic settings missing / changed fields 2023-07-09 18:03:31 -04:00
Andrei 5d756de314
Merge branch 'main' into add_unlimited_max_tokens 2023-07-08 02:37:38 -04:00
Andrei b8e0bed295
Merge pull request #453 from wu-qing-157/main
Fix incorrect token_logprobs (due to indexing after sorting)
2023-07-08 02:31:52 -04:00
Andrei Betlen d6e6aad927 bugfix: fix compatibility bug with openai api on last token 2023-07-08 00:06:11 -04:00
Andrei Betlen 4f2b5d0b53 Format 2023-07-08 00:05:10 -04:00
Andrei Betlen 34c505edf2 perf: convert pointer to byref 2023-07-07 22:54:07 -04:00
Andrei Betlen 52753b77f5 Upgrade fastapi to 0.100.0 and pydantic v2 2023-07-07 21:38:46 -04:00
Andrei Betlen 11eae75211 perf: avoid allocating new buffers during sampling 2023-07-07 19:28:53 -04:00
Andrei Betlen a14d8a9b3f perf: assign to candidates data structure instead 2023-07-07 18:58:43 -04:00
wu-qing-157 9e61661518 fix indexing token_logprobs after sorting 2023-07-07 10:18:49 +00:00
Andrei Betlen 57d8ec3899 Add setting to control request interruption 2023-07-07 03:37:23 -04:00
Andrei Betlen 4c7cdcca00 Add interruptible streaming requests for llama-cpp-python server. Closes #183 2023-07-07 03:04:17 -04:00
Andrei Betlen 98ae4e58a3 Update llama.cpp 2023-07-06 17:57:56 -04:00
Andrei Betlen b994296c75 Update llama.cpp 2023-07-05 01:00:14 -04:00
Andrei Betlen c67f786360 Update llama.cpp 2023-06-29 01:08:15 -04:00
Andrei Betlen e34f4414cf Hotfix: logits_all bug 2023-06-29 00:57:27 -04:00
Andrei Betlen a2ede37bd5 Load logits directly into scores buffer 2023-06-29 00:45:46 -04:00
Andrei Betlen b95b0ffbeb Use pre-allocated buffers to store input_ids and scores 2023-06-29 00:40:47 -04:00
Andrei Betlen a5e059c053 Free model when llama is unloaded. Closes #434 2023-06-28 23:58:55 -04:00
Andrei Betlen 3379dc40a1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-26 08:50:48 -04:00
Andrei Betlen 952228407e Update llama.cpp 2023-06-26 08:50:38 -04:00
Andrei Betlen b4a3db3e54 Update type signature 2023-06-26 08:50:30 -04:00
Andrei 5eb4ebb041
Merge branch 'main' into fix-state-pickle 2023-06-26 08:45:02 -04:00
samfundev d788fb49bf
Only concatenate after all batches are done 2023-06-24 15:51:46 -04:00
Andrei 877ca6d016
Merge branch 'main' into fix-state-pickle 2023-06-23 15:13:07 -04:00
Alexey 282698b6d3
server: pass seed param from command line to llama 2023-06-23 00:19:24 +04:00
Andrei Betlen e37798777e Update llama.cpp 2023-06-20 11:25:10 -04:00
Andrei Betlen d410f12fae Update docs. Closes #386 2023-06-17 13:38:48 -04:00
Andrei Betlen 9f528f4715 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-17 13:37:17 -04:00
Andrei Betlen d7153abcf8 Update llama.cpp 2023-06-16 23:11:14 -04:00
imaprogrammer fd9f294b3a
Update llama.py: include the number of input tokens in the ValueError exception message 2023-06-16 14:11:57 +05:30
Andrei Betlen 1e20be6d0c Add low_vram to server settings 2023-06-14 22:13:42 -04:00
Andrei Betlen 44b83cada5 Add low_vram parameter 2023-06-14 22:12:33 -04:00
Andrei Betlen f7c5cfaf50 Format server options 2023-06-14 22:08:28 -04:00
Andrei Betlen 9c41a3e990 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-14 21:50:43 -04:00
Andrei f568baeef1
Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter
Add support for `logit_bias` and `logit_bias_type` parameters
2023-06-14 21:49:56 -04:00
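A hedged request-body sketch for the server parameters added in #351; the token id and the accepted values for logit_bias_type below are assumptions:

    import requests

    resp = requests.post(
        "http://localhost:8000/v1/completions",
        json={
            "prompt": "The capital of France is",
            "max_tokens": 4,
            "logit_bias": {"1234": -100},    # token id -> bias (placeholder id)
            "logit_bias_type": "input_ids",  # assumed: "input_ids" or "tokens"
        },
    )
    print(resp.json()["choices"][0]["text"])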
Andrei Betlen f27393ab7e Add additional verbose logs for cache 2023-06-14 21:46:48 -04:00
Andrei Betlen 4cefb70cd0 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-14 21:40:19 -04:00
Andrei Betlen 715f98c591 Update llama.cpp 2023-06-14 21:40:13 -04:00
Okabintaro 10b0cb727b fix: Make LlamaState picklable for disk cache
I fixed the issue by making the saved state a bytes object instead of the ctypes one, which can't be pickled.
2023-06-13 12:03:31 +02:00
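With the state stored as plain bytes it can be pickled by the disk cache; a minimal sketch of wiring that up (constructor arguments are assumptions):

    from llama_cpp import Llama, LlamaDiskCache

    llm = Llama(model_path="./models/model.gguf")  # placeholder path
    llm.set_cache(LlamaDiskCache(cache_dir=".cache/llama"))  # state is pickled to disk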