Commit graph

413 commits

Author SHA1 Message Date
c0sogi a240aa6b25 Fix typos in llama_grammar 2023-08-17 21:00:44 +09:00
Andrei Betlen 620cd2fd69 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-14 22:41:47 -04:00
Andrei Betlen 5788f1f2b2 Remove unused import 2023-08-14 22:41:37 -04:00
Andrei 6dfb98117e Merge pull request #600 from Vuizur/main: Add py.typed to conform with PEP 561 2023-08-14 22:40:41 -04:00
Andrei b99e758045 Merge pull request #604 from aliencaocao/main-1: Add doc string for n_gpu_layers argument and make -1 offload all layers 2023-08-14 22:40:10 -04:00
Andrei Betlen b345d60987 Update llama.cpp 2023-08-14 22:33:30 -04:00
Billy Cao c471871d0b make n_gpu_layers=-1 offload all layers 2023-08-13 11:21:28 +08:00
Billy Cao d018c7b01d Add doc string for n_gpu_layers argument 2023-08-12 18:41:47 +08:00
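
The two n_gpu_layers commits above correspond to the following usage pattern. This is a minimal sketch, assuming a local model file; the path is a placeholder, not something taken from this log.

```python
from llama_cpp import Llama

# Per the commits above, n_gpu_layers=-1 offloads all layers to the GPU,
# while 0 keeps the whole model on the CPU.
llm = Llama(
    model_path="./models/llama-2-7b.Q4_K_M.gguf",  # placeholder path
    n_gpu_layers=-1,
)
out = llm("Q: Name the planets in the solar system. A:", max_tokens=32)
print(out["choices"][0]["text"])
```
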
Hannes Krumbiegel 17dd7fa8e0 Add py.typed 2023-08-11 09:58:48 +02:00
MeouSker77 88184ed217 fix CJK output again 2023-08-09 22:04:35 +08:00
Andrei Betlen 66fb0345e8 Move grammar to function call argument 2023-08-08 15:08:54 -04:00
Andrei Betlen 1e844d3238 fix 2023-08-08 15:07:28 -04:00
Andrei Betlen 843b7ccd90 Merge branch 'main' into c0sogi/main 2023-08-08 14:43:02 -04:00
Andrei Betlen d015bdb4f8 Add mul_mat_q option 2023-08-08 14:35:06 -04:00
Andrei Betlen f6a7850e1a Update llama.cpp 2023-08-08 14:30:58 -04:00
c0sogi 0d7d2031a9 prevent memory access error by llama_grammar_free 2023-08-07 17:02:33 +09:00
c0sogi b07713cb9f reset grammar for every generation 2023-08-07 15:16:25 +09:00
c0sogi 418aa83b01 Added grammar based sampling 2023-08-07 02:21:37 +09:00
c0sogi ac188a21f3 Added low level grammar API 2023-08-05 14:43:35 +09:00
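
The grammar commits above (grammar-based sampling, the low-level grammar API, and moving the grammar to a function call argument) roughly enable usage along these lines. A minimal sketch, assuming LlamaGrammar is importable from the package top level; the GBNF grammar and model path are illustrative, not taken from this log.

```python
from llama_cpp import Llama, LlamaGrammar

# Illustrative GBNF grammar that restricts the completion to "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= ("yes" | "no")')

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf")  # placeholder path
out = llm("Is the sky blue? Answer yes or no: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```
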
Andrei Betlen ce57920e60 Suppress llama.cpp output when loading model. 2023-07-28 14:45:18 -04:00
Andrei Betlen a9b9f0397c Format 2023-07-28 01:53:08 -04:00
Andrei Betlen abc538fcd5 fix: annoying bug where attribute exceptions were drowning out file not found exceptions 2023-07-28 01:43:00 -04:00
Shouyi Wang 426dbfe3f4 Change tensor_split from array to pointer 2023-07-25 18:29:59 +10:00
Andrei Betlen 078902a6fe Add llama_grammar_accept_token 2023-07-24 15:55:26 -04:00
Andrei Betlen bf901773b0 Add llama_sample_grammar 2023-07-24 15:42:31 -04:00
Andrei Betlen 1b6997d69f Convert constants to python types and allow python types in low-level api 2023-07-24 15:42:07 -04:00
Andrei Betlen 343480364f Merge branch 'main' into v0.2-wip 2023-07-24 15:26:08 -04:00
Andrei Betlen 11dd2bf382 Add temporary rms_norm_eps parameter 2023-07-24 14:09:24 -04:00
Andrei Betlen 8cd64d4ac3 Add rms_eps_norm 2023-07-24 13:52:12 -04:00
bretello 0f09f10e8c add support for llama2 70b 2023-07-24 19:38:24 +02:00
Andrei Betlen 77c9f496b0 Merge branch 'main' into v0.2-wip 2023-07-24 13:19:54 -04:00
Andrei Betlen 401309d11c Revert "Merge pull request #521 from bretello/main" (this reverts commit 07f0f3a386, reversing changes made to d8a3ddbb1c) 2023-07-24 13:11:10 -04:00
Andrei 07f0f3a386 Merge pull request #521 from bretello/main: raise exception when `llama_load_model_from_file` fails 2023-07-24 13:09:28 -04:00
Andrei Betlen d8a3ddbb1c Update llama.cpp 2023-07-24 13:08:06 -04:00
Andrei Betlen 985d559971 Update llama.cpp 2023-07-24 13:04:34 -04:00
bretello 8be7d67f7e raise exception when llama_load_model_from_file fails 2023-07-24 14:42:37 +02:00
Andrei Betlen 436036aa67 Merge branch 'main' into v0.2-wip 2023-07-21 12:42:38 -04:00
Andrei Betlen b83728ad1e Update llama.cpp 2023-07-21 12:33:27 -04:00
Andrei Betlen 0538ba1dab Merge branch 'main' into v0.2-wip 2023-07-20 19:06:26 -04:00
Andrei Betlen 01435da740 Update llama.cpp 2023-07-20 18:54:25 -04:00
Andrei Betlen 28a111704b Fix compatibility with older python versions 2023-07-20 18:52:10 -04:00
Andrei Betlen d10ce62714 Revert ctypes argtype change 2023-07-20 18:51:53 -04:00
Andrei 365d9a4367 Merge pull request #481 from c0sogi/main: Added `RouteErrorHandler` for server 2023-07-20 17:41:42 -04:00
Vinicius a8551477f5 Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float] 2023-07-20 17:29:11 -03:00
Carlos Tejada 0756a2d3fb Now the last token is sent when stream=True 2023-07-19 22:47:14 -04:00
Andrei Betlen 0b121a7456 Format 2023-07-19 03:48:27 -04:00
Andrei Betlen b43917c144 Add functions parameters 2023-07-19 03:48:20 -04:00
Andrei Betlen 19ba9d3845 Use numpy arrays for logits_processors and stopping_criteria. Closes #491 2023-07-18 19:27:41 -04:00
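
The commit above switches logits_processors and stopping_criteria over to numpy arrays. A rough sketch of callbacks written against that interface; the function names and model path are illustrative, not taken from this log.

```python
import numpy as np
from llama_cpp import Llama, LogitsProcessorList, StoppingCriteriaList

def ban_token_zero(input_ids: np.ndarray, scores: np.ndarray) -> np.ndarray:
    # Per the commit above, both arguments arrive as numpy arrays; this
    # illustrative processor just makes token id 0 unselectable.
    scores[0] = -np.inf
    return scores

def stop_after_32_tokens(input_ids: np.ndarray, logits: np.ndarray) -> bool:
    # Stop once 32 tokens (prompt plus completion) have accumulated.
    return len(input_ids) >= 32

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf")  # placeholder path
out = llm(
    "Once upon a time",
    logits_processor=LogitsProcessorList([ban_token_zero]),
    stopping_criteria=StoppingCriteriaList([stop_after_32_tokens]),
)
print(out["choices"][0]["text"])
```
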
shutup 5ed8bf132f expose RoPE param to server start 2023-07-18 16:34:36 +08:00
c0sogi 1551ba10bd Added RouteErrorHandler for server 2023-07-16 14:57:39 +09:00
Andrei Betlen 8ab098e49d Re-order Llama class params 2023-07-15 15:35:08 -04:00
Andrei Betlen e4f9db37db Fix context_params struct layout 2023-07-15 15:34:55 -04:00
Andrei Betlen f0797a6054 Merge branch main into custom_rope 2023-07-15 15:11:01 -04:00
randoentity 3f8f276f9f Add bindings for custom_rope 2023-07-10 17:37:46 +02:00
Andrei Betlen a86bfdf0a5 bugfix: truncate completion max_tokens to fit context length by default 2023-07-09 18:13:29 -04:00
Andrei Betlen 6f70cc4b7d bugfix: pydantic settings missing / changed fields 2023-07-09 18:03:31 -04:00
Andrei 5d756de314 Merge branch 'main' into add_unlimited_max_tokens 2023-07-08 02:37:38 -04:00
Andrei b8e0bed295 Merge pull request #453 from wu-qing-157/main: Fix incorrect token_logprobs (due to indexing after sorting) 2023-07-08 02:31:52 -04:00
Andrei Betlen d6e6aad927 bugfix: fix compatibility bug with openai api on last token 2023-07-08 00:06:11 -04:00
Andrei Betlen 4f2b5d0b53 Format 2023-07-08 00:05:10 -04:00
Andrei Betlen 34c505edf2 perf: convert pointer to byref 2023-07-07 22:54:07 -04:00
Andrei Betlen 52753b77f5 Upgrade fastapi to 0.100.0 and pydantic v2 2023-07-07 21:38:46 -04:00
Andrei Betlen 11eae75211 perf: avoid allocating new buffers during sampling 2023-07-07 19:28:53 -04:00
Andrei Betlen a14d8a9b3f perf: assign to candidates data structure instead 2023-07-07 18:58:43 -04:00
wu-qing-157 9e61661518 fix indexing token_logprobs after sorting 2023-07-07 10:18:49 +00:00
Andrei Betlen 57d8ec3899 Add setting to control request interruption 2023-07-07 03:37:23 -04:00
Andrei Betlen 4c7cdcca00 Add interruptible streaming requests for llama-cpp-python server. Closes #183 2023-07-07 03:04:17 -04:00
Andrei Betlen 98ae4e58a3 Update llama.cpp 2023-07-06 17:57:56 -04:00
Andrei Betlen b994296c75 Update llama.cpp 2023-07-05 01:00:14 -04:00
Andrei Betlen c67f786360 Update llama.cpp 2023-06-29 01:08:15 -04:00
Andrei Betlen e34f4414cf Hotfix: logits_all bug 2023-06-29 00:57:27 -04:00
Andrei Betlen a2ede37bd5 Load logits directly into scores buffer 2023-06-29 00:45:46 -04:00
Andrei Betlen b95b0ffbeb Use pre-allocated buffers to store input_ids and scores 2023-06-29 00:40:47 -04:00
Andrei Betlen a5e059c053 Free model when llama is unloaded. Closes #434 2023-06-28 23:58:55 -04:00
Andrei Betlen 3379dc40a1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-26 08:50:48 -04:00
Andrei Betlen 952228407e Update llama.cpp 2023-06-26 08:50:38 -04:00
Andrei Betlen b4a3db3e54 Update type signature 2023-06-26 08:50:30 -04:00
Andrei 5eb4ebb041 Merge branch 'main' into fix-state-pickle 2023-06-26 08:45:02 -04:00
samfundev d788fb49bf Only concatenate after all batches are done 2023-06-24 15:51:46 -04:00
Andrei 877ca6d016 Merge branch 'main' into fix-state-pickle 2023-06-23 15:13:07 -04:00
Alexey 282698b6d3 server: pass seed param from command line to llama 2023-06-23 00:19:24 +04:00
Andrei Betlen e37798777e Update llama.cpp 2023-06-20 11:25:10 -04:00
Andrei Betlen d410f12fae Update docs. Closes #386 2023-06-17 13:38:48 -04:00
Andrei Betlen 9f528f4715 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-17 13:37:17 -04:00
Andrei Betlen d7153abcf8 Update llama.cpp 2023-06-16 23:11:14 -04:00
imaprogrammer fd9f294b3a Update llama.py: include the number of input tokens in the ValueError exception message 2023-06-16 14:11:57 +05:30
Andrei Betlen 1e20be6d0c Add low_vram to server settings 2023-06-14 22:13:42 -04:00
Andrei Betlen 44b83cada5 Add low_vram parameter 2023-06-14 22:12:33 -04:00
Andrei Betlen f7c5cfaf50 Format server options 2023-06-14 22:08:28 -04:00
Andrei Betlen 9c41a3e990 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-14 21:50:43 -04:00
Andrei f568baeef1 Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter: Add support for `logit_bias` and `logit_bias_type` parameters 2023-06-14 21:49:56 -04:00
Andrei Betlen f27393ab7e Add additional verbose logs for cache 2023-06-14 21:46:48 -04:00
Andrei Betlen 4cefb70cd0 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-14 21:40:19 -04:00
Andrei Betlen 715f98c591 Update llama.cpp 2023-06-14 21:40:13 -04:00
Okabintaro 10b0cb727b fix: Make LlamaState picklable for disk cache (the saved state is now a bytes object instead of the ctypes one, which can't be pickled) 2023-06-13 12:03:31 +02:00
Gabor 3129a0e7e5 correction to add back environment variable support <3 docker 2023-06-11 01:11:24 +01:00
Gabor 3ea31930e5 fixes abetlen/llama-cpp-python #358 2023-06-11 00:58:08 +01:00
Andrei Betlen 21acd7901f Re-enable cache 2023-06-10 12:22:31 -04:00
Andrei Betlen 6639371407 Update llama.cpp 2023-06-10 12:17:38 -04:00
Tanner Hobson eb7645b3ba Add support for logit_bias and logit_bias_type parameters 2023-06-09 13:13:08 -04:00
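
The logit_bias and logit_bias_type commits above target the server's OpenAI-compatible completions endpoint. A hedged example request using only logit_bias; the host, port, endpoint path, and token id are assumptions, and logit_bias_type is omitted because its accepted values are not documented in this log.

```python
import json
import urllib.request

# logit_bias maps token ids to an additive bias, in the style of the
# OpenAI API; the token id and bias value here are purely illustrative.
payload = {
    "prompt": "The best programming language is",
    "max_tokens": 16,
    "logit_bias": {"15043": -100.0},
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",  # assumed default server address
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["text"])
```
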