Commit graph

325 commits

Author SHA1 Message Date
Andrei Betlen 0538ba1dab Merge branch 'main' into v0.2-wip 2023-07-20 19:06:26 -04:00
Andrei Betlen 01435da740 Update llama.cpp 2023-07-20 18:54:25 -04:00
Andrei Betlen 28a111704b Fix compatibility with older python versions 2023-07-20 18:52:10 -04:00
Andrei Betlen d10ce62714 Revert ctypes argtype change 2023-07-20 18:51:53 -04:00
Andrei 365d9a4367 Merge pull request #481 from c0sogi/main: Added `RouteErrorHandler` for server 2023-07-20 17:41:42 -04:00
Vinicius a8551477f5 Update llama_cpp.py - Fix c_char_p to Array[c_char_p] and c_float to Array[c_float] 2023-07-20 17:29:11 -03:00
Carlos Tejada 0756a2d3fb Now the last token sent when stream=True 2023-07-19 22:47:14 -04:00
Andrei Betlen 0b121a7456 Format 2023-07-19 03:48:27 -04:00
Andrei Betlen b43917c144 Add functions parameters 2023-07-19 03:48:20 -04:00
Andrei Betlen 19ba9d3845 Use numpy arrays for logits_processors and stopping_criteria. Closes #491 2023-07-18 19:27:41 -04:00
shutup 5ed8bf132f expose RoPE param to server start 2023-07-18 16:34:36 +08:00
c0sogi 1551ba10bd Added RouteErrorHandler for server 2023-07-16 14:57:39 +09:00
Andrei Betlen 8ab098e49d Re-order Llama class params 2023-07-15 15:35:08 -04:00
Andrei Betlen e4f9db37db Fix context_params struct layout 2023-07-15 15:34:55 -04:00
Andrei Betlen f0797a6054 Merge branch main into custom_rope 2023-07-15 15:11:01 -04:00
randoentity 3f8f276f9f Add bindings for custom_rope 2023-07-10 17:37:46 +02:00
Andrei Betlen a86bfdf0a5 bugfix: truncate completion max_tokens to fit context length by default 2023-07-09 18:13:29 -04:00
Andrei Betlen 6f70cc4b7d bugfix: pydantic settings missing / changed fields 2023-07-09 18:03:31 -04:00
Andrei 5d756de314 Merge branch 'main' into add_unlimited_max_tokens 2023-07-08 02:37:38 -04:00
Andrei b8e0bed295 Merge pull request #453 from wu-qing-157/main: Fix incorrect token_logprobs (due to indexing after sorting) 2023-07-08 02:31:52 -04:00
Andrei Betlen d6e6aad927 bugfix: fix compatibility bug with openai api on last token 2023-07-08 00:06:11 -04:00
Andrei Betlen 4f2b5d0b53 Format 2023-07-08 00:05:10 -04:00
Andrei Betlen 34c505edf2 perf: convert pointer to byref 2023-07-07 22:54:07 -04:00
Andrei Betlen 52753b77f5 Upgrade fastapi to 0.100.0 and pydantic v2 2023-07-07 21:38:46 -04:00
Andrei Betlen 11eae75211 perf: avoid allocating new buffers during sampling 2023-07-07 19:28:53 -04:00
Andrei Betlen a14d8a9b3f perf: assign to candidates data structure instead 2023-07-07 18:58:43 -04:00
wu-qing-157 9e61661518 fix indexing token_logprobs after sorting 2023-07-07 10:18:49 +00:00
Andrei Betlen 57d8ec3899 Add setting to control request interruption 2023-07-07 03:37:23 -04:00
Andrei Betlen 4c7cdcca00 Add interruptible streaming requests for llama-cpp-python server. Closes #183 2023-07-07 03:04:17 -04:00
Andrei Betlen 98ae4e58a3 Update llama.cpp 2023-07-06 17:57:56 -04:00
Andrei Betlen b994296c75 Update llama.cpp 2023-07-05 01:00:14 -04:00
Andrei Betlen c67f786360 Update llama.cpp 2023-06-29 01:08:15 -04:00
Andrei Betlen e34f4414cf Hotfix: logits_all bug 2023-06-29 00:57:27 -04:00
Andrei Betlen a2ede37bd5 Load logits directly into scores buffer 2023-06-29 00:45:46 -04:00
Andrei Betlen b95b0ffbeb Use pre-allocated buffers to store input_ids and scores 2023-06-29 00:40:47 -04:00
Andrei Betlen a5e059c053 Free model when llama is unloaded. Closes #434 2023-06-28 23:58:55 -04:00
Andrei Betlen 3379dc40a1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-26 08:50:48 -04:00
Andrei Betlen 952228407e Update llama.cpp 2023-06-26 08:50:38 -04:00
Andrei Betlen b4a3db3e54 Update type signature 2023-06-26 08:50:30 -04:00
Andrei 5eb4ebb041 Merge branch 'main' into fix-state-pickle 2023-06-26 08:45:02 -04:00
samfundev d788fb49bf Only concatenate after all batches are done 2023-06-24 15:51:46 -04:00
Andrei 877ca6d016 Merge branch 'main' into fix-state-pickle 2023-06-23 15:13:07 -04:00
Alexey 282698b6d3 server: pass seed param from command line to llama 2023-06-23 00:19:24 +04:00
Andrei Betlen e37798777e Update llama.cpp 2023-06-20 11:25:10 -04:00
Andrei Betlen d410f12fae Update docs. Closes #386 2023-06-17 13:38:48 -04:00
Andrei Betlen 9f528f4715 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-17 13:37:17 -04:00
Andrei Betlen d7153abcf8 Update llama.cpp 2023-06-16 23:11:14 -04:00
imaprogrammer fd9f294b3a Update llama.py: Added how many input tokens in ValueError exception 2023-06-16 14:11:57 +05:30
Andrei Betlen 1e20be6d0c Add low_vram to server settings 2023-06-14 22:13:42 -04:00
Andrei Betlen 44b83cada5 Add low_vram parameter 2023-06-14 22:12:33 -04:00