Commit graph

200 commits

Author SHA1 Message Date
Andrei Betlen 620cd2fd69 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-08-14 22:41:47 -04:00
Andrei Betlen 5788f1f2b2 Remove unused import 2023-08-14 22:41:37 -04:00
Billy Cao c471871d0b make n_gpu_layers=-1 offload all layers 2023-08-13 11:21:28 +08:00
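A minimal sketch of what this change means in use, assuming the `llama_cpp` Python API of the time (the model path is a placeholder):

```python
from llama_cpp import Llama

# With this change, a negative n_gpu_layers (e.g. -1) offloads all
# model layers to the GPU instead of requiring the exact layer count.
llm = Llama(model_path="./models/7B/model.bin", n_gpu_layers=-1)
```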
Billy Cao d018c7b01d Add doc string for n_gpu_layers argument 2023-08-12 18:41:47 +08:00
Andrei Betlen 66fb0345e8 Move grammar to function call argument 2023-08-08 15:08:54 -04:00
Andrei Betlen 1e844d3238 fix 2023-08-08 15:07:28 -04:00
Andrei Betlen 843b7ccd90 Merge branch 'main' into c0sogi/main 2023-08-08 14:43:02 -04:00
Andrei Betlen d015bdb4f8 Add mul_mat_q option 2023-08-08 14:35:06 -04:00
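The option surfaces llama.cpp's quantized matrix-multiplication kernels as a constructor flag; a hedged sketch (path is a placeholder):

```python
from llama_cpp import Llama

# mul_mat_q toggles llama.cpp's quantized matmul kernels.
llm = Llama(model_path="./models/7B/model.bin", mul_mat_q=True)
```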
c0sogi b07713cb9f reset grammar for every generation 2023-08-07 15:16:25 +09:00
c0sogi 418aa83b01 Added grammar based sampling 2023-08-07 02:21:37 +09:00
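Together with "Move grammar to function call argument" above, grammar-constrained sampling looks roughly like this (a sketch, with a placeholder model path and a toy GBNF grammar):

```python
from llama_cpp import Llama, LlamaGrammar

# A tiny GBNF grammar that only permits the strings "yes" or "no".
grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')

llm = Llama(model_path="./models/7B/model.bin")
# The grammar is passed per call rather than stored on the model,
# and is reset for every generation per the commit above.
out = llm("Is water wet? Answer: ", grammar=grammar, max_tokens=4)
print(out["choices"][0]["text"])
```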
Andrei Betlen ce57920e60 Suppress llama.cpp output when loading model. 2023-07-28 14:45:18 -04:00
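Python-level redirection can't silence output written from C, so suppressing llama.cpp's load-time logging needs file-descriptor redirection; a sketch of the usual trick, not necessarily the library's exact implementation:

```python
import os
import sys
from contextlib import contextmanager

@contextmanager
def suppress_stderr():
    # llama.cpp writes directly to fd 2, so duplicate the descriptor
    # and point it at /dev/null for the duration of the block.
    fd = sys.stderr.fileno()
    saved = os.dup(fd)
    devnull = os.open(os.devnull, os.O_WRONLY)
    try:
        os.dup2(devnull, fd)
        yield
    finally:
        os.dup2(saved, fd)
        os.close(devnull)
        os.close(saved)
```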
Andrei Betlen a9b9f0397c Format 2023-07-28 01:53:08 -04:00
Andrei Betlen abc538fcd5 fix: annoying bug where attribute exceptions were drowning out file not found exceptions 2023-07-28 01:43:00 -04:00
Shouyi Wang 426dbfe3f4 Change tensor_split from array to pointer 2023-07-25 18:29:59 +10:00
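The general shape of the change, sketched with plain ctypes (values illustrative, not the library's exact code):

```python
import ctypes

tensor_split = [0.6, 0.4]  # illustrative per-GPU split fractions

# Build a C float array from the Python list, then view it as the
# `float *` the C API expects instead of a fixed-size array type.
FloatArray = ctypes.c_float * len(tensor_split)
c_array = FloatArray(*tensor_split)
c_pointer = ctypes.cast(c_array, ctypes.POINTER(ctypes.c_float))
```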
Andrei Betlen 11dd2bf382 Add temporary rms_norm_eps parameter 2023-07-24 14:09:24 -04:00
Andrei Betlen 8cd64d4ac3 Add rms_norm_eps 2023-07-24 13:52:12 -04:00
bretello 0f09f10e8c add support for llama2 70b 2023-07-24 19:38:24 +02:00
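LLaMA 2 70B uses grouped-query attention and a different RMSNorm epsilon, which at the time had to be passed explicitly; a hedged sketch (the knobs were temporary, per the commits above, and the path and values are illustrative):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/70B/model.bin",
    n_gqa=8,            # grouped-query attention factor for 70B
    rms_norm_eps=1e-5,  # temporary RMSNorm epsilon override
)
```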
Andrei 365d9a4367 Merge pull request #481 from c0sogi/main: Added `RouteErrorHandler` for server 2023-07-20 17:41:42 -04:00
Carlos Tejada 0756a2d3fb Now the last token is sent when stream=True 2023-07-19 22:47:14 -04:00
c0sogi 1551ba10bd Added RouteErrorHandler for server 2023-07-16 14:57:39 +09:00
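The usual FastAPI pattern behind such a handler, as a sketch (not necessarily the server's exact code): a custom `APIRoute` that wraps every endpoint and converts uncaught exceptions into JSON error responses.

```python
from typing import Callable

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse
from fastapi.routing import APIRoute

class RouteErrorHandler(APIRoute):
    """Turn uncaught endpoint exceptions into JSON error responses."""

    def get_route_handler(self) -> Callable:
        original = super().get_route_handler()

        async def handler(request: Request):
            try:
                return await original(request)
            except Exception as exc:
                return JSONResponse(status_code=500, content={"error": str(exc)})

        return handler

app = FastAPI()
app.router.route_class = RouteErrorHandler
```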
Andrei Betlen 8ab098e49d Re-order Llama class params 2023-07-15 15:35:08 -04:00
Andrei Betlen f0797a6054 Merge branch 'main' into custom_rope 2023-07-15 15:11:01 -04:00
randoentity 3f8f276f9f Add bindings for custom_rope 2023-07-10 17:37:46 +02:00
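Custom RoPE lets a model run with a context longer than it was trained for; a sketch of the resulting knobs, assuming the parameter names follow llama.cpp's (path and values are illustrative):

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/7B/model.bin",
    n_ctx=4096,
    rope_freq_base=10000.0,
    rope_freq_scale=0.5,  # halving the scale roughly doubles usable context
)
```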
Andrei Betlen a86bfdf0a5 bugfix: truncate completion max_tokens to fit context length by default 2023-07-09 18:13:29 -04:00
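The logic is simple to state; the helper name below is hypothetical, for illustration only:

```python
def clamp_max_tokens(max_tokens: int, n_prompt_tokens: int, n_ctx: int) -> int:
    # Cap the completion so prompt + completion never exceeds the context.
    return min(max_tokens, n_ctx - n_prompt_tokens)
```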
Andrei 5d756de314 Merge branch 'main' into add_unlimited_max_tokens 2023-07-08 02:37:38 -04:00
Andrei b8e0bed295 Merge pull request #453 from wu-qing-157/main: Fix incorrect token_logprobs (due to indexing after sorting) 2023-07-08 02:31:52 -04:00
Andrei Betlen d6e6aad927 bugfix: fix compatibility bug with openai api on last token 2023-07-08 00:06:11 -04:00
Andrei Betlen 4f2b5d0b53 Format 2023-07-08 00:05:10 -04:00
Andrei Betlen 34c505edf2 perf: convert pointer to byref 2023-07-07 22:54:07 -04:00
Andrei Betlen 11eae75211 perf: avoid allocating new buffers during sampling 2023-07-07 19:28:53 -04:00
Andrei Betlen a14d8a9b3f perf: assign to candidates data structure instead 2023-07-07 18:58:43 -04:00
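The byref change in this cluster is a common ctypes micro-optimization; a sketch:

```python
import ctypes

value = ctypes.c_float(0.0)

# ctypes.pointer() constructs a full pointer object on every call, while
# ctypes.byref() returns a lightweight reference; when the same object is
# passed into C thousands of times per generation, the savings add up.
heavy = ctypes.pointer(value)  # full pointer object
light = ctypes.byref(value)    # cheap reference, sufficient for a C call
```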
wu-qing-157 9e61661518 fix indexing token_logprobs after sorting 2023-07-07 10:18:49 +00:00
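The gist of the bug, reconstructed as a toy example: positional indexing only matches token ids before sorting.

```python
import math

def log_softmax(logits):
    m = max(logits)
    total = sum(math.exp(x - m) for x in logits)
    return [x - m - math.log(total) for x in logits]

logits = [2.0, 0.5, 1.0]
logprobs = log_softmax(logits)
token_id = 2

correct = logprobs[token_id]             # index by token id first
ranked = sorted(logprobs, reverse=True)  # then sort for top-n display
wrong = ranked[token_id]                 # position 2 no longer means token 2
```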
Andrei Betlen e34f4414cf Hotfix: logits_all bug 2023-06-29 00:57:27 -04:00
Andrei Betlen a2ede37bd5 Load logits directly into scores buffer 2023-06-29 00:45:46 -04:00
Andrei Betlen b95b0ffbeb Use pre-allocated buffers to store input_ids and scores 2023-06-29 00:40:47 -04:00
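The pre-allocation pattern, sketched with illustrative shapes (not the library's exact code):

```python
import numpy as np

n_ctx, n_vocab = 2048, 32000  # illustrative sizes

# Allocate once at load time...
input_ids = np.zeros((n_ctx,), dtype=np.intc)
scores = np.zeros((n_ctx, n_vocab), dtype=np.single)

# ...then each eval writes into a slice instead of building new arrays.
def store_batch(n_past: int, batch_tokens, batch_logits) -> None:
    n = len(batch_tokens)
    input_ids[n_past : n_past + n] = batch_tokens
    scores[n_past : n_past + n, :] = batch_logits
```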
Andrei Betlen a5e059c053 Free model when llama is unloaded. Closes #434 2023-06-28 23:58:55 -04:00
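The cleanup pattern behind the fix, sketched (the wrapper class name is hypothetical; `llama_cpp.llama_free` is the binding that releases the C-side context):

```python
import llama_cpp

class LlamaWrapper:
    def __init__(self, ctx):
        self.ctx = ctx  # pointer returned by the llama.cpp loader

    def __del__(self):
        # Release the C-side memory when the Python object goes away,
        # guarding against double frees.
        if getattr(self, "ctx", None) is not None:
            llama_cpp.llama_free(self.ctx)
            self.ctx = None
```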
Andrei Betlen 3379dc40a1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-06-26 08:50:48 -04:00
Andrei Betlen 952228407e Update llama.cpp 2023-06-26 08:50:38 -04:00
Andrei Betlen b4a3db3e54 Update type signature 2023-06-26 08:50:30 -04:00
Andrei 5eb4ebb041 Merge branch 'main' into fix-state-pickle 2023-06-26 08:45:02 -04:00
samfundev d788fb49bf Only concatenate after all batches are done 2023-06-24 15:51:46 -04:00
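The perf idea, as a toy example: concatenating inside the loop re-copies all prior data on every pass, so collect per-batch results and concatenate once.

```python
import numpy as np

def process(batch: np.ndarray) -> np.ndarray:
    return batch * 2  # stand-in for per-batch model evaluation

def run_batches(batches):
    outputs = [process(b) for b in batches]  # collect first
    return np.concatenate(outputs)           # one copy at the end

result = run_batches([np.arange(4), np.arange(3)])
```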
Andrei 877ca6d016 Merge branch 'main' into fix-state-pickle 2023-06-23 15:13:07 -04:00
Andrei Betlen d410f12fae Update docs. Closes #386 2023-06-17 13:38:48 -04:00
imaprogrammer fd9f294b3a Update llama.py: include the number of input tokens in the ValueError message 2023-06-16 14:11:57 +05:30
Andrei Betlen 44b83cada5 Add low_vram parameter 2023-06-14 22:12:33 -04:00
Andrei f568baeef1 Merge pull request #351 from player1537-forks/th/add-logits-bias-parameter: Add support for `logit_bias` and `logit_bias_type` parameters 2023-06-14 21:49:56 -04:00
Okabintaro 10b0cb727b fix: Make LlamaState picklable for disk cache. I fixed the issue by making the saved state a bytes object instead of the ctypes one, which can't be pickled. 2023-06-13 12:03:31 +02:00
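The gist of the fix, as a sketch with an illustrative buffer size:

```python
import ctypes
import pickle

size = 16  # illustrative state size in bytes
c_buf = (ctypes.c_uint8 * size)()  # ctypes buffer holding the saved state

# Snapshot the ctypes buffer into plain bytes, which pickle handles
# cleanly, before storing it on the state object.
state_bytes = bytes(c_buf)
blob = pickle.dumps(state_bytes)
assert pickle.loads(blob) == state_bytes
```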
Andrei Betlen 21acd7901f Re-enable cache 2023-06-10 12:22:31 -04:00
Tanner Hobson eb7645b3ba Add support for logit_bias and logit_bias_type parameters 2023-06-09 13:13:08 -04:00
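A hedged example of the parameters against the bundled OpenAI-compatible server (host, port, token id, and bias values are illustrative):

```python
import json
import urllib.request

payload = {
    "prompt": "The capital of France is",
    "max_tokens": 8,
    # Maps token ids (as strings) to additive logit biases; -100 roughly
    # bans a token, +100 roughly forces it when feasible.
    "logit_bias": {"1234": -100.0},
    "logit_bias_type": "input_ids",  # keys are token ids in this mode
}
req = urllib.request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```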
Andrei Betlen 0da655b3be Temporarily disable cache until save state bug is fixed. 2023-06-09 11:10:24 -04:00