Commit graph

789 commits

Author SHA1 Message Date
Andrei 15e0e0a937 Merge pull request #390 from SubhranshuSharma/main (added termux with root instructions) 2023-07-14 16:53:23 -04:00
Andrei Betlen 118b7f6d5c fix: tensor_split should be optional list 2023-07-14 16:52:48 -04:00
Andrei Betlen 25b3494e11 Minor fix to tensor_split parameter 2023-07-14 16:40:53 -04:00
Andrei Betlen e6c67c8f7d Update llama.cpp 2023-07-14 16:40:31 -04:00
Andrei 82b11c8c16 Merge pull request #460 from shouyiwang/tensor_split (Add support for llama.cpp's --tensor-split parameter) 2023-07-14 16:33:54 -04:00
Shouyi Wang 579f526246 Resolve merge conflicts 2023-07-14 14:37:01 +10:00
Andrei Betlen 6705f9b6c6 Bump version 2023-07-13 23:32:06 -04:00
Andrei Betlen de4cc5a233 bugfix: pydantic v2 fields 2023-07-13 23:25:12 -04:00
Andrei Betlen 896ab7b88a Update llama.cpp 2023-07-13 23:24:55 -04:00
Andrei Betlen 7bb0024cd0 Fix uvicorn dependency 2023-07-12 19:31:43 -04:00
Andrei Betlen f6c9d17f6b Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-07-09 18:20:06 -04:00
Andrei Betlen 8e0f6253db Bump version 2023-07-09 18:20:04 -04:00
Andrei Betlen c988c2ac0b Bump version 2023-07-09 18:19:37 -04:00
Andrei Betlen df3d545938 Update changelog 2023-07-09 18:13:41 -04:00
Andrei Betlen a86bfdf0a5 bugfix: truncate completion max_tokens to fit context length by default 2023-07-09 18:13:29 -04:00
Andrei Betlen 6f70cc4b7d bugfix: pydantic settings missing / changed fields 2023-07-09 18:03:31 -04:00
Andrei Betlen 0f3c474a49 Bump version 2023-07-09 11:44:29 -04:00
Andrei Betlen 9aa64163db Update llama.cpp 2023-07-09 11:40:59 -04:00
Shouyi Wang 9f21f548a5 Add tensor split 2023-07-09 23:00:59 +10:00
Andrei Betlen 99f064e681 docker: Add libopenblas to simple image 2023-07-09 01:36:39 -04:00
Andrei Betlen 00da643929 Update llama.cpp 2023-07-08 20:30:34 -04:00
Andrei Betlen 3c85c41573 docker: update path to dockerfile 2023-07-08 04:04:11 -04:00
Andrei Betlen 1f5e748a7e docker: fix docker build action args 2023-07-08 04:00:43 -04:00
Andrei Betlen 9e153fd11d docker: update context path 2023-07-08 03:44:51 -04:00
Andrei Betlen 5b7d76608d docker: add checkout action to dockerfile 2023-07-08 03:43:17 -04:00
Andrei Betlen 3a2635b9e1 Update docker workflow for new simple image 2023-07-08 03:37:28 -04:00
Andrei Betlen 670fe4b701 Update changelog 2023-07-08 03:37:12 -04:00
Andrei 24724202ee Merge pull request #64 from jm12138/add_unlimited_max_tokens (Add unlimited max_tokens) 2023-07-08 02:38:06 -04:00
Andrei 5d756de314 Merge branch 'main' into add_unlimited_max_tokens 2023-07-08 02:37:38 -04:00
Andrei 236c4cf442 Merge pull request #456 from AgentJ-WR/patch-1 (Show how to adjust context window in README.md) 2023-07-08 02:32:20 -04:00
Andrei 7952ca50c9 Merge pull request #452 from audreyfeldroy/update-macos-metal-gpu-step-4 (Update macOS Metal GPU step 4) 2023-07-08 02:32:09 -04:00
Andrei b8e0bed295 Merge pull request #453 from wu-qing-157/main (Fix incorrect token_logprobs due to indexing after sorting) 2023-07-08 02:31:52 -04:00
Andrei Betlen d6e6aad927 bugfix: fix compatibility bug with openai api on last token 2023-07-08 00:06:11 -04:00
Andrei Betlen 4f2b5d0b53 Format 2023-07-08 00:05:10 -04:00
AgentJ-WR ea4fbadab3 Show how to adjust context window in README.md 2023-07-07 23:24:57 -04:00
Andrei Betlen 34c505edf2 perf: convert pointer to byref 2023-07-07 22:54:07 -04:00
Andrei Betlen 52753b77f5 Upgrade fastapi to 0.100.0 and pydantic v2 2023-07-07 21:38:46 -04:00
Andrei Betlen 11eae75211 perf: avoid allocating new buffers during sampling 2023-07-07 19:28:53 -04:00
Andrei Betlen 7887376bff Update llama.cpp 2023-07-07 19:06:54 -04:00
Andrei Betlen a14d8a9b3f perf: assign to candidates data structure instead 2023-07-07 18:58:43 -04:00
wu-qing-157 9e61661518 fix indexing token_logprobs after sorting 2023-07-07 10:18:49 +00:00
Audrey Roy Greenfeld d270ec231a Update macOS Metal GPU step 4 (update "today" to version 0.1.62; fix numbering: there were two step 4's) 2023-07-07 11:15:04 +01:00
Andrei Betlen ca11673061 Add universal docker image 2023-07-07 03:38:51 -04:00
Andrei Betlen 57d8ec3899 Add setting to control request interruption 2023-07-07 03:37:23 -04:00
Andrei Betlen cc542b4452 Update llama.cpp 2023-07-07 03:04:54 -04:00
Andrei Betlen 4c7cdcca00 Add interruptible streaming requests for llama-cpp-python server. Closes #183 2023-07-07 03:04:17 -04:00
Andrei Betlen 98ae4e58a3 Update llama.cpp 2023-07-06 17:57:56 -04:00
Andrei Betlen a1b2d5c09b Bump version 2023-07-05 01:06:46 -04:00
Andrei Betlen b994296c75 Update llama.cpp 2023-07-05 01:00:14 -04:00
Andrei 058b134ab6 Merge pull request #443 from abetlen/dependabot/pip/mkdocs-material-9.1.18 (Bump mkdocs-material from 9.1.17 to 9.1.18) 2023-07-05 00:40:46 -04:00