Commit graph

558 commits

Author SHA1 Message Date
Andrei Betlen fafe47114c Update llama.cpp 2023-05-21 17:47:21 -04:00
Andrei Betlen 8f49ca0287 Bump version 2023-05-20 08:53:40 -04:00
Andrei Betlen 76b1d2cd20 Change properties to functions to match token functions 2023-05-20 08:24:06 -04:00
Andrei Betlen a7ba85834f Add n_ctx, n_vocab, and n_embd properties 2023-05-20 08:13:41 -04:00
Your Name 0b079a658c make git module accessible anonymously 2023-05-20 02:25:59 +01:00
Simon Chabot e783f1c191 feat: make embedding support a list of strings as input (makes the /v1/embedding route similar to the OpenAI API) 2023-05-20 01:23:32 +02:00
Andrei Betlen 01a010be52 Fix llama_cpp and Llama type signatures. Closes #221 2023-05-19 11:59:33 -04:00
Andrei Betlen fb57b9470b Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-05-19 03:19:32 -04:00
Andrei Betlen f82d85fbee Bump version 2023-05-19 03:19:27 -04:00
Andrei Betlen c7788c85ab Add Guidance example 2023-05-19 03:16:58 -04:00
Andrei Betlen a8cd169251 Bugfix: Stop sequences can be strings 2023-05-19 03:15:08 -04:00
Andrei Betlen f0812c4d8c Add upgrade instructions to the README 2023-05-19 02:20:41 -04:00
Andrei Betlen 17d4271b04 Fix logprobs for completions and implement for streaming logprobs. 2023-05-19 02:20:27 -04:00
Andrei Betlen a634a2453b Allow first logprob token to be null to match openai api 2023-05-19 02:04:57 -04:00
Andrei Betlen dc39cc0fa4 Use server sent events function for streaming completion 2023-05-19 02:04:30 -04:00
Andrei 69f9d50090 Merge pull request #235 from Pipboyguy/main: Decrement CUDA version and bump Ubuntu 2023-05-18 13:42:04 -04:00
Andrei Betlen f0ec6e615e Stream tokens instead of text chunks 2023-05-18 11:35:59 -04:00
Andrei Betlen 21d8f5fa9f Remove unused union 2023-05-18 11:35:15 -04:00
Marcel Coetzee 6ece8a225a Set CUDA_VERSION as build ARG 2023-05-18 16:59:42 +02:00
Marcel Coetzee 6c57d38552 Decrement CUDA version and bump Ubuntu 2023-05-18 16:02:42 +02:00
Andrei Betlen 50e136252a Update llama.cpp 2023-05-17 16:14:12 -04:00
Andrei Betlen db10e0078b Update docs 2023-05-17 16:14:01 -04:00
Andrei Betlen 61d58e7b35 Check for CUDA_PATH before adding 2023-05-17 15:26:38 -04:00
Andrei Betlen 7c95895626 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-05-17 15:19:32 -04:00
Andrei 47921a312c Merge pull request #225 from aneeshjoy/main: Fixed CUBLAS DLL load issues on Windows 2023-05-17 15:17:37 -04:00
Aneesh Joy e9794f91f2 Fixed CUBLAS DLL load issue on Windows 2023-05-17 18:04:58 +01:00
Andrei Betlen 70695c430b Move docs link up 2023-05-17 11:40:12 -04:00
Andrei Betlen 4f342795e5 Update token checks 2023-05-17 03:35:13 -04:00
Andrei Betlen 626003c884 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-05-17 02:00:48 -04:00
Andrei Betlen f5c2f998ab Format 2023-05-17 02:00:39 -04:00
Andrei Betlen d28b753ed2 Implement penalize_nl 2023-05-17 01:53:26 -04:00
Andrei Betlen f11e2a781c Fix last_n_tokens_size 2023-05-17 01:42:51 -04:00
Andrei Betlen 7e55244540 Fix top_k value. Closes #220 2023-05-17 01:41:42 -04:00
Andrei Betlen e37a808bc0 Update llama.cpp 2023-05-16 23:33:53 -04:00
Andrei Betlen a7c9e38287 Update variable name 2023-05-16 18:07:25 -04:00
Andrei Betlen a3352923c7 Add model_alias option to override model_path in completions. Closes #39 2023-05-16 17:22:00 -04:00
Andrei Betlen 214589e462 Update llama.cpp 2023-05-16 17:20:45 -04:00
Andrei Betlen a65125c0bd Add sampling defaults for generate 2023-05-16 09:35:50 -04:00
Andrei Betlen 341c50b5b0 Fix CMakeLists.txt 2023-05-16 09:07:14 -04:00
Andrei 1a13d76c48 Merge pull request #215 from zxybazh/main: Update README.md 2023-05-15 17:57:58 -04:00
Xiyou Zhou 408dd14e5b Update README.md (fix typo) 2023-05-15 14:52:25 -07:00
Andrei e0cca841bf Merge pull request #214 from abetlen/dependabot/pip/mkdocs-material-9.1.12: Bump mkdocs-material from 9.1.11 to 9.1.12 2023-05-15 17:24:06 -04:00
dependabot[bot] 7526b3f6f9 Bump mkdocs-material from 9.1.11 to 9.1.12 2023-05-15 21:05:54 +00:00
Andrei cda9cecd5f Merge pull request #212 from mzbac/patch-1: chore: add note for Mac M1 installation 2023-05-15 16:19:00 -04:00
Andrei Betlen cbac19bf24 Add winmode arg only on windows if python version supports it 2023-05-15 09:15:01 -04:00
Anchen 3718799b37 chore: add note for Mac M1 installation 2023-05-15 20:46:59 +10:00
Andrei Betlen c804efe3f0 Fix obscure Windows DLL issue. Closes #208 2023-05-14 22:08:11 -04:00
Andrei Betlen ceec21f1e9 Update llama.cpp 2023-05-14 22:07:35 -04:00
Andrei Betlen d90c9df326 Bump version 2023-05-14 00:04:49 -04:00
Andrei Betlen cdf59768f5 Update llama.cpp 2023-05-14 00:04:22 -04:00