Commit graph

  • 9c8f4dca5f Fix Llama._create_completion suffix check: it can be either None or a str instance (#854) Marko Tasic 2023-11-01 23:52:50 +0100
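The suffix fix above (#854) amounts to accepting either None or a string and rejecting everything else. A minimal sketch of such a check (the helper name is hypothetical, not the library's actual code):

```python
def validate_suffix(suffix):
    """Accept None or a str suffix; reject anything else (hypothetical helper)."""
    if suffix is not None and not isinstance(suffix, str):
        raise TypeError(f"suffix must be None or str, got {type(suffix).__name__}")
    return suffix
```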
  • 5f8f369d1b Pass-through grammar parameter in web server. (#855) Closes #778 Daniel Thuerck 2023-11-01 23:51:12 +0100
  • 25cb710281 Update llama_types.py (#849) Adam Katora 2023-11-01 18:50:11 -0400
  • bdf5254658 Update llama.cpp Andrei Betlen 2023-11-01 14:15:56 -0400
  • d808fd436c Update llama.cpp Andrei Betlen 2023-10-31 21:29:35 -0400
  • 53861c9e53 Update llama.cpp Andrei Betlen 2023-10-24 03:13:32 -0400
  • acf50f179a Update llama.cpp Andrei Betlen 2023-10-20 11:17:31 -0400
  • 5a045fcbbc Update llama.cpp Andrei Betlen 2023-10-19 17:37:07 -0400
  • ef03d77b59 Enable finish reason tests Andrei Betlen 2023-10-19 02:56:45 -0400
  • 09a8406c83 Fix streaming not returning finish reason (#798) gmcgoldr 2023-10-19 02:55:56 -0400
  • 28c2b884e2 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-10-19 02:55:31 -0400
  • cbeef36510 Re-enable completion function tests Andrei Betlen 2023-10-19 02:55:29 -0400
  • ff580031d2 Update llama.cpp Andrei Betlen 2023-10-19 02:55:08 -0400
  • a315128d66 Update value check for n_gpu_layers field (#826) Xiaoyu Kevin Hu 2023-10-18 17:25:25 -0500
  • d989ac86e6 Update llama.cpp Andrei Betlen 2023-10-15 15:12:57 -0400
  • 10304d75fc Make use of suppress_stdout_stderr when freeing model (#803) Pierre Alexandre SCHEMBRI 2023-10-15 19:52:43 +0200
  • a1ac199980 Fix repeat greeting (#808) Ma, Guokai 2023-10-16 01:52:21 +0800
  • b50166500e Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES (#820) Eric Liu 2023-10-15 10:51:51 -0700
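The tensor_split validation (#820) bounds the number of split entries by LLAMA_MAX_DEVICES. A sketch of the idea; the constant value here is a placeholder, since the real one comes from the llama.cpp build:

```python
LLAMA_MAX_DEVICES = 16  # placeholder; the real value is defined by llama.cpp

def check_tensor_split(tensor_split):
    """Raise if more split entries are given than devices can exist (sketch)."""
    if tensor_split is not None and len(tensor_split) > LLAMA_MAX_DEVICES:
        raise ValueError(
            f"tensor_split has {len(tensor_split)} entries, "
            f"but LLAMA_MAX_DEVICES is {LLAMA_MAX_DEVICES}"
        )
```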
  • f30aa20126 Update llama.cpp Andrei Betlen 2023-10-12 02:24:50 -0400
  • 622bff19b2 Update llama.cpp Andrei Betlen 2023-10-10 19:23:35 -0400
  • d6a130a052 Print traceback on server error Andrei Betlen 2023-10-10 15:56:04 -0400
  • 43dfe1e2ab Update llama.cpp Andrei Betlen 2023-10-05 16:07:49 -0400
  • 2c0456acf0 Update llama.cpp Andrei Betlen 2023-10-04 20:19:31 -0400
  • c305be6db6 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-10-03 15:23:37 -0400
  • a7d17b8ac9 Update llama.cpp Andrei Betlen 2023-10-03 15:23:35 -0400
  • b76724cddc Update instructions for downloading a GGUF model (#783) ccshen 2023-10-02 23:46:47 +0800
  • 305482bd41 Add chatml chat format Andrei Betlen 2023-09-30 21:01:34 -0400
  • 5ef5280ef9 Log server exceptions to stdout Andrei Betlen 2023-09-30 19:13:36 -0400
  • f0af1c7201 Update llama.cpp Andrei Betlen 2023-09-30 19:09:50 -0400
  • fab4bccc35 Bump version Andrei Betlen 2023-09-30 16:04:46 -0400
  • d696251fbe Fix logits_all bug Andrei Betlen 2023-09-30 16:02:35 -0400
  • 6ee413d79e Bump version Andrei Betlen 2023-09-30 13:23:09 -0400
  • 42bb721d64 Fix bug in embedding Andrei Betlen 2023-09-30 13:20:22 -0400
  • bca965325d Update CHANGELOG Andrei Betlen 2023-09-30 00:08:45 -0400
  • 5d62d55a82 Bump version Andrei Betlen 2023-09-30 00:07:06 -0400
  • ac853e01e1 Include git directories Andrei Betlen 2023-09-30 00:01:14 -0400
  • 9e76613629 Remove git repo exclude Andrei Betlen 2023-09-29 23:28:59 -0400
  • b4939c2d99 Revert BUILD_NUMBER fix Andrei Betlen 2023-09-29 23:28:45 -0400
  • 541aaff45e Quote fix attempt #2 Andrei Betlen 2023-09-29 23:05:26 -0400
  • 39e5feb138 Fix quote issue Andrei Betlen 2023-09-29 23:01:38 -0400
  • 3c6e98f945 Use dev versioning for test pypi Andrei Betlen 2023-09-29 22:57:49 -0400
  • 1cca20304b Revert update to publish test pypi Andrei Betlen 2023-09-29 22:48:17 -0400
  • 85e4d08a2e Update publish to test pypi workflow Andrei Betlen 2023-09-29 22:32:31 -0400
  • 43f8fc371a Potential fix for pip install bug Andrei Betlen 2023-09-29 22:24:22 -0400
  • 386c88b68e Bump version Andrei Betlen 2023-09-29 20:07:31 -0400
  • d9bce17794 Update server params Andrei Betlen 2023-09-29 19:59:12 -0400
  • 3720c739d4 Update llama.cpp Andrei Betlen 2023-09-29 19:58:21 -0400
  • 3bca7708fb Configurable Chat Formats (#711) Andrei 2023-09-29 19:52:04 -0400
  • a945404b4a Fix rope scaling defaults (#767) Josh XT 2023-09-29 16:03:57 -0400
  • a72efc77de Update llama.cpp Andrei Betlen 2023-09-28 23:25:14 -0400
  • 1a1c3dc418 Update llama.cpp Andrei Betlen 2023-09-28 22:42:03 -0400
  • 4177ae6d34 Bump version Andrei Betlen 2023-09-25 14:38:38 -0400
  • 1ed0f3ebe1 Bump scikit-build-core version to one that includes fix for windows cmake. Andrei Betlen 2023-09-25 14:20:09 -0400
  • f7b785a00f Update CHANGELOG Andrei Betlen 2023-09-25 13:58:23 -0400
  • cf8ae5a69c Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-09-25 13:57:00 -0400
  • 5da57734bc Update llama.cpp Andrei Betlen 2023-09-25 13:56:52 -0400
  • 3d5e5b1c04 Adds openai-processing-ms response header (#748) Viacheslav/Slava Tradunsky 2023-09-25 13:55:58 -0400
  • dbca136fea Update llama_types and names to match openai api Andrei Betlen 2023-09-20 15:38:26 -0400
  • 15000fca69 Update llama.cpp Andrei Betlen 2023-09-20 14:38:44 -0400
  • 0b2464c32b Ignore version if set by pyenv Andrei Betlen 2023-09-20 12:28:28 -0400
  • 3afbf2eb75 Update CHANGELOG Andrei Betlen 2023-09-18 16:20:56 -0400
  • 6e167a285e Update CHANGELOG Andrei Betlen 2023-09-18 16:11:34 -0400
  • 38e34c97f0 Update llama.cpp Andrei Betlen 2023-09-18 16:11:27 -0400
  • 8d75016549 Install required runtime dlls to package directory on windows Andrei Betlen 2023-09-16 14:57:49 -0400
  • acf18fcdf0 Bump version Andrei Betlen 2023-09-15 14:22:21 -0400
  • c7f45a7468 Update llama.cpp Andrei Betlen 2023-09-15 14:16:34 -0400
  • b047b3034e Remove confusing helpstring from server cli args. Closes #719 Andrei Betlen 2023-09-15 14:09:43 -0400
  • 24fec0b242 Bump version Andrei Betlen 2023-09-14 18:33:08 -0400
  • dbd3a6d1ed Fix issue installing on m1 macs Andrei Betlen 2023-09-14 18:25:44 -0400
  • 482ecd79c9 Revert "Update llama.cpp" Andrei Betlen 2023-09-14 17:03:18 -0400
  • f73e385c33 Update llama.cpp Andrei Betlen 2023-09-14 16:37:33 -0400
  • ca4eb952a6 Revert "Update llama.cpp" Andrei Betlen 2023-09-14 15:28:50 -0400
  • 7da8e0fbf1 Merge branch 'main' of github.com:abetlen/llama_cpp_python into main Andrei Betlen 2023-09-14 14:51:50 -0400
  • 8474665625 Update base_path to fix issue resolving dll in windows isolation container. Andrei Betlen 2023-09-14 14:51:43 -0400
  • 40b22909dc Update examples from ggml to gguf and add hw-accel note for Web Server (#688) Jason Cox 2023-09-14 11:48:21 -0700
  • aa2f8a5008 Update llama.cpp Andrei Betlen 2023-09-14 14:44:59 -0400
  • 2291798900 Fix dockerfiles to install starlette-context Andrei Betlen 2023-09-14 14:40:16 -0400
  • 65a2a20050 Enable make fallback for scikit-build-core Andrei Betlen 2023-09-14 11:43:55 -0400
  • 255d653ae3 Add documentation and changelog links in pyproject Andrei Betlen 2023-09-14 04:00:37 -0400
  • 95d54808a5 Upgrade pip for editable installs Andrei Betlen 2023-09-14 02:01:45 -0400
  • 507bcc7171 Bump version Andrei Betlen 2023-09-13 23:15:23 -0400
  • 3e2250a12e Update CHANGELOG Andrei Betlen 2023-09-13 23:14:22 -0400
  • 60119dbaeb Update CHANGELOG Andrei Betlen 2023-09-13 23:13:19 -0400
  • 0449d29b9f Fix boolean env vars and cli arguments Andrei Betlen 2023-09-13 23:09:57 -0400
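Fixing boolean env vars and CLI arguments (0449d29b9f, c999325e8e) typically means parsing strings like "true"/"1" explicitly, because bool("false") is True in Python and naive casting silently misparses. A sketch of such a parser (the function name is illustrative):

```python
def parse_bool(value):
    """Parse a boolean from an env var or CLI string (illustrative sketch).

    Naive bool() casting is wrong here: any non-empty string is truthy,
    so bool("false") evaluates to True.
    """
    if isinstance(value, bool):
        return value
    lowered = value.lower()
    if lowered in ("1", "true", "yes", "on"):
        return True
    if lowered in ("0", "false", "no", "off"):
        return False
    raise ValueError(f"cannot parse boolean from {value!r}")
```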
  • 58a6e42cc0 Update app.py (#705) earonesty 2023-09-13 23:01:34 -0400
  • f4090a0bb2 Add numa support, low level api users must now explicitly call llama_backend_init at the start of their programs. Andrei Betlen 2023-09-13 23:00:43 -0400
  • c999325e8e Fix boolean cli flags Andrei Betlen 2023-09-13 22:56:10 -0400
  • 83764c5aee Update CHANGELOG Andrei Betlen 2023-09-13 21:58:53 -0400
  • 4daf77e546 Format Andrei Betlen 2023-09-13 21:23:23 -0400
  • 2920c4bf7e Update server params. Added lora_base, lora_path, low_vram, and main_gpu. Removed rms_norm_eps and n_gqa (deprecated in llama.cpp) Andrei Betlen 2023-09-13 21:23:13 -0400
  • 6a20293fc2 Reorder init params to match llama.cpp order Andrei Betlen 2023-09-13 21:20:26 -0400
  • c8f9b8a734 Explicitly make all init params other than model_path into keyword only params Andrei Betlen 2023-09-13 21:19:47 -0400
  • a68f9e2791 Add kwargs to init to catch extra params Andrei Betlen 2023-09-13 21:19:02 -0400
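The two changes above, keyword-only init params after model_path (c8f9b8a734) and a **kwargs catch-all for extra params (a68f9e2791), can be sketched together with a bare `*` in the signature. The parameter names below are illustrative, not the class's full signature:

```python
class Llama:
    # Sketch: everything after model_path is keyword-only (illustrative params).
    def __init__(self, model_path: str, *, n_ctx: int = 512, seed: int = -1, **kwargs):
        self.model_path = model_path
        self.n_ctx = n_ctx
        self.seed = seed
        # **kwargs absorbs unknown/extra params instead of raising a TypeError.
        self.extra = kwargs
```

With this shape, `Llama("model.gguf", 1024)` fails fast with a TypeError, while `Llama("model.gguf", n_ctx=1024)` works, so callers cannot silently pass arguments in the wrong positional order.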
  • 9e345a47a2 remove print Andrei Betlen 2023-09-13 21:12:27 -0400
  • 517f9ed80b Convert missed llama.cpp constants into standard python types Andrei Betlen 2023-09-13 21:11:52 -0400
  • c4c440ba2d Fix tensor_split cli option Andrei Betlen 2023-09-13 20:00:42 -0400
  • 203ede4ba2 Bump version Andrei Betlen 2023-09-13 18:07:08 -0400
  • 759405c84b Fix issue with Literal and Optional cli arguments not working. Closes #702 Andrei Betlen 2023-09-13 18:06:12 -0400
  • 6cfc54284b Add pyproject extra for scikit-build-core to ensure compatible pathspec version Andrei Betlen 2023-09-13 16:51:57 -0400
  • cacfd562ba Update llama.cpp Andrei Betlen 2023-09-13 16:51:00 -0400