Commit graph

25 commits

Author SHA1 Message Date
chiensen b938cccf05 Add Pygmalion chat format (#986) 2023-12-11 20:44:04 -05:00
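A minimal sketch of how a named format like this one is selected, assuming the commit registers it under the name "pygmalion" (the model path is a placeholder):

```python
from llama_cpp import Llama

# Select a registered chat format by name; "pygmalion" is assumed to be
# the name this commit registers. The model path is illustrative.
llm = Llama(
    model_path="./models/pygmalion-2-7b.Q4_K_M.gguf",
    chat_format="pygmalion",
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response["choices"][0]["message"]["content"])
```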
Gardner Bickford c2d63a7148 fix: Typo in the Open Orca chat format #874 (#947) 2023-11-26 15:39:18 -05:00
Andrei Betlen 8c3aa7858b Merge branch 'main' of github.com:abetlen/llama_cpp_python into main 2023-11-24 00:15:36 -05:00
Andrei Betlen de2e2bc083 misc fix verbose printing in functionary model 2023-11-23 20:14:23 -05:00
mrfakename d68fc07b1b Add Zephyr format (#937) 2023-11-23 01:20:08 -05:00
caiyesd 4184835078 Add chat format to support baichuan (#938)
Signed-off-by: caiyesd <caiyesd@gmail.com>
2023-11-23 01:19:50 -05:00
caiyesd b8f29f4bf0 Add baichuan-2 chat format (#936)
Signed-off-by: caiyesd <caiyesd@gmail.com>
2023-11-22 06:08:06 -05:00
Andrei Betlen 7a3f87846b Format 2023-11-21 04:02:20 -05:00
Andrei Betlen 07e47f55ba Add support for logit_bias outside of server api. Closes #827 2023-11-21 03:59:46 -05:00
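A hedged sketch of what this commit enables: passing logit_bias directly to the client-side API rather than only through the server. The token id and bias value are arbitrary illustrations, and some versions may expect string keys:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b-chat.Q4_K_M.gguf")
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello."}],
    # Strongly discourage one specific token id (illustrative value);
    # keys may need to be strings depending on the version.
    logit_bias={15043: -100.0},
)
```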
mrfakename ef65fc5ff4 Add MistralLite, Intel, and OpenChat prompt formats (#927)
* Add MistralLite format

* Update llama_chat_format.py

* Update llama_chat_format.py
2023-11-21 00:19:25 -05:00
TK-Master b8438f70b5 Added support for min_p (#921)
* Added support for min_p

My small contribution to this great project.

Ref: https://github.com/ggerganov/llama.cpp/pull/3841

Closes: https://github.com/abetlen/llama-cpp-python/issues/911

* Fix for negative temp (sample_softmax)
2023-11-20 23:21:33 -05:00
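A sketch of the new min_p sampling parameter (values are illustrative): per the referenced llama.cpp PR, tokens whose probability falls below min_p times the probability of the most likely token are filtered out before sampling.

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-2-7b.Q4_K_M.gguf")
# min_p=0.05 keeps only tokens with at least 5% of the top token's
# probability; temperature then samples among the survivors.
output = llm(
    "Q: Name a color. A:",
    max_tokens=16,
    min_p=0.05,
    temperature=0.8,
)
print(output["choices"][0]["text"])
```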
Andrei Betlen b84d76a844 Fix: add default stop sequence to chatml chat format 2023-11-10 04:24:48 -05:00
Andrei Betlen 1b376c62b7 Update functionary for new OpenAI API 2023-11-10 02:51:58 -05:00
Andrei Betlen b62c449839 Bugfix: missing response_format for functionary and llava chat handlers 2023-11-09 00:55:23 -05:00
Andrei Betlen ca4cb88351 Fix destructor NoneType is not callable error 2023-11-08 11:05:45 -05:00
Andrei Betlen b30b9c338b Add JSON mode support. Closes #881 2023-11-08 00:07:16 -05:00
Andrei Betlen 64f5153c35 Add seed parameter to chat handlers 2023-11-07 23:41:29 -05:00
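A hedged sketch combining the two features above, JSON mode and the seed parameter; the model path and prompt are placeholders:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-2-7b-chat.Q4_K_M.gguf",
    chat_format="chatml",
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "List two fruits as JSON."}],
    response_format={"type": "json_object"},  # constrain output to valid JSON
    seed=1337,  # fixed seed for reproducible sampling
)
```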
Damian Stewart aab74f0b2b Multimodal Support (Llava 1.5) (#821)
* llava v1.5 integration

* Point llama.cpp to fork

* Add llava shared library target

* Fix type

* Update llama.cpp

* Add llava api

* Revert changes to llama and llama_cpp

* Update llava example

* Add types for new gpt-4-vision-preview api

* Fix typo

* Update llama.cpp

* Update llama_types to match OpenAI v1 API

* Update ChatCompletionFunction type

* Reorder request parameters

* More API type fixes

* Even More Type Updates

* Add parameter for custom chat_handler to Llama class

* Fix circular import

* Convert to absolute imports

* Fix

* Fix pydantic Jsontype bug

* Accept list of prompt tokens in create_completion

* Add llava1.5 chat handler

* Add Multimodal notebook

* Clean up examples

* Add server docs

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-11-07 22:48:51 -05:00
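A sketch of the multimodal usage this PR introduces, pairing the new Llava15ChatHandler with the gpt-4-vision-preview-style message content; model paths and the image URL are placeholders:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The CLIP projector model ships separately from the language model.
chat_handler = Llava15ChatHandler(clip_model_path="./models/mmproj.bin")
llm = Llama(
    model_path="./models/llava-v1.5-7b.Q4_K_M.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,  # larger context leaves room for image embeddings
)
response = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "What is in this image?"},
            ],
        }
    ]
)
```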
Andrei Betlen bbffdaebaa Refactor autotokenizer format to reusable function 2023-11-06 09:07:27 -05:00
Joe 4ff8def4d0 #717: Add support for Huggingface Autotokenizer (#790)
Co-authored-by: Andrei <abetlen@gmail.com>
2023-11-05 18:06:36 -05:00
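A hedged sketch of driving a chat format from a Hugging Face tokenizer's chat template, as added here and made reusable in the refactor above; the helper name hf_autotokenizer_to_chat_formatter and the model id are assumptions and may differ across versions (requires transformers):

```python
from llama_cpp.llama_chat_format import hf_autotokenizer_to_chat_formatter

# Build a formatter from a Hugging Face tokenizer's chat template
# (helper name assumed; model id is illustrative).
formatter = hf_autotokenizer_to_chat_formatter("teknium/OpenHermes-2.5-Mistral-7B")
result = formatter(messages=[{"role": "user", "content": "Hi"}])
print(result.prompt)
```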
earonesty 3580e2c5df Update llama_chat_format.py (#869)
* Update llama_chat_format.py

properly format llama2 with first-message prompt embedded

* Update llama_chat_format.py
2023-11-05 17:00:13 -05:00
Andrei 3af7b21ff1 Add functionary support (#784)
* Add common grammars and json-schema-to-grammar utility function from llama.cpp

* Pass functions to format function

* Add basic functionary formatting

* Add LlamaChatHandler for more complex chat use cases

* Add function calling example notebook

* Add support for regular chat completions alongside function calling
2023-11-03 02:12:14 -04:00
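A hedged sketch of the OpenAI-style function calling wired up in this PR, assuming the functionary handler is selectable via chat_format="functionary"; the model path and function schema are placeholders:

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/functionary-7b-v1.Q4_K_M.gguf",
    chat_format="functionary",  # assumed registered handler name
)
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What is the weather in Paris?"}],
    functions=[
        {
            "name": "get_weather",
            "description": "Get current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
)
```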
Ma, Guokai a1ac199980 Fix repeat greeting (#808)
* fix repeated greeting

* remove separator between role and message
2023-10-15 13:52:21 -04:00
Andrei Betlen 305482bd41 Add chatml chat format 2023-09-30 21:01:34 -04:00
Andrei 3bca7708fb Configurable Chat Formats (#711)
* Add configurable default chat completion format.

* Remove chat_template file to avoid circular import

* Update llama_types

* Add chat format
2023-09-29 19:52:04 -04:00
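A hedged sketch of how the format commits throughout this log plug into the configurable chat format system introduced here, based on the register_chat_format decorator and ChatFormatterResponse in llama_cpp/llama_chat_format.py; the "my-assistant" name and role-tag template are illustrative:

```python
from llama_cpp import llama_chat_format

@llama_chat_format.register_chat_format("my-assistant")
def format_my_assistant(messages, **kwargs) -> llama_chat_format.ChatFormatterResponse:
    # Join the messages into a single prompt string with simple role tags,
    # then hand back the prompt plus a stop sequence for generation.
    prompt = ""
    for message in messages:
        prompt += f"<|{message['role']}|>\n{message['content']}\n"
    prompt += "<|assistant|>\n"
    return llama_chat_format.ChatFormatterResponse(prompt=prompt, stop=["<|user|>"])
```

Once registered, the format is selected like any built-in one, e.g. Llama(model_path=..., chat_format="my-assistant").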