Commit graph

268 commits

Author SHA1 Message Date
kddubey 6b2e0e05b4
perf: Don't convert logprobs arrays to lists (#1021) 2023-12-18 14:28:12 -05:00
Brandon Roberts 62944df142
Bugfix: Remove f16_kv, add offload_kqv field (#1019)
F16_KV appears to have been removed here: af99c6fbfc

This addresses two issues:

 - #995 which just requests to add the KV cache offloading param
 - #1006 a NULL ptr exception when using the embeddings (introduced by
   leaving f16_kv in the fields struct)
2023-12-18 14:27:11 -05:00
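offload_kqv is surfaced as a constructor flag on the high-level Llama class. A minimal hedged sketch of turning it on (the model path is a placeholder, not from the commit):

```python
from llama_cpp import Llama

# Sketch: offload the KV cache alongside the layers that are on the GPU.
llm = Llama(
    model_path="./models/model.gguf",  # placeholder path
    n_gpu_layers=-1,                   # offload all layers
    offload_kqv=True,                  # field added by this change
)
```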
Daniele Morotti f1c631dc53
Bug fixed with n_ctx=0 (#1015)
If n_ctx is set to 0, the code should use the maximum context length of the selected model, but this did not work: there was a problem with the initialization of this parameter and a related issue with 'n_batch'.
2023-12-16 18:59:50 -05:00
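A rough sketch of the behavior this fix restores, assuming a placeholder model path: with n_ctx=0 the wrapper should fall back to the context length stored in the model file.

```python
from llama_cpp import Llama

# Sketch only: n_ctx=0 should pick up the model's own training context
# length rather than a fixed window.
llm = Llama(model_path="./models/model.gguf", n_ctx=0)  # placeholder path
print(llm.n_ctx())  # expected to report the model's maximum context length
```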
kddubey 5a8944672f
Fix logits_to_logprobs for 2-D and 3-D logits (#1002)
* Fix logits_to_logprobs for 2-D and 3-D logits

* Set dtype to single

* Test size
2023-12-16 18:59:26 -05:00
Tanner Hobson ef22e478db
Replace logits_to_logprobs implementation with numpy equivalent to llama.cpp (#991)
See #990. This change makes the logits_to_logprobs function equivalent to the version in the llama.cpp repository. It uses numpy so it's much faster than the previous version.
2023-12-11 20:46:27 -05:00
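Both logprobs commits above boil down to a log-softmax over the last axis. A standalone numpy sketch of the idea (not the library's exact code, which additionally pins the dtype to single precision):

```python
import numpy as np

def logits_to_logprobs(logits: np.ndarray) -> np.ndarray:
    # Numerically stable log-softmax over the vocabulary (last) axis.
    # keepdims + broadcasting make this work for 1-D, 2-D, and 3-D logits.
    shifted = logits - np.max(logits, axis=-1, keepdims=True)
    log_norm = np.log(np.sum(np.exp(shifted), axis=-1, keepdims=True))
    return shifted - log_norm
```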
Andrei Betlen ec26f364cc Remove f16_kv 2023-12-11 10:25:37 -05:00
kddubey b069d06346
Fix #891 (#952) 2023-11-29 05:39:52 -05:00
Andrei Betlen 6308f21d5e docs: Update Llama docs 2023-11-26 15:56:40 -05:00
Andrei Betlen 4026166e68 docs: Update completion and chat_completion parameter docstrings 2023-11-24 03:24:19 -05:00
Andrei Betlen b6bb7ac76a docs: Add Llama class example 2023-11-22 23:10:04 -05:00
Andrei Betlen 7a3f87846b Format 2023-11-21 04:02:20 -05:00
Andrei Betlen 422ebc89ce Fix: Add logit_bias to all completion api methods 2023-11-21 04:01:36 -05:00
Andrei Betlen 07e47f55ba Add support for logit_bias outside of server api. Closes #827 2023-11-21 03:59:46 -05:00
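logit_bias follows the OpenAI convention of mapping token ids to additive bias values. A hedged usage sketch; the model path and token id are placeholders, and whether keys are passed as ints or strings may vary by version:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # placeholder path

# Sketch: push token id 15043 strongly downward so it is effectively banned.
# The id is illustrative; real ids depend on the model's vocabulary.
out = llm.create_completion(
    "Q: Name a greeting. A:",
    max_tokens=16,
    logit_bias={"15043": -100.0},
)
print(out["choices"][0]["text"])
```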
TK-Master b8438f70b5
Added support for min_p (#921)
* Added support for min_p

My small contribution to this great project.

Ref: https://github.com/ggerganov/llama.cpp/pull/3841

Closes: https://github.com/abetlen/llama-cpp-python/issues/911

* Fix for negative temp (sample_softmax)
2023-11-20 23:21:33 -05:00
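min_p keeps only tokens whose probability is at least min_p times the probability of the most likely token (see the llama.cpp PR linked above). A standalone numpy sketch of that filter, separate from the library's own sampler:

```python
import numpy as np

def min_p_filter(probs: np.ndarray, min_p: float = 0.05) -> np.ndarray:
    # Drop tokens whose probability falls below min_p * max(probs),
    # then renormalize. The most likely token always survives, so the
    # denominator is never zero (assuming 0 < min_p <= 1).
    threshold = min_p * probs.max()
    filtered = np.where(probs >= threshold, probs, 0.0)
    return filtered / filtered.sum()
```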
Andrei Betlen a34d480141 Fix #929 2023-11-20 22:50:59 -05:00
Andrei Betlen 6f0b0b1b84 Fix sampling bug when logits_all=False 2023-11-10 05:15:41 -05:00
Andrei Betlen d9b38e3e3a Potential bugfix for eval 2023-11-10 04:41:19 -05:00
Andrei Betlen e7962d2c73 Fix: default max_tokens matches openai api (16 for completion, max length for chat completion) 2023-11-10 02:49:27 -05:00
Andrei Betlen fd41ed3a90 Add set_seed to Llama class 2023-11-08 11:09:41 -05:00
Andrei Betlen ca4cb88351 Fix destructor NoneType is not callable error 2023-11-08 11:05:45 -05:00
Andrei Betlen b30b9c338b Add JSON mode support. Closes #881 2023-11-08 00:07:16 -05:00
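JSON mode is exposed through the OpenAI-style response_format argument. A hedged sketch; the model path and chat format are placeholders:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf", chat_format="chatml")  # placeholders

# Sketch: constrain the reply to valid JSON.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You output JSON."},
        {"role": "user", "content": "List two colors under the key 'colors'."},
    ],
    response_format={"type": "json_object"},
)
print(result["choices"][0]["message"]["content"])
```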
Andrei Betlen 86aeb9f3a1 Add seed parameter support for completion and chat_completion requests. Closes #884 2023-11-07 23:37:28 -05:00
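Together with the set_seed commit above, this makes sampling reproducible per request. A hedged sketch with a placeholder model path:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # placeholder path

# Reset the sampler's RNG globally...
llm.set_seed(1234)

# ...or pin it per request; with identical inputs and seed the two runs
# should normally produce the same text.
a = llm.create_completion("Once upon a time", max_tokens=32, seed=1234)
b = llm.create_completion("Once upon a time", max_tokens=32, seed=1234)
print(a["choices"][0]["text"] == b["choices"][0]["text"])
```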
Damian Stewart aab74f0b2b
Multimodal Support (Llava 1.5) (#821)
* llava v1.5 integration

* Point llama.cpp to fork

* Add llava shared library target

* Fix type

* Update llama.cpp

* Add llava api

* Revert changes to llama and llama_cpp

* Update llava example

* Add types for new gpt-4-vision-preview api

* Fix typo

* Update llama.cpp

* Update llama_types to match OpenAI v1 API

* Update ChatCompletionFunction type

* Reorder request parameters

* More API type fixes

* Even More Type Updates

* Add parameter for custom chat_handler to Llama class

* Fix circular import

* Convert to absolute imports

* Fix

* Fix pydantic Jsontype bug

* Accept list of prompt tokens in create_completion

* Add llava1.5 chat handler

* Add Multimodal notebook

* Clean up examples

* Add server docs

---------

Co-authored-by: Andrei Betlen <abetlen@gmail.com>
2023-11-07 22:48:51 -05:00
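The multimodal path works through a pluggable chat handler. A hedged sketch of wiring up the Llava 1.5 handler; the file paths and image URL are placeholders, and the handler class name is assumed to live in llama_cpp.llama_chat_format:

```python
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# Placeholder paths: a LLaVA 1.5 model plus its CLIP projector.
chat_handler = Llava15ChatHandler(clip_model_path="./models/mmproj.gguf")
llm = Llama(
    model_path="./models/llava-v1.5.gguf",
    chat_handler=chat_handler,
    n_ctx=2048,  # extra room for the image embedding
)

result = llm.create_chat_completion(
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": "https://example.com/cat.png"}},
                {"type": "text", "text": "What is in this picture?"},
            ],
        }
    ],
)
print(result["choices"][0]["message"]["content"])
```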
Andrei Betlen be0add1b2d Fix type bug 2023-11-06 09:30:38 -05:00
Andrei Betlen e214a58422 Refactor Llama class internals 2023-11-06 09:16:36 -05:00
Andrei Betlen 2ec043af76 Clean up stdout / stderr suppression 2023-11-03 13:02:15 -04:00
Andrei Betlen 4ea7027c41 Rename internal only module utils to _utils 2023-11-03 12:55:55 -04:00
Andrei Betlen df9362eeea Update llama.cpp 2023-11-03 11:34:50 -04:00
Andrei 3af7b21ff1
Add functionary support (#784)
* Add common grammars and json-schema-to-grammar utility function from llama.cpp

* Pass functions to format function

* Add basic functionary formatting

* Add LlamaChatHandler for more complex chat use cases

* Add function calling example notebook

* Add support for regular chat completions alongside function calling
2023-11-03 02:12:14 -04:00
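Function calling follows the OpenAI functions schema. A hedged sketch of a create_chat_completion call; the model path, chat format value, and the get_weather function are illustrative assumptions, not part of the commit:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/functionary.gguf", chat_format="functionary")  # placeholders

result = llm.create_chat_completion(
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    functions=[
        {
            "name": "get_weather",  # illustrative schema
            "description": "Get the current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        }
    ],
    function_call="auto",
)
print(result["choices"][0]["message"])
```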
Andrei ab028cb878
Migrate inference to llama_batch and llama_decode api (#795)
* Add low-level batching notebook

* fix: tokenization of special characters: (#850)

It should behave like llama.cpp, where most out-of-the-box usages
treat special characters accordingly

* Update CHANGELOG

* Cleanup

* Fix runner label

* Update notebook

* Use llama_decode and batch api

* Support logits_all parameter

---------

Co-authored-by: Antoine Lizee <antoine.lizee@gmail.com>
2023-11-02 20:13:57 -04:00
Andrei Betlen fa83cc5f9c Update llama.cpp
Fix build examples

Exclude examples directory

Revert cmake changes

Try actions/checkout@v4

Try to update submodules

Revert
2023-11-02 14:28:15 -04:00
Antoine Lizee 4d4e0f11e2 fix: tokenization of special characters: (#850)
It should behave like llama.cpp, where most out-of-the-box usages
treat special characters accordingly
2023-11-02 14:28:14 -04:00
cebtenzzre eefd76fe81
llama: fix exception in Llama.__del__ (#846) 2023-11-01 18:53:57 -04:00
Marko Tasic 9c8f4dca5f
fixed Llama._create_completion suffix check, it can be either None or str instance (#854) 2023-11-01 18:52:50 -04:00
Andrei Betlen 53861c9e53 Update llama.cpp 2023-10-24 03:13:32 -04:00
gmcgoldr 09a8406c83
Fix streaming doesn't return finish reason (#798)
When streaming, the chunk that carries the finish reason could be skipped. This change ensures that it is always yielded.
2023-10-19 02:55:56 -04:00
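The finish reason only arrives in the final streamed chunk, which is what this fix guarantees. A hedged sketch of reading it, with a placeholder model path:

```python
from llama_cpp import Llama

llm = Llama(model_path="./models/model.gguf")  # placeholder path

finish_reason = None
for chunk in llm.create_completion("The capital of France is", max_tokens=8, stream=True):
    choice = chunk["choices"][0]
    print(choice["text"], end="", flush=True)
    finish_reason = choice["finish_reason"]  # None until the last chunk

print("\nfinish_reason:", finish_reason)  # e.g. "stop" or "length"
```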
Andrei Betlen ff580031d2 Update llama.cpp 2023-10-19 02:55:08 -04:00
Pierre Alexandre SCHEMBRI 10304d75fc
Make use of suppress_stdout_stderr when freeing model (#803) 2023-10-15 13:52:43 -04:00
Eric Liu b50166500e
Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES (#820)
* Add validation for tensor_split size exceeding LLAMA_MAX_DEVICES

* reword
2023-10-15 13:51:51 -04:00
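tensor_split is a per-device list of fractions, so its length must not exceed LLAMA_MAX_DEVICES; the validation turns an obscure native failure into a clear error. A hedged usage sketch with a placeholder path:

```python
from llama_cpp import Llama

# Sketch: split the weights roughly 60/40 across two GPUs.
llm = Llama(
    model_path="./models/model.gguf",  # placeholder path
    n_gpu_layers=-1,
    tensor_split=[0.6, 0.4],  # more entries than LLAMA_MAX_DEVICES now raises
)
```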
Andrei Betlen d696251fbe Fix logits_all bug 2023-09-30 16:02:35 -04:00
Andrei Betlen 42bb721d64 Fix bug in embedding 2023-09-30 13:20:22 -04:00
Andrei 3bca7708fb
Configurable Chat Formats (#711)
* Add configurable default chat completion format.

* Remove chat_template file to avoid circular import

* Update llama_types

* Add chat format
2023-09-29 19:52:04 -04:00
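Chat formats are selected by name when constructing the model. A hedged sketch; the path and format name are placeholders:

```python
from llama_cpp import Llama

# Sketch: pick a built-in prompt template by name instead of hand-building prompts.
llm = Llama(model_path="./models/llama-2-chat.gguf", chat_format="llama-2")

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain a context window in one sentence."},
    ],
)
print(result["choices"][0]["message"]["content"])
```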
Josh XT a945404b4a
Fix rope scaling defaults (#767)
* Fix rope scale with backwards compatibility

* Fix defaults

* Fix op

* Remove backwards compatibility

* Check single val
2023-09-29 16:03:57 -04:00
Andrei Betlen 1a1c3dc418 Update llama.cpp 2023-09-28 22:42:03 -04:00
Andrei Betlen 38e34c97f0 Update llama.cpp 2023-09-18 16:11:27 -04:00
Andrei Betlen f4090a0bb2 Add numa support; low-level API users must now explicitly call llama_backend_init at the start of their programs. 2023-09-13 23:00:43 -04:00
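For low-level API users this means initializing the backend once at program start. A hedged sketch; the boolean numa argument reflects the signature around this release and has changed in later versions:

```python
import llama_cpp

# Low-level API sketch: initialize the backend before any model is loaded.
llama_cpp.llama_backend_init(numa=False)

# ... load models and run inference via the low-level bindings ...

llama_cpp.llama_backend_free()  # tear down at program exit
```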
Andrei Betlen 6a20293fc2 Reorder init params to match llama.cpp order 2023-09-13 21:20:26 -04:00
Andrei Betlen c8f9b8a734 Explicitly make all init params other than model_path into keyword only params 2023-09-13 21:19:47 -04:00
Andrei Betlen a68f9e2791 Add kwargs to init to catch extra params 2023-09-13 21:19:02 -04:00
Andrei Betlen 9e345a47a2 remove print 2023-09-13 21:12:27 -04:00