llama.cpp/llama_cpp (directory listing; latest commit 2024-04-30 15:50:30 -04:00)
server feat: Add option to enable flash_attn to Llama params and ModelSettings 2024-04-30 09:29:16 -04:00
__init__.py chore: Bump version 2024-04-30 09:39:56 -04:00
_internals.py fix: Suppress all logs when verbose=False, use hardcoded fileno's to work in colab notebooks. Closes #796 Closes #729 2024-04-30 15:45:34 -04:00
_logger.py fix: Use llama_log_callback to avoid suppress_stdout_stderr 2024-02-05 21:52:12 -05:00
_utils.py fix: Suppress all logs when verbose=False, use hardcoded fileno's to work in colab notebooks. Closes #796 Closes #729 2024-04-30 15:45:34 -04:00
llama.py fix: wrong parameter for flash attention in pickle __getstate__ 2024-04-30 09:32:47 -04:00
llama_cache.py Move cache classes to llama_cache submodule. 2024-01-17 09:09:12 -05:00
llama_chat_format.py fix: Change default value of verbose in image chat format handlers to True to match Llama 2024-04-30 15:50:30 -04:00
llama_cpp.py feat: Update llama.cpp 2024-04-30 09:27:55 -04:00
llama_grammar.py fix: UTF-8 handling with grammars (#1415) 2024-04-30 14:33:23 -04:00
llama_speculative.py Add speculative decoding (#1120) 2024-01-31 14:08:14 -05:00
llama_tokenizer.py fix: LlamaHFTokenizer now receives pre_tokens 2024-02-23 12:23:24 -05:00
llama_types.py feat: Allow for possibly non-pooled embeddings (#1380) 2024-04-25 21:32:44 -04:00
llava_cpp.py feat: Generic Chat Formats, Tool Calling, and Huggingface Pull Support for Multimodal Models (Obsidian, LLaVA1.6, Moondream) (#1147) 2024-04-30 01:35:38 -04:00
py.typed Add py.typed 2023-08-11 09:58:48 +02:00
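
The flash_attn option added to the Llama params (server and llama.py entries above) and the verbose log-suppression fix (_internals.py, _utils.py) are both surfaced as constructor arguments on llama_cpp.Llama. A minimal sketch; the model path is a placeholder:

    from llama_cpp import Llama

    llm = Llama(
        model_path="models/model.Q4_K_M.gguf",  # placeholder path, substitute your own GGUF file
        flash_attn=True,   # enable flash attention in the underlying llama.cpp context
        verbose=False,     # suppress all llama.cpp log output
    )
    print(llm("Q: 2 + 2 = ", max_tokens=4)["choices"][0]["text"])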
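
The cache classes that moved into the llama_cache submodule can be attached to a model so repeated prompts reuse saved state. A sketch assuming the in-RAM cache, whose capacity_bytes argument bounds its size:

    from llama_cpp import Llama
    from llama_cpp.llama_cache import LlamaRAMCache

    llm = Llama(model_path="models/model.Q4_K_M.gguf")  # placeholder path
    llm.set_cache(LlamaRAMCache(capacity_bytes=2 << 30))  # keep up to ~2 GiB of llama state in RAM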
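
llama_grammar.py (including the UTF-8 fix from #1415) implements GBNF-constrained sampling; a grammar built from a string can be passed to a completion call. A sketch:

    from llama_cpp import Llama, LlamaGrammar

    # GBNF grammar that constrains output to the literal strings "yes" or "no"
    grammar = LlamaGrammar.from_string('root ::= "yes" | "no"')
    llm = Llama(model_path="models/model.Q4_K_M.gguf")  # placeholder path
    out = llm("Is the sky blue? Answer:", grammar=grammar, max_tokens=4)
    print(out["choices"][0]["text"])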
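
The speculative decoding support from #1120 is wired in through the draft_model argument; llama_speculative.py ships a prompt-lookup draft model. A sketch:

    from llama_cpp import Llama
    from llama_cpp.llama_speculative import LlamaPromptLookupDecoding

    llm = Llama(
        model_path="models/model.Q4_K_M.gguf",  # placeholder path
        draft_model=LlamaPromptLookupDecoding(num_pred_tokens=10),  # draft tokens by prompt lookup
    )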
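
The non-pooled embeddings change in llama_types.py (#1380) affects create_embedding: with pooling each input yields a single vector, while without pooling the "embedding" entry may instead be a list of per-token vectors. A sketch, assuming an embedding-capable GGUF model:

    from llama_cpp import Llama

    llm = Llama(model_path="models/embed-model.gguf", embedding=True)  # placeholder path
    emb = llm.create_embedding("hello world")
    print(len(emb["data"][0]["embedding"]))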