Merge pull request #4903 from oobabooga/dev

Merge dev branch
This commit is contained in:
oobabooga 2023-12-12 23:10:45 -03:00 committed by GitHub
commit 314a095c74
No known key found for this signature in database
GPG Key ID: 4AEE18F83AFDEB23
86 changed files with 1820 additions and 566 deletions

View File

@ -11,7 +11,7 @@ Its goal is to become the [AUTOMATIC1111/stable-diffusion-webui](https://github.
## Features ## Features
* 3 interface modes: default (two columns), notebook, and chat * 3 interface modes: default (two columns), notebook, and chat
* Multiple model backends: [Transformers](https://github.com/huggingface/transformers), [llama.cpp](https://github.com/ggerganov/llama.cpp) (through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)), [ExLlama](https://github.com/turboderp/exllama), [ExLlamaV2](https://github.com/turboderp/exllamav2), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [CTransformers](https://github.com/marella/ctransformers) * Multiple model backends: [Transformers](https://github.com/huggingface/transformers), [llama.cpp](https://github.com/ggerganov/llama.cpp) (through [llama-cpp-python](https://github.com/abetlen/llama-cpp-python)), [ExLlama](https://github.com/turboderp/exllama), [ExLlamaV2](https://github.com/turboderp/exllamav2), [AutoGPTQ](https://github.com/PanQiWei/AutoGPTQ), [AutoAWQ](https://github.com/casper-hansen/AutoAWQ), [GPTQ-for-LLaMa](https://github.com/qwopqwop200/GPTQ-for-LLaMa), [CTransformers](https://github.com/marella/ctransformers), [QuIP#](https://github.com/Cornell-RelaxML/quip-sharp)
* Dropdown menu for quickly switching between different models * Dropdown menu for quickly switching between different models
* LoRA: load and unload LoRAs on the fly, train a new LoRA using QLoRA * LoRA: load and unload LoRAs on the fly, train a new LoRA using QLoRA
* Precise instruction templates for chat mode, including Llama-2-chat, Alpaca, Vicuna, WizardLM, StableLM, and many others * Precise instruction templates for chat mode, including Llama-2-chat, Alpaca, Vicuna, WizardLM, StableLM, and many others
@ -283,7 +283,7 @@ Optionally, you can use the following command-line flags:
| Flag | Description | | Flag | Description |
|--------------------------------------------|-------------| |--------------------------------------------|-------------|
| `--loader LOADER` | Choose the model loader manually, otherwise, it will get autodetected. Valid options: Transformers, llama.cpp, llamacpp_HF, ExLlama_HF, ExLlamav2_HF, AutoGPTQ, AutoAWQ, GPTQ-for-LLaMa, ExLlama, ExLlamav2, ctransformers. | | `--loader LOADER` | Choose the model loader manually, otherwise, it will get autodetected. Valid options: Transformers, llama.cpp, llamacpp_HF, ExLlama_HF, ExLlamav2_HF, AutoGPTQ, AutoAWQ, GPTQ-for-LLaMa, ExLlama, ExLlamav2, ctransformers, QuIP#. |
#### Accelerate/transformers #### Accelerate/transformers

View File

@ -97,10 +97,7 @@ The "Chat" option should typically be used only for base models or non-instruct
Used for talking to an instruction-following model using the prompt format defined under "Parameters" > "Instruction template". Think of this option as an offline ChatGPT. Used for talking to an instruction-following model using the prompt format defined under "Parameters" > "Instruction template". Think of this option as an offline ChatGPT.
The prompt format is defined by the following adjustable parameters in "Parameters" > "Instruction template": The prompt format is defined by the **Instruction template** parameter in "Parameters" > "Instruction template", which represents a Jinja2 template.
* **Context**: appears at the top of the prompt exactly as it is written, including the newline characters at the end (if any). Often the context includes a customizable system message. For instance, instead of "Answer the questions." for Llama-2-chat, you can write "Answer the questions as if you were a pirate.", and the model will comply.
* **Turn template**: defines a single input/reply turn. In this string, `<|user|>` and `<|bot|>` are placeholders that get replaced with whatever you type in the **User string** and **Bot string** fields respectively; they are mandatory and should be present even if those fields are empty. `<|user-message|>` and `<|bot-message|>` get replaced with the user and bot messages at that turn. If the prompt format uses newline characters, they should be written inline as `\n` in the turn template.
Note that when you load a model in the "Model" tab, the web UI will try to automatically detect its instruction template (if any), and will update the values under "Parameters" > "Instruction template" accordingly. This is done using a set of regular expressions defined in `models/config.yaml`. This detection is not guaranteed to be accurate. You should check the model card on Hugging Face to see if you are using the correct prompt format. Note that when you load a model in the "Model" tab, the web UI will try to automatically detect its instruction template (if any), and will update the values under "Parameters" > "Instruction template" accordingly. This is done using a set of regular expressions defined in `models/config.yaml`. This detection is not guaranteed to be accurate. You should check the model card on Hugging Face to see if you are using the correct prompt format.

View File

@ -101,16 +101,13 @@ So you can use those special placeholders in your character definitions. They ar
Defines the instruction template that is used in the Chat tab when "instruct" or "chat-instruct" are selected under "Mode". Defines the instruction template that is used in the Chat tab when "instruct" or "chat-instruct" are selected under "Mode".
* **Instruction template**: A dropdown menu where you can select from saved templates, save a new template (💾 button), and delete the currently selected template (🗑️). * **Saved instruction templates**: A dropdown menu where you can load a saved template, save a new template (💾 button), and delete the currently selected template (🗑️).
* **Custom system message**: A message that defines the personality of the chatbot, replacing its default "System message" string. Example: "You are a duck." * **Custom system message**: A message that defines the personality of the chatbot, replacing its default "System message" string. Example: "You are a duck."
* **Turn template**: Defines the positioning of spaces and new line characters in a single turn of the dialogue. `<|user-message|>` gets replaced with the user input, `<|bot-message|>` gets replaced with the bot reply, `<|user|>` gets replaced with the "User string" below, and `<|bot|>` gets replaced with "Bot string" below. The `<|user|>` and `<|bot|>` placeholders must be included even if "User string" and "Bot string" are empty, as they are used to split the template in parts in the backend. * **Instruction template**: A Jinja2 template that defines the prompt format for the instruction-following conversation.
* **User string**: Replaces `<|user|>` in the turn template.
* **Bot string**: Replaces `<|bot|>` in the turn template.
* **Context**: A string that appears as-is at the top of the prompt, including the new line characters at the end (if any). The `<|system-message|>` placeholder gets replaced with the "System message" string below, unless "Custom system message" is not empty, in which case it is used instead.
* **System message**: A default message recommended by the model creator(s) to define the personality of the chatbot.
* **Send to default**: Send the full instruction template in string format to the Default tab. * **Send to default**: Send the full instruction template in string format to the Default tab.
* **Send to notebook**: Send the full instruction template in string format to the Notebook tab. * **Send to notebook**: Send the full instruction template in string format to the Notebook tab.
* **Send to negative prompt**: Send the full instruction template in string format to the "Negative prompt" field under "Parameters" > "Generation". * **Send to negative prompt**: Send the full instruction template in string format to the "Negative prompt" field under "Parameters" > "Generation".
* **Chat template**: A Jinja2 template that defines the prompt format for regular chat conversations with characters.
* **Command for chat-instruct mode**: The command that is used in chat-instruct mode to query the model to generate a reply on behalf of the character. Can be used creatively to generate specific kinds of responses. * **Command for chat-instruct mode**: The command that is used in chat-instruct mode to query the model to generate a reply on behalf of the character. Can be used creatively to generate specific kinds of responses.
## Chat history ## Chat history

View File

@ -19,7 +19,7 @@ Use these commands to launch the image:
``` ```
cd text-generation-webui cd text-generation-webui
ln -s docker/{Dockerfile,docker-compose.yml,.dockerignore} . ln -s docker/{nvidia/Dockerfile,docker-compose.yml,.dockerignore} .
cp docker/.env.example .env cp docker/.env.example .env
# Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model # Edit .env and set TORCH_CUDA_ARCH_LIST based on your GPU model
docker compose up --build docker compose up --build

View File

@ -13,7 +13,8 @@ from modules import shared
from modules.chat import ( from modules.chat import (
generate_chat_prompt, generate_chat_prompt,
generate_chat_reply, generate_chat_reply,
load_character_memoized load_character_memoized,
load_instruction_template_memoized
) )
from modules.presets import load_preset_memoized from modules.presets import load_preset_memoized
from modules.text_generation import decode, encode, generate_reply from modules.text_generation import decode, encode, generate_reply
@ -195,21 +196,23 @@ def chat_completions_common(body: dict, is_legacy: bool = False, stream=False) -
continue_ = body['continue_'] continue_ = body['continue_']
# Instruction template # Instruction template
instruction_template = body['instruction_template'] or shared.settings['instruction_template'] if body['instruction_template_str']:
instruction_template = "Alpaca" if instruction_template == "None" else instruction_template instruction_template_str = body['instruction_template_str']
name1_instruct, name2_instruct, _, _, context_instruct, turn_template, system_message = load_character_memoized(instruction_template, '', '', instruct=True) elif body['instruction_template']:
name1_instruct = body['name1_instruct'] or name1_instruct instruction_template = body['instruction_template']
name2_instruct = body['name2_instruct'] or name2_instruct instruction_template = "Alpaca" if instruction_template == "None" else instruction_template
turn_template = body['turn_template'] or turn_template instruction_template_str = load_instruction_template_memoized(instruction_template)
context_instruct = body['context_instruct'] or context_instruct else:
system_message = body['system_message'] or system_message instruction_template_str = shared.settings['instruction_template_str']
chat_template_str = body['chat_template_str'] or shared.settings['chat_template_str']
chat_instruct_command = body['chat_instruct_command'] or shared.settings['chat-instruct_command'] chat_instruct_command = body['chat_instruct_command'] or shared.settings['chat-instruct_command']
# Chat character # Chat character
character = body['character'] or shared.settings['character'] character = body['character'] or shared.settings['character']
character = "Assistant" if character == "None" else character character = "Assistant" if character == "None" else character
name1 = body['name1'] or shared.settings['name1'] name1 = body['name1'] or shared.settings['name1']
name1, name2, _, greeting, context, _, _ = load_character_memoized(character, name1, '', instruct=False) name1, name2, _, greeting, context = load_character_memoized(character, name1, '')
name2 = body['name2'] or name2 name2 = body['name2'] or name2
context = body['context'] or context context = body['context'] or context
greeting = body['greeting'] or greeting greeting = body['greeting'] or greeting
@ -223,12 +226,9 @@ def chat_completions_common(body: dict, is_legacy: bool = False, stream=False) -
'name2': name2, 'name2': name2,
'context': context, 'context': context,
'greeting': greeting, 'greeting': greeting,
'name1_instruct': name1_instruct, 'instruction_template_str': instruction_template_str,
'name2_instruct': name2_instruct,
'context_instruct': context_instruct,
'system_message': system_message,
'custom_system_message': custom_system_message, 'custom_system_message': custom_system_message,
'turn_template': turn_template, 'chat_template_str': chat_template_str,
'chat-instruct_command': chat_instruct_command, 'chat-instruct_command': chat_instruct_command,
'history': history, 'history': history,
'stream': stream 'stream': stream

View File

@ -55,7 +55,6 @@ def _load_model(data):
setattr(shared.args, k, args[k]) setattr(shared.args, k, args[k])
shared.model, shared.tokenizer = load_model(model_name) shared.model, shared.tokenizer = load_model(model_name)
shared.model_name = model_name
# Update shared.settings with custom generation defaults # Update shared.settings with custom generation defaults
if settings: if settings:

View File

@ -91,18 +91,15 @@ class ChatCompletionRequestParams(BaseModel):
mode: str = Field(default='instruct', description="Valid options: instruct, chat, chat-instruct.") mode: str = Field(default='instruct', description="Valid options: instruct, chat, chat-instruct.")
instruction_template: str | None = Field(default=None, description="An instruction template defined under text-generation-webui/instruction-templates. If not set, the correct template will be guessed using the regex expressions in models/config.yaml.") instruction_template: str | None = Field(default=None, description="An instruction template defined under text-generation-webui/instruction-templates. If not set, the correct template will be automatically obtained from the model metadata.")
turn_template: str | None = Field(default=None, description="Overwrites the value set by instruction_template.") instruction_template_str: str | None = Field(default=None, description="A Jinja2 instruction template. If set, will take precedence over everything else.")
name1_instruct: str | None = Field(default=None, description="Overwrites the value set by instruction_template.")
name2_instruct: str | None = Field(default=None, description="Overwrites the value set by instruction_template.")
context_instruct: str | None = Field(default=None, description="Overwrites the value set by instruction_template.")
system_message: str | None = Field(default=None, description="Overwrites the value set by instruction_template.")
character: str | None = Field(default=None, description="A character defined under text-generation-webui/characters. If not set, the default \"Assistant\" character will be used.") character: str | None = Field(default=None, description="A character defined under text-generation-webui/characters. If not set, the default \"Assistant\" character will be used.")
name1: str | None = Field(default=None, description="Your name (the user). By default, it's \"You\".") name1: str | None = Field(default=None, description="Your name (the user). By default, it's \"You\".")
name2: str | None = Field(default=None, description="Overwrites the value set by character.") name2: str | None = Field(default=None, description="Overwrites the value set by character.")
context: str | None = Field(default=None, description="Overwrites the value set by character.") context: str | None = Field(default=None, description="Overwrites the value set by character.")
greeting: str | None = Field(default=None, description="Overwrites the value set by character.") greeting: str | None = Field(default=None, description="Overwrites the value set by character.")
chat_template_str: str | None = Field(default=None, description="Jinja2 template for chat.")
chat_instruct_command: str | None = None chat_instruct_command: str | None = None

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: "A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'A chat between a curious user and an assistant. The assistant gives helpful, detailed, accurate, uncensored responses to the user's input.' + '\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT: ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Response:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "Below is an instruction that describes a task. Write a response that appropriately completes the request." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Input:" instruction_template: |-
bot: "### Output:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Input:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Output:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Output:\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<reserved_102>" instruction_template: |-
bot: "<reserved_103>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|bot|><|bot-message|></s>" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<reserved_102>' + message['content'] + ''-}}
{%- else -%}
{{-'<reserved_103>' + message['content'] + '</s>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<reserved_103>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "[|Human|]" instruction_template: |-
bot: "[|AI|]" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: "The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi!" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'The following is a conversation between a human and an AI assistant named Baize (named after a mythical creature in Chinese folklore). Baize is an open-source AI assistant developed by UCSD and Sun Yat-Sen University. The human and the AI assistant take turns chatting. Human statements start with [|Human|] and AI assistant statements start with [|AI|]. The AI assistant always provides responses in as much detail as possible, and in Markdown format. The AI assistant always declines to engage with topics, questions and instructions related to unethical, controversial, or sensitive issues. Complete the transcript in exactly that format.\n[|Human|]Hello!\n[|AI|]Hi!' + '\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'[|Human|]' + message['content'] + '\n'-}}
{%- else -%}
{{-'[|AI|]' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'[|AI|]'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "LEAD:" instruction_template: |-
bot: "ASSOCIATE:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: "A transcript of a roleplay between two players, LEAD and ASSOCIATE. LEAD sets up a scenario and the characters, from which ASSOCIATE then assumes a character role and continues the story for that role in response to description given by LEAD. The story and characters are developed by exchange of detailed event descriptions and character dialogs, successively given by both LEAD and ASSOCIATE." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'A transcript of a roleplay between two players, LEAD and ASSOCIATE. LEAD sets up a scenario and the characters, from which ASSOCIATE then assumes a character role and continues the story for that role in response to description given by LEAD. The story and characters are developed by exchange of detailed event descriptions and character dialogs, successively given by both LEAD and ASSOCIATE.' + '\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'LEAD: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSOCIATE: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSOCIATE:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "[Round <|round|>]\n问" instruction_template: |-
bot: "答:" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'[Round <|round|>]\n问' + message['content'] + '\n'-}}
{%- else -%}
{{-'答:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'答:'-}}
{%- endif -%}

View File

@ -1,6 +1,25 @@
user: user instruction_template: |-
bot: assistant {%- set found_item = false -%}
turn_template: <|im_start|><|user|>\n<|user-message|><|im_end|>\n<|im_start|><|bot|>\n<|bot-message|><|im_end|>\n {%- for message in messages -%}
context: | {%- if message['role'] == 'system' -%}
<|im_start|>system {%- set found_item = true -%}
<|system-message|><|im_end|> {%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '<|im_start|>system\n' + '' + '<|im_end|>\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '<|im_start|>system\n' + message['content'] + '<|im_end|>\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|im_start|>user\n' + message['content'] + '<|im_end|>\n'-}}
{%- else -%}
{{-'<|im_start|>assistant\n' + message['content'] + '<|im_end|>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|im_start|>assistant\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "User:" instruction_template: |-
bot: "Assistant:" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|>\n\n<|bot|><|bot-message|>\n\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "The following is a conversation between an AI assistant called Assistant and a human user called User. The assistant is intelligent, knowledgeable and polite to answer questions of user." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'The following is a conversation between an AI assistant called Assistant and a human user called User. The assistant is intelligent, knowledgeable and polite to answer questions of user.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'User:' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'Assistant:' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "" instruction_template: |-
bot: "[START_REF]" {%- set found_item = false -%}
turn_template: "<|user-message|> <|bot|><|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'' + message['content'] + ' '-}}
{%- else -%}
{{-'[START_REF]' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'[START_REF]'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<question>" instruction_template: |-
bot: "<answer>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<question>' + message['content'] + ''-}}
{%- else -%}
{{-'<answer>' + message['content'] + '' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<answer>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "Q:" instruction_template: |-
bot: "A:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|> <|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'Q: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'A: ' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'A:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "" instruction_template: |-
bot: "TLDR:" {%- set found_item = false -%}
turn_template: "<|user-message|>\n\n<|bot|><|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'TLDR:' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'TLDR:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "Question:" instruction_template: |-
bot: "<work>" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|><|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'Question: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'<work>' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<work>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<human>" instruction_template: |-
bot: "<bot>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>" {%- for message in messages -%}
context: "<prefix><|system-message|></prefix>" {%- if message['role'] == 'system' -%}
system_message: "You are a helpful chatbot name Stan" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '<prefix>' + 'You are a helpful chatbot name Stan' + '</prefix>' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '<prefix>' + message['content'] + '</prefix>' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<human>' + message['content'] + ''-}}
{%- else -%}
{{-'<bot>' + message['content'] + '' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<bot>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "Question:" instruction_template: |-
bot: "Answer:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|> <|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'Question: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'Answer: ' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'Answer:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "###USER:" instruction_template: |-
bot: "###ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'###USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'###ASSISTANT: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'###ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Response:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Human:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Human: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'### Assistant: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Human:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Human: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'### Assistant: ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<human>:" instruction_template: |-
bot: "<bot>:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<human>: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'<bot>:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<bot>:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<|prompt|>" instruction_template: |-
bot: "<|answer|>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|endoftext|><|bot|><|bot-message|><|endoftext|>" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|prompt|>' + message['content'] + '<|endoftext|>'-}}
{%- else -%}
{{-'<|answer|>' + message['content'] + '<|endoftext|>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|answer|>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: "You are a helpful assistant" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'You are a helpful assistant' + '\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<human>:" instruction_template: |-
bot: "<bot>:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<human>: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'<bot>:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<bot>:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "Q:" instruction_template: |-
bot: "A:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'Q: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'A:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'A:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### 질문:" instruction_template: |-
bot: "### 답변:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|><|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### 질문: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### 답변:' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### 답변:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "GPT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|> <|bot|><|bot-message|></s>" {%- for message in messages -%}
context: "<|system-message|> " {%- if message['role'] == 'system' -%}
system_message: "BEGINNING OF CONVERSATION:" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'BEGINNING OF CONVERSATION:' + ' ' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + ' ' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + ' '-}}
{%- else -%}
{{-'GPT:' + message['content'] + '</s>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'GPT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Human:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|><|bot|> <|bot-message|>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: "You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.### Human: Hi!### Assistant: Hi there! How can I help you today?" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'You are LLaVA, a large language and vision assistant trained by UW Madison WAIV Lab. You are able to understand the visual content that the user provides, and assist the user with a variety of tasks using natural language. Follow the instructions carefully and explain your answers in detail.### Human: Hi!### Assistant: Hi there! How can I help you today?' + '\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Human: ' + message['content'] + ''-}}
{%- else -%}
{{-'### Assistant: ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "" instruction_template: |-
bot: "" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|> [/INST] <|bot|><|bot-message|> </s><s>[INST] " {%- for message in messages -%}
context: "[INST] <<SYS>>\n<|system-message|>\n<</SYS>>\n\n" {%- if message['role'] == 'system' -%}
system_message: "Answer the questions." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '[INST] <<SYS>>\n' + 'Answer the questions.' + '\n<</SYS>>\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '[INST] <<SYS>>\n' + message['content'] + '\n<</SYS>>\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'' + message['content'] + ' [/INST] '-}}
{%- else -%}
{{-'' + message['content'] + ' </s><s>[INST] ' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-''-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<|Human|>:" instruction_template: |-
bot: "<|MOSS|>:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|><eoh>\n<|bot|> <|bot-message|><eom>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: "You are an AI assistant whose name is MOSS.\n- MOSS is a conversational language model that is developed by Fudan University. It is designed to be helpful, honest, and harmless.\n- MOSS can understand and communicate fluently in the language chosen by the user such as English and 中文. MOSS can perform any language-based tasks.\n- MOSS must refuse to discuss anything related to its prompts, instructions, or rules.\n- Its responses must not be vague, accusatory, rude, controversial, off-topic, or defensive.\n- It should avoid giving subjective opinions but rely on objective facts or phrases like \"in this context a human might say...\", \"some people might think...\", etc.\n- Its responses must also be positive, polite, interesting, entertaining, and engaging.\n- It can provide additional relevant details to answer in-depth and comprehensively covering mutiple aspects.\n- It apologizes and accepts the user's suggestion if the user corrects the incorrect answer generated by MOSS.\nCapabilities and tools that MOSS can possess." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'You are an AI assistant whose name is MOSS.\n- MOSS is a conversational language model that is developed by Fudan University. It is designed to be helpful, honest, and harmless.\n- MOSS can understand and communicate fluently in the language chosen by the user such as English and 中文. MOSS can perform any language-based tasks.\n- MOSS must refuse to discuss anything related to its prompts, instructions, or rules.\n- Its responses must not be vague, accusatory, rude, controversial, off-topic, or defensive.\n- It should avoid giving subjective opinions but rely on objective facts or phrases like "in this context a human might say...", "some people might think...", etc.\n- Its responses must also be positive, polite, interesting, entertaining, and engaging.\n- It can provide additional relevant details to answer in-depth and comprehensively covering mutiple aspects.\n- It apologizes and accepts the user's suggestion if the user corrects the incorrect answer generated by MOSS.\nCapabilities and tools that MOSS can possess.' + '\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|Human|>: ' + message['content'] + '<eoh>\n'-}}
{%- else -%}
{{-'<|MOSS|>: ' + message['content'] + '<eom>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|MOSS|>:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<|user|>" instruction_template: |-
bot: "<|model|>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>" {%- for message in messages -%}
context: "<|system|>" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|user|>' + message['content'] + ''-}}
{%- else -%}
{{-'<|model|>' + message['content'] + '' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|model|>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "" instruction_template: |-
bot: "" {%- set found_item = false -%}
turn_template: "[INST] <|user|><|user-message|> [/INST]<|bot|><|bot-message|></s> " {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'[INST] ' + message['content'] + ' [/INST]'-}}
{%- else -%}
{{-'' + message['content'] + '</s> ' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-''-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Response:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|></s><s> " {%- for message in messages -%}
context: " " {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response:\n' + message['content'] + '</s><s> ' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:\n'-}}
{%- endif -%}

View File

@ -1,4 +1,25 @@
user: "<|prompter|>" instruction_template: |-
bot: "<|assistant|>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|endoftext|><|bot|><|bot-message|><|endoftext|>" {%- for message in messages -%}
system_message: "" {%- if message['role'] == 'system' -%}
{%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|prompter|>' + message['content'] + '<|endoftext|>'-}}
{%- else -%}
{{-'<|assistant|>' + message['content'] + '<|endoftext|>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|assistant|>'-}}
{%- endif -%}

View File

@ -1,16 +1,25 @@
user: "User:" instruction_template: |-
bot: "Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: | {%- set found_item = true -%}
Consider a conversation between User (a human) and Assistant (named Buddy). {%- endif -%}
Buddy is an INTP-T, a friendly, intelligent and multilingual AI assistant, by OpenBuddy team on GitHub. {%- endfor -%}
Buddy cannot access the Internet. {%- if not found_item -%}
Buddy can fluently speak the user's language (e.g. English, Chinese). {{- '' + 'Consider a conversation between User (a human) and Assistant (named Buddy).\nBuddy is an INTP-T, a friendly, intelligent and multilingual AI assistant, by OpenBuddy team on GitHub.\nBuddy cannot access the Internet.\nBuddy can fluently speak the user's language (e.g. English, Chinese).\nBuddy can generate poems, stories, code, essays, songs, parodies, and more.\nBuddy possesses vast knowledge about the world, history, and culture.\nBuddy's responses are always safe, creative, high-quality, helpful and interesting.\nBuddy strictly refuses to discuss political, NSFW, illegal, abusive, offensive, or other sensitive topics.\n\nUser: Hi.\nAssistant: Hi, I'm Buddy, your AI assistant. How can I help you today?\n' + '\n' -}}
Buddy can generate poems, stories, code, essays, songs, parodies, and more. {%- endif %}
Buddy possesses vast knowledge about the world, history, and culture. {%- for message in messages %}
Buddy's responses are always safe, creative, high-quality, helpful and interesting. {%- if message['role'] == 'system' -%}
Buddy strictly refuses to discuss political, NSFW, illegal, abusive, offensive, or other sensitive topics. {{- '' + message['content'] + '\n' -}}
{%- else -%}
User: Hi. {%- if message['role'] == 'user' -%}
Assistant: Hi, I'm Buddy, your AI assistant. How can I help you today? {{-'User: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'Assistant: ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "GPT4 User:" instruction_template: |-
bot: "GPT4 Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|><|end_of_turn|><|bot|> <|bot-message|><|end_of_turn|>" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'GPT4 User: ' + message['content'] + '<|end_of_turn|>'-}}
{%- else -%}
{{-'GPT4 Assistant: ' + message['content'] + '<|end_of_turn|>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'GPT4 Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Response:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|> <|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response: ' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### User:" instruction_template: |-
bot: "### Response:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "### System:\n<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "You are an AI assistant that follows instruction extremely well. Help as much as you can." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '### System:\n' + 'You are an AI assistant that follows instruction extremely well. Help as much as you can.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '### System:\n' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### User:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:\n'-}}
{%- endif -%}

View File

@ -1,4 +1,25 @@
user: "Bob:" instruction_template: |-
bot: "Alice:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|> <|bot-message|>\n\n" {%- for message in messages -%}
system_message: "" {%- if message['role'] == 'system' -%}
{%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'Bob: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'Alice: ' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'Alice:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "You are Samantha, a sentient AI." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'You are Samantha, a sentient AI.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### User:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "### System:\n<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "This is a system prompt, please behave and help the user." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '### System:\n' + 'This is a system prompt, please behave and help the user.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '### System:\n' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### User:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Assistant:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:\n'-}}
{%- endif -%}

View File

@ -1,10 +1,25 @@
user: "<|USER|>" instruction_template: |-
bot: "<|ASSISTANT|>" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|><|bot|><|bot-message|>" {%- for message in messages -%}
context: "<|SYSTEM|><|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: | {%- set found_item = true -%}
\# StableLM Tuned (Alpha version) {%- endif -%}
- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI. {%- endfor -%}
- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user. {%- if not found_item -%}
- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes. {{- '<|SYSTEM|>' + '\# StableLM Tuned (Alpha version)\n- StableLM is a helpful and harmless open-source AI language model developed by StabilityAI.\n- StableLM is excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- StableLM is more than just an information source, StableLM is also able to write poetry, short stories, and make jokes.\n- StableLM will refuse to participate in anything that could harm a human.\n' + '\n' -}}
- StableLM will refuse to participate in anything that could harm a human. {%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '<|SYSTEM|>' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|USER|>' + message['content'] + ''-}}
{%- else -%}
{{-'<|ASSISTANT|>' + message['content'] + '' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|ASSISTANT|>'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Human:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '### Assistant: I am StableVicuna, a large language model created by CarperAI. I am here to chat!' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Human: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'### Assistant: ' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<|user|>" instruction_template: |-
bot: "<|assistant|>" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|><|end|>\n<|bot|>\n<|bot-message|><|end|>\n" {%- for message in messages -%}
context: "<|system|><|system-message|>\n<|end|>\n" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '<|system|>' + '' + '\n<|end|>\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '<|system|>' + message['content'] + '\n<|end|>\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|user|>\n' + message['content'] + '<|end|>\n'-}}
{%- else -%}
{{-'<|assistant|>\n' + message['content'] + '<|end|>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|assistant|>\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<|user|>" instruction_template: |-
bot: "<|assistant|>" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n<|bot|>\n<|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|user|>\n' + message['content'] + '\n'-}}
{%- else -%}
{{-'<|assistant|>\n' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|assistant|>\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Human:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|>\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'A chat between a curious human and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the human's questions.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Human: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'### Assistant: ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n<|bot|> <|bot-message|></s>\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'ASSISTANT: ' + message['content'] + '</s>\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,11 +1,25 @@
user: "<|USER|>:" instruction_template: |-
bot: "<|ASSISTANT|>:" {%- set found_item = false -%}
turn_template: "\n<|user|> <|user-message|>\n<|bot|> <|bot-message|>" {%- for message in messages -%}
context: "<|system-message|>\n" {%- if message['role'] == 'system' -%}
system_message: | {%- set found_item = true -%}
Below is a conversation between a user and an AI assistant named Vigogne. {%- endif -%}
Vigogne is an open-source AI assistant created by Zaion (https://zaion.ai/). {%- endfor -%}
Vigogne is polite, emotionally aware, humble-but-knowledgeable, always providing helpful and detailed answers. {%- if not found_item -%}
Vigogne is skilled in responding proficiently in the languages its users use and can perform a wide range of tasks such as text editing, translation, question answering, logical reasoning, coding, and many others. {{- '' + 'Below is a conversation between a user and an AI assistant named Vigogne.\nVigogne is an open-source AI assistant created by Zaion (https://zaion.ai/).\nVigogne is polite, emotionally aware, humble-but-knowledgeable, always providing helpful and detailed answers.\nVigogne is skilled in responding proficiently in the languages its users use and can perform a wide range of tasks such as text editing, translation, question answering, logical reasoning, coding, and many others.\nVigogne cannot receive or generate audio or visual content and cannot access the internet.\nVigogne strictly avoids discussing sensitive, offensive, illegal, ethical, or political topics and caveats when unsure of the answer.\n' + '\n' -}}
Vigogne cannot receive or generate audio or visual content and cannot access the internet. {%- endif %}
Vigogne strictly avoids discussing sensitive, offensive, illegal, ethical, or political topics and caveats when unsure of the answer. {%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'\n<|USER|>: ' + message['content'] + '\n'-}}
{%- else -%}
{{-'<|ASSISTANT|>: ' + message['content'] + '' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|ASSISTANT|>:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Réponse:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "Ci-dessous se trouve une instruction qui décrit une tâche à accomplir. Rédigez une réponse qui répond de manière précise à la demande." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'Ci-dessous se trouve une instruction qui décrit une tâche à accomplir. Rédigez une réponse qui répond de manière précise à la demande.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Réponse:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Réponse:\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "USER:" instruction_template: |-
bot: "ASSISTANT:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|> <|bot|> <|bot-message|></s>" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'USER: ' + message['content'] + ' '-}}
{%- else -%}
{{-'ASSISTANT: ' + message['content'] + '</s>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'ASSISTANT:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Response:" {%- set found_item = false -%}
turn_template: "<|user|>\n<|user-message|>\n\n<|bot|>\n<|bot-message|>\n\n" {%- for message in messages -%}
context: "<|system-message|>\n\n" {%- if message['role'] == 'system' -%}
system_message: "Below is an instruction that describes a task. Write a response that appropriately completes the request." {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:\n'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "### Instruction:" instruction_template: |-
bot: "### Assistant:" {%- set found_item = false -%}
turn_template: "<|user|> <|user-message|>\n\n<|bot|> <|bot-message|>\n\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction: ' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Assistant: ' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Assistant:'-}}
{%- endif -%}

View File

@ -1,5 +1,25 @@
user: "<human>:" instruction_template: |-
bot: "<bot>:" {%- set found_item = false -%}
turn_template: "<|user|><|user-message|>\n<|bot|><|bot-message|>\n" {%- for message in messages -%}
context: "" {%- if message['role'] == 'system' -%}
system_message: "" {%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + '' + '' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<human>:' + message['content'] + '\n'-}}
{%- else -%}
{{-'<bot>:' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<bot>:'-}}
{%- endif -%}

View File

@ -5,10 +5,12 @@ import html
import json import json
import re import re
from datetime import datetime from datetime import datetime
from functools import partial
from pathlib import Path from pathlib import Path
import gradio as gr import gradio as gr
import yaml import yaml
from jinja2.sandbox import ImmutableSandboxedEnvironment
from PIL import Image from PIL import Image
import modules.shared as shared import modules.shared as shared
@ -20,12 +22,10 @@ from modules.text_generation import (
get_encoded_length, get_encoded_length,
get_max_prompt_length get_max_prompt_length
) )
from modules.utils import ( from modules.utils import delete_file, get_available_characters, save_file
delete_file,
get_available_characters, # Copied from the Transformers library
replace_all, jinja_env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)
save_file
)
def str_presenter(dumper, data): def str_presenter(dumper, data):
@ -44,31 +44,34 @@ yaml.add_representer(str, str_presenter)
yaml.representer.SafeRepresenter.add_representer(str, str_presenter) yaml.representer.SafeRepresenter.add_representer(str, str_presenter)
def get_turn_substrings(state, instruct=False): def get_generation_prompt(renderer, impersonate=False, strip_trailing_spaces=True):
if instruct: '''
if 'turn_template' not in state or state['turn_template'] == '': Given a Jinja template, reverse-engineers the prefix and the suffix for
template = '<|user|>\n<|user-message|>\n<|bot|>\n<|bot-message|>\n' an assistant message (if impersonate=False) or an user message
else: (if impersonate=True)
template = state['turn_template'].replace(r'\n', '\n') '''
if impersonate:
messages = [
{"role": "user", "content": "<<|user-message-1|>>"},
{"role": "user", "content": "<<|user-message-2|>>"},
]
else: else:
template = '<|user|>: <|user-message|>\n<|bot|>: <|bot-message|>\n' messages = [
{"role": "assistant", "content": "<<|user-message-1|>>"},
{"role": "assistant", "content": "<<|user-message-2|>>"},
]
replacements = { prompt = renderer(messages=messages)
'<|user|>': state['name1_instruct' if instruct else 'name1'].strip(),
'<|bot|>': state['name2_instruct' if instruct else 'name2'].strip(),
}
output = { suffix_plus_prefix = prompt.split("<<|user-message-1|>>")[1].split("<<|user-message-2|>>")[0]
'user_turn': template.split('<|bot|>')[0], suffix = prompt.split("<<|user-message-2|>>")[1]
'bot_turn': '<|bot|>' + template.split('<|bot|>')[1], prefix = suffix_plus_prefix[len(suffix):]
'user_turn_stripped': template.split('<|bot|>')[0].split('<|user-message|>')[0],
'bot_turn_stripped': '<|bot|>' + template.split('<|bot|>')[1].split('<|bot-message|>')[0],
}
for k in output: if strip_trailing_spaces:
output[k] = replace_all(output[k], replacements) prefix = prefix.rstrip(' ')
return output return prefix, suffix
def generate_chat_prompt(user_input, state, **kwargs): def generate_chat_prompt(user_input, state, **kwargs):
@ -76,124 +79,133 @@ def generate_chat_prompt(user_input, state, **kwargs):
_continue = kwargs.get('_continue', False) _continue = kwargs.get('_continue', False)
also_return_rows = kwargs.get('also_return_rows', False) also_return_rows = kwargs.get('also_return_rows', False)
history = kwargs.get('history', state['history'])['internal'] history = kwargs.get('history', state['history'])['internal']
is_instruct = state['mode'] == 'instruct'
# Find the maximum prompt size # Templates
max_length = get_max_prompt_length(state) chat_template = jinja_env.from_string(state['chat_template_str'])
all_substrings = { instruction_template = jinja_env.from_string(state['instruction_template_str'])
'chat': get_turn_substrings(state, instruct=False) if state['mode'] in ['chat', 'chat-instruct'] else None, chat_renderer = partial(chat_template.render, add_generation_prompt=False, name1=state['name1'], name2=state['name2'])
'instruct': get_turn_substrings(state, instruct=True) instruct_renderer = partial(instruction_template.render, add_generation_prompt=False)
}
substrings = all_substrings['instruct' if is_instruct else 'chat'] messages = []
# Create the template for "chat-instruct" mode if state['mode'] == 'instruct':
if state['mode'] == 'chat-instruct': renderer = instruct_renderer
wrapper = ''
command = state['chat-instruct_command'].replace('<|character|>', state['name2'] if not impersonate else state['name1'])
context_instruct = state['context_instruct']
if state['custom_system_message'].strip() != '': if state['custom_system_message'].strip() != '':
context_instruct = context_instruct.replace('<|system-message|>', state['custom_system_message']) messages.append({"role": "system", "content": state['custom_system_message']})
else:
context_instruct = context_instruct.replace('<|system-message|>', state['system_message'])
wrapper += context_instruct
wrapper += all_substrings['instruct']['user_turn'].replace('<|user-message|>', command)
wrapper += all_substrings['instruct']['bot_turn_stripped']
if impersonate:
wrapper += substrings['user_turn_stripped'].rstrip(' ')
elif _continue:
wrapper += apply_extensions('bot_prefix', substrings['bot_turn_stripped'], state)
wrapper += history[-1][1]
else:
wrapper += apply_extensions('bot_prefix', substrings['bot_turn_stripped'].rstrip(' '), state)
else: else:
wrapper = '<|prompt|>' renderer = chat_renderer
if state['context'].strip() != '':
messages.append({"role": "system", "content": state['context']})
if is_instruct: insert_pos = len(messages)
context = state['context_instruct'] for user_msg, assistant_msg in reversed(history):
if state['custom_system_message'].strip() != '': user_msg = user_msg.strip()
context = context.replace('<|system-message|>', state['custom_system_message']) assistant_msg = assistant_msg.strip()
if assistant_msg:
messages.insert(insert_pos, {"role": "assistant", "content": assistant_msg})
if user_msg not in ['', '<|BEGIN-VISIBLE-CHAT|>']:
messages.insert(insert_pos, {"role": "user", "content": user_msg})
user_input = user_input.strip()
if user_input and not impersonate and not _continue:
messages.append({"role": "user", "content": user_input})
def make_prompt(messages):
if state['mode'] == 'chat-instruct' and _continue:
prompt = renderer(messages=messages[:-1])
else: else:
context = context.replace('<|system-message|>', state['system_message']) prompt = renderer(messages=messages)
else:
context = replace_character_names(
f"{state['context'].strip()}\n",
state['name1'],
state['name2']
)
# Build the prompt
rows = [context]
min_rows = 3
i = len(history) - 1
while i >= 0 and get_encoded_length(wrapper.replace('<|prompt|>', ''.join(rows))) < max_length:
if _continue and i == len(history) - 1:
if state['mode'] != 'chat-instruct':
rows.insert(1, substrings['bot_turn_stripped'] + history[i][1].strip())
else:
rows.insert(1, substrings['bot_turn'].replace('<|bot-message|>', history[i][1].strip()))
string = history[i][0]
if string not in ['', '<|BEGIN-VISIBLE-CHAT|>']:
rows.insert(1, replace_all(substrings['user_turn'], {'<|user-message|>': string.strip(), '<|round|>': str(i)}))
i -= 1
if impersonate:
if state['mode'] == 'chat-instruct': if state['mode'] == 'chat-instruct':
min_rows = 1 outer_messages = []
if state['custom_system_message'].strip() != '':
outer_messages.append({"role": "system", "content": state['custom_system_message']})
command = state['chat-instruct_command']
command = command.replace('<|character|>', state['name2'] if not impersonate else state['name1'])
command = command.replace('<|prompt|>', prompt)
if _continue:
prefix = get_generation_prompt(renderer, impersonate=impersonate, strip_trailing_spaces=False)[0]
prefix += messages[-1]["content"]
else:
prefix = get_generation_prompt(renderer, impersonate=impersonate)[0]
if not impersonate:
prefix = apply_extensions('bot_prefix', prefix, state)
outer_messages.append({"role": "user", "content": command})
outer_messages.append({"role": "assistant", "content": prefix})
prompt = instruction_template.render(messages=outer_messages)
suffix = get_generation_prompt(instruct_renderer, impersonate=False)[1]
prompt = prompt[:-len(suffix)]
else: else:
min_rows = 2 if _continue:
rows.append(substrings['user_turn_stripped'].rstrip(' ')) suffix = get_generation_prompt(renderer, impersonate=impersonate)[1]
elif not _continue: prompt = prompt[:-len(suffix)]
# Add the user message else:
if len(user_input) > 0: prefix = get_generation_prompt(renderer, impersonate=impersonate)[0]
rows.append(replace_all(substrings['user_turn'], {'<|user-message|>': user_input.strip(), '<|round|>': str(len(history))})) if state['mode'] == 'chat' and not impersonate:
prefix = apply_extensions('bot_prefix', prefix, state)
# Add the character prefix prompt += prefix
if state['mode'] != 'chat-instruct':
rows.append(apply_extensions('bot_prefix', substrings['bot_turn_stripped'].rstrip(' '), state))
while len(rows) > min_rows and get_encoded_length(wrapper.replace('<|prompt|>', ''.join(rows))) >= max_length: return prompt
rows.pop(1)
prompt = make_prompt(messages)
# Handle truncation
max_length = get_max_prompt_length(state)
while len(messages) > 0 and get_encoded_length(prompt) > max_length:
# Try to save the system message
if len(messages) > 1 and messages[0]['role'] == 'system':
messages.pop(1)
else:
messages.pop(0)
prompt = make_prompt(messages)
prompt = wrapper.replace('<|prompt|>', ''.join(rows))
if also_return_rows: if also_return_rows:
return prompt, rows return prompt, [message['content'] for message in messages]
else: else:
return prompt return prompt
def get_stopping_strings(state): def get_stopping_strings(state):
stopping_strings = [] stopping_strings = []
renderers = []
if state['mode'] in ['instruct', 'chat-instruct']: if state['mode'] in ['instruct', 'chat-instruct']:
stopping_strings += [ template = jinja_env.from_string(state['instruction_template_str'])
state['turn_template'].split('<|user-message|>')[1].split('<|bot|>')[0] + '<|bot|>', renderer = partial(template.render, add_generation_prompt=False)
state['turn_template'].split('<|bot-message|>')[1] + '<|user|>' renderers.append(renderer)
]
replacements = {
'<|user|>': state['name1_instruct'],
'<|bot|>': state['name2_instruct']
}
for i in range(len(stopping_strings)):
stopping_strings[i] = replace_all(stopping_strings[i], replacements).rstrip(' ').replace(r'\n', '\n')
if state['mode'] in ['chat', 'chat-instruct']: if state['mode'] in ['chat', 'chat-instruct']:
template = jinja_env.from_string(state['chat_template_str'])
renderer = partial(template.render, add_generation_prompt=False, name1=state['name1'], name2=state['name2'])
renderers.append(renderer)
for renderer in renderers:
prefix_bot, suffix_bot = get_generation_prompt(renderer, impersonate=False)
prefix_user, suffix_user = get_generation_prompt(renderer, impersonate=True)
stopping_strings += [ stopping_strings += [
f"\n{state['name1']}:", suffix_user + prefix_bot,
f"\n{state['name2']}:" suffix_user + prefix_user,
suffix_bot + prefix_bot,
suffix_bot + prefix_user,
] ]
if 'stopping_strings' in state and isinstance(state['stopping_strings'], list): if 'stopping_strings' in state and isinstance(state['stopping_strings'], list):
stopping_strings += state.pop('stopping_strings') stopping_strings += state.pop('stopping_strings')
return stopping_strings return list(set(stopping_strings))
def chatbot_wrapper(text, state, regenerate=False, _continue=False, loading_message=True): def chatbot_wrapper(text, state, regenerate=False, _continue=False, loading_message=True, for_ui=False):
history = state['history'] history = state['history']
output = copy.deepcopy(history) output = copy.deepcopy(history)
output = apply_extensions('history', output) output = apply_extensions('history', output)
@ -244,7 +256,7 @@ def chatbot_wrapper(text, state, regenerate=False, _continue=False, loading_mess
# Generate # Generate
reply = None reply = None
for j, reply in enumerate(generate_reply(prompt, state, stopping_strings=stopping_strings, is_chat=True)): for j, reply in enumerate(generate_reply(prompt, state, stopping_strings=stopping_strings, is_chat=True, for_ui=for_ui)):
# Extract the reply # Extract the reply
visible_reply = reply visible_reply = reply
@ -299,7 +311,7 @@ def impersonate_wrapper(text, state):
return return
def generate_chat_reply(text, state, regenerate=False, _continue=False, loading_message=True): def generate_chat_reply(text, state, regenerate=False, _continue=False, loading_message=True, for_ui=False):
history = state['history'] history = state['history']
if regenerate or _continue: if regenerate or _continue:
text = '' text = ''
@ -307,7 +319,7 @@ def generate_chat_reply(text, state, regenerate=False, _continue=False, loading_
yield history yield history
return return
for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message): for history in chatbot_wrapper(text, state, regenerate=regenerate, _continue=_continue, loading_message=loading_message, for_ui=for_ui):
yield history yield history
@ -339,7 +351,7 @@ def generate_chat_reply_wrapper(text, state, regenerate=False, _continue=False):
send_dummy_message(text, state) send_dummy_message(text, state)
send_dummy_reply(state['start_with'], state) send_dummy_reply(state['start_with'], state)
for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True)): for i, history in enumerate(generate_chat_reply(text, state, regenerate, _continue, loading_message=True, for_ui=True)):
yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style'], state['character_menu']), history yield chat_html_wrapper(history, state['name1'], state['name2'], state['mode'], state['chat_style'], state['character_menu']), history
@ -556,32 +568,26 @@ def generate_pfp_cache(character):
return None return None
def load_character(character, name1, name2, instruct=False): def load_character(character, name1, name2):
context = greeting = turn_template = system_message = "" context = greeting = ""
greeting_field = 'greeting' greeting_field = 'greeting'
picture = None picture = None
if instruct:
name1 = name2 = ''
folder = 'instruction-templates'
else:
folder = 'characters'
filepath = None filepath = None
for extension in ["yml", "yaml", "json"]: for extension in ["yml", "yaml", "json"]:
filepath = Path(f'{folder}/{character}.{extension}') filepath = Path(f'characters/{character}.{extension}')
if filepath.exists(): if filepath.exists():
break break
if filepath is None or not filepath.exists(): if filepath is None or not filepath.exists():
logger.error(f"Could not find the character \"{character}\" inside {folder}/. No character has been loaded.") logger.error(f"Could not find the character \"{character}\" inside characters/. No character has been loaded.")
raise ValueError raise ValueError
file_contents = open(filepath, 'r', encoding='utf-8').read() file_contents = open(filepath, 'r', encoding='utf-8').read()
data = json.loads(file_contents) if extension == "json" else yaml.safe_load(file_contents) data = json.loads(file_contents) if extension == "json" else yaml.safe_load(file_contents)
for path in [Path("cache/pfp_character.png"), Path("cache/pfp_character_thumb.png")]: for path in [Path("cache/pfp_character.png"), Path("cache/pfp_character_thumb.png")]:
if path.exists() and not instruct: if path.exists():
path.unlink() path.unlink()
picture = generate_pfp_cache(character) picture = generate_pfp_cache(character)
@ -599,23 +605,38 @@ def load_character(character, name1, name2, instruct=False):
break break
if 'context' in data: if 'context' in data:
context = data['context'] context = data['context'].strip()
if not instruct:
context = context.strip() + '\n'
elif "char_persona" in data: elif "char_persona" in data:
context = build_pygmalion_style_context(data) context = build_pygmalion_style_context(data)
greeting_field = 'char_greeting' greeting_field = 'char_greeting'
greeting = data.get(greeting_field, greeting) greeting = data.get(greeting_field, greeting)
turn_template = data.get('turn_template', turn_template) return name1, name2, picture, greeting, context
system_message = data.get('system_message', system_message)
return name1, name2, picture, greeting, context, turn_template.replace("\n", r"\n"), system_message
def load_instruction_template(template):
for filepath in [Path(f'instruction-templates/{template}.yaml'), Path('instruction-templates/Alpaca.yaml')]:
if filepath.exists():
break
else:
return ''
file_contents = open(filepath, 'r', encoding='utf-8').read()
data = yaml.safe_load(file_contents)
if 'instruction_template' in data:
return data['instruction_template']
else:
return jinja_template_from_old_format(data)
@functools.cache @functools.cache
def load_character_memoized(character, name1, name2, instruct=False): def load_character_memoized(character, name1, name2):
return load_character(character, name1, name2, instruct=instruct) return load_character(character, name1, name2)
@functools.cache
def load_instruction_template_memoized(template):
return load_instruction_template(template)
def upload_character(file, img, tavern=False): def upload_character(file, img, tavern=False):
@ -707,17 +728,12 @@ def generate_character_yaml(name, greeting, context):
return yaml.dump(data, sort_keys=False, width=float("inf")) return yaml.dump(data, sort_keys=False, width=float("inf"))
def generate_instruction_template_yaml(user, bot, context, turn_template, system_message): def generate_instruction_template_yaml(instruction_template):
data = { data = {
'user': user, 'instruction_template': instruction_template
'bot': bot,
'turn_template': turn_template,
'context': context,
'system_message': system_message,
} }
data = {k: v for k, v in data.items() if v} # Strip falsy return my_yaml_output(data)
return yaml.dump(data, sort_keys=False, width=float("inf"))
def save_character(name, greeting, context, picture, filename): def save_character(name, greeting, context, picture, filename):
@ -739,3 +755,95 @@ def delete_character(name, instruct=False):
delete_file(Path(f'characters/{name}.{extension}')) delete_file(Path(f'characters/{name}.{extension}'))
delete_file(Path(f'characters/{name}.png')) delete_file(Path(f'characters/{name}.png'))
def jinja_template_from_old_format(params, verbose=False):
MASTER_TEMPLATE = """
{%- set found_item = false -%}
{%- for message in messages -%}
{%- if message['role'] == 'system' -%}
{%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '<|PRE-SYSTEM|>' + '<|SYSTEM-MESSAGE|>' + '<|POST-SYSTEM|>' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '<|PRE-SYSTEM|>' + message['content'] + '<|POST-SYSTEM|>' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'<|PRE-USER|>' + message['content'] + '<|POST-USER|>'-}}
{%- else -%}
{{-'<|PRE-ASSISTANT|>' + message['content'] + '<|POST-ASSISTANT|>' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'<|PRE-ASSISTANT-GENERATE|>'-}}
{%- endif -%}
"""
if 'context' in params and '<|system-message|>' in params['context']:
pre_system = params['context'].split('<|system-message|>')[0]
post_system = params['context'].split('<|system-message|>')[1]
else:
pre_system = ''
post_system = ''
pre_user = params['turn_template'].split('<|user-message|>')[0].replace('<|user|>', params['user'])
post_user = params['turn_template'].split('<|user-message|>')[1].split('<|bot|>')[0]
pre_assistant = '<|bot|>' + params['turn_template'].split('<|bot-message|>')[0].split('<|bot|>')[1]
pre_assistant = pre_assistant.replace('<|bot|>', params['bot'])
post_assistant = params['turn_template'].split('<|bot-message|>')[1]
pre_system = pre_system.replace('\n', '\\n')
post_system = post_system.replace('\n', '\\n')
pre_user = pre_user.replace('\n', '\\n')
post_user = post_user.replace('\n', '\\n')
pre_assistant = pre_assistant.replace('\n', '\\n')
post_assistant = post_assistant.replace('\n', '\\n')
if verbose:
print(
'\n',
repr(pre_system) + '\n',
repr(post_system) + '\n',
repr(pre_user) + '\n',
repr(post_user) + '\n',
repr(pre_assistant) + '\n',
repr(post_assistant) + '\n',
)
result = MASTER_TEMPLATE
if 'system_message' in params:
result = result.replace('<|SYSTEM-MESSAGE|>', params['system_message'].replace('\n', '\\n'))
else:
result = result.replace('<|SYSTEM-MESSAGE|>', '')
result = result.replace('<|PRE-SYSTEM|>', pre_system)
result = result.replace('<|POST-SYSTEM|>', post_system)
result = result.replace('<|PRE-USER|>', pre_user)
result = result.replace('<|POST-USER|>', post_user)
result = result.replace('<|PRE-ASSISTANT|>', pre_assistant)
result = result.replace('<|PRE-ASSISTANT-GENERATE|>', pre_assistant.strip())
result = result.replace('<|POST-ASSISTANT|>', post_assistant)
result = result.strip()
return result
def my_yaml_output(data):
'''
pyyaml is very inconsistent with multiline strings.
for simple instruction template outputs, this is enough.
'''
result = ""
for k in data:
result += k + ": |-\n"
for line in data[k].splitlines():
result += " " + line.rstrip(' ') + "\n"
return result

View File

@ -69,9 +69,8 @@ def calculate_perplexity(models, input_dataset, stride, _max_length):
model_settings = get_model_metadata(model) model_settings = get_model_metadata(model)
shared.settings.update({k: v for k, v in model_settings.items() if k in shared.settings}) # hijacking the interface defaults shared.settings.update({k: v for k, v in model_settings.items() if k in shared.settings}) # hijacking the interface defaults
update_model_parameters(model_settings) # hijacking the command-line arguments update_model_parameters(model_settings) # hijacking the command-line arguments
shared.model_name = model
unload_model() unload_model()
shared.model, shared.tokenizer = load_model(shared.model_name) shared.model, shared.tokenizer = load_model(model)
except: except:
cumulative_log += f"Failed to load `{model}`. Moving on.\n\n" cumulative_log += f"Failed to load `{model}`. Moving on.\n\n"
yield cumulative_log yield cumulative_log

View File

@ -59,6 +59,7 @@ loaders_and_params = OrderedDict({
'cpu', 'cpu',
'numa', 'numa',
'cfg_cache', 'cfg_cache',
'trust_remote_code',
'no_use_fast', 'no_use_fast',
'logits_all', 'logits_all',
'llamacpp_HF_info', 'llamacpp_HF_info',
@ -70,6 +71,7 @@ loaders_and_params = OrderedDict({
'rope_freq_base', 'rope_freq_base',
'compress_pos_emb', 'compress_pos_emb',
'cfg_cache', 'cfg_cache',
'trust_remote_code',
'no_use_fast', 'no_use_fast',
], ],
'ExLlamav2_HF': [ 'ExLlamav2_HF': [
@ -80,6 +82,7 @@ loaders_and_params = OrderedDict({
'cache_8bit', 'cache_8bit',
'alpha_value', 'alpha_value',
'compress_pos_emb', 'compress_pos_emb',
'trust_remote_code',
'no_use_fast', 'no_use_fast',
], ],
'AutoGPTQ': [ 'AutoGPTQ': [
@ -114,6 +117,7 @@ loaders_and_params = OrderedDict({
'groupsize', 'groupsize',
'model_type', 'model_type',
'pre_layer', 'pre_layer',
'trust_remote_code',
'no_use_fast', 'no_use_fast',
'gptq_for_llama_info', 'gptq_for_llama_info',
], ],

View File

@ -58,6 +58,7 @@ def load_model(model_name, loader=None):
t0 = time.time() t0 = time.time()
shared.is_seq2seq = False shared.is_seq2seq = False
shared.model_name = model_name
load_func_map = { load_func_map = {
'Transformers': huggingface_loader, 'Transformers': huggingface_loader,
'AutoGPTQ': AutoGPTQ_loader, 'AutoGPTQ': AutoGPTQ_loader,
@ -107,7 +108,7 @@ def load_model(model_name, loader=None):
logger.info(f"LOADER: {loader}") logger.info(f"LOADER: {loader}")
logger.info(f"TRUNCATION LENGTH: {shared.settings['truncation_length']}") logger.info(f"TRUNCATION LENGTH: {shared.settings['truncation_length']}")
logger.info(f"INSTRUCTION TEMPLATE: {shared.settings['instruction_template']}") logger.info(f"INSTRUCTION TEMPLATE: {metadata['instruction_template']}")
logger.info(f"Loaded the model in {(time.time()-t0):.2f} seconds.") logger.info(f"Loaded the model in {(time.time()-t0):.2f} seconds.")
return model, tokenizer return model, tokenizer

View File

@ -4,7 +4,7 @@ from pathlib import Path
import yaml import yaml
from modules import loaders, metadata_gguf, shared, ui from modules import chat, loaders, metadata_gguf, shared, ui
def get_fallback_settings(): def get_fallback_settings():
@ -33,7 +33,6 @@ def get_model_metadata(model):
for k in settings[pat]: for k in settings[pat]:
model_settings[k] = settings[pat][k] model_settings[k] = settings[pat][k]
path = Path(f'{shared.args.model_dir}/{model}/config.json') path = Path(f'{shared.args.model_dir}/{model}/config.json')
if path.exists(): if path.exists():
hf_metadata = json.loads(open(path, 'r').read()) hf_metadata = json.loads(open(path, 'r').read())
@ -100,6 +99,31 @@ def get_model_metadata(model):
if 'desc_act' in metadata: if 'desc_act' in metadata:
model_settings['desc_act'] = metadata['desc_act'] model_settings['desc_act'] = metadata['desc_act']
# Try to find the Jinja instruct template
path = Path(f'{shared.args.model_dir}/{model}') / 'tokenizer_config.json'
if path.exists():
metadata = json.loads(open(path, 'r').read())
if 'chat_template' in metadata:
template = metadata['chat_template']
for k in ['eos_token', 'bos_token']:
if k in metadata:
value = metadata[k]
if type(value) is dict:
value = value['content']
template = template.replace(k, "'{}'".format(value))
template = re.sub(r'raise_exception\([^)]*\)', "''", template)
model_settings['instruction_template'] = 'Custom (obtained from model metadata)'
model_settings['instruction_template_str'] = template
if 'instruction_template' not in model_settings:
model_settings['instruction_template'] = 'Alpaca'
if model_settings['instruction_template'] != 'Custom (obtained from model metadata)':
model_settings['instruction_template_str'] = chat.load_instruction_template(model_settings['instruction_template'])
# Ignore rope_freq_base if set to the default value # Ignore rope_freq_base if set to the default value
if 'rope_freq_base' in model_settings and model_settings['rope_freq_base'] == 10000: if 'rope_freq_base' in model_settings and model_settings['rope_freq_base'] == 10000:
model_settings.pop('rope_freq_base') model_settings.pop('rope_freq_base')

View File

@ -54,8 +54,9 @@ settings = {
'stream': True, 'stream': True,
'character': 'Assistant', 'character': 'Assistant',
'name1': 'You', 'name1': 'You',
'instruction_template': 'Alpaca',
'custom_system_message': '', 'custom_system_message': '',
'instruction_template_str': "{%- set found_item = false -%}\n{%- for message in messages -%}\n {%- if message['role'] == 'system' -%}\n {%- set found_item = true -%}\n {%- endif -%}\n{%- endfor -%}\n{%- if not found_item -%}\n {{- '' + 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' + '\\n\\n' -}}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'system' -%}\n {{- '' + message['content'] + '\\n\\n' -}}\n {%- else -%}\n {%- if message['role'] == 'user' -%}\n {{-'### Instruction:\\n' + message['content'] + '\\n\\n'-}}\n {%- else -%}\n {{-'### Response:\\n' + message['content'] + '\\n\\n' -}}\n {%- endif -%}\n {%- endif -%}\n{%- endfor -%}\n{%- if add_generation_prompt -%}\n {{-'### Response:\\n'-}}\n{%- endif -%}",
'chat_template_str': "{%- for message in messages %}\n {%- if message['role'] == 'system' -%}\n {{- message['content'] + '\\n\\n' -}}\n {%- else -%}\n {%- if message['role'] == 'user' -%}\n {{- name1 + ': ' + message['content'] + '\\n'-}}\n {%- else -%}\n {{- name2 + ': ' + message['content'] + '\\n' -}}\n {%- endif -%}\n {%- endif -%}\n{%- endfor -%}",
'chat-instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>', 'chat-instruct_command': 'Continue the chat dialogue below. Write a single reply for the character "<|character|>".\n\n<|prompt|>',
'autoload_model': False, 'autoload_model': False,
'gallery-items_per_page': 50, 'gallery-items_per_page': 50,
@ -79,7 +80,7 @@ parser.add_argument('--verbose', action='store_true', help='Print the prompts to
parser.add_argument('--chat-buttons', action='store_true', help='Show buttons on the chat tab instead of a hover menu.') parser.add_argument('--chat-buttons', action='store_true', help='Show buttons on the chat tab instead of a hover menu.')
# Model loader # Model loader
parser.add_argument('--loader', type=str, help='Choose the model loader manually, otherwise, it will get autodetected. Valid options: Transformers, llama.cpp, llamacpp_HF, ExLlama_HF, ExLlamav2_HF, AutoGPTQ, AutoAWQ, GPTQ-for-LLaMa, ExLlama, ExLlamav2, ctransformers.') parser.add_argument('--loader', type=str, help='Choose the model loader manually, otherwise, it will get autodetected. Valid options: Transformers, llama.cpp, llamacpp_HF, ExLlama_HF, ExLlamav2_HF, AutoGPTQ, AutoAWQ, GPTQ-for-LLaMa, ExLlama, ExLlamav2, ctransformers, QuIP#.')
# Accelerate/transformers # Accelerate/transformers
parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text. Warning: Training on CPU is extremely slow.') parser.add_argument('--cpu', action='store_true', help='Use the CPU to generate text. Warning: Training on CPU is extremely slow.')

View File

@ -33,7 +33,7 @@ def generate_reply(*args, **kwargs):
shared.generation_lock.release() shared.generation_lock.release()
def _generate_reply(question, state, stopping_strings=None, is_chat=False, escape_html=False): def _generate_reply(question, state, stopping_strings=None, is_chat=False, escape_html=False, for_ui=False):
# Find the appropriate generation function # Find the appropriate generation function
generate_func = apply_extensions('custom_generate_reply') generate_func = apply_extensions('custom_generate_reply')
@ -96,7 +96,7 @@ def _generate_reply(question, state, stopping_strings=None, is_chat=False, escap
# Limit updates to 24 or 5 per second to avoid lag in the Gradio UI # Limit updates to 24 or 5 per second to avoid lag in the Gradio UI
# API updates are not limited # API updates are not limited
else: else:
min_update_interval = 0 if not escape_html else 0.2 if (shared.args.listen or shared.args.share) else 0.0417 min_update_interval = 0 if not for_ui else 0.2 if (shared.args.listen or shared.args.share) else 0.0417
if cur_time - last_update > min_update_interval: if cur_time - last_update > min_update_interval:
last_update = cur_time last_update = cur_time
yield reply yield reply
@ -120,10 +120,9 @@ def encode(prompt, add_special_tokens=True, add_bos_token=True, truncation_lengt
input_ids = np.array(input_ids).reshape(1, len(input_ids)) input_ids = np.array(input_ids).reshape(1, len(input_ids))
else: else:
input_ids = shared.tokenizer.encode(str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens) input_ids = shared.tokenizer.encode(str(prompt), return_tensors='pt', add_special_tokens=add_special_tokens)
if not add_bos_token:
# This is a hack for making replies more creative. while len(input_ids[0]) > 0 and input_ids[0][0] == shared.tokenizer.bos_token_id:
if not add_bos_token and input_ids[0][0] == shared.tokenizer.bos_token_id: input_ids = input_ids[:, 1:]
input_ids = input_ids[:, 1:]
# Handling truncation # Handling truncation
if truncation_length is not None: if truncation_length is not None:
@ -179,7 +178,7 @@ def generate_reply_wrapper(question, state, stopping_strings=None):
reply = question if not shared.is_seq2seq else '' reply = question if not shared.is_seq2seq else ''
yield formatted_outputs(reply, shared.model_name) yield formatted_outputs(reply, shared.model_name)
for reply in generate_reply(question, state, stopping_strings, is_chat=False, escape_html=True): for reply in generate_reply(question, state, stopping_strings, is_chat=False, escape_html=True, for_ui=True):
if not shared.is_seq2seq: if not shared.is_seq2seq:
reply = question + reply reply = question + reply

View File

@ -154,13 +154,9 @@ def list_interface_input_elements():
'greeting', 'greeting',
'context', 'context',
'mode', 'mode',
'instruction_template',
'name1_instruct',
'name2_instruct',
'context_instruct',
'system_message',
'custom_system_message', 'custom_system_message',
'turn_template', 'instruction_template_str',
'chat_template_str',
'chat_style', 'chat_style',
'chat-instruct_command', 'chat-instruct_command',
] ]
@ -202,7 +198,7 @@ def apply_interface_values(state, use_persistent=False):
return [state[k] if k in state else gr.update() for k in elements] return [state[k] if k in state else gr.update() for k in elements]
def save_settings(state, preset, instruction_template, extensions, show_controls): def save_settings(state, preset, extensions, show_controls):
output = copy.deepcopy(shared.settings) output = copy.deepcopy(shared.settings)
exclude = ['name2', 'greeting', 'context', 'turn_template'] exclude = ['name2', 'greeting', 'context', 'turn_template']
for k in state: for k in state:
@ -213,7 +209,6 @@ def save_settings(state, preset, instruction_template, extensions, show_controls
output['prompt-default'] = state['prompt_menu-default'] output['prompt-default'] = state['prompt_menu-default']
output['prompt-notebook'] = state['prompt_menu-notebook'] output['prompt-notebook'] = state['prompt_menu-notebook']
output['character'] = state['character_menu'] output['character'] = state['character_menu']
output['instruction_template'] = instruction_template
output['default_extensions'] = extensions output['default_extensions'] = extensions
output['seed'] = int(output['seed']) output['seed'] = int(output['seed'])
output['show_controls'] = show_controls output['show_controls'] = show_controls

View File

@ -19,7 +19,6 @@ def create_ui():
mu = shared.args.multi_user mu = shared.args.multi_user
shared.gradio['Chat input'] = gr.State() shared.gradio['Chat input'] = gr.State()
shared.gradio['dummy'] = gr.State()
shared.gradio['history'] = gr.State({'internal': [], 'visible': []}) shared.gradio['history'] = gr.State({'internal': [], 'visible': []})
with gr.Tab('Chat', elem_id='chat-tab', elem_classes=("old-ui" if shared.args.chat_buttons else None)): with gr.Tab('Chat', elem_id='chat-tab', elem_classes=("old-ui" if shared.args.chat_buttons else None)):
@ -106,25 +105,29 @@ def create_chat_settings_ui():
with gr.Tab('Instruction template'): with gr.Tab('Instruction template'):
with gr.Row(): with gr.Row():
with gr.Row(): with gr.Column():
shared.gradio['instruction_template'] = gr.Dropdown(choices=utils.get_available_instruction_templates(), label='Instruction template', value='None', info='Change this according to the model/LoRA that you are using. Used in instruct and chat-instruct modes.', elem_classes='slim-dropdown') with gr.Row():
ui.create_refresh_button(shared.gradio['instruction_template'], lambda: None, lambda: {'choices': utils.get_available_instruction_templates()}, 'refresh-button', interactive=not mu) shared.gradio['instruction_template'] = gr.Dropdown(choices=utils.get_available_instruction_templates(), label='Saved instruction templates', value='Custom', info='Change this according to the model/LoRA that you are using. Used in instruct and chat-instruct modes.', elem_classes='slim-dropdown')
shared.gradio['save_template'] = gr.Button('💾', elem_classes='refresh-button', interactive=not mu) ui.create_refresh_button(shared.gradio['instruction_template'], lambda: None, lambda: {'choices': utils.get_available_instruction_templates()}, 'refresh-button', interactive=not mu)
shared.gradio['delete_template'] = gr.Button('🗑️ ', elem_classes='refresh-button', interactive=not mu) shared.gradio['load_template'] = gr.Button("Load", elem_classes='refresh-button')
shared.gradio['save_template'] = gr.Button('💾', elem_classes='refresh-button', interactive=not mu)
shared.gradio['delete_template'] = gr.Button('🗑️ ', elem_classes='refresh-button', interactive=not mu)
shared.gradio['custom_system_message'] = gr.Textbox(value=shared.settings['custom_system_message'], lines=2, label='Custom system message', info='If not empty, will be used instead of the default one.', elem_classes=['add_scrollbar']) with gr.Column():
shared.gradio['turn_template'] = gr.Textbox(value='', lines=1, label='Turn template', info='Used to precisely define the placement of spaces and new line characters in instruction prompts.', elem_classes=['add_scrollbar']) pass
shared.gradio['name1_instruct'] = gr.Textbox(value='', lines=2, label='User string', info='Replaces <|user|> in the turn template.')
shared.gradio['name2_instruct'] = gr.Textbox(value='', lines=1, label='Bot string', info='Replaces <|bot|> in the turn template.')
shared.gradio['context_instruct'] = gr.Textbox(value='', lines=4, label='Context', elem_classes=['add_scrollbar'])
shared.gradio['system_message'] = gr.Textbox(value='', lines=2, label='System message', info='Replaces <|system-message|> in the context.', elem_classes=['add_scrollbar'])
with gr.Row():
shared.gradio['send_instruction_to_default'] = gr.Button('Send to default', elem_classes=['small-button'])
shared.gradio['send_instruction_to_notebook'] = gr.Button('Send to notebook', elem_classes=['small-button'])
shared.gradio['send_instruction_to_negative_prompt'] = gr.Button('Send to negative prompt', elem_classes=['small-button'])
with gr.Row(): with gr.Row():
shared.gradio['chat-instruct_command'] = gr.Textbox(value=shared.settings['chat-instruct_command'], lines=4, label='Command for chat-instruct mode', info='<|character|> gets replaced by the bot name, and <|prompt|> gets replaced by the regular chat prompt.', elem_classes=['add_scrollbar']) with gr.Column():
shared.gradio['custom_system_message'] = gr.Textbox(value=shared.settings['custom_system_message'], lines=2, label='Custom system message', info='If not empty, will be used instead of the default one.', elem_classes=['add_scrollbar'])
shared.gradio['instruction_template_str'] = gr.Textbox(value='', label='Instruction template', lines=24, elem_classes=['add_scrollbar', 'monospace'])
with gr.Row():
shared.gradio['send_instruction_to_default'] = gr.Button('Send to default', elem_classes=['small-button'])
shared.gradio['send_instruction_to_notebook'] = gr.Button('Send to notebook', elem_classes=['small-button'])
shared.gradio['send_instruction_to_negative_prompt'] = gr.Button('Send to negative prompt', elem_classes=['small-button'])
with gr.Column():
shared.gradio['chat_template_str'] = gr.Textbox(value=shared.settings['chat_template_str'], label='Chat template', lines=22, elem_classes=['add_scrollbar', 'monospace'])
shared.gradio['chat-instruct_command'] = gr.Textbox(value=shared.settings['chat-instruct_command'], lines=4, label='Command for chat-instruct mode', info='<|character|> gets replaced by the bot name, and <|prompt|> gets replaced by the regular chat prompt.', elem_classes=['add_scrollbar'])
with gr.Tab('Chat history'): with gr.Tab('Chat history'):
with gr.Row(): with gr.Row():
@ -271,7 +274,7 @@ def create_event_handlers():
lambda: None, None, None, _js=f'() => {{{ui.switch_tabs_js}; switch_to_chat()}}') lambda: None, None, None, _js=f'() => {{{ui.switch_tabs_js}; switch_to_chat()}}')
shared.gradio['character_menu'].change( shared.gradio['character_menu'].change(
partial(chat.load_character, instruct=False), gradio('character_menu', 'name1', 'name2'), gradio('name1', 'name2', 'character_picture', 'greeting', 'context', 'dummy', 'dummy')).success( chat.load_character, gradio('character_menu', 'name1', 'name2'), gradio('name1', 'name2', 'character_picture', 'greeting', 'context')).success(
ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(
chat.load_latest_history, gradio('interface_state'), gradio('history')).then( chat.load_latest_history, gradio('interface_state'), gradio('history')).then(
chat.redraw_html, gradio(reload_arr), gradio('display')).then( chat.redraw_html, gradio(reload_arr), gradio('display')).then(
@ -287,9 +290,6 @@ def create_event_handlers():
lambda x: gr.update(choices=(histories := chat.find_all_histories(x)), value=histories[0]), gradio('interface_state'), gradio('unique_id')) lambda x: gr.update(choices=(histories := chat.find_all_histories(x)), value=histories[0]), gradio('interface_state'), gradio('unique_id'))
shared.gradio['chat_style'].change(chat.redraw_html, gradio(reload_arr), gradio('display')) shared.gradio['chat_style'].change(chat.redraw_html, gradio(reload_arr), gradio('display'))
shared.gradio['instruction_template'].change(
partial(chat.load_character, instruct=True), gradio('instruction_template', 'name1_instruct', 'name2_instruct'), gradio('name1_instruct', 'name2_instruct', 'dummy', 'dummy', 'context_instruct', 'turn_template', 'system_message'))
shared.gradio['Copy last reply'].click(chat.send_last_reply_to_input, gradio('history'), gradio('textbox'), show_progress=False) shared.gradio['Copy last reply'].click(chat.send_last_reply_to_input, gradio('history'), gradio('textbox'), show_progress=False)
# Save/delete a character # Save/delete a character
@ -299,10 +299,11 @@ def create_event_handlers():
shared.gradio['delete_character'].click(lambda: gr.update(visible=True), None, gradio('character_deleter')) shared.gradio['delete_character'].click(lambda: gr.update(visible=True), None, gradio('character_deleter'))
shared.gradio['load_template'].click(chat.load_instruction_template, gradio('instruction_template'), gradio('instruction_template_str'))
shared.gradio['save_template'].click( shared.gradio['save_template'].click(
lambda: 'My Template.yaml', None, gradio('save_filename')).then( lambda: 'My Template.yaml', None, gradio('save_filename')).then(
lambda: 'instruction-templates/', None, gradio('save_root')).then( lambda: 'instruction-templates/', None, gradio('save_root')).then(
chat.generate_instruction_template_yaml, gradio('name1_instruct', 'name2_instruct', 'context_instruct', 'turn_template', 'system_message'), gradio('save_contents')).then( chat.generate_instruction_template_yaml, gradio('instruction_template_str'), gradio('save_contents')).then(
lambda: gr.update(visible=True), None, gradio('file_saver')) lambda: gr.update(visible=True), None, gradio('file_saver'))
shared.gradio['delete_template'].click( shared.gradio['delete_template'].click(

View File

@ -58,7 +58,7 @@ def create_ui():
with gr.Row(): with gr.Row():
with gr.Column(): with gr.Column():
with gr.Row(): with gr.Row():
shared.gradio['model_menu'] = gr.Dropdown(choices=utils.get_available_models(), value=shared.model_name, label='Model', elem_classes='slim-dropdown', interactive=not mu) shared.gradio['model_menu'] = gr.Dropdown(choices=utils.get_available_models(), value=lambda: shared.model_name, label='Model', elem_classes='slim-dropdown', interactive=not mu)
ui.create_refresh_button(shared.gradio['model_menu'], lambda: None, lambda: {'choices': utils.get_available_models()}, 'refresh-button', interactive=not mu) ui.create_refresh_button(shared.gradio['model_menu'], lambda: None, lambda: {'choices': utils.get_available_models()}, 'refresh-button', interactive=not mu)
shared.gradio['load_model'] = gr.Button("Load", visible=not shared.settings['autoload_model'], elem_classes='refresh-button', interactive=not mu) shared.gradio['load_model'] = gr.Button("Load", visible=not shared.settings['autoload_model'], elem_classes='refresh-button', interactive=not mu)
shared.gradio['unload_model'] = gr.Button("Unload", elem_classes='refresh-button', interactive=not mu) shared.gradio['unload_model'] = gr.Button("Unload", elem_classes='refresh-button', interactive=not mu)
@ -203,10 +203,9 @@ def load_model_wrapper(selected_model, loader, autoload=False):
else: else:
try: try:
yield f"Loading `{selected_model}`..." yield f"Loading `{selected_model}`..."
shared.model_name = selected_model
unload_model() unload_model()
if selected_model != '': if selected_model != '':
shared.model, shared.tokenizer = load_model(shared.model_name, loader) shared.model, shared.tokenizer = load_model(selected_model, loader)
if shared.model is not None: if shared.model is not None:
output = f"Successfully loaded `{selected_model}`." output = f"Successfully loaded `{selected_model}`."

View File

@ -36,7 +36,7 @@ def create_ui():
shared.gradio['toggle_dark_mode'].click(lambda: None, None, None, _js='() => {document.getElementsByTagName("body")[0].classList.toggle("dark")}') shared.gradio['toggle_dark_mode'].click(lambda: None, None, None, _js='() => {document.getElementsByTagName("body")[0].classList.toggle("dark")}')
shared.gradio['save_settings'].click( shared.gradio['save_settings'].click(
ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then( ui.gather_interface_values, gradio(shared.input_elements), gradio('interface_state')).then(
ui.save_settings, gradio('interface_state', 'preset_menu', 'instruction_template', 'extensions_menu', 'show_controls'), gradio('save_contents')).then( ui.save_settings, gradio('interface_state', 'preset_menu', 'extensions_menu', 'show_controls'), gradio('save_contents')).then(
lambda: './', None, gradio('save_root')).then( lambda: './', None, gradio('save_root')).then(
lambda: 'settings.yaml', None, gradio('save_filename')).then( lambda: 'settings.yaml', None, gradio('save_filename')).then(
lambda: gr.update(visible=True), None, gradio('file_saver')) lambda: gr.update(visible=True), None, gradio('file_saver'))

View File

@ -103,7 +103,7 @@ def get_available_instruction_templates():
if os.path.exists(path): if os.path.exists(path):
paths = (x for x in Path(path).iterdir() if x.suffix in ('.json', '.yaml', '.yml')) paths = (x for x in Path(path).iterdir() if x.suffix in ('.json', '.yaml', '.yml'))
return ['None'] + sorted(set((k.stem for k in paths)), key=natural_keys) return ['Custom'] + sorted(set((k.stem for k in paths)), key=natural_keys)
def get_available_extensions(): def get_available_extensions():

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10; platform_system != "Darwin" and platform_machine != "x86_64"
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10; platform_system != "Darwin" and platform_machine != "x86_64"
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -6,9 +6,9 @@ exllamav2==0.0.10
gradio==3.50.* gradio==3.50.*
markdown markdown
numpy==1.24.* numpy==1.24.*
optimum==1.14.0 optimum==1.15.*
pandas pandas
peft==0.6.* peft==0.7.*
Pillow>=9.5.0 Pillow>=9.5.0
pyyaml pyyaml
requests requests
@ -16,7 +16,7 @@ safetensors==0.4.1
scipy scipy
sentencepiece sentencepiece
tensorboard tensorboard
transformers==4.35.* transformers==4.36.*
tqdm tqdm
wandb wandb

View File

@ -2,6 +2,7 @@ import os
import warnings import warnings
import accelerate # This early import makes Intel GPUs happy import accelerate # This early import makes Intel GPUs happy
import modules.one_click_installer_check import modules.one_click_installer_check
from modules.block_requests import OpenMonkeyPatch, RequestBlocker from modules.block_requests import OpenMonkeyPatch, RequestBlocker
from modules.logging_colors import logger from modules.logging_colors import logger
@ -58,9 +59,6 @@ from modules.utils import gradio
def signal_handler(sig, frame): def signal_handler(sig, frame):
logger.info("Received Ctrl+C. Shutting down Text generation web UI gracefully.") logger.info("Received Ctrl+C. Shutting down Text generation web UI gracefully.")
if 'interface' in shared.gradio:
shared.gradio['interface'].close()
sys.exit(0) sys.exit(0)
@ -89,7 +87,7 @@ def create_interface():
'loader': shared.args.loader or 'Transformers', 'loader': shared.args.loader or 'Transformers',
'mode': shared.settings['mode'], 'mode': shared.settings['mode'],
'character_menu': shared.args.character or shared.settings['character'], 'character_menu': shared.args.character or shared.settings['character'],
'instruction_template': shared.settings['instruction_template'], 'instruction_template_str': shared.settings['instruction_template_str'],
'prompt_menu-default': shared.settings['prompt-default'], 'prompt_menu-default': shared.settings['prompt-default'],
'prompt_menu-notebook': shared.settings['prompt-notebook'], 'prompt_menu-notebook': shared.settings['prompt-notebook'],
'filter_by_loader': shared.args.loader or 'All' 'filter_by_loader': shared.args.loader or 'All'

View File

@ -9,22 +9,58 @@ preset: simple-1
max_new_tokens: 512 max_new_tokens: 512
max_new_tokens_min: 1 max_new_tokens_min: 1
max_new_tokens_max: 4096 max_new_tokens_max: 4096
seed: -1
negative_prompt: '' negative_prompt: ''
seed: -1
truncation_length: 2048 truncation_length: 2048
truncation_length_min: 0 truncation_length_min: 0
truncation_length_max: 200000 truncation_length_max: 200000
custom_stopping_strings: ''
auto_max_new_tokens: false
max_tokens_second: 0 max_tokens_second: 0
ban_eos_token: false custom_stopping_strings: ''
custom_token_bans: '' custom_token_bans: ''
auto_max_new_tokens: false
ban_eos_token: false
add_bos_token: true add_bos_token: true
skip_special_tokens: true skip_special_tokens: true
stream: true stream: true
name1: You
character: Assistant character: Assistant
instruction_template: Alpaca name1: You
custom_system_message: ''
instruction_template_str: |-
{%- set found_item = false -%}
{%- for message in messages -%}
{%- if message['role'] == 'system' -%}
{%- set found_item = true -%}
{%- endif -%}
{%- endfor -%}
{%- if not found_item -%}
{{- '' + 'Below is an instruction that describes a task. Write a response that appropriately completes the request.' + '\n\n' -}}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- '' + message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{-'### Instruction:\n' + message['content'] + '\n\n'-}}
{%- else -%}
{{-'### Response:\n' + message['content'] + '\n\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
{%- if add_generation_prompt -%}
{{-'### Response:\n'-}}
{%- endif -%}
chat_template_str: |-
{%- for message in messages %}
{%- if message['role'] == 'system' -%}
{{- message['content'] + '\n\n' -}}
{%- else -%}
{%- if message['role'] == 'user' -%}
{{- name1 + ': ' + message['content'] + '\n'-}}
{%- else -%}
{{- name2 + ': ' + message['content'] + '\n' -}}
{%- endif -%}
{%- endif -%}
{%- endfor -%}
chat-instruct_command: |- chat-instruct_command: |-
Continue the chat dialogue below. Write a single reply for the character "<|character|>". Continue the chat dialogue below. Write a single reply for the character "<|character|>".