Use different language for prompt size too large. (#3004)

Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Authored by AT on 2024-09-27 12:29:22 -04:00; committed by GitHub
parent f9d6be8afb
commit ea1ade8668
4 changed files with 8 additions and 1 deletion


@@ -161,7 +161,9 @@ bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
                            std::vector<Token> embd_inp,
                            bool isResponse) {
     if ((int) embd_inp.size() > promptCtx.n_ctx - 4) {
-        responseCallback(-1, "ERROR: The prompt size exceeds the context window size and cannot be processed.");
+        // FIXME: (Adam) We should find a way to bubble these strings to the UI level to allow for
+        // translation
+        responseCallback(-1, "Your message was too long and could not be processed. Please try again with something shorter.");
         std::cerr << implementation().modelType() << " ERROR: The prompt is " << embd_inp.size() <<
             " tokens and the context window is " << promptCtx.n_ctx << "!\n";
         return false;
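
For readers skimming the diff: the new message is delivered through the same responseCallback used for normal generation, with -1 as the token id to flag the text as an error. Below is a minimal, self-contained C++ sketch of that pattern; checkPromptFits and the surrounding scaffolding are hypothetical illustrations, not GPT4All's actual API.

```cpp
// Minimal sketch of the error-reporting pattern used above: a negative token id
// passed to the response callback marks the text as an error message rather than
// model output. Names like checkPromptFits are hypothetical, not GPT4All's API.
#include <cstdint>
#include <functional>
#include <iostream>
#include <string>
#include <vector>

using Token = int32_t;
using ResponseCallback = std::function<bool(int32_t token, const std::string &text)>;

// Stand-in for the guard at the top of LLModel::decodePrompt().
bool checkPromptFits(const std::vector<Token> &promptTokens, int32_t n_ctx,
                     const ResponseCallback &responseCallback) {
    // Keep a few tokens of headroom, mirroring the `n_ctx - 4` bound in the diff.
    if ((int) promptTokens.size() > n_ctx - 4) {
        // User-facing text goes through the callback; the FIXME notes it should
        // eventually be translatable at the UI layer.
        responseCallback(-1, "Your message was too long and could not be processed. "
                             "Please try again with something shorter.");
        // Developer-facing detail goes to stderr, not the chat window.
        std::cerr << "ERROR: The prompt is " << promptTokens.size()
                  << " tokens and the context window is " << n_ctx << "!\n";
        return false;
    }
    return true;
}

int main() {
    std::vector<Token> prompt(5000, 1); // pretend-tokenized, oversized prompt
    auto onResponse = [](int32_t token, const std::string &text) {
        if (token < 0)
            std::cout << "[shown to user as an error] " << text << '\n';
        return true; // keep generating
    };
    checkPromptFits(prompt, /*n_ctx=*/2048, onResponse);
}
```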


@@ -11,6 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
 ### Changed
 - Rebase llama.cpp on latest upstream as of September 26th ([#2998](https://github.com/nomic-ai/gpt4all/pull/2998))
+- Change the error message when a message is too long ([#3004](https://github.com/nomic-ai/gpt4all/pull/3004))
 
 ## [2.8.2] - 2024-08-14


@@ -11,6 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
 ### Changed
 - Rebase llama.cpp on latest upstream as of September 26th ([#2998](https://github.com/nomic-ai/gpt4all/pull/2998))
+- Change the error message when a message is too long ([#3004](https://github.com/nomic-ai/gpt4all/pull/3004))
 
 ### Fixed
 - Fix a crash when attempting to continue a chat loaded from disk ([#2995](https://github.com/nomic-ai/gpt4all/pull/2995))


@@ -706,6 +706,9 @@ bool ChatLLM::handleResponse(int32_t token, const std::string &response)
 #endif
 
     // check for error
+    // FIXME (Adam) The error messages should not be treated as a model response or part of the
+    // normal conversation. They should be serialized along with the conversation, but the strings
+    // are separate and we should preserve info that these are error messages and not actual model responses.
     if (token < 0) {
         m_response.append(response);
         m_trimmedResponse = remove_leading_whitespace(m_response);
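
On the receiving end, ChatLLM::handleResponse treats any negative token id as an error. The sketch below (a hypothetical ChatSketch struct, heavily simplified from the real ChatLLM) illustrates that convention, including the current limitation the FIXME describes: the error string still ends up appended to the same response buffer as ordinary model output.

```cpp
// Rough sketch of the consumer side: a negative token id means "error message",
// but the text is still appended to the response like normal output, which is
// exactly what the FIXME above calls out. ChatSketch is a hypothetical stand-in.
#include <cstdint>
#include <iostream>
#include <string>

struct ChatSketch {
    std::string m_response;

    bool handleResponse(int32_t token, const std::string &text) {
        if (token < 0) {
            // Error path: canned error string, not model output, yet it lands
            // in the same buffer as the conversation text.
            m_response.append(text);
            return false; // stop generation
        }
        m_response.append(text); // normal model token text
        return true;
    }
};

int main() {
    ChatSketch chat;
    chat.handleResponse(7, "Hello!");
    chat.handleResponse(-1, " Your message was too long and could not be processed.");
    std::cout << chat.m_response << '\n';
}
```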