From 90de2d32f8c22d4a0858d77b1b40bcbe50330260 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 7 Aug 2024 11:20:15 -0400
Subject: [PATCH 01/66] chat: add CHANGELOG.md (#2699)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 82 +++++++++++++++++++++++++++++++++++++++
1 file changed, 82 insertions(+)
create mode 100644 gpt4all-chat/CHANGELOG.md
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
new file mode 100644
index 00000000..114320a7
--- /dev/null
+++ b/gpt4all-chat/CHANGELOG.md
@@ -0,0 +1,82 @@
+# Changelog
+
+All notable changes to this project will be documented in this file.
+
+The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+
+## [Unreleased]
+
+### Added
+- Add Qwen2-1.5B-Instruct to models3.json (by [@ThiloteE](https://github.com/ThiloteE) in [#2759](https://github.com/nomic-ai/gpt4all/pull/2759))
+
+### Changed
+- Add missing entries to Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2783](https://github.com/nomic-ai/gpt4all/pull/2783))
+
+### Fixed
+- Case-insensitive LocalDocs source icon detection (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2761](https://github.com/nomic-ai/gpt4all/pull/2761))
+- Fix comparison of pre- and post-release versions for update check and models3.json ([#2762](https://github.com/nomic-ai/gpt4all/pull/2762), [#2772](https://github.com/nomic-ai/gpt4all/pull/2772))
+- Fix several backend issues ([#2778](https://github.com/nomic-ai/gpt4all/pull/2778))
+ - Restore leading space removal logic that was incorrectly removed in [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
+ - CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
+
+## [3.1.1] - 2024-07-27
+
+### Added
+- Add Llama 3.1 8B Instruct to models3.json (by [@3Simplex](https://github.com/3Simplex) in [#2731](https://github.com/nomic-ai/gpt4all/pull/2731) and [#2732](https://github.com/nomic-ai/gpt4all/pull/2732))
+- Portuguese (BR) translation (by [thiagojramos](https://github.com/thiagojramos) in [#2733](https://github.com/nomic-ai/gpt4all/pull/2733))
+- Support adding arbitrary OpenAI-compatible models by URL (by [@supersonictw](https://github.com/supersonictw) in [#2683](https://github.com/nomic-ai/gpt4all/pull/2683))
+- Support Llama 3.1 RoPE scaling ([#2758](https://github.com/nomic-ai/gpt4all/pull/2758))
+
+### Changed
+- Add missing entries to Chinese (Simplified) translation (by [wuodoo](https://github.com/wuodoo) in [#2716](https://github.com/nomic-ai/gpt4all/pull/2716) and [#2749](https://github.com/nomic-ai/gpt4all/pull/2749))
+- Update translation files and add missing paths to CMakeLists.txt ([#2735](https://github.com/nomic-ai/gpt4all/pull/2735))
+
+## [3.1.0] - 2024-07-24
+
+### Added
+- Generate suggested follow-up questions ([#2634](https://github.com/nomic-ai/gpt4all/pull/2634), [#2723](https://github.com/nomic-ai/gpt4all/pull/2723))
+ - Also add options for the chat name and follow-up question prompt templates
+- Scaffolding for translations ([#2612](https://github.com/nomic-ai/gpt4all/pull/2612))
+- Spanish (MX) translation (by [@jstayco](https://github.com/jstayco) in [#2654](https://github.com/nomic-ai/gpt4all/pull/2654))
+- Chinese (Simplified) translation by mikage ([#2657](https://github.com/nomic-ai/gpt4all/pull/2657))
+- Dynamic changes of language and locale at runtime ([#2659](https://github.com/nomic-ai/gpt4all/pull/2659), [#2677](https://github.com/nomic-ai/gpt4all/pull/2677))
+- Romanian translation by [@SINAPSA\_IC](https://github.com/SINAPSA_IC) ([#2662](https://github.com/nomic-ai/gpt4all/pull/2662))
+- Chinese (Traditional) translation (by [@supersonictw](https://github.com/supersonictw) in [#2661](https://github.com/nomic-ai/gpt4all/pull/2661))
+- Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2700](https://github.com/nomic-ai/gpt4all/pull/2700))
+
+### Changed
+- Customize combo boxes and context menus to fit the new style ([#2535](https://github.com/nomic-ai/gpt4all/pull/2535))
+- Improve view bar scaling and Model Settings layout ([#2520](https://github.com/nomic-ai/gpt4all/pull/2520))
+- Make the logo spin while the model is generating ([#2557](https://github.com/nomic-ai/gpt4all/pull/2557))
+- Server: Reply to wrong GET/POST method with HTTP 405 instead of 404 (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2615](https://github.com/nomic-ai/gpt4all/pull/2615))
+- Update theme for menus (by [@3Simplex](https://github.com/3Simplex) in [#2578](https://github.com/nomic-ai/gpt4all/pull/2578))
+- Move the "stop" button to the message box ([#2561](https://github.com/nomic-ai/gpt4all/pull/2561))
+- Build with CUDA 11.8 for better compatibility ([#2639](https://github.com/nomic-ai/gpt4all/pull/2639))
+- Make links in latest news section clickable ([#2643](https://github.com/nomic-ai/gpt4all/pull/2643))
+- Support translation of settings choices ([#2667](https://github.com/nomic-ai/gpt4all/pull/2667), [#2690](https://github.com/nomic-ai/gpt4all/pull/2690))
+- Improve LocalDocs view's error message (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2679](https://github.com/nomic-ai/gpt4all/pull/2679))
+- Ignore case of LocalDocs file extensions ([#2642](https://github.com/nomic-ai/gpt4all/pull/2642), [#2684](https://github.com/nomic-ai/gpt4all/pull/2684))
+- Update llama.cpp to commit 87e397d00 from July 19th ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694), [#2702](https://github.com/nomic-ai/gpt4all/pull/2702))
+ - Add support for GPT-NeoX, Gemma 2, OpenELM, ChatGLM, and Jais architectures (all with Vulkan support)
+ - Add support for DeepSeek-V2 architecture (no Vulkan support)
+ - Enable Vulkan support for StarCoder2, XVERSE, Command R, and OLMo
+- Show scrollbar in chat collections list as needed (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2691](https://github.com/nomic-ai/gpt4all/pull/2691))
+
+### Removed
+- Remove support for GPT-J models ([#2676](https://github.com/nomic-ai/gpt4all/pull/2676), [#2693](https://github.com/nomic-ai/gpt4all/pull/2693))
+
+### Fixed
+- Fix placement of thumbs-down and datalake opt-in dialogs ([#2540](https://github.com/nomic-ai/gpt4all/pull/2540))
+- Select the correct folder with the Linux fallback folder dialog ([#2541](https://github.com/nomic-ai/gpt4all/pull/2541))
+- Fix clone button sometimes producing blank model info ([#2545](https://github.com/nomic-ai/gpt4all/pull/2545))
+- Fix jerky chat view scrolling ([#2555](https://github.com/nomic-ai/gpt4all/pull/2555))
+- Fix "reload" showing for chats with missing models ([#2520](https://github.com/nomic-ai/gpt4all/pull/2520)
+- Fix property binding loop warning ([#2601](https://github.com/nomic-ai/gpt4all/pull/2601))
+- Fix UI hang with certain chat view content ([#2543](https://github.com/nomic-ai/gpt4all/pull/2543))
+- Fix crash when Kompute falls back to CPU ([#2640](https://github.com/nomic-ai/gpt4all/pull/2640))
+- Fix several Vulkan resource management issues ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
+- Fix crash/hang when some models stop generating, by showing special tokens ([#2701](https://github.com/nomic-ai/gpt4all/pull/2701))
+
+[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/v3.1.1...HEAD
+[3.1.1]: https://github.com/nomic-ai/gpt4all/compare/v3.1.0...v3.1.1
+[3.1.0]: https://github.com/nomic-ai/gpt4all/compare/v3.0.0...v3.1.0
From be66ec8ab5d32e614675d2bea6a03d945f3b8f69 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 7 Aug 2024 11:25:24 -0400
Subject: [PATCH 02/66] chat: faster KV shift, continue generating, fix stop
sequences (#2781)
* Don't stop generating at end of context
* Use llama_kv_cache ops to shift context (sketched below)
* Fix and improve reverse prompt detection
* Replace prompt recalc callback with a flag to disallow context shift
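A minimal, self-contained sketch of the new context-shift bookkeeping, with a toy
PromptCtx type and main() driver invented for illustration. The real implementation
operates on the llama.cpp context and pairs the token-cache erase with
llama_kv_cache_seq_rm/llama_kv_cache_seq_add, as the llamamodel.cpp hunk below shows:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct PromptCtx {
        std::vector<int32_t> tokens; // tokens currently in the context window
        int n_past = 0;              // number of tokens already evaluated
        int n_ctx = 8;               // context window size (tiny for the demo)
        float contextErase = 0.5f;   // fraction of the context to discard
    };

    void shiftContext(PromptCtx &ctx, int n_keep)
    {
        // erase up to n_ctx*contextErase tokens, but never the first n_keep (e.g. BOS)
        int n_discard = std::min(ctx.n_past - n_keep, int(ctx.n_ctx * ctx.contextErase));
        if (n_discard <= 0)
            return;

        // the real backend also shifts the KV cache in place at this point:
        //   llama_kv_cache_seq_rm (lctx, 0, n_keep, n_keep + n_discard);
        //   llama_kv_cache_seq_add(lctx, 0, n_keep + n_discard, n_past, -n_discard);
        ctx.tokens.erase(ctx.tokens.begin() + n_keep, ctx.tokens.begin() + n_keep + n_discard);
        ctx.n_past = int(ctx.tokens.size());
    }

    int main()
    {
        PromptCtx ctx;
        for (int32_t t = 0; t < 8; t++) ctx.tokens.push_back(t);
        ctx.n_past = 8; // context is full

        shiftContext(ctx, /*n_keep*/ 1); // keep BOS, discard the next 4 tokens
        std::cout << "n_past=" << ctx.n_past << "\n"; // prints n_past=4
    }

Erasing the discarded rows and shifting the rest in place is what makes the new
path fast: unlike the old recalculate loop, no surviving token is re-evaluated.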
---
gpt4all-backend/CMakeLists.txt | 2 +-
gpt4all-backend/llamamodel.cpp | 35 ++-
gpt4all-backend/llamamodel_impl.h | 3 +-
gpt4all-backend/llmodel.h | 14 +-
gpt4all-backend/llmodel_c.cpp | 4 +-
gpt4all-backend/llmodel_c.h | 11 +-
gpt4all-backend/llmodel_shared.cpp | 275 +++++++++++-------
gpt4all-bindings/python/gpt4all/_pyllmodel.py | 10 +-
gpt4all-chat/CMakeLists.txt | 3 +-
gpt4all-chat/chat.cpp | 10 +-
gpt4all-chat/chat.h | 8 +-
gpt4all-chat/chatapi.cpp | 4 +-
gpt4all-chat/chatapi.h | 35 ++-
gpt4all-chat/chatllm.cpp | 78 +----
gpt4all-chat/chatllm.h | 13 +-
gpt4all-chat/qml/ChatView.qml | 10 +-
16 files changed, 285 insertions(+), 230 deletions(-)
diff --git a/gpt4all-backend/CMakeLists.txt b/gpt4all-backend/CMakeLists.txt
index b47c4505..5862c726 100644
--- a/gpt4all-backend/CMakeLists.txt
+++ b/gpt4all-backend/CMakeLists.txt
@@ -33,7 +33,7 @@ set(LLMODEL_VERSION_PATCH 0)
set(LLMODEL_VERSION "${LLMODEL_VERSION_MAJOR}.${LLMODEL_VERSION_MINOR}.${LLMODEL_VERSION_PATCH}")
project(llmodel VERSION ${LLMODEL_VERSION} LANGUAGES CXX C)
-set(CMAKE_CXX_STANDARD 20)
+set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
set(BUILD_SHARED_LIBS ON)
diff --git a/gpt4all-backend/llamamodel.cpp b/gpt4all-backend/llamamodel.cpp
index cab0e75a..f07a05e8 100644
--- a/gpt4all-backend/llamamodel.cpp
+++ b/gpt4all-backend/llamamodel.cpp
@@ -531,10 +531,7 @@ size_t LLamaModel::restoreState(const uint8_t *src)
std::vector<LLModel::Token> LLamaModel::tokenize(PromptContext &ctx, const std::string &str, bool special)
{
bool atStart = m_tokenize_last_token == -1;
- bool insertSpace = atStart || (
- llama_token_get_attr(d_ptr->model, m_tokenize_last_token)
- & (LLAMA_TOKEN_ATTR_CONTROL | LLAMA_TOKEN_ATTR_USER_DEFINED | LLAMA_TOKEN_ATTR_UNKNOWN)
- );
+ bool insertSpace = atStart || isSpecialToken(m_tokenize_last_token);
std::vector<LLModel::Token> fres(str.length() + 4);
int32_t fres_len = llama_tokenize_gpt4all(
d_ptr->model, str.c_str(), str.length(), fres.data(), fres.size(), /*add_special*/ atStart,
@@ -546,6 +543,12 @@ std::vector<LLModel::Token> LLamaModel::tokenize(PromptContext &ctx, const std:
return fres;
}
+bool LLamaModel::isSpecialToken(Token id) const
+{
+ return llama_token_get_attr(d_ptr->model, id)
+ & (LLAMA_TOKEN_ATTR_CONTROL | LLAMA_TOKEN_ATTR_USER_DEFINED | LLAMA_TOKEN_ATTR_UNKNOWN);
+}
+
std::string LLamaModel::tokenToString(Token id) const
{
std::vector<char> result(8, 0);
@@ -595,6 +598,30 @@ bool LLamaModel::evalTokens(PromptContext &ctx, const std::vector<int32_t> &toke
return res == 0;
}
+void LLamaModel::shiftContext(PromptContext &promptCtx)
+{
+ // infinite text generation via context shifting
+
+ // erase up to n_ctx*contextErase tokens
+ int n_keep = shouldAddBOS();
+ int n_past = promptCtx.n_past;
+ int n_discard = std::min(n_past - n_keep, int(promptCtx.n_ctx * promptCtx.contextErase));
+
+ assert(n_discard > 0);
+ if (n_discard <= 0)
+ return;
+
+ std::cerr << "Llama: context full, swapping: n_past = " << n_past << ", n_keep = " << n_keep
+ << ", n_discard = " << n_discard << "\n";
+
+ // erase the first n_discard tokens from the context
+ llama_kv_cache_seq_rm (d_ptr->ctx, 0, n_keep, n_keep + n_discard);
+ llama_kv_cache_seq_add(d_ptr->ctx, 0, n_keep + n_discard, n_past, -n_discard);
+
+ promptCtx.tokens.erase(promptCtx.tokens.begin() + n_keep, promptCtx.tokens.begin() + n_keep + n_discard);
+ promptCtx.n_past = promptCtx.tokens.size();
+}
+
int32_t LLamaModel::contextLength() const
{
return llama_n_ctx(d_ptr->ctx);
diff --git a/gpt4all-backend/llamamodel_impl.h b/gpt4all-backend/llamamodel_impl.h
index 019e5532..7c698ffa 100644
--- a/gpt4all-backend/llamamodel_impl.h
+++ b/gpt4all-backend/llamamodel_impl.h
@@ -6,7 +6,6 @@
#include "llmodel.h"
-#include <functional>
#include <memory>
#include <string>
#include <vector>
@@ -54,9 +53,11 @@ private:
protected:
std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special) override;
+ bool isSpecialToken(Token id) const override;
std::string tokenToString(Token id) const override;
Token sampleToken(PromptContext &ctx) const override;
bool evalTokens(PromptContext &ctx, const std::vector<Token> &tokens) const override;
+ void shiftContext(PromptContext &promptCtx) override;
int32_t contextLength() const override;
const std::vector<Token> &endTokens() const override;
bool shouldAddBOS() const override;
diff --git a/gpt4all-backend/llmodel.h b/gpt4all-backend/llmodel.h
index f95dc3a8..04a510dc 100644
--- a/gpt4all-backend/llmodel.h
+++ b/gpt4all-backend/llmodel.h
@@ -134,7 +134,7 @@ public:
int32_t n_batch = 9;
float repeat_penalty = 1.10f;
int32_t repeat_last_n = 64; // last n tokens to penalize
- float contextErase = 0.75f; // percent of context to erase if we exceed the context window
+ float contextErase = 0.5f; // percent of context to erase if we exceed the context window
};
using ProgressCallback = std::function<bool(float progress)>;
@@ -159,7 +159,7 @@ public:
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &ctx,
bool special = false,
std::string *fakeReply = nullptr);
@@ -213,9 +213,11 @@ protected:
// These are pure virtual because subclasses need to implement as the default implementation of
// 'prompt' above calls these functions
virtual std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special = false) = 0;
+ virtual bool isSpecialToken(Token id) const = 0;
virtual std::string tokenToString(Token id) const = 0;
virtual Token sampleToken(PromptContext &ctx) const = 0;
virtual bool evalTokens(PromptContext &ctx, const std::vector<Token> &tokens) const = 0;
+ virtual void shiftContext(PromptContext &promptCtx) = 0;
virtual int32_t contextLength() const = 0;
virtual const std::vector<Token> &endTokens() const = 0;
virtual bool shouldAddBOS() const = 0;
@@ -232,10 +234,6 @@ protected:
return -1;
}
- // This is a helper function called from the default implementation of 'prompt' but it can be
- // shared by all base classes so it isn't virtual
- void recalculateContext(PromptContext &promptCtx, std::function<bool(bool)> recalculate);
-
const Implementation *m_implementation = nullptr;
ProgressCallback m_progressCallback;
@@ -249,11 +247,11 @@ protected:
bool decodePrompt(std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &promptCtx,
std::vector<Token> embd_inp);
void generateResponse(std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &promptCtx);
Token m_tokenize_last_token = -1; // not serialized
diff --git a/gpt4all-backend/llmodel_c.cpp b/gpt4all-backend/llmodel_c.cpp
index d6ba95e8..f3fd68ff 100644
--- a/gpt4all-backend/llmodel_c.cpp
+++ b/gpt4all-backend/llmodel_c.cpp
@@ -106,7 +106,7 @@ void llmodel_prompt(llmodel_model model, const char *prompt,
const char *prompt_template,
llmodel_prompt_callback prompt_callback,
llmodel_response_callback response_callback,
- llmodel_recalculate_callback recalculate_callback,
+ bool allow_context_shift,
llmodel_prompt_context *ctx,
bool special,
const char *fake_reply)
@@ -135,7 +135,7 @@ void llmodel_prompt(llmodel_model model, const char *prompt,
auto *fake_reply_p = fake_reply ? &fake_reply_str : nullptr;
// Call the C++ prompt method
- wrapper->llModel->prompt(prompt, prompt_template, prompt_callback, response_func, recalculate_callback,
+ wrapper->llModel->prompt(prompt, prompt_template, prompt_callback, response_func, allow_context_shift,
wrapper->promptContext, special, fake_reply_p);
// Update the C context by giving access to the wrappers raw pointers to std::vector data
diff --git a/gpt4all-backend/llmodel_c.h b/gpt4all-backend/llmodel_c.h
index 75cfb862..327bea2e 100644
--- a/gpt4all-backend/llmodel_c.h
+++ b/gpt4all-backend/llmodel_c.h
@@ -74,13 +74,6 @@ typedef bool (*llmodel_prompt_callback)(int32_t token_id);
*/
typedef bool (*llmodel_response_callback)(int32_t token_id, const char *response);
-/**
- * Callback type for recalculation of context.
- * @param whether the model is recalculating the context.
- * @return a bool indicating whether the model should keep generating.
- */
-typedef bool (*llmodel_recalculate_callback)(bool is_recalculating);
-
/**
* Embedding cancellation callback for use with llmodel_embed.
* @param batch_sizes The number of tokens in each batch that will be embedded.
@@ -175,7 +168,7 @@ uint64_t llmodel_restore_state_data(llmodel_model model, const uint8_t *src);
* @param prompt_template A string representing the input prompt template.
* @param prompt_callback A callback function for handling the processing of prompt.
* @param response_callback A callback function for handling the generated response.
- * @param recalculate_callback A callback function for handling recalculation requests.
+ * @param allow_context_shift Whether to allow shifting of context to make room for more input.
* @param special True if special tokens in the prompt should be processed, false otherwise.
* @param fake_reply A string to insert into context as the model's reply, or NULL to generate one.
* @param ctx A pointer to the llmodel_prompt_context structure.
@@ -184,7 +177,7 @@ void llmodel_prompt(llmodel_model model, const char *prompt,
const char *prompt_template,
llmodel_prompt_callback prompt_callback,
llmodel_response_callback response_callback,
- llmodel_recalculate_callback recalculate_callback,
+ bool allow_context_shift,
llmodel_prompt_context *ctx,
bool special,
const char *fake_reply);
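For orientation, a hypothetical caller of the revised C API, showing where the new
allow_context_shift flag lands. llmodel_model_create, llmodel_loadModel, and
llmodel_model_destroy are declared elsewhere in llmodel_c.h; the model path,
prompt template, and settings here are invented:

    #include "llmodel_c.h"

    #include <cstdio>

    static bool on_prompt(int32_t token_id) { (void)token_id; return true; } // keep going
    static bool on_response(int32_t token_id, const char *response)
    {
        (void)token_id;
        std::fputs(response, stdout);
        return true; // returning false would cancel generation
    }

    int main()
    {
        llmodel_model model = llmodel_model_create("model.gguf");
        llmodel_loadModel(model, "model.gguf", /*n_ctx*/ 2048, /*ngl*/ 100);

        llmodel_prompt_context ctx = {}; // zero-init; real callers tune sampling too
        ctx.n_ctx = 2048;
        ctx.n_predict = 128;
        ctx.n_batch = 9;
        ctx.temp = 0.7f;
        ctx.context_erase = 0.5f;

        llmodel_prompt(model, "What is 1+1?", "### Human:\n%1\n### Assistant:\n",
                       on_prompt, on_response,
                       /*allow_context_shift*/ true, &ctx, /*special*/ false,
                       /*fake_reply*/ nullptr);

        llmodel_model_destroy(model);
    }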
diff --git a/gpt4all-backend/llmodel_shared.cpp b/gpt4all-backend/llmodel_shared.cpp
index 68ea42e4..7477254a 100644
--- a/gpt4all-backend/llmodel_shared.cpp
+++ b/gpt4all-backend/llmodel_shared.cpp
@@ -11,42 +11,9 @@
#include <regex>
#include <sstream>
#include <string>
-#include <unordered_set>
#include <vector>
-// TODO(cebtenzzre): replace this with llama_kv_cache_seq_shift for llamamodel (GPT-J needs this as-is)
-// FIXME(jared): if recalculate returns false, we leave n_past < tokens.size()
-void LLModel::recalculateContext(PromptContext &promptCtx, std::function<bool(bool)> recalculate)
-{
- int n_keep = shouldAddBOS();
- const int32_t n_discard = (promptCtx.n_ctx - n_keep) * promptCtx.contextErase;
-
- // Erase the first percentage of context from the tokens
- std::cerr << implementation().modelType() << ": reached the end of the context window so resizing\n";
- promptCtx.tokens.erase(promptCtx.tokens.begin() + n_keep, promptCtx.tokens.begin() + n_keep + n_discard);
-
- size_t i = n_keep;
- promptCtx.n_past = n_keep;
- while (i < promptCtx.tokens.size()) {
- size_t batch_end = std::min(i + promptCtx.n_batch, promptCtx.tokens.size());
- std::vector<int32_t> batch(promptCtx.tokens.begin() + i, promptCtx.tokens.begin() + batch_end);
- assert(promptCtx.n_past + int32_t(batch.size()) <= promptCtx.n_ctx);
- if (!evalTokens(promptCtx, batch)) {
- std::cerr << "LLModel ERROR: Failed to process prompt\n";
- goto stop_generating;
- }
- promptCtx.n_past += batch.size();
- if (!recalculate(true))
- goto stop_generating;
- i = batch_end;
- }
- assert(promptCtx.n_past == int32_t(promptCtx.tokens.size()));
-
-stop_generating:
- recalculate(false);
-}
+namespace ranges = std::ranges;
static bool parsePromptTemplate(const std::string &tmpl, std::vector<std::smatch> &placeholders, std::string &err)
{
@@ -75,7 +42,7 @@ void LLModel::prompt(const std::string &prompt,
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &promptCtx,
bool special,
std::string *fakeReply)
@@ -92,12 +59,21 @@ void LLModel::prompt(const std::string &prompt,
return;
}
- // make sure token cache matches decode offset
- if (promptCtx.tokens.size() < promptCtx.n_past) {
+ // sanity checks
+ if (promptCtx.n_past > contextLength()) {
std::ostringstream ss;
- ss << "expected n_past to be at most " << promptCtx.tokens.size() << ", got " << promptCtx.n_past;
+ ss << "n_past=" << promptCtx.n_past << " is past end of context length=" << contextLength();
throw std::out_of_range(ss.str());
}
+ if (promptCtx.n_past > promptCtx.tokens.size()) {
+ std::ostringstream ss;
+ ss << "n_past=" << promptCtx.n_past << " is past end of token cache length=" << promptCtx.tokens.size();
+ throw std::out_of_range(ss.str());
+ }
+
+ promptCtx.n_ctx = contextLength();
+ promptCtx.n_batch = std::min(promptCtx.n_batch, LLMODEL_MAX_PROMPT_BATCH);
+
if (promptCtx.n_past < promptCtx.tokens.size())
promptCtx.tokens.resize(promptCtx.n_past);
m_tokenize_last_token = promptCtx.tokens.empty() ? -1 : promptCtx.tokens.back(); // not serialized
@@ -149,15 +125,15 @@ void LLModel::prompt(const std::string &prompt,
promptCtx.n_past = old_n_past; // restore n_past so decodePrompt can increment it
// decode the user prompt
- if (!decodePrompt(promptCallback, responseCallback, recalculateCallback, promptCtx, embd_inp))
+ if (!decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp))
return; // error
// decode the assistant's reply, either generated or spoofed
if (fakeReply == nullptr) {
- generateResponse(responseCallback, recalculateCallback, promptCtx);
+ generateResponse(responseCallback, allowContextShift, promptCtx);
} else {
embd_inp = tokenize(promptCtx, *fakeReply, false);
- if (!decodePrompt(promptCallback, responseCallback, recalculateCallback, promptCtx, embd_inp))
+ if (!decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp))
return; // error
}
@@ -172,19 +148,16 @@ void LLModel::prompt(const std::string &prompt,
}
if (!asstSuffix.empty()) {
embd_inp = tokenize(promptCtx, asstSuffix, true);
- decodePrompt(promptCallback, responseCallback, recalculateCallback, promptCtx, embd_inp);
+ decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp);
}
}
// returns false on error
bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &promptCtx,
std::vector<Token> embd_inp) {
- // save the context size
- promptCtx.n_ctx = contextLength();
-
if ((int) embd_inp.size() > promptCtx.n_ctx - 4) {
responseCallback(-1, "ERROR: The prompt size exceeds the context window size and cannot be processed.");
std::cerr << implementation().modelType() << " ERROR: The prompt is " << embd_inp.size() <<
@@ -192,9 +165,14 @@ bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
return false;
}
- promptCtx.n_predict = std::min(promptCtx.n_predict, promptCtx.n_ctx - (int) embd_inp.size());
- promptCtx.n_past = std::min(promptCtx.n_past, promptCtx.n_ctx);
- promptCtx.n_batch = std::min(promptCtx.n_batch, LLMODEL_MAX_PROMPT_BATCH);
+ // FIXME(jared): There are mitigations for this situation, such as making room before
+ // copying the prompt context, or restoring the KV cache when we restore the prompt
+ // context.
+ if (!allowContextShift && promptCtx.n_past + embd_inp.size() > promptCtx.n_ctx) {
+ std::cerr << "LLModel Warning: Not enough space, n_past=" << promptCtx.n_past << ", n_eval=" << embd_inp.size()
+ << ", n_ctx=" << promptCtx.n_ctx << "\n";
+ return false;
+ }
// process the prompt in batches
size_t i = 0;
@@ -204,7 +182,8 @@ bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
// Check if the context has run out...
if (promptCtx.n_past + int32_t(batch.size()) > promptCtx.n_ctx) {
- recalculateContext(promptCtx, recalculateCallback);
+ assert(allowContextShift);
+ shiftContext(promptCtx);
assert(promptCtx.n_past + int32_t(batch.size()) <= promptCtx.n_ctx);
}
@@ -226,70 +205,170 @@ bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
return true;
}
+/*
+ * If string s overlaps with the string key such that some prefix of the key is at the end
+ * of the string, return the position in s where the first match starts. Otherwise, return
+ * std::string::npos. Examples:
+ * s = "bfo", key = "foo" -> 1
+ * s = "fooa", key = "foo" -> npos
+ */
+static std::string::size_type stringsOverlap(const std::string &s, const std::string &key)
+{
+ if (s.empty() || key.empty())
+ throw std::invalid_argument("arguments to stringsOverlap must not be empty");
+
+ for (int start = std::max(0, int(s.size()) - int(key.size())); start < s.size(); start++) {
+ if (s.compare(start, s.size(), key, 0, s.size() - start) == 0)
+ return start;
+ }
+ return std::string::npos;
+}
+
void LLModel::generateResponse(std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function recalculateCallback,
+ bool allowContextShift,
PromptContext &promptCtx) {
+ static const char *stopSequences[] {
+ "### Instruction", "### Prompt", "### Response", "### Human", "### Assistant", "### Context",
+ };
+
+ // Don't even start if there is no room
+ if (!promptCtx.n_predict)
+ return;
+ if (!allowContextShift && promptCtx.n_past >= promptCtx.n_ctx) {
+ std::cerr << "LLModel Warning: Not enough space, n_past=" << promptCtx.n_past << ", n_ctx=" << promptCtx.n_ctx
+ << "\n";
+ return;
+ }
+
std::string cachedResponse;
std::vector<Token> cachedTokens;
- std::unordered_set<std::string> reversePrompts
- = { "### Instruction", "### Prompt", "### Response", "### Human", "### Assistant", "### Context" };
+ int n_predicted = 0;
- // predict next tokens
- for (int i = 0; i < promptCtx.n_predict; i++) {
+ // Predict next tokens
+ for (bool stop = false; !stop;) {
+ // Sample next token
+ std::optional<Token> new_tok = sampleToken(promptCtx);
+ std::string new_piece = tokenToString(new_tok.value());
+ cachedTokens.push_back(new_tok.value());
+ cachedResponse += new_piece;
- // sample next token
- auto id = sampleToken(promptCtx);
+ auto accept = [this, &promptCtx, &cachedTokens, &new_tok, allowContextShift]() -> bool {
+ // Shift context if out of space
+ if (promptCtx.n_past >= promptCtx.n_ctx) {
+ (void)allowContextShift;
+ assert(allowContextShift);
+ shiftContext(promptCtx);
+ assert(promptCtx.n_past < promptCtx.n_ctx);
+ }
- // Check if the context has run out...
- if (promptCtx.n_past + 1 > promptCtx.n_ctx) {
- recalculateContext(promptCtx, recalculateCallback);
- assert(promptCtx.n_past + 1 <= promptCtx.n_ctx);
- }
+ // Accept the token
+ Token tok = std::exchange(new_tok, std::nullopt).value();
+ if (!evalTokens(promptCtx, { tok })) {
+ // TODO(jared): raise an exception
+ std::cerr << implementation().modelType() << " ERROR: Failed to predict next token\n";
+ return false;
+ }
- if (!evalTokens(promptCtx, { id })) {
- std::cerr << implementation().modelType() << " ERROR: Failed to predict next token\n";
- return;
- }
+ promptCtx.tokens.push_back(tok);
+ promptCtx.n_past += 1;
+ return true;
+ };
- // display text
+ // Check for EOS
+ auto lengthLimit = std::string::npos;
for (const auto token : endTokens()) {
- if (id == token) return;
- }
-
- const std::string str = tokenToString(id);
-
- // Check if the provided str is part of our reverse prompts
- bool foundPartialReversePrompt = false;
- const std::string completed = cachedResponse + std::string(str);
- if (reversePrompts.find(completed) != reversePrompts.end())
- return;
-
- // Check if it partially matches our reverse prompts and if so, cache
- for (const auto& s : reversePrompts) {
- if (s.compare(0, completed.size(), completed) == 0) {
- foundPartialReversePrompt = true;
- cachedResponse = completed;
- break;
+ if (new_tok == token) {
+ stop = true;
+ lengthLimit = cachedResponse.size() - new_piece.size();
}
}
- // Regardless the token gets added to our cache
- cachedTokens.push_back(id);
+ if (lengthLimit != std::string::npos) {
+ // EOS matched
+ } else if (!isSpecialToken(new_tok.value())) {
+ // Check if the response contains a stop sequence
+ for (const auto &p : stopSequences) {
+ auto match = cachedResponse.find(p);
+ if (match != std::string::npos) stop = true;
+ lengthLimit = std::min(lengthLimit, match);
+ if (match == 0) break;
+ }
- // Continue if we have found a partial match
- if (foundPartialReversePrompt)
- continue;
-
- // Empty the cache
- for (auto t : cachedTokens) {
- promptCtx.tokens.push_back(t);
- promptCtx.n_past += 1;
- //TODO: Conversion to std::string can be avoided here...
- if (!responseCallback(t, std::string(tokenToString(t))))
- return;
+ // Check if the response matches the start of a stop sequence
+ if (lengthLimit == std::string::npos) {
+ for (const auto &p : stopSequences) {
+ auto match = stringsOverlap(cachedResponse, p);
+ lengthLimit = std::min(lengthLimit, match);
+ if (match == 0) break;
+ }
+ }
+ } else if (ranges::contains(stopSequences, new_piece)) {
+ // Special tokens must exactly match a stop sequence
+ stop = true;
+ lengthLimit = cachedResponse.size() - new_piece.size();
+ }
+
+ // Optionally stop if the context will run out
+ if (!allowContextShift && promptCtx.n_past + cachedTokens.size() >= promptCtx.n_ctx) {
+ std::cerr << "LLModel Warning: Not enough space, n_past=" << promptCtx.n_past << ", n_ctx="
+ << promptCtx.n_ctx << "\n";
+ stop = true;
+ }
+
+ // Empty the cache, up to the length limit
+ std::string::size_type responseLength = 0;
+ while (!cachedTokens.empty()) {
+ Token tok = cachedTokens.front();
+ std::string piece = tokenToString(tok);
+
+ // Stop if the piece (or part of it) does not fit within the length limit
+ if (responseLength + (stop ? 1 : piece.size()) > lengthLimit)
+ break;
+
+ // Remove token from cache
+ assert(cachedResponse.starts_with(piece));
+ cachedTokens.erase(cachedTokens.begin(), cachedTokens.begin() + 1);
+ cachedResponse.erase(cachedResponse.begin(), cachedResponse.begin() + piece.size());
+
+ // Accept the token, if needed (not cached)
+ if (cachedTokens.empty() && new_tok && !accept())
+ return;
+
+ // Send the token
+ if (!responseCallback(tok, piece) || ++n_predicted >= promptCtx.n_predict) {
+ stop = true;
+ break;
+ }
+
+ // FIXME(jared): we could avoid printing partial stop sequences if we didn't have to
+ // output token IDs and could cache a partial token for the next prompt call
+ responseLength += piece.size();
+ }
+ assert(cachedTokens.empty() == cachedResponse.empty());
+
+ // Accept the token, if needed (in cache)
+ if (new_tok) {
+ assert(!cachedTokens.empty() && cachedTokens.back() == new_tok);
+ if (stop) {
+ cachedTokens.pop_back();
+ } else if (!accept()) {
+ return;
+ }
}
- cachedTokens.clear();
}
+
+ auto &tokens = promptCtx.tokens;
+ if (tokens.size() < cachedTokens.size()) {
+ /* This is theoretically possible if the longest stop sequence is greater than
+ * n_ctx * contextErase tokens. */
+ throw std::runtime_error("shifted too much context, can't go back");
+ }
+
+ auto discard_start = tokens.end() - cachedTokens.size();
+ assert(std::equal(discard_start, tokens.end(), cachedTokens.begin()));
+ tokens.erase(discard_start, tokens.end());
+
+ promptCtx.n_past -= cachedTokens.size();
}
void LLModel::embed(
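The generateResponse rewrite above withholds output while it could still grow into
a stop sequence. A self-contained sketch of that buffering idea, using a simplified
stringsOverlap and an invented driver; the real code additionally tracks token IDs,
special tokens, and n_predict:

    #include <algorithm>
    #include <iostream>
    #include <string>
    #include <vector>

    // as in the hunk above: where does a prefix of key begin at the end of s?
    static std::string::size_type stringsOverlap(const std::string &s, const std::string &key)
    {
        for (int start = std::max(0, int(s.size()) - int(key.size())); start < int(s.size()); start++) {
            if (s.compare(start, s.size(), key, 0, s.size() - start) == 0)
                return start;
        }
        return std::string::npos;
    }

    int main()
    {
        const std::string stopSeq = "### Human";
        const std::vector<std::string> pieces = {"Hi", "!", " ##", "# Hu", "man:"};
        std::string cached; // response text not yet shown to the user

        for (const std::string &piece : pieces) {
            cached += piece;
            auto match = cached.find(stopSeq);
            if (match != std::string::npos) { // full match: truncate and stop
                std::cout << cached.substr(0, match);
                break;
            }
            // partial match at the end: emit only the safe prefix, keep the rest cached
            auto safe = std::min(stringsOverlap(cached, stopSeq), cached.size());
            std::cout << cached.substr(0, safe);
            cached.erase(0, safe);
        }
        std::cout << "\n"; // prints "Hi! " -- the stop sequence never reaches the user
    }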
diff --git a/gpt4all-bindings/python/gpt4all/_pyllmodel.py b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
index 4c52a5b1..a4952fe3 100644
--- a/gpt4all-bindings/python/gpt4all/_pyllmodel.py
+++ b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
@@ -128,7 +128,6 @@ llmodel.llmodel_isModelLoaded.restype = ctypes.c_bool
PromptCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.c_int32)
ResponseCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.c_int32, ctypes.c_char_p)
-RecalculateCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.c_bool)
EmbCancelCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.POINTER(ctypes.c_uint), ctypes.c_uint, ctypes.c_char_p)
llmodel.llmodel_prompt.argtypes = [
@@ -137,7 +136,7 @@ llmodel.llmodel_prompt.argtypes = [
ctypes.c_char_p,
PromptCallback,
ResponseCallback,
- RecalculateCallback,
+ ctypes.c_bool,
ctypes.POINTER(LLModelPromptContext),
ctypes.c_bool,
ctypes.c_char_p,
@@ -513,7 +512,7 @@ class LLModel:
ctypes.c_char_p(prompt_template.encode()),
PromptCallback(self._prompt_callback),
ResponseCallback(self._callback_decoder(callback)),
- RecalculateCallback(self._recalculate_callback),
+ True,
self.context,
special,
ctypes.c_char_p(),
@@ -606,8 +605,3 @@ class LLModel:
@staticmethod
def _prompt_callback(token_id: int) -> bool:
return True
-
- # Empty recalculate callback
- @staticmethod
- def _recalculate_callback(is_recalculating: bool) -> bool:
- return is_recalculating
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index e19b1c4e..07acef15 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -1,7 +1,7 @@
cmake_minimum_required(VERSION 3.16)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
-set(CMAKE_CXX_STANDARD 20)
+set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(APPLE)
@@ -31,7 +31,6 @@ project(gpt4all VERSION ${APP_VERSION_BASE} LANGUAGES CXX C)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
-set(CMAKE_CXX_STANDARD_REQUIRED ON)
option(GPT4ALL_TRANSLATIONS OFF "Build with translations")
option(GPT4ALL_LOCALHOST OFF "Build installer for localhost repo")
diff --git a/gpt4all-chat/chat.cpp b/gpt4all-chat/chat.cpp
index 2747ff9b..a44022c0 100644
--- a/gpt4all-chat/chat.cpp
+++ b/gpt4all-chat/chat.cpp
@@ -62,7 +62,7 @@ void Chat::connectLLM()
connect(m_llmodel, &ChatLLM::responseStopped, this, &Chat::responseStopped, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::modelLoadingError, this, &Chat::handleModelLoadingError, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::modelLoadingWarning, this, &Chat::modelLoadingWarning, Qt::QueuedConnection);
- connect(m_llmodel, &ChatLLM::recalcChanged, this, &Chat::handleRecalculating, Qt::QueuedConnection);
+ connect(m_llmodel, &ChatLLM::restoringFromTextChanged, this, &Chat::handleRestoringFromText, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::generatedNameChanged, this, &Chat::generatedNameChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::generatedQuestionFinished, this, &Chat::generatedQuestionFinished, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::reportSpeed, this, &Chat::handleTokenSpeedChanged, Qt::QueuedConnection);
@@ -252,9 +252,9 @@ void Chat::serverNewPromptResponsePair(const QString &prompt)
m_chatModel->appendResponse("Response: ", prompt);
}
-bool Chat::isRecalc() const
+bool Chat::restoringFromText() const
{
- return m_llmodel->isRecalc();
+ return m_llmodel->restoringFromText();
}
void Chat::unloadAndDeleteLater()
@@ -320,10 +320,10 @@ void Chat::generatedQuestionFinished(const QString &question)
emit generatedQuestionsChanged();
}
-void Chat::handleRecalculating()
+void Chat::handleRestoringFromText()
{
Network::globalInstance()->trackChatEvent("recalc_context", { {"length", m_chatModel->count()} });
- emit recalcChanged();
+ emit restoringFromTextChanged();
}
void Chat::handleModelLoadingError(const QString &error)
diff --git a/gpt4all-chat/chat.h b/gpt4all-chat/chat.h
index c9b95f55..065c624e 100644
--- a/gpt4all-chat/chat.h
+++ b/gpt4all-chat/chat.h
@@ -27,7 +27,7 @@ class Chat : public QObject
Q_PROPERTY(QString response READ response NOTIFY responseChanged)
Q_PROPERTY(ModelInfo modelInfo READ modelInfo WRITE setModelInfo NOTIFY modelInfoChanged)
Q_PROPERTY(bool responseInProgress READ responseInProgress NOTIFY responseInProgressChanged)
- Q_PROPERTY(bool isRecalc READ isRecalc NOTIFY recalcChanged)
+ Q_PROPERTY(bool restoringFromText READ restoringFromText NOTIFY restoringFromTextChanged)
Q_PROPERTY(bool isServer READ isServer NOTIFY isServerChanged)
Q_PROPERTY(ResponseState responseState READ responseState NOTIFY responseStateChanged)
Q_PROPERTY(QList<QString> collectionList READ collectionList NOTIFY collectionListChanged)
@@ -88,7 +88,7 @@ public:
ResponseState responseState() const;
ModelInfo modelInfo() const;
void setModelInfo(const ModelInfo &modelInfo);
- bool isRecalc() const;
+ bool restoringFromText() const;
Q_INVOKABLE void unloadModel();
Q_INVOKABLE void reloadModel();
@@ -144,7 +144,7 @@ Q_SIGNALS:
void processSystemPromptRequested();
void modelChangeRequested(const ModelInfo &modelInfo);
void modelInfoChanged();
- void recalcChanged();
+ void restoringFromTextChanged();
void loadDefaultModelRequested();
void loadModelRequested(const ModelInfo &modelInfo);
void generateNameRequested();
@@ -167,7 +167,7 @@ private Q_SLOTS:
void responseStopped(qint64 promptResponseMs);
void generatedNameChanged(const QString &name);
void generatedQuestionFinished(const QString &question);
- void handleRecalculating();
+ void handleRestoringFromText();
void handleModelLoadingError(const QString &error);
void handleTokenSpeedChanged(const QString &tokenSpeed);
void handleDatabaseResultsChanged(const QList<ResultInfo> &results);
diff --git a/gpt4all-chat/chatapi.cpp b/gpt4all-chat/chatapi.cpp
index 1cf94173..b443f24c 100644
--- a/gpt4all-chat/chatapi.cpp
+++ b/gpt4all-chat/chatapi.cpp
@@ -90,13 +90,13 @@ void ChatAPI::prompt(const std::string &prompt,
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &promptCtx,
bool special,
std::string *fakeReply) {
Q_UNUSED(promptCallback);
- Q_UNUSED(recalculateCallback);
+ Q_UNUSED(allowContextShift);
Q_UNUSED(special);
if (!isModelLoaded()) {
diff --git a/gpt4all-chat/chatapi.h b/gpt4all-chat/chatapi.h
index 51ba8067..59b68f58 100644
--- a/gpt4all-chat/chatapi.h
+++ b/gpt4all-chat/chatapi.h
@@ -69,7 +69,7 @@ public:
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
- std::function<bool(bool)> recalculateCallback,
+ bool allowContextShift,
PromptContext &ctx,
bool special,
std::string *fakeReply) override;
@@ -97,38 +97,57 @@ protected:
// them as they are only called from the default implementation of 'prompt' which we override and
// completely replace
- std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special) override {
+ std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special) override
+ {
(void)ctx;
(void)str;
(void)special;
throw std::logic_error("not implemented");
}
- std::string tokenToString(Token id) const override {
+ bool isSpecialToken(Token id) const override
+ {
(void)id;
throw std::logic_error("not implemented");
}
- Token sampleToken(PromptContext &ctx) const override {
+ std::string tokenToString(Token id) const override
+ {
+ (void)id;
+ throw std::logic_error("not implemented");
+ }
+
+ Token sampleToken(PromptContext &ctx) const override
+ {
(void)ctx;
throw std::logic_error("not implemented");
}
- bool evalTokens(PromptContext &ctx, const std::vector<Token> &tokens) const override {
+ bool evalTokens(PromptContext &ctx, const std::vector<Token> &tokens) const override
+ {
(void)ctx;
(void)tokens;
throw std::logic_error("not implemented");
}
- int32_t contextLength() const override {
+ void shiftContext(PromptContext &promptCtx) override
+ {
+ (void)promptCtx;
throw std::logic_error("not implemented");
}
- const std::vector<Token> &endTokens() const override {
+ int32_t contextLength() const override
+ {
throw std::logic_error("not implemented");
}
- bool shouldAddBOS() const override {
+ const std::vector<Token> &endTokens() const override
+ {
+ throw std::logic_error("not implemented");
+ }
+
+ bool shouldAddBOS() const override
+ {
throw std::logic_error("not implemented");
}
diff --git a/gpt4all-chat/chatllm.cpp b/gpt4all-chat/chatllm.cpp
index 18319cee..e9fb7f31 100644
--- a/gpt4all-chat/chatllm.cpp
+++ b/gpt4all-chat/chatllm.cpp
@@ -102,7 +102,7 @@ ChatLLM::ChatLLM(Chat *parent, bool isServer)
: QObject{nullptr}
, m_promptResponseTokens(0)
, m_promptTokens(0)
- , m_isRecalc(false)
+ , m_restoringFromText(false)
, m_shouldBeLoaded(false)
, m_forceUnloadModel(false)
, m_markedForDeletion(false)
@@ -712,17 +712,6 @@ bool ChatLLM::handleResponse(int32_t token, const std::string &response)
return !m_stopGenerating;
}
-bool ChatLLM::handleRecalculate(bool isRecalc)
-{
-#if defined(DEBUG)
- qDebug() << "recalculate" << m_llmThread.objectName() << isRecalc;
-#endif
- if (m_isRecalc != isRecalc) {
- m_isRecalc = isRecalc;
- emit recalcChanged();
- }
- return !m_stopGenerating;
-}
bool ChatLLM::prompt(const QList<QString> &collectionList, const QString &prompt)
{
if (m_restoreStateFromText) {
@@ -776,7 +765,6 @@ bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString
auto promptFunc = std::bind(&ChatLLM::handlePrompt, this, std::placeholders::_1);
auto responseFunc = std::bind(&ChatLLM::handleResponse, this, std::placeholders::_1,
std::placeholders::_2);
- auto recalcFunc = std::bind(&ChatLLM::handleRecalculate, this, std::placeholders::_1);
emit promptProcessing();
m_ctx.n_predict = n_predict;
m_ctx.top_k = top_k;
@@ -796,10 +784,12 @@ bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString
m_timer->start();
if (!docsContext.isEmpty()) {
auto old_n_predict = std::exchange(m_ctx.n_predict, 0); // decode localdocs context without a response
- m_llModelInfo.model->prompt(docsContext.toStdString(), "%1", promptFunc, responseFunc, recalcFunc, m_ctx);
+ m_llModelInfo.model->prompt(docsContext.toStdString(), "%1", promptFunc, responseFunc,
+ /*allowContextShift*/ true, m_ctx);
m_ctx.n_predict = old_n_predict; // now we are ready for a response
}
- m_llModelInfo.model->prompt(prompt.toStdString(), promptTemplate.toStdString(), promptFunc, responseFunc, recalcFunc, m_ctx);
+ m_llModelInfo.model->prompt(prompt.toStdString(), promptTemplate.toStdString(), promptFunc, responseFunc,
+ /*allowContextShift*/ true, m_ctx);
#if defined(DEBUG)
printf("\n");
fflush(stdout);
@@ -904,10 +894,9 @@ void ChatLLM::generateName()
auto promptTemplate = MySettings::globalInstance()->modelPromptTemplate(m_modelInfo);
auto promptFunc = std::bind(&ChatLLM::handleNamePrompt, this, std::placeholders::_1);
auto responseFunc = std::bind(&ChatLLM::handleNameResponse, this, std::placeholders::_1, std::placeholders::_2);
- auto recalcFunc = std::bind(&ChatLLM::handleNameRecalculate, this, std::placeholders::_1);
LLModel::PromptContext ctx = m_ctx;
m_llModelInfo.model->prompt(chatNamePrompt.toStdString(), promptTemplate.toStdString(),
- promptFunc, responseFunc, recalcFunc, ctx);
+ promptFunc, responseFunc, /*allowContextShift*/ false, ctx);
std::string trimmed = trim_whitespace(m_nameResponse);
if (trimmed != m_nameResponse) {
m_nameResponse = trimmed;
@@ -944,15 +933,6 @@ bool ChatLLM::handleNameResponse(int32_t token, const std::string &response)
return words.size() <= 3;
}
-bool ChatLLM::handleNameRecalculate(bool isRecalc)
-{
-#if defined(DEBUG)
- qDebug() << "name recalc" << m_llmThread.objectName() << isRecalc;
-#endif
- Q_UNUSED(isRecalc);
- return true;
-}
-
bool ChatLLM::handleQuestionPrompt(int32_t token)
{
#if defined(DEBUG)
@@ -991,15 +971,6 @@ bool ChatLLM::handleQuestionResponse(int32_t token, const std::string &response)
return true;
}
-bool ChatLLM::handleQuestionRecalculate(bool isRecalc)
-{
-#if defined(DEBUG)
- qDebug() << "name recalc" << m_llmThread.objectName() << isRecalc;
-#endif
- Q_UNUSED(isRecalc);
- return true;
-}
-
void ChatLLM::generateQuestions(qint64 elapsed)
{
Q_ASSERT(isModelLoaded());
@@ -1019,12 +990,11 @@ void ChatLLM::generateQuestions(qint64 elapsed)
auto promptTemplate = MySettings::globalInstance()->modelPromptTemplate(m_modelInfo);
auto promptFunc = std::bind(&ChatLLM::handleQuestionPrompt, this, std::placeholders::_1);
auto responseFunc = std::bind(&ChatLLM::handleQuestionResponse, this, std::placeholders::_1, std::placeholders::_2);
- auto recalcFunc = std::bind(&ChatLLM::handleQuestionRecalculate, this, std::placeholders::_1);
LLModel::PromptContext ctx = m_ctx;
QElapsedTimer totalTime;
totalTime.start();
- m_llModelInfo.model->prompt(suggestedFollowUpPrompt,
- promptTemplate.toStdString(), promptFunc, responseFunc, recalcFunc, ctx);
+ m_llModelInfo.model->prompt(suggestedFollowUpPrompt, promptTemplate.toStdString(), promptFunc, responseFunc,
+ /*allowContextShift*/ false, ctx);
elapsed += totalTime.elapsed();
emit responseStopped(elapsed);
}
@@ -1039,15 +1009,6 @@ bool ChatLLM::handleSystemPrompt(int32_t token)
return !m_stopGenerating;
}
-bool ChatLLM::handleSystemRecalculate(bool isRecalc)
-{
-#if defined(DEBUG)
- qDebug() << "system recalc" << m_llmThread.objectName() << isRecalc;
-#endif
- Q_UNUSED(isRecalc);
- return false;
-}
-
bool ChatLLM::handleRestoreStateFromTextPrompt(int32_t token)
{
#if defined(DEBUG)
@@ -1057,15 +1018,6 @@ bool ChatLLM::handleRestoreStateFromTextPrompt(int32_t token)
return !m_stopGenerating;
}
-bool ChatLLM::handleRestoreStateFromTextRecalculate(bool isRecalc)
-{
-#if defined(DEBUG)
- qDebug() << "restore state from text recalc" << m_llmThread.objectName() << isRecalc;
-#endif
- Q_UNUSED(isRecalc);
- return false;
-}
-
// this function serialized the cached model state to disk.
// we want to also serialize n_ctx, and read it at load time.
bool ChatLLM::serialize(QDataStream &stream, int version, bool serializeKV)
@@ -1268,7 +1220,6 @@ void ChatLLM::processSystemPrompt()
m_ctx = LLModel::PromptContext();
auto promptFunc = std::bind(&ChatLLM::handleSystemPrompt, this, std::placeholders::_1);
- auto recalcFunc = std::bind(&ChatLLM::handleSystemRecalculate, this, std::placeholders::_1);
const int32_t n_predict = MySettings::globalInstance()->modelMaxLength(m_modelInfo);
const int32_t top_k = MySettings::globalInstance()->modelTopK(m_modelInfo);
@@ -1294,7 +1245,7 @@ void ChatLLM::processSystemPrompt()
#endif
auto old_n_predict = std::exchange(m_ctx.n_predict, 0); // decode system prompt without a response
// use "%1%2" and not "%1" to avoid implicit whitespace
- m_llModelInfo.model->prompt(systemPrompt, "%1%2", promptFunc, nullptr, recalcFunc, m_ctx, true);
+ m_llModelInfo.model->prompt(systemPrompt, "%1%2", promptFunc, nullptr, /*allowContextShift*/ true, m_ctx, true);
m_ctx.n_predict = old_n_predict;
#if defined(DEBUG)
printf("\n");
@@ -1311,14 +1262,13 @@ void ChatLLM::processRestoreStateFromText()
if (!isModelLoaded() || !m_restoreStateFromText || m_isServer)
return;
- m_isRecalc = true;
- emit recalcChanged();
+ m_restoringFromText = true;
+ emit restoringFromTextChanged();
m_stopGenerating = false;
m_ctx = LLModel::PromptContext();
auto promptFunc = std::bind(&ChatLLM::handleRestoreStateFromTextPrompt, this, std::placeholders::_1);
- auto recalcFunc = std::bind(&ChatLLM::handleRestoreStateFromTextRecalculate, this, std::placeholders::_1);
const QString promptTemplate = MySettings::globalInstance()->modelPromptTemplate(m_modelInfo);
const int32_t n_predict = MySettings::globalInstance()->modelMaxLength(m_modelInfo);
@@ -1351,7 +1301,7 @@ void ChatLLM::processRestoreStateFromText()
auto responseText = response.second.toStdString();
m_llModelInfo.model->prompt(prompt.second.toStdString(), promptTemplate.toStdString(), promptFunc, nullptr,
- recalcFunc, m_ctx, false, &responseText);
+ /*allowContextShift*/ true, m_ctx, false, &responseText);
}
if (!m_stopGenerating) {
@@ -1359,8 +1309,8 @@ void ChatLLM::processRestoreStateFromText()
m_stateFromText.clear();
}
- m_isRecalc = false;
- emit recalcChanged();
+ m_restoringFromText = false;
+ emit restoringFromTextChanged();
m_pristineLoadedState = false;
}
diff --git a/gpt4all-chat/chatllm.h b/gpt4all-chat/chatllm.h
index 252b77bd..d123358a 100644
--- a/gpt4all-chat/chatllm.h
+++ b/gpt4all-chat/chatllm.h
@@ -93,7 +93,7 @@ class Chat;
class ChatLLM : public QObject
{
Q_OBJECT
- Q_PROPERTY(bool isRecalc READ isRecalc NOTIFY recalcChanged)
+ Q_PROPERTY(bool restoringFromText READ restoringFromText NOTIFY restoringFromTextChanged)
Q_PROPERTY(QString deviceBackend READ deviceBackend NOTIFY loadedModelInfoChanged)
Q_PROPERTY(QString device READ device NOTIFY loadedModelInfoChanged)
Q_PROPERTY(QString fallbackReason READ fallbackReason NOTIFY loadedModelInfoChanged)
@@ -121,7 +121,7 @@ public:
ModelInfo modelInfo() const;
void setModelInfo(const ModelInfo &info);
- bool isRecalc() const { return m_isRecalc; }
+ bool restoringFromText() const { return m_restoringFromText; }
void acquireModel();
void resetModel();
@@ -172,7 +172,7 @@ public Q_SLOTS:
void processRestoreStateFromText();
Q_SIGNALS:
- void recalcChanged();
+ void restoringFromTextChanged();
void loadedModelInfoChanged();
void modelLoadingPercentageChanged(float);
void modelLoadingError(const QString &error);
@@ -201,19 +201,14 @@ protected:
int32_t repeat_penalty_tokens);
bool handlePrompt(int32_t token);
bool handleResponse(int32_t token, const std::string &response);
- bool handleRecalculate(bool isRecalc);
bool handleNamePrompt(int32_t token);
bool handleNameResponse(int32_t token, const std::string &response);
- bool handleNameRecalculate(bool isRecalc);
bool handleSystemPrompt(int32_t token);
bool handleSystemResponse(int32_t token, const std::string &response);
- bool handleSystemRecalculate(bool isRecalc);
bool handleRestoreStateFromTextPrompt(int32_t token);
bool handleRestoreStateFromTextResponse(int32_t token, const std::string &response);
- bool handleRestoreStateFromTextRecalculate(bool isRecalc);
bool handleQuestionPrompt(int32_t token);
bool handleQuestionResponse(int32_t token, const std::string &response);
- bool handleQuestionRecalculate(bool isRecalc);
void saveState();
void restoreState();
@@ -236,7 +231,7 @@ private:
QThread m_llmThread;
std::atomic<bool> m_stopGenerating;
std::atomic<bool> m_shouldBeLoaded;
- std::atomic<bool> m_isRecalc;
+ std::atomic<bool> m_restoringFromText; // status indication
std::atomic<bool> m_forceUnloadModel;
std::atomic<bool> m_markedForDeletion;
bool m_isServer;
diff --git a/gpt4all-chat/qml/ChatView.qml b/gpt4all-chat/qml/ChatView.qml
index 2be8d75e..920e2759 100644
--- a/gpt4all-chat/qml/ChatView.qml
+++ b/gpt4all-chat/qml/ChatView.qml
@@ -834,7 +834,7 @@ Rectangle {
to: 360
duration: 1000
loops: Animation.Infinite
- running: currentResponse && (currentChat.responseInProgress || currentChat.isRecalc)
+ running: currentResponse && (currentChat.responseInProgress || currentChat.restoringFromText)
}
}
}
@@ -867,13 +867,13 @@ Rectangle {
color: theme.mutedTextColor
}
RowLayout {
- visible: currentResponse && ((value === "" && currentChat.responseInProgress) || currentChat.isRecalc)
+ visible: currentResponse && ((value === "" && currentChat.responseInProgress) || currentChat.restoringFromText)
Text {
color: theme.mutedTextColor
font.pixelSize: theme.fontSizeLarger
text: {
- if (currentChat.isRecalc)
- return qsTr("recalculating context ...");
+ if (currentChat.restoringFromText)
+ return qsTr("restoring from text ...");
switch (currentChat.responseState) {
case Chat.ResponseStopped: return qsTr("response stopped ...");
case Chat.LocalDocsRetrieval: return qsTr("retrieving localdocs: %1 ...").arg(currentChat.collectionList.join(", "));
@@ -1861,7 +1861,7 @@ Rectangle {
}
}
function sendMessage() {
- if (textInput.text === "" || currentChat.responseInProgress || currentChat.isRecalc)
+ if (textInput.text === "" || currentChat.responseInProgress || currentChat.restoringFromText)
return
currentChat.stopGenerating()
From de7cb36fccf1c0c88d7bd42385bf0cafe99ed910 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 7 Aug 2024 11:27:50 -0400
Subject: [PATCH 03/66] python: reduce size of wheels built by CI, other build
tweaks (#2802)
* Read CMAKE_CUDA_ARCHITECTURES directly
* Disable CUBINs for python build in CI
* Search for CUDA 11 as well as CUDA 12
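The CUDA 11 fallback in _pyllmodel.py amounts to probing runtime sonames in order
until one loads. A rough C++ (Linux/dlopen) equivalent of that probing; the sonames
mirror the Python change, everything else is illustrative:

    #include <dlfcn.h>
    #include <cstdio>

    int main()
    {
        // try the CUDA 12 runtime first, then fall back to CUDA 11
        const char *sonames[] = {"libcudart.so.12", "libcudart.so.11.0"};

        bool cudaFound = false;
        for (const char *name : sonames) {
            // RTLD_GLOBAL so a backend library loaded later can resolve the symbols
            if (dlopen(name, RTLD_NOW | RTLD_GLOBAL)) {
                std::printf("loaded %s\n", name);
                cudaFound = true;
                break; // dlopen() reports failure without specific error codes; first success wins
            }
        }
        if (!cudaFound)
            std::printf("no CUDA runtime found; continuing without CUDA\n");
    }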
Signed-off-by: Jared Van Bortel
---
.circleci/continue_config.yml | 10 ++++--
gpt4all-backend/CMakeLists.txt | 4 ---
gpt4all-backend/llama.cpp.cmake | 13 +++-----
gpt4all-bindings/python/CHANGELOG.md | 8 +++++
gpt4all-bindings/python/gpt4all/_pyllmodel.py | 33 ++++++++++++-------
gpt4all-bindings/python/setup.py | 2 +-
6 files changed, 42 insertions(+), 28 deletions(-)
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index e71693a4..3b1f7e4a 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -881,7 +881,7 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
sudo apt-get update
- sudo apt-get install -y cmake build-essential vulkan-sdk cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server libmysqlclient21 libodbc2 libpq5
+ sudo apt-get install -y cmake build-essential vulkan-sdk cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server
pip install setuptools wheel cmake
- run:
name: Build C library
@@ -889,7 +889,9 @@ jobs:
export PATH=$PATH:/usr/local/cuda/bin
git submodule update --init --recursive
cd gpt4all-backend
- cmake -B build -DCMAKE_BUILD_TYPE=Release
+ cmake -B build -DCMAKE_BUILD_TYPE=Release \
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
+ -DCMAKE_CUDA_ARCHITECTURES='52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build -j$(nproc)
- run:
name: Build wheel
@@ -986,7 +988,9 @@ jobs:
$Env:PATH += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
cd gpt4all-backend
- cmake -G Ninja -B build -DCMAKE_BUILD_TYPE=Release -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
+ cmake -G Ninja -B build -DCMAKE_BUILD_TYPE=Release `
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
+ -DCMAKE_CUDA_ARCHITECTURES='52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build --parallel
- run:
name: Build wheel
diff --git a/gpt4all-backend/CMakeLists.txt b/gpt4all-backend/CMakeLists.txt
index 5862c726..e6210d74 100644
--- a/gpt4all-backend/CMakeLists.txt
+++ b/gpt4all-backend/CMakeLists.txt
@@ -63,10 +63,6 @@ if (LLMODEL_VULKAN)
list(APPEND BUILD_VARIANTS vulkan vulkan-avxonly)
endif()
if (LLMODEL_CUDA)
- if (DEFINED CMAKE_CUDA_ARCHITECTURES)
- set(GGML_CUDA_ARCHITECTURES "${CMAKE_CUDA_ARCHITECTURES}")
- endif()
-
include(CheckLanguage)
check_language(CUDA)
if (NOT CMAKE_CUDA_COMPILER)
diff --git a/gpt4all-backend/llama.cpp.cmake b/gpt4all-backend/llama.cpp.cmake
index eac2dcf7..a541da03 100644
--- a/gpt4all-backend/llama.cpp.cmake
+++ b/gpt4all-backend/llama.cpp.cmake
@@ -378,19 +378,19 @@ function(include_ggml SUFFIX)
find_package(CUDAToolkit REQUIRED)
set(CUDAToolkit_BIN_DIR ${CUDAToolkit_BIN_DIR} PARENT_SCOPE)
- if (NOT DEFINED GGML_CUDA_ARCHITECTURES)
+ if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
# 52 == lowest CUDA 12 standard
# 60 == f16 CUDA intrinsics
# 61 == integer CUDA intrinsics
# 70 == compute capability at which unrolling a loop in mul_mat_q kernels is faster
if (GGML_CUDA_F16 OR GGML_CUDA_DMMV_F16)
- set(GGML_CUDA_ARCHITECTURES "60;61;70;75") # needed for f16 CUDA intrinsics
+ set(CMAKE_CUDA_ARCHITECTURES "60;61;70;75") # needed for f16 CUDA intrinsics
else()
- set(GGML_CUDA_ARCHITECTURES "52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
- #set(GGML_CUDA_ARCHITECTURES "OFF") # use this to compile much faster, but only F16 models work
+ set(CMAKE_CUDA_ARCHITECTURES "52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
+ #set(CMAKE_CUDA_ARCHITECTURES "OFF") # use this to compile much faster, but only F16 models work
endif()
endif()
- message(STATUS "Using CUDA architectures: ${GGML_CUDA_ARCHITECTURES}")
+ message(STATUS "Using CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")
set(GGML_HEADERS_CUDA ${DIRECTORY}/ggml/include/ggml-cuda.h)
file(GLOB GGML_HEADERS_CUDA "${DIRECTORY}/ggml/src/ggml-cuda/*.cuh")
@@ -1018,9 +1018,6 @@ function(include_ggml SUFFIX)
C_STANDARD 11
C_STANDARD_REQUIRED true
)
- if (GGML_CUDA_ARCHITECTURES)
- set_property(TARGET ggml${SUFFIX} llama${SUFFIX} PROPERTY CUDA_ARCHITECTURES "${GGML_CUDA_ARCHITECTURES}")
- endif()
target_compile_options(ggml${SUFFIX} PRIVATE "${GGML_COMPILE_OPTS}")
target_compile_options(llama${SUFFIX} PRIVATE "${GGML_COMPILE_OPTS}")
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index fce39350..20c8eece 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -4,6 +4,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+## [Unreleased]
+
+### Changed
+- Search for pip-installed CUDA 11 as well as CUDA 12 ([#2802](https://github.com/nomic-ai/gpt4all/pull/2802))
+- Stop shipping CUBINs to reduce wheel size ([#2802](https://github.com/nomic-ai/gpt4all/pull/2802))
+
## [2.8.0] - 2024-08-05
### Added
@@ -16,6 +22,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Detect use of a Python interpreter under Rosetta for a clearer error message ([#2793](https://github.com/nomic-ai/gpt4all/pull/2793))
### Changed
+- Build against CUDA 11.8 instead of CUDA 12 for better compatibility with older drivers ([#2639](https://github.com/nomic-ai/gpt4all/pull/2639))
- Update llama.cpp to commit 87e397d00 from July 19th ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
### Removed
@@ -33,4 +40,5 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Restore leading space removal logic that was incorrectly removed in [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
- CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
+[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.0...HEAD
[2.8.0]: https://github.com/nomic-ai/gpt4all/compare/python-v2.7.0...python-v2.8.0
diff --git a/gpt4all-bindings/python/gpt4all/_pyllmodel.py b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
index a4952fe3..88be2949 100644
--- a/gpt4all-bindings/python/gpt4all/_pyllmodel.py
+++ b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
@@ -39,25 +39,34 @@ if platform.system() == "Darwin" and platform.processor() == "i386":
Please install GPT4All in an environment that uses a native ARM64 Python interpreter.
"""))
+
+def _load_cuda(rtver: str, blasver: str) -> None:
+ if platform.system() == "Linux":
+ cudalib = f"lib/libcudart.so.{rtver}"
+ cublaslib = f"lib/libcublas.so.{blasver}"
+ else: # Windows
+ cudalib = fr"bin\cudart64_{rtver.replace(".", "")}.dll"
+ cublaslib = fr"bin\cublas64_{blasver}.dll"
+
+ # preload the CUDA libs so the backend can find them
+ ctypes.CDLL(os.path.join(cuda_runtime.__path__[0], cudalib), mode=ctypes.RTLD_GLOBAL)
+ ctypes.CDLL(os.path.join(cublas.__path__[0], cublaslib), mode=ctypes.RTLD_GLOBAL)
+
+
# Find CUDA libraries from the official packages
cuda_found = False
-if platform.system() in ('Linux', 'Windows'):
+if platform.system() in ("Linux", "Windows"):
try:
from nvidia import cuda_runtime, cublas
except ImportError:
pass # CUDA is optional
else:
- if platform.system() == 'Linux':
- cudalib = 'lib/libcudart.so.12'
- cublaslib = 'lib/libcublas.so.12'
- else: # Windows
- cudalib = r'bin\cudart64_12.dll'
- cublaslib = r'bin\cublas64_12.dll'
-
- # preload the CUDA libs so the backend can find them
- ctypes.CDLL(os.path.join(cuda_runtime.__path__[0], cudalib), mode=ctypes.RTLD_GLOBAL)
- ctypes.CDLL(os.path.join(cublas.__path__[0], cublaslib), mode=ctypes.RTLD_GLOBAL)
- cuda_found = True
+ for rtver, blasver in [("12", "12"), ("11.0", "11")]:
+ try:
+ _load_cuda(rtver, blasver)
+ cuda_found = True
+ except OSError: # dlopen() does not give specific error codes
+ pass # try the next one
# TODO: provide a config file to make this more robust
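The loop above leans on the fact noted in the comment: dlopen() reports failure only through an error string, not structured error codes, so the bindings simply probe each CUDA version in order. A rough C++ analogue of the same probe-in-order pattern (a Linux-only sketch, not part of this patch):

    #include <dlfcn.h>
    #include <cstdio>

    // RTLD_GLOBAL makes the loaded library's symbols visible to libraries
    // dlopen()ed afterwards, which is why preloading lets the backend find CUDA.
    static void *load_first(const char *const *names, int count) {
        for (int i = 0; i < count; ++i) {
            if (void *handle = dlopen(names[i], RTLD_NOW | RTLD_GLOBAL))
                return handle;
            // dlopen() gives no error codes, only a message; try the next one
            std::fprintf(stderr, "could not load %s: %s\n", names[i], dlerror());
        }
        return nullptr;
    }

    int main() {
        const char *names[] = { "libcudart.so.12", "libcudart.so.11.0" };
        return load_first(names, 2) ? 0 : 1;
    }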
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index 1fe93873..e92fba61 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -68,7 +68,7 @@ def get_long_description():
setup(
name=package_name,
- version="2.8.0",
+ version="2.8.1.dev0",
description="Python bindings for GPT4All",
long_description=get_long_description(),
long_description_content_type="text/markdown",
From c950fdd84e31757d5f832acae9d483d6c4f101d1 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 7 Aug 2024 18:59:57 -0400
Subject: [PATCH 04/66] changelogs: add PR 2781 (#2809)
Signed-off-by: Jared Van Bortel
---
gpt4all-bindings/python/CHANGELOG.md | 5 +++++
gpt4all-chat/CHANGELOG.md | 4 ++++
2 files changed, 9 insertions(+)
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 20c8eece..2e5be01f 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -9,6 +9,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Changed
- Search for pip-installed CUDA 11 as well as CUDA 12 ([#2802](https://github.com/nomic-ai/gpt4all/pull/2802))
- Stop shipping CUBINs to reduce wheel size ([#2802](https://github.com/nomic-ai/gpt4all/pull/2802))
+- Use llama\_kv\_cache ops to shift context faster ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
+- Don't stop generating at end of context ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
+
+### Fixed
+- Make reverse prompt detection work more reliably and prevent it from breaking output ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
## [2.8.0] - 2024-08-05
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 114320a7..4084b4a0 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -11,6 +11,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Changed
- Add missing entries to Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2783](https://github.com/nomic-ai/gpt4all/pull/2783))
+- Use llama\_kv\_cache ops to shift context faster ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
+- Don't stop generating at end of context ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
### Fixed
- Case-insensitive LocalDocs source icon detection (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2761](https://github.com/nomic-ai/gpt4all/pull/2761))
@@ -18,6 +20,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix several backend issues ([#2778](https://github.com/nomic-ai/gpt4all/pull/2778))
- Restore leading space removal logic that was incorrectly removed in [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
- CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
+- Make reverse prompt detection work more reliably and prevent it from breaking output ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
+- Disallow context shift for chat name and follow-up generation to prevent bugs ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
## [3.1.1] - 2024-07-27
From 26113a17fbe17bd3edff8e6c56ee60e57d7f0def Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Thu, 8 Aug 2024 11:49:01 -0400
Subject: [PATCH 05/66] don't use ranges::contains due to clang incompatibility
(#2812)
Signed-off-by: Jared Van Bortel
---
gpt4all-backend/CMakeLists.txt | 2 +-
gpt4all-backend/llmodel_shared.cpp | 2 +-
gpt4all-chat/CMakeLists.txt | 2 +-
3 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/gpt4all-backend/CMakeLists.txt b/gpt4all-backend/CMakeLists.txt
index e6210d74..6b38b954 100644
--- a/gpt4all-backend/CMakeLists.txt
+++ b/gpt4all-backend/CMakeLists.txt
@@ -33,7 +33,7 @@ set(LLMODEL_VERSION_PATCH 0)
set(LLMODEL_VERSION "${LLMODEL_VERSION_MAJOR}.${LLMODEL_VERSION_MINOR}.${LLMODEL_VERSION_PATCH}")
project(llmodel VERSION ${LLMODEL_VERSION} LANGUAGES CXX C)
-set(CMAKE_CXX_STANDARD 23)
+set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
set(BUILD_SHARED_LIBS ON)
diff --git a/gpt4all-backend/llmodel_shared.cpp b/gpt4all-backend/llmodel_shared.cpp
index 7477254a..570f62c6 100644
--- a/gpt4all-backend/llmodel_shared.cpp
+++ b/gpt4all-backend/llmodel_shared.cpp
@@ -302,7 +302,7 @@ void LLModel::generateResponse(std::function
if (match == 0) break;
}
}
- } else if (ranges::contains(stopSequences, new_piece)) {
+ } else if (ranges::find(stopSequences, new_piece) < std::end(stopSequences)) {
// Special tokens must exactly match a stop sequence
stop = true;
lengthLimit = cachedResponse.size() - new_piece.size();
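This substitution is the crux of the patch: std::ranges::contains was only added in C++23, while std::ranges::find has been available since C++20, which is why the same patch also drops CMAKE_CXX_STANDARD from 23 to 20. The patch compares the result with < std::end(...); for a vector's iterators that is equivalent to the conventional != end() test, as in this sketch (not the project's actual code):

    #include <algorithm>
    #include <string>
    #include <vector>

    // C++20-compatible membership test: find-against-end instead of the
    // C++23-only std::ranges::contains.
    bool isExactStopSequence(const std::vector<std::string> &stopSequences,
                             const std::string &piece)
    {
        return std::ranges::find(stopSequences, piece) != stopSequences.end();
    }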
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 07acef15..09066a4b 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -1,7 +1,7 @@
cmake_minimum_required(VERSION 3.16)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
-set(CMAKE_CXX_STANDARD 23)
+set(CMAKE_CXX_STANDARD 20)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(APPLE)
From 0fcf1dda5f9593de885fa06f0f2d4afe36cc3648 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Thu, 8 Aug 2024 12:23:11 -0400
Subject: [PATCH 06/66] ci: update XCode for C++20 ranges::find (#2813)
Signed-off-by: Jared Van Bortel
---
.circleci/continue_config.yml | 18 +++++++++---------
1 file changed, 9 insertions(+), 9 deletions(-)
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index 3b1f7e4a..f56fc0f7 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -29,7 +29,7 @@ jobs:
- run: echo "CircleCI pipeline triggered"
build-offline-chat-installer-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
- run:
@@ -101,7 +101,7 @@ jobs:
sign-offline-chat-installer-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
# attach to a workspace containing unsigned dmg
@@ -136,7 +136,7 @@ jobs:
notarize-offline-chat-installer-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
- attach_workspace:
@@ -161,7 +161,7 @@ jobs:
build-online-chat-installer-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
- run:
@@ -235,7 +235,7 @@ jobs:
sign-online-chat-installer-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
# attach to a workspace containing unsigned dmg
@@ -270,7 +270,7 @@ jobs:
notarize-online-chat-installer-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
- attach_workspace:
@@ -782,7 +782,7 @@ jobs:
build-gpt4all-chat-macos:
macos:
- xcode: 14.0.0
+ xcode: 15.4.0
steps:
- checkout
- run:
@@ -1061,7 +1061,7 @@ jobs:
build-bindings-backend-macos:
macos:
- xcode: "14.0.0"
+ xcode: 15.4.0
steps:
- checkout
- run:
@@ -1167,7 +1167,7 @@ jobs:
- runtimes/linux-x64/*-*.so
build-nodejs-macos:
macos:
- xcode: "14.0.0"
+ xcode: 15.4.0
steps:
- checkout
- attach_workspace:
From d59b1331f9cbca25e3ad190d4c6579b372adcc89 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Thu, 8 Aug 2024 13:41:47 -0400
Subject: [PATCH 07/66] chat: translation tweaks (#2797)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CMakeLists.txt | 6 ++---
gpt4all-chat/mysettings.cpp | 27 ++++++++-----------
gpt4all-chat/mysettings.h | 3 ++-
.../{gpt4all_en.ts => gpt4all_en_US.ts} | 26 +++++++++---------
4 files changed, 29 insertions(+), 33 deletions(-)
rename gpt4all-chat/translations/{gpt4all_en.ts => gpt4all_en_US.ts} (99%)
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 09066a4b..1e7066a4 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -32,8 +32,8 @@ project(gpt4all VERSION ${APP_VERSION_BASE} LANGUAGES CXX C)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
-option(GPT4ALL_TRANSLATIONS OFF "Build with translations")
-option(GPT4ALL_LOCALHOST OFF "Build installer for localhost repo")
+option(GPT4ALL_TRANSLATIONS "Build with translations" OFF)
+option(GPT4ALL_LOCALHOST "Build installer for localhost repo" OFF)
option(GPT4ALL_OFFLINE_INSTALLER "Build an offline installer" OFF)
option(GPT4ALL_SIGN_INSTALL "Sign installed binaries and installers (requires signing identities)" OFF)
@@ -231,7 +231,7 @@ qt_add_qml_module(chat
if (GPT4ALL_TRANSLATIONS)
qt_add_translations(chat
TS_FILES
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_en.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_en_US.ts
${CMAKE_SOURCE_DIR}/translations/gpt4all_es_MX.ts
${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_CN.ts
${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_TW.ts
diff --git a/gpt4all-chat/mysettings.cpp b/gpt4all-chat/mysettings.cpp
index b29ec431..a1fc4fa4 100644
--- a/gpt4all-chat/mysettings.cpp
+++ b/gpt4all-chat/mysettings.cpp
@@ -593,7 +593,7 @@ QString MySettings::languageAndLocale() const
QString MySettings::filePathForLocale(const QLocale &locale)
{
// Check and see if we have a translation for the chosen locale and set it if possible otherwise
- // we return the filepath for the 'en' translation
+ // we return the filepath for the 'en_US' translation
QStringList uiLanguages = locale.uiLanguages();
for (int i = 0; i < uiLanguages.size(); ++i)
uiLanguages[i].replace('-', '_');
@@ -604,18 +604,18 @@ QString MySettings::filePathForLocale(const QLocale &locale)
// rather than having to recompile all of GPT4All
QString directory = modelPath();
for (const QString &bcp47Name : uiLanguages) {
- QString filePath = QString("%1/gpt4all_%2.qm").arg(directory).arg(bcp47Name);
+ QString filePath = u"%1/gpt4all_%2.qm"_s.arg(directory, bcp47Name);
QFileInfo filePathInfo(filePath);
if (filePathInfo.exists()) return filePath;
}
// Now scan the internal built-in translations
for (QString bcp47Name : uiLanguages) {
- QString filePath = QString(":/i18n/gpt4all_%1.qm").arg(bcp47Name);
+ QString filePath = u":/i18n/gpt4all_%1.qm"_s.arg(bcp47Name);
QFileInfo filePathInfo(filePath);
if (filePathInfo.exists()) return filePath;
}
- return QString(":/i18n/gpt4all_en.qm");
+ return u":/i18n/gpt4all_en_US.qm"_s;
}
void MySettings::setLanguageAndLocale(const QString &bcp47Name)
@@ -634,11 +634,10 @@ void MySettings::setLanguageAndLocale(const QString &bcp47Name)
// If we previously installed a translator, then remove it
if (m_translator) {
- if (!qGuiApp->removeTranslator(m_translator)) {
+ if (!qGuiApp->removeTranslator(m_translator.get())) {
qDebug() << "ERROR: Failed to remove the previous translator";
} else {
- delete m_translator;
- m_translator = nullptr;
+ m_translator.reset();
}
}
@@ -646,24 +645,20 @@ void MySettings::setLanguageAndLocale(const QString &bcp47Name)
Q_ASSERT(!m_translator);
const QString filePath = filePathForLocale(locale);
- // Installing the default gpt4all_en.qm fails presumably because it has no strings that are
- // different from the ones stored in the binary
- if (!m_translator && !filePath.endsWith("en.qm")) {
+ if (!m_translator) {
// Create a new translator object on the heap
- m_translator = new QTranslator(this);
+ m_translator = std::make_unique<QTranslator>(this);
bool success = m_translator->load(filePath);
Q_ASSERT(success);
if (!success) {
qDebug() << "ERROR: Failed to load translation file:" << filePath;
- delete m_translator;
- m_translator = nullptr;
+ m_translator.reset();
}
// If we've successfully loaded it, then try and install it
- if (!qGuiApp->installTranslator(m_translator)) {
+ if (!qGuiApp->installTranslator(m_translator.get())) {
qDebug() << "ERROR: Failed to install the translator:" << filePath;
- delete m_translator;
- m_translator = nullptr;
+ m_translator.reset();
}
}
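Taken together, the mysettings.cpp changes tie the translator's lifetime to a std::unique_ptr instead of paired delete/nullptr bookkeeping. A condensed sketch of the resulting pattern (illustrative names, not the project's actual class):

    #include <QGuiApplication>
    #include <QTranslator>
    #include <memory>

    class TranslatorSwitcher {
        std::unique_ptr<QTranslator> m_translator;

    public:
        bool switchTo(const QString &filePath) {
            if (m_translator) {
                qGuiApp->removeTranslator(m_translator.get());
                m_translator.reset(); // destroys the old translator exactly once
            }
            auto translator = std::make_unique<QTranslator>();
            if (!translator->load(filePath) || !qGuiApp->installTranslator(translator.get()))
                return false; // unique_ptr cleans up automatically on failure
            m_translator = std::move(translator); // stays alive while installed
            return true;
        }
    };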
diff --git a/gpt4all-chat/mysettings.h b/gpt4all-chat/mysettings.h
index 64eb54c5..3db8b234 100644
--- a/gpt4all-chat/mysettings.h
+++ b/gpt4all-chat/mysettings.h
@@ -11,6 +11,7 @@
#include
#include
+#include <memory>
#include
namespace MySettingsEnums {
@@ -245,7 +246,7 @@ private:
const QStringList m_deviceList;
const QStringList m_embeddingsDeviceList;
const QStringList m_uiLanguages;
- QTranslator *m_translator = nullptr;
+ std::unique_ptr<QTranslator> m_translator;
private:
explicit MySettings();
diff --git a/gpt4all-chat/translations/gpt4all_en.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
similarity index 99%
rename from gpt4all-chat/translations/gpt4all_en.ts
rename to gpt4all-chat/translations/gpt4all_en_US.ts
index e74a45e9..83ed70fd 100644
--- a/gpt4all-chat/translations/gpt4all_en.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
@@ -1,6 +1,6 @@
[the TS language attribute is updated for the en → en_US rename; surrounding markup lost in extraction (first context: AddCollectionView)]
@@ -1267,18 +1267,18 @@ model to get started
[previously empty plural forms are filled in:]
+ %n file
+ %n files
+ %n word
+ %n words
@@ -1722,18 +1722,18 @@ model to get started
[the same plural forms are filled in for a second context:]
+ %n file
+ %n files
+ %n word
+ %n words
From bec5045a7eaee197ebef738db4fd972d117e7c54 Mon Sep 17 00:00:00 2001
From: Adam Treat
Date: Thu, 8 Aug 2024 17:03:07 -0400
Subject: [PATCH 08/66] Update translation files.
Signed-off-by: Adam Treat
---
gpt4all-chat/translations/gpt4all_en_US.ts | 48 ++++++++++----------
gpt4all-chat/translations/gpt4all_es_MX.ts | 46 ++++++++++---------
gpt4all-chat/translations/gpt4all_it_IT.ts | 46 ++++++++++---------
gpt4all-chat/translations/gpt4all_pt_BR.ts | 46 ++++++++++---------
gpt4all-chat/translations/gpt4all_ro_RO.ts | 46 ++++++++++---------
gpt4all-chat/translations/gpt4all_zh_CN.ts | 53 +++++++++++++---------
gpt4all-chat/translations/gpt4all_zh_TW.ts | 46 ++++++++++---------
7 files changed, 181 insertions(+), 150 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
index 83ed70fd..0769d674 100644
--- a/gpt4all-chat/translations/gpt4all_en_US.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
@@ -1040,12 +1040,6 @@ model to get started
[six entries removed; text lost in extraction]
@@ -1194,6 +1188,12 @@ model to get started
[six entries added; text lost in extraction]
@@ -1303,37 +1303,37 @@ model to get started
Download
[entries updated in the Download context; text lost in extraction]
@@ -1770,37 +1770,37 @@ model to get started
ModelList
[entries updated in the ModelList context; text lost in extraction]
@@ -1815,22 +1815,22 @@ model to get started
[entries updated; text lost in extraction]
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index e1cdb7c3..e19b8d4a 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -1113,10 +1113,8 @@ model to get started
Tú
-
-
- recalculando contexto ...
+ recalculando contexto ...
@@ -1291,6 +1289,12 @@ model to get started
Cargar · %1 (predeterminado) →
[six entries added; text lost in extraction]
@@ -1401,37 +1405,37 @@ model to get started
Download
[entries updated; text lost in extraction]
@@ -1961,57 +1965,57 @@ model to get started
[several entries updated; the additions that survive extraction:]
+ <strong>Modelo ChatGPT GPT-4 de OpenAI</strong><br> %1 %2
+ <strong>Modelo Mistral Tiny</strong><br> %1
+ <strong>Modelo Mistral Small</strong><br> %1
+ <strong>Modelo Mistral Medium</strong><br> %1
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index d9e46071..9a72ba13 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -1047,10 +1047,8 @@ modello per iniziare
Tu
-
-
- ricalcolo contesto ...
+ ricalcolo contesto ...
@@ -1200,6 +1198,12 @@ modello per iniziare
Carica · %1 (predefinito) →
[six entries added; text lost in extraction]
@@ -1309,37 +1313,37 @@ modello per iniziare
Download
-
+ Il modello "%1" è stato installato correttamente.
-
+ ERRORE: $MODEL_NAME è vuoto.
-
+ ERRORE: $API_KEY è vuoto.
-
+ ERRORE: $BASE_URL non è valido.
-
+ ERRORE: il modello "%1 (%2)" è in conflitto.
-
+ Il modello "%1 (%2)" è stato installato correttamente.
-
+ Il modello "%1" è stato rimosso.
@@ -1781,37 +1785,37 @@ modello per iniziare
ModelList
-
+ <ul><li>Richiede una chiave API OpenAI personale.</li><li>ATTENZIONE: invierà le tue chat a OpenAI!</li><li>La tua chiave API verrà archiviata su disco</li><li> Verrà utilizzato solo per comunicare con OpenAI</li><li>Puoi richiedere una chiave API <a href="https://platform.openai.com/account/api-keys">qui.</a> </li>
[five entries updated; text lost in extraction]
-
+ <br><br><i>* Anche se paghi OpenAI per ChatGPT-4 questo non garantisce l'accesso alla chiave API. Contatta OpenAI per maggiori informazioni.
@@ -1826,22 +1830,22 @@ modello per iniziare
<strong>Modello API compatibile con OpenAI</strong><br><ul><li>Chiave API: %1</li><li>URL di base: %2</li><li>Nome modello: %3</li></ul>
-
+ <ul><li>Richiede una chiave API Mistral personale.</li><li>ATTENZIONE: invierà le tue chat a Mistral!</li><li>La tua chiave API verrà archiviata su disco</li><li> Verrà utilizzato solo per comunicare con Mistral</li><li>Puoi richiedere una chiave API <a href="https://console.mistral.ai/user/api-keys">qui</a>. </li>
-
+ <ul><li>Richiede una chiave API personale e l'URL di base dell'API.</li><li>ATTENZIONE: invierà le tue chat al server API compatibile con OpenAI che hai specificato!</li><li>La tua chiave API verrà archiviata su disco</li><li>Verrà utilizzata solo per comunicare con il server API compatibile con OpenAI</li>
-
+ <strong>Connetti al server API compatibile con OpenAI</strong><br> %1
-
+ <strong>Creato da %1.</strong><br><ul><li>Pubblicato il %2.<li>Questo modello ha %3 Mi piace.<li>Questo modello ha %4 download.<li>Altro informazioni possono essere trovate <a href="https://huggingface.co/%5">qui.</a></ul>
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index ffd8c52c..97835cf3 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -1052,10 +1052,8 @@ modelo instalado para funcionar
Você
-
-
- recalculando contexto...
+ recalculando contexto...
@@ -1205,6 +1203,12 @@ modelo instalado para funcionar
Carregar · %1 (padrão) →
[six entries added; text lost in extraction]
@@ -1314,37 +1318,37 @@ modelo instalado para funcionar
Download
[entries updated; text lost in extraction]
@@ -1785,37 +1789,37 @@ modelo instalado para funcionar
ModelList
-
+ <ul><li>É necessária uma chave de API da OpenAI.</li><li>AVISO: Seus chats serão enviados para a OpenAI!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a OpenAI</li><li>Você pode solicitar uma chave de API <a href="https://platform.openai.com/account/api-keys">aqui.</a></li>
-
+ <strong>Modelo ChatGPT GPT-3.5 Turbo da OpenAI</strong><br> %1
-
+ <strong>Modelo ChatGPT GPT-4 da OpenAI</strong><br> %1 %2
-
+ <strong>Modelo Mistral Tiny</strong><br> %1
-
+ <strong>Modelo Mistral Small</strong><br> %1
-
+ <strong>Modelo Mistral Medium</strong><br> %1
-
+ <br><br><i>* Mesmo que você pague pelo ChatGPT-4 da OpenAI, isso não garante acesso à chave de API. Contate a OpenAI para mais informações.
@@ -1830,22 +1834,22 @@ modelo instalado para funcionar
-
+ <ul><li>É necessária uma chave de API da Mistral.</li><li>AVISO: Seus chats serão enviados para a Mistral!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a Mistral</li><li>Você pode solicitar uma chave de API <a href="https://console.mistral.ai/user/api-keys">aqui</a>.</li>
[two entries updated; text lost in extraction]
-
+ <strong>Criado por %1.</strong><br><ul><li>Publicado em %2.<li>Este modelo tem %3 curtidas.<li>Este modelo tem %4 downloads.<li>Mais informações podem ser encontradas <a href="https://huggingface.co/%5">aqui.</a></ul>
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index 0137cfc9..fdfc1470 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -1107,10 +1107,8 @@ model to get started
Tu
-
-
- se reface contextul...
+ se reface contextul...
@@ -1284,6 +1282,12 @@ model to get started
Incarca · %1 (implicit) →
[six entries added; text lost in extraction]
@@ -1395,37 +1399,37 @@ model to get started
Download
[entries updated; text lost in extraction]
@@ -1952,57 +1956,57 @@ model to get started
[several entries updated; the additions that survive extraction:]
+ <strong>Modelul ChatGPT GPT-4 al OpenAI</strong><br> %1 %2
+ <strong>Modelul Mistral Tiny</strong><br> %1
+ <strong>Modelul Mistral Small</strong><br> %1
+ <strong>Modelul Mistral Medium</strong><br> %1
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index a4a5eb04..a4c93ad7 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -1132,10 +1132,8 @@ model to get started
模型在思考
-
-
- 重新生成上下文...
+ 重新生成上下文...
@@ -1293,6 +1291,12 @@ model to get started
载入 · %1 (默认) →
[six entries added; text lost in extraction]
@@ -1400,37 +1404,37 @@ model to get started
Download
-
+ 模型 "%1" 安装成功
-
+ 错误:$MODEL_NAME 为空
-
+ 错误:$API_KEY为空
-
+ 错误:$BASE_URL 非法
-
+ 错误: 模型 "%1 (%2)" 有冲突.
-
+ 模型 "%1 (%2)" 安装成功.
-
+ 模型 "%1" 已删除.
@@ -1708,8 +1712,15 @@ model to get started
[one entry added; text lost in extraction]
<h3>错误:无法访问 LocalDocs 数据库或该数据库无效。</h3><br><i>注意:尝试以下任何建议的修复方法后,您将需要重新启动。</i><br><ul><li>确保设置为<b>下载路径</b>的文件夹存在于文件系统中。</li><li>检查<b>下载路径</b>的所有权以及读写权限。</li><li>如果有<b>localdocs_v2.db</b>文件,请检查其所有权和读/写权限。</li></ul><br>如果问题仍然存在,并且存在任何“localdocs_v*.db”文件,作为最后的手段,您可以<br>尝试备份并删除它们。但是,您必须重新创建您的收藏。
[six entries added; text lost in extraction]
@@ -1878,42 +1889,42 @@ model to get started
<strong>与 OpenAI 兼容的 API 模型</strong><br><ul><li>API 密钥:%1</li><li>基本 URL:%2</li><li>模型名称:%3</li></ul>
-
+ <ul><li>需要个人 OpenAI API 密钥。</li><li>警告:将把您的聊天内容发送给 OpenAI!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与 OpenAI 通信</li><li>您可以在此处<a href="https://platform.openai.com/account/api-keys">申请 API 密钥。</a></li>
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2
-
+ <strong>Mistral Tiny model</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1
-
+ <ul><li>需要个人 API 密钥和 API 基本 URL。</li><li>警告:将把您的聊天内容发送到您指定的与 OpenAI 兼容的 API 服务器!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与与 OpenAI 兼容的 API 服务器通信</li>
-
+ <strong>连接到与 OpenAI 兼容的 API 服务器</strong><br> %1
@@ -1922,7 +1933,7 @@ model to get started
<strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br>
-
+ <br><br><i>* 即使您为ChatGPT-4向OpenAI付款,这也不能保证API密钥访问。联系OpenAI获取更多信息。
@@ -1931,7 +1942,7 @@ model to get started
<strong>OpenAI's ChatGPT model GPT-4</strong><br>
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li>
@@ -1948,7 +1959,7 @@ model to get started
<strong>Mistral Medium model</strong><br>
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 808a1253..59212c65 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -1132,10 +1132,8 @@ model to get started
模型正在思考中
-
-
- 重新計算語境中......
+ 重新計算語境中......
@@ -1277,6 +1275,12 @@ model to get started
[six entries added; text lost in extraction]
@@ -1372,37 +1376,37 @@ model to get started
Download
[entries updated; text lost in extraction]
@@ -1851,57 +1855,57 @@ model to get started
-
+ <ul><li>需要個人的 OpenAI API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 OpenAI</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 OpenAI 進行通訊</li><li>您可以在<a href="https://platform.openai.com/account/api-keys">此處</a>申請一個 API 金鑰。</li>
-
+ <strong>OpenAI 的 ChatGPT 模型 GPT-3.5 Turbo</strong><br> %1
-
+ <br><br><i>* 即使您已向 OpenAI 付費購買了 ChatGPT 的 GPT-4 模型使用權,但這也不能保證您能擁有 API 金鑰的使用權限。請聯繫 OpenAI 以查閱更多資訊。
-
+ <strong>OpenAI 的 ChatGPT 模型 GPT-4</strong><br> %1 %2
-
+ <ul><li>需要個人的 Mistral API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 Mistral!</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 Mistral 進行通訊</li><li>您可以在<a href="https://console.mistral.ai/user/api-keys">此處</a>申請一個 API 金鑰。</li>
-
+ <strong>Mistral 迷你模型</strong><br> %1
-
+ <strong>Mistral 小型模型</strong><br> %1
-
+ <strong>Mistral 中型模型</strong><br> %1
[two entries updated; text lost in extraction]
-
+ <strong>建立者:%1。</strong><br><ul><li>發行於:%2。<li>這個模型有 %3 個讚。<li>這個模型有 %4 次下載次數。<li>更多資訊請查閱<a href="https://huggingface.co/%5">此處</a>。</ul>
From a910d657554d5e413ff48beeaa6f6d52f541dd1e Mon Sep 17 00:00:00 2001
From: AT
Date: Thu, 8 Aug 2024 18:42:11 -0400
Subject: [PATCH 09/66] Fix the translation change for the default model.
(#2815)
Signed-off-by: Adam Treat
---
gpt4all-chat/qml/AddModelView.qml | 22 +++++++++++++++++++---
gpt4all-chat/qml/ApplicationSettings.qml | 24 +++++++++++++++++++++---
2 files changed, 40 insertions(+), 6 deletions(-)
diff --git a/gpt4all-chat/qml/AddModelView.qml b/gpt4all-chat/qml/AddModelView.qml
index c43b7f56..d366b8f9 100644
--- a/gpt4all-chat/qml/AddModelView.qml
+++ b/gpt4all-chat/qml/AddModelView.qml
@@ -187,7 +187,12 @@ Rectangle {
visible: false
MyComboBox {
id: comboSort
- model: [qsTr("Default"), qsTr("Likes"), qsTr("Downloads"), qsTr("Recent")]
+ model: ListModel {
+ ListElement { name: qsTr("Default") }
+ ListElement { name: qsTr("Likes") }
+ ListElement { name: qsTr("Downloads") }
+ ListElement { name: qsTr("Recent") }
+ }
currentIndex: ModelList.discoverSort
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
@@ -207,7 +212,10 @@ Rectangle {
}
MyComboBox {
id: comboSortDirection
- model: [qsTr("Asc"), qsTr("Desc")]
+ model: ListModel {
+ ListElement { name: qsTr("Asc") }
+ ListElement { name: qsTr("Desc") }
+ }
currentIndex: {
if (ModelList.discoverSortDirection === 1)
return 0
@@ -235,7 +243,15 @@ Rectangle {
}
MyComboBox {
id: comboLimit
- model: ["5", "10", "20", "50", "100", qsTr("None")]
+ model: ListModel {
+ ListElement { name: "5" }
+ ListElement { name: "10" }
+ ListElement { name: "20" }
+ ListElement { name: "50" }
+ ListElement { name: "100" }
+ ListElement { name: qsTr("None") }
+ }
+
currentIndex: {
if (ModelList.discoverLimit === 5)
return 0;
diff --git a/gpt4all-chat/qml/ApplicationSettings.qml b/gpt4all-chat/qml/ApplicationSettings.qml
index 118d9f49..1e459e99 100644
--- a/gpt4all-chat/qml/ApplicationSettings.qml
+++ b/gpt4all-chat/qml/ApplicationSettings.qml
@@ -108,7 +108,11 @@ MySettingsTab {
Layout.fillWidth: false
Layout.alignment: Qt.AlignRight
// NOTE: indices match values of ChatTheme enum, keep them in sync
- model: [qsTr("Light"), qsTr("Dark"), qsTr("LegacyDark")]
+ model: ListModel {
+ ListElement { name: qsTr("Light") }
+ ListElement { name: qsTr("Dark") }
+ ListElement { name: qsTr("LegacyDark") }
+ }
Accessible.name: themeLabel.text
Accessible.description: themeLabel.helpText
function updateModel() {
@@ -143,7 +147,11 @@ MySettingsTab {
Layout.fillWidth: false
Layout.alignment: Qt.AlignRight
// NOTE: indices match values of FontSize enum, keep them in sync
- model: [qsTr("Small"), qsTr("Medium"), qsTr("Large")]
+ model: ListModel {
+ ListElement { name: qsTr("Small") }
+ ListElement { name: qsTr("Medium") }
+ ListElement { name: qsTr("Large") }
+ }
Accessible.name: fontLabel.text
Accessible.description: fontLabel.helpText
function updateModel() {
@@ -313,6 +321,12 @@ MySettingsTab {
defaultModelBox.updateModel()
}
}
+ Connections {
+ target: MySettings
+ function onLanguageAndLocaleChanged() {
+ defaultModelBox.rebuildModel()
+ }
+ }
Connections {
target: ModelList
function onSelectableModelListChanged() {
@@ -335,7 +349,11 @@ MySettingsTab {
Layout.maximumWidth: 400
Layout.alignment: Qt.AlignRight
// NOTE: indices match values of SuggestionMode enum, keep them in sync
- model: [ qsTr("When chatting with LocalDocs"), qsTr("Whenever possible"), qsTr("Never") ]
+ model: ListModel {
+ ListElement { name: qsTr("When chatting with LocalDocs") }
+ ListElement { name: qsTr("Whenever possible") }
+ ListElement { name: qsTr("Never") }
+ }
Accessible.name: suggestionModeLabel.text
Accessible.description: suggestionModeLabel.helpText
onActivated: {
From 6957706af7daf0b7953b90737a664883a7afcbad Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Thu, 8 Aug 2024 18:44:15 -0400
Subject: [PATCH 10/66] chat: fix crash at startup due to missing en_US
translation (#2816)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CMakeLists.txt | 1 +
gpt4all-chat/mysettings.cpp | 2 ++
2 files changed, 3 insertions(+)
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 1e7066a4..1e59a113 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -229,6 +229,7 @@ qt_add_qml_module(chat
)
if (GPT4ALL_TRANSLATIONS)
+ target_compile_definitions(chat PRIVATE GPT4ALL_USE_TRANSLATIONS)
qt_add_translations(chat
TS_FILES
${CMAKE_SOURCE_DIR}/translations/gpt4all_en_US.ts
diff --git a/gpt4all-chat/mysettings.cpp b/gpt4all-chat/mysettings.cpp
index a1fc4fa4..5d71a661 100644
--- a/gpt4all-chat/mysettings.cpp
+++ b/gpt4all-chat/mysettings.cpp
@@ -632,6 +632,7 @@ void MySettings::setLanguageAndLocale(const QString &bcp47Name)
else
locale = QLocale(l);
+#ifdef GPT4ALL_USE_TRANSLATIONS
// If we previously installed a translator, then remove it
if (m_translator) {
if (!qGuiApp->removeTranslator(m_translator.get())) {
@@ -661,6 +662,7 @@ void MySettings::setLanguageAndLocale(const QString &bcp47Name)
m_translator.reset();
}
}
+#endif
// Finally, set the locale whether we have a translation or not
QLocale::setDefault(locale);
From 3f640c7fe20b55018fdf1c847754af833a1bb3a6 Mon Sep 17 00:00:00 2001
From: Riccardo Giovanetti <29801031+Harvester62@users.noreply.github.com>
Date: Fri, 9 Aug 2024 00:45:41 +0200
Subject: [PATCH 11/66] Italian localization update (#2814)
Signed-off-by: Riccardo Giovanetti
---
gpt4all-chat/translations/gpt4all_it_IT.ts | 32 +++++++++++-----------
1 file changed, 16 insertions(+), 16 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index 9a72ba13..e956c412 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -19,7 +19,7 @@
- Aggiungi una cartella contenente file di testo semplice, PDF o Markdown. Configura estensioni aggiuntive in Impostazioni.
+ Aggiungi una cartella contenente file di testo semplice, PDF o Markdown. Configura estensioni aggiuntive in Settaggi.
@@ -469,7 +469,7 @@
- Impostazioni applicazione
+ Settaggi applicazione
@@ -541,19 +541,19 @@
- Lingua e impostazioni locali
+ Lingua e settaggi locali
- La lingua e le impostazioni locali che desideri utilizzare.
+ La lingua e i settaggi locali che vuoi utilizzare.
- Impostazioni locali del sistema
+ Settaggi locali del sistema
@@ -994,7 +994,7 @@
- Carica il modello predefinito che può essere modificato nelle impostazioni
+ Carica il modello predefinito che può essere modificato nei settaggi
@@ -1176,7 +1176,7 @@ modello per iniziare
- <h3>Si è verificato un errore durante il caricamento del modello:</h3><br><i>"%1"</i><br><br>Gli errori di caricamento del modello possono verificarsi per diversi motivi, ma le cause più comuni includono un formato di file non valido, un download incompleto o danneggiato, il tipo di file sbagliato, RAM di sistema insufficiente o un tipo di modello incompatibile. Ecco alcuni suggerimenti per risolvere il problema:<br><ul><li>Assicurati che il file del modello abbia un formato e un tipo compatibili<li>Verifica che il file del modello sia completo nella cartella di download<li>Puoi trovare la cartella di download nella finestra di dialogo delle impostazioni<li>Se hai scaricato manualmente il modello, assicurati che il file non sia danneggiato controllando md5sum<li>Leggi ulteriori informazioni su quali modelli sono supportati nella nostra <a href="https://docs.gpt4all.io/ ">documentazione</a> per la GUI<li>Consulta il nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per assistenza
+ <h3>Si è verificato un errore durante il caricamento del modello:</h3><br><i>"%1"</i><br><br>Gli errori di caricamento del modello possono verificarsi per diversi motivi, ma le cause più comuni includono un formato di file non valido, un download incompleto o danneggiato, il tipo di file sbagliato, RAM di sistema insufficiente o un tipo di modello incompatibile. Ecco alcuni suggerimenti per risolvere il problema:<br><ul><li>Assicurati che il file del modello abbia un formato e un tipo compatibili<li>Verifica che il file del modello sia completo nella cartella di download<li>Puoi trovare la cartella di download nella finestra di dialogo dei settaggi<li>Se hai scaricato manualmente il modello, assicurati che il file non sia danneggiato controllando md5sum<li>Leggi ulteriori informazioni su quali modelli sono supportati nella nostra <a href="https://docs.gpt4all.io/ ">documentazione</a> per la GUI<li>Consulta il nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per assistenza
@@ -1202,7 +1202,7 @@ modello per iniziare
-
+ ripristino dal testo ...
@@ -1471,7 +1471,7 @@ modello per iniziare
- Impostazioni LocalDocs
+ Settaggi LocalDocs
@@ -1862,7 +1862,7 @@ modello per iniziare
- Impostazioni modello
+ Settaggi modello
@@ -2420,7 +2420,7 @@ NOTA: non ha effetto finché non si ricarica il modello.
- Ripristina la finestra di dialogo delle impostazioni a uno stato predefinito
+ Ripristina la finestra di dialogo dei settaggi a uno stato predefinito
@@ -2550,13 +2550,13 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
- Impostazioni
+ Settaggi
- Contiene varie impostazioni dell'applicazione
+ Contiene vari settaggi dell'applicazione
@@ -2804,7 +2804,7 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
- <h3>Si è verificato un errore all'avvio:</h3><br><i>"Impossibile accedere al file delle impostazioni."</i><br><br>Sfortunatamente, qualcosa impedisce al programma di accedere al file delle impostazioni. Ciò potrebbe essere causato da autorizzazioni errate nella cartella di configurazione locale dell'app in cui si trova il file delle impostazioni. Dai un'occhiata al nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per ricevere assistenza.
+ <h3>Si è verificato un errore all'avvio:</h3><br><i>"Impossibile accedere al file dei settaggi."</i><br><br>Sfortunatamente, qualcosa impedisce al programma di accedere al file dei settaggi. Ciò potrebbe essere causato da autorizzazioni errate nella cartella di configurazione locale dell'app in cui si trova il file dei settaggi. Dai un'occhiata al nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per ricevere assistenza.
@@ -2900,13 +2900,13 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
- Impostazioni
+ Settaggi
- Vista delle impostazioni per la configurazione dell'applicazione
+ Vista dei settaggi per la configurazione dell'applicazione
From da0dddc3d4016baf19491deababa778f8d117dfe Mon Sep 17 00:00:00 2001
From: wuhanodoo <99947164+wuodoo@users.noreply.github.com>
Date: Fri, 9 Aug 2024 23:00:06 +0800
Subject: [PATCH 12/66] Update gpt4all_zh_CN.ts (#2819)
Signed-off-by: wuhanodoo <99947164+wuodoo@users.noreply.github.com>
---
gpt4all-chat/translations/gpt4all_zh_CN.ts | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index a4c93ad7..c2f0cf88 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -1295,7 +1295,7 @@ model to get started
-
+ 从文本恢复中
@@ -1719,7 +1719,7 @@ model to get started
-
+ <h3>错误:无法访问 LocalDocs 数据库或该数据库无效。</h3><br><i>注意:尝试以下任何建议的修复方法后,您将需要重新启动。</i><br><ul><li>确保设置为<b>下载路径</b>的文件夹存在于文件系统中。</li><li>检查<b>下载路径</b>的所有权以及读写权限。</li><li>如果有<b>localdocs_v2.db</b>文件,请检查其所有权和读/写权限。</li></ul><br>如果问题仍然存在,并且存在任何“localdocs_v*.db”文件,作为最后的手段,您可以<br>尝试备份并删除它们。但是,您必须重新创建您的收藏。
From e35bc60876c3c9b7b3e968ad332cee7f312dbb54 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=B8=8D=E7=9F=A5=E7=81=AB=20Shiranui?=
Date: Fri, 9 Aug 2024 23:01:07 +0800
Subject: [PATCH 13/66] Update zh_TW translation (#2820)
Signed-off-by: SuperSonic
---
gpt4all-chat/translations/gpt4all_zh_TW.ts | 261 ++++-----------------
1 file changed, 47 insertions(+), 214 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 59212c65..07e8896e 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -184,25 +184,25 @@
-
+ 網路錯誤:無法取得 %1
-
+ <strong><font size="1"><a href="#error">錯誤</a></strong></font>
-
+ <strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
-
+ %1 GB
@@ -210,11 +210,7 @@
-
-
-
-
- 網路錯誤:無法取得 http://gpt4all.io/models/models3.json
+ ?
@@ -305,28 +301,12 @@
安裝線上模型
-
-
- <a href="#error">錯誤</a>
- 解釋下載時發生的錯誤
-
-
- <strong><font size="2">警告:不推薦在您的硬體上運作。
-
-
-
- 模型需要比較多的記憶體(
-
-
-
- GB),但您的系統記憶體空間不足(
-
@@ -385,37 +365,37 @@
-
+ 錯誤:$API_KEY 未填寫。
- 輸入 $API_KEY
+ 請輸入 $API_KEY
-
+ 錯誤:$BASE_URL 未填寫。
-
+ 請輸入 $BASE_URL
-
+ 錯誤:$MODEL_NAME 未填寫。
-
+ 請輸入 $MODEL_NAME
@@ -429,10 +409,6 @@
所需的記憶體
-
-
- GB
-
@@ -582,7 +558,7 @@
-
+ 系統語系
@@ -590,10 +566,6 @@
裝置
-
-
- 用於生成文字的計算設備。 「Auto」將自動使用 Vulkan 或 Metal。
-
@@ -640,7 +612,7 @@
-
+ 用於生成文字的計算設備。
@@ -648,7 +620,7 @@
-
+ 應用程式預設值
@@ -772,26 +744,18 @@
伺服器交談
-
-
- 提示詞:
-
-
-
- 回覆:
- ChatAPIWorker
-
+ 錯誤:網路錯誤,無法連線到目標 API 伺服器
-
+ ChatAPIWorker::handleFinished 遇到一個 HTTP 錯誤 %1 %2
@@ -904,14 +868,6 @@
ChatView
-
-
- <h3>載入模型時發生錯誤:</h3><br>
-
-
-
- <br><br>模型載入失敗的原因有很多,但最常見的原因包括檔案格式錯誤、下載不完整或損壞、檔案類型錯誤、系統主記憶體不足或模型類型不相容。以下是解決問題的一些建議:<br><ul><li>確保模型檔案具有相容的格式與類型<li>檢查下載資料夾中的模型檔案是否完整<li>您可以找到下載資料夾在設置對話方塊中<li>如果您已旁載入模型,請透過檢查md5sum 確保檔案未損壞<li>在我們的<a href="https://docs.gpt4all.io/ 中了解有關支援哪些模型的更多資訊">GUI 檔案</a><li>查看我們的<a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a>尋求協助
-
@@ -942,10 +898,6 @@
程式碼已複製到剪貼簿。
-
-
- 回覆:
-
@@ -1052,14 +1004,6 @@
將文件集合新增至交談中
-
-
- 載入 ·
-
-
-
- (預設) →
-
@@ -1122,19 +1066,6 @@ model to get started
您
-
-
- 參考自 https://terms.naer.edu.tw
- 忙線指示器
-
-
-
- 模型正在思考中
-
-
-
- 重新計算語境中......
-
@@ -1169,7 +1100,7 @@ model to get started
-
+ 生成問題......
@@ -1255,7 +1186,7 @@ model to get started
-
+ 停止生成
@@ -1273,7 +1204,7 @@ model to get started
-
+ <h3>載入模型時發生錯誤:</h3><br><i>"%1"</i><br><br>導致模型載入失敗的原因可能有很多種,但絕大多數的原因是檔案格式損毀、下載的檔案不完整、檔案類型錯誤、系統RAM空間不足或不相容的模型類型。這裡有些建議可供疑難排解:<br><ul><li>確保使用的模型是相容的格式與類型<li>檢查位於下載資料夾的檔案是否完整<li>您可以從設定中找到您所設定的「下載資料夾路徑」<li>如果您有側載模型,請利用 md5sum 等工具確保您的檔案是完整的<li>想了解更多關於我們所支援的模型資訊,煩請詳閱<a href="https://docs.gpt4all.io/">本文件</a>。<li>歡迎洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求幫助
@@ -1378,37 +1309,37 @@ model to get started
-
+ 模型「%1」已安裝成功。
-
+ 錯誤:$MODEL_NAME 未填寫。
-
+ 錯誤:$API_KEY 未填寫。
-
+ 錯誤:$BASE_URL 無效。
-
+ 錯誤:模型「%1 (%2)」發生衝突。
-
+ 模型「%1(%2)」已安裝成功。
-
+ 模型「%1」已移除。
@@ -1677,15 +1608,11 @@ model to get started
+ 新增收藏
-
-
- 錯誤:「我的文件」資料庫已損壞。
-
-
+ <h3>錯誤:「我的文件」資料庫已無法存取或已損壞。</h3><br><i>提醒:執行完以下任何疑難排解的動作後,請務必重新啟動應用程式。</i><br><ul><li>請確保<b>「下載路徑」</b>所指向的資料夾確實存在於檔案系統當中。</li><li>檢查 <b>「下載路徑」</b>所指向的資料夾,確保其「擁有者」為您本身,以及確保您對該資料夾擁有讀寫權限。</li><li>如果該資料夾內存在一份名為 <b>localdocs_v2.db</b> 的檔案,請同時確保您對其擁有讀寫權限。</li></ul><br>如果問題依舊存在,且該資料夾內存在與「localdocs_v*.db」名稱相關的檔案,請嘗試備份並移除它們。<br>雖然這樣一來,您恐怕得著手重建您的收藏,但這將或許能夠解決這份錯誤。
@@ -1847,12 +1774,12 @@ model to get started
-
+ %1(%2)
-
+ <strong>OpenAI 相容 API 模型</strong><br><ul><li>API 金鑰:%1</li><li>Base URL: %2</li><li>模型名稱: %3</li></ul>
@@ -1897,12 +1824,12 @@ model to get started
-
+ <ul><li>需要個人的 API 金鑰和 API 的 base URL。</li><li>警告:這將會傳送您的交談紀錄到您所指定的 OpenAI 相容性 API 伺服器</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與其 OpenAI 相容性 API 伺服器進行通訊</li>
-
+ <strong>連線到 OpenAI API 相容性伺服器</strong><br> %1
@@ -1978,12 +1905,6 @@ model to get started
必須包含要替換為使用者輸入的字串「%1」。
-
-
- 新增
-可選圖片
-
@@ -2297,29 +2218,25 @@ NOTE: Does not take effect until you reload the model.
-
+ <strong><font size="1"><a href="#error">錯誤</a></strong></font>
-
+ <strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
-
+ %1 GB
-
-
-
-
- <a href="#error">錯誤</a>
+ ?
@@ -2327,18 +2244,6 @@ NOTE: Does not take effect until you reload the model.
解釋下載時發生的錯誤
-
-
- <strong><font size="2">警告:不推薦在您的硬體上運作。
-
-
-
- 模型需要比較多的記憶體(
-
-
-
- GB),但您的系統記憶體空間不足(
-
@@ -2404,37 +2309,37 @@ NOTE: Does not take effect until you reload the model.
-
+ 錯誤:$API_KEY 未填寫。
- 輸入 $API_KEY
+ 請輸入 $API_KEY
-
+ 錯誤:$BASE_URL 未填寫。
-
+ 請輸入 $BASE_URL
-
+ 錯誤:$MODEL_NAME 未填寫。
-
+ 請輸入 $MODEL_NAME
@@ -2448,10 +2353,6 @@ NOTE: Does not take effect until you reload the model.
所需的記憶體
-
-
- GB
-
@@ -2678,18 +2579,6 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
歡迎使用!
-
-
- ### 版本資訊
-
-
-
-
- ### 貢獻者
-
-
@@ -2773,7 +2662,9 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
-
+ ### 版本資訊
+%1### 貢獻者
+%2
@@ -2896,81 +2787,23 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
main
-
-
- GPT4All v
-
-
-
- <h3>啟動時發生錯誤:</h3><br>
-
-
-
- <i>「偵測到不相容的硬體」</i>
-
-
-
- <br><br>糟糕,您的中央處理器不符合運行的最低要求
-
-
-
- 這個程式。特別是,它不支援 AVX 內在函數,這
-
-
-
- 程式需要成功運行現代大型語言模型。
-
-
-
- 此時唯一的解決方案是將硬體升級到更現代的中央處理器。
-
-
-
- 中文網址造成 Linguist 會發出警告,請無視。
- <br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX%E6%8C%87%E4%BB%A4%E9%9B%86">
-
-
-
- 中文網址造成 Linguist 會發出警告,請無視。
- https://zh.wikipedia.org/wiki/AVX%E6%8C%87%E4%BB%A4%E9%9B%86</a>
-
-
-
- <i>「無法存取設定檔。」</i>
-
-
-
- <br><br>糟糕,有些東西正在阻止程式存取
-
-
-
- 設定檔。這可能是本機權限設定不正確導致的
-
-
-
- 設定檔案所在的app config 目錄。
-
-
-
- 請查看我們的<a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a>尋求協助。
-
-
+ GPT4All v%1
-
+ <h3>啟動時發生錯誤:</h3><br><i>「偵測到不相容的硬體。」</i><br><br>糟糕!您的中央處理器不符合運行所需的最低需求。尤其,它不支援本程式運行現代大型語言模型所需的 AVX 指令集。目前唯一的解決方案,只有更新您的中央處理器及其相關硬體設備。<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX指令集">AVX 指令集 - 維基百科</a>
-
+ <h3>啟動時發生錯誤:</h3><br><i>「無法存取設定檔。」</i><br><br>糟糕!有些東西正在阻止程式存取設定檔。這極為可能是由於設定檔所在的本機應用程式設定資料夾中的權限設定不正確所造成的。煩請洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求協助。
From c6f111b1d55a2ec7810df11ea00e80b4c902046f Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=B8=8D=E7=9F=A5=E7=81=AB=20Shiranui?=
Date: Fri, 9 Aug 2024 23:46:45 +0800
Subject: [PATCH 14/66] Update zh_TW translation (#2821)
Signed-off-by: SuperSonic
---
gpt4all-chat/translations/gpt4all_zh_TW.ts | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 07e8896e..19fbc6f5 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -1210,7 +1210,7 @@ model to get started
-
+ 從文字中恢復......
From c54ff89c3f319a4f14b312d77c5feace3c289452 Mon Sep 17 00:00:00 2001
From: Thiago Ramos <45890502+thiagojramos@users.noreply.github.com>
Date: Fri, 9 Aug 2024 12:50:45 -0300
Subject: [PATCH 15/66] Update gpt4all_pt_BR.ts (#2822)
Signed-off-by: Thiago Ramos <45890502+thiagojramos@users.noreply.github.com>
---
gpt4all-chat/translations/gpt4all_pt_BR.ts | 60 ++++++++++------------
1 file changed, 28 insertions(+), 32 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index 97835cf3..7f01713a 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -284,31 +284,31 @@
-
+ ERRO: A $API_KEY está vazia.
-
+ ERRO: A $BASE_URL está vazia.
-
+ inserir a $BASE_URL
-
+ ERRO: O $MODEL_NAME está vazio.
-
+ inserir o $MODEL_NAME
@@ -559,7 +559,7 @@
-
+ Local do Sistema
@@ -567,15 +567,11 @@
Processador
-
-
- Processador usado para gerar texto. (Automático: Vulkan ou Metal).
-
-
+ Processador usado para gerar texto.
@@ -583,7 +579,7 @@
-
+ Aplicativo padrão
@@ -755,12 +751,12 @@
-
+ ERRO: Ocorreu um erro de rede ao conectar-se ao servidor da API
-
+ ChatAPIWorker::handleFinished recebeu erro HTTP %1 %2
@@ -1207,7 +1203,7 @@ modelo instalado para funcionar
-
+ Recuperando do texto...
@@ -1320,37 +1316,37 @@ modelo instalado para funcionar
-
+ Modelo "%1" instalado com sucesso.
-
+ ERRO: O nome do modelo ($MODEL_NAME) está vazio.
-
+ ERRO: A chave da API ($API_KEY) está vazia.
-
+ ERRO: A URL base ($BASE_URL) é inválida.
-
+ ERRO: Conflito com o modelo "%1 (%2)".
-
+ Modelo "%1 (%2)" instalado com sucesso.
-
+ Modelo "%1" removido.
@@ -1627,7 +1623,7 @@ modelo instalado para funcionar
-
+ <h3>ERRO: Não foi possível acessar o banco de dados do LocalDocs ou ele não é válido.</h3><br><i>Observação: Será necessário reiniciar o aplicativo após tentar qualquer uma das seguintes correções sugeridas.</i><br><ul><li>Certifique-se de que a pasta definida como <b>Caminho de Download</b> existe no sistema de arquivos.</li><li>Verifique a propriedade, bem como as permissões de leitura e gravação do <b>Caminho de Download</b>.</li><li>Se houver um arquivo <b>localdocs_v2.db</b>, verifique também sua propriedade e permissões de leitura/gravação.</li></ul><br>Se o problema persistir e houver algum arquivo 'localdocs_v*.db' presente, como último recurso, você pode<br>tentar fazer backup deles e removê-los. No entanto, você terá que recriar suas coleções.
@@ -1826,12 +1822,12 @@ modelo instalado para funcionar
-
+ %1 (%2)
-
+ <strong>Modelo de API Compatível com OpenAI</strong><br><ul><li>Chave da API: %1</li><li>URL Base: %2</li><li>Nome do Modelo: %3</li></ul>
@@ -1841,12 +1837,12 @@ modelo instalado para funcionar
-
+ <ul><li>É necessária uma chave de API e a URL da API.</li><li>AVISO: Seus chats serão enviados para o servidor de API compatível com OpenAI que você especificou!</li><li>Sua chave de API será armazenada no disco</li><li>Será usada apenas para comunicação com o servidor de API compatível com OpenAI</li>
-
+ <strong>Conectar a um servidor de API compatível com OpenAI</strong><br> %1
@@ -2247,31 +2243,31 @@ Obs.: Só entrará em vigor após recarregar o modelo.
-
+ ERRO: A $API_KEY está vazia.
-
+ ERRO: A $BASE_URL está vazia.
-
+ inserir a $BASE_URL
-
+ ERRO: O $MODEL_NAME está vazio.
-
+ inserir o $MODEL_NAME
From 1eb63dac40190656d53d49a03f722dbb34057fad Mon Sep 17 00:00:00 2001
From: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Date: Fri, 9 Aug 2024 20:38:33 +0300
Subject: [PATCH 16/66] Update: TRANSLATION: gpt4all_ro_RO.ts (#2828)
The translated text for the interface of v3.1.1+
has been updated as to be shown correctly in the language:
Romanian - ro_RO
2024.08.09
Signed-off-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
---
gpt4all-chat/translations/gpt4all_ro_RO.ts | 1016 +++++++++++---------
1 file changed, 564 insertions(+), 452 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index fdfc1470..fa64057d 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -7,31 +7,32 @@
- ← Colecţiile curente
+ ← Colecţiile curente
- Adauga o Colecţie de documente
+ Adaugă o Colecţie de documente
- Adaugă un folder care conţine fişiere in format text-simplu, PDF sau Markdown.
- Extensii suplimentare pot fi specificate în Configurare.
+ Adaugă un folder care conţine fişiere în cu text-simplu, PDF sau Markdown.
+ Extensii suplimentare pot fi specificate în Configurare.
-
+ Adaugă un folder cu fişiere în format text, PDF sau Markdown.
+ Alte extensii pot fi adăugate în Configurare.
- Selectează un folder/director
+ Selectează un folder/director
@@ -43,13 +44,13 @@
- Denumirea Colecţiei...
+ Denumirea Colecţiei...
- Denumirea Colecţiei de adăugat (necesar)
+ Denumirea Colecţiei de adăugat (necesar)
@@ -73,13 +74,13 @@
- Căutare
+ Căutare
- Creează o Colecţie
+ Creează o Colecţie
@@ -88,37 +89,37 @@
- ← Modele existente/accesibile
+ ← Modelele curente/instalate
- Caută modele
+ Caută modele
- Caută şi descarcă modele după un cuvant-cheie...
+ Caută şi descarcă modele după un cuvânt-cheie...
- Câmp-text pentru căutarea şi filtrarea modelelor ce pot fi descărcate
+ Câmp pentru căutarea şi filtrarea modelelor ce pot fi descărcate
- Initiază căutarea şi filtrarea modelelor
+ Iniţiază căutarea şi filtrarea modelelor
- Activează căutarea şi filtrarea modelelor
+ Activează căutarea şi filtrarea modelelor
@@ -130,7 +131,7 @@
- Likes (Îmi Place)
+ Likes
@@ -148,13 +149,13 @@
- Asc (A->Z)
+ Asc. (A->Z)
- Desc (Z->A)
+ Desc. (Z->A)
@@ -166,31 +167,31 @@
- Căutare · %1
+ Căutare · %1
- Ordonare după: %1
+ Ordonare după: %1
- Sensul ordonării: %1
+ Sensul ordonării: %1
- Límită: %1
+ Límită: %1
- Eroare de reţea: nu se poate prelua %1
+ Eroare de reţea: nu se poate prelua %1
@@ -204,19 +205,19 @@
- Se afişează în timpul solicitării modelului
+ Afişat în timpul solicitării modelului
- Fişierul modelului
+ Fişierul modelului
- Fişierul modelului de descărcat
+ Fişierul modelului de descărcat
@@ -228,7 +229,7 @@
- Descrierea fişierului
+ Descrierea fişierului
@@ -252,19 +253,19 @@
- Opreşte/Reporneşte/Începe descărcarea
+ Opreşte/Reporneşte/Începe descărcarea
- Elimină
+ şterge
- Şterge modelul din sistemul de fişiere
+ şterge modelul din sistemul de fişiere
@@ -278,26 +279,28 @@
- Instalează un model prin reţea
+ Instalez un model din online
-
+ <strong><font size="1"><a href="#error">Eroare</a></strong></font>
-
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru
+ acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem
+ (%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat
- pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem
+ <strong><font size="2">ATENţIE: Nerecomandat
+ pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem
(%2).</strong></font>
@@ -318,7 +321,7 @@
- Descrie eroarea apăruta in timpul descărcarii
+ Descrie eroarea apărută în timpul descărcării
Shows the progress made in the download
- Afişează progresia descărcarii
+ Afişează progresia descărcării
Download speed
- Viteza de download/descărcare
+ Viteza de download
Download speed in bytes/kilobytes/megabytes per second
- Viteza de download/descărcare în bytes/kilobytes/megabytes pe secundă
+ Viteza de download în bytes/kilobytes/megabytes pe secundă
@@ -372,19 +375,19 @@
Whether the file hash is being calculated
- Dacă se calculează hash-ul fişierului
+ Dacă se calculează hash-ul fişierului
Displayed when the file hash is being calculated
- Se afişează când se calculează hash-ul fişierului
+ Se afişează când se calculează hash-ul fişierului
ERROR: $API_KEY is empty.
-
+ EROARE: $API_KEY absentă
@@ -396,37 +399,37 @@
ERROR: $BASE_URL is empty.
-
+ EROARE: $BASE_URL absentă
enter $BASE_URL
-
+ introdu $BASE_URL
ERROR: $MODEL_NAME is empty.
-
+ EROARE: $MODEL_NAME absent
enter $MODEL_NAME
-
+ introdu $MODEL_NAME
File size
- Dimensiunea fişierului
+ Dimensiunea fişierului
RAM required
- RAM necesară
+ RAM necesară
@@ -453,13 +456,13 @@
Application
- Aplicaţie/Program
+ Aplicaţie/Program
Network dialog
- Reţea
+ Reţea
@@ -476,12 +479,12 @@
If you can't start it manually, then I'm afraid you'll have
to<br>
reinstall.
- EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br>
- necesară căutării de versiuni noi!<br><br>
- Ai instalat acest program folosind kitul online? Dacă da,<br>
- atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br>
+ EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br>
+ necesară căutării de versiuni noi!<br><br>
+ Ai instalat acest program folosind kitul online? Dacă da,<br>
+ atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br>
unde ai instalat programul.<br><br>
- Dacă nu poate fi lansata manual, atunci programul trebuie reinstalat.
+ Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
@@ -505,7 +508,7 @@
Theme
- Tema pentru interfaţă
+ Tema pentru interfaţă
@@ -517,7 +520,7 @@
Dark
- Întunecat
+ Întunecat
@@ -529,7 +532,7 @@
LegacyDark
- Întunecat-vechi
+ Întunecat-vechi
@@ -541,7 +544,7 @@
The size of text in the application.
- Dimensiunea textului în program.
+ Dimensiunea textului în program.
@@ -553,7 +556,7 @@
The compute device used for text generation. "Auto" uses Vulkan or
Metal.
Dispozitivul de calcul utilizat pentru generarea de text.
- "Auto" apelează la Vulkan sau la Metal.
+ "Auto" apelează la Vulkan sau la Metal.
@@ -565,49 +568,54 @@
above where this application resides on your filesystem.<br><br>
If you can't start it manually, then I'm afraid you'll have to<br>
reinstall.
-
+ EROARE: Sistemul de Update nu poate găsi componenta MaintenanceTool<br>
+ necesară căutării de versiuni noi!<br><br>
+ Ai instalat acest program folosind kitul online? Dacă da,<br>
+ atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br>
+ unde ai instalat programul.<br><br>
+ Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
Small
-
+ Mic
Medium
-
+ Mediu
Large
-
+ Mare
Language and Locale
-
+ Limbă şi Localizare
The language and locale you wish to use.
-
+ Limba şi Localizarea de utilizat
System Locale
-
+ Localizare
The compute device used for text generation.
-
+ Dispozitivul de calcul utilizat pentru generarea de text.
@@ -615,7 +623,7 @@
Application default
-
+ Implicit
@@ -627,7 +635,7 @@
The preferred model for new chats. Also used as the local server fallback.
- Modelul preferat pentru noile conversaţii. De asemenea, folosit ca rezervă pentru serverul local.
+ Modelul preferat pentru noile conversaţii. Va fi folosit drept rezervă pentru serverul local.
@@ -639,61 +647,61 @@
Generate suggested follow-up questions at the end of responses.
- Generarea de întrebări pentru continuare, la finalul replicilor.
+ Generarea de Întrebări pentru continuare, la finalul replicilor.
When chatting with LocalDocs
- Când se discută cu LocalDocs
+ Când se discută cu LocalDocs
Whenever possible
- Oricând este posibil
+ Oricând e posibil
Never
- Niciodată
+ Niciodată
Download Path
- Calea pentru download/descărcare
+ Calea pentru download
Where to store local models and the LocalDocs database.
- Unde să fie plasate modelele şi baza de date LocalDocs.
+ Unde să fie plasate modelele şi baza de date LocalDocs.
Browse
- Căutare
+ Căutare
Choose where to save model files
- Selectează locul unde vor fi plasate fişierele modelelor
+ Selectează locul unde vor fi plasate fişierele modelelor
Enable Datalake
- Activează DataLake
+ Activează DataLake
Send chats and feedback to the GPT4All Open-Source Datalake.
- Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All.
+ Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All.
@@ -711,42 +719,44 @@
The number of CPU threads used for inference and embedding.
- Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding.
+ Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding.Save Chat Context
- Salvarea contextului conversaţiei
+ Salvarea contextului conversaţieiSave the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
-
+ Salvează pe disc starea modelului pentru încărcare mai rapidă.
+ ATENŢIE: Consumă ~2GB/conversaţie.Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
-
+ Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte
+ consumul de resurse.Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB
per chat.
- Salvează pe disc starea modelului pentru încărcare mai rapidă.
- ATENŢIE: Consumă ~2GB/conversaţie.
+ Salvează pe disc starea modelului pentru Încărcare mai rapidă.
+ ATENţIE: Consumă ~2GB/conversaţie.Enable Local Server
- Activează Serverul local
+ Activează Serverul localExpose an OpenAI-Compatible server to localhost. WARNING: Results in increased
resource usage.
- Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte
+ Activează pe localhost un Server compatibil cu OpenAI. ATENŢIE: Creşte
consumul de resurse.
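The strings above document the "Enable Local Server" option: when switched on, the app serves an OpenAI-compatible API on localhost. A hedged sketch of a client call follows; the port (4891 is the usual default, configurable in settings) and the model name are assumptions, not values taken from this patch.

    import json
    import urllib.request

    # Assumed defaults: match the port to the "API Server Port" setting and
    # the model field to a chat model you have installed locally.
    payload = {
        "model": "Llama 3 8B Instruct",
        "messages": [{"role": "user", "content": "Hello!"}],
        "max_tokens": 50,
    }
    req = urllib.request.Request(
        "http://localhost:4891/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])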
@@ -759,25 +769,25 @@
The port to use for the local server. Requires restart.
- Portul utilizat pentru Serverul local. Necesită repornirea programului.
+ Portul utilizat pentru Serverul local. Necesită repornirea programului.Check For Updates
- Caută update-uri
+ Caută update-uriManually check for an update to GPT4All.
- Caută manual update-uri pentru GPT4All.
+ Caută manual update-uri pentru GPT4All.Updates
- Update-uri/Actualizări
+ Update-uri/Actualizări
@@ -786,12 +796,12 @@
New Chat
- Chat/Conversaţie Nouă
+ Chat/Conversaţie NouăServer Chat
- Chat/Conversaţie cu Serverul
+ Conversaţie cu Serverul
@@ -799,12 +809,12 @@
ERROR: Network error occurred while connecting to the API server
-
+ EROARE: Eroare de reţea la conectarea la serverul APIChatAPIWorker::handleFinished got HTTP Error %1 %2
-
+ ChatAPIWorker::handleFinished - eroare: HTTP Error %1 %2
@@ -825,61 +835,61 @@
+ New Chat
- + Chat/Conversaţie nouă
+ + Conversaţie nouăCreate a new chat
- Creează o Conversaţie nouă
+ Creează o Conversaţie nouăSelect the current chat or edit the chat when in edit mode
- Selectează conversaţia curentă sau editează conversaţia cand eşti în modul editare
+ Selectează conversaţia curentă sau editeaz-o când eşti în modul editareEdit chat name
- Editează denumirea conversaţiei
+ Editează denumirea conversaţieiSave chat name
- Salveazăa denumirea conversaţiei
+ Salvează denumirea conversaţieiDelete chat
- Şterge conversaţia
+ Şterge conversaţiaConfirm chat deletion
- CONFIRMĂ ştergerea conversaţiei
+ CONFIRMĂ ştergerea conversaţieiCancel chat deletion
- ANULEAZĂ ştergerea conversaţiei
+ ANULEAZĂ ştergerea conversaţieiList of chats
- Lista conversaţiilor
+ Lista conversaţiilorList of chats in the drawer dialog
- Lista conversaţiilor în secţiunea-sertar
+ Lista conversaţiilor în secţiunea-sertar
@@ -887,12 +897,12 @@
TODAY
- ASTĂZI
+ ASTĂZITHIS WEEK
- SĂPTĂMÂNA ACEASTA
+ SĂPTĂMÂNA ACEASTA
@@ -902,7 +912,7 @@
LAST SIX MONTHS
- ULTIMELE ŞASE LUNI
+ ULTIMELE ŞASE LUNI
@@ -921,7 +931,7 @@
<h3>Warning</h3><p>%1</p>
- <h3>Atenţie</h3><p>%1</p>
+ <h3>Atenţie</h3><p>%1</p>
@@ -933,43 +943,43 @@
Warn the user if they switch models, then context will be erased
- Avertizează utilizatorul că la schimbarea modelului va fi şters contextul
+ Avertizează utilizatorul că la schimbarea modelului va fi şters contextulConversation copied to clipboard.
- Conversaţia a fost plasată în Clipboard.
+ Conversaţia a fost plasată în Clipboard.Code copied to clipboard.
- Codul a fost plasat în Clipboard.
+ Codul a fost plasat în Clipboard.Chat panel
- Secţiunea de chat
+ Secţiunea de chatChat panel with options
- Secţiunea de chat cu opţiuni
+ Secţiunea de chat cu opţiuniReload the currently loaded model
- Reîncarca modelul curent
+ Reîncarcă modelul curentEject the currently loaded model
- Ejectează modelul curent
+ Ejectează modelul curent
@@ -981,25 +991,25 @@
Model loading error.
- Eroare la încarcarea modelului.
+ Eroare la încărcarea modelului.Waiting for model...
- Se aşteaptă modelul...
+ Se aşteaptă modelul...Switching context...
- Se schimbă contextul...
+ Se schimbă contextul...Choose a model...
- Selectează un model...
+ Selectează un model...
@@ -1011,7 +1021,7 @@
The top item is the current model
- Primul element este modelul curent
+ Primul element e modelul curent
@@ -1019,31 +1029,31 @@
LocalDocs
- LocalDocs - Documente Locale
+ LocalDocsAdd documents
- Adaugă documente
+ Adaugă documenteadd collections of documents to the chat
- adaugă Colecţii de documente la conversaţie
+ adaugă Colecţii de documente la conversaţieLoad the default model
- Încarcă modelul implicit
+ Încarcă modelul implicitLoads the default model which can be changed in settings
- Încarcă modelul implicit care poate fi stabilit în Configurare
+ Încarcă modelul implicit care poate fi stabilit în Configurare
@@ -1054,45 +1064,60 @@
GPT4All requires that you install at least one
model to get started
- GPT4All necesită cel puţin un
- model pentru a putea rula
+ GPT4All necesită cel puţin un
+ model pentru a putea porni<h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
-
+ <h3>EROARE la încărcarea
+ modelului:</h3><br><i>"%1"</i><br><br>Astfel
+ de erori pot apărea din mai multe cauze, dintre care cele mai comune
+ includ un format inadecvat al fişierului, un download incomplet sau întrerupt,
+ un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
+ Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
+ un format şi un tip compatibile; verifică dacă fişierul modelului este complet
+ în folderul dedicat - acest folder este afişat în secţiunea Configurare;
+ dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
+ după ce îi verifici amprenta MD5 (md5sum)<li>Află mai mult despre modelele compatibile
+ în pagina unde am plasat <a
+ href="https://docs.gpt4all.io/">documentaţia</a> pentru
+ interfaţa grafică<li>poţi găsi <a
+ href="https://discord.gg/4M2QFmTt2k">canalul nostru Discord</a> unde
+ se oferă ajutorGPT4All requires that you install at least one
model to get started
-
+ GPT4All necesită cel puţin un
+ model pentru a putea rulaInstall a Model
- Instalează un model
+ Instalează un modelShows the add model view
+ Afişează secţiunea de adăugare a unui modelConversation with the model
+ Afisează secţiunea de adăugare a unui modelConversation with the model
- Conversaţie cu modelul
+ Conversaţie cu modelulprompt / response pairs from the conversation
- perechi prompt/replică din conversaţie
+ perechi prompt/replică din conversaţie
@@ -1108,13 +1133,13 @@ model to get started
recalculating context ...
- se reface contextul...
+ se recalculează contextul...response stopped ...
- replică întreruptă...
+ replică întreruptă...
@@ -1126,13 +1151,13 @@ model to get started
generating response ...
- se generează replica...
+ se generează replica...generating questions ...
- se generează întrebari...
+ se generează întrebări...
@@ -1164,25 +1189,25 @@ model to get started
Thumbs up
- Îmi Place
+ BravoGives a thumbs up to the response
- Da un Îmi Place acestei replici
+ Dă un Bravo acestei repliciThumbs down
- Nu Îmi Place
+ AiureaOpens thumbs down dialog
- Deschide reacţia Nu Îmi Place
+ Deschide reacţia Aiurea
@@ -1194,43 +1219,43 @@ model to get started
Suggested follow-ups
- Continuări sugerate
+ Continuări sugerateErase and reset chat session
- Şterge şi resetează sesiunea de chat
+ Şterge şi resetează sesiunea de chatCopy chat session to clipboard
- Copiez sesiunea de chat în Clipboard
+ Copiez sesiunea de chat în ClipboardRedo last chat response
- Refacerea ultimei replici
+ Reface ultima replicăStop generating
- Opreşte generarea
+ Opreşte generareaStop the current response generation
- Opreşte generarea replicii curente
+ Opreşte generarea replicii curenteReloads the model
- Reîncarc modelul
+ Reîncarc modelul<h3>Encountered an error loading
@@ -1246,21 +1271,21 @@ model to get started
href="https://docs.gpt4all.io/">documentation</a> for the
gui<li>Check out our <a
href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
- <h3>EROARE la încărcarea
+ <h3>EROARE la încărcarea
modelului:</h3><br><i>"%1"</i><br><br>Astfel
- de erori pot apărea din mai multe cauze, dintre care cele mai comune
- includ un format inadecvat al fişierului, un download incomplet sau întrerupt,
- un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
- Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
- un format si un tip compatibile; verifică dacă fişierul modelului este complet
- în folderul dedicat - acest folder este afişat în secţiunea Configurare;
- dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
- după ce îi verifici amprenta MD5 (md5sum)<li>Află mai multe despre care modele sunt compatibile
- în pagina unde am plasat <a
- href="https://docs.gpt4all.io/">documentaţia</a> pentru
- interfaţa gráfică<li>poti parcurge <a
+ de erori pot apărea din mai multe cauze, dintre care cele mai comune
+ includ un format inadecvat al fişierului, un download incomplet sau întrerupt,
+ un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
+ Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
+ un format şi un tip compatibile; verifică dacă fişierul modelului este complet
+ în folderul dedicat - acest folder este afişat în secţiunea Configurare;
+ dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
+ după ce îi verifici amprenta MD5 (md5sum)<li>Află mai multe despre modelele compatibile
+ în pagina unde am plasat <a
+ href="https://docs.gpt4all.io/">documentaţia</a> pentru
+ interfaţa grafică<li>poţi parcurge <a
href="https://discord.gg/4M2QFmTt2k">canalul nostru Discord</a> unde
- se oferă ajutor
+ se oferă ajutor
@@ -1268,37 +1293,37 @@ model to get started
Reload · %1
- Reincarcare · %1
+ Reîncărcare · %1Loading · %1
- Incarcare · %1
+ Încărcare · %1Load · %1 (default) →
- Incarca · %1 (implicit) →
+ Încarcă · %1 (implicit) →restoring from text ...
-
+ restaurare din text...retrieving localdocs: %1 ...
- se preiau LocalDocs/Documentele Locale: %1 ...
+ se preia din LocalDocs: %1 ...searching localdocs: %1 ...
- se cauta in LocalDocs: %1 ...
+ se caută în LocalDocs: %1 ...
@@ -1310,13 +1335,13 @@ model to get started
Load a model to continue...
- Incarca un model pentru a continua...
+ Încarcă un model pentru a continua...Send messages/prompts to the model
- Trimite mesaje/prompt-uri către model
+ Trimite mesaje/prompt-uri către model
@@ -1346,7 +1371,7 @@ model to get started
Sends the message/prompt contained in textfield to the model
- Trimite modelului mesajul/prompt-ul din campul-text
+ Trimite modelului mesajul/prompt-ul din câmpul-text
@@ -1355,15 +1380,15 @@ model to get started
Warning: searching collections while indexing can return incomplete results
- Atenţie: căutarea în Colecţii in timp ce sunt Indexate poate cauza rezultate incomplete
+ Atenţie: căutarea în Colecţii în timp ce sunt Indexate poate întoarce rezultate incomplete%n file(s)
- %n fişier
- %n fişiere
+ %n fişier
+ %n fişiere
@@ -1372,7 +1397,7 @@ model to get started
%n word(s)
- %n cuvânt
+ %n cuvânt%n cuvinte
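The "%n file(s)" and "%n word(s)" entries above are plural (numerus) messages: Qt substitutes %n and picks the matching <numerusform> at runtime, which is why the translation carries one line per plural form. A sketch of the underlying structure, with illustrative content:

    import xml.etree.ElementTree as ET

    # Plural message as Qt Linguist stores it; Qt selects the form based on %n.
    PLURAL = """
    <message numerus="yes">
      <source>%n file(s)</source>
      <translation>
        <numerusform>%n fişier</numerusform>
        <numerusform>%n fişiere</numerusform>
      </translation>
    </message>
    """

    msg = ET.fromstring(PLURAL)
    print([form.text for form in msg.iter("numerusform")])
    # ['%n fişier', '%n fişiere']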
@@ -1393,7 +1418,7 @@ model to get started
Select a collection to make it available to the chat model.
- Selectează o Colectie pentru ca modelul să o poată accesa.
+ Selectează o Colecţie pentru ca modelul să o poată accesa.
@@ -1401,37 +1426,37 @@ model to get started
Model "%1" is installed successfully.
-
+ Modelul "%1" a fost instalat cu succes.ERROR: $MODEL_NAME is empty.
-
+ EROARE: $MODEL_NAME absent.ERROR: $API_KEY is empty.
-
+ EROARE: $API_KEY absentă.ERROR: $BASE_URL is invalid.
-
+ EROARE: $BASE_URL incorect.ERROR: Model "%1 (%2)" is conflict.
-
+ EROARE: Modelul "%1 (%2)" e în conflict.Model "%1 (%2)" is installed successfully.
-
+ Modelul "%1 (%2)" a fost instalat cu succes.Model "%1" is removed.
-
+ Modelul "%1" a fost îndepărtat.
@@ -1440,79 +1465,79 @@ model to get started
Welcome to GPT4All
- Bun venit in GPT4All
+ Bun venit în GPT4AllThe privacy-first LLM chat application
- Programul care prioritizeaza intimitatea (privacy)
+ Programul ce prioritizează confidenţialitatea (privacy)Start chatting
- Începe o conversaţie
+ Începe o conversaţieStart Chatting
- Începe o conversaţie
+ Începe o conversaţieChat with any LLM
- Dialoghează cu un LLM
+ Dialoghează cu orice LLMLocalDocs
- LocalDocs/Documente Locale
+ LocalDocsChat with your local files
- Dialoghează cu fişierele tale locale
+ Dialoghează cu fişiere localeFind Models
- Caută modele
+ Caută modeleExplore and download models
- Explorează şi descarcă modele
+ Explorează şi descarcă modeleLatest news
- Ultimele ştiri
+ Ultimele ştiriLatest news from GPT4All
- Ultimele ştiri de la GPT4All
+ Ultimele ştiri de la GPT4AllRelease Notes
- Despre această versiune
+ Despre această versiuneDocumentation
- Documentaţie
+ Documentaţie
@@ -1529,8 +1554,8 @@ model to get started
- Github
- Github
+ GitHub
+ GitHub
@@ -1551,7 +1576,7 @@ model to get started
LocalDocs
- LocalDocs/Documente Locale
+ LocalDocs
@@ -1569,13 +1594,13 @@ model to get started
Allowed File Extensions
- Extensii compatibile de fişier
+ Extensii compatibile de fişierComma-separated list. LocalDocs will only attempt to process files with these
extensions.
- Elemente (extensii) separate prin virgulă. LocalDocs va încerca procesarea
- numai a fişierelor cu aceste extensii.
+ Extensiile, separate prin virgulă. LocalDocs va încerca procesarea
+ numai a fişierelor cu aceste extensii.
@@ -1592,8 +1617,8 @@ model to get started
Embed documents using the fast Nomic API instead of a private local model.
Requires restart.
- Embedding pe documente folosind API de la Nomic în locul unui model local.
- Necesită repornire.
+ Embedding pe documente folosind API de la Nomic în locul unui model local.
+ Necesită repornire.
@@ -1605,9 +1630,9 @@ model to get started
API key to use for Nomic Embed. Get one from the Atlas <a
href="https://atlas.nomic.ai/cli-login">API keys page</a>.
Requires restart.
- Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a
+ Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a
href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a>
- Necesită repornire.
+ Necesită repornire.
@@ -1619,79 +1644,90 @@ model to get started
The compute device used for embeddings. "Auto" uses the CPU. Requires
restart.Dispozitivul pentru Embeddings.
- "Auto" apelează la CPU. Necesită repornire
+ "Auto" apelează la CPU. Necesită repornire
Comma-separated list. LocalDocs will only attempt to process files with these extensions.
-
+ Extensiile, separate prin virgulă. LocalDocs va încerca procesarea
+ numai a fişierelor cu aceste extensii.Embed documents using the fast Nomic API instead of a private local model. Requires restart.
-
+ Embedding pe documente folosind API de la Nomic în locul unui model local.
+ Necesită repornire.API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
-
+ Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a
+ href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a>
+ Necesită repornire.The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
-
+ Dispozitivul pentru Embeddings.
+ "Auto" apelează la CPU. Necesită repornire.Display
- Visualizare
+ VizualizareShow Sources
- Afişarea Surselor
+ Afişarea SurselorDisplay the sources used for each response.
- Afişează Sursele utilizate pentru fiecare replică.
+ Afişează Sursele utilizate pentru fiecare replică.Advanced
- Avansat
+ AvansateWarning: Advanced usage only.
- Atenţie: Numai pentru utilizare avansată.
+ Atenţie: Numai pentru utilizare avansată.Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
-
+ Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau
+ chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat
+ la Context Window/Size/Length a modelului. Mai multe informaţii: <a
+ href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
-
+ Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea
+ unor replici corecte, dar de asemenea cauzează generare lentă.Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
-
+ Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul
+ pentru prompt. Numere mari amplifică probabilitatea
+ unor replici corecte, dar de asemenea cauzează generare lentă.
@@ -1701,35 +1737,35 @@ model to get started
href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
Valori prea mari pot cauza erori cu LocalDocs, replici lente sau
- absenţa lor completă. În mare, numărul {N caractere x N citate} este adăugat
- la Context Window/Size/Length a modelului. Mai multe informaţii: <a
+ absenţa lor completă. În mare, numărul {N caractere x N citate} este adăugat
+ la Context Window/Size/Length a modelului. Mai multe informaţii: <a
href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.Document snippet size (characters)
- Lungimea (în caractere) a citatelor din documente
+ Lungimea (în caractere) a citatelor din documenteNumber of characters per document snippet. Larger numbers increase likelihood of
factual responses, but also result in slower generation.
- Numarul caracterelor din fiecare citat. Numere mari amplifică probabilitatea
- unor replici corecte, dar de asemenea pot cauza generare lentă.
+ Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea
+ unor replici corecte, dar de asemenea pot cauza generare lentă.Max document snippets per prompt
- Numărul maxim de citate per prompt
+ Numărul maxim de citate per promptMax best N matches of retrieved document snippets to add to the context for
prompt. Larger numbers increase likelihood of factual responses, but also result in
slower generation.
- Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul
- pentru prompt. Numere mari amplifică probabilitatea
- unor replici corecte, dar de asemenea pot cauza generare lentă.
+ Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul
+ pentru prompt. Numere mari amplifică probabilitatea
+ unor replici corecte, dar de asemenea pot cauza generare lentă.
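The "{N chars x N snippets}" remark in the strings above is plain budget arithmetic: retrieved snippet text is added to the prompt, so it competes with the conversation for the model's context window. A rough worked example, with every number illustrative:

    # Rough LocalDocs budget check (all values are examples, not defaults).
    snippet_chars = 512      # "Document snippet size (characters)"
    max_snippets = 3         # "Max document snippets per prompt"
    chars_per_token = 4      # crude average for English text

    extra_tokens = snippet_chars * max_snippets / chars_per_token
    print(extra_tokens)      # 384.0 tokens consumed from the context window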
@@ -1744,59 +1780,59 @@ model to get started
Chat with your local files
- Dialoghează cu fişierele tale locale
+ Dialoghează cu fişiere locale+ Add Collection
- + Adaugă o Colecţie
+ + Adaugă o ColecţieERROR: The LocalDocs database is not valid.
- EROARE: Baza de date LocalDocs nu e validă.
+ EROARE: Baza de date LocalDocs nu e validă.<h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.
-
+ <h3>EROARE: Baza de date LocalDocs nu poate fi accesată sau nu e validă.</h3><br><i>Notă: Programul trebuie repornit după ce încerci oricare dintre următoarele remedii sugerate.</i><br><ul><li>Asigură-te că folderul pentru <b>Download Path</b> există în sistemul de fişiere.</li><li>Verifică permisiunile şi apartenenţa folderului pentru <b>Download Path</b>.</li><li>Dacă există fişierul <b>localdocs_v2.db</b>, verifică-i apartenenţa şi permisiunile citire/scriere (read/write).</li></ul><br>Dacă problema persistă şi există vreun fişier 'localdocs_v*.db', ca ultimă soluţie poţi<br>încerca duplicarea (backup) şi apoi ştergerea lor. Oricum, va trebui să re-creezi Colecţiile.No Collections Installed
- Nu există Colecţii instalate
+ Nu există Colecţii instalateInstall a collection of local documents to get started using this feature
- Instalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta
+ Instalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta+ Add Doc Collection
- + Adaugă o Colecţie de documente
+ + Adaugă o Colecţie de documenteShows the add model view
- Afişează sectiunea de adăugare a unui model
+ Afişează secţiunea de adăugare a unui modelIndexing progressBar
- Bara de progresie a Indexării
+ Bara de progresie a IndexăriiShows the progress made in the indexing
- Afişează progresia Indexării
+ Afişează progresia Indexării
@@ -1808,7 +1844,7 @@ model to get started
INDEXING
- ...SE INDEXEAZĂ...
+ ...SE INDEXEAZĂ...
@@ -1820,7 +1856,7 @@ model to get started
REQUIRES UPDATE
- NECESITĂ UPDATE
+ NECESITĂ UPDATE
@@ -1838,31 +1874,31 @@ model to get started
Indexing in progress
- Se Indexează...
+ Se Indexează...Embedding in progress
- ...Se calculează Embeddings...
+ ...Se calculează Embeddings...This collection requires an update after version change
- Această Colecţie necesită update după schimbarea versiunii
+ Această Colecţie necesită update după schimbarea versiuniiAutomatically reindexes upon changes to the folder
- Se reindexează automat după schimbări ale folderului
+ Se reindexează automat după schimbări ale folderuluiInstallation in progress
- ...Instalare în curs...
+ ...Instalare în curs...
@@ -1875,8 +1911,8 @@ model to get started
%n file(s)
- %n fişier
- %n fişiere
+ %n fişier
+ %n fişiere
@@ -1885,7 +1921,7 @@ model to get started
%n word(s)
- %n cuvânt
+ %n cuvânt%n cuvinte
@@ -1894,19 +1930,19 @@ model to get started
Remove
- Elimină
+ ŞtergRebuild
- Reconstrucţie
+ ReconstrucţieReindex this folder from scratch. This is slow and usually not needed.
- Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil.
+ Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil.
@@ -1918,7 +1954,7 @@ model to get started
Update the collection to the new version. This is a slow operation.
- Actualizează Colectia la noua versiune. Această procedură e lentă.
+ Actualizează Colecţia la noua versiune. Această procedură e lentă.
@@ -1932,11 +1968,11 @@ model to get started
OpenAI</li><li>You can apply for an API key <a
href="https://platform.openai.com/account/api-keys">here.</a></li>
- <ul><li>Necesită o cheie API OpenAI personală.
- </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!
- </li><li>Cheia ta API va fi stocată pe disc (local)
- </li><li>Va fi utilizată numai pentru comunicarea cu
- OpenAI</li><li>Poţi solicita o cheie API aici: <a
+ <ul><li>Necesită o cheie API OpenAI personală.
+ </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!
+ </li><li>Cheia ta API va fi stocată pe disc (local)
+ </li><li>Va fi utilizată numai pentru comunicarea cu
+ OpenAI</li><li>Poţi solicita o cheie API aici: <a
href="https://platform.openai.com/account/api-keys">aquí.</a></li>
@@ -1948,27 +1984,33 @@ model to get started
%1 (%2)
-
+ %1 (%2)<strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul>
-
+ <strong>Model API compatibil cu OpenAI</strong><br><ul><li>Cheia API: %1</li><li>Base URL: %2</li><li>Numele modelului: %3</li></ul><ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li>
-
+ <ul><li>Necesită o cheie API OpenAI personală.
+ </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!
+ </li><li>Cheia ta API va fi stocată pe disc (local)
+ </li><li>Va fi utilizată numai pentru comunicarea cu
+ OpenAI</li><li>Poţi solicita o cheie API aici: <a
+ href="https://platform.openai.com/account/api-keys">aquí.</a></li><strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1
-
+ <strong>Modelul ChatGPT GPT-3.5 Turbo de la OpenAI</strong><br> %1<br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.
-
+ <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu
+ garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
@@ -1978,7 +2020,12 @@ model to get started
<ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li>
-
+ <ul><li>Necesită cheia personală Mistral API.
+ </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la
+ Mistral!</li><li>Cheia ta API va fi stocată
+ pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu
+ Mistral</li><li>Poţi solicita o cheie API aici: <a
+ href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
@@ -1998,23 +2045,27 @@ model to get started
<ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li>
-
+ <ul><li>Necesită cheia personală API şi base-URL-ul API-ului.</li><li>ATENŢIE: Conversaţiile tale vor fi trimise la serverul API compatibil cu OpenAI specificat!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu serverul API compatibil cu OpenAI</li><strong>Connect to OpenAI-compatible API server</strong><br> %1
-
+ <strong>Conectare la un server API compatibil cu OpenAI</strong><br> %1<strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
-
+ <strong>Creat de către
+ %1.</strong><br><ul><li>Publicat în %2.<li>Acest
+ model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii
+ pot fi găsite la: <a
+ href="https://huggingface.co/%5">aquí.</a></ul><br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does
not guarantee API key access. Contact OpenAI for more info.
- <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu
- garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
+ <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu
+ garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
@@ -2025,11 +2076,11 @@ model to get started
Mistral</li><li>You can apply for an API key <a
href="https://console.mistral.ai/user/api-keys">here</a>.</li>
- <ul><li>Necesită cheia personală Mistral API.
- </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la
- Mistral!</li><li>Cheia ta API va fi stocată
- pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu
- Mistral</li><li>Poţi solicita o cheie API aici: <a
+ <ul><li>Necesită cheia personală Mistral API.
+ </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la
+ Mistral!</li><li>Cheia ta API va fi stocată
+ pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu
+ Mistral</li><li>Poţi solicita o cheie API aici: <a
href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
@@ -2037,10 +2088,10 @@ model to get started
%1.</strong><br><ul><li>Published on %2.<li>This model
has %3 likes.<li>This model has %4 downloads.<li>More info can be found
<a href="https://huggingface.co/%5">here.</a></ul>
- <strong>Creat de către
+ <strong>Creat de către
%1.</strong><br><ul><li>Publicat în %2.<li>Acest
- model are %3 Likes (Îmi Place).<li>Acest model are %4 download-uri.<li>Mai multe informaţii
- pot fi găsite la: <a
+ model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii
+ pot fi găsite la: <a
href="https://huggingface.co/%5">aquí.</a></ul>
@@ -2056,7 +2107,7 @@ model to get started
Model Settings
- Configurarea modelului
+ Configurez modelul
@@ -2068,7 +2119,7 @@ model to get started
Remove
- Elimin
+ Şterg
@@ -2080,7 +2131,7 @@ model to get started
Model File
- Fisierul modelului
+ Fişierul modelului
@@ -2091,8 +2142,8 @@ model to get started
Prefixed at the beginning of every conversation. Must contain the appropriate
framing tokens.
- Plasat la începutul fiecărei conversaţii. Trebuie să conţină
- token-uri(le) adecvate de încadrare.
+ Plasat la începutul fiecărei conversaţii. Trebuie să conţină
+ token-uri(le) adecvate de încadrare.
@@ -2104,25 +2155,25 @@ model to get started
The template that wraps every prompt.
- Standardul de formulare a fiecărui prompt.
+ Standardul de formulare a fiecărui prompt.Must contain the string "%1" to be replaced with the user's
input.
- Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie
+ Trebuie să conţină textul "%1" care va fi Înlocuit cu ceea ce scrie
utilizatorul.Chat Name Prompt
- Denumirea conversaţiei
+ Denumirea conversaţieiPrompt used to automatically generate chat names.
- Standardul de formulare a denumirii conversaţiilor.
+ Standardul de formulare a denumirii conversaţiilor.
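The template strings above share one contract: the wrapper must contain "%1", which the application substitutes with the user's input before the text reaches the model. A hedged illustration (the Alpaca-style framing is an example, not the app's shipped default):

    # "%1" is the placeholder the chat client replaces with the user's message.
    template = "### Human:\n%1\n### Assistant:\n"
    prompt = template.replace("%1", "Ce este GPT4All?")
    print(prompt)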
@@ -2134,7 +2185,7 @@ model to get started
Prompt used to generate suggested follow-up questions.
- Prompt-ul folosit pentru generarea întrebărilor de continuare.
+ Prompt-ul folosit pentru generarea întrebărilor de continuare.
@@ -2146,15 +2197,15 @@ model to get started
Number of input and output tokens the model sees.
- Numărul token-urilor de input şi de output văzute de model.
+ Numărul token-urilor de input şi de output văzute de model.Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
- Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie.
- Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va determina rezultate mai slabe.
- NOTĂ: Nu are efect până la reincărcarea modelului.
+ Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie.
+ Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe.
+ NOTĂ: Nu are efect până la reîncărcarea modelului.
@@ -2166,13 +2217,13 @@ model to get started
Randomness of model output. Higher -> more variation.
- Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate.
+ Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate.Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile.
- NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile.
+ NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.
@@ -2189,20 +2240,21 @@ model to get started
Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P.
- NOTĂ: Se evită selectarea token-urilor foarte improbabile.
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P.
+ NOTĂ: Se evită selectarea token-urilor foarte improbabile.Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.
-
+ Plasat la începutul fiecărei conversaţii. Trebuie să conţină
+ token-uri(le) adecvate de încadrare.Must contain the string "%1" to be replaced with the user's input.
-
+ Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul.
@@ -2210,21 +2262,25 @@ model to get started
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
-
+ Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie.
+ Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe.
+ NOTĂ: Nu are efect până la reîncărcarea modelului.Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
-
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile.
+ NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
-
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P.
+ NOTĂ: Se evită selectarea token-urilor foarte improbabile.
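Because the entries above describe the temperature and top_p settings, a compact sketch of what top-p (nucleus) filtering does may help when reviewing the wording; this is the generic algorithm, not the application's exact implementation:

    # Keep the most likely tokens whose cumulative probability covers top_p,
    # then renormalize; everything else is discarded as highly unlikely.
    def top_p_filter(probs, top_p):
        kept, total = {}, 0.0
        for tok, p in sorted(probs.items(), key=lambda kv: kv[1], reverse=True):
            if total >= top_p and kept:
                break
            kept[tok] = p
            total += p
        norm = sum(kept.values())
        return {tok: p / norm for tok, p in kept.items()}

    print(top_p_filter({"da": 0.5, "nu": 0.3, "poate": 0.15, "!": 0.05}, 0.9))
    # '!' is dropped; the remaining three tokens are renormalized.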
@@ -2236,13 +2292,13 @@ NOTE: Prevents choosing highly unlikely tokens.
Minimum token probability. Higher -> more predictable.
- Probabilitatea mínimă a unui token. Mai mare -> mai predictibil.
+ Probabilitatea minimă a unui token. Mai mare -> mai predictibil.Sets the minimum relative probability for a token to be considered.
- Stabileşte probabilitatea minimă relativă a unui token de luat în considerare.
+ Stabileşte probabilitatea minimă relativă a unui token de luat în considerare.
@@ -2266,13 +2322,13 @@ NOTE: Prevents choosing highly unlikely tokens.
Max Length
- Lungimea maximă
+ Lungimea maximăMaximum response length, in tokens.
- Lungimea maximă - in token-uri - a replicii.
+ Lungimea maximă - în token-uri - a replicii.
@@ -2291,7 +2347,8 @@ NOTE: Prevents choosing highly unlikely tokens.
Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
-
+ Numărul token-urilor procesate simultan.
+ NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2299,13 +2356,16 @@ NOTE: Higher values can speed up reading prompts but will use more RAM.
How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
-
+ Cât de multe layere ale modelului să fie încărcate în VRAM.
+ Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul.
+ Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa.
+ NOTĂ: Nu are efect până la reîncărcarea modelului.Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- Numarul token-urilor procesate simultan.
- NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
+ Numărul token-urilor procesate simultan.
+ NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2317,41 +2377,41 @@ NOTE: Does not take effect until you reload the model.
Repetition penalty factor. Set to 1 to disable.
- Factorul de penalizare a repetării ce se dezactivează cu valoarea 1.
+ Factorul de penalizare a repetării ce se dezactivează cu valoarea 1.Repeat Penalty Tokens
- Token-uri pentru penalzare a repetării
+ Token-uri pentru penalizarea repetăriiNumber of previous tokens used for penalty.
- Numărul token-urilor anterioare considerate pentru penalizare.
+ Numărul token-urilor anterioare considerate pentru penalizare.GPU Layers
- Layere în GPU
+ Layere în GPUNumber of model layers to load into VRAM.
- Numărul layerelor modelului ce vor fi încărcate în VRAM.
+ Numărul layerelor modelului ce vor fi încărcate în VRAM.How many model layers to load into VRAM. Decrease this if GPT4All runs out of
VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
- Cât de multe layere ale modelului să fie încărcate în VRAM.
- Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul.
- Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa.
- NOTĂ: Nu are efect până la reîncărcarea modelului.
+ Cât de multe layere ale modelului să fie încărcate în VRAM.
+ Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul.
+ Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa.
+ NOTĂ: Nu are efect până la reîncărcarea modelului.
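The context-length and GPU-layer strings above map onto options that also exist outside the GUI; recent gpt4all Python bindings expose them as constructor arguments. A hedged sketch, where the parameter names (n_ctx, ngl) follow those bindings and the model file is only an example:

    from gpt4all import GPT4All

    # n_ctx -> "Context Length": prompt + response token budget (applies on reload).
    # ngl   -> "GPU Layers": layers offloaded to VRAM; lower it if the model no
    #          longer fits, at the cost of more CPU/RAM use and slower inference.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf", n_ctx=2048, ngl=100)
    with model.chat_session():
        print(model.generate("Salut!", max_tokens=64))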
@@ -2360,13 +2420,13 @@ NOTE: Does not take effect until you reload the model.
No Models Installed
- Nu există modele instalate
+ Nu există modele instalateInstall a model to get started using GPT4All
- Instalează un model pentru a începe să foloseşti GPT4All
+ Instalează un model pentru a începe să foloseşti GPT4All
@@ -2374,13 +2434,13 @@ NOTE: Does not take effect until you reload the model.
+ Add Model
- + Adaugă un model
+ + Adaugă un modelShows the add model view
- Afişează secţiunea de adăugare a unui model
+ Afişează secţiunea de adăugare a unui model
@@ -2392,19 +2452,19 @@ NOTE: Does not take effect until you reload the model.
Locally installed chat models
- Modele conversaţionale instalate local
+ Modele conversaţionale instalate localModel file
- Fişierul modelului
+ Fişierul modeluluiModel file to be downloaded
- Fişierul modelului ce va fi descărcat
+ Fişierul modelului ce va fi descărcat
@@ -2416,7 +2476,7 @@ NOTE: Does not take effect until you reload the model.
File description
- Descrierea fişierului
+ Descrierea fişierului
@@ -2434,19 +2494,19 @@ NOTE: Does not take effect until you reload the model.
Stop/restart/start the download
- Oprirea/Repornirea/Initierea descărcării
+ Oprirea/Repornirea/Iniţierea descărcăriiRemove
- Elimină
+ ŞtergRemove model from filesystem
- Elimină modelul din sistemul de fişiere
+ Şterg modelul din sistemul de fişiere
@@ -2454,13 +2514,13 @@ NOTE: Does not take effect until you reload the model.
Install
- Instalează
+ InstaleazăInstall online model
- Instalează un model din online
+ Instalez un model din online<strong><font size="1"><a
@@ -2472,8 +2532,8 @@ NOTE: Does not take effect until you reload the model.<strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat pentru
- acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău
+ <strong><font size="2">ATENţIE: Nerecomandat pentru
+ acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău
(%2).</strong></font>
@@ -2492,19 +2552,21 @@ NOTE: Does not take effect until you reload the model.
Describes an error that occurred when downloading
- Descrie o eroare apărută în timpul descărcării
+ Descrie o eroare apărută la download<strong><font size="1"><a href="#error">Error</a></strong></font>
-
+ <strong><font size="1"><a href="#eroare">Error</a></strong></font><strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
-
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru
+ acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău
+ (%2).</strong></font>
@@ -2516,13 +2578,13 @@ NOTE: Does not take effect until you reload the model.
Download progressBar
- Bara de progresie a descărcării
+ Bara de progresie a descărcăriiShows the progress made in the download
- Afişează progresia descărcării
+ Afişează progresia descărcării
@@ -2534,13 +2596,13 @@ NOTE: Does not take effect until you reload the model.
Download speed in bytes/kilobytes/megabytes per second
- Viteza de download în bytes/kilobytes/megabytes pe secundă
+ Viteza de download în bytes/kilobytes/megabytes pe secundăCalculating...
- ...Se calculeaza...
+ ...Se calculează...
@@ -2552,7 +2614,7 @@ NOTE: Does not take effect until you reload the model.
Whether the file hash is being calculated
- Dacă se va calcula hash-ul fişierului
+ Dacă se va calcula hash-ul fişierului
@@ -2564,13 +2626,13 @@ NOTE: Does not take effect until you reload the model.
Displayed when the file hash is being calculated
- Afişat când se calculează hash-ul unui fişier
+ Afişat când se calculează hash-ul unui fişierERROR: $API_KEY is empty.
-
+ EROARE: $API_KEY absentă.
@@ -2582,37 +2644,37 @@ NOTE: Does not take effect until you reload the model.
ERROR: $BASE_URL is empty.
-
+ EROARE: $BASE_URL absent.enter $BASE_URL
-
+ introdu $BASE_URLERROR: $MODEL_NAME is empty.
-
+ EROARE: $MODEL_NAME absent.enter $MODEL_NAME
-
+ introdu $MODEL_NAMEFile size
- Dimensiunea fişierului
+ Dimensiunea fişieruluiRAM required
- RAM necesară
+ RAM necesară
@@ -2654,7 +2716,7 @@ NOTE: Does not take effect until you reload the model.
Please choose a directory
- Selectează un director (folder)
+ Selectează un director (folder)
@@ -2663,13 +2725,13 @@ NOTE: Does not take effect until you reload the model.
Restore Defaults
- Restaurează valorile implicite
+ Restaurez valorile impliciteRestores settings dialog to a default state
- Restaurează secţiunea de configurare la starea sa implicită
+ Restaurez secţiunea de configurare la starea sa implicită
@@ -2678,7 +2740,7 @@ NOTE: Does not take effect until you reload the model.
Contribute data to the GPT4All Opensource Datalake.
- Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All.
+ Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All.By enabling this feature, you will be able to participate in the democratic
@@ -2697,21 +2759,21 @@ NOTE: Does not take effect until you reload the model.
used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
attribution information attached to your data and you will be credited as a
contributor to any GPT4All model release that uses your data!
- Dacă activezi această funcţionalitate, vei participa la procesul democratic
- de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
+ Dacă activezi această funcţionalitate, vei participa la procesul democratic
+ de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
- Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
- trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica,
- Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ.
- Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
+ Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
+ trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica;
+ dacă răspunsul Nu Îţi Place, poţi sugera unul alternativ.
+ Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
- NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
- DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
- această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
- Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
- utilizate de către Nomic AI pentru a imbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
- toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
- participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
+ NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
+ DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
+ această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
+ Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
+ utilizate de către Nomic AI pentru a îmbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
+ toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+ participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
@@ -2721,7 +2783,21 @@ NOTE: Does not take effect until you reload the model.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
-
+ Dacă activezi această funcţionalitate, vei participa la procesul democratic
+ de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
+
+ Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
+ trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica;
+ dacă răspunsul Nu Îţi Place, poţi sugera unul alternativ.
+ Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
+
+ NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
+ DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
+ această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
+ Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
+ utilizate de către Nomic AI pentru a îmbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
+ toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+ participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
@@ -2733,37 +2809,37 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Describes what will happen when you opt-in
- Descrie ce se întâmplă când participi
+ Descrie ce se întâmplă când participiPlease provide a name for attribution (optional)
- Specifică o denumire pentru această apreciere (optional)
+ Specifică o denumire pentru această apreciere (opţional)Attribution (optional)
- Apreciere (opţional)
+ Apreciere (opţional)Provide attribution
- Apreciază
+ ApreciazăEnable
- Activează
+ ActiveazăEnable opt-in
- Activează participarea
+ Activează participarea
@@ -2775,7 +2851,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Cancel opt-in
- Anulează participarea
+ Anulează participarea
@@ -2784,7 +2860,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
New version is available
- O nouă versiune disponibilă!
+ O nouă versiune disponibilă!
@@ -2796,7 +2872,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Update to new version
- Actualizează la noua versiune
+ Actualizează la noua versiune
@@ -2805,7 +2881,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Reveals a shortlived help balloon
- Afisează un mesaj scurt de asistenţă
+ Afişează un mesaj scurt de asistenţă
@@ -2817,7 +2893,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Displayed when the popup is showing busy
- Se afişează când procedura este în desfăşurare
+ Se afişează când procedura este în desfăşurare
@@ -2834,7 +2910,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Contains various application settings
- Conţine setări ale programului
+ Conţine setări ale programului
@@ -2881,7 +2957,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Release notes for this version
- Despre această versiune
+ Despre această versiune### Opt-ins for anonymous usage analytics and datalake
@@ -2907,28 +2983,28 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
attribution information attached to your data and you will be credited as a
contributor to any GPT4All
model release that uses your data!
- ### Acceptul pentru analizarea utilizării anonime şi pentru DataLake
- Activând aceste functionalităţi vei putea participa la procesul democratic
+ ### Acceptul pentru analizarea utilizării anonime şi pentru DataLake
+ Activând aceste funcţionalităţi vei putea participa la procesul democratic
de instruire a unui
- model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
- Cand un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
- trimisă la componenta
- Open-source DataLake a GPT4All. Mai mult - poti aprecia (Like/Dislike) răspunsul. Dacă
- un răspuns Nu Îţi Place. poţi
- sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
+ model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
+ Când un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
+ trimisă la componenta
+ Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Like/Dislike) răspunsul. Dacă
+ un răspuns Nu Îţi Place, poţi
+ sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
componenta DataLake a GPT4All.
- NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
+ NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
DataLake a GPT4All.
- Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
- Totuşi, te poţi aştepta la a beneficia de apreciere -
- opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
- pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
- pentru a îmbunătăţi modele viitoare în GPT4All.
- Nomic AI va păstra
- toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+ Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
+ Totuşi, te poţi aştepta la a beneficia de apreciere -
+ opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
+ pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
+ pentru a îmbunătăţi modele viitoare în GPT4All.
+ Nomic AI va păstra
+ toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
participant contribuitor la orice lansare a unui model GPT4All
- care foloseşte datele tale!
+ care foloseşte datele tale!
@@ -2936,7 +3012,9 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
### Release notes
%1### Contributors
%2
-
+ ### Despre versiune
+ %1### Contributori
+ %2
@@ -2955,7 +3033,25 @@ an expectation of an optional attribution if you wish. Your chat data will be op
to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
attribution information attached to your data and you will be credited as a contributor to any GPT4All
model release that uses your data!
-
+ ### Acordul pentru analizarea utilizării anonime şi pentru DataLake
+ Activând aceste funcţionalităţi vei putea participa la procesul democratic de instruire
+a unui model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
+
+Când un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
+trimisă la componenta Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Bravo/Aiurea) răspunsul. Dacă
+un răspuns e Aiurea, poţi sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
+componenta DataLake a GPT4All.
+
+NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta DataLake a GPT4All.
+Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
+Totuşi, te poţi aştepta la a beneficia de apreciere -
+opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
+pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
+pentru a îmbunătăţi modele viitoare în GPT4All.
+Nomic AI va păstra
+toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+participant contribuitor la orice lansare a unui model GPT4All
+care foloseşte datele tale!
@@ -2967,7 +3063,7 @@ model release that uses your data!
Describes what will happen when you opt-in
- Descrie ce se întâmplă când accepţi
+ Descrie ce se întâmplă când participi
@@ -2975,7 +3071,7 @@ model release that uses your data!
Opt-in for anonymous usage statistics
- Acceptă colectarea de statistici despre utilizare anonmă
+ Acceptă colectarea de statistici despre utilizare anonimă
@@ -2989,7 +3085,7 @@ model release that uses your data!
Allow opt-in for anonymous usage statistics
- Acceptă colectarea de statistici despre utilizare anonimă
+ Acceptă participarea la colectarea de statistici despre utilizare anonimă
@@ -3003,13 +3099,13 @@ model release that uses your data!
Opt-out for anonymous usage statistics
- Anuleaza colectarea de statistici despre utilizare anonimă
+ Anulează participarea la colectarea de statistici despre utilizare -anonimă-Allow opt-out for anonymous usage statistics
- Permite anularea colectării de statistici despre utilizare anonimă
+ Permite anularea participării la colectarea de statistici despre utilizare anonimă
@@ -3017,31 +3113,31 @@ model release that uses your data!
Opt-in for network
- Acceptă pentru reţea
+ Acceptă pentru reţeaAllow opt-in for network
- Permite acceptarea pentru reţea
+ Permite participarea pentru reţeaAllow opt-in anonymous sharing of chats to the GPT4All Datalake
- Permite partajarea (share) anonimă a conversaţiilor către DataLake a GPT4All
+ Permite participarea la partajarea (share) anonimă a conversaţiilor către DataLake a GPT4AllOpt-out for network
- Refuz pentru reţea
+ Refuz participarea pentru reţeaAllow opt-out anonymous sharing of chats to the GPT4All Datalake
- Permite anularea partajării (share) anonime a conversaţiilor către DataLake a GPT4All
+ Permite anularea participării la partajarea anonimă a conversaţiilor către DataLake a GPT4All
@@ -3049,26 +3145,27 @@ model release that uses your data!
<b>Warning:</b> changing the model will erase the current
conversation. Do you wish to continue?
- <b>Atenţie:</b> schimbarea modelului va sterge conversaţia
- curentă. Confirmi aceasta?
+ <b>Atenţie:</b> schimbarea modelului va şterge conversaţia
+ curentă. Confirmi aceasta?<b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
-
+ <b>Atenţie:</b> schimbarea modelului va şterge conversaţia
+ curentă. Confirmi aceasta?Continue
- Continuă
+ ContinuăContinue with model loading
- Continuă cu încărcarea modelului
+ Continuă cu încărcarea modelului
@@ -3085,13 +3182,13 @@ model release that uses your data!
Please edit the text below to provide a better response. (optional)
- Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional).
+ Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional).Please provide a better response...
- Te rog, oferă o replică mai bună...
+ Te rog, oferă o replică mai bună...
@@ -3103,7 +3200,7 @@ model release that uses your data!
Submits the user's response
- Trimite răspunsul dat de utilizator
+ Trimite răspunsul dat de utilizator
@@ -3115,7 +3212,7 @@ model release that uses your data!
Closes the response dialog
- Închide dialogul răspunsului
+ Închide dialogul răspunsului
@@ -3127,17 +3224,17 @@ model release that uses your data!
detected."</i><br><br>Unfortunately, your CPU does not meet
the minimal requirements to run this program. In particular, it does not support AVX
intrinsics which this program requires to successfully run a modern large language
- model. The only solution at this time is to upgrade your hardware to a more modern
+ model. The only soluţion at this time is to upgrade your hardware to a more modern
CPU.<br><br>See here for more information: <a
href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>A apărut o eroare la iniţializare:;
+ <h3>A apărut o eroare la iniţializare:;
</h3><br><i>"Hardware incompatibil.
- "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte
- condiţiile minime pentru a rula acest program. În particular, nu suportă
- instrucţiunile AVX pe care programul le necesită pentru a integra un model
- conversaţional modern. În acest moment, unica solutie este să îţi aduci la zi sistemul hardware
- cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii:
+ "</i><br><br>Din păcate, procesorul (CPU) nu Întruneşte
+ condiţiile minime pentru a rula acest program. în particular, nu suportă
+ instrucţiunile AVX pe care programul le necesită pentru a integra un model
+ conversaţional modern. în acest moment, unica soluţie este să Îţi aduci la zi sistemul hardware
+ cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii:
<a
href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
@@ -3155,86 +3252,101 @@ model release that uses your data!
permissions in the local app config directory where the settings file is located.
Check out our <a href="https://discord.gg/4M2QFmTt2k">discord
channel</a> for help.
- <h3>A apărut o eroare la iniţializare:;
- </h3><br><i>"Nu poate fi accesat fişierul de configurare
- a programului."</i><br><br>Din păcate, ceva împiedică
- programul în a accesa acel fişier. Cauza poate fi un set de permisiuni
- incorecte în/pe directorul/folderul local de configurare unde se află acel fişier.
- Poti vizita canalul nostru <a
+ <h3>A apărut o eroare la iniţializare:;
+ </h3><br><i>"Nu poate fi accesat fişierul de configurare
+ a programului."</i><br><br>Din păcate, ceva împiedică
+ programul în a accesa acel fişier. Cauza poate fi un set de permisiuni
+ incorecte În/pe directorul/folderul local de configurare unde se află acel fişier.
+ Poţi parcurge canalul nostru <a
href="https://discord.gg/4M2QFmTt2k">Discord</a> unde
- vei putea primi asistenţă.
+ vei putea primi asistenţă.
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only soluţion at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>A apărut o eroare la iniţializare:;
+ </h3><br><i>"Hardware incompatibil.
+ "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte
+ condiţiile minime pentru a rula acest program. În particular, nu suportă
+ instrucţiunile AVX pe care programul le necesită pentru a integra un model
+ conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware
+ cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii:
+ <a
+ href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
<h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
-
+ <h3>A apărut o eroare la iniţializare:;
+ </h3><br><i>"Nu poate fi accesat fişierul de configurare
+ a programului."</i><br><br>Din păcate, ceva împiedică
+ programul în a accesa acel fişier. Cauza poate fi un set de permisiuni
+ incorecte în/pe directorul/folderul local de configurare unde se află acel fişier.
+ Poţi parcurge canalul nostru <a
+ href="https://discord.gg/4M2QFmTt2k">Discord</a> unde
+ vei putea primi asistenţă.
Connection to datalake failed.
- Conectarea la DataLake a eşuat.
+ Conectarea la DataLake a eşuat.
Saving chats.
- Se salvează conversaţiile.
+ Se salvează conversaţiile.
Network dialog
- Dialogul despre reţea
+ Dialogul despre reţea
opt-in to share feedback/conversations
- acceptă partajarea (share) de comentarii/conversaţii
+ acceptă partajarea (share) de comentarii/conversaţii
Home view
- Secţiunea de început
+ Secţiunea de Început
Home view of application
- Secţiunea de început a programului
+ Secţiunea de Început a programului
Home
- Prima pagină
+ Prima pagină
Chat view
- Secţiunea conversaţiilor
+ Secţiunea conversaţiilor
Chat view to interact with models
- Secţiunea de chat pentru interacţiune cu modele
+ Secţiunea de chat pentru interacţiune cu modele
Chats
- Conversaţii/Chat-uri
+ Conversaţii/Chat-uri
@@ -3248,7 +3360,7 @@ model release that uses your data!
Models view for installed models
- Secţiunea modelelor instalate
+ Secţiunea modelelor instalate
@@ -3262,7 +3374,7 @@ model release that uses your data!
LocalDocs view to configure and use local docs
- Secţiunea LocalDocs de configurare şi folosire a documentelor locale
+ Secţiunea LocalDocs de configurare şi folosire a Documentelor Locale
@@ -3276,7 +3388,7 @@ model release that uses your data!
Settings view for application configuration
- Secţiunea de configurare a programului
+ Secţiunea de configurare a programului
@@ -3288,7 +3400,7 @@ model release that uses your data!
Using a network model
- Se foloseşte un model pe reţea
+ Se foloseşte un model pe reţea
@@ -3306,7 +3418,7 @@ model release that uses your data!
View of installed models
- Secţiunea modelelor instalate
+ Secţiunea modelelor instalate
From 79086e10edbfe958139b1f31ece2088227297904 Mon Sep 17 00:00:00 2001
From: Adam Treat
Date: Fri, 9 Aug 2024 13:40:53 -0400
Subject: [PATCH 17/66] Fix stray character in new ro_RO that snuck in.
Signed-off-by: Adam Treat
---
gpt4all-chat/translations/gpt4all_ro_RO.ts | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index fa64057d..9089992f 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -1426,7 +1426,7 @@ model to get started
Model "%1" is installed successfully.
-. Modelul "%1" - instalat cu succes
+ Modelul "%1" - instalat cu succes
From 257a734f257f7b128d25fdad44413a27a673e89b Mon Sep 17 00:00:00 2001
From: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Date: Fri, 9 Aug 2024 21:47:05 +0300
Subject: [PATCH 18/66] Update gpt4all_ro_RO.ts (#2831)
Deleted the translation of the term "LocalDocs" and left the name as it is.
Deleted "chat-uri", a word combined from two languages: "-uri" is the Romanian plural suffix attached to the recent loanword "chat".
Signed-off-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
---
gpt4all-chat/translations/gpt4all_ro_RO.ts | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index 9089992f..fcd6380e 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -1426,7 +1426,7 @@ model to get started
Model "%1" is installed successfully.
- Modelul "%1" - instalat cu succes
+ Modelul "%1" - instalat cu succes.
@@ -1451,7 +1451,7 @@ model to get started
Model "%1 (%2)" is installed successfully.
- Modelul "%1 (%2)" - instalat cu succes
+ Modelul "%1 (%2)" - instalat cu succes.
@@ -1774,7 +1774,7 @@ model to get started
LocalDocs
- LocalDocs/Documente Locale
+ LocalDocs
@@ -2928,7 +2928,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
LocalDocs
- LocalDocs/Documente Locale
+ LocalDocs
@@ -3328,7 +3328,7 @@ care foloseşte datele tale!
Home
- Prima pagină
+ Prima<br/>pagină
@@ -3346,7 +3346,7 @@ care foloseşte datele tale!
Chats
- Conversaţii/Chat-uri
+ Conversaţii
@@ -3368,7 +3368,7 @@ care foloseşte datele tale!
LocalDocs
- LocalDocs/Documente Locale
+ LocalDocs
From 2df330cde32f1563adc029871005ad57430791ef Mon Sep 17 00:00:00 2001
From: Jay <127801635+jstayco@users.noreply.github.com>
Date: Fri, 9 Aug 2024 12:18:27 -0700
Subject: [PATCH 19/66] Updated es_MX translation (#2829)
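Plural strings in this catalog use Qt's numerus forms: one source string
carries one translation per plural class of the target language. A minimal
sketch of such an entry (values illustrative, not copied from the file):

    <message numerus="yes">
        <source>%n file(s)</source>
        <translation>
            <numerusform>%n archivo</numerusform>
            <numerusform>%n archivos</numerusform>
        </translation>
    </message>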
Signed-off-by: JSTayco
---
gpt4all-chat/translations/gpt4all_es_MX.ts | 2308 +++++++++-----------
1 file changed, 1007 insertions(+), 1301 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index e19b8d4a..b4471c97 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -15,17 +15,11 @@
Add Document Collection
Agregar colección de documentos
-
- Add a folder containing plain text files, PDFs, or Markdown. Configure
- additional extensions in Settings.
- Agregue una carpeta que contenga archivos de texto plano, PDFs o Markdown.
- Configure extensiones adicionales en Configuración.
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.
-
+ Agregue una carpeta que contenga archivos de texto plano, PDFs o Markdown. Configure extensiones adicionales en Configuración.
@@ -85,367 +79,348 @@
AddModelView
-
-
+
+ ← Existing Models
+ ← Modelos existentes
-
-
+
+ Explore Models
+ Explorar modelos
-
-
+
+ Discover and download models by keyword search...
+ Descubre y descarga modelos mediante búsqueda por palabras clave...
-
-
+
+ Text field for discovering and filtering downloadable models
+ Campo de texto para descubrir y filtrar modelos descargables
-
-
+
+ Initiate model discovery and filtering
+ Iniciar descubrimiento y filtrado de modelos
-
-
+
+ Triggers discovery and filtering of models
+ Activa el descubrimiento y filtrado de modelos
-
-
+
+ Default
+ Predeterminado
-
-
+
+ Likes
+ Me gusta
-
-
+
+ Downloads
+ Descargas
-
-
+
+ Recent
+ Reciente
-
-
+
+ Asc
+ Asc
-
-
+
+ Desc
+ Desc
-
-
+
+ None
+ Ninguno
-
-
+
+ Searching · %1
+ Buscando · %1
-
-
+
+ Sort by: %1
+ Ordenar por: %1
-
-
+
+ Sort dir: %1
+ Dirección de ordenamiento: %1
-
-
+
+ Limit: %1
+ Límite: %1
-
-
+
+ Network error: could not retrieve %1
+ Error de red: no se pudo recuperar %1
-
-
-
-
+
+
+
+ Busy indicator
+ Indicador de ocupado
-
-
+
+ Displayed when the models request is ongoing
+ Se muestra cuando la solicitud de modelos está en curso
-
-
+
+ Model file
+ Archivo del modelo
-
-
+
+ Model file to be downloaded
+ Archivo del modelo a descargar
-
-
+
+ Description
+ Descripción
-
-
+
+ File description
+ Descripción del archivo
-
-
+
+ Cancel
+ Cancelar
-
-
+
+ Resume
+ Reanudar
-
-
+
+ Download
+ Descargar
-
-
+
+ Stop/restart/start the download
+ Detener/reiniciar/iniciar la descarga
-
-
+
+ Remove
+ Eliminar
-
-
+
+ Remove model from filesystem
+ Eliminar modelo del sistema de archivos
-
-
-
-
+
+
+
+ Install
+ Instalar
-
-
+
+ Install online model
+ Instalar modelo en línea
-
-
+
+ <strong><font size="1"><a href="#error">Error</a></strong></font>
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font>
-
-
+
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
-
+ <strong><font size="2">ADVERTENCIA: No recomendado para tu hardware. El modelo requiere más memoria (%1 GB) de la que tu sistema tiene disponible (%2).</strong></font>
- <strong><font size="2">WARNING: Not recommended for your
- hardware. Model requires more memory (%1 GB) than your system has available
- (%2).</strong></font>
- <strong><font size="2">ADVERTENCIA: No recomendado
- para su hardware. El modelo requiere más memoria (%1 GB) de la que su sistema tiene
- disponible
- (%2).</strong></font>
-
-
-
-
+
+ %1 GB
+ %1 GB
-
-
-
-
+
+
+
+ ?
+ ?
-
-
+
+ Describes an error that occurred when downloading
+ Describe un error que ocurrió durante la descarga
- <strong><font size="1"><a
- href="#error">Error</a></strong></font>
- <strong><font size="1"><a
- href="#error">Error</a></strong></font>
-
-
-
-
+
+ Error for incompatible hardware
+ Error por hardware incompatible
-
-
+
+ Download progressBar
+ Barra de progreso de descarga
-
-
+
+ Shows the progress made in the download
+ Muestra el progreso realizado en la descarga
-
-
+
+ Download speed
+ Velocidad de descarga
-
-
+
+ Download speed in bytes/kilobytes/megabytes per second
+ Velocidad de descarga en bytes/kilobytes/megabytes por segundo
-
-
+
+ Calculating...
+ Calculando...
+
-
-
-
+
-
-
- Whether the file hash is being calculated
- Si se está calculando el hash del archivo
-
-
+
+ Displayed when the file hash is being calculated
+ Se muestra cuando se está calculando el hash del archivo
+
+
+
+ enter $API_KEY
+ ingrese $API_KEY
+
+
+
+
+ File size
+ Tamaño del archivo
+
+
+
+
+ RAM required
+ RAM requerida
+
+
+
+
+ Parameters
+ Parámetros
+
+
+
+
+ Quant
+ Cuantificación
+
+
+
+
+ Type
+ Tipo
+ ERROR: $API_KEY is empty.
-
-
-
-
-
- enter $API_KEY
- ingrese $API_KEY
+ ERROR: $API_KEY está vacío.
ERROR: $BASE_URL is empty.
-
+ ERROR: $BASE_URL está vacío.
enter $BASE_URL
-
+ ingrese $BASE_URL
ERROR: $MODEL_NAME is empty.
-
+ ERROR: $MODEL_NAME está vacío.
enter $MODEL_NAME
-
-
-
-
-
- File size
- Tamaño del archivo
-
-
-
-
- RAM required
- RAM requerida
-
-
-
-
- Parameters
- Parámetros
-
-
-
-
- Quant
- Cuantificación
-
-
-
-
- Type
- Tipo
+ ingrese $MODEL_NAME
@@ -468,26 +443,6 @@
opt-in to share feedback/conversations
optar por compartir comentarios/conversaciones
-
- ERROR: Update system could not find the MaintenanceTool used<br>
- to check for updates!<br><br>
- Did you install this application using the online installer? If so,<br>
- the MaintenanceTool executable should be located one directory<br>
- above where this application resides on your filesystem.<br><br>
- If you can't start it manually, then I'm afraid you'll have
- to<br>
- reinstall.
- ERROR: El sistema de actualización no pudo encontrar la herramienta de
- mantenimiento utilizada<br>
- para buscar actualizaciones<br><br>
- ¿Instaló esta aplicación utilizando el instalador en línea? Si es así,<br>
- el ejecutable de la herramienta de mantenimiento debería estar ubicado un
- directorio<br>
- por encima de donde reside esta aplicación en su sistema de
- archivos.<br><br>
- Si no puede iniciarlo manualmente, me temo que tendrá que<br>
- reinstalar.
-
@@ -519,100 +474,244 @@
El esquema de colores de la aplicación.
-
-
+
+ Dark
+ Oscuro
-
-
+
+ Light
+ Claro
-
-
+
+ LegacyDark
+ Oscuro legado
-
-
+
+ Font Size
+ Tamaño de fuente
-
-
+
+ The size of text in the application.
+ El tamaño del texto en la aplicación.
+
+
+
+ Device
+ Dispositivo
+
+
+
+ The compute device used for text generation. "Auto" uses Vulkan or Metal.
+ El dispositivo de cómputo utilizado para la generación de texto. "Auto" utiliza Vulkan o Metal.
+
+
+
+
+ Small
+ Pequeño
+
+
+
+
+ Medium
+ Mediano
+
+
+
+
+ Large
+ Grande
+
+
+
+
+ Language and Locale
+ Idioma y configuración regional
+
+
+
+
+ The language and locale you wish to use.
+ El idioma y la configuración regional que deseas usar.
+
+
+
+
+ Default Model
+ Modelo predeterminado
+
+
+
+
+ The preferred model for new chats. Also used as the local server fallback.
+ El modelo preferido para nuevos chats. También se utiliza como respaldo del servidor local.
+
+
+
+
+ Suggestion Mode
+ Modo de sugerencia
+
+
+
+
+ Generate suggested follow-up questions at the end of responses.
+ Generar preguntas de seguimiento sugeridas al final de las respuestas.
+
+
+
+
+ When chatting with LocalDocs
+ Al chatear con LocalDocs
+
+
+
+
+ Whenever possible
+ Siempre que sea posible
+
+
+
+
+ Never
+ Nunca
+
+
+
+
+ Download Path
+ Ruta de descarga
+
+
+
+
+ Where to store local models and the LocalDocs database.
+ Dónde almacenar los modelos locales y la base de datos de LocalDocs.
+
+
+
+
+ Browse
+ Explorar
+
+
+
+
+ Choose where to save model files
+ Elegir dónde guardar los archivos del modelo
+
+
+
+
+ Enable Datalake
+ Habilitar Datalake
+
+
+
+
+ Send chats and feedback to the GPT4All Open-Source Datalake.
+ Enviar chats y comentarios al Datalake de código abierto de GPT4All.
+
+
+
+
+ Advanced
+ Avanzado
+
+
+
+
+ CPU Threads
+ Hilos de CPU
+
+
+
+
+ The number of CPU threads used for inference and embedding.
+ El número de hilos de CPU utilizados para inferencia e incrustación.
+
+
+
+
+ Save Chat Context
+ Guardar contexto del chat
+
+
+
+
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
+ Guardar el estado del modelo de chat en el disco para una carga más rápida. ADVERTENCIA: Usa ~2GB por chat.
+
+
+
+
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
+ Exponer un servidor compatible con OpenAI a localhost. ADVERTENCIA: Resulta en un mayor uso de recursos.
+
+
+
+
+ Enable Local Server
+ Habilitar servidor local
+
+
+
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased
+ resource usage.
+ Exponer un servidor compatible con OpenAI a localhost. ADVERTENCIA: Resulta
+ en un mayor uso de recursos.
+
+
+
+
+ API Server Port
+ Puerto del servidor API
+
+
+
+
+ The port to use for the local server. Requires restart.
+ El puerto a utilizar para el servidor local. Requiere reinicio.
+
+
+
+
+ Check For Updates
+ Buscar actualizaciones
+
+
+
+
+ Manually check for an update to GPT4All.
+ Buscar manualmente una actualización para GPT4All.
+
+
+
+
+ Updates
+ Actualizaciones
+ System Locale
-
-
-
-
-
- Device
- Dispositivo
-
-
- The compute device used for text generation. "Auto" uses Vulkan or
- Metal.
- El dispositivo de cómputo utilizado para la generación de texto.
- "Auto" utiliza Vulkan o Metal.
-
-
-
-
- ERROR: Update system could not find the MaintenanceTool used<br>
- to check for updates!<br><br>
- Did you install this application using the online installer? If so,<br>
- the MaintenanceTool executable should be located one directory<br>
- above where this application resides on your filesystem.<br><br>
- If you can't start it manually, then I'm afraid you'll have to<br>
- reinstall.
-
-
-
-
-
- Small
-
-
-
-
-
- Medium
-
-
-
-
-
- Large
-
-
-
-
-
- Language and Locale
-
-
-
-
-
- The language and locale you wish to use.
-
+ Regional del sistema
The compute device used for text generation.
-
+ El dispositivo de cómputo utilizado para la generación de texto.
@@ -620,170 +719,25 @@
Application default
-
+ Predeterminado de la aplicación
-
-
- Default Model
- Modelo predeterminado
-
-
-
-
- The preferred model for new chats. Also used as the local server fallback.
- El modelo preferido para nuevos chats. También se utiliza como respaldo del
- servidor local.
-
-
-
-
- Suggestion Mode
- Modo de sugerencia
-
-
-
-
- Generate suggested follow-up questions at the end of responses.
- Generar preguntas de seguimiento sugeridas al final de las respuestas.
-
-
-
-
- When chatting with LocalDocs
- Al chatear con LocalDocs
-
-
-
-
- Whenever possible
- Siempre que sea posible
-
-
-
-
- Never
- Nunca
-
-
-
-
- Download Path
- Ruta de descarga
-
-
-
-
- Where to store local models and the LocalDocs database.
- Dónde almacenar los modelos locales y la base de datos de LocalDocs.
-
-
-
-
- Browse
- Explorar
-
-
-
-
- Choose where to save model files
- Elegir dónde guardar los archivos del modelo
-
-
-
-
- Enable Datalake
- Habilitar Datalake
-
-
-
-
- Send chats and feedback to the GPT4All Open-Source Datalake.
- Enviar chats y comentarios al Datalake de código abierto de GPT4All.
-
-
-
-
- Advanced
- Avanzado
-
-
-
-
- CPU Threads
- Hilos de CPU
-
-
-
-
- The number of CPU threads used for inference and embedding.
- El número de hilos de CPU utilizados para inferencia e incrustación.
-
-
-
-
- Save Chat Context
- Guardar contexto del chat
-
-
-
-
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
-
-
-
-
-
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
-
-
-
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB
- per chat.
- Guardar el estado del modelo de chat en el disco para una carga más rápida.
- ADVERTENCIA: Usa ~2GB por chat.
-
-
-
-
- Enable Local Server
- Habilitar servidor local
-
-
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased
- resource usage.
- Exponer un servidor compatible con OpenAI a localhost. ADVERTENCIA: Resulta
- en un mayor uso de recursos.
-
-
-
-
- API Server Port
- Puerto del servidor API
-
-
-
-
- The port to use for the local server. Requires restart.
- El puerto a utilizar para el servidor local. Requiere reinicio.
-
-
-
-
- Check For Updates
- Buscar actualizaciones
-
-
-
-
- Manually check for an update to GPT4All.
- Buscar manualmente una actualización para GPT4All.
-
-
-
-
- Updates
- Actualizaciones
+
+
+ ERROR: Update system could not find the MaintenanceTool used<br>
+ to check for updates!<br><br>
+ Did you install this application using the online installer? If so,<br>
+ the MaintenanceTool executable should be located one directory<br>
+ above where this application resides on your filesystem.<br><br>
+ If you can't start it manually, then I'm afraid you'll have to<br>
+ reinstall.
+ ERROR: El sistema de actualización no pudo encontrar la Herramienta de Mantenimiento utilizada<br>
+ para buscar actualizaciones.<br><br>
+ ¿Instaló esta aplicación utilizando el instalador en línea? Si es así,<br>
+ el ejecutable de la Herramienta de Mantenimiento debería estar ubicado un directorio<br>
+ por encima de donde reside esta aplicación en su sistema de archivos.<br><br>
+ Si no puede iniciarlo manualmente, me temo que tendrá que<br>
+ reinstalar la aplicación.
@@ -800,19 +754,6 @@
Chat del servidor
-
- ChatAPIWorker
-
-
- ERROR: Network error occurred while connecting to the API server
-
-
-
-
- ChatAPIWorker::handleFinished got HTTP Error %1 %2
-
-
-
ChatDrawer
@@ -1021,9 +962,9 @@
-
+
-
+ LocalDocs
+ DocumentosLocales
@@ -1040,240 +981,198 @@
agregar colecciones de documentos al chat
-
-
+
+ Load the default model
+ Cargar el modelo predeterminado
-
-
+
+ Loads the default model which can be changed in settings
+ Carga el modelo predeterminado que se puede cambiar en la configuración
-
-
+
+ No Model Installed
+ No hay modelo instalado
-
- GPT4All requires that you install at least one
- model to get started
- GPT4All requiere que instales al menos un
- modelo para comenzar
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
-
+ <h3>Se encontró un error al cargar el modelo:</h3><br><i>"%1"</i><br><br>Los fallos en la carga de modelos pueden ocurrir por varias razones, pero las causas más comunes incluyen un formato de archivo incorrecto, una descarga incompleta o corrupta, un tipo de archivo equivocado, RAM del sistema insuficiente o un tipo de modelo incompatible. Aquí hay algunas sugerencias para resolver el problema:<br><ul><li>Asegúrate de que el archivo del modelo tenga un formato y tipo compatibles<li>Verifica que el archivo del modelo esté completo en la carpeta de descargas<li>Puedes encontrar la carpeta de descargas en el diálogo de configuración<li>Si has cargado el modelo manualmente, asegúrate de que el archivo no esté corrupto verificando el md5sum<li>Lee más sobre qué modelos son compatibles en nuestra <a href="https://docs.gpt4all.io/">documentación</a> para la interfaz gráfica<li>Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de discord</a> para obtener ayuda
-
-
- GPT4All requires that you install at least one
-model to get started
-
-
-
-
-
+
+ Install a Model
+ Instalar un modelo
-
-
+
+ Shows the add model view
+ Muestra la vista de agregar modelo
-
-
+
+ Conversation with the model
+ Conversación con el modelo
-
-
+
+ prompt / response pairs from the conversation
+ pares de pregunta / respuesta de la conversación
-
-
+
+ GPT4All
+ GPT4All
-
-
+
+ You
+ Tú
+
+ recalculating context ...
- recalculando contexto ...
+ recalculando contexto ...
-
-
+
+ response stopped ...
+ respuesta detenida ...
-
-
+
+ processing ...
+ procesando ...
-
-
+
+ generating response ...
+ generando respuesta ...
-
-
+
+ generating questions ...
+ generando preguntas ...
-
-
-
-
+
+
+
+ Copy
+ Copiar
-
-
+
+ Copy Message
+ Copiar mensaje
-
-
+
+ Disable markdown
+ Desactivar markdown
-
-
+
+ Enable markdown
+ Activar markdown
-
-
+
+ Thumbs up
+ Me gusta
-
-
+
+ Gives a thumbs up to the response
+ Da un me gusta a la respuesta
-
-
+
+ Thumbs down
+ No me gusta
-
-
+
+ Opens thumbs down dialog
+ Abre el diálogo de no me gusta
-
-
+
+ %1 Sources
+ %1 Fuentes
-
-
+
+ Suggested follow-ups
+ Seguimientos sugeridos
-
-
+
+ Erase and reset chat session
+ Borrar y reiniciar sesión de chat
-
-
+
+ Copy chat session to clipboard
+ Copiar sesión de chat al portapapeles
-
-
+
+ Redo last chat response
+ Rehacer última respuesta del chat
-
-
+
+ Stop generating
+ Detener generación
-
-
+
+ Stop the current response generation
+ Detener la generación de la respuesta actual
-
-
+
+ Reloads the model
+ Recarga el modelo
-
- <h3>Encountered an error loading
- model:</h3><br><i>"%1"</i><br><br>Model
- loading failures can happen for a variety of reasons, but the most common causes
- include a bad file format, an incomplete or corrupted download, the wrong file type,
- not enough system RAM or an incompatible model type. Here are some suggestions for
- resolving the problem:<br><ul><li>Ensure the model file has a
- compatible format and type<li>Check the model file is complete in the download
- folder<li>You can find the download folder in the settings dialog<li>If
- you've sideloaded the model ensure the file is not corrupt by checking
- md5sum<li>Read more about what models are supported in our <a
- href="https://docs.gpt4all.io/">documentation</a> for the
- gui<li>Check out our <a
- href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
- <h3>Se encontró un error al cargar el
- modelo:</h3><br><i>"%1"</i><br><br>Los
- fallos de carga de modelos pueden ocurrir por varias razones, pero las causas más
- comunes incluyen un formato de archivo incorrecto, una descarga incompleta o
- corrupta, un tipo de archivo equivocado, insuficiente RAM del sistema o un tipo de
- modelo incompatible. Aquí hay algunas sugerencias para resolver el
- problema:<br><ul><li>Asegúrese de que el archivo del modelo tenga
- un formato y tipo compatible<li>Verifique que el archivo del modelo esté
- completo en la carpeta de descargas<li>Puede encontrar la carpeta de descargas
- en el diálogo de configuración<li>Si ha cargado el modelo manualmente,
- asegúrese de que el archivo no esté corrupto verificando el md5sum<li>Lea más
- sobre qué modelos son compatibles en nuestra <a
- href="https://docs.gpt4all.io/">documentación</a> para la
- interfaz gráfica<li>Consulte nuestro <a
- href="https://discord.gg/4M2QFmTt2k">canal de Discord</a> para
- obtener ayuda
-
-
+
-
+ Reload · %1
+ Recargar · %1
@@ -1284,90 +1183,98 @@ model to get started
Cargando · %1
-
-
+
+ Load · %1 (default) →
+ Cargar · %1 (predeterminado) →
+
+
+
+ retrieving localdocs: %1 ...
+ recuperando documentos locales: %1 ...
+
+
+
+
+ searching localdocs: %1 ...
+ buscando en documentos locales: %1 ...
+
+
+
+
+ Send a message...
+ Enviar un mensaje...
+
+
+
+
+ Load a model to continue...
+ Carga un modelo para continuar...
+
+
+
+
+ Send messages/prompts to the model
+ Enviar mensajes/indicaciones al modelo
+
+
+
+
+ Cut
+ Cortar
+
+
+
+
+ Paste
+ Pegar
+
+
+
+
+ Select All
+ Seleccionar todo
+
+
+
+
+ Send message
+ Enviar mensaje
+
+
+
+
+ Sends the message/prompt contained in textfield to the model
+ Envía el mensaje/indicación contenido en el campo de texto al modelo
+
+
+
+
+ GPT4All requires that you install at least one
+model to get started
+ GPT4All requiere que instale al menos un
+modelo para comenzar
+
+ restoring from text ...
-
-
-
-
-
- retrieving localdocs: %1 ...
- recuperando documentos locales: %1 ...
-
-
-
-
- searching localdocs: %1 ...
- buscando en documentos locales: %1 ...
-
-
-
-
- Send a message...
- Enviar un mensaje...
-
-
-
-
- Load a model to continue...
- Carga un modelo para continuar...
-
-
-
-
- Send messages/prompts to the model
- Enviar mensajes/indicaciones al modelo
-
-
-
-
- Cut
- Cortar
-
-
-
-
- Paste
- Pegar
-
-
-
-
- Select All
- Seleccionar todo
-
-
-
-
- Send message
- Enviar mensaje
-
-
-
-
- Sends the message/prompt contained in textfield to the model
- Envía el mensaje/indicación contenido en el campo de texto al modelo
+ restaurando desde texto ...
CollectionsDrawer
-
-
+
+ Warning: searching collections while indexing can return incomplete results
- Advertencia: buscar en colecciones mientras se indexan puede devolver
- resultados incompletos
+ Advertencia: buscar en colecciones mientras se indexan puede devolver resultados incompletos
-
-
+
+ %n file(s)
+ %n archivo
@@ -1375,8 +1282,8 @@ model to get started
-
-
+
+ %n word(s)
+ %n palabra
@@ -1384,62 +1291,24 @@ model to get started
-
-
+
+ Updating
+ Actualizando
-
-
+
+ + Add Docs
+ + Agregar documentos
-
-
+
+ Select a collection to make it available to the chat model.
+ Seleccione una colección para hacerla disponible al modelo de chat.
-
- Download
-
-
- Model "%1" is installed successfully.
-
-
-
-
- ERROR: $MODEL_NAME is empty.
-
-
-
-
- ERROR: $API_KEY is empty.
-
-
-
-
- ERROR: $BASE_URL is invalid.
-
-
-
-
- ERROR: Model "%1 (%2)" is conflict.
-
-
-
-
- Model "%1 (%2)" is installed successfully.
-
-
-
-
- Model "%1" is removed.
-
-
-
HomeView
@@ -1580,7 +1449,7 @@ model to get started
Comma-separated list. LocalDocs will only attempt to process files with these
extensions.
- Lista separada por comas. DocumentosLocales solo intentará procesar
+ Lista separada por comas. DocumentosLocales solo intentará procesar
archivos con estas extensiones.
@@ -1598,7 +1467,7 @@ model to get started
Embed documents using the fast Nomic API instead of a private local model.
Requires restart.
- Incrustar documentos usando la rápida API de Nomic en lugar de un modelo
+ Incrustar documentos usando la rápida API de Nomic en lugar de un modelo
local privado. Requiere reinicio.
@@ -1608,12 +1477,8 @@ model to get started
Clave API de Nomic
- API key to use for Nomic Embed. Get one from the Atlas <a
- href="https://atlas.nomic.ai/cli-login">API keys page</a>.
- Requires restart.
- Clave API para usar con Nomic Embed. Obtén una en la <a
- href="https://atlas.nomic.ai/cli-login">página de claves API</a>
- de Atlas. Requiere reinicio.
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
+ Clave API para usar con Nomic Embed. Obtén una en la <a href="https://atlas.nomic.ai/cli-login">página de claves API</a> de Atlas. Requiere reinicio.
@@ -1624,32 +1489,26 @@ model to get started
The compute device used for embeddings. "Auto" uses the CPU. Requires
restart.
- El dispositivo de cómputo utilizado para las incrustaciones.
+ El dispositivo de cómputo utilizado para las incrustaciones.
"Auto" usa la CPU. Requiere reinicio.Comma-separated list. LocalDocs will only attempt to process files with these extensions.
-
+ Lista separada por comas. LocalDocs solo intentará procesar archivos con estas extensiones.
Embed documents using the fast Nomic API instead of a private local model. Requires restart.
-
-
-
-
-
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
-
+ Incrustar documentos usando la API rápida de Nomic en lugar de un modelo local privado. Requiere reinicio.
The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
-
+ El dispositivo de cómputo utilizado para las incrustaciones. "Auto" usa la CPU. Requiere reinicio.
@@ -1685,32 +1544,24 @@ model to get started
Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
-
+ Valores demasiado grandes pueden causar fallos en localdocs, respuestas extremadamente lentas o falta de respuesta. En términos generales, los {N caracteres x N fragmentos} se añaden a la ventana de contexto del modelo. Más información <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
-
+ Número de caracteres por fragmento de documento. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta.
Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
-
+ Máximo de N mejores coincidencias de fragmentos de documentos recuperados para añadir al contexto del prompt. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta.
-
- Values too large may cause localdocs failure, extremely slow responses or
- failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to
- the model's context window. More info <a
- href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
-
- Valores demasiado grandes pueden causar fallos en documentos locales,
- respuestas extremadamente lentas o falta de respuesta. En términos generales, los {N
- caracteres x N fragmentos} se agregan a la ventana de contexto del modelo. Más
- información <a
- href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
+
+ Valores demasiado grandes pueden causar fallos en documentos locales, respuestas extremadamente lentas o falta de respuesta. En términos generales, los {N caracteres x N fragmentos} se agregan a la ventana de contexto del modelo. Más información <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
@@ -1718,28 +1569,12 @@ model to get started
Document snippet size (characters)
Tamaño del fragmento de documento (caracteres)
-
- Number of characters per document snippet. Larger numbers increase likelihood of
- factual responses, but also result in slower generation.
- Número de caracteres por fragmento de documento. Números más grandes
- aumentan la probabilidad de respuestas fácticas, pero también resultan en una
- generación más lenta.
-
Max document snippets per prompt
Máximo de fragmentos de documento por indicación
-
- Max best N matches of retrieved document snippets to add to the context for
- prompt. Larger numbers increase likelihood of factual responses, but also result in
- slower generation.
- Máximo de los mejores N coincidencias de fragmentos de documento
- recuperados para agregar al contexto de la indicación. Números más grandes aumentan
- la probabilidad de respuestas fácticas, pero también resultan en una generación más
- lenta.
- LocalDocsView
@@ -1762,127 +1597,122 @@ model to get started
+ Agregar colección
+
+ ERROR: The LocalDocs database is not valid.
- ERROR: La base de datos de DocumentosLocales no es válida.
+ ERROR: La base de datos de DocumentosLocales no es válida.
-
-
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.
-
-
-
-
-
+
+ No Collections Installed
+ No hay colecciones instaladas
-
-
+
+ Install a collection of local documents to get started using this feature
- Instala una colección de documentos locales para comenzar a usar esta
- función
+ Instala una colección de documentos locales para comenzar a usar esta función
-
-
+
+ + Add Doc Collection
+ + Agregar colección de documentos
-
-
+
+ Shows the add model view
+ Muestra la vista de agregar modelo
-
-
+
+ Indexing progressBar
+ Barra de progreso de indexación
-
-
+
+ Shows the progress made in the indexing
+ Muestra el progreso realizado en la indexación
-
-
+
+ ERROR
+ ERROR
-
-
+
+ INDEXING
+ INDEXANDO
-
-
+
+ EMBEDDING
+ INCRUSTANDO
-
-
+
+ REQUIRES UPDATE
+ REQUIERE ACTUALIZACIÓN
-
-
+
+ READY
+ LISTO
-
-
+
+ INSTALLING
+ INSTALANDO
-
-
+
+ Indexing in progress
+ Indexación en progreso
-
-
+
+ Embedding in progress
+ Incrustación en progreso
-
-
+
+ This collection requires an update after version change
+ Esta colección requiere una actualización después del cambio de versión
-
-
+
+ Automatically reindexes upon changes to the folder
+ Reindexación automática al cambiar la carpeta
-
-
+
+ Installation in progress
+ Instalación en progreso
-
-
+
+ %
+ %
-
-
+
+ %n file(s)
+ %n archivo
@@ -1890,167 +1720,124 @@ model to get started
-
-
+
+ %n word(s)
+ %n palabra
- %n palabras
+ %n palabra(s)
-
-
+
+ Remove
+ Eliminar
-
-
+
+ Rebuild
+ Reconstruir
-
-
+
+ Reindex this folder from scratch. This is slow and usually not needed.
- Reindexar esta carpeta desde cero. Esto es lento y generalmente no es
- necesario.
+ Reindexar esta carpeta desde cero. Esto es lento y generalmente no es necesario.
-
-
+
+ Update
+ Actualizar
-
-
+
+ Update the collection to the new version. This is a slow operation.
+ Actualizar la colección a la nueva versión. Esta es una operación lenta.
+
+
+
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.
+ <h3>ERROR: No se puede acceder a la base de datos LocalDocs o no es válida.</h3><br><i>Nota: Necesitará reiniciar después de intentar cualquiera de las siguientes soluciones sugeridas.</i><br><ul><li>Asegúrese de que la carpeta establecida como <b>Ruta de Descarga</b> exista en el sistema de archivos.</li><li>Verifique la propiedad y los permisos de lectura y escritura de la <b>Ruta de Descarga</b>.</li><li>Si hay un archivo <b>localdocs_v2.db</b>, verifique también su propiedad y permisos de lectura/escritura.</li></ul><br>Si el problema persiste y hay archivos 'localdocs_v*.db' presentes, como último recurso puede<br>intentar hacer una copia de seguridad y eliminarlos. Sin embargo, tendrá que recrear sus colecciones.
+
ModelList
-
- <ul><li>Requires personal OpenAI API
- key.</li><li>WARNING: Will send your chats to
- OpenAI!</li><li>Your API key will be stored on
- disk</li><li>Will only be used to communicate with
- OpenAI</li><li>You can apply for an API key <a
- href="https://platform.openai.com/account/api-keys">here.</a></li>
-
- <ul><li>Requiere clave API personal de
- OpenAI.</li><li>ADVERTENCIA: ¡Enviará sus chats a
- OpenAI!</li><li>Su clave API se almacenará en el
- disco</li><li>Solo se usará para comunicarse con
- OpenAI</li><li>Puede solicitar una clave API <a
- href="https://platform.openai.com/account/api-keys">aquí.</a></li>
+
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li>
+ <ul><li>Requiere clave API personal de OpenAI.</li><li>ADVERTENCIA: ¡Enviará sus chats a OpenAI!</li><li>Su clave API se almacenará en el disco</li><li>Solo se usará para comunicarse con OpenAI</li><li>Puede solicitar una clave API <a href="https://platform.openai.com/account/api-keys">aquí.</a></li>
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br>
%1
- <strong>Modelo ChatGPT GPT-3.5 Turbo de
+ <strong>Modelo ChatGPT GPT-3.5 Turbo de
OpenAI</strong><br> %1
-
- %1 (%2)
-
-
-
-
- <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul>
-
-
-
-
- <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li>
-
-
-
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1
-
+ <strong>Modelo ChatGPT GPT-3.5 Turbo de OpenAI</strong><br> %1
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.
-
+ <br><br><i>* Aunque pagues a OpenAI por ChatGPT-4, esto no garantiza el acceso a la clave API. Contacta a OpenAI para más información.
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2
+ <strong>Modelo ChatGPT GPT-4 de OpenAI</strong><br> %1 %2
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li>
-
+ <ul><li>Requiere una clave API personal de Mistral.</li><li>ADVERTENCIA: ¡Enviará tus chats a Mistral!</li><li>Tu clave API se almacenará en el disco</li><li>Solo se usará para comunicarse con Mistral</li><li>Puedes solicitar una clave API <a href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
-
+ <strong>Mistral Tiny model</strong><br> %1
+ <strong>Modelo Mistral Tiny</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1
+ <strong>Modelo Mistral Small</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1
+ <strong>Modelo Mistral Medium</strong><br> %1
-
- <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li>
-
-
-
-
- <strong>Connect to OpenAI-compatible API server</strong><br> %1
-
-
-
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
-
+ <strong>Creado por %1.</strong><br><ul><li>Publicado el %2.<li>Este modelo tiene %3 me gusta.<li>Este modelo tiene %4 descargas.<li>Más información puede encontrarse <a href="https://huggingface.co/%5">aquí.</a></ul>
- <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does
- not guarantee API key access. Contact OpenAI for more info.
- <br><br><i>* Aunque pague a OpenAI por ChatGPT-4, esto no
- garantiza el acceso a la clave API. Contacte a OpenAI para más información.
+
+ %1 (%2)
+ %1 (%2)
-
- <ul><li>Requires personal Mistral API
- key.</li><li>WARNING: Will send your chats to
- Mistral!</li><li>Your API key will be stored on
- disk</li><li>Will only be used to communicate with
- Mistral</li><li>You can apply for an API key <a
- href="https://console.mistral.ai/user/api-keys">here</a>.</li>
-
- <ul><li>Requiere clave API personal de
- Mistral.</li><li>ADVERTENCIA: ¡Enviará sus chats a
- Mistral!</li><li>Su clave API se almacenará en el
- disco</li><li>Solo se usará para comunicarse con
- Mistral</li><li>Puede solicitar una clave API <a
- href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
+
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul>
+ <strong>Modelo de API compatible con OpenAI</strong><br><ul><li>Clave API: %1</li><li>URL base: %2</li><li>Nombre del modelo: %3</li></ul>
- <strong>Created by
- %1.</strong><br><ul><li>Published on %2.<li>This model
- has %3 likes.<li>This model has %4 downloads.<li>More info can be found
- <a href="https://huggingface.co/%5">here.</a></ul>
- <strong>Creado por
- %1.</strong><br><ul><li>Publicado el %2.<li>Este
- modelo tiene %3 me gusta.<li>Este modelo tiene %4 descargas.<li>Puede
- encontrar más información <a
- href="https://huggingface.co/%5">aquí.</a></ul>
+
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li>
+ <ul><li>Requiere una clave API personal y la URL base de la API.</li><li>ADVERTENCIA: ¡Enviará sus chats al servidor de API compatible con OpenAI que especificó!</li><li>Su clave API se almacenará en el disco</li><li>Solo se utilizará para comunicarse con el servidor de API compatible con OpenAI</li>
+
+
+
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1
+ <strong>Conectar al servidor de API compatible con OpenAI</strong><br> %1
@@ -2098,10 +1885,9 @@ model to get started
Indicación del sistema
- Prefixed at the beginning of every conversation. Must contain the appropriate
- framing tokens.
- Prefijado al inicio de cada conversación. Debe contener los tokens de
- encuadre apropiados.
+
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.
+ Prefijado al inicio de cada conversación. Debe contener los tokens de encuadre apropiados.
@@ -2116,10 +1902,9 @@ model to get started
La plantilla que envuelve cada indicación.
- Must contain the string "%1" to be replaced with the user's
- input.
- Debe contener la cadena "%1" para ser reemplazada con la entrada
- del usuario.
+
+ Must contain the string "%1" to be replaced with the user's input.
+ Debe contener la cadena "%1" para ser reemplazada con la entrada del usuario.
@@ -2134,87 +1919,177 @@ model to get started
Indicación utilizada para generar automáticamente nombres de chat.
-
-
+
+ Suggested FollowUp Prompt
+ Indicación de seguimiento sugerida
-
-
+
+ Prompt used to generate suggested follow-up questions.
+ Indicación utilizada para generar preguntas de seguimiento sugeridas.
-
-
+
+ Context Length
+ Longitud del contexto
-
-
+
+ Number of input and output tokens the model sees.
+ Número de tokens de entrada y salida que el modelo ve.
- Maximum combined prompt/response tokens before information is lost.
- Using more context than the model was trained on will yield poor results.
- NOTE: Does not take effect until you reload the model.
- Máximo de tokens combinados de indicación/respuesta antes de que se pierda
- información.
- Usar más contexto del que se utilizó para entrenar el modelo producirá resultados
- pobres.
- NOTA: No tiene efecto hasta que recargues el modelo.
+
+ Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results.
+NOTE: Does not take effect until you reload the model.
+ Máximo de tokens combinados de indicación/respuesta antes de que se pierda información. Usar más contexto del que se utilizó para entrenar el modelo producirá resultados pobres.
+NOTA: No tiene efecto hasta que recargues el modelo.
-
-
+
+ Temperature
+ Temperatura
-
-
+
+ Randomness of model output. Higher -> more variation.
+ Aleatoriedad de la salida del modelo. Mayor -> más variación.
+ Temperature increases the chances of choosing less likely tokens.
- NOTE: Higher temperature gives more creative but less predictable outputs.
- La temperatura aumenta las probabilidades de elegir tokens menos probables.
- NOTA: Una temperatura más alta da resultados más creativos pero menos predecibles.
+NOTE: Higher temperature gives more creative but less predictable outputs.
+ La temperatura aumenta las probabilidades de elegir tokens menos probables.
+NOTA: Una temperatura más alta da resultados más creativos pero menos predecibles.
-
-
+
+ Top-P
+ Top-P
-
-
+
+ Nucleus Sampling factor. Lower -> more predicatable.
+ Factor de muestreo de núcleo. Menor -> más predecible.
+ Only the most likely tokens up to a total probability of top_p can be chosen.
- NOTE: Prevents choosing highly unlikely tokens.
- Solo se pueden elegir los tokens más probables hasta una probabilidad total
- de top_p.
- NOTA: Evita elegir tokens altamente improbables.
+NOTE: Prevents choosing highly unlikely tokens.
+ Solo se pueden elegir los tokens más probables hasta una probabilidad total de top_p.
+NOTA: Evita elegir tokens altamente improbables.
-
-
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.
-
+
+
+ Min-P
+ Min-P
-
-
- Must contain the string "%1" to be replaced with the user's input.
-
+
+
+ Minimum token probability. Higher -> more predictable.
+ Probabilidad mínima del token. Mayor -> más predecible.
+
+
+
+
+ Sets the minimum relative probability for a token to be considered.
+ Establece la probabilidad relativa mínima para que un token sea considerado.
+
+
+
+
+ Top-K
+ Top-K
+
+
+
+
+ Size of selection pool for tokens.
+ Tamaño del grupo de selección para tokens.
+
+
+
+
+ Only the top K most likely tokens will be chosen from.
+ Solo se elegirán los K tokens más probables.
+
+
+
+
+ Max Length
+ Longitud máxima
+
+
+
+
+ Maximum response length, in tokens.
+ Longitud máxima de respuesta, en tokens.
+
+
+
+
+ Prompt Batch Size
+ Tamaño del lote de indicaciones
+
+
+
+
+ The batch size used for prompt processing.
+ El tamaño del lote utilizado para el procesamiento de indicaciones.
+
+
+
+
+ Amount of prompt tokens to process at once.
+NOTE: Higher values can speed up reading prompts but will use more RAM.
+ Cantidad de tokens de prompt a procesar de una vez.
+NOTA: Valores más altos pueden acelerar la lectura de prompts, pero usarán más RAM.
+
+
+
+
+ Repeat Penalty
+ Penalización por repetición
+
+
+
+
+ Repetition penalty factor. Set to 1 to disable.
+ Factor de penalización por repetición. Establecer a 1 para desactivar.
+
+
+
+
+ Repeat Penalty Tokens
+ Tokens de penalización por repetición
+
+
+
+
+ Number of previous tokens used for penalty.
+ Número de tokens anteriores utilizados para la penalización.
+
+
+
+
+ GPU Layers
+ Capas de GPU
+
+
+
+
+ Number of model layers to load into VRAM.
+ Número de capas del modelo a cargar en la VRAM.
@@ -2222,89 +2097,9 @@ model to get started
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
-
-
-
-
-
- Temperature increases the chances of choosing less likely tokens.
-NOTE: Higher temperature gives more creative but less predictable outputs.
-
-
-
-
-
- Only the most likely tokens up to a total probability of top_p can be chosen.
-NOTE: Prevents choosing highly unlikely tokens.
-
-
-
-
-
- Min-P
- Min-P
-
-
-
-
- Minimum token probability. Higher -> more predictable.
- Probabilidad mínima del token. Mayor -> más predecible.
-
-
-
-
- Sets the minimum relative probability for a token to be considered.
- Establece la probabilidad relativa mínima para que un token sea
- considerado.
-
-
-
-
- Top-K
- Top-K
-
-
-
-
- Size of selection pool for tokens.
- Tamaño del grupo de selección para tokens.
-
-
-
-
- Only the top K most likely tokens will be chosen from.
- Solo se elegirán los K tokens más probables.
-
-
-
-
- Max Length
- Longitud máxima
-
-
-
-
- Maximum response length, in tokens.
- Longitud máxima de respuesta, en tokens.
-
-
-
-
- Prompt Batch Size
- Tamaño del lote de indicaciones
-
-
-
-
- The batch size used for prompt processing.
- El tamaño del lote utilizado para el procesamiento de indicaciones.
-
-
-
-
- Amount of prompt tokens to process at once.
-NOTE: Higher values can speed up reading prompts but will use more RAM.
-
+ Máximo de tokens combinados de pregunta/respuesta antes de que se pierda información.
+Usar más contexto del que el modelo fue entrenado producirá resultados deficientes.
+NOTA: No surtirá efecto hasta que recargue el modelo.
@@ -2312,340 +2107,268 @@ NOTE: Higher values can speed up reading prompts but will use more RAM.
How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
-
-
-
- Amount of prompt tokens to process at once.
- NOTE: Higher values can speed up reading prompts but will use more RAM.
- Cantidad de tokens de indicación para procesar a la vez.
- NOTA: Valores más altos pueden acelerar la lectura de indicaciones pero usarán más
- RAM.
-
-
-
-
- Repeat Penalty
- Penalización por repetición
-
-
-
-
- Repetition penalty factor. Set to 1 to disable.
- Factor de penalización por repetición. Establecer a 1 para desactivar.
-
-
-
-
- Repeat Penalty Tokens
- Tokens de penalización por repetición
-
-
-
-
- Number of previous tokens used for penalty.
- Número de tokens anteriores utilizados para la penalización.
-
-
-
-
- GPU Layers
- Capas de GPU
-
-
-
-
- Number of model layers to load into VRAM.
- Número de capas del modelo a cargar en la VRAM.
-
-
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of
- VRAM while loading this model.
- Lower values increase CPU load and RAM usage, and make inference slower.
- NOTE: Does not take effect until you reload the model.
- Cuántas capas del modelo cargar en la VRAM. Disminuye esto si GPT4All se
- queda sin VRAM al cargar este modelo.
- Valores más bajos aumentan la carga de CPU y el uso de RAM, y hacen la inferencia
- más lenta.
- NOTA: No tiene efecto hasta que recargues el modelo.
+ Cuántas capas del modelo cargar en la VRAM. Disminuya esto si GPT4All se queda sin VRAM al cargar este modelo.
+Valores más bajos aumentan la carga de la CPU y el uso de RAM, y hacen que la inferencia sea más lenta.
+NOTA: No surte efecto hasta que recargue el modelo.
ModelsView
-
-
+
+ No Models Installed
+ No hay modelos instalados
-
-
+
+ Install a model to get started using GPT4All
+ Instala un modelo para empezar a usar GPT4All
-
-
-
-
+
+
+
+ + Add Model
+ + Agregar modelo
-
-
+
+ Shows the add model view
+ Muestra la vista de agregar modelo
-
-
+
+ Installed Models
+ Modelos instalados
-
-
+
+ Locally installed chat models
+ Modelos de chat instalados localmente
-
-
+
+ Model file
+ Archivo del modelo
-
-
+
+ Model file to be downloaded
+ Archivo del modelo a descargar
-
-
+
+ Description
+ Descripción
-
-
+
+ File description
+ Descripción del archivo
-
-
+
+ Cancel
+ Cancelar
-
-
+
+ Resume
+ Reanudar
-
-
+
+ Stop/restart/start the download
+ Detener/reiniciar/iniciar la descarga
-
-
+
+ Remove
+ Eliminar
-
-
+
+ Remove model from filesystem
+ Eliminar modelo del sistema de archivos
-
-
-
-
+
+
+
+ Install
+ Instalar
-
-
+
+ Install online model
+ Instalar modelo en línea
- <strong><font size="1"><a
- href="#error">Error</a></strong></font>
- <strong><font size="1"><a
- href="#error">Error</a></strong></font>
+
+ <strong><font size="1"><a href="#error">Error</a></strong></font>
+ <strong><font size="1"><a href="#error">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your
- hardware. Model requires more memory (%1 GB) than your system has available
- (%2).</strong></font>
- <strong><font size="2">ADVERTENCIA: No recomendado
- para su hardware. El modelo requiere más memoria (%1 GB) de la que su sistema tiene
- disponible (%2).</strong></font>
+
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
+ <strong><font size="2">ADVERTENCIA: No recomendado para su hardware. El modelo requiere más memoria (%1 GB) de la que su sistema tiene disponible (%2).</strong></font>
+ %1 GB
+ %1 GB
+ ?
+ ?
+ Describes an error that occurred when downloading
+ Describe un error que ocurrió durante la descarga
- <strong><font size="1"><a href="#error">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
+ Error for incompatible hardware
+ Error por hardware incompatible
+ Download progressBar
+ Barra de progreso de descarga
+ Shows the progress made in the download
+ Muestra el progreso realizado en la descarga
+ Download speed
+ Velocidad de descarga
+ Download speed in bytes/kilobytes/megabytes per second
+ Velocidad de descarga en bytes/kilobytes/megabytes por segundo
+ Calculating...
+ Calculando...
- Whether the file hash is being calculated
- Si se está calculando el hash del archivo
+ Busy indicator
+ Indicador de ocupado
+ Displayed when the file hash is being calculated
+ Se muestra cuando se está calculando el hash del archivo
+
+
+
+ enter $API_KEY
+ ingrese $API_KEY
+
+
+
+
+ File size
+ Tamaño del archivo
+
+
+
+
+ RAM required
+ RAM requerida
+
+
+
+
+ Parameters
+ Parámetros
+
+
+
+
+ Quant
+ Cuantificación
+
+
+
+
+ Type
+ Tipo
 ERROR: $API_KEY is empty.
+ ERROR: $API_KEY está vacía.
+ ERROR: $BASE_URL is empty.
+ ERROR: $BASE_URL está vacía.
+ enter $BASE_URL
+ ingrese $BASE_URL
+ ERROR: $MODEL_NAME is empty.
+ ERROR: $MODEL_NAME está vacío.
+ enter $MODEL_NAME
+ ingrese $MODEL_NAME
- enter $API_KEY
- ingrese $API_KEY
- File size
- Tamaño del archivo
- RAM required
- RAM requerida
- Parameters
- Parámetros
- Quant
- Cuantificación
- Type
- Tipo
@@ -2696,38 +2419,17 @@ NOTE: Does not take effect until you reload the model.
Contribuir datos al Datalake de código abierto de GPT4All.
- By enabling this feature, you will be able to participate in the democratic
- process of training a large language model by contributing data for future model
- improvements.
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
+
+When a GPT4All model responds to you and you have opted-in, your conversation will
+be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its
+response. If you dislike a response, you can suggest an alternative response. This
+data will be collected and aggregated in the GPT4All Datalake.
- When a GPT4All model responds to you and you have opted-in, your conversation will
- be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its
- response. If you dislike a response, you can suggest an alternative response. This
- data will be collected and aggregated in the GPT4All Datalake.
-
- NOTE: By turning on this feature, you will be sending your data to the GPT4All Open
- Source Datalake. You should have no expectation of chat privacy when this feature is
- enabled. You should; however, have an expectation of an optional attribution if you
- wish. Your chat data will be openly available for anyone to download and will be
- used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
- attribution information attached to your data and you will be credited as a
- contributor to any GPT4All model release that uses your data!
- Al habilitar esta función, podrá participar en el proceso democrático de
- entrenamiento de un modelo de lenguaje grande contribuyendo con datos para futuras
- mejoras del modelo.
-
- Cuando un modelo GPT4All le responda y usted haya aceptado, su conversación se
- enviará al Datalake de código abierto de GPT4All. Además, puede dar me gusta/no me
- gusta a su respuesta. Si no le gusta una respuesta, puede sugerir una alternativa.
- Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
-
- NOTA: Al activar esta función, enviará sus datos al Datalake de código abierto de
- GPT4All. No debe esperar privacidad en el chat cuando esta función esté habilitada.
- Sin embargo, puede esperar una atribución opcional si lo desea. Sus datos de chat
- estarán disponibles abiertamente para que cualquiera los descargue y serán
- utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará
- toda la información de atribución adjunta a sus datos y se le acreditará como
- contribuyente en cualquier lanzamiento de modelo GPT4All que utilice sus datos.
+NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
+ Al habilitar esta función, podrá participar en el proceso democrático de entrenamiento de un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo. Cuando un modelo GPT4All le responda y usted haya aceptado, su conversación se enviará al Datalake de código abierto de GPT4All. Además, puede dar me gusta/no me gusta a su respuesta. Si no le gusta una respuesta, puede sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
+
+NOTA: Al activar esta función, enviará sus datos al Datalake de código abierto de GPT4All. No debe esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puede esperar una atribución opcional si lo desea. Sus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a sus datos y se le acreditará como contribuyente en cualquier lanzamiento de modelo GPT4All que utilice sus datos.
@@ -2737,7 +2439,11 @@ NOTE: Does not take effect until you reload the model.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
-
+ Al habilitar esta función, podrás participar en el proceso democrático de entrenar un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo.
+
+Cuando un modelo GPT4All te responda y hayas aceptado participar, tu conversación se enviará al Datalake de Código Abierto de GPT4All. Además, podrás indicar si te gusta o no su respuesta. Si no te gusta una respuesta, puedes sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
+
+NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Código Abierto de GPT4All. No debes esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puedes esperar una atribución opcional si lo deseas. Tus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a tus datos y se te acreditará como contribuyente en cualquier lanzamiento de modelo GPT4All que utilice tus datos.
@@ -2839,8 +2545,9 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
QObject
+ Default
- Predeterminado
+ Predeterminado
@@ -2886,14 +2593,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
 Welcome!
 ¡Bienvenido!
-
- ### Release notes
- %1### Contributors
- %2
- ### Notas de la versión
- %1### Contribuidores
- %2
-
@@ -2906,79 +2605,30 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
 Release notes for this version
 Notas de la versión para esta versión
-
- ### Opt-ins for anonymous usage analytics and datalake
- By enabling these features, you will be able to participate in the democratic
- process of training a
- large language model by contributing data for future model improvements.
-
- When a GPT4All model responds to you and you have opted-in, your conversation will
- be sent to the GPT4All
- Open Source Datalake. Additionally, you can like/dislike its response. If you
- dislike a response, you
- can suggest an alternative response. This data will be collected and aggregated in
- the GPT4All Datalake.
-
- NOTE: By turning on this feature, you will be sending your data to the GPT4All Open
- Source Datalake.
- You should have no expectation of chat privacy when this feature is enabled. You
- should; however, have
- an expectation of an optional attribution if you wish. Your chat data will be openly
- available for anyone
- to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI
- will retain all
- attribution information attached to your data and you will be credited as a
- contributor to any GPT4All
- model release that uses your data!
- ### Aceptación para análisis de uso anónimo y datalake
- Al habilitar estas funciones, podrá participar en el proceso democrático de entrenar
- un
- modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo.
- Cuando un modelo GPT4All le responda y usted haya aceptado, su conversación se
- enviará al
- Datalake de código abierto de GPT4All. Además, puede dar me gusta/no me gusta a su
- respuesta. Si no le gusta una respuesta,
- puede sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake
- de GPT4All.
-
- NOTA: Al activar esta función, enviará sus datos al Datalake de código abierto de
- GPT4All.
- No debe esperar privacidad en el chat cuando esta función esté habilitada. Sin
- embargo, puede
- esperar una atribución opcional si lo desea. Sus datos de chat estarán disponibles
- abiertamente para que cualquiera
- los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de
- GPT4All. Nomic AI conservará toda la
- información de atribución adjunta a sus datos y se le acreditará como contribuyente
- en cualquier lanzamiento de modelo GPT4All
- que utilice sus datos.
- ### Release notes
%1### Contributors
-%2
-
+%2
+
+ ### Notas de la versión
+%1### Colaboradores
+%2
+
- ### Opt-ins for anonymous usage analytics and datalake
-By enabling these features, you will be able to participate in the democratic process of training a
-large language model by contributing data for future model improvements.
+ ### Opt-ins for anonymous usage analytics and datalake By enabling these features, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
+
+When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
-When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All
-Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you
-can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
-
-NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake.
-You should have no expectation of chat privacy when this feature is enabled. You should; however, have
-an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone
-to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
-attribution information attached to your data and you will be credited as a contributor to any GPT4All
-model release that uses your data!
-
+NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
+ ### Autorización para análisis de uso anónimo y datalake Al habilitar estas funciones, podrás participar en el proceso democrático de entrenar un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo. Cuando un modelo GPT4All te responda y hayas aceptado participar, tu conversación se enviará al Datalake de Código Abierto de GPT4All. Además, podrás indicar si te gusta o no su respuesta. Si no te gusta una respuesta, puedes sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
+
+NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Código Abierto de GPT4All. No debes esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puedes esperar una atribución opcional si lo deseas. Tus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a tus datos y se te acreditará como colaborador en cualquier lanzamiento de modelo GPT4All que utilice tus datos.
+
@@ -3066,20 +2716,63 @@ model release that uses your data!
 Allow opt-out anonymous sharing of chats to the GPT4All Datalake
 Permitir rechazar el compartir anónimo de chats con el Datalake de GPT4All
+
+
+
+ ### Release notes
+%1### Contributors
+%2
+ ### Notas de la versión
+%1### Colaboradores
+%2
+
+
+
+
+
+ ### Opt-ins for anonymous usage analytics and datalake
+By enabling these features, you will be able to participate in the democratic process of training a
+large language model by contributing data for future model improvements.
+
+When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All
+Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you
+can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
+
+NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake.
+You should have no expectation of chat privacy when this feature is enabled. You should; however, have
+an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone
+to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
+attribution information attached to your data and you will be credited as a contributor to any GPT4All
+model release that uses your data!
+ ### Consentimiento para análisis de uso anónimo y lago de datos
+Al habilitar estas funciones, podrá participar en el proceso democrático de entrenar un
+modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo.
+
+Cuando un modelo GPT4All le responda y usted haya dado su consentimiento, su conversación se enviará al
+Lago de Datos de Código Abierto de GPT4All. Además, puede indicar si le gusta o no su respuesta. Si no le gusta una respuesta,
+puede sugerir una respuesta alternativa. Estos datos se recopilarán y agregarán en el Lago de Datos de GPT4All.
+
+NOTA: Al activar esta función, estará enviando sus datos al Lago de Datos de Código Abierto de GPT4All.
+No debe esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puede
+esperar una atribución opcional si lo desea. Sus datos de chat estarán disponibles abiertamente para que cualquiera
+los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda
+la información de atribución adjunta a sus datos y se le acreditará como contribuyente en cualquier
+lanzamiento de modelo GPT4All que utilice sus datos.
 SwitchModelDialog
 <b>Warning:</b> changing the model will erase the current
 conversation. Do you wish to continue?
- <b>Advertencia:</b> cambiar el modelo borrará la conversación
- actual. ¿Desea continuar?
 <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
+ <b>Advertencia:</b> cambiar el modelo borrará la conversación actual. ¿Deseas continuar?
@@ -3144,61 +2837,23 @@ model release that uses your data!
main
-
-
- <h3>Encountered an error starting
- up:</h3><br><i>"Incompatible hardware
- detected."</i><br><br>Unfortunately, your CPU does not meet
- the minimal requirements to run this program. In particular, it does not support AVX
- intrinsics which this program requires to successfully run a modern large language
- model. The only solution at this time is to upgrade your hardware to a more modern
- CPU.<br><br>See here for more information: <a
- href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
- <h3>Se encontró un error al
- iniciar:</h3><br><i>"Hardware incompatible
- detectado."</i><br><br>Desafortunadamente, su CPU no cumple
- con los requisitos mínimos para ejecutar este programa. En particular, no soporta
- las instrucciones AVX que este programa requiere para ejecutar con éxito un modelo
- de lenguaje grande moderno. La única solución en este momento es actualizar su
- hardware a una CPU más moderna.<br><br>Vea aquí para más información:
- <a
- href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
 GPT4All v%1
 GPT4All v%1
-
- <h3>Encountered an error starting
- up:</h3><br><i>"Inability to access settings
- file."</i><br><br>Unfortunately, something is preventing the
- program from accessing the settings file. This could be caused by incorrect
- permissions in the local app config directory where the settings file is located.
- Check out our <a href="https://discord.gg/4M2QFmTt2k">discord
- channel</a> for help.
- <h3>Se encontró un error al
- iniciar:</h3><br><i>"No se puede acceder al archivo de
- configuración."</i><br><br>Desafortunadamente, algo está
- impidiendo que el programa acceda al archivo de configuración. Esto podría ser
- causado por permisos incorrectos en el directorio de configuración local de la
- aplicación donde se encuentra el archivo de configuración. Consulte nuestro <a
- href="https://discord.gg/4M2QFmTt2k">canal de Discord</a> para
- obtener ayuda.
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>Se encontró un error al iniciar:</h3><br><i>"Se detectó hardware incompatible."</i><br><br>Desafortunadamente, tu CPU no cumple con los requisitos mínimos para ejecutar este programa. En particular, no soporta instrucciones AVX, las cuales este programa requiere para ejecutar con éxito un modelo de lenguaje grande moderno. La única solución en este momento es actualizar tu hardware a una CPU más moderna.<br><br>Consulta aquí para más información: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
 <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
-
+ <h3>Se encontró un error al iniciar:</h3><br><i>"No se puede acceder al archivo de configuración."</i><br><br>Desafortunadamente, algo está impidiendo que el programa acceda al archivo de configuración. Esto podría ser causado por permisos incorrectos en el directorio de configuración local de la aplicación donde se encuentra el archivo de configuración. Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de Discord</a> para obtener ayuda.
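
The two startup-error strings above document a hard requirement: the program refuses to run on CPUs without AVX support. As a rough, hypothetical sketch of the underlying technique only (this is not GPT4All's actual startup code, and the function name is invented), such a guard can be written with the GCC/Clang CPU-feature builtin:

    // Hypothetical AVX guard (x86, GCC/Clang only). It illustrates the
    // check the error string describes; GPT4All's real detection may differ.
    #include <cstdio>
    #include <cstdlib>

    static void require_avx()
    {
        if (!__builtin_cpu_supports("avx")) {
            std::fputs("Incompatible hardware detected: this CPU lacks AVX.\n", stderr);
            std::exit(EXIT_FAILURE);
        }
    }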
@@ -3280,7 +2935,7 @@ model release that uses your data!
LocalDocs
- DocumentosLocales
+ DocsLocales
@@ -3333,4 +2988,55 @@ model release that uses your data!
Vista de modelos instalados
+
+ ChatAPIWorker
+
+
+ ERROR: Network error occurred while connecting to the API server
+ ERROR: Ocurrió un error de red al conectar con el servidor API
+
+
+
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2
+ ChatAPIWorker::handleFinished obtuvo Error HTTP %1 %2
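
Note for translators: the %1 and %2 in both ChatAPIWorker strings are numbered placeholders that Qt fills in at runtime, so a translation may reorder them but must keep both, or the HTTP status code and error text are lost. A minimal sketch of the call pattern, assuming a QObject context (the variable names and values are invented for illustration):

    // tr() returns the translated text for the current locale; each arg()
    // call fills the lowest-numbered remaining %N marker, so %1 receives
    // the status code and %2 the accompanying error string.
    int status = 500;
    QString reason = QStringLiteral("Internal Server Error");
    QString msg = tr("ChatAPIWorker::handleFinished got HTTP Error %1 %2")
                      .arg(status)
                      .arg(reason);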
+
+
+
+ Download
+
+
+ Model "%1" is installed successfully.
+ El modelo "%1" se ha instalado correctamente.
+
+
+
+ ERROR: $MODEL_NAME is empty.
+ ERROR: $MODEL_NAME está vacío.
+
+
+
+ ERROR: $API_KEY is empty.
+ ERROR: $API_KEY está vacía.
+
+
+
+ ERROR: $BASE_URL is invalid.
+ ERROR: $BASE_URL no es válida.
+
+
+
+ ERROR: Model "%1 (%2)" is conflict.
+ ERROR: El modelo "%1 (%2)" está en conflicto.
+
+
+
+ Model "%1 (%2)" is installed successfully.
+ El modelo "%1 (%2)" se ha instalado correctamente.
+
+
+
+ Model "%1" is removed.
+ El modelo "%1" ha sido eliminado.
+
+
From bf8873098add27feab575b33ddff175e72ba601e Mon Sep 17 00:00:00 2001
From: Jay <127801635+jstayco@users.noreply.github.com>
Date: Fri, 9 Aug 2024 12:31:41 -0700
Subject: [PATCH 20/66] Small fixes for better main menu UI (#2832)
Signed-off-by: JSTayco
---
gpt4all-chat/translations/gpt4all_es_MX.ts | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index b4471c97..a62f2d6d 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -2935,7 +2935,8 @@ lanzamiento de modelo GPT4All que utilice sus datos.
LocalDocs
- DocsLocales
+ Docs
+Locales
@@ -2949,7 +2950,7 @@ lanzamiento de modelo GPT4All que utilice sus datos.
Settings
- Configuración
+ Config.
From 2feda2a82dcb9123e045ca1230331deec951b7cd Mon Sep 17 00:00:00 2001
From: Thiago Ramos
Date: Fri, 9 Aug 2024 23:21:22 -0300
Subject: [PATCH 21/66] Fixed and updated some strings in pt-BR (#2836)
Signed-off-by: Thiago Ramos
---
gpt4all-chat/translations/gpt4all_pt_BR.ts | 23 +++++++++++++---------
1 file changed, 14 insertions(+), 9 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index 7f01713a..45d5e103 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -529,7 +529,7 @@
Small
- Pequena
+ Pequeno
@@ -571,6 +571,7 @@
The compute device used for text generation.
+ I chose to use "Processador" instead of "Dispositivo" (Device) or "Dispositivo de Computação" (Compute Device) to simplify the terminology and make it more straightforward and understandable. "Dispositivo" can be vague and could refer to various types of hardware, whereas "Processador" clearly and specifically indicates the component responsible for processing tasks. This improves usability by avoiding the ambiguity that might arise from using more generic terms like "Dispositivo."
+ Processador usado para gerar texto.
@@ -681,6 +682,7 @@
Save Chat Context
+ I used "Histórico do Chat" (Chat History) instead of "Contexto do Chat" (Chat Context) to clearly convey that it refers to saving past messages, making it more intuitive and avoiding potential confusion with abstract terms.Salvar Histórico do Chat
@@ -1568,12 +1570,13 @@ modelo instalado para funcionar
Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
- Valores muito grandes podem causar falhas no LocalDocs, respostas extremamente lentas ou nenhuma resposta. Em termos gerais, o {Número de Caracteres x Número de Trechos} é adicionado à janela de contexto do modelo. Consulte <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a> para mais informações.
+ Valores muito altos podem causar falhas no LocalDocs, respostas extremamente lentas ou até mesmo nenhuma resposta. De forma geral, o valor {Número de Caracteres x Número de Trechos} é adicionado à janela de contexto do modelo. Clique <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aqui</a> para mais informações.
 Document snippet size (characters)
+ I translated "snippet" as "trecho" to make the term feel more natural and understandable in Portuguese. "Trecho" effectively conveys the idea of a portion or section of a document, fitting well within the context, whereas a more literal translation might sound less intuitive or awkward for users.
+ Tamanho do trecho de documento (caracteres)
@@ -1623,7 +1626,7 @@ modelo instalado para funcionar
<h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.
- <h3>ERRO: Não foi possível acessar o banco de dados do LocalDocs ou ele não é válido.</h3><br><i>Observação: Será necessário reiniciar o aplicativo após tentar qualquer uma das seguintes correções sugeridas.</i><br><ul><li>Certifique-se de que a pasta definida como <b>Caminho de Download</b> existe no sistema de arquivos.</li><li>Verifique a propriedade, bem como as permissões de leitura e gravação do <b>Caminho de Download</b>.</li><li>Se houver um arquivo <b>localdocs_v2.db</b>, verifique também sua propriedade e permissões de leitura/gravação.</li></ul><br>Se o problema persistir e houver algum arquivo 'localdocs_v*.db' presente, como último recurso, você pode<br>tentar fazer backup deles e removê-los. No entanto, você terá que recriar suas coleções.
+ <h3>ERRO: Não foi possível acessar o banco de dados do LocalDocs ou ele não é válido.</h3><br><i>Observação: Será necessário reiniciar o aplicativo após tentar qualquer uma das seguintes correções sugeridas.</i><br><ul><li>Certifique-se de que a pasta definida como <b>Caminho de Download</b> existe no sistema de arquivos.</li><li>Verifique a propriedade, bem como as permissões de leitura e gravação do <b>Caminho de Download</b>.</li><li>Se houver um arquivo <b>localdocs_v2.db</b>, verifique também sua propriedade e permissões de leitura/gravação.</li></ul><br>Se o problema persistir e houver algum arquivo 'localdocs_v*.db' presente, como último recurso, você pode<br>tentar fazer backup deles e removê-los. No entanto, você terá que recriar suas coleções.
@@ -1713,7 +1716,7 @@ modelo instalado para funcionar
This collection requires an update after version change
- Esta coleção requer uma atualização após a mudança de versão
+ Esta coleção precisa ser atualizada após a mudança de versão
@@ -1767,7 +1770,7 @@ modelo instalado para funcionar
Reindex this folder from scratch. This is slow and usually not needed.
- Reindexar esta pasta do zero. Esta operação é muito lenta e geralmente não é necessária.
+ Reindexar pasta do zero. Lento e geralmente desnecessário.
@@ -1779,7 +1782,7 @@ modelo instalado para funcionar
Update the collection to the new version. This is a slow operation.
- Atualizar a coleção para a nova versão. Esta operação pode demorar.
+ Atualizar coleção para nova versão. Pode demorar.
@@ -2533,7 +2536,8 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
Busy indicator
- Indicador de processamento
+ The literal translation of "busy indicator" as "indicador de ocupado" might create ambiguity in Portuguese, as it doesn't clearly convey whether the system is processing something or simply unavailable. "Progresso" (progress) was chosen to more clearly indicate that an activity is in progress and that the user should wait for its completion.
+ Indicador de progresso
@@ -2557,7 +2561,8 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
Settings
- Configurações
+ I used "Config" instead of "Configurações" to keep the UI concise and visually balanced. "Config" is a widely recognized abbreviation that maintains clarity while saving space, making the interface cleaner and more user-friendly, especially in areas with limited space.
+ Config
@@ -2915,7 +2920,7 @@ versão do modelo GPT4All que utilize seus dados!
Settings
- Configurações
+ Config
From bc0fb53eabc089d2be5540d420130013410cad81 Mon Sep 17 00:00:00 2001
From: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Date: Mon, 12 Aug 2024 16:19:47 +0300
Subject: [PATCH 22/66] GPT4All +v3.1.1: GUI: TRANSLATION: into ro_RO (#2834)
Signed-off-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
---
gpt4all-chat/translations/gpt4all_ro_RO.ts | 250 +++++----------------
1 file changed, 61 insertions(+), 189 deletions(-)
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index fcd6380e..f8855d48 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -18,15 +18,13 @@
Add a folder containing plain text files, PDFs, or Markdown. Configure
additional extensions in Settings.
- Adaugă un folder care conţine fişiere în cu text-simplu, PDF sau Markdown.
- Extensii suplimentare pot fi specificate în Configurare.
+ Adaugă un folder care conţine fişiere în cu text-simplu, PDF sau Markdown. Extensii suplimentare pot fi specificate în Configurare.
 Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.
- Adaugă un folder cu fişiere în format text, PDF sau Markdown.
- Alte extensii pot fi adăugate în Configurare.
+ Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi adăugate în Configurare.
@@ -291,17 +289,14 @@
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat pentru
- acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem
 (%2).</strong></font>
 <strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENţIE: Nerecomandat
- pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem
- (%2).</strong></font>
+ <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem (%2).</strong></font>
@@ -326,8 +321,7 @@
<strong><font size="1"><a
href="#eroare">Eroare</a></strong></font>
- <strong><font size="1"><a
- href="#eroare">Eroare</a></strong></font>
+ <strong><font size="1"><a href="#eroare">Eroare</a></strong></font>
@@ -479,12 +473,7 @@
If you can't start it manually, then I'm afraid you'll have
to<br>
reinstall.
- EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br>
- necesară căutării de versiuni noi!<br><br>
- Ai instalat acest program folosind kitul online? Dacă da,<br>
- atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br>
- unde ai instalat programul.<br><br>
- Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
+ EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
@@ -568,12 +557,7 @@
above where this application resides on your filesystem.<br><br>
If you can't start it manually, then I'm afraid you'll have to<br>
reinstall.
- EROARE: Sistemul de Update nu poate găsi componenta MaintenanceTool<br>
- necesară căutării de versiuni noi!<br><br>
- Ai instalat acest program folosind kitul online? Dacă da,<br>
- atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br>
- unde ai instalat programul.<br><br>
- Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
+ EROARE: Sistemul de Update nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
@@ -731,21 +715,18 @@
Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
- Salvează pe disc starea modelului pentru încărcare mai rapidă.
- ATENŢIE: Consumă ~2GB/conversaţie.
+ Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
 Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
- Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte
- consumul de resurse.
+ Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
 Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB
per chat.
- Salvează pe disc starea modelului pentru Încărcare mai rapidă.
- ATENţIE: Consumă ~2GB/conversaţie.
+ Salvează pe disc starea modelului pentru Încărcare mai rapidă. ATENţIE: Consumă ~2GB/conversaţie.
@@ -756,8 +737,7 @@
Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased
resource usage.
- Activează pe localhost un Server compatibil cu Open-AI. ATENţIE: Creşte
- consumul de resurse.
+ Activează pe localhost un Server compatibil cu Open-AI. ATENţIE: Creşte consumul de resurse.
@@ -1389,7 +1369,7 @@ model to get started
 %n fişier
 %n fişiere
-
+ %n fişiere
@@ -1399,7 +1379,7 @@ model to get started
 %n cuvânt
 %n cuvinte
-
+ %n cuvinte
@@ -1617,8 +1597,7 @@ model to get started
Embed documents using the fast Nomic API instead of a private local model.
Requires restart.
- Embedding pe documente folosind API de la Nomic în locul unui model local.
- Necesită repornire.
+ Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
@@ -1630,9 +1609,7 @@ model to get started
API key to use for Nomic Embed. Get one from the Atlas <a
href="https://atlas.nomic.ai/cli-login">API keys page</a>.
Requires restart.
- Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a
- href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a>
- Necesită repornire.
+ Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
@@ -1643,37 +1620,31 @@ model to get started
The compute device used for embeddings. "Auto" uses the CPU. Requires
restart.
- Dispozitivul pentru Embeddings.
- "Auto" apelează la CPU. Necesită repornire
+ Dispozitivul pentru Embeddings. "Auto" apelează la CPU. Necesită repornire
 Comma-separated list. LocalDocs will only attempt to process files with these extensions.
- Extensiile, separate prin virgulă. LocalDocs va încerca procesarea
- numai a fişierelor cu aceste extensii.
+ Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
 Embed documents using the fast Nomic API instead of a private local model. Requires restart.
- Embedding pe documente folosind API de la Nomic în locul unui model local.
- Necesită repornire.
+ Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
 API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
- Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a
- href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a>
- Necesită repornire.
+ Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
 The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- Dispozitivul pentru Embeddings.
- "Auto" apelează la CPU. Necesită repornire.
+ Dispozitivul pentru Embeddings. "Auto" apelează la CPU. Necesită repornire.
@@ -1709,25 +1680,19 @@ model to get started
Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
- Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau
- chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat
- la Context Window/Size/Length a modelului. Mai multe informaţii: <a
- href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
+ Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
 Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
- Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea
- unor replici corecte, dar de asemenea cauzează generare lentă.
+ Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
 Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
- Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul
- pentru prompt. Numere mari amplifică probabilitatea
- unor replici corecte, dar de asemenea cauzează generare lentă.
+ Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
@@ -1736,10 +1701,7 @@ model to get started
the model's context window. More info <a
href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
- Valori prea mari pot cauza erori cu LocalDocs, replici lente sau
- absenţa lor completă. în mare, numărul {N caractere x N citate} este adăugat
- la Context Window/Size/Length a modelului. Mai multe informaţii: <a
- href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
+ Valori prea mari pot cauza erori cu LocalDocs, replici lente sau absenţa lor completă. în mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
@@ -1750,8 +1712,7 @@ model to get started
Number of characters per document snippet. Larger numbers increase likelihood of
factual responses, but also result in slower generation.
- numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea
- unor replici corecte, dar de asemenea pot cauza generare lentă.
+ numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
@@ -1763,9 +1724,7 @@ model to get started
Max best N matches of retrieved document snippets to add to the context for
prompt. Larger numbers increase likelihood of factual responses, but also result in
slower generation.
- Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul
- pentru prompt. Numere mari amplifică probabilitatea
- unor replici corecte, dar de asemenea pot cauza generare lentă.
+ Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
@@ -1913,7 +1872,7 @@ model to get started
 %n fişier
 %n fişiere
-
+ %n fişiere
@@ -1923,7 +1882,7 @@ model to get started
 %n cuvânt
 %n cuvinte
-
+ %n cuvinte
@@ -1968,18 +1927,12 @@ model to get started
OpenAI</li><li>You can apply for an API key <a
href="https://platform.openai.com/account/api-keys">here.</a></li>
- <ul><li>Necesită o cheie API OpenAI personală.
- </li><li>ATENţIE: Conversaţiile tale vor fi trimise la OpenAI!
- </li><li>Cheia ta API va fi stocată pe disc (local)
- </li><li>Va fi utilizată numai pentru comunicarea cu
- OpenAI</li><li>Poţi solicita o cheie API aici: <a
- href="https://platform.openai.com/account/api-keys">aquí.</a></li>
+ <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENţIE: Conversaţiile tale vor fi trimise la OpenAI! </li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li>
<strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br>
%1
- <strong>Modelul ChatGPT GPT-3.5 Turbo al
- OpenAI</strong><br> %1
+ <strong>Modelul ChatGPT GPT-3.5 Turbo al OpenAI</strong><br> %1
@@ -1994,12 +1947,7 @@ model to get started
<ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li>
- <ul><li>Necesită o cheie API OpenAI personală.
- </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!
- </li><li>Cheia ta API va fi stocată pe disc (local)
- </li><li>Va fi utilizată numai pentru comunicarea cu
- OpenAI</li><li>Poţi solicita o cheie API aici: <a
- href="https://platform.openai.com/account/api-keys">aquí.</a></li>
+ <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!</li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li>
@@ -2009,8 +1957,7 @@ model to get started
<br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.
- <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu
- garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
+ <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
@@ -2020,12 +1967,7 @@ model to get started
<ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li>
- <ul><li>Necesită cheia personală Mistral API.
- </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la
- Mistral!</li><li>Cheia ta API va fi stocată
- pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu
- Mistral</li><li>Poţi solicita o cheie API aici: <a
- href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
+ <ul><li>Necesită cheia personală Mistral API. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
@@ -2050,22 +1992,17 @@ model to get started
<strong>Connect to OpenAI-compatible API server</strong><br> %1
- Conectare la un server API compatibil cu OpenAI
+ <strong>Conectare la un server API compatibil cu OpenAI</strong><br> %1
 <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
- <strong>Creat de către
- %1.</strong><br><ul><li>Publicat in: %2.<li>Acest
- model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii
- pot fi găsite la: <a
- href="https://huggingface.co/%5">aquí.</a></ul>
+ <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul>
 <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does
not guarantee API key access. Contact OpenAI for more info.
- <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu
- garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
+ <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
@@ -2076,23 +2013,14 @@ model to get started
Mistral</li><li>You can apply for an API key <a
href="https://console.mistral.ai/user/api-keys">here</a>.</li>
- <ul><li>Necesită cheia personală Mistral API.
- </li><li>ATENţIE: Conversaţiile tale vor fi trimise la
- Mistral!</li><li>Cheia ta API va fi stocată
- pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu
- Mistral</li><li>Poţi solicita o cheie API aici: <a
- href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
+ <ul><li>Necesită cheia personală Mistral API. </li><li>ATENţIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
<strong>Created by
%1.</strong><br><ul><li>Published on %2.<li>This model
has %3 likes.<li>This model has %4 downloads.<li>More info can be found
<a href="https://huggingface.co/%5">here.</a></ul>
- <strong>Creat de către
- %1.</strong><br><ul><li>Publicat in: %2.<li>Acest
- model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii
- pot fi găsite la: <a
- href="https://huggingface.co/%5">aquí.</a></ul>
+ <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul>
@@ -2142,8 +2070,7 @@ model to get started
Prefixed at the beginning of every conversation. Must contain the appropriate
framing tokens.
- Plasat la Începutul fiecărei conversaţii. Trebuie să conţină
- token-uri(le) adecvate de Încadrare.
+ Plasat la Începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de Încadrare.
@@ -2160,8 +2087,7 @@ model to get started
Must contain the string "%1" to be replaced with the user's
input.
- Trebuie să conţină textul "%1" care va fi Înlocuit cu ceea ce scrie
- utilizatorul.
+ Trebuie să conţină textul "%1" care va fi Înlocuit cu ceea ce scrie utilizatorul.
@@ -2203,9 +2129,7 @@ model to get started
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
- Numărul maxim combinat al token-urilor în prompt+replică Înainte de a se pierde informaţie.
- Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe.
- NOTă: Nu are efect până la reincărcarea modelului.
+ Numărul maxim combinat al token-urilor în prompt+replică Înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTă: Nu are efect până la reincărcarea modelului.
@@ -2222,8 +2146,7 @@ model to get started
Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile.
- NOTă: O temperatură tot mai Înaltă determinÎ replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTă: O temperatură tot mai Înaltă determinÎ replici tot mai creative şi mai puţin predictibile.
@@ -2240,15 +2163,13 @@ model to get started
Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P.
- NOTă: Se evită selectarea token-urilor foarte improbabile.
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTă: Se evită selectarea token-urilor foarte improbabile.
 Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.
- Plasat la începutul fiecărei conversaţii. Trebuie să conţină
- token-uri(le) adecvate de încadrare.
+ Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare.
@@ -2262,25 +2183,21 @@ model to get started
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
- Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie.
- Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe.
- NOTĂ: Nu are efect până la reîncărcarea modelului.
+ Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului.
 Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile.
- NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile.Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P.
- NOTĂ: Se evită selectarea token-urilor foarte improbabile.
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
@@ -2347,8 +2264,7 @@ NOTE: Prevents choosing highly unlikely tokens.
Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- Numărul token-urilor procesate simultan.
- NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
+ Numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2356,16 +2272,12 @@ NOTE: Higher values can speed up reading prompts but will use more RAM.
How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
- Cât de multe layere ale modelului să fie încărcate în VRAM.
- Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul.
- Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa.
- NOTĂ: Nu are efect până la reîncărcarea modelului.
+ Cât de multe layere ale modelului să fie încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. NOTĂ: Nu are efect până la reîncărcarea modelului.
 Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- numărul token-urilor procesate simultan.
- NOTă: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
+ numărul token-urilor procesate simultan. NOTă: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2408,10 +2320,7 @@ NOTE: Does not take effect until you reload the model.
VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
- Cât de multe layere ale modelului să fie Încărcate în VRAM.
- Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce Încarcă modelul.
- Valorile tot mai mici cresc utilizarea CPU şi a RAM şi Încetinesc inferenţa.
- NOTă: Nu are efect până la reÎncărcarea modelului.
+ Cât de multe layere ale modelului să fie Încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce Încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi Încetinesc inferenţa. NOTă: Nu are efect până la reÎncărcarea modelului.
@@ -2525,16 +2434,13 @@ NOTE: Does not take effect until you reload the model.
<strong><font size="1"><a
href="#error">Error</a></strong></font>
- <strong><font size="1"><a
- href="#eroare">Eroare</a></strong></font>
+ <strong><font size="1"><a href="#eroare">Eroare</a></strong></font><strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENţIE: Nerecomandat pentru
- acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău
- (%2).</strong></font>
+ <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
@@ -2564,9 +2470,7 @@ NOTE: Does not take effect until you reload the model.
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat pentru
- acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău
- (%2).</strong></font>
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
@@ -3145,15 +3049,13 @@ care foloseşte datele tale!
<b>Warning:</b> changing the model will erase the current
conversation. Do you wish to continue?
- <b>Atenţie:</b> schimbarea modelului va sterge conversaţia
- curentă. Confirmi aceasta?
+ <b>Atenţie:</b> schimbarea modelului va sterge conversaţia curentă. Confirmi aceasta?
<b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
- <b>Atenţie:</b> schimbarea modelului va şterge conversaţia
- curentă. Confirmi aceasta?
+ <b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
@@ -3228,15 +3130,7 @@ care foloseşte datele tale!
CPU.<br><br>See here for more information: <a
href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>A apărut o eroare la iniţializare:;
- </h3><br><i>"Hardware incompatibil.
- "</i><br><br>Din păcate, procesorul (CPU) nu Întruneşte
- condiţiile minime pentru a rula acest program. în particular, nu suportă
- instrucţiunile AVX pe care programul le necesită pentru a integra un model
- conversaţional modern. în acest moment, unica soluţie este să Îţi aduci la zi sistemul hardware
- cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii:
- <a
- href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu Întruneşte condiţiile minime pentru a rula acest program. în particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. în acest moment, unica soluţie este să Îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
@@ -3252,41 +3146,19 @@ care foloseşte datele tale!
permissions in the local app config directory where the settings file is located.
Check out our <a href="https://discord.gg/4M2QFmTt2k">discord
channel</a> for help.
- <h3>A apărut o eroare la iniţializare:;
- </h3><br><i>"Nu poate fi accesat fişierul de configurare
- a programului."</i><br><br>Din păcate, ceva împiedică
- programul în a accesa acel fişier. Cauza poate fi un set de permisiuni
- incorecte În/pe directorul/folderul local de configurare unde se află acel fişier.
- Poţi parcurge canalul nostru <a
- href="https://discord.gg/4M2QFmTt2k">Discord</a> unde
- vei putea primi asistenţă.
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte În/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>A apărut o eroare la iniţializare:;
- </h3><br><i>"Hardware incompatibil.
- "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte
- condiţiile minime pentru a rula acest program. În particular, nu suportă
- instrucţiunile AVX pe care programul le necesită pentru a integra un model
- conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware
- cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii:
- <a
- href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
<h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
- <h3>A apărut o eroare la iniţializare:;
- </h3><br><i>"Nu poate fi accesat fişierul de configurare
- a programului."</i><br><br>Din păcate, ceva împiedică
- programul în a accesa acel fişier. Cauza poate fi un set de permisiuni
- incorecte în/pe directorul/folderul local de configurare unde se află acel fişier.
- Poţi parcurge canalul nostru <a
- href="https://discord.gg/4M2QFmTt2k">Discord</a> unde
- vei putea primi asistenţă.
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte în/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
@@ -3328,7 +3200,7 @@ care foloseşte datele tale!
Home
- Prima<br/>pagină
+ Prima<br>pagină
From b70d68977db50c1266c80129dcc1cbb0080fd4d6 Mon Sep 17 00:00:00 2001
From: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
Date: Mon, 12 Aug 2024 16:15:05 +0200
Subject: [PATCH 23/66] Add default CLion build folder pattern to .gitignore
(#2835)
CLion uses a `cmake-build-` prefix, unlike Qt Creator.
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
---
.gitignore | 1 +
1 file changed, 1 insertion(+)
diff --git a/.gitignore b/.gitignore
index 9d1d6918..d7801d0b 100644
--- a/.gitignore
+++ b/.gitignore
@@ -181,6 +181,7 @@ CMakeLists.txt.user
gpt4all-chat/models/*
build_*
build-*
+cmake-build-*
# IntelliJ
.idea/
From b89314df96bd53189ebf66ae5fd3ed92da9de23e Mon Sep 17 00:00:00 2001
From: AT
Date: Mon, 12 Aug 2024 11:00:49 -0400
Subject: [PATCH 24/66] Change to a whitelist for released translations.
(#2830)
- Change to a whitelist for released translations.
- Added changelog entry.
- Bump the version for translation release.
Signed-off-by: Adam Treat
Signed-off-by: AT
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/CMakeLists.txt | 30 +++++++++++++-----------------
gpt4all-chat/mysettings.cpp | 5 ++---
3 files changed, 16 insertions(+), 20 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 4084b4a0..1adeef99 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -8,6 +8,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Add Qwen2-1.5B-Instruct to models3.json (by [@ThiloteE](https://github.com/ThiloteE) in [#2759](https://github.com/nomic-ai/gpt4all/pull/2759))
+- Enable translation feature for seven languages: English, Spanish, Italian, Portuguese, Chinese Simplified, Chinese Traditional, Romanian ([#2830](https://github.com/nomic-ai/gpt4all/pull/2830))
### Changed
- Add missing entries to Italian transltation (by [@Harvester62](https://github.com/Harvester62) in [#2783](https://github.com/nomic-ai/gpt4all/pull/2783))
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 1e59a113..79555c4f 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -17,10 +17,10 @@ if(APPLE)
endif()
set(APP_VERSION_MAJOR 3)
-set(APP_VERSION_MINOR 1)
-set(APP_VERSION_PATCH 2)
+set(APP_VERSION_MINOR 2)
+set(APP_VERSION_PATCH 0)
set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
-set(APP_VERSION "${APP_VERSION_BASE}-dev0")
+set(APP_VERSION "${APP_VERSION_BASE}")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake/Modules")
@@ -32,7 +32,6 @@ project(gpt4all VERSION ${APP_VERSION_BASE} LANGUAGES CXX C)
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
-option(GPT4ALL_TRANSLATIONS "Build with translations" OFF)
option(GPT4ALL_LOCALHOST "Build installer for localhost repo" OFF)
option(GPT4ALL_OFFLINE_INSTALLER "Build an offline installer" OFF)
option(GPT4ALL_SIGN_INSTALL "Sign installed binaries and installers (requires signing identities)" OFF)
@@ -228,19 +227,16 @@ qt_add_qml_module(chat
icons/you.svg
)
-if (GPT4ALL_TRANSLATIONS)
- target_compile_definitions(chat PRIVATE GPT4ALL_USE_TRANSLATIONS)
- qt_add_translations(chat
- TS_FILES
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_en_US.ts
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_es_MX.ts
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_CN.ts
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_TW.ts
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_ro_RO.ts
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_it_IT.ts
- ${CMAKE_SOURCE_DIR}/translations/gpt4all_pt_BR.ts
- )
-endif()
+qt_add_translations(chat
+ TS_FILES
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_en_US.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_es_MX.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_CN.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_zh_TW.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_ro_RO.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_it_IT.ts
+ ${CMAKE_SOURCE_DIR}/translations/gpt4all_pt_BR.ts
+)
set_target_properties(chat PROPERTIES
WIN32_EXECUTABLE TRUE
diff --git a/gpt4all-chat/mysettings.cpp b/gpt4all-chat/mysettings.cpp
index 5d71a661..525ccc1e 100644
--- a/gpt4all-chat/mysettings.cpp
+++ b/gpt4all-chat/mysettings.cpp
@@ -122,6 +122,7 @@ static QString getUiLanguage(const QString directory, const QString fileName)
static QStringList getUiLanguages(const QString &modelPath)
{
QStringList languageList;
+ static const QStringList releasedLanguages = { "en_US", "it_IT", "zh_CN", "zh_TW", "es_MX", "pt_BR", "ro_RO" };
// Add the language translations from model path files first which is used by translation developers
// to load translations in progress without having to rebuild all of GPT4All from source
@@ -138,7 +139,7 @@ static QStringList getUiLanguages(const QString &modelPath)
const QStringList qmFiles = dir.entryList({"*.qm"}, QDir::Files);
for (const QString &fileName : qmFiles) {
const QString lang = getUiLanguage(":/i18n", fileName);
- if (!languageList.contains(lang))
+ if (!languageList.contains(lang) && releasedLanguages.contains(lang))
languageList.append(lang);
}
}
@@ -632,7 +633,6 @@ void MySettings::setLanguageAndLocale(const QString &bcp47Name)
else
locale = QLocale(l);
-#ifdef GPT4ALL_USE_TRANSLATIONS
// If we previously installed a translator, then remove it
if (m_translator) {
if (!qGuiApp->removeTranslator(m_translator.get())) {
@@ -662,7 +662,6 @@ void MySettings::setLanguageAndLocale(const QString &bcp47Name)
m_translator.reset();
}
}
-#endif
// Finally, set the locale whether we have a translation or not
QLocale::setDefault(locale);
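The mysettings.cpp hunks above implement the whitelist: a .qm file discovered in the app resources is only offered in the UI if its locale is on the released list, while translations loaded from the model path (for translation developers) bypass the check. A minimal Python sketch of the same filtering idea, for illustration only (the names are hypothetical, not part of the GPT4All codebase):

    RELEASED = {"en_US", "it_IT", "zh_CN", "zh_TW", "es_MX", "pt_BR", "ro_RO"}

    def ui_languages(candidates):
        languages = []
        for lang in candidates:
            # mirror the C++ loop: skip duplicates and unreleased locales
            if lang not in languages and lang in RELEASED:
                languages.append(lang)
        return languages

    assert ui_languages(["ro_RO", "ro_RO", "de_DE"]) == ["ro_RO"]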
From 3e0ad62fcbb0bb9ca0172dc1c3ca30eba3f03cba Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 12 Aug 2024 15:35:25 -0400
Subject: [PATCH 25/66] ci: fix macOS target version (#2846)
Signed-off-by: Jared Van Bortel
---
.circleci/continue_config.yml | 3 +++
1 file changed, 3 insertions(+)
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index f56fc0f7..2d1bd614 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -75,6 +75,8 @@ jobs:
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-DCMAKE_GENERATOR:STRING=Ninja \
-DBUILD_UNIVERSAL=ON \
+ -DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
+ -DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
-DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=ON \
-DGPT4ALL_SIGN_INSTALL=ON \
@@ -208,6 +210,7 @@ jobs:
-DCMAKE_GENERATOR:STRING=Ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
+ -DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
-DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=OFF \
-DGPT4ALL_SIGN_INSTALL=ON \
From ea6361149396dff51e55b90d148d182678562be3 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 12 Aug 2024 17:12:14 -0400
Subject: [PATCH 26/66] chat: add release notes for v3.2.0 and bump version
(#2847)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 5 +++--
gpt4all-chat/CMakeLists.txt | 4 ++--
gpt4all-chat/metadata/release.json | 34 ++++++++++++++++++++++++++++++
3 files changed, 39 insertions(+), 4 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 1adeef99..e63207e4 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -4,7 +4,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
-## [Unreleased]
+## [3.2.0] - 2024-08-12
### Added
- Add Qwen2-1.5B-Instruct to models3.json (by [@ThiloteE](https://github.com/ThiloteE) in [#2759](https://github.com/nomic-ai/gpt4all/pull/2759))
@@ -23,6 +23,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
- Make reverse prompt detection work more reliably and prevent it from breaking output ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
- Disallow context shift for chat name and follow-up generation to prevent bugs ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
+- Explicitly target macOS 12.6 in CI to fix Metal compatibility on older macOS ([#2846](https://github.com/nomic-ai/gpt4all/pull/2846))
## [3.1.1] - 2024-07-27
@@ -82,6 +83,6 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix several Vulkan resource management issues ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
- Fix crash/hang when some models stop generating, by showing special tokens ([#2701](https://github.com/nomic-ai/gpt4all/pull/2701))
-[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/v3.1.1...HEAD
+[3.2.0]: https://github.com/nomic-ai/gpt4all/compare/v3.1.1...v3.2.0
[3.1.1]: https://github.com/nomic-ai/gpt4all/compare/v3.1.0...v3.1.1
[3.1.0]: https://github.com/nomic-ai/gpt4all/compare/v3.0.0...v3.1.0
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 79555c4f..27f3f5d9 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -18,9 +18,9 @@ endif()
set(APP_VERSION_MAJOR 3)
set(APP_VERSION_MINOR 2)
-set(APP_VERSION_PATCH 0)
+set(APP_VERSION_PATCH 1)
set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
-set(APP_VERSION "${APP_VERSION_BASE}")
+set(APP_VERSION "${APP_VERSION_BASE}-dev0")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake/Modules")
diff --git a/gpt4all-chat/metadata/release.json b/gpt4all-chat/metadata/release.json
index b7a8e567..cd6c54ba 100644
--- a/gpt4all-chat/metadata/release.json
+++ b/gpt4all-chat/metadata/release.json
@@ -953,6 +953,40 @@
* Jared Van Bortel (Nomic AI)
* Shiranui (@supersonictw)
* Community (beta testers, bug reporters, bindings authors)
+"
+ },
+ {
+ "version": "3.2.0",
+ "notes":
+"
+— What's New —
+* Translations for Simplified Chinese, Traditional Chinese, Italian, Portuguese, Romanian, and Spanish
+* Significantly faster context recalculation when context runs out
+* Models no longer stop generating when they run out of context
+* Add Qwen2-1.5B-Instruct to the model list
+
+— Fixes —
+* Fix a CUDA crash with long conversations since v3.1.0
+* Fix \"file(s)\" and \"word(s)\" appearing in UI instead of proper plurals
+* Show the correct icons for LocalDocs sources with uppercase extensions
+* More reliable reverse prompt detection
+* Fix a minor prompting issue introduced in v3.1.0
+* Disallow context shift for chat name and follow-up generation
+* Fix potential incompatibility with macOS 12 and 13
+",
+ "contributors":
+"
+* Jared Van Bortel (Nomic AI)
+* Adam Treat (Nomic AI)
+* Riccardo Giovanetti (`@Harvester62`)
+* Victor Emanuel (`@SINAPSA-IC`)
+* Jeremy Tayco (`@jstayco`)
+* Shiranui (`@supersonictw`)
+* Thiago Ramos (`@thiagojramos`)
+* ThiloteE (`@ThiloteE`)
+* Dominik (`@cosmic-snow`)
+* Jack (`@wuodoo`)
+* Community (beta testers, bug reporters, bindings authors)
"
}
]
From ceb7726f22c4f98ab85f05729cb66bcd557364ae Mon Sep 17 00:00:00 2001
From: AT
Date: Mon, 12 Aug 2024 18:15:58 -0400
Subject: [PATCH 27/66] Add some news about our latest translation release.
(#2848)
Signed-off-by: Adam Treat
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/metadata/latestnews.md | 9 ++++++---
1 file changed, 6 insertions(+), 3 deletions(-)
diff --git a/gpt4all-chat/metadata/latestnews.md b/gpt4all-chat/metadata/latestnews.md
index 1b24122a..ccd7f916 100644
--- a/gpt4all-chat/metadata/latestnews.md
+++ b/gpt4all-chat/metadata/latestnews.md
@@ -1,6 +1,9 @@
## Latest News
-* **New Model Support**: LLaMa 3.1 8b, Gemma, Mixtral, GPT-NeoX, Gemma 2, OpenELM, ChatGLM, Jais architectures, StarCoder2, XVERSE, Command R, and OLMo (all with Vulkan support)
-* **Suggested Follow Up Questions**: Get follow up questions on your LocalDocs or chats automatically suggested
+We're happy to announce that version 3.2.0 has been released! This new version brings:
-Roadmap: we're planning support for tools in GPT4All that models like LLaMa 3.1 can use. Share suggestions on Discord!
+* **Official Language Translations**: Translations for Simplified Chinese, Traditional Chinese, Italian, Portuguese, Romanian, and Spanish
+* **Context Window Improvements**: Significantly faster context recalculation when context runs out
+* **Bugfixes**: Models no longer stop generating when they run out of context
+
+Also, Qwen2-1.5B-Instruct, which has good Chinese support, was recently added to the model list.
From 932cdd8ead603040bfc665a6d79b1fe34100caec Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 12 Aug 2024 19:01:21 -0400
Subject: [PATCH 28/66] latestnews: clarify how to change language (#2850)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/metadata/latestnews.md | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/gpt4all-chat/metadata/latestnews.md b/gpt4all-chat/metadata/latestnews.md
index ccd7f916..436e4d4b 100644
--- a/gpt4all-chat/metadata/latestnews.md
+++ b/gpt4all-chat/metadata/latestnews.md
@@ -2,7 +2,8 @@
We're happy to announce that version 3.2.0 has been released! This new version brings:
-* **Official Language Translations**: Translations for Simplified Chinese, Traditional Chinese, Italian, Portuguese, Romanian, and Spanish
+* **Official Language Translations**: Translations for Simplified Chinese, Traditional Chinese, Italian, Portuguese, Romanian, and Spanish.
+ Go to Settings > Language and Locale to change the application language.
* **Context Window Improvements**: Significantly faster context recalculation when context runs out
* **Bugfixes**: Models no longer stop generating when they run out of context
From be91576937702f4b40f0342ae0dca83f65c5f557 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 12 Aug 2024 19:03:18 -0400
Subject: [PATCH 29/66] ci: use consistent build options on macOS (#2849)
Signed-off-by: Jared Van Bortel
---
.circleci/continue_config.yml | 93 +++++++++++++++-------------
gpt4all-bindings/python/CHANGELOG.md | 1 +
2 files changed, 51 insertions(+), 43 deletions(-)
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index 2d1bd614..5d12ac43 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -73,18 +73,16 @@ jobs:
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.7/bin
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
- -DCMAKE_GENERATOR:STRING=Ninja \
+ -S ../gpt4all-chat -B . -G Ninja \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
+ -DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
-DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=ON \
- -DGPT4ALL_SIGN_INSTALL=ON \
- -DCMAKE_BUILD_TYPE=Release \
- -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
- -DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
- -S ../gpt4all-chat \
- -B .
+ -DGPT4ALL_SIGN_INSTALL=ON
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
@@ -207,18 +205,16 @@ jobs:
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.7/bin
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
- -DCMAKE_GENERATOR:STRING=Ninja \
+ -S ../gpt4all-chat -B . -G Ninja \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
+ -DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
-DMACDEPLOYQT=~/Qt/6.5.1/macos/bin/macdeployqt \
-DGPT4ALL_OFFLINE_INSTALLER=OFF \
- -DGPT4ALL_SIGN_INSTALL=ON \
- -DCMAKE_BUILD_TYPE=Release \
- -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
- -DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
- -S ../gpt4all-chat \
- -B .
+ -DGPT4ALL_SIGN_INSTALL=ON
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
@@ -345,7 +341,10 @@ jobs:
mkdir build
cd build
mkdir upload
- ~/Qt/Tools/CMake/bin/cmake -DGPT4ALL_OFFLINE_INSTALLER=ON -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
+ ~/Qt/Tools/CMake/bin/cmake \
+ -S ../gpt4all-chat -B . \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DGPT4ALL_OFFLINE_INSTALLER=ON
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target all
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target install
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target package
@@ -402,7 +401,10 @@ jobs:
mkdir build
cd build
mkdir upload
- ~/Qt/Tools/CMake/bin/cmake -DGPT4ALL_OFFLINE_INSTALLER=OFF -DCMAKE_BUILD_TYPE=Release -S ../gpt4all-chat -B .
+ ~/Qt/Tools/CMake/bin/cmake \
+ -S ../gpt4all-chat -B . \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DGPT4ALL_OFFLINE_INSTALLER=OFF
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target all
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target install
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target package
@@ -485,15 +487,12 @@ jobs:
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
- "-DCMAKE_GENERATOR:STRING=Ninja" `
+ -S ..\gpt4all-chat -B . -G Ninja `
"-DCMAKE_BUILD_TYPE=Release" `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
- "-DGPT4ALL_OFFLINE_INSTALLER=ON" `
- "-DGPT4ALL_SIGN_INSTALLER=ON" `
- "-S ..\gpt4all-chat" `
- "-B ."
+ "-DGPT4ALL_OFFLINE_INSTALLER=ON"
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
@@ -616,15 +615,12 @@ jobs:
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
- "-DCMAKE_GENERATOR:STRING=Ninja" `
+ -S ..\gpt4all-chat -B . -G Ninja `
"-DCMAKE_BUILD_TYPE=Release" `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
"-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
- "-DGPT4ALL_OFFLINE_INSTALLER=OFF" `
- "-DGPT4ALL_SIGN_INSTALLER=ON" `
- "-S ..\gpt4all-chat" `
- "-B ."
+ "-DGPT4ALL_OFFLINE_INSTALLER=OFF"
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
@@ -714,7 +710,9 @@ jobs:
command: |
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
export PATH=$PATH:/usr/local/cuda/bin
- ~/Qt/Tools/CMake/bin/cmake -DCMAKE_BUILD_TYPE=Release -S gpt4all-chat -B build
+ ~/Qt/Tools/CMake/bin/cmake \
+ -S gpt4all-chat -B build \
+ -DCMAKE_BUILD_TYPE=Release
~/Qt/Tools/CMake/bin/cmake --build build -j$(nproc) --target all
build-gpt4all-chat-windows:
@@ -774,13 +772,11 @@ jobs:
$Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
- "-DCMAKE_GENERATOR:STRING=Ninja" `
+ -S gpt4all-chat -B build -G Ninja `
"-DCMAKE_BUILD_TYPE=Release" `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
- "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
- "-S gpt4all-chat" `
- "-B build"
+ "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON"
& "C:\Qt\Tools\Ninja\ninja.exe" -C build
build-gpt4all-chat-macos:
@@ -816,13 +812,13 @@ jobs:
name: Build
command: |
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
- -DCMAKE_GENERATOR:STRING=Ninja \
- -DBUILD_UNIVERSAL=ON \
+ -S gpt4all-chat -B build -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
- -S gpt4all-chat \
- -B build
+ -DBUILD_UNIVERSAL=ON \
+ -DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
+ -DGGML_METAL_MACOSX_VERSION_MIN=12.6
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build build --target all
build-ts-docs:
docker:
@@ -892,7 +888,8 @@ jobs:
export PATH=$PATH:/usr/local/cuda/bin
git submodule update --init --recursive
cd gpt4all-backend
- cmake -B build -DCMAKE_BUILD_TYPE=Release \
+ cmake -B build \
+ -DCMAKE_BUILD_TYPE=Release \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DCMAKE_CUDA_ARCHITECTURES='52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build -j$(nproc)
@@ -910,7 +907,7 @@ jobs:
build-py-macos:
macos:
- xcode: "14.2.0"
+ xcode: 15.4.0
resource_class: macos.m1.large.gen1
steps:
- checkout
@@ -924,7 +921,11 @@ jobs:
command: |
git submodule update --init # don't use --recursive because macOS doesn't use Kompute
cd gpt4all-backend
- cmake -B build -DCMAKE_BUILD_TYPE=Release -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64"
+ cmake -B build \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DBUILD_UNIVERSAL=ON \
+ -DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
+ -DGGML_METAL_MACOSX_VERSION_MIN=12.6
cmake --build build --parallel
- run:
name: Build wheel
@@ -991,7 +992,8 @@ jobs:
$Env:PATH += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
cd gpt4all-backend
- cmake -G Ninja -B build -DCMAKE_BUILD_TYPE=Release `
+ cmake -B build -G Ninja `
+ -DCMAKE_BUILD_TYPE=Release `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
-DCMAKE_CUDA_ARCHITECTURES='52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build --parallel
@@ -1053,8 +1055,9 @@ jobs:
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
- cmake ../..
- cmake --build . -j$(nproc) --config Release
+ cmake ../.. \
+ -DCMAKE_BUILD_TYPE=Release
+ cmake --build . -j$(nproc)
mkdir ../linux-x64
cp -L *.so ../linux-x64 # otherwise persist_to_workspace seems to mess symlinks
- persist_to_workspace:
@@ -1082,8 +1085,12 @@ jobs:
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
- cmake ../.. -DCMAKE_OSX_ARCHITECTURES="x86_64;arm64"
- cmake --build . --parallel --config Release
+ cmake ../.. \
+ -DCMAKE_BUILD_TYPE=Release \
+ -DBUILD_UNIVERSAL=ON \
+ -DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
+ -DGGML_METAL_MACOSX_VERSION_MIN=12.6
+ cmake --build . --parallel
mkdir ../osx-x64
cp -L *.dylib ../osx-x64
cp ../../llama.cpp-mainline/*.metal ../osx-x64
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 2e5be01f..8d35abff 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Fixed
- Make reverse prompt detection work more reliably and prevent it from breaking output ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
+- Explicitly target macOS 12.6 in CI to fix Metal compatibility on older macOS ([#2849](https://github.com/nomic-ai/gpt4all/pull/2849))
## [2.8.0] - 2024-08-05
From 971c83d1d367331cb15e24852d603b73e4433ecd Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Tue, 13 Aug 2024 11:10:10 -0400
Subject: [PATCH 30/66] llama.cpp: pull in fix for Kompute-related nvidia-egl
crash (#2843)
Signed-off-by: Jared Van Bortel
---
gpt4all-backend/llama.cpp-mainline | 2 +-
gpt4all-bindings/python/CHANGELOG.md | 2 ++
gpt4all-chat/CHANGELOG.md | 7 +++++++
3 files changed, 10 insertions(+), 1 deletion(-)
diff --git a/gpt4all-backend/llama.cpp-mainline b/gpt4all-backend/llama.cpp-mainline
index add38785..443665ae 160000
--- a/gpt4all-backend/llama.cpp-mainline
+++ b/gpt4all-backend/llama.cpp-mainline
@@ -1 +1 @@
-Subproject commit add387854ea73d83770a62282089dea666fa266f
+Subproject commit 443665aec4721ecf57df8162e7e093a0cd674a76
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 8d35abff..9dfd7f9f 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -15,6 +15,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Fixed
- Make reverse prompt detection work more reliably and prevent it from breaking output ([#2781](https://github.com/nomic-ai/gpt4all/pull/2781))
- Explicitly target macOS 12.6 in CI to fix Metal compatibility on older macOS ([#2849](https://github.com/nomic-ai/gpt4all/pull/2849))
+- Do not initialize Vulkan driver when only using CPU ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
+- Fix a segfault on exit when using CPU mode on Linux with NVIDIA and EGL ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
## [2.8.0] - 2024-08-05
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index e63207e4..64b461da 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -4,6 +4,12 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+## [Unreleased]
+
+### Fixed
+- Do not initialize Vulkan driver when only using CPU ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
+- Fix a potential crash on exit when using only CPU on Linux with NVIDIA (does not affect X11) ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
+
## [3.2.0] - 2024-08-12
### Added
@@ -83,6 +89,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix several Vulkan resource management issues ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
- Fix crash/hang when some models stop generating, by showing special tokens ([#2701](https://github.com/nomic-ai/gpt4all/pull/2701))
+[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/v3.2.0...HEAD
[3.2.0]: https://github.com/nomic-ai/gpt4all/compare/v3.1.1...v3.2.0
[3.1.1]: https://github.com/nomic-ai/gpt4all/compare/v3.1.0...v3.1.1
[3.1.0]: https://github.com/nomic-ai/gpt4all/compare/v3.0.0...v3.1.0
From 7463b2170b84798693f9e570843d60dabc198a1b Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Tue, 13 Aug 2024 14:47:48 -0400
Subject: [PATCH 31/66] backend(build): set CUDA arch defaults before
enable_language(CUDA) (#2855)
Signed-off-by: Jared Van Bortel
---
gpt4all-backend/CMakeLists.txt | 18 ++++++++++++++++++
gpt4all-backend/llama.cpp.cmake | 14 +-------------
gpt4all-chat/CHANGELOG.md | 1 +
3 files changed, 20 insertions(+), 13 deletions(-)
diff --git a/gpt4all-backend/CMakeLists.txt b/gpt4all-backend/CMakeLists.txt
index 6b38b954..684080ab 100644
--- a/gpt4all-backend/CMakeLists.txt
+++ b/gpt4all-backend/CMakeLists.txt
@@ -63,6 +63,24 @@ if (LLMODEL_VULKAN)
list(APPEND BUILD_VARIANTS vulkan vulkan-avxonly)
endif()
if (LLMODEL_CUDA)
+ cmake_minimum_required(VERSION 3.18) # for CMAKE_CUDA_ARCHITECTURES
+
+ # Defaults must be set before enable_language(CUDA).
+ # Keep this in sync with the arch list in ggml/src/CMakeLists.txt.
+ if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
+ # 52 == lowest CUDA 12 standard
+ # 60 == f16 CUDA intrinsics
+ # 61 == integer CUDA intrinsics
+ # 70 == compute capability at which unrolling a loop in mul_mat_q kernels is faster
+ if (GGML_CUDA_F16 OR GGML_CUDA_DMMV_F16)
+ set(CMAKE_CUDA_ARCHITECTURES "60;61;70;75") # needed for f16 CUDA intrinsics
+ else()
+ set(CMAKE_CUDA_ARCHITECTURES "52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
+ #set(CMAKE_CUDA_ARCHITECTURES "OFF") # use this to compile much faster, but only F16 models work
+ endif()
+ endif()
+ message(STATUS "Using CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")
+
include(CheckLanguage)
check_language(CUDA)
if (NOT CMAKE_CUDA_COMPILER)
diff --git a/gpt4all-backend/llama.cpp.cmake b/gpt4all-backend/llama.cpp.cmake
index a541da03..c82668d7 100644
--- a/gpt4all-backend/llama.cpp.cmake
+++ b/gpt4all-backend/llama.cpp.cmake
@@ -378,19 +378,7 @@ function(include_ggml SUFFIX)
find_package(CUDAToolkit REQUIRED)
set(CUDAToolkit_BIN_DIR ${CUDAToolkit_BIN_DIR} PARENT_SCOPE)
- if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
- # 52 == lowest CUDA 12 standard
- # 60 == f16 CUDA intrinsics
- # 61 == integer CUDA intrinsics
- # 70 == compute capability at which unrolling a loop in mul_mat_q kernels is faster
- if (GGML_CUDA_F16 OR GGML_CUDA_DMMV_F16)
- set(CMAKE_CUDA_ARCHITECTURES "60;61;70;75") # needed for f16 CUDA intrinsics
- else()
- set(CMAKE_CUDA_ARCHITECTURES "52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
- #set(CMAKE_CUDA_ARCHITECTURES "OFF") # use this to compile much faster, but only F16 models work
- endif()
- endif()
- message(STATUS "Using CUDA architectures: ${CMAKE_CUDA_ARCHITECTURES}")
+ # architectures are set in gpt4all-backend/CMakeLists.txt
set(GGML_HEADERS_CUDA ${DIRECTORY}/ggml/include/ggml-cuda.h)
file(GLOB GGML_HEADERS_CUDA "${DIRECTORY}/ggml/src/ggml-cuda/*.cuh")
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 64b461da..ae4b5d08 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Fixed
- Do not initialize Vulkan driver when only using CPU ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
- Fix a potential crash on exit when using only CPU on Linux with NVIDIA (does not affect X11) ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
+- Fix default CUDA architecture list after [#2802](https://github.com/nomic-ai/gpt4all/pull/2802) ([#2855](https://github.com/nomic-ai/gpt4all/pull/2855))
## [3.2.0] - 2024-08-12
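The comment block in the new CMake hunk doubles as a decision table: each architecture number is the minimum compute capability required by some kernel feature. Restated as a hedged Python sketch (illustrative only; the authoritative logic is the CMake code above):

    def default_cuda_archs(cuda_f16):
        # 52 = lowest CC that CUDA 12 supports, 60 = f16 intrinsics,
        # 61 = integer intrinsics, 70 = faster mul_mat_q loop unrolling
        if cuda_f16:
            return ["60", "61", "70", "75"]  # f16 intrinsics need CC >= 6.0
        return ["52", "61", "70", "75"]

The ordering constraint is the real fix here: CMAKE_CUDA_ARCHITECTURES must be set before enable_language(CUDA) runs, otherwise CMake picks its own default during compiler detection.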
From 8ccf1fa2f5a576e69334edb7b88b47d2666ed968 Mon Sep 17 00:00:00 2001
From: AT
Date: Tue, 13 Aug 2024 14:59:32 -0400
Subject: [PATCH 32/66] Change version to v3.2.1 for bugfix release. (#2856)
Signed-off-by: Adam Treat
---
gpt4all-chat/CMakeLists.txt | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 27f3f5d9..ccaa39b7 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -20,7 +20,7 @@ set(APP_VERSION_MAJOR 3)
set(APP_VERSION_MINOR 2)
set(APP_VERSION_PATCH 1)
set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
-set(APP_VERSION "${APP_VERSION_BASE}-dev0")
+set(APP_VERSION "${APP_VERSION_BASE}")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake/Modules")
From 6518b3369799a6722e25eb5d119d379e25f13bb2 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Tue, 13 Aug 2024 17:04:50 -0400
Subject: [PATCH 33/66] llamamodel: use greedy sampling when temp=0 (#2854)
Signed-off-by: Jared Van Bortel
---
gpt4all-backend/llamamodel.cpp | 26 +++++++++++++++++---------
gpt4all-bindings/python/CHANGELOG.md | 3 +++
gpt4all-chat/CHANGELOG.md | 8 +++++++-
3 files changed, 27 insertions(+), 10 deletions(-)
diff --git a/gpt4all-backend/llamamodel.cpp b/gpt4all-backend/llamamodel.cpp
index f07a05e8..e2bbd0ac 100644
--- a/gpt4all-backend/llamamodel.cpp
+++ b/gpt4all-backend/llamamodel.cpp
@@ -137,7 +137,7 @@ struct gpt_params {
bool use_mlock = false; // use mlock to keep model in memory
};
-static int llama_sample_top_p_top_k(
+static llama_token llama_sample_top_p_top_k(
llama_context *ctx,
const llama_token *last_n_tokens_data,
int last_n_tokens_size,
@@ -157,14 +157,22 @@ static int llama_sample_top_p_top_k(
llama_token_data_array candidates_p = {candidates.data(), candidates.size(), false};
// Sample repeat penalty
llama_sample_repetition_penalties(nullptr, &candidates_p, last_n_tokens_data, last_n_tokens_size, repeat_penalty, 0.0f, 0.0f);
- // Temperature sampling
- llama_sample_top_k(ctx, &candidates_p, top_k, 1);
- llama_sample_tail_free(ctx, &candidates_p, 1.0f, 1);
- llama_sample_typical(ctx, &candidates_p, 1.0f, 1);
- llama_sample_top_p(ctx, &candidates_p, top_p, 1);
- llama_sample_min_p(ctx, &candidates_p, min_p, 1);
- llama_sample_temp(ctx, &candidates_p, temp);
- return llama_sample_token(ctx, &candidates_p);
+
+ llama_token id;
+ if (temp == 0.0) {
+ // greedy sampling, no probs
+ id = llama_sample_token_greedy(ctx, &candidates_p);
+ } else {
+ // temperature sampling
+ llama_sample_top_k(ctx, &candidates_p, top_k, 1);
+ llama_sample_tail_free(ctx, &candidates_p, 1.0f, 1);
+ llama_sample_typical(ctx, &candidates_p, 1.0f, 1);
+ llama_sample_top_p(ctx, &candidates_p, top_p, 1);
+ llama_sample_min_p(ctx, &candidates_p, min_p, 1);
+ llama_sample_temp(ctx, &candidates_p, temp);
+ id = llama_sample_token(ctx, &candidates_p);
+ }
+ return id;
}
const char *get_arch_name(gguf_context *ctx_gguf)
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 9dfd7f9f..7de7b980 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -6,6 +6,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [Unreleased]
+### Added
+- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
+
### Changed
- Search for pip-installed CUDA 11 as well as CUDA 12 ([#2802](https://github.com/nomic-ai/gpt4all/pull/2802))
- Stop shipping CUBINs to reduce wheel size ([#2802](https://github.com/nomic-ai/gpt4all/pull/2802))
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index ae4b5d08..969669b1 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -6,6 +6,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [Unreleased]
+### Added
+- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
+
+## [3.2.1] - 2024-08-13
+
### Fixed
- Do not initialize Vulkan driver when only using CPU ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
- Fix a potential crash on exit when using only CPU on Linux with NVIDIA (does not affect X11) ([#2843](https://github.com/nomic-ai/gpt4all/pull/2843))
@@ -90,7 +95,8 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix several Vulkan resource management issues ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
- Fix crash/hang when some models stop generating, by showing special tokens ([#2701](https://github.com/nomic-ai/gpt4all/pull/2701))
-[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/v3.2.0...HEAD
+[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/v3.2.1...HEAD
+[3.2.1]: https://github.com/nomic-ai/gpt4all/compare/v3.2.0...v3.2.1
[3.2.0]: https://github.com/nomic-ai/gpt4all/compare/v3.1.1...v3.2.0
[3.1.1]: https://github.com/nomic-ai/gpt4all/compare/v3.1.0...v3.1.1
[3.1.0]: https://github.com/nomic-ai/gpt4all/compare/v3.0.0...v3.1.0
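The sampling change above reduces to a special case: when temp == 0 the sampler deterministically returns the most probable token, and otherwise the usual top-k/tail-free/typical/top-p/min-p/temperature pipeline runs before a token is drawn. A simplified Python sketch of that control flow (top-k and temperature only; this is not the llama.cpp API):

    import math, random

    def sample(logits, temp, top_k):
        if temp == 0.0:
            # greedy: plain argmax, no randomness
            return max(range(len(logits)), key=lambda i: logits[i])
        # keep the top_k candidates, then draw from the temperature-scaled softmax
        top = sorted(range(len(logits)), key=lambda i: logits[i], reverse=True)[:top_k]
        m = max(logits[i] for i in top)  # subtract the max for numeric stability
        weights = [math.exp((logits[i] - m) / temp) for i in top]
        return random.choices(top, weights=weights, k=1)[0]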
From 3ba9c6344d3610ff8b2d54b650f97eb288f6d6d1 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Tue, 13 Aug 2024 17:12:34 -0400
Subject: [PATCH 34/66] python: release version 2.8.1 (#2857)
Signed-off-by: Jared Van Bortel
---
gpt4all-bindings/python/CHANGELOG.md | 4 ++--
gpt4all-bindings/python/setup.py | 2 +-
2 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 7de7b980..4753e648 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -4,7 +4,7 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
-## [Unreleased]
+## [2.8.1] - 2024-08-13
### Added
- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
@@ -51,5 +51,5 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Restore leading space removal logic that was incorrectly removed in [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
- CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
-[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.0...HEAD
+[2.8.1]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.0...python-v2.8.1
[2.8.0]: https://github.com/nomic-ai/gpt4all/compare/python-v2.7.0...python-v2.8.0
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index e92fba61..1374d92d 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -68,7 +68,7 @@ def get_long_description():
setup(
name=package_name,
- version="2.8.1.dev0",
+ version="2.8.1",
description="Python bindings for GPT4All",
long_description=get_long_description(),
long_description_content_type="text/markdown",
From af9416c0bf4ac445a4ada92d0451c5c8ad18022d Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Tue, 13 Aug 2024 19:11:04 -0400
Subject: [PATCH 35/66] python: fix CUDA dependency version (#2858)
Signed-off-by: Jared Van Bortel
---
gpt4all-bindings/python/setup.py | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index 1374d92d..63bd9140 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -94,8 +94,8 @@ setup(
],
extras_require={
'cuda': [
- 'nvidia-cuda-runtime-cu12',
- 'nvidia-cublas-cu12',
+ 'nvidia-cuda-runtime-cu11',
+ 'nvidia-cublas-cu11',
],
'all': [
'gpt4all[cuda]; platform_system == "Windows" or platform_system == "Linux"',
From 3386ac6331ee79af776a27af4a3b80774ff84d58 Mon Sep 17 00:00:00 2001
From: AT
Date: Tue, 13 Aug 2024 19:24:25 -0400
Subject: [PATCH 36/66] Add release notes and bump version for v3.2.1 (#2859)
Signed-off-by: Adam Treat
Signed-off-by: AT
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CMakeLists.txt | 4 ++--
gpt4all-chat/metadata/latestnews.md | 2 +-
gpt4all-chat/metadata/release.json | 13 +++++++++++++
3 files changed, 16 insertions(+), 3 deletions(-)
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index ccaa39b7..e0f6b6f8 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -18,9 +18,9 @@ endif()
set(APP_VERSION_MAJOR 3)
set(APP_VERSION_MINOR 2)
-set(APP_VERSION_PATCH 1)
+set(APP_VERSION_PATCH 2)
set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
-set(APP_VERSION "${APP_VERSION_BASE}")
+set(APP_VERSION "${APP_VERSION_BASE}-dev0")
list(APPEND CMAKE_MODULE_PATH "${CMAKE_CURRENT_LIST_DIR}/cmake/Modules")
diff --git a/gpt4all-chat/metadata/latestnews.md b/gpt4all-chat/metadata/latestnews.md
index 436e4d4b..8675584a 100644
--- a/gpt4all-chat/metadata/latestnews.md
+++ b/gpt4all-chat/metadata/latestnews.md
@@ -1,6 +1,6 @@
## Latest News
-We're happy to announce that version 3.2.0 has been released! This new version brings:
+Version 3.2.1 has now been released, which fixes an issue with poor-quality responses on NVIDIA GPUs in 3.2.0. The new 3.2 minor version brings:
* **Official Language Translations**: Translations for Simplified Chinese, Traditional Chinese, Italian, Portuguese, Romanian, and Spanish.
Go to Settings > Language and Locale to change the application language.
diff --git a/gpt4all-chat/metadata/release.json b/gpt4all-chat/metadata/release.json
index cd6c54ba..f7d997c1 100644
--- a/gpt4all-chat/metadata/release.json
+++ b/gpt4all-chat/metadata/release.json
@@ -987,6 +987,19 @@
* Dominik (`@cosmic-snow`)
* Jack (`@wuodoo`)
* Community (beta testers, bug reporters, bindings authors)
+"
+ },
+ {
+ "version": "3.2.1",
+ "notes":
+"
+— Fixes —
+* Fix a potential Vulkan crash on application exit on some Linux systems
+* Fix a bad CUDA build option that led to gibberish on newer NVIDIA GPUs
+",
+ "contributors":
+"
+* Jared Van Bortel (Nomic AI)
"
}
]
From a232befa58c8a82356a7e907b30d14d77f904f8f Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 14 Aug 2024 13:30:14 -0400
Subject: [PATCH 37/66] python: fix py3.8 compat (#2871)
Signed-off-by: Jared Van Bortel
---
gpt4all-bindings/python/CHANGELOG.md | 5 +++
gpt4all-bindings/python/gpt4all/_pyllmodel.py | 2 +-
gpt4all-bindings/python/gpt4all/gpt4all.py | 34 +++++++++++--------
gpt4all-bindings/python/setup.py | 2 +-
4 files changed, 26 insertions(+), 17 deletions(-)
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 4753e648..e2be48e6 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+## [2.8.2] - 2024-08-14
+
+### Fixed
+- Fixed incompatibility with Python 3.8 since v2.7.0 and Python <=3.11 since v2.8.1 ([#2871](https://github.com/nomic-ai/gpt4all/pull/2871))
+
## [2.8.1] - 2024-08-13
### Added
diff --git a/gpt4all-bindings/python/gpt4all/_pyllmodel.py b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
index 88be2949..d623b528 100644
--- a/gpt4all-bindings/python/gpt4all/_pyllmodel.py
+++ b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
@@ -45,7 +45,7 @@ def _load_cuda(rtver: str, blasver: str) -> None:
cudalib = f"lib/libcudart.so.{rtver}"
cublaslib = f"lib/libcublas.so.{blasver}"
else: # Windows
- cudalib = fr"bin\cudart64_{rtver.replace(".", "")}.dll"
+ cudalib = fr"bin\cudart64_{rtver.replace('.', '')}.dll"
cublaslib = fr"bin\cublas64_{blasver}.dll"
# preload the CUDA libs so the backend can find them
diff --git a/gpt4all-bindings/python/gpt4all/gpt4all.py b/gpt4all-bindings/python/gpt4all/gpt4all.py
index 4364d7d5..1711429d 100644
--- a/gpt4all-bindings/python/gpt4all/gpt4all.py
+++ b/gpt4all-bindings/python/gpt4all/gpt4all.py
@@ -209,27 +209,27 @@ class GPT4All:
self._current_prompt_template: str = "{0}"
device_init = None
- if sys.platform == 'darwin':
+ if sys.platform == "darwin":
if device is None:
- backend = 'auto' # 'auto' is effectively 'metal' due to currently non-functional fallback
- elif device == 'cpu':
- backend = 'cpu'
+ backend = "auto" # "auto" is effectively "metal" due to currently non-functional fallback
+ elif device == "cpu":
+ backend = "cpu"
else:
- if platform.machine() != 'arm64' or device != 'gpu':
- raise ValueError(f'Unknown device for this platform: {device}')
- backend = 'metal'
+ if platform.machine() != "arm64" or device != "gpu":
+ raise ValueError(f"Unknown device for this platform: {device}")
+ backend = "metal"
else:
- backend = 'kompute'
- if device is None or device == 'cpu':
+ backend = "kompute"
+ if device is None or device == "cpu":
pass # use kompute with no device
- elif device in ('cuda', 'kompute'):
+ elif device in ("cuda", "kompute"):
backend = device
- device_init = 'gpu'
- elif device.startswith('cuda:'):
- backend = 'cuda'
- device_init = device.removeprefix('cuda:')
+ device_init = "gpu"
+ elif device.startswith("cuda:"):
+ backend = "cuda"
+ device_init = _remove_prefix(device, "cuda:")
else:
- device_init = device.removeprefix('kompute:')
+ device_init = _remove_prefix(device, "kompute:")
# Retrieve model and download if allowed
self.config: ConfigType = self.retrieve_model(model_name, model_path=model_path, allow_download=allow_download, verbose=verbose)
@@ -706,3 +706,7 @@ def _fsync(fd: int | _HasFileno) -> None:
else:
return
os.fsync(fd)
+
+
+def _remove_prefix(s: str, prefix: str) -> str:
+ return s[len(prefix):] if s.startswith(prefix) else s
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index 63bd9140..e22cbe23 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -68,7 +68,7 @@ def get_long_description():
setup(
name=package_name,
- version="2.8.1",
+ version="2.8.2",
description="Python bindings for GPT4All",
long_description=get_long_description(),
long_description_content_type="text/markdown",
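Both fixes in this patch work around Python-version pitfalls: an f-string could not reuse the enclosing quote character before Python 3.12, and str.removeprefix() only exists since Python 3.9. A short demonstration of the portable forms used above:

    rtver = "11.0"
    # fine on all supported versions: the inner quotes differ from the outer ones
    cudalib = fr"bin\cudart64_{rtver.replace('.', '')}.dll"

    def _remove_prefix(s, prefix):
        # stand-in for str.removeprefix(), which requires Python 3.9+
        return s[len(prefix):] if s.startswith(prefix) else s

    assert _remove_prefix("cuda:0", "cuda:") == "0"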
From b99ca17a7d80e5afa784c131c42cbe2776be4fce Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 14 Aug 2024 14:21:45 -0400
Subject: [PATCH 38/66] python: fix missing link in changelog
Signed-off-by: Jared Van Bortel
---
gpt4all-bindings/python/CHANGELOG.md | 1 +
1 file changed, 1 insertion(+)
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index e2be48e6..69346498 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -56,5 +56,6 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Restore leading space removal logic that was incorrectly removed in [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
- CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
+[2.8.2]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.1...python-v2.8.2
[2.8.1]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.0...python-v2.8.1
[2.8.0]: https://github.com/nomic-ai/gpt4all/compare/python-v2.7.0...python-v2.8.0
From 7073fe341f2af92747ec0400d9f5e324ef013eeb Mon Sep 17 00:00:00 2001
From: Simon Willison
Date: Wed, 14 Aug 2024 13:28:04 -0700
Subject: [PATCH 39/66] Add Changelog to links on PyPI (#2860)
Signed-off-by: Simon Willison
---
gpt4all-bindings/python/setup.py | 1 +
1 file changed, 1 insertion(+)
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index e22cbe23..cd8485aa 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -78,6 +78,7 @@ setup(
project_urls={
"Documentation": "https://docs.gpt4all.io/gpt4all_python.html",
"Source code": "https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python",
+ "Changelog": "https://github.com/nomic-ai/gpt4all/blob/main/gpt4all-bindings/python/CHANGELOG.md",
},
classifiers = [
"Programming Language :: Python :: 3",
From 3aa680634195f41c249220f9f6f7b66b6590a6fb Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 14 Aug 2024 16:28:17 -0400
Subject: [PATCH 40/66] LocalDocsSettings: fix embedding device selection after
#2690 (#2873)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 3 +
gpt4all-chat/qml/LocalDocsSettings.qml | 16 +-
gpt4all-chat/translations/gpt4all_en_US.ts | 520 ++++----
gpt4all-chat/translations/gpt4all_es_MX.ts | 1209 +++++++++----------
gpt4all-chat/translations/gpt4all_it_IT.ts | 430 +++----
gpt4all-chat/translations/gpt4all_pt_BR.ts | 430 +++----
gpt4all-chat/translations/gpt4all_ro_RO.ts | 1261 ++++++++++----------
gpt4all-chat/translations/gpt4all_zh_CN.ts | 430 +++----
gpt4all-chat/translations/gpt4all_zh_TW.ts | 430 +++----
9 files changed, 2396 insertions(+), 2333 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 969669b1..745d972a 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -9,6 +9,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
+### Fixed
+- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
+
## [3.2.1] - 2024-08-13
### Fixed
diff --git a/gpt4all-chat/qml/LocalDocsSettings.qml b/gpt4all-chat/qml/LocalDocsSettings.qml
index 47f44090..45df85e3 100644
--- a/gpt4all-chat/qml/LocalDocsSettings.qml
+++ b/gpt4all-chat/qml/LocalDocsSettings.qml
@@ -163,7 +163,7 @@ MySettingsTab {
MySettingsLabel {
id: deviceLabel
text: qsTr("Embeddings Device")
- helpText: qsTr('The compute device used for embeddings. "Auto" uses the CPU. Requires restart.')
+ helpText: qsTr('The compute device used for embeddings. Requires restart.')
}
MyComboBox {
id: deviceBox
@@ -172,11 +172,18 @@ MySettingsTab {
Layout.maximumWidth: 400
Layout.fillWidth: false
Layout.alignment: Qt.AlignRight
- model: MySettings.embeddingsDeviceList
+ model: ListModel {
+ ListElement { text: qsTr("Application default") }
+ Component.onCompleted: {
+ MySettings.embeddingsDeviceList.forEach(d => append({"text": d}));
+ }
+ }
Accessible.name: deviceLabel.text
Accessible.description: deviceLabel.helpText
function updateModel() {
- deviceBox.currentIndex = deviceBox.indexOfValue(MySettings.localDocsEmbedDevice);
+ var device = MySettings.localDocsEmbedDevice;
+ // This usage of 'Auto' should not be translated
+ deviceBox.currentIndex = device === "Auto" ? 0 : deviceBox.indexOfValue(device);
}
Component.onCompleted: {
deviceBox.updateModel();
@@ -188,7 +195,8 @@ MySettingsTab {
}
}
onActivated: {
- MySettings.localDocsEmbedDevice = deviceBox.currentText;
+ // This usage of 'Auto' should not be translated
+ MySettings.localDocsEmbedDevice = deviceBox.currentIndex === 0 ? "Auto" : deviceBox.currentText;
}
}
}
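The QML change keeps the untranslatable sentinel "Auto" as the stored settings value while presenting the translatable "Application default" label at index 0 of the combo box. The index/value mapping, restated as a hedged Python sketch (function names are illustrative):

    def index_for(device, devices):
        # index 0 is the "Application default" entry, persisted as the literal "Auto"
        return 0 if device == "Auto" else 1 + devices.index(device)

    def device_for(index, devices):
        return "Auto" if index == 0 else devices[index - 1]

    assert device_for(index_for("Auto", ["CUDA: 0"]), ["CUDA: 0"]) == "Auto"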
diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
index 0769d674..b8944c0c 100644
--- a/gpt4all-chat/translations/gpt4all_en_US.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
@@ -115,44 +115,44 @@
@@ -163,266 +163,266 @@
@@ -490,238 +490,238 @@
@@ -1520,66 +1520,72 @@ model to get started
[Qt Linguist XML markup (<location>/<source>/<translation> tags) was lost in extraction for these hunks. Recoverable changes: <location> references are renumbered or reordered for otherwise unchanged en_US source strings across the AddModelView, ApplicationSettings, and LocalDocsSettings contexts (search, sort, download, and install strings; theme, font size, language, device, server, and update strings; LocalDocs snippet strings); one help string is shortened and one new string is added:]
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
+ The compute device used for embeddings. Requires restart.
+ Application default
[The new Application default entry is used by both the Device combo box in ApplicationSettings and the Embeddings Device combo box in LocalDocsSettings.]
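Each catalog diff that follows is the mechanical result of rerunning lupdate after the QML change: <location> line references shift and the new "Application default" string enters every language. When a string sits next to a value that must stay untranslated, Qt lets the developer ship a note to translators with a "//:" comment, which lupdate copies into the .ts entry; a hedged sketch of that style, not a line from this patch:

    //: Combo box label; the stored value "Auto" must remain untranslated.
    ListElement { text: qsTr("Application default") }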
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index a62f2d6d..433d514a 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -79,346 +79,350 @@
AddModelView
[Markup lost in extraction; this hunk only renumbers <location> references for unchanged AddModelView entries and their existing Spanish translations: the search, sort, and filter strings, the download/install strings and their accessibility descriptions, the hash, size, parameter, and quant fields, and the $API_KEY/$BASE_URL/$MODEL_NAME error prompts.]
@@ -443,6 +447,18 @@
opt-in to share feedback/conversations
optar por compartir comentarios/conversaciones
+
+
+
+ ERROR: Update system could not find the MaintenanceTool used<br>
+ to check for updates!<br><br>
+ Did you install this application using the online installer? If so,<br>
+ the MaintenanceTool executable should be located one directory<br>
+ above where this application resides on your filesystem.<br><br>
+ If you can't start it manually, then I'm afraid you'll have to<br>
+ reinstall.
+
+
@@ -474,256 +490,252 @@
El esquema de colores de la aplicación.
[Markup lost in extraction; these ApplicationSettings hunks (including @@ -731,7 +743,7 @@) renumber <location> references for unchanged entries (theme, font size, language, device, default model, suggestion mode, download path, datalake, CPU threads, chat context, local server, and update strings) with their existing Spanish translations, including Application default → Predeterminado de la aplicación. Obsolete multi-line variants are re-flowed onto single lines with wording unchanged: The compute device used for text generation. "Auto" uses Vulkan or Metal. → El dispositivo de cómputo utilizado para la generación de texto. "Auto" utiliza Vulkan o Metal., and the Spanish MaintenanceTool error message.]
@@ -754,6 +766,19 @@
Chat del servidor
+
+ ChatAPIWorker
+
+
+ ERROR: Network error occurred while connecting to the API server
+ ERROR: Ocurrió un error de red al conectar con el servidor API
+
+
+
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2
+ ChatAPIWorker::handleFinished obtuvo Error HTTP %1 %2
+
+
ChatDrawer
@@ -962,9 +987,9 @@
@@ -981,20 +1006,20 @@
[Markup lost in extraction; unchanged ChatDrawer and ChatView entries (LocalDocs → DocumentosLocales, Load the default model, Loads the default model which can be changed in settings, No Model Installed) are renumbered with their Spanish translations intact.]
@@ -1005,174 +1030,172 @@
<h3>Se encontró un error al cargar el modelo:</h3><br><i>"%1"</i><br><br>Los fallos en la carga de modelos pueden ocurrir por varias razones, pero las causas más comunes incluyen un formato de archivo incorrecto, una descarga incompleta o corrupta, un tipo de archivo equivocado, RAM del sistema insuficiente o un tipo de modelo incompatible. Aquí hay algunas sugerencias para resolver el problema:<br><ul><li>Asegúrate de que el archivo del modelo tenga un formato y tipo compatibles<li>Verifica que el archivo del modelo esté completo en la carpeta de descargas<li>Puedes encontrar la carpeta de descargas en el diálogo de configuración<li>Si has cargado el modelo manualmente, asegúrate de que el archivo no esté corrupto verificando el md5sum<li>Lee más sobre qué modelos son compatibles en nuestra <a href="https://docs.gpt4all.io/">documentación</a> para la interfaz gráfica<li>Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de discord</a> para obtener ayuda
[Markup lost in extraction; hunks @@ -1183,68 +1206,68 @@, @@ -1267,14 +1290,14 @@, @@ -1282,8 +1305,8 @@, and @@ -1291,24 +1314,62 @@ renumber <location> references for unchanged ChatView and CollectionsDrawer entries (conversation controls, response status strings, clipboard actions, %n file(s)/%n word(s) plurals, Updating, + Add Docs) with their Spanish translations; the obsolete recalculating context ... → recalculando contexto ... entry is re-flowed unchanged.]
+
+ Download
+
+
+ Model "%1" is installed successfully.
+ El modelo "%1" se ha instalado correctamente.
+
+
+
+ ERROR: $MODEL_NAME is empty.
+ ERROR: $MODEL_NAME está vacío.
+
+
+
+ ERROR: $API_KEY is empty.
+ ERROR: $API_KEY está vacía.
+
+
+
+ ERROR: $BASE_URL is invalid.
+ ERROR: $BASE_URL no es válida.
+
+
+
+ ERROR: Model "%1 (%2)" is conflict.
+ ERROR: El modelo "%1 (%2)" está en conflicto.
+
+
+
+ Model "%1 (%2)" is installed successfully.
+ El modelo "%1 (%2)" se ha instalado correctamente.
+
+
+
+ Model "%1" is removed.
+ El modelo "%1" ha sido eliminado.
+
+
HomeView
@@ -1449,7 +1510,7 @@ modelo para comenzar
Comma-separated list. LocalDocs will only attempt to process files with these
extensions.
- Lista separada por comas. DocumentosLocales solo intentará procesar
+ Lista separada por comas. DocumentosLocales solo intentará procesar
archivos con estas extensiones.
@@ -1467,7 +1528,7 @@ modelo para comenzar
Embed documents using the fast Nomic API instead of a private local model.
Requires restart.
- Incrustar documentos usando la rápida API de Nomic en lugar de un modelo
+ Incrustar documentos usando la rápida API de Nomic en lugar de un modelo
local privado. Requiere reinicio.
@@ -1477,6 +1538,8 @@ modelo para comenzar
Clave API de Nomic
+
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
+ Clave API para usar con Nomic Embed. Obtén una en la <a href="https://atlas.nomic.ai/cli-login">página de claves API</a> de Atlas. Requiere reinicio.
@@ -1487,10 +1550,10 @@ modelo para comenzar
Dispositivo de incrustaciones
- The compute device used for embeddings. "Auto" uses the CPU. Requires
- restart.
- El dispositivo de cómputo utilizado para las incrustaciones.
- "Auto" usa la CPU. Requiere reinicio.
+
+
+ The compute device used for embeddings. Requires restart.
+ El dispositivo de cómputo utilizado para las incrustaciones. Requiere reinicio.
@@ -1505,73 +1568,73 @@ modelo para comenzar
Incrustar documentos usando la API rápida de Nomic en lugar de un modelo local privado. Requiere reinicio.
-
-
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- El dispositivo de cómputo utilizado para las incrustaciones. "Auto" usa la CPU. Requiere reinicio.
+
+
+ Application default
+ Predeterminado de la aplicación
[Markup lost in extraction; the remaining LocalDocsSettings entries (Display → Visualización, Show Sources → Mostrar fuentes, Advanced → Avanzado, Warning: Advanced usage only., the snippet size and snippet count strings, and the Values too large... warning) are renumbered with their Spanish translations unchanged; the obsolete multi-line variant of the warning is re-flowed.]
@@ -1597,122 +1660,120 @@ modelo para comenzar
+ Agregar colección
[Markup lost in extraction; LocalDocsView entries (collection list, the indexing/embedding status labels, %n file(s)/%n word(s) plurals, Remove, Rebuild, Update) are renumbered with their Spanish translations unchanged; the obsolete ERROR: The LocalDocs database is not valid. → ERROR: La base de datos de DocumentosLocales no es válida. entry is re-flowed.]
@@ -1768,54 +1829,53 @@ modelo para comenzar
ModelList
[Markup lost in extraction; ModelList entries (the OpenAI, Mistral, and OpenAI-compatible API descriptions and the Created by %1 model-card text) are renumbered with their Spanish translations unchanged; the obsolete two-line variant of <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 is re-flowed.]
@@ -1885,7 +1945,8 @@ modelo para comenzar
Indicación del sistema
[Markup lost in extraction; ModelSettings entries (the System Prompt, Prompt Template, and chat-name prompt notes, Suggested FollowUp Prompt, Context Length, Temperature, Top-P, Min-P, Top-K, Max Length, Prompt Batch Size, Repeat Penalty, Repeat Penalty Tokens, GPU Layers, and their explanatory notes) are renumbered with their Spanish translations unchanged; the obsolete multi-line context-length note is re-flowed.]
@@ -2115,228 +2178,234 @@ NOTA: No surte efecto hasta que recargue el modelo.
ModelsView
[Markup lost in extraction; ModelsView entries (No Models Installed, + Add Model, Installed Models, and the download/install strings shared with AddModelView) are renumbered with their Spanish translations unchanged.]
@@ -2427,7 +2496,7 @@ response. If you dislike a response, you can suggest an alternative response. Th
data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
- Al habilitar esta función, podrá participar en el proceso democrático de entrenamiento de un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo. Cuando un modelo GPT4All le responda y usted haya aceptado, su conversación se enviará al Datalake de código abierto de GPT4All. Además, puede dar me gusta/no me gusta a su respuesta. Si no le gusta una respuesta, puede sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
+ Al habilitar esta función, podrá participar en el proceso democrático de entrenamiento de un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo. Cuando un modelo GPT4All le responda y usted haya aceptado, su conversación se enviará al Datalake de código abierto de GPT4All. Además, puede dar me gusta/no me gusta a su respuesta. Si no le gusta una respuesta, puede sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
NOTA: Al activar esta función, enviará sus datos al Datalake de código abierto de GPT4All. No debe esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puede esperar una atribución opcional si lo desea. Sus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a sus datos y se le acreditará como contribuyente en cualquier lanzamiento de modelo GPT4All que utilice sus datos.
@@ -2545,9 +2614,8 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
QObject
- Default
- Predeterminado
+ Predeterminado
@@ -2606,26 +2674,22 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
Notas de la versión para esta versión
-
- ### Release notes
%1### Contributors
%2
- ### Notas de la versión
+ ### Notas de la versión
%1### Colaboradores
%2
-
- ### Opt-ins for anonymous usage analytics and datalake By enabling these features, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
- ### Autorización para análisis de uso anónimo y datalake Al habilitar estas funciones, podrás participar en el proceso democrático de entrenar un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo. Cuando un modelo GPT4All te responda y hayas aceptado participar, tu conversación se enviará al Datalake de Código Abierto de GPT4All. Además, podrás indicar si te gusta o no su respuesta. Si no te gusta una respuesta, puedes sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
+ ### Autorización para análisis de uso anónimo y datalake Al habilitar estas funciones, podrás participar en el proceso democrático de entrenar un modelo de lenguaje grande contribuyendo con datos para futuras mejoras del modelo. Cuando un modelo GPT4All te responda y hayas aceptado participar, tu conversación se enviará al Datalake de Código Abierto de GPT4All. Además, podrás indicar si te gusta o no su respuesta. Si no te gusta una respuesta, puedes sugerir una alternativa. Estos datos se recopilarán y agregarán en el Datalake de GPT4All.
NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Código Abierto de GPT4All. No debes esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puedes esperar una atribución opcional si lo deseas. Tus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a tus datos y se te acreditará como colaborador en cualquier lanzamiento de modelo GPT4All que utilice tus datos.
@@ -2765,7 +2829,7 @@ lanzamiento de modelo GPT4All que utilice sus datos.<b>Warning:</b> changing the model will erase the current
conversation. Do you wish to continue?
- <b>Advertencia:</b> cambiar el modelo borrará la conversación
+ <b>Advertencia:</b> cambiar el modelo borrará la conversación
actual. ¿Desea continuar?
@@ -2989,55 +3053,4 @@ Locales
Vista de modelos instalados
-
- ChatAPIWorker
-
-
- ERROR: Network error occurred while connecting to the API server
- ERROR: Ocurrió un error de red al conectar con el servidor API
-
-
-
- ChatAPIWorker::handleFinished got HTTP Error %1 %2
- ChatAPIWorker::handleFinished obtuvo Error HTTP %1 %2
-
-
-
- Download
-
-
- Model "%1" is installed successfully.
- El modelo "%1" se ha instalado correctamente.
-
-
-
- ERROR: $MODEL_NAME is empty.
- ERROR: $MODEL_NAME está vacío.
-
-
-
- ERROR: $API_KEY is empty.
- ERROR: $API_KEY está vacía.
-
-
-
- ERROR: $BASE_URL is invalid.
- ERROR: $BASE_URL no es válida.
-
-
-
- ERROR: Model "%1 (%2)" is conflict.
- ERROR: El modelo "%1 (%2)" está en conflicto.
-
-
-
- Model "%1 (%2)" is installed successfully.
- El modelo "%1 (%2)" se ha instalado correctamente.
-
-
-
- Model "%1" is removed.
- El modelo "%1" ha sido eliminado.
-
-
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index e956c412..68cf05f9 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -115,44 +115,44 @@
Attiva la scoperta e il filtraggio dei modelli
[Markup lost in extraction; these hunks renumber <location> references for unchanged Italian entries across the AddModelView and ApplicationSettings contexts (search, sort, download, and install strings; theme, font size, language, device, server, and update strings) with their existing translations, e.g. Default → Predefinito and Application default → Applicazione predefinita.]
@@ -1531,66 +1531,72 @@ modello per iniziare
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- Il dispositivo di calcolo utilizzato per gli incorporamenti. "Auto" utilizza la CPU. Richiede il riavvio.
+ The compute device used for embeddings. Requires restart.
+ Il dispositivo di calcolo utilizzato per gli incorporamenti. Richiede il riavvio.
-
-
+
+
+ Application default
+ Applicazione predefinita
[Markup lost in extraction; the remaining LocalDocsSettings entries (Display → Mostra, Show Sources → Mostra le fonti, Advanced → Avanzate, the snippet size and snippet count strings, and the Values too large... warning) are renumbered with their Italian translations unchanged.]
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index 45d5e103..7697a400 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -115,44 +115,44 @@
Aciona a descoberta e filtragem de modelos
[Markup lost in extraction; these hunks renumber <location> references for unchanged Portuguese (BR) entries across the AddModelView and ApplicationSettings contexts (search, sort, download, and install strings; theme, font size, language, device, server, and update strings) with their existing translations and embedded translator comments, e.g. Default → Padrão and Application default → Aplicativo padrão.]
@@ -1533,67 +1533,73 @@ modelo instalado para funcionar
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- Dispositivo usado para processar as incorporações. "Automático" utiliza a CPU. Requer reinicialização.
+ The compute device used for embeddings. Requires restart.
+ Dispositivo usado para processar as incorporações. Requer reinicialização.
-
-
+
+
+ Application default
+ Aplicativo padrão
[Markup lost in extraction; the remaining LocalDocsSettings entries (Display → Exibir, Show Sources → Mostrar Fontes, the snippet size and snippet count strings with their translator comments, and the Values too large... warning) are renumbered with their Portuguese translations unchanged.]
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index f8855d48..219dc777 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -7,30 +7,30 @@
← Existing Collections
- ← Colecţiile curente
+ ← Colecţiile curente
 Add Document Collection
- Adaugă o Colecţie de documente
+ Adaugă o Colecţie de documente
 Add a folder containing plain text files, PDFs, or Markdown. Configure
additional extensions in Settings.
- Adaugă un folder care conţine fişiere în cu text-simplu, PDF sau Markdown. Extensii suplimentare pot fi specificate în Configurare.
+ Adaugă un folder care conţine fişiere în cu text-simplu, PDF sau Markdown. Extensii suplimentare pot fi specificate în Configurare.
 Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.
- Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi adăugate în Configurare.
+ Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi adăugate în Configurare.
 Please choose a directory
- Selectează un folder/director
+ Selectează un folder/director
@@ -42,13 +42,13 @@
Collection name...
- Denumirea Colecţiei...
+ Denumirea Colecţiei...
 Name of the collection to add (Required)
- Denumirea Colecţiei de adăugat (necesar)
+ Denumirea Colecţiei de adăugat (necesar)
@@ -72,13 +72,13 @@
Browse
- Căutare
+ Căutare
 Create Collection
- Creează o Colecţie
+ Creează o Colecţie
@@ -93,71 +93,71 @@
Explore Models
- Caută modele
+ Caută modele
 Discover and download models by keyword search...
- Caută şi descarcă modele după un cuvânt-cheie...
+ Caută şi descarcă modele după un cuvânt-cheie...
 Text field for discovering and filtering downloadable models
- Câmp pentru căutarea şi filtrarea modelelor ce pot fi descărcate
+ Câmp pentru căutarea şi filtrarea modelelor ce pot fi descărcate
 Initiate model discovery and filtering
- Iniţiază căutarea şi filtrarea modelelor
+ Iniţiază căutarea şi filtrarea modelelor
 Triggers discovery and filtering of models
- Activează căutarea şi filtrarea modelelor
- Activează căutarea şi filtrarea modelelor
+ Activează căutarea şi filtrarea modelelor
+ Default
+ Implicit
+ Likes
+ Likes
+ Downloads
+ Download-uri
+ Recent
+ Recent/e
+ Asc
+ Asc. (A->Z)
+ Desc
+ Desc. (Z->A)
+ None
+ Niciunul
@@ -165,158 +165,158 @@
Searching · %1
- Căutare · %1
+ Căutare · %1
+ Sort by: %1
- Ordonare după: %1
+ Ordonare după: %1
+ Sort dir: %1
- Sensul ordonării: %1
+ Sensul ordonării: %1
+ Limit: %1
- Límită: %1
+ Límită: %1
+ Network error: could not retrieve %1
- Eroare de reţea: nu se poate prelua %1
+ Eroare de reţea: nu se poate prelua %1
+ Busy indicator
+ Indicator de activitate
+ Displayed when the models request is ongoing
- Afişat în timpul solicitării modelului
+ Afişat în timpul solicitării modelului
+ Model file
- Fişierul modelului
+ Fişierul modelului
+ Model file to be downloaded
- Fişierul modelului de descărcat
+ Fişierul modelului de descărcat
+ Description
+ Descriere
+ File description
- Descrierea fişierului
+ Descrierea fişierului
+ Cancel
+ Anulare
+ Resume
+ Continuare
+ Download
+ Download
+ Stop/restart/start the download
- Opreşte/Reporneşte/Începe descărcarea
+ Opreşte/Reporneşte/Începe descărcarea
+ Remove
- şterge
+ şterge
+ Remove model from filesystem
- şterge modelul din sistemul de fişiere
+ şterge modelul din sistemul de fişiere
+ Install
+ Instalare
+ Install online model
+ Instalez un model din online
+ <strong><font size="1"><a href="#error">Error</a></strong></font>
+ <strong><font size="1"><a href="#error">Eroare</a></strong></font>
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem
(%2).</strong></font>
 <strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem (%2).</strong></font>
+ <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem (%2).</strong></font>
+ %1 GB
+ %1 GB
+ ?
+ ?
+ Describes an error that occurred when downloading
- Descrie eroarea apărută în timpul descărcării
+ Descrie eroarea apărută în timpul descărcării
 <strong><font size="1"><a
@@ -324,122 +324,122 @@
<strong><font size="1"><a href="#eroare">Eroare</a></strong></font>
+ Error for incompatible hardware
+ Eroare: hardware incompatibil
+ Download progressBar
- Progresia descărcării
+ Progresia descărcării
+ Shows the progress made in the download
- Afişează progresia descărcării
+ Afişează progresia descărcării
+ Download speed
+ Viteza de download
+ Download speed in bytes/kilobytes/megabytes per second
- Viteza de download în bytes/kilobytes/megabytes pe secundă
+ Viteza de download în bytes/kilobytes/megabytes pe secundă
+ Calculating...
+ Calculare...
+ Whether the file hash is being calculated
- Dacă se calculează hash-ul fişierului
+ Dacă se calculează hash-ul fişierului
+ Displayed when the file hash is being calculated
- Se afişează când se calculează hash-ul fişierului
+ Se afişează când se calculează hash-ul fişierului
+ ERROR: $API_KEY is empty.
- EROARE: $API_KEY absentă
+ EROARE: $API_KEY absentă
+ enter $API_KEY
+ introdu cheia $API_KEY
+ ERROR: $BASE_URL is empty.
- EROARE: $BASE_URL absentă
+ EROARE: $BASE_URL absentă
+ enter $BASE_URL
+ introdu $BASE_URL
+ ERROR: $MODEL_NAME is empty.
+ EROARE: $MODEL_NAME absent
+ enter $MODEL_NAME
+ introdu $MODEL_NAME
+ File size
- Dimensiunea fişierului
+ Dimensiunea fişierului
+ RAM required
- RAM necesară
+ RAM necesară
+ Parameters
+ Parametri
+ Quant
+ Quant(ificare)
+ Type
+ Tip
@@ -450,13 +450,13 @@
Application
- Aplicaţie/Program
+ Aplicaţie/Program
 Network dialog
- Reţea
+ Reţea
@@ -473,7 +473,7 @@
If you can't start it manually, then I'm afraid you'll have
to<br>
reinstall.
- EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
+ EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
@@ -497,7 +497,7 @@
Theme
- Tema pentru interfaţă
+ Tema pentru interfaţă
@@ -506,38 +506,38 @@
Schema de culori a programului.
+ Dark
- Întunecat
+ Întunecat
+ Light
+ Luminos
+ LegacyDark
- Întunecat-vechi
+ Întunecat-vechi
+ Font Size
+ Dimensiunea textului
+ The size of text in the application.
- Dimensiunea textului în program.
+ Dimensiunea textului în program.
+ Device
+ Dispozitiv/Device
@@ -545,7 +545,7 @@
 The compute device used for text generation. "Auto" uses Vulkan or
Metal.
- Dispozitivul de calcul utilizat pentru generarea de text. "Auto" apelează la Vulkan sau la Metal.
+ Dispozitivul de calcul utilizat pentru generarea de text. "Auto" apelează la Vulkan sau la Metal.
@@ -557,217 +557,217 @@
above where this application resides on your filesystem.<br><br>
If you can't start it manually, then I'm afraid you'll have to<br>
reinstall.
- EROARE: Sistemul de Update nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
+ EROARE: Sistemul de Update nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
+ Small
+ Mic
+ Medium
+ Mediu
+ Large
+ Mare
+ Language and Locale
- Limbă şi Localizare
+ Limbă şi Localizare
+ The language and locale you wish to use.
- Limba şi Localizarea de utilizat
+ Limba şi Localizarea de utilizat
+ System Locale
+ Localizare
+ The compute device used for text generation.
+ Dispozitivul de calcul utilizat pentru generarea de text.
+ Application default
+ Implicit
+ Default Model
+ Modelul implicit
+ The preferred model for new chats. Also used as the local server fallback.
- Modelul preferat pentru noile conversaţii. Va fi folosit drept rezervă pentru serverul local.
+ Modelul preferat pentru noile conversaţii. Va fi folosit drept rezervă pentru serverul local.
+ Suggestion Mode
+ Modul de sugerare
+ Generate suggested follow-up questions at the end of responses.
- Generarea de Întrebări pentru continuare, la finalul replicilor.
+ Generarea de Întrebări pentru continuare, la finalul replicilor.
+ When chatting with LocalDocs
- Când se discută cu LocalDocs
+ Când se discută cu LocalDocs
+ Whenever possible
- Oricând e posibil
+ Oricând e posibil
+ Never
- Niciodată
+ Niciodată
+ Download Path
+ Calea pentru download
+ Where to store local models and the LocalDocs database.
- Unde să fie plasate modelele şi baza de date LocalDocs.
+ Unde să fie plasate modelele şi baza de date LocalDocs.
+ Browse
- Căutare
+ Căutare
+ Choose where to save model files
- Selectează locul unde vor fi plasate fişierele modelelor
+ Selectează locul unde vor fi plasate fişierele modelelor
+ Enable Datalake
- Activează DataLake
+ Activează DataLake
+ Send chats and feedback to the GPT4All Open-Source Datalake.
- Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All.
+ Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All.
+ Advanced
+ Avansate
+ CPU Threads
+ Thread-uri CPU
+ The number of CPU threads used for inference and embedding.
- Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding.
- Save Chat Context
- Salvarea contextului conversaţiei
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
- Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
+ Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding.
+ Save Chat Context
+ Salvarea contextului conversaţiei
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
+ Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
- Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
+ Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
 Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB
per chat.
- Salvează pe disc starea modelului pentru Încărcare mai rapidă. ATENţIE: Consumă ~2GB/conversaţie.
- Enable Local Server
- Activează Serverul local
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased
- resource usage.
- Activează pe localhost un Server compatibil cu Open-AI. ATENţIE: Creşte consumul de resurse.
- API Server Port
- Portul Serverului API
+ Salvează pe disc starea modelului pentru Încărcare mai rapidă. ATENţIE: Consumă ~2GB/conversaţie.
+ Enable Local Server
+ Activează Serverul local
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased
+ resource usage.
+ Activează pe localhost un Server compatibil cu Open-AI. ATENţIE: Creşte consumul de resurse.
+ API Server Port
+ Portul Serverului API
+ The port to use for the local server. Requires restart.
- Portul utilizat pentru Serverul local. Necesită repornirea programului.
+ Portul utilizat pentru Serverul local. Necesită repornirea programului.
+ Check For Updates
- Caută update-uri
+ Caută update-uri
+ Manually check for an update to GPT4All.
- Caută manual update-uri pentru GPT4All.
+ Caută manual update-uri pentru GPT4All.
+ Updates
- Update-uri/Actualizări
+ Update-uri/Actualizări
@@ -776,12 +776,12 @@
New Chat
- Chat/Conversaţie Nouă
+ Chat/Conversaţie Nouă
 Server Chat
- Conversaţie cu Serverul
+ Conversaţie cu Serverul
@@ -789,7 +789,7 @@
ERROR: Network error occurred while connecting to the API server
- EROARE: Eroare de reţea - conectarea la serverul API
+ EROARE: Eroare de reţea - conectarea la serverul API
@@ -815,61 +815,61 @@
+ New Chat
- + Conversaţie nouă
+ + Conversaţie nouă
 Create a new chat
- Creează o Conversaţie nouă
+ Creează o Conversaţie nouă
 Select the current chat or edit the chat when in edit mode
- Selectează conversaţia curentă sau editeaz-o când eşti în modul editare
+ Selectează conversaţia curentă sau editeaz-o când eşti în modul editare
 Edit chat name
- Editează denumirea conversaţiei
+ Editează denumirea conversaţiei
 Save chat name
- Salveazăa denumirea conversaţiei
+ Salveazăa denumirea conversaţiei
 Delete chat
- şterge conversaţia
+ şterge conversaţia
 Confirm chat deletion
- CONFIRMă ştergerea conversaţiei
+ CONFIRMă ştergerea conversaţiei
 Cancel chat deletion
- ANULEAZă ştergerea conversaţiei
+ ANULEAZă ştergerea conversaţiei
 List of chats
- Lista conversaţiilor
+ Lista conversaţiilor
 List of chats in the drawer dialog
- Lista conversaţiilor în secţiunea-sertar
+ Lista conversaţiilor în secţiunea-sertar
@@ -877,12 +877,12 @@
TODAY
- ASTăZI
+ ASTăZI
 THIS WEEK
- SăPTăMâNA ACEASTA
+ SăPTăMâNA ACEASTA
@@ -892,7 +892,7 @@
LAST SIX MONTHS
- ULTIMELE şASE LUNI
+ ULTIMELE şASE LUNI
@@ -911,7 +911,7 @@
<h3>Warning</h3><p>%1</p>
- <h3>Atenţie</h3><p>%1</p>
+ <h3>Atenţie</h3><p>%1</p>
@@ -923,43 +923,43 @@
Warn the user if they switch models, then context will be erased
- Avertizează utilizatorul că la schimbarea modelului va fi şters contextul
+ Avertizează utilizatorul că la schimbarea modelului va fi şters contextul
 Conversation copied to clipboard.
- Conversaţia a fost plasată în Clipboard.
+ Conversaţia a fost plasată în Clipboard.
 Code copied to clipboard.
- Codul a fost plasat în Clipboard.
+ Codul a fost plasat în Clipboard.
 Chat panel
- Secţiunea de chat
+ Secţiunea de chat
 Chat panel with options
- Secţiunea de chat cu opţiuni
+ Secţiunea de chat cu opţiuni
 Reload the currently loaded model
- ReÎncarcă modelul curent
+ ReÎncarcă modelul curent
 Eject the currently loaded model
- Ejectează modelul curent
+ Ejectează modelul curent
@@ -971,25 +971,25 @@
Model loading error.
- Eroare la Încărcarea modelului.
+ Eroare la Încărcarea modelului.
 Waiting for model...
- Se aşteaptă modelul...
+ Se aşteaptă modelul...
 Switching context...
- Se schimbă contextul...
+ Se schimbă contextul...
 Choose a model...
- Selectează un model...
+ Selectează un model...
@@ -1021,19 +1021,19 @@
add collections of documents to the chat
- adaugă Colecţii de documente la conversaţie
+ adaugă Colecţii de documente la conversaţie
 Load the default model
- Încarcă modelul implicit
+ Încarcă modelul implicit
 Loads the default model which can be changed in settings
- Încarcă modelul implicit care poate fi stabilit în Configurare
+ Încarcă modelul implicit care poate fi stabilit în Configurare
@@ -1044,60 +1044,60 @@
GPT4All requires that you install at least one
model to get started
- GPT4All necesită cel puţin un
+ GPT4All necesită cel puţin un
model pentru a putea porni
 <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
- <h3>EROARE la încărcarea
+ <h3>EROARE la încărcarea
modelului:</h3><br><i>"%1"</i><br><br>Astfel
- de erori pot apărea din mai multe cauze, dintre care cele mai comune
- includ un format inadecvat al fişierului, un download incomplet sau întrerupt,
- un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
- Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
- un format şi un tip compatibile; verifică dacă fişierul modelului este complet
- în folderul dedicat - acest folder este afişat în secţiunea Configurare;
- dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
- după ce îi verifici amprenta MD5 (md5sum)<li>Află mai mult despre modelele compatibile
- în pagina unde am plasat <a
- href="https://docs.gpt4all.io/">documentaţia</a> pentru
- interfaţa gráfică<li>poţi găsi <a
+ de erori pot apărea din mai multe cauze, dintre care cele mai comune
+ includ un format inadecvat al fişierului, un download incomplet sau întrerupt,
+ un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
+ Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
+ un format şi un tip compatibile; verifică dacă fişierul modelului este complet
+ în folderul dedicat - acest folder este afişat în secţiunea Configurare;
+ dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
+ după ce îi verifici amprenta MD5 (md5sum)<li>Află mai mult despre modelele compatibile
+ în pagina unde am plasat <a
+ href="https://docs.gpt4all.io/">documentaţia</a> pentru
+ interfaţa gráfică<li>poţi găsi <a
href="https://discord.gg/4M2QFmTt2k">canalul nostru Discord</a> unde
- se oferă ajutor
+ se oferă ajutor
 GPT4All requires that you install at least one
model to get started
- GPT4All necesită cel puţin un
+ GPT4All necesită cel puţin un
model pentru a putea rula
 Install a Model
- Instalează un model
+ Instalează un model
 Shows the add model view
- Afisează secţiunea de adăugare a unui model
+ Afisează secţiunea de adăugare a unui model
 Conversation with the model
- Conversaţie cu modelul
+ Conversaţie cu modelul
 prompt / response pairs from the conversation
- perechi prompt/replică din conversaţie
+ perechi prompt/replică din conversaţie
@@ -1113,13 +1113,13 @@ model to get started
recalculating context ...
- se recalculează contextul...
+ se recalculează contextul...
 response stopped ...
- replică Întreruptă...
+ replică Întreruptă...
@@ -1131,13 +1131,13 @@ model to get started
generating response ...
- se generează replica...
+ se generează replica...
 generating questions ...
- se generează Întrebări...
+ se generează Întrebări...
@@ -1175,7 +1175,7 @@ model to get started
Gives a thumbs up to the response
- Dă un Bravo acestei replici
+ Dă un Bravo acestei replici
@@ -1187,7 +1187,7 @@ model to get started
Opens thumbs down dialog
- Deschide reacţia Aiurea
+ Deschide reacţia Aiurea
@@ -1199,43 +1199,43 @@ model to get started
Suggested follow-ups
- Continuări sugerate
+ Continuări sugerate
 Erase and reset chat session
- şterge şi resetează sesiunea de chat
+ şterge şi resetează sesiunea de chat
 Copy chat session to clipboard
- Copiez sesiunea de chat în Clipboard
+ Copiez sesiunea de chat în Clipboard
 Redo last chat response
- Reface ultima replică
+ Reface ultima replică
 Stop generating
- Opreşte generarea
+ Opreşte generarea
 Stop the current response generation
- Opreşte generarea replicii curente
+ Opreşte generarea replicii curente
 Reloads the model
- ReÎncarc modelul
+ ReÎncarc modelul
 <h3>Encountered an error loading
@@ -1251,21 +1251,21 @@ model to get started
href="https://docs.gpt4all.io/">documentation</a> for the
gui<li>Check out our <a
href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
- <h3>EROARE la Încărcarea
+ <h3>EROARE la Încărcarea
modelului:</h3><br><i>"%1"</i><br><br>Astfel
- de erori pot apărea din mai multe cauze, dintre care cele mai comune
- includ un format inadecvat al fişierului, un download incomplet sau Întrerupt,
- un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
- Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
- un format si un tip compatibile; verifică dacă fişierul modelului este complet
- în folderul dedicat - acest folder este afişat în secţiunea Configurare;
- dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
- după ce îi verifici amprenta MD5 (md5sum)<li>Află mai multe despre care modele sunt compatibile
- în pagina unde am plasat <a
- href="https://docs.gpt4all.io/">documentaţia</a> pentru
- interfaţa gráfică<li>poţi parcurge <a
+ de erori pot apărea din mai multe cauze, dintre care cele mai comune
+ includ un format inadecvat al fişierului, un download incomplet sau Întrerupt,
+ un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
+ Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
+ un format si un tip compatibile; verifică dacă fişierul modelului este complet
+ în folderul dedicat - acest folder este afişat în secţiunea Configurare;
+ dacă ai descărcat modelul dinafara programului, asigură-te că fişierul nu e corupt
+ după ce îi verifici amprenta MD5 (md5sum)<li>Află mai multe despre care modele sunt compatibile
+ în pagina unde am plasat <a
+ href="https://docs.gpt4all.io/">documentaţia</a> pentru
+ interfaţa gráfică<li>poţi parcurge <a
href="https://discord.gg/4M2QFmTt2k">canalul nostru Discord</a> unde
- se oferă ajutor
+ se oferă ajutor
@@ -1273,19 +1273,19 @@ model to get started
Reload · %1
- Reîncărcare · %1
+ Reîncărcare · %1
 Loading · %1
- Încărcare · %1
+ Încărcare · %1
 Load · %1 (default) →
- Încarcă · %1 (implicit) →
+ Încarcă · %1 (implicit) →
@@ -1303,7 +1303,7 @@ model to get started
searching localdocs: %1 ...
- se caută în LocalDocs: %1 ...
+ se caută în LocalDocs: %1 ...
@@ -1315,13 +1315,13 @@ model to get started
Load a model to continue...
- Încarcă un model pentru a continua...
+ Încarcă un model pentru a continua...
 Send messages/prompts to the model
- Trimite mesaje/prompt-uri către model
+ Trimite mesaje/prompt-uri către model
@@ -1351,7 +1351,7 @@ model to get started
Sends the message/prompt contained in textfield to the model
- Trimite modelului mesajul/prompt-ul din câmpul-text
+ Trimite modelului mesajul/prompt-ul din câmpul-text
@@ -1360,16 +1360,16 @@ model to get started
Warning: searching collections while indexing can return incomplete results
- Atenţie: căutarea în Colecţii în timp ce sunt Indexate poate întoarce rezultate incomplete
+ Atenţie: căutarea în Colecţii în timp ce sunt Indexate poate întoarce rezultate incomplete
 %n file(s)
- %n fişier
- %n fişiere
- %n fişiere
+ %n fişier
+ %n fişiere
+ %n fişiere
@@ -1377,7 +1377,7 @@ model to get started
%n word(s)
- %n cuvânt
+ %n cuvânt
+ %n cuvinte
+ %n cuvinte
@@ -1398,7 +1398,7 @@ model to get started
Select a collection to make it available to the chat model.
- Selectează o Colecţie pentru ca modelul să o poată accesa.
+ Selectează o Colecţie pentru ca modelul să o poată accesa.
@@ -1416,7 +1416,7 @@ model to get started
ERROR: $API_KEY is empty.
- EROARE: $API_KEY absentă
+ EROARE: $API_KEY absentă
@@ -1436,7 +1436,7 @@ model to get started
Model "%1" is removed.
- Modelul "%1" - îndepărtat
+ Modelul "%1" - îndepărtat
@@ -1445,31 +1445,31 @@ model to get started
Welcome to GPT4All
- Bun venit în GPT4All
+ Bun venit în GPT4All
 The privacy-first LLM chat application
- Programul ce prioritizează confidenţialitatea (privacy)
+ Programul ce prioritizează confidenţialitatea (privacy)
 Start chatting
- Începe o conversaţie
+ Începe o conversaţie
 Start Chatting
- Începe o conversaţie
+ Începe o conversaţie
 Chat with any LLM
- Dialoghează cu orice LLM
+ Dialoghează cu orice LLM
@@ -1481,43 +1481,43 @@ model to get started
Chat with your local files
- Dialoghează cu fişiere locale
+ Dialoghează cu fişiere locale
 Find Models
- Caută modele
+ Caută modele
 Explore and download models
- Explorează şi descarcă modele
+ Explorează şi descarcă modele
 Latest news
- Ultimele ştiri
+ Ultimele ştiri
 Latest news from GPT4All
- Ultimele ştiri de la GPT4All
+ Ultimele ştiri de la GPT4All
 Release Notes
- Despre această versiune
+ Despre această versiune
 Documentation
- Documentaţie
+ Documentaţie
@@ -1534,8 +1534,12 @@ model to get started
+ Github
+ GitHub
- GitHub
+ GitHub
@@ -1574,13 +1578,13 @@ model to get started
Allowed File Extensions
- Extensii compatibile de fişier
+ Extensii compatibile de fişier
 Comma-separated list. LocalDocs will only attempt to process files with these
extensions.
- Extensiile, separate prin virgulă. LocalDocs va Încerca procesarea
- numai a fişierelor cu aceste extensii.
+ Extensiile, separate prin virgulă. LocalDocs va Încerca procesarea
+ numai a fişierelor cu aceste extensii.
@@ -1597,7 +1601,7 @@ model to get started
Embed documents using the fast Nomic API instead of a private local model.
Requires restart.
- Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
+ Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
@@ -1609,7 +1613,7 @@ model to get started
API key to use for Nomic Embed. Get one from the Atlas <a
href="https://atlas.nomic.ai/cli-login">API keys page</a>.
Requires restart.
- Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
+ Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
@@ -1618,81 +1622,82 @@ model to get started
Dispozitivul pentru Embeddings
- The compute device used for embeddings. "Auto" uses the CPU. Requires
- restart.
- Dispozitivul pentru Embeddings. "Auto" apelează la CPU. Necesită repornire
+ The compute device used for embeddings. Requires restart.
+ Dispozitivul pentru Embeddings. Necesită repornire.
 Comma-separated list. LocalDocs will only attempt to process files with these extensions.
- Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
+ Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
 Embed documents using the fast Nomic API instead of a private local model. Requires restart.
- Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
+ Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
 API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
- Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
+ Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- Dispozitivul pentru Embeddings. "Auto" apelează la CPU. Necesită repornire.
+ Application default
+ Implicit
+ Display
+ Vizualizare
+ Show Sources
- Afişarea Surselor
+ Afişarea Surselor
+ Display the sources used for each response.
- Afişează Sursele utilizate pentru fiecare replică.
+ Afişează Sursele utilizate pentru fiecare replică.
+ Advanced
+ Avansate
+ Warning: Advanced usage only.
- Atenţie: Numai pentru utilizare avansată.
+ Atenţie: Numai pentru utilizare avansată.
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
- Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
+ Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
- Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
+ Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
- Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
+ Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
@@ -1701,30 +1706,30 @@ model to get started
the model's context window. More info <a
href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
- Valori prea mari pot cauza erori cu LocalDocs, replici lente sau absenţa lor completă. în mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
+ Valori prea mari pot cauza erori cu LocalDocs, replici lente sau absenţa lor completă. în mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
+ Document snippet size (characters)
- Lungimea (în caractere) a citatelor din documente
+ Lungimea (în caractere) a citatelor din documente
 Number of characters per document snippet. Larger numbers increase likelihood of
factual responses, but also result in slower generation.
- numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
+ numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
+ Max document snippets per prompt
- Numărul maxim de citate per prompt
+ Numărul maxim de citate per prompt
 Max best N matches of retrieved document snippets to add to the context for
prompt. Larger numbers increase likelihood of factual responses, but also result in
slower generation.
- Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
+ Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
@@ -1739,59 +1744,59 @@ model to get started
Chat with your local files
- Dialoghează cu fişiere locale
+ Dialoghează cu fişiere locale
 + Add Collection
- + Adaugă o Colecţie
+ + Adaugă o Colecţie
 ERROR: The LocalDocs database is not valid.
- EROARE: Baza de date LocalDocs nu e validă.
+ EROARE: Baza de date LocalDocs nu e validă.
 <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.
- EROARE: Baza de date LocalDocs nu poate fi accesată sau nu e validă. Programul trebuie repornit după ce se încearcă oricare din următoarele remedii sugerate.</i><br><ul><li>Asigură-te că folderul pentru <b>Download Path</b> există în sistemul de fişiere.</li><li>Verifică permisiunile şi apartenenţa folderului pentru <b>Download Path</b>.</li><li>Dacă există fişierul <b>localdocs_v2.db</b>, verifică-i apartenenţa şi permisiunile citire/scriere (read/write).</li></ul><br>Dacă problema persistă şi există vreun fişier 'localdocs_v*.db', ca ultimă soluţie poţi<br>încerca duplicarea (backup) şi apoi ştergerea lor. Oricum, va trebui să re-creezi Colecţiile.
+ EROARE: Baza de date LocalDocs nu poate fi accesată sau nu e validă. Programul trebuie repornit după ce se încearcă oricare din următoarele remedii sugerate.</i><br><ul><li>Asigură-te că folderul pentru <b>Download Path</b> există în sistemul de fişiere.</li><li>Verifică permisiunile şi apartenenţa folderului pentru <b>Download Path</b>.</li><li>Dacă există fişierul <b>localdocs_v2.db</b>, verifică-i apartenenţa şi permisiunile citire/scriere (read/write).</li></ul><br>Dacă problema persistă şi există vreun fişier 'localdocs_v*.db', ca ultimă soluţie poţi<br>încerca duplicarea (backup) şi apoi ştergerea lor. Oricum, va trebui să re-creezi Colecţiile.
 No Collections Installed
- Nu există Colecţii instalate
+ Nu există Colecţii instalate
 Install a collection of local documents to get started using this feature
- Instalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta
+ Instalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta
 + Add Doc Collection
- + Adaugă o Colecţie de documente
+ + Adaugă o Colecţie de documente
 Shows the add model view
- Afişează secţiunea de adăugare a unui model
+ Afişează secţiunea de adăugare a unui model
 Indexing progressBar
- Bara de progresie a Indexării
+ Bara de progresie a Indexării
 Shows the progress made in the indexing
- Afişează progresia Indexării
+ Afişează progresia Indexării
@@ -1803,7 +1808,7 @@ model to get started
INDEXING
- ...SE INDEXEAZă...
+ ...SE INDEXEAZă...
@@ -1815,7 +1820,7 @@ model to get started
REQUIRES UPDATE
- NECESITă UPDATE
+ NECESITă UPDATE
@@ -1833,31 +1838,31 @@ model to get started
Indexing in progress
- Se Indexează...
+ Se Indexează...
 Embedding in progress
- ...Se calculează Embeddings...
+ ...Se calculează Embeddings...
 This collection requires an update after version change
- Această Colecţie necesită update după schimbarea versiunii
+ Această Colecţie necesită update după schimbarea versiunii
 Automatically reindexes upon changes to the folder
- Se reindexează automat după schimbări ale folderului
+ Se reindexează automat după schimbări ale folderului
 Installation in progress
- ...Instalare în curs...
+ ...Instalare în curs...
@@ -1870,9 +1875,9 @@ model to get started
%n file(s)
- %n fişier
- %n fişiere
- %n fişiere
+ %n fişier
+ %n fişiere
+ %n fişiere
@@ -1880,7 +1885,7 @@ model to get started
%n word(s)
- %n cuvânt
+ %n cuvânt
+ %n cuvinte
+ %n cuvinte
@@ -1889,19 +1894,19 @@ model to get started
Remove
- Şterg
+ ŞtergRebuild
- Reconstrucţie
+ ReconstrucţieReindex this folder from scratch. This is slow and usually not needed.
- Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil.
+ Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil.
@@ -1913,7 +1918,7 @@ model to get started
Update the collection to the new version. This is a slow operation.
- Actualizează Colecţia la noua versiune. Această procedură e lentă.
+ Actualizează Colecţia la noua versiune. Această procedură e lentă.
@@ -1927,7 +1932,7 @@ model to get started
OpenAI</li><li>You can apply for an API key <a
href="https://platform.openai.com/account/api-keys">here.</a></li>
- <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENţIE: Conversaţiile tale vor fi trimise la OpenAI! </li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li>
+ <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENţIE: Conversaţiile tale vor fi trimise la OpenAI! </li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li>
<strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br>
@@ -1947,7 +1952,7 @@ model to get started
 <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li>
- <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!</li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li>
+ <ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!</li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API aici: <a href="https://platform.openai.com/account/api-keys">aici.</a></li>
@@ -1957,7 +1962,7 @@ model to get started
<br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.
- <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
+ <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
@@ -1967,7 +1972,7 @@ model to get started
<ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li>
- <ul><li>Necesită cheia personală Mistral API. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
+ <ul><li>Necesită cheia personală Mistral API. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
@@ -1987,7 +1992,7 @@ model to get started
<ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li>
- <ul><li>Necesită cheia personală API si base-URL a API.</li><li>ATENŢIE: Conversaţiile tale vor fi trimise la serverul API compatibil cu OpenAI specificat!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu serverul API compatibil cu OpenAI</li>
+ <ul><li>Necesită cheia personală API si base-URL a API.</li><li>ATENŢIE: Conversaţiile tale vor fi trimise la serverul API compatibil cu OpenAI specificat!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu serverul API compatibil cu OpenAI</li>
@@ -1997,12 +2002,12 @@ model to get started
<strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
- <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul>
+ <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul>
 <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does
not guarantee API key access. Contact OpenAI for more info.
- <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
+ <br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
@@ -2013,14 +2018,14 @@ model to get started
Mistral</li><li>You can apply for an API key <a
href="https://console.mistral.ai/user/api-keys">here</a>.</li>
- <ul><li>Necesită cheia personală Mistral API. </li><li>ATENţIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
+ <ul><li>Necesită cheia personală Mistral API. </li><li>ATENţIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API aici: <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
<strong>Created by
%1.</strong><br><ul><li>Published on %2.<li>This model
has %3 likes.<li>This model has %4 downloads.<li>More info can be found
<a href="https://huggingface.co/%5">here.</a></ul>
- <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul>
+ <strong>Creat de către %1.</strong><br><ul><li>Publicat in: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite la: <a href="https://huggingface.co/%5">aici.</a></ul>
@@ -2047,7 +2052,7 @@ model to get started
Remove
- Şterg
+ Şterg
@@ -2059,7 +2064,7 @@ model to get started
Model File
- Fişierul modelului
+ Fişierul modelului
@@ -2070,7 +2075,7 @@ model to get started
Prefixed at the beginning of every conversation. Must contain the appropriate
framing tokens.
- Plasat la Începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de Încadrare.
+ Plasat la Începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de Încadrare.
@@ -2082,24 +2087,24 @@ model to get started
The template that wraps every prompt.
- Standardul de formulare a fiecărui prompt.
+ Standardul de formulare a fiecărui prompt.
 Must contain the string "%1" to be replaced with the user's
input.
- Trebuie să conţină textul "%1" care va fi Înlocuit cu ceea ce scrie utilizatorul.
+ Trebuie să conţină textul "%1" care va fi Înlocuit cu ceea ce scrie utilizatorul.
 Chat Name Prompt
- Denumirea conversaţiei
+ Denumirea conversaţiei
 Prompt used to automatically generate chat names.
- Standardul de formulare a denumirii conversaţiilor.
+ Standardul de formulare a denumirii conversaţiilor.
@@ -2111,7 +2116,7 @@ model to get started
Prompt used to generate suggested follow-up questions.
- Prompt-ul folosit pentru generarea Întrebărilor de continuare.
+ Prompt-ul folosit pentru generarea Întrebărilor de continuare.
@@ -2123,13 +2128,13 @@ model to get started
Number of input and output tokens the model sees.
- Numărul token-urilor de input şi de output văzute de model.
+ Numărul token-urilor de input şi de output văzute de model.
 Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
- Numărul maxim combinat al token-urilor în prompt+replică Înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTă: Nu are efect până la reincărcarea modelului.
+ Numărul maxim combinat al token-urilor în prompt+replică Înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTă: Nu are efect până la reincărcarea modelului.
@@ -2141,12 +2146,12 @@ model to get started
Randomness of model output. Higher -> more variation.
- Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate.
+ Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate.
 Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTă: O temperatură tot mai Înaltă determinÎ replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTă: O temperatură tot mai Înaltă determinÎ replici tot mai creative şi mai puţin predictibile.
@@ -2163,19 +2168,19 @@ model to get started
Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTă: Se evită selectarea token-urilor foarte improbabile.
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTă: Se evită selectarea token-urilor foarte improbabile.
 Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.
- Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare.
+ Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare.
 Must contain the string "%1" to be replaced with the user's input.
- Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul.
+ Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul.
@@ -2183,21 +2188,21 @@ model to get started
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
- Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului.
+ Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului.
 Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile.
 Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
@@ -2209,13 +2214,13 @@ NOTE: Prevents choosing highly unlikely tokens.
Minimum token probability. Higher -> more predictable.
- Probabilitatea mínimă a unui token. Mai mare -> mai predictibil.
+ Probabilitatea mínimă a unui token. Mai mare -> mai predictibil.
 Sets the minimum relative probability for a token to be considered.
- Stabileşte probabilitatea minimă relativă a unui token de luat în considerare.
+ Stabileşte probabilitatea minimă relativă a unui token de luat în considerare.
@@ -2239,13 +2244,13 @@ NOTE: Prevents choosing highly unlikely tokens.
Max Length
- Lungimea maximă
+ Lungimea maximă
 Maximum response length, in tokens.
- Lungimea maximă - în token-uri - a replicii.
+ Lungimea maximă - în token-uri - a replicii.
@@ -2264,7 +2269,7 @@ NOTE: Prevents choosing highly unlikely tokens.
Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- Numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
+ Numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2272,12 +2277,12 @@ NOTE: Higher values can speed up reading prompts but will use more RAM.
How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
- Cât de multe layere ale modelului să fie încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. NOTĂ: Nu are efect până la reîncărcarea modelului.
+ Cât de multe layere ale modelului să fie încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. NOTĂ: Nu are efect până la reîncărcarea modelului.
 Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- numărul token-urilor procesate simultan. NOTă: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
+ numărul token-urilor procesate simultan. NOTă: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2289,38 +2294,38 @@ NOTE: Does not take effect until you reload the model.
Repetition penalty factor. Set to 1 to disable.
- Factorul de penalizare a repetării ce se dezactivează cu valoarea 1.
+ Factorul de penalizare a repetării ce se dezactivează cu valoarea 1.
 Repeat Penalty Tokens
- Token-uri pentru penalzare a repetării
+ Token-uri pentru penalzare a repetării
 Number of previous tokens used for penalty.
- Numărul token-urilor anterioare considerate pentru penalizare.
+ Numărul token-urilor anterioare considerate pentru penalizare.
 GPU Layers
- Layere în GPU
+ Layere în GPU
 Number of model layers to load into VRAM.
- Numărul layerelor modelului ce vor fi Încărcate în VRAM.
+ Numărul layerelor modelului ce vor fi Încărcate în VRAM.
 How many model layers to load into VRAM. Decrease this if GPT4All runs out of
VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
- Cât de multe layere ale modelului să fie Încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce Încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi Încetinesc inferenţa. NOTă: Nu are efect până la reÎncărcarea modelului.
+ Cât de multe layere ale modelului să fie Încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce Încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi Încetinesc inferenţa. NOTă: Nu are efect până la reÎncărcarea modelului.
@@ -2329,13 +2334,13 @@ NOTE: Does not take effect until you reload the model.
No Models Installed
- Nu există modele instalate
+ Nu există modele instalate
 Install a model to get started using GPT4All
- Instalează un model pentru a Începe să foloseşti GPT4All
+ Instalează un model pentru a Începe să foloseşti GPT4All
@@ -2343,13 +2348,13 @@ NOTE: Does not take effect until you reload the model.
+ Add Model
- + Adaugă un model
+ + Adaugă un model
 Shows the add model view
- Afişează secţiunea de adăugare a unui model
+ Afişează secţiunea de adăugare a unui model
@@ -2361,19 +2366,19 @@ NOTE: Does not take effect until you reload the model.
Locally installed chat models
- Modele conversaţionale instalate local
+ Modele conversaţionale instalate local
 Model file
- Fişierul modelului
+ Fişierul modelului
 Model file to be downloaded
- Fişierul modelului ce va fi descărcat
+ Fişierul modelului ce va fi descărcat
@@ -2385,7 +2390,7 @@ NOTE: Does not take effect until you reload the model.
File description
- Descrierea fişierului
+ Descrierea fişierului
@@ -2403,19 +2408,19 @@ NOTE: Does not take effect until you reload the model.
Stop/restart/start the download
- Oprirea/Repornirea/Initierea descărcării
+ Oprirea/Repornirea/Initierea descărcării
 Remove
- Şterg
+ Şterg
 Remove model from filesystem
- Şterg modelul din sistemul de fişiere
+ Şterg modelul din sistemul de fişiere
@@ -2423,7 +2428,7 @@ NOTE: Does not take effect until you reload the model.
Install
- Instalează
+ Instalează
@@ -2440,7 +2445,7 @@ NOTE: Does not take effect until you reload the model.
<strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
+ <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
@@ -2458,7 +2463,7 @@ NOTE: Does not take effect until you reload the model.
Describes an error that occurred when downloading
- Descrie o eroare apărută la download
+ Descrie o eroare apărută la download
@@ -2470,7 +2475,7 @@ NOTE: Does not take effect until you reload the model.
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
@@ -2482,13 +2487,13 @@ NOTE: Does not take effect until you reload the model.
Download progressBar
- Bara de progresie a descărcării
+ Bara de progresie a descărcării
 Shows the progress made in the download
- Afişează progresia descărcării
+ Afişează progresia descărcării
@@ -2500,13 +2505,13 @@ NOTE: Does not take effect until you reload the model.
Download speed in bytes/kilobytes/megabytes per second
- Viteza de download în bytes/kilobytes/megabytes pe secundă
+ Viteza de download în bytes/kilobytes/megabytes pe secundă
 Calculating...
- ...Se calculează...
+ ...Se calculează...
@@ -2518,7 +2523,7 @@ NOTE: Does not take effect until you reload the model.
Whether the file hash is being calculated
- Dacă se va calcula hash-ul fişierului
+ Dacă se va calcula hash-ul fişierului
@@ -2530,13 +2535,13 @@ NOTE: Does not take effect until you reload the model.
Displayed when the file hash is being calculated
- Afişat când se calculează hash-ul unui fişier
+ Afişat când se calculează hash-ul unui fişier
ERROR: $API_KEY is empty.
- EROARE: $API_KEY absentă.
+ EROARE: $API_KEY absentă.
@@ -2548,7 +2553,7 @@ NOTE: Does not take effect until you reload the model.
ERROR: $BASE_URL is empty.
- EROARE: $BASE_URL absentă.
+ EROARE: $BASE_URL absentă.
@@ -2572,13 +2577,13 @@ NOTE: Does not take effect until you reload the model.
File size
- Dimensiunea fişierului
+ Dimensiunea fişierului
RAM required
- RAM necesară
+ RAM necesară
@@ -2620,7 +2625,7 @@ NOTE: Does not take effect until you reload the model.
Please choose a directory
- Selectează un director (folder)
+ Selectează un director (folder)
@@ -2635,7 +2640,7 @@ NOTE: Does not take effect until you reload the model.
Restores settings dialog to a default state
- Restaurez secţiunea de configurare la starea sa implicită
+ Restaurez secţiunea de configurare la starea sa implicită
@@ -2644,7 +2649,7 @@ NOTE: Does not take effect until you reload the model.
Contribute data to the GPT4All Opensource Datalake.
- Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All.
+ Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All.
By enabling this feature, you will be able to participate in the democratic
@@ -2663,21 +2668,21 @@ NOTE: Does not take effect until you reload the model.
used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
attribution information attached to your data and you will be credited as a
contributor to any GPT4All model release that uses your data!
- Dacă activezi această funcţionalitate, vei participa la procesul democratic
- de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
+ Dacă activezi această funcţionalitate, vei participa la procesul democratic
+ de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
- Când un model în GPT4All Îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
- trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica,
- Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ.
- Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
+ Când un model în GPT4All Îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
+ trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica,
+ Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ.
+ Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
- NOTă: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
- DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
- această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
- Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
- utilizate de către Nomic AI pentru a îmbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
- toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
- participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
+ NOTă: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
+ DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
+ această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
+ Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
+ utilizate de către Nomic AI pentru a îmbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
+ toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+ participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
@@ -2687,21 +2692,21 @@ NOTE: Does not take effect until you reload the model.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
- Dacă activezi această funcţionalitate, vei participa la procesul democratic
- de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
+ Dacă activezi această funcţionalitate, vei participa la procesul democratic
+ de instruire a unui model LLM prin contribuţia ta cu date la îmbunătăţirea modelului.
- Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
- trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica,
- Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ.
- Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
+ Când un model în GPT4All îţi răspunde şi îi accepţi replica, atunci conversaţia va fi
+ trimisă la componenta Open-source DataLake a GPT4All. Mai mult - îi poţi aprecia replica,
+ Dacă răspunsul Nu Îti Place, poţi sugera unul alternativ.
+ Aceste date vor fi colectate şi agregate în componenta DataLake a GPT4All.
- NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
- DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
- această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
- Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
- utilizate de către Nomic AI pentru a îmbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
- toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
- participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
+ NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
+ DataLake a GPT4All. Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi
+ această funcţionalitate. Totuşi, te poţi aştepta la a beneficia de apreciere - opţional, dacă doreşti.
+ Datele din conversaţie vor fi disponibile pentru oricine vrea să le descarce şi vor fi
+ utilizate de către Nomic AI pentru a îmbunătăţi modele viitoare în GPT4All. Nomic AI va păstra
+ toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+ participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
@@ -2713,37 +2718,37 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Describes what will happen when you opt-in
- Descrie ce se Întâmplă când participi
+ Descrie ce se Întâmplă când participi
Please provide a name for attribution (optional)
- Specifică o denumire pentru această apreciere (optional)
+ Specifică o denumire pentru această apreciere (optional)
Attribution (optional)
- Apreciere (opţional)
+ Apreciere (opţional)
Provide attribution
- Apreciază
+ Apreciază
Enable
- Activează
+ Activează
Enable opt-in
- Activează participarea
+ Activează participarea
@@ -2755,7 +2760,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Cancel opt-in
- Anulează participarea
+ Anulează participarea
@@ -2764,7 +2769,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
New version is available
- O nouă versiune disponibilă!
+ O nouă versiune disponibilă!
@@ -2776,7 +2781,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Update to new version
- Actualizează la noua versiune
+ Actualizează la noua versiune
@@ -2785,7 +2790,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Reveals a shortlived help balloon
- Afisează un mesaj scurt de asistenţă
+ Afisează un mesaj scurt de asistenţă
@@ -2797,7 +2802,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Displayed when the popup is showing busy
- Se afişează când procedura este în desfăşurare
+ Se afişează când procedura este în desfăşurare
@@ -2814,7 +2819,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Contains various application settings
- Conţine setări ale programului
+ Conţine setări ale programului
@@ -2861,7 +2866,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Release notes for this version
- Despre această versiune
+ Despre această versiune
### Opt-ins for anonymous usage analytics and datalake
@@ -2887,28 +2892,28 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
attribution information attached to your data and you will be credited as a
contributor to any GPT4All
model release that uses your data!
- ### Acceptul pentru analizarea utilizării anonime şi pentru DataLake
- Activând aceste functionalităţi vei putea participa la procesul democratic
+ ### Acceptul pentru analizarea utilizării anonime şi pentru DataLake
+ Activând aceste functionalităţi vei putea participa la procesul democratic
de instruire a unui
- model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
- Când un model în GPT4All Îţi răspunde şi îi accepţi răspunsul, conversaţia este
- trimisă la componenta
- Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Like/Dislike) răspunsul. Dacă
- un răspuns Nu Îţi Place. poţi
- sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate În
+ model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
+ Când un model în GPT4All Îţi răspunde şi îi accepţi răspunsul, conversaţia este
+ trimisă la componenta
+ Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Like/Dislike) răspunsul. Dacă
+ un răspuns Nu Îţi Place. poţi
+ sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate În
componenta DataLake a GPT4All.
- NOTă: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
+ NOTă: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
DataLake a GPT4All.
- Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
- Totuşi, te poţi aştepta la a beneficia de apreciere -
- opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
- pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
- pentru a îmbunătăţi modele viitoare în GPT4All.
- Nomic AI va păstra
- toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+ Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
+ Totuşi, te poţi aştepta la a beneficia de apreciere -
+ opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
+ pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
+ pentru a îmbunătăţi modele viitoare în GPT4All.
+ Nomic AI va păstra
+ toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
participant contribuitor la orice lansare a unui model GPT4All
- care foloseşte datele tale!
+ care foloseşte datele tale!
@@ -2937,25 +2942,25 @@ an expectation of an optional attribution if you wish. Your chat data will be op
to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all
attribution information attached to your data and you will be credited as a contributor to any GPT4All
model release that uses your data!
- ### Acordul pentru analizarea utilizării anonime şi pentru DataLake
- Activând aceste functionalităţi vei putea participa la procesul democratic de instruire
-a unui model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
+ ### Acordul pentru analizarea utilizării anonime şi pentru DataLake
+ Activând aceste functionalităţi vei putea participa la procesul democratic de instruire
+a unui model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
-Când un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
-trimisă la componenta Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Bravo/Aiurea) răspunsul. Dacă
-un răspuns e Aiurea. poţi sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
+Când un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
+trimisă la componenta Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Bravo/Aiurea) răspunsul. Dacă
+un răspuns e Aiurea. poţi sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
componenta DataLake a GPT4All.
-NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta DataLake a GPT4All.
-Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
-Totuşi, te poţi aştepta la a beneficia de apreciere -
-opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
-pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
-pentru a îmbunătăţi modele viitoare în GPT4All.
-Nomic AI va păstra
-toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
+NOTĂ: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta DataLake a GPT4All.
+Atunci nu te vei putea aştepta la intimitatea (privacy) conversaţiei dacă activezi această funcţionalitate.
+Totuşi, te poţi aştepta la a beneficia de apreciere -
+opţional, dacă doreşti. Datele din conversaţie vor fi disponibile
+pentru oricine vrea să le descarce şi vor fi utilizate de către Nomic AI
+pentru a îmbunătăţi modele viitoare în GPT4All.
+Nomic AI va păstra
+toate informaţiile despre atribuire asociate datelor tale şi vei fi menţionat ca
participant contribuitor la orice lansare a unui model GPT4All
-care foloseşte datele tale!
+care foloseşte datele tale!
@@ -2967,7 +2972,7 @@ care foloseşte datele tale!
Describes what will happen when you opt-in
- Descrie ce se Întâmplă când participi
+ Descrie ce se Întâmplă când participi
@@ -2975,7 +2980,7 @@ care foloseşte datele tale!
Opt-in for anonymous usage statistics
- Acceptă colectarea de statistici despre utilizare anonmă
+ Acceptă colectarea de statistici despre utilizare anonmă
@@ -2989,7 +2994,7 @@ care foloseşte datele tale!
Allow opt-in for anonymous usage statistics
- Acceptă participarea la colectarea de statistici despre utilizare -anonimă-
+ Acceptă participarea la colectarea de statistici despre utilizare -anonimă-
@@ -3003,13 +3008,13 @@ care foloseşte datele tale!
Opt-out for anonymous usage statistics
- Anulează participarea la colectarea de statistici despre utilizare -anonimă-
+ Anulează participarea la colectarea de statistici despre utilizare -anonimă-
Allow opt-out for anonymous usage statistics
- Permite anularea participării la colectarea de statistici despre utilizare -anonimă-
+ Permite anularea participării la colectarea de statistici despre utilizare -anonimă-
@@ -3017,31 +3022,31 @@ care foloseşte datele tale!
Opt-in for network
- Acceptă pentru reţea
+ Acceptă pentru reţea
Allow opt-in for network
- Permite participarea pentru reţea
+ Permite participarea pentru reţea
Allow opt-in anonymous sharing of chats to the GPT4All Datalake
- Permite participarea la partajarea (share) anonimă a conversaţiilor către DataLake a GPT4All
+ Permite participarea la partajarea (share) anonimă a conversaţiilor către DataLake a GPT4All
Opt-out for network
- Refuz participarea, pentru reţea
+ Refuz participarea, pentru reţea
Allow opt-out anonymous sharing of chats to the GPT4All Datalake
- Permite anularea participării la partajarea anonimă a conversaţiilor către DataLake a GPT4All
+ Permite anularea participării la partajarea anonimă a conversaţiilor către DataLake a GPT4All
@@ -3049,25 +3054,25 @@ care foloseşte datele tale!
<b>Warning:</b> changing the model will erase the current
conversation. Do you wish to continue?
- <b>Atenţie:</b> schimbarea modelului va sterge conversaţia curentă. Confirmi aceasta?
+ <b>Atenţie:</b> schimbarea modelului va sterge conversaţia curentă. Confirmi aceasta?
<b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
- <b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
+ <b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
Continue
- Continuă
+ Continuă
Continue with model loading
- Continuă cu Încărcarea modelului
+ Continuă cu Încărcarea modelului
@@ -3084,13 +3089,13 @@ care foloseşte datele tale!
Please edit the text below to provide a better response. (optional)
- Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional).
+ Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional).
Please provide a better response...
- Te rog, oferă o replică mai bună...
+ Te rog, oferă o replică mai bună...
@@ -3102,7 +3107,7 @@ care foloseşte datele tale!
Submits the user's response
- Trimite răspunsul dat de utilizator
+ Trimite răspunsul dat de utilizator
@@ -3114,7 +3119,7 @@ care foloseşte datele tale!
Closes the response dialog
- Închide dialogul răspunsului
+ Închide dialogul răspunsului
@@ -3126,11 +3131,11 @@ care foloseşte datele tale!
detected."</i><br><br>Unfortunately, your CPU does not meet
the minimal requirements to run this program. In particular, it does not support AVX
intrinsics which this program requires to successfully run a modern large language
- model. The only soluţion at this time is to upgrade your hardware to a more modern
+ model. The only soluţion at this time is to upgrade your hardware to a more modern
CPU.<br><br>See here for more information: <a
href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu Întruneşte condiţiile minime pentru a rula acest program. în particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. în acest moment, unica soluţie este să Îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu Întruneşte condiţiile minime pentru a rula acest program. în particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. în acest moment, unica soluţie este să Îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
@@ -3146,79 +3151,83 @@ care foloseşte datele tale!
permissions in the local app config directory where the settings file is located.
Check out our <a href="https://discord.gg/4M2QFmTt2k">discord
channel</a> for help.
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte În/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte În/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only soluţion at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only soluţion at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte în/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte în/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
Connection to datalake failed.
- Conectarea la DataLake a eşuat.
+ Conectarea la DataLake a eşuat.
Saving chats.
- Se salvează conversaţiile.
+ Se salvează conversaţiile.
Network dialog
- Dialogul despre reţea
+ Dialogul despre reţea
opt-in to share feedback/conversations
- acceptă partajarea (share) de comentarii/conversaţii
+ acceptă partajarea (share) de comentarii/conversaţii
Home view
- Secţiunea de Început
+ Secţiunea de Început
Home view of application
- Secţiunea de Început a programului
+ Secţiunea de Început a programului
Home
- Prima<br>pagină
+ Prima<br>pagină
Chat view
- Secţiunea conversaţiilor
+ Secţiunea conversaţiilor
Chat view to interact with models
- Secţiunea de chat pentru interacţiune cu modele
+ Secţiunea de chat pentru interacţiune cu modele
Chats
- Conversaţii
+ Conversaţii
@@ -3232,7 +3241,7 @@ care foloseşte datele tale!
Models view for installed models
- Secţiunea modelelor instalate
+ Secţiunea modelelor instalate
@@ -3246,7 +3255,7 @@ care foloseşte datele tale!
LocalDocs view to configure and use local docs
- Secţiunea LocalDocs de configurare şi folosire a Documentelor Locale
+ Secţiunea LocalDocs de configurare şi folosire a Documentelor Locale
@@ -3260,7 +3269,7 @@ care foloseşte datele tale!
Settings view for application configuration
- Secţiunea de configurare a programului
+ Secţiunea de configurare a programului
@@ -3272,7 +3281,7 @@ care foloseşte datele tale!
Using a network model
- Se foloseşte un model pe reţea
+ Se foloseşte un model pe reţea
@@ -3290,7 +3299,7 @@ care foloseşte datele tale!
View of installed models
- Secţiunea modelelor instalate
+ Secţiunea modelelor instalate
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index c2f0cf88..422e7940 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -119,26 +119,26 @@
触发模型的发现和筛选
Default
+ 默认
Likes
+ 喜欢
Downloads
+ 下载
Recent
+ 近期
@@ -147,14 +147,14 @@
排序:
Asc
+ 升序
Desc
+ 倒序
@@ -163,8 +163,8 @@
排序目录:
None
+ 无
@@ -183,170 +183,170 @@
搜索中 · %1
Sort by: %1
+ 排序: %1
Sort dir: %1
+ 排序目录: %1
Limit: %1
+ 数量: %1
Network error: could not retrieve %1
+ 网络错误:无法检索 %1
Busy indicator
+ 繁忙程度
Displayed when the models request is ongoing
+ 在模型请求进行中时显示
Model file
+ 模型文件
Model file to be downloaded
+ 待下载模型
Description
+ 描述
File description
+ 文件描述
Cancel
+ 取消
Resume
+ 继续
Download
+ 下载
Stop/restart/start the download
+ 停止/重启/开始下载
Remove
+ 删除
Remove model from filesystem
+ 从系统中删除模型
Install
+ 安装
Install online model
+ 安装在线模型
<strong><font size="1"><a href="#error">Error</a></strong></font>
+ <strong><font size="1"><a href="#error">错误</a></strong></font>
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
+ <strong><font size="2">警告: 你的设备硬件不推荐 ,模型需要的内存 (%1 GB)比你的系统还要多 (%2).</strong></font>
ERROR: $API_KEY is empty.
+ 错误:$API_KEY 为空
ERROR: $BASE_URL is empty.
+ 错误:$BASE_URL 为空
enter $BASE_URL
+ 输入 $BASE_URL
ERROR: $MODEL_NAME is empty.
+ 错误:$MODEL_NAME为空
enter $MODEL_NAME
+ 输入:$MODEL_NAME
%1 GB
+ %1 GB
?
+ ?
@@ -355,8 +355,8 @@
<a href="#error">错误</a>
Describes an error that occurred when downloading
+ 描述下载过程中发生的错误
@@ -373,74 +373,74 @@
你的系统需要 (
Error for incompatible hardware
+ 硬件不兼容的错误
Download progressBar
+ 下载进度
Shows the progress made in the download
+ 显示下载进度
Download speed
+ 下载速度
Download speed in bytes/kilobytes/megabytes per second
+ 下载速度 b/kb/mb /s
Calculating...
+ 计算中
Whether the file hash is being calculated
+ 是否正在计算文件哈希
Displayed when the file hash is being calculated
+ 在计算文件哈希时显示
enter $API_KEY
+ 输入$API_KEY
File size
+ 文件大小
RAM required
+ RAM 需要
@@ -449,20 +449,20 @@
GB
Parameters
+ 参数
Quant
+ 量化
Type
+ 类型
@@ -533,74 +533,74 @@
应用的主题颜色
Dark
+ 深色
Light
+ 亮色
LegacyDark
+ LegacyDark
Font Size
+ 字体大小
The size of text in the application.
+ 应用中的文本大小。
Small
+ 小
Medium
+ 中
Large
+ 大
Language and Locale
+ 语言和本地化
The language and locale you wish to use.
+ 你想使用的语言
System Locale
+ 系统语言
Device
+ 设备
@@ -609,166 +609,166 @@
用于文本生成的计算设备. "自动" 使用 Vulkan or Metal.
The compute device used for text generation.
+ 设备用于文本生成
Application default
+ 程序默认
Default Model
+ 默认模型
The preferred model for new chats. Also used as the local server fallback.
+ 新聊天的首选模式。也用作本地服务器回退。
Suggestion Mode
+ 建议模式
Generate suggested follow-up questions at the end of responses.
+ 在答复结束时生成建议的后续问题。
When chatting with LocalDocs
+ 本地文档检索
Whenever possible
+ 只要有可能
Never
+ 从不
Download Path
+ 下载目录
Where to store local models and the LocalDocs database.
+ 本地模型和本地文档数据库存储目录
Browse
+ 查看
Choose where to save model files
+ 模型下载目录
Enable Datalake
+ 开启数据湖
Send chats and feedback to the GPT4All Open-Source Datalake.
+ 发送对话和反馈给GPT4All 的开源数据湖。
Advanced
+ 高级
CPU Threads
+ CPU线程
The number of CPU threads used for inference and embedding.
+ 用于推理和嵌入的CPU线程数
Save Chat Context
+ 保存对话上下文
Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
+ 保存模型's 状态以提供更快加载速度. 警告: 需用 ~2GB 每个对话.
Enable Local Server
+ 开启本地服务
Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
+ 将OpenAI兼容服务器暴露给本地主机。警告:导致资源使用量增加。
API Server Port
+ API 服务端口
The port to use for the local server. Requires restart.
+ 使用本地服务的端口,需要重启
Check For Updates
+ 检查更新
Manually check for an update to GPT4All.
+ 手动检查更新
Updates
+ 更新
@@ -1621,66 +1621,72 @@ model to get started
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- 技术设备用于embeddings. "自动" 使用.需要重启.
+ The compute device used for embeddings. Requires restart.
+ 技术设备用于embeddings. 需要重启.
+ Application default
+ 程序默认
Display
+ 显示
Show Sources
+ 查看源码
Display the sources used for each response.
+ 显示每个响应所使用的源。
Advanced
+ 高级
Warning: Advanced usage only.
+ 提示: 仅限高级使用。
Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
+ 值过大可能会导致 localdocs 失败、响应速度极慢或根本无法响应。粗略地说,{N 个字符 x N 个片段} 被添加到模型的上下文窗口中。更多信息请见<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此处</a>。
Document snippet size (characters)
+ 文档粘贴大小 (字符)
Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
+ 每个文档片段的字符数。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。
Max document snippets per prompt
+ 每个提示的最大文档片段数
Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
+ 检索到的文档片段最多添加到提示上下文中的前 N 个最佳匹配项。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 19fbc6f5..144541fb 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -121,309 +121,309 @@
觸發探索與過濾模型
Default
+ 預設
Likes
+ 讚
Downloads
+ 下載次數
Recent
+ 最新
Sort by: %1
+ 排序依據:%1
Asc
+ 升序
Desc
+ 降序
Sort dir: %1
+ 排序順序:%1
None
+ 無
Limit: %1
+ 上限:%1
Network error: could not retrieve %1
+ 網路錯誤:無法取得 %1
<strong><font size="1"><a href="#error">Error</a></strong></font>
+ <strong><font size="1"><a href="#error">錯誤</a></strong></font>
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
+ <strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
%1 GB
+ %1 GB
?
+ ?
Busy indicator
+ 參考自 https://terms.naer.edu.tw
+ 忙線指示器
Displayed when the models request is ongoing
+ 當模型請求正在進行時顯示
Model file
+ 模型檔案
Model file to be downloaded
+ 即將下載的模型檔案
Description
+ 描述
File description
+ 檔案描述
Cancel
+ 取消
Resume
+ 恢復
Download
+ 下載
Stop/restart/start the download
+ 停止/重啟/開始下載
Remove
+ 移除
Remove model from filesystem
+ 從檔案系統移除模型
Install
+ 安裝
Install online model
+ 安裝線上模型
Describes an error that occurred when downloading
+ 解釋下載時發生的錯誤
Error for incompatible hardware
+ 錯誤,不相容的硬體
Download progressBar
+ 下載進度條
Shows the progress made in the download
+ 顯示下載進度
Download speed
+ 下載速度
Download speed in bytes/kilobytes/megabytes per second
+ 下載速度每秒 bytes/kilobytes/megabytes
Calculating...
+ 計算中......
Whether the file hash is being calculated
+ 是否正在計算檔案雜湊
Displayed when the file hash is being calculated
+ 計算檔案雜湊值時顯示
ERROR: $API_KEY is empty.
+ 錯誤:$API_KEY 未填寫。
enter $API_KEY
+ 請輸入 $API_KEY
ERROR: $BASE_URL is empty.
+ 錯誤:$BASE_URL 未填寫。
enter $BASE_URL
+ 請輸入 $BASE_URL
ERROR: $MODEL_NAME is empty.
+ 錯誤:$MODEL_NAME 未填寫。
enter $MODEL_NAME
+ 請輸入 $MODEL_NAME
File size
+ 檔案大小
RAM required
+ 所需的記憶體
Parameters
+ 參數
Quant
+ 量化
Type
+ 類型
@@ -495,238 +495,238 @@
應用程式的配色方案。
Dark
+ 暗色
Light
+ 亮色
LegacyDark
+ 傳統暗色
Font Size
+ 字體大小
The size of text in the application.
+ 應用程式中的字體大小。
Small
+ 小
Medium
+ 中
Large
+ 大
Language and Locale
+ 語言與區域設定
The language and locale you wish to use.
+ 您希望使用的語言與區域設定。
System Locale
+ 系統語系
Device
+ 裝置
Default Model
+ 預設模型
The preferred model for new chats. Also used as the local server fallback.
+ 用於新交談的預設模型。也用於作為本機伺服器後援使用。
Suggestion Mode
+ 建議模式
When chatting with LocalDocs
+ 當使用「我的文件」交談時
Whenever possible
+ 視情況允許
Never
+ 永不
Generate suggested follow-up questions at the end of responses.
+ 在回覆末尾生成後續建議的問題。
The compute device used for text generation.
+ 用於生成文字的計算設備。
Application default
+ 應用程式預設值
Download Path
+ 下載路徑
Where to store local models and the LocalDocs database.
+ 儲存本機模型與「我的文件」資料庫的位置。
Browse
+ 瀏覽
Choose where to save model files
+ 選擇儲存模型檔案的位置
Enable Datalake
+ 啟用資料湖泊
Send chats and feedback to the GPT4All Open-Source Datalake.
+ 將交談與回饋傳送到 GPT4All 開放原始碼資料湖泊。
Advanced
+ 進階
CPU Threads
+ 中央處理器(CPU)線程
The number of CPU threads used for inference and embedding.
+ 用於推理與嵌入的中央處理器線程數。
Save Chat Context
+ 儲存交談語境
Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
+ 將交談模型的狀態儲存到磁碟以加快載入速度。警告:每次交談使用約 2GB。
Enable Local Server
+ 啟用本機伺服器
Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
+ 將 OpenAI 相容伺服器公開給本機。警告:導致資源使用增加。
API Server Port
+ API 伺服器埠口
The port to use for the local server. Requires restart.
+ 用於本機伺服器的埠口。需要重新啟動。
Check For Updates
+ 檢查更新
Manually check for an update to GPT4All.
+ 手動檢查 GPT4All 的更新。
Updates
+ 更新
@@ -1524,66 +1524,72 @@ model to get started
- The compute device used for embeddings. "Auto" uses the CPU. Requires restart.
- 用於嵌入的計算裝置。「Auto」將自動使用中央處理器。需要重新啟動。
+ The compute device used for embeddings. Requires restart.
+ 用於嵌入的計算裝置。需要重新啟動。
+ Application default
+ 應用程式預設值
Display
+ 顯示
Show Sources
+ 查看來源
Display the sources used for each response.
+ 顯示每則回覆所使用的來源。
Advanced
+ 進階
Warning: Advanced usage only.
+ 警告:僅限進階使用。
Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
+ 設定太大的數值可能會導致「我的文件」處理失敗、反應速度極慢或根本無法回覆。簡單地說,這會將 {N 個字元 x N 個片段} 被添加到模型的語境視窗中。更多資訊<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此處</a>。
Document snippet size (characters)
+ 文件片段大小(字元)
Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
+ 每個文件片段的字元數。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。
Max document snippets per prompt
+ 每個提示詞的最大文件片段
Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
+ 新增至提示詞語境中的檢索到的文件片段的最大 N 個符合的項目。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。
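A quick worked example of the arithmetic behind the two LocalDocs limits above (the numbers are illustrative assumptions, not the application's defaults): at 512 characters per snippet and 3 snippets per prompt, retrieval adds about 512 × 3 = 1536 characters to the prompt, on the order of 350 to 400 tokens at the usual ~4 characters per token, all of which counts against the model's context window before the conversation itself does.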
From 32b56e819d74e1055d3707d8b4ac41d5da6455d5 Mon Sep 17 00:00:00 2001
From: Riccardo Giovanetti <29801031+Harvester62@users.noreply.github.com>
Date: Fri, 16 Aug 2024 18:31:09 +0200
Subject: [PATCH 41/66] translations: cosmetic fixes for Italian (#2872)
Signed-off-by: Riccardo Giovanetti
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/translations/gpt4all_it_IT.ts | 14 +++++++-------
2 files changed, 8 insertions(+), 7 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 745d972a..fae501ec 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -11,6 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Fixed
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
+- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index 68cf05f9..e1d63a34 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -1916,7 +1916,7 @@ modello per iniziare
The template that wraps every prompt.
- Il modello che racchiude ogni prompt.
+ Lo schema che incorpora ogni prompt.
@@ -1946,7 +1946,7 @@ modello per iniziare
Prompt used to generate suggested follow-up questions.
- Prompt utilizzato per generare domande di approfondimento suggerite.
+ Prompt utilizzato per generare le domande di approfondimento suggerite.
@@ -2435,7 +2435,7 @@ NOTA: non ha effetto finché non si ricarica il modello.
Contribute data to the GPT4All Opensource Datalake.
- Contribuisci coni tuoi dati al Datalake Open Source di GPT4All.
+ Contribuisci con i tuoi dati al Datalake Open Source di GPT4All.
@@ -2447,9 +2447,9 @@ When a GPT4All model responds to you and you have opted-in, your conversation wi
NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should; however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data!
Abilitando questa funzionalità, potrai partecipare al processo democratico di addestramento di un modello linguistico di grandi dimensioni fornendo dati per futuri miglioramenti del modello.
-Quando un modello di GPT4All ti risponde e tu hai aderito, la tua conversazione verrà inviata al Datalake Open Source di GPT4All. Inoltre, puoi mettere mi piace/non mi piace la sua risposta. Se non ti piace una risposta, puoi suggerire una risposta alternativa. Questi dati verranno raccolti e aggregati nel Datalake di GPT4All.
+Quando un modello di GPT4All ti risponde e tu hai aderito, la tua conversazione verrà inviata al Datalake Open Source di GPT4All. Inoltre, puoi mettere mi piace/non mi piace alla sua risposta. Se non ti piace una risposta, puoi suggerirne una alternativa. Questi dati verranno raccolti e aggregati nel Datalake di GPT4All.
-NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di GPT4All. Non dovresti avere aspettative sulla privacy della chat quando questa funzione è abilitata. Dovresti; tuttavia, se lo desideri, aspettati un'attribuzione facoltativa. I tuoi dati di chat saranno liberamente disponibili per essere scaricati da chiunque e verranno utilizzati da Nomic AI per migliorare i futuri modelli GPT4All. Nomic AI conserverà tutte le informazioni di attribuzione allegate ai tuoi dati e verrai accreditato come collaboratore a qualsiasi versione del modello GPT4All che utilizza i tuoi dati!
+NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di GPT4All. Non dovresti avere aspettative sulla privacy della chat quando questa funzione è abilitata. Dovresti, tuttavia, aspettarti un'attribuzione facoltativa, se lo desideri. I tuoi dati di chat saranno liberamente disponibili per essere scaricati da chiunque e verranno utilizzati da Nomic AI per migliorare i futuri modelli GPT4All. Nomic AI conserverà tutte le informazioni di attribuzione allegate ai tuoi dati e verrai accreditato come collaboratore a qualsiasi versione del modello GPT4All che utilizza i tuoi dati!
@@ -2633,9 +2633,9 @@ model release that uses your data!
### Abilitazioni per analisi di utilizzo anonime e datalake
Abilitando questa funzionalità, potrai partecipare al processo democratico di addestramento di un modello linguistico di grandi dimensioni fornendo dati per futuri miglioramenti del modello.
-Quando un modello di GPT4All ti risponde e tu hai aderito, la tua conversazione verrà inviata al Datalake Open Source di GPT4All. Inoltre, puoi mettere mi piace/non mi piace la sua risposta. Se non ti piace una risposta, puoi suggerire una risposta alternativa. Questi dati verranno raccolti e aggregati nel Datalake di GPT4All.
+Quando un modello di GPT4All ti risponde e tu hai aderito, la tua conversazione verrà inviata al Datalake Open Source di GPT4All. Inoltre, puoi mettere mi piace/non mi piace alla sua risposta. Se non ti piace una risposta, puoi suggerirne una alternativa. Questi dati verranno raccolti e aggregati nel Datalake di GPT4All.
-NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di GPT4All. Non dovresti avere aspettative sulla privacy della chat quando questa funzione è abilitata. Dovresti; tuttavia, se lo desideri, aspettati un'attribuzione facoltativa. I tuoi dati di chat saranno liberamente disponibili per essere scaricati da chiunque e verranno utilizzati da Nomic AI per migliorare i futuri modelli GPT4All. Nomic AI conserverà tutte le informazioni di attribuzione allegate ai tuoi dati e verrai accreditato come collaboratore a qualsiasi versione del modello GPT4All che utilizza i tuoi dati!
+NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di GPT4All. Non dovresti avere aspettative sulla privacy della chat quando questa funzione è abilitata. Dovresti, tuttavia, aspettarti un'attribuzione facoltativa, se lo desideri, . I tuoi dati di chat saranno liberamente disponibili per essere scaricati da chiunque e verranno utilizzati da Nomic AI per migliorare i futuri modelli GPT4All. Nomic AI conserverà tutte le informazioni di attribuzione allegate ai tuoi dati e verrai accreditato come collaboratore a qualsiasi versione del modello GPT4All che utilizza i tuoi dati!
From ace79959d1d31dd0f7d79d39e7ac983895a02924 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?=E4=B8=8D=E7=9F=A5=E7=81=AB=20Shiranui?=
Date: Sat, 17 Aug 2024 01:00:26 +0800
Subject: [PATCH 42/66] translations: fix typos in Traditional Chinese (#2852)
Signed-off-by: Shiranui
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/translations/gpt4all_zh_TW.ts | 12 ++++++------
2 files changed, 7 insertions(+), 6 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index fae501ec..88fa3541 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -12,6 +12,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Fixed
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872))
+- Correct typos in Traditional Chinese translation (by [@supersonictw](https://github.com/supersonictw) in [#2852](https://github.com/nomic-ai/gpt4all/pull/2852))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 144541fb..abdbb611 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -612,7 +612,7 @@
The compute device used for text generation.
- 用於生成文字的計算設備。
+ 用於生成文字的計算裝置。
@@ -1785,7 +1785,7 @@ model to get started
<strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul>
- <strong>OpenAI 相容 API 模型</strong><br><ul><li>API 金鑰:%1</li><li>Base URL: %2</li><li>模型名稱: %3</li></ul>
+ <strong>OpenAI API 相容模型</strong><br><ul><li>API 金鑰:%1</li><li>基底 URL:%2</li><li>模型名稱:%3</li></ul>
@@ -1830,17 +1830,17 @@ model to get started
<ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li>
- <ul><li>需要個人的 API 金鑰和 API 的 base URL。</li><li>警告:這將會傳送您的交談紀錄到您所指定的 OpenAI 相容性 API 伺服器</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與其 OpenAI 相容性 API 伺服器進行通訊</li>
+ <ul><li>需要個人的 API 金鑰和 API 的基底 URL(Base URL)。</li><li>警告:這將會傳送您的交談紀錄到您所指定的 OpenAI API 相容伺服器</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與其 OpenAI API 相容伺服器進行通訊</li>
<strong>Connect to OpenAI-compatible API server</strong><br> %1
- <strong>連線到 OpenAI API 相容性伺服器</strong><br> %1
+ <strong>連線到 OpenAI API 相容伺服器</strong><br> %1
<strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
- <strong>建立者:%1。</strong><br><ul><li>發行於:%2。<li>這個模型有 %3 個讚。<li>這個模型有 %4 次下載次數。<li>更多資訊請查閱<a href="https://huggingface.co/%5">此處</a>。</ul>
+ <strong>模型作者:%1</strong><br><ul><li>發佈日期:%2<li>累積讚數:%3 個讚<li>下載次數:%4 次<li>更多資訊請查閱<a href="https://huggingface.co/%5">此處</a>。</ul>
@@ -2803,7 +2803,7 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>啟動時發生錯誤:</h3><br><i>「偵測到不相容的硬體。」</i><br><br>糟糕!您的中央處理器不符合運行所需的最低需求。尤其,它不支援本程式運行現代大型語言模型所需的 AVX 指令集。目前唯一的解決方案,只有更新您的中央處理器及其相關硬體設備。<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX指令集">AVX 指令集 - 維基百科</a>
+ <h3>啟動時發生錯誤:</h3><br><i>「偵測到不相容的硬體。」</i><br><br>糟糕!您的中央處理器不符合運行所需的最低需求。尤其,它不支援本程式運行現代大型語言模型所需的 AVX 指令集。目前唯一的解決方案,只有更新您的中央處理器及其相關硬體裝置。<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX指令集">AVX 指令集 - 維基百科</a>
From 10a83a8b26e2303d3f45cd5eae32457e87d2633e Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Fri, 16 Aug 2024 15:01:19 -0400
Subject: [PATCH 43/66] chat: set the window icon on Linux (#2880)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/main.cpp | 8 ++++++++
2 files changed, 9 insertions(+)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 88fa3541..0aa040bb 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -13,6 +13,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872))
- Correct typos in Traditional Chinese translation (by [@supersonictw](https://github.com/supersonictw) in [#2852](https://github.com/nomic-ai/gpt4all/pull/2852))
+- Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/main.cpp b/gpt4all-chat/main.cpp
index 4546a95b..49f1ea3a 100644
--- a/gpt4all-chat/main.cpp
+++ b/gpt4all-chat/main.cpp
@@ -21,6 +21,11 @@
#include
#include
+#ifdef Q_OS_LINUX
+# include <QIcon>
+#endif
+
+
int main(int argc, char *argv[])
{
QCoreApplication::setOrganizationName("nomic.ai");
@@ -32,6 +37,9 @@ int main(int argc, char *argv[])
Logger::globalInstance();
QGuiApplication app(argc, argv);
+#ifdef Q_OS_LINUX
+ app.setWindowIcon(QIcon(":/gpt4all/icons/gpt4all.svg"));
+#endif
// set search path before constructing the MySettings instance, which relies on this
QString llmodelSearchPaths = QCoreApplication::applicationDirPath();
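Taken together, the two hunks above reduce to the following minimal sketch (illustrative only; the resource path comes from the hunk itself, and the QML engine setup that the real main.cpp performs is omitted):

#include <QGuiApplication>
#include <QIcon>

int main(int argc, char *argv[])
{
    QGuiApplication app(argc, argv);
#ifdef Q_OS_LINUX
    // The ":/..." path resolves to an icon compiled into the binary from a Qt
    // resource (.qrc) file. The call is guarded to Linux, presumably because
    // Windows and macOS builds get their icon from the platform bundle instead.
    app.setWindowIcon(QIcon(":/gpt4all/icons/gpt4all.svg"));
#endif
    return app.exec();
}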
From 739121ea1e13428451edda76acf322f3b33f3c13 Mon Sep 17 00:00:00 2001
From: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Date: Mon, 19 Aug 2024 18:34:10 +0300
Subject: [PATCH 44/66] translations: corrections for Romanian (#2890)
Signed-off-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/translations/gpt4all_ro_RO.ts | 131 ++++++++++-----------
2 files changed, 64 insertions(+), 68 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 0aa040bb..f5df9944 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -14,6 +14,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872))
- Correct typos in Traditional Chinese translation (by [@supersonictw](https://github.com/supersonictw) in [#2852](https://github.com/nomic-ai/gpt4all/pull/2852))
- Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
+- Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index 219dc777..89c029a6 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -24,13 +24,13 @@
Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.
- Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi adăugate în Configurare.
+ Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi specificate în Configurare.
Please choose a directory
- Selectează un folder/director
+ Selectează un director/folder
@@ -78,7 +78,7 @@
Create Collection
- Creează o Colecţie
+ Creează Colecţia
@@ -257,13 +257,13 @@
Remove
- şterge
+ ŞtergeRemove model from filesystem
- şterge modelul din sistemul de fişiere
+ Şterge modelul din sistemul de fişiere
@@ -289,14 +289,13 @@
<strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem
- (%2).</strong></font>
+ <strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem (%2).</strong></font>
<strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât cea disponibilă în sistem (%2).</strong></font>
+ <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem (%2).</strong></font>
@@ -544,8 +543,7 @@
The compute device used for text generation. "Auto" uses Vulkan or
Metal.
- Dispozitivul de calcul utilizat pentru generarea de text.
- "Auto" apelează la Vulkan sau la Metal.
+ Dispozitivul de calcul utilizat pentru generarea de text. "Auto" apelează la Vulkan sau la Metal.
@@ -587,7 +585,7 @@
The language and locale you wish to use.
- Limba şi Localizarea de utilizat
+ Limba şi Localizarea de utilizat.
@@ -631,7 +629,7 @@
Generate suggested follow-up questions at the end of responses.
- Generarea de Întrebări pentru continuare, la finalul replicilor.
+ Generarea de întrebări pentru continuare, la finalul replicilor.
@@ -726,7 +724,7 @@
Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB
per chat.
- Salvează pe disc starea modelului pentru Încărcare mai rapidă. ATENţIE: Consumă ~2GB/conversaţie.
+ Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
@@ -737,7 +735,7 @@
Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased
resource usage.
- Activează pe localhost un Server compatibil cu Open-AI. ATENţIE: Creşte consumul de resurse.
+ Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
@@ -776,7 +774,7 @@
New Chat
- Chat/Conversaţie Nouă
+ Conversaţie Nouă
@@ -839,25 +837,25 @@
Save chat name
+ Salvează denumirea conversaţiei
Delete chat
+ Salvează denumirea conversaţieiDelete chat
- şterge conversaţia
+ Şterge conversaţiaConfirm chat deletion
- CONFIRMă ştergerea conversaţiei
+ CONFIRMĂ ştergerea conversaţieiCancel chat deletion
- ANULEAZă ştergerea conversaţiei
+ ANULEAZĂ ştergerea conversaţiei
@@ -877,12 +875,12 @@
TODAY
- ASTăZI
+ ASTĂZITHIS WEEK
- SăPTăMâNA ACEASTA
+ SĂPTĂMÂNA ACEASTA
@@ -892,7 +890,7 @@
LAST SIX MONTHS
- ULTIMELE şASE LUNI
+ ULTIMELE ŞASE LUNI
@@ -953,7 +951,7 @@
Reload the currently loaded model
- ReÎncarcă modelul curent
+ Reîncarcă modelul curent
@@ -971,7 +969,7 @@
Model loading error.
- Eroare la Încărcarea modelului.
+ Eroare la încărcarea modelului.
@@ -1044,8 +1042,7 @@
GPT4All requires that you install at least one
model to get started
- GPT4All necesită cel puţin un
- model pentru a putea porni
+ GPT4All necesită cel puţin un model pentru a putea porni
@@ -1072,8 +1069,7 @@
GPT4All requires that you install at least one
model to get started
- GPT4All necesită cel puţin un
- model pentru a putea rula
+ GPT4All necesită cel puţin un model pentru a putea rula
@@ -1085,7 +1081,7 @@ model to get started
Shows the add model view
- Afisează secţiunea de adăugare a unui model
+ Afişează secţiunea de adăugare a unui model
@@ -1119,7 +1115,7 @@ model to get started
response stopped ...
- replică Întreruptă...
+ replică întreruptă...
@@ -1137,7 +1133,7 @@ model to get started
generating questions ...
- se generează Întrebări...
+ se generează întrebări...
@@ -1205,13 +1201,13 @@ model to get started
Erase and reset chat session
- şterge şi resetează sesiunea de chat
+ Şterge şi resetează sesiunea de chat
Copy chat session to clipboard
- Copiez sesiunea de chat în Clipboard
+ Copiez sesiunea de chat (conversaţia) în Clipboard
@@ -1235,7 +1231,7 @@ model to get started
Reloads the model
- ReÎncarc modelul
+ Reîncarc modelul
<h3>Encountered an error loading
@@ -1254,7 +1250,7 @@ model to get started
<h3>EROARE la Încărcarea
modelului:</h3><br><i>"%1"</i><br><br>Astfel
de erori pot apărea din mai multe cauze, dintre care cele mai comune
- includ un format inadecvat al fişierului, un download incomplet sau Întrerupt,
+ includ un format inadecvat al fişierului, un download incomplet sau întrerupt,
un tip inadecvat de fişier, RAM insuficientă, sau un tip incompatibil de model.
Sugestii pentru rezolvarea problemei: verifică dacă fişierul modelului are
un format si un tip compatibile; verifică dacă fişierul modelului este complet
@@ -1535,7 +1531,7 @@ model to get started
Github
-
+ GitHub
@@ -1583,8 +1579,7 @@ model to get started
Comma-separated list. LocalDocs will only attempt to process files with these
extensions.
- Extensiile, separate prin virgulă. LocalDocs va Încerca procesarea
- numai a fişierelor cu aceste extensii.
+ Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
@@ -1808,7 +1803,7 @@ model to get started
INDEXING
- ...SE INDEXEAZă...
+ ...SE INDEXEAZĂ...
@@ -1820,7 +1815,7 @@ model to get started
REQUIRES UPDATE
- NECESITă UPDATE
+ NECESITĂ UPDATE
@@ -2116,7 +2111,7 @@ model to get started
Prompt used to generate suggested follow-up questions.
- Prompt-ul folosit pentru generarea Întrebărilor de continuare.
+ Prompt-ul folosit pentru generarea întrebărilor de continuare.
@@ -2134,7 +2129,7 @@ model to get started
Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
- Numărul maxim combinat al token-urilor în prompt+replică Înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTă: Nu are efect până la reincărcarea modelului.
+ Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului.
@@ -2151,7 +2146,7 @@ model to get started
Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTă: O temperatură tot mai Înaltă determinÎ replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.
@@ -2168,7 +2163,7 @@ model to get started
Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTă: Se evită selectarea token-urilor foarte improbabile.
+ Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
@@ -2195,7 +2190,7 @@ NOTE: Does not take effect until you reload the model.
Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determinî replici tot mai creative şi mai puţin predictibile.
+ Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.
@@ -2282,7 +2277,7 @@ NOTE: Does not take effect until you reload the model.
Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- numărul token-urilor procesate simultan. NOTă: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
+ numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
@@ -2300,7 +2295,7 @@ NOTE: Does not take effect until you reload the model.
Repeat Penalty Tokens
- Token-uri pentru penalzare a repetării
+ Token-uri pentru penalizare a repetării
@@ -2325,7 +2320,7 @@ NOTE: Does not take effect until you reload the model.
VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
- Cât de multe layere ale modelului să fie Încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce Încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi Încetinesc inferenţa. NOTă: Nu are efect până la reÎncărcarea modelului.
+ Cât de multe layere ale modelului să fie încărcate în VRAM. Valori mici trebuie folosite dacă GPT4All rămâne fără VRAM în timp ce încarcă modelul. Valorile tot mai mici cresc utilizarea CPU şi a RAM şi încetinesc inferenţa. NOTĂ: Nu are efect până la reîncărcarea modelului.
@@ -2340,7 +2335,7 @@ NOTE: Does not take effect until you reload the model.
Install a model to get started using GPT4All
- Instalează un model pentru a Începe să foloseşti GPT4All
+ Instalează un model pentru a începe să foloseşti GPT4All
@@ -2408,7 +2403,7 @@ NOTE: Does not take effect until you reload the model.
Stop/restart/start the download
- Oprirea/Repornirea/Initierea descărcării
+ Oprirea/Repornirea/Iniţierea descărcării
@@ -2445,7 +2440,7 @@ NOTE: Does not take effect until you reload the model.
<strong><font size="2">WARNING: Not recommended for your
hardware. Model requires more memory (%1 GB) than your system has available
(%2).</strong></font>
- <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
+ <strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău (%2).</strong></font>
@@ -2718,13 +2713,13 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Describes what will happen when you opt-in
- Descrie ce se Întâmplă când participi
+ Descrie ce se întâmplă când participi
Please provide a name for attribution (optional)
- Specifică o denumire pentru această apreciere (optional)
+ Specifică o denumire pentru această apreciere (opţional)
@@ -2790,7 +2785,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Reveals a shortlived help balloon
- Afisează un mesaj scurt de asistenţă
+ Afişează un mesaj scurt de asistenţă
@@ -2896,11 +2891,11 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Activând aceste functionalităţi vei putea participa la procesul democratic
de instruire a unui
model conversaţional prin contribuirea cu date/informaţii pentru îmbunătăţirea unor modele.
- Când un model în GPT4All Îţi răspunde şi îi accepţi răspunsul, conversaţia este
+ Când un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
trimisă la componenta
Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Like/Dislike) răspunsul. Dacă
- un răspuns Nu Îţi Place. poţi
- sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate În
+ un răspuns Nu Îţi Place (e "Aiurea"). poţi
+ sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
componenta DataLake a GPT4All.
NOTă: Dacă activezi această funcţionalitate, vei trimite datele tale la componenta
@@ -2972,7 +2967,7 @@ care foloseşte datele tale!
Describes what will happen when you opt-in
- Descrie ce se Întâmplă când participi
+ Descrie ce se întâmplă când participi
@@ -2980,7 +2975,7 @@ care foloseşte datele tale!
Opt-in for anonymous usage statistics
- Acceptă colectarea de statistici despre utilizare anonmă
+ Acceptă colectarea de statistici despre utilizare -anonimă-
@@ -3034,7 +3029,7 @@ care foloseşte datele tale!
Allow opt-in anonymous sharing of chats to the GPT4All Datalake
- Permite participarea la partajarea (share) anonimă a conversaţiilor către DataLake a GPT4All
+ Permite participarea la partajarea (share) -anonimă- a conversaţiilor către DataLake a GPT4All
@@ -3046,7 +3041,7 @@ care foloseşte datele tale!
Allow opt-out anonymous sharing of chats to the GPT4All Datalake
- Permite anularea participării la partajarea anonimă a conversaţiilor către DataLake a GPT4All
+ Permite anularea participării la partajarea -anonimă- a conversaţiilor către DataLake a GPT4All
@@ -3054,7 +3049,7 @@ care foloseşte datele tale!
<b>Warning:</b> changing the model will erase the current
conversation. Do you wish to continue?
- <b>Atenţie:</b> schimbarea modelului va sterge conversaţia curentă. Confirmi aceasta?
+ <b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
@@ -3072,7 +3067,7 @@ care foloseşte datele tale!
Continue with model loading
- Continuă cu Încărcarea modelului
+ Continuă încărcarea modelului
@@ -3119,7 +3114,7 @@ care foloseşte datele tale!
Closes the response dialog
- Închide dialogul răspunsului
+ Închide afişarea răspunsului
@@ -3135,7 +3130,7 @@ care foloseşte datele tale!
CPU.<br><br>See here for more information: <a
href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu Întruneşte condiţiile minime pentru a rula acest program. în particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. în acest moment, unica soluţie este să Îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
@@ -3151,7 +3146,7 @@ care foloseşte datele tale!
permissions in the local app config directory where the settings file is located.
Check out our <a href="https://discord.gg/4M2QFmTt2k">discord
channel</a> for help.
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte În/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
@@ -3161,13 +3156,13 @@ care foloseşte datele tale!
<h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
<h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
- <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte în/pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
+ <h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
From 432430811dd94d99afc8bbe61a021839e35777d8 Mon Sep 17 00:00:00 2001
From: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
Date: Mon, 19 Aug 2024 18:01:18 +0200
Subject: [PATCH 45/66] ChatView: use correct plurals for "N Source(s)" (#2885)
Signed-off-by: cosmic-snow <134004613+cosmic-snow@users.noreply.github.com>
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/qml/ChatView.qml | 2 +-
gpt4all-chat/translations/gpt4all_en_US.ts | 464 +-------------------
gpt4all-chat/translations/gpt4all_es_MX.ts | 468 +-------------------
gpt4all-chat/translations/gpt4all_it_IT.ts | 468 +-------------------
gpt4all-chat/translations/gpt4all_pt_BR.ts | 468 +-------------------
gpt4all-chat/translations/gpt4all_ro_RO.ts | 479 +--------------------
gpt4all-chat/translations/gpt4all_zh_CN.ts | 467 +-------------------
gpt4all-chat/translations/gpt4all_zh_TW.ts | 467 +-------------------
9 files changed, 60 insertions(+), 3224 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index f5df9944..808648d5 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -15,6 +15,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Correct typos in Traditional Chinese translation (by [@supersonictw](https://github.com/supersonictw) in [#2852](https://github.com/nomic-ai/gpt4all/pull/2852))
- Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
- Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))
+- Fix singular/plural forms of LocalDocs "x Sources" (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2885](https://github.com/nomic-ai/gpt4all/pull/2885))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/qml/ChatView.qml b/gpt4all-chat/qml/ChatView.qml
index 920e2759..f62b6654 100644
--- a/gpt4all-chat/qml/ChatView.qml
+++ b/gpt4all-chat/qml/ChatView.qml
@@ -1142,7 +1142,7 @@ Rectangle {
}
Text {
- text: qsTr("%1 Sources").arg(consolidatedSources.length)
+ text: qsTr("%n Source(s)", "", consolidatedSources.length)
padding: 0
font.pixelSize: theme.fontSizeLarge
font.bold: true
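For context, Qt's qsTr() selects plural forms through its third argument: when a count is passed, the translator picks the matching <numerusform> entry from the loaded .ts file and substitutes the count for %n, which is why the fixed string "%1 Sources" above could never render "1 Source". A minimal sketch of the pattern, where sourceCount is a hypothetical stand-in for consolidatedSources.length:

import QtQuick

Text {
    // Hypothetical count of LocalDocs sources backing a response.
    property int sourceCount: 1

    // qsTr(sourceText, disambiguation, n): n drives plural selection.
    // A .ts file supplies one <numerusform> per plural form, e.g.:
    //   <translation>
    //       <numerusform>%n Source</numerusform>
    //       <numerusform>%n Sources</numerusform>
    //   </translation>
    // With no translator installed, %n is substituted into the source
    // text itself, yielding "1 Source(s)" here.
    text: qsTr("%n Source(s)", "", sourceCount)
}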
diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
index b8944c0c..0221d4e7 100644
--- a/gpt4all-chat/translations/gpt4all_en_US.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
@@ -5,73 +5,61 @@
AddCollectionView
- ← Existing Collections
- Add Document Collection
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.
- Please choose a directory
- Name
- Collection name...
- Name of the collection to add (Required)
- Folder
- Folder path...
- Folder path to documents (Required)
- Browse
- Create Collection
@@ -80,295 +68,244 @@
AddModelView
- ← Existing Models
- Explore Models
- Discover and download models by keyword search...
- Text field for discovering and filtering downloadable models
- Initiate model discovery and filtering
- Triggers discovery and filtering of models
- Default
- Likes
- Downloads
- Recent
- Asc
- Desc
- None
- Searching · %1
- Sort by: %1
- Sort dir: %1
- Limit: %1
- Network error: could not retrieve %1
-
- Busy indicator
- Displayed when the models request is ongoing
- Model file
- Model file to be downloaded
- Description
- File description
- Cancel
- Resume
- Download
- Stop/restart/start the download
- Remove
- Remove model from filesystem
-
- Install
- Install online model
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- ERROR: $API_KEY is empty.
- ERROR: $BASE_URL is empty.
- enter $BASE_URL
- ERROR: $MODEL_NAME is empty.
- enter $MODEL_NAME
- %1 GB
-
- ?
- Describes an error that occurred when downloading
- <strong><font size="1"><a href="#error">Error</a></strong></font>
- Error for incompatible hardware
- Download progressBar
- Shows the progress made in the download
- Download speed
- Download speed in bytes/kilobytes/megabytes per second
- Calculating...
@@ -377,52 +314,41 @@
-
-
-
- Whether the file hash is being calculated
- Displayed when the file hash is being calculated
- enter $API_KEY
- File size
- RAM required
- Parameters
- Quant
- Type
@@ -431,25 +357,21 @@
ApplicationSettings
- Application
- Network dialog
- opt-in to share feedback/conversations
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -461,267 +383,222 @@
- Error dialog
- Application Settings
- General
- Theme
- The application color scheme.
- Dark
- Light
- LegacyDark
- Font Size
- The size of text in the application.
- Small
- Medium
- Large
- Language and Locale
- The language and locale you wish to use.
- System Locale
- Device
- The compute device used for text generation.
-
- Application default
- Default Model
- The preferred model for new chats. Also used as the local server fallback.
- Suggestion Mode
- Generate suggested follow-up questions at the end of responses.
- When chatting with LocalDocs
- Whenever possible
- Never
- Download Path
- Where to store local models and the LocalDocs database.
- Browse
- Choose where to save model files
- Enable Datalake
- Send chats and feedback to the GPT4All Open-Source Datalake.
- Advanced
- CPU Threads
- The number of CPU threads used for inference and embedding.
- Save Chat Context
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
- Enable Local Server
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
- API Server Port
- The port to use for the local server. Requires restart.
- Check For Updates
- Manually check for an update to GPT4All.
- Updates
@@ -757,73 +634,61 @@
ChatDrawer
- Drawer
- Main navigation drawer
- + New Chat
- Create a new chat
- Select the current chat or edit the chat when in edit mode
- Edit chat name
- Save chat name
- Delete chat
- Confirm chat deletion
- Cancel chat deletion
- List of chats
- List of chats in the drawer dialog
@@ -865,392 +730,328 @@
ChatView
- <h3>Warning</h3><p>%1</p>
- Switch model dialog
- Warn the user if they switch models, then context will be erased
- Conversation copied to clipboard.
- Code copied to clipboard.
- Chat panel
- Chat panel with options
- Reload the currently loaded model
- Eject the currently loaded model
- No model installed.
- Model loading error.
- Waiting for model...
- Switching context...
- Choose a model...
- Not found: %1
- The top item is the current model
-
- LocalDocs
- Add documents
- add collections of documents to the chat
- Load the default model
- Loads the default model which can be changed in settings
- No Model Installed
- GPT4All requires that you install at least one
model to get started
- Install a Model
- Shows the add model view
- Conversation with the model
- prompt / response pairs from the conversation
- GPT4All
- You
- response stopped ...
- processing ...
- generating response ...
- generating questions ...
-
- Copy
- Copy Message
- Disable markdown
- Enable markdown
- Thumbs up
- Gives a thumbs up to the response
- Thumbs down
- Opens thumbs down dialog
-
+
-
- %1 Sources
-
+ %n Source(s)
+
+ %n Source
+ %n Sources
+
- Suggested follow-ups
- Erase and reset chat session
- Copy chat session to clipboard
- Redo last chat response
- Stop generating
- Stop the current response generation
- Reloads the model
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help
-
- Reload · %1
- Loading · %1
- Load · %1 (default) →
- restoring from text ...
- retrieving localdocs: %1 ...
- searching localdocs: %1 ...
- Send a message...
- Load a model to continue...
- Send messages/prompts to the model
- Cut
- Paste
- Select All
- Send message
- Sends the message/prompt contained in textfield to the model
@@ -1259,13 +1060,11 @@ model to get started
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete results
- %n file(s)
%n file
@@ -1274,7 +1073,6 @@ model to get started
- %n word(s)
%n word
@@ -1283,19 +1081,16 @@ model to get started
- Updating
- + Add Docs
- Select a collection to make it available to the chat model.
@@ -1342,109 +1137,91 @@ model to get started
HomeView
- Welcome to GPT4All
- The privacy-first LLM chat application
- Start chatting
- Start Chatting
- Chat with any LLM
- LocalDocs
- Chat with your local files
- Find Models
- Explore and download models
- Latest news
- Latest news from GPT4All
- Release Notes
- Documentation
- Discord
- X (Twitter)
- Github
- GPT4All.io
- Subscribe to Newsletter
@@ -1453,139 +1230,116 @@ model to get started
LocalDocsSettings
- LocalDocs
- LocalDocs Settings
- Indexing
- Allowed File Extensions
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.
- Embedding
- Use Nomic Embed API
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.
- Nomic API Key
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.
- Embeddings Device
- The compute device used for embeddings. Requires restart.
- Application default
- Display
- Show Sources
- Display the sources used for each response.
- Advanced
- Warning: Advanced usage only.
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.
- Document snippet size (characters)
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.
- Max document snippets per prompt
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
@@ -1594,139 +1348,116 @@ model to get started
LocalDocsView
- LocalDocs
- Chat with your local files
- + Add Collection
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.
- No Collections Installed
- Install a collection of local documents to get started using this feature
- + Add Doc Collection
- Shows the add model view
- Indexing progressBar
- Shows the progress made in the indexing
- ERROR
- INDEXING
- EMBEDDING
- REQUIRES UPDATE
- READY
- INSTALLING
- Indexing in progress
- Embedding in progress
- This collection requires an update after version change
- Automatically reindexes upon changes to the folder
- Installation in progress
- %
- %n file(s)
%n file
@@ -1735,7 +1466,6 @@ model to get started
- %n word(s)
%n word
@@ -1744,31 +1474,26 @@ model to get started
- Remove
- Rebuild
- Reindex this folder from scratch. This is slow and usually not needed.
- Update
- Update the collection to the new version. This is a slow operation.
@@ -1845,109 +1570,91 @@ model to get started
ModelSettings
- Model
- Model Settings
- Clone
- Remove
- Name
- Model File
- System Prompt
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.
- Prompt Template
- The template that wraps every prompt.
- Must contain the string "%1" to be replaced with the user's input.
- Chat Name Prompt
- Prompt used to automatically generate chat names.
- Suggested FollowUp Prompt
- Prompt used to generate suggested follow-up questions.
- Context Length
- Number of input and output tokens the model sees.
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1955,148 +1662,124 @@ NOTE: Does not take effect until you reload the model.
- Temperature
- Randomness of model output. Higher -> more variation.
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.
- Top-P
- Nucleus Sampling factor. Lower -> more predicatable.
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Min-P
- Minimum token probability. Higher -> more predictable.
- Sets the minimum relative probability for a token to be considered.
- Top-K
- Size of selection pool for tokens.
- Only the top K most likely tokens will be chosen from.
- Max Length
- Maximum response length, in tokens.
- Prompt Batch Size
- The batch size used for prompt processing.
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- Repeat Penalty
- Repetition penalty factor. Set to 1 to disable.
- Repeat Penalty Tokens
- Number of previous tokens used for penalty.
- GPU Layers
- Number of model layers to load into VRAM.
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2107,203 +1790,168 @@ NOTE: Does not take effect until you reload the model.
ModelsView
- No Models Installed
- Install a model to get started using GPT4All
-
- + Add Model
- Shows the add model view
- Installed Models
- Locally installed chat models
- Model file
- Model file to be downloaded
- Description
- File description
- Cancel
- Resume
- Stop/restart/start the download
- Remove
- Remove model from filesystem
-
- Install
- Install online model
- <strong><font size="1"><a href="#error">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- ERROR: $API_KEY is empty.
- ERROR: $BASE_URL is empty.
- enter $BASE_URL
- ERROR: $MODEL_NAME is empty.
- enter $MODEL_NAME
- %1 GB
- ?
- Describes an error that occurred when downloading
- Error for incompatible hardware
- Download progressBar
- Shows the progress made in the download
- Download speed
- Download speed in bytes/kilobytes/megabytes per second
- Calculating...
@@ -2312,58 +1960,46 @@ NOTE: Does not take effect until you reload the model.
-
-
-
- Whether the file hash is being calculated
- Busy indicator
- Displayed when the file hash is being calculated
- enter $API_KEY
- File size
- RAM required
- Parameters
- Quant
- Type
@@ -2372,13 +2008,11 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
- Fancy link
- A stylized link
@@ -2387,7 +2021,6 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
- Please choose a directory
@@ -2396,13 +2029,11 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
- Restore Defaults
- Restores settings dialog to a default state
@@ -2411,13 +2042,11 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2427,55 +2056,46 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- Terms for opt-in
- Describes what will happen when you opt-in
- Please provide a name for attribution (optional)
- Attribution (optional)
- Provide attribution
- Enable
- Enable opt-in
- Cancel
- Cancel opt-in
@@ -2484,19 +2104,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
NewVersionDialog
- New version is available
- Update
- Update to new version
@@ -2505,19 +2122,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
PopupDialog
- Reveals a shortlived help balloon
- Busy indicator
- Displayed when the popup is showing busy
@@ -2527,32 +2141,26 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
-
- Settings
- Contains various application settings
- Application
- Model
- LocalDocs
@@ -2561,13 +2169,11 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
StartupDialog
- Welcome!
- ### Release notes
%1### Contributors
%2
@@ -2575,19 +2181,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- Release notes
- Release notes for this version
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2606,87 +2209,70 @@ model release that uses your data!
- Terms for opt-in
- Describes what will happen when you opt-in
-
- Opt-in for anonymous usage statistics
-
- Yes
- Allow opt-in for anonymous usage statistics
-
- No
- Opt-out for anonymous usage statistics
- Allow opt-out for anonymous usage statistics
-
- Opt-in for network
- Allow opt-in for network
- Allow opt-in anonymous sharing of chats to the GPT4All Datalake
- Opt-out for network
- Allow opt-out anonymous sharing of chats to the GPT4All Datalake
@@ -2695,27 +2281,22 @@ model release that uses your data!
SwitchModelDialog
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?
- Continue
- Continue with model loading
-
- Cancel
@@ -2724,37 +2305,31 @@ model release that uses your data!
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)
- Please provide a better response...
- Submit
- Submits the user's response
- Cancel
- Closes the response dialog
@@ -2763,151 +2338,124 @@ model release that uses your data!
main
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- GPT4All v%1
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.
- Connection to datalake failed.
- Saving chats.
- Network dialog
- opt-in to share feedback/conversations
- Home view
- Home view of application
- Home
- Chat view
- Chat view to interact with models
- Chats
-
- Models
- Models view for installed models
-
- LocalDocs
- LocalDocs view to configure and use local docs
-
- Settings
- Settings view for application configuration
- The datalake is enabled
- Using a network model
- Server mode is enabled
- Installed models
- View of installed models
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index 433d514a..a3498c43 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -5,73 +5,61 @@
AddCollectionView
- ← Existing Collections← Colecciones existentes
- Add Document CollectionAgregar colección de documentos
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Agregue una carpeta que contenga archivos de texto plano, PDFs o Markdown. Configure extensiones adicionales en Configuración.
- Please choose a directoryPor favor, elija un directorio
- NameNombre
- Collection name...Nombre de la colección...
- Name of the collection to add (Required)Nombre de la colección a agregar (Requerido)
- FolderCarpeta
- Folder path...Ruta de la carpeta...
- Folder path to documents (Required)Ruta de la carpeta de documentos (Requerido)
- BrowseExplorar
- Create CollectionCrear colección
@@ -80,265 +68,219 @@
AddModelView
- ← Existing Models← Modelos existentes
- Explore ModelsExplorar modelos
- Discover and download models by keyword search...Descubre y descarga modelos mediante búsqueda por palabras clave...
- Text field for discovering and filtering downloadable modelsCampo de texto para descubrir y filtrar modelos descargables
- Initiate model discovery and filteringIniciar descubrimiento y filtrado de modelos
- Triggers discovery and filtering of modelsActiva el descubrimiento y filtrado de modelos
- DefaultPredeterminado
- LikesMe gusta
- DownloadsDescargas
- RecentReciente
- AscAsc
- DescDesc
- NoneNinguno
- Searching · %1Buscando · %1
- Sort by: %1Ordenar por: %1
- Sort dir: %1Dirección de ordenamiento: %1
- Limit: %1Límite: %1
- Network error: could not retrieve %1Error de red: no se pudo recuperar %1
-
- Busy indicatorIndicador de ocupado
- Displayed when the models request is ongoingSe muestra cuando la solicitud de modelos está en curso
- Model fileArchivo del modelo
- Model file to be downloadedArchivo del modelo a descargar
- DescriptionDescripción
- File descriptionDescripción del archivo
- CancelCancelar
- ResumeReanudar
- DownloadDescargar
- Stop/restart/start the downloadDetener/reiniciar/iniciar la descarga
- RemoveEliminar
- Remove model from filesystemEliminar modelo del sistema de archivos
-
- InstallInstalar
- Install online modelInstalar modelo en línea
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ADVERTENCIA: No recomendado para tu hardware. El modelo requiere más memoria (%1 GB) de la que tu sistema tiene disponible (%2).</strong></font>
- %1 GB%1 GB
-
- ??
- Describes an error that occurred when downloadingDescribe un error que ocurrió durante la descarga
- Error for incompatible hardwareError por hardware incompatible
- Download progressBarBarra de progreso de descarga
- Shows the progress made in the downloadMuestra el progreso realizado en la descarga
- Download speedVelocidad de descarga
- Download speed in bytes/kilobytes/megabytes per secondVelocidad de descarga en bytes/kilobytes/megabytes por segundo
- Calculating...Calculando...
@@ -347,82 +289,66 @@
-
-
-
- Whether the file hash is being calculatedSi se está calculando el hash del archivo
- Displayed when the file hash is being calculatedSe muestra cuando se está calculando el hash del archivo
- enter $API_KEYingrese $API_KEY
- File sizeTamaño del archivo
- RAM requiredRAM requerida
- ParametersParámetros
- QuantCuantificación
- TypeTipo
- ERROR: $API_KEY is empty.ERROR: $API_KEY está vacío.
- ERROR: $BASE_URL is empty.ERROR: $BASE_URL está vacío.
- enter $BASE_URLingrese $BASE_URL
- ERROR: $MODEL_NAME is empty.ERROR: $MODEL_NAME está vacío.
- enter $MODEL_NAMEingrese $MODEL_NAME
@@ -431,25 +357,21 @@
ApplicationSettings
- ApplicationAplicación
- Network dialogDiálogo de red
- opt-in to share feedback/conversationsoptar por compartir comentarios/conversaciones
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -461,67 +383,56 @@
- Error dialogDiálogo de error
- Application SettingsConfiguración de la aplicación
- GeneralGeneral
- ThemeTema
- The application color scheme.El esquema de colores de la aplicación.
- DarkOscuro
- LightClaro
- LegacyDarkOscuro legado
- Font SizeTamaño de fuente
- The size of text in the application.El tamaño del texto en la aplicación.
- DeviceDispositivo
@@ -531,151 +442,126 @@
- SmallPequeño
- MediumMediano
- LargeGrande
- Language and LocaleIdioma y configuración regional
- The language and locale you wish to use.El idioma y la configuración regional que deseas usar.
- Default ModelModelo predeterminado
- The preferred model for new chats. Also used as the local server fallback.El modelo preferido para nuevos chats. También se utiliza como respaldo del servidor local.
- Suggestion ModeModo de sugerencia
- Generate suggested follow-up questions at the end of responses.Generar preguntas de seguimiento sugeridas al final de las respuestas.
- When chatting with LocalDocsAl chatear con LocalDocs
- Whenever possibleSiempre que sea posible
- NeverNunca
- Download PathRuta de descarga
- Where to store local models and the LocalDocs database.Dónde almacenar los modelos locales y la base de datos de LocalDocs.
- BrowseExplorar
- Choose where to save model filesElegir dónde guardar los archivos del modelo
- Enable DatalakeHabilitar Datalake
- Send chats and feedback to the GPT4All Open-Source Datalake.Enviar chats y comentarios al Datalake de código abierto de GPT4All.
- AdvancedAvanzado
- CPU ThreadsHilos de CPU
- The number of CPU threads used for inference and embedding.El número de hilos de CPU utilizados para inferencia e incrustación.
- Save Chat ContextGuardar contexto del chat
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Guardar el estado del modelo de chat en el disco para una carga más rápida. ADVERTENCIA: Usa ~2GB por chat.
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Exponer un servidor compatible con OpenAI a localhost. ADVERTENCIA: Resulta en un mayor uso de recursos.
- Enable Local ServerHabilitar servidor local
@@ -687,51 +573,42 @@
- API Server PortPuerto del servidor API
- The port to use for the local server. Requires restart.El puerto a utilizar para el servidor local. Requiere reinicio.
- Check For UpdatesBuscar actualizaciones
- Manually check for an update to GPT4All.Buscar manualmente una actualización para GPT4All.
- UpdatesActualizaciones
- System LocaleRegional del sistema
- The compute device used for text generation.El dispositivo de cómputo utilizado para la generación de texto.
-
- Application defaultPredeterminado de la aplicación
@@ -783,73 +660,61 @@
ChatDrawer
- DrawerCajón
- Main navigation drawerCajón de navegación principal
- + New Chat+ Nuevo chat
- Create a new chatCrear un nuevo chat
- Select the current chat or edit the chat when in edit modeSeleccionar el chat actual o editar el chat cuando esté en modo de edición
- Edit chat nameEditar nombre del chat
- Save chat nameGuardar nombre del chat
- Delete chatEliminar chat
- Confirm chat deletionConfirmar eliminación del chat
- Cancel chat deletionCancelar eliminación del chat
- List of chatsLista de chats
- List of chats in the drawer dialogLista de chats en el diálogo del cajón
@@ -891,177 +756,147 @@
ChatView
- <h3>Warning</h3><p>%1</p><h3>Advertencia</h3><p>%1</p>
- Switch model dialogDiálogo para cambiar de modelo
- Warn the user if they switch models, then context will be erasedAdvertir al usuario si cambia de modelo, entonces se borrará el contexto
- Conversation copied to clipboard.Conversación copiada al portapapeles.
- Code copied to clipboard.Código copiado al portapapeles.
- Chat panelPanel de chat
- Chat panel with optionsPanel de chat con opciones
- Reload the currently loaded modelRecargar el modelo actualmente cargado
- Eject the currently loaded modelExpulsar el modelo actualmente cargado
- No model installed.No hay modelo instalado.
- Model loading error.Error al cargar el modelo.
- Waiting for model...Esperando al modelo...
- Switching context...Cambiando contexto...
- Choose a model...Elige un modelo...
- Not found: %1No encontrado: %1
- The top item is the current modelEl elemento superior es el modelo actual
-
- LocalDocsDocumentosLocales
- Add documentsAgregar documentos
- add collections of documents to the chatagregar colecciones de documentos al chat
- Load the default modelCargar el modelo predeterminado
- Loads the default model which can be changed in settingsCarga el modelo predeterminado que se puede cambiar en la configuración
- No Model InstalledNo hay modelo instalado
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>Se encontró un error al cargar el modelo:</h3><br><i>"%1"</i><br><br>Los fallos en la carga de modelos pueden ocurrir por varias razones, pero las causas más comunes incluyen un formato de archivo incorrecto, una descarga incompleta o corrupta, un tipo de archivo equivocado, RAM del sistema insuficiente o un tipo de modelo incompatible. Aquí hay algunas sugerencias para resolver el problema:<br><ul><li>Asegúrate de que el archivo del modelo tenga un formato y tipo compatibles<li>Verifica que el archivo del modelo esté completo en la carpeta de descargas<li>Puedes encontrar la carpeta de descargas en el diálogo de configuración<li>Si has cargado el modelo manualmente, asegúrate de que el archivo no esté corrupto verificando el md5sum<li>Lee más sobre qué modelos son compatibles en nuestra <a href="https://docs.gpt4all.io/">documentación</a> para la interfaz gráfica<li>Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de discord</a> para obtener ayuda
- Install a ModelInstalar un modelo
- Shows the add model viewMuestra la vista de agregar modelo
- Conversation with the modelConversación con el modelo
- prompt / response pairs from the conversationpares de pregunta / respuesta de la conversación
- GPT4AllGPT4All
- YouTú
@@ -1071,209 +906,176 @@
- response stopped ...respuesta detenida ...
- processing ...procesando ...
- generating response ...generando respuesta ...
- generating questions ...generando preguntas ...
-
- CopyCopiar
- Copy MessageCopiar mensaje
- Disable markdownDesactivar markdown
- Enable markdownActivar markdown
- Thumbs upMe gusta
- Gives a thumbs up to the responseDa un me gusta a la respuesta
- Thumbs downNo me gusta
- Opens thumbs down dialogAbre el diálogo de no me gusta
-
-
-
- %1 Sources
- %1 Fuentes
-
- Suggested follow-upsSeguimientos sugeridos
- Erase and reset chat sessionBorrar y reiniciar sesión de chat
- Copy chat session to clipboardCopiar sesión de chat al portapapeles
- Redo last chat responseRehacer última respuesta del chat
- Stop generatingDetener generación
- Stop the current response generationDetener la generación de la respuesta actual
- Reloads the modelRecarga el modelo
-
- Reload · %1Recargar · %1
- Loading · %1Cargando · %1
- Load · %1 (default) →Cargar · %1 (predeterminado) →
- retrieving localdocs: %1 ...recuperando documentos locales: %1 ...
- searching localdocs: %1 ...buscando en documentos locales: %1 ...
+
+
+ %n Source(s)
+
+ %n Fuente
+ %n Fuentes
+
+
- Send a message...Enviar un mensaje...
- Load a model to continue...Carga un modelo para continuar...
- Send messages/prompts to the modelEnviar mensajes/indicaciones al modelo
- CutCortar
- PastePegar
- Select AllSeleccionar todo
- Send messageEnviar mensaje
- Sends the message/prompt contained in textfield to the modelEnvía el mensaje/indicación contenido en el campo de texto al modelo
- GPT4All requires that you install at least one
model to get started
GPT4All requiere que instale al menos un
@@ -1282,7 +1084,6 @@ modelo para comenzar
- restoring from text ...restaurando desde texto ...
@@ -1291,13 +1092,11 @@ modelo para comenzar
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete resultsAdvertencia: buscar en colecciones mientras se indexan puede devolver resultados incompletos
- %n file(s)%n archivo
@@ -1306,7 +1105,6 @@ modelo para comenzar
- %n word(s)%n palabra
@@ -1315,19 +1113,16 @@ modelo para comenzar
- UpdatingActualizando
- + Add Docs+ Agregar documentos
- Select a collection to make it available to the chat model.Seleccione una colección para hacerla disponible al modelo de chat.
@@ -1374,109 +1169,91 @@ modelo para comenzar
HomeView
- Welcome to GPT4AllBienvenido a GPT4All
- The privacy-first LLM chat applicationLa aplicación de chat LLM que prioriza la privacidad
- Start chattingComenzar a chatear
- Start ChattingIniciar chat
- Chat with any LLMChatear con cualquier LLM
- LocalDocsDocumentosLocales
- Chat with your local filesChatear con tus archivos locales
- Find ModelsBuscar modelos
- Explore and download modelsExplorar y descargar modelos
- Latest newsÚltimas noticias
- Latest news from GPT4AllÚltimas noticias de GPT4All
- Release NotesNotas de la versión
- DocumentationDocumentación
- DiscordDiscord
- X (Twitter)X (Twitter)
- GithubGithub
- GPT4All.ioGPT4All.io
- Subscribe to NewsletterSuscribirse al boletín
@@ -1485,25 +1262,21 @@ modelo para comenzar
LocalDocsSettings
- LocalDocsDocumentosLocales
- LocalDocs SettingsConfiguración de DocumentosLocales
- IndexingIndexación
- Allowed File ExtensionsExtensiones de archivo permitidas
@@ -1515,13 +1288,11 @@ modelo para comenzar
- EmbeddingIncrustación
- Use Nomic Embed APIUsar API de incrustación Nomic
@@ -1533,91 +1304,76 @@ modelo para comenzar
- Nomic API KeyClave API de Nomic
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Clave API para usar con Nomic Embed. Obtén una en la <a href="https://atlas.nomic.ai/cli-login">página de claves API</a> de Atlas. Requiere reinicio.
- Embeddings DeviceDispositivo de incrustaciones
- The compute device used for embeddings. Requires restart.El dispositivo de cómputo utilizado para las incrustaciones. Requiere reinicio.
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.Lista separada por comas. LocalDocs solo intentará procesar archivos con estas extensiones.
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.Incrustar documentos usando la API rápida de Nomic en lugar de un modelo local privado. Requiere reinicio.
- Application defaultPredeterminado de la aplicación
- DisplayVisualización
- Show SourcesMostrar fuentes
- Display the sources used for each response.Mostrar las fuentes utilizadas para cada respuesta.
- AdvancedAvanzado
- Warning: Advanced usage only.Advertencia: Solo para uso avanzado.
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valores demasiado grandes pueden causar fallos en localdocs, respuestas extremadamente lentas o falta de respuesta. En términos generales, los {N caracteres x N fragmentos} se añaden a la ventana de contexto del modelo. Más información <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Número de caracteres por fragmento de documento. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta.
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Máximo de N mejores coincidencias de fragmentos de documentos recuperados para añadir al contexto del prompt. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta.
@@ -1628,13 +1384,11 @@ modelo para comenzar
- Document snippet size (characters)Tamaño del fragmento de documento (caracteres)
- Max document snippets per promptMáximo de fragmentos de documento por indicación
@@ -1643,19 +1397,16 @@ modelo para comenzar
LocalDocsView
- LocalDocsDocumentosLocales
- Chat with your local filesChatea con tus archivos locales
- + Add Collection+ Agregar colección
@@ -1665,115 +1416,96 @@ modelo para comenzar
- No Collections InstalledNo hay colecciones instaladas
- Install a collection of local documents to get started using this featureInstala una colección de documentos locales para comenzar a usar esta función
- + Add Doc Collection+ Agregar colección de documentos
- Shows the add model viewMuestra la vista de agregar modelo
- Indexing progressBarBarra de progreso de indexación
- Shows the progress made in the indexingMuestra el progreso realizado en la indexación
- ERRORERROR
- INDEXINGINDEXANDO
- EMBEDDINGINCRUSTANDO
- REQUIRES UPDATEREQUIERE ACTUALIZACIÓN
- READYLISTO
- INSTALLINGINSTALANDO
- Indexing in progressIndexación en progreso
- Embedding in progressIncrustación en progreso
- This collection requires an update after version changeEsta colección requiere una actualización después del cambio de versión
- Automatically reindexes upon changes to the folderReindexación automática al cambiar la carpeta
- Installation in progressInstalación en progreso
- %%
- %n file(s)%n archivo
@@ -1782,7 +1514,6 @@ modelo para comenzar
- %n word(s)%n palabra
@@ -1791,37 +1522,31 @@ modelo para comenzar
- RemoveEliminar
- RebuildReconstruir
- Reindex this folder from scratch. This is slow and usually not needed.Reindexar esta carpeta desde cero. Esto es lento y generalmente no es necesario.
- UpdateActualizar
- Update the collection to the new version. This is a slow operation.Actualizar la colección a la nueva versión. Esta es una operación lenta.
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>ERROR: No se puede acceder a la base de datos LocalDocs o no es válida.</h3><br><i>Nota: Necesitará reiniciar después de intentar cualquiera de las siguientes soluciones sugeridas.</i><br><ul><li>Asegúrese de que la carpeta establecida como <b>Ruta de Descarga</b> exista en el sistema de archivos.</li><li>Verifique la propiedad y los permisos de lectura y escritura de la <b>Ruta de Descarga</b>.</li><li>Si hay un archivo <b>localdocs_v2.db</b>, verifique también su propiedad y permisos de lectura/escritura.</li></ul><br>Si el problema persiste y hay archivos 'localdocs_v*.db' presentes, como último recurso puede<br>intentar hacer una copia de seguridad y eliminarlos. Sin embargo, tendrá que recrear sus colecciones.
@@ -1904,103 +1629,86 @@ modelo para comenzar
ModelSettings
- ModelModelo
- Model SettingsConfiguración del modelo
- CloneClonar
- RemoveEliminar
- NameNombre
- Model FileArchivo del modelo
- System PromptIndicación del sistema
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Prefijado al inicio de cada conversación. Debe contener los tokens de encuadre apropiados.
- Prompt TemplatePlantilla de indicación
- The template that wraps every prompt.La plantilla que envuelve cada indicación.
- Must contain the string "%1" to be replaced with the user's input.Debe contener la cadena "%1" para ser reemplazada con la entrada del usuario.
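(For illustration only, not part of this diff: a minimal prompt template satisfying this rule could look like

    ### Human:
    %1
    ### Assistant:

with %1 replaced by the user's message before the prompt is sent to the model.)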
- Chat Name PromptIndicación para el nombre del chat
- Prompt used to automatically generate chat names.Indicación utilizada para generar automáticamente nombres de chat.
- Suggested FollowUp PromptIndicación de seguimiento sugerida
- Prompt used to generate suggested follow-up questions.Indicación utilizada para generar preguntas de seguimiento sugeridas.
- Context LengthLongitud del contexto
- Number of input and output tokens the model sees.Número de tokens de entrada y salida que el modelo ve.
@@ -2012,19 +1720,16 @@ NOTA: No tiene efecto hasta que recargues el modelo.
- TemperatureTemperatura
- Randomness of model output. Higher -> more variation.Aleatoriedad de la salida del modelo. Mayor -> más variación.
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.La temperatura aumenta las probabilidades de elegir tokens menos probables.
@@ -2032,19 +1737,16 @@ NOTA: Una temperatura más alta da resultados más creativos pero menos predecib
- Top-PTop-P
- Nucleus Sampling factor. Lower -> more predicatable.Factor de muestreo de núcleo. Menor -> más predecible.
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Solo se pueden elegir los tokens más probables hasta una probabilidad total de top_p.
@@ -2052,67 +1754,56 @@ NOTA: Evita elegir tokens altamente improbables.
- Min-PMin-P
- Minimum token probability. Higher -> more predictable.Probabilidad mínima del token. Mayor -> más predecible.
- Sets the minimum relative probability for a token to be considered.Establece la probabilidad relativa mínima para que un token sea considerado.
- Top-KTop-K
- Size of selection pool for tokens.Tamaño del grupo de selección para tokens.
- Only the top K most likely tokens will be chosen from.Solo se elegirán los K tokens más probables.
- Max LengthLongitud máxima
- Maximum response length, in tokens.Longitud máxima de respuesta, en tokens.
- Prompt Batch SizeTamaño del lote de indicaciones
- The batch size used for prompt processing.El tamaño del lote utilizado para el procesamiento de indicaciones.
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Cantidad de tokens de prompt a procesar de una vez.
@@ -2120,43 +1811,36 @@ NOTA: Valores más altos pueden acelerar la lectura de prompts, pero usarán má
- Repeat PenaltyPenalización por repetición
- Repetition penalty factor. Set to 1 to disable.Factor de penalización por repetición. Establecer a 1 para desactivar.
- Repeat Penalty TokensTokens de penalización por repetición
- Number of previous tokens used for penalty.Número de tokens anteriores utilizados para la penalización.
- GPU LayersCapas de GPU
- Number of model layers to load into VRAM.Número de capas del modelo a cargar en la VRAM.
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -2166,7 +1850,6 @@ NOTA: No surtirá efecto hasta que recargue el modelo.
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2179,173 +1862,143 @@ NOTA: No surte efecto hasta que recargue el modelo.
ModelsView
- No Models InstalledNo hay modelos instalados
- Install a model to get started using GPT4AllInstala un modelo para empezar a usar GPT4All
-
- + Add Model+ Agregar modelo
- Shows the add model viewMuestra la vista de agregar modelo
- Installed ModelsModelos instalados
- Locally installed chat modelsModelos de chat instalados localmente
- Model fileArchivo del modelo
- Model file to be downloadedArchivo del modelo a descargar
- DescriptionDescripción
- File descriptionDescripción del archivo
- CancelCancelar
- ResumeReanudar
- Stop/restart/start the downloadDetener/reiniciar/iniciar la descarga
- RemoveEliminar
- Remove model from filesystemEliminar modelo del sistema de archivos
-
- InstallInstalar
- Install online modelInstalar modelo en línea
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ADVERTENCIA: No recomendado para su hardware. El modelo requiere más memoria (%1 GB) de la que su sistema tiene disponible (%2).</strong></font>
- %1 GB%1 GB
- ??
- Describes an error that occurred when downloadingDescribe un error que ocurrió durante la descarga
- Error for incompatible hardwareError por hardware incompatible
- Download progressBarBarra de progreso de descarga
- Shows the progress made in the downloadMuestra el progreso realizado en la descarga
- Download speedVelocidad de descarga
- Download speed in bytes/kilobytes/megabytes per secondVelocidad de descarga en bytes/kilobytes/megabytes por segundo
- Calculating...Calculando...
@@ -2354,88 +2007,71 @@ NOTA: No surte efecto hasta que recargue el modelo.
- Whether the file hash is being calculatedSi se está calculando el hash del archivo
- Busy indicatorIndicador de ocupado
- Displayed when the file hash is being calculatedSe muestra cuando se está calculando el hash del archivo
- enter $API_KEYingrese $API_KEY
- File sizeTamaño del archivo
- RAM requiredRAM requerida
- ParametersParámetros
- QuantCuantificación
- TypeTipo
- ERROR: $API_KEY is empty.ERROR: $API_KEY está vacía.
- ERROR: $BASE_URL is empty.ERROR: $BASE_URL está vacía.
- enter $BASE_URLingrese $BASE_URL
- ERROR: $MODEL_NAME is empty.ERROR: $MODEL_NAME está vacío.
- enter $MODEL_NAMEingrese $MODEL_NAME
@@ -2444,13 +2080,11 @@ NOTA: No surte efecto hasta que recargue el modelo.
MyFancyLink
- Fancy linkEnlace elegante
- A stylized linkUn enlace estilizado
@@ -2459,7 +2093,6 @@ NOTA: No surte efecto hasta que recargue el modelo.
MySettingsStack
- Please choose a directoryPor favor, elija un directorio
@@ -2468,13 +2101,11 @@ NOTA: No surte efecto hasta que recargue el modelo.
MySettingsTab
- Restore DefaultsRestaurar valores predeterminados
- Restores settings dialog to a default stateRestaura el diálogo de configuración a su estado predeterminado
@@ -2483,7 +2114,6 @@ NOTA: No surte efecto hasta que recargue el modelo.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.Contribuir datos al Datalake de código abierto de GPT4All.
@@ -2502,7 +2132,6 @@ NOTA: Al activar esta función, enviará sus datos al Datalake de código abiert
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2516,55 +2145,46 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
- Terms for opt-inTérminos para optar por participar
- Describes what will happen when you opt-inDescribe lo que sucederá cuando opte por participar
- Please provide a name for attribution (optional)Por favor, proporcione un nombre para la atribución (opcional)
- Attribution (optional)Atribución (opcional)
- Provide attributionProporcionar atribución
- EnableHabilitar
- Enable opt-inHabilitar participación
- CancelCancelar
- Cancel opt-inCancelar participación
@@ -2573,19 +2193,16 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
NewVersionDialog
- New version is availableNueva versión disponible
- UpdateActualizar
- Update to new versionActualizar a nueva versión
@@ -2594,19 +2211,16 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
PopupDialog
- Reveals a shortlived help balloonMuestra un globo de ayuda de corta duración
- Busy indicatorIndicador de ocupado
- Displayed when the popup is showing busySe muestra cuando la ventana emergente está ocupada
@@ -2623,32 +2237,26 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
-
- SettingsConfiguración
- Contains various application settingsContiene varias configuraciones de la aplicación
- ApplicationAplicación
- ModelModelo
- LocalDocsDocumentosLocales
@@ -2657,19 +2265,16 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
StartupDialog
- Welcome!¡Bienvenido!
- Release notesNotas de la versión
- Release notes for this versionNotas de la versión para esta versión
@@ -2696,93 +2301,75 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
- Terms for opt-inTérminos para aceptar
- Describes what will happen when you opt-inDescribe lo que sucederá cuando acepte
-
- Opt-in for anonymous usage statisticsAceptar estadísticas de uso anónimas
-
- YesSí
- Allow opt-in for anonymous usage statisticsPermitir aceptación de estadísticas de uso anónimas
-
- NoNo
- Opt-out for anonymous usage statisticsRechazar estadísticas de uso anónimas
- Allow opt-out for anonymous usage statisticsPermitir rechazo de estadísticas de uso anónimas
-
- Opt-in for networkAceptar para la red
- Allow opt-in for networkPermitir aceptación para la red
- Allow opt-in anonymous sharing of chats to the GPT4All DatalakePermitir compartir anónimamente los chats con el Datalake de GPT4All
- Opt-out for networkRechazar para la red
- Allow opt-out anonymous sharing of chats to the GPT4All DatalakePermitir rechazar el compartir anónimo de chats con el Datalake de GPT4All
- ### Release notes
%1### Contributors
%2
@@ -2793,7 +2380,6 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2834,27 +2420,22 @@ lanzamiento de modelo GPT4All que utilice sus datos.
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Advertencia:</b> cambiar el modelo borrará la conversación actual. ¿Deseas continuar?
- ContinueContinuar
- Continue with model loadingContinuar con la carga del modelo
-
- CancelCancelar
@@ -2863,38 +2444,32 @@ lanzamiento de modelo GPT4All que utilice sus datos.
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)Por favor, edite el texto a continuación para proporcionar una mejor
respuesta. (opcional)
- Please provide a better response...Por favor, proporcione una mejor respuesta...
- SubmitEnviar
- Submits the user's responseEnvía la respuesta del usuario
- CancelCancelar
- Closes the response dialogCierra el diálogo de respuesta
@@ -2903,152 +2478,125 @@ lanzamiento de modelo GPT4All que utilice sus datos.
main
- GPT4All v%1GPT4All v%1
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>Se encontró un error al iniciar:</h3><br><i>"Se detectó hardware incompatible."</i><br><br>Desafortunadamente, tu CPU no cumple con los requisitos mínimos para ejecutar este programa. En particular, no soporta instrucciones AVX, las cuales este programa requiere para ejecutar con éxito un modelo de lenguaje grande moderno. La única solución en este momento es actualizar tu hardware a una CPU más moderna.<br><br>Consulta aquí para más información: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>Se encontró un error al iniciar:</h3><br><i>"No se puede acceder al archivo de configuración."</i><br><br>Desafortunadamente, algo está impidiendo que el programa acceda al archivo de configuración. Esto podría ser causado por permisos incorrectos en el directorio de configuración local de la aplicación donde se encuentra el archivo de configuración. Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de Discord</a> para obtener ayuda.
- Connection to datalake failed.La conexión al datalake falló.
- Saving chats.Guardando chats.
- Network dialogDiálogo de red
- opt-in to share feedback/conversationsoptar por compartir comentarios/conversaciones
- Home viewVista de inicio
- Home view of applicationVista de inicio de la aplicación
- HomeInicio
- Chat viewVista de chat
- Chat view to interact with modelsVista de chat para interactuar con modelos
- ChatsChats
-
- ModelsModelos
- Models view for installed modelsVista de modelos para modelos instalados
-
- LocalDocsDocs
Locales
- LocalDocs view to configure and use local docsVista de DocumentosLocales para configurar y usar documentos locales
-
- SettingsConfig.
- Settings view for application configurationVista de configuración para la configuración de la aplicación
- The datalake is enabledEl datalake está habilitado
- Using a network modelUsando un modelo de red
- Server mode is enabledEl modo servidor está habilitado
- Installed modelsModelos instalados
- View of installed modelsVista de modelos instalados
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index e1d63a34..e3615035 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -5,73 +5,61 @@
AddCollectionView
- ← Existing Collections← Raccolte esistenti
- Add Document CollectionAggiungi raccolta documenti
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Aggiungi una cartella contenente file di testo semplice, PDF o Markdown. Configura estensioni aggiuntive in Settaggi.
- Please choose a directoryScegli una cartella
- NameNome
- Collection name...Nome della raccolta...
- Name of the collection to add (Required)Nome della raccolta da aggiungere (Obbligatorio)
- FolderCartella
- Folder path...Percorso cartella...
- Folder path to documents (Required)Percorso della cartella dei documenti (richiesto)
- BrowseEsplora
- Create CollectionCrea raccolta
@@ -80,295 +68,244 @@
AddModelView
- ← Existing Models← Modelli esistenti
- Explore ModelsEsplora modelli
- Discover and download models by keyword search...Scopri e scarica i modelli tramite ricerca per parole chiave...
- Text field for discovering and filtering downloadable modelsCampo di testo per scoprire e filtrare i modelli scaricabili
- Initiate model discovery and filteringAvvia rilevamento e filtraggio dei modelli
- Triggers discovery and filtering of modelsAttiva la scoperta e il filtraggio dei modelli
- DefaultPredefinito
- LikesMi piace
- DownloadsScaricamenti
- RecentRecenti
- AscAsc
- DescDisc
- NoneNiente
- Searching · %1Ricerca · %1
- Sort by: %1Ordina per: %1
- Sort dir: %1Direzione ordinamento: %1
- Limit: %1Limite: %1
- Network error: could not retrieve %1Errore di rete: impossibile recuperare %1
-
- Busy indicatorIndicatore di occupato
- Displayed when the models request is ongoingVisualizzato quando la richiesta dei modelli è in corso
- Model fileFile del modello
- Model file to be downloadedFile del modello da scaricare
- DescriptionDescrizione
- File descriptionDescrizione del file
- CancelAnnulla
- ResumeRiprendi
- DownloadScarica
- Stop/restart/start the downloadArresta/riavvia/avvia il download
- RemoveRimuovi
- Remove model from filesystemRimuovi il modello dal sistema dei file
-
- InstallInstalla
- Install online modelInstalla il modello online
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">AVVERTENZA: non consigliato per il tuo hardware. Il modello richiede più memoria (%1 GB) di quella disponibile nel sistema (%2).</strong></font>
- ERROR: $API_KEY is empty.ERRORE: $API_KEY è vuoto.
- ERROR: $BASE_URL is empty.ERRORE: $BASE_URL non è valido.
- enter $BASE_URLinserisci $BASE_URL
- ERROR: $MODEL_NAME is empty.ERRORE: $MODEL_NAME è vuoto.
- enter $MODEL_NAMEinserisci $MODEL_NAME
- %1 GB
-
- ?
- Describes an error that occurred when downloadingDescrive un errore che si è verificato durante lo scaricamento
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Errore</a></strong></font>
- Error for incompatible hardwareErrore per hardware incompatibile
- Download progressBarBarra di avanzamento dello scaricamento
- Shows the progress made in the downloadMostra lo stato di avanzamento dello scaricamento
- Download speedVelocità di scaricamento
- Download speed in bytes/kilobytes/megabytes per secondVelocità di scaricamento in byte/kilobyte/megabyte al secondo
- Calculating...Calcolo in corso...
@@ -377,52 +314,41 @@
- Whether the file hash is being calculatedSe viene calcolato l'hash del file
- Displayed when the file hash is being calculatedVisualizzato durante il calcolo dell'hash del file
- enter $API_KEYInserire $API_KEY
- File sizeDimensione del file
- RAM requiredRAM richiesta
- ParametersParametri
- QuantQuant
- TypeTipo
@@ -431,25 +357,21 @@
ApplicationSettings
- ApplicationApplicazione
- Network dialogDialogo di rete
- opt-in to share feedback/conversationsaderisci per condividere feedback/conversazioni
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -461,103 +383,86 @@
- Error dialogDialogo d'errore
- Application SettingsSettaggi applicazione
- GeneralGenerale
- ThemeTema
- The application color scheme.La combinazione di colori dell'applicazione.
- DarkScuro
- LightChiaro
- LegacyDarkScuro Legacy
- Font SizeDimensioni del Font
- The size of text in the application.La dimensione del testo nell'applicazione.
- SmallPiccolo
- MediumMedio
- LargeGrande
- Language and LocaleLingua e settaggi locali
- The language and locale you wish to use.La lingua e i settaggi locali che vuoi utilizzare.
- System LocaleSettaggi locali del sistema
- DeviceDispositivo
@@ -567,166 +472,138 @@
- The compute device used for text generation.Il dispositivo di calcolo utilizzato per la generazione del testo.
-
- Application defaultApplicazione predefinita
- Default ModelModello predefinito
- The preferred model for new chats. Also used as the local server fallback.Il modello preferito per le nuove chat. Utilizzato anche come ripiego del server locale.
- Suggestion ModeModalità suggerimento
- Generate suggested follow-up questions at the end of responses.Genera le domande di approfondimento suggerite alla fine delle risposte.
- When chatting with LocalDocsQuando chatti con LocalDocs
- Whenever possibleQuando possibile
- NeverMai
- Download PathPercorso di scarico
- Where to store local models and the LocalDocs database.Dove archiviare i modelli locali e il database LocalDocs.
- BrowseEsplora
- Choose where to save model filesScegli dove salvare i file del modello
- Enable DatalakeAbilita Datalake
- Send chats and feedback to the GPT4All Open-Source Datalake.Invia chat e commenti al Datalake Open Source GPT4All.
- AdvancedAvanzate
- CPU ThreadsThread della CPUTread CPU
- The number of CPU threads used for inference and embedding.Il numero di thread della CPU utilizzati per l'inferenza e l'incorporamento.
- Save Chat ContextSalva il contesto della chat
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Salva lo stato del modello di chat su disco per un caricamento più rapido. ATTENZIONE: utilizza circa 2 GB per chat.
- Enable Local ServerAbilita server locale
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Esporre un server compatibile con OpenAI a localhost. ATTENZIONE: comporta un maggiore utilizzo delle risorse.
- API Server PortPorta del server API
- The port to use for the local server. Requires restart.La porta da utilizzare per il server locale. Richiede il riavvio.
- Check For UpdatesControlla gli aggiornamenti
- Manually check for an update to GPT4All.Verifica manualmente l'aggiornamento di GPT4All.
- UpdatesAggiornamenti
@@ -762,73 +639,61 @@
ChatDrawer
- DrawerCassetto
- Main navigation drawerCassetto di navigazione principale
- + New Chat+ Nuova Chat
- Create a new chatCrea una nuova chat
- Select the current chat or edit the chat when in edit modeSeleziona la chat corrente o modifica la chat in modalità modifica
- Edit chat nameModifica il nome della chat
- Save chat nameSalva il nome della chat
- Delete chatElimina chat
- Confirm chat deletionConferma l'eliminazione della chat
- Cancel chat deletionAnnulla l'eliminazione della chat
- List of chatsElenco delle chat
- List of chats in the drawer dialogElenco delle chat nella finestra di dialogo del cassetto
@@ -870,141 +735,117 @@
ChatView
- <h3>Warning</h3><p>%1</p><h3>Avviso</h3><p>%1</p>
- Switch model dialogFinestra di dialogo Cambia modello
- Warn the user if they switch models, then context will be erasedAvvisa l'utente che se cambia modello, il contesto verrà cancellato
- Conversation copied to clipboard.Conversazione copiata negli appunti.
- Code copied to clipboard.Codice copiato negli appunti.
- Chat panelPannello chat
- Chat panel with optionsPannello chat con opzioni
- Reload the currently loaded modelRicarica il modello attualmente caricato
- Eject the currently loaded modelEspelli il modello attualmente caricato
- No model installed.Nessun modello installato.
- Model loading error.Errore di caricamento del modello.
- Waiting for model...In attesa del modello...
- Switching context...Cambio contesto...
- Choose a model...Scegli un modello...
- Not found: %1Non trovato: %1
- The top item is the current modelL'elemento in alto è il modello attuale
-
- LocalDocs
- Add documentsAggiungi documenti
- add collections of documents to the chataggiungi raccolte di documenti alla chat
- Load the default modelCarica il modello predefinito
- Loads the default model which can be changed in settingsCarica il modello predefinito che può essere modificato nei settaggi
- No Model InstalledNessun modello installato
- GPT4All requires that you install at least one
model to get startedGPT4All richiede l'installazione di almeno un
@@ -1012,37 +853,31 @@ modello per iniziare
- Install a ModelInstalla un modello
- Shows the add model viewMostra la vista aggiungi modello
- Conversation with the modelConversazione con il modello
- prompt / response pairs from the conversationcoppie prompt/risposta dalla conversazione
- GPT4All
- YouTu
@@ -1052,215 +887,181 @@ modello per iniziare
- response stopped ...risposta interrotta ...
- processing ...elaborazione ...
- generating response ...generazione risposta ...
- generating questions ...generazione domande ...
-
- CopyCopia
- Copy MessageCopia messaggio
- Disable markdownDisabilita Markdown
- Enable markdownAbilita Markdown
- Thumbs upMi piace
- Gives a thumbs up to the responseDà un mi piace alla risposta
- Thumbs downNon mi piace
- Opens thumbs down dialogApre la finestra di dialogo "Non mi piace"
- %1 Sources%1 Fonti
- Suggested follow-upsApprofondimenti suggeriti
- Erase and reset chat sessionCancella e ripristina la sessione di chat
- Copy chat session to clipboardCopia la sessione di chat negli appunti
- Redo last chat responseRiesegui l'ultima risposta della chat
- Stop generatingInterrompi la generazione
- Stop the current response generationArresta la generazione della risposta corrente
- Reloads the modelRicarica il modello
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>Si è verificato un errore durante il caricamento del modello:</h3><br><i>"%1"</i><br><br>Gli errori di caricamento del modello possono verificarsi per diversi motivi, ma le cause più comuni includono un formato di file non valido, un download incompleto o danneggiato, il tipo di file sbagliato, RAM di sistema insufficiente o un tipo di modello incompatibile. Ecco alcuni suggerimenti per risolvere il problema:<br><ul><li>Assicurati che il file del modello abbia un formato e un tipo compatibili<li>Verifica che il file del modello sia completo nella cartella di download<li>Puoi trovare la cartella di download nella finestra di dialogo dei settaggi<li>Se hai scaricato manualmente il modello, assicurati che il file non sia danneggiato controllando md5sum<li>Leggi ulteriori informazioni su quali modelli sono supportati nella nostra <a href="https://docs.gpt4all.io/ ">documentazione</a> per la GUI<li>Consulta il nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per assistenza
-
- Reload · %1Ricarica · %1
- Loading · %1Caricamento · %1
- Load · %1 (default) →Carica · %1 (predefinito) →
- restoring from text ...ripristino dal testo ...
- retrieving localdocs: %1 ...recupero documenti locali: %1 ...
- searching localdocs: %1 ...ricerca in documenti locali: %1 ...
+ %n Source(s)
+ %n Fonte
+ %n Fonti
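(The added entry above is a plural-aware Qt Linguist message shown here without its XML markup. A minimal sketch of the underlying .ts structure, assuming the standard numerus layout and omitting the location attributes, would be

    <message numerus="yes">
        <source>%n Source(s)</source>
        <translation>
            <numerusform>%n Fonte</numerusform>
            <numerusform>%n Fonti</numerusform>
        </translation>
    </message>

Qt substitutes %n with the count at runtime and selects the matching numerusform, which is why this entry carries a singular and a plural Italian form where the removed %1 Sources message had a single string.)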
- Send a message...Manda un messaggio...
- Load a model to continue...Carica un modello per continuare...
- Send messages/prompts to the modelInvia messaggi/prompt al modello
- CutTaglia
- PasteIncolla
- Select AllSeleziona tutto
- Send messageInvia messaggio
- Sends the message/prompt contained in textfield to the modelInvia il messaggio/prompt contenuto nel campo di testo al modello
@@ -1269,13 +1070,11 @@ modello per iniziare
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete resultsAvviso: la ricerca nelle raccolte durante l'indicizzazione può restituire risultati incompleti
- %n file(s)%n file
@@ -1284,7 +1083,6 @@ modello per iniziare
- %n word(s)%n parola
@@ -1293,19 +1091,16 @@ modello per iniziare
- UpdatingIn aggiornamento
- + Add Docs+ Aggiungi documenti
- Select a collection to make it available to the chat model.Seleziona una raccolta per renderla disponibile al modello in chat.
@@ -1352,109 +1147,91 @@ modello per iniziare
HomeView
- Welcome to GPT4AllBenvenuto in GPT4All
- The privacy-first LLM chat applicationL'applicazione di chat LLM che mette al primo posto la privacy
- Start chattingInizia a chattare
- Start ChattingInizia a Chattare
- Chat with any LLMChatta con qualsiasi LLM
- LocalDocs
- Chat with your local filesChatta con i tuoi file locali
- Find ModelsTrova modelli
- Explore and download modelsEsplora e scarica i modelli
- Latest newsUltime notizie
- Latest news from GPT4AllUltime notizie da GPT4All
- Release NotesNote di rilascio
- DocumentationDocumentazione
- Discord
- X (Twitter)
- Github
- GPT4All.io
- Subscribe to NewsletterIscriviti alla Newsletter
@@ -1463,140 +1240,117 @@ modello per iniziare
LocalDocsSettings
- LocalDocs
- LocalDocs SettingsSettaggi LocalDocs
- IndexingIndicizzazione
- Allowed File ExtensionsEstensioni di file consentite
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.Elenco separato da virgole. LocalDocs tenterà di elaborare solo file con queste estensioni.
- Embedding [translator comment: Questo termine si dovrebbe tradurre come "Incorporamento". This term has been translated in other applications like A1111 and InvokeAI as "Incorporamento"] Incorporamento
- Use Nomic Embed APIUtilizza l'API di incorporamento Nomic Embed
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.Incorpora documenti utilizzando la veloce API di Nomic invece di un modello locale privato. Richiede il riavvio.
- Nomic API KeyChiave API di Nomic
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Chiave API da utilizzare per Nomic Embed. Ottienine una dalla <a href="https://atlas.nomic.ai/cli-login">pagina delle chiavi API</a> di Atlas. Richiede il riavvio.
- Embeddings DeviceDispositivo per incorporamenti
- The compute device used for embeddings. Requires restart.Il dispositivo di calcolo utilizzato per gli incorporamenti. Richiede il riavvio.
- Application defaultApplicazione predefinita
- DisplayMostra
- Show SourcesMostra le fonti
- Display the sources used for each response.Visualizza le fonti utilizzate per ciascuna risposta.
- AdvancedAvanzate
- Warning: Advanced usage only.Avvertenza: solo per uso avanzato.
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valori troppo grandi possono causare errori di Localdocs, risposte estremamente lente o l'impossibilità di rispondere. In parole povere, {N caratteri x N frammenti} vengono aggiunti alla finestra di contesto del modello. Maggiori informazioni <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">qui</a>.
- Document snippet size (characters)Dimensioni del frammento di documento (caratteri)
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numero di caratteri per frammento di documento. Numeri più grandi aumentano la probabilità di risposte basate sui fatti, ma comportano anche una generazione più lenta.
- Max document snippets per promptNumero massimo di frammenti di documento per prompt
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numero massimo di N corrispondenze migliori di frammenti di documento recuperati da aggiungere al contesto del prompt. Numeri più grandi aumentano la probabilità di risposte basate sui fatti, ma comportano anche una generazione più lenta.
@@ -1605,19 +1359,16 @@ modello per iniziare
LocalDocsView
- LocalDocs
- Chat with your local filesChatta con i tuoi file locali
- + Add Collection+ Aggiungi raccolta
@@ -1627,121 +1378,101 @@ modello per iniziare
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>ERRORE: Impossibile accedere al database LocalDocs o non è valido.</h3><br><i>Nota: sarà necessario riavviare dopo aver provato una delle seguenti soluzioni suggerite.</i><br><ul><li>Assicurati che la cartella impostata come <b>Percorso di download</b> esista nel file system.</li><li>Controlla la proprietà e i permessi di lettura e scrittura del <b>Percorso di download</b>.</li><li>Se è presente un file <b>localdocs_v2.db</b>, controlla anche la sua proprietà e i permessi di lettura/scrittura.</li></ul><br>Se il problema persiste e sono presenti file 'localdocs_v*.db', come ultima risorsa puoi<br>provare a eseguirne il backup e a rimuoverli. Tuttavia, dovrai ricreare le tue raccolte.
- No Collections InstalledNessuna raccolta installata
- Install a collection of local documents to get started using this featureInstalla una raccolta di documenti locali per iniziare a utilizzare questa funzionalità
- + Add Doc Collection+ Aggiungi raccolta di documenti
- Shows the add model viewMostra la vista aggiungi modello
- Indexing progressBarBarra di avanzamento indicizzazione
- Shows the progress made in the indexingMostra lo stato di avanzamento dell'indicizzazione
- ERRORERRORE
- INDEXINGINDICIZZAZIONE
- EMBEDDINGINCORPORAMENTO
- REQUIRES UPDATERICHIEDE AGGIORNAMENTO
- READYPRONTO
- INSTALLINGINSTALLAZIONE
- Indexing in progressIndicizzazione in corso
- Embedding in progressIncorporamento in corso
- This collection requires an update after version changeQuesta raccolta richiede un aggiornamento dopo il cambio di versione
- Automatically reindexes upon changes to the folderReindicizza automaticamente in caso di modifiche alla cartella
- Installation in progressInstallazione in corso
- %%
- %n file(s)%n file
@@ -1750,7 +1481,6 @@ modello per iniziare
- %n word(s)%n parola
@@ -1759,31 +1489,26 @@ modello per iniziare
- RemoveRimuovi
- RebuildRicostruisci
- Reindex this folder from scratch. This is slow and usually not needed.Reindicizzare questa cartella da zero. Lento e di solito non necessario.
- UpdateAggiorna
- Update the collection to the new version. This is a slow operation.Aggiorna la raccolta alla nuova versione. Questa è un'operazione lenta.
@@ -1860,109 +1585,91 @@ modello per iniziare
ModelSettings
- ModelModello
- Model SettingsSettaggi modello
- CloneClona
- RemoveRimuovi
- NameNome
- Model FileFile del modello
- System PromptPrompt di sistema
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Prefisso all'inizio di ogni conversazione. Deve contenere i token di inquadramento appropriati.
- Prompt TemplateSchema del prompt
- The template that wraps every prompt.Lo schema che incorpora ogni prompt.
- Must contain the string "%1" to be replaced with the user's input.Deve contenere la stringa "%1" da sostituire con l'input dell'utente.
- Chat Name PromptPrompt del nome della chat
- Prompt used to automatically generate chat names.Prompt utilizzato per generare automaticamente nomi di chat.
- Suggested FollowUp PromptPrompt di approfondimento suggerito
- Prompt used to generate suggested follow-up questions.Prompt utilizzato per generare le domande di approfondimento suggerite.
- Context LengthLunghezza del contesto
- Number of input and output tokens the model sees.Numero di token di input e output visualizzati dal modello.
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1972,19 +1679,16 @@ NOTA: non ha effetto finché non si ricarica il modello.
- TemperatureTemperatura
- Randomness of model output. Higher -> more variation.Casualità dell'uscita del modello. Più alto -> più variazione.
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.La temperatura aumenta le possibilità di scegliere token meno probabili.
@@ -1992,19 +1696,16 @@ NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedi
- Top-P
- Nucleus Sampling factor. Lower -> more predicatable.Fattore di campionamento del nucleo. Inferiore -> più prevedibile.
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Possono essere scelti solo i token più probabili fino ad una probabilità totale di top_p.
@@ -2012,67 +1713,56 @@ NOTA: impedisce la scelta di token altamente improbabili.
- Min-P
- Minimum token probability. Higher -> more predictable.Probabilità minima del token. Più alto -> più prevedibile.
- Sets the minimum relative probability for a token to be considered.Imposta la probabilità relativa minima che un token venga considerato.
- Top-K
- Size of selection pool for tokens.Dimensione del pool di selezione per i token.
- Only the top K most likely tokens will be chosen from.Solo i token Top-K più probabili verranno scelti.
- Max LengthLunghezza massima
- Maximum response length, in tokens.Lunghezza massima della risposta, in token.
- Prompt Batch SizeDimensioni del lotto di prompt
- The batch size used for prompt processing.La dimensione del lotto usata per l'elaborazione dei prompt.
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Quantità di token del prompt da elaborare contemporaneamente.
@@ -2080,43 +1770,36 @@ NOTA: valori più alti possono velocizzare la lettura dei prompt ma utilizzerann
- Repeat PenaltyPenalità di ripetizione
- Repetition penalty factor. Set to 1 to disable.Fattore di penalità di ripetizione. Impostare su 1 per disabilitare.
- Repeat Penalty TokensToken di penalità ripetizione
- Number of previous tokens used for penalty.Numero di token precedenti utilizzati per la penalità.
- GPU LayersLivelli GPU
- Number of model layers to load into VRAM.Numero di livelli del modello da caricare nella VRAM.
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2129,203 +1812,168 @@ NOTA: non ha effetto finché non si ricarica il modello.
ModelsView
- No Models InstalledNessun modello installato
- Install a model to get started using GPT4AllInstalla un modello per iniziare a utilizzare GPT4All
-
- + Add Model+ Aggiungi Modello
- Shows the add model viewMostra la vista aggiungi modello
- Installed ModelsModelli installati
- Locally installed chat modelsModelli per chat installati localmente
- Model fileFile del modello
- Model file to be downloadedFile del modello da scaricare
- DescriptionDescrizione
- File descriptionDescrizione del file
- CancelAnnulla
- ResumeRiprendi
- Stop/restart/start the downloadArresta/riavvia/avvia il download
- RemoveRimuovi
- Remove model from filesystemRimuovi il modello dal sistema dei file
-
- InstallInstalla
- Install online modelInstalla il modello online
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Errore</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">AVVISO: non consigliato per il tuo hardware. Il modello richiede più memoria (%1 GB) di quella disponibile nel sistema (%2).</strong></font>
- ERROR: $API_KEY is empty.ERRORE: $API_KEY è vuoto.
- ERROR: $BASE_URL is empty.ERRORE: $BASE_URL non è valido.
- enter $BASE_URLinserisci $BASE_URL
- ERROR: $MODEL_NAME is empty.ERRORE: $MODEL_NAME è vuoto.
- enter $MODEL_NAMEinserisci $MODEL_NAME
- %1 GB
- ?
- Describes an error that occurred when downloadingDescrive un errore che si è verificato durante lo scaricamento
- Error for incompatible hardwareErrore per hardware incompatibile
- Download progressBarBarra di avanzamento dello scaricamento
- Shows the progress made in the downloadMostra lo stato di avanzamento dello scaricamento
- Download speedVelocità di scaricamento
- Download speed in bytes/kilobytes/megabytes per secondVelocità di scaricamento in byte/kilobyte/megabyte al secondo
- Calculating...Calcolo in corso...
@@ -2334,58 +1982,46 @@ NOTA: non ha effetto finché non si ricarica il modello.
- Whether the file hash is being calculatedSe viene calcolato l'hash del file
- Busy indicatorIndicatore di occupato
- Displayed when the file hash is being calculatedVisualizzato durante il calcolo dell'hash del file
- enter $API_KEYInserire $API_KEY
- File sizeDimensione del file
- RAM requiredRAM richiesta
- ParametersParametri
- QuantQuant
- TypeTipo
@@ -2394,13 +2030,11 @@ NOTA: non ha effetto finché non si ricarica il modello.
MyFancyLink
- Fancy linkMio link
- A stylized linkUn link d'esempio
@@ -2409,7 +2043,6 @@ NOTA: non ha effetto finché non si ricarica il modello.
MySettingsStack
- Please choose a directoryScegli una cartella
@@ -2418,13 +2051,11 @@ NOTA: non ha effetto finché non si ricarica il modello.
MySettingsTab
- Restore DefaultsRipristina i valori predefiniti
- Restores settings dialog to a default stateRipristina la finestra di dialogo dei settaggi a uno stato predefinito
@@ -2433,13 +2064,11 @@ NOTA: non ha effetto finché non si ricarica il modello.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.Contribuisci con i tuoi dati al Datalake Open Source di GPT4All.
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2453,55 +2082,46 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
- Terms for opt-inTermini per l'adesione
- Describes what will happen when you opt-inDescrive cosa accadrà quando effettuerai l'adesione
- Please provide a name for attribution (optional)Fornisci un nome per l'attribuzione (facoltativo)
- Attribution (optional)Attribuzione (facoltativo)
- Provide attributionFornire attribuzione
- EnableAbilita
- Enable opt-inAbilita l'adesione
- CancelAnnulla
- Cancel opt-inAnnulla l'adesione
@@ -2510,19 +2130,16 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
NewVersionDialog
- New version is availableNuova versione disponibile
- UpdateAggiorna
- Update to new versionAggiorna alla nuova versione
@@ -2531,19 +2148,16 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
PopupDialog
- Reveals a shortlived help balloonRivela un messaggio di aiuto di breve durata
- Busy indicatorIndicatore di occupato
- Displayed when the popup is showing busyVisualizzato quando la finestra a comparsa risulta occupata
@@ -2553,32 +2167,26 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
-
- SettingsSettaggi
- Contains various application settingsContiene vari settaggi dell'applicazione
- ApplicationApplicazione
- ModelModello
- LocalDocs
@@ -2587,13 +2195,11 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
StartupDialog
- Welcome!Benvenuto!
- ### Release notes
%1### Contributors
%2
@@ -2603,19 +2209,16 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
- Release notesNote di rilascio
- Release notes for this versionNote di rilascio per questa versione
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2639,87 +2242,70 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
- Terms for opt-inTermini per l'adesione
- Describes what will happen when you opt-inDescrive cosa accadrà quando effettuerai l'adesione
-
- Opt-in for anonymous usage statisticsAttiva le statistiche di utilizzo anonime
-
- YesSi
- Allow opt-in for anonymous usage statisticsConsenti l'attivazione di statistiche di utilizzo anonime
-
- NoNo
- Opt-out for anonymous usage statisticsDisattiva le statistiche di utilizzo anonime
- Allow opt-out for anonymous usage statisticsConsenti la disattivazione per le statistiche di utilizzo anonime
-
- Opt-in for networkAderisci per la rete
- Allow opt-in for networkConsenti l'adesione per la rete
- Allow opt-in anonymous sharing of chats to the GPT4All DatalakeConsenti la condivisione anonima delle chat su GPT4All Datalake
- Opt-out for networkDisattiva per la rete
- Allow opt-out anonymous sharing of chats to the GPT4All DatalakeConsenti la non adesione alla condivisione anonima delle chat nel GPT4All Datalake
@@ -2728,27 +2314,22 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
SwitchModelDialog
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Avviso:</b> la modifica del modello cancellerà la conversazione corrente. Vuoi continuare?
- ContinueContinua
- Continue with model loadingContinuare con il caricamento del modello
-
- CancelAnnulla
@@ -2757,37 +2338,31 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)Modifica il testo seguente per fornire una risposta migliore. (opzionale)
- Please provide a better response...Si prega di fornire una risposta migliore...
- SubmitInvia
- Submits the user's responseInvia la risposta dell'utente
- CancelAnnulla
- Closes the response dialogChiude la finestra di dialogo della risposta
@@ -2796,151 +2371,124 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
main
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>Si è verificato un errore all'avvio:</h3><br><i>"Rilevato hardware incompatibile."</i><br><br>Sfortunatamente, la tua CPU non soddisfa i requisiti minimi per eseguire questo programma. In particolare, non supporta gli elementi intrinseci AVX richiesti da questo programma per eseguire con successo un modello linguistico moderno e di grandi dimensioni. L'unica soluzione in questo momento è aggiornare il tuo hardware con una CPU più moderna.<br><br>Vedi qui per ulteriori informazioni: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https ://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- GPT4All v%1
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>Si è verificato un errore all'avvio:</h3><br><i>"Impossibile accedere al file dei settaggi."</i><br><br>Sfortunatamente, qualcosa impedisce al programma di accedere al file dei settaggi. Ciò potrebbe essere causato da autorizzazioni errate nella cartella di configurazione locale dell'app in cui si trova il file dei settaggi. Dai un'occhiata al nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per ricevere assistenza.
- Connection to datalake failed.La connessione al Datalake non è riuscita.
- Saving chats.Salvataggio delle chat.
- Network dialogDialogo di rete
- opt-in to share feedback/conversationsaderisci per condividere feedback/conversazioni
- Home viewVista iniziale
- Home view of applicationVista iniziale dell'applicazione
- HomeInizia
- Chat viewVista chat
- Chat view to interact with modelsVista chat per interagire con i modelli
- ChatsChat
-
- ModelsModelli
- Models view for installed modelsVista modelli per i modelli installati
-
- LocalDocs
- LocalDocs view to configure and use local docsVista LocalDocs per configurare e utilizzare i documenti locali
-
- SettingsSettaggi
- Settings view for application configurationVista dei settaggi per la configurazione dell'applicazione
- The datalake is enabledIl Datalake è abilitato
- Using a network modelUtilizzando un modello di rete
- Server mode is enabledLa modalità server è abilitata
- Installed modelsModelli installati
- View of installed modelsVista dei modelli installati
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index 7697a400..b9bb47b5 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -5,73 +5,61 @@
AddCollectionView
- ← Existing Collections← Minhas coleções
- Add Document CollectionAdicionar Coleção de Documentos
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Adicione uma pasta contendo arquivos de texto simples, PDFs ou Markdown. Configure extensões adicionais nas Configurações.
- Please choose a directoryEscolha um diretório
- NameNome
- Collection name...Nome da coleção...
- Name of the collection to add (Required)Nome da coleção (obrigatório)
- FolderPasta
- Folder path...Caminho da pasta...
- Folder path to documents (Required)Caminho da pasta com os documentos (obrigatório)
- BrowseProcurar
- Create CollectionCriar Coleção
@@ -80,295 +68,244 @@
AddModelView
- ← Existing Models← Meus Modelos
- Explore ModelsDescobrir Modelos
- Discover and download models by keyword search...Pesquisar modelos...
- Text field for discovering and filtering downloadable modelsCampo de texto para descobrir e filtrar modelos para download
- Initiate model discovery and filteringPesquisar e filtrar modelos
- Triggers discovery and filtering of modelsAciona a descoberta e filtragem de modelos
- DefaultPadrão
- LikesCurtidas
- DownloadsDownloads
- RecentRecentes
- AscAsc
- DescDesc
- NoneNenhum
- Searching · %1Pesquisando · %1
- Sort by: %1Ordenar por: %1
- Sort dir: %1Ordenar diretório: %1
- Limit: %1Limite: %1
- Network error: could not retrieve %1Erro de rede: não foi possível obter %1
-
- Busy indicatorIndicador de processamento
- Displayed when the models request is ongoingExibido enquanto os modelos estão sendo carregados
- Model fileArquivo do modelo
- Model file to be downloadedArquivo do modelo a ser baixado
- DescriptionDescrição
- File descriptionDescrição do arquivo
- CancelCancelar
- ResumeRetomar
- DownloadBaixar
- Stop/restart/start the downloadParar/reiniciar/iniciar o download
- RemoveRemover
- Remove model from filesystemRemover modelo do sistema
-
- InstallInstalar
- Install online modelInstalar modelo online
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ATENÇÃO: Este modelo não é recomendado para seu hardware. Ele exige mais memória (%1 GB) do que seu sistema possui (%2).</strong></font>
- ERROR: $API_KEY is empty.ERRO: A $API_KEY está vazia.
- ERROR: $BASE_URL is empty.ERRO: A $BASE_URL está vazia.
- enter $BASE_URLinserir a $BASE_URL
- ERROR: $MODEL_NAME is empty.ERRO: O $MODEL_NAME está vazio.
- enter $MODEL_NAMEinserir o $MODEL_NAME
- %1 GB%1 GB
-
- ??
- Describes an error that occurred when downloadingMostra informações sobre o erro no download
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Erro</a></strong></font>
- Error for incompatible hardwareAviso: Hardware não compatível
- Download progressBarProgresso do download
- Shows the progress made in the downloadMostra o progresso do download
- Download speedVelocidade de download
- Download speed in bytes/kilobytes/megabytes per secondVelocidade de download em bytes/kilobytes/megabytes por segundo
- Calculating...Calculando...
@@ -377,52 +314,41 @@
- Whether the file hash is being calculatedQuando o hash do arquivo está sendo calculado
- Displayed when the file hash is being calculatedExibido durante o cálculo do hash do arquivo
- enter $API_KEYinserir $API_KEY
- File sizeTamanho do arquivo
- RAM requiredRAM necessária
- ParametersParâmetros
- QuantQuant
- TypeTipo
@@ -431,25 +357,21 @@
ApplicationSettings
- ApplicationAplicativo
- Network dialogMensagens de rede
- opt-in to share feedback/conversationsCompartilhar feedback e conversas
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -467,109 +389,91 @@
- Error dialogMensagens de erro
- Application SettingsConfigurações
- GeneralGeral
- ThemeTema
- The application color scheme.Esquema de cores.
- DarkModo Escuro
- LightModo Claro
- LegacyDarkModo escuro (legado)
- Font SizeTamanho da Fonte
- The size of text in the application.Tamanho do texto.
- SmallPequeno
- MediumMédio
- LargeGrande
- Language and LocaleIdioma e Região
- The language and locale you wish to use.Selecione seu idioma e região.
- System LocaleLocal do Sistema
- DeviceProcessador
- The compute device used for text generation. [translator comment: I chose to use "Processador" instead of "Dispositivo" (Device) or "Dispositivo de Computação" (Compute Device) to simplify the terminology and make it more straightforward and understandable. "Dispositivo" can be vague and could refer to various types of hardware, whereas "Processador" clearly and specifically indicates the component responsible for processing tasks. This improves usability by avoiding the ambiguity that might arise from using more generic terms like "Dispositivo."] Processador usado para gerar texto.
@@ -577,159 +481,132 @@
-
- Application defaultAplicativo padrão
- Default ModelModelo Padrão
- The preferred model for new chats. Also used as the local server fallback.Modelo padrão para novos chats e em caso de falha do modelo principal.
- Suggestion ModeModo de sugestões
- Generate suggested follow-up questions at the end of responses.Sugerir perguntas após as respostas.
- When chatting with LocalDocsAo conversar com o LocalDocs
- Whenever possibleSempre que possível
- NeverNunca
- Download PathDiretório de Download
- Where to store local models and the LocalDocs database.Pasta para modelos e banco de dados do LocalDocs.
- BrowseProcurar
- Choose where to save model filesLocal para armazenar os modelos
- Enable DatalakeHabilitar Datalake
- Send chats and feedback to the GPT4All Open-Source Datalake.Contribua para o Datalake de código aberto do GPT4All.
- AdvancedAvançado
- CPU ThreadsThreads de CPU
- The number of CPU threads used for inference and embedding.Quantidade de núcleos (threads) do processador usados para processar e responder às suas perguntas.
- Save Chat Context [translator comment: I used "Histórico do Chat" (Chat History) instead of "Contexto do Chat" (Chat Context) to clearly convey that it refers to saving past messages, making it more intuitive and avoiding potential confusion with abstract terms.] Salvar Histórico do Chat
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Salvar histórico do chat para carregamento mais rápido. (Usa aprox. 2GB por chat).
- Enable Local ServerAtivar Servidor Local
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Ativar servidor local compatível com OpenAI (uso de recursos elevado).
- API Server PortPorta da API
- The port to use for the local server. Requires restart.Porta de acesso ao servidor local. (requer reinicialização).
- Check For UpdatesProcurar por Atualizações
- Manually check for an update to GPT4All.Verifica se há novas atualizações para o GPT4All.
- UpdatesAtualizações
@@ -765,73 +642,61 @@
ChatDrawer
- DrawerMenu Lateral
- Main navigation drawerMenu de navegação principal
- + New Chat+ Novo Chat
- Create a new chatCriar um novo chat
- Select the current chat or edit the chat when in edit modeSelecione o chat atual ou edite o chat quando estiver no modo de edição
- Edit chat nameEditar nome do chat
- Save chat nameSalvar nome do chat
- Delete chatExcluir chat
- Confirm chat deletionConfirmar exclusão do chat
- Cancel chat deletionCancelar exclusão do chat
- List of chatsLista de chats
- List of chats in the drawer dialogLista de chats na caixa de diálogo do menu lateral
@@ -873,141 +738,117 @@
ChatView
- <h3>Warning</h3><p>%1</p><h3>Aviso</h3><p>%1</p>
- Switch model dialogMensagem ao trocar de modelo
- Warn the user if they switch models, then context will be erasedAo trocar de modelo, o contexto da conversa será apagado
- Conversation copied to clipboard.Conversa copiada.
- Code copied to clipboard.Código copiado.
- Chat panelPainel de chat
- Chat panel with optionsPainel de chat com opções
- Reload the currently loaded modelRecarregar modelo atual
- Eject the currently loaded modelEjetar o modelo carregado atualmente
- No model installed.Nenhum modelo instalado.
- Model loading error.Erro ao carregar o modelo.
- Waiting for model...Aguardando modelo...
- Switching context...Mudando de contexto...
- Choose a model...Escolha um modelo...
- Not found: %1Não encontrado: %1
- The top item is the current modelO modelo atual é exibido no topo
-
- LocalDocsLocalDocs
- Add documentsAdicionar documentos
- add collections of documents to the chatAdicionar Coleção de Documentos
- Load the default modelCarregar o modelo padrão
- Loads the default model which can be changed in settingsCarrega o modelo padrão (personalizável nas configurações)
- No Model InstalledNenhum Modelo Instalado
- GPT4All requires that you install at least one
model to get startedO GPT4All precisa de pelo menos um modelo
@@ -1015,37 +856,31 @@ modelo instalado para funcionar
- Install a ModelInstalar um Modelo
- Shows the add model viewMostra a visualização para adicionar modelo
- Conversation with the modelConversa com o modelo
- prompt / response pairs from the conversationPares de pergunta/resposta da conversa
- GPT4AllGPT4All
- YouVocê
@@ -1055,215 +890,181 @@ modelo instalado para funcionar
- response stopped ...resposta interrompida...
- processing ...processando...
- generating response ...gerando resposta...
- generating questions ...gerando perguntas...
-
- CopyCopiar
- Copy MessageCopiar Mensagem
- Disable markdownDesativar markdown
- Enable markdownAtivar markdown
- Thumbs upResposta boa
- Gives a thumbs up to the responseCurte a resposta
- Thumbs downResposta ruim
- Opens thumbs down dialogAbrir diálogo de joinha para baixo
-
-
-
- %1 Sources
- %1 Origens
-
- Suggested follow-upsPerguntas relacionadas
- Erase and reset chat sessionApagar e redefinir sessão de chat
- Copy chat session to clipboardCopiar histórico da conversa
- Redo last chat responseRefazer última resposta
- Stop generatingParar de gerar
- Stop the current response generationParar a geração da resposta atual
- Reloads the modelRecarrega modelo
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>Ocorreu um erro ao carregar o modelo:</h3><br><i>"%1"</i><br><br>Falhas no carregamento do modelo podem acontecer por vários motivos, mas as causas mais comuns incluem um formato de arquivo incorreto, um download incompleto ou corrompido, o tipo de arquivo errado, memória RAM do sistema insuficiente ou um tipo de modelo incompatível. Aqui estão algumas sugestões para resolver o problema:<br><ul><li>Certifique-se de que o arquivo do modelo tenha um formato e tipo compatíveis<li>Verifique se o arquivo do modelo está completo na pasta de download<li>Você pode encontrar a pasta de download na caixa de diálogo de configurações<li>Se você carregou o modelo, certifique-se de que o arquivo não esteja corrompido verificando o md5sum<li>Leia mais sobre quais modelos são suportados em nossa <a href="https://docs.gpt4all.io/">documentação</a> para a interface gráfica<li>Confira nosso <a href="https://discord.gg/4M2QFmTt2k">canal do Discord</a> para obter ajuda
-
- Reload · %1Recarregar · %1
- Loading · %1Carregando · %1
- Load · %1 (default) →Carregar · %1 (padrão) →
- restoring from text ...Recuperando do texto...
- retrieving localdocs: %1 ...Recuperando dados em LocalDocs: %1 ...
- searching localdocs: %1 ...Buscando em LocalDocs: %1 ...
+
+
+ %n Source(s)
+
+ %n Origem
+ %n Origens
+
+
- Send a message...Enviar uma mensagem...
- Load a model to continue...Carregue um modelo para continuar...
- Send messages/prompts to the modelEnviar mensagens/prompts para o modelo
- CutRecortar
- PasteColar
- Select AllSelecionar tudo
- Send messageEnviar mensagem
- Sends the message/prompt contained in textfield to the modelEnvia a mensagem/prompt contida no campo de texto para o modelo
@@ -1272,13 +1073,11 @@ modelo instalado para funcionar
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete resultsAviso: pesquisar coleções durante a indexação pode retornar resultados incompletos
- %n file(s)%n arquivo(s)
@@ -1287,7 +1086,6 @@ modelo instalado para funcionar
- %n word(s)%n palavra(s)
@@ -1296,19 +1094,16 @@ modelo instalado para funcionar
- UpdatingAtualizando
- + Add Docs+ Adicionar Documentos
- Select a collection to make it available to the chat model.Selecione uma coleção para disponibilizá-la ao modelo de chat.
@@ -1355,109 +1150,91 @@ modelo instalado para funcionar
HomeView
- Welcome to GPT4AllBem-vindo ao GPT4All
- The privacy-first LLM chat applicationO aplicativo de chat LLM que prioriza a privacidade
- Start chattingIniciar chat
- Start ChattingIniciar Chat
- Chat with any LLMConverse com qualquer LLM
- LocalDocsLocalDocs
- Chat with your local filesConverse com seus arquivos locais
- Find ModelsEncontrar Modelos
- Explore and download modelsDescubra e baixe modelos
- Latest newsÚltimas novidades
- Latest news from GPT4AllÚltimas novidades do GPT4All
- Release NotesNotas de versão
- DocumentationDocumentação
- DiscordDiscord
- X (Twitter)X (Twitter)
- GithubGithub
- GPT4All.ioGPT4All.io
- Subscribe to NewsletterAssine nossa Newsletter
@@ -1466,140 +1243,117 @@ modelo instalado para funcionar
LocalDocsSettings
- LocalDocsLocalDocs
- LocalDocs SettingsConfigurações do LocalDocs
- IndexingIndexação
- Allowed File ExtensionsExtensões de Arquivo Permitidas
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.Lista separada por vírgulas. O LocalDocs tentará processar apenas arquivos com essas extensões.
- EmbeddingIncorporação
- Use Nomic Embed APIUsar a API Nomic Embed
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.Incorporar documentos usando a API Nomic rápida em vez de um modelo local privado. Requer reinicialização.
- Nomic API KeyChave da API Nomic
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Chave da API a ser usada para Nomic Embed. Obtenha uma na página de <a href="https://atlas.nomic.ai/cli-login">chaves de API do Atlas</a>. Requer reinicialização.
- Embeddings DeviceProcessamento de Incorporações
- The compute device used for embeddings. Requires restart.Dispositivo usado para processar as incorporações. Requer reinicialização.
- Application defaultAplicativo padrão
- DisplayExibir
- Show SourcesMostrar Fontes
- Display the sources used for each response.Mostra as fontes usadas para cada resposta.
- AdvancedApenas para usuários avançados
- Warning: Advanced usage only.Atenção: Apenas para usuários avançados.
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valores muito altos podem causar falhas no LocalDocs, respostas extremamente lentas ou até mesmo nenhuma resposta. De forma geral, o valor {Número de Caracteres x Número de Trechos} é adicionado à janela de contexto do modelo. Clique <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aqui</a> para mais informações.
- Document snippet size (characters) (Translator note: "snippet" was rendered as "trecho" so the term feels natural in Portuguese; "trecho" conveys a portion of a document well, where a more literal translation would sound awkward.) Tamanho do trecho de documento (caracteres)
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Número de caracteres por trecho de documento. Valores maiores aumentam a chance de respostas factuais, mas também tornam a geração mais lenta.
- Max document snippets per promptMáximo de Trechos de Documento por Prompt
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Número máximo de trechos de documentos a serem adicionados ao contexto do prompt. Valores maiores aumentam a chance de respostas factuais, mas também tornam a geração mais lenta.
@@ -1608,19 +1362,16 @@ modelo instalado para funcionar
LocalDocsView
- LocalDocsLocalDocs
- Chat with your local filesConverse com seus arquivos locais
- + Add Collection+ Adicionar Coleção
@@ -1630,121 +1381,101 @@ modelo instalado para funcionar
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>ERRO: Não foi possível acessar o banco de dados do LocalDocs ou ele não é válido.</h3><br><i>Observação: Será necessário reiniciar o aplicativo após tentar qualquer uma das seguintes correções sugeridas.</i><br><ul><li>Certifique-se de que a pasta definida como <b>Caminho de Download</b> existe no sistema de arquivos.</li><li>Verifique a propriedade, bem como as permissões de leitura e gravação do <b>Caminho de Download</b>.</li><li>Se houver um arquivo <b>localdocs_v2.db</b>, verifique também sua propriedade e permissões de leitura/gravação.</li></ul><br>Se o problema persistir e houver algum arquivo 'localdocs_v*.db' presente, como último recurso, você pode<br>tentar fazer backup deles e removê-los. No entanto, você terá que recriar suas coleções.
- No Collections InstalledNenhuma Coleção Instalada
- Install a collection of local documents to get started using this featureInstale uma coleção de documentos locais para começar a usar este recurso
- + Add Doc Collection+ Adicionar Coleção de Documentos
- Shows the add model viewMostra a visualização para adicionar modelo
- Indexing progressBarBarra de progresso de indexação
- Shows the progress made in the indexingMostra o progresso da indexação
- ERRORERRO
- INDEXINGINDEXANDO
- EMBEDDINGINCORPORANDO
- REQUIRES UPDATEREQUER ATUALIZAÇÃO
- READYPRONTO
- INSTALLINGINSTALANDO
- Indexing in progressIndexação em andamento
- Embedding in progressIncorporação em andamento
- This collection requires an update after version changeEsta coleção precisa ser atualizada após a mudança de versão
- Automatically reindexes upon changes to the folderReindexa automaticamente após alterações na pasta
- Installation in progressInstalação em andamento
- %%
- %n file(s)%n arquivo(s)
@@ -1753,7 +1484,6 @@ modelo instalado para funcionar
- %n word(s)%n palavra(s)
@@ -1762,31 +1492,26 @@ modelo instalado para funcionar
- RemoveRemover
- RebuildReconstruir
- Reindex this folder from scratch. This is slow and usually not needed.Reindexar pasta do zero. Lento e geralmente desnecessário.
- UpdateAtualizar
- Update the collection to the new version. This is a slow operation.Atualizar coleção para nova versão. Pode demorar.
@@ -1863,109 +1588,91 @@ modelo instalado para funcionar
ModelSettings
- ModelModelo
- Model SettingsConfigurações do Modelo
- CloneClonar
- RemoveRemover
- NameNome
- Model FileArquivo do Modelo
- System PromptPrompt do Sistema
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Prefixado no início de cada conversa. Deve conter os tokens de enquadramento apropriados.
- Prompt TemplateModelo de Prompt
- The template that wraps every prompt.Modelo para cada prompt.
- Must contain the string "%1" to be replaced with the user's input.Deve incluir "%1" para a entrada do usuário.
- Chat Name PromptPrompt para Nome do Chat
- Prompt used to automatically generate chat names.Prompt usado para gerar automaticamente nomes de chats.
- Suggested FollowUp PromptPrompt de Sugestão de Acompanhamento
- Prompt used to generate suggested follow-up questions.Prompt usado para gerar sugestões de perguntas.
- Context LengthTamanho do Contexto
- Number of input and output tokens the model sees.Tamanho da Janela de Contexto.
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1975,19 +1682,16 @@ Obs.: Só entrará em vigor após recarregar o modelo.
- TemperatureTemperatura
- Randomness of model output. Higher -> more variation.Aleatoriedade das respostas. Quanto maior, mais variadas.
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.Aumenta a chance de escolher tokens menos prováveis.
@@ -1995,19 +1699,16 @@ Obs.: Uma temperatura mais alta gera resultados mais criativos, mas menos previs
- Top-PTop-P
- Nucleus Sampling factor. Lower -> more predictable.Amostragem por núcleo. Menor valor, respostas mais previsíveis.
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Apenas tokens com probabilidade total até o valor de top_p serão escolhidos.
@@ -2015,67 +1716,56 @@ Obs.: Evita tokens muito improváveis.
- Min-PMin-P
- Minimum token probability. Higher -> more predictable.Probabilidade mínima do token. Quanto maior -> mais previsível.
- Sets the minimum relative probability for a token to be considered.Define a probabilidade relativa mínima para um token ser considerado.
- Top-KTop-K
- Size of selection pool for tokens.Número de tokens considerados na amostragem.
- Only the top K most likely tokens will be chosen from.Serão escolhidos apenas os K tokens mais prováveis.
- Max LengthComprimento Máximo
- Maximum response length, in tokens.Comprimento máximo da resposta, em tokens.
- Prompt Batch SizeTamanho do Lote de Processamento
- The batch size used for prompt processing.Tokens processados por lote.
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Quantidade de tokens de prompt para processar de uma vez.
@@ -2083,43 +1773,36 @@ OBS.: Valores mais altos podem acelerar a leitura dos prompts, mas usarão mais
- Repeat PenaltyPenalidade de Repetição
- Repetition penalty factor. Set to 1 to disable.Penalidade de Repetição (1 para desativar).
- Repeat Penalty TokensTokens para penalizar repetição
- Number of previous tokens used for penalty.Número de tokens anteriores usados para penalidade.
- GPU LayersCamadas na GPU
- Number of model layers to load into VRAM.Camadas Carregadas na GPU.
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2132,203 +1815,168 @@ Obs.: Só entrará em vigor após recarregar o modelo.
ModelsView
- No Models InstalledNenhum Modelo Instalado
- Install a model to get started using GPT4AllInstale um modelo para começar a usar o GPT4All
-
- + Add Model+ Adicionar Modelo
- Shows the add model viewMostra a visualização para adicionar modelo
- Installed ModelsModelos Instalados
- Locally installed chat modelsModelos de chat instalados localmente
- Model fileArquivo do modelo
- Model file to be downloadedArquivo do modelo a ser baixado
- DescriptionDescrição
- File descriptionDescrição do arquivo
- CancelCancelar
- ResumeRetomar
- Stop/restart/start the downloadParar/reiniciar/iniciar o download
- RemoveRemover
- Remove model from filesystemRemover modelo do sistema de arquivos
-
- InstallInstalar
- Install online modelInstalar modelo online
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Erro</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">AVISO: Não recomendado para seu hardware. O modelo requer mais memória (%1 GB) do que seu sistema tem disponível (%2).</strong></font>
- ERROR: $API_KEY is empty.ERRO: A $API_KEY está vazia.
- ERROR: $BASE_URL is empty.ERRO: A $BASE_URL está vazia.
- enter $BASE_URLinserir a $BASE_URL
- ERROR: $MODEL_NAME is empty.ERRO: O $MODEL_NAME está vazio.
- enter $MODEL_NAMEinserir o $MODEL_NAME
- %1 GB%1 GB
- ??
- Describes an error that occurred when downloadingDescreve um erro que ocorreu durante o download
- Error for incompatible hardwareErro para hardware incompatível
- Download progressBarBarra de progresso do download
- Shows the progress made in the downloadMostra o progresso do download
- Download speedVelocidade de download
- Download speed in bytes/kilobytes/megabytes per secondVelocidade de download em bytes/kilobytes/megabytes por segundo
- Calculating...Calculando...
@@ -2337,58 +1985,46 @@ Obs.: Só entrará em vigor após recarregar o modelo.
-
-
-
- Whether the file hash is being calculatedSe o hash do arquivo está sendo calculado
- Busy indicatorIndicador de ocupado
- Displayed when the file hash is being calculatedExibido quando o hash do arquivo está sendo calculado
- enter $API_KEYinserir $API_KEY
- File sizeTamanho do arquivo
- RAM requiredRAM necessária
- ParametersParâmetros
- QuantQuant
- TypeTipo
@@ -2397,13 +2033,11 @@ Obs.: Só entrará em vigor após recarregar o modelo.
MyFancyLink
- Fancy linkLink personalizado
- A stylized linkUm link personalizado
@@ -2412,7 +2046,6 @@ Obs.: Só entrará em vigor após recarregar o modelo.
MySettingsStack
- Please choose a directoryEscolha um diretório
@@ -2421,13 +2054,11 @@ Obs.: Só entrará em vigor após recarregar o modelo.
MySettingsTab
- Restore DefaultsRestaurar Configurações Padrão
- Restores settings dialog to a default stateRestaura as configurações para o estado padrão
@@ -2436,13 +2067,11 @@ Obs.: Só entrará em vigor após recarregar o modelo.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.Contribuir com dados para o Datalake de código aberto GPT4All.
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2456,55 +2085,46 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
- Terms for opt-inTermos de participação
- Describes what will happen when you opt-inDescrição do que acontece ao participar
- Please provide a name for attribution (optional)Forneça um nome para atribuição (opcional)
- Attribution (optional)Atribuição (opcional)
- Provide attributionFornecer atribuição
- EnableHabilitar
- Enable opt-inAtivar participação
- CancelCancelar
- Cancel opt-inCancelar participação
@@ -2513,19 +2133,16 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
NewVersionDialog
- New version is availableAtualização disponível
- UpdateAtualizar agora
- Update to new versionBaixa e instala a última versão do GPT4All
@@ -2534,20 +2151,17 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
PopupDialog
- Reveals a shortlived help balloonExibe uma dica rápida
- Busy indicator (Translator note: the literal "indicador de ocupado" is ambiguous in Portuguese, as it does not convey whether the system is processing something or simply unavailable; "progresso" more clearly signals that an activity is underway and the user should wait for its completion.) Indicador de progresso
- Displayed when the popup is showing busyVisível durante o processamento
@@ -2564,33 +2178,27 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
-
- Settings (Translator note: "Config" was used instead of "Configurações" to keep the UI concise and visually balanced; it is a widely recognized abbreviation that saves space while remaining clear.) Config
- Contains various application settingsAcessar as configurações do aplicativo
- ApplicationAplicativo
- ModelModelo
- LocalDocsLocalDocs
@@ -2599,13 +2207,11 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
StartupDialog
- Welcome!Bem-vindo(a)!
- ### Release notes
%1### Contributors
%2
@@ -2615,19 +2221,16 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
- Release notesNotas de lançamento
- Release notes for this versionNotas de lançamento desta versão
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2659,87 +2262,70 @@ versão do modelo GPT4All que utilize seus dados!
- Terms for opt-inTermos de participação
- Describes what will happen when you opt-inDescrição do que acontece ao participar
-
- Opt-in for anonymous usage statisticsEnviar estatísticas de uso anônimas
-
- YesSim
- Allow opt-in for anonymous usage statisticsPermitir o envio de estatísticas de uso anônimas
-
- NoNão
- Opt-out for anonymous usage statisticsRecusar envio de estatísticas de uso anônimas
- Allow opt-out for anonymous usage statisticsPermitir recusar envio de estatísticas de uso anônimas
-
- Opt-in for networkAceitar na rede
- Allow opt-in for networkPermitir aceitação na rede
- Allow opt-in anonymous sharing of chats to the GPT4All DatalakePermitir compartilhamento anônimo de chats no Datalake GPT4All
- Opt-out for networkRecusar na rede
- Allow opt-out anonymous sharing of chats to the GPT4All DatalakePermitir recusar compartilhamento anônimo de chats no Datalake GPT4All
@@ -2748,27 +2334,22 @@ versão do modelo GPT4All que utilize seus dados!
SwitchModelDialog
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Atenção:</b> Ao trocar o modelo a conversa atual será perdida. Continuar?
- ContinueContinuar
- Continue with model loadingConfirma a troca do modelo
-
- CancelCancelar
@@ -2777,37 +2358,31 @@ versão do modelo GPT4All que utilize seus dados!
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)Editar resposta (opcional)
- Please provide a better response...Digite sua resposta...
- SubmitEnviar
- Submits the user's responseEnviar
- CancelCancelar
- Closes the response dialogFecha a caixa de diálogo de resposta
@@ -2816,151 +2391,124 @@ versão do modelo GPT4All que utilize seus dados!
main
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>Ocorreu um erro ao iniciar:</h3><br><i>"Hardware incompatível detectado."</i><br><br>Infelizmente, seu processador não atende aos requisitos mínimos para executar este programa. Especificamente, ele não possui suporte às instruções AVX, que são necessárias para executar modelos de linguagem grandes e modernos. A única solução, no momento, é atualizar seu hardware para um processador mais recente.<br><br>Para mais informações, consulte: <a href="https://pt.wikipedia.org/wiki/Advanced_Vector_Extensions">https://pt.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- GPT4All v%1GPT4All v%1
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>Ocorreu um erro ao iniciar:</h3><br><i>"Não foi possível acessar o arquivo de configurações."</i><br><br>Infelizmente, algo está impedindo o programa de acessar o arquivo de configurações. Isso pode acontecer devido a permissões incorretas na pasta de configurações do aplicativo. Para obter ajuda, acesse nosso <a href="https://discord.gg/4M2QFmTt2k">canal no Discord</a>.
- Connection to datalake failed.Falha na conexão com o datalake.
- Saving chats.Salvando chats.
- Network dialogAvisos de rede
- opt-in to share feedback/conversationspermitir compartilhamento de feedback/conversas
- Home viewTela inicial
- Home view of applicationTela inicial do aplicativo
- HomeInício
- Chat viewVisualização do Chat
- Chat view to interact with modelsVisualização do chat para interagir com os modelos
- ChatsChats
-
- ModelsModelos
- Models view for installed modelsTela de modelos instalados
-
- LocalDocsLocalDocs
- LocalDocs view to configure and use local docsTela de configuração e uso de documentos locais do LocalDocs
-
- SettingsConfig
- Settings view for application configurationTela de configurações do aplicativo
- The datalake is enabledO datalake está ativado
- Using a network modelUsando um modelo de rede
- Server mode is enabledModo servidor ativado
- Installed modelsModelos instalados
- View of installed modelsExibe os modelos instalados
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index 89c029a6..7ce96011 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -5,13 +5,11 @@
AddCollectionView
- ← Existing Collections← Colecţiile curente
- Add Document CollectionAdaugă o Colecţie de documente
@@ -22,61 +20,51 @@
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi specificate în Configurare.
- Please choose a directorySelectează un director/folder
- NameDenumire
- Collection name...Denumirea Colecţiei...
- Name of the collection to add (Required)Denumirea Colecţiei de adăugat (necesar)
- FolderFolder
- Folder path...Calea spre folder...
- Folder path to documents (Required)Calea spre documente (necesar)
- BrowseCăutare
- Create CollectionCreează Colecţia
@@ -85,209 +73,173 @@
AddModelView
- ← Existing Models← Modelele curente/instalate
- Explore ModelsCaută modele
- Discover and download models by keyword search...Caută şi descarcă modele după un cuvânt-cheie...
- Text field for discovering and filtering downloadable modelsCâmp pentru căutarea şi filtrarea modelelor ce pot fi descărcate
- Initiate model discovery and filteringIniţiază căutarea şi filtrarea modelelor
- Triggers discovery and filtering of modelsActivează căutarea şi filtrarea modelelor
- DefaultImplicit
- LikesLikes
- DownloadsDownload-uri
- RecentRecent/e
- AscAsc. (A->Z)
- DescDesc. (Z->A)
- NoneNiciunul
- Searching · %1Căutare · %1
- Sort by: %1Ordonare după: %1
- Sort dir: %1Sensul ordonării: %1
- Limit: %1Limită: %1
- Network error: could not retrieve %1Eroare de reţea: nu se poate prelua %1
-
- Busy indicatorIndicator de activitate
- Displayed when the models request is ongoingAfişat în timpul solicitării modelului
- Model fileFişierul modelului
- Model file to be downloadedFişierul modelului de descărcat
- DescriptionDescriere
- File descriptionDescrierea fişierului
- CancelAnulare
- ResumeContinuare
- DownloadDownload
- Stop/restart/start the downloadOpreşte/Reporneşte/Începe descărcarea
- RemoveŞterge
- Remove model from filesystemŞterge modelul din sistemul de fişiere
-
- InstallInstalare
- Install online modelInstalez un model din online
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Eroare</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem (%2).</strong></font>
@@ -299,21 +251,17 @@
- %1 GB%1 GB
-
- ??
- Describes an error that occurred when downloadingDescrie eroarea apărută în timpul descărcării
@@ -324,37 +272,31 @@
- Error for incompatible hardwareEroare: hardware incompatibil
- Download progressBarProgresia descărcării
- Shows the progress made in the downloadAfişează progresia descărcării
- Download speedViteza de download
- Download speed in bytes/kilobytes/megabytes per secondViteza de download în bytes/kilobytes/megabytes pe secundă
- Calculating...Calculare...
@@ -363,82 +305,66 @@
-
-
-
- Whether the file hash is being calculatedDacă se calculează hash-ul fişierului
- Displayed when the file hash is being calculatedSe afişează când se calculează hash-ul fişierului
- ERROR: $API_KEY is empty.EROARE: $API_KEY absentă
- enter $API_KEYintrodu cheia $API_KEY
- ERROR: $BASE_URL is empty.EROARE: $BASE_URL absentă
- enter $BASE_URLintrodu $BASE_URL
- ERROR: $MODEL_NAME is empty.EROARE: $MODEL_NAME absent
- enter $MODEL_NAMEintrodu $MODEL_NAME
- File sizeDimensiunea fişierului
- RAM requiredRAM necesară
- ParametersParametri
- QuantQuant(ificare)
- TypeTip
@@ -447,19 +373,16 @@
ApplicationSettings
- ApplicationAplicaţie/Program
- Network dialogReţea
- opt-in to share feedback/conversationsopţional: partajarea (share) de comentarii/conversaţii
@@ -476,67 +399,56 @@
- Error dialogEroare
- Application SettingsConfigurarea programului
- GeneralGeneral
- ThemeTema pentru interfaţă
- The application color scheme.Schema de culori a programului.
- DarkÎntunecat
- LightLuminos
- LegacyDarkÎntunecat-vechi
- Font SizeDimensiunea textului
- The size of text in the application.Dimensiunea textului în program.
- DeviceDispozitiv/Device
@@ -547,7 +459,6 @@
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -559,165 +470,137 @@
- SmallMic
- MediumMediu
- LargeMare
- Language and LocaleLimbă şi Localizare
- The language and locale you wish to use.Limba şi Localizarea de utilizat.
- System LocaleLocalizare
- The compute device used for text generation.Dispozitivul de calcul utilizat pentru generarea de text.
-
- Application defaultImplicit
- Default ModelModelul implicit
- The preferred model for new chats. Also used as the local server fallback.Modelul preferat pentru noile conversaţii. Va fi folosit drept rezervă pentru serverul local.
- Suggestion ModeModul de sugerare
- Generate suggested follow-up questions at the end of responses.Generarea de întrebări pentru continuare, la finalul replicilor.
- When chatting with LocalDocsCând se discută cu LocalDocs
- Whenever possibleOricând e posibil
- NeverNiciodată
- Download PathCalea pentru download
- Where to store local models and the LocalDocs database.Unde să fie plasate modelele şi baza de date LocalDocs.
- BrowseCăutare
- Choose where to save model filesSelectează locul unde vor fi plasate fişierele modelelor
- Enable DatalakeActivează DataLake
- Send chats and feedback to the GPT4All Open-Source Datalake.Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All.
- AdvancedAvansate
- CPU ThreadsThread-uri CPU
- The number of CPU threads used for inference and embedding.Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding.
- Save Chat ContextSalvarea contextului conversaţiei
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
@@ -728,7 +611,6 @@
- Enable Local ServerActivează Serverul local
@@ -739,31 +621,26 @@
- API Server PortPortul Serverului API
- The port to use for the local server. Requires restart.Portul utilizat pentru Serverul local. Necesită repornirea programului.
- Check For UpdatesCaută update-uri
- Manually check for an update to GPT4All.Caută manual update-uri pentru GPT4All.
- UpdatesUpdate-uri/Actualizări
@@ -799,73 +676,61 @@
ChatDrawer
- DrawerSertar
- Main navigation drawerSertarul principal de navigare
- + New Chat+ Conversaţie nouă
- Create a new chatCreează o Conversaţie nouă
- Select the current chat or edit the chat when in edit modeSelectează conversaţia curentă sau editeaz-o când eşti în modul editare
- Edit chat nameEditează denumirea conversaţiei
- Save chat nameSalvează denumirea conversaţiei
- Delete chatŞterge conversaţia
- Confirm chat deletionCONFIRMĂ ştergerea conversaţiei
- Cancel chat deletionANULEAZĂ ştergerea conversaţiei
- List of chatsLista conversaţiilor
- List of chats in the drawer dialogLista conversaţiilor în secţiunea-sertar
@@ -907,135 +772,112 @@
ChatView
- <h3>Warning</h3><p>%1</p><h3>Atenţie</h3><p>%1</p>
- Switch model dialogSchimbarea modelului
- Warn the user if they switch models, then context will be erasedAvertizează utilizatorul că la schimbarea modelului va fi şters contextul
- Conversation copied to clipboard.Conversaţia a fost plasată în Clipboard.
- Code copied to clipboard.Codul a fost plasat în Clipboard.
- Chat panelSecţiunea de chat
- Chat panel with optionsSecţiunea de chat cu opţiuni
- Reload the currently loaded modelReîncarcă modelul curent
- Eject the currently loaded modelEjectează modelul curent
- No model installed.Niciun model instalat.
- Model loading error.Eroare la încărcarea modelului.
- Waiting for model...Se aşteaptă modelul...
- Switching context...Se schimbă contextul...
- Choose a model...Selectează un model...
- Not found: %1Absent: %1
- The top item is the current modelPrimul element e modelul curent
-
- LocalDocsLocalDocs
- Add documentsAdaug documente
- add collections of documents to the chatadaugă Colecţii de documente la conversaţie
- Load the default modelÎncarcă modelul implicit
- Loads the default model which can be changed in settingsÎncarcă modelul implicit care poate fi stabilit în Configurare
- No Model InstalledNiciun model instalat
@@ -1046,7 +888,6 @@
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>EROARE la încărcarea
modelului:</h3><br><i>"%1"</i><br><br>Astfel
@@ -1066,44 +907,37 @@
- GPT4All requires that you install at least one
model to get startedGPT4All necesită cel puţin un model pentru a putea rula
- Install a ModelInstalează un model
- Shows the add model viewAfişează secţiunea de adăugare a unui model
- Conversation with the modelConversaţie cu modelul
- prompt / response pairs from the conversationperechi prompt/replică din conversaţie
- GPT4AllGPT4All
- YouTu
@@ -1113,123 +947,97 @@ model to get started
- response stopped ...replică întreruptă...
- processing ...procesare...
- generating response ...se generează replica...
- generating questions ...se generează întrebări...
-
- CopyCopiere
- Copy MessageCopiez mesajul
- Disable markdownDezactivez markdown
- Enable markdownActivez markdown
- Thumbs upBravo
- Gives a thumbs up to the responseDă un Bravo acestei replici
- Thumbs downAiurea
- Opens thumbs down dialogDeschide reacţia Aiurea
-
-
-
- %1 Sources
- %1 Surse
-
- Suggested follow-upsContinuări sugerate
- Erase and reset chat sessionŞterge şi resetează sesiunea de chat
- Copy chat session to clipboardCopiez sesiunea de chat (conversaţia) în Clipboard
- Redo last chat responseReface ultima replică
- Stop generatingOpreşte generarea
- Stop the current response generationOpreşte generarea replicii curente
- Reloads the modelReîncarc modelul
@@ -1266,86 +1074,80 @@ model to get started
-
- Reload · %1Reîncărcare · %1
- Loading · %1Încărcare · %1
- Load · %1 (default) →Încarcă · %1 (implicit) →
- restoring from text ...restaurare din text...
- retrieving localdocs: %1 ...se preia din LocalDocs: %1 ...
- searching localdocs: %1 ...se caută în LocalDocs: %1 ...
+
+
+ %n Source(s)
+
+ %n Sursa
+ %n Surse
+ %n de Surse
+
+
- Send a message...Trimite un mesaj...
- Load a model to continue...Încarcă un model pentru a continua...
- Send messages/prompts to the modelTrimite mesaje/prompt-uri către model
- CutDecupare (Cut)
- PasteAlipire (Paste)
- Select AllSelectez tot
- Send messageTrimit mesajul
- Sends the message/prompt contained in textfield to the modelTrimite modelului mesajul/prompt-ul din câmpul-text
@@ -1354,45 +1156,39 @@ model to get started
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete resultsAtenţie: căutarea în Colecţii în timp ce sunt Indexate poate întoarce rezultate incomplete
- %n file(s)%n fişier%n fişiere
- %n fişiere
+ %n de fişiere
- %n word(s)%n cuvânt%n cuvinte
- %n cuvinte
+ %n de cuvinte
- UpdatingActualizare
- + Add Docs+ Adaug documente
- Select a collection to make it available to the chat model.Selectează o Colecţie pentru ca modelul să o poată accesa.
@@ -1439,97 +1235,81 @@ model to get started
HomeView
- Welcome to GPT4AllBun venit în GPT4All
- The privacy-first LLM chat applicationProgramul ce prioritizează confidenţialitatea (privacy)
- Start chattingÎncepe o conversaţie
- Start ChattingÎncepe o conversaţie
- Chat with any LLMDialoghează cu orice LLM
- LocalDocsLocalDocs
- Chat with your local filesDialoghează cu fişiere locale
- Find ModelsCaută modele
- Explore and download modelsExplorează şi descarcă modele
- Latest newsUltimele ştiri
- Latest news from GPT4AllUltimele ştiri de la GPT4All
- Release NotesDespre această versiune
- DocumentationDocumentaţie
- DiscordDiscord
- X (Twitter)X (Twitter)
- GithubGitHub
@@ -1539,13 +1319,11 @@ model to get started
- GPT4All.ioGPT4All.io
- Subscribe to NewsletterAbonare la Newsletter
@@ -1554,25 +1332,21 @@ model to get started
LocalDocsSettings
- LocalDocsLocalDocs
- LocalDocs SettingsConfigurarea LocalDocs
- IndexingIndexare
- Allowed File ExtensionsExtensii compatibile de fişier
@@ -1583,13 +1357,11 @@ model to get started
- EmbeddingEmbedding
- Use Nomic Embed APIFolosesc Nomic Embed API
@@ -1600,7 +1372,6 @@ model to get started
- Nomic API KeyCheia API Nomic
@@ -1612,85 +1383,71 @@ model to get started
- Embeddings DeviceDispozitivul pentru Embeddings
- The compute device used for embeddings. Requires restart.Dispozitivul pentru Embeddings. Necesită repornire.
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
- Application defaultImplicit
- DisplayVizualizare
- Show SourcesAfişarea Surselor
- Display the sources used for each response.Afişează Sursele utilizate pentru fiecare replică.
- AdvancedAvansate
- Warning: Advanced usage only.Atenţie: Numai pentru utilizare avansată.
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
@@ -1705,7 +1462,6 @@ model to get started
- Document snippet size (characters)Lungimea (în caractere) a citatelor din documente
@@ -1716,7 +1472,6 @@ model to get started
- Max document snippets per promptNumărul maxim de citate per prompt
@@ -1731,19 +1486,16 @@ model to get started
LocalDocsView
- LocalDocsLocalDocs
- Chat with your local filesDialoghează cu fişiere locale
- + Add Collection+ Adaugă o Colecţie
@@ -1753,165 +1505,139 @@ model to get started
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.EROARE: Baza de date LocalDocs nu poate fi accesată sau nu e validă. Programul trebuie repornit după ce se încearcă oricare din următoarele remedii sugerate.</i><br><ul><li>Asigură-te că folderul pentru <b>Download Path</b> există în sistemul de fişiere.</li><li>Verifică permisiunile şi apartenenţa folderului pentru <b>Download Path</b>.</li><li>Dacă există fişierul <b>localdocs_v2.db</b>, verifică-i apartenenţa şi permisiunile citire/scriere (read/write).</li></ul><br>Dacă problema persistă şi există vreun fişier 'localdocs_v*.db', ca ultimă soluţie poţi<br>încerca duplicarea (backup) şi apoi ştergerea lor. Oricum, va trebui să re-creezi Colecţiile.
- No Collections InstalledNu există Colecţii instalate
- Install a collection of local documents to get started using this featureInstalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta
- + Add Doc Collection+ Adaugă o Colecţie de documente
- Shows the add model viewAfişează secţiunea de adăugare a unui model
- Indexing progressBarBara de progresie a Indexării
- Shows the progress made in the indexingAfişează progresia Indexării
- ERROREROARE
- INDEXING...SE INDEXEAZĂ...
- EMBEDDING...EMBEDDINGs...
- REQUIRES UPDATENECESITĂ UPDATE
- READYGATA
- INSTALLING...INSTALARE...
- Indexing in progressSe Indexează...
- Embedding in progress...Se calculează Embeddings...
- This collection requires an update after version changeAceastă Colecţie necesită update după schimbarea versiunii
- Automatically reindexes upon changes to the folderSe reindexează automat după schimbări ale folderului
- Installation in progress...Instalare în curs...
- %%
- %n file(s)%n fişier%n fişiere
- %n fişiere
+ %n de fişiere
- %n word(s)%n cuvânt%n cuvinte
- %n cuvinte
+ %n de cuvinte
- RemoveŞterg
- RebuildReconstrucţie
- Reindex this folder from scratch. This is slow and usually not needed.Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil.
- UpdateUpdate/Actualizare
- Update the collection to the new version. This is a slow operation.Actualizează Colecţia la noua versiune. Această procedură e lentă.
@@ -2027,43 +1753,36 @@ model to get started
ModelSettings
- ModelModel
- Model SettingsConfigurez modelul
- CloneClonez
- RemoveŞterg
- NameDenumire
- Model FileFişierul modelului
- System PromptSystem Prompt
@@ -2074,13 +1793,11 @@ model to get started
- Prompt TemplatePrompt Template
- The template that wraps every prompt.Standardul de formulare a fiecărui prompt.
@@ -2091,37 +1808,31 @@ model to get started
- Chat Name PromptDenumirea conversaţiei
- Prompt used to automatically generate chat names.Standardul de formulare a denumirii conversaţiilor.
- Suggested FollowUp PromptPrompt-ul sugerat pentru a continua
- Prompt used to generate suggested follow-up questions.Prompt-ul folosit pentru generarea întrebărilor de continuare.
- Context LengthLungimea Contextului
- Number of input and output tokens the model sees.Numărul token-urilor de input şi de output văzute de model.
@@ -2133,13 +1844,11 @@ model to get started
- TemperatureTemperatura
- Randomness of model output. Higher -> more variation.Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate.
@@ -2150,13 +1859,11 @@ model to get started
- Top-PTop-P
- Nucleus Sampling factor. Lower -> more predictable.Factorul de Nucleus Sampling. Mai mic -> predictibilitate mai mare.
@@ -2167,19 +1874,16 @@ model to get started
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare.
- Must contain the string "%1" to be replaced with the user's input.Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul.
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -2187,88 +1891,74 @@ NOTE: Does not take effect until you reload the model.
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
- Min-PMin-P
- Minimum token probability. Higher -> more predictable.Probabilitatea minimă a unui token. Mai mare -> mai predictibil.
- Sets the minimum relative probability for a token to be considered.Stabileşte probabilitatea minimă relativă a unui token de luat în considerare.
- Top-KTop-K
- Size of selection pool for tokens.Dimensiunea setului de token-uri.
- Only the top K most likely tokens will be chosen from.Se va alege numai din cele mai probabile K token-uri.
- Max LengthLungimea maximă
- Maximum response length, in tokens.Lungimea maximă - în token-uri - a replicii.
- Prompt Batch SizePrompt Batch Size
- The batch size used for prompt processing.Dimensiunea setului de token-uri citite simultan din prompt.
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2281,37 +1971,31 @@ NOTE: Does not take effect until you reload the model.
- Repeat PenaltyPenalizarea pentru repetare
- Repetition penalty factor. Set to 1 to disable.Factorul de penalizare a repetării ce se dezactivează cu valoarea 1.
- Repeat Penalty TokensToken-uri pentru penalizare a repetării
- Number of previous tokens used for penalty.Numărul token-urilor anterioare considerate pentru penalizare.
- GPU LayersLayere în GPU
- Number of model layers to load into VRAM.Numărul layerelor modelului ce vor fi încărcate în VRAM.
@@ -2327,107 +2011,88 @@ NOTE: Does not take effect until you reload the model.
ModelsView
- No Models InstalledNu există modele instalate
- Install a model to get started using GPT4AllInstalează un model pentru a începe să foloseşti GPT4All
-
- + Add Model+ Adaugă un model
- Shows the add model viewAfişează secţiunea de adăugare a unui model
- Installed ModelsModele instalate
- Locally installed chat modelsModele conversaţionale instalate local
- Model fileFişierul modelului
- Model file to be downloadedFişierul modelului ce va fi descărcat
- DescriptionDescriere
- File descriptionDescrierea fişierului
- CancelAnulare
- ResumeContinuare
- Stop/restart/start the downloadOprirea/Repornirea/Iniţierea descărcării
- RemoveŞterg
- Remove model from filesystemŞterg modelul din sistemul de fişiere
-
- InstallInstalează
- Install online modelInstalez un model din online
@@ -2444,67 +2109,56 @@ NOTE: Does not take effect until you reload the model.
- %1 GB%1 GB
- ??
- Describes an error that occurred when downloadingDescrie o eroare apărută la download
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#eroare">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
- Error for incompatible hardwareEroare - hardware incompatibil
- Download progressBarBara de progresie a descărcării
- Shows the progress made in the downloadAfişează progresia descărcării
- Download speedViteza de download
- Download speed in bytes/kilobytes/megabytes per secondViteza de download în bytes/kilobytes/megabytes pe secundă
- Calculating......Se calculează...
@@ -2513,88 +2167,71 @@ NOTE: Does not take effect until you reload the model.
-
-
-
- Whether the file hash is being calculatedDacă se va calcula hash-ul fişierului
- Busy indicatorIndicator de activitate
- Displayed when the file hash is being calculatedAfişat când se calculează hash-ul unui fişier
- ERROR: $API_KEY is empty.EROARE: $API_KEY absentă.
- enter $API_KEYintrodu cheia $API_KEY
- ERROR: $BASE_URL is empty.EROARE: $BASE_URL absentă.
- enter $BASE_URLintrodu $BASE_URL
- ERROR: $MODEL_NAME is empty.EROARE: $MODEL_NAME absent.
- enter $MODEL_NAMEintrodu $MODEL_NAME
- File sizeDimensiunea fişierului
- RAM requiredRAM necesară
- ParametersParametri
- QuantQuant(ificare)
- TypeTip
@@ -2603,13 +2240,11 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
- Fancy linkLink haios
- A stylized linkUn link cu stil
@@ -2618,7 +2253,6 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
- Please choose a directorySelectează un director (folder)
@@ -2627,13 +2261,11 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
- Restore DefaultsRestaurez valorile implicite
- Restores settings dialog to a default stateRestaurez secţiunea de configurare la starea sa implicită
@@ -2642,7 +2274,6 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All.
@@ -2681,7 +2312,6 @@ NOTE: Does not take effect until you reload the model.
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2705,55 +2335,46 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- Terms for opt-inTermenii pentru participare
- Describes what will happen when you opt-inDescrie ce se întâmplă când participi
- Please provide a name for attribution (optional)Specifică o denumire pentru această apreciere (opţional)
- Attribution (optional)Apreciere (opţional)
- Provide attributionApreciază
- EnableActivează
- Enable opt-inActivează participarea
- CancelAnulare
- Cancel opt-inAnulează participarea
@@ -2762,19 +2383,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
NewVersionDialog
- New version is availableO nouă versiune disponibilă!
- UpdateUpdate/Actualizare
- Update to new versionActualizează la noua versiune
@@ -2783,19 +2401,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
PopupDialog
- Reveals a shortlived help balloonAfişează un mesaj scurt de asistenţă
- Busy indicatorIndicator de activitate
- Displayed when the popup is showing busySe afişează când procedura este în desfăşurare
@@ -2805,32 +2420,26 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
-
- SettingsConfigurare
- Contains various application settingsConţine setări ale programului
- ApplicationProgram
- ModelModel
- LocalDocsLocalDocs
@@ -2839,7 +2448,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
StartupDialog
- Welcome!Bun venit!
@@ -2853,13 +2461,11 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- Release notesDespre versiune
- Release notes for this versionDespre această versiune
@@ -2894,7 +2500,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Când un model în GPT4All îţi răspunde şi îi accepţi răspunsul, conversaţia este
trimisă la componenta
Open-source DataLake a GPT4All. Mai mult - poţi aprecia (Like/Dislike) răspunsul. Dacă
- un răspuns Nu Îţi Place (e "Aiurea"). poţi
+ un răspuns Nu Îţi Place (e "Aiurea"), poţi
sugera un răspuns alternativ. Aceste date vor fi colectate şi agregate în
componenta DataLake a GPT4All.
@@ -2912,7 +2518,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- ### Release notes
%1### Contributors
%2
@@ -2922,7 +2527,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2959,87 +2563,70 @@ care foloseşte datele tale!
- Terms for opt-inTermenii pentru participare
- Describes what will happen when you opt-inDescrie ce se întâmplă când participi
-
- Opt-in for anonymous usage statisticsAcceptă colectarea de statistici despre utilizare -anonimă-
-
- YesDa
- Allow opt-in for anonymous usage statisticsAcceptă participarea la colectarea de statistici despre utilizare -anonimă-
-
- NoNu
- Opt-out for anonymous usage statisticsAnulează participarea la colectarea de statistici despre utilizare -anonimă-
- Allow opt-out for anonymous usage statisticsPermite anularea participării la colectarea de statistici despre utilizare -anonimă-
-
- Opt-in for networkAcceptă pentru reţea
- Allow opt-in for networkPermite participarea pentru reţea
- Allow opt-in anonymous sharing of chats to the GPT4All DatalakePermite participarea la partajarea (share) -anonimă- a conversaţiilor către DataLake a GPT4All
- Opt-out for networkRefuz participarea, pentru reţea
- Allow opt-out anonymous sharing of chats to the GPT4All DatalakePermite anularea participării la partajarea -anonimă- a conversaţiilor către DataLake a GPT4All
@@ -3053,27 +2640,22 @@ care foloseşte datele tale!
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
- ContinueContinuă
- Continue with model loadingContinuă încărcarea modelului
-
- CancelAnulare
@@ -3082,37 +2664,31 @@ care foloseşte datele tale!
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional).
- Please provide a better response...Te rog, oferă o replică mai bună...
- SubmitTrimite
- Submits the user's responseTrimite răspunsul dat de utilizator
- CancelAnulare
- Closes the response dialogÎnchide afişarea răspunsului
@@ -3134,7 +2710,6 @@ care foloseşte datele tale!
- GPT4All v%1GPT4All v%1
@@ -3154,145 +2729,119 @@ care foloseşte datele tale!
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
- Connection to datalake failed.Conectarea la DataLake a eşuat.
- Saving chats.Se salvează conversaţiile.
- Network dialogDialogul despre reţea
- opt-in to share feedback/conversationsacceptă partajarea (share) de comentarii/conversaţii
- Home viewSecţiunea de Început
- Home view of applicationSecţiunea de Început a programului
- HomePrima<br>pagină
- Chat viewSecţiunea conversaţiilor
- Chat view to interact with modelsSecţiunea de chat pentru interacţiune cu modele
- ChatsConversaţii
-
- ModelsModele
- Models view for installed modelsSecţiunea modelelor instalate
-
- LocalDocsLocalDocs
- LocalDocs view to configure and use local docsSecţiunea LocalDocs de configurare şi folosire a Documentelor Locale
-
- SettingsConfigurare
- Settings view for application configurationSecţiunea de configurare a programului
- The datalake is enabledDataLake: ACTIV
- Using a network modelSe foloseşte un model pe reţea
- Server mode is enabledModul Server: ACTIV
- Installed modelsModele instalate
- View of installed modelsSecţiunea modelelor instalate
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index 422e7940..7f675e62 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -5,73 +5,61 @@
AddCollectionView
- ← Existing Collections← 存在集合
- Add Document Collection添加文档集合
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.添加一个包含纯文本文件、PDF或Markdown的文件夹。在“设置”中配置其他扩展。
- Please choose a directory请选择一个目录
- Name名称
- Collection name...集合名称
- Name of the collection to add (Required)集合名称 (必须)
- Folder目录
- Folder path...目录地址
- Folder path to documents (Required)文档的目录地址(必须)
- Browse查看
- Create Collection创建集合
@@ -80,25 +68,21 @@
AddModelView
- ← Existing Models← 存在的模型
- Explore Models发现模型
- Discover and download models by keyword search...通过关键词查找并下载模型 ...
- Text field for discovering and filtering downloadable models用于发现和筛选可下载模型的文本字段
@@ -108,37 +92,31 @@
- Initiate model discovery and filtering启动模型发现和过滤
- Triggers discovery and filtering of models触发模型的发现和筛选
- Default默认
- Likes喜欢
- Downloads下载
- Recent近期
@@ -148,13 +126,11 @@
- Asc升序
- Desc倒序
@@ -164,7 +140,6 @@
- None无
@@ -178,175 +153,144 @@
- Searching · %1搜索中 · %1
- Sort by: %1排序: %1
- Sort dir: %1排序目录: %1
- Limit: %1数量: %1
- Network error: could not retrieve %1网络错误:无法检索 %1
-
- Busy indicator繁忙程度
- Displayed when the models request is ongoing在模型请求进行中时显示
- Model file模型文件
- Model file to be downloaded待下载模型
- Description描述
- File description文件描述
- Cancel取消
- Resume继续
- Download下载
- Stop/restart/start the download停止/重启/开始下载
- Remove删除
- Remove model from filesystem从系统中删除模型
-
- Install安装
- Install online model安装在线模型
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">错误</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">警告: 你的设备硬件不推荐 ,模型需要的内存 (%1 GB)比你的系统还要多 (%2).</strong></font>
- ERROR: $API_KEY is empty.错误:$API_KEY 为空
- ERROR: $BASE_URL is empty.错误:$BASE_URL 为空
- enter $BASE_URL输入 $BASE_URL
- ERROR: $MODEL_NAME is empty.错误:$MODEL_NAME为空
- enter $MODEL_NAME输入:$MODEL_NAME
- %1 GB%1 GB
-
- ??
@@ -356,7 +300,6 @@
- Describes an error that occurred when downloading描述下载过程中发生的错误
@@ -374,37 +317,31 @@
- Error for incompatible hardware硬件不兼容的错误
- Download progressBar下载进度
- Shows the progress made in the download显示下载进度
- Download speed下载速度
- Download speed in bytes/kilobytes/megabytes per second下载速度 b/kb/mb /s
- Calculating...计算中
@@ -413,34 +350,26 @@
-
-
-
- Whether the file hash is being calculated是否正在计算文件哈希
- Displayed when the file hash is being calculated在计算文件哈希时显示
- enter $API_KEY输入$API_KEY
- File size文件大小
- RAM requiredRAM 需要
@@ -450,19 +379,16 @@
- Parameters参数
- Quant量化
- Type类型
@@ -471,25 +397,21 @@
ApplicationSettings
- Application应用
- Network dialog网络对话
- opt-in to share feedback/conversations选择加入以共享反馈/对话
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -504,103 +426,86 @@
- Error dialog错误对话
- Application Settings应用设置
- General通用设置
- Theme主题
- The application color scheme.应用的主题颜色
- Dark深色
- Light亮色
- LegacyDarkLegacyDark
- Font Size字体大小
- The size of text in the application.应用中的文本大小。
- Small小
- Medium中
- Large大
- Language and Locale语言和本地化
- The language and locale you wish to use.你想使用的语言
- System Locale系统语言
- Device设备
@@ -610,165 +515,137 @@
- The compute device used for text generation.设备用于文本生成
-
- Application default程序默认
- Default Model默认模型
- The preferred model for new chats. Also used as the local server fallback.新聊天的首选模式。也用作本地服务器回退。
- Suggestion Mode建议模式
- Generate suggested follow-up questions at the end of responses.在答复结束时生成建议的后续问题。
- When chatting with LocalDocs本地文档检索
- Whenever possible只要有可能
- Never从不
- Download Path下载目录
- Where to store local models and the LocalDocs database.本地模型和本地文档数据库存储目录
- Browse查看
- Choose where to save model files模型下载目录
- Enable Datalake开启数据湖
- Send chats and feedback to the GPT4All Open-Source Datalake.发送对话和反馈给GPT4All 的开源数据湖。
- Advanced高级
- CPU ThreadsCPU线程
- The number of CPU threads used for inference and embedding.用于推理和嵌入的CPU线程数
- Save Chat Context保存对话上下文
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.保存模型's 状态以提供更快加载速度. 警告: 需用 ~2GB 每个对话.
- Enable Local Server开启本地服务
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.将OpenAI兼容服务器暴露给本地主机。警告:导致资源使用量增加。
- API Server PortAPI 服务端口
- The port to use for the local server. Requires restart.使用本地服务的端口,需要重启
- Check For Updates检查更新
- Manually check for an update to GPT4All.手动检查更新
- Updates更新
@@ -812,73 +689,61 @@
ChatDrawer
- Drawer抽屉
- Main navigation drawer导航
- + New Chat+ 新对话
- Create a new chat新对话
- Select the current chat or edit the chat when in edit mode选择当前的聊天或在编辑模式下编辑聊天
- Edit chat name修改对话名称
- Save chat name保存对话名称
- Delete chat删除对话
- Confirm chat deletion确认删除对话
- Cancel chat deletion取消删除对话
- List of chats对话列表
- List of chats in the drawer dialog对话框中的聊天列表
@@ -928,31 +793,26 @@
- <h3>Warning</h3><p>%1</p><h3>警告</h3><p>%1</p>
- Switch model dialog切换模型对话
- Warn the user if they switch models, then context will be erased如果用户切换模型,则警告用户,然后上下文将被删除
- Conversation copied to clipboard.复制对话到剪切板
- Code copied to clipboard.复制代码到剪切板
@@ -962,61 +822,51 @@
- Chat panel对话面板
- Chat panel with options对话面板选项
- Reload the currently loaded model重载当前模型
- Eject the currently loaded model弹出当前加载的模型
- No model installed.没有安装模型
- Model loading error.模型加载错误
- Waiting for model...稍等片刻
- Switching context...切换上下文
- Choose a model...选择模型
- Not found: %1没找到: %1
@@ -1030,27 +880,22 @@
- The top item is the current model当前模型的最佳选项
-
- LocalDocs本地文档
- Add documents添加文档
- add collections of documents to the chat将文档集合添加到聊天中
@@ -1064,62 +909,52 @@
- Load the default model载入默认模型
- Loads the default model which can be changed in settings加载默认模型,可以在设置中更改
- No Model Installed没有下载模型
- GPT4All requires that you install at least one
model to get startedGPT4All要求您至少安装一个模型才能开始
- Install a Model下载模型
- Shows the add model view查看添加的模型
- Conversation with the model使用此模型对话
- prompt / response pairs from the conversation对话中的提示/响应对
- GPT4AllGPT4All
- You您
@@ -1137,7 +972,6 @@ model to get started
- response stopped ...响应停止...
@@ -1151,209 +985,175 @@ model to get started
- processing ...处理中
- generating response ...响应中...
- generating questions ...生成响应
-
- Copy复制
- Copy Message复制内容
- Disable markdown不允许markdown
- Enable markdown允许markdown
- Thumbs up点赞
- Gives a thumbs up to the response点赞响应
- Thumbs down点踩
- Opens thumbs down dialog打开点踩对话框
-
-
-
- %1 Sources
- %1 资源
-
- Suggested follow-ups建议的后续行动
- Erase and reset chat session擦除并重置聊天会话
- Copy chat session to clipboard复制对话到剪切板
- Redo last chat response重新生成上个响应
- Stop generating停止生成
- Stop the current response generation停止当前响应
- Reloads the model重载模型
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>加载模型时遇到错误:</h3><br><i><%1></i><br><br>模型加载失败可能由多种原因引起,但最常见的原因包括文件格式错误、下载不完整或损坏、文件类型错误、系统 RAM 不足或模型类型不兼容。以下是一些解决问题的建议:<br><ul><li>确保模型文件具有兼容的格式和类型<li>检查下载文件夹中的模型文件是否完整<li>您可以在设置对话框中找到下载文件夹<li>如果您已侧载模型,请通过检查 md5sum 确保文件未损坏<li>在我们的 <a href="https://docs.gpt4all.io/">文档</a> 中了解有关 gui 支持哪些模型的更多信息<li>查看我们的 <a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助
-
- Reload · %1重载 · %1
- Loading · %1载入中 · %1
- Load · %1 (default) →载入 · %1 (默认) →
- restoring from text ...从文本恢复中
- retrieving localdocs: %1 ...检索本地文档: %1 ...
- searching localdocs: %1 ...搜索本地文档: %1 ...
+        <message numerus="yes">
+            <source>%n Source(s)</source>
+            <translation>
+                <numerusform>%n 资源</numerusform>
+            </translation>
+        </message>
- Send a message...发送消息...
- Load a model to continue...选择模型并继续
- Send messages/prompts to the model发送消息/提示词给模型
- Cut剪切
- Paste粘贴
- Select All全选
- Send message发送消息
- Sends the message/prompt contained in textfield to the model将文本框中包含的消息/提示发送给模型
@@ -1362,13 +1162,11 @@ model to get started
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete results提示: 索引时搜索集合可能会返回不完整的结果
- %n file(s)
@@ -1376,7 +1174,6 @@ model to get started
- %n word(s)
@@ -1384,19 +1181,16 @@ model to get started
- Updating更新中
- + Add Docs+ 添加文档
- Select a collection to make it available to the chat model.选择一个集合,使其可用于聊天模型。
@@ -1443,109 +1237,91 @@ model to get started
HomeView
- Welcome to GPT4All欢迎
- The privacy-first LLM chat application隐私至上的大模型咨询应用程序
- Start chatting开始聊天
- Start Chatting开始聊天
- Chat with any LLM大预言模型聊天
- LocalDocs本地文档
- Chat with your local files本地文件聊天
- Find Models查找模型
- Explore and download models发现并下载模型
- Latest news新闻
- Latest news from GPT4AllGPT4All新闻
- Release Notes发布日志
- Documentation文档
- DiscordDiscord
- X (Twitter)X (Twitter)
- GithubGithub
- GPT4All.ioGPT4All.io
- Subscribe to Newsletter订阅信息
@@ -1554,139 +1330,116 @@ model to get started
LocalDocsSettings
- LocalDocs本地文档
- LocalDocs Settings本地文档设置
- Indexing索引中
- Allowed File Extensions添加文档扩展名
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.逗号分隔的列表。LocalDocs 只会尝试处理具有这些扩展名的文件
- EmbeddingEmbedding
- Use Nomic Embed API使用 Nomic 内部 API
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.使用快速的 Nomic API 嵌入文档,而不是使用私有本地模型
- Nomic API KeyNomic API Key
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Nomic Embed 使用的 API 密钥。请访问官网获取,需要重启。
- Embeddings DeviceEmbeddings 设备
- The compute device used for embeddings. Requires restart.技术设备用于embeddings. 需要重启.
- Application default程序默认
- Display显示
- Show Sources查看源码
- Display the sources used for each response.显示每个响应所使用的源。
- Advanced高级
- Warning: Advanced usage only.提示: 仅限高级使用。
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.值过大可能会导致 localdocs 失败、响应速度极慢或根本无法响应。粗略地说,{N 个字符 x N 个片段} 被添加到模型的上下文窗口中。更多信息请见<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此处</a>。
- Document snippet size (characters)文档粘贴大小 (字符)
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.每个文档片段的字符数。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。
- Max document snippets per prompt每个提示的最大文档片段数
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.检索到的文档片段最多添加到提示上下文中的前 N 个最佳匹配项。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。
@@ -1695,19 +1448,16 @@ model to get started
LocalDocsView
- LocalDocs本地文档
- Chat with your local files和本地文件对话
- + Add Collection+ 添加集合
@@ -1723,121 +1473,101 @@ model to get started
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>错误:无法访问 LocalDocs 数据库或该数据库无效。</h3><br><i>注意:尝试以下任何建议的修复方法后,您将需要重新启动。</i><br><ul><li>确保设置为<b>下载路径</b>的文件夹存在于文件系统中。</li><li>检查<b>下载路径</b>的所有权以及读写权限。</li><li>如果有<b>localdocs_v2.db</b>文件,请检查其所有权和读/写权限。</li></ul><br>如果问题仍然存在,并且存在任何“localdocs_v*.db”文件,作为最后的手段,您可以<br>尝试备份并删除它们。但是,您必须重新创建您的收藏。
- No Collections Installed没有集合
- Install a collection of local documents to get started using this feature安装一组本地文档以开始使用此功能
- + Add Doc Collection+ 添加文档集合
- Shows the add model view查看添加的模型
- Indexing progressBar索引进度
- Shows the progress made in the indexing显示索引进度
- ERROR错误
- INDEXING索引
- EMBEDDINGEMBEDDING
- REQUIRES UPDATE需更新
- READY准备
- INSTALLING安装中
- Indexing in progress构建索引中
- Embedding in progressEmbedding进度
- This collection requires an update after version change此集合需要在版本更改后进行更新
- Automatically reindexes upon changes to the folder在文件夹变动时自动重新索引
- Installation in progress安装进度
- %%
- %n file(s)%n 文件
@@ -1845,7 +1575,6 @@ model to get started
- %n word(s)%n 词
@@ -1853,31 +1582,26 @@ model to get started
- Remove删除
- Rebuild重新构建
- Reindex this folder from scratch. This is slow and usually not needed.从头开始重新索引此文件夹。这个过程较慢,通常情况下不需要。
- Update更新
- Update the collection to the new version. This is a slow operation.将集合更新为新版本。这是一个缓慢的操作。
@@ -1974,67 +1698,56 @@ model to get started
ModelSettings
- Model模型
- Model Settings模型设置
- Clone克隆
- Remove删除
- Name名称
- Model File模型文件
- System Prompt系统提示词
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.每次对话开始时的前缀
- Prompt Template提示词模版
- The template that wraps every prompt.包装每个提示的模板
- Must contain the string "%1" to be replaced with the user's input.必须包含字符串 "%1" 替换为用户的's 输入.
@@ -2045,43 +1758,36 @@ optional image
- Chat Name Prompt聊天名称提示
- Prompt used to automatically generate chat names.用于自动生成聊天名称的提示。
- Suggested FollowUp Prompt建议的后续提示
- Prompt used to generate suggested follow-up questions.用于生成建议的后续问题的提示。
- Context Length上下文长度
- Number of input and output tokens the model sees.模型看到的输入和输出令牌的数量。
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -2091,19 +1797,16 @@ NOTE: Does not take effect until you reload the model.
- Temperature温度
- Randomness of model output. Higher -> more variation.模型输出的随机性。更高->更多的变化。
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.温度增加了选择不太可能的token的机会。
@@ -2111,19 +1814,16 @@ NOTE: Higher temperature gives more creative but less predictable outputs.
- Top-PTop-P
- Nucleus Sampling factor. Lower -> more predicatable.核子取样系数。较低->更具可预测性。
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.只能选择总概率高达top_p的最有可能的令牌。
@@ -2131,67 +1831,56 @@ NOTE: Prevents choosing highly unlikely tokens.
- Min-PMin-P
- Minimum token probability. Higher -> more predictable.最小令牌概率。更高 -> 更可预测。
- Sets the minimum relative probability for a token to be considered.设置被考虑的标记的最小相对概率。
- Top-KTop-K
- Size of selection pool for tokens.令牌选择池的大小。
- Only the top K most likely tokens will be chosen from.仅从最可能的前 K 个标记中选择
- Max Length最大长度
- Maximum response length, in tokens.最大响应长度(以令牌为单位)
- Prompt Batch Size提示词大小
- The batch size used for prompt processing.用于快速处理的批量大小。
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.一次要处理的提示令牌数量。
@@ -2199,43 +1888,36 @@ NOTE: Higher values can speed up reading prompts but will use more RAM.
- Repeat Penalty重复惩罚
- Repetition penalty factor. Set to 1 to disable.重复处罚系数。设置为1可禁用。
- Repeat Penalty Tokens重复惩罚数
- Number of previous tokens used for penalty.用于惩罚的先前令牌数量。
- GPU LayersGPU 层
- Number of model layers to load into VRAM.要加载到VRAM中的模型层数。
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2248,161 +1930,133 @@ NOTE: Does not take effect until you reload the model.
ModelsView
- No Models Installed无模型
- Install a model to get started using GPT4All安装模型并开始使用
-
- + Add Model+ 添加模型
- Shows the add model view查看增加到模型
- Installed Models已安装的模型
- Locally installed chat models本地安装的聊天
- Model file模型文件
- Model file to be downloaded待下载的模型
- Description描述
- File description文件描述
- Cancel取消
- Resume继续
- Stop/restart/start the download停止/重启/开始下载
- Remove删除
- Remove model from filesystem从系统中删除模型
-
- Install按照
- Install online model安装在线模型
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Error</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
- ERROR: $API_KEY is empty.错误:$API_KEY 为空
- ERROR: $BASE_URL is empty.错误:$BASE_URL 为空
- enter $BASE_URL输入 $BASE_URL
- ERROR: $MODEL_NAME is empty.错误:$MODEL_NAME为空
- enter $MODEL_NAME输入:$MODEL_NAME
- %1 GB%1 GB
- ??
@@ -2412,7 +2066,6 @@ NOTE: Does not take effect until you reload the model.
- Describes an error that occurred when downloading描述下载时发生的错误
@@ -2430,37 +2083,31 @@ NOTE: Does not take effect until you reload the model.
- Error for incompatible hardware硬件不兼容的错误
- Download progressBar下载进度
- Shows the progress made in the download显示下载进度
- Download speed下载速度
- Download speed in bytes/kilobytes/megabytes per second下载速度 b/kb/mb /s
- Calculating...计算中...
@@ -2469,40 +2116,31 @@ NOTE: Does not take effect until you reload the model.
-
-
-
- Whether the file hash is being calculated是否正在计算文件哈希
- Busy indicator繁忙程度
- Displayed when the file hash is being calculated在计算文件哈希时显示
- enter $API_KEY输入 $API_KEY
- File size文件大小
- RAM required需要 RAM
@@ -2512,19 +2150,16 @@ NOTE: Does not take effect until you reload the model.
- Parameters参数
- Quant量化
- Type类型
@@ -2533,13 +2168,11 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
- Fancy link精选链接
- A stylized link样式化链接
@@ -2548,7 +2181,6 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
- Please choose a directory请选择目录
@@ -2557,13 +2189,11 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
- Restore Defaults恢复初始化
- Restores settings dialog to a default state将设置对话框恢复为默认状态
@@ -2572,13 +2202,11 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.向GPT4All开源数据湖贡献数据
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2592,55 +2220,46 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- Terms for opt-in选择加入的条款
- Describes what will happen when you opt-in描述选择加入时会发生的情况
- Please provide a name for attribution (optional)填写名称属性 (可选)
- Attribution (optional)属性 (可选)
- Provide attribution提供属性
- Enable启用
- Enable opt-in启用选择加入
- Cancel取消
- Cancel opt-in取消加入
@@ -2649,19 +2268,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
NewVersionDialog
- New version is available新版本可选
- Update更新
- Update to new version更新到新版本
@@ -2670,19 +2286,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
PopupDialog
- Reveals a shortlived help balloon显示一个短暂的帮助气球
- Busy indicator繁忙程度
- Displayed when the popup is showing busy在弹出窗口显示忙碌时显示
@@ -2699,32 +2312,26 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
-
- Settings设置
- Contains various application settings包含各种应用程序设置
- Application应用
- Model模型
- LocalDocs本地文档
@@ -2733,7 +2340,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
StartupDialog
- Welcome!欢迎!
@@ -2749,7 +2355,6 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- ### Release notes
%1### Contributors
%2
@@ -2759,19 +2364,16 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
- Release notes发布日志
- Release notes for this version本版本发布日志
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2797,87 +2399,70 @@ model release that uses your data!
- Terms for opt-in选择加入选项
- Describes what will happen when you opt-in描述选择加入时会发生的情况
-
- Opt-in for anonymous usage statistics允许选择加入匿名使用统计数据
-
- Yes是
- Allow opt-in for anonymous usage statistics允许选择加入匿名使用统计数据
-
- No否
- Opt-out for anonymous usage statistics退出匿名使用统计数据
- Allow opt-out for anonymous usage statistics允许选择退出匿名使用统计数据
-
- Opt-in for network加入网络
- Allow opt-in for network允许选择加入网络
- Allow opt-in anonymous sharing of chats to the GPT4All Datalake允许选择加入匿名共享聊天至 GPT4All 数据湖
- Opt-out for network取消网络
- Allow opt-out anonymous sharing of chats to the GPT4All Datalake允许选择退出将聊天匿名共享至 GPT4All 数据湖
@@ -2886,27 +2471,22 @@ model release that uses your data!
SwitchModelDialog
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>警告:</b> 更改模型将删除当前对话。您想继续吗?
- Continue继续
- Continue with model loading模型载入时继续
-
- Cancel取消
@@ -2915,37 +2495,31 @@ model release that uses your data!
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)请编辑下方文本以提供更好的回复。(可选)
- Please provide a better response...提供更好回答...
- Submit提交
- Submits the user's response提交用户响应
- Cancel取消
- Closes the response dialog关闭的对话
@@ -3010,151 +2584,124 @@ model release that uses your data!
- GPT4All v%1GPT4All v%1
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>启动时遇到错误:</h3><br><i>“检测到不兼容的硬件。”</i><br><br>很遗憾,您的 CPU 不满足运行此程序的最低要求。特别是,它不支持此程序成功运行现代大型语言模型所需的 AVX 内在函数。目前唯一的解决方案是将您的硬件升级到更现代的 CPU。<br><br>有关更多信息,请参阅此处:<a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions>>https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>启动时遇到错误:</h3><br><i>“无法访问设置文件。”</i><br><br>不幸的是,某些东西阻止程序访问设置文件。这可能是由于设置文件所在的本地应用程序配置目录中的权限不正确造成的。请查看我们的<a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助。
- Connection to datalake failed.链接数据湖失败
- Saving chats.保存对话
- Network dialog网络对话
- opt-in to share feedback/conversations选择加入以共享反馈/对话
- Home view主页
- Home view of application主页
- Home主页
- Chat view对话视图
- Chat view to interact with models聊天视图可与模型互动
- Chats对话
-
- Models模型
- Models view for installed models已安装模型的页面
-
- LocalDocs本地文档
- LocalDocs view to configure and use local docsLocalDocs视图可配置和使用本地文档
-
- Settings设置
- Settings view for application configuration设置页面
- The datalake is enabled数据湖已开启
- Using a network model使用联网模型
- Server mode is enabled服务器模式已开
- Installed models安装模型
- View of installed models查看已安装模型
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index abdbb611..3a4fc4eb 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -5,73 +5,61 @@
AddCollectionView
- ← Existing Collections← 現有收藏
- Add Document Collection新增收藏文件
- Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.新增一個含有純文字檔案、PDF 與 Markdown 文件的資料夾。可在設定上增加文件副檔名。
- Please choose a directory請選擇一個資料夾
- Name名稱
- Collection name...收藏名稱......
- Name of the collection to add (Required)新增的收藏名稱(必填)
- Folder資料夾
- Folder path...資料夾路徑......
- Folder path to documents (Required)文件所屬的資料夾路徑(必填)
- Browse瀏覽
- Create Collection建立收藏
@@ -80,266 +68,220 @@
AddModelView
- ← Existing Models← 現有模型
- Explore Models探索模型
- Discover and download models by keyword search...透過關鍵字搜尋探索並下載模型......
- Text field for discovering and filtering downloadable models用於探索與過濾可下載模型的文字字段
- Searching · %1搜尋 · %1
- Initiate model discovery and filtering探索與過濾模型
- Triggers discovery and filtering of models觸發探索與過濾模型
- Default預設
- Likes讚
- Downloads下載次數
- Recent最新
- Sort by: %1排序依據:%1
- Asc升序
- Desc降序
- Sort dir: %1排序順序:%1
- None無
- Limit: %1上限:%1
- Network error: could not retrieve %1網路錯誤:無法取得 %1
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">錯誤</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
- %1 GB%1 GB
-
- ??
-
- Busy indicator參考自 https://terms.naer.edu.tw忙線指示器
- Displayed when the models request is ongoing當模型請求正在進行時顯示
- Model file模型檔案
- Model file to be downloaded即將下載的模型檔案
- Description描述
- File description檔案描述
- Cancel取消
- Resume恢復
- Download下載
- Stop/restart/start the download停止/重啟/開始下載
- Remove移除
- Remove model from filesystem從檔案系統移除模型
-
- Install安裝
- Install online model安裝線上模型
- Describes an error that occurred when downloading解釋下載時發生的錯誤
- Error for incompatible hardware錯誤,不相容的硬體
- Download progressBar下載進度條
- Shows the progress made in the download顯示下載進度
- Download speed下載速度
- Download speed in bytes/kilobytes/megabytes per second下載速度每秒 bytes/kilobytes/megabytes
- Calculating...計算中......
@@ -348,82 +290,66 @@
-
-
-
- Whether the file hash is being calculated是否正在計算檔案雜湊
- Displayed when the file hash is being calculated計算檔案雜湊值時顯示
- ERROR: $API_KEY is empty.錯誤:$API_KEY 未填寫。
- enter $API_KEY請輸入 $API_KEY
- ERROR: $BASE_URL is empty.錯誤:$BASE_URL 未填寫。
- enter $BASE_URL請輸入 $BASE_URL
- ERROR: $MODEL_NAME is empty.錯誤:$MODEL_NAME 未填寫。
- enter $MODEL_NAME請輸入 $MODEL_NAME
- File size檔案大小
- RAM required所需的記憶體
- Parameters參數
- Quant量化
- Type類型
@@ -432,25 +358,21 @@
ApplicationSettings
- Application應用程式
- Network dialog資料湖泊計畫對話視窗
- opt-in to share feedback/conversations分享回饋/對話計畫
- ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -466,267 +388,222 @@
- Error dialog錯誤對話視窗
- Application Settings應用程式設定
- General一般
- Theme主題
- The application color scheme.應用程式的配色方案。
- Dark暗色
- Light亮色
- LegacyDark傳統暗色
- Font Size字體大小
- The size of text in the application.應用程式中的字體大小。
- Small小
- Medium中
- Large大
- Language and Locale語言與區域設定
- The language and locale you wish to use.您希望使用的語言與區域設定。
- System Locale系統語系
- Device裝置
- Default Model預設模型
- The preferred model for new chats. Also used as the local server fallback.用於新交談的預設模型。也用於作為本機伺服器後援使用。
- Suggestion Mode建議模式
- When chatting with LocalDocs當使用「我的文件」交談時
- Whenever possible視情況允許
- Never永不
- Generate suggested follow-up questions at the end of responses.在回覆末尾生成後續建議的問題。
- The compute device used for text generation.用於生成文字的計算裝置。
-
- Application default應用程式預設值
- Download Path下載路徑
- Where to store local models and the LocalDocs database.儲存本機模型與「我的文件」資料庫的位置。
- Browse瀏覽
- Choose where to save model files選擇儲存模型檔案的位置
- Enable Datalake啟用資料湖泊
- Send chats and feedback to the GPT4All Open-Source Datalake.將交談與回饋傳送到 GPT4All 開放原始碼資料湖泊。
- Advanced進階
- CPU Threads中央處理器(CPU)線程
- The number of CPU threads used for inference and embedding.用於推理與嵌入的中央處理器線程數。
- Save Chat Context儲存交談語境
- Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.將交談模型的狀態儲存到磁碟以加快載入速度。警告:每次交談使用約 2GB。
- Enable Local Server啟用本機伺服器
- Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.將 OpenAI 相容伺服器公開給本機。警告:導致資源使用增加。
- API Server PortAPI 伺服器埠口
- The port to use for the local server. Requires restart.用於本機伺服器的埠口。需要重新啟動。
- Check For Updates檢查更新
- Manually check for an update to GPT4All.手動檢查 GPT4All 的更新。
- Updates更新
@@ -762,73 +639,61 @@
ChatDrawer
- Drawer側邊欄
- Main navigation drawer主要導航側邊欄
- + New Chat+ 新的交談
- Create a new chat建立新的交談
- Select the current chat or edit the chat when in edit mode選擇目前交談或在編輯模式下編輯交談
- Edit chat name修改對話名稱
- Save chat name儲存對話名稱
- Delete chat刪除對話
- Confirm chat deletion確定刪除對話
- Cancel chat deletion取消刪除對話
- List of chats交談列表
- List of chats in the drawer dialog側邊欄對話視窗的交談列表
@@ -870,161 +735,133 @@
ChatView
- <h3>Warning</h3><p>%1</p><h3>警告</h3><p>%1</p>
- Switch model dialog切換模型對話視窗
- Warn the user if they switch models, then context will be erased警告使用者如果切換模型,則語境將被刪除
- Conversation copied to clipboard.對話已複製到剪貼簿。
- Code copied to clipboard.程式碼已複製到剪貼簿。
- Chat panel交談面板
- Chat panel with options具有選項的交談面板
- Reload the currently loaded model重新載入目前已載入的模型
- Eject the currently loaded model彈出目前載入的模型
- No model installed.沒有已安裝的模型。
- Model loading error.模型載入時發生錯誤。
- Waiting for model...等待模型中......
- Switching context...切換語境中......
- Choose a model...選擇一個模型......
- Not found: %1不存在:%1
-
- Reload · %1重新載入 · %1
- Loading · %1載入中 · %1
- Load · %1 (default) →載入 · %1 (預設) →
- The top item is the current model最上面的那項是目前使用的模型
-
- LocalDocs我的文件
- Add documents新增文件
- add collections of documents to the chat將文件集合新增至交談中
- Load the default model載入預設模型
- Loads the default model which can be changed in settings預設模型可於設定中變更
- No Model Installed沒有已安裝的模型
- GPT4All requires that you install at least one
model to get startedGPT4All 要求您至少安裝一個
@@ -1032,231 +869,194 @@ model to get started
- Install a Model安裝一個模型
- Shows the add model view顯示新增模型視圖
- Conversation with the model與模型對話
- prompt / response pairs from the conversation對話中的提示詞 / 回覆組合
- GPT4AllGPT4All
- You您
- response stopped ...回覆停止......
- retrieving localdocs: %1 ...檢索本機文件中:%1 ......
- searching localdocs: %1 ...搜尋本機文件中:%1 ......
- processing ...處理中......
- generating response ...生成回覆......
- generating questions ...生成問題......
-
- Copy複製
- Copy Message複製訊息
- Disable markdown停用 Markdown
- Enable markdown啟用 Markdown
- Thumbs up讚
- Gives a thumbs up to the response對這則回覆比讚
- Thumbs down倒讚
- Opens thumbs down dialog開啟倒讚對話視窗
-
-
-
- %1 Sources
- %1 來源
-
- Suggested follow-ups後續建議
- Erase and reset chat session刪除並重置交談會話
- Copy chat session to clipboard複製交談會議到剪貼簿
- Redo last chat response復原上一個交談回覆
- Stop generating停止生成
- Stop the current response generation停止當前回覆生成
- Reloads the model重新載入模型
- <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>載入模型時發生錯誤:</h3><br><i>"%1"</i><br><br>導致模型載入失敗的原因可能有很多種,但絕大多數的原因是檔案格式損毀、下載的檔案不完整、檔案類型錯誤、系統RAM空間不足或不相容的模型類型。這裡有些建議可供疑難排解:<br><ul><li>確保使用的模型是相容的格式與類型<li>檢查位於下載資料夾的檔案是否完整<li>您可以從設定中找到您所設定的「下載資料夾路徑」<li>如果您有側載模型,請利用 md5sum 等工具確保您的檔案是完整的<li>想了解更多關於我們所支援的模型資訊,煩請詳閱<a href="https://docs.gpt4all.io/">本文件</a>。<li>歡迎洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求幫助
- restoring from text ...從文字中恢復......
+        <message numerus="yes">
+            <source>%n Source(s)</source>
+            <translation>
+                <numerusform>%n 來源</numerusform>
+            </translation>
+        </message>
- Send a message...傳送一則訊息......
- Load a model to continue...載入模型以繼續......
- Send messages/prompts to the model向模型傳送訊息/提示詞
- Cut剪下
- Paste貼上
- Select All全選
- Send message傳送訊息
- Sends the message/prompt contained in textfield to the model將文字欄位中包含的訊息/提示詞傳送到模型
@@ -1265,13 +1065,11 @@ model to get started
CollectionsDrawer
- Warning: searching collections while indexing can return incomplete results警告:在索引時搜尋收藏可能會傳回不完整的結果
- %n file(s)%n 個檔案
@@ -1279,7 +1077,6 @@ model to get started
- %n word(s)%n 個字
@@ -1287,19 +1084,16 @@ model to get started
- Updating更新中
- + Add Docs+ 新增文件
- Select a collection to make it available to the chat model.選擇一個收藏以使其可供交談模型使用。
@@ -1346,109 +1140,91 @@ model to get started
HomeView
- Welcome to GPT4All歡迎使用 GPT4All
- The privacy-first LLM chat application隱私第一的大型語言模型交談應用程式
- Start chatting開始交談
- Start Chatting開始交談
- Chat with any LLM與任何大型語言模型交談
- LocalDocs我的文件
- Chat with your local files使用「我的文件」來交談
- Find Models搜尋模型
- Explore and download models瀏覽與下載模型
- Latest news最新消息
- Latest news from GPT4All從 GPT4All 來的最新消息
- Release Notes版本資訊
- Documentation文件
- DiscordDiscord
- X (Twitter)X (Twitter)
- GithubGithub
- GPT4All.ioGPT4All.io
- Subscribe to Newsletter訂閱電子報
@@ -1457,139 +1233,116 @@ model to get started
LocalDocsSettings
- LocalDocs我的文件
- LocalDocs Settings我的文件設定
- Indexing索引中
- Allowed File Extensions允許的副檔名
- Comma-separated list. LocalDocs will only attempt to process files with these extensions.以逗號分隔的列表。「我的文件」將僅嘗試處理具有這些副檔名的檔案。
- Embedding嵌入
- Use Nomic Embed API使用 Nomic 嵌入 API
- Embed documents using the fast Nomic API instead of a private local model. Requires restart.使用快速的 Nomic API 而不是本機私有模型嵌入文件。需要重新啟動。
- Nomic API KeyNomic API 金鑰
- API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.用於 Nomic Embed 的 API 金鑰。從 Atlas <a href="https://atlas.nomic.ai/cli-login">API 金鑰頁面</a>取得一個。需要重新啟動。
- Embeddings Device嵌入裝置
- The compute device used for embeddings. Requires restart.用於嵌入的計算裝置。需要重新啟動。
- Application default應用程式預設值
- Display顯示
- Show Sources查看來源
- Display the sources used for each response.顯示每則回覆所使用的來源。
- Advanced進階
- Warning: Advanced usage only.警告:僅限進階使用。
- Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.設定太大的數值可能會導致「我的文件」處理失敗、反應速度極慢或根本無法回覆。簡單地說,這會將 {N 個字元 x N 個片段} 被添加到模型的語境視窗中。更多資訊<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此處</a>。
- Document snippet size (characters)文件片段大小(字元)
- Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.每個文件片段的字元數。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。
- Max document snippets per prompt每個提示詞的最大文件片段
- Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.新增至提示詞語境中的檢索到的文件片段的最大 N 個符合的項目。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。
@@ -1598,139 +1351,116 @@ model to get started
LocalDocsView
- LocalDocs我的文件
- Chat with your local files使用「我的文件」來交談
- + Add Collection+ 新增收藏
- <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>錯誤:「我的文件」資料庫已無法存取或已損壞。</h3><br><i>提醒:執行完以下任何疑難排解的動作後,請務必重新啟動應用程式。</i><br><ul><li>請確保<b>「下載路徑」</b>所指向的資料夾確實存在於檔案系統當中。</li><li>檢查 <b>「下載路徑」</b>所指向的資料夾,確保其「擁有者」為您本身,以及確保您對該資料夾擁有讀寫權限。</li><li>如果該資料夾內存在一份名為 <b>localdocs_v2.db</b> 的檔案,請同時確保您對其擁有讀寫權限。</li></ul><br>如果問題依舊存在,且該資料夾內存在與「localdocs_v*.db」名稱相關的檔案,請嘗試備份並移除它們。<br>雖然這樣一來,您恐怕得著手重建您的收藏,但這將或許能夠解決這份錯誤。
- No Collections Installed沒有已安裝的收藏
- Install a collection of local documents to get started using this feature安裝本機文件收藏以開始使用此功能
- + Add Doc Collection+ 新增文件收藏
- Shows the add model view查看新增的模型視圖
- Indexing progressBar索引進度條
- Shows the progress made in the indexing顯示索引進度
- ERROR錯誤
- INDEXING索引中
- EMBEDDING嵌入中
- REQUIRES UPDATE必須更新
- READY已就緒
- INSTALLING安裝中
- Indexing in progress正在索引
- Embedding in progress正在嵌入
- This collection requires an update after version change該收藏需要在版本變更後更新
- Automatically reindexes upon changes to the folder若資料夾有變動,會自動重新索引
- Installation in progress正在安裝中
- %%
- %n file(s)%n 個檔案
@@ -1738,7 +1468,6 @@ model to get started
- %n word(s)%n 個字
@@ -1746,31 +1475,26 @@ model to get started
- Remove移除
- Rebuild重建
- Reindex this folder from scratch. This is slow and usually not needed.重新索引該資料夾。這將會耗費許多時間並且通常不太需要這樣做。
- Update更新
- Update the collection to the new version. This is a slow operation.更新收藏。這將會耗費許多時間。
@@ -1847,109 +1571,91 @@ model to get started
ModelSettings
- Model模型
- Model Settings模型設定
- Clone複製
- Remove移除
- Name名稱
- Model File模型檔案
- System Prompt系統提示詞
- Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.在每個對話的開頭加上前綴。必須包含適當的構建符元(framing tokens)。
- Prompt Template提示詞模板
- The template that wraps every prompt.包裝每個提示詞的模板。
- Must contain the string "%1" to be replaced with the user's input.必須包含要替換為使用者輸入的字串「%1」。
- Chat Name Prompt交談名稱提示詞
- Prompt used to automatically generate chat names.用於自動生成交談名稱的提示詞。
- Suggested FollowUp Prompt後續建議提示詞
- Prompt used to generate suggested follow-up questions.用於生成後續建議問題的提示詞。
- Context Length語境長度(Context Length)
- Number of input and output tokens the model sees.模型看見的輸入與輸出的符元數量。
- Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1959,19 +1665,16 @@ NOTE: Does not take effect until you reload the model.
- Temperature語境溫度(Temperature)
- Randomness of model output. Higher -> more variation.模型輸出的隨機性。更高 -> 更多變化。
- Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.語境溫度會提高選擇不容易出現的符元機率。
@@ -1979,19 +1682,16 @@ NOTE: Higher temperature gives more creative but less predictable outputs.
- Top-P核心採樣(Top-P)
- Nucleus Sampling factor. Lower -> more predicatable.核心採樣因子。更低 -> 更可預測。
- Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.只選擇總機率約為核心採樣,最有可能性的符元。
@@ -1999,67 +1699,56 @@ NOTE: Prevents choosing highly unlikely tokens.
- Min-P最小符元機率(Min-P)
- Minimum token probability. Higher -> more predictable.最小符元機率。更高 -> 更可預測。
- Sets the minimum relative probability for a token to be considered.設定要考慮的符元的最小相對機率。
- Top-K高頻率採樣機率(Top-K)
- Size of selection pool for tokens.符元選擇池的大小。
- Only the top K most likely tokens will be chosen from.只選擇前 K 個最有可能性的符元。
- Max Length最大長度(Max Length)
- Maximum response length, in tokens.最大響應長度(以符元為單位)。
- Prompt Batch Size提示詞批次大小(Prompt Batch Size)
- The batch size used for prompt processing.用於即時處理的批量大小。
- Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.一次處理的提示詞符元數量。
@@ -2067,43 +1756,36 @@ NOTE: Higher values can speed up reading prompts but will use more RAM.
- Repeat Penalty重複處罰(Repeat Penalty)
- Repetition penalty factor. Set to 1 to disable.重複懲罰因子。設定為 1 以停用。
- Repeat Penalty Tokens重複懲罰符元(Repeat Penalty Tokens)
- Number of previous tokens used for penalty.之前用於懲罰的符元數量。
- GPU Layers圖形處理器負載層(GPU Layers)
- Number of model layers to load into VRAM.要載入到顯示記憶體中的模型層數。
- How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -2116,173 +1798,143 @@ NOTE: Does not take effect until you reload the model.
ModelsView
- No Models Installed沒有已安裝的模型
- Install a model to get started using GPT4All安裝模型以開始使用 GPT4All
-
- + Add Model+ 新增模型
- Shows the add model view顯示新增模型視圖
- Installed Models已安裝的模型
- Locally installed chat models本機已安裝的交談模型
- Model file模型檔案
- Model file to be downloaded即將下載的模型檔案
- Description描述
- File description檔案描述
- Cancel取消
- Resume恢復
- Stop/restart/start the download停止/重啟/開始下載
- Remove移除
- Remove model from filesystem從檔案系統移除模型
-
- Install安裝
- Install online model安裝線上模型
- <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">錯誤</a></strong></font>
- <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
- %1 GB%1 GB
- ??
- Describes an error that occurred when downloading解釋下載時發生的錯誤
- Error for incompatible hardware錯誤,不相容的硬體
- Download progressBar下載進度條
- Shows the progress made in the download顯示下載進度
- Download speed下載速度
- Download speed in bytes/kilobytes/megabytes per second下載速度每秒 bytes/kilobytes/megabytes
- Calculating...計算中......
@@ -2291,89 +1943,72 @@ NOTE: Does not take effect until you reload the model.
-
-
-
- Whether the file hash is being calculated是否正在計算檔案雜湊
- Busy indicator參考自 https://terms.naer.edu.tw忙線指示器
- Displayed when the file hash is being calculated計算檔案雜湊值時顯示
- ERROR: $API_KEY is empty.錯誤:$API_KEY 未填寫。
- enter $API_KEY請輸入 $API_KEY
- ERROR: $BASE_URL is empty.錯誤:$BASE_URL 未填寫。
- enter $BASE_URL請輸入 $BASE_URL
- ERROR: $MODEL_NAME is empty.錯誤:$MODEL_NAME 未填寫。
- enter $MODEL_NAME請輸入 $MODEL_NAME
- File size檔案大小
- RAM required所需的記憶體
- Parameters參數
- Quant量化
- Type類型
@@ -2382,13 +2017,11 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
- Fancy link精緻網址
- A stylized link個性化網址
@@ -2397,7 +2030,6 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
- Please choose a directory請選擇一個資料夾
@@ -2406,13 +2038,11 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
- Restore Defaults恢復預設值
- Restores settings dialog to a default state恢復設定對話視窗到預設狀態
@@ -2421,13 +2051,11 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
- Contribute data to the GPT4All Opensource Datalake.貢獻資料到 GPT4All 的開放原始碼資料湖泊。
- By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2446,55 +2074,46 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
- Terms for opt-in計畫規範
- Describes what will happen when you opt-in解釋當您加入計畫後,會發生什麼事情
- Please provide a name for attribution (optional)請提供署名(非必填)
- Attribution (optional)署名(非必填)
- Provide attribution提供署名
- Enable啟用
- Enable opt-in加入計畫
- Cancel取消
- Cancel opt-in拒絕計畫
@@ -2503,19 +2122,16 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
NewVersionDialog
- New version is available發現新版本
- Update更新
- Update to new version更新版本
@@ -2524,20 +2140,17 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
PopupDialog
- Reveals a shortlived help balloon呼叫提示小幫手
- Busy indicator參考自 https://terms.naer.edu.tw忙線指示器
- Displayed when the popup is showing busy當彈出視窗忙碌時顯示
@@ -2547,32 +2160,26 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
-
- Settings設定
- Contains various application settings內含多種應用程式設定
- Application應用程式
- Model模型
- LocalDocs我的文件
@@ -2581,25 +2188,21 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
StartupDialog
- Welcome!歡迎使用!
- Release notes版本資訊
- Release notes for this version這個版本的版本資訊
- ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2628,43 +2231,34 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
- Terms for opt-in計畫規範
- Describes what will happen when you opt-in解釋當您加入計畫後,會發生什麼事情
-
- Yes是
-
- No否
-
- Opt-in for anonymous usage statistics匿名使用統計計畫
- ### Release notes
%1### Contributors
%2
@@ -2674,51 +2268,42 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
- Allow opt-in for anonymous usage statistics加入匿名使用統計計畫
- Opt-out for anonymous usage statistics退出匿名使用統計計畫
- Allow opt-out for anonymous usage statistics終止並退出匿名使用統計計畫
-
- Opt-in for network資料湖泊計畫
- Allow opt-in for network加入資料湖泊計畫
- Opt-out for network退出資料湖泊計畫
- Allow opt-in anonymous sharing of chats to the GPT4All Datalake開始將交談內容匿名分享到 GPT4All 資料湖泊
- Allow opt-out anonymous sharing of chats to the GPT4All Datalake終止將交談內容匿名分享到 GPT4All 資料湖泊
@@ -2727,27 +2312,22 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
SwitchModelDialog
- <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>警告:</b> 變更模型將會清除目前對話內容。您真的想要繼續嗎?
- Continue繼續
- Continue with model loading繼續載入模型
-
- Cancel取消
@@ -2756,37 +2336,31 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
ThumbsDownDialog
- Please edit the text below to provide a better response. (optional)請編輯以下文字,以提供更好的回覆。(非必填)
- Please provide a better response...請提供一則更好的回覆......
- Submit送出
- Submits the user's response送出使用者的回覆
- Cancel取消
- Closes the response dialog關閉回覆對話視窗
@@ -2795,151 +2369,124 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
main
- GPT4All v%1GPT4All v%1
- <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>啟動時發生錯誤:</h3><br><i>「偵測到不相容的硬體。」</i><br><br>糟糕!您的中央處理器不符合運行所需的最低需求。尤其,它不支援本程式運行現代大型語言模型所需的 AVX 指令集。目前唯一的解決方案,只有更新您的中央處理器及其相關硬體裝置。<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX指令集">AVX 指令集 - 維基百科</a>
- <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>啟動時發生錯誤:</h3><br><i>「無法存取設定檔。」</i><br><br>糟糕!有些東西正在阻止程式存取設定檔。這極為可能是由於設定檔所在的本機應用程式設定資料夾中的權限設定不正確所造成的。煩請洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求協助。
- Connection to datalake failed.連線資料湖泊失敗。
- Saving chats.儲存交談。
- Network dialog資料湖泊計畫對話視窗
- opt-in to share feedback/conversations分享回饋/對話計畫
- Home view首頁視圖
- Home view of application應用程式首頁視圖
- Home首頁
- Chat view查看交談
- Chat view to interact with models模型互動交談視圖
- Chats交談
-
- Models模型
- Models view for installed models已安裝模型的模型視圖
-
- LocalDocs我的文件
- LocalDocs view to configure and use local docs用於設定與使用我的文件的「我的文件」視圖
-
- Settings設定
- Settings view for application configuration應用程式設定視圖
- The datalake is enabled資料湖泊已啟用
- Using a network model使用一個網路模型
- Server mode is enabled伺服器模式已啟用
- Installed models已安裝的模型
- View of installed models已安裝的模型視圖
From aed68492623f44dc407cf32334995bcf171d1273 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 19 Aug 2024 15:51:47 -0400
Subject: [PATCH 46/66] readme: add blog link (#2895)
Signed-off-by: Jared Van Bortel
---
README.md | 78 ++++++++++++++++++++++++++++++++-----------------------
1 file changed, 46 insertions(+), 32 deletions(-)
diff --git a/README.md b/README.md
index f4c525e9..079c7363 100644
--- a/README.md
+++ b/README.md
@@ -1,43 +1,25 @@
GPT4All
-
GPT4All runs large language models (LLMs) privately on everyday desktops & laptops.
No API calls or GPUs required - you can just download the application and get started
-
-https://github.com/nomic-ai/gpt4all/assets/70534565/513a0f15-4964-4109-89e4-4f9a9011f311
-
-
@@ -89,7 +89,7 @@ with model.chat_session():
- Improved user workflow for LocalDocs
- Expanded access to more model architectures
- **October 19th, 2023**: GGUF Support Launches with Support for:
- - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
+ - Mistral 7b base model, an updated model gallery on our website, several new local code models including Rift Coder v1.5
- [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4\_0 and Q4\_1 quantizations in GGUF.
- Offline build support for running old versions of the GPT4All Local LLM Chat Client.
- **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on NVIDIA and AMD GPUs.
diff --git a/gpt4all-bindings/python/docs/gpt4all_help/troubleshooting.md b/gpt4all-bindings/python/docs/gpt4all_help/troubleshooting.md
index 63715a01..ba132616 100644
--- a/gpt4all-bindings/python/docs/gpt4all_help/troubleshooting.md
+++ b/gpt4all-bindings/python/docs/gpt4all_help/troubleshooting.md
@@ -4,7 +4,7 @@
It is possible you are trying to load a model from HuggingFace whose weights are not compatible with our [backend](https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings).
-Try downloading one of the officially supported models mentioned our [website](https://gpt4all.io/). If the problem persists, please share your experience on our [Discord](https://discord.com/channels/1076964370942267462).
+Try downloading one of the officially supported models listed on the main models page in the application. If the problem persists, please share your experience on our [Discord](https://discord.com/channels/1076964370942267462).
## Bad Responses
@@ -24,4 +24,4 @@ Including information in a prompt is not a guarantee that it will be used correc
### LocalDocs Issues
-Occasionally a model - particularly a smaller or overall weaker LLM - may not use the relevant text snippets from the files that were referenced via LocalDocs. If you are seeing this, it can help to use phrases like "in the docs" or "from the provided files" when prompting your model.
\ No newline at end of file
+Occasionally a model - particularly a smaller or overall weaker LLM - may not use the relevant text snippets from the files that were referenced via LocalDocs. If you are seeing this, it can help to use phrases like "in the docs" or "from the provided files" when prompting your model.
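A quick illustration of that phrasing advice, using the Python bindings this repository ships; the model filename below is a placeholder, not a recommendation:

    from gpt4all import GPT4All

    # Placeholder model file; substitute any GGUF model you have installed.
    model = GPT4All("Meta-Llama-3-8B-Instruct.Q4_0.gguf")
    with model.chat_session():
        # Point the model explicitly at the retrieved snippets, as suggested above.
        reply = model.generate("Using the text from the provided files, summarize the key points.")
        print(reply)

In the chat application itself, the same wording goes straight into the message box with a LocalDocs collection enabled.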
diff --git a/gpt4all-bindings/python/gpt4all/gpt4all.py b/gpt4all-bindings/python/gpt4all/gpt4all.py
index 1711429d..027f28df 100644
--- a/gpt4all-bindings/python/gpt4all/gpt4all.py
+++ b/gpt4all-bindings/python/gpt4all/gpt4all.py
@@ -357,7 +357,7 @@ class GPT4All:
expected_md5: str | None = None,
) -> str | os.PathLike[str]:
"""
- Download model from https://gpt4all.io.
+ Download model from gpt4all.io.
Args:
model_filename: Filename of model (with .gguf extension).
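Only the tail of this method's signature is visible in the hunk above; assuming it is the bindings' download_model class method, here is a sketch of a call and of using the returned path (the filename and destination directory are illustrative assumptions):

    from pathlib import Path
    from gpt4all import GPT4All

    dest = Path.home() / ".cache" / "gpt4all"   # assumed destination directory
    dest.mkdir(parents=True, exist_ok=True)
    # Illustrative model filename; the method returns the path it downloaded to.
    path = GPT4All.download_model("Meta-Llama-3-8B-Instruct.Q4_0.gguf", dest)
    print(path)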
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index cd8485aa..75476875 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -74,7 +74,7 @@ setup(
long_description_content_type="text/markdown",
author="Nomic and the Open Source Community",
author_email="support@nomic.ai",
- url="https://gpt4all.io/",
+ url="https://www.nomic.ai/gpt4all",
project_urls={
"Documentation": "https://docs.gpt4all.io/gpt4all_python.html",
"Source code": "https://github.com/nomic-ai/gpt4all/tree/main/gpt4all-bindings/python",
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 7669e1c4..b55bf5c2 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -19,6 +19,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Changed
- The offline update button now directs users to the offline installer releases page. (by [@3Simplex](https://github.com/3Simplex) in [#2888](https://github.com/nomic-ai/gpt4all/pull/2888))
+- Change the website link on the home page to point to the new URL ([#2915](https://github.com/nomic-ai/gpt4all/pull/2915))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index e0f6b6f8..7a96cce0 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -427,7 +427,7 @@ set(CPACK_PACKAGE_INSTALL_DIRECTORY ${COMPONENT_NAME_MAIN})
set(CPACK_PACKAGE_VERSION_MAJOR ${PROJECT_VERSION_MAJOR})
set(CPACK_PACKAGE_VERSION_MINOR ${PROJECT_VERSION_MINOR})
SET(CPACK_PACKAGE_VERSION_PATCH ${PROJECT_VERSION_PATCH})
-set(CPACK_PACKAGE_HOMEPAGE_URL "https://gpt4all.io")
+set(CPACK_PACKAGE_HOMEPAGE_URL "https://www.nomic.ai/gpt4all")
set(CPACK_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_RESOURCE_FILE_LICENSE ${CMAKE_CURRENT_SOURCE_DIR}/LICENSE)
set(CPACK_RESOURCE_FILE_README ${CMAKE_CURRENT_SOURCE_DIR}/README.md)
@@ -436,7 +436,7 @@ set(CPACK_CREATE_DESKTOP_LINKS "GPT4All")
set(CPACK_IFW_PACKAGE_NAME "GPT4All")
set(CPACK_IFW_PACKAGE_TITLE "GPT4All Installer")
set(CPACK_IFW_PACKAGE_PUBLISHER "Nomic, Inc.")
-set(CPACK_IFW_PRODUCT_URL "https://gpt4all.io")
+set(CPACK_IFW_PRODUCT_URL "https://www.nomic.ai/gpt4all")
set(CPACK_IFW_PACKAGE_WIZARD_STYLE "Aero")
set(CPACK_IFW_PACKAGE_LOGO "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_IFW_PACKAGE_WINDOW_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-32.png")
diff --git a/gpt4all-chat/README.md b/gpt4all-chat/README.md
index ec110b4b..eca85e52 100644
--- a/gpt4all-chat/README.md
+++ b/gpt4all-chat/README.md
@@ -11,7 +11,7 @@ GPT-J model by following build instructions below.
## Install
-One click installers for macOS, Linux, and Windows at https://gpt4all.io
+One click installers for macOS, Linux, and Windows at https://www.nomic.ai/gpt4all
## Features
diff --git a/gpt4all-chat/cmake/Modules/SignWindowsBinaries.cmake b/gpt4all-chat/cmake/Modules/SignWindowsBinaries.cmake
index bd9c9fbe..0dc3f86f 100644
--- a/gpt4all-chat/cmake/Modules/SignWindowsBinaries.cmake
+++ b/gpt4all-chat/cmake/Modules/SignWindowsBinaries.cmake
@@ -3,7 +3,7 @@ function(sign_target_windows tgt)
add_custom_command(TARGET ${tgt}
POST_BUILD
COMMAND AzureSignTool.exe sign
- -du "https://gpt4all.io/index.html"
+ -du "https://www.nomic.ai/gpt4all"
-kvu https://gpt4all.vault.azure.net
-kvi "$Env{AZSignGUID}"
-kvs "$Env{AZSignPWD}"
@@ -14,4 +14,4 @@ function(sign_target_windows tgt)
$<TARGET_FILE:${tgt}>
)
endif()
-endfunction()
\ No newline at end of file
+endfunction()
diff --git a/gpt4all-chat/flatpak-manifest/io.gpt4all.gpt4all.appdata.xml b/gpt4all-chat/flatpak-manifest/io.gpt4all.gpt4all.appdata.xml
index 268933c7..150b659c 100644
--- a/gpt4all-chat/flatpak-manifest/io.gpt4all.gpt4all.appdata.xml
+++ b/gpt4all-chat/flatpak-manifest/io.gpt4all.gpt4all.appdata.xml
@@ -32,7 +32,7 @@
https://raw.githubusercontent.com/nomic-ai/gpt4all/main/gpt4all-chat/flatpak-manifest/screenshots/model.png
- https://gpt4all.io
+ https://www.nomic.ai/gpt4all
https://github.com/nomic-ai/gpt4all/issues
https://github.com/nomic-ai/gpt4all
@@ -46,4 +46,4 @@
moderatemild
-
\ No newline at end of file
+
diff --git a/gpt4all-chat/qml/HomeView.qml b/gpt4all-chat/qml/HomeView.qml
index f9728f6f..780cb35a 100644
--- a/gpt4all-chat/qml/HomeView.qml
+++ b/gpt4all-chat/qml/HomeView.qml
@@ -254,9 +254,9 @@ Rectangle {
spacing: 40
MyFancyLink {
- text: qsTr("GPT4All.io")
+ text: qsTr("nomic.ai")
imageSource: "qrc:/gpt4all/icons/globe.svg"
- onClicked: { Qt.openUrlExternally("https://gpt4all.io") }
+ onClicked: { Qt.openUrlExternally("https://www.nomic.ai/gpt4all") }
rightPadding: 15
}
}
From ed8bd4ceda6ae7d1c326d82cead894ec38b89485 Mon Sep 17 00:00:00 2001
From: 3Simplex <10260755+3Simplex@users.noreply.github.com>
Date: Mon, 26 Aug 2024 18:41:15 -0400
Subject: [PATCH 50/66] chat: fix typo "predicatable" (#2916)
Signed-off-by: 3Simplex <10260755+3Simplex@users.noreply.github.com>
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/qml/ModelSettings.qml | 2 +-
gpt4all-chat/translations/gpt4all_en_US.ts | 4 ++--
gpt4all-chat/translations/gpt4all_es_MX.ts | 6 +++---
gpt4all-chat/translations/gpt4all_it_IT.ts | 6 +++---
gpt4all-chat/translations/gpt4all_pt_BR.ts | 6 +++---
gpt4all-chat/translations/gpt4all_ro_RO.ts | 12 ++++++------
gpt4all-chat/translations/gpt4all_zh_CN.ts | 6 +++---
gpt4all-chat/translations/gpt4all_zh_TW.ts | 6 +++---
9 files changed, 25 insertions(+), 24 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index b55bf5c2..c581eaf0 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -16,6 +16,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
- Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))
- Fix singular/plural forms of LocalDocs "x Sources" (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2885](https://github.com/nomic-ai/gpt4all/pull/2885))
+- Fixed a typo ("predicatable") in several files (by [@3Simplex](https://github.com/3Simplex) in [#2916](https://github.com/nomic-ai/gpt4all/pull/2916))
### Changed
- The offline update button now directs users to the offline installer releases page. (by [@3Simplex](https://github.com/3Simplex) in [#2888](https://github.com/nomic-ai/gpt4all/pull/2888))
diff --git a/gpt4all-chat/qml/ModelSettings.qml b/gpt4all-chat/qml/ModelSettings.qml
index 5e896eb1..2435e08f 100644
--- a/gpt4all-chat/qml/ModelSettings.qml
+++ b/gpt4all-chat/qml/ModelSettings.qml
@@ -456,7 +456,7 @@ MySettingsTab {
MySettingsLabel {
id: topPLabel
text: qsTr("Top-P")
- helpText: qsTr("Nucleus Sampling factor. Lower -> more predicatable.")
+ helpText: qsTr("Nucleus Sampling factor. Lower -> more predictable.")
Layout.row: 2
Layout.column: 0
Layout.maximumWidth: 300 * theme.fontScale
diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
index 0221d4e7..be7dee3f 100644
--- a/gpt4all-chat/translations/gpt4all_en_US.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
@@ -1217,7 +1217,7 @@ model to get started
- GPT4All.io
+ nomic.ai
@@ -1683,7 +1683,7 @@ NOTE: Higher temperature gives more creative but less predictable outputs.
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index a3498c43..0c9f516f 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -1249,8 +1249,8 @@ modelo para comenzar
- GPT4All.io
- GPT4All.io
+ nomic.ai
+ nomic.ai
@@ -1742,7 +1742,7 @@ NOTA: Una temperatura más alta da resultados más creativos pero menos predecib
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
Factor de muestreo de núcleo. Menor -> más predecible.
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index e3615035..9baf840d 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -1227,8 +1227,8 @@ modello per iniziare
- GPT4All.io
-
+ nomic.ai
+ nomic.ai
@@ -1701,7 +1701,7 @@ NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedi
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
Fattore di campionamento del nucleo. Inferiore -> più prevedibile.
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index b9bb47b5..38a9177b 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -1230,8 +1230,8 @@ modelo instalado para funcionar
- GPT4All.io
- GPT4All.io
+ nomic.ai
+ nomic.ai
@@ -1704,7 +1704,7 @@ Obs.: Uma temperatura mais alta gera resultados mais criativos, mas menos previs
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
Amostragem por núcleo. Menor valor, respostas mais previsíveis.
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index 7ce96011..703e5b14 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -1314,13 +1314,13 @@ model to get started
GitHub
- GitHub
- GitHub
+
+ nomic.ai
+ nomic.ai
-
- GPT4All.io
- GPT4All.io
+ GitHub
+ GitHub
@@ -1864,7 +1864,7 @@ model to get started
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
Factorul de Nucleus Sampling. Mai mic -> predictibilitate mai mare.
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index 7f675e62..4bd6c395 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -1317,8 +1317,8 @@ model to get started
- GPT4All.io
- GPT4All.io
+ nomic.ai
+ nomic.ai
@@ -1819,7 +1819,7 @@ NOTE: Higher temperature gives more creative but less predictable outputs.
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
核子取样系数。较低->更具可预测性。
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index 3a4fc4eb..f0fd630e 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -1220,8 +1220,8 @@ model to get started
- GPT4All.io
- GPT4All.io
+ nomic.ai
+ nomic.ai
@@ -1687,7 +1687,7 @@ NOTE: Higher temperature gives more creative but less predictable outputs.
- Nucleus Sampling factor. Lower -> more predicatable.
+ Nucleus Sampling factor. Lower -> more predictable.
核心採樣因子。更低 -> 更可預測。
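For readers unfamiliar with the setting behind this corrected help text, a small self-contained Python sketch of top-p (nucleus) sampling follows. It is illustrative only, not the llama.cpp code the application actually uses:

    import math
    import random

    def nucleus_sample(logits, top_p, rng=random):
        # Softmax over the raw logits.
        m = max(logits)
        exps = [math.exp(x - m) for x in logits]
        total = sum(exps)
        probs = [e / total for e in exps]
        # Keep the most likely tokens until their cumulative mass reaches top_p;
        # a lower top_p keeps fewer candidates, hence "more predictable".
        order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
        kept, cum = [], 0.0
        for i in order:
            kept.append(i)
            cum += probs[i]
            if cum >= top_p:
                break
        return rng.choices(kept, weights=[probs[i] for i in kept])[0]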
From ca151f3519bbcd8561a1d46cf6dc4c3cc8a4cfae Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Tue, 27 Aug 2024 17:22:40 -0400
Subject: [PATCH 51/66] repo: organize sources, headers, and deps into
subdirectories (#2917)
Signed-off-by: Jared Van Bortel
---
.gitmodules | 4 +-
gpt4all-backend/CMakeLists.txt | 23 ++++--
gpt4all-backend/{ => deps}/llama.cpp-mainline | 0
.../{ => include/gpt4all-backend}/llmodel.h | 0
.../{ => include/gpt4all-backend}/llmodel_c.h | 0
.../{ => include/gpt4all-backend}/sysinfo.h | 0
gpt4all-backend/llama.cpp.cmake | 4 +-
gpt4all-backend/{ => src}/dlhandle.cpp | 0
gpt4all-backend/{ => src}/dlhandle.h | 0
gpt4all-backend/{ => src}/llamamodel.cpp | 0
gpt4all-backend/{ => src}/llamamodel_impl.h | 0
gpt4all-backend/{ => src}/llmodel.cpp | 0
gpt4all-backend/{ => src}/llmodel_c.cpp | 0
gpt4all-backend/{ => src}/llmodel_shared.cpp | 0
gpt4all-backend/{ => src}/llmodel_shared.h | 0
gpt4all-backend/{ => src}/utils.cpp | 0
gpt4all-backend/{ => src}/utils.h | 0
gpt4all-chat/CMakeLists.txt | 80 +++----------------
gpt4all-chat/{ => deps}/usearch | 0
gpt4all-chat/src/CMakeLists.txt | 72 +++++++++++++++++
gpt4all-chat/{ => src}/chat.cpp | 0
gpt4all-chat/{ => src}/chat.h | 0
gpt4all-chat/{ => src}/chatapi.cpp | 2 +-
gpt4all-chat/{ => src}/chatapi.h | 2 +-
gpt4all-chat/{ => src}/chatlistmodel.cpp | 0
gpt4all-chat/{ => src}/chatlistmodel.h | 0
gpt4all-chat/{ => src}/chatllm.cpp | 0
gpt4all-chat/{ => src}/chatllm.h | 2 +-
gpt4all-chat/{ => src}/chatmodel.h | 0
.../{ => src}/chatviewtextprocessor.cpp | 0
.../{ => src}/chatviewtextprocessor.h | 0
gpt4all-chat/{ => src}/database.cpp | 0
gpt4all-chat/{ => src}/database.h | 0
gpt4all-chat/{ => src}/download.cpp | 0
gpt4all-chat/{ => src}/download.h | 0
gpt4all-chat/{ => src}/embllm.cpp | 2 +-
gpt4all-chat/{ => src}/embllm.h | 0
gpt4all-chat/{ => src}/llm.cpp | 4 +-
gpt4all-chat/{ => src}/llm.h | 0
gpt4all-chat/{ => src}/localdocs.cpp | 0
gpt4all-chat/{ => src}/localdocs.h | 0
gpt4all-chat/{ => src}/localdocsmodel.cpp | 0
gpt4all-chat/{ => src}/localdocsmodel.h | 0
gpt4all-chat/{ => src}/logger.cpp | 0
gpt4all-chat/{ => src}/logger.h | 0
gpt4all-chat/{ => src}/main.cpp | 2 +-
gpt4all-chat/{ => src}/main.qml | 0
gpt4all-chat/{ => src}/modellist.cpp | 2 +-
gpt4all-chat/{ => src}/modellist.h | 0
gpt4all-chat/{ => src}/mysettings.cpp | 2 +-
gpt4all-chat/{ => src}/mysettings.h | 0
gpt4all-chat/{ => src}/network.cpp | 2 +-
gpt4all-chat/{ => src}/network.h | 0
.../{ => src}/qml/AddCollectionView.qml | 0
gpt4all-chat/{ => src}/qml/AddModelView.qml | 0
.../{ => src}/qml/ApplicationSettings.qml | 0
gpt4all-chat/{ => src}/qml/ChatDrawer.qml | 0
gpt4all-chat/{ => src}/qml/ChatView.qml | 0
.../{ => src}/qml/CollectionsDrawer.qml | 0
gpt4all-chat/{ => src}/qml/HomeView.qml | 0
.../{ => src}/qml/LocalDocsSettings.qml | 0
gpt4all-chat/{ => src}/qml/LocalDocsView.qml | 0
gpt4all-chat/{ => src}/qml/ModelSettings.qml | 0
gpt4all-chat/{ => src}/qml/ModelsView.qml | 0
.../{ => src}/qml/MyBusyIndicator.qml | 0
gpt4all-chat/{ => src}/qml/MyButton.qml | 0
gpt4all-chat/{ => src}/qml/MyCheckBox.qml | 0
gpt4all-chat/{ => src}/qml/MyComboBox.qml | 0
gpt4all-chat/{ => src}/qml/MyDialog.qml | 0
.../{ => src}/qml/MyDirectoryField.qml | 0
gpt4all-chat/{ => src}/qml/MyFancyLink.qml | 0
gpt4all-chat/{ => src}/qml/MyMenu.qml | 0
gpt4all-chat/{ => src}/qml/MyMenuItem.qml | 0
gpt4all-chat/{ => src}/qml/MyMiniButton.qml | 0
.../{ => src}/qml/MySettingsButton.qml | 0
.../qml/MySettingsDestructiveButton.qml | 0
.../{ => src}/qml/MySettingsLabel.qml | 0
.../{ => src}/qml/MySettingsStack.qml | 0
gpt4all-chat/{ => src}/qml/MySettingsTab.qml | 0
gpt4all-chat/{ => src}/qml/MySlug.qml | 0
gpt4all-chat/{ => src}/qml/MyTextArea.qml | 0
gpt4all-chat/{ => src}/qml/MyTextButton.qml | 0
gpt4all-chat/{ => src}/qml/MyTextField.qml | 0
gpt4all-chat/{ => src}/qml/MyToolButton.qml | 0
.../{ => src}/qml/MyWelcomeButton.qml | 0
gpt4all-chat/{ => src}/qml/NetworkDialog.qml | 0
.../{ => src}/qml/NewVersionDialog.qml | 0
gpt4all-chat/{ => src}/qml/PopupDialog.qml | 0
gpt4all-chat/{ => src}/qml/SettingsView.qml | 0
gpt4all-chat/{ => src}/qml/StartupDialog.qml | 0
.../{ => src}/qml/SwitchModelDialog.qml | 0
gpt4all-chat/{ => src}/qml/Theme.qml | 0
.../{ => src}/qml/ThumbsDownDialog.qml | 0
gpt4all-chat/{ => src}/qml/Toast.qml | 0
gpt4all-chat/{ => src}/qml/ToastManager.qml | 0
gpt4all-chat/{ => src}/server.cpp | 0
gpt4all-chat/{ => src}/server.h | 0
97 files changed, 112 insertions(+), 91 deletions(-)
rename gpt4all-backend/{ => deps}/llama.cpp-mainline (100%)
rename gpt4all-backend/{ => include/gpt4all-backend}/llmodel.h (100%)
rename gpt4all-backend/{ => include/gpt4all-backend}/llmodel_c.h (100%)
rename gpt4all-backend/{ => include/gpt4all-backend}/sysinfo.h (100%)
rename gpt4all-backend/{ => src}/dlhandle.cpp (100%)
rename gpt4all-backend/{ => src}/dlhandle.h (100%)
rename gpt4all-backend/{ => src}/llamamodel.cpp (100%)
rename gpt4all-backend/{ => src}/llamamodel_impl.h (100%)
rename gpt4all-backend/{ => src}/llmodel.cpp (100%)
rename gpt4all-backend/{ => src}/llmodel_c.cpp (100%)
rename gpt4all-backend/{ => src}/llmodel_shared.cpp (100%)
rename gpt4all-backend/{ => src}/llmodel_shared.h (100%)
rename gpt4all-backend/{ => src}/utils.cpp (100%)
rename gpt4all-backend/{ => src}/utils.h (100%)
rename gpt4all-chat/{ => deps}/usearch (100%)
create mode 100644 gpt4all-chat/src/CMakeLists.txt
rename gpt4all-chat/{ => src}/chat.cpp (100%)
rename gpt4all-chat/{ => src}/chat.h (100%)
rename gpt4all-chat/{ => src}/chatapi.cpp (99%)
rename gpt4all-chat/{ => src}/chatapi.h (99%)
rename gpt4all-chat/{ => src}/chatlistmodel.cpp (100%)
rename gpt4all-chat/{ => src}/chatlistmodel.h (100%)
rename gpt4all-chat/{ => src}/chatllm.cpp (100%)
rename gpt4all-chat/{ => src}/chatllm.h (99%)
rename gpt4all-chat/{ => src}/chatmodel.h (100%)
rename gpt4all-chat/{ => src}/chatviewtextprocessor.cpp (100%)
rename gpt4all-chat/{ => src}/chatviewtextprocessor.h (100%)
rename gpt4all-chat/{ => src}/database.cpp (100%)
rename gpt4all-chat/{ => src}/database.h (100%)
rename gpt4all-chat/{ => src}/download.cpp (100%)
rename gpt4all-chat/{ => src}/download.h (100%)
rename gpt4all-chat/{ => src}/embllm.cpp (99%)
rename gpt4all-chat/{ => src}/embllm.h (100%)
rename gpt4all-chat/{ => src}/llm.cpp (97%)
rename gpt4all-chat/{ => src}/llm.h (100%)
rename gpt4all-chat/{ => src}/localdocs.cpp (100%)
rename gpt4all-chat/{ => src}/localdocs.h (100%)
rename gpt4all-chat/{ => src}/localdocsmodel.cpp (100%)
rename gpt4all-chat/{ => src}/localdocsmodel.h (100%)
rename gpt4all-chat/{ => src}/logger.cpp (100%)
rename gpt4all-chat/{ => src}/logger.h (100%)
rename gpt4all-chat/{ => src}/main.cpp (98%)
rename gpt4all-chat/{ => src}/main.qml (100%)
rename gpt4all-chat/{ => src}/modellist.cpp (99%)
rename gpt4all-chat/{ => src}/modellist.h (100%)
rename gpt4all-chat/{ => src}/mysettings.cpp (99%)
rename gpt4all-chat/{ => src}/mysettings.h (100%)
rename gpt4all-chat/{ => src}/network.cpp (99%)
rename gpt4all-chat/{ => src}/network.h (100%)
rename gpt4all-chat/{ => src}/qml/AddCollectionView.qml (100%)
rename gpt4all-chat/{ => src}/qml/AddModelView.qml (100%)
rename gpt4all-chat/{ => src}/qml/ApplicationSettings.qml (100%)
rename gpt4all-chat/{ => src}/qml/ChatDrawer.qml (100%)
rename gpt4all-chat/{ => src}/qml/ChatView.qml (100%)
rename gpt4all-chat/{ => src}/qml/CollectionsDrawer.qml (100%)
rename gpt4all-chat/{ => src}/qml/HomeView.qml (100%)
rename gpt4all-chat/{ => src}/qml/LocalDocsSettings.qml (100%)
rename gpt4all-chat/{ => src}/qml/LocalDocsView.qml (100%)
rename gpt4all-chat/{ => src}/qml/ModelSettings.qml (100%)
rename gpt4all-chat/{ => src}/qml/ModelsView.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyBusyIndicator.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyCheckBox.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyComboBox.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyDirectoryField.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyFancyLink.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyMenu.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyMenuItem.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyMiniButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/MySettingsButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/MySettingsDestructiveButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/MySettingsLabel.qml (100%)
rename gpt4all-chat/{ => src}/qml/MySettingsStack.qml (100%)
rename gpt4all-chat/{ => src}/qml/MySettingsTab.qml (100%)
rename gpt4all-chat/{ => src}/qml/MySlug.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyTextArea.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyTextButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyTextField.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyToolButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/MyWelcomeButton.qml (100%)
rename gpt4all-chat/{ => src}/qml/NetworkDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/NewVersionDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/PopupDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/SettingsView.qml (100%)
rename gpt4all-chat/{ => src}/qml/StartupDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/SwitchModelDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/Theme.qml (100%)
rename gpt4all-chat/{ => src}/qml/ThumbsDownDialog.qml (100%)
rename gpt4all-chat/{ => src}/qml/Toast.qml (100%)
rename gpt4all-chat/{ => src}/qml/ToastManager.qml (100%)
rename gpt4all-chat/{ => src}/server.cpp (100%)
rename gpt4all-chat/{ => src}/server.h (100%)
diff --git a/.gitmodules b/.gitmodules
index 98c9a214..77533f6a 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -1,7 +1,7 @@
[submodule "llama.cpp-mainline"]
- path = gpt4all-backend/llama.cpp-mainline
+ path = gpt4all-backend/deps/llama.cpp-mainline
url = https://github.com/nomic-ai/llama.cpp.git
branch = master
[submodule "gpt4all-chat/usearch"]
- path = gpt4all-chat/usearch
+ path = gpt4all-chat/deps/usearch
url = https://github.com/nomic-ai/usearch.git
diff --git a/gpt4all-backend/CMakeLists.txt b/gpt4all-backend/CMakeLists.txt
index 684080ab..2c1fbb46 100644
--- a/gpt4all-backend/CMakeLists.txt
+++ b/gpt4all-backend/CMakeLists.txt
@@ -1,4 +1,4 @@
-cmake_minimum_required(VERSION 3.21) # for PROJECT_IS_TOP_LEVEL
+cmake_minimum_required(VERSION 3.23) # for FILE_SET
set(CMAKE_WINDOWS_EXPORT_ALL_SYMBOLS ON)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
@@ -47,7 +47,7 @@ else()
message(STATUS "Interprocedural optimization support detected")
endif()
-set(DIRECTORY llama.cpp-mainline)
+set(DIRECTORY deps/llama.cpp-mainline)
include(llama.cpp.cmake)
set(BUILD_VARIANTS)
@@ -146,9 +146,12 @@ foreach(BUILD_VARIANT IN LISTS BUILD_VARIANTS)
# Add each individual implementations
add_library(llamamodel-mainline-${BUILD_VARIANT} SHARED
- llamamodel.cpp llmodel_shared.cpp)
+ src/llamamodel.cpp src/llmodel_shared.cpp)
target_compile_definitions(llamamodel-mainline-${BUILD_VARIANT} PRIVATE
LLAMA_VERSIONS=>=3 LLAMA_DATE=999999)
+ target_include_directories(llamamodel-mainline-${BUILD_VARIANT} PRIVATE
+ src include/gpt4all-backend
+ )
prepare_target(llamamodel-mainline llama-mainline)
if (NOT PROJECT_IS_TOP_LEVEL AND BUILD_VARIANT STREQUAL cuda)
@@ -157,11 +160,19 @@ foreach(BUILD_VARIANT IN LISTS BUILD_VARIANTS)
endforeach()
add_library(llmodel
- llmodel.h llmodel.cpp llmodel_shared.cpp
- llmodel_c.h llmodel_c.cpp
- dlhandle.cpp
+ src/dlhandle.cpp
+ src/llmodel.cpp
+ src/llmodel_c.cpp
+ src/llmodel_shared.cpp
+)
+target_sources(llmodel PUBLIC
+ FILE_SET public_headers TYPE HEADERS BASE_DIRS include
+ FILES include/gpt4all-backend/llmodel.h
+ include/gpt4all-backend/llmodel_c.h
+ include/gpt4all-backend/sysinfo.h
)
target_compile_definitions(llmodel PRIVATE LIB_FILE_EXT="${CMAKE_SHARED_LIBRARY_SUFFIX}")
+target_include_directories(llmodel PRIVATE src include/gpt4all-backend)
set_target_properties(llmodel PROPERTIES
VERSION ${PROJECT_VERSION}
diff --git a/gpt4all-backend/llama.cpp-mainline b/gpt4all-backend/deps/llama.cpp-mainline
similarity index 100%
rename from gpt4all-backend/llama.cpp-mainline
rename to gpt4all-backend/deps/llama.cpp-mainline
diff --git a/gpt4all-backend/llmodel.h b/gpt4all-backend/include/gpt4all-backend/llmodel.h
similarity index 100%
rename from gpt4all-backend/llmodel.h
rename to gpt4all-backend/include/gpt4all-backend/llmodel.h
diff --git a/gpt4all-backend/llmodel_c.h b/gpt4all-backend/include/gpt4all-backend/llmodel_c.h
similarity index 100%
rename from gpt4all-backend/llmodel_c.h
rename to gpt4all-backend/include/gpt4all-backend/llmodel_c.h
diff --git a/gpt4all-backend/sysinfo.h b/gpt4all-backend/include/gpt4all-backend/sysinfo.h
similarity index 100%
rename from gpt4all-backend/sysinfo.h
rename to gpt4all-backend/include/gpt4all-backend/sysinfo.h
diff --git a/gpt4all-backend/llama.cpp.cmake b/gpt4all-backend/llama.cpp.cmake
index c82668d7..5d5ceb20 100644
--- a/gpt4all-backend/llama.cpp.cmake
+++ b/gpt4all-backend/llama.cpp.cmake
@@ -811,7 +811,8 @@ function(include_ggml SUFFIX)
list(APPEND XC_FLAGS -std=${GGML_METAL_STD})
endif()
- set(GGML_METALLIB ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/default.metallib)
+ set(GGML_METALLIB "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/default.metallib")
+ set(GGML_METALLIB "${GGML_METALLIB}" PARENT_SCOPE)
add_custom_command(
OUTPUT ${GGML_METALLIB}
COMMAND xcrun -sdk macosx metal ${XC_FLAGS} -c ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/ggml-metal.metal -o ${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/ggml-metal.air
@@ -822,7 +823,6 @@ function(include_ggml SUFFIX)
DEPENDS ${DIRECTORY}/ggml/src/ggml-metal.metal ${DIRECTORY}/ggml/src/ggml-common.h
COMMENT "Compiling Metal kernels"
)
- set_source_files_properties(${GGML_METALLIB} DIRECTORY ${CMAKE_SOURCE_DIR} PROPERTIES GENERATED ON)
add_custom_target(
ggml-metal ALL
diff --git a/gpt4all-backend/dlhandle.cpp b/gpt4all-backend/src/dlhandle.cpp
similarity index 100%
rename from gpt4all-backend/dlhandle.cpp
rename to gpt4all-backend/src/dlhandle.cpp
diff --git a/gpt4all-backend/dlhandle.h b/gpt4all-backend/src/dlhandle.h
similarity index 100%
rename from gpt4all-backend/dlhandle.h
rename to gpt4all-backend/src/dlhandle.h
diff --git a/gpt4all-backend/llamamodel.cpp b/gpt4all-backend/src/llamamodel.cpp
similarity index 100%
rename from gpt4all-backend/llamamodel.cpp
rename to gpt4all-backend/src/llamamodel.cpp
diff --git a/gpt4all-backend/llamamodel_impl.h b/gpt4all-backend/src/llamamodel_impl.h
similarity index 100%
rename from gpt4all-backend/llamamodel_impl.h
rename to gpt4all-backend/src/llamamodel_impl.h
diff --git a/gpt4all-backend/llmodel.cpp b/gpt4all-backend/src/llmodel.cpp
similarity index 100%
rename from gpt4all-backend/llmodel.cpp
rename to gpt4all-backend/src/llmodel.cpp
diff --git a/gpt4all-backend/llmodel_c.cpp b/gpt4all-backend/src/llmodel_c.cpp
similarity index 100%
rename from gpt4all-backend/llmodel_c.cpp
rename to gpt4all-backend/src/llmodel_c.cpp
diff --git a/gpt4all-backend/llmodel_shared.cpp b/gpt4all-backend/src/llmodel_shared.cpp
similarity index 100%
rename from gpt4all-backend/llmodel_shared.cpp
rename to gpt4all-backend/src/llmodel_shared.cpp
diff --git a/gpt4all-backend/llmodel_shared.h b/gpt4all-backend/src/llmodel_shared.h
similarity index 100%
rename from gpt4all-backend/llmodel_shared.h
rename to gpt4all-backend/src/llmodel_shared.h
diff --git a/gpt4all-backend/utils.cpp b/gpt4all-backend/src/utils.cpp
similarity index 100%
rename from gpt4all-backend/utils.cpp
rename to gpt4all-backend/src/utils.cpp
diff --git a/gpt4all-backend/utils.h b/gpt4all-backend/src/utils.h
similarity index 100%
rename from gpt4all-backend/utils.h
rename to gpt4all-backend/src/utils.h
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index 7a96cce0..d8c92ead 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -70,7 +70,7 @@ set(CHAT_EXE_RESOURCES)
# Metal shader library
if (APPLE)
- list(APPEND CHAT_EXE_RESOURCES "${CMAKE_RUNTIME_OUTPUT_DIRECTORY}/default.metallib")
+ list(APPEND CHAT_EXE_RESOURCES "${GGML_METALLIB}")
endif()
# App icon
@@ -105,75 +105,11 @@ if (APPLE)
list(APPEND CHAT_EXE_RESOURCES "${LOCAL_EMBEDDING_MODEL_PATH}")
endif()
-qt_add_executable(chat
- main.cpp
- chat.h chat.cpp
- chatllm.h chatllm.cpp
- chatmodel.h chatlistmodel.h chatlistmodel.cpp
- chatapi.h chatapi.cpp
- chatviewtextprocessor.h chatviewtextprocessor.cpp
- database.h database.cpp
- download.h download.cpp
- embllm.cpp embllm.h
- localdocs.h localdocs.cpp localdocsmodel.h localdocsmodel.cpp
- llm.h llm.cpp
- modellist.h modellist.cpp
- mysettings.h mysettings.cpp
- network.h network.cpp
- server.h server.cpp
- logger.h logger.cpp
- ${APP_ICON_RESOURCE}
- ${CHAT_EXE_RESOURCES}
-)
+add_subdirectory(src)
-qt_add_qml_module(chat
- URI gpt4all
- VERSION 1.0
- NO_CACHEGEN
- QML_FILES
- main.qml
- qml/AddCollectionView.qml
- qml/AddModelView.qml
- qml/ApplicationSettings.qml
- qml/ChatDrawer.qml
- qml/ChatView.qml
- qml/CollectionsDrawer.qml
- qml/HomeView.qml
- qml/LocalDocsSettings.qml
- qml/LocalDocsView.qml
- qml/ModelSettings.qml
- qml/ModelsView.qml
- qml/NetworkDialog.qml
- qml/NewVersionDialog.qml
- qml/PopupDialog.qml
- qml/SettingsView.qml
- qml/StartupDialog.qml
- qml/SwitchModelDialog.qml
- qml/Theme.qml
- qml/ThumbsDownDialog.qml
- qml/Toast.qml
- qml/ToastManager.qml
- qml/MyBusyIndicator.qml
- qml/MyButton.qml
- qml/MyCheckBox.qml
- qml/MyComboBox.qml
- qml/MyDialog.qml
- qml/MyDirectoryField.qml
- qml/MyFancyLink.qml
- qml/MyMenu.qml
- qml/MyMenuItem.qml
- qml/MyMiniButton.qml
- qml/MySettingsButton.qml
- qml/MySettingsDestructiveButton.qml
- qml/MySettingsLabel.qml
- qml/MySettingsStack.qml
- qml/MySettingsTab.qml
- qml/MySlug.qml
- qml/MyTextArea.qml
- qml/MyTextButton.qml
- qml/MyTextField.qml
- qml/MyToolButton.qml
- qml/MyWelcomeButton.qml
+target_sources(chat PRIVATE ${APP_ICON_RESOURCE} ${CHAT_EXE_RESOURCES})
+
+qt_target_qml_sources(chat
RESOURCES
icons/antenna_1.svg
icons/antenna_2.svg
@@ -286,11 +222,13 @@ endif()
target_compile_definitions(chat
PRIVATE $<$<OR:$<CONFIG:Debug>,$<CONFIG:RelWithDebInfo>>:QT_QML_DEBUG>)
+target_include_directories(chat PRIVATE src)
+
# usearch uses the identifier 'slots' which conflicts with Qt's 'slots' keyword
target_compile_definitions(chat PRIVATE QT_NO_SIGNALS_SLOTS_KEYWORDS)
-target_include_directories(chat PRIVATE usearch/include
- usearch/fp16/include)
+target_include_directories(chat PRIVATE deps/usearch/include
+ deps/usearch/fp16/include)
if(LINUX)
target_link_libraries(chat
diff --git a/gpt4all-chat/usearch b/gpt4all-chat/deps/usearch
similarity index 100%
rename from gpt4all-chat/usearch
rename to gpt4all-chat/deps/usearch
diff --git a/gpt4all-chat/src/CMakeLists.txt b/gpt4all-chat/src/CMakeLists.txt
new file mode 100644
index 00000000..e489fd86
--- /dev/null
+++ b/gpt4all-chat/src/CMakeLists.txt
@@ -0,0 +1,72 @@
+set_source_files_properties("${GGML_METALLIB}" PROPERTIES GENERATED ON)
+
+qt_add_executable(chat
+ main.cpp
+ chat.cpp chat.h
+ chatapi.cpp chatapi.h
+ chatlistmodel.cpp chatlistmodel.h
+ chatllm.cpp chatllm.h
+ chatmodel.h
+ chatviewtextprocessor.cpp chatviewtextprocessor.h
+ database.cpp database.h
+ download.cpp download.h
+ embllm.cpp embllm.h
+ llm.cpp llm.h
+ localdocs.cpp localdocs.h
+ localdocsmodel.cpp localdocsmodel.h
+ logger.cpp logger.h
+ modellist.cpp modellist.h
+ mysettings.cpp mysettings.h
+ network.cpp network.h
+ server.cpp server.h
+)
+
+qt_add_qml_module(chat
+ URI gpt4all
+ VERSION 1.0
+ NO_CACHEGEN
+ QML_FILES
+ main.qml
+ qml/AddCollectionView.qml
+ qml/AddModelView.qml
+ qml/ApplicationSettings.qml
+ qml/ChatDrawer.qml
+ qml/ChatView.qml
+ qml/CollectionsDrawer.qml
+ qml/HomeView.qml
+ qml/LocalDocsSettings.qml
+ qml/LocalDocsView.qml
+ qml/ModelSettings.qml
+ qml/ModelsView.qml
+ qml/NetworkDialog.qml
+ qml/NewVersionDialog.qml
+ qml/PopupDialog.qml
+ qml/SettingsView.qml
+ qml/StartupDialog.qml
+ qml/SwitchModelDialog.qml
+ qml/Theme.qml
+ qml/ThumbsDownDialog.qml
+ qml/Toast.qml
+ qml/ToastManager.qml
+ qml/MyBusyIndicator.qml
+ qml/MyButton.qml
+ qml/MyCheckBox.qml
+ qml/MyComboBox.qml
+ qml/MyDialog.qml
+ qml/MyDirectoryField.qml
+ qml/MyFancyLink.qml
+ qml/MyMenu.qml
+ qml/MyMenuItem.qml
+ qml/MyMiniButton.qml
+ qml/MySettingsButton.qml
+ qml/MySettingsDestructiveButton.qml
+ qml/MySettingsLabel.qml
+ qml/MySettingsStack.qml
+ qml/MySettingsTab.qml
+ qml/MySlug.qml
+ qml/MyTextArea.qml
+ qml/MyTextButton.qml
+ qml/MyTextField.qml
+ qml/MyToolButton.qml
+ qml/MyWelcomeButton.qml
+)
diff --git a/gpt4all-chat/chat.cpp b/gpt4all-chat/src/chat.cpp
similarity index 100%
rename from gpt4all-chat/chat.cpp
rename to gpt4all-chat/src/chat.cpp
diff --git a/gpt4all-chat/chat.h b/gpt4all-chat/src/chat.h
similarity index 100%
rename from gpt4all-chat/chat.h
rename to gpt4all-chat/src/chat.h
diff --git a/gpt4all-chat/chatapi.cpp b/gpt4all-chat/src/chatapi.cpp
similarity index 99%
rename from gpt4all-chat/chatapi.cpp
rename to gpt4all-chat/src/chatapi.cpp
index b443f24c..06594a32 100644
--- a/gpt4all-chat/chatapi.cpp
+++ b/gpt4all-chat/src/chatapi.cpp
@@ -1,6 +1,6 @@
#include "chatapi.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/chatapi.h b/gpt4all-chat/src/chatapi.h
similarity index 99%
rename from gpt4all-chat/chatapi.h
rename to gpt4all-chat/src/chatapi.h
index 59b68f58..724178de 100644
--- a/gpt4all-chat/chatapi.h
+++ b/gpt4all-chat/src/chatapi.h
@@ -1,7 +1,7 @@
#ifndef CHATAPI_H
#define CHATAPI_H
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/chatlistmodel.cpp b/gpt4all-chat/src/chatlistmodel.cpp
similarity index 100%
rename from gpt4all-chat/chatlistmodel.cpp
rename to gpt4all-chat/src/chatlistmodel.cpp
diff --git a/gpt4all-chat/chatlistmodel.h b/gpt4all-chat/src/chatlistmodel.h
similarity index 100%
rename from gpt4all-chat/chatlistmodel.h
rename to gpt4all-chat/src/chatlistmodel.h
diff --git a/gpt4all-chat/chatllm.cpp b/gpt4all-chat/src/chatllm.cpp
similarity index 100%
rename from gpt4all-chat/chatllm.cpp
rename to gpt4all-chat/src/chatllm.cpp
diff --git a/gpt4all-chat/chatllm.h b/gpt4all-chat/src/chatllm.h
similarity index 99%
rename from gpt4all-chat/chatllm.h
rename to gpt4all-chat/src/chatllm.h
index d123358a..eb8d044f 100644
--- a/gpt4all-chat/chatllm.h
+++ b/gpt4all-chat/src/chatllm.h
@@ -4,7 +4,7 @@
#include "database.h" // IWYU pragma: keep
#include "modellist.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/chatmodel.h b/gpt4all-chat/src/chatmodel.h
similarity index 100%
rename from gpt4all-chat/chatmodel.h
rename to gpt4all-chat/src/chatmodel.h
diff --git a/gpt4all-chat/chatviewtextprocessor.cpp b/gpt4all-chat/src/chatviewtextprocessor.cpp
similarity index 100%
rename from gpt4all-chat/chatviewtextprocessor.cpp
rename to gpt4all-chat/src/chatviewtextprocessor.cpp
diff --git a/gpt4all-chat/chatviewtextprocessor.h b/gpt4all-chat/src/chatviewtextprocessor.h
similarity index 100%
rename from gpt4all-chat/chatviewtextprocessor.h
rename to gpt4all-chat/src/chatviewtextprocessor.h
diff --git a/gpt4all-chat/database.cpp b/gpt4all-chat/src/database.cpp
similarity index 100%
rename from gpt4all-chat/database.cpp
rename to gpt4all-chat/src/database.cpp
diff --git a/gpt4all-chat/database.h b/gpt4all-chat/src/database.h
similarity index 100%
rename from gpt4all-chat/database.h
rename to gpt4all-chat/src/database.h
diff --git a/gpt4all-chat/download.cpp b/gpt4all-chat/src/download.cpp
similarity index 100%
rename from gpt4all-chat/download.cpp
rename to gpt4all-chat/src/download.cpp
diff --git a/gpt4all-chat/download.h b/gpt4all-chat/src/download.h
similarity index 100%
rename from gpt4all-chat/download.h
rename to gpt4all-chat/src/download.h
diff --git a/gpt4all-chat/embllm.cpp b/gpt4all-chat/src/embllm.cpp
similarity index 99%
rename from gpt4all-chat/embllm.cpp
rename to gpt4all-chat/src/embllm.cpp
index 615a6ce4..81b1e9e1 100644
--- a/gpt4all-chat/embllm.cpp
+++ b/gpt4all-chat/src/embllm.cpp
@@ -3,7 +3,7 @@
#include "modellist.h"
#include "mysettings.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/embllm.h b/gpt4all-chat/src/embllm.h
similarity index 100%
rename from gpt4all-chat/embllm.h
rename to gpt4all-chat/src/embllm.h
diff --git a/gpt4all-chat/llm.cpp b/gpt4all-chat/src/llm.cpp
similarity index 97%
rename from gpt4all-chat/llm.cpp
rename to gpt4all-chat/src/llm.cpp
index c8a3c3fe..aed1a7db 100644
--- a/gpt4all-chat/llm.cpp
+++ b/gpt4all-chat/src/llm.cpp
@@ -1,7 +1,7 @@
#include "llm.h"
-#include "../gpt4all-backend/llmodel.h"
-#include "../gpt4all-backend/sysinfo.h"
+#include <gpt4all-backend/llmodel.h>
+#include <gpt4all-backend/sysinfo.h>
#include
#include
diff --git a/gpt4all-chat/llm.h b/gpt4all-chat/src/llm.h
similarity index 100%
rename from gpt4all-chat/llm.h
rename to gpt4all-chat/src/llm.h
diff --git a/gpt4all-chat/localdocs.cpp b/gpt4all-chat/src/localdocs.cpp
similarity index 100%
rename from gpt4all-chat/localdocs.cpp
rename to gpt4all-chat/src/localdocs.cpp
diff --git a/gpt4all-chat/localdocs.h b/gpt4all-chat/src/localdocs.h
similarity index 100%
rename from gpt4all-chat/localdocs.h
rename to gpt4all-chat/src/localdocs.h
diff --git a/gpt4all-chat/localdocsmodel.cpp b/gpt4all-chat/src/localdocsmodel.cpp
similarity index 100%
rename from gpt4all-chat/localdocsmodel.cpp
rename to gpt4all-chat/src/localdocsmodel.cpp
diff --git a/gpt4all-chat/localdocsmodel.h b/gpt4all-chat/src/localdocsmodel.h
similarity index 100%
rename from gpt4all-chat/localdocsmodel.h
rename to gpt4all-chat/src/localdocsmodel.h
diff --git a/gpt4all-chat/logger.cpp b/gpt4all-chat/src/logger.cpp
similarity index 100%
rename from gpt4all-chat/logger.cpp
rename to gpt4all-chat/src/logger.cpp
diff --git a/gpt4all-chat/logger.h b/gpt4all-chat/src/logger.h
similarity index 100%
rename from gpt4all-chat/logger.h
rename to gpt4all-chat/src/logger.h
diff --git a/gpt4all-chat/main.cpp b/gpt4all-chat/src/main.cpp
similarity index 98%
rename from gpt4all-chat/main.cpp
rename to gpt4all-chat/src/main.cpp
index 49f1ea3a..8521f4ea 100644
--- a/gpt4all-chat/main.cpp
+++ b/gpt4all-chat/src/main.cpp
@@ -8,7 +8,7 @@
#include "mysettings.h"
#include "network.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/main.qml b/gpt4all-chat/src/main.qml
similarity index 100%
rename from gpt4all-chat/main.qml
rename to gpt4all-chat/src/main.qml
diff --git a/gpt4all-chat/modellist.cpp b/gpt4all-chat/src/modellist.cpp
similarity index 99%
rename from gpt4all-chat/modellist.cpp
rename to gpt4all-chat/src/modellist.cpp
index 580b615f..13711057 100644
--- a/gpt4all-chat/modellist.cpp
+++ b/gpt4all-chat/src/modellist.cpp
@@ -4,7 +4,7 @@
#include "mysettings.h"
#include "network.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/modellist.h b/gpt4all-chat/src/modellist.h
similarity index 100%
rename from gpt4all-chat/modellist.h
rename to gpt4all-chat/src/modellist.h
diff --git a/gpt4all-chat/mysettings.cpp b/gpt4all-chat/src/mysettings.cpp
similarity index 99%
rename from gpt4all-chat/mysettings.cpp
rename to gpt4all-chat/src/mysettings.cpp
index 525ccc1e..29354382 100644
--- a/gpt4all-chat/mysettings.cpp
+++ b/gpt4all-chat/src/mysettings.cpp
@@ -1,6 +1,6 @@
#include "mysettings.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/mysettings.h b/gpt4all-chat/src/mysettings.h
similarity index 100%
rename from gpt4all-chat/mysettings.h
rename to gpt4all-chat/src/mysettings.h
diff --git a/gpt4all-chat/network.cpp b/gpt4all-chat/src/network.cpp
similarity index 99%
rename from gpt4all-chat/network.cpp
rename to gpt4all-chat/src/network.cpp
index e7ee616c..19e96a10 100644
--- a/gpt4all-chat/network.cpp
+++ b/gpt4all-chat/src/network.cpp
@@ -9,7 +9,7 @@
#include "modellist.h"
#include "mysettings.h"
-#include "../gpt4all-backend/llmodel.h"
+#include <gpt4all-backend/llmodel.h>
#include
#include
diff --git a/gpt4all-chat/network.h b/gpt4all-chat/src/network.h
similarity index 100%
rename from gpt4all-chat/network.h
rename to gpt4all-chat/src/network.h
diff --git a/gpt4all-chat/qml/AddCollectionView.qml b/gpt4all-chat/src/qml/AddCollectionView.qml
similarity index 100%
rename from gpt4all-chat/qml/AddCollectionView.qml
rename to gpt4all-chat/src/qml/AddCollectionView.qml
diff --git a/gpt4all-chat/qml/AddModelView.qml b/gpt4all-chat/src/qml/AddModelView.qml
similarity index 100%
rename from gpt4all-chat/qml/AddModelView.qml
rename to gpt4all-chat/src/qml/AddModelView.qml
diff --git a/gpt4all-chat/qml/ApplicationSettings.qml b/gpt4all-chat/src/qml/ApplicationSettings.qml
similarity index 100%
rename from gpt4all-chat/qml/ApplicationSettings.qml
rename to gpt4all-chat/src/qml/ApplicationSettings.qml
diff --git a/gpt4all-chat/qml/ChatDrawer.qml b/gpt4all-chat/src/qml/ChatDrawer.qml
similarity index 100%
rename from gpt4all-chat/qml/ChatDrawer.qml
rename to gpt4all-chat/src/qml/ChatDrawer.qml
diff --git a/gpt4all-chat/qml/ChatView.qml b/gpt4all-chat/src/qml/ChatView.qml
similarity index 100%
rename from gpt4all-chat/qml/ChatView.qml
rename to gpt4all-chat/src/qml/ChatView.qml
diff --git a/gpt4all-chat/qml/CollectionsDrawer.qml b/gpt4all-chat/src/qml/CollectionsDrawer.qml
similarity index 100%
rename from gpt4all-chat/qml/CollectionsDrawer.qml
rename to gpt4all-chat/src/qml/CollectionsDrawer.qml
diff --git a/gpt4all-chat/qml/HomeView.qml b/gpt4all-chat/src/qml/HomeView.qml
similarity index 100%
rename from gpt4all-chat/qml/HomeView.qml
rename to gpt4all-chat/src/qml/HomeView.qml
diff --git a/gpt4all-chat/qml/LocalDocsSettings.qml b/gpt4all-chat/src/qml/LocalDocsSettings.qml
similarity index 100%
rename from gpt4all-chat/qml/LocalDocsSettings.qml
rename to gpt4all-chat/src/qml/LocalDocsSettings.qml
diff --git a/gpt4all-chat/qml/LocalDocsView.qml b/gpt4all-chat/src/qml/LocalDocsView.qml
similarity index 100%
rename from gpt4all-chat/qml/LocalDocsView.qml
rename to gpt4all-chat/src/qml/LocalDocsView.qml
diff --git a/gpt4all-chat/qml/ModelSettings.qml b/gpt4all-chat/src/qml/ModelSettings.qml
similarity index 100%
rename from gpt4all-chat/qml/ModelSettings.qml
rename to gpt4all-chat/src/qml/ModelSettings.qml
diff --git a/gpt4all-chat/qml/ModelsView.qml b/gpt4all-chat/src/qml/ModelsView.qml
similarity index 100%
rename from gpt4all-chat/qml/ModelsView.qml
rename to gpt4all-chat/src/qml/ModelsView.qml
diff --git a/gpt4all-chat/qml/MyBusyIndicator.qml b/gpt4all-chat/src/qml/MyBusyIndicator.qml
similarity index 100%
rename from gpt4all-chat/qml/MyBusyIndicator.qml
rename to gpt4all-chat/src/qml/MyBusyIndicator.qml
diff --git a/gpt4all-chat/qml/MyButton.qml b/gpt4all-chat/src/qml/MyButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MyButton.qml
rename to gpt4all-chat/src/qml/MyButton.qml
diff --git a/gpt4all-chat/qml/MyCheckBox.qml b/gpt4all-chat/src/qml/MyCheckBox.qml
similarity index 100%
rename from gpt4all-chat/qml/MyCheckBox.qml
rename to gpt4all-chat/src/qml/MyCheckBox.qml
diff --git a/gpt4all-chat/qml/MyComboBox.qml b/gpt4all-chat/src/qml/MyComboBox.qml
similarity index 100%
rename from gpt4all-chat/qml/MyComboBox.qml
rename to gpt4all-chat/src/qml/MyComboBox.qml
diff --git a/gpt4all-chat/qml/MyDialog.qml b/gpt4all-chat/src/qml/MyDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/MyDialog.qml
rename to gpt4all-chat/src/qml/MyDialog.qml
diff --git a/gpt4all-chat/qml/MyDirectoryField.qml b/gpt4all-chat/src/qml/MyDirectoryField.qml
similarity index 100%
rename from gpt4all-chat/qml/MyDirectoryField.qml
rename to gpt4all-chat/src/qml/MyDirectoryField.qml
diff --git a/gpt4all-chat/qml/MyFancyLink.qml b/gpt4all-chat/src/qml/MyFancyLink.qml
similarity index 100%
rename from gpt4all-chat/qml/MyFancyLink.qml
rename to gpt4all-chat/src/qml/MyFancyLink.qml
diff --git a/gpt4all-chat/qml/MyMenu.qml b/gpt4all-chat/src/qml/MyMenu.qml
similarity index 100%
rename from gpt4all-chat/qml/MyMenu.qml
rename to gpt4all-chat/src/qml/MyMenu.qml
diff --git a/gpt4all-chat/qml/MyMenuItem.qml b/gpt4all-chat/src/qml/MyMenuItem.qml
similarity index 100%
rename from gpt4all-chat/qml/MyMenuItem.qml
rename to gpt4all-chat/src/qml/MyMenuItem.qml
diff --git a/gpt4all-chat/qml/MyMiniButton.qml b/gpt4all-chat/src/qml/MyMiniButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MyMiniButton.qml
rename to gpt4all-chat/src/qml/MyMiniButton.qml
diff --git a/gpt4all-chat/qml/MySettingsButton.qml b/gpt4all-chat/src/qml/MySettingsButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MySettingsButton.qml
rename to gpt4all-chat/src/qml/MySettingsButton.qml
diff --git a/gpt4all-chat/qml/MySettingsDestructiveButton.qml b/gpt4all-chat/src/qml/MySettingsDestructiveButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MySettingsDestructiveButton.qml
rename to gpt4all-chat/src/qml/MySettingsDestructiveButton.qml
diff --git a/gpt4all-chat/qml/MySettingsLabel.qml b/gpt4all-chat/src/qml/MySettingsLabel.qml
similarity index 100%
rename from gpt4all-chat/qml/MySettingsLabel.qml
rename to gpt4all-chat/src/qml/MySettingsLabel.qml
diff --git a/gpt4all-chat/qml/MySettingsStack.qml b/gpt4all-chat/src/qml/MySettingsStack.qml
similarity index 100%
rename from gpt4all-chat/qml/MySettingsStack.qml
rename to gpt4all-chat/src/qml/MySettingsStack.qml
diff --git a/gpt4all-chat/qml/MySettingsTab.qml b/gpt4all-chat/src/qml/MySettingsTab.qml
similarity index 100%
rename from gpt4all-chat/qml/MySettingsTab.qml
rename to gpt4all-chat/src/qml/MySettingsTab.qml
diff --git a/gpt4all-chat/qml/MySlug.qml b/gpt4all-chat/src/qml/MySlug.qml
similarity index 100%
rename from gpt4all-chat/qml/MySlug.qml
rename to gpt4all-chat/src/qml/MySlug.qml
diff --git a/gpt4all-chat/qml/MyTextArea.qml b/gpt4all-chat/src/qml/MyTextArea.qml
similarity index 100%
rename from gpt4all-chat/qml/MyTextArea.qml
rename to gpt4all-chat/src/qml/MyTextArea.qml
diff --git a/gpt4all-chat/qml/MyTextButton.qml b/gpt4all-chat/src/qml/MyTextButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MyTextButton.qml
rename to gpt4all-chat/src/qml/MyTextButton.qml
diff --git a/gpt4all-chat/qml/MyTextField.qml b/gpt4all-chat/src/qml/MyTextField.qml
similarity index 100%
rename from gpt4all-chat/qml/MyTextField.qml
rename to gpt4all-chat/src/qml/MyTextField.qml
diff --git a/gpt4all-chat/qml/MyToolButton.qml b/gpt4all-chat/src/qml/MyToolButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MyToolButton.qml
rename to gpt4all-chat/src/qml/MyToolButton.qml
diff --git a/gpt4all-chat/qml/MyWelcomeButton.qml b/gpt4all-chat/src/qml/MyWelcomeButton.qml
similarity index 100%
rename from gpt4all-chat/qml/MyWelcomeButton.qml
rename to gpt4all-chat/src/qml/MyWelcomeButton.qml
diff --git a/gpt4all-chat/qml/NetworkDialog.qml b/gpt4all-chat/src/qml/NetworkDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/NetworkDialog.qml
rename to gpt4all-chat/src/qml/NetworkDialog.qml
diff --git a/gpt4all-chat/qml/NewVersionDialog.qml b/gpt4all-chat/src/qml/NewVersionDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/NewVersionDialog.qml
rename to gpt4all-chat/src/qml/NewVersionDialog.qml
diff --git a/gpt4all-chat/qml/PopupDialog.qml b/gpt4all-chat/src/qml/PopupDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/PopupDialog.qml
rename to gpt4all-chat/src/qml/PopupDialog.qml
diff --git a/gpt4all-chat/qml/SettingsView.qml b/gpt4all-chat/src/qml/SettingsView.qml
similarity index 100%
rename from gpt4all-chat/qml/SettingsView.qml
rename to gpt4all-chat/src/qml/SettingsView.qml
diff --git a/gpt4all-chat/qml/StartupDialog.qml b/gpt4all-chat/src/qml/StartupDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/StartupDialog.qml
rename to gpt4all-chat/src/qml/StartupDialog.qml
diff --git a/gpt4all-chat/qml/SwitchModelDialog.qml b/gpt4all-chat/src/qml/SwitchModelDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/SwitchModelDialog.qml
rename to gpt4all-chat/src/qml/SwitchModelDialog.qml
diff --git a/gpt4all-chat/qml/Theme.qml b/gpt4all-chat/src/qml/Theme.qml
similarity index 100%
rename from gpt4all-chat/qml/Theme.qml
rename to gpt4all-chat/src/qml/Theme.qml
diff --git a/gpt4all-chat/qml/ThumbsDownDialog.qml b/gpt4all-chat/src/qml/ThumbsDownDialog.qml
similarity index 100%
rename from gpt4all-chat/qml/ThumbsDownDialog.qml
rename to gpt4all-chat/src/qml/ThumbsDownDialog.qml
diff --git a/gpt4all-chat/qml/Toast.qml b/gpt4all-chat/src/qml/Toast.qml
similarity index 100%
rename from gpt4all-chat/qml/Toast.qml
rename to gpt4all-chat/src/qml/Toast.qml
diff --git a/gpt4all-chat/qml/ToastManager.qml b/gpt4all-chat/src/qml/ToastManager.qml
similarity index 100%
rename from gpt4all-chat/qml/ToastManager.qml
rename to gpt4all-chat/src/qml/ToastManager.qml
diff --git a/gpt4all-chat/server.cpp b/gpt4all-chat/src/server.cpp
similarity index 100%
rename from gpt4all-chat/server.cpp
rename to gpt4all-chat/src/server.cpp
diff --git a/gpt4all-chat/server.h b/gpt4all-chat/src/server.h
similarity index 100%
rename from gpt4all-chat/server.h
rename to gpt4all-chat/src/server.h
From e8d74d8bf4611f21cded1fab8eb26c34cb56e96b Mon Sep 17 00:00:00 2001
From: Riccardo Giovanetti <29801031+Harvester62@users.noreply.github.com>
Date: Wed, 28 Aug 2024 02:13:34 +0200
Subject: [PATCH 52/66] translations: update Italian (#2909)
Signed-off-by: Riccardo Giovanetti
---
gpt4all-chat/CHANGELOG.md | 2 +-
gpt4all-chat/translations/gpt4all_it_IT.ts | 12 ++++++------
2 files changed, 7 insertions(+), 7 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index c581eaf0..987b8370 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -11,7 +11,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Fixed
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
-- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872))
+- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872) and [#2909](https://github.com/nomic-ai/gpt4all/pull/2909))
- Correct typos in Traditional Chinese translation (by [@supersonictw](https://github.com/supersonictw) in [#2852](https://github.com/nomic-ai/gpt4all/pull/2852))
- Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
- Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index 9baf840d..85360148 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -1352,7 +1352,7 @@ modello per iniziare
Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
- Numero massimo di N corrispondenze migliori di frammenti di documento recuperati da aggiungere al contesto del prompt. Numeri più grandi aumentano la probabilità di risposte basate sui fatti, ma comportano anche una generazione più lenta.
+ Il numero massimo di frammenti di documento recuperati, che presentano le migliori corrispondenze, da includere nel contesto del prompt. Numeri più alti aumentano la probabilità di ricevere risposte basate sui fatti, ma comportano anche una generazione più lenta.
@@ -1708,7 +1708,7 @@ NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedi
Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.
- Possono essere scelti solo i token più probabili fino ad una probabilità totale di top_p.
+ Solo i token più probabili, fino a un totale di probabilità di top_p, possono essere scelti.
NOTA: impedisce la scelta di token altamente improbabili.
@@ -1724,7 +1724,7 @@ NOTA: impedisce la scelta di token altamente improbabili.
Sets the minimum relative probability for a token to be considered.
- Imposta la probabilità relativa minima che un token venga considerato.
+ Imposta la probabilità relativa minima affinché un token venga considerato.
@@ -1734,12 +1734,12 @@ NOTA: impedisce la scelta di token altamente improbabili.
Size of selection pool for tokens.
- Dimensione del pool di selezione per i token.
+ Dimensione del lotto di selezione per i token.
Only the top K most likely tokens will be chosen from.
- Solo i token Top-K più probabili verranno scelti.
+ Saranno scelti solo i primi K token più probabili.
@@ -1765,7 +1765,7 @@ NOTA: impedisce la scelta di token altamente improbabili.
Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.
- Quantità di token del prompt da elaborare contemporaneamente.
+ Numero di token del prompt da elaborare contemporaneamente.
NOTA: valori più alti possono velocizzare la lettura dei prompt ma utilizzeranno più RAM.
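Two of the settings whose Italian descriptions were refined above, top-k (the "selection pool" size) and min-p (the minimum relative probability), amount to simple filters over the token distribution. A hedged Python sketch, illustrative rather than the actual backend implementation:

    def top_k_filter(probs, k):
        # Keep only the k most likely token ids: the "selection pool".
        order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
        return order[:k]

    def min_p_filter(probs, min_p):
        # Keep ids whose probability is at least min_p times the best one,
        # i.e. a minimum probability *relative* to the top candidate.
        cap = max(probs)
        return [i for i, p in enumerate(probs) if p >= min_p * cap]

    # Example: with probs = [0.5, 0.3, 0.15, 0.05],
    # top_k_filter(probs, 2) == [0, 1] and min_p_filter(probs, 0.5) == [0, 1].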
From ed85cd8b6ac4c0385c39b83bb7cbb226966aa2ad Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Wed, 28 Aug 2024 12:49:43 -0400
Subject: [PATCH 53/66] qml: dynamic min win size, smaller default size,
scaling tweaks (#2904)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 3 +++
gpt4all-chat/src/main.qml | 8 ++++----
gpt4all-chat/src/qml/HomeView.qml | 12 ++++++------
3 files changed, 13 insertions(+), 10 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 987b8370..5b42240a 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -9,6 +9,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
+### Changed
+- Smaller default window size, dynamic minimum size, and scaling tweaks ([#2904](https://github.com/nomic-ai/gpt4all/pull/2904))
+
### Fixed
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
- Correct a few strings in the Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#2872](https://github.com/nomic-ai/gpt4all/pull/2872) and [#2909](https://github.com/nomic-ai/gpt4all/pull/2909))
diff --git a/gpt4all-chat/src/main.qml b/gpt4all-chat/src/main.qml
index 16b5b65c..e02141c4 100644
--- a/gpt4all-chat/src/main.qml
+++ b/gpt4all-chat/src/main.qml
@@ -15,10 +15,10 @@ import mysettings
Window {
id: window
- width: 1920
- height: 1080
- minimumWidth: 1280
- minimumHeight: 720
+ width: 1440
+ height: 810
+ minimumWidth: 658 + 470 * theme.fontScale
+ minimumHeight: 384 + 160 * theme.fontScale
visible: true
title: qsTr("GPT4All v%1").arg(Qt.application.version)
diff --git a/gpt4all-chat/src/qml/HomeView.qml b/gpt4all-chat/src/qml/HomeView.qml
index 780cb35a..eefff018 100644
--- a/gpt4all-chat/src/qml/HomeView.qml
+++ b/gpt4all-chat/src/qml/HomeView.qml
@@ -76,8 +76,8 @@ Rectangle {
MyWelcomeButton {
Layout.fillWidth: true
- Layout.maximumWidth: 500
- Layout.preferredHeight: 150
+ Layout.maximumWidth: 150 + 200 * theme.fontScale
+ Layout.preferredHeight: 40 + 90 * theme.fontScale
text: qsTr("Start Chatting")
description: qsTr("Chat with any LLM")
imageSource: "qrc:/gpt4all/icons/chat.svg"
@@ -87,8 +87,8 @@ Rectangle {
}
MyWelcomeButton {
Layout.fillWidth: true
- Layout.maximumWidth: 500
- Layout.preferredHeight: 150
+ Layout.maximumWidth: 150 + 200 * theme.fontScale
+ Layout.preferredHeight: 40 + 90 * theme.fontScale
text: qsTr("LocalDocs")
description: qsTr("Chat with your local files")
imageSource: "qrc:/gpt4all/icons/db.svg"
@@ -98,8 +98,8 @@ Rectangle {
}
MyWelcomeButton {
Layout.fillWidth: true
- Layout.maximumWidth: 500
- Layout.preferredHeight: 150
+ Layout.maximumWidth: 150 + 200 * theme.fontScale
+ Layout.preferredHeight: 40 + 90 * theme.fontScale
text: qsTr("Find Models")
description: qsTr("Explore and download models")
imageSource: "qrc:/gpt4all/icons/models.svg"
From 82491fe1540de52a150e1e7d00b91011b0e90275 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Thu, 29 Aug 2024 12:10:12 -0400
Subject: [PATCH 54/66] qml: fix copy-paste error in antenna description logic
(#2922)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/src/main.qml | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 5b42240a..7765932b 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -20,6 +20,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))
- Fix singular/plural forms of LocalDocs "x Sources" (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2885](https://github.com/nomic-ai/gpt4all/pull/2885))
- Fixed typo in several files. (by [@3Simplex](https://github.com/3Simplex) in [#2916](https://github.com/nomic-ai/gpt4all/pull/2916))
+- Fix the antenna icon tooltip when using the local server ([#2922](https://github.com/nomic-ai/gpt4all/pull/2922))
### Changed
- The offline update button now directs users to the offline installer releases page. (by [@3Simplex](https://github.com/3Simplex) in [#2888](https://github.com/nomic-ai/gpt4all/pull/2888))
diff --git a/gpt4all-chat/src/main.qml b/gpt4all-chat/src/main.qml
index e02141c4..1e685385 100644
--- a/gpt4all-chat/src/main.qml
+++ b/gpt4all-chat/src/main.qml
@@ -422,7 +422,7 @@ Window {
return qsTr("The datalake is enabled")
else if (currentChat.modelInfo.isOnline)
return qsTr("Using a network model")
- else if (currentChat.modelInfo.isOnline)
+ else if (currentChat.isServer)
return qsTr("Server mode is enabled")
return ""
}
From e1d49d970f274ba9dbb0bc9c8758b367d7377969 Mon Sep 17 00:00:00 2001
From: AT
Date: Thu, 29 Aug 2024 12:59:13 -0400
Subject: [PATCH 55/66] server: use configured system prompt, ignore system
messages (#2921)
Signed-off-by: Adam Treat
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/src/chatllm.cpp | 7 ++++---
gpt4all-chat/src/server.cpp | 4 ++++
3 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 7765932b..2bea4a7c 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -8,6 +8,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
+- Use configured system prompt in server mode and ignore system messages ([#2921](https://github.com/nomic-ai/gpt4all/pull/2921))
### Changed
- Smaller default window size, dynamic minimum size, and scaling tweaks ([#2904](https://github.com/nomic-ai/gpt4all/pull/2904))
diff --git a/gpt4all-chat/src/chatllm.cpp b/gpt4all-chat/src/chatllm.cpp
index e9fb7f31..86e0026c 100644
--- a/gpt4all-chat/src/chatllm.cpp
+++ b/gpt4all-chat/src/chatllm.cpp
@@ -719,8 +719,6 @@ bool ChatLLM::prompt(const QList<QString> &collectionList, const QString &prompt
processRestoreStateFromText();
}
- if (!m_processedSystemPrompt)
- processSystemPrompt();
const QString promptTemplate = MySettings::globalInstance()->modelPromptTemplate(m_modelInfo);
const int32_t n_predict = MySettings::globalInstance()->modelMaxLength(m_modelInfo);
const int32_t top_k = MySettings::globalInstance()->modelTopK(m_modelInfo);
@@ -741,6 +739,9 @@ bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString
if (!isModelLoaded())
return false;
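+    // Server mode calls promptInternal() directly, bypassing prompt(), so the system prompt must be handled here.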
+ if (!m_processedSystemPrompt)
+ processSystemPrompt();
+
QList databaseResults;
const int retrievalSize = MySettings::globalInstance()->localDocsRetrievalSize();
if (!collectionList.isEmpty()) {
@@ -1206,7 +1207,7 @@ void ChatLLM::restoreState()
void ChatLLM::processSystemPrompt()
{
Q_ASSERT(isModelLoaded());
- if (!isModelLoaded() || m_processedSystemPrompt || m_restoreStateFromText || m_isServer)
+ if (!isModelLoaded() || m_processedSystemPrompt || m_restoreStateFromText)
return;
const std::string systemPrompt = MySettings::globalInstance()->modelSystemPrompt(m_modelInfo).toStdString();
diff --git a/gpt4all-chat/src/server.cpp b/gpt4all-chat/src/server.cpp
index c8485d93..227c30a9 100644
--- a/gpt4all-chat/src/server.cpp
+++ b/gpt4all-chat/src/server.cpp
@@ -340,6 +340,10 @@ QHttpServerResponse Server::handleCompletionRequest(const QHttpServerRequest &re
QList chats;
for (int i = 0; i < messages.count(); ++i) {
QJsonValue v = messages.at(i);
+ // FIXME: Deal with system messages correctly
+ QString role = v.toObject()["role"].toString();
+ if (role != "user")
+ continue;
QString content = v.toObject()["content"].toString();
if (!content.endsWith("\n") && i < messages.count() - 1)
content += "\n";
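In effect, only the "user" entries of the request's messages array now reach the model; request-supplied system messages are ignored in favor of the configured system prompt. A minimal sketch of the resulting filter (hypothetical helper, Qt 6 — not part of the patch):

    #include <QJsonArray>
    #include <QJsonObject>
    #include <QStringList>

    // Collects the message contents that survive the role filter above:
    // "system" and "assistant" entries are dropped, "user" entries are kept.
    static QStringList collectUserMessages(const QJsonArray &messages)
    {
        QStringList contents;
        for (int i = 0; i < messages.count(); ++i) {
            const QJsonObject msg = messages.at(i).toObject();
            if (msg["role"].toString() != "user")
                continue; // skipped, matching the FIXME behavior in the patch
            contents << msg["content"].toString();
        }
        return contents;
    }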
From 2f02cd407f4b978108256f62424e638524e8da58 Mon Sep 17 00:00:00 2001
From: AT
Date: Fri, 30 Aug 2024 12:11:32 -0400
Subject: [PATCH 56/66] Only allow a single instance of the program to run at a
time (#2923)
Signed-off-by: Adam Treat
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
.gitmodules | 3 ++
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/CMakeLists.txt | 4 ++-
gpt4all-chat/deps/SingleApplication | 1 +
gpt4all-chat/src/main.cpp | 47 ++++++++++++++++++++++++++---
5 files changed, 50 insertions(+), 6 deletions(-)
create mode 160000 gpt4all-chat/deps/SingleApplication
diff --git a/.gitmodules b/.gitmodules
index 77533f6a..b59d07fd 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -5,3 +5,6 @@
[submodule "gpt4all-chat/usearch"]
path = gpt4all-chat/deps/usearch
url = https://github.com/nomic-ai/usearch.git
+[submodule "gpt4all-chat/deps/SingleApplication"]
+ path = gpt4all-chat/deps/SingleApplication
+ url = https://github.com/nomic-ai/SingleApplication.git
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 2bea4a7c..6f1457dd 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -12,6 +12,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Changed
- Smaller default window size, dynamic minimum size, and scaling tweaks ([#2904](https://github.com/nomic-ai/gpt4all/pull/2904))
+- Only allow a single instance of the program to run at a time ([#2923](https://github.com/nomic-ai/gpt4all/pull/2923))
### Fixed
- Bring back "Auto" option for Embeddings Device as "Application default," which went missing in v3.1.0 ([#2873](https://github.com/nomic-ai/gpt4all/pull/2873))
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index d8c92ead..fa70a793 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -105,6 +105,8 @@ if (APPLE)
list(APPEND CHAT_EXE_RESOURCES "${LOCAL_EMBEDDING_MODEL_PATH}")
endif()
+set(QAPPLICATION_CLASS QGuiApplication)
+add_subdirectory(deps/SingleApplication)
add_subdirectory(src)
target_sources(chat PRIVATE ${APP_ICON_RESOURCE} ${CHAT_EXE_RESOURCES})
@@ -238,7 +240,7 @@ else()
PRIVATE Qt6::Quick Qt6::Svg Qt6::HttpServer Qt6::Sql Qt6::Pdf)
endif()
target_link_libraries(chat
- PRIVATE llmodel)
+ PRIVATE llmodel SingleApplication)
# -- install --
diff --git a/gpt4all-chat/deps/SingleApplication b/gpt4all-chat/deps/SingleApplication
new file mode 160000
index 00000000..21bdef01
--- /dev/null
+++ b/gpt4all-chat/deps/SingleApplication
@@ -0,0 +1 @@
+Subproject commit 21bdef01eddcbd78044eea1d50b9dee08d218ff2
diff --git a/gpt4all-chat/src/main.cpp b/gpt4all-chat/src/main.cpp
index 8521f4ea..6aab7b51 100644
--- a/gpt4all-chat/src/main.cpp
+++ b/gpt4all-chat/src/main.cpp
@@ -9,15 +9,14 @@
#include "network.h"
#include
+#include
#include
-#include
#include
#include
-#include
+#include
#include
#include
-#include
#include
#include
@@ -25,6 +24,29 @@
# include
#endif
+#ifdef Q_OS_WINDOWS
+# include <windows.h>
+#endif
+
+using namespace Qt::Literals::StringLiterals;
+
+
+static void raiseWindow(QWindow *window)
+{
+#ifdef Q_OS_WINDOWS
+ HWND hwnd = HWND(window->winId());
+
+ // check if window is minimized to Windows task bar
+ if (IsIconic(hwnd))
+ ShowWindow(hwnd, SW_RESTORE);
+
+ SetForegroundWindow(hwnd);
+#else
+ window->show();
+ window->raise();
+ window->requestActivate();
+#endif
+}
int main(int argc, char *argv[])
{
@@ -36,7 +58,15 @@ int main(int argc, char *argv[])
Logger::globalInstance();
- QGuiApplication app(argc, argv);
+ SingleApplication app(argc, argv, true /*allowSecondary*/);
+ if (app.isSecondary()) {
+#ifdef Q_OS_WINDOWS
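+        // grant the primary instance permission to bring its window to the foreground when it receives our message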
+ AllowSetForegroundWindow(DWORD(app.primaryPid()));
+#endif
+ app.sendMessage("RAISE_WINDOW");
+ return 0;
+ }
+
#ifdef Q_OS_LINUX
app.setWindowIcon(QIcon(":/gpt4all/icons/gpt4all.svg"));
#endif
@@ -77,7 +107,7 @@ int main(int argc, char *argv[])
qmlRegisterSingletonInstance("localdocs", 1, 0, "LocalDocs", LocalDocs::globalInstance());
qmlRegisterUncreatableMetaObject(MySettingsEnums::staticMetaObject, "mysettingsenums", 1, 0, "MySettingsEnums", "Error: only enums");
- const QUrl url(u"qrc:/gpt4all/main.qml"_qs);
+ const QUrl url(u"qrc:/gpt4all/main.qml"_s);
QObject::connect(&engine, &QQmlApplicationEngine::objectCreated,
&app, [url](QObject *obj, const QUrl &objUrl) {
@@ -86,6 +116,13 @@ int main(int argc, char *argv[])
}, Qt::QueuedConnection);
engine.load(url);
+ QObject *rootObject = engine.rootObjects().first();
+    QQuickWindow *windowObject = qobject_cast<QQuickWindow *>(rootObject);
+ Q_ASSERT(windowObject);
+ if (windowObject)
+ QObject::connect(&app, &SingleApplication::receivedMessage,
+ windowObject, [windowObject] () { raiseWindow(windowObject); } );
+
#if 0
QDirIterator it("qrc:", QDirIterator::Subdirectories);
while (it.hasNext()) {
From 813ccaf5d10a27c23f9fedf43d9e2a87bf64d8b8 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Fri, 30 Aug 2024 12:30:24 -0400
Subject: [PATCH 57/66] server: do not process the system prompt twice for new
models (#2924)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 8 +++-----
gpt4all-chat/src/chat.cpp | 1 -
gpt4all-chat/src/chat.h | 1 -
gpt4all-chat/src/chatllm.cpp | 21 ++++++++++++++-------
gpt4all-chat/src/server.cpp | 8 ++++----
5 files changed, 21 insertions(+), 18 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 6f1457dd..47926f05 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -8,9 +8,11 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
-- Use configured system prompt in server mode and ignore system messages ([#2921](https://github.com/nomic-ai/gpt4all/pull/2921))
+- Use configured system prompt in server mode and ignore system messages ([#2921](https://github.com/nomic-ai/gpt4all/pull/2921), [#2924](https://github.com/nomic-ai/gpt4all/pull/2924))
### Changed
+- The offline update button now directs users to the offline installer releases page. (by [@3Simplex](https://github.com/3Simplex) in [#2888](https://github.com/nomic-ai/gpt4all/pull/2888))
+- Change the website link on the home page to point to the new URL ([#2915](https://github.com/nomic-ai/gpt4all/pull/2915))
- Smaller default window size, dynamic minimum size, and scaling tweaks ([#2904](https://github.com/nomic-ai/gpt4all/pull/2904))
- Only allow a single instance of the program to run at a time ([#2923](https://github.com/nomic-ai/gpt4all/pull/2923))
@@ -24,10 +26,6 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fixed typo in several files. (by [@3Simplex](https://github.com/3Simplex) in [#2916](https://github.com/nomic-ai/gpt4all/pull/2916))
- Fix the antenna icon tooltip when using the local server ([#2922](https://github.com/nomic-ai/gpt4all/pull/2922))
-### Changed
-- The offline update button now directs users to the offline installer releases page. (by [@3Simplex](https://github.com/3Simplex) in [#2888](https://github.com/nomic-ai/gpt4all/pull/2888))
-- Change the website link on the home page to point to the new URL ([#2915](https://github.com/nomic-ai/gpt4all/pull/2915))
-
## [3.2.1] - 2024-08-13
### Fixed
diff --git a/gpt4all-chat/src/chat.cpp b/gpt4all-chat/src/chat.cpp
index a44022c0..d9a66091 100644
--- a/gpt4all-chat/src/chat.cpp
+++ b/gpt4all-chat/src/chat.cpp
@@ -74,7 +74,6 @@ void Chat::connectLLM()
connect(this, &Chat::promptRequested, m_llmodel, &ChatLLM::prompt, Qt::QueuedConnection);
connect(this, &Chat::modelChangeRequested, m_llmodel, &ChatLLM::modelChangeRequested, Qt::QueuedConnection);
connect(this, &Chat::loadDefaultModelRequested, m_llmodel, &ChatLLM::loadDefaultModel, Qt::QueuedConnection);
- connect(this, &Chat::loadModelRequested, m_llmodel, &ChatLLM::loadModel, Qt::QueuedConnection);
connect(this, &Chat::generateNameRequested, m_llmodel, &ChatLLM::generateName, Qt::QueuedConnection);
connect(this, &Chat::regenerateResponseRequested, m_llmodel, &ChatLLM::regenerateResponse, Qt::QueuedConnection);
connect(this, &Chat::resetResponseRequested, m_llmodel, &ChatLLM::resetResponse, Qt::QueuedConnection);
diff --git a/gpt4all-chat/src/chat.h b/gpt4all-chat/src/chat.h
index 065c624e..e3cdc756 100644
--- a/gpt4all-chat/src/chat.h
+++ b/gpt4all-chat/src/chat.h
@@ -146,7 +146,6 @@ Q_SIGNALS:
void modelInfoChanged();
void restoringFromTextChanged();
void loadDefaultModelRequested();
- void loadModelRequested(const ModelInfo &modelInfo);
void generateNameRequested();
void modelLoadingErrorChanged();
void isServerChanged();
diff --git a/gpt4all-chat/src/chatllm.cpp b/gpt4all-chat/src/chatllm.cpp
index 86e0026c..fd9316f5 100644
--- a/gpt4all-chat/src/chatllm.cpp
+++ b/gpt4all-chat/src/chatllm.cpp
@@ -249,9 +249,11 @@ bool ChatLLM::loadModel(const ModelInfo &modelInfo)
// and what the type and name of that model is. I've tried to comment extensively in this method
// to provide an overview of what we're doing here.
- // We're already loaded with this model
- if (isModelLoaded() && this->modelInfo() == modelInfo)
- return true;
+ if (isModelLoaded() && this->modelInfo() == modelInfo) {
+ // already acquired -> keep it and reset
+ resetContext();
+ return true; // already loaded
+ }
// reset status
emit modelLoadingPercentageChanged(std::numeric_limits<float>::min()); // small non-zero positive value
@@ -659,20 +661,25 @@ void ChatLLM::setModelInfo(const ModelInfo &modelInfo)
emit modelInfoChanged(modelInfo);
}
-void ChatLLM::acquireModel() {
+void ChatLLM::acquireModel()
+{
m_llModelInfo = LLModelStore::globalInstance()->acquireModel();
emit loadedModelInfoChanged();
}
-void ChatLLM::resetModel() {
+void ChatLLM::resetModel()
+{
m_llModelInfo = {};
emit loadedModelInfoChanged();
}
void ChatLLM::modelChangeRequested(const ModelInfo &modelInfo)
{
- m_shouldBeLoaded = true;
- loadModel(modelInfo);
+    // ignore requests to switch to a model that is already loaded
+ if (!isModelLoaded() || this->modelInfo() != modelInfo) {
+ m_shouldBeLoaded = true;
+ loadModel(modelInfo);
+ }
}
bool ChatLLM::handlePrompt(int32_t token)
diff --git a/gpt4all-chat/src/server.cpp b/gpt4all-chat/src/server.cpp
index 227c30a9..1da962f5 100644
--- a/gpt4all-chat/src/server.cpp
+++ b/gpt4all-chat/src/server.cpp
@@ -361,14 +361,14 @@ QHttpServerResponse Server::handleCompletionRequest(const QHttpServerRequest &re
if (modelInfo.filename().isEmpty()) {
std::cerr << "ERROR: couldn't load default model " << modelRequested.toStdString() << std::endl;
return QHttpServerResponse(QHttpServerResponder::StatusCode::BadRequest);
- } else if (!loadModel(modelInfo)) {
+ }
+
+ // NB: this resets the context, regardless of whether this model is already loaded
+ if (!loadModel(modelInfo)) {
std::cerr << "ERROR: couldn't load model " << modelInfo.name().toStdString() << std::endl;
return QHttpServerResponse(QHttpServerResponder::StatusCode::InternalServerError);
}
- // don't remember any context
- resetContext();
-
const QString promptTemplate = modelInfo.promptTemplate();
const float top_k = modelInfo.topK();
const int n_batch = modelInfo.promptBatchSize();
From 55946ffc93dc5377e43f002797f1ceb0551e6ced Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Fri, 30 Aug 2024 12:44:10 -0400
Subject: [PATCH 58/66] modellist: fix a few issues with loading remote models
(#2875)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 3 +-
gpt4all-chat/src/modellist.cpp | 243 ++---
gpt4all-chat/src/modellist.h | 2 +
gpt4all-chat/translations/gpt4all_en_US.ts | 983 +++++++++++----------
gpt4all-chat/translations/gpt4all_es_MX.ts | 983 +++++++++++----------
gpt4all-chat/translations/gpt4all_it_IT.ts | 983 +++++++++++----------
gpt4all-chat/translations/gpt4all_pt_BR.ts | 983 +++++++++++----------
gpt4all-chat/translations/gpt4all_ro_RO.ts | 983 +++++++++++----------
gpt4all-chat/translations/gpt4all_zh_CN.ts | 983 +++++++++++----------
gpt4all-chat/translations/gpt4all_zh_TW.ts | 983 +++++++++++----------
10 files changed, 3608 insertions(+), 3521 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 47926f05..c774a040 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -23,8 +23,9 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Set the window icon on Linux ([#2880](https://github.com/nomic-ai/gpt4all/pull/2880))
- Corrections to the Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#2890](https://github.com/nomic-ai/gpt4all/pull/2890))
- Fix singular/plural forms of LocalDocs "x Sources" (by [@cosmic-snow](https://github.com/cosmic-snow) in [#2885](https://github.com/nomic-ai/gpt4all/pull/2885))
-- Fixed typo in several files. (by [@3Simplex](https://github.com/3Simplex) in [#2916](https://github.com/nomic-ai/gpt4all/pull/2916))
+- Fix a typo in Model Settings (by [@3Simplex](https://github.com/3Simplex) in [#2916](https://github.com/nomic-ai/gpt4all/pull/2916))
- Fix the antenna icon tooltip when using the local server ([#2922](https://github.com/nomic-ai/gpt4all/pull/2922))
+- Fix a few issues with locating files and handling errors when loading remote models on startup ([#2875](https://github.com/nomic-ai/gpt4all/pull/2875))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/src/modellist.cpp b/gpt4all-chat/src/modellist.cpp
index 13711057..d53c8fbf 100644
--- a/gpt4all-chat/src/modellist.cpp
+++ b/gpt4all-chat/src/modellist.cpp
@@ -1208,132 +1208,139 @@ bool ModelList::modelExists(const QString &modelFilename) const
return false;
}
+void ModelList::updateOldRemoteModels(const QString &path)
+{
+ QDirIterator it(path, QDir::Files, QDirIterator::Subdirectories);
+ while (it.hasNext()) {
+ QFileInfo info = it.nextFileInfo();
+ QString filename = it.fileName();
+ if (!filename.startsWith("chatgpt-") || !filename.endsWith(".txt"))
+ continue;
+
+ QString apikey;
+ QString modelname(filename);
+ modelname.chop(4); // strip ".txt" extension
+ modelname.remove(0, 8); // strip "chatgpt-" prefix
+ QFile file(info.filePath());
+ if (!file.open(QIODevice::ReadOnly)) {
+ qWarning().noquote() << tr("cannot open \"%1\": %2").arg(file.fileName(), file.errorString());
+ continue;
+ }
+
+ {
+ QTextStream in(&file);
+ apikey = in.readAll();
+ file.close();
+ }
+
+ QFile newfile(u"%1/gpt4all-%2.rmodel"_s.arg(info.dir().path(), modelname));
+ if (!newfile.open(QIODevice::ReadWrite)) {
+ qWarning().noquote() << tr("cannot create \"%1\": %2").arg(newfile.fileName(), file.errorString());
+ continue;
+ }
+
+ QJsonObject obj {
+ { "apiKey", apikey },
+ { "modelName", modelname },
+ };
+
+ QTextStream out(&newfile);
+ out << QJsonDocument(obj).toJson();
+ newfile.close();
+
+ file.remove();
+ }
+}
+
+void ModelList::processModelDirectory(const QString &path)
+{
+ QDirIterator it(path, QDir::Files, QDirIterator::Subdirectories);
+ while (it.hasNext()) {
+ QFileInfo info = it.nextFileInfo();
+
+ QString filename = it.fileName();
+ if (filename.startsWith("incomplete") || FILENAME_BLACKLIST.contains(filename))
+ continue;
+ if (!filename.endsWith(".gguf") && !filename.endsWith(".rmodel"))
+ continue;
+
+ bool isOnline(filename.endsWith(".rmodel"));
+ bool isCompatibleApi(filename.endsWith("-capi.rmodel"));
+
+ QString name;
+ QString description;
+ if (isCompatibleApi) {
+ QJsonObject obj;
+ {
+ QFile file(info.filePath());
+ if (!file.open(QIODeviceBase::ReadOnly)) {
+ qWarning().noquote() << tr("cannot open \"%1\": %2").arg(file.fileName(), file.errorString());
+ continue;
+ }
+ QJsonDocument doc = QJsonDocument::fromJson(file.readAll());
+ obj = doc.object();
+ }
+ {
+ QString apiKey(obj["apiKey"].toString());
+ QString baseUrl(obj["baseUrl"].toString());
+ QString modelName(obj["modelName"].toString());
+ apiKey = apiKey.length() < 10 ? "*****" : apiKey.left(5) + "*****";
+ name = tr("%1 (%2)").arg(modelName, baseUrl);
+ description = tr("OpenAI-Compatible API Model "
+ "
API Key: %1
"
+ "
Base URL: %2
"
+ "
Model Name: %3
")
+ .arg(apiKey, baseUrl, modelName);
+ }
+ }
+
+        QVector<QString> modelsById;
+ {
+ QMutexLocker locker(&m_mutex);
+ for (ModelInfo *info : m_models)
+ if (info->filename() == filename)
+ modelsById.append(info->id());
+ }
+
+ if (modelsById.isEmpty()) {
+ if (!contains(filename))
+ addModel(filename);
+ modelsById.append(filename);
+ }
+
+ for (const QString &id : modelsById) {
+            QVector<QPair<int, QVariant>> data {
+ { InstalledRole, true },
+ { FilenameRole, filename },
+ { OnlineRole, isOnline },
+ { CompatibleApiRole, isCompatibleApi },
+ { DirpathRole, info.dir().absolutePath() + "/" },
+ { FilesizeRole, toFileSize(info.size()) },
+ };
+ if (isCompatibleApi) {
+ // The data will be saved to "GPT4All.ini".
+ data.append({ NameRole, name });
+                // The description is hard-coded into "GPT4All.ini" for performance reasons:
+                // reading it from the .rmodel file on demand would cause heavy I/O whenever the ModelList is in use.
+                data.append({ DescriptionRole, description });
+                // Leave the prompt template as a plain "%1": most OpenAI-compatible API servers expect the ChatML format, so no extra wrapping is applied.
+ data.append({ PromptTemplateRole, "%1" });
+ }
+ updateData(id, data);
+ }
+ }
+}
+
void ModelList::updateModelsFromDirectory()
{
const QString exePath = QCoreApplication::applicationDirPath() + QDir::separator();
const QString localPath = MySettings::globalInstance()->modelPath();
- auto updateOldRemoteModels = [&](const QString& path) {
- QDirIterator it(path, QDirIterator::Subdirectories);
- while (it.hasNext()) {
- it.next();
- if (!it.fileInfo().isDir()) {
- QString filename = it.fileName();
- if (filename.startsWith("chatgpt-") && filename.endsWith(".txt")) {
- QString apikey;
- QString modelname(filename);
- modelname.chop(4); // strip ".txt" extension
- modelname.remove(0, 8); // strip "chatgpt-" prefix
- QFile file(path + filename);
- if (file.open(QIODevice::ReadWrite)) {
- QTextStream in(&file);
- apikey = in.readAll();
- file.close();
- }
-
- QJsonObject obj;
- obj.insert("apiKey", apikey);
- obj.insert("modelName", modelname);
- QJsonDocument doc(obj);
-
- auto newfilename = u"gpt4all-%1.rmodel"_s.arg(modelname);
- QFile newfile(path + newfilename);
- if (newfile.open(QIODevice::ReadWrite)) {
- QTextStream out(&newfile);
- out << doc.toJson();
- newfile.close();
- }
- file.remove();
- }
- }
- }
- };
-
- auto processDirectory = [&](const QString& path) {
- QDirIterator it(path, QDir::Files, QDirIterator::Subdirectories);
- while (it.hasNext()) {
- it.next();
-
- QString filename = it.fileName();
- if (filename.startsWith("incomplete") || FILENAME_BLACKLIST.contains(filename))
- continue;
- if (!filename.endsWith(".gguf") && !filename.endsWith(".rmodel"))
- continue;
-
-            QVector<QString> modelsById;
- {
- QMutexLocker locker(&m_mutex);
- for (ModelInfo *info : m_models)
- if (info->filename() == filename)
- modelsById.append(info->id());
- }
-
- if (modelsById.isEmpty()) {
- if (!contains(filename))
- addModel(filename);
- modelsById.append(filename);
- }
-
- QFileInfo info = it.fileInfo();
-
- bool isOnline(filename.endsWith(".rmodel"));
- bool isCompatibleApi(filename.endsWith("-capi.rmodel"));
-
- QString name;
- QString description;
- if (isCompatibleApi) {
- QJsonObject obj;
- {
- QFile file(path + filename);
- bool success = file.open(QIODeviceBase::ReadOnly);
- (void)success;
- Q_ASSERT(success);
- QJsonDocument doc = QJsonDocument::fromJson(file.readAll());
- obj = doc.object();
- }
- {
- QString apiKey(obj["apiKey"].toString());
- QString baseUrl(obj["baseUrl"].toString());
- QString modelName(obj["modelName"].toString());
- apiKey = apiKey.length() < 10 ? "*****" : apiKey.left(5) + "*****";
- name = tr("%1 (%2)").arg(modelName, baseUrl);
- description = tr("OpenAI-Compatible API Model "
- "
API Key: %1
"
- "
Base URL: %2
"
- "
Model Name: %3
")
- .arg(apiKey, baseUrl, modelName);
- }
- }
-
- for (const QString &id : modelsById) {
-                QVector<QPair<int, QVariant>> data {
- { InstalledRole, true },
- { FilenameRole, filename },
- { OnlineRole, isOnline },
- { CompatibleApiRole, isCompatibleApi },
- { DirpathRole, info.dir().absolutePath() + "/" },
- { FilesizeRole, toFileSize(info.size()) },
- };
- if (isCompatibleApi) {
- // The data will be saved to "GPT4All.ini".
- data.append({ NameRole, name });
- // The description is hard-coded into "GPT4All.ini" due to performance issue.
- // If the description goes to be dynamic from its .rmodel file, it will get high I/O usage while using the ModelList.
- data.append({ DescriptionRole, description });
- // Prompt template should be clear while using ChatML format which is using in most of OpenAI-Compatible API server.
- data.append({ PromptTemplateRole, "%1" });
- }
- updateData(id, data);
- }
- }
- };
-
-
updateOldRemoteModels(exePath);
- processDirectory(exePath);
+ processModelDirectory(exePath);
if (localPath != exePath) {
updateOldRemoteModels(localPath);
- processDirectory(localPath);
+ processModelDirectory(localPath);
}
}
diff --git a/gpt4all-chat/src/modellist.h b/gpt4all-chat/src/modellist.h
index 7c13da8e..21d9aeef 100644
--- a/gpt4all-chat/src/modellist.h
+++ b/gpt4all-chat/src/modellist.h
@@ -502,6 +502,8 @@ private:
void parseModelsJsonFile(const QByteArray &jsonData, bool save);
void parseDiscoveryJsonFile(const QByteArray &jsonData);
QString uniqueModelName(const ModelInfo &model) const;
+ void updateOldRemoteModels(const QString &path);
+ void processModelDirectory(const QString &path);
private:
mutable QMutex m_mutex;
diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
index be7dee3f..e46ae94c 100644
--- a/gpt4all-chat/translations/gpt4all_en_US.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
    [Hunk bodies omitted: stripped <location> line-number updates that retarget every existing source string after the modellist.cpp refactor, plus new unfinished ModelList entries for the error strings introduced above ("cannot open \"%1\": %2", "cannot create \"%1\": %2", and "%1 (%2)").]
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index 0c9f516f..a3290102 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
    [Hunk bodies omitted: the same <location> line-number updates applied to the Spanish (MX) catalog; the existing source/translation string pairs are unchanged.]
-
+ GPT4AllGPT4All
-
+ YouTú
@@ -905,129 +905,129 @@
recalculando contexto ...
-
+ response stopped ...respuesta detenida ...
-
+ processing ...procesando ...
-
+ generating response ...generando respuesta ...
-
+ generating questions ...generando preguntas ...
-
-
+
+ CopyCopiar
-
+ Copy MessageCopiar mensaje
-
+ Disable markdownDesactivar markdown
-
+ Enable markdownActivar markdown
-
+ Thumbs upMe gusta
-
+ Gives a thumbs up to the responseDa un me gusta a la respuesta
-
+ Thumbs downNo me gusta
-
+ Opens thumbs down dialogAbre el diálogo de no me gusta
-
+ Suggested follow-upsSeguimientos sugeridos
-
+ Erase and reset chat sessionBorrar y reiniciar sesión de chat
-
+ Copy chat session to clipboardCopiar sesión de chat al portapapeles
-
+ Redo last chat responseRehacer última respuesta del chat
-
+ Stop generatingDetener generación
-
+ Stop the current response generationDetener la generación de la respuesta actual
-
+ Reloads the modelRecarga el modelo
-
-
+
+ Reload · %1Recargar · %1
-
+ Loading · %1Cargando · %1
-
+ Load · %1 (default) →Cargar · %1 (predeterminado) →
-
+ retrieving localdocs: %1 ...recuperando documentos locales: %1 ...
-
+ searching localdocs: %1 ...buscando en documentos locales: %1 ...
-
+ %n Source(s)%n Fuente
@@ -1035,47 +1035,47 @@
-
+ Send a message...Enviar un mensaje...
-
+ Load a model to continue...Carga un modelo para continuar...
-
+ Send messages/prompts to the modelEnviar mensajes/indicaciones al modelo
-
+ CutCortar
-
+ PastePegar
-
+ Select AllSeleccionar todo
-
+ Send messageEnviar mensaje
-
+ Sends the message/prompt contained in textfield to the modelEnvía el mensaje/indicación contenido en el campo de texto al modelo
-
+ GPT4All requires that you install at least one
model to get startedGPT4All requiere que instale al menos un
@@ -1083,7 +1083,7 @@ modelo para comenzar
-
+ restoring from text ...restaurando desde texto ...
@@ -1091,12 +1091,12 @@ modelo para comenzar
CollectionsDrawer
-
+ Warning: searching collections while indexing can return incomplete resultsAdvertencia: buscar en colecciones mientras se indexan puede devolver resultados incompletos
-
+ %n file(s)%n archivo
@@ -1104,7 +1104,7 @@ modelo para comenzar
-
+ %n word(s)%n palabra
@@ -1112,17 +1112,17 @@ modelo para comenzar
-
+ UpdatingActualizando
-
+ + Add Docs+ Agregar documentos
-
+ Select a collection to make it available to the chat model.Seleccione una colección para hacerla disponible al modelo de chat.
@@ -1130,37 +1130,37 @@ modelo para comenzar
Download
-
+ Model "%1" is installed successfully.El modelo "%1" se ha instalado correctamente.
-
+ ERROR: $MODEL_NAME is empty.ERROR: $MODEL_NAME está vacío.
-
+ ERROR: $API_KEY is empty.ERROR: $API_KEY está vacía.
-
+ ERROR: $BASE_URL is invalid.ERROR: $BASE_URL no es válida.
-
+ ERROR: Model "%1 (%2)" is conflict.ERROR: El modelo "%1 (%2)" está en conflicto.
-
+ Model "%1 (%2)" is installed successfully.El modelo "%1 (%2)" se ha instalado correctamente.
-
+ Model "%1" is removed.El modelo "%1" ha sido eliminado.
@@ -1168,92 +1168,92 @@ modelo para comenzar
HomeView
-
+ Welcome to GPT4AllBienvenido a GPT4All
-
+ The privacy-first LLM chat applicationLa aplicación de chat LLM que prioriza la privacidad
-
+ Start chattingComenzar a chatear
-
+ Start ChattingIniciar chat
-
+ Chat with any LLMChatear con cualquier LLM
-
+ LocalDocsDocumentosLocales
-
+ Chat with your local filesChatear con tus archivos locales
-
+ Find ModelsBuscar modelos
-
+ Explore and download modelsExplorar y descargar modelos
-
+ Latest newsÚltimas noticias
-
+ Latest news from GPT4AllÚltimas noticias de GPT4All
-
+ Release NotesNotas de la versión
-
+ DocumentationDocumentación
-
+ DiscordDiscord
-
+ X (Twitter)X (Twitter)
-
+ GithubGithub
-
+ nomic.ainomic.ai
-
+ Subscribe to NewsletterSuscribirse al boletín
@@ -1261,22 +1261,22 @@ modelo para comenzar
LocalDocsSettings
-
+ LocalDocsDocumentosLocales
-
+ LocalDocs SettingsConfiguración de DocumentosLocales
-
+ IndexingIndexación
-
+ Allowed File ExtensionsExtensiones de archivo permitidas
@@ -1287,12 +1287,12 @@ modelo para comenzar
archivos con estas extensiones.
-
+ EmbeddingIncrustación
-
+ Use Nomic Embed APIUsar la API Nomic Embed
@@ -1303,77 +1303,77 @@ modelo para comenzar
local privado. Requiere reinicio.
-
+ Nomic API KeyClave API de Nomic
-
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Clave API para usar con Nomic Embed. Obtén una en la <a href="https://atlas.nomic.ai/cli-login">página de claves API</a> de Atlas. Requiere reinicio.
-
+ Embeddings DeviceDispositivo de incrustaciones
-
+ The compute device used for embeddings. Requires restart.El dispositivo de cómputo utilizado para las incrustaciones. Requiere reinicio.
-
+ Comma-separated list. LocalDocs will only attempt to process files with these extensions.Lista separada por comas. LocalDocs solo intentará procesar archivos con estas extensiones.
-
+ Embed documents using the fast Nomic API instead of a private local model. Requires restart.Incrustar documentos usando la API rápida de Nomic en lugar de un modelo local privado. Requiere reinicio.
-
+ Application defaultPredeterminado de la aplicación
-
+ DisplayVisualización
-
+ Show SourcesMostrar fuentes
-
+ Display the sources used for each response.Mostrar las fuentes utilizadas para cada respuesta.
-
+ AdvancedAvanzado
-
+ Warning: Advanced usage only.Advertencia: Solo para uso avanzado.
-
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valores demasiado grandes pueden causar fallos en localdocs, respuestas extremadamente lentas o falta de respuesta. En términos generales, los {N caracteres x N fragmentos} se añaden a la ventana de contexto del modelo. Más información <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
-
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Número de caracteres por fragmento de documento. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta.
-
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Máximo de N mejores coincidencias de fragmentos de documentos recuperados para añadir al contexto del prompt. Números más grandes aumentan la probabilidad de respuestas verídicas, pero también resultan en una generación más lenta.
@@ -1383,12 +1383,12 @@ modelo para comenzar
Valores demasiado grandes pueden causar fallos en documentos locales, respuestas extremadamente lentas o falta de respuesta. En términos generales, los {N caracteres x N fragmentos} se agregan a la ventana de contexto del modelo. Más información <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aquí</a>.
-
+ Document snippet size (characters)Tamaño del fragmento de documento (caracteres)
-
+ Max document snippets per promptMáximo de fragmentos de documento por indicación
@@ -1396,17 +1396,17 @@ modelo para comenzar
LocalDocsView
-
+ LocalDocsDocumentosLocales
-
+ Chat with your local filesChatea con tus archivos locales
-
+ + Add Collection+ Agregar colección
@@ -1415,97 +1415,97 @@ modelo para comenzar
ERROR: La base de datos de DocumentosLocales no es válida.
-
+ No Collections InstalledNo hay colecciones instaladas
-
+ Install a collection of local documents to get started using this featureInstala una colección de documentos locales para comenzar a usar esta función
-
+ + Add Doc Collection+ Agregar colección de documentos
-
+ Shows the add model viewMuestra la vista de agregar modelo
-
+ Indexing progressBarBarra de progreso de indexación
-
+ Shows the progress made in the indexingMuestra el progreso realizado en la indexación
-
+ ERRORERROR
-
+ INDEXINGINDEXANDO
-
+ EMBEDDINGINCRUSTANDO
-
+ REQUIRES UPDATEREQUIERE ACTUALIZACIÓN
-
+ READYLISTO
-
+ INSTALLINGINSTALANDO
-
+ Indexing in progressIndexación en progreso
-
+ Embedding in progressIncrustación en progreso
-
+ This collection requires an update after version changeEsta colección requiere una actualización después del cambio de versión
-
+ Automatically reindexes upon changes to the folderReindexa automáticamente cuando hay cambios en la carpeta
-
+ Installation in progressInstalación en progreso
-
+ %%
-
+ %n file(s)%n archivo
@@ -1513,7 +1513,7 @@ modelo para comenzar
-
+ %n word(s)%n palabra
@@ -1521,32 +1521,32 @@ modelo para comenzar
-
+ RemoveEliminar
-
+ RebuildReconstruir
-
+ Reindex this folder from scratch. This is slow and usually not needed.Reindexar esta carpeta desde cero. Esto es lento y generalmente no es necesario.
-
+ UpdateActualizar
-
+ Update the collection to the new version. This is a slow operation.Actualizar la colección a la nueva versión. Esta es una operación lenta.
-
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>ERROR: No se puede acceder a la base de datos LocalDocs o no es válida.</h3><br><i>Nota: Necesitará reiniciar después de intentar cualquiera de las siguientes soluciones sugeridas.</i><br><ul><li>Asegúrese de que la carpeta establecida como <b>Ruta de Descarga</b> exista en el sistema de archivos.</li><li>Verifique la propiedad y los permisos de lectura y escritura de la <b>Ruta de Descarga</b>.</li><li>Si hay un archivo <b>localdocs_v2.db</b>, verifique también su propiedad y permisos de lectura/escritura.</li></ul><br>Si el problema persiste y hay archivos 'localdocs_v*.db' presentes, como último recurso puede<br>intentar hacer una copia de seguridad y eliminarlos. Sin embargo, tendrá que recrear sus colecciones.
@@ -1554,7 +1554,7 @@ modelo para comenzar
ModelList
-
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li><ul><li>Requiere clave API personal de OpenAI.</li><li>ADVERTENCIA: ¡Enviará sus chats a OpenAI!</li><li>Su clave API se almacenará en el disco</li><li>Solo se usará para comunicarse con OpenAI</li><li>Puede solicitar una clave API <a href="https://platform.openai.com/account/api-keys">aquí.</a></li>
@@ -1565,62 +1565,73 @@ modelo para comenzar
OpenAI</strong><br> %1
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1<strong>Modelo ChatGPT GPT-3.5 Turbo de OpenAI</strong><br> %1
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.<br><br><i>* Aunque pagues a OpenAI por ChatGPT-4, esto no garantiza el acceso a la clave API. Contacta a OpenAI para más información.
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2<strong>Modelo ChatGPT GPT-4 de OpenAI</strong><br> %1 %2
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li><ul><li>Requiere una clave API personal de Mistral.</li><li>ADVERTENCIA: ¡Enviará tus chats a Mistral!</li><li>Tu clave API se almacenará en el disco</li><li>Solo se usará para comunicarse con Mistral</li><li>Puedes solicitar una clave API <a href="https://console.mistral.ai/user/api-keys">aquí</a>.</li>
-
+ <strong>Mistral Tiny model</strong><br> %1<strong>Modelo Mistral Tiny</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1<strong>Modelo Mistral Small</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1<strong>Modelo Mistral Medium</strong><br> %1
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul><strong>Creado por %1.</strong><br><ul><li>Publicado el %2.<li>Este modelo tiene %3 me gusta.<li>Este modelo tiene %4 descargas.<li>Más información puede encontrarse <a href="https://huggingface.co/%5">aquí.</a></ul>
-
+ %1 (%2)%1 (%2)
-
+
+
+ cannot open "%1": %2
+
+
+
+
+ cannot create "%1": %2
+
+
+
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><strong>Modelo de API compatible con OpenAI</strong><br><ul><li>Clave API: %1</li><li>URL base: %2</li><li>Nombre del modelo: %3</li></ul>
-
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li><ul><li>Requiere una clave API personal y la URL base de la API.</li><li>ADVERTENCIA: ¡Enviará sus chats al servidor de API compatible con OpenAI que especificó!</li><li>Su clave API se almacenará en el disco</li><li>Solo se utilizará para comunicarse con el servidor de API compatible con OpenAI</li>
-
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>Conectar al servidor de API compatible con OpenAI</strong><br> %1
@@ -1628,87 +1639,87 @@ modelo para comenzar
ModelSettings
-
+ ModelModelo
-
+ Model SettingsConfiguración del modelo
-
+ CloneClonar
-
+ RemoveEliminar
-
+ NameNombre
-
+ Model FileArchivo del modelo
-
+ System PromptIndicación del sistema
-
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Prefijado al inicio de cada conversación. Debe contener los tokens de encuadre apropiados.
-
+ Prompt TemplatePlantilla de indicación
-
+ The template that wraps every prompt.La plantilla que envuelve cada indicación.
-
+ Must contain the string "%1" to be replaced with the user's input.Debe contener la cadena "%1" para ser reemplazada con la entrada del usuario.
-
+ Chat Name PromptIndicación para el nombre del chat
-
+ Prompt used to automatically generate chat names.Indicación utilizada para generar automáticamente nombres de chat.
-
+ Suggested FollowUp PromptIndicación de seguimiento sugerida
-
+ Prompt used to generate suggested follow-up questions.Indicación utilizada para generar preguntas de seguimiento sugeridas.
-
+ Context LengthLongitud del contexto
-
+ Number of input and output tokens the model sees.Número de tokens de entrada y salida que el modelo ve.
@@ -1719,128 +1730,128 @@ NOTE: Does not take effect until you reload the model.
NOTA: No tiene efecto hasta que recargues el modelo.
-
+ TemperatureTemperatura
-
+ Randomness of model output. Higher -> more variation.Aleatoriedad de la salida del modelo. Mayor -> más variación.
-
+ Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.La temperatura aumenta las probabilidades de elegir tokens menos probables.
NOTA: Una temperatura más alta da resultados más creativos pero menos predecibles.
-
+ Top-PTop-P
-
+ Nucleus Sampling factor. Lower -> more predictable.Factor de muestreo de núcleo. Menor -> más predecible.
-
+ Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Solo se pueden elegir los tokens más probables hasta una probabilidad total de top_p.
NOTA: Evita elegir tokens altamente improbables.
-
+ Min-PMin-P
-
+ Minimum token probability. Higher -> more predictable.Probabilidad mínima del token. Mayor -> más predecible.
-
+ Sets the minimum relative probability for a token to be considered.Establece la probabilidad relativa mínima para que un token sea considerado.
-
+ Top-KTop-K
-
+ Size of selection pool for tokens.Tamaño del grupo de selección para tokens.
-
+ Only the top K most likely tokens will be chosen from.Solo se elegirán los K tokens más probables.
-
+ Max LengthLongitud máxima
-
+ Maximum response length, in tokens.Longitud máxima de respuesta, en tokens.
-
+ Prompt Batch SizeTamaño del lote de indicaciones
-
+ The batch size used for prompt processing.El tamaño del lote utilizado para el procesamiento de indicaciones.
-
+ Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Cantidad de tokens de prompt a procesar de una vez.
NOTA: Valores más altos pueden acelerar la lectura de prompts, pero usarán más RAM.
-
+ Repeat PenaltyPenalización por repetición
-
+ Repetition penalty factor. Set to 1 to disable.Factor de penalización por repetición. Establecer a 1 para desactivar.
-
+ Repeat Penalty TokensTokens de penalización por repetición
-
+ Number of previous tokens used for penalty.Número de tokens anteriores utilizados para la penalización.
-
+ GPU LayersCapas de GPU
-
+ Number of model layers to load into VRAM.Número de capas del modelo a cargar en la VRAM.
-
+ Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1849,7 +1860,7 @@ Usar más contexto del que el modelo fue entrenado producirá resultados deficie
NOTA: No surtirá efecto hasta que recargue el modelo.
-
+ How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -1861,217 +1872,217 @@ NOTA: No surte efecto hasta que recargue el modelo.
ModelsView
-
+ No Models InstalledNo hay modelos instalados
-
+ Install a model to get started using GPT4AllInstala un modelo para empezar a usar GPT4All
-
-
+
+ + Add Model+ Agregar modelo
-
+ Shows the add model viewMuestra la vista de agregar modelo
-
+ Installed ModelsModelos instalados
-
+ Locally installed chat modelsModelos de chat instalados localmente
-
+ Model fileArchivo del modelo
-
+ Model file to be downloadedArchivo del modelo a descargar
-
+ DescriptionDescripción
-
+ File descriptionDescripción del archivo
-
+ CancelCancelar
-
+ ResumeReanudar
-
+ Stop/restart/start the downloadDetener/reiniciar/iniciar la descarga
-
+ RemoveEliminar
-
+ Remove model from filesystemEliminar modelo del sistema de archivos
-
-
+
+ InstallInstalar
-
+ Install online modelInstalar modelo en línea
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Error</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ADVERTENCIA: No recomendado para su hardware. El modelo requiere más memoria (%1 GB) de la que su sistema tiene disponible (%2).</strong></font>
-
+ %1 GB%1 GB
-
+ ??
-
+ Describes an error that occurred when downloadingDescribe un error que ocurrió durante la descarga
-
+ Error for incompatible hardwareError por hardware incompatible
-
+ Download progressBarBarra de progreso de descarga
-
+ Shows the progress made in the downloadMuestra el progreso realizado en la descarga
-
+ Download speedVelocidad de descarga
-
+ Download speed in bytes/kilobytes/megabytes per secondVelocidad de descarga en bytes/kilobytes/megabytes por segundo
-
+ Calculating...Calculando...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedSi se está calculando el hash del archivo
-
+ Busy indicatorIndicador de ocupado
-
+ Displayed when the file hash is being calculatedSe muestra cuando se está calculando el hash del archivo
-
+ enter $API_KEYingrese $API_KEY
-
+ File sizeTamaño del archivo
-
+ RAM requiredRAM requerida
-
+ ParametersParámetros
-
+ QuantCuantificación
-
+ TypeTipo
-
+ ERROR: $API_KEY is empty.ERROR: $API_KEY está vacía.
-
+ ERROR: $BASE_URL is empty.ERROR: $BASE_URL está vacía.
-
+ enter $BASE_URLingrese $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.ERROR: $MODEL_NAME está vacío.
-
+ enter $MODEL_NAMEingrese $MODEL_NAME
@@ -2079,12 +2090,12 @@ NOTA: No surte efecto hasta que recargue el modelo.
MyFancyLink
-
+ Fancy linkEnlace elegante
-
+ A stylized linkUn enlace estilizado
@@ -2092,7 +2103,7 @@ NOTA: No surte efecto hasta que recargue el modelo.
MySettingsStack
-
+ Please choose a directoryPor favor, elija un directorio
@@ -2100,12 +2111,12 @@ NOTA: No surte efecto hasta que recargue el modelo.
MySettingsTab
-
+ Restore DefaultsRestaurar valores predeterminados
-
+ Restores settings dialog to a default stateRestaura el diálogo de configuración a su estado predeterminado
@@ -2113,7 +2124,7 @@ NOTA: No surte efecto hasta que recargue el modelo.
NetworkDialog
-
+ Contribute data to the GPT4All Opensource Datalake.Contribuir datos al Datalake de código abierto de GPT4All.
@@ -2131,7 +2142,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
NOTA: Al activar esta función, enviará sus datos al Datalake de código abierto de GPT4All. No debe esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puede esperar una atribución opcional si lo desea. Sus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a sus datos y se le acreditará como contribuyente en cualquier lanzamiento de modelo GPT4All que utilice sus datos.
-
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2144,47 +2155,47 @@ Cuando un modelo GPT4All te responda y hayas aceptado participar, tu conversaci
NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Código Abierto de GPT4All. No debes esperar privacidad en el chat cuando esta función esté habilitada. Sin embargo, puedes esperar una atribución opcional si lo deseas. Tus datos de chat estarán disponibles abiertamente para que cualquiera los descargue y serán utilizados por Nomic AI para mejorar futuros modelos de GPT4All. Nomic AI conservará toda la información de atribución adjunta a tus datos y se te acreditará como contribuyente en cualquier lanzamiento de modelo GPT4All que utilice tus datos.
-
+ Terms for opt-inTérminos para optar por participar
-
+ Describes what will happen when you opt-inDescribe lo que sucederá cuando opte por participar
-
+ Please provide a name for attribution (optional)Por favor, proporcione un nombre para la atribución (opcional)
-
+ Attribution (optional)Atribución (opcional)
-
+ Provide attributionProporcionar atribución
-
+ EnableHabilitar
-
+ Enable opt-inHabilitar participación
-
+ CancelCancelar
-
+ Cancel opt-inCancelar participación
@@ -2192,17 +2203,17 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
NewVersionDialog
-
+ New version is availableNueva versión disponible
-
+ UpdateActualizar
-
+ Update to new versionActualizar a nueva versión
@@ -2210,17 +2221,17 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
PopupDialog
-
+ Reveals a shortlived help balloonMuestra un globo de ayuda de corta duración
-
+ Busy indicatorIndicador de ocupado
-
+ Displayed when the popup is showing busySe muestra cuando la ventana emergente está ocupada
@@ -2235,28 +2246,28 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
SettingsView
-
-
+
+ SettingsConfiguración
-
+ Contains various application settingsContiene varias configuraciones de la aplicación
-
+ ApplicationAplicación
-
+ ModelModelo
-
+ LocalDocsDocumentosLocales
@@ -2264,17 +2275,17 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
StartupDialog
-
+ Welcome!¡Bienvenido!
-
+ Release notesNotas de la versión
-
+ Release notes for this versionNotas de la versión para esta versión
@@ -2300,76 +2311,76 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
-
+ Terms for opt-inTérminos para aceptar
-
+ Describes what will happen when you opt-inDescribe lo que sucederá cuando acepte
-
-
+
+ Opt-in for anonymous usage statisticsAceptar estadísticas de uso anónimas
-
-
+
+ YesSí
-
+ Allow opt-in for anonymous usage statisticsPermitir aceptación de estadísticas de uso anónimas
-
-
+
+ NoNo
-
+ Opt-out for anonymous usage statisticsRechazar estadísticas de uso anónimas
-
+ Allow opt-out for anonymous usage statisticsPermitir rechazo de estadísticas de uso anónimas
-
-
+
+ Opt-in for networkAceptar para la red
-
+ Allow opt-in for networkPermitir aceptación para la red
-
+ Allow opt-in anonymous sharing of chats to the GPT4All DatalakePermitir compartir anónimamente los chats con el Datalake de GPT4All
-
+ Opt-out for networkRechazar para la red
-
+ Allow opt-out anonymous sharing of chats to the GPT4All DatalakePermitir rechazar el compartir anónimo de chats con el Datalake de GPT4All
-
+ ### Release notes
%1### Contributors
%2
@@ -2379,7 +2390,7 @@ NOTA: Al activar esta función, estarás enviando tus datos al Datalake de Códi
-
+ ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2419,23 +2430,23 @@ lanzamiento de modelo GPT4All que utilice sus datos.
actual. ¿Desea continuar?
-
+ <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Advertencia:</b> cambiar el modelo borrará la conversación actual. ¿Deseas continuar?
-
+ ContinueContinuar
-
+ Continue with model loadingContinuar con la carga del modelo
-
-
+
+ CancelCancelar
@@ -2443,33 +2454,33 @@ lanzamiento de modelo GPT4All que utilice sus datos.
ThumbsDownDialog
-
+ Please edit the text below to provide a better response. (optional)Por favor, edite el texto a continuación para proporcionar una mejor
respuesta. (opcional)
-
+ Please provide a better response...Por favor, proporcione una mejor respuesta...
-
+ SubmitEnviar
-
+ Submits the user's responseEnvía la respuesta del usuario
-
+ CancelCancelar
-
+ Closes the response dialogCierra el diálogo de respuesta
@@ -2477,126 +2488,126 @@ lanzamiento de modelo GPT4All que utilice sus datos.
main
-
+ GPT4All v%1GPT4All v%1
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>Se encontró un error al iniciar:</h3><br><i>"Se detectó hardware incompatible."</i><br><br>Desafortunadamente, tu CPU no cumple con los requisitos mínimos para ejecutar este programa. En particular, no soporta instrucciones AVX, las cuales este programa requiere para ejecutar con éxito un modelo de lenguaje grande moderno. La única solución en este momento es actualizar tu hardware a una CPU más moderna.<br><br>Consulta aquí para más información: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>Se encontró un error al iniciar:</h3><br><i>"No se puede acceder al archivo de configuración."</i><br><br>Desafortunadamente, algo está impidiendo que el programa acceda al archivo de configuración. Esto podría ser causado por permisos incorrectos en el directorio de configuración local de la aplicación donde se encuentra el archivo de configuración. Visita nuestro <a href="https://discord.gg/4M2QFmTt2k">canal de Discord</a> para obtener ayuda.
-
+ Connection to datalake failed.La conexión al datalake falló.
-
+ Saving chats.Guardando chats.
-
+ Network dialogDiálogo de red
-
+ opt-in to share feedback/conversationsoptar por compartir comentarios/conversaciones
-
+ Home viewVista de inicio
-
+ Home view of applicationVista de inicio de la aplicación
-
+ HomeInicio
-
+ Chat viewVista de chat
-
+ Chat view to interact with modelsVista de chat para interactuar con modelos
-
+ ChatsChats
-
-
+
+ ModelsModelos
-
+ Models view for installed modelsVista de modelos para modelos instalados
-
-
+
+ LocalDocsDocs
Locales
-
+ LocalDocs view to configure and use local docsVista de DocumentosLocales para configurar y usar documentos locales
-
-
+
+ SettingsConfig.
-
+ Settings view for application configurationVista de configuración para la configuración de la aplicación
-
+ The datalake is enabledEl datalake está habilitado
-
+ Using a network modelUsando un modelo de red
-
+ Server mode is enabledEl modo servidor está habilitado
-
+ Installed modelsModelos instalados
-
+ View of installed modelsVista de modelos instalados
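
Note on the hunks above and below: each "+ SourceTranslation" line is a Qt Linguist <message> entry whose XML markup was flattened when this patch was rendered as text, and the bare "-"/"+" lines are stripped <location .../> markers. As a reading aid, here is a minimal sketch of how one such pair sits in the actual .ts file; the filename and line attributes are hypothetical, not taken from this patch:

    <message>
        <location filename="qml/ApplicationSettings.qml" line="212"/>
        <source>Small</source>
        <translation>Pequeño</translation>
    </message>

Plural entries such as "%n file(s)" are the same structure with numerus="yes" on <message> and <numerusform> children inside <translation>.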
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index 85360148..7a6ec7e1 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -4,62 +4,62 @@
AddCollectionView
-
+ ← Existing Collections← Raccolte esistenti
-
+ Add Document CollectionAggiungi raccolta documenti
-
+ Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Aggiungi una cartella contenente file di testo semplice, PDF o Markdown. Configura estensioni aggiuntive in Settaggi.
-
+ Please choose a directoryScegli una cartella
-
+ NameNome
-
+ Collection name...Nome della raccolta...
-
+ Name of the collection to add (Required)Nome della raccolta da aggiungere (Obbligatorio)
-
+ FolderCartella
-
+ Folder path...Percorso cartella...
-
+ Folder path to documents (Required)Percorso della cartella dei documenti (richiesto)
-
+ BrowseEsplora
-
+ Create CollectionCrea raccolta
@@ -67,288 +67,288 @@
AddModelView
-
+ ← Existing Models← Modelli esistenti
-
+ Explore ModelsEsplora modelli
-
+ Discover and download models by keyword search...Scopri e scarica i modelli tramite ricerca per parole chiave...
-
+ Text field for discovering and filtering downloadable modelsCampo di testo per scoprire e filtrare i modelli scaricabili
-
+ Initiate model discovery and filteringAvvia rilevamento e filtraggio dei modelli
-
+ Triggers discovery and filtering of modelsAttiva la scoperta e il filtraggio dei modelli
-
+ DefaultPredefinito
-
+ LikesMi piace
-
+ DownloadsScaricamenti
-
+ RecentRecenti
-
+ AscAsc
-
+ DescDisc
-
+ NoneNiente
-
+ Searching · %1Ricerca · %1
-
+ Sort by: %1Ordina per: %1
-
+ Sort dir: %1Direzione ordinamento: %1
-
+ Limit: %1Limite: %1
-
+ Network error: could not retrieve %1Errore di rete: impossibile recuperare %1
-
-
+
+ Busy indicatorIndicatore di occupato
-
+ Displayed when the models request is ongoingVisualizzato quando la richiesta dei modelli è in corso
-
+ Model fileFile del modello
-
+ Model file to be downloadedFile del modello da scaricare
-
+ DescriptionDescrizione
-
+ File descriptionDescrizione del file
-
+ CancelAnnulla
-
+ ResumeRiprendi
-
+ DownloadScarica
-
+ Stop/restart/start the downloadArresta/riavvia/avvia il download
-
+ RemoveRimuovi
-
+ Remove model from filesystemRimuovi il modello dal sistema dei file
-
-
+
+ InstallInstalla
-
+ Install online modelInstalla il modello online
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">AVVERTENZA: non consigliato per il tuo hardware. Il modello richiede più memoria (%1 GB) di quella disponibile nel sistema (%2).</strong></font>
-
+ ERROR: $API_KEY is empty.ERRORE: $API_KEY è vuoto.
-
+ ERROR: $BASE_URL is empty.ERRORE: $BASE_URL è vuoto.
-
+ enter $BASE_URLinserisci $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.ERRORE: $MODEL_NAME è vuoto.
-
+ enter $MODEL_NAMEinserisci $MODEL_NAME
-
+ %1 GB
-
-
+
+ ?
-
+ Describes an error that occurred when downloadingDescrive un errore che si è verificato durante lo scaricamento
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Errore</a></strong></font>
-
+ Error for incompatible hardwareErrore per hardware incompatibile
-
+ Download progressBarBarra di avanzamento dello scaricamento
-
+ Shows the progress made in the downloadMostra lo stato di avanzamento dello scaricamento
-
+ Download speedVelocità di scaricamento
-
+ Download speed in bytes/kilobytes/megabytes per secondVelocità di scaricamento in byte/kilobyte/megabyte al secondo
-
+ Calculating...Calcolo in corso...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedSe viene calcolato l'hash del file
-
+ Displayed when the file hash is being calculatedVisualizzato durante il calcolo dell'hash del file
-
+ enter $API_KEYinserisci $API_KEY
-
+ File sizeDimensione del file
-
+ RAM requiredRAM richiesta
-
+ ParametersParametri
-
+ QuantQuant
-
+ TypeTipo
@@ -356,22 +356,22 @@
ApplicationSettings
-
+ ApplicationApplicazione
-
+ Network dialogDialogo di rete
-
+ opt-in to share feedback/conversationsaderisci per condividere feedback/conversazioni
-
+ ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -382,87 +382,87 @@
-
+ Error dialogDialogo d'errore
-
+ Application SettingsSettaggi applicazione
-
+ GeneralGenerale
-
+ ThemeTema
-
+ The application color scheme.La combinazione di colori dell'applicazione.
-
+ DarkScuro
-
+ LightChiaro
-
+ LegacyDarkScuro Legacy
-
+ Font SizeDimensioni del Font
-
+ The size of text in the application.La dimensione del testo nell'applicazione.
-
+ SmallPiccolo
-
+ MediumMedio
-
+ LargeGrande
-
+ Language and LocaleLingua e settaggi locali
-
+ The language and locale you wish to use.La lingua e i settaggi locali che vuoi utilizzare.
-
+ System LocaleSettaggi locali del sistema
-
+ DeviceDispositivo
@@ -471,139 +471,139 @@
Il dispositivo di calcolo utilizzato per la generazione del testo. "Auto" utilizza Vulkan o Metal.
-
+ The compute device used for text generation.Il dispositivo di calcolo utilizzato per la generazione del testo.
-
-
+
+ Application defaultPredefinito dell'applicazione
-
+ Default ModelModello predefinito
-
+ The preferred model for new chats. Also used as the local server fallback.Il modello preferito per le nuove chat. Utilizzato anche come ripiego del server locale.
-
+ Suggestion ModeModalità suggerimento
-
+ Generate suggested follow-up questions at the end of responses.Genera le domande di approfondimento suggerite alla fine delle risposte.
-
+ When chatting with LocalDocsQuando chatti con LocalDocs
-
+ Whenever possibleQuando possibile
-
+ NeverMai
-
+ Download PathPercorso di download
-
+ Where to store local models and the LocalDocs database.Dove archiviare i modelli locali e il database LocalDocs.
-
+ BrowseEsplora
-
+ Choose where to save model filesScegli dove salvare i file del modello
-
+ Enable DatalakeAbilita Datalake
-
+ Send chats and feedback to the GPT4All Open-Source Datalake.Invia chat e commenti al Datalake Open Source GPT4All.
-
+ AdvancedAvanzate
-
+ CPU ThreadsThread della CPU
-
+ The number of CPU threads used for inference and embedding.Il numero di thread della CPU utilizzati per l'inferenza e l'incorporamento.
-
+ Save Chat ContextSalva il contesto della chat
-
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Salva lo stato del modello di chat su disco per un caricamento più rapido. ATTENZIONE: utilizza circa 2 GB per chat.
-
+ Enable Local ServerAbilita server locale
-
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Esporre un server compatibile con OpenAI a localhost. ATTENZIONE: comporta un maggiore utilizzo delle risorse.
-
+ API Server PortPorta del server API
-
+ The port to use for the local server. Requires restart.La porta da utilizzare per il server locale. Richiede il riavvio.
-
+ Check For UpdatesControlla gli aggiornamenti
-
+ Manually check for an update to GPT4All.Verifica manualmente l'aggiornamento di GPT4All.
-
+ UpdatesAggiornamenti
@@ -611,13 +611,13 @@
Chat
-
-
+
+ New ChatNuova Chat
-
+ Server ChatChat del server
@@ -625,12 +625,12 @@
ChatAPIWorker
-
+ ERROR: Network error occurred while connecting to the API serverERRORE: si è verificato un errore di rete durante la connessione al server API
-
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2ChatAPIWorker::handleFinished ha ricevuto l'errore HTTP %1 %2
@@ -638,62 +638,62 @@
ChatDrawer
-
+ DrawerCassetto
-
+ Main navigation drawerCassetto di navigazione principale
-
+ + New Chat+ Nuova Chat
-
+ Create a new chatCrea una nuova chat
-
+ Select the current chat or edit the chat when in edit modeSeleziona la chat corrente o modifica la chat in modalità modifica
-
+ Edit chat nameModifica il nome della chat
-
+ Save chat nameSalva il nome della chat
-
+ Delete chatElimina chat
-
+ Confirm chat deletionConferma l'eliminazione della chat
-
+ Cancel chat deletionAnnulla l'eliminazione della chat
-
+ List of chatsElenco delle chat
-
+ List of chats in the drawer dialogElenco delle chat nella finestra di dialogo del cassetto
@@ -701,32 +701,32 @@
ChatListModel
-
+ TODAYOGGI
-
+ THIS WEEKQUESTA SETTIMANA
-
+ THIS MONTHQUESTO MESE
-
+ LAST SIX MONTHSULTIMI SEI MESI
-
+ THIS YEARQUEST'ANNO
-
+ LAST YEARL'ANNO SCORSO
@@ -734,150 +734,150 @@
ChatView
-
+ <h3>Warning</h3><p>%1</p><h3>Avviso</h3><p>%1</p>
-
+ Switch model dialogFinestra di dialogo Cambia modello
-
+ Warn the user if they switch models, then context will be erasedAvvisa l'utente che se cambia modello, il contesto verrà cancellato
-
+ Conversation copied to clipboard.Conversazione copiata negli appunti.
-
+ Code copied to clipboard.Codice copiato negli appunti.
-
+ Chat panelPannello chat
-
+ Chat panel with optionsPannello chat con opzioni
-
+ Reload the currently loaded modelRicarica il modello attualmente caricato
-
+ Eject the currently loaded modelEspelli il modello attualmente caricato
-
+ No model installed.Nessun modello installato.
-
+ Model loading error.Errore di caricamento del modello.
-
+ Waiting for model...In attesa del modello...
-
+ Switching context...Cambio contesto...
-
+ Choose a model...Scegli un modello...
-
+ Not found: %1Non trovato: %1
-
+ The top item is the current modelL'elemento in alto è il modello attuale
-
-
+
+ LocalDocs
-
+ Add documentsAggiungi documenti
-
+ add collections of documents to the chataggiungi raccolte di documenti alla chat
-
+ Load the default modelCarica il modello predefinito
-
+ Loads the default model which can be changed in settingsCarica il modello predefinito che può essere modificato nei settaggi
-
+ No Model InstalledNessun modello installato
-
+ GPT4All requires that you install at least one
model to get startedGPT4All richiede l'installazione di almeno un
modello per iniziare
-
+ Install a ModelInstalla un modello
-
+ Shows the add model viewMostra la vista aggiungi modello
-
+ Conversation with the modelConversazione con il modello
-
+ prompt / response pairs from the conversationcoppie prompt/risposta dalla conversazione
-
+ GPT4All
-
+ YouTu
@@ -886,139 +886,139 @@ modello per iniziare
ricalcolo contesto ...
-
+ response stopped ...risposta interrotta ...
-
+ processing ...elaborazione ...
-
+ generating response ...generazione risposta ...
-
+ generating questions ...generazione domande ...
-
-
+
+ CopyCopia
-
+ Copy MessageCopia messaggio
-
+ Disable markdownDisabilita Markdown
-
+ Enable markdownAbilita Markdown
-
+ Thumbs upMi piace
-
+ Gives a thumbs up to the responseDà un mi piace alla risposta
-
+ Thumbs downNon mi piace
-
+ Opens thumbs down dialogApre la finestra di dialogo "Non mi piace"
-
+ Suggested follow-upsApprofondimenti suggeriti
-
+ Erase and reset chat sessionCancella e ripristina la sessione di chat
-
+ Copy chat session to clipboardCopia la sessione di chat negli appunti
-
+ Redo last chat responseRiesegui l'ultima risposta della chat
-
+ Stop generatingInterrompi la generazione
-
+ Stop the current response generationArresta la generazione della risposta corrente
-
+ Reloads the modelRicarica il modello
-
+ <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>Si è verificato un errore durante il caricamento del modello:</h3><br><i>"%1"</i><br><br>Gli errori di caricamento del modello possono verificarsi per diversi motivi, ma le cause più comuni includono un formato di file non valido, un download incompleto o danneggiato, il tipo di file sbagliato, RAM di sistema insufficiente o un tipo di modello incompatibile. Ecco alcuni suggerimenti per risolvere il problema:<br><ul><li>Assicurati che il file del modello abbia un formato e un tipo compatibili<li>Verifica che il file del modello sia completo nella cartella di download<li>Puoi trovare la cartella di download nella finestra di dialogo dei settaggi<li>Se hai scaricato manualmente il modello, assicurati che il file non sia danneggiato controllando md5sum<li>Leggi ulteriori informazioni su quali modelli sono supportati nella nostra <a href="https://docs.gpt4all.io/">documentazione</a> per la GUI<li>Consulta il nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per assistenza
-
-
+
+ Reload · %1Ricarica · %1
-
+ Loading · %1Caricamento · %1
-
+ Load · %1 (default) →Carica · %1 (predefinito) →
-
+ restoring from text ...ripristino dal testo ...
-
+ retrieving localdocs: %1 ...recupero documenti locali: %1 ...
-
+ searching localdocs: %1 ...ricerca in documenti locali: %1 ...
-
+ %n Source(s)%n Fonte
@@ -1026,42 +1026,42 @@ modello per iniziare
-
+ Send a message...Manda un messaggio...
-
+ Load a model to continue...Carica un modello per continuare...
-
+ Send messages/prompts to the modelInvia messaggi/prompt al modello
-
+ CutTaglia
-
+ PasteIncolla
-
+ Select AllSeleziona tutto
-
+ Send messageInvia messaggio
-
+ Sends the message/prompt contained in textfield to the modelInvia il messaggio/prompt contenuto nel campo di testo al modello
@@ -1069,12 +1069,12 @@ modello per iniziare
CollectionsDrawer
-
+ Warning: searching collections while indexing can return incomplete resultsAvviso: la ricerca nelle raccolte durante l'indicizzazione può restituire risultati incompleti
-
+ %n file(s)%n file
@@ -1082,7 +1082,7 @@ modello per iniziare
-
+ %n word(s)%n parola
@@ -1090,17 +1090,17 @@ modello per iniziare
-
+ UpdatingIn aggiornamento
-
+ + Add Docs+ Aggiungi documenti
-
+ Select a collection to make it available to the chat model.Seleziona una raccolta per renderla disponibile al modello in chat.
@@ -1108,37 +1108,37 @@ modello per iniziare
Download
-
+ Model "%1" is installed successfully.Il modello "%1" è stato installato correttamente.
-
+ ERROR: $MODEL_NAME is empty.ERRORE: $MODEL_NAME è vuoto.
-
+ ERROR: $API_KEY is empty.ERRORE: $API_KEY è vuoto.
-
+ ERROR: $BASE_URL is invalid.ERRORE: $BASE_URL non è valido.
-
+ ERROR: Model "%1 (%2)" is conflict.ERRORE: il modello "%1 (%2)" è in conflitto.
-
+ Model "%1 (%2)" is installed successfully.Il modello "%1 (%2)" è stato installato correttamente.
-
+ Model "%1" is removed.Il modello "%1" è stato rimosso.
@@ -1146,92 +1146,92 @@ modello per iniziare
HomeView
-
+ Welcome to GPT4AllBenvenuto in GPT4All
-
+ The privacy-first LLM chat applicationL'applicazione di chat LLM che mette al primo posto la privacy
-
+ Start chattingInizia a chattare
-
+ Start ChattingInizia a Chattare
-
+ Chat with any LLMChatta con qualsiasi LLM
-
+ LocalDocs
-
+ Chat with your local filesChatta con i tuoi file locali
-
+ Find ModelsTrova modelli
-
+ Explore and download modelsEsplora e scarica i modelli
-
+ Latest newsUltime notizie
-
+ Latest news from GPT4AllUltime notizie da GPT4All
-
+ Release NotesNote di rilascio
-
+ DocumentationDocumentazione
-
+ Discord
-
+ X (Twitter)
-
+ Github
-
+ nomic.ainomic.ai
-
+ Subscribe to NewsletterIscriviti alla Newsletter
@@ -1239,118 +1239,118 @@ modello per iniziare
LocalDocsSettings
-
+ LocalDocs
-
+ LocalDocs SettingsSettaggi LocalDocs
-
+ IndexingIndicizzazione
-
+ Allowed File ExtensionsEstensioni di file consentite
-
+ Comma-separated list. LocalDocs will only attempt to process files with these extensions.Elenco separato da virgole. LocalDocs tenterà di elaborare solo file con queste estensioni.
-
+ Embedding [translator comment: Questo termine si dovrebbe tradurre come "Incorporamento". This term has been translated in other applications like A1111 and InvokeAI as "Incorporamento"] Incorporamento
-
+ Use Nomic Embed APIUtilizza l'API di incorporamento Nomic Embed
-
+ Embed documents using the fast Nomic API instead of a private local model. Requires restart.Incorpora documenti utilizzando la veloce API di Nomic invece di un modello locale privato. Richiede il riavvio.
-
+ Nomic API KeyChiave API di Nomic
-
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Chiave API da utilizzare per Nomic Embed. Ottienine una dalla <a href="https://atlas.nomic.ai/cli-login">pagina delle chiavi API</a> di Atlas. Richiede il riavvio.
-
+ Embeddings DeviceDispositivo per incorporamenti
-
+ The compute device used for embeddings. Requires restart.Il dispositivo di calcolo utilizzato per gli incorporamenti. Richiede il riavvio.
-
+ Application defaultPredefinito dell'applicazione
-
+ DisplayMostra
-
+ Show SourcesMostra le fonti
-
+ Display the sources used for each response.Visualizza le fonti utilizzate per ciascuna risposta.
-
+ AdvancedAvanzate
-
+ Warning: Advanced usage only.Avvertenza: solo per uso avanzato.
-
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valori troppo grandi possono causare errori di Localdocs, risposte estremamente lente o l'impossibilità di rispondere. In parole povere, {N caratteri x N frammenti} vengono aggiunti alla finestra di contesto del modello. Maggiori informazioni <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">qui</a>.
-
+ Document snippet size (characters)Dimensioni del frammento di documento (caratteri)
-
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numero di caratteri per frammento di documento. Numeri più grandi aumentano la probabilità di risposte basate sui fatti, ma comportano anche una generazione più lenta.
-
+ Max document snippets per promptNumero massimo di frammenti di documento per prompt
-
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Il numero massimo di frammenti di documento recuperati, che presentano le migliori corrispondenze, da includere nel contesto del prompt. Numeri più alti aumentano la probabilità di ricevere risposte basate sui fatti, ma comportano anche una generazione più lenta.
@@ -1358,17 +1358,17 @@ modello per iniziare
LocalDocsView
-
+ LocalDocs
-
+ Chat with your local filesChatta con i tuoi file locali
-
+ + Add Collection+ Aggiungi raccolta
@@ -1377,102 +1377,102 @@ modello per iniziare
ERRORE: il database di LocalDocs non è valido.
-
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>ERRORE: Impossibile accedere al database LocalDocs o non è valido.</h3><br><i>Nota: sarà necessario riavviare dopo aver provato una delle seguenti soluzioni suggerite.</i><br><ul><li>Assicurati che la cartella impostata come <b>Percorso di download</b> esista nel file system.</li><li>Controlla la proprietà e i permessi di lettura e scrittura del <b>Percorso di download</b>.</li><li>Se è presente un file <b>localdocs_v2.db</b>, controlla anche la sua proprietà e i permessi di lettura/scrittura.</li></ul><br>Se il problema persiste e sono presenti file 'localdocs_v*.db', come ultima risorsa puoi<br>provare a eseguirne il backup e a rimuoverli. Tuttavia, dovrai ricreare le tue raccolte.
-
+ No Collections InstalledNessuna raccolta installata
-
+ Install a collection of local documents to get started using this featureInstalla una raccolta di documenti locali per iniziare a utilizzare questa funzionalità
-
+ + Add Doc Collection+ Aggiungi raccolta di documenti
-
+ Shows the add model viewMostra la vista aggiungi modello
-
+ Indexing progressBarBarra di avanzamento indicizzazione
-
+ Shows the progress made in the indexingMostra lo stato di avanzamento dell'indicizzazione
-
+ ERRORERRORE
-
+ INDEXINGINDICIZZAZIONE
-
+ EMBEDDINGINCORPORAMENTO
-
+ REQUIRES UPDATERICHIEDE AGGIORNAMENTO
-
+ READYPRONTO
-
+ INSTALLINGINSTALLAZIONE
-
+ Indexing in progressIndicizzazione in corso
-
+ Embedding in progressIncorporamento in corso
-
+ This collection requires an update after version changeQuesta raccolta richiede un aggiornamento dopo il cambio di versione
-
+ Automatically reindexes upon changes to the folderReindicizza automaticamente in caso di modifiche alla cartella
-
+ Installation in progressInstallazione in corso
-
+ %%
-
+ %n file(s)%n file
@@ -1480,7 +1480,7 @@ modello per iniziare
-
+ %n word(s)%n parola
@@ -1488,27 +1488,27 @@ modello per iniziare
-
+ RemoveRimuovi
-
+ RebuildRicostruisci
-
+ Reindex this folder from scratch. This is slow and usually not needed.Reindicizzare questa cartella da zero. Lento e di solito non necessario.
-
+ UpdateAggiorna
-
+ Update the collection to the new version. This is a slow operation.Aggiorna la raccolta alla nuova versione. Questa è un'operazione lenta.
@@ -1516,67 +1516,78 @@ modello per iniziare
ModelList
-
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li><ul><li>Richiede una chiave API OpenAI personale.</li><li>ATTENZIONE: invierà le tue chat a OpenAI!</li><li>La tua chiave API verrà archiviata su disco</li><li>Verrà utilizzata solo per comunicare con OpenAI</li><li>Puoi richiedere una chiave API <a href="https://platform.openai.com/account/api-keys">qui.</a></li>
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2
-
+ <strong>Mistral Tiny model</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.<br><br><i>* Anche se paghi OpenAI per ChatGPT-4 questo non garantisce l'accesso alla chiave API. Contatta OpenAI per maggiori informazioni.
-
+
+
+ cannot open "%1": %2
+
+
+
+
+ cannot create "%1": %2
+
+
+
+ %1 (%2)%1 (%2)
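[Editor's note: the numbered placeholders in the new entries such as cannot open "%1": %2 are positional: QString::arg() fills them after translation, so a translator may reorder them freely. A minimal sketch; the function and variable names are illustrative:]

    #include <QObject>
    #include <QString>

    // %1 and %2 are replaced in order by arg(), wherever the translated
    // string happens to place them.
    QString openError(const QString &fileName, const QString &reason)
    {
        return QObject::tr("cannot open \"%1\": %2").arg(fileName, reason);
    }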
-
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><strong>Modello API compatibile con OpenAI</strong><br><ul><li>Chiave API: %1</li><li>URL di base: %2</li><li>Nome modello: %3</li></ul>
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li><ul><li>Richiede una chiave API Mistral personale.</li><li>ATTENZIONE: invierà le tue chat a Mistral!</li><li>La tua chiave API verrà archiviata su disco</li><li>Verrà utilizzata solo per comunicare con Mistral</li><li>Puoi richiedere una chiave API <a href="https://console.mistral.ai/user/api-keys">qui</a>.</li>
-
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li><ul><li>Richiede una chiave API personale e l'URL di base dell'API.</li><li>ATTENZIONE: invierà le tue chat al server API compatibile con OpenAI che hai specificato!</li><li>La tua chiave API verrà archiviata su disco</li><li>Verrà utilizzata solo per comunicare con il server API compatibile con OpenAI</li>
-
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>Connetti al server API compatibile con OpenAI</strong><br> %1
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul><strong>Creato da %1.</strong><br><ul><li>Pubblicato il %2.<li>Questo modello ha %3 Mi piace.<li>Questo modello ha %4 download.<li>Altre informazioni possono essere trovate <a href="https://huggingface.co/%5">qui.</a></ul>
@@ -1584,92 +1595,92 @@ modello per iniziare
ModelSettings
-
+ ModelModello
-
+ Model SettingsSettaggi modello
-
+ CloneClona
-
+ RemoveRimuovi
-
+ NameNome
-
+ Model FileFile del modello
-
+ System PromptPrompt di sistema
-
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Prefisso all'inizio di ogni conversazione. Deve contenere i token di inquadramento appropriati.
-
+ Prompt TemplateSchema del prompt
-
+ The template that wraps every prompt.Lo schema che incorpora ogni prompt.
-
+ Must contain the string "%1" to be replaced with the user's input.Deve contenere la stringa "%1" da sostituire con l'input dell'utente.
-
+ Chat Name PromptPrompt del nome della chat
-
+ Prompt used to automatically generate chat names.Prompt utilizzato per generare automaticamente nomi di chat.
-
+ Suggested FollowUp PromptPrompt di approfondimento suggerito
-
+ Prompt used to generate suggested follow-up questions.Prompt utilizzato per generare le domande di approfondimento suggerite.
-
+ Context LengthLunghezza del contesto
-
+ Number of input and output tokens the model sees.Numero di token di input e output visualizzati dal modello.
-
+ Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1678,128 +1689,128 @@ L'utilizzo di un contesto maggiore rispetto a quello su cui è stato addest
NOTA: non ha effetto finché non si ricarica il modello.
-
+ TemperatureTemperatura
-
+ Randomness of model output. Higher -> more variation.Casualità dell'uscita del modello. Più alto -> più variazione.
-
+ Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.La temperatura aumenta le possibilità di scegliere token meno probabili.
NOTA: una temperatura più elevata offre risultati più creativi ma meno prevedibili.
-
+ Top-P
-
+ Nucleus Sampling factor. Lower -> more predictable.Fattore di campionamento del nucleo. Inferiore -> più prevedibile.
-
+ Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Solo i token più probabili, fino a un totale di probabilità di top_p, possono essere scelti.
NOTA: impedisce la scelta di token altamente improbabili.
-
+ Min-P
-
+ Minimum token probability. Higher -> more predictable.Probabilità minima del token. Più alto -> più prevedibile.
-
+ Sets the minimum relative probability for a token to be considered.Imposta la probabilità relativa minima affinché un token venga considerato.
-
+ Top-K
-
+ Size of selection pool for tokens.Dimensione del lotto di selezione per i token.
-
+ Only the top K most likely tokens will be chosen from.Saranno scelti solo i primi K token più probabili.
-
+ Max LengthLunghezza massima
-
+ Maximum response length, in tokens.Lunghezza massima della risposta, in token.
-
+ Prompt Batch SizeDimensioni del lotto di prompt
-
+ The batch size used for prompt processing.La dimensione del lotto usata per l'elaborazione dei prompt.
-
+ Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Numero di token del prompt da elaborare contemporaneamente.
NOTA: valori più alti possono velocizzare la lettura dei prompt ma utilizzeranno più RAM.
-
+ Repeat PenaltyPenalità di ripetizione
-
+ Repetition penalty factor. Set to 1 to disable.Fattore di penalità di ripetizione. Impostare su 1 per disabilitare.
-
+ Repeat Penalty TokensToken di penalità ripetizione
-
+ Number of previous tokens used for penalty.Numero di token precedenti utilizzati per la penalità.
-
+ GPU LayersLivelli GPU
-
+ Number of model layers to load into VRAM.Numero di livelli del modello da caricare nella VRAM.
-
+ How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -1811,217 +1822,217 @@ NOTA: non ha effetto finché non si ricarica il modello.
ModelsView
-
+ No Models InstalledNessun modello installato
-
+ Install a model to get started using GPT4AllInstalla un modello per iniziare a utilizzare GPT4All
-
-
+
+ + Add Model+ Aggiungi Modello
-
+ Shows the add model viewMostra la vista aggiungi modello
-
+ Installed ModelsModelli installati
-
+ Locally installed chat modelsModelli per chat installati localmente
-
+ Model fileFile del modello
-
+ Model file to be downloadedFile del modello da scaricare
-
+ DescriptionDescrizione
-
+ File descriptionDescrizione del file
-
+ CancelAnnulla
-
+ ResumeRiprendi
-
+ Stop/restart/start the downloadArresta/riavvia/avvia il download
-
+ RemoveRimuovi
-
+ Remove model from filesystemRimuovi il modello dal sistema dei file
-
-
+
+ InstallInstalla
-
+ Install online modelInstalla il modello online
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Errore</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">AVVISO: non consigliato per il tuo hardware. Il modello richiede più memoria (%1 GB) di quella disponibile nel sistema (%2).</strong></font>
-
+ ERROR: $API_KEY is empty.ERRORE: $API_KEY è vuoto.
-
+ ERROR: $BASE_URL is empty.ERRORE: $BASE_URL è vuoto.
-
+ enter $BASE_URLinserisci $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.ERRORE: $MODEL_NAME è vuoto.
-
+ enter $MODEL_NAMEinserisci $MODEL_NAME
-
+ %1 GB
-
+ ?
-
+ Describes an error that occurred when downloadingDescrive un errore che si è verificato durante lo scaricamento
-
+ Error for incompatible hardwareErrore per hardware incompatibile
-
+ Download progressBarBarra di avanzamento dello scaricamento
-
+ Shows the progress made in the downloadMostra lo stato di avanzamento dello scaricamento
-
+ Download speedVelocità di scaricamento
-
+ Download speed in bytes/kilobytes/megabytes per secondVelocità di scaricamento in byte/kilobyte/megabyte al secondo
-
+ Calculating...Calcolo in corso...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedSe viene calcolato l'hash del file
-
+ Busy indicatorIndicatore di occupato
-
+ Displayed when the file hash is being calculatedVisualizzato durante il calcolo dell'hash del file
-
+ enter $API_KEYinserisci $API_KEY
-
+ File sizeDimensione del file
-
+ RAM requiredRAM richiesta
-
+ ParametersParametri
-
+ QuantQuant
-
+ TypeTipo
@@ -2029,12 +2040,12 @@ NOTA: non ha effetto finché non si ricarica il modello.
MyFancyLink
-
+ Fancy linkMio link
-
+ A stylized linkUn link d'esempio
@@ -2042,7 +2053,7 @@ NOTA: non ha effetto finché non si ricarica il modello.
MySettingsStack
-
+ Please choose a directoryScegli una cartella
@@ -2050,12 +2061,12 @@ NOTA: non ha effetto finché non si ricarica il modello.
MySettingsTab
-
+ Restore DefaultsRipristina i valori predefiniti
-
+ Restores settings dialog to a default stateRipristina la finestra di dialogo dei settaggi a uno stato predefinito
@@ -2063,12 +2074,12 @@ NOTA: non ha effetto finché non si ricarica il modello.
NetworkDialog
-
+ Contribute data to the GPT4All Opensource Datalake.Contribuisci con i tuoi dati al Datalake Open Source di GPT4All.
-
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2081,47 +2092,47 @@ Quando un modello di GPT4All ti risponde e tu hai aderito, la tua conversazione
NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di GPT4All. Non dovresti avere aspettative sulla privacy della chat quando questa funzione è abilitata. Dovresti, tuttavia, aspettarti un'attribuzione facoltativa, se lo desideri. I tuoi dati di chat saranno liberamente disponibili per essere scaricati da chiunque e verranno utilizzati da Nomic AI per migliorare i futuri modelli GPT4All. Nomic AI conserverà tutte le informazioni di attribuzione allegate ai tuoi dati e verrai accreditato come collaboratore a qualsiasi versione del modello GPT4All che utilizza i tuoi dati!
-
+ Terms for opt-inTermini per l'adesione
-
+ Describes what will happen when you opt-inDescrive cosa accadrà quando effettuerai l'adesione
-
+ Please provide a name for attribution (optional)Fornisci un nome per l'attribuzione (facoltativo)
-
+ Attribution (optional)Attribuzione (facoltativo)
-
+ Provide attributionFornire attribuzione
-
+ EnableAbilita
-
+ Enable opt-inAbilita l'adesione
-
+ CancelAnnulla
-
+ Cancel opt-inAnnulla l'adesione
@@ -2129,17 +2140,17 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
NewVersionDialog
-
+ New version is availableNuova versione disponibile
-
+ UpdateAggiorna
-
+ Update to new versionAggiorna alla nuova versione
@@ -2147,17 +2158,17 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
PopupDialog
-
+ Reveals a shortlived help balloonRivela un messaggio di aiuto di breve durata
-
+ Busy indicatorIndicatore di occupato
-
+ Displayed when the popup is showing busyVisualizzato quando la finestra a comparsa risulta occupata
@@ -2165,28 +2176,28 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
SettingsView
-
-
+
+ SettingsSettaggi
-
+ Contains various application settingsContiene vari settaggi dell'applicazione
-
+ ApplicationApplicazione
-
+ ModelModello
-
+ LocalDocs
@@ -2194,12 +2205,12 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
StartupDialog
-
+ Welcome!Benvenuto!
-
+ ### Release notes
%1### Contributors
%2
@@ -2208,17 +2219,17 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
%2
-
+ Release notesNote di rilascio
-
+ Release notes for this versionNote di rilascio per questa versione
-
+ ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2241,71 +2252,71 @@ Quando un modello di GPT4All ti risponde e tu hai aderito, la tua conversazione
NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di GPT4All. Non dovresti avere aspettative sulla privacy della chat quando questa funzione è abilitata. Dovresti, tuttavia, aspettarti un'attribuzione facoltativa, se lo desideri. I tuoi dati di chat saranno liberamente disponibili per essere scaricati da chiunque e verranno utilizzati da Nomic AI per migliorare i futuri modelli GPT4All. Nomic AI conserverà tutte le informazioni di attribuzione allegate ai tuoi dati e verrai accreditato come collaboratore a qualsiasi versione del modello GPT4All che utilizza i tuoi dati!
-
+ Terms for opt-inTermini per l'adesione
-
+ Describes what will happen when you opt-inDescrive cosa accadrà quando effettuerai l'adesione
-
-
+
+ Opt-in for anonymous usage statisticsAttiva le statistiche di utilizzo anonime
-
-
+
+ YesSì
-
+ Allow opt-in for anonymous usage statisticsConsenti l'attivazione di statistiche di utilizzo anonime
-
-
+
+ NoNo
-
+ Opt-out for anonymous usage statisticsDisattiva le statistiche di utilizzo anonime
-
+ Allow opt-out for anonymous usage statisticsConsenti la disattivazione per le statistiche di utilizzo anonime
-
-
+
+ Opt-in for networkAderisci per la rete
-
+ Allow opt-in for networkConsenti l'adesione per la rete
-
+ Allow opt-in anonymous sharing of chats to the GPT4All DatalakeConsenti la condivisione anonima delle chat su GPT4All Datalake
-
+ Opt-out for networkDisattiva per la rete
-
+ Allow opt-out anonymous sharing of chats to the GPT4All DatalakeConsenti la non adesione alla condivisione anonima delle chat nel GPT4All Datalake
@@ -2313,23 +2324,23 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
SwitchModelDialog
-
+ <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Avviso:</b> la modifica del modello cancellerà la conversazione corrente. Vuoi continuare?
-
+ ContinueContinua
-
+ Continue with model loadingContinuare con il caricamento del modello
-
-
+
+ CancelAnnulla
@@ -2337,32 +2348,32 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
ThumbsDownDialog
-
+ Please edit the text below to provide a better response. (optional)Modifica il testo seguente per fornire una risposta migliore. (opzionale)
-
+ Please provide a better response...Si prega di fornire una risposta migliore...
-
+ SubmitInvia
-
+ Submits the user's responseInvia la risposta dell'utente
-
+ CancelAnnulla
-
+ Closes the response dialogChiude la finestra di dialogo della risposta
@@ -2370,125 +2381,125 @@ NOTA: attivando questa funzione, invierai i tuoi dati al Datalake Open Source di
main
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>Si è verificato un errore all'avvio:</h3><br><i>"Rilevato hardware incompatibile."</i><br><br>Sfortunatamente, la tua CPU non soddisfa i requisiti minimi per eseguire questo programma. In particolare, non supporta le istruzioni AVX richieste da questo programma per eseguire con successo un modello linguistico moderno e di grandi dimensioni. L'unica soluzione in questo momento è aggiornare il tuo hardware con una CPU più moderna.<br><br>Vedi qui per ulteriori informazioni: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ GPT4All v%1
-
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>Si è verificato un errore all'avvio:</h3><br><i>"Impossibile accedere al file dei settaggi."</i><br><br>Sfortunatamente, qualcosa impedisce al programma di accedere al file dei settaggi. Ciò potrebbe essere causato da autorizzazioni errate nella cartella di configurazione locale dell'app in cui si trova il file dei settaggi. Dai un'occhiata al nostro <a href="https://discord.gg/4M2QFmTt2k">canale Discord</a> per ricevere assistenza.
-
+ Connection to datalake failed.La connessione al Datalake non è riuscita.
-
+ Saving chats.Salvataggio delle chat.
-
+ Network dialogDialogo di rete
-
+ opt-in to share feedback/conversationsaderisci per condividere feedback/conversazioni
-
+ Home viewVista iniziale
-
+ Home view of applicationVista iniziale dell'applicazione
-
+ HomeInizia
-
+ Chat viewVista chat
-
+ Chat view to interact with modelsVista chat per interagire con i modelli
-
+ ChatsChat
-
-
+
+ ModelsModelli
-
+ Models view for installed modelsVista modelli per i modelli installati
-
-
+
+ LocalDocs
-
+ LocalDocs view to configure and use local docsVista LocalDocs per configurare e utilizzare i documenti locali
-
-
+
+ SettingsSettaggi
-
+ Settings view for application configurationVista dei settaggi per la configurazione dell'applicazione
-
+ The datalake is enabledIl Datalake è abilitato
-
+ Using a network modelUtilizzando un modello di rete
-
+ Server mode is enabledLa modalità server è abilitata
-
+ Installed modelsModelli installati
-
+ View of installed modelsVista dei modelli installati
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index 38a9177b..76b5b5aa 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -4,62 +4,62 @@
AddCollectionView
-
+ ← Existing Collections← Minhas coleções
-
+ Add Document CollectionAdicionar Coleção de Documentos
-
+ Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Adicione uma pasta contendo arquivos de texto simples, PDFs ou Markdown. Configure extensões adicionais nas Configurações.
-
+ Please choose a directoryEscolha um diretório
-
+ NameNome
-
+ Collection name...Nome da coleção...
-
+ Name of the collection to add (Required)Nome da coleção (obrigatório)
-
+ FolderPasta
-
+ Folder path...Caminho da pasta...
-
+ Folder path to documents (Required)Caminho da pasta com os documentos (obrigatório)
-
+ BrowseProcurar
-
+ Create CollectionCriar Coleção
@@ -67,288 +67,288 @@
AddModelView
-
+ ← Existing Models← Meus Modelos
-
+ Explore ModelsDescobrir Modelos
-
+ Discover and download models by keyword search...Pesquisar modelos...
-
+ Text field for discovering and filtering downloadable modelsCampo de texto para descobrir e filtrar modelos para download
-
+ Initiate model discovery and filteringPesquisar e filtrar modelos
-
+ Triggers discovery and filtering of modelsAciona a descoberta e filtragem de modelos
-
+ DefaultPadrão
-
+ LikesCurtidas
-
+ DownloadsDownloads
-
+ RecentRecentes
-
+ AscAsc
-
+ DescDesc
-
+ NoneNenhum
-
+ Searching · %1Pesquisando · %1
-
+ Sort by: %1Ordenar por: %1
-
+ Sort dir: %1Direção da ordenação: %1
-
+ Limit: %1Limite: %1
-
+ Network error: could not retrieve %1Erro de rede: não foi possível obter %1
-
-
+
+ Busy indicatorIndicador de processamento
-
+ Displayed when the models request is ongoingExibido enquanto os modelos estão sendo carregados
-
+ Model fileArquivo do modelo
-
+ Model file to be downloadedArquivo do modelo a ser baixado
-
+ DescriptionDescrição
-
+ File descriptionDescrição do arquivo
-
+ CancelCancelar
-
+ ResumeRetomar
-
+ DownloadBaixar
-
+ Stop/restart/start the downloadParar/reiniciar/iniciar o download
-
+ RemoveRemover
-
+ Remove model from filesystemRemover modelo do sistema
-
-
+
+ InstallInstalar
-
+ Install online modelInstalar modelo online
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ATENÇÃO: Este modelo não é recomendado para seu hardware. Ele exige mais memória (%1 GB) do que seu sistema possui (%2).</strong></font>
-
+ ERROR: $API_KEY is empty.ERRO: A $API_KEY está vazia.
-
+ ERROR: $BASE_URL is empty.ERRO: A $BASE_URL está vazia.
-
+ enter $BASE_URLinserir a $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.ERRO: O $MODEL_NAME está vazio.
-
+ enter $MODEL_NAMEinserir o $MODEL_NAME
-
+ %1 GB%1 GB
-
-
+
+ ??
-
+ Describes an error that occurred when downloadingMostra informações sobre o erro no download
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Erro</a></strong></font>
-
+ Error for incompatible hardwareAviso: Hardware não compatível
-
+ Download progressBarProgresso do download
-
+ Shows the progress made in the downloadMostra o progresso do download
-
+ Download speedVelocidade de download
-
+ Download speed in bytes/kilobytes/megabytes per secondVelocidade de download em bytes/kilobytes/megabytes por segundo
-
+ Calculating...Calculando...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedSe o hash do arquivo está sendo calculado
-
+ Displayed when the file hash is being calculatedExibido durante o cálculo do hash do arquivo
-
+ enter $API_KEYinserir $API_KEY
-
+ File sizeTamanho do arquivo
-
+ RAM requiredRAM necessária
-
+ ParametersParâmetros
-
+ QuantQuant
-
+ TypeTipo
@@ -356,22 +356,22 @@
ApplicationSettings
-
+ ApplicationAplicativo
-
+ Network dialogMensagens de rede
-
+ opt-in to share feedback/conversationsCompartilhar feedback e conversas
-
+ ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -388,225 +388,225 @@
reinstalar o aplicativo.
-
+ Error dialogMensagens de erro
-
+ Application SettingsConfigurações
-
+ GeneralGeral
-
+ ThemeTema
-
+ The application color scheme.Esquema de cores.
-
+ DarkModo Escuro
-
+ LightModo Claro
-
+ LegacyDarkModo escuro (legado)
-
+ Font SizeTamanho da Fonte
-
+ The size of text in the application.Tamanho do texto.
-
+ SmallPequeno
-
+ MediumMédio
-
+ LargeGrande
-
+ Language and LocaleIdioma e Região
-
+ The language and locale you wish to use.Selecione seu idioma e região.
-
+ System LocaleLocal do Sistema
-
+ DeviceProcessador
-
+ The compute device used for text generation.I chose to use "Processador" instead of "Dispositivo" (Device) or "Dispositivo de Computação" (Compute Device) to simplify the terminology and make it more straightforward and understandable. "Dispositivo" can be vague and could refer to various types of hardware, whereas "Processador" clearly and specifically indicates the component responsible for processing tasks. This improves usability by avoiding the ambiguity that might arise from using more generic terms like "Dispositivo."Processador usado para gerar texto.
-
-
+
+ Application defaultAplicativo padrão
-
+ Default ModelModelo Padrão
-
+ The preferred model for new chats. Also used as the local server fallback.Modelo padrão para novos chats e em caso de falha do modelo principal.
-
+ Suggestion ModeModo de sugestões
-
+ Generate suggested follow-up questions at the end of responses.Sugerir perguntas após as respostas.
-
+ When chatting with LocalDocsAo conversar com o LocalDocs
-
+ Whenever possibleSempre que possível
-
+ NeverNunca
-
+ Download PathDiretório de Download
-
+ Where to store local models and the LocalDocs database.Pasta para modelos e banco de dados do LocalDocs.
-
+ BrowseProcurar
-
+ Choose where to save model filesLocal para armazenar os modelos
-
+ Enable DatalakeHabilitar Datalake
-
+ Send chats and feedback to the GPT4All Open-Source Datalake.Contribua para o Datalake de código aberto do GPT4All.
-
+ AdvancedAvançado
-
+ CPU ThreadsThreads de CPU
-
+ The number of CPU threads used for inference and embedding.Quantidade de núcleos (threads) do processador usados para processar e responder às suas perguntas.
-
+ Save Chat ContextI used "Histórico do Chat" (Chat History) instead of "Contexto do Chat" (Chat Context) to clearly convey that it refers to saving past messages, making it more intuitive and avoiding potential confusion with abstract terms.Salvar Histórico do Chat
-
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Salvar histórico do chat para carregamento mais rápido. (Usa aprox. 2GB por chat).
-
+ Enable Local ServerAtivar Servidor Local
-
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Ativar servidor local compatível com OpenAI (uso de recursos elevado).
-
+ API Server PortPorta da API
-
+ The port to use for the local server. Requires restart.Porta de acesso ao servidor local (requer reinicialização).
-
+ Check For UpdatesProcurar por Atualizações
-
+ Manually check for an update to GPT4All.Verifica se há novas atualizações para o GPT4All.
-
+ UpdatesAtualizações
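[Editor's note: the Language and Locale settings above ultimately map to loading a compiled .qm catalog. A minimal sketch of the standard Qt pattern, assuming for illustration that the catalogs ship as gpt4all_<locale>.qm under a :/i18n resource path; the "System Locale" option corresponds to QLocale::system().]

    #include <QGuiApplication>
    #include <QLocale>
    #include <QTranslator>

    int main(int argc, char *argv[])
    {
        QGuiApplication app(argc, argv);

        // Tries e.g. :/i18n/gpt4all_pt_BR.qm for a Brazilian Portuguese system
        // locale; falls back to the untranslated source strings if load() fails.
        QTranslator translator;
        if (translator.load(QLocale::system(), QStringLiteral("gpt4all"),
                            QStringLiteral("_"), QStringLiteral(":/i18n")))
            app.installTranslator(&translator);

        return app.exec();
    }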
@@ -614,13 +614,13 @@
Chat
-
-
+
+ New ChatNovo Chat
-
+ Server ChatChat com o Servidor
@@ -628,12 +628,12 @@
ChatAPIWorker
-
+ ERROR: Network error occurred while connecting to the API serverERRO: Ocorreu um erro de rede ao conectar-se ao servidor da API
-
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2ChatAPIWorker::handleFinished recebeu erro HTTP %1 %2
@@ -641,62 +641,62 @@
ChatDrawer
-
+ DrawerMenu Lateral
-
+ Main navigation drawerMenu de navegação principal
-
+ + New Chat+ Novo Chat
-
+ Create a new chatCriar um novo chat
-
+ Select the current chat or edit the chat when in edit modeSelecione o chat atual ou edite o chat quando estiver no modo de edição
-
+ Edit chat nameEditar nome do chat
-
+ Save chat nameSalvar nome do chat
-
+ Delete chatExcluir chat
-
+ Confirm chat deletionConfirmar exclusão do chat
-
+ Cancel chat deletionCancelar exclusão do chat
-
+ List of chatsLista de chats
-
+ List of chats in the drawer dialogLista de chats na caixa de diálogo do menu lateral
@@ -704,32 +704,32 @@
ChatListModel
-
+ TODAYHOJE
-
+ THIS WEEKESTA SEMANA
-
+ THIS MONTHESTE MÊS
-
+ LAST SIX MONTHSÚLTIMOS SEIS MESES
-
+ THIS YEARESTE ANO
-
+ LAST YEARANO PASSADO
@@ -737,150 +737,150 @@
ChatView
-
+ <h3>Warning</h3><p>%1</p><h3>Aviso</h3><p>%1</p>
-
+ Switch model dialogMensagem ao trocar de modelo
-
+ Warn the user if they switch models, then context will be erasedAo trocar de modelo, o contexto da conversa será apagado
-
+ Conversation copied to clipboard.Conversa copiada.
-
+ Code copied to clipboard.Código copiado.
-
+ Chat panelPainel de chat
-
+ Chat panel with optionsPainel de chat com opções
-
+ Reload the currently loaded modelRecarregar modelo atual
-
+ Eject the currently loaded modelEjetar o modelo carregado atualmente
-
+ No model installed.Nenhum modelo instalado.
-
+ Model loading error.Erro ao carregar o modelo.
-
+ Waiting for model...Aguardando modelo...
-
+ Switching context...Mudando de contexto...
-
+ Choose a model...Escolha um modelo...
-
+ Not found: %1Não encontrado: %1
-
+ The top item is the current modelO modelo atual é exibido no topo
-
-
+
+ LocalDocsLocalDocs
-
+ Add documentsAdicionar documentos
-
+ add collections of documents to the chatAdicionar Coleção de Documentos
-
+ Load the default modelCarregar o modelo padrão
-
+ Loads the default model which can be changed in settingsCarrega o modelo padrão (personalizável nas configurações)
-
+ No Model InstalledNenhum Modelo Instalado
-
+ GPT4All requires that you install at least one
model to get startedO GPT4All precisa de pelo menos um
modelo instalado para funcionar
-
+ Install a ModelInstalar um Modelo
-
+ Shows the add model viewMostra a visualização para adicionar modelo
-
+ Conversation with the modelConversa com o modelo
-
+ prompt / response pairs from the conversationPares de pergunta/resposta da conversa
-
+ GPT4AllGPT4All
-
+ YouVocê
@@ -889,139 +889,139 @@ modelo instalado para funcionar
recalculando contexto...
-
+ response stopped ...resposta interrompida...
-
+ processing ...processando...
-
+ generating response ...gerando resposta...
-
+ generating questions ...gerando perguntas...
-
-
+
+ CopyCopiar
-
+ Copy MessageCopiar Mensagem
-
+ Disable markdownDesativar markdown
-
+ Enable markdownAtivar markdown
-
+ Thumbs upResposta boa
-
+ Gives a thumbs up to the responseCurte a resposta
-
+ Thumbs downResposta ruim
-
+ Opens thumbs down dialogAbrir diálogo de joinha para baixo
-
+ Suggested follow-upsPerguntas relacionadas
-
+ Erase and reset chat sessionApagar e redefinir sessão de chat
-
+ Copy chat session to clipboardCopiar histórico da conversa
-
+ Redo last chat responseRefazer última resposta
-
+ Stop generatingParar de gerar
-
+ Stop the current response generationParar a geração da resposta atual
-
+ Reloads the modelRecarrega modelo
-
+ <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>Ocorreu um erro ao carregar o modelo:</h3><br><i>"%1"</i><br><br>Falhas no carregamento do modelo podem acontecer por vários motivos, mas as causas mais comuns incluem um formato de arquivo incorreto, um download incompleto ou corrompido, o tipo de arquivo errado, memória RAM do sistema insuficiente ou um tipo de modelo incompatível. Aqui estão algumas sugestões para resolver o problema:<br><ul><li>Certifique-se de que o arquivo do modelo tenha um formato e tipo compatíveis<li>Verifique se o arquivo do modelo está completo na pasta de download<li>Você pode encontrar a pasta de download na caixa de diálogo de configurações<li>Se você carregou o modelo, certifique-se de que o arquivo não esteja corrompido verificando o md5sum<li>Leia mais sobre quais modelos são suportados em nossa <a href="https://docs.gpt4all.io/">documentação</a> para a interface gráfica<li>Confira nosso <a href="https://discord.gg/4M2QFmTt2k">canal do Discord</a> para obter ajuda
-
-
+
+ Reload · %1Recarregar · %1
-
+ Loading · %1Carregando · %1
-
+ Load · %1 (default) →Carregar · %1 (padrão) →
-
+ restoring from text ...Recuperando do texto...
-
+ retrieving localdocs: %1 ...Recuperando dados em LocalDocs: %1 ...
-
+ searching localdocs: %1 ...Buscando em LocalDocs: %1 ...
-
+ %n Source(s)%n Origem
@@ -1029,42 +1029,42 @@ modelo instalado para funcionar
-
+ Send a message...Enviar uma mensagem...
-
+ Load a model to continue...Carregue um modelo para continuar...
-
+ Send messages/prompts to the modelEnviar mensagens/prompts para o modelo
-
+ CutRecortar
-
+ PasteColar
-
+ Select AllSelecionar tudo
-
+ Send messageEnviar mensagem
-
+ Sends the message/prompt contained in textfield to the modelEnvia a mensagem/prompt contida no campo de texto para o modelo
@@ -1072,12 +1072,12 @@ modelo instalado para funcionar
CollectionsDrawer
-
+ Warning: searching collections while indexing can return incomplete resultsAviso: pesquisar coleções durante a indexação pode retornar resultados incompletos
-
+ %n file(s)%n arquivo(s)
@@ -1085,7 +1085,7 @@ modelo instalado para funcionar
-
+ %n word(s)%n palavra(s)
@@ -1093,17 +1093,17 @@ modelo instalado para funcionar
-
+ UpdatingAtualizando
-
+ + Add Docs+ Adicionar Documentos
-
+ Select a collection to make it available to the chat model.Selecione uma coleção para disponibilizá-la ao modelo de chat.
@@ -1111,37 +1111,37 @@ modelo instalado para funcionar
Download
-
+ Model "%1" is installed successfully.Modelo "%1" instalado com sucesso.
-
+ ERROR: $MODEL_NAME is empty.ERRO: O nome do modelo ($MODEL_NAME) está vazio.
-
+ ERROR: $API_KEY is empty.ERRO: A chave da API ($API_KEY) está vazia.
-
+ ERROR: $BASE_URL is invalid.ERRO: A URL base ($BASE_URL) é inválida.
-
+ ERROR: Model "%1 (%2)" is conflict.ERRO: Conflito com o modelo "%1 (%2)".
-
+ Model "%1 (%2)" is installed successfully.Modelo "%1 (%2)" instalado com sucesso.
-
+ Model "%1" is removed.Modelo "%1" removido.
@@ -1149,92 +1149,92 @@ modelo instalado para funcionar
HomeView
-
+ Welcome to GPT4AllBem-vindo ao GPT4All
-
+ The privacy-first LLM chat applicationO aplicativo de chat LLM que prioriza a privacidade
-
+ Start chattingIniciar chat
-
+ Start ChattingIniciar Chat
-
+ Chat with any LLMConverse com qualquer LLM
-
+ LocalDocsLocalDocs
-
+ Chat with your local filesConverse com seus arquivos locais
-
+ Find ModelsEncontrar Modelos
-
+ Explore and download modelsDescubra e baixe modelos
-
+ Latest newsÚltimas novidades
-
+ Latest news from GPT4AllÚltimas novidades do GPT4All
-
+ Release NotesNotas de versão
-
+ DocumentationDocumentação
-
+ DiscordDiscord
-
+ X (Twitter)X (Twitter)
-
+ GithubGithub
-
+ nomic.ainomic.ai
-
+ Subscribe to NewsletterAssine nossa Newsletter
@@ -1242,118 +1242,118 @@ modelo instalado para funcionar
LocalDocsSettings
-
+ LocalDocsLocalDocs
-
+ LocalDocs SettingsConfigurações do LocalDocs
-
+ IndexingIndexação
-
+ Allowed File ExtensionsExtensões de Arquivo Permitidas
-
+ Comma-separated list. LocalDocs will only attempt to process files with these extensions.Lista separada por vírgulas. O LocalDocs tentará processar apenas arquivos com essas extensões.
-
+ EmbeddingIncorporação
-
+ Use Nomic Embed APIUsar a API Nomic Embed
-
+ Embed documents using the fast Nomic API instead of a private local model. Requires restart.Incorporar documentos usando a API Nomic rápida em vez de um modelo local privado. Requer reinicialização.
-
+ Nomic API KeyChave da API Nomic
-
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Chave da API a ser usada para Nomic Embed. Obtenha uma na página de <a href="https://atlas.nomic.ai/cli-login">chaves de API do Atlas</a>. Requer reinicialização.
-
+ Embeddings DeviceProcessamento de Incorporações
-
+ The compute device used for embeddings. Requires restart.Dispositivo usado para processar as incorporações. Requer reinicialização.
-
+ Application defaultAplicativo padrão
-
+ DisplayExibir
-
+ Show SourcesMostrar Fontes
-
+ Display the sources used for each response.Mostra as fontes usadas para cada resposta.
-
+ AdvancedApenas para usuários avançados
-
+ Warning: Advanced usage only.Atenção: Apenas para usuários avançados.
-
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valores muito altos podem causar falhas no LocalDocs, respostas extremamente lentas ou até mesmo nenhuma resposta. De forma geral, o valor {Número de Caracteres x Número de Trechos} é adicionado à janela de contexto do modelo. Clique <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aqui</a> para mais informações.
-
+ Document snippet size (characters)I translated "snippet" as "trecho" to make the term feel more natural and understandable in Portuguese. "Trecho" effectively conveys the idea of a portion or section of a document, fitting well within the context, whereas a more literal translation might sound less intuitive or awkward for users.Tamanho do trecho de documento (caracteres)
-
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Número de caracteres por trecho de documento. Valores maiores aumentam a chance de respostas factuais, mas também tornam a geração mais lenta.
-
+ Max document snippets per promptMáximo de Trechos de Documento por Prompt
-
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Número máximo de trechos de documentos a serem adicionados ao contexto do prompt. Valores maiores aumentam a chance de respostas factuais, mas também tornam a geração mais lenta.
@@ -1361,17 +1361,17 @@ modelo instalado para funcionar
LocalDocsView
-
+ LocalDocsLocalDocs
-
+ Chat with your local filesConverse com seus arquivos locais
-
+ + Add Collection+ Adicionar Coleção
@@ -1380,102 +1380,102 @@ modelo instalado para funcionar
ERRO: O banco de dados do LocalDocs não é válido.
-
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>ERRO: Não foi possível acessar o banco de dados do LocalDocs ou ele não é válido.</h3><br><i>Observação: Será necessário reiniciar o aplicativo após tentar qualquer uma das seguintes correções sugeridas.</i><br><ul><li>Certifique-se de que a pasta definida como <b>Caminho de Download</b> existe no sistema de arquivos.</li><li>Verifique a propriedade, bem como as permissões de leitura e gravação do <b>Caminho de Download</b>.</li><li>Se houver um arquivo <b>localdocs_v2.db</b>, verifique também sua propriedade e permissões de leitura/gravação.</li></ul><br>Se o problema persistir e houver algum arquivo 'localdocs_v*.db' presente, como último recurso, você pode<br>tentar fazer backup deles e removê-los. No entanto, você terá que recriar suas coleções.
-
+ No Collections InstalledNenhuma Coleção Instalada
-
+ Install a collection of local documents to get started using this featureInstale uma coleção de documentos locais para começar a usar este recurso
-
+ + Add Doc Collection+ Adicionar Coleção de Documentos
-
+ Shows the add model viewMostra a visualização para adicionar modelo
-
+ Indexing progressBarBarra de progresso de indexação
-
+ Shows the progress made in the indexingMostra o progresso da indexação
-
+ ERRORERRO
-
+ INDEXINGINDEXANDO
-
+ EMBEDDINGINCORPORANDO
-
+ REQUIRES UPDATEREQUER ATUALIZAÇÃO
-
+ READYPRONTO
-
+ INSTALLINGINSTALANDO
-
+ Indexing in progressIndexação em andamento
-
+ Embedding in progressIncorporação em andamento
-
+ This collection requires an update after version changeEsta coleção precisa ser atualizada após a mudança de versão
-
+ Automatically reindexes upon changes to the folderReindexa automaticamente após alterações na pasta
-
+ Installation in progressInstalação em andamento
-
+ %%
-
+ %n file(s)%n arquivo(s)
@@ -1483,7 +1483,7 @@ modelo instalado para funcionar
-
+ %n word(s)%n palavra(s)
@@ -1491,27 +1491,27 @@ modelo instalado para funcionar
-
+ RemoveRemover
-
+ RebuildReconstruir
-
+ Reindex this folder from scratch. This is slow and usually not needed.Reindexar pasta do zero. Lento e geralmente desnecessário.
-
+ UpdateAtualizar
-
+ Update the collection to the new version. This is a slow operation.Atualizar coleção para nova versão. Pode demorar.
@@ -1519,67 +1519,78 @@ modelo instalado para funcionar
ModelList
-
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li><ul><li>É necessária uma chave de API da OpenAI.</li><li>AVISO: Seus chats serão enviados para a OpenAI!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a OpenAI</li><li>Você pode solicitar uma chave de API <a href="https://platform.openai.com/account/api-keys">aqui.</a></li>
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1<strong>Modelo ChatGPT GPT-3.5 Turbo da OpenAI</strong><br> %1
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2<strong>Modelo ChatGPT GPT-4 da OpenAI</strong><br> %1 %2
-
+ <strong>Mistral Tiny model</strong><br> %1<strong>Modelo Mistral Tiny</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1<strong>Modelo Mistral Small</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1<strong>Modelo Mistral Medium</strong><br> %1
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.<br><br><i>* Mesmo que você pague pelo ChatGPT-4 da OpenAI, isso não garante acesso à chave de API. Contate a OpenAI para mais informações.
-
+
+
+ cannot open "%1": %2
+
+
+
+
+ cannot create "%1": %2
+
+
+
+ %1 (%2)%1 (%2)
-
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><strong>Modelo de API Compatível com OpenAI</strong><br><ul><li>Chave da API: %1</li><li>URL Base: %2</li><li>Nome do Modelo: %3</li></ul>
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li><ul><li>É necessária uma chave de API da Mistral.</li><li>AVISO: Seus chats serão enviados para a Mistral!</li><li>Sua chave de API será armazenada localmente</li><li>Ela será usada apenas para comunicação com a Mistral</li><li>Você pode solicitar uma chave de API <a href="https://console.mistral.ai/user/api-keys">aqui</a>.</li>
-
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li><ul><li>É necessária uma chave de API e a URL da API.</li><li>AVISO: Seus chats serão enviados para o servidor de API compatível com OpenAI que você especificou!</li><li>Sua chave de API será armazenada no disco</li><li>Será usada apenas para comunicação com o servidor de API compatível com OpenAI</li>
-
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>Conectar a um servidor de API compatível com OpenAI</strong><br> %1
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul><strong>Criado por %1.</strong><br><ul><li>Publicado em %2.<li>Este modelo tem %3 curtidas.<li>Este modelo tem %4 downloads.<li>Mais informações podem ser encontradas <a href="https://huggingface.co/%5">aqui.</a></ul>
@@ -1587,92 +1598,92 @@ modelo instalado para funcionar
ModelSettings
-
+ ModelModelo
-
+ Model SettingsConfigurações do Modelo
-
+ CloneClonar
-
+ RemoveRemover
-
+ NameNome
-
+ Model FileArquivo do Modelo
-
+ System PromptPrompt do Sistema
-
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Prefixado no início de cada conversa. Deve conter os tokens de enquadramento apropriados.
-
+ Prompt TemplateModelo de Prompt
-
+ The template that wraps every prompt.Modelo para cada prompt.
-
+ Must contain the string "%1" to be replaced with the user's input.Deve incluir "%1" para a entrada do usuário.
-
+ Chat Name PromptPrompt para Nome do Chat
-
+ Prompt used to automatically generate chat names.Prompt usado para gerar automaticamente nomes de chats.
-
+ Suggested FollowUp PromptPrompt de Sugestão de Acompanhamento
-
+ Prompt used to generate suggested follow-up questions.Prompt usado para gerar sugestões de perguntas.
-
+ Context LengthTamanho do Contexto
-
+ Number of input and output tokens the model sees.Tamanho da Janela de Contexto.
-
+ Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1681,128 +1692,128 @@ Usar mais contexto do que o modelo foi treinado pode gerar resultados ruins.
Obs.: Só entrará em vigor após recarregar o modelo.
-
+ TemperatureTemperatura
-
+ Randomness of model output. Higher -> more variation.Aleatoriedade das respostas. Quanto maior, mais variadas.
-
+ Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.Aumenta a chance de escolher tokens menos prováveis.
Obs.: Uma temperatura mais alta gera resultados mais criativos, mas menos previsíveis.
-
+ Top-PTop-P
-
+ Nucleus Sampling factor. Lower -> more predictable.Amostragem por núcleo. Menor valor, respostas mais previsíveis.
-
+ Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Apenas tokens com probabilidade total até o valor de top_p serão escolhidos.
Obs.: Evita tokens muito improváveis.
-
+ Min-PMin-P
-
+ Minimum token probability. Higher -> more predictable.Probabilidade mínima do token. Quanto maior -> mais previsível.
-
+ Sets the minimum relative probability for a token to be considered.Define a probabilidade relativa mínima para um token ser considerado.
-
+ Top-KTop-K
-
+ Size of selection pool for tokens.Número de tokens considerados na amostragem.
-
+ Only the top K most likely tokens will be chosen from.Serão escolhidos apenas os K tokens mais prováveis.
-
+ Max LengthComprimento Máximo
-
+ Maximum response length, in tokens.Comprimento máximo da resposta, em tokens.
-
+ Prompt Batch SizeTamanho do Lote de Processamento
-
+ The batch size used for prompt processing.Tokens processados por lote.
-
+ Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Quantidade de tokens de prompt para processar de uma vez.
OBS.: Valores mais altos podem acelerar a leitura dos prompts, mas usarão mais RAM.
-
+ Repeat PenaltyPenalidade de Repetição
-
+ Repetition penalty factor. Set to 1 to disable.Penalidade de Repetição (1 para desativar).
-
+ Repeat Penalty TokensTokens para penalizar repetição
-
+ Number of previous tokens used for penalty.Número de tokens anteriores usados para penalidade.
-
+ GPU LayersCamadas na GPU
-
+ Number of model layers to load into VRAM.Camadas Carregadas na GPU.
-
+ How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -1814,217 +1825,217 @@ Obs.: Só entrará em vigor após recarregar o modelo.
ModelsView
-
+ No Models InstalledNenhum Modelo Instalado
-
+ Install a model to get started using GPT4AllInstale um modelo para começar a usar o GPT4All
-
-
+
+ + Add Model+ Adicionar Modelo
-
+ Shows the add model viewMostra a visualização para adicionar modelo
-
+ Installed ModelsModelos Instalados
-
+ Locally installed chat modelsModelos de chat instalados localmente
-
+ Model fileArquivo do modelo
-
+ Model file to be downloadedArquivo do modelo a ser baixado
-
+ DescriptionDescrição
-
+ File descriptionDescrição do arquivo
-
+ CancelCancelar
-
+ ResumeRetomar
-
+ Stop/restart/start the downloadParar/reiniciar/iniciar o download
-
+ RemoveRemover
-
+ Remove model from filesystemRemover modelo do sistema de arquivos
-
-
+
+ InstallInstalar
-
+ Install online modelInstalar modelo online
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Erro</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">AVISO: Não recomendado para seu hardware. O modelo requer mais memória (%1 GB) do que seu sistema tem disponível (%2).</strong></font>
-
+ ERROR: $API_KEY is empty.ERRO: A $API_KEY está vazia.
-
+ ERROR: $BASE_URL is empty.ERRO: A $BASE_URL está vazia.
-
+ enter $BASE_URLinserir a $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.ERRO: O $MODEL_NAME está vazio.
-
+ enter $MODEL_NAMEinserir o $MODEL_NAME
-
+ %1 GB%1 GB
-
+ ??
-
+ Describes an error that occurred when downloadingDescreve um erro que ocorreu durante o download
-
+ Error for incompatible hardwareErro para hardware incompatível
-
+ Download progressBarBarra de progresso do download
-
+ Shows the progress made in the downloadMostra o progresso do download
-
+ Download speedVelocidade de download
-
+ Download speed in bytes/kilobytes/megabytes per secondVelocidade de download em bytes/kilobytes/megabytes por segundo
-
+ Calculating...Calculando...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedSe o hash do arquivo está sendo calculado
-
+ Busy indicatorIndicador de ocupado
-
+ Displayed when the file hash is being calculatedExibido quando o hash do arquivo está sendo calculado
-
+ enter $API_KEYinserir $API_KEY
-
+ File sizeTamanho do arquivo
-
+ RAM requiredRAM necessária
-
+ ParametersParâmetros
-
+ QuantQuant
-
+ TypeTipo
@@ -2032,12 +2043,12 @@ Obs.: Só entrará em vigor após recarregar o modelo.
MyFancyLink
-
+ Fancy linkLink personalizado
-
+ A stylized linkUm link personalizado
@@ -2045,7 +2056,7 @@ Obs.: Só entrará em vigor após recarregar o modelo.
MySettingsStack
-
+ Please choose a directoryEscolha um diretório
@@ -2053,12 +2064,12 @@ Obs.: Só entrará em vigor após recarregar o modelo.
MySettingsTab
-
+ Restore DefaultsRestaurar Configurações Padrão
-
+ Restores settings dialog to a default stateRestaura as configurações para o estado padrão
@@ -2066,12 +2077,12 @@ Obs.: Só entrará em vigor após recarregar o modelo.
NetworkDialog
-
+ Contribute data to the GPT4All Opensource Datalake.Contribuir com dados para o Datalake de código aberto GPT4All.
-
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2084,47 +2095,47 @@ Quando um modelo GPT4All responder a você e você tiver optado por participar,
OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake de Código Aberto do GPT4All. Você não deve ter nenhuma expectativa de privacidade no chat quando este recurso estiver ativado. No entanto, você deve ter a expectativa de uma atribuição opcional, se desejar. Seus dados de chat estarão disponíveis para qualquer pessoa baixar e serão usados pela Nomic AI para melhorar os futuros modelos GPT4All. A Nomic AI manterá todas as informações de atribuição anexadas aos seus dados e você será creditado como colaborador em qualquer versão do modelo GPT4All que utilize seus dados!
-
+ Terms for opt-inTermos de participação
-
+ Describes what will happen when you opt-inDescrição do que acontece ao participar
-
+ Please provide a name for attribution (optional)Forneça um nome para atribuição (opcional)
-
+ Attribution (optional)Atribuição (opcional)
-
+ Provide attributionFornecer atribuição
-
+ EnableHabilitar
-
+ Enable opt-inAtivar participação
-
+ CancelCancelar
-
+ Cancel opt-inCancelar participação
@@ -2132,17 +2143,17 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
NewVersionDialog
-
+ New version is availableAtualização disponível
-
+ UpdateAtualizar agora
-
+ Update to new versionBaixa e instala a última versão do GPT4All
@@ -2150,18 +2161,18 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
PopupDialog
-
+ Reveals a shortlived help balloonExibe uma dica rápida
-
+ Busy indicatorThe literal translation of "busy indicator" as "indicador de ocupado" might create ambiguity in Portuguese, as it doesn't clearly convey whether the system is processing something or simply unavailable. "Progresso" (progress) was chosen to more clearly indicate that an activity is in progress and that the user should wait for its completion.Indicador de progresso
-
+ Displayed when the popup is showing busyVisível durante o processamento
@@ -2176,29 +2187,29 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
SettingsView
-
-
+
+ SettingsI used "Config" instead of "Configurações" to keep the UI concise and visually balanced. "Config" is a widely recognized abbreviation that maintains clarity while saving space, making the interface cleaner and more user-friendly, especially in areas with limited space.Config
-
+ Contains various application settingsAcessar as configurações do aplicativo
-
+ ApplicationAplicativo
-
+ ModelModelo
-
+ LocalDocsLocalDocs
@@ -2206,12 +2217,12 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
StartupDialog
-
+ Welcome!Bem-vindo(a)!
-
+ ### Release notes
%1### Contributors
%2
@@ -2220,17 +2231,17 @@ OBS.: Ao ativar este recurso, você estará enviando seus dados para o Datalake
%2
-
+ Release notesNotas de lançamento
-
+ Release notes for this versionNotas de lançamento desta versão
-
+ ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2261,71 +2272,71 @@ todas as informações de atribuição anexadas aos seus dados e você será cre
versão do modelo GPT4All que utilize seus dados!
-
+ Terms for opt-inTermos de participação
-
+ Describes what will happen when you opt-inDescrição do que acontece ao participar
-
-
+
+ Opt-in for anonymous usage statisticsEnviar estatísticas de uso anônimas
-
-
+
+ YesSim
-
+ Allow opt-in for anonymous usage statisticsPermitir o envio de estatísticas de uso anônimas
-
-
+
+ NoNão
-
+ Opt-out for anonymous usage statisticsRecusar envio de estatísticas de uso anônimas
-
+ Allow opt-out for anonymous usage statisticsPermitir recusar envio de estatísticas de uso anônimas
-
-
+
+ Opt-in for networkAceitar na rede
-
+ Allow opt-in for networkPermitir aceitação na rede
-
+ Allow opt-in anonymous sharing of chats to the GPT4All DatalakePermitir compartilhamento anônimo de chats no Datalake GPT4All
-
+ Opt-out for networkRecusar na rede
-
+ Allow opt-out anonymous sharing of chats to the GPT4All DatalakePermitir recusar compartilhamento anônimo de chats no Datalake GPT4All
@@ -2333,23 +2344,23 @@ versão do modelo GPT4All que utilize seus dados!
SwitchModelDialog
-
+ <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Atenção:</b> Ao trocar o modelo a conversa atual será perdida. Continuar?
-
+ ContinueContinuar
-
+ Continue with model loadingConfirma a troca do modelo
-
-
+
+ CancelCancelar
@@ -2357,32 +2368,32 @@ versão do modelo GPT4All que utilize seus dados!
ThumbsDownDialog
-
+ Please edit the text below to provide a better response. (optional)Editar resposta (opcional)
-
+ Please provide a better response...Digite sua resposta...
-
+ SubmitEnviar
-
+ Submits the user's responseEnviar
-
+ CancelCancelar
-
+ Closes the response dialogFecha a caixa de diálogo de resposta
@@ -2390,125 +2401,125 @@ versão do modelo GPT4All que utilize seus dados!
main
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>Ocorreu um erro ao iniciar:</h3><br><i>"Hardware incompatível detectado."</i><br><br>Infelizmente, seu processador não atende aos requisitos mínimos para executar este programa. Especificamente, ele não possui suporte às instruções AVX, que são necessárias para executar modelos de linguagem grandes e modernos. A única solução, no momento, é atualizar seu hardware para um processador mais recente.<br><br>Para mais informações, consulte: <a href="https://pt.wikipedia.org/wiki/Advanced_Vector_Extensions">https://pt.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ GPT4All v%1GPT4All v%1
-
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>Ocorreu um erro ao iniciar:</h3><br><i>"Não foi possível acessar o arquivo de configurações."</i><br><br>Infelizmente, algo está impedindo o programa de acessar o arquivo de configurações. Isso pode acontecer devido a permissões incorretas na pasta de configurações do aplicativo. Para obter ajuda, acesse nosso <a href="https://discord.gg/4M2QFmTt2k">canal no Discord</a>.
-
+ Connection to datalake failed.Falha na conexão com o datalake.
-
+ Saving chats.Salvando chats.
-
+ Network dialogAvisos de rede
-
+ opt-in to share feedback/conversationspermitir compartilhamento de feedback/conversas
-
+ Home viewTela inicial
-
+ Home view of applicationTela inicial do aplicativo
-
+ HomeInício
-
+ Chat viewVisualização do Chat
-
+ Chat view to interact with modelsVisualização do chat para interagir com os modelos
-
+ ChatsChats
-
-
+
+ ModelsModelos
-
+ Models view for installed modelsTela de modelos instalados
-
-
+
+ LocalDocsLocalDocs
-
+ LocalDocs view to configure and use local docsTela de configuração e uso de documentos locais do LocalDocs
-
-
+
+ SettingsConfig
-
+ Settings view for application configurationTela de configurações do aplicativo
-
+ The datalake is enabledO datalake está ativado
-
+ Using a network modelUsando um modelo de rede
-
+ Server mode is enabledModo servidor ativado
-
+ Installed modelsModelos instalados
-
+ View of installed modelsExibe os modelos instalados
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index 703e5b14..a1eefb88 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -4,12 +4,12 @@
AddCollectionView
-
+ ← Existing Collections← Colecţiile curente
-
+ Add Document CollectionAdaugă o Colecţie de documente
@@ -19,52 +19,52 @@
Adaugă un folder care conţine fişiere cu text-simplu, PDF sau Markdown. Extensii suplimentare pot fi specificate în Configurare.
-
+ Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.Adaugă un folder cu fişiere în format text, PDF sau Markdown. Alte extensii pot fi specificate în Configurare.
-
+ Please choose a directorySelectează un director/folder
-
+ NameDenumire
-
+ Collection name...Denumirea Colecţiei...
-
+ Name of the collection to add (Required)Denumirea Colecţiei de adăugat (necesar)
-
+ FolderFolder
-
+ Folder path...Calea spre folder...
-
+ Folder path to documents (Required)Calea spre documente (necesar)
-
+ BrowseCăutare
-
+ Create CollectionCreează Colecţia
@@ -72,174 +72,174 @@
AddModelView
-
+ ← Existing Models← Modelele curente/instalate
-
+ Explore ModelsCaută modele
-
+ Discover and download models by keyword search...Caută şi descarcă modele după un cuvânt-cheie...
-
+ Text field for discovering and filtering downloadable modelsCâmp pentru căutarea şi filtrarea modelelor ce pot fi descărcate
-
+ Initiate model discovery and filteringIniţiază căutarea şi filtrarea modelelor
-
+ Triggers discovery and filtering of modelsActivează căutarea şi filtrarea modelelor
-
+ DefaultImplicit
-
+ LikesLikes
-
+ DownloadsDownload-uri
-
+ RecentRecent/e
-
+ AscAsc. (A->Z)
-
+ DescDesc. (Z->A)
-
+ NoneNiciunul
-
+ Searching · %1Căutare · %1
-
+ Sort by: %1Ordonare după: %1
-
+ Sort dir: %1Sensul ordonării: %1
-
+ Limit: %1Limită: %1
-
+ Network error: could not retrieve %1Eroare de reţea: nu se poate prelua %1
-
-
+
+ Busy indicatorIndicator de activitate
-
+ Displayed when the models request is ongoingAfişat în timpul solicitării modelului
-
+ Model fileFişierul modelului
-
+ Model file to be downloadedFişierul modelului de descărcat
-
+ DescriptionDescriere
-
+ File descriptionDescrierea fişierului
-
+ CancelAnulare
-
+ ResumeContinuare
-
+ DownloadDownload
-
+ Stop/restart/start the downloadOpreşte/Reporneşte/Începe descărcarea
-
+ RemoveŞterge
-
+ Remove model from filesystemŞterge modelul din sistemul de fişiere
-
-
+
+ InstallInstalare
-
+ Install online modelInstalez un model online
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Eroare</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem (%2).</strong></font>
@@ -250,18 +250,18 @@
<strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are acest sistem (%2).</strong></font>
-
+ %1 GB%1 GB
-
-
+
+ ??
-
+ Describes an error that occurred when downloadingDescrie eroarea apărută în timpul descărcării
@@ -271,100 +271,100 @@
<strong><font size="1"><a href="#eroare">Eroare</a></strong></font>
-
+ Error for incompatible hardwareEroare: hardware incompatibil
-
+ Download progressBarProgresia descărcării
-
+ Shows the progress made in the downloadAfişează progresia descărcării
-
+ Download speedViteza de download
-
+ Download speed in bytes/kilobytes/megabytes per secondViteza de download în bytes/kilobytes/megabytes pe secundă
-
+ Calculating...Calculare...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedDacă se calculează hash-ul fişierului
-
+ Displayed when the file hash is being calculatedSe afişează când se calculează hash-ul fişierului
-
+ ERROR: $API_KEY is empty.EROARE: $API_KEY absentă
-
+ enter $API_KEYintrodu cheia $API_KEY
-
+ ERROR: $BASE_URL is empty.EROARE: $BASE_URL absentă
-
+ enter $BASE_URLintrodu $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.EROARE: $MODEL_NAME absent
-
+ enter $MODEL_NAMEintrodu $MODEL_NAME
-
+ File sizeDimensiunea fişierului
-
+ RAM requiredRAM necesară
-
+ ParametersParametri
-
+ QuantQuant(ificare)
-
+ TypeTip
@@ -372,17 +372,17 @@
ApplicationSettings
-
+ ApplicationAplicaţie/Program
-
+ Network dialogReţea
-
+ opt-in to share feedback/conversationsopţional: partajarea (share) de comentarii/conversaţii
@@ -398,57 +398,57 @@
EROARE: Sistemul de actualizare nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
-
+ Error dialogEroare
-
+ Application SettingsConfigurarea programului
-
+ GeneralGeneral
-
+ ThemeTema pentru interfaţă
-
+ The application color scheme.Schema de culori a programului.
-
+ DarkÎntunecat
-
+ LightLuminos
-
+ LegacyDarkÎntunecat-vechi
-
+ Font SizeDimensiunea textului
-
+ The size of text in the application.Dimensiunea textului în program.
-
+ DeviceDispozitiv/Device
@@ -458,7 +458,7 @@
Dispozitivul de calcul utilizat pentru generarea de text. "Auto" apelează la Vulkan sau la Metal.
-
+ ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -469,138 +469,138 @@
EROARE: Sistemul de Update nu poate găsi componenta MaintenanceTool<br> necesară căutării de versiuni noi!<br><br> Ai instalat acest program folosind kitul online? Dacă da,<br> atunci MaintenanceTool trebuie să fie un nivel mai sus de folderul<br> unde ai instalat programul.<br><br> Dacă nu poate fi lansată manual, atunci programul trebuie reinstalat.
-
+ SmallMic
-
+ MediumMediu
-
+ LargeMare
-
+ Language and LocaleLimbă şi Localizare
-
+ The language and locale you wish to use.Limba şi Localizarea de utilizat.
-
+ System LocaleLocalizare
-
+ The compute device used for text generation.Dispozitivul de calcul utilizat pentru generarea de text.
-
-
+
+ Application defaultImplicit
-
+ Default ModelModelul implicit
-
+ The preferred model for new chats. Also used as the local server fallback.Modelul preferat pentru noile conversaţii. Va fi folosit drept rezervă pentru serverul local.
-
+ Suggestion ModeModul de sugerare
-
+ Generate suggested follow-up questions at the end of responses.Generarea de întrebări pentru continuare, la finalul replicilor.
-
+ When chatting with LocalDocsCând se discută cu LocalDocs
-
+ Whenever possibleOricând e posibil
-
+ NeverNiciodată
-
+ Download PathCalea pentru download
-
+ Where to store local models and the LocalDocs database.Unde să fie plasate modelele şi baza de date LocalDocs.
-
+ BrowseCăutare
-
+ Choose where to save model filesSelectează locul unde vor fi plasate fişierele modelelor
-
+ Enable DatalakeActivează DataLake
-
+ Send chats and feedback to the GPT4All Open-Source Datalake.Trimite conversaţii şi comentarii către componenta Open-source DataLake a GPT4All.
-
+ AdvancedAvansate
-
+ CPU ThreadsThread-uri CPU
-
+ The number of CPU threads used for inference and embedding.Numărul de thread-uri CPU utilizate pentru inferenţă şi embedding.
-
+ Save Chat ContextSalvarea contextului conversaţiei
-
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
-
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
@@ -610,7 +610,7 @@
Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
-
+ Enable Local ServerActivează Serverul local
@@ -620,27 +620,27 @@
Activează pe localhost un Server compatibil cu Open-AI. ATENŢIE: Creşte consumul de resurse.
-
+ API Server PortPortul Serverului API
-
+ The port to use for the local server. Requires restart.Portul utilizat pentru Serverul local. Necesită repornirea programului.
-
+ Check For UpdatesCaută update-uri
-
+ Manually check for an update to GPT4All.Caută manual update-uri pentru GPT4All.
-
+ UpdatesUpdate-uri/Actualizări
@@ -648,13 +648,13 @@
Chat
-
-
+
+ New ChatConversaţie Nouă
-
+ Server ChatConversaţie cu Serverul
@@ -662,12 +662,12 @@
ChatAPIWorker
-
+ ERROR: Network error occurred while connecting to the API serverEROARE: Eroare de reţea - conectarea la serverul API
-
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2ChatAPIWorker::handleFinished - eroare: HTTP Error %1 %2
@@ -675,62 +675,62 @@
ChatDrawer
-
+ DrawerSertar
-
+ Main navigation drawerSertarul principal de navigare
-
+ + New Chat+ Conversaţie nouă
-
+ Create a new chatCreează o Conversaţie nouă
-
+ Select the current chat or edit the chat when in edit modeSelectează conversaţia curentă sau editeaz-o când eşti în modul editare
-
+ Edit chat nameEditează denumirea conversaţiei
-
+ Save chat nameSalvează denumirea conversaţiei
-
+ Delete chatŞterge conversaţia
-
+ Confirm chat deletionCONFIRMĂ ştergerea conversaţiei
-
+ Cancel chat deletionANULEAZĂ ştergerea conversaţiei
-
+ List of chatsLista conversaţiilor
-
+ List of chats in the drawer dialogLista conversaţiilor în secţiunea-sertar
@@ -738,32 +738,32 @@
ChatListModel
-
+ TODAYASTĂZI
-
+ THIS WEEKSĂPTĂMÂNA ACEASTA
-
+ THIS MONTHLUNA ACEASTA
-
+ LAST SIX MONTHSULTIMELE ŞASE LUNI
-
+ THIS YEARANUL ACESTA
-
+ LAST YEARANUL TRECUT
@@ -771,113 +771,113 @@
ChatView
-
+ <h3>Warning</h3><p>%1</p><h3>Atenţie</h3><p>%1</p>
-
+ Switch model dialogSchimbarea modelului
-
+ Warn the user if they switch models, then context will be erasedAvertizează utilizatorul că la schimbarea modelului va fi şters contextul
-
+ Conversation copied to clipboard.Conversaţia a fost plasată în Clipboard.
-
+ Code copied to clipboard.Codul a fost plasat în Clipboard.
-
+ Chat panelSecţiunea de chat
-
+ Chat panel with optionsSecţiunea de chat cu opţiuni
-
+ Reload the currently loaded modelReîncarcă modelul curent
-
+ Eject the currently loaded modelEjectează modelul curent
-
+ No model installed.Niciun model instalat.
-
+ Model loading error.Eroare la încărcarea modelului.
-
+ Waiting for model...Se aşteaptă modelul...
-
+ Switching context...Se schimbă contextul...
-
+ Choose a model...Selectează un model...
-
+ Not found: %1Absent: %1
-
+ The top item is the current modelPrimul element e modelul curent
-
-
+
+ LocalDocsLocalDocs
-
+ Add documentsAdaug documente
-
+ add collections of documents to the chatadaugă Colecţii de documente la conversaţie
-
+ Load the default modelÎncarcă modelul implicit
-
+ Loads the default model which can be changed in settingsÎncarcă modelul implicit care poate fi stabilit în Configurare
-
+ No Model InstalledNiciun model instalat
@@ -887,7 +887,7 @@
GPT4All necesită cel puţin un model pentru a putea porni
-
+ <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>EROARE la încărcarea
modelului:</h3><br><i>"%1"</i><br><br>Astfel
@@ -906,38 +906,38 @@
se oferă ajutor
-
+ GPT4All requires that you install at least one
model to get startedGPT4All necesită cel puţin un model pentru a putea rula
-
+ Install a ModelInstalează un model
-
+ Shows the add model viewAfişează secţiunea de adăugare a unui model
-
+ Conversation with the modelConversaţie cu modelul
-
+ prompt / response pairs from the conversationperechi prompt/replică din conversaţie
-
+ GPT4AllGPT4All
-
+ YouTu
@@ -946,98 +946,98 @@ model to get started
se recalculează contextul...
-
+ response stopped ...replică întreruptă...
-
+ processing ...procesare...
-
+ generating response ...se generează replica...
-
+ generating questions ...se generează întrebări...
-
-
+
+ CopyCopiere
-
+ Copy MessageCopiez mesajul
-
+ Disable markdownDezactivez markdown
-
+ Enable markdownActivez markdown
-
+ Thumbs upBravo
-
+ Gives a thumbs up to the responseDă un Bravo acestei replici
-
+ Thumbs downAiurea
-
+ Opens thumbs down dialogDeschide reacţia Aiurea
-
+ Suggested follow-upsContinuări sugerate
-
+ Erase and reset chat sessionŞterge şi resetează sesiunea de chat
-
+ Copy chat session to clipboardCopiez sesiunea de chat (conversaţia) în Clipboard
-
+ Redo last chat responseReface ultima replică
-
+ Stop generatingOpreşte generarea
-
+ Stop the current response generationOpreşte generarea replicii curente
-
+ Reloads the modelReîncarc modelul
@@ -1072,38 +1072,38 @@ model to get started
se oferă ajutor
-
-
+
+ Reload · %1Reîncărcare · %1
-
+ Loading · %1Încărcare · %1
-
+ Load · %1 (default) →Încarcă · %1 (implicit) →
-
+ restoring from text ...restaurare din text...
-
+ retrieving localdocs: %1 ...se preia din LocalDocs: %1 ...
-
+ searching localdocs: %1 ...se caută în LocalDocs: %1 ...
-
+ %n Source(s)%n Sursa
@@ -1112,42 +1112,42 @@ model to get started
-
+ Send a message...Trimite un mesaj...
-
+ Load a model to continue...Încarcă un model pentru a continua...
-
+ Send messages/prompts to the modelTrimite mesaje/prompt-uri către model
-
+ CutDecupare (Cut)
-
+ PasteAlipire (Paste)
-
+ Select AllSelectez tot
-
+ Send messageTrimit mesajul
-
+ Sends the message/prompt contained in textfield to the modelTrimite modelului mesajul/prompt-ul din câmpul-text
@@ -1155,12 +1155,12 @@ model to get started
CollectionsDrawer
-
+ Warning: searching collections while indexing can return incomplete resultsAtenţie: căutarea în Colecţii în timp ce sunt Indexate poate întoarce rezultate incomplete
-
+ %n file(s)%n fişier
@@ -1169,7 +1169,7 @@ model to get started
-
+ %n word(s)%n cuvânt
@@ -1178,17 +1178,17 @@ model to get started
-
+ UpdatingActualizare
-
+ + Add Docs+ Adaug documente
-
+ Select a collection to make it available to the chat model.Selectează o Colecţie pentru ca modelul să o poată accesa.
@@ -1196,37 +1196,37 @@ model to get started
Download
-
+ Model "%1" is installed successfully.Modelul "%1" - instalat cu succes.
-
+ ERROR: $MODEL_NAME is empty.EROARE: $MODEL_NAME absent.
-
+ ERROR: $API_KEY is empty.EROARE: $API_KEY absentă
-
+ ERROR: $BASE_URL is invalid.EROARE: $BASE_URL incorectă.
-
+ ERROR: Model "%1 (%2)" is conflict.EROARE: Model "%1 (%2)" conflictual.
-
+ Model "%1 (%2)" is installed successfully.Modelul "%1 (%2)" - instalat cu succes.
-
+ Model "%1" is removed.Modelul "%1" - îndepărtat
@@ -1234,87 +1234,87 @@ model to get started
HomeView
-
+ Welcome to GPT4AllBun venit în GPT4All
-
+ The privacy-first LLM chat applicationProgramul ce prioritizează confidenţialitatea (privacy)
-
+ Start chattingÎncepe o conversaţie
-
+ Start ChattingÎncepe o conversaţie
-
+ Chat with any LLMDialoghează cu orice LLM
-
+ LocalDocsLocalDocs
-
+ Chat with your local filesDialoghează cu fişiere locale
-
+ Find ModelsCaută modele
-
+ Explore and download modelsExplorează şi descarcă modele
-
+ Latest newsUltimele ştiri
-
+ Latest news from GPT4AllUltimele ştiri de la GPT4All
-
+ Release NotesDespre această versiune
-
+ DocumentationDocumentaţie
-
+ DiscordDiscord
-
+ X (Twitter)X (Twitter)
-
+ GithubGitHub
-
+ nomic.ainomic.ai
@@ -1323,7 +1323,7 @@ model to get started
GitHub
-
+ Subscribe to NewsletterAbonare la Newsletter
@@ -1331,22 +1331,22 @@ model to get started
LocalDocsSettings
-
+ LocalDocsLocalDocs
-
+ LocalDocs SettingsConfigurarea LocalDocs
-
+ IndexingIndexare
-
+ Allowed File ExtensionsExtensii compatibile de fişier
@@ -1356,12 +1356,12 @@ model to get started
Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
-
+ EmbeddingEmbedding
-
+ Use Nomic Embed APIFolosesc Nomic Embed API
@@ -1371,7 +1371,7 @@ model to get started
Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
-
+ Nomic API KeyCheia API Nomic
@@ -1382,72 +1382,72 @@ model to get started
Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a> Necesită repornire.
-
+ Embeddings DeviceDispozitivul pentru Embeddings
-
+ The compute device used for embeddings. Requires restart.Dispozitivul pentru Embeddings. Necesită repornire.
-
+ Comma-separated list. LocalDocs will only attempt to process files with these extensions.Extensiile, separate prin virgulă. LocalDocs va încerca procesarea numai a fişierelor cu aceste extensii.
-
+ Embed documents using the fast Nomic API instead of a private local model. Requires restart.Embedding pe documente folosind API de la Nomic în locul unui model local. Necesită repornire.
-
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.Cheia API de utilizat cu Nomic Embed. Obţine o cheie prin Atlas: <a href="https://atlas.nomic.ai/cli-login">pagina cheilor API</a>. Necesită repornire.
-
+ Application defaultImplicit
-
+ DisplayVizualizare
-
+ Show SourcesAfişarea Surselor
-
+ Display the sources used for each response.Afişează Sursele utilizate pentru fiecare replică.
-
+ AdvancedAvansate
-
+ Warning: Advanced usage only.Atenţie: Numai pentru utilizare avansată.
-
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.Valori prea mari pot cauza erori cu LocalDocs, replici foarte lente sau chiar absenţa lor. În mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
-
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
-
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.Numărul maxim al citatelor ce corespund şi care vor fi adăugate la contextul pentru prompt. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea cauzează generare lentă.
@@ -1461,7 +1461,7 @@ model to get started
Valori prea mari pot cauza erori cu LocalDocs, replici lente sau absenţa lor completă. în mare, numărul {N caractere x N citate} este adăugat la Context Window/Size/Length a modelului. Mai multe informaţii: <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">aici</a>.
-
+ Document snippet size (characters)Lungimea (în caractere) a citatelor din documente
@@ -1471,7 +1471,7 @@ model to get started
numărul caracterelor din fiecare citat. Numere mari amplifică probabilitatea unor replici corecte, dar de asemenea pot cauza generare lentă.
-
+ Max document snippets per promptNumărul maxim de citate per prompt
@@ -1485,17 +1485,17 @@ model to get started
LocalDocsView
-
+ LocalDocsLocalDocs
-
+ Chat with your local filesDialoghează cu fişiere locale
-
+ + Add Collection+ Adaugă o Colecţie
@@ -1504,102 +1504,102 @@ model to get started
EROARE: Baza de date LocalDocs nu e validă.
-
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>EROARE: Baza de date LocalDocs nu poate fi accesată sau nu e validă.</h3><br><i>Notă: Programul trebuie repornit după ce se încearcă oricare din următoarele remedii sugerate.</i><br><ul><li>Asigură-te că folderul pentru <b>Download Path</b> există în sistemul de fişiere.</li><li>Verifică permisiunile şi apartenenţa folderului pentru <b>Download Path</b>.</li><li>Dacă există fişierul <b>localdocs_v2.db</b>, verifică-i apartenenţa şi permisiunile citire/scriere (read/write).</li></ul><br>Dacă problema persistă şi există vreun fişier 'localdocs_v*.db', ca ultimă soluţie poţi<br>încerca duplicarea (backup) şi apoi ştergerea lor. Oricum, va trebui să re-creezi Colecţiile.
-
+ No Collections InstalledNu există Colecţii instalate
-
+ Install a collection of local documents to get started using this featureInstalează o Colecţie de documente pentru a putea utiliza funcţionalitatea aceasta
-
+ + Add Doc Collection+ Adaugă o Colecţie de documente
-
+ Shows the add model viewAfişează secţiunea de adăugare a unui model
-
+ Indexing progressBarBara de progresie a Indexării
-
+ Shows the progress made in the indexingAfişează progresia Indexării
-
+ ERROREROARE
-
+ INDEXING...SE INDEXEAZĂ...
-
+ EMBEDDING...EMBEDDINGs...
-
+ REQUIRES UPDATENECESITĂ UPDATE
-
+ READYGATA
-
+ INSTALLING...INSTALARE...
-
+ Indexing in progressSe Indexează...
-
+ Embedding in progress...Se calculează Embeddings...
-
+ This collection requires an update after version changeAceastă Colecţie necesită update după schimbarea versiunii
-
+ Automatically reindexes upon changes to the folderSe reindexează automat după schimbări ale folderului
-
+ Installation in progress...Instalare în curs...
-
+ %%
-
+ %n file(s)%n fişier
@@ -1608,7 +1608,7 @@ model to get started
-
+ %n word(s)%n cuvânt
@@ -1617,27 +1617,27 @@ model to get started
-
+ RemoveŞterg
-
+ RebuildReconstrucţie
-
+ Reindex this folder from scratch. This is slow and usually not needed.Reindexează de la zero acest folder. Procesul e lent şi de obicei inutil.
-
+ UpdateUpdate/Actualizare
-
+ Update the collection to the new version. This is a slow operation.Actualizează Colecţia la noua versiune. Această procedură e lentă.
@@ -1661,67 +1661,78 @@ model to get started
<strong>Modelul ChatGPT GPT-3.5 Turbo al OpenAI</strong><br> %1
-
+
+
+ cannot open "%1": %2
+
+
+
+
+ cannot create "%1": %2
+
+
+
+ %1 (%2)%1 (%2)
-
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><strong>Model API compatibil cu OpenAI</strong><br><ul><li>Cheia API: %1</li><li>Base URL: %2</li><li>Numele modelului: %3</li></ul>
-
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li><ul><li>Necesită o cheie API OpenAI personală. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la OpenAI!</li><li>Cheia ta API va fi stocată pe disc (local) </li><li>Va fi utilizată numai pentru comunicarea cu OpenAI</li><li>Poţi solicita o cheie API <a href="https://platform.openai.com/account/api-keys">aici</a>.</li>
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1<strong>Modelul ChatGPT GPT-3.5 Turbo al OpenAI</strong><br> %1
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.<br><br><i>* Chiar dacă plăteşti la OpenAI pentru ChatGPT-4, aceasta nu garantează accesul la cheia API. Contactează OpenAI pentru mai multe informaţii.
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2<strong>Modelul ChatGPT GPT-4 al OpenAI</strong><br> %1 %2
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li><ul><li>Necesită cheia personală Mistral API. </li><li>ATENŢIE: Conversaţiile tale vor fi trimise la Mistral!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu Mistral</li><li>Poţi solicita o cheie API <a href="https://console.mistral.ai/user/api-keys">aici</a>.</li>
-
+ <strong>Mistral Tiny model</strong><br> %1<strong>Modelul Mistral Tiny</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1<strong>Modelul Mistral Small</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1<strong>Modelul Mistral Medium</strong><br> %1
-
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li><ul><li>Necesită cheia personală API şi base-URL a API.</li><li>ATENŢIE: Conversaţiile tale vor fi trimise la serverul API compatibil cu OpenAI specificat!</li><li>Cheia ta API va fi stocată pe disc (local)</li><li>Va fi utilizată numai pentru comunicarea cu serverul API compatibil cu OpenAI</li>
-
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>Conectare la un server API compatibil cu OpenAI</strong><br> %1
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul><strong>Creat de către %1.</strong><br><ul><li>Publicat în: %2.<li>Acest model are %3 Likes.<li>Acest model are %4 download-uri.<li>Mai multe informaţii pot fi găsite <a href="https://huggingface.co/%5">aici</a>.</ul>
@@ -1752,37 +1763,37 @@ model to get started
ModelSettings
-
+ ModelModel
-
+ Model SettingsConfigurez modelul
-
+ CloneClonez
-
+ RemoveŞterg
-
+ NameDenumire
-
+ Model FileFişierul modelului
-
+ System PromptSystem Prompt
@@ -1792,12 +1803,12 @@ model to get started
Plasat la Începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de Încadrare.
-
+ Prompt TemplatePrompt Template
-
+ The template that wraps every prompt.Standardul de formulare a fiecărui prompt.
@@ -1807,32 +1818,32 @@ model to get started
Trebuie să conţină textul "%1" care va fi Înlocuit cu ceea ce scrie utilizatorul.
-
+ Chat Name PromptDenumirea conversaţiei
-
+ Prompt used to automatically generate chat names.Standardul de formulare a denumirii conversaţiilor.
-
+ Suggested FollowUp PromptPrompt-ul sugerat pentru a continua
-
+ Prompt used to generate suggested follow-up questions.Prompt-ul folosit pentru generarea întrebărilor de continuare.
-
+ Context LengthLungimea Contextului
-
+ Number of input and output tokens the model sees.Numărul token-urilor de input şi de output văzute de model.
@@ -1843,12 +1854,12 @@ model to get started
Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului.
-
+ TemperatureTemperatura
-
+ Randomness of model output. Higher -> more variation.Libertatea/Confuzia din replica modelului. Mai mare -> mai multă libertate.
@@ -1858,12 +1869,12 @@ model to get started
Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.
-
+ Top-PTop-P
-
+ Nucleus Sampling factor. Lower -> more predictable.Factorul de Nucleus Sampling. Mai mic -> predictibilitate mai mare.
@@ -1873,92 +1884,92 @@ model to get started
Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
-
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.Plasat la începutul fiecărei conversaţii. Trebuie să conţină token-uri(le) adecvate de încadrare.
-
+ Must contain the string "%1" to be replaced with the user's input.Trebuie să conţină textul "%1" care va fi înlocuit cu ceea ce scrie utilizatorul.
-
+ Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.Numărul maxim combinat al token-urilor în prompt+replică înainte de a se pierde informaţie. Utilizarea unui context mai mare decât cel cu care a fost instruit modelul va întoarce rezultate mai slabe. NOTĂ: Nu are efect până la reîncărcarea modelului.
-
+ Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.Temperatura creşte probabilitatea de alegere a unor token-uri puţin probabile. NOTĂ: O temperatură tot mai înaltă determină replici tot mai creative şi mai puţin predictibile.
-
+ Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.Pot fi alese numai cele mai probabile token-uri a căror probabilitate totală este Top-P. NOTĂ: Se evită selectarea token-urilor foarte improbabile.
-
+ Min-PMin-P
-
+ Minimum token probability. Higher -> more predictable.Probabilitatea minimă a unui token. Mai mare -> mai predictibil.
-
+ Sets the minimum relative probability for a token to be considered.Stabileşte probabilitatea minimă relativă a unui token de luat în considerare.
-
+ Top-KTop-K
-
+ Size of selection pool for tokens.Dimensiunea setului de token-uri.
-
+ Only the top K most likely tokens will be chosen from.Se va alege numai din cele mai probabile K token-uri.
-
+ Max LengthLungimea maximă
-
+ Maximum response length, in tokens.Lungimea maximă - în token-uri - a replicii.
-
+ Prompt Batch SizePrompt Batch Size
-
+ The batch size used for prompt processing.Dimensiunea setului de token-uri citite simultan din prompt.
-
+ Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.Numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
-
+ How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -1970,32 +1981,32 @@ NOTE: Does not take effect until you reload the model.
numărul token-urilor procesate simultan. NOTĂ: Valori tot mai mari pot accelera citirea prompt-urilor, dar şi utiliza mai multă RAM.
-
+ Repeat PenaltyPenalizarea pentru repetare
-
+ Repetition penalty factor. Set to 1 to disable.Factorul de penalizare a repetării ce se dezactivează cu valoarea 1.
-
+ Repeat Penalty TokensToken-uri pentru penalizare a repetării
-
+ Number of previous tokens used for penalty.Numărul token-urilor anterioare considerate pentru penalizare.
-
+ GPU LayersLayere în GPU
-
+ Number of model layers to load into VRAM.Numărul layerelor modelului ce vor fi încărcate în VRAM.
@@ -2010,89 +2021,89 @@ NOTE: Does not take effect until you reload the model.
ModelsView
-
+ No Models InstalledNu există modele instalate
-
+ Install a model to get started using GPT4AllInstalează un model pentru a începe să foloseşti GPT4All
-
-
+
+ + Add Model+ Adaugă un model
-
+ Shows the add model viewAfişează secţiunea de adăugare a unui model
-
+ Installed ModelsModele instalate
-
+ Locally installed chat modelsModele conversaţionale instalate local
-
+ Model fileFişierul modelului
-
+ Model file to be downloadedFişierul modelului ce va fi descărcat
-
+ DescriptionDescriere
-
+ File descriptionDescrierea fişierului
-
+ CancelAnulare
-
+ ResumeContinuare
-
+ Stop/restart/start the downloadOprirea/Repornirea/Iniţierea descărcării
-
+ RemoveŞterg
-
+ Remove model from filesystemŞterg modelul din sistemul de fişiere
-
-
+
+ InstallInstalează
-
+ Install online modelInstalez un model online
@@ -2108,130 +2119,130 @@ NOTE: Does not take effect until you reload the model.
<strong><font size="2">ATENţIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău (%2).</strong></font>
-
+ %1 GB%1 GB
-
+ ??
-
+ Describes an error that occurred when downloadingDescrie o eroare apărută la download
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#eroare">Error</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">ATENŢIE: Nerecomandat pentru acest hardware. Modelul necesită mai multă memorie (%1 GB) decât are sistemul tău(%2).</strong></font>
-
+ Error for incompatible hardwareEroare - hardware incompatibil
-
+ Download progressBarBara de progresie a descărcării
-
+ Shows the progress made in the downloadAfişează progresia descărcării
-
+ Download speedViteza de download
-
+ Download speed in bytes/kilobytes/megabytes per secondViteza de download în bytes/kilobytes/megabytes pe secundă
-
+ Calculating...Se calculează...
-
-
-
-
+
+
+
+ Whether the file hash is being calculatedDacă se va calcula hash-ul fişierului
-
+ Busy indicatorIndicator de activitate
-
+ Displayed when the file hash is being calculatedAfişat când se calculează hash-ul unui fişier
-
+ ERROR: $API_KEY is empty.EROARE: $API_KEY absentă.
-
+ enter $API_KEYintrodu cheia $API_KEY
-
+ ERROR: $BASE_URL is empty.EROARE: $BASE_URL absentă.
-
+ enter $BASE_URLintrodu $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.EROARE: $MODEL_NAME absent.
-
+ enter $MODEL_NAMEintrodu $MODEL_NAME
-
+ File sizeDimensiunea fişierului
-
+ RAM requiredRAM necesară
-
+ ParametersParametri
-
+ QuantQuant(ificare)
-
+ TypeTip
@@ -2239,12 +2250,12 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
-
+ Fancy linkLink haios
-
+ A stylized linkUn link cu stil
@@ -2252,7 +2263,7 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
-
+ Please choose a directorySelectează un director (folder)
@@ -2260,12 +2271,12 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
-
+ Restore DefaultsRestaurez valorile implicite
-
+ Restores settings dialog to a default stateRestaurez secţiunea de configurare la starea sa implicită
@@ -2273,7 +2284,7 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
-
+ Contribute data to the GPT4All Opensource Datalake.Contribuie cu date/informaţii la componenta Open-source DataLake a GPT4All.
@@ -2311,7 +2322,7 @@ NOTE: Does not take effect until you reload the model.
participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
-
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2334,47 +2345,47 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
participant contribuitor la orice lansare a unui model GPT4All care foloseşte datele tale!
-
+ Terms for opt-inTermenii pentru participare
-
+ Describes what will happen when you opt-inDescrie ce se întâmplă când participi
-
+ Please provide a name for attribution (optional)Specifică o denumire pentru atribuire (opţional)
-
+ Attribution (optional)Atribuire (opţional)
-
+ Provide attributionOferă atribuirea
-
+ EnableActivează
-
+ Enable opt-inActivează participarea
-
+ CancelAnulare
-
+ Cancel opt-inAnulează participarea
@@ -2382,17 +2393,17 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
NewVersionDialog
-
+ New version is availableO nouă versiune disponibilă!
-
+ UpdateUpdate/Actualizare
-
+ Update to new versionActualizează la noua versiune
@@ -2400,17 +2411,17 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
PopupDialog
-
+ Reveals a shortlived help balloonAfişează un mesaj scurt de asistenţă
-
+ Busy indicatorIndicator de activitate
-
+ Displayed when the popup is showing busySe afişează când procedura este în desfăşurare
@@ -2418,28 +2429,28 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
SettingsView
-
-
+
+ SettingsConfigurare
-
+ Contains various application settingsConţine setări ale programului
-
+ ApplicationProgram
-
+ ModelModel
-
+ LocalDocsLocalDocs
@@ -2447,7 +2458,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
StartupDialog
-
+ Welcome!Bun venit!
@@ -2460,12 +2471,12 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
%2
-
+ Release notesDespre versiune
-
+ Release notes for this versionDespre această versiune
@@ -2517,7 +2528,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
care foloseşte datele tale!
-
+ ### Release notes
%1### Contributors
%2
@@ -2526,7 +2537,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
%2
-
+ ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2562,71 +2573,71 @@ participant contribuitor la orice lansare a unui model GPT4All
care foloseşte datele tale!
-
+ Terms for opt-inTermenii pentru participare
-
+ Describes what will happen when you opt-inDescrie ce se întâmplă când participi
-
-
+
+ Opt-in for anonymous usage statisticsAcceptă colectarea de statistici despre utilizare -anonimă-
-
-
+
+ YesDa
-
+ Allow opt-in for anonymous usage statisticsAcceptă participarea la colectarea de statistici despre utilizare -anonimă-
-
-
+
+ NoNu
-
+ Opt-out for anonymous usage statisticsAnulează participarea la colectarea de statistici despre utilizare -anonimă-
-
+ Allow opt-out for anonymous usage statisticsPermite anularea participării la colectarea de statistici despre utilizare -anonimă-
-
-
+
+ Opt-in for networkAcceptă pentru reţea
-
+ Allow opt-in for networkPermite participarea pentru reţea
-
+ Allow opt-in anonymous sharing of chats to the GPT4All DatalakePermite participarea la partajarea (share) -anonimă- a conversaţiilor către DataLake a GPT4All
-
+ Opt-out for networkRefuz participarea pentru reţea
-
+ Allow opt-out anonymous sharing of chats to the GPT4All DatalakePermite anularea participării la partajarea -anonimă- a conversaţiilor către DataLake a GPT4All
@@ -2639,23 +2650,23 @@ care foloseşte datele tale!
<b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
-
+ <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>Atenţie:</b> schimbarea modelului va şterge conversaţia curentă. Confirmi aceasta?
-
+ ContinueContinuă
-
+ Continue with model loadingContinuă încărcarea modelului
-
-
+
+ CancelAnulare
@@ -2663,32 +2674,32 @@ care foloseşte datele tale!
ThumbsDownDialog
-
+ Please edit the text below to provide a better response. (optional)Te rog, editează textul de mai jos pentru a oferi o replică mai bună (opţional).
-
+ Please provide a better response...Te rog, oferă o replică mai bună...
-
+ SubmitTrimite
-
+ Submits the user's responseTrimite răspunsul dat de utilizator
-
+ CancelAnulare
-
+ Closes the response dialogÎnchide afişarea răspunsului
@@ -2709,7 +2720,7 @@ care foloseşte datele tale!
<h3>A apărut o eroare la iniţializare:</h3><br><i>"Hardware incompatibil."</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ GPT4All v%1GPT4All v%1
@@ -2728,120 +2739,120 @@ care foloseşte datele tale!
<h3>A apărut o eroare la iniţializare:</h3><br><i>"Hardware incompatibil."</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>A apărut o eroare la iniţializare:; </h3><br><i>"Hardware incompatibil. "</i><br><br>Din păcate, procesorul (CPU) nu întruneşte condiţiile minime pentru a rula acest program. În particular, nu suportă instrucţiunile AVX pe care programul le necesită pentru a integra un model conversaţional modern. În acest moment, unica soluţie este să îţi aduci la zi sistemul hardware cu un CPU mai recent.<br><br>Aici sunt mai multe informaţii: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>A apărut o eroare la iniţializare:; </h3><br><i>"Nu poate fi accesat fişierul de configurare a programului."</i><br><br>Din păcate, ceva împiedică programul în a accesa acel fişier. Cauza poate fi un set de permisiuni incorecte pe directorul/folderul local de configurare unde se află acel fişier. Poţi parcurge canalul nostru <a href="https://discord.gg/4M2QFmTt2k">Discord</a> unde vei putea primi asistenţă.
-
+ Connection to datalake failed.Conectarea la DataLake a eşuat.
-
+ Saving chats.Se salvează conversaţiile.
-
+ Network dialogDialogul despre reţea
-
+ opt-in to share feedback/conversationsacceptă partajarea (share) de comentarii/conversaţii
-
+ Home viewSecţiunea de Început
-
+ Home view of applicationSecţiunea de Început a programului
-
+ HomePrima<br>pagină
-
+ Chat viewSecţiunea conversaţiilor
-
+ Chat view to interact with modelsSecţiunea de chat pentru interacţiune cu modele
-
+ ChatsConversaţii
-
-
+
+ ModelsModele
-
+ Models view for installed modelsSecţiunea modelelor instalate
-
-
+
+ LocalDocsLocalDocs
-
+ LocalDocs view to configure and use local docsSecţiunea LocalDocs de configurare şi folosire a Documentelor Locale
-
-
+
+ SettingsConfigurare
-
+ Settings view for application configurationSecţiunea de configurare a programului
-
+ The datalake is enabledDataLake: ACTIV
-
+ Using a network modelSe foloseşte un model pe reţea
-
+ Server mode is enabledModul Server: ACTIV
-
+ Installed modelsModele instalate
-
+ View of installed modelsSecţiunea modelelor instalate
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index 4bd6c395..5ee7e9b4 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -4,62 +4,62 @@
AddCollectionView
-
+ ← Existing Collections← 现有集合
-
+ Add Document Collection添加文档集合
-
+ Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.添加一个包含纯文本文件、PDF或Markdown的文件夹。在“设置”中配置其他扩展。
-
+ Please choose a directory请选择一个目录
-
+ Name名称
-
+ Collection name...集合名称
-
+ Name of the collection to add (Required)集合名称 (必须)
-
+ Folder目录
-
+ Folder path...目录地址
-
+ Folder path to documents (Required)文档的目录地址(必须)
-
+ Browse查看
-
+ Create Collection创建集合
@@ -67,22 +67,22 @@
AddModelView
-
+ ← Existing Models← 现有模型
-
+ Explore Models发现模型
-
+ Discover and download models by keyword search...通过关键词查找并下载模型 ...
-
+ Text field for discovering and filtering downloadable models用于发现和筛选可下载模型的文本字段
@@ -91,32 +91,32 @@
搜索中
-
+ Initiate model discovery and filtering启动模型发现和过滤
-
+ Triggers discovery and filtering of models触发模型的发现和筛选
-
+ Default默认
-
+ Likes喜欢
-
+ Downloads下载
-
+ Recent近期
@@ -125,12 +125,12 @@
排序:
-
+ Asc升序
-
+ Desc倒序
@@ -139,7 +139,7 @@
排序目录:
-
+ None无
@@ -152,145 +152,145 @@
网络问题:无法访问 http://gpt4all.io/models/models3.json
-
+ Searching · %1搜索中 · %1
-
+ Sort by: %1排序: %1
-
+ Sort dir: %1排序方向: %1
-
+ Limit: %1数量: %1
-
+ Network error: could not retrieve %1网络错误:无法检索 %1
-
-
+
+ Busy indicator繁忙程度
-
+ Displayed when the models request is ongoing在模型请求进行中时显示
-
+ Model file模型文件
-
+ Model file to be downloaded待下载模型
-
+ Description描述
-
+ File description文件描述
-
+ Cancel取消
-
+ Resume继续
-
+ Download下载
-
+ Stop/restart/start the download停止/重启/开始下载
-
+ Remove删除
-
+ Remove model from filesystem从系统中删除模型
-
-
+
+ Install安装
-
+ Install online model安装在线模型
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">错误</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">警告: 你的设备硬件不推荐 ,模型需要的内存 (%1 GB)比你的系统还要多 (%2).</strong></font>
-
+ ERROR: $API_KEY is empty.错误:$API_KEY 为空
-
+ ERROR: $BASE_URL is empty.错误:$BASE_URL 为空
-
+ enter $BASE_URL输入 $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.错误:$MODEL_NAME为空
-
+ enter $MODEL_NAME输入:$MODEL_NAME
-
+ %1 GB%1 GB
-
-
+
+ ??
@@ -299,7 +299,7 @@
<a href="#error">错误</a>
-
+ Describes an error that occurred when downloading描述下载过程中发生的错误
@@ -316,60 +316,60 @@
你的系统需要 (
-
+ Error for incompatible hardware硬件不兼容的错误
-
+ Download progressBar下载进度
-
+ Shows the progress made in the download显示下载进度
-
+ Download speed下载速度
-
+ Download speed in bytes/kilobytes/megabytes per second下载速度 b/kb/mb/s
-
+ Calculating...计算中
-
-
-
-
+
+
+
+ Whether the file hash is being calculated是否正在计算文件哈希
-
+ Displayed when the file hash is being calculated在计算文件哈希时显示
-
+ enter $API_KEY输入$API_KEY
-
+ File size文件大小
-
+ RAM required所需 RAM
@@ -378,17 +378,17 @@
GB
-
+ Parameters参数
-
+ Quant量化
-
+ Type类型
@@ -396,22 +396,22 @@
ApplicationSettings
-
+ Application应用
-
+ Network dialog网络对话
-
+ opt-in to share feedback/conversations选择加入以共享反馈/对话
-
+ ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -425,87 +425,87 @@
如果无法手动启动它,那么恐怕您需要重新安装。
-
+ Error dialog错误对话
-
+ Application Settings应用设置
-
+ General通用设置
-
+ Theme主题
-
+ The application color scheme.应用的主题颜色
-
+ Dark深色
-
+ Light亮色
-
+ LegacyDarkLegacyDark
-
+ Font Size字体大小
-
+ The size of text in the application.应用中的文本大小。
-
+ Small小
-
+ Medium中
-
+ Large大
-
+ Language and Locale语言和本地化
-
+ The language and locale you wish to use.你想使用的语言和区域设置。
-
+ System Locale系统语言
-
+ Device设备
@@ -514,138 +514,138 @@
用于文本生成的计算设备. "自动" 使用 Vulkan or Metal.
-
+ The compute device used for text generation.用于文本生成的计算设备。
-
-
+
+ Application default程序默认
-
+ Default Model默认模型
-
+ The preferred model for new chats. Also used as the local server fallback.新聊天的首选模型。也用作本地服务器回退。
-
+ Suggestion Mode建议模式
-
+ Generate suggested follow-up questions at the end of responses.在答复结束时生成建议的后续问题。
-
+ When chatting with LocalDocs与 LocalDocs 聊天时
-
+ Whenever possible只要有可能
-
+ Never从不
-
+ Download Path下载目录
-
+ Where to store local models and the LocalDocs database.本地模型和本地文档数据库存储目录
-
+ Browse查看
-
+ Choose where to save model files模型下载目录
-
+ Enable Datalake开启数据湖
-
+ Send chats and feedback to the GPT4All Open-Source Datalake.发送对话和反馈给GPT4All 的开源数据湖。
-
+ Advanced高级
-
+ CPU ThreadsCPU线程
-
+ The number of CPU threads used for inference and embedding.用于推理和嵌入的CPU线程数
-
+ Save Chat Context保存对话上下文
-
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.保存模型状态以提供更快加载速度。警告:每个对话需用约 2GB。
-
+ Enable Local Server开启本地服务
-
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.将OpenAI兼容服务器暴露给本地主机。警告:导致资源使用量增加。
-
+ API Server PortAPI 服务端口
-
+ The port to use for the local server. Requires restart.使用本地服务的端口,需要重启
-
+ Check For Updates检查更新
-
+ Manually check for an update to GPT4All.手动检查更新
-
+ Updates更新
@@ -653,13 +653,13 @@
Chat
-
-
+
+ New Chat新对话
-
+ Server Chat服务器对话
@@ -675,12 +675,12 @@
ChatAPIWorker
-
+ ERROR: Network error occurred while connecting to the API server错误:连接到 API 服务器时发生网络错误
-
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2ChatAPIWorker::handleFinished 收到 HTTP 错误 %1 %2
@@ -688,62 +688,62 @@
ChatDrawer
-
+ Drawer抽屉
-
+ Main navigation drawer导航
-
+ + New Chat+ 新对话
-
+ Create a new chat新对话
-
+ Select the current chat or edit the chat when in edit mode选择当前的聊天或在编辑模式下编辑聊天
-
+ Edit chat name修改对话名称
-
+ Save chat name保存对话名称
-
+ Delete chat删除对话
-
+ Confirm chat deletion确认删除对话
-
+ Cancel chat deletion取消删除对话
-
+ List of chats对话列表
-
+ List of chats in the drawer dialog对话框中的聊天列表
@@ -751,32 +751,32 @@
ChatListModel
-
+ TODAY今天
-
+ THIS WEEK本周
-
+ THIS MONTH本月
-
+ LAST SIX MONTHS半年内
-
+ THIS YEAR今年内
-
+ LAST YEAR去年
@@ -792,27 +792,27 @@
lt;br><br>模型加载失败可能由多种原因造成,但最常见的原因包括文件格式错误、下载不完整或损坏、文件类型错误、系统 RAM 不足或模型类型不兼容。以下是解决该问题的一些建议:<br><ul><li>确保模型文件具有兼容的格式和类型<li>检查下载文件夹中的模型文件是否完整<li>您可以在设置对话框中找到下载文件夹<li>如果您已侧载模型,请通过检查 md5sum 确保文件未损坏<li>在我们的<a href="https://docs.gpt4all.io/">文档</a>中了解有关支持哪些模型的更多信息对于 gui<li>查看我们的<a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 获取帮助
-
+ <h3>Warning</h3><p>%1</p><h3>警告</h3><p>%1</p>
-
+ Switch model dialog切换模型对话
-
+ Warn the user if they switch models, then context will be erased如果用户切换模型,则警告用户,然后上下文将被删除
-
+ Conversation copied to clipboard.复制对话到剪切板
-
+ Code copied to clipboard.复制代码到剪切板
@@ -821,52 +821,52 @@
响应:
-
+ Chat panel对话面板
-
+ Chat panel with options对话面板选项
-
+ Reload the currently loaded model重载当前模型
-
+ Eject the currently loaded model弹出当前加载的模型
-
+ No model installed.没有安装模型
-
+ Model loading error.模型加载错误
-
+ Waiting for model...稍等片刻
-
+ Switching context...切换上下文
-
+ Choose a model...选择模型
-
+ Not found: %1没找到: %1
@@ -879,23 +879,23 @@
载入中·
-
+ The top item is the current model最上面一项是当前模型
-
-
+
+ LocalDocs本地文档
-
+ Add documents添加文档
-
+ add collections of documents to the chat将文档集合添加到聊天中
@@ -908,53 +908,53 @@
(默认) →
-
+ Load the default model载入默认模型
-
+ Loads the default model which can be changed in settings加载默认模型,可以在设置中更改
-
+ No Model Installed没有下载模型
-
+ GPT4All requires that you install at least one
model to get startedGPT4All要求您至少安装一个模型才能开始
-
+ Install a Model下载模型
-
+ Shows the add model view查看添加的模型
-
+ Conversation with the model使用此模型对话
-
+ prompt / response pairs from the conversation对话中的提示/响应对
-
+ GPT4AllGPT4All
-
+ You您
@@ -971,7 +971,7 @@ model to get started
重新生成上下文...
-
+ response stopped ...响应停止...
@@ -984,176 +984,176 @@ model to get started
检索本地文档:
-
+ processing ...处理中
-
+ generating response ...响应中...
-
+ generating questions ...生成问题...
-
-
+
+ Copy复制
-
+ Copy Message复制内容
-
+ Disable markdown不允许markdown
-
+ Enable markdown允许markdown
-
+ Thumbs up点赞
-
+ Gives a thumbs up to the response点赞响应
-
+ Thumbs down点踩
-
+ Opens thumbs down dialog打开点踩对话框
-
+ Suggested follow-ups建议的后续行动
-
+ Erase and reset chat session擦除并重置聊天会话
-
+ Copy chat session to clipboard复制对话到剪切板
-
+ Redo last chat response重新生成上个响应
-
+ Stop generating停止生成
-
+ Stop the current response generation停止当前响应
-
+ Reloads the model重载模型
-
+ <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>加载模型时遇到错误:</h3><br><i><%1></i><br><br>模型加载失败可能由多种原因引起,但最常见的原因包括文件格式错误、下载不完整或损坏、文件类型错误、系统 RAM 不足或模型类型不兼容。以下是一些解决问题的建议:<br><ul><li>确保模型文件具有兼容的格式和类型<li>检查下载文件夹中的模型文件是否完整<li>您可以在设置对话框中找到下载文件夹<li>如果您已侧载模型,请通过检查 md5sum 确保文件未损坏<li>在我们的 <a href="https://docs.gpt4all.io/">文档</a> 中了解有关 gui 支持哪些模型的更多信息<li>查看我们的 <a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助
-
-
+
+ Reload · %1重载 · %1
-
+ Loading · %1载入中 · %1
-
+ Load · %1 (default) →载入 · %1 (默认) →
-
+ restoring from text ...从文本恢复中
-
+ retrieving localdocs: %1 ...检索本地文档: %1 ...
-
+ searching localdocs: %1 ...搜索本地文档: %1 ...
-
+ %n Source(s)%n 来源
-
+ Send a message...发送消息...
-
+ Load a model to continue...载入模型以继续...
-
+ Send messages/prompts to the model发送消息/提示词给模型
-
+ Cut剪切
-
+ Paste粘贴
-
+ Select All全选
-
+ Send message发送消息
-
+ Sends the message/prompt contained in textfield to the model将文本框中包含的消息/提示发送给模型
@@ -1161,36 +1161,36 @@ model to get started
CollectionsDrawer
-
+ Warning: searching collections while indexing can return incomplete results提示: 索引时搜索集合可能会返回不完整的结果
-
+ %n file(s)
-
+ %n word(s)
-
+ Updating更新中
-
+ + Add Docs+ 添加文档
-
+ Select a collection to make it available to the chat model.选择一个集合,使其可用于聊天模型。
@@ -1198,37 +1198,37 @@ model to get started
Download
-
+ Model "%1" is installed successfully.模型 "%1" 安装成功
-
+ ERROR: $MODEL_NAME is empty.错误:$MODEL_NAME 为空
-
+ ERROR: $API_KEY is empty.错误:$API_KEY为空
-
+ ERROR: $BASE_URL is invalid.错误:$BASE_URL 非法
-
+ ERROR: Model "%1 (%2)" is conflict.错误: 模型 "%1 (%2)" 有冲突.
-
+ Model "%1 (%2)" is installed successfully.模型 "%1 (%2)" 安装成功.
-
+ Model "%1" is removed.模型 "%1" 已删除.
@@ -1236,92 +1236,92 @@ model to get started
HomeView
-
+ Welcome to GPT4All欢迎使用 GPT4All
-
+ The privacy-first LLM chat application隐私至上的大语言模型聊天应用程序
-
+ Start chatting开始聊天
-
+ Start Chatting开始聊天
-
+ Chat with any LLM与任意大语言模型聊天
-
+ LocalDocs本地文档
-
+ Chat with your local files本地文件聊天
-
+ Find Models查找模型
-
+ Explore and download models发现并下载模型
-
+ Latest news新闻
-
+ Latest news from GPT4AllGPT4All新闻
-
+ Release Notes发布日志
-
+ Documentation文档
-
+ DiscordDiscord
-
+ X (Twitter)X (Twitter)
-
+ GithubGithub
-
+ nomic.ainomic.ai
-
+ Subscribe to Newsletter订阅新闻通讯
@@ -1329,117 +1329,117 @@ model to get started
LocalDocsSettings
-
+ LocalDocs本地文档
-
+ LocalDocs Settings本地文档设置
-
+ Indexing索引中
-
+ Allowed File Extensions允许的文件扩展名
-
+ Comma-separated list. LocalDocs will only attempt to process files with these extensions.逗号分隔的列表。LocalDocs 只会尝试处理具有这些扩展名的文件
-
+ EmbeddingEmbedding
-
+ Use Nomic Embed API使用 Nomic Embed API
-
+ Embed documents using the fast Nomic API instead of a private local model. Requires restart.使用快速的 Nomic API 而非私有本地模型嵌入文档。需要重启。
-
+ Nomic API KeyNomic API Key
-
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.用于 Nomic Embed 的 API 密钥。可从 Atlas <a href="https://atlas.nomic.ai/cli-login">API 密钥页面</a>获取。需要重启。
-
+ Embeddings DeviceEmbeddings 设备
-
+ The compute device used for embeddings. Requires restart.用于嵌入的计算设备。需要重启。
-
+ Application default程序默认
-
+ Display显示
-
+ Show Sources显示来源
-
+ Display the sources used for each response.显示每个响应所使用的源。
-
+ Advanced高级
-
+ Warning: Advanced usage only.警告:仅限高级使用。
-
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.值过大可能会导致 localdocs 失败、响应速度极慢或根本无法响应。粗略地说,{N 个字符 x N 个片段} 被添加到模型的上下文窗口中。更多信息请见<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此处</a>。
-
+ Document snippet size (characters)文档片段大小(字符)
-
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.每个文档片段的字符数。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。
-
+ Max document snippets per prompt每个提示的最大文档片段数
-
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.检索到的文档片段最多添加到提示上下文中的前 N 个最佳匹配项。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。
@@ -1447,17 +1447,17 @@ model to get started
LocalDocsView
-
+ LocalDocs本地文档
-
+ Chat with your local files和本地文件对话
-
+ + Add Collection+ 添加集合
@@ -1472,136 +1472,136 @@ model to get started
<h3>错误:无法访问 LocalDocs 数据库或该数据库无效。</h3><br><i>注意:尝试以下任何建议的修复方法后,您将需要重新启动。</i><br><ul><li>确保设置为<b>下载路径</b>的文件夹存在于文件系统中。</li><li>检查<b>下载路径</b>的所有权以及读写权限。</li><li>如果有<b>localdocs_v2.db</b>文件,请检查其所有权和读/写权限。</li></ul><br>如果问题仍然存在,并且存在任何“localdocs_v*.db”文件,作为最后的手段,您可以<br>尝试备份并删除它们。但是,您必须重新创建您的收藏。
-
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>错误:无法访问 LocalDocs 数据库或该数据库无效。</h3><br><i>注意:尝试以下任何建议的修复方法后,您将需要重新启动。</i><br><ul><li>确保设置为<b>下载路径</b>的文件夹存在于文件系统中。</li><li>检查<b>下载路径</b>的所有权以及读写权限。</li><li>如果有<b>localdocs_v2.db</b>文件,请检查其所有权和读/写权限。</li></ul><br>如果问题仍然存在,并且存在任何“localdocs_v*.db”文件,作为最后的手段,您可以<br>尝试备份并删除它们。但是,您必须重新创建您的收藏。
-
+ No Collections Installed没有安装集合
-
+ Install a collection of local documents to get started using this feature安装一组本地文档以开始使用此功能
-
+ + Add Doc Collection+ 添加文档集合
-
+ Shows the add model view显示添加模型视图
-
+ Indexing progressBar索引进度
-
+ Shows the progress made in the indexing显示索引进度
-
+ ERROR错误
-
+ INDEXING索引
-
+ EMBEDDINGEMBEDDING
-
+ REQUIRES UPDATE需更新
-
+ READY准备
-
+ INSTALLING安装中
-
+ Indexing in progress构建索引中
-
+ Embedding in progress正在进行 Embedding
-
+ This collection requires an update after version change此集合需要在版本更改后进行更新
-
+ Automatically reindexes upon changes to the folder在文件夹变动时自动重新索引
-
+ Installation in progress正在安装
-
+ %%
-
+ %n file(s)%n 文件
-
+ %n word(s)%n 词
-
+ Remove删除
-
+ Rebuild重新构建
-
+ Reindex this folder from scratch. This is slow and usually not needed.从头开始重新索引此文件夹。这个过程较慢,通常情况下不需要。
-
+ Update更新
-
+ Update the collection to the new version. This is a slow operation.将集合更新为新版本。这是一个缓慢的操作。
@@ -1609,52 +1609,63 @@ model to get started
ModelList
-
+
+
+ cannot open "%1": %2
+
+
+
+
+ cannot create "%1": %2
+
+
+
+ %1 (%2)%1 (%2)
-
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><strong>与 OpenAI 兼容的 API 模型</strong><br><ul><li>API 密钥:%1</li><li>基本 URL:%2</li><li>模型名称:%3</li></ul>
-
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li><ul><li>需要个人 OpenAI API 密钥。</li><li>警告:将把您的聊天内容发送给 OpenAI!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与 OpenAI 通信</li><li>您可以在此处<a href="https://platform.openai.com/account/api-keys">申请 API 密钥。</a></li>
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1<strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2<strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2
-
+ <strong>Mistral Tiny model</strong><br> %1<strong>Mistral Tiny model</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1<strong>Mistral Small model</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1<strong>Mistral Medium model</strong><br> %1
-
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li><ul><li>需要个人 API 密钥和 API 基本 URL。</li><li>警告:将把您的聊天内容发送到您指定的与 OpenAI 兼容的 API 服务器!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与与 OpenAI 兼容的 API 服务器通信</li>
-
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>连接到与 OpenAI 兼容的 API 服务器</strong><br> %1
@@ -1663,7 +1674,7 @@ model to get started
<strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br>
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.<br><br><i>* 即使您为ChatGPT-4向OpenAI付款,这也不能保证API密钥访问。联系OpenAI获取更多信息。
@@ -1672,7 +1683,7 @@ model to get started
<strong>OpenAI's ChatGPT model GPT-4</strong><br>
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li><ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li>
@@ -1689,7 +1700,7 @@ model to get started
<strong>Mistral Medium model</strong><br>
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul><strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul>
@@ -1697,57 +1708,57 @@ model to get started
ModelSettings
-
+ Model模型
-
+ Model Settings模型设置
-
+ Clone克隆
-
+ Remove删除
-
+ Name名称
-
+ Model File模型文件
-
+ System Prompt系统提示词
-
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.作为每次对话开头的前缀。必须包含适当的框架标记。
-
+ Prompt Template提示词模版
-
+ The template that wraps every prompt.包装每个提示的模板
-
+ Must contain the string "%1" to be replaced with the user's input.必须包含字符串 "%1",它将被替换为用户的输入。
@@ -1757,37 +1768,37 @@ optional image
添加可选图片
-
+ Chat Name Prompt聊天名称提示
-
+ Prompt used to automatically generate chat names.用于自动生成聊天名称的提示。
-
+ Suggested FollowUp Prompt建议的后续提示
-
+ Prompt used to generate suggested follow-up questions.用于生成建议的后续问题的提示。
-
+ Context Length上下文长度
-
+ Number of input and output tokens the model sees.模型看到的输入和输出令牌的数量。
-
+ Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1796,128 +1807,128 @@ NOTE: Does not take effect until you reload the model.
注意:在重新加载模型之前不会生效。
-
+ Temperature温度
-
+ Randomness of model output. Higher -> more variation.模型输出的随机性。更高->更多的变化。
-
+ Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.温度增加了选择不太可能的token的机会。
注:温度越高,输出越有创意,但预测性越低。
-
+ Top-PTop-P
-
+ Nucleus Sampling factor. Lower -> more predictable.核采样系数。较低 -> 更可预测。
-
+ Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.只能选择总概率高达top_p的最有可能的令牌。
注意:防止选择极不可能的token。
-
+ Min-PMin-P
-
+ Minimum token probability. Higher -> more predictable.最小令牌概率。更高 -> 更可预测。
-
+ Sets the minimum relative probability for a token to be considered.设置被考虑的标记的最小相对概率。
-
+ Top-KTop-K
-
+ Size of selection pool for tokens.令牌选择池的大小。
-
+ Only the top K most likely tokens will be chosen from.仅从最可能的前 K 个标记中选择
-
+ Max Length最大长度
-
+ Maximum response length, in tokens.最大响应长度(以令牌为单位)
-
+ Prompt Batch Size提示词批处理大小
-
+ The batch size used for prompt processing.用于提示词处理的批处理大小。
-
+ Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.一次要处理的提示令牌数量。
注意:较高的值可以加快读取提示,但会使用更多的RAM。
-
+ Repeat Penalty重复惩罚
-
+ Repetition penalty factor. Set to 1 to disable.重复处罚系数。设置为1可禁用。
-
+ Repeat Penalty Tokens重复惩罚数
-
+ Number of previous tokens used for penalty.用于惩罚的先前令牌数量。
-
+ GPU LayersGPU 层
-
+ Number of model layers to load into VRAM.要加载到VRAM中的模型层数。
-
+ How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -1929,134 +1940,134 @@ NOTE: Does not take effect until you reload the model.
ModelsView
-
+ No Models Installed未安装模型
-
+ Install a model to get started using GPT4All安装模型并开始使用
-
-
+
+ + Add Model+ 添加模型
-
+ Shows the add model view显示添加模型视图
-
+ Installed Models已安装的模型
-
+ Locally installed chat models本地安装的聊天模型
-
+ Model file模型文件
-
+ Model file to be downloaded待下载的模型
-
+ Description描述
-
+ File description文件描述
-
+ Cancel取消
-
+ Resume继续
-
+ Stop/restart/start the download停止/重启/开始下载
-
+ Remove删除
-
+ Remove model from filesystem从系统中删除模型
-
-
+
+ Install安装
-
+ Install online model安装在线模型
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">Error</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>
-
+ ERROR: $API_KEY is empty.错误:$API_KEY 为空
-
+ ERROR: $BASE_URL is empty.错误:$BASE_URL 为空
-
+ enter $BASE_URL输入 $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.错误:$MODEL_NAME为空
-
+ enter $MODEL_NAME输入 $MODEL_NAME
-
+ %1 GB%1 GB
-
+ ??
@@ -2065,7 +2076,7 @@ NOTE: Does not take effect until you reload the model.
<a href="#错误">错误</a>
-
+ Describes an error that occurred when downloading描述下载时发生的错误
@@ -2082,65 +2093,65 @@ NOTE: Does not take effect until you reload the model.
GB) 你的系统需要 (
-
+ Error for incompatible hardware硬件不兼容的错误
-
+ Download progressBar下载进度
-
+ Shows the progress made in the download显示下载进度
-
+ Download speed下载速度
-
+ Download speed in bytes/kilobytes/megabytes per second下载速度 b/kb/mb /s
-
+ Calculating...计算中...
-
-
-
-
+
+
+
+ Whether the file hash is being calculated是否正在计算文件哈希
-
+ Busy indicator忙碌指示器
-
+ Displayed when the file hash is being calculated在计算文件哈希时显示
-
+ enter $API_KEY输入 $API_KEY
-
+ File size文件大小
-
+ RAM required需要 RAM
@@ -2149,17 +2160,17 @@ NOTE: Does not take effect until you reload the model.
GB
-
+ Parameters参数
-
+ Quant量化
-
+ Type类型
@@ -2167,12 +2178,12 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
-
+ Fancy link精选链接
-
+ A stylized link样式化链接
@@ -2180,7 +2191,7 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
-
+ Please choose a directory请选择目录
@@ -2188,12 +2199,12 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
-
+ Restore Defaults恢复默认设置
-
+ Restores settings dialog to a default state将设置对话框恢复为默认状态
@@ -2201,12 +2212,12 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
-
+ Contribute data to the GPT4All Opensource Datalake.向GPT4All开源数据湖贡献数据
-
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2219,47 +2230,47 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
注意:通过启用此功能,您将把数据发送到 GPT4All 开源数据湖。启用此功能后,您不应该期望聊天隐私。但是,如果您愿意,您应该期望可选的归因。您的聊天数据将公开供任何人下载,并将被 Nomic AI 用于改进未来的 GPT4All 模型。Nomic AI 将保留与您的数据相关的所有归因信息,并且您将被视为使用您的数据的任何 GPT4All 模型发布的贡献者!
-
+ Terms for opt-in选择加入的条款
-
+ Describes what will happen when you opt-in描述选择加入时会发生的情况
-
+ Please provide a name for attribution (optional)请提供用于署名的名称(可选)
-
+ Attribution (optional)署名(可选)
-
+ Provide attribution提供署名
-
+ Enable启用
-
+ Enable opt-in启用选择加入
-
+ Cancel取消
-
+ Cancel opt-in取消加入
@@ -2267,17 +2278,17 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
NewVersionDialog
-
+ New version is available有新版本可用
-
+ Update更新
-
+ Update to new version更新到新版本
@@ -2285,17 +2296,17 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
PopupDialog
-
+ Reveals a shortlived help balloon显示一个短暂的帮助气球
-
+ Busy indicator忙碌指示器
-
+ Displayed when the popup is showing busy在弹出窗口显示忙碌时显示
@@ -2310,28 +2321,28 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
SettingsView
-
-
+
+ Settings设置
-
+ Contains various application settings包含各种应用程序设置
-
+ Application应用
-
+ Model模型
-
+ LocalDocs本地文档
@@ -2339,7 +2350,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
StartupDialog
-
+ Welcome!欢迎!
@@ -2354,7 +2365,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
### 贡献者
-
+ ### Release notes
%1### Contributors
%2
@@ -2363,17 +2374,17 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
%2
-
+ Release notes发布日志
-
+ Release notes for this version本版本发布日志
-
+ ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2398,71 +2409,71 @@ model release that uses your data!
模型发布的贡献者!
-
+ Terms for opt-in选择加入选项
-
+ Describes what will happen when you opt-in描述选择加入时会发生的情况
-
-
+
+ Opt-in for anonymous usage statistics允许选择加入匿名使用统计数据
-
-
+
+ Yes是
-
+ Allow opt-in for anonymous usage statistics允许选择加入匿名使用统计数据
-
-
+
+ No否
-
+ Opt-out for anonymous usage statistics退出匿名使用统计数据
-
+ Allow opt-out for anonymous usage statistics允许选择退出匿名使用统计数据
-
-
+
+ Opt-in for network加入网络
-
+ Allow opt-in for network允许选择加入网络
-
+ Allow opt-in anonymous sharing of chats to the GPT4All Datalake允许选择加入匿名共享聊天至 GPT4All 数据湖
-
+ Opt-out for network取消网络
-
+ Allow opt-out anonymous sharing of chats to the GPT4All Datalake允许选择退出将聊天匿名共享至 GPT4All 数据湖
@@ -2470,23 +2481,23 @@ model release that uses your data!
SwitchModelDialog
-
+ <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>警告:</b> 更改模型将删除当前对话。您想继续吗?
-
+ Continue继续
-
+ Continue with model loading继续加载模型
-
-
+
+ Cancel取消
@@ -2494,32 +2505,32 @@ model release that uses your data!
ThumbsDownDialog
-
+ Please edit the text below to provide a better response. (optional)请编辑下方文本以提供更好的回复。(可选)
-
+ Please provide a better response...提供更好回答...
-
+ Submit提交
-
+ Submits the user's response提交用户响应
-
+ Cancel取消
-
+ Closes the response dialog关闭响应对话框
@@ -2583,125 +2594,125 @@ model release that uses your data!
检查链接 <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> 寻求.
-
+ GPT4All v%1GPT4All v%1
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>启动时遇到错误:</h3><br><i>“检测到不兼容的硬件。”</i><br><br>很遗憾,您的 CPU 不满足运行此程序的最低要求。特别是,它不支持此程序成功运行现代大型语言模型所需的 AVX 内在函数。目前唯一的解决方案是将您的硬件升级到更现代的 CPU。<br><br>有关更多信息,请参阅此处:<a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a>
-
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>启动时遇到错误:</h3><br><i>“无法访问设置文件。”</i><br><br>不幸的是,某些东西阻止程序访问设置文件。这可能是由于设置文件所在的本地应用程序配置目录中的权限不正确造成的。请查看我们的<a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助。
-
+ Connection to datalake failed.连接数据湖失败。
-
+ Saving chats.保存对话
-
+ Network dialog网络对话
-
+ opt-in to share feedback/conversations选择加入以共享反馈/对话
-
+ Home view主页
-
+ Home view of application主页
-
+ Home主页
-
+ Chat view对话视图
-
+ Chat view to interact with models聊天视图可与模型互动
-
+ Chats对话
-
-
+
+ Models模型
-
+ Models view for installed models已安装模型的页面
-
-
+
+ LocalDocs本地文档
-
+ LocalDocs view to configure and use local docsLocalDocs视图可配置和使用本地文档
-
-
+
+ Settings设置
-
+ Settings view for application configuration设置页面
-
+ The datalake is enabled数据湖已开启
-
+ Using a network model使用联网模型
-
+ Server mode is enabled服务器模式已开启
-
+ Installed models已安装的模型
-
+ View of installed models查看已安装模型
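Note: the hunks above edit a Qt Linguist .ts catalog, where each entry pairs an English <source> string with its <translation> element; the surrounding XML is elided in this view, so the two appear run together on one line. As a minimal sketch, not part of the patch, the following Python uses only the standard library to list entries still marked "unfinished" in such a catalog; the file path is an assumed example for illustration:

    # Sketch: report untranslated entries in a Qt Linguist .ts catalog.
    import xml.etree.ElementTree as ET

    def unfinished_messages(ts_path):
        """Return <source> strings whose <translation> is marked unfinished."""
        root = ET.parse(ts_path).getroot()
        pending = []
        for message in root.iter("message"):
            translation = message.find("translation")
            if translation is not None and translation.get("type") == "unfinished":
                source = message.find("source")
                if source is not None and source.text:
                    pending.append(source.text)
        return pending

    if __name__ == "__main__":
        # Assumed path, shown only as an example invocation.
        for text in unfinished_messages("gpt4all-chat/translations/gpt4all_zh_CN.ts"):
            print(text)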
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index f0fd630e..e6473e0e 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -4,62 +4,62 @@
AddCollectionView
-
+ ← Existing Collections← 現有收藏
-
+ Add Document Collection新增收藏文件
-
+ Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings.新增一個含有純文字檔案、PDF 與 Markdown 文件的資料夾。可在設定上增加文件副檔名。
-
+ Please choose a directory請選擇一個資料夾
-
+ Name名稱
-
+ Collection name...收藏名稱......
-
+ Name of the collection to add (Required)新增的收藏名稱(必填)
-
+ Folder資料夾
-
+ Folder path...資料夾路徑......
-
+ Folder path to documents (Required)文件所屬的資料夾路徑(必填)
-
+ Browse瀏覽
-
+ Create Collection建立收藏
@@ -67,289 +67,289 @@
AddModelView
-
+ ← Existing Models← 現有模型
-
+ Explore Models探索模型
-
+ Discover and download models by keyword search...透過關鍵字搜尋探索並下載模型......
-
+ Text field for discovering and filtering downloadable models用於探索與過濾可下載模型的文字字段
-
+ Searching · %1搜尋 · %1
-
+ Initiate model discovery and filtering探索與過濾模型
-
+ Triggers discovery and filtering of models觸發探索與過濾模型
-
+ Default預設
-
+ Likes讚
-
+ Downloads下載次數
-
+ Recent最新
-
+ Sort by: %1排序依據:%1
-
+ Asc升序
-
+ Desc降序
-
+ Sort dir: %1排序順序:%1
-
+ None無
-
+ Limit: %1上限:%1
-
+ Network error: could not retrieve %1網路錯誤:無法取得 %1
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">錯誤</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
-
+ %1 GB%1 GB
-
-
+
+ ??
-
-
+
+ Busy indicator忙線指示器(參考自 https://terms.naer.edu.tw)
-
+ Displayed when the models request is ongoing當模型請求正在進行時顯示
-
+ Model file模型檔案
-
+ Model file to be downloaded即將下載的模型檔案
-
+ Description描述
-
+ File description檔案描述
-
+ Cancel取消
-
+ Resume恢復
-
+ Download下載
-
+ Stop/restart/start the download停止/重啟/開始下載
-
+ Remove移除
-
+ Remove model from filesystem從檔案系統移除模型
-
-
+
+ Install安裝
-
+ Install online model安裝線上模型
-
+ Describes an error that occurred when downloading解釋下載時發生的錯誤
-
+ Error for incompatible hardware錯誤,不相容的硬體
-
+ Download progressBar下載進度條
-
+ Shows the progress made in the download顯示下載進度
-
+ Download speed下載速度
-
+ Download speed in bytes/kilobytes/megabytes per second下載速度每秒 bytes/kilobytes/megabytes
-
+ Calculating...計算中......
-
-
-
-
+
+
+
+ Whether the file hash is being calculated是否正在計算檔案雜湊
-
+ Displayed when the file hash is being calculated計算檔案雜湊值時顯示
-
+ ERROR: $API_KEY is empty.錯誤:$API_KEY 未填寫。
-
+ enter $API_KEY請輸入 $API_KEY
-
+ ERROR: $BASE_URL is empty.錯誤:$BASE_URL 未填寫。
-
+ enter $BASE_URL請輸入 $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.錯誤:$MODEL_NAME 未填寫。
-
+ enter $MODEL_NAME請輸入 $MODEL_NAME
-
+ File size檔案大小
-
+ RAM required所需的記憶體
-
+ Parameters參數
-
+ Quant量化
-
+ Type類型
@@ -357,22 +357,22 @@
ApplicationSettings
-
+ Application應用程式
-
+ Network dialog資料湖泊計畫對話視窗
-
+ opt-in to share feedback/conversations分享回饋/對話計畫
-
+ ERROR: Update system could not find the MaintenanceTool used<br>
to check for updates!<br><br>
Did you install this application using the online installer? If so,<br>
@@ -387,223 +387,223 @@
如果您無法順利啟動,您可能得重新安裝本應用程式。
-
+ Error dialog錯誤對話視窗
-
+ Application Settings應用程式設定
-
+ General一般
-
+ Theme主題
-
+ The application color scheme.應用程式的配色方案。
-
+ Dark暗色
-
+ Light亮色
-
+ LegacyDark傳統暗色
-
+ Font Size字體大小
-
+ The size of text in the application.應用程式中的字體大小。
-
+ Small小
-
+ Medium中
-
+ Large大
-
+ Language and Locale語言與區域設定
-
+ The language and locale you wish to use.您希望使用的語言與區域設定。
-
+ System Locale系統語系
-
+ Device裝置
-
+ Default Model預設模型
-
+ The preferred model for new chats. Also used as the local server fallback.用於新交談的預設模型。也用於作為本機伺服器後援使用。
-
+ Suggestion Mode建議模式
-
+ When chatting with LocalDocs當使用「我的文件」交談時
-
+ Whenever possible視情況允許
-
+ Never永不
-
+ Generate suggested follow-up questions at the end of responses.在回覆末尾生成後續建議的問題。
-
+ The compute device used for text generation.用於生成文字的計算裝置。
-
-
+
+ Application default應用程式預設值
-
+ Download Path下載路徑
-
+ Where to store local models and the LocalDocs database.儲存本機模型與「我的文件」資料庫的位置。
-
+ Browse瀏覽
-
+ Choose where to save model files選擇儲存模型檔案的位置
-
+ Enable Datalake啟用資料湖泊
-
+ Send chats and feedback to the GPT4All Open-Source Datalake.將交談與回饋傳送到 GPT4All 開放原始碼資料湖泊。
-
+ Advanced進階
-
+ CPU Threads中央處理器(CPU)線程
-
+ The number of CPU threads used for inference and embedding.用於推理與嵌入的中央處理器線程數。
-
+ Save Chat Context儲存交談語境
-
+ Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.將交談模型的狀態儲存到磁碟以加快載入速度。警告:每次交談使用約 2GB。
-
+ Enable Local Server啟用本機伺服器
-
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.將 OpenAI 相容伺服器公開給本機。警告:導致資源使用增加。
-
+ API Server PortAPI 伺服器埠口
-
+ The port to use for the local server. Requires restart.用於本機伺服器的埠口。需要重新啟動。
-
+ Check For Updates檢查更新
-
+ Manually check for an update to GPT4All.手動檢查 GPT4All 的更新。
-
+ Updates更新
@@ -611,13 +611,13 @@
Chat
-
-
+
+ New Chat新的交談
-
+ Server Chat伺服器交談
@@ -625,12 +625,12 @@
ChatAPIWorker
-
+ ERROR: Network error occurred while connecting to the API server錯誤:網路錯誤,無法連線到目標 API 伺服器
-
+ ChatAPIWorker::handleFinished got HTTP Error %1 %2ChatAPIWorker::handleFinished 遇到一個 HTTP 錯誤 %1 %2
@@ -638,62 +638,62 @@
ChatDrawer
-
+ Drawer側邊欄
-
+ Main navigation drawer主要導航側邊欄
-
+ + New Chat+ 新的交談
-
+ Create a new chat建立新的交談
-
+ Select the current chat or edit the chat when in edit mode選擇目前交談或在編輯模式下編輯交談
-
+ Edit chat name修改對話名稱
-
+ Save chat name儲存對話名稱
-
+ Delete chat刪除對話
-
+ Confirm chat deletion確定刪除對話
-
+ Cancel chat deletion取消刪除對話
-
+ List of chats交談列表
-
+ List of chats in the drawer dialog側邊欄對話視窗的交談列表
@@ -701,32 +701,32 @@
ChatListModel
-
+ TODAY今天
-
+ THIS WEEK這星期
-
+ THIS MONTH這個月
-
+ LAST SIX MONTHS前六個月
-
+ THIS YEAR今年
-
+ LAST YEAR去年
@@ -734,329 +734,329 @@
ChatView
-
+ <h3>Warning</h3><p>%1</p><h3>警告</h3><p>%1</p>
-
+ Switch model dialog切換模型對話視窗
-
+ Warn the user if they switch models, then context will be erased警告使用者如果切換模型,則語境將被刪除
-
+ Conversation copied to clipboard.對話已複製到剪貼簿。
-
+ Code copied to clipboard.程式碼已複製到剪貼簿。
-
+ Chat panel交談面板
-
+ Chat panel with options具有選項的交談面板
-
+ Reload the currently loaded model重新載入目前已載入的模型
-
+ Eject the currently loaded model彈出目前載入的模型
-
+ No model installed.沒有已安裝的模型。
-
+ Model loading error.模型載入時發生錯誤。
-
+ Waiting for model...等待模型中......
-
+ Switching context...切換語境中......
-
+ Choose a model...選擇一個模型......
-
+ Not found: %1不存在:%1
-
-
+
+ Reload · %1重新載入 · %1
-
+ Loading · %1載入中 · %1
-
+ Load · %1 (default) →載入 · %1 (預設) →
-
+ The top item is the current model最上面的那項是目前使用的模型
-
-
+
+ LocalDocs我的文件
-
+ Add documents新增文件
-
+ add collections of documents to the chat將文件集合新增至交談中
-
+ Load the default model載入預設模型
-
+ Loads the default model which can be changed in settings預設模型可於設定中變更
-
+ No Model Installed沒有已安裝的模型
-
+ GPT4All requires that you install at least one
model to get startedGPT4All 要求您至少安裝一個
模型開始
-
+ Install a Model安裝一個模型
-
+ Shows the add model view顯示新增模型視圖
-
+ Conversation with the model與模型對話
-
+ prompt / response pairs from the conversation對話中的提示詞 / 回覆組合
-
+ GPT4AllGPT4All
-
+ You您
-
+ response stopped ...回覆停止......
-
+ retrieving localdocs: %1 ...檢索本機文件中:%1 ......
-
+ searching localdocs: %1 ...搜尋本機文件中:%1 ......
-
+ processing ...處理中......
-
+ generating response ...生成回覆......
-
+ generating questions ...生成問題......
-
-
+
+ Copy複製
-
+ Copy Message複製訊息
-
+ Disable markdown停用 Markdown
-
+ Enable markdown啟用 Markdown
-
+ Thumbs up讚
-
+ Gives a thumbs up to the response對這則回覆比讚
-
+ Thumbs down倒讚
-
+ Opens thumbs down dialog開啟倒讚對話視窗
-
+ Suggested follow-ups後續建議
-
+ Erase and reset chat session刪除並重置交談會話
-
+ Copy chat session to clipboard複製交談會話到剪貼簿
-
+ Redo last chat response重新生成上一個交談回覆
-
+ Stop generating停止生成
-
+ Stop the current response generation停止當前回覆生成
-
+ Reloads the model重新載入模型
-
+ <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help<h3>載入模型時發生錯誤:</h3><br><i>"%1"</i><br><br>導致模型載入失敗的原因可能有很多種,但絕大多數的原因是檔案格式損毀、下載的檔案不完整、檔案類型錯誤、系統RAM空間不足或不相容的模型類型。這裡有些建議可供疑難排解:<br><ul><li>確保使用的模型是相容的格式與類型<li>檢查位於下載資料夾的檔案是否完整<li>您可以從設定中找到您所設定的「下載資料夾路徑」<li>如果您有側載模型,請利用 md5sum 等工具確保您的檔案是完整的<li>想了解更多關於我們所支援的模型資訊,煩請詳閱<a href="https://docs.gpt4all.io/">本文件</a>。<li>歡迎洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求幫助
-
+ restoring from text ...從文字中恢復......
-
+ %n Source(s)%n 來源
-
+ Send a message...傳送一則訊息......
-
+ Load a model to continue...載入模型以繼續......
-
+ Send messages/prompts to the model向模型傳送訊息/提示詞
-
+ Cut剪下
-
+ Paste貼上
-
+ Select All全選
-
+ Send message傳送訊息
-
+ Sends the message/prompt contained in textfield to the model將文字欄位中包含的訊息/提示詞傳送到模型
@@ -1064,36 +1064,36 @@ model to get started
CollectionsDrawer
-
+ Warning: searching collections while indexing can return incomplete results警告:在索引時搜尋收藏可能會傳回不完整的結果
-
+ %n file(s)%n 個檔案
-
+ %n word(s)%n 個字
-
+ Updating更新中
-
+ + Add Docs+ 新增文件
-
+ Select a collection to make it available to the chat model.選擇一個收藏以使其可供交談模型使用。
@@ -1101,37 +1101,37 @@ model to get started
Download
-
+ Model "%1" is installed successfully.模型「%1」已安裝成功。
-
+ ERROR: $MODEL_NAME is empty.錯誤:$MODEL_NAME 未填寫。
-
+ ERROR: $API_KEY is empty.錯誤:$API_KEY 未填寫。
-
+ ERROR: $BASE_URL is invalid.錯誤:$BASE_URL 無效。
-
+ ERROR: Model "%1 (%2)" is conflict.錯誤:模型「%1 (%2)」發生衝突。
-
+ Model "%1 (%2)" is installed successfully.模型「%1(%2)」已安裝成功。
-
+ Model "%1" is removed.模型「%1」已移除。
@@ -1139,92 +1139,92 @@ model to get started
HomeView
-
+ Welcome to GPT4All歡迎使用 GPT4All
-
+ The privacy-first LLM chat application隱私第一的大型語言模型交談應用程式
-
+ Start chatting開始交談
-
+ Start Chatting開始交談
-
+ Chat with any LLM與任何大型語言模型交談
-
+ LocalDocs我的文件
-
+ Chat with your local files使用「我的文件」來交談
-
+ Find Models搜尋模型
-
+ Explore and download models瀏覽與下載模型
-
+ Latest news最新消息
-
+ Latest news from GPT4All從 GPT4All 來的最新消息
-
+ Release Notes版本資訊
-
+ Documentation文件
-
+ DiscordDiscord
-
+ X (Twitter)X (Twitter)
-
+ GithubGithub
-
+ nomic.ainomic.ai
-
+ Subscribe to Newsletter訂閱電子報
@@ -1232,117 +1232,117 @@ model to get started
LocalDocsSettings
-
+ LocalDocs我的文件
-
+ LocalDocs Settings我的文件設定
-
+ Indexing索引中
-
+ Allowed File Extensions允許的副檔名
-
+ Comma-separated list. LocalDocs will only attempt to process files with these extensions.以逗號分隔的列表。「我的文件」將僅嘗試處理具有這些副檔名的檔案。
-
+ Embedding嵌入
-
+ Use Nomic Embed API使用 Nomic 嵌入 API
-
+ Embed documents using the fast Nomic API instead of a private local model. Requires restart.使用快速的 Nomic API 而不是本機私有模型嵌入文件。需要重新啟動。
-
+ Nomic API KeyNomic API 金鑰
-
+ API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart.用於 Nomic Embed 的 API 金鑰。從 Atlas <a href="https://atlas.nomic.ai/cli-login">API 金鑰頁面</a>取得一個。需要重新啟動。
-
+ Embeddings Device嵌入裝置
-
+ The compute device used for embeddings. Requires restart.用於嵌入的計算裝置。需要重新啟動。
-
+ Application default應用程式預設值
-
+ Display顯示
-
+ Show Sources查看來源
-
+ Display the sources used for each response.顯示每則回覆所使用的來源。
-
+ Advanced進階
-
+ Warning: Advanced usage only.警告:僅限進階使用。
-
+ Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>.設定太大的數值可能會導致「我的文件」處理失敗、反應速度極慢或根本無法回覆。簡單地說,這會將 {N 個字元 x N 個片段} 被添加到模型的語境視窗中。更多資訊<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此處</a>。
-
+ Document snippet size (characters)文件片段大小(字元)
-
+ Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation.每個文件片段的字元數。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。
-
+ Max document snippets per prompt每個提示詞的最大文件片段
-
+ Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.新增至提示詞語境中的檢索到的文件片段的最大 N 個符合的項目。較大的數字會增加實際反應的可能性,但也會導致生成速度變慢。
@@ -1350,151 +1350,151 @@ model to get started
LocalDocsView
-
+ LocalDocs我的文件
-
+ Chat with your local files使用「我的文件」來交談
-
+ + Add Collection+ 新增收藏
-
+ <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however.<h3>錯誤:「我的文件」資料庫已無法存取或已損壞。</h3><br><i>提醒:執行完以下任何疑難排解的動作後,請務必重新啟動應用程式。</i><br><ul><li>請確保<b>「下載路徑」</b>所指向的資料夾確實存在於檔案系統當中。</li><li>檢查 <b>「下載路徑」</b>所指向的資料夾,確保其「擁有者」為您本身,以及確保您對該資料夾擁有讀寫權限。</li><li>如果該資料夾內存在一份名為 <b>localdocs_v2.db</b> 的檔案,請同時確保您對其擁有讀寫權限。</li></ul><br>如果問題依舊存在,且該資料夾內存在與「localdocs_v*.db」名稱相關的檔案,請嘗試備份並移除它們。<br>雖然這樣一來,您恐怕得著手重建您的收藏,但這將或許能夠解決這份錯誤。
-
+ No Collections Installed沒有已安裝的收藏
-
+ Install a collection of local documents to get started using this feature安裝本機文件收藏以開始使用此功能
-
+ + Add Doc Collection+ 新增文件收藏
-
+ Shows the add model view查看新增的模型視圖
-
+ Indexing progressBar索引進度條
-
+ Shows the progress made in the indexing顯示索引進度
-
+ ERROR錯誤
-
+ INDEXING索引中
-
+ EMBEDDING嵌入中
-
+ REQUIRES UPDATE必須更新
-
+ READY已就緒
-
+ INSTALLING安裝中
-
+ Indexing in progress正在索引
-
+ Embedding in progress正在嵌入
-
+ This collection requires an update after version change該收藏需要在版本變更後更新
-
+ Automatically reindexes upon changes to the folder若資料夾有變動,會自動重新索引
-
+ Installation in progress正在安裝中
-
+ %%
-
+ %n file(s)%n 個檔案
-
+ %n word(s)%n 個字
-
+ Remove移除
-
+ Rebuild重建
-
+ Reindex this folder from scratch. This is slow and usually not needed.重新索引該資料夾。這將會耗費許多時間並且通常不太需要這樣做。
-
+ Update更新
-
+ Update the collection to the new version. This is a slow operation.更新收藏。這將會耗費許多時間。
@@ -1502,67 +1502,78 @@ model to get started
ModelList
-
+
+
+ cannot open "%1": %2
+
+
+
+
+ cannot create "%1": %2
+
+
+
+ %1 (%2)%1(%2)
-
+ <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul><strong>OpenAI API 相容模型</strong><br><ul><li>API 金鑰:%1</li><li>基底 URL:%2</li><li>模型名稱:%3</li></ul>
-
+ <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li><ul><li>需要個人的 OpenAI API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 OpenAI</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 OpenAI 進行通訊</li><li>您可以在<a href="https://platform.openai.com/account/api-keys">此處</a>申請一個 API 金鑰。</li>
-
+ <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1<strong>OpenAI 的 ChatGPT 模型 GPT-3.5 Turbo</strong><br> %1
-
+ <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info.<br><br><i>* 即使您已向 OpenAI 付費購買了 ChatGPT 的 GPT-4 模型使用權,但這也不能保證您能擁有 API 金鑰的使用權限。請聯繫 OpenAI 以查閱更多資訊。
-
+ <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2<strong>OpenAI 的 ChatGPT 模型 GPT-4</strong><br> %1 %2
-
+ <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li><ul><li>需要個人的 Mistral API 金鑰。</li><li>警告:這將會傳送您的交談紀錄到 Mistral!</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與 Mistral 進行通訊</li><li>您可以在<a href="https://console.mistral.ai/user/api-keys">此處</a>申請一個 API 金鑰。</li>
-
+ <strong>Mistral Tiny model</strong><br> %1<strong>Mistral 迷你模型</strong><br> %1
-
+ <strong>Mistral Small model</strong><br> %1<strong>Mistral 小型模型</strong><br> %1
-
+ <strong>Mistral Medium model</strong><br> %1<strong>Mistral 中型模型</strong><br> %1
-
+ <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li><ul><li>需要個人的 API 金鑰和 API 的基底 URL(Base URL)。</li><li>警告:這將會傳送您的交談紀錄到您所指定的 OpenAI API 相容伺服器</li><li>您的 API 金鑰將被儲存在硬碟上</li><li>它只被用於與其 OpenAI API 相容伺服器進行通訊</li>
-
+ <strong>Connect to OpenAI-compatible API server</strong><br> %1<strong>連線到 OpenAI API 相容伺服器</strong><br> %1
-
+ <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul><strong>模型作者:%1</strong><br><ul><li>發佈日期:%2<li>累積讚數:%3 個讚<li>下載次數:%4 次<li>更多資訊請查閱<a href="https://huggingface.co/%5">此處</a>。</ul>
@@ -1570,92 +1581,92 @@ model to get started
ModelSettings
-
+ Model模型
-
+ Model Settings模型設定
-
+ Clone複製
-
+ Remove移除
-
+ Name名稱
-
+ Model File模型檔案
-
+ System Prompt系統提示詞
-
+ Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.在每個對話的開頭加上前綴。必須包含適當的構建符元(framing tokens)。
-
+ Prompt Template提示詞模板
-
+ The template that wraps every prompt.包裝每個提示詞的模板。
-
+ Must contain the string "%1" to be replaced with the user's input.必須包含要替換為使用者輸入的字串「%1」。
-
+ Chat Name Prompt交談名稱提示詞
-
+ Prompt used to automatically generate chat names.用於自動生成交談名稱的提示詞。
-
+ Suggested FollowUp Prompt後續建議提示詞
-
+ Prompt used to generate suggested follow-up questions.用於生成後續建議問題的提示詞。
-
+ Context Length語境長度(Context Length)
-
+ Number of input and output tokens the model sees.模型看見的輸入與輸出的符元數量。
-
+ Maximum combined prompt/response tokens before information is lost.
Using more context than the model was trained on will yield poor results.
NOTE: Does not take effect until you reload the model.
@@ -1664,128 +1675,128 @@ NOTE: Does not take effect until you reload the model.
注意:重新載入模型後才會生效。
-
+ Temperature語境溫度(Temperature)
-
+ Randomness of model output. Higher -> more variation.模型輸出的隨機性。更高 -> 更多變化。
-
+ Temperature increases the chances of choosing less likely tokens.
NOTE: Higher temperature gives more creative but less predictable outputs.語境溫度會提高選擇不容易出現的符元機率。
注意:較高的語境溫度會生成更多創意,但輸出的可預測性會相對較差。
-
+ Top-P核心採樣(Top-P)
-
+ Nucleus Sampling factor. Lower -> more predictable.核心採樣因子。更低 -> 更可預測。
-
+ Only the most likely tokens up to a total probability of top_p can be chosen.
NOTE: Prevents choosing highly unlikely tokens.只選擇總機率約為核心採樣,最有可能性的符元。
注意:用於避免選擇不容易出現的符元。
-
+ Min-P最小符元機率(Min-P)
-
+ Minimum token probability. Higher -> more predictable.最小符元機率。更高 -> 更可預測。
-
+ Sets the minimum relative probability for a token to be considered.設定要考慮的符元的最小相對機率。
-
+ Top-K高頻率採樣機率(Top-K)
-
+ Size of selection pool for tokens.符元選擇池的大小。
-
+ Only the top K most likely tokens will be chosen from.只選擇前 K 個最有可能性的符元。
-
+ Max Length最大長度(Max Length)
-
+ Maximum response length, in tokens.最大響應長度(以符元為單位)。
-
+ Prompt Batch Size提示詞批次大小(Prompt Batch Size)
-
+ The batch size used for prompt processing.用於提示詞處理的批量大小。
-
+ Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM.一次處理的提示詞符元數量。
注意:較高的值可以加快讀取提示詞的速度,但會使用比較多的記憶體。
-
+ Repeat Penalty重複處罰(Repeat Penalty)
-
+ Repetition penalty factor. Set to 1 to disable.重複懲罰因子。設定為 1 以停用。
-
+ Repeat Penalty Tokens重複懲罰符元(Repeat Penalty Tokens)
-
+ Number of previous tokens used for penalty.之前用於懲罰的符元數量。
-
+ GPU Layers圖形處理器負載層(GPU Layers)
-
+ Number of model layers to load into VRAM.要載入到顯示記憶體中的模型層數。
-
+ How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model.
Lower values increase CPU load and RAM usage, and make inference slower.
NOTE: Does not take effect until you reload the model.
@@ -1797,218 +1808,218 @@ NOTE: Does not take effect until you reload the model.
ModelsView
-
+ No Models Installed沒有已安裝的模型
-
+ Install a model to get started using GPT4All安裝模型以開始使用 GPT4All
-
-
+
+ + Add Model+ 新增模型
-
+ Shows the add model view顯示新增模型視圖
-
+ Installed Models已安裝的模型
-
+ Locally installed chat models本機已安裝的交談模型
-
+ Model file模型檔案
-
+ Model file to be downloaded即將下載的模型檔案
-
+ Description描述
-
+ File description檔案描述
-
+ Cancel取消
-
+ Resume恢復
-
+ Stop/restart/start the download停止/重啟/開始下載
-
+ Remove移除
-
+ Remove model from filesystem從檔案系統移除模型
-
-
+
+ Install安裝
-
+ Install online model安裝線上模型
-
+ <strong><font size="1"><a href="#error">Error</a></strong></font><strong><font size="1"><a href="#error">錯誤</a></strong></font>
-
+ <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font><strong><font size="2">警告:不推薦在您的硬體上運作。模型需要比較多的記憶體(%1 GB),但您的系統記憶體空間不足(%2)。</strong></font>
-
+ %1 GB%1 GB
-
+ ??
-
+ Describes an error that occurred when downloading解釋下載時發生的錯誤
-
+ Error for incompatible hardware錯誤,不相容的硬體
-
+ Download progressBar下載進度條
-
+ Shows the progress made in the download顯示下載進度
-
+ Download speed下載速度
-
+ Download speed in bytes/kilobytes/megabytes per second下載速度每秒 bytes/kilobytes/megabytes
-
+ Calculating...計算中......
-
-
-
-
+
+
+
+ Whether the file hash is being calculated是否正在計算檔案雜湊
-
+ Busy indicator忙線指示器(參考自 https://terms.naer.edu.tw)
-
+ Displayed when the file hash is being calculated計算檔案雜湊值時顯示
-
+ ERROR: $API_KEY is empty.錯誤:$API_KEY 未填寫。
-
+ enter $API_KEY請輸入 $API_KEY
-
+ ERROR: $BASE_URL is empty.錯誤:$BASE_URL 未填寫。
-
+ enter $BASE_URL請輸入 $BASE_URL
-
+ ERROR: $MODEL_NAME is empty.錯誤:$MODEL_NAME 未填寫。
-
+ enter $MODEL_NAME請輸入 $MODEL_NAME
-
+ File size檔案大小
-
+ RAM required所需的記憶體
-
+ Parameters參數
-
+ Quant量化
-
+ Type類型
@@ -2016,12 +2027,12 @@ NOTE: Does not take effect until you reload the model.
MyFancyLink
-
+ Fancy link精緻網址
-
+ A stylized link個性化網址
@@ -2029,7 +2040,7 @@ NOTE: Does not take effect until you reload the model.
MySettingsStack
-
+ Please choose a directory請選擇一個資料夾
@@ -2037,12 +2048,12 @@ NOTE: Does not take effect until you reload the model.
MySettingsTab
-
+ Restore Defaults恢復預設值
-
+ Restores settings dialog to a default state恢復設定對話視窗到預設狀態
@@ -2050,12 +2061,12 @@ NOTE: Does not take effect until you reload the model.
NetworkDialog
-
+ Contribute data to the GPT4All Opensource Datalake.貢獻資料到 GPT4All 的開放原始碼資料湖泊。
-
+ By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements.
When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake.
@@ -2073,47 +2084,47 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將被認可為任何使用您的資料的 GPT4All 模型版本的貢獻者!
-
+ Terms for opt-in計畫規範
-
+ Describes what will happen when you opt-in解釋當您加入計畫後,會發生什麼事情
-
+ Please provide a name for attribution (optional)請提供署名(非必填)
-
+ Attribution (optional)署名(非必填)
-
+ Provide attribution提供署名
-
+ Enable啟用
-
+ Enable opt-in加入計畫
-
+ Cancel取消
-
+ Cancel opt-in拒絕計畫
@@ -2121,17 +2132,17 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
NewVersionDialog
-
+ New version is available發現新版本
-
+ Update更新
-
+ Update to new version更新版本
@@ -2139,18 +2150,18 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
PopupDialog
-
+ Reveals a shortlived help balloon呼叫提示小幫手
-
+ Busy indicator忙線指示器(參考自 https://terms.naer.edu.tw)
-
+ Displayed when the popup is showing busy當彈出視窗忙碌時顯示
@@ -2158,28 +2169,28 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
SettingsView
-
-
+
+ Settings設定
-
+ Contains various application settings內含多種應用程式設定
-
+ Application應用程式
-
+ Model模型
-
+ LocalDocs我的文件
@@ -2187,22 +2198,22 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
StartupDialog
-
+ Welcome!歡迎使用!
-
+ Release notes版本資訊
-
+ Release notes for this version這個版本的版本資訊
-
+ ### Opt-ins for anonymous usage analytics and datalake
By enabling these features, you will be able to participate in the democratic process of training a
large language model by contributing data for future model improvements.
@@ -2230,35 +2241,35 @@ model release that uses your data!
Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將被認可為任何使用您的資料的 GPT4All 模型版本的貢獻者!
-
+ Terms for opt-in計畫規範
-
+ Describes what will happen when you opt-in解釋當您加入計畫後,會發生什麼事情
-
-
+
+ Yes是
-
-
+
+ No否
-
-
+
+ Opt-in for anonymous usage statistics匿名使用統計計畫
-
+ ### Release notes
%1### Contributors
%2
@@ -2267,43 +2278,43 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
%2
-
+ Allow opt-in for anonymous usage statistics加入匿名使用統計計畫
-
+ Opt-out for anonymous usage statistics退出匿名使用統計計畫
-
+ Allow opt-out for anonymous usage statistics終止並退出匿名使用統計計畫
-
-
+
+ Opt-in for network資料湖泊計畫
-
+ Allow opt-in for network加入資料湖泊計畫
-
+ Opt-out for network退出資料湖泊計畫
-
+ Allow opt-in anonymous sharing of chats to the GPT4All Datalake開始將交談內容匿名分享到 GPT4All 資料湖泊
-
+ Allow opt-out anonymous sharing of chats to the GPT4All Datalake終止將交談內容匿名分享到 GPT4All 資料湖泊
@@ -2311,23 +2322,23 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
SwitchModelDialog
-
+ <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?<b>警告:</b> 變更模型將會清除目前對話內容。您真的想要繼續嗎?
-
+ Continue繼續
-
+ Continue with model loading繼續載入模型
-
-
+
+ Cancel取消
@@ -2335,32 +2346,32 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
ThumbsDownDialog
-
+ Please edit the text below to provide a better response. (optional)請編輯以下文字,以提供更好的回覆。(非必填)
-
+ Please provide a better response...請提供一則更好的回覆......
-
+ Submit送出
-
+ Submits the user's response送出使用者的回覆
-
+ Cancel取消
-
+ Closes the response dialog關閉回覆對話視窗
@@ -2368,125 +2379,125 @@ Nomic AI 將保留附加在您的資料上的所有署名訊息,並且您將
main
-
+ GPT4All v%1GPT4All v%1
-
+ <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a><h3>啟動時發生錯誤:</h3><br><i>「偵測到不相容的硬體。」</i><br><br>糟糕!您的中央處理器不符合運行所需的最低需求。尤其,它不支援本程式運行現代大型語言模型所需的 AVX 指令集。目前唯一的解決方案,只有更新您的中央處理器及其相關硬體裝置。<br><br>更多資訊請查閱:<a href="https://zh.wikipedia.org/wiki/AVX指令集">AVX 指令集 - 維基百科</a>
-
+ <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help.<h3>啟動時發生錯誤:</h3><br><i>「無法存取設定檔。」</i><br><br>糟糕!有些東西正在阻止程式存取設定檔。這極為可能是由於設定檔所在的本機應用程式設定資料夾中的權限設定不正確所造成的。煩請洽詢我們的 <a href="https://discord.gg/4M2QFmTt2k">Discord 伺服器</a> 以尋求協助。
-
+ Connection to datalake failed.連線資料湖泊失敗。
-
+ Saving chats.儲存交談。
-
+ Network dialog資料湖泊計畫對話視窗
-
+ opt-in to share feedback/conversations分享回饋/對話計畫
-
+ Home view首頁視圖
-
+ Home view of application應用程式首頁視圖
-
+ Home首頁
-
+ Chat view查看交談
-
+ Chat view to interact with models模型互動交談視圖
-
+ Chats交談
-
-
+
+ Models模型
-
+ Models view for installed models已安裝模型的模型視圖
-
-
+
+ LocalDocs我的文件
-
+ LocalDocs view to configure and use local docs用於設定與使用我的文件的「我的文件」視圖
-
-
+
+ Settings設定
-
+ Settings view for application configuration應用程式設定視圖
-
+ The datalake is enabled資料湖泊已啟用
-
+ Using a network model使用一個網路模型
-
+ Server mode is enabled伺服器模式已啟用
-
+ Installed models已安裝的模型
-
+ View of installed models已安裝的模型視圖
From 46314dc7f313db88ebbbd22a535b307b37489b6c Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Fri, 30 Aug 2024 12:54:20 -0400
Subject: [PATCH 59/66] python: warn if Microsoft Visual C++ runtime libs are
not found (#2920)
Signed-off-by: Jared Van Bortel
---
gpt4all-bindings/python/CHANGELOG.md | 6 ++++++
gpt4all-bindings/python/gpt4all/_pyllmodel.py | 15 ++++++++++++++-
gpt4all-bindings/python/setup.py | 2 +-
3 files changed, 21 insertions(+), 2 deletions(-)
diff --git a/gpt4all-bindings/python/CHANGELOG.md b/gpt4all-bindings/python/CHANGELOG.md
index 69346498..2bc195ec 100644
--- a/gpt4all-bindings/python/CHANGELOG.md
+++ b/gpt4all-bindings/python/CHANGELOG.md
@@ -4,6 +4,11 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
+## [Unreleased]
+
+### Added
+- Warn on Windows if the Microsoft Visual C++ runtime libraries are not found ([#2920](https://github.com/nomic-ai/gpt4all/pull/2920))
+
## [2.8.2] - 2024-08-14
### Fixed
@@ -56,6 +61,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Restore leading space removal logic that was incorrectly removed in [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
- CUDA: Cherry-pick llama.cpp DMMV cols requirement fix that caused a crash with long conversations since [#2694](https://github.com/nomic-ai/gpt4all/pull/2694)
+[Unreleased]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.2...HEAD
[2.8.2]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.1...python-v2.8.2
[2.8.1]: https://github.com/nomic-ai/gpt4all/compare/python-v2.8.0...python-v2.8.1
[2.8.0]: https://github.com/nomic-ai/gpt4all/compare/python-v2.7.0...python-v2.8.0
diff --git a/gpt4all-bindings/python/gpt4all/_pyllmodel.py b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
index d623b528..208d834c 100644
--- a/gpt4all-bindings/python/gpt4all/_pyllmodel.py
+++ b/gpt4all-bindings/python/gpt4all/_pyllmodel.py
@@ -37,7 +37,20 @@ if platform.system() == "Darwin" and platform.processor() == "i386":
raise RuntimeError(textwrap.dedent("""\
Running GPT4All under Rosetta is not supported due to CPU feature requirements.
Please install GPT4All in an environment that uses a native ARM64 Python interpreter.
- """))
+ """).strip())
+
+# Check for C++ runtime libraries
+if platform.system() == "Windows":
+ try:
+ ctypes.CDLL("msvcp140.dll")
+ ctypes.CDLL("vcruntime140.dll")
+ ctypes.CDLL("vcruntime140_1.dll")
+ except OSError as e:
+ print(textwrap.dedent(f"""\
+ {e!r}
+ The Microsoft Visual C++ runtime libraries were not found. Please install them from
+ https://aka.ms/vs/17/release/vc_redist.x64.exe
+ """), file=sys.stderr)
def _load_cuda(rtver: str, blasver: str) -> None:
diff --git a/gpt4all-bindings/python/setup.py b/gpt4all-bindings/python/setup.py
index 75476875..e96f5840 100644
--- a/gpt4all-bindings/python/setup.py
+++ b/gpt4all-bindings/python/setup.py
@@ -68,7 +68,7 @@ def get_long_description():
setup(
name=package_name,
- version="2.8.2",
+ version="2.8.3.dev0",
description="Python bindings for GPT4All",
long_description=get_long_description(),
long_description_content_type="text/markdown",
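The runtime check this patch adds to _pyllmodel.py relies on ctypes.CDLL raising OSError when a DLL cannot be loaded. A standalone sketch of the same probe, shown only to illustrate the pattern (the DLL names and download URL are taken from the patch itself):

    # Sketch: warn if the Microsoft Visual C++ runtime libraries are missing,
    # mirroring the probe the patch adds to gpt4all/_pyllmodel.py.
    import ctypes
    import platform
    import sys
    import textwrap

    def msvc_runtime_present():
        """Return True if the MSVC runtime DLLs load; warn on stderr otherwise."""
        if platform.system() != "Windows":
            return True  # these DLLs only exist on Windows
        try:
            for dll in ("msvcp140.dll", "vcruntime140.dll", "vcruntime140_1.dll"):
                ctypes.CDLL(dll)  # raises OSError if the DLL is not found
        except OSError as e:
            print(textwrap.dedent(f"""\
                {e!r}
                The Microsoft Visual C++ runtime libraries were not found. Please install them from
                https://aka.ms/vs/17/release/vc_redist.x64.exe
                """), file=sys.stderr)
            return False
        return True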
From e48571003e30b47c77d76cbb5a421c4d4720cf07 Mon Sep 17 00:00:00 2001
From: AT
Date: Fri, 30 Aug 2024 13:00:33 -0400
Subject: [PATCH 60/66] settings: tweak the name of the local server option
(#2928)
Signed-off-by: Adam Treat
Signed-off-by: Jared Van Bortel
Co-authored-by: Jared Van Bortel
---
gpt4all-chat/src/qml/ApplicationSettings.qml | 2 +-
gpt4all-chat/translations/gpt4all_en_US.ts | 2 +-
gpt4all-chat/translations/gpt4all_es_MX.ts | 8 ++++++--
gpt4all-chat/translations/gpt4all_it_IT.ts | 6 +++++-
gpt4all-chat/translations/gpt4all_pt_BR.ts | 6 +++++-
gpt4all-chat/translations/gpt4all_ro_RO.ts | 8 ++++++--
gpt4all-chat/translations/gpt4all_zh_CN.ts | 6 +++++-
gpt4all-chat/translations/gpt4all_zh_TW.ts | 8 ++++++--
8 files changed, 35 insertions(+), 11 deletions(-)
diff --git a/gpt4all-chat/src/qml/ApplicationSettings.qml b/gpt4all-chat/src/qml/ApplicationSettings.qml
index 1e459e99..7c83f356 100644
--- a/gpt4all-chat/src/qml/ApplicationSettings.qml
+++ b/gpt4all-chat/src/qml/ApplicationSettings.qml
@@ -502,7 +502,7 @@ MySettingsTab {
}
MySettingsLabel {
id: serverChatLabel
- text: qsTr("Enable Local Server")
+ text: qsTr("Enable Local API Server")
helpText: qsTr("Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.")
Layout.row: 13
Layout.column: 0
diff --git a/gpt4all-chat/translations/gpt4all_en_US.ts b/gpt4all-chat/translations/gpt4all_en_US.ts
index e46ae94c..763c9ed0 100644
--- a/gpt4all-chat/translations/gpt4all_en_US.ts
+++ b/gpt4all-chat/translations/gpt4all_en_US.ts
@@ -569,7 +569,7 @@
- Enable Local Server
+ Enable Local API Server
diff --git a/gpt4all-chat/translations/gpt4all_es_MX.ts b/gpt4all-chat/translations/gpt4all_es_MX.ts
index a3290102..610d6f95 100644
--- a/gpt4all-chat/translations/gpt4all_es_MX.ts
+++ b/gpt4all-chat/translations/gpt4all_es_MX.ts
@@ -555,15 +555,19 @@
Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.Guardar el estado del modelo de chat en el disco para una carga más rápida. ADVERTENCIA: Usa ~2GB por chat.
+
+
+ Enable Local API Server
+
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.Exponer un servidor compatible con OpenAI a localhost. ADVERTENCIA: Resulta en un mayor uso de recursos.
- Enable Local Server
- Habilitar servidor local
+ Habilitar servidor localExpose an OpenAI-Compatible server to localhost. WARNING: Results in increased
diff --git a/gpt4all-chat/translations/gpt4all_it_IT.ts b/gpt4all-chat/translations/gpt4all_it_IT.ts
index 7a6ec7e1..75c018f9 100644
--- a/gpt4all-chat/translations/gpt4all_it_IT.ts
+++ b/gpt4all-chat/translations/gpt4all_it_IT.ts
@@ -574,8 +574,12 @@
+ Enable Local API Server
+
+
+ Enable Local Server
- Abilita server locale
+ Abilita server locale
diff --git a/gpt4all-chat/translations/gpt4all_pt_BR.ts b/gpt4all-chat/translations/gpt4all_pt_BR.ts
index 76b5b5aa..70702c30 100644
--- a/gpt4all-chat/translations/gpt4all_pt_BR.ts
+++ b/gpt4all-chat/translations/gpt4all_pt_BR.ts
@@ -577,8 +577,12 @@
+ Enable Local API Server
+
+
+ Enable Local Server
- Ativar Servidor Local
+ Ativar Servidor Local
diff --git a/gpt4all-chat/translations/gpt4all_ro_RO.ts b/gpt4all-chat/translations/gpt4all_ro_RO.ts
index a1eefb88..87e11831 100644
--- a/gpt4all-chat/translations/gpt4all_ro_RO.ts
+++ b/gpt4all-chat/translations/gpt4all_ro_RO.ts
@@ -599,6 +599,11 @@
Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.
Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
+
+
+ Enable Local API Server
+
+ Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.
@@ -610,9 +615,8 @@
Salvează pe disc starea modelului pentru încărcare mai rapidă. ATENŢIE: Consumă ~2GB/conversaţie.
- Enable Local Server
- Activează Serverul local
+ Activează Serverul local
diff --git a/gpt4all-chat/translations/gpt4all_zh_CN.ts b/gpt4all-chat/translations/gpt4all_zh_CN.ts
index 5ee7e9b4..6cf4b1c8 100644
--- a/gpt4all-chat/translations/gpt4all_zh_CN.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_CN.ts
@@ -616,8 +616,12 @@
+ Enable Local API Server
+
+
+ Enable Local Server
- 开启本地服务
+ 开启本地服务
diff --git a/gpt4all-chat/translations/gpt4all_zh_TW.ts b/gpt4all-chat/translations/gpt4all_zh_TW.ts
index e6473e0e..478099ea 100644
--- a/gpt4all-chat/translations/gpt4all_zh_TW.ts
+++ b/gpt4all-chat/translations/gpt4all_zh_TW.ts
@@ -501,6 +501,11 @@
Never
永不
+
+
+ Enable Local API Server
+
+ Generate suggested follow-up questions at the end of responses.
@@ -573,9 +578,8 @@
將交談模型的狀態儲存到磁碟以加快載入速度。警告:每次交談使用約 2GB。
- Enable Local Server
- 啟用本機伺服器
+ 啟用本機伺服器
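
The setting renamed in this patch toggles GPT4All's OpenAI-compatible server on localhost. As a rough illustration of what "OpenAI-Compatible" means for a client, a request shaped like the following should work once the server is enabled; the port 4891 and the model name are assumptions for the sketch, not taken from this patch:

    import json
    import urllib.request

    # Assumed defaults: adjust the port and model to match your local configuration.
    req = urllib.request.Request(
        "http://localhost:4891/v1/chat/completions",
        data=json.dumps({
            "model": "Llama 3.1 8B Instruct",
            "messages": [{"role": "user", "content": "Hello!"}],
        }).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print(json.load(resp)["choices"][0]["message"]["content"])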
From facb706211d3e47b8bc0fdf985993aec4ff19ca8 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Fri, 6 Sep 2024 12:03:30 -0400
Subject: [PATCH 61/66] ci: improve readability and correctness (#2941)
Signed-off-by: Jared Van Bortel
---
.circleci/config.yml | 1 -
.circleci/continue_config.yml | 143 ++++++++++++++--------------------
2 files changed, 57 insertions(+), 87 deletions(-)
diff --git a/.circleci/config.yml b/.circleci/config.yml
index 26bcbd6e..3e7af588 100644
--- a/.circleci/config.yml
+++ b/.circleci/config.yml
@@ -16,4 +16,3 @@ workflows:
gpt4all-bindings/python/.* run-python-workflow true
gpt4all-bindings/typescript/.* run-ts-workflow true
gpt4all-chat/.* run-chat-workflow true
- .* run-default-workflow true
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index 5d12ac43..1f5c6dcc 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -8,9 +8,6 @@ parameters:
run-all-workflows:
type: boolean
default: false
- run-default-workflow:
- type: boolean
- default: false
run-python-workflow:
type: boolean
default: false
@@ -22,11 +19,6 @@ parameters:
default: false
jobs:
- default-job:
- docker:
- - image: circleci/python:3.7
- steps:
- - run: echo "CircleCI pipeline triggered"
build-offline-chat-installer-macos:
macos:
xcode: 15.4.0
@@ -75,7 +67,7 @@ jobs:
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S ../gpt4all-chat -B . -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
- -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
+ -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
@@ -207,7 +199,7 @@ jobs:
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S ../gpt4all-chat -B . -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
- -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
+ -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
@@ -312,7 +304,16 @@ jobs:
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
- sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk patchelf cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server libmysqlclient21 libodbc2 libpq5
+ packages=(
+ bison build-essential cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
+ libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
+ libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
+ libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
+ libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-x11-0 libxkbcommon0 libxrender1 patchelf
+ python3 vulkan-sdk
+ )
+ sudo apt-get update
+ sudo apt-get install -y "${packages[@]}"
- run:
name: Installing Qt
command: |
@@ -372,7 +373,16 @@ jobs:
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
- sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk patchelf cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server libmysqlclient21 libodbc2 libpq5
+ packages=(
+ bison build-essential cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
+ libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
+ libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
+ libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
+ libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-x11-0 libxkbcommon0 libxrender1 patchelf
+ python3 vulkan-sdk
+ )
+ sudo apt-get update
+ sudo apt-get install -y "${packages[@]}"
- run:
name: Installing Qt
command: |
@@ -465,23 +475,12 @@ jobs:
name: Build
no_output_timeout: 30m
command: |
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
+ $vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
+ Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
+ Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
+
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.7\bin"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
mkdir build
@@ -593,23 +592,12 @@ jobs:
name: Build
no_output_timeout: 30m
command: |
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
+ $vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
+ Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
+ Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
+
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.7\bin"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
mkdir build
@@ -692,7 +680,16 @@ jobs:
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
- sudo apt update && sudo apt install -y libfontconfig1 libfreetype6 libx11-6 libx11-xcb1 libxext6 libxfixes3 libxi6 libxrender1 libxcb1 libxcb-cursor0 libxcb-glx0 libxcb-keysyms1 libxcb-image0 libxcb-shm0 libxcb-icccm4 libxcb-sync1 libxcb-xfixes0 libxcb-shape0 libxcb-randr0 libxcb-render-util0 libxcb-util1 libxcb-xinerama0 libxcb-xkb1 libxkbcommon0 libxkbcommon-x11-0 bison build-essential flex gperf python3 gcc g++ libgl1-mesa-dev libwayland-dev vulkan-sdk cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server libmysqlclient21 libodbc2 libpq5
+ packages=(
+ bison build-essential cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
+ libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
+ libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
+ libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
+ libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-x11-0 libxkbcommon0 libxrender1 python3
+ vulkan-sdk
+ )
+ sudo apt-get update
+ sudo apt-get install -y "${packages[@]}"
- run:
name: Installing Qt
command: |
@@ -754,22 +751,11 @@ jobs:
- run:
name: Build
command: |
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\x64"
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
- $Env:PATH = "${Env:PATH};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
+ $vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
+ Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
+ Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
+
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
- $Env:LIB = "${Env:LIB};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
- $Env:INCLUDE = "${Env:INCLUDE};C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S gpt4all-chat -B build -G Ninja `
@@ -814,7 +800,7 @@ jobs:
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S gpt4all-chat -B build -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
- -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake/Qt6 \
+ -DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
@@ -879,8 +865,11 @@ jobs:
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
+ packages=(
+ build-essential cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
+ )
sudo apt-get update
- sudo apt-get install -y cmake build-essential vulkan-sdk cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server
+ sudo apt-get install -y "${packages[@]}"
pip install setuptools wheel cmake
- run:
name: Build C library
@@ -971,23 +960,9 @@ jobs:
- run:
name: Build C library
command: |
- # Visual Studio setup
- # I would use Enter-VsDevShell but it causes cudafe++ to segfault
- $Env:PATH += ";C:\Program Files (x86)\Windows Kits\10\bin\x64"
- $Env:PATH += ";C:\Program Files (x86)\Windows Kits\10\bin\10.0.22000.0\x64"
- $Env:PATH += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\bin\HostX64\x64"
- $Env:LIB = "C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\ucrt\x64"
- $Env:LIB += ";C:\Program Files (x86)\Windows Kits\10\Lib\10.0.22000.0\um\x64"
- $Env:LIB += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\lib\x64"
- $Env:LIB += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\lib\x64"
- $Env:INCLUDE = "C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\ucrt"
- $Env:INCLUDE += ";C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\um"
- $Env:INCLUDE += ";C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\shared"
- $Env:INCLUDE += ";C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\winrt"
- $Env:INCLUDE += ";C:\Program Files (x86)\Windows Kits\10\include\10.0.22000.0\cppwinrt"
- $Env:INCLUDE += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Auxiliary\VS\include"
- $Env:INCLUDE += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\include"
- $Env:INCLUDE += ";C:\Program Files (x86)\Microsoft Visual Studio\2019\Community\VC\Tools\MSVC\14.29.30133\ATLMFC\include"
+ $vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
+ Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
+ Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
$Env:PATH += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
@@ -1020,7 +995,7 @@ jobs:
name: Install dependencies
command: |
sudo apt-get update
- sudo apt-get install -y cmake build-essential
+ sudo apt-get install -y build-essential cmake
pip install setuptools wheel twine
- run:
name: Upload Python package
@@ -1046,8 +1021,11 @@ jobs:
sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
+ packages=(
+ build-essential cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
+ )
sudo apt-get update
- sudo apt-get install -y cmake build-essential vulkan-sdk cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server libmysqlclient21 libodbc2 libpq5
+ sudo apt-get install -y "${packages[@]}"
- run:
name: Build Libraries
command: |
@@ -1309,13 +1287,6 @@ jobs:
workflows:
version: 2
- default:
- when:
- or:
- - << pipeline.parameters.run-all-workflows >>
- - << pipeline.parameters.run-default-workflow >>
- jobs:
- - default-job
build-chat-offline-installers:
when:
or:
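
The Windows build changes in this patch replace a brittle list of hard-coded MSVC 14.29 and Windows Kits paths with vswhere-based discovery followed by Enter-VsDevShell, so the environment tracks whatever toolset is actually installed. The same discovery step can be scripted outside PowerShell; a hedged sketch in Python, assuming the standard Visual Studio installer layout:

    import subprocess

    # vswhere.exe ships with the Visual Studio installer at this fixed path.
    VSWHERE = r"C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe"

    def vs_install_path() -> str:
        """Return the newest Visual Studio installation path reported by vswhere."""
        out = subprocess.run(
            [VSWHERE, "-latest", "-property", "installationpath"],
            capture_output=True, text=True, check=True,
        )
        return out.stdout.strip()

    if __name__ == "__main__":
        print(vs_install_path())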
From 1aae4ffe0a29ec41fadbef8098709e8a8ca7156c Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Fri, 6 Sep 2024 16:09:11 -0400
Subject: [PATCH 62/66] ci: use ccache to cache compiler output (#2942)
Signed-off-by: Jared Van Bortel
---
.circleci/continue_config.yml | 384 +++++++++++++++++++++++-----------
1 file changed, 263 insertions(+), 121 deletions(-)
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index 1f5c6dcc..fce8d0e8 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -29,25 +29,22 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - macos-qt-cache-v3
+ - ccache-gpt4all-macos-
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
+ - run:
+ name: Install dependencies
+ command: brew install ccache
- run:
name: Installing Qt
command: |
- if [ ! -d ~/Qt ]; then
- curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
- hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
- /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
- fi
- - save_cache: # this is the new step to save cache
- key: macos-qt-cache-v3
- paths:
- - ~/Qt
+ curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
+ hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
+ /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
+ hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
- run:
name: Setup Keychain
command: |
@@ -61,6 +58,7 @@ jobs:
name: Build
no_output_timeout: 30m
command: |
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.7/bin
@@ -69,6 +67,8 @@ jobs:
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
@@ -78,11 +78,17 @@ jobs:
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
+ ccache -s
mkdir upload
cp gpt4all-installer-* upload
# persist the unsigned installer
- store_artifacts:
path: build/upload
+ - save_cache:
+ key: ccache-gpt4all-macos-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
@@ -161,25 +167,22 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - macos-qt-cache-v3
+ - ccache-gpt4all-macos-
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
+ - run:
+ name: Install dependencies
+ command: brew install ccache
- run:
name: Installing Qt
command: |
- if [ ! -d ~/Qt ]; then
- curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
- hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
- /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
- fi
- - save_cache: # this is the new step to save cache
- key: macos-qt-cache-v3
- paths:
- - ~/Qt
+ curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
+ hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
+ /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
+ hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
- run:
name: Setup Keychain
command: |
@@ -193,6 +196,7 @@ jobs:
name: Build
no_output_timeout: 30m
command: |
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.7/bin
@@ -201,6 +205,8 @@ jobs:
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6 \
@@ -210,12 +216,18 @@ jobs:
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target all
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target install
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build . --target package
+ ccache -s
mkdir upload
cp gpt4all-installer-* upload
tar -cvzf upload/repository.tar.gz -C _CPack_Packages/Darwin/IFW/gpt4all-installer-darwin repository
# persist the unsigned installer
- store_artifacts:
path: build/upload
+ - save_cache:
+ key: ccache-gpt4all-macos-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
@@ -294,9 +306,9 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - linux-qt-cache-v2
+ - ccache-gpt4all-linux-amd64-
- run:
name: Setup Linux and Dependencies
command: |
@@ -305,7 +317,7 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- bison build-essential cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
+ bison build-essential ccache cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
@@ -317,15 +329,9 @@ jobs:
- run:
name: Installing Qt
command: |
- if [ ! -d ~/Qt ]; then
- wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
- chmod +x qt-unified-linux-x64-4.6.0-online.run
- ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
- fi
- - save_cache: # this is the new step to save cache
- key: linux-qt-cache-v2
- paths:
- - ~/Qt
+ wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
+ chmod +x qt-unified-linux-x64-4.6.0-online.run
+ ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
- run:
name: Build linuxdeployqt
command: |
@@ -339,19 +345,30 @@ jobs:
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.7/bin
export PATH=$PATH:/usr/local/cuda/bin
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
mkdir upload
~/Qt/Tools/CMake/bin/cmake \
-S ../gpt4all-chat -B . \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DGPT4ALL_OFFLINE_INSTALLER=ON
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target all
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target install
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target package
+ ccache -s
cp gpt4all-installer-* upload
- store_artifacts:
path: build/upload
+ - save_cache:
+ key: ccache-gpt4all-linux-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
build-online-chat-installer-linux:
machine:
@@ -363,9 +380,9 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - linux-qt-cache-v2
+ - ccache-gpt4all-linux-amd64-
- run:
name: Setup Linux and Dependencies
command: |
@@ -374,7 +391,7 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- bison build-essential cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
+ bison build-essential ccache cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
@@ -386,15 +403,9 @@ jobs:
- run:
name: Installing Qt
command: |
- if [ ! -d ~/Qt ]; then
- wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
- chmod +x qt-unified-linux-x64-4.6.0-online.run
- ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
- fi
- - save_cache: # this is the new step to save cache
- key: linux-qt-cache-v2
- paths:
- - ~/Qt
+ wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
+ chmod +x qt-unified-linux-x64-4.6.0-online.run
+ ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
- run:
name: Build linuxdeployqt
command: |
@@ -408,20 +419,31 @@ jobs:
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
export PATH=$PATH:$HOME/Qt/Tools/QtInstallerFramework/4.7/bin
export PATH=$PATH:/usr/local/cuda/bin
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
mkdir build
cd build
mkdir upload
~/Qt/Tools/CMake/bin/cmake \
-S ../gpt4all-chat -B . \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DGPT4ALL_OFFLINE_INSTALLER=OFF
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target all
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target install
~/Qt/Tools/CMake/bin/cmake --build . -j$(nproc) --target package
+ ccache -s
cp gpt4all-installer-* upload
tar -cvzf upload/repository.tar.gz -C _CPack_Packages/Linux/IFW/gpt4all-installer-linux repository
- store_artifacts:
path: build/upload
+ - save_cache:
+ key: ccache-gpt4all-linux-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
build-offline-chat-installer-windows:
machine:
@@ -435,20 +457,17 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - windows-qt-cache-v2
+ - ccache-gpt4all-win-amd64-
+ - run:
+ name: Install dependencies
+ command: choco install -y ccache
- run:
name: Installing Qt
command: |
- if (-not (Test-Path C:\Qt)) {
- Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
- & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- }
- - save_cache: # this is the new step to save cache
- key: windows-qt-cache-v2
- paths:
- - C:\Qt
+ Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
+ & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- run:
name: Install VulkanSDK
command: |
@@ -483,29 +502,40 @@ jobs:
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.7\bin"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
+ ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S ..\gpt4all-chat -B . -G Ninja `
- "-DCMAKE_BUILD_TYPE=Release" `
+ -DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
- "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
- "-DGPT4ALL_OFFLINE_INSTALLER=ON"
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
+ -DGPT4ALL_OFFLINE_INSTALLER=ON
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
+ ccache -s
mkdir upload
copy gpt4all-installer-win64.exe upload
- store_artifacts:
path: build/upload
# add workspace so signing jobs can connect & obtain dmg
+ - save_cache:
+ key: ccache-gpt4all-win-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ..\.ccache
- persist_to_workspace:
root: build
# specify path to only include components we want to persist
# across builds
paths:
- upload
+
sign-offline-chat-installer-windows:
machine:
image: 'windows-server-2019-vs2019:2022.08.1'
@@ -535,6 +565,7 @@ jobs:
AzureSignTool.exe sign -du "https://gpt4all.io/index.html" -kvu https://gpt4all.vault.azure.net -kvi "$Env:AZSignGUID" -kvs "$Env:AZSignPWD" -kvc "$Env:AZSignCertName" -kvt "$Env:AZSignTID" -tr http://timestamp.digicert.com -v "$($(Get-Location).Path)\build\upload\gpt4all-installer-win64.exe"
- store_artifacts:
path: build/upload
+
build-online-chat-installer-windows:
machine:
image: 'windows-server-2019-vs2019:2022.08.1'
@@ -547,20 +578,17 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - windows-qt-cache-v2
+ - ccache-gpt4all-win-amd64-
+ - run:
+ name: Install dependencies
+ command: choco install -y ccache
- run:
name: Installing Qt
command: |
- if (-not (Test-Path C:\Qt)) {
- Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
- & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- }
- - save_cache: # this is the new step to save cache
- key: windows-qt-cache-v2
- paths:
- - C:\Qt
+ Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
+ & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- run:
name: Install VulkanSDK
command: |
@@ -600,24 +628,34 @@ jobs:
$Env:PATH = "${Env:PATH};C:\Qt\Tools\QtInstallerFramework\4.7\bin"
$Env:DOTNET_ROOT="$($(Get-Location).Path)\dotnet\dotnet-sdk-8.0.302-win-x64"
$Env:PATH="$Env:DOTNET_ROOT;$Env:PATH"
+ ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
mkdir build
cd build
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S ..\gpt4all-chat -B . -G Ninja `
- "-DCMAKE_BUILD_TYPE=Release" `
+ -DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
- "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON" `
- "-DGPT4ALL_OFFLINE_INSTALLER=OFF"
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
+ -DGPT4ALL_OFFLINE_INSTALLER=OFF
& "C:\Qt\Tools\Ninja\ninja.exe"
& "C:\Qt\Tools\Ninja\ninja.exe" install
& "C:\Qt\Tools\Ninja\ninja.exe" package
+ ccache -s
mkdir upload
copy gpt4all-installer-win64.exe upload
Set-Location -Path "_CPack_Packages/win64/IFW/gpt4all-installer-win64"
Compress-Archive -Path 'repository' -DestinationPath '..\..\..\..\upload\repository.zip'
- store_artifacts:
path: build/upload
+ - save_cache:
+ key: ccache-gpt4all-win-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ..\.ccache
# add workspace so signing jobs can connect & obtain dmg
- persist_to_workspace:
root: build
@@ -625,6 +663,7 @@ jobs:
# across builds
paths:
- upload
+
sign-online-chat-installer-windows:
machine:
image: 'windows-server-2019-vs2019:2022.08.1'
@@ -670,9 +709,9 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - linux-qt-cache-v2
+ - ccache-gpt4all-linux-amd64-
- run:
name: Setup Linux and Dependencies
command: |
@@ -681,7 +720,7 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- bison build-essential cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
+ bison build-essential ccache cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
@@ -693,24 +732,29 @@ jobs:
- run:
name: Installing Qt
command: |
- if [ ! -d ~/Qt ]; then
- wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
- chmod +x qt-unified-linux-x64-4.6.0-online.run
- ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
- fi
- - save_cache: # this is the new step to save cache
- key: linux-qt-cache-v2
- paths:
- - ~/Qt
+ wget https://gpt4all.io/ci/qt-unified-linux-x64-4.6.0-online.run
+ chmod +x qt-unified-linux-x64-4.6.0-online.run
+ ./qt-unified-linux-x64-4.6.0-online.run --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.gcc_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver qt.qt6.651.qtwaylandcompositor
- run:
name: Build
command: |
export CMAKE_PREFIX_PATH=~/Qt/6.5.1/gcc_64/lib/cmake
export PATH=$PATH:/usr/local/cuda/bin
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
~/Qt/Tools/CMake/bin/cmake \
-S gpt4all-chat -B build \
- -DCMAKE_BUILD_TYPE=Release
+ -DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
~/Qt/Tools/CMake/bin/cmake --build build -j$(nproc) --target all
+ ccache -s
+ - save_cache:
+ key: ccache-gpt4all-linux-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
build-gpt4all-chat-windows:
machine:
@@ -724,20 +768,17 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - windows-qt-cache-v2
+ - ccache-gpt4all-win-amd64-
+ - run:
+ name: Install dependencies
+ command: choco install -y ccache
- run:
name: Installing Qt
command: |
- if (-not (Test-Path C:\Qt)) {
- Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
- & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- }
- - save_cache: # this is the new step to save cache
- key: windows-qt-cache-v2
- paths:
- - C:\Qt
+ Invoke-WebRequest -Uri https://gpt4all.io/ci/qt-unified-windows-x64-4.6.0-online.exe -OutFile qt-unified-windows-x64-4.6.0-online.exe
+ & .\qt-unified-windows-x64-4.6.0-online.exe --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email ${Env:QT_EMAIL} --password ${Env:QT_PASSWORD} install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.win64_msvc2019_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- run:
name: Install VulkanSDK
command: |
@@ -757,13 +798,23 @@ jobs:
$Env:PATH = "${Env:PATH};C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
+ ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
& "C:\Qt\Tools\CMake_64\bin\cmake.exe" `
-S gpt4all-chat -B build -G Ninja `
- "-DCMAKE_BUILD_TYPE=Release" `
+ -DCMAKE_BUILD_TYPE=Release `
"-DCMAKE_PREFIX_PATH:PATH=C:\Qt\6.5.1\msvc2019_64" `
"-DCMAKE_MAKE_PROGRAM:FILEPATH=C:\Qt\Tools\Ninja\ninja.exe" `
- "-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON"
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
& "C:\Qt\Tools\Ninja\ninja.exe" -C build
+ ccache -s
+ - save_cache:
+ key: ccache-gpt4all-win-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ..\.ccache
build-gpt4all-chat-macos:
macos:
@@ -775,37 +826,44 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
- - restore_cache: # this is the new step to restore cache
+ - restore_cache:
keys:
- - macos-qt-cache-v3
+ - ccache-gpt4all-macos-
- run:
name: Install Rosetta
command: softwareupdate --install-rosetta --agree-to-license # needed for QtIFW
+ - run:
+ name: Install dependencies
+ command: brew install ccache
- run:
name: Installing Qt
command: |
- if [ ! -d ~/Qt ]; then
- curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
- hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
- /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
- hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
- fi
- - save_cache: # this is the new step to save cache
- key: macos-qt-cache-v3
- paths:
- - ~/Qt
+ curl -o qt-unified-macOS-x64-4.6.0-online.dmg https://gpt4all.io/ci/qt-unified-macOS-x64-4.6.0-online.dmg
+ hdiutil attach qt-unified-macOS-x64-4.6.0-online.dmg
+ /Volumes/qt-unified-macOS-x64-4.6.0-online/qt-unified-macOS-x64-4.6.0-online.app/Contents/MacOS/qt-unified-macOS-x64-4.6.0-online --no-force-installations --no-default-installations --no-size-checking --default-answer --accept-licenses --confirm-command --accept-obligations --email $QT_EMAIL --password $QT_PASSWORD install qt.tools.cmake qt.tools.ifw.47 qt.tools.ninja qt.qt6.651.clang_64 qt.qt6.651.qt5compat qt.qt6.651.debug_info qt.qt6.651.addons.qtpdf qt.qt6.651.addons.qthttpserver
+ hdiutil detach /Volumes/qt-unified-macOS-x64-4.6.0-online
- run:
name: Build
command: |
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake \
-S gpt4all-chat -B build -G Ninja \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_PREFIX_PATH:PATH=~/Qt/6.5.1/macos/lib/cmake \
-DCMAKE_MAKE_PROGRAM:FILEPATH=~/Qt/Tools/Ninja/ninja \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6
~/Qt/Tools/CMake/CMake.app/Contents/bin/cmake --build build --target all
+ ccache -s
+ - save_cache:
+ key: ccache-gpt4all-macos-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
+
build-ts-docs:
docker:
- image: cimg/base:stable
@@ -824,6 +882,7 @@ jobs:
command: |
cd gpt4all-bindings/typescript
npm run docs:build
+
build-py-docs:
docker:
- image: circleci/python:3.8
@@ -855,6 +914,9 @@ jobs:
image: ubuntu-2204:2023.04.2
steps:
- checkout
+ - restore_cache:
+ keys:
+ - ccache-gpt4all-linux-amd64-
- run:
name: Set Python Version
command: pyenv global 3.11.2
@@ -866,7 +928,7 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- build-essential cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
+ build-essential ccache cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
)
sudo apt-get update
sudo apt-get install -y "${packages[@]}"
@@ -876,12 +938,17 @@ jobs:
command: |
export PATH=$PATH:/usr/local/cuda/bin
git submodule update --init --recursive
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
cmake -B build \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON \
-DCMAKE_CUDA_ARCHITECTURES='52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build -j$(nproc)
+ ccache -s
- run:
name: Build wheel
command: |
@@ -889,6 +956,11 @@ jobs:
python setup.py bdist_wheel --plat-name=manylinux1_x86_64
- store_artifacts:
path: gpt4all-bindings/python/dist
+ - save_cache:
+ key: ccache-gpt4all-linux-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
@@ -900,22 +972,29 @@ jobs:
resource_class: macos.m1.large.gen1
steps:
- checkout
+ - restore_cache:
+ keys:
+ - ccache-gpt4all-macos-
- run:
name: Install dependencies
command: |
- brew install cmake
+ brew install ccache cmake
pip install setuptools wheel cmake
- run:
name: Build C library
command: |
git submodule update --init # don't use --recursive because macOS doesn't use Kompute
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
cmake -B build \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6
cmake --build build --parallel
+ ccache -s
- run:
name: Build wheel
command: |
@@ -923,6 +1002,11 @@ jobs:
python setup.py bdist_wheel --plat-name=macosx_10_15_universal2
- store_artifacts:
path: gpt4all-bindings/python/dist
+ - save_cache:
+ key: ccache-gpt4all-macos-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
@@ -940,6 +1024,9 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
+ - restore_cache:
+ keys:
+ - ccache-gpt4all-win-amd64-
- run:
name: Install VulkanSDK
command: |
@@ -953,7 +1040,7 @@ jobs:
- run:
name: Install dependencies
command:
- choco install -y cmake ninja --installargs 'ADD_CMAKE_TO_PATH=System'
+ choco install -y ccache cmake ninja --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Install Python dependencies
command: pip install setuptools wheel cmake
@@ -966,12 +1053,17 @@ jobs:
$Env:PATH += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
+ ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
cd gpt4all-backend
cmake -B build -G Ninja `
-DCMAKE_BUILD_TYPE=Release `
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
-DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON `
-DCMAKE_CUDA_ARCHITECTURES='52-virtual;61-virtual;70-virtual;75-virtual'
cmake --build build --parallel
+ ccache -s
- run:
name: Build wheel
command: |
@@ -979,6 +1071,11 @@ jobs:
python setup.py bdist_wheel --plat-name=win_amd64
- store_artifacts:
path: gpt4all-bindings/python/dist
+ - save_cache:
+ key: ccache-gpt4all-win-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ..\.ccache
- persist_to_workspace:
root: gpt4all-bindings/python/dist
paths:
@@ -1014,6 +1111,9 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
+ - restore_cache:
+ keys:
+ - ccache-gpt4all-linux-amd64-
- run:
name: Install dependencies
command: |
@@ -1022,7 +1122,7 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- build-essential cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
+ build-essential ccache cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
)
sudo apt-get update
sudo apt-get install -y "${packages[@]}"
@@ -1030,14 +1130,25 @@ jobs:
name: Build Libraries
command: |
export PATH=$PATH:/usr/local/cuda/bin
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
cmake ../.. \
- -DCMAKE_BUILD_TYPE=Release
+ -DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
cmake --build . -j$(nproc)
+ ccache -s
mkdir ../linux-x64
cp -L *.so ../linux-x64 # otherwise persist_to_workspace seems to mess up symlinks
+ - save_cache:
+ key: ccache-gpt4all-linux-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
- persist_to_workspace:
root: gpt4all-backend
paths:
@@ -1053,26 +1164,38 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
+ - restore_cache:
+ keys:
+ - ccache-gpt4all-macos-
- run:
name: Install dependencies
command: |
- brew install cmake
+ brew install ccache cmake
- run:
name: Build Libraries
command: |
+ ccache -o "cache_dir=${PWD}/../.ccache" -o max_size=500M -p -z
cd gpt4all-backend
mkdir -p runtimes/build
cd runtimes/build
cmake ../.. \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache \
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DBUILD_UNIVERSAL=ON \
-DCMAKE_OSX_DEPLOYMENT_TARGET=12.6 \
-DGGML_METAL_MACOSX_VERSION_MIN=12.6
cmake --build . --parallel
+ ccache -s
mkdir ../osx-x64
cp -L *.dylib ../osx-x64
cp ../../llama.cpp-mainline/*.metal ../osx-x64
ls ../osx-x64
+ - save_cache:
+ key: ccache-gpt4all-macos-{{ epoch }}
+ when: always
+ paths:
+ - ../.ccache
- persist_to_workspace:
root: gpt4all-backend
paths:
@@ -1091,6 +1214,9 @@ jobs:
command: |
git submodule sync
git submodule update --init --recursive
+ - restore_cache:
+ keys:
+ - ccache-gpt4all-win-amd64-
- run:
name: Install VulkanSDK
command: |
@@ -1104,19 +1230,34 @@ jobs:
- run:
name: Install dependencies
command: |
- choco install -y cmake --installargs 'ADD_CMAKE_TO_PATH=System'
+ choco install -y ccache cmake ninja --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Build Libraries
command: |
- $Env:Path += ";C:\Program Files\CMake\bin"
+ $vsInstallPath = & "C:\Program Files (x86)\Microsoft Visual Studio\Installer\vswhere.exe" -property installationpath
+ Import-Module "${vsInstallPath}\Common7\Tools\Microsoft.VisualStudio.DevShell.dll"
+ Enter-VsDevShell -VsInstallPath "$vsInstallPath" -SkipAutomaticLocation -DevCmdArguments '-arch=x64 -no_logo'
+
$Env:Path += ";C:\VulkanSDK\1.3.261.1\bin"
$Env:VULKAN_SDK = "C:\VulkanSDK\1.3.261.1"
+ ccache -o "cache_dir=${pwd}\..\.ccache" -o max_size=500M -p -z
cd gpt4all-backend
mkdir runtimes/win-x64_msvc
cd runtimes/win-x64_msvc
- cmake -G "Visual Studio 17 2022" -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON -A X64 ../..
- cmake --build . --parallel --config Release
+ cmake -S ../.. -B . -G Ninja `
+ -DCMAKE_BUILD_TYPE=Release `
+ -DCMAKE_C_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CXX_COMPILER_LAUNCHER=ccache `
+ -DCMAKE_CUDA_COMPILER_LAUNCHER=ccache `
+ -DKOMPUTE_OPT_DISABLE_VULKAN_VERSION_CHECK=ON
+ cmake --build . --parallel
+ ccache -s
cp bin/Release/*.dll .
+ - save_cache:
+ key: ccache-gpt4all-win-amd64-{{ epoch }}
+ when: always
+ paths:
+ - ..\.ccache
- persist_to_workspace:
root: gpt4all-backend
paths:
@@ -1153,6 +1294,7 @@ jobs:
paths:
- prebuilds/linux-x64/*.node
- runtimes/linux-x64/*-*.so
+
build-nodejs-macos:
macos:
xcode: 15.4.0
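
The recurring pattern in this patch is: point ccache at a workspace-relative cache directory, zero its statistics, route compilers through ccache via CMake's launcher variables, then print hit/miss statistics and save the cache under a key suffixed with {{ epoch }} so the prefix-based restore_cache picks up the newest cache on the next run. A compact sketch of the same build flow, using ccache's environment variables rather than -o (paths and the 500M limit mirror the config above):

    import os
    import subprocess

    env = dict(
        os.environ,
        CCACHE_DIR=os.path.abspath(os.path.join("..", ".ccache")),  # workspace-relative cache
        CCACHE_MAXSIZE="500M",
    )

    subprocess.run(["ccache", "-z"], env=env, check=True)  # zero statistics before the build
    subprocess.run(
        ["cmake", "-S", "gpt4all-chat", "-B", "build",
         "-DCMAKE_BUILD_TYPE=Release",
         "-DCMAKE_C_COMPILER_LAUNCHER=ccache",
         "-DCMAKE_CXX_COMPILER_LAUNCHER=ccache"],
        env=env, check=True,
    )
    subprocess.run(["cmake", "--build", "build", "--parallel"], env=env, check=True)
    subprocess.run(["ccache", "-s"], env=env, check=True)  # report hit/miss statistics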
From 39005288c50f5d3ed84b76bebf94fb6a474239f8 Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 9 Sep 2024 10:48:57 -0400
Subject: [PATCH 63/66] server: improve correctness of request parsing and
responses (#2929)
Signed-off-by: Jared Van Bortel
---
.circleci/continue_config.yml | 123 +--
.gitmodules | 3 +
gpt4all-backend/CMakeLists.txt | 2 +-
gpt4all-backend/deps/llama.cpp-mainline | 2 +-
.../include/gpt4all-backend/llmodel.h | 7 +-
gpt4all-backend/src/llamamodel.cpp | 4 +-
gpt4all-backend/src/llamamodel_impl.h | 3 +-
gpt4all-backend/src/llmodel_c.cpp | 8 +-
gpt4all-backend/src/llmodel_shared.cpp | 14 +-
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/CMakeLists.txt | 10 +-
gpt4all-chat/deps/fmt | 1 +
gpt4all-chat/src/chat.cpp | 5 +-
gpt4all-chat/src/chatapi.cpp | 4 +-
gpt4all-chat/src/chatapi.h | 7 +-
gpt4all-chat/src/chatllm.cpp | 38 +-
gpt4all-chat/src/chatllm.h | 5 +-
gpt4all-chat/src/localdocsmodel.h | 11 +-
gpt4all-chat/src/modellist.h | 6 +-
gpt4all-chat/src/mysettings.h | 1 +
gpt4all-chat/src/server.cpp | 839 +++++++++++++-----
gpt4all-chat/src/server.h | 24 +-
22 files changed, 790 insertions(+), 328 deletions(-)
create mode 160000 gpt4all-chat/deps/fmt
diff --git a/.circleci/continue_config.yml b/.circleci/continue_config.yml
index fce8d0e8..a25a484d 100644
--- a/.circleci/continue_config.yml
+++ b/.circleci/continue_config.yml
@@ -317,9 +317,9 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- bison build-essential ccache cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
- libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
- libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
+ bison build-essential ccache cuda-compiler-11-8 flex g++-12 gperf libcublas-dev-11-8 libfontconfig1
+ libfreetype6 libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev
+ libx11-6 libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-x11-0 libxkbcommon0 libxrender1 patchelf
python3 vulkan-sdk
@@ -352,6 +352,8 @@ jobs:
~/Qt/Tools/CMake/bin/cmake \
-S ../gpt4all-chat -B . \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER=gcc-12 \
+ -DCMAKE_CXX_COMPILER=g++-12 \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
@@ -391,9 +393,9 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- bison build-essential ccache cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
- libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
- libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
+ bison build-essential ccache cuda-compiler-11-8 flex g++-12 gperf libcublas-dev-11-8 libfontconfig1
+ libfreetype6 libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev
+ libx11-6 libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-x11-0 libxkbcommon0 libxrender1 patchelf
python3 vulkan-sdk
@@ -426,6 +428,8 @@ jobs:
~/Qt/Tools/CMake/bin/cmake \
-S ../gpt4all-chat -B . \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER=gcc-12 \
+ -DCMAKE_CXX_COMPILER=g++-12 \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
@@ -447,7 +451,7 @@ jobs:
build-offline-chat-installer-windows:
machine:
- image: 'windows-server-2019-vs2019:2022.08.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -538,7 +542,7 @@ jobs:
sign-offline-chat-installer-windows:
machine:
- image: 'windows-server-2019-vs2019:2022.08.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -568,7 +572,7 @@ jobs:
build-online-chat-installer-windows:
machine:
- image: 'windows-server-2019-vs2019:2022.08.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -666,7 +670,7 @@ jobs:
sign-online-chat-installer-windows:
machine:
- image: 'windows-server-2019-vs2019:2022.08.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -720,9 +724,9 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- bison build-essential ccache cuda-compiler-11-8 flex gperf libcublas-dev-11-8 libfontconfig1 libfreetype6
- libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev libx11-6
- libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
+ bison build-essential ccache cuda-compiler-11-8 flex g++-12 gperf libcublas-dev-11-8 libfontconfig1
+ libfreetype6 libgl1-mesa-dev libmysqlclient21 libnvidia-compute-550-server libodbc2 libpq5 libwayland-dev
+ libx11-6 libx11-xcb1 libxcb-cursor0 libxcb-glx0 libxcb-icccm4 libxcb-image0 libxcb-keysyms1 libxcb-randr0
libxcb-render-util0 libxcb-shape0 libxcb-shm0 libxcb-sync1 libxcb-util1 libxcb-xfixes0 libxcb-xinerama0
libxcb-xkb1 libxcb1 libxext6 libxfixes3 libxi6 libxkbcommon-x11-0 libxkbcommon0 libxrender1 python3
vulkan-sdk
@@ -744,6 +748,8 @@ jobs:
~/Qt/Tools/CMake/bin/cmake \
-S gpt4all-chat -B build \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER=gcc-12 \
+ -DCMAKE_CXX_COMPILER=g++-12 \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
@@ -758,7 +764,7 @@ jobs:
build-gpt4all-chat-windows:
machine:
- image: 'windows-server-2019-vs2019:2022.08.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -864,8 +870,8 @@ jobs:
paths:
- ../.ccache
- build-ts-docs:
- docker:
+ build-ts-docs:
+ docker:
- image: cimg/base:stable
steps:
- checkout
@@ -887,7 +893,7 @@ jobs:
docker:
- image: circleci/python:3.8
steps:
- - checkout
+ - checkout
- run:
name: Install dependencies
command: |
@@ -928,7 +934,8 @@ jobs:
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- build-essential ccache cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
+ build-essential ccache cmake cuda-compiler-11-8 g++-12 libcublas-dev-11-8 libnvidia-compute-550-server
+ vulkan-sdk
)
sudo apt-get update
sudo apt-get install -y "${packages[@]}"
@@ -942,6 +949,8 @@ jobs:
cd gpt4all-backend
cmake -B build \
-DCMAKE_BUILD_TYPE=Release \
+ -DCMAKE_C_COMPILER=gcc-12 \
+ -DCMAKE_CXX_COMPILER=g++-12 \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
-DCMAKE_CUDA_COMPILER_LAUNCHER=ccache \
@@ -1014,7 +1023,7 @@ jobs:
build-py-windows:
machine:
- image: 'windows-server-2019-vs2019:2022.08.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -1118,11 +1127,12 @@ jobs:
name: Install dependencies
command: |
wget -qO- https://packages.lunarg.com/lunarg-signing-key-pub.asc | sudo tee /etc/apt/trusted.gpg.d/lunarg.asc
- sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
+ sudo wget -qO /etc/apt/sources.list.d/lunarg-vulkan-jammy.list http://packages.lunarg.com/vulkan/lunarg-vulkan-jammy.list
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-keyring_1.1-1_all.deb
sudo dpkg -i cuda-keyring_1.1-1_all.deb
packages=(
- build-essential ccache cmake cuda-compiler-11-8 libcublas-dev-11-8 libnvidia-compute-550-server vulkan-sdk
+ build-essential ccache cmake cuda-compiler-11-8 g++-12 libcublas-dev-11-8 libnvidia-compute-550-server
+ vulkan-sdk
)
sudo apt-get update
sudo apt-get install -y "${packages[@]}"
@@ -1135,6 +1145,8 @@ jobs:
mkdir -p runtimes/build
cd runtimes/build
cmake ../.. \
+              -DCMAKE_C_COMPILER=gcc-12 \
+              -DCMAKE_CXX_COMPILER=g++-12 \
-DCMAKE_BUILD_TYPE=Release \
-DCMAKE_C_COMPILER_LAUNCHER=ccache \
-DCMAKE_CXX_COMPILER_LAUNCHER=ccache \
@@ -1204,7 +1217,7 @@ jobs:
build-bindings-backend-windows:
machine:
- image: 'windows-server-2022-gui:2023.03.1'
+ image: windows-server-2022-gui:current
resource_class: windows.large
shell: powershell.exe -ExecutionPolicy Bypass
steps:
@@ -1230,7 +1243,7 @@ jobs:
- run:
name: Install dependencies
command: |
- choco install -y ccache cmake ninja --installargs 'ADD_CMAKE_TO_PATH=System'
+ choco install -y ccache cmake ninja --installargs 'ADD_CMAKE_TO_PATH=System'
- run:
name: Build Libraries
command: |
@@ -1263,8 +1276,8 @@ jobs:
paths:
- runtimes/win-x64_msvc/*.dll
- build-nodejs-linux:
- docker:
+ build-nodejs-linux:
+ docker:
- image: cimg/base:stable
steps:
- checkout
@@ -1280,10 +1293,10 @@ jobs:
pkg-manager: yarn
override-ci-command: yarn install
- run:
- command: |
+ command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- - run:
+ - run:
command: |
mkdir -p gpt4all-backend/prebuilds/linux-x64
mkdir -p gpt4all-backend/runtimes/linux-x64
@@ -1292,10 +1305,10 @@ jobs:
- persist_to_workspace:
root: gpt4all-backend
paths:
- - prebuilds/linux-x64/*.node
+ - prebuilds/linux-x64/*.node
- runtimes/linux-x64/*-*.so
- build-nodejs-macos:
+ build-nodejs-macos:
macos:
xcode: 15.4.0
steps:
@@ -1312,12 +1325,12 @@ jobs:
pkg-manager: yarn
override-ci-command: yarn install
- run:
- command: |
+ command: |
cd gpt4all-bindings/typescript
yarn prebuildify -t 18.16.0 --napi
- - run:
+ - run:
name: "Persisting all necessary things to workspace"
- command: |
+ command: |
mkdir -p gpt4all-backend/prebuilds/darwin-x64
mkdir -p gpt4all-backend/runtimes/darwin
cp /tmp/gpt4all-backend/runtimes/osx-x64/*-*.* gpt4all-backend/runtimes/darwin
@@ -1328,7 +1341,7 @@ jobs:
- prebuilds/darwin-x64/*.node
- runtimes/darwin/*-*.*
- build-nodejs-windows:
+ build-nodejs-windows:
executor:
name: win/default
size: large
@@ -1342,29 +1355,29 @@ jobs:
command: wget https://nodejs.org/dist/v18.16.0/node-v18.16.0-x86.msi -P C:\Users\circleci\Downloads\
shell: cmd.exe
- run: MsiExec.exe /i C:\Users\circleci\Downloads\node-v18.16.0-x86.msi /qn
- - run:
+ - run:
command: |
Start-Process powershell -verb runAs -Args "-start GeneralProfile"
nvm install 18.16.0
nvm use 18.16.0
- - run: node --version
+ - run: node --version
- run: corepack enable
- - run:
+ - run:
command: |
npm install -g yarn
cd gpt4all-bindings/typescript
yarn install
- run:
- command: |
+ command: |
cd gpt4all-bindings/typescript
- yarn prebuildify -t 18.16.0 --napi
- - run:
+ yarn prebuildify -t 18.16.0 --napi
+ - run:
command: |
mkdir -p gpt4all-backend/prebuilds/win32-x64
mkdir -p gpt4all-backend/runtimes/win32-x64
cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll gpt4all-backend/runtimes/win32-x64
cp gpt4all-bindings/typescript/prebuilds/win32-x64/*.node gpt4all-backend/prebuilds/win32-x64
-
+
- persist_to_workspace:
root: gpt4all-backend
paths:
@@ -1372,7 +1385,7 @@ jobs:
- runtimes/win32-x64/*-*.dll
prepare-npm-pkg:
- docker:
+ docker:
- image: cimg/base:stable
steps:
- attach_workspace:
@@ -1383,19 +1396,19 @@ jobs:
node-version: "18.16"
- run: node --version
- run: corepack enable
- - run:
+ - run:
command: |
cd gpt4all-bindings/typescript
          # excluding llmodel. nodejs bindings don't need llmodel.dll
mkdir -p runtimes/win32-x64/native
mkdir -p prebuilds/win32-x64/
- cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll runtimes/win32-x64/native/
- cp /tmp/gpt4all-backend/prebuilds/win32-x64/*.node prebuilds/win32-x64/
+ cp /tmp/gpt4all-backend/runtimes/win-x64_msvc/*-*.dll runtimes/win32-x64/native/
+ cp /tmp/gpt4all-backend/prebuilds/win32-x64/*.node prebuilds/win32-x64/
- mkdir -p runtimes/linux-x64/native
+ mkdir -p runtimes/linux-x64/native
mkdir -p prebuilds/linux-x64/
- cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so runtimes/linux-x64/native/
- cp /tmp/gpt4all-backend/prebuilds/linux-x64/*.node prebuilds/linux-x64/
+ cp /tmp/gpt4all-backend/runtimes/linux-x64/*-*.so runtimes/linux-x64/native/
+ cp /tmp/gpt4all-backend/prebuilds/linux-x64/*.node prebuilds/linux-x64/
          # darwin has universal runtime libraries
mkdir -p runtimes/darwin/native
@@ -1403,22 +1416,22 @@ jobs:
cp /tmp/gpt4all-backend/runtimes/darwin/*-*.* runtimes/darwin/native/
- cp /tmp/gpt4all-backend/prebuilds/darwin-x64/*.node prebuilds/darwin-x64/
-
+ cp /tmp/gpt4all-backend/prebuilds/darwin-x64/*.node prebuilds/darwin-x64/
+
# Fallback build if user is not on above prebuilds
mv -f binding.ci.gyp binding.gyp
mkdir gpt4all-backend
cd ../../gpt4all-backend
mv llmodel.h llmodel.cpp llmodel_c.cpp llmodel_c.h sysinfo.h dlhandle.h ../gpt4all-bindings/typescript/gpt4all-backend/
-
+
# Test install
- node/install-packages:
app-dir: gpt4all-bindings/typescript
pkg-manager: yarn
override-ci-command: yarn install
- - run:
- command: |
+ - run:
+ command: |
cd gpt4all-bindings/typescript
yarn run test
- run:
@@ -1552,7 +1565,7 @@ workflows:
- build-py-linux
- build-py-macos
build-bindings:
- when:
+ when:
or:
- << pipeline.parameters.run-all-workflows >>
- << pipeline.parameters.run-python-workflow >>
@@ -1585,8 +1598,8 @@ workflows:
requires:
- hold
- # NodeJs Jobs
- - prepare-npm-pkg:
+ # NodeJs Jobs
+ - prepare-npm-pkg:
filters:
branches:
only:
diff --git a/.gitmodules b/.gitmodules
index b59d07fd..6ed4b266 100644
--- a/.gitmodules
+++ b/.gitmodules
@@ -8,3 +8,6 @@
[submodule "gpt4all-chat/deps/SingleApplication"]
path = gpt4all-chat/deps/SingleApplication
url = https://github.com/nomic-ai/SingleApplication.git
+[submodule "gpt4all-chat/deps/fmt"]
+ path = gpt4all-chat/deps/fmt
+ url = https://github.com/fmtlib/fmt.git
diff --git a/gpt4all-backend/CMakeLists.txt b/gpt4all-backend/CMakeLists.txt
index 2c1fbb46..fb5937aa 100644
--- a/gpt4all-backend/CMakeLists.txt
+++ b/gpt4all-backend/CMakeLists.txt
@@ -33,7 +33,7 @@ set(LLMODEL_VERSION_PATCH 0)
set(LLMODEL_VERSION "${LLMODEL_VERSION_MAJOR}.${LLMODEL_VERSION_MINOR}.${LLMODEL_VERSION_PATCH}")
project(llmodel VERSION ${LLMODEL_VERSION} LANGUAGES CXX C)
-set(CMAKE_CXX_STANDARD 20)
+set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
set(CMAKE_LIBRARY_OUTPUT_DIRECTORY ${CMAKE_RUNTIME_OUTPUT_DIRECTORY})
set(BUILD_SHARED_LIBS ON)
diff --git a/gpt4all-backend/deps/llama.cpp-mainline b/gpt4all-backend/deps/llama.cpp-mainline
index 443665ae..ced74fba 160000
--- a/gpt4all-backend/deps/llama.cpp-mainline
+++ b/gpt4all-backend/deps/llama.cpp-mainline
@@ -1 +1 @@
-Subproject commit 443665aec4721ecf57df8162e7e093a0cd674a76
+Subproject commit ced74fbad4b258507f3ec06e77eec9445583511a
diff --git a/gpt4all-backend/include/gpt4all-backend/llmodel.h b/gpt4all-backend/include/gpt4all-backend/llmodel.h
index 04a510dc..d18584eb 100644
--- a/gpt4all-backend/include/gpt4all-backend/llmodel.h
+++ b/gpt4all-backend/include/gpt4all-backend/llmodel.h
@@ -162,7 +162,7 @@ public:
bool allowContextShift,
PromptContext &ctx,
bool special = false,
- std::string *fakeReply = nullptr);
+                        std::optional<std::string_view> fakeReply = {});
using EmbedCancelCallback = bool(unsigned *batchSizes, unsigned nBatch, const char *backend);
@@ -212,7 +212,7 @@ public:
protected:
// These are pure virtual because subclasses need to implement as the default implementation of
// 'prompt' above calls these functions
-    virtual std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special = false) = 0;
+    virtual std::vector<Token> tokenize(PromptContext &ctx, std::string_view str, bool special = false) = 0;
virtual bool isSpecialToken(Token id) const = 0;
virtual std::string tokenToString(Token id) const = 0;
virtual Token sampleToken(PromptContext &ctx) const = 0;
@@ -249,7 +249,8 @@ protected:
                       std::function<bool(int32_t, const std::string&)> responseCallback,
                       bool allowContextShift,
                       PromptContext &promptCtx,
-                      std::vector<Token> embd_inp);
+                      std::vector<Token> embd_inp,
+                      bool isResponse = false);
     void generateResponse(std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx);
diff --git a/gpt4all-backend/src/llamamodel.cpp b/gpt4all-backend/src/llamamodel.cpp
index e2bbd0ac..8c92b025 100644
--- a/gpt4all-backend/src/llamamodel.cpp
+++ b/gpt4all-backend/src/llamamodel.cpp
@@ -536,13 +536,13 @@ size_t LLamaModel::restoreState(const uint8_t *src)
    return llama_set_state_data(d_ptr->ctx, const_cast<uint8_t*>(src));
}
-std::vector<LLModel::Token> LLamaModel::tokenize(PromptContext &ctx, const std::string &str, bool special)
+std::vector<LLModel::Token> LLamaModel::tokenize(PromptContext &ctx, std::string_view str, bool special)
{
bool atStart = m_tokenize_last_token == -1;
bool insertSpace = atStart || isSpecialToken(m_tokenize_last_token);
    std::vector<LLModel::Token> fres(str.length() + 4);
int32_t fres_len = llama_tokenize_gpt4all(
- d_ptr->model, str.c_str(), str.length(), fres.data(), fres.size(), /*add_special*/ atStart,
+ d_ptr->model, str.data(), str.length(), fres.data(), fres.size(), /*add_special*/ atStart,
/*parse_special*/ special, /*insert_space*/ insertSpace
);
fres.resize(fres_len);
diff --git a/gpt4all-backend/src/llamamodel_impl.h b/gpt4all-backend/src/llamamodel_impl.h
index 7c698ffa..5189b9b3 100644
--- a/gpt4all-backend/src/llamamodel_impl.h
+++ b/gpt4all-backend/src/llamamodel_impl.h
@@ -8,6 +8,7 @@
 #include <memory>
 #include <string>
+#include <string_view>
 #include <vector>
struct LLamaPrivate;
@@ -52,7 +53,7 @@ private:
bool m_supportsCompletion = false;
protected:
-    std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special) override;
+    std::vector<Token> tokenize(PromptContext &ctx, std::string_view str, bool special) override;
bool isSpecialToken(Token id) const override;
std::string tokenToString(Token id) const override;
Token sampleToken(PromptContext &ctx) const override;
diff --git a/gpt4all-backend/src/llmodel_c.cpp b/gpt4all-backend/src/llmodel_c.cpp
index f3fd68ff..b0974223 100644
--- a/gpt4all-backend/src/llmodel_c.cpp
+++ b/gpt4all-backend/src/llmodel_c.cpp
@@ -12,6 +12,7 @@
 #include <cstring>
 #include <iostream>
 #include <memory>
+#include <optional>
 #include <string>
struct LLModelWrapper {
@@ -130,13 +131,10 @@ void llmodel_prompt(llmodel_model model, const char *prompt,
wrapper->promptContext.repeat_last_n = ctx->repeat_last_n;
wrapper->promptContext.contextErase = ctx->context_erase;
- std::string fake_reply_str;
- if (fake_reply) { fake_reply_str = fake_reply; }
- auto *fake_reply_p = fake_reply ? &fake_reply_str : nullptr;
-
// Call the C++ prompt method
wrapper->llModel->prompt(prompt, prompt_template, prompt_callback, response_func, allow_context_shift,
- wrapper->promptContext, special, fake_reply_p);
+ wrapper->promptContext, special,
+                             fake_reply ? std::make_optional<std::string_view>(fake_reply) : std::nullopt);
// Update the C context by giving access to the wrappers raw pointers to std::vector data
// which involves no copies
diff --git a/gpt4all-backend/src/llmodel_shared.cpp b/gpt4all-backend/src/llmodel_shared.cpp
index 570f62c6..b4d5accc 100644
--- a/gpt4all-backend/src/llmodel_shared.cpp
+++ b/gpt4all-backend/src/llmodel_shared.cpp
@@ -11,6 +11,7 @@
 #include <sstream>
 #include <stdexcept>
 #include <string>
+#include <string_view>
 #include <vector>
namespace ranges = std::ranges;
@@ -45,7 +46,7 @@ void LLModel::prompt(const std::string &prompt,
bool allowContextShift,
PromptContext &promptCtx,
bool special,
-                     std::string *fakeReply)
+                     std::optional<std::string_view> fakeReply)
{
if (!isModelLoaded()) {
std::cerr << implementation().modelType() << " ERROR: prompt won't work with an unloaded model!\n";
@@ -129,11 +130,11 @@ void LLModel::prompt(const std::string &prompt,
return; // error
// decode the assistant's reply, either generated or spoofed
- if (fakeReply == nullptr) {
+ if (!fakeReply) {
generateResponse(responseCallback, allowContextShift, promptCtx);
} else {
embd_inp = tokenize(promptCtx, *fakeReply, false);
- if (!decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp))
+ if (!decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp, true))
return; // error
}
@@ -157,7 +158,8 @@ bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
                           std::function<bool(int32_t, const std::string&)> responseCallback,
                           bool allowContextShift,
                           PromptContext &promptCtx,
-                          std::vector<Token> embd_inp) {
+                          std::vector<Token> embd_inp,
+                          bool isResponse) {
if ((int) embd_inp.size() > promptCtx.n_ctx - 4) {
responseCallback(-1, "ERROR: The prompt size exceeds the context window size and cannot be processed.");
std::cerr << implementation().modelType() << " ERROR: The prompt is " << embd_inp.size() <<
@@ -196,7 +198,9 @@ bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
for (size_t t = 0; t < tokens; ++t) {
promptCtx.tokens.push_back(batch.at(t));
promptCtx.n_past += 1;
- if (!promptCallback(batch.at(t)))
+ Token tok = batch.at(t);
+ bool res = isResponse ? responseCallback(tok, tokenToString(tok)) : promptCallback(tok);
+ if (!res)
return false;
}
i = batch_end;
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index c774a040..91b42f51 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -26,6 +26,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix a typo in Model Settings (by [@3Simplex](https://github.com/3Simplex) in [#2916](https://github.com/nomic-ai/gpt4all/pull/2916))
- Fix the antenna icon tooltip when using the local server ([#2922](https://github.com/nomic-ai/gpt4all/pull/2922))
- Fix a few issues with locating files and handling errors when loading remote models on startup ([#2875](https://github.com/nomic-ai/gpt4all/pull/2875))
+- Significantly improve API server request parsing and response correctness ([#2929](https://github.com/nomic-ai/gpt4all/pull/2929))
## [3.2.1] - 2024-08-13
diff --git a/gpt4all-chat/CMakeLists.txt b/gpt4all-chat/CMakeLists.txt
index fa70a793..bbc7d9b2 100644
--- a/gpt4all-chat/CMakeLists.txt
+++ b/gpt4all-chat/CMakeLists.txt
@@ -1,7 +1,7 @@
cmake_minimum_required(VERSION 3.16)
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
-set(CMAKE_CXX_STANDARD 20)
+set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if(APPLE)
@@ -64,6 +64,12 @@ message(STATUS "Qt 6 root directory: ${Qt6_ROOT_DIR}")
set (CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
+set(FMT_INSTALL OFF)
+set(BUILD_SHARED_LIBS_SAVED "${BUILD_SHARED_LIBS}")
+set(BUILD_SHARED_LIBS OFF)
+add_subdirectory(deps/fmt)
+set(BUILD_SHARED_LIBS "${BUILD_SHARED_LIBS_SAVED}")
+
add_subdirectory(../gpt4all-backend llmodel)
set(CHAT_EXE_RESOURCES)
@@ -240,7 +246,7 @@ else()
PRIVATE Qt6::Quick Qt6::Svg Qt6::HttpServer Qt6::Sql Qt6::Pdf)
endif()
target_link_libraries(chat
- PRIVATE llmodel SingleApplication)
+ PRIVATE llmodel SingleApplication fmt::fmt)
# -- install --
diff --git a/gpt4all-chat/deps/fmt b/gpt4all-chat/deps/fmt
new file mode 160000
index 00000000..0c9fce2f
--- /dev/null
+++ b/gpt4all-chat/deps/fmt
@@ -0,0 +1 @@
+Subproject commit 0c9fce2ffefecfdce794e1859584e25877b7b592
diff --git a/gpt4all-chat/src/chat.cpp b/gpt4all-chat/src/chat.cpp
index d9a66091..dd0bf1ec 100644
--- a/gpt4all-chat/src/chat.cpp
+++ b/gpt4all-chat/src/chat.cpp
@@ -239,16 +239,17 @@ void Chat::newPromptResponsePair(const QString &prompt)
resetResponseState();
m_chatModel->updateCurrentResponse(m_chatModel->count() - 1, false);
m_chatModel->appendPrompt("Prompt: ", prompt);
- m_chatModel->appendResponse("Response: ", prompt);
+ m_chatModel->appendResponse("Response: ", QString());
emit resetResponseRequested();
}
+// the server needs to block until response is reset, so it calls resetResponse on its own m_llmThread
void Chat::serverNewPromptResponsePair(const QString &prompt)
{
resetResponseState();
m_chatModel->updateCurrentResponse(m_chatModel->count() - 1, false);
m_chatModel->appendPrompt("Prompt: ", prompt);
- m_chatModel->appendResponse("Response: ", prompt);
+ m_chatModel->appendResponse("Response: ", QString());
}
bool Chat::restoringFromText() const
diff --git a/gpt4all-chat/src/chatapi.cpp b/gpt4all-chat/src/chatapi.cpp
index 06594a32..5634e8b2 100644
--- a/gpt4all-chat/src/chatapi.cpp
+++ b/gpt4all-chat/src/chatapi.cpp
@@ -93,7 +93,7 @@ void ChatAPI::prompt(const std::string &prompt,
bool allowContextShift,
PromptContext &promptCtx,
bool special,
-                     std::string *fakeReply) {
+                     std::optional<std::string_view> fakeReply) {
Q_UNUSED(promptCallback);
Q_UNUSED(allowContextShift);
@@ -121,7 +121,7 @@ void ChatAPI::prompt(const std::string &prompt,
if (fakeReply) {
promptCtx.n_past += 1;
m_context.append(formattedPrompt);
- m_context.append(QString::fromStdString(*fakeReply));
+ m_context.append(QString::fromUtf8(fakeReply->data(), fakeReply->size()));
return;
}
diff --git a/gpt4all-chat/src/chatapi.h b/gpt4all-chat/src/chatapi.h
index 724178de..a5e1ad58 100644
--- a/gpt4all-chat/src/chatapi.h
+++ b/gpt4all-chat/src/chatapi.h
@@ -12,9 +12,10 @@
 #include <cstddef>
 #include <cstdint>
-#include <functional>
+#include <optional>
 #include <stdexcept>
 #include <string>
+#include <string_view>
 #include <vector>
class QNetworkAccessManager;
@@ -72,7 +73,7 @@ public:
bool allowContextShift,
PromptContext &ctx,
bool special,
-                std::string *fakeReply) override;
+                std::optional<std::string_view> fakeReply) override;
void setThreadCount(int32_t n_threads) override;
int32_t threadCount() const override;
@@ -97,7 +98,7 @@ protected:
// them as they are only called from the default implementation of 'prompt' which we override and
// completely replace
-    std::vector<Token> tokenize(PromptContext &ctx, const std::string &str, bool special) override
+    std::vector<Token> tokenize(PromptContext &ctx, std::string_view str, bool special) override
{
(void)ctx;
(void)str;
diff --git a/gpt4all-chat/src/chatllm.cpp b/gpt4all-chat/src/chatllm.cpp
index fd9316f5..a81d49bd 100644
--- a/gpt4all-chat/src/chatllm.cpp
+++ b/gpt4all-chat/src/chatllm.cpp
@@ -626,16 +626,16 @@ void ChatLLM::regenerateResponse()
m_ctx.tokens.erase(m_ctx.tokens.end() - m_promptResponseTokens, m_ctx.tokens.end());
m_promptResponseTokens = 0;
m_promptTokens = 0;
- m_response = std::string();
- emit responseChanged(QString::fromStdString(m_response));
+ m_response = m_trimmedResponse = std::string();
+ emit responseChanged(QString::fromStdString(m_trimmedResponse));
}
void ChatLLM::resetResponse()
{
m_promptTokens = 0;
m_promptResponseTokens = 0;
- m_response = std::string();
- emit responseChanged(QString::fromStdString(m_response));
+ m_response = m_trimmedResponse = std::string();
+ emit responseChanged(QString::fromStdString(m_trimmedResponse));
}
void ChatLLM::resetContext()
@@ -645,9 +645,12 @@ void ChatLLM::resetContext()
m_ctx = LLModel::PromptContext();
}
-QString ChatLLM::response() const
+QString ChatLLM::response(bool trim) const
{
- return QString::fromStdString(remove_leading_whitespace(m_response));
+ std::string resp = m_response;
+ if (trim)
+ resp = remove_leading_whitespace(resp);
+ return QString::fromStdString(resp);
}
ModelInfo ChatLLM::modelInfo() const
@@ -705,7 +708,8 @@ bool ChatLLM::handleResponse(int32_t token, const std::string &response)
// check for error
if (token < 0) {
m_response.append(response);
- emit responseChanged(QString::fromStdString(remove_leading_whitespace(m_response)));
+ m_trimmedResponse = remove_leading_whitespace(m_response);
+ emit responseChanged(QString::fromStdString(m_trimmedResponse));
return false;
}
@@ -715,7 +719,8 @@ bool ChatLLM::handleResponse(int32_t token, const std::string &response)
m_timer->inc();
Q_ASSERT(!response.empty());
m_response.append(response);
- emit responseChanged(QString::fromStdString(remove_leading_whitespace(m_response)));
+ m_trimmedResponse = remove_leading_whitespace(m_response);
+ emit responseChanged(QString::fromStdString(m_trimmedResponse));
return !m_stopGenerating;
}
@@ -741,7 +746,7 @@ bool ChatLLM::prompt(const QList<QString> &collectionList, const QString &prompt
 bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString &prompt, const QString &promptTemplate,
     int32_t n_predict, int32_t top_k, float top_p, float min_p, float temp, int32_t n_batch, float repeat_penalty,
-    int32_t repeat_penalty_tokens)
+    int32_t repeat_penalty_tokens, std::optional<QString> fakeReply)
{
if (!isModelLoaded())
return false;
@@ -751,7 +756,7 @@ bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString
     QList<ResultInfo> databaseResults;
const int retrievalSize = MySettings::globalInstance()->localDocsRetrievalSize();
- if (!collectionList.isEmpty()) {
+ if (!fakeReply && !collectionList.isEmpty()) {
emit requestRetrieveFromDB(collectionList, prompt, retrievalSize, &databaseResults); // blocks
emit databaseResultsChanged(databaseResults);
}
@@ -797,7 +802,8 @@ bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString
m_ctx.n_predict = old_n_predict; // now we are ready for a response
}
m_llModelInfo.model->prompt(prompt.toStdString(), promptTemplate.toStdString(), promptFunc, responseFunc,
- /*allowContextShift*/ true, m_ctx);
+ /*allowContextShift*/ true, m_ctx, false,
+ fakeReply.transform(std::mem_fn(&QString::toStdString)));
#if defined(DEBUG)
printf("\n");
fflush(stdout);
@@ -805,9 +811,9 @@ bool ChatLLM::promptInternal(const QList<QString> &collectionList, const QString
m_timer->stop();
qint64 elapsed = totalTime.elapsed();
std::string trimmed = trim_whitespace(m_response);
- if (trimmed != m_response) {
- m_response = trimmed;
- emit responseChanged(QString::fromStdString(m_response));
+ if (trimmed != m_trimmedResponse) {
+ m_trimmedResponse = trimmed;
+ emit responseChanged(QString::fromStdString(m_trimmedResponse));
}
SuggestionMode mode = MySettings::globalInstance()->suggestionMode();
@@ -1078,6 +1084,7 @@ bool ChatLLM::deserialize(QDataStream &stream, int version, bool deserializeKV,
QString response;
stream >> response;
m_response = response.toStdString();
+ m_trimmedResponse = trim_whitespace(m_response);
QString nameResponse;
stream >> nameResponse;
m_nameResponse = nameResponse.toStdString();
@@ -1306,10 +1313,9 @@ void ChatLLM::processRestoreStateFromText()
auto &response = *it++;
Q_ASSERT(response.first != "Prompt: ");
- auto responseText = response.second.toStdString();
m_llModelInfo.model->prompt(prompt.second.toStdString(), promptTemplate.toStdString(), promptFunc, nullptr,
- /*allowContextShift*/ true, m_ctx, false, &responseText);
+ /*allowContextShift*/ true, m_ctx, false, response.second.toUtf8().constData());
}
if (!m_stopGenerating) {
diff --git a/gpt4all-chat/src/chatllm.h b/gpt4all-chat/src/chatllm.h
index eb8d044f..62d83753 100644
--- a/gpt4all-chat/src/chatllm.h
+++ b/gpt4all-chat/src/chatllm.h
@@ -116,7 +116,7 @@ public:
void setForceUnloadModel(bool b) { m_forceUnloadModel = b; }
void setMarkedForDeletion(bool b) { m_markedForDeletion = b; }
- QString response() const;
+ QString response(bool trim = true) const;
ModelInfo modelInfo() const;
void setModelInfo(const ModelInfo &info);
@@ -198,7 +198,7 @@ Q_SIGNALS:
protected:
     bool promptInternal(const QList<QString> &collectionList, const QString &prompt, const QString &promptTemplate,
         int32_t n_predict, int32_t top_k, float top_p, float min_p, float temp, int32_t n_batch, float repeat_penalty,
-        int32_t repeat_penalty_tokens);
+        int32_t repeat_penalty_tokens, std::optional<QString> fakeReply = {});
bool handlePrompt(int32_t token);
bool handleResponse(int32_t token, const std::string &response);
bool handleNamePrompt(int32_t token);
@@ -221,6 +221,7 @@ private:
bool loadNewModel(const ModelInfo &modelInfo, QVariantMap &modelLoadProps);
std::string m_response;
+ std::string m_trimmedResponse;
std::string m_nameResponse;
QString m_questionResponse;
LLModelInfo m_llModelInfo;
diff --git a/gpt4all-chat/src/localdocsmodel.h b/gpt4all-chat/src/localdocsmodel.h
index 82b5f882..ddce8963 100644
--- a/gpt4all-chat/src/localdocsmodel.h
+++ b/gpt4all-chat/src/localdocsmodel.h
@@ -20,24 +20,25 @@ class LocalDocsCollectionsModel : public QSortFilterProxyModel
Q_OBJECT
Q_PROPERTY(int count READ count NOTIFY countChanged)
Q_PROPERTY(int updatingCount READ updatingCount NOTIFY updatingCountChanged)
+
public:
explicit LocalDocsCollectionsModel(QObject *parent);
+ int count() const { return rowCount(); }
+ int updatingCount() const;
public Q_SLOTS:
- int count() const { return rowCount(); }
     void setCollections(const QList<QString> &collections);
- int updatingCount() const;
Q_SIGNALS:
void countChanged();
void updatingCountChanged();
-private Q_SLOT:
- void maybeTriggerUpdatingCountChanged();
-
protected:
bool filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const override;
+private Q_SLOTS:
+ void maybeTriggerUpdatingCountChanged();
+
private:
     QList<QString> m_collections;
int m_updatingCount = 0;
diff --git a/gpt4all-chat/src/modellist.h b/gpt4all-chat/src/modellist.h
index 21d9aeef..6123dde8 100644
--- a/gpt4all-chat/src/modellist.h
+++ b/gpt4all-chat/src/modellist.h
@@ -18,10 +18,12 @@
 #include <QString>
 #include <QStringList>
 #include <QVector>
-#include <utility>
+
+#include <utility>
using namespace Qt::Literals::StringLiterals;
+
struct ModelInfo {
Q_GADGET
Q_PROPERTY(QString id READ id WRITE setId)
@@ -523,7 +525,7 @@ private:
protected:
explicit ModelList();
- ~ModelList() { for (auto *model: m_models) { delete model; } }
+ ~ModelList() override { for (auto *model: std::as_const(m_models)) { delete model; } }
friend class MyModelList;
};
diff --git a/gpt4all-chat/src/mysettings.h b/gpt4all-chat/src/mysettings.h
index 3db8b234..85335f0b 100644
--- a/gpt4all-chat/src/mysettings.h
+++ b/gpt4all-chat/src/mysettings.h
@@ -8,6 +8,7 @@
#include
#include
#include
+#include
#include
#include
diff --git a/gpt4all-chat/src/server.cpp b/gpt4all-chat/src/server.cpp
index 1da962f5..9d5c9583 100644
--- a/gpt4all-chat/src/server.cpp
+++ b/gpt4all-chat/src/server.cpp
@@ -4,7 +4,13 @@
#include "modellist.h"
#include "mysettings.h"
+#include
+#include
+
#include
+#include
+#include
+#include
#include
#include
#include
@@ -14,19 +20,67 @@
#include
#include
#include
+#include
#include
+#include
#include
+#include
#include
+#include
+#include
#include
+#include
+#include
#include
#include
+#include
#include
+namespace ranges = std::ranges;
+using namespace std::string_literals;
using namespace Qt::Literals::StringLiterals;
//#define DEBUG
+
+#define MAKE_FORMATTER(type, conversion)                                      \
+    template <>                                                               \
+    struct fmt::formatter<type>: fmt::formatter<std::string> {                \
+        template <typename FmtContext>                                        \
+        FmtContext::iterator format(const type &value, FmtContext &ctx) const \
+        {                                                                     \
+            return formatter<std::string>::format(conversion, ctx);           \
+        }                                                                     \
+    }
+
+MAKE_FORMATTER(QString, value.toStdString() );
+MAKE_FORMATTER(QVariant, value.toString().toStdString());
+
+namespace {
+
+class InvalidRequestError: public std::invalid_argument {
+ using std::invalid_argument::invalid_argument;
+
+public:
+ QHttpServerResponse asResponse() const
+ {
+ QJsonObject error {
+ { "message", what(), },
+ { "type", u"invalid_request_error"_s, },
+ { "param", QJsonValue::Null },
+ { "code", QJsonValue::Null },
+ };
+ return { QJsonObject {{ "error", error }},
+ QHttpServerResponder::StatusCode::BadRequest };
+ }
+
+private:
+ Q_DISABLE_COPY_MOVE(InvalidRequestError)
+};
+
+} // namespace
+
static inline QJsonObject modelToJson(const ModelInfo &info)
{
QJsonObject model;
@@ -39,7 +93,7 @@ static inline QJsonObject modelToJson(const ModelInfo &info)
QJsonArray permissions;
QJsonObject permissionObj;
- permissionObj.insert("id", "foobarbaz");
+ permissionObj.insert("id", "placeholder");
permissionObj.insert("object", "model_permission");
permissionObj.insert("created", 0);
permissionObj.insert("allow_create_engine", false);
@@ -70,6 +124,328 @@ static inline QJsonObject resultToJson(const ResultInfo &info)
return result;
}
+class BaseCompletionRequest {
+public:
+ QString model; // required
+ // NB: some parameters are not supported yet
+ int32_t max_tokens = 16;
+ qint64 n = 1;
+ float temperature = 1.f;
+ float top_p = 1.f;
+ float min_p = 0.f;
+
+ BaseCompletionRequest() = default;
+ virtual ~BaseCompletionRequest() = default;
+
+ virtual BaseCompletionRequest &parse(QCborMap request)
+ {
+ parseImpl(request);
+ if (!request.isEmpty())
+ throw InvalidRequestError(fmt::format(
+ "Unrecognized request argument supplied: {}", request.keys().constFirst().toString()
+ ));
+ return *this;
+ }
+
+protected:
+ virtual void parseImpl(QCborMap &request)
+ {
+ using enum Type;
+
+ auto reqValue = [&request](auto &&...args) { return takeValue(request, args...); };
+ QCborValue value;
+
+ this->model = reqValue("model", String, /*required*/ true).toString();
+
+ value = reqValue("frequency_penalty", Number, false, /*min*/ -2, /*max*/ 2);
+ if (value.isDouble() || value.toInteger() != 0)
+ throw InvalidRequestError("'frequency_penalty' is not supported");
+
+ value = reqValue("max_tokens", Integer, false, /*min*/ 1);
+ if (!value.isNull())
+ this->max_tokens = int32_t(qMin(value.toInteger(), INT32_MAX));
+
+ value = reqValue("n", Integer, false, /*min*/ 1);
+ if (!value.isNull())
+ this->n = value.toInteger();
+
+ value = reqValue("presence_penalty", Number);
+ if (value.isDouble() || value.toInteger() != 0)
+ throw InvalidRequestError("'presence_penalty' is not supported");
+
+ value = reqValue("seed", Integer);
+ if (!value.isNull())
+ throw InvalidRequestError("'seed' is not supported");
+
+ value = reqValue("stop");
+ if (!value.isNull())
+ throw InvalidRequestError("'stop' is not supported");
+
+ value = reqValue("stream", Boolean);
+ if (value.isTrue())
+ throw InvalidRequestError("'stream' is not supported");
+
+ value = reqValue("stream_options", Object);
+ if (!value.isNull())
+ throw InvalidRequestError("'stream_options' is not supported");
+
+ value = reqValue("temperature", Number, false, /*min*/ 0, /*max*/ 2);
+ if (!value.isNull())
+ this->temperature = float(value.toDouble());
+
+ value = reqValue("top_p", Number, /*min*/ 0, /*max*/ 1);
+ if (!value.isNull())
+ this->top_p = float(value.toDouble());
+
+ value = reqValue("min_p", Number, /*min*/ 0, /*max*/ 1);
+ if (!value.isNull())
+ this->min_p = float(value.toDouble());
+
+ reqValue("user", String); // validate but don't use
+ }
+
+ enum class Type : uint8_t {
+ Boolean,
+ Integer,
+ Number,
+ String,
+ Array,
+ Object,
+ };
+
+    static const std::unordered_map<Type, const char *> s_typeNames;
+
+ static bool typeMatches(const QCborValue &value, Type type) noexcept {
+ using enum Type;
+ switch (type) {
+ case Boolean: return value.isBool();
+ case Integer: return value.isInteger();
+ case Number: return value.isInteger() || value.isDouble();
+ case String: return value.isString();
+ case Array: return value.isArray();
+ case Object: return value.isMap();
+ }
+ Q_UNREACHABLE();
+ }
+
+ static QCborValue takeValue(
+        QCborMap &obj, const char *key, std::optional<Type> type = {}, bool required = false,
+        std::optional<qint64> min = {}, std::optional<qint64> max = {}
+ ) {
+ auto value = obj.take(QLatin1StringView(key));
+ if (value.isUndefined())
+ value = QCborValue(QCborSimpleType::Null);
+ if (required && value.isNull())
+ throw InvalidRequestError(fmt::format("you must provide a {} parameter", key));
+ if (type && !value.isNull() && !typeMatches(value, *type))
+ throw InvalidRequestError(fmt::format("'{}' is not of type '{}' - '{}'",
+ value.toVariant(), s_typeNames.at(*type), key));
+ if (!value.isNull()) {
+ double num = value.toDouble();
+ if (min && num < double(*min))
+ throw InvalidRequestError(fmt::format("{} is less than the minimum of {} - '{}'", num, *min, key));
+ if (max && num > double(*max))
+ throw InvalidRequestError(fmt::format("{} is greater than the maximum of {} - '{}'", num, *max, key));
+ }
+ return value;
+ }
+
+private:
+ Q_DISABLE_COPY_MOVE(BaseCompletionRequest)
+};
+
+class CompletionRequest : public BaseCompletionRequest {
+public:
+ QString prompt; // required
+ // some parameters are not supported yet - these ones are
+ bool echo = false;
+
+ CompletionRequest &parse(QCborMap request) override
+ {
+ BaseCompletionRequest::parse(std::move(request));
+ return *this;
+ }
+
+protected:
+ void parseImpl(QCborMap &request) override
+ {
+ using enum Type;
+
+ auto reqValue = [&request](auto &&...args) { return takeValue(request, args...); };
+ QCborValue value;
+
+ BaseCompletionRequest::parseImpl(request);
+
+ this->prompt = reqValue("prompt", String, /*required*/ true).toString();
+
+ value = reqValue("best_of", Integer);
+ {
+ qint64 bof = value.toInteger(1);
+ if (this->n > bof)
+ throw InvalidRequestError(fmt::format(
+ "You requested that the server return more choices than it will generate (HINT: you must set 'n' "
+ "(currently {}) to be at most 'best_of' (currently {}), or omit either parameter if you don't "
+ "specifically want to use them.)",
+ this->n, bof
+ ));
+ if (bof > this->n)
+ throw InvalidRequestError("'best_of' is not supported");
+ }
+
+ value = reqValue("echo", Boolean);
+ if (value.isBool())
+ this->echo = value.toBool();
+
+ // we don't bother deeply typechecking unsupported subobjects for now
+ value = reqValue("logit_bias", Object);
+ if (!value.isNull())
+ throw InvalidRequestError("'logit_bias' is not supported");
+
+ value = reqValue("logprobs", Integer, false, /*min*/ 0);
+ if (!value.isNull())
+ throw InvalidRequestError("'logprobs' is not supported");
+
+ value = reqValue("suffix", String);
+ if (!value.isNull() && !value.toString().isEmpty())
+ throw InvalidRequestError("'suffix' is not supported");
+ }
+};
+
+const std::unordered_map<BaseCompletionRequest::Type, const char *> BaseCompletionRequest::s_typeNames = {
+ { BaseCompletionRequest::Type::Boolean, "boolean" },
+ { BaseCompletionRequest::Type::Integer, "integer" },
+ { BaseCompletionRequest::Type::Number, "number" },
+ { BaseCompletionRequest::Type::String, "string" },
+ { BaseCompletionRequest::Type::Array, "array" },
+ { BaseCompletionRequest::Type::Object, "object" },
+};
+
+class ChatRequest : public BaseCompletionRequest {
+public:
+ struct Message {
+ enum class Role : uint8_t {
+ User,
+ Assistant,
+ };
+ Role role;
+ QString content;
+ };
+
+    QList<Message> messages; // required
+
+ ChatRequest &parse(QCborMap request) override
+ {
+ BaseCompletionRequest::parse(std::move(request));
+ return *this;
+ }
+
+protected:
+ void parseImpl(QCborMap &request) override
+ {
+ using enum Type;
+
+ auto reqValue = [&request](auto &&...args) { return takeValue(request, args...); };
+ QCborValue value;
+
+ BaseCompletionRequest::parseImpl(request);
+
+ value = reqValue("messages", std::nullopt, /*required*/ true);
+ if (!value.isArray() || value.toArray().isEmpty())
+ throw InvalidRequestError(fmt::format(
+ "Invalid type for 'messages': expected a non-empty array of objects, but got '{}' instead.",
+ value.toVariant()
+ ));
+
+ this->messages.clear();
+ {
+ QCborArray arr = value.toArray();
+ Message::Role nextRole = Message::Role::User;
+ for (qsizetype i = 0; i < arr.size(); i++) {
+ const auto &elem = arr[i];
+ if (!elem.isMap())
+ throw InvalidRequestError(fmt::format(
+ "Invalid type for 'messages[{}]': expected an object, but got '{}' instead.",
+ i, elem.toVariant()
+ ));
+ QCborMap msg = elem.toMap();
+ Message res;
+ QString role = takeValue(msg, "role", String, /*required*/ true).toString();
+ if (role == u"system"_s)
+ continue; // FIXME(jared): don't ignore these
+ if (role == u"user"_s) {
+ res.role = Message::Role::User;
+ } else if (role == u"assistant"_s) {
+ res.role = Message::Role::Assistant;
+ } else {
+ throw InvalidRequestError(fmt::format(
+ "Invalid 'messages[{}].role': expected one of 'system', 'assistant', or 'user', but got '{}'"
+ " instead.",
+ i, role.toStdString()
+ ));
+ }
+ res.content = takeValue(msg, "content", String, /*required*/ true).toString();
+ if (res.role != nextRole)
+ throw InvalidRequestError(fmt::format(
+ "Invalid 'messages[{}].role': did not expect '{}' here", i, role
+ ));
+ this->messages.append(res);
+ nextRole = res.role == Message::Role::User ? Message::Role::Assistant
+ : Message::Role::User;
+
+ if (!msg.isEmpty())
+ throw InvalidRequestError(fmt::format(
+ "Invalid 'messages[{}]': unrecognized key: '{}'", i, msg.keys().constFirst().toString()
+ ));
+ }
+ }
+
+ // we don't bother deeply typechecking unsupported subobjects for now
+ value = reqValue("logit_bias", Object);
+ if (!value.isNull())
+ throw InvalidRequestError("'logit_bias' is not supported");
+
+ value = reqValue("logprobs", Boolean);
+ if (value.isTrue())
+ throw InvalidRequestError("'logprobs' is not supported");
+
+ value = reqValue("top_logprobs", Integer, false, /*min*/ 0);
+ if (!value.isNull())
+ throw InvalidRequestError("The 'top_logprobs' parameter is only allowed when 'logprobs' is enabled.");
+
+ value = reqValue("response_format", Object);
+ if (!value.isNull())
+ throw InvalidRequestError("'response_format' is not supported");
+
+ reqValue("service_tier", String); // validate but don't use
+
+ value = reqValue("tools", Array);
+ if (!value.isNull())
+ throw InvalidRequestError("'tools' is not supported");
+
+ value = reqValue("tool_choice");
+ if (!value.isNull())
+ throw InvalidRequestError("'tool_choice' is not supported");
+
+ // validate but don't use
+ reqValue("parallel_tool_calls", Boolean);
+
+ value = reqValue("function_call");
+ if (!value.isNull())
+ throw InvalidRequestError("'function_call' is not supported");
+
+ value = reqValue("functions", Array);
+ if (!value.isNull())
+ throw InvalidRequestError("'functions' is not supported");
+ }
+};
+
+template <typename T>
+T &parseRequest(T &request, QJsonObject &&obj)
+{
+ // lossless conversion to CBOR exposes more type information
+ return request.parse(QCborMap::fromJsonObject(obj));
+}
+
Server::Server(Chat *chat)
: ChatLLM(chat, true /*isServer*/)
, m_chat(chat)
@@ -80,20 +456,28 @@ Server::Server(Chat *chat)
connect(chat, &Chat::collectionListChanged, this, &Server::handleCollectionListChanged, Qt::QueuedConnection);
}
-Server::~Server()
+static QJsonObject requestFromJson(const QByteArray &request)
{
+ QJsonParseError err;
+ const QJsonDocument document = QJsonDocument::fromJson(request, &err);
+ if (err.error || !document.isObject())
+ throw InvalidRequestError(fmt::format(
+ "error parsing request JSON: {}",
+ err.error ? err.errorString().toStdString() : "not an object"s
+ ));
+ return document.object();
}
void Server::start()
{
- m_server = new QHttpServer(this);
+    m_server = std::make_unique<QHttpServer>(this);
if (!m_server->listen(QHostAddress::LocalHost, MySettings::globalInstance()->networkPort())) {
qWarning() << "ERROR: Unable to start the server";
return;
}
m_server->route("/v1/models", QHttpServerRequest::Method::Get,
- [](const QHttpServerRequest &request) {
+ [](const QHttpServerRequest &) {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
@@ -113,7 +497,7 @@ void Server::start()
);
m_server->route("/v1/models/", QHttpServerRequest::Method::Get,
- [](const QString &model, const QHttpServerRequest &request) {
+ [](const QString &model, const QHttpServerRequest &) {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
@@ -137,7 +521,23 @@ void Server::start()
[this](const QHttpServerRequest &request) {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
- return handleCompletionRequest(request, false);
+
+ try {
+ auto reqObj = requestFromJson(request.body());
+#if defined(DEBUG)
+ qDebug().noquote() << "/v1/completions request" << QJsonDocument(reqObj).toJson(QJsonDocument::Indented);
+#endif
+ CompletionRequest req;
+ parseRequest(req, std::move(reqObj));
+ auto [resp, respObj] = handleCompletionRequest(req);
+#if defined(DEBUG)
+ if (respObj)
+ qDebug().noquote() << "/v1/completions reply" << QJsonDocument(*respObj).toJson(QJsonDocument::Indented);
+#endif
+ return std::move(resp);
+ } catch (const InvalidRequestError &e) {
+ return e.asResponse();
+ }
}
);
@@ -145,13 +545,30 @@ void Server::start()
[this](const QHttpServerRequest &request) {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
- return handleCompletionRequest(request, true);
+
+ try {
+ auto reqObj = requestFromJson(request.body());
+#if defined(DEBUG)
+ qDebug().noquote() << "/v1/chat/completions request" << QJsonDocument(reqObj).toJson(QJsonDocument::Indented);
+#endif
+ ChatRequest req;
+ parseRequest(req, std::move(reqObj));
+ auto [resp, respObj] = handleChatRequest(req);
+ (void)respObj;
+#if defined(DEBUG)
+ if (respObj)
+ qDebug().noquote() << "/v1/chat/completions reply" << QJsonDocument(*respObj).toJson(QJsonDocument::Indented);
+#endif
+ return std::move(resp);
+ } catch (const InvalidRequestError &e) {
+ return e.asResponse();
+ }
}
);
// Respond with code 405 to wrong HTTP methods:
m_server->route("/v1/models", QHttpServerRequest::Method::Post,
- [](const QHttpServerRequest &request) {
+ [] {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
return QHttpServerResponse(
@@ -163,7 +580,8 @@ void Server::start()
);
m_server->route("/v1/models/", QHttpServerRequest::Method::Post,
- [](const QString &model, const QHttpServerRequest &request) {
+ [](const QString &model) {
+ (void)model;
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
return QHttpServerResponse(
@@ -175,7 +593,7 @@ void Server::start()
);
m_server->route("/v1/completions", QHttpServerRequest::Method::Get,
- [](const QHttpServerRequest &request) {
+ [] {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
return QHttpServerResponse(
@@ -186,7 +604,7 @@ void Server::start()
);
m_server->route("/v1/chat/completions", QHttpServerRequest::Method::Get,
- [](const QHttpServerRequest &request) {
+ [] {
if (!MySettings::globalInstance()->serverChat())
return QHttpServerResponse(QHttpServerResponder::StatusCode::Unauthorized);
return QHttpServerResponse(
@@ -205,268 +623,261 @@ void Server::start()
&Chat::serverNewPromptResponsePair, Qt::BlockingQueuedConnection);
}
-QHttpServerResponse Server::handleCompletionRequest(const QHttpServerRequest &request, bool isChat)
+static auto makeError(auto &&...args) -> std::pair<QHttpServerResponse, std::optional<QJsonObject>>
{
- // We've been asked to do a completion...
- QJsonParseError err;
- const QJsonDocument document = QJsonDocument::fromJson(request.body(), &err);
- if (err.error || !document.isObject()) {
- std::cerr << "ERROR: invalid json in completions body" << std::endl;
- return QHttpServerResponse(QHttpServerResponder::StatusCode::NoContent);
- }
-#if defined(DEBUG)
- printf("/v1/completions %s\n", qPrintable(document.toJson(QJsonDocument::Indented)));
- fflush(stdout);
-#endif
- const QJsonObject body = document.object();
- if (!body.contains("model")) { // required
- std::cerr << "ERROR: completions contains no model" << std::endl;
- return QHttpServerResponse(QHttpServerResponder::StatusCode::NoContent);
- }
- QJsonArray messages;
- if (isChat) {
- if (!body.contains("messages")) {
- std::cerr << "ERROR: chat completions contains no messages" << std::endl;
- return QHttpServerResponse(QHttpServerResponder::StatusCode::NoContent);
- }
- messages = body["messages"].toArray();
- }
+ return {QHttpServerResponse(args...), std::nullopt};
+}
- const QString modelRequested = body["model"].toString();
+auto Server::handleCompletionRequest(const CompletionRequest &request)
+    -> std::pair<QHttpServerResponse, std::optional<QJsonObject>>
+{
ModelInfo modelInfo = ModelList::globalInstance()->defaultModelInfo();
     const QList<ModelInfo> modelList = ModelList::globalInstance()->selectableModelList();
for (const ModelInfo &info : modelList) {
Q_ASSERT(info.installed);
if (!info.installed)
continue;
- if (modelRequested == info.name() || modelRequested == info.filename()) {
+ if (request.model == info.name() || request.model == info.filename()) {
modelInfo = info;
break;
}
}
- // We only support one prompt for now
-    QList<QString> prompts;
- if (body.contains("prompt")) {
- QJsonValue promptValue = body["prompt"];
- if (promptValue.isString())
- prompts.append(promptValue.toString());
- else {
- QJsonArray array = promptValue.toArray();
- for (const QJsonValue &v : array)
- prompts.append(v.toString());
- }
- } else
- prompts.append(" ");
-
- int max_tokens = 16;
- if (body.contains("max_tokens"))
- max_tokens = body["max_tokens"].toInt();
-
- float temperature = 1.f;
- if (body.contains("temperature"))
- temperature = body["temperature"].toDouble();
-
- float top_p = 1.f;
- if (body.contains("top_p"))
- top_p = body["top_p"].toDouble();
-
- float min_p = 0.f;
- if (body.contains("min_p"))
- min_p = body["min_p"].toDouble();
-
- int n = 1;
- if (body.contains("n"))
- n = body["n"].toInt();
-
- int logprobs = -1; // supposed to be null by default??
- if (body.contains("logprobs"))
- logprobs = body["logprobs"].toInt();
-
- bool echo = false;
- if (body.contains("echo"))
- echo = body["echo"].toBool();
-
- // We currently don't support any of the following...
-#if 0
- // FIXME: Need configurable reverse prompts
-    QList<QString> stop;
- if (body.contains("stop")) {
- QJsonValue stopValue = body["stop"];
- if (stopValue.isString())
- stop.append(stopValue.toString());
- else {
- QJsonArray array = stopValue.toArray();
- for (QJsonValue v : array)
- stop.append(v.toString());
- }
- }
-
- // FIXME: QHttpServer doesn't support server-sent events
- bool stream = false;
- if (body.contains("stream"))
- stream = body["stream"].toBool();
-
- // FIXME: What does this do?
- QString suffix;
- if (body.contains("suffix"))
- suffix = body["suffix"].toString();
-
- // FIXME: We don't support
- float presence_penalty = 0.f;
- if (body.contains("presence_penalty"))
- top_p = body["presence_penalty"].toDouble();
-
- // FIXME: We don't support
- float frequency_penalty = 0.f;
- if (body.contains("frequency_penalty"))
- top_p = body["frequency_penalty"].toDouble();
-
- // FIXME: We don't support
- int best_of = 1;
- if (body.contains("best_of"))
- logprobs = body["best_of"].toInt();
-
- // FIXME: We don't need
- QString user;
- if (body.contains("user"))
- suffix = body["user"].toString();
-#endif
-
- QString actualPrompt = prompts.first();
-
- // if we're a chat completion we have messages which means we need to prepend these to the prompt
- if (!messages.isEmpty()) {
-        QList<QString> chats;
- for (int i = 0; i < messages.count(); ++i) {
- QJsonValue v = messages.at(i);
- // FIXME: Deal with system messages correctly
- QString role = v.toObject()["role"].toString();
- if (role != "user")
- continue;
- QString content = v.toObject()["content"].toString();
- if (!content.endsWith("\n") && i < messages.count() - 1)
- content += "\n";
- chats.append(content);
- }
- actualPrompt.prepend(chats.join("\n"));
- }
-
// adds prompt/response items to GUI
- emit requestServerNewPromptResponsePair(actualPrompt); // blocks
+ emit requestServerNewPromptResponsePair(request.prompt); // blocks
+ resetResponse();
// load the new model if necessary
setShouldBeLoaded(true);
if (modelInfo.filename().isEmpty()) {
- std::cerr << "ERROR: couldn't load default model " << modelRequested.toStdString() << std::endl;
- return QHttpServerResponse(QHttpServerResponder::StatusCode::BadRequest);
+ std::cerr << "ERROR: couldn't load default model " << request.model.toStdString() << std::endl;
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
}
// NB: this resets the context, regardless of whether this model is already loaded
if (!loadModel(modelInfo)) {
std::cerr << "ERROR: couldn't load model " << modelInfo.name().toStdString() << std::endl;
- return QHttpServerResponse(QHttpServerResponder::StatusCode::InternalServerError);
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
}
- const QString promptTemplate = modelInfo.promptTemplate();
- const float top_k = modelInfo.topK();
- const int n_batch = modelInfo.promptBatchSize();
- const float repeat_penalty = modelInfo.repeatPenalty();
- const int repeat_last_n = modelInfo.repeatPenaltyTokens();
+ // FIXME(jared): taking parameters from the UI inhibits reproducibility of results
+ const int top_k = modelInfo.topK();
+ const int n_batch = modelInfo.promptBatchSize();
+ const auto repeat_penalty = float(modelInfo.repeatPenalty());
+ const int repeat_last_n = modelInfo.repeatPenaltyTokens();
int promptTokens = 0;
int responseTokens = 0;
    QList<QPair<QString, QList<ResultInfo>>> responses;
- for (int i = 0; i < n; ++i) {
+ for (int i = 0; i < request.n; ++i) {
if (!promptInternal(
m_collections,
- actualPrompt,
- promptTemplate,
- max_tokens /*n_predict*/,
+ request.prompt,
+ /*promptTemplate*/ u"%1"_s,
+ request.max_tokens,
top_k,
- top_p,
- min_p,
- temperature,
+ request.top_p,
+ request.min_p,
+ request.temperature,
n_batch,
repeat_penalty,
repeat_last_n)) {
std::cerr << "ERROR: couldn't prompt model " << modelInfo.name().toStdString() << std::endl;
- return QHttpServerResponse(QHttpServerResponder::StatusCode::InternalServerError);
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
}
- QString echoedPrompt = actualPrompt;
- if (!echoedPrompt.endsWith("\n"))
- echoedPrompt += "\n";
- responses.append(qMakePair((echo ? u"%1\n"_s.arg(actualPrompt) : QString()) + response(), m_databaseResults));
+ QString resp = response(/*trim*/ false);
+ if (request.echo)
+ resp = request.prompt + resp;
+ responses.append({resp, m_databaseResults});
if (!promptTokens)
- promptTokens += m_promptTokens;
+ promptTokens = m_promptTokens;
responseTokens += m_promptResponseTokens - m_promptTokens;
- if (i != n - 1)
+ if (i < request.n - 1)
resetResponse();
}
- QJsonObject responseObject;
- responseObject.insert("id", "foobarbaz");
- responseObject.insert("object", "text_completion");
- responseObject.insert("created", QDateTime::currentSecsSinceEpoch());
- responseObject.insert("model", modelInfo.name());
+ QJsonObject responseObject {
+ { "id", "placeholder" },
+ { "object", "text_completion" },
+ { "created", QDateTime::currentSecsSinceEpoch() },
+ { "model", modelInfo.name() },
+ };
QJsonArray choices;
-
- if (isChat) {
+ {
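+ // assemble one "text_completion" choice per generated response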
int index = 0;
for (const auto &r : responses) {
QString result = r.first;
QList<ResultInfo> infos = r.second;
- QJsonObject choice;
- choice.insert("index", index++);
- choice.insert("finish_reason", responseTokens == max_tokens ? "length" : "stop");
- QJsonObject message;
- message.insert("role", "assistant");
- message.insert("content", result);
- choice.insert("message", message);
+ QJsonObject choice {
+ { "text", result },
+ { "index", index++ },
+ { "logprobs", QJsonValue::Null },
+ { "finish_reason", responseTokens == request.max_tokens ? "length" : "stop" },
+ };
if (MySettings::globalInstance()->localDocsShowReferences()) {
QJsonArray references;
for (const auto &ref : infos)
references.append(resultToJson(ref));
- choice.insert("references", references);
- }
- choices.append(choice);
- }
- } else {
- int index = 0;
- for (const auto &r : responses) {
- QString result = r.first;
- QList<ResultInfo> infos = r.second;
- QJsonObject choice;
- choice.insert("text", result);
- choice.insert("index", index++);
- choice.insert("logprobs", QJsonValue::Null); // We don't support
- choice.insert("finish_reason", responseTokens == max_tokens ? "length" : "stop");
- if (MySettings::globalInstance()->localDocsShowReferences()) {
- QJsonArray references;
- for (const auto &ref : infos)
- references.append(resultToJson(ref));
- choice.insert("references", references);
+ choice.insert("references", references.isEmpty() ? QJsonValue::Null : QJsonValue(references));
}
choices.append(choice);
}
}
responseObject.insert("choices", choices);
+ responseObject.insert("usage", QJsonObject {
+ { "prompt_tokens", promptTokens },
+ { "completion_tokens", responseTokens },
+ { "total_tokens", promptTokens + responseTokens },
+ });
- QJsonObject usage;
- usage.insert("prompt_tokens", int(promptTokens));
- usage.insert("completion_tokens", int(responseTokens));
- usage.insert("total_tokens", int(promptTokens + responseTokens));
- responseObject.insert("usage", usage);
-
-#if defined(DEBUG)
- QJsonDocument newDoc(responseObject);
- printf("/v1/completions %s\n", qPrintable(newDoc.toJson(QJsonDocument::Indented)));
- fflush(stdout);
-#endif
-
- return QHttpServerResponse(responseObject);
+ return {QHttpServerResponse(responseObject), responseObject};
+}
+
+auto Server::handleChatRequest(const ChatRequest &request)
+ -> std::pair<QHttpServerResponse, std::optional<QJsonObject>>
+{
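+ // resolve the requested model by name or filename, falling back to the default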
+ ModelInfo modelInfo = ModelList::globalInstance()->defaultModelInfo();
+ const QList<ModelInfo> modelList = ModelList::globalInstance()->selectableModelList();
+ for (const ModelInfo &info : modelList) {
+ Q_ASSERT(info.installed);
+ if (!info.installed)
+ continue;
+ if (request.model == info.name() || request.model == info.filename()) {
+ modelInfo = info;
+ break;
+ }
+ }
+
+ // load the new model if necessary
+ setShouldBeLoaded(true);
+
+ if (modelInfo.filename().isEmpty()) {
+ std::cerr << "ERROR: couldn't load default model " << request.model.toStdString() << std::endl;
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
+ }
+
+ // NB: this resets the context, regardless of whether this model is already loaded
+ if (!loadModel(modelInfo)) {
+ std::cerr << "ERROR: couldn't load model " << modelInfo.name().toStdString() << std::endl;
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
+ }
+
+ const QString promptTemplate = modelInfo.promptTemplate();
+ const int top_k = modelInfo.topK();
+ const int n_batch = modelInfo.promptBatchSize();
+ const auto repeat_penalty = float(modelInfo.repeatPenalty());
+ const int repeat_last_n = modelInfo.repeatPenaltyTokens();
+
+ int promptTokens = 0;
+ int responseTokens = 0;
+ QList<QPair<QString, QList<ResultInfo>>> responses;
+ Q_ASSERT(!request.messages.isEmpty());
+ Q_ASSERT(request.messages.size() % 2 == 1);
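+ // messages must alternate user/assistant and end with a user message;
+ // replay each earlier exchange so it becomes part of the model's context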
+ for (int i = 0; i < request.messages.size() - 2; i += 2) {
+ using enum ChatRequest::Message::Role;
+ auto &user = request.messages[i];
+ auto &assistant = request.messages[i + 1];
+ Q_ASSERT(user.role == User);
+ Q_ASSERT(assistant.role == Assistant);
+
+ // adds prompt/response items to GUI
+ emit requestServerNewPromptResponsePair(user.content); // blocks
+ resetResponse();
+
+ if (!promptInternal(
+ {},
+ user.content,
+ promptTemplate,
+ request.max_tokens,
+ top_k,
+ request.top_p,
+ request.min_p,
+ request.temperature,
+ n_batch,
+ repeat_penalty,
+ repeat_last_n,
+ assistant.content)
+ ) {
+ std::cerr << "ERROR: couldn't prompt model " << modelInfo.name().toStdString() << std::endl;
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
+ }
+ promptTokens += m_promptResponseTokens; // previous responses are part of current prompt
+ }
+
+ QString lastMessage = request.messages.last().content;
+ // adds prompt/response items to GUI
+ emit requestServerNewPromptResponsePair(lastMessage); // blocks
+ resetResponse();
+
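+ // generate request.n choices for the final user message, resetting the response between runs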
+ for (int i = 0; i < request.n; ++i) {
+ if (!promptInternal(
+ m_collections,
+ lastMessage,
+ promptTemplate,
+ request.max_tokens,
+ top_k,
+ request.top_p,
+ request.min_p,
+ request.temperature,
+ n_batch,
+ repeat_penalty,
+ repeat_last_n)
+ ) {
+ std::cerr << "ERROR: couldn't prompt model " << modelInfo.name().toStdString() << std::endl;
+ return makeError(QHttpServerResponder::StatusCode::InternalServerError);
+ }
+ responses.append({response(), m_databaseResults});
+ // FIXME(jared): these are UI counts and do not include framing tokens, which they should
+ if (i == 0)
+ promptTokens += m_promptTokens;
+ responseTokens += m_promptResponseTokens - m_promptTokens;
+ if (i != request.n - 1)
+ resetResponse();
+ }
+
+ QJsonObject responseObject {
+ { "id", "placeholder" },
+ { "object", "chat.completion" },
+ { "created", QDateTime::currentSecsSinceEpoch() },
+ { "model", modelInfo.name() },
+ };
+
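+ // assemble one "chat.completion" choice per response, wrapped in an assistant message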
+ QJsonArray choices;
+ {
+ int index = 0;
+ for (const auto &r : responses) {
+ QString result = r.first;
+ QList<ResultInfo> infos = r.second;
+ QJsonObject message {
+ { "role", "assistant" },
+ { "content", result },
+ };
+ QJsonObject choice {
+ { "index", index++ },
+ { "message", message },
+ { "finish_reason", responseTokens == request.max_tokens ? "length" : "stop" },
+ { "logprobs", QJsonValue::Null },
+ };
+ if (MySettings::globalInstance()->localDocsShowReferences()) {
+ QJsonArray references;
+ for (const auto &ref : infos)
+ references.append(resultToJson(ref));
+ choice.insert("references", references.isEmpty() ? QJsonValue::Null : QJsonValue(references));
+ }
+ choices.append(choice);
+ }
+ }
+
+ responseObject.insert("choices", choices);
+ responseObject.insert("usage", QJsonObject {
+ { "prompt_tokens", promptTokens },
+ { "completion_tokens", responseTokens },
+ { "total_tokens", promptTokens + responseTokens },
+ });
+
+ return {QHttpServerResponse(responseObject), responseObject};
}
diff --git a/gpt4all-chat/src/server.h b/gpt4all-chat/src/server.h
index 689f0b60..a1d46264 100644
--- a/gpt4all-chat/src/server.h
+++ b/gpt4all-chat/src/server.h
@@ -4,22 +4,29 @@
#include "chatllm.h"
#include "database.h"
-#include
+#include
#include
-#include
+#include
#include
+#include
#include
+#include <memory>
+#include <optional>
+#include <utility>
+
class Chat;
-class QHttpServer;
+class ChatRequest;
+class CompletionRequest;
+
class Server : public ChatLLM
{
Q_OBJECT
public:
- Server(Chat *parent);
- virtual ~Server();
+ explicit Server(Chat *chat);
+ ~Server() override = default;
public Q_SLOTS:
void start();
@@ -27,14 +34,17 @@ public Q_SLOTS:
Q_SIGNALS:
void requestServerNewPromptResponsePair(const QString &prompt);
+private:
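+ // each handler returns the HTTP response along with the JSON body, if one was produced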
+ auto handleCompletionRequest(const CompletionRequest &request) -> std::pair<QHttpServerResponse, std::optional<QJsonObject>>;
+ auto handleChatRequest(const ChatRequest &request) -> std::pair<QHttpServerResponse, std::optional<QJsonObject>>;
+
private Q_SLOTS:
- QHttpServerResponse handleCompletionRequest(const QHttpServerRequest &request, bool isChat);
void handleDatabaseResultsChanged(const QList<ResultInfo> &results) { m_databaseResults = results; }
void handleCollectionListChanged(const QList<QString> &collectionList) { m_collections = collectionList; }
private:
Chat *m_chat;
- QHttpServer *m_server;
+ std::unique_ptr<QHttpServer> m_server;
QList<ResultInfo> m_databaseResults;
QList<QString> m_collections;
};
From 08d9a401d2543857c251f6cfc2e8b63b09bb970d Mon Sep 17 00:00:00 2001
From: Jared Van Bortel
Date: Mon, 9 Sep 2024 17:12:12 -0400
Subject: [PATCH 64/66] mixpanel: report more information about the build and
platform (#2939)
Signed-off-by: Jared Van Bortel
---
gpt4all-chat/CHANGELOG.md | 1 +
gpt4all-chat/src/network.cpp | 75 ++++++++++++++++++++++++++++++------
2 files changed, 65 insertions(+), 11 deletions(-)
diff --git a/gpt4all-chat/CHANGELOG.md b/gpt4all-chat/CHANGELOG.md
index 91b42f51..c9b45742 100644
--- a/gpt4all-chat/CHANGELOG.md
+++ b/gpt4all-chat/CHANGELOG.md
@@ -9,6 +9,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Use greedy sampling when temperature is set to zero ([#2854](https://github.com/nomic-ai/gpt4all/pull/2854))
- Use configured system prompt in server mode and ignore system messages ([#2921](https://github.com/nomic-ai/gpt4all/pull/2921), [#2924](https://github.com/nomic-ai/gpt4all/pull/2924))
+- Add more system information to anonymous usage stats ([#2939](https://github.com/nomic-ai/gpt4all/pull/2939))
### Changed
- The offline update button now directs users to the offline installer releases page. (by [@3Simplex](https://github.com/3Simplex) in [#2888](https://github.com/nomic-ai/gpt4all/pull/2888))
diff --git a/gpt4all-chat/src/network.cpp b/gpt4all-chat/src/network.cpp
index 19e96a10..84fe1bc9 100644
--- a/gpt4all-chat/src/network.cpp
+++ b/gpt4all-chat/src/network.cpp
@@ -19,6 +19,7 @@
#include
#include
#include
+#include
#include
#include