AddCollectionView ← Existing Collections ← 已有集合 Add Document Collection 添加文档集合 Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings. 添加一个包含纯文本文件、PDF或Markdown的文件夹。在“设置”中配置其他扩展。 Please choose a directory 请选择一个目录 Name 名称 Collection name... 集合名称... Name of the collection to add (Required) 集合名称(必须) Folder 目录 Folder path... 目录地址... Folder path to documents (Required) 文档的目录地址(必须) Browse 浏览 Create Collection 创建集合 AddModelView ← Existing Models ← 已有模型 Explore Models 发现模型 Discover and download models by keyword search... 通过关键词搜索来发现并下载模型... Text field for discovering and filtering downloadable models 用于发现和筛选可下载模型的文本字段 Initiate model discovery and filtering 启动模型发现和过滤 Triggers discovery and filtering of models 触发模型的发现和筛选 Default 默认 Likes 喜欢 Downloads 下载 Recent 近期 Asc 升序 Desc 降序 None 无 Searching · %1 搜索中 · %1 Sort by: %1 排序: %1 Sort dir: %1 排序方向: %1 Limit: %1 数量: %1 Network error: could not retrieve %1 网络错误:无法检索 %1 Busy indicator 忙碌指示器 Displayed when the models request is ongoing 在模型请求进行中时显示 Model file 模型文件 Model file to be downloaded 待下载的模型文件 Description 描述 File description 文件描述 Cancel 取消 Resume 继续 Download 下载 Stop/restart/start the download 停止/重启/开始下载 Remove 删除 Remove model from filesystem 从文件系统中删除模型 Install 安装 Install online model 安装在线模型 <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">错误</a></strong></font> <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">警告:不推荐在您的硬件上运行。模型所需内存(%1 GB)超过您系统的可用内存(%2)。</strong></font> ERROR: $API_KEY is empty. 错误:$API_KEY 为空。 ERROR: $BASE_URL is empty. 错误:$BASE_URL 为空。 enter $BASE_URL 输入 $BASE_URL ERROR: $MODEL_NAME is empty. 错误:$MODEL_NAME 为空。 enter $MODEL_NAME 输入 $MODEL_NAME %1 GB %1 GB ? Describes an error that occurred when downloading 描述下载过程中发生的错误 Error for incompatible hardware 硬件不兼容的错误 Download progressBar 下载进度条 Shows the progress made in the download 显示下载进度 Download speed 下载速度 Download speed in bytes/kilobytes/megabytes per second 下载速度(字节/千字节/兆字节每秒) Calculating... 计算中... Whether the file hash is being calculated 是否正在计算文件哈希 Displayed when the file hash is being calculated 在计算文件哈希时显示 enter $API_KEY 输入 $API_KEY File size 文件大小 RAM required 所需内存 Parameters 参数 Quant 量化 Type 类型 ApplicationSettings Application 应用 Network dialog 网络对话框 opt-in to share feedback/conversations 选择加入以共享反馈/对话 Error dialog 错误对话框 Application Settings 应用设置 General 通用 Theme 主题 The application color scheme. 应用程序的配色方案。 Dark 深色 Light 亮色 ERROR: Update system could not find the MaintenanceTool used to check for updates!<br/><br/>Did you install this application using the online installer? If so, the MaintenanceTool executable should be located one directory above where this application resides on your filesystem.<br/><br/>If you can't start it manually, then I'm afraid you'll have to reinstall. 错误:更新系统无法找到用于检查更新的 MaintenanceTool!<br><br>您是否使用在线安装程序安装了此应用程序?如果是的话,MaintenanceTool 可执行文件应该位于文件系统中此应用程序所在目录的上一级目录。<br><br>如果无法手动启动它,那么恐怕您需要重新安装。 LegacyDark LegacyDark Font Size 字体大小 The size of text in the application. 应用中的文本大小。 Small 小 Medium 中 Large 大 Language and Locale 语言和区域设置 The language and locale you wish to use. 您希望使用的语言和区域设置。 System Locale 系统区域设置 Device 设备 The compute device used for text generation. "Auto" uses Vulkan or Metal. 用于文本生成的计算设备。“自动”使用 Vulkan 或 Metal。 The compute device used for text generation. 用于文本生成的计算设备。 Application default 程序默认 Default Model 默认模型 The preferred model for new chats. 
Also used as the local server fallback. 新聊天的首选模型。也用作本地服务器回退。 Suggestion Mode 建议模式 Generate suggested follow-up questions at the end of responses. 在答复结束时生成建议的后续问题。 When chatting with LocalDocs 与本地文档聊天时 Whenever possible 只要有可能 Never 从不 Download Path 下载路径 Where to store local models and the LocalDocs database. 本地模型和本地文档数据库的存储位置。 Browse 浏览 Choose where to save model files 选择模型文件的保存位置 Enable Datalake 开启数据湖 Send chats and feedback to the GPT4All Open-Source Datalake. 将对话和反馈发送到 GPT4All 开源数据湖。 Advanced 高级 CPU Threads CPU线程 The number of CPU threads used for inference and embedding. 用于推理和嵌入的 CPU 线程数。 Save Chat Context 保存对话上下文 Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. 将聊天模型的状态保存到磁盘以加快加载速度。警告:每个对话约占用 2GB。 Enable Local API Server 开启本地 API 服务 Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage. 将 OpenAI 兼容的服务器暴露给本地主机。警告:会导致资源使用量增加。 API Server Port API 服务端口 The port to use for the local server. Requires restart. 本地服务器使用的端口。需要重启。 Check For Updates 检查更新 Manually check for an update to GPT4All. 手动检查 GPT4All 的更新。 Updates 更新 Chat New Chat 新对话 Server Chat 服务器对话 ChatAPIWorker ERROR: Network error occurred while connecting to the API server 错误:连接到 API 服务器时发生网络错误 ChatAPIWorker::handleFinished got HTTP Error %1 %2 ChatAPIWorker::handleFinished 收到 HTTP 错误 %1 %2 ChatDrawer Drawer 抽屉 Main navigation drawer 主导航抽屉 + New Chat + 新对话 Create a new chat 创建新对话 Select the current chat or edit the chat when in edit mode 选择当前的聊天或在编辑模式下编辑聊天 Edit chat name 修改对话名称 Save chat name 保存对话名称 Delete chat 删除对话 Confirm chat deletion 确认删除对话 Cancel chat deletion 取消删除对话 List of chats 对话列表 List of chats in the drawer dialog 对话框中的聊天列表 ChatListModel TODAY 今天 THIS WEEK 本周 THIS MONTH 本月 LAST SIX MONTHS 半年内 THIS YEAR 今年内 LAST YEAR 去年 ChatView <h3>Warning</h3><p>%1</p> <h3>警告</h3><p>%1</p> Switch model dialog 切换模型对话框 Warn the user if they switch models, then context will be erased 警告用户:如果切换模型,当前上下文将被清除 Conversation copied to clipboard. 对话已复制到剪贴板。 Code copied to clipboard. 代码已复制到剪贴板。 Chat panel 对话面板 Chat panel with options 带选项的对话面板 Reload the currently loaded model 重载当前模型 Eject the currently loaded model 弹出当前加载的模型 No model installed. 没有安装模型。 Model loading error. 模型加载错误。 Waiting for model... 等待模型中... Switching context... 切换上下文... Choose a model... 选择模型... Not found: %1 没找到: %1 The top item is the current model 顶部项为当前模型 LocalDocs 本地文档 Add documents 添加文档 add collections of documents to the chat 将文档集合添加到聊天中 Load the default model 载入默认模型 Loads the default model which can be changed in settings 加载默认模型,可以在设置中更改 No Model Installed 没有安装模型 GPT4All requires that you install at least one model to get started GPT4All要求您至少安装一个模型才能开始 Install a Model 安装模型 Shows the add model view 显示添加模型视图 Conversation with the model 与模型的对话 prompt / response pairs from the conversation 对话中的提示/响应对 GPT4All GPT4All Your response stopped ... 响应已停止... processing ... 处理中... generating response ... 生成响应中... generating questions ... 
生成问题中... Copy 复制 Copy Message 复制消息 Disable markdown 禁用 markdown Enable markdown 启用 markdown Thumbs up 点赞 Gives a thumbs up to the response 点赞响应 Thumbs down 点踩 Opens thumbs down dialog 打开点踩对话框 Suggested follow-ups 建议的后续问题 Erase and reset chat session 擦除并重置聊天会话 Copy chat session to clipboard 复制聊天会话到剪贴板 Redo last chat response 重新生成上个响应 Stop generating 停止生成 Stop the current response generation 停止当前响应生成 Reloads the model 重载模型 <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model ensure the file is not corrupt by checking md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the gui<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help <h3>加载模型时遇到错误:</h3><br><i>"%1"</i><br><br>模型加载失败可能由多种原因引起,但最常见的原因包括文件格式错误、下载不完整或损坏、文件类型错误、系统 RAM 不足或模型类型不兼容。以下是一些解决问题的建议:<br><ul><li>确保模型文件具有兼容的格式和类型<li>检查下载文件夹中的模型文件是否完整<li>您可以在设置对话框中找到下载文件夹<li>如果您已侧载模型,请通过检查 md5sum 确保文件未损坏<li>在我们的 <a href="https://docs.gpt4all.io/">文档</a> 中了解有关 gui 支持哪些模型的更多信息<li>查看我们的 <a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助 Reload · %1 重载 · %1 Loading · %1 载入中 · %1 Load · %1 (default) → 载入 · %1 (默认) → restoring from text ... 从文本恢复中... retrieving localdocs: %1 ... 检索本地文档: %1 ... searching localdocs: %1 ... 搜索本地文档: %1 ... %n Source(s) %n 来源 Send a message... 发送消息... Load a model to continue... 加载模型以继续... Send messages/prompts to the model 发送消息/提示词给模型 Cut 剪切 Paste 粘贴 Select All 全选 Send message 发送消息 Sends the message/prompt contained in textfield to the model 将文本框中包含的消息/提示发送给模型 CollectionsDrawer Warning: searching collections while indexing can return incomplete results 警告:索引时搜索集合可能会返回不完整的结果 %n file(s) %n 文件 %n word(s) %n 词 Updating 更新中 + Add Docs + 添加文档 Select a collection to make it available to the chat model. 选择一个集合,使其可用于聊天模型。 Download Model "%1" is installed successfully. 模型 "%1" 安装成功。 ERROR: $MODEL_NAME is empty. 错误:$MODEL_NAME 为空。 ERROR: $API_KEY is empty. 错误:$API_KEY 为空。 ERROR: $BASE_URL is invalid. 错误:$BASE_URL 无效。 ERROR: Model "%1 (%2)" is in conflict. 错误:模型 "%1 (%2)" 存在冲突。 Model "%1 (%2)" is installed successfully. 模型 "%1 (%2)" 安装成功。 Model "%1" is removed. 模型 "%1" 已删除。 HomeView Welcome to GPT4All 欢迎使用 GPT4All The privacy-first LLM chat application 隐私优先的大语言模型聊天应用程序 Start chatting 开始聊天 Start Chatting 开始聊天 Chat with any LLM 与任意大语言模型聊天 LocalDocs 本地文档 Chat with your local files 与您的本地文件聊天 Find Models 查找模型 Explore and download models 发现并下载模型 Latest news 最新新闻 Latest news from GPT4All 来自 GPT4All 的最新新闻 Release Notes 发布日志 Documentation 文档 Discord Discord X (Twitter) X (Twitter) Github Github nomic.ai nomic.ai Subscribe to Newsletter 订阅新闻通讯 LocalDocsSettings LocalDocs 本地文档 LocalDocs Settings 本地文档设置 Indexing 索引 Allowed File Extensions 允许的文件扩展名 Comma-separated list. LocalDocs will only attempt to process files with these extensions. 逗号分隔的列表。LocalDocs 只会尝试处理具有这些扩展名的文件。 Embedding Embedding Use Nomic Embed API 使用 Nomic Embed API Embed documents using the fast Nomic API instead of a private local model. Requires restart. 
使用快速的 Nomic API 嵌入文档,而不是使用私有本地模型。需要重启。 Nomic API Key Nomic API Key API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart. 用于 Nomic Embed 的 API 密钥。可从 Atlas <a href="https://atlas.nomic.ai/cli-login">API 密钥页面</a>获取。需要重启。 Embeddings Device Embeddings 设备 The compute device used for embeddings. Requires restart. 用于嵌入的计算设备。需要重启。 Application default 程序默认 Display 显示 Show Sources 显示来源 Display the sources used for each response. 显示每个响应所使用的来源。 Advanced 高级 Warning: Advanced usage only. 警告:仅限高级用户使用。 Values too large may cause localdocs failure, extremely slow responses or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. 值过大可能会导致 localdocs 失败、响应速度极慢或根本无法响应。粗略地说,{N 个字符 x N 个片段} 被添加到模型的上下文窗口中。更多信息请见<a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">此处</a>。 Document snippet size (characters) 文档片段大小(字符) Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. 每个文档片段的字符数。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。 Max document snippets per prompt 每个提示的最大文档片段数 Max best N matches of retrieved document snippets to add to the context for prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation. 检索到的文档片段最多添加到提示上下文中的前 N 个最佳匹配项。较大的数值增加了事实性响应的可能性,但也会导致生成速度变慢。 LocalDocsView LocalDocs 本地文档 Chat with your local files 与您的本地文件聊天 + Add Collection + 添加集合 <h3>ERROR: The LocalDocs database cannot be accessed or is not valid.</h3><br><i>Note: You will need to restart after trying any of the following suggested fixes.</i><br><ul><li>Make sure that the folder set as <b>Download Path</b> exists on the file system.</li><li>Check ownership as well as read and write permissions of the <b>Download Path</b>.</li><li>If there is a <b>localdocs_v2.db</b> file, check its ownership and read/write permissions, too.</li></ul><br>If the problem persists and there are any 'localdocs_v*.db' files present, as a last resort you can<br>try backing them up and removing them. You will have to recreate your collections, however. <h3>错误:无法访问 LocalDocs 数据库或该数据库无效。</h3><br><i>注意:尝试以下任何建议的修复方法后,您将需要重新启动。</i><br><ul><li>确保设置为<b>下载路径</b>的文件夹存在于文件系统中。</li><li>检查<b>下载路径</b>的所有权以及读写权限。</li><li>如果有<b>localdocs_v2.db</b>文件,请检查其所有权和读/写权限。</li></ul><br>如果问题仍然存在,并且存在任何“localdocs_v*.db”文件,作为最后的手段,您可以<br>尝试备份并删除它们。但是,您必须重新创建您的集合。 No Collections Installed 没有安装集合 Install a collection of local documents to get started using this feature 安装一组本地文档以开始使用此功能 + Add Doc Collection + 添加文档集合 Shows the add model view 显示添加模型视图 Indexing progressBar 索引进度条 Shows the progress made in the indexing 显示索引进度 ERROR 错误 INDEXING 索引中 EMBEDDING 嵌入中 REQUIRES UPDATE 需要更新 READY 就绪 INSTALLING 安装中 Indexing in progress 构建索引中 Embedding in progress 嵌入进行中 This collection requires an update after version change 此集合需要在版本更改后进行更新 Automatically reindexes upon changes to the folder 在文件夹变动时自动重新索引 Installation in progress 安装进行中 % % %n file(s) %n 文件 %n word(s) %n 词 Remove 删除 Rebuild 重新构建 Reindex this folder from scratch. This is slow and usually not needed. 
从头开始重新索引此文件夹。这个过程较慢,通常情况下不需要。 Update 更新 Update the collection to the new version. This is a slow operation. 将集合更新为新版本。这是一个缓慢的操作。 ModelList cannot open "%1": %2 无法打开“%1”:%2 cannot create "%1": %2 无法创建“%1”:%2 %1 (%2) %1 (%2) <strong>OpenAI-Compatible API Model</strong><br><ul><li>API Key: %1</li><li>Base URL: %2</li><li>Model Name: %3</li></ul> <strong>与 OpenAI 兼容的 API 模型</strong><br><ul><li>API 密钥:%1</li><li>基本 URL:%2</li><li>模型名称:%3</li></ul> <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here.</a></li> <ul><li>需要个人 OpenAI API 密钥。</li><li>警告:将把您的聊天内容发送给 OpenAI!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与 OpenAI 通信</li><li>您可以在此处<a href="https://platform.openai.com/account/api-keys">申请 API 密钥。</a></li> <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI 的 ChatGPT 模型 GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>OpenAI 的 ChatGPT 模型 GPT-4</strong><br> %1 %2 <strong>Mistral Tiny model</strong><br> %1 <strong>Mistral Tiny 模型</strong><br> %1 <strong>Mistral Small model</strong><br> %1 <strong>Mistral Small 模型</strong><br> %1 <strong>Mistral Medium model</strong><br> %1 <strong>Mistral Medium 模型</strong><br> %1 <ul><li>Requires personal API key and the API base URL.</li><li>WARNING: Will send your chats to the OpenAI-compatible API Server you specified!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with the OpenAI-compatible API Server</li> <ul><li>需要个人 API 密钥和 API 基本 URL。</li><li>警告:将把您的聊天内容发送到您指定的与 OpenAI 兼容的 API 服务器!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与与 OpenAI 兼容的 API 服务器通信</li> <strong>Connect to OpenAI-compatible API server</strong><br> %1 <strong>连接到与 OpenAI 兼容的 API 服务器</strong><br> %1 <br><br><i>* Even if you pay OpenAI for ChatGPT-4 this does not guarantee API key access. Contact OpenAI for more info. <br><br><i>* 即使您为ChatGPT-4向OpenAI付款,这也不能保证API密钥访问。联系OpenAI获取更多信息。 <ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li> <ul><li>需要个人 Mistral API 密钥。</li><li>警告:将把您的聊天内容发送给 Mistral!</li><li>您的 API 密钥将存储在磁盘上</li><li>仅用于与 Mistral 通信</li><li>您可以在<a href="https://console.mistral.ai/user/api-keys">此处</a>申请 API 密钥。</li> <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here.</a></ul> <strong>由 %1 创建。</strong><br><ul><li>发布于 %2。<li>此模型有 %3 个赞。<li>此模型有 %4 次下载。<li>更多信息请见<a href="https://huggingface.co/%5">此处。</a></ul> ModelSettings Model 模型 Model Settings 模型设置 Clone 克隆 Remove 删除 Name 名称 Model File 模型文件 System Prompt 系统提示词 Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. 作为每次对话开始时的前缀。必须包含适当的框架标记。 Prompt Template 提示词模板 The template that wraps every prompt. 包装每个提示词的模板。 Must contain the string "%1" to be replaced with the user's input. 必须包含字符串 "%1",它将被替换为用户的输入。 
Chat Name Prompt 聊天名称提示 Prompt used to automatically generate chat names. 用于自动生成聊天名称的提示。 Suggested FollowUp Prompt 建议的后续提示 Prompt used to generate suggested follow-up questions. 用于生成建议的后续问题的提示。 Context Length 上下文长度 Number of input and output tokens the model sees. 模型看到的输入和输出令牌的数量。 Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. 信息丢失前的最大组合提示/响应令牌。使用比模型训练时更多的上下文将产生较差的结果。注意:在重新加载模型之前不会生效。 Temperature 温度 Randomness of model output. Higher -> more variation. 模型输出的随机性。更高 -> 更多的变化。 Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. 温度增加了选择不太可能的令牌的机会。注意:温度越高,输出越有创意,但预测性越低。 Top-P Top-P Nucleus Sampling factor. Lower -> more predictable. 核采样系数。较低 -> 更可预测。 Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. 只能选择总概率不超过 top_p 的最有可能的令牌。注意:防止选择极不可能的令牌。 Min-P Min-P Minimum token probability. Higher -> more predictable. 最小令牌概率。更高 -> 更可预测。 Sets the minimum relative probability for a token to be considered. 设置令牌被考虑的最小相对概率。 Top-K Top-K Size of selection pool for tokens. 令牌选择池的大小。 Only the top K most likely tokens will be chosen from. 仅从最可能的前 K 个令牌中选择。 Max Length 最大长度 Maximum response length, in tokens. 最大响应长度(以令牌为单位)。 Prompt Batch Size 提示词批处理大小 The batch size used for prompt processing. 用于提示词处理的批处理大小。 Amount of prompt tokens to process at once. NOTE: Higher values can speed up reading prompts but will use more RAM. 一次要处理的提示令牌数量。注意:较高的值可以加快读取提示,但会使用更多的RAM。 Repeat Penalty 重复惩罚 Repetition penalty factor. Set to 1 to disable. 重复惩罚系数。设置为 1 可禁用。 Repeat Penalty Tokens 重复惩罚令牌数 Number of previous tokens used for penalty. 用于惩罚的先前令牌数量。 GPU Layers GPU 层 Number of model layers to load into VRAM. 要加载到VRAM中的模型层数。 How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. 将多少模型层加载到VRAM中。如果GPT4All在加载此模型时耗尽VRAM,请减少此值。较低的值会增加CPU负载和RAM使用率,并使推理速度变慢。注意:在重新加载模型之前不会生效。 ModelsView No Models Installed 没有安装模型 Install a model to get started using GPT4All 安装一个模型以开始使用 GPT4All + Add Model + 添加模型 Shows the add model view 显示添加模型视图 Installed Models 已安装的模型 Locally installed chat models 本地安装的聊天模型 Model file 模型文件 Model file to be downloaded 待下载的模型文件 Description 描述 File description 文件描述 Cancel 取消 Resume 继续 Stop/restart/start the download 停止/重启/开始下载 Remove 删除 Remove model from filesystem 从文件系统中删除模型 Install 安装 Install online model 安装在线模型 <strong><font size="1"><a href="#error">Error</a></strong></font> <strong><font size="1"><a href="#error">错误</a></strong></font> <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font> <strong><font size="2">警告:不推荐在您的硬件上运行。模型所需内存(%1 GB)超过您系统的可用内存(%2)。</strong></font> ERROR: $API_KEY is empty. 错误:$API_KEY 为空。 ERROR: $BASE_URL is empty. 错误:$BASE_URL 为空。 enter $BASE_URL 输入 $BASE_URL ERROR: $MODEL_NAME is empty. 错误:$MODEL_NAME 为空。 enter $MODEL_NAME 输入 $MODEL_NAME %1 GB %1 GB ? 
Describes an error that occurred when downloading 描述下载时发生的错误 Error for incompatible hardware 硬件不兼容的错误 Download progressBar 下载进度条 Shows the progress made in the download 显示下载进度 Download speed 下载速度 Download speed in bytes/kilobytes/megabytes per second 下载速度(字节/千字节/兆字节每秒) Calculating... 计算中... Whether the file hash is being calculated 是否正在计算文件哈希 Busy indicator 忙碌指示器 Displayed when the file hash is being calculated 在计算文件哈希时显示 enter $API_KEY 输入 $API_KEY File size 文件大小 RAM required 所需内存 Parameters 参数 Quant 量化 Type 类型 MyFancyLink Fancy link 精美链接 A stylized link 样式化链接 MySettingsStack Please choose a directory 请选择目录 MySettingsTab Restore Defaults 恢复默认设置 Restores settings dialog to a default state 将设置对话框恢复为默认状态 NetworkDialog Contribute data to the GPT4All Opensource Datalake. 向 GPT4All 开源数据湖贡献数据。 By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should, however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! 通过启用此功能,您将能够通过为未来的模型改进贡献数据来参与训练大型语言模型的民主过程。当 GPT4All 模型回复您并且您已选择加入时,您的对话将被发送到 GPT4All 开源数据湖。此外,您可以喜欢/不喜欢它的回复。如果您不喜欢某个回复,您可以建议其他回复。这些数据将在 GPT4All 数据湖中收集和汇总。注意:通过启用此功能,您将把数据发送到 GPT4All 开源数据湖。启用此功能后,您不应该期望聊天隐私。但是,如果您愿意,您应该期望可选的归因。您的聊天数据将公开供任何人下载,并将被 Nomic AI 用于改进未来的 GPT4All 模型。Nomic AI 将保留与您的数据相关的所有归因信息,并且您将被视为使用您的数据的任何 GPT4All 模型发布的贡献者! Terms for opt-in 选择加入的条款 Describes what will happen when you opt-in 描述选择加入时会发生的情况 Please provide a name for attribution (optional) 请提供用于归因的名称(可选) Attribution (optional) 归因(可选) Provide attribution 提供归因 Enable 启用 Enable opt-in 启用选择加入 Cancel 取消 Cancel opt-in 取消选择加入 NewVersionDialog New version is available 新版本可用 Update 更新 Update to new version 更新到新版本 PopupDialog Reveals a short-lived help balloon 显示一个短暂的帮助气球 Busy indicator 忙碌指示器 Displayed when the popup is showing busy 在弹出窗口显示忙碌时显示 SettingsView Settings 设置 Contains various application settings 包含各种应用程序设置 Application 应用 Model 模型 LocalDocs 本地文档 StartupDialog Welcome! 欢迎! ### Release Notes %1<br/> ### Contributors %2 ### 发布日志 %1<br/> ### 贡献者 %2 Release notes 发布日志 Release notes for this version 本版本发布日志 ### Opt-ins for anonymous usage analytics and datalake By enabling these features, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. 
You should have no expectation of chat privacy when this feature is enabled. You should, however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! ### 选择加入匿名使用分析和数据湖 通过启用这些功能,您将能够通过为未来的模型改进贡献数据来参与训练大型语言模型的民主过程。当 GPT4All 模型回复您并且您已选择加入时,您的对话将被发送到 GPT4All 开源数据湖。此外,您可以喜欢/不喜欢它的回复。如果您不喜欢某个回复,您可以建议其他回复。这些数据将在 GPT4All 数据湖中收集和汇总。注意:通过启用此功能,您将把您的数据发送到 GPT4All 开源数据湖。启用此功能后,您不应该期望聊天隐私。但是,如果您愿意,您应该期望可选的归因。您的聊天数据将公开供任何人下载,并将由 Nomic AI 用于改进未来的 GPT4All 模型。Nomic AI 将保留与您的数据相关的所有归因信息,并且您将被视为使用您的数据的任何 GPT4All 模型发布的贡献者! Terms for opt-in 选择加入的条款 Describes what will happen when you opt-in 描述选择加入时会发生的情况 Opt-in for anonymous usage statistics 选择加入匿名使用统计数据 Yes 是 Allow opt-in for anonymous usage statistics 允许选择加入匿名使用统计数据 No 否 Opt-out for anonymous usage statistics 选择退出匿名使用统计数据 Allow opt-out for anonymous usage statistics 允许选择退出匿名使用统计数据 Opt-in for network 选择加入网络 Allow opt-in for network 允许选择加入网络 Allow opt-in anonymous sharing of chats to the GPT4All Datalake 允许选择加入匿名共享聊天至 GPT4All 数据湖 Opt-out for network 选择退出网络 Allow opt-out anonymous sharing of chats to the GPT4All Datalake 允许选择退出将聊天匿名共享至 GPT4All 数据湖 SwitchModelDialog <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? <b>警告:</b> 更改模型将删除当前对话。您想继续吗? Continue 继续 Continue with model loading 继续加载模型 Cancel 取消 ThumbsDownDialog Please edit the text below to provide a better response. (optional) 请编辑下方文本以提供更好的回复。(可选) Please provide a better response... 请提供更好的回答... Submit 提交 Submits the user's response 提交用户响应 Cancel 取消 Closes the response dialog 关闭响应对话框 main GPT4All v%1 GPT4All v%1 <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimal requirements to run this program. In particular, it does not support AVX intrinsics which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> <h3>启动时遇到错误:</h3><br><i>“检测到不兼容的硬件。”</i><br><br>很遗憾,您的 CPU 不满足运行此程序的最低要求。特别是,它不支持此程序成功运行现代大型语言模型所需的 AVX 内在函数。目前唯一的解决方案是将您的硬件升级到更现代的 CPU。<br><br>有关更多信息,请参阅此处:<a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file. This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. <h3>启动时遇到错误:</h3><br><i>“无法访问设置文件。”</i><br><br>不幸的是,某些东西阻止程序访问设置文件。这可能是由于设置文件所在的本地应用程序配置目录中的权限不正确造成的。请查看我们的<a href="https://discord.gg/4M2QFmTt2k">discord 频道</a> 以获取帮助。 Connection to datalake failed. 连接数据湖失败。 Saving chats. 
正在保存对话。 Network dialog 网络对话框 opt-in to share feedback/conversations 选择加入以共享反馈/对话 Home view 主页视图 Home view of application 应用程序的主页视图 Home 主页 Chat view 对话视图 Chat view to interact with models 聊天视图可与模型互动 Chats 对话 Models 模型 Models view for installed models 已安装模型的页面 LocalDocs 本地文档 LocalDocs view to configure and use local docs LocalDocs 视图可配置和使用本地文档 Settings 设置 Settings view for application configuration 应用程序配置的设置页面 The datalake is enabled 数据湖已开启 Using a network model 使用联网模型 Server mode is enabled 服务器模式已开启 Installed models 已安装的模型 View of installed models 查看已安装模型