AddCollectionView ← Existing Collections Add Document Collection Add a folder containing plain text files, PDFs, or Markdown. Configure additional extensions in Settings. Please choose a directory Name Collection name... Name of the collection to add (Required) Folder Folder path... Folder path to documents (Required) Browse Create Collection AddModelView ← Existing Models Explore Models Discover and download models by keyword search... Text field for discovering and filtering downloadable models Initiate model discovery and filtering Triggers discovery and filtering of models Default Likes Downloads Recent Asc Desc None Searching · %1 Sort by: %1 Sort dir: %1 Limit: %1 Network error: could not retrieve %1 Busy indicator Displayed when the models request is ongoing Model file Model file to be downloaded Description File description Cancel Resume Download Stop/restart/start the download Remove Remove model from filesystem Install Install online model <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</font></strong> %1 GB ? Describes an error that occurred when downloading <strong><font size="1"><a href="#error">Error</a></font></strong> Error for incompatible hardware Download progressBar Shows the progress made in the download Download speed Download speed in bytes/kilobytes/megabytes per second Calculating... Whether the file hash is being calculated Displayed when the file hash is being calculated enter $API_KEY File size RAM required Parameters Quant Type ApplicationSettings Application Network dialog opt-in to share feedback/conversations ERROR: Update system could not find the MaintenanceTool used<br> to check for updates!<br><br> Did you install this application using the online installer?
If so,<br> the MaintenanceTool executable should be located one directory<br> above where this application resides on your filesystem.<br><br> If you can't start it manually, then I'm afraid you'll have to<br> reinstall. Error dialog Application Settings General Theme The application color scheme. Dark Light LegacyDark Font Size The size of text in the application. Small Medium Large Language and Locale The language and locale you wish to use. Device The compute device used for text generation. "Auto" uses Vulkan or Metal. Default Model The preferred model for new chats. Also used as the local server fallback. Suggestion Mode Generate suggested follow-up questions at the end of responses. When chatting with LocalDocs Whenever possible Never Download Path Where to store local models and the LocalDocs database. Browse Choose where to save model files Enable Datalake Send chats and feedback to the GPT4All Open-Source Datalake. Advanced CPU Threads The number of CPU threads used for inference and embedding. Save Chat Context Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat. Enable Local Server Expose an OpenAI-compatible server to localhost. WARNING: Results in increased resource usage. API Server Port The port to use for the local server. Requires restart. Check For Updates Manually check for an update to GPT4All. Updates Chat New Chat Server Chat ChatDrawer Drawer Main navigation drawer + New Chat Create a new chat Select the current chat or edit the chat when in edit mode Edit chat name Save chat name Delete chat Confirm chat deletion Cancel chat deletion List of chats List of chats in the drawer dialog ChatListModel TODAY THIS WEEK THIS MONTH LAST SIX MONTHS THIS YEAR LAST YEAR ChatView <h3>Warning</h3><p>%1</p> Switch model dialog Warn the user that switching models will erase the current context Conversation copied to clipboard. Code copied to clipboard.
Chat panel Chat panel with options Reload the currently loaded model Eject the currently loaded model No model installed. Model loading error. Waiting for model... Switching context... Choose a model... Not found: %1 The top item is the current model LocalDocs Add documents add collections of documents to the chat Load the default model Loads the default model which can be changed in settings No Model Installed GPT4All requires that you install at least one model to get started Install a Model Shows the add model view Conversation with the model prompt / response pairs from the conversation GPT4All You recalculating context ... response stopped ... processing ... generating response ... generating questions ... Copy Copy Message Disable markdown Enable markdown Thumbs up Gives a thumbs up to the response Thumbs down Opens thumbs down dialog %1 Sources Suggested follow-ups Erase and reset chat session Copy chat session to clipboard Redo last chat response Stop generating Stop the current response generation Reloads the model <h3>Encountered an error loading model:</h3><br><i>"%1"</i><br><br>Model loading failures can happen for a variety of reasons, but the most common causes include a bad file format, an incomplete or corrupted download, the wrong file type, not enough system RAM, or an incompatible model type. Here are some suggestions for resolving the problem:<br><ul><li>Ensure the model file has a compatible format and type<li>Check the model file is complete in the download folder<li>You can find the download folder in the settings dialog<li>If you've sideloaded the model, ensure the file is not corrupt by checking its md5sum<li>Read more about what models are supported in our <a href="https://docs.gpt4all.io/">documentation</a> for the GUI<li>Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help</ul> Reload · %1 Loading · %1 Load · %1 (default) → retrieving localdocs: %1 ... searching localdocs: %1 ... Send a message...
Load a model to continue... Send messages/prompts to the model Cut Paste Select All Send message Sends the message/prompt contained in the text field to the model CollectionsDrawer Warning: searching collections while indexing can return incomplete results %n file(s) %n word(s) Updating + Add Docs Select a collection to make it available to the chat model. HomeView Welcome to GPT4All The privacy-first LLM chat application Start chatting Start Chatting Chat with any LLM LocalDocs Chat with your local files Find Models Explore and download models Latest news Latest news from GPT4All Release Notes Documentation Discord X (Twitter) GitHub GPT4All.io Subscribe to Newsletter LocalDocsSettings LocalDocs LocalDocs Settings Indexing Allowed File Extensions Comma-separated list. LocalDocs will only attempt to process files with these extensions. Embedding Use Nomic Embed API Embed documents using the fast Nomic API instead of a private local model. Requires restart. Nomic API Key API key to use for Nomic Embed. Get one from the Atlas <a href="https://atlas.nomic.ai/cli-login">API keys page</a>. Requires restart. Embeddings Device The compute device used for embeddings. "Auto" uses the CPU. Requires restart. Display Show Sources Display the sources used for each response. Advanced Warning: Advanced usage only. Values too large may cause localdocs failure, extremely slow responses, or failure to respond at all. Roughly speaking, the {N chars x N snippets} are added to the model's context window. More info <a href="https://docs.gpt4all.io/gpt4all_desktop/localdocs.html">here</a>. Document snippet size (characters) Number of characters per document snippet. Larger numbers increase likelihood of factual responses, but also result in slower generation. Max document snippets per prompt Max best N matches of retrieved document snippets to add to the context for the prompt. Larger numbers increase likelihood of factual responses, but also result in slower generation.
LocalDocsView LocalDocs Chat with your local files + Add Collection ERROR: The LocalDocs database is not valid. No Collections Installed Install a collection of local documents to get started using this feature + Add Doc Collection Shows the add collection view Indexing progressBar Shows the progress made in the indexing ERROR INDEXING EMBEDDING REQUIRES UPDATE READY INSTALLING Indexing in progress Embedding in progress This collection requires an update after version change Automatically reindexes upon changes to the folder Installation in progress % %n file(s) %n word(s) Remove Rebuild Reindex this folder from scratch. This is slow and usually not needed. Update Update the collection to the new version. This is a slow operation. ModelList <ul><li>Requires personal OpenAI API key.</li><li>WARNING: Will send your chats to OpenAI!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with OpenAI</li><li>You can apply for an API key <a href="https://platform.openai.com/account/api-keys">here</a>.</li></ul> <strong>OpenAI's ChatGPT model GPT-3.5 Turbo</strong><br> %1 <strong>OpenAI's ChatGPT model GPT-4</strong><br> %1 %2 <strong>Mistral Tiny model</strong><br> %1 <strong>Mistral Small model</strong><br> %1 <strong>Mistral Medium model</strong><br> %1 <br><br><i>* Even if you pay OpenAI for ChatGPT-4, this does not guarantee API key access. Contact OpenAI for more info.</i>
<ul><li>Requires personal Mistral API key.</li><li>WARNING: Will send your chats to Mistral!</li><li>Your API key will be stored on disk</li><li>Will only be used to communicate with Mistral</li><li>You can apply for an API key <a href="https://console.mistral.ai/user/api-keys">here</a>.</li></ul> <strong>Created by %1.</strong><br><ul><li>Published on %2.<li>This model has %3 likes.<li>This model has %4 downloads.<li>More info can be found <a href="https://huggingface.co/%5">here</a>.</ul> ModelSettings Model Model Settings Clone Remove Name Model File System Prompt Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens. Prompt Template The template that wraps every prompt. Must contain the string "%1" to be replaced with the user's input. Chat Name Prompt Prompt used to automatically generate chat names. Suggested FollowUp Prompt Prompt used to generate suggested follow-up questions. Context Length Number of input and output tokens the model sees. Maximum combined prompt/response tokens before information is lost. Using more context than the model was trained on will yield poor results. NOTE: Does not take effect until you reload the model. Temperature Randomness of model output. Higher -> more variation. Temperature increases the chances of choosing less likely tokens. NOTE: Higher temperature gives more creative but less predictable outputs. Top-P Nucleus Sampling factor. Lower -> more predictable. Only the most likely tokens up to a total probability of top_p can be chosen. NOTE: Prevents choosing highly unlikely tokens. Min-P Minimum token probability. Higher -> more predictable. Sets the minimum relative probability for a token to be considered. Top-K Size of selection pool for tokens. Only the top K most likely tokens will be chosen from. Max Length Maximum response length, in tokens. Prompt Batch Size The batch size used for prompt processing. Amount of prompt tokens to process at once.
NOTE: Higher values can speed up reading prompts but will use more RAM. Repeat Penalty Repetition penalty factor. Set to 1 to disable. Repeat Penalty Tokens Number of previous tokens used for penalty. GPU Layers Number of model layers to load into VRAM. How many model layers to load into VRAM. Decrease this if GPT4All runs out of VRAM while loading this model. Lower values increase CPU load and RAM usage, and make inference slower. NOTE: Does not take effect until you reload the model. ModelsView No Models Installed Install a model to get started using GPT4All + Add Model Shows the add model view Installed Models Locally installed chat models Model file Model file to be downloaded Description File description Cancel Resume Stop/restart/start the download Remove Remove model from filesystem Install Install online model <strong><font size="1"><a href="#error">Error</a></font></strong> <strong><font size="2">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</font></strong> %1 GB ? Describes an error that occurred when downloading Error for incompatible hardware Download progressBar Shows the progress made in the download Download speed Download speed in bytes/kilobytes/megabytes per second Calculating... Whether the file hash is being calculated Busy indicator Displayed when the file hash is being calculated enter $API_KEY File size RAM required Parameters Quant Type MyFancyLink Fancy link A stylized link MySettingsStack Please choose a directory MySettingsTab Restore Defaults Restores settings dialog to a default state NetworkDialog Contribute data to the GPT4All Open-Source Datalake. By enabling this feature, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake.
Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake. You should have no expectation of chat privacy when this feature is enabled. You should, however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! Terms for opt-in Describes what will happen when you opt-in Please provide a name for attribution (optional) Attribution (optional) Provide attribution Enable Enable opt-in Cancel Cancel opt-in NewVersionDialog New version is available Update Update to new version PopupDialog Reveals a short-lived help balloon Busy indicator Displayed when the popup is showing busy QObject Default SettingsView Settings Contains various application settings Application Model LocalDocs StartupDialog Welcome! ### Release notes %1### Contributors %2 Release notes Release notes for this version ### Opt-ins for anonymous usage analytics and datalake By enabling these features, you will be able to participate in the democratic process of training a large language model by contributing data for future model improvements. When a GPT4All model responds to you and you have opted-in, your conversation will be sent to the GPT4All Open Source Datalake. Additionally, you can like/dislike its response. If you dislike a response, you can suggest an alternative response. This data will be collected and aggregated in the GPT4All Datalake. NOTE: By turning on this feature, you will be sending your data to the GPT4All Open Source Datalake.
You should have no expectation of chat privacy when this feature is enabled. You should, however, have an expectation of an optional attribution if you wish. Your chat data will be openly available for anyone to download and will be used by Nomic AI to improve future GPT4All models. Nomic AI will retain all attribution information attached to your data and you will be credited as a contributor to any GPT4All model release that uses your data! Terms for opt-in Describes what will happen when you opt-in Opt-in for anonymous usage statistics Yes Allow opt-in for anonymous usage statistics No Opt-out for anonymous usage statistics Allow opt-out for anonymous usage statistics Opt-in for network Allow opt-in for network Allow opt-in to anonymous sharing of chats to the GPT4All Datalake Opt-out for network Allow opt-out of anonymous sharing of chats to the GPT4All Datalake SwitchModelDialog <b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue? Continue Continue with model loading Cancel ThumbsDownDialog Please edit the text below to provide a better response. (optional) Please provide a better response... Submit Submits the user's response Cancel Closes the response dialog main <h3>Encountered an error starting up:</h3><br><i>"Incompatible hardware detected."</i><br><br>Unfortunately, your CPU does not meet the minimum requirements to run this program. In particular, it does not support AVX intrinsics, which this program requires to successfully run a modern large language model. The only solution at this time is to upgrade your hardware to a more modern CPU.<br><br>See here for more information: <a href="https://en.wikipedia.org/wiki/Advanced_Vector_Extensions">https://en.wikipedia.org/wiki/Advanced_Vector_Extensions</a> GPT4All v%1 <h3>Encountered an error starting up:</h3><br><i>"Inability to access settings file."</i><br><br>Unfortunately, something is preventing the program from accessing the settings file.
This could be caused by incorrect permissions in the local app config directory where the settings file is located. Check out our <a href="https://discord.gg/4M2QFmTt2k">discord channel</a> for help. Connection to the datalake failed. Saving chats. Network dialog opt-in to share feedback/conversations Home view Home view of application Home Chat view Chat view to interact with models Chats Models Models view for installed models LocalDocs LocalDocs view to configure and use local docs Settings Settings view for application configuration The datalake is enabled Using a network model Server mode is enabled Installed models View of installed models