# Changelog
All notable changes to this project will be documented in this file.
The format is based on Keep a Changelog.
## Unreleased

### Added

### Changed
- Add missing entries to Italian translation (by @Harvester62 in #2783)
- Use llama_kv_cache ops to shift context faster (#2781)
- Don't stop generating at end of context (#2781)
### Fixed
- Case-insensitive LocalDocs source icon detection (by @cosmic-snow in #2761)
- Fix comparison of pre- and post-release versions for update check and models3.json (#2762, #2772)
- Fix several backend issues (#2778)
- Make reverse prompt detection work more reliably and prevent it from breaking output (#2781)
- Disallow context shift for chat name and follow-up generation to prevent bugs (#2781)
## 3.1.1 - 2024-07-27

### Added
- Add Llama 3.1 8B Instruct to models3.json (by @3Simplex in #2731 and #2732)
- Portuguese (BR) translation (by thiagojramos in #2733)
- Support adding arbitrary OpenAI-compatible models by URL (by @supersonictw in #2683)
- Support Llama 3.1 RoPE scaling (#2758)
### Changed
- Add missing entries to Chinese (Simplified) translation (by wuodoo in #2716 and #2749)
- Update translation files and add missing paths to CMakeLists.txt (#2735)
## 3.1.0 - 2024-07-24

### Added
- Generate suggested follow-up questions (#2634, #2723)
  - Also add options for the chat name and follow-up question prompt templates
- Scaffolding for translations (#2612)
- Spanish (MX) translation (by @jstayco in #2654)
- Chinese (Simplified) translation by mikage (#2657)
- Dynamic changes of language and locale at runtime (#2659, #2677)
- Romanian translation by @SINAPSA_IC (#2662)
- Chinese (Traditional) translation (by @supersonictw in #2661)
- Italian translation (by @Harvester62 in #2700)
### Changed
- Customize combo boxes and context menus to fit the new style (#2535)
- Improve view bar scaling and Model Settings layout (#2520)
- Make the logo spin while the model is generating (#2557)
- Server: Reply to wrong GET/POST method with HTTP 405 instead of 404 (by @cosmic-snow in #2615)
- Update theme for menus (by @3Simplex in #2578)
- Move the "stop" button to the message box (#2561)
- Build with CUDA 11.8 for better compatibility (#2639)
- Make links in latest news section clickable (#2643)
- Support translation of settings choices (#2667, #2690)
- Improve LocalDocs view's error message (by @cosmic-snow in #2679)
- Ignore case of LocalDocs file extensions (#2642, #2684)
- Update llama.cpp to commit 87e397d00 from July 19th (#2694, #2702)
  - Add support for GPT-NeoX, Gemma 2, OpenELM, ChatGLM, and Jais architectures (all with Vulkan support)
  - Add support for DeepSeek-V2 architecture (no Vulkan support)
  - Enable Vulkan support for StarCoder2, XVERSE, Command R, and OLMo
- Show scrollbar in chat collections list as needed (by @cosmic-snow in #2691)
### Removed

### Fixed
- Fix placement of thumbs-down and datalake opt-in dialogs (#2540)
- Select the correct folder with the Linux fallback folder dialog (#2541)
- Fix clone button sometimes producing blank model info (#2545)
- Fix jerky chat view scrolling (#2555)
- Fix "reload" showing for chats with missing models (#2520)
- Fix property binding loop warning (#2601)
- Fix UI hang with certain chat view content (#2543)
- Fix crash when Kompute falls back to CPU (#2640)
- Fix several Vulkan resource management issues (#2694)
- Fix crash/hang when some models stop generating, by showing special tokens (#2701)