Compare commits


No commits in common. "main" and "v3.10.0" have entirely different histories.

5 changed files with 28 additions and 33 deletions


@@ -1,7 +1,7 @@
 version: 2.1
 setup: true
 orbs:
-  path-filtering: circleci/path-filtering@1.3.0
+  path-filtering: circleci/path-filtering@1.1.0
 workflows:
   version: 2.1
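
For readers unfamiliar with CircleCI setup workflows, the sketch below shows how a path-filtering orb like the one pinned above is typically invoked. It is a minimal, hypothetical example: the workflow name, continuation config path, and mapping rules are illustrative assumptions, not taken from this repository.

version: 2.1
setup: true
orbs:
  path-filtering: circleci/path-filtering@1.1.0
workflows:
  generate-config:
    jobs:
      # The filter job diffs the pushed revision against base-revision, sets the
      # pipeline parameters listed in `mapping` for paths that changed, and then
      # triggers the continuation config found at config-path.
      - path-filtering/filter:
          base-revision: main
          config-path: .circleci/continue_config.yml  # illustrative path
          mapping: |
            gpt4all-chat/.* run-chat-workflow true
            gpt4all-bindings/python/.* run-python-workflow true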


@@ -35,11 +35,6 @@ GPT4All is made possible by our compute partner <a href="https://www.paperspace.
   <img src="gpt4all-bindings/python/docs/assets/windows.png" style="height: 1em; width: auto" /> Windows Installer
 </a> &mdash;
 </p>
-<p>
-&mdash; <a href="https://gpt4all.io/installers/gpt4all-installer-win64-arm.exe">
-  <img src="gpt4all-bindings/python/docs/assets/windows.png" style="height: 1em; width: auto" /> Windows ARM Installer
-</a> &mdash;
-</p>
 <p>
 &mdash; <a href="https://gpt4all.io/installers/gpt4all-installer-darwin.dmg">
   <img src="gpt4all-bindings/python/docs/assets/mac.png" style="height: 1em; width: auto" /> macOS Installer
@@ -51,16 +46,10 @@ GPT4All is made possible by our compute partner <a href="https://www.paperspace.
 </a> &mdash;
 </p>
 <p>
-The Windows and Linux builds require Intel Core i3 2nd Gen / AMD Bulldozer, or better.
+Windows and Linux require Intel Core i3 2nd Gen / AMD Bulldozer, or better. x86-64 only, no ARM.
 </p>
 <p>
-The Windows ARM build supports Qualcomm Snapdragon and Microsoft SQ1/SQ2 processors.
-</p>
-<p>
-The Linux build is x86-64 only (no ARM).
-</p>
-<p>
-The macOS build requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
+macOS requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
 </p>
 See the full [System Requirements](gpt4all-chat/system_requirements.md) for more details.


@@ -4,9 +4,9 @@ include(../common/common.cmake)
 set(APP_VERSION_MAJOR 3)
 set(APP_VERSION_MINOR 10)
-set(APP_VERSION_PATCH 1)
+set(APP_VERSION_PATCH 0)
 set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
-set(APP_VERSION "${APP_VERSION_BASE}-dev0")
+set(APP_VERSION "${APP_VERSION_BASE}")
 project(gpt4all VERSION ${APP_VERSION_BASE} LANGUAGES CXX C)


@@ -1,15 +1,26 @@
 ## Latest News
-GPT4All v3.10.0 was released on February 24th. Changes include:
-* **Remote Models:**
-  * The Add Model page now has a dedicated tab for remote model providers.
-  * Groq, OpenAI, and Mistral remote models are now easier to configure.
-* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.
-* **New Model:** The non-MoE Granite model is now supported.
-* **Translation Updates:**
-  * The Italian translation has been updated.
-  * The Simplified Chinese translation has been significantly improved.
-* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.
-* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.
-* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.
+GPT4All v3.9.0 was released on February 4th. Changes include:
+* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.
+* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.
+* **Windows ARM Improvements:**
+  * Graphical artifacts on some SoCs have been fixed.
+  * A crash when adding a collection of PDFs to LocalDocs has been fixed.
+* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.
+* **New Models:** OLMoE and Granite MoE models are now supported.
+
+GPT4All v3.8.0 was released on January 30th. Changes include:
+
+* **Native DeepSeek-R1-Distill Support:** GPT4All now has robust support for the DeepSeek-R1 family of distillations.
+  * Several model variants are now available on the downloads page.
+  * Reasoning (wrapped in "think" tags) is displayed similarly to the Reasoner model.
+  * The DeepSeek-R1 Qwen pretokenizer is now supported, resolving the loading failure in previous versions.
+  * The model is now configured with a GPT4All-compatible prompt template by default.
+* **Chat Templating Overhaul:** The template parser has been *completely* replaced with one that has much better compatibility with common models.
+* **Code Interpreter Fixes:**
+  * An issue preventing the code interpreter from logging a single string in v3.7.0 has been fixed.
+  * The UI no longer freezes while the code interpreter is running a computation.
+* **Local Server Fixes:**
+  * An issue preventing the server from using LocalDocs after the first request since v3.5.0 has been fixed.
+  * System messages are now correctly hidden from the message history.


@@ -273,10 +273,5 @@
     "version": "3.9.0",
     "notes": "* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.\n* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.\n* **Windows ARM Improvements:**\n * Graphical artifacts on some SoCs have been fixed.\n * A crash when adding a collection of PDFs to LocalDocs has been fixed.\n* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.\n* **New Models:** OLMoE and Granite MoE models are now supported.\n",
     "contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)"
-  },
-  {
-    "version": "3.10.0",
-    "notes": "* **Remote Models:**\n * The Add Model page now has a dedicated tab for remote model providers.\n * Groq, OpenAI, and Mistral remote models are now easier to configure.\n* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.\n* **New Model:** The non-MoE Granite model is now supported.\n* **Translation Updates:**\n * The Italian translation has been updated.\n * The Simplified Chinese translation has been significantly improved.\n* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.\n* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.\n* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.\n",
-    "contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)\n* Lil Bob (`@Junior2Ran`)\n* Riccardo Giovanetti (`@Harvester62`)"
   }
 ]