Compare commits


68 Commits
v3.7.0 ... main

Author SHA1 Message Date
Jared Van Bortel
b666d16db5
ci: update path-filtering orb to 1.3.0 (#3588)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-05-27 15:46:52 -04:00
Jared Van Bortel
cd70db29ed
readme: add Windows ARM download link
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 19:51:59 -05:00
Jared Van Bortel
fb72ba1ff5 chat: bump version to 3.10.1-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 19:44:45 -05:00
Jared Van Bortel
b968d45c11
chat: release version 3.10.0 (#3515)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 19:41:13 -05:00
Jared Van Bortel
228d5379cf
chat: cut v3.10.0 release (#3511)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 17:15:34 -05:00
Jared Van Bortel
dd820ef7c4
Italian and draft Simplified Chinese translations for v3.10.0 (#3514)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 17:14:10 -05:00
Jared Van Bortel
a7cbc8c3fd
Run lupdate before v3.10.0 release (#3512)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 15:33:27 -05:00
AT
4d171835ac
Add new remote model provider view. (#3506)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 14:59:53 -05:00
Lil Bob
0c28ee7059
Translations: Improve Chinese translation (#3467)
Signed-off-by: Junior2Ran <hdr01@126.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-20 20:44:28 -05:00
Jared Van Bortel
96aeb44210
backend: build with CUDA compute 5.0 support by default (#3499)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-19 11:27:06 -05:00
Jared Van Bortel
29f29773af
chat: require Qt 6.8 and fix #includes (#3498)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-18 13:59:50 -05:00
Jared Van Bortel
d8c04cead8
ci: use LLVM Clang 19 on macOS and Ubuntu (#3500)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-18 12:02:14 -05:00
Riccardo Giovanetti
b1cb46ec2a
Italian localization update (#3496)
Signed-off-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-18 11:47:39 -05:00
Jared Van Bortel
b83d06e67f translations: run lupdate -no-obsolete on Simplified Chinese
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-13 11:27:04 -05:00
Jared Van Bortel
7aa339cf40 translations: run lupdate
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-13 11:26:28 -05:00
ThiloteE
1b84182030
Add replacement templates for OLMoE and granite-3.1 (#3471)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-12 14:23:46 -05:00
ThiloteE
02e12089d3
Add Granite arch to model whitelist (#3487)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-12 14:17:49 -05:00
Jared Van Bortel
09f37a0ff8
maintainers: remove extra bracket
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-11 14:49:46 -05:00
AT
5e7e4b3f78
Fix spacing issues with deepseek models: (#3470)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2025-02-06 12:04:32 -05:00
Jared Van Bortel
22ebd42c32
Misc fixes for undefined behavior, crashes, and build failure (#3465)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-06 11:22:52 -05:00
Jared Van Bortel
051a63f031 ci: fix scheduled workflow jobs
s/online/offline/

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-05 11:56:53 -05:00
Jared Van Bortel
26356f872e chat: bump version to 3.9.1-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 19:15:20 -05:00
Jared Van Bortel
22b8bc546f
chat: release version 3.9.0 (#3462)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 19:12:17 -05:00
Jared Van Bortel
52164142de changelog: fix missing paren
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:14:30 -05:00
Jared Van Bortel
be6347389e
chat: cut v3.9.0 release (#3461)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:09:15 -05:00
Jared Van Bortel
8c10eccd24 changelog: fix missing credit
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:08:06 -05:00
ThiloteE
6ef0bd518e
Whitelist OLMoE and Granite MoE (#3449)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:00:07 -05:00
Jared Van Bortel
04dc157b98
minja: update submodule to fix {# hang (redo) (#3457)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 17:30:04 -05:00
Jared Van Bortel
014bf67c63
Fix PDFium abuse that leads to a crash on Windows ARM (#3460)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 17:29:01 -05:00
Jared Van Bortel
8c9f26e249
Ignore DeepSeek-R1 "think" content in name/follow-up responses (#3458)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 12:08:17 -05:00
Andriy Mulyar
d4e6a6e485
Update README.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2025-02-03 17:40:53 -05:00
Jared Van Bortel
a081255951 Revert "minja: update submodule to fix {# hang (#3446)"
This reverts commit c38c7455d890ea242ed32bca8d9467b8768af296.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 12:44:27 -05:00
Jared Van Bortel
36c852b8be
chat: work around Direct3D 11 rendering artifacts on win11 arm (#3450)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 11:47:40 -05:00
Jared Van Bortel
c38c7455d8
minja: update submodule to fix {# hang (#3446)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 11:25:21 -05:00
Jared Van Bortel
9131f4c432
Fix index used by LocalDocs when tool calling/thinking is active (#3451)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 11:22:46 -05:00
Jared Van Bortel
6bfa014594 cmake: remove reference to deleted README
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-31 16:26:17 -05:00
Jared Van Bortel
5af31278b7
ci: update to Qt 6.8.2 (#3442)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-31 11:20:50 -05:00
Jared Van Bortel
a80f023ed2
chat: release version 3.8.0 (#3439)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 20:06:42 -05:00
Jared Van Bortel
126042fdc9 remove ancient README
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 19:27:44 -05:00
Jared Van Bortel
1f2712d57c
chat: fix emoji corruption (#3443)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 18:15:37 -05:00
Jared Van Bortel
f8f78c6677 ci: allow generate-config to run on tags
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:53:14 -05:00
Jared Van Bortel
643c733be3 ci: fix missing job_allow_tags
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:50:00 -05:00
Jared Van Bortel
0734694fb8 ci: remove conflicting pipeline.git.branch requirement
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:47:58 -05:00
Jared Van Bortel
e267512db9
chat: cut v3.8.0 release (#3441)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:37:02 -05:00
Jared Van Bortel
34037f3101
models: add DeepSeek-R1 distillations to official models list (#3437)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:23:41 -05:00
AT
007a7af1c8
Display DeepSeek-R1 thinking like Reasoner (#3440)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:11:05 -05:00
Jared Van Bortel
f914ee56c9
chat: replace Jinja2Cpp with minja (#3433)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:01:49 -05:00
Jared Van Bortel
8a0ec5c303 ci: add missing signing holds to Windows ARM builds
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 15:23:18 -05:00
Jared Van Bortel
c2ee252ef2 chat: bump version to 3.8.0-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 13:12:47 -05:00
Jared Van Bortel
64dcf7682e
ci: build offline installers when pipeline is scheduled (#3436)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 13:07:47 -05:00
AT
22b8278ef1
Don't block the gui thread for tool calls (#3435)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-29 18:33:08 -05:00
Jared Van Bortel
adafa17c37
ci: verify that installers we build function and are signed (#3432)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-29 11:29:20 -05:00
Jared Van Bortel
343a4b6b6a
Support DeepSeek-R1 Qwen (#3431)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-29 09:51:50 -05:00
Jared Van Bortel
6a8a840681
ci: selective signing and automatic release builds (#3430)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-28 17:41:01 -05:00
ThiloteE
88f5dac133
[Jinja] Fix typo in Phi-3.1-mini-128k-instruct replacement template (#3412)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-01-28 16:54:15 -05:00
Jared Van Bortel
0d974297a5
codeinterpreter: permit console.log with single string arg (#3426)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-27 15:22:20 -05:00
Jared Van Bortel
4fbc20ced9
cmake: do not modify gpt4all.app after signing it (#3417)
Signed-off-by: AT <manyoso@users.noreply.github.com>
2025-01-24 14:15:24 -05:00
Jared Van Bortel
f4f7de51e7 Revert "cmake: do not modify gpt4all.app after signing it (#3413)"
This reverts commit c01ac7fa933ae135dc8d9eed9dcbc2890dff38e3.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-24 13:21:34 -05:00
Jared Van Bortel
c01ac7fa93
cmake: do not modify gpt4all.app after signing it (#3413)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
2025-01-24 12:57:55 -05:00
Jared Van Bortel
173fdb18c2
Update to Qt 6.8.1 (#3386)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-24 10:29:59 -05:00
AT
8790586e57
Server view fix (#3411)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-24 10:29:28 -05:00
AT
b98501c786
Fix regression while using localdocs with server API. (#3410)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-24 10:26:24 -05:00
Jared Van Bortel
49df6464a7 chat: bump version to v3.7.1-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 15:59:59 -05:00
Jared Van Bortel
6b719e99b5 metadata: fix typo
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 15:22:54 -05:00
Jared Van Bortel
d85fe40de8
chat: release version 3.7.0 (#3407)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 15:17:13 -05:00
Jared Van Bortel
15f66570fe
ci: fix macOS codesigning (#3408)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 11:41:34 -05:00
Jared Van Bortel
a97a28fe4f changelog: fix reference to wrong macOS version
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-22 13:09:01 -05:00
Jared Van Bortel
df2d124c19 changelog: add missing link
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-22 11:38:26 -05:00
94 changed files with 5488 additions and 3249 deletions


@@ -1,13 +1,17 @@
version: 2.1
setup: true
orbs:
path-filtering: circleci/path-filtering@1.1.0
path-filtering: circleci/path-filtering@1.3.0
workflows:
version: 2.1
generate-config:
jobs:
- path-filtering/filter:
filters:
tags:
only:
- /.*/
base-revision: main
config-path: .circleci/continue_config.yml
mapping: |

File diff suppressed because it is too large

.gitmodules (vendored) · 12 lines changed

@@ -17,9 +17,9 @@
[submodule "gpt4all-chat/deps/QXlsx"]
path = gpt4all-chat/deps/QXlsx
url = https://github.com/nomic-ai/QXlsx.git
[submodule "gpt4all-chat/deps/Jinja2Cpp"]
path = gpt4all-chat/deps/Jinja2Cpp
url = https://github.com/nomic-ai/jinja2cpp.git
[submodule "gpt4all-chat/deps/rapidjson"]
path = gpt4all-chat/deps/rapidjson
url = https://github.com/nomic-ai/rapidjson.git
[submodule "gpt4all-chat/deps/minja"]
path = gpt4all-chat/deps/minja
url = https://github.com/nomic-ai/minja.git
[submodule "gpt4all-chat/deps/json"]
path = gpt4all-chat/deps/json
url = https://github.com/nlohmann/json.git


@@ -72,6 +72,6 @@ Discord: `@Tim453`
- Flatpak
Jack ([@wuodoo](https://github.com/wuodoo))<br/>
E-mail: 2296103047@qq.com><br/>
E-mail: 2296103047@qq.com<br/>
Discord: `@mikage`
- zh\_CN translation


@@ -1,5 +1,9 @@
<h1 align="center">GPT4All</h1>
<p align="center">
Now with support for DeepSeek R1 Distillations
</p>
<p align="center">
<a href="https://www.nomic.ai/gpt4all">Website</a> &bull; <a href="https://docs.gpt4all.io">Documentation</a> &bull; <a href="https://discord.gg/mGZE39AS3e">Discord</a> &bull; <a href="https://www.youtube.com/watch?v=gQcZDXRVJok">YouTube Tutorial</a>
</p>
@@ -31,6 +35,11 @@ GPT4All is made possible by our compute partner <a href="https://www.paperspace.
<img src="gpt4all-bindings/python/docs/assets/windows.png" style="height: 1em; width: auto" /> Windows Installer
</a> &mdash;
</p>
<p>
&mdash; <a href="https://gpt4all.io/installers/gpt4all-installer-win64-arm.exe">
<img src="gpt4all-bindings/python/docs/assets/windows.png" style="height: 1em; width: auto" /> Windows ARM Installer
</a> &mdash;
</p>
<p>
&mdash; <a href="https://gpt4all.io/installers/gpt4all-installer-darwin.dmg">
<img src="gpt4all-bindings/python/docs/assets/mac.png" style="height: 1em; width: auto" /> macOS Installer
@@ -42,10 +51,16 @@ GPT4All is made possible by our compute partner <a href="https://www.paperspace.
</a> &mdash;
</p>
<p>
Windows and Linux require Intel Core i3 2nd Gen / AMD Bulldozer, or better. x86-64 only, no ARM.
The Windows and Linux builds require Intel Core i3 2nd Gen / AMD Bulldozer, or better.
</p>
<p>
macOS requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
The Windows ARM build supports Qualcomm Snapdragon and Microsoft SQ1/SQ2 processors.
</p>
<p>
The Linux build is x86-64 only (no ARM).
</p>
<p>
The macOS build requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
</p>
See the full [System Requirements](gpt4all-chat/system_requirements.md) for more details.


@@ -69,7 +69,7 @@ if (LLMODEL_CUDA)
cmake_minimum_required(VERSION 3.18) # for CMAKE_CUDA_ARCHITECTURES
# Defaults must be set before enable_language(CUDA).
# Keep this in sync with the arch list in ggml/src/CMakeLists.txt.
# Keep this in sync with the arch list in ggml/src/CMakeLists.txt (plus 5.0 for non-F16 branch).
if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
# 52 == lowest CUDA 12 standard
# 60 == f16 CUDA intrinsics
@@ -78,7 +78,7 @@ if (LLMODEL_CUDA)
if (GGML_CUDA_F16 OR GGML_CUDA_DMMV_F16)
set(CMAKE_CUDA_ARCHITECTURES "60;61;70;75") # needed for f16 CUDA intrinsics
else()
set(CMAKE_CUDA_ARCHITECTURES "52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
set(CMAKE_CUDA_ARCHITECTURES "50;52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
#set(CMAKE_CUDA_ARCHITECTURES "OFF") # use this to compile much faster, but only F16 models work
endif()
endif()

@@ -1 +1 @@
Subproject commit 58a55efc4ae5dd3bc12887d47981faa7136027af
Subproject commit 11f734c3b0334dbae4823b4a7467764e447fc6d6


@@ -53,6 +53,8 @@ static const std::vector<const char *> KNOWN_ARCHES {
"gpt2",
// "gptj", -- no inference code
"gptneox",
"granite",
"granitemoe",
"mpt",
"baichuan",
"starcoder",
@@ -80,6 +82,7 @@ static const std::vector<const char *> KNOWN_ARCHES {
"command-r",
// "dbrx", -- 16x12B parameters
"olmo",
"olmoe",
"openelm",
// "arctic", -- 10B+128x3.66B parameters
"deepseek2",


@@ -140,9 +140,14 @@ const std::vector<LLModel::Implementation> &LLModel::Implementation::implementat
std::string path;
// Split the paths string by the delimiter and process each path.
while (std::getline(ss, path, ';')) {
std::u8string u8_path(path.begin(), path.end());
fs::directory_iterator iter;
try {
iter = fs::directory_iterator(std::u8string(path.begin(), path.end()));
} catch (const fs::filesystem_error &) {
continue; // skip nonexistent path
}
// Iterate over all libraries
for (const auto &f : fs::directory_iterator(u8_path)) {
for (const auto &f : iter) {
const fs::path &p = f.path();
if (p.extension() != LIB_FILE_EXT) continue;
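
The hunk above wraps `fs::directory_iterator` construction in a try/catch so a nonexistent entry in the search-path list is skipped rather than aborting the whole scan. A minimal self-contained sketch of that pattern follows; the path list and extension are illustrative, not GPT4All's real defaults:

```cpp
#include <cassert>
#include <filesystem>
#include <sstream>
#include <string>
#include <vector>

namespace fs = std::filesystem;

// Collect filenames with the given extension from a ';'-delimited path list,
// tolerating entries that do not exist on disk.
std::vector<std::string> collectLibraries(const std::string &paths, const std::string &ext)
{
    std::vector<std::string> found;
    std::istringstream ss(paths);
    std::string path;
    while (std::getline(ss, path, ';')) {
        fs::directory_iterator iter;
        try {
            // fs::u8path stands in for the std::u8string conversion in the
            // original code (deprecated in C++20 but still functional).
            iter = fs::directory_iterator(fs::u8path(path));
        } catch (const fs::filesystem_error &) {
            continue; // skip nonexistent path instead of crashing
        }
        for (const auto &f : iter) {
            if (f.path().extension() == ext)
                found.push_back(f.path().filename().string());
        }
    }
    return found;
}
```

With this guard in place, a search-path string containing stale or missing directories simply yields no results for those entries, which is the crash fix (#3465-adjacent behavior) the diff applies.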


@@ -4,11 +4,62 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [3.10.0] - 2025-02-24
### Added
- Whitelist Granite (non-MoE) model architecture (by [@ThiloteE](https://github.com/ThiloteE) in [#3487](https://github.com/nomic-ai/gpt4all/pull/3487))
- Add support for CUDA compute 5.0 GPUs such as the GTX 750 ([#3499](https://github.com/nomic-ai/gpt4all/pull/3499))
- Add a Remote Providers tab to the Add Model page ([#3506](https://github.com/nomic-ai/gpt4all/pull/3506))
### Changed
- Substitute prettier default templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B (by [@ThiloteE](https://github.com/ThiloteE) in [#3471](https://github.com/nomic-ai/gpt4all/pull/3471))
- Build with LLVM Clang 19 on macOS and Ubuntu ([#3500](https://github.com/nomic-ai/gpt4all/pull/3500))
### Fixed
- Fix several potential crashes ([#3465](https://github.com/nomic-ai/gpt4all/pull/3465))
- Fix visual spacing issues with deepseek models ([#3470](https://github.com/nomic-ai/gpt4all/pull/3470))
- Add missing strings to Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#3496](https://github.com/nomic-ai/gpt4all/pull/3496))
- Update Simplified Chinese translation (by [@Junior2Ran](https://github.com/Junior2Ran) in [#3467](https://github.com/nomic-ai/gpt4all/pull/3467))
## [3.9.0] - 2025-02-04
### Added
- Whitelist OLMoE and Granite MoE model architectures (no Vulkan) (by [@ThiloteE](https://github.com/ThiloteE) in [#3449](https://github.com/nomic-ai/gpt4all/pull/3449))
### Fixed
- Fix "index N is not a prompt" when using LocalDocs with reasoning ([#3451](https://github.com/nomic-ai/gpt4all/pull/3451))
- Work around rendering artifacts on Snapdragon SoCs with Windows ([#3450](https://github.com/nomic-ai/gpt4all/pull/3450))
- Prevent DeepSeek-R1 reasoning from appearing in chat names and follow-up questions ([#3458](https://github.com/nomic-ai/gpt4all/pull/3458))
- Fix LocalDocs crash on Windows ARM when reading PDFs ([#3460](https://github.com/nomic-ai/gpt4all/pull/3460))
- Fix UI freeze when chat template is `{#` ([#3446](https://github.com/nomic-ai/gpt4all/pull/3446))
## [3.8.0] - 2025-01-30
### Added
- Support DeepSeek-R1 Qwen models ([#3431](https://github.com/nomic-ai/gpt4all/pull/3431))
- Support for think tags in the GUI ([#3440](https://github.com/nomic-ai/gpt4all/pull/3440))
- Support specifying SHA256 hash in models3.json instead of MD5 ([#3437](https://github.com/nomic-ai/gpt4all/pull/3437))
### Changed
- Use minja instead of Jinja2Cpp for significantly improved template compatibility ([#3433](https://github.com/nomic-ai/gpt4all/pull/3433))
### Fixed
- Fix regression while using localdocs with server API ([#3410](https://github.com/nomic-ai/gpt4all/pull/3410))
- Don't show system messages in server chat view ([#3411](https://github.com/nomic-ai/gpt4all/pull/3411))
- Fix `codesign --verify` failure on macOS ([#3413](https://github.com/nomic-ai/gpt4all/pull/3413))
- Code Interpreter: Fix console.log not accepting a single string after v3.7.0 ([#3426](https://github.com/nomic-ai/gpt4all/pull/3426))
- Fix Phi 3.1 Mini 128K Instruct template (by [@ThiloteE](https://github.com/ThiloteE) in [#3412](https://github.com/nomic-ai/gpt4all/pull/3412))
- Don't block the gui thread for reasoning ([#3435](https://github.com/nomic-ai/gpt4all/pull/3435))
- Fix corruption of unicode in output of reasoning models ([#3443](https://github.com/nomic-ai/gpt4all/pull/3443))
## [3.7.0] - 2025-01-21
### Added
- Add support for the Windows ARM64 target platform (CPU-only) ([#3385](https://github.com/nomic-ai/gpt4all/pull/3385))
### Changed
- Update from Qt 6.5.1 to 6.8.1 ([#3386](https://github.com/nomic-ai/gpt4all/pull/3386))
### Fixed
- Fix the timeout error in code interpreter ([#3369](https://github.com/nomic-ai/gpt4all/pull/3369))
- Fix code interpreter console.log not accepting multiple arguments ([#3371](https://github.com/nomic-ai/gpt4all/pull/3371))
@@ -17,7 +68,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Save chats on quit, even if the window isn't closed first ([#3387](https://github.com/nomic-ai/gpt4all/pull/3387))
- Add chat template replacements for five new models and fix EM German Mistral ([#3393](https://github.com/nomic-ai/gpt4all/pull/3393))
- Fix crash when entering `{{ a["foo"(` as chat template ([#3394](https://github.com/nomic-ai/gpt4all/pull/3394))
- Sign the maintenance tool on macOS to prevent crash on Sonoma ([#3391](https://github.com/nomic-ai/gpt4all/pull/3391))
- Sign the maintenance tool on macOS to prevent crash on Sequoia ([#3391](https://github.com/nomic-ai/gpt4all/pull/3391))
- Jinja2Cpp: Fix operator precedence in 'not X is defined' ([#3402](https://github.com/nomic-ai/gpt4all/pull/3402))
## [3.6.1] - 2024-12-20
@@ -261,6 +312,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix several Vulkan resource management issues ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
- Fix crash/hang when some models stop generating, by showing special tokens ([#2701](https://github.com/nomic-ai/gpt4all/pull/2701))
[3.10.0]: https://github.com/nomic-ai/gpt4all/compare/v3.9.0...v3.10.0
[3.9.0]: https://github.com/nomic-ai/gpt4all/compare/v3.8.0...v3.9.0
[3.8.0]: https://github.com/nomic-ai/gpt4all/compare/v3.7.0...v3.8.0
[3.7.0]: https://github.com/nomic-ai/gpt4all/compare/v3.6.1...v3.7.0
[3.6.1]: https://github.com/nomic-ai/gpt4all/compare/v3.6.0...v3.6.1
[3.6.0]: https://github.com/nomic-ai/gpt4all/compare/v3.5.3...v3.6.0
[3.5.3]: https://github.com/nomic-ai/gpt4all/compare/v3.5.2...v3.5.3


@@ -3,13 +3,17 @@ cmake_minimum_required(VERSION 3.25) # for try_compile SOURCE_FROM_VAR
include(../common/common.cmake)
set(APP_VERSION_MAJOR 3)
set(APP_VERSION_MINOR 7)
set(APP_VERSION_PATCH 0)
set(APP_VERSION_MINOR 10)
set(APP_VERSION_PATCH 1)
set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
set(APP_VERSION "${APP_VERSION_BASE}")
set(APP_VERSION "${APP_VERSION_BASE}-dev0")
project(gpt4all VERSION ${APP_VERSION_BASE} LANGUAGES CXX C)
if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
set(CMAKE_INSTALL_PREFIX ${CMAKE_BINARY_DIR}/install CACHE PATH "..." FORCE)
endif()
if(APPLE)
option(BUILD_UNIVERSAL "Build a Universal binary on macOS" OFF)
if(BUILD_UNIVERSAL)
@@ -31,6 +35,8 @@ option(GPT4ALL_SIGN_INSTALL "Sign installed binaries and installers (requires si
option(GPT4ALL_GEN_CPACK_CONFIG "Generate the CPack config.xml in the package step and nothing else." OFF)
set(GPT4ALL_USE_QTPDF "AUTO" CACHE STRING "Whether to Use QtPDF for LocalDocs. If OFF or not available on this platform, PDFium is used.")
set_property(CACHE GPT4ALL_USE_QTPDF PROPERTY STRINGS AUTO ON OFF)
set(GPT4ALL_FORCE_D3D12 "AUTO" CACHE STRING "Whether to use Direct3D 12 as the Qt scene graph backend. Defaults to ON on Windows ARM.")
set_property(CACHE GPT4ALL_FORCE_D3D12 PROPERTY STRINGS AUTO ON OFF)
include(cmake/cpack_config.cmake)
@@ -47,7 +53,7 @@ set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if (MSVC)
# Enable accurate __cplusplus macro to fix errors in Jinja2Cpp
# Enable accurate __cplusplus macro
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/Zc:__cplusplus>)
endif()
@@ -86,12 +92,6 @@ include_directories("${CMAKE_CURRENT_BINARY_DIR}")
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
# Generate a header file with the version number
configure_file(
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/config.h.in"
"${CMAKE_CURRENT_BINARY_DIR}/config.h"
)
set(CMAKE_FIND_PACKAGE_TARGETS_GLOBAL ON)
set(GPT4ALL_QT_COMPONENTS Core HttpServer LinguistTools Quick QuickDialogs2 Sql Svg)
set(GPT4ALL_USING_QTPDF OFF)
@@ -104,7 +104,7 @@ elseif (GPT4ALL_USE_QTPDF MATCHES "^(ON|AUTO)$")
set(GPT4ALL_USING_QTPDF ON)
list(APPEND GPT4ALL_QT_COMPONENTS Pdf)
endif()
find_package(Qt6 6.5 COMPONENTS ${GPT4ALL_QT_COMPONENTS} REQUIRED)
find_package(Qt6 6.8 COMPONENTS ${GPT4ALL_QT_COMPONENTS} REQUIRED)
if (QT_KNOWN_POLICY_QTP0004)
qt_policy(SET QTP0004 NEW) # generate extra qmldir files on Qt 6.8+
@@ -126,10 +126,24 @@ message(STATUS "Qt 6 root directory: ${Qt6_ROOT_DIR}")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
set(CMAKE_INSTALL_PREFIX ${CMAKE_BINARY_DIR}/install CACHE PATH "..." FORCE)
set(GPT4ALL_CONFIG_FORCE_D3D12 -1)
if (NOT CMAKE_SYSTEM_NAME MATCHES Windows OR Qt6_VERSION VERSION_LESS "6.6")
# Direct3D 12 is not available.
if (GPT4ALL_FORCE_D3D12 STREQUAL "ON")
message(FATAL_ERROR "Cannot use Direct3D 12 on this platform.")
endif()
elseif (GPT4ALL_FORCE_D3D12 MATCHES "^(ON|AUTO)$")
if (GPT4ALL_FORCE_D3D12 STREQUAL "ON" OR CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
set(GPT4ALL_CONFIG_FORCE_D3D12 1)
endif()
endif()
# Generate a header file for configuration
configure_file(
"${CMAKE_CURRENT_SOURCE_DIR}/src/config.h.in"
"${CMAKE_CURRENT_BINARY_DIR}/config.h"
)
add_subdirectory(deps)
add_subdirectory(../gpt4all-backend llmodel)
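
The hunk above computes `GPT4ALL_CONFIG_FORCE_D3D12` (-1 when Direct3D 12 is unavailable, 1 when forced or defaulted on Windows ARM) and substitutes it into a generated `config.h` via `configure_file()`. The real `src/config.h.in` is not shown in this diff, so the macro shape below is an assumption; only the variable name and tri-state encoding come from the CMake logic:

```cpp
#include <cassert>

// Stand-in for the configure_file()-generated config.h; a real build would
// substitute @GPT4ALL_CONFIG_FORCE_D3D12@ here instead of this default.
#ifndef GPT4ALL_CONFIG_FORCE_D3D12
#define GPT4ALL_CONFIG_FORCE_D3D12 -1
#endif

// True when the build configuration forces the Direct3D 12 scene graph
// backend -- the Windows ARM rendering-artifact workaround from #3450.
constexpr bool forceD3D12()
{
    return GPT4ALL_CONFIG_FORCE_D3D12 == 1;
}
```

At startup, application code could branch on this flag to select the Qt scene graph backend; on other platforms the default of -1 leaves Qt's own choice in place.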
@@ -252,6 +266,7 @@ qt_add_qml_module(chat
qml/AddModelView.qml
qml/AddGPT4AllModelView.qml
qml/AddHFModelView.qml
qml/AddRemoteModelView.qml
qml/ApplicationSettings.qml
qml/ChatDrawer.qml
qml/ChatCollapsibleItem.qml
@@ -300,6 +315,7 @@ qt_add_qml_module(chat
qml/MyTextField.qml
qml/MyToolButton.qml
qml/MyWelcomeButton.qml
qml/RemoteModelCard.qml
RESOURCES
icons/antenna_1.svg
icons/antenna_2.svg
@@ -330,6 +346,7 @@ qt_add_qml_module(chat
icons/gpt4all-48.png
icons/gpt4all.svg
icons/gpt4all_transparent.svg
icons/groq.svg
icons/home.svg
icons/image.svg
icons/info.svg
@@ -337,12 +354,14 @@ qt_add_qml_module(chat
icons/left_panel_open.svg
icons/local-docs.svg
icons/models.svg
icons/mistral.svg
icons/network.svg
icons/nomic_logo.svg
icons/notes.svg
icons/paperclip.svg
icons/plus.svg
icons/plus_circle.svg
icons/openai.svg
icons/recycle.svg
icons/regenerate.svg
icons/search.svg
@@ -437,7 +456,10 @@ else()
target_link_libraries(chat PRIVATE pdfium)
endif()
target_link_libraries(chat
PRIVATE llmodel SingleApplication fmt::fmt duckx::duckx QXlsx jinja2cpp)
PRIVATE llmodel SingleApplication fmt::fmt duckx::duckx QXlsx)
target_include_directories(chat PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/deps/json/include)
target_include_directories(chat PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/deps/json/include/nlohmann)
target_include_directories(chat PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/deps/minja/include)
if (APPLE)
target_link_libraries(chat PRIVATE ${COCOA_LIBRARY})
@@ -445,12 +467,18 @@ endif()
# -- install --
if (APPLE)
set(GPT4ALL_LIB_DEST bin/gpt4all.app/Contents/Frameworks)
else()
set(GPT4ALL_LIB_DEST lib)
endif()
install(TARGETS chat DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN})
install(
TARGETS llmodel
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN} # .dll
LIBRARY DESTINATION ${GPT4ALL_LIB_DEST} COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN} # .dll
)
# We should probably iterate through the list of the cmake for backend, but these need to be installed
@@ -473,8 +501,8 @@ endif()
install(
TARGETS ${MODEL_IMPL_TARGETS}
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .dll
LIBRARY DESTINATION ${GPT4ALL_LIB_DEST} COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .dll
)
if(APPLE AND GPT4ALL_SIGN_INSTALL)
@@ -503,7 +531,7 @@ if (LLMODEL_CUDA)
TARGETS llamamodel-mainline-cuda
llamamodel-mainline-cuda-avxonly
RUNTIME_DEPENDENCY_SET llama-cuda-deps
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so
RUNTIME DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .dll
)
if (WIN32)
@@ -520,9 +548,9 @@ endif()
if (NOT GPT4ALL_USING_QTPDF)
# Install PDFium
if (WIN32)
install(FILES "${PDFium_LIBRARY}" DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN})
install(FILES ${PDFium_LIBRARY} DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN}) # .dll
else()
install(FILES "${PDFium_LIBRARY}" DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN})
install(FILES ${PDFium_LIBRARY} DESTINATION ${GPT4ALL_LIB_DEST} COMPONENT ${COMPONENT_NAME_MAIN}) # .so/.dylib
endif()
endif()


@@ -1,45 +0,0 @@
# gpt4all-chat
Cross platform Qt based GUI for GPT4All versions with GPT-J as the base
model. NOTE: The model seen in the screenshot is actually a preview of a
new training run for GPT4All based on GPT-J. The GPT4All project is busy
at work getting ready to release this model including installers for all
three major OS's. In the meantime, you can try this UI out with the original
GPT-J model by following build instructions below.
![image](https://user-images.githubusercontent.com/50458173/231464085-da9edff6-a593-410e-8f38-7513f75c8aab.png)
## Install
One click installers for macOS, Linux, and Windows at https://www.nomic.ai/gpt4all
## Features
* Cross-platform (Linux, Windows, MacOSX)
* The UI is made to look and feel like you've come to expect from a chatty gpt
* Check for updates so you can always stay fresh with latest models
* Easy to install with precompiled binaries available for all three major desktop platforms
* Multi-modal - Ability to load more than one model and switch between them
* Multi-chat - a list of current and past chats and the ability to save/delete/export and switch between
* Supports models that are supported by llama.cpp
* Model downloader in GUI featuring many popular open source models
* Settings dialog to change temp, top_p, min_p, top_k, threads, etc
* Copy your conversation to clipboard
* RAG via LocalDocs feature
* Check for updates to get the very latest GUI
## Building and running
* Follow the visual instructions on the [build_and_run](build_and_run.md) page
## Getting the latest
If you've already checked out the source code and/or built the program make sure when you do a git fetch to get the latest changes and that you also do `git submodule update --init --recursive` to update the submodules. (If you ever run into trouble, deinitializing via `git submodule deinit -f .` and then initializing again via `git submodule update --init --recursive` fixes most issues)
## Contributing
* Pull requests welcome. See the feature wish list for ideas :)
## License
The source code of this chat interface is currently under a MIT license.


@@ -1,6 +0,0 @@
#ifndef CONFIG_H
#define CONFIG_H
#define APP_VERSION "@APP_VERSION@"
#endif // CONFIG_H


@@ -37,7 +37,6 @@ set(CPACK_PACKAGE_VERSION_PATCH ${PROJECT_VERSION_PATCH})
set(CPACK_PACKAGE_HOMEPAGE_URL "https://www.nomic.ai/gpt4all")
set(CPACK_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_RESOURCE_FILE_LICENSE ${CMAKE_CURRENT_SOURCE_DIR}/LICENSE)
set(CPACK_RESOURCE_FILE_README ${CMAKE_CURRENT_SOURCE_DIR}/README.md)
set(CPACK_PACKAGE_EXECUTABLES "GPT4All")
set(CPACK_CREATE_DESKTOP_LINKS "GPT4All")
set(CPACK_IFW_PACKAGE_NAME "GPT4All")


@@ -8,12 +8,6 @@ if (GPT4ALL_SIGN_INSTALL)
set(MAC_NOTARIZE -sign-for-notarization=${GPT4ALL_SIGNING_ID})
endif()
execute_process(COMMAND ${MACDEPLOYQT} ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/bin/gpt4all.app -qmldir=${CMAKE_CURRENT_SOURCE_DIR} -verbose=2 ${MAC_NOTARIZE})
file(GLOB MYLLAMALIBS ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/lib/libllama*)
file(GLOB MYLLMODELLIBS ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/lib/libllmodel.*)
file(COPY ${MYLLAMALIBS}
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/bin/gpt4all.app/Contents/Frameworks)
file(COPY ${MYLLMODELLIBS}
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/bin/gpt4all.app/Contents/Frameworks)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-32.png"
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png"

@@ -15,43 +15,34 @@ add_subdirectory(DuckX)
set(QT_VERSION_MAJOR 6)
add_subdirectory(QXlsx/QXlsx)
# forked dependency of Jinja2Cpp
set(RAPIDJSON_BUILD_DOC OFF)
set(RAPIDJSON_BUILD_EXAMPLES OFF)
set(RAPIDJSON_BUILD_TESTS OFF)
set(RAPIDJSON_ENABLE_INSTRUMENTATION_OPT OFF)
add_subdirectory(rapidjson)
add_subdirectory(Jinja2Cpp)
if (NOT GPT4ALL_USING_QTPDF)
# If we do not use QtPDF, we need to get PDFium.
set(GPT4ALL_PDFIUM_TAG "chromium/6954")
set(GPT4ALL_PDFIUM_TAG "chromium/6996")
if (CMAKE_SYSTEM_NAME MATCHES Linux)
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-linux-x64.tgz"
URL_HASH "SHA256=69917fd9543befc6c806254aff6c8a604d9e7cd3999a3e70fc32b8690d372da2"
URL_HASH "SHA256=68b381b87efed539f2e33ae1e280304c9a42643a878cc296c1d66a93b0cb4335"
)
elseif (CMAKE_SYSTEM_NAME MATCHES Windows)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|AMD64|amd64)$")
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-win-x64.tgz"
URL_HASH "SHA256=62ecac78fbaf658457beaffcc05eb147f493d435a2e1309e6a731808b4e80d38"
URL_HASH "SHA256=83e714c302ceacccf403826d5cb57ea39b77f393d83b8d5781283012774a9378"
)
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-win-arm64.tgz"
URL_HASH "SHA256=a0b69014467f2b9824776c064920bc95359c9ba0d88793bdda1894a0f22206f8"
URL_HASH "SHA256=78e77e871453a4915cbf66fb381b951c9932f88a747c6b2b33c9f27ec2371445"
)
endif()
elseif (CMAKE_SYSTEM_NAME MATCHES Darwin)
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-mac-univ.tgz"
URL_HASH "SHA256=7442f1dc6bef90898b2b7bd38dbec369ddd81bbf66c1c5aac3a1b60e107098f9"
URL_HASH "SHA256=e7577f3242ff9c1df50025f9615673a43601a201bc51ee4792975f98920793a2"
)
endif()

@@ -1 +0,0 @@
Subproject commit ce10f783bae46ede6afa4b09a8a169ebe88a14d4

@@ -0,0 +1 @@
Subproject commit 606b6347edf0758c531abb6c36743e09a4c48a84

@@ -0,0 +1 @@
Subproject commit e97bb2442cd6ab3d5bb5f5a3e8a1f7d6081d613b

@@ -1 +0,0 @@
Subproject commit 9b547ef4bd86210ef084abc2790bd1ddfe66b592

@@ -0,0 +1,3 @@
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 26.3 26.3"><defs><style>.cls-1{fill:#f05237;}.cls-2{fill:#fff;}</style></defs><g id="Layer_2" data-name="Layer 2"><g id="Content"><circle class="cls-1" cx="13.15" cy="13.15" r="13.15"/><path class="cls-2" d="M13.17,6.88a4.43,4.43,0,0,0,0,8.85h1.45V14.07H13.17a2.77,2.77,0,1,1,2.77-2.76v4.07a2.74,2.74,0,0,1-4.67,2L10.1,18.51a4.37,4.37,0,0,0,3.07,1.29h.06a4.42,4.42,0,0,0,4.36-4.4V11.2a4.43,4.43,0,0,0-4.42-4.32"/></g></g></svg>


@@ -0,0 +1 @@
<svg viewBox="0 0 512 512" xmlns="http://www.w3.org/2000/svg" fill-rule="evenodd" clip-rule="evenodd" stroke-linejoin="round" stroke-miterlimit="2"><path d="M189.08 303.228H94.587l.044-94.446h94.497l-.048 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M283.528 397.674h-94.493l.044-94.446h94.496l-.047 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M283.575 303.228H189.08l.046-94.446h94.496l-.047 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M378.07 303.228h-94.495l.044-94.446h94.498l-.047 94.446zM189.128 208.779H94.633l.044-94.448h94.498l-.047 94.448zM378.115 208.779h-94.494l.045-94.448h94.496l-.047 94.448zM94.587 303.227H.093l.044-96.017h94.496l-.046 96.017z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M94.633 208.779H.138l.046-94.448H94.68l-.047 94.448z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M94.68 115.902H.185L.23 19.885h94.498l-.047 96.017zM472.657 114.331h-94.495l.044-94.446h94.497l-.046 94.446zM94.54 399.244H.046l.044-97.588h94.497l-.047 97.588z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M94.495 492.123H0l.044-94.446H94.54l-.045 94.446zM472.563 303.228H378.07l.044-94.446h94.496l-.047 94.446zM472.61 208.779h-94.495l.044-94.448h94.498l-.047 94.448z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M472.517 397.674h-94.494l.044-94.446h94.497l-.047 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M472.47 492.121h-94.493l.044-96.017h94.496l-.047 96.017z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M228.375 303.22h-96.061l.046-94.446h96.067l-.052 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M322.827 397.666h-94.495l.044-96.018h94.498l-.047 96.018z" fill="#ff4900" fill-rule="nonzero"/><path d="M324.444 303.22h-97.636l.046-94.446h97.638l-.048 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M418.938 303.22h-96.064l.045-94.446h96.066l-.047 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M228.423 208.77H132.36l.045-94.445h96.066l-.05 94.446zM418.985 208.77H322.92l.044-94.445h96.069l-.048 94.446z" fill="#ffa300" 
fill-rule="nonzero"/><path d="M133.883 304.79H39.392l.044-96.017h94.496l-.049 96.017z" fill="#ff7000" fill-rule="nonzero"/><path d="M133.929 208.77H39.437l.044-95.445h94.496l-.048 95.445z" fill="#ffa300" fill-rule="nonzero"/><path d="M133.976 114.325H39.484l.044-94.448h94.497l-.05 94.448zM511.954 115.325h-94.493l.044-95.448h94.497l-.048 95.448z" fill="#ffce00" fill-rule="nonzero"/><path d="M133.836 399.667H39.345l.044-96.447h94.496l-.049 96.447z" fill="#ff4900" fill-rule="nonzero"/><path d="M133.79 492.117H39.3l.044-94.448h94.496l-.049 94.448z" fill="#ff0107" fill-rule="nonzero"/><path d="M511.862 303.22h-94.495l.046-94.446h94.496l-.047 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M511.907 208.77h-94.493l.044-94.445h94.496l-.047 94.446z" fill="#ffa300" fill-rule="nonzero"/><path d="M511.815 398.666h-94.493l.044-95.447h94.496l-.047 95.447z" fill="#ff4900" fill-rule="nonzero"/><path d="M511.77 492.117h-94.496l.046-94.448h94.496l-.047 94.448z" fill="#ff0107" fill-rule="nonzero"/></svg>


@@ -0,0 +1,2 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg fill="#000000" width="800px" height="800px" viewBox="0 0 24 24" role="img" xmlns="http://www.w3.org/2000/svg"><title>OpenAI icon</title><path d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z"/></svg>


@@ -1,16 +1,15 @@
## Latest News
GPT4All v3.6.1 was released on December 20th and fixes issues with the stop generation and copy conversation buttons which were broken in v3.6.0.
GPT4All v3.10.0 was released on February 24th. Changes include:
---
GPT4All v3.6.0 was released on December 19th. Changes include:
* **Reasoner v1:**
* Built-in javascript code interpreter tool.
* Custom curated model that utilizes the code interpreter to break down, analyze, perform, and verify complex reasoning tasks.
* **Templates:** Automatically substitute chat templates that are not compatible with Jinja2Cpp in GGUFs.
* **Fixes:**
* Remote model template to allow for XML in messages.
* Jinja2Cpp bug that broke system message detection in chat templates.
* LocalDocs sources displaying in unconsolidated form after v3.5.0.
* **Remote Models:**
* The Add Model page now has a dedicated tab for remote model providers.
* Groq, OpenAI, and Mistral remote models are now easier to configure.
* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.
* **New Model:** The non-MoE Granite model is now supported.
* **Translation Updates:**
* The Italian translation has been updated.
* The Simplified Chinese translation has been significantly improved.
* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.
* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.
* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.

@@ -32,6 +32,66 @@
"systemPrompt": "",
"chatTemplate": "{%- set loop_messages = messages %}\n{%- for message in loop_messages %}\n {%- set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' %}\n {{- content }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}"
},
{
"order": "aa1",
"sha256sum": "5cd4ee65211770f1d99b4f6f4951780b9ef40e29314bd6542bb5bd0ad0bc29d1",
"name": "DeepSeek-R1-Distill-Qwen-7B",
"filename": "DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf",
"filesize": "4444121056",
"requires": "3.8.0",
"ramrequired": "8",
"parameters": "7 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Qwen2.5-Math-7B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "aa2",
"sha256sum": "906b3382f2680f4ce845459b4a122e904002b075238080307586bcffcde49eef",
"name": "DeepSeek-R1-Distill-Qwen-14B",
"filename": "DeepSeek-R1-Distill-Qwen-14B-Q4_0.gguf",
"filesize": "8544267680",
"requires": "3.8.0",
"ramrequired": "16",
"parameters": "14 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Qwen2.5-14B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-14B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "aa3",
"sha256sum": "0eb93e436ac8beec18aceb958c120d282cb2cf5451b23185e7be268fe9d375cc",
"name": "DeepSeek-R1-Distill-Llama-8B",
"filename": "DeepSeek-R1-Distill-Llama-8B-Q4_0.gguf",
"filesize": "4675894112",
"requires": "3.8.0",
"ramrequired": "8",
"parameters": "8 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Llama-3.1-8B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "aa4",
"sha256sum": "b3af887d0a015b39fab2395e4faf682c1a81a6a3fd09a43f0d4292f7d94bf4d0",
"name": "DeepSeek-R1-Distill-Qwen-1.5B",
"filename": "DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf",
"filesize": "1068807776",
"requires": "3.8.0",
"ramrequired": "3",
"parameters": "1.5 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Qwen2.5-Math-1.5B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "b",
"md5sum": "27b44e8ae1817525164ddf4f8dae8af4",
@@ -472,7 +532,7 @@
"filename": "qwen2-1_5b-instruct-q4_0.gguf",
"filesize": "937532800",
"requires": "3.0",
"ramrequired": "4",
"ramrequired": "3",
"parameters": "1.5 billion",
"quant": "q4_0",
"type": "qwen2",

@@ -251,12 +251,32 @@
},
{
"version": "3.6.0",
"notes": "* **Reasoner v1:**\n * Built-in javascript code interpreter tool.\n * Custom curated model that utilizes the code interpreter to break down, analyze, perform, and verify complex reasoning tasks.\n* **Templates:** Automatically substitute chat templates that are not compatible with Jinja2Cpp in GGUFs.\n* **Fixes:**\n * Remote model template to allow for XML in messages.\n * Jinja2Cpp bug that broke system message detection in chat templates.\n * LocalDocs sources displaying in unconsolidated form after v3.5.0.",
"notes": "* **Reasoner v1:**\n * Built-in javascript code interpreter tool.\n * Custom curated model that utilizes the code interpreter to break down, analyze, perform, and verify complex reasoning tasks.\n* **Templates:** Automatically substitute chat templates that are not compatible with Jinja2Cpp in GGUFs.\n* **Fixes:**\n * Remote model template to allow for XML in messages.\n * Jinja2Cpp bug that broke system message detection in chat templates.\n * LocalDocs sources displaying in unconsolidated form after v3.5.0.\n",
"contributors": "* Adam Treat (Nomic AI)\n* Jared Van Bortel (Nomic AI)"
},
{
"version": "3.6.1",
"notes": "* **Fixes:**\n * The stop generation button no longer working in v3.6.0.\n * The copy entire conversation button no longer working in v3.6.0.",
"notes": "* **Fixes:**\n * The stop generation button no longer working in v3.6.0.\n * The copy entire conversation button no longer working in v3.6.0.\n",
"contributors": "* Adam Treat (Nomic AI)"
},
{
"version": "3.7.0",
"notes": "* **Windows ARM Support:** GPT4All now supports the Windows ARM platform, ensuring compatibility with devices powered by Qualcomm Snapdragon and Microsoft SQ-series processors.\n * **NOTE:** Support for GPU and/or NPU acceleration is not available at this time. Only the CPU will be used to run LLMs.\n * **NOTE:** You must install the new *Windows ARM* version of GPT4All from the website. The standard *Windows* version will not work due to emulation limitations.\n* **Fixed Updating on macOS:** The maintenance tool no longer crashes when attempting to update or uninstall GPT4All on Sequoia.\n * **NOTE:** If you have installed the version from the GitHub releases as a workaround for this issue, you can safely uninstall it and switch back to the version from the website.\n* **Fixed Chat Saving on macOS:** Chats now save as expected when the application is quit with Command-Q.\n* **Code Interpreter Improvements:**\n * The behavior when the code takes too long to execute and times out has been improved.\n * console.log now accepts multiple arguments for better compatibility with native JavaScript.\n* **Chat Templating Improvements:**\n * Two crashes and one compatibility issue have been fixed in the chat template parser.\n * The default chat template for EM German Mistral has been fixed.\n * Automatic replacements have been added for five new models as we continue to improve compatibility with common chat templates.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* Riccardo Giovanetti (`@Harvester62`)"
},
{
"version": "3.8.0",
"notes": "* **Native DeepSeek-R1-Distill Support:** GPT4All now has robust support for the DeepSeek-R1 family of distillations.\n * Several model variants are now available on the downloads page.\n * Reasoning (wrapped in \"think\" tags) is displayed similarly to the Reasoner model.\n * The DeepSeek-R1 Qwen pretokenizer is now supported, resolving the loading failure in previous versions.\n * The model is now configured with a GPT4All-compatible prompt template by default.\n* **Chat Templating Overhaul:** The template parser has been *completely* replaced with one that has much better compatibility with common models.\n* **Code Interpreter Fixes:**\n * An issue preventing the code interpreter from logging a single string in v3.7.0 has been fixed.\n * The UI no longer freezes while the code interpreter is running a computation.\n* **Local Server Fixes:**\n * An issue preventing the server from using LocalDocs after the first request since v3.5.0 has been fixed.\n * System messages are now correctly hidden from the message history.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)"
},
{
"version": "3.9.0",
"notes": "* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.\n* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.\n* **Windows ARM Improvements:**\n * Graphical artifacts on some SoCs have been fixed.\n * A crash when adding a collection of PDFs to LocalDocs has been fixed.\n* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.\n* **New Models:** OLMoE and Granite MoE models are now supported.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)"
},
{
"version": "3.10.0",
"notes": "* **Remote Models:**\n * The Add Model page now has a dedicated tab for remote model providers.\n * Groq, OpenAI, and Mistral remote models are now easier to configure.\n* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.\n* **New Model:** The non-MoE Granite model is now supported.\n* **Translation Updates:**\n * The Italian translation has been updated.\n * The Simplified Chinese translation has been significantly improved.\n* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.\n* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.\n* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)\n* Lil Bob (`@Junior2Ran`)\n* Riccardo Giovanetti (`@Harvester62`)"
}
]

@@ -204,7 +204,7 @@ ColumnLayout {
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !isOnline && !installed && !calcHash && downloadError === ""
visible: !installed && !calcHash && downloadError === ""
Accessible.description: qsTr("Stop/restart/start the download")
onClicked: {
if (!isDownloading) {
@@ -230,52 +230,6 @@ ColumnLayout {
}
}
MySettingsButton {
id: installButton
visible: !installed && isOnline
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
text: qsTr("Install")
font.pixelSize: theme.fontSizeLarge
onClicked: {
var apiKeyText = apiKey.text.trim(),
baseUrlText = baseUrl.text.trim(),
modelNameText = modelName.text.trim();
var apiKeyOk = apiKeyText !== "",
baseUrlOk = !isCompatibleApi || baseUrlText !== "",
modelNameOk = !isCompatibleApi || modelNameText !== "";
if (!apiKeyOk)
apiKey.showError();
if (!baseUrlOk)
baseUrl.showError();
if (!modelNameOk)
modelName.showError();
if (!apiKeyOk || !baseUrlOk || !modelNameOk)
return;
if (!isCompatibleApi)
Download.installModel(
filename,
apiKeyText,
);
else
Download.installCompatibleModel(
modelNameText,
apiKeyText,
baseUrlText,
);
}
Accessible.role: Accessible.Button
Accessible.name: qsTr("Install")
Accessible.description: qsTr("Install online model")
}
ColumnLayout {
spacing: 0
Label {
@@ -390,69 +344,6 @@ ColumnLayout {
Accessible.description: qsTr("Displayed when the file hash is being calculated")
}
}
MyTextField {
id: apiKey
visible: !installed && isOnline
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $API_KEY is empty."));
apiKey.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
apiKey.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $API_KEY")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyTextField {
id: baseUrl
visible: !installed && isOnline && isCompatibleApi
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $BASE_URL is empty."));
baseUrl.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
baseUrl.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $BASE_URL")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyTextField {
id: modelName
visible: !installed && isOnline && isCompatibleApi
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $MODEL_NAME is empty."))
modelName.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
modelName.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $MODEL_NAME")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
}
}
}

@@ -89,6 +89,13 @@ Rectangle {
gpt4AllModelView.show();
}
}
MyTabButton {
text: qsTr("Remote Providers")
isSelected: remoteModelView.isShown()
onPressed: {
remoteModelView.show();
}
}
MyTabButton {
text: qsTr("HuggingFace")
isSelected: huggingfaceModelView.isShown()
@@ -112,7 +119,20 @@
stackLayout.currentIndex = 0;
}
function isShown() {
return stackLayout.currentIndex === 0
return stackLayout.currentIndex === 0;
}
}
AddRemoteModelView {
id: remoteModelView
Layout.fillWidth: true
Layout.fillHeight: true
function show() {
stackLayout.currentIndex = 1;
}
function isShown() {
return stackLayout.currentIndex === 1;
}
}
@@ -126,10 +146,10 @@
anchors.fill: parent
function show() {
stackLayout.currentIndex = 1;
stackLayout.currentIndex = 2;
}
function isShown() {
return stackLayout.currentIndex === 1
return stackLayout.currentIndex === 2;
}
}
}

@@ -0,0 +1,147 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel
import Qt5Compat.GraphicalEffects
import llm
import chatlistmodel
import download
import modellist
import network
import gpt4all
import mysettings
import localdocs
ColumnLayout {
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop
spacing: 5
Label {
Layout.topMargin: 0
Layout.bottomMargin: 25
Layout.rightMargin: 150 * theme.fontScale
Layout.alignment: Qt.AlignTop
Layout.fillWidth: true
verticalAlignment: Text.AlignTop
text: qsTr("Various remote model providers that use network resources for inference.")
font.pixelSize: theme.fontSizeLarger
color: theme.textColor
wrapMode: Text.WordWrap
}
ScrollView {
id: scrollView
ScrollBar.vertical.policy: ScrollBar.AsNeeded
Layout.fillWidth: true
Layout.fillHeight: true
contentWidth: availableWidth
clip: true
Flow {
anchors.left: parent.left
anchors.right: parent.right
spacing: 20
bottomPadding: 20
property int childWidth: 330 * theme.fontScale
property int childHeight: 400 + 166 * theme.fontScale
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerBaseUrl: "https://api.groq.com/openai/v1/"
providerName: qsTr("Groq")
providerImage: "qrc:/gpt4all/icons/groq.svg"
providerDesc: qsTr('Groq offers a high-performance AI inference engine designed for low-latency and efficient processing. Optimized for real-time applications, Groq’s technology is ideal for users who need fast responses from open large language models and other AI workloads.<br><br>Get your API key: <a href="https://console.groq.com/keys">https://groq.com/</a>')
modelWhitelist: [
// last updated 2025-02-24
"deepseek-r1-distill-llama-70b",
"deepseek-r1-distill-qwen-32b",
"gemma2-9b-it",
"llama-3.1-8b-instant",
"llama-3.2-1b-preview",
"llama-3.2-3b-preview",
"llama-3.3-70b-specdec",
"llama-3.3-70b-versatile",
"llama3-70b-8192",
"llama3-8b-8192",
"mixtral-8x7b-32768",
"qwen-2.5-32b",
"qwen-2.5-coder-32b",
]
}
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerBaseUrl: "https://api.openai.com/v1/"
providerName: qsTr("OpenAI")
providerImage: "qrc:/gpt4all/icons/openai.svg"
providerDesc: qsTr('OpenAI provides access to advanced AI models, including GPT-4 supporting a wide range of applications, from conversational AI to content generation and code completion.<br><br>Get your API key: <a href="https://platform.openai.com/signup">https://openai.com/</a>')
modelWhitelist: [
// last updated 2025-02-24
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-4",
"gpt-4-32k",
"gpt-4-turbo",
"gpt-4o",
]
}
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerBaseUrl: "https://api.mistral.ai/v1/"
providerName: qsTr("Mistral")
providerImage: "qrc:/gpt4all/icons/mistral.svg"
providerDesc: qsTr('Mistral AI specializes in efficient, open-weight language models optimized for various natural language processing tasks. Their models are designed for flexibility and performance, making them a solid option for applications requiring scalable AI solutions.<br><br>Get your API key: <a href="https://mistral.ai/">https://mistral.ai/</a>')
modelWhitelist: [
// last updated 2025-02-24
"codestral-2405",
"codestral-2411-rc5",
"codestral-2412",
"codestral-2501",
"codestral-latest",
"codestral-mamba-2407",
"codestral-mamba-latest",
"ministral-3b-2410",
"ministral-3b-latest",
"ministral-8b-2410",
"ministral-8b-latest",
"mistral-large-2402",
"mistral-large-2407",
"mistral-large-2411",
"mistral-large-latest",
"mistral-medium-2312",
"mistral-medium-latest",
"mistral-saba-2502",
"mistral-saba-latest",
"mistral-small-2312",
"mistral-small-2402",
"mistral-small-2409",
"mistral-small-2501",
"mistral-small-latest",
"mistral-tiny-2312",
"mistral-tiny-2407",
"mistral-tiny-latest",
"open-codestral-mamba",
"open-mistral-7b",
"open-mistral-nemo",
"open-mistral-nemo-2407",
"open-mixtral-8x22b",
"open-mixtral-8x22b-2404",
"open-mixtral-8x7b",
]
}
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerIsCustom: true
providerName: qsTr("Custom")
providerImage: "qrc:/gpt4all/icons/antenna_3.svg"
providerDesc: qsTr("The custom provider option allows users to connect their own OpenAI-compatible AI models or third-party inference services. This is useful for organizations with proprietary models or those leveraging niche AI providers not listed here.")
}
}
}
}

@@ -13,6 +13,8 @@ ColumnLayout {
property alias textContent: innerTextItem.textContent
property bool isCurrent: false
property bool isError: false
property bool isThinking: false
property int thinkingTime: 0
Layout.topMargin: 10
Layout.bottomMargin: 10
@@ -26,16 +28,20 @@
anchors.bottom: parent.bottom
Item {
width: myTextArea.width
height: myTextArea.height
Layout.preferredWidth: myTextArea.implicitWidth
Layout.preferredHeight: myTextArea.implicitHeight
TextArea {
id: myTextArea
text: {
if (isError)
return qsTr("Analysis encountered error");
if (isCurrent)
return qsTr("Analyzing");
return qsTr("Analyzed");
return isThinking ? qsTr("Thinking") : qsTr("Analyzing");
return isThinking
? qsTr("Thought for %1 %2")
.arg(Math.ceil(thinkingTime / 1000.0))
.arg(Math.ceil(thinkingTime / 1000.0) === 1 ? qsTr("second") : qsTr("seconds"))
: qsTr("Analyzed");
}
padding: 0
font.pixelSize: theme.fontSizeLarger

@@ -110,6 +110,7 @@ GridLayout {
case Chat.PromptProcessing: return qsTr("processing ...")
case Chat.ResponseGeneration: return qsTr("generating response ...");
case Chat.GeneratingQuestions: return qsTr("generating questions ...");
case Chat.ToolCallGeneration: return qsTr("generating toolcall ...");
default: return ""; // handle unexpected values
}
}
@ -188,6 +189,18 @@ GridLayout {
isError: modelData.isToolCallError
}
}
DelegateChoice {
roleValue: "Think: ";
ChatCollapsibleItem {
Layout.fillWidth: true
textContent: modelData.content
isCurrent: modelData.isCurrentResponse
isError: false
isThinking: true
thinkingTime: modelData.thinkingTime
visible: modelData.content !== ""
}
}
}
delegate: chooser

View File

@ -828,7 +828,7 @@ Rectangle {
textInput.cursorPosition = text.length;
}
height: visible ? implicitHeight : 0
visible: name !== "ToolResponse: "
visible: name !== "ToolResponse: " && name !== "System: "
}
remove: Transition {

View File

@ -60,27 +60,28 @@ ComboBox {
highlighted: comboBox.highlightedIndex === index
}
popup: Popup {
// FIXME This should be made much nicer to take into account lists that are very long so
// that it is scrollable and also sized optimally taking into account the x,y and the content
// width and height as well as the window width and height
y: comboBox.height - 1
width: comboBox.width
implicitHeight: contentItem.implicitHeight + 20
implicitHeight: Math.min(window.height - y, contentItem.implicitHeight + 20)
padding: 0
contentItem: Rectangle {
implicitWidth: myListView.contentWidth
implicitWidth: comboBox.width
implicitHeight: myListView.contentHeight
color: "transparent"
ListView {
id: myListView
radius: 10
ScrollView {
anchors.fill: parent
anchors.margins: 10
clip: true
implicitHeight: contentHeight
model: comboBox.popup.visible ? comboBox.delegateModel : null
currentIndex: comboBox.highlightedIndex
ScrollIndicator.vertical: ScrollIndicator { }
ScrollBar.vertical.policy: ScrollBar.AsNeeded
ScrollBar.horizontal.policy: ScrollBar.AlwaysOff
ListView {
id: myListView
implicitHeight: contentHeight
model: comboBox.popup.visible ? comboBox.delegateModel : null
currentIndex: comboBox.highlightedIndex
ScrollIndicator.vertical: ScrollIndicator { }
}
}
}
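The new `implicitHeight` binding above clamps the combo box popup to the space remaining below its y position in the window, with 20px of padding around the content. A minimal sketch of that computation — `popupHeight` is a hypothetical helper, not real API:

```cpp
#include <algorithm>

// Popup height = min(space left in the window below y,
//                    content height plus 20px of padding).
int popupHeight(int windowHeight, int popupY, int contentHeight)
{
    return std::min(windowHeight - popupY, contentHeight + 20);
}
```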

View File

@ -0,0 +1,221 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel
import Qt5Compat.GraphicalEffects
import llm
import chatlistmodel
import download
import modellist
import network
import gpt4all
import mysettings
import localdocs
Rectangle {
property alias providerName: providerNameLabel.text
property alias providerImage: myimage.source
property alias providerDesc: providerDescLabel.text
property string providerBaseUrl: ""
property bool providerIsCustom: false
property var modelWhitelist: null
color: theme.conversationBackground
radius: 10
border.width: 1
border.color: theme.controlBorder
implicitHeight: topColumn.height + bottomColumn.height + 33 * theme.fontScale
ColumnLayout {
id: topColumn
anchors.left: parent.left
anchors.right: parent.right
anchors.top: parent.top
anchors.margins: 20
spacing: 15 * theme.fontScale
RowLayout {
Layout.alignment: Qt.AlignTop
spacing: 10
Item {
Layout.preferredWidth: 27 * theme.fontScale
Layout.preferredHeight: 27 * theme.fontScale
Layout.alignment: Qt.AlignLeft
Image {
id: myimage
anchors.centerIn: parent
sourceSize.width: parent.width
sourceSize.height: parent.height
mipmap: true
fillMode: Image.PreserveAspectFit
}
}
Label {
id: providerNameLabel
color: theme.textColor
font.pixelSize: theme.fontSizeBanner
}
}
Label {
id: providerDescLabel
Layout.fillWidth: true
wrapMode: Text.Wrap
color: theme.settingsTitleTextColor
font.pixelSize: theme.fontSizeLarge
onLinkActivated: function(link) { Qt.openUrlExternally(link); }
MouseArea {
anchors.fill: parent
acceptedButtons: Qt.NoButton // pass clicks to parent
cursorShape: parent.hoveredLink ? Qt.PointingHandCursor : Qt.ArrowCursor
}
}
}
ColumnLayout {
id: bottomColumn
anchors.left: parent.left
anchors.right: parent.right
anchors.bottom: parent.bottom
anchors.margins: 20
spacing: 30
ColumnLayout {
MySettingsLabel {
text: qsTr("API Key")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
MyTextField {
id: apiKeyField
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $API_KEY is empty."));
apiKeyField.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
apiKeyField.placeholderTextColor = theme.mutedTextColor;
if (!providerIsCustom) {
let models = ModelList.remoteModelList(apiKeyField.text, providerBaseUrl);
if (modelWhitelist !== null)
models = models.filter(m => modelWhitelist.includes(m));
myModelList.model = models;
myModelList.currentIndex = -1;
}
}
placeholderText: qsTr("enter $API_KEY")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Text field for entering the API key")
}
}
ColumnLayout {
visible: providerIsCustom
MySettingsLabel {
text: qsTr("Base URL")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
MyTextField {
id: baseUrlField
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $BASE_URL is empty."));
baseUrlField.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
baseUrlField.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $BASE_URL")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
}
}
ColumnLayout {
visible: providerIsCustom
MySettingsLabel {
text: qsTr("Model Name")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
MyTextField {
id: modelNameField
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $MODEL_NAME is empty."))
modelNameField.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
modelNameField.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $MODEL_NAME")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
}
}
ColumnLayout {
visible: myModelList.count > 0 && !providerIsCustom
MySettingsLabel {
text: qsTr("Models")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
RowLayout {
spacing: 10
MyComboBox {
Layout.fillWidth: true
id: myModelList
currentIndex: -1;
}
}
}
MySettingsButton {
id: installButton
Layout.alignment: Qt.AlignRight
text: qsTr("Install")
font.pixelSize: theme.fontSizeLarge
property string apiKeyText: apiKeyField.text.trim()
property string baseUrlText: providerIsCustom ? baseUrlField.text.trim() : providerBaseUrl.trim()
property string modelNameText: providerIsCustom ? modelNameField.text.trim() : myModelList.currentText.trim()
enabled: apiKeyText !== "" && baseUrlText !== "" && modelNameText !== ""
onClicked: {
Download.installCompatibleModel(
modelNameText,
apiKeyText,
baseUrlText,
);
myModelList.currentIndex = -1;
}
Accessible.role: Accessible.Button
Accessible.name: qsTr("Install")
Accessible.description: qsTr("Install remote model")
}
}
}
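The Install button above is enabled only when the API key, base URL, and model name are all non-empty after trimming. A sketch of that enablement check — `trim` and `canInstall` are hypothetical helpers mirroring `QString::trimmed()` and the QML binding:

```cpp
#include <string>

// Strip leading/trailing whitespace, like QString::trimmed().
static std::string trim(const std::string &s)
{
    const auto first = s.find_first_not_of(" \t\n\r");
    if (first == std::string::npos)
        return "";
    const auto last = s.find_last_not_of(" \t\n\r");
    return s.substr(first, last - first + 1);
}

// All three fields must be non-empty once trimmed.
bool canInstall(const std::string &apiKey, const std::string &baseUrl,
                const std::string &modelName)
{
    return !trim(apiKey).empty() && !trim(baseUrl).empty()
        && !trim(modelName).empty();
}
```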

View File

@ -7,24 +7,26 @@
#include "toolcallparser.h"
#include "toolmodel.h"
#include <QBuffer>
#include <QByteArray>
#include <QDataStream>
#include <QDebug>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QFile>
#include <QFileInfo>
#include <QIODevice>
#include <QLatin1String>
#include <QMap>
#include <QRegularExpression>
#include <QString>
#include <QVariant>
#include <Qt>
#include <QtAssert>
#include <QtLogging>
#include <optional>
#include <utility>
using namespace ToolEnums;
Chat::Chat(QObject *parent)
: QObject(parent)
, m_id(Network::globalInstance()->generateUniqueId())
@ -181,6 +183,11 @@ QVariant Chat::popPrompt(int index)
void Chat::stopGenerating()
{
// In the future, if we have more than one tool, we'll have to keep track of which tools are
// possibly running, but for now we only have one
Tool *toolInstance = ToolModel::globalInstance()->get(ToolCallConstants::CodeInterpreterFunction);
Q_ASSERT(toolInstance);
toolInstance->interrupt();
m_llmodel->stopGenerating();
}
@ -242,56 +249,71 @@ void Chat::responseStopped(qint64 promptResponseMs)
const QString possibleToolcall = m_chatModel->possibleToolcall();
ToolCallParser parser;
parser.update(possibleToolcall);
if (parser.state() == ToolEnums::ParseState::Complete) {
const QString toolCall = parser.toolCall();
// Regex to remove the formatting around the code
static const QRegularExpression regex("^\\s*```javascript\\s*|\\s*```\\s*$");
QString code = toolCall;
code.remove(regex);
code = code.trimmed();
// Right now the code interpreter is the only available tool
Tool *toolInstance = ToolModel::globalInstance()->get(ToolCallConstants::CodeInterpreterFunction);
Q_ASSERT(toolInstance);
// The param is the code
const ToolParam param = { "code", ToolEnums::ParamType::String, code };
const QString result = toolInstance->run({param}, 10000 /*msecs to timeout*/);
const ToolEnums::Error error = toolInstance->error();
const QString errorString = toolInstance->errorString();
// Update the current response with meta information about toolcall and re-parent
m_chatModel->updateToolCall({
ToolCallConstants::CodeInterpreterFunction,
{ param },
result,
error,
errorString
});
++m_consecutiveToolCalls;
// We limit the number of consecutive toolcalls; otherwise we could get into an endless loop
if (m_consecutiveToolCalls < 3 || error == ToolEnums::Error::NoError) {
resetResponseState();
emit promptRequested(m_collections); // triggers a new response
return;
}
}
if (m_generatedName.isEmpty())
emit generateNameRequested();
m_consecutiveToolCalls = 0;
Network::globalInstance()->trackChatEvent("response_complete", {
Network::globalInstance()->trackChatEvent("response_stopped", {
{"first", m_firstResponse},
{"message_count", chatModel()->count()},
{"$duration", promptResponseMs / 1000.},
});
ToolCallParser parser;
parser.update(possibleToolcall.toUtf8());
if (parser.state() == ToolEnums::ParseState::Complete && parser.startTag() != ToolCallConstants::ThinkStartTag)
processToolCall(parser.toolCall());
else
responseComplete();
}
void Chat::processToolCall(const QString &toolCall)
{
m_responseState = Chat::ToolCallGeneration;
emit responseStateChanged();
// Regex to remove the formatting around the code
static const QRegularExpression regex("^\\s*```javascript\\s*|\\s*```\\s*$");
QString code = toolCall;
code.remove(regex);
code = code.trimmed();
// Right now the code interpreter is the only available tool
Tool *toolInstance = ToolModel::globalInstance()->get(ToolCallConstants::CodeInterpreterFunction);
Q_ASSERT(toolInstance);
connect(toolInstance, &Tool::runComplete, this, &Chat::toolCallComplete, Qt::SingleShotConnection);
// The param is the code
const ToolParam param = { "code", ToolEnums::ParamType::String, code };
m_responseInProgress = true;
emit responseInProgressChanged();
toolInstance->run({param});
}
void Chat::toolCallComplete(const ToolCallInfo &info)
{
// Update the current response with meta information about toolcall and re-parent
m_chatModel->updateToolCall(info);
++m_consecutiveToolCalls;
m_responseInProgress = false;
emit responseInProgressChanged();
// We limit the number of consecutive toolcalls; otherwise we could get into an endless loop
if (m_consecutiveToolCalls < 3 || info.error == ToolEnums::Error::NoError) {
resetResponseState();
emit promptRequested(m_collections); // triggers a new response
return;
}
responseComplete();
}
void Chat::responseComplete()
{
if (m_generatedName.isEmpty())
emit generateNameRequested();
m_responseState = Chat::ResponseStopped;
emit responseStateChanged();
m_consecutiveToolCalls = 0;
m_firstResponse = false;
}
@ -361,11 +383,8 @@ void Chat::trySwitchContextOfLoadedModel()
void Chat::generatedNameChanged(const QString &name)
{
// Only use the first three words maximum and remove newlines and extra spaces
m_generatedName = name.simplified();
QStringList words = m_generatedName.split(' ', Qt::SkipEmptyParts);
int wordCount = qMin(7, words.size());
m_name = words.mid(0, wordCount).join(' ');
m_generatedName = name;
m_name = name;
emit nameChanged();
m_needsSave = true;
}
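The fence-stripping regex in `processToolCall` above removes the Markdown formatting around generated code before it is handed to the code interpreter. A standalone sketch using `std::regex` in place of `QRegularExpression` (the pattern is the one from the diff; `stripCodeFences` is a hypothetical helper):

```cpp
#include <regex>
#include <string>

// Drop a leading ```javascript fence and a trailing ``` fence,
// leaving only the code itself.
std::string stripCodeFences(const std::string &toolCall)
{
    static const std::regex fences(R"(^\s*```javascript\s*|\s*```\s*$)");
    return std::regex_replace(toolCall, fences, "");
}
```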

View File

@ -3,21 +3,26 @@
#include "chatllm.h"
#include "chatmodel.h"
#include "database.h" // IWYU pragma: keep
#include "localdocsmodel.h" // IWYU pragma: keep
#include "database.h"
#include "localdocsmodel.h"
#include "modellist.h"
#include "tool.h"
#include <QDateTime>
#include <QList>
#include <QObject>
#include <QQmlEngine>
#include <QQmlEngine> // IWYU pragma: keep
#include <QString>
#include <QStringList> // IWYU pragma: keep
#include <QStringView>
#include <QtGlobal>
#include <QUrl>
#include <QVariant>
#include <QtTypes>
// IWYU pragma: no_forward_declare LocalDocsCollectionsModel
// IWYU pragma: no_forward_declare ToolCallInfo
class QDataStream;
class Chat : public QObject
{
Q_OBJECT
@ -55,7 +60,8 @@ public:
LocalDocsProcessing,
PromptProcessing,
GeneratingQuestions,
ResponseGeneration
ResponseGeneration,
ToolCallGeneration
};
Q_ENUM(ResponseState)
@ -166,6 +172,9 @@ private Q_SLOTS:
void promptProcessing();
void generatingQuestions();
void responseStopped(qint64 promptResponseMs);
void processToolCall(const QString &toolCall);
void toolCallComplete(const ToolCallInfo &info);
void responseComplete();
void generatedNameChanged(const QString &name);
void generatedQuestionFinished(const QString &question);
void handleModelLoadingError(const QString &error);

View File

@ -2,6 +2,9 @@
#include "utils.h"
#include <fmt/format.h>
#include <QAnyStringView>
#include <QCoreApplication>
#include <QDebug>
#include <QGuiApplication>
@ -9,15 +12,17 @@
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QLatin1String>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QStringView>
#include <QThread>
#include <QUrl>
#include <QUtf8StringView>
#include <QUtf8StringView> // IWYU pragma: keep
#include <QVariant>
#include <QXmlStreamReader>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <expected>
@ -29,6 +34,7 @@ using namespace Qt::Literals::StringLiterals;
//#define DEBUG
ChatAPI::ChatAPI()
: QObject(nullptr)
, m_modelName("gpt-3.5-turbo")

View File

@ -3,10 +3,11 @@
#include <gpt4all-backend/llmodel.h>
#include <QByteArray> // IWYU pragma: keep
#include <QByteArray>
#include <QNetworkReply>
#include <QObject>
#include <QString>
#include <QtPreprocessorSupport>
#include <cstddef>
#include <cstdint>
@ -17,9 +18,11 @@
#include <unordered_map>
#include <vector>
// IWYU pragma: no_forward_declare QByteArray
class ChatAPI;
class QNetworkAccessManager;
class ChatAPI;
class ChatAPIWorker : public QObject {
Q_OBJECT
public:

View File

@ -1,23 +1,24 @@
#include "chatlistmodel.h"
#include "database.h" // IWYU pragma: keep
#include "mysettings.h"
#include <QCoreApplication>
#include <QDataStream>
#include <QDir>
#include <QElapsedTimer>
#include <QEvent>
#include <QFile>
#include <QFileInfo>
#include <QGlobalStatic>
#include <QGuiApplication>
#include <QIODevice>
#include <QSettings>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <Qt>
#include <QtTypes>
#include <algorithm>
#include <memory>
static constexpr quint32 CHAT_FORMAT_MAGIC = 0xF5D553CC;
static constexpr qint32 CHAT_FORMAT_VERSION = 12;

View File

@ -7,17 +7,20 @@
#include <QAbstractListModel>
#include <QByteArray>
#include <QDate>
#include <QDebug>
#include <QHash>
#include <QList>
#include <QMutex>
#include <QObject>
#include <QString>
#include <QThread>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <QtPreprocessorSupport>
#include <memory>

View File

@ -12,38 +12,43 @@
#include "toolcallparser.h"
#include <fmt/format.h>
#include <minja/minja.hpp>
#include <nlohmann/json.hpp>
#include <jinja2cpp/error_info.h>
#include <jinja2cpp/template.h>
#include <jinja2cpp/template_env.h>
#include <jinja2cpp/user_callable.h>
#include <jinja2cpp/value.h>
#include <QChar>
#include <QDataStream>
#include <QDebug>
#include <QFile>
#include <QGlobalStatic>
#include <QIODevice>
#include <QIODevice> // IWYU pragma: keep
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QMap>
#include <QMutex>
#include <QMutex> // IWYU pragma: keep
#include <QMutexLocker> // IWYU pragma: keep
#include <QRegularExpression>
#include <QRegularExpressionMatch>
#include <QRegularExpression> // IWYU pragma: keep
#include <QRegularExpressionMatch> // IWYU pragma: keep
#include <QSet>
#include <QStringView>
#include <QTextStream>
#include <QUrl>
#include <QVariant>
#include <QWaitCondition>
#include <Qt>
#include <QtAssert>
#include <QtLogging>
#include <QtTypes> // IWYU pragma: keep
#include <algorithm>
#include <chrono>
#include <cmath>
#include <concepts>
#include <cstddef>
#include <cstdint>
#include <ctime>
#include <exception>
#include <functional>
#include <iomanip>
#include <limits>
#include <optional>
@ -60,44 +65,104 @@
using namespace Qt::Literals::StringLiterals;
using namespace ToolEnums;
namespace ranges = std::ranges;
using json = nlohmann::ordered_json;
//#define DEBUG
//#define DEBUG_MODEL_LOADING
// NOTE: not threadsafe
static jinja2::TemplateEnv *jinjaEnv()
static const std::shared_ptr<minja::Context> &jinjaEnv()
{
static std::optional<jinja2::TemplateEnv> environment;
static std::shared_ptr<minja::Context> environment;
if (!environment) {
auto &env = environment.emplace();
auto &settings = env.GetSettings();
settings.trimBlocks = true;
settings.lstripBlocks = true;
env.AddGlobal("raise_exception", jinja2::UserCallable(
/*callable*/ [](auto &params) -> jinja2::Value {
auto messageArg = params.args.find("message");
if (messageArg == params.args.end() || !messageArg->second.isString())
throw std::runtime_error("'message' argument to raise_exception() must be a string");
throw std::runtime_error(fmt::format("Jinja template error: {}", messageArg->second.asString()));
},
/*argsInfo*/ { jinja2::ArgInfo("message", /*isMandatory*/ true) }
));
env.AddGlobal("strftime_now", jinja2::UserCallable(
/*callable*/ [](auto &params) -> jinja2::Value {
environment = minja::Context::builtins();
environment->set("strftime_now", minja::simple_function(
"strftime_now", { "format" },
[](const std::shared_ptr<minja::Context> &, minja::Value &args) -> minja::Value {
auto format = args.at("format").get<std::string>();
using Clock = std::chrono::system_clock;
auto formatArg = params.args.find("format");
if (formatArg == params.args.end() || !formatArg->second.isString())
throw std::runtime_error("'format' argument to strftime_now() must be a string");
time_t nowUnix = Clock::to_time_t(Clock::now());
auto localDate = *std::localtime(&nowUnix);
std::ostringstream ss;
ss << std::put_time(&localDate, formatArg->second.asString().c_str());
ss << std::put_time(&localDate, format.c_str());
return ss.str();
},
/*argsInfo*/ { jinja2::ArgInfo("format", /*isMandatory*/ true) }
}
));
environment->set("regex_replace", minja::simple_function(
"regex_replace", { "str", "pattern", "repl" },
[](const std::shared_ptr<minja::Context> &, minja::Value &args) -> minja::Value {
auto str = args.at("str" ).get<std::string>();
auto pattern = args.at("pattern").get<std::string>();
auto repl = args.at("repl" ).get<std::string>();
return std::regex_replace(str, std::regex(pattern), repl);
}
));
}
return &*environment;
return environment;
}
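The `strftime_now` callable registered above formats the current local time for chat templates. A self-contained sketch of the same `std::put_time` logic, outside minja — `strftimeNow` is a hypothetical free function:

```cpp
#include <chrono>
#include <ctime>
#include <iomanip>
#include <sstream>
#include <string>

// Format the current local time with a strftime-style format string,
// as the strftime_now template callable does.
std::string strftimeNow(const std::string &format)
{
    using Clock = std::chrono::system_clock;
    std::time_t nowUnix = Clock::to_time_t(Clock::now());
    std::tm localDate = *std::localtime(&nowUnix);
    std::ostringstream ss;
    ss << std::put_time(&localDate, format.c_str());
    return ss.str();
}
```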
class BaseResponseHandler {
public:
virtual void onSplitIntoTwo (const QString &startTag, const QString &firstBuffer, const QString &secondBuffer) = 0;
virtual void onSplitIntoThree (const QString &secondBuffer, const QString &thirdBuffer) = 0;
// "old-style" responses, with all of the implementation details left in
virtual void onOldResponseChunk(const QByteArray &chunk) = 0;
// notify of a "new-style" response that has been cleaned of tool calling
virtual bool onBufferResponse (const QString &response, int bufferIdx) = 0;
// notify of a "new-style" response, no tool calling applicable
virtual bool onRegularResponse () = 0;
virtual bool getStopGenerating () const = 0;
};
static auto promptModelWithTools(
LLModel *model, const LLModel::PromptCallback &promptCallback, BaseResponseHandler &respHandler,
const LLModel::PromptContext &ctx, const QByteArray &prompt, const QStringList &toolNames
) -> std::pair<QStringList, bool>
{
ToolCallParser toolCallParser(toolNames);
auto handleResponse = [&toolCallParser, &respHandler](LLModel::Token token, std::string_view piece) -> bool {
Q_UNUSED(token)
toolCallParser.update(piece.data());
// Split the response into two if needed
if (toolCallParser.numberOfBuffers() < 2 && toolCallParser.splitIfPossible()) {
const auto parseBuffers = toolCallParser.buffers();
Q_ASSERT(parseBuffers.size() == 2);
respHandler.onSplitIntoTwo(toolCallParser.startTag(), parseBuffers.at(0), parseBuffers.at(1));
}
// Split the response into three if needed
if (toolCallParser.numberOfBuffers() < 3 && toolCallParser.startTag() == ToolCallConstants::ThinkStartTag
&& toolCallParser.splitIfPossible()) {
const auto parseBuffers = toolCallParser.buffers();
Q_ASSERT(parseBuffers.size() == 3);
respHandler.onSplitIntoThree(parseBuffers.at(1), parseBuffers.at(2));
}
respHandler.onOldResponseChunk(QByteArray::fromRawData(piece.data(), piece.size()));
bool ok;
const auto parseBuffers = toolCallParser.buffers();
if (parseBuffers.size() > 1) {
ok = respHandler.onBufferResponse(parseBuffers.last(), parseBuffers.size() - 1);
} else {
ok = respHandler.onRegularResponse();
}
if (!ok)
return false;
const bool shouldExecuteToolCall = toolCallParser.state() == ToolEnums::ParseState::Complete
&& toolCallParser.startTag() != ToolCallConstants::ThinkStartTag;
return !shouldExecuteToolCall && !respHandler.getStopGenerating();
};
model->prompt(std::string_view(prompt), promptCallback, handleResponse, ctx);
const bool shouldExecuteToolCall = toolCallParser.state() == ToolEnums::ParseState::Complete
&& toolCallParser.startTag() != ToolCallConstants::ThinkStartTag;
return { toolCallParser.buffers(), shouldExecuteToolCall };
}
class LLModelStore {
@ -738,7 +803,8 @@ std::vector<MessageItem> ChatLLM::forkConversation(const QString &prompt) const
conversation.reserve(items.size() + 1);
conversation.assign(items.begin(), items.end());
}
conversation.emplace_back(MessageItem::Type::Prompt, prompt.toUtf8());
qsizetype nextIndex = conversation.empty() ? 0 : conversation.back().index().value() + 1;
conversation.emplace_back(nextIndex, MessageItem::Type::Prompt, prompt.toUtf8());
return conversation;
}
@ -757,19 +823,18 @@ static uint parseJinjaTemplateVersion(QStringView tmpl)
return 0;
}
static auto loadJinjaTemplate(
std::optional<jinja2::Template> &tmpl /*out*/, const std::string &source
) -> jinja2::Result<void>
static std::shared_ptr<minja::TemplateNode> loadJinjaTemplate(const std::string &source)
{
tmpl.emplace(jinjaEnv());
return tmpl->Load(source);
return minja::Parser::parse(source, { .trim_blocks = true, .lstrip_blocks = true, .keep_trailing_newline = false });
}
std::optional<std::string> ChatLLM::checkJinjaTemplateError(const std::string &source)
{
std::optional<jinja2::Template> tmpl;
if (auto res = loadJinjaTemplate(tmpl, source); !res)
return res.error().ToString();
try {
loadJinjaTemplate(source);
} catch (const std::runtime_error &e) {
return e.what();
}
return std::nullopt;
}
@ -801,29 +866,29 @@ std::string ChatLLM::applyJinjaTemplate(std::span<const MessageItem> items) cons
uint version = parseJinjaTemplateVersion(chatTemplate);
auto makeMap = [version](const MessageItem &item) {
return jinja2::GenericMap([msg = std::make_shared<JinjaMessage>(version, item)] { return msg.get(); });
return JinjaMessage(version, item).AsJson();
};
std::unique_ptr<MessageItem> systemItem;
bool useSystem = !isAllSpace(systemMessage);
jinja2::ValuesList messages;
json::array_t messages;
messages.reserve(useSystem + items.size());
if (useSystem) {
systemItem = std::make_unique<MessageItem>(MessageItem::Type::System, systemMessage.toUtf8());
systemItem = std::make_unique<MessageItem>(MessageItem::system_tag, systemMessage.toUtf8());
messages.emplace_back(makeMap(*systemItem));
}
for (auto &item : items)
messages.emplace_back(makeMap(item));
jinja2::ValuesList toolList;
json::array_t toolList;
const int toolCount = ToolModel::globalInstance()->count();
for (int i = 0; i < toolCount; ++i) {
Tool *t = ToolModel::globalInstance()->get(i);
toolList.push_back(t->jinjaValue());
}
jinja2::ValuesMap params {
json::object_t params {
{ "messages", std::move(messages) },
{ "add_generation_prompt", true },
{ "toolList", toolList },
@ -831,12 +896,14 @@ std::string ChatLLM::applyJinjaTemplate(std::span<const MessageItem> items) cons
for (auto &[name, token] : model->specialTokens())
params.emplace(std::move(name), std::move(token));
std::optional<jinja2::Template> tmpl;
auto maybeRendered = loadJinjaTemplate(tmpl, chatTemplate.toStdString())
.and_then([&] { return tmpl->RenderAsString(params); });
if (!maybeRendered)
throw std::runtime_error(fmt::format("Failed to parse chat template: {}", maybeRendered.error().ToString()));
return *maybeRendered;
try {
auto tmpl = loadJinjaTemplate(chatTemplate.toStdString());
auto context = minja::Context::make(minja::Value(std::move(params)), jinjaEnv());
return tmpl->render(context);
} catch (const std::runtime_error &e) {
throw std::runtime_error(fmt::format("Failed to parse chat template: {}", e.what()));
}
Q_UNREACHABLE();
}
auto ChatLLM::promptInternalChat(const QStringList &enabledCollections, const LLModel::PromptContext &ctx,
@ -862,7 +929,7 @@ auto ChatLLM::promptInternalChat(const QStringList &enabledCollections, const LL
// Find the prompt that represents the query. Server chats are flexible and may not have one.
auto items = getChat();
if (auto peer = m_chatModel->getPeer(items, items.end() - 1)) // peer of response
query = { *peer - items.begin(), (*peer)->content() };
query = { (*peer)->index().value(), (*peer)->content() };
}
if (query) {
@ -888,6 +955,63 @@ auto ChatLLM::promptInternalChat(const QStringList &enabledCollections, const LL
};
}
class ChatViewResponseHandler : public BaseResponseHandler {
public:
ChatViewResponseHandler(ChatLLM *cllm, QElapsedTimer *totalTime, ChatLLM::PromptResult *result)
: m_cllm(cllm), m_totalTime(totalTime), m_result(result) {}
void onSplitIntoTwo(const QString &startTag, const QString &firstBuffer, const QString &secondBuffer) override
{
if (startTag == ToolCallConstants::ThinkStartTag)
m_cllm->m_chatModel->splitThinking({ firstBuffer, secondBuffer });
else
m_cllm->m_chatModel->splitToolCall({ firstBuffer, secondBuffer });
}
void onSplitIntoThree(const QString &secondBuffer, const QString &thirdBuffer) override
{
m_cllm->m_chatModel->endThinking({ secondBuffer, thirdBuffer }, m_totalTime->elapsed());
}
void onOldResponseChunk(const QByteArray &chunk) override
{
m_result->responseTokens++;
m_cllm->m_timer->inc();
m_result->response.append(chunk);
}
bool onBufferResponse(const QString &response, int bufferIdx) override
{
Q_UNUSED(bufferIdx)
try {
QString r = response;
m_cllm->m_chatModel->setResponseValue(removeLeadingWhitespace(r));
} catch (const std::exception &e) {
// We have a try/catch here because the main thread might have removed the response from
// the chatmodel by erasing the conversation during the response... the main thread sets
// m_stopGenerating before doing so, but it doesn't wait after that to reset the chatmodel
Q_ASSERT(m_cllm->m_stopGenerating);
return false;
}
emit m_cllm->responseChanged();
return true;
}
bool onRegularResponse() override
{
auto respStr = QString::fromUtf8(m_result->response);
return onBufferResponse(respStr, 0);
}
bool getStopGenerating() const override
{ return m_cllm->m_stopGenerating; }
private:
ChatLLM *m_cllm;
QElapsedTimer *m_totalTime;
ChatLLM::PromptResult *m_result;
};
auto ChatLLM::promptInternal(
const std::variant<std::span<const MessageItem>, std::string_view> &prompt,
const LLModel::PromptContext &ctx,
@ -935,53 +1059,22 @@ auto ChatLLM::promptInternal(
return !m_stopGenerating;
};
ToolCallParser toolCallParser;
auto handleResponse = [this, &result, &toolCallParser](LLModel::Token token, std::string_view piece) -> bool {
Q_UNUSED(token)
result.responseTokens++;
m_timer->inc();
// FIXME: This is *not* necessarily fully formed utf data because it can be partial at this point
// handle this like below where we have a QByteArray
toolCallParser.update(QString::fromStdString(piece.data()));
// Create a toolcall and split the response if needed
if (!toolCallParser.hasSplit() && toolCallParser.state() == ToolEnums::ParseState::Partial) {
const QPair<QString, QString> pair = toolCallParser.split();
m_chatModel->splitToolCall(pair);
}
result.response.append(piece.data(), piece.size());
auto respStr = QString::fromUtf8(result.response);
try {
if (toolCallParser.hasSplit())
m_chatModel->setResponseValue(toolCallParser.buffer());
else
m_chatModel->setResponseValue(removeLeadingWhitespace(respStr));
} catch (const std::exception &e) {
// We have a try/catch here because the main thread might have removed the response from
// the chatmodel by erasing the conversation during the response... the main thread sets
// m_stopGenerating before doing so, but it doesn't wait after that to reset the chatmodel
Q_ASSERT(m_stopGenerating);
return false;
}
emit responseChanged();
const bool foundToolCall = toolCallParser.state() == ToolEnums::ParseState::Complete;
return !foundToolCall && !m_stopGenerating;
};
QElapsedTimer totalTime;
totalTime.start();
m_timer->start();
ChatViewResponseHandler respHandler(this, &totalTime, &result);
m_timer->start();
QStringList finalBuffers;
bool shouldExecuteTool;
try {
emit promptProcessing();
m_llModelInfo.model->setThreadCount(mySettings->threadCount());
m_stopGenerating = false;
m_llModelInfo.model->prompt(conversation, handlePrompt, handleResponse, ctx);
std::tie(finalBuffers, shouldExecuteTool) = promptModelWithTools(
m_llModelInfo.model.get(), handlePrompt, respHandler, ctx,
QByteArray::fromRawData(conversation.data(), conversation.size()),
ToolCallConstants::AllTagNames
);
} catch (...) {
m_timer->stop();
throw;
@ -990,20 +1083,18 @@ auto ChatLLM::promptInternal(
m_timer->stop();
qint64 elapsed = totalTime.elapsed();
const bool foundToolCall = toolCallParser.state() == ToolEnums::ParseState::Complete;
// trim trailing whitespace
auto respStr = QString::fromUtf8(result.response);
if (!respStr.isEmpty() && (std::as_const(respStr).back().isSpace() || foundToolCall)) {
if (toolCallParser.hasSplit())
m_chatModel->setResponseValue(toolCallParser.buffer());
if (!respStr.isEmpty() && (std::as_const(respStr).back().isSpace() || finalBuffers.size() > 1)) {
if (finalBuffers.size() > 1)
m_chatModel->setResponseValue(finalBuffers.last().trimmed());
else
m_chatModel->setResponseValue(respStr.trimmed());
emit responseChanged();
}
bool doQuestions = false;
if (!m_isServer && messageItems && !foundToolCall) {
if (!m_isServer && messageItems && !shouldExecuteTool) {
switch (mySettings->suggestionMode()) {
case SuggestionMode::On: doQuestions = true; break;
case SuggestionMode::LocalDocsOnly: doQuestions = usedLocalDocs; break;
@ -1081,6 +1172,66 @@ void ChatLLM::reloadModel()
loadModel(m);
}
// This class discards the text within thinking tags, for use with chat names and follow-up questions.
class SimpleResponseHandler : public BaseResponseHandler {
public:
SimpleResponseHandler(ChatLLM *cllm)
: m_cllm(cllm) {}
void onSplitIntoTwo(const QString &startTag, const QString &firstBuffer, const QString &secondBuffer) override
{ /* no-op */ }
void onSplitIntoThree(const QString &secondBuffer, const QString &thirdBuffer) override
{ /* no-op */ }
void onOldResponseChunk(const QByteArray &chunk) override
{ m_response.append(chunk); }
bool onBufferResponse(const QString &response, int bufferIdx) override
{
if (bufferIdx == 1)
return true; // ignore "think" content
return onSimpleResponse(response);
}
bool onRegularResponse() override
{ return onBufferResponse(QString::fromUtf8(m_response), 0); }
bool getStopGenerating() const override
{ return m_cllm->m_stopGenerating; }
protected:
virtual bool onSimpleResponse(const QString &response) = 0;
protected:
ChatLLM *m_cllm;
QByteArray m_response;
};
class NameResponseHandler : public SimpleResponseHandler {
private:
// max length of chat names, in words
static constexpr qsizetype MAX_WORDS = 3;
public:
using SimpleResponseHandler::SimpleResponseHandler;
protected:
bool onSimpleResponse(const QString &response) override
{
QTextStream stream(const_cast<QString *>(&response), QIODeviceBase::ReadOnly);
QStringList words;
while (!stream.atEnd() && words.size() < MAX_WORDS) {
QString word;
stream >> word;
words << word;
}
emit m_cllm->generatedNameChanged(words.join(u' '));
return words.size() < MAX_WORDS || stream.atEnd();
}
};
void ChatLLM::generateName()
{
Q_ASSERT(isModelLoaded());
@@ -1097,23 +1248,15 @@ void ChatLLM::generateName()
return;
}
QByteArray response; // raw UTF-8
auto handleResponse = [this, &response](LLModel::Token token, std::string_view piece) -> bool {
Q_UNUSED(token)
response.append(piece.data(), piece.size());
QStringList words = QString::fromUtf8(response).simplified().split(u' ', Qt::SkipEmptyParts);
emit generatedNameChanged(words.join(u' '));
return words.size() <= 3;
};
NameResponseHandler respHandler(this);
try {
m_llModelInfo.model->prompt(
applyJinjaTemplate(forkConversation(chatNamePrompt)),
[this](auto &&...) { return !m_stopGenerating; },
handleResponse,
promptContextFromSettings(m_modelInfo)
promptModelWithTools(
m_llModelInfo.model.get(),
/*promptCallback*/ [this](auto &&...) { return !m_stopGenerating; },
respHandler, promptContextFromSettings(m_modelInfo),
applyJinjaTemplate(forkConversation(chatNamePrompt)).c_str(),
{ ToolCallConstants::ThinkTagName }
);
} catch (const std::exception &e) {
qWarning() << "ChatLLM failed to generate name:" << e.what();
@@ -1125,13 +1268,43 @@ void ChatLLM::handleChatIdChanged(const QString &id)
m_llmThread.setObjectName(id);
}
void ChatLLM::generateQuestions(qint64 elapsed)
{
class QuestionResponseHandler : public SimpleResponseHandler {
public:
using SimpleResponseHandler::SimpleResponseHandler;
protected:
bool onSimpleResponse(const QString &response) override
{
auto responseUtf8Bytes = response.toUtf8().slice(m_offset);
auto responseUtf8 = std::string(responseUtf8Bytes.begin(), responseUtf8Bytes.end());
// extract all questions from response
ptrdiff_t lastMatchEnd = -1;
auto it = std::sregex_iterator(responseUtf8.begin(), responseUtf8.end(), s_reQuestion);
auto end = std::sregex_iterator();
for (; it != end; ++it) {
auto pos = it->position();
auto len = it->length();
lastMatchEnd = pos + len;
emit m_cllm->generatedQuestionFinished(QString::fromUtf8(&responseUtf8[pos], len));
}
// remove processed input from buffer
if (lastMatchEnd != -1)
m_offset += lastMatchEnd;
return true;
}
private:
// FIXME: This only works with responses from the model in English, which is not ideal for a
// multi-language model.
// match whole question sentences
static const std::regex reQuestion(R"(\b(?:What|Where|How|Why|When|Who|Which|Whose|Whom)\b[^?]*\?)");
static inline const std::regex s_reQuestion { R"(\b(?:What|Where|How|Why|When|Who|Which|Whose|Whom)\b[^?]*\?)" };
qsizetype m_offset = 0;
};
void ChatLLM::generateQuestions(qint64 elapsed)
{
Q_ASSERT(isModelLoaded());
if (!isModelLoaded()) {
emit responseStopped(elapsed);
@@ -1149,39 +1322,17 @@ void ChatLLM::generateQuestions(qint64 elapsed)
emit generatingQuestions();
std::string response; // raw UTF-8
auto handleResponse = [this, &response](LLModel::Token token, std::string_view piece) -> bool {
Q_UNUSED(token)
// add token to buffer
response.append(piece);
// extract all questions from response
ptrdiff_t lastMatchEnd = -1;
auto it = std::sregex_iterator(response.begin(), response.end(), reQuestion);
auto end = std::sregex_iterator();
for (; it != end; ++it) {
auto pos = it->position();
auto len = it->length();
lastMatchEnd = pos + len;
emit generatedQuestionFinished(QString::fromUtf8(&response[pos], len));
}
// remove processed input from buffer
if (lastMatchEnd != -1)
response.erase(0, lastMatchEnd);
return true;
};
QuestionResponseHandler respHandler(this);
QElapsedTimer totalTime;
totalTime.start();
try {
m_llModelInfo.model->prompt(
applyJinjaTemplate(forkConversation(suggestedFollowUpPrompt)),
[this](auto &&...) { return !m_stopGenerating; },
handleResponse,
promptContextFromSettings(m_modelInfo)
promptModelWithTools(
m_llModelInfo.model.get(),
/*promptCallback*/ [this](auto &&...) { return !m_stopGenerating; },
respHandler, promptContextFromSettings(m_modelInfo),
applyJinjaTemplate(forkConversation(suggestedFollowUpPrompt)).c_str(),
{ ToolCallConstants::ThinkTagName }
);
} catch (const std::exception &e) {
qWarning() << "ChatLLM failed to generate follow-up questions:" << e.what();


@@ -2,7 +2,7 @@
#define CHATLLM_H
#include "chatmodel.h"
#include "database.h" // IWYU pragma: keep
#include "database.h"
#include "modellist.h"
#include <gpt4all-backend/llmodel.h>
@@ -10,28 +10,30 @@
#include <QByteArray>
#include <QElapsedTimer>
#include <QFileInfo>
#include <QList> // IWYU pragma: keep
#include <QList>
#include <QObject>
#include <QPointer>
#include <QString>
#include <QStringList> // IWYU pragma: keep
#include <QStringView>
#include <QThread>
#include <QVariantMap> // IWYU pragma: keep
#include <QtGlobal>
#include <QtNumeric>
#include <atomic>
#include <cstdint>
#include <memory>
#include <optional>
#include <span>
#include <string>
#include <string_view>
#include <variant>
#include <vector>
using namespace Qt::Literals::StringLiterals;
class ChatLLM;
class QDataStream;
// NOTE: values serialized to disk, do not change or reuse
enum class LLModelTypeV0 { // chat versions 2-5
MPT = 0,
@@ -88,9 +90,6 @@ inline LLModelTypeV1 parseLLModelTypeV0(int v0)
}
}
class ChatLLM;
class ChatModel;
struct LLModelInfo {
std::unique_ptr<LLModel> model;
QFileInfo fileInfo;
@@ -285,6 +284,8 @@ private:
bool m_isServer;
bool m_forceMetal;
bool m_reloadingToChangeVariant;
friend class ChatViewResponseHandler;
friend class SimpleResponseHandler;
};
#endif // CHATLLM_H


@@ -2,9 +2,11 @@
#include <QDebug>
#include <QMap>
#include <QtGlobal>
#include <QTextStream>
#include <QtLogging>
#include <exception>
QList<ResultInfo> ChatItem::consolidateSources(const QList<ResultInfo> &sources)
{
@@ -41,6 +43,12 @@ void ChatItem::serializeText(QDataStream &stream, int version)
stream << value;
}
void ChatItem::serializeThink(QDataStream &stream, int version)
{
stream << value;
stream << thinkingTime;
}
void ChatItem::serializeSubItems(QDataStream &stream, int version)
{
stream << name;
@@ -50,6 +58,7 @@ void ChatItem::serializeSubItems(QDataStream &stream, int version)
case ToolCall: { serializeToolCall(stream, version); break; }
case ToolResponse: { serializeToolResponse(stream, version); break; }
case Text: { serializeText(stream, version); break; }
case Think: { serializeThink(stream, version); break; }
case System:
case Prompt:
throw std::invalid_argument(fmt::format("cannot serialize subitem type {}", int(typ)));
@@ -162,6 +171,13 @@ bool ChatItem::deserializeResponse(QDataStream &stream, int version)
return true;
}
bool ChatItem::deserializeThink(QDataStream &stream, int version)
{
stream >> value;
stream >> thinkingTime;
return true;
}
bool ChatItem::deserializeSubItems(QDataStream &stream, int version)
{
stream >> name;
@@ -177,6 +193,7 @@ bool ChatItem::deserializeSubItems(QDataStream &stream, int version)
case ToolCall: { deserializeToolCall(stream, version); break; }
case ToolResponse: { deserializeToolResponse(stream, version); break; }
case Text: { deserializeText(stream, version); break; }
case Think: { deserializeThink(stream, version); break; }
case System:
case Prompt:
throw std::invalid_argument(fmt::format("cannot serialize subitem type {}", int(typ)));


@@ -4,32 +4,41 @@
#include "database.h"
#include "tool.h"
#include "toolcallparser.h"
#include "utils.h"
#include "utils.h" // IWYU pragma: keep
#include "xlsxtomd.h"
#include <fmt/format.h>
#include <QApplication>
#include <QAbstractListModel>
#include <QBuffer>
#include <QByteArray>
#include <QClipboard>
#include <QDataStream>
#include <QJsonDocument>
#include <QFileInfo>
#include <QGuiApplication>
#include <QIODevice>
#include <QHash>
#include <QList>
#include <QMutex>
#include <QMutexLocker> // IWYU pragma: keep
#include <QObject>
#include <QPair>
#include <QPair> // IWYU pragma: keep
#include <QString>
#include <QStringList> // IWYU pragma: keep
#include <QUrl>
#include <QVariant>
#include <QVector>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtPreprocessorSupport>
#include <QtTypes>
#include <algorithm>
#include <iterator>
#include <list>
#include <optional>
#include <ranges>
#include <span>
#include <stdexcept>
#include <utility>
#include <vector>
@@ -94,11 +103,28 @@ class MessageItem
public:
enum class Type { System, Prompt, Response, ToolResponse };
MessageItem(Type type, QString content)
: m_type(type), m_content(std::move(content)) {}
struct system_tag_t { explicit system_tag_t() = default; };
static inline constexpr system_tag_t system_tag = system_tag_t{};
MessageItem(Type type, QString content, const QList<ResultInfo> &sources, const QList<PromptAttachment> &promptAttachments)
: m_type(type), m_content(std::move(content)), m_sources(sources), m_promptAttachments(promptAttachments) {}
MessageItem(qsizetype index, Type type, QString content)
: m_index(index), m_type(type), m_content(std::move(content))
{
Q_ASSERT(type != Type::System); // use system_tag constructor
}
// Construct a system message with no index, since they are never stored in the chat
MessageItem(system_tag_t, QString content)
: m_type(Type::System), m_content(std::move(content)) {}
MessageItem(qsizetype index, Type type, QString content, const QList<ResultInfo> &sources, const QList<PromptAttachment> &promptAttachments)
: m_index(index)
, m_type(type)
, m_content(std::move(content))
, m_sources(sources)
, m_promptAttachments(promptAttachments) {}
// index of the parent ChatItem (system, prompt, response) in its container
std::optional<qsizetype> index() const { return m_index; }
Type type() const { return m_type; }
const QString &content() const { return m_content; }
@@ -126,10 +152,11 @@ public:
}
private:
Type m_type;
QString m_content;
QList<ResultInfo> m_sources;
QList<PromptAttachment> m_promptAttachments;
std::optional<qsizetype> m_index;
Type m_type;
QString m_content;
QList<ResultInfo> m_sources;
QList<PromptAttachment> m_promptAttachments;
};
Q_DECLARE_METATYPE(MessageItem)
@@ -159,8 +186,11 @@ class ChatItem : public QObject
Q_PROPERTY(bool thumbsUpState MEMBER thumbsUpState )
Q_PROPERTY(bool thumbsDownState MEMBER thumbsDownState)
// thinking
Q_PROPERTY(int thinkingTime MEMBER thinkingTime NOTIFY thinkingTimeChanged)
public:
enum class Type { System, Prompt, Response, Text, ToolCall, ToolResponse };
enum class Type { System, Prompt, Response, Text, ToolCall, ToolResponse, Think };
// tags for constructing ChatItems
struct prompt_tag_t { explicit prompt_tag_t () = default; };
@@ -169,19 +199,22 @@ public:
struct text_tag_t { explicit text_tag_t () = default; };
struct tool_call_tag_t { explicit tool_call_tag_t () = default; };
struct tool_response_tag_t { explicit tool_response_tag_t() = default; };
struct think_tag_t { explicit think_tag_t () = default; };
static inline constexpr prompt_tag_t prompt_tag = prompt_tag_t {};
static inline constexpr response_tag_t response_tag = response_tag_t {};
static inline constexpr system_tag_t system_tag = system_tag_t {};
static inline constexpr text_tag_t text_tag = text_tag_t {};
static inline constexpr tool_call_tag_t tool_call_tag = tool_call_tag_t {};
static inline constexpr tool_response_tag_t tool_response_tag = tool_response_tag_t {};
static inline constexpr think_tag_t think_tag = think_tag_t {};
public:
ChatItem(QObject *parent)
: QObject(nullptr)
{
moveToThread(parent->thread());
setParent(parent);
// setParent must be called from the thread the object lives in
QMetaObject::invokeMethod(this, [this, parent]() { this->setParent(parent); });
}
// NOTE: System messages are currently never serialized and only *stored* by the local server.
@@ -220,6 +253,10 @@ public:
: ChatItem(parent)
{ this->name = u"ToolResponse: "_s; this->value = value; }
ChatItem(QObject *parent, think_tag_t, const QString &value)
: ChatItem(parent)
{ this->name = u"Think: "_s; this->value = value; }
Type type() const
{
if (name == u"System: "_s)
@@ -234,6 +271,8 @@ public:
return Type::ToolCall;
if (name == u"ToolResponse: "_s)
return Type::ToolResponse;
if (name == u"Think: "_s)
return Type::Think;
throw std::invalid_argument(fmt::format("Chat item has unknown label: {:?}", name));
}
@@ -254,7 +293,7 @@ public:
if (type() == Type::Response) {
// We parse if this contains any part of a partial toolcall
ToolCallParser parser;
parser.update(value);
parser.update(value.toUtf8());
// If no tool call is detected, return the original value
if (parser.startIndex() < 0)
@@ -265,9 +304,11 @@ public:
return beforeToolCall;
}
// For tool calls we only return content if it is the code interpreter
if (type() == Type::Think)
return thinkContent(value);
if (type() == Type::ToolCall)
return codeInterpreterContent(value);
return toolCallContent(value);
// We don't show any of the content from the tool response in the GUI
if (type() == Type::ToolResponse)
@@ -276,10 +317,21 @@ public:
return value;
}
QString codeInterpreterContent(const QString &value) const
QString thinkContent(const QString &value) const
{
ToolCallParser parser;
parser.update(value);
parser.update(value.toUtf8());
// Extract the content
QString content = parser.toolCall();
content = content.trimmed();
return content;
}
QString toolCallContent(const QString &value) const
{
ToolCallParser parser;
parser.update(value.toUtf8());
// Extract the code
QString code = parser.toolCall();
@@ -357,6 +409,12 @@ public:
return toolCallInfo.error != ToolEnums::Error::NoError;
}
void setThinkingTime(int t)
{
thinkingTime = t;
emit thinkingTimeChanged();
}
// NB: Assumes response is not current.
static ChatItem *fromMessageInput(QObject *parent, const MessageInput &message)
{
@@ -369,7 +427,7 @@ public:
Q_UNREACHABLE();
}
MessageItem asMessageItem() const
MessageItem asMessageItem(qsizetype index) const
{
MessageItem::Type msgType;
switch (auto typ = type()) {
@@ -380,9 +438,10 @@ public:
case ToolResponse: msgType = MessageItem::Type::ToolResponse; break;
case Text:
case ToolCall:
case Think:
throw std::invalid_argument(fmt::format("cannot convert ChatItem type {} to message item", int(typ)));
}
return { msgType, flattenedContent(), sources, promptAttachments };
return { index, msgType, flattenedContent(), sources, promptAttachments };
}
static QList<ResultInfo> consolidateSources(const QList<ResultInfo> &sources);
@@ -391,6 +450,7 @@ public:
void serializeToolCall(QDataStream &stream, int version);
void serializeToolResponse(QDataStream &stream, int version);
void serializeText(QDataStream &stream, int version);
void serializeThink(QDataStream &stream, int version);
void serializeSubItems(QDataStream &stream, int version); // recursive
void serialize(QDataStream &stream, int version);
@@ -399,6 +459,7 @@ public:
bool deserializeToolCall(QDataStream &stream, int version);
bool deserializeToolResponse(QDataStream &stream, int version);
bool deserializeText(QDataStream &stream, int version);
bool deserializeThink(QDataStream &stream, int version);
bool deserializeSubItems(QDataStream &stream, int version); // recursive
bool deserialize(QDataStream &stream, int version);
@@ -406,6 +467,7 @@ Q_SIGNALS:
void contentChanged();
void isTooCallErrorChanged();
void isCurrentResponseChanged();
void thinkingTimeChanged();
public:
@@ -429,6 +491,9 @@ public:
bool stopped = false;
bool thumbsUpState = false;
bool thumbsDownState = false;
// thinking time in ms
int thinkingTime = 0;
};
class ChatModel : public QAbstractListModel
@@ -500,6 +565,7 @@ private:
return std::nullopt;
}
// FIXME(jared): this should really be done at the parent level, not the sub-item level
static std::optional<qsizetype> getPeerInternal(const MessageItem *arr, qsizetype size, qsizetype index)
{
qsizetype peer;
@@ -879,6 +945,70 @@ public:
if (changed) emit dataChanged(createIndex(index, 0), createIndex(index, 0), {NewResponseRole});
}
Q_INVOKABLE void splitThinking(const QPair<QString, QString> &split)
{
qsizetype index;
{
QMutexLocker locker(&m_mutex);
if (m_chatItems.isEmpty() || m_chatItems.cend()[-1]->type() != ChatItem::Type::Response)
throw std::logic_error("can only set thinking on a chat that ends with a response");
index = m_chatItems.count() - 1;
ChatItem *currentResponse = m_chatItems.back();
Q_ASSERT(currentResponse->isCurrentResponse);
// Create a new response container for any text and the thinking
ChatItem *newResponse = new ChatItem(this, ChatItem::response_tag);
// Add preceding text if any
if (!split.first.isEmpty()) {
ChatItem *textItem = new ChatItem(this, ChatItem::text_tag, split.first);
newResponse->subItems.push_back(textItem);
}
// Add the thinking item
Q_ASSERT(!split.second.isEmpty());
ChatItem *thinkingItem = new ChatItem(this, ChatItem::think_tag, split.second);
thinkingItem->isCurrentResponse = true;
newResponse->subItems.push_back(thinkingItem);
// Add new response and reset our value
currentResponse->subItems.push_back(newResponse);
currentResponse->value = QString();
}
emit dataChanged(createIndex(index, 0), createIndex(index, 0), {ChildItemsRole, ContentRole});
}
Q_INVOKABLE void endThinking(const QPair<QString, QString> &split, int thinkingTime)
{
qsizetype index;
{
QMutexLocker locker(&m_mutex);
if (m_chatItems.isEmpty() || m_chatItems.cend()[-1]->type() != ChatItem::Type::Response)
throw std::logic_error("can only end thinking on a chat that ends with a response");
index = m_chatItems.count() - 1;
ChatItem *currentResponse = m_chatItems.back();
Q_ASSERT(currentResponse->isCurrentResponse);
ChatItem *subResponse = currentResponse->subItems.back();
Q_ASSERT(subResponse->type() == ChatItem::Type::Response);
Q_ASSERT(subResponse->isCurrentResponse);
subResponse->setCurrentResponse(false);
ChatItem *thinkingItem = subResponse->subItems.back();
Q_ASSERT(thinkingItem->type() == ChatItem::Type::Think);
thinkingItem->setCurrentResponse(false);
thinkingItem->setValue(split.first);
thinkingItem->setThinkingTime(thinkingTime);
currentResponse->setValue(split.second);
}
emit dataChanged(createIndex(index, 0), createIndex(index, 0), {ChildItemsRole, ContentRole});
}
Q_INVOKABLE void splitToolCall(const QPair<QString, QString> &split)
{
qsizetype index;
@@ -1013,10 +1143,12 @@ public:
// A flattened version of the chat item tree used by the backend and jinja
QMutexLocker locker(&m_mutex);
std::vector<MessageItem> chatItems;
for (const ChatItem *item : m_chatItems) {
chatItems.reserve(chatItems.size() + item->subItems.size() + 1);
ranges::copy(item->subItems | views::transform(&ChatItem::asMessageItem), std::back_inserter(chatItems));
chatItems.push_back(item->asMessageItem());
for (qsizetype i : views::iota(0, m_chatItems.size())) {
auto *parent = m_chatItems.at(i);
chatItems.reserve(chatItems.size() + parent->subItems.size() + 1);
ranges::copy(parent->subItems | views::transform([&](auto *s) { return s->asMessageItem(i); }),
std::back_inserter(chatItems));
chatItems.push_back(parent->asMessageItem(i));
}
return chatItems;
}


@@ -1,29 +1,32 @@
#include "chatviewtextprocessor.h"
#include <QAbstractTextDocumentLayout>
#include <QBrush>
#include <QChar>
#include <QClipboard>
#include <QDebug>
#include <QFlag>
#include <QFont>
#include <QFontMetricsF>
#include <QGuiApplication>
#include <QList>
#include <QPainter>
#include <QList> // IWYU pragma: keep
#include <QPair>
#include <QQuickTextDocument>
#include <QRegularExpression>
#include <QStringList>
#include <QTextBlock>
#include <QTextCharFormat>
#include <QStringList> // IWYU pragma: keep
#include <QTextBlock> // IWYU pragma: keep
#include <QTextCharFormat> // IWYU pragma: keep
#include <QTextCursor>
#include <QTextDocument>
#include <QTextDocumentFragment>
#include <QTextFrame>
#include <QTextFrameFormat>
#include <QTextFrame> // IWYU pragma: keep
#include <QTextFrameFormat> // IWYU pragma: keep
#include <QTextTableCell>
#include <QVariant>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <algorithm>
#include <utility>
enum Language {
None,


@@ -3,18 +3,15 @@
#include <QColor>
#include <QObject>
#include <QQmlEngine>
#include <QQuickTextDocument> // IWYU pragma: keep
#include <QRectF>
#include <QSizeF>
#include <QQmlEngine> // IWYU pragma: keep
#include <QQuickTextDocument>
#include <QString>
#include <QSyntaxHighlighter>
#include <QTextObjectInterface>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <QtTypes>
// IWYU pragma: no_forward_declare QQuickTextDocument
class QPainter;
class QTextDocument;
class QTextFormat;
struct CodeColors {
Q_GADGET


@@ -1,14 +1,25 @@
#include "codeinterpreter.h"
#include <QJSEngine>
#include <QJSValue>
#include <QStringList>
#include <QList>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QVariant>
#include <Qt>
using namespace Qt::Literals::StringLiterals;
QString CodeInterpreter::run(const QList<ToolParam> &params, qint64 timeout)
CodeInterpreter::CodeInterpreter()
: Tool()
, m_error(ToolEnums::Error::NoError)
{
m_worker = new CodeInterpreterWorker;
connect(this, &CodeInterpreter::request, m_worker, &CodeInterpreterWorker::request, Qt::QueuedConnection);
}
void CodeInterpreter::run(const QList<ToolParam> &params)
{
m_error = ToolEnums::Error::NoError;
m_errorString = QString();
@@ -18,27 +29,24 @@ QString CodeInterpreter::run(const QList<ToolParam> &params, qint64 timeout)
&& params.first().type == ToolEnums::ParamType::String);
const QString code = params.first().value.toString();
QThread workerThread;
CodeInterpreterWorker worker;
worker.moveToThread(&workerThread);
connect(&worker, &CodeInterpreterWorker::finished, &workerThread, &QThread::quit, Qt::DirectConnection);
connect(&workerThread, &QThread::started, [&worker, code]() {
worker.request(code);
connect(m_worker, &CodeInterpreterWorker::finished, [this, params] {
m_error = m_worker->error();
m_errorString = m_worker->errorString();
emit runComplete({
ToolCallConstants::CodeInterpreterFunction,
params,
m_worker->response(),
m_error,
m_errorString
});
});
workerThread.start();
bool timedOut = !workerThread.wait(timeout);
if (timedOut) {
worker.interrupt(timeout); // thread safe
m_error = ToolEnums::Error::TimeoutError;
}
workerThread.quit();
workerThread.wait();
if (!timedOut) {
m_error = worker.error();
m_errorString = worker.errorString();
}
return worker.response();
emit request(code);
}
bool CodeInterpreter::interrupt()
{
return m_worker->interrupt();
}
QList<ToolParamInfo> CodeInterpreter::parameters() const
@@ -89,20 +97,20 @@ QString CodeInterpreter::exampleReply() const
CodeInterpreterWorker::CodeInterpreterWorker()
: QObject(nullptr)
, m_engine(new QJSEngine(this))
{
}
moveToThread(&m_thread);
void CodeInterpreterWorker::request(const QString &code)
{
JavaScriptConsoleCapture consoleCapture;
QJSValue consoleInternalObject = m_engine.newQObject(&consoleCapture);
m_engine.globalObject().setProperty("console_internal", consoleInternalObject);
QJSValue consoleInternalObject = m_engine->newQObject(&m_consoleCapture);
m_engine->globalObject().setProperty("console_internal", consoleInternalObject);
// preprocess console.log args in JS since Q_INVOKE doesn't support varargs
auto consoleObject = m_engine.evaluate(uR"(
auto consoleObject = m_engine->evaluate(uR"(
class Console {
log(...args) {
if (args.length && typeof args[0] === 'string')
if (args.length == 0)
return;
if (args.length >= 2 && typeof args[0] === 'string')
throw new Error('console.log string formatting not supported');
let cat = '';
for (const arg of args) {
@@ -114,15 +122,28 @@ void CodeInterpreterWorker::request(const QString &code)
new Console();
)"_s);
m_engine.globalObject().setProperty("console", consoleObject);
m_engine->globalObject().setProperty("console", consoleObject);
m_thread.start();
}
const QJSValue result = m_engine.evaluate(code);
void CodeInterpreterWorker::reset()
{
m_response.clear();
m_error = ToolEnums::Error::NoError;
m_errorString.clear();
m_consoleCapture.output.clear();
m_engine->setInterrupted(false);
}
void CodeInterpreterWorker::request(const QString &code)
{
reset();
const QJSValue result = m_engine->evaluate(code);
QString resultString;
if (m_engine.isInterrupted()) {
resultString = QString("Error: code execution was timed out as it exceeded %1 ms. Code must be written to ensure execution does not timeout.").arg(m_timeout);
} else if (result.isError()) {
if (m_engine->isInterrupted()) {
resultString = QString("Error: code execution was interrupted or timed out.");
} else if (result.isError()) {
// NOTE: We purposely do not set the m_error or m_errorString for the code interpreter since
// we *want* the model to see the response has an error so it can hopefully correct itself. The
// error member variables are intended for tools that have error conditions that cannot be corrected.
@@ -143,9 +164,16 @@ void CodeInterpreterWorker::request(const QString &code)
}
if (resultString.isEmpty())
resultString = consoleCapture.output;
else if (!consoleCapture.output.isEmpty())
resultString += "\n" + consoleCapture.output;
resultString = m_consoleCapture.output;
else if (!m_consoleCapture.output.isEmpty())
resultString += "\n" + m_consoleCapture.output;
m_response = resultString;
emit finished();
}
bool CodeInterpreterWorker::interrupt()
{
m_error = ToolEnums::Error::TimeoutError;
m_engine->setInterrupted(true);
return true;
}


@@ -4,10 +4,13 @@
#include "tool.h"
#include "toolcallparser.h"
#include <QJSEngine>
#include <QObject>
#include <QString>
#include <QtGlobal>
#include <QThread>
#include <QtAssert>
class QJSEngine;
class JavaScriptConsoleCapture : public QObject
{
@@ -39,32 +42,37 @@ public:
CodeInterpreterWorker();
virtual ~CodeInterpreterWorker() {}
void reset();
QString response() const { return m_response; }
void request(const QString &code);
void interrupt(qint64 timeout) { m_timeout = timeout; m_engine.setInterrupted(true); }
ToolEnums::Error error() const { return m_error; }
QString errorString() const { return m_errorString; }
bool interrupt();
public Q_SLOTS:
void request(const QString &code);
Q_SIGNALS:
void finished();
private:
qint64 m_timeout = 0;
QJSEngine m_engine;
QString m_response;
ToolEnums::Error m_error = ToolEnums::Error::NoError;
QString m_errorString;
QThread m_thread;
JavaScriptConsoleCapture m_consoleCapture;
QJSEngine *m_engine = nullptr;
};
class CodeInterpreter : public Tool
{
Q_OBJECT
public:
explicit CodeInterpreter() : Tool(), m_error(ToolEnums::Error::NoError) {}
explicit CodeInterpreter();
virtual ~CodeInterpreter() {}
QString run(const QList<ToolParam> &params, qint64 timeout = 2000) override;
void run(const QList<ToolParam> &params) override;
bool interrupt() override;
ToolEnums::Error error() const override { return m_error; }
QString errorString() const override { return m_errorString; }
@@ -77,9 +85,13 @@ public:
QString exampleCall() const override;
QString exampleReply() const override;
Q_SIGNALS:
void request(const QString &code);
private:
ToolEnums::Error m_error = ToolEnums::Error::NoError;
QString m_errorString;
CodeInterpreterWorker *m_worker;
};
#endif // CODEINTERPRETER_H


@@ -0,0 +1,7 @@
#pragma once
#define APP_VERSION "@APP_VERSION@"
#define G4A_CONFIG(name) (1/G4A_CONFIG_##name == 1)
#define G4A_CONFIG_force_d3d12 @GPT4ALL_CONFIG_FORCE_D3D12@


@@ -1,19 +1,21 @@
#include "database.h"
#include "mysettings.h"
#include "utils.h"
#include "utils.h" // IWYU pragma: keep
#include <duckx/duckx.hpp>
#include <fmt/format.h>
#include <usearch/index.hpp>
#include <usearch/index_plugins.hpp>
#include <QByteArrayView>
#include <QDebug>
#include <QDir>
#include <QDirIterator>
#include <QFile>
#include <QFileSystemWatcher>
#include <QFlags>
#include <QIODevice>
#include <QKeyValueIterator>
#include <QRegularExpression>
#include <QSqlError>
#include <QSqlQuery>
@@ -22,8 +24,9 @@
#include <QMap>
#include <QUtf8StringView>
#include <QVariant>
#include <Qt>
#include <QtLogging>
#include <QtMinMax>
#include <QtTypes>
#include <algorithm>
#include <cmath>
@@ -46,6 +49,7 @@ namespace us = unum::usearch;
//#define DEBUG
//#define DEBUG_EXAMPLE
namespace {
/* QFile that checks input for binary data. If seen, it fails the read and returns true
@@ -1111,9 +1115,9 @@ class DocumentReader {
public:
struct Metadata { QString title, author, subject, keywords; };
static std::unique_ptr<DocumentReader> fromDocument(const DocumentInfo &info);
static std::unique_ptr<DocumentReader> fromDocument(DocumentInfo info);
const DocumentInfo &doc () const { return *m_info; }
const DocumentInfo &doc () const { return m_info; }
const Metadata &metadata() const { return m_metadata; }
const std::optional<QString> &word () const { return m_word; }
const std::optional<QString> &nextWord() { m_word = advance(); return m_word; }
@@ -1123,8 +1127,8 @@ public:
virtual ~DocumentReader() = default;
protected:
explicit DocumentReader(const DocumentInfo &info)
: m_info(&info) {}
explicit DocumentReader(DocumentInfo info)
: m_info(std::move(info)) {}
void postInit(Metadata &&metadata = {})
{
@@ -1134,9 +1138,9 @@ protected:
virtual std::optional<QString> advance() = 0;
const DocumentInfo *m_info;
Metadata m_metadata;
std::optional<QString> m_word;
DocumentInfo m_info;
Metadata m_metadata;
std::optional<QString> m_word;
};
namespace {
@@ -1144,8 +1148,8 @@ namespace {
#ifdef GPT4ALL_USE_QTPDF
class PdfDocumentReader final : public DocumentReader {
public:
explicit PdfDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit PdfDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
{
QString path = info.file.canonicalFilePath();
if (m_doc.load(path) != QPdfDocument::Error::None)
@@ -1185,8 +1189,8 @@ private:
#else
class PdfDocumentReader final : public DocumentReader {
public:
explicit PdfDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit PdfDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
{
QString path = info.file.canonicalFilePath();
m_doc = FPDF_LoadDocument(path.toUtf8().constData(), nullptr);
@@ -1209,7 +1213,6 @@ public:
FPDF_ClosePage(m_page);
if (m_doc)
FPDF_CloseDocument(m_doc);
FPDF_DestroyLibrary();
}
int page() const override { return m_currentPage; }
@@ -1224,7 +1227,7 @@ private:
return std::nullopt;
if (m_page)
FPDF_ClosePage(m_page);
FPDF_ClosePage(std::exchange(m_page, nullptr));
m_page = FPDF_LoadPage(m_doc, m_currentPage++);
if (!m_page)
throw std::runtime_error("Failed to load page.");
@@ -1278,8 +1281,8 @@ private:
class WordDocumentReader final : public DocumentReader {
public:
explicit WordDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit WordDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
, m_doc(info.file.canonicalFilePath().toStdString())
{
m_doc.open();
@@ -1371,8 +1374,8 @@ protected:
class TxtDocumentReader final : public DocumentReader {
public:
explicit TxtDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit TxtDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
, m_file(info.file.canonicalFilePath())
{
if (!m_file.open(QIODevice::ReadOnly))
@@ -1413,13 +1416,13 @@ protected:
} // namespace
std::unique_ptr<DocumentReader> DocumentReader::fromDocument(const DocumentInfo &doc)
std::unique_ptr<DocumentReader> DocumentReader::fromDocument(DocumentInfo doc)
{
if (doc.isPdf())
return std::make_unique<PdfDocumentReader>(doc);
return std::make_unique<PdfDocumentReader>(std::move(doc));
if (doc.isDocx())
return std::make_unique<WordDocumentReader>(doc);
return std::make_unique<TxtDocumentReader>(doc);
return std::make_unique<WordDocumentReader>(std::move(doc));
return std::make_unique<TxtDocumentReader>(std::move(doc));
}
ChunkStreamer::ChunkStreamer(Database *database)
@@ -1427,12 +1430,12 @@ ChunkStreamer::ChunkStreamer(Database *database)
ChunkStreamer::~ChunkStreamer() = default;
void ChunkStreamer::setDocument(const DocumentInfo &doc, int documentId, const QString &embeddingModel)
void ChunkStreamer::setDocument(DocumentInfo doc, int documentId, const QString &embeddingModel)
{
auto docKey = doc.key();
if (!m_docKey || *m_docKey != docKey) {
m_docKey = docKey;
m_reader = DocumentReader::fromDocument(doc);
m_reader = DocumentReader::fromDocument(std::move(doc));
m_documentId = documentId;
m_embeddingModel = embeddingModel;
m_chunk.clear();
@@ -1442,7 +1445,8 @@ void ChunkStreamer::setDocument(const DocumentInfo &doc, int documentId, const Q
if (m_database->m_documentIdCache.contains(documentId)) {
QSqlQuery q(m_database->m_db);
if (!m_database->removeChunksByDocumentId(q, documentId))
handleDocumentError("ERROR: Cannot remove chunks of document", documentId, doc.file.canonicalPath(), q.lastError());
handleDocumentError("ERROR: Cannot remove chunks of document",
documentId, m_reader->doc().file.canonicalPath(), q.lastError());
}
}
}


@@ -1,7 +1,7 @@
#ifndef DATABASE_H
#define DATABASE_H
#include "embllm.h" // IWYU pragma: keep
#include "embllm.h"
#include <QByteArray>
#include <QChar>
@@ -15,11 +15,11 @@
#include <QSet>
#include <QSqlDatabase>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QUrl>
#include <QVector>
#include <QtGlobal>
#include <QVector> // IWYU pragma: keep
#include <QtAssert>
#include <atomic>
#include <cstddef>
@@ -28,7 +28,7 @@
#include <memory>
#include <optional>
#include <utility>
#include <vector>
#include <vector> // IWYU pragma: keep
using namespace Qt::Literals::StringLiterals;
@@ -39,6 +39,7 @@ class QSqlQuery;
class QTextStream;
class QTimer;
/* Version 0: GPT4All v2.4.3, full-text search
* Version 1: GPT4All v2.5.3, embeddings in hsnwlib
* Version 2: GPT4All v3.0.0, embeddings in sqlite
@@ -171,7 +172,7 @@ public:
explicit ChunkStreamer(Database *database);
~ChunkStreamer();
void setDocument(const DocumentInfo &doc, int documentId, const QString &embeddingModel);
void setDocument(DocumentInfo doc, int documentId, const QString &embeddingModel);
std::optional<DocumentInfo::key_type> currentDocKey() const;
void reset();


@@ -10,32 +10,37 @@
#include <QDebug>
#include <QGlobalStatic>
#include <QGuiApplication>
#include <QIODevice>
#include <QIODevice> // IWYU pragma: keep
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QKeyValueIterator>
#include <QLocale>
#include <QNetworkRequest>
#include <QPair>
#include <QPair> // IWYU pragma: keep
#include <QRegularExpression>
#include <QRegularExpressionMatch>
#include <QSettings>
#include <QSslConfiguration>
#include <QSslSocket>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QTextStream>
#include <QUrl>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <Qt>
#include <QtAssert>
#include <QtLogging>
#include <QtMinMax>
#include <algorithm>
#include <compare>
#include <cstddef>
#include <utility>
using namespace Qt::Literals::StringLiterals;
class MyDownload: public Download { };
Q_GLOBAL_STATIC(MyDownload, downloadInstance)
Download *Download::globalInstance()


@@ -13,10 +13,14 @@
#include <QSslError>
#include <QString>
#include <QThread>
#include <QtGlobal>
#include <QtTypes>
// IWYU pragma: no_forward_declare QFile
// IWYU pragma: no_forward_declare QList
// IWYU pragma: no_forward_declare QSslError
class QByteArray;
struct ReleaseInfo {
Q_GADGET
Q_PROPERTY(QString version MEMBER version)


@@ -1,35 +1,35 @@
#include "embllm.h"
#include "modellist.h"
#include "mysettings.h"
#include <gpt4all-backend/llmodel.h>
#include <QCoreApplication>
#include <QDebug>
#include <QFile>
#include <QFileInfo>
#include <QGuiApplication>
#include <QIODevice>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QList>
#include <QMutexLocker>
#include <QMutexLocker> // IWYU pragma: keep
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <exception>
#include <string>
#include <utility>
#include <vector>
using namespace Qt::Literals::StringLiterals;
static const QString EMBEDDING_MODEL_NAME = u"nomic-embed-text-v1.5"_s;
static const QString LOCAL_EMBEDDING_MODEL = u"nomic-embed-text-v1.5.f16.gguf"_s;
@@ -359,8 +359,11 @@ void EmbeddingLLMWorker::handleFinished()
if (retrievedData.isValid() && retrievedData.canConvert<QVector<EmbeddingChunk>>())
chunks = retrievedData.value<QVector<EmbeddingChunk>>();
QVariant response = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
QVariant response;
if (reply->error() != QNetworkReply::NoError) {
response = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
}
bool ok;
int code = response.toInt(&ok);
if (!ok || code != 200) {


@@ -5,10 +5,10 @@
#include <QMutex>
#include <QObject>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <atomic>
#include <vector>
@@ -16,6 +16,7 @@
class LLModel;
class QNetworkAccessManager;
struct EmbeddingChunk {
QString model; // TODO(jared): use to select model
int folder_id;


@@ -1,117 +1,76 @@
#include "jinja_helpers.h"
#include "utils.h"
#include <fmt/format.h>
#include <QString>
#include <QUrl>
#include <memory>
#include <vector>
#include <ranges>
#include <string>
#include <utility>
using namespace std::literals::string_view_literals;
namespace views = std::views;
using json = nlohmann::ordered_json;
JinjaResultInfo::~JinjaResultInfo() = default;
const JinjaFieldMap<ResultInfo> JinjaResultInfo::s_fields = {
{ "collection", [](auto &s) { return s.collection.toStdString(); } },
{ "path", [](auto &s) { return s.path .toStdString(); } },
{ "file", [](auto &s) { return s.file .toStdString(); } },
{ "title", [](auto &s) { return s.title .toStdString(); } },
{ "author", [](auto &s) { return s.author .toStdString(); } },
{ "date", [](auto &s) { return s.date .toStdString(); } },
{ "text", [](auto &s) { return s.text .toStdString(); } },
{ "page", [](auto &s) { return s.page; } },
{ "file_uri", [](auto &s) { return s.fileUri() .toStdString(); } },
};
JinjaPromptAttachment::~JinjaPromptAttachment() = default;
const JinjaFieldMap<PromptAttachment> JinjaPromptAttachment::s_fields = {
{ "url", [](auto &s) { return s.url.toString() .toStdString(); } },
{ "file", [](auto &s) { return s.file() .toStdString(); } },
{ "processed_content", [](auto &s) { return s.processedContent().toStdString(); } },
};
std::vector<std::string> JinjaMessage::GetKeys() const
json::object_t JinjaResultInfo::AsJson() const
{
std::vector<std::string> result;
auto &keys = this->keys();
result.reserve(keys.size());
result.assign(keys.begin(), keys.end());
return result;
return {
{ "collection", m_source->collection.toStdString() },
{ "path", m_source->path .toStdString() },
{ "file", m_source->file .toStdString() },
{ "title", m_source->title .toStdString() },
{ "author", m_source->author .toStdString() },
{ "date", m_source->date .toStdString() },
{ "text", m_source->text .toStdString() },
{ "page", m_source->page },
{ "file_uri", m_source->fileUri() .toStdString() },
};
}
auto JinjaMessage::keys() const -> const std::unordered_set<std::string_view> &
json::object_t JinjaPromptAttachment::AsJson() const
{
static const std::unordered_set<std::string_view> baseKeys
{ "role", "content" };
static const std::unordered_set<std::string_view> userKeys
{ "role", "content", "sources", "prompt_attachments" };
switch (m_item->type()) {
using enum MessageItem::Type;
case System:
case Response:
case ToolResponse:
return baseKeys;
case Prompt:
return userKeys;
break;
}
Q_UNREACHABLE();
return {
{ "url", m_attachment->url.toString() .toStdString() },
{ "file", m_attachment->file() .toStdString() },
{ "processed_content", m_attachment->processedContent().toStdString() },
};
}
bool operator==(const JinjaMessage &a, const JinjaMessage &b)
json::object_t JinjaMessage::AsJson() const
{
if (a.m_item == b.m_item)
return true;
const auto &[ia, ib] = std::tie(*a.m_item, *b.m_item);
auto type = ia.type();
if (type != ib.type() || ia.content() != ib.content())
return false;
switch (type) {
using enum MessageItem::Type;
case System:
case Response:
case ToolResponse:
return true;
case Prompt:
return ia.sources() == ib.sources() && ia.promptAttachments() == ib.promptAttachments();
break;
}
Q_UNREACHABLE();
}
const JinjaFieldMap<JinjaMessage> JinjaMessage::s_fields = {
{ "role", [](auto &m) {
switch (m.item().type()) {
json::object_t obj;
{
json::string_t role;
switch (m_item->type()) {
using enum MessageItem::Type;
case System: return "system"sv;
case Prompt: return "user"sv;
case Response: return "assistant"sv;
case ToolResponse: return "tool"sv;
break;
case System: role = "system"; break;
case Prompt: role = "user"; break;
case Response: role = "assistant"; break;
case ToolResponse: role = "tool"; break;
}
Q_UNREACHABLE();
} },
{ "content", [](auto &m) {
if (m.version() == 0 && m.item().type() == MessageItem::Type::Prompt)
return m.item().bakedPrompt().toStdString();
return m.item().content().toStdString();
} },
{ "sources", [](auto &m) {
auto sources = m.item().sources() | views::transform([](auto &r) {
return jinja2::GenericMap([map = std::make_shared<JinjaResultInfo>(r)] { return map.get(); });
});
return jinja2::ValuesList(sources.begin(), sources.end());
} },
{ "prompt_attachments", [](auto &m) {
auto attachments = m.item().promptAttachments() | views::transform([](auto &pa) {
return jinja2::GenericMap([map = std::make_shared<JinjaPromptAttachment>(pa)] { return map.get(); });
});
return jinja2::ValuesList(attachments.begin(), attachments.end());
} },
};
obj.emplace_back("role", std::move(role));
}
{
QString content;
if (m_version == 0 && m_item->type() == MessageItem::Type::Prompt) {
content = m_item->bakedPrompt();
} else {
content = m_item->content();
}
obj.emplace_back("content", content.toStdString());
}
if (m_item->type() == MessageItem::Type::Prompt) {
{
auto sources = m_item->sources() | views::transform([](auto &r) {
return JinjaResultInfo(r).AsJson();
});
obj.emplace("sources", json::array_t(sources.begin(), sources.end()));
}
{
auto attachments = m_item->promptAttachments() | views::transform([](auto &pa) {
return JinjaPromptAttachment(pa).AsJson();
});
obj.emplace("prompt_attachments", json::array_t(attachments.begin(), attachments.end()));
}
}
return obj;
}


@@ -3,47 +3,21 @@
#include "chatmodel.h"
#include "database.h"
#include <jinja2cpp/value.h>
#include <nlohmann/json.hpp>
#include <functional>
#include <ranges>
#include <string>
#include <string_view>
#include <unordered_map>
#include <unordered_set>
#include <QtTypes> // IWYU pragma: keep
#include <QtGlobal>
// IWYU pragma: no_forward_declare MessageItem
// IWYU pragma: no_forward_declare PromptAttachment
// IWYU pragma: no_forward_declare ResultInfo
namespace views = std::views;
using json = nlohmann::ordered_json;
template <typename T>
using JinjaFieldMap = std::unordered_map<std::string_view, std::function<jinja2::Value (const T &)>>;
template <typename Derived>
class JinjaComparable : public jinja2::IMapItemAccessor {
class JinjaHelper {
public:
JinjaComparable() = default;
bool IsEqual(const jinja2::IComparable &other) const override;
private:
Q_DISABLE_COPY_MOVE(JinjaComparable)
};
template <typename Derived>
class JinjaHelper : public JinjaComparable<Derived> {
public:
size_t GetSize() const override
{ return Derived::s_fields.size(); }
bool HasValue(const std::string &name) const override
{ return Derived::s_fields.contains(name); }
jinja2::Value GetValueByName(const std::string &name) const override;
std::vector<std::string> GetKeys() const override
{ auto keys = views::elements<0>(Derived::s_fields); return { keys.begin(), keys.end() }; }
json::object_t AsJson() const { return static_cast<const Derived *>(this)->AsJson(); }
};
class JinjaResultInfo : public JinjaHelper<JinjaResultInfo> {
@@ -51,18 +25,10 @@ public:
explicit JinjaResultInfo(const ResultInfo &source) noexcept
: m_source(&source) {}
~JinjaResultInfo() override;
const ResultInfo &value() const { return *m_source; }
friend bool operator==(const JinjaResultInfo &a, const JinjaResultInfo &b)
{ return a.m_source == b.m_source || *a.m_source == *b.m_source; }
json::object_t AsJson() const;
private:
static const JinjaFieldMap<ResultInfo> s_fields;
const ResultInfo *m_source;
friend class JinjaHelper<JinjaResultInfo>;
};
class JinjaPromptAttachment : public JinjaHelper<JinjaPromptAttachment> {
@@ -70,18 +36,10 @@ public:
explicit JinjaPromptAttachment(const PromptAttachment &attachment) noexcept
: m_attachment(&attachment) {}
~JinjaPromptAttachment() override;
const PromptAttachment &value() const { return *m_attachment; }
friend bool operator==(const JinjaPromptAttachment &a, const JinjaPromptAttachment &b)
{ return a.m_attachment == b.m_attachment || *a.m_attachment == *b.m_attachment; }
json::object_t AsJson() const;
private:
static const JinjaFieldMap<PromptAttachment> s_fields;
const PromptAttachment *m_attachment;
friend class JinjaHelper<JinjaPromptAttachment>;
};
class JinjaMessage : public JinjaHelper<JinjaMessage> {
@@ -89,28 +47,9 @@ public:
explicit JinjaMessage(uint version, const MessageItem &item) noexcept
: m_version(version), m_item(&item) {}
const JinjaMessage &value () const { return *this; }
uint version() const { return m_version; }
const MessageItem &item () const { return *m_item; }
size_t GetSize() const override { return keys().size(); }
bool HasValue(const std::string &name) const override { return keys().contains(name); }
jinja2::Value GetValueByName(const std::string &name) const override
{ return HasValue(name) ? JinjaHelper::GetValueByName(name) : jinja2::EmptyValue(); }
std::vector<std::string> GetKeys() const override;
json::object_t AsJson() const;
private:
auto keys() const -> const std::unordered_set<std::string_view> &;
private:
static const JinjaFieldMap<JinjaMessage> s_fields;
uint m_version;
uint m_version;
const MessageItem *m_item;
friend class JinjaHelper<JinjaMessage>;
friend bool operator==(const JinjaMessage &a, const JinjaMessage &b);
};
#include "jinja_helpers.inl"


@@ -1,17 +0,0 @@
template <typename D>
bool JinjaComparable<D>::IsEqual(const jinja2::IComparable &other) const
{
if (auto *omsg = dynamic_cast<const D *>(&other))
return *static_cast<const D *>(this) == *omsg;
return false;
}
template <typename D>
jinja2::Value JinjaHelper<D>::GetValueByName(const std::string &name) const
{
if (auto it = D::s_fields.find(name); it != D::s_fields.end()) {
auto [_, func] = *it;
return func(static_cast<const D *>(this)->value());
}
return jinja2::EmptyValue();
}


@@ -2,29 +2,15 @@
#include "jinja_replacements.h"
#include <utility>
// This is a list of prompt templates known to GPT4All and their associated replacements which are automatically used
// instead when loading the chat template from GGUF. These exist for two primary reasons:
// - HuggingFace model authors make ugly chat templates because they do not expect the end user to see them;
// - and our Jinja2Cpp-based template parsing is not fully compatible with HuggingFace transformers and jinja2.
// Below is a list of known incompatibilities with the official HF jinja2 implementation. These are not all necessarily
// reflected in the below substitution list, and this cannot be an exhaustive list because there are a plethora of edge
// cases in template parsing in which jinja2 and Jinja2Cpp differ. These are differences that could be reasonably
// expected to affect chat templates that could be seen in the wild, or that cause a crash:
// - Jinja2Cpp crashes (in debug builds) if given the template `a[""(`
// - Jinja2Cpp does not support these jinja2 constructs:
// - `is not none`
// - list slicing, e.g. `messages[1:]`
// - the jinja2.ext.loopcontrols extension, which HF enables by default
// - a missing space after a quote in substitution (e.g. `{{ 'foo'}}`), which *has* been seen in the wild
// - GPT4All does not currently support these HuggingFace template features:
// - customized "tojson" filter (we provide the built-in Jinja2Cpp one)
// - the AssistantTracker extension
// - and chat templates occasionally use features we do not support. This is less true now that we use minja.
// The substitution list.
// For templates that apply to models listed in models3.json, these should be copied there as well for best
// compatibility with older versions of GPT4All.
const std::unordered_map<std::string_view, std::string_view> CHAT_TEMPLATE_SUBSTITUTIONS {
// calme-2.1-phi3.5-4b.Q6_K.gguf (reported by ThilotE on Discord), Phi-3.5-mini-instruct-Q4_0.gguf (nomic-ai/gpt4all#3345)
@@ -52,6 +38,30 @@ const std::unordered_map<std::string_view, std::string_view> CHAT_TEMPLATE_SUBST
{{- '<|assistant|>\n' }}
{%- else %}
{{- eos_token }}
{%- endif %})TEMPLATE",
},
// DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf
{
// original
R"TEMPLATE({% if not add_generation_prompt is defined %}{% set add_generation_prompt = false %}{% endif %}{% set ns = namespace(is_first=false, is_tool=false, is_output_first=true, system_prompt='') %}{%- for message in messages %}{%- if message['role'] == 'system' %}{% set ns.system_prompt = message['content'] %}{%- endif %}{%- endfor %}{{bos_token}}{{ns.system_prompt}}{%- for message in messages %}{%- if message['role'] == 'user' %}{%- set ns.is_tool = false -%}{{'<User>' + message['content']}}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is none %}{%- set ns.is_tool = false -%}{%- for tool in message['tool_calls']%}{%- if not ns.is_first %}{{'<Assistant><tool▁calls▁begin><tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<tool▁call▁end>'}}{%- set ns.is_first = true -%}{%- else %}{{'\n' + '<tool▁call▁begin>' + tool['type'] + '<tool▁sep>' + tool['function']['name'] + '\n' + '```json' + '\n' + tool['function']['arguments'] + '\n' + '```' + '<tool▁call▁end>'}}{{'<tool▁calls▁end><end▁of▁sentence>'}}{%- endif %}{%- endfor %}{%- endif %}{%- if message['role'] == 'assistant' and message['content'] is not none %}{%- if ns.is_tool %}{{'<tool▁outputs▁end>' + message['content'] + '<end▁of▁sentence>'}}{%- set ns.is_tool = false -%}{%- else %}{% set content = message['content'] %}{% if '</think>' in content %}{% set content = content.split('</think>')[-1] %}{% endif %}{{'<Assistant>' + content + '<end▁of▁sentence>'}}{%- endif %}{%- endif %}{%- if message['role'] == 'tool' %}{%- set ns.is_tool = true -%}{%- if ns.is_output_first %}{{'<tool▁outputs▁begin><tool▁output▁begin>' + message['content'] + '<tool▁output▁end>'}}{%- set ns.is_output_first = false %}{%- else %}{{'\n<tool▁output▁begin>' + message['content'] + '<tool▁output▁end>'}}{%- endif %}{%- endif %}{%- endfor -%}{% if ns.is_tool %}{{'<tool▁outputs▁end>'}}{% endif %}{% if add_generation_prompt and not ns.is_tool %}{{'<Assistant>'}}{% endif %})TEMPLATE",
// replacement
R"TEMPLATE({%- if not add_generation_prompt is defined %}
{%- set add_generation_prompt = false %}
{%- endif %}
{%- if messages[0]['role'] == 'system' %}
{{- messages[0]['content'] }}
{%- endif %}
{%- for message in messages %}
{%- if message['role'] == 'user' %}
{{- '<User>' + message['content'] }}
{%- endif %}
{%- if message['role'] == 'assistant' %}
{%- set content = message['content'] | regex_replace('^[\\s\\S]*</think>', '') %}
{{- '<Assistant>' + content + '<endofsentence>' }}
{%- endif %}
{%- endfor -%}
{%- if add_generation_prompt %}
{{- '<Assistant>' }}
{%- endif %})TEMPLATE",
},
// gemma-2-9b-it-Q4_0.gguf (nomic-ai/gpt4all#3282)
@@ -106,11 +116,31 @@ const std::unordered_map<std::string_view, std::string_view> CHAT_TEMPLATE_SUBST
{%- elif message['role'] == 'system' %}
{{- '<|system|>\n' + message['content'] + eos_token }}
{%- elif message['role'] == 'assistant' %}
{{- '<|assistant|>\n' + message['content'] + eos_token }}
{{- '<|assistant|>\n' + message['content'] + eos_token }}
{%- endif %}
{%- if loop.last and add_generation_prompt %}
{{- '<|assistant|>' }}
{%- endif %}
{%- endfor %})TEMPLATE",
},
// granite-3.1-3b-a800m-instruct-Q4_0.gguf, granite-3.1-8b-instruct-Q4_0.gguf (nomic-ai/gpt4all#3471)
{
// original
R"TEMPLATE({%- if messages[0]['role'] == 'system' %}{%- set system_message = messages[0]['content'] %}{%- set loop_messages = messages[1:] %}{%- else %}{%- set system_message = "Knowledge Cutoff Date: April 2024. You are Granite, developed by IBM." %}{%- if tools and documents %}{%- set system_message = system_message + " You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request. Write the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data." %}{%- elif tools %}{%- set system_message = system_message + " You are a helpful AI assistant with access to the following tools. When a tool is required to answer the user's query, respond with <|tool_call|> followed by a JSON list of tools used. If a tool does not exist in the provided list of tools, notify the user that you do not have the ability to fulfill the request." %}{%- elif documents %}{%- set system_message = system_message + " Write the response to the user's input by strictly aligning with the facts in the provided documents. If the information needed to answer the question is not available in the documents, inform the user that the question cannot be answered based on the available data." %}{%- else %}{%- set system_message = system_message + " You are a helpful AI assistant." %}{%- endif %}{%- if controls and 'citations' in controls and documents %}{%- set system_message = system_message + ' In your response, use the symbols <co> and </co> to indicate when a fact comes from a document in the search result, e.g <co>0</co> for a fact from document 0. 
Afterwards, list all the citations with their corresponding documents in an ordered list.' %}{%- endif %}{%- if controls and 'hallucinations' in controls and documents %}{%- set system_message = system_message + ' Finally, after the response is written, include a numbered list of sentences from the response that are potentially hallucinated and not based in the documents.' %}{%- endif %}{%- set loop_messages = messages %}{%- endif %}{{- '<|start_of_role|>system<|end_of_role|>' + system_message + '<|end_of_text|> ' }}{%- if tools %}{{- '<|start_of_role|>tools<|end_of_role|>' }}{{- tools | tojson(indent=4) }}{{- '<|end_of_text|> ' }}{%- endif %}{%- if documents %}{{- '<|start_of_role|>documents<|end_of_role|>' }}{%- for document in documents %}{{- 'Document ' + loop.index0 | string + ' ' }}{{- document['text'] }}{%- if not loop.last %}{{- ' '}}{%- endif%}{%- endfor %}{{- '<|end_of_text|> ' }}{%- endif %}{%- for message in loop_messages %}{{- '<|start_of_role|>' + message['role'] + '<|end_of_role|>' + message['content'] + '<|end_of_text|> ' }}{%- if loop.last and add_generation_prompt %}{{- '<|start_of_role|>assistant' }}{%- if controls %}{{- ' ' + controls | tojson()}}{%- endif %}{{- '<|end_of_role|>' }}{%- endif %}{%- endfor %})TEMPLATE",
// replacement
R"TEMPLATE({%- if messages[0]['role'] == 'system' %}
{%- set system_message = messages[0]['content'] %}
{%- set loop_messages = messages[1:] %}
{%- else %}
{%- set system_message = "Knowledge Cutoff Date: April 2024. You are Granite, developed by IBM. You are a helpful AI assistant." %}
{%- set loop_messages = messages %}
{%- endif %}
{{- '<|start_of_role|>system<|end_of_role|>' + system_message + '<|end_of_text|> ' }}
{%- for message in loop_messages %}
{{- '<|start_of_role|>' + message['role'] + '<|end_of_role|>' + message['content'] + '<|end_of_text|> ' }}
{%- if loop.last and add_generation_prompt %}
{{- '<|start_of_role|>assistant<|end_of_role|>' }}
{%- endif %}
{%- endfor %})TEMPLATE",
},
// Hermes-3-Llama-3.2-3B.Q4_0.gguf, mistral-7b-openorca.gguf2.Q4_0.gguf
@@ -611,6 +641,70 @@ const std::unordered_map<std::string_view, std::string_view> CHAT_TEMPLATE_SUBST
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- endif %})TEMPLATE",
},
// OLMoE-1B-7B-0125-Instruct-Q4_0.gguf (nomic-ai/gpt4all#3471)
{
// original
R"TEMPLATE({{ bos_token }}{% for message in messages %}{% if message['role'] == 'system' %}{{ '<|system|>
' + message['content'] + '
' }}{% elif message['role'] == 'user' %}{{ '<|user|>
' + message['content'] + '
' }}{% elif message['role'] == 'assistant' %}{% if not loop.last %}{{ '<|assistant|>
' + message['content'] + eos_token + '
' }}{% else %}{{ '<|assistant|>
' + message['content'] + eos_token }}{% endif %}{% endif %}{% if loop.last and add_generation_prompt %}{{ '<|assistant|>
' }}{% endif %}{% endfor %})TEMPLATE",
// replacement
R"TEMPLATE({{- bos_token }}
{%- for message in messages %}
{%- if message['role'] == 'system' %}
{{- '<|system|>\n' + message['content'] + '\n' }}
{%- elif message['role'] == 'user' %}
{{- '<|user|>\n' + message['content'] + '\n' }}
{%- elif message['role'] == 'assistant' %}
{%- if not loop.last %}
{{- '<|assistant|>\n' + message['content'] + eos_token + '\n' }}
{%- else %}
{{- '<|assistant|>\n' + message['content'] + eos_token }}
{%- endif %}
{%- endif %}
{%- if loop.last and add_generation_prompt %}
{{- '<|assistant|>\n' }}
{%- endif %}
{%- endfor %})TEMPLATE",
},
// OLMoE-1B-7B-0924-Instruct-Q4_0.gguf (nomic-ai/gpt4all#3471)
{
// original
R"TEMPLATE({{ bos_token }}{% for message in messages %}
{% if message['role'] == 'system' %}
{{ '<|system|>
' + message['content'] }}
{% elif message['role'] == 'user' %}
{{ '<|user|>
' + message['content'] }}
{% elif message['role'] == 'assistant' %}
{{ '<|assistant|>
' + message['content'] + eos_token }}
{% endif %}
{% if loop.last and add_generation_prompt %}
{{ '<|assistant|>' }}
{% endif %}
{% endfor %})TEMPLATE",
// replacement
R"TEMPLATE({{- bos_token }}
{%- for message in messages %}
{%- if message['role'] == 'system' %}
{{- '<|system|>\n' + message['content'] }}
{%- elif message['role'] == 'user' %}
{{- '<|user|>\n' + message['content'] }}
{%- elif message['role'] == 'assistant' %}
{{- '<|assistant|>\n' + message['content'] + eos_token }}
{%- endif %}
{%- if loop.last and add_generation_prompt %}
{{- '<|assistant|>' }}
{%- endif %}
{%- endfor %})TEMPLATE",
},
// Phi-3.1-mini-128k-instruct-Q4_0.gguf (nomic-ai/gpt4all#3346)
{
@@ -626,7 +720,7 @@ const std::unordered_map<std::string_view, std::string_view> CHAT_TEMPLATE_SUBST
// replacement
R"TEMPLATE({%- for message in messages %}
{%- if message['role'] == 'system' %}
{{-'<|system|>\n' + message['content'] + '<|end|>\n'}}
{{- '<|system|>\n' + message['content'] + '<|end|>\n' }}
{%- elif message['role'] == 'user' %}
{{- '<|user|>\n' + message['content'] + '<|end|>\n' }}
{%- elif message['role'] == 'assistant' %}


@@ -12,6 +12,9 @@
#include <QSettings>
#include <QUrl>
#include <QtLogging>
#include <QtSystemDetection>
#include <string>
#ifdef GPT4ALL_OFFLINE_INSTALLER
# include <QDesktopServices>
@@ -25,6 +28,7 @@
using namespace Qt::Literals::StringLiterals;
class MyLLM: public LLM { };
Q_GLOBAL_STATIC(MyLLM, llmInstance)
LLM *LLM::globalInstance()


@@ -3,7 +3,8 @@
#include <QObject>
#include <QString>
#include <QtGlobal>
#include <QtTypes>
class LLM : public QObject
{


@@ -5,10 +5,14 @@
#include "mysettings.h"
#include <QCoreApplication>
#include <QDebug>
#include <QGlobalStatic>
#include <QGuiApplication>
#include <QList>
#include <QUrl>
#include <Qt>
#include <QtLogging>
class MyLocalDocs: public LocalDocs { };
Q_GLOBAL_STATIC(MyLocalDocs, localDocsInstance)


@@ -2,11 +2,14 @@
#define LOCALDOCS_H
#include "database.h"
#include "localdocsmodel.h" // IWYU pragma: keep
#include "localdocsmodel.h"
#include <QObject>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
// IWYU pragma: no_forward_declare LocalDocsModel
class LocalDocs : public QObject
{


@@ -5,11 +5,11 @@
#include <QDateTime>
#include <QMap>
#include <QVector>
#include <QtGlobal>
#include <QVector> // IWYU pragma: keep
#include <utility>
LocalDocsCollectionsModel::LocalDocsCollectionsModel(QObject *parent)
: QSortFilterProxyModel(parent)
{


@@ -4,17 +4,19 @@
#include "database.h"
#include <QAbstractListModel>
#include <QByteArray>
#include <QHash>
#include <QList>
#include <QObject>
#include <QObject> // IWYU pragma: keep
#include <QSortFilterProxyModel>
#include <QString>
#include <QVariant>
#include <Qt>
#include <functional>
class QByteArray;
class QVariant;
template <typename Key, typename T> class QHash;
class LocalDocsCollectionsModel : public QSortFilterProxyModel
{
Q_OBJECT


@@ -2,8 +2,10 @@
#include <QDateTime>
#include <QDebug>
#include <QFlags>
#include <QGlobalStatic>
#include <QIODevice>
#include <QMutexLocker> // IWYU pragma: keep
#include <QStandardPaths>
#include <cstdio>
@@ -12,6 +14,7 @@
using namespace Qt::Literals::StringLiterals;
class MyLogger: public Logger { };
Q_GLOBAL_STATIC(MyLogger, loggerInstance)
Logger *Logger::globalInstance()
@@ -62,8 +65,11 @@ void Logger::messageHandler(QtMsgType type, const QMessageLogContext &, const QS
}
// Get time and date
auto timestamp = QDateTime::currentDateTime().toString();
// Write message
const std::string out = u"[%1] (%2): %3\n"_s.arg(typeString, timestamp, msg).toStdString();
// Write message
QMutexLocker locker(&logger->m_mutex);
logger->m_file.write(out.c_str());
logger->m_file.flush();
std::cerr << out;


@@ -2,19 +2,24 @@
#define LOGGER_H
#include <QFile>
#include <QMutex>
#include <QString>
#include <QtLogging>
class Logger
{
QFile m_file;
static void messageHandler(QtMsgType type, const QMessageLogContext &context, const QString &msg);
class Logger {
public:
explicit Logger();
static Logger *globalInstance();
explicit Logger();
private:
static void messageHandler(QtMsgType type, const QMessageLogContext &context, const QString &msg);
private:
QFile m_file;
QMutex m_mutex;
friend class MyLogger;
};


@@ -2,6 +2,7 @@
#include <Cocoa/Cocoa.h>
void MacOSDock::showIcon()
{
[[NSApplication sharedApplication] setActivationPolicy:NSApplicationActivationPolicyRegular];


@@ -12,18 +12,28 @@
#include <gpt4all-backend/llmodel.h>
#include <singleapplication.h>
#include <QByteArray>
#include <QCoreApplication>
#include <QFont>
#include <QFontDatabase>
#include <QList>
#include <QObject>
#include <QQmlApplicationEngine>
#include <QQmlContext>
#include <QQuickWindow>
#include <QSettings>
#include <QString>
#include <QStringList>
#include <QUrl>
#include <QVariant>
#include <QWindow>
#include <Qt>
#include <QtAssert>
#include <QtSystemDetection>
#if G4A_CONFIG(force_d3d12)
# include <QSGRendererInterface>
#endif
#ifndef GPT4ALL_USE_QTPDF
# include <fpdfview.h>
@@ -83,24 +93,27 @@ int main(int argc, char *argv[])
return 0;
}
#if G4A_CONFIG(force_d3d12)
QQuickWindow::setGraphicsApi(QSGRendererInterface::Direct3D12);
#endif
#ifdef Q_OS_LINUX
app.setWindowIcon(QIcon(":/gpt4all/icons/gpt4all.svg"));
#endif
// set search path before constructing the MySettings instance, which relies on this
QString llmodelSearchPaths = QCoreApplication::applicationDirPath();
const QString libDir = QCoreApplication::applicationDirPath() + "/../lib/";
if (LLM::directoryExists(libDir))
llmodelSearchPaths += ";" + libDir;
#if defined(Q_OS_MAC)
const QString binDir = QCoreApplication::applicationDirPath() + "/../../../";
if (LLM::directoryExists(binDir))
llmodelSearchPaths += ";" + binDir;
const QString frameworksDir = QCoreApplication::applicationDirPath() + "/../Frameworks/";
if (LLM::directoryExists(frameworksDir))
llmodelSearchPaths += ";" + frameworksDir;
{
auto appDirPath = QCoreApplication::applicationDirPath();
QStringList searchPaths {
#ifdef Q_OS_DARWIN
u"%1/../Frameworks"_s.arg(appDirPath),
#else
appDirPath,
u"%1/../lib"_s.arg(appDirPath),
#endif
LLModel::Implementation::setImplementationsSearchPath(llmodelSearchPaths.toStdString());
};
LLModel::Implementation::setImplementationsSearchPath(searchPaths.join(u';').toStdString());
}
// Set the local and language translation before the qml engine has even been started. This will
// use the default system locale unless the user has explicitly set it to use a different one.
@@ -174,5 +187,9 @@ int main(int argc, char *argv[])
// Otherwise, we can get a heap-use-after-free inside of llama.cpp.
ChatListModel::globalInstance()->destroyChats();
#ifndef GPT4ALL_USE_QTPDF
FPDF_DestroyLibrary();
#endif
return res;
}


@@ -9,9 +9,11 @@
#include <QChar>
#include <QCoreApplication>
#include <QCryptographicHash>
#include <QDebug>
#include <QDir>
#include <QDirIterator>
#include <QEvent>
#include <QEventLoop>
#include <QFile>
#include <QFileInfo>
@@ -29,14 +31,15 @@
#include <QSslConfiguration>
#include <QSslSocket>
#include <QStandardPaths>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QTextStream>
#include <QTimer>
#include <QUrl>
#include <QtAssert>
#include <QtLogging>
#include <QtPreprocessorSupport>
#include <algorithm>
#include <cstddef>
#include <iterator>
#include <optional>
#include <string>
@@ -499,10 +502,11 @@ bool GPT4AllDownloadableModels::filterAcceptsRow(int sourceRow,
bool hasDescription = !description.isEmpty();
bool isClone = sourceModel()->data(index, ModelList::IsCloneRole).toBool();
bool isDiscovered = sourceModel()->data(index, ModelList::IsDiscoveredRole).toBool();
bool isOnline = sourceModel()->data(index, ModelList::OnlineRole).toBool();
bool satisfiesKeyword = m_keywords.isEmpty();
for (const QString &k : m_keywords)
satisfiesKeyword = description.contains(k) ? true : satisfiesKeyword;
return !isDiscovered && hasDescription && !isClone && satisfiesKeyword;
return !isOnline && !isDiscovered && hasDescription && !isClone && satisfiesKeyword;
}
int GPT4AllDownloadableModels::count() const
@@ -1621,7 +1625,6 @@ void ModelList::parseModelsJsonFile(const QByteArray &jsonData, bool save)
QString requiresVersion = obj["requires"].toString();
QString versionRemoved = obj["removedIn"].toString();
QString url = obj["url"].toString();
QByteArray modelHash = obj["md5sum"].toString().toLatin1();
bool isDefault = obj.contains("isDefault") && obj["isDefault"] == u"true"_s;
bool disableGUI = obj.contains("disableGUI") && obj["disableGUI"] == u"true"_s;
QString description = obj["description"].toString();
@@ -1632,6 +1635,16 @@ void ModelList::parseModelsJsonFile(const QByteArray &jsonData, bool save)
QString type = obj["type"].toString();
bool isEmbeddingModel = obj["embeddingModel"].toBool();
QByteArray modelHash;
ModelInfo::HashAlgorithm hashAlgorithm;
if (auto it = obj.find("sha256sum"_L1); it != obj.end()) {
modelHash = it->toString().toLatin1();
hashAlgorithm = ModelInfo::Sha256;
} else {
modelHash = obj["md5sum"].toString().toLatin1();
hashAlgorithm = ModelInfo::Md5;
}
// Some models aren't supported in the GUI at all
if (disableGUI)
continue;
@@ -1660,7 +1673,7 @@ void ModelList::parseModelsJsonFile(const QByteArray &jsonData, bool save)
{ ModelList::FilenameRole, modelFilename },
{ ModelList::FilesizeRole, modelFilesize },
{ ModelList::HashRole, modelHash },
{ ModelList::HashAlgorithmRole, ModelInfo::Md5 },
{ ModelList::HashAlgorithmRole, hashAlgorithm },
{ ModelList::DefaultRole, isDefault },
{ ModelList::DescriptionRole, description },
{ ModelList::RequiresVersionRole, requiresVersion },
@@ -2344,3 +2357,56 @@ void ModelList::handleDiscoveryItemErrorOccurred(QNetworkReply::NetworkError cod
qWarning() << u"ERROR: Discovery item failed with error code \"%1-%2\""_s
.arg(code).arg(reply->errorString()).toStdString();
}
QStringList ModelList::remoteModelList(const QString &apiKey, const QUrl &baseUrl)
{
QStringList modelList;
// Create the request
QNetworkRequest request;
request.setUrl(baseUrl.resolved(QUrl("models")));
request.setHeader(QNetworkRequest::ContentTypeHeader, "application/json");
// Add the Authorization header
const QString bearerToken = QString("Bearer %1").arg(apiKey);
request.setRawHeader("Authorization", bearerToken.toUtf8());
// Make the GET request
QNetworkReply *reply = m_networkManager.get(request);
// We use a local event loop to wait for the request to complete
QEventLoop loop;
connect(reply, &QNetworkReply::finished, &loop, &QEventLoop::quit);
loop.exec();
// Check for errors
if (reply->error() == QNetworkReply::NoError) {
// Parse the JSON response
const QByteArray responseData = reply->readAll();
const QJsonDocument jsonDoc = QJsonDocument::fromJson(responseData);
if (!jsonDoc.isNull() && jsonDoc.isObject()) {
QJsonObject rootObj = jsonDoc.object();
QJsonValue dataValue = rootObj.value("data");
if (dataValue.isArray()) {
QJsonArray dataArray = dataValue.toArray();
for (const QJsonValue &val : dataArray) {
if (val.isObject()) {
QJsonObject obj = val.toObject();
const QString modelId = obj.value("id").toString();
modelList.append(modelId);
}
}
}
}
} else {
// Handle network error (e.g. print it to qDebug)
qWarning() << "Error retrieving models:" << reply->errorString();
}
// Clean up
reply->deleteLater();
return modelList;
}


@@ -5,25 +5,29 @@
#include <QByteArray>
#include <QDateTime>
#include <QHash>
#include <QLatin1StringView>
#include <QLatin1StringView> // IWYU pragma: keep
#include <QList>
#include <QMutex>
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QObject>
#include <QPair>
#include <QQmlEngine>
#include <QPair> // IWYU pragma: keep
#include <QQmlEngine> // IWYU pragma: keep
#include <QSortFilterProxyModel>
#include <QSslError>
#include <QString>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <Qt>
#include <QtGlobal>
#include <QtTypes>
#include <optional>
#include <utility>
// IWYU pragma: no_forward_declare QObject
// IWYU pragma: no_forward_declare QSslError
class QUrl;
using namespace Qt::Literals::StringLiterals;
@@ -269,7 +273,7 @@ private:
std::optional<QString> m_chatTemplate;
mutable std::optional<QString> m_modelChatTemplate;
QString m_systemMessage;
QString m_chatNamePrompt = "Describe the above conversation in seven words or less.";
QString m_chatNamePrompt = "Describe the above conversation. Your entire response must be three words or less.";
QString m_suggestedFollowUpPrompt = "Suggest three very short factual follow-up questions that have not been answered yet or cannot be found inspired by the previous conversation and excerpts.";
friend class MySettings;
friend class ModelList;
@@ -530,6 +534,8 @@ public:
Q_INVOKABLE void discoverSearch(const QString &discover);
Q_INVOKABLE QStringList remoteModelList(const QString &apiKey, const QUrl &baseUrl);
Q_SIGNALS:
void countChanged();
void installedModelsChanged();


@@ -11,22 +11,27 @@
#include <QFileInfo>
#include <QGlobalStatic>
#include <QGuiApplication>
#include <QIODevice>
#include <QIODevice> // IWYU pragma: keep
#include <QMap>
#include <QMetaObject>
#include <QStandardPaths>
#include <QThread>
#include <QUrl>
#include <QVariant>
#include <QtLogging>
#include <QtAssert>
#include <algorithm>
#include <string>
#include <thread>
#include <vector>
#if !(defined(Q_OS_MAC) && defined(__aarch64__))
#include <cstring>
#endif
using namespace Qt::Literals::StringLiterals;
// used only for settings serialization, do not translate
static const QStringList suggestionModeNames { "LocalDocsOnly", "On", "Off" };
static const QStringList chatThemeNames { "Light", "Dark", "LegacyDark" };


@@ -4,20 +4,24 @@
#include "modellist.h" // IWYU pragma: keep
#include <QDateTime>
#include <QLatin1StringView>
#include <QLatin1StringView> // IWYU pragma: keep
#include <QList>
#include <QModelIndex>
#include <QObject>
#include <QSettings>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QTranslator>
#include <QVector>
#include <QVariant>
#include <cstdint>
#include <memory>
#include <optional>
// IWYU pragma: no_forward_declare QModelIndex
class QLocale;
namespace MySettingsEnums {
Q_NAMESPACE


@@ -8,7 +8,6 @@
#include "localdocsmodel.h"
#include "modellist.h"
#include "mysettings.h"
#include "utils.h"
#include <gpt4all-backend/llmodel.h>
@@ -29,7 +28,6 @@
#include <QSslSocket>
#include <QSysInfo>
#include <Qt>
#include <QtGlobal>
#include <QtLogging>
#include <QUrl>
#include <QUuid>
@@ -49,6 +47,7 @@ using namespace Qt::Literals::StringLiterals;
#define STR_(x) #x
#define STR(x) STR_(x)
static const char MIXPANEL_TOKEN[] = "ce362e568ddaee16ed243eaffb5860a2";
#ifdef __clang__
@@ -242,6 +241,12 @@ void Network::handleJsonUploadFinished()
m_activeUploads.removeAll(jsonReply);
if (jsonReply->error() != QNetworkReply::NoError) {
qWarning() << "Request to" << jsonReply->url().toString() << "failed:" << jsonReply->errorString();
jsonReply->deleteLater();
return;
}
QVariant response = jsonReply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
bool ok;
@@ -449,6 +454,11 @@ void Network::handleIpifyFinished()
QNetworkReply *reply = qobject_cast<QNetworkReply *>(sender());
if (!reply)
return;
if (reply->error() != QNetworkReply::NoError) {
qWarning() << "Request to" << reply->url().toString() << "failed:" << reply->errorString();
reply->deleteLater();
return;
}
QVariant response = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
@@ -473,6 +483,11 @@ void Network::handleMixpanelFinished()
QNetworkReply *reply = qobject_cast<QNetworkReply *>(sender());
if (!reply)
return;
if (reply->error() != QNetworkReply::NoError) {
qWarning() << "Request to" << reply->url().toString() << "failed:" << reply->errorString();
reply->deleteLater();
return;
}
QVariant response = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
@@ -511,6 +526,11 @@ void Network::handleHealthFinished()
QNetworkReply *healthReply = qobject_cast<QNetworkReply *>(sender());
if (!healthReply)
return;
if (healthReply->error() != QNetworkReply::NoError) {
qWarning() << "Request to" << healthReply->url().toString() << "failed:" << healthReply->errorString();
healthReply->deleteLater();
return;
}
QVariant response = healthReply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());


@@ -11,7 +11,14 @@
#include <QSslError>
#include <QString>
#include <QVariant>
#include <QVector>
#include <QVariantMap> // IWYU pragma: keep
#include <QVector> // IWYU pragma: keep
// IWYU pragma: no_forward_declare QByteArray
// IWYU pragma: no_forward_declare QNetworkReply
// IWYU pragma: no_forward_declare QSslError
class QUrl;
struct KeyValue {
QString key;


@@ -4,9 +4,10 @@
#include "chatmodel.h"
#include "modellist.h"
#include "mysettings.h"
#include "utils.h"
#include "utils.h" // IWYU pragma: keep
#include <fmt/format.h>
#include <gpt4all-backend/llmodel.h>
#include <QByteArray>
#include <QCborArray>
@@ -15,32 +16,38 @@
#include <QDateTime>
#include <QDebug>
#include <QHostAddress>
#include <QHttpHeaders>
#include <QHttpServer>
#include <QHttpServerRequest>
#include <QHttpServerResponder>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QLatin1StringView>
#include <QPair>
#include <QPair> // IWYU pragma: keep
#include <QTcpServer>
#include <QVariant>
#include <Qt>
#include <QtAssert>
#include <QtCborCommon>
#include <QtGlobal>
#include <QtLogging>
#include <QtMinMax>
#include <QtPreprocessorSupport>
#include <QtTypes>
#include <cstdint>
#include <exception>
#include <iostream>
#include <optional>
#include <span>
#include <stdexcept>
#include <string>
#include <type_traits>
#include <string_view>
#include <unordered_map>
#include <utility>
#if QT_VERSION >= QT_VERSION_CHECK(6, 8, 0)
# include <QTcpServer>
#endif
#include <variant>
#include <vector>
using namespace std::string_literals;
using namespace Qt::Literals::StringLiterals;
@@ -451,23 +458,17 @@ static QJsonObject requestFromJson(const QByteArray &request)
void Server::start()
{
m_server = std::make_unique<QHttpServer>(this);
#if QT_VERSION >= QT_VERSION_CHECK(6, 8, 0)
auto *tcpServer = new QTcpServer(m_server.get());
#else
auto *tcpServer = m_server.get();
#endif
auto port = MySettings::globalInstance()->networkPort();
if (!tcpServer->listen(QHostAddress::LocalHost, port)) {
qWarning() << "Server ERROR: Failed to listen on port" << port;
return;
}
#if QT_VERSION >= QT_VERSION_CHECK(6, 8, 0)
if (!m_server->bind(tcpServer)) {
qWarning() << "Server ERROR: Failed to bind HTTP server to socket" << port;
return;
}
#endif
m_server->route("/v1/models", QHttpServerRequest::Method::Get,
[](const QHttpServerRequest &) {
@@ -607,19 +608,12 @@ void Server::start()
}
);
#if QT_VERSION >= QT_VERSION_CHECK(6, 8, 0)
m_server->addAfterRequestHandler(this, [](const QHttpServerRequest &req, QHttpServerResponse &resp) {
Q_UNUSED(req);
auto headers = resp.headers();
headers.append("Access-Control-Allow-Origin"_L1, "*"_L1);
resp.setHeaders(std::move(headers));
});
#else
m_server->afterRequest([](QHttpServerResponse &&resp) {
resp.addHeader("Access-Control-Allow-Origin", "*");
return std::move(resp);
});
#endif
connect(this, &Server::requestResetResponseState, m_chat, &Chat::resetResponseState, Qt::BlockingQueuedConnection);
}


@@ -8,7 +8,7 @@
#include <QHttpServerResponse>
#include <QJsonObject>
#include <QList>
#include <QObject>
#include <QObject> // IWYU pragma: keep
#include <QString>
#include <memory>


@@ -1,12 +1,16 @@
#include "tool.h"
#include <jinja2cpp/value.h>
#include <QDataStream>
#include <QtTypes>
#include <string>
jinja2::Value Tool::jinjaValue() const
using json = nlohmann::ordered_json;
json::object_t Tool::jinjaValue() const
{
jinja2::ValuesList paramList;
json::array_t paramList;
const QList<ToolParamInfo> p = parameters();
for (auto &info : p) {
std::string typeStr;
@@ -20,26 +24,24 @@ jinja2::Value Tool::jinjaValue() const
case Boolean: typeStr = "boolean"; break;
case Null: typeStr = "null"; break;
}
jinja2::ValuesMap infoMap {
{ "name", info.name.toStdString() },
{ "type", typeStr},
paramList.emplace_back(json::initializer_list_t {
{ "name", info.name.toStdString() },
{ "type", typeStr },
{ "description", info.description.toStdString() },
{ "required", info.required }
};
paramList.push_back(infoMap);
{ "required", info.required },
});
}
jinja2::ValuesMap params {
{ "name", name().toStdString() },
{ "description", description().toStdString() },
{ "function", function().toStdString() },
{ "parameters", paramList },
return {
{ "name", name().toStdString() },
{ "description", description().toStdString() },
{ "function", function().toStdString() },
{ "parameters", paramList },
{ "symbolicFormat", symbolicFormat().toStdString() },
{ "examplePrompt", examplePrompt().toStdString() },
{ "exampleCall", exampleCall().toStdString() },
{ "exampleReply", exampleReply().toStdString() }
{ "examplePrompt", examplePrompt().toStdString() },
{ "exampleCall", exampleCall().toStdString() },
{ "exampleReply", exampleReply().toStdString() },
};
return params;
}
void ToolCallInfo::serialize(QDataStream &stream, int version)


@@ -1,13 +1,18 @@
#ifndef TOOL_H
#define TOOL_H
#include <nlohmann/json.hpp>
#include <QList>
#include <QObject>
#include <QString>
#include <QVariant>
#include <QtGlobal>
#include <jinja2cpp/value.h>
class QDataStream;
using json = nlohmann::ordered_json;
namespace ToolEnums
{
@@ -25,6 +30,7 @@
enum class ParseState {
None,
InTagChoice,
InStart,
Partial,
Complete,
@@ -87,7 +93,8 @@
Tool() : QObject(nullptr) {}
virtual ~Tool() {}
virtual QString run(const QList<ToolParam> &params, qint64 timeout = 2000) = 0;
virtual void run(const QList<ToolParam> &params) = 0;
virtual bool interrupt() = 0;
// Tools should set these if they encounter errors. For instance, a tool depending upon the network
// might set these error variables if the network is not available.
@@ -121,7 +128,10 @@
bool operator==(const Tool &other) const { return function() == other.function(); }
jinja2::Value jinjaValue() const;
json::object_t jinjaValue() const;
Q_SIGNALS:
void runComplete(const ToolCallInfo &info);
};
#endif // TOOL_H


@@ -1,16 +1,31 @@
#include "toolcallparser.h"
#include <QDebug>
#include <QtGlobal>
#include <QtLogging>
#include "tool.h"
#include <cstddef>
#include <QChar>
#include <QSet>
#include <QtAssert>
#include <QtTypes>
#include <stdexcept>
static const QString ToolCallStart = ToolCallConstants::CodeInterpreterTag;
static const QString ToolCallEnd = ToolCallConstants::CodeInterpreterEndTag;
ToolCallParser::ToolCallParser()
: ToolCallParser(ToolCallConstants::AllTagNames)
{}
ToolCallParser::ToolCallParser(const QStringList &tagNames)
{
QSet<QChar> firstChars;
for (auto &name : tagNames) {
if (name.isEmpty())
throw std::invalid_argument("ToolCallParser(): tag names must not be empty");
if (firstChars.contains(name.at(0)))
throw std::invalid_argument("ToolCallParser(): tag names must not share any prefix");
firstChars << name.at(0);
m_possibleStartTags << makeStartTag(name).toUtf8();
m_possibleEndTags << makeEndTag (name).toUtf8();
}
reset();
}
@@ -20,36 +35,68 @@ void ToolCallParser::update(const QString &update)
resetSearchState();
// These are global states maintained between update calls
m_buffer.clear();
m_hasSplit = false;
m_buffers.clear();
m_buffers << QByteArray();
}
void ToolCallParser::resetSearchState()
{
m_expected = ToolCallStart.at(0);
m_expected = {'<'};
m_expectedIndex = 0;
m_state = ToolEnums::ParseState::None;
m_toolCall.clear();
m_startTagBuffer.clear();
m_endTagBuffer.clear();
m_currentTagIndex = -1;
m_startIndex = -1;
m_endIndex = -1;
}
bool ToolCallParser::isExpected(char c) const
{
return m_expected.isEmpty() || m_expected.contains(c);
}
void ToolCallParser::setExpected(const QList<QByteArray> &tags)
{
m_expected.clear();
for (const auto &tag : tags) {
Q_ASSERT(tag.size() > m_expectedIndex);
m_expected << tag.at(m_expectedIndex);
}
}
QByteArray ToolCallParser::startTag() const
{
if (m_currentTagIndex < 0)
return {};
return m_possibleStartTags.at(m_currentTagIndex);
}
QByteArray ToolCallParser::endTag() const
{
if (m_currentTagIndex < 0)
return {};
return m_possibleEndTags.at(m_currentTagIndex);
}
QByteArray &ToolCallParser::currentBuffer()
{
return m_buffers.last();
}
// This method is called with an arbitrary string and a current state. This method should take the
// current state into account and then parse through the update character by character to arrive at
// the new state.
void ToolCallParser::update(const QString &update)
void ToolCallParser::update(const QByteArray &update)
{
Q_ASSERT(m_state != ToolEnums::ParseState::Complete);
if (m_state == ToolEnums::ParseState::Complete) {
qWarning() << "ERROR: ToolCallParser::update already found a complete toolcall!";
return;
}
currentBuffer().append(update);
m_buffer.append(update);
for (size_t i = m_buffer.size() - update.size(); i < m_buffer.size(); ++i) {
const QChar c = m_buffer[i];
const bool foundMatch = m_expected.isNull() || c == m_expected;
for (qsizetype i = currentBuffer().size() - update.size(); i < currentBuffer().size(); ++i) {
const char c = currentBuffer()[i];
const bool foundMatch = isExpected(c);
if (!foundMatch) {
resetSearchState();
continue;
@@ -59,34 +106,58 @@ void ToolCallParser::update(const QString &update)
case ToolEnums::ParseState::None:
{
m_expectedIndex = 1;
m_expected = ToolCallStart.at(1);
m_state = ToolEnums::ParseState::InStart;
setExpected(m_possibleStartTags);
m_state = ToolEnums::ParseState::InTagChoice;
m_startIndex = i;
break;
}
case ToolEnums::ParseState::InTagChoice:
{
for (int i = 0; i < m_possibleStartTags.size(); ++i) {
const auto &tag = m_possibleStartTags.at(i);
if (c == tag.at(1)) m_currentTagIndex = i;
}
if (m_currentTagIndex >= 0) {
m_expectedIndex = 2;
setExpected({m_possibleStartTags.at(m_currentTagIndex)});
m_state = ToolEnums::ParseState::InStart;
} else
resetSearchState();
break;
}
case ToolEnums::ParseState::InStart:
{
if (m_expectedIndex == ToolCallStart.size() - 1) {
m_startTagBuffer.append(c);
const auto startTag = this->startTag();
Q_ASSERT(!startTag.isEmpty());
if (m_expectedIndex == startTag.size() - 1) {
m_expectedIndex = 0;
m_expected = QChar();
setExpected({});
m_state = ToolEnums::ParseState::Partial;
} else {
++m_expectedIndex;
m_expected = ToolCallStart.at(m_expectedIndex);
Q_ASSERT(m_currentTagIndex >= 0);
setExpected({startTag});
}
break;
}
case ToolEnums::ParseState::Partial:
{
Q_ASSERT(m_currentTagIndex >= 0);
const auto endTag = this->endTag();
Q_ASSERT(!endTag.isEmpty());
m_toolCall.append(c);
m_endTagBuffer.append(c);
if (m_endTagBuffer.size() > ToolCallEnd.size())
if (m_endTagBuffer.size() > endTag.size())
m_endTagBuffer.remove(0, 1);
if (m_endTagBuffer == ToolCallEnd) {
m_toolCall.chop(ToolCallEnd.size());
if (m_endTagBuffer == endTag) {
m_endIndex = i + 1;
m_toolCall.chop(endTag.size());
m_state = ToolEnums::ParseState::Complete;
m_endTagBuffer.clear();
}
break;
}
case ToolEnums::ParseState::Complete:
{
@@ -97,15 +168,35 @@ void ToolCallParser::update(const QString &update)
}
}
QPair<QString, QString> ToolCallParser::split()
bool ToolCallParser::splitIfPossible()
{
Q_ASSERT(m_state == ToolEnums::ParseState::Partial
|| m_state == ToolEnums::ParseState::Complete);
// The first split happens when we're in a partial state
if (m_buffers.size() < 2 && m_state == ToolEnums::ParseState::Partial) {
Q_ASSERT(m_startIndex >= 0);
const auto beforeToolCall = currentBuffer().left(m_startIndex);
const auto toolCall = currentBuffer().mid (m_startIndex);
m_buffers = { beforeToolCall, toolCall };
return true;
}
Q_ASSERT(m_startIndex >= 0);
m_hasSplit = true;
const QString beforeToolCall = m_buffer.left(m_startIndex);
m_buffer = m_buffer.mid(m_startIndex);
m_startIndex = 0;
return { beforeToolCall, m_buffer };
// The second split happens when we're in the complete state
if (m_buffers.size() < 3 && m_state == ToolEnums::ParseState::Complete) {
Q_ASSERT(m_endIndex >= 0);
const auto &beforeToolCall = m_buffers.first();
const auto toolCall = currentBuffer().left(m_endIndex);
const auto afterToolCall = currentBuffer().mid (m_endIndex);
m_buffers = { beforeToolCall, toolCall, afterToolCall };
return true;
}
return false;
}
QStringList ToolCallParser::buffers() const
{
QStringList result;
result.reserve(m_buffers.size());
for (const auto &buffer : m_buffers)
result << QString::fromUtf8(buffer);
return result;
}


@@ -1,47 +1,73 @@
#ifndef TOOLCALLPARSER_H
#define TOOLCALLPARSER_H
#include "tool.h"
#include <QChar>
#include <QByteArray>
#include <QList>
#include <QString>
#include <QPair>
#include <QStringList> // IWYU pragma: keep
namespace ToolEnums { enum class ParseState; }
using namespace Qt::Literals::StringLiterals;
namespace ToolCallConstants
{
const QString CodeInterpreterFunction = R"(javascript_interpret)";
const QString CodeInterpreterTag = R"(<)" + CodeInterpreterFunction + R"(>)";
const QString CodeInterpreterEndTag = R"(</)" + CodeInterpreterFunction + R"(>)";
const QString CodeInterpreterPrefix = CodeInterpreterTag + "\n```javascript\n";
const QString CodeInterpreterSuffix = "```\n" + CodeInterpreterEndTag;
}
class ToolCallParser
{
public:
ToolCallParser();
ToolCallParser(const QStringList &tagNames);
void reset();
void update(const QString &update);
QString buffer() const { return m_buffer; }
QString toolCall() const { return m_toolCall; }
void update(const QByteArray &update);
QString toolCall() const { return QString::fromUtf8(m_toolCall); }
int startIndex() const { return m_startIndex; }
ToolEnums::ParseState state() const { return m_state; }
QByteArray startTag() const;
QByteArray endTag() const;
// Splits
QPair<QString, QString> split();
bool hasSplit() const { return m_hasSplit; }
bool splitIfPossible();
QStringList buffers() const;
int numberOfBuffers() const { return m_buffers.size(); }
static QString makeStartTag(const QString &name) { return u"<%1>"_s .arg(name); }
static QString makeEndTag (const QString &name) { return u"</%1>"_s.arg(name); }
private:
QByteArray &currentBuffer();
void resetSearchState();
bool isExpected(char c) const;
void setExpected(const QList<QByteArray> &tags);
QChar m_expected;
QList<QByteArray> m_possibleStartTags;
QList<QByteArray> m_possibleEndTags;
QByteArray m_startTagBuffer;
QByteArray m_endTagBuffer;
int m_currentTagIndex;
QList<char> m_expected;
int m_expectedIndex;
ToolEnums::ParseState m_state;
QString m_buffer;
QString m_toolCall;
QString m_endTagBuffer;
QList<QByteArray> m_buffers;
QByteArray m_toolCall;
int m_startIndex;
bool m_hasSplit;
int m_endIndex;
};
namespace ToolCallConstants
{
// NB: the parsing code assumes the first char of the various tags differ
inline const QString CodeInterpreterFunction = u"javascript_interpret"_s;
inline const QString CodeInterpreterStartTag = ToolCallParser::makeStartTag(CodeInterpreterFunction);
inline const QString CodeInterpreterEndTag = ToolCallParser::makeEndTag (CodeInterpreterFunction);
inline const QString CodeInterpreterPrefix = u"%1\n```javascript\n"_s.arg(CodeInterpreterStartTag);
inline const QString CodeInterpreterSuffix = u"```\n%1"_s .arg(CodeInterpreterEndTag );
inline const QString ThinkTagName = u"think"_s;
inline const QString ThinkStartTag = ToolCallParser::makeStartTag(ThinkTagName);
inline const QString ThinkEndTag = ToolCallParser::makeEndTag (ThinkTagName);
inline const QStringList AllTagNames { CodeInterpreterFunction, ThinkTagName };
}
#endif // TOOLCALLPARSER_H


@@ -6,6 +6,7 @@
#include <QEvent>
#include <QGlobalStatic>
class MyToolModel: public ToolModel { };
Q_GLOBAL_STATIC(MyToolModel, toolModelInstance)
ToolModel *ToolModel::globalInstance()


@@ -9,7 +9,8 @@
#include <QList>
#include <QString>
#include <QVariant>
#include <QtGlobal>
#include <QtPreprocessorSupport>
class ToolModel : public QAbstractListModel
{


@@ -5,7 +5,7 @@
#include <QByteArray>
#include <QJsonValue>
#include <QLatin1StringView>
#include <QLatin1StringView> // IWYU pragma: keep
#include <QString>
#include <QStringView>
#include <QUtf8StringView>
@@ -13,8 +13,9 @@
#include <initializer_list>
#include <string_view>
#include <utility>
#include <utility> // IWYU pragma: keep
// IWYU pragma: no_forward_declare QJsonValue
class QJsonObject;
@@ -40,4 +41,4 @@ MAKE_FORMATTER(QVariant, value.toString().toUtf8());
// alternative to QJsonObject's initializer_list constructor that accepts Latin-1 strings
QJsonObject makeJsonObject(std::initializer_list<std::pair<QLatin1StringView, QJsonValue>> args);
#include "utils.inl"
#include "utils.inl" // IWYU pragma: export


@@ -1,5 +1,6 @@
#include <QJsonObject>
inline QJsonObject makeJsonObject(std::initializer_list<std::pair<QLatin1StringView, QJsonValue>> args)
{
QJsonObject obj;


@@ -7,15 +7,16 @@
#include <xlsxformat.h>
#include <xlsxworksheet.h>
#include <QChar>
#include <QDateTime>
#include <QDebug>
#include <QLatin1StringView>
#include <QList>
#include <QRegularExpression>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QStringView>
#include <QVariant>
#include <QtGlobal>
#include <QtLogging>
#include <memory>


@@ -4,6 +4,7 @@
class QIODevice;
class QString;
class XLSXToMD
{
public:

7 file diffs suppressed because they are too large