Compare commits

...

164 Commits
v3.4.1 ... main

Author SHA1 Message Date
Jared Van Bortel
b666d16db5
ci: update path-filtering orb to 1.3.0 (#3588)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-05-27 15:46:52 -04:00
Jared Van Bortel
cd70db29ed
readme: add Windows ARM download link
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 19:51:59 -05:00
Jared Van Bortel
fb72ba1ff5 chat: bump version to 3.10.1-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 19:44:45 -05:00
Jared Van Bortel
b968d45c11
chat: release version 3.10.0 (#3515)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 19:41:13 -05:00
Jared Van Bortel
228d5379cf
chat: cut v3.10.0 release (#3511)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 17:15:34 -05:00
Jared Van Bortel
dd820ef7c4
Italian and draft Simplified Chinese translations for v3.10.0 (#3514)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 17:14:10 -05:00
Jared Van Bortel
a7cbc8c3fd
Run lupdate before v3.10.0 release (#3512)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 15:33:27 -05:00
AT
4d171835ac
Add new remote model provider view. (#3506)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-24 14:59:53 -05:00
Lil Bob
0c28ee7059
Translations: Improve Chinese translation (#3467)
Signed-off-by: Junior2Ran <hdr01@126.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-20 20:44:28 -05:00
Jared Van Bortel
96aeb44210
backend: build with CUDA compute 5.0 support by default (#3499)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-19 11:27:06 -05:00
Jared Van Bortel
29f29773af
chat: require Qt 6.8 and fix #includes (#3498)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-18 13:59:50 -05:00
Jared Van Bortel
d8c04cead8
ci: use LLVM Clang 19 on macOS and Ubuntu (#3500)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-18 12:02:14 -05:00
Riccardo Giovanetti
b1cb46ec2a
Italian localization update (#3496)
Signed-off-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-18 11:47:39 -05:00
Jared Van Bortel
b83d06e67f translations: run lupdate -no-obsolete on Simplified Chinese
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-13 11:27:04 -05:00
Jared Van Bortel
7aa339cf40 translations: run lupdate
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-13 11:26:28 -05:00
ThiloteE
1b84182030
Add replacement templates for OLMoE and granite-3.1 (#3471)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-12 14:23:46 -05:00
ThiloteE
02e12089d3
Add Granite arch to model whitelist (#3487)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-12 14:17:49 -05:00
Jared Van Bortel
09f37a0ff8
maintainers: remove extra bracket
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-11 14:49:46 -05:00
AT
5e7e4b3f78
Fix spacing issues with deepseek models: (#3470)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
2025-02-06 12:04:32 -05:00
Jared Van Bortel
22ebd42c32
Misc fixes for undefined behavior, crashes, and build failure (#3465)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-06 11:22:52 -05:00
Jared Van Bortel
051a63f031 ci: fix scheduled workflow jobs
s/online/offline/

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-05 11:56:53 -05:00
Jared Van Bortel
26356f872e chat: bump version to 3.9.1-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 19:15:20 -05:00
Jared Van Bortel
22b8bc546f
chat: release version 3.9.0 (#3462)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 19:12:17 -05:00
Jared Van Bortel
52164142de changelog: fix missing paren
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:14:30 -05:00
Jared Van Bortel
be6347389e
chat: cut v3.9.0 release (#3461)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:09:15 -05:00
Jared Van Bortel
8c10eccd24 changelog: fix missing credit
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:08:06 -05:00
ThiloteE
6ef0bd518e
Whitelist OLMoE and Granite MoE (#3449)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 18:00:07 -05:00
Jared Van Bortel
04dc157b98
minja: update submodule to fix {# hang (redo) (#3457)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 17:30:04 -05:00
Jared Van Bortel
014bf67c63
Fix PDFium abuse that leads to a crash on Windows ARM (#3460)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 17:29:01 -05:00
Jared Van Bortel
8c9f26e249
Ignore DeepSeek-R1 "think" content in name/follow-up responses (#3458)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-04 12:08:17 -05:00
Andriy Mulyar
d4e6a6e485
Update README.md
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2025-02-03 17:40:53 -05:00
Jared Van Bortel
a081255951 Revert "minja: update submodule to fix {# hang (#3446)"
This reverts commit c38c7455d890ea242ed32bca8d9467b8768af296.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 12:44:27 -05:00
Jared Van Bortel
36c852b8be
chat: work around Direct3D 11 rendering artifacts on win11 arm (#3450)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 11:47:40 -05:00
Jared Van Bortel
c38c7455d8
minja: update submodule to fix {# hang (#3446)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 11:25:21 -05:00
Jared Van Bortel
9131f4c432
Fix index used by LocalDocs when tool calling/thinking is active (#3451)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-02-03 11:22:46 -05:00
Jared Van Bortel
6bfa014594 cmake: remove reference to deleted README
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-31 16:26:17 -05:00
Jared Van Bortel
5af31278b7
ci: update to Qt 6.8.2 (#3442)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-31 11:20:50 -05:00
Jared Van Bortel
a80f023ed2
chat: release version 3.8.0 (#3439)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 20:06:42 -05:00
Jared Van Bortel
126042fdc9 remove ancient README
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 19:27:44 -05:00
Jared Van Bortel
1f2712d57c
chat: fix emoji corruption (#3443)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 18:15:37 -05:00
Jared Van Bortel
f8f78c6677 ci: allow generate-config to run on tags
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:53:14 -05:00
Jared Van Bortel
643c733be3 ci: fix missing job_allow_tags
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:50:00 -05:00
Jared Van Bortel
0734694fb8 ci: remove conflicting pipeline.git.branch requirement
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:47:58 -05:00
Jared Van Bortel
e267512db9
chat: cut v3.8.0 release (#3441)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:37:02 -05:00
Jared Van Bortel
34037f3101
models: add DeepSeek-R1 distillations to official models list (#3437)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:23:41 -05:00
AT
007a7af1c8
Display DeepSeek-R1 thinking like Reasoner (#3440)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:11:05 -05:00
Jared Van Bortel
f914ee56c9
chat: replace Jinja2Cpp with minja (#3433)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 16:01:49 -05:00
Jared Van Bortel
8a0ec5c303 ci: add missing signing holds to Windows ARM builds
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 15:23:18 -05:00
Jared Van Bortel
c2ee252ef2 chat: bump version to 3.8.0-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 13:12:47 -05:00
Jared Van Bortel
64dcf7682e
ci: build offline installers when pipeline is scheduled (#3436)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-30 13:07:47 -05:00
AT
22b8278ef1
Don't block the gui thread for tool calls (#3435)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-29 18:33:08 -05:00
Jared Van Bortel
adafa17c37
ci: verify that installers we build function and are signed (#3432)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-29 11:29:20 -05:00
Jared Van Bortel
343a4b6b6a
Support DeepSeek-R1 Qwen (#3431)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-29 09:51:50 -05:00
Jared Van Bortel
6a8a840681
ci: selective signing and automatic release builds (#3430)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-28 17:41:01 -05:00
ThiloteE
88f5dac133
[Jinja] Fix typo in Phi-3.1-mini-128k-instruct replacement template (#3412)
Signed-off-by: ThiloteE <73715071+ThiloteE@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2025-01-28 16:54:15 -05:00
Jared Van Bortel
0d974297a5
codeinterpreter: permit console.log with single string arg (#3426)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-27 15:22:20 -05:00
Jared Van Bortel
4fbc20ced9
cmake: do not modify gpt4all.app after signing it (#3417)
Signed-off-by: AT <manyoso@users.noreply.github.com>
2025-01-24 14:15:24 -05:00
Jared Van Bortel
f4f7de51e7 Revert "cmake: do not modify gpt4all.app after signing it (#3413)"
This reverts commit c01ac7fa933ae135dc8d9eed9dcbc2890dff38e3.

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-24 13:21:34 -05:00
Jared Van Bortel
c01ac7fa93
cmake: do not modify gpt4all.app after signing it (#3413)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
2025-01-24 12:57:55 -05:00
Jared Van Bortel
173fdb18c2
Update to Qt 6.8.1 (#3386)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-24 10:29:59 -05:00
AT
8790586e57
Server view fix (#3411)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-24 10:29:28 -05:00
AT
b98501c786
Fix regression while using localdocs with server API. (#3410)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-24 10:26:24 -05:00
Jared Van Bortel
49df6464a7 chat: bump version to v3.7.1-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 15:59:59 -05:00
Jared Van Bortel
6b719e99b5 metadata: fix typo
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 15:22:54 -05:00
Jared Van Bortel
d85fe40de8
chat: release version 3.7.0 (#3407)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 15:17:13 -05:00
Jared Van Bortel
15f66570fe
ci: fix macOS codesigning (#3408)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-23 11:41:34 -05:00
Jared Van Bortel
a97a28fe4f changelog: fix reference to wrong macOS version
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-22 13:09:01 -05:00
Jared Van Bortel
df2d124c19 changelog: add missing link
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-22 11:38:26 -05:00
AT
241d5ff40b Bump version for 3.7.0 release. (#3401)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-22 10:29:50 -05:00
Jared Van Bortel
0348189cc1 jinja2cpp: update submodule to fix unused var (#3403)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-22 10:29:36 -05:00
Jared Van Bortel
4a8a51f946
jinja2cpp: update submodule for 'not X is defined' fix (#3402)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-21 17:23:54 -05:00
Riccardo Giovanetti
867b3dfceb
Italian localization update (#3389)
Signed-off-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
2025-01-21 16:44:18 -05:00
Jared Van Bortel
58962496b4
ci: add missing context to Windows ARM builds (#3400)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-21 13:45:42 -05:00
Jared Van Bortel
810615d97b
add Windows ARM build (#3385)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-21 11:36:27 -05:00
Jared Van Bortel
82175b27c8
Sign maintenancetool.app on macOS (#3391)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
2025-01-21 09:27:19 -05:00
Jared Van Bortel
68047d9a60
jinja2cpp: update submodule for partial subscript crash fix (#3394)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
2025-01-21 09:26:27 -05:00
Jared Van Bortel
c871f9eb95
Add more chat template substitutions (#3393)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-21 09:25:39 -05:00
Jared Van Bortel
93c5c001e1
ci: use the shared 'gpt4all' context for environment variables (#3392)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-17 10:57:10 -05:00
Jared Van Bortel
4812ddf1f2
Save chats on quit, even if window isn't closed first (#3387)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-16 11:59:32 -05:00
Andriy Mulyar
cc5ed4737f
Update README.md - brokenlink (#3380)
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>
2025-01-10 11:42:06 -05:00
Jared Van Bortel
7339d42a81
jinja2cpp: update submodule for else/endif crash fix (#3373)
2025-01-07 20:52:57 -05:00
Jared Van Bortel
a0abc93701
chat templates: work around Jinja2Cpp issue with 'not X is defined' (#3372)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
2025-01-07 18:00:10 -05:00
Jared Van Bortel
e2541a24b3
code interpreter: support variadic console.log (#3371)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2025-01-07 17:58:04 -05:00
AT
22f6a7f1bc
Properly report that the computation was timedout to the model (#3369)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2025-01-07 14:02:18 -05:00
Max Cembalest
ce6558ec94
fix: format of language and locale setting (#3370)
2025-01-07 11:03:16 -05:00
Max Cembalest
737e164352
updated settings page (#3368)
Signed-off-by: Max Cembalest <mbcembalest@gmail.com>
2025-01-07 10:23:07 -05:00
AT
c7d7345188
Release notes for v3.6.1 and bump version (#3339)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-20 13:37:38 -05:00
AT
13e694e6e8
ChatView: make "stop" and "copy conversation" work again (#3336)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-20 12:26:03 -05:00
AT
93b4093761
Release notes and latestnews for v3.6.0, and bump version. (#3331)
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-12-19 18:37:17 -05:00
Jared Van Bortel
183eb9fb43
qml: fix missing localdocs and prefill progress (#3330)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-19 17:22:00 -05:00
AT
2afa9f2f25
Release of 3.6.0. (#3329)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-19 16:48:38 -05:00
Jared Van Bortel
cefca34445 undo unintentional partial revert of #3173
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-19 16:39:56 -05:00
Jared Van Bortel
6bbeac2b9f
modellist: automatically replace known chat templates with our versions (#3327)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: AT <manyoso@users.noreply.github.com>
2024-12-19 16:35:37 -05:00
AT
1c89447d63
Code interpreter (#3173)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-19 16:31:37 -05:00
Jared Van Bortel
2efb336b8a
chatmodel: fix sources showing as unconsolidated in UI (#3328)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-19 16:27:10 -05:00
Jared Van Bortel
3819842bcc
Fix Jinja2Cpp bug that broke system msg detection in templates (#3325)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-19 15:39:33 -05:00
AT
5ab70da2ae
Fix for remote model templates when messages contain xml. (#3318)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-12-18 13:39:51 -05:00
AT
aa84e2da39
Update maintainers. (#3322)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-18 13:39:37 -05:00
Jared Van Bortel
0f27359c39 chat: bump version to 3.5.4-dev0
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-16 16:32:27 -05:00
Jared Van Bortel
eedd0507d9
chat: release version 3.5.3 (#3307)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-16 16:31:08 -05:00
Jared Van Bortel
680614779e
ci: downgrade Windows image to fix build (#3306)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-16 14:46:23 -05:00
AT
21c06fdebf
New v3.5.3 hotfix release. (#3304)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-16 11:38:06 -05:00
Jared Van Bortel
db5800356b
chat: fix localdocs breakage in v3.5.2 (#3302)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-16 11:25:19 -05:00
Jared Van Bortel
38d92cbb28
chat: release version 3.5.2 (#3296)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 19:23:13 -05:00
Jared Van Bortel
bbee075660 ci: attempt to fix Ubuntu build
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 18:23:15 -05:00
Jared Van Bortel
57b34d50ca fix chatmodel.h #includes
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 18:15:05 -05:00
Jared Van Bortel
0e0a56038c
chat: cut v3.5.2 release (#3292)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 17:50:57 -05:00
AT
9b978f25e1
Break the explore models view into two. (#3269)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
2024-12-13 17:33:05 -05:00
Jared Van Bortel
03f7ca4409
StartupDialog: fix two untranslated strings (#3293)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 15:19:40 -05:00
Jared Van Bortel
b7df4ebbcb
modellist: fix cloning of chat template and system message (#3262)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 12:22:32 -05:00
Jared Van Bortel
f67b370f5a
Fix local server regressions caused by Jinja PR (#3256)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-13 12:19:47 -05:00
Jared Van Bortel
2c5097c9de latestnews: make it more compact
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-12 14:56:05 -05:00
AT
db7f1c5294
Bump the version to 3.5.2-dev0. (#3254)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-10 17:39:54 -05:00
AT
d6a4ee4531
Release notes and latestnews for v3.5.1. (#3253)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-10 15:05:22 -05:00
AT
0871bd1137
Update changlog and version to make 3.5.1 hotfix release. (#3252)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-10 12:32:20 -05:00
Jared Van Bortel
66a9ae1a80 changelog: add PR #3251
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-10 12:28:11 -05:00
Jared Van Bortel
663ea618f7
models3: fix Llama 3.2 chat template (#3251)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-10 12:27:15 -05:00
Jared Van Bortel
11f57afc58
fix several bad chat templates (#3250)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-10 12:06:26 -05:00
Jared Van Bortel
6f49984a29 metadata: fix typos in release notes
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-10 11:11:01 -05:00
AT
5878f7fe01
Fix the z-ordering of the home button. (#3246)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-09 18:27:53 -05:00
Jared Van Bortel
ca08174a03
chatmodel: fix incorrect currentResponse argument (#3245)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 18:14:01 -05:00
AT
7a1e60d1d4
Bump version to v3.5.1-dev0 (#3242)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-09 16:55:46 -05:00
Jared Van Bortel
f9c74f7c21
chat: release v3.5.0 (#3241)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 16:51:48 -05:00
Jared Van Bortel
f7440c2956
chat: cut v3.5.0 release (#3240)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 14:41:23 -05:00
Victor
fddc10d969
update Romanian translation for v3.5.0 (#3232)
Signed-off-by: Victor <158754254+SINAPSA-IC@users.noreply.github.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 14:32:03 -05:00
Jared Van Bortel
70cca3fdcf
fixups for GPT4All v3.5.0-rc2 (#3239)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 14:30:07 -05:00
Riccardo Giovanetti
7628106d55
Italian localization update (#3236)
Signed-off-by: Riccardo Giovanetti <riccardo.giovanetti@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 11:51:05 -05:00
Jared Van Bortel
7f30185317 changelog: fix parenthesis
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-09 11:20:21 -05:00
Jared Van Bortel
cddd0f7507
chat: run update_translations for v3.5.0 (#3230)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-06 16:25:09 -05:00
Jared Van Bortel
8bf55e99f1
chat: cut v3.5.0-rc2 release candidate (#3229)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-06 15:28:03 -05:00
Jared Van Bortel
9e306114d1
qml: tweaks to new edit/redo buttons (#3228)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-06 14:14:36 -05:00
AT
2b1668eff2
Animate the removal of chat items when editing prompts. (#3227)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-12-06 12:26:22 -05:00
Jared Van Bortel
6b18abb124
changelog: add more changes from #3147 (#3226)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-06 11:22:50 -05:00
Jared Van Bortel
f9863b3b89
add changelog entries for Jinja PR (#3223)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-06 11:00:29 -05:00
Jared Van Bortel
2db59f0092
chat: cut v3.5.0-rc1 release candidate (#3218)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-04 13:00:18 -05:00
Jared Van Bortel
0c70b5a5f4
llamamodel: add missing softmax to fix temperature (#3202)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-04 10:56:19 -05:00
Jared Van Bortel
ffd29eae08
ci: do not run online installer or publish jobs on PR branches (#3217)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-03 19:37:22 -05:00
Jared Van Bortel
92acc7b3ac
Fixups for Jinja PR (#3215)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-12-03 19:36:53 -05:00
Jared Van Bortel
225bf6be93
Remove binary state from high-level API and use Jinja templates (#3147)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
2024-11-25 10:04:17 -05:00
AT
3320094d29
Remove unused state from chatitems. (#3170)
I've verified that the code compiles and I can't see any errors in runtime QML generation, nor can I see any references to this in QML.

Jared has also done a git search and can find no evidence this was ever used.

Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-11-05 12:45:07 -05:00
AT
46cb6b0523
Remove unused state in chat.cpp that saves the chat response messages. (#3169)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-11-05 12:24:37 -05:00
AT
20a99d1794
Separate out the chat item view. (#3160)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-11-01 12:14:21 -04:00
AT
1ea2b45a78
Fix restore of default for system tray setting. (#3158)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-10-31 11:46:55 -04:00
Jared Van Bortel
f07e2e63df
Use the token cache to infer greater n_past and reuse results (#3073)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-31 11:19:12 -04:00
AT
62cab695eb
Add tests for error codes with local API server (#3131)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-10-30 10:15:19 -04:00
AT
861453c4d7
Fixup docx parsing (#3140)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-10-28 13:32:16 -04:00
AT
b19db6c20d
Add txt and markdown files to attach feature. (#3135)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: AT <manyoso@users.noreply.github.com>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-10-28 11:42:46 -04:00
AT
da00527101
We can't return early here as nChunks > 0 (#3137)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-10-28 11:42:25 -04:00
Benjamin Gallois
57c0974f4a
chat: system tray icon and close to tray (#3109)
Signed-off-by: bgallois <benjamin@gallois.cc>
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Co-authored-by: Adam Treat <treat.adam@gmail.com>
2024-10-25 12:20:55 -04:00
Jared Van Bortel
62f90ff7d5
chatllm: remove use of deprecated '_qs' (#3130)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-22 13:30:26 -04:00
Jared Van Bortel
6df252bdcd
cmake: set minimum Qt version back to 6.5 (#3129)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-22 11:41:28 -04:00
Jared Van Bortel
d224a9d3a5
Fix compatibility with Qt 6.8 (#3121)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-21 16:25:28 -04:00
Jared Van Bortel
1764fca192
ci: attempt to fix flaky downloads (#3124)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-21 16:24:29 -04:00
Jared Van Bortel
044ceec7fb
Fix apparent CI failure due to "All Workflows filtered" (#3123)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-21 16:23:41 -04:00
Jared Van Bortel
adf7225f1c
codespell: update .codespellrc (#3122)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-21 13:44:56 -04:00
Jared Van Bortel
7f5f0869e7
Implement the first real test of gpt4all-chat (#3116)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-20 11:38:04 -04:00
AT
9cafd38dcf
Add test scaffolding (#3103)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Co-authored-by: Jared Van Bortel <jared@nomic.ai>
2024-10-18 15:27:03 -04:00
Jared Van Bortel
c3357b7625
Enable more warning flags, and fix more warnings (#3065)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-18 12:11:03 -04:00
Jared Van Bortel
eed92fd5b2
chat: bump version to 3.4.3-dev0 (#3105)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-16 14:25:34 -04:00
Jared Van Bortel
80cfac7ece
chat: release v3.4.2 (#3104)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-16 14:19:11 -04:00
Jared Van Bortel
b4ad461d86 chat: cut v3.4.2 release (#3102)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-16 13:13:22 -04:00
Jared Van Bortel
36a3826d8c localdocs: avoid cases where batch can make no progress (#3094)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-16 13:13:22 -04:00
AT
f8dde82fda
Localdocs fixes (#3083)
Signed-off-by: Adam Treat <treat.adam@gmail.com>
2024-10-15 15:28:13 -04:00
Jared Van Bortel
1789a3c6d7
chat: release version 3.4.1 (#3082)
Signed-off-by: Jared Van Bortel <jared@nomic.ai>
2024-10-11 18:25:22 -04:00
146 changed files with 20923 additions and 8475 deletions


@ -1,13 +1,17 @@
version: 2.1
setup: true
orbs:
path-filtering: circleci/path-filtering@0.0.1
path-filtering: circleci/path-filtering@1.3.0
workflows:
version: 2.1
generate-config:
jobs:
- path-filtering/filter:
filters:
tags:
only:
- /.*/
base-revision: main
config-path: .circleci/continue_config.yml
mapping: |

File diff suppressed because it is too large


@ -1,3 +1,3 @@
[codespell]
ignore-words-list = blong, afterall, som, assistent, crasher
skip = .git,*.pdf,*.svg,*.lock,*.ts
ignore-words-list = blong, afterall, assistent, crasher, requestor
skip = ./.git,./gpt4all-chat/translations,*.pdf,*.svg,*.lock

.gitignore (vendored): 1 line changed

@ -182,6 +182,7 @@ gpt4all-chat/models/*
build_*
build-*
cmake-build-*
/gpt4all-chat/tests/python/config.py
# IntelliJ
.idea/

.gitmodules (vendored): 6 lines changed

@ -17,3 +17,9 @@
[submodule "gpt4all-chat/deps/QXlsx"]
path = gpt4all-chat/deps/QXlsx
url = https://github.com/nomic-ai/QXlsx.git
[submodule "gpt4all-chat/deps/minja"]
path = gpt4all-chat/deps/minja
url = https://github.com/nomic-ai/minja.git
[submodule "gpt4all-chat/deps/json"]
path = gpt4all-chat/deps/json
url = https://github.com/nlohmann/json.git


@ -51,11 +51,6 @@ Thiago Ramos ([@thiagojramos](https://github.com/thiagojramos))<br/>
E-mail: thiagojramos@outlook.com<br/>
- pt\_BR translation
Victor Emanuel ([@SINAPSA-IC](https://github.com/SINAPSA-IC))<br/>
E-mail: contact@sinapsaro.ro<br/>
Discord: `@sinapsa_ic_56124_99632`
- ro\_RO translation
不知火 Shiranui ([@supersonictw](https://github.com/supersonictw))<br/>
E-mail: supersonic@livemail.tw<br/>
Discord: `@supersonictw`
@ -77,6 +72,6 @@ Discord: `@Tim453`
- Flatpak
Jack ([@wuodoo](https://github.com/wuodoo))<br/>
E-mail: 2296103047@qq.com><br/>
E-mail: 2296103047@qq.com<br/>
Discord: `@mikage`
- zh\_CN translation


@ -1,5 +1,9 @@
<h1 align="center">GPT4All</h1>
<p align="center">
Now with support for DeepSeek R1 Distillations
</p>
<p align="center">
<a href="https://www.nomic.ai/gpt4all">Website</a> &bull; <a href="https://docs.gpt4all.io">Documentation</a> &bull; <a href="https://discord.gg/mGZE39AS3e">Discord</a> &bull; <a href="https://www.youtube.com/watch?v=gQcZDXRVJok">YouTube Tutorial</a>
</p>
@ -23,9 +27,6 @@ https://github.com/nomic-ai/gpt4all/assets/70534565/513a0f15-4964-4109-89e4-4f9a
<p align="center">
GPT4All is made possible by our compute partner <a href="https://www.paperspace.com/">Paperspace</a>.
</p>
<p align="center">
<a href="https://www.phorm.ai/query?projectId=755eecd3-24ad-49cc-abf4-0ab84caacf63"><img src="https://img.shields.io/badge/Phorm-Ask_AI-%23F2777A.svg" alt="phorm.ai"></a>
</p>
## Download Links
@ -34,6 +35,11 @@ GPT4All is made possible by our compute partner <a href="https://www.paperspace.
<img src="gpt4all-bindings/python/docs/assets/windows.png" style="height: 1em; width: auto" /> Windows Installer
</a> &mdash;
</p>
<p>
&mdash; <a href="https://gpt4all.io/installers/gpt4all-installer-win64-arm.exe">
<img src="gpt4all-bindings/python/docs/assets/windows.png" style="height: 1em; width: auto" /> Windows ARM Installer
</a> &mdash;
</p>
<p>
&mdash; <a href="https://gpt4all.io/installers/gpt4all-installer-darwin.dmg">
<img src="gpt4all-bindings/python/docs/assets/mac.png" style="height: 1em; width: auto" /> macOS Installer
@ -45,10 +51,16 @@ GPT4All is made possible by our compute partner <a href="https://www.paperspace.
</a> &mdash;
</p>
<p>
Windows and Linux require Intel Core i3 2nd Gen / AMD Bulldozer, or better. x86-64 only, no ARM.
The Windows and Linux builds require Intel Core i3 2nd Gen / AMD Bulldozer, or better.
</p>
<p>
macOS requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
The Windows ARM build supports Qualcomm Snapdragon and Microsoft SQ1/SQ2 processors.
</p>
<p>
The Linux build is x86-64 only (no ARM).
</p>
<p>
The macOS build requires Monterey 12.6 or newer. Best results with Apple Silicon M-series processors.
</p>
See the full [System Requirements](gpt4all-chat/system_requirements.md) for more details.


@ -11,8 +11,7 @@ function(gpt4all_add_warning_options target)
-Wextra-semi
-Wformat=2
-Wmissing-include-dirs
-Wnull-dereference
-Wstrict-overflow=2
-Wsuggest-override
-Wvla
# errors
-Werror=format-security
@ -22,8 +21,6 @@ function(gpt4all_add_warning_options target)
# disabled warnings
-Wno-sign-compare
-Wno-unused-parameter
-Wno-unused-function
-Wno-unused-variable
)
if (CMAKE_CXX_COMPILER_ID STREQUAL "GNU")
target_compile_options("${target}" PRIVATE


@ -69,7 +69,7 @@ if (LLMODEL_CUDA)
cmake_minimum_required(VERSION 3.18) # for CMAKE_CUDA_ARCHITECTURES
# Defaults must be set before enable_language(CUDA).
# Keep this in sync with the arch list in ggml/src/CMakeLists.txt.
# Keep this in sync with the arch list in ggml/src/CMakeLists.txt (plus 5.0 for non-F16 branch).
if (NOT DEFINED CMAKE_CUDA_ARCHITECTURES)
# 52 == lowest CUDA 12 standard
# 60 == f16 CUDA intrinsics
@ -78,7 +78,7 @@ if (LLMODEL_CUDA)
if (GGML_CUDA_F16 OR GGML_CUDA_DMMV_F16)
set(CMAKE_CUDA_ARCHITECTURES "60;61;70;75") # needed for f16 CUDA intrinsics
else()
set(CMAKE_CUDA_ARCHITECTURES "52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
set(CMAKE_CUDA_ARCHITECTURES "50;52;61;70;75") # lowest CUDA 12 standard + lowest for integer intrinsics
#set(CMAKE_CUDA_ARCHITECTURES "OFF") # use this to compile much faster, but only F16 models work
endif()
endif()

@ -1 +1 @@
Subproject commit b3b5c0571eda3065035a7f25f7b84640b159d821
Subproject commit 11f734c3b0334dbae4823b4a7467764e447fc6d6


@ -5,6 +5,7 @@
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <expected>
#include <functional>
#include <optional>
#include <span>
@ -24,6 +25,10 @@ using namespace std::string_literals;
class LLModel {
public:
using Token = int32_t;
using PromptCallback = std::function<bool(std::span<const Token> batch, bool cached)>;
using ResponseCallback = std::function<bool(Token token, std::string_view piece)>;
using EmbedCancelCallback = bool(unsigned *batchSizes, unsigned nBatch, const char *backend);
using ProgressCallback = std::function<bool(float progress)>;
class BadArchError: public std::runtime_error {
public:
@ -101,6 +106,7 @@ public:
static int32_t maxContextLength(const std::string &modelPath);
static int32_t layerCount(const std::string &modelPath);
static bool isEmbeddingModel(const std::string &modelPath);
static auto chatTemplate(const char *modelPath) -> std::expected<std::string, std::string>;
static void setImplementationsSearchPath(const std::string &path);
static const std::string &implementationsSearchPath();
static bool hasSupportedCPU();
@ -124,9 +130,6 @@ public:
};
struct PromptContext {
std::vector<int32_t> tokens; // current tokens in the context window
int32_t n_past = 0; // number of tokens in past conversation
int32_t n_ctx = 0; // number of tokens possible in context window
int32_t n_predict = 200;
int32_t top_k = 40;
float top_p = 0.9f;
@ -138,8 +141,6 @@ public:
float contextErase = 0.5f; // percent of context to erase if we exceed the context window
};
using ProgressCallback = std::function<bool(float progress)>;
explicit LLModel() {}
virtual ~LLModel() {}
@ -151,21 +152,17 @@ public:
virtual bool isModelLoaded() const = 0;
virtual size_t requiredMem(const std::string &modelPath, int n_ctx, int ngl) = 0;
virtual size_t stateSize() const = 0;
virtual size_t saveState(std::span<uint8_t> dest) const = 0;
virtual size_t restoreState(std::span<const uint8_t> src) = 0;
virtual size_t saveState(std::span<uint8_t> stateOut, std::vector<Token> &inputTokensOut) const = 0;
virtual size_t restoreState(std::span<const uint8_t> state, std::span<const Token> inputTokens) = 0;
// This method requires the model to return true from supportsCompletion otherwise it will throw
// an error
virtual void prompt(const std::string &prompt,
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &ctx,
bool special = false,
std::optional<std::string_view> fakeReply = {});
virtual void prompt(std::string_view prompt,
const PromptCallback &promptCallback,
const ResponseCallback &responseCallback,
const PromptContext &ctx);
using EmbedCancelCallback = bool(unsigned *batchSizes, unsigned nBatch, const char *backend);
virtual int32_t countPromptTokens(std::string_view prompt) const;
virtual size_t embeddingSize() const {
throw std::logic_error(std::string(implementation().modelType()) + " does not support embeddings");
@ -210,17 +207,24 @@ public:
void setProgressCallback(ProgressCallback callback) { m_progressCallback = callback; }
virtual int32_t contextLength() const = 0;
virtual auto specialTokens() -> std::unordered_map<std::string, std::string> const = 0;
protected:
// These are pure virtual because subclasses need to implement as the default implementation of
// 'prompt' above calls these functions
virtual std::vector<Token> tokenize(PromptContext &ctx, std::string_view str, bool special = false) = 0;
virtual std::vector<Token> tokenize(std::string_view str) const = 0;
virtual bool isSpecialToken(Token id) const = 0;
virtual std::string tokenToString(Token id) const = 0;
virtual void initSampler(PromptContext &ctx) = 0;
virtual void initSampler(const PromptContext &ctx) = 0;
virtual Token sampleToken() const = 0;
virtual bool evalTokens(PromptContext &ctx, const std::vector<int32_t> &tokens) const = 0;
virtual void shiftContext(PromptContext &promptCtx) = 0;
virtual int32_t contextLength() const = 0;
virtual bool evalTokens(int32_t nPast, std::span<const Token> tokens) const = 0;
virtual void shiftContext(const PromptContext &promptCtx, int32_t *nPast) = 0;
virtual int32_t inputLength() const = 0;
virtual int32_t computeModelInputPosition(std::span<const Token> input) const = 0;
virtual void setModelInputPosition(int32_t pos) = 0;
virtual void appendInputToken(Token tok) = 0;
virtual std::span<const Token> inputTokens() const = 0;
virtual const std::vector<Token> &endTokens() const = 0;
virtual bool shouldAddBOS() const = 0;
@ -236,6 +240,12 @@ protected:
return -1;
}
virtual auto chatTemplate(const char *modelPath) const -> std::expected<std::string, std::string>
{
(void)modelPath;
return std::unexpected("not implemented");
}
const Implementation *m_implementation = nullptr;
ProgressCallback m_progressCallback;
@ -247,17 +257,15 @@ protected:
return true;
}
bool decodePrompt(std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx,
std::vector<Token> embd_inp,
bool isResponse = false);
void generateResponse(std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx);
Token m_tokenize_last_token = -1; // not serialized
// prefill context with prompt
auto decodePrompt(const PromptCallback &promptCallback,
const PromptContext &promptCtx,
std::vector<Token> embd_inp)
-> std::optional<int32_t>;
// generate a response
void generateResponse(const ResponseCallback &responseCallback,
const PromptContext &promptCtx,
int32_t nPast);
friend class LLMImplementation;
};

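For orientation, here is a minimal caller-side sketch of the reworked LLModel::prompt() interface shown in the hunks above, assuming the llmodel.h header from this diff is on the include path; the runPrompt helper, the loaded model reference, and the placeholder prompt text are illustrative and not code from this repository.

```cpp
#include <iostream>
#include <span>
#include <string_view>

#include "llmodel.h"  // header changed in this diff (assumed include path)

// Illustrative helper: stream a response from an already-loaded model.
void runPrompt(LLModel &model)
{
    LLModel::PromptContext ctx;  // sampling parameters only; token/n_past bookkeeping now lives in the model
    ctx.n_predict = 128;

    auto onPrompt = [](std::span<const LLModel::Token> batch, bool cached) {
        // Called as prompt tokens are processed; `cached` marks tokens reused from the token cache.
        (void)batch; (void)cached;
        return true;  // return false to abort prompt processing
    };
    auto onResponse = [](LLModel::Token token, std::string_view piece) {
        (void)token;
        std::cout << piece << std::flush;  // print each generated piece as it arrives
        return true;  // return false to stop generation
    };

    // The prompt-template parameter is gone; per the Jinja work (#3147), the caller renders
    // the full conversation before passing it here.
    model.prompt("<rendered conversation text>", onPrompt, onResponse, ctx);
}
```

The visible design change is that n_past, n_ctx, and the raw token vector were dropped from PromptContext, so callers no longer manage context bookkeeping themselves.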

@ -23,6 +23,11 @@ extern "C" {
*/
typedef void *llmodel_model;
/**
* A token.
*/
typedef int32_t token_t;
/**
* llmodel_prompt_context structure for holding the prompt context.
* NOTE: The implementation takes care of all the memory handling of the raw logits pointer and the
@ -30,19 +35,15 @@ typedef void *llmodel_model;
* behavior.
*/
struct llmodel_prompt_context {
int32_t *tokens; // current tokens in the context window
size_t tokens_size; // the size of the raw tokens vector
int32_t n_past; // number of tokens in past conversation
int32_t n_ctx; // number of tokens possible in context window
int32_t n_predict; // number of tokens to predict
int32_t top_k; // top k logits to sample from
float top_p; // nucleus sampling probability threshold
float min_p; // Min P sampling
float temp; // temperature to adjust model's output distribution
float top_p; // nucleus sampling probability threshold
float min_p; // Min P sampling
float temp; // temperature to adjust model's output distribution
int32_t n_batch; // number of predictions to generate in parallel
float repeat_penalty; // penalty factor for repeated tokens
float repeat_penalty; // penalty factor for repeated tokens
int32_t repeat_last_n; // last n tokens to penalize
float context_erase; // percent of context to erase if we exceed the context window
float context_erase; // percent of context to erase if we exceed the context window
};
struct llmodel_gpu_device {
@ -61,10 +62,12 @@ typedef struct llmodel_gpu_device llmodel_gpu_device;
/**
* Callback type for prompt processing.
* @param token_id The token id of the prompt.
* @param token_ids An array of token ids of the prompt.
* @param n_token_ids The number of tokens in the array.
* @param cached Whether the tokens were already in cache.
* @return a bool indicating whether the model should keep processing.
*/
typedef bool (*llmodel_prompt_callback)(int32_t token_id);
typedef bool (*llmodel_prompt_callback)(const token_t *token_ids, size_t n_token_ids, bool cached);
/**
* Callback type for response.
@ -72,7 +75,7 @@ typedef bool (*llmodel_prompt_callback)(int32_t token_id);
* @param response The response string. NOTE: a token_id of -1 indicates the string is an error string.
* @return a bool indicating whether the model should keep generating.
*/
typedef bool (*llmodel_response_callback)(int32_t token_id, const char *response);
typedef bool (*llmodel_response_callback)(token_t token_id, const char *response);
/**
* Embedding cancellation callback for use with llmodel_embed.
@ -83,6 +86,8 @@ typedef bool (*llmodel_response_callback)(int32_t token_id, const char *response
*/
typedef bool (*llmodel_emb_cancel_callback)(unsigned *batch_sizes, unsigned n_batch, const char *backend);
typedef void (*llmodel_special_token_callback)(const char *name, const char *token);
/**
* Create a llmodel instance.
* Recognises correct model type from file at model_path
@ -141,48 +146,57 @@ bool llmodel_isModelLoaded(llmodel_model model);
* @param model A pointer to the llmodel_model instance.
* @return the size in bytes of the internal state of the model
*/
uint64_t llmodel_get_state_size(llmodel_model model);
uint64_t llmodel_state_get_size(llmodel_model model);
/**
* Saves the internal state of the model to the specified destination address.
* Saves the internal state of the model.
* NOTE: This state data is specific to the type of model you have created.
* @param model A pointer to the llmodel_model instance.
* @param dest A pointer to the destination.
* @param size The size of the destination buffer.
* @return the number of bytes copied, or zero on error.
* @param state Where to store the state. This must be a buffer of at least llmodel_state_get_size() bytes.
* @param state_size The size of the destination for the state.
* @param input_tokens_out Where to store the address of the token cache state. This is dynamically allocated and must
* be freed with llmodel_state_free_input_tokens.
* @param n_input_tokens Where to store the size of the token cache state.
* @return The number of bytes copied. On error, zero is returned, the token cache is set to NULL, and the token cache
* size is set to zero.
*/
uint64_t llmodel_save_state_data(llmodel_model model, uint8_t *dest, uint64_t size);
uint64_t llmodel_state_get_data(llmodel_model model, uint8_t *state_out, uint64_t state_size,
token_t **input_tokens_out, uint64_t *n_input_tokens);
/**
* Frees the temporary token cache buffer created by a call to llmodel_state_get_data().
* @param input_tokens The token cache buffer.
*/
void llmodel_state_free_input_tokens(token_t *input_tokens);
/**
* Restores the internal state of the model using data from the specified address.
* NOTE: This state data is specific to the type of model you have created.
* @param model A pointer to the llmodel_model instance.
* @param src A pointer to the state data.
* @param size The size of the source data.
* @param state A pointer to the state data.
* @param state_size The size of the state data.
* @param input_tokens The token cache associated with the saved state.
* @param n_input_tokens The number of tokens in input_tokens.
* @return The number of bytes read, or zero on error.
*/
uint64_t llmodel_restore_state_data(llmodel_model model, const uint8_t *src, size_t size);
uint64_t llmodel_state_set_data(llmodel_model model, const uint8_t *state, uint64_t state_size,
const token_t *input_tokens, uint64_t n_input_tokens);
/**
* Generate a response using the model.
* @param model A pointer to the llmodel_model instance.
* @param prompt A string representing the input prompt.
* @param prompt_template A string representing the input prompt template.
* @param prompt_callback A callback function for handling the processing of prompt.
* @param response_callback A callback function for handling the generated response.
* @param allow_context_shift Whether to allow shifting of context to make room for more input.
* @param special True if special tokens in the prompt should be processed, false otherwise.
* @param fake_reply A string to insert into context as the model's reply, or NULL to generate one.
* @param ctx A pointer to the llmodel_prompt_context structure.
* @param error A pointer to a string; will only be set on error.
*/
void llmodel_prompt(llmodel_model model, const char *prompt,
const char *prompt_template,
llmodel_prompt_callback prompt_callback,
llmodel_response_callback response_callback,
bool allow_context_shift,
llmodel_prompt_context *ctx,
bool special,
const char *fake_reply);
bool llmodel_prompt(llmodel_model model,
const char *prompt,
llmodel_prompt_callback prompt_callback,
llmodel_response_callback response_callback,
llmodel_prompt_context *ctx,
const char **error);
/**
* Generate an embedding using the model.
@ -294,6 +308,10 @@ const char *llmodel_model_backend_name(llmodel_model model);
*/
const char *llmodel_model_gpu_device_name(llmodel_model model);
int32_t llmodel_count_prompt_tokens(llmodel_model model, const char *prompt, const char **error);
void llmodel_model_foreach_special_token(llmodel_model model, llmodel_special_token_callback callback);
#ifdef __cplusplus
}
#endif

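A rough sketch of how the renamed C state functions above fit together (query the size, save the state plus its token cache, free the cache buffer, restore both), assuming the llmodel_c.h declarations from this diff and an already-loaded llmodel_model; the roundTripState helper is a made-up name used only for illustration.

```cpp
#include <cstdint>
#include <vector>

#include "llmodel_c.h"  // declarations changed in this diff (assumed include path)

// Illustrative round trip: save the model state and its token cache, then restore both.
bool roundTripState(llmodel_model model)
{
    std::vector<uint8_t> state(llmodel_state_get_size(model));

    token_t *inputTokens = nullptr;
    uint64_t nInputTokens = 0;
    uint64_t written = llmodel_state_get_data(model, state.data(), state.size(),
                                              &inputTokens, &nInputTokens);
    if (written == 0)
        return false;  // on error the token cache pointer is NULL and its size is zero

    // The token cache buffer is dynamically allocated by the call above; copy it out,
    // then release it with the dedicated free function.
    std::vector<token_t> tokens(inputTokens, inputTokens + nInputTokens);
    llmodel_state_free_input_tokens(inputTokens);

    uint64_t read = llmodel_state_set_data(model, state.data(), state.size(),
                                           tokens.data(), tokens.size());
    return read != 0;
}
```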

@ -53,6 +53,8 @@ static const std::vector<const char *> KNOWN_ARCHES {
"gpt2",
// "gptj", -- no inference code
"gptneox",
"granite",
"granitemoe",
"mpt",
"baichuan",
"starcoder",
@ -80,6 +82,7 @@ static const std::vector<const char *> KNOWN_ARCHES {
"command-r",
// "dbrx", -- 16x12B parameters
"olmo",
"olmoe",
"openelm",
// "arctic", -- 10B+128x3.66B parameters
"deepseek2",
@ -202,7 +205,7 @@ static int32_t get_arch_key_u32(std::string const &modelPath, std::string const
if (keyidx != -1) {
value = gguf_get_val_u32(ctx, keyidx);
} else {
std::cerr << __func__ << ": " << key << "not found in " << modelPath << "\n";
std::cerr << __func__ << ": " << key << " not found in " << modelPath << "\n";
}
}
@ -218,6 +221,7 @@ struct LLamaPrivate {
int64_t n_threads = 0;
std::vector<LLModel::Token> end_tokens;
const char *backend_name = nullptr;
std::vector<LLModel::Token> inputTokens;
llama_model *model = nullptr;
llama_context *ctx = nullptr;
@ -501,28 +505,29 @@ size_t LLamaModel::stateSize() const
return llama_state_get_size(d_ptr->ctx);
}
size_t LLamaModel::saveState(std::span<uint8_t> dest) const
size_t LLamaModel::saveState(std::span<uint8_t> stateOut, std::vector<Token> &inputTokensOut) const
{
return llama_state_get_data(d_ptr->ctx, dest.data(), dest.size());
size_t bytesWritten = llama_state_get_data(d_ptr->ctx, stateOut.data(), stateOut.size());
if (bytesWritten)
inputTokensOut.assign(d_ptr->inputTokens.begin(), d_ptr->inputTokens.end());
return bytesWritten;
}
size_t LLamaModel::restoreState(std::span<const uint8_t> src)
size_t LLamaModel::restoreState(std::span<const uint8_t> state, std::span<const Token> inputTokens)
{
return llama_state_set_data(d_ptr->ctx, src.data(), src.size());
size_t bytesRead = llama_state_set_data(d_ptr->ctx, state.data(), state.size());
if (bytesRead)
d_ptr->inputTokens.assign(inputTokens.begin(), inputTokens.end());
return bytesRead;
}
std::vector<LLModel::Token> LLamaModel::tokenize(PromptContext &ctx, std::string_view str, bool special)
std::vector<LLModel::Token> LLamaModel::tokenize(std::string_view str) const
{
bool atStart = m_tokenize_last_token == -1;
bool insertSpace = atStart || isSpecialToken(m_tokenize_last_token);
std::vector<LLModel::Token> fres(str.length() + 4);
int32_t fres_len = llama_tokenize_gpt4all(
d_ptr->model, str.data(), str.length(), fres.data(), fres.size(), /*add_special*/ atStart,
/*parse_special*/ special, /*insert_space*/ insertSpace
int32_t fres_len = llama_tokenize(
d_ptr->model, str.data(), str.length(), fres.data(), fres.size(), /*add_special*/ true, /*parse_special*/ true
);
fres.resize(fres_len);
if (fres_len)
m_tokenize_last_token = fres.back();
return fres;
}
@ -548,7 +553,7 @@ std::string LLamaModel::tokenToString(Token id) const
return std::string(result.data(), result.size());
}
void LLamaModel::initSampler(PromptContext &promptCtx)
void LLamaModel::initSampler(const PromptContext &promptCtx)
{
auto *model = d_ptr->model;
auto *chain = d_ptr->sampler_chain;
@ -582,7 +587,8 @@ void LLamaModel::initSampler(PromptContext &promptCtx)
llama_sampler_init_top_p(promptCtx.top_p, 1),
llama_sampler_init_min_p(promptCtx.min_p, 1),
llama_sampler_init_temp(promptCtx.temp),
llama_sampler_init_dist(LLAMA_DEFAULT_SEED)
llama_sampler_init_softmax(),
llama_sampler_init_dist(LLAMA_DEFAULT_SEED),
};
for (auto *smpl : samplers)
llama_sampler_chain_add(chain, smpl);
@ -594,9 +600,11 @@ LLModel::Token LLamaModel::sampleToken() const
return llama_sampler_sample(d_ptr->sampler_chain, d_ptr->ctx, -1);
}
bool LLamaModel::evalTokens(PromptContext &ctx, const std::vector<int32_t> &tokens) const
bool LLamaModel::evalTokens(int32_t nPast, std::span<const Token> tokens) const
{
llama_kv_cache_seq_rm(d_ptr->ctx, 0, ctx.n_past, -1);
assert(!tokens.empty());
llama_kv_cache_seq_rm(d_ptr->ctx, 0, nPast, -1);
llama_batch batch = llama_batch_init(tokens.size(), 0, 1);
@ -604,7 +612,7 @@ bool LLamaModel::evalTokens(PromptContext &ctx, const std::vector<int32_t> &toke
for (int32_t i = 0; i < batch.n_tokens; i++) {
batch.token [i] = tokens[i];
batch.pos [i] = ctx.n_past + i;
batch.pos [i] = nPast + i;
batch.n_seq_id[i] = 1;
batch.seq_id [i][0] = 0;
batch.logits [i] = false;
@ -618,14 +626,14 @@ bool LLamaModel::evalTokens(PromptContext &ctx, const std::vector<int32_t> &toke
return res == 0;
}
void LLamaModel::shiftContext(PromptContext &promptCtx)
void LLamaModel::shiftContext(const PromptContext &promptCtx, int32_t *nPast)
{
// infinite text generation via context shifting
// erase up to n_ctx*contextErase tokens
int n_keep = shouldAddBOS();
int n_past = promptCtx.n_past;
int n_discard = std::min(n_past - n_keep, int(promptCtx.n_ctx * promptCtx.contextErase));
int n_past = *nPast;
int n_discard = std::min(n_past - n_keep, int(contextLength() * promptCtx.contextErase));
assert(n_discard > 0);
if (n_discard <= 0)
@ -638,8 +646,9 @@ void LLamaModel::shiftContext(PromptContext &promptCtx)
llama_kv_cache_seq_rm (d_ptr->ctx, 0, n_keep, n_keep + n_discard);
llama_kv_cache_seq_add(d_ptr->ctx, 0, n_keep + n_discard, n_past, -n_discard);
promptCtx.tokens.erase(promptCtx.tokens.begin() + n_keep, promptCtx.tokens.begin() + n_keep + n_discard);
promptCtx.n_past = promptCtx.tokens.size();
auto &inp = d_ptr->inputTokens;
inp.erase(inp.begin() + n_keep, inp.begin() + n_keep + n_discard);
*nPast = inp.size();
}
int32_t LLamaModel::contextLength() const
@ -647,6 +656,56 @@ int32_t LLamaModel::contextLength() const
return llama_n_ctx(d_ptr->ctx);
}
auto LLamaModel::specialTokens() -> std::unordered_map<std::string, std::string> const
{
if (!d_ptr->model)
throw std::logic_error("model not loaded");
std::unordered_map<std::string, std::string> tokens;
if (auto id = llama_token_bos(d_ptr->model); id != LLAMA_TOKEN_NULL)
tokens.emplace("bos_token", tokenToString(id));
if (auto id = llama_token_eos(d_ptr->model); id != LLAMA_TOKEN_NULL)
tokens.emplace("eos_token", tokenToString(id));
return tokens;
}
int32_t LLamaModel::inputLength() const
{
return d_ptr->inputTokens.size();
}
int32_t LLamaModel::computeModelInputPosition(std::span<const Token> input) const
{
// find common prefix
auto cacheIt = d_ptr->inputTokens.begin();
auto inputIt = input.begin();
while (cacheIt < d_ptr->inputTokens.end() && inputIt < input.end() && *cacheIt == *inputIt) {
++cacheIt; ++inputIt;
}
// tell the caller to ignore the tokens between [begin, inputIt)
return inputIt - input.begin();
}
void LLamaModel::setModelInputPosition(int32_t pos)
{
auto &inp = d_ptr->inputTokens;
assert(pos >= 0);
assert(pos <= inp.size());
// truncate token cache to end at the new n_past
if (pos < inp.size())
inp.resize(pos);
}
void LLamaModel::appendInputToken(Token tok)
{
d_ptr->inputTokens.push_back(tok);
}
auto LLamaModel::inputTokens() const -> std::span<const Token>
{
return d_ptr->inputTokens;
}
const std::vector<LLModel::Token> &LLamaModel::endTokens() const
{
return d_ptr->end_tokens;
@ -667,6 +726,37 @@ int32_t LLamaModel::layerCount(std::string const &modelPath) const
return get_arch_key_u32(modelPath, "block_count");
}
// TODO(jared): reduce redundant code and operations by combining all metadata getters for unloaded
// models into a class that keeps the model file open
auto LLamaModel::chatTemplate(const char *modelPath) const -> std::expected<std::string, std::string>
{
auto *ctx = load_gguf(modelPath);
if (!ctx)
return std::unexpected("failed to open model file");
std::expected<std::string, std::string> result;
enum gguf_type ktype;
const int kid = gguf_find_key(ctx, "tokenizer.chat_template");
if (kid == -1) {
result = std::unexpected("key not found");
goto cleanup;
}
ktype = gguf_get_kv_type(ctx, kid);
if (ktype != GGUF_TYPE_STRING) {
result = std::unexpected(
"expected key type STRING (" + std::to_string(GGUF_TYPE_STRING) + "), got " + std::to_string(ktype)
);
goto cleanup;
}
result = gguf_get_val_str(ctx, kid);
cleanup:
gguf_free(ctx);
return result;
}
#ifdef GGML_USE_VULKAN
static const char *getVulkanVendorName(uint32_t vendorID)
{

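As a side note on the token-cache plumbing added above, the core of computeModelInputPosition() is a plain common-prefix scan; the standalone restatement below, with example values, illustrates that logic only and is not code from the repository.

```cpp
#include <cstdint>
#include <span>

// Length of the shared prefix between the cached input tokens and a new input;
// tokens in [0, prefix) can stay in the KV cache, and only the remainder is re-evaluated.
int32_t commonPrefixLength(std::span<const int32_t> cache, std::span<const int32_t> input)
{
    size_t i = 0;
    while (i < cache.size() && i < input.size() && cache[i] == input[i])
        ++i;
    return static_cast<int32_t>(i);
}

// Example: cache = {1, 2, 3, 4}, input = {1, 2, 3, 5, 6}  ->  3 reusable tokens.
```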

@ -11,6 +11,7 @@
#include <string>
#include <string_view>
#include <vector>
#include <unordered_map>
struct LLamaPrivate;
struct EmbModelSpec;
@ -28,8 +29,8 @@ public:
bool isModelLoaded() const override;
size_t requiredMem(const std::string &modelPath, int n_ctx, int ngl) override;
size_t stateSize() const override;
size_t saveState(std::span<uint8_t> dest) const override;
size_t restoreState(std::span<const uint8_t> src) override;
size_t saveState(std::span<uint8_t> stateOut, std::vector<Token> &inputTokensOut) const override;
size_t restoreState(std::span<const uint8_t> state, std::span<const Token> inputTokens) override;
void setThreadCount(int32_t n_threads) override;
int32_t threadCount() const override;
std::vector<GPUDevice> availableGPUDevices(size_t memoryRequired = 0) const override;
@ -48,28 +49,36 @@ public:
void embed(const std::vector<std::string> &texts, float *embeddings, bool isRetrieval, int dimensionality = -1,
size_t *tokenCount = nullptr, bool doMean = true, bool atlas = false) override;
private:
std::unique_ptr<LLamaPrivate> d_ptr;
bool m_supportsEmbedding = false;
bool m_supportsCompletion = false;
int32_t contextLength() const override;
auto specialTokens() -> std::unordered_map<std::string, std::string> const override;
protected:
std::vector<Token> tokenize(PromptContext &ctx, std::string_view str, bool special) override;
std::vector<Token> tokenize(std::string_view str) const override;
bool isSpecialToken(Token id) const override;
std::string tokenToString(Token id) const override;
void initSampler(PromptContext &ctx) override;
void initSampler(const PromptContext &ctx) override;
Token sampleToken() const override;
bool evalTokens(PromptContext &ctx, const std::vector<int32_t> &tokens) const override;
void shiftContext(PromptContext &promptCtx) override;
int32_t contextLength() const override;
bool evalTokens(int32_t nPast, std::span<const Token> tokens) const override;
void shiftContext(const PromptContext &promptCtx, int32_t *nPast) override;
int32_t inputLength() const override;
int32_t computeModelInputPosition(std::span<const Token> input) const override;
void setModelInputPosition(int32_t pos) override;
void appendInputToken(Token tok) override;
std::span<const Token> inputTokens() const override;
const std::vector<Token> &endTokens() const override;
bool shouldAddBOS() const override;
int32_t maxContextLength(std::string const &modelPath) const override;
int32_t layerCount(std::string const &modelPath) const override;
auto chatTemplate(const char *modelPath) const -> std::expected<std::string, std::string> override;
void embedInternal(const std::vector<std::string> &texts, float *embeddings, std::string prefix, int dimensionality,
size_t *tokenCount, bool doMean, bool atlas, EmbedCancelCallback *cancelCb,
const EmbModelSpec *spec);
private:
std::unique_ptr<LLamaPrivate> d_ptr;
bool m_supportsEmbedding = false;
bool m_supportsCompletion = false;
};
#endif // LLAMAMODEL_H


@ -140,9 +140,14 @@ const std::vector<LLModel::Implementation> &LLModel::Implementation::implementat
std::string path;
// Split the paths string by the delimiter and process each path.
while (std::getline(ss, path, ';')) {
std::u8string u8_path(path.begin(), path.end());
fs::directory_iterator iter;
try {
iter = fs::directory_iterator(std::u8string(path.begin(), path.end()));
} catch (const fs::filesystem_error &) {
continue; // skip nonexistent path
}
// Iterate over all libraries
for (const auto &f : fs::directory_iterator(u8_path)) {
for (const auto &f : iter) {
const fs::path &p = f.path();
if (p.extension() != LIB_FILE_EXT) continue;
@ -326,6 +331,12 @@ bool LLModel::Implementation::isEmbeddingModel(const std::string &modelPath)
return llama && llama->isEmbeddingModel(modelPath);
}
auto LLModel::Implementation::chatTemplate(const char *modelPath) -> std::expected<std::string, std::string>
{
auto *llama = constructGlobalLlama();
return llama ? llama->chatTemplate(modelPath) : std::unexpected("backend not available");
}
void LLModel::Implementation::setImplementationsSearchPath(const std::string& path)
{
s_implementations_search_path = path;

View File

@ -7,17 +7,20 @@
#include <cstdlib>
#include <cstring>
#include <exception>
#include <functional>
#include <iostream>
#include <memory>
#include <optional>
#include <string>
#include <string_view>
#include <vector>
#include <span>
namespace ranges = std::ranges;
static_assert(sizeof(token_t) == sizeof(LLModel::Token));
struct LLModelWrapper {
LLModel *llModel = nullptr;
LLModel::PromptContext promptContext;
~LLModelWrapper() { delete llModel; }
};
@ -85,74 +88,80 @@ bool llmodel_isModelLoaded(llmodel_model model)
return wrapper->llModel->isModelLoaded();
}
uint64_t llmodel_get_state_size(llmodel_model model)
uint64_t llmodel_state_get_size(llmodel_model model)
{
auto *wrapper = static_cast<LLModelWrapper *>(model);
return wrapper->llModel->stateSize();
}
uint64_t llmodel_save_state_data(llmodel_model model, uint8_t *dest, uint64_t size)
uint64_t llmodel_state_get_data(llmodel_model model, uint8_t *state_out, uint64_t state_size,
token_t **input_tokens_out, uint64_t *n_input_tokens)
{
auto *wrapper = static_cast<LLModelWrapper *>(model);
return wrapper->llModel->saveState({dest, size_t(size)});
std::vector<LLModel::Token> inputTokens;
auto bytesWritten = wrapper->llModel->saveState({state_out, size_t(state_size)}, inputTokens);
if (bytesWritten) {
auto *buf = new LLModel::Token[inputTokens.size()];
ranges::copy(inputTokens, buf);
*input_tokens_out = buf;
*n_input_tokens = uint64_t(inputTokens.size());
} else {
*input_tokens_out = nullptr;
*n_input_tokens = 0;
}
return bytesWritten;
}
uint64_t llmodel_restore_state_data(llmodel_model model, const uint8_t *src, uint64_t size)
void llmodel_state_free_input_tokens(LLModel::Token *input_tokens)
{
auto *wrapper = static_cast<LLModelWrapper *>(model);
return wrapper->llModel->restoreState({src, size_t(size)});
delete[] input_tokens;
}
void llmodel_prompt(llmodel_model model, const char *prompt,
const char *prompt_template,
llmodel_prompt_callback prompt_callback,
llmodel_response_callback response_callback,
bool allow_context_shift,
llmodel_prompt_context *ctx,
bool special,
const char *fake_reply)
uint64_t llmodel_state_set_data(llmodel_model model, const uint8_t *state, uint64_t state_size,
const token_t *input_tokens, uint64_t n_input_tokens)
{
auto *wrapper = static_cast<LLModelWrapper *>(model);
return wrapper->llModel->restoreState({state, size_t(state_size)}, {input_tokens, size_t(n_input_tokens)});
}
auto response_func = [response_callback](int32_t token_id, const std::string &response) {
return response_callback(token_id, response.c_str());
};
bool llmodel_prompt(llmodel_model model,
const char *prompt,
llmodel_prompt_callback prompt_callback,
llmodel_response_callback response_callback,
llmodel_prompt_context *ctx,
const char **error)
{
auto *wrapper = static_cast<LLModelWrapper *>(model);
// Copy the C prompt context
wrapper->promptContext.n_past = ctx->n_past;
wrapper->promptContext.n_ctx = ctx->n_ctx;
wrapper->promptContext.n_predict = ctx->n_predict;
wrapper->promptContext.top_k = ctx->top_k;
wrapper->promptContext.top_p = ctx->top_p;
wrapper->promptContext.min_p = ctx->min_p;
wrapper->promptContext.temp = ctx->temp;
wrapper->promptContext.n_batch = ctx->n_batch;
wrapper->promptContext.repeat_penalty = ctx->repeat_penalty;
wrapper->promptContext.repeat_last_n = ctx->repeat_last_n;
wrapper->promptContext.contextErase = ctx->context_erase;
LLModel::PromptContext promptContext {
.n_predict = ctx->n_predict,
.top_k = ctx->top_k,
.top_p = ctx->top_p,
.min_p = ctx->min_p,
.temp = ctx->temp,
.n_batch = ctx->n_batch,
.repeat_penalty = ctx->repeat_penalty,
.repeat_last_n = ctx->repeat_last_n,
.contextErase = ctx->context_erase,
};
auto prompt_func = [prompt_callback](std::span<const LLModel::Token> token_ids, bool cached) {
return prompt_callback(token_ids.data(), token_ids.size(), cached);
};
auto response_func = [response_callback](LLModel::Token token_id, std::string_view piece) {
return response_callback(token_id, piece.data());
};
// Call the C++ prompt method
wrapper->llModel->prompt(prompt, prompt_template, prompt_callback, response_func, allow_context_shift,
wrapper->promptContext, special,
fake_reply ? std::make_optional<std::string_view>(fake_reply) : std::nullopt);
try {
wrapper->llModel->prompt(prompt, prompt_func, response_func, promptContext);
} catch (std::exception const &e) {
llmodel_set_error(error, e.what());
return false;
}
// Update the C context by giving access to the wrappers raw pointers to std::vector data
// which involves no copies
ctx->tokens = wrapper->promptContext.tokens.data();
ctx->tokens_size = wrapper->promptContext.tokens.size();
// Update the rest of the C prompt context
ctx->n_past = wrapper->promptContext.n_past;
ctx->n_ctx = wrapper->promptContext.n_ctx;
ctx->n_predict = wrapper->promptContext.n_predict;
ctx->top_k = wrapper->promptContext.top_k;
ctx->top_p = wrapper->promptContext.top_p;
ctx->min_p = wrapper->promptContext.min_p;
ctx->temp = wrapper->promptContext.temp;
ctx->n_batch = wrapper->promptContext.n_batch;
ctx->repeat_penalty = wrapper->promptContext.repeat_penalty;
ctx->repeat_last_n = wrapper->promptContext.repeat_last_n;
ctx->context_erase = wrapper->promptContext.contextErase;
return true;
}
float *llmodel_embed(
@ -291,3 +300,21 @@ const char *llmodel_model_gpu_device_name(llmodel_model model)
const auto *wrapper = static_cast<LLModelWrapper *>(model);
return wrapper->llModel->gpuDeviceName();
}
int32_t llmodel_count_prompt_tokens(llmodel_model model, const char *prompt, const char **error)
{
auto *wrapper = static_cast<const LLModelWrapper *>(model);
try {
return wrapper->llModel->countPromptTokens(prompt);
} catch (const std::exception& e) {
llmodel_set_error(error, e.what());
return -1;
}
}
void llmodel_model_foreach_special_token(llmodel_model model, llmodel_special_token_callback callback)
{
auto *wrapper = static_cast<const LLModelWrapper *>(model);
for (auto &[name, token] : wrapper->llModel->specialTokens())
callback(name.c_str(), token.c_str());
}

View File

@ -4,211 +4,120 @@
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <iostream>
#include <iterator>
#include <optional>
#include <regex>
#include <sstream>
#include <ranges>
#include <stdexcept>
#include <string>
#include <string_view>
#include <vector>
namespace ranges = std::ranges;
namespace views = std::ranges::views;
static bool parsePromptTemplate(const std::string &tmpl, std::vector<std::smatch> &placeholders, std::string &err)
{
static const std::regex placeholderRegex(R"(%[1-2](?![0-9]))");
void LLModel::prompt(
std::string_view prompt,
const PromptCallback &promptCallback,
const ResponseCallback &responseCallback,
const PromptContext &promptCtx
) {
if (!isModelLoaded())
throw std::invalid_argument("Attempted to prompt an unloaded model.");
if (!supportsCompletion())
throw std::invalid_argument("Not a text completion model.");
if (!promptCtx.n_batch)
throw std::invalid_argument("Batch size cannot be zero.");
if (!promptCtx.n_predict)
return; // nothing requested
auto it = std::sregex_iterator(tmpl.begin(), tmpl.end(), placeholderRegex);
placeholders.clear();
placeholders.insert(placeholders.end(), it, std::sregex_iterator());
auto embd_inp = tokenize(prompt);
if (embd_inp.empty())
throw std::invalid_argument("Prompt tokenized to zero tokens.");
if (placeholders.size() > 2) {
err = "ERROR: expected at most two placeholders, got " + std::to_string(placeholders.size());
return false;
}
if (placeholders.size() >= 1 && placeholders[0].str() != "%1") {
err = "ERROR: first placeholder must be %1, got " + placeholders[0].str();
return false;
}
if (placeholders.size() >= 2 && placeholders[1].str() != "%2") {
err = "ERROR: second placeholder must be %2, got " + placeholders[1].str();
return false;
}
return true;
if (auto res = decodePrompt(promptCallback, promptCtx, std::move(embd_inp)))
generateResponse(responseCallback, promptCtx, /*n_past*/ *res);
}
void LLModel::prompt(const std::string &prompt,
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx,
bool special,
std::optional<std::string_view> fakeReply)
int32_t LLModel::countPromptTokens(std::string_view prompt) const
{
if (!isModelLoaded()) {
std::cerr << implementation().modelType() << " ERROR: prompt won't work with an unloaded model!\n";
return;
}
if (!supportsCompletion()) {
std::string errorMessage = "ERROR: this model does not support text completion or chat!";
responseCallback(-1, errorMessage);
std::cerr << implementation().modelType() << " " << errorMessage << "\n";
return;
}
// sanity checks
if (promptCtx.n_past > contextLength()) {
std::ostringstream ss;
ss << "n_past=" << promptCtx.n_past << " is past end of context length=" << contextLength();
throw std::out_of_range(ss.str());
}
if (promptCtx.n_past > promptCtx.tokens.size()) {
std::ostringstream ss;
ss << "n_past=" << promptCtx.n_past << " is past end of token cache length=" << promptCtx.tokens.size();
throw std::out_of_range(ss.str());
}
promptCtx.n_ctx = contextLength();
promptCtx.n_batch = std::min(promptCtx.n_batch, LLMODEL_MAX_PROMPT_BATCH);
if (promptCtx.n_past < promptCtx.tokens.size())
promptCtx.tokens.resize(promptCtx.n_past);
m_tokenize_last_token = promptCtx.tokens.empty() ? -1 : promptCtx.tokens.back(); // not serialized
// parse the prompt template
std::vector<std::smatch> placeholders;
{
std::string err;
if (!parsePromptTemplate(promptTemplate, placeholders, err)) {
responseCallback(-1, err);
std::cerr << err << "\n";
return;
}
}
auto old_n_past = promptCtx.n_past; // prepare to fake n_past for tokenize
// tokenize the user prompt
std::vector<Token> embd_inp;
if (placeholders.empty()) {
// this is unusual, but well-defined
std::cerr << __func__ << ": prompt template has no placeholder\n";
embd_inp = tokenize(promptCtx, promptTemplate, true);
} else {
// template: beginning of user prompt
const auto &phUser = placeholders[0];
std::string userPrefix(phUser.prefix());
if (!userPrefix.empty()) {
embd_inp = tokenize(promptCtx, userPrefix, true);
promptCtx.n_past += embd_inp.size();
}
// user input (shouldn't have special token processing)
auto tokens = tokenize(promptCtx, prompt, special);
embd_inp.insert(embd_inp.end(), tokens.begin(), tokens.end());
promptCtx.n_past += tokens.size();
// template: end of user prompt + start of assistant prompt
size_t start = phUser.position() + phUser.length();
size_t end = placeholders.size() >= 2 ? placeholders[1].position() : promptTemplate.length();
auto userToAsst = promptTemplate.substr(start, end - start);
if (!userToAsst.empty()) {
tokens = tokenize(promptCtx, userToAsst, true);
embd_inp.insert(embd_inp.end(), tokens.begin(), tokens.end());
promptCtx.n_past += tokens.size();
}
}
promptCtx.n_past = old_n_past; // restore n_past so decodePrompt can increment it
// decode the user prompt
if (!decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp))
return; // error
// decode the assistant's reply, either generated or spoofed
if (!fakeReply) {
generateResponse(responseCallback, allowContextShift, promptCtx);
} else {
embd_inp = tokenize(promptCtx, *fakeReply, false);
if (!decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp, true))
return; // error
}
// decode the rest of the prompt template
// template: end of assistant prompt
std::string asstSuffix;
if (placeholders.size() >= 2) {
size_t start = placeholders[1].position() + placeholders[1].length();
asstSuffix = promptTemplate.substr(start);
} else {
asstSuffix = "\n\n"; // default to a blank link, good for e.g. Alpaca
}
if (!asstSuffix.empty()) {
embd_inp = tokenize(promptCtx, asstSuffix, true);
decodePrompt(promptCallback, responseCallback, allowContextShift, promptCtx, embd_inp);
}
if (!isModelLoaded())
throw std::invalid_argument("Attempted to tokenize with an unloaded model.");
return int32_t(tokenize(prompt).size());
}
// returns false on error
bool LLModel::decodePrompt(std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx,
std::vector<Token> embd_inp,
bool isResponse) {
if ((int) embd_inp.size() > promptCtx.n_ctx - 4) {
// FIXME: (Adam) We should find a way to bubble these strings to the UI level to allow for
// translation
responseCallback(-1, "Your message was too long and could not be processed. Please try again with something shorter.");
std::cerr << implementation().modelType() << " ERROR: The prompt is " << embd_inp.size() <<
" tokens and the context window is " << promptCtx.n_ctx << "!\n";
return false;
auto LLModel::decodePrompt(
const PromptCallback &promptCallback,
const PromptContext &promptCtx,
std::vector<Token> embd_inp
) -> std::optional<int32_t>
{
assert(!embd_inp.empty());
int32_t nCtx = contextLength();
int32_t n_batch = std::min(promptCtx.n_batch, LLMODEL_MAX_PROMPT_BATCH);
// Find the greatest n_past where the beginning of embd_inp matches the end of the token cache, starting at the
// requested n_past.
// This is used to skip unnecessary work when the prompt shares a common prefix with the previous result.
int32_t nPast = computeModelInputPosition(embd_inp);
// always decode up to a full batch before generating, even if cached
nPast -= std::min(n_batch, nPast);
// TODO(jared): generalize this to find the smallest new_embd_inp.size() - nPast given the cache
if (!nPast && int32_t(embd_inp.size()) > nCtx) {
// no cache hit -> shift the input before even processing
int32_t nKeep = shouldAddBOS();
auto newLength = int32_t(nCtx * (1.f - promptCtx.contextErase));
int32_t nDiscard = int32_t(embd_inp.size()) - std::max(1, std::min(nCtx, newLength));
// execute the callback even for skipped tokens. this misrepresents the position of BOS but we don't care
auto discardedTokens = embd_inp | views::drop(nKeep) | views::take(nDiscard);
if (!promptCallback(discardedTokens, true))
return std::nullopt;
// erase nDiscard tokens
embd_inp.erase(discardedTokens.begin(), discardedTokens.end());
assert(int32_t(embd_inp.size()) <= nCtx);
// check the cache again, just in case
nPast = computeModelInputPosition(embd_inp);
nPast -= std::min(n_batch, nPast);
}
// FIXME(jared): There are mitigations for this situation, such as making room before
// copying the prompt context, or restoring the KV cache when we restore the prompt
// context.
if (!allowContextShift && promptCtx.n_past + embd_inp.size() > promptCtx.n_ctx) {
std::cerr << "LLModel Warning: Not enough space, n_past=" << promptCtx.n_past << ", n_eval=" << embd_inp.size()
<< ", n_ctx=" << promptCtx.n_ctx << "\n";
return false;
}
setModelInputPosition(nPast);
// execute the callback even for skipped tokens
if (!promptCallback(embd_inp | views::take(nPast), true))
return std::nullopt;
// process the prompt in batches
size_t i = 0;
while (i < embd_inp.size()) {
size_t batch_end = std::min(i + promptCtx.n_batch, embd_inp.size());
std::vector<Token> batch(embd_inp.begin() + i, embd_inp.begin() + batch_end);
for (int32_t i = nPast; i < embd_inp.size();) {
auto batch_end = std::min(i + n_batch, int32_t(embd_inp.size()));
std::span batch(embd_inp.begin() + i, embd_inp.begin() + batch_end);
// Check if the context has run out...
if (promptCtx.n_past + int32_t(batch.size()) > promptCtx.n_ctx) {
assert(allowContextShift);
shiftContext(promptCtx);
assert(promptCtx.n_past + int32_t(batch.size()) <= promptCtx.n_ctx);
if (nPast + int32_t(batch.size()) > nCtx) {
shiftContext(promptCtx, &nPast);
assert(nPast + int32_t(batch.size()) <= nCtx);
}
if (!evalTokens(promptCtx, batch)) {
std::cerr << implementation().modelType() << " ERROR: Failed to process prompt\n";
return false;
}
// FIXME(Adam): We should find a way to bubble these strings to the UI level to allow for translation
if (!evalTokens(nPast, batch))
throw std::runtime_error("An internal error was encountered during prompt processing.");
size_t tokens = batch_end - i;
for (size_t t = 0; t < tokens; ++t) {
promptCtx.tokens.push_back(batch.at(t));
promptCtx.n_past += 1;
Token tok = batch.at(t);
bool res = isResponse ? responseCallback(tok, tokenToString(tok)) : promptCallback(tok);
if (!res)
return false;
for (auto &tok : batch) {
appendInputToken(tok);
nPast++;
if (!promptCallback({ &tok, 1 }, false))
return std::nullopt;
}
i = batch_end;
}
return true;
return nPast;
}
/*
@ -230,22 +139,16 @@ static std::string::size_type stringsOverlap(const std::string &s, const std::st
return std::string::npos;
}
void LLModel::generateResponse(std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx) {
void LLModel::generateResponse(
const ResponseCallback &responseCallback,
const PromptContext &promptCtx,
int32_t nPast
) {
static const char *stopSequences[] {
"### Instruction", "### Prompt", "### Response", "### Human", "### Assistant", "### Context",
"### System", "### Instruction", "### Human", "### User", "### Response", "### Assistant", "### Context",
"<|im_start|>", "<|im_end|>", "<|endoftext|>",
};
// Don't even start if there is no room
if (!promptCtx.n_predict)
return;
if (!allowContextShift && promptCtx.n_past >= promptCtx.n_ctx) {
std::cerr << "LLModel Warning: Not enough space, n_past=" << promptCtx.n_past << ", n_ctx=" << promptCtx.n_ctx
<< "\n";
return;
}
initSampler(promptCtx);
std::string cachedResponse;
@ -260,26 +163,20 @@ void LLModel::generateResponse(std::function<bool(int32_t, const std::string&)>
cachedTokens.push_back(new_tok.value());
cachedResponse += new_piece;
auto accept = [this, &promptCtx, &new_tok, allowContextShift]() -> bool {
auto accept = [this, &promptCtx, &new_tok, &nPast] {
// Shift context if out of space
if (promptCtx.n_past >= promptCtx.n_ctx) {
(void)allowContextShift;
assert(allowContextShift);
shiftContext(promptCtx);
assert(promptCtx.n_past < promptCtx.n_ctx);
if (nPast >= contextLength()) {
shiftContext(promptCtx, &nPast);
assert(nPast < contextLength());
}
// Accept the token
Token tok = std::exchange(new_tok, std::nullopt).value();
if (!evalTokens(promptCtx, { tok })) {
// TODO(jared): raise an exception
std::cerr << implementation().modelType() << " ERROR: Failed to predict next token\n";
return false;
}
if (!evalTokens(nPast, { &tok, 1 }))
throw std::runtime_error("An internal error was encountered during response generation.");
promptCtx.tokens.push_back(tok);
promptCtx.n_past += 1;
return true;
appendInputToken(tok);
nPast++;
};
// Check for EOS
@ -316,13 +213,6 @@ void LLModel::generateResponse(std::function<bool(int32_t, const std::string&)>
lengthLimit = cachedResponse.size() - new_piece.size();
}
// Optionally stop if the context will run out
if (!allowContextShift && promptCtx.n_past + cachedTokens.size() >= promptCtx.n_ctx) {
std::cerr << "LLModel Warning: Not enough space, n_past=" << promptCtx.n_past << ", n_ctx="
<< promptCtx.n_ctx << "\n";
stop = true;
}
// Empty the cache, up to the length limit
std::string::size_type responseLength = 0;
while (!cachedTokens.empty()) {
@ -339,8 +229,8 @@ void LLModel::generateResponse(std::function<bool(int32_t, const std::string&)>
cachedResponse.erase(cachedResponse.begin(), cachedResponse.begin() + piece.size());
// Accept the token, if needed (not cached)
if (cachedTokens.empty() && new_tok && !accept())
return;
if (cachedTokens.empty() && new_tok)
accept();
// Send the token
if (!responseCallback(tok, piece) || ++n_predicted >= promptCtx.n_predict) {
@ -359,24 +249,23 @@ void LLModel::generateResponse(std::function<bool(int32_t, const std::string&)>
assert(!cachedTokens.empty() && cachedTokens.back() == new_tok);
if (stop) {
cachedTokens.pop_back();
} else if (!accept()) {
return;
} else {
accept();
}
}
}
auto &tokens = promptCtx.tokens;
if (tokens.size() < cachedTokens.size()) {
if (inputLength() < cachedTokens.size()) {
/* This is theoretically possible if the longest stop sequence is greater than
* n_ctx * contextErase tokens. */
throw std::runtime_error("shifted too much context, can't go back");
}
auto discard_start = tokens.end() - cachedTokens.size();
assert(std::equal(discard_start, tokens.end(), cachedTokens.begin()));
tokens.erase(discard_start, tokens.end());
promptCtx.n_past -= cachedTokens.size();
#ifndef NDEBUG
auto inp = inputTokens();
auto discard_start = inp.end() - cachedTokens.size();
assert(std::equal(discard_start, inp.end(), cachedTokens.begin()));
#endif
}
void LLModel::embed(

View File

@ -113,10 +113,7 @@ def _old_loop(gpt4all_instance):
full_response = gpt4all_instance.chat_completion(
MESSAGES,
# preferential kwargs for chat ux
logits_size=0,
tokens_size=0,
n_past=0,
n_ctx=0,
n_predict=200,
top_k=40,
top_p=0.9,

View File

@ -8,11 +8,14 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
### Added
- Warn on Windows if the Microsoft Visual C++ runtime libraries are not found ([#2920](https://github.com/nomic-ai/gpt4all/pull/2920))
- Basic cache for faster prefill when the input shares a prefix with previous context ([#3073](https://github.com/nomic-ai/gpt4all/pull/3073))
- Add ability to modify or replace the history of an active chat session ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
### Changed
- Rebase llama.cpp on latest upstream as of September 26th ([#2998](https://github.com/nomic-ai/gpt4all/pull/2998))
- Change the error message when a message is too long ([#3004](https://github.com/nomic-ai/gpt4all/pull/3004))
- Fix CalledProcessError on Intel Macs since v2.8.0 ([#3045](https://github.com/nomic-ai/gpt4all/pull/3045))
- Use Jinja for chat templates instead of per-message QString.arg-style templates ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
## [2.8.2] - 2024-08-14

View File

@ -0,0 +1,206 @@
## What are chat templates?
Natively, large language models only know how to complete plain text and do not know the difference between their input and their output. In order to support a chat with a person, LLMs are designed to use a template to convert the conversation to plain text using a specific format.
For a given model, it is important to use an appropriate chat template, as each model is designed to work best with a specific format. The chat templates included with the built-in models should be sufficient for most purposes.
There are two reasons you would want to alter the chat template:
- You are sideloading a model and there is no chat template available,
- You would like to have greater control over the input to the LLM than a system message provides.
## What is a system message?
A system message is a message that controls the responses from the LLM in a way that affects the entire conversation. System messages can be short, such as "Speak like a pirate.", or they can be long and contain a lot of context for the LLM to keep in mind.
Not all models are designed to use a system message, so system messages work better with some models than with others.
## How do I customize the chat template or system message?
To customize the chat template or system message, go to Settings > Model. Make sure to select the correct model at the top. If you clone a model, you can use a different chat template or system message from the base model, enabling you to use different settings for each conversation.
These settings take effect immediately. After changing them, you can click "Redo last response" in the chat view, and the response will take the new settings into account.
## Do I need to write a chat template?
You typically do not need to write your own chat template. The exception is models that are not in the official model list and do not come with a chat template built-in. These will show a "Clear" option above the chat template field in the Model Settings page instead of a "Reset" option. See the section on [finding] or [creating] a chat template.
[finding]: #how-do-i-find-a-chat-template
[creating]: #advanced-how-do-chat-templates-work
## What changed in GPT4All v3.5?
GPT4All v3.5 overhauled the chat template system. There are three crucial differences:
- The chat template now formats an entire conversation instead of a single pair of messages,
- The chat template now uses Jinja syntax instead of `%1` and `%2` placeholders,
- And the system message should no longer contain control tokens or trailing whitespace.
If you added or altered any chat templates or system messages from their defaults before upgrading to GPT4All v3.5 or newer, they will no longer work. See below for how to solve common errors you may see after upgrading.
## Error/Warning: System message is not plain text.
This is easy to fix. Go to the model's settings and look at the system prompt. There are three things to look for:
- Control tokens such as `<|im_start|>`, `<|start_header_id|>`, or `<|system|>`
- A prefix such as `### System` or `SYSTEM:`
- Trailing whitespace, such as a space character or blank line.
If you see any of these things, remove them. For example, this legacy system prompt:
```
<|start_header_id|>system<|end_header_id|>
You are a helpful assistant.<|eot_id|>
```
Should become this:
```
You are a helpful assistant.
```
If you do not see anything that needs to be changed, you can dismiss the error by making a minor modification to the message and then changing it back.
If you see a warning, your system message does not appear to be plain text. If you believe this warning is incorrect, it can be safely ignored. If in doubt, ask on the [Discord].
[Discord]: https://discord.gg/mGZE39AS3e
## Error: Legacy system prompt needs to be updated in Settings.
This is the same as [above][above-1], but appears on the chat page.
[above-1]: #errorwarning-system-message-is-not-plain-text
## Error/Warning: Chat template is not in Jinja format.
This is the result of attempting to use an old-style template (possibly from a previous version) in GPT4All 3.5+.
Go to the Model Settings page and select the affected model. If you see a "Reset" button, and you have not intentionally modified the prompt template, you can click "Reset". Otherwise, this is what you can do:
1. Back up your chat template by copying it safely to a text file and saving it. In the next step, it will be removed from GPT4All.
2. Click "Reset" or "Clear".
3. If you clicked "Clear", the chat template is now gone. Follow the steps to [find][finding] or [create][creating] a basic chat template for your model.
4. Customize the chat template to suit your needs. For help, read the section about [creating] a chat template.
## Error: Legacy prompt template needs to be updated in Settings.
This is the same as [above][above-2], but appears on the chat page.
[above-2]: #errorwarning-chat-template-is-not-in-jinja-format
## The chat template has a syntax error.
If there is a syntax error while editing the chat template, the details will be displayed in an error message above the input box. This could be because the chat template is not actually in Jinja format (see [above][above-2]).
Otherwise, either you have typed something incorrectly, or the model comes with a template that is incompatible with GPT4All. See [the below section][creating] on creating chat templates and make sure that everything is correct. When in doubt, ask on the [Discord].
## Error: No chat template configured.
This may appear for models that are not from the official model list and do not include a chat template. Older versions of GPT4All picked a poor default in this case. You will get much better results if you follow the steps to [find][finding] or [create][creating] a chat template for your model.
## Error: The chat template cannot be blank.
If the button above the chat template on the Model Settings page says "Clear", see [above][above-3]. If you see "Reset", click that button to restore a reasonable default. Also see the section on [syntax errors][chat-syntax-error].
[above-3]: #error-no-chat-template-configured
[chat-syntax-error]: #the-chat-template-has-a-syntax-error
## How do I find a chat template?
When in doubt, you can always ask the [Discord] community for help. Below are the instructions to find one on your own.
The authoritative source for a model's chat template is the HuggingFace repo that the original (non-GGUF) model came from. First, you should find this page. If you just have a model file, you can try a Google search for the model's name. If you know the page you downloaded the GGUF model from, its README usually links to the original non-GGUF model.
Once you have located the original model, there are two methods you can use to extract its chat template. Pick whichever one you are most comfortable with.
### Using the CLI (all models)
1. Install `jq` using your preferred package manager - e.g. Chocolatey (Windows), Homebrew (macOS), or apt (Ubuntu).
2. Download `tokenizer_config.json` from the model's "Files and versions" tab.
3. Open a command prompt in the directory where you downloaded `tokenizer_config.json`.
4. Run `jq -r ".chat_template" tokenizer_config.json`. This shows the chat template in a human-readable form. You can copy this and paste it into the settings page.
5. (Optional) You can save the output to a text file like this: `jq -r ".chat_template" tokenizer_config.json >chat_template.txt`
If the output is "null", the model does not provide a chat template. See the [below instructions][creating] on creating a chat template.
### Python (open models)
1. Install `transformers` using your preferred Python package manager, e.g. `pip install transformers`. Make sure it is at least version 4.43.0.
2. Copy the ID of the HuggingFace model, using the clipboard icon next to the name. For example, if the URL is `https://huggingface.co/NousResearch/Hermes-2-Pro-Llama-3-8B`, the ID is `NousResearch/Hermes-2-Pro-Llama-3-8B`.
3. Open a Python interpreter (`python`) and run the following commands. Change the model ID in the example to the one you copied.
```
>>> from transformers import AutoTokenizer
>>> tokenizer = AutoTokenizer.from_pretrained('NousResearch/Hermes-2-Pro-Llama-3-8B')
>>> print(tokenizer.get_chat_template())
```
You can copy the output and paste it into the settings page.
4. (Optional) You can save the output to a text file like this:
```
>>> open('chat_template.txt', 'w').write(tokenizer.get_chat_template())
```
If you get a ValueError exception, this model does not provide a chat template. See the [below instructions][creating] on creating a chat template.
### Python (gated models)
Some models, such as Llama and Mistral, do not allow public access to their chat template. You must either use the CLI method above or follow these instructions to use Python:
1. For these steps, you must have git and git-lfs installed.
2. You must have a HuggingFace account and be logged in.
3. You must already have access to the gated model. Otherwise, request access.
4. You must have an SSH key configured for git access to HuggingFace.
5. `git clone` the model's HuggingFace repo using the SSH clone URL. There is no need to download the entire model, which is very large. A good way to do this on Linux is:
```console
$ GIT_LFS_SKIP_SMUDGE=1 git clone git@hf.co:meta-llama/Llama-3.1-8B-Instruct.git
$ cd Llama-3.1-8B-Instruct
$ git lfs pull -I "tokenizer.*"
```
6. Follow the above instructions for open models, but replace the model ID with the path to the directory containing `tokenizer_config.json`:
```
>>> tokenizer = AutoTokenizer.from_pretrained('.')
```
## Advanced: How do chat templates work?
The chat template is applied to the entire conversation you see in the chat window. The template loops over the list of messages, each containing `role` and `content` fields. `role` is either `user`, `assistant`, or `system`.
GPT4All also supports the special variables `bos_token`, `eos_token`, and `add_generation_prompt`. See the [HuggingFace docs] for what those do.
[HuggingFace docs]: https://huggingface.co/docs/transformers/v4.46.3/en/chat_templating#special-variables
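
As a concrete sketch of this mechanism, the snippet below renders a made-up ChatML-style template over a short conversation using `jinja2`'s sandboxed environment (the same kind the Python bindings use). The template string and message contents are illustrative assumptions, not any particular model's real template.

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment

# A minimal ChatML-style template, written only for illustration -- real models
# ship their own templates, which may differ in detail.
CHAT_TEMPLATE = r"""
{%- for message in messages %}
    {{- '<|im_start|>' + message['role'] + '\n' + message['content'] + '<|im_end|>\n' }}
{%- endfor %}
{%- if add_generation_prompt %}
    {{- '<|im_start|>assistant\n' }}
{%- endif %}
"""

# Sandboxed environment with the same whitespace options the Python bindings use.
env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)
template = env.from_string(CHAT_TEMPLATE)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user",   "content": "What is a chat template?"},
]

# add_generation_prompt=True appends the header that cues the model to answer.
print(template.render(messages=messages, add_generation_prompt=True))
```

Rendering wraps each turn in the `<|im_start|>`/`<|im_end|>` framing and ends with an open assistant header for the model to complete.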
## Advanced: How do I make a chat template?
The best way to create a chat template is to start from an existing one as a reference, then modify it to use the format documented for the given model. The model's README page may explicitly give an example of its template, or it may mention the name of a well-known standard template such as ChatML, Alpaca, or Vicuna. GPT4All does not yet include presets for these templates, so they will have to be found in other models or taken from the community.
For more information, see the very helpful [HuggingFace guide]. Some of this is not applicable, such as the information about tool calling and RAG - GPT4All implements those features differently.
Some models use a prompt template that does not intuitively map to a multi-turn chat, because it is more intended for single instructions. The [FastChat] implementation of these templates is a useful reference for the correct way to extend them to multiple messages.
[HuggingFace guide]: https://huggingface.co/docs/transformers/v4.46.3/en/chat_templating#advanced-template-writing-tips
[FastChat]: https://github.com/lm-sys/FastChat/blob/main/fastchat/conversation.py
## Advanced: What are GPT4All v1 templates?
GPT4All supports its own template syntax, which is nonstandard but provides complete control over the way LocalDocs sources and file attachments are inserted into the conversation. These templates begin with `{# gpt4all v1 #}` and look similar to the example below.
For standard templates, GPT4All combines the user message, sources, and attachments into the `content` field. For GPT4All v1 templates, this is not done, so they must be used directly in the template for those features to work correctly.
```jinja
{# gpt4all v1 #}
{%- for message in messages %}
{{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\n\n' }}
{%- if message['role'] == 'user' %}
{%- for source in message['sources'] %}
{%- if loop.first %}
{{- '### Context:\n' }}
{%- endif %}
{{- 'Collection: ' + source['collection'] + '\n' +
'Path: ' + source['path'] + '\n' +
'Excerpt: ' + source['text'] + '\n\n' }}
{%- endfor %}
{%- endif %}
{%- for attachment in message['prompt_attachments'] %}
{{- attachment['processed_content'] + '\n\n' }}
{%- endfor %}
{{- message['content'] | trim }}
{{- '<|eot_id|>' }}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|start_header_id|>assistant<|end_header_id|>\n\n' }}
{%- endif %}
```
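
As a hedged illustration of the per-message fields a v1 template receives, the sketch below renders only the LocalDocs sources loop from the template above; the collection, path, and excerpt values are invented for the example.

```python
from jinja2.sandbox import ImmutableSandboxedEnvironment

# Just the LocalDocs sources loop from the v1 template above (illustrative excerpt).
SOURCES_EXCERPT = r"""
{%- for source in message['sources'] %}
    {%- if loop.first %}
        {{- '### Context:\n' }}
    {%- endif %}
    {{- 'Collection: ' + source['collection'] + '\n' +
        'Path: ' + source['path'] + '\n' +
        'Excerpt: ' + source['text'] + '\n\n' }}
{%- endfor %}
"""

env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)

# With a v1 template, sources and attachments arrive as separate per-message fields
# rather than being merged into 'content'. All values below are made up.
message = {
    "role": "user",
    "content": "Summarize the attached report.",
    "sources": [
        {"collection": "Work Docs", "path": "q3-report.pdf", "text": "Revenue grew 12% in Q3."},
    ],
    "prompt_attachments": [],
}

print(env.from_string(SOURCES_EXCERPT).render(message=message))
```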

View File

@ -8,8 +8,10 @@
| --- | --- | --- |
| **Theme** | Color theme for the application. Options are `Light`, `Dark`, and `LegacyDark` | `Light` |
| **Font Size** | Font size setting for text throughout the application. Options are Small, Medium, and Large | Small |
| **Language and Locale** | The language and locale you wish to use | System Locale |
| **Device** | Device that will run your models. Options are `Auto` (GPT4All chooses), `Metal` (Apple Silicon M1+), `CPU`, and `GPU` | `Auto` |
| **Default Model** | Choose your preferred LLM to load by default on startup| Auto |
| **Suggestion Mode** | Generate suggested follow up questions at the end of responses | When chatting with LocalDocs |
| **Download Path** | Select a destination on your device to save downloaded models | Windows: `C:\Users\{username}\AppData\Local\nomic.ai\GPT4All`<br><br>Mac: `/Users/{username}/Library/Application Support/nomic.ai/GPT4All/`<br><br>Linux: `/home/{username}/.local/share/nomic.ai/GPT4All` |
| **Enable Datalake** | Opt-in to sharing interactions with GPT4All community (**anonymous** and **optional**) | Off |
@ -18,7 +20,7 @@
| Setting | Description | Default Value |
| --- | --- | --- |
| **CPU Threads** | Number of concurrently running CPU threads (more can speed up responses) | 4 |
| **Save Chat Context** | Save chat context to disk to pick up exactly where a model left off. | Off |
| **Enable System Tray** | The application will minimize to the system tray / taskbar when the window is closed | Off |
| **Enable Local Server** | Allow any application on your device to use GPT4All via an OpenAI-compatible GPT4All API | Off |
| **API Server Port** | Local HTTP port for the local API server | 4891 |
@ -29,8 +31,11 @@
| Setting | Description | Default Value |
| --- | --- | --- |
| **Name** | Unique name of this model / character| set by model uploader |
| **System Prompt** | General instructions for the chats this model will be used for | set by model uploader |
| **Prompt Template** | Format of user <-> assistant interactions for the chats this model will be used for | set by model uploader |
| **Model File** | Filename (.gguf) of the model | set by model uploader |
| **System Message** | General instructions for the chats this model will be used for | set by model uploader |
| **Chat Template** | Format of user <-> assistant interactions for the chats this model will be used for | set by model uploader |
| **Chat Name Prompt** | Prompt used to automatically generate chat names | Describe the above conversation in seven words or less. |
| **Suggested FollowUp Prompt** | Prompt used to automatically generate follow up questions after a chat response | Suggest three very short factual follow-up questions that have not been answered yet or cannot be found inspired by the previous conversation and excerpts. |
### Clone

View File

@ -9,7 +9,7 @@ import textwrap
import threading
from enum import Enum
from queue import Queue
from typing import TYPE_CHECKING, Any, Callable, Generic, Iterable, Literal, NoReturn, TypeVar, overload
from typing import TYPE_CHECKING, Any, Callable, Generic, Iterable, Iterator, Literal, NoReturn, TypeVar, overload
if sys.version_info >= (3, 9):
import importlib.resources as importlib_resources
@ -23,7 +23,9 @@ else:
from typing import TypedDict
if TYPE_CHECKING:
from typing_extensions import TypeAlias
from typing_extensions import ParamSpec, TypeAlias
T = TypeVar("T")
P = ParamSpec("P")
EmbeddingsType = TypeVar('EmbeddingsType', bound='list[Any]')
@ -31,7 +33,7 @@ cuda_found: bool = False
# TODO(jared): use operator.call after we drop python 3.10 support
def _operator_call(obj, /, *args, **kwargs):
def _operator_call(obj: Callable[P, T], /, *args: P.args, **kwargs: P.kwargs) -> T:
return obj(*args, **kwargs)
@ -116,19 +118,15 @@ llmodel = load_llmodel_library()
class LLModelPromptContext(ctypes.Structure):
_fields_ = [
("tokens", ctypes.POINTER(ctypes.c_int32)),
("tokens_size", ctypes.c_size_t),
("n_past", ctypes.c_int32),
("n_ctx", ctypes.c_int32),
("n_predict", ctypes.c_int32),
("top_k", ctypes.c_int32),
("top_p", ctypes.c_float),
("min_p", ctypes.c_float),
("temp", ctypes.c_float),
("n_batch", ctypes.c_int32),
("n_predict", ctypes.c_int32),
("top_k", ctypes.c_int32),
("top_p", ctypes.c_float),
("min_p", ctypes.c_float),
("temp", ctypes.c_float),
("n_batch", ctypes.c_int32),
("repeat_penalty", ctypes.c_float),
("repeat_last_n", ctypes.c_int32),
("context_erase", ctypes.c_float),
("repeat_last_n", ctypes.c_int32),
("context_erase", ctypes.c_float),
]
@ -160,23 +158,21 @@ llmodel.llmodel_required_mem.restype = ctypes.c_size_t
llmodel.llmodel_isModelLoaded.argtypes = [ctypes.c_void_p]
llmodel.llmodel_isModelLoaded.restype = ctypes.c_bool
PromptCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.c_int32)
ResponseCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.c_int32, ctypes.c_char_p)
EmbCancelCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.POINTER(ctypes.c_uint), ctypes.c_uint, ctypes.c_char_p)
PromptCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.POINTER(ctypes.c_int32), ctypes.c_size_t, ctypes.c_bool)
ResponseCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.c_int32, ctypes.c_char_p)
EmbCancelCallback = ctypes.CFUNCTYPE(ctypes.c_bool, ctypes.POINTER(ctypes.c_uint), ctypes.c_uint, ctypes.c_char_p)
SpecialTokenCallback = ctypes.CFUNCTYPE(None, ctypes.c_char_p, ctypes.c_char_p)
llmodel.llmodel_prompt.argtypes = [
ctypes.c_void_p,
ctypes.c_char_p,
ctypes.c_char_p,
PromptCallback,
ResponseCallback,
ctypes.c_bool,
ctypes.POINTER(LLModelPromptContext),
ctypes.c_bool,
ctypes.c_char_p,
ctypes.POINTER(ctypes.c_char_p),
]
llmodel.llmodel_prompt.restype = None
llmodel.llmodel_prompt.restype = ctypes.c_bool
llmodel.llmodel_embed.argtypes = [
ctypes.c_void_p,
@ -225,6 +221,12 @@ llmodel.llmodel_model_backend_name.restype = ctypes.c_char_p
llmodel.llmodel_model_gpu_device_name.argtypes = [ctypes.c_void_p]
llmodel.llmodel_model_gpu_device_name.restype = ctypes.c_char_p
llmodel.llmodel_count_prompt_tokens.argtypes = [ctypes.c_void_p, ctypes.POINTER(ctypes.c_char_p)]
llmodel.llmodel_count_prompt_tokens.restype = ctypes.c_int32
llmodel.llmodel_model_foreach_special_token.argtypes = [ctypes.c_void_p, SpecialTokenCallback]
llmodel.llmodel_model_foreach_special_token.restype = None
ResponseCallbackType = Callable[[int, str], bool]
RawResponseCallbackType = Callable[[int, bytes], bool]
EmbCancelCallbackType: TypeAlias = 'Callable[[list[int], str], bool]'
@ -269,7 +271,6 @@ class LLModel:
self.model_path = model_path.encode()
self.n_ctx = n_ctx
self.ngl = ngl
self.context: LLModelPromptContext | None = None
self.buffer = bytearray()
self.buff_expecting_cont_bytes: int = 0
@ -289,6 +290,10 @@ class LLModel:
raise RuntimeError(f"Unable to instantiate model: {errmsg}")
self.model: ctypes.c_void_p | None = model
self.special_tokens_map: dict[str, str] = {}
llmodel.llmodel_model_foreach_special_token(
self.model, lambda n, t: self.special_tokens_map.__setitem__(n.decode(), t.decode()),
)
def __del__(self, llmodel=llmodel):
if hasattr(self, 'model'):
@ -315,6 +320,19 @@ class LLModel:
dev = llmodel.llmodel_model_gpu_device_name(self.model)
return None if dev is None else dev.decode()
def count_prompt_tokens(self, prompt: str) -> int:
if self.model is None:
self._raise_closed()
err = ctypes.c_char_p()
n_tok = llmodel.llmodel_count_prompt_tokens(self.model, prompt, ctypes.byref(err))
if n_tok < 0:
s = err.value
errmsg = 'null' if s is None else s.decode()
raise RuntimeError(f'Unable to count prompt tokens: {errmsg}')
return n_tok
llmodel.llmodel_count_prompt_tokens.argtypes = [ctypes.c_void_p, ctypes.c_char_p]
@staticmethod
def list_gpus(mem_required: int = 0) -> list[str]:
"""
@ -378,50 +396,6 @@ class LLModel:
raise Exception("Model not loaded")
return llmodel.llmodel_threadCount(self.model)
def _set_context(
self,
n_predict: int = 4096,
top_k: int = 40,
top_p: float = 0.9,
min_p: float = 0.0,
temp: float = 0.1,
n_batch: int = 8,
repeat_penalty: float = 1.2,
repeat_last_n: int = 10,
context_erase: float = 0.75,
reset_context: bool = False,
):
if self.context is None:
context = LLModelPromptContext(
tokens_size=0,
n_past=0,
n_ctx=0,
n_predict=n_predict,
top_k=top_k,
top_p=top_p,
min_p=min_p,
temp=temp,
n_batch=n_batch,
repeat_penalty=repeat_penalty,
repeat_last_n=repeat_last_n,
context_erase=context_erase,
)
self.context = context
else:
context = self.context
if reset_context:
self.context.n_past = 0
self.context.n_predict = n_predict
self.context.top_k = top_k
self.context.top_p = top_p
self.context.min_p = min_p
self.context.temp = temp
self.context.n_batch = n_batch
self.context.repeat_penalty = repeat_penalty
self.context.repeat_last_n = repeat_last_n
self.context.context_erase = context_erase
@overload
def generate_embeddings(
self, text: str, prefix: str | None, dimensionality: int, do_mean: bool, atlas: bool,
@ -491,20 +465,18 @@ class LLModel:
def prompt_model(
self,
prompt: str,
prompt_template: str,
callback: ResponseCallbackType,
n_predict: int = 4096,
top_k: int = 40,
top_p: float = 0.9,
min_p: float = 0.0,
temp: float = 0.1,
n_batch: int = 8,
repeat_penalty: float = 1.2,
repeat_last_n: int = 10,
context_erase: float = 0.75,
reset_context: bool = False,
special: bool = False,
prompt : str,
callback : ResponseCallbackType,
n_predict : int = 4096,
top_k : int = 40,
top_p : float = 0.9,
min_p : float = 0.0,
temp : float = 0.1,
n_batch : int = 8,
repeat_penalty : float = 1.2,
repeat_last_n : int = 10,
context_erase : float = 0.75,
reset_context : bool = False,
):
"""
Generate response from model from a prompt.
@ -527,34 +499,38 @@ class LLModel:
self.buffer.clear()
self.buff_expecting_cont_bytes = 0
self._set_context(
n_predict=n_predict,
top_k=top_k,
top_p=top_p,
min_p=min_p,
temp=temp,
n_batch=n_batch,
repeat_penalty=repeat_penalty,
repeat_last_n=repeat_last_n,
context_erase=context_erase,
reset_context=reset_context,
context = LLModelPromptContext(
n_predict = n_predict,
top_k = top_k,
top_p = top_p,
min_p = min_p,
temp = temp,
n_batch = n_batch,
repeat_penalty = repeat_penalty,
repeat_last_n = repeat_last_n,
context_erase = context_erase,
)
llmodel.llmodel_prompt(
error_msg: bytes | None = None
def error_callback(msg: bytes) -> None:
nonlocal error_msg
error_msg = msg
err = ctypes.c_char_p()
if not llmodel.llmodel_prompt(
self.model,
ctypes.c_char_p(prompt.encode()),
ctypes.c_char_p(prompt_template.encode()),
PromptCallback(self._prompt_callback),
ResponseCallback(self._callback_decoder(callback)),
True,
self.context,
special,
ctypes.c_char_p(),
)
context,
ctypes.byref(err),
):
s = err.value
raise RuntimeError(f"prompt error: {'null' if s is None else s.decode()}")
def prompt_model_streaming(
self, prompt: str, prompt_template: str, callback: ResponseCallbackType = empty_response_callback, **kwargs
) -> Iterable[str]:
self, prompt: str, callback: ResponseCallbackType = empty_response_callback, **kwargs: Any,
) -> Iterator[str]:
if self.model is None:
self._raise_closed()
@ -573,15 +549,15 @@ class LLModel:
return _generator_callback
def run_llmodel_prompt(prompt: str, prompt_template: str, callback: ResponseCallbackType, **kwargs):
self.prompt_model(prompt, prompt_template, callback, **kwargs)
def run_llmodel_prompt(prompt: str, callback: ResponseCallbackType, **kwargs):
self.prompt_model(prompt, callback, **kwargs)
output_queue.put(Sentinel.TERMINATING_SYMBOL)
# Kick off llmodel_prompt in separate thread so we can return generator
# immediately
thread = threading.Thread(
target=run_llmodel_prompt,
args=(prompt, prompt_template, _generator_callback_wrapper(callback)),
args=(prompt, _generator_callback_wrapper(callback)),
kwargs=kwargs,
)
thread.start()
@ -636,5 +612,5 @@ class LLModel:
# Empty prompt callback
@staticmethod
def _prompt_callback(token_id: int) -> bool:
def _prompt_callback(token_ids: ctypes._Pointer[ctypes.c_int32], n_token_ids: int, cached: bool) -> bool:
return True

View File

@ -4,37 +4,66 @@ Python only API for running all GPT4All models.
from __future__ import annotations
import hashlib
import json
import os
import platform
import re
import sys
import warnings
from contextlib import contextmanager
from datetime import datetime
from pathlib import Path
from types import TracebackType
from typing import TYPE_CHECKING, Any, Iterable, Literal, Protocol, overload
from typing import TYPE_CHECKING, Any, Iterable, Iterator, Literal, NamedTuple, NoReturn, Protocol, TypedDict, overload
import jinja2
import requests
from jinja2.sandbox import ImmutableSandboxedEnvironment
from requests.exceptions import ChunkedEncodingError
from tqdm import tqdm
from urllib3.exceptions import IncompleteRead, ProtocolError
from ._pyllmodel import (CancellationError as CancellationError, EmbCancelCallbackType, EmbedResult as EmbedResult,
LLModel, ResponseCallbackType, empty_response_callback)
LLModel, ResponseCallbackType, _operator_call, empty_response_callback)
if TYPE_CHECKING:
from typing_extensions import Self, TypeAlias
if sys.platform == 'darwin':
if sys.platform == "darwin":
import fcntl
# TODO: move to config
DEFAULT_MODEL_DIRECTORY = Path.home() / ".cache" / "gpt4all"
DEFAULT_PROMPT_TEMPLATE = "### Human:\n{0}\n\n### Assistant:\n"
ConfigType: TypeAlias = "dict[str, Any]"
ConfigType: TypeAlias = 'dict[str, Any]'
MessageType: TypeAlias = 'dict[str, str]'
# Environment setup adapted from HF transformers
@_operator_call
def _jinja_env() -> ImmutableSandboxedEnvironment:
def raise_exception(message: str) -> NoReturn:
raise jinja2.exceptions.TemplateError(message)
def tojson(obj: Any, indent: int | None = None) -> str:
return json.dumps(obj, ensure_ascii=False, indent=indent)
def strftime_now(fmt: str) -> str:
return datetime.now().strftime(fmt)
env = ImmutableSandboxedEnvironment(trim_blocks=True, lstrip_blocks=True)
env.filters["tojson" ] = tojson
env.globals["raise_exception"] = raise_exception
env.globals["strftime_now" ] = strftime_now
return env
class MessageType(TypedDict):
role: str
content: str
class ChatSession(NamedTuple):
template: jinja2.Template
history: list[MessageType]
class Embed4All:
@ -54,7 +83,7 @@ class Embed4All:
kwargs: Remaining keyword arguments are passed to the `GPT4All` constructor.
"""
if model_name is None:
model_name = 'all-MiniLM-L6-v2.gguf2.f16.gguf'
model_name = "all-MiniLM-L6-v2.gguf2.f16.gguf"
self.gpt4all = GPT4All(model_name, n_threads=n_threads, device=device, **kwargs)
def __enter__(self) -> Self:
@ -145,18 +174,18 @@ class Embed4All:
dimensionality = -1
else:
if dimensionality <= 0:
raise ValueError(f'Dimensionality must be None or a positive integer, got {dimensionality}')
raise ValueError(f"Dimensionality must be None or a positive integer, got {dimensionality}")
if dimensionality < self.MIN_DIMENSIONALITY:
warnings.warn(
f'Dimensionality {dimensionality} is less than the suggested minimum of {self.MIN_DIMENSIONALITY}.'
' Performance may be degraded.'
f"Dimensionality {dimensionality} is less than the suggested minimum of {self.MIN_DIMENSIONALITY}."
" Performance may be degraded."
)
try:
do_mean = {"mean": True, "truncate": False}[long_text_mode]
except KeyError:
raise ValueError(f"Long text mode must be one of 'mean' or 'truncate', got {long_text_mode!r}")
result = self.gpt4all.model.generate_embeddings(text, prefix, dimensionality, do_mean, atlas, cancel_cb)
return result if return_dict else result['embeddings']
return result if return_dict else result["embeddings"]
class GPT4All:
@ -204,8 +233,7 @@ class GPT4All:
"""
self.model_type = model_type
self._history: list[MessageType] | None = None
self._current_prompt_template: str = "{0}"
self._chat_session: ChatSession | None = None
device_init = None
if sys.platform == "darwin":
@ -264,7 +292,13 @@ class GPT4All:
@property
def current_chat_session(self) -> list[MessageType] | None:
return None if self._history is None else list(self._history)
return None if self._chat_session is None else self._chat_session.history
@current_chat_session.setter
def current_chat_session(self, history: list[MessageType]) -> None:
if self._chat_session is None:
raise ValueError("current_chat_session may only be set when there is an active chat session")
self._chat_session.history[:] = history
@staticmethod
def list_models() -> list[ConfigType]:
@ -276,7 +310,7 @@ class GPT4All:
"""
resp = requests.get("https://gpt4all.io/models/models3.json")
if resp.status_code != 200:
raise ValueError(f'Request failed: HTTP {resp.status_code} {resp.reason}')
raise ValueError(f"Request failed: HTTP {resp.status_code} {resp.reason}")
return resp.json()
@classmethod
@ -306,15 +340,9 @@ class GPT4All:
# get the config for the model
config: ConfigType = {}
if allow_download:
available_models = cls.list_models()
for m in available_models:
if model_filename == m["filename"]:
tmpl = m.get("promptTemplate", DEFAULT_PROMPT_TEMPLATE)
# change to Python-style formatting
m["promptTemplate"] = tmpl.replace("%1", "{0}", 1).replace("%2", "{1}", 1)
config.update(m)
break
models = cls.list_models()
if (model := next((m for m in models if m["filename"] == model_filename), None)) is not None:
config.update(model)
# Validate download directory
if model_path is None:
@ -378,13 +406,13 @@ class GPT4All:
headers = {}
if offset:
print(f"\nDownload interrupted, resuming from byte position {offset}", file=sys.stderr)
headers['Range'] = f'bytes={offset}-' # resume incomplete response
headers["Range"] = f"bytes={offset}-" # resume incomplete response
headers["Accept-Encoding"] = "identity" # Content-Encoding changes meaning of ranges
response = requests.get(url, stream=True, headers=headers)
if response.status_code not in (200, 206):
raise ValueError(f'Request failed: HTTP {response.status_code} {response.reason}')
if offset and (response.status_code != 206 or str(offset) not in response.headers.get('Content-Range', '')):
raise ValueError('Connection was interrupted and server does not support range requests')
raise ValueError(f"Request failed: HTTP {response.status_code} {response.reason}")
if offset and (response.status_code != 206 or str(offset) not in response.headers.get("Content-Range", "")):
raise ValueError("Connection was interrupted and server does not support range requests")
if (enc := response.headers.get("Content-Encoding")) is not None:
raise ValueError(f"Expected identity Content-Encoding, got {enc}")
return response
@ -483,19 +511,19 @@ class GPT4All:
def generate(
self,
prompt: str,
prompt : str,
*,
max_tokens: int = 200,
temp: float = 0.7,
top_k: int = 40,
top_p: float = 0.4,
min_p: float = 0.0,
repeat_penalty: float = 1.18,
repeat_last_n: int = 64,
n_batch: int = 8,
n_predict: int | None = None,
streaming: bool = False,
callback: ResponseCallbackType = empty_response_callback,
max_tokens : int = 200,
temp : float = 0.7,
top_k : int = 40,
top_p : float = 0.4,
min_p : float = 0.0,
repeat_penalty : float = 1.18,
repeat_last_n : int = 64,
n_batch : int = 8,
n_predict : int | None = None,
streaming : bool = False,
callback : ResponseCallbackType = empty_response_callback,
) -> Any:
"""
Generate outputs from any GPT4All model.
@ -520,122 +548,94 @@ class GPT4All:
# Preparing the model request
generate_kwargs: dict[str, Any] = dict(
temp=temp,
top_k=top_k,
top_p=top_p,
min_p=min_p,
repeat_penalty=repeat_penalty,
repeat_last_n=repeat_last_n,
n_batch=n_batch,
n_predict=n_predict if n_predict is not None else max_tokens,
temp = temp,
top_k = top_k,
top_p = top_p,
min_p = min_p,
repeat_penalty = repeat_penalty,
repeat_last_n = repeat_last_n,
n_batch = n_batch,
n_predict = n_predict if n_predict is not None else max_tokens,
)
if self._history is not None:
# check if there is only one message, i.e. system prompt:
reset = len(self._history) == 1
self._history.append({"role": "user", "content": prompt})
fct_func = self._format_chat_prompt_template.__func__ # type: ignore[attr-defined]
if fct_func is GPT4All._format_chat_prompt_template:
if reset:
# ingest system prompt
# use "%1%2" and not "%1" to avoid implicit whitespace
self.model.prompt_model(self._history[0]["content"], "%1%2",
empty_response_callback,
n_batch=n_batch, n_predict=0, reset_context=True, special=True)
prompt_template = self._current_prompt_template.format("%1", "%2")
else:
warnings.warn(
"_format_chat_prompt_template is deprecated. Please use a chat session with a prompt template.",
DeprecationWarning,
)
# special tokens won't be processed
prompt = self._format_chat_prompt_template(
self._history[-1:],
self._history[0]["content"] if reset else "",
)
prompt_template = "%1"
generate_kwargs["reset_context"] = reset
else:
prompt_template = "%1"
generate_kwargs["reset_context"] = True
# Prepare the callback, process the model response
output_collector: list[MessageType]
output_collector = [
{"content": ""}
] # placeholder for the self._history if chat session is not activated
full_response = ""
if self._history is not None:
self._history.append({"role": "assistant", "content": ""})
output_collector = self._history
def _callback_wrapper(token_id: int, response: str) -> bool:
nonlocal full_response
full_response += response
return callback(token_id, response)
def _callback_wrapper(
callback: ResponseCallbackType,
output_collector: list[MessageType],
) -> ResponseCallbackType:
def _callback(token_id: int, response: str) -> bool:
nonlocal callback, output_collector
last_msg_rendered = prompt
if self._chat_session is not None:
session = self._chat_session
def render(messages: list[MessageType]) -> str:
return session.template.render(
messages=messages,
add_generation_prompt=True,
**self.model.special_tokens_map,
)
session.history.append(MessageType(role="user", content=prompt))
prompt = render(session.history)
if len(session.history) > 1:
last_msg_rendered = render(session.history[-1:])
output_collector[-1]["content"] += response
return callback(token_id, response)
return _callback
# Check request length
last_msg_len = self.model.count_prompt_tokens(last_msg_rendered)
if last_msg_len > (limit := self.model.n_ctx - 4):
raise ValueError(f"Your message was too long and could not be processed ({last_msg_len} > {limit}).")
# Send the request to the model
if streaming:
return self.model.prompt_model_streaming(
prompt,
prompt_template,
_callback_wrapper(callback, output_collector),
**generate_kwargs,
)
def stream() -> Iterator[str]:
yield from self.model.prompt_model_streaming(prompt, _callback_wrapper, **generate_kwargs)
if self._chat_session is not None:
self._chat_session.history.append(MessageType(role="assistant", content=full_response))
return stream()
self.model.prompt_model(
prompt,
prompt_template,
_callback_wrapper(callback, output_collector),
**generate_kwargs,
)
return output_collector[-1]["content"]
self.model.prompt_model(prompt, _callback_wrapper, **generate_kwargs)
if self._chat_session is not None:
self._chat_session.history.append(MessageType(role="assistant", content=full_response))
return full_response
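
As the diff above shows, `generate()` now returns the full response string directly, or a generator of text fragments when `streaming=True`, instead of reading from an output collector. A hedged usage sketch; the model file name is borrowed from the test model referenced elsewhere in this comparison and is assumed to be available locally:

```python
# Usage sketch for the revised generate() API (not part of the diff).
from gpt4all import GPT4All

model = GPT4All("Llama-3.2-1B-Instruct-Q4_0.gguf")  # assumed local model file

# Non-streaming call: returns the complete response as a string.
reply = model.generate("Name three primary colors.", max_tokens=64, temp=0.7)
print(reply)

# Streaming call: returns an iterator that yields text fragments as they arrive.
for fragment in model.generate("Count to five.", max_tokens=32, streaming=True):
    print(fragment, end="", flush=True)
```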
@contextmanager
def chat_session(
self,
system_prompt: str | None = None,
prompt_template: str | None = None,
system_message: str | Literal[False] | None = None,
chat_template: str | None = None,
):
"""
Context manager to hold an inference optimized chat session with a GPT4All model.
Args:
system_prompt: An initial instruction for the model.
prompt_template: Template for the prompts with {0} being replaced by the user message.
system_message: An initial instruction for the model, None to use the model default, or False to disable. Defaults to None.
chat_template: Jinja template for the conversation, or None to use the model default. Defaults to None.
"""
if system_prompt is None:
system_prompt = self.config.get("systemPrompt", "")
if system_message is None:
system_message = self.config.get("systemMessage", False)
if prompt_template is None:
if (tmpl := self.config.get("promptTemplate")) is None:
warnings.warn("Use of a sideloaded model or allow_download=False without specifying a prompt template "
"is deprecated. Defaulting to Alpaca.", DeprecationWarning)
tmpl = DEFAULT_PROMPT_TEMPLATE
prompt_template = tmpl
if chat_template is None:
if "name" not in self.config:
raise ValueError("For sideloaded models or with allow_download=False, you must specify a chat template.")
if "chatTemplate" not in self.config:
raise NotImplementedError("This model appears to have a built-in chat template, but loading it is not "
"currently implemented. Please pass a template to chat_session() directly.")
if (tmpl := self.config["chatTemplate"]) is None:
raise ValueError(f"The model {self.config['name']!r} does not support chat.")
chat_template = tmpl
if re.search(r"%1(?![0-9])", prompt_template):
raise ValueError("Prompt template containing a literal '%1' is not supported. For a prompt "
"placeholder, please use '{0}' instead.")
self._history = [{"role": "system", "content": system_prompt}]
self._current_prompt_template = prompt_template
history = []
if system_message is not False:
history.append(MessageType(role="system", content=system_message))
self._chat_session = ChatSession(
template=_jinja_env.from_string(chat_template),
history=history,
)
try:
yield self
finally:
self._history = None
self._current_prompt_template = "{0}"
self._chat_session = None
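
The new `chat_session()` signature replaces `system_prompt`/`prompt_template` with `system_message` (pass `False` to disable it) and a Jinja `chat_template`. A brief, hedged example of how a caller might use it; the model file name is again an assumption:

```python
# Sketch of the revised chat_session() API (illustrative, not part of the diff).
from gpt4all import GPT4All

model = GPT4All("Llama-3.2-1B-Instruct-Q4_0.gguf")  # assumed local model file
with model.chat_session(system_message="You are a concise assistant.") as chat:
    print(chat.generate("What is the capital of France?", max_tokens=32))
    # Later turns in the same session reuse the accumulated history.
    print(chat.generate("And of Italy?", max_tokens=32))
```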
@staticmethod
def list_gpus() -> list[str]:
@ -647,43 +647,6 @@ class GPT4All:
"""
return LLModel.list_gpus()
def _format_chat_prompt_template(
self,
messages: list[MessageType],
default_prompt_header: str = "",
default_prompt_footer: str = "",
) -> str:
"""
Helper method for building a prompt from list of messages using the self._current_prompt_template as a template for each message.
Warning:
This function was deprecated in version 2.3.0, and will be removed in a future release.
Args:
messages: List of dictionaries. Each dictionary should have a "role" key
with value of "system", "assistant", or "user" and a "content" key with a
string value. Messages are organized such that "system" messages are at top of prompt,
and "user" and "assistant" messages are displayed in order. Assistant messages get formatted as
"Response: {content}".
Returns:
Formatted prompt.
"""
full_prompt = default_prompt_header + "\n\n" if default_prompt_header != "" else ""
for message in messages:
if message["role"] == "user":
user_message = self._current_prompt_template.format(message["content"])
full_prompt += user_message
if message["role"] == "assistant":
assistant_message = message["content"] + "\n"
full_prompt += assistant_message
full_prompt += "\n\n" + default_prompt_footer if default_prompt_footer != "" else ""
return full_prompt
def append_extension_if_missing(model_name):
if not model_name.endswith((".bin", ".gguf")):
@ -696,7 +659,7 @@ class _HasFileno(Protocol):
def _fsync(fd: int | _HasFileno) -> None:
if sys.platform == 'darwin':
if sys.platform == "darwin":
# Apple's fsync does not flush the drive write cache
try:
fcntl.fcntl(fd, fcntl.F_FULLFSYNC)


@ -14,6 +14,7 @@ nav:
- 'Models' : 'gpt4all_desktop/models.md'
- 'LocalDocs' : 'gpt4all_desktop/localdocs.md'
- 'Settings' : 'gpt4all_desktop/settings.md'
- 'Chat Templates' : 'gpt4all_desktop/chat_templates.md'
- 'Cookbook':
- 'Local AI Chat with Microsoft Excel': 'gpt4all_desktop/cookbook/use-local-ai-models-to-privately-chat-with-microsoft-excel.md'
- 'Local AI Chat with your Google Drive': 'gpt4all_desktop/cookbook/use-local-ai-models-to-privately-chat-with-google-drive.md'


@ -88,9 +88,10 @@ setup(
python_requires='>=3.8',
packages=find_packages(),
install_requires=[
'importlib_resources; python_version < "3.9"',
'jinja2~=3.1',
'requests',
'tqdm',
'importlib_resources; python_version < "3.9"',
'typing-extensions>=4.3.0; python_version >= "3.9" and python_version < "3.11"',
],
extras_require={

gpt4all-chat/.flake8 Normal file

@ -0,0 +1,5 @@
# vim: set syntax=dosini:
[flake8]
exclude = .*,__pycache__
max-line-length = 120
extend-ignore = B001,C408,D,DAR,E221,E303,E722,E741,E800,N801,N806,P101,S101,S324,S404,S406,S410,S603,WPS100,WPS110,WPS111,WPS113,WPS114,WPS115,WPS120,WPS2,WPS300,WPS301,WPS304,WPS305,WPS306,WPS309,WPS316,WPS317,WPS318,WPS319,WPS322,WPS323,WPS326,WPS329,WPS330,WPS332,WPS336,WPS337,WPS347,WPS360,WPS361,WPS407,WPS414,WPS420,WPS421,WPS429,WPS430,WPS431,WPS432,WPS433,WPS437,WPS440,WPS440,WPS441,WPS442,WPS457,WPS458,WPS460,WPS462,WPS463,WPS473,WPS501,WPS504,WPS505,WPS508,WPS509,WPS510,WPS515,WPS516,WPS519,WPS520,WPS529,WPS531,WPS602,WPS604,WPS605,WPS608,WPS609,WPS613,WPS615


@ -4,6 +4,163 @@ All notable changes to this project will be documented in this file.
The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
## [3.10.0] - 2025-02-24
### Added
- Whitelist Granite (non-MoE) model architecture (by [@ThiloteE](https://github.com/ThiloteE) in [#3487](https://github.com/nomic-ai/gpt4all/pull/3487))
- Add support for CUDA compute 5.0 GPUs such as the GTX 750 ([#3499](https://github.com/nomic-ai/gpt4all/pull/3499))
- Add a Remote Providers tab to the Add Model page ([#3506](https://github.com/nomic-ai/gpt4all/pull/3506))
### Changed
- Substitute prettier default templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B (by [@ThiloteE](https://github.com/ThiloteE) in [#3471](https://github.com/nomic-ai/gpt4all/pull/3471))
- Build with LLVM Clang 19 on macOS and Ubuntu ([#3500](https://github.com/nomic-ai/gpt4all/pull/3500))
### Fixed
- Fix several potential crashes ([#3465](https://github.com/nomic-ai/gpt4all/pull/3465))
- Fix visual spacing issues with deepseek models ([#3470](https://github.com/nomic-ai/gpt4all/pull/3470))
- Add missing strings to Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#3496](https://github.com/nomic-ai/gpt4all/pull/3496))
- Update Simplified Chinese translation (by [@Junior2Ran](https://github.com/Junior2Ran) in [#3467](https://github.com/nomic-ai/gpt4all/pull/3467))
## [3.9.0] - 2025-02-04
### Added
- Whitelist OLMoE and Granite MoE model architectures (no Vulkan) (by [@ThiloteE](https://github.com/ThiloteE) in [#3449](https://github.com/nomic-ai/gpt4all/pull/3449))
### Fixed
- Fix "index N is not a prompt" when using LocalDocs with reasoning ([#3451](https://github.com/nomic-ai/gpt4all/pull/3451))
- Work around rendering artifacts on Snapdragon SoCs with Windows ([#3450](https://github.com/nomic-ai/gpt4all/pull/3450))
- Prevent DeepSeek-R1 reasoning from appearing in chat names and follow-up questions ([#3458](https://github.com/nomic-ai/gpt4all/pull/3458))
- Fix LocalDocs crash on Windows ARM when reading PDFs ([#3460](https://github.com/nomic-ai/gpt4all/pull/3460))
- Fix UI freeze when chat template is `{#` ([#3446](https://github.com/nomic-ai/gpt4all/pull/3446))
## [3.8.0] - 2025-01-30
### Added
- Support DeepSeek-R1 Qwen models ([#3431](https://github.com/nomic-ai/gpt4all/pull/3431))
- Support for think tags in the GUI ([#3440](https://github.com/nomic-ai/gpt4all/pull/3440))
- Support specifying SHA256 hash in models3.json instead of MD5 ([#3437](https://github.com/nomic-ai/gpt4all/pull/3437))
### Changed
- Use minja instead of Jinja2Cpp for significantly improved template compatibility ([#3433](https://github.com/nomic-ai/gpt4all/pull/3433))
### Fixed
- Fix regression while using localdocs with server API ([#3410](https://github.com/nomic-ai/gpt4all/pull/3410))
- Don't show system messages in server chat view ([#3411](https://github.com/nomic-ai/gpt4all/pull/3411))
- Fix `codesign --verify` failure on macOS ([#3413](https://github.com/nomic-ai/gpt4all/pull/3413))
- Code Interpreter: Fix console.log not accepting a single string after v3.7.0 ([#3426](https://github.com/nomic-ai/gpt4all/pull/3426))
- Fix Phi 3.1 Mini 128K Instruct template (by [@ThiloteE](https://github.com/ThiloteE) in [#3412](https://github.com/nomic-ai/gpt4all/pull/3412))
- Don't block the gui thread for reasoning ([#3435](https://github.com/nomic-ai/gpt4all/pull/3435))
- Fix corruption of unicode in output of reasoning models ([#3443](https://github.com/nomic-ai/gpt4all/pull/3443))
## [3.7.0] - 2025-01-21
### Added
- Add support for the Windows ARM64 target platform (CPU-only) ([#3385](https://github.com/nomic-ai/gpt4all/pull/3385))
### Changed
- Update from Qt 6.5.1 to 6.8.1 ([#3386](https://github.com/nomic-ai/gpt4all/pull/3386))
### Fixed
- Fix the timeout error in code interpreter ([#3369](https://github.com/nomic-ai/gpt4all/pull/3369))
- Fix code interpreter console.log not accepting multiple arguments ([#3371](https://github.com/nomic-ai/gpt4all/pull/3371))
- Remove 'X is defined' checks from templates for better compatibility ([#3372](https://github.com/nomic-ai/gpt4all/pull/3372))
- Jinja2Cpp: Add 'if' requirement for 'else' parsing to fix crash ([#3373](https://github.com/nomic-ai/gpt4all/pull/3373))
- Save chats on quit, even if the window isn't closed first ([#3387](https://github.com/nomic-ai/gpt4all/pull/3387))
- Add chat template replacements for five new models and fix EM German Mistral ([#3393](https://github.com/nomic-ai/gpt4all/pull/3393))
- Fix crash when entering `{{ a["foo"(` as chat template ([#3394](https://github.com/nomic-ai/gpt4all/pull/3394))
- Sign the maintenance tool on macOS to prevent crash on Sequoia ([#3391](https://github.com/nomic-ai/gpt4all/pull/3391))
- Jinja2Cpp: Fix operator precedence in 'not X is defined' ([#3402](https://github.com/nomic-ai/gpt4all/pull/3402))
## [3.6.1] - 2024-12-20
### Fixed
- Fix the stop generation button no longer working in v3.6.0 ([#3336](https://github.com/nomic-ai/gpt4all/pull/3336))
- Fix the copy entire conversation button no longer working in v3.6.0 ([#3336](https://github.com/nomic-ai/gpt4all/pull/3336))
## [3.6.0] - 2024-12-19
### Added
- Automatically substitute chat templates that are not compatible with Jinja2Cpp in GGUFs ([#3327](https://github.com/nomic-ai/gpt4all/pull/3327))
- Built-in javascript code interpreter tool plus model ([#3173](https://github.com/nomic-ai/gpt4all/pull/3173))
### Fixed
- Fix remote model template to allow for XML in messages ([#3318](https://github.com/nomic-ai/gpt4all/pull/3318))
- Fix Jinja2Cpp bug that broke system message detection in chat templates ([#3325](https://github.com/nomic-ai/gpt4all/pull/3325))
- Fix LocalDocs sources displaying in unconsolidated form after v3.5.0 ([#3328](https://github.com/nomic-ai/gpt4all/pull/3328))
## [3.5.3] - 2024-12-16
### Fixed
- Fix LocalDocs not using information from sources in v3.5.2 ([#3302](https://github.com/nomic-ai/gpt4all/pull/3302))
## [3.5.2] - 2024-12-13
### Added
- Create separate download pages for built-in and HuggingFace models ([#3269](https://github.com/nomic-ai/gpt4all/pull/3269))
### Fixed
- Fix API server ignoring assistant messages in history after v3.5.0 ([#3256](https://github.com/nomic-ai/gpt4all/pull/3256))
- Fix API server replying with incorrect token counts and stop reason after v3.5.0 ([#3256](https://github.com/nomic-ai/gpt4all/pull/3256))
- Fix API server remembering previous, unrelated conversations after v3.5.0 ([#3256](https://github.com/nomic-ai/gpt4all/pull/3256))
- Fix mishandling of default chat template and system message of cloned models in v3.5.0 ([#3262](https://github.com/nomic-ai/gpt4all/pull/3262))
- Fix untranslated text on the startup dialog ([#3293](https://github.com/nomic-ai/gpt4all/pull/3293))
## [3.5.1] - 2024-12-10
### Fixed
- Fix an incorrect value for currentResponse ([#3245](https://github.com/nomic-ai/gpt4all/pull/3245))
- Fix the default model button so it works again after 3.5.0 ([#3246](https://github.com/nomic-ai/gpt4all/pull/3246))
- Fix chat templates for Nous Hermes 2 Mistral, Mistral OpenOrca, Qwen 2, and remote models ([#3250](https://github.com/nomic-ai/gpt4all/pull/3250))
- Fix chat templates for Llama 3.2 models ([#3251](https://github.com/nomic-ai/gpt4all/pull/3251))
## [3.5.0] - 2024-12-09
### Changed
- Update Italian translation (by [@Harvester62](https://github.com/Harvester62) in [#3236](https://github.com/nomic-ai/gpt4all/pull/3236))
- Update Romanian translation (by [@SINAPSA-IC](https://github.com/SINAPSA-IC) in [#3232](https://github.com/nomic-ai/gpt4all/pull/3232))
### Fixed
- Fix a few more problems with the Jinja changes ([#3239](https://github.com/nomic-ai/gpt4all/pull/3239))
## [3.5.0-rc2] - 2024-12-06
### Changed
- Fade messages out with an animation when they are removed from the chat view ([#3227](https://github.com/nomic-ai/gpt4all/pull/3227))
- Tweak wording of edit/redo confirmation dialogs ([#3228](https://github.com/nomic-ai/gpt4all/pull/3228))
- Make edit/redo buttons disabled instead of invisible when they are temporarily unavailable ([#3228](https://github.com/nomic-ai/gpt4all/pull/3228))
## [3.5.0-rc1] - 2024-12-04
### Added
- Add ability to attach text, markdown, and rst files to chat ([#3135](https://github.com/nomic-ai/gpt4all/pull/3135))
- Add feature to minimize to system tray (by [@bgallois](https://github.com/bgallois) in [#3109](https://github.com/nomic-ai/gpt4all/pull/3109))
- Basic cache for faster prefill when the input shares a prefix with previous context ([#3073](https://github.com/nomic-ai/gpt4all/pull/3073))
- Add ability to edit prompts and regenerate any response ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
### Changed
- Implement Qt 6.8 compatibility ([#3121](https://github.com/nomic-ai/gpt4all/pull/3121))
- Use Jinja for chat templates instead of per-message QString.arg-style templates ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
- API server: Use system message(s) from client instead of settings ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
- API server: Accept messages in any order supported by the model instead of requiring user/assistant pairs ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
- Remote models: Pass system message with "system" role instead of joining with user message ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
### Removed
- Remove option to save binary model state to disk ([#3147](https://github.com/nomic-ai/gpt4all/pull/3147))
### Fixed
- Fix bug in GUI when localdocs encounters binary data ([#3137](https://github.com/nomic-ai/gpt4all/pull/3137))
- Fix LocalDocs bugs that prevented some docx files from fully chunking ([#3140](https://github.com/nomic-ai/gpt4all/pull/3140))
- Fix missing softmax that was causing crashes and effectively infinite temperature since 3.4.0 ([#3202](https://github.com/nomic-ai/gpt4all/pull/3202))
## [3.4.2] - 2024-10-16
### Fixed
- Limit bm25 retrieval to only specified collections ([#3083](https://github.com/nomic-ai/gpt4all/pull/3083))
- Fix bug removing documents because of a wrong case sensitive file suffix check ([#3083](https://github.com/nomic-ai/gpt4all/pull/3083))
- Fix bug with hybrid localdocs search where database would get out of sync ([#3083](https://github.com/nomic-ai/gpt4all/pull/3083))
- Fix GUI bug where the localdocs embedding device appears blank ([#3083](https://github.com/nomic-ai/gpt4all/pull/3083))
- Prevent LocalDocs from not making progress in certain cases ([#3094](https://github.com/nomic-ai/gpt4all/pull/3094))
## [3.4.1] - 2024-10-11
### Fixed
@ -155,6 +312,19 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.1.0/).
- Fix several Vulkan resource management issues ([#2694](https://github.com/nomic-ai/gpt4all/pull/2694))
- Fix crash/hang when some models stop generating, by showing special tokens ([#2701](https://github.com/nomic-ai/gpt4all/pull/2701))
[3.10.0]: https://github.com/nomic-ai/gpt4all/compare/v3.9.0...v3.10.0
[3.9.0]: https://github.com/nomic-ai/gpt4all/compare/v3.8.0...v3.9.0
[3.8.0]: https://github.com/nomic-ai/gpt4all/compare/v3.7.0...v3.8.0
[3.7.0]: https://github.com/nomic-ai/gpt4all/compare/v3.6.1...v3.7.0
[3.6.1]: https://github.com/nomic-ai/gpt4all/compare/v3.6.0...v3.6.1
[3.6.0]: https://github.com/nomic-ai/gpt4all/compare/v3.5.3...v3.6.0
[3.5.3]: https://github.com/nomic-ai/gpt4all/compare/v3.5.2...v3.5.3
[3.5.2]: https://github.com/nomic-ai/gpt4all/compare/v3.5.1...v3.5.2
[3.5.1]: https://github.com/nomic-ai/gpt4all/compare/v3.5.0...v3.5.1
[3.5.0]: https://github.com/nomic-ai/gpt4all/compare/v3.5.0-rc2...v3.5.0
[3.5.0-rc2]: https://github.com/nomic-ai/gpt4all/compare/v3.5.0-rc1...v3.5.0-rc2
[3.5.0-rc1]: https://github.com/nomic-ai/gpt4all/compare/v3.4.2...v3.5.0-rc1
[3.4.2]: https://github.com/nomic-ai/gpt4all/compare/v3.4.1...v3.4.2
[3.4.1]: https://github.com/nomic-ai/gpt4all/compare/v3.4.0...v3.4.1
[3.4.0]: https://github.com/nomic-ai/gpt4all/compare/v3.3.0...v3.4.0
[3.3.1]: https://github.com/nomic-ai/gpt4all/compare/v3.3.0...v3.3.1


@ -3,13 +3,17 @@ cmake_minimum_required(VERSION 3.25) # for try_compile SOURCE_FROM_VAR
include(../common/common.cmake)
set(APP_VERSION_MAJOR 3)
set(APP_VERSION_MINOR 4)
set(APP_VERSION_MINOR 10)
set(APP_VERSION_PATCH 1)
set(APP_VERSION_BASE "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
set(APP_VERSION "${APP_VERSION_BASE}")
set(APP_VERSION "${APP_VERSION_BASE}-dev0")
project(gpt4all VERSION ${APP_VERSION_BASE} LANGUAGES CXX C)
if (CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
set(CMAKE_INSTALL_PREFIX ${CMAKE_BINARY_DIR}/install CACHE PATH "..." FORCE)
endif()
if(APPLE)
option(BUILD_UNIVERSAL "Build a Universal binary on macOS" OFF)
if(BUILD_UNIVERSAL)
@ -22,14 +26,36 @@ if(APPLE)
endif()
endif()
find_package(Python3 3.12 QUIET COMPONENTS Interpreter)
option(GPT4ALL_TEST "Build the tests" ${Python3_FOUND})
option(GPT4ALL_LOCALHOST "Build installer for localhost repo" OFF)
option(GPT4ALL_OFFLINE_INSTALLER "Build an offline installer" OFF)
option(GPT4ALL_SIGN_INSTALL "Sign installed binaries and installers (requires signing identities)" OFF)
option(GPT4ALL_GEN_CPACK_CONFIG "Generate the CPack config.xml in the package step and nothing else." OFF)
set(GPT4ALL_USE_QTPDF "AUTO" CACHE STRING "Whether to Use QtPDF for LocalDocs. If OFF or not available on this platform, PDFium is used.")
set_property(CACHE GPT4ALL_USE_QTPDF PROPERTY STRINGS AUTO ON OFF)
set(GPT4ALL_FORCE_D3D12 "AUTO" CACHE STRING "Whether to use Direct3D 12 as the Qt scene graph backend. Defaults to ON on Windows ARM.")
set_property(CACHE GPT4ALL_FORCE_D3D12 PROPERTY STRINGS AUTO ON OFF)
include(cmake/cpack_config.cmake)
if (GPT4ALL_GEN_CPACK_CONFIG)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/cpack-steal-config.cmake.in"
"${CMAKE_BINARY_DIR}/cmake/cpack-steal-config.cmake" @ONLY)
set(CPACK_POST_BUILD_SCRIPTS ${CMAKE_BINARY_DIR}/cmake/cpack-steal-config.cmake)
include(CPack)
include(CPackIFW)
return()
endif()
set(CMAKE_EXPORT_COMPILE_COMMANDS ON)
set(CMAKE_CXX_STANDARD 23)
set(CMAKE_CXX_STANDARD_REQUIRED ON)
if (MSVC)
# Enable accurate __cplusplus macro
add_compile_options($<$<COMPILE_LANGUAGE:CXX>:/Zc:__cplusplus>)
endif()
# conftests
@ -66,13 +92,23 @@ include_directories("${CMAKE_CURRENT_BINARY_DIR}")
set(CMAKE_AUTOMOC ON)
set(CMAKE_AUTORCC ON)
# Generate a header file with the version number
configure_file(
"${CMAKE_CURRENT_SOURCE_DIR}/cmake/config.h.in"
"${CMAKE_CURRENT_BINARY_DIR}/config.h"
)
set(CMAKE_FIND_PACKAGE_TARGETS_GLOBAL ON)
set(GPT4ALL_QT_COMPONENTS Core HttpServer LinguistTools Quick QuickDialogs2 Sql Svg)
set(GPT4ALL_USING_QTPDF OFF)
if (CMAKE_SYSTEM_NAME MATCHES Windows AND CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
# QtPDF is not available.
if (GPT4ALL_USE_QTPDF STREQUAL "ON")
message(FATAL_ERROR "QtPDF is not available on Windows ARM64.")
endif()
elseif (GPT4ALL_USE_QTPDF MATCHES "^(ON|AUTO)$")
set(GPT4ALL_USING_QTPDF ON)
list(APPEND GPT4ALL_QT_COMPONENTS Pdf)
endif()
find_package(Qt6 6.8 COMPONENTS ${GPT4ALL_QT_COMPONENTS} REQUIRED)
find_package(Qt6 6.4 COMPONENTS Core HttpServer LinguistTools Pdf Quick QuickDialogs2 Sql Svg REQUIRED)
if (QT_KNOWN_POLICY_QTP0004)
qt_policy(SET QTP0004 NEW) # generate extra qmldir files on Qt 6.8+
endif()
# Get the Qt6Core target properties
get_target_property(Qt6Core_INCLUDE_DIRS Qt6::Core INTERFACE_INCLUDE_DIRECTORIES)
@ -90,9 +126,55 @@ message(STATUS "Qt 6 root directory: ${Qt6_ROOT_DIR}")
set(CMAKE_RUNTIME_OUTPUT_DIRECTORY ${CMAKE_BINARY_DIR}/bin)
set(GPT4ALL_CONFIG_FORCE_D3D12 -1)
if (NOT CMAKE_SYSTEM_NAME MATCHES Windows OR Qt6_VERSION VERSION_LESS "6.6")
# Direct3D 12 is not available.
if (GPT4ALL_FORCE_D3D12 STREQUAL "ON")
message(FATAL_ERROR "Cannot use Direct3D 12 on this platform.")
endif()
elseif (GPT4ALL_FORCE_D3D12 MATCHES "^(ON|AUTO)$")
if (GPT4ALL_FORCE_D3D12 STREQUAL "ON" OR CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
set(GPT4ALL_CONFIG_FORCE_D3D12 1)
endif()
endif()
# Generate a header file for configuration
configure_file(
"${CMAKE_CURRENT_SOURCE_DIR}/src/config.h.in"
"${CMAKE_CURRENT_BINARY_DIR}/config.h"
)
add_subdirectory(deps)
add_subdirectory(../gpt4all-backend llmodel)
if (GPT4ALL_TEST)
enable_testing()
# Llama-3.2-1B model
set(TEST_MODEL "Llama-3.2-1B-Instruct-Q4_0.gguf")
set(TEST_MODEL_MD5 "48ff0243978606fdba19d899b77802fc")
set(TEST_MODEL_PATH "${CMAKE_BINARY_DIR}/resources/${TEST_MODEL}")
set(TEST_MODEL_URL "https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/${TEST_MODEL}")
# Create a custom command to download the file if it does not exist or if the checksum does not match
add_custom_command(
OUTPUT "${TEST_MODEL_PATH}"
COMMAND ${CMAKE_COMMAND} -E echo "Downloading test model from ${TEST_MODEL_URL} ..."
COMMAND ${CMAKE_COMMAND} -DURL="${TEST_MODEL_URL}" -DOUTPUT_PATH="${TEST_MODEL_PATH}" -DEXPECTED_MD5="${TEST_MODEL_MD5}" -P "${CMAKE_SOURCE_DIR}/cmake/download_model.cmake"
DEPENDS "${CMAKE_SOURCE_DIR}/cmake/download_model.cmake"
)
# Define a custom target that depends on the downloaded model
add_custom_target(download_test_model
DEPENDS "${TEST_MODEL_PATH}"
)
add_subdirectory(tests)
# The 'check' target makes sure the tests and their dependencies are up-to-date before running them
add_custom_target(check COMMAND ${CMAKE_CTEST_COMMAND} --output-on-failure DEPENDS download_test_model chat gpt4all_tests)
endif()
set(CHAT_EXE_RESOURCES)
# Metal shader library
@ -137,17 +219,26 @@ if (APPLE)
set_source_files_properties(${CHAT_EXE_RESOURCES} PROPERTIES MACOSX_PACKAGE_LOCATION Resources)
endif()
set(MACOS_SOURCES)
if (APPLE)
find_library(COCOA_LIBRARY Cocoa)
list(APPEND MACOS_SOURCES src/macosdock.mm src/macosdock.h)
endif()
qt_add_executable(chat
src/main.cpp
src/chat.cpp src/chat.h
src/chatapi.cpp src/chatapi.h
src/chatlistmodel.cpp src/chatlistmodel.h
src/chatllm.cpp src/chatllm.h
src/chatmodel.h
src/chatmodel.h src/chatmodel.cpp
src/chatviewtextprocessor.cpp src/chatviewtextprocessor.h
src/codeinterpreter.cpp src/codeinterpreter.h
src/database.cpp src/database.h
src/download.cpp src/download.h
src/embllm.cpp src/embllm.h
src/jinja_helpers.cpp src/jinja_helpers.h
src/jinja_replacements.cpp src/jinja_replacements.h
src/llm.cpp src/llm.h
src/localdocs.cpp src/localdocs.h
src/localdocsmodel.cpp src/localdocsmodel.h
@ -156,8 +247,12 @@ qt_add_executable(chat
src/mysettings.cpp src/mysettings.h
src/network.cpp src/network.h
src/server.cpp src/server.h
src/tool.cpp src/tool.h
src/toolcallparser.cpp src/toolcallparser.h
src/toolmodel.cpp src/toolmodel.h
src/xlsxtomd.cpp src/xlsxtomd.h
${CHAT_EXE_RESOURCES}
${MACOS_SOURCES}
)
gpt4all_add_warning_options(chat)
@ -169,8 +264,15 @@ qt_add_qml_module(chat
main.qml
qml/AddCollectionView.qml
qml/AddModelView.qml
qml/AddGPT4AllModelView.qml
qml/AddHFModelView.qml
qml/AddRemoteModelView.qml
qml/ApplicationSettings.qml
qml/ChatDrawer.qml
qml/ChatCollapsibleItem.qml
qml/ChatItemView.qml
qml/ChatMessageButton.qml
qml/ChatTextItem.qml
qml/ChatView.qml
qml/CollectionsDrawer.qml
qml/HomeView.qml
@ -183,18 +285,20 @@ qt_add_qml_module(chat
qml/PopupDialog.qml
qml/SettingsView.qml
qml/StartupDialog.qml
qml/SwitchModelDialog.qml
qml/ConfirmationDialog.qml
qml/Theme.qml
qml/ThumbsDownDialog.qml
qml/Toast.qml
qml/ToastManager.qml
qml/MyBusyIndicator.qml
qml/MyButton.qml
qml/MyTabButton.qml
qml/MyCheckBox.qml
qml/MyComboBox.qml
qml/MyDialog.qml
qml/MyDirectoryField.qml
qml/MyFileDialog.qml
qml/MyFileIcon.qml
qml/MyFolderDialog.qml
qml/MyFancyLink.qml
qml/MyMenu.qml
@ -211,6 +315,7 @@ qt_add_qml_module(chat
qml/MyTextField.qml
qml/MyToolButton.qml
qml/MyWelcomeButton.qml
qml/RemoteModelCard.qml
RESOURCES
icons/antenna_1.svg
icons/antenna_2.svg
@ -229,6 +334,7 @@ qt_add_qml_module(chat
icons/eject.svg
icons/email.svg
icons/file-doc.svg
icons/file-docx.svg
icons/file-md.svg
icons/file-pdf.svg
icons/file-txt.svg
@ -240,6 +346,7 @@ qt_add_qml_module(chat
icons/gpt4all-48.png
icons/gpt4all.svg
icons/gpt4all_transparent.svg
icons/groq.svg
icons/home.svg
icons/image.svg
icons/info.svg
@ -247,12 +354,14 @@ qt_add_qml_module(chat
icons/left_panel_open.svg
icons/local-docs.svg
icons/models.svg
icons/mistral.svg
icons/network.svg
icons/nomic_logo.svg
icons/notes.svg
icons/paperclip.svg
icons/plus.svg
icons/plus_circle.svg
icons/openai.svg
icons/recycle.svg
icons/regenerate.svg
icons/search.svg
@ -338,25 +447,38 @@ target_include_directories(chat PRIVATE deps/usearch/include
deps/usearch/fp16/include)
target_link_libraries(chat
PRIVATE Qt6::Core Qt6::HttpServer Qt6::Pdf Qt6::Quick Qt6::Sql Qt6::Svg)
PRIVATE Qt6::Core Qt6::HttpServer Qt6::Quick Qt6::Sql Qt6::Svg)
if (GPT4ALL_USING_QTPDF)
target_compile_definitions(chat PRIVATE GPT4ALL_USE_QTPDF)
target_link_libraries(chat PRIVATE Qt6::Pdf)
else()
# Link PDFium
target_link_libraries(chat PRIVATE pdfium)
endif()
target_link_libraries(chat
PRIVATE llmodel SingleApplication fmt::fmt duckx::duckx QXlsx)
target_include_directories(chat PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/deps/json/include)
target_include_directories(chat PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/deps/json/include/nlohmann)
target_include_directories(chat PRIVATE ${CMAKE_CURRENT_SOURCE_DIR}/deps/minja/include)
if (APPLE)
target_link_libraries(chat PRIVATE ${COCOA_LIBRARY})
endif()
# -- install --
set(COMPONENT_NAME_MAIN ${PROJECT_NAME})
if(CMAKE_INSTALL_PREFIX_INITIALIZED_TO_DEFAULT)
set(CMAKE_INSTALL_PREFIX ${CMAKE_BINARY_DIR}/install CACHE PATH "..." FORCE)
if (APPLE)
set(GPT4ALL_LIB_DEST bin/gpt4all.app/Contents/Frameworks)
else()
set(GPT4ALL_LIB_DEST lib)
endif()
install(TARGETS chat DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN})
install(
TARGETS llmodel
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN} # .dll
LIBRARY DESTINATION ${GPT4ALL_LIB_DEST} COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN} # .dll
)
# We should probably iterate through the list of the cmake for backend, but these need to be installed
@ -379,8 +501,8 @@ endif()
install(
TARGETS ${MODEL_IMPL_TARGETS}
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .dll
LIBRARY DESTINATION ${GPT4ALL_LIB_DEST} COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
RUNTIME DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .dll
)
if(APPLE AND GPT4ALL_SIGN_INSTALL)
@ -409,7 +531,7 @@ if (LLMODEL_CUDA)
TARGETS llamamodel-mainline-cuda
llamamodel-mainline-cuda-avxonly
RUNTIME_DEPENDENCY_SET llama-cuda-deps
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so/.dylib
LIBRARY DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .so
RUNTIME DESTINATION lib COMPONENT ${COMPONENT_NAME_MAIN} # .dll
)
if (WIN32)
@ -423,66 +545,38 @@ if (LLMODEL_CUDA)
endif()
endif()
if (NOT GPT4ALL_USING_QTPDF)
# Install PDFium
if (WIN32)
install(FILES ${PDFium_LIBRARY} DESTINATION bin COMPONENT ${COMPONENT_NAME_MAIN}) # .dll
else()
install(FILES ${PDFium_LIBRARY} DESTINATION ${GPT4ALL_LIB_DEST} COMPONENT ${COMPONENT_NAME_MAIN}) # .so/.dylib
endif()
endif()
if (NOT APPLE)
install(FILES "${LOCAL_EMBEDDING_MODEL_PATH}"
DESTINATION resources
COMPONENT ${COMPONENT_NAME_MAIN})
endif()
set(CPACK_GENERATOR "IFW")
set(CPACK_VERBATIM_VARIABLES YES)
set(CPACK_IFW_VERBOSE ON)
if(${CMAKE_SYSTEM_NAME} MATCHES Linux)
if (CMAKE_SYSTEM_NAME MATCHES Linux)
find_program(LINUXDEPLOYQT linuxdeployqt HINTS "$ENV{HOME}/dev/linuxdeployqt/build/tools/linuxdeployqt" "$ENV{HOME}/project/linuxdeployqt/bin")
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/deploy-qt-linux.cmake.in"
"${CMAKE_BINARY_DIR}/cmake/deploy-qt-linux.cmake" @ONLY)
set(CPACK_PRE_BUILD_SCRIPTS ${CMAKE_BINARY_DIR}/cmake/deploy-qt-linux.cmake)
set(CPACK_IFW_ROOT "~/Qt/Tools/QtInstallerFramework/4.6")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-linux")
set(CPACK_IFW_TARGET_DIRECTORY "@HomeDir@/${COMPONENT_NAME_MAIN}")
elseif(${CMAKE_SYSTEM_NAME} MATCHES Windows)
find_program(WINDEPLOYQT windeployqt HINTS ${_qt_bin_dir})
elseif (CMAKE_SYSTEM_NAME MATCHES Windows)
find_program(WINDEPLOYQT windeployqt)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/deploy-qt-windows.cmake.in"
"${CMAKE_BINARY_DIR}/cmake/deploy-qt-windows.cmake" @ONLY)
set(CPACK_PRE_BUILD_SCRIPTS ${CMAKE_BINARY_DIR}/cmake/deploy-qt-windows.cmake)
set(CPACK_IFW_ROOT "C:/Qt/Tools/QtInstallerFramework/4.6")
set(CPACK_IFW_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/resources/gpt4all.ico")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-win64")
set(CPACK_IFW_TARGET_DIRECTORY "@HomeDir@\\${COMPONENT_NAME_MAIN}")
elseif(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
find_program(MACDEPLOYQT macdeployqt HINTS ${_qt_bin_dir})
elseif (CMAKE_SYSTEM_NAME MATCHES Darwin)
find_program(MACDEPLOYQT macdeployqt)
configure_file("${CMAKE_CURRENT_SOURCE_DIR}/cmake/deploy-qt-mac.cmake.in"
"${CMAKE_BINARY_DIR}/cmake/deploy-qt-mac.cmake" @ONLY)
set(CPACK_PRE_BUILD_SCRIPTS ${CMAKE_BINARY_DIR}/cmake/deploy-qt-mac.cmake)
set(CPACK_IFW_ROOT "~/Qt/Tools/QtInstallerFramework/4.6")
set(CPACK_IFW_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/resources/gpt4all.icns")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-darwin")
set(CPACK_IFW_TARGET_DIRECTORY "@ApplicationsDir@/${COMPONENT_NAME_MAIN}")
set(CPACK_BUNDLE_NAME ${COMPONENT_NAME_MAIN})
set(CPACK_BUNDLE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/resources/gpt4all.icns")
endif()
set(CPACK_PACKAGE_INSTALL_DIRECTORY ${COMPONENT_NAME_MAIN})
set(CPACK_PACKAGE_VERSION_MAJOR ${PROJECT_VERSION_MAJOR})
set(CPACK_PACKAGE_VERSION_MINOR ${PROJECT_VERSION_MINOR})
SET(CPACK_PACKAGE_VERSION_PATCH ${PROJECT_VERSION_PATCH})
set(CPACK_PACKAGE_HOMEPAGE_URL "https://www.nomic.ai/gpt4all")
set(CPACK_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_RESOURCE_FILE_LICENSE ${CMAKE_CURRENT_SOURCE_DIR}/LICENSE)
set(CPACK_RESOURCE_FILE_README ${CMAKE_CURRENT_SOURCE_DIR}/README.md)
set(CPACK_PACKAGE_EXECUTABLES "GPT4All")
set(CPACK_CREATE_DESKTOP_LINKS "GPT4All")
set(CPACK_IFW_PACKAGE_NAME "GPT4All")
set(CPACK_IFW_PACKAGE_TITLE "GPT4All Installer")
set(CPACK_IFW_PACKAGE_PUBLISHER "Nomic, Inc.")
set(CPACK_IFW_PRODUCT_URL "https://www.nomic.ai/gpt4all")
set(CPACK_IFW_PACKAGE_WIZARD_STYLE "Aero")
set(CPACK_IFW_PACKAGE_LOGO "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_IFW_PACKAGE_WINDOW_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-32.png")
set(CPACK_IFW_PACKAGE_WIZARD_SHOW_PAGE_LIST OFF)
set(CPACK_IFW_PACKAGE_CONTROL_SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/cmake/installer_control.qs")
include(InstallRequiredSystemLibraries)
include(CPack)
include(CPackIFW)
@ -494,20 +588,35 @@ endif()
cpack_ifw_configure_component(${COMPONENT_NAME_MAIN} ESSENTIAL FORCED_INSTALLATION)
cpack_ifw_configure_component(${COMPONENT_NAME_MAIN} VERSION ${APP_VERSION})
cpack_ifw_configure_component(${COMPONENT_NAME_MAIN} LICENSES "MIT LICENSE" ${CPACK_RESOURCE_FILE_LICENSE})
cpack_ifw_configure_component(${COMPONENT_NAME_MAIN} SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/cmake/installer_component.qs")
cpack_ifw_configure_component(${COMPONENT_NAME_MAIN} SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/cmake/installer_gpt4all_component.qs")
cpack_ifw_configure_component(${COMPONENT_NAME_MAIN} REPLACES "gpt4all-chat") #Was used in very earliest prototypes
if (APPLE AND GPT4ALL_SIGN_INSTALL)
if (GPT4ALL_OFFLINE_INSTALLER)
cpack_add_component(maintenancetool HIDDEN)
else()
cpack_add_component(maintenancetool HIDDEN DOWNLOADED)
endif()
cpack_ifw_configure_component(maintenancetool ESSENTIAL FORCED_INSTALLATION)
cpack_ifw_configure_component(maintenancetool VERSION ${APP_VERSION})
cpack_ifw_configure_component(maintenancetool SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/cmake/installer_maintenancetool_component.qs")
endif()
if (GPT4ALL_LOCALHOST)
cpack_ifw_add_repository("GPT4AllRepository" URL "http://localhost/repository")
elseif(GPT4ALL_OFFLINE_INSTALLER)
add_compile_definitions(GPT4ALL_OFFLINE_INSTALLER)
elseif (GPT4ALL_OFFLINE_INSTALLER)
add_compile_definitions(GPT4ALL_OFFLINE_INSTALLER)
else()
if(${CMAKE_SYSTEM_NAME} MATCHES Linux)
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/linux/repository")
elseif(${CMAKE_SYSTEM_NAME} MATCHES Windows)
#To sign the target on windows have to create a batch script add use it as a custom target and then use CPACK_IFW_EXTRA_TARGETS to set this extra target
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/windows/repository")
elseif(${CMAKE_SYSTEM_NAME} MATCHES Darwin)
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/mac/repository")
endif()
if (CMAKE_SYSTEM_NAME MATCHES Linux)
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/linux/repository")
elseif (CMAKE_SYSTEM_NAME MATCHES Windows)
# To sign the target on Windows, create a batch script, add it as a custom target, and then use CPACK_IFW_EXTRA_TARGETS to set this extra target
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|AMD64|amd64)$")
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/windows/repository")
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/windows_arm/repository")
endif()
elseif (CMAKE_SYSTEM_NAME MATCHES Darwin)
cpack_ifw_add_repository("GPT4AllRepository" URL "https://gpt4all.io/installer_repos/mac/repository")
endif()
endif()


@ -1,45 +0,0 @@
# gpt4all-chat
Cross platform Qt based GUI for GPT4All versions with GPT-J as the base
model. NOTE: The model seen in the screenshot is actually a preview of a
new training run for GPT4All based on GPT-J. The GPT4All project is busy
at work getting ready to release this model including installers for all
three major OS's. In the meantime, you can try this UI out with the original
GPT-J model by following build instructions below.
![image](https://user-images.githubusercontent.com/50458173/231464085-da9edff6-a593-410e-8f38-7513f75c8aab.png)
## Install
One click installers for macOS, Linux, and Windows at https://www.nomic.ai/gpt4all
## Features
* Cross-platform (Linux, Windows, MacOSX)
* The UI is made to look and feel like you've come to expect from a chatty gpt
* Check for updates so you can always stay fresh with latest models
* Easy to install with precompiled binaries available for all three major desktop platforms
* Multi-modal - Ability to load more than one model and switch between them
* Multi-chat - a list of current and past chats and the ability to save/delete/export and switch between
* Supports models that are supported by llama.cpp
* Model downloader in GUI featuring many popular open source models
* Settings dialog to change temp, top_p, min_p, top_k, threads, etc
* Copy your conversation to clipboard
* RAG via LocalDocs feature
* Check for updates to get the very latest GUI
## Building and running
* Follow the visual instructions on the [build_and_run](build_and_run.md) page
## Getting the latest
If you've already checked out the source code and/or built the program make sure when you do a git fetch to get the latest changes and that you also do `git submodule update --init --recursive` to update the submodules. (If you ever run into trouble, deinitializing via `git submodule deinit -f .` and then initializing again via `git submodule update --init --recursive` fixes most issues)
## Contributing
* Pull requests welcome. See the feature wish list for ideas :)
## License
The source code of this chat interface is currently under a MIT license.


@ -1,6 +0,0 @@
#ifndef CONFIG_H
#define CONFIG_H
#define APP_VERSION "@APP_VERSION@"
#endif // CONFIG_H


@ -0,0 +1,2 @@
set(OUTPUT_DIR "@CMAKE_BINARY_DIR@")
file(COPY ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/config DESTINATION ${OUTPUT_DIR}/cpack-config)


@ -0,0 +1,50 @@
set(COMPONENT_NAME_MAIN "gpt4all")
set(CPACK_GENERATOR "IFW")
set(CPACK_VERBATIM_VARIABLES YES)
set(CPACK_IFW_VERBOSE ON)
if (CMAKE_SYSTEM_NAME MATCHES Linux)
set(CPACK_IFW_ROOT "~/Qt/Tools/QtInstallerFramework/4.6")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-linux")
set(CPACK_IFW_TARGET_DIRECTORY "@HomeDir@/${COMPONENT_NAME_MAIN}")
elseif (CMAKE_SYSTEM_NAME MATCHES Windows)
set(CPACK_IFW_ROOT "C:/Qt/Tools/QtInstallerFramework/4.6")
set(CPACK_IFW_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/resources/gpt4all.ico")
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|AMD64|amd64)$")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-win64")
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-win64-arm")
else()
message(FATAL_ERROR "Unrecognized processor: ${CMAKE_SYSTEM_PROCESSOR}")
endif()
set(CPACK_IFW_TARGET_DIRECTORY "@HomeDir@\\${COMPONENT_NAME_MAIN}")
elseif (CMAKE_SYSTEM_NAME MATCHES Darwin)
set(CPACK_IFW_ROOT "~/Qt/Tools/QtInstallerFramework/4.6")
set(CPACK_IFW_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/resources/gpt4all.icns")
set(CPACK_PACKAGE_FILE_NAME "${COMPONENT_NAME_MAIN}-installer-darwin")
set(CPACK_IFW_TARGET_DIRECTORY "@ApplicationsDir@/${COMPONENT_NAME_MAIN}")
endif()
set(CPACK_COMPONENTS_ALL ${COMPONENT_NAME_MAIN}) # exclude development components
if (APPLE AND GPT4ALL_SIGN_INSTALL)
list(APPEND CPACK_COMPONENTS_ALL maintenancetool)
endif()
set(CPACK_PACKAGE_INSTALL_DIRECTORY ${COMPONENT_NAME_MAIN})
set(CPACK_PACKAGE_VERSION_MAJOR ${PROJECT_VERSION_MAJOR})
set(CPACK_PACKAGE_VERSION_MINOR ${PROJECT_VERSION_MINOR})
set(CPACK_PACKAGE_VERSION_PATCH ${PROJECT_VERSION_PATCH})
set(CPACK_PACKAGE_HOMEPAGE_URL "https://www.nomic.ai/gpt4all")
set(CPACK_PACKAGE_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_RESOURCE_FILE_LICENSE ${CMAKE_CURRENT_SOURCE_DIR}/LICENSE)
set(CPACK_PACKAGE_EXECUTABLES "GPT4All")
set(CPACK_CREATE_DESKTOP_LINKS "GPT4All")
set(CPACK_IFW_PACKAGE_NAME "GPT4All")
set(CPACK_IFW_PACKAGE_TITLE "GPT4All Installer")
set(CPACK_IFW_PACKAGE_PUBLISHER "Nomic, Inc.")
set(CPACK_IFW_PRODUCT_URL "https://www.nomic.ai/gpt4all")
set(CPACK_IFW_PACKAGE_WIZARD_STYLE "Aero")
set(CPACK_IFW_PACKAGE_LOGO "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png")
set(CPACK_IFW_PACKAGE_WINDOW_ICON "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-32.png")
set(CPACK_IFW_PACKAGE_WIZARD_SHOW_PAGE_LIST OFF)
set(CPACK_IFW_PACKAGE_CONTROL_SCRIPT "${CMAKE_CURRENT_SOURCE_DIR}/cmake/installer_control.qs")


@ -1,20 +1,26 @@
set(MACDEPLOYQT "@MACDEPLOYQT@")
set(COMPONENT_NAME_MAIN "@COMPONENT_NAME_MAIN@")
set(CMAKE_CURRENT_SOURCE_DIR "@CMAKE_CURRENT_SOURCE_DIR@")
set(GPT4ALL_SIGN_INSTALL "@GPT4ALL_SIGN_INSTALL@")
set(GPT4ALL_SIGNING_ID "@MAC_SIGNING_IDENTITY@")
if (GPT4ALL_SIGNING_ID)
set(CPACK_CONFIG_DIR "@CMAKE_BINARY_DIR@")
if (GPT4ALL_SIGN_INSTALL)
set(MAC_NOTARIZE -sign-for-notarization=${GPT4ALL_SIGNING_ID})
endif()
execute_process(COMMAND ${MACDEPLOYQT} ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/bin/gpt4all.app -qmldir=${CMAKE_CURRENT_SOURCE_DIR} -verbose=2 ${MAC_NOTARIZE})
file(GLOB MYLLAMALIBS ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/lib/libllama*)
file(GLOB MYLLMODELLIBS ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/lib/libllmodel.*)
file(COPY ${MYLLAMALIBS}
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/bin/gpt4all.app/Contents/Frameworks)
file(COPY ${MYLLMODELLIBS}
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data/bin/gpt4all.app/Contents/Frameworks)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-32.png"
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/icons/gpt4all-48.png"
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data)
file(COPY "${CMAKE_CURRENT_SOURCE_DIR}/resources/gpt4all.icns"
DESTINATION ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/${COMPONENT_NAME_MAIN}/data)
if (GPT4ALL_SIGN_INSTALL)
# Create signed MaintenanceTool
set(MT_DATA_DIR ${CPACK_TEMPORARY_INSTALL_DIRECTORY}/packages/maintenancetool/data)
file(MAKE_DIRECTORY ${MT_DATA_DIR})
execute_process(
COMMAND binarycreator --config ${CPACK_CONFIG_DIR}/cpack-config/config/config.xml --create-maintenancetool --sign ${GPT4ALL_SIGNING_ID}
WORKING_DIRECTORY ${MT_DATA_DIR}
)
endif()


@ -0,0 +1,12 @@
if(NOT DEFINED URL OR NOT DEFINED OUTPUT_PATH OR NOT DEFINED EXPECTED_MD5)
message(FATAL_ERROR "Usage: cmake -DURL=<url> -DOUTPUT_PATH=<path> -DEXPECTED_MD5=<md5> -P download_model.cmake")
endif()
message(STATUS "Downloading model from ${URL} to ${OUTPUT_PATH} ...")
file(DOWNLOAD "${URL}" "${OUTPUT_PATH}" EXPECTED_MD5 "${EXPECTED_MD5}" STATUS status)
list(GET status 0 status_code)
if(NOT status_code EQUAL 0)
message(FATAL_ERROR "Failed to download model: ${status}")
endif()


@ -0,0 +1,19 @@
function Component()
{
component.ifwVersion = installer.value("FrameworkVersion");
installer.installationStarted.connect(this, Component.prototype.onInstallationStarted);
}
Component.prototype.onInstallationStarted = function()
{
if (component.updateRequested() || component.installationRequested()) {
if (installer.value("os") == "win") {
component.installerbaseBinaryPath = "@TargetDir@/installerbase.exe";
} else if (installer.value("os") == "x11") {
component.installerbaseBinaryPath = "@TargetDir@/installerbase";
} else if (installer.value("os") == "mac") {
component.installerbaseBinaryPath = "@TargetDir@/MaintenanceTool.app";
}
installer.setInstallerBaseBinary(component.installerbaseBinaryPath);
}
}


@ -1,9 +1,12 @@
include(FetchContent)
set(BUILD_SHARED_LIBS OFF)
set(FMT_INSTALL OFF)
add_subdirectory(fmt)
set(QAPPLICATION_CLASS QGuiApplication)
set(QAPPLICATION_CLASS QApplication)
add_subdirectory(SingleApplication)
set(DUCKX_INSTALL OFF)
@ -11,3 +14,38 @@ add_subdirectory(DuckX)
set(QT_VERSION_MAJOR 6)
add_subdirectory(QXlsx/QXlsx)
if (NOT GPT4ALL_USING_QTPDF)
# If we do not use QtPDF, we need to get PDFium.
set(GPT4ALL_PDFIUM_TAG "chromium/6996")
if (CMAKE_SYSTEM_NAME MATCHES Linux)
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-linux-x64.tgz"
URL_HASH "SHA256=68b381b87efed539f2e33ae1e280304c9a42643a878cc296c1d66a93b0cb4335"
)
elseif (CMAKE_SYSTEM_NAME MATCHES Windows)
if (CMAKE_SYSTEM_PROCESSOR MATCHES "^(x86_64|AMD64|amd64)$")
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-win-x64.tgz"
URL_HASH "SHA256=83e714c302ceacccf403826d5cb57ea39b77f393d83b8d5781283012774a9378"
)
elseif (CMAKE_SYSTEM_PROCESSOR MATCHES "^(aarch64|AARCH64|arm64|ARM64)$")
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-win-arm64.tgz"
URL_HASH "SHA256=78e77e871453a4915cbf66fb381b951c9932f88a747c6b2b33c9f27ec2371445"
)
endif()
elseif (CMAKE_SYSTEM_NAME MATCHES Darwin)
FetchContent_Declare(
pdfium
URL "https://github.com/bblanchon/pdfium-binaries/releases/download/${GPT4ALL_PDFIUM_TAG}/pdfium-mac-univ.tgz"
URL_HASH "SHA256=e7577f3242ff9c1df50025f9615673a43601a201bc51ee4792975f98920793a2"
)
endif()
FetchContent_MakeAvailable(pdfium)
find_package(PDFium REQUIRED PATHS "${pdfium_SOURCE_DIR}" NO_DEFAULT_PATH)
endif()

@ -0,0 +1 @@
Subproject commit 606b6347edf0758c531abb6c36743e09a4c48a84

@ -0,0 +1 @@
Subproject commit e97bb2442cd6ab3d5bb5f5a3e8a1f7d6081d613b

@ -1 +1 @@
Subproject commit 1f0618a86f9dbb7386237241cee96cc425dd7b55
Subproject commit 9e59f1036657303b29eaf709945f339e403e5f2f


@ -0,0 +1,11 @@
-r test-requirements.txt
# dev tools
flake8~=7.1
mypy~=1.12
pytype>=2024.10.11
wemake-python-styleguide~=0.19.2
# type stubs and other optional modules
types-requests~=2.32
urllib3[socks]


@ -1,3 +1 @@
<svg width="32" height="32" viewBox="0 0 32 32" fill="none" xmlns="http://www.w3.org/2000/svg">
<path d="M28.4138 9.17125L22.8288 3.585C22.643 3.39924 22.4225 3.25188 22.1799 3.15134C21.9372 3.0508 21.6771 2.99905 21.4144 2.99905C21.1517 2.99905 20.8916 3.0508 20.6489 3.15134C20.4062 3.25188 20.1857 3.39924 20 3.585L4.58626 19C4.39973 19.185 4.25185 19.4053 4.15121 19.648C4.05057 19.8907 3.99917 20.151 4.00001 20.4138V26C4.00001 26.5304 4.21072 27.0391 4.5858 27.4142C4.96087 27.7893 5.46958 28 6.00001 28H11.5863C11.849 28.0008 12.1093 27.9494 12.352 27.8488C12.5947 27.7482 12.815 27.6003 13 27.4138L28.4138 12C28.5995 11.8143 28.7469 11.5938 28.8474 11.3511C28.948 11.1084 28.9997 10.8483 28.9997 10.5856C28.9997 10.3229 28.948 10.0628 28.8474 9.82015C28.7469 9.57747 28.5995 9.35698 28.4138 9.17125ZM6.41376 20L17 9.41375L19.0863 11.5L8.50001 22.085L6.41376 20ZM6.00001 22.4138L9.58626 26H6.00001V22.4138ZM12 25.5863L9.91376 23.5L20.5 12.9138L22.5863 15L12 25.5863ZM24 13.5863L18.4138 8L21.4138 5L27 10.585L24 13.5863Z" fill="black"/>
</svg>
<svg xmlns="http://www.w3.org/2000/svg" width="32" height="32" fill="#000000" viewBox="0 0 256 256"><path d="M227.31,73.37,182.63,28.68a16,16,0,0,0-22.63,0L36.69,152A15.86,15.86,0,0,0,32,163.31V208a16,16,0,0,0,16,16H92.69A15.86,15.86,0,0,0,104,219.31L227.31,96a16,16,0,0,0,0-22.63ZM92.69,208H48V163.31l88-88L180.69,120ZM192,108.68,147.31,64l24-24L216,84.68Z"></path></svg>


@ -0,0 +1 @@
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 256 256"><rect width="256" height="256" fill="none"/><line x1="152" y1="96" x2="208" y2="96" fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16"/><line x1="152" y1="160" x2="208" y2="160" fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16"/><path d="M64,72V40a8,8,0,0,1,8-8H200a8,8,0,0,1,8,8V216a8,8,0,0,1-8,8H72a8,8,0,0,1-8-8V184" fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16"/><polyline points="64 104 76 152 92 120 108 152 120 104" fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16"/><rect x="32" y="72" width="120" height="112" rx="8" fill="none" stroke="currentColor" stroke-linecap="round" stroke-linejoin="round" stroke-width="16"/></svg>


@ -0,0 +1,3 @@
<?xml version="1.0" encoding="utf-8" ?>
<!DOCTYPE svg PUBLIC "-//W3C//DTD SVG 1.1//EN" "http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd">
<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 26.3 26.3"><defs><style>.cls-1{fill:#f05237;}.cls-2{fill:#fff;}</style></defs><g id="Layer_2" data-name="Layer 2"><g id="Content"><circle class="cls-1" cx="13.15" cy="13.15" r="13.15"/><path class="cls-2" d="M13.17,6.88a4.43,4.43,0,0,0,0,8.85h1.45V14.07H13.17a2.77,2.77,0,1,1,2.77-2.76v4.07a2.74,2.74,0,0,1-4.67,2L10.1,18.51a4.37,4.37,0,0,0,3.07,1.29h.06a4.42,4.42,0,0,0,4.36-4.4V11.2a4.43,4.43,0,0,0-4.42-4.32"/></g></g></svg>


@ -0,0 +1 @@
<svg viewBox="0 0 512 512" xmlns="http://www.w3.org/2000/svg" fill-rule="evenodd" clip-rule="evenodd" stroke-linejoin="round" stroke-miterlimit="2"><path d="M189.08 303.228H94.587l.044-94.446h94.497l-.048 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M283.528 397.674h-94.493l.044-94.446h94.496l-.047 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M283.575 303.228H189.08l.046-94.446h94.496l-.047 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M378.07 303.228h-94.495l.044-94.446h94.498l-.047 94.446zM189.128 208.779H94.633l.044-94.448h94.498l-.047 94.448zM378.115 208.779h-94.494l.045-94.448h94.496l-.047 94.448zM94.587 303.227H.093l.044-96.017h94.496l-.046 96.017z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M94.633 208.779H.138l.046-94.448H94.68l-.047 94.448z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M94.68 115.902H.185L.23 19.885h94.498l-.047 96.017zM472.657 114.331h-94.495l.044-94.446h94.497l-.046 94.446zM94.54 399.244H.046l.044-97.588h94.497l-.047 97.588z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M94.495 492.123H0l.044-94.446H94.54l-.045 94.446zM472.563 303.228H378.07l.044-94.446h94.496l-.047 94.446zM472.61 208.779h-94.495l.044-94.448h94.498l-.047 94.448z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M472.517 397.674h-94.494l.044-94.446h94.497l-.047 94.446z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M472.47 492.121h-94.493l.044-96.017h94.496l-.047 96.017z" fill="#1c1c1b" fill-rule="nonzero"/><path d="M228.375 303.22h-96.061l.046-94.446h96.067l-.052 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M322.827 397.666h-94.495l.044-96.018h94.498l-.047 96.018z" fill="#ff4900" fill-rule="nonzero"/><path d="M324.444 303.22h-97.636l.046-94.446h97.638l-.048 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M418.938 303.22h-96.064l.045-94.446h96.066l-.047 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M228.423 208.77H132.36l.045-94.445h96.066l-.05 94.446zM418.985 208.77H322.92l.044-94.445h96.069l-.048 94.446z" fill="#ffa300" fill-rule="nonzero"/><path d="M133.883 304.79H39.392l.044-96.017h94.496l-.049 96.017z" fill="#ff7000" fill-rule="nonzero"/><path d="M133.929 208.77H39.437l.044-95.445h94.496l-.048 95.445z" fill="#ffa300" fill-rule="nonzero"/><path d="M133.976 114.325H39.484l.044-94.448h94.497l-.05 94.448zM511.954 115.325h-94.493l.044-95.448h94.497l-.048 95.448z" fill="#ffce00" fill-rule="nonzero"/><path d="M133.836 399.667H39.345l.044-96.447h94.496l-.049 96.447z" fill="#ff4900" fill-rule="nonzero"/><path d="M133.79 492.117H39.3l.044-94.448h94.496l-.049 94.448z" fill="#ff0107" fill-rule="nonzero"/><path d="M511.862 303.22h-94.495l.046-94.446h94.496l-.047 94.446z" fill="#ff7000" fill-rule="nonzero"/><path d="M511.907 208.77h-94.493l.044-94.445h94.496l-.047 94.446z" fill="#ffa300" fill-rule="nonzero"/><path d="M511.815 398.666h-94.493l.044-95.447h94.496l-.047 95.447z" fill="#ff4900" fill-rule="nonzero"/><path d="M511.77 492.117h-94.496l.046-94.448h94.496l-.047 94.448z" fill="#ff0107" fill-rule="nonzero"/></svg>


@ -0,0 +1,2 @@
<?xml version="1.0" encoding="utf-8"?><!-- Uploaded to: SVG Repo, www.svgrepo.com, Generator: SVG Repo Mixer Tools -->
<svg fill="#000000" width="800px" height="800px" viewBox="0 0 24 24" role="img" xmlns="http://www.w3.org/2000/svg"><title>OpenAI icon</title><path d="M22.2819 9.8211a5.9847 5.9847 0 0 0-.5157-4.9108 6.0462 6.0462 0 0 0-6.5098-2.9A6.0651 6.0651 0 0 0 4.9807 4.1818a5.9847 5.9847 0 0 0-3.9977 2.9 6.0462 6.0462 0 0 0 .7427 7.0966 5.98 5.98 0 0 0 .511 4.9107 6.051 6.051 0 0 0 6.5146 2.9001A5.9847 5.9847 0 0 0 13.2599 24a6.0557 6.0557 0 0 0 5.7718-4.2058 5.9894 5.9894 0 0 0 3.9977-2.9001 6.0557 6.0557 0 0 0-.7475-7.0729zm-9.022 12.6081a4.4755 4.4755 0 0 1-2.8764-1.0408l.1419-.0804 4.7783-2.7582a.7948.7948 0 0 0 .3927-.6813v-6.7369l2.02 1.1686a.071.071 0 0 1 .038.052v5.5826a4.504 4.504 0 0 1-4.4945 4.4944zm-9.6607-4.1254a4.4708 4.4708 0 0 1-.5346-3.0137l.142.0852 4.783 2.7582a.7712.7712 0 0 0 .7806 0l5.8428-3.3685v2.3324a.0804.0804 0 0 1-.0332.0615L9.74 19.9502a4.4992 4.4992 0 0 1-6.1408-1.6464zM2.3408 7.8956a4.485 4.485 0 0 1 2.3655-1.9728V11.6a.7664.7664 0 0 0 .3879.6765l5.8144 3.3543-2.0201 1.1685a.0757.0757 0 0 1-.071 0l-4.8303-2.7865A4.504 4.504 0 0 1 2.3408 7.872zm16.5963 3.8558L13.1038 8.364 15.1192 7.2a.0757.0757 0 0 1 .071 0l4.8303 2.7913a4.4944 4.4944 0 0 1-.6765 8.1042v-5.6772a.79.79 0 0 0-.407-.667zm2.0107-3.0231l-.142-.0852-4.7735-2.7818a.7759.7759 0 0 0-.7854 0L9.409 9.2297V6.8974a.0662.0662 0 0 1 .0284-.0615l4.8303-2.7866a4.4992 4.4992 0 0 1 6.6802 4.66zM8.3065 12.863l-2.02-1.1638a.0804.0804 0 0 1-.038-.0567V6.0742a4.4992 4.4992 0 0 1 7.3757-3.4537l-.142.0805L8.704 5.459a.7948.7948 0 0 0-.3927.6813zm1.0976-2.3654l2.602-1.4998 2.6069 1.4998v2.9994l-2.5974 1.4997-2.6067-1.4997Z"/></svg>


@ -12,6 +12,7 @@ import network
import gpt4all
import localdocs
import mysettings
import Qt.labs.platform
Window {
id: window
@ -22,6 +23,43 @@ Window {
visible: true
title: qsTr("GPT4All v%1").arg(Qt.application.version)
SystemTrayIcon {
id: systemTrayIcon
property bool shouldClose: false
visible: MySettings.systemTray && !shouldClose
icon.source: "qrc:/gpt4all/icons/gpt4all.svg"
function restore() {
LLM.showDockIcon();
window.show();
window.raise();
window.requestActivate();
}
onActivated: function(reason) {
if (reason === SystemTrayIcon.Context && Qt.platform.os !== "osx")
menu.open();
else if (reason === SystemTrayIcon.Trigger)
restore();
}
menu: Menu {
MenuItem {
text: qsTr("Restore")
onTriggered: systemTrayIcon.restore()
}
MenuItem {
text: qsTr("Quit")
onTriggered: {
systemTrayIcon.restore();
systemTrayIcon.shouldClose = true;
window.shouldClose = true;
savingPopup.open();
ChatListModel.saveChatsForQuit();
}
}
}
}
Settings {
property alias x: window.x
property alias y: window.y
@ -156,7 +194,7 @@ Window {
font.pixelSize: theme.fontSizeLarge
}
property bool hasSaved: false
property bool shouldClose: false
PopupDialog {
id: savingPopup
@ -180,20 +218,29 @@ Window {
}
onClosing: function(close) {
if (window.hasSaved)
if (systemTrayIcon.visible) {
LLM.hideDockIcon();
window.visible = false;
ChatListModel.saveChats();
close.accepted = false;
return;
}
if (window.shouldClose)
return;
window.shouldClose = true;
savingPopup.open();
ChatListModel.saveChats();
close.accepted = false
ChatListModel.saveChatsForQuit();
close.accepted = false;
}
Connections {
target: ChatListModel
function onSaveChatsFinished() {
window.hasSaved = true;
savingPopup.close();
window.close()
if (window.shouldClose)
window.close()
}
}
@ -627,9 +674,6 @@ Window {
function show() {
stackLayout.currentIndex = 2;
// FIXME This expanded code should be removed and we should be changing the names of
// the classes here in ModelList for the proxy/filter models
ModelList.downloadableModels.expanded = true
}
function isShown() {


@ -1,20 +1,15 @@
## Latest News
<br/>
GPT4All v3.10.0 was released on February 24th. Changes include:
**UPDATE:** We are aware of problems with LocalDocs in v3.4.0 including hangs and missing words in references. We are working on a fix.
---
<br/>
GPT4All v3.4.0 was released on October 8th. Changes include:
* **Attached Files:** You can now attach a small Microsoft Excel spreadsheet (.xlsx) to a chat message and ask the model about it.
* **LocalDocs Accuracy:** The LocalDocs algorithm has been enhanced to find more accurate references for some queries.
* **Word Document Support:** LocalDocs now supports Microsoft Word (.docx) documents natively.
* **IMPORTANT NOTE:** If .docx files are not found, make sure Settings > LocalDocs > Allowed File Extensions includes "docx".
* **Forgetful Model Fixes:** Issues with the "Redo last chat response" button, and with continuing chats from previous sessions, have been fixed.
* **Chat Saving Improvements:** On exit, GPT4All will no longer save chats that are not new or modified. As a bonus, downgrading without losing access to all chats will be possible in the future, should the need arise.
* **UI Fixes:** The model list no longer scrolls to the top when you start downloading a model.
* **New Models:** LLama 3.2 Instruct 3B and 1B models now available in model list.
* **Remote Models:**
* The Add Model page now has a dedicated tab for remote model providers.
* Groq, OpenAI, and Mistral remote models are now easier to configure.
* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.
* **New Model:** The non-MoE Granite model is now supported.
* **Translation Updates:**
* The Italian translation has been updated.
* The Simplified Chinese translation has been significantly improved.
* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.
* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.
* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.

View File

@ -1,6 +1,22 @@
[
{
"order": "a",
"md5sum": "a54c08a7b90e4029a8c2ab5b5dc936aa",
"name": "Reasoner v1",
"filename": "qwen2.5-coder-7b-instruct-q4_0.gguf",
"filesize": "4431390720",
"requires": "3.6.0",
"ramrequired": "8",
"parameters": "8 billion",
"quant": "q4_0",
"type": "qwen2",
"description": "<ul><li>Based on <a href=\"https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct\">Qwen2.5-Coder 7B</a></li><li>Uses built-in javascript code interpreter</li><li>Use for complex reasoning tasks that can be aided by computation analysis</li><li>License: <a href=\"https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct/blob/main/LICENSE\">Apache License Version 2.0</a></li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct-GGUF/resolve/main/qwen2.5-coder-7b-instruct-q4_0.gguf",
"chatTemplate": "{{- '<|im_start|>system\\n' }}\n{% if toolList|length > 0 %}You have access to the following functions:\n{% for tool in toolList %}\nUse the function '{{tool.function}}' to: '{{tool.description}}'\n{% if tool.parameters|length > 0 %}\nparameters:\n{% for info in tool.parameters %}\n {{info.name}}:\n type: {{info.type}}\n description: {{info.description}}\n required: {{info.required}}\n{% endfor %}\n{% endif %}\n# Tool Instructions\nIf you CHOOSE to call this function ONLY reply with the following format:\n'{{tool.symbolicFormat}}'\nHere is an example. If the user says, '{{tool.examplePrompt}}', then you reply\n'{{tool.exampleCall}}'\nAfter the result you might reply with, '{{tool.exampleReply}}'\n{% endfor %}\nYou MUST include both the start and end tags when you use a function.\n\nYou are a helpful AI assistant who uses the functions to break down, analyze, perform, and verify complex reasoning tasks. You SHOULD try to verify your answers using the functions where possible.\n{% endif %}\n{{- '<|im_end|>\\n' }}\n{% for message in messages %}\n{{'<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{% endfor %}\n{% if add_generation_prompt %}\n{{ '<|im_start|>assistant\\n' }}\n{% endif %}\n",
"systemPrompt": ""
},
{
"order": "aa",
"md5sum": "c87ad09e1e4c8f9c35a5fcef52b6f1c9",
"name": "Llama 3 8B Instruct",
"filename": "Meta-Llama-3-8B-Instruct.Q4_0.gguf",
@ -13,7 +29,68 @@
"description": "<ul><li>Fast responses</li><li>Chat based model</li><li>Accepts system prompts in Llama 3 format</li><li>Trained by Meta</li><li>License: <a href=\"https://llama.meta.com/llama3/license/\">Meta Llama 3 Community License</a></li></ul>",
"url": "https://gpt4all.io/models/gguf/Meta-Llama-3-8B-Instruct.Q4_0.gguf",
"promptTemplate": "<|start_header_id|>user<|end_header_id|>\n\n%1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n%2<|eot_id|>",
"systemPrompt": ""
"systemPrompt": "",
"chatTemplate": "{%- set loop_messages = messages %}\n{%- for message in loop_messages %}\n {%- set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' %}\n {{- content }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}"
},
{
"order": "aa1",
"sha256sum": "5cd4ee65211770f1d99b4f6f4951780b9ef40e29314bd6542bb5bd0ad0bc29d1",
"name": "DeepSeek-R1-Distill-Qwen-7B",
"filename": "DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf",
"filesize": "4444121056",
"requires": "3.8.0",
"ramrequired": "8",
"parameters": "7 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Qwen2.5-Math-7B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-7B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-7B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "aa2",
"sha256sum": "906b3382f2680f4ce845459b4a122e904002b075238080307586bcffcde49eef",
"name": "DeepSeek-R1-Distill-Qwen-14B",
"filename": "DeepSeek-R1-Distill-Qwen-14B-Q4_0.gguf",
"filesize": "8544267680",
"requires": "3.8.0",
"ramrequired": "16",
"parameters": "14 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Qwen2.5-14B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-14B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-14B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "aa3",
"sha256sum": "0eb93e436ac8beec18aceb958c120d282cb2cf5451b23185e7be268fe9d375cc",
"name": "DeepSeek-R1-Distill-Llama-8B",
"filename": "DeepSeek-R1-Distill-Llama-8B-Q4_0.gguf",
"filesize": "4675894112",
"requires": "3.8.0",
"ramrequired": "8",
"parameters": "8 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Llama-3.1-8B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Llama-8B-GGUF/resolve/main/DeepSeek-R1-Distill-Llama-8B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "aa4",
"sha256sum": "b3af887d0a015b39fab2395e4faf682c1a81a6a3fd09a43f0d4292f7d94bf4d0",
"name": "DeepSeek-R1-Distill-Qwen-1.5B",
"filename": "DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf",
"filesize": "1068807776",
"requires": "3.8.0",
"ramrequired": "3",
"parameters": "1.5 billion",
"quant": "q4_0",
"type": "deepseek",
"description": "<p>The official Qwen2.5-Math-1.5B distillation of DeepSeek-R1.</p><ul><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li><li>#reasoning</li></ul>",
"url": "https://huggingface.co/bartowski/DeepSeek-R1-Distill-Qwen-1.5B-GGUF/resolve/main/DeepSeek-R1-Distill-Qwen-1.5B-Q4_0.gguf",
"chatTemplate": "{%- if not add_generation_prompt is defined %}\n {%- set add_generation_prompt = false %}\n{%- endif %}\n{%- if messages[0]['role'] == 'system' %}\n {{- messages[0]['content'] }}\n{%- endif %}\n{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<User>' + message['content'] }}\n {%- endif %}\n {%- if message['role'] == 'assistant' %}\n {%- set content = message['content'] | regex_replace('^[\\\\s\\\\S]*</think>', '') %}\n {{- '<Assistant>' + content + '<end▁of▁sentence>' }}\n {%- endif %}\n{%- endfor -%}\n{%- if add_generation_prompt %}\n {{- '<Assistant>' }}\n{%- endif %}"
},
{
"order": "b",
@ -29,7 +106,8 @@
"description": "<ul><li>Fast responses</li><li>Instruct model</li><li>Multilingual dialogue use</li><li>Agentic system capable</li><li>Trained by Meta</li><li>License: <a href=\"https://llama.meta.com/llama3_2/license/\">Meta Llama 3.2 Community License</a></li></ul>",
"url": "https://huggingface.co/bartowski/Llama-3.2-3B-Instruct-GGUF/resolve/main/Llama-3.2-3B-Instruct-Q4_0.gguf",
"promptTemplate": "<|start_header_id|>user<|end_header_id|>\n\n%1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n%2",
"systemPrompt": "<|start_header_id|>system<|end_header_id|>\nCutting Knowledge Date: December 2023\n\nYou are a helpful assistant.<|eot_id|>"
"systemPrompt": "<|start_header_id|>system<|end_header_id|>\nCutting Knowledge Date: December 2023\n\nYou are a helpful assistant.<|eot_id|>",
"chatTemplate": "{{- bos_token }}\n{%- set date_string = strftime_now('%d %b %Y') %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content'] | trim %}\n {%- set loop_start = 1 %}\n{%- else %}\n {%- set system_message = '' %}\n {%- set loop_start = 0 %}\n{%- endif %}\n\n{#- System message #}\n{{- '<|start_header_id|>system<|end_header_id|>\\n\\n' }}\n{{- 'Cutting Knowledge Date: December 2023\\n' }}\n{{- 'Today Date: ' + date_string + '\\n\\n' }}\n{{- system_message }}\n{{- '<|eot_id|>' }}\n\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n' + message['content'] | trim + '<|eot_id|>' }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}"
},
{
"order": "c",
@ -45,7 +123,8 @@
"description": "<ul><li>Fast responses</li><li>Instruct model</li><li>Multilingual dialogue use</li><li>Agentic system capable</li><li>Trained by Meta</li><li>License: <a href=\"https://llama.meta.com/llama3_2/license/\">Meta Llama 3.2 Community License</a></li></ul>",
"url": "https://huggingface.co/bartowski/Llama-3.2-1B-Instruct-GGUF/resolve/main/Llama-3.2-1B-Instruct-Q4_0.gguf",
"promptTemplate": "<|start_header_id|>user<|end_header_id|>\n\n%1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n%2",
"systemPrompt": "<|start_header_id|>system<|end_header_id|>\nCutting Knowledge Date: December 2023\n\nYou are a helpful assistant.<|eot_id|>"
"systemPrompt": "<|start_header_id|>system<|end_header_id|>\nCutting Knowledge Date: December 2023\n\nYou are a helpful assistant.<|eot_id|>",
"chatTemplate": "{{- bos_token }}\n{%- set date_string = strftime_now('%d %b %Y') %}\n\n{#- This block extracts the system message, so we can slot it into the right place. #}\n{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content'] | trim %}\n {%- set loop_start = 1 %}\n{%- else %}\n {%- set system_message = '' %}\n {%- set loop_start = 0 %}\n{%- endif %}\n\n{#- System message #}\n{{- '<|start_header_id|>system<|end_header_id|>\\n\\n' }}\n{{- 'Cutting Knowledge Date: December 2023\\n' }}\n{{- 'Today Date: ' + date_string + '\\n\\n' }}\n{{- system_message }}\n{{- '<|eot_id|>' }}\n\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {{- '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n' + message['content'] | trim + '<|eot_id|>' }}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}\n{%- endif %}"
},
{
"order": "d",
@ -61,7 +140,8 @@
"description": "<strong>Good overall fast chat model</strong><br><ul><li>Fast responses</li><li>Chat based model</li><li>Accepts system prompts in ChatML format</li><li>Trained by Mistral AI<li>Finetuned by Nous Research on the OpenHermes-2.5 dataset<li>Licensed for commercial use</ul>",
"url": "https://huggingface.co/NousResearch/Nous-Hermes-2-Mistral-7B-DPO-GGUF/resolve/main/Nous-Hermes-2-Mistral-7B-DPO.Q4_0.gguf",
"promptTemplate": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>\n",
"systemPrompt": ""
"systemPrompt": "",
"chatTemplate": "{%- for message in messages %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
},
{
"order": "e",
@ -77,7 +157,8 @@
"systemPrompt": "",
"description": "<strong>Strong overall fast instruction following model</strong><br><ul><li>Fast responses</li><li>Trained by Mistral AI<li>Uncensored</li><li>Licensed for commercial use</li></ul>",
"url": "https://gpt4all.io/models/gguf/mistral-7b-instruct-v0.1.Q4_0.gguf",
"promptTemplate": "[INST] %1 [/INST]"
"promptTemplate": "[INST] %1 [/INST]",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set system_message = messages[0]['content'] %}\n {%- set loop_start = 1 %}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if (message['role'] == 'user') != ((loop.index0 - loop_start) % 2 == 0) %}\n {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}\n {%- endif %}\n {%- if message['role'] == 'user' %}\n {%- if loop.index0 == loop_start and loop_start == 1 %}\n {{- ' [INST] ' + system_message + '\\n\\n' + message['content'] + ' [/INST]' }}\n {%- else %}\n {{- ' [INST] ' + message['content'] + ' [/INST]' }}\n {%- endif %}\n {%- elif message['role'] == 'assistant' %}\n {{- ' ' + message['content'] + eos_token }}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}"
},
{
"order": "f",
@ -93,7 +174,8 @@
"description": "<ul><li><strong>For advanced users only. Not recommended for use on Windows or Linux without selecting CUDA due to speed issues.</strong></li><li>Fast responses</li><li>Chat based model</li><li>Large context size of 128k</li><li>Accepts agentic system prompts in Llama 3.1 format</li><li>Trained by Meta</li><li>License: <a href=\"https://llama.meta.com/llama3_1/license/\">Meta Llama 3.1 Community License</a></li></ul>",
"url": "https://huggingface.co/GPT4All-Community/Meta-Llama-3.1-8B-Instruct-128k/resolve/main/Meta-Llama-3.1-8B-Instruct-128k-Q4_0.gguf",
"promptTemplate": "<|start_header_id|>user<|end_header_id|>\n\n%1<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n%2",
"systemPrompt": "<|start_header_id|>system<|end_header_id|>\nCutting Knowledge Date: December 2023\n\nYou are a helpful assistant.<|eot_id|>"
"systemPrompt": "<|start_header_id|>system<|end_header_id|>\nCutting Knowledge Date: December 2023\n\nYou are a helpful assistant.<|eot_id|>",
"chatTemplate": "{%- set loop_messages = messages %}\n{%- for message in loop_messages %}\n {%- set content = '<|start_header_id|>' + message['role'] + '<|end_header_id|>\\n\\n'+ message['content'] | trim + '<|eot_id|>' %}\n {%- if loop.index0 == 0 %}\n {%- set content = bos_token + content %}\n {%- endif %}\n {{- content }}\n{%- endfor %}\n{{- '<|start_header_id|>assistant<|end_header_id|>\\n\\n' }}"
},
{
"order": "g",
@ -109,7 +191,8 @@
"description": "<strong>Strong overall fast chat model</strong><br><ul><li>Fast responses</li><li>Chat based model</li><li>Trained by Mistral AI<li>Finetuned on OpenOrca dataset curated via <a href=\"https://atlas.nomic.ai/\">Nomic Atlas</a><li>Licensed for commercial use</ul>",
"url": "https://gpt4all.io/models/gguf/mistral-7b-openorca.gguf2.Q4_0.gguf",
"promptTemplate": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>\n",
"systemPrompt": "<|im_start|>system\nYou are MistralOrca, a large language model trained by Alignment Lab AI.\n<|im_end|>\n"
"systemPrompt": "<|im_start|>system\nYou are MistralOrca, a large language model trained by Alignment Lab AI.\n<|im_end|>\n",
"chatTemplate": "{%- for message in messages %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
},
{
"order": "h",
@ -125,7 +208,8 @@
"systemPrompt": "",
"description": "<strong>Very fast model with good quality</strong><br><ul><li>Fastest responses</li><li>Instruction based</li><li>Trained by TII<li>Finetuned by Nomic AI<li>Licensed for commercial use</ul>",
"url": "https://gpt4all.io/models/gguf/gpt4all-falcon-newbpe-q4_0.gguf",
"promptTemplate": "### Instruction:\n%1\n\n### Response:\n"
"promptTemplate": "### Instruction:\n%1\n\n### Response:\n",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set loop_start = 1 %}\n {{- messages[0]['content'] + '\\n\\n' }}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if message['role'] == 'user' %}\n {{- '### User: ' + message['content'] + '\\n\\n' }}\n {%- elif message['role'] == 'assistant' %}\n {{- '### Assistant: ' + message['content'] + '\\n\\n' }}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '### Assistant:' }}\n{%- endif %}"
},
{
"order": "i",
@ -140,7 +224,8 @@
"type": "LLaMA2",
"systemPrompt": "",
"description": "<ul><li>Instruction based<li>Trained by Microsoft<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/orca-2-7b.Q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/orca-2-7b.Q4_0.gguf",
"chatTemplate": "{%- for message in messages %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
},
{
"order": "j",
@ -155,7 +240,8 @@
"type": "LLaMA2",
"systemPrompt": "",
"description": "<ul><li>Instruction based<li>Trained by Microsoft<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/orca-2-13b.Q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/orca-2-13b.Q4_0.gguf",
"chatTemplate": "{%- for message in messages %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
},
{
"order": "k",
@ -170,7 +256,9 @@
"type": "LLaMA2",
"systemPrompt": "",
"description": "<strong>Strong overall larger model</strong><br><ul><li>Instruction based<li>Gives very long responses<li>Finetuned with only 1k of high-quality data<li>Trained by Microsoft and Peking University<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/wizardlm-13b-v1.2.Q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/wizardlm-13b-v1.2.Q4_0.gguf",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set loop_start = 1 %}\n {{- messages[0]['content'] + ' ' }}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in loop_messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if message['role'] == 'user' %}\n {{- 'USER: ' + message['content'] }}\n {%- elif message['role'] == 'assistant' %}\n {{- 'ASSISTANT: ' + message['content'] }}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n {%- if (loop.index0 - loop_start) % 2 == 0 %}\n {{- ' ' }}\n {%- else %}\n {{- eos_token }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- 'ASSISTANT:' }}\n{%- endif %}",
"systemMessage": "A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions."
},
{
"order": "l",
@ -186,7 +274,9 @@
"description": "<strong>Ghost 7B v0.9.1</strong> fast, powerful and smooth for Vietnamese and English languages.",
"url": "https://huggingface.co/lamhieu/ghost-7b-v0.9.1-gguf/resolve/main/ghost-7b-v0.9.1-Q4_0.gguf",
"promptTemplate": "<|user|>\n%1</s>\n<|assistant|>\n%2</s>\n",
"systemPrompt": "<|system|>\nYou are Ghost created by Lam Hieu. You are a helpful and knowledgeable assistant. You like to help and always give honest information, in its original language. In communication, you are always respectful, equal and promote positive behavior.\n</s>"
"systemPrompt": "<|system|>\nYou are Ghost created by Lam Hieu. You are a helpful and knowledgeable assistant. You like to help and always give honest information, in its original language. In communication, you are always respectful, equal and promote positive behavior.\n</s>",
"chatTemplate": "{%- for message in messages %}\n {%- if message['role'] == 'user' %}\n {{- '<|user|>\\n' + message['content'] + eos_token }}\n {%- elif message['role'] == 'system' %}\n {{- '<|system|>\\n' + message['content'] + eos_token }}\n {%- elif message['role'] == 'assistant' %}\n {{- '<|assistant|>\\n' + message['content'] + eos_token }}\n {%- endif %}\n {%- if loop.last and add_generation_prompt %}\n {{- '<|assistant|>' }}\n {%- endif %}\n{%- endfor %}",
"systemMessage": "You are Ghost created by Lam Hieu. You are a helpful and knowledgeable assistant. You like to help and always give honest information, in its original language. In communication, you are always respectful, equal and promote positive behavior."
},
{
"order": "m",
@ -202,7 +292,8 @@
"systemPrompt": "",
"description": "<strong>Extremely good model</strong><br><ul><li>Instruction based<li>Gives long responses<li>Curated with 300,000 uncensored instructions<li>Trained by Nous Research<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/nous-hermes-llama2-13b.Q4_0.gguf",
"promptTemplate": "### Instruction:\n%1\n\n### Response:\n"
"promptTemplate": "### Instruction:\n%1\n\n### Response:\n",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set loop_start = 1 %}\n {{- messages[0]['content'] + '\\n\\n' }}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if message['role'] == 'user' %}\n {{- '### Instruction:\\n' + message['content'] + '\\n\\n' }}\n {%- elif message['role'] == 'assistant' %}\n {{- '### Response:\\n' + message['content'] + '\\n\\n' }}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '### Instruction:\\n' }}\n{%- endif %}"
},
{
"order": "n",
@ -217,7 +308,9 @@
"type": "LLaMA",
"systemPrompt": "",
"description": "<strong>Very good overall model</strong><br><ul><li>Instruction based<li>Based on the same dataset as Groovy<li>Slower than Groovy, with higher quality responses<li>Trained by Nomic AI<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/gpt4all-13b-snoozy-q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/gpt4all-13b-snoozy-q4_0.gguf",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set loop_start = 1 %}\n {{- messages[0]['content'] + '\\n\\n' }}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if message['role'] == 'user' %}\n {{- '### Instruction:\\n' + message['content'] + '\\n\\n' }}\n {%- elif message['role'] == 'assistant' %}\n {{- '### Response:\\n' + message['content'] + '\\n\\n' }}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '### Response:\\n' }}\n{%- endif %}",
"systemMessage": "Below is an instruction that describes a task. Write a response that appropriately completes the request."
},
{
"order": "o",
@ -234,7 +327,8 @@
"description": "<strong>Good model with novel architecture</strong><br><ul><li>Fast responses<li>Chat based<li>Trained by Mosaic ML<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/mpt-7b-chat-newbpe-q4_0.gguf",
"promptTemplate": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>\n",
"systemPrompt": "<|im_start|>system\n- You are a helpful assistant chatbot trained by MosaicML.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>\n"
"systemPrompt": "<|im_start|>system\n- You are a helpful assistant chatbot trained by MosaicML.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>\n",
"chatTemplate": "{%- for message in messages %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
},
{
"order": "p",
@ -250,7 +344,8 @@
"description": "<strong>Good model with novel architecture</strong><br><ul><li>Fast responses<li>Chat based<li>Trained by Mosaic ML<li>Cannot be used commercially</ul>",
"url": "https://gpt4all.io/models/gguf/mpt-7b-chat.gguf4.Q4_0.gguf",
"promptTemplate": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>\n",
"systemPrompt": "<|im_start|>system\n- You are a helpful assistant chatbot trained by MosaicML.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>\n"
"systemPrompt": "<|im_start|>system\n- You are a helpful assistant chatbot trained by MosaicML.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>\n",
"chatTemplate": "{%- for message in messages %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
},
{
"order": "q",
@ -266,7 +361,8 @@
"description": "<ul><li>Very fast responses</li><li>Chat based model</li><li>Accepts system prompts in Phi-3 format</li><li>Trained by Microsoft</li><li>License: <a href=\"https://opensource.org/license/mit\">MIT</a></li><li>No restrictions on commercial use</li></ul>",
"url": "https://gpt4all.io/models/gguf/Phi-3-mini-4k-instruct.Q4_0.gguf",
"promptTemplate": "<|user|>\n%1<|end|>\n<|assistant|>\n%2<|end|>\n",
"systemPrompt": ""
"systemPrompt": "",
"chatTemplate": "{{- bos_token }}\n{%- for message in messages %}\n {{- '<|' + message['role'] + '|>\\n' + message['content'] + '<|end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|assistant|>\\n' }}\n{%- else %}\n {{- eos_token }}\n{%- endif %}"
},
{
"order": "r",
@ -282,7 +378,8 @@
"description": "<strong>Small version of new model with novel dataset</strong><br><ul><li>Very fast responses</li><li>Instruction based</li><li>Explain tuned datasets</li><li>Orca Research Paper dataset construction approaches</li><li>Cannot be used commercially</li></ul>",
"url": "https://gpt4all.io/models/gguf/orca-mini-3b-gguf2-q4_0.gguf",
"promptTemplate": "### User:\n%1\n\n### Response:\n",
"systemPrompt": "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n"
"systemPrompt": "### System:\nYou are an AI assistant that follows instruction extremely well. Help as much as you can.\n\n",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set loop_start = 1 %}\n {{- '### System:\\n' + messages[0]['content'] + '\\n\\n' }}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if message['role'] == 'user' %}\n {{- '### User:\\n' + message['content'] + '\\n\\n' }}\n {%- elif message['role'] == 'assistant' %}\n {{- '### Response:\\n' + message['content'] + '\\n\\n' }}\n {%- else %}\n {{- raise_exception('Only user and assistant roles are supported, with the exception of an initial optional system message!') }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '### Response:\\n' }}\n{%- endif %}"
},
{
"order": "s",
@ -299,7 +396,8 @@
"systemPrompt": "",
"promptTemplate": "%1",
"description": "<strong>Trained on subset of the Stack</strong><br><ul><li>Code completion based<li>Licensed for commercial use<li>WARNING: Not available for chat GUI</ul>",
"url": "https://gpt4all.io/models/gguf/replit-code-v1_5-3b-newbpe-q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/replit-code-v1_5-3b-newbpe-q4_0.gguf",
"chatTemplate": null
},
{
"order": "t",
@ -316,7 +414,8 @@
"systemPrompt": "",
"promptTemplate": "%1",
"description": "<strong>Trained on subset of the Stack</strong><br><ul><li>Code completion based<li>WARNING: Not available for chat GUI</ul>",
"url": "https://gpt4all.io/models/gguf/starcoder-newbpe-q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/starcoder-newbpe-q4_0.gguf",
"chatTemplate": null
},
{
"order": "u",
@ -333,7 +432,8 @@
"systemPrompt": "",
"promptTemplate": "%1",
"description": "<strong>Trained on collection of Python and TypeScript</strong><br><ul><li>Code completion based<li>WARNING: Not available for chat GUI</li>",
"url": "https://gpt4all.io/models/gguf/rift-coder-v0-7b-q4_0.gguf"
"url": "https://gpt4all.io/models/gguf/rift-coder-v0-7b-q4_0.gguf",
"chatTemplate": null
},
{
"order": "v",
@ -351,7 +451,8 @@
"embeddingModel": true,
"systemPrompt": "",
"description": "<strong>LocalDocs text embeddings model</strong><br><ul><li>For use with LocalDocs feature<li>Used for retrieval augmented generation (RAG)",
"url": "https://gpt4all.io/models/gguf/all-MiniLM-L6-v2-f16.gguf"
"url": "https://gpt4all.io/models/gguf/all-MiniLM-L6-v2-f16.gguf",
"chatTemplate": null
},
{
"order": "w",
@ -367,7 +468,8 @@
"type": "Bert",
"embeddingModel": true,
"description": "<strong>LocalDocs text embeddings model</strong><br><ul><li>For use with LocalDocs feature<li>Used for retrieval augmented generation (RAG)",
"url": "https://gpt4all.io/models/gguf/all-MiniLM-L6-v2.gguf2.f16.gguf"
"url": "https://gpt4all.io/models/gguf/all-MiniLM-L6-v2.gguf2.f16.gguf",
"chatTemplate": null
},
{
"order": "x",
@ -383,7 +485,9 @@
"description": "<strong>Mistral-based model for German-language applications</strong><br><ul><li>Fast responses</li><li>Chat based model</li><li>Trained by ellamind<li>Finetuned on German instruction and chat data</a><li>Licensed for commercial use</ul>",
"url": "https://huggingface.co/TheBloke/em_german_mistral_v01-GGUF/resolve/main/em_german_mistral_v01.Q4_0.gguf",
"promptTemplate": "USER: %1 ASSISTANT: ",
"systemPrompt": "Du bist ein hilfreicher Assistent. "
"systemPrompt": "Du bist ein hilfreicher Assistent. ",
"chatTemplate": "{%- if messages[0]['role'] == 'system' %}\n {%- set loop_start = 1 %}\n {{- messages[0]['content'] }}\n{%- else %}\n {%- set loop_start = 0 %}\n{%- endif %}\n{%- for message in messages %}\n {%- if loop.index0 >= loop_start %}\n {%- if not loop.first %}\n {{- ' ' }}\n {%- endif %}\n {%- if message['role'] == 'user' %}\n {{- 'USER: ' + message['content'] }}\n {%- elif message['role'] == 'assistant' %}\n {{- 'ASSISTANT: ' + message['content'] }}\n {%- else %}\n {{- raise_exception('After the optional system message, conversation roles must be either user or assistant.') }}\n {%- endif %}\n {%- endif %}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {%- if messages %}\n {{- ' ' }}\n {%- endif %}\n {{- 'ASSISTANT:' }}\n{%- endif %}",
"systemMessage": "Du bist ein hilfreicher Assistent."
},
{
"order": "y",
@ -400,7 +504,8 @@
"embeddingModel": true,
"systemPrompt": "",
"description": "nomic-embed-text-v1",
"url": "https://gpt4all.io/models/gguf/nomic-embed-text-v1.f16.gguf"
"url": "https://gpt4all.io/models/gguf/nomic-embed-text-v1.f16.gguf",
"chatTemplate": null
},
{
"order": "z",
@ -417,7 +522,8 @@
"embeddingModel": true,
"systemPrompt": "",
"description": "nomic-embed-text-v1.5",
"url": "https://gpt4all.io/models/gguf/nomic-embed-text-v1.5.f16.gguf"
"url": "https://gpt4all.io/models/gguf/nomic-embed-text-v1.5.f16.gguf",
"chatTemplate": null
},
{
"order": "zzz",
@ -426,13 +532,14 @@
"filename": "qwen2-1_5b-instruct-q4_0.gguf",
"filesize": "937532800",
"requires": "3.0",
"ramrequired": "4",
"ramrequired": "3",
"parameters": "1.5 billion",
"quant": "q4_0",
"type": "qwen2",
"description": "<ul><li>Very fast responses</li><li>Instruction based model</li><li>Usage of LocalDocs (RAG): Highly recommended</li><li>Supports context length of up to 32768</li><li>Trained and finetuned by Qwen (Alibaba Cloud)</li><li>License: <a href=\"https://www.apache.org/licenses/LICENSE-2.0.html/\">Apache 2.0</a></li></ul>",
"url": "https://huggingface.co/Qwen/Qwen2-1.5B-Instruct-GGUF/resolve/main/qwen2-1_5b-instruct-q4_0.gguf",
"promptTemplate": "<|im_start|>user\n%1<|im_end|>\n<|im_start|>assistant\n%2<|im_end|>",
"systemPrompt": "<|im_start|>system\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>\n"
"systemPrompt": "<|im_start|>system\nBelow is an instruction that describes a task. Write a response that appropriately completes the request.<|im_end|>\n",
"chatTemplate": "{%- for message in messages %}\n {%- if loop.first and messages[0]['role'] != 'system' %}\n {{- '<|im_start|>system\\nYou are a helpful assistant.<|im_end|>\\n' }}\n {%- endif %}\n {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n{%- endfor %}\n{%- if add_generation_prompt %}\n {{- '<|im_start|>assistant\\n' }}\n{%- endif %}"
}
]
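The "chatTemplate" fields added above are Jinja-style templates; GPT4All itself renders them with a built-in C++ template engine (see the v3.8.0 notes further down), so the snippet below is only a rough illustration using Python's jinja2 package. The strftime_now and raise_exception helpers and the regex_replace filter are supplied here as assumptions, since the templates reference them but stock jinja2 does not define them.

import datetime
import re
import jinja2

def raise_exception(message):
    # The templates call this when they see an invalid role sequence.
    raise jinja2.TemplateError(message)

env = jinja2.Environment()
env.globals["strftime_now"] = lambda fmt: datetime.datetime.now().strftime(fmt)
env.globals["raise_exception"] = raise_exception
env.filters["regex_replace"] = lambda value, pattern, repl: re.sub(pattern, repl, value)

# A ChatML-style template like the one shared by several entries above
# (Nous Hermes 2, Mistral OpenOrca, MPT Chat, Orca 2, ...).
chatml = (
    "{%- for message in messages %}\n"
    "  {{- '<|im_start|>' + message['role'] + '\\n' + message['content'] + '<|im_end|>\\n' }}\n"
    "{%- endfor %}\n"
    "{%- if add_generation_prompt %}\n"
    "  {{- '<|im_start|>assistant\\n' }}\n"
    "{%- endif %}"
)

prompt = env.from_string(chatml).render(
    messages=[{"role": "user", "content": "Hello!"}],
    add_generation_prompt=True,
)
print(prompt)
# <|im_start|>user
# Hello!<|im_end|>
# <|im_start|>assistant

Rendering a template against sample messages like this is a convenient way to sanity-check a custom chat template before pasting it into a model's settings.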

View File

@ -216,7 +216,67 @@
},
{
"version": "3.4.0",
"notes": "* **Attached Files:** You can now attach a small Microsoft Excel spreadsheet (.xlsx) to a chat message and ask the model about it.\n* **LocalDocs Accuracy:** The LocalDocs algorithm has been enhanced to find more accurate references for some queries.\n* **Word Document Support:** LocalDocs now supports Microsoft Word (.docx) documents natively.\n * **IMPORTANT NOTE:** If .docx files are not found, make sure Settings > LocalDocs > Allowed File Extensions includes \"docx\".\n* **Forgetful Model Fixes:** Issues with the \"Redo last chat response\" button, and with continuing chats from previous sessions, have been fixed.\n* **Chat Saving Improvements:** On exit, GPT4All will no longer save chats that are not new or modified. As a bonus, downgrading without losing access to all chats will be possible in the future, should the need arise.\n* **UI Fixes:** The model list no longer scrolls to the top when you start downloading a model.\n* **New Models:** LLama 3.2 Instruct 3B and 1B models now available in model list.",
"notes": "* **Attached Files:** You can now attach a small Microsoft Excel spreadsheet (.xlsx) to a chat message and ask the model about it.\n* **LocalDocs Accuracy:** The LocalDocs algorithm has been enhanced to find more accurate references for some queries.\n* **Word Document Support:** LocalDocs now supports Microsoft Word (.docx) documents natively.\n * **IMPORTANT NOTE:** If .docx files are not found, make sure Settings > LocalDocs > Allowed File Extensions includes \"docx\".\n* **Forgetful Model Fixes:** Issues with the \"Redo last chat response\" button, and with continuing chats from previous sessions, have been fixed.\n* **Chat Saving Improvements:** On exit, GPT4All will no longer save chats that are not new or modified. As a bonus, downgrading without losing access to all chats will be possible in the future, should the need arise.\n* **UI Fixes:** The model list no longer scrolls to the top when you start downloading a model.\n* **New Models:** LLama 3.2 Instruct 3B and 1B models now available in model list.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* Andriy Mulyar (Nomic AI)\n* Ikko Eltociear Ashimine (`@eltociear`)\n* Victor Emanuel (`@SINAPSA-IC`)\n* Shiranui (`@supersonictw`)"
},
{
"version": "3.4.1",
"notes": "* **LocalDocs Fixes:** Several issues with LocalDocs in v3.4.0 have been fixed, including missing words and very slow indexing.\n* **Syntax Highlighting:** Go code is now highlighted with the correct colors.\n* **Cache Fixes:** The model list cache is now stored with a version number, and in a more appropriate directory.\n* **Translation Updates:** The Italian translation has been improved.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* John Parent (Kitware)\n* Riccardo Giovanetti (`@Harvester62`)"
},
{
"version": "3.4.2",
"notes": "* **LocalDocs Fixes:** Several issues with LocalDocs, some of which were introduced in v3.4.0, have been fixed.\n * Fixed the possible use of references from unselected collections.\n * Fixed unnecessary reindexing of files with uppercase extensions.\n * Fixed hybrid search failure due to inconsistent database state.\n * Fully fixed the blank Embeddings Device selection in LocalDocs settings.\n * Fixed LocalDocs indexing of large PDFs making very slow progress or even stalling.\n",
"contributors": "* Adam Treat (Nomic AI)\n* Jared Van Bortel (Nomic AI)"
},
{
"version": "3.5.0",
"notes": "* **Message Editing:**\n * You can now edit any message you've sent by clicking the pencil icon below it.\n * You can now redo earlier responses in the conversation.\n* **Templates:** Chat templates have been completely overhauled! They now use Jinja-style syntax. You may notice warnings or errors in the UI. Read the linked docs, and if you have any questions, please ask on the Discord.\n* **File Attachments:** Markdown and plain text files are now supported as file attachments.\n* **System Tray:** There is now an option in Application Settings to allow GPT4All to minimize to the system tray instead of closing.\n* **Local API Server:**\n * The API server now supports system messages from the client and no longer uses the system message in settings.\n * You can now send messages to the API server in any order supported by the model instead of just user/assistant pairs.\n* **Translations:** The Italian and Romanian translations have been improved.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* Benjamin Gallois (`@bgallois`)\n* Riccardo Giovanetti (`@Harvester62`)\n* Victor Emanuel (`@SINAPSA-IC`)"
},
{
"version": "3.5.1",
"notes": "* **Chat template fixes:** Llama 3.2 models, Nous Hermes 2 Mistral, Mistral OpenOrca, Qwen 2 and remote models\n* **Bugfix:** Fix the default model button so it works again after 3.5.0\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)"
},
{
"version": "3.5.2",
"notes": "* **Model Search:** There are now separate tabs for official and third-party models.\n* **Local Server Fixes:** Several mistakes in v3.5's changes to the API server have been corrected.\n* **Cloned Model Fixes:** The chat template and system message of cloned models now manage their defaults correctly.\n* **Translation Improvements:** The Romanian and Italian translations have been updated.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* Riccardo Giovanetti (`@Harvester62`)\n* Victor Emanuel (`@SINAPSA-IC`)"
},
{
"version": "3.5.3",
"notes": "* **LocalDocs Fix:** A serious issue causing LocalDocs to not work properly in v3.5.2 has been fixed.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)"
},
{
"version": "3.6.0",
"notes": "* **Reasoner v1:**\n * Built-in javascript code interpreter tool.\n * Custom curated model that utilizes the code interpreter to break down, analyze, perform, and verify complex reasoning tasks.\n* **Templates:** Automatically substitute chat templates that are not compatible with Jinja2Cpp in GGUFs.\n* **Fixes:**\n * Remote model template to allow for XML in messages.\n * Jinja2Cpp bug that broke system message detection in chat templates.\n * LocalDocs sources displaying in unconsolidated form after v3.5.0.\n",
"contributors": "* Adam Treat (Nomic AI)\n* Jared Van Bortel (Nomic AI)"
},
{
"version": "3.6.1",
"notes": "* **Fixes:**\n * The stop generation button no longer working in v3.6.0.\n * The copy entire conversation button no longer working in v3.6.0.\n",
"contributors": "* Adam Treat (Nomic AI)"
},
{
"version": "3.7.0",
"notes": "* **Windows ARM Support:** GPT4All now supports the Windows ARM platform, ensuring compatibility with devices powered by Qualcomm Snapdragon and Microsoft SQ-series processors.\n * **NOTE:** Support for GPU and/or NPU acceleration is not available at this time. Only the CPU will be used to run LLMs.\n * **NOTE:** You must install the new *Windows ARM* version of GPT4All from the website. The standard *Windows* version will not work due to emulation limitations.\n* **Fixed Updating on macOS:** The maintenance tool no longer crashes when attempting to update or uninstall GPT4All on Sequoia.\n * **NOTE:** If you have installed the version from the GitHub releases as a workaround for this issue, you can safely uninstall it and switch back to the version from the website.\n* **Fixed Chat Saving on macOS:** Chats now save as expected when the application is quit with Command-Q.\n* **Code Interpreter Improvements:**\n * The behavior when the code takes too long to execute and times out has been improved.\n * console.log now accepts multiple arguments for better compatibility with native JavaScript.\n* **Chat Templating Improvements:**\n * Two crashes and one compatibility issue have been fixed in the chat template parser.\n * The default chat template for EM German Mistral has been fixed.\n * Automatic replacements have been added for five new models as we continue to improve compatibility with common chat templates.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* Riccardo Giovanetti (`@Harvester62`)"
},
{
"version": "3.8.0",
"notes": "* **Native DeepSeek-R1-Distill Support:** GPT4All now has robust support for the DeepSeek-R1 family of distillations.\n * Several model variants are now available on the downloads page.\n * Reasoning (wrapped in \"think\" tags) is displayed similarly to the Reasoner model.\n * The DeepSeek-R1 Qwen pretokenizer is now supported, resolving the loading failure in previous versions.\n * The model is now configured with a GPT4All-compatible prompt template by default.\n* **Chat Templating Overhaul:** The template parser has been *completely* replaced with one that has much better compatibility with common models.\n* **Code Interpreter Fixes:**\n * An issue preventing the code interpreter from logging a single string in v3.7.0 has been fixed.\n * The UI no longer freezes while the code interpreter is running a computation.\n* **Local Server Fixes:**\n * An issue preventing the server from using LocalDocs after the first request since v3.5.0 has been fixed.\n * System messages are now correctly hidden from the message history.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)"
},
{
"version": "3.9.0",
"notes": "* **LocalDocs Fix:** LocalDocs no longer shows an error on later messages with reasoning models.\n* **DeepSeek Fix:** DeepSeek-R1 reasoning (in 'think' tags) no longer appears in chat names and follow-up questions.\n* **Windows ARM Improvements:**\n * Graphical artifacts on some SoCs have been fixed.\n * A crash when adding a collection of PDFs to LocalDocs has been fixed.\n* **Template Parser Fixes:** Chat templates containing an unclosed comment no longer freeze GPT4All.\n* **New Models:** OLMoE and Granite MoE models are now supported.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)"
},
{
"version": "3.10.0",
"notes": "* **Remote Models:**\n * The Add Model page now has a dedicated tab for remote model providers.\n * Groq, OpenAI, and Mistral remote models are now easier to configure.\n* **CUDA Compatibility:** GPUs with CUDA compute capability 5.0 such as the GTX 750 are now supported by the CUDA backend.\n* **New Model:** The non-MoE Granite model is now supported.\n* **Translation Updates:**\n * The Italian translation has been updated.\n * The Simplified Chinese translation has been significantly improved.\n* **Better Chat Templates:** The default chat templates for OLMoE 7B 0924/0125 and Granite 3.1 3B/8B have been improved.\n* **Whitespace Fixes:** DeepSeek-R1-based models now have better whitespace behavior in their output.\n* **Crash Fixes:** Several issues that could potentially cause GPT4All to crash have been fixed.\n",
"contributors": "* Jared Van Bortel (Nomic AI)\n* Adam Treat (Nomic AI)\n* ThiloteE (`@ThiloteE`)\n* Lil Bob (`@Junior2Ran`)\n* Riccardo Giovanetti (`@Harvester62`)"
}
]
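Several of the notes above (v3.5.0 and v3.8.0) concern the local API server and its handling of system messages. As a minimal client-side sketch, not an official example: the server speaks the OpenAI chat-completions format, so once it is enabled in Application Settings a request like the following should work. Port 4891 is the documented default at the time of writing, and the model name is a placeholder for whichever model you actually have installed.

import requests

resp = requests.post(
    "http://localhost:4891/v1/chat/completions",
    json={
        "model": "Llama 3 8B Instruct",  # placeholder; use a model you have installed
        "messages": [
            {"role": "system", "content": "You are a terse assistant."},
            {"role": "user", "content": "Say hello."},
        ],
        "max_tokens": 64,
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])

Per the v3.5.0 notes, the system message sent by the client is used instead of the system message configured in the application's settings.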

View File

@ -0,0 +1,29 @@
[tool.pytest.ini_options]
addopts = ['--import-mode=importlib']
[tool.mypy]
files = 'tests/python'
pretty = true
strict = true
warn_unused_ignores = false
[tool.pytype]
inputs = ['tests/python']
jobs = 'auto'
bind_decorated_methods = true
none_is_not_bool = true
overriding_renamed_parameter_count_checks = true
strict_none_binding = true
precise_return = true
# protocols:
# - https://github.com/google/pytype/issues/1423
# - https://github.com/google/pytype/issues/1424
strict_import = true
strict_parameter_checks = true
strict_primitive_comparisons = true
# strict_undefined_checks: too many false positives
[tool.isort]
src_paths = ['tests/python']
line_length = 120
combine_as_imports = true
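This configuration points pytest, mypy, pytype, and isort at tests/python with fairly strict settings. Purely as a hypothetical illustration of what it expects (the file below is not part of the repository), a test module placed under tests/python would need full type annotations to satisfy strict mypy and would be collected by pytest in importlib mode:

# tests/python/test_example.py  (hypothetical)
def add(a: int, b: int) -> int:
    return a + b

def test_add() -> None:
    assert add(2, 3) == 5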

View File

@ -0,0 +1,483 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel
import Qt5Compat.GraphicalEffects
import llm
import chatlistmodel
import download
import modellist
import network
import gpt4all
import mysettings
import localdocs
ColumnLayout {
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop
spacing: 5
Label {
Layout.topMargin: 0
Layout.bottomMargin: 25
Layout.rightMargin: 150 * theme.fontScale
Layout.alignment: Qt.AlignTop
Layout.fillWidth: true
verticalAlignment: Text.AlignTop
text: qsTr("These models have been specifically configured for use in GPT4All. The first few models on the " +
"list are known to work the best, but you should only attempt to use models that will fit in your " +
"available memory.")
font.pixelSize: theme.fontSizeLarger
color: theme.textColor
wrapMode: Text.WordWrap
}
Label {
visible: !ModelList.gpt4AllDownloadableModels.count && !ModelList.asyncModelRequestOngoing
Layout.fillWidth: true
Layout.fillHeight: true
horizontalAlignment: Qt.AlignHCenter
verticalAlignment: Qt.AlignVCenter
text: qsTr("Network error: could not retrieve %1").arg("http://gpt4all.io/models/models3.json")
font.pixelSize: theme.fontSizeLarge
color: theme.mutedTextColor
}
MyBusyIndicator {
visible: !ModelList.gpt4AllDownloadableModels.count && ModelList.asyncModelRequestOngoing
running: ModelList.asyncModelRequestOngoing
Accessible.role: Accessible.Animation
Layout.alignment: Qt.AlignCenter
Accessible.name: qsTr("Busy indicator")
Accessible.description: qsTr("Displayed when the models request is ongoing")
}
RowLayout {
ButtonGroup {
id: buttonGroup
exclusive: true
}
MyButton {
text: qsTr("All")
checked: true
borderWidth: 0
backgroundColor: checked ? theme.lightButtonBackground : "transparent"
backgroundColorHovered: theme.lighterButtonBackgroundHovered
backgroundRadius: 5
padding: 15
topPadding: 8
bottomPadding: 8
textColor: theme.lighterButtonForeground
fontPixelSize: theme.fontSizeLarge
fontPixelBold: true
checkable: true
ButtonGroup.group: buttonGroup
onClicked: {
ModelList.gpt4AllDownloadableModels.filter("");
}
}
MyButton {
text: qsTr("Reasoning")
borderWidth: 0
backgroundColor: checked ? theme.lightButtonBackground : "transparent"
backgroundColorHovered: theme.lighterButtonBackgroundHovered
backgroundRadius: 5
padding: 15
topPadding: 8
bottomPadding: 8
textColor: theme.lighterButtonForeground
fontPixelSize: theme.fontSizeLarge
fontPixelBold: true
checkable: true
ButtonGroup.group: buttonGroup
onClicked: {
ModelList.gpt4AllDownloadableModels.filter("#reasoning");
}
}
Layout.bottomMargin: 10
}
ScrollView {
id: scrollView
ScrollBar.vertical.policy: ScrollBar.AsNeeded
Layout.fillWidth: true
Layout.fillHeight: true
clip: true
ListView {
id: modelListView
model: ModelList.gpt4AllDownloadableModels
boundsBehavior: Flickable.StopAtBounds
spacing: 30
delegate: Rectangle {
id: delegateItem
width: modelListView.width
height: childrenRect.height + 60
color: theme.conversationBackground
radius: 10
border.width: 1
border.color: theme.controlBorder
ColumnLayout {
anchors.top: parent.top
anchors.left: parent.left
anchors.right: parent.right
anchors.margins: 30
Text {
Layout.fillWidth: true
Layout.alignment: Qt.AlignLeft
text: name
elide: Text.ElideRight
color: theme.titleTextColor
font.pixelSize: theme.fontSizeLargest
font.bold: true
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Model file")
Accessible.description: qsTr("Model file to be downloaded")
}
Rectangle {
Layout.fillWidth: true
height: 1
color: theme.dividerColor
}
RowLayout {
Layout.topMargin: 10
Layout.fillWidth: true
Text {
id: descriptionText
text: description
font.pixelSize: theme.fontSizeLarge
Layout.fillWidth: true
wrapMode: Text.WordWrap
textFormat: Text.StyledText
color: theme.textColor
linkColor: theme.textColor
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Description")
Accessible.description: qsTr("File description")
onLinkActivated: function(link) { Qt.openUrlExternally(link); }
MouseArea {
anchors.fill: parent
acceptedButtons: Qt.NoButton // pass clicks to parent
cursorShape: parent.hoveredLink ? Qt.PointingHandCursor : Qt.ArrowCursor
}
}
// FIXME Need to overhaul design here which must take into account
// features not present in current figma including:
// * Ability to cancel a current download
// * Ability to resume a download
// * The presentation of an error if encountered
// * Whether to show already installed models
// * Install of remote models with API keys
// * The presentation of the progress bar
Rectangle {
id: actionBox
width: childrenRect.width + 20
color: "transparent"
border.width: 1
border.color: theme.dividerColor
radius: 10
Layout.rightMargin: 20
Layout.bottomMargin: 20
Layout.minimumHeight: childrenRect.height + 20
Layout.alignment: Qt.AlignRight | Qt.AlignTop
ColumnLayout {
spacing: 0
MySettingsButton {
id: downloadButton
text: isDownloading ? qsTr("Cancel") : isIncomplete ? qsTr("Resume") : qsTr("Download")
font.pixelSize: theme.fontSizeLarge
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !installed && !calcHash && downloadError === ""
Accessible.description: qsTr("Stop/restart/start the download")
onClicked: {
if (!isDownloading) {
Download.downloadModel(filename);
} else {
Download.cancelDownload(filename);
}
}
}
MySettingsDestructiveButton {
id: removeButton
text: qsTr("Remove")
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !isDownloading && (installed || isIncomplete)
Accessible.description: qsTr("Remove model from filesystem")
onClicked: {
Download.removeModel(filename);
}
}
ColumnLayout {
spacing: 0
Label {
Layout.topMargin: 20
Layout.leftMargin: 20
visible: downloadError !== ""
textFormat: Text.StyledText
text: qsTr("<strong><font size=\"1\"><a href=\"#error\">Error</a></strong></font>")
color: theme.textColor
font.pixelSize: theme.fontSizeLarge
linkColor: theme.textErrorColor
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Describes an error that occurred when downloading")
onLinkActivated: {
downloadingErrorPopup.text = downloadError;
downloadingErrorPopup.open();
}
}
Label {
visible: LLM.systemTotalRAMInGB() < ramrequired
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.maximumWidth: 300
textFormat: Text.StyledText
text: qsTr("<strong><font size=\"2\">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>").arg(ramrequired).arg(LLM.systemTotalRAMInGBString())
color: theme.textErrorColor
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WordWrap
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Error for incompatible hardware")
onLinkActivated: {
downloadingErrorPopup.text = downloadError;
downloadingErrorPopup.open();
}
}
}
ColumnLayout {
visible: isDownloading && !calcHash
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
spacing: 20
ProgressBar {
id: itemProgressBar
Layout.fillWidth: true
width: 200
value: bytesReceived / bytesTotal
background: Rectangle {
implicitHeight: 45
color: theme.progressBackground
radius: 3
}
contentItem: Item {
implicitHeight: 40
Rectangle {
width: itemProgressBar.visualPosition * parent.width
height: parent.height
radius: 2
color: theme.progressForeground
}
}
Accessible.role: Accessible.ProgressBar
Accessible.name: qsTr("Download progressBar")
Accessible.description: qsTr("Shows the progress made in the download")
}
Label {
id: speedLabel
color: theme.textColor
Layout.alignment: Qt.AlignRight
text: speed
font.pixelSize: theme.fontSizeLarge
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Download speed")
Accessible.description: qsTr("Download speed in bytes/kilobytes/megabytes per second")
}
}
RowLayout {
visible: calcHash
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.maximumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
clip: true
Label {
id: calcHashLabel
color: theme.textColor
text: qsTr("Calculating...")
font.pixelSize: theme.fontSizeLarge
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyBusyIndicator {
id: busyCalcHash
running: calcHash
Accessible.role: Accessible.Animation
Accessible.name: qsTr("Busy indicator")
Accessible.description: qsTr("Displayed when the file hash is being calculated")
}
}
}
}
}
Item {
Layout.minimumWidth: childrenRect.width
Layout.minimumHeight: childrenRect.height
Layout.bottomMargin: 10
RowLayout {
id: paramRow
anchors.centerIn: parent
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("File size")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: filesize
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("RAM required")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: ramrequired >= 0 ? qsTr("%1 GB").arg(ramrequired) : qsTr("?")
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Parameters")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: parameters !== "" ? parameters : qsTr("?")
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Quant")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: quant
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Type")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: type
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
}
Rectangle {
color: "transparent"
anchors.fill: paramRow
border.color: theme.dividerColor
border.width: 1
radius: 10
}
}
Rectangle {
Layout.fillWidth: true
height: 1
color: theme.dividerColor
}
}
}
}
}
}

View File

@@ -0,0 +1,703 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel
import Qt5Compat.GraphicalEffects
import llm
import chatlistmodel
import download
import modellist
import network
import gpt4all
import mysettings
import localdocs
ColumnLayout {
Layout.fillWidth: true
Layout.fillHeight: true
Layout.alignment: Qt.AlignTop
spacing: 5
Label {
Layout.topMargin: 0
Layout.bottomMargin: 25
Layout.rightMargin: 150 * theme.fontScale
Layout.alignment: Qt.AlignTop
Layout.fillWidth: true
verticalAlignment: Text.AlignTop
text: qsTr("Use the search to find and download models from HuggingFace. There is NO GUARANTEE that these " +
"will work. Many will require additional configuration before they can be used.")
font.pixelSize: theme.fontSizeLarger
color: theme.textColor
wrapMode: Text.WordWrap
}
RowLayout {
Layout.fillWidth: true
Layout.fillHeight: true
Layout.alignment: Qt.AlignCenter
Layout.margins: 0
spacing: 10
MyTextField {
id: discoverField
property string textBeingSearched: ""
readOnly: ModelList.discoverInProgress
Layout.alignment: Qt.AlignCenter
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarger
placeholderText: qsTr("Discover and download models by keyword search...")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Text field for discovering and filtering downloadable models")
Connections {
target: ModelList
function onDiscoverInProgressChanged() {
if (ModelList.discoverInProgress) {
discoverField.textBeingSearched = discoverField.text;
discoverField.text = qsTr("Searching \u00B7 %1").arg(discoverField.textBeingSearched);
} else {
discoverField.text = discoverField.textBeingSearched;
discoverField.textBeingSearched = "";
}
}
}
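// Note: the field's background doubles as a progress indicator during discovery; it stays
// indeterminate until ModelList reports a nonzero discoverProgress, then fills with that value.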
background: ProgressBar {
id: discoverProgressBar
indeterminate: ModelList.discoverInProgress && ModelList.discoverProgress === 0.0
value: ModelList.discoverProgress
background: Rectangle {
color: theme.controlBackground
border.color: theme.controlBorder
radius: 10
}
contentItem: Item {
Rectangle {
visible: ModelList.discoverInProgress
anchors.bottom: parent.bottom
width: discoverProgressBar.visualPosition * parent.width
height: 10
radius: 2
color: theme.progressForeground
}
}
}
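// Plain Return/Enter submits the search via sendDiscovery(); Ctrl+Return and Shift+Return are
// left unhandled (event.accepted = false) so their default behavior is preserved.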
Keys.onReturnPressed: (event)=> {
if (event.modifiers & Qt.ControlModifier || event.modifiers & Qt.ShiftModifier)
event.accepted = false;
else {
editingFinished();
sendDiscovery()
}
}
function sendDiscovery() {
ModelList.huggingFaceDownloadableModels.discoverAndFilter(discoverField.text);
}
RowLayout {
spacing: 0
anchors.right: discoverField.right
anchors.verticalCenter: discoverField.verticalCenter
anchors.rightMargin: 15
visible: !ModelList.discoverInProgress
MyMiniButton {
id: clearDiscoverButton
backgroundColor: theme.textColor
backgroundColorHovered: theme.iconBackgroundDark
visible: discoverField.text !== ""
source: "qrc:/gpt4all/icons/close.svg"
onClicked: {
discoverField.text = ""
discoverField.sendDiscovery() // should clear results
}
}
MyMiniButton {
backgroundColor: theme.textColor
backgroundColorHovered: theme.iconBackgroundDark
source: "qrc:/gpt4all/icons/settings.svg"
onClicked: {
discoveryTools.visible = !discoveryTools.visible
}
}
MyMiniButton {
id: sendButton
enabled: !ModelList.discoverInProgress
backgroundColor: theme.textColor
backgroundColorHovered: theme.iconBackgroundDark
source: "qrc:/gpt4all/icons/send_message.svg"
Accessible.name: qsTr("Initiate model discovery and filtering")
Accessible.description: qsTr("Triggers discovery and filtering of models")
onClicked: {
discoverField.sendDiscovery()
}
}
}
}
}
RowLayout {
id: discoveryTools
Layout.fillWidth: true
Layout.alignment: Qt.AlignCenter
Layout.margins: 0
spacing: 20
visible: false
MyComboBox {
id: comboSort
model: ListModel {
ListElement { name: qsTr("Default") }
ListElement { name: qsTr("Likes") }
ListElement { name: qsTr("Downloads") }
ListElement { name: qsTr("Recent") }
}
currentIndex: ModelList.discoverSort
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
rightPadding: 30
color: theme.textColor
text: {
return qsTr("Sort by: %1").arg(comboSort.displayText)
}
font.pixelSize: theme.fontSizeLarger
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
elide: Text.ElideRight
}
onActivated: function (index) {
ModelList.discoverSort = index;
}
}
MyComboBox {
id: comboSortDirection
model: ListModel {
ListElement { name: qsTr("Asc") }
ListElement { name: qsTr("Desc") }
}
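// ModelList.discoverSortDirection uses 1 for ascending and -1 for descending, hence the index mapping below.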
currentIndex: {
if (ModelList.discoverSortDirection === 1)
return 0
else
return 1;
}
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
rightPadding: 30
color: theme.textColor
text: {
return qsTr("Sort dir: %1").arg(comboSortDirection.displayText)
}
font.pixelSize: theme.fontSizeLarger
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
elide: Text.ElideRight
}
onActivated: function (index) {
if (index === 0)
ModelList.discoverSortDirection = 1;
else
ModelList.discoverSortDirection = -1;
}
}
MyComboBox {
id: comboLimit
model: ListModel {
ListElement { name: "5" }
ListElement { name: "10" }
ListElement { name: "20" }
ListElement { name: "50" }
ListElement { name: "100" }
ListElement { name: qsTr("None") }
}
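// A discoverLimit of -1 corresponds to the "None" entry (no limit); the numeric entries map directly to their values.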
currentIndex: {
if (ModelList.discoverLimit === 5)
return 0;
else if (ModelList.discoverLimit === 10)
return 1;
else if (ModelList.discoverLimit === 20)
return 2;
else if (ModelList.discoverLimit === 50)
return 3;
else if (ModelList.discoverLimit === 100)
return 4;
else if (ModelList.discoverLimit === -1)
return 5;
}
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
rightPadding: 30
color: theme.textColor
text: {
return qsTr("Limit: %1").arg(comboLimit.displayText)
}
font.pixelSize: theme.fontSizeLarger
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
elide: Text.ElideRight
}
onActivated: function (index) {
switch (index) {
case 0:
ModelList.discoverLimit = 5; break;
case 1:
ModelList.discoverLimit = 10; break;
case 2:
ModelList.discoverLimit = 20; break;
case 3:
ModelList.discoverLimit = 50; break;
case 4:
ModelList.discoverLimit = 100; break;
case 5:
ModelList.discoverLimit = -1; break;
}
}
}
}
ScrollView {
id: scrollView
ScrollBar.vertical.policy: ScrollBar.AsNeeded
Layout.fillWidth: true
Layout.fillHeight: true
clip: true
ListView {
id: modelListView
model: ModelList.huggingFaceDownloadableModels
boundsBehavior: Flickable.StopAtBounds
spacing: 30
delegate: Rectangle {
id: delegateItem
width: modelListView.width
height: childrenRect.height + 60
color: theme.conversationBackground
radius: 10
border.width: 1
border.color: theme.controlBorder
ColumnLayout {
anchors.top: parent.top
anchors.left: parent.left
anchors.right: parent.right
anchors.margins: 30
Text {
Layout.fillWidth: true
Layout.alignment: Qt.AlignLeft
text: name
elide: Text.ElideRight
color: theme.titleTextColor
font.pixelSize: theme.fontSizeLargest
font.bold: true
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Model file")
Accessible.description: qsTr("Model file to be downloaded")
}
Rectangle {
Layout.fillWidth: true
height: 1
color: theme.dividerColor
}
RowLayout {
Layout.topMargin: 10
Layout.fillWidth: true
Text {
id: descriptionText
text: description
font.pixelSize: theme.fontSizeLarge
Layout.fillWidth: true
wrapMode: Text.WordWrap
textFormat: Text.StyledText
color: theme.textColor
linkColor: theme.textColor
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Description")
Accessible.description: qsTr("File description")
onLinkActivated: function(link) { Qt.openUrlExternally(link); }
MouseArea {
anchors.fill: parent
acceptedButtons: Qt.NoButton // pass clicks to parent
cursorShape: parent.hoveredLink ? Qt.PointingHandCursor : Qt.ArrowCursor
}
}
// FIXME Need to overhaul design here which must take into account
// features not present in current figma including:
// * Ability to cancel a current download
// * Ability to resume a download
// * The presentation of an error if encountered
// * Whether to show already installed models
// * Install of remote models with API keys
// * The presentation of the progress bar
Rectangle {
id: actionBox
width: childrenRect.width + 20
color: "transparent"
border.width: 1
border.color: theme.dividerColor
radius: 10
Layout.rightMargin: 20
Layout.bottomMargin: 20
Layout.minimumHeight: childrenRect.height + 20
Layout.alignment: Qt.AlignRight | Qt.AlignTop
ColumnLayout {
spacing: 0
MySettingsButton {
id: downloadButton
text: isDownloading ? qsTr("Cancel") : isIncomplete ? qsTr("Resume") : qsTr("Download")
font.pixelSize: theme.fontSizeLarge
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !isOnline && !installed && !calcHash && downloadError === ""
Accessible.description: qsTr("Stop/restart/start the download")
onClicked: {
if (!isDownloading) {
Download.downloadModel(filename);
} else {
Download.cancelDownload(filename);
}
}
}
MySettingsDestructiveButton {
id: removeButton
text: qsTr("Remove")
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !isDownloading && (installed || isIncomplete)
Accessible.description: qsTr("Remove model from filesystem")
onClicked: {
Download.removeModel(filename);
}
}
MySettingsButton {
id: installButton
visible: !installed && isOnline
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
text: qsTr("Install")
font.pixelSize: theme.fontSizeLarge
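// An API key is always required; a base URL and model name are additionally required
// only for OpenAI-compatible (isCompatibleApi) providers.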
onClicked: {
var apiKeyText = apiKey.text.trim(),
baseUrlText = baseUrl.text.trim(),
modelNameText = modelName.text.trim();
var apiKeyOk = apiKeyText !== "",
baseUrlOk = !isCompatibleApi || baseUrlText !== "",
modelNameOk = !isCompatibleApi || modelNameText !== "";
if (!apiKeyOk)
apiKey.showError();
if (!baseUrlOk)
baseUrl.showError();
if (!modelNameOk)
modelName.showError();
if (!apiKeyOk || !baseUrlOk || !modelNameOk)
return;
if (!isCompatibleApi)
Download.installModel(
filename,
apiKeyText,
);
else
Download.installCompatibleModel(
modelNameText,
apiKeyText,
baseUrlText,
);
}
Accessible.role: Accessible.Button
Accessible.name: qsTr("Install")
Accessible.description: qsTr("Install online model")
}
ColumnLayout {
spacing: 0
Label {
Layout.topMargin: 20
Layout.leftMargin: 20
visible: downloadError !== ""
textFormat: Text.StyledText
text: qsTr("<strong><font size=\"1\"><a href=\"#error\">Error</a></strong></font>")
color: theme.textColor
font.pixelSize: theme.fontSizeLarge
linkColor: theme.textErrorColor
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Describes an error that occurred when downloading")
onLinkActivated: {
downloadingErrorPopup.text = downloadError;
downloadingErrorPopup.open();
}
}
Label {
visible: LLM.systemTotalRAMInGB() < ramrequired
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.maximumWidth: 300
textFormat: Text.StyledText
text: qsTr("<strong><font size=\"2\">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>").arg(ramrequired).arg(LLM.systemTotalRAMInGBString())
color: theme.textErrorColor
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WordWrap
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Error for incompatible hardware")
onLinkActivated: {
downloadingErrorPopup.text = downloadError;
downloadingErrorPopup.open();
}
}
}
ColumnLayout {
visible: isDownloading && !calcHash
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
spacing: 20
ProgressBar {
id: itemProgressBar
Layout.fillWidth: true
width: 200
value: bytesReceived / bytesTotal
background: Rectangle {
implicitHeight: 45
color: theme.progressBackground
radius: 3
}
contentItem: Item {
implicitHeight: 40
Rectangle {
width: itemProgressBar.visualPosition * parent.width
height: parent.height
radius: 2
color: theme.progressForeground
}
}
Accessible.role: Accessible.ProgressBar
Accessible.name: qsTr("Download progressBar")
Accessible.description: qsTr("Shows the progress made in the download")
}
Label {
id: speedLabel
color: theme.textColor
Layout.alignment: Qt.AlignRight
text: speed
font.pixelSize: theme.fontSizeLarge
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Download speed")
Accessible.description: qsTr("Download speed in bytes/kilobytes/megabytes per second")
}
}
RowLayout {
visible: calcHash
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.maximumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
clip: true
Label {
id: calcHashLabel
color: theme.textColor
text: qsTr("Calculating...")
font.pixelSize: theme.fontSizeLarge
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyBusyIndicator {
id: busyCalcHash
running: calcHash
Accessible.role: Accessible.Animation
Accessible.name: qsTr("Busy indicator")
Accessible.description: qsTr("Displayed when the file hash is being calculated")
}
}
MyTextField {
id: apiKey
visible: !installed && isOnline
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $API_KEY is empty."));
apiKey.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
apiKey.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $API_KEY")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyTextField {
id: baseUrl
visible: !installed && isOnline && isCompatibleApi
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $BASE_URL is empty."));
baseUrl.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
baseUrl.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $BASE_URL")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyTextField {
id: modelName
visible: !installed && isOnline && isCompatibleApi
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $MODEL_NAME is empty."))
modelName.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
modelName.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $MODEL_NAME")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
}
}
}
Item {
Layout.minimumWidth: childrenRect.width
Layout.minimumHeight: childrenRect.height
Layout.bottomMargin: 10
RowLayout {
id: paramRow
anchors.centerIn: parent
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("File size")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: filesize
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Quant")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: quant
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Type")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: type
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
}
Rectangle {
color: "transparent"
anchors.fill: paramRow
border.color: theme.dividerColor
border.width: 1
radius: 10
}
}
Rectangle {
Layout.fillWidth: true
height: 1
color: theme.dividerColor
}
}
}
}
}
}

View File

@@ -42,12 +42,12 @@ Rectangle {
anchors.top: parent.top
anchors.bottom: parent.bottom
anchors.margins: 30
spacing: 30
spacing: 10
ColumnLayout {
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop
spacing: 30
spacing: 10
MyButton {
id: backButton
@@ -76,732 +76,80 @@ Rectangle {
font.pixelSize: theme.fontSizeBanner
color: theme.titleTextColor
}
}
RowLayout {
Layout.fillWidth: true
Layout.alignment: Qt.AlignCenter
Layout.margins: 0
spacing: 10
MyTextField {
id: discoverField
property string textBeingSearched: ""
readOnly: ModelList.discoverInProgress
Layout.alignment: Qt.AlignCenter
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarger
placeholderText: qsTr("Discover and download models by keyword search...")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Text field for discovering and filtering downloadable models")
Connections {
target: ModelList
function onDiscoverInProgressChanged() {
if (ModelList.discoverInProgress) {
discoverField.textBeingSearched = discoverField.text;
discoverField.text = qsTr("Searching \u00B7 %1").arg(discoverField.textBeingSearched);
} else {
discoverField.text = discoverField.textBeingSearched;
discoverField.textBeingSearched = "";
}
}
}
background: ProgressBar {
id: discoverProgressBar
indeterminate: ModelList.discoverInProgress && ModelList.discoverProgress === 0.0
value: ModelList.discoverProgress
background: Rectangle {
color: theme.controlBackground
border.color: theme.controlBorder
radius: 10
}
contentItem: Item {
Rectangle {
visible: ModelList.discoverInProgress
anchors.bottom: parent.bottom
width: discoverProgressBar.visualPosition * parent.width
height: 10
radius: 2
color: theme.progressForeground
}
}
}
Keys.onReturnPressed: (event)=> {
if (event.modifiers & Qt.ControlModifier || event.modifiers & Qt.ShiftModifier)
event.accepted = false;
else {
editingFinished();
sendDiscovery()
}
}
function sendDiscovery() {
ModelList.downloadableModels.discoverAndFilter(discoverField.text);
}
RowLayout {
spacing: 0
anchors.right: discoverField.right
anchors.verticalCenter: discoverField.verticalCenter
anchors.rightMargin: 15
visible: !ModelList.discoverInProgress
MyMiniButton {
id: clearDiscoverButton
backgroundColor: theme.textColor
backgroundColorHovered: theme.iconBackgroundDark
visible: discoverField.text !== ""
source: "qrc:/gpt4all/icons/close.svg"
onClicked: {
discoverField.text = ""
discoverField.sendDiscovery() // should clear results
}
}
MyMiniButton {
backgroundColor: theme.textColor
backgroundColorHovered: theme.iconBackgroundDark
source: "qrc:/gpt4all/icons/settings.svg"
onClicked: {
discoveryTools.visible = !discoveryTools.visible
}
}
MyMiniButton {
id: sendButton
enabled: !ModelList.discoverInProgress
backgroundColor: theme.textColor
backgroundColorHovered: theme.iconBackgroundDark
source: "qrc:/gpt4all/icons/send_message.svg"
Accessible.name: qsTr("Initiate model discovery and filtering")
Accessible.description: qsTr("Triggers discovery and filtering of models")
onClicked: {
discoverField.sendDiscovery()
}
}
}
RowLayout {
id: bar
implicitWidth: 600
spacing: 10
MyTabButton {
text: qsTr("GPT4All")
isSelected: gpt4AllModelView.isShown()
onPressed: {
gpt4AllModelView.show();
}
}
RowLayout {
id: discoveryTools
Layout.fillWidth: true
Layout.alignment: Qt.AlignCenter
Layout.margins: 0
spacing: 20
visible: false
MyComboBox {
id: comboSort
model: ListModel {
ListElement { name: qsTr("Default") }
ListElement { name: qsTr("Likes") }
ListElement { name: qsTr("Downloads") }
ListElement { name: qsTr("Recent") }
}
currentIndex: ModelList.discoverSort
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
rightPadding: 30
color: theme.textColor
text: {
return qsTr("Sort by: %1").arg(comboSort.displayText)
}
font.pixelSize: theme.fontSizeLarger
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
elide: Text.ElideRight
}
onActivated: function (index) {
ModelList.discoverSort = index;
}
MyTabButton {
text: qsTr("Remote Providers")
isSelected: remoteModelView.isShown()
onPressed: {
remoteModelView.show();
}
MyComboBox {
id: comboSortDirection
model: ListModel {
ListElement { name: qsTr("Asc") }
ListElement { name: qsTr("Desc") }
}
currentIndex: {
if (ModelList.discoverSortDirection === 1)
return 0
else
return 1;
}
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
rightPadding: 30
color: theme.textColor
text: {
return qsTr("Sort dir: %1").arg(comboSortDirection.displayText)
}
font.pixelSize: theme.fontSizeLarger
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
elide: Text.ElideRight
}
onActivated: function (index) {
if (index === 0)
ModelList.discoverSortDirection = 1;
else
ModelList.discoverSortDirection = -1;
}
}
MyComboBox {
id: comboLimit
model: ListModel {
ListElement { name: "5" }
ListElement { name: "10" }
ListElement { name: "20" }
ListElement { name: "50" }
ListElement { name: "100" }
ListElement { name: qsTr("None") }
}
currentIndex: {
if (ModelList.discoverLimit === 5)
return 0;
else if (ModelList.discoverLimit === 10)
return 1;
else if (ModelList.discoverLimit === 20)
return 2;
else if (ModelList.discoverLimit === 50)
return 3;
else if (ModelList.discoverLimit === 100)
return 4;
else if (ModelList.discoverLimit === -1)
return 5;
}
contentItem: Text {
anchors.horizontalCenter: parent.horizontalCenter
rightPadding: 30
color: theme.textColor
text: {
return qsTr("Limit: %1").arg(comboLimit.displayText)
}
font.pixelSize: theme.fontSizeLarger
verticalAlignment: Text.AlignVCenter
horizontalAlignment: Text.AlignHCenter
elide: Text.ElideRight
}
onActivated: function (index) {
switch (index) {
case 0:
ModelList.discoverLimit = 5; break;
case 1:
ModelList.discoverLimit = 10; break;
case 2:
ModelList.discoverLimit = 20; break;
case 3:
ModelList.discoverLimit = 50; break;
case 4:
ModelList.discoverLimit = 100; break;
case 5:
ModelList.discoverLimit = -1; break;
}
}
}
MyTabButton {
text: qsTr("HuggingFace")
isSelected: huggingfaceModelView.isShown()
onPressed: {
huggingfaceModelView.show();
}
}
}
Label {
visible: !ModelList.downloadableModels.count && !ModelList.asyncModelRequestOngoing
StackLayout {
id: stackLayout
Layout.fillWidth: true
Layout.fillHeight: true
horizontalAlignment: Qt.AlignHCenter
verticalAlignment: Qt.AlignVCenter
text: qsTr("Network error: could not retrieve %1").arg("http://gpt4all.io/models/models3.json")
font.pixelSize: theme.fontSizeLarge
color: theme.mutedTextColor
}
MyBusyIndicator {
visible: !ModelList.downloadableModels.count && ModelList.asyncModelRequestOngoing
running: ModelList.asyncModelRequestOngoing
Accessible.role: Accessible.Animation
Layout.alignment: Qt.AlignCenter
Accessible.name: qsTr("Busy indicator")
Accessible.description: qsTr("Displayed when the models request is ongoing")
}
AddGPT4AllModelView {
id: gpt4AllModelView
Layout.fillWidth: true
Layout.fillHeight: true
ScrollView {
id: scrollView
ScrollBar.vertical.policy: ScrollBar.AsNeeded
Layout.fillWidth: true
Layout.fillHeight: true
clip: true
function show() {
stackLayout.currentIndex = 0;
}
function isShown() {
return stackLayout.currentIndex === 0;
}
}
ListView {
id: modelListView
model: ModelList.downloadableModels
boundsBehavior: Flickable.StopAtBounds
spacing: 30
AddRemoteModelView {
id: remoteModelView
Layout.fillWidth: true
Layout.fillHeight: true
delegate: Rectangle {
id: delegateItem
width: modelListView.width
height: childrenRect.height + 60
color: theme.conversationBackground
radius: 10
border.width: 1
border.color: theme.controlBorder
function show() {
stackLayout.currentIndex = 1;
}
function isShown() {
return stackLayout.currentIndex === 1;
}
}
ColumnLayout {
anchors.top: parent.top
anchors.left: parent.left
anchors.right: parent.right
anchors.margins: 30
AddHFModelView {
id: huggingfaceModelView
Layout.fillWidth: true
Layout.fillHeight: true
// FIXME: This generates a warning and should not be used inside a layout, but without
// it the text field inside this qml does not display at full width so it looks like
// a bug in stacklayout
anchors.fill: parent
Text {
Layout.fillWidth: true
Layout.alignment: Qt.AlignLeft
text: name
elide: Text.ElideRight
color: theme.titleTextColor
font.pixelSize: theme.fontSizeLargest
font.bold: true
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Model file")
Accessible.description: qsTr("Model file to be downloaded")
}
Rectangle {
Layout.fillWidth: true
height: 1
color: theme.dividerColor
}
RowLayout {
Layout.topMargin: 10
Layout.fillWidth: true
Text {
id: descriptionText
text: description
font.pixelSize: theme.fontSizeLarge
Layout.fillWidth: true
wrapMode: Text.WordWrap
textFormat: Text.StyledText
color: theme.textColor
linkColor: theme.textColor
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Description")
Accessible.description: qsTr("File description")
onLinkActivated: function(link) { Qt.openUrlExternally(link); }
MouseArea {
anchors.fill: parent
acceptedButtons: Qt.NoButton // pass clicks to parent
cursorShape: parent.hoveredLink ? Qt.PointingHandCursor : Qt.ArrowCursor
}
}
// FIXME Need to overhaul design here which must take into account
// features not present in current figma including:
// * Ability to cancel a current download
// * Ability to resume a download
// * The presentation of an error if encountered
// * Whether to show already installed models
// * Install of remote models with API keys
// * The presentation of the progress bar
Rectangle {
id: actionBox
width: childrenRect.width + 20
color: "transparent"
border.width: 1
border.color: theme.dividerColor
radius: 10
Layout.rightMargin: 20
Layout.bottomMargin: 20
Layout.minimumHeight: childrenRect.height + 20
Layout.alignment: Qt.AlignRight | Qt.AlignTop
ColumnLayout {
spacing: 0
MySettingsButton {
id: downloadButton
text: isDownloading ? qsTr("Cancel") : isIncomplete ? qsTr("Resume") : qsTr("Download")
font.pixelSize: theme.fontSizeLarge
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !isOnline && !installed && !calcHash && downloadError === ""
Accessible.description: qsTr("Stop/restart/start the download")
onClicked: {
if (!isDownloading) {
Download.downloadModel(filename);
} else {
Download.cancelDownload(filename);
}
}
}
MySettingsDestructiveButton {
id: removeButton
text: qsTr("Remove")
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
visible: !isDownloading && (installed || isIncomplete)
Accessible.description: qsTr("Remove model from filesystem")
onClicked: {
Download.removeModel(filename);
}
}
MySettingsButton {
id: installButton
visible: !installed && isOnline
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
text: qsTr("Install")
font.pixelSize: theme.fontSizeLarge
onClicked: {
var apiKeyText = apiKey.text.trim(),
baseUrlText = baseUrl.text.trim(),
modelNameText = modelName.text.trim();
var apiKeyOk = apiKeyText !== "",
baseUrlOk = !isCompatibleApi || baseUrlText !== "",
modelNameOk = !isCompatibleApi || modelNameText !== "";
if (!apiKeyOk)
apiKey.showError();
if (!baseUrlOk)
baseUrl.showError();
if (!modelNameOk)
modelName.showError();
if (!apiKeyOk || !baseUrlOk || !modelNameOk)
return;
if (!isCompatibleApi)
Download.installModel(
filename,
apiKeyText,
);
else
Download.installCompatibleModel(
modelNameText,
apiKeyText,
baseUrlText,
);
}
Accessible.role: Accessible.Button
Accessible.name: qsTr("Install")
Accessible.description: qsTr("Install online model")
}
ColumnLayout {
spacing: 0
Label {
Layout.topMargin: 20
Layout.leftMargin: 20
visible: downloadError !== ""
textFormat: Text.StyledText
text: qsTr("<strong><font size=\"1\"><a href=\"#error\">Error</a></strong></font>")
color: theme.textColor
font.pixelSize: theme.fontSizeLarge
linkColor: theme.textErrorColor
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Describes an error that occurred when downloading")
onLinkActivated: {
downloadingErrorPopup.text = downloadError;
downloadingErrorPopup.open();
}
}
Label {
visible: LLM.systemTotalRAMInGB() < ramrequired
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.maximumWidth: 300
textFormat: Text.StyledText
text: qsTr("<strong><font size=\"2\">WARNING: Not recommended for your hardware. Model requires more memory (%1 GB) than your system has available (%2).</strong></font>").arg(ramrequired).arg(LLM.systemTotalRAMInGBString())
color: theme.textErrorColor
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WordWrap
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Error for incompatible hardware")
onLinkActivated: {
downloadingErrorPopup.text = downloadError;
downloadingErrorPopup.open();
}
}
}
ColumnLayout {
visible: isDownloading && !calcHash
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
spacing: 20
ProgressBar {
id: itemProgressBar
Layout.fillWidth: true
width: 200
value: bytesReceived / bytesTotal
background: Rectangle {
implicitHeight: 45
color: theme.progressBackground
radius: 3
}
contentItem: Item {
implicitHeight: 40
Rectangle {
width: itemProgressBar.visualPosition * parent.width
height: parent.height
radius: 2
color: theme.progressForeground
}
}
Accessible.role: Accessible.ProgressBar
Accessible.name: qsTr("Download progressBar")
Accessible.description: qsTr("Shows the progress made in the download")
}
Label {
id: speedLabel
color: theme.textColor
Layout.alignment: Qt.AlignRight
text: speed
font.pixelSize: theme.fontSizeLarge
Accessible.role: Accessible.Paragraph
Accessible.name: qsTr("Download speed")
Accessible.description: qsTr("Download speed in bytes/kilobytes/megabytes per second")
}
}
RowLayout {
visible: calcHash
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.maximumWidth: 200
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
clip: true
Label {
id: calcHashLabel
color: theme.textColor
text: qsTr("Calculating...")
font.pixelSize: theme.fontSizeLarge
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyBusyIndicator {
id: busyCalcHash
running: calcHash
Accessible.role: Accessible.Animation
Accessible.name: qsTr("Busy indicator")
Accessible.description: qsTr("Displayed when the file hash is being calculated")
}
}
MyTextField {
id: apiKey
visible: !installed && isOnline
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $API_KEY is empty."));
apiKey.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
apiKey.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $API_KEY")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyTextField {
id: baseUrl
visible: !installed && isOnline && isCompatibleApi
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $BASE_URL is empty."));
baseUrl.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
baseUrl.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $BASE_URL")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
MyTextField {
id: modelName
visible: !installed && isOnline && isCompatibleApi
Layout.topMargin: 20
Layout.leftMargin: 20
Layout.minimumWidth: 200
Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $MODEL_NAME is empty."))
modelName.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
modelName.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $MODEL_NAME")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
}
}
}
Item {
Layout.minimumWidth: childrenRect.width
Layout.minimumHeight: childrenRect.height
Layout.bottomMargin: 10
RowLayout {
id: paramRow
anchors.centerIn: parent
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("File size")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: filesize
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("RAM required")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: ramrequired >= 0 ? qsTr("%1 GB").arg(ramrequired) : qsTr("?")
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Parameters")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: parameters !== "" ? parameters : qsTr("?")
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Quant")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: quant
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
Rectangle {
width: 1
Layout.fillHeight: true
color: theme.dividerColor
}
ColumnLayout {
Layout.topMargin: 10
Layout.bottomMargin: 10
Layout.leftMargin: 20
Layout.rightMargin: 20
Text {
text: qsTr("Type")
font.pixelSize: theme.fontSizeSmall
color: theme.mutedDarkTextColor
}
Text {
text: type
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
font.bold: true
}
}
}
Rectangle {
color: "transparent"
anchors.fill: paramRow
border.color: theme.dividerColor
border.width: 1
radius: 10
}
}
Rectangle {
Layout.fillWidth: true
height: 1
color: theme.dividerColor
}
}
function show() {
stackLayout.currentIndex = 2;
}
function isShown() {
return stackLayout.currentIndex === 2;
}
}
}

View File

@@ -0,0 +1,147 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel
import Qt5Compat.GraphicalEffects
import llm
import chatlistmodel
import download
import modellist
import network
import gpt4all
import mysettings
import localdocs
ColumnLayout {
Layout.fillWidth: true
Layout.alignment: Qt.AlignTop
spacing: 5
Label {
Layout.topMargin: 0
Layout.bottomMargin: 25
Layout.rightMargin: 150 * theme.fontScale
Layout.alignment: Qt.AlignTop
Layout.fillWidth: true
verticalAlignment: Text.AlignTop
text: qsTr("Various remote model providers that use network resources for inference.")
font.pixelSize: theme.fontSizeLarger
color: theme.textColor
wrapMode: Text.WordWrap
}
ScrollView {
id: scrollView
ScrollBar.vertical.policy: ScrollBar.AsNeeded
Layout.fillWidth: true
Layout.fillHeight: true
contentWidth: availableWidth
clip: true
Flow {
anchors.left: parent.left
anchors.right: parent.right
spacing: 20
bottomPadding: 20
property int childWidth: 330 * theme.fontScale
property int childHeight: 400 + 166 * theme.fontScale
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerBaseUrl: "https://api.groq.com/openai/v1/"
providerName: qsTr("Groq")
providerImage: "qrc:/gpt4all/icons/groq.svg"
providerDesc: qsTr('Groq offers a high-performance AI inference engine designed for low-latency and efficient processing. Optimized for real-time applications, Groq\'s technology is ideal for users who need fast responses from open large language models and other AI workloads.<br><br>Get your API key: <a href="https://console.groq.com/keys">https://groq.com/</a>')
modelWhitelist: [
// last updated 2025-02-24
"deepseek-r1-distill-llama-70b",
"deepseek-r1-distill-qwen-32b",
"gemma2-9b-it",
"llama-3.1-8b-instant",
"llama-3.2-1b-preview",
"llama-3.2-3b-preview",
"llama-3.3-70b-specdec",
"llama-3.3-70b-versatile",
"llama3-70b-8192",
"llama3-8b-8192",
"mixtral-8x7b-32768",
"qwen-2.5-32b",
"qwen-2.5-coder-32b",
]
}
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerBaseUrl: "https://api.openai.com/v1/"
providerName: qsTr("OpenAI")
providerImage: "qrc:/gpt4all/icons/openai.svg"
providerDesc: qsTr('OpenAI provides access to advanced AI models, including GPT-4, supporting a wide range of applications, from conversational AI to content generation and code completion.<br><br>Get your API key: <a href="https://platform.openai.com/signup">https://openai.com/</a>')
modelWhitelist: [
// last updated 2025-02-24
"gpt-3.5-turbo",
"gpt-3.5-turbo-16k",
"gpt-4",
"gpt-4-32k",
"gpt-4-turbo",
"gpt-4o",
]
}
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerBaseUrl: "https://api.mistral.ai/v1/"
providerName: qsTr("Mistral")
providerImage: "qrc:/gpt4all/icons/mistral.svg"
providerDesc: qsTr('Mistral AI specializes in efficient, open-weight language models optimized for various natural language processing tasks. Their models are designed for flexibility and performance, making them a solid option for applications requiring scalable AI solutions.<br><br>Get your API key: <a href="https://mistral.ai/">https://mistral.ai/</a>')
modelWhitelist: [
// last updated 2025-02-24
"codestral-2405",
"codestral-2411-rc5",
"codestral-2412",
"codestral-2501",
"codestral-latest",
"codestral-mamba-2407",
"codestral-mamba-latest",
"ministral-3b-2410",
"ministral-3b-latest",
"ministral-8b-2410",
"ministral-8b-latest",
"mistral-large-2402",
"mistral-large-2407",
"mistral-large-2411",
"mistral-large-latest",
"mistral-medium-2312",
"mistral-medium-latest",
"mistral-saba-2502",
"mistral-saba-latest",
"mistral-small-2312",
"mistral-small-2402",
"mistral-small-2409",
"mistral-small-2501",
"mistral-small-latest",
"mistral-tiny-2312",
"mistral-tiny-2407",
"mistral-tiny-latest",
"open-codestral-mamba",
"open-mistral-7b",
"open-mistral-nemo",
"open-mistral-nemo-2407",
"open-mixtral-8x22b",
"open-mixtral-8x22b-2404",
"open-mixtral-8x7b",
]
}
RemoteModelCard {
width: parent.childWidth
height: parent.childHeight
providerIsCustom: true
providerName: qsTr("Custom")
providerImage: "qrc:/gpt4all/icons/antenna_3.svg"
providerDesc: qsTr("The custom provider option allows users to connect their own OpenAI-compatible AI models or third-party inference services. This is useful for organizations with proprietary models or those leveraging niche AI providers not listed here.")
}
}
}
}

View File

@@ -10,7 +10,7 @@ import network
import llm
MySettingsTab {
onRestoreDefaultsClicked: {
onRestoreDefaults: {
MySettings.restoreApplicationDefaults();
}
title: qsTr("Application")
@@ -487,32 +487,32 @@ MySettingsTab {
Accessible.description: ToolTip.text
}
MySettingsLabel {
id: saveChatsContextLabel
text: qsTr("Save Chat Context")
helpText: qsTr("Save the chat model's state to disk for faster loading. WARNING: Uses ~2GB per chat.")
Layout.row: 12
id: trayLabel
text: qsTr("Enable System Tray")
helpText: qsTr("The application will minimize to the system tray when the window is closed.")
Layout.row: 13
Layout.column: 0
}
MyCheckBox {
id: saveChatsContextBox
Layout.row: 12
id: trayBox
Layout.row: 13
Layout.column: 2
Layout.alignment: Qt.AlignRight
checked: MySettings.saveChatsContext
checked: MySettings.systemTray
onClicked: {
MySettings.saveChatsContext = !MySettings.saveChatsContext
MySettings.systemTray = !MySettings.systemTray
}
}
MySettingsLabel {
id: serverChatLabel
text: qsTr("Enable Local API Server")
helpText: qsTr("Expose an OpenAI-Compatible server to localhost. WARNING: Results in increased resource usage.")
Layout.row: 13
Layout.row: 14
Layout.column: 0
}
MyCheckBox {
id: serverChatBox
Layout.row: 13
Layout.row: 14
Layout.column: 2
Layout.alignment: Qt.AlignRight
checked: MySettings.serverChat
@@ -524,7 +524,7 @@ MySettingsTab {
id: serverPortLabel
text: qsTr("API Server Port")
helpText: qsTr("The port to use for the local server. Requires restart.")
Layout.row: 14
Layout.row: 15
Layout.column: 0
}
MyTextField {
@@ -532,7 +532,7 @@ MySettingsTab {
text: MySettings.networkPort
color: theme.textColor
font.pixelSize: theme.fontSizeLarge
Layout.row: 14
Layout.row: 15
Layout.column: 2
Layout.minimumWidth: 200
Layout.maximumWidth: 200
@@ -577,12 +577,12 @@ MySettingsTab {
id: updatesLabel
text: qsTr("Check For Updates")
helpText: qsTr("Manually check for an update to GPT4All.");
Layout.row: 15
Layout.row: 16
Layout.column: 0
}
MySettingsButton {
Layout.row: 15
Layout.row: 16
Layout.column: 2
Layout.alignment: Qt.AlignRight
text: qsTr("Updates");
@@ -593,7 +593,7 @@ MySettingsTab {
}
Rectangle {
Layout.row: 16
Layout.row: 17
Layout.column: 0
Layout.columnSpan: 3
Layout.fillWidth: true

View File

@@ -0,0 +1,166 @@
import Qt5Compat.GraphicalEffects
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import gpt4all
import mysettings
import toolenums
ColumnLayout {
property alias textContent: innerTextItem.textContent
property bool isCurrent: false
property bool isError: false
property bool isThinking: false
property int thinkingTime: 0
Layout.topMargin: 10
Layout.bottomMargin: 10
Item {
Layout.preferredWidth: childrenRect.width
Layout.preferredHeight: 38
RowLayout {
anchors.left: parent.left
anchors.top: parent.top
anchors.bottom: parent.bottom
Item {
Layout.preferredWidth: myTextArea.implicitWidth
Layout.preferredHeight: myTextArea.implicitHeight
TextArea {
id: myTextArea
text: {
if (isError)
return qsTr("Analysis encountered error");
if (isCurrent)
return isThinking ? qsTr("Thinking") : qsTr("Analyzing");
return isThinking
? qsTr("Thought for %1 %2")
.arg(Math.ceil(thinkingTime / 1000.0))
.arg(Math.ceil(thinkingTime / 1000.0) === 1 ? qsTr("second") : qsTr("seconds"))
: qsTr("Analyzed");
}
padding: 0
font.pixelSize: theme.fontSizeLarger
enabled: false
focus: false
readOnly: true
color: headerMA.containsMouse ? theme.mutedDarkTextColorHovered : theme.mutedTextColor
hoverEnabled: false
}
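// The overlay and OpacityMask below produce a shimmer over the header text while this item is
// current: a narrow rectangle sweeps across an invisible copy of the text area and is masked by the text itself.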
Item {
id: textColorOverlay
anchors.fill: parent
clip: true
visible: false
Rectangle {
id: animationRec
width: myTextArea.width * 0.3
anchors.top: parent.top
anchors.bottom: parent.bottom
color: theme.textColor
SequentialAnimation {
running: isCurrent
loops: Animation.Infinite
NumberAnimation {
target: animationRec;
property: "x";
from: -animationRec.width;
to: myTextArea.width * 3;
duration: 2000
}
}
}
}
OpacityMask {
visible: isCurrent
anchors.fill: parent
maskSource: myTextArea
source: textColorOverlay
}
}
Item {
id: caret
Layout.preferredWidth: contentCaret.width
Layout.preferredHeight: contentCaret.height
Image {
id: contentCaret
anchors.centerIn: parent
visible: false
sourceSize.width: theme.fontSizeLarge
sourceSize.height: theme.fontSizeLarge
mipmap: true
source: {
if (contentLayout.state === "collapsed")
return "qrc:/gpt4all/icons/caret_right.svg";
else
return "qrc:/gpt4all/icons/caret_down.svg";
}
}
ColorOverlay {
anchors.fill: contentCaret
source: contentCaret
color: headerMA.containsMouse ? theme.mutedDarkTextColorHovered : theme.mutedTextColor
}
}
}
MouseArea {
id: headerMA
hoverEnabled: true
anchors.fill: parent
onClicked: {
if (contentLayout.state === "collapsed")
contentLayout.state = "expanded";
else
contentLayout.state = "collapsed";
}
}
}
ColumnLayout {
id: contentLayout
spacing: 0
state: "collapsed"
clip: true
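// Expand/collapse is driven by animating Layout.preferredHeight between 0 and the inner content
// height; clip prevents the collapsed content from painting outside the layout.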
states: [
State {
name: "expanded"
PropertyChanges { target: contentLayout; Layout.preferredHeight: innerContentLayout.height }
},
State {
name: "collapsed"
PropertyChanges { target: contentLayout; Layout.preferredHeight: 0 }
}
]
transitions: [
Transition {
SequentialAnimation {
PropertyAnimation {
target: contentLayout
property: "Layout.preferredHeight"
duration: 300
easing.type: Easing.InOutQuad
}
}
}
]
ColumnLayout {
id: innerContentLayout
Layout.leftMargin: 30
ChatTextItem {
id: innerTextItem
}
}
}
}

View File

@@ -0,0 +1,832 @@
import Qt5Compat.GraphicalEffects
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import Qt.labs.qmlmodels
import gpt4all
import mysettings
import toolenums
ColumnLayout {
property var inputBoxText: null
signal setInputBoxText(text: string)
Item {
Layout.fillWidth: true
Layout.maximumWidth: parent.width
Layout.preferredHeight: gridLayout.height
HoverHandler { id: hoverArea }
GridLayout {
id: gridLayout
anchors.left: parent.left
anchors.right: parent.right
columns: 2
Item {
Layout.row: 0
Layout.column: 0
Layout.alignment: Qt.AlignVCenter | Qt.AlignRight
Layout.preferredWidth: 32
Layout.preferredHeight: 32
Layout.topMargin: model.index > 0 ? 25 : 0
Image {
id: logo
sourceSize: Qt.size(32, 32)
fillMode: Image.PreserveAspectFit
mipmap: true
visible: false
source: name !== "Response: " ? "qrc:/gpt4all/icons/you.svg" : "qrc:/gpt4all/icons/gpt4all_transparent.svg"
}
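// The ColorOverlay tints the logo to the conversation header color and spins it while a response is being generated.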
ColorOverlay {
id: colorOver
anchors.fill: logo
source: logo
color: theme.conversationHeader
RotationAnimation {
id: rotationAnimation
target: colorOver
property: "rotation"
from: 0
to: 360
duration: 1000
loops: Animation.Infinite
running: isCurrentResponse && currentChat.responseInProgress
}
}
}
Item {
Layout.row: 0
Layout.column: 1
Layout.fillWidth: true
Layout.preferredHeight: 38
Layout.topMargin: model.index > 0 ? 25 : 0
RowLayout {
spacing: 5
anchors.left: parent.left
anchors.top: parent.top
anchors.bottom: parent.bottom
TextArea {
text: {
if (name === "Response: ")
return qsTr("GPT4All");
return qsTr("You");
}
padding: 0
font.pixelSize: theme.fontSizeLarger
font.bold: true
color: theme.conversationHeader
enabled: false
focus: false
readOnly: true
}
Text {
visible: name === "Response: "
font.pixelSize: theme.fontSizeLarger
text: currentModelName()
color: theme.mutedTextColor
}
RowLayout {
visible: isCurrentResponse && (content === "" && currentChat.responseInProgress)
Text {
color: theme.mutedTextColor
font.pixelSize: theme.fontSizeLarger
text: {
switch (currentChat.responseState) {
case Chat.ResponseStopped: return qsTr("response stopped ...");
case Chat.LocalDocsRetrieval: return qsTr("retrieving localdocs: %1 ...").arg(currentChat.collectionList.join(", "));
case Chat.LocalDocsProcessing: return qsTr("searching localdocs: %1 ...").arg(currentChat.collectionList.join(", "));
case Chat.PromptProcessing: return qsTr("processing ...")
case Chat.ResponseGeneration: return qsTr("generating response ...");
case Chat.GeneratingQuestions: return qsTr("generating questions ...");
case Chat.ToolCallGeneration: return qsTr("generating toolcall ...");
default: return ""; // handle unexpected values
}
}
}
}
}
}
ColumnLayout {
Layout.row: 1
Layout.column: 1
Layout.fillWidth: true
spacing: 10
Flow {
id: attachedUrlsFlow
Layout.fillWidth: true
Layout.bottomMargin: 10
spacing: 10
visible: promptAttachments.length !== 0
Repeater {
model: promptAttachments
delegate: Rectangle {
width: 350
height: 50
radius: 5
color: theme.attachmentBackground
border.color: theme.controlBorder
Row {
spacing: 5
anchors.fill: parent
anchors.margins: 5
MyFileIcon {
iconSize: 40
fileName: modelData.file
}
Text {
width: 295
height: 40
text: modelData.file
color: theme.textColor
horizontalAlignment: Text.AlignLeft
verticalAlignment: Text.AlignVCenter
font.pixelSize: theme.fontSizeMedium
font.bold: true
wrapMode: Text.WrapAnywhere
elide: Qt.ElideRight
}
}
}
}
}
Repeater {
model: childItems
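// Each child item is routed by its "name" role: plain text, collapsible tool-call output, and
// collapsible "Think" reasoning content (hidden when empty) each get their own delegate.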
DelegateChooser {
id: chooser
role: "name"
DelegateChoice {
roleValue: "Text: ";
ChatTextItem {
Layout.fillWidth: true
textContent: modelData.content
}
}
DelegateChoice {
roleValue: "ToolCall: ";
ChatCollapsibleItem {
Layout.fillWidth: true
textContent: modelData.content
isCurrent: modelData.isCurrentResponse
isError: modelData.isToolCallError
}
}
DelegateChoice {
roleValue: "Think: ";
ChatCollapsibleItem {
Layout.fillWidth: true
textContent: modelData.content
isCurrent: modelData.isCurrentResponse
isError: false
isThinking: true
thinkingTime: modelData.thinkingTime
visible: modelData.content !== ""
}
}
}
delegate: chooser
}
ChatTextItem {
Layout.fillWidth: true
textContent: content
}
ThumbsDownDialog {
id: thumbsDownDialog
x: Math.round((parent.width - width) / 2)
y: Math.round((parent.height - height) / 2)
width: 640
height: 300
property string text: content
response: newResponse === undefined || newResponse === "" ? text : newResponse
onAccepted: {
var responseHasChanged = response !== text && response !== newResponse
if (thumbsDownState && !thumbsUpState && !responseHasChanged)
return
chatModel.updateNewResponse(model.index, response)
chatModel.updateThumbsUpState(model.index, false)
chatModel.updateThumbsDownState(model.index, true)
Network.sendConversation(currentChat.id, getConversationJson());
}
}
}
Item {
Layout.row: 2
Layout.column: 1
Layout.topMargin: 5
Layout.alignment: Qt.AlignVCenter
Layout.preferredWidth: childrenRect.width
Layout.preferredHeight: childrenRect.height
visible: {
if (name !== "Response: ")
return false
if (consolidatedSources.length === 0)
return false
if (!MySettings.localDocsShowReferences)
return false
if (isCurrentResponse && currentChat.responseInProgress
&& currentChat.responseState !== Chat.GeneratingQuestions )
return false
return true
}
MyButton {
backgroundColor: theme.sourcesBackground
backgroundColorHovered: theme.sourcesBackgroundHovered
contentItem: RowLayout {
anchors.centerIn: parent
Item {
Layout.preferredWidth: 24
Layout.preferredHeight: 24
Image {
id: sourcesIcon
visible: false
anchors.fill: parent
sourceSize.width: 24
sourceSize.height: 24
mipmap: true
source: "qrc:/gpt4all/icons/db.svg"
}
ColorOverlay {
anchors.fill: sourcesIcon
source: sourcesIcon
color: theme.textColor
}
}
Text {
text: qsTr("%n Source(s)", "", consolidatedSources.length)
padding: 0
font.pixelSize: theme.fontSizeLarge
font.bold: true
color: theme.styledTextColor
}
Item {
Layout.preferredWidth: caret.width
Layout.preferredHeight: caret.height
Image {
id: caret
anchors.centerIn: parent
visible: false
sourceSize.width: theme.fontSizeLarge
sourceSize.height: theme.fontSizeLarge
mipmap: true
source: {
if (sourcesLayout.state === "collapsed")
return "qrc:/gpt4all/icons/caret_right.svg";
else
return "qrc:/gpt4all/icons/caret_down.svg";
}
}
ColorOverlay {
anchors.fill: caret
source: caret
color: theme.textColor
}
}
}
onClicked: {
if (sourcesLayout.state === "collapsed")
sourcesLayout.state = "expanded";
else
sourcesLayout.state = "collapsed";
}
}
}
ColumnLayout {
id: sourcesLayout
Layout.row: 3
Layout.column: 1
Layout.topMargin: 5
visible: {
if (consolidatedSources.length === 0)
return false
if (!MySettings.localDocsShowReferences)
return false
if (isCurrentResponse && currentChat.responseInProgress
&& currentChat.responseState !== Chat.GeneratingQuestions )
return false
return true
}
clip: true
Layout.fillWidth: true
Layout.preferredHeight: 0
state: "collapsed"
states: [
State {
name: "expanded"
PropertyChanges { target: sourcesLayout; Layout.preferredHeight: sourcesFlow.height }
},
State {
name: "collapsed"
PropertyChanges { target: sourcesLayout; Layout.preferredHeight: 0 }
}
]
transitions: [
Transition {
SequentialAnimation {
PropertyAnimation {
target: sourcesLayout
property: "Layout.preferredHeight"
duration: 300
easing.type: Easing.InOutQuad
}
}
}
]
Flow {
id: sourcesFlow
Layout.fillWidth: true
spacing: 10
visible: consolidatedSources.length !== 0
Repeater {
model: consolidatedSources
delegate: Rectangle {
radius: 10
color: ma.containsMouse ? theme.sourcesBackgroundHovered : theme.sourcesBackground
width: 200
height: 75
MouseArea {
id: ma
enabled: modelData.path !== ""
anchors.fill: parent
hoverEnabled: true
onClicked: function() {
Qt.openUrlExternally(modelData.fileUri)
}
}
Rectangle {
id: debugTooltip
anchors.right: parent.right
anchors.bottom: parent.bottom
width: 24
height: 24
color: "transparent"
ToolTip {
parent: debugTooltip
visible: debugMouseArea.containsMouse
text: modelData.text
contentWidth: 900
delay: 500
}
MouseArea {
id: debugMouseArea
anchors.fill: parent
hoverEnabled: true
}
}
ColumnLayout {
anchors.left: parent.left
anchors.top: parent.top
anchors.margins: 10
spacing: 0
RowLayout {
id: title
spacing: 5
Layout.maximumWidth: 180
MyFileIcon {
iconSize: 24
fileName: modelData.file
Layout.preferredWidth: iconSize
Layout.preferredHeight: iconSize
}
Text {
Layout.maximumWidth: 156
text: modelData.collection !== "" ? modelData.collection : qsTr("LocalDocs")
font.pixelSize: theme.fontSizeLarge
font.bold: true
color: theme.styledTextColor
elide: Qt.ElideRight
}
Rectangle {
Layout.fillWidth: true
color: "transparent"
height: 1
}
}
Text {
Layout.fillHeight: true
Layout.maximumWidth: 180
Layout.maximumHeight: 55 - title.height
text: modelData.file
color: theme.textColor
font.pixelSize: theme.fontSizeSmall
elide: Qt.ElideRight
wrapMode: Text.WrapAnywhere
}
}
}
}
}
}
ConfirmationDialog {
id: editPromptDialog
dialogTitle: qsTr("Edit this message?")
description: qsTr("All following messages will be permanently erased.")
onAccepted: {
const msg = currentChat.popPrompt(index);
if (msg !== null)
setInputBoxText(msg);
}
}
ConfirmationDialog {
id: redoResponseDialog
dialogTitle: qsTr("Redo this response?")
description: qsTr("All following messages will be permanently erased.")
onAccepted: currentChat.regenerateResponse(index)
}
RowLayout {
id: buttonRow
Layout.row: 4
Layout.column: 1
Layout.maximumWidth: parent.width
Layout.fillWidth: false
Layout.alignment: Qt.AlignLeft | Qt.AlignTop
spacing: 3
visible: !isCurrentResponse || !currentChat.responseInProgress
enabled: opacity > 0
opacity: hoverArea.hovered
Behavior on opacity {
OpacityAnimator { duration: 30 }
}
ChatMessageButton {
readonly property var editingDisabledReason: {
if (!currentChat.isModelLoaded)
return qsTr("Cannot edit chat without a loaded model.");
if (currentChat.responseInProgress)
return qsTr("Cannot edit chat while the model is generating.");
return null;
}
visible: !currentChat.isServer && model.name === "Prompt: "
enabled: editingDisabledReason === null
Layout.maximumWidth: 24
Layout.maximumHeight: 24
Layout.alignment: Qt.AlignVCenter
Layout.fillWidth: false
name: editingDisabledReason ?? qsTr("Edit")
source: "qrc:/gpt4all/icons/edit.svg"
onClicked: {
if (inputBoxText === "")
editPromptDialog.open();
}
}
ChatMessageButton {
readonly property var editingDisabledReason: {
if (!currentChat.isModelLoaded)
return qsTr("Cannot redo response without a loaded model.");
if (currentChat.responseInProgress)
return qsTr("Cannot redo response while the model is generating.");
return null;
}
visible: !currentChat.isServer && model.name === "Response: "
enabled: editingDisabledReason === null
Layout.maximumWidth: 24
Layout.maximumHeight: 24
Layout.alignment: Qt.AlignVCenter
Layout.fillWidth: false
name: editingDisabledReason ?? qsTr("Redo")
source: "qrc:/gpt4all/icons/regenerate.svg"
onClicked: {
if (index == chatModel.count - 1) {
// regenerate last message without confirmation
currentChat.regenerateResponse(index);
return;
}
redoResponseDialog.open();
}
}
ChatMessageButton {
Layout.maximumWidth: 24
Layout.maximumHeight: 24
Layout.alignment: Qt.AlignVCenter
Layout.fillWidth: false
name: qsTr("Copy")
source: "qrc:/gpt4all/icons/copy.svg"
onClicked: {
chatModel.copyToClipboard(index);
}
}
Item {
visible: name === "Response: " && MySettings.networkIsActive
Layout.alignment: Qt.AlignVCenter
Layout.preferredWidth: childrenRect.width
Layout.preferredHeight: childrenRect.height
Layout.fillWidth: false
ChatMessageButton {
id: thumbsUp
anchors.left: parent.left
anchors.verticalCenter: parent.verticalCenter
opacity: thumbsUpState || thumbsUpState == thumbsDownState ? 1.0 : 0.2
source: "qrc:/gpt4all/icons/thumbs_up.svg"
name: qsTr("Like response")
onClicked: {
if (thumbsUpState && !thumbsDownState)
return
chatModel.updateNewResponse(index, "")
chatModel.updateThumbsUpState(index, true)
chatModel.updateThumbsDownState(index, false)
Network.sendConversation(currentChat.id, getConversationJson());
}
}
ChatMessageButton {
id: thumbsDown
anchors.top: thumbsUp.top
anchors.topMargin: buttonRow.spacing
anchors.left: thumbsUp.right
anchors.leftMargin: buttonRow.spacing
checked: thumbsDownState
opacity: thumbsDownState || thumbsUpState == thumbsDownState ? 1.0 : 0.2
bgTransform: [
Matrix4x4 {
matrix: Qt.matrix4x4(-1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1)
},
Translate {
x: thumbsDown.width
}
]
source: "qrc:/gpt4all/icons/thumbs_down.svg"
name: qsTr("Dislike response")
onClicked: {
thumbsDownDialog.open()
}
}
}
}
} // GridLayout
} // Item
GridLayout {
Layout.fillWidth: true
Layout.maximumWidth: parent.width
function shouldShowSuggestions() {
if (!isCurrentResponse)
return false;
if (MySettings.suggestionMode === 2) // Off
return false;
if (MySettings.suggestionMode === 0 && consolidatedSources.length === 0) // LocalDocs only
return false;
return currentChat.responseState === Chat.GeneratingQuestions || currentChat.generatedQuestions.length !== 0;
}
Item {
visible: parent.shouldShowSuggestions()
Layout.row: 5
Layout.column: 0
Layout.topMargin: 20
Layout.alignment: Qt.AlignVCenter | Qt.AlignRight
Layout.preferredWidth: 28
Layout.preferredHeight: 28
Image {
id: stack
sourceSize: Qt.size(28, 28)
fillMode: Image.PreserveAspectFit
mipmap: true
visible: false
source: "qrc:/gpt4all/icons/stack.svg"
}
ColorOverlay {
anchors.fill: stack
source: stack
color: theme.conversationHeader
}
}
Item {
visible: parent.shouldShowSuggestions()
Layout.row: 5
Layout.column: 1
Layout.topMargin: 20
Layout.fillWidth: true
Layout.preferredHeight: 38
RowLayout {
spacing: 5
anchors.left: parent.left
anchors.top: parent.top
anchors.bottom: parent.bottom
TextArea {
text: qsTr("Suggested follow-ups")
padding: 0
font.pixelSize: theme.fontSizeLarger
font.bold: true
color: theme.conversationHeader
enabled: false
focus: false
readOnly: true
}
}
}
ColumnLayout {
visible: parent.shouldShowSuggestions()
Layout.row: 6
Layout.column: 1
Layout.fillWidth: true
Layout.minimumHeight: 1
spacing: 10
Repeater {
model: currentChat.generatedQuestions
TextArea {
id: followUpText
Layout.fillWidth: true
Layout.alignment: Qt.AlignLeft
rightPadding: 40
topPadding: 10
leftPadding: 20
bottomPadding: 10
text: modelData
focus: false
readOnly: true
wrapMode: Text.WordWrap
hoverEnabled: !currentChat.responseInProgress
color: theme.textColor
font.pixelSize: theme.fontSizeLarge
background: Rectangle {
color: hovered ? theme.sourcesBackgroundHovered : theme.sourcesBackground
radius: 10
}
MouseArea {
id: maFollowUp
anchors.fill: parent
enabled: !currentChat.responseInProgress
onClicked: function() {
var chat = window.currentChat
var followup = modelData
chat.stopGenerating()
chat.newPromptResponsePair(followup)
}
}
Item {
anchors.right: parent.right
anchors.verticalCenter: parent.verticalCenter
width: 40
height: 40
visible: !currentChat.responseInProgress
Image {
id: plusImage
anchors.verticalCenter: parent.verticalCenter
sourceSize.width: 20
sourceSize.height: 20
mipmap: true
visible: false
source: "qrc:/gpt4all/icons/plus.svg"
}
ColorOverlay {
anchors.fill: plusImage
source: plusImage
color: theme.styledTextColor
}
}
}
}
Rectangle {
Layout.fillWidth: true
color: "transparent"
radius: 10
Layout.preferredHeight: currentChat.responseInProgress ? 40 : 0
clip: true
ColumnLayout {
id: followUpLayout
anchors.fill: parent
Rectangle {
id: myRect1
Layout.preferredWidth: 0
Layout.minimumWidth: 0
Layout.maximumWidth: parent.width
height: 12
color: theme.sourcesBackgroundHovered
}
Rectangle {
id: myRect2
Layout.preferredWidth: 0
Layout.minimumWidth: 0
Layout.maximumWidth: parent.width
height: 12
color: theme.sourcesBackgroundHovered
}
SequentialAnimation {
id: followUpProgressAnimation
ParallelAnimation {
PropertyAnimation {
target: myRect1
property: "Layout.preferredWidth"
from: 0
to: followUpLayout.width
duration: 1000
}
PropertyAnimation {
target: myRect2
property: "Layout.preferredWidth"
from: 0
to: followUpLayout.width / 2
duration: 1000
}
}
SequentialAnimation {
loops: Animation.Infinite
ParallelAnimation {
PropertyAnimation {
target: myRect1
property: "opacity"
from: 1
to: 0.2
duration: 1500
}
PropertyAnimation {
target: myRect2
property: "opacity"
from: 1
to: 0.2
duration: 1500
}
}
ParallelAnimation {
PropertyAnimation {
target: myRect1
property: "opacity"
from: 0.2
to: 1
duration: 1500
}
PropertyAnimation {
target: myRect2
property: "opacity"
from: 0.2
to: 1
duration: 1500
}
}
}
}
onVisibleChanged: {
if (visible)
followUpProgressAnimation.start();
}
}
Behavior on Layout.preferredHeight {
NumberAnimation {
duration: 300
easing.type: Easing.InOutQuad
}
}
}
}
} // GridLayout
} // ColumnLayout

View File

@ -0,0 +1,20 @@
import QtQuick
import QtQuick.Controls
import gpt4all
MyToolButton {
property string name
width: 24
height: 24
imageWidth: width
imageHeight: height
ToolTip {
visible: parent.hovered
y: parent.height * 1.5
text: name
delay: Qt.styleHints.mousePressAndHoldInterval
}
Accessible.name: name
}

View File

@ -0,0 +1,139 @@
import Qt5Compat.GraphicalEffects
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import gpt4all
import mysettings
import toolenums
TextArea {
id: myTextArea
property string textContent: ""
visible: textContent != ""
Layout.fillWidth: true
padding: 0
color: {
if (!currentChat.isServer)
return theme.textColor
return theme.white
}
wrapMode: Text.WordWrap
textFormat: TextEdit.PlainText
focus: false
readOnly: true
font.pixelSize: theme.fontSizeLarge
cursorVisible: isCurrentResponse ? currentChat.responseInProgress : false
cursorPosition: text.length
TapHandler {
id: tapHandler
onTapped: function(eventPoint, button) {
var clickedPos = myTextArea.positionAt(eventPoint.position.x, eventPoint.position.y);
var success = textProcessor.tryCopyAtPosition(clickedPos);
if (success)
copyCodeMessage.open();
}
}
MouseArea {
id: conversationMouseArea
anchors.fill: parent
acceptedButtons: Qt.RightButton
onClicked: (mouse) => {
if (mouse.button === Qt.RightButton) {
conversationContextMenu.x = conversationMouseArea.mouseX
conversationContextMenu.y = conversationMouseArea.mouseY
conversationContextMenu.open()
}
}
}
onLinkActivated: function(link) {
if (!isCurrentResponse || !currentChat.responseInProgress)
Qt.openUrlExternally(link)
}
onLinkHovered: function (link) {
if (!isCurrentResponse || !currentChat.responseInProgress)
statusBar.externalHoveredLink = link
}
MyMenu {
id: conversationContextMenu
MyMenuItem {
text: qsTr("Copy")
enabled: myTextArea.selectedText !== ""
height: enabled ? implicitHeight : 0
onTriggered: myTextArea.copy()
}
MyMenuItem {
text: qsTr("Copy Message")
enabled: myTextArea.selectedText === ""
height: enabled ? implicitHeight : 0
onTriggered: {
myTextArea.selectAll()
myTextArea.copy()
myTextArea.deselect()
}
}
MyMenuItem {
text: textProcessor.shouldProcessText ? qsTr("Disable markdown") : qsTr("Enable markdown")
height: enabled ? implicitHeight : 0
onTriggered: {
textProcessor.shouldProcessText = !textProcessor.shouldProcessText;
textProcessor.setValue(textContent);
}
}
}
ChatViewTextProcessor {
id: textProcessor
}
function resetChatViewTextProcessor() {
textProcessor.fontPixelSize = myTextArea.font.pixelSize
textProcessor.codeColors.defaultColor = theme.codeDefaultColor
textProcessor.codeColors.keywordColor = theme.codeKeywordColor
textProcessor.codeColors.functionColor = theme.codeFunctionColor
textProcessor.codeColors.functionCallColor = theme.codeFunctionCallColor
textProcessor.codeColors.commentColor = theme.codeCommentColor
textProcessor.codeColors.stringColor = theme.codeStringColor
textProcessor.codeColors.numberColor = theme.codeNumberColor
textProcessor.codeColors.headerColor = theme.codeHeaderColor
textProcessor.codeColors.backgroundColor = theme.codeBackgroundColor
textProcessor.textDocument = textDocument
textProcessor.setValue(textContent);
}
property bool textProcessorReady: false
Component.onCompleted: {
resetChatViewTextProcessor();
textProcessorReady = true;
}
Connections {
target: myTextArea
function onTextContentChanged() {
if (myTextArea.textProcessorReady)
textProcessor.setValue(textContent);
}
}
Connections {
target: MySettings
function onFontSizeChanged() {
myTextArea.resetChatViewTextProcessor();
}
function onChatThemeChanged() {
myTextArea.resetChatViewTextProcessor();
}
}
Accessible.role: Accessible.Paragraph
Accessible.name: text
Accessible.description: name === "Response: " ? "The response by the model" : "The prompt by the user"
}

File diff suppressed because it is too large

View File

@ -0,0 +1,59 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
MyDialog {
id: confirmationDialog
anchors.centerIn: parent
modal: true
padding: 20
property alias dialogTitle: titleText.text
property alias description: descriptionText.text
Theme { id: theme }
contentItem: ColumnLayout {
Text {
id: titleText
Layout.alignment: Qt.AlignHCenter
textFormat: Text.StyledText
color: theme.textColor
font.pixelSize: theme.fontSizeLarger
font.bold: true
}
Text {
id: descriptionText
Layout.alignment: Qt.AlignHCenter
textFormat: Text.StyledText
color: theme.textColor
font.pixelSize: theme.fontSizeMedium
}
}
footer: DialogButtonBox {
id: dialogBox
padding: 20
alignment: Qt.AlignRight
spacing: 10
MySettingsButton {
text: qsTr("OK")
textColor: theme.mediumButtonText
backgroundColor: theme.mediumButtonBackground
backgroundColorHovered: theme.mediumButtonBackgroundHovered
DialogButtonBox.buttonRole: DialogButtonBox.AcceptRole
}
MySettingsButton {
text: qsTr("Cancel")
DialogButtonBox.buttonRole: DialogButtonBox.RejectRole
}
background: Rectangle {
color: "transparent"
}
Keys.onEnterPressed: confirmationDialog.accept()
Keys.onReturnPressed: confirmationDialog.accept()
}
Component.onCompleted: dialogBox.forceActiveFocus()
}

View File

@ -47,7 +47,7 @@ Rectangle {
id: welcome
Layout.alignment: Qt.AlignHCenter
text: qsTr("Welcome to GPT4All")
font.pixelSize: theme.fontSizeBanner
font.pixelSize: theme.fontSizeBannerLarge
color: theme.titleTextColor
}

View File

@ -10,7 +10,7 @@ import mysettings
import network
MySettingsTab {
onRestoreDefaultsClicked: {
onRestoreDefaults: {
MySettings.restoreLocalDocsDefaults();
}
@ -176,6 +176,7 @@ MySettingsTab {
ListElement { text: qsTr("Application default") }
Component.onCompleted: {
MySettings.embeddingsDeviceList.forEach(d => append({"text": d}));
deviceBox.updateModel();
}
}
Accessible.name: deviceLabel.text

View File

@ -8,10 +8,34 @@ import mysettings
import chatlistmodel
MySettingsTab {
onRestoreDefaultsClicked: {
onRestoreDefaults: {
MySettings.restoreModelDefaults(root.currentModelInfo);
}
title: qsTr("Model")
ConfirmationDialog {
id: resetSystemMessageDialog
property var index: null
property bool resetClears: false
dialogTitle: qsTr("%1 system message?").arg(resetClears ? qsTr("Clear") : qsTr("Reset"))
description: qsTr("The system message will be %1.").arg(resetClears ? qsTr("removed") : qsTr("reset to the default"))
onAccepted: MySettings.resetModelSystemMessage(ModelList.modelInfo(index))
function show(index_, resetClears_) { index = index_; resetClears = resetClears_; open(); }
}
ConfirmationDialog {
id: resetChatTemplateDialog
property bool resetClears: false
property var index: null
dialogTitle: qsTr("%1 chat template?").arg(resetClears ? qsTr("Clear") : qsTr("Reset"))
description: qsTr("The chat template will be %1.").arg(resetClears ? qsTr("erased") : qsTr("reset to the default"))
onAccepted: {
MySettings.resetModelChatTemplate(ModelList.modelInfo(index));
templateTextArea.resetText();
}
function show(index_, resetClears_) { index = index_; resetClears = resetClears_; open(); }
}
contentItem: GridLayout {
id: root
columns: 3
@ -35,6 +59,7 @@ MySettingsTab {
RowLayout {
Layout.fillWidth: true
Layout.maximumWidth: parent.width
Layout.row: 2
Layout.column: 0
Layout.columnSpan: 2
@ -153,69 +178,154 @@ MySettingsTab {
Layout.fillWidth: true
}
MySettingsLabel {
visible: !root.currentModelInfo.isOnline
text: qsTr("System Prompt")
helpText: qsTr("Prefixed at the beginning of every conversation. Must contain the appropriate framing tokens.")
RowLayout {
Layout.row: 7
Layout.column: 0
Layout.columnSpan: 2
Layout.topMargin: 15
Layout.fillWidth: true
Layout.maximumWidth: parent.width
spacing: 10
MySettingsLabel {
id: systemMessageLabel
text: qsTr("System Message")
helpText: qsTr("A message to set the context or guide the behavior of the model. Leave blank for " +
"none. NOTE: Since GPT4All 3.5, this should not contain control tokens.")
onReset: () => resetSystemMessageDialog.show(root.currentModelId, resetClears)
function updateResetButton() {
const info = root.currentModelInfo;
// NOTE: checks if the *override* is set, regardless of whether there is a default
canReset = !!info.id && MySettings.isModelSystemMessageSet(info);
resetClears = !info.defaultSystemMessage;
}
Component.onCompleted: updateResetButton()
Connections {
target: root
function onCurrentModelIdChanged() { systemMessageLabel.updateResetButton(); }
}
Connections {
target: MySettings
function onSystemMessageChanged(info)
{ if (info.id === root.currentModelId) systemMessageLabel.updateResetButton(); }
}
}
Label {
id: systemMessageLabelHelp
visible: systemMessageArea.errState !== "ok"
Layout.alignment: Qt.AlignBottom
Layout.fillWidth: true
Layout.rightMargin: 5
Layout.maximumHeight: systemMessageLabel.height
text: qsTr("System message is not " +
"<a href=\"https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html\">plain text</a>.")
color: systemMessageArea.errState === "error" ? theme.textErrorColor : theme.textWarningColor
font.pixelSize: theme.fontSizeLarger
font.bold: true
wrapMode: Text.Wrap
elide: Text.ElideRight
onLinkActivated: function(link) { Qt.openUrlExternally(link) }
}
}
Rectangle {
id: systemPrompt
visible: !root.currentModelInfo.isOnline
id: systemMessage
Layout.row: 8
Layout.column: 0
Layout.columnSpan: 2
Layout.fillWidth: true
color: "transparent"
Layout.minimumHeight: Math.max(100, systemPromptArea.contentHeight + 20)
Layout.minimumHeight: Math.max(100, systemMessageArea.contentHeight + 20)
MyTextArea {
id: systemPromptArea
id: systemMessageArea
anchors.fill: parent
text: root.currentModelInfo.systemPrompt
property bool isBeingReset: false
function resetText() {
const info = root.currentModelInfo;
isBeingReset = true;
text = (info.id ? info.systemMessage.value : null) ?? "";
isBeingReset = false;
}
Component.onCompleted: resetText()
Connections {
target: MySettings
function onSystemPromptChanged() {
systemPromptArea.text = root.currentModelInfo.systemPrompt;
}
function onSystemMessageChanged(info)
{ if (info.id === root.currentModelId) systemMessageArea.resetText(); }
}
Connections {
target: root
function onCurrentModelInfoChanged() {
systemPromptArea.text = root.currentModelInfo.systemPrompt;
}
function onCurrentModelIdChanged() { systemMessageArea.resetText(); }
}
// strict validation, because setModelSystemMessage clears isLegacy
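// reLegacyCheck matches framing tokens used by legacy system prompts (e.g. "### System", "System:", "<|im_start|>", "<<SYS>>")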
readonly property var reLegacyCheck: (
/(?:^|\s)(?:### *System\b|S(?:ystem|YSTEM):)|<\|(?:im_(?:start|end)|(?:start|end)_header_id|eot_id|SYSTEM_TOKEN)\|>|<<SYS>>/m
)
onTextChanged: {
MySettings.setModelSystemPrompt(root.currentModelInfo, text)
const info = root.currentModelInfo;
if (!info.id) {
errState = "ok";
} else if (info.systemMessage.isLegacy && (isBeingReset || reLegacyCheck.test(text))) {
errState = "error";
} else
errState = reLegacyCheck.test(text) ? "warning" : "ok";
if (info.id && errState !== "error" && !isBeingReset)
MySettings.setModelSystemMessage(info, text);
systemMessageLabel.updateResetButton();
}
Accessible.role: Accessible.EditableText
Accessible.name: systemMessageLabel.text
Accessible.description: systemMessageLabelHelp.text
}
}
RowLayout {
Layout.row: 9
Layout.column: 0
Layout.columnSpan: 2
Layout.topMargin: 15
Layout.fillWidth: true
Layout.maximumWidth: parent.width
spacing: 10
MySettingsLabel {
id: promptTemplateLabel
text: qsTr("Prompt Template")
helpText: qsTr("The template that wraps every prompt.")
id: chatTemplateLabel
text: qsTr("Chat Template")
helpText: qsTr("This Jinja template turns the chat into input for the model.")
onReset: () => resetChatTemplateDialog.show(root.currentModelId, resetClears)
function updateResetButton() {
const info = root.currentModelInfo;
canReset = !!info.id && (
MySettings.isModelChatTemplateSet(info)
|| templateTextArea.text !== (info.chatTemplate.value ?? "")
);
resetClears = !info.defaultChatTemplate;
}
Component.onCompleted: updateResetButton()
Connections {
target: root
function onCurrentModelIdChanged() { chatTemplateLabel.updateResetButton(); }
}
Connections {
target: MySettings
function onChatTemplateChanged(info)
{ if (info.id === root.currentModelId) chatTemplateLabel.updateResetButton(); }
}
}
MySettingsLabel {
id: promptTemplateLabelHelp
text: qsTr("Must contain the string \"%1\" to be replaced with the user's input.")
color: theme.textErrorColor
visible: templateTextArea.text.indexOf("%1") === -1
wrapMode: TextArea.Wrap
Label {
id: chatTemplateLabelHelp
visible: templateTextArea.errState !== "ok"
Layout.alignment: Qt.AlignBottom
Layout.fillWidth: true
Layout.rightMargin: 5
Layout.maximumHeight: chatTemplateLabel.height
text: templateTextArea.errMsg
color: templateTextArea.errState === "error" ? theme.textErrorColor : theme.textWarningColor
font.pixelSize: theme.fontSizeLarger
font.bold: true
wrapMode: Text.Wrap
elide: Text.ElideRight
onLinkActivated: function(link) { Qt.openUrlExternally(link) }
}
}
Rectangle {
id: promptTemplate
id: chatTemplate
Layout.row: 10
Layout.column: 0
Layout.columnSpan: 2
@ -226,27 +336,71 @@ MySettingsTab {
MyTextArea {
id: templateTextArea
anchors.fill: parent
text: root.currentModelInfo.promptTemplate
font: fixedFont
property bool isBeingReset: false
property var errMsg: null
function resetText() {
const info = root.currentModelInfo;
isBeingReset = true;
text = (info.id ? info.chatTemplate.value : null) ?? "";
isBeingReset = false;
}
Component.onCompleted: resetText()
Connections {
target: MySettings
function onPromptTemplateChanged() {
templateTextArea.text = root.currentModelInfo.promptTemplate;
}
function onChatTemplateChanged() { templateTextArea.resetText(); }
}
Connections {
target: root
function onCurrentModelInfoChanged() {
templateTextArea.text = root.currentModelInfo.promptTemplate;
}
function onCurrentModelIdChanged() { templateTextArea.resetText(); }
}
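// Heuristic for legacy (non-Jinja) templates: "%1"/"%2" placeholders, no {% %} / {{ }} Jinja blocks, or no reference to "content".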
function legacyCheck() {
return /%[12]\b/.test(text) || !/\{%.*%\}.*\{\{.*\}\}.*\{%.*%\}/.test(text.replace(/\n/g, ''))
|| !/\bcontent\b/.test(text);
}
onTextChanged: {
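// Classify the edited template: legacy or blank templates are hard errors, Jinja syntax errors are reported inline, and legacy-looking but parseable templates only warn; non-error text is saved to MySettings unless the field is being reset.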
if (templateTextArea.text.indexOf("%1") !== -1) {
MySettings.setModelPromptTemplate(root.currentModelInfo, text)
const info = root.currentModelInfo;
let jinjaError;
if (!info.id) {
errMsg = null;
errState = "ok";
} else if (info.chatTemplate.isLegacy && (isBeingReset || legacyCheck())) {
errMsg = null;
errState = "error";
} else if (text === "" && !info.chatTemplate.isSet) {
errMsg = qsTr("No <a href=\"https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html\">" +
"chat template</a> configured.");
errState = "error";
} else if (/^\s*$/.test(text)) {
errMsg = qsTr("The <a href=\"https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html\">" +
"chat template</a> cannot be blank.");
errState = "error";
} else if ((jinjaError = MySettings.checkJinjaTemplateError(text)) !== null) {
errMsg = qsTr("<a href=\"https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html\">Syntax" +
" error</a>: %1").arg(jinjaError);
errState = "error";
} else if (legacyCheck()) {
errMsg = qsTr("Chat template is not in " +
"<a href=\"https://docs.gpt4all.io/gpt4all_desktop/chat_templates.html\">" +
"Jinja format</a>.")
errState = "warning";
} else {
errState = "ok";
}
if (info.id && errState !== "error" && !isBeingReset)
MySettings.setModelChatTemplate(info, text);
chatTemplateLabel.updateResetButton();
}
Keys.onPressed: event => {
if (event.key === Qt.Key_Tab) {
const a = templateTextArea;
event.accepted = true; // suppress tab
a.insert(a.cursorPosition, '    '); // four spaces
}
}
Accessible.role: Accessible.EditableText
Accessible.name: promptTemplateLabel.text
Accessible.description: promptTemplateLabelHelp.text
Accessible.name: chatTemplateLabel.text
Accessible.description: chatTemplateLabelHelp.text
}
}

View File

@ -60,27 +60,28 @@ ComboBox {
highlighted: comboBox.highlightedIndex === index
}
popup: Popup {
// FIXME This should be made much nicer to take into account lists that are very long so
// that it is scrollable and also sized optimally taking into account the x,y and the content
// width and height as well as the window width and height
y: comboBox.height - 1
width: comboBox.width
implicitHeight: contentItem.implicitHeight + 20
implicitHeight: Math.min(window.height - y, contentItem.implicitHeight + 20)
padding: 0
contentItem: Rectangle {
implicitWidth: myListView.contentWidth
implicitWidth: comboBox.width
implicitHeight: myListView.contentHeight
color: "transparent"
ListView {
id: myListView
radius: 10
ScrollView {
anchors.fill: parent
anchors.margins: 10
clip: true
implicitHeight: contentHeight
model: comboBox.popup.visible ? comboBox.delegateModel : null
currentIndex: comboBox.highlightedIndex
ScrollIndicator.vertical: ScrollIndicator { }
ScrollBar.vertical.policy: ScrollBar.AsNeeded
ScrollBar.horizontal.policy: ScrollBar.AlwaysOff
ListView {
id: myListView
implicitHeight: contentHeight
model: comboBox.popup.visible ? comboBox.delegateModel : null
currentIndex: comboBox.highlightedIndex
ScrollIndicator.vertical: ScrollIndicator { }
}
}
}

View File

@ -0,0 +1,41 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import Qt5Compat.GraphicalEffects
Item {
id: fileIcon
property real iconSize: 24
property string fileName: ""
implicitWidth: iconSize
implicitHeight: iconSize
Image {
id: fileImage
anchors.fill: parent
visible: false
sourceSize.width: iconSize
sourceSize.height: iconSize
mipmap: true
source: {
if (fileIcon.fileName.toLowerCase().endsWith(".txt"))
return "qrc:/gpt4all/icons/file-txt.svg"
else if (fileIcon.fileName.toLowerCase().endsWith(".pdf"))
return "qrc:/gpt4all/icons/file-pdf.svg"
else if (fileIcon.fileName.toLowerCase().endsWith(".md"))
return "qrc:/gpt4all/icons/file-md.svg"
else if (fileIcon.fileName.toLowerCase().endsWith(".xlsx"))
return "qrc:/gpt4all/icons/file-xls.svg"
else if (fileIcon.fileName.toLowerCase().endsWith(".docx"))
return "qrc:/gpt4all/icons/file-docx.svg"
else
return "qrc:/gpt4all/icons/file.svg"
}
}
ColorOverlay {
anchors.fill: fileImage
source: fileImage
color: theme.textColor
}
}

View File

@ -17,6 +17,7 @@ Button {
property color borderColor: "transparent"
property real fontPixelSize: theme.fontSizeLarge
property string toolTip
property alias backgroundRadius: background.radius
contentItem: Text {
text: myButton.text
@ -28,6 +29,7 @@ Button {
Accessible.name: text
}
background: Rectangle {
id: background
radius: 10
border.width: borderWidth
border.color: borderColor

View File

@ -17,13 +17,42 @@ ColumnLayout {
property alias color: mainTextLabel.color
property alias linkColor: mainTextLabel.linkColor
Label {
id: mainTextLabel
color: theme.settingsTitleTextColor
font.pixelSize: theme.fontSizeLarger
font.bold: true
onLinkActivated: function(link) {
root.linkActivated(link);
property var onReset: null
property alias canReset: resetButton.enabled
property bool resetClears: false
Item {
anchors.margins: 5
width: childrenRect.width
height: mainTextLabel.contentHeight
Label {
id: mainTextLabel
anchors.left: parent.left
anchors.top: parent.top
anchors.bottom: parent.bottom
color: theme.settingsTitleTextColor
font.pixelSize: theme.fontSizeLarger
font.bold: true
verticalAlignment: Text.AlignVCenter
onLinkActivated: function(link) {
root.linkActivated(link);
}
}
MySettingsButton {
id: resetButton
anchors.baseline: mainTextLabel.baseline
anchors.left: mainTextLabel.right
height: mainTextLabel.contentHeight
anchors.leftMargin: 10
padding: 2
leftPadding: 10
rightPadding: 10
backgroundRadius: 5
text: resetClears ? qsTr("Clear") : qsTr("Reset")
visible: root.onReset !== null
onClicked: root.onReset()
}
}
Label {

View File

@ -2,6 +2,7 @@ import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Controls.impl
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel

View File

@ -9,7 +9,7 @@ Item {
property string title: ""
property Item contentItem: null
property bool showRestoreDefaultsButton: true
signal restoreDefaultsClicked
signal restoreDefaults
onContentItemChanged: function() {
if (contentItem) {
@ -19,6 +19,13 @@ Item {
}
}
ConfirmationDialog {
id: restoreDefaultsDialog
dialogTitle: qsTr("Restore defaults?")
description: qsTr("This page of settings will be reset to the defaults.")
onAccepted: root.restoreDefaults()
}
ScrollView {
id: scrollView
width: parent.width
@ -47,6 +54,7 @@ Item {
Column {
id: contentInner
Layout.fillWidth: true
Layout.maximumWidth: parent.width
}
Item {
@ -63,9 +71,7 @@ Item {
Accessible.role: Accessible.Button
Accessible.name: text
Accessible.description: qsTr("Restores settings dialog to a default state")
onClicked: {
root.restoreDefaultsClicked();
}
onClicked: restoreDefaultsDialog.open()
}
}
}

View File

@ -0,0 +1,26 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import mysettings
import mysettingsenums
MySettingsButton {
property bool isSelected: false
contentItem: Text {
text: parent.text
horizontalAlignment: Qt.AlignCenter
color: isSelected ? theme.titleTextColor : theme.styledTextColor
font.pixelSize: theme.fontSizeLarger
}
background: Item {
visible: isSelected || hovered
Rectangle {
anchors.bottom: parent.bottom
anchors.left: parent.left
anchors.right: parent.right
height: 3
color: isSelected ? theme.titleTextColor : theme.styledTextColorLighter
}
}
}

View File

@ -5,18 +5,27 @@ import QtQuick.Controls.Basic
TextArea {
id: myTextArea
property string errState: "ok" // one of "ok", "error", "warning"
color: enabled ? theme.textColor : theme.mutedTextColor
placeholderTextColor: theme.mutedTextColor
font.pixelSize: theme.fontSizeLarge
background: Rectangle {
implicitWidth: 150
color: theme.controlBackground
border.width: 1
border.color: theme.controlBorder
border.width: errState === "ok" ? 1 : 2
border.color: {
switch (errState) {
case "ok": return theme.controlBorder;
case "warning": return theme.textWarningColor;
case "error": return theme.textErrorColor;
}
}
radius: 10
}
padding: 10
wrapMode: TextArea.Wrap
ToolTip.delay: Qt.styleHints.mousePressAndHoldInterval
}
}

View File

@ -16,6 +16,7 @@ Button {
property alias fillMode: image.fillMode
property alias imageWidth: image.sourceSize.width
property alias imageHeight: image.sourceSize.height
property alias bgTransform: background.transform
contentItem: Text {
text: myButton.text
horizontalAlignment: Text.AlignHCenter
@ -26,6 +27,7 @@ Button {
}
background: Item {
id: background
anchors.fill: parent
Rectangle {
anchors.fill: parent
@ -47,7 +49,7 @@ Button {
ColorOverlay {
anchors.fill: image
source: image
color: myButton.hovered ? backgroundColorHovered : backgroundColor
color: !myButton.enabled ? theme.mutedTextColor : myButton.hovered ? backgroundColorHovered : backgroundColor
}
}
Accessible.role: Accessible.Button

View File

@ -0,0 +1,221 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import QtQuick.Dialogs
import Qt.labs.folderlistmodel
import Qt5Compat.GraphicalEffects
import llm
import chatlistmodel
import download
import modellist
import network
import gpt4all
import mysettings
import localdocs
Rectangle {
property alias providerName: providerNameLabel.text
property alias providerImage: myimage.source
property alias providerDesc: providerDescLabel.text
property string providerBaseUrl: ""
property bool providerIsCustom: false
property var modelWhitelist: null
color: theme.conversationBackground
radius: 10
border.width: 1
border.color: theme.controlBorder
implicitHeight: topColumn.height + bottomColumn.height + 33 * theme.fontScale
ColumnLayout {
id: topColumn
anchors.left: parent.left
anchors.right: parent.right
anchors.top: parent.top
anchors.margins: 20
spacing: 15 * theme.fontScale
RowLayout {
Layout.alignment: Qt.AlignTop
spacing: 10
Item {
Layout.preferredWidth: 27 * theme.fontScale
Layout.preferredHeight: 27 * theme.fontScale
Layout.alignment: Qt.AlignLeft
Image {
id: myimage
anchors.centerIn: parent
sourceSize.width: parent.width
sourceSize.height: parent.height
mipmap: true
fillMode: Image.PreserveAspectFit
}
}
Label {
id: providerNameLabel
color: theme.textColor
font.pixelSize: theme.fontSizeBanner
}
}
Label {
id: providerDescLabel
Layout.fillWidth: true
wrapMode: Text.Wrap
color: theme.settingsTitleTextColor
font.pixelSize: theme.fontSizeLarge
onLinkActivated: function(link) { Qt.openUrlExternally(link); }
MouseArea {
anchors.fill: parent
acceptedButtons: Qt.NoButton // pass clicks to parent
cursorShape: parent.hoveredLink ? Qt.PointingHandCursor : Qt.ArrowCursor
}
}
}
ColumnLayout {
id: bottomColumn
anchors.left: parent.left
anchors.right: parent.right
anchors.bottom: parent.bottom
anchors.margins: 20
spacing: 30
ColumnLayout {
MySettingsLabel {
text: qsTr("API Key")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
MyTextField {
id: apiKeyField
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $API_KEY is empty."));
apiKeyField.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
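// Refresh the selectable model list for this provider whenever the API key changes; custom providers use a free-form model name instead.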
apiKeyField.placeholderTextColor = theme.mutedTextColor;
if (!providerIsCustom) {
let models = ModelList.remoteModelList(apiKeyField.text, providerBaseUrl);
if (modelWhitelist !== null)
models = models.filter(m => modelWhitelist.includes(m));
myModelList.model = models;
myModelList.currentIndex = -1;
}
}
placeholderText: qsTr("enter $API_KEY")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
Accessible.description: qsTr("Whether the file hash is being calculated")
}
}
ColumnLayout {
visible: providerIsCustom
MySettingsLabel {
text: qsTr("Base Url")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
MyTextField {
id: baseUrlField
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $BASE_URL is empty."));
baseUrlField.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
baseUrlField.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $BASE_URL")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
}
}
ColumnLayout {
visible: providerIsCustom
MySettingsLabel {
text: qsTr("Model Name")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
MyTextField {
id: modelNameField
Layout.fillWidth: true
font.pixelSize: theme.fontSizeLarge
wrapMode: Text.WrapAnywhere
function showError() {
messageToast.show(qsTr("ERROR: $MODEL_NAME is empty."))
modelNameField.placeholderTextColor = theme.textErrorColor;
}
onTextChanged: {
modelNameField.placeholderTextColor = theme.mutedTextColor;
}
placeholderText: qsTr("enter $MODEL_NAME")
Accessible.role: Accessible.EditableText
Accessible.name: placeholderText
}
}
ColumnLayout {
visible: myModelList.count > 0 && !providerIsCustom
MySettingsLabel {
text: qsTr("Models")
font.bold: true
font.pixelSize: theme.fontSizeLarge
color: theme.settingsTitleTextColor
}
RowLayout {
spacing: 10
MyComboBox {
Layout.fillWidth: true
id: myModelList
currentIndex: -1;
}
}
}
MySettingsButton {
id: installButton
Layout.alignment: Qt.AlignRight
text: qsTr("Install")
font.pixelSize: theme.fontSizeLarge
property string apiKeyText: apiKeyField.text.trim()
property string baseUrlText: providerIsCustom ? baseUrlField.text.trim() : providerBaseUrl.trim()
property string modelNameText: providerIsCustom ? modelNameField.text.trim() : myModelList.currentText.trim()
enabled: apiKeyText !== "" && baseUrlText !== "" && modelNameText !== ""
onClicked: {
Download.installCompatibleModel(
modelNameText,
apiKeyText,
baseUrlText,
);
myModelList.currentIndex = -1;
}
Accessible.role: Accessible.Button
Accessible.name: qsTr("Install")
Accessible.description: qsTr("Install remote model")
}
}
}

View File

@ -115,7 +115,7 @@ model release that uses your data!")
anchors.right: parent.right
Label {
id: optInStatistics
text: "Opt-in to anonymous usage analytics used to improve GPT4All"
text: qsTr("Opt-in to anonymous usage analytics used to improve GPT4All")
Layout.row: 0
Layout.column: 0
color: theme.textColor
@ -229,7 +229,7 @@ model release that uses your data!")
Label {
id: optInNetwork
text: "Opt-in to anonymous sharing of chats to the GPT4All Datalake"
text: qsTr("Opt-in to anonymous sharing of chats to the GPT4All Datalake")
Layout.row: 1
Layout.column: 0
color: theme.textColor

View File

@ -1,46 +0,0 @@
import QtCore
import QtQuick
import QtQuick.Controls
import QtQuick.Controls.Basic
import QtQuick.Layouts
import llm
import mysettings
MyDialog {
id: switchModelDialog
anchors.centerIn: parent
modal: true
padding: 20
property int index: -1
Theme {
id: theme
}
contentItem: Text {
textFormat: Text.StyledText
text: qsTr("<b>Warning:</b> changing the model will erase the current conversation. Do you wish to continue?")
color: theme.textColor
font.pixelSize: theme.fontSizeLarge
}
footer: DialogButtonBox {
id: dialogBox
padding: 20
alignment: Qt.AlignRight
spacing: 10
MySettingsButton {
text: qsTr("Continue")
Accessible.description: qsTr("Continue with model loading")
DialogButtonBox.buttonRole: DialogButtonBox.AcceptRole
}
MySettingsButton {
text: qsTr("Cancel")
Accessible.description: qsTr("Cancel")
DialogButtonBox.buttonRole: DialogButtonBox.RejectRole
}
background: Rectangle {
color: "transparent"
}
}
}

View File

@ -64,6 +64,9 @@ QtObject {
property color green800: Qt.hsla(123/360, 0.17, 0.24)
property color green900: Qt.hsla(124/360, 0.17, 0.20)
property color green950: Qt.hsla(125/360, 0.22, 0.10)
property color green300_sat: Qt.hsla(122/360, 0.24, 0.73)
property color green400_sat: Qt.hsla(122/360, 0.23, 0.58)
property color green450_sat: Qt.hsla(122/360, 0.23, 0.52)
// yellow
property color yellow0: Qt.hsla(47/360, 0.90, 0.99)
@ -99,6 +102,7 @@ QtObject {
property color purple200: Qt.hsla(279/360, 1.0, 0.91)
property color purple300: Qt.hsla(279/360, 1.0, 0.84)
property color purple400: Qt.hsla(279/360, 1.0, 0.73)
property color purple450: Qt.hsla(279/360, 1.0, 0.68)
property color purple500: Qt.hsla(279/360, 1.0, 0.63)
property color purple600: Qt.hsla(279/360, 1.0, 0.53)
property color purple700: Qt.hsla(279/360, 1.0, 0.47)
@ -408,6 +412,39 @@ QtObject {
}
}
property color mediumButtonBackground: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
return purple400
case MySettingsEnums.ChatTheme.Dark:
return green400_sat
default:
return green400_sat
}
}
property color mediumButtonBackgroundHovered: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
return purple450
case MySettingsEnums.ChatTheme.Dark:
return green450_sat
default:
return green300_sat
}
}
property color mediumButtonText: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
return textColor
case MySettingsEnums.ChatTheme.Dark:
return textColor
default:
return white
}
}
property color darkButtonText: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
@ -922,16 +959,8 @@ QtObject {
}
}
property color textErrorColor: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
return red400
case MySettingsEnums.ChatTheme.Dark:
return red400
default:
return red400
}
}
readonly property color textErrorColor: red400
readonly property color textWarningColor: yellow400
property color settingsTitleTextColor: {
switch (MySettings.chatTheme) {
@ -988,6 +1017,17 @@ QtObject {
}
}
property color styledTextColorLighter: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
return purple50
case MySettingsEnums.ChatTheme.Dark:
return yellow0
default:
return grayRed400
}
}
property color styledTextColor2: {
switch (MySettings.chatTheme) {
case MySettingsEnums.ChatTheme.LegacyDark:
@ -1227,5 +1267,6 @@ QtObject {
property real fontSizeLarger: 14 * fontScale
property real fontSizeLargest: 18 * fontScale
property real fontSizeBannerSmall: 24 * fontScale**.8
property real fontSizeBanner: 48 * fontScale**.8
property real fontSizeBanner: 32 * fontScale**.8
property real fontSizeBannerLarge: 48 * fontScale**.8
}

View File

@ -1,23 +1,32 @@
#include "chat.h"
#include "chatlistmodel.h"
#include "mysettings.h"
#include "network.h"
#include "server.h"
#include "tool.h"
#include "toolcallparser.h"
#include "toolmodel.h"
#include <QBuffer>
#include <QByteArray>
#include <QDataStream>
#include <QDebug>
#include <QFile>
#include <QFileInfo>
#include <QIODevice>
#include <QLatin1String>
#include <QMap>
#include <QRegularExpression>
#include <QString>
#include <QStringList>
#include <QVariant>
#include <Qt>
#include <QtAssert>
#include <QtLogging>
#include <optional>
#include <utility>
using namespace ToolEnums;
Chat::Chat(QObject *parent)
: QObject(parent)
, m_id(Network::globalInstance()->generateUniqueId())
@ -31,7 +40,7 @@ Chat::Chat(QObject *parent)
connectLLM();
}
Chat::Chat(bool isServer, QObject *parent)
Chat::Chat(server_tag_t, QObject *parent)
: QObject(parent)
, m_id(Network::globalInstance()->generateUniqueId())
, m_name(tr("Server Chat"))
@ -61,13 +70,12 @@ void Chat::connectLLM()
connect(m_llmodel, &ChatLLM::responseStopped, this, &Chat::responseStopped, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::modelLoadingError, this, &Chat::handleModelLoadingError, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::modelLoadingWarning, this, &Chat::modelLoadingWarning, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::restoringFromTextChanged, this, &Chat::handleRestoringFromText, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::generatedNameChanged, this, &Chat::generatedNameChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::generatedQuestionFinished, this, &Chat::generatedQuestionFinished, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::reportSpeed, this, &Chat::handleTokenSpeedChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::loadedModelInfoChanged, this, &Chat::loadedModelInfoChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::databaseResultsChanged, this, &Chat::handleDatabaseResultsChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::modelInfoChanged, this, &Chat::handleModelInfoChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::modelInfoChanged, this, &Chat::handleModelChanged, Qt::QueuedConnection);
connect(m_llmodel, &ChatLLM::trySwitchContextOfLoadedModelCompleted, this, &Chat::handleTrySwitchContextOfLoadedModelCompleted, Qt::QueuedConnection);
connect(this, &Chat::promptRequested, m_llmodel, &ChatLLM::prompt, Qt::QueuedConnection);
@ -75,11 +83,10 @@ void Chat::connectLLM()
connect(this, &Chat::loadDefaultModelRequested, m_llmodel, &ChatLLM::loadDefaultModel, Qt::QueuedConnection);
connect(this, &Chat::generateNameRequested, m_llmodel, &ChatLLM::generateName, Qt::QueuedConnection);
connect(this, &Chat::regenerateResponseRequested, m_llmodel, &ChatLLM::regenerateResponse, Qt::QueuedConnection);
connect(this, &Chat::resetResponseRequested, m_llmodel, &ChatLLM::resetResponse, Qt::QueuedConnection);
connect(this, &Chat::resetContextRequested, m_llmodel, &ChatLLM::resetContext, Qt::QueuedConnection);
connect(this, &Chat::processSystemPromptRequested, m_llmodel, &ChatLLM::processSystemPrompt, Qt::QueuedConnection);
connect(this, &Chat::collectionListChanged, m_collectionModel, &LocalDocsCollectionsModel::setCollections);
connect(ModelList::globalInstance(), &ModelList::modelInfoChanged, this, &Chat::handleModelInfoChanged);
}
void Chat::reset()
@ -87,28 +94,17 @@ void Chat::reset()
stopGenerating();
// Erase our current on-disk representation as we're completely resetting the chat along with its id
ChatListModel::globalInstance()->removeChatFile(this);
emit resetContextRequested();
m_id = Network::globalInstance()->generateUniqueId();
emit idChanged(m_id);
// NOTE: We deliberately do not reset the name or creation date to indicate that this was originally
// an older chat that was reset for another purpose. Resetting this data will lead to the chat
// name label changing back to 'New Chat' and showing up in the chat model list as a 'New Chat'
// further down in the list. This might surprise the user. In the future, we might get rid of
// the "reset context" button in the UI. Right now, by changing the model in the combobox dropdown
// we effectively do a reset context. We *have* to do this right now when switching between different
// types of models. The only way to get rid of that would be a very long recalculate where we rebuild
// the context if we switch between different types of models. Probably the right way to fix this
// is to allow switching models but throwing up a dialog warning users if we switch between types
// of models that a long recalculation will ensue.
// the "reset context" button in the UI.
m_chatModel->clear();
m_needsSave = true;
}
void Chat::processSystemPrompt()
{
emit processSystemPromptRequested();
}
void Chat::resetResponseState()
{
if (m_responseInProgress && m_responseState == Chat::LocalDocsRetrieval)
@ -132,7 +128,13 @@ void Chat::newPromptResponsePair(const QString &prompt, const QList<QUrl> &attac
Q_ASSERT(url.isLocalFile());
const QString localFilePath = url.toLocalFile();
const QFileInfo info(localFilePath);
Q_ASSERT(info.suffix() == "xlsx"); // We only support excel right now
Q_ASSERT(
info.suffix().toLower() == "xlsx" ||
info.suffix().toLower() == "txt" ||
info.suffix().toLower() == "md" ||
info.suffix().toLower() == "rst"
);
PromptAttachment attached;
attached.url = url;
@ -154,53 +156,52 @@ void Chat::newPromptResponsePair(const QString &prompt, const QList<QUrl> &attac
if (!attachedContexts.isEmpty())
promptPlusAttached = attachedContexts.join("\n\n") + "\n\n" + prompt;
newPromptResponsePairInternal(prompt, attachments);
emit resetResponseRequested();
resetResponseState();
if (int count = m_chatModel->count())
m_chatModel->updateCurrentResponse(count - 1, false);
m_chatModel->appendPrompt(prompt, attachments);
m_chatModel->appendResponse();
this->prompt(promptPlusAttached);
emit promptRequested(m_collections);
m_needsSave = true;
}
void Chat::prompt(const QString &prompt)
void Chat::regenerateResponse(int index)
{
resetResponseState();
emit promptRequested(m_collections, prompt);
emit regenerateResponseRequested(index);
m_needsSave = true;
}
void Chat::regenerateResponse()
QVariant Chat::popPrompt(int index)
{
const int index = m_chatModel->count() - 1;
m_chatModel->updateSources(index, QList<ResultInfo>());
emit regenerateResponseRequested();
auto content = m_llmodel->popPrompt(index);
m_needsSave = true;
if (content) return *content;
return QVariant::fromValue(nullptr);
}
void Chat::stopGenerating()
{
// In the future, if we have more than one tool, we'll have to keep track of which tools may be
// running, but for now we only have one.
Tool *toolInstance = ToolModel::globalInstance()->get(ToolCallConstants::CodeInterpreterFunction);
Q_ASSERT(toolInstance);
toolInstance->interrupt();
m_llmodel->stopGenerating();
}
QString Chat::response() const
{
return m_response;
}
Chat::ResponseState Chat::responseState() const
{
return m_responseState;
}
void Chat::handleResponseChanged(const QString &response)
void Chat::handleResponseChanged()
{
if (m_responseState != Chat::ResponseGeneration) {
m_responseState = Chat::ResponseGeneration;
emit responseStateChanged();
}
m_response = response;
const int index = m_chatModel->count() - 1;
m_chatModel->updateValue(index, this->response());
emit responseChanged();
}
void Chat::handleModelLoadingPercentageChanged(float loadingPercentage)
@ -240,20 +241,79 @@ void Chat::responseStopped(qint64 promptResponseMs)
{
m_tokenSpeed = QString();
emit tokenSpeedChanged();
emit responseChanged();
m_responseInProgress = false;
m_responseState = Chat::ResponseStopped;
emit responseInProgressChanged();
emit responseStateChanged();
if (m_generatedName.isEmpty())
emit generateNameRequested();
Network::globalInstance()->trackChatEvent("response_complete", {
const QString possibleToolcall = m_chatModel->possibleToolcall();
Network::globalInstance()->trackChatEvent("response_stopped", {
{"first", m_firstResponse},
{"message_count", chatModel()->count()},
{"$duration", promptResponseMs / 1000.},
});
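// If the response ended with a complete tool call (and not a <think> block), run the tool; otherwise the response is complete.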
ToolCallParser parser;
parser.update(possibleToolcall.toUtf8());
if (parser.state() == ToolEnums::ParseState::Complete && parser.startTag() != ToolCallConstants::ThinkStartTag)
processToolCall(parser.toolCall());
else
responseComplete();
}
void Chat::processToolCall(const QString &toolCall)
{
m_responseState = Chat::ToolCallGeneration;
emit responseStateChanged();
// Regex to remove the formatting around the code
static const QRegularExpression regex("^\\s*```javascript\\s*|\\s*```\\s*$");
QString code = toolCall;
code.remove(regex);
code = code.trimmed();
// Right now the code interpreter is the only available tool
Tool *toolInstance = ToolModel::globalInstance()->get(ToolCallConstants::CodeInterpreterFunction);
Q_ASSERT(toolInstance);
connect(toolInstance, &Tool::runComplete, this, &Chat::toolCallComplete, Qt::SingleShotConnection);
// The param is the code
const ToolParam param = { "code", ToolEnums::ParamType::String, code };
m_responseInProgress = true;
emit responseInProgressChanged();
toolInstance->run({param});
}
void Chat::toolCallComplete(const ToolCallInfo &info)
{
// Update the current response with meta information about the tool call and re-parent it
m_chatModel->updateToolCall(info);
++m_consecutiveToolCalls;
m_responseInProgress = false;
emit responseInProgressChanged();
// We limit the number of consecutive tool calls, otherwise we could get into an endless loop
if (m_consecutiveToolCalls < 3 || info.error == ToolEnums::Error::NoError) {
resetResponseState();
emit promptRequested(m_collections); // triggers a new response
return;
}
responseComplete();
}
void Chat::responseComplete()
{
if (m_generatedName.isEmpty())
emit generateNameRequested();
m_responseState = Chat::ResponseStopped;
emit responseStateChanged();
m_consecutiveToolCalls = 0;
m_firstResponse = false;
}
@ -274,25 +334,6 @@ void Chat::setModelInfo(const ModelInfo &modelInfo)
emit modelChangeRequested(modelInfo);
}
// the server needs to block until response is reset, so it calls resetResponse on its own m_llmThread
void Chat::serverNewPromptResponsePair(const QString &prompt, const QList<PromptAttachment> &attachments)
{
newPromptResponsePairInternal(prompt, attachments);
}
void Chat::newPromptResponsePairInternal(const QString &prompt, const QList<PromptAttachment> &attachments)
{
resetResponseState();
m_chatModel->updateCurrentResponse(m_chatModel->count() - 1, false);
m_chatModel->appendPrompt("Prompt: ", prompt, attachments);
m_chatModel->appendResponse("Response: ");
}
bool Chat::restoringFromText() const
{
return m_llmodel->restoringFromText();
}
void Chat::unloadAndDeleteLater()
{
if (!isModelLoaded()) {
@ -342,11 +383,8 @@ void Chat::trySwitchContextOfLoadedModel()
void Chat::generatedNameChanged(const QString &name)
{
// Only use the first three words maximum and remove newlines and extra spaces
m_generatedName = name.simplified();
QStringList words = m_generatedName.split(' ', Qt::SkipEmptyParts);
int wordCount = qMin(7, words.size());
m_name = words.mid(0, wordCount).join(' ');
m_generatedName = name;
m_name = name;
emit nameChanged();
m_needsSave = true;
}
@ -358,12 +396,6 @@ void Chat::generatedQuestionFinished(const QString &question)
m_needsSave = true;
}
void Chat::handleRestoringFromText()
{
Network::globalInstance()->trackChatEvent("recalc_context", { {"length", m_chatModel->count()} });
emit restoringFromTextChanged();
}
void Chat::handleModelLoadingError(const QString &error)
{
if (!error.isEmpty()) {
@ -398,12 +430,19 @@ QString Chat::fallbackReason() const
void Chat::handleDatabaseResultsChanged(const QList<ResultInfo> &results)
{
m_databaseResults = results;
const int index = m_chatModel->count() - 1;
m_chatModel->updateSources(index, m_databaseResults);
m_needsSave = true;
}
// we need to notify listeners of the modelInfo property when its properties are updated,
// since it's a gadget and can't do that on its own
void Chat::handleModelInfoChanged(const ModelInfo &modelInfo)
{
if (!m_modelInfo.id().isNull() && modelInfo.id() == m_modelInfo.id())
emit modelInfoChanged();
}
// react if a new model is loaded
void Chat::handleModelChanged(const ModelInfo &modelInfo)
{
if (m_modelInfo == modelInfo)
return;
@ -432,10 +471,7 @@ bool Chat::serialize(QDataStream &stream, int version) const
if (version >= 3)
stream << m_collections;
const bool serializeKV = MySettings::globalInstance()->saveChatsContext();
if (version >= 6)
stream << serializeKV;
if (!m_llmodel->serialize(stream, version, serializeKV))
if (!m_llmodel->serialize(stream, version))
return false;
if (!m_chatModel->serialize(stream, version))
return false;
@ -464,19 +500,13 @@ bool Chat::deserialize(QDataStream &stream, int version)
if (!m_modelInfo.id().isEmpty())
emit modelInfoChanged();
bool discardKV = m_modelInfo.id().isEmpty();
if (version >= 3) {
stream >> m_collections;
emit collectionListChanged(m_collections);
}
bool deserializeKV = true;
if (version >= 6)
stream >> deserializeKV;
m_llmodel->setModelInfo(m_modelInfo);
if (!m_llmodel->deserialize(stream, version, deserializeKV, discardKV))
if (!m_llmodel->deserialize(stream, version))
return false;
if (!m_chatModel->deserialize(stream, version))
return false;

View File

@ -3,19 +3,26 @@
#include "chatllm.h"
#include "chatmodel.h"
#include "database.h" // IWYU pragma: keep
#include "localdocsmodel.h" // IWYU pragma: keep
#include "database.h"
#include "localdocsmodel.h"
#include "modellist.h"
#include "tool.h"
#include <QDateTime>
#include <QList>
#include <QObject>
#include <QQmlEngine>
#include <QQmlEngine> // IWYU pragma: keep
#include <QString>
#include <QtGlobal>
#include <QStringList> // IWYU pragma: keep
#include <QUrl>
#include <QVariant>
#include <QtTypes>
// IWYU pragma: no_forward_declare LocalDocsCollectionsModel
// IWYU pragma: no_forward_declare ToolCallInfo
class QDataStream;
class Chat : public QObject
{
Q_OBJECT
@ -25,10 +32,8 @@ class Chat : public QObject
Q_PROPERTY(bool isModelLoaded READ isModelLoaded NOTIFY isModelLoadedChanged)
Q_PROPERTY(bool isCurrentlyLoading READ isCurrentlyLoading NOTIFY isCurrentlyLoadingChanged)
Q_PROPERTY(float modelLoadingPercentage READ modelLoadingPercentage NOTIFY modelLoadingPercentageChanged)
Q_PROPERTY(QString response READ response NOTIFY responseChanged)
Q_PROPERTY(ModelInfo modelInfo READ modelInfo WRITE setModelInfo NOTIFY modelInfoChanged)
Q_PROPERTY(bool responseInProgress READ responseInProgress NOTIFY responseInProgressChanged)
Q_PROPERTY(bool restoringFromText READ restoringFromText NOTIFY restoringFromTextChanged)
Q_PROPERTY(bool isServer READ isServer NOTIFY isServerChanged)
Q_PROPERTY(ResponseState responseState READ responseState NOTIFY responseStateChanged)
Q_PROPERTY(QList<QString> collectionList READ collectionList NOTIFY collectionListChanged)
@ -45,18 +50,23 @@ class Chat : public QObject
QML_UNCREATABLE("Only creatable from c++!")
public:
// tag for constructing a server chat
struct server_tag_t { explicit server_tag_t() = default; };
static inline constexpr server_tag_t server_tag = server_tag_t();
enum ResponseState {
ResponseStopped,
LocalDocsRetrieval,
LocalDocsProcessing,
PromptProcessing,
GeneratingQuestions,
ResponseGeneration
ResponseGeneration,
ToolCallGeneration
};
Q_ENUM(ResponseState)
explicit Chat(QObject *parent = nullptr);
explicit Chat(bool isServer, QObject *parent = nullptr);
explicit Chat(server_tag_t, QObject *parent = nullptr);
virtual ~Chat();
void destroy() { m_llmodel->destroy(); }
void connectLLM();
@ -74,23 +84,20 @@ public:
bool isNewChat() const { return m_name == tr("New Chat") && !m_chatModel->count(); }
Q_INVOKABLE void reset();
Q_INVOKABLE void processSystemPrompt();
bool isModelLoaded() const { return m_modelLoadingPercentage == 1.0f; }
bool isCurrentlyLoading() const { return m_modelLoadingPercentage > 0.0f && m_modelLoadingPercentage < 1.0f; }
float modelLoadingPercentage() const { return m_modelLoadingPercentage; }
Q_INVOKABLE void newPromptResponsePair(const QString &prompt, const QList<QUrl> &attachedUrls = {});
Q_INVOKABLE void prompt(const QString &prompt);
Q_INVOKABLE void regenerateResponse();
Q_INVOKABLE void regenerateResponse(int index);
Q_INVOKABLE QVariant popPrompt(int index);
Q_INVOKABLE void stopGenerating();
QList<ResultInfo> databaseResults() const { return m_databaseResults; }
QString response() const;
bool responseInProgress() const { return m_responseInProgress; }
ResponseState responseState() const;
ModelInfo modelInfo() const;
void setModelInfo(const ModelInfo &modelInfo);
bool restoringFromText() const;
Q_INVOKABLE void unloadModel();
Q_INVOKABLE void reloadModel();
@ -111,7 +118,6 @@ public:
Q_INVOKABLE bool hasCollection(const QString &collection) const;
Q_INVOKABLE void addCollection(const QString &collection);
Q_INVOKABLE void removeCollection(const QString &collection);
void resetResponseState();
QString modelLoadingError() const { return m_modelLoadingError; }
@ -126,9 +132,10 @@ public:
QList<QString> generatedQuestions() const { return m_generatedQuestions; }
bool needsSave() const { return m_needsSave; }
void setNeedsSave(bool n) { m_needsSave = n; }
public Q_SLOTS:
void serverNewPromptResponsePair(const QString &prompt, const QList<PromptAttachment> &attachments = {});
void resetResponseState();
Q_SIGNALS:
void idChanged(const QString &id);
@ -138,17 +145,14 @@ Q_SIGNALS:
void isCurrentlyLoadingChanged();
void modelLoadingPercentageChanged();
void modelLoadingWarning(const QString &warning);
void responseChanged();
void responseInProgressChanged();
void responseStateChanged();
void promptRequested(const QList<QString> &collectionList, const QString &prompt);
void regenerateResponseRequested();
void promptRequested(const QStringList &enabledCollections);
void regenerateResponseRequested(int index);
void resetResponseRequested();
void resetContextRequested();
void processSystemPromptRequested();
void modelChangeRequested(const ModelInfo &modelInfo);
void modelInfoChanged();
void restoringFromTextChanged();
void loadDefaultModelRequested();
void generateNameRequested();
void modelLoadingErrorChanged();
@ -163,23 +167,23 @@ Q_SIGNALS:
void generatedQuestionsChanged();
private Q_SLOTS:
void handleResponseChanged(const QString &response);
void handleResponseChanged();
void handleModelLoadingPercentageChanged(float);
void promptProcessing();
void generatingQuestions();
void responseStopped(qint64 promptResponseMs);
void processToolCall(const QString &toolCall);
void toolCallComplete(const ToolCallInfo &info);
void responseComplete();
void generatedNameChanged(const QString &name);
void generatedQuestionFinished(const QString &question);
void handleRestoringFromText();
void handleModelLoadingError(const QString &error);
void handleTokenSpeedChanged(const QString &tokenSpeed);
void handleDatabaseResultsChanged(const QList<ResultInfo> &results);
void handleModelInfoChanged(const ModelInfo &modelInfo);
void handleModelChanged(const ModelInfo &modelInfo);
void handleTrySwitchContextOfLoadedModelCompleted(int value);
private:
void newPromptResponsePairInternal(const QString &prompt, const QList<PromptAttachment> &attachments);
private:
QString m_id;
QString m_name;
@ -190,7 +194,6 @@ private:
QString m_tokenSpeed;
QString m_device;
QString m_fallbackReason;
QString m_response;
QList<QString> m_collections;
QList<QString> m_generatedQuestions;
ChatModel *m_chatModel;
@ -210,6 +213,7 @@ private:
// - The chat was freshly created during this launch.
// - The chat was changed after loading it from disk.
bool m_needsSave = true;
int m_consecutiveToolCalls = 0;
};
#endif // CHAT_H

View File

@ -1,29 +1,40 @@
#include "chatapi.h"
#include <gpt4all-backend/llmodel.h>
#include "utils.h"
#include <fmt/format.h>
#include <QAnyStringView>
#include <QCoreApplication>
#include <QGuiApplication>
#include <QDebug>
#include <QGuiApplication>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QLatin1String>
#include <QNetworkAccessManager>
#include <QNetworkRequest>
#include <QStringView>
#include <QThread>
#include <QUrl>
#include <QUtf8StringView> // IWYU pragma: keep
#include <QVariant>
#include <QXmlStreamReader>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <expected>
#include <functional>
#include <iostream>
#include <utility>
using namespace Qt::Literals::StringLiterals;
//#define DEBUG
ChatAPI::ChatAPI()
: QObject(nullptr)
, m_modelName("gpt-3.5-turbo")
@ -51,7 +62,6 @@ bool ChatAPI::loadModel(const std::string &modelPath, int n_ctx, int ngl)
void ChatAPI::setThreadCount(int32_t n_threads)
{
Q_UNUSED(n_threads);
qt_noop();
}
int32_t ChatAPI::threadCount() const
@ -68,89 +78,119 @@ bool ChatAPI::isModelLoaded() const
return true;
}
// All three of the state virtual functions are handled specially inside of chatllm save/restore
size_t ChatAPI::stateSize() const
static auto parsePrompt(QXmlStreamReader &xml) -> std::expected<QJsonArray, QString>
{
throw std::logic_error("not implemented");
QJsonArray messages;
auto xmlError = [&xml] {
return std::unexpected(u"%1:%2: %3"_s.arg(xml.lineNumber()).arg(xml.columnNumber()).arg(xml.errorString()));
};
if (xml.hasError())
return xmlError();
if (xml.atEnd())
return messages;
// skip header
bool foundElement = false;
do {
switch (xml.readNext()) {
using enum QXmlStreamReader::TokenType;
case Invalid:
return xmlError();
case EndDocument:
return messages;
default:
foundElement = true;
case StartDocument:
case Comment:
case DTD:
case ProcessingInstruction:
;
}
} while (!foundElement);
// document body loop
bool foundRoot = false;
for (;;) {
switch (xml.tokenType()) {
using enum QXmlStreamReader::TokenType;
case StartElement:
{
auto name = xml.name();
if (!foundRoot) {
if (name != "chat"_L1)
return std::unexpected(u"unexpected tag: %1"_s.arg(name));
foundRoot = true;
} else {
if (name != "user"_L1 && name != "assistant"_L1 && name != "system"_L1)
return std::unexpected(u"unknown role: %1"_s.arg(name));
auto content = xml.readElementText();
if (xml.tokenType() != EndElement)
return xmlError();
messages << makeJsonObject({
{ "role"_L1, name.toString().trimmed() },
{ "content"_L1, content },
});
}
break;
}
case Characters:
if (!xml.isWhitespace())
return std::unexpected(u"unexpected text: %1"_s.arg(xml.text()));
case Comment:
case ProcessingInstruction:
case EndElement:
break;
case EndDocument:
return messages;
case Invalid:
return xmlError();
default:
return std::unexpected(u"unexpected token: %1"_s.arg(xml.tokenString()));
}
xml.readNext();
}
}
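// A minimal sketch of the input parsePrompt() accepts: one <chat> root whose
// <system>/<user>/<assistant> children carry plain-text messages. The sample text
// and the helper name are illustrative only, not taken from the repository.
static QJsonArray parsePromptExample()
{
    const QString sample = QStringLiteral(
        "<chat>"
        "<system>You are a helpful assistant.</system>"
        "<user>List three prime numbers.</user>"
        "</chat>");
    QXmlStreamReader xml(sample);
    // On success: a QJsonArray of {"role", "content"} objects ready for the
    // OpenAI-style "messages" field; on failure: an error of the form "line:column: message".
    auto messages = parsePrompt(xml);
    return messages.value_or(QJsonArray());
}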
size_t ChatAPI::saveState(std::span<uint8_t> dest) const
{
Q_UNUSED(dest);
throw std::logic_error("not implemented");
}
void ChatAPI::prompt(
std::string_view prompt,
const PromptCallback &promptCallback,
const ResponseCallback &responseCallback,
const PromptContext &promptCtx
) {
Q_UNUSED(promptCallback)
size_t ChatAPI::restoreState(std::span<const uint8_t> src)
{
Q_UNUSED(src);
throw std::logic_error("not implemented");
}
void ChatAPI::prompt(const std::string &prompt,
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &promptCtx,
bool special,
std::optional<std::string_view> fakeReply) {
Q_UNUSED(promptCallback);
Q_UNUSED(allowContextShift);
Q_UNUSED(special);
if (!isModelLoaded()) {
std::cerr << "ChatAPI ERROR: prompt won't work with an unloaded model!\n";
return;
}
if (!promptCtx.n_past) { m_queuedPrompts.clear(); }
Q_ASSERT(promptCtx.n_past <= m_context.size());
m_context.resize(promptCtx.n_past);
// FIXME(cebtenzzre): We're assuming people don't try to use %2 with ChatGPT. What would that even mean?
m_queuedPrompts << QString::fromStdString(promptTemplate).arg(QString::fromStdString(prompt));
if (!promptCtx.n_predict && !fakeReply) {
return; // response explicitly suppressed, queue prompt for later
}
QString formattedPrompt = m_queuedPrompts.join("");
m_queuedPrompts.clear();
if (fakeReply) {
promptCtx.n_past += 1;
m_context.append(formattedPrompt);
m_context.append(QString::fromUtf8(fakeReply->data(), fakeReply->size()));
return;
}
if (!isModelLoaded())
throw std::invalid_argument("Attempted to prompt an unloaded model.");
if (!promptCtx.n_predict)
return; // nothing requested
// FIXME: We don't set the max_tokens on purpose because in order to do so safely without encountering
// an error we need to be able to count the tokens in our prompt. The only way to do this is to use
// the OpenAI tiktokken library or to implement our own tokenization function that matches precisely
// the OpenAI tiktoken library or to implement our own tokenization function that matches precisely
// the tokenization used by the OpenAI model we're calling. OpenAI has not introduced any means of
// using the REST API to count tokens in a prompt.
QJsonObject root;
root.insert("model", m_modelName);
root.insert("stream", true);
root.insert("temperature", promptCtx.temp);
root.insert("top_p", promptCtx.top_p);
auto root = makeJsonObject({
{ "model"_L1, m_modelName },
{ "stream"_L1, true },
{ "temperature"_L1, promptCtx.temp },
{ "top_p"_L1, promptCtx.top_p },
});
// conversation history
QJsonArray messages;
for (int i = 0; i < m_context.count(); ++i) {
QJsonObject message;
message.insert("role", i % 2 == 0 ? "user" : "assistant");
message.insert("content", m_context.at(i));
messages.append(message);
{
QUtf8StringView promptUtf8(prompt);
QXmlStreamReader xml(promptUtf8);
auto messages = parsePrompt(xml);
if (!messages) {
auto error = fmt::format("Failed to parse API model prompt: {}", messages.error());
qDebug().noquote() << "ChatAPI ERROR:" << error << "Prompt:\n\n" << promptUtf8 << '\n';
throw std::invalid_argument(error);
}
root.insert("messages"_L1, *messages);
}
QJsonObject promptObject;
promptObject.insert("role", "user");
promptObject.insert("content", formattedPrompt);
messages.append(promptObject);
root.insert("messages", messages);
QJsonDocument doc(root);
#if defined(DEBUG)
@ -167,12 +207,9 @@ void ChatAPI::prompt(const std::string &prompt,
connect(&worker, &ChatAPIWorker::finished, &workerThread, &QThread::quit, Qt::DirectConnection);
connect(this, &ChatAPI::request, &worker, &ChatAPIWorker::request, Qt::QueuedConnection);
workerThread.start();
emit request(m_apiKey, &promptCtx, doc.toJson(QJsonDocument::Compact));
emit request(m_apiKey, doc.toJson(QJsonDocument::Compact));
workerThread.wait();
promptCtx.n_past += 1;
m_context.append(formattedPrompt);
m_context.append(worker.currentResponse());
m_responseCallback = nullptr;
#if defined(DEBUG)
@ -190,12 +227,8 @@ bool ChatAPI::callResponse(int32_t token, const std::string& string)
return m_responseCallback(token, string);
}
void ChatAPIWorker::request(const QString &apiKey,
LLModel::PromptContext *promptCtx,
const QByteArray &array)
void ChatAPIWorker::request(const QString &apiKey, const QByteArray &array)
{
m_ctx = promptCtx;
QUrl apiUrl(m_chat->url());
const QString authorization = u"Bearer %1"_s.arg(apiKey).trimmed();
QNetworkRequest request(apiUrl);
@ -302,7 +335,6 @@ void ChatAPIWorker::handleReadyRead()
const QJsonObject choice = choices.first().toObject();
const QJsonObject delta = choice.value("delta").toObject();
const QString content = delta.value("content").toString();
Q_ASSERT(m_ctx);
m_currentResponse += content;
if (!m_chat->callResponse(0, content.toStdString())) {
reply->abort();

View File

@ -7,35 +7,34 @@
#include <QNetworkReply>
#include <QObject>
#include <QString>
#include <QStringList>
#include <QList>
#include <QtPreprocessorSupport>
#include <cstddef>
#include <cstdint>
#include <functional>
#include <span>
#include <stdexcept>
#include <string>
#include <string_view>
#include <unordered_map>
#include <vector>
// IWYU pragma: no_forward_declare QByteArray
class ChatAPI;
class QNetworkAccessManager;
class ChatAPI;
class ChatAPIWorker : public QObject {
Q_OBJECT
public:
ChatAPIWorker(ChatAPI *chatAPI)
: QObject(nullptr)
, m_ctx(nullptr)
, m_networkManager(nullptr)
, m_chat(chatAPI) {}
virtual ~ChatAPIWorker() {}
QString currentResponse() const { return m_currentResponse; }
void request(const QString &apiKey,
LLModel::PromptContext *promptCtx,
const QByteArray &array);
void request(const QString &apiKey, const QByteArray &array);
Q_SIGNALS:
void finished();
@ -47,7 +46,6 @@ private Q_SLOTS:
private:
ChatAPI *m_chat;
LLModel::PromptContext *m_ctx;
QNetworkAccessManager *m_networkManager;
QString m_currentResponse;
};
@ -63,17 +61,23 @@ public:
bool loadModel(const std::string &modelPath, int n_ctx, int ngl) override;
bool isModelLoaded() const override;
size_t requiredMem(const std::string &modelPath, int n_ctx, int ngl) override;
size_t stateSize() const override;
size_t saveState(std::span<uint8_t> dest) const override;
size_t restoreState(std::span<const uint8_t> src) override;
void prompt(const std::string &prompt,
const std::string &promptTemplate,
std::function<bool(int32_t)> promptCallback,
std::function<bool(int32_t, const std::string&)> responseCallback,
bool allowContextShift,
PromptContext &ctx,
bool special,
std::optional<std::string_view> fakeReply) override;
// All three of the state virtual functions are handled specially inside of chatllm save/restore
size_t stateSize() const override
{ throwNotImplemented(); }
size_t saveState(std::span<uint8_t> stateOut, std::vector<Token> &inputTokensOut) const override
{ Q_UNUSED(stateOut); Q_UNUSED(inputTokensOut); throwNotImplemented(); }
size_t restoreState(std::span<const uint8_t> state, std::span<const Token> inputTokens) override
{ Q_UNUSED(state); Q_UNUSED(inputTokens); throwNotImplemented(); }
void prompt(std::string_view prompt,
const PromptCallback &promptCallback,
const ResponseCallback &responseCallback,
const PromptContext &ctx) override;
[[noreturn]]
int32_t countPromptTokens(std::string_view prompt) const override
{ Q_UNUSED(prompt); throwNotImplemented(); }
void setThreadCount(int32_t n_threads) override;
int32_t threadCount() const override;
@ -83,84 +87,87 @@ public:
void setRequestURL(const QString &requestURL) { m_requestURL = requestURL; }
QString url() const { return m_requestURL; }
QList<QString> context() const { return m_context; }
void setContext(const QList<QString> &context) { m_context = context; }
bool callResponse(int32_t token, const std::string &string);
[[noreturn]]
int32_t contextLength() const override
{ throwNotImplemented(); }
auto specialTokens() -> std::unordered_map<std::string, std::string> const override
{ return {}; }
Q_SIGNALS:
void request(const QString &apiKey,
LLModel::PromptContext *ctx,
const QByteArray &array);
void request(const QString &apiKey, const QByteArray &array);
protected:
// We have to implement these as they are pure virtual in base class, but we don't actually use
// them as they are only called from the default implementation of 'prompt' which we override and
// completely replace
std::vector<Token> tokenize(PromptContext &ctx, std::string_view str, bool special) override
{
(void)ctx;
(void)str;
(void)special;
throw std::logic_error("not implemented");
}
[[noreturn]]
static void throwNotImplemented() { throw std::logic_error("not implemented"); }
[[noreturn]]
std::vector<Token> tokenize(std::string_view str) const override
{ Q_UNUSED(str); throwNotImplemented(); }
[[noreturn]]
bool isSpecialToken(Token id) const override
{
(void)id;
throw std::logic_error("not implemented");
}
{ Q_UNUSED(id); throwNotImplemented(); }
[[noreturn]]
std::string tokenToString(Token id) const override
{
(void)id;
throw std::logic_error("not implemented");
}
{ Q_UNUSED(id); throwNotImplemented(); }
void initSampler(PromptContext &ctx) override
{
(void)ctx;
throw std::logic_error("not implemented");
}
[[noreturn]]
void initSampler(const PromptContext &ctx) override
{ Q_UNUSED(ctx); throwNotImplemented(); }
Token sampleToken() const override { throw std::logic_error("not implemented"); }
[[noreturn]]
Token sampleToken() const override
{ throwNotImplemented(); }
bool evalTokens(PromptContext &ctx, const std::vector<int32_t> &tokens) const override
{
(void)ctx;
(void)tokens;
throw std::logic_error("not implemented");
}
[[noreturn]]
bool evalTokens(int32_t nPast, std::span<const Token> tokens) const override
{ Q_UNUSED(nPast); Q_UNUSED(tokens); throwNotImplemented(); }
void shiftContext(PromptContext &promptCtx) override
{
(void)promptCtx;
throw std::logic_error("not implemented");
}
[[noreturn]]
void shiftContext(const PromptContext &promptCtx, int32_t *nPast) override
{ Q_UNUSED(promptCtx); Q_UNUSED(nPast); throwNotImplemented(); }
int32_t contextLength() const override
{
throw std::logic_error("not implemented");
}
[[noreturn]]
int32_t inputLength() const override
{ throwNotImplemented(); }
[[noreturn]]
int32_t computeModelInputPosition(std::span<const Token> input) const override
{ Q_UNUSED(input); throwNotImplemented(); }
[[noreturn]]
void setModelInputPosition(int32_t pos) override
{ Q_UNUSED(pos); throwNotImplemented(); }
[[noreturn]]
void appendInputToken(Token tok) override
{ Q_UNUSED(tok); throwNotImplemented(); }
[[noreturn]]
const std::vector<Token> &endTokens() const override
{
throw std::logic_error("not implemented");
}
{ throwNotImplemented(); }
[[noreturn]]
bool shouldAddBOS() const override
{
throw std::logic_error("not implemented");
}
{ throwNotImplemented(); }
[[noreturn]]
std::span<const Token> inputTokens() const override
{ throwNotImplemented(); }
private:
std::function<bool(int32_t, const std::string&)> m_responseCallback;
QString m_modelName;
QString m_apiKey;
QString m_requestURL;
QList<QString> m_context;
QStringList m_queuedPrompts;
ResponseCallback m_responseCallback;
QString m_modelName;
QString m_apiKey;
QString m_requestURL;
};
#endif // CHATAPI_H

View File

@ -1,25 +1,27 @@
#include "chatlistmodel.h"
#include "database.h" // IWYU pragma: keep
#include "mysettings.h"
#include <QCoreApplication>
#include <QDataStream>
#include <QDir>
#include <QElapsedTimer>
#include <QEvent>
#include <QFile>
#include <QFileInfo>
#include <QGlobalStatic>
#include <QGuiApplication>
#include <QIODevice>
#include <QSettings>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <Qt>
#include <QtTypes>
#include <algorithm>
#define CHAT_FORMAT_MAGIC 0xF5D553CC
#define CHAT_FORMAT_VERSION 10
static constexpr quint32 CHAT_FORMAT_MAGIC = 0xF5D553CC;
static constexpr qint32 CHAT_FORMAT_VERSION = 12;
class MyChatListModel: public ChatListModel { };
Q_GLOBAL_STATIC(MyChatListModel, chatListModelInstance)
@ -51,6 +53,12 @@ void ChatListModel::loadChats()
connect(thread, &ChatsRestoreThread::finished, thread, &QObject::deleteLater);
thread->start();
m_chatSaver = std::make_unique<ChatSaver>();
connect(this, &ChatListModel::requestSaveChats, m_chatSaver.get(), &ChatSaver::saveChats, Qt::QueuedConnection);
connect(m_chatSaver.get(), &ChatSaver::saveChatsFinished, this, &ChatListModel::saveChatsFinished, Qt::QueuedConnection);
// save chats on application quit
connect(QCoreApplication::instance(), &QCoreApplication::aboutToQuit, this, &ChatListModel::saveChatsSync);
connect(MySettings::globalInstance(), &MySettings::serverChatChanged, this, &ChatListModel::handleServerEnabledChanged);
}
@ -73,29 +81,50 @@ ChatSaver::ChatSaver()
m_thread.start();
}
ChatSaver::~ChatSaver()
{
m_thread.quit();
m_thread.wait();
}
QVector<Chat *> ChatListModel::getChatsToSave() const
{
QVector<Chat *> toSave;
for (auto *chat : m_chats)
if (chat != m_serverChat && !chat->isNewChat())
toSave << chat;
return toSave;
}
void ChatListModel::saveChats()
{
QVector<Chat*> toSave;
for (Chat *chat : m_chats) {
if (chat == m_serverChat)
continue;
if (chat->isNewChat())
continue;
toSave.append(chat);
}
auto toSave = getChatsToSave();
if (toSave.isEmpty()) {
emit saveChatsFinished();
return;
}
ChatSaver *saver = new ChatSaver;
connect(this, &ChatListModel::requestSaveChats, saver, &ChatSaver::saveChats, Qt::QueuedConnection);
connect(saver, &ChatSaver::saveChatsFinished, this, &ChatListModel::saveChatsFinished, Qt::QueuedConnection);
emit requestSaveChats(toSave);
}
void ChatListModel::saveChatsForQuit()
{
saveChats();
m_startedFinalSave = true;
}
void ChatListModel::saveChatsSync()
{
auto toSave = getChatsToSave();
if (!m_startedFinalSave && !toSave.isEmpty())
m_chatSaver->saveChats(toSave);
}
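// Sketch of reading the header that ChatSaver::saveChats() below writes: a quint32
// magic and a qint32 format version precede a Qt_6_2-versioned payload. The exact
// compatibility policy and the helper name here are assumptions for illustration.
static bool readChatHeader(QDataStream &in, qint32 &version)
{
    quint32 magic;
    in >> magic >> version;
    if (magic != CHAT_FORMAT_MAGIC || version > CHAT_FORMAT_VERSION)
        return false; // not a chat file, or written by a newer release
    in.setVersion(QDataStream::Qt_6_2); // payload uses the Qt 6.2 wire format
    return true; // Chat::deserialize(in, version) consumes the rest
}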
void ChatSaver::saveChats(const QVector<Chat *> &chats)
{
// we can be called from the main thread instead of a worker thread at quit time, so take a lock
QMutexLocker locker(&m_mutex);
QElapsedTimer timer;
timer.start();
const QString savePath = MySettings::globalInstance()->modelPath();
@ -117,8 +146,8 @@ void ChatSaver::saveChats(const QVector<Chat *> &chats)
}
QDataStream out(&tempFile);
out << (quint32)CHAT_FORMAT_MAGIC;
out << (qint32)CHAT_FORMAT_VERSION;
out << CHAT_FORMAT_MAGIC;
out << CHAT_FORMAT_VERSION;
out.setVersion(QDataStream::Qt_6_2);
qDebug() << "serializing chat" << fileName;
@ -128,6 +157,7 @@ void ChatSaver::saveChats(const QVector<Chat *> &chats)
continue;
}
chat->setNeedsSave(false);
if (originalFile.exists())
originalFile.remove();
tempFile.rename(filePath);
@ -255,12 +285,15 @@ void ChatsRestoreThread::run()
qDebug() << "deserializing chat" << f.file;
Chat *chat = new Chat;
auto chat = std::make_unique<Chat>();
chat->moveToThread(qGuiApp->thread());
if (!chat->deserialize(in, version)) {
bool ok = chat->deserialize(in, version);
if (!ok) {
qWarning() << "ERROR: Couldn't deserialize chat from file:" << file.fileName();
} else if (!in.atEnd()) {
qWarning().nospace() << "error loading chat from " << file.fileName() << ": extra data at end of file";
} else {
emit chatRestored(chat);
emit chatRestored(chat.release());
}
if (f.oldFile)
file.remove(); // No longer storing in this directory

View File

@ -7,16 +7,23 @@
#include <QAbstractListModel>
#include <QByteArray>
#include <QDate>
#include <QDebug>
#include <QHash>
#include <QList>
#include <QMutex>
#include <QObject>
#include <QString>
#include <QThread>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <QtPreprocessorSupport>
#include <memory>
class ChatsRestoreThread : public QThread
{
@ -33,7 +40,7 @@ class ChatSaver : public QObject
Q_OBJECT
public:
explicit ChatSaver();
void stop();
~ChatSaver() override;
Q_SIGNALS:
void saveChatsFinished();
@ -43,6 +50,7 @@ public Q_SLOTS:
private:
QThread m_thread;
QMutex m_mutex;
};
class ChatListModel : public QAbstractListModel
@ -147,7 +155,7 @@ public:
if (m_serverChat)
return;
m_serverChat = new Chat(true /*isServer*/, this);
m_serverChat = new Chat(Chat::server_tag, this);
beginInsertRows(QModelIndex(), m_chats.size(), m_chats.size());
m_chats.append(m_serverChat);
endInsertRows();
@ -229,6 +237,7 @@ public:
void removeChatFile(Chat *chat) const;
Q_INVOKABLE void saveChats();
Q_INVOKABLE void saveChatsForQuit();
void restoreChat(Chat *chat);
void chatsRestoredFinished();
@ -238,7 +247,6 @@ public Q_SLOTS:
Q_SIGNALS:
void countChanged();
void currentChatChanged();
void chatsSavedFinished();
void requestSaveChats(const QVector<Chat*> &);
void saveChatsFinished();
@ -246,6 +254,9 @@ protected:
bool eventFilter(QObject *obj, QEvent *ev) override;
private Q_SLOTS:
// Used with QCoreApplication::aboutToQuit. Does not require an event loop.
void saveChatsSync();
void newChatCountChanged()
{
Q_ASSERT(m_newChat && m_newChat->chatModel()->count());
@ -276,11 +287,16 @@ private Q_SLOTS:
}
}
private:
QVector<Chat *> getChatsToSave() const;
private:
Chat* m_newChat = nullptr;
Chat* m_serverChat = nullptr;
Chat* m_currentChat = nullptr;
QList<Chat*> m_chats;
std::unique_ptr<ChatSaver> m_chatSaver;
bool m_startedFinalSave = false;
private:
explicit ChatListModel();

File diff suppressed because it is too large

View File

@ -1,7 +1,8 @@
#ifndef CHATLLM_H
#define CHATLLM_H
#include "database.h" // IWYU pragma: keep
#include "chatmodel.h"
#include "database.h"
#include "modellist.h"
#include <gpt4all-backend/llmodel.h>
@ -13,20 +14,26 @@
#include <QObject>
#include <QPointer>
#include <QString>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QVariantMap>
#include <QtGlobal>
#include <QVariantMap> // IWYU pragma: keep
#include <QtNumeric>
#include <atomic>
#include <cstdint>
#include <memory>
#include <optional>
#include <span>
#include <string>
#include <string_view>
#include <variant>
#include <vector>
using namespace Qt::Literals::StringLiterals;
class ChatLLM;
class QDataStream;
// NOTE: values serialized to disk, do not change or reuse
enum class LLModelTypeV0 { // chat versions 2-5
MPT = 0,
@ -51,7 +58,7 @@ enum class LLModelTypeV1 { // since chat version 6 (v2.5.0)
NONE = -1, // no state
};
static LLModelTypeV1 parseLLModelTypeV1(int type)
inline LLModelTypeV1 parseLLModelTypeV1(int type)
{
switch (LLModelTypeV1(type)) {
case LLModelTypeV1::GPTJ:
@ -68,7 +75,7 @@ static LLModelTypeV1 parseLLModelTypeV1(int type)
}
}
static LLModelTypeV1 parseLLModelTypeV0(int v0)
inline LLModelTypeV1 parseLLModelTypeV0(int v0)
{
switch (LLModelTypeV0(v0)) {
case LLModelTypeV0::MPT: return LLModelTypeV1::MPT;
@ -83,9 +90,6 @@ static LLModelTypeV1 parseLLModelTypeV0(int v0)
}
}
class ChatLLM;
class ChatModel;
struct LLModelInfo {
std::unique_ptr<LLModel> model;
QFileInfo fileInfo;
@ -141,7 +145,6 @@ class Chat;
class ChatLLM : public QObject
{
Q_OBJECT
Q_PROPERTY(bool restoringFromText READ restoringFromText NOTIFY restoringFromTextChanged)
Q_PROPERTY(QString deviceBackend READ deviceBackend NOTIFY loadedModelInfoChanged)
Q_PROPERTY(QString device READ device NOTIFY loadedModelInfoChanged)
Q_PROPERTY(QString fallbackReason READ fallbackReason NOTIFY loadedModelInfoChanged)
@ -149,12 +152,14 @@ public:
ChatLLM(Chat *parent, bool isServer = false);
virtual ~ChatLLM();
void destroy();
static void destroyStore();
static std::optional<std::string> checkJinjaTemplateError(const std::string &source);
void destroy();
bool isModelLoaded() const;
void regenerateResponse();
void resetResponse();
void resetContext();
void regenerateResponse(int index);
// used to implement edit functionality
std::optional<QString> popPrompt(int index);
void stopGenerating() { m_stopGenerating = true; }
@ -164,13 +169,9 @@ public:
void setForceUnloadModel(bool b) { m_forceUnloadModel = b; }
void setMarkedForDeletion(bool b) { m_markedForDeletion = b; }
QString response(bool trim = true) const;
ModelInfo modelInfo() const;
void setModelInfo(const ModelInfo &info);
bool restoringFromText() const { return m_restoringFromText; }
void acquireModel();
void resetModel();
@ -195,13 +196,11 @@ public:
return m_llModelInfo.fallbackReason.value_or(u""_s);
}
QString generatedName() const { return QString::fromStdString(m_nameResponse); }
bool serialize(QDataStream &stream, int version, bool serializeKV);
bool deserialize(QDataStream &stream, int version, bool deserializeKV, bool discardKV);
bool serialize(QDataStream &stream, int version);
bool deserialize(QDataStream &stream, int version);
public Q_SLOTS:
bool prompt(const QList<QString> &collectionList, const QString &prompt);
void prompt(const QStringList &enabledCollections);
bool loadDefaultModel();
void trySwitchContextOfLoadedModel(const ModelInfo &modelInfo);
bool loadModel(const ModelInfo &modelInfo);
@ -209,22 +208,19 @@ public Q_SLOTS:
void unloadModel();
void reloadModel();
void generateName();
void generateQuestions(qint64 elapsed);
void handleChatIdChanged(const QString &id);
void handleShouldBeLoadedChanged();
void handleThreadStarted();
void handleForceMetalChanged(bool forceMetal);
void handleDeviceChanged();
void processSystemPrompt();
void processRestoreStateFromText();
Q_SIGNALS:
void restoringFromTextChanged();
void loadedModelInfoChanged();
void modelLoadingPercentageChanged(float);
void modelLoadingError(const QString &error);
void modelLoadingWarning(const QString &warning);
void responseChanged(const QString &response);
void responseChanged();
void responseFailed();
void promptProcessing();
void generatingQuestions();
void responseStopped(qint64 promptResponseMs);
@ -243,56 +239,53 @@ Q_SIGNALS:
void modelInfoChanged(const ModelInfo &modelInfo);
protected:
bool promptInternal(const QList<QString> &collectionList, const QString &prompt, const QString &promptTemplate,
int32_t n_predict, int32_t top_k, float top_p, float min_p, float temp, int32_t n_batch, float repeat_penalty,
int32_t repeat_penalty_tokens, std::optional<QString> fakeReply = {});
bool handlePrompt(int32_t token);
bool handleResponse(int32_t token, const std::string &response);
bool handleNamePrompt(int32_t token);
bool handleNameResponse(int32_t token, const std::string &response);
bool handleSystemPrompt(int32_t token);
bool handleSystemResponse(int32_t token, const std::string &response);
bool handleRestoreStateFromTextPrompt(int32_t token);
bool handleRestoreStateFromTextResponse(int32_t token, const std::string &response);
bool handleQuestionPrompt(int32_t token);
bool handleQuestionResponse(int32_t token, const std::string &response);
void saveState();
void restoreState();
struct PromptResult {
QByteArray response; // raw UTF-8
int promptTokens; // note: counts *entire* history, even if cached
int responseTokens;
};
protected:
LLModel::PromptContext m_ctx;
quint32 m_promptTokens;
quint32 m_promptResponseTokens;
struct ChatPromptResult : PromptResult {
QList<ResultInfo> databaseResults;
};
ChatPromptResult promptInternalChat(const QStringList &enabledCollections, const LLModel::PromptContext &ctx,
qsizetype startOffset = 0);
// passing a string_view directly skips templating and uses the raw string
PromptResult promptInternal(const std::variant<std::span<const MessageItem>, std::string_view> &prompt,
const LLModel::PromptContext &ctx,
bool usedLocalDocs);
private:
bool loadNewModel(const ModelInfo &modelInfo, QVariantMap &modelLoadProps);
std::vector<MessageItem> forkConversation(const QString &prompt) const;
// Applies the Jinja template. Query mode returns only the last message without special tokens.
// Returns a (# of messages, rendered prompt) pair.
std::string applyJinjaTemplate(std::span<const MessageItem> items) const;
void generateQuestions(qint64 elapsed);
protected:
QPointer<ChatModel> m_chatModel;
private:
const Chat *m_chat;
std::string m_response;
std::string m_trimmedResponse;
std::string m_nameResponse;
QString m_questionResponse;
LLModelInfo m_llModelInfo;
LLModelTypeV1 m_llModelType = LLModelTypeV1::NONE;
ModelInfo m_modelInfo;
TokenTimer *m_timer;
QByteArray m_state;
QThread m_llmThread;
std::atomic<bool> m_stopGenerating;
std::atomic<bool> m_shouldBeLoaded;
std::atomic<bool> m_restoringFromText; // status indication
std::atomic<bool> m_forceUnloadModel;
std::atomic<bool> m_markedForDeletion;
bool m_isServer;
bool m_forceMetal;
bool m_reloadingToChangeVariant;
bool m_processedSystemPrompt;
bool m_restoreStateFromText;
// m_pristineLoadedState is set if saveState is unnecessary, either because:
// - an unload was queued during LLModel::restoreState()
// - the chat will be restored from text and hasn't been interacted with yet
bool m_pristineLoadedState = false;
QPointer<ChatModel> m_chatModel;
friend class ChatViewResponseHandler;
friend class SimpleResponseHandler;
};
#endif // CHATLLM_H

View File

@ -0,0 +1,368 @@
#include "chatmodel.h"
#include <QDebug>
#include <QMap>
#include <QTextStream>
#include <QtLogging>
#include <exception>
QList<ResultInfo> ChatItem::consolidateSources(const QList<ResultInfo> &sources)
{
QMap<QString, ResultInfo> groupedData;
for (const ResultInfo &info : sources) {
if (groupedData.contains(info.file)) {
groupedData[info.file].text += "\n---\n" + info.text;
} else {
groupedData[info.file] = info;
}
}
QList<ResultInfo> consolidatedSources = groupedData.values();
return consolidatedSources;
}
void ChatItem::serializeResponse(QDataStream &stream, int version)
{
stream << value;
}
void ChatItem::serializeToolCall(QDataStream &stream, int version)
{
stream << value;
toolCallInfo.serialize(stream, version);
}
void ChatItem::serializeToolResponse(QDataStream &stream, int version)
{
stream << value;
}
void ChatItem::serializeText(QDataStream &stream, int version)
{
stream << value;
}
void ChatItem::serializeThink(QDataStream &stream, int version)
{
stream << value;
stream << thinkingTime;
}
void ChatItem::serializeSubItems(QDataStream &stream, int version)
{
stream << name;
switch (auto typ = type()) {
using enum ChatItem::Type;
case Response: { serializeResponse(stream, version); break; }
case ToolCall: { serializeToolCall(stream, version); break; }
case ToolResponse: { serializeToolResponse(stream, version); break; }
case Text: { serializeText(stream, version); break; }
case Think: { serializeThink(stream, version); break; }
case System:
case Prompt:
throw std::invalid_argument(fmt::format("cannot serialize subitem type {}", int(typ)));
}
stream << qsizetype(subItems.size());
for (ChatItem *item : subItems)
item->serializeSubItems(stream, version);
}
void ChatItem::serialize(QDataStream &stream, int version)
{
stream << name;
stream << value;
stream << newResponse;
stream << isCurrentResponse;
stream << stopped;
stream << thumbsUpState;
stream << thumbsDownState;
if (version >= 11 && type() == ChatItem::Type::Response)
stream << isError;
if (version >= 8) {
stream << sources.size();
for (const ResultInfo &info : sources) {
Q_ASSERT(!info.file.isEmpty());
stream << info.collection;
stream << info.path;
stream << info.file;
stream << info.title;
stream << info.author;
stream << info.date;
stream << info.text;
stream << info.page;
stream << info.from;
stream << info.to;
}
} else if (version >= 3) {
QList<QString> references;
QList<QString> referencesContext;
int validReferenceNumber = 1;
for (const ResultInfo &info : sources) {
if (info.file.isEmpty())
continue;
QString reference;
{
QTextStream stream(&reference);
stream << (validReferenceNumber++) << ". ";
if (!info.title.isEmpty())
stream << "\"" << info.title << "\". ";
if (!info.author.isEmpty())
stream << "By " << info.author << ". ";
if (!info.date.isEmpty())
stream << "Date: " << info.date << ". ";
stream << "In " << info.file << ". ";
if (info.page != -1)
stream << "Page " << info.page << ". ";
if (info.from != -1) {
stream << "Lines " << info.from;
if (info.to != -1)
stream << "-" << info.to;
stream << ". ";
}
stream << "[Context](context://" << validReferenceNumber - 1 << ")";
}
references.append(reference);
referencesContext.append(info.text);
}
stream << references.join("\n");
stream << referencesContext;
}
if (version >= 10) {
stream << promptAttachments.size();
for (const PromptAttachment &a : promptAttachments) {
Q_ASSERT(!a.url.isEmpty());
stream << a.url;
stream << a.content;
}
}
if (version >= 12) {
stream << qsizetype(subItems.size());
for (ChatItem *item : subItems)
item->serializeSubItems(stream, version);
}
}
bool ChatItem::deserializeToolCall(QDataStream &stream, int version)
{
stream >> value;
return toolCallInfo.deserialize(stream, version);
}
bool ChatItem::deserializeToolResponse(QDataStream &stream, int version)
{
stream >> value;
return true;
}
bool ChatItem::deserializeText(QDataStream &stream, int version)
{
stream >> value;
return true;
}
bool ChatItem::deserializeResponse(QDataStream &stream, int version)
{
stream >> value;
return true;
}
bool ChatItem::deserializeThink(QDataStream &stream, int version)
{
stream >> value;
stream >> thinkingTime;
return true;
}
bool ChatItem::deserializeSubItems(QDataStream &stream, int version)
{
stream >> name;
try {
type(); // check name
} catch (const std::exception &e) {
qWarning() << "ChatModel ERROR:" << e.what();
return false;
}
switch (auto typ = type()) {
using enum ChatItem::Type;
case Response: { deserializeResponse(stream, version); break; }
case ToolCall: { deserializeToolCall(stream, version); break; }
case ToolResponse: { deserializeToolResponse(stream, version); break; }
case Text: { deserializeText(stream, version); break; }
case Think: { deserializeThink(stream, version); break; }
case System:
case Prompt:
throw std::invalid_argument(fmt::format("cannot deserialize subitem type {}", int(typ)));
}
qsizetype count;
stream >> count;
for (int i = 0; i < count; ++i) {
ChatItem *c = new ChatItem(this);
if (!c->deserializeSubItems(stream, version)) {
delete c;
return false;
}
subItems.push_back(c);
}
return true;
}
bool ChatItem::deserialize(QDataStream &stream, int version)
{
if (version < 12) {
int id;
stream >> id;
}
stream >> name;
try {
type(); // check name
} catch (const std::exception &e) {
qWarning() << "ChatModel ERROR:" << e.what();
return false;
}
stream >> value;
if (version < 10) {
// This is deprecated and no longer used
QString prompt;
stream >> prompt;
}
stream >> newResponse;
stream >> isCurrentResponse;
stream >> stopped;
stream >> thumbsUpState;
stream >> thumbsDownState;
if (version >= 11 && type() == ChatItem::Type::Response)
stream >> isError;
if (version >= 8) {
qsizetype count;
stream >> count;
for (int i = 0; i < count; ++i) {
ResultInfo info;
stream >> info.collection;
stream >> info.path;
stream >> info.file;
stream >> info.title;
stream >> info.author;
stream >> info.date;
stream >> info.text;
stream >> info.page;
stream >> info.from;
stream >> info.to;
sources.append(info);
}
consolidatedSources = ChatItem::consolidateSources(sources);
} else if (version >= 3) {
QString references;
QList<QString> referencesContext;
stream >> references;
stream >> referencesContext;
if (!references.isEmpty()) {
QList<QString> referenceList = references.split("\n");
// Ignore empty lines and those that begin with "---" which is no longer used
for (auto it = referenceList.begin(); it != referenceList.end();) {
if (it->trimmed().isEmpty() || it->trimmed().startsWith("---"))
it = referenceList.erase(it);
else
++it;
}
Q_ASSERT(referenceList.size() == referencesContext.size());
for (int j = 0; j < referenceList.size(); ++j) {
QString reference = referenceList[j];
QString context = referencesContext[j];
ResultInfo info;
QTextStream refStream(&reference);
QString dummy;
int validReferenceNumber;
refStream >> validReferenceNumber >> dummy;
// Extract title (between quotes)
if (reference.contains("\"")) {
int startIndex = reference.indexOf('"') + 1;
int endIndex = reference.indexOf('"', startIndex);
info.title = reference.mid(startIndex, endIndex - startIndex);
}
// Extract author (after "By " and before the next period)
if (reference.contains("By ")) {
int startIndex = reference.indexOf("By ") + 3;
int endIndex = reference.indexOf('.', startIndex);
info.author = reference.mid(startIndex, endIndex - startIndex).trimmed();
}
// Extract date (after "Date: " and before the next period)
if (reference.contains("Date: ")) {
int startIndex = reference.indexOf("Date: ") + 6;
int endIndex = reference.indexOf('.', startIndex);
info.date = reference.mid(startIndex, endIndex - startIndex).trimmed();
}
// Extract file name (after "In " and before the "[Context]")
if (reference.contains("In ") && reference.contains(". [Context]")) {
int startIndex = reference.indexOf("In ") + 3;
int endIndex = reference.indexOf(". [Context]", startIndex);
info.file = reference.mid(startIndex, endIndex - startIndex).trimmed();
}
// Extract page number (after "Page " and before the next space)
if (reference.contains("Page ")) {
int startIndex = reference.indexOf("Page ") + 5;
int endIndex = reference.indexOf(' ', startIndex);
if (endIndex == -1) endIndex = reference.length();
info.page = reference.mid(startIndex, endIndex - startIndex).toInt();
}
// Extract lines (after "Lines " and before the next space or hyphen)
if (reference.contains("Lines ")) {
int startIndex = reference.indexOf("Lines ") + 6;
int endIndex = reference.indexOf(' ', startIndex);
if (endIndex == -1) endIndex = reference.length();
int hyphenIndex = reference.indexOf('-', startIndex);
if (hyphenIndex != -1 && hyphenIndex < endIndex) {
info.from = reference.mid(startIndex, hyphenIndex - startIndex).toInt();
info.to = reference.mid(hyphenIndex + 1, endIndex - hyphenIndex - 1).toInt();
} else {
info.from = reference.mid(startIndex, endIndex - startIndex).toInt();
}
}
info.text = context;
sources.append(info);
}
consolidatedSources = ChatItem::consolidateSources(sources);
}
}
if (version >= 10) {
qsizetype count;
stream >> count;
QList<PromptAttachment> attachments;
for (int i = 0; i < count; ++i) {
PromptAttachment a;
stream >> a.url;
stream >> a.content;
attachments.append(a);
}
promptAttachments = attachments;
}
if (version >= 12) {
qsizetype count;
stream >> count;
for (int i = 0; i < count; ++i) {
ChatItem *c = new ChatItem(this);
if (!c->deserializeSubItems(stream, version)) {
delete c;
return false;
}
subItems.push_back(c);
}
}
return true;
}
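// Version gates used above, for quick reference (derived from the checks in
// serialize()/deserialize()): v3+ stores flattened reference text, v8+ structured
// ResultInfo sources, v10+ prompt attachments, v11+ the per-response error flag, and
// v12+ nested sub-items (tool calls, thinking, etc.) while dropping the legacy id field.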

File diff suppressed because it is too large

View File

@ -1,29 +1,32 @@
#include "chatviewtextprocessor.h"
#include <QAbstractTextDocumentLayout>
#include <QBrush>
#include <QChar>
#include <QClipboard>
#include <QDebug>
#include <QFlag>
#include <QFont>
#include <QFontMetricsF>
#include <QGuiApplication>
#include <QList>
#include <QPainter>
#include <QList> // IWYU pragma: keep
#include <QPair>
#include <QQuickTextDocument>
#include <QRegularExpression>
#include <QStringList>
#include <QTextBlock>
#include <QTextCharFormat>
#include <QStringList> // IWYU pragma: keep
#include <QTextBlock> // IWYU pragma: keep
#include <QTextCharFormat> // IWYU pragma: keep
#include <QTextCursor>
#include <QTextDocument>
#include <QTextDocumentFragment>
#include <QTextFrame>
#include <QTextFrameFormat>
#include <QTextFrame> // IWYU pragma: keep
#include <QTextFrameFormat> // IWYU pragma: keep
#include <QTextTableCell>
#include <QVariant>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <algorithm>
#include <utility>
enum Language {
None,
@ -967,8 +970,6 @@ void ChatViewTextProcessor::handleCodeBlocks()
cursor.setPosition(matchesCode[index].capturedEnd(), QTextCursor::KeepAnchor);
cursor.removeSelectedText();
int startPos = cursor.position();
QTextFrameFormat frameFormat = frameFormatBase;
QString capturedText = matchesCode[index].captured(1);
QString codeLanguage;
@ -1004,7 +1005,7 @@ void ChatViewTextProcessor::handleCodeBlocks()
QTextFrame *mainFrame = cursor.currentFrame();
cursor.setCharFormat(textFormat);
QTextFrame *frame = cursor.insertFrame(frameFormat);
cursor.insertFrame(frameFormat);
QTextTable *table = cursor.insertTable(codeLanguage.isEmpty() ? 1 : 2, 1, tableFormat);
if (!codeLanguage.isEmpty()) {
@ -1016,7 +1017,6 @@ void ChatViewTextProcessor::handleCodeBlocks()
headerCursor.insertText(codeLanguage);
QTextTableCell copy = headerTable->cellAt(0, 1);
QTextCursor copyCursor = copy.firstCursorPosition();
int startPos = copyCursor.position();
CodeCopy newCopy;
newCopy.text = lines.join("\n");
newCopy.startPos = copyCursor.position();

View File

@ -3,18 +3,15 @@
#include <QColor>
#include <QObject>
#include <QQmlEngine>
#include <QQuickTextDocument> // IWYU pragma: keep
#include <QRectF>
#include <QSizeF>
#include <QQmlEngine> // IWYU pragma: keep
#include <QQuickTextDocument>
#include <QString>
#include <QSyntaxHighlighter>
#include <QTextObjectInterface>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <QtTypes>
// IWYU pragma: no_forward_declare QQuickTextDocument
class QPainter;
class QTextDocument;
class QTextFormat;
struct CodeColors {
Q_GADGET

View File

@ -0,0 +1,179 @@
#include "codeinterpreter.h"
#include <QJSEngine>
#include <QJSValue>
#include <QList>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QVariant>
#include <Qt>
using namespace Qt::Literals::StringLiterals;
CodeInterpreter::CodeInterpreter()
: Tool()
, m_error(ToolEnums::Error::NoError)
{
m_worker = new CodeInterpreterWorker;
connect(this, &CodeInterpreter::request, m_worker, &CodeInterpreterWorker::request, Qt::QueuedConnection);
}
void CodeInterpreter::run(const QList<ToolParam> &params)
{
m_error = ToolEnums::Error::NoError;
m_errorString = QString();
Q_ASSERT(params.count() == 1
&& params.first().name == "code"
&& params.first().type == ToolEnums::ParamType::String);
const QString code = params.first().value.toString();
connect(m_worker, &CodeInterpreterWorker::finished, [this, params] {
m_error = m_worker->error();
m_errorString = m_worker->errorString();
emit runComplete({
ToolCallConstants::CodeInterpreterFunction,
params,
m_worker->response(),
m_error,
m_errorString
});
});
emit request(code);
}
bool CodeInterpreter::interrupt()
{
return m_worker->interrupt();
}
QList<ToolParamInfo> CodeInterpreter::parameters() const
{
return {{
"code",
ToolEnums::ParamType::String,
"javascript code to compute",
true
}};
}
QString CodeInterpreter::symbolicFormat() const
{
return "{human readable plan to complete the task}\n" + ToolCallConstants::CodeInterpreterPrefix + "{code}\n" + ToolCallConstants::CodeInterpreterSuffix;
}
QString CodeInterpreter::examplePrompt() const
{
return R"(Write code to check if a number is prime, use that to see if the number 7 is prime)";
}
QString CodeInterpreter::exampleCall() const
{
static const QString example = R"(function isPrime(n) {
if (n <= 1) {
return false;
}
for (let i = 2; i <= Math.sqrt(n); i++) {
if (n % i === 0) {
return false;
}
}
return true;
}
const number = 7;
console.log(`The number ${number} is prime: ${isPrime(number)}`);
)";
return "Certainly! Let's compute the answer to whether the number 7 is prime.\n" + ToolCallConstants::CodeInterpreterPrefix + example + ToolCallConstants::CodeInterpreterSuffix;
}
QString CodeInterpreter::exampleReply() const
{
return R"("The computed result shows that 7 is a prime number.)";
}
CodeInterpreterWorker::CodeInterpreterWorker()
: QObject(nullptr)
, m_engine(new QJSEngine(this))
{
moveToThread(&m_thread);
QJSValue consoleInternalObject = m_engine->newQObject(&m_consoleCapture);
m_engine->globalObject().setProperty("console_internal", consoleInternalObject);
// preprocess console.log args in JS since Q_INVOKE doesn't support varargs
auto consoleObject = m_engine->evaluate(uR"(
class Console {
log(...args) {
if (args.length == 0)
return;
if (args.length >= 2 && typeof args[0] === 'string')
throw new Error('console.log string formatting not supported');
let cat = '';
for (const arg of args) {
cat += String(arg);
}
console_internal.log(cat);
}
}
new Console();
)"_s);
m_engine->globalObject().setProperty("console", consoleObject);
m_thread.start();
}
void CodeInterpreterWorker::reset()
{
m_response.clear();
m_error = ToolEnums::Error::NoError;
m_errorString.clear();
m_consoleCapture.output.clear();
m_engine->setInterrupted(false);
}
void CodeInterpreterWorker::request(const QString &code)
{
reset();
const QJSValue result = m_engine->evaluate(code);
QString resultString;
if (m_engine->isInterrupted()) {
resultString = QString("Error: code execution was interrupted or timed out.");
} else if (result.isError()) {
// NOTE: We purposely do not set the m_error or m_errorString for the code interpreter since
// we *want* the model to see the response has an error so it can hopefully correct itself. The
// error member variables are intended for tools that have error conditions that cannot be corrected.
// For instance, a tool depending upon the network might set these error variables if the network
// is not available.
const QStringList lines = code.split('\n');
const int line = result.property("lineNumber").toInt();
const int index = line - 1;
const QString lineContent = (index >= 0 && index < lines.size()) ? lines.at(index) : "Line not found in code.";
resultString = QString("Uncaught exception at line %1: %2\n\t%3")
.arg(line)
.arg(result.toString())
.arg(lineContent);
m_error = ToolEnums::Error::UnknownError;
m_errorString = resultString;
} else {
resultString = result.isUndefined() ? QString() : result.toString();
}
if (resultString.isEmpty())
resultString = m_consoleCapture.output;
else if (!m_consoleCapture.output.isEmpty())
resultString += "\n" + m_consoleCapture.output;
m_response = resultString;
emit finished();
}
bool CodeInterpreterWorker::interrupt()
{
m_error = ToolEnums::Error::TimeoutError;
m_engine->setInterrupted(true);
return true;
}
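// Sketch of driving the worker directly (assumes a running Qt event loop so the queued
// call and finished() can be delivered, plus a <QDebug> include; the script is illustrative).
static void runCodeInterpreterExample()
{
    auto *worker = new CodeInterpreterWorker; // leaked in this sketch; real code manages lifetime
    QObject::connect(worker, &CodeInterpreterWorker::finished, worker, [worker] {
        qDebug() << worker->response(); // "42\n", captured via the console.log shim above
    });
    // request() is a slot and the worker lives in its own thread, so queue the call.
    QMetaObject::invokeMethod(worker, "request", Qt::QueuedConnection,
                              Q_ARG(QString, QStringLiteral("console.log(6 * 7)")));
}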

View File

@ -0,0 +1,97 @@
#ifndef CODEINTERPRETER_H
#define CODEINTERPRETER_H
#include "tool.h"
#include "toolcallparser.h"
#include <QObject>
#include <QString>
#include <QThread>
#include <QtAssert>
class QJSEngine;
class JavaScriptConsoleCapture : public QObject
{
Q_OBJECT
public:
QString output;
Q_INVOKABLE void log(const QString &message)
{
const int maxLength = 1024;
if (output.length() >= maxLength)
return;
if (output.length() + message.length() + 1 > maxLength) {
static const QString trunc = "\noutput truncated at " + QString::number(maxLength) + " characters...";
int remainingLength = maxLength - output.length();
if (remainingLength > 0)
output.append(message.left(remainingLength));
output.append(trunc);
Q_ASSERT(output.length() > maxLength);
} else {
output.append(message + "\n");
}
}
};
class CodeInterpreterWorker : public QObject {
Q_OBJECT
public:
CodeInterpreterWorker();
virtual ~CodeInterpreterWorker() {}
void reset();
QString response() const { return m_response; }
ToolEnums::Error error() const { return m_error; }
QString errorString() const { return m_errorString; }
bool interrupt();
public Q_SLOTS:
void request(const QString &code);
Q_SIGNALS:
void finished();
private:
QString m_response;
ToolEnums::Error m_error = ToolEnums::Error::NoError;
QString m_errorString;
QThread m_thread;
JavaScriptConsoleCapture m_consoleCapture;
QJSEngine *m_engine = nullptr;
};
class CodeInterpreter : public Tool
{
Q_OBJECT
public:
explicit CodeInterpreter();
virtual ~CodeInterpreter() {}
void run(const QList<ToolParam> &params) override;
bool interrupt() override;
ToolEnums::Error error() const override { return m_error; }
QString errorString() const override { return m_errorString; }
QString name() const override { return tr("Code Interpreter"); }
QString description() const override { return tr("compute javascript code using console.log as output"); }
QString function() const override { return ToolCallConstants::CodeInterpreterFunction; }
QList<ToolParamInfo> parameters() const override;
virtual QString symbolicFormat() const override;
QString examplePrompt() const override;
QString exampleCall() const override;
QString exampleReply() const override;
Q_SIGNALS:
void request(const QString &code);
private:
ToolEnums::Error m_error = ToolEnums::Error::NoError;
QString m_errorString;
CodeInterpreterWorker *m_worker;
};
#endif // CODEINTERPRETER_H

View File

@ -0,0 +1,7 @@
#pragma once
#define APP_VERSION "@APP_VERSION@"
#define G4A_CONFIG(name) (1/G4A_CONFIG_##name == 1)
#define G4A_CONFIG_force_d3d12 @GPT4ALL_CONFIG_FORCE_D3D12@
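// The division in G4A_CONFIG is a guard: pasting the flag name and dividing by its value
// means a flag that is undefined, empty, or 0 is a compile-time error (undeclared
// identifier or division by zero) instead of silently reading as false. The example
// flags and the -1 "off" convention below are illustrative assumptions only.
#define G4A_CONFIG_example_on   1
#define G4A_CONFIG_example_off -1
static_assert( G4A_CONFIG(example_on),  "(1/1 == 1)  evaluates to true");
static_assert(!G4A_CONFIG(example_off), "(1/-1 == 1) evaluates to false");
// G4A_CONFIG(example_typo) would fail to compile, catching misspelled flag names.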

View File

@ -1,10 +1,11 @@
#include "database.h"
#include "mysettings.h"
#include "utils.h"
#include "utils.h" // IWYU pragma: keep
#include <duckx/duckx.hpp>
#include <fmt/format.h>
#include <usearch/index.hpp>
#include <usearch/index_plugins.hpp>
#include <QDebug>
@ -12,9 +13,9 @@
#include <QDirIterator>
#include <QFile>
#include <QFileSystemWatcher>
#include <QFlags>
#include <QIODevice>
#include <QPdfDocument>
#include <QPdfSelection>
#include <QKeyValueIterator>
#include <QRegularExpression>
#include <QSqlError>
#include <QSqlQuery>
@ -23,14 +24,24 @@
#include <QMap>
#include <QUtf8StringView>
#include <QVariant>
#include <Qt>
#include <QtLogging>
#include <QtMinMax>
#include <QtTypes>
#include <algorithm>
#include <cmath>
#include <optional>
#include <stdexcept>
#ifdef GPT4ALL_USE_QTPDF
# include <QPdfDocument>
# include <QPdfSelection>
#else
# include <fpdfview.h>
# include <fpdf_doc.h>
# include <fpdf_text.h>
#endif
using namespace Qt::Literals::StringLiterals;
namespace ranges = std::ranges;
namespace us = unum::usearch;
@ -38,6 +49,7 @@ namespace us = unum::usearch;
//#define DEBUG
//#define DEBUG_EXAMPLE
namespace {
/* QFile that checks input for binary data. If seen, it fails the read and returns true
@ -233,12 +245,17 @@ static const QString SELECT_COUNT_CHUNKS_SQL = uR"(
)"_s;
static const QString SELECT_CHUNKS_FTS_SQL = uR"(
select id, bm25(chunks_fts) as score
from chunks_fts
select fts.id, bm25(chunks_fts) as score
from chunks_fts fts
join documents d on fts.document_id = d.id
join collection_items ci on d.folder_id = ci.folder_id
join collections co on ci.collection_id = co.id
where chunks_fts match ?
order by score limit %1;
and co.name in ('%1')
order by score limit %2;
)"_s;
#define NAMED_PAIR(name, typea, a, typeb, b) \
struct name { typea a; typeb b; }; \
static bool operator==(const name &x, const name &y) { return x.a == y.a && x.b == y.b; } \
@ -285,7 +302,7 @@ static bool selectCountChunks(QSqlQuery &q, int folder_id, int &count)
return true;
}
static bool selectChunk(QSqlQuery &q, const QList<int> &chunk_ids, int retrievalSize)
static bool selectChunk(QSqlQuery &q, const QList<int> &chunk_ids)
{
QString chunk_ids_str = QString::number(chunk_ids[0]);
for (size_t i = 1; i < chunk_ids.size(); ++i)
@ -302,10 +319,6 @@ static const QString INSERT_COLLECTION_SQL = uR"(
returning id;
)"_s;
static const QString DELETE_COLLECTION_SQL = uR"(
delete from collections where name = ? and folder_id = ?;
)"_s;
static const QString SELECT_FOLDERS_FROM_COLLECTIONS_SQL = uR"(
select f.id, f.path
from collections c
@ -349,6 +362,14 @@ static const QString UPDATE_LAST_UPDATE_TIME_SQL = uR"(
update collections set last_update_time = ? where id = ?;
)"_s;
static const QString FTS_INTEGRITY_SQL = uR"(
insert into chunks_fts(chunks_fts, rank) values('integrity-check', 1);
)"_s;
static const QString FTS_REBUILD_SQL = uR"(
insert into chunks_fts(chunks_fts) values('rebuild');
)"_s;
static bool addCollection(QSqlQuery &q, const QString &collection_name, const QDateTime &start_update,
const QDateTime &last_update, const QString &embedding_model, CollectionItem &item)
{
@ -366,15 +387,6 @@ static bool addCollection(QSqlQuery &q, const QString &collection_name, const QD
return true;
}
static bool removeCollection(QSqlQuery &q, const QString &collection_name, int folder_id)
{
if (!q.prepare(DELETE_COLLECTION_SQL))
return false;
q.addBindValue(collection_name);
q.addBindValue(folder_id);
return q.exec();
}
static bool selectFoldersFromCollection(QSqlQuery &q, const QString &collection_name, QList<QPair<int, QString>> *folders)
{
if (!q.prepare(SELECT_FOLDERS_FROM_COLLECTIONS_SQL))
@ -507,10 +519,6 @@ static const QString GET_FOLDER_EMBEDDING_MODEL_SQL = uR"(
where ci.folder_id = ?;
)"_s;
static const QString SELECT_ALL_FOLDERPATHS_SQL = uR"(
select path from folders;
)"_s;
static const QString FOLDER_REMOVE_ALL_DOCS_SQL[] = {
uR"(
delete from embeddings
@ -585,17 +593,6 @@ static bool sqlGetFolderEmbeddingModel(QSqlQuery &q, int id, QString &embedding_
return true;
}
static bool selectAllFolderPaths(QSqlQuery &q, QList<QString> *folder_paths)
{
if (!q.prepare(SELECT_ALL_FOLDERPATHS_SQL))
return false;
if (!q.exec())
return false;
while (q.next())
folder_paths->append(q.value(0).toString());
return true;
}
static const QString INSERT_COLLECTION_ITEM_SQL = uR"(
insert into collection_items(collection_id, folder_id)
values(?, ?)
@ -1116,9 +1113,12 @@ static void handleDocumentError(const QString &errorMessage, int document_id, co
class DocumentReader {
public:
static std::unique_ptr<DocumentReader> fromDocument(const DocumentInfo &info);
struct Metadata { QString title, author, subject, keywords; };
const DocumentInfo &doc () const { return *m_info; }
static std::unique_ptr<DocumentReader> fromDocument(DocumentInfo info);
const DocumentInfo &doc () const { return m_info; }
const Metadata &metadata() const { return m_metadata; }
const std::optional<QString> &word () const { return m_word; }
const std::optional<QString> &nextWord() { m_word = advance(); return m_word; }
virtual std::optional<ChunkStreamer::Status> getError() const { return std::nullopt; }
@ -1127,28 +1127,40 @@ public:
virtual ~DocumentReader() = default;
protected:
explicit DocumentReader(const DocumentInfo &info)
: m_info(&info) {}
explicit DocumentReader(DocumentInfo info)
: m_info(std::move(info)) {}
void postInit() { m_word = advance(); }
void postInit(Metadata &&metadata = {})
{
m_metadata = std::move(metadata);
m_word = advance();
}
virtual std::optional<QString> advance() = 0;
const DocumentInfo *m_info;
std::optional<QString> m_word;
DocumentInfo m_info;
Metadata m_metadata;
std::optional<QString> m_word;
};
namespace {
#ifdef GPT4ALL_USE_QTPDF
class PdfDocumentReader final : public DocumentReader {
public:
explicit PdfDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit PdfDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
{
QString path = info.file.canonicalFilePath();
if (m_doc.load(path) != QPdfDocument::Error::None)
throw std::runtime_error(fmt::format("Failed to load PDF: {}", path));
postInit();
Metadata metadata {
.title = m_doc.metaData(QPdfDocument::MetaDataField::Title ).toString(),
.author = m_doc.metaData(QPdfDocument::MetaDataField::Author ).toString(),
.subject = m_doc.metaData(QPdfDocument::MetaDataField::Subject ).toString(),
.keywords = m_doc.metaData(QPdfDocument::MetaDataField::Keywords).toString(),
};
postInit(std::move(metadata));
}
int page() const override { return m_currentPage; }
@ -1174,11 +1186,103 @@ private:
QString m_pageText;
std::optional<QTextStream> m_stream;
};
#else
class PdfDocumentReader final : public DocumentReader {
public:
explicit PdfDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
{
QString path = info.file.canonicalFilePath();
m_doc = FPDF_LoadDocument(path.toUtf8().constData(), nullptr);
if (!m_doc)
throw std::runtime_error(fmt::format("Failed to load PDF: {}", path));
// Extract metadata
Metadata metadata {
.title = getMetadata("Title" ),
.author = getMetadata("Author" ),
.subject = getMetadata("Subject" ),
.keywords = getMetadata("Keywords"),
};
postInit(std::move(metadata));
}
~PdfDocumentReader() override
{
if (m_page)
FPDF_ClosePage(m_page);
if (m_doc)
FPDF_CloseDocument(m_doc);
}
int page() const override { return m_currentPage; }
private:
std::optional<QString> advance() override
{
QString word;
do {
while (!m_stream || m_stream->atEnd()) {
if (m_currentPage >= FPDF_GetPageCount(m_doc))
return std::nullopt;
if (m_page)
FPDF_ClosePage(std::exchange(m_page, nullptr));
m_page = FPDF_LoadPage(m_doc, m_currentPage++);
if (!m_page)
throw std::runtime_error("Failed to load page.");
m_pageText = extractTextFromPage(m_page);
m_stream.emplace(&m_pageText);
}
*m_stream >> word;
} while (word.isEmpty());
return word;
}
QString getMetadata(FPDF_BYTESTRING key)
{
// FPDF_GetMetaText includes a 2-byte null terminator
ulong nBytes = FPDF_GetMetaText(m_doc, key, nullptr, 0);
if (nBytes <= sizeof (FPDF_WCHAR))
return { "" };
QByteArray buffer(nBytes, Qt::Uninitialized);
ulong nResultBytes = FPDF_GetMetaText(m_doc, key, buffer.data(), buffer.size());
Q_ASSERT(nResultBytes % 2 == 0);
Q_ASSERT(nResultBytes <= nBytes);
return QString::fromUtf16(reinterpret_cast<const char16_t *>(buffer.data()), nResultBytes / 2 - 1);
}
QString extractTextFromPage(FPDF_PAGE page)
{
FPDF_TEXTPAGE textPage = FPDFText_LoadPage(page);
if (!textPage)
throw std::runtime_error("Failed to load text page.");
int nChars = FPDFText_CountChars(textPage);
if (!nChars)
return {};
// FPDFText_GetText includes a 2-byte null terminator
QByteArray buffer((nChars + 1) * sizeof (FPDF_WCHAR), Qt::Uninitialized);
int nResultChars = FPDFText_GetText(textPage, 0, nChars, reinterpret_cast<ushort *>(buffer.data()));
Q_ASSERT(nResultChars <= nChars + 1);
FPDFText_ClosePage(textPage);
return QString::fromUtf16(reinterpret_cast<const char16_t *>(buffer.data()), nResultChars - 1);
}
FPDF_DOCUMENT m_doc = nullptr;
FPDF_PAGE m_page = nullptr;
int m_currentPage = 0;
QString m_pageText;
std::optional<QTextStream> m_stream;
};
#endif // !defined(GPT4ALL_USE_QTPDF)
class WordDocumentReader final : public DocumentReader {
public:
explicit WordDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit WordDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
, m_doc(info.file.canonicalFilePath().toStdString())
{
m_doc.open();
@ -1187,6 +1291,7 @@ public:
m_paragraph = &m_doc.paragraphs();
m_run = &m_paragraph->runs();
// TODO(jared): metadata for Word documents?
postInit();
}
@ -1208,11 +1313,14 @@ protected:
qsizetype wordEnd = wordStart + 1;
while (wordEnd >= m_buffer.size() || !m_buffer[wordEnd].isSpace()) {
if (wordEnd >= m_buffer.size() && !fillBuffer())
return std::nullopt;
break;
if (!m_buffer[wordEnd].isSpace())
++wordEnd;
}
if (wordStart == wordEnd)
return std::nullopt;
auto size = wordEnd - wordStart;
QString word = std::move(m_buffer);
m_buffer = word.sliced(wordStart + size);
@ -1220,7 +1328,6 @@ protected:
word.resize(size);
else
word = word.sliced(wordStart, size);
return word;
}
@ -1232,18 +1339,30 @@ protected:
// try next paragraph
if (!m_paragraph->has_next())
return false;
m_paragraph->next();
m_buffer += u'\n';
}
bool foundText = false;
auto &run = m_run->get_node();
const char *text = run.child("w:t").text().get();
if (!*text && run.child("w:tab"))
text = "\t";
m_run->next();
if (*text) {
m_buffer += QUtf8StringView(text);
return true;
for (auto node = run.first_child(); node; node = node.next_sibling()) {
std::string node_name = node.name();
if (node_name == "w:t") {
const char *text = node.text().get();
if (*text) {
foundText = true;
m_buffer += QUtf8StringView(text);
}
} else if (node_name == "w:br") {
m_buffer += u'\n';
} else if (node_name == "w:tab") {
m_buffer += u'\t';
}
}
m_run->next();
if (foundText) return true;
}
}
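For context: a single w:r run in a .docx body can interleave several w:t text nodes with w:tab and w:br elements (illustrative XML: <w:r><w:t>foo</w:t><w:tab/><w:t>bar</w:t><w:br/></w:r>), so the rewritten loop above walks every child of the run; the old code consulted only the first w:t plus an optional w:tab and dropped the rest.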
@ -1255,8 +1374,8 @@ protected:
class TxtDocumentReader final : public DocumentReader {
public:
explicit TxtDocumentReader(const DocumentInfo &info)
: DocumentReader(info)
explicit TxtDocumentReader(DocumentInfo info)
: DocumentReader(std::move(info))
, m_file(info.file.canonicalFilePath())
{
if (!m_file.open(QIODevice::ReadOnly))
@ -1297,13 +1416,13 @@ protected:
} // namespace
std::unique_ptr<DocumentReader> DocumentReader::fromDocument(const DocumentInfo &doc)
std::unique_ptr<DocumentReader> DocumentReader::fromDocument(DocumentInfo doc)
{
if (doc.isPdf())
return std::make_unique<PdfDocumentReader>(doc);
return std::make_unique<PdfDocumentReader>(std::move(doc));
if (doc.isDocx())
return std::make_unique<WordDocumentReader>(doc);
return std::make_unique<TxtDocumentReader>(doc);
return std::make_unique<WordDocumentReader>(std::move(doc));
return std::make_unique<TxtDocumentReader>(std::move(doc));
}
ChunkStreamer::ChunkStreamer(Database *database)
@ -1311,20 +1430,14 @@ ChunkStreamer::ChunkStreamer(Database *database)
ChunkStreamer::~ChunkStreamer() = default;
void ChunkStreamer::setDocument(const DocumentInfo &doc, int documentId, const QString &embeddingModel,
const QString &title, const QString &author, const QString &subject,
const QString &keywords)
void ChunkStreamer::setDocument(DocumentInfo doc, int documentId, const QString &embeddingModel)
{
auto docKey = doc.key();
if (!m_docKey || *m_docKey != docKey) {
m_docKey = docKey;
m_reader = DocumentReader::fromDocument(doc);
m_reader = DocumentReader::fromDocument(std::move(doc));
m_documentId = documentId;
m_embeddingModel = embeddingModel;
m_title = title;
m_author = author;
m_subject = subject;
m_keywords = keywords;
m_chunk.clear();
m_page = 0;
@ -1332,7 +1445,8 @@ void ChunkStreamer::setDocument(const DocumentInfo &doc, int documentId, const Q
if (m_database->m_documentIdCache.contains(documentId)) {
QSqlQuery q(m_database->m_db);
if (!m_database->removeChunksByDocumentId(q, documentId))
handleDocumentError("ERROR: Cannot remove chunks of document", documentId, doc.file.canonicalPath(), q.lastError());
handleDocumentError("ERROR: Cannot remove chunks of document",
documentId, m_reader->doc().file.canonicalPath(), q.lastError());
}
}
}
@ -1361,10 +1475,7 @@ ChunkStreamer::Status ChunkStreamer::step()
for (;;) {
if (auto error = m_reader->getError()) {
m_docKey.reset(); // done processing
return *error;
}
if (m_database->scanQueueInterrupted()) {
retval = Status::INTERRUPTED;
retval = *error;
break;
}
@ -1425,14 +1536,15 @@ ChunkStreamer::Status ChunkStreamer::step()
QSqlQuery q(m_database->m_db);
int chunkId = 0;
auto &metadata = m_reader->metadata();
if (!m_database->addChunk(q,
m_documentId,
chunk,
m_reader->doc().file.fileName(), // basename
m_title,
m_author,
m_subject,
m_keywords,
metadata.title,
metadata.author,
metadata.subject,
metadata.keywords,
m_page,
line_from,
line_to,
@ -1459,6 +1571,11 @@ ChunkStreamer::Status ChunkStreamer::step()
break;
}
}
if (m_database->scanQueueInterrupted()) {
retval = Status::INTERRUPTED;
break;
}
}
if (nChunks) {
@ -1519,8 +1636,22 @@ void Database::handleEmbeddingsGenerated(const QVector<EmbeddingResult> &embeddi
for (const auto &[key, stat]: std::as_const(stats).asKeyValueRange()) {
if (!m_collectionMap.contains(key.folder_id)) continue;
CollectionItem item = guiCollectionItem(key.folder_id);
item.currentEmbeddingsToIndex -= stat.nAdded + stat.nSkipped;
item.totalEmbeddingsToIndex -= stat.nSkipped;
Q_ASSERT(item.currentEmbeddingsToIndex >= stat.nAdded + stat.nSkipped);
if (item.currentEmbeddingsToIndex < stat.nAdded + stat.nSkipped) {
qWarning() << "Database ERROR: underflow in current embeddings to index statistics";
item.currentEmbeddingsToIndex = 0;
} else {
item.currentEmbeddingsToIndex -= stat.nAdded + stat.nSkipped;
}
Q_ASSERT(item.totalEmbeddingsToIndex >= stat.nSkipped);
if (item.totalEmbeddingsToIndex < stat.nSkipped) {
qWarning() << "Database ERROR: underflow in total embeddings to index statistics";
item.totalEmbeddingsToIndex = 0;
} else {
item.totalEmbeddingsToIndex -= stat.nSkipped;
}
if (!stat.lastFile.isNull())
item.fileCurrentlyProcessing = stat.lastFile;
@ -1622,13 +1753,16 @@ bool Database::scanQueueInterrupted() const
void Database::scanQueueBatch()
{
m_scanDurationTimer.start();
transaction();
// scan for up to 100ms or until we run out of documents
while (!m_docsToScan.empty() && !scanQueueInterrupted())
m_scanDurationTimer.start();
// scan for up to the maximum scan duration or until we run out of documents
while (!m_docsToScan.empty()) {
scanQueue();
if (scanQueueInterrupted())
break;
}
commit();
@ -1714,22 +1848,8 @@ void Database::scanQueue()
Q_ASSERT(document_id != -1);
{
QString title, author, subject, keywords;
if (info.isPdf()) {
QPdfDocument doc;
if (doc.load(document_path) != QPdfDocument::Error::None) {
qWarning() << "ERROR: Could not load pdf" << document_id << document_path;
return updateFolderToIndex(folder_id, countForFolder);
}
title = doc.metaData(QPdfDocument::MetaDataField::Title).toString();
author = doc.metaData(QPdfDocument::MetaDataField::Author).toString();
subject = doc.metaData(QPdfDocument::MetaDataField::Subject).toString();
keywords = doc.metaData(QPdfDocument::MetaDataField::Keywords).toString();
// TODO(jared): metadata for Word documents?
}
try {
m_chunkStreamer.setDocument(info, document_id, embedding_model, title, author, subject, keywords);
m_chunkStreamer.setDocument(info, document_id, embedding_model);
} catch (const std::runtime_error &e) {
qWarning() << "LocalDocs ERROR:" << e.what();
goto dequeue;
@ -1761,7 +1881,13 @@ void Database::scanQueue()
dequeue:
auto item = guiCollectionItem(folder_id);
item.currentBytesToIndex -= info.file.size();
Q_ASSERT(item.currentBytesToIndex >= info.file.size());
if (item.currentBytesToIndex < info.file.size()) {
qWarning() << "Database ERROR: underflow in current bytes to index statistics";
item.currentBytesToIndex = 0;
} else {
item.currentBytesToIndex -= info.file.size();
}
updateGuiForCollectionItem(item);
return updateFolderToIndex(folder_id, countForFolder);
}
@ -1815,6 +1941,7 @@ void Database::start()
m_databaseValid = false;
} else {
cleanDB();
ftsIntegrityCheck();
QSqlQuery q(m_db);
if (!refreshDocumentIdCache(q)) {
m_databaseValid = false;
@ -2328,7 +2455,7 @@ QList<int> Database::searchBM25(const QString &query, const QList<QString> &coll
QList<BM25Query> bm25Queries = queriesForFTS5(query);
QSqlQuery sqlQuery(m_db);
sqlQuery.prepare(SELECT_CHUNKS_FTS_SQL.arg(k));
sqlQuery.prepare(SELECT_CHUNKS_FTS_SQL.arg(collections.join("', '"), QString::number(k)));
QList<SearchResult> results;
for (auto &bm25Query : std::as_const(bm25Queries)) {
@ -2346,11 +2473,13 @@ QList<int> Database::searchBM25(const QString &query, const QList<QString> &coll
}
}
do {
const int chunkId = sqlQuery.value(0).toInt();
const float score = sqlQuery.value(1).toFloat();
results.append({chunkId, score});
} while (sqlQuery.next());
if (sqlQuery.at() != QSql::AfterLastRow) {
do {
const int chunkId = sqlQuery.value(0).toInt();
const float score = sqlQuery.value(1).toFloat();
results.append({chunkId, score});
} while (sqlQuery.next());
}
k = qMin(k, results.size());
std::partial_sort(
@ -2483,7 +2612,7 @@ void Database::retrieveFromDB(const QList<QString> &collections, const QString &
return;
QSqlQuery q(m_db);
if (!selectChunk(q, searchResults, retrievalSize)) {
if (!selectChunk(q, searchResults)) {
qDebug() << "ERROR: selecting chunks:" << q.lastError();
return;
}
@ -2524,6 +2653,26 @@ void Database::retrieveFromDB(const QList<QString> &collections, const QString &
results->append(tempResults.value(id));
}
bool Database::ftsIntegrityCheck()
{
QSqlQuery q(m_db);
// Executing this SQL returns an error if the integrity check fails
// See: https://www.sqlite.org/fts5.html#the_integrity_check_command
const bool success = q.exec(FTS_INTEGRITY_SQL);
if (!success && q.lastError().nativeErrorCode() != "267" /*SQLITE_CORRUPT_VTAB from sqlite header*/) {
qWarning() << "ERROR: Cannot prepare sql for fts integrity check" << q.lastError();
return false;
}
if (!success && !q.exec(FTS_REBUILD_SQL)) {
qWarning() << "ERROR: Cannot exec sql for fts rebuild" << q.lastError();
return false;
}
return true;
}
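The FTS_INTEGRITY_SQL and FTS_REBUILD_SQL constants are defined in a part of the file not shown here; following the linked FTS5 documentation they presumably wrap the special 'integrity-check' and 'rebuild' commands, roughly as in this editor's sketch (the chunks_fts table name is an assumption, written in the file's existing uR"(...)" style):
static const QString FTS_INTEGRITY_SQL = uR"(
    insert into chunks_fts(chunks_fts) values('integrity-check');
)"_s;
static const QString FTS_REBUILD_SQL = uR"(
    insert into chunks_fts(chunks_fts) values('rebuild');
)"_s;
The first statement fails with SQLITE_CORRUPT_VTAB (267) when the index is inconsistent; the second regenerates the index from its content table.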
// FIXME This is very slow and non-interruptible. When we close the application while we're
// cleaning a large table, the app can take forever to shut down. Ideally this would be
// interruptible, and we would continue 'cleaning' when we restart.
@ -2574,7 +2723,7 @@ bool Database::cleanDB()
int document_id = q.value(0).toInt();
QString document_path = q.value(1).toString();
QFileInfo info(document_path);
if (info.exists() && info.isReadable() && m_scannedFileExtensions.contains(info.suffix()))
if (info.exists() && info.isReadable() && m_scannedFileExtensions.contains(info.suffix(), Qt::CaseInsensitive))
continue;
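With the Qt::CaseInsensitive flag, a file whose extension differs only in case (for example Report.PDF) now counts as scannable and is kept, where the previous case-sensitive check would have let cleanDB() drop its chunks from the database.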
#if defined(DEBUG)

View File

@ -1,7 +1,7 @@
#ifndef DATABASE_H
#define DATABASE_H
#include "embllm.h" // IWYU pragma: keep
#include "embllm.h"
#include <QByteArray>
#include <QChar>
@ -15,11 +15,11 @@
#include <QSet>
#include <QSqlDatabase>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QUrl>
#include <QVector>
#include <QtGlobal>
#include <QVector> // IWYU pragma: keep
#include <QtAssert>
#include <atomic>
#include <cstddef>
@ -28,7 +28,7 @@
#include <memory>
#include <optional>
#include <utility>
#include <vector>
#include <vector> // IWYU pragma: keep
using namespace Qt::Literals::StringLiterals;
@ -39,12 +39,23 @@ class QSqlQuery;
class QTextStream;
class QTimer;
/* Version 0: GPT4All v2.4.3, full-text search
* Version 1: GPT4All v2.5.3, embeddings in hnswlib
* Version 2: GPT4All v3.0.0, embeddings in sqlite */
* Version 2: GPT4All v3.0.0, embeddings in sqlite
* Version 3: GPT4All v3.4.0, hybrid search
*/
// minimum supported version
static const int LOCALDOCS_MIN_VER = 1;
// FIXME: (Adam) The next time we bump the version we should add triggers to manage the fts external
// content table as recommended in the official documentation to keep the fts index in sync
// See: https://www.sqlite.org/fts5.html#external_content_tables
// FIXME: (Adam) The fts virtual table should include the chunk_id explicitly instead of relying upon
// the id of the two tables to be in sync
// current version
static const int LOCALDOCS_VERSION = 3;
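The triggers the first FIXME refers to follow the external-content recipe in the linked FTS5 documentation; an editor's sketch, with table and column names assumed (a chunks content table whose chunk_text column is mirrored into chunks_fts), might look like:
static const QString FTS_SYNC_TRIGGERS_SQL = uR"(
    create trigger chunks_fts_insert after insert on chunks begin
        insert into chunks_fts(rowid, chunk_text) values (new.id, new.chunk_text);
    end;
    create trigger chunks_fts_delete after delete on chunks begin
        insert into chunks_fts(chunks_fts, rowid, chunk_text) values ('delete', old.id, old.chunk_text);
    end;
    create trigger chunks_fts_update after update on chunks begin
        insert into chunks_fts(chunks_fts, rowid, chunk_text) values ('delete', old.id, old.chunk_text);
        insert into chunks_fts(rowid, chunk_text) values (new.id, new.chunk_text);
    end;
)"_s;
FTS_SYNC_TRIGGERS_SQL and the column names are hypothetical, used only for this illustration.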
@ -161,8 +172,7 @@ public:
explicit ChunkStreamer(Database *database);
~ChunkStreamer();
void setDocument(const DocumentInfo &doc, int documentId, const QString &embeddingModel, const QString &title,
const QString &author, const QString &subject, const QString &keywords);
void setDocument(DocumentInfo doc, int documentId, const QString &embeddingModel);
std::optional<DocumentInfo::key_type> currentDocKey() const;
void reset();
@ -252,6 +262,7 @@ private:
void enqueueDocumentInternal(DocumentInfo &&info, bool prepend = false);
void enqueueDocuments(int folder_id, std::list<DocumentInfo> &&infos);
void scanQueue();
bool ftsIntegrityCheck();
bool cleanDB();
void addFolderToWatch(const QString &path);
void removeFolderFromWatch(const QString &path);

View File

@ -10,32 +10,37 @@
#include <QDebug>
#include <QGlobalStatic>
#include <QGuiApplication>
#include <QIODevice>
#include <QIODevice> // IWYU pragma: keep
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QKeyValueIterator>
#include <QLocale>
#include <QNetworkRequest>
#include <QPair>
#include <QPair> // IWYU pragma: keep
#include <QRegularExpression>
#include <QRegularExpressionMatch>
#include <QSettings>
#include <QSslConfiguration>
#include <QSslSocket>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QTextStream>
#include <QUrl>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <Qt>
#include <QtAssert>
#include <QtLogging>
#include <QtMinMax>
#include <algorithm>
#include <compare>
#include <cstddef>
#include <utility>
using namespace Qt::Literals::StringLiterals;
class MyDownload: public Download { };
Q_GLOBAL_STATIC(MyDownload, downloadInstance)
Download *Download::globalInstance()
@ -58,11 +63,6 @@ Download::Download()
m_startTime = QDateTime::currentDateTime();
}
static bool operator==(const ReleaseInfo& lhs, const ReleaseInfo& rhs)
{
return lhs.version == rhs.version;
}
std::strong_ordering Download::compareAppVersions(const QString &a, const QString &b)
{
static QRegularExpression versionRegex(R"(^(\d+(?:\.\d+){0,2})(-.+)?$)");
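Illustratively (editor's note), this pattern accepts one to three dot-separated numeric components plus an optional pre-release suffix: "3.10" and "3.10.1-dev0" match, with the numeric part and the "-dev0" suffix captured in separate groups, while "3.10.1.2" is rejected.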

View File

@ -13,10 +13,14 @@
#include <QSslError>
#include <QString>
#include <QThread>
#include <QtGlobal>
#include <QtTypes>
// IWYU pragma: no_forward_declare QFile
// IWYU pragma: no_forward_declare QList
// IWYU pragma: no_forward_declare QSslError
class QByteArray;
struct ReleaseInfo {
Q_GADGET
Q_PROPERTY(QString version MEMBER version)

View File

@ -1,35 +1,35 @@
#include "embllm.h"
#include "modellist.h"
#include "mysettings.h"
#include <gpt4all-backend/llmodel.h>
#include <QCoreApplication>
#include <QDebug>
#include <QFile>
#include <QFileInfo>
#include <QGuiApplication>
#include <QIODevice>
#include <QJsonArray>
#include <QJsonDocument>
#include <QJsonObject>
#include <QJsonValue>
#include <QList>
#include <QMutexLocker>
#include <QMutexLocker> // IWYU pragma: keep
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QUrl>
#include <Qt>
#include <QtGlobal>
#include <QtAssert>
#include <QtLogging>
#include <exception>
#include <string>
#include <utility>
#include <vector>
using namespace Qt::Literals::StringLiterals;
static const QString EMBEDDING_MODEL_NAME = u"nomic-embed-text-v1.5"_s;
static const QString LOCAL_EMBEDDING_MODEL = u"nomic-embed-text-v1.5.f16.gguf"_s;
@ -359,8 +359,11 @@ void EmbeddingLLMWorker::handleFinished()
if (retrievedData.isValid() && retrievedData.canConvert<QVector<EmbeddingChunk>>())
chunks = retrievedData.value<QVector<EmbeddingChunk>>();
QVariant response = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
QVariant response;
if (reply->error() != QNetworkReply::NoError) {
response = reply->attribute(QNetworkRequest::HttpStatusCodeAttribute);
Q_ASSERT(response.isValid());
}
bool ok;
int code = response.toInt(&ok);
if (!ok || code != 200) {

View File

@ -5,10 +5,10 @@
#include <QMutex>
#include <QObject>
#include <QString>
#include <QStringList>
#include <QStringList> // IWYU pragma: keep
#include <QThread>
#include <QVariant>
#include <QVector>
#include <QVector> // IWYU pragma: keep
#include <atomic>
#include <vector>
@ -16,6 +16,7 @@
class LLModel;
class QNetworkAccessManager;
struct EmbeddingChunk {
QString model; // TODO(jared): use to select model
int folder_id;

View File

@ -0,0 +1,76 @@
#include "jinja_helpers.h"
#include <QString>
#include <QUrl>
#include <ranges>
#include <string>
#include <utility>
namespace views = std::views;
using json = nlohmann::ordered_json;
json::object_t JinjaResultInfo::AsJson() const
{
return {
{ "collection", m_source->collection.toStdString() },
{ "path", m_source->path .toStdString() },
{ "file", m_source->file .toStdString() },
{ "title", m_source->title .toStdString() },
{ "author", m_source->author .toStdString() },
{ "date", m_source->date .toStdString() },
{ "text", m_source->text .toStdString() },
{ "page", m_source->page },
{ "file_uri", m_source->fileUri() .toStdString() },
};
}
json::object_t JinjaPromptAttachment::AsJson() const
{
return {
{ "url", m_attachment->url.toString() .toStdString() },
{ "file", m_attachment->file() .toStdString() },
{ "processed_content", m_attachment->processedContent().toStdString() },
};
}
json::object_t JinjaMessage::AsJson() const
{
json::object_t obj;
{
json::string_t role;
switch (m_item->type()) {
using enum MessageItem::Type;
case System: role = "system"; break;
case Prompt: role = "user"; break;
case Response: role = "assistant"; break;
case ToolResponse: role = "tool"; break;
}
obj.emplace_back("role", std::move(role));
}
{
QString content;
if (m_version == 0 && m_item->type() == MessageItem::Type::Prompt) {
content = m_item->bakedPrompt();
} else {
content = m_item->content();
}
obj.emplace_back("content", content.toStdString());
}
if (m_item->type() == MessageItem::Type::Prompt) {
{
auto sources = m_item->sources() | views::transform([](auto &r) {
return JinjaResultInfo(r).AsJson();
});
obj.emplace("sources", json::array_t(sources.begin(), sources.end()));
}
{
auto attachments = m_item->promptAttachments() | views::transform([](auto &pa) {
return JinjaPromptAttachment(pa).AsJson();
});
obj.emplace("prompt_attachments", json::array_t(attachments.begin(), attachments.end()));
}
}
return obj;
}
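For orientation, a user prompt carrying one retrieved source and no attachments serializes to an object of roughly this shape (editor's illustration; all values are made up):
{
  "role": "user",
  "content": "What does the report conclude?",
  "sources": [
    { "collection": "Work", "path": "/docs/report.pdf", "file": "report.pdf",
      "title": "Q4 Report", "author": "Jane Doe", "date": "2024-11-02",
      "text": "first chunk of retrieved text", "page": 3, "file_uri": "file:///docs/report.pdf" }
  ],
  "prompt_attachments": []
}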

Some files were not shown because too many files have changed in this diff.