Compare commits

...

17 Commits

Author SHA1 Message Date

Jacob Nguyen
2fda274601 add js side of dispose 2023-10-25 15:18:45 -05:00

Jacob Nguyen
1721cdbf0c untested dispose method 2023-10-25 15:04:01 -05:00

jacob
177172fe00 merge 2023-10-25 13:58:51 -05:00

Andriy Mulyar
3444a47cad Update README.md 2023-10-24 22:03:21 -04:00
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

Adam Treat
89a59e7f99 Bump version and add release notes for 2.5.1 2023-10-24 13:13:04 -04:00

cebtenzzre
f5dd74bcf0 models2.json: add tokenizer merges to mpt-7b-chat model (#1563) 2023-10-24 12:43:49 -04:00

cebtenzzre
78d930516d app.py: change default model to Mistral Instruct (#1564) 2023-10-24 12:43:30 -04:00

cebtenzzre
83b8eea611 README: add clear note about new GGUF format 2023-10-24 12:14:29 -04:00
Signed-off-by: cebtenzzre <cebtenzzre@gmail.com>

Andriy Mulyar
1bebe78c56 Update README.md 2023-10-24 12:05:46 -04:00
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

Andriy Mulyar
b75a209374 Update README.md 2023-10-24 12:04:19 -04:00
Signed-off-by: Andriy Mulyar <andriy.mulyar@gmail.com>

cebtenzzre
e90263c23f make scripts executable (#1555) 2023-10-24 09:28:21 -04:00

Aaron Miller
f414c28589 llmodel: whitelist library name patterns 2023-10-23 21:40:14 -07:00
This fixes some issues that were being seen on installed Windows builds of 2.5.0: only load DLLs that might actually be model implementation DLLs; otherwise we pull all sorts of random junk into the process before it expects to be loaded.
Signed-off-by: Aaron Miller <apage43@ninjawhale.com>

cebtenzzre
7e5e84fbb7 python: change default extension to .gguf (#1559) 2023-10-23 22:18:50 -04:00

cebtenzzre
37b007603a bindings: replace references to GGMLv3 models with GGUF (#1547) 2023-10-22 11:58:28 -04:00

cebtenzzre
c25dc51935 chat: fix syntax error in main.qml 2023-10-21 21:22:37 -07:00

Thomas
34daf240f9 Update Dockerfile.buildkit (#1542) 2023-10-21 14:56:06 -04:00
Corrected the model download directory.
Signed-off-by: Thomas <tvhdev@vonhaugwitz-softwaresolutions.de>

Victor Tsaran
721d854095 chat: improve accessibility fields (#1532) 2023-10-21 10:38:46 -04:00
Co-authored-by: cebtenzzre <cebtenzzre@gmail.com>
43 changed files with 187 additions and 121 deletions

View File

@@ -1,11 +1,9 @@
 <h1 align="center">GPT4All</h1>
-<p align="center">Open-source assistant-style large language models that run locally on your CPU</p>
+<p align="center">Open-source large language models that run locally on your CPU and nearly any GPU</p>
+<p align="center"><strong>New</strong>: Now with Nomic Vulkan Universal GPU support. <a href="https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan">Learn more</a>.</p>
 <p align="center">
-<a href="https://gpt4all.io">GPT4All Website</a>
+<a href="https://gpt4all.io">GPT4All Website and Models</a>
 </p>
 <p align="center">

@@ -32,14 +30,25 @@ Run on an M1 macOS Device (not sped up!)
 </p>
 ## GPT4All: An ecosystem of open-source on-edge large language models.
-GPT4All is an ecosystem to train and deploy **powerful** and **customized** large language models that run locally on consumer grade CPUs. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).
+> [!IMPORTANT]
+> GPT4All v2.5.0 and newer only supports models in GGUF format (.gguf). Models used with a previous version of GPT4All (.bin extension) will no longer work.
+GPT4All is an ecosystem to run **powerful** and **customized** large language models that work locally on consumer grade CPUs and any GPU. Note that your CPU needs to support [AVX or AVX2 instructions](https://en.wikipedia.org/wiki/Advanced_Vector_Extensions).
 Learn more in the [documentation](https://docs.gpt4all.io).
-The goal is simple - be the best instruction tuned assistant-style language model that any person or enterprise can freely use, distribute and build on.
 A GPT4All model is a 3GB - 8GB file that you can download and plug into the GPT4All open-source ecosystem software. **Nomic AI** supports and maintains this software ecosystem to enforce quality and security alongside spearheading the effort to allow any person or enterprise to easily train and deploy their own on-edge large language models.
+### What's New ([Issue Tracker](https://github.com/orgs/nomic-ai/projects/2))
+- **October 19th, 2023**: GGUF Support Launches with Support for:
+  - Mistral 7b base model, an updated model gallery on [gpt4all.io](https://gpt4all.io), several new local code models including Rift Coder v1.5
+  - [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) support for Q4_0, Q6 quantizations in GGUF.
+  - Offline build support for running old versions of the GPT4All Local LLM Chat Client.
+- **September 18th, 2023**: [Nomic Vulkan](https://blog.nomic.ai/posts/gpt4all-gpu-inference-with-vulkan) launches supporting local LLM inference on AMD, Intel, Samsung, Qualcomm and NVIDIA GPUs.
+- **August 15th, 2023**: GPT4All API launches allowing inference of local LLMs from docker containers.
+- **July 2023**: Stable support for LocalDocs, a GPT4All Plugin that allows you to privately and locally chat with your data.
 ### Chat Client
 Run any GPT4All model natively on your home desktop with the auto-updating desktop chat client. See <a href="https://gpt4all.io">GPT4All Website</a> for a full list of open-source models you can run with this powerful desktop application.

View File

@@ -18,6 +18,6 @@ COPY gpt4all_api/app /app
 RUN mkdir -p /models
 # Include the following line to bake a model into the image and not have to download it on API start.
-RUN wget -q --show-progress=off https://gpt4all.io/models/gguf/${MODEL_BIN} -P /models \
+RUN wget -q --show-progress=off https://gpt4all.io/models/${MODEL_BIN} -P /models \
 && md5sum /models/${MODEL_BIN}

@@ -1 +1 @@
-Subproject commit a8ed8c858985ef94d97a3cf2c97085b680c6d5d0
+Subproject commit 2dee60214b0001cf03e1cec0a53a61a17b55c1eb

View File

@@ -10,6 +10,7 @@
 #include <cassert>
 #include <cstdlib>
 #include <sstream>
+#include <regex>
 #ifdef _MSC_VER
 #include <intrin.h>
 #endif

@@ -81,6 +82,13 @@ const std::vector<LLModel::Implementation> &LLModel::Implementation::implementat
 static auto* libs = new std::vector<Implementation>([] () {
 std::vector<Implementation> fres;
+std::string impl_name_re = "(bert|llama|gptj|llamamodel-mainline)";
+if (requires_avxonly()) {
+impl_name_re += "-avxonly";
+} else {
+impl_name_re += "-(default|metal)";
+}
+std::regex re(impl_name_re);
 auto search_in_directory = [&](const std::string& paths) {
 std::stringstream ss(paths);
 std::string path;

@@ -90,7 +98,10 @@ const std::vector<LLModel::Implementation> &LLModel::Implementation::implementat
 // Iterate over all libraries
 for (const auto& f : std::filesystem::directory_iterator(fs_path)) {
 const std::filesystem::path& p = f.path();
 if (p.extension() != LIB_FILE_EXT) continue;
+if (!std::regex_search(p.stem().string(), re)) continue;
 // Add to list if model implementation
 try {
 Dlhandle dl(p.string());

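The hunk above whitelists implementation libraries by name before dlopening them. As a rough illustration, the same filtering logic sketched in Python (the pattern is taken from the diff; `avx_only` stands in for the C++ `requires_avxonly()` check, and the example file stems are hypothetical):

```python
import re

def build_impl_name_re(avx_only: bool) -> re.Pattern:
    # Mirrors the pattern built in llmodel.cpp above.
    impl_name_re = "(bert|llama|gptj|llamamodel-mainline)"
    impl_name_re += "-avxonly" if avx_only else "-(default|metal)"
    return re.compile(impl_name_re)

pattern = build_impl_name_re(avx_only=False)
# std::regex_search in the C++ code is a substring search, like re.search here.
print(bool(pattern.search("llamamodel-mainline-default")))  # True  -> candidate impl
print(bool(pattern.search("llama-metal")))                  # True  -> candidate impl
print(bool(pattern.search("libcurl")))                      # False -> skipped
```
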
View File

@@ -40,5 +40,5 @@ directory, if necessary.
 If you have already saved a model beforehand, specify its path with the `-m`/`--model` argument,
 for example:
 ```shell
-python app.py repl --model /home/user/my-gpt4all-models/GPT4All-13B-snoozy.ggmlv3.q4_0.bin
+python app.py repl --model /home/user/my-gpt4all-models/gpt4all-13b-snoozy-q4_0.gguf
 ```

gpt4all-bindings/cli/app.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 """GPT4All CLI
 The GPT4All CLI is a self-contained script based on the `gpt4all` and `typer` packages. It offers a

@@ -53,7 +54,7 @@ def repl(
 model: Annotated[
 str,
 typer.Option("--model", "-m", help="Model to use for chatbot"),
-] = "ggml-gpt4all-j-v1.3-groovy",
+] = "mistral-7b-instruct-v0.1.Q4_0.gguf",
 n_threads: Annotated[
 int,
 typer.Option("--n-threads", "-t", help="Number of threads to use for chatbot"),

View File

@@ -1,3 +1,4 @@
+#!/bin/sh
 mkdir -p runtimes
 rm -rf runtimes/linux-x64
 mkdir -p runtimes/linux-x64/native

View File

@@ -50,7 +50,7 @@ Test it out! In a Python script or console:
 ```python
 from gpt4all import GPT4All
-model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
+model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
 output = model.generate("The capital of France is ", max_tokens=3)
 print(output)
 ```

@@ -59,7 +59,7 @@ print(output)
 GPU Usage
 ```python
 from gpt4all import GPT4All
-model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin", device='gpu') # device='amd', device='intel'
+model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf", device='gpu') # device='amd', device='intel'
 output = model.generate("The capital of France is ", max_tokens=3)
 print(output)
 ```

View File

@@ -166,7 +166,7 @@ If you want to use a different model, you can do so with the `-m`/`--model` para
 model file name is provided, it will again check in `.cache/gpt4all/` and might start downloading.
 If instead given a path to an existing model, the command could for example look like this:
 ```shell
-python app.py repl --model /home/user/my-gpt4all-models/GPT4All-13B-snoozy.ggmlv3.q4_0.bin
+python app.py repl --model /home/user/my-gpt4all-models/gpt4all-13b-snoozy-q4_0.gguf
 ```
 When you're done and want to end a session, simply type `/exit`.

View File

@@ -11,7 +11,7 @@ pip install gpt4all
 === "GPT4All Example"
 ``` py
 from gpt4all import GPT4All
-model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
+model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
 output = model.generate("The capital of France is ", max_tokens=3)
 print(output)
 ```

@@ -35,7 +35,7 @@ Use the GPT4All `chat_session` context manager to hold chat conversations with t
 === "GPT4All Example"
 ``` py
-model = GPT4All(model_name='orca-mini-3b.ggmlv3.q4_0.bin')
+model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf')
 with model.chat_session():
 response1 = model.generate(prompt='hello', temp=0)
 response2 = model.generate(prompt='write me a short poem', temp=0)

@@ -89,7 +89,7 @@ To interact with GPT4All responses as the model generates, use the `streaming=Tr
 === "GPT4All Streaming Example"
 ``` py
 from gpt4all import GPT4All
-model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
+model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
 tokens = []
 for token in model.generate("The capital of France is", max_tokens=20, streaming=True):
 tokens.append(token)

@@ -135,7 +135,7 @@ is the same as if it weren't provided; that is, `~/.cache/gpt4all/` is the defau
 ``` py
 from pathlib import Path
 from gpt4all import GPT4All
-model = GPT4All(model_name='orca-mini-3b.ggmlv3.q4_0.bin',
+model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf',
 model_path=(Path.home() / '.cache' / 'gpt4all'),
 allow_download=False)
 response = model.generate('my favorite 3 fruits are:', temp=0)

@@ -152,7 +152,7 @@ If you want to point it at the chat GUI's default folder, it should be:
 from pathlib import Path
 from gpt4all import GPT4All
-model_name = 'orca-mini-3b.ggmlv3.q4_0.bin'
+model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
 model_path = Path.home() / 'Library' / 'Application Support' / 'nomic.ai' / 'GPT4All'
 model = GPT4All(model_name, model_path)
 ```

@@ -161,7 +161,7 @@ If you want to point it at the chat GUI's default folder, it should be:
 from pathlib import Path
 from gpt4all import GPT4All
 import os
-model_name = 'orca-mini-3b.ggmlv3.q4_0.bin'
+model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
 model_path = Path(os.environ['LOCALAPPDATA']) / 'nomic.ai' / 'GPT4All'
 model = GPT4All(model_name, model_path)
 ```

@@ -170,7 +170,7 @@ If you want to point it at the chat GUI's default folder, it should be:
 from pathlib import Path
 from gpt4all import GPT4All
-model_name = 'orca-mini-3b.ggmlv3.q4_0.bin'
+model_name = 'orca-mini-3b-gguf2-q4_0.gguf'
 model_path = Path.home() / '.local' / 'share' / 'nomic.ai' / 'GPT4All'
 model = GPT4All(model_name, model_path)
 ```

@@ -182,7 +182,7 @@ from pathlib import Path
 import gpt4all.gpt4all
 gpt4all.gpt4all.DEFAULT_MODEL_DIRECTORY = Path.home() / 'my' / 'models-directory'
 from gpt4all import GPT4All
-model = GPT4All('orca-mini-3b.ggmlv3.q4_0.bin')
+model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
 ...
 ```

@@ -193,7 +193,7 @@ Session templates can be customized when starting a `chat_session` context:
 === "GPT4All Custom Session Templates Example"
 ``` py
 from gpt4all import GPT4All
-model = GPT4All('ggml-Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin')
+model = GPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
 system_template = 'A chat between a curious user and an artificial intelligence assistant.'
 # many models use triple hash '###' for keywords, Vicunas are simpler:
 prompt_template = 'USER: {0}\nASSISTANT: '

@@ -222,7 +222,7 @@ To do the same outside a session, the input has to be formatted manually. For ex
 === "GPT4All Templates Outside a Session Example"
 ``` py
-model = GPT4All('ggml-Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin')
+model = GPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
 system_template = 'A chat between a curious user and an artificial intelligence assistant.'
 prompt_template = 'USER: {0}\nASSISTANT: '
 prompts = ['name 3 colors', 'now name 3 fruits', 'what were the 3 colors in your earlier response?']

@@ -285,7 +285,7 @@ customized in a subclass. As an example:
 ```
 === "GPT4All Custom Subclass Example"
 ``` py
-model = RotatingTemplateGPT4All('ggml-Wizard-Vicuna-7B-Uncensored.ggmlv3.q4_1.bin')
+model = RotatingTemplateGPT4All('wizardlm-13b-v1.2.Q4_0.gguf')
 with model.chat_session(): # starting a session is optional in this example
 response1 = model.generate("hi, who are you?")
 print(response1)

@@ -345,7 +345,7 @@ logging infrastructure offers [many more customization options][py-logging-cookb
 import logging
 from gpt4all import GPT4All
 logging.basicConfig(level=logging.INFO)
-model = GPT4All('nous-hermes-13b.ggmlv3.q4_0.bin')
+model = GPT4All('nous-hermes-llama2-13b.Q4_0.gguf')
 with model.chat_session('You are a geography expert.\nBe terse.',
 '### Instruction:\n{0}\n### Response:\n'):
 response = model.generate('who are you?', temp=0)

@@ -414,7 +414,7 @@ If you know exactly when a model should stop responding, you can add a custom ca
 === "GPT4All Custom Stop Callback"
 ``` py
 from gpt4all import GPT4All
-model = GPT4All('orca-mini-3b.ggmlv3.q4_0.bin')
+model = GPT4All('orca-mini-3b-gguf2-q4_0.gguf')
 def stop_on_token_callback(token_id, token_string):
 # one sentence is enough:

View File

@@ -9,7 +9,7 @@ GPT4All software is optimized to run inference of 3-13 billion parameter large l
 === "GPT4All Example"
 ``` py
 from gpt4all import GPT4All
-model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
+model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
 output = model.generate("The capital of France is ", max_tokens=3)
 print(output)
 ```

View File

@@ -75,7 +75,7 @@ class GPT4All:
 Constructor
 Args:
-model_name: Name of GPT4All or custom model. Including ".bin" file extension is optional but encouraged.
+model_name: Name of GPT4All or custom model. Including ".gguf" file extension is optional but encouraged.
 model_path: Path to directory containing model file or, if file does not exist, where to download model.
 Default is None, in which case models will be stored in `~/.cache/gpt4all/`.
 model_type: Model architecture. This argument currently does not have any functionality and is just used as

@@ -141,7 +141,7 @@ class GPT4All:
 Model config.
 """
-model_filename = append_bin_suffix_if_missing(model_name)
+model_filename = append_extension_if_missing(model_name)
 # get the config for the model
 config: ConfigType = DEFAULT_MODEL_CONFIG

@@ -201,7 +201,7 @@ class GPT4All:
 Download model from https://gpt4all.io.
 Args:
-model_filename: Filename of model (with .bin extension).
+model_filename: Filename of model (with .gguf extension).
 model_path: Path to download model to.
 verbose: If True (default), print debug messages.
 url: the models remote url (e.g. may be hosted on HF)

@@ -456,7 +456,7 @@ def empty_chat_session(system_prompt: str = "") -> List[MessageType]:
 return [{"role": "system", "content": system_prompt}]
-def append_bin_suffix_if_missing(model_name):
+def append_extension_if_missing(model_name):
 if not model_name.endswith((".bin", ".gguf")):
-model_name += ".bin"
+model_name += ".gguf"
 return model_name

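The renamed helper now defaults new model names to the GGUF extension while leaving explicit `.bin` or `.gguf` names untouched. A quick standalone illustration of the behavior (the function body is copied from the diff; the example names are arbitrary):

```python
def append_extension_if_missing(model_name):
    if not model_name.endswith((".bin", ".gguf")):
        model_name += ".gguf"
    return model_name

print(append_extension_if_missing("mistral-7b-instruct-v0.1.Q4_0"))  # .gguf appended
print(append_extension_if_missing("orca-mini-3b-gguf2-q4_0.gguf"))   # unchanged
print(append_extension_if_missing("ggml-old-model.bin"))             # legacy name passes through
```
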
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import sys
 import time
 from io import StringIO

View File

@@ -8,7 +8,7 @@ import pytest
 def test_inference():
-model = GPT4All(model_name='orca-mini-3b.ggmlv3.q4_0.bin')
+model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf')
 output_1 = model.generate('hello', top_k=1)
 with model.chat_session():

@@ -47,49 +47,44 @@ def do_long_input(model):
 def test_inference_long_orca_3b():
-model = GPT4All(model_name="orca-mini-3b.ggmlv3.q4_0.bin")
+model = GPT4All(model_name="orca-mini-3b-gguf2-q4_0.gguf")
 do_long_input(model)

 def test_inference_long_falcon():
-model = GPT4All(model_name='ggml-model-gpt4all-falcon-q4_0.bin')
+model = GPT4All(model_name='gpt4all-falcon-q4_0.gguf')
 do_long_input(model)

 def test_inference_long_llama_7b():
-model = GPT4All(model_name="orca-mini-7b.ggmlv3.q4_0.bin")
+model = GPT4All(model_name="mistral-7b-openorca.Q4_0.gguf")
 do_long_input(model)

 def test_inference_long_llama_13b():
-model = GPT4All(model_name='ggml-nous-hermes-13b.ggmlv3.q4_0.bin')
+model = GPT4All(model_name='nous-hermes-llama2-13b.Q4_0.gguf')
 do_long_input(model)

 def test_inference_long_mpt():
-model = GPT4All(model_name='ggml-mpt-7b-chat.bin')
+model = GPT4All(model_name='mpt-7b-chat-q4_0.gguf')
 do_long_input(model)

 def test_inference_long_replit():
-model = GPT4All(model_name='ggml-replit-code-v1-3b.bin')
-do_long_input(model)
-
-def test_inference_long_groovy():
-model = GPT4All(model_name='ggml-gpt4all-j-v1.3-groovy.bin')
+model = GPT4All(model_name='replit-code-v1_5-3b-q4_0.gguf')
 do_long_input(model)

 def test_inference_hparams():
-model = GPT4All(model_name='orca-mini-3b.ggmlv3.q4_0.bin')
+model = GPT4All(model_name='orca-mini-3b-gguf2-q4_0.gguf')
 output = model.generate("The capital of france is ", max_tokens=3)
 assert 'Paris' in output

 def test_inference_falcon():
-model = GPT4All(model_name='ggml-model-gpt4all-falcon-q4_0.bin')
+model = GPT4All(model_name='gpt4all-falcon-q4_0.gguf')
 prompt = 'hello'
 output = model.generate(prompt)
 assert isinstance(output, str)

@@ -97,7 +92,7 @@ def test_inference_falcon():
 def test_inference_mpt():
-model = GPT4All(model_name='ggml-mpt-7b-chat.bin')
+model = GPT4All(model_name='mpt-7b-chat-q4_0.gguf')
 prompt = 'hello'
 output = model.generate(prompt)
 assert isinstance(output, str)

View File

@@ -61,7 +61,7 @@ copy_prebuilt_C_lib(SRC_CLIB_DIRECtORY,
 setup(
 name=package_name,
-version="2.0.0",
+version="2.0.1",
 description="Python bindings for GPT4All",
 author="Nomic and the Open Source Community",
 author_email="support@nomic.ai",

View File

@@ -15,7 +15,8 @@ Napi::Function NodeModelWrapper::GetClass(Napi::Env env) {
 InstanceMethod("initGpuByString", &NodeModelWrapper::InitGpuByString),
 InstanceMethod("hasGpuDevice", &NodeModelWrapper::HasGpuDevice),
 InstanceMethod("listGpu", &NodeModelWrapper::GetGpuDevices),
-InstanceMethod("memoryNeeded", &NodeModelWrapper::GetRequiredMemory)
+InstanceMethod("memoryNeeded", &NodeModelWrapper::GetRequiredMemory),
+InstanceMethod("dispose", &NodeModelWrapper::Dispose)
 });
 // Keep a static reference to the constructor
 //

@@ -313,7 +314,9 @@ Napi::Value NodeModelWrapper::GetRequiredMemory(const Napi::CallbackInfo& info)
 threadSafeContext->nativeThread = std::thread(threadEntry, threadSafeContext);
 return threadSafeContext->deferred_.Promise();
 }
+void NodeModelWrapper::Dispose(const Napi::CallbackInfo& info) {
+llmodel_model_destroy(inference_);
+}
 void NodeModelWrapper::SetThreadCount(const Napi::CallbackInfo& info) {
 if(info[0].IsNumber()) {
 llmodel_setThreadCount(GetInference(), info[0].As<Napi::Number>().Int64Value());

View File

@@ -24,6 +24,7 @@ public:
 */
 Napi::Value Prompt(const Napi::CallbackInfo& info);
 void SetThreadCount(const Napi::CallbackInfo& info);
+void Dispose(const Napi::CallbackInfo& info);
 Napi::Value getName(const Napi::CallbackInfo& info);
 Napi::Value ThreadCount(const Napi::CallbackInfo& info);
 Napi::Value GenerateEmbedding(const Napi::CallbackInfo& info);

gpt4all-bindings/typescript/scripts/build_unix.sh Normal file → Executable file
View File

View File

@@ -42,6 +42,8 @@ const completion2 = await createCompletion(model, [
 console.log(completion2.choices[0].message)
+//CALLING DISPOSE WILL INVALID THE NATIVE MODEL. USE THIS TO CLEANUP
+model.dispose()
 // At the moment, from testing this code, concurrent model prompting is not possible.
 // Behavior: The last prompt gets answered, but the rest are cancelled
 // my experience with threading is not the best, so if anyone who is good is willing to give this a shot,

View File

@@ -61,6 +61,11 @@ declare class InferenceModel {
 prompt: string,
 options?: Partial<LLModelPromptContext>
 ): Promise<string>;
+/**
+* delete and cleanup the native model
+*/
+dispose(): void
 }

 declare class EmbeddingModel {

@@ -69,6 +74,12 @@ declare class EmbeddingModel {
 config: ModelConfig;
 embed(text: string): Float32Array;
+/**
+* delete and cleanup the native model
+*/
+dispose(): void
 }

 /**

@@ -163,6 +174,11 @@ declare class LLModel {
 * @returns
 */
 listGpu() : GpuDevice[]
+/**
+* delete and cleanup the native model
+*/
+dispose(): void
 }
 /**
 * an object that contains gpu data on this machine.

View File

@@ -15,6 +15,10 @@ class InferenceModel {
 const result = this.llm.raw_prompt(prompt, normalizedPromptContext, () => {});
 return result;
 }
+dispose() {
+this.llm.dispose();
+}
 }

 class EmbeddingModel {

@@ -29,6 +33,10 @@ class EmbeddingModel {
 embed(text) {
 return this.llm.embed(text)
 }
+dispose() {
+this.llm.dispose();
+}
 }

View File

@@ -18,7 +18,7 @@ endif()
 set(APP_VERSION_MAJOR 2)
 set(APP_VERSION_MINOR 5)
-set(APP_VERSION_PATCH 1)
+set(APP_VERSION_PATCH 2)
 set(APP_VERSION "${APP_VERSION_MAJOR}.${APP_VERSION_MINOR}.${APP_VERSION_PATCH}")
 # Include the binary directory for the generated header file

gpt4all-chat/cmake/sign_dmg.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import os
 import subprocess
 import tempfile

View File

@@ -282,8 +282,8 @@ Window {
 highlighted: comboBox.highlightedIndex === index
 }
 Accessible.role: Accessible.ComboBox
-Accessible.name: qsTr("ComboBox for displaying/picking the current model")
-Accessible.description: qsTr("Use this for picking the current model to use; the first item is the current model")
+Accessible.name: qsTr("List of available models")
+Accessible.description: qsTr("The top item is the current model")
 onActivated: function (index) {
 currentChat.stopGenerating()
 currentChat.reset();

@@ -307,7 +307,7 @@ Window {
 running: parent.visible
 Accessible.role: Accessible.Animation
 Accessible.name: qsTr("Busy indicator")
-Accessible.description: qsTr("Displayed when the model is loading")
+Accessible.description: qsTr("loading model...")
 }
 Label {

@@ -339,8 +339,8 @@ Window {
 padding: 15
 Accessible.role: Accessible.ButtonMenu
-Accessible.name: qsTr("Hamburger button")
-Accessible.description: qsTr("Hamburger button that reveals a drawer on the left of the application")
+Accessible.name: qsTr("Main menu")
+Accessible.description: qsTr("Navigation drawer with options")
 background: Item {
 anchors.centerIn: parent

@@ -389,7 +389,7 @@ Window {
 Item {
 Accessible.role: Accessible.Dialog
 Accessible.name: qsTr("Network dialog")
-Accessible.description: qsTr("Dialog for opt-in to sharing feedback/conversations")
+Accessible.description: qsTr("opt-in to share feedback/conversations")
 }
 }

@@ -405,7 +405,7 @@ Window {
 padding: 15
 toggled: MySettings.networkIsActive
 source: "qrc:/gpt4all/icons/network.svg"
-Accessible.name: qsTr("Network button")
+Accessible.name: qsTr("Network")
 Accessible.description: qsTr("Reveals a dialogue where you can opt-in for sharing data over network")
 onClicked: {

@@ -441,8 +441,8 @@ Window {
 padding: 15
 toggled: currentChat.collectionList.length
 source: "qrc:/gpt4all/icons/db.svg"
-Accessible.name: qsTr("Add collections of documents to the chat")
-Accessible.description: qsTr("Provides a button to add collections of documents to the chat")
+Accessible.name: qsTr("Add documents")
+Accessible.description: qsTr("add collections of documents to the chat")
 onClicked: {
 collectionsDialog.open()

@@ -460,8 +460,8 @@ Window {
 z: 200
 padding: 15
 source: "qrc:/gpt4all/icons/settings.svg"
-Accessible.name: qsTr("Settings button")
-Accessible.description: qsTr("Reveals a dialogue where you can change various settings")
+Accessible.name: qsTr("Settings")
+Accessible.description: qsTr("Reveals a dialogue with settings")
 onClicked: {
 settingsDialog.open()

@@ -528,7 +528,7 @@ Window {
 z: 200
 padding: 15
 source: "qrc:/gpt4all/icons/copy.svg"
-Accessible.name: qsTr("Copy button")
+Accessible.name: qsTr("Copy")
 Accessible.description: qsTr("Copy the conversation to the clipboard")
 TextEdit{

@@ -595,7 +595,7 @@ Window {
 source: "qrc:/gpt4all/icons/regenerate.svg"
 Accessible.name: text
-Accessible.description: qsTr("Reset the context which erases current conversation")
+Accessible.description: qsTr("Reset the context and erase current conversation")
 onClicked: {
 Network.sendResetContext(chatModel.count)

@@ -623,7 +623,7 @@ Window {
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.Dialog
 Accessible.name: text
-Accessible.description: qsTr("Dialog indicating an error")
+Accessible.description: qsTr("Error dialog")
 }
 background: Rectangle {
 anchors.fill: parent

@@ -641,7 +641,7 @@ Window {
 height: window.height - (window.height * .1)
 Item {
 Accessible.role: Accessible.Dialog
-Accessible.name: qsTr("Download new models dialog")
+Accessible.name: qsTr("Download new models")
 Accessible.description: qsTr("Dialog for downloading new models")
 }
 }

@@ -740,8 +740,8 @@ Window {
 ScrollBar.vertical: ScrollBar { policy: ScrollBar.AlwaysOn }
 Accessible.role: Accessible.List
-Accessible.name: qsTr("List of prompt/response pairs")
-Accessible.description: qsTr("This is the list of prompt/response pairs comprising the actual conversation with the model")
+Accessible.name: qsTr("Conversation with the model")
+Accessible.description: qsTr("prompt / response pairs from the conversation")
 delegate: TextArea {
 id: myTextArea

@@ -811,7 +811,7 @@ Window {
 running: (currentResponse ? true : false) && value === "" && currentChat.responseInProgress
 Accessible.role: Accessible.Animation
 Accessible.name: qsTr("Busy indicator")
-Accessible.description: qsTr("Displayed when the model is thinking")
+Accessible.description: qsTr("The model is thinking")
 }
 Label {
 anchors.verticalCenter: parent.verticalCenter

@@ -1053,7 +1053,7 @@ Window {
 }
 Accessible.role: Accessible.EditableText
 Accessible.name: placeholderText
-Accessible.description: qsTr("Textfield for sending messages/prompts to the model")
+Accessible.description: qsTr("Send messages/prompts to the model")
 Keys.onReturnPressed: (event)=> {
 if (event.modifiers & Qt.ControlModifier || event.modifiers & Qt.ShiftModifier)
 event.accepted = false;

@@ -1090,7 +1090,7 @@ Window {
 height: 30
 visible: !currentChat.isServer
 source: "qrc:/gpt4all/icons/send_message.svg"
-Accessible.name: qsTr("Send the message button")
+Accessible.name: qsTr("Send message")
 Accessible.description: qsTr("Sends the message/prompt contained in textfield to the model")
 onClicked: {

View File

@@ -94,17 +94,17 @@
 },
 {
 "order": "h",
-"md5sum": "f5bc6a52f72efd9128efb2eeed802c86",
+"md5sum": "cf5e8f73747f9d7c6fe72a629808c1de",
 "name": "MPT Chat",
-"filename": "mpt-7b-chat-q4_0.gguf",
-"filesize": "3911522272",
+"filename": "mpt-7b-chat-merges-q4_0.gguf",
+"filesize": "3796133728",
 "requires": "2.5.0",
 "ramrequired": "8",
 "parameters": "7 billion",
 "quant": "q4_0",
 "type": "MPT",
 "description": "<strong>Good model with novel architecture</strong><br><ul><li>Fast responses<li>Chat based<li>Trained by Mosaic ML<li>Cannot be used commercially</ul>",
-"url": "https://gpt4all.io/models/gguf/mpt-7b-chat-q4_0.gguf",
+"url": "https://gpt4all.io/models/gguf/mpt-7b-chat-merges-q4_0.gguf",
 "promptTemplate": "<|im_start|>user\n%1<|im_end|><|im_start|>assistant\n",
 "systemPrompt": "<|im_start|>system\n- You are a helpful assistant chatbot trained by MosaicML.\n- You answer questions.\n- You are excited to be able to help the user, but will refuse to do anything that could be considered harmful to the user.\n- You are more than just an information source, you are also able to write poetry, short stories, and make jokes.<|im_end|>"
 },

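The updated entry swaps in the new merges file along with its md5sum and filesize. As a hypothetical sketch (not part of this change set), the metadata fields shown above are enough to verify a downloaded model file:

```python
import hashlib
import os

def verify_model(entry: dict, models_dir: str) -> bool:
    """Check a downloaded file against its models2.json metadata."""
    path = os.path.join(models_dir, entry["filename"])
    if os.path.getsize(path) != int(entry["filesize"]):
        return False
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    return md5.hexdigest() == entry["md5sum"]

# Field values from the diff above; the models directory is an assumption.
entry = {
    "filename": "mpt-7b-chat-merges-q4_0.gguf",
    "filesize": "3796133728",
    "md5sum": "cf5e8f73747f9d7c6fe72a629808c1de",
}
print(verify_model(entry, os.path.expanduser("~/.cache/gpt4all")))
```
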
View File

@@ -550,6 +550,21 @@
 * Jared Van Bortel (Nomic AI)
 * Adam Treat (Nomic AI)
 * Community (beta testers, bug reporters, bindings authors)
+"
+},
+{
+"version": "2.5.1",
+"notes":
+"
+* Accessibility fixes
+* Bugfix for crasher on Windows
+",
+"contributors":
+"
+* Aaron Miller (Nomic AI)
+* Jared Van Bortel (Nomic AI)
+* Victor Tsaran <vtsaran@yahoo.com>
+* Community (beta testers, bug reporters, bindings authors)
 "
 }
 ]

View File

@@ -35,8 +35,8 @@ MySettingsTab {
 Layout.fillWidth: false
 model: ["Dark", "Light"]
 Accessible.role: Accessible.ComboBox
-Accessible.name: qsTr("ComboBox for displaying/picking the color theme")
-Accessible.description: qsTr("Use this for picking the color theme for the chat client to use")
+Accessible.name: qsTr("Color theme")
+Accessible.description: qsTr("Color theme for the chat client to use")
 function updateModel() {
 themeBox.currentIndex = themeBox.indexOfValue(MySettings.chatTheme);
 }

@@ -70,8 +70,8 @@ MySettingsTab {
 Layout.fillWidth: false
 model: ["Small", "Medium", "Large"]
 Accessible.role: Accessible.ComboBox
-Accessible.name: qsTr("ComboBox for displaying/picking the font size")
-Accessible.description: qsTr("Use this for picking the font size of the chat client")
+Accessible.name: qsTr("Font size")
+Accessible.description: qsTr("Font size of the chat client")
 function updateModel() {
 fontBox.currentIndex = fontBox.indexOfValue(MySettings.fontSize);
 }

@@ -105,8 +105,8 @@ MySettingsTab {
 Layout.fillWidth: false
 model: MySettings.deviceList
 Accessible.role: Accessible.ComboBox
-Accessible.name: qsTr("ComboBox for displaying/picking the device")
-Accessible.description: qsTr("Use this for picking the device of the chat client")
+Accessible.name: qsTr("Device")
+Accessible.description: qsTr("Device of the chat client")
 function updateModel() {
 deviceBox.currentIndex = deviceBox.indexOfValue(MySettings.device);
 }

@@ -143,8 +143,8 @@ MySettingsTab {
 Layout.fillWidth: true
 model: ModelList.userDefaultModelList
 Accessible.role: Accessible.ComboBox
-Accessible.name: qsTr("ComboBox for displaying/picking the default model")
-Accessible.description: qsTr("Use this for picking the default model to use; the first item is the current default model")
+Accessible.name: qsTr("Default model")
+Accessible.description: qsTr("Default model to use; the first item is the current default model")
 function updateModel() {
 comboBox.currentIndex = comboBox.indexOfValue(MySettings.userDefaultModel);
 }

@@ -194,7 +194,7 @@ MySettingsTab {
 Layout.row: 5
 Layout.column: 2
 text: qsTr("Browse")
-Accessible.description: qsTr("Opens a folder picker dialog to choose where to save model files")
+Accessible.description: qsTr("Choose where to save model files")
 onClicked: {
 openFolderDialog("file://" + MySettings.modelPath, function(selectedFolder) {
 MySettings.modelPath = selectedFolder

View File

@@ -31,8 +31,8 @@ Drawer {
 anchors.margins: 10
 Accessible.role: Accessible.Pane
-Accessible.name: qsTr("Drawer on the left of the application")
-Accessible.description: qsTr("Drawer that is revealed by pressing the hamburger button")
+Accessible.name: qsTr("Drawer")
+Accessible.description: qsTr("Main navigation drawer")
 MyButton {
 id: newChat

@@ -42,7 +42,7 @@ Drawer {
 topPadding: 20
 bottomPadding: 20
 text: qsTr("\uFF0B New chat")
-Accessible.description: qsTr("Use this to create a new chat")
+Accessible.description: qsTr("Create a new chat")
 background: Rectangle {
 border.color: newChat.down ? theme.backgroundLightest : theme.buttonBorder
 border.width: 2

@@ -135,7 +135,7 @@ Drawer {
 }
 Accessible.role: Accessible.Button
 Accessible.name: qsTr("Select the current chat")
-Accessible.description: qsTr("Provides a button to select the current chat or edit the chat when in edit mode")
+Accessible.description: qsTr("Select the current chat or edit the chat when in edit mode")
 }
 Row {
 id: buttons

@@ -155,8 +155,7 @@ Drawer {
 chatName.readOnly = false
 chatName.selectByMouse = true
 }
-Accessible.name: qsTr("Edit the chat name")
-Accessible.description: qsTr("Provides a button to edit the chat name")
+Accessible.name: qsTr("Edit chat name")
 }
 MyToolButton {
 id: trashButton

@@ -168,8 +167,7 @@ Drawer {
 trashQuestionDisplayed = true
 timer.start()
 }
-Accessible.name: qsTr("Delete of the chat")
-Accessible.description: qsTr("Provides a button to delete the chat")
+Accessible.name: qsTr("Delete chat")
 }
 }
 Rectangle {

@@ -207,8 +205,7 @@ Drawer {
 Network.sendRemoveChat()
 }
 Accessible.role: Accessible.Button
-Accessible.name: qsTr("Confirm delete of the chat")
-Accessible.description: qsTr("Provides a button to confirm delete of the chat")
+Accessible.name: qsTr("Confirm chat deletion")
 }
 Button {
 id: cancel

@@ -230,8 +227,7 @@ Drawer {
 trashQuestionDisplayed = false
 }
 Accessible.role: Accessible.Button
-Accessible.name: qsTr("Cancel the delete of the chat")
-Accessible.description: qsTr("Provides a button to cancel delete of the chat")
+Accessible.name: qsTr("Cancel chat deletion")
 }
 }
 }

@@ -256,7 +252,7 @@ Drawer {
 anchors.bottomMargin: 10
 text: qsTr("Updates")
 font.pixelSize: theme.fontSizeLarge
-Accessible.description: qsTr("Use this to launch an external application that will check for updates to the installer")
+Accessible.description: qsTr("Launch an external application that will check for updates to the installer")
 onClicked: {
 if (!LLM.checkForUpdates())
 checkForUpdatesError.open()

@@ -270,7 +266,7 @@ Drawer {
 anchors.bottom: aboutButton.top
 anchors.bottomMargin: 10
 text: qsTr("Downloads")
-Accessible.description: qsTr("Use this to launch a dialog to download new models")
+Accessible.description: qsTr("Launch a dialog to download new models")
 onClicked: {
 downloadClicked()
 }

@@ -282,7 +278,7 @@ Drawer {
 anchors.right: parent.right
 anchors.bottom: parent.bottom
 text: qsTr("About")
-Accessible.description: qsTr("Use this to launch a dialog to show the about page")
+Accessible.description: qsTr("Launch a dialog to show the about page")
 onClicked: {
 aboutClicked()
 }

View File

@@ -83,7 +83,7 @@ MySettingsTab {
 text: qsTr("Add")
 Accessible.role: Accessible.Button
 Accessible.name: text
-Accessible.description: qsTr("Add button")
+Accessible.description: qsTr("Add collection")
 onClicked: {
 var isError = false;
 if (root.collection === "") {

View File

@@ -125,7 +125,7 @@ MyDialog {
 Layout.fillWidth: true
 Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
 visible: !isChatGPT && !installed && !calcHash && downloadError === ""
-Accessible.description: qsTr("Cancel/Resume/Download button to stop/restart/start the download")
+Accessible.description: qsTr("Stop/restart/start the download")
 background: Rectangle {
 border.color: downloadButton.down ? theme.backgroundLightest : theme.buttonBorder
 border.width: 2

@@ -151,7 +151,7 @@ MyDialog {
 Layout.fillWidth: true
 Layout.alignment: Qt.AlignTop | Qt.AlignHCenter
 visible: installed || downloadError !== ""
-Accessible.description: qsTr("Remove button to remove model from filesystem")
+Accessible.description: qsTr("Remove model from filesystem")
 background: Rectangle {
 border.color: removeButton.down ? theme.backgroundLightest : theme.buttonBorder
 border.width: 2

@@ -186,8 +186,8 @@ MyDialog {
 Download.installModel(filename, openaiKey.text);
 }
 Accessible.role: Accessible.Button
-Accessible.name: qsTr("Install button")
-Accessible.description: qsTr("Install button to install chatgpt model")
+Accessible.name: qsTr("Install")
+Accessible.description: qsTr("Install chatGPT model")
 }
 ColumnLayout {

@@ -385,7 +385,7 @@ MyDialog {
 linkColor: theme.textColor
 Accessible.role: Accessible.Paragraph
 Accessible.name: qsTr("Description")
-Accessible.description: qsTr("The description of the file")
+Accessible.description: qsTr("File description")
 onLinkActivated: Qt.openUrlExternally(link)
 }
 }

@@ -456,7 +456,7 @@ MyDialog {
 }
 MyButton {
 text: qsTr("Browse")
-Accessible.description: qsTr("Opens a folder picker dialog to choose where to save model files")
+Accessible.description: qsTr("Choose where to save model files")
 onClicked: modelPathDialog.open()
 }
 }
} }

View File

@@ -69,7 +69,7 @@ Item {
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.Button
 Accessible.name: text
-Accessible.description: qsTr("Restores the settings dialog to a default state")
+Accessible.description: qsTr("Restores settings dialog to a default state")
 onClicked: {
 root.restoreDefaultsClicked();
 }

View File

@@ -89,7 +89,7 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
 }
 Accessible.role: Accessible.EditableText
 Accessible.name: qsTr("Attribution (optional)")
-Accessible.description: qsTr("Textfield for providing attribution")
+Accessible.description: qsTr("Provide attribution")
 onEditingFinished: {
 MySettings.networkAttribution = attribution.text;
 }

@@ -103,12 +103,12 @@ NOTE: By turning on this feature, you will be sending your data to the GPT4All O
 spacing: 10
 MyButton {
 text: qsTr("Enable")
-Accessible.description: qsTr("Enable opt-in button")
+Accessible.description: qsTr("Enable opt-in")
 DialogButtonBox.buttonRole: DialogButtonBox.AcceptRole
 }
 MyButton {
 text: qsTr("Cancel")
-Accessible.description: qsTr("Cancel opt-in button")
+Accessible.description: qsTr("Cancel opt-in")
 DialogButtonBox.buttonRole: DialogButtonBox.RejectRole
 }
 background: Rectangle {

View File

@@ -21,8 +21,8 @@ MyDialog {
 Item {
 Accessible.role: Accessible.Dialog
-Accessible.name: qsTr("Settings dialog")
-Accessible.description: qsTr("Dialog containing various application settings")
+Accessible.name: qsTr("Settings")
+Accessible.description: qsTr("Contains various application settings")
 }
 ListModel {

View File

@@ -133,7 +133,6 @@ model release that uses your data!")
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.Paragraph
 Accessible.name: qsTr("Opt-in for anonymous usage statistics")
-Accessible.description: qsTr("Label for opt-in")
 }
 ButtonGroup {

@@ -162,7 +161,7 @@ model release that uses your data!")
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.RadioButton
 Accessible.name: qsTr("Opt-in for anonymous usage statistics")
-Accessible.description: qsTr("Radio button to allow opt-in for anonymous usage statistics")
+Accessible.description: qsTr("Allow opt-in for anonymous usage statistics")
 background: Rectangle {
 color: "transparent"

@@ -203,7 +202,7 @@ model release that uses your data!")
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.RadioButton
 Accessible.name: qsTr("Opt-out for anonymous usage statistics")
-Accessible.description: qsTr("Radio button to allow opt-out for anonymous usage statistics")
+Accessible.description: qsTr("Allow opt-out for anonymous usage statistics")
 background: Rectangle {
 color: "transparent"

@@ -249,7 +248,7 @@ model release that uses your data!")
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.Paragraph
 Accessible.name: qsTr("Opt-in for network")
-Accessible.description: qsTr("Checkbox to allow opt-in for network")
+Accessible.description: qsTr("Allow opt-in for network")
 }
 ButtonGroup {

@@ -276,7 +275,7 @@ model release that uses your data!")
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.RadioButton
 Accessible.name: qsTr("Opt-in for network")
-Accessible.description: qsTr("Radio button to allow opt-in anonymous sharing of chats to the GPT4All Datalake")
+Accessible.description: qsTr("Allow opt-in anonymous sharing of chats to the GPT4All Datalake")
 background: Rectangle {
 color: "transparent"

@@ -317,7 +316,7 @@ model release that uses your data!")
 font.pixelSize: theme.fontSizeLarge
 Accessible.role: Accessible.RadioButton
 Accessible.name: qsTr("Opt-out for network")
-Accessible.description: qsTr("Radio button to allow opt-out anonymous sharing of chats to the GPT4All Datalake")
+Accessible.description: qsTr("Allow opt-out anonymous sharing of chats to the GPT4All Datalake")
 background: Rectangle {
 color: "transparent"

gpt4all-training/build_map.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import numpy as np
 from nomic import atlas
 import glob

gpt4all-training/clean.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import numpy as np
 import glob
 import os

gpt4all-training/create_hostname.sh Normal file → Executable file
View File

gpt4all-training/eval_figures.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import glob
 import pickle
 import numpy as np

gpt4all-training/eval_self_instruct.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import json
 import torch
 import pickle

gpt4all-training/generate.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 from transformers import AutoModelForCausalLM, AutoTokenizer
 from peft import PeftModelForCausalLM
 from read import read_config

gpt4all-training/inference.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 from transformers import AutoModelForCausalLM, AutoTokenizer
 import torch
 import torch.nn as nn

gpt4all-training/launcher.sh Normal file → Executable file
View File

gpt4all-training/train.py Normal file → Executable file
View File

@@ -1,3 +1,4 @@
+#!/usr/bin/env python3
 import os
 from transformers import AutoModelForCausalLM, AutoTokenizer, get_scheduler
 import torch