Java binding - Improve error check before loading Model file (#1206)

* Java binding - Add check that the model file is readable.

* Add TODO notes for the Java binding.

---------

Co-authored-by: Feliks Zaslavskiy <feliks.zaslavskiy@optum.com>
Co-authored-by: felix <felix@zaslavskiy.net>
Felix Zaslavskiy 2023-07-15 18:07:42 -04:00 committed by GitHub
parent cfd70b69fc
commit 1e74171a7b
4 changed files with 15 additions and 4 deletions

View File

@@ -12,12 +12,12 @@ You can add Java bindings into your Java project by adding the following dependency
<dependency>
<groupId>com.hexadevlabs</groupId>
<artifactId>gpt4all-java-binding</artifactId>
<version>1.1.3</version>
<version>1.1.5</version>
</dependency>
```
**Gradle**
```
implementation 'com.hexadevlabs:gpt4all-java-binding:1.1.3'
implementation 'com.hexadevlabs:gpt4all-java-binding:1.1.5'
```
To add the library dependency for another build system see [Maven Central Java bindings](https://central.sonatype.com/artifact/com.hexadevlabs/gpt4all-java-binding/).
@@ -121,4 +121,6 @@ If this is the case you can easily download and install the latest x64 Microsoft
3. Version **1.1.4**:
- Java bindings are compatible with gpt4all version 2.4.11
- Falcon model support included.
4. Version **1.1.5**:
- Add a check that the model file is readable before loading the model.

View File

@@ -1,2 +1,6 @@
## Needed
1. Integrate with the CircleCI build pipeline, like the C# binding.
## These are just ideas
1. A better chat completions function.
2. Chat completion that returns results in an OpenAI-compatible format (see the sketch below).
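
For the second idea, a rough sketch of what an OpenAI-compatible chat completion result could look like as plain Java classes. The field names mirror the OpenAI chat completion response schema; the class names themselves are hypothetical and not part of the current binding.

```java
// Hypothetical response types for an OpenAI-compatible chat completion result.
// None of these classes exist in the binding today; they only sketch the idea.
import java.util.List;

public class ChatCompletionResponse {
    public String id;                  // e.g. "chatcmpl-<uuid>"
    public String object = "chat.completion";
    public long created;               // Unix timestamp in seconds
    public String model;               // name of the loaded local model
    public List<Choice> choices;
    public Usage usage;

    public static class Choice {
        public int index;
        public Message message;
        public String finishReason;    // "stop", "length", ...
    }

    public static class Message {
        public String role;            // "assistant"
        public String content;
    }

    public static class Usage {
        public int promptTokens;
        public int completionTokens;
        public int totalTokens;
    }
}
```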

View File

@@ -6,7 +6,7 @@
<groupId>com.hexadevlabs</groupId>
<artifactId>gpt4all-java-binding</artifactId>
<version>1.1.4</version>
<version>1.1.5</version>
<packaging>jar</packaging>
<properties>

View File

@@ -184,11 +184,16 @@ public class LLModel implements AutoCloseable {
throw new IllegalStateException("Model file does not exist: " + modelPathAbs);
}
// Check that the model file is readable
if (!Files.isReadable(modelPath)) {
throw new IllegalStateException("Model file cannot be read: " + modelPathAbs);
}
// Create the model struct; the correct backend is loaded dynamically based on the model type
model = library.llmodel_model_create2(modelPathAbs, "auto", error);
if(model == null) {
throw new IllegalStateException("Could not load gpt4all backend :" + error.message);
throw new IllegalStateException("Could not load, gpt4all backend returned error: " + error.message);
}
library.llmodel_loadModel(model, modelPathAbs);
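
For context, a minimal caller-side sketch of how the new checks surface. The pre-flight checks mirror the ones added above; the `LLModel` constructor call and package name are assumptions about the binding's public API and may differ from the actual code.

```java
import java.nio.file.Files;
import java.nio.file.Path;

// Package path assumed; check the binding's actual package before use.
import com.hexadevlabs.gpt4all.LLModel;

public class LoadCheckExample {
    public static void main(String[] args) throws Exception {
        Path modelPath = Path.of("models/ggml-model.bin"); // hypothetical local path

        // The same pre-flight checks the binding now performs before loading.
        if (!Files.exists(modelPath)) {
            System.err.println("Model file does not exist: " + modelPath.toAbsolutePath());
            return;
        }
        if (!Files.isReadable(modelPath)) {
            System.err.println("Model file cannot be read: " + modelPath.toAbsolutePath());
            return;
        }

        // Constructor signature assumed here; the binding throws IllegalStateException
        // with the messages shown in the diff when either check fails internally,
        // or when the gpt4all backend itself cannot load the model.
        try (LLModel model = new LLModel(modelPath)) {
            // ... run prompts against the model here ...
        } catch (IllegalStateException e) {
            System.err.println(e.getMessage());
        }
    }
}
```

With the readability check in place, a file that exists but cannot be read (for example because of file permissions) now fails fast with a clear message instead of surfacing as an opaque backend load error.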