LoLLMs (Lord of Large Language Multimodal Systems) Web UI


Welcome to LoLLMs WebUI (Lord of Large Language Multimodal Systems: one tool to rule them all), the hub for LLMs (Large Language Models) and multimodal intelligence systems. This project aims to provide a user-friendly interface for accessing and using various LLMs and other AI models for a wide range of tasks. Whether you need help with writing, coding, organizing data, analyzing images, generating images, generating music, or seeking answers to your questions, LoLLMs WebUI has you covered.

As an all-encompassing tool with access to over 500 AI expert personalities across diverse domains and more than 2500 fine-tuned models, you have an immediate resource for almost any problem. Whether your car needs repair, you need coding assistance in Python, C++, or JavaScript, or you are feeling down about past life decisions and can't see a way forward, ask LoLLMs. Need guidance on what lies ahead health-wise based on your current symptoms? The medical assistance AI can suggest a potential diagnosis and guide you toward the right medical care. Stuck with legal matters such as contract interpretation? Reach out to the Lawyer personality for some insight, all without leaving the comfort of home. It not only helps students work through lengthy lectures but also provides extra support during assessments, so they can properly grasp concepts rather than just reading along and ending up confused. Want some entertainment? Engage the Laughter Bot and laugh until tears roll from your eyes, play Dungeons & Dragons, or make up crazy stories together with the Creative Story Generator. Need illustration work done? No worries, Artbot has you covered! And last but definitely not least, LordOfMusic is here for music generation to your individual specifications. So say goodbye to boring nights alone, because everything is possible within one single platform called LoLLMs...

Features

  • Choose your preferred binding, model, and personality for your tasks
  • Enhance your emails, essays, code debugging, thought organization, and more
  • Explore a wide range of functionalities, such as searching, data organization, image generation, and music generation
  • Easy-to-use UI with light and dark mode options
  • Integration with GitHub repository for easy access
  • Support for different personalities with predefined welcome messages
  • Thumb up/down rating for generated answers
  • Copy, edit, and remove messages
  • Local database storage for your discussions
  • Search, export, and delete multiple discussions
  • Support for image/video generation based on Stable Diffusion
  • Support for music generation based on MusicGen
  • Support for multi-generation peer-to-peer networks through LoLLMs Nodes and Petals
  • Support for Docker, Conda, and manual virtual environment setups
  • Support for LM Studio as a backend
  • Support for Ollama as a backend
  • Support for vLLM as a backend

Star History

Star History Chart

Thank you to all the users who tested this tool and helped make it more user-friendly.

Installation

Automatic installation (UI)

If you are using Windows, just visit the releases page, download the Windows installer, and run it.

Automatic installation (Console)

Download the installation script from scripts folder and run it. The installation scripts are:

  • win_install.bat for Windows.
  • linux_install.sh for Linux.
  • mac_install.sh for Mac.
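
For example, a console install on Linux might look like the following sketch. This assumes the repository has already been cloned; the clone directory name "lollms-webui" is an assumption, so adjust it to your setup.

```shell
# Sketch: run the console installer from the scripts folder of a cloned copy.
# "lollms-webui" is an assumed clone directory name; adjust to your setup.
cd lollms-webui/scripts
bash linux_install.sh    # use mac_install.sh on macOS or win_install.bat on Windows
```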

Manual install:

Since v9.4, manual installation is not advised: many services require the creation of separate environments, and LoLLMs needs full control over those environments. If you install it using your own conda setup, you will not be able to install any services, which reduces LoLLMs to the chat interface alone (no XTTS, no ComfyUI, no fast generation through vLLM, Petals, or the like).

Code of conduct

By using this tool, users agree to follow these guidelines:

  • This tool is not meant to be used for building and spreading fake news or misinformation.
  • You are responsible for what you generate by using this tool. The creators will take no responsibility for anything created via LoLLMs.
  • You can use LoLLMs in your own project free of charge if you agree to respect the Apache 2.0 license terms. Please refer to https://www.apache.org/licenses/LICENSE-2.0 .
  • You are not allowed to use LoLLMs to harm others, directly or indirectly. This tool is meant for peaceful purposes and should be used for good, never for harm.
  • Users must comply with local laws when accessing content provided by third parties such as the OpenAI API, including copyright restrictions where applicable.

⚠️ Security Warning

Please be aware that LoLLMs WebUI does not have built-in user authentication and is primarily designed for local use. Exposing the WebUI to external access without proper security measures could lead to potential vulnerabilities.

If you require remote access to LoLLMs, it is strongly recommended to follow these security guidelines:

  1. Activate Headless Mode: Enabling headless mode will expose only the generation API while turning off other potentially vulnerable endpoints. This helps to minimize the attack surface.

  2. Set Up a Secure Tunnel: Establish a secure tunnel between the localhost running LoLLMs and the remote PC that needs access. This ensures that the communication between the two devices is encrypted and protected.
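
One common way to establish such a tunnel is SSH local port forwarding. The following is a minimal sketch, not the only option; "user" and "lollms-host" are placeholders for your own credentials and host.

```shell
# On the remote PC: forward its local port 9600 to port 9600 on the machine
# running LoLLMs, over an encrypted SSH connection.
# "user" and "lollms-host" are placeholders; -N means no remote command is run.
ssh -N -L 9600:localhost:9600 user@lollms-host
# LoLLMs is then reachable from the remote PC at http://localhost:9600
```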

  3. Modify Configuration Settings: After setting up the secure tunnel, edit the /configs/local_config.yaml file and adjust the following settings:

    host: 0.0.0.0  # Allow remote connections
    port: 9600  # Change the port number if desired (default is 9600)
    force_accept_remote_access: true  # Force accepting remote connections
    headless_server_mode: true  # Set to true for API-only access, or false if the WebUI is needed
    
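The intent of these settings can be sanity-checked with a small script. Below is a minimal, stdlib-only Python sketch; the helper names (`parse_flat_yaml`, `check_remote_access`) are hypothetical and not part of LoLLMs, and the parser only handles the flat `key: value` layout shown above (a real setup would use a YAML library).

```python
# Hypothetical helpers (not part of LoLLMs): read flat "key: value" settings
# and warn when remote access is enabled without headless mode.

def parse_flat_yaml(text: str) -> dict:
    """Parse a flat YAML-like fragment: one 'key: value' pair per line."""
    settings = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if not line or ":" not in line:
            continue
        key, value = (part.strip() for part in line.split(":", 1))
        if value in ("true", "false"):
            settings[key] = (value == "true")
        elif value.isdigit():
            settings[key] = int(value)
        else:
            settings[key] = value
    return settings

def check_remote_access(settings: dict) -> list:
    """Return warnings when remote access is configured unsafely."""
    warnings = []
    if settings.get("host") == "0.0.0.0" and not settings.get("headless_server_mode", False):
        warnings.append("Remote connections allowed but headless mode is off: "
                        "the full WebUI is exposed.")
    return warnings

config_text = """
host: 0.0.0.0  # Allow remote connections
port: 9600
force_accept_remote_access: true
headless_server_mode: true
"""

settings = parse_flat_yaml(config_text)
print(check_remote_access(settings))  # no warnings for this configuration
```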

By following these security practices, you can help protect your LoLLMs instance and its users from potential security risks when enabling remote access.

Remember, it is crucial to prioritize security and take necessary precautions to safeguard your system and sensitive information. If you have any further questions or concerns regarding the security of LoLLMs, please consult the documentation or reach out to the community for assistance.

Stay safe and enjoy using LoLLMs responsibly!

Disclaimer

Large Language Models are amazing tools that can be used for diverse purposes. LoLLMs was built to harness this power to help users enhance their productivity. But keep in mind that these models have their limitations and should not replace human intelligence or creativity; rather, they augment it by providing suggestions based on patterns found within large amounts of data. It is up to each individual to choose to use them responsibly!

The performance of the system varies depending on the model used, its size, and the dataset on which it has been trained. Generally speaking, the larger a language model's training set (the more examples), the better the results, compared with systems built on smaller ones. But there is still no guarantee that the output generated from any given prompt will always be perfect, and it may contain errors for various reasons. So please do not use it for serious matters like choosing medications or making financial decisions without first consulting an expert!

License

This repository uses code under the Apache License Version 2.0; see the license file for details about the rights granted with respect to usage and distribution.

Copyright:

ParisNeo 2023