The files that you need to download and put under `models/model-name` (for instance, `models/gpt-j-6B`) are the json, txt, and pytorch\*.bin files. The remaining files are not necessary.
[GPT-4chan](https://huggingface.co/ykilcher/gpt-4chan) has been removed from Hugging Face, so you need to download it elsewhere. You have two options:
This webui allows you to switch between different models on the fly, so it needs to be fast at loading models from disk.
One way to make this process about 10x faster is to convert the models to PyTorch format using the script `convert-to-torch.py`. Create a folder called `torch-dumps` and then run the conversion with:

```
python convert-to-torch.py models/model-name/
```
The output model will be saved to `torch-dumps/model-name.pt`. This is the default way to load all models except for `gpt-neox-20b`, `opt-13b`, `OPT-13B-Erebus`, `gpt-j-6B`, and `flan-t5`. I don't remember why these models are exceptions.
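To sketch why the converted `.pt` files load faster: `torch.save` pickles the entire model object, so `torch.load` can restore it in a single step instead of re-instantiating the architecture from config files and reading sharded weight files. The tiny stand-in module and file path below are illustrative only, not the actual script or a real model:

```python
import torch
import torch.nn as nn

# Stand-in for a loaded Hugging Face model (illustrative only; a real
# conversion would load it with transformers' from_pretrained first).
model = nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 2))

# What convert-to-torch.py effectively does: pickle the whole model object.
torch.save(model, "model-name.pt")

# Loading later restores the model in one step. weights_only=False is
# needed on recent PyTorch versions to unpickle a full module object.
restored = torch.load("model-name.pt", weights_only=False)

# The restored model has identical weights, so outputs match exactly.
x = torch.randn(1, 8)
same = torch.equal(model(x), restored(x))
```

The trade-off is that a pickled module is tied to the class definitions available at load time, which may be why some architectures are excluded from this path.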