gpt4all/.codespellrc
Aaron Miller ee3469ba6c New tokenizer implementation for MPT and GPT-J
Improves output quality by making these tokenizers more closely
match the behavior of the Hugging Face `tokenizers`-based BPE
tokenizers these models were trained with.

Featuring:
 * Fixed unicode handling (via ICU)
 * Fixed BPE token merge handling
 * Complete added vocabulary handling
2023-05-30 12:05:57 -04:00


[codespell]
skip = .git,*.pdf,*.svg,*_tokenizer_config.h
#
# ignore-words-list =
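The `skip` key holds a comma-separated list of paths and globs that codespell excludes from checking; codespell picks this `[codespell]` section up automatically from a `.codespellrc` (or `setup.cfg`) in the working directory. As a minimal sketch, the file parses with Python's standard `configparser` (the `CONFIG` string below simply inlines the file shown above for a self-contained example):

```python
import configparser

# Inlined copy of the .codespellrc above, so the sketch is self-contained.
CONFIG = """\
[codespell]
skip = .git,*.pdf,*.svg,*_tokenizer_config.h
"""

parser = configparser.ConfigParser()
parser.read_string(CONFIG)

# `skip` is a comma-separated list of paths/globs codespell will not scan.
skip_globs = parser["codespell"]["skip"].split(",")
print(skip_globs)  # ['.git', '*.pdf', '*.svg', '*_tokenizer_config.h']
```

Here the `*_tokenizer_config.h` glob keeps codespell away from the generated tokenizer vocabulary headers introduced by this commit, which contain token strings that are not English words.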