llmodel: add FIXMEs to recalculateContext

Signed-off-by: Jared Van Bortel <jared@nomic.ai>
Jared Van Bortel 2024-07-31 17:24:16 -04:00
parent 75bd250845
commit 3832d140c9


@@ -15,6 +15,9 @@
 #include <vector>
 
 // TODO(cebtenzzre): replace this with llama_kv_cache_seq_shift for llamamodel (GPT-J needs this as-is)
+// FIXME(jared): if recalculate returns false, we leave n_past<tokens.size() and do not tell the caller to stop
+// FIXME(jared): if we get here during chat name or follow-up generation, bad things will happen when we try to restore
+// the old prompt context afterwards
 void LLModel::recalculateContext(PromptContext &promptCtx, std::function<bool(bool)> recalculate)
 {
     int n_keep = shouldAddBOS();
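
For context, a minimal sketch of the control flow the first FIXME points at. This is an illustration, not the actual backend code: the PromptContext fields and the evalTokens callback here are assumptions modeled on the gpt4all backend, and only the failure paths matter.

#include <algorithm>
#include <cstdint>
#include <functional>
#include <vector>

// Hypothetical stand-in for the backend's PromptContext (fields assumed).
struct PromptContext {
    std::vector<int32_t> tokens; // full token history kept for recalculation
    int32_t n_past = 0;          // how many tokens are already in the KV cache
    int32_t n_batch = 8;
};

// Sketch of the recalculation loop the first FIXME describes: if the
// recalculate callback (or a failed eval) stops us mid-way, we fall out of
// the loop with n_past < tokens.size(), and since the function returns void
// the caller has no way to know the rebuild was only partial.
void recalculateContextSketch(PromptContext &ctx,
                              std::function<bool(bool)> recalculate,
                              std::function<bool(const std::vector<int32_t> &)> evalTokens)
{
    size_t i = ctx.n_past;
    while (i < ctx.tokens.size()) {
        size_t end = std::min(i + size_t(ctx.n_batch), ctx.tokens.size());
        std::vector<int32_t> batch(ctx.tokens.begin() + i, ctx.tokens.begin() + end);
        if (!evalTokens(batch))
            break;                        // partial rebuild, silently abandoned
        ctx.n_past += int32_t(batch.size());
        if (!recalculate(true))
            break;                        // caller cancelled; same silent exit
        i = end;
    }
    recalculate(false); // reports "done" even when we stopped early
}

The second FIXME concerns callers that temporarily swap in a different PromptContext (chat-name and follow-up-question generation in the chat UI do this): if recalculation shifts the context window while the temporary context is active, the saved context presumably no longer lines up with the KV cache when it is restored.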