@vile_asslips@sh.itjust.works

As the title says, why use GPT-2 bots instead of more modern ~8B (or ~3B-active MoE) LLMs? Seems like you'd get much more coherent/interesting conversation soup.

If you're worried about hosting hardware, there are models that run faster than GPT-2 on CPU these days. Heck, I'd help fine-tune one if you wish.