You know something, cm0002, just wanted to say I appreciate the dichotomy you've got going: posting shitpost memes making fun of corporate AI trash one minute, then genuinely informative news keeping the localllama community updated the next. It helps keep it real.
I haven't used Twitter in many years. Do they really let their LLMs have a Twitter account and be taggable like this? I thought they would have learned it doesn't work out so great after Microsoft tried it with TayAI years ago.





Use a local model, learn some tool calling, and have it retrieve factual answers from a service like Wolfram Alpha when needed (there's a rough sketch of the idea further down). We have a community over at c/[email protected] all about local models. If you're not very techy, I recommend starting with a simple llamafile, which is a one-click executable that packages the engine and model together in a single file.
Then move on to a real local model engine like kobold.cpp running a quantized model that fits on your machine, especially if you have a graphics card and want to offload layers via CUDA or Vulkan. Feel free to reply/message me if you need further clarification/guidance.
https://github.com/mozilla-ai/llamafile
https://github.com/LostRuins/koboldcpp
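Here's a very rough sketch of what I mean by tool calling, assuming your engine is serving an OpenAI-compatible endpoint on localhost (llamafile and koboldcpp can both do this; the port and the lookup_fact stand-in are just placeholders, swap in whatever you actually run):

```python
# Minimal manual tool-calling loop against a local OpenAI-compatible endpoint.
# Adjust the URL/port to whatever your engine prints at startup.
import json
import requests

URL = "http://localhost:8080/v1/chat/completions"

def lookup_fact(query: str) -> str:
    # Stand-in for a real lookup (Wolfram Alpha API, a local database, etc.).
    facts = {"boiling point of water": "100 degrees C at 1 atm"}
    return facts.get(query.lower(), "no result found")

SYSTEM = (
    "If you need a factual lookup, reply with exactly: "
    'TOOL{"query": "<what to look up>"} and nothing else. '
    "Otherwise answer normally."
)

def chat(messages):
    r = requests.post(URL, json={"messages": messages, "max_tokens": 256, "temperature": 0.2})
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

messages = [{"role": "system", "content": SYSTEM},
            {"role": "user", "content": "What is the boiling point of water?"}]
reply = chat(messages)

if reply.strip().startswith("TOOL"):
    # Model asked for a lookup: run the tool and hand the result back.
    query = json.loads(reply.strip()[4:])["query"]
    messages += [{"role": "assistant", "content": reply},
                 {"role": "user", "content": f"Tool result: {lookup_fact(query)}. Now answer the question."}]
    reply = chat(messages)

print(reply)
```

Fancier setups use the model's native tool-call format instead of a prompt convention like this, but the loop is the same idea: the model asks for data, your code fetches it, the model answers with it.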
I would start with a 7B Q4_K_M quant and see if your system can run that.
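Rough size math if you're curious (ballpark only; Q4_K_M averages a bit under 5 bits per weight):

```python
# Ballpark file size for a 7B model quantized to ~4.8 bits per weight (Q4_K_M-ish).
params = 7e9
bits_per_weight = 4.8
print(f"~{params * bits_per_weight / 8 / 1e9:.1f} GB")  # ~4.2 GB, plus some extra RAM/VRAM for context
```

So if you've got around 8 GB of RAM (or a 6-8 GB GPU to offload to) you should be in decent shape for a 7B.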