Yeah, my point was more that this doesn’t have anything to do with AI or the technology itself. I mean, whether AI is good or bad or doesn’t really work… Their guardrails did work exactly as intended and flagged the account hundreds of times for suicidal thoughts, at least according to these articles. So it’s more a business decision not to intervene, and it has little to do with what AI is and what it can do.
(Unless the system comes with too many false positives. That would be a problem with the technology. But this doesn’t seem to be discussed in any form.)
There are voice-to-text apps that run a model on your phone. A few more cores on our devices, or some more optimisations to the models, and we could run an LLM. The problem is battery life and heat.
I once ran some models on my phone through Termux.
I tried Llama 3.2 with 1B and 3B parameters and they ran pretty well; I tried 8B and it was slow.
I tried DeepSeek-R1: the 1.5B ran well, the 7B was slow.
For text prediction, Llama 1B may be enough.
Mind you, this is on a 300–400€ phone (Honor Magic 6 Lite).
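That lines up with some back-of-envelope memory math (my own rough estimate, weights only, assuming 4-bit quantization as is common for phone inference; KV cache and activations add more on top): an 8B model takes up a big chunk of a mid-range phone's RAM next to Android itself, while 1B–3B leaves headroom.

```python
def model_size_gb(params_billions: float, bits_per_weight: int) -> float:
    """Approximate weight memory for a quantized model.

    Weights only: the KV cache and activations need extra memory,
    so treat this as a lower bound.
    """
    bytes_total = params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# At 4 bits per weight:
print(f"1B: {model_size_gb(1, 4):.1f} GB")  # ~0.5 GB
print(f"3B: {model_size_gb(3, 4):.1f} GB")  # ~1.5 GB
print(f"8B: {model_size_gb(8, 4):.1f} GB")  # ~4.0 GB
```

On a phone with 8 GB of RAM shared with the OS, that ~4 GB for the 8B weights alone is plausibly why it crawls while the smaller models feel fine.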
Well, if people started calling it what it is, a weighted random text generator, then maybe they’d stop relying on it for anything serious…
I call it enhanced autocomplete. We all know how inaccurate autocomplete is.
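To the “weighted random text generator” / “enhanced autocomplete” point: the core sampling idea really does fit in a few lines. Here’s a toy bigram sketch (my own illustration, nowhere near a real LLM’s scale or training, but the generation step is genuinely the same principle: pick the next token at random, weighted by learned scores):

```python
import random
from collections import defaultdict

# Tiny stand-in corpus; a real model learns weights over tokens
# from vast amounts of text, but sampling works the same way.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count bigram transitions: word -> {next_word: count}
transitions = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    transitions[a][b] += 1

def generate(start: str, length: int = 8, seed=None) -> str:
    """Emit text by repeatedly sampling the next word,
    weighted by how often it followed the current one."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length - 1):
        nxt = transitions.get(word)
        if not nxt:  # dead end: the word never preceded anything
            break
        words, counts = zip(*nxt.items())
        word = rng.choices(words, weights=counts, k=1)[0]
        out.append(word)
    return " ".join(out)

print(generate("the", seed=0))
```

Every output is locally plausible (each pair of words did occur together) while the whole can still be nonsense, which is a fair miniature of the autocomplete complaint.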
I wonder what a keyboard with that enhanced autocomplete would be like to use… assuming, of course, that the autocomplete runs locally and the app is open source.
I like how the computational linguist Emily Bender refers to them: “synthetic text extruders”.
The word “extruder” makes me think about meat processing that makes stuff like chicken nuggets.