An open source project focused on giving people features they want, but in a privacy- and censorship-resistant way. Classic Moz.
Seriously, what’s with all the Mozilla hate on Lemmy? People bitch about almost everything they do. Sometimes it feels like, because it’s non-profit/open-source, people have this idealized vision of a monastery full of impoverished, but zealous, single-minded monks working feverishly and never deviating from a very tiny mission.
Cards on the table, I remain an AI skeptic, but I also recognize that it’s not going anywhere anytime soon. I vastly prefer to see folks like Mozilla branching out into the space a little than to have them ignore it entirely and cede the space to corporate interests/advertisers.
because “oh no, Mozilla Foundation bad”, “they take Google money”
Can you show some examples of where people complain about Mozilla taking Google money?
Because when I complain about Mozilla, it’s because they fired their employees while bloating the salary of their CEO, that Firefox languishes while they throw in privacy invasive junk that nobody asked for.
deleted
That seems more aligned with their mission of fighting misinformation on the web. It looks like Fakespot was an acquisition, so hopefully efforts like the ones mentioned in this post help better align it with their other goals.
deleted
What I’m saying is that Mozilla, from my understanding, didn’t set out to do that, but instead acquired an existing business in order to use its services to fight misinformation. We should pressure them to reform this new part of the business to better align with the rest of Mozilla’s goals.
I’ve been trying. No luck so far. The only change to the Fakespot TOS was adding an allowance for private data to get sold to Mozilla…
But it is not a feature I want. Not now, not ever. An inbuilt bullshit generator, now with less training and more bullshit, is not something I ever asked for.
Training one of these AIs requires huge datacenters, insanely huge datasets, and millions of dollars in resources. And I’m supposed to believe one will be effectively trained by the pittance of data generated by browsing?
Yes but I like it, so where do we go from here?
You clearly are wrong and you should feel bad /s
Fine-tuning is more feasible on end-user hardware. You also have projects like Hivemind and Petals that are working on distributed training and inference systems to deal with the concentration effects you described for base models.
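For what it’s worth, Petals already makes distributed inference pretty approachable. A rough sketch based on its README (the model name is just an example and the exact API details may have drifted since I last looked):

```python
# Rough sketch of distributed inference with Petals, based on its README.
from transformers import AutoTokenizer
from petals import AutoDistributedModelForCausalLM

model_name = "petals-team/StableBeluga2"  # example model from the Petals docs
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Model layers are served by volunteer peers across the public swarm,
# so only a slice of the model ever has to fit on your own machine.
model = AutoDistributedModelForCausalLM.from_pretrained(model_name)

inputs = tokenizer("A browser-local assistant could", return_tensors="pt")["input_ids"]
outputs = model.generate(inputs, max_new_tokens=16)
print(tokenizer.decode(outputs[0]))
```

It’s not going to replace a datacenter, but it shows the training/inference cost argument isn’t as airtight as it looks.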