Doesn’t seem like that gravy train will roll on forever
Haven’t heard of ElevenLabs, I wonder how it compares to Speechify
The Zelda game from the Philips CD-i
Man, wait till this journalist hears about this little thing called the Fediverse.
Might I suggest a common set of keybinds… maybe C for copy, and V for paste… maybe use Ctrl as well?
Might be heavily dependent on OEM and Android version.
On Samsung, it’s called Eye Comfort Shield, for example.
Now how about forum signatures?
boots up cracked Photoshop
Instead of a multitude of differing moderation teams, you’re subject to a single one – Bluesky itself, I guess?
Anyone worried about moderation on a social media platform probably needs to think a little more introspectively though…
It probably helps that WebKit was forked from KDE’s Konqueror/KHTML and that Blink was a fork of WebKit.
Compared to Gecko, I’m sure they behave the same as far as webdevs are concerned – hindering its adoption – webdevs don’t want to support esoteric engines for obvious reasons.
Mere possession of child pornography should not be a crime at all. To prosecute people for possessing something published, no matter what it may be, is a big threat to human rights. – stallman.org, 5 June 2017 “Possession of child porn”
I’ll honestly be surprised if he doesn’t end up being revealed to be in possession of CSAM.
Not just problematic, but consequential too:
Richard Stallman has also embarked upon a decades-long political project to normalize sexual violence. Under his ideological leadership, the free software movement is unsafe, particularly for women. Women represent just 3% of the free software community,[2] compared to 23% of industry programmers generally.[3] This is no accident. There is a pervasive culture of sexism and a stark lack of accountability in free software, and it begins with Stallman’s unchallenged and reprehensible behavior.
An LLM making business decisions has no such control or safety mechanisms.
I wouldn’t say that - there’s nothing preventing them from building in (stronger) guardrails and retraining the model based on input.
If it turns out the model suggests someone kill themselves based on very specific input, do you not think they should be held accountable, made to retrain the model, and prevent that from happening again?
From an accountability perspective, there’s no difference between a text-generating machine and a soda-generating machine.
The owner and builder should be held accountable, thereby putting a financial incentive on making these tools more reliable and safer. You don’t let Tesla off the hook when their self-driving kills someone because they didn’t test it enough or build in enough safeguards – that’d be insane.
Just another reason to boycott Goya
Amazing that someone would ask that on Piazza.
Don’t you mean 𒁷𒀱𒀉?
The answer is that I want the Galaxy S Ultra line to have an SD card slot.
Looking at life through the eyes of a tire hub
Tbf, microplastics would typically contain plasticizers, so that distinction doesn’t seem very important other than to highlight that plasticizers are the biologically active component.
Considering the staggering cost of AI models, waiting until AI solves the problem is going to do nothing but prove the Great Filter hypothesis.
Absolutely no way this doesn’t explicitly target certain groups of people and end up in a lawsuit.