I had no idea FOSS tax software was a thing. Huh. I’ll try and play around with it at some point and let you know.
Morton up in here spreading free salt.
Patching a newer version of the YouTube app resolved the playback issues I was having.
Not if Anna has anything to say about it…
Perhaps, but I sucked at touch typing when I was younger.
No idea; does autocorrect even exist in an inbuilt fashion on Windows? I’ve never really tried using anything like that.
Oh, and here’s a one-off test I just did without autocorrection turned on. With a few more tries, I’m sure I could get up to 100+.
Ironically, I can almost type as fast on my phone (102 WPM PB) as I can on most keyboards (110 WPM PB), and that’s with my weird improper method of touch typing. These scores are for the 15 second word test on MonkeyType.
The good ol’ Linus parrots. Squawk “Steve Burke is a bad journalist because he pointed out errors publicly that affected consumers.” Screech “Linus didn’t sell the employees internally on the idea that he and his wife were a substitute for HR, he auctioned it.”
More often than not, people who are passionate about something, such as Linux, take personal offense when someone says something incorrect or offensive about said thing. Oh, and “blud” is just a way of calling someone a poser.
I love this comment so much. “You crossed Linux? Now you’ve crossed me, blud.”
To be fair, the comments and posts you leave are technically being collected for display across the lemmyverse. In that sense, there’s never going to be a zero data collection Lemmy client. Still, Liftoff currently has my vote. A decent little FOSS fork of lemmur, I believe.
Heck, even my college Sociology textbook from OpenStax basically has nuclear fear-mongering baked into one of the later sections.
Unfortunately, there’s still that one guy in the comments trying to say that hypothetical, largely unproven solutions are better for baseload than something that’s worked for decades.
I feel like my obsession with Mavicas has just been dismissed as invalid.
We do something similar over at [email protected], but with photos. Of course, we’re using old floppy disk cameras, so the compression, aberration, and CCD weirdness is indeed authentic.
I forgot: are Lemmy’s active and hot sorts chronological? They’re pretty decent, but I do find stale content does get stuck on one that isn’t there on the other.
Yeah, that’s fair. The early versions of GPT3 kinda sucked compared to what we have now. For example, they basically couldn’t rhyme. RLHF or some of the more recent advances seemed to turbocharge that aspect of LLMs.
So a few tidbits you reminded me of:
You’re absolutely right: there’s what’s called an alignment problem between what the human thinks looks superficially like a quality answer and what would actually be a quality answer.
You’re correct in that it will always be somewhat of an arms race to detect generated content, as lossy compression and metadata scrubbing can do a lot to make an image unrecognizable to detectors. A few people are trying to create some sort of integrity check for media files, but it would create more privacy issues than it would solve.
We’ve had LLMs for quite some time now. I think the most notable release in recent history, aside from ChatGPT, was GPT2 in 2019, as it introduced a lot of people to the concept. It was one of the first language models that was truly “large,” although they’ve gotten much bigger since the release of GPT3 in 2020. RLHF and the focus on fine-tuning for chat and instructability wasn’t really a thing until the past year.
Retraining image models on generated imagery does seem to cause problems, but I’ve noticed fewer issues when people have trained FOSS LLMs on text from OpenAI. In fact, it seems to be a relatively popular way to build training or fine-tuning datasets. Perhaps training a model from scratch could present issues, but generally speaking, training a new model on generated text seems to be less of a problem.
Critical reading and thinking were always requirements, as I believe you said, but they’re certainly needed for interpreting the output of LLMs in a factual context. I don’t really see LLMs themselves outperforming humans on reasoning at this stage, but the text they generate will certainly make those human traits more of a necessity.
Most of the text models released by OpenAI are so-called “Generative Pretrained Transformer” models, with the keyword being “transformer.” Transformers are a separate model architecture from GANs, but are certainly similar in more than a few ways.
Unless I’m mistaken, aren’t GANs mostly old news? Most of the current SOTA image generation models and LLMs are either diffusion-based, transformers, or both. GANs can still generate some pretty darn impressive images, even from a few years ago, but they proved hard to steer and were often trained to generate a single kind of image.
What in the gosh darn condescending non sequitur is that? I have a special kind of dislike for people who, instead of trying to promote learning for anyone and everyone at any stage, instead choose to ridicule people for having missed some trivial detail that has about as much in common with Bash as COBOL does (basically nothing). Web scripting is, unsurprisingly, its own skill, and it’s very, surpassingly, extremely, stupendously, and obviously conceivable that someone could have years of Bash experience but only recently started puttering around with scripting for things like API access or HTML parsing. But you should know this already. :)
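For what it’s worth, the kind of “web scripting” I mean is nothing exotic. Here’s a minimal sketch (the HTML snippet and URLs are made up for illustration) of the sort of link extraction someone might fumble through on their first try, even with years of shell experience:

```shell
#!/bin/sh
# Pretend this came from something like: html=$(curl -s "$url")
html='<ul><li><a href="https://example.com/a">A</a></li><li><a href="https://example.com/b">B</a></li></ul>'

# grep -o keeps only the matching part of each line, one match per line;
# sed then strips the href="..." wrapper to leave the bare URLs.
links=$(printf '%s\n' "$html" | grep -o 'href="[^"]*"' | sed 's/href="\(.*\)"/\1/')
echo "$links"
```

Of course, regex-parsing HTML falls over on anything nontrivial, which is exactly why it’s its own skill and not something Bash fluency hands you for free.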