

If you’re comfortable using Codeberg, yes, that’s the best place. Otherwise you can post in the comments of the original thread, complete the survey, or use GitHub issues (if you must).
Sadly I don’t have an e-ink device. But if someone does, we’d be happy to accept feedback and include some images.
So most modern ActivityPub servers backfill threads and profiles. My single-user instance processes 30,000 notes a day. If I were actually trying, I’m sure it’d be easy to grab much more while appearing well behaved.
How does that help? My personal instance currently has a database of several million posts thanks to the various Mastodon relays. I don’t need to scrape your instance to sell your posts. I don’t, of course, but it’d be easy for some company to create friendlycutekittens.social and just start collecting posts. Do you really have time to audit every instance you federate with?
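To make the point concrete, here’s a minimal sketch of how little effort it takes to pull a public post over ActivityPub. The URL is a placeholder, and instances with authorized fetch enabled would additionally demand an HTTP-signed request:

```python
# Minimal sketch: fetching a public post over ActivityPub is a plain
# HTTP GET with the right Accept header. The URL below is a placeholder.
# Instances enforcing authorized fetch would also require an HTTP signature.
import json
import urllib.request

req = urllib.request.Request(
    "https://example.social/users/alice/statuses/1234",
    headers={"Accept": "application/activity+json"},
)
with urllib.request.urlopen(req) as resp:
    note = json.load(resp)

print(note["content"])  # the post body, as HTML
```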
When watching a movie or tv show by ourselves, blind people can’t see the picture. So unless we are watching with a sighted friend, we would rather save on storage and bandwidth by only downloading the audio.
Audiovault.net is the website you want. Made by and for blind folks, it has thousands of AD tracks in mp3 format. You should be able to just sync them with the video. Though blind folks never bother; we only care about audio anyway.
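If you are watching with sighted folks, though, muxing the AD track into the video is straightforward with ffmpeg. A minimal sketch (via Python’s subprocess), with placeholder filenames and assuming the MP3 is already aligned with the video’s start:

```python
# Hedged sketch: replace a video's audio with an AD track using ffmpeg.
# Filenames are placeholders; no re-encoding is done, just remuxing.
import subprocess

subprocess.run([
    "ffmpeg",
    "-i", "movie.mkv",      # original video
    "-i", "movie-ad.mp3",   # audio description track
    "-map", "0:v",          # keep the video stream from the first input
    "-map", "1:a",          # take the audio from the AD track instead
    "-c", "copy",           # copy streams as-is, no re-encode
    "movie-with-ad.mkv",
], check=True)
```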
From the article:
The TLS-SNI header is used by CDN servers to route requests based on the Server Name in the header. However, a typical front end server, or even a load balancer (LB), belongs to a single app or organization, and does not typically need to handle the SNI header. The easy and reasonable way to configure TLS certificates on such a server is to either:
- Serve all requests with a single TLS certificate that has SANs (Subject Alternative Names) for all the domains that are used
- Have multiple certificates, chosen according to SNI, with one of them as the default.
In both of these common cases, sending an HTTPS request directly to the IP of a front end server, without any SNI, will present us with a default server certificate. This certificate will reveal what domains are being served by this server.
So apparently the real issue is that people aren’t using SNI correctly.
The tech blog is much better: https://www.zafran.io/resources/breaking-waf-technical-analysis
It boils down to scanning the entire IPv4 space, grabbing the SSL certificate returned by any web server listening on port 443, and using Certificate Transparency logs to figure out which domains you want to target. If a server is incorrectly configured, the fields in its SSL cert will tell you what domains it serves. I wouldn’t really call this a flaw that breaks anything. It’s just a byproduct of how SSL, IPv4, and WAFs work.
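A minimal sketch of the probe itself, assuming a misconfigured front end. The IP is a documentation placeholder, and the third-party cryptography package handles the certificate parsing:

```python
# Hedged sketch: connect to a bare IP on port 443 without sending SNI and
# read the domains out of whatever default certificate comes back.
# 203.0.113.1 is a documentation placeholder, not a real scan target.
import socket
import ssl

from cryptography import x509

def default_cert_domains(ip, port=443, timeout=5.0):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False       # probing by IP, not hostname
    ctx.verify_mode = ssl.CERT_NONE  # accept whatever cert is offered
    with socket.create_connection((ip, port), timeout=timeout) as sock:
        # server_hostname is omitted, so the ClientHello carries no SNI
        with ctx.wrap_socket(sock) as tls:
            der = tls.getpeercert(binary_form=True)
    cert = x509.load_der_x509_certificate(der)
    try:
        san = cert.extensions.get_extension_for_class(x509.SubjectAlternativeName)
        return san.value.get_values_for_type(x509.DNSName)
    except x509.ExtensionNotFound:
        return []

print(default_cert_domains("203.0.113.1"))
```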
Your post showed up here just fine.
There’s also a list here, though last updated in 2020: https://distributedcomputing.info/projects.html
Most of those projects remain active in some form.
For those of us using screen readers, this is a way bigger deal. Honestly, I probably shouldn’t use a Bluetooth headset and a Bluetooth keyboard for my banking. We focus so much on SSL/HTTPS and Wi-Fi security, but I wonder how much effort goes into wireless keyboard security? Not nearly as much, I’d bet.
Problem was, I usually only discovered the issue when I went to read the book, lol.
I never did that; my connection was too slow to want to take up someone’s DCC slot for like a day to get an entire movie. Remember all the frustrating idiots who would share .lit files but forget to remove the DRM from them?
Ah, good to know. Back in my day, when we had to walk a hundred miles to school in the snow, up hill both ways, IRC was the only place to get ebooks. I’m guessing it’s just the old users clinging on now.
Man, I’m getting flashbacks to my days running OmenServe on Undernet. I had no idea people were still doing this! How does the content compare to places like Anna’s Archive these days?
Personally, I find myself renting GPU time and running Goliath 120B. Smaller models could do what I’m doing if I spent more time optimizing my prompts, but every day I’m doing different tasks, and Goliath 120B will just handle whatever I throw at it, no matter how sloppy I am. I’ve also been playing with LLaVA and Hermes vision models to describe images to me. However, when I really need alt text for an image I can’t see, I still find myself resorting to GPT-4; the open source options just aren’t as accurate or detailed.
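For reference, a hedged sketch of what an alt-text request through the OpenAI API can look like, using the official openai Python client. The model name and image URL are placeholders; swap in whatever vision-capable model your account has:

```python
# Hedged sketch: ask a vision-capable model for alt text for an image.
# Model name and image URL below are placeholders, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text",
             "text": "Describe this image as concise alt text for a blind reader."},
            {"type": "image_url",
             "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(resp.choices[0].message.content)
```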
It’s just as long and incomprehensible as Google’s and Microsoft’s. So I have no idea.
That’s what worries me. When companies get desperate for cash, they tend to do pretty terrible things.
So who are they sending our product browsing data to in order to provide this service? At least I know what Microsoft and Google are doing with my data (nothing good). But Pocket and Cloudflare and their VPN provider and whatever other random companies Firefox partners with? Who knows! How do I opt out? Who knows! How secure are these companies? Who knows! At least using Edge or Chrome I only have to hand over my data to one evil corporation, instead of several. Plus I actually get things I want in return (for me: automatic image descriptions, reader mode, read aloud, and AI-based page summaries). Nothing I get from the companies Firefox works with is something I even want.
This has been broken for us on the entire 0.9 series. It works with Iceshrimp, GoToSocial, etc., just not Mastodon. I think it has something to do with authorized fetch and signatures, but I haven’t tried to track it down, as the way Lemmy formats posts from Mastodon was super ugly anyway.