Just some Internet guy

He/him/them 🏳️‍🌈

  • 1 Post
  • 828 Comments
Joined 1 year ago
Cake day: June 25th, 2023

  • Guaranteed there will be questions about the cost of setup, maintenance, and the risks.

    And the time spent moderating it, especially if they run their own instance. At least with Twitter/Facebook/YouTube, you get a lot of moderation for free, whether you agree with it or not.

    And if they use another instance, there are other liability questions about which particular instance to choose. If it’s gonna represent an official city account, you’d expect some cybersecurity certifications to be a requirement and all kinds of stuff, even if it’s a free service. There’s also the risk of the instance admins interfering, possibly steering opinions during city elections, etc.

    Nobody cares about decentralized social networks, the technology, or how terrible the other outlets are. For a municipality, you may want to focus on maintaining multiple channels of communication and ways to reach and engage the most users, then fold the fediverse into that as one more channel, something to keep an eye on. They’ll need a way to post the same content to all those channels with the least effort, something easy that a trained intern or clerk can do.

    In this case IMO it might even be better to use something like WordPress with the ActivityPub plugin, or an alternative to it. I imagine a city mostly posts announcements and the like, so a blog that serves as the official website and that you can follow and interact with from the comfort of your preferred social service sounds a lot more appealing than yet another social network without that many users. You can even use more plugins to post to Facebook and Twitter as well, all from one place. Given the age of the board, they’re also more likely to know and care about Threads and Bluesky compatibility, just because those have more users, and bureaucratic decisions are based on numbers. A nice graph showing that by supporting AP and AT they’d capture all the users fleeing Twitter would go a long way.


  • It’s nicknamed the autohell tools for a reason.

    It’s neat, but most of its functionality is completely useless to most people. The autotools are so old I think they even predate Linux itself, so they’re designed for portability between the UNIXes of the time: they check the compiler’s capabilities and supported features and try to find paths. They also wildly predate package managers, back when ./configure && make && make install was the official way to install things, so they had to check for dependencies, find them, and all that stuff. Nowadays you might as well just write a PKGBUILD if you want to install it, or a Dockerfile. There’s just no need to check for 99% of the stuff the autotools check: everything they test for has probably been a standard compiler feature for at least the last decade, and the package manager can ensure the build dependencies are present.
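
    For illustration, a minimal PKGBUILD sketch for a hypothetical autotools-based project (the name, URL, and dependency lists are all made up): the makedepends array is what takes over the dependency checking, since pacman guarantees those packages are installed before build() ever runs.

      # Hypothetical package; every field is illustrative, not a real project
      pkgname=example
      pkgver=1.0
      pkgrel=1
      pkgdesc="Hypothetical autotools-based project"
      arch=('x86_64')
      url="https://example.com/example"
      license=('MIT')
      depends=('glibc')
      # The package manager verifies these up front, replacing most configure-time probing
      makedepends=('gcc' 'make' 'pkgconf')
      source=("$url/$pkgname-$pkgver.tar.gz")
      sha256sums=('SKIP')

      build() {
          cd "$pkgname-$pkgver"
          ./configure --prefix=/usr
          make
      }

      package() {
          cd "$pkgname-$pkgver"
          make DESTDIR="$pkgdir" install
      }

    makepkg then handles fetching, building, and producing an installable package, which is most of what the autotools-era install story did by hand.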

    Ultimately, that whole process just generates a Makefile via M4 macros, and the result looks about as good as any other generated Makefile from the likes of CMake or Meson. So you might as well either write your Makefile by hand, or reach for a better tool when it’s time to generate one.

    (If only C++ build systems caught up to Golang lol)

    At least it’s not node_modules





  • I think it is a circular problem.

    Another example that comes to mind: the sanctions on Huawei, and whether Google would be considered to be supplying it software given that Android is open-source. At the very least, any contributions from Huawei are unlikely to be accepted into AOSP. The EU is also becoming problematic with the software origin and quality certifications it’s trying to impose.

    This leads to exactly what you said: national forks. In Huawei’s case that’s HarmonyOS.

    I think we need to get back to being anonymous online: if you’re anonymous, nobody knows where you’re from, and your contributions get judged solely on their merit. The legal framework just isn’t set up for an environment like the Internet, which severely blurs the lines between borders and rarely offers a clear “this company is supplying that company in the enemy country”.

    Governments can’t control it, and they really hate it.


  • The problem isn’t even where the software is officially based; it can become a problem for individual contributors too.

    PGP, for example, used to be problematic because US export controls on encryption forbade exporting systems capable of strong encryption, since the US wanted to be able to break it when others used it. An American sending the PGP tarball to the Soviets at the time would have been considered treason against the US, let alone letting them contribute to it. Heck, sharing 3D-printable gun models with a foreign country can probably be considered supplying weapons as if they were real guns. So even if Linux were based in a more neutral country not subject to US sanctions, the sanctions would still make it illegal for anyone bound by them to use or contribute to it anyway.

    As much as we’d love to believe in the FOSS utopia that transcends nationality, the reality is we all live in real countries with laws that restrict what we can do. Ultimately the Linux maintainers had to do what’s best for the majority of the community, which mostly lives in NATO countries honoring the sanctions against Russia and China.


  • Max-P@lemmy.max-p.me to Piracy@lemmy.ml · AI for torrenting? · 7 days ago

    No. It could maybe repair some files to make them playable, by extrapolating from the sections before and after, like a couple of seconds missing here and there in a movie, but all bets are off as to whether it’ll guess right. I’m not aware of such a tool existing.

    But if it’s a zip file, there’s no chance it can fix it. It’s very different from AI upscaling, because you don’t just need an answer that’s close enough, you need the exact bits: even one value being off could mean, for example, that the gravity of the whole game is wrong. And if some files are encrypted, all bets are off, as fixing them would imply breaking the encryption.

    Also, I’d look at what the missing data actually is. Sometimes you’re stuck at 99% because the only seeder left didn’t download a readme file or something, but the whole content is there.




  • With Docker, the internal network is just a bridge interface. The reason most firewall rules don’t apply is a combination of:

    • Containers have their own namespaces, including a network namespace, so each container has a blank iptables ruleset just for it.
    • Container communication goes through the FORWARD chain, not the INPUT/OUTPUT ones.
    • Docker adds its own rules to ensure that this works as expected.

    The only thing that should be affected by the host firewall is the proxy Docker uses to listen on a port on the host and forward it to the container.

    When using Docker, each container acts like an independent machine, and your host gets configured to act as a router. You can firewall Docker containers; the rules just need to be in the right place to work.
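
    A minimal sketch of where such a rule goes (assuming the iptables backend, eth0 as the host’s external interface, and a 192.168.1.0/24 LAN, all hypothetical here): Docker creates a DOCKER-USER chain that is evaluated in the FORWARD path before Docker’s own rules, so restrictions on traffic headed for containers belong there rather than in INPUT.

      # Allow only the LAN to reach containers via the external interface;
      # any other forwarded traffic arriving on eth0 gets dropped.
      iptables -I DOCKER-USER -i eth0 ! -s 192.168.1.0/24 -j DROP

    An equivalent rule in INPUT wouldn’t catch this traffic because, as noted above, it’s forwarded to the container rather than addressed to the host itself.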


  • The sandboxing is almost always better because it’s an extra layer.

    Even if you gain root inside the container, you’re not necessarily root on the host. So you have to exploit some software that uses a known vulnerable library, trigger the vulnerability in that single application using that particular library version, root or escape the container, and then root the host too.

    The most likely outcome is that it messes up your home folder and anything your user has access to, and probably even less than that.

    Also, something having a known vulnerability doesn’t mean it’s actually triggerable. If you use, say, a zip library only to decompress your own assets, then it doesn’t matter what bugs it has; it will only ever decompress that one known-good zip file. It’s only a problem when untrusted files get involved, where you can trick the user into opening them and triggering the exploit.

    It’s not ideal to have outdated dependencies, but the sandboxing helps a lot, and the fact that only a few apps ship known vulnerable libraries further reduces the attack surface. You’d have to chain a lot of exploits to do anything meaningful, and at that point you’d aim that kind of effort at bigger, more valuable targets.




  • Also, Series F but they’re only deploying on one server? Try scaling that to a real deployment (200+ servers) with millions of requests going through and see how well that goes.

    And there’s also no way their process passes ISO/SOC 2/PCI certifications. CI/CD isn’t just “make do things”; it’s also the process, the logs, all the checks that ran, the mandatory peer reviews. You can’t just deploy without audit logs of who pushed what, when, and who approved it.



    My point was really that data can’t be that expensive, even including transit fees from the likes of Cogent and Level3, because I can use TBs of bandwidth every month and OVH doesn’t even bother measuring it.

    If my home ISP gives me a gigabit link, yes, I pay for all the cabling and equipment to carry that traffic. But that’s it: I already pay for infrastructure capable of providing me with gigabit connectivity. So why do they also want me to pay per GB?

    In Europe they can provide gigabit connectivity for dirt cheap with no caps; they don’t even bother with tiered speed plans there. How come my $120+/mo Internet in the US isn’t enough to cover the bandwidth costs? It’s ridiculous; even Starlink doesn’t have data caps.

    But somehow communities with crappy DSL that can barely do 10 Mbps still get ridiculously low data caps. It’s not a problem for most ISPs in the world, except US ISPs, in the supposedly richest and most advanced country in the world.