• 0 Posts
  • 94 Comments
Joined 1 year ago
Cake day: August 18th, 2023

  • Honestly, I’ve worked with a few teams that use conventional commits, some even enforcing it through CI, and I don’t think I’ve ever thought “damn, I’m glad we’re doing this”. Granted, all the teams I’ve been on were working on user-facing products with a rolling release where main always = prod, and there was zero need for auto-generating changelogs or analyzing the git history in any way. In my experience, trying to roughly follow one feature / change per PR and then just squash-merging PRs to main is really just … totally fine, if that’s what you’re doing.

    I guess what I’m trying to say is that while conventional commits are neat and all, the overhead really isn’t always worth it. If you’re developing an SDK or OSS package and you need changelogs, sure. Other than that, really, what’s the point?
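
    (For anyone unfamiliar, “conventional commits” just means prefixing every commit message with a machine-readable type and optional scope; the messages below are made up, but the format is roughly this:)

    ```
    feat(checkout): add Apple Pay support
    fix(auth): handle expired refresh tokens
    chore: bump dependencies
    feat!: drop legacy v1 endpoints    <- the "!" marks a breaking change
    ```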


  • > I know. Just the “full-stack meta frameworks” part alone makes any ADHD person feel nausea.

    But why? What’s bad about this?

    > I disagree. Geminispace is very usable without scripts

    That’s great, I’m not saying that it’s impossible to make usable apps without JS. I’m saying that the capabilities of websites would be greatly reduced without JS being a thing. Sure, a forum can be served as fully static pages. But the web can support many more advanced use-cases than that.

    > If only one paradigm must remain, then naturally I pick mine. If not, then there’s no problem and I still shouldn’t care.

    So you can see that other people have different needs to yours, but you think those shouldn’t be considered? We’re arguing about the internet. It’s a pretty diverse space.

    > For me it’s obvious that embeddable cross-platform applications as content inside hypertext are much better than turning a hypertext system into some overengineered crappy mess of a cross-platform application system.

    Look, I’m not saying that the web is the most coherent platform to develop for or use, but it’s just where we’ve ended up after decades of evolving needs being met.

    That said, embedded interactive content is absolutely not better than what we have now. For one, both Flash and Java Applets were mostly proprietary technologies, placing far too much trust in the corpos developing them. There were massive cross-platform compatibility problems, and neither was in any way designed for, or even ready for, a responsive web that displays well on different screen sizes. Accessibility was a big problem as well, since the embedded content needed an entirely different accessibility paradigm from the HTML+CSS shell around it.

    Today, the web can do everything Flash + Java Applets could do and more, except in a way that’s not proprietary but based on shared standards, one that’s backwards-compatible, builds on top of foundational technologies like HTML rather than around them, and can actually keep up with the plethora of different client devices we have today. And speaking of security: sure, maybe web browsers were pretty insecure back then generally, but I don’t see how you can argue that a system requiring third-party browser plug-ins that have to be updated separately from the browser can ever be a better basis for security than just relying entirely on the (open-source!) JS engine of the browser for all interactivity.

    > I ask you for links and how many clicks and fucks it would take to make one with these, as opposed to back then. These are measurable, scientific things. Ergonomics is not a religion.

    The idea that any old website builder back in the day was more “ergonomic” while even approaching the result quality and capabilities of any no-code homepage builder solution you can use today is just laughable. Sorry, but I don’t really feel the burden of proof here. And I’m not even a fan of site builders, I would almost prefer building my own site, but I recognize that they’re the only (viable) solution for the majority of people just looking for a casual website.

    Besides — there’s nothing really preventing those old-school solutions from working today. If they’re so much better than modern offerings, why didn’t they survive?


  • > So what does it say about us diverting from purely server-side scripted message boards with pure HTML and tables, and not a line of JS? Yes, let’s get back there please.

    Ironically, proper SSR that has the server render the page as pure HTML & CSS is becoming more and more popular lately thanks to full-stack meta frameworks that make it super easy. Of course, wanting to go back to having no JS is crazy — websites would lose almost all ability to make pages interactive, and that would be a huge step backwards, no matter how much nostalgia you feel for a time before widespread JS. Also tables for layout fucking sucked in every possible way; for the dev, for the user, and for accessibility.
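
    (To make the “proper SSR” point concrete: below is a minimal sketch, not tied to any particular framework, of a server sending a page as plain HTML. The names and data are made up; real meta frameworks like Next, Nuxt or SvelteKit just automate this pattern.)

    ```typescript
    // Minimal SSR sketch: the server builds the complete HTML and sends it as-is.
    import { createServer } from "node:http";

    const posts = [{ title: "Hello", body: "Rendered entirely on the server." }];

    createServer((_req, res) => {
      const html = `<!doctype html>
    <html>
      <body>
        <h1>My forum</h1>
        ${posts
          .map((p) => `<article><h2>${p.title}</h2><p>${p.body}</p></article>`)
          .join("")}
      </body>
    </html>`;
      res.writeHead(200, { "Content-Type": "text/html; charset=utf-8" });
      res.end(html); // the client gets readable HTML with zero JS required
    }).listen(3000);
    ```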

    > people want nice, dynamic, usable websites with lots of cool new features, people are social

    That’s right, they do and they are.

    > By the way, we already had that with Flash and Java applets, some things of what I remember were still cooler than modern websites of the “web application” paradigm are now.

    Flash and Java Applets were a disaster and a horrible attempt at interactivity, and everything we have today is miles ahead of them. I don’t even want to get into making arguments as to why because it’s so widely documented.

    > And we had personal webpages with real names and contacts and photos. And there were tools allowing to make them easily.

    There are vastly more usable and simple tools for making your own personal websites today!




  • How do you know this? Of course there are lots of reasons why they’d want to enforce minimum browser versions. But security might very well be one of them. Especially if you’re a bank, you probably feel bad about sending session tokens to a browser that potentially has known security vulnerabilities.

    And sure, the user agent isn’t a sure way to tell whether a browser is outdated, but in 95% of cases it’s good enough, and people who know enough to understand that the block shouldn’t apply to them can bypass it easily anyway.
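
    (Something like the rough sketch below is all I mean. The threshold and function name are made up, and yes, the UA header is trivially spoofable, which is exactly how people who know better can get around a check like this.)

    ```typescript
    // Hypothetical “minimum browser version” gate based on the User-Agent header.
    const MIN_CHROME_MAJOR = 100; // made-up policy threshold

    function isOutdatedChrome(userAgent: string): boolean {
      const match = userAgent.match(/Chrome\/(\d+)/);
      if (!match) return false; // not Chrome (or unparseable): don’t block here
      return Number(match[1]) < MIN_CHROME_MAJOR;
    }

    // e.g. in whatever request handler the site uses:
    // if (isOutdatedChrome(req.headers["user-agent"] ?? "")) { /* show a “please update” page */ }
    ```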


  • > There’s no reason your clients can’t have public, world routeable IPs as well as security.

    There are a lot of valid reasons, other than security, for why you wouldn’t want that though. You don’t necessarily want to allow any client’s activity to be traceable on an individual level, nor do you want to allow people to do things like count the number of clients at a particular location. Information like that is just unnecessary to expose, even if hiding it doesn’t make anything more secure per se.




  • efstajas@lemmy.world to Linux@lemmy.ml · “Systemd is the future”

    Oof, that quote is the exact brand of nerd bullshit that makes my blood boil. “Sure, it may be horribly designed, complicated, hard to understand, unnecessarily dangerous and / or extremely misleading, but you have nOT rEAd ThE dOCUmeNtATiON, therefore it’s your fault and I’m immune to your criticism”. Except this instance is even worse than that, because the documentation for that command sounds just as innocent as the command itself. But I guess obviously something called “tmpfiles” is responsible for your home folder, how couldn’t you know that?
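
    (For context, and purely from memory so the exact entries may differ: systemd ships tmpfiles.d snippets that explicitly put /home in scope of the tmpfiles machinery, roughly along these lines, which is how a thing named “tmpfiles” ends up owning your home folder.)

    ```
    # roughly what a stock tmpfiles.d home.conf looks like (paraphrased, may differ by version)
    Q /home 0755 - - -
    q /srv  0755 - - -
    ```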


  • You’re of course right about the exclusivity argument: that’s a very real possibility, and yet Microsoft has tried it with Call of Duty, one of the most popular franchises ever, and saw very little success with it, resulting in them putting it back on Steam years later. If I were to guess why attempts like this have failed in the past, I would say that Steam is so dominant over the PC gaming market today that not even large franchises going exclusive attract enough of a user base to offset the loss of customers who aren’t buying games only because they’re not on Steam. Add to this the additional overhead of developing and maintaining a competing storefront, and the cost-benefit analysis leans clearly towards just being on Steam and accepting their cut of sales. The exclusivity tactic clearly failed even for big titles like CoD, so it definitely won’t work for smaller ones. And we’re not even talking about cutting into the indie game market, which would require making very attractive exclusivity offers to many smaller studios, all for acquiring exclusivity on titles in the hope that they’ll be the next big hit, a very high-risk strategy that likely results in a lot of sunk cost short-term.

    > Once they have that market share, they can give developers better margins, since they’ll be selling customer data at a profit

    When we talk about “selling customer data”, I think we need to look in more detail at what this would actually mean in practice. It’s very unlikely that any online storefront could legally, literally “sell your personal data”, like the address you’d presumably enter as part of the payment process, to third parties. That’s just illegal almost everywhere in the world, and certainly in the largest PC gaming markets. It wouldn’t lead to significant revenue either, because raw data like that just isn’t very valuable. Instead, I suppose what people mean when they say this (in the context of companies like Google or Facebook) is just the practice of selling advertising services that use the data they have on people, letting advertisers target their ads at highly specific segments and improve their return on ad spend. The actual private data stays with the entity that collected it, because it’s what actually gives them the edge on the market; it allows them to offer better ad targeting than competitors.

    How would this apply to Steam or a potential competing storefront? Barely. I assume no-one is arguing that a steam competitor could launch a generic advertising network that could stand against Google or Facebook, so we’re probably talking about advertising within the storefront itself. Steam today already collects information on your interests and customizes the store based on that, plus presumably your location, age group etc. — so they’re pretty much already using your “personal information” to the extent possible in this context. How else could a competitor realistically monetize personal information?

    > It’s a market, markets trend towards short term gains strategies over long term gains strategies because having faster short term gains means you can more easily crush your competition.

    I wouldn’t say that this is the case when we’re talking about trying to eat into the market share of a dominant entity like Steam. Sure, potential competitors can make short-term plays that cut away some market share, but such strategies are expensive, risky, and alone likely don’t lead towards a significantly improved position long-term (exhibit A, again: COD being exclusive to Battle.net).

    For better or worse (usually worse), toppling a near-monopoly like Steam is extremely hard for players with big cash, and practically impossible for independent competitors. This is especially true for products that are inherently sticky, like Steam, where people have curated large libraries over decades. The only reason Steam’s dominant position is not hurting the consumer is because their product works well and is in many ways very pro-consumer.