Lying liar lied. News at 11.
I can’t imagine that being the case for most users. I’m absolutely a power user, and I keep being surprised at how consistently well my base-model M1 Air w/ 16GB performs, even compared to another Mac workstation of mine with 64GB.
I can run two VMs, a ton of live-reloading development tooling, several JVM programs, and so much more on that little Air, and it won’t even break a sweat.
I’m not an Apple apologist - lots of poor decisions these days, and software quality has taken a real hit. But 16GB means everyone’s getting a machine that should last much longer, and I can’t see a normal user needing more any time soon, especially when Apple is optimizing their local machine learning models for their 8GB iOS platforms first and foremost.
It exists for the outgoing Mac mini. We ran two minis in a 1u, colocated in a DC, for years. They ran Ubuntu server.
Rack mini: https://www.sonnettech.com/product/rackmacmini.html
I’d stopped flying X-Plane when MSFS came out. Will give it a whirl too.
Haven’t. Will check it out! Thanks.
Recently decided to try Linux for gaming. It wasn’t without a hitch or two, but it’s largely been fine. A number of games I play don’t even need a compatibility layer like Proton.
The only reason Windows was lying around was for gaming.
Looks like it’ll only get used for flight simulation.
Those of us who work in (or love) tech, myself included, grossly overestimate how much the general public cares about, or cares to be informed about, this stuff. Heck, that even goes for people in tech who should know better.
I wish it weren’t the case, but look how long and hard Microsoft pushed Internet Explorer and ActiveX back in the early days of the web.
Google and Chrome are just another bit of history repeating itself.
As an aside, I’ve been using Zen for about a week and it’s been wonderful. It was an easy transition from Firefox, because it largely is Firefox, so all my containers, extensions, and settings carried over. Zen’s workspaces deliver exactly what I’d hoped Safari’s “tab groups” would (but they never worked right). I just wish there were an equivalent to the Hush plug-in on Safari (even after a year of full-timing FF, Consent-O-Matic is quite poor).
Sweet. It’s worth it IMO. And definitely fun for either tinkering or just having something solid that works (why not both? ;) ).
We’ve been using m0n0wall - now pfSense - since 2008.
I don’t necessarily recommend it, btw - there are lots of great options out there (like its cousin OPNsense, and so many more).
Easy to block that - though not with Pi-hole exclusively.
We use another tool at our network edge to block all 53/853 traffic, redirecting the port 53 traffic to our internal DNS resolver (which works much like Pi-hole); rough sketch below.
Then we also block all DoH.
Only two devices have failed under this strategy: Chromecast, which refuses to work if it can’t access Google’s DNS, and Philips Hue bridges. Both lie and say “internet offline”. Every other device - even some of the questionable ones on a special VLAN for devices we don’t trust - works just fine and falls back to the router-specified DNS.
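For anyone wanting to replicate this, here’s a rough sketch of the redirect using iptables on a Linux-based router. The interface name (br-lan) and resolver address (192.168.1.10) are placeholders; on pfSense the equivalent is a NAT port-forward rule plus a block rule.

```sh
# Redirect all plain DNS (port 53) from the LAN to the internal resolver,
# except traffic from the resolver itself so it can still reach upstream.
iptables -t nat -A PREROUTING -i br-lan -p udp --dport 53 ! -s 192.168.1.10 \
  -j DNAT --to-destination 192.168.1.10:53
iptables -t nat -A PREROUTING -i br-lan -p tcp --dport 53 ! -s 192.168.1.10 \
  -j DNAT --to-destination 192.168.1.10:53

# Reject DNS-over-TLS (port 853) outright - there's nothing to
# transparently redirect it to.
iptables -A FORWARD -i br-lan -p tcp --dport 853 -j REJECT
```

DoH is the harder part since it rides on port 443; in practice you block a maintained list of known DoH resolver IPs/hostnames rather than the port itself.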
An ex-Google, ex-Apple, leadership chatbot focused on improving outcomes with data and cat memes, hustling 24/7.
Two years plus source code and working OSS backends, or ten years (and still source code).
2 years will just ensure endless forced upgrade cycles IMO.
If it’s a backup server, why not build a system around a CPU with an integrated GPU? Some of the APUs from AMD aren’t half bad.
Particularly if it’s just your backup… and you can live without games/video/acceleration while you repair your primary?
Is there a reason you need a dual-boot instance instead of a VM or even WINE?
Unless you need direct access to the hardware, and provided you have enough RAM, you can probably avoid dual booting altogether.
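If you go the VM route, a minimal sketch with QEMU/KVM looks something like this (the image name, ISO name, and sizes are just placeholders):

```sh
# Create a disk image, then boot the Windows installer in a KVM-accelerated VM.
qemu-img create -f qcow2 windows.qcow2 64G
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host \
  -smp 4 \
  -m 8G \
  -drive file=windows.qcow2,format=qcow2 \
  -cdrom Win10_x64.iso \
  -boot d
```

For games specifically you’d want GPU passthrough (VFIO) or just WINE/Proton, but for the occasional Windows-only tool this is plenty.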
Good enough? I mean, it’s allowed. But it’s only “good enough” if a licensee’s goal is to make using the code they changed or added as hard as possible.
Usually, the code was obtained through a platform like GitHub or GitLab and could easily be re-contributed with comments and documentation in an easy-to-process manner (like a merge or pull request). I’d argue that not completing the loop the same way the code was obtained is hostile - the code equivalent of not taking the time to put your shopping cart in the designated spot.
Imagine the owner of the original source code making it available only via zip file, with no code comments, READMEs, or developer documentation. If the tables were turned like that, very few people would actually use the product or software.
It’s a spirit vs. letter of the law thing. Unfortunately we don’t exist in a social construct that rewards good faith actors over bad ones at the moment.
As someone who worked at a business that transitioned to AGPL from a more permissive license, this is exactly right. Our software was almost always used in a SaaS setting, and so GPL provided little to no protection.
To take it further, even under the AGPL, businesses can simply zip up their code and send it to the AGPL’ed software owner, so companies are free to be as hostile as possible (and some are) while staying within the legal framework of the license.
I’ve been using self-hosted Ghost for a bit and it’s a pretty well designed piece of software.
That it requires Mailgun to really function well was a bit of a nuisance. But that’s a very minor nitpick that will likely change if adoption increases.
There sadly isn’t a viable one at the same level of functionality.
Agreed. Companies should be required by law to release the source code, build guides, documentation, and service architecture for any services or apps that hardware they sold depends on.
While there are bigger fish to fry at the moment, socially speaking, the problem is only going to get worse if legislators don’t step in.
Found the other NixOS user. ;)