

As far as I know, there is no programmatic way to destroy an existing pizza. terraform destroy is implemented on the client side, by consuming the pizza.


I have a sticker of the Nix one on a laptop.


Have you looked at solutions which emulate GitHub Actions locally?
https://github.com/nektos/act is one of them, but I think I’ve seen at least one more.
GitHub Actions also has self-hosted runners: https://docs.github.com/en/actions/concepts/runners/self-hosted-runners
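For act specifically, a minimal local run might look like this (a sketch assuming Docker is installed and the repo already has workflows under .github/workflows; the job name “build” is a placeholder):

```
# run the jobs your workflows would trigger on a push event, in local containers
act push

# or run a single job by its id ("build" here is hypothetical)
act -j build
```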


What would you use if you had a choice?


For maintenance I would recommend a ticketing system instead of Forgejo:
https://selfh.st/apps/?search=ticket
There are a few options, and they probably all work better than a Git forge’s issue tracker.
Another thing I would recommend is centralized accounts via an identity provider. People get one username and password they can use to log into all the services, and you can reset their credentials or sign them up for all connected services by managing them in the identity provider.
There are a few options for this as well, but I’m on my phone so imma just list the three that I find most promising for your use case: Kanidm, VoidAuth, authentik.


https://home.robusta.dev/blog/stop-using-cpu-limits
Okay, it’s actually more complex than that. On self-managed nodes, Kubernetes is not the only thing running, so it can make sense to set limits to protect other non-Kubernetes workloads hosted on those nodes. And memory is a bit different from CPU. You will have to do some testing and YMMV, but just keep the difference between requests and limits in mind.
But my suggestion would be to see if you can get away with only setting requests, or with setting very high limits. See: https://kubernetes.io/docs/tasks/configure-pod-container/assign-memory-resource/#if-you-do-not-specify-a-memory-limit
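A minimal sketch of that requests-plus-high-memory-limit approach (all names and numbers below are placeholders, not a tuned recommendation):

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: example-app        # hypothetical pod name
spec:
  containers:
  - name: app
    image: nginx           # placeholder image
    resources:
      requests:
        cpu: 500m          # what the scheduler reserves for this pod
        memory: 512Mi
      limits:
        memory: 2Gi        # generous ceiling: the container is only
                           # OOM-killed if it exceeds this, not the request
        # note: no cpu limit at all, so the pod can burst into idle CPU
EOF
```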
> In order for them not to be OOM killed, you have to set the memory requests for them above their highest spike, which means most of the time they’re only using like 25% or so of their memory allocation.
Are you sure? Only limits cap the total memory usage of a pod; requests should happily let pods use more memory than the request size.
One thing I am curious about is whether your pods actually need that much memory. I have heard (horror) stories where people had an application in Kubernetes with a memory leak, and instead of fixing the leak, they just regularly killed pods and started fresh ones that weren’t leaking yet. :/
To answer your actual question about memory optimization: no. Even Google still “wastes” memory by setting requests and limits higher than what pods usually use. It is very difficult to prune things down and be ultra efficient. If an outage due to OOM costs more than paying for extra resources would, then people just resort to the latter.


This is the same technology that lets people play Windows games on Android with good performance. Because there is no direct access to the GPU there, they have to use GPU virtualization to expose it to a Linux proot that runs Wine inside.
I’m excited to see it being used and developed in other areas.


> design around ease of self-hosting. A non technical user must be able to self host easily and at a very low cost.
This may be a controversial opinion, but I actually like that hosting a Lemmy instance is somewhat difficult to spin up. I like that it requires a time investment, so spammers can’t simply spin up across different domain names. I like that problematic instances get defederated, and that spammers or other problematic individuals can’t simply move domain names, because ActivityPub identities are tied to them.
In theory, you could set up something like DigitalOcean’s droplets, where a user does one click to deploy an app like Nextcloud or whatever. But I’m not really eager to see something like that.
> Transferable user identity (between instances)
I dislike this for a similar reason, tbh. If someone gets banned, they should have to start over. Not get to instantly recreate and refederate all their content from a different instance.
Of course, ban evasion is always a thing. But what I like is that spammers or problematic individuals who had their content nuked are forced to start from scratch and spend time recreating it before they get banned again.
As for what I would really like to see, I would love features that make Lemmy work as a more powerful help forum. Like, on Discourse, if you make a post, it automatically searches for similar posts and shows them to you in order to avoid duplicate posts. Lemmy does something similar, but it appears to match only on the title. It would also be cool to automatically show relevant wiki pages or FAQ content, since one of the problems on Reddit was that people wouldn’t read a help forum’s wiki or FAQ.
I would also like the ability to mark a comment on a post as an “answer”, or something similar. I think Stack Overflow’s model definitely had lots of issues with mods incorrectly marking things as duplicates, but it was a noble goal to try to ensure that questions were only asked once and accumulated into a repository of knowledge. For all the complaints about it, Stack Overflow is undeniably one of the biggest and most useful repositories of knowledge.


There does exist a tool that does this. The creator posted about it on the fediverse. It only supported Ubuntu at the time, but it looked extremely promising.
I cannot remember its name. :/
Maybe it’s linixify? But I remember seeing a post on Lemmy with a YouTube demo?


> unless the SSD stopped working, but then it is reasonable to expect it would not accept partitioning
This happened to me. It still showed up in KDE’s partition manager (when I plugged the SSD into another computer), with the drive’s name showing as an error code.


The creator of this software streams on Twitch under the “linux” tag, which I follow. Last time I was on the stream, I think she was using Debian stable or unstable. She also has an Owncast instance, which is like an open-source, self-hosted Twitch.
https://expiredpopsicle.com/about.html
I really enjoy when people dogfood their own software.


My recommendation is Meetup plus a website for advertising purposes. Meetup is frustrating, yes, but at the same time it’s where I have found almost all the Linux and tech groups near me.
This may sound kind of weird, but do you really need a communication platform for a LUG?
Our local LUG uses Meetup and a website for advertising and telling people when we meet (once every two weeks at the same spot). (Okay, I guess the one time our spot was closed and we had to track down people’s phone numbers to tell them the new location wasn’t that fun.)
Anyway, we have a mailing list, an IRC channel, and a Matrix room bridged to the IRC, but they are effectively dead and no one uses them. The lack of activity on them makes me wonder if you really need a chatroom to run a LUG. We seem to get by just fine, for the most part.


Familiarity instead of compatibility.
The piece of documentation from Forgejo about how their Actions are mostly (but not fully) GitHub Actions compatible captures how I feel about this and similar endeavors.
I really like KDE because it’s familiar enough to Windows users that they can just kinda use it; many of the shortcuts are the same. But I’ve had bad experiences with things that try to emulate Windows more completely, because people begin to expect some Windows idiosyncrasy or other to be there, and then they get frustrated when it’s not the same.
KDE manages to be “close enough”, which results in a better experience.


Yes. My high school used to do this: UDP blocked except for DNS to some specific servers, and probably some other needed things.
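A hypothetical sketch of that kind of egress policy with iptables (192.0.2.53 stands in for one of the approved DNS servers; the real setup was likely on dedicated network gear rather than a Linux box):

```
# allow UDP DNS only to the approved resolver, drop all other outbound UDP
iptables -A OUTPUT -p udp --dport 53 -d 192.0.2.53 -j ACCEPT
iptables -A OUTPUT -p udp -j DROP
```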

Why not switch to 10 fps instead of the weird keyframe thing they did?
I was once watching a programming streamer on Twitch who was working from a laptop in a hotel instead of their usual powerful home setup with fast internet. They switched the stream to 10 fps and then it worked fine.

GNOME used to be much worse when it comes to RAM usage, so the inertia of those sentiments still carries.
KDE used to be much worse too, using about what GNOME uses now, but last time I tested, KDE had RAM usage similar to Xfce. CPU-wise it’s still much worse, though.


> I’ve heard of thumbnails being used to deliver malware.
You’ve heard of critical vulnerabilities in media-processing applications that mean thumbnails can theoretically be used to spread malware. That is not the same as “this issue was being actively exploited in the wild and used to spread malware before it was found and patched”.
These vulnerabilities (which, again, cost money to exploit) are fixed rapidly when found. Yes, disabling thumbnails is more secure. But I am of the belief that average users should not worry about any form of costly zero-day in their threat model, because they don’t have sensitive information on their computers that makes them a target.
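If someone does want thumbnails off anyway, GNOME has a switch for that (a sketch assuming the stock gsettings-desktop-schemas; other desktops have their own equivalents):

```
# disable all thumbnailers for the current user
gsettings set org.gnome.desktop.thumbnailers disable-all true
```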


Do you have a source or benchmarks for the last bullet point?
I am skeptical that optimizations like that wouldn’t already be implemented by Postgres itself.
Edit: Btrfs has the worst performance for databases, according to this benchmark:
https://www.dimoulis.net/posts/benchmark-of-postgresql-with-ext4-xfs-btrfs-zfs/