I WANT THE MELTING POT TO GRIND MY ANCESTRAL LINE INTO FRIVOLOUS POWDER 🇲🇩🇲🇩🇲🇩🇲🇩🇲🇩🇲🇦🇲🇦🇲🇦🇲🇦🇲🇦🇲🇦🇲🇦
Based in Israel; we don’t get anything. This is standard, as our contracts usually specify that a third of our salary is legally considered compensation for overtime.
There’s no defined schedule; it’s mostly “whoever is available will take care of the incident, and if multiple people are available then they should all join”. It obviously won’t go smoothly if you’re never available. This is terrible, and I wonder if there are any other places that behave like this.
It should be noted that this isn’t weird considering the working hours here are quite bad compared to the OECD average, though not terrible.
I’d be scared to perform POST/PUT requests with LLM-generated commands. For read-only calls I agree, though.
I guess they were referring to formatting other than tabs, like bracket placement and line length, which sounds like a neat idea.
I’m using both of them :) zoxide comes with a `zi` command which lets you search through your recent directories.
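For anyone who hasn’t tried it, the two commands look roughly like this (the keyword is just a placeholder, and `zi` needs a fuzzy finder like fzf installed):

```shell
z proj     # jump straight to the highest-ranked directory matching "proj"
zi         # open an interactive fuzzy picker over your recent directories
zi proj    # same picker, pre-filtered to matches for "proj"
```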
Okay, you’re not gonna like it, but I rented a 1TB storage box from Hetzner for 3 euros a month, just to get that foot off my neck. It’s omega cheap and mountable via CIFS, so life is good for now. I’m still interested in what I described in the OP, and I even started scribbling some Python, but I’m too scared of fucking anything up as of now.
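In case it helps anyone, the mount is a one-liner. The hostname and username below are placeholders in Hetzner’s usual `uNNNNNN.your-storagebox.de` naming scheme; substitute your own box’s details:

```shell
# Mount the storage box over CIFS (u000000 is a placeholder account name);
# "seal" enables SMB3 transport encryption
sudo mount -t cifs //u000000.your-storagebox.de/backup /mnt/storagebox \
    -o user=u000000,seal,uid=$(id -u),gid=$(id -g)
```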
The annoying part in writing that script was discovering that the filenames on disk don’t match the filenames in the URLs. E.g., given this URL: `https://lemmy.org.il/pictrs/image/e6a0682b-d530-4ce8-9f9e-afa8e1b5f201.png`, you’d expect that somewhere inside `volumes/pictrs` you’d find `e6a0682b-d530-4ce8-9f9e-afa8e1b5f201.png`, right…? Well, that’s not how it works: the filenames are of the exact same format, but they don’t match.
So my plan was to find non-local posts in the `post` table, check whether the `thumbnail_url` column starts with `lemmy.org.il` (assuming that means my instance cached it), then find the file by downloading it via the URL and scanning the `pictrs` directory for files that match the downloaded file’s exact size in bytes. Once found, compare their checksums to be sure it’s the same one, then delete it along with its post entry in the database.
When I get close to 1TB I’ll come back to this idea… :P
Haha, I’m literally on it right now. My instance crashed a couple of hours ago because of it, so I emptied `~/.rustup` to buy some time, but idk how to go about it from here. LPP didn’t do anything. That seems really curious; does literally everyone use S3?
Thanks a lot, I was looking for this exact kind of community. Posted there <3
I should’ve mentioned it in the post, but I already tried deleting pics modified more than X days ago. The catch is that I don’t wanna delete pics uploaded to my server, I just want to delete pics cached from other instances :(
Yep, I manage my servers and local machine with Ansible so I abstracted it with a role. This is indeed not that bad of a con because it’s still plaintext so automation is easy, but it’s still a minor issue ;)
I really liked unity 😞
Love me some systemd timers. Much more fun than cron.
- `EnvironmentFile=`
- `journalctl -f` to watch long-running processes, which I’m not sure is possible with cron
- No more setting `* * * * *`, then forgetting it’s supposed to run in a minute, getting distracted, and coming back in 15 minutes

My only complaint is that it’s a bit verbose. I’d rather have it as an option inside the `.service` file. The `.timer` requires some boilerplate like a `Description=` (it… uh… triggers a service. that’s the description) and `WantedBy=timers.target`. But these are small prices to pay.
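For reference, the boilerplate I mean looks roughly like this (the unit name and schedule are hypothetical):

```ini
# backup.timer: triggers the matching backup.service
[Unit]
Description=Run backup.service every 15 minutes

[Timer]
OnCalendar=*:0/15
Persistent=true

[Install]
WantedBy=timers.target
```

Enable it with `systemctl enable --now backup.timer`, then follow the runs with `journalctl -u backup.service -f`.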
Was it unofficial? I thought it was merely opt-in, but still official
Oh thanks for the heads up, I should’ve read it more carefully :P