As the title says. I put the wrong value inside a cleanup script and wiped everything. I hadn't pushed any important work. I just want to cry, but at least I can offer it to you.
Do not hesitate to push even if your project is in a broken state.
I've "sudo shutdown now"-ed the main production (remote) server a few times before, and I've been SSHing into servers for a long time.
there there 🫂 its ok. we all do this shit. you do have backups of course, right?
You’ve done it a few times? At the same job? Are you self-employed?
throughout many years.
you do have backups of course, right?
cries
I did a “rm -rf *” in the wrong directory today.
I got the absolutely beautiful “argument list too long” in return.
I had a backup. But holy shit I’m glad the directory had thousands of files in it and nothing happened. First time I got that bash error and was happy.
I usually have rm aliased to "trash" or whatever that CLI-based recycle bin is called. But I just installed a new OS and ran this on a NAS folder today by mistake.
My dad once rm -rf’ed his company’s payroll server by accident. He was a database admin at the time. He was asked to make a quick update to something. Instead of running it as a transaction (which would have been reversible) he went “eh it’s a simple update.” He hit Enter after typing out the change for the one entry, and saw “26478 entries updated”. At that point, his stomach fell out of his asshole.
The company was too cheap to commit to regular 3-2-1 backups, so the most recent backup he had was a manual quarterly backup from three months ago. Luckily, Payroll still had paper timesheets for the past month, so they were able to stick an intern on data entry and get people paid. So they just had a void for those two months in between the backup and the paper timesheets.
It wasn’t a huge issue, except for the fact that one of their employees was on parole. The parole officer asked the company to prove that the employee was working when he said he was. The officer wanted records for, you guessed it, the past three months. At that point, the company had to publicly admit to the fuckup. My dad was asked to resign… But at least the company started funding regular 3-2-1 backups (right before his two week notice was up.)
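The "transaction" point is the key lesson in that story: wrapped in an explicit transaction, even a botched UPDATE can be rolled back before it's committed. A minimal sketch using Python's built-in sqlite3 (the payroll table and column names are hypothetical, and sqlite3 stands in for whatever database his company actually ran):

```python
import sqlite3

# In-memory database with a hypothetical payroll table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE payroll (id INTEGER PRIMARY KEY, hours REAL)")
conn.executemany("INSERT INTO payroll (hours) VALUES (?)", [(40.0,)] * 5)
conn.commit()

# Intended: update one entry. Oops: forgot the WHERE clause.
cur = conn.execute("UPDATE payroll SET hours = 0")
print(f"{cur.rowcount} entries updated")  # every row, not just one

# Because nothing has been committed yet, the damage is reversible.
conn.rollback()

remaining = conn.execute(
    "SELECT COUNT(*) FROM payroll WHERE hours = 40.0").fetchone()[0]
print(f"{remaining} rows back to their original value")
```

Run the same statement in autocommit mode and the rollback is gone; that's the difference between a scary message and a resignation letter.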
IN CASE OF FIRE
1. git commit
2. git push
3. exit building
Except when everyone pushes to main at the same time and now you have conflicts.
Who pushes to main? That branch should be protected! Who reviews the merge request?
Lol, standards 🙄
“git-fire is a Git plugin that helps in the event of an emergency by switching to the repository’s root directory, adding all current files, committing, and pushing commits and all stashes to a new branch (to prevent merge conflicts).”
git commit -m 'asdf'
I have this printed on a sign at work.
git commit, git push, git out
I need a t-shirt that says this.
This is a programmers mic drop.
No backup, no mercy.
Sorry this happened.
Use it as an opportunity to learn how to better store and edit your code (e.g. a VCS and a smart-ish editor). For me, a simple Ctrl-Z would be enough to get my code back.
it sounds like they rm -rf'ed their project. How would Ctrl+Z help here?
I should have put it inside the post text, but I used a wrong value inside a test
Sympathy upvote
Ya, push push push baby, do it on your own branch so that you can find your way back if needed.
Especially when refactoring.
I always like to say "push it to the limit" and then I picture Homer Simpson with a muscle body sitting on his super couch (I forgot which TV series the Simpsons were satirizing) 🤣
Update, hah found it😁 https://m.youtube.com/watch?v=7Mhb9D35pkc&pp=ygUdc2ltcHNvbnMgcHVzaCBpdCB0byB0aGUgbGltaXQ%3D
Time to implement a couple forms of backups.
You guys don’t use a COW (copy on write) filesystem?
Version control would be quite adequate if using a sane amount of time between pushes
Except that one is automatically versioned and would have saved you this pain, while the other relies on you actively remembering to commit, then doing extra work to clean up your history before sharing; and once you push, it's harder to rewrite history into a clean version to share.
These days, there’s little excuse to not use COW with automated snapshots in addition to your normal, manual, VCS activities.
I’m paranoid. I have like 5 different ways (including 3-2-1 backups) to restore everything. COW fs is great for stuff that is not a git-able project.
What did you learn from this?
To push daily and to not write tests :P
If you’re using vscode you might be able to look through the individual file histories to recover some work.
I keep my git clone in Dropbox so I can revert accidental delete and always have the most recent code on all devices without having to remember to commit and push. If it requires manual execution I wouldn’t really consider it a proper backup solution.
I have been burnt by Dropbox in the past so now use Syncthing between my desktop, laptop, and a private remote server with file versioning turned on. Trivial to global ignore node_modules, and not giving data to a third party.
It’s saved me on several occasions.
I use Dropbox too. Though I have to admit, when running code you sometimes have to pause sync otherwise it interferes with code execution. But definitely worth the peace of mind. Sometimes you don’t want to commit stuff until you’re sure that it works.
Do you at least have some local commits to get back to? Or did your job remove the .git folder as well? 👀
also removed .git
You have backups? Right?
what garbage cleanup tool gets rid of dotfiles, especially .git? if you let us know we can learn to avoid it
shutil.rmtree(BASE_DIR) instead of shutil.rmtree(TEMP_DIR) inside of teardown code.
On top of that, the contents of .git/objects/ are write-protected, so even if you go rm -r, you'll get an additional warning.
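One teardown pattern that sidesteps this whole class of bug: let the standard library own the temporary directory, so there is no BASE_DIR-style path variable to mix up in the first place. A minimal sketch (the file name and prefix are made up for illustration):

```python
import os
import tempfile

# TemporaryDirectory tracks its own path, so cleanup can only ever
# delete the directory it created -- no BASE_DIR/TEMP_DIR mix-up possible.
with tempfile.TemporaryDirectory(prefix="scratch-") as workdir:
    path = os.path.join(workdir, "scratch.txt")
    with open(path, "w") as fh:
        fh.write("disposable data")
    assert os.path.exists(path)

# On leaving the with-block, the directory and its contents are gone.
assert not os.path.exists(workdir)
```

In a test suite you'd typically create this in setUp (or a fixture) and register its cleanup there, so no hand-written rm-the-directory code ever touches a path you typed yourself.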
Oh man, I hate losing code. Last time it happened I spent more time trying to recover it than it would’ve taken to rewrite it.
You can’t just… replace your baby, man!