this post was submitted on 23 Oct 2025
166 points (96.1% liked)

Linux


Today I accidentally deleted an important markdown file on my SSD and had to recover it.

All I did was this:


1. Run `sudo systemctl status fstrim.timer` to check how often TRIM runs on my system (apparently it runs weekly, and the next scheduled run was in 3 days)
2. Run `sudo pacman -S testdisk`
3. Run `sudo photorec`
4. Choose the correct partition where the files were deleted
5. Choose the filesystem type (ext4)
6. Choose a destination folder where to save recovered files
7. Start the recovery; 10-15 minutes and it's done
8. Open nvim in the parent folder and grep for content in the file that I remember adding today
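A sketch of the whole session (device and paths are from my setup; photorec is interactive, so its steps are shown as comments, and only the final grep actually runs here):

```shell
# Recap of the session above. The privileged steps are left as comments since
# photorec walks you through them interactively:
#
#   sudo systemctl status fstrim.timer   # when does the weekly TRIM fire next?
#   sudo pacman -S testdisk              # the testdisk package ships photorec
#   sudo photorec                        # pick partition -> ext4 -> destination dir
#
# photorec dumps recovered files into numbered recup_dir.* folders with generic
# names, so grep for a phrase you remember writing to find your file.
# (Hypothetical destination folder and file name, just to show the search step.)
dest=$(mktemp -d)
mkdir -p "$dest/recup_dir.1"
echo "line I remember adding today" > "$dest/recup_dir.1/f0834712.md"
grep -rl "line I remember adding today" "$dest"   # prints the matching file's path
```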


That's it - the whole process was so fast. No googling through 10 different sites with their shitty flashy UIs promising "free recovery," wondering whether this is even trustworthy to install on your machine, dealing with installers that'll sneak in annoying software if you click too fast, only to have them ask for payment later. No navigating complex GUIs either.

I was so thankful for this I actually donated to the maintainers of the software. Software done right.

[–] Lemmchen@feddit.org 48 points 1 week ago (2 children)

PhotoRec and TestDisk are available for Windows as well.

[–] chasteinsect@programming.dev 26 points 1 week ago

Is that so? I discovered them through the Arch Wiki, so I had no idea!

File recovery - ArchWiki

One extra thing I forgot to mention was just how easy it was to find this recovery software thanks to the Arch Wiki.

[–] frongt@lemmy.zip 5 points 1 week ago

Yeah I've done almost this exact same process on Windows too. I just took the system offline to ensure no writes, and so that permissions weren't an issue.

[–] TachyonTele@piefed.social 38 points 1 week ago* (last edited 1 week ago) (10 children)

Question: where do you learn commands? There's no way you thought "let's try sudo" out of nowhere.

Sincerely,
New Linux User

Edit: Great responses everyone, thank you!

[–] puttputt@beehaw.org 26 points 1 week ago (1 children)

In another comment, OP said they checked the arch wiki, specifically https://wiki.archlinux.org/title/File_recovery. The arch wiki is a great resource; most of the information is not arch-specific and is useful for linux in general.

Regarding "let's try sudo": you should get familiar with sudo because it's one of the most important linux commands. It runs a command with elevated privileges (it originally stood for "super-user do"). That means sudo isn't actually the important part of the commands; it just means that the following commands (pacman and photorec) need elevated privileges. pacman deals with systemwide package management and photorec needs access to the raw storage device objects in order to recover files.

[–] thingsiplay@beehaw.org 15 points 1 week ago (1 children)

Acktually sudo is there to run a command as another user; it does not need to be the superuser (also known as root), that's just the default. To be honest, I have never used the option to run as another user, because my computers are single user only. There are so many more options. One should look into man sudo to see what's possible, it's incredible!

However, there are alternatives to sudo, such as doas, ported over from OpenBSD, and run0 from the evil SystemD. Their authors find sudo complicated and bloated.

Also a quick tip: use sudoedit (same as sudo -e or sudo --edit) instead of sudo vim to edit your files with elevated privileges while keeping your personal configuration. Whether you want to do that is up to you; I want to use my Vim configuration while editing files with sudo rights.

[–] towerful@programming.dev 4 points 1 week ago (1 children)

Yeh, I think it's actually "switch user do".
Like "su" is "switch user".

The default being root is handy.

[–] thingsiplay@beehaw.org 3 points 1 week ago (1 children)

Yes and no. The original design of sudo stood for super user do, and it could only run commands with super user privileges. The run-as-other-users feature was added later, and then they renamed it to substitute user do. I even looked it up to get that fact right, and I always forget it's "substitute" and not "switch", but I also think of sudo as switch user do.^^

[–] towerful@programming.dev 3 points 1 week ago

Massive Linux lore dumps going on, and I love it.
Thanks for correcting me.

[–] styanax@lemmy.world 21 points 1 week ago (1 children)

In the old days, we would ls /usr/bin/ (sic, there are several locations defined for apps) and either look at the man page (if it existed) for the items we saw, or just run the commands with a --help option to figure out what they did. At best we maybe had an O'Reilly book (the ones with animals on the covers) or friends to ask. You can still do that today instead of reading blog posts or websites, just look, be curious and be willing to break something by accident. :)

Part of the Linux journey is to be inquisitive and break some stuff so you can learn to fix it. Unlike, say, Windows, on a Unix-style system the filesystem is laid out in a very specific way (there's a specification [1]), so one always knows where "things" are: docs go here, icons go there, programs go here, configs go there... That lends itself to just poking around and seeing what something does when you run it.

After a while your brain adjusts and starts to see all the beautiful patterns in the design of the typical Linux OS/distro, because it's all laid out in a logical manner and documented how it's supposed to work if you play the game correctly.

[1] https://refspecs.linuxfoundation.org/fhs.shtml

[–] AnUnusualRelic@lemmy.world 3 points 1 week ago

In the old days, we would ls /usr/bin/ (sic, there are several locations defined for apps) and either look at the man page (if it existed) for the items we saw, or just run the commands with a --help option to figure out what they did

I confirm, that's exactly what I did in the 90s.

there are lots of cheatsheets out there but the best way to learn commands is practice. different people will use different commands, so you may not need to spend time learning ffmpeg syntax whereas others find it invaluable. Google is your friend while learning. if you have a Linux question, chances are someone else has had the same question and posted about it online. as far as basics go, spend some time learning about grep and find, they are probably the two most valuable basic commands imo outside of the common ls/mkdir/etc.

as for sudo, it's just "superuser do" so it's essentially the same as hitting run as admin in windows. lots of times if you try to run a command without sudo that needs it, you'll get a permission error which reminds you to run as superuser. it eventually becomes second nature to say "ah, this command needs direct access to a hardware device or system files" which means it'll need to be run with sudo.

[–] nope@jlai.lu 9 points 1 week ago* (last edited 1 week ago)

A very nice terminal-based cheatsheet is named tldr (or tealdeer).
It gives you a very short explanation of what a program does, and then lists common uses of that program and explains them

[–] Labotomized@lemmy.world 3 points 1 week ago

I am also a novice so take that into account. But it seems to me it’s something you learn over time what commands do what and when to use them. I think it’s kind of like knowing what folders or settings to navigate to in other operating systems. Over time you get a feel for it.

Also, most troubleshooting guides or things like photorec have the steps kind of built in and explain to you what the commands do.

If the guide you’re reading has the steps try to break them down and figure out what the command is actually doing rather than blindly copy pasting.

[–] Cris_Color@lemmy.world 2 points 1 week ago* (last edited 1 week ago)

Most of what I've learned has been the handful that have stuck when I've looked up how to do stuff. If you ever install a minimal distro and follow a guide or anything that's a great way to learn. Or if you look up how to fix something and you find commands on the internet, you can look up what their solutions do. But mostly I'm just replying to wish you well on your journey.

Best of luck with linux, hope you have a lovely day ☺️

[–] bitwolf@sh.itjust.works 2 points 1 week ago

It takes some time to understand the manpage format but it's worth the time because they're always available.

When I first started, I used the website explainshell to help me get an idea of how the most common commands worked.

This has the benefit of referencing the man pages directly so you can better learn how to interpret them.
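As a tiny illustration of the two entry points mentioned here, with grep as the guinea pig:

```shell
# two quick ways to see what a command does without leaving the terminal:
whatis grep 2>/dev/null || true   # one-line summary from the man page database
                                  # (skipped gracefully if whatis isn't installed)
grep --help | head -n 2           # the command's own built-in usage summary
```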

[–] Badabinski@kbin.earth 2 points 1 week ago

If you'd like to learn more about Bash itself, this is an amazing resource: https://mywiki.wooledge.org/BashGuide

Probably the ONLY place on the internet that will teach you to write safe shell scripts. Most shit on StackOverflow (and consequently, most shell generated by LLMs) is dangerous garbage.

[–] misteloct@lemmy.dbzer0.com -2 points 1 week ago* (last edited 1 week ago) (1 children)

AI is great for learning Linux imo, e.g. ask "why did my command say Permission Denied?". If you object to ChatGPT there are local AI engines too

[–] misteloct@lemmy.dbzer0.com 1 points 1 week ago* (last edited 1 week ago)

Downvoted because "AI bad". Seriously OP don't discount it. Source: I'm a professional software engineer.

[–] Redjard@lemmy.dbzer0.com 27 points 1 week ago (3 children)

Last time I deleted a plaintext file I just grepped for it.
cat /dev/nvme0n1 | strings | grep -n "text I remember"

Had to home in on the location with head -c and tail -c after I found it, then simply did a cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec and trimmed the last bits of filesystem garbage from the ends manually.
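The same trick can be tried safely on a scratch file standing in for the block device (which would need root to read):

```shell
# simulate a raw device: printable text surrounded by NUL bytes, like the
# remnants of a deleted file sitting between filesystem structures
img=$(mktemp)
printf 'fs metadata\000\000important note I remember\000\000more garbage' > "$img"
# strings extracts the printable runs from the raw bytes;
# grep -n then tells you which extracted string matched
strings "$img" | grep -n "note I remember"
rm -f "$img"
```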

[–] MonkderVierte@lemmy.zip 7 points 1 week ago* (last edited 1 week ago) (2 children)

That's the way.

Edit: no need for cat here.

[–] frezik@lemmy.blahaj.zone 5 points 1 week ago* (last edited 1 week ago) (1 children)

How far have we fallen from God's grace?

Edit: come to think of it, Useless Use of Cat is basically the blinking twelve problem for the Unix-inclined.

[–] Redjard@lemmy.dbzer0.com 3 points 1 week ago (1 children)

It makes the command easier to edit here. Put the various forms across my use next to each other and it becomes apparent:

cat /dev/nvme0n1 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec

compare that to

strings /dev/nvme0n1 | grep -n "text I remember"
tail /dev/nvme0n1 -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
tail /dev/nvme0n1 -c -123456789012 | head -c 3000 > filerec

where I have to weave the long and visually distracting partition name between the active parts of the command.
The cat here is a result of experiencing what happens when not using it.

Worse, some commands take input file arguments in weird ways or only allow them after the options, so when taking that into account the generic style people use becomes

strings /dev/nvme0n1 | grep -n "text I remember"
tail -c -100000000000 /dev/nvme0n1 | head -c 50000000000 | strings | grep -n "text I remember"
tail -c -123456789012 /dev/nvme0n1 | head -c 3000 > filerec

This is what I'd expect to run across in the wild, and also for example what AI spits out when asked how to do this. You'll take my stylistic cats over my dead body.

[–] thingsiplay@beehaw.org 2 points 1 week ago* (last edited 1 week ago) (1 children)

In that case I would prefer using variables for the filename:

file="/dev/nvme0n1"
text="text I remember"
strings "${file}" | grep -n "${text}"
tail -c -100000000000 "${file}" | head -c 50000000000 | strings | grep -n "${text}"
tail -c -123456789012 "${file}" | head -c 3000 > filerec

Even if it's in the terminal, a temporary variable helps a lot. And for a series of commands I would probably end up writing a simple script or Bash function to share.

[–] Redjard@lemmy.dbzer0.com 2 points 1 week ago (1 children)

I could do a script if I knew what I was gonna do ahead of time, or would write one later if I was gonna do it more often.

A variable in the shell is fine, but I still have to skip over it to change the first command, it still breaks up the flow a bit more than not having that "$file" in there at all.
Also if I interrupt the work (or in this case have to let it run for a while), or if I wanna share this with others for whatever reason, I don't have to hunt for the variable definition, and don't run any risk of fetching the wrong one if I changed it. Getting by without variables makes the command self-contained.

And it still maintains the flow of left to right. It's simply easier to take the tiny well-known packet of cat file and from that point pipe the information ever rightwards, than to see a tail, then read the options, and only then see the far more important start of where the information comes from, before continuing on with the next processing step.
Any procedural language is always as left to right as possible.

If you really want to avoid the cat, I have yet another different option for you:
< /dev/nvme0n1 strings | grep -n "text I remember"
< /dev/nvme0n1 tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
< /dev/nvme0n1 tail -c -123456789012 | head -c 3000 > filerec

This ofc you can again extend with ${infile} and ${recfile} if the context makes it appropriate.

[–] thingsiplay@beehaw.org 2 points 1 week ago

I understand the reason why you do it this way. Hardcoding and being explicit has its advantages (but also disadvantages). I was just saying that I personally prefer using variables in a case like this, especially when sharing, because the user then only needs to edit a single place. Variables also have the advantage of being a bit more flexible and easier to read and change across many commands. But that's from someone who loves writing aliases, scripts and little programs. It's just a different mindset and neither way is right or wrong. It's probably not worth complicating stuff for one-off commands, though.

And for the cat thing, I am not that concerned about it and just took the last example in your post (because it seemed to be the most troublesome). I personally avoid cat when I think about it, but won't go out of my way to hunt it down. I only do so if performance is in any way a critical issue (like in a loop).

[–] Redjard@lemmy.dbzer0.com 2 points 1 week ago (1 children)

I don't like to use < in combination with pipes, I find it harder to read. One is left to right the other right to left, and < is also just plain weird in its specifics.
cat is a stylistic choice avoiding needless notational complexity

[–] MonkderVierte@lemmy.zip 2 points 1 week ago (1 children)

strings /dev/nvme0n1 | grep -n "text I remember"

tail -c -123456789012 /dev/nvme0n1 | head -c 3000 > filerec

No need for < complexity either.

[–] Redjard@lemmy.dbzer0.com 2 points 1 week ago

Oh right I misunderstood.
I didn't do that because I was planning to switch out strings in that line. First inserting the tail and head before it to hone in on the position, then removing it entirely to not delete "non-string" parts of my file like empty newlines.

cat /dev/nvme0n1 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -100000000000 | head -c 50000000000 | strings | grep -n "text I remember"
cat /dev/nvme0n1 | tail -c -123456789012 | head -c 3000 > filerec

This would be the loose chain of commands I went through, editing one into the next. It's nice keeping the "constants" like the drive device that are hard to type static. That way mentally for me the command starts only after the first pipe.

[–] chasteinsect@programming.dev 7 points 1 week ago

Oh, fascinating that you can just do it manually.

[–] Gobbel2000@programming.dev 6 points 1 week ago

That was so eye-opening for me when I figured out you can just grep a block device for files unlinked by the file system but not yet physically overwritten. Magically reanimating lost files can be such an incredibly simple operation.

[–] just_another_person@lemmy.world 12 points 1 week ago (2 children)

Even easier: use btrfs or ZFS and tools that let you timeshift.

[–] cadekat@pawb.social 4 points 1 week ago

Snapper is great, just make sure the FS is set up correctly or it causes very mysterious hangs.

[–] Matriks404@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (1 children)

Last time I used btrfs (a few months ago on OpenSUSE) it eventually fucked up the whole partition, making it unrecoverable. No, thanks.

[–] possiblylinux127@lemmy.zip 3 points 1 week ago

I've used it for 5 or so years

It isn't perfect but in general it is solid

[–] thingsiplay@beehaw.org 8 points 1 week ago* (last edited 1 week ago) (1 children)

Congratz on recovering the important file. And thanks for sharing your tips and experience, good to know in case of an accident. In general I advise you to make regular backups of changing files (or at least once if a file doesn't change), especially for important and small files like Markdown notes.

I would also recommend not installing anything or otherwise using the system, and instead trying to recover from a live boot rescue disk or USB stick. This minimizes the risk of losing the file. Even if TRIM didn't run and delete the data, you could accidentally overwrite parts of it while using your system (for example while installing software or when using your browser). EDIT: When I think about it, I am actually not sure if this is true for SSDs. This is just a habit of mine from old magnetic drives. I think the used data will not be overwritten until TRIM runs, right?

[–] chasteinsect@programming.dev 6 points 1 week ago (1 children)

AFAIK the blocks get marked as "free space" and can potentially be overwritten by new stuff; TRIM guarantees those blocks will be wiped at the hardware level. I thought about booting from a live USB but eventually decided to try it normally.

It was interesting to find out that TRIM runs once a week for me; I thought it ran almost continuously rather than periodically. Is this common, does anyone know?

[–] thingsiplay@beehaw.org 4 points 1 week ago (1 children)

It was interesting to find out that TRIM runs once a week for me; I thought it ran almost continuously rather than periodically. Is this common, does anyone know?

Oh, this is common as far as I know. You don't want to run TRIM too often, because excessive delete/rewrite cycles will wear down your drive faster. There is no perfect setup and it might be different for specialized use cases, but a weekly TRIM is absolutely normal. On some occasions, after writing and deleting lots and lots of gigabytes, I start the process manually with sudo fstrim -va too (it figures out all SSDs that can be trimmed). This is something you should not need to do; just make sure you have plenty of space left (my personal limit is 25% free space).

For me it's weekly too:

$ cat /etc/systemd/system/timers.target.wants/fstrim.timer
[Unit]
Description=Discard unused filesystem blocks once a week
Documentation=man:fstrim
ConditionVirtualization=!container
ConditionPathExists=!/etc/initrd-release

[Timer]
OnCalendar=weekly
AccuracySec=1h
Persistent=true
RandomizedDelaySec=100min

[Install]
WantedBy=timers.target
[–] chasteinsect@programming.dev 3 points 1 week ago

Ah I see thanks for the info. I was not even aware you can manually run it but I suppose it makes sense.

[–] dellish@lemmy.world 5 points 1 week ago (2 children)

Just wondering, is there an inverse of this? Finding data on disk flagged as OK to overwrite and setting all its bits to 0?

[–] MonkderVierte@lemmy.zip 4 points 1 week ago* (last edited 1 week ago) (1 children)

Copy disk as image, <hex-tool> image |grep 0000 ? But what for?

[–] meekah@lemmy.world 2 points 1 week ago (1 children)

making sure deleted data is actually deleted, I guess

[–] dellish@lemmy.world 5 points 1 week ago

That's basically it, just for paranoid information security. I figure if it's so easy to bring a deleted file back, it should also be easy to ensure deleted data is actually destroyed.

[–] EuroNutellaMan@lemmy.world 1 points 1 week ago

Yeah just erase files
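A sketch of one conventional tool for this, coreutils' shred (with the SSD caveat that applies to this whole thread):

```shell
# shred overwrites a file's blocks in place before unlinking it.
# NOTE: reliable on spinning disks, but NOT guaranteed on SSDs - wear leveling
# may redirect the overwrites to different physical cells; on SSDs, TRIM
# (fstrim / blkdiscard) is what actually tells the drive to clear old blocks.
f=$(mktemp)
echo "secret note" > "$f"
shred -u -n 3 "$f"          # 3 overwrite passes, then remove the file
ls "$f" 2>/dev/null || echo "gone"
```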

[–] sbeak@sopuli.xyz 1 points 1 week ago

Wow, that’s very cool :0

[–] Matriks404@lemmy.world -3 points 1 week ago (3 children)

I don't even want to know what can be important in a markdown file.

[–] Alaknar@sopuli.xyz 5 points 1 week ago

The notes...?

[–] chasteinsect@programming.dev 4 points 1 week ago* (last edited 1 week ago)

All of my notes are stored in markdown files (Obsidian), don't use any other apps. Syncthing to sync between phone and PC.