this post was submitted on 22 May 2025
478 points (97.0% liked)

memes

top 50 comments
[–] Alexstarfire@lemmy.world 28 points 23 hours ago* (last edited 23 hours ago) (5 children)

Isn't VRAM usually bigger than RAM? Those pics should be switched.

EDIT: Oh, I took VRAM to be virtual RAM, not video RAM. It makes sense for video RAM.

[–] serpineslair@lemmy.world 24 points 23 hours ago

Mine certainly isn't. 6GB VRAM, 16GB RAM.

[–] cm0002@lemmy.world 17 points 23 hours ago* (last edited 23 hours ago) (2 children)

It depends on your definition of "usually". High-end GPUs for data centers, AI, workstations, or "enthusiasts"? Yeah, those start at around 16GB.

GPUs for us plebs, no.

[–] BombOmOm@lemmy.world 14 points 22 hours ago* (last edited 22 hours ago) (2 children)

It's also fairly cheap to buy 32+ GB of RAM, lots of choices for under $80. Meanwhile, I'm not even sure how you find a video card with 32GB of VRAM (not that you really need this much, 12GB and 16GB are pretty solid for a video card nowadays).

[–] 30p87@feddit.org 8 points 22 hours ago* (last edited 12 hours ago) (1 children)

Afaik for consumers only the 5090 has 32GB VRAM. So you're correct, practically impossible to find. And even if you find one, it's prone to spontaneous combustion.

For servers, it tops out at 288GB currently, with the AMD MI355X.

[–] Anivia@feddit.org 3 points 10 hours ago

Afaik for consumers only the 5090 has 32GB VRAM

Only if you don't count Apple Silicon with its shared RAM/VRAM. Ironically, a Mac Mini / Studio is currently the cheapest way to get a GPU with lots of VRAM for AI.

[–] 30p87@feddit.org 6 points 22 hours ago (1 children)

Tbf, we should be starting at 16GB for gaming GPUs too, especially at those prices. But ... Nvidia.

But yeah, modern HPC processors have at least 48GB or so, and the max is the AMD MI355X with 288GB of VRAM afaik. Which is actually less than my server's RAM, ha! But also probably like a thousand times faster, considering my RAM runs at 1600 MT/s.

[–] zurohki@aussie.zone 6 points 20 hours ago

I'm seeing games today regularly hitting 11 GB, and that's without raytracing or frame generation which require more VRAM.

The new 8GB GPU Nvidia just launched is a trap. It exists to trick people into buying a GPU that they'll need to upgrade next year.

[–] Kolanaki@pawb.social 10 points 23 hours ago* (last edited 23 hours ago) (1 children)

8GB of VRAM is still pretty good, but 8GB of RAM is getting pretty low these days. 16GB of RAM and 6-8GB of VRAM is pretty common, and even that might go up relatively soon.

[–] zurohki@aussie.zone 6 points 20 hours ago

If you have an 8GB GPU that's a few years old, it's probably doing okay-ish. It probably doesn't have the performance to really suffer from VRAM limits and you don't game with things like raytracing or ultra detail settings turned on because the GPU isn't fast enough for those things anyway.

My Vega 64 had 8GB VRAM and that was fine.

If you buy one of the new GPUs with 8GB though, the VRAM is a huge problem. You have the GPU power to have all the features turned on, but you're going to see real performance crippled because it overflows VRAM.

Longevity is the other issue - when games released in 2025 run like ass on your 8GB GPU from 2017, you won't be surprised. Bad performance from an 8GB GPU that released in 2025 for $500, that's a problem.

[–] FlexibleToast@lemmy.world 9 points 18 hours ago* (last edited 18 hours ago) (1 children)

Creating your swap as 2x your RAM is outdated advice. The modern rule of thumb is roughly 2x your RAM up to 4GB, then 1x up to 8GB, and anything over 8GB just use 4GB of swap, because you probably have enough RAM. Or some modern systems like Fedora swap to zram instead, which is a compressed block device kept in RAM.
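A minimal sketch of that sizing heuristic in Python, taking the thresholds described above as assumptions rather than any distro's official recommendation:

```python
# Sketch of the swap-sizing rule of thumb described above.
# The exact thresholds are assumptions, not an official recommendation.
def recommended_swap_gb(ram_gb: float) -> float:
    if ram_gb <= 4:
        return ram_gb * 2   # small systems: 2x RAM
    if ram_gb <= 8:
        return ram_gb       # mid-size systems: 1x RAM
    return 4                # plenty of RAM: a flat 4GB is enough

for ram in (2, 4, 8, 16, 32):
    print(f"{ram}GB RAM -> {recommended_swap_gb(ram)}GB swap")
```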

[–] wax@feddit.nu 4 points 17 hours ago (1 children)

I think that recommendation came about partly because of hibernation, where the RAM is dumped to disk before powering off. Today, I'd probably use a swapfile instead.


Normally you don't even have that much virtual RAM. It's at most twice your system RAM, but honestly past 8GB you're going to want to start closing out of stuff.

[–] hperrin@lemmy.ca 26 points 20 hours ago (1 children)
[–] SpaceNoodle@lemmy.world 12 points 19 hours ago

What it feels like moving from x86 to ARM

[–] PattyMcB@lemmy.world 25 points 20 hours ago (1 children)

No one will ever need more than 640K of RAM

[–] Branch_Ranch@lemmy.world 9 points 19 hours ago (1 children)
[–] PattyMcB@lemmy.world 3 points 7 hours ago

Achshully, you're right

[–] kittenzrulz123@lemmy.blahaj.zone 25 points 8 hours ago (4 children)

8GB of system RAM is enough for a low-end system (especially with Linux), and 8GB of VRAM is enough for 1080p gaming.

[–] Dagnet@lemmy.world 21 points 17 hours ago (1 children)

Still remember my first 500MB drive, thought I would never manage to fill it up

[–] Uli@sopuli.xyz 11 points 16 hours ago (1 children)

I remember being thrilled to move from floppies to a 16MB flash drive for my school assignments, even if I did have to constantly download and reinstall the USB Mass Storage drivers for the Windows 98 SP2 computers in the library, which reset every night. And the transfer speed was SLOW.

The fact that you can get a terabyte flash drive now, which can hold 62,500 of my school assignment drives, is mind blowing to me.
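Quick sanity check of that figure, treating the units as decimal (1TB = 10^12 bytes, 16MB = 16 × 10^6 bytes):

```python
# How many 16MB flash drives fit in a 1TB drive (decimal units).
terabyte = 1_000_000_000_000   # 1 TB in bytes
old_drive = 16_000_000         # 16 MB in bytes
print(terabyte // old_drive)   # -> 62500
```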

[–] MDCCCLV@lemmy.ca 4 points 8 hours ago (2 children)

I always wanted the Zip drives with 250MB capacity.

[–] Smoolak@lemmy.world 15 points 22 hours ago (1 children)

The meme doesn't make sense. An SRAM cache of that size would be so slow that you would most likely save clock cycles by reading directly from RAM and not having a cache at all...

[–] cogman@lemmy.world 28 points 21 hours ago (1 children)

Slow? Not necessarily.

The main issue with that much memory is the data routing and the physical locality of the memory. Assuming you (somehow) could shrink down the distance from the cache to the registers and could have wide enough data/request lines, you can have data from such a cache in ~4 cycles (assuming L1 and a hit).

What slows down memory for L2 is the wider address space and slower residence checks. L3 gets a bit slower because of even wider address spaces, but also because it has to deal with concurrency issues since it's shared among cores. It also ends up being slower because it physically has to be further away from the cores due to its size.

If you ever look at a CPU die, you'll see that L1 caches are generally tiny and embedded right into the center of the processor. L2 tends to be bolted onto the sides of the physical cores. And L3 tends to be the largest amount of silicon real estate on a CPU package. This is all part of why fetch latency increases at each level, along with the fact that you have to check the closest levels first (an L3 hit, for example, means the CPU checked L1 and L2 and missed both, which takes time, so L3 access will always be at least the L1 + L2 times).
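A rough way to see how those stacked checks add up: average access time weights the cumulative check cost of each level by how often you actually hit there. A minimal sketch with made-up numbers (the latencies and hit rates below are assumptions for illustration, not measurements of any real CPU):

```python
# Toy average-memory-access-time model for a 3-level cache + DRAM.
# Latencies (cycles) and hit rates are illustrative assumptions only.
levels = [
    ("L1", 4, 0.90),     # (name, access cycles, hit rate at this level)
    ("L2", 12, 0.70),    # hit rate among accesses that missed L1
    ("L3", 40, 0.50),    # hit rate among accesses that missed L2
    ("DRAM", 200, 1.0),  # everything left goes to memory
]

avg = 0.0
p_reach = 1.0    # probability an access gets this far down the hierarchy
cumulative = 0   # you pay for every level you had to check on the way down
for name, cycles, hit_rate in levels:
    cumulative += cycles
    avg += p_reach * hit_rate * cumulative
    p_reach *= (1 - hit_rate)

print(f"average access time ~ {avg:.1f} cycles")
```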

[–] Smoolak@lemmy.world 5 points 20 hours ago

I agree. When evaluating cache access latency, it is important to consider the entire read path rather than just the intrinsic access time of a single SRAM cell. Much of the latency arises from all the supporting operations required for a functioning cache, such as tag lookups, address decoding, and bitline traversal. As you pointed out, implementing an 8 GB SRAM cache on-die using current manufacturing technology would be extremely impractical. The physical size would lead to substantial wire delays and increased complexity in the indexing and associativity circuits. As a result, the access latency of such a large on-chip cache could actually exceed that of off-chip DRAM, which would defeat the main purpose of having on-die caches in the first place.

[–] expatriado@lemmy.world 14 points 22 hours ago

that much cache could be detrimental to the speed of your CPU

[–] red_bull_of_juarez@lemmy.dbzer0.com 14 points 11 hours ago (5 children)

The first hard drive I got had 20MB and it was glorious.

[–] ouRKaoS@lemmy.today 6 points 10 hours ago

So I can boot up without a disk now?

[–] AnUnusualRelic@lemmy.world 6 points 9 hours ago (1 children)

The first one I used was 5MB. The OS on the machine (a CP/M version) didn't know how to handle it, so it was partitioned as lots and lots of floppies. Not very useful.

[–] Honytawk@feddit.nl 7 points 8 hours ago* (last edited 8 hours ago) (1 children)
[–] Semi_Hemi_Demigod@lemmy.world 14 points 21 hours ago (2 children)

The first computer I bought had eight megs of RAM.

[–] zurohki@aussie.zone 9 points 20 hours ago

Mine got upgraded to a full meg.

[–] SmoothLiquidation@lemmy.world 4 points 18 hours ago

I remember being thrilled with a 20 meg SCSI hard drive I got as a kid.

[–] Johanno@feddit.org 13 points 9 hours ago (4 children)

I always thought it would be funny running an OS from a USB stick.

Never would I have thought that storage the size of a stick would exceed the default configuration of a desktop PC.

2TB in one small NVMe drive?! Wtf. Amazing but also crazy.

[–] Korhaka@sopuli.xyz 8 points 8 hours ago

You should check out Linux live USBs from nearly 2 decades ago then.

[–] epicstove@lemmy.ca 5 points 9 hours ago

When my dad first saw an NVMe drive he had to triple-check what he was looking at, because in his old '70s computer brain there's no fucking way something so small and unmoving can hold so much data, read/write it so fast, and all at a relatively cheap price.

[–] Emi@ani.social 9 points 15 hours ago (1 children)
[–] nonentity@sh.itjust.works 8 points 16 hours ago

I remember when this applied to 8kB.

[–] ABetterTomorrow@lemm.ee 7 points 9 hours ago (5 children)

What does 1GB of cache look like?

[–] Honytawk@feddit.nl 11 points 8 hours ago (1 children)
[–] ABetterTomorrow@lemm.ee 5 points 8 hours ago

That’s a lot of cache! For a new battery :P

[–] pyre@lemmy.world 7 points 4 hours ago

thanks Nvidia, maybe 4GB VRAM is next

[–] nuko147@lemm.ee 6 points 8 hours ago

RAM on phones is ok, though.

[–] ouRKaoS@lemmy.today 5 points 10 hours ago (1 children)
[–] Strider@lemmy.world 5 points 12 hours ago* (last edited 12 hours ago) (1 children)

8GB of (internet) bandwidth.

[–] MDCCCLV@lemmy.ca 4 points 8 hours ago (1 children)
[–] thelosers5o@lemmy.world 4 points 10 hours ago (4 children)

Generally there’s a reverse relationship between size and speed. A 8gb cache would also be super slow thus defeating the purpose of the cache. If it were so easy every cpu would have a huge cache

[–] leaky_shower_thought@feddit.nl 3 points 19 hours ago

dying in 8GB unified RAM intensifies
