this post was submitted on 07 Sep 2025
278 points (95.7% liked)

Technology

[–] Decq@lemmy.world 71 points 1 week ago (3 children)

I honestly don't get why anyone would have bought an Intel in the last 3-4 years. AMD was just better on literally every metric.

[–] stealth_cookies@lemmy.ca 17 points 1 week ago (1 children)

If your use case benefited from Quicksync then Intel was a clear choice.

[–] PalmTreeIsBestTree@lemmy.world 5 points 1 week ago (1 children)

Older Intel CPUs are the only ones that can play 4K Blu-rays in the player itself, rather than just ripping them to a drive. Very niche use case, but it's one I can think of.

[–] notthebees@reddthat.com 9 points 1 week ago (1 children)

They can't even do that anymore. SGX had a bunch of vulnerabilities, and as a result that capability has been disabled.

https://sgx.fail/ SGX.Fail

[–] Quatlicopatlix@feddit.org 14 points 1 week ago (1 children)

Idle power is the only thing they're good at, but for a home server a used older CPU is good enough.

[–] Decq@lemmy.world 10 points 1 week ago (1 children)

Was that even true for comparable CPUs? I feel this was only true for their N100s etc.

[–] Quatlicopatlix@feddit.org 8 points 1 week ago (17 children)

Nah, all the AM4 CPUs have abysmal idle power. AM5 got a little better as far as I know, but the Infinity Fabric was a nightmare for idle power.
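For anyone wanting to compare idle draw themselves, here is a rough sketch of measuring average package power from the kernel's RAPL energy counters on Linux. The `intel-rapl` sysfs path is the common layout (recent AMD kernels expose the same interface), but paths and availability vary by machine, so treat this as an assumption-laden sketch rather than a portable tool:

```python
import time

RAPL = "/sys/class/powercap/intel-rapl:0"

def avg_power_watts(e0_uj: int, e1_uj: int, dt_s: float,
                    max_range_uj: int) -> float:
    """Average power over an interval from two cumulative energy
    readings, handling wraparound of the counter."""
    delta = e1_uj - e0_uj
    if delta < 0:                      # counter wrapped past its max
        delta += max_range_uj
    return delta / 1_000_000 / dt_s    # microjoules -> joules -> watts

def sample(interval_s: float = 5.0) -> float:
    with open(f"{RAPL}/max_energy_range_uj") as f:
        max_range = int(f.read())
    with open(f"{RAPL}/energy_uj") as f:
        e0 = int(f.read())
    time.sleep(interval_s)
    with open(f"{RAPL}/energy_uj") as f:
        e1 = int(f.read())
    return avg_power_watts(e0, e1, interval_s, max_range)

if __name__ == "__main__":
    try:
        print(f"package power: {sample():.1f} W")
    except OSError:
        print("RAPL counters not available on this machine")
```

Sampling at the desktop with nothing running is roughly what the "idle power" comparisons in this thread mean.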

[–] Decq@lemmy.world 13 points 1 week ago

Well I concede, I guess there was one metric they were better at. Doing absolutely nothing.

[–] KiwiTB@lemmy.world 41 points 1 week ago (4 children)

Looks like they didn't have adequate cooling for their CPU and killed it... then replaced it without correcting the cooling. If your CPU hits three digits, it's not cooled properly.

[–] sugar_in_your_tea@sh.itjust.works 45 points 1 week ago* (last edited 1 week ago) (1 children)

If your CPU hits 3 digits, then throttling isn't working properly, because it should kick in before it hits that point.

[–] frongt@lemmy.zip 26 points 1 week ago (1 children)

The article (or one of the linked ones) says the max design temperature is 105°C, so it doesn't throttle until it hits that.

Which makes me think it should be able to sustain operating at that temperature. If not, Intel fucked up by speccing them too high.

[–] sugar_in_your_tea@sh.itjust.works 13 points 1 week ago (3 children)

I'd expect it to still throttle before getting to 105°C, and then adjust to maintain a temp under 105°C. If it goes above 105°C, it should halt.

[–] frongt@lemmy.zip 16 points 1 week ago (1 children)

Then you misunderstand the spec. That's the max operating temperature, not the thermal protection limit. It throttles at 105 so it doesn't hit the limit at 115 or whatever and shut down. I can't find a detailed spec sheet that might give an exact figure.

[–] sugar_in_your_tea@sh.itjust.works 4 points 1 week ago (6 children)

The chip needs to account for thermal runaway, so I'd expect it to throttle before reaching the max operating temperature and then adjust so it stays within that range. So it should downclock a little around 90°C or whatever, then increase as needed as it approaches 105°C or whatever the max operating temp is. If it goes above that temp, it should aggressively throttle or halt, depending on how far above it went and how quickly.
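The graduated behaviour described here can be sketched as a toy control policy: full boost below a soft limit, linear backoff up to the max operating temperature, halt beyond a critical trip point. All temperatures and frequencies below are illustrative assumptions, not values from any Intel or AMD datasheet:

```python
F_MAX_MHZ = 5400   # assumed boost clock
F_MIN_MHZ = 800    # assumed minimum throttled clock
T_SOFT = 90.0      # start backing off here
T_MAX = 105.0      # max operating temperature
T_CRIT = 115.0     # assumed thermal-trip point

def target_freq_mhz(temp_c: float) -> float:
    """Toy throttle policy: map a die temperature to a target clock."""
    if temp_c >= T_CRIT:
        return 0.0             # thermal trip: halt the core
    if temp_c <= T_SOFT:
        return F_MAX_MHZ       # plenty of headroom, full boost
    # Linear backoff between the soft limit and T_MAX; hold the
    # floor frequency in the band between T_MAX and T_CRIT.
    frac = min((temp_c - T_SOFT) / (T_MAX - T_SOFT), 1.0)
    return F_MAX_MHZ - frac * (F_MAX_MHZ - F_MIN_MHZ)
```

Real CPUs implement this in firmware with finer-grained states (and voltage scaling alongside frequency), but the shape of the curve is the point being argued above.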

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 8 points 1 week ago (1 children)

Why? It’s designed to run up to 105°C.

I think it was when AMD's 7000-series CPUs were running at 95°C and everyone freaked out that AMD came out and said the CPUs are built to handle that load 24/7, 365, for years on end.

And it’s not like this is new to Intel. Intel laptop CPUs have been doing this for a decade now.

[–] sugar_in_your_tea@sh.itjust.works 4 points 1 week ago (1 children)

CPUs should throttle as they approach the limit to prevent thermal runaway. As it gets closer to that limit, it should adjust the frequency in smaller increments until it arrives at that temp to keep the changes to temps small.

[–] fuckwit_mcbumcrumble@lemmy.dbzer0.com 5 points 1 week ago (1 children)

105°C is the max operating temperature. It's not going to run away the second it hits 106.

Your CPU starts throttling at 104°C so it almost never sits at 105°C for long. If it can't maintain clocks, it drops them until 104°C can mostly be maintained.

If you have an improperly mounted cooler, you could very well get to 105°C incredibly quickly, and 115°C or whatever the halt temp is shortly after.

[–] mrvictory1@lemmy.world 4 points 1 week ago

My Intel Mac's CPU (i5-5250U) throttles to maintain 105°C.

[–] nuko147@lemmy.world 12 points 1 week ago (1 children)

That's not the case. It's 100% true for new CPUs, but for old ones too.

My father's old CPU cooler didn't make good contact; it got loose in one corner somehow, and the system would throttle (fan at 100% making noise, PC running slow). After I fixed it on one of my visits, the CPU worked fine for years.

The system throttles or even shuts down before any thermal damage occurs (at least when temperatures rise normally).

[–] lemming741@lemmy.world 6 points 1 week ago

Pretty much anything with a heat spreader should be impossible to accidentally kill. Bare die? May dog have mercy on your soul.

[–] victorz@lemmy.world 3 points 1 week ago* (last edited 1 week ago) (4 children)

What if it hits around 90°C during Vulkan shader processing? 😅 Otherwise like 42–52 idle. How's that? I'm wondering if my cooling is sufficient.

This is an AMD 9950X3D + 9070 XT setup, for reference.

Any way to do Vulkan shader processing on the GPU perhaps, to speed it up?

[–] Glitchvid@lemmy.world 21 points 1 week ago

It's fine. Modern CPUs boost until they hit amperage, voltage, or thermal constraints; assuming the motherboard isn't misbehaving, the upper limits for all of those are safe to sit at perpetually.
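If you want to sanity-check those temperatures yourself on Linux (say, while the shader step runs), the hwmon sysfs interface exposes the same sensors `lm-sensors` reads. A minimal sketch; label names such as `Tctl` (AMD CPU) or `edge` (GPU) vary by driver, so the exact keys you see are machine-dependent:

```python
from pathlib import Path

def read_temps() -> dict[str, float]:
    """Return {chip/label: degrees Celsius} for every hwmon temp sensor."""
    temps = {}
    for hwmon in Path("/sys/class/hwmon").glob("hwmon*"):
        name_file = hwmon / "name"
        if not name_file.exists():
            continue
        chip = name_file.read_text().strip()
        for t in hwmon.glob("temp*_input"):
            label_file = hwmon / t.name.replace("_input", "_label")
            label = (label_file.read_text().strip()
                     if label_file.exists() else t.name)
            # sysfs reports millidegrees Celsius
            temps[f"{chip}/{label}"] = int(t.read_text()) / 1000.0
    return temps

if __name__ == "__main__":
    for sensor, celsius in sorted(read_temps().items()):
        print(f"{sensor:30s} {celsius:5.1f} °C")
```

Watching the CPU sensor while shaders compile tells you quickly whether you're boosting normally or pinned at the thermal limit.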

[–] missphant@lemmy.blahaj.zone 3 points 1 week ago* (last edited 1 week ago) (4 children)

If you're talking about the Steam feature you can safely turn it off, any modern hardware running mesa radv (the default AMD vulkan driver in most distros) should be sufficient to process shaders in real-time thanks to ACO.

[–] Knossos@lemmy.world 35 points 1 week ago (1 children)

I built a new PC recently. All I needed to see were the benchmarks over the last 5 years. There's currently no contest.

[–] the16bitgamer@programming.dev 3 points 1 week ago

I went from Ryzen 1000 to Intel 12000, since I need single-threaded performance above all else (CAD). Plus it was a steal of a deal.

If Intel ever sorts out their drivers, or it gets cheap enough, I might go for a 14000 chip, but no further.

[–] Vanilla_PuddinFudge@infosec.pub 18 points 1 week ago (4 children)

"Do you need to transcode video? No? Then leave Intel the fuck alone."

Been my rule for 20 years, and it's worked well so far.

[–] muusemuuse@sh.itjust.works 21 points 1 week ago (3 children)

It’s odd: their GPUs are doing fine, in a market they're young in, but their well-established CPU business is cratering.

Business majors suck.

[–] KingRandomGuy@lemmy.world 11 points 1 week ago

Their GPU situation is weird. The gaming GPUs are good value, but I can't imagine Intel makes much money from them due to the relatively low volume yet relatively large die size compared to competitors (the B580 has a die nearly the size of a 4070's despite competing with the 4060). Plus they don't have a major foothold in the professional or compute markets.

I do hope they keep pushing in this area still, since some serious competition for NVIDIA would be great.

[–] TheGrandNagus@lemmy.world 7 points 1 week ago (1 children)

Sure, if by "doing fine" you mean looking alright in benchmarks while having zero supply, because they don't make money selling them and thus don't want to produce them in any significant quantity.

[–] kreskin@lemmy.world 4 points 1 week ago

they always did, even back in college.

[–] acosmichippo@lemmy.world 4 points 1 week ago

Yeah, Quicksync is the only reason I put an Intel in my NAS.

[–] Fizz@lemmy.nz 13 points 1 week ago (6 children)

I'd probably just warranty the CPU and assume it was a defect instead of blaming the entire company.

But yeah, AMD is the better choice for everything atm except power-efficient x86 laptop chips.

[–] 3dcadmin@lemmy.relayeasy.com 12 points 1 week ago (1 children)

It was OK until he said the AMD chip consumed more power. It's an X3D chip; that's pretty much a given. If he'd gone for a non-X3D chip he'd have saved quite a bit of power, especially at idle. Plus he seems to use an AMD chip like an Intel chip, with little or no idea how to tweak its power usage down.

[–] xthexder@l.sw0.com 4 points 1 week ago (2 children)

I've got a 9700X and it absolutely rips at only 65W


Interesting, so it's not only their recent-ish laptop CPUs (either 12th or 13th gen and up, iirc) that die under normal load.

[–] zr0@lemmy.dbzer0.com 9 points 1 week ago

I knew Michael Stapelberg from other projects, but I just realized he is the author of the i3 Window Manager. Damn!

[–] SapphironZA@sh.itjust.works 6 points 1 week ago (3 children)

Just out of interest: why did you buy Intel in the first place? I don't know of many use cases where Intel is the superior option.

[–] vikingtons@lemmy.world 5 points 1 week ago

I'd never heard of Arrow Lake dying like Raptor Lake has been. Wild.

[–] ArmchairAce1944@discuss.online 5 points 1 week ago (2 children)

The computer I bought should last me about 10 years. I spent a fuckload of money on it. The next one will have to be built with as little Google and privacy-violating shit as possible.

And I am certain AMD will make better stuff by then.

[–] callouscomic@lemmy.zip 4 points 1 week ago* (last edited 1 week ago) (3 children)

Somehow I figured out Intel was shit early on. Been AMD for like 15-20 years. I think it was a combo of childhood shit computers running Intel, and a lot of advice pointing out what garbage it was and not worth the cost for PC builds.

Similar reasons I hate Hitachi and Western Digital hard drives. They always fucking fail.

[–] acosmichippo@lemmy.world 6 points 1 week ago (1 children)

15-20 years is silly. Intel was the clear leader for a long time before Ryzen in 2017, and arguably a few years after that too.

[–] sugar_in_your_tea@sh.itjust.works 5 points 1 week ago* (last edited 1 week ago)

I was in team AMD in the 2000s for two reasons: price and competition to Intel. Intel had a massive anti-trust loss to AMD around that time, and I wanted AMD to succeed. I stuck with them until Zen was actually competitive and stayed with them ever since because they actually had better products. Intel was the king in both performance and power efficiency until that Zen release, so I really don't know where that advice would've come from.

As for Hitachi and Western Digital, WTF? Hitachi hasn't been a thing for well over a decade since they sold their HDD business to WD, and WD is generally as reliable or better than its competition. It sounds like you were impacted by a couple failures (probably older drives?) and made a decision based on that. If you look at Backblaze stats, there's not a huge difference between manufacturers, just a few models that do way worse than the rest.

[–] Passerby6497@lemmy.world 3 points 1 week ago (1 children)

Similar reasons I hate Hitachi and Western Digital hard drives. They always fucking fail.

You misspelled Seagate.

My WD drives have been great, but my Seagates failed multiple times, causing data loss because I wasn't properly protecting myself.

[–] frongt@lemmy.zip 6 points 1 week ago (2 children)

All manufacturers have bad batches. Use diversity and keep backups.

Seagate has more than bad batches. When every single one of their 1TB-per-platter Barracuda drives has high failure rates, that's a design/long-term production issue.

[–] Passerby6497@lemmy.world 3 points 1 week ago (1 children)

How likely is it that I got 4 or 5 bad batches over the space of as many years?

RAID and offline backups these days; I eventually learned my lessons. One of which is: stay away from Seagate.

[–] frongt@lemmy.zip 3 points 1 week ago

Within the realm of possibility. Especially if you treat them harshly (lots of start-stop, and low airflow and high temps). Backblaze collects and publishes data, and the AFR for Seagate is slightly higher than other manufacturers, but not what I'd consider dangerous.
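For context, Backblaze's per-model numbers are annualized failure rates (AFR): failures per drive-day, scaled to a year. A minimal version of that calculation, with made-up counts for illustration:

```python
def afr_percent(failures: int, drive_days: int) -> float:
    """Annualized failure rate, as a percentage."""
    return failures / drive_days * 365 * 100

# e.g. 30 failures across 10,000 drives each running a full year
# works out to roughly a 0.3% AFR:
rate = afr_percent(30, 10_000 * 365)
print(f"AFR: {rate:.2f}%")
```

Using drive-days rather than drive counts is what makes fleets with different deployment dates comparable, which is why a handful of personal failures is hard to read anything into.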
