FooBarrington

joined 2 years ago
[–] FooBarrington@lemmy.world 3 points 2 days ago

(pssht, don't tell them)

[–] FooBarrington@lemmy.world 8 points 2 days ago (4 children)

If he didn't earn his place in Elysium, I'll kill everyone in Valhalla and then myself

[–] FooBarrington@lemmy.world 3 points 3 days ago

I don't believe in such trite charlatanry

"But I've learned about imaginary numbers in mathematics degree" well, that and $20 dollars will buy you an egg

[–] FooBarrington@lemmy.world 6 points 3 days ago (7 children)

5g > 2g if g is positive

[–] FooBarrington@lemmy.world 7 points 4 days ago

I mean, it's the word I've heard used for this all my life, and it's much shorter than "installing software without an app store".

It's not bad journalism to use terms of art.

[–] FooBarrington@lemmy.world 46 points 5 days ago (1 children)

The answer is disappointingly simple: emotional satisfaction.

For decades, these people have been told that they are incredibly generous towards their allies, and that they get nothing in return. That their allies are abusing their relationships. Of course this is false, but they've been told so every day.

Now they get to abuse their "abusers" right back.

[–] FooBarrington@lemmy.world 1 points 6 days ago (1 children)

My god.

There are many parameters that you set before training a new model, one of which (simplified) is the size of the model, or (roughly) the number of neurons. There isn't any natural lower or upper bound for the size, instead you choose it based on the hardware you want to run the model on.
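
To make that concrete, here's a toy sketch (hypothetical PyTorch-style code, nothing like OpenAI's actual setup) where the model size is just a number you pass in; the only thing limiting it is the hardware you want to run it on:

```python
import torch.nn as nn

# Toy illustration: "model size" is just a hyperparameter you pick before training.
# Nothing in the architecture caps it -- you choose it to fit your hardware.
def build_model(hidden_dim: int, num_layers: int) -> nn.Module:
    layers = []
    in_dim = 512  # input feature size (arbitrary for this sketch)
    for _ in range(num_layers):
        layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
        in_dim = hidden_dim
    layers.append(nn.Linear(in_dim, 10))  # arbitrary output size
    return nn.Sequential(*layers)

small = build_model(hidden_dim=256, num_layers=4)    # fits on a laptop GPU
big   = build_model(hidden_dim=8192, num_layers=64)  # needs data-center hardware
```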

Now the promise from OpenAI (in their many papers, press releases, and ...) was that we'd be able to reach AGI by scaling. Part of the reason why Microsoft invested so much money into OpenAI was their promise of far greater capabilities for the models, given enough hardware. Microsoft wanted to build a moat.

Now, through DeepSeek, you can scale even further with that same hardware. If Microsoft really thought OpenAI could reach ChatGPT 5, 6 or whatever through scaling, they'd keep the GPUs for themselves to widen their moat.

But they're not doing that; instead they're scaling back their investments, even though more advanced models will most likely still use more hardware on average. Don't forget that there are many players in this field who keep pushing the boundaries. If ChatGPT 4.5 is any indication, they'll have to scale up massively to keep any advantage over the market. But they're not doing that.

[–] FooBarrington@lemmy.world 1 points 1 week ago (3 children)

> But really the "game" is the model. Throwing more hardware at the same model is like throwing more hardware at the same game.

No, it's not! AI models are supposed to scale. When you throw more hardware at them, they are supposed to develop new abilities. A game doesn't get a new level because you're increasing the resolution.

At this point, you either have a fundamental misunderstanding of AI models, or you're trolling.

[–] FooBarrington@lemmy.world 1 points 1 week ago* (last edited 1 week ago) (5 children)

I'm supposed to be able to take a model architecture from today, scale it up 100x and get an improvement. I can't make the settings in Crysis 100x higher than they can go.

Games always have a limit, AI is supposed to get better with scale. Which part do you not understand?

[–] FooBarrington@lemmy.world 2 points 1 week ago (7 children)

It's still not a valid comparison. We're not talking about diminished returns, we're talking about an actual ceiling. There are only so many options implemented in games - once they're maxed out, you can't go higher.

That's not the situation we have with AI, it's supposed to scale indefinitely.
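
That's the whole point of scaling laws: predicted loss is a power law in model size and data, so it keeps improving as you scale up, with diminishing returns but no hard cap. A rough sketch (Chinchilla-style functional form; the constants here are illustrative placeholders, not the actual fitted values):

```python
# Why "scale" has no hard ceiling, only diminishing returns:
# loss follows a power law in parameters N and tokens D.
def predicted_loss(n_params: float, n_tokens: float,
                   E=1.7, A=400.0, B=410.0, alpha=0.34, beta=0.28) -> float:
    return E + A / n_params**alpha + B / n_tokens**beta

for n in (1e9, 1e10, 1e11, 1e12):  # 1B -> 1T parameters
    print(f"{n:.0e} params: loss ~ {predicted_loss(n, 1e13):.3f}")
```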

[–] FooBarrington@lemmy.world 4 points 1 week ago

Bobs and vageen?
