AdrianTheFrog

joined 2 years ago
[–] AdrianTheFrog@lemmy.world 4 points 3 hours ago

This works in Kerbal Space Program

[–] AdrianTheFrog@lemmy.world 2 points 5 hours ago (1 children)

The advantage of making your own engine is that you can specialize it for your specific gameplay.

[–] AdrianTheFrog@lemmy.world 1 points 5 hours ago (1 children)

For a texture to need double the resolution along each axis, you'd either need a monitor with twice the resolution, or you'd need to be half the distance away. Most of the time games will let you get closer to objects than their texture resolution looks good for, so 4K textures still give an improvement even on a 1080p monitor.

Texture resolution is chosen on a case-by-case basis: objects the player will usually be very close to get higher-resolution textures, and ones that are impossible to approach can use lower resolutions.
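As a back-of-the-envelope illustration of the distance/resolution tradeoff above, here's a rough texels-per-screen-pixel estimate. All the names and numbers here are my own for illustration, not from any actual engine:

```python
import math

def on_screen_texels_per_pixel(texture_res, object_size_m, distance_m,
                               screen_height_px=1080, vfov_deg=60.0):
    """Rough texels per screen pixel for a flat, screen-facing surface.
    A texture looks 'sharp enough' while this stays at or above 1."""
    # Height of the view frustum at the object's distance, in meters
    frustum_height_m = 2.0 * distance_m * math.tan(math.radians(vfov_deg) / 2.0)
    # How many screen pixels the object covers vertically
    object_px = screen_height_px * object_size_m / frustum_height_m
    # Texels available per covered screen pixel
    return texture_res / object_px
```

Halving the distance halves texels-per-pixel, and doubling the texture resolution doubles it, which is the tradeoff described above: to keep the same sharpness at half the distance you need twice the resolution per axis.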

The only costs to including higher-resolution textures are artist time and (usually) disk space. Artist time is outsourced to countries with cheap labor (Philippines, Indonesia, etc.) and apparently no one cares about disk space.

[–] AdrianTheFrog@lemmy.world 11 points 1 day ago

I got the one on the top (minus storage and ram) from a local university surplus store for $30 a few years ago. Lenovo brand but same form factor.

[–] AdrianTheFrog@lemmy.world 21 points 2 days ago* (last edited 1 day ago) (5 children)

JXL is poorly supported, but it does offer lossless encoding in a more flexible and much more efficient way than PNG does.

Basically, JXL could theoretically replace PNG, JPEG, and even EXR.

[–] AdrianTheFrog@lemmy.world 3 points 2 days ago

I think it's just that the iPhone's processing gives photos a certain look that people associate with AI, possibly for incorrect reasons.

[–] AdrianTheFrog@lemmy.world 1 points 6 days ago* (last edited 6 days ago)

Hmm, I just tried editing a systemd service with Kate and it did actually give me an authentication popup when I tried to save it.

Although then the prompt expired and now it does nothing when I try to save it. Restarted Kate and now it works again...

I haven't tried that before

When I try to go into the sudoers.d folder, though, it just says I can't, and the same thing happens when I try to open the sudoers file in Kate. If I try to copy and paste a systemd service in Dolphin, it just says I don't have permission and doesn't give a prompt.

lol if I open it with nano through sudo it says 'sudoers is meant to be read only'

[–] AdrianTheFrog@lemmy.world 1 points 6 days ago

Yeah, when I was on xfce on Arch I remember going into some places in the file manager where it wouldn't let me edit files etc without running it from the terminal through sudo.

[–] AdrianTheFrog@lemmy.world 12 points 1 week ago* (last edited 1 week ago) (9 children)

Is there a technical reason that Linux apps can't/don't just pop up an authentication prompt asking for more privileges, like Windows apps can? Why does nano just say the file is unwritable instead of letting me escalate privileges?

[–] AdrianTheFrog@lemmy.world 1 points 1 week ago

It says on that page that SHaRC requires raytracing-capable hardware. I guess they could be modifying it to use their own software raytracing implementation. In any case, it's the exact same math for either hardware or software raytracing; hardware is just a bit faster. Unless you do what Lumen did and use a voxel scene for software raytracing.
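For anyone curious what a radiance cache even does: here's a toy sketch of the caching idea, quantizing hit positions into voxel-sized hash keys and averaging the radiance samples that land in each one. SHaRC itself is NVIDIA's GPU data structure; none of these names or numbers come from it, this is just the concept:

```python
# Toy hash-based radiance cache: samples landing in the same voxel-sized
# cell share one running average instead of each tracing fresh rays.

def voxel_key(pos, cell_size=0.5):
    """Quantize a world-space position into a hashable cell key."""
    return tuple(int(c // cell_size) for c in pos)

class RadianceCache:
    def __init__(self):
        self.sum = {}
        self.count = {}

    def add_sample(self, pos, radiance):
        k = voxel_key(pos)
        self.sum[k] = self.sum.get(k, 0.0) + radiance
        self.count[k] = self.count.get(k, 0) + 1

    def lookup(self, pos):
        k = voxel_key(pos)
        if k not in self.count:
            return None  # cache miss: a real renderer would trace rays here
        return self.sum[k] / self.count[k]

cache = RadianceCache()
cache.add_sample((1.1, 0.2, 0.3), 2.0)
cache.add_sample((1.2, 0.3, 0.4), 4.0)  # lands in the same 0.5 m cell
print(cache.lookup((1.3, 0.4, 0.2)))    # 3.0, averaged from both samples
```

The point is that the cache math is identical whether the samples were produced by hardware or software rays, which is why porting it to a software raytracer is plausible.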

[–] AdrianTheFrog@lemmy.world 8 points 1 week ago* (last edited 1 week ago)

Yeah, that's just rasterized shadow mapping. It's very common: a lot of old games use it, as do essentially all modern ones. It's used in basically any non-raytraced game with dynamic shadows. (I think the only other way to do it is directly projecting the geometry, which was only done by a few very old games and can only cast shadows onto a single flat surface.)

The idea is that you render the depth of the scene from the perspective of the light source. Then, for each pixel on the screen, to check if it's in shadow, you find its position in that depth texture. If something else is closer to the light at that position, the pixel is in shadow; otherwise it isn't. The result is filtered to make it smoother. The downsides: it can't support shadows of variable softness without extra hacks that don't work in all cases (and in reality basically every shadow has variable softness), sharp shadows require rendering that depth map at a very high resolution, rendering a whole depth map is expensive and renders unseen pixels, and it doesn't scale that well to low resolutions (like if you wanted 100 very distant shadow-casting lights).
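The depth comparison above can be sketched in a few lines. This is plain Python with a made-up toy scene; a real renderer does this per pixel on the GPU with a hardware depth texture:

```python
def in_shadow(depth_map, light_uv, pixel_depth_from_light, bias=0.005):
    """depth_map: 2D list of closest-surface depths as seen from the light.
    light_uv: the pixel's (u, v) position in the light's view, each in [0, 1).
    pixel_depth_from_light: this pixel's distance from the light."""
    h, w = len(depth_map), len(depth_map[0])
    x = min(int(light_uv[0] * w), w - 1)
    y = min(int(light_uv[1] * h), h - 1)
    closest = depth_map[y][x]
    # Something nearer to the light occludes this pixel -> in shadow.
    # The small bias avoids "shadow acne" from depth precision errors.
    return pixel_depth_from_light - bias > closest

# 2x2 depth map: the left half has an occluder at depth 1.0,
# the right half sees nothing until depth 10.0.
depth_map = [[1.0, 10.0],
             [1.0, 10.0]]

print(in_shadow(depth_map, (0.25, 0.5), 5.0))  # True: occluder at 1.0 is closer
print(in_shadow(depth_map, (0.75, 0.5), 5.0))  # False: nothing in front
```

The filtering step mentioned above (PCF and friends) is just running this comparison for several nearby texels and averaging the results.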

Raytraced shadows are actually very elegant, since they operate on every screen pixel (so quality naturally increases as you get closer to any area of interest in the shadow) and naturally support varying shadow softness, at the cost of noise and maybe some extra rays. They still scale expensively with many light sources, but some modified stochastic methods look very good and allow far more shadow-casting lights than would ever have been possible with pure raster.
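The per-pixel shadow ray is conceptually just "shoot a ray from the shaded point toward the light and see if anything blocks it." A minimal sketch, with spheres standing in for geometry (a real renderer traverses a BVH of triangles, and the names here are my own):

```python
import math

def shadow_ray_blocked(point, light_pos, spheres):
    """spheres: list of (center, radius) tuples. Returns True if any sphere
    intersects the segment from point to light_pos."""
    delta = [l - p for p, l in zip(point, light_pos)]
    dist = math.sqrt(sum(d * d for d in delta))
    direction = [d / dist for d in delta]
    for center, radius in spheres:
        oc = [p - c for p, c in zip(point, center)]
        # Solve |oc + t*d|^2 = r^2 for t (quadratic in t, with |d| = 1)
        b = 2.0 * sum(o * d for o, d in zip(oc, direction))
        c = sum(o * o for o in oc) - radius * radius
        disc = b * b - 4.0 * c
        if disc < 0.0:
            continue  # ray misses this sphere entirely
        t = (-b - math.sqrt(disc)) / 2.0
        if 1e-4 < t < dist:  # hit strictly between the point and the light
            return True
    return False

# A unit sphere at the origin blocks a ray passing through it:
print(shadow_ray_blocked((-5, 0, 0), (5, 0, 0), [((0, 0, 0), 1.0)]))  # True
print(shadow_ray_blocked((-5, 3, 0), (5, 3, 0), [((0, 0, 0), 1.0)]))  # False
```

Soft shadows fall out naturally: instead of one ray to a point light, you shoot a few rays to random points on an area light and average the results, which is where the noise mentioned above comes from.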

You don't notice the lack of shadow casting lights much in games because the artists had to put in a lot of effort and modifications to make sure you wouldn't.

[–] AdrianTheFrog@lemmy.world 1 points 1 week ago

I heard the Source 2 editor has (relatively offline, think Blender viewport style) ray tracing as an option, even though no games with it support any sort of real-time RT. Just so artists can estimate what the light bake will look like without actually having to wait for it.

So what people are talking about there is lightmaps: essentially a whole extra texture on top of everything else that holds diffuse lighting information. It's "baked" in a lengthy raytracing process that can take seconds to hours to days depending on how fast the baking system is and how hard the level is to light. This puts the raytraced lighting information directly into a texture so it can be read in fractions of a millisecond, like any other texture. It's great for performance, but it can't be quickly previewed, can't show the influence of moving objects, and technically can't be applied to any surface with a roughness other than full. (So it works for most diffuse objects but basically no metallic objects; those usually use light probes and bent normals, and sometimes take lightmap information, although that isn't technically correct and can produce weird results in some cases.)
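To make the "read in fractions of a millisecond" point concrete: at runtime, lighting a lightmapped surface is just a filtered texture read at the surface's lightmap UV. A toy version (names and the 2x2 map are made up; GPUs do this filtering in hardware):

```python
def sample_lightmap(lightmap, u, v):
    """Bilinearly sample a baked 2D lightmap; u, v in [0, 1]."""
    h, w = len(lightmap), len(lightmap[0])
    x = u * (w - 1)
    y = v * (h - 1)
    x0, y0 = int(x), int(y)
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    fx, fy = x - x0, y - y0
    # Blend the four surrounding texels (bilinear filtering)
    top = lightmap[y0][x0] * (1 - fx) + lightmap[y0][x1] * fx
    bot = lightmap[y1][x0] * (1 - fx) + lightmap[y1][x1] * fx
    return top * (1 - fy) + bot * fy

# 2x2 baked lightmap: dark on the left, bright on the right.
baked = [[0.0, 1.0],
         [0.0, 1.0]]
print(sample_lightmap(baked, 0.5, 0.5))  # 0.5, halfway between dark and bright
```

All the expensive raytracing happened at bake time; this lookup is the entire runtime cost.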

The solution to lighting dynamic objects in a scene with lightmaps is a grid of pre-baked light probes. These give lighting to dynamic objects but don't receive it from them.
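A sketch of the probe-grid idea: a dynamic object samples baked lighting by trilinearly interpolating the eight probes surrounding its position. The grid here stores one scalar per probe for simplicity; real probes store directional data (e.g. spherical harmonic coefficients), interpolated the same way per coefficient:

```python
def sample_probe_grid(grid, pos):
    """grid: 3D list indexed [z][y][x]; pos: (x, y, z) in grid units."""
    def lerp(a, b, t):
        return a * (1 - t) + b * t

    def probe(i, j, k):
        # Clamp indices so positions on the grid edge stay valid
        return grid[min(k, len(grid) - 1)][min(j, len(grid[0]) - 1)][min(i, len(grid[0][0]) - 1)]

    x, y, z = pos
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0
    # Blend along x, then y, then z (trilinear interpolation)
    c00 = lerp(probe(x0, y0, z0), probe(x0 + 1, y0, z0), fx)
    c10 = lerp(probe(x0, y0 + 1, z0), probe(x0 + 1, y0 + 1, z0), fx)
    c01 = lerp(probe(x0, y0, z0 + 1), probe(x0 + 1, y0, z0 + 1), fx)
    c11 = lerp(probe(x0, y0 + 1, z0 + 1), probe(x0 + 1, y0 + 1, z0 + 1), fx)
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)

# 2x2x2 probe grid, dark at x=0 and bright at x=1:
grid = [[[0.0, 1.0], [0.0, 1.0]],
        [[0.0, 1.0], [0.0, 1.0]]]
print(sample_probe_grid(grid, (0.25, 0.5, 0.5)))  # 0.25
```

This is also why probes are one-directional, as noted above: the probe values were baked against the static scene, so a dynamic object reads from them but never writes back into them.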


This is at JFK, does anyone know what they are used for? There wasn’t an obvious time when it was taking a picture.

