Everything posted by klepto2

  1. To enable stats you have to call world->RecordStats(true); otherwise no values in the render stats will be filled.
  2. This is how the cascades look when using 4 cascades, full range, and a lambda of 0.98: and here with 3 cascades: The only downside of rendering the full frustum is the higher polygon count that needs to be rendered, but visually you should not notice the low-res shadows in cascade 3 or 4.
  3. If the zfar is too low you will never see shadows indoors on anything that is further away. Maybe you should make this a shadow parameter. You can already see it in your images, by the way: if you made a big brush above the rocks in the background, the rocks should normally be in shadow. With the low zfar they will not be, and will still be fully lit by the light. [Edit] I have checked some resources (another engine and docs), and here is a small summary of what I think could improve the shadow handling:
     - Make the shadow generation more configurable. Add parameters for:
       - How many splits to use (maybe hardcoded to 4 for the first step).
       - The distances for the splits (number of splits - 1). Godot, for example, uses ranges from 0.0 to 1.0, default 0.1, 0.2, 0.5:
         Cascade 1: 0% - 10%
         Cascade 2: 10% - 20%
         Cascade 3: 20% - 50%
         Cascade 4: 50% - 100% (maxdistance)
       - Optional: an option to calculate the distances by the above formula, using a maxdistance and the lambda.
       - A maximum distance, defaulting maybe to 64 or 100 like now; 0.0 should be valid and mean the full range (camera far range).
     - The shaders should receive the splits from the client and not calculate them themselves. For a first version a vec3 might be enough while the splits are fixed to 4.
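     To illustrate the split list above, here is a minimal standalone sketch (my own example code, not engine API) that turns normalized split fractions plus a maxdistance into absolute cascade boundaries:

     ```cpp
     #include <cstdio>
     #include <vector>

     // Turn normalized split points (0..1) plus a maximum shadow distance
     // into absolute cascade boundaries. With n-1 splits you get n cascades.
     std::vector<float> cascade_distances(const std::vector<float>& splits, float maxdistance)
     {
         std::vector<float> bounds;
         bounds.push_back(0.0f);
         for (float s : splits) bounds.push_back(s * maxdistance);
         bounds.push_back(maxdistance);
         return bounds;
     }

     int main()
     {
         // Default splits 0.1, 0.2, 0.5 with a max distance of 100:
         // the cascades cover 0-10, 10-20, 20-50 and 50-100.
         for (float b : cascade_distances({0.1f, 0.2f, 0.5f}, 100.0f))
             std::printf("%g ", b);
         std::printf("\n");
         return 0;
     }
     ```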
  4. Why do you use just 32 as zfar? Do you need to cover the whole frustum? If so, the formula makes sense.
  5. This is where my numbers came from, and also the pink color when the shadow is out of range. The problem with the single cascade distance is that we have to set it to 125 to cover the whole view space, but with this value you get very ugly shadows at short distances. I hope you can get it working.
  6. This happens when dealing with bigger indoor rooms without having set up all the environment probes needed. You can see it in this screenshot: the last (unreachable) space is marked pink. As you can see, the shadows don't cover the whole cave/level in view. Instead (from what I have found out) they just cover (with a default cascade size of 4.0) a range of 64.0. So the first cascade goes from near to 4.0, the second from 4.0 to 8.0, and the last covers a range from 8.0 to 64.0. The rest can never receive shadow as long as the camera is too far away. One way to fix this would be to raise the initial cascade distance to something much higher, like 40 and above, but this leads to artifacts and ugly shadows at near range because the resolution is too low for the covered area. From previous research (years ago) I remembered a logarithmic approach described by nvidia for PSSM (https://developer.nvidia.com/gpugems/gpugems3/part-ii-light-and-shadows/chapter-10-parallel-split-shadow-maps-programmable-gpus) to calculate the cascade sizes. Josh, maybe you should try this approach or add the ability for us to configure the cascade sizes ourselves.
     float get_cascade_split(float lambda, UINT current_partition, UINT number_of_partitions, float near_z, float far_z)
     {
         // https://developer.nvidia.com/gpugems/gpugems3/part-ii-light-and-shadows/chapter-10-parallel-split-shadow-maps-programmable-gpus
         float exp = (float)current_partition / number_of_partitions;
         // Logarithmic split scheme
         float Ci_log = near_z * std::pow(far_z / near_z, exp);
         // Uniform split scheme
         float Ci_uni = near_z + (far_z - near_z) * exp;
         // Lambda [0, 1] blends between the two schemes
         float Ci = lambda * Ci_log + (1.f - lambda) * Ci_uni;
         return Ci;
     }

     Vec2 camera_range = camera->GetRange();
     float lambda = 1.0; // full log (0.0 = linear)
     int total_cascades = 3;
     for (auto cascade = 0; cascade < total_cascades; cascade++)
     {
         float cascade_near = get_cascade_split(lambda, cascade, total_cascades, camera_range.x, camera_range.y);
         float cascade_far = get_cascade_split(lambda, cascade + 1, total_cascades, camera_range.x, camera_range.y);
         Print("Cascade:" + WString(cascade) + " --> " + WString(cascade_near) + "-" + WString(cascade_far));
     }

     This returns something like this:
     Cascade:0 --> 0.1-2.15443
     Cascade:1 --> 2.15443-46.4159
     Cascade:2 --> 46.4159-1000
  7. When you edit a value, like the range of a layer variation, and switch to another variation without leaving the previously edited control, the new value is passed to the new variation. The same happens for a lot of other editable fields, like colors etc., in the default scene tree.
  8. When you set the camera far range to something higher than the default 1000, the layer system is not displayed correctly anymore and starts flickering when you move the camera. The bug is more visible with higher far ranges, but it is also noticeable when the range is around 5k or 10k. Here is a small gif showing the bug, and I have added a small sample map demonstrating it. The zip contains a map with a terrain and the box mdl used as a vegetation layer. The "CameraControls.lua" is also modified so that while you hold "F" the range is set to 1.0 and 100000.0, and back to the default when "F" is released. VegetationLayerBug.zip
  9. This might be because of the predefined steps for down- and upsampling. Normally you would calculate the needed steps in code and choose them based on the resolution. The downsampling and upsampling chain is normally calculated like a mipmap chain.
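     The step calculation could be sketched like this (my own minimal example, not engine code); the number of halving steps for a full chain follows the usual mip-count formula:

     ```cpp
     #include <algorithm>
     #include <cmath>
     #include <cstdio>

     // Number of downsample passes needed until the larger dimension reaches 1,
     // i.e. the same count as a full mipmap chain minus the base level.
     int downsample_steps(int width, int height)
     {
         int largest = std::max(width, height);
         return (int)std::floor(std::log2((double)largest));
     }

     int main()
     {
         // A 1920x1080 target needs 10 halving steps (1920 -> 960 -> ... -> 1).
         std::printf("%d\n", downsample_steps(1920, 1080));
         return 0;
     }
     ```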
  10. I get the same: instead of RGB it is BGR.
  11. Might be related to this: it happens as soon as generated meshes or instances go beyond a certain value.
  12. Yes it will, but in the meantime I think Josh will have a native waterplane integrated.
  13. Thank you, it's nice to see it in action. The default threshold is 1.0 because you normally just want bloom for colors brighter than the normal color range (0.0 - 1.0). This is why you always need to apply bloom before you do any tonemapping (tonemapping will normally try to bring the colors back into the 0.0 - 1.0 range). The auto exposure should come before that.
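     As a minimal illustration of the ordering argument (my own sketch, not the engine's actual shader code): the bloom contribution is extracted from the HDR values above the threshold first, and only afterwards is the scene color tonemapped back into 0..1; doing it the other way around leaves nothing above the threshold:

     ```cpp
     #include <algorithm>
     #include <cstdio>

     // Keep only the part of the color above the bloom threshold.
     float bloom_contribution(float c, float threshold)
     {
         return std::max(c - threshold, 0.0f);
     }

     // Simple Reinhard tonemap, mapping [0, inf) into [0, 1).
     float tonemap(float c)
     {
         return c / (1.0f + c);
     }

     int main()
     {
         float hdr = 3.0f;       // a bright HDR value, well above 1.0
         float threshold = 1.0f; // the default threshold from the post

         // Correct order: extract bloom from the HDR color first...
         float bloom = bloom_contribution(hdr, threshold);
         // ...then tonemap the scene color into displayable range.
         float ldr = tonemap(hdr);
         // Tonemapping first would leave nothing above the threshold:
         float bloom_after_tonemap = bloom_contribution(tonemap(hdr), threshold);

         std::printf("%g %g %g\n", bloom, ldr, bloom_after_tonemap);
         return 0;
     }
     ```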
  14. Hi, thanks for pointing this out. I will take a look as soon as possible, I think it will just be some small details.
  15. Not that I am aware of. I mainly see the info included in the description or, depending on the editor, in the project files of the used editor.
  16. Maybe for the import of heightmaps, ask for the minimum and maximum height in meters. This info comes with most terrain generators and also allows "below sea level" (< 0) positions to be calculated. Maybe these values should default to a range of min = 0 and max = 100.
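     The remapping would look something like this (a hypothetical helper, just to illustrate): a normalized heightmap sample is mapped into the [min, max] meter range, so negative (below sea level) heights become possible:

     ```cpp
     #include <cstdio>

     // Map a normalized heightmap value (0..1) into world height in meters.
     float world_height(float normalized, float min_meters, float max_meters)
     {
         return min_meters + normalized * (max_meters - min_meters);
     }

     int main()
     {
         // With min = -10 and max = 90, a mid-gray pixel (0.5) sits at 40 m,
         // and black (0.0) ends up 10 m below sea level.
         std::printf("%g %g\n", world_height(0.5f, -10.0f, 90.0f),
                                world_height(0.0f, -10.0f, 90.0f));
         return 0;
     }
     ```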
  17. I know that it is expensive, and I usually use just 1k textures, which are much faster to compress. The model I used is from Kitbash and was "pipelined" via (KitBash3D) Cargo and Blender 4.1 (gltf export); I downloaded the 4k version by "accident". The problem with these models is that they use up to 200 textures, which is a lot, and this is why I experienced the long loading time (with normal models with baked textures or a lower texture count you might not notice it). I know that bc7 (and bc6) are expensive, and you have done a great job with them. But I would still suggest some important UX improvements here:
     - The process should not block the editor.
     - Rule of thumb: if something takes longer than 100ms, show a progress screen with proper update/progress info.
     - If it is non-blocking, show progress in the status bar with information about which file is being processed right now.
     In general, for a better UX you should report more info to the user on certain actions. Just a few samples where this might apply: model/texture conversion, GI building, scene loading.
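     The non-blocking suggestion could look roughly like this (a generic sketch with made-up names, not the editor's actual code): the conversion runs on a worker thread and the UI thread only polls an atomic progress counter, so it stays responsive:

     ```cpp
     #include <atomic>
     #include <chrono>
     #include <cstdio>
     #include <string>
     #include <thread>
     #include <vector>

     int main()
     {
         // Hypothetical file list standing in for the textures of a model.
         std::vector<std::string> files = {"wall_albedo.png", "wall_normal.png", "wall_rough.png"};
         std::atomic<size_t> processed{0};

         // Worker thread: convert each file and bump the counter.
         std::thread worker([&] {
             for (const auto& f : files)
             {
                 // ...the real conversion work would happen here...
                 std::this_thread::sleep_for(std::chrono::milliseconds(10));
                 processed.fetch_add(1);
             }
         });

         // UI thread: never blocks on the conversion, just reports progress.
         while (processed.load() < files.size())
         {
             std::printf("Converting: %zu / %zu\n", processed.load(), files.size());
             std::this_thread::sleep_for(std::chrono::milliseconds(5));
         }
         worker.join();
         std::printf("Done: %zu files\n", processed.load());
         return 0;
     }
     ```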
  18. Converting textures alone works flawlessly and only takes a few seconds (even when the size is above 2k). But when converting a gltf model to mdl it is painfully slow. As you can see, it takes more than 5 minutes for just 23 textures (around 5 materials), and I killed the preview exe so it would not disturb the process. With the preview exe running it doubles the amount of time.
  19. Normally this shouldn't be needed, as it would only require additional GPU memory. The textures in the background are already smaller in size due to mipmapping, so using lower-res textures for lower lods is not necessary.
  20. Ok, I think the bottleneck was some textures originally saved as png with a size of 4k. These textures take a lot of time to convert; jpgs of the same size converted fast. I will test it a bit more and provide you the textures which convert slowly later.
  21. As a side note: the main reason for the long loading time (apart from the unwanted conversion) is that the conversion of textures to dds seems to be extremely slow in the latest build. It takes minutes for just a simple 2k jpg.
  22. I have the same issue, and it started after adding a new gltf model (a larger one) to the project. Now every time the editor is started it takes a lot of time. In the log I can see that the model is reconverted to mdl every time (including all textures etc.).
  23. It is more realistic than a brightness > 1.0 on the diffuse channel. In addition to what Dreikblack said, you can use the emission for much more: glowing signs, monitors with text on them, and so on. Another benefit of having a brightness value (adjustable from the MaterialClass) is an easy way to implement flickering materials or ways to turn the glow on/off. This looks right, but it has a major downside (at least in my opinion): once you set the brightness to 0.0 you lose the color information. In my opinion you should always store the color and the brightness independently in the material file. How they are then used in the engine doesn't matter, but it will prevent users from accidentally deleting the color information.
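     The separate-storage argument can be shown with a tiny sketch (a hypothetical material struct, not the actual MaterialClass API): color and brightness live side by side, the engine multiplies them only at use time, and setting brightness to 0 never destroys the stored color:

     ```cpp
     #include <cstdio>

     struct Emission
     {
         float r, g, b;    // stored color, kept even when the glow is off
         float brightness; // adjustable intensity, e.g. for flickering or on/off
     };

     int main()
     {
         Emission e{1.0f, 0.5f, 0.25f, 2.0f};

         // The engine combines them only when shading:
         std::printf("used: %g %g %g\n", e.r * e.brightness, e.g * e.brightness, e.b * e.brightness);

         // Turning the glow off loses no information...
         e.brightness = 0.0f;
         std::printf("used: %g %g %g\n", e.r * e.brightness, e.g * e.brightness, e.b * e.brightness);
         // ...because the stored color is still intact:
         std::printf("stored: %g %g %g\n", e.r, e.g, e.b);
         return 0;
     }
     ```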
  24. Ok, I don't know why, but it works now. But I would suggest adding the same functionality to the emission color.