Improving Precision for Ray-Ellipsoid Intersection
You are working with an atmospheric scattering GLSL fragment shader and trying to improve the precision of the ray-ellipsoid intersection, particularly at large distances (e.g., 100 AU). Since the problem is primarily precision loss in GLSL floating-point arithmetic, you are already using double precision (fp64) for the core calculations.
From your explanation and shader code it looks like you are already doing most of the right things, but you still see artifacts, and the precision is not sufficient at high zoom levels or for large distances.
Here are some potential strategies to further improve the precision:
a. Alternative Roots of the Quadratic Equation
You have already mentioned the alternative solution q = -0.5 * (b + sign(b) * d) for computing l0 and l1. This form costs essentially the same as the textbook formula but is more numerically stable, and it helps precisely in the situations where one of the roots would otherwise lose most of its significant digits.
// Alternative, more stable evaluation of l0 and l1.
// b and sign(b) * d always have the same sign, so the sum never cancels;
// the second root follows from the product of the roots, l0 * l1 = c / a.
d = -0.5 * (b + sign(b) * d);
l0 = d / a;
l1 = c / d;
// Note: if b == 0 this gives d == 0; fall back to the symmetric roots ±sqrt(-c / a) there.
This evaluation is more robust because neither root is formed by subtracting two nearly equal numbers, which the textbook formula does whenever 4*a*c is small compared to b*b, e.g. when the viewpoint is far from the ellipsoid. For nearly tangential intersections the remaining weak spot is the discriminant itself, which is addressed below.
b. Using a Higher-Precision Method (Newton-Raphson)
If the intersection points are very close to each other, or precision is still lacking, you can refine l0 and l1 with an iterative technique such as Newton-Raphson after the closed-form solution provides an initial guess; each iteration evaluates the quadratic's residual and corrects the root, recovering digits lost in the initial estimate.
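One refinement step could look like the sketch below; the function name refineRoot is illustrative rather than taken from your shader, and it assumes fp64 is available for the residual:
// One Newton-Raphson step for a root of a*l^2 + b*l + c = 0.
// Evaluating the residual in fp64 recovers digits lost in the initial float estimate.
double refineRoot(double a, double b, double c, double l)
{
    double f  = (a * l + b) * l + c;   // residual f(l), in Horner form
    double df = 2.0lf * a * l + b;     // derivative f'(l)
    return (df != 0.0lf) ? l - f / df : l;
}
// Usage, e.g.: l0 = float(refineRoot(a, b, c, double(l0)));
One or two steps are usually enough, since Newton-Raphson roughly doubles the number of correct digits per iteration near a simple root.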
Another thing to check is whether the range shift can be made adaptive, i.e., applied only when the values actually exceed a threshold, rather than always using the fixed value (view_depth_max).
The textbook closed-form solution l = (-b ± sqrt(b² − 4ac)) / (2a) is prone to numerical instability when the discriminant b² − 4ac is small, i.e. when b² and 4ac are nearly equal: the subtraction then cancels most significant digits, and rounding can even push the result slightly negative for rays that analytically just graze the ellipsoid. A cheap guard is to clamp the discriminant before taking the square root:
d = sqrt(max(b*b - 4.0*a*c, 0.0)); // rounding can make a near-zero discriminant slightly negative; clamping avoids NaN (reject a clearly negative discriminant as a miss before this point)
Shifting the View Position (p0)
You mentioned shifting the view position p0 closer to the ellipsoid's center, which can help because the intersection is then computed with coordinates of much smaller magnitude. This requires keeping the ray direction (dp) and the shifted origin consistent and adding the shift back to the resulting distances, otherwise new artifacts appear. Be aware that the shift itself must be computed accurately; if it is not, the method merely moves the rounding error elsewhere, particularly at very large distances or high zoom levels.
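A minimal sketch of such a shift, assuming the ellipsoid is centred at the coordinate origin and that hit points are parameterised as p0 + l * dp (the function name shiftedOrigin is illustrative):
// Move the ray origin to the point on the ray closest to the ellipsoid centre,
// so a, b and c are built from small numbers; the shift is added back afterwards.
dvec3 shiftedOrigin(dvec3 p0, dvec3 dp, out double lShift)
{
    lShift = -dot(p0, dp) / dot(dp, dp);   // ray parameter of the closest point to the centre
    lShift = max(lShift, 0.0lf);           // never move the origin behind the camera
    return p0 + lShift * dp;
}
// After intersecting from the shifted origin:  l0 += lShift;  l1 += lShift;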
Increase the Precision of Your Inputs
You mentioned that double-precision inputs are being converted to float by GLSL. This could be a major source of the artifacts, especially for intersections at extremely large or small distances: once the inputs have been rounded to float, no amount of fp64 arithmetic afterwards can recover the lost digits. Unfortunately, you cannot simply pass double-precision values through the whole pipeline, because GLSL has limited fp64 support for attributes and interpolators.
However, you can upcast the ray parameters inside the shaders themselves before they reach the fragment stage, but this requires fp64 support in your hardware and shader profile:
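For example (a sketch: per-draw constants such as the camera position p0 and the ellipsoid radii r can simply become double-precision uniforms, while per-pixel quantities such as the ray direction still need one of the techniques discussed under Q2 below):
// Requesting fp64 (core in GLSL 4.00; otherwise the extension must be present):
#version 400
#extension GL_ARB_gpu_shader_fp64 : enable

uniform dvec3 p0;   // camera position, full double precision
uniform dvec3 r;    // ellipsoid radii, full double precision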
Q2: Is there a way to pass double interpolators from vertex to fragment shader?
In GLSL, you cannot pass smoothly interpolated dvec4 (double-precision) values from the vertex shader to the fragment shader; where fp64 is supported at all, double-typed fragment inputs must be declared flat, and many targets do not expose fp64 in these stages. However, there are a few techniques you can consider:
a. Manually Pass Doubles as Separate Float Components
You can split each double into a high-order float and a low-order float (the rounding error of the high part), pass those as ordinary vec3/vec4 varyings, and reconstruct the doubles in the fragment shader. This avoids relying on fp64 interpolation, at the cost of extra shader complexity.
// Vertex shader: split each dvec3 into a high float and a low float so the
// pair can travel through ordinary float interpolators.
out vec3 positionHi, positionLo;
out vec3 directionHi, directionLo;
out vec3 ellipsoidHi, ellipsoidLo;

void main() {
    positionHi  = vec3(p0);                       // truncated high part
    positionLo  = vec3(p0 - dvec3(positionHi));   // rounding error = low part
    directionHi = vec3(dp);
    directionLo = vec3(dp - dvec3(directionHi));
    ellipsoidHi = vec3(r);
    ellipsoidLo = vec3(r - dvec3(ellipsoidHi));
    // ... compute gl_Position as usual ...
}
Then in the fragment shader, you can reconstruct these components back into dvec3 values. Although this avoids the need for fp64 interpolators, it does introduce some complexity.
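A minimal fragment-shader counterpart, assuming the hi/lo varying names from the sketch above:
// Fragment shader: rebuild the fp64 values; the low part restores the digits
// that the high float could not hold.
in vec3 positionHi, positionLo;
in vec3 directionHi, directionLo;
in vec3 ellipsoidHi, ellipsoidLo;

void main() {
    dvec3 p0 = dvec3(positionHi) + dvec3(positionLo);
    dvec3 dp = dvec3(directionHi) + dvec3(directionLo);
    dvec3 r  = dvec3(ellipsoidHi) + dvec3(ellipsoidLo);
    // ... ray-ellipsoid intersection in fp64 ...
}
Keep in mind that the hi and lo parts are interpolated independently as floats, so values that genuinely vary across the quad pick up a little interpolation error again; for per-draw constants, flat-qualified varyings or double uniforms avoid that entirely.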
b. Use Multiple Render Passes with fp64 in a Fragment Shader
Another option is to split the computation into multiple passes: in the first pass, compute the dvec3/double values in a fragment shader using fp64 and store them in a texture; in the second pass, read them back and continue the calculation. Since there is no 64-bit floating-point texture format, each double has to be packed, e.g. as two 32-bit unsigned integers.
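A sketch of the packing, using the built-ins packDouble2x32 / unpackDouble2x32 (GLSL 4.00+); the RG32UI render target, the sampler name packedTex and the placeholder value are assumptions, not code from your project:
// ---- Pass 1 fragment shader: write the raw bits of an fp64 result ----
#version 400
layout(location = 0) out uvec2 packedL0;

void main() {
    double l0 = 0.0lf;                   // placeholder: your fp64 intersection result
    packedL0 = unpackDouble2x32(l0);     // reinterpret the 64 bits as two uints
}

// ---- Pass 2 fragment shader: read the bits back and resume in fp64 ----
#version 400
uniform usampler2D packedTex;            // the RG32UI texture written by pass 1
out vec4 fragColor;

void main() {
    uvec2 bits = texelFetch(packedTex, ivec2(gl_FragCoord.xy), 0).rg;
    double l0  = packDouble2x32(bits);   // exact round-trip of the double
    fragColor  = vec4(float(l0));        // continue the scattering math from here
}
This round-trips the double exactly (it is a pure bit copy), at the cost of an extra pass and the bandwidth for the intermediate texture.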