
depth2viewZ: Depth Buffer to View Space Conversion

Camera Projection Depth Transformation Functions

The depth2viewZ family of functions converts normalized depth buffer values into linear view-space Z coordinates. These functions perform the mathematical transformation needed to reconstruct view-space (and, from there, world-space) positions from depth textures under a camera projection.

Mathematical Foundation

For perspective projection, the transformation follows:

$$Z_{view} = \frac{n \cdot f}{(f - n) \cdot d - f}$$

For orthographic projection:

$$Z_{view} = d \cdot (n - f) - n$$

Where:

  • $d$ = normalized depth buffer value in [0, 1]
  • $n$ = near clipping plane distance
  • $f$ = far clipping plane distance
  • $Z_{view}$ = linear view-space Z coordinate
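
The same conversions are easy to verify outside the shader. The sketch below is plain JavaScript that mirrors the two formulas above; the function names are illustrative placeholders, not part of the library API, and depth is assumed to lie in [0, 1].

// Illustrative CPU-side versions of the formulas above (names are placeholders)
const perspectiveDepthToViewZ = (d, near, far) =>
  (near * far) / ((far - near) * d - far)

const orthographicDepthToViewZ = (d, near, far) =>
  d * (near - far) - near

// With near = 1 and far = 20, both map d = 0 to -near and d = 1 to -far:
console.log(perspectiveDepthToViewZ(0, 1, 20))    // -1
console.log(perspectiveDepthToViewZ(1, 1, 20))    // -20
console.log(orthographicDepthToViewZ(0.5, 1, 20)) // -10.5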

Function Variants

| Function | Purpose | Parameters |
| --- | --- | --- |
| depth2viewZ | Perspective projection | depth, near, far |
| depth2viewZOrthographic | Orthographic projection | depth, near, far |
| depth2viewZCombined | Unified function | depth, near, far, orthographic |
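
When the projection type is known up front, the dedicated variants can be called directly, as in the sketch below. It follows the same live-editor style as the example that comes next and relies only on the signatures listed in the table above; the visualization remap at the end is illustrative.

const fragment = () => {
  const depth = uv.x   // simple depth ramp across the screen
  const near = float(1)
  const far = float(20)
  // Perspective conversion; swap in depth2viewZOrthographic for an ortho camera
  const viewZ = depth2viewZ(depth, near, far)
  // View-space Z is negative in front of the camera, so flip the sign before display
  const color = viewZ.mul(-1).div(far)
  return vec4(vec3(color), 1)
}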
Live Editor
const fragment = () => {
  // Synthetic depth in [0, 1] that varies across the screen
  const depth = sin(uv.x.mul(8)).mul(0.5).add(0.5)
  const near = float(1)
  const far = float(20)
  // 0 where uv.y < 0.5, 1 elsewhere
  const ortho = step(0.5, uv.y)
  // Perspective conversion where the flag is 0, orthographic where it is 1
  const viewZ = depth2viewZCombined(depth, near, far, ortho)
  // Rough remap of the (negative) view-space Z into a gray value
  const color = viewZ.div(far).add(0.5)
  return vec4(vec3(color), 1)
}
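
In this demo, step(0.5, uv.y) drives the orthographic flag: pixels with uv.y below 0.5 use the perspective formula and the rest use the orthographic one. Because view-space Z is negative in front of the camera, viewZ.div(far).add(0.5) is only a rough remap into a visible gray value; anything that remains below zero simply renders as black.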