Real stereo depth cameras (and Isaac Sim's simulation of them) compute depth from disparity — the pixel offset between left and right views. The fundamental relationship is:

  depth = (focal_length × baseline) / disparity                                                           

At greater distances, the disparity shrinks. When an object is both small and far, it occupies very few pixels in the rendered image. The post-processing pipeline can't compute a reliable disparity for it — the object doesn't span enough pixels to produce a meaningful stereo match. The pipeline correctly returns invalid depth rather than an unreliable measurement. A larger object occupies more pixels → more pixels available for matching, so a reliable disparity (and therefore a valid depth) can still be recovered at the same distance.
