Author
Guo, Crystal
Date Published
2022-06-13
Abstract
Biological visual systems rely on pose estimation of three-dimensional (3D) objects to understand and navigate the surrounding environment, but the neural computations and mechanisms for inferring 3D poses from 2D retinal images are only partially understood, especially under conditions where stereo information is insufficient. We previously presented evidence that humans use the geometrical back-transform from retinal images to infer the poses of 3D objects lying centered on the ground. This model explained the almost veridical estimation of poses in real scenes and the illusory rotation of poses in obliquely viewed pictures, including the "pointing at you" phenomenon. Here we test this model for 3D objects in more varied configurations and find that it needs to be augmented. Five observers estimated the poses of inclined, floating, or off-center 3D sticks, each in 16 different poses, displayed on a monitor viewed straight-on or obliquely. Pose estimates in scenes and pictures showed remarkable accuracy and agreement between observers, but with a systematic fronto-parallel bias for oblique poses. When one end of an object is on the ground while the other is inclined upward, the projected retinal orientation changes substantially as a function of inclination, so the back-transform derived from the object's projection to the retina is not unique unless the angle of inclination is known. We show that observers' pose estimates can be explained by the back-transform from retinal orientation only if it is derived for close to the correct inclination. The same back-transform explanation applies to obliquely viewed pictures. There is less change in retinal orientations when objects are floating or placed off-center, but pose estimates can be explained by the same model, making it more likely that observers use internalized perspective geometry to make 3D pose inferences while actively incorporating inferences about other aspects of object placement.
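The geometric point in the abstract — that an inclined stick's projected retinal orientation shifts substantially with inclination, so the retinal orientation alone does not uniquely determine the ground-plane pose — can be illustrated with a minimal perspective-projection sketch. This is a hypothetical illustration, not code or parameters from the paper: the camera height, focal length, viewing distance, and function names are all assumptions made for the example.

```python
import math

def project(p, h=1.5, f=1.0):
    # Pinhole projection: assumed camera at height h above the ground,
    # optical axis parallel to the ground along +z, focal length f.
    x, y, z = p
    return (f * x / z, f * (y - h) / z)

def retinal_orientation(base, phi, theta, length=1.0):
    # phi: pose angle in the ground plane (0 = pointing away from viewer)
    # theta: inclination of the stick above the ground plane
    d = (math.sin(phi) * math.cos(theta),
         math.sin(theta),
         math.cos(phi) * math.cos(theta))
    tip = tuple(b + length * di for b, di in zip(base, d))
    u0, v0 = project(base)
    u1, v1 = project(tip)
    # Orientation of the projected segment in the image plane, in degrees
    return math.degrees(math.atan2(v1 - v0, u1 - u0))

# One end on the ground, ~4 m ahead of the viewer; fixed ground-plane pose
base = (0.3, 0.0, 4.0)
for theta_deg in (0, 15, 30, 45):
    o = retinal_orientation(base, math.radians(60), math.radians(theta_deg))
    print(f"inclination {theta_deg:2d} deg: retinal orientation {o:6.1f} deg")
```

Running the loop shows the projected image orientation swinging by tens of degrees as inclination changes while the ground-plane pose stays fixed at 60°, which is why a back-transform from retinal orientation needs the inclination as an extra input.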
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International