Neuronal properties, neural populations, and mental geometry in inferring object attributes
Author
Maruya, Akihito
Date Published
2024
Abstract
In the first chapter, we reveal that while many studies have focused on size invariance with respect to physical distance, the constancy or inconstancy of relative size with respect to object pose has been largely overlooked. Our findings demonstrate a systematic underestimation of length for objects oriented toward or away from the observer, whether static or dynamically rotating. Observers attempt to correct for projected shortening using the optimal back-transform, but these corrections often fall short, particularly for longer objects that appear more slanted. Incorporating a multiplicative factor for perceived slant into the back-transform model yields a better fit to the observed corrections.

In the second chapter, we extend this investigation to obliquely viewed pictures, comparing human performance to the optimal geometric solution. We show that size and shape distortions occur in oblique views, particularly for objects at fronto-parallel poses, leading to significant underestimation. We found that empirical correction functions, although similar in shape to the optimal, are of lower amplitude, likely due to systematic underestimation of viewing azimuth. By adjusting the geometrical back-transform to account for this bias, we achieve better fits to the estimated 3D lengths from oblique views. These results add to the evidence that humans use internalized projective geometry to perceive sizes, shapes, and poses in both real scenes and their photographic representations.

The third chapter addresses the perception of rigidity and non-rigidity in rigidly moving objects. We used rotating rigid objects that could appear either rigid or non-rigid to test the contribution of shape features to rigidity perception. Our results show that salient features such as gaps or vertices reinforce the perception of rigidity at slow and moderate speeds, while all configurations appear non-rigid at high speeds.
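The geometry behind the first chapter's correction can be illustrated with a minimal sketch. Assuming orthographic projection of a rod lying on the ground plane at pose Ω (0° = fronto-parallel, 90° = pointing in depth) viewed from elevation θ, the projected length shrinks by √(cos²Ω + sin²Ω·sin²θ); a back-transform divides this factor out. The `slant_gain` parameter is a hypothetical stand-in for the multiplicative perceived-slant factor described in the abstract, not the dissertation's fitted model:

```python
import math

def back_transform(projected_len, pose_deg, elev_deg, slant_gain=1.0):
    """Invert orthographic foreshortening of a rod on the ground plane.

    pose_deg: rod pose (0 = fronto-parallel, 90 = pointing in depth).
    elev_deg: camera elevation above the ground plane.
    slant_gain: hypothetical multiplier on the pose angle, standing in
        for the perceived-slant factor mentioned in the abstract.
    """
    omega = math.radians(pose_deg) * slant_gain  # effective (perceived) pose
    theta = math.radians(elev_deg)
    shrink = math.sqrt(math.cos(omega) ** 2
                       + math.sin(omega) ** 2 * math.sin(theta) ** 2)
    return projected_len / shrink
```

At Ω = 0 the shrink factor is 1 and the projected length is veridical; at Ω = 90° and θ = 30° the factor is sin 30° = 0.5, so the back-transform doubles the projected length.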
We also demonstrate that motion flow vectors from local motion-energy computation are predominantly orthogonal to the contours of the rings rather than parallel to the rotation direction. A convolutional neural network trained to distinguish flow patterns for wobbling versus rotation showed that motion-energy flows contribute to the perception of wobbling, while feature-tracking mechanisms enhance the perception of rotation. Interestingly, circular rings can sometimes appear to spin and roll even without any sensory evidence, an illusion that is mitigated by the presence of vertices, gaps, and painted segments, highlighting the role of rotational symmetry and shape. By combining CNN outputs that prioritize motion energy at high speeds and feature tracking at low speeds, along with shape-based priors for wobbling and rolling, we were able to accurately explain both rigid and non-rigid perceptions across different shapes and speeds (R² = 0.95). These findings demonstrate how the cooperation and competition between different classes of neurons lead to distinct states of visual perception and transitions between those states.

Finally, the fourth chapter investigates the anisotropy in object non-rigidity, linking it to low-level neural properties in the primary visual cortex. By combining mathematical derivations and computational simulations, we replicate psychophysical findings on non-rigidity perception in rotating objects. Our analysis reveals that perceived shape changes, such as elongation or narrowing of rings, can be decoded from V1 outputs by considering anisotropies in orientation-selective cells. We empirically show that even when vertically rotating ellipses are widened or horizontally rotating ellipses are elongated to match shapes, the perceived difference in non-rigidity decreases, but heightened non-rigidity remains in vertical rotations.
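The speed-dependent cue combination described above can be sketched as a logistic weighting between the two pathways' outputs. This is an illustrative sketch, not the dissertation's fitted model: the function name, the logistic form, and the `pivot`/`steepness` parameters are assumptions chosen only to show motion energy dominating at high speeds and feature tracking at low speeds:

```python
import math

def wobble_probability(p_me_wobble, p_ft_wobble, speed,
                       pivot=1.0, steepness=2.0):
    """Hypothetical speed-weighted combination of two pathway outputs.

    p_me_wobble: wobbling probability from the motion-energy CNN pathway.
    p_ft_wobble: wobbling probability from the feature-tracking pathway.
    speed: rotation speed; pivot and steepness are illustrative
        parameters, not values estimated in the dissertation.
    """
    # Logistic weight: approaches 1 at high speed, 0 at low speed,
    # so motion energy dominates fast rotations and feature tracking
    # dominates slow ones.
    w_me = 1.0 / (1.0 + math.exp(-steepness * (speed - pivot)))
    return w_me * p_me_wobble + (1.0 - w_me) * p_ft_wobble
```

At speeds far above the pivot the output converges to the motion-energy estimate; far below it, to the feature-tracking estimate, mirroring the abstract's description of which mechanism dominates at each speed.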
By integrating cortical anisotropies into motion flow calculations, we observed that motion gradients for vertical rotations align more closely with physical wobbling, whereas horizontal rotations fall somewhere between wobbling and rigid rotation. This indicates that intrinsic cortical anisotropies play a role in amplifying the perception of non-rigidity when orientation changes from horizontal to vertical. The study highlights the significance of these cortical anisotropies in influencing perceptual outcomes and prompts further exploration of their evolutionary purpose, particularly in relation to shape constancy and motion perception.
The following license files are associated with this item:
- Creative Commons
Except where otherwise noted, this item's license is described as Attribution-NonCommercial-NoDerivatives 4.0 International