News

2025.05.26 News

A paper by the research team of Planning Group A01, Dr. Oizumi, has been published in the journal iScience


This study investigated differences in information representation in the visual cortex by comparing neural activity representations across different brains in an unsupervised manner using Gromov–Wasserstein optimal transport. Examining mouse Neuropixels data and human fMRI data, we found that in both datasets neural representations within the same visual cortical area could be aligned across individuals without supervision. Between different visual areas, by contrast, information representations aligned well between higher-order visual areas, but not between lower- and higher-order areas. The full text is available Open Access, and we would be delighted if you read it.

Please also see here for a more detailed explanation.



Paper Information


Authors: Ken Takeda, Kota Abe, Jun Kitazono, Masafumi Oizumi


Title: Unsupervised alignment reveals structural commonalities and differences in neural representations of natural scenes across individuals and brain areas


Journal: iScience, Volume 28, Issue 5, 2025, 112427, ISSN 2589-0042


DOI: https://doi.org/10.1016/j.isci.2025.112427



Summary

 Neuroscience research aims to identify universal neural mechanisms underlying sensory information encoding by comparing neural representations across individuals, typically using Representational Similarity Analysis. However, traditional methods assume direct stimulus correspondence across individuals, limiting the exploration of other possibilities. To address this, we propose an unsupervised alignment framework based on Gromov-Wasserstein Optimal Transport, which identifies correspondences between neural representations solely from internal similarity structures, without relying on stimulus labels. Applying this method to Neuropixels recordings in mice and fMRI data in humans viewing natural scenes, we found that the neural representations in the same visual cortical areas can be well aligned across individuals in an unsupervised manner. Furthermore, alignment across different brain areas is influenced by factors beyond the visual hierarchy, with higher-order visual areas aligning well with each other, but not with lower-order areas. This unsupervised approach reveals more nuanced structural commonalities and differences in neural representations than conventional methods.