Visual and idiothetic information is coupled in forming multimodal spatial representations during navigation (Tcheang et al. in Proc Natl Acad Sci USA 108(3):1152–1157, 2011). We investigated whether idiothetic representations activate visual representations but not vice versa (unidirectional coupling) or whether the two representations activate each other (bidirectional coupling). In a virtual reality environment, participants actively rotated in place to face specified orientations in order to adapt to a new vision–locomotion relationship (gain); specifically, the visual turning angle was 0.7 times the physical turning angle. After adaptation, participants either walked a path with one turn in darkness (idiothetic input only) or watched a video of the traversed path (visual input only), and then pointed to the origin of the path. Participants who received only idiothetic input produced pointing responses that were influenced by the new gain (an adaptation effect). By contrast, participants who received only visual input showed no adaptation effect. These results suggest that idiothetic input contributed to spatial representations indirectly, via the coupling, which produced the adaptation effect, whereas vision alone contributed to spatial representations directly, which produced no adaptation effect. Hence, the coupling between vision and locomotion is unidirectional.
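A minimal sketch of the gain manipulation described above, using symbols introduced here purely for illustration (the labels do not appear in the original abstract):

\[
\theta_{\mathrm{vis}} = g \, \theta_{\mathrm{phys}}, \qquad g = 0.7 ,
\]

where \(\theta_{\mathrm{phys}}\) is the physical turning angle and \(\theta_{\mathrm{vis}}\) is the concurrently presented visual turning angle. Under full adaptation to this gain, a physical turn of, say, 90° walked in darkness would presumably be registered as about \(0.7 \times 90^{\circ} = 63^{\circ}\); a gain-dependent shift of this kind in the pointing-to-origin response is what is referred to above as the adaptation effect.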