The vast majority of visual cognitive functions, from low to high level, rely not only on feedforward signals carrying sensory input to downstream brain areas but also on internally generated feedback signals traversing the brain in the opposite direction. Feedback signals underlie our ability to conjure up internal representations regardless of sensory input – whether we are imagining an object or directly perceiving it. Despite the ubiquitous involvement of feedback signals in visual cognition, little is known about their functional organization in the brain. Multiple studies have shown that, within the visual system, the same brain region can concurrently represent feedforward and feedback contents. Given this spatial overlap, (1) how does the visual brain separate feedforward and feedback signals, thereby avoiding a mixture of the perceived and the imagined? Confusing the two information streams could have potentially detrimental consequences. Another body of research has demonstrated that feedback connections between two different sensory systems enable rapid and effortless signal transmission between them. (2) How do nonvisual signals elicit visual representations? In this work, we aimed to scrutinize the functional organization of directed signal transmission in the visual brain by interrogating these two critical questions. In Studies I and II, we explored the functional segregation of feedforward and feedback signals across the grey matter depth of early visual area V1 using 7T fMRI. In Study III, we investigated the mechanism of cross-modal generalization using EEG. In Study I, we hypothesized that the functional segregation of external and internally generated visual contents follows the organization of feedforward and feedback anatomical projections revealed by tracing studies in primates: feedforward projections were found to terminate in the middle cortical layer of primate area V1, whereas feedback connections project to the superficial and deep layers.
We used high-resolution layer-specific fMRI and multivariate pattern analysis to test this hypothesis in a mental rotation task. We found that rotated contents were predominantly represented in the outer cortical depth compartments (i.e. superficial and deep), whereas perceived contents were more strongly represented in the middle compartment. These results correspond to the previous neuroanatomical findings and show how, through cortical depth compartmentalization, V1 functionally segregates, rather than confuses, external and internally generated visual contents. To estimate more precisely the signal-by-depth separation revealed in Study I, we next benchmarked three MR sequences at 7T – gradient-echo, spin-echo, and vascular space occupancy – for their ability to differentiate feedforward and feedback signals in V1. The experiment in Study II consisted of two complementary tasks: a perception task that predominantly evokes feedforward signals, and a working memory task that relies on feedback signals. We used multivariate pattern analysis to read out the perceived (feedforward) and memorized (feedback) grating orientation from neural signals across cortical depth. Analyses across all MR sequences revealed perception signals predominantly in the middle cortical compartment of area V1 and working memory signals in the deep compartment. Despite an overall consistency across sequences, spin-echo was the only sequence in which both feedforward and feedback information were differentially pronounced across cortical depth in a statistically robust way. We therefore suggest that, in the context of a typical cognitive neuroscience experiment manipulating feedforward and feedback signals with 7T fMRI, the spin-echo method may provide a favorable trade-off between spatial specificity and signal sensitivity. In Study III, we focused on the second critical question: how are visual representations activated by signals belonging to another sensory modality?
Here we built our hypothesis on studies in the field of object recognition demonstrating that abstract category-level representations emerge in the brain after brief stimulus presentation, even in the absence of any explicit categorization task. Based on these findings, we assumed that two sensory systems can converge on a modality-independent representational state, providing a universal feature space that can be read out by either system. We used EEG and a paradigm in which participants were presented with images and spoken words while performing an unrelated task. We aimed to explore whether categorical object representations in both modalities reflect a convergence towards modality-independent representations. We obtained robust representations of objects and object categories in the visual and auditory modalities; however, we did not find a conceptual representation shared across modalities at the level of patterns extracted from EEG scalp electrodes. Overall, our results show that feedforward and feedback signals are spatially segregated across grey matter depth, possibly reflecting a general strategy for implementing multiple cognitive functions within the same brain region. This differentiation can be revealed with diverse MR sequences at 7T fMRI, among which the spin-echo sequence may be particularly suitable for establishing cortical depth-specific effects in humans. We did not find the modality-independent representations which, according to our hypothesis, would subserve the activation of visual representations by signals from another sensory system. This pattern of results indicates that identifying the mechanisms bridging different sensory systems is more challenging than exploring within-modality signal circuitry, and that this challenge requires further study. With this, our results contribute to a large body of research interrogating how feedforward and feedback signals give rise to complex visual cognition.