This thesis describes my work in the AutoNOMOS project, dedicated to the advancement of autonomous driving. Our team of researchers demonstrated autonomous driving on the German Autobahn and in the inner-city traffic of Berlin, relying mostly on LIDAR scanners and an Applanix POS LV 220 for self-localization. These sensors are highly precise, yet expensive. This prompts two questions: Can we achieve reliable environmental perception at hardware costs suitable for mass-market applications? And can we achieve human-like or even superior sensing capabilities using only passive, human-like sensing modalities?

To address these questions, this thesis presents computer vision algorithms based on an embedded stereo camera that was developed in our research group. It processes stereoscopic images and computes range data in real time. The sensor's hardware is inexpensive, yet powerful.

This dissertation covers four important aspects of visual environmental perception. First, the camera's pose must be precisely calibrated in order to interpret visual data within the car's coordinate system. For this purpose, an automatic calibration is proposed that does not rely on a calibration target, but rather uses the image data from typical road scenes.

The second problem solved is free-space and obstacle recognition. This thesis presents a generic object detection system based on the stereo camera's range data. It detects obstacles using a hybrid 2D/3D segmentation method and employs an occupancy grid to compute drivable and occupied space.

Traffic light detection at intersections is the third problem solved. Our autonomous car was equipped with two monocular cameras behind the windshield to maximize the field of view. The detection is steered by the vehicle's planning module and digital maps annotated with the GPS positions of upcoming traffic lights. The traffic light detection system has also been tested with the stereo camera in order to map the locations of previously unknown traffic lights.

The fourth and last problem solved is estimating the car's ego-motion based on the stereo camera's range measurements and sparse optical flow. The presented algorithm yields the car's linear and angular velocity with reasonable accuracy, offering a low-cost alternative to some parts of the Applanix POS LV 220 reference system.

The integral parts of all presented algorithms rely only on visual data. Thus, the methods are applicable not only to our vehicle but also to other autonomous cars and robotic platforms.
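To make the occupancy-grid idea mentioned above concrete, the following sketch shows one common way such a grid can be derived from stereo range data: 3D points are binned into a 2D grid around the vehicle, and cells are marked as drivable or occupied depending on the height of the points they contain. This is a minimal illustration under assumed parameters (grid size, cell size, height threshold), not the segmentation method developed in the thesis.

```python
import numpy as np

# Illustrative sketch only: project stereo range points into a 2D occupancy
# grid centred on the car and label cells as drivable or occupied.
GRID_SIZE = 200          # cells per side (hypothetical)
CELL_SIZE = 0.25         # metres per cell (hypothetical)
HEIGHT_THRESHOLD = 0.3   # points higher than this above ground count as obstacles

def build_occupancy_grid(points_xyz):
    """points_xyz: (N, 3) array of 3D points in the car frame (x forward, y left, z up)."""
    grid = np.zeros((GRID_SIZE, GRID_SIZE), dtype=np.int8)  # 0 = unknown
    # Convert metric coordinates to grid indices, with the car at the grid centre.
    ix = (points_xyz[:, 0] / CELL_SIZE + GRID_SIZE / 2).astype(int)
    iy = (points_xyz[:, 1] / CELL_SIZE + GRID_SIZE / 2).astype(int)
    valid = (ix >= 0) & (ix < GRID_SIZE) & (iy >= 0) & (iy < GRID_SIZE)
    ix, iy, z = ix[valid], iy[valid], points_xyz[valid, 2]
    # Ground-level returns mark drivable space; elevated returns mark obstacles.
    low, high = z <= HEIGHT_THRESHOLD, z > HEIGHT_THRESHOLD
    grid[ix[low], iy[low]] = 1   # drivable
    grid[ix[high], iy[high]] = 2 # occupied (overrides drivable in the same cell)
    return grid
```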
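Likewise, the ego-motion estimation from stereo range data and sparse optical flow can be illustrated with a generic scheme: 3D points triangulated in consecutive frames are aligned with a least-squares rigid transform, whose inverse gives the camera motion and hence linear and angular velocity. This sketch uses the standard Kabsch/SVD alignment as a stand-in; the actual algorithm and its handling of outliers are described in the thesis itself.

```python
import numpy as np

# Illustrative sketch: ego-motion from two sets of corresponding 3D points
# (matched via sparse optical flow and triangulated by the stereo camera).

def rigid_transform(P_prev, P_curr):
    """Least-squares R, t with P_curr ~ R @ P_prev + t (Kabsch algorithm)."""
    c_prev, c_curr = P_prev.mean(axis=0), P_curr.mean(axis=0)
    H = (P_prev - c_prev).T @ (P_curr - c_curr)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_curr - R @ c_prev
    return R, t

def ego_motion(P_prev, P_curr, dt):
    """Linear velocity (m/s) and yaw rate (rad/s), assuming z is the up axis."""
    R, t = rigid_transform(P_prev, P_curr)
    # The camera's motion is the inverse of the apparent motion of the static scene.
    linear_velocity = -R.T @ t / dt
    yaw_rate = -np.arctan2(R[1, 0], R[0, 0]) / dt
    return linear_velocity, yaw_rate
```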