dc.contributor.author
Langner, Tobias
dc.date.accessioned
2020-03-02T08:04:02Z
dc.date.available
2020-03-02T08:04:02Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/26792
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-26549
dc.description.abstract
This thesis describes my work in the AutoNOMOS project, dedicated to the advancement of autonomous driving. Our team of researchers demonstrated autonomous driving on the German Autobahn and in the inner-city traffic of Berlin, mostly relying on LIDAR scanners and an Applanix POS LV 220 for self-localization. These sensors are highly precise, yet expensive. This prompts two questions: Can we achieve reliable environmental perception at hardware costs suitable for mass-market applications? And secondly, can we achieve human-like or even superior sensing capabilities using only passive, human-like sensing modalities?
To address these questions, this thesis presents computer vision algorithms based on an embedded stereo camera that was developed in our research group. It processes stereoscopic images and computes range data in real time. The sensor's hardware is inexpensive, yet powerful.
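As a minimal illustration of how a stereo camera yields range data, the sketch below applies the standard pinhole-stereo relation Z = f * B / d; the focal length and baseline values are assumed placeholders, not the parameters of the camera developed in our group.

    import numpy as np

    # Pinhole stereo geometry: depth Z = f * B / d, with focal length f [px],
    # baseline B [m], and disparity d [px]. The constants are illustrative only.
    FOCAL_PX = 700.0    # assumed focal length
    BASELINE_M = 0.25   # assumed baseline

    def disparity_to_depth(disparity):
        """Convert a disparity map [px] to a depth map [m]; inf where d <= 0."""
        disparity = np.asarray(disparity, dtype=np.float64)
        depth = np.full(disparity.shape, np.inf)
        valid = disparity > 0
        depth[valid] = FOCAL_PX * BASELINE_M / disparity[valid]
        return depth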
This dissertation covers four important aspects of visual environmental perception. First, the camera's pose must be precisely calibrated in order to interpret visual data within the car's coordinate system. For this purpose, an automatic calibration is proposed that does not rely on a calibration target, but rather uses the image data from typical road scenes.
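The abstract does not spell out the calibration procedure. As a hedged sketch of one common targetless ingredient, the camera's height above the road (and, depending on axis conventions, its pitch and roll) can be recovered by fitting a plane to stereo points sampled from the road surface. The function below is an illustration under those assumptions, not the thesis's method.

    import numpy as np

    def fit_ground_plane(road_points):
        """Least-squares plane fit to Nx3 stereo points on the road surface.

        Returns the unit plane normal and the camera's height above the road.
        How pitch and roll are read off the normal depends on the (here
        unspecified) axis conventions of the car and camera frames.
        """
        centroid = road_points.mean(axis=0)
        # The right singular vector with the smallest singular value is the
        # direction of least variance, i.e. the plane normal.
        _, _, vt = np.linalg.svd(road_points - centroid)
        normal = vt[-1]
        offset = -normal @ centroid        # plane equation: normal . x + offset = 0
        if offset < 0:                     # orient the normal toward the camera
            normal, offset = -normal, -offset
        height = offset                    # camera height above the road plane
        return normal, height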
The second problem solved is free-space and obstacle recognition. This thesis presents a generic object detection system based on the stereo camera's range data. It detects obstacles using a hybrid 2D/3D segmentation method and employs an occupancy grid to compute drivable and occupied space.
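As a rough sketch of the occupancy-grid idea (with illustrative cell size and height threshold, not the thesis's parameters), 3D points from the range data can be binned into a bird's-eye grid whose cells are marked occupied when they contain points above the road surface:

    import numpy as np

    def occupancy_grid(points, cell=0.2, extent=40.0, min_height=0.3):
        """Bin 3D points (x forward, y left, z up, metres) into a 2D grid.

        Cells containing points higher than min_height are marked occupied;
        the remaining observed area counts as candidate drivable free space.
        """
        n = int(2 * extent / cell)
        grid = np.zeros((n, n), dtype=np.uint8)   # 0 = free/unknown, 1 = occupied
        obstacles = points[points[:, 2] > min_height]
        ix = ((obstacles[:, 0] + extent) / cell).astype(int)
        iy = ((obstacles[:, 1] + extent) / cell).astype(int)
        inside = (ix >= 0) & (ix < n) & (iy >= 0) & (iy < n)
        grid[ix[inside], iy[inside]] = 1
        return grid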
The third problem solved is traffic light detection at intersections. Our autonomous car was equipped with two monocular cameras behind the windshield to cover a maximal field of view. The detection is steered by the vehicle's planning module and digital maps annotated with the GPS positions of oncoming traffic lights. The traffic light detection system has also been tested with the stereo camera in order to map the locations of yet unknown traffic lights.
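A hedged sketch of how such map-steered detection can work: given the vehicle pose and a map-annotated light position, the expected image location is obtained by camera projection, and the detector then only searches a region of interest around it. All names and interfaces below are assumptions for illustration, not the thesis's code.

    import numpy as np

    def traffic_light_roi(light_world, cam_T_world, K, roi_px=64):
        """Project a map-annotated traffic light position into the image.

        light_world: 3-vector in world/map coordinates (e.g. from the
                     GPS-annotated digital map)
        cam_T_world: 4x4 transform taking world points to camera coordinates
        K:           3x3 camera intrinsic matrix (z forward in camera frame)
        Returns a square pixel ROI around the expected image position, or
        None when the light lies behind the camera.
        """
        p = cam_T_world @ np.append(light_world, 1.0)
        if p[2] <= 0:                      # behind the camera: nothing to search
            return None
        u, v, w = K @ p[:3]                # homogeneous pixel coordinates
        u, v = u / w, v / w
        half = roi_px // 2
        return (int(u) - half, int(v) - half, int(u) + half, int(v) + half)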
The fourth and last problem solved is estimating the car's ego-motion based on the stereo camera's range measurements and sparse optical flow. The presented algorithm yields the car's linear and angular velocity with reasonable accuracy, offering a low-cost alternative to some parts of the Applanix POS LV 220 reference system.
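The abstract gives no algorithmic details; one standard building block consistent with this setup is sketched below. 3D points from the stereo range data, matched across consecutive frames by sparse optical flow, are aligned by a least-squares rigid fit (the Kabsch algorithm), from which linear and angular velocity follow after dividing by the frame interval. This is an illustrative sketch, not necessarily the thesis's algorithm.

    import numpy as np

    def rigid_motion(prev_pts, curr_pts):
        """Least-squares rigid alignment (Kabsch) of matched Nx3 point sets.

        prev_pts, curr_pts: 3D points from stereo range data, matched across
        two frames via sparse optical flow. Returns R and t such that
        curr ~ R @ prev + t; scaling by 1/dt gives velocity estimates.
        """
        mu_p, mu_c = prev_pts.mean(axis=0), curr_pts.mean(axis=0)
        H = (prev_pts - mu_p).T @ (curr_pts - mu_c)     # cross-covariance
        U, _, Vt = np.linalg.svd(H)
        # Guard against a reflection in the least-squares solution.
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T
        t = mu_c - R @ mu_p
        return R, t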
The integral parts of all presented algorithms rely only on visual data. Thus, the methods are applicable not only to our vehicle but also to other autonomous cars and robotic platforms.
en
dc.format.extent
iv, 103 pages
dc.rights.uri
http://www.fu-berlin.de/sites/refubium/rechtliches/Nutzungsbedingungen
dc.subject
Computer Vision
en
dc.subject
Autonomous Vehicles
en
dc.subject.ddc
000 Computer science, information & general works::000 Computer science, knowledge & systems::005 Computer programming, programs & data
dc.title
Visual Perception for Autonomous Driving
dc.contributor.gender
male
dc.contributor.firstReferee
Rojas, Raúl
dc.contributor.furtherReferee
Hild, Manfred
dc.date.accepted
2020-01-29
dc.identifier.urn
urn:nbn:de:kobv:188-refubium-26792-0
dc.title.translated
Visuelle Wahrnehmung für autonomes Fahren
refubium.affiliation
Mathematik und Informatik
dcterms.accessRights.dnb
free
dcterms.accessRights.openaire
open access