dc.contributor.author
Hundelshausen, Felix von
dc.date.accessioned
2018-06-07T21:26:06Z
dc.date.available
2004-09-20T00:00:00.649Z
dc.identifier.uri
https://refubium.fu-berlin.de/handle/fub188/7893
dc.identifier.uri
http://dx.doi.org/10.17169/refubium-12092
dc.description
Title-page, Preface, Abstract and Table of Contents
1 Introduction 5
1.1 Motivation 6
1.2 The Middle Size League 7
1.3 The FU-Fighters' Middle Size Robot 10
1.4 Localizing the Robot by the Field Lines 13
1.5 Organization of the Thesis 14
2 Related Work 15
2.1 Navigation Using Laser Range Scanners 16
2.2 Navigation Using GPS and DGPS 18
2.3 Navigation Using Radar 20
2.4 Navigation Using Infrared Proximity Sensors 21
2.5 Navigation Using Ultrasonic Sensors 21
2.6 Navigation by Vision 22
2.7 Typical System Architectures in RoboCup 29
2.8 Existing Methods for Field Line Extraction 30
2.8.1 Applying Thresholding and Thinning to Extract the Lines 31
2.8.2 Using the Canny Edge Detector to Extract the Lines 32
2.8.3 Using the Radial Scan Method to Extract the Lines 37
2.8.4 Using a Model to Extract the Lines 38
2.9 Existing Methods for Robot Self-Localization Using the Field Lines 40
2.9.1 Monte Carlo Localization 40
2.9.2 Global Localization by Matching Straight Lines 44
2.9.3 Relative Localization 45
2.10 Methods for Feature Detection 45
3 A new Algorithm: Tracking Regions 48
3.1 Extending the Region Growing Paradigm 50
3.1.1 Region Growing by Pixel Aggregation 51
3.1.2 The Key Observation 51
3.1.3 Shrinking Regions 53
3.1.4 Alternating Shrinking and Growing 54
3.1.5 Applicability 58
3.1.6 Running Time 59
3.1.7 Controlling the Tracking 59
3.1.8 Homogeneity Criterion 60
3.1.9 Tracking Several Regions 60
3.2 Boundary Extraction 62
3.3 Extracting the Field Lines by Tracking Regions 66
3.4 Results 69
4 A new Localization Method Using Shape Information 71
4.1 Three Layers for Robot Self-Localization 71
4.2 The Robot's System State 73
4.3 Coordinate Systems and Transformations 74
4.4 Relationship between Wheel Rotations and the Robot's Movement 78
4.5 The Dynamic Model 83
4.6 Using a Kalman Filter to Fuse the Three Layers 85
4.7 Fusing Delayed Measurements 90
4.7.1 Splitting the Kalman Cycle 92
4.7.2 Explicit Representation of Time 93
4.8 Layer 1: Odometric Information 96
4.9 The Observation Model 97
4.9.1 The Omni-Directional Vision System 97
4.9.2 The Distance Function 98
4.9.3 Predicting the Location of Objects in the Image 99
4.9.4 Transformation of Two-Dimensional Points on the Field 99
4.9.5 Transformation of Arbitrary 3D Points 101
4.10 Transforming the Contours into World Space 102
4.11 Modelling the Field Lines 105
4.12 Layer 2: Relative Visual Localization 107
4.12.1 MATRIX: A Force Field Pattern Approach 107
4.12.2 Adapting the System Dynamics Approach 117
4.13 Layer 3: Feature Recognition 130
4.13.1 Representation of the Line Contours 130
4.13.2 Quality and Quantity of Features 132
4.13.3 Direct Pose Inference by High-Level Features 134
4.13.4 Smoothing the Lines 139
4.13.5 Splitting the Lines 139
4.13.6 Corner Detection 142
4.13.7 Classification 144
4.13.8 Constructing Arcs and Straight Lines 144
4.13.9 Grouping Arcs and Detecting the Center Circle 148
4.13.10 Refining the Initial Solution of the Circle 150
4.13.11 Determining the Principal Directions 153
4.13.12 Discarding Unreliable and Grouping Collinear Lines 153
4.13.13 Detecting the Corners of the Penalty Area 157
4.13.14 Results of the Feature Detection 158
4.14 Results of the Overall Localization 162
5 Conclusions and Future Work 164
5.1 Considering the System Dynamics Approach 165
5.1.1 Automatic Modeling 165
5.1.2 Automatic Learning of Feature Detectors 166
5.1.3 The Problem of Feature Selection 166
5.2 Top-Down Versus Bottom-Up Methods 167
5.3 Criticizing the Proposed Feature Recognition Approach 168
5.4 The Problem of Light 171
6 Summary of Contributions 176
A Pseudo-code of Described Algorithms 178
B Source Code of the Region Tracking Algorithm 182
Bibliography 202
dc.description.abstract
This thesis was written in the context of our participation in the Middle
Size League of RoboCup, in which autonomous mobile robots play soccer. In
particular, it describes the computer vision system of the robots, which
supplies the necessary visual information. The main contribution is a new
image processing technique that allows large regions to be tracked
efficiently. The method yields the precise shape of the regions and forms
the basis of several other methods described in this thesis. These include a
new localization method that enables the robots to determine their precise
position by perceiving the white field lines. In particular, the robots can
recognize a whole palette of features in real time, including the center
circle, T-junctions, and corners. When no feature can be recognized, another
new method, the "MATRIX" method, is applied. It uses a pre-computed force
field to match the perceived field lines to the corresponding lines in a
model. Overall localization is then performed in a three-level fusion
process that precisely takes into account the different time delays in the
system. The approach was demonstrated in practice: the system played over
ten games at the 2004 world championship in Lisbon and achieved fourth
place. Although the system was conceived for participation in RoboCup, the
region tracking method in particular should prove useful in many other
applications.
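The region tracking idea summarized above — grow a region by pixel aggregation, then, in the next frame, shrink it to a still-valid core and grow it again — can be sketched roughly as follows. This is a minimal illustration, not the thesis implementation: the 4-connected grid, the homogeneity test, and all function names are assumptions made for the example.

```python
from collections import deque

def grow(image, seeds, homogeneous):
    """Region growing by pixel aggregation: flood-fill from seed pixels,
    adding 4-connected neighbors that pass the homogeneity test.
    Seeds are assumed to be valid already (e.g. produced by shrink)."""
    h, w = len(image), len(image[0])
    region = set(seeds)
    frontier = deque(seeds)
    while frontier:
        x, y = frontier.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (0 <= nx < w and 0 <= ny < h
                    and (nx, ny) not in region
                    and homogeneous(image[ny][nx])):
                region.add((nx, ny))
                frontier.append((nx, ny))
    return region

def shrink(image, region, homogeneous):
    """Shrinking step: drop pixels that no longer satisfy the criterion
    in the new frame, keeping a still-valid core as the next seed set."""
    return {(x, y) for (x, y) in region if homogeneous(image[y][x])}

def track(frames, seeds, homogeneous):
    """Alternate shrinking and growing across frames so the region
    follows slowly moving image structure without re-seeding."""
    region = set(seeds)
    for frame in frames:
        core = shrink(frame, region, homogeneous)
        region = grow(frame, core, homogeneous)
        yield region
```

As long as the region in consecutive frames overlaps, the shrunken core re-seeds the growing step, so the full region shape is recovered in every frame at a cost roughly proportional to the region boundary that changed.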
en
dc.description.abstract
This dissertation was written in connection with our participation in the
RoboCup Middle Size League, in which autonomous mobile robots play soccer.
In particular, it deals with the computer-based visual processing that
supplies the necessary visual information. The most important research
contribution is a new image processing technique that allows large image
regions to be tracked efficiently. The method yields the precise shape of
the regions and forms the basis of several other methods described in the
dissertation. These include a new localization method that allows the robots
to determine their precise position by perceiving the white field lines. In
particular, the robots are able to recognize a whole palette of visual
features in real time, including the center circle, T-junctions, and
corners. In situations where no feature can be recognized, another method is
applied: it uses a pre-computed force field to reconcile the perceived lines
with a corresponding line model. Overall localization is performed in a
three-stage fusion process that precisely accounts for the time delays in
the system. The system proved itself in practice: at the 2004 world
championship in Lisbon, where fourth place was achieved, it successfully
played more than ten games. Although the system was conceived for
participation in RoboCup, the "Region Tracking" algorithm in particular will
also be of great value for other applications.
en
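The force-field matching idea (the "MATRIX" method named in the English abstract above) can be illustrated in miniature: pre-compute, for every cell of a grid over the field model, a vector pointing toward the nearest model line, then average these forces over the perceived line points to obtain a translation correction. Everything below is an illustrative assumption — the grid resolution, the brute-force nearest-point search standing in for a distance transform, and the omission of the rotational component are simplifications, not the thesis implementation.

```python
def force_field(model_points, xs, ys):
    """Pre-compute, for each grid cell, the vector to the nearest model
    line point (a crude stand-in for a distance-transform gradient)."""
    field = {}
    for gx in xs:
        for gy in ys:
            tx, ty = min(model_points,
                         key=lambda p: (p[0] - gx) ** 2 + (p[1] - gy) ** 2)
            field[(gx, gy)] = (tx - gx, ty - gy)
    return field

def pose_correction(field, perceived):
    """Average the force vectors sampled at the perceived line points;
    the mean vector pulls the perception toward the model lines."""
    fx = fy = 0.0
    for (px, py) in perceived:
        dx, dy = field[(round(px), round(py))]
        fx += dx
        fy += dy
    n = len(perceived)
    return fx / n, fy / n
```

In a full matcher the same sampled forces would also contribute a torque about the robot's position to correct orientation; iterating the correction lets the perceived lines settle onto the model even when no single high-level feature is visible.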
dc.rights.uri
http://www.fu-berlin.de/sites/refubium/rechtliches/Nutzungsbedingungen
dc.subject
computer vision
dc.subject.ddc
000 Informatik, Informationswissenschaft, allgemeine Werke::000 Informatik, Wissen, Systeme::004 Datenverarbeitung; Informatik
dc.title
Computer Vision for Autonomous Mobile Robots
dc.contributor.firstReferee
Prof. Dr. Raúl Rojas
dc.contributor.furtherReferee
Prof. Dr. Ernst Dieter Dickmanns
dc.date.accepted
2004-09-15
dc.date.embargoEnd
2004-09-22
dc.identifier.urn
urn:nbn:de:kobv:188-fudissthesis000000001335-3
dc.title.translated
Computersehen für autonome mobile Roboter
de
refubium.affiliation
Mathematik und Informatik
de
refubium.mycore.fudocsId
FUDISS_thesis_000000001335
refubium.mycore.transfer
http://www.diss.fu-berlin.de/2004/243/
refubium.mycore.derivateId
FUDISS_derivate_000000001335
dcterms.accessRights.dnb
free
dcterms.accessRights.openaire
open access