Research: Imaging
Image-based State Estimation

Imaging systems have become ubiquitous sensors for autonomous systems. An image captured from a camera (or video stream) provides a dense data set that can be interpreted to recover relative position and orientation information about objects in the field-of-view (FOV). However, an image only provides two-dimensional information, and state estimation is required to recover the unmeasured distance from the camera. A challenge to the state estimation problem is that the image dynamics are inherently nonlinear (and nonlinear in the unmeasured state) and can be uncertain (e.g., uncertain camera calibration). NCR research efforts focus on the use of nonlinear observer methods and image geometry methods to solve Structure-from-Motion (SfM), Structure-and-Motion (SaM), range identification, Simultaneous Localization and Mapping (SLAM), and visual odometry problems. Recent and ongoing efforts focus on solving these problems even during intermittent image feedback, when features in the scene are temporarily occluded.
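The range identification problem above can be illustrated with a minimal sketch: under the standard pinhole model, a point appears in the image only through its normalized coordinates (X/Z, Y/Z), so depth Z is unmeasured, but the motion of the feature under known camera velocity depends on 1/Z and can be inverted. All numerical values below (camera velocity, point location) are illustrative assumptions, and the camera is taken to translate without rotating to keep the dynamics simple.

```python
import numpy as np

# Assumed known camera translational velocity (angular velocity = 0 for simplicity)
v = np.array([0.1, -0.05, 0.2])  # m/s, expressed in the camera frame

# True (unmeasured) state: a static 3D point in the camera frame
P = np.array([0.5, 0.3, 4.0])
x, y, Z = P[0] / P[2], P[1] / P[2], P[2]  # normalized image coords and depth

# Image dynamics of a static point under pure camera translation:
#   s_dot = (1/Z) * [ -vx + x*vz,  -vy + y*vz ]
# (depth enters only through the unknown inverse depth 1/Z)
A = np.array([-v[0] + x * v[2], -v[1] + y * v[2]])
s_dot = A / Z  # the feature velocity an image-processing front end would measure

# Range identification: least-squares estimate of 1/Z from the measured flow
inv_Z_hat = (A @ s_dot) / (A @ A)
Z_hat = 1.0 / inv_Z_hat  # recovers Z = 4.0
```

Note that recovery fails when A is zero (e.g., the camera translates directly along the line of sight with no transverse motion); this is the persistence-of-excitation/observability condition that the nonlinear observer methods mentioned above must contend with.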

Visual Servo Control

Unique control challenges exist when attempting to use image feedback in a closed-loop control system. For example, the image coordinates can be used directly as feedback (i.e., so-called image-based visual servo control, or IBVS), which heuristically improves the chance of the features remaining in the field-of-view (FOV) since they are regulated toward the center of the image. However, satisfying the desired feature trajectories can require large camera movements that are not physically realizable, and the control laws can suffer from unpredictable singularities in the image Jacobian. Alternatively, the image coordinates can be related to a Euclidean coordinate system (i.e., so-called position-based visual servo control, or PBVS). Using the relative Euclidean coordinates typically yields physically valid camera trajectories, with known (and hence avoidable) singularities and no local minima. However, there is no explicit control of the image features, and features may leave the FOV, resulting in task failure. NCR efforts focus on the use of homography-based (or 2.5D) approaches that combine the strengths of IBVS and PBVS to yield realizable camera trajectories while still using image coordinates in the feedback. Such approaches have been used to develop image-based controllers that track desired image trajectories for various autonomous systems. A key innovation was the development of daisy-chaining as a means to relate current image features to a keyframe for visual odometry. Research efforts also focus on means to ensure features remain in the FOV during task execution, to provide robustness when features leave the FOV, and to plan paths subject to FOV constraints.
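The image Jacobian and the classical IBVS law referenced above can be sketched concretely. The code below uses the standard interaction matrix for a point feature in normalized coordinates and the textbook control law v = -λ L⁺ e; the feature positions, goals, depths, and gain are illustrative assumptions, not values from NCR work.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Image Jacobian (interaction matrix) of a point feature (x, y) in
    normalized coordinates at depth Z, in the standard textbook form."""
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, goals, depths, gain=0.5):
    """Classical IBVS law: camera twist = -gain * pinv(L) @ (s - s*)."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(goals)).ravel()
    return -gain * np.linalg.pinv(L) @ e  # 6-vector [v; omega]

# Hypothetical example: drive four point features toward the image center
s = [(0.2, 0.1), (-0.15, 0.12), (0.1, -0.2), (-0.1, -0.1)]
s_star = [(0.05, 0.05), (-0.05, 0.05), (0.05, -0.05), (-0.05, -0.05)]
vel = ibvs_velocity(s, s_star, depths=[2.0] * 4)
```

The pseudo-inverse illustrates the singularity issue noted above: when the stacked interaction matrix loses rank along the trajectory, the commanded camera velocity is no longer well conditioned, which is one motivation for the homography-based (2.5D) alternatives.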


Geolocation of Hand-held Video

The focus of this project is to develop methods that can determine the geographic location from which an arbitrary hand-held video was taken. Unlike image-matching approaches, which are ill-suited to dramatically different image perspectives, NCR research efforts focus on methods that extract relative feature point coordinates from the nonlinear image dynamics inherent to any video, reconstruct the scene geometry, and then compare that geometry with geometry available from overhead imagery to obtain the geolocation. This approach converts the camera into a Cartesian sensor that generates a three-dimensional geometric map of the scene, which can be compared to geometric maps derived from nadir images or other auxiliary information. The proposed approach includes: a feature point extraction, refinement, and position/orientation estimation method; nonlinear observer methods to determine the camera motion and the structure of the scene; nonlinear optimization methods for error correction and structure estimation refinement; and the use of image features to reduce possible geolocation matches within a region of interest.
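Comparing a reconstructed scene map against overhead-derived geometry requires resolving the scale ambiguity of monocular reconstruction. A minimal sketch of that alignment step, using the well-known Umeyama closed-form similarity fit (not necessarily the method used in this project), is shown below; the point sets are synthetic assumptions for illustration.

```python
import numpy as np

def umeyama_align(src, dst):
    """Least-squares similarity transform (scale s, rotation R, translation t)
    mapping src points onto dst. Monocular reconstructions are only recovered
    up to scale, so s must be estimated when matching georeferenced geometry."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    Xs, Xd = src - mu_s, dst - mu_d
    cov = Xd.T @ Xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1  # guard against a reflection solution
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / Xs.var(0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

# Synthetic data: geometry from nadir imagery vs. a shifted, scale-ambiguous
# reconstruction of the same scene (no rotation, to keep the check simple)
rng = np.random.default_rng(0)
map_pts = rng.uniform(-10, 10, (20, 3))
recon = (map_pts - np.array([3.0, -1.0, 0.5])) / 2.5
s, R, t = umeyama_align(recon, map_pts)
residual = np.linalg.norm(s * recon @ R.T + t - map_pts)  # ~0 for a true match
```

In a geolocation pipeline, a low post-alignment residual over a candidate region of interest indicates a geometric match; a robust variant (e.g., RANSAC over correspondences) would be needed for real, outlier-contaminated maps.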


Ongoing Projects
AFRL: Privileged Sensing Framework
Prioria: Observer Methods for Image Based Autonomous Navigation
Florida High Tech Corridor Matching Funds

Completed Projects
Laser Technology: A Nonlinear Approach to Image-Based Speed Estimation
BARD (US Israel AG R&D Fund): Enhancement of Sensing Technologies for Selective Tree Fruit Identification and Targeting in Robotic Harvesting Systems
DOE: University Research Program In Robotics For Environmental Restoration & Waste Management
DARPA: Symbiosis of Micro-Robots for Advanced In-Space Operations
Innovative Automation Technologies Inc: Micro Air Vehicle Tether Recovery Apparatus (MAVTRAP): Image-based MAV Capture and Tensegrity System
NGA: Nonlinear Estimation Methods for Geolocation of Hand-held Video
AFRL: Vision-Based Guidance and Control Algorithms Research
UCF: Image-Based Motion Estimation and Tracking for Collaborative Space Assets
AFRL: Simultaneous Localization and Mapping
Eclipse Energy Systems: Eclipse Energy Systems Camera Project
Prioria: Structure, Motion, and Geolocation Estimation for Autonomous Vehicles
NGA: Nonlinear Observer and Learning Methods for Geolocation: A GEOINT Visual Analytics Tool
DOA Natl Inst of Food & AG: NRI: An Integrated Machine Vision-based Control for Citrus Fruit Harvesting Using Enhanced 3D Mapping, Path Planning and Servo Control
Florida High Tech Corridor Matching Funds

Related NCR Image Feedback Publications