Research: Imaging

Structure and Motion Estimation

Structure and motion estimation using a camera is a well-known problem in the robotics and computer vision research communities. Many researchers have addressed it with model-based techniques, which require an accurate model of the object. At NCR we have developed novel methods to estimate the pose of targets relative to the camera without a target model. One method uses Euclidean homography relationships and a single known geometric length on the object to estimate pose. By attaching reference frames to objects in the scene, the method is useful in position-based visual servo control, where it allows control of pose with respect to an object. Another method uses a globally exponentially stable, reduced-order observer to estimate feature point coordinates (object structure) from known relative motion. Finally, an unknown input observer has been designed to estimate the motion of a moving object from a moving camera.
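The role of the single known geometric length is to resolve the scale ambiguity inherent in monocular reconstruction. Below is a minimal sketch, with hypothetical values throughout: given feature points and a camera translation recovered only up to scale (e.g. from a homography decomposition), one known metric distance between two of the points fixes the scale factor.

```python
import numpy as np

# Hypothetical up-to-scale reconstruction of four coplanar feature
# points (units are arbitrary at this stage).
points_up_to_scale = np.array([
    [0.0, 0.0, 2.0],
    [0.3, 0.0, 2.0],
    [0.3, 0.2, 2.0],
    [0.0, 0.2, 2.0],
])
# Camera translation from the same decomposition, also up to scale.
t_up_to_scale = np.array([0.1, -0.05, 2.0])

# The single known geometric length on the object: the true metric
# distance between the first two feature points (assumed value).
known_length_m = 0.60

# The ratio of known to reconstructed length fixes the global scale.
reconstructed_length = np.linalg.norm(points_up_to_scale[1] - points_up_to_scale[0])
scale = known_length_m / reconstructed_length

# Applying the scale yields a metric structure and translation.
points_metric = scale * points_up_to_scale
t_metric = scale * t_up_to_scale
```

With these assumed numbers the reconstructed length is 0.3, so the scale factor is 2.0 and every coordinate, including the translation, is doubled into metric units.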


Daisy Chaining

The Euclidean position and orientation (i.e., pose) of an unmanned ground vehicle (UGV) is typically required for autonomous navigation and control. A collaborative visual servo controller has been developed to regulate a UGV to a desired pose using feedback from a moving airborne monocular camera system. In contrast to typical camera configurations used in visual servo control, this controller uses a moving on-board camera viewing a moving target. Multi-view photogrammetric methods are used to develop relationships between the different camera frames and UGV coordinate systems, and Lyapunov-based methods are used to prove asymptotic regulation. These efforts have been extended to a cooperative visual tracking controller for the UGV, with unknown depth compensated using adaptive Lyapunov-based control. The daisy-chaining method is also combined with geometric reconstruction to support objectives such as real-time trajectory generation and obstacle avoidance.
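The chaining of relative poses can be illustrated with homogeneous transformations. In the sketch below, all poses are assumed values: the airborne camera measures both a stationary reference object and the UGV, and composing the two camera-relative measurements expresses the UGV pose in the reference frame, even though neither is measured directly against the other.

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from a rotation and translation."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def rot_z(theta):
    """Rotation about the camera's optical axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

# Hypothetical camera-relative pose measurements:
# a stationary reference object seen by the airborne camera...
T_cam_ref = make_T(rot_z(0.1), [1.0, 0.0, 5.0])
# ...and the UGV seen by the same camera.
T_cam_ugv = make_T(rot_z(-0.2), [0.5, 0.3, 4.0])

# Daisy chaining: composing the two measurements gives the UGV pose
# in the reference object's frame, with the camera frame cancelling out.
T_ref_ugv = np.linalg.inv(T_cam_ref) @ T_cam_ugv
```

The key property is that the intermediate (camera) frame drops out of the composition, so the chain can be extended link by link as the camera moves and new frames enter the view.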



Video Geolocation

The recent growth in portable consumer electronics has resulted in a plethora of image and video content publicly available on the internet that contains information or events of interest to the intelligence community. Due to this “information overload”, however, it is difficult to isolate relevant content. Even when a video has been identified as containing information of interest, the location where the video was recorded is often unavailable. At NCR, we are developing a method to determine the location at which a video was recorded. The process can be partitioned into three steps: (1) structure and motion techniques are used to reconstruct background structures (buildings, roads, etc.) in the scene, (2) an overhead view of the reconstructed structures is generated, and (3) the overhead view is matched against satellite imagery maps.
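The matching step can be sketched as a template search of the generated overhead view over a map grid. The example below is a toy illustration with a hypothetical binary building-footprint map and an exhaustive sum-of-squared-differences search; it is not the matching method actually used.

```python
import numpy as np

# Hypothetical satellite map as a binary building-footprint grid
# (1.0 = building, 0.0 = open ground).
sat_map = np.zeros((8, 8))
sat_map[2:5, 3:6] = 1.0  # a single 3x3 building block

# Step (2) output: the overhead view of the reconstructed structure.
# Here it is the same 3x3 footprint, as if reconstruction were exact.
template = np.ones((3, 3))

# Step (3): slide the overhead view over the map and score each
# placement by sum of squared differences; the minimum is the match.
best_score, best_pos = np.inf, None
th, tw = template.shape
for r in range(sat_map.shape[0] - th + 1):
    for c in range(sat_map.shape[1] - tw + 1):
        patch = sat_map[r:r + th, c:c + tw]
        score = np.sum((patch - template) ** 2)
        if score < best_score:
            best_score, best_pos = score, (r, c)
```

In this toy case the footprint matches perfectly at map position (2, 3) with a score of zero; a real pipeline would additionally search over rotation and scale and use a robust similarity measure.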

Collaborative Vehicles

At NCR, we have developed algorithms for estimating the relative pose and velocity of an object using a single camera without an accurate object model, requiring only a single known length on the object. In collaboration with the Center for Intelligent Machines and Robotics (CIMAR) at the University of Florida, efforts are under way to build an autonomous convoy of robotic vehicles. To aid accurate relative pose and velocity estimation, four IR targets are used. Segmentation and tracking algorithms track the four feature points, and a homography-based algorithm estimates the relative pose. A nonlinear estimator developed by the group estimates the relative object velocity. Issues such as feature points leaving the field of view (FOV) during turns are addressed by using additional side targets. The developed system was demonstrated at the Army air force base in Richmond, Virginia.
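The side-target fallback can be sketched as a visibility check under a pinhole camera model. All geometry, intrinsics, and target positions below are assumed values for illustration: when the rear target's four markers project outside the image during a turn, tracking switches to a side target that remains in view.

```python
import numpy as np

IMG_W, IMG_H = 640, 480  # assumed image resolution

def project(points_cam, f=500.0, cx=320.0, cy=240.0):
    """Pinhole projection of 3-D points in the camera frame (assumed intrinsics)."""
    pts = np.asarray(points_cam, dtype=float)
    u = f * pts[:, 0] / pts[:, 2] + cx
    v = f * pts[:, 1] / pts[:, 2] + cy
    return np.stack([u, v], axis=1)

def all_in_fov(pixels):
    """True only if every one of the four markers lies inside the image."""
    return bool(np.all((pixels[:, 0] >= 0) & (pixels[:, 0] < IMG_W) &
                       (pixels[:, 1] >= 0) & (pixels[:, 1] < IMG_H)))

# Four IR markers of the rear target, 3 m ahead of the camera.
rear_target = np.array([[-0.2, -0.1, 3.0], [0.2, -0.1, 3.0],
                        [0.2, 0.1, 3.0], [-0.2, 0.1, 3.0]])
# During a turn the rear markers swing far off the optical axis...
rear_turning = rear_target + np.array([3.0, 0.0, 0.0])
# ...while a side target on the lead vehicle stays near the image centre.
side_target = np.array([[-0.2, -0.1, 2.0], [0.2, -0.1, 2.0],
                        [0.2, 0.1, 2.0], [-0.2, 0.1, 2.0]])

def choose_target():
    """Prefer the rear target; fall back to the side target when it leaves the FOV."""
    if all_in_fov(project(rear_turning)):
        return "rear"
    if all_in_fov(project(side_target)):
        return "side"
    return "none"
```

With these numbers the rear markers project to u-coordinates near 790 pixels, outside the 640-pixel-wide image, so the check falls through to the side target.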