Use of image feedback loops for real time terrain feature extraction

Wolfgang Baera, Lynne Greweb, Neil Rowea

aNaval Postgraduate School, Code CS, Monterey, CA 93943

bCSU Monterey Bay, 100 Campus Center, Seaside, CA 95010

 

Abstract

By utilizing images calculated on-the-fly as a filter, improvements in the real-time performance of object measurement and feature extraction can be achieved for automated aerial photograph analysis. The process requires the rapid calculation of images from an existing terrain database. The calculated images are then compared to incoming sensor data. The difference between the calculated and sensor image is then utilized as a parallel error signal for updating the state of knowledge of the objects and features measured.

The advantage of this image feedback technique is that the calculation of sensor realistic perspective views from parameterized object models is easier than the direct interpretation of complex images. The feedback technique effectively eliminates what is already known from the measurement signal and thereby reduces the amount of data which must be processed by pattern recognition techniques by orders of magnitude.

The paper presents the mathematical description of the image feedback technique and estimates the update frame rates which can be expected for real-time applications. We then discuss the incremental software development approach and the system design we are using for implementing the technique. The state of the current system is presented along with a discussion of experiments and experiences gained in building large-scale high-resolution terrain databases. The paper concludes by defining future research areas that need to be addressed for improving performance and accuracy.

Keywords: Pattern Recognition, Filter Images, Terrain Database Generation, Feature Extraction

 

1. Introduction

Automated 3D model construction of real world objects from sensor inputs has proven to be a difficult task for computers. The same task appears to be easily and effortlessly accomplished by the human brain. We propose to bridge the gap between human and machine capabilities through a flexible and scalable approach within which the problem can be incrementally solved. The proposed approach automates a standard method in interactive 3D model building we call image feedback. Put simply, image feedback presents the operator with a synthetic 2D view of a 3D model and requests a correction of the model based upon the difference between the synthetic image and sensor measurements from the real world.

Starting with a computer system designed to support interactive 3D world model building using the image feedback algorithm, we propose to automate the process by systematically eliminating interactive operations. The development path for such a system represents a new and comprehensive approach to automatic pattern identification and object extraction. This process is particularly useful for 3D terrain modeling, and we will emphasize this application throughout the paper.

1.1. Background

Terrain model construction typically requires digitized image data and sufficient information to calculate depth. Many kinds of sensors have been used for calculating 3D models, including stereo systems, range sensors, FLIR, LADAR, etc. [Ill98]. Except for stereo, these systems do not yield the photometric information required for 3D terrain modeling. Unfortunately, many difficulties are inherent in the stereo process, including the presence of occlusion, non-overlapping regions, and the accuracy of registration. In addition, it is not always possible to obtain registered stereo pairs. By far the most easily obtained sensor reading is the single 2D image, taken from different perspectives, of possibly non-overlapping regions of the model, and at different times. We therefore designed a system that performs updates on a 3D model using single 2D images, the current state of the 3D model, and knowledge of the approximate location of the camera.

A number of contributions to the construction of 3D models have been made via fusion of multiple range/depth images taken from different perspectives [Egg96, Ill98]. This assumes that range data is already available. Much of the previous work that uses 2D images to construct 3D models involves stereo vision [All95, Cha91, Jit96, Ill98, Sen98]. Stereo vision assumes that two or more registered images are available. Image point or feature correspondences between the images are created and used with the known camera parameters to calculate the underlying depth values of each set of matched points or features.

Other contributions to the construction of 3D models using 2D images include "shape from shading" and "shape from highlights" [Zhe96]. The practical use of both of these methods is limited: many assumptions, either about the object being modeled or about the controlled sensing configuration, must be made, and these do not apply realistically to the modeling of natural objects such as terrain.

Another method using 2D images for the calculation of 3D models, more closely related to our work, is commonly called "depth from silhouettes", "shape from silhouettes", or "shape from contour". Examples of this work can be found in [Ill98, Sle93, Zhe96, Wol94]. The basic premise is that from a single 2D image an object's contour or silhouette can be detected and, using the known camera parameters, back-projected into a semi-infinite 3D volume. This volume can be repeatedly "clipped" down to reflect the true 3D volume of the object using the silhouettes of the same object taken from different 2D perspective images. Typically, a "shape from silhouette" modeling system will consist of a movable platform (usually rotating) on which the object sits and which moves to present different perspective views to a stationary camera. To aid in the detection of the object's silhouettes, such systems often use a colored background (i.e., green-screen technology). Obviously, these kinds of constraints will not work for the modeling of many kinds of objects, such as terrain. In the case of terrain there is no clear separation of background from object, so detecting silhouettes can be difficult.

In [She92], a system is described which attempts to locate 3D objects from single 2D images. Of interest in this system is the projection of a hypothesized 3D object at a hypothesized perspective to produce a 2D image that is compared with the input 2D image. This concept of projecting a 3D model and comparing it in the 2D domain with an input image is used in our system, albeit for the purpose of updating the 3D model rather than detecting the 3D model in a scene.

In [Sai97], a system is described which attempts to construct simple 3D polyhedral models using only box and pyramid primitives given a set of 2D input images. Instead of using stereo or other shape-from-X approaches, a hypothesize-and-test procedure is invoked. Specifically, a set of model primitives and their relations are hypothesized to represent the scene. The perspective projection of this model is calculated for each 2D input image of the scene and the two are compared in terms of their "edge information". The hypothesis that yields the best comparison is taken as the model which represents the object in the series of 2D images. This kind of hypothesis method will only work for extremely simple scenes, as evidenced by the reported results in [Sai97]. However, it is important in that, like [She92], [Sai97] also compares a 3D model to 2D input images in the 2D domain, as does our system.

The literature confirms that 3D model building is an active research area in which only specialized solutions are available. For users who need a 3D object model now, the options are succinctly summarized in an article by Illingworth and Hilton [Ill98]. These are:

    1. Use a CAD/CAM package and draw it.
    2. Measure your objects using photogrammetric techniques.
    3. Buy it "off the shelf".

We have used option (2) to build large 3D terrain data models [Baer95] in order to generate sensor-realistic real-time perspective views for focal plane guided weapons testing. Such efforts cost several thousand dollars per square kilometer and were extremely operator intensive. The main time sink occurred during the quality control phase, in which image feedback was used to validate the accuracy of terrain models by simple visual inspection. System requirements for the automation of 3D terrain generation using image feedback were first analyzed by one of us [Baer93]. The results showed that, though theoretically possible, compute power capable of delivering on the order of 2000 polygons per second would be required to automate the generation of a 3D terrain model from a remotely piloted vehicle carrying a video camera and flying at 60 miles per hour.

High-speed video-realistic perspective view generators, originally developed on transputer-based parallel processing systems [Baer91], have recently been implemented on a PC-based platform. The rapid and continuing increase in low-cost computer speed has made such outrageous performance requirements more realistic. This is especially true when we recognize that full automation may be a valiant research goal, but partial automation can save a lot of money now.

Image feedback is an approach which can be partially automated on current systems. It is also a development algorithm which lends itself to incremental implementation and provides a path toward full automation in the future. The remainder of this paper describes image feedback, the development path, and the progress made thus far.

2. Image Feedback Algorithm

A block diagram of the image feedback algorithm is shown in figure 1.

[Figure 1: Image feedback algorithm block diagram]

The circle on the left represents the objective world. From this world a sensor takes a measurement and produces a stream of image data. On the far right is a set of databases in which our knowledge of the objective world, and of the sensors we use to view it, is stored. This knowledge is represented as P parameters. These databases also contain a set of Q parameters used to track our estimate of the quality of the values stored in the P parameters.

Processing starts with a database initialization function, which loads our best current knowledge of the world into the computer and usually requires operator data entry. Once initialized, the information in the database is used to generate a picture in the perspective of the sensor used. The calculated picture (Vc[i][j]) is then compared with the measurement view (Vm[i][j]) and an error view (Ve[i][j]) is generated. The measured and error views are then analyzed by the block labeled 3D Model Update Algorithms. It is the job of these algorithms to analyze the error signal and estimate the best update for the parameter set which stores our knowledge of the world, as well as of the conditions under which the measurement was taken. After an update occurs, the processing cycle is repeated until the error signal is reduced to a minimum. A minimum error indicates that the measurement information has been exhausted and that a quality calculation should be made to record the certainty with which the information is known. Once processing for one view is complete, the next measurement is analyzed in the same fashion. The processing continues until all input measurements are exhausted.
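This cycle can be sketched in a few lines of code. The sketch below is ours, not the authors' implementation: `render_view` stands in for the perspective view generator, the P database is flattened into a parameter vector, and a crude per-parameter numerical descent stands in for the 3D Model Update Algorithms block.

```python
import numpy as np

def image_feedback(measurements, render_view, P0,
                   eps=1e-4, max_iters=200, step=1e-2, dP=1e-3):
    """Minimal image-feedback loop (sketch, not the authors' code).

    measurements -- iterable of (Vm, camera) pairs: measured view + camera data
    render_view  -- perspective view generator: (P, camera) -> calculated view Vc
    P0           -- initial parameter vector (the P database, flattened)
    Returns the updated parameters and a crude per-view quality record (Q).
    """
    P = np.asarray(P0, dtype=float)
    Q = []
    for Vm, camera in measurements:
        err = np.inf
        for _ in range(max_iters):
            Vc = render_view(P, camera)          # calculated view Vc[i][j]
            Ve = Vm - Vc                         # error view Ve[i][j]
            err = np.mean(Ve ** 2)
            if err < eps:                        # minimum error reached:
                break                            # measurement info exhausted
            # Stand-in for the 3D Model Update Algorithms: numerical
            # descent on each parameter in turn.
            for k in range(P.size):
                probe = P.copy()
                probe[k] += dP
                dE = (np.mean((Vm - render_view(probe, camera)) ** 2) - err) / dP
                P[k] -= step * dE
        Q.append(err)                            # record residual as quality
    return P, Q
```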

The challenge in this processing scheme is to find algorithms appropriate for the 3D Model Update function. Identifying the need to develop update algorithms based on image differences is one of the central purposes of this paper. From a qualitative viewpoint, the algorithms we are seeking are a superset of the direct image analysis algorithms discussed in the introduction. Clearly, if the Perspective View Generator is a null function and only generates Vc[i][j] = 0 values, or a blank image, then the error function would be identical to the measurement and the 3D Model Update Algorithms would reduce to the direct image analysis schemes. Conversely, if the database is correct and the Perspective View Generator accurate, then the calculated picture will be identical to the incoming measurement. In this case the error signal is zero and the only function performed by the 3D Model Update Algorithms will be to increase the confidence in our knowledge.

Including perspective view feedback in the image analysis processing is analogous to building a large parallel recursive filter. From this perspective, image feedback is viewed as a continuous process designed to improve our knowledge of the objective world. In the next section we provide a description of the algorithm in mathematical terms.

2.1. Mathematical Description of the Image Feedback Approach

We start the analysis by writing the global error function, E[T](Pk), for a series of image measurements.
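The printed equation did not survive reproduction here; reconstructed from the definitions below and the 1/T normalization referred to in section 2.4, it plausibly reads:

$$E[T](P_k) \;=\; \frac{1}{T}\sum_{t=1}^{T}\sum_{i,j} V_e[i,j,t](P_k)^2, \qquad V_e[i,j,t](P_k) \;=\; V_m[i,j,t] - V_c[i,j,t](P_k) \qquad (2\text{-}1)$$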

Where:

i, j - array coordinates of the image

Pk - all real-world parameters of all types (Po, Pt, Pe, Ps)

T - sample number of the last measurement image

t - sample summation variable, summed from 1 to T

Ve, Vm, Vc - error, measurement, and calculated view arrays, respectively

We are using square brackets [] to imply discrete variables and parentheses () to imply continuous variables. The error view Ve[i,j,t](Pk) represents an array of functions which are evaluated using our state of knowledge, Pk, and summed over all pixels (i,j) in all measured images. At each measurement time, the state-of-knowledge parameters are adjusted by the 3D Model Update function to make the global error function a minimum. This represents our best estimate of the real world and the solution to our problem.

Rather than performing the global error minimization as a single operation on all the measurement images at once, it is preferable to update our state of knowledge each time a new measurement is made. Figure 2 shows the sequence of processing steps followed at each measurement.

[Figure 2 (diagram): the global error E plotted against the Pk axis. The error is at a minimum E[T,Pk[T]] at point 1; adding measurement T+1 raises it to E[T+1,Pk[T]] at point 2; the correction ΔPk[T] moves the parameters to the new minimum E[T+1,Pk[T+1]] at point 3. Small excursions ±δPk about a point probe the local slope.]

Figure 2 Recursive Error Processing Steps

At the end of measurement T the system has estimated parameters Pk[T] which minimize the global error function E[T,Pk[T]]. This state is represented by the circled "1" in figure 2. The global error function, shown as two lines emanating from this point, is at a minimum. Any change in the Pk parameters increases the error.

Assuming we have found a set of Pk[T]'s around which the error function is a minimum, we now add the next measurement Vm[i,j,T+1]. The new average error is:
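The displayed equation is missing from our copy; written as a running average consistent with (2-1), it should be:

$$E[T\!+\!1](P_k[T]) \;=\; \frac{1}{T+1}\left(T\,E[T](P_k[T]) \;+\; \sum_{i,j} V_e[i,j,T\!+\!1](P_k[T])^2\right) \qquad (2\text{-}2)$$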

This error is calculated with the new measurement at T+1 using the old real-world parameters estimated at time T. Adding the new measurement moves the cumulative error from point 1 to point 2 in figure 2. At this point the cumulative error is not generally a minimum. We must now find a real-world parameter correction, ΔPk[T+1], so that the new values of the real-world parameters, Pk[T+1] = Pk[T] + ΔPk[T+1], make the new global error a minimum. Graphically this means moving along the line from point 2 to point 3 in figure 2. Mathematically this can be expressed as an equation for the real-world parameter correction in terms of the new error and the error partials as follows:
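The equation itself is also missing; from the discussion that follows it plausibly has the Newton-step form below, in which the first term expands, via (2-2), to the new error image times its partials (E[T] is already at a minimum), and the bracketed matrix combines the adjustment of the accumulated error with that of the current error:

$$0 \;=\; \frac{\partial E[T\!+\!1]}{\partial P_k}\bigg|_{P_k[T]} \;+\; \sum_{k'}\left[\frac{\partial^{2} E[T\!+\!1]}{\partial P_k\,\partial P_{k'}}\right]_{P_k[T]} \Delta P_{k'}[T\!+\!1] \qquad (2\text{-}3)$$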

The use of the partial derivative notation requires some explanation. The derivative of the cumulative error function at E[T](Pk[T]) is not defined analytically, since the function has a sharp corner at the minimum. However, if we approximate it by calculating the change in the function when adding a small increment δPk[T] and dividing the result by δPk[T], a value for the derivative can be assigned at every discrete point. Though we are using partial derivative notation, we are using it as a shortcut for a numerical algorithm which is implemented with the perspective view generation function Vc(Pk).
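As a numerical recipe, the partial just described amounts to a forward difference driven through the view generator. A minimal sketch in our notation (`render_view` and the parameter vector carry over from the earlier sketch):

```python
import numpy as np

def error_partial(P, k, Vm, camera, render_view, dP=1e-3):
    """Forward-difference estimate of dVe/dPk via the view generator.

    Returns an array over pixels (i, j): the change in the error view
    per unit change in the k'th parameter.
    """
    Ve0 = Vm - render_view(P, camera)        # error view at P
    P1 = np.array(P, dtype=float)
    P1[k] += dP                              # perturb the k'th parameter
    Ve1 = Vm - render_view(P1, camera)       # error view at P + dPk
    return (Ve1 - Ve0) / dP                  # per-pixel partial, array in i, j
```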

Let us examine the significance of formula 2-3. The first term is the error image calculated with the old real-world parameter estimate at time T and the new measurement at time T+1. The second term contains the adjustment to the accumulated error up to time T plus the adjustment to the current error Ve due to a correction in the real-world parameters ΔPk. The second term is written with primes on some of the k's to indicate that more than one parameter is involved: the numerator of the partial derivatives contains many P's and the denominator is one of them. The bracketed term therefore carries indexes k and k', and a sum over the k' indexes is intended. If we could calculate the inverse of this matrix and multiply through, equation 2-3 would become a direct formula for the changes ΔPk[T+1] required to make the new real-world parameters Pk[T+1] give a new minimum error E[T+1,Pk[T+1]].

It should be noted that in the case where the new measurement shows no error, Ve = 0, the parameters are already at the minimum and no correction is necessary. Thus ΔPk is also zero.

2.2. Global Error Partial Derivative

From the definition of the global error we can write the global error partial as:
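The displayed equation is missing; differentiating (2-1) gives:

$$\frac{\partial E[T]}{\partial P_k} \;=\; \frac{2}{T}\sum_{t=1}^{T}\sum_{i,j} V_e[i,j,t](P_k)\,\frac{\partial V_e[i,j,t]}{\partial P_k}$$

where the factor ∂Ve[i,j,t]/∂Pk is the three-dimensional array discussed next.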

The array of error partials under the sum is three-dimensional in i, j, t. Each individual element represents the contribution that a change in the k'th real-world parameter has on the (i,j)'th pixel in the t'th measurement. As mentioned above, it is calculated numerically by varying the k'th parameter and executing the perspective view algorithm.

The significance of this value is that it stores the memory of all the previous estimates. A new measurement does not directly change a corresponding real-world parameter but rather contributes to the change. The magnitude of the contribution is weighted by the global error partial multiplied by the number of previous measurements. This value can then be used as the estimation quality parameter designated by the letter Q in figure 1.

2.3. Classification of Real World Variables

In principle we could use equation 2-3 to calculate the real-world corrections required to update the real-world database as a direct mathematical calculation. In practice the simplicity of these equations hides many complexities associated with the large number of variables designated by k, and with the fact that the error functions are highly nonlinear, so analytic derivatives cannot be hoped for. Even though we claimed initially that the perspective view generation function can be implemented as a very rapid computation on modern machines, blindly executing each variation Vc(Pk + δPk) when k becomes very large is a hopeless task. Luckily, by understanding the nature of the variables referred to, we find that most of these calculations need not be done.

Simplifications come about in two ways. First, the global error partials are often zero, since the only real-world parameters which can have an effect on the error are those which represent portions of the real world actually seen. Varying a parameter describing an invisible surface does not affect the error calculation. Second, changes in many of the parameters will only affect one pixel or a small group of pixels.

For example, the reflectivity of a surface element will only affect the pixels in which the surface element shows up. Assuming the real-world surface element shows up only in measurement images 3 and 4, at pixels (ia,ja) and (ib,jb) respectively, the entire global partial derivative reduces to two terms.
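With the sums collapsed to those two pixels, the partial of (2-1) reduces (again our reconstruction of the missing display) to:

$$\frac{\partial E[T]}{\partial P_k} \;=\; \frac{2}{T}\left(V_e[i_a,j_a,3]\,\frac{\partial V_e[i_a,j_a,3]}{\partial P_k} \;+\; V_e[i_b,j_b,4]\,\frac{\partial V_e[i_b,j_b,4]}{\partial P_k}\right)$$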

For a typical aerial survey, in which a ground element shows up in two or three overlap regions, this drastically reduces the computational load of the numerical estimation.

The examination of the global error partials also provides a classification scheme useful in identifying the order in which estimation should be done. As mentioned above, the global error partial is a three-dimensional array in i, j, t for each parameter k.

Parameters for which all the elements are nonzero are sensor parameters such as focal length. These should be estimated first.

Parameters whose nonzero elements have a single t value but span all i, j are parameters such as camera position and attitude, and global weather parameters such as visibility. These should be estimated second.

Parameters which are nonzero for a small number of adjacent pixels and some, usually sequential, measurement values are individual ground descriptors and should be estimated last.

Parameters for which all partials are zero can be ignored. A sketch of this culling and ordering follows.
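The sketch below expresses the ordering as a single pass over the stored partials. The data layout is an assumption of ours: a dict mapping each parameter to the set of (i, j, t) indexes at which its partial is nonzero, plus a threshold SMALL separating local from whole-image support.

```python
SMALL = 64  # pixels per view separating "local" from "whole image" support (assumption)

def classify_parameters(nonzero_partials, n_views):
    """Order parameters for estimation by the support of their error partials.

    nonzero_partials -- dict: parameter id -> set of (i, j, t) indexes
                        at which the stored partial dVe/dPk is nonzero
    n_views          -- total number of measurement images T
    Returns (sensor, per_view, ground) groups, in estimation order;
    parameters with no nonzero partials are culled entirely.
    """
    sensor, per_view, ground = [], [], []
    for k, cells in nonzero_partials.items():
        if not cells:
            continue                           # all partials zero: ignore
        views = {t for (_i, _j, t) in cells}
        if len(cells) / len(views) <= SMALL:
            ground.append(k)                   # few adjacent pixels:
                                               # ground descriptors, estimate last
        elif len(views) < n_views:
            per_view.append(k)                 # whole image, limited views:
                                               # camera position/attitude, second
        else:
            sensor.append(k)                   # every pixel of every view:
                                               # e.g. focal length, estimate first
    return sensor, per_view, ground
```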

2.4. Measurement Weights

Not all pixels in a measurement image are of equal importance. Clouds, for example, obscure the scene, and pixels containing clouds should weigh less in an estimation of terrain features than view portions showing a clear line of sight to the ground. The simplest way to include measurement weights is to include a pixel mask W[i,j,t], as shown below.
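The displayed equation is missing; the mask-weighted analogue of (2-1), consistent with the normalization remark that follows, is:

$$E[T](P_k) \;=\; \frac{\displaystyle\sum_{t=1}^{T}\sum_{i,j} W[i,j,t]\,V_e[i,j,t](P_k)^{2}}{\displaystyle\sum_{t=1}^{T}\sum_{i,j} W[i,j,t]}$$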

Here the sum over the measurement mask in the denominator takes on the role of the normalization factor played by 1/T in the previous equations.

The mathematical discussion in this section is intended to provide a description of the image feedback approach. We have used mathematics as a language to describe a procedure. This procedure consists of updating our best estimate of a 3D world model by reducing the error between a synthetic and a measured image sequence. The approach is general in the sense that all parameters, from camera location to a single surface element reflectivity value, can be treated fundamentally alike. The algorithm is simple in the sense that it only requires the implementation of estimation loops, which search for the local minimum; locality tests, which assure that the minima are meaningful rather than random noise-induced artifacts; and the calculation and storage of error partials, to speed up what would otherwise be random search variations.

We have not written such an algorithm at the present time, nor do we expect a single practical algorithmic solution to be forthcoming on machines available in the near future. Not all possible variations on all possible 3D model parameters can be performed. Instead we will define the software architecture within which the development of practical algorithms can take place.

 

3. Image Feedback System Design

A data flow block diagram for a prototype system is shown in figure 3-1. The right-hand edge of the diagram shows the current database and perspective view generator. The calculated and measured images are subtracted to generate an error image. The calculated, measured, and error images are made available both to the display section and to the Image Feedback algorithms. These are drawn side by side to emphasize the similarity of the roles played by the two sections. Both of these functions generate parameter update instructions, which are interpreted by the parameter update command interpreter in order to update the current database. The processing cycle then repeats.

Figure 3-1 emphasizes the similarity in role between the Image Feedback Algorithms and the display section. Both are designed to allow analysis of measured data and to generate the update instructions. The display section, however, does this job by utilizing the analytic capabilities of an operator in an interactive mode, while the 3D Model Update section incorporates intelligent software to automate the same process.

Figure 3-1 Image Feedback System Data Flow Block Diagram

The domains of effectiveness of these two operations differ considerably. The human operator is adept at recognizing large differences and unforeseen situations, while software approaches excel at fine-tuning solutions which are already defined within known convergence bounds. The job of the display section is then to provide the operator with clear pictures representing the recognition problem at hand and simple commands which allow him to correct the parameters in the database rapidly. The operator's role is to initialize the knowledge of the situation and bring the database parameters to the point at which the automated routines can take over.

3.1. Image Feedback System Operation

The operation of the system is outlined in figure 3-2. The diagram shows three calculation phases which together do the analytic work of the system between an input operation and a save operation. The three calculation phases are described as follows.

Initialization Phase: The initialization phase begins with the reception of a new measurement image and ends with the switch over from interactive to automated calculation. This phase is dominated by operator interactive control and feedback through the CRT display drawn in figure 3-1.

Shown on the upper part of the screen are the measured image on the left and the difference image on the right. The lower portion of the screen contains the calculated image along with a series of control buttons, a map view, and various pop-up dialog boxes as needed.

Operator commands are entered via mouse and keyboard. In general the operator has a choice of allowing automated control over parameter estimation or providing a hand input override.

Automated Update Phase: The automated phase begins with the switch from interactive database parameter editing and ends with the estimation of a database parameter update which has a minimum error.

Quality Control Phase: The quality control phase begins with the presentation of minimum error estimates to the operator and ends with his decision either to re-initialize the process, thus returning to the initialization phase, or to accept the database update and go on to the next measurement image.

As conceived here, the initialization phase represents all operator interactive operations required to set the system up for automated operation. This includes both the gross adjustments (for example, approximate image location and direction) which need to be made by the operator to get the system set up, and the interactive operations required to keep the system locked into the correct local optimization minimum. In this sense the initialization phase commands provide the capability of completing the pattern recognition function in a hand-operated mode. If the system were never guided to a point at which automated lock-on occurs, or if no 3D Model Update algorithms had been developed, then all operations would be performed by a combination of operations from the initialization and quality control phases.

Figure 3-2 Top Level Operation Flow Diagram

Development of the 3D Model Update algorithms can therefore be conceived as a systematic process of eliminating key-strokes from an existing interactive operation. Questions to be asked are:

    1. Can the operator perform the database generation function in the image feedback system environment?
    2. What information does the operator process in order to decide on his operational sequence of key-strokes?
    3. What computer algorithms can be substituted for the human decision process so that the selection of operational sequences is automated?

The emphasis on the definition of interactive operations as an algorithm development guide is based upon the assumption that asking a programmer to code an operation we do not know how to do by hand will not be successful. Hence knowledge of the operations is a prerequisite to algorithm development. In the context of this paper the question is more pointed. In figure 3-1 the operator is presented with three views: the measured, calculated, and difference views. Does the operator actually use the difference between the calculated and measured view to guide his database update commands? If so, what features in the difference view provide the clues that guide his work?

The alternative and more traditional approach to feature extraction is simply to look at the measured view directly. Typically, rules are developed to identify the expected appearance of objects. For example, a round red object with a brown linear protrusion might be an apple. The measured image is segmented into blobs of color. The blobs are tested for color, roundness, and linearity. Any round red blob is a possible apple, and if in addition a linear brown blob is found touching or embedded in the red blob, the probability that it is an apple rather than a ball is increased. Additional rules can be added to provide ever finer selection of features and increase identification accuracy, but the point is that such rules are applied to the measured image directly.
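For concreteness, the apple rule might look like the following toy classifier. It is entirely illustrative: the blob attributes and the segmentation step are hypothetical, not part of any system described here.

```python
def find_apples(blobs):
    """Toy direct-image rule set; blob attributes are hypothetical.

    blobs -- segmented color blobs, each carrying .color, .roundness,
             .is_linear, and a .touches(other) adjacency test
    Returns (blob, probability-it-is-an-apple) pairs.
    """
    apples = []
    for b in blobs:
        if b.color == "red" and b.roundness > 0.8:
            p = 0.5                                  # round red blob: candidate
            if any(s.color == "brown" and s.is_linear and s.touches(b)
                   for s in blobs):
                p = 0.9                              # brown linear "stem" favors
                                                     # apple over ball
            apples.append((b, p))
    return apples
```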

The image feedback approach expands the domain of information the operator/programmer has to work with: the calculated and difference images are also available. The feature extraction rules can now include features in the difference image as well as in the measured image.

4. Automated Terrain Database Generation

Several experimental setups implementing the image feedback approach have been built [Baer93]. In the past such systems were set up ad hoc in order to automate the quality control phase of 3D terrain database generation. They required networked machines and parallel processing components [Baer91] which were costly to procure, program, and operate. Recently the perspective view generator and display components shown in figure 3-1 have been implemented on a quad Pentium 200 operating under Windows NT. The system can calculate on the order of 30 frames per second for a complex, fully rendered terrain scene if no display is requested.

Figure 4-1 Terrain Database Creation Screen Display

Figure 4-1 shows a typical screen display. Along the upper portion of the screen are the three perspective views: from left to right, the measured, alphanumeric, and synthetic views. The large bottom center window contains a terrain database overview map along with command menu options, the difference view, and colored fans (shown in black) indicating the ground prints of the perspective views.

Utilizing this system, experiments were conducted for estimating navigation parameters with the image feedback approach. High-resolution images acting as measurement images were generated at 1-meter resolution. A terrain database of Ft. Hunter Liggett in central California at 16-meter resolution was utilized as the best current 3D model. The data is roughly equivalent to Defense Mapping Agency DTED Level I elevations combined with SPOT satellite 10-meter imagery.

The synthetic image calculated from the 16-meter database was flown interactively to within an operator-selected error bound. Navigation was then handed over to a simple two-parameter image feedback estimation loop such as outlined in section 2. Table 4-1 shows the convergence time for various error bounds.

Heading Error (deg)   Pitch Error (deg)   Convergence Time Above Nominal (sec)
0                     0                   0 (4.5 nominal)
10.29                 7.17                1.5
19.5                  8.5                 2.1
28.5                  5                   no convergence
~1                    20                  2.2
~1                    24.5                2.4
~1                    28.9                2.9
~1                    -10                 no convergence (lost horizon)

Table 4-1 Image Registration Times for Various Start Errors

The images used a 90-degree zoom angle and contained a small horizon, as shown in figure 4-1. The bore-sight rotation angle was not used in this experiment.

The results indicate that an operator could interactively navigate a synthetic camera to within 20 degrees of the actual camera angles and allow the image feedback algorithm to converge utilizing a 16-meter database. Our simple estimation loop did not converge without the strong horizon contrast in the view. Once image registration is complete, the system can vary the ground descriptor elements to generate 1-meter database updates.
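The two-parameter loop used here can be sketched as a shrinking coordinate search over heading and pitch. This is our reconstruction, not the authors' published loop; `render_view` again stands in for the perspective view generator.

```python
import numpy as np

def register_camera(Vm, heading0, pitch0, render_view,
                    step=1.0, min_step=0.05, eps=1e-4):
    """Two-parameter image-feedback registration (heading, pitch), sketch.

    Shrinking coordinate search minimizing the mean squared error between
    the measured view Vm and the synthetic view at the current angles.
    """
    h, p = heading0, pitch0
    err = np.mean((Vm - render_view(h, p)) ** 2)
    while step > min_step and err > eps:
        improved = False
        for dh, dp in ((step, 0), (-step, 0), (0, step), (0, -step)):
            e = np.mean((Vm - render_view(h + dh, p + dp)) ** 2)
            if e < err:                  # keep any neighbor that lowers error
                h, p, err = h + dh, p + dp, e
                improved = True
        if not improved:
            step /= 2                    # refine once no neighbor improves
    return h, p, err
```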

5. Future Research

The experiments and software development conducted so far have been limited to simple concept experiments. Future research will be required in several areas. These include:

    1. Development of rule-based 3D model update schemes based on measured and synthetic image differences.
    2. Utilization of image segmentation differences, rather than simple pixel subtraction, to generate the global error function.
    3. Development of "lock-on" criteria defining the boundary between the interactive and linear estimation domains in complex environments.
    4. Recursive estimation and adjustment of the global error partials.
    5. Development of culling rules for eliminating ignorable model parameters.

In addition, continued improvement of sensor-realistic perspective view generation rates is desirable. The faster images can be generated, the faster the parameter variations leading to 3D model update directions will be found.

Though we expect future research to yield improvements in accuracy and system performance, the current state of knowledge, together with the incremental development approach outlined in section 3, is adequate to guide the construction of an operational, partially interactive 3D world model generator. Such a system, specifically targeted at high-resolution terrain generation, is expected to be built in the coming year.

 

6. Conclusion

We have described the Image Feedback Approach for the estimation of 3D real-world models from measurement images. The approach consists of two components: first, a straightforward minimum-error estimation algorithm utilizing a fast synthetic image generation capability from a 3D model; second, an algorithm development system which starts with a fully interactive quality control configuration and contains the software "hooks" for the systematic replacement of labor-intensive keyboard functions with program modules.

We have proposed the Image Feedback Approach as a general mechanism for solving pattern and image recognition problems in complex real-world environments. The approach is based upon observing and learning the actions performed by human operators when solving the same problem. We expect increasing compute capacity to favor logically simple, computationally intensive algorithms, and the economics of software development to favor incremental improvement of operational machinery.

Acknowledgement

The authors gratefully acknowledge the US Army TEXCOM, Ft. Hood, TX, and the US Army TRADOC, Monterey, for both the funds and the equipment supporting this work.

References

[All95] C. Allen and I. Leggett, "3D Scene Reconstruction and Object Recognition for use with Autonomously Guided Vehicles (AGVs)," Proceedings of the 1995 IEEE IECON, pp. 1219-1224, 1995.

[Baer91] W. Baer, "Implementation of a Perspective View Generator," Transputing '91, P. Welch et al., eds., Vol. 2, pp. 643-656, IOS Press, Amsterdam, 1991.

[Baer93] W. Baer and J. R. Akin, "An Approach for Real-Time Terrain Database Creation from Aerial Imagery," SPIE Proceedings Vol. 1943-17, OE/Aerospace and Remote Sensing Conference, 12-16 April 1993.

[Baer95] W. Baer and C. Reed, "Global Terrain Database Design for Realistic Imaging Sensor Simulation," Proceedings of the 13th DIS Workshop on Standards for Interoperability of Distributed Simulations, Vol. I, p. 19, Sep 18-22, 1995.

[Cha91] H. Chabbi and G. Masini, "A Combined Use of Regions and Segments to Construct Facets," Proceedings of the 6th International Conference on Image Analysis and Processing, Progress in Image Analysis and Processing II, pp. 334-338, Sept. 1991.

[Egg96] D. Eggert, A. Fitzgibbon, and R. Fisher, "Simultaneous Registration of Multiple Range Views for use in Reverse Engineering," International Conference on Pattern Recognition, pp. 243-247, 1996.

[Ill98] J. Illingworth and A. Hilton, "Looking to Build a Model World," Electronics & Communication Engineering Journal, pp. 103-113, June 1998.

[Jit96] R. Jitly and D. Fraser, "Automated 3D Object Recognition and Dynamic Library Entry/Update System," 3rd IEEE International Conference on Image Processing, pp. 325-328, 1996.

[Sai97] H. Saito and S. Kirihara, "Obtaining polyhedral model by integration of multi-view images via genetic algorithms," SPIE Three-Dimensional Imaging and Laser-based Systems, Vol. 3204, pp. 174-184, Oct. 1997.

[Sen98] K. Sengupta and J. Ohya, "Human Face Structure Estimation from Multiple Images using the 2D Affine Space," International Conference on Automatic Face and Gesture Recognition, pp. 106-111, 1998.

[She92] D. Sheu and A. Bond, "A Generalized Method for 3D Object Location from Single 2D Images," Pattern Recognition, Vol. 25, pp. 771-786, Aug. 1992.

[Sle93] R. Szeliski, "Rapid Octree Construction from Image Sequences," CVGIP: Image Understanding, Vol. 58, No. 1, pp. 23-32, July 1993.


*Correspondence: Baer@cs.nps.navy.mil Tel. 408-656-2209