Some applications require a user to consider both geometric and image information. Consider, for example, an interface that presents both a three-dimensional model of an object, built from a CAD model or laser-range data, and an image of the same object, gathered from a surveillance camera or a carefully calibrated photograph. The easiest way to provide these information sets to a user is in separate, side-by-side displays. A more effective alternative combines both types of information in a single, integrated display by projecting the image onto the model. A perspective transformation that assigns image coordinates to model vertices can visually engrave the image onto the corresponding surfaces of the model. Combining the image and geometric information in this manner provides several advantages: it allows an operator to visually confirm the accuracy of the modeling geometry, and it provides realistic textures for the geometric model. We describe several procedural methods for implementing such integrated displays and discuss the benefits gained from applying these techniques to projects including robotic hazardous waste remediation, the virtual exploration of Mars, and remote mobile robot control.
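The core operation described above, assigning image coordinates to model vertices via a perspective transformation, can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes the camera is described by a standard 3x4 projection matrix (intrinsics times extrinsics), and the function name and interface are hypothetical.

```python
import numpy as np

def project_vertices(P, vertices, image_size):
    """Project 3-D model vertices into normalized image (texture) coordinates.

    P          -- 3x4 camera projection matrix (intrinsics @ [R|t]); assumed known
    vertices   -- (N, 3) array of model-space vertex positions
    image_size -- (width, height) of the source image in pixels
    Returns an (N, 2) array of texture coordinates, in [0, 1] for vertices
    that fall inside the image.
    """
    # Lift vertices to homogeneous coordinates: (N, 4)
    verts_h = np.hstack([vertices, np.ones((len(vertices), 1))])
    # Apply the perspective transformation: (N, 3) homogeneous image coords
    proj = verts_h @ P.T
    # Perspective divide yields pixel coordinates
    pixels = proj[:, :2] / proj[:, 2:3]
    # Normalize by image dimensions to get texture-space coordinates
    w, h = image_size
    return pixels / np.array([w, h], dtype=float)
```

Each resulting coordinate pair can then be stored as the texture coordinate of its vertex, so that rendering the textured model "engraves" the image onto the corresponding surfaces.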