Chapter 11
Digital Computational Imaging
Leonid Yaroslavsky
Tel Aviv University, Israel
11.1 Introduction: Present-Day Trends in Imaging

Imaging has always been the primary goal of informational optics. The whole history of optics is, without any exaggeration, a history of creating and perfecting imaging devices. Starting more than 2000 years ago from ancient magnifying glasses, optics has been evolving with ever-increasing speed from Galileo's telescope and van Leeuwenhoek's microscope, through mastering new types of radiation and sensors, to the modern wide variety of imaging methods and devices, of which the most significant are holography, methods of computed tomography, adaptive optics, synthetic aperture and coded aperture imaging, and digital holography.

The main characteristic feature of this latest stage of the evolution of optics is the integration of physical optics with digital computers. With this, informational optics is reaching its maturity: it is becoming digital, and imaging is becoming computational. The following qualities make digital computational imaging an ultimate solution for imaging:

• Processing versatility. Digital computers integrated into optical information processing and imaging systems enable them to perform not only element-wise and integral signal transformations such as spatial Fourier analysis, signal convolution, and correlation, which are characteristic of analog optics, but any operations needed. This eliminates the major limitation of optical information processing and makes optical information processing integrated with digital signal processing virtually unlimited in its capabilities.

• Flexibility and adaptability. No hardware modifications are necessary to reprogram digital computers for solving different tasks. With the same hardware, one can build an arbitrary problem solver by simply selecting or designing an appropriate code for the computer. This feature also makes digital computers an ideal vehicle for processing optical signals adaptively since, with their help, processing can easily be adapted to varying signals, tasks, and end-user requirements.

• Universal digital form of the data. Acquiring and processing quantitative information carried by optical signals and connecting
optical systems to other informational systems and networks is most natural when data are handled in digital form. In the same way that money is a general equivalent in economics, digital signals are the general equivalent in information handling. Thanks to this universal nature, the digital signal is an ideal means for integrating different informational systems.

The main present-day trends in digital computational imaging are as follows:

• Development and implementation of new digital image acquisition, image formation, and image display methods and devices

• Transition from digital image processing to real-time digital video processing

• Widening the front of research toward 3D imaging and 3D video communication

Currently there is a tremendous amount of literature on digital computational imaging, which is impossible to review comprehensively here, so the present-day trends can only be illustrated by some examples. In this chapter, we use as examples three developments in which the present author was directly involved. In Section 11.2 we describe a new family of image sensors that are free of the diffraction limitations of conventional lens-based image sensors and base their image formation capability solely on numerical processing of radiation intensity measurements made by a set of simple radiation sensors with natural cosine-law spatial sensitivity. In Section 11.3 we describe real-time digital video processing for perfecting the visual quality of video streams distorted by camera noise and atmospheric turbulence. For the latter case, the processing not only produces good-quality video for visual analysis but, in addition, makes use of atmospheric turbulence-induced image instabilities to achieve image superresolution beyond the limits defined by the camera sampling rate. In Section 11.4 we present a 3D video communication paradigm based on computer-generated display holograms.
11.2 Opticsless Imaging Using “Smart” Sensors

One can treat images as data that indicate the locations in space and the intensities of sources of radiation. In conventional optical imaging systems, the task of determining the positions of the light sources is solved by lenses, and the task of measuring the light source intensities is solved by light-sensitive plane sensors such as photographic films or CCD/CMOS electronic sensor arrays. A lens directs light from each of the light sources to a corresponding place on the sensor plane, and the intensity sensor's output signal at this place provides an estimate of the light source intensity.

Lenses are wonderful processors of the directional information carried by light rays. They work in parallel with all light sources in their field of view and at the speed of light. However, their high perfection has its price. Light propagation from the lens to the sensor's plane is governed by the laws of diffraction. These limit the capability of the optical imaging system to distinguish light radiated from different light sources and to discriminate closely spaced sources. According to the theory of diffraction, this capability, called the imaging system's resolving power, is determined by the ratio of the product of the light wavelength and the lens's focal distance to the dimensions of the lens. Therefore, good imaging lenses are large and heavy. Perfect lenses that produce low aberrations are also very costly. In addition, lens-based imaging systems have a limited field of view, and lenses are not available for many practically important kinds of radiation such as, for instance, x-rays and radioactive radiation. This motivates the search for opticsless imaging devices.

Recently, H. J. Caulfield and the present author suggested the concept of a new family of opticsless radiation sensors1,2 that exploits the idea of combining the natural cosine-law directional sensitivity of radiation sensors with the computational power of modern computers and digital processors to secure the spatial selectivity required for imaging. These opticsless "smart" (OLS) sensors consist of an array of small elementary subsensors with cosine-law angular selectivity supplemented with a signal processing unit (the sensor's "brain") that processes the subsensors' output signals to produce maximum likelihood (ML) estimates of the spatial locations of radiation sources and their intensities.

Two examples of possible designs of OLS sensors are sketched in Fig. 11.1. Figure 11.1(a) shows an array of elementary subsensors arranged on a curved surface, such as a spherical one. Such an array of subsensors, together with its signal processing unit, is capable of
localizing sources of radiation and measuring their intensities at any distance and within a 4π steradian solid angle. Figure 11.1(b) shows an array of elementary subsensors on a flat surface, together with its signal processing unit, that is capable of measuring the coordinates and intensities of radiation sources at close distances.
Figure 11.1 Two examples of possible designs of opticsless “smart” radiation sensors and their corresponding schematic diagrams: (a) subsensors on a curved (spherical) surface; (b) subsensors on a flat surface.
The working principle of opticsless smart sensors can be explained using a simple special case: locating a given number K of very distant radiation sources with an array of N elementary sensors placed on a curved surface. Consider the 1D model of the sensor geometry sketched in Fig. 11.2. For the nth elementary sensor with cosine-law spatial selectivity placed at angle φ_n with respect to the sensor's "optical" axis, its output signal s_{n,k} in response to a ray of light emanating from the kth source at angle θ_k with respect to the sensor's "optical" axis is proportional to the radiation intensity A_k and to the cosine of the angle θ_{n,k} between the electrical field vector of the ray and the normal to the elementary sensor surface. In addition, this signal contains a random component ν_n that describes the subsensor's inherent noise:

s_{n,k} = A_k \cos\theta_{n,k} + \nu_n = A_k \sin(\varphi_n + \theta_k) + \nu_n , \qquad (11.1)
where cos(·) and sin(·) denote truncated cosine and sine functions that are set equal to zero wherever the ordinary cosine and sine are negative.
Figure 11.2 Geometry of an elementary subsensor of the sensor array.
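To make the signal model concrete, the following sketch (a hypothetical NumPy implementation; the array geometry, noise level, and function name are illustrative assumptions rather than part of the original design) simulates the noisy outputs of a 1D array of subsensors illuminated by K distant point sources according to Eq. (11.1):

    import numpy as np

    def subsensor_signals(phi, theta, intensities, noise_std, rng=None):
        # Eq. (11.1) summed over sources: s_n = sum_k A_k * max(sin(phi_n + theta_k), 0) + nu_n
        # phi: angular positions of the N subsensors; theta: directional angles of the K sources.
        rng = np.random.default_rng() if rng is None else rng
        response = np.clip(np.sin(phi[:, None] + theta[None, :]), 0.0, None)  # truncated sine
        clean = response @ np.asarray(intensities, dtype=float)
        return clean + rng.normal(0.0, noise_std, size=clean.shape)

For example, the readings of a 30-subsensor arc observing three sources could be simulated as subsensor_signals(np.linspace(0, np.pi, 30), np.array([0.3, 0.9, 1.7]), [1.0, 0.7, 0.5], 0.05), where all numerical values are arbitrary.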
The sensor's "brain" collects the output signals {s_n} of all N elementary subsensors and estimates from them the intensities {A_k} and directional angles {θ_k} of the given number K of distant radiation sources. In view of the statistical nature of the random sensor noise, the statistically optimal estimates are the maximum likelihood estimates. Under the assumption that the noise components of different subsensors are statistically independent and normally distributed with the same standard deviation, the maximum likelihood estimates {Â_k, θ̂_k} of the intensities and directional angles of the sources are found as the solution of the following equation:
\{\hat{A}_k, \hat{\theta}_k\} = \arg\min_{\{\hat{A}_k,\hat{\theta}_k\}} \sum_{n=1}^{N} \left[ s_n - \sum_{k=1}^{K} \hat{A}_k \sin(\varphi_n + \hat{\theta}_k) \right]^2 . \qquad (11.2)
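A minimal numerical sketch of the least-squares fit in Eq. (11.2), assuming SciPy's least_squares optimizer and the same truncated-sine response model as above; the initial guesses, bounds, and the function name estimate_sources are arbitrary choices made for illustration, not the authors' implementation:

    import numpy as np
    from scipy.optimize import least_squares

    def estimate_sources(s, phi, K):
        # Fit K intensities and K directional angles to the N readings s, per Eq. (11.2).
        def residuals(p):
            A, theta = p[:K], p[K:]
            model = np.clip(np.sin(phi[:, None] + theta[None, :]), 0.0, None) @ A
            return s - model
        p0 = np.concatenate([np.full(K, s.max()), np.linspace(0.2, 1.0, K)])  # naive start
        sol = least_squares(residuals, p0,
                            bounds=([0.0] * K + [-np.pi] * K,
                                    [np.inf] * K + [np.pi] * K))
        return sol.x[:K], sol.x[K:]   # estimated intensities, estimated angles

For more than one source the cost surface is multimodal, so in practice several restarts from different initial angles may be needed; this reflects the growth of computational complexity with K noted in the text.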
For a single source, an analytical solution of Eq. (11.2) is possible, which means that the computational complexity of estimating the intensity and directional angle of a single light source is of the order of the number of sensors and that the computations can be implemented in quite simple hardware. For a larger number K of sources, the solution of this equation requires optimization in K-dimensional space, and the computational complexity of the estimation of the source parameters grows dramatically with the number of sources.

For locating multiple sources, "smart" sensors can be used in the following three modes:

• "General localization" mode, for localization and intensity estimation of a known number of radiation sources

• "Constellation localization" mode, for localization and intensity estimation of a given number of groups of sources with known mutual positions and intensity relations of the individual sources

• "Imaging" mode, for estimating the intensities of a given number of radiation sources at given locations, such as, for instance, the nodes of a regular grid at certain known distances from the sensor

Some illustrative simulation results of localization of a given number of very distant and very proximal radiation sources by arrays of elementary subsensors are presented in Figs. 11.3 and 11.4, respectively. Figure 11.3(a) shows the results of 100 statistical runs of a model of a spherical array of 30 × 30 subsensors with a signal-to-noise ratio (SNR, defined as the ratio of the signal dynamic range to the noise standard deviation) of 20, used for localization and intensity estimation of three very distant point sources. Clouds of dots on the plot show the spread of the estimates. The simulation results show that, for each source and for reasonably high sensor SNRs, the standard deviations of the estimates of its angular position and intensity are inversely proportional to the SNR, to the source intensity, to the number of sources, and to the square root of the number of elementary subsensors.

Figure 11.3(b) illustrates the work of a spherical OLS sensor in the imaging mode and shows the distribution of estimates of the intensities of an array of sources that form the pattern "OLSS" (opticsless smart sensors). In this experiment, a model of the spherical sensor consisted of 15 × 20 = 300 subsensors arranged within spatial angles ±π longitude and ±π/2.05 latitude, and the array of radiation sources consisted of 19 × 16 sources
with known directional angles within spatial angles ±π/2 longitude and ±π/3 latitude. Each subsensor had a noise standard deviation of 0.01, and the source intensities were 0 or 1.
Figure 11.3 Spherical “smart” sensor in the localization and imaging modes: (a) Simulation results of locating angular positions and intensities of three distant radiation sources using an array of 30 × 30 subsensors uniformly distributed over the surface of a hemisphere; (b) distribution of estimates of intensities of an array of sources that form pattern “OLSS”.
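In the imaging mode, the source directions are known in advance, so the minimization in Eq. (11.2) becomes linear in the intensities and can be solved directly by linear least squares. A minimal sketch, reusing the same assumed response model and illustrative function name:

    import numpy as np

    def imaging_mode_intensities(s, phi, theta_known):
        # Directions theta_known are given; only the source intensities are estimated.
        H = np.clip(np.sin(phi[:, None] + theta_known[None, :]), 0.0, None)
        intensities, *_ = np.linalg.lstsq(H, s, rcond=None)
        return intensities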
Figure 11.4 illustrates the performance of a flat OLS sensor in the imaging mode for imaging a pattern of 10 × 16 sources at different distances, from one to five inter-subsensor distances, from the sensor. The results were obtained with a 1D model of the OLS sensor consisting of 30 elementary subsensors; the source array was scanned by the sensor columnwise. The standard deviation of the subsensors' noise was 0.01 (in units of the source intensity dynamic range [0, 1]). The simulation results show that flat OLS sensors are capable of localization and imaging of radiation sources in their close proximity, at distances of about half of the sensor's dimensions.

As always, there is a trade-off between good and bad features of the OLS sensors. The advantages are as follows:

• No optics are needed, which makes this type of sensor applicable to virtually any type of radiation and wavelength.

• The angle of view (of spherical sensors) is unlimited.

• The resolving power is ultimately determined by the subsensor size and not by diffraction-related limits.
Figure 11.4 Simulation results of imaging of sources that form the pattern “SV” by a flat OLS sensor placed at different distances, from one to five inter-subsensor distances, from the sources.
The cost of these advantages is high computational complexity, especially when good imaging properties for multiple sources are required. However, the inexorable march of Moore's law makes such problems less of an obstacle each year. Furthermore, the required computations can be parallelized, so the computational aspects are not expected to hinder usage.

In conclusion, it is interesting to note that one can find quite a number of examples of opticsless vision among living creatures. Obviously, plants that feature heliotropism, such as sunflowers, must have a sort of "skin" vision to determine the direction to sunlight in order to be able to orient their flowers or leaves accordingly. There are also many animals that have extraocular photoreception. One can also find a number of reports on the phenomenon of "skin" vision in humans, though some of them, referring to paranormal phenomena, may provoke skepticism. The properties of the described OLS sensors show that opticsless vision can, in principle, exist, and they also cast light on its possible mechanisms.

11.3 Digital Video Processing for Image Denoising, Deblurring, and Superresolution

11.3.1 Image and video perfecting: Denoising and deblurring
Common sources of image quality degradation in video sequences are video camera noise and aperture distortions. These degradations can be compensated to a considerable degree by exploiting the spatial and temporal redundancy present in video streams. One of the best video processing algorithms for reaching this goal is a 3D extension of sliding-window local adaptive discrete cosine transform (DCT) domain
filtering, originally developed for image denoising and deblurring.3,4 A flow diagram of such filtering is shown in Fig. 11.5.
[Flow diagram: video frames → sliding 3D window around the current input frame → 3D DCT spectrum → modification of the spectrum → modified spectrum → 3D IDCT for the central pixel → current output frame]
Figure 11.5 Sliding 3D window transform domain filtering.
According to this flow diagram, filtering is carried out in a window that slides both spatially and temporally and that, for each pixel of the currently processed video frame, contains the pixel's spatial neighborhood in the current input frame and in a certain number of preceding and following frames. The size of the 3D space-time window is a user-defined parameter determined by the spatial and temporal correlations in the video frames. At each position of the 3D window, the 3D DCT coefficients of the window data are recursively computed from those of the previous window position. The obtained signal spectral coefficients are then nonlinearly modified according to the principles of empirical Wiener filtering.3 In its simplest implementation, this modification consists of thresholding the transform coefficients: coefficients whose magnitudes do not exceed a certain threshold determined by the noise variance are set to zero. For aperture correction, the remaining coefficients are multiplied by the inverse of the camera frequency response. The modified spectral coefficients are then subjected to an inverse 3D DCT to compute the central pixel of the window. In this way, pixel by pixel, all pixels of the current frame are obtained; then the window starts sliding over the next frame, and so on.
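The following sketch illustrates the filtering principle in brute-force form: at every pixel a 5 × 5 × 5 spatio-temporal cube is transformed with a 3D DCT, small coefficients are zeroed, and the inverse transform yields the central pixel. The recursive sliding-window computation of the DCT coefficients mentioned above and the aperture-correction step are omitted for brevity; the threshold rule and parameter names are illustrative assumptions.

    import numpy as np
    from scipy.fft import dctn, idctn

    def sliding_3d_dct_denoise(frames, win=5, noise_sigma=10.0, thr_factor=3.0):
        # frames: (T, H, W) float array; returns a denoised array of the same shape.
        T, H, W = frames.shape
        r = win // 2
        padded = np.pad(frames, r, mode='reflect')
        out = np.empty_like(frames)
        thr = thr_factor * noise_sigma          # threshold tied to the noise level
        for t in range(T):
            for y in range(H):
                for x in range(W):
                    cube = padded[t:t + win, y:y + win, x:x + win]
                    spec = dctn(cube, norm='ortho')
                    spec[np.abs(spec) < thr] = 0.0   # zero small (noise-dominated) coefficients
                    out[t, y, x] = idctn(spec, norm='ortho')[r, r, r]
        return out

This direct form recomputes every window spectrum from scratch and is therefore much slower than the recursive implementation described in the text, but it produces the same kind of result.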
Figures 11.6 and 11.7 illustrate denoising and deblurring of test and real-life video sequences using 5 × 5 × 5 sliding-window video processing. Corresponding demonstrative movies can be found on the Internet.5
Figure 11.6 Local adaptive 3D sliding-window DCT domain filtering for denoising video sequences. First and second rows: example frames of the initial and noisy test video sequences. Third row: example frames of the restored video obtained using 3D sliding 5 × 5 × 5 spatial/temporal window filtering.
Figure 11.7 Examples of frames of (a) the initial and (b) the restored thermal real-life video sequence, and (c), (d) corresponding magnified fragments of the images. Note the enhanced sharpness of the restored image.
11.3.2 Perfecting and superresolution of turbulent videos
In long-distance observation systems, images and video are frequently damaged by atmospheric turbulence, which causes spatially and temporally chaotic fluctuations of the refractive index of the atmosphere6 and results in chaotic spatial and temporal geometrical distortions of the neighborhoods of all pixels. This geometrical instability of image frames severely degrades the quality of videos and hampers their visual analysis. To make visual analysis possible, it is first of all necessary to stabilize images of stable scenes while preserving the real motion of moving objects that may be present in the scenes. In Ref. 7, methods for generating stabilized videos from turbulent videos were reported. It was also found that, along with image stabilization, image superresolution of stable scenes can be achieved.8

The core of the stabilization and superresolution method is elastic pixelwise registration, with subpixel accuracy, of the available video frames of stable scenes, followed by resampling of the frames according to the registration results. To achieve the required elastic registration, for each current video frame a time window of several preceding and following frames of the video sequence is analyzed. For each pixel of the frame, its x-y displacements in all remaining frames of the window are found using block matching or optical flow methods. The displacement data are then analyzed to derive their statistical parameters and to distinguish pixels that were displaced by atmospheric turbulence from those that belong to really moving objects. This distinction can be made under the assumption that turbulence-induced pixel displacements are relatively small and irregular in time, while displacements caused by real movement are relatively large and, more importantly, contain a component that is regular in time.

On the basis of these measurements, the stabilized and superresolved output frame is generated on a sampling grid built by subdividing the initial sampling grid, whose nodes correspond, for each pixel, to the mean values of the found pixel displacements. Formation of the output frame consists of two steps. In the first step, corresponding pixels from all frames of the time window are placed at the nodes of the subpixel grid according to their found displacements minus the displacement mean values. Because the pixel displacements are chaotic, it may happen in this process that two or more corresponding pixels from different frames have to be placed at the same position in the output frame. In such cases, an outlier-robust estimate of the average, such as the median, can be used in place of those pixels. In the second step, subpixels that remain empty because of a possible
shortage of data in the selected time window are interpolated from the available data. For this interpolation, different methods for interpolation of sparse data can be used, of which discrete sinc interpolation proved to be the most suitable.9

Pixels retrieved from the set of time-window frames contain, in the form of aliasing components, high frequencies outside the image base band defined by the original sampling rate of the input frames. Placing them into their proper subpixel positions dealiases these frequencies and therefore widens the image bandwidth beyond the base band. The more pixels at different subsampling positions are available, the higher the degree of superresolution achieved. The efficiency of the entire processing is illustrated by the results of computer simulations in Fig. 11.8 and for a real-life video in Fig. 11.9.8 As one can see from Fig. 11.8, even from as small a number of frames as 15, a substantial resolution enhancement is potentially possible.
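A schematic sketch of the first, fusion step described above, assuming that the per-frame displacement fields (with their temporal means already subtracted) have been estimated beforehand; the subdivision factor, the argument names, and the use of NaN to mark empty sub-pixels are illustrative choices rather than the published implementation:

    import numpy as np

    def fuse_frames(frames, dx, dy, factor=4):
        # frames: (T, H, W); dx, dy: per-frame, per-pixel displacements (T, H, W) in pixels,
        # with their temporal mean already subtracted.  Returns an (H*factor, W*factor)
        # array with NaN at sub-pixel positions for which no sample was available.
        T, H, W = frames.shape
        buckets = [[[] for _ in range(W * factor)] for _ in range(H * factor)]
        ys, xs = np.mgrid[0:H, 0:W]
        for t in range(T):
            u = np.rint((xs + dx[t]) * factor).astype(int)   # horizontal sub-pixel index
            v = np.rint((ys + dy[t]) * factor).astype(int)   # vertical sub-pixel index
            ok = (u >= 0) & (u < W * factor) & (v >= 0) & (v < H * factor)
            for yy, xx, val in zip(v[ok], u[ok], frames[t][ok]):
                buckets[yy][xx].append(val)
        out = np.full((H * factor, W * factor), np.nan)
        for i in range(H * factor):
            for j in range(W * factor):
                if buckets[i][j]:
                    out[i, j] = np.median(buckets[i][j])   # median resolves collisions robustly
        return out

The NaN entries correspond to the sub-pixel positions that must afterwards be filled by interpolation of sparse data, e.g., by the discrete sinc interpolation mentioned above.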
Figure 11.8 Illustrative simulation results of resolution enhancement of turbulent video frames: (a) Initial high-resolution frame; (b)–(d) examples of low-resolution frames distorted by simulated random local displacements with standard deviation 0.5 interpixel distance; (e) resolution-enhanced frame obtained by the described fusion process from 15 low-resolution frames; and (f) final output frame obtained by interpolation of samples that are missing in frame (e).
Figure 11.9 Illustration of resolution enhancement of real-life video frames. (a) A 256 × 256 pixel fragment of the stabilized frame; (b) the same fragment after resolution enhancement; (c) difference between images (a) and (b), which shows the edges enhanced in image (b) as compared to image (a); and (d) aperture-corrected version of image (b).
Figures 11.9(a) and (b) show a 256 × 256 pixel fragment of a stabilized frame of real-life video obtained as a temporal median over 117 frames, and the same fragment obtained after replacing its pixels, as described above, by pixels taken from those 117 frames according to their displacements. Since the resolution improvement can be appreciated visually only on a high-quality display, Fig. 11.9(c) presents the difference between these two images, which clearly shows the edges enhanced in Fig. 11.9(b) compared to Fig. 11.9(a). Figure 11.9(d) presents the final result of the processing after reinterpolation and aperture correction are
implemented under the assumption that the camera sensor fill factor, i.e., the ratio of the size of the individual sensing cells to the interpixel distance, is close to 1.

In evaluating the results obtained for real-life video, one should take into consideration that substantial resolution enhancement in the described superresolution fusion process can be expected only if the fill factor of the video acquisition camera is small enough. The camera fill factor determines the degree of low-pass filtering introduced by the camera. Due to this low-pass filtering, both the image high frequencies in the base band and the aliasing high-frequency components that come into the base band through image sampling are attenuated. Those aliasing components can be recovered and returned to their true frequencies outside the base band in the described superresolution process, but only if they have not been lost to the camera low-pass filtering. The larger the fill factor, the harder it is to recover the resolution losses. In the described simulation experiment, the camera fill factor was 0.05, whereas in reality the fill factors of monochrome cameras are usually close to 1.

11.4 Computer-Generated Holograms and 3D Video Communication

11.4.1 Computer-generated display holograms: an ultimate solution for 3D visualization and communication
There is no doubt that the ultimate solution for 3D visualization is holographic imaging. This is the only method capable of reproducing, under the most natural viewing conditions, 3D images that have all the visual properties of the original objects, including full parallax, and that are visually separated from the display device. Three-dimensional visual communication and display can be achieved through the generation, at the viewer site, of holograms out of data that contain all the relevant information on the scene to be viewed. Digital computers are an ideal means for converting data on 3D scenes into optical holograms for visual perception.10,11

At the core of the 3D digital holographic visual communication paradigm is the understanding that, for 3D visual communication, one does not need to record a physical hologram of the scene at the scene site; rather, the hologram has to be generated at the viewer site. To this end, one needs to collect at the scene site, and to transmit to the viewer site, a set of data sufficient to generate at the viewer site a synthetic hologram of the scene for viewing. The major requirement for computer-generated
display holograms is that they should provide natural viewing conditions for the human visual system and, in particular, separation of the reconstructed images from the display device.

A crucial issue in transmitting the data needed for the synthesis of display holograms at the viewer site is the volume of data to be collected and transmitted, and the computational complexity of the hologram synthesis. The upper bound on the amount of data that needs to be collected at the scene site and transmitted to the viewer site is, in principle, the full volumetric description of the scene geometry and optical properties. However, owing to the limitations of the human visual system, a realistic estimate of the amount of data needed for generating a display hologram of the scene is orders of magnitude lower than this upper bound. This also has a direct impact on the computational complexity of the hologram synthesis.
11.4.2 Feasible solutions for generating synthetic display holograms that fit human vision
Several computationally inexpensive and, at the same time, quite satisfactory solutions for creating a 3D visual sensation with synthetic display holograms can be considered.10

11.4.2.1 Multiple-view compound macroholograms
In this method, the scene to be viewed is described by means of multiple view images taken from different directions within the required viewing angle, and for each image a hologram is synthesized separately, with account taken of its position within the viewing angle (see Fig. 11.10). The size of each hologram has to be about the size of the viewer's eye pupil. These elementary holograms reconstruct different aspects of the scene from different directions, determined by their positions within the viewing angle. The set of such holograms is then used to build a composite, or mosaic, macrohologram. It is essential that, for scenes given by their mathematical models, the well-developed methods and software/hardware tools of modern computer graphics can be used for fast generation of the multiple view images needed for computing the elementary holograms; a simplified sketch of the hologram synthesis itself is given after Fig. 11.10.
Figure 11.10 The principle of the synthesis of composite macroholograms: a 2D mosaic of elementary holograms of eye-pupil size, each reconstructing the object for one of the multiple view directions.
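As a rough illustration of the idea (not the encoding procedure actually used in Ref. 12), the sketch below synthesizes a phase-only Fourier hologram (kinoform) for each view image and tiles the results into a mosaic; the random diffuser phase, the square-root amplitude mapping, and the function names are assumptions made for the example.

    import numpy as np

    def elementary_kinoform(view_image, rng=None):
        # Phase-only (kinoform) Fourier hologram of one view image: a random diffuser
        # phase is attached to the image amplitude and only the spectrum phase is kept.
        rng = np.random.default_rng() if rng is None else rng
        diffuser = np.exp(1j * 2 * np.pi * rng.random(view_image.shape))
        spectrum = np.fft.fftshift(np.fft.fft2(np.sqrt(view_image) * diffuser))
        return np.angle(spectrum)

    def compound_macrohologram(views):
        # views: a 2D nested list of equally sized view images, one per viewing direction.
        # The elementary kinoforms are tiled into a 2D mosaic (cf. Fig. 11.10).
        rows = [np.hstack([elementary_kinoform(v) for v in row]) for row in views]
        return np.vstack(rows)

Each tile would then be recorded at the eye-pupil scale, as in the experiment described below.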
In Ref. 12, an experiment on the synthesis of such a composite macrohologram composed of 900 elementary holograms of 256 × 256 pixels was reported. The hologram contained 30 × 30 views, within a spatial angle of −90 deg to 90 deg, of an object in the form of a cube. The synthesized holograms were encoded as purely phase holograms, or kinoforms,10,11 and recorded with a pixel size of 12.5 µm. The physical size of the elementary holograms was 3.2 × 3.2 mm. Each elementary hologram was repeated 7 × 7 times in the recording process, to a size of 22.4 × 22.4 mm, so the size of the entire hologram was 672 × 672 mm. When properly illuminated, the hologram can be used for viewing the reconstructed scene from different angles as if, for instance, through a window (Fig. 11.11). Looking through the hologram with two eyes, viewers are able to see a 3D image of a cube floating in the air (Fig. 11.11).

11.4.2.2 Composite stereoholograms
A special case of multiple-view mosaic macroholograms is composite stereoholograms, which are synthetic Fourier holograms that reproduce only horizontal parallax.10,13,15 When viewed with two eyes, they are capable of creating a 3D sensation thanks to stereoscopic vision. With such holograms arranged in a circular composite hologram, a full 360 deg view of the scene can be achieved. Figure 11.12 shows such a 360 deg computer-generated hologram and examples of the images from which it was synthesized.14 The entire hologram was composed of 1152 fragmentary kinoform holograms of 1024 × 1024 pixels
recorded with a pixel size of 12.5 µm. A viewer looking through the hologram from different positions sees a 3D image of an object in the form of a "molecule" of six "atoms" differently arranged in space. When the hologram is rotated, the viewer sees a 3D image of the "atoms" floating in the air and continuously rotating in space, and easily recognizes the rotation direction. A sample of the described 360 deg computer-generated hologram is stored at the MIT Museum.14
Figure 11.11 Viewing a compound computer-generated hologram using as an illumination source a miniature white-light lamp (left) and one of the views reconstructed from the hologram (right).
Figure 11.12 Synthetic computer-generated circular stereo-macrohologram (left) and two views of the scene (right).

11.4.2.3 “Programmed diffuser” holograms
The "programmed diffuser" method was suggested for the synthesis of computer-generated Fourier display holograms capable of reconstructing different views of 3D objects whose surfaces scatter light diffusely.10,16 This method assumes that the objects to be viewed are specified in the object coordinate system (x,y,z) by their "macro" shape z(x,y), by the magnitude of the object reflectivity
distribution A(x,y) in the object plane (x,y) tangent to the object surface, and by the directivity pattern of the diffuse component of its surface. The diffuse light scattering from the object surface is simulated by adding to the object wavefront phase, as defined by the object macroshape, a pseudo-random phase component (a "programmed diffuser") whose correlation function corresponds to the given directivity pattern of the diffuse object surface. This pseudo-random phase component is combined with the deterministic phase component defined by the object shape to form the phase distribution of the object wavefront. Holograms synthesized with this method exhibit a spatial inhomogeneity that is directly determined by the geometrical shape and the diffuse properties of the object surface. This allows imitation of viewing the object from different directions by means of reconstructing different fragments of its "programmed diffuser" hologram, as illustrated in Fig. 11.13 for an object in the form of a hemisphere.
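A minimal sketch of the "programmed diffuser" idea under simplifying assumptions: the correlated pseudo-random phase is obtained by low-pass filtering white noise with a Gaussian kernel (a stand-in for the prescribed correlation function), and the deterministic phase is taken proportional to the macro-shape z(x,y); the wavelength, correlation radius, and phase scaling are placeholders rather than values from the original work.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def programmed_diffuser_hologram(A, z, wavelength=0.5e-6, corr_sigma=2.0, rng=None):
        # A: object reflectivity magnitude A(x, y); z: macro-shape z(x, y) in meters.
        # corr_sigma (pixels) controls the correlation radius of the pseudo-random phase,
        # standing in for the prescribed directivity pattern of the diffuse surface.
        rng = np.random.default_rng() if rng is None else rng
        diffuser_phase = gaussian_filter(rng.standard_normal(A.shape), corr_sigma)
        diffuser_phase *= 2 * np.pi / diffuser_phase.std()     # scale to a few radians spread
        shape_phase = 4 * np.pi * z / wavelength               # reflection geometry assumed
        wavefront = A * np.exp(1j * (shape_phase + diffuser_phase))
        return np.fft.fftshift(np.fft.fft2(wavefront))          # Fourier hologram of the wavefront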
Figure 11.13 An object’s image (upper left), its shape (center left), its “programmed diffuser” hologram (bottom left), and nine images reconstructed from northwest, north, northeast, west, center, east, southwest, south, and southeast fragments of the hologram (right). Circles on the image of the hologram encircle different fragments of the hologram.
References

1. Caulfield, H.J., Yaroslavsky, L.P., and Ludman, J., "Brainy light sensors with no diffraction limitations," http://arxiv.org/abs/physics/0703099.
2. Caulfield, H.J., and Yaroslavsky, L.P., "Flat accurate nonimaging point locator," Digital Holography and Three-Dimensional Imaging 2007 Conference, Vancouver, June 18–20, 2007.
3. Yaroslavsky, L., Digital Holography and Digital Image Processing, Kluwer, Boston (2004).
4. Yaroslavsky, L., "Space variant and adaptive transform domain image and video restoration methods," in Advances in Signal Transforms: Theory and Applications, J. Astola and L. Yaroslavsky, Eds., Hindawi Publishing Corporation, http://www.hindawi.com (2007).
5. Shteinman, A., http://www.eng.tau.ac.il/~yaro/Shtainman/house_t18_b555.avi
6. Roggemann, M.C., and Welsh, B., Imaging Through Turbulence, CRC Press, Boca Raton (1996).
7. Fishbain, B., Yaroslavsky, L.P., and Ideses, I.A., "Real time stabilization of long range observation system turbulent video," J. Real-Time Image Process. 2(1), 11–22 (2007).
8. Yaroslavsky, L., Fishbain, B., Shabat, G., and Ideses, I., "Superresolution in turbulent videos: making profit from damage," Opt. Lett. 32(21), 3038–3040 (2007).
9. Yaroslavsky, L., "Fast discrete sinc-interpolation: a gold standard for image resampling," in Advances in Signal Transforms: Theory and Applications, J. Astola and L. Yaroslavsky, Eds., EURASIP Book Series on Signal Processing and Communications, Hindawi Publishing Corporation, http://www.hindawi.com (2007).
10. Yaroslavskii, L., and Merzlyakov, N., Methods of Digital Holography, Consultants Bureau, New York (1980).
11. Lee, W.H., "Computer generated holograms: techniques and applications," in Progress in Optics, E. Wolf, Ed., Vol. XVI, pp. 121–231, North-Holland, Amsterdam (1978).
12. Karnaukhov, V.N., Merzlyakov, N.S., Mozerov, M.G., Yaroslavsky, L.P., Dimitrov, L.I., and Wenger, E., "Digital display macroholograms," Comput. Opt. 14–15(Pt. 2), 31–36 (1995).
13. Karnaukhov, V.N., Merzlyakov, N.S., and Yaroslavsky, L.P., "Three dimensional computer generated holographic movie," Sov. Tech. Phys. Lett. 2, 169–172 (1976).
14. Yaroslavskii, L.P., 360-degree computer generated display hologram, http://museumcollections.mit.edu/FMPro?-lay=Web&wok=107443&format=detail.html&-db=ObjectsDB.fp5&-find
15. Yatagai, T., "Stereoscopic approach to 3-D display using computer-generated holograms," Appl. Opt. 15, 2722–2729 (1976).
16. Merzlyakov, N.S., and Yaroslavsky, L.P., "Simulation of light spots on diffuse surfaces by means of a 'programmed diffuser'," Sov. Phys. Tech. Phys. 47, 1263–1269 (1977).