Publications Abstracts – Conference
A passively illuminated scene presents a variety of photon pathways, direct and indirect, which convey varying levels of information about the scene across different dimensions of the light field. In indirect passive imaging, the object of interest is occluded from the imager, which has no control over the illumination. Using a second-order (non-linear) image formation model, we experimentally demonstrate the feasibility of passive indirect diffuse imaging.
For details, see Shu Yang, Kwan Kit Lee, and Amit Ashok, “Passive indirect diffuse imaging,” Proc. SPIE 11138, Wavelets and Sparsity XVIII, 111380U, 2019. DOI: 10.1117/12.2529956
Tsang et al. have shown that the Fisher information of the separation of two incoherent point sources, below the Rayleigh limit, is finite and achievable using optical mode measurements.1 However, recent claims that any partial coherence between the sources, no matter how small, necessarily drives the Fisher information to zero as the source separation decreases below the Rayleigh limit toward zero have proved controversial.2, 3 Thus, the impact of partial coherence on photon-counting optical modal measurements merits further exploration. In this work, we derive the mutual coherence function (image plane) of two partially coherent point sources and find the classical Fisher information of the source separation using both direct image-plane and photon-counting modal measurements. A classical Fisher information analysis of partially coherent source(s) leads to some rather surprising results for two-point source resolution as the source separation approaches zero. We find that the magnitude of the Fisher information depends strongly on the degree of (positive/negative) partial coherence, which can be understood using an intuitive semi-classical analysis of direct image-plane and photon-counting modal measurements. We also provide an error analysis of the maximum-likelihood estimators for both measurements.
For details, see Kwan Kit Lee and Amit Ashok, “Surpassing Rayleigh limit: Fisher information analysis of partially coherent source(s),” Proc. SPIE 11136, Optics and Photonics for Information Processing XIII, 111360H (6 September 2019); https://doi.org/10.1117/12.2528540
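As a rough numerical illustration of the incoherent baseline behind this analysis (a minimal numpy sketch under textbook assumptions, not code or data from the paper): for two equal-brightness incoherent point sources blurred by a Gaussian PSF, the per-photon classical Fisher information of the separation under direct image-plane detection collapses toward zero below the Rayleigh limit, while Tsang et al.'s modal bound stays constant at 1/(4σ²).

```python
import numpy as np

def direct_imaging_fisher(s, sigma=1.0):
    """Per-photon classical Fisher information of the separation s for direct
    image-plane detection of two equal, incoherent point sources, each blurred
    by a Gaussian PSF of width sigma (1D model)."""
    grid = np.linspace(-12, 12, 20001)
    dx = grid[1] - grid[0]

    def g(x, mu):
        return np.exp(-(x - mu)**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)

    # Photon arrival density: equal-weight mixture of the two shifted PSFs.
    p = 0.5 * g(grid, -s / 2) + 0.5 * g(grid, s / 2)
    # d p / d s via the chain rule through each component mean (-s/2 and +s/2).
    dp = 0.5 * g(grid, -s / 2) * (grid + s / 2) / sigma**2 * (-0.5) \
       + 0.5 * g(grid,  s / 2) * (grid - s / 2) / sigma**2 * ( 0.5)
    return np.sum(dp**2 / p) * dx  # Riemann sum of the Fisher integrand

# Tsang et al.'s modal (quantum) limit for a Gaussian PSF is the constant
# 1/(4 sigma^2) per photon; direct imaging falls away from it as s -> 0,
# the so-called "Rayleigh's curse".
qfi = 1.0 / 4.0
for s in [2.0, 1.0, 0.5, 0.1]:
    print(f"s = {s:4.1f}  FI(direct) = {direct_imaging_fisher(s):.5f}  modal limit = {qfi}")
```

The separation-independent modal limit versus this vanishing direct-imaging curve is exactly the gap that partial coherence complicates in the abstract above.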
Multi-view, transmission-based X-ray scanners can detect weapons and some explosive materials in luggage, but struggle to distinguish between certain threat and benign materials with sufficiently low false alarm rates. Orthogonal technologies, such as X-ray diffraction tomography (XRDT), provide complementary information that can improve detection; however, such systems are challenging to implement in a practical, cost-effective manner. In this talk, we discuss our approach to the design of multi-modality scanners (i.e., combining transmission and XRDT), which involves rapid, flexible, high-fidelity X-ray simulations, modeling realistic bags, using information-theoretic analysis tools, and conducting system-level experimental investigation and validation.
For details, see Joshua H. Carpenter, Yijun Ding, Ava Hurlock, David Coccarelli, Chris Gregory, Souleymane O. Diallo, Amit Ashok, Michael E. Gehm, and Joel Greenberg, “Motivations and methods for the analysis of multi-modality x-ray systems for explosives detection,” Proc. SPIE 10999, Anomaly Detection and Imaging with X-Rays (ADIX) IV, 1099908, 2019. DOI: 10.1117/12.2518781
We have developed a high-fidelity simulation framework capable of modeling a multi-energy X-ray fixed gantry computed tomography transmission system. Our end-to-end simulation framework includes experimentally validated models of sources and detectors, as well as virtual bags to emulate the X-ray measurements generated by the fixed gantry X-ray CT system. This simulation capability enables us to conduct exploratory system trade-off studies around the current fixed gantry system, in terms of the source-detector geometry, detector energy resolution and other relevant system parameters to assess their impact on the threat detection performance. Using information-theoretic metrics, we are able to provide quantitative performance bounds on the performance of the candidate system designs. In this work, we will report results of our initial system design trade-off studies focused on detector energy resolution and energy partitioning and how they impact the threat detection performance.
For details, see Jay Voris, Yijun Ding, Ratchaneekorn Thamvichai, Joel Greenberg, David Coccarelli, Michael Gehm, Eric Johnson, Carl Bosch, and Amit Ashok, “Information-theoretic analysis of fixed gantry x-ray computed tomography transmission system for threat detection,” Proc. SPIE 10999, Anomaly Detection and Imaging with X-Rays (ADIX) IV, 109990I, 2019. DOI: 10.1117/12.2519812
We aim to develop a framework for assessing X-ray based baggage scanners for the threat-detection task. In our prior work, we developed a multi-energy measurement model that incorporated shot noise and material variations and derived the Cauchy-Schwarz Mutual Information (CSMI). However, the energy correlations in the data were not considered. In this work, we incorporate energy correlations in the material-variation model and the measurement model. We provide an analytical approximation of the CSMI and demonstrate the impact of energy correlations on CSMI, as well as on bounds on the Shannon Mutual Information (SMI) and the Probability of Error (PE).
For details, see Yijun Ding and Amit Ashok, “X-ray measurement model and information-theoretic metric incorporating material variations with energy correlations,” Proc. SPIE 10999, Anomaly Detection and Imaging with X-Rays (ADIX) IV, 109990J, 2019. DOI: 10.1117/12.2518552
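The Cauchy-Schwarz mutual information used in this line of work has a closed form when the class-conditional measurement densities are modeled as Gaussians, because every term reduces to Gaussian overlap integrals. Below is a minimal numpy sketch of that idea; the two-class "threat vs. benign" setup, the means, and the covariances are illustrative assumptions, not the papers' actual measurement model.

```python
import numpy as np

def gauss_overlap(m1, S1, m2, S2):
    """Closed-form Gaussian product integral:
    integral of N(x; m1, S1) * N(x; m2, S2) dx = N(m1; m2, S1 + S2)."""
    S = S1 + S2
    d = m1 - m2
    k = len(m1)
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt((2 * np.pi)**k * np.linalg.det(S))

def csmi(means, covs, priors):
    """Cauchy-Schwarz divergence between the joint p(x, c) and the product
    p(x) p(c), for a discrete class c with Gaussian class-conditionals."""
    O = np.array([[gauss_overlap(mi, Si, mj, Sj) for mj, Sj in zip(means, covs)]
                  for mi, Si in zip(means, covs)])
    pi = np.asarray(priors)
    cross = sum(pi[c]**2 * (pi @ O[c]) for c in range(len(pi)))  # ∫ Σ_c p(x,c)·π_c p(x)
    joint = sum(pi[c]**2 * O[c, c] for c in range(len(pi)))      # ∫ Σ_c p(x,c)²
    marg  = (pi**2).sum() * (pi @ O @ pi)                        # ∫ Σ_c (π_c p(x))²
    return -np.log(cross**2 / (joint * marg))  # >= 0 by Cauchy-Schwarz

# Toy two-class example in a 2-channel energy measurement.
I2 = np.eye(2)
sep0 = csmi([np.zeros(2), np.zeros(2)], [I2, I2], [0.5, 0.5])    # identical classes
sep3 = csmi([np.zeros(2), 3 * np.ones(2)], [I2, I2], [0.5, 0.5]) # well separated
print(sep0, sep3)
```

Identical class distributions give zero CSMI (the measurement carries no class information), and the metric grows as the class-conditional densities separate, which is what makes it usable as a detection-performance surrogate.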
X-ray Computed Tomography (CT) transmission systems typically employ an X-ray source/detector array geometry on a rotating gantry to collect angularly diverse projections of an object. Such a mechanical scanning mechanism can contribute to increased system cost and maintenance requirements. An alternative approach employs a Rectangular-Fixed-Gantry (RFG) architecture with multiple electronically controlled X-ray sources and detector arrays. Here we consider multiplexed measurement design in the RFG system framework that utilizes simultaneous multiple-source illumination in optimized configuration(s). Using simulation studies, we demonstrate that multiplexed measurement design can significantly outperform traditional sequential measurement design in terms of reconstruction fidelity of the object.
For details, see Ahmad Masoudi and Amit Ashok, “Multiplexed measurement design for fixed gantry x-ray computed tomography system,” Proc. SPIE 10999, Anomaly Detection and Imaging with X-Rays (ADIX) IV, 109990K, 2019. DOI: 10.1117/12.2519600
We propose a compression encoding method that perceptually optimizes image quality based on a novel quality metric, which emulates how the human visual system forms an opinion of a compressed image. Compared to existing perceptually optimized compression methods, which usually aim to minimize the detectability of compression artifacts and are sub-optimal in the visually lossless regime, the proposed encoder aims to operate in the visually lossy regime. We implement the proposed encoder within the JPEG 2000 standard, and demonstrate its advantage over both detectability-based and conventional MSE encoders.
For details, see Yuzhang Lin, Feng Liu, Miguel Hernández-Cabronero, Eze Ahanonu, Michael Marcellin, Ali Bilgin, and Amit Ashok, “Perception-Optimized Encoding for Visually Lossy Image Compression,” Proc. of Data Compression Conference (DCC), pp. 592-592, 2019. DOI 10.1109/DCC.2019.00104
Traditional image compression methods primarily focus on maximizing the fidelity of the compressed image using image-quality-driven distortion metrics, which are ideally suited for human observers but are not necessarily optimal for machine observers, i.e., automated image exploitation algorithms. For machine observers, task-based distortion metrics, such as probability of error, have been shown to be more effective for tasks such as object detection and classification. This motivates an approach to task-based image compression, within the JPEG 2000 framework, that preserves the information most relevant for the given task. Our proposed method produces a JPEG 2000 compliant compressed codestream, which can be decoded by any JPEG 2000 compliant decoder. We demonstrate the feasibility and effectiveness of our task-based image compression approach on a simple object classification and detection problem and quantify its performance relative to a conventional MSE encoder.
For details, see Yuzhang Lin, Amit Ashok, Michael Marcellin and Ali Bilgin, “Task-Based JPEG 2000 Image Compression: An Information-Theoretic Approach,” Proc. of Data Compression Conference (DCC), pp. 423-423, 2018. DOI 10.1109/DCC.2018.00076
In this paper, we propose a visual discrimination model to enable perceptually optimized JPEG2000 compression for both near-threshold and supra-threshold regimes. The performance of the proposed approach is validated by comparing it to conventional Mean Squared Error (MSE)-optimized compression.
For details, see Feng Liu, Yuzhang Lin, Miguel Hernández-Cabronero, Eze Ahanonu, Michael Marcellin, Amit Ashok, and Ali Bilgin, “A Visual Discrimination Model for JPEG2000 Compression,” Proc. of Data Compression Conference (DCC), pp. 424-424, 2018. DOI 10.1109/DCC.2018.00077
In our prior work, we employed a fixed photo-absorption, coherent, and incoherent cross-section material model to derive a shot-noise-limited description of the X-ray measurements in checkpoint or checked-baggage threat-detection systems. Using this measurement model, we developed an information-theoretic metric, which provides an upper bound on the performance of a threat-detection system. However, the fixed cross-section material model does not incorporate material variability arising from inherent variations in composition and density. In this work, we develop a multi-energy model of material variability based on composition and density variations and combine it with the shot-noise photon detection process to derive a new X-ray measurement model. We derive a computationally scalable analytic approximation of an information-theoretic metric, i.e., the Cauchy-Schwarz mutual information, based on this material variability model to quantify the upper bound on the performance of the threat-detection task. We demonstrate the effect of material variations on the performance bounds of X-ray transmission-based threat detection systems as a function of detector energy resolution and source fluence.
For details, see Ahmad Masoudi, Jay Voris, David Coccarelli, Joel Greenberg, Michael Gehm, and Amit Ashok, “X-ray measurement model and information-theoretic system metric incorporating material variability,” Proc. SPIE 10632, Anomaly Detection and Imaging with X-Rays (ADIX) III, 106320H, 2018. DOI: 10.1117/12.2307242
Differentiating material anomalies requires a measurement system that can reliably inform the user/classifier of pertinent material characteristics. In past work, we have developed a simulation framework capable of making simulated x-ray transmission and scatter measurements of virtual baggage. Using this simulated data, we have demonstrated how an information-theoretic approach to x-ray system design and analysis provides insight into system performance. Moreover, we have shown how performance limits relate to architectural variations in source fluence, view number, spectral resolution, spatial resolution, etc. However, our previous investigations did not include material variability in the description of the materials which make up the virtual baggage. One would expect material variability to dramatically affect the results of the information-theoretic metric, and thus we now include it in our analysis. Previously, material information was captured as energy-dependent mean attenuation values. Under that description, material differentiation can always become easier with an improvement in SNR: when there is no variation to obscure class differences, improvements in SNR will indefinitely improve performance, and we therefore saw a monotonic increase of the metric with source fluence. However, there is inherent variability in materials from chemical impurities, texturing, or macroscopic variation. When this variability is accounted for, we better understand system performance limits at higher SNR as well as better represent the distributions of material characteristics. We will report on the analysis of real-world system geometries and the fundamental limits of performance after incorporating these material-variability improvements.
For details, see David Coccarelli, Joel A. Greenberg, Ratchaneekorn Thamvichai, Jay Voris, Ahmad Masoudi, Amit Ashok, and Michael Gehm, “An information theoretic approach to system optimization accounting for material variability,” Proc. SPIE 10632, Anomaly Detection and Imaging with X-Rays (ADIX) III, 106320F, 2018. DOI: 10.1117/12.2305227
X-ray computed tomography is widely used in security applications. With growing interest in view-limited systems, which offer increased throughput, there is significant interest in constrained image reconstruction techniques that allow high-fidelity reconstruction from limited data. These image reconstruction techniques are commonly characterized by their intense computational requirements, making their deployment in real-time imaging applications challenging. Recent success of deep learning techniques in various signal and image processing applications has sparked an interest in using these techniques for image reconstruction problems. In this work, we explore the use of deep learning techniques for reconstruction of baggage CT data and compare these techniques to constrained reconstruction methods.
For details, see Sagar Mandava, Amit Ashok, and Ali Bilgin, “Deep learning based sparse view x-ray CT reconstruction for checked baggage screening,” Proc. SPIE 10632, Anomaly Detection and Imaging with X-Rays (ADIX) III, 1063204, 2018. DOI: 10.1117/12.2309509
The smallest estimable separation of two incoherent monochromatic point sources is considered to be a fundamental measure of imaging resolution. We extend the fundamental limit of two-point source resolution of a Gaussian aperture to an arbitrary aperture and propose a single-mode measurement that approaches this limit in the sub-diffraction regime.
For details, see Ronan Kerviche, Saikat Guha, and Amit Ashok, “Achieving the Ultimate Limit of Two Point Resolution by Computational Imaging,” in Imaging and Applied Optics Technical Digest (COSI), paper CW4B.5 (2017). DOI 10.1364/COSI.2017.CW4B.5
X-ray CT based baggage scanners are widely used in security applications. Recently, there has been increased interest in view-limited systems which can improve the scanning throughput while maintaining the threat detection performance. However as very few view angles are acquired in these systems, the image reconstruction problem is challenging. Standard reconstruction algorithms such as the filtered backprojection create strong artifacts when working with view-limited data. In this work, we study the performance of a variety of reconstruction algorithms for both single and multi-energy view-limited systems.
For details, see Sagar Mandava, David Coccarelli, Joel A. Greenberg, Michael E. Gehm, Amit Ashok, and Ali Bilgin, “Image reconstruction for view-limited x-ray CT in baggage scanning,” in Proc. SPIE 10187, Anomaly Detection and Imaging with X-Rays (ADIX) II, 101870F (2017). DOI 10.1117/12.2265491
Anomaly detection requires a system that can reliably convert measurements of an object into knowledge about that object. Previously, we have shown that an information-theoretic approach to the design and analysis of such systems provides insight into system performance as it pertains to architectural variations in source fluence, view number/angle, spectral resolution, and spatial resolution.1 However, this work was based on simulated measurements which, in turn, relied on assumptions made in our simulation models and virtual objects. In this work, we describe our experimental testbed capable of making transmission x-ray measurements. The spatial, spectral, and temporal resolution is sufficient to validate aspects of the simulation-based framework, including the forward models, bag packing techniques, and performance analysis. In our experimental CT system, designed baggage is placed on a rotation stage located between a tungsten-anode source and a spectroscopic detector array. The setup is able to measure a full 360° rotation with 18,000 views, each of which defines a 10 ms exposure of 1,536 detector elements, each with 64 spectral channels. Measurements were made of 1,000 bags that comprise 100 clutter instantiations each with 10 different target materials. Moreover, we develop a systematic way to generate bags representative of our desired clutter and target distributions. This gives the dataset a statistical significance valuable in future investigations.
For details, see David Coccarelli, Joel A. Greenberg, Sagar Mandava, Qian Gong, Liang-Chih Huang, Amit Ashok, and Michael E. Gehm, “Creating an experimental testbed for information-theoretic analysis of architectures for x-ray anomaly detection,” in Proc. SPIE 10187, Anomaly Detection and Imaging with X-Rays (ADIX) II, 1018709 (2017). DOI 10.1117/12.2263033
Image compression systems that exploit the properties of the Human Visual System (HVS) have been studied extensively over the past few decades. For the JPEG2000 image compression standard, several methods to optimize perceptual quality have been proposed. In 2013, Han et al. proposed a visually lossless compression approach based on the irreversible pipeline defined in the JPEG2000 standard. In this approach, visibility thresholds were measured using psychovisual experiments. These thresholds were then incorporated in a JPEG2000 encoder to ensure that quantization distortions remain below visible levels in the compressed codestreams. In this work, we investigate the use of a similar approach for the reversible pipeline of the JPEG2000 standard. Our motivation is to allow the creation of a scalable codestream that can provide both visually lossless and numerically lossless representations from a single codestream. By comparing the difference in compression performance between the reversible and irreversible pipelines, we also quantify the overhead associated with the reversible pipeline for visually lossless compression.
For details, see Feng Liu, Eze Ahanonu, Michael W. Marcellin, Yuzhang Lin, Amit Ashok, and Ali Bilgin, “Visibility Thresholds in Reversible JPEG2000 Compression,” in Proc. of Data Compression Conference (DCC), pp. 450-450 (2017). DOI 10.1109/DCC.2017.78
Estimating the angular separation between two incoherently radiating monochromatic point sources is a canonical toy problem for quantifying spatial resolution in imaging. In recent work, Tsang et al. showed, using a Fisher information analysis, that Rayleigh’s resolution limit is just an artifact of the conventional wisdom of intensity measurement in the image plane. They showed that the optimal sensitivity of estimating the angle depends only on the total number of photons collected during the camera’s integration time and is entirely independent of the angular separation itself, no matter how small, and they found the information-optimal mode basis in which intensity detection achieves this performance. We extend the above analysis, which was done for a Gaussian point spread function (PSF), to a hard-aperture pupil, proving the information optimality of image-plane sinc-Bessel modes, and generalize the result further to an arbitrary PSF. We obtain new counterintuitive insights on energy vs. information content in spatial modes, and extend the Fisher information analysis to exact calculations of minimum mean squared error, both for Gaussian and hard-aperture pupils.
For details, see Ronan Kerviche, Saikat Guha, and Amit Ashok, “Fundamental limit of resolving two point sources limited by an arbitrary point spread function,” Proc. IEEE International Symposium on Information Theory (ISIT), (2017). DOI 10.1109/ISIT.2017.8006566
Recently, Han et al. developed a method for visually lossless compression using JPEG2000. In this method, visibility thresholds (VTs) are experimentally measured and used during quantization to ensure that the errors introduced by quantization are below these thresholds. In this work, we extend the work of Han et al. to the visually lossy regime. We propose a framework where a series of experiments are conducted to measure Just-Noticeable-Differences using the quantization distortion model introduced by Han et al. The resulting thresholds are incorporated into a JPEG2000 encoder to yield visually lossy, JPEG2000 Part 1 compliant codestreams.
For details, see Feng Liu, Yuzhang Lin, Eze Ahanonu, Michael W. Marcellin, Amit Ashok, Elizabeth A. Krupinski, and Ali Bilgin, “Visibility Thresholds for Visually Lossy JPEG2000,” Proc. of SPIE, Applications of Digital Image Processing XXXIX, 99711P, (2016). DOI 10.1117/12.2238411
In this work we present an information-theoretic framework for a systematic study of checkpoint x-ray systems using photoabsorption measurements. Conventional system performance analysis of threat detection systems confounds the effect of the system architecture choice with the performance of a threat detection algorithm. However, our system analysis approach enables a direct comparison of the fundamental performance limits of disparate hardware architectures, independent of the choice of a specific detection algorithm. We compare photoabsorptive measurements from different system architectures to understand the effect of system geometry (angular views) and spectral resolution on the fundamental limits of system performance.
For details, see Yuzhang Lin, Genevieve G. Allouche, James Huang, Amit Ashok, Qian Gong, David Coccarelli, Razvan-Ionut Stoian, and Michael E. Gehm, “Information-theoretic analysis of x-ray photoabsorption based threat detection system for check-point,” in Proc. SPIE 9847, Anomaly Detection and Imaging with X-Rays (ADIX), 98470F (2016). DOI 10.1117/12.2223803
Conventional performance analysis of detection systems confounds the effects of the system architecture (sources, detectors, system geometry, etc.) with the effects of the detection algorithm. Previously, we introduced an information-theoretic approach to this problem by formulating a performance metric, based on Cauchy-Schwarz mutual information, that is analogous to the channel capacity concept from communications engineering. In this work, we discuss the application of this metric to study novel screening systems based on x-ray scatter or phase. Our results show how effective use of this metric can impact design decisions for x-ray scatter and phase systems.
For details, see David Coccarelli, Qian Gong, Razvan-Ionut Stoian, Joel A. Greenberg, Michael E. Gehm, Yuzhang Lin, James Huang, and Amit Ashok, “Information-theoretic analysis of x-ray scatter and phase architectures for anomaly detection,” in Proc. SPIE 9847, Anomaly Detection and Imaging with X-Rays (ADIX), 98470B (2016). DOI 10.1117/12.2223175
We present an information-theoretic approach to X-ray measurement design for threat detection in passenger bags. Unlike existing X-ray systems that rely on a large number of sequential tomographic projections for threat detection based on 3D reconstruction, our approach exploits statistical priors on the shape/material of the items comprising the bag to optimize multiplexed measurements that can be used directly for threat detection without an intermediate 3D reconstruction. Simulation results show that the optimal multiplexed design achieves a higher probability of detection for a given false alarm rate and a lower probability of error for a range of exposure (photon) budgets, relative to non-multiplexed measurements. For example, a 99% detection probability is achieved by the optimal multiplexed design with 4x fewer measurements than the non-multiplexed design.
For details, see James Huang, and Amit Ashok, “Information optimal compressive x-ray threat detection,” in Proc. SPIE 9847, Anomaly Detection and Imaging with X-Rays (ADIX), 98470T (2016). DOI 10.1117/12.2223784
We present a scalable information-optimal compressive imager optimized for the target classification task, discriminating between two target classes. Compressive projections are optimized using the Cauchy-Schwarz Mutual Information (CSMI) metric, which provides an upper bound on the probability of error for target classification. The optimized measurements provide significant performance improvement relative to random and PCA secant projections. We validate the simulation performance of information-optimal compressive measurements with experimental data.
For details, see Ronan Kerviche, and Amit Ashok, “Scalable information-optimal compressive target recognition,” in Proc. SPIE 9870, Computational Imaging, 987008 (2016). DOI 10.1117/12.2228570
Compression is a key component in many imaging systems in order to accommodate limited resources such as power and bandwidth. Image compression is often done independently of the specific tasks that the systems are designed for, such as target detection, classification, diagnosis, etc. Standard compression techniques are designed based on quality metrics such as mean-squared error (MSE) or peak signal to noise ratio (PSNR). Recently, a metric based on task-specific information (TSI) was proposed and successfully incorporated into JPEG2000 encoding. It has been shown that the proposed TSI metric can optimize task performance. In this work, a joint metric is proposed to provide a seamless transition between the conventional quality metric MSE and the recently proposed TSI. We demonstrate the effectiveness and flexibility of the proposed joint TSI metric for target detection tasks. Furthermore, it is extended to video tracking applications to demonstrate the robustness of the proposed metric. Experimental results show that although the metric is not directly designed for the applied task, better tracking performance can still be achieved when the joint metric is used, compared to results obtained with the traditional MSE metric.
For details, see Lingling Pu, Michael W. Marcellin, Ali Bilgin, and Amit Ashok, “Compression Based on a Joint Task-Specific Information Metric,” in Proceedings of the IEEE Data Compression Conference (DCC), pp. 467-467 (2015). DOI 10.1109/DCC.2015.76
We present an analysis of measurement quantization and rate allocation problems in compressive imaging and quantify its impact on image formation. Compressive imaging is compared with traditional image compression with respect to measurement data size.
For details, see Yuzhang Lin, and Amit Ashok, “Measurement Quantization in Compressive Imaging and Image Compression,” in OSA Imaging and Applied Optics Technical Digest, paper JT5A.37 (2015). DOI 10.1364/AOMS.2015.JT5A.37
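To give a flavor of the quantization and rate-allocation question studied here, the following is a toy numpy sketch built entirely on illustrative assumptions (a stationary correlated Gaussian prior standing in for natural-scene statistics, random projections, a midrise uniform quantizer, and LMMSE reconstruction that treats quantization error as additive noise); it is not the paper's model or data.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 64, 16                                  # signal length, compressive measurements

# Correlated Gaussian prior (AR(1)-style covariance) as a stand-in for scenes.
idx = np.arange(n)
C = 0.95 ** np.abs(idx[:, None] - idx[None, :])
x = np.linalg.cholesky(C) @ rng.standard_normal(n)

A = rng.standard_normal((m, n)) / np.sqrt(n)   # random projection design
y = A @ x

def quantize(v, bits):
    """Midrise uniform scalar quantizer over the observed measurement range."""
    lo, hi = v.min(), v.max()
    step = (hi - lo) / 2**bits
    return lo + (np.floor((v - lo) / step).clip(0, 2**bits - 1) + 0.5) * step

def lmmse(A, yq, C, noise_var):
    """Linear MMSE reconstruction, modeling quantization error as white noise."""
    S = A @ C @ A.T + noise_var * np.eye(len(yq))
    return C @ A.T @ np.linalg.solve(S, yq)

mse = {}
for bits in [2, 4, 8]:
    step = (y.max() - y.min()) / 2**bits
    xh = lmmse(A, quantize(y, bits), C, step**2 / 12)  # uniform q-noise variance
    mse[bits] = float(np.mean((x - xh)**2))
    print(bits, mse[bits])
```

Sweeping the bit depth this way exposes the trade the abstract refers to: below some rate the quantization noise, not the measurement count, dominates the reconstruction error.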
We present a scalable compressive imager using an information-optimal measurement design and a single-pass piecewise-linear reconstruction algorithm. The superior performance of this design compared to random binary projections is demonstrated via a real-time, high-resolution implementation of the system.
For details, see Ronan Kerviche, and Amit Ashok, “Information-optimal Scalable Compressive Imaging System,” in OSA Classical Optics Technical Digest, paper CMD2.2 (2014). DOI 10.1364/COSI.2014.CM2D.2
Compression is a key component in imaging systems that have limited power, bandwidth, or other resources. In many applications, images are acquired to support a specific task such as target detection or classification. However, standard image compression techniques such as JPEG or JPEG2000 (J2K) are often designed to maximize image quality as measured by conventional quality metrics such as mean-squared error (MSE) or Peak Signal to Noise Ratio (PSNR). This mismatch between image quality metrics and task performance motivates our investigation of image compression using a task-specific metric designed for the designated tasks. Given the selected target detection task, we first propose a metric based on conditional class entropy. The proposed metric is then incorporated into a J2K encoder to create compressed codestreams that are fully compliant with the J2K standard. Experimental results illustrate that the decompressed images obtained using the proposed encoder greatly improve performance in detection/classification tasks over images encoded using a conventional J2K encoder.
For details, see Lingling Pu, Michael W. Marcellin, Ali Bilgin, and Amit Ashok, “Image compression based on task-specific information,” IEEE International Conference on Image Processing (ICIP), pp. 4817-4821 (2014). DOI 10.1109/ICIP.2014.7025976
We present a compressive imager demonstrator based on a scalable, parallel architecture. It primarily utilizes information-optimal projections and a Piecewise Linear Minimum Mean Square Error Estimator (PLE-MMSE) combined with a block-based statistical model of natural images. Such a system delivers high-resolution images from a low-resolution sensor with near real-time snapshots. This testbed provides a highly programmable compressive imager that allows testing of a variety of projection designs for different tasks (e.g. random binary, PCA) and also enables adaptive or dynamic designs.
For details, see Ronan Kerviche, Nan Zhu, and Amit Ashok, “Information optimal scalable compressive imager demonstrator,” in Proceedings of the IEEE International Conference on Image Processing (ICIP), pp. 2177-2179 (2014). DOI 10.1109/ICIP.2014.7025439
Compressive imaging exploits sparsity/compressibility of natural scenes to reduce the detector count/read-out bandwidth in a focal plane array by effectively implementing compression during the acquisition process. However, realizing the full potential of compressive imaging entails several practical challenges, such as measurement design, measurement quantization, rate allocation, non-idealities inherent in hardware implementation, scalable imager architecture, system calibration and tractable image formation algorithms. We describe an information-theoretic approach for compressive measurement design that incorporates available prior knowledge about natural scenes for more efficient projection design relative to random projections. Measurement quantization and rate-allocation problems are also considered, and simulation studies demonstrate the performance of random and information-optimal projection designs for quantized compressive measurements. Finally, we demonstrate the feasibility of optical compressive imaging with a scalable compressive imaging hardware implementation that addresses system calibration and real-time image formation challenges. The experimental results highlight the practical effectiveness of compressive imaging with system design constraints, non-ideal system components and realistic system calibration.
For details, see Amit Ashok, James Huang, Yuzhang Lin, and Ronan Kerviche, “Information optimal compressive imaging: design and implementation,” in Proc. SPIE 9186, Fifty Years of Optical Sciences at The University of Arizona, 91860K (2014). DOI 10.1117/12.2063947
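One way to see the benefit of prior knowledge in projection design (a simplified sketch under stated assumptions, not the paper's implementation): for a Gaussian prior with additive Gaussian noise, information-optimal linear projections align with the leading eigenvectors of the prior covariance, and their expected LMMSE reconstruction error can be compared directly against random projections.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, noise_var = 64, 8, 1e-2

# Correlated Gaussian prior standing in for learned scene statistics (assumption).
idx = np.arange(n)
C = 0.97 ** np.abs(idx[:, None] - idx[None, :])

# Information-optimal design under a Gaussian prior + AWGN: project onto the
# leading eigenvectors of the prior covariance (eigh returns ascending order).
w, V = np.linalg.eigh(C)
A_opt = V[:, ::-1][:, :m].T
A_rnd = rng.standard_normal((m, n)) / np.sqrt(n)   # random design baseline

def expected_mse(A, C, noise_var):
    """Per-pixel trace of the LMMSE posterior covariance for design A."""
    S = A @ C @ A.T + noise_var * np.eye(A.shape[0])
    return np.trace(C - C @ A.T @ np.linalg.solve(S, A @ C)) / C.shape[0]

mse_opt = expected_mse(A_opt, C, noise_var)
mse_rnd = expected_mse(A_rnd, C, noise_var)
print("eigen-projection MSE:", mse_opt)
print("random-projection MSE:", mse_rnd)
```

With only m = 8 measurements of a 64-sample signal, the prior-matched design concentrates its budget on the directions where the prior says the energy is, which is the core of the "information-optimal beats random" claim in the abstract.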
We present a non-greedy adaptive compressive measurement design for application to an M-class recognition task. Unlike a greedy strategy, which sequentially optimizes the immediate performance conditioned on previous measurements, a non-greedy adaptive design determines the optimal measurement vector by maximizing the expected final performance. Gaussian class-conditional densities are used to model the variety of object realizations for each hypothesis. The simulation results demonstrate that the non-greedy adaptive design significantly reduces the probability of recognition error relative to greedy adaptive and various static measurement designs, by 22% and 33%, respectively.
For details, see James Huang, Amit Ashok, and Mark Neifeld, “Non-greedy adaptive compressive imaging: A face recognition example,” in Proceedings of the IEEE Asilomar Conference on Signals, Systems, and Computers, pp. 762-764 (2013). DOI 10.1109/ACSSC.2013.6810387
We design an adaptive compressive imager by maximizing the mutual information between the measurements and the object. Simulation results show that the proposed design requires 1.5 times fewer measurements than a static information-optimal design.
For details, see James Huang, Amit Ashok, and Mark Neifeld, “Information Optimal Adaptive Measurement Design For Compressive Imaging,” in OSA Imaging and Applied Optics Technical Digest, paper JTu4A.22 (2013). DOI 10.1364/AOPT.2013.JTu4A.22
In contrast with previous compressive light field imagers, we propose an optical architecture that jointly encodes both angular and spatial structure using a 2D random mask. This joint modulation is shown to outperform other compressive LF cameras.
For details, see Basel Salahieh, Amit Ashok, and Mark Neifeld, “Compressive Light Field Imaging Using Joint Spatio-Angular Modulation,” in OSA Imaging and Applied Optics Technical Digest, paper CM4C.6 (2013). DOI 10.1364/COSI.2013.CM4C.6
We examine compressive imaging within a stereo vision application in which a traditional correspondence algorithm is used to find pixel disparity maps. Through simulation we show that compressive imaging provides sufficient image fidelity to compute disparity maps.
For details, see Vicha Treeaporn, Amit Ashok, and Mark Neifeld, “Compressive Stereo Cameras for Computing Disparity Maps,” in OSA Imaging and Applied Optics Technical Digest, paper CM1C.4 (2013). DOI 10.1364/COSI.2013.CM1C.4
We present a joint-design approach to extended depth of field imaging within the computational imaging framework. Superior performance of the optimized Zernike phase-mask, relative to cubic and trefoil phase-masks, is demonstrated by simulation and experiment.
For details, see Ronan Kerviche, and Amit Ashok, “A Joint-Design Approach for Extended Depth of Field Imaging,” in OSA Imaging and Applied Optics Technical Digest, paper CW4C.4 (2013). DOI 10.1364/COSI.2013.CW4C.4
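The phase masks mentioned above reshape the PSF through the pupil phase: the incoherent PSF is the squared magnitude of the Fourier transform of the pupil function. The sketch below, with an illustrative grid, mask strength, and defocus values of our own choosing, shows the classic wavefront-coding effect of a cubic mask: its transfer function varies far less with defocus than a clear aperture's, which is what lets a single fixed deconvolution filter restore focus over an extended depth range:

```python
import numpy as np

N = 256
x = np.linspace(-1, 1, N)
X, Y = np.meshgrid(x, x)
aperture = (X**2 + Y**2 <= 1.0).astype(float)   # circular pupil

def psf(mask_phase, defocus):
    """Incoherent PSF: |FT{pupil}|^2, normalized to unit energy."""
    W = aperture * np.exp(1j * (mask_phase + defocus * (X**2 + Y**2)))
    p = np.abs(np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(W)))) ** 2
    return p / p.sum()

def mtf(p):
    """Magnitude of the optical transfer function of a PSF."""
    return np.abs(np.fft.fft2(p))

cubic = 20.0 * (X**3 + Y**3)    # cubic phase mask, illustrative strength (rad)
clear = np.zeros_like(X)

# Compare MTF sensitivity to a 5 rad (edge) defocus aberration.
d_cubic = np.abs(mtf(psf(cubic, 5.0)) - mtf(psf(cubic, 0.0))).mean()
d_clear = np.abs(mtf(psf(clear, 5.0)) - mtf(psf(clear, 0.0))).mean()
print(d_cubic, d_clear)
```

The near defocus-invariance of the cubic-mask transfer function is the standard wavefront-coding result; the paper's contribution is jointly optimizing a Zernike-parameterized mask with the post-processing rather than fixing a cubic form.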
In this paper, compressive sensing strategies for interception of Frequency-Hopping Spread Spectrum (FHSS) signals are introduced. Rapid switching of the carrier among many frequency channels using a pseudorandom sequence (unknown to the eavesdropper) makes FHSS signals difficult to intercept. The conventional approach to intercepting FHSS signals necessitates capturing all frequency channels and, thus, requires the Analog-to-Digital Converters (ADCs) to sample at very high rates. Using the fact that FHSS signals have sparse instantaneous spectra, we propose compressive sensing strategies for their interception. The proposed techniques are validated using Gaussian Frequency-Shift Keying (GFSK) modulated FHSS signals as defined by the Bluetooth specification.
For details, see Feng Liu, Yookyung Kim, Nathan Goodman, Amit Ashok, and Ali Bilgin, “Compressive sensing of frequency-hopping spread spectrum signals,” in Proc. SPIE 8365, Compressive Sensing, 83650P (2012). DOI 10.1117/12.919561
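As a minimal illustration of the sparsity the paper exploits, the sketch below recovers the few active frequency bins of a toy signal from sub-Nyquist compressive samples using Orthogonal Matching Pursuit. The hop count, signal length, and Gaussian measurement matrix are our own illustrative choices; no GFSK or Bluetooth modeling is attempted:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 256, 64, 2   # signal length, compressive samples, active channels

# Signal with a sparse instantaneous spectrum: k active frequency bins
# (a crude stand-in for one FHSS dwell interval).
F = np.fft.fft(np.eye(n)) / np.sqrt(n)      # unitary DFT matrix
support = rng.choice(n, size=k, replace=False)
s = np.zeros(n, complex)
s[support] = 1.0
x_sig = F.conj().T @ s                       # time-domain signal

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # compressive sampler
y = Phi @ x_sig                                   # m << n measurements

# Orthogonal Matching Pursuit over the effective dictionary D = Phi F^H.
D = Phi @ F.conj().T
resid, est_support = y.copy(), []
for _ in range(k):
    j = int(np.argmax(np.abs(D.conj().T @ resid)))  # best-matching atom
    est_support.append(j)
    Dj = D[:, est_support]
    coef, *_ = np.linalg.lstsq(Dj, y, rcond=None)   # re-fit on support
    resid = y - Dj @ coef

print(sorted(est_support), sorted(support.tolist()))
```

The recovered support identifies the occupied hop channels from far fewer samples than Nyquist-rate capture of the full band would require.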
A dynamically programmable computational imaging system has been demonstrated. The system operates in the visible and near infrared bands. Principal components and random binary measurements were used with the imaging hardware to demonstrate compressive imaging.
For details, see B. M. Kaylor, Amit Ashok, E. M. Seger, C. J. Keith, and R. R. Reibel, “Dynamically programmable, dual-band computational imaging system,” in OSA Imaging and Applied Optics Technical Digest, paper CM4B.3 (2012). DOI 10.1364/COSI.2012.CM4B.3
We present an information-theoretic framework for measurement basis design in compressive imaging. Simulation results show that the reconstruction error obtained with information-optimal projections is nearly an order of magnitude lower than that for random projections.
For details, see Amit Ashok, Mark A. Neifeld, and James Huang, “Information Optimal Static Measurement Design for Compressive Imaging,” in OSA Imaging and Applied Optics Technical Digest, paper CTu3B.3 (2012). DOI 10.1364/COSI.2012.CTu3B.3
We adopt a sequential Bayesian experiment design framework for compressive imaging wherein the measurement basis is data-dependent and therefore adaptive. The criterion for measurement basis design employs task-specific information (TSI), an information-theoretic metric, conditioned on the past measurements. A Gaussian scale mixture prior model is used to represent compressible natural scenes in the wavelet basis. The resulting adaptive compressive imager design yields significant performance improvements compared to a static compressive imager using random projections.
For details, see Amit Ashok, Liang C. Huang, and Mark A. Neifeld, “Information-Optimal Adaptive Compressive Imaging,” Invited Paper, in Proceedings of the IEEE Asilomar Conference on Signals, Systems, and Computers, pp. 1255-1259 (2011). DOI 10.1109/ACSSC.2011.6190217
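For a purely Gaussian prior (a simplification of the Gaussian-scale-mixture model used in the paper), the information-optimal next measurement has a closed form: it is the top eigenvector of the current posterior covariance. A minimal sketch of this sequential Bayesian loop, with an illustrative AR(1)-style covariance of our own choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
n, noise_var = 32, 1e-2

# Toy Gaussian scene prior (the paper uses a GSM wavelet-domain prior).
idx = np.arange(n)
Sigma = 0.9 ** np.abs(idx[:, None] - idx[None, :])
x_true = np.linalg.cholesky(Sigma) @ rng.standard_normal(n)

mu, P = np.zeros(n), Sigma.copy()
for _ in range(8):
    # Information-optimal next measurement for a Gaussian posterior:
    # the top eigenvector of the current posterior covariance P.
    _, V = np.linalg.eigh(P)
    a = V[:, -1]
    y = a @ x_true + np.sqrt(noise_var) * rng.standard_normal()
    # Linear-Gaussian (Kalman) posterior update.
    g = P @ a / (a @ P @ a + noise_var)
    mu = mu + g * (y - a @ mu)
    P = P - np.outer(g, a @ P)

print(np.trace(Sigma), np.trace(P))  # prior vs posterior uncertainty
```

Each measurement collapses the currently most uncertain scene direction, so the posterior uncertainty (trace of P) shrinks far faster than it would under a fixed, data-independent basis.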
Traditional light field imagers do not exploit the inherent spatio-angular correlations in the light fields of natural scenes to reduce the number of measurements and minimize the spatio-angular resolution trade-off. Here we describe a compressive light field imager that utilizes prior knowledge of sparsity/compressibility along the spatial dimension of the light field to make compressive measurements. The reconstruction performance is analyzed for three choices of measurement bases, wavelet, random, and weighted random, using a simulation study. We find that the weighted random basis outperforms both the coherent wavelet basis and the incoherent random basis on a light field data set. Specifically, the simulation study shows that the weighted random basis achieves 44% to 50% lower reconstruction error compared to the wavelet and random bases for a compression ratio of three.
For details, see Amit Ashok and Mark A. Neifeld, “Compressive light field imaging with weighted random projections,” in Proc. SPIE 8165, Adaptive Coded Aperture Imaging and Non-Imaging Sensors V, 816519 (2011). DOI 10.1117/12.894367
We describe a compressive imager that adapts the measurement basis based on past measurements within a sequential Bayesian estimation framework. A simulation study shows a 7% improvement in reconstruction performance compared to a static measurement basis.
For details, see Amit Ashok and Mark A. Neifeld, “Adaptive Compressive Imaging via Sequential Parameter Estimation,” in Computational Optical Sensing and Imaging (COSI), paper CMA3, Toronto, Canada (2011). DOI 10.1364/COSI.2011.CMA3
Feature-specific imaging (FSI) or compressive imaging involves measuring relatively few linear projections of a scene compared to the dimensionality of the scene. Researchers have exploited the spatial correlation inherent in natural scenes to design compressive imaging systems using various measurement bases, such as the Karhunen-Loève (KL) transform, random projections, the Discrete Cosine transform (DCT), and the Discrete Wavelet transform (DWT), to yield significant improvements in system performance and size, weight, and power (SWaP) compared to conventional non-compressive imaging systems. Here we extend the FSI approach to time-varying natural scenes by exploiting the inherent spatio-temporal correlations to make compressive measurements. The performance of space-time feature-specific/compressive imaging systems is analyzed using the KL measurement basis. We find that the temporal redundancy in natural time-varying scenes yields further compression relative to space-only feature-specific imaging. For a relative noise strength of 10% and a reconstruction error of 10% using 8×8×16 spatio-temporal blocks, we find about a 114x compression compared to a conventional imager, while space-only FSI realizes about a 32x compression. We also describe a candidate space-time compressive optical imaging system architecture.
For details, see Vicha Treeaporn, Amit Ashok, and Mark A. Neifeld, “Space-time Compressive Imaging,” in Proc. SPIE 8056, Visual Information Processing XX, 80560P (2011). DOI 10.1117/12.884440
Static feature-specific imaging (SFSI), employing a fixed/static measurement basis, has been shown to achieve superior reconstruction performance relative to conventional imaging under certain conditions.1-5 In this paper, we describe an adaptive FSI system in which past measurements inform the choice of measurement basis for future measurements so as to maximize the reconstruction fidelity while employing the fewest measurements. An algorithm to implement an adaptive FSI system for a principal component (PC) measurement basis is described. The resulting system is referred to as a PC-based adaptive FSI (AFSI) system. A simulation study employing the root mean squared error (RMSE) metric to quantify the reconstruction fidelity is used to analyze the performance of the PC-based AFSI system. We observe that the AFSI system achieves as much as 30% lower RMSE compared to an SFSI system.
For details, see Jun Ke, Amit Ashok, and Mark A. Neifeld, “Adaptive compressive imaging for object reconstruction,” in Proc. SPIE 7818, Adaptive Coded Aperture Imaging and Non-Imaging Sensors and Unconventional Imaging Sensor Systems II, 781809 (2010). DOI 10.1117/12.861738
Compressive imaging/sensing employing a random measurement basis does not incorporate the specific object prior information available for natural images. An alternate hybrid measurement basis is proposed that yields improved reconstruction performance for natural images.
For details, see Amit Ashok and Mark A. Neifeld, “Compressive Imaging: Hybrid Projection Design,” Invited Paper, in OSA Imaging and Applied Optics Technical Digest, paper, IWD3 (2010). DOI 10.1364/IS.2010.IWD3
Light field imagers such as the plenoptic and integral imagers inherently measure projections of the four-dimensional (4D) light field scalar function onto a two-dimensional sensor and, therefore, suffer from a spatial vs. angular resolution trade-off. Programmable light field imagers, proposed recently, overcome this spatio-angular resolution trade-off and allow high-resolution capture of the 4D light field function with multiple measurements, at the cost of a longer exposure time. However, these light field imagers do not exploit the spatio-angular correlations inherent in the light fields of natural scenes and thus result in photon-inefficient measurements. Here, we describe two architectures for compressive light field imaging that require relatively few photon-efficient measurements to obtain a high-resolution estimate of the light field while reducing the overall exposure time. Our simulation study shows that compressive light field imagers using the principal component (PC) measurement basis require four times fewer measurements and three times shorter exposure time compared to a conventional light field imager to achieve an equivalent light field reconstruction quality.
For details, see Amit Ashok and Mark A. Neifeld, “Compressive Light Field Imaging,” Invited Paper, Best Paper Award, in Proc. SPIE 7690A, Three-Dimensional Imaging, Visualization, and Display, 76900Q (2010). DOI 10.1117/12.852738
We describe and demonstrate a compact superposition imaging system, based on thin-film-shuttered multiple beamsplitters, that reduces some of the size and weight costs typically associated with conventional wide field of view imaging techniques.
For details, see Vicha Treeaporn, Amit Ashok, and Mark A. Neifeld, “Increased Field of View Through Optical Multiplexing,” OSA Imaging Systems, OSA Technical Digest, paper IMC4 (2010). DOI 10.1364/IS.2010.IMC4
We present a novel method for computing the information content of an image. We introduce the notion of task-specific information (TSI) in order to quantify imaging system performance for a given task. This new approach employs a recently discovered relationship between Shannon mutual information and minimum estimation error. We demonstrate the utility of the TSI formulation by applying it to several familiar imaging systems, including (a) geometric imagers, (b) diffraction-limited imagers, and (c) projective/compressive imagers. Imaging system TSI performance is analyzed for two tasks: (a) detection and (b) classification.
For details, see Amit Ashok, Pawan Baheti, and Mark A. Neifeld, “Task-specific information: an imaging system analysis tool,” in Proc. of SPIE 6575, Visual Information Processing XVI, 65750G (2007). DOI 10.1117/12.720878
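The mutual-information/estimation-error relationship invoked here is, we believe, the Guo-Shamai-Verdú I-MMSE identity, which for the scalar Gaussian channel y = sqrt(snr)·x + n states dI/dsnr = mmse(snr)/2. A quick numerical check using the known closed forms for a standard Gaussian input:

```python
import numpy as np

# I-MMSE identity for the scalar Gaussian channel y = sqrt(snr)*x + n,
# with x ~ N(0,1), n ~ N(0,1):
#   I(snr)    = 0.5 * ln(1 + snr)   (nats)
#   mmse(snr) = 1 / (1 + snr)
#   dI/dsnr   = mmse(snr) / 2
snr = np.linspace(0.1, 10.0, 1000)
I = 0.5 * np.log1p(snr)          # mutual information in nats
mmse = 1.0 / (1.0 + snr)         # minimum mean-square estimation error
dI = np.gradient(I, snr)         # numerical derivative of I w.r.t. snr

print(np.max(np.abs(dI - mmse / 2)))  # numerical-derivative error only
```

The identity is what lets TSI translate a Shannon-information quantity into task-relevant estimation performance.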
Conventional imaging systems can suffer from significant aliasing and/or blur distortions when the detector array in the focal plane under-samples the image. We propose to address this problem by engineering the optical PSF of the imaging system, followed by electronic post-processing to minimize the overall distortions. The optical PSF of the candidate imaging system is modified by placing a phase-mask in its aperture-stop. We consider a particular parameterization of the phase-mask and optimize its parameters to minimize the distortions. We obtain as much as a 30% improvement in the final image quality with the optimized optical PSF imager (SPEL) relative to the conventional imager.
For details, see Amit Ashok and Mark A. Neifeld, “Recent progress on multi-domain optimization for ultrathin cameras,” Invited Paper, in Proc. SPIE 6232, Intelligent Integrated Microsystems, 62320N (2006). DOI 10.1117/12.668280
An optical imaging system’s resolution can often be limited by the detector array instead of the optics. We present alternate, non-impulse-like optical point spread functions that overcome the distortions introduced by the detector array.
For details, see Mark A. Neifeld and Amit Ashok, “Imaging using Alternate Point Spread functions: Lenslets with Pseudo-Random Phase Diversity, ” Invited Paper, OSA Computational Optical Sensing and Imaging (COSI) Technical Digest, paper CMB1 (2005). DOI 10.1364/COSI.2005.CMB1
Multiple-antenna SAR interferometry involves the use of three or more antennas to reduce the overall phase ambiguities and phase noise in interferometric data. This paper presents a Bayesian approach to topographic mapping with multiple-antenna SAR interferometry. Topographic reconstruction is formulated as a parameter estimation problem in the model-based Bayesian inference framework. An InSAR simulator based on a forward model is developed for simulating SAR data from a multiple-antenna InSAR system and for evaluating the Bayesian topographic reconstruction algorithms. A Bayesian point-position algorithm is developed to estimate the height of a point in the image. A measure of the uncertainty in the estimated position and height is also defined in terms of the spread of the dominant mode of the posterior distribution. An example demonstrating the performance of the algorithm for a three-antenna InSAR system is reported, and conclusions are drawn regarding its performance; improvements are also proposed.
For details, see Amit Ashok and Andrew J. Wilkinson, “Topographic mapping with multiple antenna SAR interferometry: a Bayesian model-based approach,” in Proceedings of IEEE Geoscience and Remote Sensing Symposium (IGARSS), vol. 5, pp. 2058-2060 (2001). DOI 10.1109/IGARSS.2001.977902
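The core idea, combining several baselines so that short baselines resolve the 2π ambiguities of long, precise ones, can be sketched as a grid posterior over height. All constants below (phase-to-height factors, noise level, grid) are illustrative inventions, not the paper's system parameters:

```python
import numpy as np

rng = np.random.default_rng(3)

# Each baseline i observes a wrapped phase phi_i = wrap(k_i * h) + noise.
# Longer baselines (larger k) are precise but ambiguous; the short baseline
# resolves the ambiguity. Numbers are purely illustrative.
k = np.array([0.05, 0.4, 1.3])   # phase-to-height factors (rad per meter)
kappa = 50.0                      # von Mises concentration (phase noise model)
h_true = 42.0                     # meters

wrap = lambda p: np.angle(np.exp(1j * p))
phi = wrap(k * h_true + rng.normal(0.0, 0.1, size=k.size))

# Grid posterior over height with a flat prior; the von Mises likelihood
# handles phase wrapping automatically via cos(phase residual).
h_grid = np.linspace(0.0, 100.0, 20001)
loglik = (kappa * np.cos(phi[:, None] - k[:, None] * h_grid[None, :])).sum(axis=0)
post = np.exp(loglik - loglik.max())
post /= post.sum()

h_map = h_grid[np.argmax(post)]
# Crude uncertainty measure: spread of the posterior about the MAP estimate.
spread = np.sqrt(np.sum(post * (h_grid - h_map) ** 2))
print(h_map, spread)
```

The posterior spread plays the role of the paper's uncertainty measure derived from the dominant mode of the posterior distribution.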