Abstract:
Background: Brown adipose tissue (BAT) plays an important role in whole-body metabolism and could potentially mediate weight gain and insulin sensitivity. Although some imaging techniques allow BAT detection, there are currently no viable methods for continuous acquisition of BAT energy expenditure. We present a non-invasive technique for long-term monitoring of BAT metabolism using microwave radiometry. Methods: A multilayer 3D computational model was created in HFSS™ with 1.5 mm skin, 3-10 mm subcutaneous fat, 200 mm muscle, and a BAT region (2-6 cm3) located between fat and muscle. Based on this model, a log-spiral antenna was designed and optimized to maximize reception of thermal emissions from the target (BAT). The power absorption patterns calculated in HFSS™ were combined with simulated thermal distributions computed in COMSOL® to predict the radiometric signal measured by an ultra-low-noise microwave radiometer. The power received by the antenna was characterized as a function of different levels of BAT metabolism under cold and noradrenergic stimulation. Results: The optimized frequency band was 1.5-2.2 GHz, with an average antenna efficiency of 19%. The simulated power received by the radiometric antenna increased by 2-9 mdBm (noradrenergic stimulus) and 4-15 mdBm (cold stimulus), corresponding to a 15-fold increase in BAT metabolism. Conclusions: The results demonstrate the ability to detect thermal radiation from small volumes (2-6 cm3) of BAT located up to 12 mm deep and to monitor small changes (0.5°C) in BAT metabolism. As such, the developed miniature radiometric antenna sensor appears suitable for non-invasive long-term monitoring of BAT metabolism.
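The workflow described above can be summarized by the standard radiometry relation in which received power scales with a power-absorption-weighted tissue temperature (P ≈ η·k_B·T_B·B). The following minimal Python sketch is illustrative only; the grid, band, efficiency, and temperature values are assumptions taken from or inspired by the abstract, not the authors' code.

```python
# Minimal sketch (not the authors' code): estimating radiometric power received
# from a tissue model, assuming P = efficiency * k_B * T_B * B, with T_B the
# power-absorption-weighted mean temperature.
import numpy as np

K_B = 1.380649e-23          # Boltzmann constant [J/K]
BANDWIDTH = 0.7e9           # assumed 1.5-2.2 GHz radiometer band [Hz]
EFFICIENCY = 0.19           # average antenna efficiency reported above

def brightness_temperature(power_absorption, temperature):
    """Weight the simulated temperature field (K) by the normalized
    power absorption pattern (a proxy for the antenna weighting function)."""
    w = power_absorption / power_absorption.sum()
    return np.sum(w * temperature)

def received_power_dbm(power_absorption, temperature):
    t_b = brightness_temperature(power_absorption, temperature)
    p_watts = EFFICIENCY * K_B * t_b * BANDWIDTH
    return 10.0 * np.log10(p_watts / 1e-3)   # dBm

# Hypothetical voxel fields: baseline vs. a BAT region warmed 0.5 K by stimulation
pattern = np.random.rand(32, 32, 32)
t_rest = np.full((32, 32, 32), 310.15)       # ~37 degC tissue
t_stim = t_rest.copy()
t_stim[10:14, 10:14, 10:14] += 0.5           # small BAT region heats up
print(received_power_dbm(pattern, t_stim) - received_power_dbm(pattern, t_rest))
```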
Abstract:
Conventional film-based X-ray imaging systems are being replaced by their digital equivalents. Different approaches are being followed, considering direct or indirect conversion, with the latter technique dominating. The typical indirect-conversion X-ray panel detector uses a phosphor for X-ray conversion coupled to a large-area array of amorphous-silicon-based optical sensors and switching thin-film transistors (TFTs). The pixel information can then be read out by switching the corresponding line and column transistors, routing the signal to an external amplifier. In this work we follow an alternative approach, where the electrical switching performed by the TFTs is replaced by optical scanning using a low-power laser beam and a sensing/switching PINPIN structure, thus resulting in a simpler device. The optically active device is a PINPIN array, sharing both front and back electrical contacts, deposited over a glass substrate. During X-ray exposure, each sensing-side photodiode collects photons generated by the scintillator screen (560 nm), charging its internal capacitance. Subsequently, a laser beam (445 nm) scans the switching diodes (back side), retrieving the stored charge sequentially and reconstructing the image. In this paper we present recent work on the optoelectronic characterization of the PINPIN structure to be incorporated in the X-ray image sensor. The results of the optoelectronic characterization of the device and its dependence on the scanning beam parameters are presented and discussed. Preliminary results of line scans are also presented. (C) 2014 Elsevier B.V. All rights reserved.
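To make the store-then-scan readout scheme concrete, here is a minimal, purely illustrative Python sketch of sequential pixel readout; the array size, responsivity, and transfer gain are hypothetical and not taken from the paper.

```python
# Minimal sketch (illustrative only): charge stored during X-ray exposure is
# retrieved pixel by pixel, emulating a laser-addressed switching diode readout.
import numpy as np

def expose(flux, integration_time, responsivity=1.0):
    """Charge stored on each pixel's internal capacitance during exposure."""
    return responsivity * flux * integration_time

def laser_scan_readout(stored_charge, transfer_gain=0.9):
    """Sequentially read each pixel; only a fraction of the stored charge is
    assumed to be transferred per scan."""
    image = np.zeros_like(stored_charge)
    rows, cols = stored_charge.shape
    for r in range(rows):                 # scan line by line
        for c in range(cols):
            image[r, c] = transfer_gain * stored_charge[r, c]
            stored_charge[r, c] = 0.0     # charge removed once read
    return image

flux = np.random.poisson(lam=50, size=(8, 8)).astype(float)  # toy 560 nm photon flux
charge = expose(flux, integration_time=0.1)
print(laser_scan_readout(charge))
```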
Abstract:
In this paper, a damage-detection approach using the Mahalanobis distance with structural forced dynamic response data, in the form of transmissibility, is proposed. Transmissibility, as a damage-sensitive feature, varies in accordance with the damage level. In addition, the Mahalanobis distance can distinguish the damaged structural state from the undamaged one by condensing the baseline data. For comparison, the Mahalanobis distance results using transmissibility are compared with those using frequency response functions. The experimental results reveal a significant capacity for damage detection, and the comparison between the use of transmissibility and frequency response functions shows that, in both cases, the different damage scenarios could be well detected. Copyright (c) 2015 John Wiley & Sons, Ltd.
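A minimal sketch of this kind of baseline-trained outlier detection, with transmissibility magnitudes as the damage-sensitive feature and a Mahalanobis-distance threshold, is shown below; the data, dimensions, and threshold choice are hypothetical and not taken from the paper.

```python
# Minimal sketch (assumed workflow): Mahalanobis-distance damage detection
# using transmissibility magnitudes as features.
import numpy as np

def fit_baseline(features):
    """Mean and (pseudo-)inverse covariance of baseline (undamaged) features,
    rows = measurements, columns = spectral lines."""
    mu = features.mean(axis=0)
    cov = np.cov(features, rowvar=False)
    return mu, np.linalg.pinv(cov)

def mahalanobis(x, mu, cov_inv):
    d = x - mu
    return float(np.sqrt(d @ cov_inv @ d))

# Hypothetical data: baseline transmissibility magnitudes and two test cases
rng = np.random.default_rng(0)
baseline = rng.normal(1.0, 0.02, size=(50, 128))       # 50 undamaged measurements
mu, cov_inv = fit_baseline(baseline)

test_undamaged = rng.normal(1.0, 0.02, size=128)
test_damaged = test_undamaged + 0.1 * np.hanning(128)  # simulated local shift

threshold = np.percentile([mahalanobis(b, mu, cov_inv) for b in baseline], 95)
for name, x in [("undamaged", test_undamaged), ("damaged", test_damaged)]:
    print(name, "flagged:", mahalanobis(x, mu, cov_inv) > threshold)
```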
Abstract:
Beam-like structures are among the most common components in engineering practice, and single-side damage is often encountered. In this study, single-side damage in a free-free beam is analysed numerically with three different finite element models, namely solid, shell and beam models, to demonstrate their performance in simulating real structures. As in the experiment, damage is introduced into one side of the beam, and natural frequencies are extracted from the simulations and compared with experimental and analytical results. Mode shapes are also analysed with the modal assurance criterion. The simulation results reveal a good performance of all three models in extracting natural frequencies; in the intact state, the solid model performs better than the shell model, which in turn performs better than the beam model. For damaged states, the natural frequencies captured from the solid model are more sensitive to damage severity than those from the shell model, while the shell model performs similarly to the beam model in distinguishing damage. The main contribution of this paper is the comparison of the three finite element models against experimental data as well as analytical solutions. The finite element results show relatively good performance.
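For reference, the modal assurance criterion used to compare mode shapes can be computed as in the following sketch; the mode shapes shown are synthetic placeholders, not the study's data.

```python
# Minimal sketch (assumed convention): modal assurance criterion (MAC) between
# finite element and experimental mode shapes.
import numpy as np

def mac(phi_a, phi_b):
    """MAC between two mode shape vectors; 1 means perfectly correlated shapes."""
    num = np.abs(phi_a.conj() @ phi_b) ** 2
    den = (phi_a.conj() @ phi_a).real * (phi_b.conj() @ phi_b).real
    return float(num / den)

def mac_matrix(modes_a, modes_b):
    """MAC matrix for two sets of modes stored as columns."""
    return np.array([[mac(modes_a[:, i], modes_b[:, j])
                      for j in range(modes_b.shape[1])]
                     for i in range(modes_a.shape[1])])

# Hypothetical mode shapes: 3 modes sampled at 20 points along the beam
x = np.linspace(0, 1, 20)
fe_modes = np.column_stack([np.cos((k + 0.5) * np.pi * x) for k in range(3)])
exp_modes = fe_modes + 0.05 * np.random.default_rng(1).normal(size=fe_modes.shape)
print(np.round(mac_matrix(fe_modes, exp_modes), 3))
```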
Abstract:
The development of high spatial resolution airborne and spaceborne sensors has improved the capability of ground-based data collection in the fields of agriculture, geography, geology, mineral identification, detection [2, 3], and classification [4–8]. The signal read by the sensor from a given spatial element of resolution and at a given spectral band is a mixture of components originating from the constituent substances, termed endmembers, located at that element of resolution. This chapter addresses hyperspectral unmixing, which is the decomposition of the pixel spectra into a collection of constituent spectra, or spectral signatures, and their corresponding fractional abundances indicating the proportion of each endmember present in the pixel [9, 10].

Depending on the mixing scales at each pixel, the observed mixture is either linear or nonlinear [11, 12]. The linear mixing model holds when the mixing scale is macroscopic [13], whereas the nonlinear model holds when the mixing scale is microscopic (i.e., intimate mixtures) [14, 15]. The linear model assumes negligible interaction among distinct endmembers [16, 17]; the nonlinear model assumes that incident solar radiation is scattered by the scene through multiple bounces involving several endmembers [18].

Under the linear mixing model, and assuming that the number of endmembers and their spectral signatures are known, hyperspectral unmixing is a linear problem which can be addressed, for example, under the maximum likelihood setup [19], the constrained least-squares approach [20], spectral signature matching [21], the spectral angle mapper [22], and subspace projection methods [20, 23, 24]. Orthogonal subspace projection [23] reduces the data dimensionality, suppresses undesired spectral signatures, and detects the presence of a spectral signature of interest. The basic concept is to project each pixel onto a subspace that is orthogonal to the undesired signatures. As shown in Settle [19], the orthogonal subspace projection technique is equivalent to the maximum likelihood estimator. This projection technique was extended by three unconstrained least-squares approaches [24] (signature space orthogonal projection, oblique subspace projection, and target signature space orthogonal projection). Other works using the maximum a posteriori probability (MAP) framework [25] and projection pursuit [26, 27] have also been applied to hyperspectral data.

In most cases, however, the number of endmembers and their signatures are not known. Independent component analysis (ICA) is an unsupervised source separation process that has been applied with success to blind source separation, feature extraction, and unsupervised recognition [28, 29]. ICA consists of finding a linear decomposition of the observed data into statistically independent components. Given that hyperspectral data are, in given circumstances, linear mixtures, ICA comes to mind as a possible tool to unmix this class of data. In fact, the application of ICA to hyperspectral data has been proposed in reference 30, where endmember signatures are treated as sources and the mixing matrix is composed of the abundance fractions, and in references 9, 25, and 31–38, where the sources are the abundance fractions of each endmember. In the first approach, we face two problems: (1) the number of samples is limited to the number of channels, and (2) the process of pixel selection, playing the role of mixed sources, is not straightforward.
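Before turning to the second ICA formulation, the following minimal sketch illustrates the supervised setting just described: the linear mixing model with known signatures, a nonnegative least-squares abundance estimate, and an orthogonal subspace projection score. All signatures and data are randomly generated placeholders, not material from the chapter.

```python
# Minimal sketch (illustrative): linear mixing model y = M a + n with known
# endmember signatures M, abundance estimation, and an OSP detection score.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
bands, endmembers = 200, 3

M = rng.uniform(0.1, 0.9, size=(bands, endmembers))   # hypothetical signatures
a_true = rng.dirichlet(np.ones(endmembers))            # abundances: nonnegative, sum to one
y = M @ a_true + 0.01 * rng.normal(size=bands)         # observed pixel spectrum

# Abundance estimation by nonnegative least squares (sum-to-one not enforced here)
a_hat, _ = nnls(M, y)
a_hat /= a_hat.sum()                                   # crude renormalization toward full additivity
print("true:", np.round(a_true, 3), "estimated:", np.round(a_hat, 3))

# Orthogonal subspace projection (OSP): suppress the undesired signatures U,
# then score the pixel against the signature of interest d
d, U = M[:, [0]], M[:, 1:]
P_perp = np.eye(bands) - U @ np.linalg.pinv(U)         # projector orthogonal to span(U)
print("OSP score:", (d.T @ P_perp @ y).item())
```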
In the second approach, ICA is based on the assumption of mutually independent sources, which is not the case for hyperspectral data, since the sum of the abundance fractions is constant, implying dependence among the abundances. This dependence compromises the applicability of ICA to hyperspectral images. In addition, hyperspectral data are immersed in noise, which degrades ICA performance. Independent factor analysis (IFA) [39] was introduced as a method for recovering independent hidden sources from their observed noisy mixtures. IFA implements two steps: first, the source densities and noise covariance are estimated from the observed data by maximum likelihood; second, the sources are reconstructed by an optimal nonlinear estimator. Although IFA is a well-suited technique to unmix independent sources under noisy observations, the dependence among abundance fractions in hyperspectral imagery compromises, as in the ICA case, the IFA performance.

Considering the linear mixing model, hyperspectral observations lie in a simplex whose vertices correspond to the endmembers. Several approaches [40–43] have exploited this geometric feature of hyperspectral mixtures [42]. The minimum volume transform (MVT) algorithm [43] determines the simplex of minimum volume containing the data. MVT-type approaches are computationally complex: usually, these algorithms first find the convex hull defined by the observed data and then fit a minimum volume simplex to it. Aiming at lower computational complexity, algorithms such as vertex component analysis (VCA) [44], the pixel purity index (PPI) [42], and N-FINDR [45] still find the minimum volume simplex containing the data cloud, but they assume the presence in the data of at least one pure pixel of each endmember. This is a strong requirement that may not hold in some data sets. In any case, these algorithms find the set of purest pixels in the data.

Hyperspectral sensors collect spatial images over many narrow, contiguous bands, yielding large amounts of data. For this reason, the processing of hyperspectral data, including unmixing, is very often preceded by a dimensionality reduction step to reduce computational complexity and to improve the signal-to-noise ratio (SNR). Principal component analysis (PCA) [46], maximum noise fraction (MNF) [47], and singular value decomposition (SVD) [48] are three well-known projection techniques widely used in remote sensing in general and in unmixing in particular. A newly introduced method [49] exploits the structure of hyperspectral mixtures, namely the fact that spectral vectors are nonnegative. The computational complexity associated with these techniques is an obstacle to real-time implementations; to overcome this problem, band selection [50] and non-statistical [51] algorithms have been introduced.

This chapter addresses hyperspectral data source dependence and its impact on ICA and IFA performance. The study considers simulated and real data and is based on mutual information minimization. Hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications, namely signature variability [52–54], abundance constraints, topography modulation, and system noise. The computation of the mutual information is based on fitting mixtures of Gaussians (MOG) to the data. The MOG parameters (number of components, means, covariances, and weights) are inferred using a minimum description length (MDL) based algorithm [55].
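As an aside on the dimensionality-reduction step mentioned above, the following sketch projects simulated linear mixtures onto a low-dimensional signal subspace obtained by SVD; the cube size, noise level, and endmember count are assumptions chosen purely for illustration.

```python
# Minimal sketch (illustrative): SVD-based dimensionality reduction of simulated
# hyperspectral data prior to unmixing, keeping the subspace spanned by the
# strongest singular vectors (dimension set by the assumed endmember count p).
import numpy as np

rng = np.random.default_rng(1)
pixels, bands, p = 1000, 200, 3                           # p = assumed number of endmembers

M = rng.uniform(0.1, 0.9, size=(bands, p))                # hypothetical endmember signatures
A = rng.dirichlet(np.ones(p), size=pixels)                # abundances (positive, sum to one)
Y = A @ M.T + 0.005 * rng.normal(size=(pixels, bands))    # noisy linear mixtures

Y_centered = Y - Y.mean(axis=0)
_, s, Vt = np.linalg.svd(Y_centered, full_matrices=False)
E = Vt[:p].T                                              # basis of the p-dimensional signal subspace
Y_reduced = Y_centered @ E                                # pixels projected onto the subspace
print(Y.shape, "->", Y_reduced.shape)
print("energy kept:", float((s[:p] ** 2).sum() / (s ** 2).sum()))
```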
We study the behavior of the mutual information as a function of the unmixing matrix. The conclusion is that the unmixing matrix that minimizes the mutual information may be very far from the true one. Nevertheless, some abundance fractions may be well separated, mainly in the presence of strong signature variability, a large number of endmembers, and high SNR. We end the chapter by sketching a new methodology to blindly unmix hyperspectral data, in which the abundance fractions are modeled as a mixture of Dirichlet sources. This model enforces the positivity and constant-sum (full additivity) constraints on the sources (see the sketch at the end of this section). The mixing matrix is inferred by an expectation-maximization (EM)-type algorithm. This approach is in the vein of references 39 and 56, replacing independent sources represented by MOG with mixtures of Dirichlet sources. Compared with geometric-based approaches, the advantage of this model is that there is no need for pure pixels in the observations.

The chapter is organized as follows. Section 6.2 presents a spectral radiance model and formulates spectral unmixing as a linear problem accounting for abundance constraints, signature variability, topography modulation, and system noise. Section 6.3 presents a brief review of the ICA and IFA algorithms. Section 6.4 illustrates the performance of IFA and of some well-known ICA algorithms with experimental data. Section 6.5 studies the limitations of ICA and IFA in unmixing hyperspectral data. Section 6.6 presents results of ICA based on real data. Section 6.7 describes the new blind unmixing scheme and some illustrative examples. Section 6.8 concludes with some remarks.
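The constraints motivating the Dirichlet source model described above, positivity and full additivity, can be seen directly by sampling from such a mixture, as in this illustrative snippet; the mixture weights and concentration parameters are arbitrary placeholders.

```python
# Minimal sketch (illustrative only): abundance fractions drawn from a
# two-component mixture of Dirichlet densities are nonnegative and sum to one
# by construction.
import numpy as np

rng = np.random.default_rng(2)
weights = np.array([0.6, 0.4])                 # hypothetical mixture weights
alphas = np.array([[9.0, 1.0, 1.0],            # component concentration parameters
                   [2.0, 2.0, 6.0]])

def sample_dirichlet_mixture(n):
    comp = rng.choice(len(weights), size=n, p=weights)
    return np.vstack([rng.dirichlet(alphas[c]) for c in comp])

abundances = sample_dirichlet_mixture(5)
print(abundances)                 # all entries nonnegative
print(abundances.sum(axis=1))     # each row sums to one (full additivity)
```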