22 results for maximum contrast analysis
in Cambridge University Engineering Department Publications Database
Abstract:
Electron multiplication charge-coupled devices (EMCCD) are widely used for photon counting experiments and measurements of low intensity light sources, and are extensively employed in biological fluorescence imaging applications. These devices have a complex statistical behaviour that is often not fully considered in the analysis of EMCCD data. Robust and optimal analysis of EMCCD images requires an understanding of their noise properties, in particular to exploit fully the advantages of Bayesian and maximum-likelihood analysis techniques, whose value is increasingly recognised in biological imaging for obtaining robust quantitative measurements from challenging data. To improve our own EMCCD analysis and in an effort to aid that of the wider bioimaging community, we present, explain and discuss a detailed physical model for EMCCD noise properties, giving a likelihood function for image counts in each pixel for a given incident intensity, and we explain how to measure the parameters for this model from various calibration images. © 2013 Hirsch et al.
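The pixel model described can be sketched as a Monte Carlo simulation: Poisson photoelectrons, a Gamma approximation to the stochastic electron-multiplication gain (a standard approximation for the EM register), Gaussian read noise, and a fixed bias. All parameter values below are illustrative placeholders, not the paper's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_emccd_pixel(flux, gain=300.0, read_noise=30.0, bias=100.0,
                         n_frames=100_000):
    """Monte Carlo sketch of one EMCCD pixel: Poisson photoelectrons, a
    Gamma approximation to the stochastic EM-register gain, Gaussian read
    noise and a fixed bias offset.  All parameter values are illustrative,
    not calibrations from the paper."""
    electrons = rng.poisson(flux, size=n_frames)
    # Gamma(k, scale=gain) approximates the register output for k electrons;
    # frames with zero electrons are masked to zero output
    amplified = rng.gamma(np.maximum(electrons, 1e-12), gain) * (electrons > 0)
    return amplified + rng.normal(0.0, read_noise, size=n_frames) + bias

counts = simulate_emccd_pixel(flux=2.0)
mean_est = counts.mean()    # expected: bias + gain * flux
```

Note the excess noise of the multiplication register: the Gamma stage roughly doubles the output variance relative to a noiseless amplifier, which is exactly the kind of effect a naive Gaussian likelihood would miss.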
Abstract:
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses "slowness" as a heuristic by which to extract semantic information from multi-dimensional time-series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
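The deterministic algorithm that the probabilistic model reinterprets can be sketched in a few lines of linear SFA: whiten the inputs, then find the whitened directions whose temporal derivatives have least variance. The whitening-plus-eigendecomposition recipe below is the textbook formulation, not code from the paper.

```python
import numpy as np

def sfa(X):
    """Minimal linear slow feature analysis: whiten the input, then find the
    whitened directions whose discrete time derivative has least variance.
    X has shape (T, d); returns the projected features, slowest first."""
    X = X - X.mean(axis=0)
    cov = X.T @ X / len(X)
    evals, evecs = np.linalg.eigh(cov)
    W = evecs / np.sqrt(evals)           # columns whiten the data
    Z = X @ W
    dZ = np.diff(Z, axis=0)              # slowness objective: variance of dZ
    dvals, dvecs = np.linalg.eigh(dZ.T @ dZ / len(dZ))
    return Z @ dvecs                     # eigh is ascending: slowest first

# Toy example: a slow sine mixed with a fast one; linear SFA should place
# the slow component in the first output channel (up to sign)
t = np.linspace(0.0, 2.0 * np.pi, 2000)
slow, fast = np.sin(t), np.sin(29.0 * t)
mix = np.column_stack([slow + 0.5 * fast, 0.5 * slow - fast])
Y = sfa(mix)
```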
Abstract:
Gene microarray technology is highly effective in screening for differential gene expression and has hence become a popular tool in the molecular investigation of cancer. When applied to tumours, molecular characteristics may be correlated with clinical features such as response to chemotherapy. Exploitation of the huge amount of data generated by microarrays is difficult, however, and constitutes a major challenge in the advancement of this methodology. Independent component analysis (ICA), a modern statistical method, allows us to better understand data in such complex and noisy measurement environments. The technique has the potential to significantly increase the quality of the resulting data and improve the biological validity of subsequent analysis. We performed microarray experiments on 31 postmenopausal endometrial biopsies, comprising 11 benign and 20 malignant samples. We compared ICA to the established methods of principal component analysis (PCA), Cyber-T, and SAM. We show that ICA generated patterns that clearly characterized the malignant samples studied, in contrast to PCA. Moreover, ICA improved the biological validity of the genes identified as differentially expressed in endometrial carcinoma, compared to those found by Cyber-T and SAM. In particular, several genes involved in lipid metabolism that are differentially expressed in endometrial carcinoma were only found using this method. This report highlights the potential of ICA in the analysis of microarray data.
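For orientation, a bare-bones symmetric FastICA sketch on synthetic mixtures: FastICA is one standard ICA algorithm, shown here only to illustrate the kind of decomposition applied to expression data; the study's exact ICA implementation may differ.

```python
import numpy as np

rng = np.random.default_rng(1)

def fastica(X, n_iter=200):
    """Bare-bones symmetric FastICA (tanh nonlinearity), an illustrative
    stand-in for whatever ICA variant a study uses.  X: (n_samples, d)."""
    X = X - X.mean(axis=0)
    evals, evecs = np.linalg.eigh(X.T @ X / len(X))
    Z = (X @ evecs) / np.sqrt(evals)          # whitened data
    d = Z.shape[1]
    W = rng.standard_normal((d, d))
    for _ in range(n_iter):
        G = np.tanh(Z @ W.T)
        # Fixed-point update: E[g(w.z) z] - E[g'(w.z)] w, per row of W
        W = (G.T @ Z) / len(Z) - np.diag((1.0 - G ** 2).mean(axis=0)) @ W
        u, _, vt = np.linalg.svd(W)           # symmetric decorrelation
        W = u @ vt
    return Z @ W.T

# Two non-Gaussian "expression programmes" mixed linearly
S = np.column_stack([rng.laplace(size=5000), rng.uniform(-1, 1, size=5000)])
X = S @ np.array([[1.0, 0.6], [0.4, 1.0]])
unmixed = fastica(X)
```

Each recovered column should correlate (up to sign and scale) with one of the hidden sources, which is the sense in which ICA "characterizes" latent patterns that PCA's variance criterion can miss.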
Abstract:
Cluster analysis of ranking data, which occurs in consumer questionnaires, voting forms or other inquiries of preferences, attempts to identify typical groups of rank choices. Empirically measured rankings are often incomplete, i.e. different numbers of filled rank positions cause heterogeneity in the data. We propose a mixture approach for clustering of heterogeneous rank data. Rankings of different lengths can be described and compared by means of a single probabilistic model. A maximum entropy approach avoids hidden assumptions about missing rank positions. Parameter estimators and an efficient EM algorithm for unsupervised inference are derived for the ranking mixture model. Experiments on both synthetic data and real-world data demonstrate significantly improved parameter estimates on heterogeneous data when the incomplete rankings are included in the inference process.
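As a heavily simplified illustration of the E-step/M-step structure (not the paper's estimator): a two-component Mallows-style mixture with fixed modal rankings and dispersion, where an incomplete ranking contributes Kendall disagreements over its observed pairs only, and EM updates just the mixing weights. The modes, the dispersion `theta`, and the weights-only update are all assumptions of this sketch.

```python
import numpy as np
from itertools import combinations

def kendall_partial(r, mode):
    """Kendall disagreements between a (possibly incomplete) ranking r and a
    full modal ranking, counted over the observed pairs only, so rankings of
    different lengths can be compared without guessing the missing positions."""
    pos = {item: i for i, item in enumerate(r)}
    return sum((pos[a] < pos[b]) != (mode.index(a) < mode.index(b))
               for a, b in combinations(pos, 2))

def em_weights(data, modes, theta=1.0, n_iter=50):
    """EM for the mixing weights of a Mallows-style mixture with fixed modal
    rankings and dispersion -- an illustrative sketch, not the full estimator."""
    w = np.full(len(modes), 1.0 / len(modes))
    D = np.array([[kendall_partial(r, m) for m in modes] for r in data])
    for _ in range(n_iter):
        resp = w * np.exp(-theta * D)            # E-step (unnormalised)
        resp /= resp.sum(axis=1, keepdims=True)
        w = resp.mean(axis=0)                    # M-step for the weights
    return w, resp

# Heterogeneous data: full rankings and top-2 lists over items 0..3
data = [(0, 1, 2, 3), (0, 1), (3, 2, 1, 0), (3, 2), (0, 2), (1, 0, 2, 3)]
modes = [(0, 1, 2, 3), (3, 2, 1, 0)]
w, resp = em_weights(data, modes)
```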
Abstract:
An experimental and theoretical investigation of premixed turbulent combustion in an engine simulator is presented. The distribution of hydroxyl radicals formed in the combustion of propane/air mixtures was visualized by 2D-LIF and used to monitor the progress of the combustion process. For stoichiometric mixtures, images showed a continuous wrinkled flame front, while in lean (λ=1.5) mixtures, local flame extinction was observed as discontinuities in the reaction zone. A bright active reaction zone was still observed in flame inlets and closed concave structures. The effects of self-absorption and of collisional quenching on the fluorescence signal are considered and appear to have only a minor net influence on the shape and width of the flame front. The images are evaluated and interpreted in terms of the Lewis number effect and the laminar flamelet model. Analysis was performed by determining the contour lines of the images (specifically, the ratios of average maximum to equilibrium OH concentration) and comparing with corresponding ratios from unstrained flame simulations. The results show that although the degree of turbulence is not high enough for straining effects to be important, flamelet curvature does play a significant role in the combustion of lean mixtures; this is manifested by a mean effective flame velocity that is less than the laminar burning velocity. © 1991 Combustion Institute.
Abstract:
This paper investigates the performance of diode temperature sensors when operated at ultra high temperatures (above 250°C). A low leakage Silicon On Insulator (SOI) diode was designed and fabricated in a 1 μm CMOS process and suspended within a dielectric membrane for efficient thermal insulation. The diode can be used for accurate temperature monitoring in a variety of sensors such as microcalorimeters, IR detectors, or thermal flow sensors. A CMOS compatible micro-heater was integrated with the diode for local heating. It was found that the diode forward voltage exhibited a linear dependence on temperature as long as the reverse saturation current remained below the forward driving current. We have proven experimentally that the maximum temperature can be as high as 550°C. Long term continuous operation at high temperatures (400°C) showed good stability of the voltage drop. Furthermore, we carried out a detailed theoretical analysis to determine the maximum operating temperature and explain the presence of nonlinearity factors at ultra high temperatures. © 2008 IEEE.
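The mechanism can be illustrated with the textbook diode equation: the forward drop at constant drive current is near-linear in temperature until the saturation current approaches the drive current, at which point the logarithm collapses and the sensor loses sensitivity. All parameter values below are hypothetical, not those of the fabricated device.

```python
import numpy as np

k_B = 8.617e-5              # Boltzmann constant in eV/K
E_g = 1.12                  # silicon bandgap in eV, taken constant here

def forward_voltage(T, I_drive=1e-4, I_s0=1e-6, n=1.0):
    """Forward drop of a diode driven at constant current I_drive, using the
    textbook T^3 * exp(-Eg/kT) form for the saturation current.  I_s0,
    I_drive and the ideality factor n are hypothetical placeholders."""
    I_s = I_s0 * T ** 3 * np.exp(-E_g / (k_B * T))
    return n * k_B * T * np.log(I_drive / I_s + 1.0)

T = np.linspace(300.0, 900.0, 601)
V = forward_voltage(T)
# Below ~500 K the drop is near-linear in T; once I_s approaches I_drive
# the logarithm collapses and the forward voltage falls towards zero.
```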
Abstract:
Common-rail fuel injection systems on modern light duty diesel engines are effectively able to respond instantaneously to changes in the demanded injection quantity. In contrast, the air-system is subject to significantly slower dynamics, primarily due to filling/emptying effects in the manifolds and turbocharger inertia. The behaviour of the air-path in a diesel engine is therefore the main limiting factor in terms of engine-out emissions during transient operation. This paper presents a simple mean-value model for the air-path during throttled operation, which is used to design a feed-forward controller that delivers very rapid changes in the in-cylinder charge properties. The feed-forward control action is validated using a state-of-the-art sampling system that allows true cycle-by-cycle measurement of the in-cylinder CO2 concentration. © 2011 SAE International.
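The filling/emptying dynamics and the feed-forward inversion can be illustrated with a toy first-order model of manifold pressure; the time constant and the model structure are placeholders, not the paper's mean-value model.

```python
import numpy as np

def simulate(u, tau, dt, p0=0.0):
    """Forward-Euler integration of the toy manifold model tau*dp/dt = u - p,
    where p stands in for an in-cylinder charge property and u for the
    actuator command.  The first-order structure is illustrative only."""
    p = np.empty(len(u))
    p_k = p0
    for k, u_k in enumerate(u):
        p_k += dt / tau * (u_k - p_k)
        p[k] = p_k
    return p

dt, tau = 0.01, 0.5
t = np.arange(0.0, 3.0, dt)
p_ref = np.where(t < 1.0, 0.0, 1.0)          # demanded step in charge property
# Feed-forward by model inversion: u = p_ref + tau * d(p_ref)/dt
u_ff = p_ref + tau * np.gradient(p_ref, dt)
p_ff = simulate(u_ff, tau, dt)
p_naive = simulate(p_ref, tau, dt)           # commanding the reference directly
```

Half a second after the step, the inverted feed-forward command has essentially reached the target while the naive command is still climbing with the open-loop time constant, which is the slow air-path behaviour the paper sets out to compensate.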
Abstract:
Mandrel peel tests with mandrels or rollers of varying diameters have been carried out using Mylar backing of several thicknesses and a commercial synthetic acrylic adhesive. The results are critically compared with the numerical predictions of the peeling software package ICPeel. In addition, a finite element model of the mandrel peeling process has been completed, which gives good agreement with experiment provided that appropriate mechanical properties of the adherend and adhesive are used, including the effects of adherend constraint. The influence of the thickness of the backing is also considered, and both experiment and analysis confirm that there is a backing thickness at which the peel force for a laminate of this sort shows a maximum. © 2010 Blackwell Publishing Ltd.
Abstract:
This work is concerned with the characteristics of the impact force produced when two randomly vibrating elastic bodies collide with each other, or when a single randomly vibrating elastic body collides with a stop. The impact condition includes a non-linear spring, which may represent, for example, a Hertzian contact, and in the case of a single body, closed form approximate expressions are derived for the duration and magnitude of the impact force and for the maximum deceleration at the impact point. For the case of two impacting bodies, a set of algebraic equations is derived which can be solved numerically to yield the quantities of interest. The approach is applied to a beam impacting a stop, a plate impacting a stop, and to two impacting beams, and in each case a comparison is made with detailed numerical simulations. Aspects of the statistics of impact velocity are also considered, including the probability that the impact velocity will exceed a specified value within a certain time. © 2012 Elsevier Ltd. All rights reserved.
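The role of the nonlinear contact spring can be caricatured by the textbook limit of a rigid mass striking a Hertzian stop, a far simpler setting than the randomly vibrating elastic bodies analysed in the paper; the parameters below are arbitrary.

```python
def hertz_impact(m=1.0, k=1e6, v0=0.5, dt=1e-6):
    """Semi-implicit Euler integration of a rigid mass hitting a Hertzian
    stop, m*x'' = -k*x^(3/2) while x > 0, starting at impact velocity v0.
    Returns the peak contact force and the contact duration."""
    x, v, t, f_max = 0.0, v0, 0.0, 0.0
    while v > 0.0 or x > 0.0:
        f = k * max(x, 0.0) ** 1.5
        f_max = max(f_max, f)
        v -= f / m * dt
        x += v * dt
        t += dt
    return f_max, t

f_max, duration = hertz_impact()
# Energy balance gives the closed form for this single-mass model:
# delta_max = (5 m v0^2 / (4 k))^(2/5), peak force k * delta_max^(3/2)
delta_max = (5.0 * 1.0 * 0.5 ** 2 / (4.0 * 1e6)) ** 0.4
f_ref = 1e6 * delta_max ** 1.5
```

The numerical peak force matches the energy-balance value, and the contact duration is a small multiple of delta_max/v0, which is the classical Hertz scaling.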
Abstract:
A direct numerical simulation (DNS) database of freely propagating statistically planar turbulent premixed flames with a range of different turbulent Reynolds numbers has been used to assess the performance of algebraic flame surface density (FSD) models based on a fractal representation of the flame wrinkling factor. The turbulent Reynolds number Ret has been varied by modifying the Karlovitz number Ka and the Damköhler number Da independently of each other in such a way that the flames remain within the thin reaction zones regime. It has been found that the turbulent Reynolds number and the Karlovitz number both have a significant influence on the fractal dimension, which is found to increase with increasing Ret and Ka before reaching an asymptotic value for large values of Ret and Ka. A parameterisation of the fractal dimension is presented in which the effects of the Reynolds and the Karlovitz numbers are explicitly taken into account. By contrast, the inner cut-off scale normalised by the Zel'dovich flame thickness ηi/δz does not exhibit any significant dependence on Ret for the cases considered here. The performance of several algebraic FSD models has been assessed based on various criteria. Most of the algebraic models show a deterioration in performance with increasing large eddy simulation (LES) filter width. © 2012 Mohit Katragadda et al.
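The fractal closure underlying these models can be sketched as follows. The wrinkling-factor power law Xi = (Delta/eta_i)^(D-2) is the standard fractal form; the saturating fit for D below is an illustrative placeholder, not the parameterisation derived in the paper.

```python
import numpy as np

def fractal_dimension(Re_t, Ka, D_inf=7.0 / 3.0, c=0.1):
    """Illustrative saturating fit: the fractal dimension D rises from 2
    (no wrinkling) towards an asymptote D_inf as Re_t and Ka grow.  The
    functional form and the constant c are placeholders only."""
    return 2.0 + (D_inf - 2.0) * (1.0 - np.exp(-c * np.sqrt(Re_t * Ka)))

def wrinkling_factor(filter_width, inner_cutoff, D):
    """Fractal closure for the flame wrinkling factor:
    Xi = (Delta / eta_i)^(D - 2)."""
    return (filter_width / inner_cutoff) ** (D - 2.0)

D = fractal_dimension(Re_t=100.0, Ka=10.0)
Xi = wrinkling_factor(filter_width=16.0, inner_cutoff=1.0, D=D)
```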
Abstract:
Reinforced concrete buildings in low-to-moderate seismic zones are often designed only for gravity loads in accordance with the non-seismic detailing provisions. Deficient detailing of columns and beam-column joints can lead to unpredictable brittle failures even under moderate earthquakes. Therefore, a reliable estimate of structural response is required for the seismic evaluation of these structures. For this purpose, analytical models for both interior and exterior slab-beam-column subassemblages and for a 1/3 scale model frame were implemented into the nonlinear finite element platform OpenSees. Comparison between the analytical results and experimental data available in the literature is carried out using nonlinear pushover analyses and nonlinear time history analysis for the subassemblages and the model frame, respectively. Furthermore, the seismic fragility assessment of reinforced concrete buildings is performed on a set of non-ductile frames using nonlinear time history analyses. The fragility curves, which are developed for various damage states based on the maximum interstory drift ratio, are characterized in terms of peak ground acceleration and spectral acceleration using a suite of ground motions representative of the seismic hazard in the region.
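Fragility curves of this kind are commonly parameterised as lognormal in the intensity measure; a sketch with hypothetical median capacity and dispersion follows (the lognormal form is the standard convention, not necessarily the paper's fitted model).

```python
import numpy as np
from math import erf, log, sqrt

rng = np.random.default_rng(3)

def fragility(a, theta, beta):
    """Lognormal fragility curve: P(damage state reached | PGA = a)
    = Phi((ln a - ln theta) / beta), with theta the median capacity (in g)
    and beta the lognormal dispersion -- both hypothetical values here."""
    z = (log(a) - log(theta)) / beta
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

# Monte Carlo cross-check: lognormal drift demand whose median equals the
# PGA level, exceeding a fixed capacity threshold
theta, beta = 0.4, 0.5
pga = 0.4
drift = np.exp(log(pga) + beta * rng.standard_normal(200_000))
frac = (drift > theta).mean()   # should match fragility(pga, theta, beta)
```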
Abstract:
We study unsupervised learning in a probabilistic generative model for occlusion. The model uses two types of latent variables: one indicates which objects are present in the image, and the other how they are ordered in depth. This depth order then determines how the positions and appearances of the objects present, specified in the model parameters, combine to form the image. We show that the object parameters can be learnt from an unlabelled set of images in which objects occlude one another. Exact maximum-likelihood learning is intractable. However, we show that tractable approximations to Expectation Maximization (EM) can be found if the training images each contain only a small number of objects on average. In numerical experiments it is shown that these approximations recover the correct set of object parameters. Experiments on a novel version of the bars test using colored bars, and experiments on more realistic data, show that the algorithm performs well in extracting the generating causes. Experiments based on the standard bars benchmark test for object learning show that the algorithm performs well in comparison to other recent component extraction approaches. The model and the learning algorithm thus connect research on occlusion with the research field of multiple-causes component extraction methods.
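The generative process can be sketched directly: sample presence indicators and a depth order, then paint the present objects back to front so nearer objects occlude farther ones. The bar templates below are a toy stand-in for the colored-bars test; learning the parameters back from data (the EM approximations of the paper) is not shown.

```python
import numpy as np

rng = np.random.default_rng(4)

def generate(templates, presence_prob=0.5):
    """Sample one image from a toy occlusion model: each object is present
    with some probability, a random depth order is drawn, and present
    objects are painted back to front.  Zero pixels are transparent."""
    K = len(templates)
    present = rng.random(K) < presence_prob
    depth = rng.permutation(K)            # depth[0] = farthest, depth[-1] = nearest
    image = np.zeros_like(templates[0])
    for k in depth:
        if present[k]:
            mask = templates[k] != 0
            image[mask] = templates[k][mask]
    return image, present, depth

# Colored-bars-style templates on a 4x4 grid: one horizontal, one vertical bar
t1 = np.zeros((4, 4))
t1[1, :] = 1.0          # horizontal bar, "color" 1
t2 = np.zeros((4, 4))
t2[:, 2] = 2.0          # vertical bar, "color" 2
image, present, depth = generate([t1, t2], presence_prob=1.0)
```

At the crossing pixel only the nearer bar's color survives, which is exactly the depth-dependent nonlinearity that distinguishes this model from linear superposition.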
Abstract:
The trapped magnetic field is examined in bulk high-temperature superconductors that are artificially drilled along their c-axis. The influence of the hole pattern on the magnetization is studied and compared by means of numerical models and Hall probe mapping techniques. To this end, we consider two bulk YBCO samples with a rectangular cross-section, each drilled with six holes arranged either on a rectangular lattice (sample I) or on a centered rectangular lattice (sample II). For the numerical analysis, three different models are considered for calculating the trapped flux: (i) a two-dimensional (2D) Bean model neglecting demagnetizing effects and flux creep; (ii) a 2D finite-element model neglecting demagnetizing effects but incorporating magnetic relaxation in the form of an E-J power law; and (iii) a 3D finite-element analysis that takes into account both the finite height of the sample and flux creep effects. For the experimental analysis, the trapped magnetic flux density is measured above the sample surface by Hall probe mapping performed before and after the drilling process. The maximum trapped flux density in the drilled samples is found to be smaller than that in the plain samples. The smallest magnetization drop is found for sample II, with the centered rectangular lattice. This result is confirmed by the numerical models: in each sample, the relative drops calculated independently with the three different models are in good agreement. As observed experimentally, the magnetization drop calculated in sample II is the smallest, and its relative value is comparable to the measured one. By contrast, the measured magnetization drop in sample I is much larger than that predicted by the simulations, most likely because of a change in the microstructure during the drilling process.
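As a crude caricature of critical-state modelling (far simpler than the 2D and 3D finite-element models used in the paper): in a fully magnetised 1D Bean slab the trapped flux density falls off linearly from the centre at the critical slope mu0*Jc. The Jc and width values are placeholders.

```python
import numpy as np

mu0 = 4e-7 * np.pi

def trapped_profile(x, jc=1e8, half_width=5e-3):
    """Fully magnetised 1D Bean slab (no creep, no demagnetizing effects):
    trapped flux density B(x) = mu0 * Jc * max(half_width - |x|, 0), a
    1D caricature of the critical-state models discussed in the paper."""
    return mu0 * jc * np.clip(half_width - np.abs(x), 0.0, None)

x = np.linspace(-6e-3, 6e-3, 241)
B = trapped_profile(x)
B_max = B.max()       # peak trapped field at the sample centre
```

Removing superconducting material (drilling) lowers the current-carrying cross-section and hence the peak of this cone, which is the qualitative origin of the magnetization drop the paper quantifies.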
Abstract:
This paper derives a new algorithm that performs independent component analysis (ICA) by optimizing the contrast function of the RADICAL algorithm. The core idea of the proposed optimization method is to combine the global search of a good initial condition with a gradient-descent algorithm. This new ICA algorithm performs faster than the RADICAL algorithm (based on Jacobi rotations) while still preserving, and even enhancing, the strong robustness properties that result from its contrast. © Springer-Verlag Berlin Heidelberg 2007.
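A two-dimensional sketch of the contrast being optimised: marginal entropies estimated by m-spacings (a Vasicek-style estimator of the kind behind RADICAL's contrast), minimised here by the coarse global search stage alone; the gradient-descent refinement proposed in the paper is omitted.

```python
import numpy as np

rng = np.random.default_rng(5)

def spacings_entropy(x, m=30):
    """m-spacing (Vasicek-style) estimate of differential entropy."""
    xs = np.sort(x)
    n = len(xs)
    return np.mean(np.log(n / m * (xs[m:] - xs[:-m]) + 1e-12))

def contrast(Z, angle):
    """Summed marginal entropies of the whitened data after rotation;
    for whitened data this is minimised at the demixing rotation."""
    c, s = np.cos(angle), np.sin(angle)
    Y = Z @ np.array([[c, -s], [s, c]])
    return spacings_entropy(Y[:, 0]) + spacings_entropy(Y[:, 1])

# Whitened mixture: two unit-variance uniform sources rotated by a known angle
S = rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size=(20_000, 2))
true_angle = 0.4
R = np.array([[np.cos(true_angle), -np.sin(true_angle)],
              [np.sin(true_angle), np.cos(true_angle)]])
Z = S @ R.T
# Coarse global search over rotation angle (the demixing problem for
# whitened 2D data is one-dimensional)
grid = np.linspace(0.0, np.pi / 2, 181)
best = grid[np.argmin([contrast(Z, a) for a in grid])]
```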