974 results for "range estimation"
Abstract:
A new approach is proposed for estimating the thermal diffusivity of optically transparent solids at ambient temperature, based on the velocity of an effective temperature point (ETP); the concept is corroborated experimentally with a two-beam interferometer. One-dimensional unsteady heat flow under step-temperature excitation is interpreted as a 'micro-scale rectilinear translatory motion' of an ETP. A velocity-dependent function is extracted by revisiting the Fourier heat diffusion equation, and the relationship between the velocity of the ETP and the thermal diffusivity is modeled using a standard solution. Under optimized thermal excitation, the product of the ETP velocity and the propagation distance yields a new constitutive equation for the thermal diffusivity of the solid. Experimentally, a 1D unsteady heat flow is established inside the sample through step-temperature excitation, and the ETP is identified among the moving isothermal surfaces using a two-beam interferometer. The time taken by the ETP to reach a fixed distance from the heat source is measured, and its velocity is calculated; this velocity, together with the distance, is sufficient to estimate the thermal diffusivity of a solid. The proposed method is verified experimentally on BK7 glass samples, and the measured results closely match the reported value.
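A rough numerical sketch of the constitutive relation described above, assuming the standard erfc solution for step-temperature excitation; the chosen depth, temperature fraction, and BK7-like diffusivity value are illustrative, not values taken from the paper:

```python
import math

def erfc_inv(p, lo=0.0, hi=6.0, tol=1e-12):
    """Invert the (decreasing) erfc by bisection: find eta with erfc(eta) = p."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if math.erfc(mid) > p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def diffusivity_from_arrival(d, t_arrival, frac):
    """alpha from the arrival time of the isotherm T/T0 = frac at depth d.
    From T/T0 = erfc(x / (2*sqrt(alpha*t))), the isotherm moves as
    x = 2*eta*sqrt(alpha*t), so its velocity v = x/(2*t) obeys the
    constitutive relation v * x = 2 * eta**2 * alpha."""
    eta = erfc_inv(frac)
    v = d / (2.0 * t_arrival)            # ETP velocity on arrival at depth d
    return v * d / (2.0 * eta ** 2)

# round-trip check with an assumed BK7-like diffusivity (~5e-7 m^2/s)
alpha_true, d, frac = 5e-7, 1e-3, 0.5
t_arr = (d / (2.0 * erfc_inv(frac))) ** 2 / alpha_true   # forward model
alpha_est = diffusivity_from_arrival(d, t_arr, frac)
```

The round trip recovers the assumed diffusivity exactly, which is only a consistency check of the v·x relation, not a substitute for the interferometric measurement.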
Abstract:
We propose a Monte Carlo filter for recursive estimation of diffusive processes that modulate the instantaneous rates of Poisson measurements. A key aspect is the additive update, through a gain-like correction term, empirically approximated from the innovation integral in the time-discretized Kushner-Stratonovich equation. This additive filter-update scheme eliminates the particle collapse encountered in many conventional particle filters. A few numerical demonstrations illustrate the versatility of the proposed filter.
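A minimal sketch of the additive-update idea, not the authors' exact construction: here the gain is taken as the empirical covariance between state and rate divided by the mean rate, the state is assumed to be an Ornstein-Uhlenbeck process, and the rate function lam(x) = exp(x) is an illustrative choice.

```python
import numpy as np

rng = np.random.default_rng(0)

def additive_poisson_filter(dN, dt, n_part=500, theta=1.0, sigma=0.5):
    """Track an OU state x modulating a Poisson rate lam(x) = exp(x).
    Each particle is moved by an additive, gain-weighted innovation term
    instead of being reweighted and resampled (avoiding particle collapse)."""
    x = rng.normal(0.0, 1.0, n_part)
    est = []
    for k in range(len(dN)):
        # prediction: Euler-Maruyama step of the OU dynamics
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.normal(size=n_part)
        lam = np.exp(x)
        lam_bar = lam.mean()
        # additive update: gain times the point-process innovation
        gain = np.cov(x, lam)[0, 1] / lam_bar
        x += gain * (dN[k] - lam_bar * dt)
        est.append(x.mean())
    return np.array(est)

# simulate a ground-truth path and its Poisson counts, then filter
dt, T = 0.01, 2000
x_true = np.zeros(T)
dN = np.zeros(T)
for k in range(1, T):
    x_true[k] = x_true[k-1] - 1.0 * x_true[k-1] * dt + 0.5 * np.sqrt(dt) * rng.normal()
    dN[k] = rng.poisson(np.exp(x_true[k]) * dt)
x_hat = additive_poisson_filter(dN, dt)
rmse = float(np.sqrt(np.mean((x_hat - x_true)**2)))
```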
Abstract:
Acoustic feature based speech (syllable) rate estimation and syllable nuclei detection are important problems in automatic speech recognition (ASR), computer assisted language learning (CALL) and fluency analysis. A typical solution for both problems consists of two stages. The first stage involves computing a short-time feature contour such that most of the peaks of the contour correspond to the syllabic nuclei. In the second stage, the peaks corresponding to the syllable nuclei are detected. In this work, instead of peak detection, we perform a mode-shape classification, which is formulated as a supervised binary classification problem - mode-shapes representing the syllabic nuclei form one class and the remaining mode-shapes the other. We use the temporal correlation and selected sub-band correlation (TCSSBC) feature contour, and the mode-shapes in the TCSSBC feature contour are converted into a set of feature vectors using an interpolation technique. A support vector machine classifier is used for the classification. Experiments are performed separately on the Switchboard, TIMIT and CTIMIT corpora in a five-fold cross-validation setup. The average correlation coefficients for syllable rate estimation are 0.6761, 0.6928 and 0.3604 for the three corpora respectively, outperforming the best of the existing peak detection techniques. Similarly, the average F-scores (syllable level) for syllable nuclei detection are 0.8917, 0.8200 and 0.7637 for the three corpora respectively.
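A sketch of the mode-shape-to-feature-vector step on a toy contour: each valley-to-valley segment around a peak is resampled to a fixed length by linear interpolation, ready for any binary classifier. The TCSSBC contour itself and the SVM training are not reproduced; the contour below is synthetic.

```python
import numpy as np

def mode_shape_vectors(contour, n_points=20):
    """Cut a feature contour into mode-shapes (valley-to-valley segments
    around each local peak) and resample each to a fixed-length vector."""
    c = np.asarray(contour, dtype=float)
    peaks = [i for i in range(1, len(c) - 1) if c[i-1] < c[i] >= c[i+1]]
    valleys = ([0] + [i for i in range(1, len(c) - 1) if c[i-1] > c[i] <= c[i+1]]
               + [len(c) - 1])
    vecs = []
    for p in peaks:
        left = max(v for v in valleys if v < p)
        right = min(v for v in valleys if v > p)
        seg = c[left:right + 1]
        t = np.linspace(0, len(seg) - 1, n_points)       # resample grid
        v = np.interp(t, np.arange(len(seg)), seg)
        v = (v - v.min()) / (v.max() - v.min() + 1e-12)  # amplitude-normalize
        vecs.append(v)
    return np.array(vecs), peaks

# four syllable-like humps in a toy contour
x = np.linspace(0, 4 * np.pi, 200)
contour = np.abs(np.sin(x))
V, peaks = mode_shape_vectors(contour)
```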
Abstract:
The utility of canonical correlation analysis (CCA) for domain adaptation (DA) in the context of multi-view head pose estimation is examined in this work. We consider the three problems studied in [1], where different DA approaches are explored to transfer head pose-related knowledge from an extensively labeled source dataset to a sparsely labeled target set whose attributes are vastly different from the source. CCA is found to benefit DA in all three problems, and the use of a covariance profile-based diagonality score (DS) also improves classification performance with respect to a nearest neighbor (NN) classifier.
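A compact CCA implementation (whitening via Cholesky factors, then an SVD of the whitened cross-covariance), paired with a simplified diagonality score. The ratio-of-diagonal-mass DS below is an assumption standing in for the covariance-profile score of the abstract, whose exact definition is not given here.

```python
import numpy as np

def cca(X, Y, k):
    """Canonical correlation analysis: whiten each view with a Cholesky
    factor of its covariance, then take the SVD of the whitened
    cross-covariance. Returns projection bases and canonical correlations."""
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n = len(X) - 1
    Cxx = Xc.T @ Xc / n + 1e-8 * np.eye(X.shape[1])   # small ridge for stability
    Cyy = Yc.T @ Yc / n + 1e-8 * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    Lx, Ly = np.linalg.cholesky(Cxx), np.linalg.cholesky(Cyy)
    M = np.linalg.solve(Lx, Cxy) @ np.linalg.inv(Ly).T
    U, s, Vt = np.linalg.svd(M)
    A = np.linalg.solve(Lx.T, U[:, :k])               # canonical bases
    B = np.linalg.solve(Ly.T, Vt.T[:, :k])
    return A, B, s[:k]

def diagonality_score(C):
    """Fraction of absolute covariance mass on the diagonal -- an assumed,
    simplified stand-in for the covariance-profile DS of the abstract."""
    C = np.abs(np.asarray(C, dtype=float))
    return float(np.trace(C) / C.sum())

# two views sharing a 2-D latent signal z
rng = np.random.default_rng(1)
z = rng.normal(size=(400, 2))
X = np.hstack([z, rng.normal(size=(400, 2))])
Y = np.hstack([z @ np.array([[1.0, 0.5], [0.3, 1.0]]), rng.normal(size=(400, 2))])
A, B, s = cca(X, Y, k=2)
u = (X - X.mean(0)) @ A[:, 0]
v = (Y - Y.mean(0)) @ B[:, 0]
```

The top canonical correlation approaches 1 because the two views share the latent z exactly, illustrating why CCA can align a source and target view pair before adaptation.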
Abstract:
The distribution of cortical bone in the proximal femur is believed to be a critical component in determining fracture resistance. Current CT technology is limited in its ability to measure cortical thickness, especially in the sub-millimetre range, which lies within the point spread function of today's clinical scanners. In this paper, we present a novel technique that is capable of producing unbiased thickness estimates down to 0.3 mm. The technique relies on a mathematical model of the anatomy and the imaging system, which is fitted to the data at a large number of sites around the proximal femur, producing around 17,000 independent thickness estimates per specimen. In a series of experiments on 16 cadaveric femurs, estimation errors were measured as −0.01 ± 0.58 mm (mean ± 1 std. dev.) for cortical thicknesses in the range 0.3–4 mm. This compares with 0.25 ± 0.69 mm for simple thresholding and 0.90 ± 0.92 mm for a variant of the 50% relative threshold method. In the clinically relevant sub-millimetre range, thresholding increasingly fails to detect the cortex at all, whereas the new technique continues to perform well. The many cortical thickness estimates can be displayed as a colour map painted onto the femoral surface. Computation of the surfaces and colour maps is largely automatic, requiring around 15 min on a modest laptop computer.
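A sketch of why model fitting beats thresholding below the PSF width, using one standard formulation (a rectangular cortex blurred by a Gaussian PSF, i.e. an erf edge-spread model) that is assumed here and is not necessarily the authors' exact model; all numbers are illustrative.

```python
import math

def blurred_cortex(x, t, sigma, base, peak):
    """Line profile across a cortical plate of thickness t centred at x = 0,
    imaged through a Gaussian PSF of width sigma (erf edge-spread model)."""
    s = sigma * math.sqrt(2.0)
    return base + (peak - base) * 0.5 * (math.erf((x + t / 2) / s)
                                         - math.erf((x - t / 2) / s))

def fit_thickness(xs, ys, sigma, base, peak):
    """Grid-search the thickness minimizing squared error of the model.
    Because the known blur model is fitted (rather than the blurred profile
    being thresholded), sub-PSF thicknesses remain identifiable."""
    def sse(t):
        return sum((blurred_cortex(x, t, sigma, base, peak) - y) ** 2
                   for x, y in zip(xs, ys))
    return min((0.05 * k for k in range(1, 81)), key=sse)   # 0.05..4.0 mm grid

# a 0.3 mm cortex seen through a 0.6 mm PSF: the blurred peak never reaches
# the true cortical density, so thresholding underestimates or misses it
xs = [0.05 * k - 2.0 for k in range(81)]
ys = [blurred_cortex(x, 0.3, 0.6, 0.0, 1.0) for x in xs]
t_hat = fit_thickness(xs, ys, sigma=0.6, base=0.0, peak=1.0)
```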
Abstract:
In addition to the layer thickness and effective Young's modulus, the impact of the kinematic assumptions, interfacial condition, in-plane force, boundary conditions, and structure dimensions on the curvature of a film/substrate bilayer is examined. Different models for the analysis of the bilayer curvature are compared. Our model demonstrates that the assumption of a uniform curvature is valid only if there is no in-plane force. The effects of boundary conditions and structure dimensions, which were not fully included in previous models, are shown to be significant. Three different approaches for deriving the curvature of a film/substrate bilayer are presented, compared, and analyzed. A more comprehensive study of the conditions governing the applicability of Stoney's formula and its modified forms is presented.
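For reference, the classical Stoney formula that the modified models are compared against; this is the standard textbook form (uniform curvature, thin film, no in-plane force), with illustrative film and substrate values chosen here, not taken from the paper:

```python
def stoney_curvature(stress_f, t_f, E_s, nu_s, t_s):
    """Classical Stoney formula: uniform curvature of a substrate of
    thickness t_s under a thin film of thickness t_f carrying biaxial
    stress stress_f. Valid only for t_f << t_s and no in-plane force."""
    M_s = E_s / (1.0 - nu_s)          # biaxial modulus of the substrate
    return 6.0 * stress_f * t_f / (M_s * t_s ** 2)

# 100 MPa tensile film, 1 um thick, on a 500 um Si-like substrate
kappa = stoney_curvature(100e6, 1e-6, 170e9, 0.28, 500e-6)
radius = 1.0 / kappa                  # wafer radius of curvature, metres
```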
Abstract:
It has been shown in cellular automaton (CA) simulations and in data analyses of earthquakes that declustered or characteristic large earthquakes may occur with long-range stress redistribution. To understand this long-range stress redistribution, we propose a linear-elastic but heterogeneous-brittle model. Stress redistribution in the heterogeneous-brittle medium implies a longer-range interaction than in a purely elastic medium. It is therefore surmised that the longer-range stress redistribution resulting from damage in heterogeneous media may be a plausible mechanism governing main shocks.
Abstract:
Modern technology has allowed real-time data collection in a variety of domains, ranging from environmental monitoring to healthcare. Consequently, there is a growing need for algorithms capable of performing inferential tasks in an online manner, continuously revising their estimates to reflect the current status of the underlying process. In particular, we are interested in constructing online and temporally adaptive classifiers capable of handling the possibly drifting decision boundaries arising in streaming environments. We first make a quadratic approximation to the log-likelihood that yields a recursive algorithm for fitting logistic regression online. We then suggest a novel way of equipping this framework with self-tuning forgetting factors. The resulting scheme is capable of tracking changes in the underlying probability distribution, adapting the decision boundary appropriately and hence maintaining high classification accuracy in dynamic or unstable environments. We demonstrate the scheme's effectiveness in both real and simulated streaming environments.
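A minimal sketch of the recursive, forgetting-factor idea: a quadratic (Newton-style) approximation of the log-likelihood gives a running information matrix, and a fixed forgetting factor downweights old data. The self-tuning of the forgetting factor described in the abstract is not reproduced, and the small ridge term added for numerical stability is this sketch's own choice.

```python
import numpy as np

class OnlineLogistic:
    """Recursive logistic regression with exponential forgetting:
    H accumulates curvature of recent observations, and lam < 1
    downweights the past so the decision boundary can drift."""
    def __init__(self, dim, lam=0.95):
        self.w = np.zeros(dim)
        self.H = np.eye(dim)        # running, regularized information matrix
        self.lam = lam

    def update(self, x, y):
        p = 1.0 / (1.0 + np.exp(-np.clip(self.w @ x, -30, 30)))
        self.H = (self.lam * self.H + p * (1 - p) * np.outer(x, x)
                  + 1e-3 * np.eye(len(x)))            # ridge keeps H invertible
        self.w += np.linalg.solve(self.H, x * (y - p))

    def predict(self, x):
        return float(self.w @ x > 0)

# streaming data whose true decision boundary slowly rotates
rng = np.random.default_rng(2)
clf = OnlineLogistic(2)
correct = 0
for t in range(2000):
    w_true = np.array([np.cos(0.0005 * t), np.sin(0.0005 * t)])
    x = rng.normal(size=2)
    y = float(w_true @ x > 0)
    if t >= 200:                     # skip the initial transient
        correct += clf.predict(x) == y
    clf.update(x, y)
acc = correct / 1800
```

With forgetting (lam = 0.95) the classifier tracks the rotating boundary; setting lam = 1 recovers plain recursive logistic regression, which gradually falls behind the drift.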
Abstract:
This paper reports the fabrication and electrical characterization of high tuning range AlSi RF MEMS capacitors. We present experimental results obtained with a surface micromachining process that uses dry etching of sacrificial amorphous silicon to release Al-1%Si membranes and has a low thermal budget (<450 °C), making it compatible with CMOS post-processing. The proposed silicon sacrificial layer dry etching (SSLDE) process provides very high Si etch rates (3-15 μm/min, depending on process parameters) with high Si:SiO2 selectivity (>10,000:1). Single- and double-air-gap MEMS capacitors, as well as dedicated test structures needed to calibrate the electro-mechanical parameters and explore the reliability of the proposed technology, have been fabricated with the new process. S-parameter measurements from 100 MHz up to 2 GHz have shown a capacitance tuning range higher than 100% with the double-air-gap architecture. The tuning range can be enlarged further with a proper DC electrical bias of the capacitor electrodes. Finally, the reported results make the proposed MEMS tuneable capacitor a good candidate for above-IC integration in communications applications.
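A back-of-the-envelope illustration of why a double air gap can push the tuning range past 100%: in the ideal parallel-plate picture, capacitance scales as 1/gap, so if the second air gap stops the membrane at a residual gap less than half the up-state gap, (Cmax − Cmin)/Cmin exceeds 1. The geometry below is invented for illustration and is not the paper's device.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area, gap):
    """Ideal parallel-plate capacitance; fringing fields ignored."""
    return EPS0 * area / gap

def tuning_range(c_min, c_max):
    """Tuning range as conventionally quoted: (Cmax - Cmin) / Cmin."""
    return (c_max - c_min) / c_min

# illustrative double-air-gap geometry: a 100 um x 100 um membrane travels
# from a 3 um up-state gap down to a 1.2 um residual gap
area = (100e-6) ** 2
c_up = plate_capacitance(area, 3.0e-6)
c_down = plate_capacitance(area, 1.2e-6)
tr = tuning_range(c_up, c_down)        # = 3/1.2 - 1 = 1.5, i.e. 150 %
```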
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR), hence a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches. Since data arrive sequentially, it is sensible to process them sequentially as well. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate the method by simulation and compare its performance with existing techniques.
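A simplified sketch of fixed-lag SIS symbol detection, under assumptions made here: a known two-tap FIR channel, ±1 symbols proposed from their uniform prior, and a weighted majority vote to decode the symbol `lag` steps back. The original works with unknown parameters, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(3)

def sis_fixed_lag(y, h, sigma, lag=3, n_part=200):
    """Fixed-lag SIS detection of +/-1 symbols sent through a known FIR
    channel h with Gaussian noise. Particles hold symbol histories; the
    symbol at t - lag is decided by a weighted majority vote."""
    L = len(h)
    paths = np.zeros((n_part, len(y)), dtype=int)
    logw = np.zeros(n_part)
    decisions = []
    for t in range(len(y)):
        paths[:, t] = rng.choice([-1, 1], size=n_part)   # prior proposal
        lo = max(0, t - L + 1)
        window = paths[:, lo:t + 1][:, ::-1]             # most recent first
        pred = window @ h[:t - lo + 1]
        logw += -0.5 * ((y[t] - pred) / sigma) ** 2      # Gaussian likelihood
        w = np.exp(logw - logw.max())
        w /= w.sum()
        if 1.0 / (w ** 2).sum() < n_part / 2:            # resample on low ESS
            idx = rng.choice(n_part, size=n_part, p=w)
            paths = paths[idx]
            logw[:] = 0.0
            w = np.full(n_part, 1.0 / n_part)
        if t >= lag:                                     # fixed-lag decision
            decisions.append(1 if w @ (paths[:, t - lag] > 0) > 0.5 else -1)
    return np.array(decisions)

# simulate a short burst through h = [1.0, 0.5] and decode it
h = np.array([1.0, 0.5])
sym = rng.choice([-1, 1], size=300)
y = np.convolve(sym, h)[:300] + 0.2 * rng.normal(size=300)
det = sis_fixed_lag(y, h, sigma=0.2, lag=3)
ber = float(np.mean(det != sym[:len(det)]))
```

The lag trades latency for accuracy: each symbol is decided only after `lag` further observations have refined the particle weights, matching the bounded-latency requirement described above.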
Abstract:
We develop methods for performing filtering and smoothing in non-linear, non-Gaussian dynamical models. The methods rely on a particle cloud representation of the filtering distribution, which evolves through time using importance sampling and resampling ideas. In particular, novel techniques are presented for generating random realisations from the joint smoothing distribution and for MAP estimation of the state sequence. Realisations of the smoothing distribution are generated in a forward-backward procedure, while MAP estimation can be performed in a single forward pass of the Viterbi algorithm applied to a discretised version of the state space. An application to spectral estimation for time-varying autoregressions is described.
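A sketch of the Viterbi-on-a-grid idea for MAP state-sequence estimation: the continuous state is discretised onto a fixed grid, transition and observation log-densities are evaluated on it, and a single forward pass plus backtracking returns the MAP path. The AR(1)-style model and grid below are illustrative choices, not the time-varying autoregression application of the abstract.

```python
import numpy as np

rng = np.random.default_rng(4)

def viterbi_map(y, grid, f, sig_v, sig_e):
    """MAP state-sequence estimate for x_t = f(x_{t-1}) + v_t,
    y_t = x_t + e_t (Gaussian noises), via the Viterbi algorithm on a
    discretised state grid: one forward pass, then backtracking."""
    n, T = len(grid), len(y)
    # log transition scores between grid points: trans[i, j] = prev i -> next j
    trans = -0.5 * ((grid[None, :] - f(grid)[:, None]) / sig_v) ** 2
    score = -0.5 * ((y[0] - grid) / sig_e) ** 2          # flat prior on x_0
    back = np.zeros((T, n), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + trans
        back[t] = np.argmax(cand, axis=0)
        score = cand[back[t], np.arange(n)] - 0.5 * ((y[t] - grid) / sig_e) ** 2
    path = np.zeros(T, dtype=int)
    path[-1] = int(np.argmax(score))
    for t in range(T - 1, 0, -1):                        # backtrack
        path[t - 1] = back[t, path[t]]
    return grid[path]

# AR(1)-style test signal observed in noise
T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = 0.9 * x[t - 1] + 0.3 * rng.normal()
y = x + 0.3 * rng.normal(size=T)
grid = np.linspace(-4, 4, 161)
x_map = viterbi_map(y, grid, lambda s: 0.9 * s, sig_v=0.3, sig_e=0.3)
rmse = float(np.sqrt(np.mean((x_map - x) ** 2)))
```

The grid spacing bounds the quantisation error of the MAP path, so the grid should be fine relative to the state noise; here 0.05 against a process noise of 0.3.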