54 results for "deduced optical model parameters"


Relevance: 100.00%

Abstract:

In vitro studies and mathematical models are now widely used to study the underlying mechanisms driving the expansion of cell colonies. This can improve our understanding of cancer formation and progression. Although much progress has been made in developing and analysing mathematical models, far less progress has been made in understanding how to estimate model parameters from experimental in vitro image-based data. To address this issue, a new approximate Bayesian computation (ABC) algorithm is proposed to estimate key parameters governing the expansion of melanoma cell (MM127) colonies, including cell diffusivity, D, cell proliferation rate, λ, and cell-to-cell adhesion, q, in two experimental scenarios, namely with and without a chemical treatment to suppress cell proliferation. Even when little prior biological knowledge about the parameters is assumed, all parameters are precisely inferred with a small posterior coefficient of variation, approximately 2–12%. The ABC analyses reveal that the posterior distributions of D and q depend on the experimental elapsed time, whereas the posterior distribution of λ does not. The posterior mean values of D are in the ranges 226–268 µm² h⁻¹ and 311–351 µm² h⁻¹, and those of q in the ranges 0.23–0.39 and 0.32–0.61, for the experimental periods of 0–24 h and 24–48 h, respectively. Furthermore, we found that the posterior distribution of q also depends on the initial cell density, whereas the posterior distributions of D and λ do not. The ABC approach also enables information from the two experiments to be combined, resulting in greater precision for all estimates of D and λ.
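The rejection-sampling core of an ABC scheme like the one described can be sketched in a few lines. The toy below infers a single growth rate λ for a hypothetical exponential colony-size model with invented numbers; it is illustrative only, not the paper's colony model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "experiment": exponential colony growth observed with additive noise.
# All constants here are invented for illustration.
true_lam = 0.05                       # hypothetical proliferation rate (1/h)
t = np.arange(0, 48, 4.0)             # observation times (h)
data = 100 * np.exp(true_lam * t) + rng.normal(0, 20, t.size)

def simulate(lam):
    return 100 * np.exp(lam * t)

def distance(sim, obs):
    return np.sqrt(np.mean((sim - obs) ** 2))

# ABC rejection: draw from the prior, simulate, and keep only draws whose
# simulated data lie within tolerance eps of the observations.
prior_draws = rng.uniform(0.0, 0.2, 20000)   # vague uniform prior on lambda
eps = 30.0
posterior = np.array([lam for lam in prior_draws
                      if distance(simulate(lam), data) < eps])

print(posterior.mean(), posterior.std() / posterior.mean())  # mean and CV
```

Shrinking eps tightens the approximation at the cost of fewer accepted samples; practical ABC algorithms refine this trade-off adaptively.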

Relevance: 100.00%

Abstract:

In this paper, we examine approaches to estimating a Bayesian mixture model at both single and multiple time points for samples of actual and simulated aerosol particle size distribution (PSD) data. For estimation at a single time point, we use reversible jump Markov chain Monte Carlo (RJMCMC) to estimate the mixture model parameters, including the number of components, which is assumed to be unknown. We compare the results of this approach with a commonly used estimation method from the aerosol physics literature. As PSD data are often measured over time, frequently at short intervals, we also examine the use of an informative prior for estimating the mixture parameters that takes into account their correlated nature. The Bayesian mixture model offers a promising approach, providing advantages in both estimation and inference.
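Full RJMCMC is involved to implement; as a rough stand-in, the unknown number of mixture components can also be chosen by fitting finite Gaussian mixtures with EM and comparing BIC scores. The sketch below does exactly that on synthetic one-dimensional "particle diameter" data; it is not the paper's RJMCMC sampler, and all constants are invented.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic 1-D "particle diameter" data from a two-component mixture.
data = np.concatenate([rng.normal(20, 3, 400), rng.normal(60, 8, 600)])

def mixture_density(x, w, mu, var):
    # Per-component weighted densities, shape (n, k)
    return w * np.exp(-0.5 * (x[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def em_gmm(x, k, iters=200):
    """Plain EM for a 1-D k-component Gaussian mixture; returns log-likelihood."""
    w = np.full(k, 1.0 / k)
    mu = np.quantile(x, np.linspace(0.1, 0.9, k))
    var = np.full(k, x.var())
    for _ in range(iters):
        dens = mixture_density(x, w, mu, var)
        r = dens / dens.sum(axis=1, keepdims=True)   # E-step: responsibilities
        nk = r.sum(axis=0)                           # M-step: update parameters
        w = nk / x.size
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return np.log(mixture_density(x, w, mu, var).sum(axis=1)).sum()

def bic(x, k):
    p = 3 * k - 1    # free parameters: (k-1) weights + k means + k variances
    return -2 * em_gmm(x, k) + p * np.log(x.size)

scores = {k: bic(data, k) for k in (1, 2, 3)}
best = min(scores, key=scores.get)
print(best)
```

RJMCMC instead treats k as a random variable and jumps between model dimensions within one chain, yielding a posterior over k rather than a single selected value.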

Relevance: 100.00%

Abstract:

The effectiveness of higher-order spectral (HOS) phase features in speaker recognition is investigated by comparison with Mel-cepstral features on the same speech data. Unlike Mel-frequency cepstral coefficients (MFCC), HOS phase features retain phase information from the Fourier spectrum. Gaussian mixture models are constructed from Mel-cepstral features and HOS features, respectively, for the same data from various speakers in the Switchboard telephone speech corpus. Feature clusters, model parameters, and classification performance are analyzed. HOS phase features on their own provide a correct identification rate of about 97% on the chosen subset of the corpus, the same level of accuracy as provided by MFCCs. Cluster plots and model parameters are compared to show that HOS phase features can provide complementary information to better discriminate between speakers.

Relevance: 100.00%

Abstract:

Catheter-related bloodstream infections are a serious problem. Many interventions reduce risk, and some have been evaluated in cost-effectiveness studies. We review the usefulness and quality of these economic studies. Evidence is incomplete, and data required to inform a coherent policy are missing. The cost-effectiveness studies are characterized by a lack of transparency, short time-horizons, and narrow economic perspectives. Data quality is low for some important model parameters. Authors of future economic evaluations should aim to model the complete policy and not just single interventions. They should be rigorous in developing the structure of the economic model, include all relevant economic outcomes, use a systematic approach for selecting data sources for model parameters, and propagate the effect of uncertainty in model parameters on conclusions. This will inform future data collection and improve our understanding of the economics of preventing these infections.

Relevance: 100.00%

Abstract:

This work presents an extended Joint Factor Analysis (JFA) model that includes explicit modelling of unwanted within-session variability. The goals of the proposed extended JFA model are to improve verification performance with short utterances by compensating for the effects of limited or imbalanced phonetic coverage, and to produce a flexible JFA model that is effective over a wide range of utterance lengths without adjusting model parameters, such as by retraining session subspaces. Experimental results on the 2006 NIST SRE corpus demonstrate the flexibility of the proposed model, providing competitive results over a wide range of utterance lengths without retraining, and also yielding modest improvements in a number of conditions over the current state-of-the-art.

Relevance: 100.00%

Abstract:

Purpose: To explore the role of the neighborhood environment in supporting walking. Design: Cross-sectional study of 10,286 residents of 200 neighborhoods. Participants were selected using a stratified two-stage cluster design. Data were collected by mail survey (68.5% response rate). Setting: The Brisbane City Local Government Area, Australia, 2007. Subjects: Brisbane residents aged 40 to 65 years. Measures: Environmental: street connectivity, residential density, hilliness, tree coverage, bikeways, and street lights within a one-kilometer circular buffer around each resident's home; and network distance to the nearest river or coast, public transport, shop, and park. Walking: minutes in the previous week, categorized as <30 minutes, ≥30 to <90 minutes, ≥90 to <150 minutes, ≥150 to <300 minutes, and ≥300 minutes. Analysis: The association between each neighborhood characteristic and walking was examined using multilevel multinomial logistic regression, with model parameters estimated by Markov chain Monte Carlo simulation. Results: After adjustment for individual factors, the likelihood of walking for more than 300 minutes (relative to <30 minutes) was highest in areas with the most connectivity (OR=1.93, 99% CI 1.32-2.80), the greatest residential density (OR=1.47, 99% CI 1.02-2.12), the least tree coverage (OR=1.69, 99% CI 1.13-2.51), the most bikeways (OR=1.60, 99% CI 1.16-2.21), and the most street lights (OR=1.50, 99% CI 1.07-2.11). The likelihood of walking for more than 300 minutes was also higher among those who lived closest to a river or the coast (OR=2.06, 99% CI 1.41-3.02). Conclusion: The likelihood of meeting (and exceeding) physical activity recommendations on the basis of walking was higher in neighborhoods with greater street connectivity and residential density, more street lights and bikeways, closer proximity to waterways, and less tree coverage. Interventions targeting these neighborhood characteristics may lead to improved environmental quality as well as lower rates of overweight and obesity and associated chronic disease.

Relevance: 100.00%

Abstract:

With a view to assessing the vulnerability of columns to low-elevation vehicular impacts, a non-linear explicit numerical model has been developed and validated using existing experimental results. The numerical model accounts for the effects of strain rate and confinement of the reinforced concrete, which are fundamental to the successful prediction of the impact response. The sensitivity of the material model parameters used for the validation is also scrutinised, and numerical tests are performed to examine their suitability for simulating shear failure conditions. Conflicting views on strain gradient effects are discussed, and the validation process is extended to investigate the ability of equations developed under concentric loading conditions to simulate flexural failure events. Experimental data on impact force-time histories, mid-span and residual deflections, and support reactions have been compared against corresponding numerical results. A universal technique is proposed that can be applied to determine the vulnerability of impacted columns against collisions with new-generation vehicles under the most common impact modes. Additionally, the observed failure characteristics of the impacted columns are explained using the extended outcomes. Based on the overall results, an analytical method is suggested to quantify the vulnerability of the columns.

Relevance: 100.00%

Abstract:

The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation of these images. A specific fixed decomposition structure is designed for the wavelet packet in order to save on computation, transmission, and storage costs. This decomposition structure is based on an analysis of the information-packing performance of several decompositions, the two-dimensional power spectral density, the effect of each frequency band on the reconstructed image, and human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics; the choice is based on the effect of each subband on the reconstructed image according to the mean-square criterion as well as the sensitivities of human vision. To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed, based on the generalized Gaussian distribution. A least-squares algorithm on a nonlinear function of the distribution model's shape parameter is formulated to estimate the model parameters. A noise-shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters, known as the truncation level and scaling factor. In lattice-based compression algorithms reported in the literature, the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach.
In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made, and no training or multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage. Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all source vectors without the need to project them onto the lattice's outermost shell, while properly maintaining a small codebook size. It also resolves the wedge-region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training or multi-quantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images. For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce the bit requirements of the necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms yield high-quality reconstructed images with better compression ratios than other available algorithms.
To evaluate the proposed algorithms, objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structure-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local dominant ridge directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as the precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating-average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
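The generalized Gaussian shape parameter mentioned above can be estimated from sample moments. The abstract describes a least-squares formulation; the sketch below instead uses the closely related moment-ratio method (the ratio E|x| / sqrt(E[x²]) is a monotone function of the shape β), solved by bisection, and checks it on synthetic Gaussian data, for which β = 2.

```python
import math, random

def ggd_ratio(beta):
    # E|x| / sqrt(E[x^2]) for a zero-mean generalized Gaussian of shape beta;
    # this ratio is monotonically increasing in beta.
    return math.gamma(2 / beta) / math.sqrt(math.gamma(1 / beta) * math.gamma(3 / beta))

def estimate_shape(samples, lo=0.2, hi=5.0, iters=60):
    m1 = sum(abs(x) for x in samples) / len(samples)
    m2 = sum(x * x for x in samples) / len(samples)
    target = m1 / math.sqrt(m2)
    for _ in range(iters):            # bisection on the monotone ratio
        mid = 0.5 * (lo + hi)
        if ggd_ratio(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

random.seed(0)
gauss = [random.gauss(0.0, 1.0) for _ in range(200000)]
print(estimate_shape(gauss))          # shape near 2 for Gaussian input
```

For wavelet subbands, the same estimator applied per subband gives the shape and (via m2) the scale needed to drive the bit allocation.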

Relevance: 100.00%

Abstract:

Asset health inspections can produce two types of indicators: (1) direct indicators (e.g. the thickness of a brake pad, or the crack depth on a gear), which directly relate to a failure mechanism; and (2) indirect indicators (e.g. indicators extracted from vibration signals and oil analysis data), which can only partially reveal a failure mechanism. While direct indicators enable more precise references to asset health condition, they are often more difficult to obtain than indirect indicators. The state space model provides an efficient approach to estimating direct indicators from indirect indicators. However, existing state space models for estimating direct indicators largely depend on assumptions such as discrete time, discrete state, linearity, and Gaussianity. The discrete-time assumption requires fixed inspection intervals. The discrete-state assumption entails discretising continuous degradation indicators, which often introduces additional errors. The linear and Gaussian assumptions are not consistent with the nonlinear and irreversible degradation processes of most engineering assets. This paper proposes a state space model without these assumptions. Monte Carlo-based algorithms are developed to estimate the model parameters and the remaining useful life. The performance of these algorithms is evaluated using numerical simulations in MATLAB, which show that both the parameters and the remaining useful life are estimated accurately. Finally, the new state space model is used to process vibration and crack depth data from an accelerated test of a gearbox. In this application, the new state space model achieves a better fit than a state space model with linear and Gaussian assumptions.
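The Monte Carlo machinery for nonlinear, non-Gaussian state estimation can be illustrated with a bootstrap particle filter on a toy degradation model. Everything below (the growth law, the square-root observation map, the noise levels) is invented for illustration; it is not the paper's gearbox model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy degradation: the direct indicator x_t (think crack depth) grows
# nonlinearly; only an indirect indicator y_t = sqrt(x_t) + noise is observed.
T, N = 50, 4000
truth, ys = [], []
x = 1.0
for _ in range(T):
    x = x + 0.05 * x ** 1.2 + rng.normal(0, 0.02)
    truth.append(x)
    ys.append(np.sqrt(x) + rng.normal(0, 0.05))

# Bootstrap particle filter: propagate through the process model, weight by
# the observation likelihood, then resample.
particles = np.full(N, 1.0)
est = []
for y in ys:
    particles = particles + 0.05 * particles ** 1.2 + rng.normal(0, 0.02, N)
    w = np.exp(-0.5 * ((y - np.sqrt(np.abs(particles))) / 0.05) ** 2)
    w /= w.sum()
    est.append(np.dot(w, particles))                  # posterior mean estimate
    particles = rng.choice(particles, size=N, p=w)    # multinomial resampling
```

Because the filter only requires simulating the process model forward, no linearity, Gaussianity, or fixed inspection interval is needed; irregular intervals just change how far each particle is propagated.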

Relevance: 100.00%

Abstract:

A magneto-rheological (MR) fluid damper is a semi-active control device that has recently begun to receive more attention in the vibration control community. However, the inherently nonlinear nature of the MR fluid damper makes it challenging to achieve high damping control system performance with this device. The development of an accurate modeling method for an MR fluid damper is therefore necessary to take advantage of its unique characteristics. Our goal was to develop an alternative method for modeling an MR fluid damper using a self-tuning fuzzy (STF) method based on neural techniques. The behavior of the damper under study is estimated directly through a fuzzy mapping system. To improve the accuracy of the STF model, back propagation and gradient descent are used to train the fuzzy parameters online, minimizing the model error function. A series of simulations validated the effectiveness of the suggested modeling method against data measured from experiments on a test rig with the MR fluid damper under study. The modeling results show that the proposed STF inference system, trained online using neural techniques, describes the behavior of the MR fluid damper well, without requiring extra computation time to generate the model parameters.
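One way to realize an online-trained fuzzy mapping of this kind is gradient descent on the consequent weights of a zero-order Takagi-Sugeno model with Gaussian memberships. The sketch below fits an invented static damper curve (hysteresis omitted); it is a minimal stand-in, not the paper's STF model or its back-propagation details.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical damper curve: force as a nonlinear function of velocity
# (all constants invented for illustration).
def target(v):
    return 800 * np.tanh(3 * v) + 150 * v

centers = np.linspace(-1, 1, 15)   # Gaussian membership centers over velocity
sigma = 0.15
weights = np.zeros(15)             # rule consequents, trained online

def predict(v):
    mu = np.exp(-0.5 * ((v - centers) / sigma) ** 2)   # rule memberships
    mu = mu / mu.sum()                                  # normalized firing strengths
    return np.dot(mu, weights), mu

eta = 0.1
for _ in range(20000):
    v = rng.uniform(-1, 1)
    y = target(v) + rng.normal(0, 10)        # noisy "measurement"
    y_hat, mu = predict(v)
    weights += eta * (y - y_hat) * mu        # gradient step on the squared error

errs = [abs(predict(v)[0] - target(v)) for v in np.linspace(-0.9, 0.9, 50)]
print(max(errs))
```

A fuller STF model would also adapt the membership centers and widths by back propagation; here only the consequents are trained, which keeps the update a single linear gradient step per sample.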

Relevance: 100.00%

Abstract:

Computer vision is an attractive solution for uninhabited aerial vehicle (UAV) collision avoidance, due to the low weight, size and power requirements of the hardware. A two-stage paradigm has emerged in the literature for detection and tracking of dim targets in images, comprising spatial preprocessing followed by temporal filtering. In this paper, we investigate a hidden Markov model (HMM) based temporal filtering approach. Specifically, we propose an adaptive HMM filter, in which the variance of model parameters is refined as the quality of the target estimate improves. Filters with high variance (fat filters) are used for target acquisition, and filters with low variance (thin filters) are used for target tracking. The adaptive filter is tested in simulation and with real data (video of a collision-course aircraft). Our test results demonstrate that the adaptive filtering approach improves tracking performance and provides an estimate of target heading not present in previous HMM filtering approaches.
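The fat-to-thin filter idea can be shown on a toy one-dimensional analogue: the target occupies one of S pixels and drifts one pixel per frame, each frame is Gaussian noise with a small mean shift (a "dim" target) at the true pixel, and the HMM forward filter starts with a wide transition kernel for acquisition and narrows it once the posterior concentrates. All numbers below are invented for illustration; the real system works on 2-D image sequences.

```python
import numpy as np

rng = np.random.default_rng(4)

S, T, snr = 100, 60, 2.5
pos, frames = 20, []
for _ in range(T):
    frame = rng.normal(0, 1.0, S)
    frame[pos] += snr              # dim target: small mean shift at true pixel
    frames.append(frame)
    pos += 1                       # constant drift of one pixel per frame

idx = np.arange(S)

def transition(sigma):
    # Column-stochastic Gaussian kernel encoding drift of +1 pixel per frame.
    A = np.exp(-0.5 * ((idx[:, None] - idx[None, :] - 1) / sigma) ** 2)
    return A / A.sum(axis=0, keepdims=True)

belief = np.full(S, 1.0 / S)
sigma = 5.0                        # "fat" filter for acquisition
for frame in frames:
    belief = transition(sigma) @ belief       # predict
    belief = belief * np.exp(snr * frame)     # per-pixel likelihood ratios
    belief /= belief.sum()
    if belief.max() > 0.8:                    # posterior concentrated:
        sigma = 1.0                           # switch to the "thin" filter

print(int(belief.argmax()))
```

The fat kernel tolerates large initial uncertainty about target motion; the thin kernel rejects more noise once the track is established, which is the trade-off the adaptive filter exploits.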

Relevance: 100.00%

Abstract:

Multiple marker sets and models are currently available for assessing foot and ankle kinematics in gait. Despite this wide variety of models, the reporting of methodological designs remains inconsistent and lacks clearly defined standards. This review highlights the variability found in the reporting of biomechanical model parameters, methodological design, and model reliability. Further, the review clearly demonstrates the need for a consensus on which methodological considerations to report in manuscripts that focus on foot and ankle biomechanics. We propose five minimum reporting standards that we believe will ensure the transparency of methods and allow the community to move towards standard modelling practice. Strict adherence to these standards should ultimately improve the interpretation and clinical usability of foot and ankle marker sets and their corresponding models.

Relevance: 100.00%

Abstract:

The impact of weather on traffic and driver behaviour is not well studied in the literature, primarily due to a lack of integrated traffic and weather data. Weather can significantly affect traffic, and traffic management measures developed for fine weather might not be optimal for adverse weather. Simulation is an efficient tool for analyzing traffic management measures even before their actual implementation. Therefore, in order to develop and test traffic management measures for adverse weather conditions, we first need to analyze the effect of weather on fundamental traffic parameters and then calibrate the simulation model parameters so that traffic can be simulated under adverse weather conditions. In this paper, we first analyze the impact of weather on motorway traffic flow and drivers' behaviour, using traffic data from Swiss motorways and weather data from MeteoSuisse. We then develop a methodology to calibrate a microscopic simulation model with the aim of using it to simulate traffic under adverse weather conditions. The study is performed using AIMSUN, a microscopic traffic simulator.
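Calibration of traffic model parameters against observed data generically means minimizing a fit criterion over candidate parameter values. The sketch below does this for Greenshields' fundamental diagram q = vf · k · (1 − k/kj) by grid search over free-flow speed vf and jam density kj; the data are synthetic stand-ins for weather-specific detector measurements, and this is a generic illustration, not the AIMSUN calibration workflow.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic flow-density observations generated with vf=100 km/h, kj=140 veh/km.
k_obs = rng.uniform(5, 120, 200)                 # density (veh/km)
q_obs = 100 * k_obs * (1 - k_obs / 140) + rng.normal(0, 100, k_obs.size)

best = None
for vf in np.arange(60, 131, 1.0):               # free-flow speed grid (km/h)
    for kj in np.arange(100, 181, 1.0):          # jam density grid (veh/km)
        q_hat = vf * k_obs * (1 - k_obs / kj)    # Greenshields model prediction
        rmse = np.sqrt(np.mean((q_obs - q_hat) ** 2))
        if best is None or rmse < best[0]:
            best = (rmse, vf, kj)

print(best[1], best[2])   # recovered (vf, kj), near the generating (100, 140)
```

Repeating the fit separately on fine-weather and adverse-weather observations quantifies how the fundamental parameters shift with weather, which is the input a simulator calibration needs.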

Relevance: 100.00%

Abstract:

Fusion techniques have received considerable attention as a means of achieving lower error rates in biometrics. A fused classifier architecture based on sequential integration of multi-instance and multi-sample fusion schemes allows a controlled trade-off between false alarms and false rejects. Expressions for each type of error in the fused system have previously been derived for the case of statistically independent classifier decisions. This paper shows that the performance of the architecture can be improved by modelling the correlation between classifier decisions. Correlation modelling also enables better tuning of the fusion model parameters 'N', the number of classifiers, and 'M', the number of attempts/samples, and facilitates the determination of error bounds on false rejects and false accepts for each specific user. The error trade-off performance of the architecture is evaluated using HMM-based speaker verification on utterances of individual digits. Results show that performance improves in the case of favourably correlated decisions. The architecture investigated here is directly applicable to speaker verification from spoken digit strings, such as credit card numbers, in telephone or voice-over-internet-protocol applications. It is also applicable to other biometric modalities, such as fingerprints and handwriting samples.
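Under the baseline independence assumption that correlation modelling refines, the fused error rates for an N-classifier, M-attempt architecture follow from simple probability algebra. The AND-within-attempt / OR-across-attempts rule below is one plausible reading of such an architecture, shown for illustration; it is not necessarily the paper's exact fusion scheme.

```python
def fused_far_frr(far, frr, n, m):
    """
    Error rates for an AND-rule fusion of n classifier decisions, retried
    over up to m attempts, assuming statistically independent decisions
    (the baseline case that correlation modelling refines).
    """
    far_attempt = far ** n                   # impostor must fool all n classifiers
    frr_attempt = 1 - (1 - frr) ** n         # genuine user rejected if any classifier rejects
    far_system = 1 - (1 - far_attempt) ** m  # falsely accepted on any of m attempts
    frr_system = frr_attempt ** m            # rejected only if all m attempts fail
    return far_system, frr_system

far, frr = fused_far_frr(0.01, 0.05, n=3, m=2)
print(far, frr)
```

Modelling correlation replaces these independence products with joint decision probabilities, which is what enables the per-user error bounds discussed in the abstract.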