212 results for parameter uncertainty
Abstract:
Vibration and acoustic analysis at higher frequencies faces two challenges: computing the response without using an excessive number of degrees of freedom, and quantifying its uncertainty due to small spatial variations in geometry, material properties and boundary conditions. Efficient models make use of the observation that when the response of a decoupled vibro-acoustic subsystem is sufficiently sensitive to such spatial variations, the local statistics of its natural frequencies and mode shapes saturate to universal probability distributions. This holds irrespective of the causes that underlie these spatial variations and thus leads to a nonparametric description of uncertainty. This work deals with the identification of uncertain parameters in such models by using experimental data. One of the difficulties is that both experimental errors and modeling errors are present, the latter due to the nonparametric uncertainty that is inherent to this model type. This is tackled by employing a Bayesian inference strategy. The prior probability distribution of the uncertain parameters is constructed using the maximum entropy principle. The likelihood function that is subsequently computed takes the experimental information, the experimental errors and the modeling errors into account. The posterior probability distribution, which is computed with the Markov chain Monte Carlo method, provides a full uncertainty quantification of the identified parameters, and indicates how well their uncertainty is reduced, with respect to the prior information, by the experimental data. © 2013 Taylor & Francis Group, London.
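A minimal sketch of the kind of inference strategy this abstract describes, not the authors' vibro-acoustic implementation: a bounded scalar parameter with a uniform (maximum-entropy) prior, a Gaussian likelihood whose variance lumps experimental and modeling errors together, and a random-walk Metropolis sampler for the posterior. The forward model, data, and error level below are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def model(theta, freqs):
    # Hypothetical forward model: predicted response at the given frequencies.
    return theta * np.sin(freqs)

freqs = np.linspace(1.0, 10.0, 50)
data = model(2.0, freqs) + rng.normal(0.0, 0.1, freqs.size)   # synthetic "measurements"

theta_lo, theta_hi = 0.0, 5.0   # bounded support -> uniform maximum-entropy prior
sigma_total = 0.15              # combined experimental + modeling error std (assumed)

def log_posterior(theta):
    if not (theta_lo <= theta <= theta_hi):
        return -np.inf          # zero prior density outside the bounds
    resid = data - model(theta, freqs)
    return -0.5 * np.sum((resid / sigma_total) ** 2)

# Random-walk Metropolis: a basic Markov chain Monte Carlo scheme.
samples, theta, logp = [], 1.0, log_posterior(1.0)
for _ in range(20000):
    prop = theta + rng.normal(0.0, 0.1)
    logp_prop = log_posterior(prop)
    if np.log(rng.uniform()) < logp_prop - logp:
        theta, logp = prop, logp_prop
    samples.append(theta)

post = np.array(samples[5000:])  # discard burn-in
print(f"posterior mean {post.mean():.3f}, std {post.std():.3f}")
```

The posterior spread reported at the end is what quantifies how much the (synthetic) data reduce the prior uncertainty.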
Abstract:
The uncertainty associated with a rainfall-runoff and non-point source loading (NPS) model can be attributed to both the parameterization and the model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e. sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is first to find structural limitations in scale for the use of the conceptual NPS model, and then to examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The overlapping regions are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought to find optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, analysis of the stochastic suitability and uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel, as is the strategy of delineating a watershed based on the uncertainty distribution. The results of this paper demonstrate a narrow range of acceptable model structure for stochastic analysis in the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions. © 2011 Copyright IAHS Press.
Abstract:
Coupled hydrology and water quality models are an important tool today, used in the understanding and management of surface water and watershed areas. Such problems are generally subject to substantial uncertainty in parameters, process understanding, and data. Component models, drawing on different data, concepts, and structures, are affected differently by each of these uncertain elements. This paper proposes a framework wherein the response of component models to their respective uncertain elements can be quantified and assessed, using a hydrological model and water quality model as two exemplars. The resulting assessments can be used to identify model coupling strategies that permit more appropriate use and calibration of individual models, and a better overall coupled model response. One key finding was that an approximate balance of water quality and hydrological model responses can be obtained using both the QUAL2E and Mike11 water quality models. The balance point, however, does not support a particularly narrow surface response (or stringent calibration criteria) with respect to the water quality calibration data, at least in the case examined here. Additionally, it is clear from the results presented that the structural source of uncertainty is at least as significant as parameter-based uncertainties in areal models. © 2012 John Wiley & Sons, Ltd.
Abstract:
We demonstrate a parameter extraction algorithm based on a theoretical transfer function, which takes into account a converging THz beam. Using this, we successfully extract material parameters from data obtained for a quartz sample with a THz time domain spectrometer. © 2010 IEEE.
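For context, a minimal sketch of standard THz time-domain spectroscopy parameter extraction under a plane-wave, normal-incidence, single-pass approximation; the paper's transfer function, which additionally accounts for a converging THz beam, is more involved and is not reproduced here. The pulse shape, sample thickness, and material values are illustrative placeholders.

```python
import numpy as np

c = 299792458.0      # speed of light, m/s
d = 1.0e-3           # assumed sample thickness: 1 mm

# Synthetic reference pulse and a delayed, attenuated "sample" pulse.
t = np.linspace(0.0, 40e-12, 512)
pulse = lambda t0: np.exp(-((t - t0) / 0.3e-12) ** 2)
n_true = 2.1
e_ref = pulse(5e-12)
e_sam = 0.8 * pulse(5e-12 + (n_true - 1.0) * d / c)

# Complex transmission T(omega) = E_sam(omega) / E_ref(omega).
freq = np.fft.rfftfreq(t.size, t[1] - t[0])
T = np.fft.rfft(e_sam) / np.fft.rfft(e_ref)
omega = 2.0 * np.pi * freq

band = (freq > 0.2e12) & (freq < 2.5e12)        # usable bandwidth only
phase = np.unwrap(np.angle(T))                  # unwrap from DC to keep the correct branch
n = 1.0 - c * phase[band] / (omega[band] * d)   # refractive index from the phase delay
alpha = -(2.0 / d) * np.log(np.abs(T[band]) * (n + 1.0) ** 2 / (4.0 * n))

print(f"mean index ~ {n.mean():.2f} (true {n_true}), mean absorption ~ {alpha.mean():.0f} 1/m")
```

The phase of the transmission ratio yields the index, and the amplitude, corrected for the Fresnel losses at the two interfaces, yields the absorption coefficient.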
Abstract:
This paper presents a method for the fast and direct extraction of model parameters for capacitive MEMS resonators, such as quality factor, resonant frequency, and motional resistance, from their measured transmission response. We show that these parameters may be extracted without having to first de-embed the resonator motional current from the feedthrough. The series and parallel resonances from the measured electrical transmission are used to determine the MEMS resonator circuit parameters. The theoretical basis for the method is elucidated by using both the Nyquist and susceptance frequency response plots; the method is applicable in the limit where CF > CmQ, which is commonly the case when characterizing MEMS resonators at RF. The method is then applied to the measured electrical transmission for capacitively transduced MEMS resonators, and compared against parameters obtained using a Lorentzian fit to the measured response. Close agreement between the two methods is reported herein. © 2010 IEEE.
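As an illustration, a minimal sketch of direct extraction for a capacitive MEMS resonator modelled as a series Rm-Lm-Cm motional branch shunted by a feedthrough capacitor CF. It uses the textbook relations between the series and parallel resonance frequencies rather than the paper's exact Nyquist/susceptance construction, and the input values are made up for the example.

```python
import math

def direct_extraction(f_series, f_parallel, c_feedthrough, q_factor):
    """Return (Cm, Lm, Rm) from measured series/parallel resonances.

    Assumes CF is known (e.g. from the off-resonance feedthrough level)
    and that Q has been read from the -3 dB bandwidth at series resonance.
    """
    # Parallel (anti)resonance: fp = fs * sqrt(1 + Cm/CF)  =>  solve for Cm.
    c_m = c_feedthrough * ((f_parallel / f_series) ** 2 - 1.0)
    l_m = 1.0 / ((2.0 * math.pi * f_series) ** 2 * c_m)  # fs = 1/(2*pi*sqrt(Lm*Cm))
    r_m = math.sqrt(l_m / c_m) / q_factor                 # Q = (1/Rm)*sqrt(Lm/Cm)
    return c_m, l_m, r_m

# Example with plausible (made-up) values for an RF MEMS resonator.
c_m, l_m, r_m = direct_extraction(
    f_series=10.0e6,        # Hz
    f_parallel=10.002e6,    # Hz
    c_feedthrough=50e-15,   # F
    q_factor=20000,
)
print(f"Cm = {c_m:.3e} F, Lm = {l_m:.3e} H, Rm = {r_m:.1f} ohm")
```

Because the parallel resonance is shifted from the series resonance by a factor that depends only on Cm/CF, the motional branch can be sized without separating the motional current from the feedthrough first.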
Abstract:
This paper presents a method for fast and accurate determination of parameters relevant to the characterization of capacitive MEMS resonators, such as the quality factor (Q), the resonant frequency (fn), and equivalent circuit parameters such as the motional capacitance (Cm). In the presence of a parasitic feedthrough capacitor (CF) appearing across the input and output ports, the transmission characteristic is marked by two resonances: series (S) and parallel (P). By using these series and parallel resonances, close approximations of the circuit parameters are obtained without having to first de-embed the resonator motional current, which is typically buried in the feedthrough. While previous methods with the same objective are well known, we show that these are limited to the condition where CF ≪ CmQ. In contrast, this work focuses on moderate capacitive feedthrough levels where CF > CmQ, which are more common in MEMS resonators. The method is applied to data obtained from the measured electrical transmission of fabricated SOI MEMS resonators. Parameter values deduced via direct extraction are then compared against those obtained by a full extraction procedure in which de-embedding is first performed, followed by a Lorentzian fit to the data based on the classical transfer function associated with a generic LRC series resonant circuit. © 2011 Elsevier B.V. All rights reserved.
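Complementing the direct-extraction sketch above, the following is a generic illustration of the "full extraction" route this abstract uses as a reference: de-embed the feedthrough admittance, then fit a Lorentzian to the motional response to obtain fn, Q, and the peak level. Synthetic data and a known CF are assumed; this is not the paper's exact measurement chain.

```python
import numpy as np
from scipy.optimize import curve_fit

# Synthetic measured admittance: series Rm-Lm-Cm branch in parallel with CF.
r_m, l_m, c_m, c_f = 4.0e4, 12.665, 2.0e-17, 50e-15
f = np.linspace(9.99e6, 10.01e6, 2001)
w = 2.0 * np.pi * f
y_motional = 1.0 / (r_m + 1j * (w * l_m - 1.0 / (w * c_m)))
y_measured = y_motional + 1j * w * c_f     # feedthrough adds in parallel

# Step 1: de-embed the feedthrough (CF assumed known here).
y_deembedded = y_measured - 1j * w * c_f

# Step 2: Lorentzian fit to |Y|^2 of the de-embedded response.
def lorentzian(f, peak, f_n, q):
    return peak / (1.0 + 4.0 * q**2 * ((f - f_n) / f_n) ** 2)

p0 = [np.abs(y_deembedded).max() ** 2, f[np.abs(y_deembedded).argmax()], 1e4]
(peak, f_n, q), _ = curve_fit(lorentzian, f, np.abs(y_deembedded) ** 2, p0=p0)

print(f"fn = {f_n/1e6:.6f} MHz, Q = {q:.0f}, Rm = {1/np.sqrt(peak):.1f} ohm")
```

The de-embedding step is exactly what the direct method avoids; when CF is not small compared with CmQ, the two resonances in the raw transmission carry enough information to skip it.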
Abstract:
This research addresses product introduction dispersed across locations and companies. Mechanisms appropriate to integrate activities in collocated teams may not serve dispersed teams well. A semiconductor design licensor was studied in depth to explore how dispersed product introduction varies with uncertainty. We found that autonomous teams focused on sub-products (micro-products) were used rather than cross-functional teams in departments with high architectural uncertainty. Both types of teams were effectively dispersed across locations and companies. This suggests that small high-technology companies may find it easier to expand into new geographies and product lines than was previously believed.
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
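A minimal sketch of a likelihood-based ABC estimator on a toy model (a normal distribution with unknown mean): the ABC likelihood at each candidate parameter is approximated by simulating data and applying a Gaussian kernel to the distance between simulated and observed summary statistics, and the estimate is the parameter maximizing this approximate likelihood over a grid. The paper's sequential Monte Carlo implementation and asymptotic analysis are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
observed = rng.normal(3.0, 1.0, size=100)   # synthetic "observed" data
s_obs = observed.mean()                      # summary statistic

def abc_log_likelihood(theta, n_sims=500, eps=0.05):
    """Monte Carlo estimate of the kernel-smoothed ABC log-likelihood at theta."""
    sims = rng.normal(theta, 1.0, size=(n_sims, observed.size))
    s_sim = sims.mean(axis=1)
    # Gaussian kernel of bandwidth eps applied to the summary-statistic distance.
    weights = np.exp(-0.5 * ((s_sim - s_obs) / eps) ** 2)
    return np.log(weights.mean() + 1e-300)

grid = np.linspace(2.0, 4.0, 201)
log_lik = np.array([abc_log_likelihood(th) for th in grid])
theta_hat = grid[log_lik.argmax()]
print(f"ABC maximum-likelihood estimate of the mean: {theta_hat:.3f}")
```

The grid search is only for clarity; in practice the simulation noise in the approximate likelihood is what motivates the sequential Monte Carlo machinery discussed in the abstract.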