947 results for Empirical Bayes Methods
Abstract:
Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, namely an improved Widom's particle insertion method and the small-cell coexistence method, which we developed to obtain melting temperatures both accurately and quickly.
We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by sampling cavities efficiently when evaluating the integrals that provide the chemical potential of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles, without the reference system required by the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is then used to locate the melting temperature, and the results agree closely with experiment.
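For context, the quantity computed by Widom's method is the excess chemical potential mu_ex = -kT ln < exp(-dU/kT) >, where dU is the energy of inserting a test particle into the liquid. The Python sketch below implements only the naive, uniformly sampled version for a classical pair potential; the cavity-biased sampling and first-principles energies described above are not reproduced, and all function and variable names are illustrative.

    import numpy as np

    def widom_excess_mu(positions, box, pair_energy, kT, n_insert=10000, seed=None):
        """Naive Widom particle insertion: mu_ex = -kT * ln< exp(-dU/kT) >.

        positions      : (N, 3) array of particle coordinates
        box            : cubic box length (periodic boundaries)
        pair_energy(r) : vectorized pair potential as a function of distance
        """
        rng = np.random.default_rng(seed)
        boltzmann = np.empty(n_insert)
        for i in range(n_insert):
            trial = rng.random(3) * box                # uniform random insertion point
            d = positions - trial
            d -= box * np.round(d / box)               # minimum-image convention
            r = np.linalg.norm(d, axis=1)
            dU = np.sum(pair_energy(r))                # energy of the inserted test particle
            boltzmann[i] = np.exp(-dU / kT)
        return -kT * np.log(boltzmann.mean())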
We also propose the small-cell coexistence method, based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid that affects the fast-heating method, while also significantly reducing the computational cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of large system size, and an accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl demonstrate the accuracy and flexibility of the method in practical applications. The method is a promising approach for large-scale automated materials screening in which the melting temperature is a design criterion.
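As a hedged illustration of the statistical step only, the Python sketch below fits a sigmoid to the fraction of small coexistence runs that end fully liquid at each simulated temperature and takes the 50% crossing as the melting-temperature estimate; this is a generic stand-in for the analysis described above, not the thesis's actual procedure, and the names are illustrative.

    import numpy as np
    from scipy.optimize import curve_fit

    def estimate_melting_temperature(temperatures, fraction_liquid):
        """Fit a sigmoid to the outcome statistics of many small coexistence
        MD runs and return the temperature of the 50% liquid/solid crossing."""
        def sigmoid(T, Tm, width):
            return 1.0 / (1.0 + np.exp(-(T - Tm) / width))

        p0 = [np.mean(temperatures), 50.0]      # initial guess: mid-range Tm, 50 K width
        (Tm, width), _ = curve_fit(sigmoid, temperatures, fraction_liquid, p0=p0)
        return Tm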
We present in detail two examples of refractory materials. First, we demonstrate how key material properties that provide guidance in the design of refractory materials can be accurately determined via ab initio thermodynamic calculations in conjunction with experimental techniques based on synchrotron X-ray diffraction and thermal analysis under laser-heated aerodynamic levitation. The properties considered include the melting point, heat of fusion, heat capacity, thermal expansion coefficients, thermal stability, and sublattice disordering, as illustrated in the motivating example of lanthanum zirconate (La2Zr2O7). The close agreement with experiment for this known but structurally complex compound provides a good indication that the computational methods described can be used within a computational screening framework to identify novel refractory materials. Second, we report an extensive investigation into the melting temperatures of the Hf-C and Hf-Ta-C systems using ab initio calculations. With melting points above 4000 K, hafnium carbide (HfC) and tantalum carbide (TaC) are among the most refractory binary compounds known to date. Their mixture, with the general formula TaxHf1-xCy, is known to have a melting point of 4215 K at the composition Ta4HfC5, which has long been considered the highest melting point of any solid. Very few measurements of the melting point of tantalum and hafnium carbides have been documented, because of the obvious experimental difficulties at such extreme temperatures. The investigation allows us to identify three major chemical factors that contribute to the high melting temperatures. Based on these three factors, we propose and explore a new class of materials which, according to our ab initio calculations, may possess even higher melting temperatures than Ta-Hf-C. This example also demonstrates the feasibility of materials screening and discovery via ab initio calculations for the optimization of "higher-level" properties whose determination requires extensive sampling of atomic configuration space.
Abstract:
This thesis studies decision making under uncertainty and how economic agents respond to information. The classic model of subjective expected utility and Bayesian updating is often at odds with empirical and experimental results; people exhibit systematic biases in information processing and are often averse to ambiguity. The aim of this work is to develop simple models that capture observed biases and to study their economic implications.
In the first chapter I present an axiomatic model of cognitive dissonance, in which an agent's response to information explicitly depends upon past actions. I introduce novel behavioral axioms and derive a representation in which beliefs are directionally updated. The agent twists the information and overweights states in which his past actions provide a higher payoff. I then characterize two special cases of the representation. In the first case, the agent distorts the likelihood ratio of two states by a function of the utility values of the previous action in those states. In the second case, the agent's posterior beliefs are a convex combination of the Bayesian belief and the one which maximizes the conditional value of the previous action. Within the second case a unique parameter captures the agent's sensitivity to dissonance, and I characterize a way to compare sensitivity to dissonance between individuals. Lastly, I develop several simple applications and show that cognitive dissonance contributes to the equity premium and price volatility, asymmetric reaction to news, and belief polarization.
The second chapter characterizes a decision maker with sticky beliefs, that is, a decision maker who does not update her beliefs in response to information as much as a Bayesian decision maker would. This chapter provides axiomatic foundations for sticky beliefs by weakening the standard axioms of dynamic consistency and consequentialism. I derive a representation in which updated beliefs are a convex combination of the prior and the Bayesian posterior. A unique parameter captures the weight on the prior and is interpreted as the agent's measure of belief stickiness or conservatism bias. This parameter is endogenously identified from preferences and is easily elicited from experimental data.
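A minimal sketch of such a sticky-belief update over a finite state space is given below in Python; the function and parameter names are illustrative, not from the thesis.

    import numpy as np

    def sticky_update(prior, likelihood, stickiness):
        """Updated beliefs as a convex combination of the prior and the
        Bayesian posterior; `stickiness` in [0, 1] is the weight on the prior
        (0 recovers Bayesian updating, 1 means beliefs never move)."""
        prior = np.asarray(prior, dtype=float)
        posterior = prior * np.asarray(likelihood, dtype=float)
        posterior /= posterior.sum()                   # Bayes' rule
        return stickiness * prior + (1.0 - stickiness) * posterior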
The third chapter deals with updating in the face of ambiguity, using the framework of Gilboa and Schmeidler. There is no consensus on the correct way to update a set of priors. Current methods either do not allow a decision maker to make an inference about her priors or require an extreme level of inference. In this chapter I propose and axiomatize a general model of updating a set of priors. A decision maker who updates her beliefs in accordance with the model can be thought of as choosing a threshold that determines whether a prior is plausible given an observation. She retains the plausible priors and applies Bayes' rule. This model includes generalized Bayesian updating and maximum likelihood updating as special cases.
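The threshold rule can be sketched as follows for discrete states (Python); the implementation details are assumptions, but the limiting cases match the generalized Bayesian and maximum likelihood updating mentioned above.

    import numpy as np

    def update_prior_set(priors, likelihood, threshold):
        """Keep the priors whose marginal likelihood of the observation is at
        least `threshold` times the best marginal likelihood in the set, then
        apply Bayes' rule to each retained prior.

        threshold -> 0 recovers full (generalized) Bayesian updating of all
        priors; threshold = 1 recovers maximum likelihood updating.
        """
        likelihood = np.asarray(likelihood, dtype=float)
        marginals = [float(np.dot(p, likelihood)) for p in priors]
        best = max(marginals)
        posteriors = []
        for p, m in zip(priors, marginals):
            if m > 0 and m >= threshold * best:
                posteriors.append(np.asarray(p, dtype=float) * likelihood / m)  # Bayes' rule
        return posteriors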
Abstract:
The intent of this study is to provide formal apparatus which facilitates the investigation of problems in the methodology of science. The introduction contains several examples of such problems and motivates the subsequent formalism.
A general definition of a formal language is presented, and this definition is used to characterize an individual’s view of the world around him. A notion of empirical observation is developed which is independent of language. The interplay of formal language and observation is taken as the central theme. The process of science is conceived as the finding of that formal language that best expresses the available experimental evidence.
To characterize the manner in which a formal language imposes structure on its universe of discourse, the fundamental concepts of elements and states of a formal language are introduced. Using these, the notion of a basis for a formal language is developed as a collection of minimal states distinguishable within the language. The relation of these concepts to those of model theory is discussed.
An a priori probability defined on sets of observations is postulated as a reflection of an individual’s ontology. This probability, in conjunction with a formal language and a basis for that language, induces a subjective probability describing an individual’s conceptual view of admissible configurations of the universe. As a function of this subjective probability, and consequently of language, a measure of the informativeness of empirical observations is introduced and is shown to be intuitively plausible – particularly in the case of scientific experimentation.
The developed formalism is then systematically applied to the general problems presented in the introduction. The relationship of scientific theories to empirical observations is discussed, and certain tacit, unstatable knowledge is shown to be necessary to fully comprehend the meaning of realistic theories. The idea that many common concepts can be specified only by drawing on knowledge obtained from an infinite number of observations is presented, and the problems of reductionism are examined in this context.
A definition of when one formal language can be considered to be more expressive than another is presented, and the change in the informativeness of an observation as language changes is investigated. In this regard it is shown that the information inherent in an observation may decrease for a more expressive language.
The general problem of induction and its relation to the scientific method are discussed. Two hypotheses concerning an individual’s selection of an optimal language for a particular domain of discourse are presented and specific examples from the introduction are examined.
Abstract:
Jet noise reduction is an important goal within both commercial and military aviation. Although large-scale numerical simulations are now able to simultaneously compute turbulent jets and their radiated sound, low-cost, physically motivated models are needed to guide noise-reduction efforts. A particularly promising modeling approach centers on certain large-scale coherent structures, called wavepackets, that are observed in jets and their radiated sound. The typical approach to modeling wavepackets is to approximate them as linear modal solutions of the Euler or Navier-Stokes equations linearized about the long-time mean of the turbulent flow field. The near-field wavepackets obtained from these models show compelling agreement with those educed from experimental and simulation data for both subsonic and supersonic jets, but the acoustic radiation is severely under-predicted in the subsonic case. This thesis contributes to two aspects of these models. First, two new solution methods are developed that can be used to efficiently compute wavepackets and their acoustic radiation, reducing the computational cost of the model by more than an order of magnitude. The new techniques are spatial integration methods and constitute a well-posed, convergent alternative to the frequently used parabolized stability equations. Using concepts related to well-posed boundary conditions, the methods are formulated for general hyperbolic equations and thus have potential applications in many fields of physics and engineering. Second, the nonlinear and stochastic forcing of wavepackets is investigated with the goal of identifying and characterizing the missing dynamics responsible for the under-prediction of acoustic radiation by linear wavepacket models for subsonic jets. Specifically, we use ensembles of large-eddy-simulation flow and force data along with two data decomposition techniques to educe the actual nonlinear forcing experienced by wavepackets in a Mach 0.9 turbulent jet. Modes with high energy are extracted using proper orthogonal decomposition, while high-gain modes are identified using a novel technique called empirical resolvent-mode decomposition. In contrast to the flow and acoustic fields, the forcing field is characterized by a lack of energetic coherent structures. Furthermore, the structures that do exist are largely uncorrelated with the acoustic field. Instead, the forces that most efficiently excite an acoustic response appear to take the form of random turbulent fluctuations, implying that direct feedback from nonlinear interactions amongst wavepackets is not an essential noise source mechanism. This suggests that the essential ingredients of sound generation in high Reynolds number jets are contained within the linearized Navier-Stokes operator rather than in the nonlinear forcing terms, a conclusion that has important implications for jet noise modeling.
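The modal eduction mentioned above relies on standard snapshot decompositions. The Python sketch below shows a generic proper orthogonal decomposition computed via the SVD; it is illustrative only (the names are not from the thesis) and does not reproduce the empirical resolvent-mode decomposition or the forcing-eduction procedure.

    import numpy as np

    def pod_modes(snapshots, n_modes):
        """Snapshot proper orthogonal decomposition via the SVD.

        snapshots : (n_points, n_snapshots) data matrix, temporal mean removed;
                    each column is one flow (or forcing) snapshot.
        Returns the leading spatial modes and their modal energies.
        """
        U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
        energies = (s[:n_modes] ** 2) / snapshots.shape[1]
        return U[:, :n_modes], energies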
Abstract:
Nine base quartets were calculated with six semi-empirical methods and the ab initio Hartree-Fock method using the STO-3G basis set. The results showed that the PM3 method can be used to calculate base quartets, as the results of the PM3 calculations are close to the ab initio results.
Abstract:
The ground movements induced by the construction of supported excavation systems are generally predicted by empirical/semi-empirical methods in the design stage. However, these methods cannot account for the site-specific conditions and for information that becomes available as an excavation proceeds. A Bayesian updating methodology is proposed to update the predictions of ground movements in the later stages of excavation based on recorded deformation measurements. As an application, the proposed framework is used to predict the three-dimensional deformation shapes at four incremental excavation stages of an actual supported excavation project. © 2011 Taylor & Francis Group, London.
Abstract:
The ground movements induced by the construction of supported excavation systems are generally predicted in the design stage by empirical/semi-empirical methods. However, these methods cannot account for the site-specific conditions and for information that becomes available as an excavation proceeds. A Bayesian updating methodology is proposed to update the predictions of ground movements in the later stages of excavation based on recorded deformation measurements. As an application, the proposed framework is used to predict the three-dimensional deformation shapes at four incremental excavation stages of an actual supported excavation project. Copyright © ASCE 2011.
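As a hedged illustration of the general idea, not of the specific formulation used in these studies, the Python sketch below updates a single multiplicative model-bias factor for ground-movement predictions from recorded deformations, assuming a conjugate Gaussian prior and Gaussian measurement noise; all names and the noise model are assumptions.

    import numpy as np

    def update_bias_factor(prior_mean, prior_var, predicted, measured, noise_var):
        """Conjugate Gaussian update of a bias factor b in: measured = b * predicted + noise.

        predicted, measured : deformations from the excavation stages observed so far
        Returns the posterior mean and variance of b, which can then be used to
        scale the empirical/semi-empirical prediction for later stages.
        """
        predicted = np.asarray(predicted, dtype=float)
        measured = np.asarray(measured, dtype=float)
        post_precision = 1.0 / prior_var + np.sum(predicted ** 2) / noise_var
        post_var = 1.0 / post_precision
        post_mean = post_var * (prior_mean / prior_var
                                + np.sum(predicted * measured) / noise_var)
        return post_mean, post_var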
Abstract:
A new theoretical framework for tracer methods, based on mass conservation, is proposed in the present contribution. The model is applicable to both artificial and natural tracers. It can be used to calculate the spatial distribution patterns of the sediment transport rate, thus providing independent information and verification for results derived from empirical formulae. The calculation proceeds as follows. First, the tracer concentration and topographic maps at two times are obtained. Then, the spatial and temporal changes in concentration and seabed elevation are calculated, and the required boundary conditions are determined from field observations (such as flow and bedform-migration measurements). Finally, based upon eqs. (1) and (13), the transport rate is calculated and expressed as a function of position over the study area. Further, appropriate modifications to the model may allow the tracer to have a density and grain-size distribution different from those of the bulk sediment.
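For orientation only, the Python sketch below shows the standard one-dimensional sediment-continuity (Exner) step of such an inversion; it is not a reconstruction of eqs. (1) and (13), and the tracer-concentration balance that distinguishes the proposed framework is not included.

    import numpy as np

    def transport_rate_from_bed_change(eta_t1, eta_t2, dt, dx, q_boundary, porosity=0.4):
        """Invert (1 - p) * d(eta)/dt + dq/dx = 0 for the transport rate q(x),
        integrating from a boundary where q is known (e.g. from flow or
        bedform-migration measurements).

        eta_t1, eta_t2 : bed elevations along the profile at the two survey times
        """
        deta_dt = (np.asarray(eta_t2, dtype=float) - np.asarray(eta_t1, dtype=float)) / dt
        dq_dx = -(1.0 - porosity) * deta_dt
        return q_boundary + np.cumsum(dq_dx) * dx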
Abstract:
This thesis is based on the research projects "Research on the fine structure and mechanical parameters of the abutment jointed rock mass of the high arch dam at the Jinping I Hydropower Station, Yalong River" and "Research on the fine structure and mechanical parameters of the columnar basalt rock mass at the Baihetan Hydropower Station, Jinsha River". A complete system for fine-structure description and rock mass classification is established. The research comprises six main aspects: (1) methods for describing the fine structure of window rock mass; (2) classification of window rock mass based on its fine structure; (3) model tests of intermittent joints; (4) a strength theory for window rock mass; (5) numerical experiments on window rock mass; and (6) multi-source fusion of mechanical parameters based on Bayes' principle. The variation of intact rock strength and joint conditions with the degree of weathering and relaxation is studied through the description of window rock mass, and four principal parameters (intact rock point-load strength, degree of integrity of the window rock mass, joint conditions, and groundwater conditions) are selected to assess the window rock mass. Window rock mass is classified into three types using the results of the fine-structure description combined with joint development patterns. Scores for intact rock strength, integrity, discontinuity condition, and groundwater condition are assigned based on the fine-structure description. Quality evaluations of two different types of rock mass, one with a general joint structure and one with a columnar jointed structure, are then carried out using this classification system, and the application results show that the system is effective and applicable. Aimed at the structural features of window rock mass "damaged by recessive fractures", model tests and numerical models of intermittent joints are designed. The model tests yield the shear strength under different normal stresses for intact samples, through-going joint samples, and intermittent joint samples, and the trends of shear strength with joint connectivity rate are analyzed. The entire direct shear test process is also simulated numerically using PFC2D. To match the stress-strain curves of the numerical simulations with those of the experimental tests for both intact and through-going samples, the mechanical parameters between particles are adjusted. Using the same particle geometric parameters, numerical samples of intermittent joints with different connectivity are rebuilt; the rock bridges and joints in the test samples are assigned fixed particle-contact parameters, and a series of direct shear tests is conducted. The failure process and the mechanical parameters at both the micro and macro scales are thus obtained. By synthesizing the numerical and laboratory results and analyzing the evolution of stress and strain on the intermittent joint plane, we conclude that the concentration of compressive stress on the rock bridges increases the shear strength. We discuss the failure mechanics of intermittently jointed rock under direct shear and divide the whole shear process into five phases: an elastic phase, a fracture-initiation phase, a peak phase, a post-peak phase, and a residual phase.
In developing the strength theory, the shear strength mechanisms of joints and rock bridges are analyzed separately. To make the derived formulations convenient to apply in real projects, a relationship between these formulations and the Mohr-Coulomb criterion is established. A set of numerical simulation methods based on the distinct element method (UDEC) and in-situ geological mapping is developed and introduced, and approaches for determining the mechanical parameters of intact rock and joints in the numerical models are studied. The workflow and analysis results are demonstrated in detail for the numerical-test-based studies of rock mass parameters at the Jinping I and Baihetan hydropower stations, and their advantages and disadvantages are discussed by comparison. The numerical simulation results show that shear strength parameters can be obtained by changing the loading conditions. Multi-source rock mass mechanical parameters, namely test values, empirical values, and theoretical values, can be fused using Bayes' theorem; the value ranges and associated confidence probabilities for the different rock mass grades are then derived, and these data support reliability-based design.
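As a minimal illustration of the fusion step, and not the specific Bayesian scheme developed in the thesis, the Python sketch below combines independent Gaussian estimates of a rock mass parameter (for example a test value, an empirical value, and a theoretical value) by precision weighting; the numbers and names are hypothetical.

    import numpy as np

    def fuse_gaussian_estimates(means, variances):
        """Precision-weighted Bayesian fusion of independent Gaussian estimates
        under a flat prior; returns the fused mean and variance."""
        means = np.asarray(means, dtype=float)
        precisions = 1.0 / np.asarray(variances, dtype=float)
        fused_var = 1.0 / precisions.sum()
        fused_mean = fused_var * np.sum(precisions * means)
        return fused_mean, fused_var

    # Hypothetical example: fuse test, empirical, and theoretical estimates of a
    # friction angle (degrees) with different uncertainties.
    mean, var = fuse_gaussian_estimates([38.0, 35.0, 36.5], [4.0, 9.0, 6.0])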
Abstract:
Morgan, H.; Habbal, S. R., An empirical 3D model of the large-scale coronal structure based on the distribution of Hα filaments on the solar disk, Astronomy and Astrophysics, Volume 464, Issue 1, March II 2007, pp. 357-365.
Abstract:
The abundance of many commercially important fish stocks is declining, and this has led to widespread concern about the performance of traditional approaches to fisheries management. Quantitative models are used to obtain estimates of population abundance, and management advice is based on annual harvest levels (TAC), whereby only a certain amount of catch is allowed from specific fish stocks. However, these models are data-intensive and less useful when stocks have limited historical information. This study examined whether empirical stock indicators can be used to manage fisheries. The relationship between indicators and the underlying stock abundance is not direct and hence can be affected by disturbances that may have both transient and persistent effects. Methods from Statistical Process Control (SPC) theory, such as Cumulative Sum (CUSUM) control charts, are useful in classifying these effects and can therefore be used to trigger a management response only when a significant impact on stock biomass occurs. This thesis explores how empirical indicators, together with CUSUM, can be used for the monitoring, assessment, and management of fish stocks. I begin the thesis by exploring various age-based catch indicators to identify those that are potentially useful in tracking the state of fish stocks. The sensitivity and response of these indicators to changes in Spawning Stock Biomass (SSB) showed that indicators based on age groups that are fully selected by the fishing gear, or Large Fish Indicators (LFIs), are the most useful and robust across the range of scenarios considered. The Decision-Interval (DI-CUSUM) and Self-Starting (SS-CUSUM) forms are the two types of control charts used in this study. In contrast to the DI-CUSUM, the SS-CUSUM can be initiated without specifying a target reference point (the 'control mean') to detect out-of-control (significant impact) situations. The sensitivity and specificity of the SS-CUSUM showed that its performance is robust when LFIs are used. Once an out-of-control situation is detected, the next step is to determine how large a shift has occurred in the underlying stock biomass. If an estimate of this shift is available, it can be used to update the TAC by incorporation into Harvest Control Rules (HCRs). Various methods from Engineering Process Control (EPC) theory were tested to determine which can measure the shift size in stock biomass with the highest accuracy. Results showed that methods based on Grubbs' harmonic rule gave reliable shift-size estimates. The accuracy of these estimates can be improved by monitoring a combined indicator metric of stock-recruitment and the LFI, because this may account for impacts independent of fishing. The procedure of integrating SPC and EPC is known as Statistical Process Adjustment (SPA). An HCR based on SPA was designed for the DI-CUSUM, and the scheme was successful in bringing out-of-control fish stocks back to their in-control state. The HCR was also tested using the SS-CUSUM in the context of data-poor fish stocks. Results showed that the scheme will be useful for sustaining the initial in-control state of a fish stock until more observations become available for quantitative assessment.
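For readers unfamiliar with CUSUM charts, the Python sketch below implements a generic two-sided tabular CUSUM of the kind referred to above; the parameter names and alarm rule are standard textbook choices, not the specific DI-CUSUM or SS-CUSUM configurations used in the thesis.

    import numpy as np

    def tabular_cusum(x, target, k, h):
        """Two-sided tabular CUSUM for monitoring a stock indicator.

        x      : sequence of indicator observations (e.g. a large-fish indicator)
        target : in-control reference level (the 'control mean')
        k      : allowance (slack), often half the shift to be detected
        h      : decision interval; an alarm is raised when either sum exceeds h
        Returns the upper and lower CUSUM paths and the index of the first alarm
        (None if no alarm occurs).
        """
        c_plus, c_minus, alarm = 0.0, 0.0, None
        upper, lower = [], []
        for t, xt in enumerate(x):
            c_plus = max(0.0, c_plus + (xt - target) - k)
            c_minus = max(0.0, c_minus + (target - xt) - k)
            upper.append(c_plus)
            lower.append(c_minus)
            if alarm is None and (c_plus > h or c_minus > h):
                alarm = t
        return np.array(upper), np.array(lower), alarm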
Abstract:
BACKGROUND: Dropouts and missing data are nearly ubiquitous in obesity randomized controlled trials, threatening the validity and generalizability of conclusions. Herein, we meta-analytically evaluate the extent of missing data, the frequency with which various analytic methods are employed to accommodate dropouts, and the performance of multiple statistical methods. METHODOLOGY/PRINCIPAL FINDINGS: We searched the PubMed and Cochrane databases (2000-2006) for articles published in English and manually searched bibliographic references. Articles of pharmaceutical randomized controlled trials with weight loss or weight gain prevention as major endpoints were included. Two authors independently reviewed each publication for inclusion. 121 articles met the inclusion criteria. Two authors independently extracted treatment, sample size, drop-out rates, study duration, and the statistical method used to handle missing data from all articles and resolved disagreements by consensus. In the meta-analysis, drop-out rates were substantial, with the survival (non-dropout) rates approximated by an exponential decay curve exp(-λt), where λ was estimated to be 0.0088 (95% bootstrap confidence interval: 0.0076 to 0.0100) and t represents time in weeks. The estimated drop-out rate at 1 year was 37%. Most studies used last observation carried forward as the primary analytic method to handle missing data. We also obtained 12 raw obesity randomized controlled trial datasets for empirical analyses. Analyses of the raw randomized controlled trial data suggested that both mixed models and multiple imputation performed well, but that multiple imputation may be more robust when missing data are extensive. CONCLUSION/SIGNIFICANCE: Our analysis offers an equation for predicting dropout rates that is useful for future study planning. Our raw data analyses suggest that multiple imputation is better than other methods for handling missing data in obesity randomized controlled trials, followed closely by mixed models. We suggest these methods supplant last observation carried forward as the primary method of analysis.
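As a quick check of the reported figures, the Python snippet below evaluates the fitted survival curve at one year using only the λ estimate quoted above.

    import math

    # Survival (non-dropout) curve exp(-lambda * t), with lambda = 0.0088 per week.
    lam = 0.0088
    t = 52                              # one year, in weeks
    survival = math.exp(-lam * t)       # ~ 0.63 of participants retained
    dropout = 1.0 - survival            # ~ 0.37, matching the reported 37%
    print(f"predicted 1-year dropout: {dropout:.0%}")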
Abstract:
Illicit trade carries the potential to magnify existing tobacco-related health care costs through increased availability of untaxed and inexpensive cigarettes. What is known about the magnitude of illicit trade in Vietnam is produced primarily by the industry, and the methodologies are typically opaque. Independent assessment of the illicit cigarette trade in Vietnam is vital to tobacco control policy. This paper measures the magnitude of the illicit cigarette trade in Vietnam between 1998 and 2010 using two methods: discrepancies between legitimate domestic cigarette sales and domestic tobacco consumption estimated from surveys, and discrepancies between trade flows as recorded by Vietnam and by its trade partners. The results indicate that Vietnam likely experienced net inward smuggling during the period studied. With the inclusion of adjustments for survey respondent under-reporting, inward illicit trade likely occurred in three of the four years for which surveys were available. Discrepancies in trade records indicate that the value of cigarettes smuggled into Vietnam ranged from $100 million to $300 million between 2000 and 2010 and that these cigarettes primarily originated in Singapore, Hong Kong, Macao, Malaysia, and Australia. Notable differences in trends over time exist between the two methods, but by comparison, the industry estimates consistently place the magnitude of illicit trade at the upper bound of what this study shows. First, the unavailability of annual, survey-based estimates of consumption may obscure the true annual trend over time. Second, as surveys changed over time, estimates relying on them may be inconsistent with one another. Finally, these two methods measure different components of illicit trade, specifically consumption of illicit cigarettes regardless of origin and smuggling of cigarettes into a particular market. However, absent a gold standard, comparisons of different approaches to illicit trade measurement serve efforts to refine and improve measurement approaches and estimates.
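The two measurement approaches can be summarized schematically as follows (Python); the function names and the under-reporting adjustment are illustrative, not the exact computations of the paper.

    def consumption_gap(survey_consumption, tax_paid_sales, underreporting=0.0):
        """Gap method: survey-based consumption, optionally scaled up for
        respondent under-reporting, minus legal (tax-paid) sales. A positive
        gap suggests net inward illicit trade; a negative gap suggests outflow
        or residual under-reporting."""
        return survey_consumption * (1.0 + underreporting) - tax_paid_sales

    def trade_gap(partner_reported_exports, recorded_imports):
        """Trade-discrepancy method: exports to the country reported by trade
        partners minus the country's own recorded imports."""
        return partner_reported_exports - recorded_imports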
Abstract:
Cigarette smuggling reduces the price of cigarettes, thwarts youth access restrictions, reduces government revenue, and undercuts the ability of taxes to reduce consumption. The tobacco industry often opposes increases to tobacco taxes on the claim that higher taxes induce more smuggling. To date, little is known about the magnitude of smuggling in the Philippines. This information is necessary to effectively address illicit trade and to measure the impacts of tax changes and the introduction of secure tax markings on illicit trade. This study employs two gap discrepancy methods to estimate the magnitude of illicit trade in cigarettes in the Philippines between 1994 and 2009. First, domestic consumption is compared with tax-paid sales to measure the consumption of illicit cigarettes. Second, imports recorded by the Philippines are compared with exports to the Philippines reported by trade partners to measure smuggling. Domestic consumption fell short of tax-paid sales for all survey years. The magnitude of these differences and a comparison with a prevalence survey for 2009 suggest a high level of survey under-reporting of smoking. In the late 1990s and the mid-2000s, the Philippines experienced two sharp declines in trade discrepancies, from a high of $750 million in 1995 to a low of $133.7 million in 2008. Discrepancies composed more than one-third of the domestic market in 1995, but only 10 percent in 2009. Hong Kong, Singapore, and China together account for more than 80 percent of the cumulative discrepancies over the period and 74 percent of the discrepancy in 2009. The presence of large discrepancies supports the need to implement an effective tax marking and tobacco track-and-trace system to reduce illicit trade and support tax collection. The absence of a relation between tax changes and smuggling suggests that potential increases in the excise tax should not be discouraged by concerns about illicit trade. Finally, the identification of specific trade partners as primary sources of illicit trade may facilitate targeted efforts in cooperation with these governments to reduce illicit trade.