937 results for Empirical Bayes method


Relevance:

30.00%

Publisher:

Abstract:

Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of the design and analysis. The probabilistic back analysis method formulated on Bayes' theorem and solved using the Markov chain Monte Carlo simulation method with a Metropolis-Hastings algorithm has proved highly efficient in addressing this issue. It is also quite flexible and is applicable to any type of prior information. In this paper, this method is used to back-analyze the parameters that influence the pavement life and to consider the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict the pavement life are estimated using a response surface methodology model developed from the results of linear elastic analysis. The failure criterion adopted for the analysis was based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. From the posterior statistics of the case considered, it was observed that after approximately 150 million standard axle load repetitions, the mean values of the pavement properties decrease as expected, with a significant decrease in the values of the elastic moduli of the affected layers. An analysis of the posterior statistics indicated that the parameters contributing most significantly to pavement failure were the moduli of the base and surface layers, which is consistent with the findings of other studies. After the back analysis, the mean value of the base modulus shows a significant decrease of 15.8% and that of the surface layer modulus a decrease of 3.12%. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, reliabilities of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The proposed methodology can therefore be effectively used to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455. (C) 2013 American Society of Civil Engineers.
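
A minimal sketch of the Metropolis-Hastings back analysis loop described above, assuming a hypothetical two-parameter forward model (base and surface layer moduli) standing in for the response surface model, Gaussian priors, and a synthetic observed pavement life; every name and number here is illustrative rather than taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical forward model: predicted pavement life (load repetitions)
# as a function of layer moduli -- a stand-in for the response surface
# model fitted to linear elastic analysis results.
def predicted_life(theta):
    e_base, e_surf = theta
    return 1e6 * (e_base / 300.0) ** 1.5 * (e_surf / 3000.0) ** 0.5

observed_life = 8.0e5            # assumed field observation
sigma_obs = 1.0e5                # assumed observation noise

prior_mean = np.array([300.0, 3000.0])   # MPa, illustrative
prior_sd = np.array([60.0, 450.0])

def log_posterior(theta):
    if np.any(theta <= 0):
        return -np.inf
    log_prior = -0.5 * np.sum(((theta - prior_mean) / prior_sd) ** 2)
    resid = predicted_life(theta) - observed_life
    return log_prior - 0.5 * (resid / sigma_obs) ** 2

# Metropolis-Hastings with a Gaussian jumping distribution.
def metropolis(n_samples, jump_sd):
    theta = prior_mean.copy()
    lp = log_posterior(theta)
    chain = np.empty((n_samples, 2))
    for i in range(n_samples):
        prop = theta + rng.normal(0.0, jump_sd, size=2)
        lp_prop = log_posterior(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain[i] = theta
    return chain

chain = metropolis(20000, jump_sd=np.array([10.0, 100.0]))
print("posterior means:", chain[5000:].mean(axis=0))   # discard burn-in
```

Because the assumed observed life falls short of the prior-mean prediction, the posterior means of the moduli drift downward, mirroring the qualitative behaviour the abstract reports.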

Relevance:

30.00%

Publisher:

Abstract:

In many real world prediction problems the output is a structured object like a sequence, a tree, or a graph. Such problems range from natural language processing to computational biology or computer vision and have been tackled using algorithms referred to as structured output learning algorithms. We consider the problem of structured classification. In the last few years, large margin classifiers like support vector machines (SVMs) have shown much promise for structured output learning. The related optimization problem is a convex quadratic program (QP) with a large number of constraints, which makes the problem intractable for large data sets. This paper proposes a fast sequential dual method (SDM) for structural SVMs. The method makes repeated passes over the training set and optimizes the dual variables associated with one example at a time. The use of additional heuristics makes the proposed method more efficient. We present an extensive empirical evaluation of the proposed method on several sequence learning problems. Our experiments on large data sets demonstrate that the proposed method is an order of magnitude faster than state-of-the-art methods like the cutting-plane method and the stochastic gradient descent (SGD) method. Further, SDM reaches steady-state generalization performance faster than the SGD method. The proposed SDM is thus a useful alternative for large scale structured output learning.
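
As a toy illustration of the sequential dual idea, the sketch below applies per-example dual updates to the multiclass special case of structured prediction; the update rule is a simplified stand-in for the paper's algorithm, and all names and data are ours.

```python
import numpy as np

def sdm_multiclass(X, y, C=1.0, passes=10):
    """Sequential dual updates for a linear multiclass SVM in
    Crammer-Singer form (a small special case of structured SVMs).
    Each pass visits one example at a time and shifts dual mass from
    the most violating label to the true label, keeping the iterate
    dual-feasible. Simplified; not the paper's exact update."""
    n, d = X.shape
    k = int(y.max()) + 1
    alpha = np.zeros((n, k))      # dual variables, sum_y alpha[i, y] = 0
    W = np.zeros((k, d))          # primal weights W[y] = sum_i alpha[i, y] x_i
    for _ in range(passes):
        for i in range(n):
            scores = W @ X[i]
            margins = scores - scores[y[i]] + 1.0
            margins[y[i]] = 0.0
            r = int(np.argmax(margins))          # most violating label
            if margins[r] <= 1e-12:
                continue
            step = margins[r] / (2.0 * (X[i] @ X[i]) + 1e-12)
            step = min(step, C - alpha[i, y[i]])  # respect alpha[i, y_i] <= C
            alpha[i, y[i]] += step
            alpha[i, r] -= step
            W[y[i]] += step * X[i]
            W[r] -= step * X[i]
    return W

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] + X[:, 1] > 0).astype(int) + (X[:, 2] > 0.5).astype(int)
W = sdm_multiclass(X, y)
print("training accuracy:", np.mean((X @ W.T).argmax(axis=1) == y))
```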

Relevance:

30.00%

Publisher:

Abstract:

Mass balance between metal and electrolytic solution, separated by a moving interface, in stable pit growth results in a set of governing equations which are solved for the concentration field and the interface position (pit boundary evolution). The interface experiences a jump discontinuity in metal concentration. The extended finite-element method (XFEM) handles this jump discontinuity by using a discontinuous-derivative enrichment formulation, eliminating the requirement of a front-conforming mesh and of re-meshing after each time step as in the conventional finite-element method. However, the prior interface location is required to solve the governing equations for the concentration field; a numerical technique, the level set method, is therefore used to track the interface explicitly and update it over time. The level set method is chosen as it is independent of the shape and location of the interface. Thus, a combined XFEM and level set method is developed in this paper. A numerical analysis of pitting corrosion of stainless steel 304 is presented. The proposed model is validated by comparing the numerical results with experimental results, exact solutions, and some other approximate solutions. An empirical model for pitting potential is also derived based on the finite-element results. Studies show that the pitting profile depends to a large extent on factors such as ion concentration, solution pH, and temperature. The individual and combined effects of these factors on pitting potential are therefore worth studying, as pitting potential directly influences the corrosion rate.
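
A minimal one-dimensional sketch of the level set step described above: the pit front is the zero crossing of a signed distance function advected with a given front speed. The interface speed, grid, and upwind scheme are illustrative assumptions, not the paper's coupled XFEM formulation.

```python
import numpy as np

# 1D level set update for a moving metal/electrolyte interface,
# assuming a given (hypothetical) front speed V from the mass balance;
# first-order Godunov upwinding for phi_t + V * |phi_x| = 0.
def evolve_interface(phi, V, dx, dt, steps):
    for _ in range(steps):
        dminus = (phi - np.roll(phi, 1)) / dx    # backward difference
        dplus = (np.roll(phi, -1) - phi) / dx    # forward difference
        grad = np.sqrt(np.maximum(np.maximum(dminus, 0.0) ** 2,
                                  np.minimum(dplus, 0.0) ** 2))
        phi = phi - dt * V * grad
    return phi

x = np.linspace(0.0, 1.0, 201)
phi0 = x - 0.3                   # signed distance; interface at x = 0.3
phi = evolve_interface(phi0, V=0.1, dx=x[1] - x[0], dt=0.002, steps=500)
front = x[np.argmin(np.abs(phi))]
print(f"interface moved from 0.30 to about {front:.2f}")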

Relevance:

30.00%

Publisher:

Abstract:

The Restricted Boltzmann Machine (RBM) can be used either as a classifier or as a generative model. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm that combines these two ideas.
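
A compact sketch of the AIS idea named above: log Z of a small binary RBM is estimated by annealing from a uniform base distribution through a sequence of tempered models. The RBM size, weights, and annealing schedule are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
softplus = lambda x: np.logaddexp(0.0, x)

nv, nh = 8, 6                              # small, enumerable RBM
W = rng.normal(0, 0.5, (nv, nh))
b = rng.normal(0, 0.1, nv)                 # visible bias
c = rng.normal(0, 0.1, nh)                 # hidden bias

def log_pstar(v, beta):
    """Unnormalized log prob of v with inverse temperature beta
    (hidden units analytically summed out)."""
    return beta * v @ b + softplus(beta * (v @ W + c)).sum(axis=1)

def ais_log_z(n_chains=100, n_betas=1000):
    betas = np.linspace(0.0, 1.0, n_betas)
    v = (rng.random((n_chains, nv)) < 0.5).astype(float)   # base: uniform
    log_w = np.zeros(n_chains)
    for bk, bk1 in zip(betas[:-1], betas[1:]):
        log_w += log_pstar(v, bk1) - log_pstar(v, bk)
        # one Gibbs sweep at temperature bk1
        h = (rng.random((n_chains, nh)) < sigmoid(bk1 * (v @ W + c))).astype(float)
        v = (rng.random((n_chains, nv)) < sigmoid(bk1 * (h @ W.T + b))).astype(float)
    log_z_base = (nv + nh) * np.log(2.0)   # exact Z of the zero-weight base RBM
    m = log_w.max()                        # log-mean-exp of importance weights
    return log_z_base + m + np.log(np.mean(np.exp(log_w - m)))

print("AIS estimate of log Z:", ais_log_z())
```

Given an estimate of log Z, the test log-likelihood follows as the mean of log p*(v) minus log Z over the test vectors.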

Relevance:

30.00%

Publisher:

Abstract:

In this paper, the penetration of ogive-nose projectiles into semi-infinite concrete targets is investigated by the dimensional analysis method and FEM simulation. With dimensional analysis, the main non-dimensional parameters that control the penetration depth are obtained under some reasonable hypotheses. A new semi-empirical equation is then presented based on the original work of Forrestal et al.; it has only two non-dimensional combined variables with definite physical meanings. To verify this equation, its predictions are compared with experiments over a wide range of velocities. A commercial FEM code, LS-DYNA, is then used to simulate the complex penetration process, which also shows that the novel semi-empirical equation is reasonable for determining the penetration depth in a concrete target.
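
The sketch below illustrates the dimensional-analysis workflow just described: collapsing penetration data onto two non-dimensional groups and fitting a Forrestal-type logarithmic law. The choice of groups (an impact number I and a mass ratio lam), the functional form, and all data are our illustrative assumptions, not the paper's equation.

```python
import numpy as np
from scipy.optimize import curve_fit

# Assumed two-group law: P/d = A * lam * ln(1 + B * I / lam), with
#   I   = m V^2 / (fc d^3)   (impact number)
#   lam = m / (rho d^3)      (projectile/target mass ratio)
def depth_law(X, A, B):
    I, lam = X
    return A * lam * np.log1p(B * I / lam)

rng = np.random.default_rng(2)
# synthetic "experiments": mass (kg), diameter (m), velocity (m/s)
m = rng.uniform(5, 20, 40)
d = rng.uniform(0.05, 0.1, 40)
V = rng.uniform(300, 800, 40)
fc, rho = 40e6, 2400.0            # concrete strength (Pa), density (kg/m^3)

I = m * V ** 2 / (fc * d ** 3)
lam = m / (rho * d ** 3)
observed = depth_law((I, lam), 0.6, 0.2) * (1 + rng.normal(0, 0.03, 40))

(A, B), _ = curve_fit(depth_law, (I, lam), observed, p0=(1.0, 1.0))
print(f"fitted A={A:.3f}, B={B:.3f}  (P/d = A*lam*ln(1 + B*I/lam))")
```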

Relevance:

30.00%

Publisher:

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodular property of EC2 to implement an accelerated greedy version of BROAD which leads to orders of magnitude speedup over other methods.
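
A noiseless toy sketch of the EC2-style greedy selection just described: theories predict outcomes on candidate tests, edges connect theories in different equivalence classes (here, every theory is its own class), and each round picks the test that cuts the most prior-weighted edges. Everything below is illustrative, not the BROAD implementation.

```python
import numpy as np
from itertools import combinations

def ec2_gain(outcomes, prior, alive, t):
    """Prior-weighted mass of hypothesis pairs (edges) that test t
    separates among the still-alive hypotheses."""
    gain = 0.0
    for h1, h2 in combinations(np.flatnonzero(alive), 2):
        if outcomes[t, h1] != outcomes[t, h2]:
            gain += prior[h1] * prior[h2]
    return gain

def run_ec2(outcomes, prior, true_h):
    T, H = outcomes.shape
    alive = np.ones(H, dtype=bool)
    n_tests = 0
    while alive.sum() > 1:
        gains = [ec2_gain(outcomes, prior, alive, t) for t in range(T)]
        t = int(np.argmax(gains))
        if gains[t] <= 0.0:          # no remaining test separates survivors
            break
        n_tests += 1
        # observe the true theory's response, prune inconsistent theories
        alive &= outcomes[t] == outcomes[t, true_h]
    return n_tests, np.flatnonzero(alive)

rng = np.random.default_rng(7)
outcomes = rng.integers(0, 2, size=(30, 5))   # 30 binary-choice tests, 5 theories
prior = np.full(5, 0.2)
n, winner = run_ec2(outcomes, prior, true_h=2)
print(f"theories {winner} remain after {n} adaptive tests")
```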

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely: expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA were incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments of risky choice. Loss aversion and reference dependence imply that consumers will behave in ways distinctly different from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
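
A minimal sketch of a reference-dependent discrete choice model of the kind used in the field test: a multinomial logit whose utility rewards prices below a reference point and penalizes prices above it by a loss-aversion factor λ > 1. The functional form and all numbers are our illustrative assumptions.

```python
import numpy as np

def choice_probs(prices, refs, beta=1.0, lam=2.25):
    """Logit choice probabilities with gain/loss utility relative to
    reference prices. lam > 1 means losses loom larger than gains."""
    gain = np.maximum(refs - prices, 0.0)     # paying less than expected
    loss = np.maximum(prices - refs, 0.0)     # paying more than expected
    u = -beta * prices + beta * gain - lam * beta * loss
    e = np.exp(u - u.max())                   # stable softmax
    return e / e.sum()

prices = np.array([4.0, 5.0, 6.0])
refs = np.array([5.0, 5.0, 5.0])   # item 1 is "discounted" vs its reference
print(choice_probs(prices, refs))  # demand tilts toward the discounted item
```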

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

30.00%

Publisher:

Abstract:

Nanocrystalline La0.8Pb0.2FeO3 has been prepared by the sol-gel method. XRD patterns show that the nanocrystalline La0.8Pb0.2FeO3 is a perovskite phase with an orthorhombic structure and a mean crystallite size of about 19 nm. The influence of Pb ions replacing La ions on the A-sites can be directly observed in the electrical and H2-sensing properties. The conductance of the La0.8Pb0.2FeO3-based sensor is considerably higher than that of the LaFeO3-based sensor, and Pb doping can enhance the sensitivity to H2 gas. An empirical relationship of R = K·C_H2^α with α = 0.668 was obtained.
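
A power law like R = K·C^α is conventionally fitted by linear regression in log-log space; the sketch below shows that procedure on made-up (concentration, response) pairs, not on the paper's data.

```python
import numpy as np

# Fit R = K * C^alpha from synthetic sensor readings: in log space the
# model is log R = alpha * log C + log K, a straight line.
conc = np.array([50.0, 100.0, 200.0, 500.0, 1000.0])   # ppm H2, assumed
noise = 1 + np.random.default_rng(4).normal(0, 0.02, conc.size)
resp = 0.9 * conc ** 0.668 * noise                      # assumed readings
alpha, log_k = np.polyfit(np.log(conc), np.log(resp), 1)
print(f"alpha ~= {alpha:.3f}, K ~= {np.exp(log_k):.3f}")
```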

Relevance:

30.00%

Publisher:

Abstract:

The hydrodynamic properties of free surface vortices at hydraulic intakes were investigated. Based on the axisymmetric Navier-Stokes equations and empirical assumptions, two sets of formulations for the velocity distributions and the free surface profiles are proposed and validated against measurements available in the literature. Compared with previous formulae, the modifications based on Mih's formula are found to greatly improve the agreement with the experimental data. Physical model tests were also conducted to study the intake vortex of the Xiluodu hydroelectric project in China. The proposed velocity distribution formula was applied to the solid boundary as considered by the method of images. A good agreement was again observed between the prediction and the measurements. © 2011 International Association for Hydro-Environment Engineering and Research.
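
A small sketch of the method-of-images step mentioned above: the flow induced by a vortex near a plane wall is approximated by superposing the vortex and an opposite-signed mirror image across the wall, which cancels the wall-normal velocity. A generic Rankine vortex profile is used purely as a stand-in for the paper's proposed velocity distribution; all parameters are illustrative.

```python
import numpy as np

def rankine_vtheta(r, gamma=1.0, rc=0.1):
    """Tangential speed of a Rankine vortex: solid-body core of radius
    rc, potential (1/r) decay outside."""
    r = np.asarray(r, dtype=float)
    return np.where(r < rc, gamma * r / (2 * np.pi * rc ** 2),
                    gamma / (2 * np.pi * np.maximum(r, 1e-12)))

def velocity_with_wall(x, y, x0=0.5, wall_x=0.0, gamma=1.0):
    """Velocity at (x, y) from a vortex at (x0, 0) plus its image of
    opposite circulation mirrored across the wall x = wall_x."""
    def induced(xc, sign):
        dx, dy = x - xc, y - 0.0
        r = np.hypot(dx, dy)
        v = sign * rankine_vtheta(r, gamma)
        return -v * dy / np.maximum(r, 1e-12), v * dx / np.maximum(r, 1e-12)
    u1, v1 = induced(x0, +1.0)
    u2, v2 = induced(2 * wall_x - x0, -1.0)   # image vortex
    return u1 + u2, v1 + v2

u, v = velocity_with_wall(0.0, 0.3)           # probe a point on the wall
print(f"wall-normal velocity ~ {u:.2e} (vanishes by construction)")
```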

Relevance:

30.00%

Publisher:

Abstract:

As operational impacts from buildings are reduced, embodied impacts are increasing. However, the latter are seldom calculated in the UK; when they are, they tend to be calculated after the building has been constructed, or are underestimated by considering only the initial materials stage. In 2010, the UK Government recommended that a standard methodology for calculating embodied impacts of buildings be developed for early stage design decisions. This was followed in 2011-12 by the publication of the European TC350 standards, which define the 'cradle to grave' impact of buildings and products through a process-based Life Cycle Assessment. This paper describes a new whole life embodied carbon and energy of buildings (ECEB) tool, designed as a usable, empirically based approach for early stage design decisions for UK buildings. The tool complies where possible with the TC350 standards. Initial results for a simple masonry construction dwelling are given in terms of the percentage contribution of each life cycle stage. The main difficulty in obtaining these results is found to be the lack of data, and the paper suggests that the construction and manufacturing industries now have a responsibility to develop new data in order to support this task. © 2013 The Authors. Published by Elsevier B.V. All rights reserved.
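
A small sketch of the whole-life accounting structure implied by the TC350 standards: embodied carbon summed over life cycle modules from product stage to end of life, reported as percentage contributions per stage. The module grouping follows the EN 15978 convention; the stage values are placeholders, not ECEB outputs.

```python
# Placeholder cradle-to-grave embodied carbon budget (kgCO2e per m2 of
# floor area) broken down by EN 15978-style life cycle modules.
stages = {
    "A1-A3 product": 320.0,
    "A4-A5 construction": 25.0,
    "B1-B5 use (maintenance, replacement)": 60.0,
    "C1-C4 end of life": 18.0,
}
total = sum(stages.values())
for name, val in stages.items():
    print(f"{name:40s} {val:7.1f} kgCO2e/m2  ({100 * val / total:4.1f}%)")
print(f"{'cradle-to-grave total':40s} {total:7.1f} kgCO2e/m2")
```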

Relevance:

30.00%

Publisher:

Abstract:

Atomistic pseudopotential quantum mechanical calculations are used to study transport in million-atom nanosized metal-oxide-semiconductor field-effect transistors. In the charge self-consistent calculation, the quantum mechanical eigenstates of closed systems are calculated instead of the scattering states of open systems. The question of how to use these eigenstates to simulate a nonequilibrium system, and how to calculate the electric currents, is addressed. Two methods of occupying the electron eigenstates to yield the charge density under nonequilibrium conditions are tested and compared: a partition method and a quasi-Fermi level method. Two methods are also used to evaluate the current: one uses the ballistic and tunneling current approximation, the other uses the drift-diffusion method. (C) 2009 American Institute of Physics. [doi:10.1063/1.3248262]
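
A toy sketch of the quasi-Fermi-level occupation idea: closed-system eigenstates are filled with two different Fermi levels according to how strongly each state couples to the source or drain contact. Energies, coupling fractions, and temperatures below are made-up illustrative values, not the paper's calculation.

```python
import numpy as np

kT = 0.025  # eV, room temperature

def fermi(E, mu):
    """Fermi-Dirac occupation at chemical potential mu."""
    return 1.0 / (1.0 + np.exp((E - mu) / kT))

rng = np.random.default_rng(5)
energies = np.sort(rng.uniform(-0.5, 0.5, 20))   # eigenstate energies (eV)
source_frac = rng.uniform(0, 1, 20)   # assumed fraction of each state
                                      # coupled to the source contact

mu_s, mu_d = 0.05, -0.05    # source/drain quasi-Fermi levels under bias
occ = source_frac * fermi(energies, mu_s) + \
      (1 - source_frac) * fermi(energies, mu_d)
# the nonequilibrium charge density would then follow as
# n(r) = sum_i occ[i] * |psi_i(r)|^2
print(np.round(occ, 3))
```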

Relevance:

30.00%

Publisher:

Abstract:

The advantages of the supercell model over the cluster model in employing the recursion method are discussed. A transformation that changes complex Bloch-sum seed states into real seed states in recursion calculations is presented, and band dispersion in the recursion method is extracted using the Lanczos algorithm. The method is illustrated with the band structure of GaAs in an empirical tight-binding parametrized model. In the supercell model, the treatment of boundary conditions is discussed for various seed-state choices. The method is useful for applying tight-binding techniques to systems with substantial deviations from periodicity.
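
A minimal sketch of the Lanczos recursion at the heart of the method: starting from a real seed state, it builds the tridiagonal recursion coefficients a_n, b_n for a tight-binding Hamiltonian. A 1D nearest-neighbour chain stands in for the supercell Hamiltonian here; all sizes are illustrative.

```python
import numpy as np

def lanczos(H, seed, n_steps):
    """Lanczos recursion: returns diagonal (a) and off-diagonal (b)
    coefficients of the tridiagonalized Hamiltonian seen from seed."""
    a, b = [], []
    q_prev = np.zeros_like(seed)
    q = seed / np.linalg.norm(seed)
    beta = 0.0
    for _ in range(n_steps):
        w = H @ q - beta * q_prev
        alpha = q @ w
        w -= alpha * q
        beta = np.linalg.norm(w)
        a.append(alpha); b.append(beta)
        if beta < 1e-12:              # invariant subspace exhausted
            break
        q_prev, q = q, w / beta
    return np.array(a), np.array(b)

N = 200
H = np.zeros((N, N))
idx = np.arange(N - 1)
H[idx, idx + 1] = H[idx + 1, idx] = -1.0   # hopping t = 1 on a chain
seed = np.zeros(N); seed[N // 2] = 1.0     # real, localized seed state
a, b = lanczos(H, seed, 30)
print("recursion coefficients a:", np.round(a[:5], 3))
print("recursion coefficients b:", np.round(b[:5], 3))
```

The local density of states (and, with Bloch-sum seeds, the band dispersion) then follows from the continued fraction built from a_n and b_n.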

Relevance:

30.00%

Publisher:

Abstract:

The relation between the lattice energies and the bulk moduli of binary inorganic crystals was studied, and the concept of lattice energy density is introduced. We find that the lattice energy densities have a good linear relation with the bulk moduli within the same type of crystal; the slopes of the fitted lines for the various crystal types are related to the valence and coordination number of the cations, and an empirical expression for the calculated slope is obtained. The results calculated from crystal structure agree very well with the experimental values. At the same time, by means of the dielectric theory of the chemical bond and the method for calculating the lattice energy of complex crystals, a reasonable method for estimating the bulk modulus of complex crystals was established, and its results also agree very well with the experimental values.
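
The linear relation described above amounts to fitting bulk modulus against lattice energy density within one crystal type; the sketch below shows that fit on synthetic numbers, since only the procedure, not the data, is being illustrated.

```python
import numpy as np

# Fit B = slope * u + intercept for one crystal type, where
# u = U / V_m is the lattice energy density. Numbers are synthetic.
rng = np.random.default_rng(6)
u = rng.uniform(20, 80, 12)                     # lattice energy density (GPa)
B = 2.4 * u * (1 + rng.normal(0, 0.04, 12))     # assumed linear law + scatter
slope, intercept = np.polyfit(u, B, 1)
print(f"B ~= {slope:.2f} * u + {intercept:.2f}")
```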

Relevance:

30.00%

Publisher:

Abstract:

In this paper, based on consideration of the covalent behavior of adjacent ions in crystals, a formula for calculating lattice energy is proposed, in which the concept of ionic effective valence and an empirical formula for the covalent energy are introduced.

Relevance:

30.00%

Publisher:

Abstract:

This thesis is based on two applied research projects: "Research on the fine structure and mechanical parameters of the abutment jointed rock mass of the high arch dam at the Jinping I Hydropower Station, Yalong River" and "Research on the fine structure and mechanical parameters of the columnar basalt rock mass at the Baihetan Hydropower Station, Jinsha River". A complete system for fine structure description and rock mass classification is established. The research covers six aspects: (1) methods for fine structure description of the window rock mass; (2) classification of the window rock mass by fine structure; (3) model tests of intermittent joints; (4) window rock mass strength theory; (5) numerical experiments on window rock mass; (6) multi-source fusion of mechanical parameters based on Bayes' principle. The variation of intact rock strength and joint conditions with the degree of weathering and relaxation is studied through the description of the window rock mass, and four principal parameters are selected to assess it: intact rock point load strength, degree of integrity, joint conditions, and groundwater condition. The window rock mass is classified into three types using the results of the fine structure description combined with joint development models. Scores for intact rock strength, integrity, discontinuity condition, and groundwater condition are assigned on the basis of the fine structure description. Quality evaluations of two different rock mass types, generally jointed structure and columnar jointed structure, are then carried out with this classification system; the application results show that the system is effective and applicable. Aimed at the structural features of "rock mass damaged by recessive fractures", model tests and numerical models of intermittent joints are designed. The model tests yield the shear strength under different normal stresses for intact samples, through-going joint samples, and intermittent joint samples, and the trends of shear strength with joint connectivity rate are analyzed. The entire direct shear test process is simulated numerically with PFC2D. The inter-particle mechanical parameters are adjusted so that the simulated stress-strain curves match the experimental ones for both intact and through-going samples. Using the same particle geometric parameters, numerical samples of intermittent joints with different connectivity are rebuilt; the rock bridges and joints are assigned fixed particle contact parameters, and a series of direct shear tests is conducted, yielding the failure process and mechanical parameters at both micro and macro scales. Synthesizing the numerical and physical test results and analyzing the evolution of stress and strain on the intermittent joint plane shows that the concentration of compressive stress on the rock bridges increases the shear strength. The failure mechanics of intermittently jointed rock under direct shear is discussed, and the shear process is divided into five phases: the elastic phase, fracture initiation phase, peak phase, post-peak phase, and residual phase.

In developing the strength theory, the shear strength mechanisms of joints and rock bridges are analyzed separately. To apply the derived formulations conveniently in real projects, a relationship between these formulations and the Mohr-Coulomb hypothesis is built up. Numerical simulation methods, i.e. the distinct element method (UDEC) based on in-situ geological mapping, are developed and introduced, and working methods for determining the mechanical parameters of intact rock and joints in the numerical model are studied. The operation process and analysis results are demonstrated in detail for the parameter studies based on numerical tests at the Jinping I and Baihetan Hydropower Stations, and their advantages and disadvantages are compared. The numerical simulation results show that the shear strength parameters can be obtained by varying the loading conditions. The multi-source rock mass mechanical parameters, namely test values, empirical values, and theoretical values, can be fused using Bayes' theorem; the value ranges and confidence probabilities for the different rock mass grades are then derived, and these data support reliability design.
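
A minimal sketch of Bayes-based fusion of multi-source estimates of a single rock mass parameter, assuming each source (test, empirical, theoretical) provides a normal estimate with its own uncertainty, so the fused posterior is the precision-weighted combination. All numbers are illustrative, not values from the projects.

```python
import numpy as np

# Each source: (mean in MPa, standard deviation in MPa) -- assumed values.
sources = {
    "test value": (1.20, 0.10),
    "empirical value": (1.05, 0.25),
    "theoretical value": (1.35, 0.20),
}
means = np.array([m for m, _ in sources.values()])
prec = np.array([1.0 / s ** 2 for _, s in sources.values()])  # precisions

# Conjugate normal fusion: posterior precision is the sum of precisions,
# posterior mean is the precision-weighted mean.
post_var = 1.0 / prec.sum()
post_mean = post_var * (prec * means).sum()
lo, hi = post_mean - 1.96 * post_var ** 0.5, post_mean + 1.96 * post_var ** 0.5
print(f"fused estimate: {post_mean:.3f} MPa, 95% interval [{lo:.3f}, {hi:.3f}]")
```

The fused interval is what would feed a reliability design: it is tighter than any single source because the sources' precisions add.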