822 results for Theoretical models
Abstract:
With the advances in technology, seismological theory, and data acquisition, a number of high-resolution seismic tomography models have been published. However, discrepancies between tomography models often arise from different theoretical treatments of seismic wave propagation, different inversion strategies, and different data sets. Using a fixed velocity-to-density scaling and a fixed radial viscosity profile, we compute global mantle flow models associated with the different tomography models and test their impact on explaining surface geophysical observations (geoid, dynamic topography, stress, and strain rates). We use the joint modeling of lithosphere and mantle dynamics approach of Ghosh and Holt (2012) to compute the full lithosphere stresses, except that we use HC for the mantle circulation model, which accounts for the primary flow-coupling features associated with density-driven mantle flow. Our results show that the seismic tomography models S40RTS and SAW642AN provide a better match with surface observables on a global scale than the other models tested. Both of these tomography models share important similarities, including upwellings located beneath the Pacific, Eastern Africa, Iceland, and the mid-ocean ridges in the Atlantic and Indian Oceans, and downwelling flow located mainly beneath the Andes, the Middle East, and central and Southeast Asia.
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data for complex models where the likelihood function is intractable. It involves using simulation from the model to approximate the likelihood, with this approximate likelihood then being used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to those of consistency and asymptotic normality for standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural method for implementing our likelihood-based ABC procedures.
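As a hedged illustration of the likelihood-based ABC idea described above (not the authors' estimator), the sketch below approximates the likelihood of a scalar parameter by kernel-smoothing simulated summary statistics and then maximizes that approximation over a grid. The toy normal model, the choice of summary statistic, the bandwidth `eps`, and the simulation budget are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n, rng):
    """Simulate data from an assumed toy model: N(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def abc_loglik(theta, s_obs, n, m=500, eps=0.2, rng=rng):
    """Kernel-smoothed ABC approximation to the log-likelihood of the
    observed summary statistic (here, the sample mean)."""
    sims = np.array([simulate(theta, n, rng).mean() for _ in range(m)])
    # Gaussian kernel density of the simulated summaries evaluated at s_obs
    dens = np.exp(-0.5 * ((sims - s_obs) / eps) ** 2).mean() / (eps * np.sqrt(2 * np.pi))
    return np.log(dens + 1e-300)

# Observed data and its summary statistic
data = simulate(1.3, n=100, rng=rng)
s_obs = data.mean()

# Maximize the approximate likelihood over a parameter grid
grid = np.linspace(0.0, 3.0, 61)
loglik = [abc_loglik(t, s_obs, n=100) for t in grid]
print("approximate MLE:", grid[int(np.argmax(loglik))])
```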
Abstract:
We introduce a conceptual model for the in-plane physics of an earthquake fault. The model employs cellular automaton techniques to simulate tectonic loading, earthquake rupture, and strain redistribution. The impact of a hypothetical crustal elastodynamic Green's function is approximated by a long-range strain redistribution law with an r^(-p) dependence. We investigate the influence of the effective elastodynamic interaction range upon the dynamical behaviour of the model by conducting experiments with different values of the exponent p. The results indicate that this model has two distinct, stable modes of behaviour. The first mode produces a characteristic earthquake distribution in which moderate to large events are preceded by an interval of time during which the rate of energy release accelerates. A correlation function analysis reveals that accelerating sequences are associated with a systematic, global evolution of strain energy correlations within the system. The second stable mode produces Gutenberg-Richter statistics, with near-linear energy release and no significant global correlation evolution. A model with effectively short-range interactions preferentially displays Gutenberg-Richter behaviour. However, models with long-range interactions appear to switch between the characteristic and GR modes. As the range of elastodynamic interactions is increased, characteristic behaviour begins to dominate GR behaviour. These models demonstrate that evolution of strain energy correlations may occur within systems with a fixed elastodynamic interaction range. If similar mode-switching dynamical behaviour occurs within earthquake faults, then intermediate-term forecasting of large earthquakes may be feasible for some earthquakes but not for others, in alignment with certain empirical seismological observations. Further numerical investigation of dynamical models of this type may lead to advances in earthquake forecasting research and theoretical seismology.
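The abstract does not spell out the automaton's update rules, but the central ingredient it names, long-range strain redistribution with an r^(-p) weighting, can be sketched as follows. The lattice size, failure threshold, loading rate, and dissipation fraction are illustrative assumptions, not values from the paper.

```python
import numpy as np

def redistribute(strain, i, j, p, dissipation=0.1):
    """Fail cell (i, j): release its strain, dissipate a fraction of it
    (radiated energy), and spread the rest over the lattice with weights
    proportional to r**(-p)."""
    ii, jj = np.indices(strain.shape)
    r = np.hypot(ii - i, jj - j)
    r[i, j] = np.inf                       # no self-loading
    w = r ** (-p)
    released = strain[i, j]
    strain[i, j] = 0.0
    strain += (1.0 - dissipation) * released * w / w.sum()
    return released

# Toy run: slow uniform loading punctuated by threshold-triggered cascades
rng = np.random.default_rng(1)
n, threshold, p = 32, 1.0, 3.0
strain = rng.uniform(0.0, 0.5, size=(n, n))
event_sizes = []
for step in range(2000):
    strain += 1e-3                         # tectonic loading
    energy = 0.0
    while strain.max() >= threshold:       # one cascade = one earthquake
        i, j = np.unravel_index(np.argmax(strain), strain.shape)
        energy += redistribute(strain, i, j, p)
    if energy > 0.0:
        event_sizes.append(energy)
print(f"{len(event_sizes)} events, largest energy {max(event_sizes):.2f}")
```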
Abstract:
The present paper describes a systematic study of argon plasmas in a bell-jar inductively coupled plasma (ICP) source over the pressure range 5-20 mtorr and power input range 0.2-0.5 kW. Experimental measurements as well as results of numerical simulations are presented. The models used in the study include the well-known global balance model (the global model) as well as a detailed two-dimensional (2-D) fluid model of the system. The global model is able to provide reasonably accurate values for the global electron temperature and plasma density. The 2-D model provides spatial distributions of various plasma parameters that make it possible to compare with data measured in the experiments. The experimental measurements were obtained using a tuned Langmuir double-probe technique to reduce the RF interference and obtain the current-voltage (I-V) characteristics of the probe. Time-averaged electron temperature and plasma density were measured for various combinations of pressure and applied RF power. The predictions of the 2-D model were found to be in good qualitative agreement with measured data. It was found that the electron temperature distribution T_e was more or less uniform in the chamber. It was also seen that the electron temperature depends primarily on pressure and is almost independent of the power input, except in the very low-pressure regime. The plasma density increases almost linearly with the power input.
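For context on the "global model" referred to above, here is a minimal particle- and power-balance sketch in the spirit of standard low-pressure global models: a particle balance fixes the electron temperature and a power balance fixes the plasma density. The chamber dimensions, gas temperature, ionization rate-coefficient fit, edge-loss factor, and energy-loss term are illustrative placeholders rather than the values used in this study.

```python
import numpy as np
from scipy.optimize import brentq

e, M = 1.602e-19, 6.63e-26          # electron charge [C], argon ion mass [kg]

# Illustrative chamber and discharge parameters (not those of the paper)
R, L = 0.15, 0.20                   # chamber radius and height [m]
p_mtorr, Tg = 10.0, 300.0           # pressure [mtorr], gas temperature [K]
P_abs = 300.0                       # absorbed RF power [W]
ng = (p_mtorr * 0.133) / (1.38e-23 * Tg)   # neutral gas density [m^-3]

def k_iz(Te):
    """Illustrative Arrhenius-type fit for the argon ionization rate
    coefficient [m^3/s]; treat the constants as placeholders."""
    return 2.3e-14 * Te**0.6 * np.exp(-17.4 / Te)

def u_B(Te):
    """Bohm velocity [m/s] for electron temperature Te [eV]."""
    return np.sqrt(e * Te / M)

V = np.pi * R**2 * L
A_eff = 0.4 * (2 * np.pi * R**2 + 2 * np.pi * R * L)   # crude edge-loss factor

# Particle balance: volume ionization = ion loss to the walls -> Te
Te = brentq(lambda T: k_iz(T) * ng * V - u_B(T) * A_eff, 0.5, 10.0)

# Power balance: absorbed power = energy cost per electron-ion pair lost -> ne
E_T = 50.0 + 7.2 * Te               # illustrative total energy cost per pair [V]
ne = P_abs / (e * u_B(Te) * A_eff * E_T)
print(f"Te ~ {Te:.2f} eV, ne ~ {ne:.2e} m^-3")
```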
Abstract:
The term "sample-specific" is suggested to describe the behavior of disordered media close to macroscopic failure. It is pointed out that the transition from universal scaling to sample-specific behavior may be a common phenomenon in failure models of disordered media. The dynamical evolution plays an important role in this transition.
Abstract:
Optimal management in a multi-cohort Beverton-Holt model with any number of age classes and imperfect selectivity is equivalent to finding the optimal fish lifespan through suitably chosen fallow cycles. The optimal policy differs in two main ways from the optimal lifespan rule with perfect selectivity. First, weight gain is valued in terms of the whole population structure. Second, the cost of waiting is the interest rate adjusted for the increase in the pulse length. This point is especially relevant for assessing the role of selectivity. Imperfect selectivity reduces both the optimal lifespan and the optimal pulse length. We illustrate our theoretical findings with a numerical example. Results obtained using global numerical methods select the optimal pulse length predicted by the optimal lifespan rule.
Abstract:
In this paper, we studied the role of the vertical component of the surface tension of a water droplet in the deformation of membranes and microcantilevers (MCLs) widely used in lab-on-a-chip devices and micro- and nano-electromechanical systems (MEMS/NEMS). First, a membrane made of a rubber-like material, poly(dimethylsiloxane) (PDMS), was considered. The deformation was investigated using the Mooney-Rivlin (MR) model and the linear elastic constitutive relation, respectively. By comparing the numerical solutions of the two models, we found that the simple linear elastic model is accurate enough to describe this kind of problem, which is quite convenient for engineering applications. Furthermore, based on small-deflection beam theory, the effect of a liquid droplet on the deflection of an MCL was also studied. The free-end deflection of the MCL was investigated for several cases: a cylindrical droplet, a spherical droplet centered on the MCL, and a spherical droplet arbitrarily positioned on the MCL. Numerical simulations demonstrated that the deflection cannot be neglected, and showed good agreement with our theoretical analyses. (C) 2008 Elsevier Inc. All rights reserved.
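For the cantilever part of the problem, a hedged small-deflection estimate can be written down directly: idealizing the droplet as a transverse point force F = 2 γ w sin θ (the vertical pull of the two contact lines of a cylindrical droplet spanning the cantilever width w) applied at a distance a from the clamp, Euler-Bernoulli beam theory gives a tip deflection δ = F a²(3L - a)/(6EI). The sketch below evaluates this with illustrative dimensions and material constants that are not taken from the paper.

```python
# Hedged sketch: free-end deflection of a microcantilever loaded by the
# vertical component of a droplet's surface tension, idealized as a point
# force at distance a from the clamp (Euler-Bernoulli small-deflection theory).
import numpy as np

# Illustrative geometry and material values (not taken from the paper)
L, w, t = 500e-6, 100e-6, 5e-6        # cantilever length, width, thickness [m]
E = 170e9                             # Young's modulus, e.g. silicon [Pa]
gamma, theta = 0.072, np.radians(60)  # water surface tension [N/m], contact angle
a = 300e-6                            # load position measured from the clamp [m]

I = w * t**3 / 12                     # second moment of area [m^4]
F = 2 * gamma * w * np.sin(theta)     # vertical pull of the two contact lines [N]

# Tip deflection for a point load applied at x = a on a cantilever of length L
delta_tip = F * a**2 * (3 * L - a) / (6 * E * I)
print(f"load = {F*1e6:.2f} uN, tip deflection = {delta_tip*1e6:.2f} um")
```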
Abstract:
The learning of probability distributions from data is a ubiquitous problem in the fields of Statistics and Artificial Intelligence. During the last decades, several learning algorithms have been proposed to learn probability distributions based on decomposable models, due to their advantageous theoretical properties. Some of these algorithms can be used to search for a maximum likelihood decomposable model with a given maximum clique size, k, which controls the complexity of the model. Unfortunately, the problem of learning a maximum likelihood decomposable model with a given maximum clique size is NP-hard for k > 2. In this work, we propose a family of algorithms which approximates this problem with a computational complexity of O(k · n^2 log n) in the worst case, where n is the number of random variables involved. The structures of the decomposable models that solve the maximum likelihood problem are called maximal k-order decomposable graphs. Our proposals, called fractal trees, construct a sequence of maximal i-order decomposable graphs, for i = 2, ..., k, in k − 1 steps. At each step, the algorithms follow a divide-and-conquer strategy based on the particular features of these structures. Additionally, we propose a prune-and-graft procedure which transforms a maximal k-order decomposable graph into another one with higher likelihood. We have implemented two particular fractal tree algorithms, called parallel fractal tree and sequential fractal tree, which can be considered a natural extension of Chow and Liu's algorithm from k = 2 to arbitrary values of k. Both algorithms have been compared against other efficient approaches on artificial and real domains, and they show competitive behavior on the maximum likelihood problem. Due to their low computational complexity, they are especially recommended for high-dimensional domains.
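The fractal tree algorithms themselves are not given in the abstract, but their k = 2 base case is Chow and Liu's algorithm. The sketch below shows that base case (a maximum-weight spanning tree on pairwise mutual information), applied to an assumed small binary data set; it is not the paper's implementation.

```python
import numpy as np
from itertools import combinations

def mutual_information(x, y):
    """Empirical mutual information between two discrete variables."""
    joint = np.zeros((x.max() + 1, y.max() + 1))
    np.add.at(joint, (x, y), 1)
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def chow_liu_tree(data):
    """Maximum likelihood tree (the k = 2 decomposable model) over the columns
    of `data`: maximum-weight spanning tree on pairwise mutual information,
    built with a small Prim-style greedy loop."""
    n = data.shape[1]
    W = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        W[i, j] = W[j, i] = mutual_information(data[:, i], data[:, j])
    in_tree, edges = {0}, []
    while len(in_tree) < n:
        i, j = max(((a, b) for a in in_tree for b in range(n) if b not in in_tree),
                   key=lambda e: W[e])
        edges.append((i, j))
        in_tree.add(j)
    return edges

# Assumed example: a small binary data matrix with a chain-like dependence
rng = np.random.default_rng(0)
x0 = rng.integers(0, 2, size=500)
x1 = (x0 ^ (rng.random(500) < 0.1)).astype(int)
x2 = (x1 ^ (rng.random(500) < 0.1)).astype(int)
data = np.column_stack([x0, x1, x2, rng.integers(0, 2, size=500)])
print(chow_liu_tree(data))   # edges of the learned tree, e.g. [(0, 1), (1, 2), ...]
```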
Abstract:
In the preparation of small organic paramagnets, these structures may conceptually be divided into spin-containing units (SCs) and ferromagnetic coupling units (FCs). The synthesis and direct observation of a series of hydrocarbon tetraradicals designed to test the ferromagnetic coupling ability of m-phenylene, 1,3-cyclobutane, 1,3- cyclopentane, and 2,4-adamantane (a chair 1,3-cyclohexane) using Berson TMMs and cyclobutanediyls as SCs are described. While 1,3-cyclobutane and m-phenylene are good ferromagnetic coupling units under these conditions, the ferromagnetic coupling ability of 1,3-cyclopentane is poor, and 1,3-cyclohexane is apparently an antiferromagnetic coupling unit. In addition, this is the first report of ferromagnetic coupling between the spins of localized biradical SCs.
The poor coupling of 1,3-cyclopentane has enabled a study of the variable-temperature behavior of a 1,3-cyclopentane FC-based tetraradical in its triplet state. By fitting the observed data to the usual Boltzmann statistics, we have been able to determine the separation of the ground quintet and excited triplet states. From these data, we have inferred the singlet-triplet gap in 1,3-cyclopentanediyl to be 900 cal/mol, in remarkable agreement with theoretical predictions of this number.
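The gap determination described above rests on ordinary Boltzmann statistics for a quintet ground state (degeneracy 5) with a thermally populated excited triplet (degeneracy 3) at energy Δ. A hedged sketch of such a fit is shown below; the intensity model with its Curie 1/T prefactor and the synthetic "measured" points are assumptions standing in for the thesis data.

```python
import numpy as np
from scipy.optimize import curve_fit

R = 1.987  # gas constant [cal/(mol K)]

def triplet_intensity(T, C, gap):
    """EPR intensity of a thermally populated excited triplet (degeneracy 3)
    above a quintet ground state (degeneracy 5) separated by `gap` [cal/mol],
    with a Curie 1/T prefactor."""
    boltz = 3 * np.exp(-gap / (R * T))
    return (C / T) * boltz / (5 + boltz)

# Synthetic "measured" intensities (illustrative, not the thesis data)
T = np.linspace(10, 100, 15)
rng = np.random.default_rng(2)
I_obs = triplet_intensity(T, C=1.0, gap=400.0) * (1 + 0.03 * rng.standard_normal(T.size))

(C_fit, gap_fit), _ = curve_fit(triplet_intensity, T, I_obs, p0=(1.0, 300.0))
print(f"fitted quintet-triplet gap ~ {gap_fit:.0f} cal/mol")
```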
The ability to simulate EPR spectra has been crucial to the assignments made here. A powder EPR simulation package is described that uses the Zeeman and dipolar terms to calculate powder EPR spectra for triplet and quintet states.
Methods for characterizing paramagnetic samples by SQUID magnetometry have been developed, including robust routines for data fitting and analysis. A precursor to a potentially magnetic polymer was prepared by ring-opening metathesis polymerization (ROMP), and doped samples of this polymer were studied by magnetometry. While the present results are not positive, calculations have suggested modifications in this structure which should lead to the desired behavior.
Source listings for all computer programs are given in the appendix.
Abstract:
This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
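A much-simplified sketch of the interacting Gaussian process idea follows: each agent's future path gets an independent GP posterior, joint samples are reweighted by an interaction potential that penalizes close approaches, and the robot's plan is taken as a statistic of the reweighted density. The kernel, the potential form, the two-agent crossing scenario, and all numbers are illustrative assumptions, not the thesis implementation.

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf(a, b, ell=2.0, sf=1.0):
    """Squared-exponential kernel over times a and b."""
    return sf**2 * np.exp(-0.5 * ((a[:, None] - b[None, :]) / ell) ** 2)

def gp_posterior_samples(t_obs, y_obs, t_pred, n_samples, noise=0.01):
    """Samples from a GP posterior over one coordinate of one agent's path."""
    K = rbf(t_obs, t_obs) + noise * np.eye(len(t_obs))
    Ks = rbf(t_obs, t_pred)
    Kss = rbf(t_pred, t_pred)
    Kinv = np.linalg.inv(K)
    mu = Ks.T @ Kinv @ y_obs
    cov = Kss - Ks.T @ Kinv @ Ks + 1e-6 * np.eye(len(t_pred))
    return rng.multivariate_normal(mu, cov, size=n_samples)

# Times: observed past (0-2 s) plus a goal waypoint at t = 10 s; predict 2-10 s
t_obs = np.array([0.0, 1.0, 2.0, 10.0])
t_pred = np.linspace(2.0, 10.0, 20)
n_samples = 300

# Robot heads right along y = 0; pedestrian crosses downward along x = 5 (illustrative)
robot_obs = {"x": np.array([0.0, 0.5, 1.0, 9.0]), "y": np.zeros(4)}
ped_obs = {"x": np.full(4, 5.0), "y": np.array([4.0, 3.5, 3.0, -4.0])}

samples = {}
for name, obs in [("robot", robot_obs), ("ped", ped_obs)]:
    xs = gp_posterior_samples(t_obs, obs["x"], t_pred, n_samples)
    ys = gp_posterior_samples(t_obs, obs["y"], t_pred, n_samples)
    samples[name] = np.stack([xs, ys], axis=-1)        # (n_samples, T, 2)

# Interaction potential: joint samples in which the agents come close are
# down-weighted, encoding cooperative collision avoidance.
d = np.linalg.norm(samples["robot"] - samples["ped"], axis=-1)   # (n_samples, T)
h, alpha = 1.0, 0.95
weights = np.prod(1.0 - alpha * np.exp(-0.5 * (d / h) ** 2), axis=1)
weights /= weights.sum()

# The robot's plan is a statistic (here the weighted mean) of the reweighted density
plan = (weights[:, None, None] * samples["robot"]).sum(axis=0)
print(plan[:3])
```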
Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours, and in the process carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably with human teleoperators at crowd densities nearing 1 person/m², while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m². For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning from demonstration algorithms, and human robot interaction models in general.
Abstract:
This thesis summarizes the application of conventional and modern electron paramagnetic resonance (EPR) techniques to establish proximity relationships between paramagnetic metal centers in metalloproteins, and between metal centers and magnetic ligand nuclei, in two important and timely membrane proteins: succinate:ubiquinone oxidoreductase (SQR) from Paracoccus denitrificans and particulate methane monooxygenase (pMMO) from Methylococcus capsulatus. Such proximity relationships are thought to be critical to the biological function and the associated biochemistry mediated by the metal centers in these proteins. A mechanistic understanding of biological function relies heavily on structure-function relationships and on knowledge of how the molecular structure and electronic properties of the metal centers influence the reactivity in metalloenzymes. EPR spectroscopy has proven to be one of the most powerful techniques for obtaining information about interactions between metal centers as well as for defining ligand structures. SQR is an electron transport enzyme wherein the substrates and the organic and metallic cofactors are held relatively far apart. Here, the proximity relationships of the metallic cofactors were studied through their weak spin-spin interactions by means of EPR power saturation and electron spin-lattice relaxation (T_1) measurements, with the enzyme poised at designated reduction levels. Analysis of the electron T_1 measurements for the S-3 center when the b-heme is paramagnetic led to a detailed analysis of the dipolar interactions and a distance determination between the two interacting metal centers. Studies of the ligand environment of the metal centers by electron spin echo envelope modulation (ESEEM) spectroscopy resulted in the identification of peptide nitrogens as coupled nuclei in the environment of the S-1 and S-3 centers.
Finally, an EPR model was developed to describe the ferromagnetically coupled trinuclear copper clusters in pMMO when the enzyme is oxidized. The Cu(II) ions in these clusters appear to be strongly exchange coupled, and the EPR is consistent with equilateral triangular arrangements of type 2 copper ions. These results offer the first glimpse of the magneto-structural correlations for a trinuclear copper cluster of this type, which, until the work on pMMO, has had no precedent in the metalloprotein literature. Such trinuclear copper clusters are rare even in synthetic models.
Abstract:
Consumption of addictive substances poses a challenge to economic models of rational, forward-looking agents. This dissertation presents a theoretical and empirical examination of consumption of addictive goods.
The theoretical model draws on evidence from psychology and neurobiology to improve on the standard assumptions used in intertemporal consumption studies. I model agents who may misperceive the severity of the future consequences from consuming addictive substances and allow for an agent's environment to shape her preferences in a systematic way suggested by numerous studies that have found craving to be induced by the presence of environmental cues associated with past substance use. The behavior of agents in this behavioral model of addiction can mimic the pattern of quitting and relapsing that is prevalent among addictive substance users.
Chapter 3 presents an empirical analysis of the Becker and Murphy (1988) model of rational addiction using data on grocery store sales of cigarettes. This essay empirically tests the model's predictions concerning consumption responses to future and past price changes as well as the prediction that the response to an anticipated price change differs from the response to an unanticipated price change. In addition, I consider the consumption effects of three institutional changes that occur during the time period 1996 through 1999.
Abstract:
The Earth's largest geoid anomalies occur at the lowest spherical harmonic degrees, or longest wavelengths, and are primarily the result of mantle convection. Thermal density contrasts due to convection are partially compensated by boundary deformations due to viscous flow whose effects must be included in order to obtain a dynamically consistent model for the geoid. These deformations occur rapidly with respect to the timescale for convection, and we have analytically calculated geoid response kernels for steady-state, viscous, incompressible, self-gravitating, layered Earth models which include the deformation of boundaries due to internal loads. Both the sign and magnitude of geoid anomalies depend strongly upon the viscosity structure of the mantle as well as the possible presence of chemical layering.
Correlations of various global geophysical data sets with the observed geoid can be used to construct theoretical geoid models which constrain the dynamics of mantle convection. Surface features such as topography and plate velocities are not obviously related to the low-degree geoid, with the exception of subduction zones which are characterized by geoid highs (degrees 4-9). Recent models for seismic heterogeneity in the mantle provide additional constraints, and much of the low-degree (2-3) geoid can be attributed to seismically inferred density anomalies in the lower mantle. The Earth's largest geoid highs are underlain by low density material in the lower mantle, thus requiring compensating deformations of the Earth's surface. A dynamical model for whole mantle convection with a low viscosity upper mantle can explain these observations and successfully predicts more than 80% of the observed geoid variance.
Temperature variations associated with density anomalies in the mantle cause lateral viscosity variations whose effects are not included in the analytical models. However, perturbation theory and numerical tests show that broad-scale lateral viscosity variations are much less important than radial variations; in this respect, geoid models, which depend upon steady-state surface deformations, may provide more reliable constraints on mantle structure than inferences from transient phenomena such as postglacial rebound. Stronger, smaller-scale viscosity variations associated with mantle plumes and subducting slabs may be more important. On the basis of numerical modelling of low viscosity plumes, we conclude that the global association of geoid highs (after slab effects are removed) with hotspots and, perhaps, mantle plumes, is the result of hot, upwelling material in the lower mantle; this conclusion does not depend strongly upon plume rheology. The global distribution of hotspots and the dominant, low-degree geoid highs may correspond to a dominant mode of convection stabilized by the ancient Pangean continental assemblage.
Abstract:
When estimating parameters that constitute a discrete probability distribution {p_j}, it is difficult to determine how constraints should be imposed to guarantee that the estimated parameters {p̂_j} constitute a probability distribution (i.e., p̂_j ≥ 0, Σ p̂_j = 1). For age distributions estimated from mixtures of length-at-age distributions, the EM (expectation-maximization) algorithm (Hasselblad, 1966; Hoenig and Heisey, 1987; Kimura and Chikuni, 1987), restricted least squares (Clark, 1981), and weak quasisolutions (Troynikov, 2004) have all been used. Each of these methods appears to guarantee that the estimated distribution will be a true probability distribution, with all categories greater than or equal to zero and with individual probabilities that sum to one. In addition, all these methods appear to provide a theoretical basis for solutions that will be either maximum-likelihood estimates or at least convergent to a probability distribution.
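As one concrete instance of the methods listed, a hedged sketch of the EM update for age-composition proportions is given below, assuming known normal length-at-age distributions; the E-step/M-step iteration keeps every estimated proportion non-negative and summing to one by construction. The age means, standard deviations, and synthetic catch data are illustrative.

```python
import numpy as np
from scipy.stats import norm

def em_age_composition(lengths, means, sds, n_iter=200):
    """EM estimate of age proportions p_j from observed lengths, assuming
    known normal length-at-age distributions. The updates keep every
    p_j >= 0 and sum(p) = 1 automatically."""
    n_ages = len(means)
    p = np.full(n_ages, 1.0 / n_ages)                 # uniform starting point
    # f[i, j] = density of length i under age class j
    f = norm.pdf(lengths[:, None], loc=means, scale=sds)
    for _ in range(n_iter):
        w = p * f                                     # E-step: membership weights
        w /= w.sum(axis=1, keepdims=True)
        p = w.mean(axis=0)                            # M-step: new proportions
    return p

# Synthetic example with three age classes (all values illustrative)
rng = np.random.default_rng(4)
true_p = [0.5, 0.3, 0.2]
means, sds = np.array([20.0, 30.0, 38.0]), np.array([3.0, 3.5, 4.0])
ages = rng.choice(3, size=2000, p=true_p)
lengths = rng.normal(means[ages], sds[ages])
print(em_age_composition(lengths, means, sds))        # close to [0.5, 0.3, 0.2]
```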
Abstract:
Based on experimental data from scanning tunneling microscopy (STM), models of three-stranded braid-like DNAs composed of three kinds of base triplets, AAA, TAT, and GCA, were constructed. We investigated the braid-like DNAs and their comparative triplex DNAs using a molecular mechanics method. The three strands of the braid-like DNAs are proven equivalent, while those of the triplex DNAs are not. The conformational energies of the braid-like DNAs were found to be higher than those of the triplex DNAs. Each period in one strand of braid-like DNA has 18 nucleotides, half of which are right-handed, while the other half are left-handed. Additional discussion concerning sugar puckering modes and the H-bonds is also included. (C) 1999 Elsevier Science B.V. All rights reserved.