947 results for "Mean field models"
Abstract:
The stress release model, a stochastic version of the elastic rebound theory, is applied to the large events from four synthetic earthquake catalogs generated by models with various levels of disorder in the distribution of fault zone strength (Ben-Zion, 1996). They include models with uniform properties (U), a Parkfield-type asperity (A), fractal brittle properties (F), and multi-size-scale heterogeneities (M). The results show that the degree of regularity or predictability in the assumed fault properties, based on both the Akaike information criterion and simulations, follows the order U, F, A, and M, in good agreement with the order obtained by pattern recognition techniques applied to the full set of synthetic data. Data simulated from the best-fitting stress release models reproduce, both visually and in distributional terms, the main features of the original catalogs. The differences in character and quality of prediction between the four cases are shown to depend on two main aspects: the parameter controlling the sensitivity to departures from the mean stress level, and the frequency-magnitude distribution, which differs substantially between the four cases. In particular, it is shown that the predictability of the data is strongly affected by the form of the frequency-magnitude distribution, being greatly reduced if a pure Gutenberg-Richter form is assumed to hold out to high magnitudes.
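For reference, the stress release model referred to above is usually specified through a conditional intensity of the following form (a minimal sketch of the standard Vere-Jones formulation; the exact parameterization fitted to the synthetic catalogs may differ):

\[
\lambda(t) = \exp\{\mu + \nu X(t)\}, \qquad X(t) = X(0) + \rho t - \sum_{t_i < t} S_i ,
\]

where \(X(t)\) is the regional stress level, \(\rho\) the tectonic loading rate, \(S_i\) the stress released by event \(i\) (commonly taken proportional to \(10^{0.75 M_i}\)), and \(\nu\) the sensitivity to departures from the mean stress level mentioned in the abstract.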
Abstract:
A new set of equations for the energies of the mean magnetic field and the mean plasma velocity is derived, taking dynamo effects into account; with these equations the anomalous phenomenon T(i) > T(e) observed in some reversed field pinches (RFPs) is successfully explained.
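As background, mean-field dynamo theory describes the evolution of the mean magnetic field through the averaged induction equation (a standard textbook form given for orientation; it is not the set of energy equations derived in the paper):

\[
\frac{\partial \langle \mathbf{B} \rangle}{\partial t} = \nabla \times \Big( \langle \mathbf{v} \rangle \times \langle \mathbf{B} \rangle + \alpha \langle \mathbf{B} \rangle - (\eta + \beta)\, \nabla \times \langle \mathbf{B} \rangle \Big),
\]

where \(\alpha\) and \(\beta\) are the turbulent dynamo coefficients and \(\eta\) is the resistive diffusivity; equations for the energies are obtained by taking moments of equations of this type together with the mean momentum equation.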
Abstract:
With the improvement in the resolution of observations, more and more radio galaxies with radio jets have been identified, and many fine structures in the radio jets have been resolved. In the present paper, two-dimensional magnetohydrodynamical theory is applied to the analysis of the magnetic field configurations in the radio jets. The two-dimensional results are not only theoretically consistent but also explain the fine structures seen in observations. One of the theoretical models is discussed in detail and is in good agreement with the observed radio jets of NGC 6251. The results also show that if the magnetic field of the central object is stronger, the magnetic fields in the radio jets are mainly longitudinal and are associated with the double sources of QSOs; if the field of the central object is weaker, the fields in the radio jets are mainly transverse and are associated with the double sources of radio galaxies. The magnetic field has a great influence on the morphology and the dynamical processes.
Abstract:
This paper discusses two models of electron acceleration by plasma turbulence: (1) a spatially uniform region of plasma turbulence is assumed to exist in space, and the acceleration of an electron beam with a given initial distribution is studied as the beam passes through this turbulent region; (2) a plasma with a given initial distribution and uniform spatial distribution fills a closed region, a certain type of plasma wave suddenly propagates into this region, and the subsequent acceleration of the electrons in the region is examined. In both models some electron loss mechanism may be present. Assuming a power-law turbulence spectrum, we give the general form of the diffusion coefficients for different types of turbulence. Using relatively simple mathematical methods, we solve the one-dimensional quasi-linear kinetic equation including the loss process, obtain analytic solutions for the distribution function for given initial distributions, and give expressions for the time dependence of the mean energy. In addition, for a particular turbulence spectral index, we solve for the distribution function when a parallel electric field and turbulence are present simultaneously. Finally, the results are analyzed numerically and discussed.
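A one-dimensional quasi-linear kinetic equation with a loss term of the kind solved above is commonly written as follows (a generic form; the power-law diffusion coefficient reflects the assumed power-law turbulence spectrum, but the specific exponent and notation are illustrative):

\[
\frac{\partial f}{\partial t} = \frac{\partial}{\partial p}\!\left[ D(p)\, \frac{\partial f}{\partial p} \right] - \frac{f}{T_{\mathrm{esc}}}, \qquad D(p) = D_0\, p^{q},
\]

where \(f(p,t)\) is the electron distribution function, \(D(p)\) the quasi-linear momentum-diffusion coefficient determined by the turbulence spectrum, and \(T_{\mathrm{esc}}\) the loss (escape) time.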
Abstract:
Based on the three-dimensional elastic inclusion model proposed by Dobrovolskii, we developed a rheological inclusion model to study earthquake preparation processes. By using the correspondence principle of rheological mechanics, we derived analytic expressions for the viscoelastic displacements U(r, t), V(r, t) and W(r, t), the normal strains epsilon(xx)(r, t), epsilon(yy)(r, t) and epsilon(zz)(r, t), and the bulk strain theta(r, t) at an arbitrary point (x, y, z) along the X, Y and Z axes produced by a three-dimensional inclusion in a semi-infinite rheological medium described by the standard linear rheological model. After computing the spatial-temporal variation of the bulk strain produced at the ground surface by such a spherical rheological inclusion, we obtain interesting results: the bulk strain produced by a hard inclusion changes with time in three stages (alpha, beta, gamma) with different characteristics, similar to geodetic deformation observations but different from the results for a soft inclusion. These theoretical results can be used to explain the characteristics of the spatial-temporal evolution, patterns and quadrant distribution of earthquake precursors, as well as the changeability, spontaneity and complexity of short-term and imminent precursors. They offer a theoretical basis for building physical models of earthquake precursors and for predicting earthquakes.
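For orientation, the correspondence principle invoked here maps the elastic solution to the viscoelastic one in the Laplace domain, and for a standard linear (Zener) solid the time dependence is controlled by a single relaxation time. A minimal sketch (the specific moduli and notation below are assumptions, not taken from the paper):

\[
\hat{u}(\mathbf{x}, s) = u^{\mathrm{el}}\!\big(\mathbf{x};\, \mu \rightarrow s\,\hat{\mu}(s)\big), \qquad J(t) = \frac{1}{E_1} + \frac{1}{E_2}\left(1 - e^{-t/\tau}\right), \quad \tau = \frac{\eta}{E_2},
\]

where \(\hat{\mu}(s)\) is the Laplace transform of the shear relaxation function, \(J(t)\) is the creep compliance of a spring \(E_1\) in series with a Kelvin element (\(E_2\), \(\eta\)), and inverting the transform yields the time-dependent displacements and strains.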
Abstract:
The effects of the free-stream thermo-chemical state on the test-model flow field in a high-enthalpy shock tunnel are studied numerically. The properties of the free-stream, which is in thermo-chemical non-equilibrium, are determined by calculating the nozzle flow field. To simulate the natural atmospheric condition, an artificial free-stream in thermo-chemical equilibrium is constructed with total enthalpy equal to that of the real tunnel flow. The flow fields over the test models (a blunt cone and an Apollo command capsule model) under both the non-equilibrium and the virtual equilibrium free-stream conditions are calculated. By comparing the pressure, temperature, species concentration and radiation distributions of these two types of flow fields, the effects of the non-equilibrium state of the free-stream in the high-enthalpy shock tunnel are analyzed.
Abstract:
The Mallows and Generalized Mallows models are compact yet powerful and natural ways of representing a probability distribution over the space of permutations. In this paper we deal with the problems of sampling and learning (estimating) such distributions when the metric on permutations is the Cayley distance. We propose new methods for both operations, whose performance is shown through several experiments. We also introduce novel procedures to count and randomly generate permutations at a given Cayley distance, both with and without certain structural restrictions. An application in the field of biology is given to motivate interest in this model.
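As a concrete illustration, under the usual definition the Mallows model puts probability proportional to \(\exp(-\theta\, d(\sigma, \sigma_0))\) on each permutation \(\sigma\), and the Cayley distance can be computed by cycle counting. The sketch below is illustrative only (the function names are not from the paper, and the brute-force normalization is practical only for very small n):

```python
from itertools import permutations
from math import exp

def cayley_distance(sigma, pi):
    """Cayley distance: minimum number of transpositions turning sigma into pi.
    Equals n minus the number of cycles of the composition pi^{-1} * sigma."""
    n = len(sigma)
    inv_pi = [0] * n
    for i, v in enumerate(pi):
        inv_pi[v] = i
    comp = [inv_pi[v] for v in sigma]
    seen, cycles = [False] * n, 0
    for start in range(n):
        if not seen[start]:
            cycles += 1
            j = start
            while not seen[j]:
                seen[j] = True
                j = comp[j]
    return n - cycles

def mallows_pmf(theta, sigma0, n):
    """Exhaustive Mallows probabilities P(sigma) proportional to
    exp(-theta * d_Cayley(sigma, sigma0)); brute force over all n! permutations."""
    perms = list(permutations(range(n)))
    weights = [exp(-theta * cayley_distance(p, sigma0)) for p in perms]
    Z = sum(weights)
    return {p: w / Z for p, w in zip(perms, weights)}

# Example: probabilities concentrate on permutations close to the central one.
pmf = mallows_pmf(theta=1.0, sigma0=(0, 1, 2, 3), n=4)
print(max(pmf, key=pmf.get))  # (0, 1, 2, 3)
```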
Abstract:
Recently, probability models on rankings have been proposed in the field of estimation of distribution algorithms (EDAs) in order to solve permutation-based combinatorial optimisation problems. In particular, distance-based ranking models, such as the Mallows and Generalized Mallows models under the Kendall's-τ distance, have demonstrated their validity for solving this type of problem. Nevertheless, there are still many directions that deserve further study. In this paper, we extend the use of distance-based ranking models in the framework of EDAs by introducing new distance metrics, namely Cayley and Ulam. In order to analyse the performance of the Mallows and Generalized Mallows EDAs under the Kendall, Cayley and Ulam distances, we run them on a benchmark of 120 instances from four well-known permutation problems. The conducted experiments showed that there is not just one metric that performs best in all the problems. However, the statistical tests pointed out that the Mallows-Ulam EDA is the most stable algorithm among the studied proposals.
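The surrounding EDA framework follows a standard sample-select-estimate-resample loop, sketched below in simplified form (illustrative names; the placeholder model step only mimics sampling near a central permutation and omits the actual maximum-likelihood fitting and sampling of Mallows/Generalized Mallows models):

```python
import random

def eda(fitness, n, pop_size=100, sel_size=30, iters=200):
    """Schematic distance-based-ranking EDA: sample, select, re-estimate, repeat.
    `fitness` maps a permutation (tuple of 0..n-1) to a value to be minimized."""
    # Start from a uniform population of random permutations.
    population = [tuple(random.sample(range(n), n)) for _ in range(pop_size)]
    best = min(population, key=fitness)
    for _ in range(iters):
        # Selection: keep the best individuals.
        selected = sorted(population, key=fitness)[:sel_size]
        # Model estimation (placeholder): use the best selected permutation as the
        # central permutation; a real EDA would also fit the spread parameter(s).
        sigma0 = selected[0]
        # Sampling (placeholder): perturb sigma0 with a few random transpositions,
        # which mimics sampling permutations at small Cayley distance from sigma0.
        population = []
        for _ in range(pop_size):
            perm = list(sigma0)
            for _ in range(random.randint(0, 3)):
                i, j = random.sample(range(n), 2)
                perm[i], perm[j] = perm[j], perm[i]
            population.append(tuple(perm))
        best = min(best, min(population, key=fitness), key=fitness)
    return best

# Toy usage: minimize the number of misplaced items relative to the identity.
print(eda(lambda p: sum(i != v for i, v in enumerate(p)), n=8))
```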
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.
The first chapter is a game-theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error-prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Including this belief in the equilibrium calculation provides for a richer class of reputation building possibilities than when perfect implementation is assumed.
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.
Abstract:
Two of the most important questions in mantle dynamics are investigated in three separate studies: the influence of phase transitions (studies 1 and 2), and the influence of temperature-dependent viscosity (study 3).
(1) Numerical modeling of mantle convection in a three-dimensional spherical shell incorporating the two major mantle phase transitions reveals an inherently three-dimensional flow pattern characterized by accumulation of cold downwellings above the 670 km discontinuity, and cylindrical 'avalanches' of upper mantle material into the lower mantle. The exothermic phase transition at 400 km depth reduces the degree of layering. A region of strongly-depressed temperature occurs at the base of the mantle. The temperature field is strongly modulated by this partial layering, both locally and in globally-averaged diagnostics. Flow penetration is strongly wavelength-dependent, with easy penetration at long wavelengths but strong inhibition at short wavelengths. The amplitude of the geoid is not significantly affected.
(2) Using a simple criterion for the deflection of an upwelling or downwelling by an endothermic phase transition, the scaling of the critical phase buoyancy parameter (see the note following this abstract) with the important length scales is obtained. The derived trends match those observed in numerical simulations, i.e., deflection is enhanced by (a) shorter wavelengths, (b) narrower up/downwellings, (c) internal heating, and (d) narrower phase loops.
(3) A systematic investigation into the effects of temperature-dependent viscosity on mantle convection has been performed in three-dimensional Cartesian geometry, with a factor of 1000-2500 viscosity variation, and Rayleigh numbers of 10^5-10^7. Enormous differences in model behavior are found, depending on the details of rheology, heating mode, compressibility and boundary conditions. Stress-free boundaries, compressibility, and temperature-dependent viscosity all favor long-wavelength flows, even in internally heated cases. However, small cells are obtained with some parameter combinations. Downwelling plumes and upwelling sheets are possible when viscosity is dependent solely on temperature. Viscous dissipation becomes important with temperature-dependent viscosity.
The sensitivity of mantle flow and structure to these various complexities illustrates the importance of performing mantle convection calculations with rheological and thermodynamic properties matching as closely as possible those of the Earth.
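For orientation, the two non-dimensional controls referred to in studies (2) and (3) are conventionally defined as follows (standard definitions in the style of Christensen and Yuen; the exact symbols and viscosity law used in these studies are assumptions here, not quotations):

\[
P = \frac{\gamma\,\Delta\rho}{\rho^{2} g\,\alpha\, d}, \qquad
Ra = \frac{\rho g \alpha \Delta T d^{3}}{\kappa \eta_{0}}, \qquad
\eta(T) = \eta_{0}\, e^{-\theta T}, \quad \frac{\eta_{\max}}{\eta_{\min}} = e^{\theta},
\]

where \(\gamma\) is the Clapeyron slope of the phase transition, \(\Delta\rho\) the density jump across it, \(\rho\) the reference density, \(g\) gravity, \(\alpha\) thermal expansivity, \(d\) the layer depth, \(\Delta T\) the temperature drop across the layer, \(\kappa\) thermal diffusivity, \(\eta_0\) a reference viscosity, and \(T\) the non-dimensional temperature (0 to 1). A strongly negative \(P\) (endothermic transition) promotes deflection and partial layering, and \(\theta \approx \ln(1000\text{-}2500)\) corresponds to the viscosity contrasts quoted above.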
Abstract:
We investigate high-order harmonic emission and isolated attosecond pulse (IAP) generation in atoms driven by a two-colour multi-cycle laser field consisting of an 800 nm pulse and an infrared laser pulse at an arbitrary wavelength. With moderate laser intensity, an IAP of approximately 220 as can be generated in helium atoms by using two-colour laser pulses of 35 fs/800 nm and 46 fs/1150 nm. The discussion, based on the three-step semiclassical model and time-frequency analysis, gives a clear picture of high-order harmonic generation in the waveform-controlled laser field, which benefits the generation of XUV IAPs and attosecond electron pulses. When the propagation effect is included, the duration of the IAP can be shorter than 200 as when the driving laser pulses are focused 1 mm before a gas medium with a length between 1.5 mm and 2 mm.
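For context, the three-step semiclassical model predicts the harmonic cutoff (a standard result quoted here for orientation, not a value taken from the paper):

\[
E_{\mathrm{cutoff}} = I_p + 3.17\, U_p, \qquad U_p = \frac{e^{2} E_0^{2}}{4 m_e \omega^{2}} \propto I \lambda^{2},
\]

so adding a longer-wavelength second colour raises the ponderomotive energy \(U_p\) and extends the attainable XUV bandwidth, which is what makes two-colour waveform control effective for isolated attosecond pulse generation.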
Abstract:
Large quantities of teleseismic short-period seismograms recorded at SCARLET provide travel time, apparent velocity and waveform data for the study of upper mantle compressional velocity structure. Relative array analysis of arrival times from distant (30° < Δ < 95°) earthquakes at all azimuths constrains lateral velocity variations beneath southern California. We compare dT/dΔ, back azimuth and averaged arrival time estimates from the entire network for 154 events to the same parameters derived from small subsets of SCARLET. Patterns of mislocation vectors for over 100 overlapping subarrays delimit the spatial extent of an east-west striking, high-velocity anomaly beneath the Transverse Ranges. Thin-lens analysis of the averaged arrival time differences, called 'net delay' data, requires the mean depth of the corresponding lens to be more than 100 km. Our results are consistent with the PKP-delay times of Hadley and Kanamori (1977), who first proposed the high-velocity feature, but we place the anomalous material at substantially greater depths than their 40-100 km estimate.
Detailed analysis of travel time, ray parameter and waveform data from 29 events occurring in the distance range 9° to 40° reveals the upper mantle structure beneath an oceanic ridge to depths of over 900 km. More than 1400 digital seismograms from earthquakes in Mexico and Central America yield 1753 travel times and 58 dT/dΔ measurements as well as high-quality, stable waveforms for investigation of the deep structure of the Gulf of California. The result of a travel time inversion with the tau method (Bessonova et al., 1976) is adjusted to fit the p(Δ) data, then further refined by incorporation of relative amplitude information through synthetic seismogram modeling. The application of a modified wave field continuation method (Clayton and McMechan, 1981) to the data with the final model confirms that GCA is consistent with the entire data set and also provides an estimate of the data resolution in velocity-depth space. We discover that the upper mantle under this spreading center has anomalously slow velocities to depths of 350 km, and place new constraints on the shape of the 660 km discontinuity.
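The tau method referred to above inverts travel times through the delay-time function (standard relations, included only as a reminder of the method, not as a restatement of the specific inversion performed here):

\[
\tau(p) = T(p) - p\,X(p) = 2\int_{0}^{z(p)} \sqrt{u^{2}(z) - p^{2}}\; dz,
\]

where \(p = dT/d\Delta\) is the ray parameter, \(X\) the epicentral distance, \(u(z)\) the slowness at depth \(z\), and \(z(p)\) the turning depth of the ray with parameter \(p\); bounds on \(\tau(p)\) translate into bounds on the velocity-depth function.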
Seismograms from 22 earthquakes along the northeast Pacific rim recorded in southern California form the data set for a comparative investigation of the upper mantle beneath the Cascade Ranges-Juan de Fuca region, an ocean-continent transition. These data consist of 853 seismograms (6° < Δ < 42°) which produce 1068 travel times and 40 ray parameter estimates. We use the spreading center model initially in synthetic seismogram modeling, and perturb GCA until the Cascade Ranges data are matched. Wave field continuation of both data sets with a common reference model confirms that real differences exist between the two suites of seismograms, implying lateral variation in the upper mantle. The ocean-continent transition model, CJF, features velocities between 200 and 350 km depth that are intermediate between GCA and T7 (Burdick and Helmberger, 1978), a model for the inland western United States. Models of continental shield regions (e.g., King and Calcagnile, 1976) have higher velocities in this depth range, but all four model types are similar below 400 km. This variation in the rate of velocity increase with tectonic regime suggests an inverse relationship between velocity gradient and lithospheric age above 400 km depth.
Abstract:
Since the discovery of D-branes as non-perturbative, dynamic objects in string theory, various configurations of branes in type IIA/B string theory and M-theory have been considered to study their low-energy dynamics described by supersymmetric quantum field theories.
One example of such a construction is based on the description of Seiberg-Witten curves of four-dimensional N = 2 supersymmetric gauge theories as branes in type IIA string theory and M-theory. This enables us to study the gauge theories in strongly-coupled regimes. Spectral networks are another tool for utilizing branes to study non-perturbative regimes of two- and four-dimensional supersymmetric theories. Using spectral networks of a Seiberg-Witten theory we can find its BPS spectrum, which is protected from quantum corrections by supersymmetry, and also the BPS spectrum of a related two-dimensional N = (2,2) theory whose (twisted) superpotential is determined by the Seiberg-Witten curve. When we don’t know the perturbative description of such a theory, its spectrum obtained via spectral networks is a useful piece of information. In this thesis we illustrate these ideas with examples of the use of Seiberg-Witten curves and spectral networks to understand various two- and four-dimensional supersymmetric theories.
First, we examine how the geometry of a Seiberg-Witten curve serves as a useful tool for identifying various limits of the parameters of the Seiberg-Witten theory, including Argyres-Seiberg duality and Argyres-Douglas fixed points. Next, we consider the low-energy limit of a two-dimensional N = (2, 2) supersymmetric theory from an M-theory brane configuration whose (twisted) superpotential is determined by the geometry of the branes. We show that, when the two-dimensional theory flows to its infra-red fixed point, particular cases realize Kazama-Suzuki coset models. We also study the BPS spectrum of an Argyres-Douglas type superconformal field theory on the Coulomb branch by using its spectral networks. We provide strong evidence of the equivalence of superconformal field theories from different string-theoretic constructions by comparing their BPS spectra.
Abstract:
This study addresses the problem of obtaining reliable velocities and displacements from accelerograms, a concern which often arises in earthquake engineering. A closed-form acceleration expression with random parameters is developed to test any strong-motion accelerogram processing method. Integration of this analytical time history yields the exact velocities, displacements and Fourier spectra. Noise and truncation can also be added. A two-step testing procedure is proposed and the original Volume II routine is used as an illustration. The main sources of error are identified and discussed. Although these errors may be reduced, it is impossible to extract the true time histories from an analog or digital accelerogram because of the uncertain noise level and missing data. Based on these uncertainties, a probabilistic approach is proposed as a new accelerogram processing method. A most probable record is presented as well as a reliability interval which reflects the level of error-uncertainty introduced by the recording and digitization process. The data is processed in the frequency domain, under assumptions governing either the initial value or the temporal mean of the time histories. This new processing approach is tested on synthetic records. It induces little error and the digitization noise is adequately bounded. Filtering is intended to be kept to a minimum and two optimal error-reduction methods are proposed. The "noise filters" reduce the noise level at each harmonic of the spectrum as a function of the signal-to-noise ratio. However, the correction at low frequencies is not sufficient to significantly reduce the drifts in the integrated time histories. The "spectral substitution method" uses optimization techniques to fit spectral models of near-field, far-field or structural motions to the amplitude spectrum of the measured data. The extremes of the spectrum of the recorded data where noise and error prevail are then partly altered, but not removed, and statistical criteria provide the choice of the appropriate cutoff frequencies. This correction method has been applied to existing strong-motion far-field, near-field and structural data with promising results. Since this correction method maintains the whole frequency range of the record, it should prove to be very useful in studying the long-period dynamics of local geology and structures.
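The basic difficulty described above, small errors in acceleration integrating into large drifts in velocity and displacement, is easy to reproduce numerically. The sketch below uses a synthetic decaying-sinusoid accelerogram plus white noise as an illustrative stand-in, not the closed-form test history developed in the thesis:

```python
import numpy as np

dt = 0.01                              # sample interval, s
t = np.arange(0.0, 40.0, dt)

# Synthetic "true" accelerogram: decaying sinusoid (illustrative only).
acc_true = np.exp(-0.1 * t) * np.sin(2 * np.pi * 1.5 * t)

# Recorded accelerogram: truth plus small white digitization noise.
rng = np.random.default_rng(0)
acc_rec = acc_true + 0.001 * rng.standard_normal(t.size)

def integrate(a, dt):
    """Trapezoidal integration of acceleration to velocity and displacement,
    assuming zero initial velocity and displacement."""
    v = np.concatenate(([0.0], np.cumsum(0.5 * (a[1:] + a[:-1]) * dt)))
    d = np.concatenate(([0.0], np.cumsum(0.5 * (v[1:] + v[:-1]) * dt)))
    return v, d

v_true, d_true = integrate(acc_true, dt)
v_rec, d_rec = integrate(acc_rec, dt)

# Even ~0.1% noise produces a visible long-period drift in displacement.
print("final displacement error:", abs(d_rec[-1] - d_true[-1]))
```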
Abstract:
In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
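A stripped-down version of the edge-cutting idea is sketched below: hypotheses carry prior weights, edges connect hypotheses belonging to different theories, and the greedy step picks the test whose expected cut edge weight is largest. This is a simplification (deterministic predictions, no noise model, illustrative names), not the BROAD implementation:

```python
from itertools import combinations

def ec2_greedy_test(priors, theory_of, predictions, tests):
    """Pick the test with the largest expected weight of cut edges.

    priors:      dict hypothesis -> prior probability
    theory_of:   dict hypothesis -> theory label (equivalence class)
    predictions: dict (hypothesis, test) -> predicted outcome (deterministic)
    tests:       iterable of candidate tests
    """
    # Edges only between hypotheses that belong to different theories.
    edges = [(h, g) for h, g in combinations(priors, 2)
             if theory_of[h] != theory_of[g]]

    def expected_cut(test):
        outcomes = {predictions[(h, test)] for h in priors}
        total = 0.0
        for o in outcomes:
            # Predictive probability of observing outcome o.
            p_o = sum(priors[h] for h in priors if predictions[(h, test)] == o)
            # An edge is cut if at least one endpoint is inconsistent with o.
            cut = sum(priors[h] * priors[g] for h, g in edges
                      if predictions[(h, test)] != o or predictions[(g, test)] != o)
            total += p_o * cut
        return total

    return max(tests, key=expected_cut)

# Toy usage: two theories, one hypothesis each, separated only by test "t1".
priors = {"h1": 0.5, "h2": 0.5}
theory_of = {"h1": "EV", "h2": "PT"}
predictions = {("h1", "t1"): "A", ("h2", "t1"): "B",
               ("h1", "t2"): "A", ("h2", "t2"): "A"}
print(ec2_greedy_test(priors, theory_of, predictions, ["t1", "t2"]))  # t1
```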
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA) and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
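For reference, the discount functions being compared are conventionally written as follows (standard textbook forms; the thesis's own parameterization, e.g. its (α, β) quasi-hyperbolic notation, may differ slightly):

\[
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{qh}}(t) = \begin{cases} 1 & t = 0 \\ \beta\, \delta^{t} & t > 0 \end{cases}, \qquad
D_{\mathrm{gh}}(t) = \frac{1}{(1 + \alpha t)^{\beta/\alpha}},
\]

where the present value of a reward \(x\) delivered at delay \(t\) is \(D(t)\,x\).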
In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
We also test the predictions of behavioural theories in the "wild". We pay particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
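The loss-averse utility referred to here is typically built on the prospect-theory value function (the standard Kahneman-Tversky form, quoted for orientation; the exact specification used in the discrete choice model is not reproduced here):

\[
v(x) = \begin{cases} x^{\alpha} & x \geq 0 \\ -\lambda\, (-x)^{\beta} & x < 0 \end{cases}, \qquad \lambda > 1,
\]

where \(x\) is the gain or loss relative to the reference point (here, the reference price) and \(\lambda\) is the loss-aversion coefficient.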
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.