980 results for empirical correlation
Abstract:
A systematic study of the available data on 26 metallic glasses shows that there is an intrinsic correlation between the fragility of a liquid and the bulk modulus of its glass. The underlying physics can be rationalized within the formalism of potential-energy-landscape thermodynamics. Surprisingly, the linear correlation between the fragility and the bulk-to-shear modulus ratio holds strictly only at absolute zero temperature or at very high frequency. Further analyses indicate that a real flow event in bulk metallic glasses is shear dominant, and that fragility is inversely proportional to the shear-induced bulk dilatation. Finally, the extension of these findings to nonmetallic glasses is discussed.
Abstract:
Large-eddy simulation (LES) has emerged as a promising tool for simulating turbulent flows in general and, in recent years, has also been applied to particle-laden turbulence with some success (Kassinos et al., 2007). The motion of inertial particles is much more complicated than that of fluid elements, and therefore LES of turbulent flow laden with inertial particles encounters new challenges. In conventional LES, only large-scale eddies are explicitly resolved, and the effects of unresolved, small or subgrid-scale (SGS) eddies on the large-scale eddies are modeled; the SGS turbulent flow field itself is not available. The effects of the SGS turbulent velocity field on particle motion have been studied by Wang and Squires (1996), Armenio et al. (1999), Yamamoto et al. (2001), Shotorban and Mashayek (2006a,b), Fede and Simonin (2006), Berrouk et al. (2007), Bini and Jones (2008), and Pozorski and Apte (2009), among others. One contemporary method to include the effects of SGS eddies on inertial particle motion is to introduce a stochastic differential equation (SDE), that is, a Langevin stochastic equation, to model the SGS fluid velocity seen by inertial particles (Fede et al., 2006; Shotorban and Mashayek, 2006a,b; Berrouk et al., 2007; Bini and Jones, 2008; Pozorski and Apte, 2009). However, the accuracy of such a Langevin equation model depends primarily on the prescription of the SGS fluid velocity autocorrelation time seen by an inertial particle, or the inertial particle–SGS eddy interaction timescale (denoted by $\delta T_{Lp}$), and on a second model constant in the diffusion term which controls the intensity of the random force received by an inertial particle (denoted by $C_0$; see Eq. (7)). From the theoretical point of view, $\delta T_{Lp}$ differs significantly from the Lagrangian fluid velocity correlation time (Reeks, 1977; Wang and Stock, 1993), and this difference carries the essential nonlinearity in the statistical modeling of particle motion. Both $\delta T_{Lp}$ and $C_0$ may depend on the filter width and the particle Stokes number even for a given turbulent flow. In previous studies, $\delta T_{Lp}$ was modeled either by the fluid SGS Lagrangian timescale (Fede et al., 2006; Shotorban and Mashayek, 2006b; Pozorski and Apte, 2009; Bini and Jones, 2008) or by a simple extension of the timescale obtained from the full flow field (Berrouk et al., 2007). In this work, we study the subtle and non-monotonic dependence of $\delta T_{Lp}$ on the filter width and particle Stokes number using a flow field obtained from direct numerical simulation (DNS). We then propose an empirical closure model for $\delta T_{Lp}$. Finally, the model is validated against LES of particle-laden turbulence in predicting single-particle statistics such as the particle kinetic energy. As a first step, we consider particle motion under the one-way coupling assumption in isotropic turbulent flow and neglect the gravitational settling effect; the one-way coupling assumption is valid only for low particle mass loading.
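As a rough illustration of this modeling approach, the sketch below advances a one-way-coupled particle using an Ornstein-Uhlenbeck-type Langevin model for the SGS fluid velocity seen by the particle. The drift and diffusion forms and all parameter names are generic assumptions for illustration; the paper's Eq. (7) defines the actual model.

```python
import numpy as np

def advect_particle(n_steps, dt, u_resolved, tau_p, delta_t_lp, c0, eps_sgs, rng):
    """One-way-coupled point particle with a Langevin model for the
    SGS fluid velocity seen by the particle (1-D illustrative sketch).

    u_resolved : filtered fluid velocity at the particle (constant here)
    tau_p      : particle response time
    delta_t_lp : SGS velocity autocorrelation time seen by the particle
    c0         : diffusion-term model constant
    eps_sgs    : SGS energy dissipation rate
    """
    v = 0.0   # particle velocity
    du = 0.0  # modeled SGS fluid velocity seen by the particle
    for _ in range(n_steps):
        # Ornstein-Uhlenbeck update: relax toward zero over delta_t_lp,
        # random forcing with intensity set by c0 * eps_sgs
        du += (-du / delta_t_lp) * dt \
              + np.sqrt(c0 * eps_sgs * dt) * rng.standard_normal()
        u_seen = u_resolved + du          # total fluid velocity seen
        v += (u_seen - v) / tau_p * dt    # Stokes drag
    return v

# e.g. advect_particle(10_000, 1e-3, 1.0, 0.05, 0.02, 2.1, 0.5,
#                      np.random.default_rng(0))
```

The two modeling inputs are exactly the quantities discussed above: the decorrelation time $\delta T_{Lp}$ and the forcing constant $C_0$.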
Abstract:
Taking shear-induced dilatation into consideration in shear-transformation-zone (STZ) operations, we derive a new yield criterion that reflects the pressure sensitivity of plastic flow in metallic glasses (MGs) and agrees well with experiments. Furthermore, an intrinsic theoretical correlation between the pressure-sensitivity coefficient and the dilatation factor is revealed. It is found that the pressure sensitivity of plastic flow in MGs originates from the dilatation of microscale STZs.
Abstract:
The study presented here was carried out to obtain the actual solids flow rate by combining electrical resistance tomography (ERT) and an electromagnetic flow meter (EFM). A new in-situ measurement method based on combined EFM and ERT measurements was proposed for studying the flow rates of the individual phases in a vertical flow. The study was based on laboratory experiments carried out in a 50 mm vertical flow rig for a number of sand concentrations and different mixture velocities. A range of sand slurries with median particle sizes from 212 µm to 355 µm was tested. The solids concentrations by volume were 5% and 15%, with corresponding mixture densities of 1078 kg/m³ and 1238 kg/m³, respectively. The flow velocity was between 1.5 m/s and 3.0 m/s. A total of six experimental tests were conducted. The equivalent liquid model was adopted to validate the in-situ volumetric solids fraction and to calculate the slip velocity. The results show that the ERT technique can be used in conjunction with an electromagnetic flow meter to measure the slurry flow rate in a vertical pipe flow. However, it should be emphasized that the EFM results must be treated with reservation when the flow at the EFM mounting position is non-homogeneous. The flow rate obtained by the EFM should be corrected for the slip velocity and the flow pattern.
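For illustration, a minimal sketch of the equivalent-liquid-model bookkeeping is given below, assuming nominal water and sand densities; the function names are ours, not the paper's.

```python
# Illustrative values only: nominal densities of water and quartz sand.
RHO_WATER = 1000.0  # kg/m^3
RHO_SAND = 2650.0   # kg/m^3

def mixture_density(c_v, rho_s=RHO_SAND, rho_w=RHO_WATER):
    """Mixture density for a solids volume fraction c_v."""
    return c_v * rho_s + (1.0 - c_v) * rho_w

def in_situ_solids_fraction(rho_mix, rho_s=RHO_SAND, rho_w=RHO_WATER):
    """Invert the mixture-density relation for the in-situ solids fraction."""
    return (rho_mix - rho_w) / (rho_s - rho_w)

def slip_velocity(v_liquid, v_solids):
    """Slip velocity between the liquid and solids phases."""
    return v_liquid - v_solids

# e.g. mixture_density(0.05) is about 1082 kg/m^3, close to the
# 1078 kg/m^3 quoted above for the 5% slurry.
```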
Abstract:
Be it a physical object or a mathematical model, a nonlinear dynamical system can display complicated aperiodic behavior, or "chaos." In many cases, this chaos is associated with motion on a strange attractor in the system's phase space. And the dimension of the strange attractor indicates the effective number of degrees of freedom in the dynamical system.
In this thesis, we investigate numerical issues involved with estimating the dimension of a strange attractor from a finite time series of measurements on the dynamical system.
Of the various definitions of dimension, we argue that the correlation dimension is the most efficiently calculable and we remark further that it is the most commonly calculated. We are concerned with the practical problems that arise in attempting to compute the correlation dimension. We deal with geometrical effects (due to the inexact self-similarity of the attractor), dynamical effects (due to the nonindependence of points generated by the dynamical system that defines the attractor), and statistical effects (due to the finite number of points that sample the attractor). We propose a modification of the standard algorithm, which eliminates a specific effect due to autocorrelation, and a new implementation of the correlation algorithm, which is computationally efficient.
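For concreteness, here is a minimal sketch of the correlation-integral computation on which the correlation dimension is based, with a Theiler-style exclusion window standing in for the autocorrelation modification mentioned above; the thesis's actual modification and efficient implementation may differ.

```python
import numpy as np

def correlation_integral(x, r, w=0):
    """Correlation integral C(r) for a series of points x, shape (n, d).

    w is an autocorrelation exclusion window: pairs closer than w samples
    in time are skipped, so dynamically correlated points do not masquerade
    as geometric neighbors.
    """
    n = len(x)
    count, total = 0, 0
    for i in range(n):
        for j in range(i + 1 + w, n):
            total += 1
            if np.linalg.norm(x[i] - x[j]) < r:
                count += 1
    return count / max(total, 1)

# The correlation dimension is the slope of log C(r) versus log r in the
# scaling region, e.g.:
#   dims = np.gradient(np.log(c_values), np.log(r_values))
```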
Finally, we apply the algorithm to chaotic data from the Caltech tokamak and the Texas tokamak (TEXT); we conclude that plasma turbulence is not a low-dimensional phenomenon.
Abstract:
We present a method of image-speckle contrast for the nonprecalibration measurement of the root-mean-square roughness and the lateral-correlation length of random surfaces with Gaussian correlation. We use a simplified model of the speckle fields produced by a weakly scattering object in the theoretical analysis. The explicit mathematical relation shows that the saturation value of the image-speckle contrast at a large aperture radius determines the roughness, while the variation of the contrast with the aperture radius determines the lateral-correlation length. In the experiment, we fabricate random surface samples with Gaussian correlation. The square of the image-speckle contrast is measured versus the radius of the aperture in the 4f system, and the roughness and the lateral-correlation length are extracted by fitting the theoretical result to the experimental data. Comparison of the measurement with that by an atomic force microscope shows that our method has satisfactory accuracy. (C) 2002 Optical Society of America.
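A hedged sketch of the fitting step is given below. The model function is a placeholder with the qualitative behavior described above (saturation level set by the roughness, approach to saturation governed by the correlation length), not the paper's explicit relation, and the data are synthetic stand-ins for measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def contrast_squared_model(r, c_sat, xi):
    # Placeholder form: c_sat is the saturation value at large aperture
    # radius (set by the rms roughness via the paper's relation); xi
    # governs the approach to saturation (lateral-correlation length).
    return c_sat * (1.0 - np.exp(-(r / xi) ** 2))

radii = np.linspace(0.1, 5.0, 40)                 # aperture radii (arb. units)
data = contrast_squared_model(radii, 0.25, 1.2)   # stand-in for measured data

params, _ = curve_fit(contrast_squared_model, radii, data, p0=[0.2, 1.0])
c_sat_fit, xi_fit = params  # roughness then follows from c_sat_fit
```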
Abstract:
This thesis examines four distinct facets of and methods for understanding political ideology, and so it includes four distinct chapters with only moderate connections between them. Chapter 2 examines how reactions to emotional stimuli vary with political opinion, and how such stimuli can produce changes in an individual's political preferences. Chapter 3 examines the connection between self-reported fear and item nonresponse on surveys. Chapter 4 examines the connection of political and moral consistency with low-dimensional ideology, and Chapter 5 develops a technique for estimating ideal points and salience in a low-dimensional ideological space.
Abstract:
Cosmic birefringence (CB)---a rotation of the photon-polarization plane in vacuum---is a generic signature of new scalar fields that could provide dark energy. Previously, WMAP observations excluded a uniform CB-rotation angle larger than a degree.
In this thesis, we develop a minimum-variance--estimator formalism for reconstructing a direction-dependent rotation from full-sky CMB maps, and forecast more than an order-of-magnitude improvement in sensitivity with forthcoming Planck data and future satellite missions. Next, we perform the first analysis of WMAP-7 data to look for rotation-angle anisotropies and report a null detection of the rotation-angle power-spectrum multipoles below L=512, constraining the quadrupole amplitude of a scale-invariant power spectrum to less than one degree. We further explore the use of a cross-correlation between the CMB temperature and the rotation for detecting the CB signal, for different quintessence models. We find that it may improve sensitivity in the case of a marginal detection, and provide an empirical handle for distinguishing details of the new physics indicated by CB.
We then consider other parity-violating physics beyond the standard model---in particular, a chiral inflationary-gravitational-wave background. We show that WMAP has no constraining power, while a cosmic-variance--limited experiment would be capable of detecting only a large parity violation. In the case of a strong detection of EB/TB correlations, CB can be readily distinguished from chiral gravity waves.
We next adapt our CB analysis to investigate patchy screening of the CMB, driven by inhomogeneities during the Epoch of Reionization (EoR). We constrain a toy model of reionization with WMAP-7 data, and show that data from Planck should start approaching interesting portions of the EoR parameter space and can be used to exclude reionization tomographies with large ionized bubbles.
In light of the upcoming data from low-frequency radio observations of the redshifted 21-cm line from the EoR, we examine probability-distribution functions (PDFs) and difference PDFs of the simulated 21-cm brightness temperature, and discuss the information that can be recovered using these statistics. We find that PDFs are insensitive to details of small-scale physics, but highly sensitive to the properties of the ionizing sources and the size of ionized bubbles.
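As a minimal illustration of these statistics, the sketch below computes a one-point PDF and a difference PDF on stand-in random data; an actual analysis would use simulated EoR brightness-temperature cubes.

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in 21-cm brightness-temperature cube in mK (not an EoR simulation).
t_b = rng.normal(20.0, 5.0, size=(64, 64, 64))

# One-point PDF: normalized histogram of T_b over all cells.
pdf, edges = np.histogram(t_b, bins=50, density=True)

# Difference PDF: distribution of T_b differences between cells separated
# by a fixed lag (here, 4 cells along one axis).
lag = 4
diffs = t_b - np.roll(t_b, lag, axis=0)
diff_pdf, diff_edges = np.histogram(diffs, bins=50, density=True)
```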
Finally, we discuss prospects for related future investigations.
Abstract:
Consumption of addictive substances poses a challenge to economic models of rational, forward-looking agents. This dissertation presents a theoretical and empirical examination of consumption of addictive goods.
The theoretical model draws on evidence from psychology and neurobiology to improve on the standard assumptions used in intertemporal consumption studies. I model agents who may misperceive the severity of the future consequences of consuming addictive substances, and I allow an agent's environment to shape her preferences in the systematic way suggested by numerous studies that have found craving to be induced by the presence of environmental cues associated with past substance use. The behavior of agents in this behavioral model of addiction can mimic the pattern of quitting and relapsing that is prevalent among users of addictive substances.
Chapter 3 presents an empirical analysis of the Becker and Murphy (1988) model of rational addiction using data on grocery store sales of cigarettes. This essay empirically tests the model's predictions concerning consumption responses to future and past price changes as well as the prediction that the response to an anticipated price change differs from the response to an unanticipated price change. In addition, I consider the consumption effects of three institutional changes that occur during the time period 1996 through 1999.
Abstract:
In the quest for a descriptive theory of decision-making, the rational-actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and the more ambitiously we extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.
We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn informs the next most informative test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
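The following is a hedged, noise-free sketch of the EC2-style greedy selection rule that BROAD builds on: hypotheses (theory and parameter combinations) fall into equivalence classes (theories), an edge connects hypotheses in different classes, and a test outcome eliminates hypotheses that predicted otherwise, cutting their edges. BROAD itself additionally handles noisy responses and uses an accelerated greedy implementation.

```python
import itertools
import numpy as np

def expected_edges_cut(prior, classes, predictions):
    """Expected weight of edges cut by one test (noise-free EC2 sketch).

    prior       : probability of each hypothesis, shape (H,)
    classes     : class (theory) label of each hypothesis, shape (H,)
    predictions : outcome each hypothesis predicts for this test, shape (H,)
    """
    total = 0.0
    for outcome in np.unique(predictions):
        p_outcome = prior[predictions == outcome].sum()
        # An edge (i, j) across classes is cut if at least one endpoint
        # is eliminated by this outcome; edge weight is prior[i]*prior[j].
        cut = 0.0
        for i, j in itertools.combinations(range(len(prior)), 2):
            if classes[i] != classes[j] and (
                predictions[i] != outcome or predictions[j] != outcome
            ):
                cut += prior[i] * prior[j]
        total += p_outcome * cut
    return total

def next_test(prior, classes, all_predictions):
    """Greedy EC2 rule: pick the test with the largest expected cut."""
    scores = [expected_edges_cut(prior, classes, p) for p in all_predictions]
    return int(np.argmax(scores))
```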
We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of the CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out both because it is infeasible in practice and because we do not find any signatures of it in our data.
In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting, and most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
In these models the passage of time is linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.
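One standard way to see this connection, shown as an illustration under an assumed logarithmic subjective clock (not necessarily the thesis's construction): exponential discounting in subjective time yields a generalized-hyperbolic discount function in calendar time.

```latex
% Exponential discounting at rate \rho in a logarithmic subjective time
% \tau(t) (an illustrative assumption) gives a hyperbolic form in t:
\[
  \tau(t) = \frac{1}{k}\ln(1 + k t), \qquad
  D(t) = e^{-\rho\,\tau(t)} = (1 + k t)^{-\rho/k}.
\]
```

This is the generalized-hyperbolic discount function; for ρ = k it reduces to the simple hyperbola D(t) = 1/(1 + kt).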
We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a distinctly different way from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than can be explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute should increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
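A minimal sketch of such a discrete-choice model is given below; the functional form and parameter values (e.g. the loss-aversion weight lam = 2.25) are illustrative assumptions, not the paper's estimates.

```python
import numpy as np

def utility(price, ref_price, beta_price=-1.0, lam=2.25):
    """Reference-dependent utility over price: paying more than the
    reference (a 'loss') is weighted lam times more heavily than an
    equivalent gain."""
    gain = max(ref_price - price, 0.0)  # paying less than the reference
    loss = max(price - ref_price, 0.0)  # paying more than the reference
    return beta_price * price + gain - lam * loss

def choice_probabilities(prices, ref_prices):
    """Multinomial-logit choice probabilities over substitute items."""
    u = np.array([utility(p, r) for p, r in zip(prices, ref_prices)])
    expu = np.exp(u - u.max())
    return expu / expu.sum()

# After a discount on item 0 ends, its reference price is still low, so
# its restored price registers as a loss and demand shifts to the
# substitute (item 1), as described above:
print(choice_probabilities([3.0, 3.0], ref_prices=[2.0, 3.0]))
```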
In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.
Abstract:
Ghost imaging with classical incoherent light by third-order correlation is investigated. We discuss the similarities and differences between ghost imaging by third-order correlation and by second-order correlation, and analyze the effect of each part of the third-order correlation function on the imaging process. It is shown that third-order correlated imaging includes richer correlated-imaging effects than second-order correlated imaging, while the imaging information originates mainly from the correlation of the intensity fluctuations between the test detector and each reference detector, as it does in ghost imaging by second-order correlation.
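For reference, the standard decomposition behind this statement (a textbook identity in our notation, not quoted from the paper) writes each intensity as mean plus fluctuation, $I_k = \bar{I}_k + \Delta I_k$, with $k=1$ the test detector and $k=2,3$ the reference detectors:

```latex
\[
  G^{(3)} = \langle I_1 I_2 I_3 \rangle
          = \bar{I}_1 \bar{I}_2 \bar{I}_3
          + \bar{I}_1 \langle \Delta I_2 \Delta I_3 \rangle
          + \bar{I}_2 \langle \Delta I_1 \Delta I_3 \rangle
          + \bar{I}_3 \langle \Delta I_1 \Delta I_2 \rangle
          + \langle \Delta I_1 \Delta I_2 \Delta I_3 \rangle.
\]
```

The cross terms involving $\Delta I_1$ couple the test detector to each reference detector and carry the imaging information, consistent with the statement above.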