5 results for unit root tests

in CaltechTHESIS


Relevance:

20.00%

Abstract:

Compliant foams are usually characterized by a wide range of desirable mechanical properties, including viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. The characteristic dimensions of foams therefore span multiple length scales, making their mechanical properties difficult to model. Continuum mechanics-based models capture some salient experimental features, such as the linear elastic regime followed by a nonlinear plateau-stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods can capture the physical origins of deformation at smaller scales, but are computationally impractical. Capturing deformation at the so-called mesoscale, which describes the phenomenon at the continuum level while retaining some physical insight, requires new theoretical approaches.

A fundamental question that motivates the modeling of foams is: how can the intrinsic material response be extracted from simple mechanical test data, such as the stress vs. strain response? A 3D model was developed to simulate the mechanical response of foam-type materials. Its novel features include a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Insights from atomistic simulations of foams were used to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression [2011, "Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227–2237; Erratum 60, pp. 1753–1756 (2012)], the property-space exploration was extended to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, such as:
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.

The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each test was evaluated in terms of its efficiency in extracting material properties. The uniaxial simulation results proved to reflect a combination of structural and material influences. Of all the deformation paths, flat-punch indentation proved superior, being the most sensitive to the material properties.
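The abstract does not give the functional form of the hardening input, but a minimal sketch of what a hardening-softening-hardening flow-stress curve with power-law rate dependence might look like is shown below. All control points and parameter values here are hypothetical placeholders for illustration, not values from the thesis.

```python
import numpy as np

# Hypothetical control points (plastic strain, flow stress / initial yield):
# initial hardening, softening toward a plateau, then densification-like
# re-hardening, mimicking a hardening-softening-hardening input.
EPS_PTS = np.array([0.00, 0.05, 0.10, 0.50, 0.80])
SIG_PTS = np.array([1.00, 1.40, 1.10, 1.20, 3.00])

def flow_stress(eps_p):
    """Quasi-static hardening-softening-hardening input curve g(eps_p)."""
    return np.interp(eps_p, EPS_PTS, SIG_PTS)

def flow_stress_rate(eps_p, eps_dot, eps_dot_ref=1.0, m=0.05):
    """Power-law strain-rate dependence layered on the static curve."""
    return flow_stress(eps_p) * (eps_dot / eps_dot_ref) ** m
```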

Relevance:

20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and greater model complexity. The result is a plethora of models, based on different assumptions and applicable in different contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments in which subjects are presented with choices between monetary gambles, or lotteries. Different decision-making theories evaluate the choices differently and make distinct predictions about the subjects' choices, so theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests; this imposes computational and economic constraints on classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), which sequentially chooses the "most informative" test at each stage, updates its posterior beliefs over the theories based on the response, and uses those beliefs to select the next test to run. BROAD uses the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypothesis with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that these popular criteria can, surprisingly, perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
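A minimal sketch of the noiseless version of this loop is given below: theories act as equivalence classes over parameterized hypotheses, each candidate test is scored by the posterior mass of cross-class hypothesis pairs ("edges") it is expected to cut, and the observed response prunes inconsistent hypotheses. The names, problem sizes, and binary-prediction setup are illustrative assumptions, not the thesis's implementation; handling noisy responses, as BROAD does, requires reweighting by response likelihoods rather than hard elimination.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n_tests, n_hyp = 30, 8
theories = [f"T{i % 4}" for i in range(n_hyp)]           # 4 theory classes
predictions = rng.integers(0, 2, size=(n_hyp, n_tests))  # predicted binary choices
prior = np.full(n_hyp, 1.0 / n_hyp)

def cross_class_mass(hyps, post):
    """Posterior weight of hypothesis pairs lying in different theory classes."""
    return sum(post[a] * post[b]
               for a, b in itertools.combinations(hyps, 2)
               if theories[a] != theories[b])

def expected_edges_cut(alive, post, test):
    """EC2-style score: expected cross-class edge mass removed by this test."""
    expected = 0.0
    for outcome in (0, 1):
        consistent = [h for h in alive if predictions[h, test] == outcome]
        p_outcome = sum(post[h] for h in consistent)
        if p_outcome > 0.0:
            expected += p_outcome * (cross_class_mass(alive, post)
                                     - cross_class_mass(consistent, post))
    return expected

def broad_like_loop(true_h=3, n_rounds=6):
    alive, post = list(range(n_hyp)), prior.copy()
    for _ in range(n_rounds):
        test = max(range(n_tests), key=lambda t: expected_edges_cut(alive, post, t))
        outcome = predictions[true_h, test]   # simulate the subject's response
        alive = [h for h in alive if predictions[h, test] == outcome]
        mass = sum(post[h] for h in alive)
        post = np.array([post[h] / mass if h in alive else 0.0
                         for h in range(n_hyp)])
    return alive, post
```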

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries they chose. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation can be ruled out both because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, the "present bias" models (quasi-hyperbolic (α, β) discounting and fixed-cost discounting), and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for the present-bias models and hyperbolic discounting; most subjects were classified as generalized-hyperbolic discounting types, followed by exponential discounting.
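For reference, the standard textbook forms of these discount functions are sketched below; the thesis's (α, β) parameterization may differ (the quasi-hyperbolic form is more commonly written with parameters β and δ, with β < 1 capturing present bias, and the generalized-hyperbolic form nests exponential discounting as α → 0):

```latex
\begin{align*}
D_{\text{exp}}(t) &= \delta^{t} \\
D_{\text{hyp}}(t) &= \frac{1}{1 + k t} \\
D_{\text{quasi-hyp}}(t) &= \begin{cases} 1 & t = 0 \\ \beta\,\delta^{t} & t > 0 \end{cases} \\
D_{\text{gen-hyp}}(t) &= (1 + \alpha t)^{-\beta/\alpha}
\end{align*}
```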

In these models, the passage of time is treated as linear. We instead consider a psychological model in which the perception of time is subjective. We prove that when biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild," paying particular attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in ways distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, demand for it will be greater than its price elasticity alone would explain; even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction by fitting a discrete choice model with a loss-averse utility function to data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.
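A minimal sketch of such a loss-averse discrete choice model is given below, assuming a simple multinomial logit with a reference-price gain/loss term; the function names, the logit form, and the default loss-aversion coefficient (λ ≈ 2.25, the classic Tversky-Kahneman estimate) are illustrative assumptions, not the thesis's specification.

```python
import numpy as np

def gain_loss(price, ref_price, lam=2.25):
    """Reference-dependent price term: paying below the reference price is a
    gain; paying above it is a loss, amplified by loss aversion lam > 1."""
    diff = ref_price - price
    return np.where(diff >= 0, diff, lam * diff)

def choice_probs(prices, ref_prices, base_utils, alpha=1.0):
    """Multinomial logit choice probabilities with a loss-averse price term."""
    u = base_utils + alpha * gain_loss(prices, ref_prices)
    expu = np.exp(u - u.max())   # subtract max for numerical stability
    return expu / expu.sum()

# Hypothetical example: item 0 just came off a discount, so its reference
# price (4.0) sits below its current price (5.0); the loss term depresses
# its choice probability relative to its close substitute, item 1.
p = choice_probs(prices=np.array([5.0, 5.0]),
                 ref_prices=np.array([4.0, 5.0]),
                 base_utils=np.array([0.0, 0.0]))
```

Under this kind of specification, ending a discount pushes the item's price above the reference price formed during the discount, the loss term takes effect, and predicted demand shifts toward substitutes, which is the signature tested in the field data.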

In future work, BROAD should be widely applicable for testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, such as biological measurements like skin conductance, could be used to eliminate hypotheses more rapidly and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance:

20.00%

Abstract:

The laminar-to-turbulent transition process in boundary layer flows in thermochemical nonequilibrium at high enthalpy is measured and characterized. Experiments are performed in the T5 Hypervelocity Reflected Shock Tunnel at Caltech, using a 1 m long, 5-degree half-angle axisymmetric cone instrumented with 80 fast-response annular thermocouples, complemented by boundary layer stability computations using the STABL software suite. A new mixing tank is added to the shock tube fill apparatus for premixed freestream gas experiments, and a new cleaning procedure results in more consistent transition measurements.

Transition location is nondimensionalized using a scaling with the boundary layer thickness, which is correlated with the acoustic properties of the boundary layer, and compared with parabolized stability equation (PSE) analysis. In these nondimensionalized terms, transition delay with increasing CO2 concentration is observed: tests in 100% and 50% CO2, by mass, transition up to 25% and 15% later, respectively, than air experiments. These results are consistent with previous work indicating that CO2 molecules at elevated temperatures absorb acoustic instabilities in the MHz range, which is the expected frequency of the Mack second-mode instability at these conditions, and also consistent with predictions from PSE analysis. A strong unit Reynolds number effect is observed, which is believed to arise from tunnel noise. Transition N factors (N_Tr) for air from 5.4 to 13.2 are computed, substantially higher than previously reported for noisy facilities.

Time- and spatially-resolved heat transfer traces are used to track the propagation of turbulent spots; convection rates of 90%, 76%, and 63% of the boundary layer edge velocity are observed for the leading edge, centroid, and trailing edge of the spots, respectively. A model constructed with these spot propagation parameters is used to infer spot generation rates from the measured transition onset-to-completion distance.

Finally, a novel method to control transition location with boundary layer gas injection is investigated. An appropriate porous-metal injector section for the cone is designed and fabricated, and the efficacy of injected CO2 for delaying transition is gauged at various mass flow rates and compared with both no-injection and chemically inert argon injection cases. While CO2 injection seems to delay transition and argon injection seems to promote it, the experimental results are inconclusive, and matching computations do not predict a reduction in N factor for any CO2 injection condition computed.
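The N factor referenced here is the integrated amplification measure of the standard e^N method from linear stability/PSE analysis (the standard definition, not wording specific to this thesis):

```latex
N(x) \;=\; \ln\frac{A(x)}{A_{0}} \;=\; \int_{x_{0}}^{x} -\alpha_{i}(\xi)\,\mathrm{d}\xi
```

where A is the amplitude of the dominant disturbance (here, the Mack second mode), A_0 its amplitude at the neutral point x_0, and -α_i the local spatial growth rate; transition onset is correlated with N reaching the quoted N_Tr values.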

Relevance:

20.00%

Abstract:

An array of two spark chambers and six trays of plastic scintillation counters was used to search for unaccompanied fractionally charged particles in cosmic rays near sea level. No acceptable events were found with ionization energy losses between 0.04 and 0.7 times that of unit-charged minimum-ionizing particles. New 90%-confidence upper limits were thereby established for the fluxes of fractionally charged particles in cosmic rays, namely (1.04 ± 0.07) × 10^-10 and (2.03 ± 0.16) × 10^-10 cm^-2 sr^-1 sec^-1 for minimum-ionizing particles with charges 1/3 and 2/3, respectively.
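For context, a 90%-confidence flux upper limit from a null search is conventionally formed as follows (a standard construction with symbols defined here rather than taken from the thesis; the quoted uncertainties presumably propagate the uncertainty in the acceptance):

```latex
\Phi_{90} \;=\; \frac{N_{90}}{A\,\Omega\,T\,\epsilon},
\qquad N_{90} = 2.30 \;\text{for zero observed events (Poisson statistics)},
```

where AΩ is the geometric acceptance of the array, T the exposure time, and ε the detection efficiency.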

In order to be certain that the spark chambers could have functioned for the low levels of ionization expected from particles with small fractional charges, tests were conducted to estimate the efficiency of the chambers as they had been used in this experiment. These tests showed that the spark-chamber system with the track-selection criteria used might have been over 99% efficient for the entire range of energy losses considered.

Lower limits were then obtained for the mass of a quark by considering the above flux limits and a particular model for the production of quarks in cosmic rays. In this model, which involves the multi-peripheral Regge hypothesis, the production cross section and the corresponding mass limit depend critically on the Regge trajectory assigned to the quark. If quarks are "elementary," with a flat trajectory, the mass of a quark can be expected to be at least 6 ± 2 BeV/c^2. If quarks have a trajectory with unit slope, as the existing hadrons do, the mass of a quark might be as small as 1.3 ± 0.2 BeV/c^2. For a trajectory with unit slope and a mass larger than a couple of BeV/c^2, the production cross section may be so low that quarks might never be observed in nature.

Relevance:

20.00%

Abstract:

I. The 3.7 Å Crystal Structure of Horse Heart Ferricytochrome c.

The crystal structure of horse heart ferricytochrome c has been determined to a resolution of 3.7 Å using the multiple isomorphous replacement technique. Two isomorphous derivatives were used in the analysis, leading to a map with a mean figure of merit of 0.458. The quality of the resulting map was extremely high, even though the derivative data did not appear to be of high quality.

Although it was impossible to fit the known amino acid sequence to the calculated structure in an unambiguous way, many important features of the molecule could still be determined from the 3.7 Å electron density map. Among these was the fact that cytochrome c contains little or no α-helix. The polypeptide chain appears to be wound about the heme group in such a way as to form a loosely packed hydrophobic core in the molecule.

The heme group is located in a cleft on the molecule with one edge exposed to the solvent. The fifth coordinating ligand is His 18 and the sixth coordinating ligand is probably neither His 26 nor His 33.

The high resolution analysis of cytochrome c is now in progress and should be completed within the next year.

II. The Application of the Karle-Hauptman Tangent Formula to Protein Phasing.

The Karle-Hauptman tangent formula has been shown to be applicable to the refinement of previously determined protein phases. Tests were made with both the cytochrome c data from Part I and a theoretical structure based on the myoglobin molecule. The refinement process was found to be highly dependent upon the manner in which the tangent formula was applied. Iterative procedures did not work well, at least at low resolution.
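For reference, the tangent formula in its standard form, as usually written for normalized structure factors E (the textbook statement, not a reproduction of the thesis's notation):

```latex
\tan\phi_{\mathbf{h}} \;\approx\;
\frac{\sum_{\mathbf{k}} \lvert E_{\mathbf{k}} E_{\mathbf{h}-\mathbf{k}} \rvert
      \sin\!\left(\phi_{\mathbf{k}} + \phi_{\mathbf{h}-\mathbf{k}}\right)}
     {\sum_{\mathbf{k}} \lvert E_{\mathbf{k}} E_{\mathbf{h}-\mathbf{k}} \rvert
      \cos\!\left(\phi_{\mathbf{k}} + \phi_{\mathbf{h}-\mathbf{k}}\right)}
```

where φ_h is the phase of reflection h; refinement proceeds by re-estimating each phase from the current estimates of the others, which is why the manner of application (iterative or not) matters.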

The tangent formula worked very well in selecting the true phase from the two possible phase choices resulting from a single isomorphous replacement phase analysis. The only restriction on this application is that the heavy atoms form a non-centric cluster in the unit cell.

Pages 156 through 284 in this Thesis consist of previously published papers relating to the above two sections. References to these papers can be found on page 155.