23 results for Theoretical assumptions in CaltechTHESIS


Relevance: 30.00%

Abstract:

Consumption of addictive substances poses a challenge to economic models of rational, forward-looking agents. This dissertation presents a theoretical and empirical examination of consumption of addictive goods.

The theoretical model draws on evidence from psychology and neurobiology to improve on the standard assumptions used in intertemporal consumption studies. I model agents who may misperceive the severity of the future consequences of consuming addictive substances, and I allow an agent's environment to shape her preferences in the systematic way suggested by numerous studies finding that craving is induced by environmental cues associated with past substance use. The behavior of agents in this behavioral model of addiction can mimic the pattern of quitting and relapsing that is prevalent among users of addictive substances.

Chapter 3 presents an empirical analysis of the Becker and Murphy (1988) model of rational addiction using data on grocery store sales of cigarettes. This essay empirically tests the model's predictions concerning consumption responses to future and past price changes as well as the prediction that the response to an anticipated price change differs from the response to an unanticipated price change. In addition, I consider the consumption effects of three institutional changes that occur during the time period 1996 through 1999.

Relevance: 20.00%

Abstract:

Part I.

We have developed a technique for measuring the depth-time history of rigid body penetration into brittle materials (hard rocks and concretes) under a deceleration of ~10^5 g. The technique includes bar-coded projectile, sabot-projectile separation, detection, and recording systems. Because the technique can give very dense data on the penetration depth-time history, the penetration velocity can be deduced. Error analysis shows that the technique has a small intrinsic error of ~3-4% in time during penetration and of 0.3 to 0.7 mm in penetration depth. A series of 4140 steel projectile penetration experiments into G-mixture mortar targets has been conducted using the Caltech 40 mm gas/powder gun in the velocity range of 100 to 500 m/s.

We report, for the first time, the whole depth-time history of rigid body penetration into brittle materials (the G-mixture mortar) under ~10^5 g deceleration. Based on the experimental results, including the penetration depth-time history, damage to the recovered target and projectile materials, and theoretical analysis, we find:

1. Target materials are damaged via compacting in the region in front of a projectile and via brittle radial and lateral crack propagation in the region surrounding the penetration path. The results suggest that expected cracks in front of penetrators may be stopped by a comminuted region that is induced by wave propagation. Aggregate erosion on the projectile lateral surface is < 20% of the final penetration depth. This result suggests that the effect of lateral friction on the penetration process can be ignored.

2. Final penetration depth, Pmax, scales linearly with the initial projectile energy per unit cross-section area, es, when targets remain intact after impact. Based on the experimental data on the mortar targets, the relation is Pmax (mm) = 1.15 es (J/mm^2) + 16.39 (the empirical fits quoted in this list are collected in a short numerical sketch after item 9).

3. Estimation of the energy needed to create a unit penetration volume suggests that the average pressure acting on the target material during penetration is ~10 to 20 times higher than the unconfined strength of the target materials under quasi-static loading, and 3 to 4 times higher than the highest possible pressure due to friction and material strength and its rate dependence. In addition, the experimental data show that the interaction between cracks and the target free surface significantly affects the penetration process.

4. Based on the fact that the penetration duration, tmax, increases slowly with es and is approximately independent of projectile radius, the dependence of tmax on projectile length is suggested to be described by tmax (μs) = 2.08 es (J/mm^2) + 349.0 × m/(πR^2), in which m is the projectile mass in grams and R is the projectile radius in mm. The prediction from this relation is in reasonable agreement with the experimental data for different projectile lengths.

5. Deduced penetration velocity time histories suggest that the whole penetration history is divided into three stages: (1) an initial stage in which the projectile velocity changes little because of the very small contact area between the projectile and target materials; (2) a steady penetration stage in which the projectile velocity continues to decrease smoothly; (3) a penetration stop stage in which the projectile deceleration jumps up as the velocity approaches a critical value of ~35 m/s.

6. The deduced average deceleration, a, in the steady penetration stage for projectiles with the same dimensions is found to be a (g) = 192.4 v + 1.89 × 10^4, where v is the initial projectile velocity in m/s. The average pressure acting on target materials during penetration is estimated to be comparable to the shock wave pressure.

7. A similarity of the penetration process is found and described by a relation between normalized penetration depth, P/Pmax, and normalized penetration time, t/tmax, as P/Pmax = f(t/tmax), where f is a function of t/tmax. After f(t/tmax) is determined using experimental data for projectiles of 150 mm length, the penetration depth time history predicted by this relation for projectiles of 100 mm length is in good agreement with the experimental data. This similarity also predicts that average deceleration increases with decreasing projectile length, which is verified by the experimental data.

8. Based on the penetration process analysis and the present data, a first-principles model for rigid body penetration is suggested. The model incorporates sub-models for the contact area between projectile and target materials, the friction coefficient, the penetration stop criterion, and the normal stress on the projectile surface. The most important assumptions used in the model are: (1) the penetration process can be treated as a series of impact events, so the pressure normal to the projectile surface is estimated using the Hugoniot relation of the target material; (2) the necessary condition for penetration is that the pressure acting on target materials is not lower than the Hugoniot elastic limit; (3) the friction force on the projectile lateral surface can be ignored because of cavitation during penetration. All the parameters involved in the model are determined from independent experimental data. The penetration depth time histories predicted from the model are in good agreement with the experimental data.

9. Based on planar impact and previous quasi-static experimental data, the strain rate dependence of the mortar compressive strength is described by σf/σ0f = exp(0.0905 (log(ε̇/ε̇_0))^1.14) in the strain rate range of 10^-7/s to 10^3/s (σ0f and ε̇_0 are the reference compressive strength and strain rate, respectively). The non-dispersive Hugoniot elastic wave in the G-mixture has an amplitude of ~0.14 GPa and a velocity of ~4.3 km/s.
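The empirical fits quoted in items 2, 4, 6, and 9 above are simple enough to collect in a short numerical sketch. The coefficients and units are taken verbatim from the abstract; the function names, the example inputs, the base-10 reading of "log", and the choice of reference strain rate are illustrative assumptions only.

```python
import math

def penetration_depth_mm(es_J_per_mm2):
    """Item 2: final penetration depth Pmax (mm) vs. initial projectile energy
    per unit cross-section area es (J/mm^2), for targets that remain intact."""
    return 1.15 * es_J_per_mm2 + 16.39

def penetration_duration_us(es_J_per_mm2, mass_g, radius_mm):
    """Item 4: penetration duration tmax (microseconds); m/(pi R^2) is the
    projectile mass per unit cross-section area."""
    return 2.08 * es_J_per_mm2 + 349.0 * mass_g / (math.pi * radius_mm**2)

def steady_stage_deceleration_g(v0_m_per_s):
    """Item 6: average deceleration (in g) during the steady penetration stage
    vs. initial projectile velocity (m/s), for fixed projectile dimensions."""
    return 192.4 * v0_m_per_s + 1.89e4

def strength_ratio(strain_rate, reference_rate=1e-7):
    """Item 9: strain-rate dependence of mortar compressive strength,
    sigma_f / sigma_0f, valid for strain rates of 1e-7/s to 1e3/s.
    'log' is read as base 10 and the reference rate is an assumption."""
    return math.exp(0.0905 * (math.log10(strain_rate / reference_rate)) ** 1.14)

# Example: a projectile delivering es = 100 J/mm^2
print(penetration_depth_mm(100.0))         # ~131 mm
print(steady_stage_deceleration_g(300.0))  # ~7.7e4 g
```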

Part II.

Stress wave profiles in vitreous GeO2 were measured using piezoresistance gauges in the pressure range of 5 to 18 GPa under planar plate and spherical projectile impact. Experimental data show that the response of vitreous GeO2 to planar shock loading can be divided into three stages: (1) a ramp elastic precursor with a peak amplitude of 4 GPa and a peak particle velocity of 333 m/s, in which the wave velocity decreases from the initial longitudinal elastic wave velocity of 3.5 km/s to 2.9 km/s at 4 GPa; (2) a ramp wave with an amplitude of 2.11 GPa that follows the precursor when the peak loading pressure is 8.4 GPa, in which the wave velocity drops below the bulk wave velocity; (3) a shock wave achieving the final shock state, which forms when the peak pressure is > 6 GPa. The Hugoniot relation is D = 0.917 + 1.711u (km/s), using the present data and the data of Jackson and Ahrens [1979], for shock wave pressures between 6 and 40 GPa and ρ0 = 3.655 g/cm^3. Based on the present data, the phase change from 4-fold to 6-fold coordination of Ge^4+ with O^2- in vitreous GeO2 occurs in the pressure range of 4 to 15 ± 1 GPa under planar shock loading. Comparison of the shock loading data for fused SiO2 with those for vitreous GeO2 demonstrates that the transformation to the rutile structure in both media is similar. The Hugoniots of vitreous GeO2 and fused SiO2 are found to coincide approximately if the pressure in fused SiO2 is scaled by the ratio of fused SiO2 to vitreous GeO2 density. This result, as well as their structural similarity, provides the basis for considering vitreous GeO2 as an analogue material to fused SiO2 under shock loading. Experimental results from the spherical projectile impact demonstrate: (1) the supported elastic shock in fused SiO2 decays less rapidly than a linear elastic wave when the elastic wave stress amplitude is higher than 4 GPa, whereas the supported elastic shock in vitreous GeO2 decays faster than a linear elastic wave; (2) in vitreous GeO2, unsupported shock waves with peak pressure in the phase transition range (4-15 GPa) decay with propagation distance, x, as ∝ x^-3.35, close to the prediction of Chen et al. [1998]. Based on a simple analysis of spherical wave propagation, we find that the different decay rates of a spherical elastic wave in fused SiO2 and vitreous GeO2 are predictable on the basis of the compressibility variation with stress under one-dimensional strain conditions in the two materials.
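As an illustration of how the linear D-u fit above translates into shock pressure, the Rankine-Hugoniot momentum jump condition P = ρ0·D·u can be applied directly; with ρ0 in g/cm^3 and velocities in km/s the result is in GPa. This is a generic textbook relation and example, not a calculation reported in the abstract.

```python
def hugoniot_pressure_GPa(u_km_s, rho0=3.655, c0=0.917, s=1.711):
    """Shock pressure from the Rankine-Hugoniot jump condition P = rho0 * D * u,
    using the linear fit D = c0 + s*u quoted above for vitreous GeO2.
    rho0 in g/cm^3 and velocities in km/s give pressure in GPa."""
    D = c0 + s * u_km_s
    return rho0 * D * u_km_s

# A particle velocity of ~1.5 km/s corresponds to a shock pressure of roughly 19 GPa.
print(round(hugoniot_pressure_GPa(1.5), 1))
```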

Relevance: 20.00%

Abstract:

The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and the missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^-1) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity results for SUSY to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC using 4.7 ± 0.1 fb^-1 of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
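For readers unfamiliar with the razor construction, a minimal sketch of one commonly quoted definition of MR and R, computed from the two "megajet" four-momenta and the missing transverse momentum, is given below. The exact conventions used in the CMS analyses (frames, boosts, megajet clustering) are more involved than this sketch, and the toy event is purely illustrative.

```python
import math

def razor_variables(j1, j2, met):
    """Compute the razor variables (MR, R) from two megajet four-vectors and
    the missing transverse momentum vector.
    j1, j2: (E, px, py, pz); met: (mex, mey).
    Commonly quoted hadronic-frame definitions:
      MR  = sqrt( (|p1| + |p2|)^2 - (p1z + p2z)^2 )
      MTR = sqrt( ( MET*(pT1 + pT2) - MET_vec.(pT1_vec + pT2_vec) ) / 2 )
      R   = MTR / MR
    """
    _, p1x, p1y, p1z = j1
    _, p2x, p2y, p2z = j2
    mex, mey = met
    p1 = math.sqrt(p1x**2 + p1y**2 + p1z**2)
    p2 = math.sqrt(p2x**2 + p2y**2 + p2z**2)
    pt1 = math.hypot(p1x, p1y)
    pt2 = math.hypot(p2x, p2y)
    met_mag = math.hypot(mex, mey)

    mr = math.sqrt((p1 + p2)**2 - (p1z + p2z)**2)
    mtr = math.sqrt(max(0.0, (met_mag * (pt1 + pt2)
                              - (mex * (p1x + p2x) + mey * (p1y + p2y))) / 2.0))
    return mr, mtr / mr

# Toy event (GeV): two roughly back-to-back megajets plus 100 GeV of MET along +x.
print(razor_variables((210, 200, 0, 50), (160, -150, 0, -40), (100, 0)))
```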

With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H → ZZ* → 4-lepton channel. Adaptations of the razor kinematic variables are introduced for the H → WW* → 2-lepton/2-neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H → ZZ* → 4-lepton final state, particularly for measuring CP violation in these decays.

Relevance: 20.00%

Abstract:

Part I

Particles are a key feature of planetary atmospheres. On Earth they represent the greatest source of uncertainty in the global energy budget. This uncertainty can be addressed by making more measurements, by improving the theoretical analysis of measurements, and by better modeling basic particle nucleation and initial particle growth within an atmosphere. This work focuses on the latter two methods of improvement.

Uncertainty in measurements is largely due to particle charging. Accurate descriptions of particle charging are challenging because one deals with particles in a gas, as opposed to a vacuum, so different length scales come into play. Previous studies have considered the effects of the transition between the continuum and kinetic regimes and the effects of two- and three-body interactions within the kinetic regime. These studies, however, used questionable assumptions about the charging process, which resulted in skewed observations and bias in the proposed dynamics of aerosol particles. These assumptions affect both the ions and the particles in the system. Ions are assumed to be point monopoles that have a single characteristic speed rather than following a distribution. Particles are assumed to be perfect conductors that carry up to five elementary charges. The effects of three-body (ion-molecule-particle) interactions are also overestimated. By revising this theory so that the basic physical attributes of both ions and particles and their interactions are better represented, we are able to make more accurate predictions of particle charging in both the kinetic and continuum regimes.

The same revised theory that was used above to model ion charging can also be applied to the flux of neutral vapor phase molecules to a particle or initial cluster. Using these results we can model the vapor flux to a neutral or charged particle due to diffusion and electromagnetic interactions. In many classical theories currently applied to these models, the finite size of the molecule and the electromagnetic interaction between the molecule and particle, especially for the neutral particle case, are completely ignored, or, as is often the case for a permanent dipole vapor species, strongly underestimated. Comparing our model to these classical models we determine an “enhancement factor” to characterize how important the addition of these physical parameters and processes is to the understanding of particle nucleation and growth.
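To make the notion of an "enhancement factor" concrete, a minimal sketch is given below: it simply compares an interaction-aware model flux (left as a user-supplied quantity) against the two classical limits, the free-molecular (kinetic) flux and the continuum diffusion-limited flux. The prefactors are the standard textbook ones; the model flux itself is a placeholder, not the revised theory developed in the thesis.

```python
import math

KB = 1.380649e-23  # Boltzmann constant, J/K

def free_molecular_flux(n_inf, T, mass_kg, r_p):
    """Classical kinetic (free-molecular) flux of vapor molecules to a sphere of
    radius r_p: J = pi * r_p^2 * c_bar * n_inf, with c_bar the mean thermal speed."""
    c_bar = math.sqrt(8.0 * KB * T / (math.pi * mass_kg))
    return math.pi * r_p**2 * c_bar * n_inf

def continuum_flux(n_inf, D, r_p):
    """Classical diffusion-limited (continuum) flux: J = 4*pi*D*r_p*n_inf."""
    return 4.0 * math.pi * D * r_p * n_inf

def enhancement_factor(model_flux, classical_flux):
    """Enhancement factor: how much an interaction-aware model flux exceeds
    (or falls short of) the corresponding classical estimate."""
    return model_flux / classical_flux
```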

Part II

Whispering gallery mode (WGM) optical biosensors are capable of extraordinarily sensitive specific and non-specific detection of species suspended in a gas or fluid. Recent experimental results suggest that these devices may attain single-molecule sensitivity to protein solutions in the form of stepwise shifts in their resonance wavelength, λ_R, but present sensor models predict much smaller steps than were reported. This study examines the physical interaction between a WGM sensor and a molecule adsorbed to its surface, exploring assumptions made in previous efforts to model WGM sensor behavior and describing computational schemes that model the experiments for which single-protein sensitivity was reported. The resulting model is used to simulate sensor performance within the constraints imposed by the limited material property data. On this basis, we conclude that nonlinear optical effects would be needed to attain the reported sensitivity, and that, in the experiments for which extreme sensitivity was reported, a bound protein experiences optical energy fluxes too high for such effects to be ignored.
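For context, the first-order perturbation ("reactive") estimate that conventional WGM sensor models start from relates the fractional wavelength shift to the excess polarizability of the adsorbed molecule and the mode field at the binding site. The sketch below is that textbook estimate, with illustrative argument names; it is not the nonlinear model developed in the thesis.

```python
def reactive_shift(wavelength, alpha_excess, e_field_at_site_sq, mode_energy_integral):
    """First-order perturbation estimate of a WGM resonance shift:
        d_lambda / lambda = alpha_ex * |E(r_s)|^2 / (2 * integral eps*|E|^2 dV)
    wavelength:            unperturbed resonance wavelength
    alpha_excess:          excess polarizability of the bound molecule
    e_field_at_site_sq:    |E|^2 of the unperturbed mode at the binding site
    mode_energy_integral:  integral of eps*|E|^2 over the mode volume
    All quantities are assumed to be supplied in consistent SI units."""
    return wavelength * alpha_excess * e_field_at_site_sq / (2.0 * mode_energy_integral)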

Relevance: 20.00%

Abstract:

Disorder and interactions both play crucial roles in quantum transport. Decades ago, Mott showed that electron-electron interactions can lead to insulating behavior in materials that conventional band theory predicts to be conducting. Soon thereafter, Anderson demonstrated that disorder can localize a quantum particle through the wave interference phenomenon of Anderson localization. Although interactions and disorder both separately induce insulating behavior, the interplay of these two ingredients is subtle and often leads to surprising behavior at the periphery of our current understanding. Modern experiments probe these phenomena in a variety of contexts (e.g. disordered superconductors, cold atoms, photonic waveguides, etc.); thus, theoretical and numerical advancements are urgently needed. In this thesis, we report progress on understanding two contexts in which the interplay of disorder and interactions is especially important.

The first is the so-called “dirty” or random boson problem. In the past decade, a strong-disorder renormalization group (SDRG) treatment by Altman, Kafri, Polkovnikov, and Refael has raised the possibility of a new unstable fixed point governing the superfluid-insulator transition in the one-dimensional dirty boson problem. This new critical behavior may take over from the weak-disorder criticality of Giamarchi and Schulz when disorder is sufficiently strong. We analytically determine the scaling of the superfluid susceptibility at the strong-disorder fixed point and connect our analysis to recent Monte Carlo simulations by Hrahsheh and Vojta. We then shift our attention to two dimensions and use a numerical implementation of the SDRG to locate the fixed point governing the superfluid-insulator transition there. We identify several universal properties of this transition, which are fully independent of the microscopic features of the disorder.

The second focus of this thesis is the interplay of localization and interactions in systems with high energy density (i.e., far from the usual low energy limit of condensed matter physics). Recent theoretical and numerical work indicates that localization can survive in this regime, provided that interactions are sufficiently weak. Stronger interactions can destroy localization, leading to a so-called many-body localization transition. This dynamical phase transition is relevant to questions of thermalization in isolated quantum systems: it separates a many-body localized phase, in which localization prevents transport and thermalization, from a conducting (“ergodic”) phase in which the usual assumptions of quantum statistical mechanics hold. Here, we present evidence that many-body localization also occurs in quasiperiodic systems that lack true disorder.
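As a concrete example of the kind of quasiperiodic system referred to above, the sketch below builds the single-particle Aubry-André Hamiltonian, whose cosine potential is incommensurate with the lattice; in this non-interacting limit all states localize once the potential strength exceeds twice the hopping. Interactions (not included here) are what turn this into a many-body localization problem, and the parameter values are illustrative only.

```python
import numpy as np

def aubry_andre_hamiltonian(L, t=1.0, delta=1.5, phi=0.3):
    """Single-particle Aubry-Andre Hamiltonian on L sites (open boundaries):
    nearest-neighbor hopping t plus a quasiperiodic potential
    delta * cos(2*pi*beta*i + phi), with beta the inverse golden ratio.
    In this non-interacting model all eigenstates localize for delta > 2t."""
    beta = (np.sqrt(5.0) - 1.0) / 2.0
    sites = np.arange(L)
    H = np.diag(delta * np.cos(2.0 * np.pi * beta * sites + phi))
    H += np.diag(-t * np.ones(L - 1), k=1) + np.diag(-t * np.ones(L - 1), k=-1)
    return H

# Inverse participation ratio of the eigenstates: large IPR signals localization.
H = aubry_andre_hamiltonian(200, delta=2.5)
_, vecs = np.linalg.eigh(H)
ipr = np.sum(np.abs(vecs) ** 4, axis=0)
print(ipr.mean())
```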

Relevance: 20.00%

Abstract:

Chapter 1

Cyclobutanediyl has been studied in both its singlet and triplet states by ab initio electronic structure theory. The triplet, which is the ground state of the molecule, exists in both C_(2h) and C_(2v) forms, which interconvert via a C_s transition state. For the singlet, only a C_(2h) form is found. It passes, via a C_s transition state, onto the C_(2v) surface, on which bicyclobutane is the only minimum. The ring-flipping (inversion) process in bicyclobutane includes the singlet biradical as an intermediate and involves a novel, non-least-motion pathway. Semiclassical periodic orbit theory indicates that the various minima on both the singlet and triplet surfaces can interconvert via quantum mechanical tunneling.

Chapter 2

The dimethylenepolycyclobutadienes (n) are the non-Kekulé analogues of the classical acenes. Application of a variety of theoretical methods reveals several novel features of such structures. Most interesting is the emergence of a parity rule. When n is even, n is predicted to be a singlet, with n disjoint NBMOs. When n is odd, theory predicts a triplet ground state with (n+1) NBMOs that are not fully disjoint.

Chapter 3

Bi(cyclobutadienyl) (2), the cyclobutadiene analogue of biphenyl, and its homologues tri- (3) and tetra(cyclobutadienyl) (4) have been studied using electronic structure theory. Ab initio calculations on 2 reveal that the central bond is a true double bond, and that the structure is best thought of as two allyl radicals plus an ethylene. The singlet and triplet states are essentially degenerate. Trimer 3 is two allyls plus a dimethylenecyclobutanediyl, while 4 is two coplanar bi(cyclobutadienyl) units connected by a single bond. For both 3 and 4, the quintet, triplet, and singlet states are essentially degenerate, indicating that they are tetraradicals. The infinite polymer, polycyclobutadiene, has been studied by HMO, EHCO, and VEH methods. Several geometries based on the structures of 3 and 4 have been studied, and the band structures are quite intriguing. A novel crossing between the valence and conduction bands produces a small band gap and a high density of states at the Fermi level.

Chapter 4

At the level of Hückel theory, polyfulvene has a HOCO-LUCO degeneracy much like that seen in polyacetylene. Higher levels of theory remove the degeneracy, but the band gap (E_g) is predicted to be significantly smaller than analogous structures such as polythiophene and polypyrrole at the fulvenoid geometry. An alternative geometry, which we have termed quinoid, is also conceivable for polyfulvene, and it is predicted to have a much larger E_g. The effects of benzannelation to produce analogues of polyisothianaphthene have been evaluated. We propose a new model for such structures based on conventional orbital mixing arguments. Several of the proposed structures have quite interesting properties, which suggest that they are excellent candidates for conducting polymers.

Chapter 5

Theoretical studies of polydimethylenecyclobutene and polydiisopropylidenecyclobutene reveal that, because of steric crowding, they cannot achieve a planar, fully conjugated structure in either their undoped or doped states. Rather, the structure consists of essentially orthogonal hexatriene units. Such a structure is incompatible with conventional conduction mechanisms involving polarons and bipolarons.

Relevance: 20.00%

Abstract:

Observational and theoretical work towards the separation of foreground emission from the cosmic microwave background is described. The bulk of this work is in the design, construction, and commissioning of the C-Band All-Sky Survey (C-BASS), an experiment to produce a template of the Milky Way Galaxy's polarized synchrotron emission. Theoretical work is the derivation of an analytical approximation to the emission spectrum of spinning dust grains.

The performance of the C-BASS experiment is demonstrated through a preliminary, deep survey of the North Celestial Pole region. A comparison to multiwavelength data is performed, and the thermal and systematic noise properties of the experiment are explored. The systematic noise has been minimized through careful data processing algorithms, implemented both in the experiment's Field Programmable Gate Array (FPGA) based digital backend and in the data analysis pipeline. Detailed descriptions of these algorithms are presented.

The analytical function of spinning dust emission is derived through the application of careful approximations, with each step tested against numerical calculations. This work is intended for use in the parameterized separation of cosmological foreground components and as a framework for interpreting and comparing the variety of anomalous microwave emission observations.

Relevance: 20.00%

Abstract:

Notch signaling acts in many diverse developmental spatial patterning processes. To better understand why this particular pathway is employed where it is and how downstream feedbacks interact with the signaling system to drive patterning, we have pursued three aims: (i) to quantitatively measure the Notch system's signal input/output (I/O) relationship in cell culture, (ii) to use the quantitative I/O relationship to computationally predict patterning outcomes of downstream feedbacks, and (iii) to reconstitute a Notch-mediated lateral induction feedback (in which Notch signaling upregulates the expression of Delta) in cell culture. The quantitative Notch I/O relationship revealed that in addition to the trans-activation between Notch and Delta on neighboring cells there is also a strong, mutual cis-inactivation between Notch and Delta on the same cell. This feature tends to amplify small differences between cells. Incorporating our improved understanding of the signaling system into simulations of different types of downstream feedbacks and boundary conditions lent us several insights into their function. The Notch system converts a shallow gradient of Delta expression into a sharp band of Notch signaling without any sort of feedback at all, in a system motivated by the Drosophila wing vein. It also improves the robustness of lateral inhibition patterning, where signal downregulates ligand expression, by removing the requirement for explicit cooperativity in the feedback and permitting an exceptionally simple mechanism for the pattern. When coupled to a downstream lateral induction feedback, the Notch system supports the propagation of a signaling front across a tissue to convert a large area from one state to another with only a local source of initial stimulation. It is also capable of converting a slowly-varying gradient in parameters into a sharp delineation between high- and low-ligand populations of cells, a pattern reminiscent of smooth muscle specification around artery walls. Finally, by implementing a version of the lateral induction feedback architecture modified with the addition of an autoregulatory positive feedback loop, we were able to generate cells that produce enough cis ligand when stimulated by trans ligand to themselves transmit signal to neighboring cells, which is the hallmark of lateral induction.
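A minimal two-cell sketch of a mutual cis-inactivation model in the spirit described above is given below: Notch-Delta pairs annihilate in trans (producing signal) and in cis (producing nothing), so a small asymmetry in Delta production is amplified. The equation structure, parameter names, and values are illustrative placeholders, not the fitted model from the thesis.

```python
from scipy.integrate import solve_ivp

def two_cell_notch(t, y, betaN=10.0, betaD=(12.0, 8.0), gamma=1.0, kt=2.0, kc=0.5):
    """Minimal two-cell model with trans-activation and mutual cis-inactivation.
    y = [N1, D1, N2, D2]; cell 2 produces slightly less Delta than cell 1.
    Notch-Delta pairs are removed in trans (rate ~ 1/kt, generating signal) and
    in cis (rate ~ 1/kc, generating nothing). All values are illustrative."""
    N1, D1, N2, D2 = y
    dN1 = betaN - gamma * N1 - N1 * D2 / kt - N1 * D1 / kc
    dD1 = betaD[0] - gamma * D1 - D1 * N2 / kt - D1 * N1 / kc
    dN2 = betaN - gamma * N2 - N2 * D1 / kt - N2 * D2 / kc
    dD2 = betaD[1] - gamma * D2 - D2 * N1 / kt - D2 * N2 / kc
    return [dN1, dD1, dN2, dD2]

sol = solve_ivp(two_cell_notch, (0.0, 50.0), [1.0, 1.0, 1.0, 1.0])
N1, D1, N2, D2 = sol.y[:, -1]
# Trans signal received by each cell (kt = 2.0 default); cis-inactivation
# amplifies the modest D1 vs D2 production asymmetry into a signaling bias.
print("signal into cell 1:", N1 * D2 / 2.0, "signal into cell 2:", N2 * D1 / 2.0)
```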

Relevance: 20.00%

Abstract:

This thesis summarizes the application of conventional and modern electron paramagnetic resonance (EPR) techniques to establish proximity relationships between paramagnetic metal centers in metalloproteins, and between metal centers and magnetic ligand nuclei, in two important and timely membrane proteins: succinate:ubiquinone oxidoreductase (SQR) from Paracoccus denitrificans and particulate methane monooxygenase (pMMO) from Methylococcus capsulatus. Such proximity relationships are thought to be critical to the biological function and the associated biochemistry mediated by the metal centers in these proteins. A mechanistic understanding of biological function relies heavily on structure-function relationships and on knowledge of how the molecular structure and electronic properties of the metal centers influence reactivity in metalloenzymes. EPR spectroscopy has proven to be one of the most powerful techniques for obtaining information about interactions between metal centers as well as for defining ligand structures. SQR is an electron transport enzyme wherein the substrates and the organic and metallic cofactors are held relatively far apart. Here, the proximity relationships of the metallic cofactors were studied through their weak spin-spin interactions by means of EPR power saturation and electron spin-lattice relaxation (T_1) measurements, with the enzyme poised at designated reduction levels. Analysis of the electron T_1 measurements for the S-3 center when the b-heme is paramagnetic allowed a detailed treatment of the dipolar interactions and a distance determination between the two interacting metal centers. Studies of the ligand environment of the metal centers by electron spin echo envelope modulation (ESEEM) spectroscopy resulted in the identification of peptide nitrogens as coupled nuclei in the environment of the S-1 and S-3 centers.
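For orientation, in the point-dipole approximation the through-space dipolar coupling between two S = 1/2, g ≈ 2 centers falls off as 1/r^3, with a frequently quoted prefactor of about 52 MHz·nm^3; the sketch below simply inverts that relation to estimate an inter-center distance from a dipolar coupling frequency. This is the generic textbook relation, not the specific relaxation-based analysis carried out for the S-3 center and the b-heme.

```python
def dipolar_distance_nm(nu_dd_MHz, prefactor_MHz_nm3=52.04):
    """Point-dipole estimate of the distance between two weakly coupled
    S = 1/2, g ~ 2 electron spins from the dipolar coupling frequency:
        nu_dd = prefactor / r^3   =>   r = (prefactor / nu_dd)^(1/3)
    The prefactor ~52.04 MHz nm^3 is the standard value for g1 = g2 = 2.0023."""
    return (prefactor_MHz_nm3 / nu_dd_MHz) ** (1.0 / 3.0)

# A 1 MHz dipolar coupling corresponds to roughly 3.7 nm separation.
print(round(dipolar_distance_nm(1.0), 2))
```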

Finally, an EPR model was developed to describe the ferromagnetically coupled trinuclear copper clusters in pMMO when the enzyme is oxidized. The Cu(II) ions in these clusters appear to be strongly exchange coupled, and the EPR is consistent with equilateral triangular arrangements of type 2 copper ions. These results offer the first glimpse of the magneto-structural correlations for a trinuclear copper cluster of this type, which, until the work on pMMO, has had no precedent in the metalloprotein literature. Such trinuclear copper clusters are rare even in synthetic models.

Relevance: 20.00%

Abstract:

An electrostatic mechanism for the flocculation of charged particles by polyelectrolytes of opposite charge is proposed. The difference between this and previous electrostatic coagulation mechanisms is the formation of charged polyion patches on the oppositely charged surfaces. The size of a patch is primarily a function of polymer molecular weight and the total patch area is a function of the amount of polymer adsorbed. The theoretical predictions of the model agree with the experimental dependence of the polymer dose required for flocculation on polymer molecular weight and solution ionic strength.

A theoretical analysis based on the Derjaguin-Landau-Verwey-Overbeek electrical double layer theory and on statistical mechanical treatments of adsorbed polymer configurations indicates that flocculation of charged particles in aqueous solutions by polyelectrolytes of opposite charge does not occur by the commonly accepted polymer-bridge mechanism.

A series of 1,2-dimethyl-5-vinylpyridinium bromide polymers with a molecular weight range of 6x10^3 to 5x10^6 was synthesized and used to flocculate dilute polystyrene latex and silica suspensions in solutions of various ionic strengths. It was found that with high molecular weight polymers and/or high ionic strengths, the polymer dose required for flocculation is independent of molecular weight. With low molecular weights and/or low ionic strengths, the flocculation dose decreases with increasing molecular weight.

Relevance: 20.00%

Abstract:

Algorithmic DNA tiles systems are fascinating. From a theoretical perspective, they can result in simple systems that assemble themselves into beautiful, complex structures through fundamental interactions and logical rules. As an experimental technique, they provide a promising method for programmably assembling complex, precise crystals that can grow to considerable size while retaining nanoscale resolution. In the journey from theoretical abstractions to experimental demonstrations, however, lie numerous challenges and complications.

In this thesis, to examine these challenges, we consider the physical principles behind DNA tile self-assembly. We survey recent progress in experimental algorithmic self-assembly, and explain the simple physical models behind this progress. Using direct observation of individual tile attachments and detachments with an atomic force microscope, we test some of the fundamental assumptions of the widely-used kinetic Tile Assembly Model, obtaining results that fit the model to within error. We then depart from the simplest form of that model, examining the effects of DNA sticky end sequence energetics on tile system behavior. We develop theoretical models, sequence assignment algorithms, and a software package, StickyDesign, for sticky end sequence design.
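For readers unfamiliar with the kinetic Tile Assembly Model mentioned above, its core is two competing rates: tiles attach at a rate set by their free concentration (parameterized by G_mc) and detach at a rate set by how many correct sticky-end bonds b they make (each worth G_se). The sketch below writes those rates out; parameter values are illustrative, not fitted to the AFM data described in the thesis.

```python
import math

def ktam_rates(b, g_mc=17.0, g_se=9.2, k_f=1.0e6):
    """Kinetic Tile Assembly Model rates (Winfree-style parameterization).
    Attachment:  r_f     = k_f * exp(-G_mc)       (per site, per tile type)
    Detachment:  r_{r,b} = k_f * exp(-b * G_se)   (tile held by b correct bonds)
    Growth by correct two-bond attachments is favored roughly when G_mc < 2*G_se."""
    r_attach = k_f * math.exp(-g_mc)
    r_detach = k_f * math.exp(-b * g_se)
    return r_attach, r_detach

for b in (1, 2, 3):
    r_on, r_off = ktam_rates(b)
    print(f"bonds={b}: attach {r_on:.3g}/s, detach {r_off:.3g}/s")
```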

As a demonstration of a specific tile system, we design a binary counting ribbon that can accurately count from a programmable starting value and stop growing after overflowing, resulting in a single system that can construct ribbons of precise and programmable length. In the process of designing the system, we explain numerous considerations that provide insight into more general tile system design, particularly with regards to tile concentrations, facet nucleation, the construction of finite assemblies, and design beyond the abstract Tile Assembly Model.

Finally, we present our crystals that count: experimental results with our binary counting system that represent a significant improvement in the accuracy of experimental algorithmic self-assembly, including crystals that count perfectly with 5 bits from 0 to 31. We show some preliminary experimental results on the construction of our capping system to stop growth after counters overflow, and offer some speculation on potential future directions of the field.

Relevance: 20.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. Now there are a plethora of models, based on different assumptions, applicable in differing contextual settings, and selecting the right model to use tends to be an ad-hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioral theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop a methodology of adaptive tests, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which informs the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which leads to orders-of-magnitude speedups over other methods.
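A schematic of the EC2-style greedy selection described above is sketched below: hypotheses are grouped into equivalence classes (the competing theories), edges connect hypotheses in different classes with weight equal to the product of their priors, and each candidate test is scored by the expected weight of edges it cuts, i.e. the expected mass of hypothesis pairs it distinguishes. This is a bare-bones illustration with noiseless, deterministic test outcomes; the full BROAD machinery (noise handling, accelerated greedy, payoff-relevant lotteries) is considerably richer.

```python
from itertools import combinations

def edge_weight(prior, cls):
    """Total weight of edges between hypotheses that lie in different classes."""
    return sum(prior[a] * prior[b]
               for a, b in combinations(prior, 2) if cls[a] != cls[b])

def expected_cut(test, prior, cls, predict):
    """Expected weight of edges cut by a test, where predict[(test, h)] is the
    (deterministic) outcome that hypothesis h predicts for that test."""
    before = edge_weight(prior, cls)
    total, cut = sum(prior.values()), 0.0
    for outcome in {predict[(test, h)] for h in prior}:
        survivors = {h: p for h, p in prior.items() if predict[(test, h)] == outcome}
        p_outcome = sum(survivors.values()) / total
        cut += p_outcome * (before - edge_weight(survivors, cls))
    return cut

def choose_next_test(tests, prior, cls, predict):
    """Greedy EC2-style choice: pick the test with the largest expected cut."""
    return max(tests, key=lambda t: expected_cut(t, prior, cls, predict))

# Toy example: two theories (A, B), three parameterized hypotheses, two tests.
prior = {"A1": 0.3, "A2": 0.3, "B1": 0.4}
cls = {"A1": "A", "A2": "A", "B1": "B"}
predict = {("t1", "A1"): "left", ("t1", "A2"): "left", ("t1", "B1"): "right",
           ("t2", "A1"): "left", ("t2", "A2"): "right", ("t2", "B1"): "right"}
print(choose_next_test(["t1", "t2"], prior, cls, predict))  # "t1" separates A from B
```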

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types showed that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e. subjects could mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation is ruled out since it is infeasible in practice, and also since we do not find any signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models: quasi-hyperbolic (α, β) discounting and fixed cost discounting, and generalized-hyperbolic discounting. 40 subjects from UCLA were given choices between 2 options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting, and most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioral theories in the "wild". We pay attention to prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is being offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications that consumer loss aversion entails, and strategies for competitive pricing.
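A minimal version of the kind of discrete choice model described above can be written as a multinomial logit whose utility includes a reference-dependent, loss-averse price term: price increases above the reference price are penalized more heavily (by a factor lambda > 1) than equivalent discounts are rewarded. The functional form, parameter names, and values below are illustrative assumptions; the production model and the retailer data are not reproduced here.

```python
import numpy as np

def loss_averse_utility(price, ref_price, alpha=1.0, eta=0.5, lam=2.25):
    """Reference-dependent utility of an item:
      -alpha*price + eta*(ref - price)        if price <= ref_price (a 'gain')
      -alpha*price + lam*eta*(ref - price)    if price >  ref_price (a 'loss')
    lam > 1 encodes loss aversion (2.25 is the classic Tversky-Kahneman value)."""
    diff = ref_price - price
    gain_loss = eta * diff if diff >= 0 else lam * eta * diff
    return -alpha * price + gain_loss

def choice_probabilities(prices, ref_prices, **kw):
    """Multinomial logit choice probabilities across the offered items."""
    u = np.array([loss_averse_utility(p, r, **kw) for p, r in zip(prices, ref_prices)])
    expu = np.exp(u - u.max())
    return expu / expu.sum()

# Two substitutes with a $5 reference price; discounting item 0 to $4 raises its
# share beyond what the price drop alone (lam = 1) would predict.
print(choice_probabilities([4.0, 5.0], [5.0, 5.0]))
```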

In future work, BROAD can be widely applied to testing different behavioural models, e.g. in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 20.00%

Abstract:

The Young's modulus, stress-strain curves, and failure properties of glass bead-filled EPDM vulcanizates were studied under superposed hydrostatic pressure. The glass bead-filled EPDM was employed as a representation of composite systems, and the hydrostatic pressure controls the filler-elastomer separation under deformation. This separation shows up as a volume change of the system, and its influence is reflected in the mechanical behavior as a reinforcing effect of variable degree.

The strain energy stored in the composite system in simple tension was calculated by introducing a model described as a cylindrical block of elastomer with a half sphere of filler at each end, their centers lying on the axis of the cylinder. In the derivation of the strain energy, assumptions were made to obtain the strain distribution in the model, and a strain energy-strain relation for the elastomer was also assumed. The derivation was carried out for the case of no filler-elastomer separation and was modified to include the case of filler-elastomer separation.

The resulting strain energy, as a function of stretch ratio and volume of the system, was used to obtain stress-strain curves and volume change-strain curves of composite systems under superposed hydrostatic pressure.

Changes in the force and the lateral dimension of a ring specimen were measured as it was stretched axially under superposed hydrostatic pressure in order to calculate the mechanical properties mentioned above. A tensile tester capable of sealing the whole system was used to carry out measurements under pressure. A thickness-measuring device, based on the Hall effect, was built for measuring changes in the lateral dimension of a specimen.

The theoretical and experimental results of Young's modulus and stress-strain curves were compared and showed fairly good agreement.

The failure data were discussed in terms of failure surfaces, and it was concluded that a failure surface of the glass-bead-filled EPDM consists of two cones.

Relevance: 20.00%

Abstract:

To obtain accurate information from a structural tool it is necessary to understand the physical principles that govern the interaction between the probe and the sample under investigation. In this thesis a detailed study of the physical basis for Extended X-ray Absorption Fine Structure (EXAFS) spectroscopy is presented. A single-scattering formalism of EXAFS is introduced which allows a rigorous treatment of the central atom potential. A final-state interaction formalism of EXAFS is also discussed. Multiple scattering processes are shown to be significant for systems of certain geometries. The standard single-scattering EXAFS analysis produces erroneous results if the data contain a large multiple scattering contribution. The effect of thermal vibrations on such multiple scattering paths is also discussed. From symmetry considerations it is shown that only certain normal modes contribute to the Debye-Waller factor for a particular scattering path. Furthermore, changes in the scattering angles induced by thermal vibrations produce additional EXAFS components called modification factors. These factors are shown to be small for most systems.

A study of the physical basis for the determination of structural information from EXAFS data is also presented. An objective method of determining the background absorption and the threshold energy is discussed and involves Gaussian functions. In addition, a scheme to determine the nature of the scattering atom in EXAFS experiments is introduced. This scheme is based on the fact that the phase intercept is a measure of the type of scattering atom. A method to determine bond distances is also discussed and does not require the use of model compounds or calculated phase shifts. The physical basis for this method is the absence of a linear term in the scattering phases. Therefore, it is possible to separate these phases from the linear term containing the distance information in the total phase.
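For reference, the single-scattering EXAFS expression that analyses of this kind build on has the standard form sketched below; the thesis's contribution concerns how the potentials, phases, and multiple-scattering and thermal corrections entering this expression are treated, which the sketch does not attempt to reproduce. The shell parameterization is an illustrative convention.

```python
import numpy as np

def exafs_chi(k, shells):
    """Standard single-scattering EXAFS sum:
      chi(k) = sum_j N_j S0^2 F_j(k) / (k R_j^2)
               * exp(-2 k^2 sigma_j^2) * exp(-2 R_j / lambda_j(k))
               * sin(2 k R_j + phi_j(k))
    Each shell j is a dict with N, R (Angstrom), sigma2, S02, and callables
    F(k), phi(k), lam(k) for the backscattering amplitude, total phase, and
    photoelectron mean free path. All inputs are user-supplied model functions."""
    k = np.asarray(k, dtype=float)
    chi = np.zeros_like(k)
    for s in shells:
        amp = s["N"] * s["S02"] * s["F"](k) / (k * s["R"] ** 2)
        damping = np.exp(-2.0 * k**2 * s["sigma2"]) * np.exp(-2.0 * s["R"] / s["lam"](k))
        chi += amp * damping * np.sin(2.0 * k * s["R"] + s["phi"](k))
    return chi
```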

Relevance: 20.00%

Abstract:

The epoch of reionization remains one of the last uncharted eras of cosmic history, yet this time is of crucial importance, encompassing the formation of both the first galaxies and the first metals in the universe. In this thesis, I present four related projects that both characterize the abundance and properties of these first galaxies and use follow-up observations of these galaxies to achieve one of the first measurements of the neutral fraction of the intergalactic medium during the heart of the reionization era.

First, we present the results of a spectroscopic survey using the Keck telescopes targeting 6.3 < z < 8.8 star-forming galaxies. We secured observations of 19 candidates, initially selected by applying the Lyman break technique to infrared imaging data from the Wide Field Camera 3 (WFC3) onboard the Hubble Space Telescope (HST). This survey builds upon earlier work by Stark et al. (2010, 2011), which showed that star-forming galaxies at 3 < z < 6, when the universe was highly ionized, displayed a significant increase in strong Lyman alpha emission with redshift. Our work uses the LRIS and NIRSPEC instruments to search for Lyman alpha emission in candidates at greater redshift in the observed near-infrared, in order to discern whether this evolution continues or is quenched by an increase in the neutral fraction of the intergalactic medium. Our spectroscopic observations typically reach a 5-sigma limiting sensitivity of < 50 Å. Despite expecting to detect Lyman alpha at 5-sigma in 7-8 galaxies based on our Monte Carlo simulations, we achieve secure detections in only two of 19 sources. Combining these results with a similar sample of 7 galaxies from Fontana et al. (2010), we determine that these few detections would occur in < 1% of simulations if the intrinsic distribution were the same as that at z ~ 6. We consider other explanations for this decline, but find the most convincing explanation to be an increase in the neutral fraction of the intergalactic medium. Using theoretical models, we infer a neutral fraction of X_HI ~ 0.44 at z = 7.
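The statistical logic of the test above can be illustrated with a toy binomial calculation: if each of the 19 galaxies had independently yielded a detection with probability ~7.5/19, the chance of seeing two or fewer detections would be well below a percent. The real analysis uses per-galaxy Monte Carlo simulations of depth, redshift, and equivalent-width distributions rather than a single shared probability, so the numbers below are illustrative only.

```python
from math import comb

def prob_at_most(k_max, n, p):
    """Binomial probability of observing k_max or fewer successes in n trials."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(k_max + 1))

# ~7.5 expected detections among 19 targets, only 2 seen:
print(prob_at_most(2, 19, 7.5 / 19))  # ~0.006
```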

Second, we characterize the abundance of star-forming galaxies at z > 6.5, again using WFC3 onboard the HST. This project conducted a detailed search for candidates both in the Hubble Ultra Deep Field and in a number of additional, wider Hubble Space Telescope surveys to construct luminosity functions at z ~ 7 and 8, reaching 0.65 and 0.25 mag fainter than any previous surveys, respectively. With this increased depth, we achieve some of the most robust constraints on the Schechter function faint-end slopes at these redshifts, finding very steep values of α_{z~7} = -1.87 ± 0.18 and α_{z~8} = -1.94 ± 0.23. We discuss these results in the context of cosmic reionization and show that, given reasonable assumptions about the ionizing spectra and the escape fraction of ionizing photons, only half the photons needed to maintain reionization are provided by currently observable galaxies at z ~ 7-8. We show that an extension of the luminosity function down to M_UV = -13.0, coupled with a low level of star formation out to higher redshift, can fit all available constraints on the ionization history of the universe.
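The Schechter parameterization underlying these luminosity functions, and the kind of integration used to ask how much of the UV luminosity density lies below current detection limits, can be sketched as below. The characteristic magnitude and normalization are placeholders, not fitted values from the thesis (which quotes only the faint-end slopes), so the printed fraction is purely illustrative.

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in absolute-magnitude form:
    phi(M) dM = 0.4 ln(10) phi* x^(alpha+1) exp(-x) dM, with x = 10^(0.4 (M* - M))."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

def luminosity_density(M_faint, phi_star, M_star, alpha, M_bright=-23.0, n=4000):
    """UV luminosity density (relative units) integrated down to M_faint,
    weighting each magnitude bin by its luminosity 10^(-0.4 M)."""
    M = np.linspace(M_bright, M_faint, n)
    w = schechter_mag(M, phi_star, M_star, alpha) * 10.0 ** (-0.4 * M)
    return float(np.sum(0.5 * (w[1:] + w[:-1]) * np.diff(M)))  # trapezoid rule

# Placeholder z~7 parameters: steep alpha = -1.87, illustrative M* and phi*.
frac = (luminosity_density(-17.0, 1e-3, -20.0, -1.87)
        / luminosity_density(-13.0, 1e-3, -20.0, -1.87))
print(f"fraction of the UV luminosity density recovered above M_UV = -17: {frac:.2f}")
```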

Third, we investigate the strength of nebular emission in 3 < z < 5 star-forming galaxies. We begin by using the Infrared Array Camera (IRAC) onboard the Spitzer Space Telescope to investigate the strength of H alpha emission in a sample of 3.8 < z < 5.0 spectroscopically confirmed galaxies. We then conduct near-infrared observations of star-forming galaxies at 3 < z < 3.8 to investigate the strength of the [OIII] 4959/5007 and H beta emission lines from the ground using MOSFIRE. In both cases, we uncover near-ubiquitous strong nebular emission and find excellent agreement between the fluxes derived using the separate methods. For a subset of 9 objects in our MOSFIRE sample that have secure Spitzer IRAC detections, we compare the emission line flux derived from the excess in the K_s band photometry to that derived from direct spectroscopy and find 7 to agree within a factor of 1.6, with only one catastrophic outlier. Finally, for a different subset for which we also have DEIMOS rest-UV spectroscopy, we compare the relative velocities of Lyman alpha and the rest-optical nebular lines, which should trace the sites of star formation. We find a median velocity offset of only v_{Lyα} = 149 km/s, significantly less than the 400 km/s observed for star-forming galaxies with weaker Lyman alpha emission at z = 2-3 (Steidel et al. 2010), and show that this decrease can be explained by a decrease in the neutral hydrogen column density covering the galaxy. We discuss how this implies a lower neutral fraction for a given observed extinction of Lyman alpha when its visibility is used to probe the ionization state of the intergalactic medium.

Finally, we utilize the recent CANDELS wide-field infrared photometry over the GOODS-N and GOODS-S fields to re-analyze the use of Lyman alpha emission to evaluate the neutrality of the intergalactic medium. With these new data, we derive accurate ultraviolet spectral slopes for a sample of 468 star-forming galaxies at 3 < z < 6, already observed in the rest-UV with the Keck spectroscopic survey (Stark et al. 2010). We use a Bayesian fitting method, which accurately accounts for contamination and obscuration by skylines, to derive a relationship between the UV slope of a galaxy and its intrinsic Lyman alpha equivalent width probability distribution. We then apply these data to spectroscopic surveys during the reionization era, including our own, to accurately interpret the drop in observed Lyman alpha emission. From our most recent such MOSFIRE survey, we also present evidence for the most distant galaxy confirmed through emission line spectroscopy, at z = 7.62, as well as a first detection of the CIII] 1907/1909 doublet at z > 7.

We conclude the thesis by exploring future prospects and summarizing the results of Robertson et al. (2013). This work synthesizes many of the measurements in this thesis, along with external constraints, to create a model of reionization that fits nearly all available constraints.