11 results for 750602 Understanding electoral systems

in CaltechTHESIS


Relevance:

100.00%

Publisher:

Abstract:

We examine voting situations in which individuals have incomplete information about each other's true preferences. In many respects, this work is motivated by a desire to provide a more complete understanding of so-called probabilistic voting.

Chapter 2 examines the similarities and differences between the incentives faced by politicians who seek to maximize expected vote share, expected plurality, or probability of victory in single-member, single-vote, simple-plurality electoral systems. We find that, in general, the candidates' optimal policies in such an electoral system vary greatly depending on their objective function. We provide several examples, as well as a genericity result which states that almost all such electoral systems (with respect to the distributions of voter behavior) will exhibit different incentives for candidates who seek to maximize expected vote share and those who seek to maximize probability of victory.
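The abstract does not give a concrete electorate, but the divergence between these two objectives can be sketched with a toy example (all probabilities below are hypothetical): with independent probabilistic voters, a platform that concentrates support on a bare majority can yield a lower expected vote share yet a higher probability of victory than a platform that spreads support evenly.

```python
from itertools import product

def expected_share(p):
    # Expected vote share: the mean of the per-voter support probabilities.
    return sum(p) / len(p)

def win_prob(p):
    # Probability of a strict majority, with voters casting
    # independent Bernoulli ballots.
    n = len(p)
    total = 0.0
    for ballots in product((0, 1), repeat=n):
        pr = 1.0
        for pi, b in zip(p, ballots):
            pr *= pi if b else (1.0 - pi)
        if sum(ballots) > n / 2:
            total += pr
    return total

# Platform A concentrates support on a bare majority of a three-voter
# electorate; platform B spreads support evenly.
A = (0.95, 0.95, 0.0)
B = (0.66, 0.66, 0.66)

print(expected_share(A), expected_share(B))  # A's share is lower...
print(win_prob(A), win_prob(B))              # ...but A wins more often
```

A candidate maximizing expected vote share prefers B, while one maximizing probability of victory prefers A, which is the kind of divergence the genericity result formalizes.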

In Chapter 3, we adopt a random utility maximizing framework in which individuals' preferences are subject to action-specific exogenous shocks. We show that Nash equilibria exist in voting games possessing such an information structure and in which voters and candidates are each aware that every voter's preferences are subject to such shocks. A special case of our framework is that in which voters are playing a Quantal Response Equilibrium (McKelvey and Palfrey (1995), (1998)). We then examine candidate competition in such games and show that, for sufficiently large electorates, regardless of the dimensionality of the policy space or the number of candidates, there exists a strict equilibrium at the social welfare optimum (i.e., the point which maximizes the sum of voters' utility functions). In two candidate contests we find that this equilibrium is unique.
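The abstract does not specify a game, but the logit form of quantal response used in the QRE literature can be illustrated on a hypothetical symmetric 2x2 coordination game: each player's mixing probability is a smoothed best response to the other's, and an equilibrium is a fixed point of that map. The payoffs and precision parameter below are illustrative, not from the thesis.

```python
import math

def logit_choice(utilities, lam):
    # Logit choice probabilities: sigma(a) proportional to exp(lam * u(a)).
    m = max(lam * u for u in utilities)
    weights = [math.exp(lam * u - m) for u in utilities]
    total = sum(weights)
    return [w / total for w in weights]

def qre_fixed_point(lam, iters=500):
    # Hypothetical symmetric coordination game: action 0 pays 2 if matched,
    # action 1 pays 1 if matched, 0 otherwise.  q = P(opponent plays 0).
    q = 0.5
    for _ in range(iters):
        u = [2.0 * q, 1.0 * (1.0 - q)]  # expected utilities of actions 0, 1
        q = logit_choice(u, lam)[0]     # smoothed best response to q
    return q

q = qre_fixed_point(lam=2.0)
```

As lam grows the logit response approaches exact best response, so the fixed point approaches a Nash equilibrium; at lam = 0 the response is uniform regardless of payoffs.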

Finally, in Chapter 4, we attempt the first steps towards a theory of equilibrium in games possessing both continuous action spaces and action-specific preference shocks. Our notion of equilibrium, Variational Response Equilibrium, is shown to exist in all games with continuous payoff functions. We discuss the similarities and differences between this notion of equilibrium and the notion of Quantal Response Equilibrium and offer possible extensions of our framework.

Relevance:

100.00%

Publisher:

Abstract:

Separating the dynamics of variables that evolve on different timescales is a common assumption in exploring complex systems, and a great deal of progress has been made in understanding chemical systems by treating the fast processes of an activated chemical species independently of the slower processes that precede activation. Protein motion underlies all biocatalytic reactions, and understanding the nature of this motion is central to understanding how enzymes catalyze reactions with such specificity and such rate enhancement. This understanding is challenged by evidence of breakdowns in the separability of timescales of dynamics in the active site from motions of the solvating protein. Quantum simulation methods that bridge these timescales by simultaneously evolving quantum and classical degrees of freedom provide an important tool with which to explore this breakdown. In the following dissertation, three problems of enzyme catalysis are explored through quantum simulation.

Relevance:

40.00%

Publisher:

Abstract:

Radiation in the first days of supernova explosions contains rich information about physical properties of the exploding stars. In the past three years, I used the intermediate Palomar Transient Factory to conduct one-day cadence surveys, in order to systematically search for infant supernovae. I show that the one-day cadences in these surveys were strictly controlled, that the real-time image subtraction pipeline managed to deliver transient candidates within ten minutes of images being taken, and that we were able to undertake follow-up observations with a variety of telescopes within hours of transients being discovered. So far iPTF has discovered over a hundred supernovae within a few days of explosions, forty-nine of which were spectroscopically classified within twenty-four hours of discovery.

Our observations of infant Type Ia supernovae provide evidence for both the single-degenerate and double-degenerate progenitor channels. On the one hand, a low-velocity Type Ia supernova iPTF14atg revealed a strong ultraviolet pulse within four days of its explosion. I show that the pulse is consistent with the expected emission produced by collision between the supernova ejecta and a companion star, providing direct evidence for the single-degenerate channel. By comparing the distinct early-phase light curves of iPTF14atg to an otherwise similar event iPTF14dpk, I show that the viewing-angle dependence of the supernova-companion collision signature is probably responsible for the difference between the early light curves. I also show evidence for a dark period between the supernova explosion and the first light of the radioactively-powered light curve. On the other hand, a peculiar Type Ia supernova iPTF13asv revealed strong near-UV emission and an absence of iron in the spectra within the first two weeks of the explosion, suggesting a stratified ejecta structure with iron group elements confined to the slow-moving part of the ejecta. With its total ejecta mass estimated to exceed the Chandrasekhar limit, I show that the stratification and large mass of the ejecta favor the double-degenerate channel.

In a separate approach, iPTF found the first progenitor system of a Type Ib supernova iPTF13bvn in the pre-explosion HST archival images. Independently, I used the early-phase optical observations of this supernova to constrain its progenitor radius to be no larger than several solar radii. I also used its early radio detections to derive a mass-loss rate of 3e-5 solar masses per year for the progenitor right before the supernova explosion. These constraints on the physical properties of the iPTF13bvn progenitor provide a comprehensive data set to test Type Ib supernova theories. A recent HST revisit to the iPTF13bvn site two years after the supernova explosion has confirmed the progenitor system.

Moving forward, the next frontier in this area is to extend these single-object analyses to a large sample of infant supernovae. The upcoming Zwicky Transient Facility, with its fast survey speed, is expected to find one infant supernova every night and is well positioned to carry out this task.

Relevance:

30.00%

Publisher:

Abstract:

The first thesis topic is a perturbation method for resonantly coupled nonlinear oscillators. By successive near-identity transformations of the original equations, one obtains new equations with simple structure that describe the long-time evolution of the motion. This technique is related to two-timing in that secular terms are suppressed in the transformation equations. The method has some important advantages. Appropriate time scalings are generated naturally by the method, and do not need to be guessed as in two-timing. Furthermore, by continuing the procedure to higher order, one extends (formally) the time scale of valid approximation. Examples illustrate these claims. Using this method, we investigate resonance in conservative, non-conservative, and time-dependent problems. Each example is chosen to highlight a certain aspect of the method.
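The abstract leaves the oscillators unspecified, but a standard textbook case for this kind of secular-term suppression is the weakly nonlinear Duffing oscillator x'' + x + eps*x^3 = 0, for which perturbation theory predicts the frequency omega ≈ 1 + 3*eps*a^2/8 at amplitude a. As a sketch (step size and parameters are arbitrary choices, not from the thesis), the prediction can be checked against direct RK4 integration:

```python
import math

def rk4_step(f, y, t, dt):
    # One classical fourth-order Runge-Kutta step for y' = f(t, y).
    k1 = f(t, y)
    k2 = f(t + dt/2, [yi + dt/2 * ki for yi, ki in zip(y, k1)])
    k3 = f(t + dt/2, [yi + dt/2 * ki for yi, ki in zip(y, k2)])
    k4 = f(t + dt, [yi + dt * ki for yi, ki in zip(y, k3)])
    return [yi + dt/6 * (a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def duffing_frequency(eps, a, dt=1e-3):
    # Measure the frequency of x'' + x + eps*x**3 = 0 with x(0)=a, x'(0)=0
    # by locating the first zero crossing of x (a quarter period, by symmetry).
    f = lambda t, y: [y[1], -y[0] - eps * y[0]**3]
    y, t = [a, 0.0], 0.0
    while y[0] > 0.0:
        y_prev, t_prev = y, t
        y = rk4_step(f, y, t, dt)
        t += dt
    tc = t_prev + dt * y_prev[0] / (y_prev[0] - y[0])  # linear interpolation
    return 2 * math.pi / (4 * tc)

eps, a = 0.1, 1.0
omega_numeric = duffing_frequency(eps, a)
omega_perturb = 1 + 3 * eps * a**2 / 8  # first-order perturbation result
```

The two agree to better than a percent at eps = 0.1; continuing the expansion to higher order, as the abstract describes, tightens the agreement further.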

The second thesis topic concerns the coupling of nonlinear chemical oscillators. The first problem is the propagation of chemical waves of an oscillating reaction in a diffusive medium. Using two-timing, we derive a nonlinear equation that determines how spatial variations in the phase of the oscillations evolve in time. This result is the key to understanding the propagation of chemical waves. In particular, we use it to account for certain experimental observations on the Belousov-Zhabotinskii reaction.

Next, we analyse the interaction between a pair of coupled chemical oscillators. This time, we derive an equation for the phase shift, which measures how much the oscillators are out of phase. This result is the key to understanding M. Marek's and I. Stuchl's results on coupled reactor systems. In particular, our model accounts for synchronization and its bifurcation into rhythm splitting.

Finally, we analyse large systems of coupled chemical oscillators. Using a continuum approximation, we demonstrate mechanisms that cause auto-synchronization in such systems.

Relevance:

30.00%

Publisher:

Abstract:

Vortex rings constitute the main structure in the wakes of a wide class of swimming and flying animals, as well as in cardiac flows and in the jets generated by some moss and fungi. However, there is a physical limit, determined by an energy maximization principle called the Kelvin-Benjamin principle, to the size that axisymmetric vortex rings can achieve. The existence of this limit is known to lead to the separation of a growing vortex ring from the shear layer feeding it, a process known as `vortex pinch-off', and characterized by the dimensionless vortex formation number. The goal of this thesis is to improve our understanding of vortex pinch-off as it relates to biological propulsion, and to provide future researchers with tools to assist in identifying and predicting pinch-off in biological flows.

To this end, we introduce a method for identifying pinch-off in starting jets using the Lagrangian coherent structures in the flow, and apply this criterion to an experimentally generated starting jet. Since most naturally occurring vortex rings are not circular, we extend the definition of the vortex formation number to include non-axisymmetric vortex rings, and find that the formation number for moderately non-axisymmetric vortices is similar to that of circular vortex rings. This suggests that naturally occurring vortex rings may be modeled as axisymmetric vortex rings. Therefore, we consider the perturbation response of the Norbury family of axisymmetric vortex rings. This family is chosen to model vortex rings of increasing thickness and circulation, and their response to prolate shape perturbations is simulated using contour dynamics. Finally, we use contour dynamics to simulate the response of more realistic vortex ring models, constructed from experimental data using nested contours, to perturbations that more closely resemble those encountered by forming vortices. In both families of models, a change in response analogous to pinch-off is found as members of the family with progressively thicker cores are considered. We posit that this analogy may be exploited to understand and predict pinch-off in complex biological flows, where current methods are not applicable in practice, and criteria based on the properties of vortex rings alone are necessary.
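The formation number itself is simple to compute from a piston program: it is the running time integral of the piston velocity divided by the nozzle diameter, and for axisymmetric rings pinch-off is typically observed when this dimensionless time reaches roughly 4. The piston program and dimensions below are hypothetical, chosen only to illustrate the bookkeeping:

```python
def formation_history(piston_speed, D, dt, t_end):
    # Dimensionless formation time T*(t) = (1/D) * integral_0^t U_p(s) ds,
    # accumulated with a simple rectangle rule.
    t, integral, history = 0.0, 0.0, []
    while t < t_end:
        integral += piston_speed(t) * dt
        t += dt
        history.append((t, integral / D))
    return history

# Hypothetical constant-speed piston: U = 0.1 m/s through a D = 25 mm nozzle.
U, D = 0.1, 0.025
history = formation_history(lambda t: U, D, dt=1e-3, t_end=2.0)

# Time at which T* first reaches the nominal axisymmetric pinch-off value ~4:
t_pinch = next(t for t, Tstar in history if Tstar >= 4.0)
```

For this constant-speed program T* = U*t/D, so the threshold is reached at t = 4*D/U = 1.0 s; the point of the thesis is precisely that such a criterion needs generalizing for non-axisymmetric and realistic rings.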

Relevance:

30.00%

Publisher:

Abstract:

Disorder and interactions both play crucial roles in quantum transport. Decades ago, Mott showed that electron-electron interactions can lead to insulating behavior in materials that conventional band theory predicts to be conducting. Soon thereafter, Anderson demonstrated that disorder can localize a quantum particle through the wave interference phenomenon of Anderson localization. Although interactions and disorder both separately induce insulating behavior, the interplay of these two ingredients is subtle and often leads to surprising behavior at the periphery of our current understanding. Modern experiments probe these phenomena in a variety of contexts (e.g. disordered superconductors, cold atoms, photonic waveguides, etc.); thus, theoretical and numerical advancements are urgently needed. In this thesis, we report progress on understanding two contexts in which the interplay of disorder and interactions is especially important.

The first is the so-called “dirty” or random boson problem. In the past decade, a strong-disorder renormalization group (SDRG) treatment by Altman, Kafri, Polkovnikov, and Refael has raised the possibility of a new unstable fixed point governing the superfluid-insulator transition in the one-dimensional dirty boson problem. This new critical behavior may take over from the weak-disorder criticality of Giamarchi and Schulz when disorder is sufficiently strong. We analytically determine the scaling of the superfluid susceptibility at the strong-disorder fixed point and connect our analysis to recent Monte Carlo simulations by Hrahsheh and Vojta. We then shift our attention to two dimensions and use a numerical implementation of the SDRG to locate the fixed point governing the superfluid-insulator transition there. We identify several universal properties of this transition, which are fully independent of the microscopic features of the disorder.

The second focus of this thesis is the interplay of localization and interactions in systems with high energy density (i.e., far from the usual low energy limit of condensed matter physics). Recent theoretical and numerical work indicates that localization can survive in this regime, provided that interactions are sufficiently weak. Stronger interactions can destroy localization, leading to a so-called many-body localization transition. This dynamical phase transition is relevant to questions of thermalization in isolated quantum systems: it separates a many-body localized phase, in which localization prevents transport and thermalization, from a conducting (“ergodic”) phase in which the usual assumptions of quantum statistical mechanics hold. Here, we present evidence that many-body localization also occurs in quasiperiodic systems that lack true disorder.

Relevance:

30.00%

Publisher:

Abstract:

Multi-step electron tunneling, or “hopping,” has become a fast-developing research field, with studies ranging from theoretical model systems and inorganic complexes to biological systems. In particular, the field is exploring hopping mechanisms in new proteins and protein complexes, as well as further understanding classical biological hopping systems such as ribonucleotide reductase, DNA photolyases, and photosystem II. Despite the plethora of natural systems, only a few biologically engineered systems exist. Engineered hopping systems can provide valuable information on key structural and electronic features, just like other kinds of biological model systems. Engineered systems can also harness common biological processes and utilize them for alternative reactions. In this thesis, two new hopping systems are engineered and characterized.
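The kinetic advantage of hopping can be sketched with the standard exponential distance dependence of nonadiabatic electron tunneling, k ∝ exp(-beta*r), with beta ≈ 1.1 per angstrom typical of protein media (the prefactor and geometry below are hypothetical): splitting one long tunneling step into two shorter ones through an intermediate raises the overall rate by orders of magnitude.

```python
import math

BETA = 1.1    # 1/angstrom, a typical tunneling decay constant in proteins
K0 = 1.0e13   # 1/s, hypothetical rate at close contact

def tunneling_rate(r):
    # Single-step tunneling rate over a distance r (in angstroms).
    return K0 * math.exp(-BETA * r)

R = 20.0  # angstroms, hypothetical donor-to-acceptor distance
single_step = tunneling_rate(R)

# Two-step hopping through an intermediate at the midpoint; for this sketch,
# assume the slower of the two equal steps limits the overall rate.
hop = min(tunneling_rate(R / 2), tunneling_rate(R / 2))

speedup = hop / single_step  # = exp(BETA * R / 2), roughly 6e4 here
```

This is why a tryptophan intermediate placed between a distant donor and acceptor, as in the systems engineered in this thesis, can turn an impractically slow single tunneling event into fast multi-step transport.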

The protein Pseudomonas aeruginosa azurin is used as a building block to create the two new hopping systems. Besides being well studied and amenable to mutation, azurin has already been used to successfully engineer a hopping system. The two hopping systems presented in this thesis have a histidine-attached, high-potential rhenium 4,7-dimethyl-1,10-phenanthroline tricarbonyl [Re(dmp)(CO)3]+ label which, when excited, acts as the initial electron acceptor. The electron donor is the type I copper of the azurin protein. The hopping intermediates are all tryptophan residues, mutated into the azurin at select sites between the photoactive metal label and the protein metal site. One system exhibits inter-molecular hopping through a protein dimer interface; the other undergoes intra-molecular multi-step hopping along a tryptophan “wire.” The electron transfer reactions are triggered by excitation of the rhenium label and monitored by UV-visible transient absorption, luminescence decay measurements, and time-resolved infrared spectroscopy (TRIR). Both systems were structurally characterized by protein X-ray crystallography.

Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the use of multiply-substituted stable isotopologues of carbonate minerals and methane gas to better understand how these environmentally significant minerals and gases form and are modified throughout their geological histories. Stable isotopes have a long tradition in earth science as a tool for providing quantitative constraints on how molecules, in or on the earth, formed in both the present and past. Nearly all studies, until recently, have only measured the bulk concentrations of stable isotopes in a phase or species. However, the abundances of various isotopologues within a phase, for example isotopologues containing multiple rare isotopes (multiply substituted, or 'clumped,' isotopologues), also carry potentially useful information. Specifically, the abundances of clumped isotopologues in an equilibrated system are a function of temperature, and thus knowledge of their abundances can be used to calculate a sample’s formation temperature. In this thesis, measurements of clumped isotopologues are made on both carbonate-bearing minerals and methane gas in order to better constrain the environmental and geological histories of various samples.
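The temperature dependence underpinning this approach is commonly expressed as a clumped-isotope excess (in permil) that falls off with temperature; a generic calibration of the form Delta = A/T^2 + B can be inverted for formation temperature. The constants below are placeholders for illustration, not the calibrations derived in the thesis:

```python
import math

A = 4.0e4   # K^2 * permil -- hypothetical calibration slope
B = 0.26    # permil       -- hypothetical intercept

def clumped_delta(T):
    # Equilibrium clumped-isotope excess (permil) at temperature T (kelvin).
    return A / T**2 + B

def formation_temperature(delta):
    # Invert the calibration: T = sqrt(A / (Delta - B)).
    return math.sqrt(A / (delta - B))

delta_300 = clumped_delta(300.0)  # excess recorded at 300 K
```

Round-tripping a temperature through the two functions recovers it, and because Delta decreases with T, hotter formation environments record smaller excesses, which is what makes the measurement a thermometer.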

Clumped-isotope-based measurements of ancient carbonate-bearing minerals, including apatites, have opened up paleotemperature reconstructions to a variety of systems and time periods. However, a critical issue when using clumped-isotope based measurements to reconstruct ancient mineral formation temperatures is whether the samples being measured have faithfully recorded their original internal isotopic distributions. These original distributions can be altered, for example, by diffusion of atoms in the mineral lattice or through diagenetic reactions. Understanding these processes quantitatively is critical for the use of clumped isotopes to reconstruct past temperatures, quantify diagenesis, and calculate time-temperature burial histories of carbonate minerals. In order to help orient this part of the thesis, Chapter 2 provides a broad overview and history of clumped-isotope based measurements in carbonate minerals.

In Chapter 3, the effects of elevated temperatures on a sample’s clumped-isotope composition are probed in both natural and experimental apatites (which contain structural carbonate groups) and calcites. A quantitative model is created that is calibrated by the experiments and consistent with the natural samples. The model allows for calculations of the change in a sample’s clumped isotope abundances as a function of any time-temperature history.

In Chapter 4, the effects of diagenesis on the stable isotopic compositions of apatites are explored on samples from a variety of sedimentary phosphorite deposits. Clumped isotope temperatures and bulk isotopic measurements from carbonate and phosphate groups are compared for all samples. These results demonstrate that samples have experienced isotopic exchange of oxygen atoms in both the carbonate and phosphate groups. A kinetic model is developed that allows for the calculation of the amount of diagenesis each sample has experienced and yields insight into the physical and chemical processes of diagenesis.

The thesis then switches gears and turns its attention to clumped isotope measurements of methane. Methane is a critical greenhouse gas, energy resource, and microbial metabolic product and substrate. Despite its environmental and economic importance, much about methane’s formation mechanisms and the relative sources of methane to various environments remains poorly constrained. In order to add new constraints to our understanding of the formation of methane in nature, I describe the development and application of methane clumped isotope measurements to environmental deposits of methane. To help orient the reader, a brief overview of the formation of methane in both high and low temperature settings is given in Chapter 5.

In Chapter 6, a method for the measurement of methane clumped isotopologues via mass spectrometry is described. This chapter demonstrates that the measurement is precise and accurate. Additionally, the measurement is calibrated experimentally such that measurements of methane clumped isotope abundances can be converted into equivalent formational temperatures. This study represents the first time that methane clumped isotope abundances have been measured at useful precisions.

In Chapter 7, the methane clumped isotope method is applied to natural samples from a variety of settings. These settings include thermogenic gases formed and reservoired in shales, migrated thermogenic gases, biogenic gases, mixed biogenic and thermogenic gas deposits, and experimentally generated gases. In all cases, calculated clumped isotope temperatures make geological sense as formation temperatures or mixtures of high and low temperature gases. Based on these observations, we propose that the clumped isotope temperature of an unmixed gas represents its formation temperature; this was neither an obvious nor an expected result and has important implications for how methane forms in nature. Additionally, these results demonstrate that methane clumped isotope compositions provide valuable additional constraints for studying natural methane deposits.

Relevance:

30.00%

Publisher:

Abstract:

The assembly history of massive galaxies is one of the most important aspects of galaxy formation and evolution. Although we have a broad idea of what physical processes govern the early phases of galaxy evolution, there are still many open questions. In this thesis I demonstrate the crucial role that spectroscopy can play in a physical understanding of galaxy evolution. I present deep near-infrared spectroscopy for a sample of high-redshift galaxies, from which I derive important physical properties and their evolution with cosmic time. I take advantage of the recent arrival of efficient near-infrared detectors to target the rest-frame optical spectra of z > 1 galaxies, from which many physical quantities can be derived. After illustrating the applications of near-infrared deep spectroscopy with a study of star-forming galaxies, I focus on the evolution of massive quiescent systems.

Most of this thesis is based on two samples collected at the W. M. Keck Observatory that represent a significant step forward in the spectroscopic study of z > 1 quiescent galaxies. All previous spectroscopic samples at this redshift were either limited to a few objects or considerably shallower. Our first sample is composed of 56 quiescent galaxies at 1 < z < 1.6 collected using the upgraded red arm of the Low Resolution Imaging Spectrometer (LRIS). The second consists of 24 deep spectra of 1.5 < z < 2.5 quiescent objects observed with the Multi-Object Spectrometer For Infra-Red Exploration (MOSFIRE). Together, these spectra span the critical epoch 1 < z < 2.5, where most of the red sequence is formed, and where the sizes of quiescent systems are observed to increase significantly.

We measure stellar velocity dispersions and dynamical masses for the largest number of z > 1 quiescent galaxies to date. By assuming that the velocity dispersion of a massive galaxy does not change throughout its lifetime, as suggested by theoretical studies, we match galaxies in the local universe with their high-redshift progenitors. This allows us to derive the physical growth in mass and size experienced by individual systems, which represents a substantial advance over photometric inferences based on the overall galaxy population. We find significant physical growth among quiescent galaxies over 0 < z < 2.5 and, by comparing the slope of growth in the mass-size plane, dlogRe/dlogM, with the results of numerical simulations, we can constrain the physical process responsible for the evolution. Our results show that the slope of growth becomes steeper at higher redshifts, yet is broadly consistent with minor mergers being the main process by which individual objects evolve in mass and size.
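A common convention for dynamical masses of this kind is the virial-style estimator M_dyn = beta * sigma^2 * R_e / G with beta ≈ 5; the abstract does not state the exact estimator used, so the numbers below are a generic illustration rather than the thesis's measurements.

```python
G = 6.674e-11      # m^3 kg^-1 s^-2, gravitational constant
M_SUN = 1.989e30   # kg, solar mass
KPC = 3.086e19     # m per kiloparsec

def dynamical_mass(sigma_km_s, Re_kpc, beta=5.0):
    # Dynamical mass in solar masses from a stellar velocity dispersion
    # (km/s) and an effective radius (kpc).
    sigma = sigma_km_s * 1.0e3  # to m/s
    Re = Re_kpc * KPC           # to m
    return beta * sigma**2 * Re / G / M_SUN

# Illustrative massive quiescent galaxy: sigma = 250 km/s, Re = 3 kpc.
M = dynamical_mass(250.0, 3.0)  # a few times 1e11 solar masses
```

Because sigma is assumed constant over a galaxy's lifetime, matching local and high-redshift galaxies at fixed sigma and comparing their Re values directly yields the size growth of individual systems.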

By fitting stellar population models to the observed spectroscopy and photometry we derive reliable ages and other stellar population properties. We show that the addition of the spectroscopic data helps break the degeneracy between age and dust extinction, and yields significantly more robust results compared to fitting models to the photometry alone. We detect a clear relation between size and age, where larger galaxies are younger. Therefore, over time the average size of the quiescent population will increase because of the contribution of large galaxies that have recently arrived on the red sequence. This effect, called progenitor bias, is different from the physical size growth discussed above, but represents another contribution to the observed difference between the typical sizes of low- and high-redshift quiescent galaxies. By reconstructing the evolution of the red sequence starting at z ∼ 1.25 and using our stellar population histories to infer the past behavior to z ∼ 2, we demonstrate that progenitor bias accounts for only half of the observed growth of the population. The remaining size evolution must be due to physical growth of individual systems, in agreement with our dynamical study.

Finally, we use the stellar population properties to explore the earliest periods which led to the formation of massive quiescent galaxies. We find tentative evidence for two channels of star formation quenching, which suggests the existence of two independent physical mechanisms. We also detect a mass downsizing, where more massive galaxies form at higher redshift, and then evolve passively. By analyzing in depth the star formation history of the brightest object at z > 2 in our sample, we are able to put constraints on the quenching timescale and on the properties of its progenitor.

A consistent picture emerges from our analyses: massive galaxies form at very early epochs, are quenched on short timescales, and then evolve passively. The evolution is passive in the sense that no new stars are formed, but significant mass and size growth is achieved by accreting smaller, gas-poor systems. At the same time the population of quiescent galaxies grows in number due to the quenching of larger star-forming galaxies. This picture is in agreement with other observational studies, such as measurements of the merger rate and analyses of galaxy evolution at fixed number density.

Relevance:

30.00%

Publisher:

Abstract:

Liquefaction is a devastating instability associated with saturated, loose, and cohesionless soils. It poses a significant risk to distributed infrastructure systems that are vital for the security, economy, safety, health, and welfare of societies. In order to make our cities resilient to the effects of liquefaction, it is important to be able to identify areas that are most susceptible. Some of the prevalent methodologies employed to identify susceptible areas include conventional slope stability analysis and the use of so-called liquefaction charts. However, these methodologies have some limitations, which motivate our research objectives. In this dissertation, we investigate the mechanics of the onset of liquefaction in a laboratory test using grain-scale simulations, which helps us (i) understand why certain soils liquefy under certain conditions, and (ii) identify a necessary precursor for the onset of flow liquefaction. Furthermore, we investigate the mechanics of liquefaction charts using a continuum plasticity model; this can help in modeling the surface hazards of liquefaction following an earthquake. Finally, we also investigate the microscopic definition of soil shear wave velocity, a soil property that is used as an index to quantify the liquefaction resistance of soil. We show that anisotropy in fabric, or grain arrangement, can be correlated with anisotropy in shear wave velocity. This has the potential to quantify the effects of sample disturbance when a soil specimen is extracted from the field. In conclusion, by developing a more fundamental understanding of soil liquefaction, this dissertation takes necessary steps toward a more physical assessment of liquefaction susceptibility at the field scale.
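The shear wave velocity mentioned here follows from elasticity: V_s = sqrt(G/rho), with G the small-strain shear modulus and rho the bulk density. Anisotropic fabric means a direction-dependent G and hence a direction-dependent V_s; the numbers below are illustrative values for a saturated sand, not results from the dissertation.

```python
import math

def shear_wave_velocity(G_pa, rho):
    # V_s = sqrt(G / rho), shear modulus in Pa, density in kg/m^3.
    return math.sqrt(G_pa / rho)

rho = 2000.0  # kg/m^3, a typical saturated sand

v_vertical = shear_wave_velocity(80e6, rho)    # G = 80 MPa along deposition
v_horizontal = shear_wave_velocity(60e6, rho)  # softer horizontal fabric

anisotropy_ratio = v_horizontal / v_vertical   # < 1 reflects fabric anisotropy
```

Measuring V_s in different directions before and after extraction, and comparing ratios like this one, is the sense in which fabric anisotropy could quantify sample disturbance.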

Relevance:

30.00%

Publisher:

Abstract:

Cardiovascular diseases (CVDs) have reached epidemic proportions in the US and worldwide, with serious consequences in terms of human suffering and economic impact. More than one third of American adults suffer from CVDs. The total direct and indirect costs of CVDs exceed $500 billion per year. Therefore, there is an urgent need to develop noninvasive diagnostic methods, to design minimally invasive assist devices, and to develop economical and easy-to-use monitoring systems for cardiovascular diseases. In order to achieve these goals, it is necessary to gain a better understanding of the subsystems that constitute the cardiovascular system. The aorta is one of these subsystems, whose role in cardiovascular functioning has been underestimated. Traditionally, the aorta and its branches have been viewed as resistive conduits connected to an active pump (the left ventricle of the heart). However, this perception fails to explain many observed physiological results. My goal in this thesis is to demonstrate the subtle but important role of the aorta as a system, with a focus on the wave dynamics in the aorta.

The operation of a healthy heart is based on an optimized balance between its pumping characteristics and the hemodynamics of the aorta and vascular branches. The delicate balance between the aorta and heart can be impaired due to aging, smoking, or disease. The heart generates pulsatile flow that produces pressure and flow waves as it enters into the compliant aorta. These aortic waves propagate and reflect from reflection sites (bifurcations and tapering). They can act constructively and assist the blood circulation. However, they may act destructively, promoting diseases or initiating sudden cardiac death. These waves also carry information about the diseases of the heart, vascular disease, and coupling of heart and aorta. In order to elucidate the role of the aorta as a dynamic system, the interplay between the dominant wave dynamic parameters is investigated in this study. These parameters are heart rate, aortic compliance (wave speed), and locations of reflection sites. Both computational and experimental approaches have been used in this research. In some cases, the results are further explained using theoretical models.
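A standard model connecting aortic compliance to the wave speed invoked here is the Moens-Korteweg relation, c = sqrt(E*h / (2*rho*R)): a stiffer (larger E) or thicker-walled aorta carries faster waves, which shifts the timing of reflections. The parameter values below are typical textbook numbers, not data from this thesis.

```python
import math

def moens_korteweg(E, h, rho, R):
    # Pulse wave speed c = sqrt(E h / (2 rho R)) for a thin-walled elastic
    # tube.  E: wall elastic modulus (Pa), h: wall thickness (m),
    # rho: fluid density (kg/m^3), R: tube radius (m).
    return math.sqrt(E * h / (2.0 * rho * R))

# Representative compliant (younger) aorta: about 5-6 m/s.
c_young = moens_korteweg(E=0.4e6, h=2e-3, rho=1060.0, R=1.25e-2)

# Aortic stiffening: quadrupling E doubles the wave speed.
c_old = moens_korteweg(E=1.6e6, h=2e-3, rho=1060.0, R=1.25e-2)
```

Faster waves mean reflections from bifurcations return earlier in the cardiac cycle, which is one mechanism by which stiffening can turn constructive wave interactions destructive.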

The main findings of this study are as follows: (i) developing a physiologically realistic outflow boundary condition for blood flow modeling in a compliant vasculature; (ii) demonstrating that pulse pressure as a single index cannot predict the true level of pulsatile workload on the left ventricle; (iii) proving that there is an optimum heart rate at which the pulsatile workload of the heart is minimized, and that the optimum heart rate shifts to a higher value as aortic rigidity increases; (iv) introducing a simple bio-inspired device for correction and optimization of aortic wave reflection that reduces the workload on the heart; (v) deriving a non-dimensional number that can predict the optimum wave dynamic state in a mammalian cardiovascular system; (vi) demonstrating that waves can create a pumping effect in the aorta; (vii) introducing a system parameter and a new medical index, Intrinsic Frequency, that can be used for noninvasive diagnosis of heart and vascular diseases; and (viii) proposing a new medical hypothesis for sudden cardiac death in young athletes.