11 results for cash-in-advance model

in CaltechTHESIS


Relevance: 100.00%

Abstract:

The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their actions or other agents' actions. The optimal response to these shocks will necessarily depend on agents' understanding of how these shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers the competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.

The first chapter is a game theoretic investigation of a reputation building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subjected to an error process. Inclusion of this belief into the equilibrium calculation provides for a richer class of reputation building possibilities than when perfect implementation is assumed.

In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternate models considered are essentially modifications to the standard sequential equilibrium. While some models perform quite well in that the nature of the modification seems to explain deviations from the sequential equilibrium quite well, the degree to which these modifications must be applied shows no consistency across different experimental designs.
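The estimation idea can be illustrated with a toy version of a tremble likelihood. This is a deliberate simplification, not the thesis's actual model: assume each observed action independently deviates ("trembles") from the equilibrium prediction with probability eps, so the likelihood is binomial and the MLE of eps is available in closed form.

```python
import math

def tremble_loglik(eps, n_dev, n_total):
    """Log-likelihood that each of n_total observed actions independently
    trembles away from the predicted equilibrium action with probability eps."""
    return n_dev * math.log(eps) + (n_total - n_dev) * math.log(1.0 - eps)

def mle_tremble(n_dev, n_total):
    """Closed-form maximum-likelihood estimate of the tremble rate."""
    return n_dev / n_total

# Illustrative data: 7 deviations observed in 100 actions -> eps_hat = 0.07.
eps_hat = mle_tremble(7, 100)
```

In practice the likelihood would be built from the full sequential-equilibrium strategy profile rather than a single deviation rate, but the same maximization logic applies.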

The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.

Relevance: 100.00%

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., its information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
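The geometric idea behind such sparse arrays can be sketched in a few lines. The snippet below uses one common variant of the coprime construction (the pair (M, N) = (2, 3) and the exact element counts are illustrative choices, not values from the thesis): a handful of physical sensors generates a much larger set of pairwise position differences, and it is on this "difference coarray" of correlation lags that correlation-aware methods identify more sources than sensors.

```python
def coprime_array(M, N):
    """Sensor positions (in units of a base spacing) for a coprime pair (M, N):
    N sensors spaced by M interleaved with 2M sensors spaced by N."""
    return sorted({M * n for n in range(N)} | {N * m for m in range(2 * M)})

def difference_coarray(pos):
    """All distinct pairwise position differences; the filled (hole-free)
    extent of this set governs how many sources correlation-aware
    recovery can unambiguously identify."""
    return sorted({a - b for a in pos for b in pos})

sensors = coprime_array(2, 3)       # 6 physical sensors: [0, 2, 3, 4, 6, 9]
lags = difference_coarray(sensors)  # 17 distinct correlation lags
```

With only 6 physical sensors, every lag from -7 to 7 is present in the coarray, which is the sense in which the virtual array is larger than the physical one.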

This new paradigm of underdetermined estimation, which explicitly establishes the fundamental interplay between sampling, statistical priors, and the underlying sparsity, leads to exciting future research directions in a variety of application areas and gives rise to new questions of stand-alone theoretical interest.

Relevance: 100.00%

Abstract:

In this thesis, we develop an efficient collapse prediction model, the PFA (Peak Filtered Acceleration) model, for buildings subjected to different types of ground motions.

For the structural system, the PFA model covers modern steel and reinforced concrete moment-resisting frame buildings (and potentially reinforced concrete shear-wall buildings). For ground motions, the PFA model covers ramp-pulse-like ground motions, long-period ground motions, and short-period ground motions.

To predict whether a building will collapse in response to a given ground motion, we first extract long-period components from the ground motion using a Butterworth low-pass filter with suggested order and cutoff frequency. The order depends on the type of ground motion, and the cutoff frequency depends on the building's natural frequency and ductility. We then compare the filtered acceleration time history with the capacity of the building. The capacity of the building is a constant for two-dimensional buildings and a limit domain for three-dimensional buildings. If the filtered acceleration exceeds the building's capacity, the building is predicted to collapse; otherwise, it is expected to survive the ground motion.
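The prediction step can be sketched with SciPy's Butterworth filter. The filter order, cutoff frequency, capacity, and the synthetic record below are placeholder values for illustration, not the calibrated values the thesis derives from ground-motion type and building properties.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def pfa_collapse_check(acc, dt, cutoff_hz, order, capacity):
    """Low-pass filter an acceleration record and compare the peak
    filtered acceleration (PFA) against the building's lateral capacity."""
    b, a = butter(order, cutoff_hz, btype="low", fs=1.0 / dt)
    filtered = filtfilt(b, a, acc)  # zero-phase filtering
    pfa = np.max(np.abs(filtered))
    return pfa, pfa > capacity      # True -> predicted collapse

# Illustrative record: a 0.5 Hz long-period pulse buried in broadband noise.
dt = 0.01
t = np.arange(0.0, 20.0, dt)
rng = np.random.default_rng(0)
acc = 0.3 * np.sin(2 * np.pi * 0.5 * t) + 0.2 * rng.standard_normal(t.size)
pfa, collapses = pfa_collapse_check(acc, dt, cutoff_hz=1.0, order=4, capacity=0.25)
```

The low-pass step is what makes the measure insensitive to high-frequency content that, per the model, does not drive collapse.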

The parameters used in the PFA model, which include the fundamental period, global ductility, and lateral capacity, can be obtained either from numerical analysis or by interpolation based on the reference building system proposed in this thesis.

The PFA collapse prediction model greatly reduces computational complexity while achieving good accuracy. It is verified by FEM simulations of 13 frame building models and 150 ground motion records.

Based on the developed collapse prediction model, we propose to use PFA (Peak Filtered Acceleration) as a new ground motion intensity measure for collapse prediction. We compare PFA with traditional intensity measures PGA, PGV, PGD, and Sa in collapse prediction and find that PFA has the best performance among all the intensity measures.

We also provide a closed form of the PFA collapse prediction model in terms of a vector intensity measure (PGV, PGD) for practical collapse risk assessment.

Relevance: 100.00%

Abstract:

This thesis describes simple extensions of the standard model with new sources of baryon number violation but no proton decay. The motivation for constructing such theories comes from the inability of the standard model to explain the generation of the baryon asymmetry of the universe, and from the absence of experimental evidence for proton decay. However, the lack of any direct evidence for baryon number violation in general puts strong bounds on the naturalness of some of these models and favors theories with suppressed baryon number violation below the TeV scale. The initial part of the thesis concentrates on investigating models containing new scalars responsible for baryon number breaking. A model with new color sextet scalars is analyzed in more detail. Apart from generating cosmological baryon number, it gives nontrivial predictions for neutron-antineutron oscillations, the electric dipole moment of the neutron, and neutral meson mixing. The second model discussed in the thesis contains a new scalar leptoquark. Although this model predicts mainly lepton flavor violation and a nonzero electric dipole moment of the electron, it includes, in its original form, baryon number violating nonrenormalizable dimension-five operators triggering proton decay. Imposing an appropriate discrete symmetry forbids such operators. Finally, a supersymmetric model with gauged baryon and lepton numbers is proposed. It provides a natural explanation for proton stability and predicts lepton number violating processes below the supersymmetry breaking scale, which can be tested at the Large Hadron Collider. The dark matter candidate in this model carries baryon number and can be searched for in direct detection experiments as well. The thesis is completed by constructing and briefly discussing a minimal extension of the standard model with gauged baryon, lepton, and flavor symmetries.

Relevance: 100.00%

Abstract:

To develop better catalysts for the cleavage of aryl-X bonds, the mechanism and its individual steps must be investigated in detail. As such studies are difficult at best in catalytic systems, model systems are frequently used. To study aryl-oxygen bond activation, a terphenyl diphosphine scaffold containing an ether moiety in the central arene was designed. The first three chapters of this dissertation focus on studies of the nickel complexes supported by this diphosphine backbone and the research efforts regarding aryl-oxygen bond activation.

Chapter 2 outlines the synthesis of a variety of diphosphine terphenyl ether ligand scaffolds. The metallation of these scaffolds with nickel is described, and the reactivity of the resulting nickel(0) systems is outlined. The systems were typically found to undergo reductive cleavage of the aryl oxygen bond. The mechanism was found to proceed by oxidative addition followed by β-hydride elimination, reductive elimination, and/or decarbonylation.

Chapter 3 presents kinetic studies of aryl oxygen bond activation in the systems outlined in Chapter 2. Using a series of nickel(0) diphosphine terphenyl ether complexes, the kinetics of aryl oxygen bond activation were studied, and the activation parameters of oxidative addition for the model systems were determined. Little variation was observed in the rate and activation parameters of oxidative addition with varying electronics in the model system; this lack of variation is attributed to the ground state and the oxidative addition transition state being affected similarly. Attempts were made to extend this study to catalytic systems.

Chapter 4 investigates aryl oxygen bond activation in the presence of additives. It was found that the addition of certain metal alkyls to the nickel(0) model system led to an increase in the rate of aryl oxygen bond activation. The addition of excess Grignard reagent led to an order-of-magnitude increase in the rate of aryl oxygen bond activation, and the addition of AlMe3 led to a three-order-of-magnitude rate increase. Addition of AlMe3 at -80 °C led to the formation of an intermediate, identified by NOESY correlations as a species in which the AlMe3 is coordinated to the ether moiety of the backbone. The rates and activation parameters of aryl oxygen bond activation in the presence of AlMe3 were also investigated.
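Activation parameters of this kind are conventionally related to rate constants through the Eyring equation. The sketch below, with purely illustrative numbers (the 80 kJ/mol barrier and zero activation entropy are not values from the thesis), shows the scale involved: a three-order-of-magnitude rate increase corresponds to lowering the activation free energy by RT ln(1000), about 17 kJ/mol at room temperature.

```python
import math

KB = 1.380649e-23    # Boltzmann constant, J/K
H  = 6.62607015e-34  # Planck constant, J*s
R  = 8.314462618     # gas constant, J/(mol*K)

def eyring_rate(dH_act, dS_act, T):
    """Eyring rate constant (1/s) from activation enthalpy (J/mol)
    and activation entropy (J/(mol*K)) at temperature T (K)."""
    return (KB * T / H) * math.exp(dS_act / R - dH_act / (R * T))

# Lowering the barrier by R*T*ln(1000) multiplies the rate by exactly 1000.
k_slow = eyring_rate(8.0e4, 0.0, 298.0)
k_fast = eyring_rate(8.0e4 - R * 298.0 * math.log(1000.0), 0.0, 298.0)
```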

The last two chapters involve the study of metalla-macrocycles as ligands. Chapter 5 details the synthesis of a variety of glyoxime backbones and diphenol precursors and their metallation with aluminum. The coordination chemistry of iron on the aluminum scaffolds was investigated. Varying the electronics of the aluminum macrocycle was found to affect the observed electrochemistry of the iron center.

Chapter 6 extends the studies of chapter 5 to cobalt complexes. The synthesis of cobalt dialuminum glyoxime metal complexes is described. The electrochemistry of the cobalt complexes was investigated. The electrochemistry was compared to the observed electrochemistry of a zinc analog to identify the redox activity of the ligand. In the presence of acid the cobalt complexes were found to electrochemically reduce protons to dihydrogen. The electronics of the ancillary aluminum ligands were found to affect the potential of proton reduction in the cobalt complexes. These potentials were compared to other diglyoximate complexes.

Relevance: 100.00%

Abstract:

The following work explores the processes individuals utilize when making multi-attribute choices. With the exception of extremely simple or familiar choices, most decisions we face can be classified as multi-attribute choices. In order to evaluate and make choices in such an environment, we must be able to estimate and weight the particular attributes of an option. Hence, better understanding the mechanisms involved in this process is an important step for economists and psychologists. For example, when choosing between two meals that differ in taste and nutrition, what are the mechanisms that allow us to estimate and then weight attributes when constructing value? Furthermore, how can these mechanisms be influenced by variables such as attention or common physiological states, like hunger?

In order to investigate these and similar questions, we use a combination of choice and attentional data, where the attentional data was collected by recording eye movements as individuals made decisions. Chapter 1 designs and tests a neuroeconomic model of multi-attribute choice that makes predictions about choices, response time, and how these variables are correlated with attention. Chapter 2 applies the ideas in this model to intertemporal decision-making, and finds that attention causally affects discount rates. Chapter 3 explores how hunger, a common physiological state, alters the mechanisms we utilize as we make simple decisions about foods.
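The way attention can causally shift choices is often formalized with attentional sequential-sampling models. The sketch below is a generic accumulator in that spirit, not the specific model of Chapter 1; the discount factor theta, noise level, and all other parameter values are illustrative assumptions.

```python
import random

def attentional_choice(v_left, v_right, theta=0.3, drift_scale=0.002,
                       noise=0.02, p_look_left=0.5, threshold=1.0, seed=0):
    """One trial of an attention-weighted evidence accumulator: the value of
    the unattended option is discounted by theta, so where the agent looks
    biases the drift and, ultimately, the choice and response time."""
    rng = random.Random(seed)
    evidence, rt = 0.0, 0
    while abs(evidence) < threshold:
        if rng.random() < p_look_left:     # fixate left this time step
            drift = drift_scale * (v_left - theta * v_right)
        else:                              # fixate right
            drift = drift_scale * (theta * v_left - v_right)
        evidence += drift + rng.gauss(0.0, noise)
        rt += 1
    return ("left" if evidence > 0 else "right"), rt
```

Because fixations multiply into the drift, manipulating gaze durations changes choice probabilities even with values held fixed, which is the kind of causal attentional effect the chapters test.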

Relevance: 100.00%

Abstract:

Identifying the molecular inputs necessary for cell behavior is vital to our understanding of development and disease. Proper cell behavior is necessary for processes ranging from creating one's face (neural crest migration) to spreading cancer from one tissue to another (invasive metastatic cancers). Identifying the genes and tissues involved in cell behavior not only increases our understanding of biology but also has the potential to create targeted therapies for diseases hallmarked by aberrant cell behavior.

A well-characterized model system is key to determining the molecular and spatial inputs necessary for cell behavior. In this work I present the C. elegans uterine seam cell (utse) as an ideal model for studying cell outgrowth and shape change. The utse is an H-shaped cell within the hermaphrodite uterus that functions in attaching the uterus to the body wall. Over the L4 larval stage, the utse grows bidirectionally along the anterior-posterior axis, changing from an ellipsoidal shape to an elongated H-shape. Spatially, the utse requires the presence of the uterine toroid cells, sex muscles, and the anchor cell nucleus in order to properly grow outward. Several gene families are involved in utse development, including Trio, Nav, Rab GTPases, and Arp2/3, as well as 54 other genes identified in a candidate RNAi screen. The utse can also be used as a model system for studying metastatic cancer. Meprin proteases are involved in promoting invasiveness of metastatic cancers, and the meprin-like genes nas-21, nas-22, and toh-1 act similarly within the utse. Studying nas-21 activity has also led to the discovery of novel upstream inhibitors and activators as well as targets of nas-21, some of which have been characterized to affect meprin activity. This illustrates that the utse can be used as an in vivo model for learning more about meprins, as well as various other proteins involved in metastasis.

Relevance: 100.00%

Abstract:

An array of two spark chambers and six trays of plastic scintillation counters was used to search for unaccompanied fractionally charged particles in cosmic rays near sea level. No acceptable events were found with energy losses by ionization between 0.04 and 0.7 that of unit-charged minimum-ionizing particles. New 90%-confidence upper limits were thereby established for the fluxes of fractionally charged particles in cosmic rays, namely, (1.04 ± 0.07) × 10^-10 and (2.03 ± 0.16) × 10^-10 cm^-2 sr^-1 sec^-1 for minimum-ionizing particles with charges 1/3 and 2/3, respectively.
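The logic of a zero-event upper limit can be sketched directly. When no events are observed, the 90%-confidence Poisson upper limit on the expected count is -ln(1 - 0.90) ≈ 2.30, and the flux limit follows by dividing by the exposure. The acceptance, livetime, and efficiency values below are hypothetical placeholders, not the experiment's actual exposure.

```python
import math

def flux_upper_limit(n_obs, acceptance_cm2_sr, livetime_s, efficiency, cl=0.90):
    """Upper limit on a particle flux (cm^-2 sr^-1 s^-1) when n_obs events
    survive all cuts. For n_obs = 0 the Poisson upper limit on the mean
    count at confidence level cl is -ln(1 - cl)."""
    if n_obs != 0:
        raise NotImplementedError("solve the Poisson sum numerically for n_obs > 0")
    mu_up = -math.log(1.0 - cl)  # ~2.30 events at 90% CL
    return mu_up / (acceptance_cm2_sr * livetime_s * efficiency)

# Hypothetical exposure for illustration only:
limit = flux_upper_limit(0, acceptance_cm2_sr=2.0e3, livetime_s=1.0e7, efficiency=0.99)
```

Larger exposure (acceptance × livetime × efficiency) drives the limit down proportionally, which is why the chamber-efficiency tests described below matter for the quoted numbers.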

In order to be certain that the spark chambers could have functioned for the low levels of ionization expected from particles with small fractional charges, tests were conducted to estimate the efficiency of the chambers as they had been used in this experiment. These tests showed that the spark-chamber system with the track-selection criteria used might have been over 99% efficient for the entire range of energy losses considered.

Lower limits were then obtained for the mass of a quark by considering the above flux limits and a particular model for the production of quarks in cosmic rays. In this model, which is one involving the multi-peripheral Regge hypothesis, the production cross section and a corresponding mass limit are critically dependent on the Regge trajectory assigned to a quark. If quarks are "elementary" with a flat trajectory, the mass of a quark can be expected to be at least 6 ± 2 BeV/c^2. If quarks have a trajectory with unit slope, just as the existing hadrons do, the mass of a quark might be as small as 1.3 ± 0.2 BeV/c^2. For a trajectory with unit slope and a mass larger than a couple of BeV/c^2, the production cross section may be so low that quarks might never be observed in nature.

Relevance: 100.00%

Abstract:

This thesis aims to enhance our fundamental understanding of the East Asian summer monsoon (EASM) and the mechanisms implicated in its climatology in present-day and warmer climates. We focus on the most prominent feature of the EASM, the so-called Meiyu-Baiu (MB), which is characterized by a well-defined, southwest-to-northeast elongated quasi-stationary rainfall band spanning from eastern China to Japan and into the northwestern Pacific Ocean in June and July.

We begin with an observational study of the energetics of the MB front in present-day climate. Analyses of the moist static energy (MSE) budget of the MB front indicate that horizontal advection of moist enthalpy, primarily of dry enthalpy, sustains the front in a region of otherwise negative net energy input into the atmospheric column. A decomposition of the horizontal dry enthalpy advection into mean, transient, and stationary eddy fluxes identifies the longitudinal thermal gradient due to zonal asymmetries and the meridional stationary eddy velocity as the most influential factors determining the pattern of horizontal moist enthalpy advection. Numerical simulations in which the Tibetan Plateau (TP) is either retained or removed show that the TP influences the stationary enthalpy flux, and hence the MB front, primarily by changing the meridional stationary eddy velocity, with reinforced southerly wind on the northwestern flank of the north Pacific subtropical high (NPSH) over the MB region and northerly wind to its north. Changes in the longitudinal thermal gradient are mainly confined to the near downstream of the TP, with the resulting changes in zonal warm air advection having a lesser impact on the rainfall in the extended MB region.
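The mean/stationary-eddy/transient-eddy decomposition used in such budget analyses can be sketched in a few lines of NumPy. This is a generic illustration of the standard decomposition on (time, lat, lon) fields, not the thesis's full moist static energy budget (which also involves vertical integration and the net energy input term).

```python
import numpy as np

def eddy_decomposition(v, h):
    """Split the meridional enthalpy flux v*h (arrays shaped (time, lat, lon))
    into mean-circulation, stationary-eddy, and transient-eddy components."""
    v_bar, h_bar = v.mean(axis=0), h.mean(axis=0)          # time means
    v_zm = v_bar.mean(axis=-1, keepdims=True)              # zonal means
    h_zm = h_bar.mean(axis=-1, keepdims=True)
    mean_flux = v_zm * h_zm                                # mean circulation
    stationary = (v_bar - v_zm) * (h_bar - h_zm)           # stationary eddies
    transient = ((v - v_bar) * (h - h_bar)).mean(axis=0)   # transient eddies
    return mean_flux, stationary, transient
```

By construction, the zonal-time mean of the total flux equals the sum of the three components' zonal means, so the decomposition attributes all of the advection to one of the terms discussed above.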

Similar mechanisms are shown to be implicated in present climate simulations in the Coupled Model Intercomparison Project Phase 5 (CMIP5) models. We find that the spatial distribution of the EASM precipitation simulated by different models is highly correlated with the meridional stationary eddy velocity. The correlation becomes more robust when energy fluxes into the atmospheric column are considered, consistent with the observational analyses. The spread in the area-averaged rainfall amount can be partially explained by the spread in the simulated globally-averaged precipitation, with the rest primarily due to the lower-level meridional wind convergence. Clear relationships between precipitation and zonal and meridional eddy velocities are observed.

Finally, the response of the EASM to greenhouse gas forcing is investigated at different time scales in CMIP5 model simulations. The reduction of radiative cooling and the increase in continental surface temperature occur much more rapidly than changes in sea surface temperatures (SSTs). Without changes in SSTs, the rainfall in the monsoon region decreases (increases) over ocean (land) in most models. On longer time scales, as SSTs increase, rainfall changes are opposite. The total response to atmospheric CO2 forcing and subsequent SST warming is a large (modest) increase in rainfall over ocean (land) in the EASM region. Dynamic changes, in spite of significant contributions from the thermodynamic component, play an important role in setting up the spatial pattern of precipitation changes. Rainfall anomalies over East China are a direct consequence of local land-sea contrast, while changes in the larger-scale oceanic rainfall band are closely associated with the displacement of the larger-scale NPSH. Numerical simulations show that topography and SST patterns play an important role in rainfall changes in the EASM region.

Relevance: 100.00%

Abstract:

STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created as a means to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, fixity, etc., into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. The productivity of the researcher as well as the level of confidence in the model being analyzed is greatly increased.

It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferred. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the utilization of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed for more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.

In order to increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, it was decided to repeat the comparisons in a software package better suited to highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free vibration analyses for the three-story one-bay chevron-brace frame, the two-bay chevron-brace frame, and the twenty-story moment frame could not be conducted. With the current trend towards ultimate capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
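Extracting damping from a free vibration analysis is typically done with the logarithmic decrement of successive response peaks. The sketch below shows that standard identity (delta = 2*pi*zeta / sqrt(1 - zeta^2) per cycle); the 2% damping value and peak series are synthetic, not results from the STEEL/ETABS comparisons.

```python
import numpy as np

def damping_from_free_vibration(peaks):
    """Damping ratio from successive free-vibration peak amplitudes via the
    logarithmic decrement delta = ln(p_k / p_{k+1})."""
    peaks = np.asarray(peaks, dtype=float)
    delta = np.mean(np.log(peaks[:-1] / peaks[1:]))  # average over cycles
    return delta / np.sqrt(4.0 * np.pi**2 + delta**2)

# Synthetic 2%-damped response: each peak shrinks by a fixed ratio per cycle.
zeta = 0.02
ratio = np.exp(2.0 * np.pi * zeta / np.sqrt(1.0 - zeta**2))
peaks = [1.0 / ratio**k for k in range(6)]
zeta_est = damping_from_free_vibration(peaks)  # recovers ~0.02
```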

Following this, a final study was made of Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis tool failed to converge following the onset of inelastic behavior. However, for the small number of time steps over which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when using fiber elements throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not match that of STEEL exactly, particularly post-collapse. Such problems could be alleviated by choosing a simpler material model.

Relevance: 100.00%

Abstract:

Hair cells from the bullfrog's sacculus, a vestibular organ responding to substrate-borne vibration, possess electrically resonant membrane properties which maximize the sensitivity of each cell to a particular frequency of mechanical input. The electrical resonance of these cells and its underlying ionic basis were studied by applying gigohm-seal recording techniques to solitary hair cells enzymatically dissociated from the sacculus. The contribution of electrical resonance to frequency selectivity was assessed from microelectrode recordings from hair cells in an excised preparation of the sacculus.

Electrical resonance in the hair cell is demonstrated by damped membrane-potential oscillations in response to extrinsic current pulses applied through the recording pipette. This response is analyzed as that of a damped harmonic oscillator. Oscillation frequency rises with membrane depolarization, from 80-160 Hz at resting potential to asymptotic values of 200-250 Hz. The sharpness of electrical tuning, denoted by the electrical quality factor, Qe, is a bell-shaped function of membrane voltage, reaching a maximum value around eight at a membrane potential slightly positive to the resting potential.
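For a damped harmonic oscillator whose envelope decays with time constant tau, the quality factor is Q = pi * f * tau. The numbers below (125 Hz, 20 ms) are illustrative choices within the reported ranges, showing how an oscillation frequency and decay time combine to give a Qe near the maximum of about eight quoted above.

```python
import math

def quality_factor(freq_hz, tau_s):
    """Q of a damped harmonic oscillator: Q = pi * f * tau, where tau is the
    1/e decay time of the oscillation envelope (Q = omega / (2 * gamma)
    with amplitude decay rate gamma = 1 / tau)."""
    return math.pi * freq_hz * tau_s

# A cell ringing at 125 Hz whose envelope decays with tau = 20 ms:
Qe = quality_factor(125.0, 0.020)  # ~7.85
```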

In whole cells, three time-variant ionic currents are activated at voltages more positive than -60 to -50 mV; these are identified as a voltage-dependent, non-inactivating Ca current (I_Ca), a voltage-dependent, transient K current (I_A), and a Ca-dependent K current (I_C). The C channel is identified in excised, inside-out membrane patches on the basis of its large conductance (130-200 pS), its selective permeability to K+ over Na+ or Cl-, and its activation by internal Ca ions and membrane depolarization. Analysis of open- and closed-lifetime distributions suggests that the C channel can assume at least two open and three closed kinetic states.

Exposing hair cells to external solutions that inhibit the Ca or C conductances degrades the electrical resonance properties measured under current-clamp conditions, while blocking the A conductance has no significant effect, providing evidence that only the Ca and C conductances participate in the resonance mechanism. To test the sufficiency of these two conductances to account for electrical resonance, a mathematical model is developed that describes I_Ca, I_C, and intracellular Ca concentration during voltage-clamp steps. I_Ca activation is approximated by a third-order Hodgkin-Huxley kinetic scheme. Ca entering the cell is assumed to be confined to a small submembrane compartment which contains an excess of Ca buffer; Ca leaves this space with first-order kinetics. The Ca- and voltage-dependent activation of C channels is described by a five-state kinetic scheme suggested by the results of single-channel observations. Parameter values in the model are adjusted to fit the waveforms of I_Ca and I_C evoked by a series of voltage-clamp steps in a single cell. Having been thus constrained, the model correctly predicts the character of voltage oscillations produced by current-clamp steps, including the dependencies of oscillation frequency and Qe on membrane voltage. The model shows quantitatively how the Ca and C conductances interact, via changes in intracellular Ca concentration, to produce electrical resonance in a vertebrate hair cell.