13 results for Economic conversion
in CaltechTHESIS
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or of other agents' actions. The optimal response to these shocks necessarily depends on agents' understanding of how the shocks arise. The economic environment in the first two chapters is analogous to the classic chain store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment with supply and demand shocks.
The first chapter is a game-theoretic investigation of a reputation-building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Including this belief in the equilibrium calculation yields a richer class of reputation-building possibilities than when perfect implementation is assumed.
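To make the role of implementation error concrete, the sketch below shows how a single observed action updates an entrant's belief about the incumbent's type when every intended action is flipped with some small probability. It is only an illustration of the tremble idea under assumed parameters (the type structure, strategies, and error rate are ours), not the dissertation's model or its estimates.

```python
def observed_fight_prob(intend_fight_prob, error_rate):
    """Probability that 'fight' is observed when the intended action is
    flipped with probability error_rate."""
    return intend_fight_prob * (1 - error_rate) + (1 - intend_fight_prob) * error_rate


def update_reputation(prior_tough, weak_fight_prob, error_rate, observed):
    """Bayesian update of the belief that the incumbent is the tough type,
    assuming (for illustration) that the tough type always intends to fight
    and the weak type intends to fight with probability weak_fight_prob."""
    p_fight_tough = observed_fight_prob(1.0, error_rate)
    p_fight_weak = observed_fight_prob(weak_fight_prob, error_rate)
    if observed == "fight":
        numerator = p_fight_tough * prior_tough
        denominator = numerator + p_fight_weak * (1 - prior_tough)
    else:  # "acquiesce"
        numerator = (1 - p_fight_tough) * prior_tough
        denominator = numerator + (1 - p_fight_weak) * (1 - prior_tough)
    return numerator / denominator


# With trembles, an observed acquiescence damages but does not destroy reputation:
print(update_reputation(prior_tough=0.3, weak_fight_prob=0.5,
                        error_rate=0.05, observed="acquiesce"))
```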
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model and other models against data from experiments, run by other researchers, that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for the interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect foresight assumption is an important place to start when examining these environments because it will produce the ex post socially efficient allocation of goods. This characteristic makes this a natural baseline to which other models of price dynamics could be compared.
Abstract:
In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected, despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings. This model improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test this theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to why extreme-valued ratings are so prevalent in field implementations. We show theoretically that, in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, these are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency even though such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than a theory of rational, Bayesian behavior does, accurately predicting key comparative statics. However, the theories fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
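As context for the quantal response equilibrium fit mentioned above, the snippet below implements the standard logit choice rule that underlies QRE: choice probabilities are proportional to the exponential of a rationality parameter times expected payoff. The payoff numbers and the rationality value here are purely illustrative assumptions, not estimates from these experiments.

```python
import math

def logit_choice_probs(expected_payoffs, rationality):
    """Logit quantal response: each option is chosen with probability
    proportional to exp(rationality * expected payoff).  rationality -> 0
    gives uniform noise; rationality -> infinity approaches best response."""
    weights = [math.exp(rationality * u) for u in expected_payoffs]
    total = sum(weights)
    return [w / total for w in weights]

# Illustrative expected payoffs for {abstain, rate sincerely, exaggerate}:
print(logit_choice_probs([0.0, 0.4, 0.1], rationality=3.0))
```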
Abstract:
Future fossil fuel scarcity and environmental degradation have demonstrated the need for renewable, low-carbon sources of energy to power an increasingly industrialized world. Solar energy, with its effectively unlimited supply, is an extraordinary resource that should not go unused. With current materials, however, adoption is limited by cost, so a paradigm shift is needed before solar technology is broadly embraced. Cuprous oxide (Cu2O) is a promising earth-abundant material that could serve as an alternative to traditional thin-film photovoltaic materials such as CIGS and CdTe. We have prepared Cu2O bulk substrates by the thermal oxidation of copper foils, as well as Cu2O thin films deposited via plasma-assisted molecular beam epitaxy. Preliminary Hall measurements indicated that Cu2O would need to be doped extrinsically; this was further confirmed by simulations of ZnO/Cu2O heterojunctions. A cyclic interdependence among defect concentration, minority carrier lifetime, film thickness, and carrier concentration emerges as a primary reason why efficiencies greater than 4% have yet to be realized. Our growth methodology for thin-film heterostructures allows precise control of the defects incorporated into the film during both equilibrium and nonequilibrium growth. We also report the process flow, device design, and fabrication techniques used to create a device. A typical device, without any optimization, exhibited open-circuit voltages (Voc) in excess of 500 mV, nearly 18% greater than previous solid-state devices.
Abstract:
Threefold-symmetric Fe phosphine complexes have been used to model structural and functional aspects of biological N2 fixation by nitrogenases. Low-valent bridging Fe-S-Fe complexes in the formal oxidation states Fe(II)/Fe(II), Fe(II)/Fe(I), and Fe(I)/Fe(I) have been synthesized and display rich spectroscopic and magnetic behavior. A series of cationic tris-phosphine borane (TPB) ligated Fe complexes have been synthesized and shown to bind a variety of nitrogenous ligands including N2H4, NH3, and NH2.
Treatment of an anionic FeN2 complex with excess acid also results in the formation of some NH3, suggesting the possibility of a catalytic cycle for the conversion of N2 to NH3 mediated by Fe. Indeed, the use of excess acid and reductant results in the formation of seven equivalents of NH3 per Fe center, demonstrating Fe-mediated catalytic N2 fixation with acid and reductant for the first time. Numerous control experiments indicate that this catalysis is likely mediated by a molecular species.
A number of other phosphine-ligated Fe complexes have also been tested for catalysis; the results suggest that a hemilabile Fe-B interaction may be critical for catalysis. Additionally, various conditions for the catalysis have been investigated. These studies further support the assignment of a molecular species as the catalyst and delineate some of the conditions required for catalysis.
Finally, combined spectroscopic studies have been performed on a putative intermediate of the catalysis. These studies converge on an assignment of this new species as a hydrazido(2-) complex. Such species have been known on group 6 metals for some time, but this represents the first characterization of this ligand on Fe. Further spectroscopic studies indicate that this species is present in catalytic mixtures, suggesting that the first steps of a distal mechanism for N2 fixation are feasible in this system.
Abstract:
Nanostructured tungsten trioxide (WO3) photoelectrodes are potential candidates for the anodic portion of an integrated solar water-splitting device that generates hydrogen fuel and oxygen from water. These nanostructured materials can potentially offer improved performance in photooxidation reactions relative to unstructured materials because of enhanced light scattering, increased surface area, and the decoupling of the directions of light absorption and carrier collection. To evaluate the presence of these effects and their contributions to energy-conversion efficiency, a variety of nanostructured WO3 photoanodes were synthesized by electrodeposition within nanoporous templates and by anodization of tungsten foils. A robust fabrication process was developed for the creation of oriented WO3 nanorod arrays, which allows control of nanorod diameter and length. Films of nanostructured WO3 platelets were grown via anodization, the morphology of the films was controlled by the anodization conditions, and the current-voltage performance and spectral response properties of these films were studied. The observed photocurrents were consistent with the apparent morphologies of the nanostructured arrays. Measurements of electrochemically active surface area and other physical characteristics were correlated with observed differences in absorbance, external quantum yield, and photocurrent density for the anodized arrays. The capability to quantify these characteristics and relate them to photoanode performance metrics allows appropriate structural parameters to be selected when designing photoanodes for solar energy conversion.
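The quantities compared above are linked by the standard relation between external quantum yield and photocurrent: the photocurrent density is the elementary charge times the incident photon flux, weighted by the quantum yield and integrated over wavelength. The sketch below evaluates that integral numerically; the wavelength range, photon flux, and quantum-yield profile are placeholder assumptions, not data from this work.

```python
import numpy as np

q = 1.602176634e-19  # elementary charge, C

# Hypothetical wavelength grid (nm), photon flux (photons m^-2 s^-1 nm^-1),
# and external quantum yield; real values would come from a measured spectral
# response and a reference solar spectrum such as AM1.5G.
wavelength_nm = np.linspace(300, 480, 50)               # WO3 absorbs mainly in the near-UV/blue
photon_flux = np.full_like(wavelength_nm, 3.0e17)       # placeholder flux per nm
eqe = np.clip(1.0 - (wavelength_nm - 300) / 180, 0, 1)  # placeholder quantum yield

# Photocurrent density (A/m^2): charge * flux * EQE, integrated over wavelength.
j_ph = q * np.trapz(photon_flux * eqe, wavelength_nm)
print(f"predicted photocurrent density: {j_ph:.1f} A/m^2")
```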
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector of total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. The controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels (1300 tons/day RHC and 1000 tons/day NOx in 1969; 670 tons/day RHC and 790 tons/day NOx at the 1975 base level) can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
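As a sketch of the least-cost calculation described above, the linear program below chooses the fraction of each control measure to apply so as to minimize annualized cost subject to reaching target emission levels. The base 1975 emissions are taken from the abstract; the control costs, removal capacities, and emission targets are hypothetical placeholders rather than the thesis figures.

```python
from scipy.optimize import linprog

# Each control i has an annualized cost (arbitrary $M/yr) and removes
# (RHC, NOx) tons/day when fully applied; x[i] in [0, 1] is the fraction applied.
costs = [40.0, 25.0, 60.0]                 # used cars, aircraft, stationary sources (hypothetical)
removal_rhc = [250.0, 60.0, 120.0]
removal_nox = [80.0, 30.0, 240.0]
base_rhc, base_nox = 670.0, 790.0          # base 1975 emissions, tons/day (from the abstract)
target_rhc, target_nox = 400.0, 600.0      # hypothetical desired emission levels

# Constraint: base - removal @ x <= target   =>   -removal @ x <= target - base
A_ub = [[-r for r in removal_rhc], [-r for r in removal_nox]]
b_ub = [target_rhc - base_rhc, target_nox - base_nox]

res = linprog(costs, A_ub=A_ub, b_ub=b_ub, bounds=[(0, 1)] * 3)
print(res.x, res.fun)  # least-cost mix of controls and its annualized cost
```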
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions, (e. g., that emissions are reduced proportionately at all points in space and time). For NO2, (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles, (55 in 1969), can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons /day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively, (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
Part I: An approach to the total synthesis of the triterpene shionone is described, which proceeds through the tetracyclic ketone i. The shionone side chain has been attached to this key intermediate in 5 steps, affording the olefin 2 in 29% yield. A method for the stereospecific introduction of the angular methyl group at C-5 of shionone has been developed on a model system. The attempted utilization of this method to convert olefin 2 into shionone is described.
Part II: A method has been developed for activating the C-9 and C-10 positions of estrogenic steroids for substitution. Estrone has been converted to 4β,5β-epoxy-10β-hydroxyestr-3-one; cleavage of this epoxyketone using an Eschenmoser procedure, and subsequent modification of the product afforded 4-seco-9-estren-3,5-dione 3-ethylene acetal. This versatile intermediate, suitable for substitution at the 9 and/or 10 position, was converted to androst-4-ene-3-one by known procedures.
Abstract:
The warm-plasma resonance cone structure of the quasistatic field produced by a gap source in a bounded, magnetized slab plasma is determined theoretically. This is first done for a homogeneous or mildly inhomogeneous plasma with source frequency lying between the lower hybrid frequency and the plasma frequency. It is then extended to the more complicated case of an inhomogeneous plasma with two internal lower hybrid layers present, which is of interest for radio-frequency heating of plasmas.
In the first case, the potential is obtained as a sum of multiply reflected warm plasma resonance cones, each of which has a similar structure, but a different size, amplitude, and position. An important interference between nearby multiply-reflected resonance cones is found. The cones are seen to spread out as they move away from the source, so that this interference increases and the individual resonance cones become obscured far away from the source.
In the second case, the potential is found to be expressible as a sum of multiply-reflected, multiply-tunnelled, and mode-converted resonance cones, each of which has a unique but similar structure. The effects of both collisional and collisionless damping are included, and their influence on the decay of the cone structure is studied. Various properties of the cones are determined, such as how they move into and out of the hybrid layers, pass through the evanescent region, and transform at the hybrid layers. It is found that cones can tunnel through the evanescent layer if the layer is thin, and the effect of a thin evanescent layer is to subdue the secondary maxima of the cone relative to the main peak, while slightly broadening the main peak and shifting it closer to the cold-plasma cone line.
Energy theorems for quasistatic fields are developed and applied to determine the power flow and absorption along the individual cones. This reveals the points of concentration of the flow and the various absorption mechanisms.
Abstract:
Experimental demonstrations and theoretical analyses of a new electromechanical energy conversion process, made feasible only by the unique properties of superconductors, are presented in this dissertation. This energy conversion process is characterized by a highly efficient direct transformation of microwave energy into mechanical energy, or vice versa, and can be achieved at high power levels. It is an application of a well-established physical principle known as the adiabatic theorem (Boltzmann-Ehrenfest theorem); in this case, time-dependent superconducting boundaries provide the necessary interface between the microwave energy on one hand and the mechanical work on the other. The mechanism that brings about the conversion is another known phenomenon, the Doppler effect. The resonant frequency of a superconducting resonator undergoes continuous infinitesimal shifts when the resonator boundaries are adiabatically changed in time by an external mechanical mechanism. These small frequency shifts can accumulate coherently over an extended period of time to produce a macroscopic shift when the resonator remains resonantly excited throughout the process. In addition, the electromagnetic energy inside the resonator, which is proportional to the oscillation frequency, changes accordingly, so that a direct conversion between electromagnetic and mechanical energies takes place. The intrinsically high efficiency of this process arises because the conversion proceeds through electromechanical interactions rather than a thermodynamic process, and it is therefore not limited by thermodynamic efficiency bounds.
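The statement that the stored electromagnetic energy tracks the oscillation frequency is the adiabatic invariance of E/ω for a slowly tuned resonator; a minimal statement of the resulting energy balance (standard physics, with notation introduced here for illustration) is:

```latex
% Adiabatic invariant for a slowly tuned resonator and the implied
% electromechanical energy exchange (notation ours, for illustration only).
\[
  \frac{E_{\mathrm{em}}}{\omega} = \text{const}
  \quad\Longrightarrow\quad
  \frac{\Delta E_{\mathrm{em}}}{E_{\mathrm{em}}} = \frac{\Delta \omega}{\omega},
\]
\[
  \Delta E_{\mathrm{em}} \;=\; E_{\mathrm{em}}\,\frac{\Delta \omega}{\omega}
  \;=\; W_{\mathrm{mech}},
\]
% so mechanical work done in raising (or extracted in lowering) the resonant
% frequency appears directly as a gain (or loss) of stored microwave energy.
```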
A highly reentrant superconducting resonator resonating in the range of 90 to 160 MHz was used to demonstrate this new conversion technique. The resonant frequency was mechanically modulated at a rate of two kilohertz. Experimental results showed that the time evolution of the electromagnetic energy inside this frequency-modulated (FM) superconducting resonator indeed behaved as predicted, demonstrating the unique features of this process. A proposed use of FM superconducting resonators as electromechanical energy conversion devices is given, along with some practical design considerations. Such a device appears very promising for producing high-power (~10 W/cm^3) microwave energy at 10 to 30 GHz.
A weakly coupled FM resonator system is also studied analytically for its potential applications. This system shows an interesting switching characteristic by which the spatial distribution of microwave energy can be manipulated by external means. It was found that if the modulation is properly applied, a high degree (>95%) of unidirectional energy transfer from one resonator to the other can be accomplished. Applications of this characteristic to high-efficiency energy-switching devices and high-power microwave pulse generators also appear feasible with present superconducting technology.
Abstract:
No abstract.
Abstract:
Electric dipole internal conversion has been experimentally studied for several nuclei in the rare earth region. Anomalies in the conversion process have been interpreted in terms of nuclear structure effects. It was found that all the experimental results could be interpreted in terms of the j ∙ r type of penetration matrix element; the j ∙ ∇ type of penetration matrix element was not important. The ratio λ of the E1 j ∙ r penetration matrix element to the E1 gamma-ray matrix element was determined from the experiments to be:
Lu175: 396 keV, λ = -1000 ± 100;
282 keV, λ = 500 ± 100;
144 keV, λ = 500 ± 250;
Hf177: 321 keV, λ = -1400 ± 200;
208 keV, λ = -90 ± 40;
72 keV, |λ| ≤ 650;
Gd155: 86 keV, λ = -150 ± 100;
Tm169: 63 keV, λ = -100 ± 100;
W182: 152 keV, λ = -160 ± 80;
67 keV, λ = -100 ± 100.
Predictions for λ are made using the unified nuclear model.
Abstract:
STEEL, the Caltech-created nonlinear large-displacement analysis software, is currently used by a large number of researchers at Caltech. However, due to its complexity and its lack of visualization tools (such as pre- and post-processing capabilities), rapid creation and analysis of models with this software was difficult. SteelConverter was created to facilitate model creation through the use of the industry-standard finite element solver ETABS. This software allows users to create models in ETABS and intelligently convert model information such as geometry, loading, releases, and fixity into a format that STEEL understands. Models that would take several days to create and verify now take several hours or less. Both the productivity of the researcher and the level of confidence in the model being analyzed are greatly increased.
It has always been a major goal of Caltech to spread the knowledge created here to other universities. However, due to the complexity of STEEL it was difficult for researchers or engineers from other universities to conduct analyses. While SteelConverter did help researchers at Caltech improve their research, sending SteelConverter and its documentation to other universities was less than ideal. Issues of version control, individual computer requirements, and the difficulty of releasing updates made a more centralized solution preferable. This is where the idea for Caltech VirtualShaker was born. Through the creation of a centralized website where users could log in, submit, analyze, and process models in the cloud, all of the major concerns associated with the use of SteelConverter were eliminated. Caltech VirtualShaker allows users to create profiles where defaults associated with their most commonly run models are saved, and allows them to submit multiple jobs to an online virtual server to be analyzed and post-processed. The creation of this website not only allowed more rapid distribution of this tool, but also created a means for engineers and researchers with no access to powerful computer clusters to run computationally intensive analyses without the excessive cost of building and maintaining a computer cluster.
To increase confidence in the use of STEEL as an analysis system, as well as to verify the conversion tools, a series of comparisons was made between STEEL and ETABS. Six models of increasing complexity, ranging from a cantilever column to a twenty-story moment frame, were analyzed to determine the ability of STEEL to accurately calculate basic model properties, such as elastic stiffness and damping through a free-vibration analysis, as well as more complex structural properties, such as overall structural capacity through a pushover analysis. These analyses showed very strong agreement between the two programs on every aspect of each analysis. However, they also showed the ability of the STEEL analysis algorithm to converge at significantly larger drifts than ETABS when using the more computationally expensive and structurally realistic fiber hinges. Following the ETABS analysis, the comparisons were repeated in a program more capable of conducting highly nonlinear analysis, Perform. These analyses again showed very strong agreement between the two programs in every aspect of each analysis through instability. However, due to some limitations in Perform, free-vibration analyses for the three-story one-bay chevron brace frame, the two-bay chevron brace frame, and the twenty-story moment frame could not be conducted. With the current trend toward ultimate-capacity analysis, the ability to use fiber-based models allows engineers to gain a better understanding of a building's behavior under these extreme load scenarios.
Following this, a final study was done on Hall's U20 structure [1], in which the structure was analyzed in all three programs and the results compared. The pushover curves from each program were compared and the differences caused by variations in software implementation explained. From this, conclusions can be drawn on the effectiveness of each analysis tool when attempting to analyze structures through the point of geometric instability. The analyses show that while ETABS was capable of accurately determining the elastic stiffness of the model, the analysis failed to converge following the onset of inelastic behavior. However, for the small number of time steps over which the ETABS analysis did converge, its results exactly matched those of STEEL, leading to the conclusion that ETABS is not an appropriate analysis package for analyzing a structure through the point of collapse when fiber elements are used throughout the model. The analyses also showed that while Perform was capable of calculating the response of the structure accurately, restrictions in its material model resulted in a pushover curve that did not exactly match that of STEEL, particularly post-collapse. However, such problems could be alleviated by choosing a simpler material model.
Abstract:
Experimental studies of nuclear effects in internal conversion in Ta181 and Lu175 have been performed. Nuclear structure effects (“penetration” effects) in internal conversion are described in general. Calculations of theoretical conversion coefficients are outlined. Comparisons are made with the theoretical conversion coefficient tables of Rose and of Sliv and Band. Discrepancies between our results and those of Rose are noted; the theoretical conversion coefficients of Sliv and Band are in substantially better agreement with our results than are those of Rose. The ratio of the M1 penetration matrix element to the M1 gamma-ray matrix element, called λ, is equal to +175 ± 25 for the 482 keV transition in Ta181. The results for the 343 keV transition in Lu175 indicate that λ may be as large as -8 ± 5. These transitions are discussed in terms of the unified collective model. Precision L-subshell measurements in Tm169 (130 keV), W182 (100 keV), and Ta181 (133 keV) show definite systematic deviations from the theoretical conversion coefficients. The possibility of explaining these deviations by penetration effects is investigated and shown to be excluded. Other explanations of these anomalies are discussed.