15 results for second-best investments in CaltechTHESIS


Relevance:

30.00%

Publisher:

Abstract:

Government procurement of a new good or service is a process that usually includes basic research, development, and production. Empirical evidence indicates that investments in research and development (R and D) before production are significant in many defense procurements. Thus, an optimal procurement policy should not only select the most efficient producer but also induce the contractors to design the best product and develop the best technology. It is difficult to apply the current economic theory of optimal procurement and contracting, which has emphasized production but ignored R and D, to many cases of procurement.

In this thesis, I provide basic models of both R and D and production in the procurement process, where a number of firms invest in private R and D and compete for a government contract. R and D is modeled as a stochastic cost-reduction process. The government is considered both as a profit maximizer and as a procurement cost minimizer. In comparison to the literature, the following results derived from my models are significant. First, R and D matters in procurement contracting. When offering the optimal contract, the government will be better off if it correctly takes into account costly private R and D investment. Second, competition matters. The optimal contract and the total equilibrium R and D expenditures vary with the number of firms. The government usually does not prefer infinite competition among firms; instead, it prefers free entry of firms. Third, under an R and D technology with constant marginal returns to scale, it is socially optimal to have a single firm conduct all of the R and D and production. Fourth, in an independent private values environment with risk-neutral firms, an informed government should select one of four standard auction procedures with an appropriate announced reserve price, acting as if it does not have any private information.

Relevance:

20.00%

Publisher:

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance- and feature-based, shape-based, and silhouette-based visual cues. A similar framework is developed to fuse not only these visual cues but also kinesthetic cues, such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.

A hybrid estimator is developed to estimate both continuous states (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain the contact-mode probabilities. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation are explored for estimating a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This "next best touch" method selects, from the available actions for interacting with an object, the one that will gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements, such as contact and tactile measurements, are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically of a door handle on a door. The next best touch theory is then extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touch action that best both localizes the object and estimates its parameters. Simulation results are then presented involving localizing a screwdriver and determining one of its parameters.
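The information-gain computation at the heart of a next-best-touch scheme can be sketched as follows. This is a minimal illustration of entropy-based action selection over a discrete belief, not the thesis's implementation; the handle-localization toy problem, probabilities, and function names are all assumptions.

```python
import math

def entropy(belief):
    """Shannon entropy (bits) of a discrete belief (dict: hypothesis -> probability)."""
    return -sum(p * math.log2(p) for p in belief.values() if p > 0)

def expected_info_gain(belief, likelihood, action, outcomes):
    """Expected entropy reduction from taking `action`.

    likelihood(outcome, hypothesis, action) -> P(outcome | hypothesis, action).
    """
    gain = 0.0
    for o in outcomes:
        # Marginal probability of observing outcome o under this action.
        p_o = sum(likelihood(o, h, action) * p for h, p in belief.items())
        if p_o == 0:
            continue
        # Bayesian posterior over hypotheses after observing o.
        post = {h: likelihood(o, h, action) * p / p_o for h, p in belief.items()}
        gain += p_o * (entropy(belief) - entropy(post))
    return gain

def next_best_touch(belief, likelihood, actions, outcomes):
    """Select the touch action with maximal expected information gain."""
    return max(actions, key=lambda a: expected_info_gain(belief, likelihood, a, outcomes))

# Toy problem: two candidate handle positions, two probe locations.
belief = {"left": 0.5, "right": 0.5}

def likelihood(outcome, hypothesis, action):
    # Probing where the handle actually is yields "contact" with high probability.
    p_contact = 0.9 if action == hypothesis else 0.1
    return p_contact if outcome == "contact" else 1.0 - p_contact

best = next_best_touch(belief, likelihood, ["left", "right"], ["contact", "no_contact"])
```

The same skeleton covers the parameter- and class-determination extensions described above: only the hypothesis set over which the belief is maintained changes.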

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best discern between the possible model classes. Simulation results are presented to validate the theory.

Relevance:

20.00%

Publisher:

Abstract:

Neurons in the primate lateral intraparietal area (area LIP) carry visual, saccade-related, and eye position activities. The visual and saccade activities are anchored in a retinotopic framework, and the overall response magnitude is modulated by eye position. It was proposed that the modulation by eye position might be the basis of a distributed coding of target locations in head-centered space. Other recording studies demonstrated that area LIP is involved in oculomotor planning. Overall, these results suggest that area LIP transforms sensory information for motor functions. In this thesis I further explore the role of area LIP in processing saccadic eye movements by observing the effects of reversible inactivation of this area. Macaque monkeys were trained to perform visually guided saccades, memory saccades, and a double saccade task designed to examine the use of the eye position signal. Finally, by intermixing visual saccades with trials in which two targets were presented on opposite sides of the fixation point, I examined visual extinction behavior.

In chapter 2, I will show that lesion of area LIP results in increased latency of contralesional visual and memory saccades. Contralesional memory saccades are also hypometric and slower in velocity. Moreover, the impairment of memory saccades does not vary with the duration of the delay period. This suggests that the oculomotor deficits observed after inactivation of area LIP are not due to the disruption of spatial memory.

In chapter 3, I will show that lesion of area LIP does not severely affect the processing of spontaneous eye movements. However, the monkeys made fewer contralesional saccades and tended to confine their gaze to the ipsilesional field after inactivation of area LIP. In addition, lesion of area LIP results in extinction of the contralesional stimulus. When the initial fixation position was varied so that the retinal and spatial locations of the targets could be dissociated, it was found that the extinction behavior could best be described in head-centered coordinates.

In chapter 4, I will show that inactivation of area LIP disrupts the use of eye position signal to compute the second movement correctly in the double saccade task. If the first saccade steps into the contralesional field, the error rate and latency of the second saccade are both increased. Furthermore, the direction of the first eye movement largely does not have any effect on the impairment of the second saccade. I will argue that this study provides important evidence that the extraretinal signal used for saccadic localization is eye position rather than a displacement vector.

In chapter 5, I will demonstrate that in monkeys with parietal lesions the eye drifts toward the lesion side at the end of the memory saccade in darkness. This result suggests that the eye position activity in the posterior parietal cortex is active in nature and subserves gaze holding.

Overall, these results further support the view that area LIP neurons encode spatial locations in a craniotopic framework and are involved in processing voluntary eye movements.

Relevance:

20.00%

Publisher:

Abstract:

There is a growing amount of experimental evidence suggesting that people often deviate from the predictions of game theory. Some scholars attempt to explain the observations by introducing errors into behavioral models. However, most of these modifications are situation dependent and do not generalize. A new theory, called the rational novice model, is introduced as an attempt to provide a general theory that accounts for erroneous behavior. The rational novice model is based on two central principles. The first is that people systematically make inaccurate guesses when they are evaluating their options in a game-like situation. The second is that people treat their decisions like a portfolio problem. As a result, actions that are non-optimal in a game-theoretic sense may be included in the rational novice strategy profile with positive weights.
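One concrete way to illustrate these two principles is to let a player form noisy guesses of each action's payoff and pick the guessed best, so that the long-run mix of choices (the "portfolio") places positive weight on non-optimal actions whenever noise can flip the ranking. This is an illustrative sketch under assumed payoffs and Gaussian guess noise, not the model's actual specification.

```python
import random

def novice_profile(true_payoffs, noise_sd, n_samples=10000, rng=None):
    """Illustrative mixed strategy: on each draw the player forms noisy guesses
    of every action's payoff and picks the guessed best; the empirical pick
    frequencies form a portfolio that puts positive weight on non-optimal
    actions whenever the noise can flip the payoff ranking."""
    rng = rng or random.Random(0)
    actions = list(true_payoffs)
    counts = {a: 0 for a in actions}
    for _ in range(n_samples):
        guess = {a: true_payoffs[a] + rng.gauss(0, noise_sd) for a in actions}
        counts[max(guess, key=guess.get)] += 1
    return {a: counts[a] / n_samples for a in actions}

# Hypothetical two-action game: "defect" pays slightly more than "cooperate".
profile = novice_profile({"cooperate": 1.0, "defect": 1.2}, noise_sd=0.5)
# The better action is chosen more often, yet the worse one keeps positive weight.
```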

The rational novice model can be divided into two parts: the behavioral model and the equilibrium concept. In a theoretical chapter, the mathematics of the behavioral model and the equilibrium concept are introduced. The existence of the equilibrium is established. In addition, the Nash equilibrium is shown to be a special case of the rational novice equilibrium. In another chapter, the rational novice model is applied to a voluntary contribution game. Numerical methods were used to obtain the solution. The model is estimated with data obtained from the Palfrey and Prisbrey experimental study of the voluntary contribution game. It is found that the rational novice model explains the data better than the Nash model. Although a formal statistical test was not used, pseudo R^2 analysis indicates that the rational novice model is better than a Probit model similar to the one used in the Palfrey and Prisbrey study.

The rational novice model is also applied to a first price sealed bid auction. Again, computing techniques were used to obtain a numerical solution. The data obtained from the Chen and Plott study were used to estimate the model. The rational novice model outperforms the CRRAM, the primary Nash model studied in the Chen and Plott study. However, the rational novice model is not the best amongst all models. A sophisticated rule-of-thumb, called the SOPAM, offers the best explanation of the data.

Relevance:

20.00%

Publisher:

Abstract:

Among the branches of astronomy, radio astronomy is unique in that it spans the largest portion of the electromagnetic spectrum, from about 10 MHz to 300 GHz. On the other hand, due to scientific priorities as well as technological limitations, radio astronomy receivers have traditionally covered only about an octave of bandwidth. This approach of "one specialized receiver for one primary science goal" is, however, not only becoming too expensive for next-generation radio telescopes comprising thousands of small antennas, but is also inadequate to answer some of today's scientific questions, which require simultaneous coverage of very large bandwidths.
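As a point of reference for "octave" versus "decade" bandwidths, a quick calculation (the example band edges below are illustrative, not taken from the thesis):

```python
import math

def octaves(f_lo, f_hi):
    """Bandwidth in octaves: each octave doubles the frequency."""
    return math.log2(f_hi / f_lo)

def decades(f_lo, f_hi):
    """Bandwidth in decades: each decade is a factor of ten in frequency."""
    return math.log10(f_hi / f_lo)

# The full radio window quoted above, ~10 MHz to 300 GHz:
full_band = decades(10e6, 300e9)     # ~4.5 decades
# A traditional octave-bandwidth receiver vs. a 5:1 wideband feed:
octave_feed = octaves(1e9, 2e9)      # 1 octave
wideband_feed = octaves(1e9, 5e9)    # ~2.3 octaves
```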

This thesis presents significant improvements on the state of the art of two key receiver components in pursuit of decade-bandwidth radio astronomy: 1) reflector feed antennas; 2) low-noise amplifiers on compound-semiconductor technologies. The first part of this thesis introduces the quadruple-ridged flared horn, a flexible, dual linear-polarization reflector feed antenna that achieves 5:1-7:1 frequency bandwidths while maintaining near-constant beamwidth. The horn is unique in that it is the only wideband feed antenna suitable for radio astronomy that: 1) can be designed to have nominal 10 dB beamwidth between 30 and 150 degrees; 2) requires one single-ended 50 Ohm low-noise amplifier per polarization. Design, analysis, and measurements of several quad-ridged horns are presented to demonstrate the design's feasibility and flexibility.

The second part of the thesis focuses on modeling and measurements of discrete high-electron-mobility transistors (HEMTs) and their applications in wideband, extremely low-noise amplifiers. The transistors and monolithic microwave integrated circuit (MMIC) low-noise amplifiers described herein have been fabricated on two state-of-the-art HEMT processes: 1) 35 nm indium phosphide; 2) 70 nm gallium arsenide. DC and microwave performance of transistors from both processes at room and cryogenic temperatures is included, as well as the first reported detailed noise characterization of these sub-micron HEMTs at both temperatures. Design and measurements of two low-noise amplifiers covering 1--20 GHz and 8--50 GHz, fabricated on both processes, are also provided. They show that the 1--20 GHz amplifier improves the state of the art in cryogenic noise and bandwidth, while the 8--50 GHz amplifier achieves noise performance only slightly worse than the best published results but does so with nearly a decade of bandwidth.

Relevance:

20.00%

Publisher:

Abstract:

The material presented in this thesis concerns the growth and characterization of III-V semiconductor heterostructures. Studies of the interactions between bound states in coupled quantum wells, and between well and barrier bound states, in AlAs/GaAs heterostructures are presented. We also demonstrate the broad array of novel tunnel structures realizable in the InAs/GaSb/AlSb material system. Because of the unique broken-gap band alignment of InAs/GaSb, these structures involve transport between the conduction and valence bands of adjacent layers. These devices possess a wide range of electrical properties and are fundamentally different from conventional AlAs/GaAs tunnel devices. We report on the fabrication of a novel tunnel transistor with the largest reported room-temperature current gain. We also present time-resolved studies of the growth fronts of InAs/GaInSb strained-layer superlattices and investigations of surface anion exchange reactions.

Chapter 2 covers tunneling studies of conventional AlAs/GaAs resonant tunneling diodes (RTDs). The results of two studies are presented: (i) a test of coherent vs. sequential tunneling in triple barrier heterostructures; (ii) an optical measurement of the effect of barrier X-point states on Γ-point well states. In the first, it was found that if two quantum wells are separated by a sufficiently thin barrier, the eigenstates of the system extend coherently across both wells and the central barrier. For thicker barriers between the wells, the electrons become localized in the individual wells and transport is best described as electrons hopping between the wells. In the second, it was found that Γ-point well states and X-point barrier states interact strongly. The barrier X-point states modify the energies of the well states and increase the escape rate for carriers in the quantum well.

The results of several experimental studies of a novel class of tunnel devices realized in the InAs/GaSb/AlSb material system are presented in Chapter 3. These interband tunnel structures involve transport between conduction- and valence-band states in adjacent material layers. These devices are compared and contrasted with the conventional AlAs/GaAs structures discussed in Chapter 2 and experimental results are presented for both resonant and nonresonant devices. These results are compared with theoretical simulations and necessary extensions to the theoretical models are discussed.

In chapter 4 experimental results from a novel tunnel transistor are reported. The measured current gains in this transistor exceed 100 at room temperature. This is the highest reported gain at room temperature for any tunnel transistor. The device is analyzed and the current conduction and gain mechanisms are discussed.

Chapters 5 and 6 are studies of the growth of structures involving layers with different anions. Chapter 5 covers the growth of InAs/GaInSb superlattices for far-infrared detectors and time-resolved, in situ studies of their growth fronts. It was found that the bandgap of superlattices with identical layer thicknesses and compositions varied by as much as 40 meV depending on how their internal interfaces were formed. The absorption lengths in superlattices with identical bandgaps, but whose interfaces were formed in different ways, varied by as much as a factor of two. First, the superlattice is discussed, including an explanation of the device and the complications involved in its growth. The experimental technique of reflection high-energy electron diffraction (RHEED) is reviewed, and the results of RHEED studies of the growth of these complicated structures are presented. The development of a time-resolved, in situ characterization of the internal interfaces of these superlattices is described. Chapter 6 describes the results of a detailed study of some of the phenomena described in Chapter 5. X-ray photoelectron spectroscopy (XPS) studies of anion exchange reactions on the growth fronts of these superlattices are reported. Concurrent RHEED studies of the same physical systems studied with XPS are presented. Using the RHEED and XPS results, a real-time, indirect measurement of surface exchange reactions was developed.

Relevance:

20.00%

Publisher:

Abstract:

In three essays we examine user-generated product ratings with aggregation. While recommendation systems have been studied extensively, this simple type of recommendation system has been neglected despite its prevalence in the field. We develop a novel theoretical model of user-generated ratings. This model improves upon previous work in three ways: it considers rational agents and allows them to abstain from rating when rating is costly; it incorporates rating aggregation (such as averaging ratings); and it considers the effect of multiple simultaneous raters on rating strategies. In the first essay we provide a partial characterization of equilibrium behavior. In the second essay we test this theoretical model in the laboratory, and in the third we apply established behavioral models to the data generated in the lab. This study provides clues to the prevalence of extreme-valued ratings in field implementations. We show theoretically that in equilibrium, ratings distributions do not represent the value distributions of sincere ratings. Indeed, we show that if rating strategies follow a set of regularity conditions, then in equilibrium the rate at which players participate is increasing in the extremity of agents' valuations of the product. This theoretical prediction is realized in the lab. We also find that human subjects show a disproportionate predilection for sincere rating, and that when they do send insincere ratings, they are almost always in the direction of exaggeration. Both sincere and exaggerated ratings occur with great frequency despite the fact that such rating strategies are not in subjects' best interest. We therefore apply the behavioral concepts of quantal response equilibrium (QRE) and cursed equilibrium (CE) to the experimental data. Together, these theories explain the data significantly better than does a theory of rational, Bayesian behavior, accurately predicting key comparative statics. However, the theories fail to predict the high rates of sincerity, and it is clear that a better theory is needed.
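The prediction that participation increases with valuation extremity can be illustrated with a toy simulation. The extremity-threshold participation rule below is a stand-in assumption, not the thesis's equilibrium characterization; valuation distribution and cost parameter are likewise hypothetical.

```python
import random

def simulate_ratings(n_agents, rating_cost, seed=0):
    """Agents draw valuations uniformly on [0, 1] and submit a rating only
    when their valuation is extreme enough (distance from the midpoint
    exceeds the cost of rating); otherwise they abstain. This is a crude
    stand-in for the regularity conditions described in the text."""
    rng = random.Random(seed)
    submitted = []
    for _ in range(n_agents):
        v = rng.random()
        if abs(v - 0.5) > rating_cost:   # extremity-threshold participation
            submitted.append(v)
        # otherwise: abstain, so moderate valuations never appear
    return submitted

ratings = simulate_ratings(10000, rating_cost=0.3)
# The submitted ratings cluster at the extremes: none fall in (0.2, 0.8),
# so the rating distribution misrepresents the underlying value distribution.
```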

Relevance:

20.00%

Publisher:

Abstract:

The dynamic properties of a structure are a function of its physical properties, and changes in the physical properties of the structure, including the introduction of structural damage, can cause changes in its dynamic behavior. Structural health monitoring (SHM) and damage detection methods provide a means to assess the structural integrity and safety of a civil structure using measurements of its dynamic properties. In particular, these techniques enable a quick damage assessment following a seismic event. In this thesis, the application of high-frequency seismograms to damage detection in civil structures is investigated.

Two novel methods for SHM are developed and validated using small-scale experimental testing, existing structures in situ, and numerical testing. The first method is developed for pre-Northridge steel-moment-resisting frame buildings that are susceptible to weld fracture at beam-column connections. The method is based on using the response of a structure to a nondestructive force (i.e., a hammer blow) to approximate the response of the structure to a damage event (i.e., weld fracture). The method is applied to a small-scale experimental frame, where the impulse response functions of the frame are generated during an impact hammer test. The method is also applied to a numerical model of a steel frame, in which weld fracture is modeled as the tensile opening of a Mode I crack. Impulse response functions are experimentally obtained for a steel moment-resisting frame building in situ. Results indicate that while acceleration and velocity records generated by a damage event are best approximated by the acceleration and velocity records generated by a colocated hammer blow, the method may not be robust to noise. The method seems to be better suited for damage localization, where information such as arrival times and peak accelerations can also provide indication of the damage location. This is of significance for sparsely-instrumented civil structures.

The second SHM method is designed to extract features from high-frequency acceleration records that may indicate the presence of damage. As short-duration high-frequency signals (i.e., pulses) can be indicative of damage, this method relies on the identification and classification of pulses in the acceleration records. It is recommended that, in practice, the method be combined with a vibration-based method that can be used to estimate the loss of stiffness. Briefly, pulses observed in the acceleration time series when the structure is known to be in an undamaged state are compared with pulses observed when the structure is in a potentially damaged state. By comparing the pulse signatures from these two situations, changes in the high-frequency dynamic behavior of the structure can be identified, and damage signals can be extracted and subjected to further analysis. The method is successfully applied to a small-scale experimental shear beam that is dynamically excited at its base using a shake table and damaged by loosening a screw to create a moving part. Although the damage is aperiodic and nonlinear in nature, the damage signals are accurately identified, and the location of damage is determined using the amplitudes and arrival times of the damage signal. The method is also successfully applied to detect the occurrence of damage in a test bed data set provided by the Los Alamos National Laboratory, in which nonlinear damage is introduced into a small-scale steel frame by installing a bumper mechanism that inhibits the amount of motion between two floors. The method is robust despite a low sampling rate, though false negatives (undetected damage signals) begin to occur at high levels of damage, when the frequency of damage events increases. The method is also applied to acceleration data recorded on a damaged cable-stayed bridge in China, provided by the Center of Structural Monitoring and Control at the Harbin Institute of Technology. Acceleration records from after the date of damage show a clear increase in high-frequency, short-duration pulses compared to those recorded earlier. One undamaged pulse and two damage pulses are identified from the data. The occurrence of the detected damage pulses is consistent with a progression of damage and matches the known chronology of damage.
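The pulse-identification step can be sketched as follows. This is a simplified stand-in for the classification scheme described above (a first-difference high-pass filter plus an amplitude threshold); the threshold factor and the synthetic record are assumptions.

```python
import math

def detect_pulses(accel, k=5.0):
    """Flag short-duration, high-frequency pulses in an acceleration record.

    A crude high-pass filter (first difference) suppresses the low-frequency
    structural response; consecutive samples whose high-passed amplitude
    exceeds k standard deviations are grouped into pulses."""
    hp = [accel[i + 1] - accel[i] for i in range(len(accel) - 1)]
    mean = sum(hp) / len(hp)
    sd = (sum((x - mean) ** 2 for x in hp) / len(hp)) ** 0.5
    pulses, current = [], None
    for i, x in enumerate(hp):
        if abs(x - mean) > k * sd:
            current = [i, i] if current is None else [current[0], i]
        elif current is not None:
            pulses.append(tuple(current))
            current = None
    if current is not None:
        pulses.append(tuple(current))
    return pulses  # list of (start_index, end_index) sample ranges

# Synthetic record: slow structural sinusoid plus one sharp spike at sample 500.
record = [math.sin(2 * math.pi * i / 200) for i in range(1000)]
record[500] += 5.0
pulses = detect_pulses(record)
```

The arrival time of each detected pulse (its start index) is what a localization step like the one described above would use.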

Relevance:

20.00%

Publisher:

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment, to the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and where the treatment of HIV/AIDS is headed.

Chapter Two describes the datasets used for the analyses. The primary database was collected by the author from an outpatient HIV clinic and spans 1984 to the present. The second database is the public dataset from the Multicenter AIDS Cohort Study (MACS), covering 1984 through October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian; thus, distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
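Why non-Gaussianity matters can be illustrated with a sample-skewness check. The synthetic right-skewed counts below are a stand-in assumption (a log-normal draw roughly on the scale of CD4 counts), not the thesis data.

```python
import random

def sample_skewness(xs):
    """Third standardized moment; approximately 0 for Gaussian data,
    clearly positive for right-skewed data such as count distributions."""
    n = len(xs)
    mean = sum(xs) / n
    sd = (sum((x - mean) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - mean) / sd) ** 3 for x in xs) / n

rng = random.Random(42)
# Synthetic stand-ins, NOT the thesis data: a right-skewed "CD4-like" sample
# vs. a Gaussian control on a similar scale.
cd4_like = [rng.lognormvariate(6.0, 0.5) for _ in range(5000)]
gaussian = [rng.gauss(450, 150) for _ in range(5000)]
```

A test statistic like this (or a formal normality test) is the kind of evidence that rules out Gaussian assumptions before choosing an analysis method for the marker.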

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era, characterized by a new class of drugs and new technology, changed the way we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test the efficacy of an antiretroviral regimen. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover; if viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that for these patients, the CD4 T cell increase was not large enough to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which examines the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. It first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, failed to control viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in the morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs of HAART are estimated: the direct lifetime cost of treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved, however, is only $101,000, which is comparable to the incremental cost per year of life saved by coronary artery bypass surgery.
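The incremental cost figure is an incremental cost-effectiveness ratio (ICER). The arithmetic can be sketched as follows; the comparator cost and the survival times below are hypothetical inputs chosen only to reproduce the quoted $101,000 per life-year, not figures from the thesis.

```python
def incremental_cost_per_life_year(cost_new, cost_old, years_new, years_old):
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra year of life gained by the new treatment."""
    return (cost_new - cost_old) / (years_new - years_old)

# Hypothetical inputs bracketing the figures quoted above: the upper-bound
# $598,000 lifetime HAART cost, an assumed pre-HAART lifetime cost, and
# assumed survival with and without HAART.
icer = incremental_cost_per_life_year(
    cost_new=598_000, cost_old=93_000,   # assumed comparator lifetime cost
    years_new=10.0, years_old=5.0,       # assumed years of survival
)
# icer == 101_000.0 dollars per life-year gained
```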

Policy makers need to be aware that although HAART can delay disease progression, it is not a cure and HIV is not over. The results presented here suggest that the decreases in the morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have been from the dramatic decreases in the incidence of AIDS defining opportunistic infections. As patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance:

20.00%

Publisher:

Abstract:

A description is given of experimental work on the damping of a second order electron plasma wave echo due to velocity space diffusion in a low temperature magnetoplasma. Sufficient precision was obtained to verify the theoretically predicted cubic rather than quadratic or quartic dependence of the damping on exciter separation. Compared to the damping predicted for Coulomb collisions in a thermal plasma in an infinite magnetic field, the magnitude of the damping was approximately as predicted, while the velocity dependence of the damping was weaker than predicted. The discrepancy is consistent with the actual non-Maxwellian electron distribution of the plasma.
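Distinguishing cubic from quadratic or quartic damping reduces to measuring the slope of damping versus exciter separation on a log-log plot. A sketch with synthetic data (the coefficient and separation values are illustrative):

```python
import math

def log_log_slope(separations, dampings):
    """Least-squares slope of log(damping) vs. log(separation).
    A slope of 3 indicates cubic dependence (2 quadratic, 4 quartic)."""
    xs = [math.log(s) for s in separations]
    ys = [math.log(d) for d in dampings]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic measurements following an exact cubic law, damping ∝ separation³:
seps = [1.0, 1.5, 2.0, 2.5, 3.0]
damp = [0.01 * s ** 3 for s in seps]
slope = log_log_slope(seps, damp)  # ≈ 3.0, the cubic exponent
```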

In conjunction with the damping work, echo amplitude saturation was measured as a function of the velocity of the electrons contributing to the echo. Good agreement was obtained with the predicted J1 Bessel function amplitude dependence, as well as a demonstration that saturation did not influence the damping results.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The δD values of nitrated cellulose from a variety of trees covering a wide geographic range have been measured. These measurements have been used to ascertain which factors are likely to cause δD variations in cellulose C-H hydrogen.

It is found that a primary source of tree δD variation is the δD variation of the environmental precipitation. Superimposed on this are isotopic variations caused by the transpiration of the leaf water incorporated by the tree. The magnitude of this transpiration effect appears to be related to relative humidity.

Within a single tree, it is found that the hydrogen isotope variations which occur for a ring sequence in one radial direction may not be exactly the same as those which occur in a different direction. Such heterogeneities appear most likely to occur in trees with asymmetric ring patterns that contain reaction wood. In the absence of reaction wood such heterogeneities do not seem to occur. Thus, hydrogen isotope analyses of tree ring sequences should be performed on trees which do not contain reaction wood.

Comparisons of tree δD variations with variations in local climate are performed on two levels: spatial and temporal. It is found that the δD values of 20 North American trees from a wide geographic range are reasonably well-correlated with the corresponding average annual temperature. The correlation is similar to that observed for a comparison of the δD values of annual precipitation of 11 North American sites with annual temperature. However, it appears that this correlation is significantly disrupted by trees which grew on poorly drained sites such as those in stagnant marshes. Therefore, site selection may be important in choosing trees for climatic interpretation of δD values, although proper sites do not seem to be uncommon.

The measurement of δD values in 5-year samples from the tree ring sequences of 13 trees from 11 North American sites reveals a variety of relationships with local climate. As it was for the spatial δD vs climate comparison, site selection is also apparently important for temporal tree δD vs climate comparisons. Again, it seems that poorly-drained sites are to be avoided. For nine trees from different "well-behaved" sites, it was found that the local climatic variable best related to the δD variations was not the same for all sites.

Two of these trees showed a strong negative correlation with the amount of local summer precipitation. Consideration of factors likely to influence the isotopic composition of summer rain suggests that rainfall intensity may be important. The higher the intensity, the lower the δD value. Such an effect might explain the negative correlation of δD vs summer precipitation amount for these two trees. A third tree also exhibited a strong correlation with summer climate, but in this instance it was a positive correlation of δD with summer temperature.

The remaining six trees exhibited the best correlation between δD values and local annual climate. However, in none of these six cases was it annual temperature that was the most important variable. In fact, annual temperature commonly showed no relationship at all with tree δD values. Instead, it was found that a simple mass balance model incorporating two basic assumptions yielded parameters which produced the best relationships with tree δD values. First, it was assumed that the δD values of these six trees reflected the δD values of annual precipitation incorporated by these trees. Second, it was assumed that the δD value of the annual precipitation was a weighted average of two seasonal isotopic components: summer and winter. Mass balance equations derived from these assumptions yielded combinations of variables that commonly showed a relationship with tree δD values where none had previously been discerned.
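The two-component mass balance described above can be sketched as a weighted average of seasonal end-members. The summer fraction and per-mil values below are hypothetical; a real application would weight by the amount of precipitation actually incorporated by the tree.

```python
def annual_dD(f_summer, dD_summer, dD_winter):
    """Two-season mass balance: annual precipitation deltaD as a
    weighted average of summer and winter isotopic components."""
    return f_summer * dD_summer + (1.0 - f_summer) * dD_winter

# Hypothetical per-mil values and summer weighting, for illustration only.
val = annual_dD(0.6, -30.0, -120.0)   # lies between the two end-members
```

Varying the seasonal weighting shifts the annual value between the summer and winter end-members, which is how the model generates climate-related combinations of variables.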

It was found for these "well-behaved" trees that not all sample intervals in a δD vs local climate plot fell along a well-defined trend. These departures from the local δD vs climate norm were defined as "anomalous". Some of these anomalous intervals were common to trees from different locales. When such widespread commonality of an anomalous interval occurred, it was observed that the interval corresponded to an interval in which drought had existed in the North American Great Plains.

Consequently, there appears to be a combination of both local and large scale climatic information in the δD variations of tree cellulose C-H hydrogen.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Part I: The mobilities of photo-generated electrons and holes in orthorhombic sulfur are determined by drift mobility techniques. At room temperature electron mobilities between 0.4 cm2/V-sec and 4.8 cm2/V-sec and hole mobilities of about 5.0 cm2/V-sec are reported. The temperature dependence of the electron mobility is attributed to a level of traps whose effective depth is about 0.12 eV. This value is further supported by both the voltage dependence of the space-charge-limited, D.C. photocurrents and the photocurrent versus photon energy measurements.
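The temperature dependence attributed to a 0.12 eV trap level is commonly modeled with shallow-trap-limited drift mobility. In the sketch below, only the 0.12 eV depth comes from the abstract; the trap-free mobility and the trap-to-band effective density ratio are illustrative assumptions.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def trap_limited_mobility(mu0, trap_depth_eV, ratio_Nt_Nc, T):
    """Effective drift mobility when carriers repeatedly fall into and are
    thermally released from shallow traps:
    mu_eff = mu0 / (1 + (Nt/Nc) * exp(Et / kT))."""
    return mu0 / (1.0 + ratio_Nt_Nc * math.exp(trap_depth_eV / (K_B * T)))

# 0.12 eV from the abstract; mu0 = 5.0 cm2/V-sec and Nt/Nc = 0.1 are assumed.
mu_300 = trap_limited_mobility(5.0, 0.12, 0.1, 300.0)
mu_250 = trap_limited_mobility(5.0, 0.12, 0.1, 250.0)
```

The exponential release factor makes the effective mobility fall steeply as the crystal is cooled, which is the signature used to extract an effective trap depth.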

As the field is increased from 10 kV/cm to 30 kV/cm a second mechanism for electron transport becomes appreciable and eventually dominates. Evidence that this is due to impurity band conduction at an appreciably lower mobility (4×10^-4 cm2/V-sec) is presented. No low mobility hole current could be detected. When fields exceeding 30 kV/cm for electron transport and 35 kV/cm for hole transport are applied, avalanche phenomena are observed. The results obtained are consistent with recent energy gap studies in sulfur.

The theory of the transport of photo-generated carriers is modified to include the case of appreciable thermal regeneration from the traps within one transit time.

Part II: An explicit formula for the electric field E necessary to accelerate an electron to a steady-state velocity v in a polarizable crystal at arbitrary temperature is determined via two methods utilizing Feynman Path Integrals. No approximation is made regarding the magnitude of the velocity or the strength of the field. However, the actual electron-lattice Coulombic interaction is approximated by a distribution of harmonic oscillator potentials. One may be able to find the “best possible” distribution of oscillators using a variational principle, but we have not been able to find the expected criterion. However, our result is relatively insensitive to the actual distribution of oscillators used, and our E-v relationship exhibits the physical behavior expected for the polaron. Threshold fields for ejecting the electron from the polaron state are calculated for several substances using numerical results for a simple oscillator distribution.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

I report the solubility and diffusivity of water in lunar basalt and an iron-free basaltic analogue at 1 atm and 1350 °C. Such parameters are critical for understanding the degassing histories of lunar pyroclastic glasses. Solubility experiments have been conducted over a range of fO2 conditions from three log units below to five log units above the iron-wüstite buffer (IW) and over a range of pH2/pH2O from 0.03 to 24. Quenched experimental glasses were analyzed by Fourier transform infrared spectroscopy (FTIR) and secondary ionization mass spectrometry (SIMS) and were found to contain up to ~420 ppm water. Results demonstrate that, under the conditions of our experiments: (1) hydroxyl is the only H-bearing species detected by FTIR; (2) the solubility of water is proportional to the square root of pH2O in the furnace atmosphere and is independent of fO2 and pH2/pH2O; (3) the solubility of water is very similar in both melt compositions; (4) the concentration of H2 in our iron-free experiments is <3 ppm, even at oxygen fugacities as low as IW-2.3 and pH2/pH2O as high as 24; and (5) SIMS analyses of water in iron-rich glasses equilibrated under variable fO2 conditions can be strongly influenced by matrix effects, even when the concentrations of water in the glasses are low. Our results can be used to constrain the entrapment pressure of the lunar melt inclusions of Hauri et al. (2011).
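Result (2), solubility proportional to the square root of pH2O, is the behavior expected when water dissolves as hydroxyl. A minimal sketch, calibrated with an assumed 420 ppm at pH2O = 1 bar (the abstract's maximum concentration, used here only as a convenient anchor, not as a fitted constant):

```python
import math

def water_solubility(p_h2o_bar, k_ppm_per_sqrt_bar):
    """Hydroxyl-dominated dissolution: dissolved-water concentration
    scales with the square root of water partial pressure."""
    return k_ppm_per_sqrt_bar * math.sqrt(p_h2o_bar)

# Assumed calibration: ~420 ppm at 1 bar, so k = 420 ppm / bar^0.5.
k = 420.0
half = water_solubility(0.25, k)   # quartering pH2O halves the concentration
```

The diagnostic consequence of the square-root law is that a factor-of-four drop in pH2O only halves the dissolved water, independent of fO2, as stated above.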

Diffusion experiments were conducted over a range of fO2 conditions from IW-2.2 to IW+6.7 and over a range of pH2/pH2O from nominally zero to ~10. The water concentrations measured in our quenched experimental glasses by SIMS and FTIR vary from a few ppm to ~430 ppm. Water concentration gradients are well described by models in which the diffusivity of water (D*water) is assumed to be constant. The relationship between D*water and water concentration is well described by a modified speciation model (Ni et al. 2012) in which both molecular water and hydroxyl are allowed to diffuse. The success of this modified speciation model for describing our results suggests that we have resolved the diffusivity of hydroxyl in basaltic melt for the first time. Best-fit values of D*water for our experiments on lunar basalt vary within a factor of ~2 over a range of pH2/pH2O from 0.007 to 9.7, a range of fO2 from IW-2.2 to IW+4.9, and a water concentration range from ~80 ppm to ~280 ppm. The relative insensitivity of our best-fit values of D*water to variations in pH2 suggests that H2 diffusion was not significant during degassing of the lunar glasses of Saal et al. (2008). D*water during dehydration and hydration in H2/CO2 gas mixtures are approximately the same, which supports an equilibrium boundary condition for these experiments. However, dehydration experiments into CO2 and CO/CO2 gas mixtures leave some scope for the importance of kinetics during dehydration into H-free environments. The value of D*water chosen by Saal et al. (2008) for modeling the diffusive degassing of the lunar volcanic glasses is within a factor of three of our measured value in our lunar basaltic melt at 1350 °C.
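A constant-D*water description of the concentration gradients can be sketched with the classical semi-infinite erf solution for dehydration or hydration at a fixed surface concentration. The diffusivity, run time, and length scale below are placeholders, with concentrations spanning the few-ppm to ~430 ppm range reported above.

```python
import math

def diffusion_profile(x, t, D, c_interior, c_surface):
    """Semi-infinite, constant-diffusivity solution: concentration relaxes
    from the surface value toward the interior value with an erf shape,
    over a length scale ~ 2*sqrt(D*t)."""
    return c_surface + (c_interior - c_surface) * math.erf(x / (2.0 * math.sqrt(D * t)))

# Illustrative numbers only (assumed, not the thesis's fitted values).
D = 1e-11   # m^2/s
t = 3600.0  # 1 hour
profile = [diffusion_profile(x * 1e-5, t, D, 430.0, 5.0) for x in range(6)]
```

Fitting measured SIMS/FTIR traverses to profiles of this form is what constrains D*water; the modified speciation model replaces the constant D with a concentration-dependent one.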

In Chapter 4 of this thesis, I document significant zonation in major, minor, trace, and volatile elements in naturally glassy olivine-hosted melt inclusions from the Siqueiros Fracture Zone and the Galapagos Islands. Components with a higher concentration in the host olivine than in the melt (MgO, FeO, Cr2O3, and MnO) are depleted at the edges of the zoned melt inclusions relative to their centers, whereas, except for CaO, H2O, and F, components with a lower concentration in the host olivine than in the melt (Al2O3, SiO2, Na2O, K2O, TiO2, S, and Cl) are enriched near the melt inclusion edges. This zonation is due to the formation of an olivine-depleted boundary layer in the adjacent melt in response to cooling and crystallization of olivine on the walls of the melt inclusions, concurrent with diffusive propagation of the boundary layer toward the inclusion center.

Concentration profiles of some components in the melt inclusions exhibit multicomponent diffusion effects such as uphill diffusion (CaO, FeO) or slowing of the diffusion of typically rapidly diffusing components (Na2O, K2O) by coupling to slow diffusing components such as SiO2 and Al2O3. Concentrations of H2O and F decrease towards the edges of some of the Siqueiros melt inclusions, suggesting either that these components have been lost from the inclusions into the host olivine late in their cooling histories and/or that these components are exhibiting multicomponent diffusion effects.

A model has been developed of the time-dependent evolution of MgO concentration profiles in melt inclusions due to simultaneous depletion of MgO at the inclusion walls due to olivine growth and diffusion of MgO in the melt inclusions in response to this depletion. Observed concentration profiles were fit to this model to constrain their thermal histories. Cooling rates determined by a single-stage linear cooling model are 150–13,000 °C hr^-1 from the liquidus down to ~1000 °C, consistent with previously determined cooling rates for basaltic glasses; compositional trends with melt inclusion size observed in the Siqueiros melt inclusions are described well by this simple single-stage linear cooling model. Despite the overall success of the modeling of MgO concentration profiles using a single-stage cooling history, MgO concentration profiles in some melt inclusions are better fit by a two-stage cooling history with a slower-cooling first stage followed by a faster-cooling second stage; the inferred total duration of cooling from the liquidus down to ~1000 °C is 40 s to just over one hour.
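The single-stage linear cooling bookkeeping is simple to sketch. The abstract quotes the rate range and the ~1000 °C endpoint; the liquidus temperature below is an assumed value, so the resulting durations are illustrative rather than the thesis's fitted values.

```python
def cooling_duration_hr(T_liquidus, T_final, rate_C_per_hr):
    """Single-stage linear cooling: time to cool from the liquidus
    down to T_final at a constant rate."""
    return (T_liquidus - T_final) / rate_C_per_hr

# Assumed liquidus near 1220 C; rates and the ~1000 C endpoint from the abstract.
slow = cooling_duration_hr(1220.0, 1000.0, 150.0)     # hours, slowest quoted rate
fast = cooling_duration_hr(1220.0, 1000.0, 13000.0)   # hours, fastest quoted rate
fast_seconds = fast * 3600.0
```

Under this assumed liquidus the quoted rate range spans durations from about a minute up to well over an hour, consistent in order of magnitude with the 40 s to just-over-one-hour range inferred above.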

Based on our observations and models, compositions of zoned melt inclusions (even if measured at the centers of the inclusions) will typically have been diffusively fractionated relative to the initially trapped melt; for such inclusions, the initial composition cannot be simply reconstructed based on olivine-addition calculations, so caution should be exercised in application of such reconstructions to correct for post-entrapment crystallization of olivine on inclusion walls. Off-center analyses of a melt inclusion can also give results significantly fractionated relative to simple olivine crystallization.

All melt inclusions from the Siqueiros and Galapagos sample suites exhibit zoning profiles, and this feature may be nearly universal in glassy, olivine-hosted inclusions. If so, zoning profiles in melt inclusions could be widely useful to constrain late-stage syneruptive processes and as natural diffusion experiments.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

Thermodynamical fluctuations in temperature and position exist in every physical system, and show up as a fundamental noise limit whenever we choose to measure some quantity in a laboratory environment. Thermodynamical fluctuations in the position of the atoms in the dielectric coatings on the mirrors of optical cavities at the forefront of precision metrology (e.g., LIGO, the cavities which probe atomic transitions to define the second) are a current limiting noise source for these experiments, and for anything which involves locking a laser to an optical cavity. These thermodynamic noise sources scale with the physical geometry of the experiment, material properties (such as the mechanical loss of our dielectric coatings), and temperature. The temperature scaling provides a natural motivation to move to lower temperatures, with a potentially huge benefit from redesigning a thermal-noise-limited room temperature experiment for cryogenic operation.
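The temperature scaling that motivates cryogenic operation can be sketched via the fluctuation-dissipation theorem, under which the Brownian-noise displacement amplitude scales as the square root of temperature times mechanical loss. Holding the loss fixed between 300 K and 123 K is our simplifying assumption; in practice the coating loss itself changes with temperature, which is part of what the experiment sets out to measure.

```python
import math

def brownian_amplitude_ratio(T_new, T_ref, loss_new=1.0, loss_ref=1.0):
    """Fluctuation-dissipation scaling: Brownian-noise amplitude goes as
    sqrt(T * phi), so cooling (and any drop in mechanical loss phi)
    lowers the noise floor by the square root of the combined factor."""
    return math.sqrt((T_new * loss_new) / (T_ref * loss_ref))

# 123 K is the silicon CTE zero crossing cited in the abstract;
# equal mechanical loss at both temperatures is an assumption.
ratio = brownian_amplitude_ratio(123.0, 300.0)   # ~0.64, i.e. a ~36% reduction
```

Any additional reduction in coating loss at low temperature would multiply this gain, which is why the cryogenic loss values matter so much.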

We design, build, and characterize a pair of linear Fabry-Perot cavities to explore limitations to ultra-low-noise laser stabilization experiments at cryogenic temperatures. We use silicon as the primary material for the cavity and mirrors, due to a zero crossing in its linear coefficient of thermal expansion (CTE) at 123 K, and other desirable material properties. We use silica/tantala coatings, which are currently the best for making high-finesse, low-noise cavities at room temperature. The material properties of these coating materials (which set the thermal noise levels) are relatively unknown at cryogenic temperatures, which motivates us to study them there. We were not able to measure any thermal noise source with our experiment due to excess noise. In this work we analyze the design and performance of the cavities, and recommend a design shift from mid-length cavities to short cavities in order to facilitate a direct measurement of cryogenic coating noise.

In addition, we measure the cavities' frequency-dependent photo-thermal response. This can help characterize thermo-optic noise in the coatings, which is poorly understood at cryogenic temperatures. We also explore the feasibility of using the cavity to do macroscopic quantum optomechanics, such as ground state cooling.

Relevância:

20.00% 20.00%

Publicador:

Resumo:

The effect of intermolecular coupling in molecular energy levels (electronic and vibrational) has been investigated in neat and isotopic mixed crystals of benzene. In the isotopic mixed crystals of C6H6, C6H5D, m-C6H4D2, p-C6H4D2, sym-C6H3D3, C6D5H, and C6D6 in either a C6H6 or C6D6 host, the following phenomena have been observed and interpreted in terms of a refined Frenkel exciton theory: a) Site shifts; b) site group splittings of the degenerate ground state vibrations of C6H6, C6D6, and sym-C6H3D3; c) the orientational effect for the isotopes without a trigonal axis in both the 1B2u electronic state and the ground state vibrations; d) intrasite Fermi resonance between molecular fundamentals due to the reduced symmetry of the crystal site; and e) intermolecular or intersite Fermi resonance between nearly degenerate states of the host and guest molecules. In the neat crystal experiments on the ground state vibrations it was possible to observe many of these phenomena in conjunction with and in addition to the exciton structure.

To theoretically interpret these diverse experimental data, the concepts of interchange symmetry, the ideal mixed crystal, and site wave functions have been developed and are presented in detail. In the interpretation of the exciton data the relative signs of the intermolecular coupling constants have been emphasized, and in the limit of the ideal mixed crystal a technique is discussed for locating the exciton band center or unobserved exciton components. A differentiation between static and dynamic interactions is made in the Frenkel limit which enables the concepts of site effects and exciton coupling to be sharpened. It is thus possible to treat the crystal induced effects in such a fashion as to make their similarities and differences quite apparent.

A calculation of the ground state vibrational phenomena (site shifts and splittings, orientational effects, and exciton structure) and of the crystal lattice modes has been carried out for these systems. This calculation serves as a test of the approximations of first-order Frenkel theory and of the atom-atom, pairwise interaction model for the intermolecular potentials. The general form of the potential employed was V(r) = B e^(-Cr) - A/r^6; the force constants were obtained from the potential by assuming the atoms were undergoing simple harmonic motion.
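The stated exp-6 potential and the harmonic force-constant extraction can be sketched numerically. The A, B, C parameters below are illustrative stand-ins, not the thesis's fitted atom-atom values.

```python
import math

def buckingham(r, A, B, C):
    """Atom-atom exp-6 potential: V(r) = B*exp(-C*r) - A/r**6."""
    return B * math.exp(-C * r) - A / r**6

def force_constant(r0, A, B, C, h=1e-5):
    """Harmonic force constant k = V''(r0) at the potential minimum,
    via a central finite difference (simple-harmonic-motion assumption)."""
    return (buckingham(r0 + h, A, B, C) - 2.0 * buckingham(r0, A, B, C)
            + buckingham(r0 - h, A, B, C)) / h**2

# Assumed illustrative parameters: energies in kcal/mol, distances in angstroms.
A, B, C = 32.0, 9500.0, 3.6
# Locate the potential minimum by a fine grid scan, then take V'' there.
r0 = min((i / 1000.0 for i in range(2000, 6000)),
         key=lambda r: buckingham(r, A, B, C))
k = force_constant(r0, A, B, C)   # positive at a true minimum
```

Summing such pairwise force constants over the atoms of neighboring molecules is what feeds the lattice-mode and site-effect calculation described above.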

In Part II, the location and identification of the first and second benzene triplet states (3B1u and 3E1u) are given.