22 results for Losses

in CaltechTHESIS


Relevance: 10.00%

Abstract:

Amorphous metals that form fully glassy parts over a few millimeters in thickness are still relatively new materials. Their glassy structure gives them particularly high strengths, high yield strains, high hardness values, high resilience, and low damping losses, but it can also result in an extremely low tolerance for flaws in the material. Because the glassy structure lacks long-range crystalline order, it also lacks the crystalline defects (dislocations) that provide the micromechanism of toughening and flaw insensitivity in conventional metals. Without sufficient and reliable toughness, and the damage tolerance it confers, metallic glasses will struggle to be adopted commercially. Here, we identify the origin of toughness in metallic glass as the competition between the intrinsic toughening mechanism of shear banding ahead of a crack and crack propagation by cavitation of the liquid inside the shear bands. The first three chapters present a detailed study of the process of shear banding: its crucial role in giving rise to one of the most damage-tolerant materials known, its extreme sensitivity to the configurational state of a glass with moderate toughness, and how the configurational state can be changed with the addition of minor elements. The last chapter is a novel investigation into the cavitation barrier in glass-forming liquids, the process that competes with shear banding. Together, our results represent an increased understanding of the major influences on the fracture toughness of metallic glasses and thus provide a path for the improvement and development of tougher metallic glasses.

Relevance: 10.00%

Abstract:

Secondary organic aerosol (SOA) is produced in the atmosphere by oxidation of volatile organic compounds. Laboratory chambers are used to understand the formation mechanisms and evolution of SOA formed under controlled conditions. This thesis presents studies of SOA formed from anthropogenic and biogenic precursors and discusses the effects of chamber walls on suspended vapors and particles.

During a chamber experiment, suspended vapors and particles can interact with the chamber walls. Particle wall loss is relatively well-understood, but vapor wall losses have received little study. Vapor wall loss of 2,3-epoxy-1,4-butanediol (BEPOX) and glyoxal was identified, quantified, and found to depend on chamber age and relative humidity.
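Vapor (and particle) deposition to the walls is commonly treated as a first-order loss process when correcting chamber data; the short sketch below illustrates that kind of correction. The rate constant, time series, and function names are illustrative assumptions, not values or code from this work.

```python
import numpy as np

def correct_first_order_wall_loss(times_s, measured_conc, k_wall_per_s):
    """Undo an assumed first-order wall loss: C_corrected(t) = C_measured(t) * exp(k_wall * t).

    times_s       -- sample times in seconds
    measured_conc -- suspended concentration measured at those times
    k_wall_per_s  -- assumed first-order wall-loss rate constant (illustrative)
    """
    times_s = np.asarray(times_s, dtype=float)
    measured_conc = np.asarray(measured_conc, dtype=float)
    return measured_conc * np.exp(k_wall_per_s * times_s)

# Illustrative check: a species decaying only by wall loss is recovered as constant.
t = np.linspace(0.0, 3600.0, 7)           # one hour of sampling, s
k = 2.0e-4                                # assumed wall-loss rate constant, 1/s
c_meas = 10.0 * np.exp(-k * t)            # synthetic measured decay
print(correct_first_order_wall_loss(t, c_meas, k))   # ~10.0 at every time
```

In practice the rate constant itself must be measured (for example, from the decay of a non-reactive tracer), and, as noted above, for vapors it can depend on chamber age and relative humidity.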

Particles reside in the atmosphere for a week or more and can evolve chemically during that time, a process termed aging. Simulating aging in laboratory chambers has proven challenging. A protocol was developed to extend the duration of a chamber experiment to 36 h of oxidation and was used to evaluate aging of SOA produced from m-xylene. Total SOA mass concentration increased and then decreased with increasing photooxidation, suggesting a transition from functionalization to fragmentation chemistry driven by photochemical processes. SOA oxidation, measured as the bulk particle elemental oxygen-to-carbon ratio and the fraction of organic mass at m/z 44, increased continuously starting after 5 h of photooxidation.

The physical state and chemical composition of an organic aerosol affect the mixing of aerosol components and its interactions with condensing species. A laboratory chamber protocol was developed to evaluate the mixing of SOA produced sequentially from two different sources by heating the chamber to induce particle evaporation. Using this protocol, SOA produced from toluene was found to be less volatile than that produced from α-pinene. When the two types of SOA were formed sequentially, the evaporation behavior most closely resembled that of SOA from the second parent hydrocarbon, suggesting that the mixed SOA particles consist of a core of SOA from the first precursor coated by a layer of SOA from the second precursor, indicative of limited mixing.

Relevance: 10.00%

Abstract:

This thesis presents an experimental investigation of the axisymmetric heat transfer from a small-scale fire and the resulting buoyant plume to a horizontal, unobstructed ceiling during the initial stages of development. A propane-air burner yielding a heat source strength between 1.0 kW and 1.6 kW was used to simulate the fire, and measurements confirmed that this heat source satisfactorily represented a source of buoyancy only. The ceiling consisted of a 1/16" steel plate of 0.91 m diameter, insulated on the upper side. The ceiling height was adjustable between 0.5 m and 0.91 m. Temperature measurements were carried out in the plume, in the ceiling jet, and on the ceiling.

Heat transfer data were obtained by using the transient method and applying corrections for the radial conduction along the ceiling and losses through the insulation material. The ceiling heat transfer coefficient was based on the adiabatic ceiling jet temperature (recovery temperature) reached after a long time. A parameter involving the source strength Q and ceiling height H was found to correlate measurements of this temperature and its radial variation. A similar parameter for estimating the ceiling heat transfer coefficient was confirmed by the experimental results.
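For orientation, the standard turbulent-plume scaling (quoted here as background, not as a result of this thesis) groups the source strength and ceiling height as

\[
\Delta T_{0} \,\propto\, \frac{Q^{2/3}}{H^{5/3}}, \qquad u_{0} \,\propto\, \left(\frac{Q}{H}\right)^{1/3},
\]

where ΔT_0 and u_0 are the plume centerline temperature rise and velocity at the ceiling; parameters of this form are what correlations of the adiabatic ceiling jet temperature and of the heat transfer coefficient are typically built from.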

This investigation therefore provides reasonable estimates for the heat transfer from a buoyant gas plume to a ceiling in the axisymmetric case, for the stagnation region where such heat transfer is a maximum and for the ceiling jet region (r/H ≤ 0.7). A comparison with data from experiments which involved larger heat sources indicates that the predicted scaling of temperatures and heat transfer rates for larger scale fires is adequate.

Relevance: 10.00%

Abstract:

Spontaneous emission into the lasing mode fundamentally limits laser linewidths. Reducing cavity losses provides two benefits to linewidth: (1) fewer excited carriers are needed to reach threshold, resulting in less phase-corrupting spontaneous emission into the laser mode, and (2) more photons are stored in the laser cavity, so that each individual spontaneous emission event disturbs the phase of the field less. Strong optical absorption in III-V materials causes high losses, preventing currently available semiconductor lasers from achieving ultra-narrow linewidths. This absorption is a natural consequence of the compromise between efficient electrical and efficient optical performance in a semiconductor laser. Some of the III-V layers must be heavily doped in order to funnel excited carriers into the active region, which has the side effect of making the material strongly absorbing.

This thesis presents a new technique, called modal engineering, to remove modal energy from the lossy region and store it in an adjacent low-loss material, thereby reducing overall optical absorption. A quantum mechanical analysis of modal engineering shows that modal gain and spontaneous emission rate into the laser mode are both proportional to the normalized intensity of that mode at the active region. If optical absorption near the active region dominates the total losses of the laser cavity, shifting modal energy from the lossy region to the low-loss region will reduce modal gain, total loss, and the spontaneous emission rate into the mode by the same factor, so that linewidth decreases while the threshold inversion remains constant. The total spontaneous emission rate into all other modes is unchanged.
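This argument parallels the familiar Schawlow-Townes-Henry linewidth expression (quoted for context; the thesis works from a quantum mechanical analysis rather than from this formula):

\[
\Delta\nu \,\approx\, \frac{R_{sp}\,\bigl(1+\alpha_H^{2}\bigr)}{4\pi N_{p}},
\]

where R_sp is the spontaneous emission rate into the lasing mode, N_p the intracavity photon number, and α_H the linewidth enhancement factor. Reducing the modal overlap with the lossy region lowers R_sp and the total loss by the same factor, which increases the number of stored photons at a given pump level and shrinks Δν while leaving the threshold inversion unchanged.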

Modal engineering is demonstrated using the Si/III-V platform, in which light is generated in the III-V material and stored in the low-loss silicon material. The silicon is patterned as a high-Q resonator to minimize all sources of loss. Fabricated lasers employing modal engineering to concentrate light in silicon demonstrate linewidths at least 5 times smaller than lasers without modal engineering at the same pump level above threshold, while maintaining the same thresholds.

Relevance: 10.00%

Abstract:

In the quest for a descriptive theory of decision-making, the rational actor model in economics imposes rather unrealistic expectations and abilities on human decision makers. The further we move from idealized scenarios, such as perfectly competitive markets, and ambitiously extend the reach of the theory to describe everyday decision-making situations, the less sense these assumptions make. Behavioural economics has instead proposed models based on assumptions that are more psychologically realistic, with the aim of gaining more precision and descriptive power. Increased psychological realism, however, comes at the cost of a greater number of parameters and model complexity. There is now a plethora of models, based on different assumptions and applicable in differing contextual settings, and selecting the right model to use tends to be an ad hoc process. In this thesis, we develop optimal experimental design methods and evaluate different behavioural theories against evidence from lab and field experiments.

We look at evidence from controlled laboratory experiments. Subjects are presented with choices between monetary gambles or lotteries. Different decision-making theories evaluate the choices differently and would make distinct predictions about the subjects' choices. Theories whose predictions are inconsistent with the actual choices can be systematically eliminated. Behavioural theories can have multiple parameters, requiring complex experimental designs with a very large number of possible choice tests. This imposes computational and economic constraints on using classical experimental design methods. We develop an adaptive testing methodology, Bayesian Rapid Optimal Adaptive Designs (BROAD), that sequentially chooses the "most informative" test at each stage and, based on the response, updates its posterior beliefs over the theories, which in turn determine the next most informative test to run. BROAD utilizes the Equivalence Class Edge Cutting (EC2) criterion to select tests. We prove that the EC2 criterion is adaptively submodular, which allows us to prove theoretical guarantees against the Bayes-optimal testing sequence even in the presence of noisy responses. In simulated ground-truth experiments, we find that the EC2 criterion recovers the true hypotheses with significantly fewer tests than more widely used criteria such as Information Gain and Generalized Binary Search. We show, theoretically as well as experimentally, that, surprisingly, these popular criteria can perform poorly in the presence of noise or subject errors. Furthermore, we use the adaptive submodularity of EC2 to implement an accelerated greedy version of BROAD, which yields orders-of-magnitude speedups over other methods.
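To make the select-observe-update structure concrete, here is a noise-free, deliberately simplified sketch of greedy edge-cutting test selection. The hypothesis set, priors, prediction table, and function names are illustrative placeholders, not the BROAD implementation.

```python
import itertools
import numpy as np

def edge_weight(weights, theory_of):
    """Total weight of 'edges' joining hypotheses that belong to different theories."""
    total = 0.0
    for i, j in itertools.combinations(range(len(weights)), 2):
        if theory_of[i] != theory_of[j]:
            total += weights[i] * weights[j]
    return total

def greedy_ec2_test(prior, theory_of, predictions):
    """Pick the test expected to cut the most edge weight.

    predictions[h, t] is the (deterministic) choice hypothesis h makes on test t.
    """
    prior = np.asarray(prior, dtype=float)
    current = edge_weight(prior, theory_of)
    best_test, best_cut = None, -1.0
    for t in range(predictions.shape[1]):
        expected_remaining = 0.0
        for outcome in np.unique(predictions[:, t]):
            consistent = predictions[:, t] == outcome
            p_outcome = prior[consistent].sum()
            surviving = np.where(consistent, prior, 0.0)   # inconsistent hypotheses are cut
            expected_remaining += p_outcome * edge_weight(surviving, theory_of)
        cut = current - expected_remaining
        if cut > best_cut:
            best_test, best_cut = t, cut
    return best_test

def update_posterior(prior, predictions, test, observed_choice):
    """Noise-free Bayesian update: zero out hypotheses inconsistent with the observed choice."""
    prior = np.asarray(prior, dtype=float)
    post = np.where(predictions[:, test] == observed_choice, prior, 0.0)
    return post / post.sum()
```

In BROAD the responses are noisy and the theories are parametric, so the objective is defined over noisy outcome scenarios and the guarantee rests on the adaptive submodularity of the EC2 objective; the sketch above only conveys the greedy loop that alternates test selection, observation, and posterior updating.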

We use BROAD to perform two experiments. First, we compare the main classes of theories for decision-making under risk, namely expected value, prospect theory, constant relative risk aversion (CRRA), and moments models. Subjects are given an initial endowment and sequentially presented with choices between two lotteries, with the possibility of losses. The lotteries are selected using BROAD, and 57 subjects from Caltech and UCLA are incentivized by randomly realizing one of the lotteries chosen. Aggregate posterior probabilities over the theories show limited evidence in favour of CRRA and moments models. Classifying the subjects into types shows that most subjects are described by prospect theory, followed by expected value. Adaptive experimental design raises the possibility that subjects could engage in strategic manipulation, i.e., mask their true preferences and choose differently in order to obtain more favourable tests in later rounds, thereby increasing their payoffs. We pay close attention to this problem; strategic manipulation can be ruled out because it is infeasible in practice and because we find no signatures of it in our data.

In the second experiment, we compare the main theories of time preference: exponential discounting, hyperbolic discounting, "present bias" models (quasi-hyperbolic (α, β) discounting and fixed cost discounting), and generalized-hyperbolic discounting. Forty subjects from UCLA were given choices between two options: a smaller but more immediate payoff versus a larger but later payoff. We found very limited evidence for present bias models and hyperbolic discounting; most subjects were classified as generalized hyperbolic discounting types, followed by exponential discounting.
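For reference, the standard functional forms being compared are, in generic notation (not necessarily the exact parameterization used in the thesis),

\[
D_{\mathrm{exp}}(t) = \delta^{t}, \qquad
D_{\mathrm{hyp}}(t) = \frac{1}{1 + k t}, \qquad
D_{\mathrm{quasi}}(t) = \begin{cases} 1, & t = 0,\\ \beta\,\delta^{t}, & t > 0, \end{cases} \qquad
D_{\mathrm{gen}}(t) = (1 + \alpha t)^{-\beta/\alpha},
\]

where D(t) is the factor by which a payoff delayed by t is discounted; the quasi-hyperbolic form captures present bias through β < 1, and the generalized-hyperbolic form nests near-exponential and strongly hyperbolic behavior through α.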

In these models the passage of time is linear. We instead consider a psychological model where the perception of time is subjective. We prove that when the biological (subjective) time is positively dependent, it gives rise to hyperbolic discounting and temporal choice inconsistency.

We also test the predictions of behavioural theories in the "wild". We focus on prospect theory, which emerged as the dominant theory in our lab experiments on risky choice. Loss aversion and reference dependence predict that consumers will behave in a way distinct from what the standard rational model predicts. Specifically, loss aversion predicts that when an item is offered at a discount, the demand for it will be greater than that explained by its price elasticity. Even more importantly, when the item is no longer discounted, demand for its close substitute will increase excessively. We tested this prediction using a discrete choice model with a loss-averse utility function on data from a large eCommerce retailer. Not only did we identify loss aversion, but we also found that the effect decreased with consumers' experience. We outline the policy implications of consumer loss aversion and strategies for competitive pricing.

In future work, BROAD can be widely applied to testing different behavioural models, e.g., in social preference and game theory, and in different contextual settings. Additional measurements beyond choice data, including biological measurements such as skin conductance, can be used to more rapidly eliminate hypotheses and speed up model comparison. Discrete choice models also provide a framework for testing behavioural models with field data, and encourage combined lab-field experiments.

Relevance: 10.00%

Abstract:

While concentrator photovoltaic cells have shown significant improvements in efficiency in the past ten years, once these cells are integrated into concentrating optics, connected to a power conditioning system, and deployed in the field, the overall module efficiency drops to only 34 to 36%. This efficiency is impressive compared to conventional flat-plate modules, but it falls far short of the theoretical limits for solar energy conversion. An ultra-high system efficiency of 50% or greater cannot be achieved by refinement and iteration of current design approaches.

This thesis takes a systems approach to designing a photovoltaic system capable of 50% efficient performance using conventional diode-based solar cells. The effort began with an exploration of the limiting efficiency of spectrum-splitting ensembles with 2 to 20 subcells in different electrical configurations. Incorporating realistic non-ideal performance into the computationally simple detailed balance approach resulted in practical limits that are useful for identifying specific cell performance requirements. This effort quantified the relative benefit of additional cells and of concentration for system efficiency, which will help in designing practical optical systems.
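As background, the detailed balance limit for a single band gap can be computed in a few lines; the following sketch assumes a 6000 K blackbody sun, full concentration, and an ideal diode, which are simplifications for illustration rather than the models used in this thesis.

```python
import numpy as np
from scipy.integrate import quad

Q_E = 1.602176634e-19     # elementary charge, C
K_B = 1.380649e-23        # Boltzmann constant, J/K
H_P = 6.62607015e-34      # Planck constant, J s
C_0 = 2.99792458e8        # speed of light, m/s
SIGMA = 5.670374419e-8    # Stefan-Boltzmann constant, W m^-2 K^-4
T_SUN, T_CELL = 6000.0, 300.0   # assumed blackbody sun and cell temperatures, K

def photon_flux(e_gap_j, temp, mu=0.0):
    """Blackbody photon flux (photons m^-2 s^-1) above e_gap_j, with chemical potential mu."""
    def integrand(e):
        return (2.0 * np.pi / (H_P**3 * C_0**2)) * e**2 / (np.exp((e - mu) / (K_B * temp)) - 1.0)
    flux, _ = quad(integrand, e_gap_j, e_gap_j + 60.0 * K_B * temp, limit=200)
    return flux

def detailed_balance_efficiency(e_gap_ev):
    """Maximum-power efficiency of one ideal junction at full concentration."""
    e_gap = e_gap_ev * Q_E
    absorbed = photon_flux(e_gap, T_SUN)          # photons absorbed from the sun
    p_in = SIGMA * T_SUN**4                       # incident power density at full concentration
    best = 0.0
    for v in np.linspace(0.0, e_gap_ev, 400, endpoint=False):
        emitted = photon_flux(e_gap, T_CELL, mu=v * Q_E)    # radiative recombination
        j = Q_E * (absorbed - emitted)                      # net current density, A/m^2
        best = max(best, j * v / p_in)
    return best

print(f"{detailed_balance_efficiency(1.1):.1%}")  # roughly 40% under these simplifying assumptions
```

Extending this kind of calculation to ensembles of 2 to 20 band gaps, with non-ideal cell performance and different electrical configurations, is what produces the practical spectrum-splitting limits described above.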

Efforts to improve the quality of the solar cells themselves focused on the development of tunable-lattice-constant epitaxial templates. Initially intended to enable lattice-matched multijunction solar cells, these templates would allow increased flexibility in band gap selection for spectrum-splitting ensembles and enhanced radiative quality relative to metamorphic growth. The III-V material family is commonly used for multijunction solar cells both for its high radiative quality and for the ease of integrating multiple band gaps into one monolithic growth. The band gap flexibility is limited by the lattice constant of available growth templates. The virtual substrate consists of a thin III-V film with the desired lattice constant. The film is grown strained on an available wafer substrate, but the thickness is kept below the dislocation nucleation threshold. By removing the film from the growth substrate, allowing the strain to relax elastically, and bonding it to a supportive handle, a template with the desired lattice constant is formed. Experimental efforts toward this structure and an initial proof of concept are presented.

Cells with high radiative quality present the opportunity to recover a large fraction of their radiative losses if they are incorporated in an ensemble that couples emission from one cell to another. This effect is well known, but it has previously been explored in the context of subcells that independently operate at their maximum power point. This analysis explicitly accounts for the system interaction and identifies ways to enhance overall performance by operating some cells in an ensemble at voltages that reduce the power converted in the individual cell. Series-connected multijunctions, which by their nature facilitate strong optical coupling between subcells, are reoptimized with substantial performance benefit.

Photovoltaic efficiency is usually measured relative to a standard incident spectrum to allow comparison between systems. Deployed in the field, systems may differ in energy production because of their sensitivity to changes in the spectrum. The series connection constraint in particular causes system efficiency to decrease as the incident spectrum deviates from the standard spectral composition. This thesis performs a case study comparing the performance of systems over a year at a particular location to identify the energy production penalty caused by series connection relative to independent electrical connection.

Relevance: 10.00%

Abstract:

This thesis describes the development of low-noise heterodyne receivers at THz frequencies for submillimeter astronomy using Nb-based superconductor-insulator-superconductor (SIS) tunneling junctions. The mixers utilize a quasi-optical configuration which consists of a planar twin-slot antenna and two antisymmetrically fed junctions on an antireflection-coated silicon hyperhemispherical lens. On-chip integrated tuning circuits, in the form of microstrip lines, are used to obtain maximum coupling efficiency in the designed frequency band. To reduce the rf losses in the integrated tuning circuits above the superconducting Nb gap frequency (~700 GHz), normal-metal Al is used in place of Nb for the tuning circuits.

To account for the rf losses in the microstrip lines, we calculated the surface impedance of the Al films using the nonlocal anomalous skin effect for finite-thickness films; the surface impedance of the Nb films was calculated using the Mattis-Bardeen theory in the extreme anomalous limit. Our calculations show that the losses of the Al and Nb microstrip lines are about equal at 830 GHz. For Al-wiring and Nb-wiring mixers both optimized at 1050 GHz, the RF coupling efficiency of the Al-wiring mixer is higher than that of the Nb-wiring one by almost 50%. We have designed both Nb-wiring and Al-wiring mixers below and above the gap frequency.

A Fourier transform spectrometer (FTS) has been constructed especially for the study of the frequency response of SIS receivers. This FTS features a large aperture size (10 inch) and high frequency resolution (114 MHz). The FTS spectra, obtained using the SIS receivers as direct detectors on the FTS, agree quite well with our theoretical simulations. We have also, for the first time, measured the FTS heterodyne response of an SIS mixer at sufficiently high resolution to resolve the LO and the sidebands. Heterodyne measurements of our SIS receivers with Nb-wiring or Al-wiring have yielded results which are among the best reported to date for broadband heterodyne receivers. The Nb-wiring mixers, covering the 400-850 GHz band with four separate fixed-tuned mixers, have uncorrected DSB receiver noise temperatures around 5hν/k_B up to 700 GHz, and better than 540 K at 808 GHz. An Al-wiring mixer designed for the 1050 GHz band has an uncorrected DSB receiver noise temperature of 840 K at 1042 GHz and a 2.5 K bath temperature. Mixer performance analysis shows that Nb junctions can work well up to twice the gap frequency and that the major cause of loss above the gap frequency is the rf loss in the microstrip tuning structures. Further advances in THz SIS mixers may be possible using circuits fabricated with higher-gap superconductors such as NbN. However, this will require high-quality films with low RF surface resistance at THz frequencies.
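For scale (a simple evaluation, not a number quoted in the thesis), the photon-noise-referenced figure 5hν/k_B works out to

\[
5\,\frac{h\nu}{k_B} \,=\, 5 \times \frac{(6.626\times10^{-34}\ \mathrm{J\,s})(700\times10^{9}\ \mathrm{Hz})}{1.381\times10^{-23}\ \mathrm{J/K}} \,\approx\, 168\ \mathrm{K}
\]

at 700 GHz, since hν/k_B ≈ 0.048 K per GHz.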

Relevance: 10.00%

Abstract:

The propagation of the fast magnetosonic wave in a tokamak plasma has been investigated at low power, between 10 and 300 watts, as a prelude to future heating experiments.

The experiments focused on understanding the coupling between a loop antenna and a plasma-filled cavity. Special emphasis has been given to the measurement of the complex loading impedance of the plasma. This measurement is important because, once the complex loading impedance of the plasma is known, a matching network can be designed so that the r.f. generator impedance is matched to one of the cavity modes, thus delivering maximum power to the plasma. For future heating experiments it will be essential to match the generator impedance to a cavity mode in order to couple the r.f. energy efficiently to the plasma.

As a consequence of the complex impedance measurements, it was discovered that the designs of the transmitting antenna and the impedance matching network are both crucial. The losses in the antenna and the matching network must be kept below the plasma loading in order to detect the complex plasma loading impedance. This is even more important in future heating experiments because, before any other consideration, efficient heating requires delivering more energy into the plasma than is dissipated in the antenna system.

The characteristics of the magnetosonic cavity modes are confirmed by three different methods. First, the cavity modes are observed as voltage maxima at the output of a six-turn receiving probe. Second, they also appear as maxima in the input resistance of the transmitting antenna. Finally, when the real and imaginary parts of the measured complex input impedance of the antenna are plotted in the complex impedance plane, the resulting curves are approximately circles, indicating a resonance phenomenon.

The observed plasma loading resistances at the various cavity modes are as high as 3 to 4 times the basic antenna resistance (~0.4 Ω). The estimated cavity Q's were between 400 and 700. This means that efficient energy coupling into the tokamak and low losses in the antenna system are possible.
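In circuit terms (a back-of-the-envelope consequence of the quoted numbers, not a figure stated in the abstract), the fraction of the delivered power that is coupled into the plasma is

\[
\eta \,=\, \frac{R_{\mathrm{plasma}}}{R_{\mathrm{plasma}} + R_{\mathrm{antenna}}},
\]

so plasma loading of 3 to 4 times the ~0.4 Ω antenna resistance corresponds to coupling roughly 75 to 80% of the power into the plasma at those modes.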

Relevance: 10.00%

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-Rigidity and TOF-Rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.

When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97) x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45) x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15) x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4) x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0) x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4) x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance: 10.00%

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss versus residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.
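The idea behind the energy-loss versus residual-energy technique, sketched here with a generic power-law range approximation rather than the detector-specific analysis of the thesis, is that a nucleus of mass M and charge Z depositing ΔE in a detector of thickness L and stopping with residual energy E' satisfies

\[
L \,=\, R(\Delta E + E') - R(E'), \qquad R(E) \,\approx\, \frac{k}{Z^{2}}\, M^{1-\alpha} E^{\alpha},
\]

so that

\[
M \,\approx\, \left[\frac{k\left[(\Delta E + E')^{\alpha} - E'^{\alpha}\right]}{L\,Z^{2}}\right]^{1/(\alpha-1)},
\]

with α ≈ 1.7 over this energy range; the mass resolution then follows from how fluctuations and measurement errors in ΔE, E', and the path length propagate through this relation.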

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 amu for Z ≈ 3 and ~0.2 amu for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 amu (Z ≈ 3) and ~0.3 amu (Z ≈ 26).

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance: 10.00%

Abstract:

Analysis of the data from the Heavy Nuclei Experiment on the HEAO-3 spacecraft has yielded the cosmic ray abundances of odd-even element pairs with atomic number Z in the range 33 ≤ Z ≤ 60, and the abundances of broad element groups in the range 62 ≤ Z ≤ 83, relative to iron. These data show that the cosmic ray source composition in this charge range is quite similar to that of the solar system, provided an allowance is made for a source fractionation based on first ionization potential. The observations are inconsistent with a source composition dominated by either r-process or s-process material, whether or not an allowance is made for first ionization potential. Although the observations do not exclude a source containing the same mixture of r- and s-process material as the solar system, the data are best fit by a source having an r- to s-process ratio of 1.22^(+0.25)_(-0.21) relative to the solar system. The abundances of secondary elements are consistent with the leaky box model of galactic propagation, implying a pathlength distribution similar to that which explains the abundances of nuclei with Z < 29.

The energy spectra of the even elements in the range 38 ≤ Z ≤ 60 are found to have a deficiency of particles in the range ~1.5 to 3 GeV/amu compared to iron. This deficiency may result from ionization energy loss in the interstellar medium, and it is not predicted by propagation models which ignore such losses. In addition, the energy spectra of secondary elements are found to differ from those of the primary elements. Such effects are consistent with observations of lighter nuclei and are in qualitative agreement with galactic propagation models using a rigidity-dependent escape length. The energy spectra of secondaries arising from the platinum group are found to be much steeper than those of lower Z. This effect may result from energy-dependent fragmentation cross sections.

Relevance: 10.00%

Abstract:

Surface plasma waves arise from the collective oscillations of billions of electrons at the surface of a metal in unison. The simplest way to quantize these waves is by direct analogy to electromagnetic fields in free space, with the surface plasmon, the quantum of the surface plasma wave, playing the same role as the photon. It follows that surface plasmons should exhibit all of the same quantum phenomena that photons do, including quantum interference and entanglement.

Unlike photons, however, surface plasmons suffer strong losses that arise from the scattering of free electrons from other electrons, phonons, and surfaces. Under some circumstances, these interactions might also cause “pure dephasing,” which entails a loss of coherence without absorption. Quantum descriptions of plasmons usually do not account for these effects explicitly, and sometimes ignore them altogether. In light of this extra microscopic complexity, it is necessary for experiments to test quantum models of surface plasmons.

In this thesis, I describe two such tests that my collaborators and I performed. The first was a plasmonic version of the Hong-Ou-Mandel experiment, in which we observed two-particle quantum interference between plasmons with a visibility of 93 ± 1%. This measurement confirms that surface plasmons faithfully reproduce this effect with the same visibility and mutual coherence time, to within measurement error, as in the photonic case.
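For reference, one common convention (not necessarily the one used in the thesis) defines the dip visibility as

\[
V \,=\, \frac{C_{\mathrm{dist}} - C_{\mathrm{min}}}{C_{\mathrm{dist}}},
\]

where C_dist is the coincidence rate for fully distinguishable particles and C_min the rate at the bottom of the dip; V = 1 corresponds to ideal two-particle interference, and any V above 1/2 exceeds what classical fields allow.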

The second experiment demonstrated path entanglement between surface plasmons with a visibility of 95 ± 2%, confirming that a path-entangled state can indeed survive without measurable decoherence. This measurement suggests that elastic scattering mechanisms of the type that might cause pure dephasing must have been weak enough not to significantly perturb the state of the metal under the experimental conditions we investigated.

These two experiments add quantum interference and path entanglement to a growing list of quantum phenomena that surface plasmons appear to exhibit just as clearly as photons, confirming the predictions of the simplest quantum models.

Relevance: 10.00%

Abstract:

The problem is to calculate the attenuation of plane sound waves passing through a viscous, heat-conducting fluid containing small spherical inhomogeneities. The attenuation is calculated by evaluating the rate of increase of entropy caused by two irreversible processes: (1) the mechanical work done by the viscous stresses in the presence of velocity gradients, and (2) the flow of heat down the thermal gradients. The method is first applied to a homogeneous fluid with no spheres and shown to give the classical Stokes-Kirchhoff expressions. The method is then used to calculate the additional viscous and thermal attenuation when small spheres are present. The viscous attenuation agrees with Epstein's result obtained in 1941 for a non-heat-conducting fluid. The thermal attenuation is found to be similar in form to the viscous attenuation and, for gases, of comparable magnitude. The general results are applied to the case of water drops in air and air bubbles in water.
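For reference, the classical Stokes-Kirchhoff attenuation coefficient recovered in the homogeneous case has the standard form

\[
\alpha_{\mathrm{SK}} \,=\, \frac{\omega^{2}}{2 \rho c^{3}} \left[\frac{4}{3}\mu + (\gamma - 1)\,\frac{\kappa}{c_{p}}\right],
\]

where ω is the angular frequency, ρ the density, c the sound speed, μ the shear viscosity, κ the thermal conductivity, γ the ratio of specific heats, and c_p the specific heat at constant pressure; the viscous and thermal terms correspond to the two entropy-producing processes listed above, and the spheres add analogous contributions of each type.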

For water drops in air the viscous and thermal attenuations are comparable; the thermal losses occur almost entirely in the air, the thermal dissipation in the water being negligible. The theoretical values are compared with Knudsen's experimental data for fogs and found to agree in order of magnitude and in dependence on frequency. For air bubbles in water the viscous losses are negligible, and the calculated attenuation is almost completely due to thermal losses occurring in the air inside the bubbles, the thermal dissipation in the water being relatively small. (These results apply only to non-resonant bubbles whose radius changes but slightly during the acoustic cycle.)

Relevance: 10.00%

Abstract:

The current power grid is on the cusp of modernization due to the emergence of distributed generation and controllable loads, as well as renewable energy. On one hand, distributed and renewable generation is volatile and difficult to dispatch. On the other hand, controllable loads provide significant potential for compensating for the uncertainties. In a future grid where there are thousands or millions of controllable loads and a large portion of the generation comes from volatile sources like wind and solar, distributed control that shifts or reduces the power consumption of electric loads in a reliable and economic way would be highly valuable.

Load control needs to be conducted with network awareness. Otherwise, voltage violations and overloading of circuit devices are likely. To model these effects, network power flows and voltages have to be considered explicitly. However, the physical laws that determine power flows and voltages are nonlinear. Furthermore, while distributed generation and controllable loads are mostly located in distribution networks, which are multiphase and radial, most power flow studies focus on single-phase networks.

This thesis focuses on distributed load control in multiphase radial distribution networks. In particular, we first study distributed load control without considering network constraints, and then consider network-aware distributed load control.

Distributed implementation of load control is the main challenge if network constraints can be ignored. In this case, we first ignore the uncertainties in renewable generation and load arrivals, and propose a distributed load control algorithm, Algorithm 1, that optimally schedules deferrable loads to shape the net electricity demand. Deferrable loads are loads whose total energy consumption is fixed but whose energy usage can be shifted over time in response to network conditions. Algorithm 1 is a distributed gradient descent algorithm, and it empirically converges to optimal deferrable load schedules within 15 iterations.
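As a toy illustration of this style of algorithm (not the actual Algorithm 1; the quadratic flattening objective, step size, and projection below are illustrative assumptions), each deferrable load can repeatedly take a gradient step against the broadcast aggregate profile and project back onto its own energy and rate constraints:

```python
import numpy as np

def project_to_energy_budget(profile, energy, p_max):
    """Euclidean projection onto {0 <= p <= p_max elementwise, sum(p) = energy},
    computed by bisection on the shift mu in clip(profile + mu, 0, p_max).
    Assumes 0 <= energy <= p_max * len(profile)."""
    lo = -np.max(profile) - 1.0              # shift low enough that everything clips to 0
    hi = p_max - np.min(profile) + 1.0       # shift high enough that everything clips to p_max
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        if np.clip(profile + mu, 0.0, p_max).sum() > energy:
            hi = mu
        else:
            lo = mu
    return np.clip(profile + 0.5 * (lo + hi), 0.0, p_max)

def schedule_deferrable_loads(base_load, energies, p_max, n_iter=15, step=0.5):
    """Each load descends the gradient of 0.5 * || base_load + sum_i p_i ||^2
    (a simple 'flatten the aggregate' objective) and projects onto its constraints."""
    n_loads, horizon = len(energies), len(base_load)
    p = np.zeros((n_loads, horizon))
    for _ in range(n_iter):
        aggregate = base_load + p.sum(axis=0)                 # signal broadcast to all loads
        for i in range(n_loads):
            candidate = p[i] - (step / n_loads) * aggregate   # gradient of the objective in p_i
            p[i] = project_to_energy_budget(candidate, energies[i], p_max)
    return p

# Illustrative use: two deferrable loads filling the valley of a base demand profile.
base = np.array([5.0, 4.0, 3.0, 3.5, 5.5, 7.0])
sched = schedule_deferrable_loads(base, energies=[3.0, 2.0], p_max=2.0)
print(np.round(base + sched.sum(axis=0), 2))                  # aggregate demand after scheduling
```

The actual algorithm is distributed in the sense that only the aggregate profile is broadcast and each load solves its own local update; its convergence guarantees and the real-time extension are as described in the surrounding text.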

We then extend Algorithm 1 to a real-time setup where deferrable loads arrive over time and only imprecise predictions about future renewable generation and load are available at the time of decision making. The real-time algorithm, Algorithm 2, is based on model-predictive control: Algorithm 2 uses updated predictions of renewable generation as the true values and computes a pseudo load to simulate future deferrable load. The pseudo load consumes zero power at the current time step, and its total energy consumption equals the expectation of the future deferrable loads' total energy request.

Network constraints, e.g., transformer loading constraints and voltage regulation constraints, pose a significant challenge to the load control problem since power flows and voltages are governed by nonlinear physical laws. Moreover, distribution networks are usually multiphase and radial. Two approaches are explored to overcome this challenge: one based on convex relaxation and one that seeks a locally optimal load schedule.

To explore the convex relaxation approach, a novel but equivalent power flow model, the branch flow model, is developed, and a semidefinite programming relaxation, called BFM-SDP, is obtained using the branch flow model. BFM-SDP is mathematically equivalent to a standard convex relaxation proposed in the literature, but numerically is much more stable. Empirical studies show that BFM-SDP is numerically exact for the IEEE 13-, 34-, 37-, 123-bus networks and a real-world 2065-bus network, while the standard convex relaxation is numerically exact for only two of these networks.
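For orientation, the single-phase branch flow (DistFlow) equations for a line from bus i to bus j with resistance r_ij and reactance x_ij can be written as (standard equations quoted for context; the multiphase model used in the thesis is more involved)

\[
P_{ij} - r_{ij}\ell_{ij} \,=\, \sum_{k:\,j \to k} P_{jk} + p_j, \qquad
Q_{ij} - x_{ij}\ell_{ij} \,=\, \sum_{k:\,j \to k} Q_{jk} + q_j,
\]
\[
v_j \,=\, v_i - 2\left(r_{ij}P_{ij} + x_{ij}Q_{ij}\right) + \left(r_{ij}^{2} + x_{ij}^{2}\right)\ell_{ij}, \qquad
\ell_{ij} \,=\, \frac{P_{ij}^{2} + Q_{ij}^{2}}{v_i},
\]

where v_i is the squared voltage magnitude, ℓ_ij the squared line current magnitude, P_ij and Q_ij the sending-end real and reactive flows, and p_j, q_j the net loads at bus j; relaxing the last equality to the inequality ℓ_ij ≥ (P_ij² + Q_ij²)/v_i is what turns the nonconvex power flow constraints into the second-order cone and semidefinite relaxations studied here.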

Theoretical guarantees on the exactness of convex relaxations are provided for two types of networks: single-phase radial alternating-current (AC) networks and single-phase mesh direct-current (DC) networks. In particular, for single-phase radial AC networks, we prove that a second-order cone program (SOCP) relaxation is exact if voltage upper bounds are not binding; we also modify the optimal load control problem so that its SOCP relaxation is always exact. For single-phase mesh DC networks, we prove that an SOCP relaxation is exact if 1) voltage upper bounds are not binding, or 2) voltage upper bounds are uniform and power injection lower bounds are strictly negative; we also modify the optimal load control problem so that its SOCP relaxation is always exact.

To seek a locally optimal load schedule, a distributed gradient-descent algorithm, Algorithm 9, is proposed. The suboptimality gap of the algorithm is rigorously characterized and is close to zero for practical networks. Furthermore, unlike the convex relaxation approach, Algorithm 9 ensures a feasible solution. The gradients used in Algorithm 9 are estimated based on a linear approximation of the power flow, which is derived under the following assumptions: 1) line losses are negligible; and 2) voltages are reasonably balanced. Both assumptions are satisfied in practical distribution networks. Empirical results show that Algorithm 9 obtains a more than 70× speedup over the convex relaxation approach, at the cost of a suboptimality within numerical precision.

Relevance: 10.00%

Abstract:

Pulse-height and time-of-flight methods have been used to measure the electronic stopping cross sections for projectiles of ^(12)C, ^(16)O, ^(19)F, ^(23)Na, ^(24)Mg, and ^(27)Al slowing in helium, neon, argon, krypton, and xenon. The ion energies were in the range 185 keV ≤ E ≤ 2560 keV.

A semiempirical calculation of the electronic stopping cross section for projectiles with atomic numbers between 6 and 13 passing through the inert gases has been performed using a modification of the Firsov model. Using Hartree-Fock-Slater orbitals and summing over the losses for the individual charge states of the projectiles, good agreement has been obtained with the experimental data. The main features of the stopping cross section seen in the data, such as the Z1 oscillation and the variation of the velocity dependence with Z1 and Z2, are present in the calculation. The inclusion of a modified form of the Bethe-Bloch formula as an additional term allows the increase of the velocity dependence for projectile velocities above v_0 to be reproduced in the calculation.
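For context, the Bethe-Bloch stopping formula referred to above has, in its simplest nonrelativistic form (quoted generically; the calculation uses a modified version),

\[
-\frac{dE}{dx} \,=\, \frac{4\pi e^{4} Z_1^{2} Z_2 N}{m_e v^{2}}\,\ln\!\frac{2 m_e v^{2}}{I},
\]

where Z_1 and v are the projectile charge and velocity, Z_2 and N the atomic number and number density of the target gas, m_e the electron mass, and I the mean excitation energy; it dominates at velocities well above v_0, whereas the Firsov picture describes the slower, quasi-molecular regime treated here.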