42 results for P-median Model
in CaltechTHESIS
Abstract:
An economic air pollution control model, which determines the least cost of reaching various air quality levels, is formulated. The model takes the form of a general, nonlinear, mathematical programming problem. Primary contaminant emission levels are the independent variables. The objective function is the cost of attaining various emission levels and is to be minimized subject to constraints that given air quality levels be attained.
The model is applied to a simplified statement of the photochemical smog problem in Los Angeles County in 1975, with emissions specified by a two-dimensional vector: total reactive hydrocarbon (RHC) and nitrogen oxide (NOx) emissions. Air quality, also two-dimensional, is measured by the expected number of days per year that nitrogen dioxide (NO2) and mid-day ozone (O3) exceed standards in Central Los Angeles.
The minimum cost of reaching various emission levels is found by a linear programming model. The base or "uncontrolled" emission levels are those that will exist in 1975 with the present new car control program and with the degree of stationary source control existing in 1971. Controls, basically "add-on" devices, are considered here for used cars, aircraft, and existing stationary sources. It is found that with these added controls, Los Angeles County emission levels [(1300 tons/day RHC, 1000 tons/day NOx) in 1969] and [(670 tons/day RHC, 790 tons/day NOx) at the base 1975 level] can be reduced to 260 tons/day RHC (minimum RHC program) and 460 tons/day NOx (minimum NOx program).
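The structure of such a least-cost control problem can be sketched as a small linear program: choose the application fraction of each control measure so that total annualized cost is minimized while both emission ceilings are met. The controls, costs, reductions, and targets below are purely illustrative placeholders, not the thesis's data; a brute-force grid search stands in for a proper LP solver.

```python
# Illustrative least-cost emission control problem (all numbers hypothetical):
# minimize total annualized cost subject to RHC and NOx emission ceilings.
from itertools import product

# cost[j]: annualized cost ($M/yr) of fully applying control j
cost = [40.0, 95.0, 60.0]            # used cars, aircraft, stationary sources
# reduction[i][j]: tons/day of pollutant i removed by control j at full use
reduction = [[250.0, 60.0, 100.0],   # RHC
             [120.0, 30.0, 180.0]]   # NOx
base = [670.0, 790.0]                # base emissions (RHC, NOx), tons/day
target = [400.0, 600.0]              # required emission ceilings, tons/day

best = None
steps = [k / 20 for k in range(21)]  # each control applied 0%..100%
for x in product(steps, repeat=3):
    emitted = [b - sum(r * xj for r, xj in zip(row, x))
               for b, row in zip(base, reduction)]
    if all(e <= t for e, t in zip(emitted, target)):
        c = sum(cj * xj for cj, xj in zip(cost, x))
        if best is None or c < best[0]:
            best = (c, x)

print(best)  # (minimum cost, control fractions)
```

In the thesis the analogous problem is solved exactly by linear programming over the actual control inventory; the grid search here only mimics that structure.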
"Phenomenological" or statistical air quality models provide the relationship between air quality and emissions. These models estimate the relationship by using atmospheric monitoring data taken at one (yearly) emission level and by using certain simple physical assumptions (e.g., that emissions are reduced proportionately at all points in space and time). For NO2 (concentrations assumed proportional to NOx emissions), it is found that standard violations in Central Los Angeles (55 in 1969) can be reduced to 25, 5, and 0 days per year by controlling emissions to 800, 550, and 300 tons/day, respectively. A probabilistic model reveals that RHC control is much more effective than NOx control in reducing Central Los Angeles ozone. The 150 days per year of ozone violations in 1969 can be reduced to 75, 30, 10, and 0 days per year by abating RHC emissions to 700, 450, 300, and 150 tons/day, respectively (at the 1969 NOx emission level).
The control cost-emission level and air quality-emission level relationships are combined in a graphical solution of the complete model to find the cost of various air quality levels. Best possible air quality levels with the controls considered here are 8 O3 and 10 NO2 violations per year (minimum ozone program) or 25 O3 and 3 NO2 violations per year (minimum NO2 program) with an annualized cost of $230,000,000 (above the estimated $150,000,000 per year for the new car control program for Los Angeles County motor vehicles in 1975).
Abstract:
A general solution is presented for water waves generated by an arbitrary movement of the bed (in space and time) in a two-dimensional fluid domain with a uniform depth. The integral solution which is developed is based on a linearized approximation to the complete (nonlinear) set of governing equations. The general solution is evaluated for the specific case of a uniform upthrust or downthrow of a block section of the bed; two time-displacement histories of the bed movement are considered.
An integral solution (based on a linear theory) is also developed for a three-dimensional fluid domain of uniform depth for a class of bed movements which are axially symmetric. The integral solution is evaluated for the specific case of a block upthrust or downthrow of a section of the bed, circular in planform, with a time-displacement history identical to one of the motions used in the two-dimensional model.
Since the linear solutions are developed from a linearized approximation of the complete nonlinear description of wave behavior, the applicability of these solutions is investigated. Two types of non-linear effects are found which limit the applicability of the linear theory: (1) large nonlinear effects which occur in the region of generation during the bed movement, and (2) the gradual growth of nonlinear effects during wave propagation.
A model of wave behavior, which includes, in an approximate manner, both linear and nonlinear effects is presented for computing wave profiles after the linear theory has become invalid due to the growth of nonlinearities during wave propagation.
An experimental program has been conducted to confirm both the linear model for the two-dimensional fluid domain and the strategy suggested for determining wave profiles during propagation after the linear theory becomes invalid. The effect of a more general time-displacement history of the moving bed than those employed in the theoretical models is also investigated experimentally.
The linear theory is found to accurately approximate the wave behavior in the region of generation whenever the total displacement of the bed is much less than the water depth. Curves are developed and confirmed by the experiments which predict gross features of the lead wave propagating from the region of generation once the values of certain nondimensional parameters (which characterize the generation process) are known. For example, the maximum amplitude of the lead wave propagating from the region of generation has been found to never exceed approximately one-half of the total bed displacement. The gross features of the tsunami resulting from the Alaskan earthquake of 27 March 1964 can be estimated from the results of this study.
Abstract:
A model equation for water waves has been suggested by Whitham to study, qualitatively at least, the different kinds of breaking. This is an integro-differential equation which combines a typical nonlinear convection term with an integral for the dispersive effects and is of independent mathematical interest. For an approximate kernel of the form e^(-b|x|) it is shown first that solitary waves have a maximum height with sharp crests and secondly that waves which are sufficiently asymmetric break into "bores." The second part applies to a wide class of bounded kernels, but the kernel giving the correct dispersion effects of water waves has a square root singularity and the present argument does not go through. Nevertheless the possibility of the two kinds of breaking in such integro-differential equations is demonstrated.
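For reference, Whitham's model equation has the general form shown below (coefficients and normalizations vary between treatments; the exponential kernel is the approximation discussed above):

```latex
\eta_t + \tfrac{3}{2}\,\frac{c_0}{h}\,\eta\,\eta_x
  + \int_{-\infty}^{\infty} K(x-\xi)\,\eta_\xi(\xi,t)\,d\xi = 0 ,
\qquad K(x) \approx \tfrac{b}{2}\,e^{-b|x|} .
```

The exact water-wave kernel is instead defined through its Fourier transform \( \hat{K}(k) = \sqrt{g \tanh(kh)/k} \), and it is the square-root singularity of this kernel that prevents the breaking argument from going through.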
Difficulties arise in finding variational principles for continuum mechanics problems in the Eulerian (field) description. The reason is found to be that continuum equations in the original field variables lack a mathematical "self-adjointness" property which is necessary for Euler equations. This is a feature of the Eulerian description and occurs in non-dissipative problems which have variational principles for their Lagrangian description. To overcome this difficulty a "potential representation" approach is used which consists of transforming to new (Eulerian) variables whose equations are self-adjoint. The transformations to the velocity potential or stream function in fluids or the scaler and vector potentials in electromagnetism often lead to variational principles in this way. As yet no general procedure is available for finding suitable transformations. Existing variational principles for the inviscid fluid equations in the Eulerian description are reviewed and some ideas on the form of the appropriate transformations and Lagrangians for fluid problems are obtained. These ideas are developed in a series of examples which include finding variational principles for Rossby waves and for the internal waves of a stratified fluid.
Abstract:
In the first part of this thesis a study of the effect of the longitudinal distribution of optical intensity and electron density on the static and dynamic behavior of semiconductor lasers is performed. A static model for above threshold operation of a single mode laser, consisting of multiple active and passive sections, is developed by calculating the longitudinal optical intensity distribution and electron density distribution in a self-consistent manner. Feedback from an index and gain Bragg grating is included, as well as feedback from discrete reflections at interfaces and facets. Longitudinal spatial holeburning is analyzed by including the dependence of the gain and the refractive index on the electron density. The mechanisms of spatial holeburning in quarter wave shifted DFB lasers are analyzed. A new laser structure with a uniform optical intensity distribution is introduced and an implementation is simulated, resulting in a large reduction of the longitudinal spatial holeburning effect.
A dynamic small-signal model is then developed by including the optical intensity and electron density distribution, as well as the dependence of the grating coupling coefficients on the electron density. Expressions are derived for the intensity and frequency noise spectrum, the spontaneous emission rate into the lasing mode, the linewidth enhancement factor, and the AM and FM modulation response. Different chirp components are identified in the FM response, and a new adiabatic chirp component is discovered. This new adiabatic chirp component is caused by the nonuniform longitudinal distributions, and is found to dominate at low frequencies. Distributed feedback lasers with partial gain coupling are analyzed, and it is shown how the dependence of the grating coupling coefficients on the electron density can result in an enhancement of the differential gain with an associated enhancement in modulation bandwidth and a reduction in chirp.
In the second part, spectral characteristics of a passively mode-locked two-section multiple quantum well laser coupled to an external cavity are studied. Broad-band wavelength tuning using an external grating is demonstrated for the first time in passively mode-locked semiconductor lasers. A record tuning range of 26 nm is measured, with pulse widths of typically a few picoseconds and time-bandwidth products of more than 10 times the transform limit. It is then demonstrated that these large time-bandwidth products are due to a strong linear upchirp, by performing pulse compression by a factor of 15 to record pulse widths as low as 320 fs.
A model for pulse propagation through a saturable medium with self-phase modulation, due to the α-parameter, is developed for quantum well material, including the frequency dependence of the gain medium. This model is used to simulate two-section devices coupled to an external cavity. When no self-phase modulation is present, it is found that the pulses are asymmetric with a sharper rising edge, that the pulse tails have an exponential behavior, and that the time-bandwidth product is at the transform limit of 0.3. Inclusion of self-phase modulation results in a linear upchirp imprinted on the pulse after each round trip. This linear upchirp is due to a combination of self-phase modulation in the gain section and absorption of the leading edge of the pulse in the saturable absorber.
Abstract:
This thesis addresses the fine structure, both radial and lateral, of compressional wave velocity and attenuation of the Earth's core and the lowermost mantle using waveforms, differential travel times and amplitudes of PKP waves, which penetrate the Earth's core.
The structure near the inner core boundary (ICB) is studied by analyzing waveforms of a regional sample. The waveform modeling approach is demonstrated to be an effective tool for constraining the ICB structure. The best model features a sharp velocity jump of 0.78 km/s at the ICB, a low velocity gradient in the lowermost outer core (indicating possible inhomogeneity), and high attenuation at the top of the inner core.
A spherically symmetric P-wave model of the core is proposed from PKP differential times, waveforms, and amplitudes. The ICB remains sharp, with a velocity jump of 0.78 km/s. A very low velocity gradient at the base of the fluid core is demonstrated to be a robust feature, indicating that inhomogeneity is practically inevitable. The model also indicates that attenuation in the inner core decreases with depth. The velocity in D″ is smaller than in PREM.
The inner core is confirmed to be very anisotropic, possessing cylindrical symmetry around the Earth's spin axis, with the N-S direction 3% faster than the E-W direction. All of the N-S rays through the inner core were found to be faster than the E-W rays by 1.5 to 3.5 s. Exhaustive data selection and efforts in isolating contributions from the region above ensure that this is an inner core feature.
The anisotropy at the very top of the inner core is found to be distinctly different from the deeper part. The top 60 km of the inner core is not anisotropic. From 60 km to 150 km, there appears to be a transition from isotropy to anisotropy.
PKP differential travel times are used to study the P velocity structure in D″. Systematic regional variations of up to 2 s in AB-DF times were observed, attributed primarily to heterogeneities in the lower 500 km of the mantle. However, direct comparisons with tomographic models are not successful.
Abstract:
Compliant foams are usually characterized by a wide range of desirable mechanical properties. These properties include viscoelasticity at different temperatures, energy absorption, recoverability under cyclic loading, impact resistance, and thermal, electrical, acoustic, and radiation resistance. Some foams contain nano-sized features and are used in small-scale devices. The characteristic dimensions of foams thus span multiple length scales, making their mechanical properties difficult to model. Continuum mechanics-based models capture some salient experimental features, like the linear elastic regime followed by a nonlinear plateau-stress regime, but they lack mesostructural physical detail. This makes them incapable of accurately predicting local peaks in stress and strain distributions, which significantly affect the deformation paths. Atomistic methods are capable of capturing the physical origins of deformation at smaller scales, but suffer from impractical computational intensity. Capturing deformation at the so-called mesoscale, which describes the phenomenon at a continuum level but retains some physical insight, requires developing new theoretical approaches.
A fundamental question that motivates the modeling of foams is: how can the intrinsic material response be extracted from simple mechanical test data, such as the stress vs. strain response? A 3D model was developed to simulate the mechanical response of foam-type materials. The novelty of this model includes unique features such as a hardening-softening-hardening material response, strain-rate dependence, and plastically compressible solids with plastic non-normality. Suggestive links from atomistic simulations of foams were borrowed to formulate a physically informed hardening material input function. Motivated by a model that qualitatively captured the response of foam-type vertically aligned carbon nanotube (VACNT) pillars under uniaxial compression ["Analysis of Uniaxial Compression of Vertically Aligned Carbon Nanotubes," J. Mech. Phys. Solids, 59, pp. 2227–2237 (2011); Erratum 60, 1753–1756 (2012)], the property space exploration was advanced to three types of simple mechanical tests: 1) uniaxial compression, 2) uniaxial tension, and 3) nanoindentation with a conical and a flat-punch tip. The simulations attempt to explain some of the salient features in experimental data, such as:
1) The initial linear elastic response.
2) One or more nonlinear instabilities, yielding, and hardening.
The model-inherent relationships between the material properties and the overall stress-strain behavior were validated against the available experimental data. The material properties include the gradient in stiffness along the height, plastic and elastic compressibility, and hardening. Each of these tests was evaluated in terms of their efficiency in extracting material properties. The uniaxial simulation results proved to be a combination of structural and material influences. Out of all deformation paths, flat-punch indentation proved to be superior since it is the most sensitive in capturing the material properties.
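The hardening-softening-hardening material input described above can be caricatured as a piecewise flow-stress function of plastic strain. The shape and every parameter below are illustrative placeholders, not the thesis's calibrated constitutive input:

```python
# Schematic hardening-softening-hardening flow stress for a foam-like solid.
# Three regimes: initial hardening, softening/plateau, densification hardening.
# All parameters are illustrative, not taken from the thesis.
def flow_stress(eps_p, s0=1.0, s_min=0.7, s_dense=3.0,
                e1=0.05, e2=0.5, e3=0.8):
    """Piecewise flow stress vs. plastic strain (normalized units)."""
    if eps_p < e1:                        # initial hardening
        return s0 * (1 + eps_p / e1 * 0.2)
    if eps_p < e2:                        # softening toward the plateau
        f = (eps_p - e1) / (e2 - e1)
        return s0 * 1.2 * (1 - f) + s_min * f
    # densification: stress climbs steeply as the foam compacts
    f = min((eps_p - e2) / (e3 - e2), 1.0)
    return s_min + (s_dense - s_min) * f * f

print(flow_stress(0.0), flow_stress(0.3), flow_stress(0.9))
```

A function of this shape, fed to a plastically compressible continuum model, is what produces the linear elastic regime, the instability/plateau, and the final hardening listed above.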
Abstract:
Strong quenching of the fluorescence of aromatic hydrocarbons by tertiary aliphatic amines has been observed in solution at room temperature. Accompanying the fluorescence quenching of aromatic hydrocarbons, an anomalous emission is observed. This new emission is very broad, structureless and red-shifted from the original hydrocarbon fluorescence.
Kinetic studies indicate that this anomalous emission is due to an exciplex formed by an aromatic hydrocarbon molecule in its lowest excited singlet state with an amine molecule. The fluorescence quenching of the aromatic hydrocarbons is due to the depopulation of excited hydrocarbon molecules by the formation of exciplexes, with subsequent de-excitation of exciplexes by either radiative or non-radiative processes.
Analysis of rate constants shows the electron-transfer nature of the exciplex. From a study of the effects of substituents on the hydrocarbons on the frequencies of exciplex emission, it is concluded that partial electron transfer from the amine molecule to the aromatic hydrocarbon molecule in its lowest excited singlet state occurs in the formation of the exciplex. Solvent effects on the exciplex emission frequencies further demonstrate the polar nature of the exciplex.
A model based on this electron-transfer nature of the exciplex is proposed and proves satisfactory in interpreting the exciplex emission phenomenon in the fluorescence quenching of aromatic hydrocarbons by tertiary aliphatic amines.
Abstract:
The application of principles from evolutionary biology has long been used to gain new insights into the progression and clinical control of both infectious diseases and neoplasms. This iterative evolutionary process consists of expansion, diversification, and selection within an adaptive landscape: species are subject to random genetic or epigenetic alterations that result in variation; genetic information is inherited through asexual reproduction; and strong selective pressures such as therapeutic intervention can lead to the adaptation and expansion of resistant variants. These principles lie at the center of the modern evolutionary synthesis and constitute the primary reasons for the development of resistance and therapeutic failure, but they also provide a framework that allows for more effective control.
A model system for studying the evolution of resistance and the control of therapeutic failure is the treatment of chronic HIV-1 infection by broadly neutralizing antibody (bNAb) therapy. A relatively recent discovery is that a minority of HIV-infected individuals can produce broadly neutralizing antibodies, that is, antibodies that inhibit infection by many strains of HIV. Passive transfer of human antibodies for the prevention and treatment of HIV-1 infection is increasingly being considered as an alternative to a conventional vaccine. However, recent evolution studies have uncovered that antibody treatment can exert selective pressure on the virus that results in the rapid evolution of resistance. In certain cases, complete resistance to an antibody is conferred by a single amino acid substitution on the viral envelope of HIV.
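The selection dynamics described here can be illustrated with a toy replicator-style model of two viral variants under antibody pressure; the fitness and inhibition numbers below are hypothetical, not drawn from the thesis:

```python
# Toy selection model: a rare resistant variant sweeps the population once
# therapy suppresses the sensitive variant. All parameters are hypothetical.
growth = {"sensitive": 1.0, "resistant": 0.8}       # fitness without therapy
inhibition = {"sensitive": 0.9, "resistant": 0.05}  # fractional suppression

pop = {"sensitive": 0.999, "resistant": 0.001}      # initial frequencies
for _ in range(50):                                 # 50 replication cycles
    # effective fitness under therapy
    w = {k: growth[k] * (1 - inhibition[k]) for k in pop}
    total = sum(pop[k] * w[k] for k in pop)
    pop = {k: pop[k] * w[k] / total for k in pop}   # renormalize frequencies

print(round(pop["resistant"], 3))  # resistant variant dominates
```

Even a fitness-costly resistant variant (growth 0.8 vs. 1.0) fixes rapidly once therapy inverts the fitness ranking, which is the qualitative point of the paragraph above.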
The challenges in uncovering resistance mechanisms and designing effective combination strategies to control evolutionary processes and prevent therapeutic failure apply more broadly. We are motivated by two questions: Can we predict the evolution to resistance by characterizing genetic alterations that contribute to modified phenotypic fitness? Given an evolutionary landscape and a set of candidate therapies, can we computationally synthesize treatment strategies that control evolution to resistance?
To address the first question, we propose a mathematical framework to reason about the evolutionary dynamics of HIV from computationally derived Gibbs energy fitness landscapes, expanding the theoretical concept of an evolutionary landscape originally conceived by Sewall Wright into a computable, quantifiable, multidimensional, structurally defined fitness surface upon which to study complex HIV evolutionary outcomes.
To design combination treatment strategies that control evolution to resistance, we propose a methodology that solves for optimal combinations and concentrations of candidate therapies and allows tradeoffs in treatment design to be explored quantifiably, such as limiting the number of candidate therapies in the combination, dosage constraints, and robustness to error. Our algorithm is based on the application of recent results in optimal control to an HIV evolutionary dynamics model and is constructed from experimentally derived antibody-resistant phenotypes and their single-antibody pharmacodynamics. This method represents a first step towards integrating principled engineering techniques with an experimentally based mathematical model in the rational design of combination treatment strategies, and offers predictive understanding of the effects of combination therapies on the evolutionary dynamics and resistance of HIV. Preliminary in vitro studies suggest that the combination antibody therapies predicted by our algorithm can neutralize heterogeneous viral populations even when those populations contain resistance mutations.
Abstract:
I report the solubility and diffusivity of water in lunar basalt and an iron-free basaltic analogue at 1 atm and 1350 °C. Such parameters are critical for understanding the degassing histories of lunar pyroclastic glasses. Solubility experiments have been conducted over a range of fO2 conditions from three log units below to five log units above the iron-wüstite buffer (IW) and over a range of pH2/pH2O from 0.03 to 24. Quenched experimental glasses were analyzed by Fourier transform infrared spectroscopy (FTIR) and secondary ionization mass spectrometry (SIMS) and were found to contain up to ~420 ppm water. Results demonstrate that, under the conditions of our experiments: (1) hydroxyl is the only H-bearing species detected by FTIR; (2) the solubility of water is proportional to the square root of pH2O in the furnace atmosphere and is independent of fO2 and pH2/pH2O; (3) the solubility of water is very similar in both melt compositions; (4) the concentration of H2 in our iron-free experiments is <3 ppm, even at oxygen fugacities as low as IW-2.3 and pH2/pH2O as high as 24; and (5) SIMS analyses of water in iron-rich glasses equilibrated under variable fO2 conditions can be strongly influenced by matrix effects, even when the concentrations of water in the glasses are low. Our results can be used to constrain the entrapment pressure of the lunar melt inclusions of Hauri et al. (2011).
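Result (2), the square-root dependence on pH2O, is the classic signature of water dissolving in the melt entirely as hydroxyl: if molecular water reacts with bridging oxygen in the melt, equilibrium implies (schematically, in textbook form, not the thesis's calibration):

```latex
\mathrm{H_2O\,(vapor)} + \mathrm{O^{2-}\,(melt)}
\;\rightleftharpoons\; 2\,\mathrm{OH^{-}\,(melt)},
\qquad
[\mathrm{OH^{-}}] \;\propto\; \sqrt{K\, p_{\mathrm{H_2O}}} .
```

Because two hydroxyls form per water molecule, the equilibrium constant forces the hydroxyl (and hence total dissolved water) concentration to scale with the square root of the water partial pressure, consistent with results (1) and (2).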
Diffusion experiments were conducted over a range of fO2 conditions from IW-2.2 to IW+6.7 and over a range of pH2/pH2O from nominally zero to ~10. The water concentrations measured in our quenched experimental glasses by SIMS and FTIR vary from a few ppm to ~430 ppm. Water concentration gradients are well described by models in which the diffusivity of water (D*water) is assumed to be constant. The relationship between D*water and water concentration is well described by a modified speciation model (Ni et al. 2012) in which both molecular water and hydroxyl are allowed to diffuse. The success of this modified speciation model for describing our results suggests that we have resolved the diffusivity of hydroxyl in basaltic melt for the first time. Best-fit values of D*water for our experiments on lunar basalt vary within a factor of ~2 over a range of pH2/pH2O from 0.007 to 9.7, a range of fO2 from IW-2.2 to IW+4.9, and a water concentration range from ~80 ppm to ~280 ppm. The relative insensitivity of our best-fit values of D*water to variations in pH2 suggests that H2 diffusion was not significant during degassing of the lunar glasses of Saal et al. (2008). Values of D*water during dehydration and hydration in H2/CO2 gas mixtures are approximately the same, which supports an equilibrium boundary condition for these experiments. However, dehydration experiments into CO2 and CO/CO2 gas mixtures leave some scope for the importance of kinetics during dehydration into H-free environments. The value of D*water chosen by Saal et al. (2008) for modeling the diffusive degassing of the lunar volcanic glasses is within a factor of three of our measured value in our lunar basaltic melt at 1350 °C.
In Chapter 4 of this thesis, I document significant zonation in major, minor, trace, and volatile elements in naturally glassy olivine-hosted melt inclusions from the Siqueiros Fracture Zone and the Galapagos Islands. Components with a higher concentration in the host olivine than in the melt (MgO, FeO, Cr2O3, and MnO) are depleted at the edges of the zoned melt inclusions relative to their centers, whereas except for CaO, H2O, and F, components with a lower concentration in the host olivine than in the melt (Al2O3, SiO2, Na2O, K2O, TiO2, S, and Cl) are enriched near the melt inclusion edges. This zonation is due to formation of an olivine-depleted boundary layer in the adjacent melt in response to cooling and crystallization of olivine on the walls of the melt inclusions concurrent with diffusive propagation of the boundary layer toward the inclusion center.
Concentration profiles of some components in the melt inclusions exhibit multicomponent diffusion effects such as uphill diffusion (CaO, FeO) or slowing of the diffusion of typically rapidly diffusing components (Na2O, K2O) by coupling to slow diffusing components such as SiO2 and Al2O3. Concentrations of H2O and F decrease towards the edges of some of the Siqueiros melt inclusions, suggesting either that these components have been lost from the inclusions into the host olivine late in their cooling histories and/or that these components are exhibiting multicomponent diffusion effects.
A model has been developed of the time-dependent evolution of MgO concentration profiles in melt inclusions due to simultaneous depletion of MgO at the inclusion walls (by olivine growth) and diffusion of MgO in the melt inclusion in response to this depletion. Observed concentration profiles were fit to this model to constrain their thermal histories. Cooling rates determined by a single-stage linear cooling model are 150–13,000 °C hr⁻¹ from the liquidus down to ~1000 °C, consistent with previously determined cooling rates for basaltic glasses; compositional trends with melt inclusion size observed in the Siqueiros melt inclusions are described well by this simple single-stage linear cooling model. Despite the overall success of the modeling of MgO concentration profiles using a single-stage cooling history, MgO concentration profiles in some melt inclusions are better fit by a two-stage cooling history with a slower-cooling first stage followed by a faster-cooling second stage; the inferred total duration of cooling from the liquidus down to ~1000 °C is 40 s to just over one hour.
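The essence of the profile model, wall depletion by olivine growth competing with diffusive relaxation, can be sketched with a minimal one-dimensional finite-difference calculation; the geometry here is planar rather than spherical, and the diffusivity, sizes, and compositions are illustrative only:

```python
# 1D sketch of a melt-inclusion MgO profile: the wall composition falls
# linearly in time (olivine growth during linear cooling) while diffusion
# relaxes the interior. All parameter values are illustrative.
nx, L = 41, 50e-6            # grid points, inclusion half-width (m)
dx = L / (nx - 1)
D = 1e-11                    # MgO diffusivity in melt (m^2/s), illustrative
c = [10.0] * nx              # initial MgO (wt%)
c_wall_final = 7.0           # wall MgO at the end of cooling (wt%)
t_total = 60.0               # total cooling time (s)
dt = 0.4 * dx * dx / D       # below the explicit stability limit 0.5*dx^2/D
steps = int(t_total / dt) + 1
dt = t_total / steps

for n in range(steps):
    # single-stage linear cooling: wall MgO decreases linearly with time
    c[0] = 10.0 + (c_wall_final - 10.0) * (n + 1) / steps
    new = c[:]
    for i in range(1, nx - 1):
        new[i] = c[i] + D * dt / dx**2 * (c[i+1] - 2*c[i] + c[i-1])
    new[-1] = new[-2]        # zero-flux symmetry at the inclusion center
    c = new

print(round(c[0], 2), round(c[-1], 2))  # wall vs. center MgO after cooling
```

Fitting profiles of exactly this character (depleted wall, partially relaxed center) against cooling duration is what constrains the 150–13,000 °C hr⁻¹ rates quoted above.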
Based on our observations and models, compositions of zoned melt inclusions (even if measured at the centers of the inclusions) will typically have been diffusively fractionated relative to the initially trapped melt; for such inclusions, the initial composition cannot be simply reconstructed based on olivine-addition calculations, so caution should be exercised in application of such reconstructions to correct for post-entrapment crystallization of olivine on inclusion walls. Off-center analyses of a melt inclusion can also give results significantly fractionated relative to simple olivine crystallization.
All melt inclusions from the Siqueiros and Galapagos sample suites exhibit zoning profiles, and this feature may be nearly universal in glassy, olivine-hosted inclusions. If so, zoning profiles in melt inclusions could be widely useful to constrain late-stage syneruptive processes and as natural diffusion experiments.
Abstract:
A model for some of the many physical-chemical and biological processes in intermittent sand filtration of wastewaters is described and an expression for oxygen transfer is formulated.
The model assumes that aerobic bacterial activity within the sand or soil matrix is limited, mostly by oxygen deficiency, while the surface is ponded with wastewater. Atmospheric oxygen re-enters the soil after infiltration ends. Aerobic activity is resumed, but the extent of oxygen penetration is limited, and some depths may remain permanently anaerobic. These assumptions lead to the conclusion that the percolate shows large variations in the concentration of certain contaminants, with some portions showing little change in a specific contaminant. Analyses of soil moisture in field studies and of effluent from laboratory sand columns substantiated the model.
The oxygen content of the system at sufficiently long times after addition of wastes can be described by a quasi-steady-state diffusion equation including a term for an oxygen sink. Measurements of oxygen content during laboratory and field studies show that the oxygen profile changes only slightly up to two days after the quasi-steady state is attained.
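A minimal version of such a quasi-steady balance, assuming a constant volumetric oxygen sink S and depth coordinate z (a sketch of the form, not the thesis's exact formulation), is:

```latex
D\,\frac{d^{2}C}{dz^{2}} - S = 0,
\qquad C(0) = C_{\mathrm{atm}},
\qquad \left.\frac{dC}{dz}\right|_{z = z_p} = 0 ,
```

whose solution is parabolic in depth and reaches zero at the penetration depth \( z_p = \sqrt{2\,D\,C_{\mathrm{atm}}/S} \); depths below \(z_p\) stay anaerobic, consistent with the model's assumptions.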
Results of these hypotheses and experimental verification can be applied in the operation of existing facilities and in the interpretation of data from pilot-plant studies.
Abstract:
The pattern of energy release during the Imperial Valley, California, earthquake of 1940 is studied by analysing the El Centro strong motion seismograph record and records from the Tinemaha seismograph station, 546 km from the epicenter. The earthquake was a multiple event sequence, with at least 4 events recorded at El Centro in the first 25 seconds, followed by 9 events recorded in the next 5 minutes. Clear P, S and surface waves were observed on the strong motion record. Although the main part of the earthquake energy was released during the first 15 seconds, some of the later events were as large as M = 5.8 and thus are important for earthquake engineering studies. The moment calculated using Fourier analysis of surface waves agrees with the moment estimated from field measurements of fault offset after the earthquake. The earthquake engineering significance of the complex pattern of energy release is discussed. It is concluded that a cumulative increase in amplitudes of building vibration resulting from the present sequence of shocks would be significant only for structures with relatively long natural periods of vibration. However, progressive weakening effects may also lead to greater damage for multiple event earthquakes.
The model with surface Love waves propagating through a single layer as a surface wave guide is studied. It is expected that the derived properties for this simple model illustrate well several phenomena associated with strong earthquake ground motion. First, it is shown that a surface layer, or several layers, will cause the main part of the high frequency energy, radiated from the nearby earthquake, to be confined to the layer as a wave guide. The existence of the surface layer will thus increase the rate of the energy transfer into the man-made structures on or near the surface of the layer. Secondly, the surface amplitude of the guided SH waves will decrease if the energy of the wave is essentially confined to the layer and if the wave propagates towards an increasing layer thickness. It is also shown that the constructive interference of SH waves will cause the zeroes and the peaks in the Fourier amplitude spectrum of the surface ground motion to be continuously displaced towards the longer periods as the distance from the source of the energy release increases.
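The layer-over-half-space behavior described here is governed by the standard Love wave dispersion relation, quoted below for a layer of thickness H, rigidity μ1, and shear speed β1 over a half-space with rigidity μ2 and shear speed β2 (textbook form, with k the wavenumber and c the phase velocity):

```latex
\tan\!\left( kH\sqrt{\frac{c^{2}}{\beta_1^{2}} - 1} \right)
= \frac{\mu_2 \sqrt{1 - \dfrac{c^{2}}{\beta_2^{2}}}}
       {\mu_1 \sqrt{\dfrac{c^{2}}{\beta_1^{2}} - 1}},
\qquad \beta_1 < c < \beta_2 .
```

The discrete mode branches of this relation are the guided SH waves whose constructive interference produces the zeros and peaks of the surface-motion spectrum, and whose dependence on kH explains the shift of those features toward longer periods with distance and layer thickness.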
Abstract:
In the first part of this thesis we search for physics beyond the Standard Model through anomalous production of the Higgs boson, using the razor kinematic variables. The search uses proton-proton collisions at a center-of-mass energy of √s = 8 TeV, collected by the Compact Muon Solenoid experiment at the Large Hadron Collider and corresponding to an integrated luminosity of 19.8 fb⁻¹.
In the second part we present a novel method for using a quantum annealer to train a classifier to recognize events containing a Higgs boson decaying to two photons. We train this classifier using simulated proton-proton collisions at √s = 8 TeV in which either a Standard Model Higgs boson decays to two photons or a non-resonant Standard Model process produces a two-photon final state.
The production mechanisms of the Higgs boson are precisely predicted by the Standard Model through its association with the mechanism of electroweak symmetry breaking. We measure the yield of Higgs bosons decaying to two photons in kinematic regions predicted to have very little contribution from a Standard Model Higgs boson and search for an excess of events, which would be evidence of either non-standard production or non-standard properties of the Higgs boson. We divide the events into disjoint categories based on kinematic properties and the presence of additional b-quarks produced in the collisions. In each category, we use the razor kinematic variables to characterize events with topological configurations incompatible with those typical of Standard Model production of the Higgs boson.
We observe an excess of events with di-photon invariant mass compatible with the Higgs boson mass, localized in a small region of the razor plane: 5 events with a predicted background of 0.54 ± 0.28, corresponding to a p-value of 10⁻³ and a local significance of 3.35σ. The background prediction comprises 0.48 predicted non-resonant background events and 0.07 predicted SM Higgs boson events. Investigating the properties of this excess, we find that it produces a compelling peak in the di-photon invariant mass distribution and is physically separated in the razor plane from the predicted background. Using an alternative method of measuring the background and the significance of the excess, we find a 2.5σ deviation from the Standard Model hypothesis over a broader range of the razor plane.
In the second part of the thesis we transform the problem of training a classifier to distinguish events with a Higgs boson decaying to two photons from events with other sources of photon pairs into the Hamiltonian of a spin system, whose ground state is the best classifier. We then use a quantum annealer to find the ground state of this Hamiltonian and thereby train the classifier. We find that we are able to do this successfully in fewer than 400 annealing runs for a problem of median difficulty at the largest problem size considered. The networks trained in this manner exhibit good classification performance, competitive with more complicated machine learning techniques, and are highly resistant to overtraining. We also find that the nature of the training gives access to additional solutions that can be used to improve the classification performance by up to 1.2% in some regions.
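The mapping described above can be illustrated on a toy problem. This is a sketch only, not the thesis implementation: the selection of weak classifiers is encoded as binary spin variables whose Ising/QUBO energy combines classifier-classifier correlations, classifier-label correlations, and a sparsity penalty; a brute-force ground-state search stands in for the quantum annealer. All numbers (the six accuracies, the penalty lam, the problem size) are invented for the example.

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
n_events, n_weak = 200, 6
y = rng.choice([-1, 1], size=n_events)                  # true event labels
# weak classifiers: noisy copies of y with decreasing accuracy p
c = np.array([y * rng.choice([1, -1], size=n_events, p=[p, 1 - p])
              for p in (0.9, 0.8, 0.7, 0.6, 0.55, 0.5)])

C = c @ c.T / n_events        # classifier-classifier correlation matrix
Cy = c @ y / n_events         # classifier-label correlations
lam = 0.1                     # sparsity penalty (assumed value)

def energy(s):
    """Ising/QUBO energy of a subset s in {0,1}^n of weak classifiers."""
    s = np.asarray(s, dtype=float)
    return s @ C @ s - 2.0 * (Cy @ s) + lam * s.sum()

# Exhaustive search over 2^n_weak spin configurations: a stand-in for
# the annealer's ground-state search on this tiny problem.
best = np.array(min(itertools.product([0, 1], repeat=n_weak), key=energy))
votes = np.sign(best @ c)     # vote of the selected weak classifiers
accuracy = float(np.mean(votes == y))
print(best, accuracy)
```

Low-lying excited states of the same Hamiltonian are alternative classifier subsets, which is one way the "additional solutions" mentioned above can arise.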
Abstract:
Galaxies evolve throughout the history of the universe from the first star-forming sources, through gas-rich asymmetric structures with rapid star formation rates, to the massive symmetrical stellar systems observed at the present day. Determining the physical processes which drive galaxy formation and evolution is one of the most important questions in observational astrophysics. This thesis presents four projects aimed at improving our understanding of galaxy evolution from detailed measurements of star forming galaxies at high redshift.
We use resolved spectroscopy of gravitationally lensed z ≃ 2-3 star-forming galaxies to measure their kinematic and star formation properties. The combination of lensing with adaptive optics yields a physical resolution of ≃ 100 pc, sufficient to resolve giant H II regions. We find that ~70% of galaxies in our sample display ordered rotation with high local velocity dispersion, indicating turbulent thick disks. The rotating galaxies are gravitationally unstable and are expected to fragment into giant clumps. The size and dynamical mass of giant H II regions are in agreement with predictions for such clumps, indicating that gravitational instability drives the rapid star formation. The remainder of our sample is comprised of ongoing major mergers. Merging galaxies display similar star formation rates, morphology, and local velocity dispersion to isolated sources, but their velocity fields are more chaotic with no coherent rotation.
We measure resolved metallicity in four lensed galaxies at z = 2.0 − 2.4 from optical emission line diagnostics. Three rotating galaxies display radial gradients with higher metallicity at smaller radii, while the fourth is undergoing a merger and has an inverted gradient with lower metallicity at the center. Strong gradients in the rotating galaxies indicate that they are growing inside-out, with star formation fueled by accretion of metal-poor gas at large radii. By comparing measured gradients with an appropriate comparison sample at z = 0, we demonstrate that metallicity gradients in isolated galaxies must flatten at later times. The amount of size growth inferred from the gradients is in rough agreement with direct measurements of massive galaxies. We develop a chemical evolution model to interpret these data and conclude that metallicity gradients are established by a gradient in the outflow mass loading factor, combined with radial inflow of metal-enriched gas.
We present the first rest-frame optical spectroscopic survey of a large sample of low-luminosity galaxies at high redshift (L < L*, 1.5 < z < 3.5). This population dominates the star formation density of the universe at high redshifts, yet such galaxies are normally too faint to be studied spectroscopically. We take advantage of strong gravitational lensing magnification to compile observations for a sample of 29 galaxies using modest integration times with the Keck and Palomar telescopes. Balmer emission lines confirm that the sample has a median SFR ∼ 10 M_sun yr^−1 and extends to lower SFR than has been probed by other surveys at similar redshift. We derive the metallicity, dust extinction, SFR, ionization parameter, and dynamical mass from the spectroscopic data, providing the first accurate characterization of the star-forming environment in low-luminosity galaxies at high redshift. For the first time, we directly test the proposal that the relation between galaxy stellar mass, star formation rate, and gas phase metallicity does not evolve. We find lower gas phase metallicity in the high redshift galaxies than in local sources with equivalent stellar mass and star formation rate, arguing against a time-invariant relation. While our result is preliminary and may be biased by measurement errors, this represents an important first measurement that will be further constrained by ongoing analysis of the full data set and by future observations.
We present a study of composite rest-frame ultraviolet spectra of Lyman break galaxies at z = 4 and discuss implications for the distribution of neutral outflowing gas in the circumgalactic medium. In general we find similar spectroscopic trends to those found at z = 3 by earlier surveys. In particular, absorption lines which trace neutral gas are weaker in less evolved galaxies with lower stellar masses, smaller radii, lower luminosity, less dust, and stronger Lyα emission. Typical galaxies are thus expected to have stronger Lyα emission and weaker low-ionization absorption at earlier times, and we indeed find somewhat weaker low-ionization absorption at higher redshifts. In conjunction with earlier results, we argue that the reduced low-ionization absorption is likely caused by lower covering fraction and/or velocity range of outflowing neutral gas at earlier epochs. This result has important implications for the hypothesis that early galaxies were responsible for cosmic reionization. We additionally show that fine structure emission lines are sensitive to the spatial extent of neutral gas, and demonstrate that neutral gas is concentrated at smaller galactocentric radii in higher redshift galaxies.
The results of this thesis present a coherent picture of galaxy evolution at high redshifts 2 ≲ z ≲ 4. Roughly 1/3 of massive star forming galaxies at this period are undergoing major mergers, while the rest are growing inside-out with star formation occurring in gravitationally unstable thick disks. Star formation, stellar mass, and metallicity are limited by outflows which create a circumgalactic medium of metal-enriched material. We conclude by describing some remaining open questions and prospects for improving our understanding of galaxy evolution with future observations of gravitationally lensed galaxies.
Abstract:
We investigate the 2d O(3) model with the standard action by Monte Carlo simulation at couplings β up to 2.05. We measure the energy density, mass gap and susceptibility of the model, gathering high statistics on lattices of size L ≤ 1024 using the Floating Point Systems T-series vector hypercube and the Thinking Machines Corp.'s Connection Machine 2. Asymptotic scaling does not appear to set in for this action, even at β = 2.10, where the correlation length is 420. We observe a 20% difference between our estimate m/Λ_(M̄S̄) = 3.52(6) at this β and the recent exact analytical result. We use the overrelaxation algorithm interleaved with Metropolis updates and show that the decorrelation time scales with the correlation length and the number of overrelaxation steps per sweep. We determine its effective dynamical critical exponent to be z' = 1.079(10); thus critical slowing down is reduced significantly for this local algorithm, which is vectorizable and parallelizable.
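The overrelaxation move mentioned above admits a compact statement: each spin is reflected about the direction of the local field produced by its neighbours, which changes the configuration while conserving the local energy exactly. A minimal single-site sketch (not the thesis code, which ran vectorized on the machines named above):

```python
import numpy as np

def overrelax(s, v):
    """Reflect unit O(3) spin s about the local field v (the sum of its
    neighbour spins); preserves the energy -s.v exactly (microcanonical)."""
    vhat = v / np.linalg.norm(v)
    return 2.0 * np.dot(s, vhat) * vhat - s

rng = np.random.default_rng(1)
s = rng.normal(size=3)
s /= np.linalg.norm(s)          # random unit spin
v = rng.normal(size=3)          # stand-in for a sum of neighbour spins
s_new = overrelax(s, v)
```

Because the move conserves energy, it explores a constant-energy surface and must be interleaved with ordinary Metropolis updates to restore ergodicity, exactly as described above.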
We also use the cluster Monte Carlo algorithms, which are non-local Monte Carlo update schemes which can greatly increase the efficiency of computer simulations of spin models. The major computational task in these algorithms is connected component labeling, to identify clusters of connected sites on a lattice. We have devised some new SIMD component labeling algorithms, and implemented them on the Connection Machine. We investigate their performance when applied to the cluster update of the two dimensional Ising spin model.
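The core computational task named above, connected-component labeling, can be sketched serially with union-find (this plain-Python version is illustrative only; the thesis algorithms are SIMD formulations for the Connection Machine):

```python
def find(parent, i):
    """Follow parent pointers to the root label, compressing the path."""
    while parent[i] != i:
        parent[i] = parent[parent[i]]   # path compression
        i = parent[i]
    return i

def label_clusters(n_sites, bonds):
    """bonds: iterable of (i, j) site pairs joined by activated bonds;
    returns the root label of each site's cluster."""
    parent = list(range(n_sites))
    for i, j in bonds:
        ri, rj = find(parent, i), find(parent, j)
        if ri != rj:
            parent[max(ri, rj)] = min(ri, rj)   # union, keep smaller index
    return [find(parent, i) for i in range(n_sites)]

# Example: 6 sites, bonds forming clusters {0,1,2} and {4,5}; site 3 isolated.
labels = label_clusters(6, [(0, 1), (1, 2), (4, 5)])
print(labels)   # -> [0, 0, 0, 3, 4, 4]
```

In a cluster update each labeled cluster is then flipped as a unit, which is what makes these non-local schemes efficient near criticality.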
Finally, we use a Monte Carlo Renormalization Group method to directly measure the couplings of block Hamiltonians at different blocking levels. For the usual averaging block transformation we confirm the renormalized trajectory (RT) observed by Okawa. For another, improved probabilistic block transformation we find the RT and show that it is much closer to the standard action. We then use this block transformation to obtain the discrete β-function of the model, which we compare to the perturbative result. We do not see convergence, except when using a rescaled coupling β_E to effectively resum the series. In the latter case we see agreement for m/Λ_(M̄S̄) at β = 2.14, 2.26, 2.38 and 2.50. To three loops, m/Λ_(M̄S̄) = 3.047(35) at β = 2.50, which is very close to the exact value m/Λ_(M̄S̄) = 2.943. Our last point, at β = 2.62, however, disagrees with this estimate.
Abstract:
To explain the ^(26)Mg isotopic anomaly seen in meteorites (the ^(26)Al daughter) as well as the observation of 1809-keV γ rays from the interstellar medium (live decay of ^(26)Al), one must know, among other things, the destruction rate of ^(26)Al. Properties of states in ^(27)Si just above the ^(26)Al + p mass were investigated to determine the destruction rate of ^(26)Al via the ^(26)Al(p,γ)^(27)Si reaction at astrophysical temperatures.
Twenty micrograms of ^(26)Al were used to produce two types of Al_2O_3 targets by evaporation of the oxide. One was onto a thick platinum backing suitable for (p,γ) work, and the other onto a thin carbon foil for the (^3He,d) reaction.
The ^(26)Al(p,γ)^(27)Si excitation function, obtained using a germanium detector and a voltage-ramped target, confirmed known resonances and revealed new ones at 770, 847, 876, 917, and 928 keV. Possible resonances below the lowest observed one at E_p = 286 keV were investigated using the ^(26)Al(^3He,d)^(27)Si proton-transfer reaction. States in ^(27)Si corresponding to the 196- and 286-keV proton resonances were observed. A possible resonance at 130 keV (postulated in prior work) was shown to have a strength ωγ of less than 0.02 µeV.
By arranging four large NaI detectors as a 4π calorimeter, the 196-keV proton resonance, and one at 247 keV, were observed directly, having ωγ = 55 ± 9 and 10 ± 5 µeV, respectively.
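The measured strengths ωγ enter the stellar reaction rate through the standard narrow-resonance formalism (standard nuclear astrophysics, not quoted from the abstract): for resonances at energies E_i with strengths (ωγ)_i, both in MeV, and reduced mass μ in amu,

```latex
N_A \langle \sigma v \rangle
  \;=\;
  1.54 \times 10^{11} \, (\mu T_9)^{-3/2}
  \sum_i (\omega\gamma)_i \,
  \exp\!\left(-\frac{11.605\, E_i}{T_9}\right)
  \quad \mathrm{cm^{3}\, mol^{-1}\, s^{-1}},
```

where T_9 is the temperature in units of 10⁹ K. The exponential makes the rate at nova temperatures extremely sensitive to the lowest-lying resonances, which is why the low-energy states probed here dominate the rate uncertainty.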
Large uncertainties in the reaction rate have thereby been reduced. At nova temperatures, the rate is about 100 times faster than that used in recent model calculations, casting some doubt on nova production of galactic ^(26)Al.