7 results for Entropy production

in CaltechTHESIS


Relevance: 30.00%

Abstract:

Stars with a core mass greater than about 30 M⊙ become dynamically unstable due to electron-positron pair production when their central temperature reaches 1.5-2.0 × 10⁹ °K. The collapse and subsequent explosion of stars with core masses of 45, 52, and 60 M⊙ is calculated. The range of the final velocity of expansion (3,400 - 8,500 km/sec) and of the mass ejected (1 - 40 M⊙) is comparable to that observed for type II supernovae.

An implicit scheme of hydrodynamic difference equations (stable for large time steps) used for the calculation of the evolution is described.

For fast evolution the turbulence caused by convective instability does not produce the zero entropy gradient and perfect mixing found for slower evolution. A dynamical model of the convection is derived from the equations of motion and then incorporated into the difference equations.
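
The stability property claimed for the implicit scheme can be illustrated with a minimal, self-contained sketch (backward-Euler diffusion in one dimension, an illustration only, not the thesis's actual stellar-evolution equations): the step remains well behaved even when the time step far exceeds the explicit stability limit.

```python
import numpy as np

def implicit_diffusion_step(u, dt, dx, kappa):
    """One backward-Euler step of u_t = kappa * u_xx on a 1-D grid.

    Solving (I - dt*kappa*D2) u_new = u_old keeps the scheme stable
    even when dt greatly exceeds the explicit limit dx**2 / (2*kappa).
    """
    n = len(u)
    r = kappa * dt / dx**2
    A = np.eye(n) * (1 + 2 * r)
    for i in range(n - 1):
        A[i, i + 1] = -r
        A[i + 1, i] = -r
    # Fixed (Dirichlet) boundaries: keep the end values unchanged.
    A[0, :] = 0.0; A[0, 0] = 1.0
    A[-1, :] = 0.0; A[-1, -1] = 1.0
    return np.linalg.solve(A, u)

# A time step ~100x the explicit stability limit still decays smoothly.
x = np.linspace(0.0, 1.0, 51)
u = np.exp(-100.0 * (x - 0.5) ** 2)
dx = x[1] - x[0]
dt = 100.0 * dx**2 / 2.0       # far beyond the explicit limit
u_new = implicit_diffusion_step(u, dt, dx, kappa=1.0)
print(float(u_new.max()) <= float(u.max()))  # True: no blow-up
```

An explicit forward-Euler step with the same `dt` would oscillate and diverge; the implicit solve trades a linear system per step for unconditional stability, which is what makes large evolutionary time steps affordable.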

Relevance: 30.00%

Abstract:

The creation of thermostable enzymes has wide-ranging applications in industrial, scientific, and pharmaceutical settings. As various stabilization techniques exist, it is often unclear how to best proceed. To this end, we have redesigned Cel5A (HjCel5A) from Hypocrea jecorina (anamorph Trichoderma reesei) to comparatively evaluate several significantly divergent stabilization methods: 1) consensus design, 2) core repacking, 3) helix dipole stabilization, 4) FoldX ΔΔG approximations, 5) Triad ΔΔG approximations, and 6) entropy reduction through backbone stabilization. As several of these techniques require structural data, we initially solved the first crystal structure of HjCel5A to 2.05 Å. Results from the stabilization experiments demonstrate that consensus design works best at accurately predicting highly stabilizing and active mutations. FoldX and helix dipole stabilization, however, also performed well. Both methods rely on structural data and can reveal non-conserved, structure-dependent mutations with high fidelity. HjCel5A is a prime target for stabilization. Capable of cleaving cellulose strands from agricultural waste into fermentable sugars, this protein functions as the primary endoglucanase in an organism commonly used in the sustainable biofuels industry. Creating a long-lived, highly active thermostable HjCel5A would allow cellulose hydrolysis to proceed more efficiently, lowering production expenses. We employed information gleaned during the survey of stabilization techniques to generate HjCel5A variants demonstrating a 12-15 °C increase in the temperature at which 50% of the total activity persists, an 11-14 °C increase in optimal operating temperature, and a 60% increase over the maximal amount of hydrolysis achievable using the wild type enzyme. We anticipate that our comparative analysis of stabilization methods will prove useful in future thermostabilization experiments.
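
Of the methods surveyed, consensus design is the simplest to sketch in code. The toy below (hypothetical aligned sequences, not HjCel5A data) proposes mutations wherever the target sequence disagrees with a residue that dominates the corresponding alignment column.

```python
from collections import Counter

def consensus_mutations(target, homologs, min_frequency=0.6):
    """Suggest target -> consensus mutations from an aligned homolog set.

    A mutation is proposed only where a single residue dominates the
    column (frequency >= min_frequency) and the target disagrees.
    """
    proposals = []
    for pos, target_res in enumerate(target):
        column = [seq[pos] for seq in homologs if seq[pos] != '-']
        if not column:
            continue
        residue, count = Counter(column).most_common(1)[0]
        if residue != target_res and count / len(column) >= min_frequency:
            proposals.append((pos + 1, target_res, residue))  # 1-indexed
    return proposals

# Toy alignment (hypothetical, for illustration only).
target = "MKTAY"
homologs = ["MKSAY", "MKSAY", "MKSGY", "MKSAY"]
print(consensus_mutations(target, homologs))  # [(3, 'T', 'S')]
```

Real consensus design additionally weights sequences for redundancy and filters positions near the active site, but the core idea is this column-majority vote.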

Relevance: 20.00%

Abstract:

In this thesis, we test the electroweak sector of the Standard Model of particle physics through measurements of the cross section for the simultaneous production of the neutral weak boson Z and a photon γ, and limits on the anomalous Zγγ and ZZγ triple gauge couplings h₃ and h₄, with the Z decaying to leptons (electrons and muons). We analyze events collected in proton-proton collisions at a center-of-mass energy of √s = 7 TeV, corresponding to an integrated luminosity of 5.0 inverse femtobarns. The analyzed events were recorded by the Compact Muon Solenoid detector at the Large Hadron Collider in 2011.

The production cross section has been measured for hard photons with transverse momentum greater than 15 GeV that are separated from the final-state leptons in the η-φ plane by ΔR greater than 0.7, for which the sum of the transverse energy of hadrons in a cone of ΔR less than 0.3 around the photon is less than 0.5 times the transverse energy of the photon, and with the invariant mass of the dilepton system greater than 50 GeV. The measured cross section is 5.33 ± 0.08 (stat.) ± 0.25 (syst.) ± 0.12 (lumi.) picobarn. This is compatible with the Standard Model prediction, which includes next-to-leading-order QCD contributions: 5.45 ± 0.27 picobarn.
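
The photon selection described above amounts to four sequential cuts. A minimal sketch (hypothetical event records; field names are assumptions, and this is not the CMS analysis code) applies them in order:

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Separation in the eta-phi plane, with delta-phi wrapped into (-pi, pi]."""
    dphi = (phi1 - phi2 + math.pi) % (2.0 * math.pi) - math.pi
    return math.hypot(eta1 - eta2, dphi)

def passes_selection(event):
    """Apply the Z-gamma photon cuts quoted in the text."""
    pho, leptons = event["photon"], event["leptons"]
    if pho["pt"] <= 15.0:                       # hard photon: pT > 15 GeV
        return False
    if any(delta_r(pho["eta"], pho["phi"], l["eta"], l["phi"]) <= 0.7
           for l in leptons):                   # Delta R(photon, lepton) > 0.7
        return False
    # Isolation: hadronic ET in a Delta R < 0.3 cone over photon ET < 0.5
    # (photon ET approximated here by its pT).
    if pho["iso_et"] / pho["pt"] >= 0.5:
        return False
    if event["m_ll"] <= 50.0:                   # dilepton mass > 50 GeV
        return False
    return True

event = {"photon": {"pt": 25.0, "eta": 0.1, "phi": 1.2, "iso_et": 2.0},
         "leptons": [{"eta": -1.0, "phi": -0.5}, {"eta": 0.8, "phi": 2.9}],
         "m_ll": 91.2}
print(passes_selection(event))  # True
```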

The measured 95% confidence-level upper limits on the absolute values of the anomalous couplings h₃ and h₄ are 0.01 and 8.8 × 10⁻⁵ for the Zγγ interactions, and 8.6 × 10⁻³ and 8.0 × 10⁻⁵ for the ZZγ interactions. These values are also compatible with the Standard Model, in which they vanish at tree level. They extend the sensitivity of the 2012 results from the ATLAS collaboration, based on 1.02 inverse femtobarns of data, by a factor of 2.4 to 3.1.

Relevance: 20.00%

Abstract:

This dissertation consists of two parts. The first part presents an explicit procedure for applying multi-Regge theory to production processes. As an illustrative example, the case of three body final states is developed in detail, both with respect to kinematics and multi-Regge dynamics. Next, the experimental consistency of the multi-Regge hypothesis is tested in a specific high energy reaction; the hypothesis is shown to provide a good qualitative fit to the data. In addition, the results demonstrate a severe suppression of double Pomeranchon exchange, and show the coupling of two "Reggeons" to an external particle to be strongly damped as the particle's mass increases. Finally, with the use of two body Regge parameters, order of magnitude estimates of the multi-Regge cross section for various reactions are given.

The second part presents a diffraction model for high energy proton-proton scattering. This model, developed by Chou and Yang, assumes that high energy elastic scattering results from absorption of the incident wave into the many available inelastic channels, with the absorption proportional to the amount of interpenetrating hadronic matter. The assumption that the hadronic matter distribution is proportional to the charge distribution relates the scattering amplitude for pp scattering to the proton form factor. The Chou-Yang model with the empirical proton form factor as input is then applied to calculate a high energy, fixed momentum transfer limit for the scattering cross section. This limiting cross section exhibits the same "dip" or "break" structure indicated in present experiments, but falls significantly below the data in magnitude. Finally, possible spin dependence is introduced through a weak spin-orbit type term which gives rather good agreement with pp polarization data.
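
In schematic form (standard eikonal notation; the normalizations here are assumptions, not necessarily the thesis's conventions), the Chou-Yang assumption described above reads:

```latex
A(s,t) \;\propto\; \int d^2b\; e^{i\vec{q}\cdot\vec{b}}
\left[\, 1 - e^{-\Omega(b)} \,\right],
\qquad
\tilde{\Omega}(t) \;\propto\; \big[G_E(t)\big]^2 ,
\qquad t = -\vec{q}^{\,2},
```

where the opacity Ω(b) is the overlap of the two protons' matter distributions at impact parameter b, and identifying matter density with charge density makes its Fourier transform proportional to the square of the proton electric form factor G_E(t).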

Relevance: 20.00%

Abstract:

A study of the muon decay channel of the τ lepton in the presence of a photon has been carried out to verify theoretical predictions for the production rate of e⁺e⁻ → τ⁺τ⁻γ and for the branching ratio of τ⁻ → ν_τ μ⁻ν̄_μ γ. Included in this study is the first direct measurement of radiative tau decay. Using e⁺e⁻ annihilation data taken at 29 GeV center-of-mass energy with the Mark II detector, we find the ratio of the measured τ⁻ → ν_τ μ⁻ν̄_μ γ branching fraction to the expected value from QED to be 1.03 ± 0.42. The ratio of measured-to-predicted numbers of events from radiative τ production, e⁺e⁻ → τ⁺τ⁻γ, where one of the τ's decays to μνν, is found to be 0.91 ± 0.20. We have not seen any indication of anomalous behavior in radiative tau events.

Relevance: 20.00%

Abstract:

The problem of the continuation to complex values of the angular momentum of the partial wave amplitude is examined for the simplest production process, that of two particles → three particles. The presence of so-called "anomalous singularities" complicates the procedure followed relative to that used for quasi two-body scattering amplitudes. The anomalous singularities are shown to lead to exchange degenerate amplitudes with possible poles in much the same way as "normal" singularities lead to the usual signatured amplitudes. The resulting exchange-degenerate trajectories would also be expected to occur in two-body amplitudes.

The representation of the production amplitude in terms of the singularities of the partial wave amplitude is then developed and applied to the high energy region, with attention being paid to the emergence of "double Regge" terms. Certain new results are obtained for the behavior of the amplitude at zero momentum transfer, and some predictions of polarization and minima in momentum transfer distributions are made. A calculation of the polarization of the ρ⁰ meson in the reaction π⁻p → π⁻ρ⁰p at high energy with small momentum transfer to the proton is compared with data taken at 25 GeV by W. D. Walker and collaborators. The result is favorable, although limited by the statistics of the available data.

Relevance: 20.00%

Abstract:

The study of codes, classically motivated by the need to communicate information reliably in the presence of error, has found new life in fields as diverse as network communication and distributed data storage, and even has connections to the design of linear measurements used in compressive sensing. But in all contexts, a code typically involves exploiting the algebraic or geometric structure underlying an application. In this thesis, we examine several problems in coding theory, and try to gain some insight into the algebraic structure behind them.

The first is the study of the entropy region - the space of all possible vectors of joint entropies which can arise from a set of discrete random variables. Understanding this region is essentially the key to optimizing network codes for a given network. To this end, we employ a group-theoretic method of constructing random variables producing so-called "group-characterizable" entropy vectors, which are capable of approximating any point in the entropy region. We show how small groups can be used to produce entropy vectors which violate the Ingleton inequality, a fundamental bound on entropy vectors arising from the random variables involved in linear network codes. We discuss the suitability of these groups to design codes for networks which could potentially outperform linear coding.
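
The group-characterizable construction mentioned above has a compact closed form: for a finite group G with subgroups G₁, …, Gₙ, the vector with entries h_S = log(|G| / |∩_{i∈S} G_i|) is entropic. A small sketch (an abelian toy example only; abelian groups cannot violate Ingleton, so the violating examples in the text require nonabelian groups):

```python
from itertools import combinations
from math import log2

def group_entropy_vector(group, subgroups):
    """Entropy vector from a finite group G and subgroups G_1..G_n.

    For each nonempty S, h_S = log2(|G| / |intersection of G_i, i in S|);
    vectors of this form are the "group-characterizable" points.
    """
    n = len(subgroups)
    h = {}
    for r in range(1, n + 1):
        for S in combinations(range(n), r):
            inter = set(group)
            for i in S:
                inter &= set(subgroups[i])
            h[S] = log2(len(group) / len(inter))
    return h

# Z_6 with subgroups {0,2,4} and {0,3}: the two induced random
# variables come out independent, since |G1||G2| = |G| and the
# subgroups intersect trivially.
h = group_entropy_vector(range(6), [{0, 2, 4}, {0, 3}])
# h[(0,)] = 1.0, h[(1,)] = log2(3) ~ 1.585, h[(0, 1)] = log2(6) ~ 2.585
print(h)
```

The same function applied to subgroups of a nonabelian group (the thesis's setting) can land outside the Ingleton region, which is what makes such vectors interesting for network coding.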

The second topic we discuss is the design of frames with low coherence, closely related to finding spherical codes in which the codewords are unit vectors spaced out around the unit sphere so as to minimize the magnitudes of their mutual inner products. We show how to build frames by selecting a cleverly chosen set of representations of a finite group to produce a "group code" as described by Slepian decades ago. We go on to reinterpret our method as selecting a subset of rows of a group Fourier matrix, allowing us to study and bound our frames' coherences using character theory. We discuss the usefulness of our frames in sparse signal recovery using linear measurements.
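
The row-selection idea can be sketched numerically. The example below (a classical construction chosen for illustration, not the frames designed in the thesis) takes the rows of a DFT matrix indexed by the quadratic residues of Z₇, yielding a 3 × 7 frame whose coherence meets the Welch lower bound:

```python
import numpy as np

def coherence(frame):
    """Max magnitude inner product between distinct unit-norm columns."""
    F = frame / np.linalg.norm(frame, axis=0)
    G = np.abs(F.conj().T @ F)
    np.fill_diagonal(G, 0.0)
    return G.max()

def fourier_frame(n, rows):
    """Frame formed from a chosen subset of rows of the n x n DFT matrix."""
    dft = np.exp(-2j * np.pi * np.outer(np.arange(n), np.arange(n)) / n)
    return dft[list(rows), :]

# {1, 2, 4} is a (7, 3, 1)-difference set (the quadratic residues mod 7),
# so every pair of columns has inner-product magnitude sqrt(3 - 1) = sqrt(2)
# before normalization, i.e. coherence sqrt(2)/3 after normalization.
F = fourier_frame(7, [1, 2, 4])
print(round(coherence(F), 4))  # 0.4714, the Welch bound for 3 x 7
```

Replacing the DFT rows with rows of a nonabelian group's Fourier matrix is precisely where the character-theoretic bounds described above come into play.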

The final problem we investigate is that of coding with constraints, most recently motivated by the demand for ways to encode large amounts of data using error-correcting codes so that any small loss can be recovered from a small set of surviving data. Most often, this involves using a systematic linear error-correcting code in which each parity symbol is constrained to be a function of some subset of the message symbols. We derive bounds on the minimum distance of such a code based on its constraints, and characterize when these bounds can be achieved using subcodes of Reed-Solomon codes.
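
The setup in this last paragraph can be made concrete with a toy binary example (hypothetical, not a code from the thesis): a systematic code whose parity symbols are each constrained to depend on a subset of the message symbols, with the minimum distance checked by brute force over all nonzero messages.

```python
from itertools import product

def min_distance(k, parity_supports):
    """Brute-force minimum distance of a systematic binary linear code.

    Each entry of parity_supports lists the message positions that one
    parity bit is constrained to XOR together.
    """
    best = None
    for msg in product([0, 1], repeat=k):
        if not any(msg):
            continue  # skip the zero codeword
        parities = [sum(msg[i] for i in s) % 2 for s in parity_supports]
        weight = sum(msg) + sum(parities)
        best = weight if best is None else min(best, weight)
    return best

# k = 4 message bits with three constrained parity bits; these supports
# happen to reproduce the [7,4] Hamming code, so d = 3.
supports = [[0, 1, 3], [0, 2, 3], [1, 2, 3]]
print(min_distance(4, supports))  # 3
```

Bounds of the kind described above say how large d can be for a given constraint pattern; the brute-force check is only feasible for small k, but makes the object of study explicit.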