27 results for Conjunctive expressions
in CaltechTHESIS
Abstract:
In this thesis I investigate some aspects of the thermal budget of pahoehoe lava flows. This is done with a combination of general field observations, quantitative modeling, and specific field experiments. The results of this work apply to pahoehoe flows in general, even though the vast bulk of the work has been conducted on the lavas formed by the Pu'u 'O'o - Kupaianaha eruption of Kilauea Volcano on Hawai'i. The field observations rely heavily on discussions with the staff of the United States Geological Survey's Hawaiian Volcano Observatory (HVO), under whom I labored repeatedly in 1991-1993 for a period totaling about 10 months.
The quantitative models I have constructed are based on the physical processes observed by others and myself to be active on pahoehoe lava flows. By building up these models from the basic physical principles involved, this work avoids many of the pitfalls of earlier attempts to fit field observations with "intuitively appropriate" mathematical expressions. Unlike many earlier works, my model results can be analyzed in terms of the interactions between the different physical processes. I constructed models to: (1) describe the initial cooling of small pahoehoe flow lobes and (2) understand the thermal budget of lava tubes.
The field experiments were designed either to validate model results or to constrain key input parameters. In support of the cooling model for pahoehoe flow lobes, attempts were made to measure: (1) the cooling within the flow lobes, (2) the amount of heat transported away from the lava by wind, and (3) the growth of the crust on the lobes. Field data collected by Jones [1992], Hon et al. [1994b], and Denlinger [Keszthelyi and Denlinger, in prep.] were also particularly useful in constraining my cooling model for flow lobes. Most of the field observations I have used to constrain the thermal budget of lava tubes were collected by HVO (geological and geophysical monitoring) and the Jet Propulsion Laboratory (airborne infrared imagery [Realmuto et al., 1992]). I was able to assist HVO for part of their lava tube monitoring program and also to collect helicopterborne and ground-based IR video in collaboration with JPL [Keszthelyi et al., 1993].
The most significant results of this work are (1) the quantitative demonstration that the emplacement of pahoehoe and 'a'a flows are fundamentally different, (2) confirmation that even the longest lava flows observed in our Solar System could have formed as low effusion rate, tube-fed pahoehoe flows, and (3) the recognition that the atmosphere plays a very important role throughout the cooling history of pahoehoe lava flows. In addition to answering specific questions about the thermal budget of tube-fed pahoehoe lava flows, this thesis has led to some additional, more general, insights into the emplacement of these lava flows. This general understanding of the tube-fed pahoehoe lava flow as a system has suggested foci for future research in this part of physical volcanology.
Abstract:
In response to infection or tissue dysfunction, immune cells develop into highly heterogeneous repertoires with diverse functions. Capturing the full spectrum of these functions requires analysis of large numbers of effector molecules from single cells. However, currently only 3-5 functional proteins can be measured from single cells. We developed a single cell functional proteomics approach that integrates a microchip platform with multiplex cell purification. This approach can quantitate 20 proteins from >5,000 phenotypically pure single cells simultaneously. With a 1-million-fold miniaturization, the system can detect down to ~100 molecules and requires only ~10^4 cells. Single cell functional proteomic analysis finds broad applications in basic, translational and clinical studies. In the three studies conducted, it yielded critical insights for understanding clinical cancer immunotherapy, inflammatory bowel disease (IBD) mechanisms and hematopoietic stem cell (HSC) biology.
To study phenotypically defined cell populations, single cell barcode microchips were coupled with upstream multiplex cell purification based on up to 11 parameters. Statistical algorithms were developed to process and model the high dimensional readouts. This analysis evaluates rare cells and is versatile for various cells and proteins. (1) We conducted an immune monitoring study of a phase 2 cancer cellular immunotherapy clinical trial that used T-cell receptor (TCR) transgenic T cells as the major therapeutic to treat metastatic melanoma. We evaluated the functional proteome of 4 antigen-specific, phenotypically defined T cell populations from the peripheral blood of 3 patients across 8 time points. (2) Natural killer (NK) cells can play a protective role in chronic inflammation, and their surface receptor – killer immunoglobulin-like receptor (KIR) – has been identified as a risk factor for IBD. We compared the functional behavior of NK cells that had differential KIR expression. These NK cells were retrieved from the blood of 12 patients with different genetic backgrounds. (3) HSCs are the progenitors of immune cells and are thought to have no immediate functional capacity against pathogens. However, recent studies identified expression of Toll-like receptors (TLRs) on HSCs. We studied the functional capacity of HSCs upon TLR activation. The comparison of HSCs from wild-type mice against those from genetic knock-out mouse models elucidates the responding signaling pathway.
In all three cases, we observed profound functional heterogeneity within phenotypically defined cells. Polyfunctional cells that conduct multiple functions also produce those proteins in large amounts; they dominate the immune response. In the cancer immunotherapy, the strong cytotoxic and antitumor functions from transgenic TCR T cells contributed to a ~30% tumor reduction immediately after the therapy. However, this infused immune response disappeared within 2-3 weeks. Later on, some patients gained a second antitumor response, consisting of the emergence of endogenous antitumor cytotoxic T cells and their production of multiple antitumor functions. These patients showed more effective long-term tumor control. In the IBD mechanism study, we noticed that, compared with others, NK cells expressing the KIR2DL3 receptor secreted a large array of effector proteins, such as TNF-α, CCLs and CXCLs. The functions from these cells regulated disease-contributing cells and protected host tissues. Their existence correlated with IBD disease susceptibility. In the HSC study, the HSCs exhibited functional capacity by producing TNF-α, IL-6 and GM-CSF. TLR stimulation activated NF-κB signaling in HSCs. The single cell functional proteome contains rich information that is independent of the genome and transcriptome. In all three cases, functional proteomic evaluation uncovered critical biological insights that would not be resolved otherwise. The integrated single cell functional proteomic analysis constructed a detailed kinetic picture of the immune response that took place during the clinical cancer immunotherapy. It revealed concrete functional evidence that connected genetics to IBD disease susceptibility. Further, it provided predictors that correlated with clinical responses and pathogenic outcomes.
Abstract:
The dispersion of an isolated, spherical, Brownian particle immersed in a Newtonian fluid between infinite parallel plates is investigated. Expressions are developed for both a 'molecular' contribution to dispersion, which arises from random thermal fluctuations, and a 'convective' contribution, arising when a shear flow is applied between the plates. These expressions are evaluated numerically for all sizes of the particle relative to the bounding plates, and the method of matched asymptotic expansions is used to develop analytical expressions for the dispersion coefficients as a function of particle size to plate spacing ratio for small values of this parameter.
It is shown that both the molecular and convective dispersion coefficients decrease as the size of the particle relative to the bounding plates increases. When the particle is small compared to the plate spacing, the coefficients decrease roughly in proportion to the particle size to plate spacing ratio. When the particle closely fills the space between the plates, the molecular dispersion coefficient approaches zero slowly, as an inverse logarithmic function of the particle size to plate spacing ratio, and the convective dispersion coefficient approaches zero approximately in proportion to the width of the gap between the edges of the sphere and the bounding plates.
Abstract:
In Part I, a method for finding solutions of certain diffusive dispersive nonlinear evolution equations is introduced. The method consists of a straightforward iteration procedure, applied to the equation as it stands (in most cases), which can be carried out to all terms, followed by a summation of the resulting infinite series, sometimes directly and other times in terms of traces of inverses of operators in an appropriate space.
We first illustrate our method with Burgers' and Thomas' equations, and show how it quickly leads to the Cole-Hopf transformation, which is known to linearize these equations.
We also apply this method to the Korteweg-de Vries, nonlinear (cubic) Schrödinger, sine-Gordon, modified KdV and Boussinesq equations. In all these cases the multisoliton solutions are easily obtained and new expressions for some of them follow. More generally we show that the Marchenko integral equations, together with the inverse problem that originates them, follow naturally from our expressions.
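As a quick concrete check on the kind of soliton expressions discussed above, the sketch below (standard textbook material, not an expression taken from the thesis) verifies that the KdV one-soliton u(x,t) = (c/2) sech^2(√c (x − ct)/2) satisfies u_t + 6 u u_x + u_xxx = 0, using analytically computed derivatives:

```python
import math

def sech(z):
    return 1.0 / math.cosh(z)

def kdv_residual(c, x, t):
    # One-soliton solution of the KdV equation u_t + 6 u u_x + u_xxx = 0:
    #   u(x, t) = (c/2) * sech^2( sqrt(c) * (x - c*t) / 2 )
    # All derivatives below are exact, via sech/tanh identities.
    k = math.sqrt(c) / 2.0
    th = k * (x - c * t)
    S, T = sech(th) ** 2, math.tanh(th)
    u = (c / 2.0) * S
    u_x = -c * k * S * T
    u_t = -c * u_x                                  # traveling-wave form
    u_xxx = 4 * c * k**3 * S * T * (2 * S - T**2)
    return u_t + 6 * u * u_x + u_xxx

# The residual vanishes (to rounding) for any speed c > 0 and any (x, t).
for (c, x, t) in [(1.0, 0.3, 0.0), (4.0, -1.2, 0.5), (2.5, 2.0, -0.7)]:
    assert abs(kdv_residual(c, x, t)) < 1e-12
```

The cancellation relies on sech^2 + tanh^2 = 1, the same identity that underlies the closed-form multisoliton expressions.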
Only solutions that are small in some sense (i.e., they tend to zero as the independent variable goes to ∞) are covered by our methods. However, by studying the effect of writing the initial iterate $u_1 = u_1(x,t)$ as a sum $u_1 = \tilde{u}_1 + \tilde{\tilde{u}}_1$ when we know the solution which results if $u_1 = \tilde{u}_1$, we are led to expressions that describe the interaction of two arbitrary solutions, only one of which is small. This should not be confused with Bäcklund transformations and is more in the direction of performing the inverse scattering over an arbitrary “base” solution. Thus we are able to write expressions for the interaction of a cnoidal wave with a multisoliton in the case of the KdV equation; these expressions are somewhat different from the ones obtained by Wahlquist (1976). Similarly, we find multi-dark-pulse solutions and solutions describing the interaction of envelope-solitons with a uniform wave train in the case of the Schrödinger equation.
Other equations tractable by our method are presented. These include the following equations: self-induced transparency, reduced Maxwell-Bloch, and a two-dimensional nonlinear Schrödinger. Higher order and matrix-valued equations with nonscalar dispersion functions are also presented.
In Part II, the second Painlevé transcendent is treated in conjunction with the similarity solutions of the Korteweg-de Vries equation and the modified Korteweg-de Vries equation.
Abstract:
In the first part of this thesis a study of the effect of the longitudinal distribution of optical intensity and electron density on the static and dynamic behavior of semiconductor lasers is performed. A static model for above threshold operation of a single mode laser, consisting of multiple active and passive sections, is developed by calculating the longitudinal optical intensity distribution and electron density distribution in a self-consistent manner. Feedback from an index and gain Bragg grating is included, as well as feedback from discrete reflections at interfaces and facets. Longitudinal spatial holeburning is analyzed by including the dependence of the gain and the refractive index on the electron density. The mechanisms of spatial holeburning in quarter wave shifted DFB lasers are analyzed. A new laser structure with a uniform optical intensity distribution is introduced and an implementation is simulated, resulting in a large reduction of the longitudinal spatial holeburning effect.
A dynamic small-signal model is then developed by including the optical intensity and electron density distribution, as well as the dependence of the grating coupling coefficients on the electron density. Expressions are derived for the intensity and frequency noise spectrum, the spontaneous emission rate into the lasing mode, the linewidth enhancement factor, and the AM and FM modulation response. Different chirp components are identified in the FM response, and a new adiabatic chirp component is discovered. This new adiabatic chirp component is caused by the nonuniform longitudinal distributions, and is found to dominate at low frequencies. Distributed feedback lasers with partial gain coupling are analyzed, and it is shown how the dependence of the grating coupling coefficients on the electron density can result in an enhancement of the differential gain with an associated enhancement in modulation bandwidth and a reduction in chirp.
In the second part, spectral characteristics of a passively mode-locked two-section multiple quantum well laser coupled to an external cavity are studied. Broad-band wavelength tuning using an external grating is demonstrated for the first time in passively mode-locked semiconductor lasers. A record tuning range of 26 nm is measured, with pulse widths of typically a few picoseconds and time-bandwidth products of more than 10 times the transform limit. It is then demonstrated that these large time-bandwidth products are due to a strong linear upchirp, by performing pulse compression by a factor of 15 to record pulse widths as low as 320 fs.
A model for pulse propagation through a saturable medium with self-phase-modulation, due to the α-parameter, is developed for quantum well material, including the frequency dependence of the gain medium. This model is used to simulate two-section devices coupled to an external cavity. When no self-phase-modulation is present, it is found that the pulses are asymmetric with a sharper rising edge, that the pulse tails have an exponential behavior, and that the transform limit is 0.3. Inclusion of self-phase-modulation results in a linear upchirp imprinted on the pulse after each round-trip. This linear upchirp is due to a combination of self-phase-modulation in a gain section and absorption of the leading edge of the pulse in the saturable absorber.
Abstract:
In noncooperative cost sharing games, individually strategic agents choose resources based on how the welfare (cost or revenue) generated at each resource (which depends on the set of agents that choose the resource) is distributed. The focus is on finding distribution rules that lead to stable allocations, which is formalized by the concept of Nash equilibrium, e.g., Shapley value (budget-balanced) and marginal contribution (not budget-balanced) rules.
Recent work that seeks to characterize the space of all such rules shows that the only budget-balanced distribution rules that guarantee equilibrium existence in all welfare sharing games are generalized weighted Shapley values (GWSVs); this is shown by exhibiting a specific 'worst-case' welfare function which requires that GWSV rules be used. Our work provides an exact characterization of the space of distribution rules (not necessarily budget-balanced) for any specific local welfare functions, for a general class of scalable and separable games with well-known applications, e.g., facility location, routing, network formation, and coverage games.
We show that all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to GWSV rules on some 'ground' welfare functions. Therefore, it is neither the existence of some worst-case welfare function, nor the restriction of budget-balance, which limits the design to GWSVs. Also, in order to guarantee equilibrium existence, it is necessary to work within the class of potential games, since GWSVs result in (weighted) potential games.
We also provide an alternative characterization—all games conditioned on any fixed local welfare functions possess an equilibrium if and only if the distribution rules are equivalent to generalized weighted marginal contribution (GWMC) rules on some 'ground' welfare functions. This result is due to a deeper fundamental connection between Shapley values and marginal contributions that our proofs expose—they are equivalent given a transformation connecting their ground welfare functions. (This connection leads to novel closed-form expressions for the GWSV potential function.) Since GWMCs are more tractable than GWSVs, a designer can tradeoff budget-balance with computational tractability in deciding which rule to implement.
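To make the plain (unweighted, budget-balanced) Shapley-value distribution rule concrete, here is a minimal sketch for a single resource with a hypothetical concave welfare function; the thesis treats the more general weighted and generalized-weighted (GWSV) variants:

```python
import math
from itertools import permutations

def shapley_shares(agents, welfare):
    # Each agent's share is its marginal contribution to the welfare,
    # averaged over all n! orderings in which the agents could join.
    n = len(agents)
    shares = {a: 0.0 for a in agents}
    for order in permutations(agents):
        joined = set()
        for a in order:
            before = welfare(joined)
            joined.add(a)
            shares[a] += welfare(joined) - before
    return {a: s / math.factorial(n) for a, s in shares.items()}

# Hypothetical concave welfare at one resource: W(S) = sqrt(|S|).
W = lambda S: math.sqrt(len(S))
shares = shapley_shares(['a', 'b', 'c'], W)

# Budget balance: the shares sum exactly to the total welfare W({a,b,c}).
assert abs(sum(shares.values()) - W({'a', 'b', 'c'})) < 1e-9
```

Since the three agents are symmetric here, each receives sqrt(3)/3; asymmetric welfare functions yield unequal shares while preserving budget balance.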
Abstract:
Home to hundreds of millions of souls and land of excessiveness, the Himalaya is also the locus of a unique seismicity whose scope and peculiarities still remain to this day somewhat mysterious. Having claimed the lives of kings, or turned ancient timeworn cities into heaps of rubble and ruins, earthquakes eerily inhabit Nepalese folk tales with the fatalistic message that nothing lasts forever. From a scientific point of view as much as from a human perspective, solving the mysteries of Himalayan seismicity thus represents a challenge of prime importance. Documenting geodetic strain across the Nepal Himalaya with various GPS and leveling data, we show that unlike other subduction zones that exhibit a heterogeneous and patchy coupling pattern along strike, the last hundred kilometers of the Main Himalayan Thrust fault, or MHT, appear to be uniformly locked, devoid of any of the “creeping barriers” that traditionally ward off the propagation of large events. Since the approximately 20 mm/yr of reckoned convergence across the Himalaya matches previously established estimates of the secular deformation at the front of the arc, the slip accumulated at depth has to somehow elastically propagate all the way to the surface at some point. And yet, neither large events from the past nor currently recorded microseismicity come close to compensating for the massive moment deficit that quietly builds up under the giant mountains. Along with this large unbalanced moment deficit, the uncommonly homogeneous coupling pattern on the MHT raises the question of whether or not the locked portion of the MHT can rupture all at once in a giant earthquake. Unequivocally answering this question appears contingent on the still elusive estimate of the magnitude of the largest possible earthquake in the Himalaya, and requires tight constraints on local fault properties.
What makes the Himalaya enigmatic also makes it the potential source of an incredible wealth of information, and we exploit some of the oddities of Himalayan seismicity in an effort to improve the understanding of earthquake physics and decipher the properties of the MHT. Thanks to the Himalaya, the Indo-Gangetic plain is deluged each year under a tremendous amount of water during the annual summer monsoon that collects and bears down on the Indian plate enough to pull it slightly away from the Eurasian plate, temporarily relieving a small portion of the stress mounting on the MHT. As the rainwater evaporates in the dry winter season, the plate rebounds and tension is restored on the fault. Interestingly, the mild waggle of stress induced by the monsoon rains is about the same size as that from solid-Earth tides, which gently tug at the planet's solid layers; but whereas changes in earthquake frequency correspond with the annually occurring monsoon, there is no such correlation with Earth tides, which oscillate back and forth twice a day. We therefore investigate the general response of the creeping and seismogenic parts of the MHT to periodic stresses in order to link these observations to physical parameters. First, the response of the creeping part of the MHT is analyzed with a simple spring-and-slider system bearing rate-strengthening rheology, and we show that at the transition with the locked zone, where the friction becomes nearly velocity neutral, the response of the slip rate may be amplified at some periods, whose values are analytically related to the physical parameters of the problem.
Such predictions therefore hold the potential of constraining fault properties on the MHT, but still await observational counterparts to be applied, as nothing indicates that the variations of seismicity rate on the locked part of the MHT are the direct expression of variations of the slip rate on its creeping part, and no variations of the slip rate have been singled out from the GPS measurements to this day. When shifting to the locked seismogenic part of the MHT, spring-and-slider models with rate-weakening rheology are insufficient to explain the contrasting responses of the seismicity to the periodic loads that tides and monsoon both place on the MHT. Instead, we resort to numerical simulations using the Boundary Integral CYCLes of Earthquakes algorithm and examine the response of a 2D finite fault embedded with a rate-weakening patch to harmonic stress perturbations of various periods. We show that such simulations are able to reproduce results consistent with a gradual amplification of sensitivity as the perturbing period gets larger, up to a critical period corresponding to the characteristic time of evolution of the seismicity in response to a step-like perturbation of stress. This increase of sensitivity was not reproduced by simple 1D spring-slider systems, probably because of the complexity of the nucleation process, reproduced only by 2D fault models. When the nucleation zone is close to its critical unstable size, its growth becomes highly sensitive to any external perturbation, and the timings of produced events may therefore be highly affected. A fully analytical framework has yet to be developed, and further work is needed to fully describe the behavior of the fault in terms of physical parameters, which will likely provide the keys to deduce constitutive properties of the MHT from seismological observations.
Abstract:
In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.
Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W_circ^SQL / W_circ). Here W_circ is the light power circulating in the interferometer arms and W_circ^SQL ≃ 800 kW is the circulating power required to beat the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(−2R)) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(e^(−2R) W_circ^SQL / W_circ). For realistic parameters (e^(−2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrowband; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
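A back-of-envelope evaluation of the quoted scaling h/h_SQL ~ √(e^(−2R) W_circ^SQL / W_circ), taking a power-squeeze factor of 0.1 (10 dB of squeezing) as an assumed realistic value:

```python
import math

# Numbers quoted in the abstract; the scaling is
#   h / h_SQL ~ sqrt( squeeze * W_SQL / W_circ ).
W_SQL = 800e3    # circulating power needed to reach the SQL at 100 Hz, in W
squeeze = 0.1    # assumed power-squeeze factor e^{-2R} (10 dB of squeezing)

for W_circ in (800e3, 2000e3):
    h_ratio = math.sqrt(squeeze * W_SQL / W_circ)
    beat_factor = 1.0 / h_ratio  # how far the noise falls below the SQL
    print(f"W_circ = {W_circ / 1e3:.0f} kW: beats SQL by ~{beat_factor:.1f}x")
```

The bare scaling gives factors of roughly 3 to 5 over this power range; the narrowbanding caveat noted in the bracketed remark is why a conservative ~3 to 4 is quoted for the full 10 to 100 Hz band.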
Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed meter concept with other ideas to yield a promising third- generation interferometric gravitational-wave detector that entails low laser power.
Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.
Abstract:
Theoretical and experimental investigations of charge-carrier dynamics at semiconductor/liquid interfaces, specifically with respect to interfacial electron transfer and surface recombination, are presented.
Fermi's golden rule has been used to formulate rate expressions for charge transfer of delocalized carriers in a nondegenerately doped semiconducting electrode to localized, outer-sphere redox acceptors in an electrolyte phase. The treatment allows comparison between charge-transfer kinetic data at metallic, semimetallic, and semiconducting electrodes in terms of parameters such as the electronic coupling to the electrode, the attenuation of coupling with distance into the electrolyte, and the reorganization energy of the charge-transfer event. Within this framework, rate constant values expected at representative semiconducting electrodes have been determined from experimental data for charge transfer at metallic electrodes. The maximum rate constant (i.e., at optimal exoergicity) for outer-sphere processes at semiconducting electrodes is computed to be in the range 10^−17 to 10^−16 cm^4 s^−1, which is in excellent agreement with prior theoretical models and experimental results for charge-transfer kinetics at semiconductor/liquid interfaces.
Double-layer corrections have been evaluated for semiconductor electrodes in both depletion and accumulation conditions. In conjunction with the Gouy-Chapman-Stern model, a finite difference approach has been used to calculate potential drops at a representative solid/liquid interface. Under all conditions that were simulated, the correction to the driving force used to evaluate the interfacial rate constant was determined to be less than 2% of the uncorrected interfacial rate constant.
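A minimal sketch of the finite-difference idea (not the thesis code): in the linearized Gouy-Chapman limit the potential obeys φ'' = φ/λ_D², and the tridiagonal system from central differences can be solved with the Thomas algorithm and checked against the analytic exponential decay. The Debye length and surface potential below are hypothetical values for illustration:

```python
import math

# Hypothetical parameters: Debye length and a 10 mV surface potential.
lam_D = 3e-9        # Debye screening length, m
phi0 = 0.010        # potential at the electrode surface, V
L = 10 * lam_D      # domain long enough that phi ~ 0 at the far edge
N = 2000
h = L / N

# Central differences on phi'' = phi / lam_D^2 give a tridiagonal system
#   -phi[i-1] + (2 + (h/lam_D)^2) phi[i] - phi[i+1] = 0
# with phi(0) = phi0 and phi(L) = 0; solve it by the Thomas algorithm.
a = -1.0                       # sub- and super-diagonal entries
b = 2.0 + (h / lam_D) ** 2     # main diagonal entry
rhs = [0.0] * (N - 1)
rhs[0] = phi0                  # boundary value folded into the first row

cp = [0.0] * (N - 1)           # forward-sweep coefficients
dp = [0.0] * (N - 1)
cp[0] = a / b
dp[0] = rhs[0] / b
for i in range(1, N - 1):
    m = b - a * cp[i - 1]
    cp[i] = a / m
    dp[i] = (rhs[i] - a * dp[i - 1]) / m

phi = [0.0] * (N - 1)          # back substitution
phi[-1] = dp[-1]
for i in range(N - 3, -1, -1):
    phi[i] = dp[i] - cp[i] * phi[i + 1]

# Check against the analytic Gouy-Chapman decay phi0 * exp(-x / lam_D).
x = 5 * lam_D
j = round(x / h) - 1
assert abs(phi[j] - phi0 * math.exp(-x / lam_D)) < 1e-5
```

The full nonlinear Poisson-Boltzmann problem treated with the Gouy-Chapman-Stern model replaces the linear right-hand side with a sinh term and adds the Stern layer, but the discretization strategy is the same.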
Photoconductivity decay lifetimes have been obtained for Si(111) in contact with solutions of CH3OH or tetrahydrofuran containing one-electron oxidants. Silicon surfaces in contact with electrolyte solutions having Nernstian redox potentials > 0 V vs. SCE exhibited low effective surface recombination velocities regardless of the different surface chemistries. The formation of an inversion layer, and not a reduced density of electrical trap sites on the surface, is shown to be responsible for the long charge-carrier lifetimes observed for these systems. In addition, a method for preparing an air-stable, low surface recombination velocity Si surface through a two-step, chlorination/alkylation reaction is described.
Abstract:
The 0.2% experimental accuracy of the 1968 Beers and Hughes measurement of the annihilation lifetime of ortho-positronium motivates the attempt to compute the first order quantum electrodynamic corrections to this lifetime. The theoretical problems arising in this computation are here studied in detail up to the point of preparing the necessary computer programs and using them to carry out some of the less demanding steps -- but the computation has not yet been completed. Analytic evaluation of the contributing Feynman diagrams is superior to numerical evaluation, and for this process can be carried out with the aid of the Reduce algebra manipulation computer program.
The relation of the positronium decay rate to the electron-positron annihilation-in-flight amplitude is derived in detail, and it is shown that at threshold annihilation-in-flight, Coulomb divergences appear while infrared divergences vanish. The threshold Coulomb divergences in the amplitude cancel against like divergences in the modulating continuum wave function.
Using the lowest order diagrams of electron-positron annihilation into three photons as a test case, various pitfalls of computer algebraic manipulation are discussed along with ways of avoiding them. The computer manipulation of artificial polynomial expressions is preferable to the direct treatment of rational expressions, even though redundant variables may have to be introduced.
Special properties of the contributing Feynman diagrams are discussed, including the need to restore gauge invariance to the sum of the virtual photon-photon scattering box diagrams by means of a finite subtraction.
A systematic approach to the Feynman-Brown method of decomposition of single-loop diagram integrals with spin-related tensor numerators is developed in detail. This approach allows the Feynman-Brown method to be straightforwardly programmed in the Reduce algebra manipulation language.
The fundamental integrals needed in the wake of the application of the Feynman-Brown decomposition are exhibited, and the methods which were used to evaluate them -- primarily dispersion techniques -- are briefly discussed.
Finally, it is pointed out that while the techniques discussed have permitted the computation of a fair number of the simpler integrals and diagrams contributing to the first order correction of the ortho-positronium annihilation rate, further progress with the more complicated diagrams and with the evaluation of traces is heavily contingent on obtaining access to adequate computer time and core capacity.
Abstract:
Current measures of global gene expression analyses, such as correlation and mutual information-based approaches, largely depend on the degree of association between mRNA levels and to a lesser extent on variability. I develop and implement a new approach, called the Ratiometric method, which is based on the coefficient of variation of the expression ratio of two genes, relying more on variation than previous methods. The advantage of such a modus operandi is the ability to detect possible gene pair interactions regardless of the degree of expression dispersion across the sample group. Gene pairs with low expression dispersion, i.e., whose absolute expressions remain constant across the sample group, are systematically missed by correlation and mutual information analyses. The superiority of the Ratiometric method in finding these gene pair interactions is demonstrated in a data set of RNA-seq B-cell samples from the 1000 Genomes Project Consortium. The Ratiometric method renders a more comprehensive recovery of KEGG pathways and GO-terms.
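A minimal sketch of the core Ratiometric statistic on illustrative synthetic data (not data from the thesis): two genes held at nearly constant absolute levels have a tight expression ratio, so the ratio's coefficient of variation is small even though Pearson correlation sees only noise:

```python
import math
import random

def cv_of_ratio(x, y):
    # Coefficient of variation of the per-sample expression ratio x_i / y_i:
    # a small value suggests a tight stoichiometric relationship, even when
    # each gene's absolute expression barely varies across the samples.
    r = [a / b for a, b in zip(x, y)]
    mean = sum(r) / len(r)
    var = sum((v - mean) ** 2 for v in r) / (len(r) - 1)
    return math.sqrt(var) / mean

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(0)
# Synthetic low-dispersion pair: each gene sits at a constant level with
# small independent measurement noise, so their ratio stays near 0.5.
gene_a = [100.0 * (1 + random.gauss(0, 0.01)) for _ in range(200)]
gene_b = [200.0 * (1 + random.gauss(0, 0.01)) for _ in range(200)]

print("ratio CV:", cv_of_ratio(gene_a, gene_b))   # small -> flagged pair
print("Pearson r:", pearson(gene_a, gene_b))      # near zero -> missed
```

With no expression dispersion to correlate, Pearson (and similarly mutual information) cannot score this pair, while the small ratio CV captures the constant stoichiometry between the two genes.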
Resumo:
Close to equilibrium, a normal Bose or Fermi fluid can be described by an exact kinetic equation whose kernel is nonlocal in space and time. The general expression derived for the kernel is evaluated to second order in the interparticle potential. The result is a wavevector- and frequency-dependent generalization of the linear Uehling-Uhlenbeck kernel with the Born approximation cross section.
The theory is formulated in terms of second-quantized phase space operators whose equilibrium averages are the n-particle Wigner distribution functions. Convenient expressions for the commutators and anticommutators of the phase space operators are obtained. The two-particle equilibrium distribution function is analyzed in terms of momentum-dependent quantum generalizations of the classical pair distribution function h(k) and direct correlation function c(k). The kinetic equation is presented as the equation of motion of a two-particle correlation function, the phase space density-density anticommutator, and is derived by a formal closure of the quantum BBGKY hierarchy. An alternative derivation using a projection operator is also given. It is shown that the method used for approximating the kernel by a second order expansion preserves all the sum rules to the same order, and that the second-order kernel satisfies the appropriate positivity and symmetry conditions.
Resumo:
This work is concerned with a general analysis of wave interactions in periodic structures and particularly periodic thin film dielectric waveguides.
The electromagnetic wave propagation in an asymmetric dielectric waveguide with a periodically perturbed surface is analyzed in terms of a Floquet mode solution. First order approximate analytical expressions for the space harmonics are obtained. The solution is used to analyze various applications: (1) phase matched second harmonic generation in periodically perturbed optical waveguides; (2) grating couplers and thin film filters; (3) Bragg reflection devices; (4) the calculation of the traveling wave interaction impedance for solid state and vacuum tube optical traveling wave amplifiers which utilize periodic dielectric waveguides. Some of these applications are of interest in the field of integrated optics.
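The Floquet-mode solution on which such analyses rest can be sketched as follows; the notation here is illustrative, not necessarily the author's. For a guide perturbed with period Λ along the propagation direction z, the field is a sum of space harmonics:

```latex
% Floquet expansion of a mode in a periodically perturbed waveguide:
E(x,z) \;=\; \sum_{n=-\infty}^{\infty} a_n(x)\, e^{-i\beta_n z},
\qquad
\beta_n \;=\; \beta_0 + \frac{2\pi n}{\Lambda}.
% A space harmonic can close a momentum mismatch that the unperturbed
% guide cannot, e.g. phase-matched second-harmonic generation when
% \beta_0(2\omega) = 2\,\beta_0(\omega) + 2\pi n/\Lambda .
```

Each space harmonic β_n can phase-match a different interaction (Bragg reflection when β_n ≈ -β_0, radiation coupling when |β_n| falls below the free-space wavenumber), which is what makes the single expansion serve all the applications listed above.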
A special emphasis is put on the analysis of traveling wave interaction between electrons and electromagnetic waves in various operation regimes. Interactions with a finite-temperature electron beam in the collision-dominated, collisionless, and quantum regimes are analyzed in detail, assuming a one-dimensional model and longitudinal coupling.
The analysis is used to examine the possibility of solid state traveling wave devices (amplifiers, modulators), and some monolithic structures of these devices are suggested, designed to operate in the submillimeter to far-infrared frequency regime. The estimates of attainable traveling wave interaction gain are quite low (on the order of a few inverse centimeters). However, the possibility of attaining net gain with different materials, structures, and operating conditions is not ruled out.
The developed model is used to discuss the possibility of, and the theoretical limitations on, high-frequency (optical) operation of vacuum electron beam tubes, and the relation to other electron-electromagnetic wave interaction effects (Smith-Purcell and Cerenkov radiation and the free electron laser) is pointed out. Finally, the case where the periodic structure is the natural crystal lattice is briefly discussed. The longitudinal component of the optical space harmonics in the crystal is calculated and found to be of the same order of magnitude as the macroscopic wave, and some comments are made on the possibility of coherent bremsstrahlung and distributed feedback lasers in single crystals.
Resumo:
In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the H → ZZ(*) → 4l channel, also referred to as the golden channel. We use analytic expressions for the full decay differential cross sections of the H → VV' → 4l process and of the dominant irreducible standard model qq̄ → 4l background, where 4l = 2e2μ, 4e, 4μ. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector responses as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings and includes systematics. All potential anomalous couplings of HVV', where V = Z, γ, are considered, allowing for general CP even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level HZZ coupling and higher-order CP-odd couplings to be compatible with zero and in the range [−0.40, 0.43], and the mixing between the tree-level HZZ coupling and higher-order CP-even couplings to be in the ranges [−0.66, −0.57] ∪ [−0.15, 1.00]; namely, compatible with a standard model Higgs. We discuss the expected precision in determining the various HVV' couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb⁻¹, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the 4l channel. We discuss the implications and further optimization of the methods for the next LHC runs.
Resumo:
The problem is to calculate the attenuation of plane sound waves passing through a viscous, heat-conducting fluid containing small spherical inhomogeneities. The attenuation is calculated by evaluating the rate of increase of entropy caused by two irreversible processes: (1) the mechanical work done by the viscous stresses in the presence of velocity gradients, and (2) the flow of heat down the thermal gradients. The method is first applied to a homogeneous fluid with no spheres and shown to give the classical Stokes-Kirchhoff expressions. The method is then used to calculate the additional viscous and thermal attenuation when small spheres are present. The viscous attenuation agrees with Epstein's result obtained in 1941 for a non-heat-conducting fluid. The thermal attenuation is found to be similar in form to the viscous attenuation and, for gases, of comparable magnitude. The general results are applied to the case of water drops in air and air bubbles in water.
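The classical Stokes-Kirchhoff expression recovered in the homogeneous-fluid limit combines a shear-viscosity term and a heat-conduction term. The sketch below evaluates the textbook form of that formula for air, using standard handbook property values (assumed here for illustration, not taken from the thesis); note that the two terms come out within a factor of a few of each other, consistent with the claim above that viscous and thermal losses are of comparable magnitude in gases.

```python
import math

def stokes_kirchhoff_alpha(f, rho, c, eta, kappa, gamma, cp):
    """Classical Stokes-Kirchhoff attenuation coefficient (Np/m) for a
    plane sound wave of frequency f in a viscous, heat-conducting fluid.

    rho   : density (kg/m^3)         eta   : shear viscosity (Pa s)
    c     : sound speed (m/s)        kappa : thermal conductivity (W/m/K)
    gamma : cp/cv ratio              cp    : specific heat (J/kg/K)
    """
    omega = 2.0 * math.pi * f
    viscous = (4.0 / 3.0) * eta               # viscous-stress dissipation
    thermal = (gamma - 1.0) * kappa / cp      # heat-conduction dissipation
    return omega**2 / (2.0 * rho * c**3) * (viscous + thermal)

# Handbook properties of air near 20 degC (illustrative values)
air = dict(rho=1.204, c=343.0, eta=1.81e-5, kappa=0.0257,
           gamma=1.40, cp=1005.0)

alpha = stokes_kirchhoff_alpha(1.0e6, **air)  # roughly 14 Np/m at 1 MHz
print(f"classical attenuation in air at 1 MHz: {alpha:.1f} Np/m")
```

The quadratic frequency dependence (alpha proportional to f squared) is the signature of the classical mechanism against which the additional sphere-induced attenuation is measured.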
For water drops in air the viscous and thermal attenuations are comparable; the thermal losses occur almost entirely in the air, the thermal dissipation in the water being negligible. The theoretical values are compared with Knudsen's experimental data for fogs and found to agree in order of magnitude and dependence on frequency. For air bubbles in water the viscous losses are negligible and the calculated attenuation is almost completely due to thermal losses occurring in the air inside the bubbles, the thermal dissipation in the water being relatively small. (These results apply only to non-resonant bubbles whose radius changes but slightly during the acoustic cycle.)