943 results for Statistical physics


Relevance:

20.00%

Publisher:

Abstract:

In the quest to develop viable designs for third-generation optical interferometric gravitational-wave detectors, one strategy is to monitor the relative momentum or speed of the test-mass mirrors, rather than monitoring their relative position. The most straightforward design for a speed-meter interferometer that accomplishes this is described and analyzed in Chapter 2. This design (due to Braginsky, Gorodetsky, Khalili, and Thorne) is analogous to a microwave-cavity speed meter conceived by Braginsky and Khalili. A mathematical mapping between the microwave speed meter and the optical interferometric speed meter is developed and used to show (in accord with the speed being a quantum nondemolition observable) that in principle the interferometric speed meter can beat the gravitational-wave standard quantum limit (SQL) by an arbitrarily large amount, over an arbitrarily wide range of frequencies. However, in practice, to reach or beat the SQL, this specific speed meter requires exorbitantly high input light power. The physical reason for this is explored, along with other issues such as constraints on performance due to optical dissipation.

Chapter 3 proposes a more sophisticated version of a speed meter. This new design requires only a modest input power and appears to be a fully practical candidate for third-generation LIGO. It can beat the SQL (the approximate sensitivity of second-generation LIGO interferometers) over a broad range of frequencies (~10 to 100 Hz in practice) by a factor h/h_SQL ~ √(W^SQL_circ / W_circ). Here W_circ is the light power circulating in the interferometer arms and W^SQL_circ ≃ 800 kW is the circulating power required to reach the SQL at 100 Hz (the LIGO-II power). If squeezed vacuum (with a power-squeeze factor e^(−2R)) is injected into the interferometer's output port, the SQL can be beat with a much reduced laser power: h/h_SQL ~ √(W^SQL_circ e^(−2R) / W_circ). For realistic parameters (e^(−2R) ≃ 0.1 and W_circ ≃ 800 to 2000 kW), the SQL can be beat by a factor of ~3 to 4 from 10 to 100 Hz. [However, as the power increases in these expressions, the speed meter becomes more narrow-band; additional power and re-optimization of some parameters are required to maintain the wide band.] By performing frequency-dependent homodyne detection on the output (with the aid of two kilometer-scale filter cavities), one can markedly improve the interferometer's sensitivity at frequencies above 100 Hz.
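As a rough numerical check of this scaling (a sketch only: the scaling is applied naively, ignoring the bandwidth narrowing and parameter re-optimization noted in brackets above, and the variable names are illustrative):

```python
import math

W_SQL = 800e3      # circulating power (W) needed to reach the SQL at 100 Hz
squeeze = 0.1      # assumed power-squeeze factor e^(-2R), i.e. ~10 dB of squeezing

def sql_beating_factor(W_circ, squeeze=1.0):
    """Factor by which the SQL is beaten, h_SQL / h ~ sqrt(W_circ / (W_SQL * e^(-2R)))."""
    return math.sqrt(W_circ / (W_SQL * squeeze))

for W_circ in (800e3, 2000e3):
    print(f"W_circ = {W_circ/1e3:.0f} kW -> SQL beaten by ~{sql_beating_factor(W_circ, squeeze):.1f}x")
```

With these inputs the naive factor runs from about 3 at 800 kW to about 5 at 2000 kW; the quoted ~3 to 4 presumably reflects the re-optimization needed to keep the band wide.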

Chapters 2 and 3 are part of an ongoing effort to develop a practical variant of an interferometric speed meter and to combine the speed-meter concept with other ideas to yield a promising third-generation interferometric gravitational-wave detector that entails low laser power.

Chapter 4 is a contribution to the foundations for analyzing sources of gravitational waves for LIGO. Specifically, it presents an analysis of the tidal work done on a self-gravitating body (e.g., a neutron star or black hole) in an external tidal field (e.g., that of a binary companion). The change in the mass-energy of the body as a result of the tidal work, or "tidal heating," is analyzed using the Landau-Lifshitz pseudotensor and the local asymptotic rest frame of the body. It is shown that the work done on the body is gauge invariant, while the body-tidal-field interaction energy contained within the body's local asymptotic rest frame is gauge dependent. This is analogous to Newtonian theory, where the interaction energy is shown to depend on how one localizes gravitational energy, but the work done on the body is independent of that localization. These conclusions play a role in analyses, by others, of the dynamics and stability of the inspiraling neutron-star binaries whose gravitational waves are likely to be seen and studied by LIGO.

Relevance:

20.00%

Publisher:

Abstract:

Time-of-flight measurements of energetic He atoms, field ionization of cryogenic liquid helium clusters, and time-of-flight and REMPI spectroscopy of radical salt clusters were investigated experimentally. The excited He atoms were generated in a corona discharge. Two strong neutral peaks were observed, accompanied by a prompt photon peak and a charged peak. All peaks were correlated with the pulsing of the discharge. The neutral hyperthermal and metastable atoms were formed by different mechanisms at different stages of the corona discharge. Positively charged helium droplets were produced by ionization of liquid helium in an electrostatic spraying experiment. The fluid emerging from a thin glass capillary was ionized by a high voltage applied to a needle inside the capillary. Fine droplets (less than 10 µm in diameter) were produced in showers with currents as high as 0.4 µA at 2-4 kV. The high currents resulting from field ionization in helium and the low surface tension of He I led to charge densities that greatly exceeded the Rayleigh limit, resulting in Coulombic explosion of the liquid. In contrast, liquid nitrogen formed a well-defined Taylor cone with droplets having diameters comparable to the jet (≈100 µm) at lower currents (10 nA) and higher voltages (8 kV). The metal-halide clusters of calcium and chlorine were generated by laser ablation of calcium metal in an Ar/CCl4 expansion. A visible spectrum of the Ca2Cl3 cluster was observed from 651 to 630 nm by 1+1′ REMPI. The spectrum was composed of a strong origin band at 15 350.8 cm⁻¹ and several weak vibronic bands. Density functional calculations predicted three minimum-energy isomers. The spectrum was assigned to the ²B₂ ← X ²A₁ transition of a planar C2v structure having a ring of two Cl and two Ca atoms and a terminal Cl atom. The ring isomer of Ca2Cl3 has the unpaired electron localized on one Ca²⁺ ion to form a Ca⁺ chromophore. A second electronic band of Ca2Cl3 was observed at 720 nm. The band is sharply different from the 650 nm band and is likely due to a different isomer.
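The Rayleigh-limit comparison can be made concrete with a back-of-the-envelope estimate. The sketch below uses the standard Rayleigh charge limit for a droplet; the surface-tension values are rough, assumed order-of-magnitude figures, not measurements from this work:

```python
import math

EPS0 = 8.854e-12          # vacuum permittivity, F/m

def rayleigh_charge(surface_tension, radius):
    """Rayleigh charge limit q_R = 8*pi*sqrt(eps0 * gamma * a^3) for a droplet of radius a."""
    return 8 * math.pi * math.sqrt(EPS0 * surface_tension * radius**3)

radius = 5e-6             # m, i.e. a ~10 µm diameter droplet
gamma_He = 1e-4           # N/m, rough assumed value for He I
gamma_N2 = 8.9e-3         # N/m, liquid nitrogen near its boiling point (approximate)

for name, gamma in [("He I", gamma_He), ("liquid N2", gamma_N2)]:
    q = rayleigh_charge(gamma, radius)
    print(f"{name}: Rayleigh limit ≈ {q:.2e} C ({q/1.602e-19:.1e} elementary charges)")
```

The roughly hundredfold lower surface tension of He I translates into a Rayleigh limit about an order of magnitude lower in charge, consistent with the droplets charging past the limit and exploding rather than forming a stable Taylor cone.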

Relevance:

20.00%

Publisher:

Abstract:

The epidemic of HIV/AIDS in the United States is constantly changing and evolving, from patient zero to an estimated 650,000 to 900,000 Americans infected today. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the early years when there was no treatment through the present era of highly active antiretroviral therapy (HAART). By utilizing statistical analysis of clinical data, this paper examines where we were, where we are, and where the treatment of HIV/AIDS is projected to be headed.

Chapter Two describes the datasets that were used for the analyses. The primary database was collected by me from an outpatient HIV clinic and spans 1984 to the present. The second database is the public dataset from the Multicenter AIDS Cohort Study (MACS), covering 1984 through October 1992. Comparisons are made between the two datasets.

Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival. The trials also showed that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian. Thus distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period is significantly below the CD4 T cell distribution for the uninfected population, suggesting that even patients with no outward symptoms of HIV infection have high levels of immunosuppression.
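A minimal sketch of the kind of distributional check motivating that conclusion, using synthetic CD4-like counts rather than the clinic or MACS data (the lognormal shape and all numbers below are assumptions for illustration only):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic, illustrative CD4 counts (cells/µL); real clinic data would be loaded instead.
cd4 = rng.lognormal(mean=np.log(350), sigma=0.6, size=500)

# D'Agostino-Pearson test: a small p-value argues against a Gaussian model,
# which is the kind of evidence behind the non-Gaussian conclusion.
stat, p = stats.normaltest(cd4)
print(f"raw counts: K^2 = {stat:.1f}, p = {p:.2e}")

# A log transform is one common remedy; the transformed counts look far more Gaussian.
stat_log, p_log = stats.normaltest(np.log(cd4))
print(f"log counts: K^2 = {stat_log:.1f}, p = {p_log:.2f}")
```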

Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono- or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era, characterized by a new class of drugs and new technology, changed the way we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood. By quantifying the viral load, one now had a faster, more direct way to test the efficacy of an antiretroviral regimen. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover. If these viral levels could be kept low enough, it would be possible for the immune system to eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to the levels seen in uninfected individuals, is tested in Chapter Four. It was found that for these patients there was not enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.

In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART. The primary endpoint was presence of an AIDS defining illness. A high level of clinical failure, or progression to an endpoint, was found.

Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which examines the genetic structure of HIV and determines where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. It first addresses viral genotype and its correlation with viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, failed to control viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.

The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high. For the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs of HAART are estimated. The direct lifetime cost of treating each HIV-infected patient with HAART is estimated to be between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved, however, is only about $101,000, which is comparable to the incremental cost per year of life saved from coronary artery bypass surgery.
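A back-of-the-envelope reading of those figures (the life-extension values here are hypothetical, back-solved purely for illustration; they are not stated in the text):

```python
# Hypothetical: if HAART adds roughly 3.5 to 5.9 years of life, the stated lifetime
# costs imply an incremental cost per life-year near the quoted ~$101,000.
for lifetime_cost, years_gained in [(353_000, 3.5), (598_000, 5.9)]:
    print(f"${lifetime_cost:,} / {years_gained} yr = ${lifetime_cost / years_gained:,.0f} per life-year")
```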

Policymakers need to be aware that although HAART can delay disease progression, it is not a cure, and the HIV epidemic is not over. The results presented here suggest that the decreases in morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality. Costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As the patients who have been on HAART the longest begin to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.

Relevance:

20.00%

Publisher:

Abstract:

The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.

The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated, but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved, but lepton flavor violation is present. Special attention is given to the processes of μ to e conversion and μ → eγ which are bound by existing experimental limits and relevant to future experiments.

The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.

Relevance:

20.00%

Publisher:

Abstract:

This work proposes a new simulation methodology in which variable-density turbulent flows can be studied in the context of a mixing layer, with or without the presence of gravity. Specifically, this methodology is developed to probe the nature of non-buoyantly-driven (i.e., isotropically-driven) or buoyantly-driven mixing deep inside a mixing layer. Numerical forcing methods are incorporated into both the velocity and scalar fields, which extends the length of time over which mixing physics can be studied. The simulation framework is designed to allow for independent variation of four non-dimensional parameters: the Reynolds, Richardson, Atwood, and Schmidt numbers. Additionally, the governing equations are integrated in such a way as to allow the relative magnitudes of buoyant and non-buoyant energy production to be varied.
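For reference, a minimal sketch of the four governing non-dimensional parameters (the Richardson-number convention and the example values below are assumptions chosen for illustration, not necessarily the definitions used in the thesis):

```python
def atwood(rho_heavy, rho_light):
    """Atwood number A = (rho1 - rho2) / (rho1 + rho2), the density contrast."""
    return (rho_heavy - rho_light) / (rho_heavy + rho_light)

def reynolds(u_rms, length, nu):
    """Turbulent Reynolds number Re = u' L / nu."""
    return u_rms * length / nu

def schmidt(nu, diffusivity):
    """Schmidt number Sc = nu / D, viscous vs. scalar diffusion."""
    return nu / diffusivity

def richardson(g, A, length, u_rms):
    """A bulk Richardson number Ri = g A L / u'^2 (buoyant vs. inertial effects)."""
    return g * A * length / u_rms**2

A = atwood(1.05, 0.95)                      # illustrative densities
print(f"A  = {A:.3f}")
print(f"Re = {reynolds(0.5, 0.1, 1e-5):.0f}")
print(f"Sc = {schmidt(1e-5, 1e-5):.1f}")
print(f"Ri = {richardson(9.81, A, 0.1, 0.5):.2f}")
```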

The computational requirements needed to implement the proposed configuration are presented. They are justified in terms of grid resolution, order of accuracy, and transport scheme. Canonical features of turbulent buoyant flows are reproduced as validation of the proposed methodology. These features include the recovery of isotropic Kolmogorov scales under buoyant and non-buoyant conditions, the recovery of anisotropic one-dimensional energy spectra under buoyant conditions, and the preservation of known statistical distributions in the scalar field, as found in other DNS studies.

This simulation methodology is used to perform a parametric study of turbulent buoyant flows to discern the effects of varying the Reynolds, Richardson, and Atwood numbers on the resulting state of mixing. The effects of the Reynolds and Atwood numbers are isolated by examining two energy dissipation rate conditions under non-buoyant (variable density) and constant density conditions. The effects of the Richardson number are isolated by varying the ratio of buoyant energy production to total energy production from zero (non-buoyant) to one (entirely buoyant) under constant Atwood number, Schmidt number, and energy dissipation rate conditions. It is found that the major differences between non-buoyant and buoyant turbulent flows are contained in the transfer spectrum and the longitudinal structure functions, while all other metrics are largely similar (e.g., energy spectra, alignment characteristics of the strain-rate tensor). Moreover, despite the differences noted between fully buoyant and non-buoyant turbulent velocity fields, the scalar field is, in all cases, unchanged by them: the mixing dynamics in the scalar field are found to be insensitive to the source of turbulent kinetic energy production (non-buoyant vs. buoyant).

Relevance:

20.00%

Publisher:

Abstract:

For more than 55 years, data have been collected on the population of pike Esox lucius in Windermere, first by the Freshwater Biological Association (FBA) and, since 1989, by the Institute of Freshwater Ecology (IFE) of the NERC Centre for Ecology and Hydrology. The aim of this article is to explore some methodological and statistical issues associated with the precision of pike gill net catches and catch-per-unit-effort (CPUE) data, further to those examined by Bagenal (1972) and especially in the light of the current deployment within the Windermere long-term sampling programme. Specifically, consideration is given to the precision of catch estimates from gill netting, including the effects of sampling different locations, the effectiveness of sampling for distinguishing between years, and the effects of changing fishing effort.
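As a minimal illustration of the kind of precision question at issue, a bootstrap interval on a gill-net CPUE estimate is sketched below (the catch numbers are invented for the sketch, not Windermere data):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical gill-net catches (pike per net-night) from repeated sets at one location.
catch = np.array([3, 0, 5, 2, 1, 4, 0, 2, 6, 1])
effort_net_nights = 10.0

cpue = catch.sum() / effort_net_nights

# Bootstrap the nets to put a rough precision (95% interval) on the CPUE estimate,
# one simple way of asking whether the effort is enough to distinguish between years.
boot = [rng.choice(catch, size=catch.size, replace=True).sum() / effort_net_nights
        for _ in range(10_000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"CPUE = {cpue:.2f} fish per net-night (95% bootstrap interval {lo:.2f}-{hi:.2f})")
```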

Relevance:

20.00%

Publisher:

Abstract:

Nearly all young stars are variable, with the variability traditionally divided into two classes: periodic variables and aperiodic or "irregular" variables. Periodic variables have been studied extensively, typically using periodograms, while aperiodic variables have received much less attention due to a lack of standard statistical tools. However, aperiodic variability can serve as a powerful probe of young-star accretion physics and inner circumstellar disk structure. For my dissertation, I analyzed data from a large-scale, long-term survey of the nearby North America Nebula complex, using Palomar Transient Factory photometric time series collected on a nightly or every-few-nights cadence over several years. This survey is the most thorough exploration of variability in a sample of thousands of young stars over time baselines of days to years, revealing a rich array of lightcurve shapes, amplitudes, and timescales.

I have constrained the timescale distribution of all young variables, periodic and aperiodic, on timescales from less than a day to ~100 days. I have shown that the distribution of timescales for aperiodic variables peaks at a few days, with relatively few (~15%) sources dominated by variability on timescales of tens of days or longer. My constraints on aperiodic timescale distributions are based on two new tools for describing aperiodic lightcurves, magnitude- vs. time-difference (Δm-Δt) plots and peak-finding plots; this thesis provides simulations of their performance and presents recommendations on how to apply them to aperiodic signals in other time-series data sets. In addition, I have measured the error introduced into colors or SEDs by combining photometry of variable sources taken at different epochs. These are the first quantitative results to be presented on the distributions in amplitude and timescale for young aperiodic variables, particularly those varying on timescales of weeks to months.
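A minimal sketch of the Δm-Δt construction on a synthetic lightcurve (this only illustrates the idea, not the thesis pipeline; the cadence, signal, and noise level are assumed):

```python
import numpy as np

def delta_m_delta_t(times, mags):
    """All pairwise (Δt, |Δm|) values from a lightcurve, the raw material of a Δm-Δt plot."""
    times = np.asarray(times, dtype=float)
    mags = np.asarray(mags, dtype=float)
    i, j = np.triu_indices(times.size, k=1)      # every unordered pair of epochs
    return times[j] - times[i], np.abs(mags[j] - mags[i])

# Illustrative lightcurve: nightly cadence with a slow drift plus photometric noise.
t = np.arange(0, 60, 1.0)                        # days
m = 14.0 + 0.2 * np.sin(2 * np.pi * t / 25.0) + np.random.default_rng(2).normal(0, 0.02, t.size)

dt, dm = delta_m_delta_t(t, m)
# Median |Δm| in a few Δt bins shows the amplitude growing toward the drift timescale.
for lo, hi in [(0, 3), (3, 10), (10, 30)]:
    sel = (dt >= lo) & (dt < hi)
    print(f"Δt {lo:>2}-{hi:<2} d: median |Δm| = {np.median(dm[sel]):.3f} mag")
```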

Relevance:

20.00%

Publisher:

Abstract:

Only a sparse set of credible source models is available from large-magnitude past earthquakes. A stochastic source-model generation algorithm thus becomes necessary for robust risk quantification using scenario earthquakes. We present an algorithm that combines the physics of fault ruptures, as imaged in laboratory earthquakes, with stress estimates on the fault constrained by field observations to generate stochastic source models for large-magnitude (Mw 6.0-8.0) strike-slip earthquakes. The algorithm is validated through a statistical comparison of synthetic ground motion histories from a stochastically generated source model for a magnitude 7.9 earthquake and a kinematic finite-source inversion of an equivalent-magnitude past earthquake on a geometrically similar fault. The synthetic dataset comprises three-component ground motion waveforms, computed at 636 sites in southern California, for ten hypothetical rupture scenarios (five hypocenters, each with two rupture directions) on the southern San Andreas fault. A similar validation exercise is conducted for a magnitude 6.0 earthquake, the lower magnitude limit for the algorithm. Additionally, ground motions from the Mw 7.9 earthquake simulations are compared against predictions by the Campbell-Bozorgnia NGA relation as well as the ShakeOut scenario earthquake. The algorithm is then applied to generate fifty source models for a hypothetical magnitude 7.9 earthquake originating at Parkfield, with rupture propagating from north to south (towards Wrightwood), similar to the 1857 Fort Tejon earthquake. Using the spectral element method, three-component ground motion waveforms are computed in the Los Angeles basin for each scenario earthquake, and the sensitivity of ground shaking intensity to seismic source parameters (such as the percentage of asperity area relative to the fault area, rupture speed, and risetime) is studied.

Under plausible San Andreas fault earthquakes in the next 30 years, modeled using the stochastic source algorithm, the performance of two 18-story steel moment-frame buildings (UBC 1982 and 1997 designs) in southern California is quantified. The approach integrates rupture-to-rafters simulations into the PEER performance-based earthquake engineering (PBEE) framework. Using stochastic sources and computational seismic wave propagation, three-component ground motion histories at 636 sites in southern California are generated for sixty scenario earthquakes on the San Andreas fault. The ruptures, with moment magnitudes in the range of 6.0-8.0, are assumed to occur at five locations on the southern section of the fault, and two unilateral rupture propagation directions are considered. The 30-year probabilities of all plausible ruptures in this magnitude range and in that section of the fault, as forecast by the United States Geological Survey, are distributed among these 60 earthquakes based on proximity and moment release. The response of the two 18-story buildings, hypothetically located at each of the 636 sites, under 3-component shaking from all 60 events is computed using 3-D nonlinear time-history analysis. Using these results, the probability of the structural response exceeding the Immediate Occupancy (IO), Life Safety (LS), and Collapse Prevention (CP) performance levels under San Andreas fault earthquakes over the next thirty years is evaluated.
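A minimal sketch of that final aggregation step, under the simplifying assumption that the 60 scenarios are treated as mutually exclusive events (all inputs below, scenario probabilities, drift demands, and drift limits, are synthetic placeholders rather than values from the study):

```python
import numpy as np

rng = np.random.default_rng(3)
n_scenarios = 60

# 30-year occurrence probabilities assigned to the 60 scenario ruptures; here they are
# made up and rescaled only for illustration (the study distributes USGS forecast
# probabilities by proximity and moment release).
p_scenario = rng.random(n_scenarios)
p_scenario *= 0.4 / p_scenario.sum()

# Peak interstory drift ratio of one building at one site under each scenario,
# again a synthetic stand-in for the nonlinear time-history results.
drift = rng.lognormal(mean=np.log(0.01), sigma=0.6, size=n_scenarios)

# Indicator-style aggregation: P(exceed level in 30 yr) ~ sum_k P(scenario k) * 1[drift_k > limit].
limits = {"IO": 0.007, "LS": 0.025, "CP": 0.05}   # assumed drift limits, not the study's values
for level, limit in limits.items():
    p_exceed = float(np.sum(p_scenario[drift > limit]))
    print(f"P(exceed {level} in 30 yr) ≈ {p_exceed:.3f}")
```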

Furthermore, the conditional and marginal probability distributions of peak ground velocity (PGV) and peak ground displacement (PGD) in Los Angeles and the surrounding basins, due to earthquakes occurring primarily on the mid-section of the southern San Andreas fault, are determined using Bayesian model class identification. Simulated ground motions at sites within 55-75 km of the source, from a suite of 60 earthquakes (Mw 6.0-8.0) rupturing primarily the mid-section of the San Andreas fault, are used as the PGV and PGD data.

Relevance:

20.00%

Publisher:

Abstract:

I. Crossing transformations constitute a group of permutations under which the scattering amplitude is invariant. Using Mandelstam analyticity, we decompose the amplitude into irreducible representations of this group. The usual quantum numbers, such as isospin or SU(3), are "crossing-invariant". Thus no higher symmetry is generated by crossing itself. However, the elimination of certain quantum numbers in intermediate states is not crossing-invariant, and higher symmetries have to be introduced to make it possible. The current literature on exchange degeneracy is a manifestation of this statement. To exemplify the application of our analysis, we show how, starting with SU(3) invariance, one can use crossing and the absence of exotic channels to derive the quark-model picture of the tensor nonet. No detailed dynamical input is used.

II. A dispersion relation calculation of the real parts of the forward π±p and K±p scattering amplitudes is carried out under the assumption of constant total cross sections in the Serpukhov energy range. Comparisons with existing experimental results, as well as predictions for future high-energy experiments, are presented and discussed. Electromagnetic effects are found to be too small to account for the expected difference between the π−p and π+p total cross sections at higher energies.
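For orientation, a schematic, textbook-level form of the ingredients behind such a calculation, written for the crossing-even combination f₊ = ½(f_{π⁻p} + f_{π⁺p}); subtraction choices and nucleon pole terms are omitted, so this is not the precise relation used in the paper:

```latex
% Optical theorem: the imaginary part of the forward amplitude is fixed by the total cross sections
\operatorname{Im} f_{+}(\omega) \;=\; \frac{k}{4\pi}\,
  \frac{\sigma_{\mathrm{tot}}^{\pi^{-}p}(\omega) + \sigma_{\mathrm{tot}}^{\pi^{+}p}(\omega)}{2},
\qquad
% Once-subtracted forward dispersion relation for the crossing-even amplitude
\operatorname{Re} f_{+}(\omega) \;=\; \operatorname{Re} f_{+}(\mu)
 \;+\; \frac{2\,(\omega^{2}-\mu^{2})}{\pi}\,
   \mathrm{P}\!\int_{\mu}^{\infty}
   \frac{\omega'\,\operatorname{Im} f_{+}(\omega')}
        {(\omega'^{2}-\mu^{2})\,(\omega'^{2}-\omega^{2})}\;d\omega' .
```

Holding the total cross sections constant above the Serpukhov range fixes the high-energy tail of the principal-value integral, which is what makes the predicted real parts sensitive to that assumption.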