11 results for direct start
in CaltechTHESIS
Abstract:
We present a measurement of direct CP violation in $b \to s\gamma$ decays, $A_{CP}$, and of the difference between $A_{CP}$ for neutral and charged $B$ mesons, $\Delta A_{X_s\gamma}$, using 429 fb$^{-1}$ of data recorded at the $\Upsilon(4S)$ resonance with the BABAR detector. $B$ mesons are reconstructed from 16 exclusive final states. Particle identification is performed with an algorithm based on Error Correcting Output Codes with an exhaustive matrix. Background rejection and best-candidate selection are performed with two decision-tree-based classifiers. We find $A_{CP} = 1.73\% \pm 1.93\% \pm 1.02\%$ and $\Delta A_{X_s\gamma} = 4.97\% \pm 3.90\% \pm 1.45\%$, where the uncertainties are statistical and systematic, respectively. Based on the measured value of $\Delta A_{X_s\gamma}$, we determine a 90% confidence interval for $\mathrm{Im}(C_{8g}/C_{7\gamma})$, where $C_{7\gamma}$ and $C_{8g}$ are Wilson coefficients for New Physics amplitudes: $-1.64 < \mathrm{Im}(C_{8g}/C_{7\gamma}) < 6.52$.
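The particle-identification step above rests on Error Correcting Output Codes (ECOC) with an exhaustive code matrix. As a rough, self-contained illustration of that general technique, not the BABAR implementation (the class count, the use of sklearn decision trees, and all names here are assumptions), a minimal Python sketch:

```python
# Minimal exhaustive-matrix ECOC sketch (illustrative assumptions only).
from itertools import combinations
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def exhaustive_code_matrix(n_classes):
    """All 2**(K-1) - 1 distinct binary dichotomies of K classes."""
    cols = []
    for r in range(n_classes):
        # Fix class 0 on the positive side to avoid complementary duplicates.
        for subset in combinations(range(1, n_classes), r):
            positive = {0, *subset}
            if len(positive) < n_classes:  # drop the trivial all-ones column
                cols.append([1 if c in positive else 0
                             for c in range(n_classes)])
    return np.array(cols).T  # shape (K, 2**(K-1) - 1)

class ECOCClassifier:
    def __init__(self, n_classes):
        self.M = exhaustive_code_matrix(n_classes)
        self.learners = [DecisionTreeClassifier(max_depth=5)
                         for _ in range(self.M.shape[1])]

    def fit(self, X, y):
        # y must be integer labels 0..K-1; each column gives a relabeling.
        for j, clf in enumerate(self.learners):
            clf.fit(X, self.M[y, j])
        return self

    def predict(self, X):
        # Decode each event's bit string by Hamming distance to the codewords.
        bits = np.column_stack([clf.predict(X) for clf in self.learners])
        dists = np.abs(bits[:, None, :] - self.M[None, :, :]).sum(axis=2)
        return dists.argmin(axis=1)

# Example: 5 particle hypotheses -> 2**4 - 1 = 15 binary dichotomizers.
X = np.random.default_rng(0).standard_normal((1000, 6))
y = np.random.default_rng(1).integers(0, 5, size=1000)
preds = ECOCClassifier(n_classes=5).fit(X, y).predict(X[:10])
```

With K classes the exhaustive matrix has 2^(K-1) - 1 columns, one binary learner per column; decoding by Hamming distance lets the ensemble absorb several individual dichotomizer errors.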
Abstract:
The main theme running through these three chapters is that economic agents are often forced to respond to events that are not a direct result of their own actions or of other agents' actions. The optimal response to these shocks necessarily depends on the agents' understanding of how the shocks arise. The economic environment in the first two chapters is analogous to the classic chain-store game. In this setting, the addition of unintended trembles by the agents creates an environment better suited to reputation building. The third chapter considers competitive equilibrium price dynamics in an overlapping generations environment when there are supply and demand shocks.
The first chapter is a game-theoretic investigation of a reputation-building game. A sequential equilibrium model, called the "error prone agents" model, is developed. In this model, agents believe that all actions are potentially subject to an error process. Including this belief in the equilibrium calculation provides a richer class of reputation-building possibilities than when perfect implementation is assumed.
In the second chapter, maximum likelihood estimation is employed to test the consistency of this new model, and of other models, with data from experiments run by other researchers that served as the basis for prominent papers in this field. The alternative models considered are essentially modifications of the standard sequential equilibrium. While some models perform quite well, in that the nature of the modification seems to explain deviations from the sequential equilibrium, the degree to which these modifications must be applied shows no consistency across different experimental designs.
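As a toy illustration of this estimation strategy (a single "tremble" parameter fit to assumed data; not the chapter's actual models or experimental data), one can fit an error rate by maximum likelihood and compare competing models on their maximized log-likelihoods:

```python
# Hedged sketch: MLE of a tremble rate eps, under which each predicted
# equilibrium action is implemented with probability 1 - eps.
# The predicted/observed arrays below are hypothetical placeholders.
import numpy as np
from scipy.optimize import minimize_scalar

predicted = np.array([1, 1, 0, 1, 0, 1, 1, 0])  # model's equilibrium actions
observed  = np.array([1, 0, 0, 1, 0, 1, 1, 1])  # actions actually taken

def neg_log_likelihood(eps):
    p = np.where(observed == predicted, 1 - eps, eps)
    return -np.log(p).sum()

res = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 0.5 - 1e-6),
                      method="bounded")
print(f"MLE tremble rate: {res.x:.3f}, logL = {-res.fun:.3f}")
# Competing models can be compared on the same data via their maximized
# log-likelihoods (or AIC when parameter counts differ).
```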
The third chapter is a study of price dynamics in an overlapping generations model. It establishes the existence of a unique perfect-foresight competitive equilibrium price path in a pure exchange economy with a finite time horizon when there are arbitrarily many shocks to supply or demand. One main reason for interest in this equilibrium is that overlapping generations environments are very fruitful for the study of price dynamics, especially in experimental settings. The perfect-foresight assumption is an important place to start when examining these environments because it produces the ex post socially efficient allocation of goods, which makes it a natural baseline against which other models of price dynamics can be compared.
Abstract:
The epidemic of HIV/AIDS in the United States is constantly changing and evolving, growing from patient zero to an estimated 650,000 to 900,000 Americans now infected. The nature and course of HIV changed dramatically with the introduction of antiretrovirals. This discourse examines many different facets of HIV, from the beginning, when there was no treatment for HIV, to the present era of highly active antiretroviral therapy (HAART). Using statistical analysis of clinical data, this paper examines where we were, where we are, and projections of where the treatment of HIV/AIDS is headed.
Chapter Two describes the datasets used for the analyses. The primary database was collected by the author from an outpatient HIV clinic and includes data from 1984 to the present. The second database is the public dataset from the Multicenter AIDS Cohort Study (MACS), covering the period from 1984 through October 1992. Comparisons are made between the two datasets.
Chapter Three discusses where we were. Before the first anti-HIV drugs (called antiretrovirals) were approved, there was no treatment to slow the progression of HIV. The first generation of antiretrovirals, reverse transcriptase inhibitors such as AZT (zidovudine), DDI (didanosine), DDC (zalcitabine), and D4T (stavudine), provided the first treatment for HIV. The first clinical trials showed that these antiretrovirals had a significant impact on increasing patient survival, and that patients on these drugs had increased CD4+ T cell counts. Chapter Three examines the distributions of CD4 T cell counts. The results show that the estimated distributions of CD4 T cell counts are distinctly non-Gaussian; thus, distributional assumptions regarding CD4 T cell counts must be taken into account when performing analyses with this marker. The results also show that the estimated CD4 T cell distributions for each disease stage (asymptomatic, symptomatic, and AIDS) are non-Gaussian. Interestingly, the distribution of CD4 T cell counts for the asymptomatic period lies significantly below the CD4 T cell distribution for the uninfected population, suggesting that even in patients with no outward symptoms of HIV infection, there exist high levels of immunosuppression.
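A minimal sketch of the kind of distributional check described above, using synthetic right-skewed counts as a stand-in for the clinic and MACS data (the gamma shape and every number below are assumptions for illustration):

```python
# Test whether (synthetic) CD4 counts are consistent with a Gaussian.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic stand-in: CD4 counts are often right-skewed, so draw from a gamma.
cd4 = rng.gamma(shape=3.0, scale=150.0, size=500)

stat, p = stats.normaltest(cd4)            # D'Agostino-Pearson normality test
print(f"normality test p-value: {p:.2e}")  # small p => reject Gaussianity

# A square-root or log transform is a common remedy before analyses
# that assume normality.
print(stats.normaltest(np.sqrt(cd4)))
```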
Chapter Four discusses where we are at present. HIV quickly grew resistant to reverse transcriptase inhibitors, which were given sequentially as mono- or dual therapy. As resistance grew, the positive effects of the reverse transcriptase inhibitors on CD4 T cell counts and survival dissipated. As the old era faded, a new era characterized by a new class of drugs and new technology changed the way we treat HIV-infected patients. Viral load assays were able to quantify the levels of HIV RNA in the blood, giving a faster, more direct way to test the efficacy of an antiretroviral regimen. Protease inhibitors, which attack a different region of HIV than reverse transcriptase inhibitors, were found, when used in combination with other antiretroviral agents, to dramatically and significantly reduce HIV RNA levels in the blood. Patients also experienced significant increases in CD4 T cell counts. For the first time in the epidemic, there was hope. It was hypothesized that with HAART, viral levels could be kept so low that the immune system, as measured by CD4 T cell counts, would be able to recover, and that if viral levels could be kept low enough, the immune system might eradicate the virus. The hypothesis of immune reconstitution, that is, bringing CD4 T cell counts up to levels seen in uninfected patients, is tested in Chapter Four. It was found that these patients did not show enough of a CD4 T cell increase to be consistent with the hypothesis of immune reconstitution.
In Chapter Five, the effectiveness of long-term HAART is analyzed. Survival analysis was conducted on 213 patients on long-term HAART, with the presence of an AIDS-defining illness as the primary endpoint. A high level of clinical failure, that is, progression to an endpoint, was found.
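A hedged sketch of this survival-analysis setup, using the lifelines package on synthetic durations and event flags rather than the actual 213-patient data:

```python
# Kaplan-Meier sketch; all durations and event indicators are synthetic.
import numpy as np
from lifelines import KaplanMeierFitter

rng = np.random.default_rng(1)
months = rng.exponential(scale=30.0, size=213)  # assumed time on HAART
event = rng.random(213) < 0.6                   # True = AIDS-defining illness

kmf = KaplanMeierFitter()
kmf.fit(durations=months, event_observed=event, label="long-term HAART")
print(kmf.median_survival_time_)   # median time to clinical failure
print(kmf.survival_function_.head())
```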
Chapter Six yields insights into where we are going. New technology such as viral genotypic testing, which examines the genetic structure of HIV to determine where mutations have occurred, has shown that HIV is capable of producing resistance mutations that confer multiple drug resistance. This section looks at resistance issues and speculates, ceteris paribus, on where the state of HIV is going. It first addresses viral genotype and the correlates of viral load and disease progression. A second analysis looks at patients who have failed their primary attempts at HAART and subsequent salvage therapy. It was found that salvage regimens, efforts to control viral replication through the administration of different combinations of antiretrovirals, were not effective in controlling viral replication in 90 percent of the population. Thus, primary attempts at therapy offer the best chance of viral suppression and delay of disease progression. Documentation of transmission of drug-resistant virus suggests that the public health crisis of HIV is far from over. Drug-resistant HIV can sustain the epidemic and hamper our efforts to treat HIV infection. The data presented suggest that the decrease in morbidity and mortality due to HIV/AIDS is transient. Deaths due to HIV will increase, and public health officials must prepare for this eventuality unless new treatments become available. These results also underscore the importance of the vaccine effort.
The final chapter looks at the economic issues related to HIV. The direct and indirect costs of treating HIV/AIDS are very high, but for the first time in the epidemic, there exists treatment that can actually slow disease progression. The direct costs of HAART are estimated: the direct lifetime cost of treating each HIV-infected patient with HAART is between $353,000 and $598,000, depending on how long HAART prolongs life. The incremental cost per year of life saved is only about $101,000, which is comparable with the incremental cost per year of life saved by coronary artery bypass surgery.
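The quoted figures can be sanity-checked with simple arithmetic: lifetime cost divided by years of life gained yields the incremental cost per life-year. The years-gained values below are assumptions chosen only to show how both cost scenarios are consistent with roughly $101,000 per year of life saved:

```python
# Back-of-the-envelope incremental cost per life-year; years gained are
# assumed for illustration, not taken from the thesis.
for cost, years_gained in [(353_000, 3.5), (598_000, 5.9)]:
    print(f"${cost:,} / {years_gained} yr = ${cost / years_gained:,.0f} per life-year")
# Both scenarios land near the ~$101,000 per year of life saved cited above.
```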
Policymakers need to be aware that although HAART can delay disease progression, it is not a cure, and the HIV epidemic is not over. The results presented here suggest that the decreases in morbidity and mortality due to HIV are transient. Policymakers need to be prepared for the eventual increase in AIDS incidence and mortality, and the costs associated with HIV/AIDS are also projected to increase. The cost savings seen recently have come from the dramatic decreases in the incidence of AIDS-defining opportunistic infections. As the patients who have been on HAART the longest start to progress to AIDS, policymakers and insurance companies will find that the cost of treating HIV/AIDS will increase.
Abstract:
Accurate simulation of quantum dynamics in complex systems poses a fundamental theoretical challenge with immediate application to problems in biological catalysis, charge transfer, and solar energy conversion. The varied length- and timescales that characterize these kinds of processes necessitate development of novel simulation methodology that can both accurately evolve the coupled quantum and classical degrees of freedom and also be easily applicable to large, complex systems. In the following dissertation, the problems of quantum dynamics in complex systems are explored through direct simulation using path-integral methods as well as application of state-of-the-art analytical rate theories.
Abstract:
In this thesis we build a novel analysis framework to perform the direct extraction of all possible effective Higgs boson couplings to the neutral electroweak gauge bosons in the $H \to ZZ^{(*)} \to 4\ell$ channel, also referred to as the golden channel. We use analytic expressions of the full decay differential cross sections for the $H \to VV' \to 4\ell$ process and the dominant irreducible standard model $q\bar{q} \to 4\ell$ background, where $4\ell = 2e2\mu, 4e, 4\mu$. Detector effects are included through an explicit convolution of these analytic expressions with transfer functions that model the detector response as well as acceptance and efficiency effects. Using the full set of decay observables, we construct an unbinned 8-dimensional detector-level likelihood function which is continuous in the effective couplings and includes systematics. All potential anomalous couplings of $HVV'$, where $V = Z, \gamma$, are considered, allowing for general CP-even/odd admixtures and any possible phases. We measure the CP-odd mixing between the tree-level $HZZ$ coupling and higher-order CP-odd couplings to be compatible with zero and in the range $[-0.40, 0.43]$, and the mixing between the $HZZ$ tree-level coupling and higher-order CP-even couplings to be in the ranges $[-0.66, -0.57] \cup [-0.15, 1.00]$; namely, compatible with a standard model Higgs. We discuss the expected precision in determining the various $HVV'$ couplings in future LHC runs. A powerful and at first glance surprising prediction of the analysis is that with 100-400 fb$^{-1}$, the golden channel will be able to start probing the couplings of the Higgs boson to diphotons in the $4\ell$ channel. We discuss the implications and further optimization of the methods for the next LHC runs.
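As a schematic, one-dimensional stand-in for the unbinned likelihood described above (the real analysis is 8-dimensional and convolves analytic cross sections with detector transfer functions; the toy shapes and data here are pure assumptions), a fit for a continuous mixing parameter might look like:

```python
# Toy unbinned maximum-likelihood fit of a mixing fraction f between two
# normalized angular shapes on x in [-1, 1]. Entirely schematic.
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
x = rng.uniform(-1, 1, size=2000)     # stand-in for the decay observables

def pdf(x, f):
    even = 0.75 * (1 - x**2)          # normalized "CP-even-like" toy shape
    odd  = 0.375 * (1 + x**2)         # normalized "CP-odd-like" toy shape
    return (1 - f) * even + f * odd

def nll(f):
    return -np.sum(np.log(pdf(x, f)))  # unbinned negative log-likelihood

res = minimize_scalar(nll, bounds=(0.0, 1.0), method="bounded")
print(f"fitted mixing fraction: {res.x:.3f}")
```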
Abstract:
Proton-coupled electron transfer (PCET) reactions are ubiquitous throughout chemistry and biology. However, challenges arise in both the experimental and theoretical investigation of PCET reactions; the rare-event nature of the reactions and the coupling of quantum mechanical electron and proton transfer to the slower classical dynamics of the surrounding environment necessitate the development of robust simulation methodology. In the following dissertation, novel path-integral based methods are developed and employed for the direct simulation of the reaction dynamics and mechanisms of condensed-phase PCET.
Abstract:
The Laser Interferometer Gravitational-Wave Observatory (LIGO) consists of two complex large-scale laser interferometers designed for the direct detection of gravitational waves from distant astrophysical sources in the frequency range 10 Hz to 5 kHz. Direct detection of space-time ripples will support Einstein's general theory of relativity and provide invaluable information and new insight into the physics of the Universe.
The initial phase of LIGO started in 2002, and since then data have been collected during six science runs. Instrument sensitivity improved from run to run thanks to the efforts of the commissioning team. Initial LIGO reached its design sensitivity during the last science run, which ended in October 2010.
In parallel with commissioning and data analysis on the initial detector, the LIGO group worked on research and development of the next generation of detectors. The major instrument upgrade from initial to Advanced LIGO started in 2010 and lasted until 2014.
This thesis describes the results of commissioning work done at the LIGO Livingston site from 2013 until 2015, in parallel with and after the installation of the instrument. It also discusses new techniques and tools developed at the 40m prototype, including adaptive filtering, estimation of quantization noise in digital filters, and the design of isolation kits for ground seismometers.
The first part of this thesis is devoted to methods for bringing the interferometer into the linear regime, where the collection of data becomes possible. The states of the longitudinal and angular controls of the interferometer degrees of freedom during the lock acquisition process and in the low-noise configuration are discussed in detail.
Once the interferometer is locked and transitioned to the low-noise regime, the instrument produces astrophysical data that must be calibrated to units of meters or strain. The second part of this thesis describes the online calibration technique set up at both observatories to monitor the quality of the collected data in real time. A sensitivity analysis was performed to understand and eliminate the noise sources of the instrument.
The coupling of noise sources to the gravitational wave channel can be reduced if robust feedforward and optimal feedback control loops are implemented. The last part of this thesis describes static and adaptive feedforward noise cancellation techniques applied to the Advanced LIGO interferometers and tested at the 40m prototype. Applications of optimal time-domain feedback control techniques and estimators to aLIGO control loops are also discussed.
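A minimal sketch of the adaptive feedforward idea described above, using a least-mean-squares (LMS) filter on synthetic data (the coupling path, filter length, and gain are assumptions, not the aLIGO or 40m implementations):

```python
# LMS adaptive feedforward: a witness sensor w measures noise that couples
# into the target channel d through an unknown path; an FIR filter adapts
# to subtract the predictable part. All signals here are synthetic.
import numpy as np

rng = np.random.default_rng(3)
n, taps, mu = 20_000, 8, 0.01
w = rng.standard_normal(n)                    # witness (e.g., a seismometer)
path = np.array([0.5, -0.3, 0.2, 0.1])        # unknown coupling path (assumed)
d = np.convolve(w, path)[:n] + 0.05 * rng.standard_normal(n)  # target channel

h = np.zeros(taps)                            # adaptive FIR coefficients
err = np.zeros(n)
for k in range(taps - 1, n):
    x = w[k - taps + 1:k + 1][::-1]           # latest witness samples, newest first
    y = h @ x                                 # feedforward prediction
    err[k] = d[k] - y                         # residual after subtraction
    h += mu * err[k] * x                      # LMS coefficient update

print(f"raw RMS {d.std():.3f} -> residual RMS {err[taps:].std():.3f}")
```

The filter converges toward the coupling path, so the residual approaches the uncorrelated sensor-noise floor; static feedforward corresponds to freezing the coefficients after training.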
Commissioning work is still ongoing at the sites. The first science run of Advanced LIGO is planned for September 2015 and will last for 3-4 months. This run will be followed by a set of small instrument upgrades installed on a timescale of a few months. The second science run will start in spring 2016 and last for about 6 months. Since the current sensitivity of Advanced LIGO is already more than a factor of 3 better than that of the initial detectors and keeps improving on a monthly basis, the upcoming science runs have a good chance of achieving the first direct detection of gravitational waves.
Abstract:
This dissertation reformulates and streamlines the core tools of robustness analysis for linear time-invariant systems using now-standard methods in convex optimization. In particular, robust performance analysis can be formulated as a primal convex optimization in the form of a semidefinite program using a semidefinite representation of a set of Gramians. The same approach, together with semidefinite programming duality, is applied to develop a linear matrix inequality test for well-connectedness analysis, and many existing results, such as the Kalman-Yakubovich-Popov lemma and various scaled small gain tests, are derived in an elegant fashion. More importantly, unlike the classical approach, a decision variable in this novel optimization framework contains all inner products of signals in a system, and an algorithm for constructing an input and state pair of a system corresponding to the optimal solution of the robustness optimization is presented based on this information. This insight may open up new research directions, and as one such example, this dissertation proposes a semidefinite programming relaxation of a cardinality-constrained variant of the H∞ norm, which we term sparse H∞ analysis, where an adversarial disturbance can use only a limited number of channels. Finally, sparse H∞ analysis is applied to the linearized swing dynamics in order to detect potentially vulnerable spots in power networks.
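As a small, hedged illustration of the semidefinite-programming flavor of this analysis (a standard bounded-real-lemma computation of an H∞ norm bound for a toy system in cvxpy, not the dissertation's Gramian-based formulation):

```python
# H-infinity norm bound of a stable LTI system via the bounded real lemma,
# posed as a semidefinite program. Toy system chosen for illustration.
import cvxpy as cp
import numpy as np

A = np.array([[-1.0, 1.0], [0.0, -2.0]])   # stable state matrix
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

P = cp.Variable((2, 2), symmetric=True)
gamma2 = cp.Variable()                      # gamma squared

# ||G||_inf < gamma iff some P > 0 makes this block matrix negative definite
# (strict inequalities relaxed to non-strict for the numerical solver).
lmi = cp.bmat([
    [A.T @ P + P @ A + C.T @ C, P @ B + C.T @ D],
    [B.T @ P + D.T @ C,         D.T @ D - gamma2 * np.eye(1)],
])
prob = cp.Problem(cp.Minimize(gamma2), [P >> 0, lmi << 0])
prob.solve(solver=cp.SCS)
print("H-infinity bound:", np.sqrt(gamma2.value))
# For G(s) = 1/((s+1)(s+2)) the peak gain is at DC, so this prints ~0.5.
```

The sparse H∞ variant described above additionally restricts how many disturbance channels may be active, which the dissertation handles through a semidefinite relaxation of the resulting cardinality constraint.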
Abstract:
This report presents the results of an investigation of a method of underwater propulsion. The propelling system utilizes the energy of a small mass of expanding gas to accelerate the flow of a large mass of water through an open-ended duct of proper shape and dimensions to obtain a resultant thrust. The investigation was limited to making a large number of runs on a hydroduct of arbitrary design, varying the water flow and gas flow through the device between wide limits, and measuring the net thrust caused by the introduction and expansion of the gas.
In comparison with the effective exhaust velocity of about 6,000 feet per second observed in rocket motors, this hydroduct model attained a maximum effective exhaust velocity of more than 27,000 feet per second using nitrogen gas. Using hydrogen gas, effective exhaust velocities of 146,000 feet per second were obtained. Further investigation should prove this method of propulsion to be not only practical but very efficient.
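The figure of merit quoted above is effective exhaust velocity, thrust per unit mass flow of the injected gas. The thrust and flow values in this sketch are assumptions chosen only to illustrate the ~27,000 ft/s nitrogen scale, not measured data from the report:

```python
# v_e = F / m_dot; in imperial units, thrust / weight-flow gives specific
# impulse in seconds, and multiplying by g0 converts to ft/s.
G0 = 32.174                    # ft/s^2, standard gravity

thrust_lbf = 8.4               # hypothetical measured net thrust
gas_weight_flow_lb_s = 0.01    # hypothetical nitrogen weight flow

isp_s = thrust_lbf / gas_weight_flow_lb_s   # specific impulse, seconds
v_e = isp_s * G0                            # effective exhaust velocity, ft/s
print(f"v_e = {v_e:,.0f} ft/s")             # ~27,000 ft/s
```

The large values relative to rockets arise because the accelerated water, not the injected gas, carries most of the momentum, so the thrust per unit of gas expended is high.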
This investigation was conducted at Project No. 1, Guggenheim Aeronautical Laboratory, California Institute of Technology, Pasadena, California.
Abstract:
From the tunneling characteristics of a tin-tin oxide-lead junction, a direct measurement has been made of the energy-gap variation for a superconductor carrying a current in a compensated geometry. Throughout the region investigated (several temperatures near Tc, down to a reduced temperature t = 0.8), the observed current dependence agrees quite well with predictions based on the Ginzburg-Landau-Gor'kov theory. Near Tc the predicted temperature dependence is also well verified, though deviations are observed at lower temperatures; even for the latter, the data are internally consistent with the temperature dependence of the experimental critical current. At the lowest temperature investigated, t = 0.8, a small "Josephson" tunneling current further allowed a direct measurement of the electron drift velocity at low current densities. From this, a preliminary experimental value of the critical velocity, believed to be the first reported, can be inferred on the basis of Ginzburg-Landau theory. For tin at t = 0.8, we find vc = 87 m/sec. This value does not appear fully consistent with those predicted by recent theories for superconductors with short electronic mean free paths.
Abstract:
The thesis is divided into two parts. Part I generalizes a self-consistent calculation of residue shifts from SU(3) symmetry, originally performed by Dashen, Dothan, Frautschi, and Sharp, to include the effects of non-linear terms. Residue factorizability is used to transform an overdetermined set of equations into a variational problem designed to take advantage of the redundancy of the mathematical system. The solution of this problem automatically satisfies the requirement of factorizability and comes close to satisfying all of the original equations.
Part II investigates some consequences of direct channel Regge poles and treats the problem of relating Reggeized partial wave expansions made in different reaction channels. An analytic method is introduced which can be used to determine the crossed-channel discontinuity for a large class of direct-channel Regge representations, and this method is applied to some specific representations.
It is demonstrated that the multi-sheeted analytic structure of the Regge trajectory function can be used to resolve apparent difficulties arising from infinitely rising Regge trajectories. Also discussed are the implications of large collections of "daughter trajectories."
Two things are of particular interest: first, the threshold behavior in direct and crossed channels; second, the potentialities of Reggeized representations for use in self-consistent calculations. A new representation is introduced which surpasses previous formulations in these two areas, automatically satisfying direct-channel threshold constraints while being capable of reproducing a reasonable crossed-channel discontinuity. A scalar model is investigated at low energies, and a relation is obtained between the mass of the lowest bound state and the slope of the Regge trajectory.