14 results for Objective measurement

in CaltechTHESIS


Relevance:

20.00%

Publisher:

Abstract:

We perform a measurement of the direct CP asymmetry A_CP in b → sγ decays, and of the difference between A_CP for neutral and charged B mesons, ΔA_{X_sγ}, using 429 fb^(-1) of data recorded at the Υ(4S) resonance with the BABAR detector. B mesons are reconstructed from 16 exclusive final states. Particle identification is performed with an algorithm based on Error Correcting Output Codes with an exhaustive matrix. Background rejection and best-candidate selection are performed using two decision-tree-based classifiers. We find A_CP = (1.73 ± 1.93 ± 1.02)% and ΔA_{X_sγ} = (4.97 ± 3.90 ± 1.45)%, where the uncertainties are statistical and systematic, respectively. Based on the measured value of ΔA_{X_sγ}, we determine a 90% confidence interval for Im(C_8g/C_7γ), where C_7γ and C_8g are Wilson coefficients for New Physics amplitudes: -1.64 < Im(C_8g/C_7γ) < 6.52.
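
For readers unfamiliar with exhaustive error-correcting output codes, the following is a minimal sketch (not the BABAR analysis code) of how a multi-class particle-identification decision can be assembled from binary decision trees using an exhaustive code matrix; the feature array X, the label array y, and the tree depth are illustrative assumptions.

```python
# Hedged sketch of exhaustive-matrix ECOC classification with decision trees;
# X (event features) and y (particle-species labels) are placeholders, and the
# tree depth is an arbitrary choice, not the thesis configuration.
from itertools import product
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def exhaustive_code_matrix(n_classes):
    """All 2**(K-1) - 1 distinct binary splits of K classes (exhaustive ECOC)."""
    cols = [bits for bits in product([0, 1], repeat=n_classes)
            if bits[0] == 1 and 0 < sum(bits) < n_classes]
    return np.array(cols).T              # shape (K, 2**(K-1) - 1)

class ExhaustiveECOC:
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.code_ = exhaustive_code_matrix(len(self.classes_))
        class_idx = np.searchsorted(self.classes_, y)
        # One binary decision tree per column of the code matrix.
        self.trees_ = [DecisionTreeClassifier(max_depth=6).fit(X, self.code_[class_idx, j])
                       for j in range(self.code_.shape[1])]
        return self

    def predict(self, X):
        bits = np.column_stack([t.predict(X) for t in self.trees_])
        # Assign each event to the class whose code word is nearest in Hamming distance.
        dists = np.abs(bits[:, None, :] - self.code_[None, :, :]).sum(axis=2)
        return self.classes_[dists.argmin(axis=1)]
```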

Relevance:

20.00%

Publisher:

Abstract:

The dissertation is concerned with the mathematical study of various network problems. First, three real-world networks are considered: (i) the human brain network, (ii) communication networks, and (iii) electric power networks. Although these networks perform very different tasks, they share similar mathematical foundations. The high-level goal is to analyze and/or synthesize each of these systems from a “control and optimization” point of view. After studying these three real-world networks, two abstract network problems are also explored, both motivated by power systems. The first is “flow optimization over a flow network” and the second is “nonlinear optimization over a generalized weighted graph”. The results derived in this dissertation are summarized below.

Brain Networks: Neuroimaging data reveal the coordinated activity of spatially distinct brain regions, which may be represented mathematically as a network of nodes (brain regions) and links (interdependencies). To obtain the brain connectivity network, the graphs associated with the correlation matrix and the inverse covariance matrix—describing marginal and conditional dependencies between brain regions—have been proposed in the literature. A question arises as to whether any of these graphs provides useful information about brain connectivity. Due to the electrical properties of the brain, this problem is investigated in the context of electrical circuits. First, we consider an electric circuit model and show that the inverse covariance matrix of the node voltages reveals the topology of the circuit. Second, we study the problem of finding the topology of the circuit based only on measurements. In this case, by assuming that the circuit is hidden inside a black box and only the nodal signals are available for measurement, the aim is to find the topology of the circuit when a limited number of samples is available. For this purpose, we deploy the graphical lasso technique to estimate a sparse inverse covariance matrix. It is shown that the graphical lasso may find most of the circuit topology if the exact covariance matrix is well-conditioned; however, it may fail to work well when this matrix is ill-conditioned. To deal with ill-conditioned matrices, we propose a small modification to the graphical lasso algorithm and demonstrate its performance. Finally, the technique developed in this work is applied to the resting-state fMRI data of a number of healthy subjects.
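
As a rough illustration of the graphical-lasso step described above (not the modified algorithm developed in the thesis), the sketch below recovers the edge set of a known sparse precision matrix from simulated node-voltage samples; the chain topology, sample size, and regularization weight alpha are assumptions made for the example.

```python
# Hedged sketch: recovering a circuit-like topology with the (unmodified) graphical
# lasso. The chain graph, sample size, and alpha are illustrative choices; the thesis
# proposes a modification for ill-conditioned covariance matrices.
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n = 8
# Ground-truth sparse precision (inverse covariance): a chain 0-1-...-7.
Theta = 2.0 * np.eye(n)
for i in range(n - 1):
    Theta[i, i + 1] = Theta[i + 1, i] = -0.8
Sigma = np.linalg.inv(Theta)

# Simulated "node voltage" samples, then sparse inverse-covariance estimation.
X = rng.multivariate_normal(np.zeros(n), Sigma, size=2000)
P = GraphicalLasso(alpha=0.05).fit(X).precision_

# Nonzero off-diagonal entries of the estimated precision are the recovered edges.
edges = [(i, j) for i in range(n) for j in range(i + 1, n) if abs(P[i, j]) > 0.05]
print(edges)   # expected: the chain edges (0, 1), (1, 2), ..., (6, 7)
```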

Communication Networks: Congestion control techniques aim to adjust the transmission rates of competing users on the Internet so that network resources are shared efficiently. Despite the progress in the analysis and synthesis of Internet congestion control, almost all existing fluid models of congestion control assume that every link in the path of a flow observes the original source rate. To address this issue, a more accurate model is derived in this work for the behavior of the network under an arbitrary congestion controller, which takes into account the effect of buffering (queueing) on data flows. Using this model, it is proved that the well-known Internet congestion control algorithms may no longer be stable for the common pricing schemes unless a sufficient condition is satisfied. It is also shown that these algorithms are guaranteed to be stable if a new pricing mechanism is used.
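
For context, the sketch below simulates the classical Kelly-type primal fluid model that such analyses start from, in which the single link is assumed to see the source rate directly (i.e., without the buffering effect the thesis adds); the capacity, gain, and price function are illustrative choices, not values from the dissertation.

```python
# Hedged toy sketch: forward-Euler simulation of the classical Kelly primal fluid
# model for one source on one link. All numbers are illustrative assumptions.
C = 10.0                            # link capacity
w = 5.0                             # source willingness-to-pay
kappa = 0.5                         # controller gain
price = lambda y: (y / C) ** 4      # assumed link congestion-price function

x, dt = 1.0, 0.01                   # initial source rate and time step
for _ in range(5000):
    x += dt * kappa * (w - x * price(x))   # dx/dt = kappa * (w - x * p(x))
print(round(x, 3))                  # settles at the rate solving x * p(x) = w
```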

Electrical Power Networks: Optimal power flow (OPF) has been one of the most studied problems for power systems since its introduction by Carpentier in 1962. This problem is concerned with finding an optimal operating point of a power network minimizing the total power generation cost subject to network and physical constraints. It is well known that OPF is computationally hard to solve due to the nonlinear interrelation among the optimization variables. The objective is to identify a large class of networks over which every OPF problem can be solved in polynomial time. To this end, a convex relaxation is proposed, which solves the OPF problem exactly for every radial network and every meshed network with a sufficient number of phase shifters, provided power over-delivery is allowed. The concept of “power over-delivery” is equivalent to relaxing the power balance equations to inequality constraints.
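
A minimal sketch of the kind of convex relaxation discussed here, written for a toy two-bus radial line in the branch-flow model with the power-balance equalities relaxed to inequalities (power over-delivery); the line parameters, load, and voltage limits are assumed values, and this is not the thesis's formulation.

```python
# Hedged sketch: SOCP relaxation of a toy 2-bus radial OPF with relaxed (inequality)
# power balance. Per-unit numbers are illustrative assumptions.
import cvxpy as cp

r, x = 0.02, 0.06            # line resistance/reactance (assumed, per unit)
p_load, q_load = 1.0, 0.3    # demand at bus 1 (assumed, per unit)

P = cp.Variable()                  # active power sent from bus 0 into the line
Q = cp.Variable()                  # reactive power sent from bus 0
l = cp.Variable(nonneg=True)       # squared line current
v0 = cp.Variable()                 # squared voltage at bus 0
v1 = cp.Variable()                 # squared voltage at bus 1

constraints = [
    v1 == v0 - 2 * (r * P + x * Q) + (r**2 + x**2) * l,  # voltage drop along the line
    P - r * l >= p_load,           # relaxed power balance: over-delivery allowed
    Q - x * l >= q_load,
    cp.quad_over_lin(cp.vstack([P, Q]), v0) <= l,        # SOC relaxation of l = (P^2+Q^2)/v0
    0.9**2 <= v0, v0 <= 1.1**2,
    0.9**2 <= v1, v1 <= 1.1**2,
]
prob = cp.Problem(cp.Minimize(P), constraints)   # minimize generation (cost ~ P)
prob.solve()
gap = abs(l.value - (P.value**2 + Q.value**2) / v0.value)
print(round(prob.value, 4), "relaxation tight:", gap < 1e-4)
```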

Flow Networks: In this part of the dissertation, the minimum-cost flow problem over an arbitrary flow network is considered. In this problem, each node is associated with some possibly unknown injection, each line has two unknown flows at its ends related to each other via a nonlinear function, and all injections and flows need to satisfy certain box constraints. This problem, named generalized network flow (GNF), is highly non-convex due to its nonlinear equality constraints. Under the assumption of monotonicity and convexity of the flow and cost functions, a convex relaxation is proposed, which always finds the optimal injections. A primary application of this work is in the OPF problem. The results of this work on GNF prove that the relaxation on power balance equations (i.e., load over-delivery) is not needed in practice under a very mild angle assumption.

Generalized Weighted Graphs: Motivated by power optimizations, this part aims to find a global optimization technique for a nonlinear optimization defined over a generalized weighted graph. Every edge of this type of graph is associated with a weight set corresponding to the known parameters of the optimization (e.g., the coefficients). The motivation behind this problem is to investigate how the (hidden) structure of a given real/complex valued optimization makes the problem easy to solve, and indeed the generalized weighted graph is introduced to capture the structure of an optimization. Various sufficient conditions are derived, which relate the polynomial-time solvability of different classes of optimization problems to weak properties of the generalized weighted graph such as its topology and the sign definiteness of its weight sets. As an application, it is proved that a broad class of real and complex optimizations over power networks are polynomial-time solvable due to the passivity of transmission lines and transformers.

Relevance:

20.00%

Publisher:

Abstract:

We have measured inclusive electron-scattering cross sections for targets of ^(4)He, C, Al, Fe, and Au, for kinematics spanning the quasi-elastic peak, with squared four-momentum transfers (q^2) between 0.23 and 2.89 (GeV/c)^2. Additional data were measured for Fe with q^2 up to 3.69 (GeV/c)^2. These cross sections were analyzed for the y-scaling behavior expected from a simple impulse-approximation model, and are found to approach a scaling limit at the highest q^2. The approach to scaling with q^2 is compared with a calculation for infinite nuclear matter, and relationships between the scaling function and nucleon momentum distributions are discussed. Deviations from perfect scaling are used to set limits on possible changes in the size of nucleons inside the nucleus.
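
For reference, a hedged statement of the standard y-scaling kinematics assumed in such analyses (the thesis's exact definitions and conventions may differ in detail):

```latex
% Standard y-scaling definitions (assumed conventions; M = nucleon mass,
% M_A and M_{A-1} = target and residual-nucleus masses, \nu and q = energy
% and three-momentum transfer). The scaling variable y solves
\nu + M_A = \sqrt{M^2 + (q + y)^2} + \sqrt{M_{A-1}^2 + y^2},
% and the scaling function is formed from the measured cross section and the
% elementary electron-nucleon cross sections:
F(y) = \frac{\mathrm{d}^2\sigma / \mathrm{d}\Omega\,\mathrm{d}\nu}
            {Z\,\sigma_{ep} + N\,\sigma_{en}}
       \cdot \frac{q}{\sqrt{M^2 + (q + y)^2}}.
```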

Relevance:

20.00%

Publisher:

Abstract:

A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (the number of unknowns is less than the number of equations), traditionally very few statistical estimation problems have considered a data model that is underdetermined (the number of unknowns exceeds the number of equations). In recent times, however, an explosion of theoretical and computational methods has been developed primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that, in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.

In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.

We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.

Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
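
As a small illustration of why such sparse geometries help, the sketch below builds a coprime array and its difference co-array, showing that the number of distinct co-array lags far exceeds the number of physical sensors; the choice M = 3, N = 5 is an assumption made for the example, not a design from the thesis.

```python
# Hedged illustration: a coprime array (M = 3, N = 5 assumed) and its difference
# co-array. The number of distinct lags far exceeds the number of physical sensors,
# which is what allows more sources than sensors to be identified from correlations.
import numpy as np

M, N = 3, 5                               # coprime pair (illustrative choice)
sub1 = M * np.arange(N)                   # 0, 3, 6, 9, 12
sub2 = N * np.arange(2 * M)               # 0, 5, 10, 15, 20, 25
sensors = np.union1d(sub1, sub2)          # physical sensor positions (10 of them)

lags = np.unique(sensors[:, None] - sensors[None, :])
print(f"{sensors.size} physical sensors, {lags.size} distinct co-array lags")
```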

This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.

Relevance:

20.00%

Publisher:

Abstract:

We report measurements of the proton form factors, G^p_E and G^p_M, extracted from elastic electron scattering in the range 1 ≤ Q^2 ≤ 3 (GeV/c)^2, with uncertainties of <15% in G^p_E and <3% in G^p_M. The results for G^p_E are somewhat larger than indicated by most theoretical parameterizations. The ratio of Pauli and Dirac form factors, Q^2(F^p_2/F^p_1), is lower in value and shows less Q^2 dependence than these parameterizations have indicated. Comparisons are made to theoretical models, including those based on perturbative QCD, vector-meson dominance, QCD sum rules, and diquark constituents of the proton. A global extraction of the form factors, including previous elastic scattering measurements, is also presented.
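
For reference, the conversion usually assumed between the Sachs form factors quoted here and the Dirac/Pauli form factors appearing in the ratio Q^2(F^p_2/F^p_1):

```latex
% Assumed relations between Sachs and Dirac/Pauli form factors,
% with \tau = Q^2 / (4 M_p^2) and F_2 normalized so that F_2^p(0) = \kappa_p:
G_E^p = F_1^p - \tau F_2^p, \qquad G_M^p = F_1^p + F_2^p.
% Inverting these gives F_1^p and F_2^p, and hence the quoted ratio Q^2 F_2^p / F_1^p.
```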

Relevance:

20.00%

Publisher:

Abstract:

The core-level energy shifts observed using X-ray photoelectron spectroscopy (XPS) have been used to determine the band bending at Si(111) surfaces terminated with Si-Br, Si-H, and Si-CH3 groups. The surface termination influenced the band bending, with the Si 2p3/2 binding energy affected more by the surface chemistry than by the dopant type. The highest binding energies were measured on Si(111)-Br (whose Fermi level was positioned near the conduction band at the surface), followed by Si(111)-H, followed by Si(111)-CH3 (whose Fermi level was positioned near mid-gap at the surface). Si(111)-CH3 surfaces exposed to Br2(g) yielded the lowest binding energies, with the Fermi level positioned between mid-gap and the valence band. The Fermi level position of Br2(g)-exposed Si(111)-CH3 was consistent with the presence of negatively charged bromine-containing ions on such surfaces. The binding energies of all of the species detected on the surface (C, O, Br) shifted with the band bending, illustrating the importance of isolating the effects of band bending when measuring chemical shifts on semiconductor surfaces. The influence of band bending was confirmed by surface photovoltage (SPV) measurements, which showed that the core levels shifted toward their flat-band values upon illumination. Where applicable, the contribution from the X-ray source to the SPV was isolated and quantified. Work functions were measured by ultraviolet photoelectron spectroscopy (UPS), allowing calculation of the sign and magnitude of the surface dipole in such systems. The values of the surface dipoles were in good agreement with previous measurements as well as with electronegativity considerations. The binding energies of the adventitious carbon signals were affected by band bending as well as by the surface dipole. A model of band bending in which charged surface states are located exterior to the surface dipole is consistent with the XPS and UPS behavior of the chemically functionalized Si(111) surfaces investigated herein.

Relevance:

20.00%

Publisher:

Abstract:

The study of the strength of a material is relevant to a variety of applications including automobile collisions, armor penetration and inertial confinement fusion. Although dynamic behavior of materials at high pressures and strain-rates has been studied extensively using plate impact experiments, the results provide measurements in one direction only. Material behavior that is dependent on strength is unaccounted for. The research in this study proposes two novel configurations to mitigate this problem.

The first configuration introduced is the oblique wedge experiment, which consists of a driver material, an angled target of interest, and a backing material used to measure in-situ velocities. Upon impact, a shock wave is generated in the driver material. As the shock encounters the angled target, it is reflected back into the driver and transmitted into the target. Due to the angle of obliquity of the incident wave, a transverse wave is generated that subjects the target to shear while it is compressed by the initial longitudinal shock, such that the material does not slip. Using numerical simulations, this study shows that a variety of oblique wedge configurations can be used to study the shear response of materials, and this can be extended to strength measurement as well. Experiments were performed on an oblique wedge setup with a copper impactor, a polymethylmethacrylate driver, an aluminum 6061-T6 target, and a lithium fluoride window. Particle velocities were measured using laser interferometry, and the results agree well with the simulations.

The second novel configuration is the y-cut quartz sandwich design, which uses the anisotropic properties of y-cut quartz to generate a shear wave that is transmitted into a thin sample. By using an anvil material to back the thin sample, particle velocities measured at the rear surface of the backing plate can be implemented to calculate the shear stress in the material and subsequently the strength. Numerical simulations were conducted to show that this configuration has the ability to measure the strength for a variety of materials.

Relevance:

20.00%

Publisher:

Abstract:

The free neutron beta decay correlation A0 between neutron polarization and electron emission direction provides the strongest constraint on the ratio λ = gA/gV of the axial-vector to vector coupling constants in weak decay. In conjunction with the CKM matrix element Vud and the neutron lifetime τn, λ provides a test of Standard Model assumptions for the weak interaction. Leading high-precision measurements of A0 and τn in the 1995-2005 time period showed discrepancies with prior measurements and with Standard Model predictions for the relationship between λ, τn, and Vud. The UCNA experiment was developed to measure A0 from the decay of polarized ultracold neutrons (UCN), providing a complementary determination of λ with different systematic uncertainties from prior cold-neutron beam experiments. This dissertation describes the analysis of the dataset collected by UCNA in 2010, with emphasis on detector response calibrations and systematics. The UCNA measurement is placed in the context of the most recent τn results and cold-neutron A0 experiments.
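
For orientation, the Standard Model relations usually assumed between A0, λ, τn, and Vud (quoted here for context, not taken from the dissertation):

```latex
% Standard-Model relations (stated for context, with \lambda = g_A / g_V real):
A_0 = \frac{-2\,(\lambda^2 + \lambda)}{1 + 3\lambda^2},
\qquad
\frac{1}{\tau_n} \propto |V_{ud}|^2 \,\bigl(1 + 3\lambda^2\bigr).
```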

Relevance:

20.00%

Publisher:

Abstract:

The spin-dependent cross sections, σ^T_{1/2} and σ^T_{3/2}, and asymmetries, A_∥ and A_⊥, for 3He have been measured at Jefferson Lab's Hall A facility. The inclusive scattering process 3He(e,e')X was measured for initial beam energies ranging from 0.86 to 5.1 GeV, at a scattering angle of 15.5°. The data include measurements from the quasielastic peak, the resonance region, and the deep inelastic regime. An approximation for the extended Gerasimov-Drell-Hearn integral is presented at four-momentum transfers Q^2 of 0.2-1.0 GeV^2.
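
For context, the real-photon Gerasimov-Drell-Hearn sum rule that the extended integral generalizes to finite Q^2 (stated here for orientation; the thesis evaluates the Q^2 > 0 analogue):

```latex
% Real-photon GDH sum rule (\kappa = anomalous magnetic moment, M = target mass,
% \nu_{thr} = inelastic threshold):
\int_{\nu_{\mathrm{thr}}}^{\infty}
  \frac{\sigma_{3/2}(\nu) - \sigma_{1/2}(\nu)}{\nu}\,\mathrm{d}\nu
  = \frac{2\pi^2 \alpha\,\kappa^2}{M^2}.
```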

Also presented are results on the performance of the polarized 3He target. Polarization of 3He was achieved by the process of spin-exchange collisions with optically pumped rubidium vapor. The 3He polarization was monitored using the NMR technique of adiabatic fast passage (AFP). The average target polarization was approximately 35% and was determined to have a systematic uncertainty of roughly ±4% relative.

Relevance:

20.00%

Publisher:

Abstract:

Improved measurement of the neutrino mass via β decay spectroscopy requires the development of new energy measurement techniques and a new β decay source. A promising proposal is to measure the β energy by the frequency of the cyclotron radiation emitted in a magnetic field and to use a high purity atomic tritium source. This thesis examines the feasibility of using a magnetic trap to create and maintain such a source. We demonstrate that the loss rate due to β decay heating is not a limiting factor for the design. We also calculate the loss rate due to evaporative cooling and propose that the tritium can be cooled sufficiently during trap loading as to render this negligible. We further demonstrate a design for the magnetic field which produces a highly uniform field over a large fraction of the trap volume as needed for cyclotron frequency spectroscopy while still providing effective trapping.
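For orientation, the relation assumed between the emitted cyclotron frequency and the β kinetic energy, which is what makes frequency a proxy for energy in this approach:

```latex
% Relativistic cyclotron frequency of a \beta particle with kinetic energy E
% in a magnetic field B (the frequency-to-energy map assumed here):
f_c = \frac{1}{2\pi}\,\frac{e B}{\gamma m_e}
    = \frac{1}{2\pi}\,\frac{e B}{m_e + E / c^2}.
```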

Relevance:

20.00%

Publisher:

Abstract:

While some of the deepest results in nature are those that give explicit bounds between important physical quantities, some of the most intriguing and celebrated of such bounds come from fields where there is still a great deal of disagreement and confusion regarding even the most fundamental aspects of the theories. For example, in quantum mechanics, there is still no complete consensus as to whether the limitations associated with Heisenberg's Uncertainty Principle derive from an inherent randomness in physics, or rather from limitations in the measurement process itself, resulting from phenomena like back action. Likewise, the second law of thermodynamics makes a statement regarding the increase in entropy of closed systems, yet the theory itself has neither a universally-accepted definition of equilibrium, nor an adequate explanation of how a system with underlying microscopically Hamiltonian dynamics (reversible) settles into a fixed distribution.

Motivated by these physical theories, and perhaps their inconsistencies, in this thesis we use dynamical systems theory to investigate how the very simplest of systems, even with no physical constraints, are characterized by bounds that give limits to the ability to make measurements on them. Using an existing interpretation, we start by examining how dissipative systems can be viewed as high-dimensional lossless systems, and how taking this view necessarily implies the existence of a noise process that results from the uncertainty in the initial system state. This fluctuation-dissipation result plays a central role in a measurement model that we examine, in particular describing how noise is inevitably injected into a system during a measurement, noise that can be viewed as originating either from the randomness of the many degrees of freedom of the measurement device, or of the environment. This noise constitutes one component of measurement back action, and ultimately imposes limits on measurement uncertainty. Depending on the assumptions we make about active devices, and their limitations, this back action can be offset to varying degrees via control. It turns out that using active devices to reduce measurement back action leads to estimation problems that have non-zero uncertainty lower bounds, the most interesting of which arise when the observed system is lossless. One such lower bound, a main contribution of this work, can be viewed as a classical version of a Heisenberg uncertainty relation between the system's position and momentum. We finally also revisit the murky question of how macroscopic dissipation appears from lossless dynamics, and propose alternative approaches for framing the question using existing systematic methods of model reduction.

Relevance:

20.00%

Publisher:

Abstract:

The intensities and relative abundances of galactic cosmic ray protons and antiprotons have been measured with the Isotope Matter Antimatter Experiment (IMAX), a balloon-borne magnet spectrometer. The IMAX payload had a successful flight from Lynn Lake, Manitoba, Canada on July 16, 1992. Particles detected by IMAX were identified by mass and charge via the Cherenkov-rigidity and TOF-rigidity techniques, with measured rms mass resolution ≤0.2 amu for Z=1 particles.

Cosmic ray antiprotons are of interest because they can be produced by the interactions of high energy protons and heavier nuclei with the interstellar medium as well as by more exotic sources. Previous cosmic ray antiproton experiments have reported an excess of antiprotons over that expected solely from cosmic ray interactions.

Analysis of the flight data has yielded 124405 protons and 3 antiprotons in the energy range 0.19-0.97 GeV at the instrument, 140617 protons and 8 antiprotons in the energy range 0.97-2.58 GeV, and 22524 protons and 5 antiprotons in the energy range 2.58-3.08 GeV. These measurements are a statistical improvement over previous antiproton measurements, and they demonstrate improved separation of antiprotons from the more abundant fluxes of protons, electrons, and other cosmic ray species.
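
As a rough illustration of the statistics involved (not the thesis's background-corrected analysis), the snippet below forms the raw instrument-level antiproton-to-proton ratios from the quoted event counts, with approximate 68% exact-Poisson intervals on the antiproton counts, showing why the uncertainties are so asymmetric at these low counts.

```python
# Hedged illustration only: raw ratios from the quoted counts with approximate 68%
# exact-Poisson (Garwood) intervals on the antiproton counts.
from scipy.stats import chi2

counts = {
    "0.19-0.97 GeV": (3, 124405),
    "0.97-2.58 GeV": (8, 140617),
    "2.58-3.08 GeV": (5, 22524),
}

for band, (n_pbar, n_p) in counts.items():
    lo = 0.5 * chi2.ppf(0.16, 2 * n_pbar)        # exact Poisson lower limit
    hi = 0.5 * chi2.ppf(0.84, 2 * (n_pbar + 1))  # exact Poisson upper limit
    print(f"{band}: pbar/p = {n_pbar / n_p:.2e} "
          f"(+{(hi - n_pbar) / n_p:.2e} / -{(n_pbar - lo) / n_p:.2e})")
```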

When these results are corrected for instrumental and atmospheric background and losses, the ratios at the top of the atmosphere are p̄/p = 3.21(+3.49, -1.97)x10^(-5) in the energy range 0.25-1.00 GeV, p̄/p = 5.38(+3.48, -2.45)x10^(-5) in the energy range 1.00-2.61 GeV, and p̄/p = 2.05(+1.79, -1.15)x10^(-4) in the energy range 2.61-3.11 GeV. The corresponding antiproton intensities, also corrected to the top of the atmosphere, are 2.3(+2.5, -1.4)x10^(-2) (m^2 s sr GeV)^(-1), 2.1(+1.4, -1.0)x10^(-2) (m^2 s sr GeV)^(-1), and 4.3(+3.7, -2.4)x10^(-2) (m^2 s sr GeV)^(-1) for the same energy ranges.

The IMAX antiproton fluxes and antiproton/proton ratios are compared with recent Standard Leaky Box Model (SLBM) calculations of the cosmic ray antiproton abundance. According to this model, cosmic ray antiprotons are secondary cosmic rays arising solely from the interaction of high energy cosmic rays with the interstellar medium. The effects of solar modulation of protons and antiprotons are also calculated, showing that the antiproton/proton ratio can vary by as much as an order of magnitude over the solar cycle. When solar modulation is taken into account, the IMAX antiproton measurements are found to be consistent with the most recent calculations of the SLBM. No evidence is found in the IMAX data for excess antiprotons arising from the decay of galactic dark matter, which had been suggested as an interpretation of earlier measurements. Furthermore, the consistency of the current results with the SLBM calculations suggests that the mean antiproton lifetime is at least as large as the cosmic ray storage time in the galaxy (~10^7 yr, based on measurements of cosmic ray ^(10)Be). Recent measurements by two other experiments are consistent with this interpretation of the IMAX antiproton results.

Relevance:

20.00%

Publisher:

Abstract:

An instrument, the Caltech High Energy Isotope Spectrometer Telescope (HEIST), has been developed to measure isotopic abundances of cosmic ray nuclei in the charge range 3 ≤ Z ≤ 28 and the energy range between 30 and 800 MeV/nuc by employing an energy-loss/residual-energy technique. Measurements of particle trajectories and energy losses are made using a multiwire proportional counter hodoscope and a stack of CsI(Tl) crystal scintillators, respectively. A detailed analysis has been made of the mass resolution capabilities of this instrument.

Landau fluctuations set a fundamental limit on the attainable mass resolution, which for this instrument ranges between ~0.07 amu for Z ≈ 3 and ~0.2 amu for Z ≈ 26. Contributions to the mass resolution due to uncertainties in measuring the path length and energy losses of the detected particles are shown to degrade the overall mass resolution to between ~0.1 amu (Z ≈ 3) and ~0.3 amu (Z ≈ 26).

A formalism, based on the leaky box model of cosmic ray propagation, is developed for obtaining isotopic abundance ratios at the cosmic ray sources from abundances measured in local interstellar space for elements having three or more stable isotopes, one of which is believed to be absent at the cosmic ray sources. This purely secondary isotope is used as a tracer of secondary production during propagation. This technique is illustrated for the isotopes of the elements O, Ne, S, Ar and Ca.

The uncertainties in the derived source ratios due to errors in fragmentation and total inelastic cross sections, in observed spectral shapes, and in measured abundances are evaluated. It is shown that the dominant sources of uncertainty are uncorrelated errors in the fragmentation cross sections and statistical uncertainties in measuring local interstellar abundances.

These results are applied to estimate the extent to which uncertainties must be reduced in order to distinguish between cosmic ray production in a solar-like environment and in various environments with greater neutron enrichments.

Relevance:

20.00%

Publisher:

Abstract:

The isotopic compositions of galactic cosmic ray boron, carbon, and nitrogen have been measured at energies near 300 MeV amu^(-1), using a balloon-borne instrument at an atmospheric depth of ~5 g cm^(-2). The calibrations of the detectors comprising the instrument are described. The saturation properties of the cesium iodide scintillators used for measurement of particle energy are studied in the context of analyzing the data for mass. The achieved rms mass resolution varies from ~0.3 amu at boron to ~0.5 amu at nitrogen, consistent with a theoretical analysis of the contributing factors. Corrected for detector interactions and the effects of the residual atmosphere, the results are ^(10)B/B = 0.33^(+0.17)_(-0.11), ^(13)C/C = 0.06^(+0.13)_(-0.01), and ^(15)N/N = 0.42^(+0.19)_(-0.17). A model of galactic propagation and solar modulation is described. Assuming a cosmic ray source composition of solar-like isotopic abundances, the model predicts abundances near Earth consistent with the measurements.