10 results for "Inovation models in nets"
in CaltechTHESIS
Abstract:
Since the discovery of the Higgs boson at the LHC, its use as a probe in searches for physics beyond the standard model, such as supersymmetry, has become important, as seen in a recent search by the CMS experiment using razor variables in the diphoton final state. Motivated by this search, this thesis examines the LHC discovery potential of a SUSY scenario involving bottom squark pair production with a Higgs boson in the final state. We design and implement a software-based trigger using the razor variables for the CMS experiment to record events with a bottom quark-antiquark pair from a Higgs boson. We characterize the full range of signatures at the LHC from this Higgs-aware SUSY scenario and demonstrate the sensitivity of the CMS data to this model.
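As a schematic illustration of what such a trigger decision amounts to, the sketch below applies a threshold in the (MR, R^2) plane; the function name and cut values are hypothetical placeholders, not the actual CMS high-level-trigger selection.

```python
# Schematic razor trigger decision. The thresholds below are hypothetical
# placeholders, not the values used in the actual CMS trigger menu.
def razor_trigger_accept(mr_gev: float, rsq: float) -> bool:
    """Accept an event if it lies above a line in the (MR, R^2) plane,
    the typical shape of a razor trigger selection."""
    return mr_gev > 200.0 and rsq > 0.09 and mr_gev * rsq > 25.0
```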
Abstract:
This thesis explores the problem of mobile robot navigation in dense human crowds. We begin by considering a fundamental impediment to classical motion planning algorithms called the freezing robot problem: once the environment surpasses a certain level of complexity, the planner decides that all forward paths are unsafe, and the robot freezes in place (or performs unnecessary maneuvers) to avoid collisions. Since a feasible path typically exists, this behavior is suboptimal. Existing approaches have focused on reducing predictive uncertainty by employing higher fidelity individual dynamics models or heuristically limiting the individual predictive covariance to prevent overcautious navigation. We demonstrate that both the individual prediction and the individual predictive uncertainty have little to do with this undesirable navigation behavior. Additionally, we provide evidence that dynamic agents are able to navigate in dense crowds by engaging in joint collision avoidance, cooperatively making room to create feasible trajectories. We accordingly develop interacting Gaussian processes, a prediction density that captures cooperative collision avoidance, and a "multiple goal" extension that models the goal driven nature of human decision making. Navigation naturally emerges as a statistic of this distribution.
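The construction can be sketched compactly. Below is a minimal, illustrative implementation assuming independent GP posteriors per agent and a simple distance-based cooperation potential; the potential form, parameter values, and function names are assumptions for illustration, not the exact model of the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gp_paths(mean, cov, n):
    """Draw n trajectory samples from a per-agent GP posterior
    (mean: (T, 2) waypoints; cov: (T, T) kernel over time steps)."""
    L = np.linalg.cholesky(cov + 1e-8 * np.eye(cov.shape[0]))
    noise = rng.standard_normal((n,) + mean.shape)
    return mean[None] + np.einsum('ij,njd->nid', L, noise)

def cooperation_weight(robot_path, ped_paths, h=0.5, alpha=0.9):
    """Illustrative interaction potential: down-weight joint samples in
    which the robot and any pedestrian are close at the same time step."""
    w = 1.0
    for path in ped_paths:
        d2 = np.sum((robot_path - path) ** 2, axis=1)
        w *= np.prod(1.0 - alpha * np.exp(-d2 / (2 * h ** 2)))
    return w

def igp_plan(robot_mean, robot_cov, ped_means, ped_cov, n_samples=500):
    """Approximate the IGP navigation statistic by importance sampling:
    the plan is the robot component of the highest-weight joint sample."""
    robot_s = sample_gp_paths(robot_mean, robot_cov, n_samples)
    ped_s = [sample_gp_paths(m, ped_cov, n_samples) for m in ped_means]
    weights = np.array([
        cooperation_weight(robot_s[i], [s[i] for s in ped_s])
        for i in range(n_samples)
    ])
    return robot_s[np.argmax(weights)]
```

The key design point is that the cooperation potential acts on the joint distribution over all agents, so avoidance emerges from reweighting mutually compatible trajectories rather than from inflating or deflating any individual predictive covariance.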
Most importantly, we empirically validate our models in the Chandler dining hall at Caltech during peak hours and, in the process, carry out the first extensive quantitative study of robot navigation in dense human crowds (collecting data on 488 runs). The multiple goal interacting Gaussian processes algorithm performs comparably to human teleoperators at crowd densities nearing 1 person/m^2, while a state-of-the-art noncooperative planner exhibits unsafe behavior more than 3 times as often as the multiple goal extension, and twice as often as the basic interacting Gaussian process approach. Furthermore, a reactive planner based on the widely used dynamic window approach proves insufficient for crowd densities above 0.55 people/m^2. For inclusive validation purposes, we also show that either our noncooperative planner or our reactive planner captures the salient characteristics of nearly any existing dynamic navigation algorithm. Based on these experimental results and theoretical observations, we conclude that a cooperation model is critical for safe and efficient robot navigation in dense human crowds.
Finally, we produce a large database of ground truth pedestrian crowd data. We make this ground truth database publicly available for further scientific study of crowd prediction models, learning-from-demonstration algorithms, and human-robot interaction models in general.
Abstract:
There are currently two competing models of our universe. One is Big Bang cosmology with inflation. The other is the cyclic model with an ekpyrotic phase in each cycle. This thesis is divided into two main parts according to these two models. In the first part, we quantify the potentially observable effects of a small violation of translational invariance during inflation, as characterized by the presence of a preferred point, line, or plane. We explore the imprint such a violation would leave on the cosmic microwave background anisotropy, and provide explicit formulas for the expected amplitudes $\langle a_{lm}a_{l'm'}^*\rangle$ of the spherical-harmonic coefficients. We then provide a model and study the two-point correlation of a massless scalar (the inflaton) when the stress tensor contains the energy density from an infinitely long straight cosmic string in addition to a cosmological constant. Finally, we discuss whether inflation can be reconciled with Liouville's theorem as far as the fine-tuning problem is concerned. In the second part, we identify several problems in cyclic/ekpyrotic cosmology. First of all, the quantum-to-classical transition would not happen during an ekpyrotic phase, even for superhorizon modes, and therefore the fluctuations cannot be interpreted as classical. This implies that the prediction of a scale-free power spectrum in the ekpyrotic/cyclic universe model requires further inspection. Secondly, we find that the usual mechanism for solving fine-tuning problems is not compatible with an eternal universe containing infinitely many cycles in both directions of time. Therefore, all fine-tuning problems, including the flatness problem, still ask for an explanation in any generic cyclic model.
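For context, statistical homogeneity and isotropy force the covariance of the spherical-harmonic coefficients to be diagonal,

$$\langle a_{lm} a_{l'm'}^{*} \rangle = C_l \,\delta_{ll'}\,\delta_{mm'},$$

so a preferred point, line, or plane during inflation announces itself through nonzero correlations with $l \neq l'$ or $m \neq m'$; the explicit formulas referenced in the abstract quantify these off-diagonal amplitudes.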
Abstract:
The construction and LHC phenomenology of the razor variables MR, an event-by-event indicator of the heavy particle mass scale, and R, a dimensionless variable related to the transverse momentum imbalance of events and missing transverse energy, are presented. The variables are used in the analysis of the first proton-proton collision dataset at CMS (35 pb^-1) in a search for superpartners of the quarks and gluons, targeting indirect hints of dark matter candidates in the context of supersymmetric theoretical frameworks. The analysis produced the highest-sensitivity results for SUSY to date and extended the LHC reach far beyond the previous Tevatron results. A generalized inclusive search is subsequently presented for new heavy particle pairs produced in √s = 7 TeV proton-proton collisions at the LHC, using 4.7 ± 0.1 fb^-1 of integrated luminosity from the second LHC run of 2011. The selected events are analyzed in the 2D razor space of MR and R, and the analysis is performed in 12 tiers of all-hadronic, single-lepton, and double-lepton final states, in the presence and absence of b-quarks, probing the third-generation sector using the event heavy-flavor content. The search is sensitive to generic supersymmetry models with minimal assumptions about the superpartner decay chains. No excess is observed in the number or shape of event yields relative to Standard Model predictions. Exclusion limits are derived in the CMSSM framework, with gluino masses up to 800 GeV and squark masses up to 1.35 TeV excluded at 95% confidence level, depending on the model parameters. The results are also interpreted for a collection of simplified models, in which gluinos are excluded with masses as large as 1.1 TeV for small neutralino masses, and first- and second-generation squarks, stops, and sbottoms are excluded for masses up to about 800, 425, and 400 GeV, respectively.
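For reference, with each event partitioned into two megajets with energies $E^{j_i}$ and momenta $\vec{p}^{\,j_i}$, the razor variables take the form used in the CMS razor literature:

$$M_R = \sqrt{\left(E^{j_1}+E^{j_2}\right)^2-\left(p_z^{j_1}+p_z^{j_2}\right)^2}, \qquad M_T^R = \sqrt{\frac{E_T^{\rm miss}\left(p_T^{j_1}+p_T^{j_2}\right)-\vec{E}_T^{\rm miss}\cdot\left(\vec{p}_T^{\,j_1}+\vec{p}_T^{\,j_2}\right)}{2}}, \qquad R = \frac{M_T^R}{M_R}.$$

For pair-produced heavy particles, $M_R$ peaks at a scale set by the mass splitting between the parent particle and the invisible daughter, while a cut on $R^2$ suppresses the QCD multijet background.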
With the discovery of a new boson by the CMS and ATLAS experiments in the γγ and 4-lepton final states, the identity of the putative Higgs candidate must be established through measurements of its properties. The spin and quantum numbers are of particular importance, and we describe a method, developed before the discovery, for measuring the J^PC of this particle using the observed signal events in the H to ZZ* to 4 lepton channel. Adaptations of the razor kinematic variables are introduced for the H to WW* to 2 lepton/2 neutrino channel, improving the resonance mass resolution and increasing the discovery significance. The prospects for incorporating this channel in an examination of the new boson's J^PC are discussed, with indications that it could provide complementary information to the H to ZZ* to 4 lepton final state, particularly for measuring CP violation in these decays.
Abstract:
A central objective in signal processing is to infer meaningful information from a set of measurements or data. While most signal models have an overdetermined structure (fewer unknowns than equations), traditionally very few statistical estimation problems have considered a data model which is underdetermined (more unknowns than equations). In recent times, however, an explosion of theoretical and computational methods has been developed, primarily to study underdetermined systems by imposing sparsity on the unknown variables. This is motivated by the observation that in spite of the huge volume of data that arises in sensor networks, genomics, imaging, particle physics, web search, etc., the information content is often much smaller than the number of raw measurements. This has given rise to the possibility of reducing the number of measurements by downsampling the data, which automatically gives rise to underdetermined systems.
In this thesis, we provide new directions for estimation in an underdetermined system, both for a class of parameter estimation problems and also for the problem of sparse recovery in compressive sensing. There are two main contributions of the thesis: design of new sampling and statistical estimation algorithms for array processing, and development of improved guarantees for sparse reconstruction by introducing a statistical framework to the recovery problem.
We consider underdetermined observation models in array processing where the number of unknown sources simultaneously received by the array can be considerably larger than the number of physical sensors. We study new sparse spatial sampling schemes (array geometries) as well as propose new recovery algorithms that can exploit priors on the unknown signals and unambiguously identify all the sources. The proposed sampling structure is generic enough to be extended to multiple dimensions as well as to exploit different kinds of priors in the model such as correlation, higher order moments, etc.
Recognizing the role of correlation priors and suitable sampling schemes for underdetermined estimation in array processing, we introduce a correlation-aware framework for recovering sparse support in compressive sensing. We show that it is possible to strictly increase the size of the recoverable sparse support using this framework, provided the measurement matrix is suitably designed. The proposed nested and coprime arrays are shown to be appropriate candidates in this regard, as the sketch below illustrates. We also provide new guarantees for convex and greedy formulations of the support recovery problem and demonstrate that it is possible to strictly improve upon existing guarantees.
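A small sketch of why these geometries help: the difference co-array of a two-level nested array with N1 + N2 physical sensors contains a contiguous set of O(N1·N2) correlation lags, so correlation-aware methods can identify more sources than sensors. The positions follow the standard nested-array construction; the script and the variable names N1, N2 are purely illustrative.

```python
# Difference co-array of a two-level nested array: N1 + N2 physical
# sensors yield a contiguous set of O(N1 * N2) virtual lags, which is
# what lets correlation-aware methods identify more sources than sensors.
N1, N2 = 3, 3
inner = list(range(1, N1 + 1))                    # sensors at 1, ..., N1
outer = [(N1 + 1) * k for k in range(1, N2 + 1)]  # sensors at (N1+1)k
sensors = inner + outer                           # 6 physical sensors

lags = sorted({a - b for a in sensors for b in sensors})
print(len(sensors))   # 6 physical sensors
print(len(lags))      # 23 contiguous lags, from -11 to 11
```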
This new paradigm of underdetermined estimation that explicitly establishes the fundamental interplay between sampling, statistical priors and the underlying sparsity, leads to exciting future research directions in a variety of application areas, and also gives rise to new questions that can lead to stand-alone theoretical results in their own right.
Abstract:
The negative impacts of ambient aerosol particles, or particulate matter (PM), on human health and climate are well recognized. However, owing to the complexity of aerosol particle formation and chemical evolution, emissions control strategies remain difficult to develop in a cost-effective manner. In this work, three studies are presented to address several key issues currently stymieing California's efforts to continue improving its air quality.
Gas-phase organic mass (GPOM) and CO emission factors are used in conjunction with measured enhancements in oxygenated organic aerosol (OOA) relative to CO to quantify the significant lack of closure between expected and observed organic aerosol concentrations attributable to fossil-fuel emissions. Two possible conclusions emerge from the analysis to yield consistency with the ambient organic data: (1) vehicular emissions are not a dominant source of anthropogenic fossil SOA in the Los Angeles Basin, or (2) the ambient SOA mass yields used to determine the SOA formation potential of vehicular emissions are substantially higher than those derived from laboratory chamber studies. Additional laboratory chamber studies confirm that, owing to vapor-phase wall loss, the SOA mass yields currently used in virtually all 3D chemical transport models are biased low by as much as a factor of 4. Furthermore, predictions from the Statistical Oxidation Model suggest that this bias could be as high as a factor of 8 if the influence of the chamber walls could be removed entirely.
Once vapor-phase wall loss has been accounted for in a new suite of laboratory chamber experiments, the SOA parameterizations within atmospheric chemical transport models should also be updated. To address the numerical challenges of implementing the next generation of SOA models in atmospheric chemical transport models, a novel mathematical framework, termed the Moment Method, is designed and presented. Assessment of the Moment Method strengths and weaknesses provide valuable insight that can guide future development of SOA modules for atmospheric CTMs.
Finally, regional inorganic aerosol formation and evolution is investigated via detailed comparison of predictions from the Community Multiscale Air Quality (CMAQ version 4.7.1) model against a suite of airborne and ground-based meteorological measurements, gas- and aerosol-phase inorganic measurements, and black carbon (BC) measurements over Southern California during the CalNex field campaign in May/June 2010. Results suggest that continuing to target sulfur emissions in the hope of reducing ambient PM concentrations may not be the most effective strategy for Southern California. Instead, targeting dairy emissions is likely to be an effective strategy for substantially reducing ammonium nitrate concentrations in the eastern part of the Los Angeles Basin.
Abstract:
The works presented in this thesis explore a variety of extensions of the standard model of particle physics which are motivated by baryon number (B) and lepton number (L), or some combination thereof. In the standard model, both baryon number and lepton number are accidental global symmetries violated only by non-perturbative weak effects, though the combination B-L is exactly conserved. Although there is currently no evidence for considering these symmetries as fundamental, there are strong phenomenological bounds restricting the existence of new physics violating B or L. In particular, there are strict limits on the lifetime of the proton whose decay would violate baryon number by one unit and lepton number by an odd number of units.
The first paper included in this thesis explores some of the simplest possible extensions of the standard model in which baryon number is violated, but the proton does not decay as a result. The second paper extends this analysis to explore models in which baryon number is conserved, but lepton flavor violation is present. Special attention is given to the processes of μ → e conversion and μ → eγ, which are constrained by existing experimental limits and relevant to future experiments.
The final two papers explore extensions of the minimal supersymmetric standard model (MSSM) in which both baryon number and lepton number, or the combination B-L, are elevated to the status of being spontaneously broken local symmetries. These models have a rich phenomenology including new collider signatures, stable dark matter candidates, and alternatives to the discrete R-parity symmetry usually built into the MSSM in order to protect against baryon and lepton number violating processes.
Abstract:
We study some aspects of conformal field theory, wormhole physics, and two-dimensional random surfaces. In spite of being rather different, these topics serve as examples of the issues that are involved, both at high and low energy scales, in formulating a quantum theory of gravity. In conformal field theory we show that fusion and braiding properties can be used to determine the operator product coefficients of the non-diagonal Wess-Zumino-Witten models. In wormhole physics we show how Coleman's proposed probability distribution would result in wormholes determining the value of θQCD. We attempt such a calculation and find the most probable value of θQCD to be π. This hints at a potential conflict with nature. In random surfaces we explore the behaviour of conformal field theories coupled to gravity and calculate some partition functions and correlation functions. Our results shed some light on the transition that is believed to occur when the central charge of the matter theory becomes larger than one.
Abstract:
I report the solubility and diffusivity of water in lunar basalt and an iron-free basaltic analogue at 1 atm and 1350 °C. Such parameters are critical for understanding the degassing histories of lunar pyroclastic glasses. Solubility experiments have been conducted over a range of fO2 conditions, from three log units below to five log units above the iron-wüstite buffer (IW), and over a range of pH2/pH2O from 0.03 to 24. Quenched experimental glasses were analyzed by Fourier transform infrared spectroscopy (FTIR) and secondary ion mass spectrometry (SIMS) and were found to contain up to ~420 ppm water. Results demonstrate that, under the conditions of our experiments: (1) hydroxyl is the only H-bearing species detected by FTIR; (2) the solubility of water is proportional to the square root of pH2O in the furnace atmosphere and is independent of fO2 and pH2/pH2O; (3) the solubility of water is very similar in both melt compositions; (4) the concentration of H2 in our iron-free experiments is <3 ppm, even at oxygen fugacities as low as IW-2.3 and pH2/pH2O as high as 24; and (5) SIMS analyses of water in iron-rich glasses equilibrated under variable fO2 conditions can be strongly influenced by matrix effects, even when the concentrations of water in the glasses are low. Our results can be used to constrain the entrapment pressure of the lunar melt inclusions of Hauri et al. (2011).
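The square-root dependence in result (2) is what one expects if water dissolves by reaction with the melt to form two hydroxyl groups, consistent with result (1):

$$\mathrm{H_2O\,(vapor)} + \mathrm{O^{2-}\,(melt)} \rightleftharpoons 2\,\mathrm{OH^{-}\,(melt)}, \qquad K=\frac{[\mathrm{OH}]^{2}}{p_{\mathrm{H_2O}}\,[\mathrm{O}^{2-}]} \;\Rightarrow\; [\mathrm{OH}]\propto\sqrt{p_{\mathrm{H_2O}}}.$$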
Diffusion experiments were conducted over a range of fO2 conditions from IW-2.2 to IW+6.7 and over a range of pH2/pH2O from nominally zero to ~10. The water concentrations measured in our quenched experimental glasses by SIMS and FTIR vary from a few ppm to ~430 ppm. Water concentration gradients are well described by models in which the diffusivity of water (D*water) is assumed to be constant. The relationship between D*water and water concentration is well described by a modified speciation model (Ni et al. 2012) in which both molecular water and hydroxyl are allowed to diffuse. The success of this modified speciation model in describing our results suggests that we have resolved the diffusivity of hydroxyl in basaltic melt for the first time. Best-fit values of D*water for our experiments on lunar basalt vary within a factor of ~2 over a range of pH2/pH2O from 0.007 to 9.7, a range of fO2 from IW-2.2 to IW+4.9, and a water concentration range from ~80 ppm to ~280 ppm. The relative insensitivity of our best-fit values of D*water to variations in pH2 suggests that H2 diffusion was not significant during degassing of the lunar glasses of Saal et al. (2008). Values of D*water during dehydration and hydration in H2/CO2 gas mixtures are approximately the same, which supports an equilibrium boundary condition for these experiments. However, dehydration experiments into CO2 and CO/CO2 gas mixtures leave some scope for the importance of kinetics during dehydration into H-free environments. The value of D*water chosen by Saal et al. (2008) for modeling the diffusive degassing of the lunar volcanic glasses is within a factor of three of our measured value in our lunar basaltic melt at 1350 °C.
In Chapter 4 of this thesis, I document significant zonation in major, minor, trace, and volatile elements in naturally glassy olivine-hosted melt inclusions from the Siqueiros Fracture Zone and the Galapagos Islands. Components with a higher concentration in the host olivine than in the melt (MgO, FeO, Cr2O3, and MnO) are depleted at the edges of the zoned melt inclusions relative to their centers, whereas, with the exceptions of CaO, H2O, and F, components with a lower concentration in the host olivine than in the melt (Al2O3, SiO2, Na2O, K2O, TiO2, S, and Cl) are enriched near the melt inclusion edges. This zonation is due to the formation of an olivine-depleted boundary layer in the adjacent melt in response to cooling and crystallization of olivine on the walls of the melt inclusions, concurrent with diffusive propagation of the boundary layer toward the inclusion center.
Concentration profiles of some components in the melt inclusions exhibit multicomponent diffusion effects, such as uphill diffusion (CaO, FeO) or slowing of the diffusion of typically rapidly diffusing components (Na2O, K2O) by coupling to slowly diffusing components such as SiO2 and Al2O3. Concentrations of H2O and F decrease towards the edges of some of the Siqueiros melt inclusions, suggesting either that these components have been lost from the inclusions into the host olivine late in their cooling histories and/or that these components are exhibiting multicomponent diffusion effects.
A model has been developed for the time-dependent evolution of MgO concentration profiles in melt inclusions under simultaneous depletion of MgO at the inclusion walls due to olivine growth and diffusion of MgO within the inclusion in response to this depletion. Observed concentration profiles were fit to this model to constrain their thermal histories. Cooling rates determined by a single-stage linear cooling model are 150–13,000 °C hr^-1 from the liquidus down to ~1000 °C, consistent with previously determined cooling rates for basaltic glasses; compositional trends with melt inclusion size observed in the Siqueiros melt inclusions are described well by this simple single-stage linear cooling model. Despite the overall success of the modeling of MgO concentration profiles using a single-stage cooling history, MgO concentration profiles in some melt inclusions are better fit by a two-stage cooling history with a slower-cooling first stage followed by a faster-cooling second stage; the inferred total duration of cooling from the liquidus down to ~1000 °C is 40 s to just over one hour.
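A highly simplified sketch of the kind of forward model involved is shown below, assuming spherical symmetry, a constant diffusivity, and a prescribed wall concentration held below the interior value; all parameter values and the function name are placeholders, not the thesis's fitted quantities.

```python
import numpy as np

def mgo_profile(D=1e-11, a=25e-6, nr=100, dt=5e-4, t_total=60.0,
                c0=10.0, c_wall=7.0):
    """Explicit finite-difference solution of dC/dt = D (C'' + (2/r) C')
    in a spherical inclusion of radius a (m), with the wall MgO value
    held below the interior value to mimic olivine growth on the walls.
    All numbers are illustrative placeholders."""
    r = np.linspace(0.0, a, nr)
    dr = r[1] - r[0]
    c = np.full(nr, c0)                      # initial uniform MgO (wt%)
    for _ in range(int(t_total / dt)):
        lap = np.zeros(nr)
        lap[1:-1] = ((c[2:] - 2 * c[1:-1] + c[:-2]) / dr**2
                     + (2.0 / r[1:-1]) * (c[2:] - c[:-2]) / (2 * dr))
        lap[0] = 6 * (c[1] - c[0]) / dr**2   # regularity at r = 0
        c += D * dt * lap
        c[-1] = c_wall                       # MgO drawdown at the wall
    return r, c
```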
Based on our observations and models, compositions of zoned melt inclusions (even if measured at the centers of the inclusions) will typically have been diffusively fractionated relative to the initially trapped melt; for such inclusions, the initial composition cannot be simply reconstructed based on olivine-addition calculations, so caution should be exercised in application of such reconstructions to correct for post-entrapment crystallization of olivine on inclusion walls. Off-center analyses of a melt inclusion can also give results significantly fractionated relative to simple olivine crystallization.
All melt inclusions from the Siqueiros and Galapagos sample suites exhibit zoning profiles, and this feature may be nearly universal in glassy, olivine-hosted inclusions. If so, zoning profiles in melt inclusions could be widely useful to constrain late-stage syneruptive processes and as natural diffusion experiments.
Abstract:
In this thesis we are concerned with finding representations of the algebra of SU(3) vector and axial-vector charge densities at infinite momentum (the "current algebra") to describe the mesons, idealizing the real continua of multiparticle states as a series of discrete resonances of zero width. Such representations would describe the masses and quantum numbers of the mesons, the shapes of their Regge trajectories, their electromagnetic and weak form factors, and (approximately, through the PCAC hypothesis) pion emission or absorption amplitudes.
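The commutation relations to be represented are the standard $SU(3)\times SU(3)$ charge algebra, shown here for the integrated vector charges $Q_a$ and axial-vector charges $Q_a^5$ (the thesis works with the corresponding charge densities at infinite momentum):

$$[Q_a,Q_b]=if_{abc}Q_c,\qquad [Q_a,Q_b^5]=if_{abc}Q_c^5,\qquad [Q_a^5,Q_b^5]=if_{abc}Q_c,$$

where $f_{abc}$ are the SU(3) structure constants.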
We assume that the mesons have internal degrees of freedom equivalent to being made of two quarks (one an antiquark) and look for models in which the mass is SU(3)-independent and the current is a sum of contributions from the individual quarks. Requiring that the current algebra, as well as conditions of relativistic invariance, be satisfied turns out to be very restrictive, and, in fact, no model has been found which satisfies all requirements and gives a reasonable mass spectrum. We show that using more general mass and current operators but keeping the same internal degrees of freedom will not make the problem any more solvable. In particular, in order for any two-quark solution to exist it must be possible to solve the "factorized SU(2) problem," in which the currents are isospin currents and are carried by only one of the component quarks (as in the K meson and its excited states).
In the free-quark model the currents at infinite momentum are found using a manifestly covariant formalism and are shown to satisfy the current algebra, but the mass spectrum is unrealistic. We then consider a pair of quarks bound by a potential, finding the current as a power series in 1/m, where m is the quark mass. Here it is found impossible to satisfy the algebra and relativistic invariance with the type of potential tried, because the current contributions from the two quarks do not commute with each other to order 1/m^3. However, it may be possible to solve the factorized SU(2) problem with this model.
The factorized problem can be solved exactly in the case where all mesons have the same mass, using a covariant formulation in terms of an internal Lorentz group. For a more realistic, nondegenerate mass there is difficulty in covariantly solving even the factorized problem; one model is described which almost works but appears to require particles of spacelike 4-momentum, which seem unphysical.
Although the search for a completely satisfactory model has been unsuccessful, the techniques used here might eventually reveal a working model. There is also a possibility of satisfying a weaker form of the current algebra with existing models.