947 results for non-standard neutrino interactions


Relevance: 100.00%

Abstract:

We re-analyse the non-standard interaction (NSI) solutions to the solar neutrino problem in the light of the latest solar and atmospheric neutrino data. The latter require oscillations (OSC), while the former do not. Within such a three-neutrino framework the solar and atmospheric neutrino sectors are connected not only by the neutrino mixing angle $\theta_{13}$, constrained by reactor and atmospheric data, but also by the flavour-changing (FC) and non-universal (NU) parameters accounting for the solar data. Since the NSI solution is energy-independent, the spectrum is undistorted, so the observables for the global analysis are the solar neutrino rates in all experiments together with the Super-Kamiokande day-night measurements. We find that the NSI description of the solar data is slightly better than that of the OSC solution, and that the allowed NSI regions are determined mainly by the rate analysis. Using a few simplified ansatzes for the NSI interactions, we explicitly demonstrate that the NSI values indicated by the solar data analysis are also fully acceptable for the atmospheric data. (C) 2002 Elsevier B.V. All rights reserved.
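
The FC and NU parameters mentioned above enter the neutrino propagation Hamiltonian in matter. As a sketch, assuming the standard two-flavour NSI parametrization (the symbols $\varepsilon$ and $\varepsilon'$ are conventional and are not taken from this abstract):

```latex
% Two-flavour effective Hamiltonian with NSI on a fermion f of number density N_f;
% \varepsilon (flavour-changing) and \varepsilon' (non-universal) are the usual
% NSI parameters -- an assumed standard parametrization, not quoted from the paper.
H = \frac{\Delta m^2}{4E}
    \begin{pmatrix} -\cos 2\theta & \sin 2\theta \\ \sin 2\theta & \cos 2\theta \end{pmatrix}
  + \sqrt{2}\,G_F N_e \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}
  + \sqrt{2}\,G_F N_f \begin{pmatrix} 0 & \varepsilon \\ \varepsilon & \varepsilon' \end{pmatrix}
```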

Relevance: 100.00%

Abstract:

Aim: A new method of penumbral analysis is implemented which allows an unambiguous determination of field size and of penumbra size and quality for small fields and other non-standard fields. Both source occlusion and lateral electronic disequilibrium affect the size and shape of cross-axis profile penumbrae; each is examined in detail.
Method: The square of the derivative of the cross-axis profile is plotted. The resultant graph displays two peaks in place of the two penumbrae. This gives a strong visualisation of the quality of a field penumbra, as well as a mathematically consistent method of determining field size (the distance between the two peaks' maxima) and penumbra (the full-width-tenth-maximum of each peak). Cross-axis profiles were simulated in a water phantom at a depth of 5 cm using Monte Carlo modelling, for field sizes between 5 and 30 mm. The field size and penumbra size of each field were calculated using the method above, as well as with the traditional definitions set out in IEC 976. The effects of source occlusion and lateral electronic disequilibrium on the penumbrae were isolated by repeating the simulations with electron transport removed and with an electron spot size of 0 mm, respectively.
Results: All field sizes calculated using the traditional and proposed methods agreed within 0.2 mm. The penumbra size measured using the proposed method was systematically 1.8 mm larger than with the traditional method at all field sizes. The size of the source had a larger effect on the size of the penumbra than did lateral electronic disequilibrium, particularly at very small field sizes.
Conclusion: Traditional methods of calculating field size and penumbra prove mathematically adequate for small fields. However, the field size definition proposed in this study would be more robust for other non-standard fields, such as flattening-filter-free beams. Source occlusion plays a bigger role than lateral electronic disequilibrium in small-field penumbra size.
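
A minimal Python sketch of the proposed analysis (the function and variable names are ours, and the synthetic profile is purely illustrative):

```python
import numpy as np
from scipy.special import erf  # only used for the synthetic test profile

def penumbra_metrics(x, dose):
    """Field size and penumbrae from the squared derivative of a cross-axis profile.

    x    : positions in mm, uniformly spaced; dose : relative dose at each position.
    Returns (field_size, left_penumbra, right_penumbra) in mm.
    """
    g = np.gradient(dose, x) ** 2              # squared derivative: one peak per penumbra
    mid = len(x) // 2
    i_left = int(np.argmax(g[:mid]))           # maximum of the left peak
    i_right = mid + int(np.argmax(g[mid:]))    # maximum of the right peak
    field_size = x[i_right] - x[i_left]        # distance between the two peak maxima

    def fwtm(i_peak):
        # full-width-tenth-maximum of the peak containing index i_peak
        tenth = g[i_peak] / 10.0
        lo = i_peak
        while lo > 0 and g[lo - 1] > tenth:
            lo -= 1
        hi = i_peak
        while hi < len(g) - 1 and g[hi + 1] > tenth:
            hi += 1
        return x[hi] - x[lo]

    return field_size, fwtm(i_left), fwtm(i_right)

# Synthetic 20 mm field with error-function edges, for illustration only:
x = np.linspace(-30.0, 30.0, 1201)
dose = 0.5 * (erf((x + 10.0) / 2.0) - erf((x - 10.0) / 2.0))
print(penumbra_metrics(x, dose))
```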

Relevance: 100.00%

Abstract:

Non-standard finite difference methods (NSFDMs), introduced by Mickens [Non-standard Finite Difference Models of Differential Equations, World Scientific, Singapore, 1994], are interesting alternatives to the traditional finite difference and finite volume methods. When applied to linear hyperbolic conservation laws, these methods reproduce exact solutions. In this paper, the NSFDM is first extended to hyperbolic systems of conservation laws by a novel utilization of the decoupled equations in characteristic variables. In the second part of the paper, the NSFDM is studied for its efficacy in application to nonlinear scalar hyperbolic conservation laws. The original NSFDMs introduced by Mickens (1994) were not in conservation form, an important feature for capturing discontinuities at the right locations. Mickens [Construction and analysis of a non-standard finite difference scheme for the Burgers–Fisher equations, Journal of Sound and Vibration 257 (4) (2002) 791–797] recently introduced an NSFDM in conservation form. This method captures shock waves exactly, without any numerical dissipation. In this paper, this algorithm is tested for the case of expansion waves with sonic points and is found to generate unphysical expansion shocks. As a remedy for this defect, we use the strategy of composite schemes [R. Liska, B. Wendroff, Composite schemes for conservation laws, SIAM Journal on Numerical Analysis 35 (6) (1998) 2250–2271], in which the accurate NSFDM is used as the basic scheme and a localized relaxation NSFDM is used as the supporting scheme, which acts like a filter. Relaxation schemes, introduced by Jin and Xin [The relaxation schemes for systems of conservation laws in arbitrary space dimensions, Communications on Pure and Applied Mathematics 48 (1995) 235–276], are based on relaxation systems which replace the nonlinear hyperbolic conservation laws by a semi-linear system with a stiff relaxation term. The relaxation parameter (λ) is chosen locally on a three-point grid stencil, which makes the proposed method more efficient. The composite scheme overcomes the problem of unphysical expansion shocks and captures shock waves with better accuracy than the upwind relaxation scheme, as demonstrated by the test cases, together with comparisons against popular numerical methods such as the Roe and ENO schemes.
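
The exactness property for linear problems is easy to demonstrate. A minimal sketch for the linear advection equation $u_t + a u_x = 0$, assuming the nonstandard choice of time step $\Delta t = \Delta x / a$ (this illustrates the exactness claim only, not the paper's scheme for systems):

```python
import numpy as np

# Linear advection u_t + a*u_x = 0 on a periodic grid. With the nonstandard
# choice dt = dx/a (unit Courant number), the upwind-form update reduces to
# u[j] <- u[j-1], which propagates the profile with no numerical dissipation.
a, dx = 1.0, 0.01
dt = dx / a                                       # nonstandard (exact) time step
x = np.arange(0.0, 1.0, dx)
u = np.exp(-200.0 * (x - 0.3) ** 2)               # initial profile

nsteps = 40
for _ in range(nsteps):
    u = u - (a * dt / dx) * (u - np.roll(u, 1))   # reduces to u = np.roll(u, 1)

y = (x - a * nsteps * dt) % 1.0                   # exact characteristic trace-back
exact = np.exp(-200.0 * (y - 0.3) ** 2)
print(np.max(np.abs(u - exact)))                  # agrees to rounding error
```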

Relevance: 100.00%

Abstract:

In a search for new phenomena in a signature suppressed in the standard model of elementary particles (SM), we compare the inclusive production of events containing a lepton, a photon, significant transverse momentum imbalance (MET), and a jet identified as containing a b-quark to SM predictions. The search uses data produced in proton-antiproton collisions at $\sqrt{s} = 1.96$ TeV, corresponding to 1.9 fb$^{-1}$ of integrated luminosity, taken with the CDF detector at the Fermilab Tevatron. We find 28 lepton+photon+MET+b events versus an expectation of $31.0^{+4.1}_{-3.5}$ events. If we further require events to contain at least three jets and large total transverse energy, simulations predict that the largest SM source is top-quark pair production with an additional radiated photon, $t\bar{t}\gamma$. In the data we observe 16 $t\bar{t}\gamma$ candidate events versus an expectation from SM sources of $11.2^{+2.3}_{-2.1}$. Assuming the difference between the observed number and the predicted non-top-quark total is due to SM top-quark production, we estimate the $t\bar{t}\gamma$ cross section to be $0.15 \pm 0.08$ pb.
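
The final number follows from a standard counting-experiment extraction. As a hedged sketch (the acceptance-times-efficiency symbol $A\epsilon$ is ours, not taken from the abstract):

```latex
% Generic cross-section extraction: N_obs = observed candidates, N_bkg = the
% predicted non-top-quark total, A*epsilon = acceptance times efficiency, and
% the integral is the integrated luminosity (1.9 fb^{-1} here).
\sigma_{t\bar{t}\gamma} = \frac{N_{\mathrm{obs}} - N_{\mathrm{bkg}}}{A\,\epsilon \int L\,dt}
```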

Relevance: 100.00%

Abstract:

The Daya Bay Reactor Antineutrino Experiment observed the disappearance of reactor $\bar{\nu}_e$ from six 2.9 GW$_{\mathrm{th}}$ reactor cores at Daya Bay, China. The experiment consists of six functionally identical $\bar{\nu}_e$ detectors, which detect $\bar{\nu}_e$ by inverse beta decay using a total of about 120 metric tons of Gd-loaded liquid scintillator as the target volume. These detectors were installed in three underground experimental halls, two near halls and one far hall, under the mountains near Daya Bay, with overburdens of 250, 265, and 860 m.w.e. and flux-weighted baselines of 470 m, 576 m, and 1648 m. A total of 90179 $\bar{\nu}_e$ candidates were observed in the six detectors over a period of 55 days: 57549 at the Daya Bay near site, 22169 at the Ling Ao near site, and 10461 at the far site. From a rate-only analysis, the value of $\sin^2 2\theta_{13}$ was determined to be $0.092 \pm 0.017$.
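
For context, a rate-only analysis compares the measured to the expected rate at each baseline through the short-baseline survival probability; the standard two-parameter form (assumed here, not quoted from the abstract) is:

```latex
% \bar{\nu}_e survival probability at reactor baselines, neglecting the solar term:
P(\bar{\nu}_e \to \bar{\nu}_e) \approx 1 - \sin^2 2\theta_{13}\,
  \sin^2\!\left(\frac{1.267\,\Delta m^2_{31}\,[\mathrm{eV}^2]\,L\,[\mathrm{m}]}{E\,[\mathrm{MeV}]}\right)
```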

Relevance: 100.00%

Abstract:

This thesis explores the dynamics of scale interactions in a turbulent boundary layer through a forcing-response type experimental study. An emphasis is placed on the analysis of triadic wavenumber interactions, since the governing Navier-Stokes equations for the flow necessitate a direct coupling between triadically consistent scales. Two sets of experiments were performed in which deterministic disturbances were introduced into the flow using a spatially impulsive dynamic wall perturbation. Hot-wire anemometry was employed to measure the downstream turbulent velocity and study the flow response to the external forcing. In the first set of experiments, which were based on a recent investigation of dynamic forcing effects in a turbulent boundary layer, a 2D (spanwise-constant) spatio-temporal normal mode was excited in the flow; the streamwise length and time scales of the synthetic mode roughly correspond to the very-large-scale motions (VLSMs) found naturally in canonical flows. Correlation studies between the large- and small-scale velocity signals reveal an alteration of the natural phase relations between scales by the synthetic mode. In particular, a strong phase-locking or organizing effect is seen on small scales directly coupled to it through triadic interactions. Having characterized the bulk influence of a single energetic mode on the flow dynamics, a second set of experiments aimed at isolating specific triadic interactions was performed. Two distinct 2D large-scale normal modes were excited in the flow, and the response at the corresponding sum and difference wavenumbers was isolated from the turbulent signals. Results from this experiment serve as a unique demonstration of direct nonlinear interactions in a fully turbulent wall-bounded flow, and allow for the examination of phase relationships involving specific interacting scales. A direct connection is also made to the Navier-Stokes resolvent operator framework developed in the recent literature. Results and analysis from the present work offer insights into the dynamical structure of wall turbulence, and have interesting implications for the design of practical turbulence manipulation and control strategies.
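
The triadic mechanism itself is simple to illustrate: a quadratic nonlinearity, like the advection term of the Navier-Stokes equations, acting on two input scales $k_1$ and $k_2$ generates content at $k_1 + k_2$ and $|k_1 - k_2|$. A minimal sketch with a synthetic signal (the wavenumbers are purely illustrative):

```python
import numpy as np

# Two 'forced' modes; a quadratic nonlinearity generates their sum and
# difference wavenumbers (plus the harmonics 2*k1 and 2*k2).
n = 1024
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
k1, k2 = 12, 17                        # illustrative wavenumbers
u = np.cos(k1 * x) + np.cos(k2 * x)    # synthetic large-scale input
q = u * u                              # quadratic interaction

spec = np.abs(np.fft.rfft(q)) / n
print([k for k in range(1, n // 2) if spec[k] > 1e-6])
# -> [5, 24, 29, 34], i.e. |k1-k2|, 2*k1, k1+k2, 2*k2
```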

Relevance: 100.00%

Abstract:

We discuss solvability issues of $\mathcal{H}_-/\mathcal{H}_{2/\infty}$ optimal fault detection problems in the most general setting. A solution approach is presented which successively reduces the initial problem to simpler ones. The last computational step may in general involve the solution of a non-standard $\mathcal{H}_-/\mathcal{H}_{2/\infty}$ optimization problem, for which we discuss possible solution approaches. Using an appropriate definition of the $\mathcal{H}_-$ index, we provide a complete solution of this problem in the case of the $\mathcal{H}_2$-norm. Furthermore, we discuss the solvability issues in the case of the $\mathcal{H}_\infty$-norm. © 2011 IEEE.
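
For context, the most common definition of the $\mathcal{H}_-$ index of a fault-to-residual transfer matrix $G$ is the worst-case smallest gain over a frequency band (the paper's "appropriate definition" may differ; this standard form is an assumption):

```latex
% Smallest-gain (H-minus) index over a frequency band [\omega_1, \omega_2],
% with \sigma_min the least singular value:
\|G\|_- = \inf_{\omega \in [\omega_1,\,\omega_2]} \sigma_{\min}\bigl(G(j\omega)\bigr)
```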

Relevance: 100.00%

Abstract:

One of the tasks of teaching (Ball, Thames, & Phelps, 2008) concerns the work of interpreting student errors and evaluating alternative algorithms used by students. Teachers' ability to understand nonstandard student work affects their instructional decisions, the explanations they provide in the classroom, the way they guide their students, and how they conduct mathematical discussions. However, their knowledge, or their perception of that knowledge, may not correspond to the actual level of knowledge needed to support flexibility and fluency in a mathematics classroom. In this paper, we focus on Norwegian and Portuguese teachers' reflections when trying to make sense of students' use of nonstandard subtraction algorithms and of the mathematics embedded in them. By discussing the mathematical knowledge associated with these situations and revealed in the teachers' reflections, we can perceive the difficulties teachers have in making sense of students' solutions that differ from those most commonly reached.
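
As an illustration of the kind of nonstandard algorithm at issue (our example, not one drawn from the paper's data), a student might subtract left to right, allowing negative partial results:

```latex
% 523 - 247, column by column with negative partials:
500 - 200 = 300, \qquad 20 - 40 = -20, \qquad 3 - 7 = -4,
% and recombining: 300 - 20 - 4 = 276.
```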

Relevance: 100.00%

Abstract:

Over the last decade, technological developments in radiation therapy have considerably transformed treatment techniques. The new non-standard beams improve dose conformity to target volumes but also complicate dosimetry procedures. Since recent studies have demonstrated that the current protocols are invalid for non-standard beams, a new protocol applicable to the reference dosimetry of these beams is being prepared by the IAEA-AAPM. The first aim of this study is to characterize the factors responsible for non-unity corrections in non-standard beam dosimetry, and thereby provide conceptual solutions to minimize the magnitude of the corrections proposed in the new IAEA-AAPM formalism. The second aim of the study is to construct methods for estimating uncertainties accurately in non-standard dosimetry, and to evaluate the realistic uncertainty levels achievable in clinical situations. The results of the study show that reporting the dose to the water-filled sensitive volume of the chamber reduces the correction by about half under high dose gradients. A theoretical relation between the correction factor for ideal non-standard fields and the gradient factor of the reference field is obtained. In radiochromic film dosimetry, uncertainty levels of the order of 0.3% are obtained by applying a strict procedure, which demonstrates a potential interest for non-standard beam measurements. The results also suggest that the experimental uncertainties of non-standard beams must be taken seriously, whether during daily verification procedures or during calibration procedures. Moreover, these uncertainties could be a limiting factor in the new generation of protocols.
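
The new IAEA-AAPM formalism referred to is, to our knowledge, that of Alfonso et al. (2008), whose central quantity is a field correction factor relating the clinical field $f_{\mathrm{clin}}$ to a machine-specific reference field $f_{\mathrm{msr}}$ (sketched here for context; the symbols follow that reference, not this thesis):

```latex
% Correction factor between the clinical and machine-specific reference fields,
% defined as the ratio of dose-to-water per detector reading in each field:
k^{f_{\mathrm{clin}},f_{\mathrm{msr}}}_{Q_{\mathrm{clin}},Q_{\mathrm{msr}}}
  = \frac{D^{f_{\mathrm{clin}}}_{w,Q_{\mathrm{clin}}} \big/ M^{f_{\mathrm{clin}}}_{Q_{\mathrm{clin}}}}
         {D^{f_{\mathrm{msr}}}_{w,Q_{\mathrm{msr}}} \big/ M^{f_{\mathrm{msr}}}_{Q_{\mathrm{msr}}}}
```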