984 results for SCALAR
Abstract:
We calculate the anomalous dimensions of operators with large global charge J in certain strongly coupled conformal field theories in three dimensions, such as the O(2) model and the supersymmetric fixed point with a single chiral superfield and a W = Φ³ superpotential. Working in a 1/J expansion, we find that the large-J sector of both examples is controlled by a conformally invariant effective Lagrangian for a Goldstone boson of the global symmetry. For both these theories, we find that the lowest state with charge J is always a scalar operator whose dimension Δ_J satisfies the sum rule J² Δ_J − (J²/2 + J/4 + 3/16) Δ_{J−1} − (J²/2 − J/4 + 3/16) Δ_{J+1} = 0.04067 up to corrections that vanish at large J. The spectrum of low-lying excited states is also calculable explicitly: for example, the second-lowest primary operator has spin two and dimension Δ_J + √3. In the supersymmetric case, the dimensions of all half-integer-spin operators lie above the dimensions of the integer-spin operators by a gap of order J^{+1/2}. The propagation speeds of the Goldstone waves and heavy fermions are 1/√2 and ±1/2 times the speed of light, respectively. These values, including the negative one, are necessary for the consistent realization of the superconformal symmetry at large J.
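As a quick consistency check (not part of the paper), one can verify numerically that the sum-rule combination above annihilates the leading J^{3/2} and J^{1/2} terms of a large-J expansion of Δ_J, while sending a J-independent constant to −3/8 of itself, which is why the left-hand side stays finite at large J. A minimal sketch, using high-precision arithmetic to avoid catastrophic cancellation:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50  # high precision: the combination cancels ~20 leading digits

def sum_rule(delta, J):
    """J^2 Δ_J − (J²/2 + J/4 + 3/16) Δ_{J−1} − (J²/2 − J/4 + 3/16) Δ_{J+1}."""
    J = Decimal(J)
    a = J**2 / 2 + J / 4 + Decimal(3) / 16
    b = J**2 / 2 - J / 4 + Decimal(3) / 16
    return J**2 * delta(J) - a * delta(J - 1) - b * delta(J + 1)

three_halves = lambda J: J * J.sqrt()  # trial Δ_J = J^{3/2}
one_half     = lambda J: J.sqrt()      # trial Δ_J = J^{1/2}
const        = lambda J: Decimal(1)    # trial Δ_J = 1

for J in (100, 10_000, 1_000_000):
    print(J, sum_rule(three_halves, J), sum_rule(one_half, J), sum_rule(const, J))
```

The first two trial functions give results that tend to zero as J grows, while the constant gives exactly −0.375 = −3/8 for every J.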
Abstract:
We present results on the nucleon scalar, axial, and tensor charges as well as on the momentum fraction and the helicity and transversity moments. The pion momentum fraction is also presented. The computation of these key observables is carried out using lattice QCD simulations at a physical value of the pion mass. The evaluation is based on gauge configurations generated with two degenerate sea quarks of twisted mass fermions with a clover term. We investigate excited-state contributions with the nucleon quantum numbers by analyzing three sink-source time separations. We find that excited states contribute significantly to the scalar charge, and to a lesser degree to the nucleon momentum fraction and helicity moment. Our result for the nucleon axial charge agrees with the experimental value. Furthermore, we predict a value of 1.027(62) in the MS-bar scheme at 2 GeV for the isovector nucleon tensor charge directly at the physical point. The pion momentum fraction is found to be ⟨x⟩_{u−d}^{π±} = 0.214(15)(+12/−9) in the MS-bar scheme at 2 GeV.
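For readers combining the quoted uncertainties, a common convention (assumed here, not stated in the abstract) is to add the statistical and asymmetric systematic errors in quadrature:

```python
from math import hypot

# ⟨x⟩_{u−d}^{π±} = 0.214(15)(+12/−9): statistical error 0.015,
# asymmetric systematic +0.012/−0.009 (last-digit notation).
central, stat = 0.214, 0.015
syst_up, syst_dn = 0.012, 0.009

# Assuming independent errors combined in quadrature (a convention,
# not something the abstract specifies):
up = hypot(stat, syst_up)
dn = hypot(stat, syst_dn)
print(f"{central} +{up:.3f} / -{dn:.3f}")
```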
Abstract:
We study the emergence of the Heisenberg (Bianchi II) algebra in hyper-Kähler and quaternionic spaces. This is motivated by the rôle that spaces with this symmetry play in N = 2 hypermultiplet scalar manifolds. We show how to construct related pairs of hyper-Kähler and quaternionic spaces under general symmetry assumptions, the former being a zooming-in limit of the latter at vanishing scalar curvature. We further apply this method to the two hyper-Kähler spaces with Heisenberg algebra, which is reduced to U(1) × U(1) at the quaternionic level. We also show that no quaternionic spaces exist with a strict Heisenberg symmetry – as opposed to Heisenberg ⋉ U(1). We finally discuss the realization of the latter by gauging appropriate Sp(2, 4) generators in N = 2 conformal supergravity.
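As generic background on the symmetry discussed above (not taken from the paper): the Heisenberg algebra has generators X, Y, Z with [X, Y] = Z and Z central, and its standard realization by strictly upper-triangular 3×3 matrices can be checked directly:

```python
import numpy as np

# Strictly upper-triangular 3x3 matrix realization of the Heisenberg algebra.
X = np.array([[0, 1, 0], [0, 0, 0], [0, 0, 0]])
Y = np.array([[0, 0, 0], [0, 0, 1], [0, 0, 0]])
Z = np.array([[0, 0, 1], [0, 0, 0], [0, 0, 0]])

comm = lambda A, B: A @ B - B @ A

print(np.array_equal(comm(X, Y), Z))    # [X, Y] = Z  -> True
print(not comm(X, Z).any())             # Z central   -> True
print(not comm(Y, Z).any())             # Z central   -> True
```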
Abstract:
We study the effects of a finite cubic volume with twisted boundary conditions on pseudoscalar mesons. We apply Chiral Perturbation Theory in the p-regime and introduce the twist by means of a constant vector field. The corrections to masses, decay constants, pseudoscalar coupling constants and form factors are calculated at next-to-leading order. We detail the derivations and compare with results available in the literature. In some cases there is disagreement, due to a different treatment of the new extra terms generated by the breaking of cubic invariance. We advocate treating such terms as renormalizations of the twisting angles and reabsorbing them in the on-shell conditions. We confirm that the corrections to masses, decay constants, and pseudoscalar coupling constants are related by means of chiral Ward identities. Furthermore, we show that the matrix elements of the scalar (resp. vector) form factor satisfy the Feynman–Hellmann theorem (resp. the Ward–Takahashi identity). To show the Ward–Takahashi identity we construct an effective field theory for charged pions which is invariant under electromagnetic gauge transformations and which reproduces the results obtained with Chiral Perturbation Theory at vanishing momentum transfer. This generalizes considerations previously published for periodic boundary conditions to twisted boundary conditions. Another method to estimate finite-volume corrections is provided by asymptotic formulae. Asymptotic formulae were introduced by Lüscher and relate the corrections of a given physical quantity to an integral of a specific amplitude, evaluated in infinite volume. Here, we revise the original derivation of Lüscher and generalize it to finite volume with twisted boundary conditions. In some cases, the derivation involves complications due to extra terms generated by the breaking of cubic invariance. We isolate such terms and treat them as renormalization terms, just as before.
In this way, we derive asymptotic formulae for masses, decay constants, pseudoscalar coupling constants, and scalar form factors, as well as for the renormalization terms. We apply all these formulae in combination with Chiral Perturbation Theory and estimate the corrections beyond next-to-leading order. We show that the asymptotic formulae for masses, decay constants, and pseudoscalar coupling constants are related by means of chiral Ward identities. A similar relation independently connects the asymptotic formulae for the renormalization terms. We check these relations for charged pions through a direct calculation. To conclude, a numerical analysis quantifies the importance of finite-volume corrections at next-to-leading order and beyond. We perform a generic analysis and illustrate two possible applications to real simulations.
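To convey the scale of such finite-volume effects, here is a schematic shell sum (an illustration only, not the actual Lüscher formulae, which carry amplitude-dependent prefactors): contributions from periodic images at squared distance k are weighted by the multiplicity of integer vectors with |n|² = k and suppressed as exp(−√k·mL):

```python
import math

# Multiplicity m(k) of integer vectors n in Z^3 with |n|^2 = k,
# which weights the shells in finite-volume sums.
def multiplicity(k):
    r = range(-math.isqrt(k) - 1, math.isqrt(k) + 2)
    return sum(1 for x in r for y in r for z in r if x*x + y*y + z*z == k)

# Schematic shell sum: each shell is suppressed as exp(-sqrt(k)*mL);
# the true asymptotic formulae multiply each term by amplitude-dependent factors.
def shells(mL, kmax=8):
    return sum(multiplicity(k) * math.exp(-math.sqrt(k) * mL)
               for k in range(1, kmax + 1))

for mL in (3.0, 4.0, 5.0):   # typical values of m_pi * L in simulations
    print(mL, shells(mL))
```

The rapid decrease with mL is why finite-volume corrections are usually quoted as negligible once m_pi·L exceeds about 4.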
Abstract:
This article gives an overview of the status of experimental searches for dark matter at the end of 2014. The main focus is on direct searches for weakly interacting massive particles (WIMPs) using underground-based low-background detectors, especially on the new results published in 2014. WIMPs are excellent dark matter candidates, predicted by many theories beyond the standard model of particle physics, and are expected to interact with the target nuclei either via spin-independent (scalar) or spin-dependent (axial-vector) couplings. Non-WIMP dark matter candidates, especially axions and axion-like particles, are also briefly discussed.
Abstract:
The use of intensity modulated radiotherapy (IMRT) treatments necessitates a significant amount of patient-specific quality assurance (QA). This research investigated the precision and accuracy of Kodak EDR2 film measurements for IMRT verifications, the use of comparisons between 2D dose calculations and measurements to improve treatment plan beam models, and the dosimetric impact of delivery errors. New measurement techniques and software were developed and used clinically at M. D. Anderson Cancer Center. The software implemented two new dose comparison parameters, the 2D normalized agreement test (NAT) and the scalar NAT index. A single-film calibration technique using multileaf collimator (MLC) delivery was developed. EDR2 film's optical density response was found to be sensitive to several factors: radiation time, length of time between exposure and processing, and phantom material. Precision of EDR2 film measurements was found to be better than 1%. For IMRT verification, EDR2 film measurements agreed with ion chamber results to 2%/2 mm accuracy for single-beam fluence map verifications and to 5%/2 mm for transverse plane measurements of complete plan dose distributions. The same system was used to quantitatively optimize the radiation field offset and MLC transmission beam modeling parameters for Varian MLCs. While scalar dose comparison metrics can work well for optimization purposes, the influence of external parameters on the dose discrepancies must be minimized. The ability of 2D verifications to detect delivery errors was tested with simulated data. The dosimetric characteristics of delivery errors were compared to patient-specific clinical IMRT verifications. For the clinical verifications, the NAT index and the percent of pixels failing the gamma index were exponentially distributed and dependent upon the measurement phantom but not the treatment site.
Delivery errors affecting all beams in the treatment plan were flagged by the NAT index, although delivery errors impacting only one beam could not be differentiated from routine clinical verification discrepancies. Clinical use of this system will flag outliers, allow physicists to examine their causes, and perhaps improve the level of agreement between radiation dose distribution measurements and calculations. The principles used to design and evaluate this system are extensible to future multidimensional dose measurements and comparisons.
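The gamma index mentioned above is a standard dose-comparison metric (Low et al.); a minimal brute-force 2D sketch is given below, with illustrative tolerances (the NAT index itself is specific to this work and is not reproduced here):

```python
import math

def gamma_index(measured, calculated, spacing, dose_tol, dta_tol):
    """Per-pixel 2D gamma: minimum over calculation points of
    sqrt((spatial distance / DTA)^2 + (dose difference / tolerance)^2).
    Brute-force search over the whole grid; dose_tol in dose units,
    dta_tol and spacing in the same length units (e.g. mm)."""
    ny, nx = len(measured), len(measured[0])
    gamma = [[0.0] * nx for _ in range(ny)]
    for i in range(ny):
        for j in range(nx):
            best = float("inf")
            for k in range(ny):
                for l in range(nx):
                    dr2 = ((i - k)**2 + (j - l)**2) * spacing**2 / dta_tol**2
                    dd2 = (measured[i][j] - calculated[k][l])**2 / dose_tol**2
                    best = min(best, math.sqrt(dr2 + dd2))
            gamma[i][j] = best
    return gamma

# Identical distributions pass trivially (gamma = 0 everywhere);
# the usual passing criterion is gamma <= 1 per pixel.
dose = [[1.0, 2.0], [3.0, 4.0]]
print(gamma_index(dose, dose, spacing=1.0, dose_tol=0.03, dta_tol=3.0))
```

Production implementations restrict the search to a neighborhood and interpolate the calculated dose, but the metric itself is the one above.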
Abstract:
Next-generation DNA sequencing platforms can effectively detect the entire spectrum of genomic variation and are emerging as a major tool for systematic exploration of the universe of variants and interactions in the entire genome. However, the data produced by next-generation sequencing technologies suffer from three basic problems: sequence errors, assembly errors, and missing data. Current statistical methods for genetic analysis are well suited for detecting the association of common variants, but are less suitable for rare variants. This poses a great challenge for sequence-based genetic studies of complex diseases. This research dissertation utilized the genome continuum model as a general principle, and stochastic calculus and functional data analysis as tools, to develop novel and powerful statistical methods for the next generation of association studies of both qualitative and quantitative traits in the context of sequencing data, ultimately shifting the paradigm of association analysis from the current locus-by-locus analysis to collectively analyzing genome regions. In this project, functional principal component (FPC) methods coupled with high-dimensional data reduction techniques were used to develop novel and powerful methods for testing the associations of the entire spectrum of genetic variation within a segment of the genome or a gene, regardless of whether the variants are common or rare. Classical quantitative genetics suffers from high type I error rates and low power for rare variants. To overcome these limitations for resequencing data, this project used functional linear models with scalar response to develop statistics for identifying quantitative trait loci (QTLs) for both common and rare variants. To illustrate their applications, the functional linear models were applied to five quantitative traits in the Framingham Heart Study.
This project also proposed a novel concept of gene-gene co-association, in which a gene or a genomic region is taken as the unit of association analysis, and used stochastic calculus to develop a unified framework for testing the association of multiple genes or genomic regions for both common and rare alleles. The proposed methods were applied to gene-gene co-association analysis of psoriasis in two independent GWAS datasets, which led to the discovery of networks significantly associated with psoriasis.
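A minimal sketch of the FPC idea on toy data (all sizes, names, and the F-type test below are illustrative assumptions, not the dissertation's actual pipeline): genotype rows across a region are treated as discretized functions, reduced to leading principal-component scores, and a scalar trait is regressed on those scores:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n individuals, p variant sites in one region (genotypes 0/1/2).
n, p = 200, 50
G = rng.integers(0, 3, size=(n, p)).astype(float)

# Functional PCA step, approximated here by plain PCA (SVD) on the
# centered genotype matrix; each row is a discretized "function".
Gc = G - G.mean(axis=0)
U, s, Vt = np.linalg.svd(Gc, full_matrices=False)
k = 3                               # retain the leading components
scores = Gc @ Vt[:k].T              # FPC scores, shape (n, k)

# Simulated scalar trait loading on the first component direction.
y = G @ (0.5 * Vt[0]) + rng.normal(size=n)

# Scalar-response functional linear model: regress y on the scores,
# then an F-type test of the k score coefficients jointly.
X = np.column_stack([np.ones(n), scores])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ coef
rss1 = resid @ resid
rss0 = ((y - y.mean()) ** 2).sum()
F = ((rss0 - rss1) / k) / (rss1 / (n - k - 1))
print("F statistic:", F)
```

Collapsing p correlated variant sites to k smooth scores is what lets a single region-level test cover common and rare variants together.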
Abstract:
A Bayesian approach to estimation of the regression coefficients of a multinomial logit model with ordinal scale response categories is presented. A Monte Carlo method is used to construct the posterior distribution of the link function. The link function is treated as an arbitrary scalar function. Then the Gauss-Markov theorem is used to determine a function of the link which produces a random vector of coefficients. The posterior distribution of the random vector of coefficients is used to estimate the regression coefficients. The method described is referred to as a Bayesian generalized least squares (BGLS) analysis. Two cases involving multinomial logit models are described. Case I involves a cumulative logit model and Case II involves a proportional-odds model. All inferences about the coefficients for both cases are described in terms of the posterior distribution of the regression coefficients. The results from the BGLS method are compared to maximum likelihood estimates of the regression coefficients. The BGLS method avoids the nonlinear problems encountered when estimating the regression coefficients of a generalized linear model. The method is not complex or computationally intensive. The BGLS method offers several advantages over other Bayesian approaches.
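The abstract does not spell out the BGLS algorithm itself, so as generic background here is the textbook generalized least squares step such analyses build on, applied to empirical cumulative logits for two ordered-response groups (all numbers and the identity weight matrix are illustrative):

```python
import numpy as np

# Generic GLS estimator beta = (X' W^{-1} X)^{-1} X' W^{-1} z.
# Background only; the specific BGLS construction is not reproduced here.
def gls(X, z, W):
    Wi = np.linalg.inv(W)
    return np.linalg.solve(X.T @ Wi @ X, X.T @ Wi @ z)

# Toy cumulative-logit setup: P(Y <= j) for 3 ordered categories
# (2 cutpoints) in two groups; illustrative numbers.
props = np.array([[0.2, 0.5],    # group 0: cutpoints 1, 2
                  [0.4, 0.7]])   # group 1: cutpoints 1, 2
z = np.log(props / (1 - props)).ravel()   # empirical cumulative logits

# Proportional-odds design: one dummy per cutpoint plus a group effect,
# rows ordered (g0,c1), (g0,c2), (g1,c1), (g1,c2) to match z.
X = np.array([[1, 0, 0],
              [0, 1, 0],
              [1, 0, 1],
              [0, 1, 1]], dtype=float)
W = np.eye(4)                    # placeholder weight matrix (OLS special case)

beta = gls(X, z, W)
print("cutpoints and group effect:", beta)
```

With a non-trivial W (e.g. the covariance of the empirical logits) the same line gives the weighted estimate; the Bayesian step in the work above replaces the plug-in W with draws from a posterior.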
Abstract:
This work addresses the study of ecosystems as complex, hierarchical entities, from the standpoint of both scale and structural diversity, in the north of the province of Mendoza, down to 34º south latitude. The hierarchical spatial study is approached from different scales of analysis: micro-, meso-, and macroscale. The macroscale corresponds to the level of biomes (macroecosystem); the mesoscale to the ecosystems defined by geomorphological differentiation, among other factors (mesoecosystem); and the microscale to the sub-ecosystems arising from topographic and edaphic differentiation linked to plant formations and their environment (microecosystem). The study adopts a controlling factor to guide the analysis: climate, at its three hierarchical levels. In addition, the hierarchical spatial differentiation of ecosystems is based on the zonal climate, the geomorphological units that modify the zonal climate, and the topography and soil, with their water availability, which modify the local climate. The general objectives are the identification and localization of the hierarchical ecosystems and their integrated multiscale analysis. The hypotheses state that, across the different ecosystem scales, climate is the common denominator that organizes their distribution, and that degrading agents exist at all levels of analysis. The geographical method is used. The analysis applies deductive-inductive methods linking the hierarchical ecosystem scales with case studies. This work seeks to deepen knowledge of the complexity of Mendoza's ecosystems with an original approach.