981 results for large vector autoregression
Abstract:
Using the published KTeV samples of K(L) -> pi(+/-)e(-/+)nu and K(L) -> pi(+/-)mu(-/+)nu decays, we perform a reanalysis of the scalar and vector form factors based on the dispersive parametrization. We obtain phase-space integrals I(K)(e) = 0.15446 +/- 0.00025 and I(K)(mu) = 0.10219 +/- 0.00025. For the scalar form factor parametrization, the only free parameter is the normalized form factor value at the Callan-Treiman point, C; our best fit yields ln C = 0.1915 +/- 0.0122. We also study the sensitivity of C to different parametrizations of the vector form factor. The results for the phase-space integrals and C are then used to perform tests of the standard model. Finally, we compare our results with lattice QCD calculations of f(K)/f(pi) and f(+)(0).
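For reference, the dispersive parametrization referred to above ties its single free parameter C to the Callan-Treiman point; a schematic of the standard form (conventions here are illustrative, not necessarily those of the analysis):

```latex
% Schematic dispersive parametrization of the normalized scalar K-pi form
% factor; G(t) is a dispersive integral over the K-pi scattering phase,
% vanishing at t = 0 and t = Delta_Kpi so that f0_bar(Delta_Kpi) = C.
\bar f_0(t) = \exp\!\left[\frac{t}{\Delta_{K\pi}}\bigl(\ln C - G(t)\bigr)\right],
\qquad \Delta_{K\pi} = m_K^2 - m_\pi^2 .
% The Callan-Treiman theorem fixes C up to a small chiral correction,
% which is what turns the measured ln C into a standard-model test:
C = \bar f_0(\Delta_{K\pi}) = \frac{f_K}{f_\pi}\,\frac{1}{f_+(0)} + \Delta_{CT}.
```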
Abstract:
We calculate the nuclear cross section for coherent and incoherent vector meson production within the QCD color dipole picture, including saturation effects. Theoretical estimates for scattering on both light and heavy nuclei are given over a wide range of energy.
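For orientation, the color dipole factorization underlying such calculations has the generic form below (a schematic, not the authors' exact expressions); saturation enters through the dipole-nucleus cross section, written here in the usual Glauber-Gribov form:

```latex
% Schematic dipole-picture amplitude for exclusive vector meson production:
% the photon fluctuates into a q-qbar dipole of transverse size r, which
% scatters off the target and projects onto the vector meson wave function.
\mathcal{A}^{\gamma^* A \to V A}(x,Q^2) \propto
\int d^2r \int_0^1 dz\; \Psi_V^*(r,z)\,\sigma_{dip}^{A}(x,r)\,\Psi_{\gamma^*}(r,z,Q^2).
% Glauber-Gribov dipole-nucleus cross section, with T_A(b) the nuclear
% thickness function; the exponential resums multiple scatterings and
% encodes the saturation effects mentioned above.
\sigma_{dip}^{A}(x,r) = 2\int d^2b
\left[1 - \exp\!\left(-\tfrac{1}{2}\,T_A(b)\,\sigma_{dip}(x,r)\right)\right].
```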
Abstract:
The theory of nonlinear diffraction of intense light beams propagating through photorefractive media is developed. Diffraction occurs on a reflecting wire embedded in the nonlinear medium at a relatively small angle with respect to the direction of beam propagation. It is shown that this process is analogous to the generation of waves by the flow of a superfluid past an obstacle. The "equation of state" of such a superfluid is determined by the nonlinear properties of the medium. On the basis of this hydrodynamic analogy, the notion of the "Mach number" is introduced, where the transverse component of the wave vector plays the role of the fluid velocity. It is found that the Mach cone separates two regions of the diffraction pattern: inside the Mach cone, oblique dark solitons are generated; outside it lies the region of "optical ship waves" (the wave pattern formed by a two-dimensional packet of linear waves). An analytical theory of the "optical ship waves" is developed, and two-dimensional dark soliton solutions of the generalized two-dimensional nonlinear Schrodinger equation describing the light beam propagation are found. The stability of dark solitons with respect to decay into vortices is studied, and it is shown that they are stable for large enough values of the Mach number.
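The hydrodynamic analogy can be summarized compactly (a schematic, with notation chosen here for illustration): the beam envelope obeys a generalized 2D nonlinear Schrodinger equation, and the Mach cone follows the usual linear-wave condition:

```latex
% Generalized 2D NLS for the beam envelope psi; the propagation coordinate z
% plays the role of time, and f(|psi|^2) is fixed by the photorefractive
% nonlinearity (the "equation of state" of the effective superfluid).
i\,\partial_z \psi = -\tfrac{1}{2}\nabla_\perp^2 \psi + f(|\psi|^2)\,\psi .
% "Fluid" velocity = transverse wave-vector component; c_s is the speed of
% "sound" set by the nonlinearity. For supersonic flow the cone opens as:
M = \frac{k_\perp}{c_s}, \qquad \sin\theta_{Mach} = \frac{1}{M} \quad (M > 1),
% with oblique dark solitons inside the cone and "optical ship waves" outside.
```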
Abstract:
The lightest supersymmetric particle may decay with branching ratios that correlate with neutrino oscillation parameters. In this case the CERN Large Hadron Collider (LHC) has the potential to probe the atmospheric neutrino mixing angle with sensitivity competitive to its low-energy determination by underground experiments. Under realistic detection assumptions, we identify the necessary conditions for the experiments at CERN's LHC to probe the simplest scenario for neutrino masses induced by minimal supergravity with bilinear R parity violation.
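The correlation exploited here is often quoted schematically in bilinear R-parity-violating models (exact factors depend on the spectrum; the relation below is an illustrative statement of the idea, not the authors' formula):

```latex
% In bilinear R-parity violation the LSP decay pattern tracks the atmospheric
% mixing angle, so a ratio of branching ratios measured at the LHC probes it:
\tan^2\theta_{atm} \simeq
\frac{BR(\tilde\chi_1^0 \to \mu^\pm W^\mp)}{BR(\tilde\chi_1^0 \to \tau^\pm W^\mp)} .
```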
Abstract:
The mechanism of electroweak symmetry breaking (EWSB) will soon be directly scrutinized at the CERN Large Hadron Collider. We analyze the LHC potential to look for new vector bosons associated with the EWSB sector, presenting a possible model-independent approach to search for these new spin-1 resonances. We show that analyses of the processes pp -> l(+)l'(-)E(T), l(+/-)jjE(T), l'(+/-)l(+)l(-)E(T), l(+/-)jjE(T), and l(+)l(-)jj (with l, l' = e or mu and j = jet) have a large reach at the LHC and can lead to the discovery or exclusion of many EWSB scenarios, such as Higgsless models.
Abstract:
We consider a model where sterile neutrinos can propagate in a large compactified extra dimension, giving rise to Kaluza-Klein (KK) modes, while the standard model left-handed neutrinos are confined to a 4-dimensional spacetime brane. The KK modes mix with the standard neutrinos, modifying their oscillation pattern. We examine past and current experiments such as CHOOZ, KamLAND, and MINOS to estimate the impact of the possible presence of such KK modes on the determination of the neutrino oscillation parameters, and simultaneously obtain limits on the size of the largest extra dimension. We find that the presence of the KK modes does not significantly improve the quality of the fit compared to the standard oscillation case. By combining the results from CHOOZ, KamLAND, and MINOS, in the limit of a vanishing lightest neutrino mass, we obtain a bound on the size of the extra dimension of about 1.0 (0.6) micrometers at 99% C.L. for the normal (inverted) mass hierarchy. If the lightest neutrino mass turns out to be larger, for example 0.2 eV, the bound tightens to about 0.1 micrometers. We also discuss the expected sensitivities to the size of the extra dimension of future experiments such as Double CHOOZ, T2K, and NOvA.
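For scale, the quoted bounds follow the usual large-extra-dimension relations (schematic; conventions here are illustrative): a bulk sterile neutrino produces a KK tower whose admixture into the active states grows with the size R of the largest extra dimension:

```latex
% KK tower of a bulk sterile neutrino for one large extra dimension of radius R:
m_n \simeq \frac{n}{R}, \qquad n = 1, 2, \dots
% Active-sterile mixing of the n-th mode is controlled by the Dirac mass m_D,
% so oscillation data constrain the combination m_D R (and hence R itself):
\theta_n \sim \frac{m_D R}{n}, \qquad
\hbar c \approx 0.197\ \text{eV}\,\mu\text{m}
\;\Rightarrow\; R \sim 1\,\mu\text{m} \;\leftrightarrow\; R^{-1} \sim 0.2\ \text{eV}.
```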
Abstract:
We study extensions of the standard model with a strongly coupled fourth generation. This occurs in models where electroweak symmetry breaking is triggered by the condensation of at least some of the fourth-generation fermions. Focusing on the phenomenology at the LHC, we study the pair production of fourth-generation down quarks, D(4). We consider the typical masses that could be associated with a strongly coupled fermion sector, in the range 300-600 GeV. We show that the production and successive decay of these heavy quarks into final states with same-sign dileptons, trileptons, and four leptons can easily be seen above background with relatively low luminosity. On the other hand, in order to confirm the presence of a new strong interaction responsible for fourth-generation condensation, we study its contribution to D(4) pair production and the potential to separate it from standard QCD-induced heavy quark production. We show that this separation might require large amounts of data. This is true even if the new interaction is assumed to be mediated by a massive colored vector boson, since its strong coupling to the fourth generation renders its width of the order of its mass. We conclude that, although this class of models can be falsified at early stages of the LHC running, its confirmation would require high integrated luminosities.
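The width argument is a simple scaling (a schematic estimate, not the authors' computation): for a massive colored vector G' coupling with strength g(4) to the fourth generation,

```latex
% A generic two-body width of a massive vector into a fermion pair scales as
% Gamma ~ g^2 M / (16 pi), up to color and channel factors. For a strongly
% coupled fourth generation, g_4^2/(16\pi) = O(1), hence Gamma = O(M):
\Gamma(G' \to Q_4 \bar{Q}_4) \sim \frac{g_4^2}{16\pi}\, M_{G'}
\;\xrightarrow{\;g_4^2 \sim 16\pi\;}\; \Gamma \sim M_{G'},
% which is why the resonance is too broad to stand out as a bump over the
% QCD-induced heavy quark continuum.
```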
Abstract:
We study the potential of the CERN Large Hadron Collider to probe the spin of new massive vector boson resonances predicted by Higgsless models. We consider their production via weak boson fusion, which relies only on the coupling between the new resonances and the weak gauge bosons. We show that the LHC will be able to unravel the spin of the particles associated with the partial restoration of unitarity in vector boson scattering for integrated luminosities of 150-560 fb(-1), depending on the mass of the new state and on the method used in the analysis.
Abstract:
The appearance of spin-1 resonances associated with the electroweak symmetry breaking sector is expected in many extensions of the standard model. We analyze the CERN Large Hadron Collider potential to probe the spin of possible new charged and neutral vector resonances through the purely leptonic processes pp -> Z' -> l(+)l'(-)E(T) and pp -> W' -> l'(+/-)l(+)l(-)E(T), with l, l' = e or mu. We perform a model-independent analysis and demonstrate that the spin of the new states can be determined at 99% C.L. in a large fraction of the parameter space where these resonances can be observed with 100 fb(-1). We show that the best sensitivity to the spin is obtained by directly studying correlations between the final-state leptons, without the need to reconstruct the events in their center-of-mass frames.
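The textbook intuition behind such lepton correlations (not the authors' specific observables) is that the decay angular distribution of a resonance tracks its spin:

```latex
% Decay angular distribution in the resonance rest frame, with theta* measured
% between the outgoing lepton and the reference (beam/polarization) axis:
\frac{d\sigma}{d\cos\theta^*} \propto
\begin{cases}
1 & \text{spin 0 (flat)}\\[2pt]
1 + \cos^2\theta^* & \text{spin 1, transverse polarization}\\[2pt]
\sin^2\theta^* & \text{spin 1, longitudinal polarization}
\end{cases}
% Correlations built directly from the two final-state leptons inherit this
% shape difference, which is why no rest-frame reconstruction is needed.
```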
Abstract:
Glycosylphosphatidylinositol (GPI) anchoring is a common and relevant posttranslational modification of eukaryotic surface proteins. Here, we developed a fast, simple, and highly sensitive (high attomole-low femtomole range) method that uses liquid chromatography-tandem mass spectrometry (LC-MS(n)) for the first large-scale analysis of GPI-anchored molecules (i.e., the GPIome) of a eukaryote, Trypanosoma cruzi, the etiologic agent of Chagas disease. Our genome-wide prediction analysis revealed that approximately 12% of T. cruzi genes possibly encode GPI-anchored proteins. By analyzing the GPIome of the T. cruzi insect-dwelling epimastigote stage using LC-MS(n), we identified 90 GPI species, of which 79 were novel. Moreover, we determined that mucins coded by the T. cruzi small mucin-like gene (TcSMUG S) family are the major GPI-anchored proteins expressed on the epimastigote cell surface. TcSMUG S mucin mature sequences are short (56-85 amino acids), highly O-glycosylated, and contain few proteolytic sites, and are therefore less likely to be susceptible to proteases in the midgut of the insect vector. We propose that our approach could be used for high-throughput GPIomic analysis of other lower and higher eukaryotes. Molecular Systems Biology, 7 April 2009; doi:10.1038/msb.2009.13
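As a toy illustration of the genome-wide prediction step (a hypothetical heuristic, not the pipeline used in the study), GPI-anchor candidates are typically flagged by a C-terminal signal: a short hydrophobic tail preceded by a small residue at the putative omega site. A minimal Kyte-Doolittle scan might look like this:

```python
# Hypothetical sketch of a naive GPI-anchor signal screen: flag proteins whose
# C-terminus is strongly hydrophobic (Kyte-Doolittle scale) and preceded by a
# small residue at a putative omega site. Real predictors are far more elaborate.
KD = {  # Kyte-Doolittle hydropathy values per amino acid
    "A": 1.8, "R": -4.5, "N": -3.5, "D": -3.5, "C": 2.5, "Q": -3.5,
    "E": -3.5, "G": -0.4, "H": -3.2, "I": 4.5, "L": 3.8, "K": -3.9,
    "M": 1.9, "F": 2.8, "P": -1.6, "S": -0.8, "T": -0.7, "W": -0.9,
    "Y": -1.3, "V": 4.2,
}
SMALL = set("AGSNDC")  # residues commonly accepted at the omega site

def looks_gpi_anchored(seq: str, tail: int = 12, min_hydropathy: float = 1.5) -> bool:
    """Crude screen: hydrophobic C-terminal tail + small residue just upstream."""
    if len(seq) < tail + 10:
        return False
    tail_score = sum(KD[a] for a in seq[-tail:]) / tail   # mean tail hydropathy
    omega_candidate = seq[-(tail + 1)]                    # residue before the tail
    return tail_score >= min_hydropathy and omega_candidate in SMALL

# Toy usage on a made-up sequence ending in a hydrophobic stretch:
print(looks_gpi_anchored("M" + "Q" * 40 + "S" + "LIVALIVALIVA"))  # True
```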
Abstract:
Large-scale enzymatic resolution of racemic sulcatol 2 is a useful example of stereoselective biocatalysis. The reaction was fast and selective, using vinyl acetate as the acyl donor and lipase from Candida antarctica (CALB) as the catalyst. The large-scale reaction (5.0 g, 39 mmol) afforded S-(+)-sulcatol 2 and R-(+)-sulcatyl acetate 3 in high optical purity (ee > 99%) and good yield (45%) within a short time (40 min). Thermodynamic parameters for the esterification of sulcatol 2 with vinyl acetate were evaluated. The enthalpy and Gibbs free energy of the reaction were both negative, indicating that the process is exothermic and spontaneous, in agreement with the enzymatic results.
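The thermodynamic statement rests on the standard Gibbs relation (included here for the reader; the specific values are in the paper):

```latex
% Gibbs relation: a negative Delta H means the reaction releases heat
% (exothermic); a negative Delta G means it proceeds spontaneously.
\Delta G = \Delta H - T\,\Delta S,
\qquad \Delta H < 0 \;\text{(exothermic)},
\qquad \Delta G < 0 \;\text{(spontaneous)}.
```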
Abstract:
Objective: We carry out a systematic assessment of a suite of kernel-based learning machines for the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. We then provide the average cross-validation accuracy values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results indicate that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing most consistently. The choice of kernel function and parameter value, as well as the choice of feature extractor, are critical decisions, although the choice of wavelet family seems less relevant. The statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged across all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
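A minimal sketch of the kind of kernel-sensitivity scan described above (hypothetical stand-in data, not the study's EEG features): standard SVM accuracy as a function of the Gaussian-RBF kernel radius, estimated by cross-validation.

```python
# Scan the Gaussian-RBF kernel radius and report cross-validated accuracy,
# mimicking the sensitivity-to-kernel-parameter analysis described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Stand-in for the EEG feature vectors (e.g., wavelet statistics, Lyapunov exponents).
X, y = make_classification(n_samples=200, n_features=21, random_state=0)

# sklearn parametrizes the Gaussian RBF by gamma = 1 / (2 * sigma**2),
# where sigma is the kernel radius scanned in the study.
for sigma in np.logspace(-1, 2, 8):
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2)))
    acc = cross_val_score(clf, X, y, cv=10).mean()
    print(f"sigma={sigma:7.2f}  mean 10-fold CV accuracy={acc:.3f}")
```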
Abstract:
Age-related changes in running kinematics have been reported in the literature using classical inferential statistics. However, this approach has been hampered by the large number of biomechanical gait variables reported and the consequent scarcity of significant differences found in these studies. Data mining techniques have been applied in recent biomedical studies to address this problem using a more general approach. In the present work, we re-analyzed lower extremity running kinematic data of 17 young and 17 elderly male runners using the Support Vector Machine (SVM) classification approach. In total, 31 kinematic variables were extracted to train the classification algorithm and test the generalization performance. The results revealed different accuracy rates across the three kernel methods adopted in the classifier, with the linear kernel performing best. A subsequent forward feature selection algorithm demonstrated that with only six features, the linear-kernel SVM achieved a 100% classification rate, showing that these features provide powerful combined information for distinguishing the age groups. The results of the present work demonstrate the potential of this approach to improve knowledge about age-related differences in running gait biomechanics and encourage the use of the SVM in other clinical contexts.
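A minimal sketch of the linear-kernel SVM plus forward feature selection procedure described above (synthetic stand-in data, not the study's 31 kinematic variables):

```python
# Forward feature selection wrapped around a linear-kernel SVM, mirroring the
# young-vs-elderly classification design described above.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# 34 runners (two age groups) x 31 kinematic features, as in the study design.
X, y = make_classification(n_samples=34, n_features=31, n_informative=6, random_state=1)

svm = SVC(kernel="linear")
# Greedily add features until six are selected, scoring by cross-validated accuracy.
sfs = SequentialFeatureSelector(svm, n_features_to_select=6, direction="forward", cv=5)
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())
print("selected feature indices:", selected)
print("CV accuracy on selected features:",
      cross_val_score(svm, X[:, selected], y, cv=5).mean())
```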
Abstract:
Power loss reduction in distribution systems (DSs) is a nonlinear, multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the solution simpler. In addition, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with the NDE (MEAN) is the proposed approach for solving DS problems on large-scale networks. Simulation results show that the MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, the MEAN exhibits sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that the MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
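A toy illustration of the subpopulation-table selection idea (a hypothetical sketch on a two-objective test function, not the MEAN implementation or the node-depth encoding): each objective keeps its own bounded table of the best individuals seen, and parents are drawn across tables to keep the search multiobjective.

```python
# Hypothetical sketch of an evolutionary algorithm with subpopulation tables:
# one bounded table per objective, each keeping the best individuals for that
# objective; offspring compete for a slot in every table. Toy problem: minimize
# f1(x) = x^2 and f2(x) = (x - 2)^2 simultaneously (trade-off on [0, 2]).
import random

OBJECTIVES = [lambda x: x * x, lambda x: (x - 2.0) ** 2]
TABLE_SIZE = 10

def evolve(generations: int = 200, seed: int = 0) -> list:
    rng = random.Random(seed)
    # Initialize every table with random candidate solutions.
    tables = [[rng.uniform(-5, 5) for _ in range(TABLE_SIZE)] for _ in OBJECTIVES]
    for _ in range(generations):
        # Draw parents from (possibly different) tables; recombine and mutate.
        pa = rng.choice(rng.choice(tables))
        pb = rng.choice(rng.choice(tables))
        child = 0.5 * (pa + pb) + rng.gauss(0.0, 0.3)
        # The child enters any table where it beats that table's worst member.
        for table, f in zip(tables, OBJECTIVES):
            worst = max(range(TABLE_SIZE), key=lambda i: f(table[i]))
            if f(child) < f(table[worst]):
                table[worst] = child
        # (The real MEAN adds tables for constraints/aggregated objectives and
        # uses the node-depth encoding so individuals are feasible networks.)
    return tables

tables = evolve()
for f, table in zip(OBJECTIVES, tables):
    print("best value for this objective:", round(min(f(x) for x in table), 4))
```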