90 results for parameter-space graph
Abstract:
In a 2D parameter space, using nine experimental time series of a Clitia's circuit, we characterized three codimension-1 chaotic fibers parallel to a period-3 window. To show the local preservation of the properties of the chaotic attractors in each fiber, we applied the closed-return technique and two distinct topological methods. With the first topological method we calculated the linking numbers in the sets of unstable periodic orbits, and with the second we obtained the symbolic planes and the topological entropies by applying symbolic dynamics analysis. (C) 2007 Elsevier Ltd. All rights reserved.
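The symbolic-dynamics step mentioned above can be illustrated with a minimal sketch (the function name and the toy sequences are ours, purely illustrative, not from the paper): the topological entropy of a symbol sequence can be estimated as h(n) = log N(n) / n, where N(n) counts the distinct words of length n occurring in the sequence.

```python
import math

def topological_entropy(symbols, n):
    """Estimate topological entropy as log(N(n))/n, where N(n) is the
    number of distinct length-n words occurring in the symbol sequence."""
    words = {tuple(symbols[i:i + n]) for i in range(len(symbols) - n + 1)}
    return math.log(len(words)) / n

# A period-3 orbit yields only 3 distinct words of any length,
# so log(3)/n -> 0: zero entropy, a periodic window.
periodic = [0, 1, 1] * 2000

# A sequence realizing every binary word of length 12 (all 12-bit
# integers concatenated) behaves like a full 2-symbol shift.
full_shift = []
for k in range(2 ** 12):
    full_shift.extend(int(b) for b in format(k, "012b"))

n = 6
h_periodic = topological_entropy(periodic, n)
h_chaotic = topological_entropy(full_shift, n)
print(h_periodic, h_chaotic)  # periodic << chaotic (log 2 for a full shift)
```

A positive estimate that stabilizes as n grows signals chaos; inside a periodic window it collapses toward zero.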
Abstract:
Models of dynamical dark energy unavoidably possess fluctuations in the energy density and pressure of that new component. In this paper we estimate the impact of dark energy fluctuations on the number of galaxy clusters in the Universe using a generalization of the spherical collapse model and the Press-Schechter formalism. The observations we consider are several hypothetical Sunyaev-Zel'dovich and weak lensing (shear maps) cluster surveys, with limiting masses similar to ongoing (SPT, DES) as well as future (LSST, Euclid) surveys. Our statistical analysis is performed in a 7-dimensional cosmological parameter space using the Fisher matrix method. We find that, in some scenarios, the impact of these fluctuations is large enough that their effect could already be detected by existing instruments such as the South Pole Telescope, when priors from other standard cosmological probes are included. We also show how dark energy fluctuations can be a nuisance for constraining cosmological parameters with cluster counts, and point to a degeneracy between the parameter that describes dark energy pressure on small scales (the effective sound speed) and the parameters describing its equation of state.
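The Fisher-matrix step of such a forecast can be sketched in a few lines (the 2x2 toy matrix below is ours, purely illustrative): marginalized 1-sigma errors are the square roots of the diagonal of the inverse Fisher matrix, and marginalizing over a degenerate direction, like the sound-speed/equation-of-state degeneracy mentioned above, inflates them relative to the fixed-parameter errors 1/sqrt(F_ii).

```python
import numpy as np

# Toy 2x2 Fisher matrix for two correlated parameters (illustrative values);
# the off-diagonal term encodes a parameter degeneracy.
F = np.array([[40.0, 12.0],
              [12.0, 10.0]])

# Marginalized 1-sigma errors: sqrt of the diagonal of the inverse matrix.
cov = np.linalg.inv(F)
sigma_marg = np.sqrt(np.diag(cov))

# Unmarginalized errors (all other parameters held fixed): 1/sqrt(F_ii).
sigma_fixed = 1.0 / np.sqrt(np.diag(F))

print(sigma_marg, sigma_fixed)
```

Adding priors from other probes corresponds to adding their Fisher matrices before inverting, which is how external data tightens the degenerate directions.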
Abstract:
We study the collider phenomenology of bilinear R-parity violating supergravity, the simplest effective model for supersymmetric neutrino masses that accounts for the current neutrino oscillation data. At the CERN Large Hadron Collider the center-of-mass energy will be high enough to probe these models directly through the search for the superpartners of the Standard Model (SM) particles. We analyze the impact of R-parity violation on the canonical supersymmetry searches; that is, we examine how the decay of the lightest supersymmetric particle (LSP) via bilinear R-parity violating interactions degrades the average expected missing momentum of the reactions, and we show how this diminishes the reach in the usual channels for supersymmetry searches. However, the R-parity violating interactions lead to an enhancement of the final states containing isolated same-sign dileptons and trileptons, compensating for the reach loss in the fully inclusive channel. We show how searches for displaced vertices associated with LSP decay substantially increase the coverage in supergravity parameter space, giving the corresponding reaches for two reference luminosities of 10 and 100 fb⁻¹, and compare them with those of the R-parity conserving minimal supergravity model.
Abstract:
We perform an analysis of the electroweak precision observables in the Lee-Wick Standard Model. The most stringent restrictions come from the S and T parameters, which receive important tree-level and one-loop contributions. In general the model predicts a large positive S and a negative T. If all the Lee-Wick masses are of the same order, reproducing the electroweak data requires a Lee-Wick scale of order 5 TeV. We show that it is possible to find some regions in the parameter space with a fermionic state as light as 2.4-3.5 TeV, at the price of raising all the other masses above 5-8 TeV. To obtain a light Higgs with such heavy resonances, a fine-tuning of at least a few per cent is needed. We also propose a simple extension of the model including a fourth generation of Standard Model fermions with their Lee-Wick partners. We show that in this case it is possible to pass the electroweak constraints with Lee-Wick fermionic masses of order 0.4-1.5 TeV and Lee-Wick gauge masses of order 3 TeV.
Abstract:
We review some issues related to the implications of different missing data mechanisms on statistical inference for contingency tables and consider simulation studies to compare the results obtained under such models to those where the units with missing data are disregarded. We confirm that, although analyses under the correct missing at random and missing completely at random models are generally more efficient even for small sample sizes, there are exceptions where they may not improve the results obtained by ignoring the partially classified data. We show that under the missing not at random (MNAR) model, estimates on the boundary of the parameter space, as well as lack of identifiability of the parameters of saturated models, may be associated with undesirable asymptotic properties of maximum likelihood estimators and likelihood ratio tests; even in standard cases the bias of the estimators may be low only for very large samples. We also show that the probability of a boundary solution obtained under the correct MNAR model may be large even for large samples and that, consequently, we may not always conclude that an MNAR model is misspecified because the estimate is on the boundary of the parameter space.
Abstract:
Although the asymptotic distributions of the likelihood ratio for testing hypotheses of null variance components in linear mixed models derived by Stram and Lee [1994. Variance components testing in longitudinal mixed effects model. Biometrics 50, 1171-1177] are valid, their proof is based on the work of Self and Liang [1987. Asymptotic properties of maximum likelihood estimators and likelihood tests under nonstandard conditions. J. Amer. Statist. Assoc. 82, 605-610] which requires identically distributed random variables, an assumption not always valid in longitudinal data problems. We use the less restrictive results of Vu and Zhou [1997. Generalization of likelihood ratio tests under nonstandard conditions. Ann. Statist. 25, 897-916] to prove that the proposed mixture of chi-squared distributions is the actual asymptotic distribution of such likelihood ratios used as test statistics for null variance components in models with one or two random effects. We also consider a limited simulation study to evaluate the appropriateness of the asymptotic distribution of such likelihood ratios in moderately sized samples. (C) 2008 Elsevier B.V. All rights reserved.
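The mixture result described above can be used directly in practice. As a minimal sketch (the function name is ours; the 50:50 mixture for testing a single variance component is the standard result the abstract refers to): under the null hypothesis, the likelihood ratio statistic is asymptotically distributed as 0.5*chi2(0) + 0.5*chi2(1), so the p-value is half the naive chi2(1) tail probability.

```python
import math

def pvalue_one_variance_component(lrt):
    """P-value for H0: sigma^2 = 0 (one random effect) under the
    0.5*chi2(0) + 0.5*chi2(1) asymptotic mixture. chi2(0) is a point
    mass at zero, so for lrt > 0 only the chi2(1) half contributes;
    P(chi2(1) > x) = erfc(sqrt(x/2))."""
    if lrt <= 0:
        return 1.0
    return 0.5 * math.erfc(math.sqrt(lrt / 2.0))

# The naive chi2(1) 10% critical value (2.706) becomes a 5% test here,
# illustrating why using a plain chi2(1) reference is conservative:
print(pvalue_one_variance_component(2.706))  # ~0.05
```

For two nested random effects the analogous mixture involves chi2(1) and chi2(2) components, as covered by the Stram-Lee results the abstract discusses.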
Abstract:
This paper pursues the study carried out in [10], focusing on the codimension-one Hopf bifurcations in the hexagonal Watt governor system. Hopf bifurcations of codimensions two, three, and four are studied here, together with the pertinent Lyapunov stability coefficients and bifurcation diagrams. This makes it possible to determine the number, types, and positions of the bifurcating small-amplitude periodic orbits. As a consequence, an open region is found in the parameter space where two attracting periodic orbits coexist with an attracting equilibrium point.
Abstract:
Aims. A model-independent reconstruction of the cosmic expansion rate is essential to a robust analysis of cosmological observations. Our goal is to demonstrate that current data are able to provide reasonable constraints on the behavior of the Hubble parameter with redshift, independently of any cosmological model or underlying gravity theory. Methods. Using type Ia supernova data, we show that it is possible to analytically calculate the Fisher matrix components in a Hubble parameter analysis without assumptions about the energy content of the Universe. We used a principal component analysis to reconstruct the Hubble parameter as a linear combination of the Fisher matrix eigenvectors (principal components). To suppress the bias introduced by the high-redshift behavior of the components, we considered the value of the Hubble parameter at high redshift as a free parameter. We first tested our procedure on a mock sample of type Ia supernova observations and then applied it to the real data compiled by the Sloan Digital Sky Survey (SDSS) group. Results. In the mock sample analysis, we demonstrate that it is possible to drastically suppress the bias introduced by the high-redshift behavior of the principal components. Applying our procedure to the real data, we show that it allows us to determine the behavior of the Hubble parameter with reasonable uncertainty, without introducing any ad hoc parameterizations. Beyond that, our reconstruction agrees with completely independent measurements of the Hubble parameter obtained from red-envelope galaxies.
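The principal-component step can be sketched as follows (the Fisher matrix and the binned function below are synthetic, purely illustrative): diagonalize the Fisher matrix, use its orthonormal eigenvectors as a basis, and reconstruct the function from the best-constrained components while discarding the noisy ones.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic symmetric positive-definite "Fisher matrix" for 8 redshift bins.
A = rng.normal(size=(8, 8))
F = A @ A.T + 8 * np.eye(8)

# Principal components: eigenvectors of F, sorted by decreasing eigenvalue
# (largest eigenvalue = best-constrained linear combination of bins).
vals, vecs = np.linalg.eigh(F)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

# Expand a binned function h in the eigenbasis and truncate to the first m
# components; the neglected, poorly constrained modes carry the residual.
h = rng.normal(size=8)
coeffs = vecs.T @ h
for m in (3, 8):
    h_rec = vecs[:, :m] @ coeffs[:m]
    print(m, np.linalg.norm(h - h_rec))
# With all 8 components the reconstruction is exact (orthonormal basis).
```

Truncation is what trades bias for variance here; the abstract's trick of keeping the high-redshift value free is one way to tame the residual from the discarded modes.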
Abstract:
We construct a nonrelativistic wave equation for spinning particles in the noncommutative space (in a sense, a theta modification of the Pauli equation). To this end, we consider the nonrelativistic limit of the theta-modified Dirac equation. To complete the consideration, we present a pseudoclassical model (à la Berezin-Marinov) for the corresponding nonrelativistic particle in the noncommutative space. To justify the latter model, we demonstrate that its quantization leads to the theta-modified Pauli equation. We extract the theta-modified interaction between a nonrelativistic spin and a magnetic field from such a Pauli equation and construct a theta modification of the Heisenberg model for two coupled spins placed in an external magnetic field. In the framework of such a model, we calculate the transition probability between two orthogonal Einstein-Podolsky-Rosen states for a pair of spins in an oscillatory magnetic field and show that some of such transitions, which are forbidden in the commutative space, are possible due to the space noncommutativity. This allows us to estimate an upper bound on the noncommutativity parameter.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend of Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug. However, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as functional coverage definition, must be carefully planned to guarantee their efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although they act on different facets of the monitored system and are not exclusive. This work presents a constrained-random simulation-based functional verification methodology where, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool to automatically generate PD-based stimuli sources was developed. Additionally, we have developed a second tool to generate functional coverage models that fit exactly to the PD-based input space. Both the input stimuli and coverage model enhancements resulted in a notable testbench efficiency increase compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated coverage models.
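The idea of pruning invalid scenarios from the input space before constrained-random generation can be sketched in a few lines (the domains, the constraint, and all names are ours, purely illustrative; they do not reflect the PD tools' actual interface):

```python
import itertools
import random

# Hypothetical parameter domains for a bus-transfer testbench.
domains = {
    "burst_len": [1, 4, 8, 16],
    "width": [8, 16, 32],
    "mode": ["single", "burst"],
}

# Invalid-scenario rule: single transfers must have burst_len == 1.
def valid(tc):
    return tc["mode"] == "burst" or tc["burst_len"] == 1

# Enumerate the cross product once and keep only the relevant test cases;
# the random generator then draws exclusively from this reduced space,
# so no simulation time is wasted on invalid stimuli.
keys = list(domains)
space = [dict(zip(keys, combo))
         for combo in itertools.product(*(domains[k] for k in keys))]
reduced = [tc for tc in space if valid(tc)]

rng = random.Random(42)
stimuli = [rng.choice(reduced) for _ in range(100)]
print(len(space), len(reduced))  # 24 -> 15 after pruning
```

A coverage model built over `reduced` rather than `space` then matches the reachable input space exactly, which is the pairing the abstract credits for the larger simulation-time reduction.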
Abstract:
We show that the S parameter is not finite in theories of electroweak symmetry breaking in a slice of anti-de Sitter five-dimensional space, with the light fermions localized in the ultraviolet. We compute the one-loop contributions to S from the Higgs sector and show that they are logarithmically dependent on the cutoff of the theory. We discuss the renormalization of S, as well as the implications for bounds from electroweak precision measurements on these models. We argue that, although in principle the choice of renormalization condition could eliminate the S parameter constraint, a more consistent condition would still result in a large and positive S. On the other hand, we show that the dependence on the Higgs mass in S can be entirely eliminated by the renormalization procedure, making it impossible in these theories to extract a Higgs mass bound from electroweak precision constraints.
Abstract:
This study evaluated the sealing ability of different lengths of remaining root canal filling and post space preparation against coronal leakage of Enterococcus faecalis. Forty-one roots of maxillary incisors were biomechanically prepared, maintaining standardized canal diameter at the middle and coronal thirds. The roots were autoclaved and all subsequent steps were undertaken in a laminar flow chamber. The canals of 33 roots were obturated with AH Plus sealer and gutta-percha. The root canal fillings were reduced to 3 predetermined lengths (n=11): G1=6 mm, G2=4 mm and G3=2 mm. The remaining roots served as positive and negative controls. Bacterial leakage test apparatuses were fabricated with the roots attached to Eppendorf tubes, keeping 2 mm of the apex submerged in BHI in glass flasks. The specimens received an E. faecalis inoculum of 1 × 10⁷ CFU/mL every 3 days and were observed daily for bacterial leakage over 60 days. Data were submitted to ANOVA, Tukey's test and Fisher's test. At 60 days, G1 (6 mm) and G2 (4 mm) presented statistically similar results (p>0.05) (54.4% of specimens with bacterial leakage), and both groups differed significantly (p<0.01) from G3 (2 mm), which presented E. faecalis leakage in 100% of specimens. It may be concluded that the shortest root canal filling remnant leaked considerably more than the other lengths, although none of the tested conditions avoided coronal leakage of E. faecalis.
Abstract:
OBJECTIVE: To analyze the diagnostic accuracy of two indirect immunofluorescence protocols for canine visceral leishmaniasis. METHODS: Dogs from a seroepidemiological survey carried out in 2003 in an endemic area in the municipalities of Araçatuba and Andradina, in the northwestern region of the state of São Paulo, and in a non-endemic area of the metropolitan region of São Paulo, were used to comparatively evaluate two protocols of the indirect immunofluorescence reaction (RIFI) for leishmaniasis: one using the heterologous antigen Leishmania major (RIFI-BM) and the other using the homologous antigen Leishmania chagasi (RIFI-CH). Accuracy was estimated using two-graph receiver operating characteristic (TG-ROC) analysis. The TG-ROC analysis compared the readings of the 1:20 dilution of the homologous antigen (RIFI-CH), taken as the reference test, with the dilutions of RIFI-BM (heterologous antigen). RESULTS: The 1:20 dilution of the RIFI-CH test showed the best contingency coefficient (0.755) and the strongest association between the two variables studied (chi-square=124.3), and was therefore considered the reference dilution of the test in the comparisons with the different dilutions of the RIFI-BM test. The best RIFI-BM results were obtained at the 1:40 dilution, with the best contingency coefficient (0.680) and the strongest association (chi-square=80.8). With the change of the cutoff point suggested by this analysis for the 1:40 dilution of RIFI-BM, the specificity estimate increased from 57.5% to 97.7%, although the 1:80 dilution showed the best sensitivity estimate (80.2%) with the new cutoff point. CONCLUSIONS: TG-ROC analysis can provide important information about diagnostic tests, as well as suggest cutoff points that may improve the sensitivity and specificity estimates of the test, and allow their evaluation in terms of the best cost-benefit ratio.
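The TG-ROC idea, tabulating sensitivity and specificity as functions of the cutoff on the same graph and reading off a balanced operating point, can be sketched as follows (the scores and labels are synthetic, purely illustrative):

```python
# Synthetic test readings: reference-positive and reference-negative animals.
positives = [55, 60, 62, 70, 75, 80, 85, 90]
negatives = [10, 15, 20, 25, 30, 40, 58, 65]

def se_sp(cutoff):
    """Sensitivity and specificity when 'score >= cutoff' is called positive."""
    se = sum(s >= cutoff for s in positives) / len(positives)
    sp = sum(s < cutoff for s in negatives) / len(negatives)
    return se, sp

# TG-ROC: tabulate both curves over candidate cutoffs; lowering the cutoff
# trades specificity for sensitivity, and the crossing point balances the two.
for c in range(10, 95, 5):
    se, sp = se_sp(c)
    print(c, se, sp)
```

Moving the cutoff is exactly the operation that raised specificity from 57.5% to 97.7% in the study above, at some cost in sensitivity.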
Abstract:
Using series solutions and time-domain evolutions, we probe the eikonal limit of the gravitational and scalar-field quasinormal modes of large black holes and black branes in anti-de Sitter backgrounds. These results are particularly relevant for the AdS/CFT correspondence, since the eikonal regime is characterized by the existence of long-lived modes which (presumably) dominate the decay time scale of the perturbations. We confirm all the main qualitative features of these slowly damped modes as predicted by Festuccia and Liu [G. Festuccia and H. Liu, arXiv:0811.1033] for the scalar-field (tensor-type gravitational) fluctuations. However, quantitatively we find dimension-dependent correction factors. We also investigate the dependence of the quasinormal mode frequencies on the horizon radius of the black hole (brane) and the angular momentum (wave number) of vector- and scalar-type gravitational perturbations.
Abstract:
The synchronizing properties of two diffusively coupled hyperchaotic Lorenz 4D systems are investigated by calculating the transverse Lyapunov exponents and by observing the phase space trajectories near the synchronization hyperplane. The effect of parameter mismatch is also observed. A simple electrical circuit described by the Lorenz 4D equations is proposed. Some results from laboratory experiments with two coupled circuits are presented. Copyright (C) 2009 Ruy Barboza.
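The synchronization diagnostic can be illustrated with a minimal sketch (we use the classical 3D Lorenz system with its standard parameters rather than the 4D hyperchaotic variant, and a plain Euler integrator; all names and values are ours): two diffusively coupled copies converge onto the synchronization hyperplane x1 = x2 once the coupling overcomes the chaotic divergence.

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Classical Lorenz vector field (standard parameter values)."""
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def run(k, steps=60000, dt=0.002):
    """Euler-integrate two diffusively coupled Lorenz systems with
    coupling strength k and return the final synchronization error."""
    a = np.array([1.0, 1.0, 1.0])
    b = np.array([1.1, 0.9, 1.05])  # slightly mismatched initial condition
    for _ in range(steps):
        da = lorenz(a) + k * (b - a)
        db = lorenz(b) + k * (a - b)
        a, b = a + dt * da, b + dt * db
    return np.linalg.norm(a - b)

# Uncoupled copies drift apart (chaos); strong coupling drives the
# error transverse to the synchronization hyperplane to zero.
print(run(0.0), run(10.0))
```

A negative largest transverse Lyapunov exponent, the quantity computed in the paper, is the formal version of the error decay seen for the strongly coupled pair.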