951 results for Statistical hypothesis testing
Abstract:
Context. About 2/3 of the Be stars present the so-called V/R variations, a phenomenon characterized by the quasi-cyclic variation in the ratio between the violet and red emission peaks of the H I emission lines. These variations are generally explained by global oscillations in the circumstellar disk forming a one-armed spiral density pattern that precesses around the star with a period of a few years. Aims. This paper presents self-consistent models of polarimetric, photometric, spectrophotometric, and interferometric observations of the classical Be star ζ Tauri. The primary goal is to conduct a critical quantitative test of the global oscillation scenario. Methods. Detailed three-dimensional, NLTE radiative transfer calculations were carried out using the radiative transfer code HDUST. The most up-to-date research on Be stars was used as input for the code in order to include a physically realistic description of the central star and the circumstellar disk. The model adopts a rotationally deformed, gravity-darkened central star, surrounded by a disk whose unperturbed state is given by a steady-state viscous decretion disk model. It is further assumed that this disk is in vertical hydrostatic equilibrium. Results. By adopting a viscous decretion disk model for ζ Tauri and a rigorous solution of the radiative transfer, a very good fit of the time-averaged properties of the disk was obtained. This provides strong theoretical evidence that the viscous decretion disk model is the mechanism responsible for disk formation. The global oscillation model successfully fitted spatially resolved VLTI/AMBER observations and the temporal V/R variations in the Hα and Brγ lines. This result convincingly demonstrates that the oscillation pattern in the disk is a one-armed spiral. Possible model shortcomings, as well as suggestions for future improvements, are also discussed.
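For readers unfamiliar with the V/R observable defined above, the short sketch below shows how the ratio of the violet to the red emission-peak height can be measured from a double-peaked line profile. The Gaussian profile and the helper name v_over_r are illustrative assumptions, not material from the paper.

```python
# Minimal sketch: measuring V/R from a synthetic double-peaked emission line.
import numpy as np

def v_over_r(velocity, flux, continuum=1.0):
    """Ratio of the blue-shifted to the red-shifted emission peak above continuum."""
    blue = flux[velocity < 0].max() - continuum
    red = flux[velocity > 0].max() - continuum
    return blue / red

# Toy profile: two Gaussian peaks of unequal strength on a flat continuum.
v = np.linspace(-500.0, 500.0, 1001)                      # km/s
flux = (1.0 + 0.9 * np.exp(-0.5 * ((v + 200) / 80) ** 2)  # violet peak
            + 0.6 * np.exp(-0.5 * ((v - 200) / 80) ** 2)) # red peak
print(f"V/R = {v_over_r(v, flux):.2f}")
```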
Abstract:
Background: Discussion surrounding the settlement of the New World has recently gained momentum with advances in molecular biology, archaeology, and bioanthropology. Recent evidence from these diverse fields is found to support different colonization scenarios. The currently available genetic evidence suggests a "single migration" model, in which both early and later Native American groups derive from one expansion event into the continent. In contrast, the pronounced anatomical differences between early and late Native American populations have led others to propose more complex scenarios, involving separate colonization events of the New World and a distinct origin for these groups. Methodology/Principal Findings: Using large samples of Early American crania, we: 1) calculated the rate of morphological differentiation between Early and Late American samples under three different time divergence assumptions, and compared our findings to the predicted morphological differentiation under neutral conditions in each case; and 2) further tested three dispersal scenarios for the colonization of the New World by comparing the morphological distances among early and late Amerindians, East Asians, Australo-Melanesians, and early modern humans from Asia to the geographical distances associated with each dispersal model. Results indicate that the assumption of a last shared common ancestor outside the continent better explains the observed morphological differences between early and late American groups. This result is corroborated by our finding that a model comprising two Asian waves of migration coming through Bering into the Americas fits the cranial anatomical evidence best, especially when the effects of diversifying selection due to climate are taken into account. Conclusions: We conclude that the morphological diversity documented through time in the New World is best accounted for by a model postulating two waves of human expansion into the continent originating in East Asia and entering through Beringia.
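The comparison of morphological distances with the geographical distances implied by a dispersal scenario can be illustrated with a Mantel-style permutation test. The sketch below uses made-up 4x4 distance matrices and a hypothetical helper named mantel; it is a generic construction, not the authors' analysis pipeline.

```python
# Minimal sketch: Mantel-style permutation test between two distance matrices.
import numpy as np

rng = np.random.default_rng(0)

def mantel(morph, geo, n_perm=9999, rng=rng):
    """Correlate off-diagonal entries of two symmetric distance matrices and
    assess significance by permuting population labels of the first matrix."""
    n = morph.shape[0]
    iu = np.triu_indices(n, k=1)
    r_obs = np.corrcoef(morph[iu], geo[iu])[0, 1]
    count = 0
    for _ in range(n_perm):
        p = rng.permutation(n)
        r_perm = np.corrcoef(morph[p][:, p][iu], geo[iu])[0, 1]
        if r_perm >= r_obs:
            count += 1
    return r_obs, (count + 1) / (n_perm + 1)

# Toy 4-population example; the numbers are purely illustrative.
morph = np.array([[0, 2, 5, 6], [2, 0, 4, 5], [5, 4, 0, 3], [6, 5, 3, 0]], float)
geo   = np.array([[0, 1, 7, 9], [1, 0, 6, 8], [7, 6, 0, 4], [9, 8, 4, 0]], float)

r, p = mantel(morph, geo)
print(f"Mantel r = {r:.2f}, one-tailed p = {p:.3f}")
```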
Abstract:
Ecological systems are vulnerable to irreversible change when key system properties are pushed over thresholds, resulting in the loss of resilience and the precipitation of a regime shift. Perhaps the most important of such properties in human-modified landscapes is the total amount of remnant native vegetation. In a seminal study, Andren proposed the existence of a fragmentation threshold in the total amount of remnant vegetation, below which landscape-scale connectivity is eroded and local species richness and abundance become dependent on patch size. Despite the fact that species patch-area effects have been a mainstay of conservation science, there has yet to be a robust empirical evaluation of this hypothesis. Here we present and test a new conceptual model describing the mechanisms and consequences of biodiversity change in fragmented landscapes, identifying the fragmentation threshold as a first step in a positive feedback mechanism that has the capacity to impair ecological resilience and drive a regime shift in biodiversity. The model considers that local extinction risk is defined by patch size, and immigration rates by landscape vegetation cover, and that the recovery from local species losses depends upon the landscape species pool. Using a unique dataset on the distribution of non-volant small mammals across replicate landscapes in the Atlantic forest of Brazil, we found strong evidence for our model predictions: patch-area effects are evident only at intermediate levels of total forest cover, where landscape diversity is still high and opportunities for enhancing biodiversity through local management are greatest. Furthermore, high levels of forest loss can push native biota through an extinction filter, and result in the abrupt, landscape-wide loss of forest-specialist taxa, ecological resilience, and management effectiveness. The proposed model links hitherto distinct theoretical approaches within a single framework, providing a powerful tool for analysing the potential effectiveness of management interventions.
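One minimal way to probe the "patch-area effects only at intermediate cover" prediction is to regress species richness on log patch area separately within forest-cover classes. The sketch below uses synthetic data and an assumed helper patch_area_slope; it is not the Atlantic forest small-mammal dataset or the authors' statistical model.

```python
# Minimal sketch: detecting a patch-area effect within forest-cover classes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def patch_area_slope(log_area, richness):
    """Slope of richness on log patch area and the p-value of that slope."""
    res = stats.linregress(log_area, richness)
    return res.slope, res.pvalue

# Synthetic landscapes: a strong area effect only at intermediate cover.
for cover, beta in [("high cover", 0.1), ("intermediate cover", 2.0), ("low cover", 0.1)]:
    log_area = rng.uniform(0, 3, size=30)            # log10(patch area, ha)
    richness = 5 + beta * log_area + rng.normal(0, 1, 30)
    slope, p = patch_area_slope(log_area, richness)
    print(f"{cover:>18}: slope = {slope:.2f}, p = {p:.3g}")
```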
Abstract:
We show that the one-loop effective action at finite temperature for a scalar field with quartic interaction has the same renormalized expression as at zero temperature if written in terms of a certain classical field φ_c, and if we trade free propagators at zero temperature for their finite-temperature counterparts. The result follows if we write the partition function as an integral over field eigenstates (boundary fields) of the density matrix element in the functional Schrödinger field representation, and perform a semiclassical expansion in two steps: first, we integrate around the saddle point for fixed boundary fields, which is the classical field φ_c, a functional of the boundary fields; then, we perform a saddle-point integration over the boundary fields, whose correlations characterize the thermal properties of the system. This procedure provides a dimensionally reduced effective theory for the thermal system. We calculate the two-point correlation as an example.
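Schematically, the two-step expansion described above can be written in standard functional-integral notation as below; this is a generic rendition under the usual conventions, not an equation quoted from the paper.

```latex
\[
Z = \operatorname{Tr} e^{-\beta \hat{H}}
  = \int \mathcal{D}\varphi \,
    \langle \varphi |\, e^{-\beta \hat{H}} \,| \varphi \rangle
  \simeq \int \mathcal{D}\varphi \; e^{-S_E[\phi_c(\varphi)]},
\qquad
\phi_c(0,\mathbf{x}) = \phi_c(\beta,\mathbf{x}) = \varphi(\mathbf{x}),
\]
```

where φ_c is the saddle point of the Euclidean action at fixed boundary field φ, and the remaining integral over the boundary fields is itself treated by a second saddle-point expansion, yielding the dimensionally reduced effective theory.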
Abstract:
A search for a sidereal modulation in the MINOS near detector neutrino data was performed. If present, this signature could be a consequence of Lorentz and CPT violation as predicted by the effective field theory called the standard-model extension. No evidence for a sidereal signal in the data set was found, implying that there is no significant change in neutrino propagation that depends on the direction of the neutrino beam in a Sun-centered inertial frame. Upper limits on the magnitudes of the Lorentz and CPT violating terms in the standard-model extension lie between 10⁻⁴ and 10⁻² of the maximum expected, assuming a suppression of these signatures by a factor of 10⁻¹⁷.
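A generic way to test for a modulation at the sidereal frequency in a list of event times is a Rayleigh-style periodicity test, sketched below. The function name rayleigh_test and the uniform toy data are assumptions for illustration; this is not the MINOS collaboration's actual analysis chain.

```python
# Minimal sketch: Rayleigh test for power at the sidereal frequency.
import numpy as np

SIDEREAL_DAY_S = 86164.0905            # mean sidereal day in seconds

def rayleigh_test(event_times_s):
    """Rayleigh power at the sidereal frequency and its large-n p-value."""
    phase = 2.0 * np.pi * (event_times_s % SIDEREAL_DAY_S) / SIDEREAL_DAY_S
    n = len(phase)
    z = (np.cos(phase).sum() ** 2 + np.sin(phase).sum() ** 2) / n
    return z, np.exp(-z)               # p ~ exp(-z) for large n

# Toy data: event times uniform over ~1 year, i.e. no injected signal.
rng = np.random.default_rng(2)
times = rng.uniform(0, 3.15e7, size=10_000)
z, p = rayleigh_test(times)
print(f"Rayleigh power = {z:.2f}, approximate p = {p:.3f}")
```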
Abstract:
We propose a statistical model to account for the gel-fluid anomalous phase transitions in charged bilayer- or lamellae-forming ionic lipids. The model Hamiltonian comprises effective attractive interactions to describe neutral-lipid membranes, as well as the effect of electrostatic repulsions of the discrete ionic charges on the lipid headgroups. The latter can be counterion dissociated (charged) or counterion associated (neutral), while the lipid acyl chains may be in gel (low-temperature or high-lateral-pressure) or fluid (high-temperature or low-lateral-pressure) states. The system is modeled as a lattice gas with two distinct particle types, each one associated, respectively, with the polar-headgroup and acyl-chain states, which can be mapped onto an Ashkin-Teller model with the inclusion of cubic terms. The model displays a rich thermodynamic behavior in terms of the chemical potential of counterions (related to added salt concentration) and lateral pressure. In particular, we show the existence of semidissociated thermodynamic phases related to the onset of charge order in the system. This type of order stems from spatially ordered counterion association to the lipid headgroups, in which charged and neutral lipids alternate in a checkerboard-like order. Within the mean-field approximation, we predict that the acyl-chain order-disorder transition is discontinuous, with the first-order line ending at a critical point, as in the neutral case. Moreover, the charge order gives rise to continuous transitions, with the associated second-order lines joining the aforementioned first-order line at critical end points. We explore the thermodynamic behavior of some physical quantities, like the specific heat at constant lateral pressure and the degree of ionization, associated with the fraction of charged lipid headgroups.
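The two-variable lattice picture above can be illustrated with a Metropolis simulation of a toy model carrying an Ising-like chain state and an Ising-like charge state on each site. The Hamiltonian, couplings J, K, mu, P, and lattice size below are illustrative assumptions only; they are not the paper's Ashkin-Teller-type Hamiltonian with cubic terms.

```python
# Minimal sketch: Metropolis sampling of a toy two-variable lattice model.
import numpy as np

rng = np.random.default_rng(3)
L, beta = 16, 1.0
J, K, mu, P = 1.0, 0.5, 0.2, 0.1       # toy couplings and fields

s = rng.choice([-1, 1], size=(L, L))   # chain state: +1 fluid, -1 gel
t = rng.choice([-1, 1], size=(L, L))   # headgroup: +1 charged, -1 neutral

def local_energy(s, t, i, j):
    """Energy of site (i, j) and its four bonds (periodic boundaries)."""
    nb = [((i + 1) % L, j), ((i - 1) % L, j), (i, (j + 1) % L), (i, (j - 1) % L)]
    e = 0.0
    for a, b in nb:
        e += -J * s[i, j] * s[a, b]    # attraction: like chain states cluster
        e += +K * t[i, j] * t[a, b]    # repulsion: charges favour alternation
    return e - mu * t[i, j] - P * s[i, j]

for sweep in range(200):
    for _ in range(L * L):
        i, j = rng.integers(L), rng.integers(L)
        field = s if rng.random() < 0.5 else t
        e_old = local_energy(s, t, i, j)
        field[i, j] *= -1
        d_e = local_energy(s, t, i, j) - e_old
        if d_e > 0 and rng.random() >= np.exp(-beta * d_e):
            field[i, j] *= -1          # reject the move

print("mean chain order     :", s.mean())
print("staggered charge order:",
      (t * (-1) ** np.indices((L, L)).sum(0)).mean())
```

The staggered average of the charge variable is the natural diagnostic of the checkerboard-like charge order discussed in the abstract.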
Abstract:
We consider a simple Maier-Saupe statistical model with the inclusion of disorder degrees of freedom to mimic the phase diagram of a mixture of rodlike and disklike molecules. A quenched distribution of shapes leads to a phase diagram with two uniaxial and a biaxial nematic structure. A thermalized distribution, however, which is more appropriate for liquid mixtures, precludes the stability of this biaxial phase. We then use a two-temperature formalism, and assume a separation of relaxation times, to show that a partial degree of annealing is already sufficient to stabilize a biaxial nematic structure.
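As background, the baseline Maier-Saupe mean-field self-consistency (without the shape-disorder degrees of freedom that are the point of the paper) can be solved numerically. The coupling values and the helper solve_order_parameter below are illustrative assumptions.

```python
# Minimal sketch: self-consistent uniaxial order parameter in Maier-Saupe theory.
import numpy as np

def p2(c):
    """Second Legendre polynomial of cos(theta)."""
    return 0.5 * (3.0 * c ** 2 - 1.0)

def solve_order_parameter(beta_j, s0=0.8, tol=1e-10, max_iter=10_000):
    """Iterate S = <P2(cos theta)> with weight exp(beta*J*S*P2(cos theta))."""
    c = np.linspace(-1.0, 1.0, 20_001)     # uniform grid in cos(theta)
    s = s0
    for _ in range(max_iter):
        w = np.exp(beta_j * s * p2(c))
        s_new = np.sum(p2(c) * w) / np.sum(w)   # grid spacing cancels in the ratio
        if abs(s_new - s) < tol:
            break
        s = s_new
    return s

for beta_j in (4.0, 4.55, 5.0):            # J/(k_B T); transition near 4.54
    print(f"beta*J = {beta_j:.2f}: S = {solve_order_parameter(beta_j):.3f}")
```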
Abstract:
We consider a model where sterile neutrinos can propagate in a large compactified extra dimension, giving rise to Kaluza-Klein (KK) modes, while the standard model left-handed neutrinos are confined to a 4-dimensional spacetime brane. The KK modes mix with the standard neutrinos, modifying their oscillation pattern. We examine former and current experiments such as CHOOZ, KamLAND, and MINOS to estimate the impact of the possible presence of such KK modes on the determination of the neutrino oscillation parameters and simultaneously obtain limits on the size of the largest extra dimension. We find that the presence of the KK modes does not essentially improve the quality of the fit compared to the case of standard oscillation. By combining the results from CHOOZ, KamLAND, and MINOS, in the limit of a vanishing lightest neutrino mass, we obtain a bound on the size of the extra dimension of ∼1.0 (0.6) μm at 99% C.L. for the normal (inverted) mass hierarchy. If the lightest neutrino mass turns out to be larger, 0.2 eV for example, we obtain a bound of ∼0.1 μm. We also discuss the expected sensitivities to the size of the extra dimension of future experiments such as Double CHOOZ, T2K, and NOνA.
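For orientation, the "standard oscillation" baseline against which the KK-mode fit is compared is the familiar two-flavor vacuum survival probability, sketched below. The mixing parameters and energies are representative values chosen for illustration; the KK-modified probability is model specific and is not reproduced here.

```python
# Minimal sketch: two-flavor vacuum survival probability (baseline fit).
import numpy as np

def survival_probability(L_km, E_GeV, sin2_2theta=0.95, dm2_eV2=2.4e-3):
    """P(nu_mu -> nu_mu) for two-flavor vacuum oscillations."""
    return 1.0 - sin2_2theta * np.sin(1.267 * dm2_eV2 * L_km / E_GeV) ** 2

# MINOS-like baseline of 735 km, a few representative beam energies.
for E in (1.5, 3.0, 5.0):
    print(f"E = {E:.1f} GeV: P_survival = {survival_probability(735.0, E):.3f}")
```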
Abstract:
Cosmological analyses based on currently available observations are unable to rule out a sizeable coupling between dark energy and dark matter. However, the signature of the coupling is not easy to grasp, since the coupling is degenerate with other cosmological parameters, such as the dark energy equation of state and the dark matter abundance. We discuss possible ways to break this degeneracy. Based on the perturbation formalism, we carry out a global fit using the latest observational data and obtain a tight constraint on the interaction between the dark sectors. We find that an appropriate interaction can alleviate the coincidence problem.
Abstract:
Eleven density functionals are compared with regard to their performance for the lattice constants of solids. We consider standard functionals, such as the local-density approximation and the Perdew-Burke-Ernzerhof (PBE) generalized-gradient approximation (GGA), as well as variations of PBE GGA, such as PBEsol and similar functionals, PBE-type functionals employing a tighter Lieb-Oxford bound, and combinations thereof. On a test set of 60 solids, we perform a system-by-system analysis for selected functionals and a full statistical analysis for all of them. The impact of restoring the gradient expansion and of tightening the Lieb-Oxford bound is discussed and confronted with previous results obtained from other codes, functionals, or test sets. No functional is uniformly good for all investigated systems, but surprisingly, and pleasingly, the simplest possible modifications to PBE turn out to have the most beneficial effect on its performance. The atomization energy of molecules was also considered; on a test set of six molecules, we find that PBE is clearly the best functional, with the others leading to strong overbinding.
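The kind of statistical comparison described above usually reduces to summary error statistics plus a paired test over a common test set. The sketch below uses made-up lattice constants for two hypothetical functionals A and B; it is not the paper's 60-solid test set or its exact procedure.

```python
# Minimal sketch: error statistics and a paired test for two functionals.
import numpy as np
from scipy import stats

# Reference lattice constants (angstrom) and values from two toy functionals.
a_ref = np.array([3.57, 5.43, 4.05, 3.62, 5.65, 6.48])
a_fA  = np.array([3.63, 5.47, 4.04, 3.68, 5.72, 6.55])
a_fB  = np.array([3.55, 5.41, 4.01, 3.60, 5.63, 6.44])

for name, a in [("A", a_fA), ("B", a_fB)]:
    err = a - a_ref
    print(f"functional {name}: ME = {err.mean():+.3f} A, "
          f"MAE = {np.abs(err).mean():.3f} A, "
          f"MRE = {100 * (err / a_ref).mean():+.2f} %")

# Paired nonparametric test on the absolute errors over the same solids.
stat, p = stats.wilcoxon(np.abs(a_fA - a_ref), np.abs(a_fB - a_ref))
print(f"Wilcoxon signed-rank: statistic = {stat:.1f}, p = {p:.3f}")
```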
Abstract:
Online music databases have increased significantly as a consequence of the rapid growth of the Internet and digital audio, requiring the development of faster and more efficient tools for music content analysis. Musical genres are widely used to organize music collections. In this paper, the problem of automatic single- and multi-label music genre classification is addressed by exploring rhythm-based features obtained from a complex network representation of the rhythms. A Markov model is built in order to analyse the temporal sequence of rhythmic notation events. Feature analysis is performed using two multivariate statistical approaches: principal component analysis (unsupervised) and linear discriminant analysis (supervised). Similarly, two classifiers are applied in order to identify the category of rhythms: a parametric Bayesian classifier under the Gaussian hypothesis (supervised) and agglomerative hierarchical clustering (unsupervised). Results obtained using the kappa coefficient, as well as the obtained clusters, corroborate the effectiveness of the proposed method.
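The kappa coefficient mentioned above measures classifier agreement beyond chance and is straightforward to compute from a confusion matrix. The sketch below uses a toy 3-genre confusion matrix and an assumed helper cohen_kappa; it does not reproduce the paper's results.

```python
# Minimal sketch: Cohen's kappa from a confusion matrix.
import numpy as np

def cohen_kappa(conf):
    """Cohen's kappa for a square confusion matrix (rows: true, cols: predicted)."""
    conf = np.asarray(conf, float)
    n = conf.sum()
    p_obs = np.trace(conf) / n
    p_exp = (conf.sum(axis=1) @ conf.sum(axis=0)) / n ** 2
    return (p_obs - p_exp) / (1.0 - p_exp)

# Toy confusion matrix for three genres.
conf = [[18,  2,  0],
        [ 3, 15,  2],
        [ 1,  4, 15]]
print(f"kappa = {cohen_kappa(conf):.3f}")
```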
Abstract:
Background: DAPfinder and DAPview are novel BRB-ArrayTools plug-ins to construct gene coexpression networks and identify significant differences in pairwise gene-gene coexpression between two phenotypes. Results: Each significant difference in gene-gene association represents a Differentially Associated Pair (DAP). Our tools include several choices of filtering methods, gene-gene association metrics, statistical testing methods, and multiple comparison adjustments. Network results are easily displayed in Cytoscape. Analyses of glioma experiments and microarray simulations demonstrate the utility of these tools. Conclusions: DAPfinder is a new user-friendly tool for the reconstruction and comparison of biological networks.
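One common test for a difference in pairwise coexpression between two phenotypes is Fisher's z-transformation of the two correlations, sketched below. The function name diff_correlation_test and the synthetic expression data are assumptions; this is a generic construction, not the DAPfinder implementation.

```python
# Minimal sketch: test whether a gene pair's correlation differs between phenotypes.
import numpy as np
from scipy import stats

def diff_correlation_test(x1, y1, x2, y2):
    """Two-sided test for r(x1, y1) != r(x2, y2) via Fisher's z."""
    r1 = np.corrcoef(x1, y1)[0, 1]
    r2 = np.corrcoef(x2, y2)[0, 1]
    z1, z2 = np.arctanh(r1), np.arctanh(r2)
    se = np.sqrt(1.0 / (len(x1) - 3) + 1.0 / (len(x2) - 3))
    z = (z1 - z2) / se
    return r1, r2, 2.0 * stats.norm.sf(abs(z))

# Toy expression data: the pair is coexpressed in phenotype 1 only.
rng = np.random.default_rng(4)
x1 = rng.normal(size=40); y1 = x1 + rng.normal(scale=0.5, size=40)
x2 = rng.normal(size=40); y2 = rng.normal(size=40)
r1, r2, p = diff_correlation_test(x1, y1, x2, y2)
print(f"r1 = {r1:.2f}, r2 = {r2:.2f}, p = {p:.2e}")
```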
Abstract:
Background: There are several studies in the literature depicting measurement error in gene expression data and also several others about regulatory network models. However, only a small fraction combines measurement error with mathematical regulatory network models and shows how to identify these networks under different noise levels. Results: This article investigates the effects of measurement error on the estimation of the parameters of regulatory networks. Simulation studies indicate that, in both time series (dependent) and non-time series (independent) data, measurement error strongly affects the estimated parameters of the regulatory network models, biasing them as predicted by the theory. Moreover, when testing the parameters of the regulatory network models, p-values computed by ignoring the measurement error are not reliable, since the rate of false positives is not controlled under the null hypothesis. In order to overcome these problems, we present an improved version of the Ordinary Least Squares estimator for independent (regression models) and dependent (autoregressive models) data when the variables are subject to noise. Moreover, measurement-error estimation procedures for microarrays are also described. Simulation results also show that both corrected methods perform better than the standard ones (i.e., ignoring measurement error). The proposed methodologies are illustrated using microarray data from lung cancer patients and mouse liver time series data. Conclusions: Measurement error dangerously affects the identification of regulatory network models, and thus it must be reduced or taken into account in order to avoid erroneous conclusions. This could be one of the reasons for the high biological false positive rates identified in actual regulatory network models.
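The core problem described above, attenuation of regression estimates when the regressor is noisy, can be demonstrated in a few lines; the classical correction divides the naive slope by the reliability ratio when the error variance is known or estimated. The sketch below is a textbook errors-in-variables illustration with made-up parameters, not the article's estimator.

```python
# Minimal sketch: attenuation bias under measurement error and its correction.
import numpy as np

rng = np.random.default_rng(5)
n, beta, sigma_u = 2000, 1.0, 0.8        # true slope and noise s.d. (toy values)

x_true = rng.normal(size=n)
y = beta * x_true + rng.normal(scale=0.3, size=n)
x_obs = x_true + rng.normal(scale=sigma_u, size=n)   # noisy regressor

b_naive = np.cov(x_obs, y)[0, 1] / np.var(x_obs, ddof=1)
# Correct for attenuation: divide by the reliability ratio.
reliability = 1.0 - sigma_u ** 2 / np.var(x_obs, ddof=1)
b_corrected = b_naive / reliability

print(f"naive OLS slope = {b_naive:.3f}")
print(f"corrected slope = {b_corrected:.3f}   (true value {beta})")
```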
Abstract:
Objective: The purpose of this study was to evaluate in vitro the Knoop microhardness (Knoop hardness number [KHN]) and the degree of conversion, using FT-Raman spectroscopy, of a light-cured microhybrid resin composite (Z350-3M-ESPE) Vita shade A3 photopolymerized with a halogen lamp or an argon ion laser. Background Data: Optimal polymerization of resin-based dental materials is important for the longevity of restorations in dentistry. Materials and Methods: Thirty specimens were prepared and inserted into a disc-shaped polytetrafluoroethylene mold that was 2.0 mm thick and 3 mm in diameter. The specimens were divided into three groups (n = 10 each). Group 1 (G1) was light-cured for 20 sec with an Optilux 501 halogen light with an intensity of 1000 mW/cm². Group 2 (G2) was photopolymerized with an argon laser with a power of 150 mW for 10 sec, and group 3 (G3) was photopolymerized with an argon laser at 200 mW of power for 10 sec. All specimens were stored in distilled water for 24 h at 37 °C and kept in lightproof containers. For the KHN test, five indentations were made and a depth of 100 μm was maintained in each specimen. One hundred and fifty readings were obtained using a 25-g load for 45 sec. The degree of conversion values were measured by Raman spectroscopy. KHN and degree of conversion values were obtained on opposite sides of the irradiated surface. KHN and degree of conversion data were analyzed by one-way ANOVA and Tukey tests with statistical significance set at p < 0.05. Results: The results of KHN testing were G1 = 37.428 ± 4.765, G2 = 23.588 ± 6.269, and G3 = 21.652 ± 4.393. The calculated degrees of conversion (DC%) were G1 = 48.57 ± 2.11, G2 = 43.71 ± 3.93, and G3 = 44.19 ± 2.71. Conclusions: Polymerization with the halogen lamp (G1) attained higher microhardness values than polymerization with the argon laser at power levels of 150 and 200 mW; there was no difference in hardness between the two argon laser groups. The results showed no statistically significant difference in the degree of conversion among composite samples polymerized with the two light sources tested.
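The one-way ANOVA followed by Tukey comparisons described above can be sketched as follows. The hardness values are synthetic, drawn loosely around the reported group means; this is an illustration of the test workflow, not a reanalysis of the study's data. It assumes SciPy >= 1.8 for scipy.stats.tukey_hsd.

```python
# Minimal sketch: one-way ANOVA plus Tukey HSD on three curing groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
g1 = rng.normal(37.4, 4.8, size=10)    # halogen lamp (synthetic values)
g2 = rng.normal(23.6, 6.3, size=10)    # argon laser, 150 mW (synthetic)
g3 = rng.normal(21.7, 4.4, size=10)    # argon laser, 200 mW (synthetic)

f, p = stats.f_oneway(g1, g2, g3)
print(f"one-way ANOVA: F = {f:.2f}, p = {p:.2e}")

tukey = stats.tukey_hsd(g1, g2, g3)
print(tukey)                           # pairwise comparisons with adjusted CIs
```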
Abstract:
The degree of homogeneity is normally assessed by the variability of the results of independent analyses of several (e.g., 15) normal-scale replicates. Large sample instrumental neutron activation analysis (LS-INAA) with a collimated Ge detector allows inspecting the degree of homogeneity of the initial batch material using a kilogram-size sample. The test is based on the spatial distributions of induced radioactivity. This test was applied to samples of Brazilian whole (green) coffee beans (Coffea arabica and Coffea canephora) of approximately 1 kg in the framework of the development of a coffee reference material. Results indicated that the material does not contain significant element-composition inhomogeneities between batches of approximately 30-50 g, the masses typically forming the starting base of a reference material.
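A simple numerical counterpart of such a homogeneity check is to ask whether the scatter among replicate determinations is consistent with the quoted measurement uncertainties, via a chi-square test of a common mean. The replicate concentrations and uncertainties below are illustrative, not the coffee-material data.

```python
# Minimal sketch: chi-square test of a common mean as a homogeneity check.
import numpy as np
from scipy import stats

conc = np.array([12.1, 11.8, 12.4, 12.0, 11.9, 12.3])   # mg/kg, replicates
unc  = np.array([ 0.3,  0.3,  0.3,  0.3,  0.3,  0.3])   # 1-sigma uncertainties

w = 1.0 / unc ** 2
mean = np.sum(w * conc) / np.sum(w)                      # weighted mean
chi2 = np.sum(((conc - mean) / unc) ** 2)
dof = len(conc) - 1
p = stats.chi2.sf(chi2, dof)

print(f"weighted mean = {mean:.2f} mg/kg")
print(f"chi2/dof = {chi2 / dof:.2f}, p = {p:.2f}  (small p suggests inhomogeneity)")
```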