963 results for count data models
Abstract:
Disconnectivity between Default Mode Network (DMN) nodes can cause clinical symptoms and cognitive deficits in Alzheimer's disease (AD). We aimed to examine the structural connectivity between DMN nodes and to verify the extent to which white matter disconnection affects cognitive performance. MRI data of 76 subjects (25 mild AD patients, 21 amnestic Mild Cognitive Impairment subjects and 30 controls) were acquired on a 3.0 T scanner. ExploreDTI software (fractional anisotropy threshold = 0.25, angular threshold = 60°) calculated axial, radial, and mean diffusivities, fractional anisotropy and streamline count. AD patients showed lower fractional anisotropy (P=0.01) and streamline count (P=0.029), and higher radial diffusivity (P=0.014), than controls in the cingulum. After correction for white matter atrophy, only fractional anisotropy and radial diffusivity remained significantly different between AD patients and controls (P=0.003 and P=0.05). In the parahippocampal bundle, AD patients had lower mean and radial diffusivities (P=0.048 and P=0.013) than controls, of which only radial diffusivity survived the white matter adjustment (P=0.05). Regression models revealed that cognitive performance is also accounted for by white matter microstructural values. Structural connectivity within the DMN is important for the execution of high-complexity tasks, probably due to its relevant role in the integration of the network.
Abstract:
The aim of this study was to comparatively assess dental arch width, in the canine and molar regions, by means of direct measurements from plaster models, photocopies and digitized images of the models. The sample consisted of 130 pairs of plaster models, photocopies and digitized images of the models of white patients (n = 65) of both genders, with Class I and Class II Division 1 malocclusions, treated with standard Edgewise mechanics and extraction of the four first premolars. Maxillary and mandibular intercanine and intermolar widths were measured by a calibrated examiner, before and after orthodontic treatment, using the three modes of reproduction of the dental arches. Dispersion of the pre- and posttreatment intra-arch linear measurements (mm) was represented as box plots. The three measuring methods were compared by one-way repeated-measures ANOVA (α = 0.05). Initial/final mean values varied as follows: 33.94 to 34.29 mm / 34.49 to 34.66 mm (maxillary intercanine width); 26.23 to 26.26 mm / 26.77 to 26.84 mm (mandibular intercanine width); 49.55 to 49.66 mm / 47.28 to 47.45 mm (maxillary intermolar width); and 43.28 to 43.41 mm / 40.29 to 40.46 mm (mandibular intermolar width). There were no statistically significant differences between the mean dental arch widths estimated by the three methods, either before or after orthodontic treatment. It may be concluded that photocopies and digitized images of the plaster models provide reliable reproductions of the dental arches for obtaining transverse intra-arch measurements.
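A minimal sketch of the statistical comparison described above: a one-way repeated-measures ANOVA testing whether the three reproduction methods yield different mean arch widths. The column names and toy measurements are hypothetical, not data from the study, and statsmodels' AnovaRM is used as a stand-in for whatever software the authors employed.

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Toy data: each case measured once by each reproduction method (wide -> long format).
wide = pd.DataFrame({
    "case": [1, 2, 3, 4, 5],
    "plaster":   [33.9, 34.1, 34.3, 33.8, 34.0],
    "photocopy": [34.0, 34.2, 34.2, 33.9, 34.1],
    "digitized": [33.8, 34.1, 34.4, 33.7, 34.0],
})
long = wide.melt(id_vars="case", var_name="method", value_name="width_mm")

# alpha = 0.05, as in the study; a non-significant F suggests the methods agree.
res = AnovaRM(long, depvar="width_mm", subject="case", within=["method"]).fit()
print(res.summary())
```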
Abstract:
The purpose of this study was to develop and validate equations to estimate the aboveground phytomass of a 30-year-old plot of Atlantic Forest. In two 100 m² plots, a total of 82 trees were cut down at ground level. For each tree, height and diameter were measured. Leaves and woody material were separated in order to determine their fresh weights under field conditions. Samples of each fraction were oven-dried at 80 °C to constant weight to determine their dry weight. The tree data were divided into two random samples: one was used to develop the regression equations, and the other for validation. The models were developed using simple linear regression analysis, in which the dependent variable was the dry mass (DW) and the independent variables were height (h), diameter (d) and d²h. The validation was carried out using Pearson's correlation coefficient, the paired Student's t-test and the standard error of the estimate. The best equations to estimate aboveground phytomass were: lnDW = -3.068 + 2.522 lnd (r² = 0.91; s_y/x = 0.67) and lnDW = -3.676 + 0.951 ln(d²h) (r² = 0.94; s_y/x = 0.56).
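As a quick illustration of the model form used above (not the authors' code), the sketch below fits the log-linear allometric equation lnDW = a + b·lnd by ordinary least squares; the diameter and dry-weight values are invented for the example.

```python
import numpy as np
import statsmodels.api as sm

d  = np.array([2.1, 3.5, 5.0, 7.2, 9.8, 12.4])      # diameter (cm), toy values
dw = np.array([0.4, 1.6, 4.1, 10.3, 22.0, 41.5])     # dry weight (kg), toy values

X = sm.add_constant(np.log(d))        # intercept + ln(d)
fit = sm.OLS(np.log(dw), X).fit()
a, b = fit.params
print(f"lnDW = {a:.3f} + {b:.3f} ln(d),  r^2 = {fit.rsquared:.2f}")
```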
Abstract:
In this work we report a comparison of theoretical models commonly used to fit the temperature dependence of the fundamental energy gap of semiconductor materials. We used the theoretical models of Viña, Pässler-p and Pässler-ρ to fit several sets of experimental data available in the literature for the energy gap of GaAs in the temperature range from 12 to 974 K. By performing several fits for different values of the upper limit of the analyzed temperature range (Tmax), we were able to follow in a systematic way the evolution of the fitting parameters up to the high-temperature limit, and to compare the zero-point values obtained from the different models, found by extrapolating the linear high-temperature dependence of the gap back to T = 0 K, with the value determined from the dependence of the gap on isotope mass. Using experimental data measured by absorption spectroscopy, we observed the non-linear behavior of Eg(T) of GaAs for T > ΘD.
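For readers who want to reproduce this kind of fit, the sketch below shows one way to fit a Viña-type Bose-Einstein expression, E(T) = E_B − a_B[1 + 2/(exp(Θ/T) − 1)], to gap-versus-temperature data with scipy, restricting the fit to T ≤ Tmax as described above. The functional form written here, the parameter names and the synthetic data are assumptions for the example, not values taken from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def vina(T, E_B, a_B, Theta):
    """Bose-Einstein (Vina-type) expression for the gap's temperature dependence."""
    return E_B - a_B * (1.0 + 2.0 / (np.exp(Theta / T) - 1.0))

# Toy GaAs-like data (eV) on a 12-300 K grid; replace with measured Eg(T).
rng = np.random.default_rng(0)
T = np.linspace(12.0, 300.0, 30)
Eg = vina(T, 1.571, 0.057, 240.0) + rng.normal(0, 5e-4, T.size)

Tmax = 300.0                          # upper limit of the fitted temperature range
mask = T <= Tmax
popt, pcov = curve_fit(vina, T[mask], Eg[mask], p0=[1.57, 0.05, 250.0])
print(dict(zip(["E_B", "a_B", "Theta"], popt)))
```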
Abstract:
The objective of this study was to estimate calibration regressions for the dietary data measured with the quantitative food frequency questionnaire (QFFQ) in the Natural History of HPV Infection in Men (HIM) Study in Brazil. A sample of 98 individuals from the HIM study answered one QFFQ and three 24-hour recalls (24HR) at interviews. The calibration was performed using linear regression analysis in which the 24HR was the dependent variable and the QFFQ was the independent variable. Age, body mass index, physical activity, income and schooling were used as adjustment variables in the models. The geometric means of the 24HR and the calibration-corrected QFFQ were statistically equal. Scatter plots between the instruments showed increased correlation after the correction, although the points were more dispersed for the models with lower explanatory power. Identification of the calibration regressions for the dietary data of the HIM study will make it possible to estimate the effect of diet on HPV infection corrected for the measurement error of the QFFQ.
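A minimal sketch of the calibration step described above, under the assumption of a simple OLS regression of the 24HR mean on the QFFQ estimate plus covariates; the variable names and toy numbers are hypothetical, not the HIM Study data.

```python
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "r24_mean": [1850, 2100, 1700, 2400, 1950],   # mean of three 24HR (kcal), toy
    "qffq":     [2000, 2300, 1600, 2600, 2050],   # QFFQ estimate (kcal), toy
    "age":      [32, 45, 28, 51, 39],
    "bmi":      [24.1, 27.3, 22.5, 29.0, 25.8],
})

# 24HR is the dependent variable, QFFQ the independent one, with covariate adjustment.
model = smf.ols("r24_mean ~ qffq + age + bmi", data=df).fit()
df["qffq_calibrated"] = model.predict(df)          # calibration-corrected intake
print(model.params)
```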
Abstract:
study-specific results, their findings should be interpreted with caution
Abstract:
Gene clustering is a useful exploratory technique for grouping together genes with similar expression levels under distinct cell cycle phases or distinct conditions. It helps the biologist to identify potentially meaningful relationships between genes. In this study, we propose a clustering method based on multivariate normal mixture models, in which the number of clusters is predicted via sequential hypothesis tests: at each step, the method considers a mixture model of m components (m = 2 in the first step) and tests whether it should in fact be m - 1. If the hypothesis is rejected, m is increased and a new test is carried out. The method continues (increasing m) until the hypothesis is accepted. The theoretical core of the method is the full Bayesian significance test, an intuitive Bayesian approach that requires neither model-complexity penalization nor positive prior probabilities for sharp hypotheses. Numerical experiments were based on a cDNA microarray dataset consisting of expression levels of 205 genes belonging to four functional categories, for 10 distinct strains of Saccharomyces cerevisiae. To analyze the method's sensitivity to data dimension, we performed principal components analysis on the original dataset and predicted the number of classes using 2 to 10 principal components. Compared to Mclust (model-based clustering), our method shows more consistent results.
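The loop below sketches the sequential procedure just described, using scikit-learn's GaussianMixture. The stopping rule shown (a BIC comparison) is only a stand-in for the full Bayesian significance test used by the authors, so this illustrates the structure of the algorithm rather than the method itself; the data are synthetic.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 3)),
               rng.normal(4, 1, (50, 3))])   # toy "expression" data with 2 groups

m = 2
while True:
    gm_m  = GaussianMixture(n_components=m,     random_state=0).fit(X)
    gm_m1 = GaussianMixture(n_components=m - 1, random_state=0).fit(X)
    # Accept m-1 components (stop) when the smaller model is not worse by the criterion.
    if gm_m.bic(X) >= gm_m1.bic(X):
        n_clusters = m - 1
        break
    m += 1        # hypothesis rejected: increase m and test again

print("predicted number of clusters:", n_clusters)
```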
Abstract:
Nowadays, digital computer systems and networks are the main engineering tools, used in the planning, design, operation, and control of buildings, transportation, machinery, businesses, and life-sustaining devices of all sizes. Consequently, computer viruses have become one of the most important sources of uncertainty, reducing the reliability of vital activities. Many antivirus programs have been developed, but they are limited to detecting and removing infections based on prior knowledge of the virus code. Despite their good adaptation capability, these programs work only like vaccines against known diseases and cannot prevent new infections based on the network state. Here, we model the dynamics of computer virus propagation and relate it to other notable events occurring in the network, allowing preventive policies to be established in network management. Data on three different viruses were collected from the Internet, and two identification techniques, autoregressive and Fourier analyses, were applied, showing that it is possible to forecast the propagation dynamics of a new virus using data collected from viruses that previously infected the network.
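An illustrative sketch (not the paper's code) of the two identification techniques mentioned above, applied to a synthetic daily incidence series: an autoregressive fit with statsmodels and a Fourier spectrum with numpy.

```python
import numpy as np
from statsmodels.tsa.ar_model import AutoReg

rng = np.random.default_rng(1)
t = np.arange(365)
# Synthetic virus-incidence series with a weekly cycle plus noise (toy data only).
incidence = 100 + 30 * np.sin(2 * np.pi * t / 7) + rng.normal(0, 5, t.size)

# Autoregressive identification: coefficients summarize the propagation dynamics.
ar_fit = AutoReg(incidence, lags=7).fit()
print("AR coefficients:", np.round(ar_fit.params, 3))

# Fourier analysis: dominant frequencies of the propagation pattern.
spectrum = np.abs(np.fft.rfft(incidence - incidence.mean()))
freqs = np.fft.rfftfreq(incidence.size, d=1.0)            # cycles per day
print("dominant period (days):", 1.0 / freqs[np.argmax(spectrum)])
```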
Abstract:
Diagnostic methods have been an important tool in regression analysis for detecting anomalies, such as departures from the error assumptions and the presence of outliers and influential observations in the fitted models. Assuming censored data, we considered classical and Bayesian analyses, taking non-informative priors for the parameters of a model with a cure fraction. The Bayesian approach used Markov chain Monte Carlo methods with Metropolis-Hastings steps to obtain the posterior summaries of interest. Influence measures, such as local influence, total local influence of an individual, local influence on predictions, and generalized leverage, were derived, analyzed and discussed for survival data with a cure fraction and covariates. The relevance of the approach was illustrated with a real data set, where it is shown that, by removing the most influential observations, the decision about which model best fits the data is changed.
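A generic Metropolis-Hastings sketch, not the authors' cure-fraction model: it samples the posterior of an exponential hazard rate from right-censored survival data under a vague prior, just to make the MCMC machinery referred to above concrete. All data and choices below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(2)
times = np.array([1.2, 0.7, 3.4, 2.1, 5.0, 0.9, 4.2])   # toy survival times
event = np.array([1,   1,   0,   1,   0,   1,   1])     # 1 = observed, 0 = censored

def log_post(log_lam):
    lam = np.exp(log_lam)
    # Exponential log-likelihood with right censoring, flat prior on log(lambda).
    return event.sum() * np.log(lam) - lam * times.sum()

samples, cur = [], 0.0
for _ in range(5000):
    prop = cur + rng.normal(0, 0.3)                      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(cur):
        cur = prop                                       # Metropolis-Hastings accept step
    samples.append(np.exp(cur))

print("posterior mean hazard:", np.mean(samples[1000:]))  # drop burn-in
```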
Abstract:
Survival or longevity is an economically important trait in beef cattle. The main obstacles to its inclusion among selection criteria are the delayed recording of phenotypic data and the high computational demand of including survival in proportional hazards models. Thus, identification of a longevity-correlated trait that could be recorded early in life would be very useful for selection purposes. We estimated the genetic relationship of survival with productive and reproductive traits in Nellore cattle, including weaning weight (WW), post-weaning growth (PWG), muscularity (MUSC), scrotal circumference at 18 months (SC18), and heifer pregnancy (HP). Survival was measured in discrete time intervals and modeled through a sequential threshold model. Five independent bivariate Bayesian analyses were performed, each pairing cow survival with one of the five productive and reproductive traits. Posterior mean estimates of heritability (standard deviation in parentheses) were 0.55 (0.01) for WW, 0.25 (0.01) for PWG, 0.23 (0.01) for MUSC, and 0.48 (0.01) for SC18. Posterior mean estimates (95% confidence interval in parentheses) of the genetic correlation with survival were 0.16 (0.13-0.19), 0.30 (0.25-0.34), 0.31 (0.25-0.36), 0.07 (0.02-0.12), and 0.82 (0.78-0.86) for WW, PWG, MUSC, SC18, and HP, respectively. Based on the high posterior mean estimates of the genetic correlation and heritability (0.54) for HP, the expected progeny difference for HP can be used to select bulls for longevity, as can those for post-weaning gain and muscle score.
Abstract:
The mass function of cluster-size halos and their redshift distribution are computed for 12 distinct accelerating cosmological scenarios and confronted with the predictions of the conventional flat Lambda CDM model. The comparison with Lambda CDM is performed in two steps. First, we determine the free parameters of all models through a joint analysis involving the latest cosmological data, using type Ia supernovae, the cosmic microwave background shift parameter, and baryon acoustic oscillations. Apart from a braneworld-inspired cosmology, it is found that the derived Hubble relation of the remaining models reproduces the Lambda CDM results with approximately the same degree of statistical confidence. Second, in order to distinguish the different dark energy models from the expectations of Lambda CDM, we analyze the predicted cluster-size halo redshift distribution on the basis of two future cluster surveys: (i) an X-ray survey based on the eROSITA satellite, and (ii) a Sunyaev-Zel'dovich survey based on the South Pole Telescope. As a result, we find that the predictions of 8 out of 12 dark energy models can be clearly distinguished from the Lambda CDM cosmology, while the predictions of 4 models are statistically equivalent to those of the Lambda CDM model, as far as the expected cluster mass function and redshift distribution are concerned. The present analysis suggests that such a technique is very competitive with independent tests probing the late-time evolution of the Universe and the associated dark energy effects.
Abstract:
The exact composition of a specific class of compact stars, historically referred to as "neutron stars," is still largely unknown. Possibilities ranging from hadronic to quark degrees of freedom, including self-bound versions of the latter, have been proposed. In this work we specifically address the suitability of strange star models (including pairing interactions), in the light of new measurements available for four compact stars. The analysis shows that these data might be explained by such an exotic equation of state, in fact selecting a small window in parameter space, but new precise measurements and further theoretical developments are still needed to settle the subject.
Abstract:
The kinematic approach to cosmological tests provides direct evidence for the present accelerating stage of the Universe that depends neither on the validity of general relativity nor on the matter-energy content of the Universe. In this context, we consider a linear two-parameter expansion for the deceleration parameter, q(z) = q_0 + q_1 z, where q_0 and q_1 are arbitrary constants to be constrained by the Union supernovae data. Assuming a flat Universe, we find that the best fit to the pair of free parameters is (q_0, q_1) = (-0.73, 1.5), whereas the transition redshift is z_t = 0.49 (+0.14, -0.07 at 1 sigma; +0.54, -0.12 at 2 sigma). This kinematic result is in agreement with some independent analyses and more easily accommodates many dynamical flat models (like Lambda CDM).
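As a quick consistency check using only the numbers quoted above, the transition redshift follows from setting q(z_t) = 0 in the linear expansion, i.e. z_t = -q_0/q_1:

```python
# Deceleration-to-acceleration transition implied by the best-fit linear expansion.
q0, q1 = -0.73, 1.5
z_t = -q0 / q1
print(f"z_t = {z_t:.2f}")   # ~0.49, matching the best-fit value quoted above
```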
Abstract:
Background: Bayesian mixing models have allowed for the inclusion of uncertainty and prior information in the analysis of trophic interactions using stable isotopes. Formulating prior distributions is relatively straightforward when incorporating dietary data. However, the use of data that are related, but not directly proportional, to diet (such as prey availability data) is often problematic, because such information is not necessarily predictive of diet, and the information required to build a reliable prior distribution for all prey species is often unavailable. Omitting prey availability data affects the estimation of a predator's diet and introduces the strong assumption of consumer ultrageneralism (where all prey are consumed in equal proportions), particularly when multiple prey have similar isotope values. Methodology: We develop a procedure to incorporate prey availability data into Bayesian mixing models conditional on the similarity of isotope values between two prey. If a pair of prey have similar isotope values (resulting in highly uncertain mixing model results), our model increases the weight of availability data in estimating the contribution of prey to a predator's diet. We test the utility of this method in an intertidal community against independently measured feeding rates. Conclusions: Our results indicate that our weighting procedure increases the accuracy with which consumer diets can be inferred in situations where multiple prey have similar isotope values. This suggests that the exchange of formalism for predictive power is merited, particularly when the relationship between prey availability and a predator's diet cannot be assumed for all species in a system.
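A schematic illustration, not the authors' model: one simple way to let availability data carry more weight when two prey have similar isotope values is to blend an availability-based prior with a uniform one using a 0-1 similarity weight. The similarity function, prey values and weighting scheme below are all assumptions made for the example.

```python
import numpy as np

def similarity(mu1, mu2, sd1, sd2):
    """Crude 0-1 similarity of two prey isotope signatures (1 = indistinguishable)."""
    return float(np.exp(-((mu1 - mu2) ** 2) / (2 * (sd1 ** 2 + sd2 ** 2))))

availability = np.array([0.7, 0.3])   # toy relative availability of prey A and B
uniform      = np.array([0.5, 0.5])   # ultrageneralist assumption (equal proportions)

w = similarity(mu1=-18.2, mu2=-18.5, sd1=0.4, sd2=0.5)   # near 1: isotopes overlap
prior = w * availability + (1 - w) * uniform             # availability-weighted prior means
print("similarity weight:", round(w, 2), " prior means:", prior)
```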
Abstract:
Perturbative Quantum Chromodynamics (pQCD) predicts that the small-x gluons in the hadron wavefunction should form a Color Glass Condensate (CGC) with universal properties, i.e., the same for nucleons and nuclei. Making use of the results in V.P. Goncalves, M.S. Kugeratski, M.V.T. Machado, F.S. Navarra, Phys. Lett. B643, 273 (2006), we study the behavior of the anomalous dimension in the saturation models as a function of the photon virtuality and of the scaling variable rQ_s, since the main difference among the known parameterizations is characterized by this quantity.