14 results for Variances

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

10.00%

Publisher:

Abstract:

In many animals, sexual selection on male traits results from female mate choice decisions made during a sequence of courtship behaviors. We use a bower-building cichlid fish, Nyassachromis cf. microcephalus, to show how applying standard selection analysis to data on sequential female assessment provides new insights into sexual selection by mate choice. We first show that the cumulative selection differentials confirm previous results suggesting female choice favors males holding large volcano-shaped sand bowers. The sequential assessment analysis reveals these cumulative differentials are the result of selection acting on different bower dimensions during the courtship sequence; females choose to follow males courting from tall bowers, but choose to engage in premating circling with males holding bowers with large diameter platforms. The approach we present extends standard selection analysis by partitioning the variances of increasingly accurate estimates of male reproductive fitness and is applicable to systems in which sequential female assessment drives sexual selection on male traits.
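As an illustration of the kind of quantity involved (not the authors' exact analysis), a standardized selection differential at one courtship stage is the mean trait value of the males chosen at that stage minus the mean of all available males, expressed in phenotypic standard deviations. A minimal sketch with hypothetical bower measurements:

```python
import numpy as np

def selection_differential(trait, chosen):
    """Standardized selection differential: mean trait of chosen males
    minus mean of all males, in units of the phenotypic SD."""
    trait = np.asarray(trait, dtype=float)
    chosen = np.asarray(chosen, dtype=bool)
    return (trait[chosen].mean() - trait.mean()) / trait.std(ddof=1)

# hypothetical data: bower height (cm) and which males a female followed
height = [12.0, 15.5, 9.8, 18.2, 14.1, 11.3]
followed = [False, True, False, True, True, False]
print(selection_differential(height, followed))
```

Computing this quantity stage by stage (following, circling, spawning) is what allows the cumulative differential to be partitioned across the courtship sequence.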

Relevance:

10.00%

Publisher:

Abstract:

The problem of estimating the number of motor units N in a muscle is embedded in a general stochastic model using the notion of thinning from point process theory. In the paper, a new moment-type estimator for the number of motor units in a muscle is defined, derived using random sums with independently thinned terms. Asymptotic normality of the estimator is shown, and its practical value is demonstrated with bootstrap and approximate confidence intervals for a data set from a 31-year-old, healthy, right-handed female volunteer. Moreover, simulation results are presented, and Monte Carlo-based quantiles, means, and variances are calculated for N ∈ {300, 600, 1000}.
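The abstract does not reproduce the estimator itself. As a deliberately simplified, hedged stand-in (not the estimator defined in the paper), suppose each of N units responds to a stimulus independently with known probability p and a responding unit contributes an amplitude with known mean mu; the compound response is then a random sum with independently thinned terms, and E[S] = N·p·mu gives a toy method-of-moments estimate of N:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_responses(N, p, mu, sigma, n_trials, rng):
    """Compound responses S = sum of amplitudes of the units that fire,
    i.e. random sums with independently thinned terms."""
    fires = rng.random((n_trials, N)) < p
    amps = rng.normal(mu, sigma, size=(n_trials, N))
    return (fires * amps).sum(axis=1)

def moment_estimate_N(responses, p, mu):
    """Toy moment estimator: E[S] = N * p * mu  =>  N_hat = mean(S) / (p * mu)."""
    return responses.mean() / (p * mu)

S = simulate_responses(N=600, p=0.5, mu=50e-6, sigma=10e-6, n_trials=200, rng=rng)
print(round(moment_estimate_N(S, p=0.5, mu=50e-6)))
```

Repeating such simulations for N in {300, 600, 1000} is the kind of Monte Carlo check reported in the paper.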

Relevance:

10.00%

Publisher:

Abstract:

Knowledge of the time interval from death (post-mortem interval, PMI) has an enormous legal, criminological and psychological impact. Aiming to find an objective method for the determination of PMIs in forensic medicine, 1H-MR spectroscopy (1H-MRS) was used in a sheep head model to follow changes in brain metabolite concentrations after death. Following the characterization of newly observed metabolites (Ith et al., Magn. Reson. Med. 2002; 5: 915-920), the full set of acquired spectra was analyzed statistically to provide a quantitative estimation of PMIs with their respective confidence limits. In a first step, analytical mathematical functions are proposed to describe the time courses of 10 metabolites in the decomposing brain up to 3 weeks post-mortem. Subsequently, the inverted functions are used to predict PMIs based on the measured metabolite concentrations. Individual PMIs calculated from five different metabolites are then pooled, being weighted by their inverse variances. The predicted PMIs from all individual examinations in the sheep model are compared with known true times. In addition, four human cases with forensically estimated PMIs are compared with predictions based on single in situ MRS measurements. Interpretation of the individual sheep examinations gave a good correlation up to 250 h post-mortem, demonstrating that the predicted PMIs are consistent with the data used to generate the model. Comparison of the estimated PMIs with the forensically determined PMIs in the four human cases shows an adequate correlation. Current PMI estimations based on forensic methods typically suffer from uncertainties in the order of days to weeks without mathematically defined confidence information. In turn, a single 1H-MRS measurement of brain tissue in situ results in PMIs with defined and favorable confidence intervals in the range of hours, thus offering a quantitative and objective method for the determination of PMIs.
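Pooling several estimates by weighting each with its inverse variance is a standard step; a minimal sketch of how individual metabolite-based PMI estimates (hypothetical values) could be combined this way:

```python
import numpy as np

def inverse_variance_pool(estimates, variances):
    """Inverse-variance weighted mean and the variance of the pooled estimate."""
    estimates = np.asarray(estimates, dtype=float)
    weights = 1.0 / np.asarray(variances, dtype=float)
    pooled = np.sum(weights * estimates) / np.sum(weights)
    pooled_var = 1.0 / np.sum(weights)
    return pooled, pooled_var

# hypothetical PMIs (hours) predicted from five metabolites, with their variances
pmi = [118.0, 131.0, 124.0, 140.0, 127.0]
var = [36.0, 100.0, 49.0, 144.0, 64.0]
print(inverse_variance_pool(pmi, var))
```

The pooled variance immediately yields the confidence interval on the combined PMI, which is what gives the method its defined confidence information.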

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND: The link between decreased heart rate variability (HRV) and atherosclerosis progression is elusive. We hypothesized that reduced HRV relates to increased levels of prothrombotic factors previously shown to predict coronary risk. METHODS: We studied 257 women (aged 56 +/- 7 years) between 3 and 6 months after an acute coronary event and obtained very low frequency (VLF), low frequency (LF), and high frequency (HF) power, and LF/HF ratio from 24-hour ambulatory ECG recordings. Plasma levels of activated clotting factor VII (FVIIa), fibrinogen, von Willebrand factor antigen (VWF:Ag), and plasminogen activator inhibitor-1 (PAI-1) activity were determined, and their levels were aggregated into a standardized composite index of prothrombotic activity. RESULTS: In bivariate analyses, all HRV indices were inversely correlated with the prothrombotic index explaining between 6% and 14% of the variance (p < 0.001). After controlling for sociodemographic factors, index event, menopausal status, cardiac medication, lifestyle factors, self-rated health, metabolic variables, and heart rate, VLF power, LF power, and HF power explained 2%, 5%, and 3%, respectively, of the variance in the prothrombotic index (p < 0.012). There were also independent relationships between VLF power and PAI-1 activity, between LF power and fibrinogen, VWF:Ag, and PAI-1 activity, between HF power and FVIIa and fibrinogen, and between the LF/HF power ratio and PAI-1 activity, explaining between 2% and 3% of the respective variances (p < 0.05). CONCLUSIONS: Decreased HRV was associated with prothrombotic changes partially independent of covariates. Alteration in autonomic function might contribute to prothrombotic activity in women with coronary artery disease (CAD).
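The abstract does not spell out how the composite index was built; a common construction for a "standardized composite index" is to z-transform each marker and average the z-scores. A sketch under that assumption:

```python
import numpy as np

def composite_index(markers):
    """Average of column-wise z-scores; rows = subjects, columns = markers
    (e.g. FVIIa, fibrinogen, VWF:Ag, PAI-1 activity)."""
    markers = np.asarray(markers, dtype=float)
    z = (markers - markers.mean(axis=0)) / markers.std(axis=0, ddof=1)
    return z.mean(axis=1)
```

The share of variance in such an index explained by an HRV measure in a bivariate analysis is then simply the squared Pearson correlation between the two.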

Relevance:

10.00%

Publisher:

Abstract:

Recently, a lot of effort has been devoted to the efficient computation of kriging predictors when observations are assimilated sequentially. In particular, kriging update formulae enabling significant computational savings were derived. Taking advantage of the previous kriging mean and variance computations helps avoid a costly matrix inversion when adding one observation to the n already available ones. In addition to traditional update formulae taking into account a single new observation, Emery (2009) proposed formulae for the batch-sequential case, i.e. when several new observations are simultaneously assimilated. However, the kriging variance and covariance formulae given in Emery (2009) for the batch-sequential case are not correct. In this paper, we fix this issue and establish correct expressions for updated kriging variances and covariances when assimilating observations in parallel. An application in sequential conditional simulation finally shows that coupling update and residual substitution approaches may enable significant speed-ups.
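For a single new observation, the update corrects the previous kriging variance by the squared previous kriging covariance divided by the previous variance at the new point; in the batch case the scalar correction becomes a quadratic form in the covariance matrix of the new points. A sketch of that batch variance update in the general Gaussian-conditioning form (consult the paper for the exact corrected expressions, including the covariance updates):

```python
import numpy as np

def batch_updated_variance(s2_old, c_old, C_new):
    """Updated kriging variances after assimilating q new observations.

    s2_old : (m,)   previous kriging variances at the m prediction points
    c_old  : (m, q) previous kriging covariances between prediction points
                    and the q new observation points
    C_new  : (q, q) previous kriging covariance matrix among the new points

    s2_new(x) = s2_old(x) - c_old(x)^T  C_new^{-1}  c_old(x)
    """
    tmp = np.linalg.solve(C_new, c_old.T)         # (q, m); avoids explicit inverse
    correction = np.einsum('ij,ji->i', c_old, tmp)
    return s2_old - correction
```

The computational saving comes from reusing quantities already computed with the first n observations instead of re-inverting the full covariance matrix.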

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND Empirical research has illustrated an association between study size and relative treatment effects, but conclusions have been inconsistent about the association of study size with the risk of bias items. Small studies generally give imprecisely estimated treatment effects, and study variance can serve as a surrogate for study size. METHODS We conducted a network meta-epidemiological study analyzing 32 networks including 613 randomized controlled trials, and used Bayesian network meta-analysis and meta-regression models to evaluate the impact of trial characteristics and study variance on the results of network meta-analysis. We examined changes in relative effects and between-studies variation in network meta-regression models as a function of the variance of the observed effect size and indicators for the adequacy of each risk of bias item. Adjustment was performed both within and across networks, allowing for between-networks variability. RESULTS Imprecise studies with large variances tended to exaggerate the effects of the active or new intervention in the majority of networks, with a ratio of odds ratios of 1.83 (95% CI: 1.09, 3.32). Inappropriate or unclear conduct of random sequence generation and allocation concealment, as well as lack of blinding of patients and outcome assessors, did not materially affect the summary results. Imprecise studies also appeared to be more prone to inadequate conduct. CONCLUSIONS Compared to more precise studies, studies with large variance may give substantially different answers that alter the results of network meta-analyses for dichotomous outcomes.
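A drastically simplified, non-Bayesian analogue of the core idea: within a single pairwise comparison, regress study log odds ratios on their own variances with inverse-variance weights; the slope then measures how much imprecise studies shift the estimated effect. This is only an illustrative sketch, not the network meta-regression model used in the paper:

```python
import numpy as np
import statsmodels.api as sm

def small_study_regression(log_or, var_log_or):
    """Weighted meta-regression of effect size on its own variance."""
    log_or = np.asarray(log_or, dtype=float)
    var = np.asarray(var_log_or, dtype=float)
    X = sm.add_constant(var)                    # intercept + variance covariate
    fit = sm.WLS(log_or, X, weights=1.0 / var).fit()
    return fit.params, fit.bse                  # slope > 0: larger-variance studies give larger effects
```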

Relevance:

10.00%

Publisher:

Abstract:

Commercially available assays for the simultaneous detection of multiple inflammatory and cardiac markers in porcine blood samples are currently lacking. Therefore, this study was aimed at developing a bead-based, multiplexed flow cytometric assay to simultaneously detect porcine cytokines [interleukin (IL)-1β, IL-6, IL-10, and tumor necrosis factor alpha], chemokines (IL-8 and monocyte chemotactic protein 1), growth factors [basic fibroblast growth factor (bFGF), vascular endothelial growth factor, and platelet-derived growth factor-BB], and injury markers (cardiac troponin-I) as well as complement activation markers (C5a and sC5b-9). The method was based on the Luminex xMAP technology, resulting in the assembly of a 6-plex and an 11-plex from the respective individual singleplex assays. The assay was evaluated for dynamic range, sensitivity, cross-reactivity, intra-assay and inter-assay variance, spike recovery, and correlation between the multiplex and commercially available enzyme-linked immunosorbent assays as well as the respective singleplex. The limit of detection ranged from 2.5 to 30,000 pg/ml for all analytes (6- and 11-plex assays), except for soluble C5b-9 with a detection range of 2-10,000 ng/ml (11-plex). Typically, very low cross-reactivity (<3% and <1.4% by 11- and 6-plex, respectively) between analytes was found. Intra-assay variances ranged from 4.9 to 7.4% (6-plex) and 5.3 to 12.9% (11-plex). Inter-assay variances for cytokines were between 8.1 and 28.8% (6-plex) and 10.1 and 26.4% (11-plex). Correlation coefficients with singleplex assays for the 6-plex as well as the 11-plex were high, ranging from 0.988 to 0.997 and 0.913 to 0.999, respectively. In this study, a bead-based porcine 11-plex and 6-plex assay with good assay sensitivity, broad dynamic range, and low intra-assay variance and cross-reactivity was established. These assays therefore represent a new, useful tool for the analysis of samples generated from experiments with pigs.
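Intra- and inter-assay variance in this setting is usually reported as a coefficient of variation (CV, %); a minimal sketch of the two calculations on hypothetical replicate measurements:

```python
import numpy as np

def percent_cv(values):
    """Coefficient of variation in percent."""
    values = np.asarray(values, dtype=float)
    return 100.0 * values.std(ddof=1) / values.mean()

# intra-assay: replicates of one sample measured within a single run
print(percent_cv([102.0, 98.0, 105.0, 99.0]))

# inter-assay: the same control sample measured across several runs/days
print(percent_cv([101.0, 93.0, 110.0, 97.0, 104.0]))
```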

Relevance:

10.00%

Publisher:

Abstract:

While the use of thromboelastometry analysis (ROTEM®) in the evaluation of haemostasis is rapidly increasing, important validity parameters of the testing remain inadequately examined. We aimed to systematically study the consistency of thromboelastometry parameters within individual tests regarding measurements between different analysers, between different channels of the same analyser, between morning and afternoon measurements (circadian variation), and when measured four weeks apart. Citrated whole blood samples from 40 healthy volunteers were analysed with two analysers in parallel. EXTEM, INTEM, FIBTEM, HEPTEM and APTEM tests were conducted. A Bland-Altman comparison was performed and homogeneity of variances was tested using the Pitman test. P-value ranges were used to classify the level of homogeneity (p < 0.15: low homogeneity; p = 0.15 to 0.5: intermediate homogeneity; p > 0.5: high homogeneity). Less than half of all comparisons showed high homogeneity of variances (p > 0.5), and in about a fifth of comparisons the data distributions were heterogeneous (p < 0.15). There was no clear pattern for homogeneity. On average, comparisons of MCF, ML and LI30 measurements tended to be better, but none of the tests assessed outperformed another. In conclusion, systematic investigation reveals large differences in the results of some thromboelastometry parameters and a lack of consistency. Clinicians and scientists should take these inconsistencies into account and focus on parameters with higher homogeneity, such as MCF.
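The Pitman (Pitman-Morgan) test for equality of variances in paired measurements tests whether the differences and sums of the two paired series are correlated; a minimal sketch:

```python
import numpy as np
from scipy.stats import pearsonr

def pitman_morgan(x, y):
    """Pitman-Morgan test of equal variances for paired samples:
    var(x) == var(y)  iff  corr(x - y, x + y) == 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    r, p = pearsonr(x - y, x + y)
    return r, p

# e.g. the same EXTEM parameter measured on the two analysers in parallel:
# r, p = pitman_morgan(analyser_1_mcf, analyser_2_mcf)   # variable names hypothetical
```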

Relevance:

10.00%

Publisher:

Abstract:

Sample size calculations are advocated by the CONSORT group to justify sample sizes in randomized controlled trials (RCTs). The aim of this study was primarily to evaluate the reporting of sample size calculations, to establish the accuracy of these calculations in dental RCTs, and to explore potential predictors associated with adequate reporting. Electronic searching was undertaken in eight leading specific and general dental journals. Replication of sample size calculations was undertaken where possible. Assumed variances or odds for control and intervention groups were also compared against those observed. The relationship between the accuracy of sample size reporting and parameters including journal type, number of authors, trial design, involvement of a methodologist, single- or multi-center setting, and region and year of publication was assessed using univariable and multivariable logistic regression. Of 413 RCTs identified, sufficient information to allow replication of sample size calculations was provided in only 121 studies (29.3%). Recalculations demonstrated an overall median overestimation of sample size of 15.2% after provisions for losses to follow-up. There was evidence that journal, methodologist involvement (OR = 1.97, CI: 1.10, 3.53), multi-center setting (OR = 1.86, CI: 1.01, 3.43) and time since publication (OR = 1.24, CI: 1.12, 1.38) were significant predictors of adequate description of sample size assumptions. Among journals, JCP had the highest odds of adequately reporting sufficient data to permit sample size recalculation, followed by AJODO and JDR, with 61% (OR = 0.39, CI: 0.19, 0.80) and 66% (OR = 0.34, CI: 0.15, 0.75) lower odds, respectively. Both assumed variances and odds were found to underestimate the observed values. Presentation of sample size calculations in the dental literature is suboptimal; incorrect assumptions may have a bearing on the power of RCTs.
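Replicating a reported calculation for a continuous outcome typically uses the standard two-group formula with the stated assumed variance, effect size, alpha, power, and attrition allowance; a sketch of that recalculation (the exact formula used by any given trial may differ):

```python
import math
from scipy.stats import norm

def n_per_group(sd, delta, alpha=0.05, power=0.80, attrition=0.0):
    """Two-sample comparison of means:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * sd^2 / delta^2,
    inflated for the expected loss to follow-up."""
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    n = 2 * (z * sd / delta) ** 2
    return math.ceil(n / (1 - attrition))

# e.g. assumed SD 1.2 mm, clinically relevant difference 0.8 mm, 10% dropout (hypothetical values)
print(n_per_group(sd=1.2, delta=0.8, attrition=0.10))
```

Comparing the assumed SD (or odds) in such a calculation against the values actually observed in the trial is how the over- or underestimation reported above was assessed.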

Relevance:

10.00%

Publisher:

Abstract:

OBJECTIVE: Bell, Marcus, and Goodlad (2013) recently conducted a meta-analysis of randomized controlled additive trials and found that adding a component to an existing treatment, compared with the existing treatment alone, produced larger effect sizes on targeted outcomes at 6-month follow-up than at termination, an effect they labeled a sleeper effect. One of the limitations of Bell et al.'s detection of the sleeper effect was that they did not conduct a statistical test of the size of the effect at follow-up versus termination. METHOD: To statistically test whether the differences in effect sizes between the additive conditions and the control conditions at follow-up differed from those at termination, we used a restricted maximum-likelihood random-effects model with known variances to conduct a multilevel longitudinal meta-analysis (k = 30). RESULTS: Although the small effects at termination detected by Bell et al. were replicated (ds = 0.17-0.23), none of the analyses of growth from termination to follow-up produced statistically significant effects (ds < 0.08; p > .20), and when asymmetry was considered using the trim-and-fill procedure or when only studies published after 2000 were analyzed, the magnitude of the sleeper effect was negligible (d = 0.00). CONCLUSION: There is no empirical evidence to support the sleeper effect.
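As an illustrative stand-in for the REML multilevel model (a simplified sketch, not the authors' exact analysis), a random-effects pooling of effect sizes with known within-study variances can be done with the DerSimonian-Laird estimator of the between-study variance:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Random-effects pooled estimate with the DerSimonian-Laird tau^2."""
    d = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)
    k = len(d)
    tau2 = max(0.0, (Q - (k - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_re = 1.0 / (v + tau2)
    d_re = np.sum(w_re * d) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return d_re, se_re, tau2
```

Applying such a pooling separately to termination and follow-up effect sizes (or to their within-study differences) is the kind of test the original meta-analysis did not report.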

Relevance:

10.00%

Publisher:

Abstract:

* Hundreds of experiments have now manipulated species richness (SR) of various groups of organisms and examined how this aspect of biological diversity influences ecosystem functioning. Ecologists have recently expanded this field to look at whether phylogenetic diversity (PD) among species, often quantified as the sum of branch lengths on a molecular phylogeny leading to all species in a community, also predicts ecological function. Some have hypothesized that phylogenetic divergence should be a better predictor of ecological function than SR because evolutionary relatedness represents the degree of ecological and functional differentiation among species. But studies to date have provided mixed support for this hypothesis.

* Here, we reanalyse data from 16 experiments that manipulated plant SR in grassland ecosystems and examined the impact on above-ground biomass production over multiple time points. Using a new molecular phylogeny of the plant species used in these experiments, we quantified how the PD of plants impacts average community biomass production as well as the stability of community biomass production through time.

* Using four complementary analyses, we show that, after statistically controlling for variation in SR, PD (the sum of branches in a molecular phylogenetic tree connecting all species in a community) is related neither to mean community biomass nor to the temporal stability of biomass. These results run counter to past claims. However, after controlling for SR, PD was positively related to variation in community biomass over time, owing to an increase in the variances of individual species, but this relationship was not strong enough to influence community stability.

* In contrast to the non-significant relationships between PD, biomass and stability, our analyses show that SR per se tends to increase the mean biomass production of plant communities after controlling for PD. The relationship between SR and temporal variation in community biomass was positive, non-significant or negative depending on which analysis was used. However, the increases in community biomass with SR, independently of PD, always led to increased stability. These results suggest that PD is no better a predictor of ecosystem functioning than SR.

* Synthesis. Our study on grasslands offers a cautionary tale when trying to relate PD to ecosystem functioning, suggesting that there may be ecologically important trait and functional variation among species that is not explained by phylogenetic relatedness. Our results fail to support the hypothesis that the conservation of evolutionarily distinct species would be more effective than the conservation of SR as a way to maintain productive and stable communities under changing environmental conditions.
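PD as defined above (Faith's PD) is the sum of branch lengths of the subtree connecting the species present in a community; a minimal sketch on a parent-pointer tree representation (a hypothetical toy tree, not the authors' phylogeny):

```python
def faith_pd(parent, branch_length, community):
    """Sum of branch lengths on the paths from each community tip to the root,
    counting each branch once (rooted version of Faith's PD).

    parent        : dict mapping node -> parent node (root absent)
    branch_length : dict mapping node -> length of the branch to its parent
    community     : iterable of tip names present in the community
    """
    used = set()
    for tip in community:
        node = tip
        while node in parent and node not in used:
            used.add(node)
            node = parent[node]
    return sum(branch_length[n] for n in used)

# toy tree: ((A:1,B:1)X:2,C:3)root
parent = {"A": "X", "B": "X", "X": "root", "C": "root"}
brlen = {"A": 1.0, "B": 1.0, "X": 2.0, "C": 3.0}
print(faith_pd(parent, brlen, ["A", "C"]))   # 1 + 2 + 3 = 6
```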

Relevance:

10.00%

Publisher:

Abstract:

HYPOTHESIS A multielectrode probe in combination with an optimized stimulation protocol could provide sufficient sensitivity and specificity to act as an effective safety mechanism for preservation of the facial nerve in case of an unsafe drill distance during image-guided cochlear implantation. BACKGROUND A minimally invasive cochlear implantation is enabled by image-guided and robotic-assisted drilling of an access tunnel to the middle ear cavity. The approach requires the drill to pass at distances below 1 mm from the facial nerve, and thus safety mechanisms for protecting this critical structure are required. Neuromonitoring is currently used to determine facial nerve proximity in mastoidectomy but lacks the sensitivity and specificity necessary to effectively distinguish the close distance ranges experienced in the minimally invasive approach, possibly because of current shunting of uninsulated stimulating drilling tools in the drill tunnel and because of nonoptimized stimulation parameters. To this end, we propose an advanced neuromonitoring approach using varying levels of stimulation parameters together with an integrated bipolar and monopolar stimulating probe. MATERIALS AND METHODS An in vivo study (sheep model) was conducted in which measurements at specifically planned and navigated lateral distances from the facial nerve were performed to determine whether specific sets of stimulation parameters in combination with the proposed neuromonitoring system could reliably detect an imminent collision with the facial nerve. For the accurate positioning of the neuromonitoring probe, a dedicated robotic system for image-guided cochlear implantation was used and drilling accuracy was corrected on postoperative microcomputed tomographic images. RESULTS From 29 trajectories analyzed in five different subjects, a correlation between stimulus threshold and drill-to-facial nerve distance was found in trajectories colliding with the facial nerve (distance <0.1 mm). The shortest pulse duration that provided the highest linear correlation between stimulation intensity and drill-to-facial nerve distance was 250 μs. Only at low stimulus intensity values (≤0.3 mA) and with the bipolar configurations of the probe did the neuromonitoring system enable sufficient lateral specificity (>95%) at distances to the facial nerve below 0.5 mm. However, reduction of the stimulus threshold to 0.3 mA or lower resulted in a decrease of the facial nerve distance detection range to below 0.1 mm (>95% sensitivity). Subsequent histopathology follow-up of three representative cases in which the neuromonitoring system could reliably detect a collision with the facial nerve (distance <0.1 mm) revealed either mild or nonexistent damage to the nerve fascicles. CONCLUSION Our findings suggest that, although no general correlation between facial nerve distance and stimulation threshold existed, possibly because of variances in patient-specific anatomy, correlations at very close distances to the facial nerve and high levels of specificity would enable a binary response warning system to be developed using the proposed probe at low stimulation currents.

Relevance:

10.00%

Publisher:

Abstract:

BACKGROUND In an effort to reduce firearm mortality rates in the USA, US states have enacted a range of firearm laws to either strengthen or deregulate the existing main federal gun control law, the Brady Law. We set out to determine the independent association of different firearm laws with overall firearm mortality, homicide firearm mortality, and suicide firearm mortality across all US states. We also projected the potential reduction of firearm mortality if the three most strongly associated firearm laws were enacted at the federal level. METHODS We constructed a cross-sectional, state-level dataset from Nov 1, 2014, to May 15, 2015, using counts of firearm-related deaths in each US state for the years 2008-10 (stratified by intent [homicide and suicide]) from the US Centers for Disease Control and Prevention's Web-based Injury Statistics Query and Reporting System, data about 25 firearm state laws implemented in 2009, and state-specific characteristics such as firearm ownership for 2013, firearm export rates, and non-firearm homicide rates for 2009, and unemployment rates for 2010. Our primary outcome measure was overall firearm-related mortality per 100 000 people in the USA in 2010. We used Poisson regression with robust variances to derive incidence rate ratios (IRRs) and 95% CIs. FINDINGS 31 672 firearm-related deaths occurred in 2010 in the USA (10·1 per 100 000 people; mean state-specific count 631·5 [SD 629·1]). Of 25 firearm laws, nine were associated with reduced firearm mortality, nine were associated with increased firearm mortality, and seven had an inconclusive association. After adjustment for relevant covariates, the three state laws most strongly associated with reduced overall firearm mortality were universal background checks for firearm purchase (multivariable IRR 0·39 [95% CI 0·23-0·67]; p=0·001), ammunition background checks (0·18 [0·09-0·36]; p<0·0001), and identification requirement for firearms (0·16 [0·09-0·29]; p<0·0001). Projected federal-level implementation of universal background checks for firearm purchase could reduce national firearm mortality from 10·35 to 4·46 deaths per 100 000 people, background checks for ammunition purchase could reduce it to 1·99 per 100 000, and firearm identification to 1·81 per 100 000. INTERPRETATION Very few of the existing state-specific firearm laws are associated with reduced firearm mortality, and this evidence underscores the importance of focusing on relevant and effective firearms legislation. Implementation of universal background checks for the purchase of firearms or ammunition, and firearm identification nationally could substantially reduce firearm mortality in the USA. FUNDING None.
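The core model is a Poisson regression of state-level death counts with robust (sandwich) standard errors, reporting exponentiated coefficients as incidence rate ratios; a minimal sketch with statsmodels (column names are hypothetical, not those of the study dataset):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# df: one row per state with columns 'deaths', 'population',
# 'universal_background_check', 'firearm_ownership', ... (hypothetical names)
def poisson_irr(df, covariates):
    X = sm.add_constant(df[covariates])
    model = sm.GLM(df["deaths"], X,
                   family=sm.families.Poisson(),
                   offset=np.log(df["population"]))      # rates per population
    fit = model.fit(cov_type="HC0")                      # robust (sandwich) variances
    irr = np.exp(fit.params)                             # incidence rate ratios
    ci = np.exp(fit.conf_int())                          # 95% CIs on the IRR scale
    return pd.concat([irr.rename("IRR"), ci], axis=1)
```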

Relevance:

10.00%

Publisher:

Abstract:

The decomposition technique introduced by Blinder (1973) and Oaxaca (1973) is widely used to study outcome differences between groups. For example, the technique is commonly applied to the analysis of the gender wage gap. However, despite the procedure's frequent use, very little attention has been paid to the issue of estimating the sampling variances of the decomposition components. We therefore suggest an approach that introduces consistent variance estimators for several variants of the decomposition. The accuracy of the new estimators under ideal conditions is illustrated with the results of a Monte Carlo simulation. As a second check, the estimators are compared to bootstrap results obtained using real data. In contrast to previously proposed statistics, the new method takes into account the extra variation imposed by stochastic regressors.
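The two-fold Blinder-Oaxaca decomposition splits the mean outcome gap into an "explained" part due to differences in characteristics and an "unexplained" part due to differences in coefficients; a minimal sketch of the point estimates only (the paper's contribution, the analytic variance estimators for these components, is not reproduced here):

```python
import numpy as np

def oaxaca_twofold(X_a, y_a, X_b, y_b):
    """Two-fold decomposition with group B coefficients as the reference:
    gap = mean(y_a) - mean(y_b)
        = (Xbar_a - Xbar_b) @ beta_b     [explained by characteristics]
        + Xbar_a @ (beta_a - beta_b)     [unexplained / coefficients]
    Design matrices are assumed to include a constant column."""
    beta_a, *_ = np.linalg.lstsq(X_a, y_a, rcond=None)
    beta_b, *_ = np.linalg.lstsq(X_b, y_b, rcond=None)
    xbar_a, xbar_b = X_a.mean(axis=0), X_b.mean(axis=0)
    explained = (xbar_a - xbar_b) @ beta_b
    unexplained = xbar_a @ (beta_a - beta_b)
    return explained, unexplained
```

Because the group means and estimated coefficients are themselves random, the decomposition components have sampling variances; bootstrapping the whole procedure is the usual benchmark against which analytic variance estimators, like those proposed here, are compared.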