47 results for Data replication processes
Abstract:
High-density spatial and temporal sampling of EEG data enhances the quality of results of electrophysiological experiments. Because EEG sources typically produce widespread electric fields (see Chapter 3) and operate at frequencies well below the sampling rate, increasing the number of electrodes and time samples will not necessarily increase the number of observed processes, but mainly increases the accuracy with which those processes are represented. This is notably the case when inverse solutions are computed. As a consequence, increasing the sampling in space and time increases the redundancy of the data (in space, because electrodes are correlated due to volume conduction; in time, because neighboring time points are correlated), while the degrees of freedom of the data change only little. This has to be taken into account when statistical inferences are to be made from the data. However, many ERP studies have disregarded the intrinsic correlation structure of the data. Often, some electrodes or groups of electrodes are selected a priori as the analysis entity and treated as repeated (within-subject) measures that are analyzed using standard univariate statistics. The increased spatial resolution obtained with more electrodes is thus poorly represented by the resulting statistics. In addition, the assumptions made (e.g. in terms of what constitutes a repeated measure) are not supported by what we know about the properties of EEG data. From the point of view of physics (see Chapter 3), the natural "atomic" analysis entity of EEG and ERP data is the scalp electric field.
Abstract:
Monte Carlo simulation was used to evaluate properties of a simple Bayesian MCMC analysis of the random-effects model for single-group Cormack-Jolly-Seber capture-recapture data. The MCMC method is applied to the model via a logit link, so the parameters p and S are on a logit scale, where logit(S) is assumed to have, and is generated from, a normal distribution with mean μ and variance σ². Marginal prior distributions on logit(p) and μ were independent normal with mean zero and standard deviation 1.75 for logit(p) and 100 for μ, and hence minimally informative. The marginal prior on σ² was placed on τ² = 1/σ² as a gamma distribution with α = β = 0.001. The study design has 432 points spread over 5 factors: occasions (t), new releases per occasion (u), p, μ, and σ. At each design point 100 independent trials were completed (hence 43,200 trials in total), each with sample size n = 10,000 from the parameter posterior distribution. At 128 of these design points comparisons are made to previously reported results from a method-of-moments procedure. We examined properties of point and interval inference on μ and σ based on the posterior mean, median, and mode and the equal-tailed 95% credibility interval. Bayesian inference did very well for the parameter μ, but under the conditions used here, MCMC inference performance for σ was mixed: poor for sparse data (i.e., only 7 occasions) or σ = 0, but good when there were sufficient data and σ was not small.
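The logit link and the priors described above can be sketched in a few lines of NumPy. This is a minimal prior-predictive illustration, not the study's MCMC sampler; the design-point values for μ and σ below are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(1)

def inv_logit(x):
    """Back-transform from the logit scale to a probability."""
    return 1.0 / (1.0 + np.exp(-x))

# Priors as stated above: logit(p) ~ N(0, 1.75^2), mu ~ N(0, 100^2),
# and a Gamma(alpha=0.001, beta=0.001) prior on the precision (scale = 1/beta).
n = 10_000
logit_p_prior = rng.normal(0.0, 1.75, n)
mu_prior = rng.normal(0.0, 100.0, n)

# One hypothetical design point (mu and sigma invented for illustration):
mu, sigma = 1.5, 0.5
logit_S = rng.normal(mu, sigma, n)   # survival random effects on the logit scale
S = inv_logit(logit_S)               # survival probabilities, always in (0, 1)

# Equal-tailed 95% interval, the form of credibility interval used in the study
lo, hi = np.percentile(logit_S, [2.5, 97.5])
```

The back-transform guarantees that every sampled survival probability lies in (0, 1), which is the point of working on the logit scale.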
Abstract:
Dynamic changes in ERP topographies can be conveniently analyzed by means of microstates, the so-called "atoms of thought", which represent brief periods of quasi-stable synchronized network activation. Comparing temporal microstate features such as onset, offset, or duration between groups and conditions therefore allows a precise assessment of the timing of cognitive processes. So far, this has been achieved by assigning the individual time-varying ERP maps to spatially defined microstate templates obtained from clustering the grand-mean data into predetermined numbers of topographies (microstate prototypes). Features obtained from these individual assignments were then statistically compared. The problem with this approach is that individual noise dilutes the match between individual topographies and templates, leading to lower statistical power. We therefore propose a randomization-based procedure that works without assigning grand-mean microstate prototypes to individual data. In addition, we propose a new criterion to select the optimal number of microstate prototypes based on cross-validation across subjects. After a formal introduction, the method is applied to a sample data set from an N400 experiment and to simulated data with varying signal-to-noise ratios, and the results are compared to those of existing methods. In a first comparison with previously employed statistical procedures, the new method showed increased robustness to noise and higher sensitivity for subtle effects of microstate timing. We conclude that the proposed method is well suited for the assessment of timing differences in cognitive processes. The increased statistical power allows more subtle effects to be identified, which is particularly important in small and scarce patient populations.
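The core randomization logic can be illustrated with a generic two-sample permutation test on a timing feature. This is a simplified sketch, not the paper's actual procedure (which operates on microstate features derived across the whole group); the onset values below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a, b, n_perm=5000, rng=rng):
    """Randomization test on the absolute difference of group means.

    Condition labels are repeatedly shuffled; the p-value is the fraction of
    shuffles whose mean difference is at least as large as the observed one.
    """
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        perm = abs(pooled[:len(a)].mean() - pooled[len(a):].mean())
        count += perm >= observed
    return (count + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# Hypothetical microstate onsets (ms) in two conditions
onset_cond1 = np.array([212., 220., 198., 230., 205., 215.])
onset_cond2 = np.array([248., 240., 262., 255., 251., 244.])
p = permutation_test(onset_cond1, onset_cond2)
```

Because the null distribution is built from the data themselves, no distributional assumptions about the timing feature are needed.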
Abstract:
OBJECTIVES Although the use of an adjudication committee (AC) for outcomes is recommended in randomized controlled trials, there are limited data on the process of adjudication. We therefore aimed to assess whether the reporting of the adjudication process in venous thromboembolism (VTE) trials meets existing quality standards and which characteristics of trials influence the use of an AC. STUDY DESIGN AND SETTING We systematically searched MEDLINE and the Cochrane Library from January 1, 2003, to June 1, 2012, for randomized controlled trials on VTE. We abstracted information about characteristics and quality of trials and reporting of adjudication processes. We used a stepwise backward logistic regression model to identify trial characteristics independently associated with the use of an AC. RESULTS We included 161 trials. Of these, 68.9% (111 of 161) reported the use of an AC. Overall, 99.1% (110 of 111) of trials with an AC used independent or blinded ACs, 14.4% (16 of 111) reported how the adjudication decision was reached within the AC, and 4.5% (5 of 111) reported on whether the reliability of adjudication was assessed. In multivariate analyses, multicenter trials [odds ratio (OR), 8.6; 95% confidence interval (CI): 2.7, 27.8], use of a data safety monitoring board (OR, 3.7; 95% CI: 1.2, 11.6), and VTE as the primary outcome (OR, 5.7; 95% CI: 1.7, 19.4) were associated with the use of an AC. Trials without random allocation concealment (OR, 0.3; 95% CI: 0.1, 0.8) and open-label trials (OR, 0.3; 95% CI: 0.1, 1.0) were less likely to report an AC. CONCLUSION Recommended processes of adjudication are underreported and lack standardization in VTE-related clinical trials. The use of an AC varies substantially by trial characteristics.
Abstract:
Molybdenum isotopes are increasingly widely applied in Earth Sciences. They are primarily used to investigate the oxygenation of Earth's ocean and atmosphere. However, more and more fields of application are being developed, such as magmatic and hydrothermal processes, planetary sciences or the tracking of environmental pollution. Here, we present a proposal for a unifying presentation of Mo isotope ratios in the studies of mass-dependent isotope fractionation. We suggest that the δ98/95Mo of the NIST SRM 3134 be defined as +0.25‰. The rationale is that the vast majority of published data are presented relative to reference materials that are similar, but not identical, and that are all slightly lighter than NIST SRM 3134. Our proposed data presentation allows a direct first-order comparison of almost all old data with future work while referring to an international measurement standard. In particular, canonical δ98/95Mo values such as +2.3‰ for seawater and −0.7‰ for marine Fe-Mn precipitates can be kept for discussion. As recent publications show that the ocean molybdenum isotope signature is homogeneous, the IAPSO ocean water standard or any other open ocean water sample is suggested as a secondary measurement standard, with a defined δ98/95Mo value of +2.34 ± 0.10‰ (2s).
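The first-order comparison invoked above amounts to a simple additive renormalization of delta values onto the proposed scale. A minimal sketch follows; the offsets used in the example are hypothetical, not measured values for any particular reference material.

```python
def renormalize_delta(delta_vs_old_ref, delta_old_ref_vs_nist3134):
    """First-order conversion of a d98/95Mo value measured against an older
    reference material onto the scale on which NIST SRM 3134 = +0.25 permil:

        d_sample/NIST3134 ~= d_sample/old_ref + d_old_ref/NIST3134
    """
    return delta_vs_old_ref + delta_old_ref_vs_nist3134

# Hypothetical case: an older lab standard sitting at 0.00 permil on the new
# scale (i.e. 0.25 permil lighter than NIST SRM 3134). Canonical values such
# as +2.3 permil for seawater then carry over numerically unchanged.
seawater_new_scale = renormalize_delta(2.3, 0.00)
```

This additivity is what makes defining NIST SRM 3134 as +0.25‰ attractive: data referenced to the slightly lighter older standards keep their published values to first order.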
Abstract:
Traditionally, critical swimming speed (Ucrit) has been defined as the speed at which a fish can no longer propel itself forward and is exhausted. To gain a better understanding of the metabolic processes at work during a Ucrit swim test, and that lead to fatigue, we developed a method using in vivo ³¹P-NMR spectroscopy in combination with a Brett-type swim tunnel. Our data showed that a metabolic transition point is reached when the fish change from steady-state aerobic metabolism to non-steady-state anaerobic metabolism, as indicated by a significant increase in inorganic phosphate (Pi) levels from 0.3 ± 0.3 to 9.5 ± 3.4 mol g⁻¹ and a drop in intracellular pH (pHi) from 7.48 ± 0.03 to 6.81 ± 0.05 in muscle. This coincides with the point at which the fish change gait from subcarangiform swimming to kick-and-glide bursts. As the number of kicks increased, so too did the Pi concentration, and the pHi dropped; both changes were maximal at Ucrit. A significant drop in the Gibbs free energy change of ATP hydrolysis, from −55.6 ± 1.4 to −49.8 ± 0.7 kJ mol⁻¹, is argued to have been involved in fatigue. This confirms earlier findings that the traditional definition of Ucrit, unlike other critical points that are typically marked by a transition from aerobic to anaerobic metabolism, is the point of complete exhaustion of both aerobic and anaerobic resources.
Abstract:
The importance of long-term historical information derived from paleoecological studies has long been recognized as a fundamental aspect of effective conservation. However, there remains some uncertainty regarding the extent to which paleoecology can inform on specific issues of high conservation priority, at the scale for which conservation policy decisions often take place. Here we review to what extent the past occurrence of three fundamental aspects of forest conservation can be assessed using paleoecological data, with a focus on northern Europe. These aspects are (1) tree species composition, (2) old/large trees and coarse woody debris, and (3) natural disturbances. We begin by evaluating the types of relevant historical information available from contemporary forests, then evaluate common paleoecological techniques, namely dendrochronology, pollen, macrofossil, charcoal, and fossil insect and wood analyses. We conclude that whereas contemporary forests can be used to estimate historical, natural occurrences of several of the aspects addressed here (e.g. old/large trees), paleoecological techniques are capable of providing much greater temporal depth, as well as robust quantitative data for tree species composition and fire disturbance, qualitative insights regarding old/large trees and woody debris, but limited indications of past windstorms and insect outbreaks. We also find that studies of fossil wood and paleoentomology are perhaps the most underutilized sources of information. Not only can paleoentomology provide species specific information, but it also enables the reconstruction of former environmental conditions otherwise unavailable. Despite the potential, the majority of conservation-relevant paleoecological studies primarily focus on describing historical forest conditions in broad terms and for large spatial scales, addressing former climate, land-use, and landscape developments, often in the absence of a specific conservation context. 
In contrast, relatively few studies address the most pressing conservation issues in northern Europe, which often require data on the presence or quantities of dead wood, large trees, or specific tree species at the scale of the stand or reserve. Furthermore, even fewer examples exist of detailed paleoecological data being used for conservation planning, or for setting operative restoration baseline conditions at local scales. If ecologists and conservation biologists are to benefit to the full extent possible from the ever-advancing techniques developed by the paleoecological sciences, further integration of these disciplines is desirable.
Abstract:
In addition to classically defined immune mechanisms, cell-intrinsic processes can restrict virus infection and have shaped virus evolution. The details of this virus-host interaction are still emerging. Following a genome-wide siRNA screen for host factors affecting replication of Semliki Forest virus (SFV), a positive-strand RNA (+RNA) virus, we found that depletion of nonsense-mediated mRNA decay (NMD) pathway components Upf1, Smg5, and Smg7 led to increased levels of viral proteins and RNA and higher titers of released virus. The inhibitory effect of NMD was stronger when virus replication efficiency was impaired by mutations or deletions in the replicase proteins. Consequently, depletion of NMD components resulted in a more than 20-fold increase in production of these attenuated viruses. These findings indicate that a cellular mRNA quality control mechanism serves as an intrinsic barrier to the translation of early viral proteins and the amplification of +RNA viruses in animal cells.
Abstract:
The spatial distributions of non-reactive natural tracers (anions, stable water isotopes, noble gases) in pore water of clay-rich formations were studied at nine sites. Regular curved profiles were identified in most cases. Transport modeling considering diffusion, advection and available constraints on the paleo-hydrogeological evolution indicates generally that diffusion alone can explain the observations, whereas a marked advective component would distort the profiles and so is not consistent with the data.
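The claim that advection would distort the profiles can be illustrated with the textbook steady-state 1D advection-diffusion solution between fixed boundary concentrations. This is a sketch only, not the transient paleo-hydrogeological models actually used for the sites; the Péclet number below is an arbitrary illustrative value.

```python
import numpy as np

def steady_profile(x, c0, c1, Pe):
    """Steady-state concentration across a layer of normalized depth x in [0, 1]
    with fixed boundary concentrations c0 (top) and c1 (bottom).

    Pe = v*L/D is the Peclet number. Pe = 0 (pure diffusion) gives a straight
    line; Pe != 0 (added advection) skews the profile exponentially.
    """
    if Pe == 0.0:
        return c0 + (c1 - c0) * x
    return c0 + (c1 - c0) * (np.expm1(Pe * x) / np.expm1(Pe))

x = np.linspace(0.0, 1.0, 5)                  # normalized depth
diffusive = steady_profile(x, 1.0, 0.0, 0.0)  # regular linear profile
advective = steady_profile(x, 1.0, 0.0, 4.0)  # visibly distorted profile
```

The contrast between the two curves is the qualitative signature used in the argument: smooth, regular profiles are consistent with diffusion alone, while a marked advective component bends them away from that shape.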
Abstract:
Ecosystem management policies increasingly emphasize provision of multiple, as opposed to single, ecosystem services. Management for such "multifunctionality" has stimulated research into the role that biodiversity plays in providing desired rates of multiple ecosystem processes. Positive effects of biodiversity on indices of multifunctionality are consistently found, primarily because species that are redundant for one ecosystem process under a given set of environmental conditions play a distinct role under different conditions or in the provision of another ecosystem process. Here we show that the positive effects of diversity (specifically community composition) on multifunctionality indices can also arise from a statistical fallacy analogous to Simpson's paradox (where aggregating data obscures causal relationships). We manipulated soil faunal community composition in combination with nitrogen fertilization of model grassland ecosystems and repeatedly measured five ecosystem processes related to plant productivity, carbon storage, and nutrient turnover. We calculated three common multifunctionality indices based on these processes and found that the functional complexity of the soil communities had a consistent positive effect on the indices. However, only two of the five ecosystem processes also responded positively to increasing complexity, whereas the other three responded neutrally or negatively. Furthermore, none of the individual processes responded to both the complexity and the nitrogen manipulations in a manner consistent with the indices. Our data show that multifunctionality indices can obscure relationships that exist between communities and key ecosystem processes, leading us to question their use, in advancing theoretical understanding and in management decisions, about how biodiversity is related to the provision of multiple ecosystem services.
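The aggregation effect described above can be demonstrated with a toy example (all numbers invented): an averaging-style multifunctionality index rises monotonically with community complexity even though only two of the five underlying processes actually increase.

```python
import numpy as np

# Five hypothetical ecosystem process rates at complexity levels 1..4
processes = {
    "plant_productivity": np.array([1.0, 1.6, 2.2, 2.8]),  # increases
    "carbon_storage":     np.array([1.0, 1.5, 2.0, 2.5]),  # increases
    "n_mineralization":   np.array([2.0, 2.0, 2.0, 2.0]),  # neutral
    "decomposition":      np.array([2.0, 1.9, 1.8, 1.7]),  # decreases
    "nutrient_leaching":  np.array([2.0, 1.8, 1.7, 1.6]),  # decreases
}

# A simple "averaging" multifunctionality index: the mean of the (here
# already comparable) process rates at each complexity level.
index = np.mean(np.vstack(list(processes.values())), axis=0)

# The aggregate index rises at every step of complexity ...
index_rises = bool(np.all(np.diff(index) > 0))
# ... even though only two of the five processes increase throughout.
n_increasing = sum(bool(np.all(np.diff(v) > 0)) for v in processes.values())
```

Because the two increasing processes grow faster than the others decline, the averaged index climbs steadily, masking the neutral and negative responses; this is the Simpson's-paradox-like behavior at issue.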
Abstract:
The hadronic light-by-light contribution to the anomalous magnetic moment of the muon was recently analyzed in the framework of dispersion theory, providing a systematic formalism in which all input quantities are expressed in terms of on-shell form factors and scattering amplitudes that are in principle accessible in experiment. We briefly review the main ideas behind this framework and discuss the various experimental ingredients needed for the evaluation of one- and two-pion intermediate states. In particular, we identify processes that, in the absence of data for doubly-virtual pion-photon interactions, can help constrain parameters in the dispersive reconstruction of the relevant input quantities: the pion transition form factor and the helicity partial waves for γ*γ* → ππ.
Abstract:
These data result from an investigation examining the interplay between dyadic rapport and consequent behavioral mirroring. Participants responded to a variety of interpersonally focused pretest measures prior to engaging in videotaped interdependent tasks (coded for interactional synchrony using Motion Energy Analysis [17,18]). A post-task evaluation of rapport and other related constructs followed each exchange. Four studies shared these same dependent measures but asked distinct questions: Study 1 (Ndyad = 38) explored the influence of perceived responsibility and the gender-specificity of the task; Study 2 (Ndyad = 51) focused on dyad sex composition; Studies 3 (Ndyad = 41) and 4 (Ndyad = 63) examined the impact of cognitive load on the interactions. Versions of the data are structured with both the individual and the dyad as the unit of analysis. Our data have strong reuse potential for theorists interested in dyadic processes and are especially pertinent to questions about dyad agreement and the association between interpersonal perception and behavior.
Abstract:
A genome-wide siRNA screen against host factors that affect the infection of Semliki Forest virus (SFV), a positive-strand (+)RNA virus, revealed that components of the nonsense-mediated mRNA decay (NMD) pathway restrict early, post-entry steps of the infection cycle. In HeLa cells and primary human fibroblasts, knockdown of UPF1, SMG5 and SMG7 leads to increased levels of viral proteins and RNA and to higher titers of released virus. The inhibitory effect of NMD was stronger when the efficiency of virus replication was impaired by mutations or deletions in the replicase proteins. Accordingly, impairing NMD resulted in a more than 20-fold increase in the production of these attenuated viruses. Our data suggest that intrinsic features of genomic and sub-genomic viral mRNAs, most likely the extended 3'-UTR length, make them susceptible to NMD. The fact that SFV replication is entirely cytoplasmic strongly suggests that degradation of the viral RNA occurs through the exon junction complex (EJC)-independent mode of NMD. Collectively, our findings uncover a new biological function for NMD as an intrinsic barrier to the translation of early viral proteins and the amplification of (+)RNA viruses in animal cells. Thus, in addition to its role in mRNA surveillance and post-transcriptional gene regulation, NMD also contributes to protecting cells from RNA viruses.
Abstract:
Research on human values within the family focuses on value congruence between family members (Knafo & Schwartz, 2004), based on the assumption that the transmission of values is part of a child's socialization process. Within the family, values are not only implicitly transmitted through this process but also explicitly conveyed through the educational goals of parents (Grusec et al., 2000; Knafo & Schwartz, 2003, 2004, 2009). However, there is a lack of empirical evidence on the role of family characteristics in the value transmission process, especially for families with young children. The present study therefore had multiple aims: first, it analyzed the congruence between mothers' and fathers' values and their value-based educational goals; second, it examined the influence of mothers' and fathers' socio-demographic characteristics on their educational goals; third, it analyzed differences in parental educational goals between families with daughters and families with sons; finally, it examined the congruence between children's values and the value-based educational goals of their parents. The value transmission process within families with young children was analyzed using data from complete families (child, mother, and father) in Switzerland (N = 265). The child sample consisted of 139 boys and 126 girls aged between 7 and 9 years. Parents' values and parental educational goals were assessed using the Portrait Value Questionnaire (PVQ-21) (Schwartz, 2005). Children's values were assessed using the Picture-Based Value Survey for Children (PBVS-C) (Döring et al., 2010). Regarding the role of the family context in shaping children's values, the results show that, on average, parents are similar not only in their value profiles but also in which values they would like to transmit to their children.
Our findings also suggest that children’s values at an early age are shaped more strongly by mothers’ values than by fathers’ values. Moreover, our results show differences in value transmission with respect to the child’s gender. In particular, they suggest that value transmission within the family has a greater influence on female than on male offspring.
Abstract:
One of the earliest accounts of duration perception, by Karl von Vierordt, implied a common process underlying the timing of intervals in the sub-second and the second range. To date, there are two major explanatory approaches for the timing of brief intervals: the Common Timing Hypothesis and the Distinct Timing Hypothesis. While the common timing hypothesis likewise proceeds from a unitary timing process, the distinct timing hypothesis suggests two dissociable, independent mechanisms for the timing of intervals in the sub-second and the second range, respectively. In the present paper, we introduce confirmatory factor analysis (CFA) to elucidate the internal structure of interval timing in the sub-second and the second range. Our results indicate that the assumption of two mechanisms underlying the processing of intervals in the second and the sub-second range might be more appropriate than the assumption of a unitary timing mechanism. In contrast to the basic assumption of the distinct timing hypothesis, however, these two timing mechanisms are closely associated with each other and share 77% of their variance. This finding suggests either a strong functional relationship between the two timing mechanisms or a hierarchically organized internal structure. Findings are discussed in the light of existing psychophysical and neurophysiological data.
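The reported shared variance translates directly into a latent correlation, since in CFA the variance two latent factors share is the square of their correlation; the arithmetic is shown here for concreteness.

```python
import math

# 77% shared variance between the sub-second and second-range timing factors
# corresponds to a latent correlation of sqrt(0.77), i.e. about .88.
shared_variance = 0.77
latent_r = math.sqrt(shared_variance)
```

A correlation of roughly .88 between the two factors is what motivates the conclusion that the mechanisms, while statistically separable, are far from independent.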