35 results for inter-block analysis
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
BACKGROUND: Mechanisms underlying the improvement of myocardial contractile function after cell therapy, as well as its arrhythmic side effects, remain poorly understood. We hypothesised that cell therapy might affect the mechanical properties of isolated host cardiomyocytes. METHODS: Two weeks after myocardial infarction (MI), rats were treated by intramyocardial myoblast injection (SkM, n=8), intramyocardial vehicle injection (Medium, n=6), or sham operation (Sham, n=7). Cardiac function was assessed by echocardiography. Cardiomyocytes were isolated in a modified Langendorff perfusion system, and their contraction was measured by video-based inter-sarcomeric analysis. Data were compared with a control group without myocardial infarction (Control, n=5). RESULTS: Three weeks post-treatment, ejection fraction (EF) further deteriorated in vehicle-injected and non-injected rats (40.7+/-11.4% to 33+/-5.5% and 41.8+/-8% to 33.5+/-8.3%, respectively), but was stabilised in the SkM group (35.9+/-6% to 36.4+/-9.7%). Significant cell hypertrophy induced by MI was maintained after cell therapy. Single-cell contraction (dL/dt(max)) decreased in the SkM and vehicle groups compared to the non-injected group, as did cell shortening and relaxation (dL/dt(min)) in the vehicle group. A significantly increased predisposition for alternation of strong and weak contractions was observed in isolated cardiomyocytes of the SkM group. CONCLUSION: Our study provides the first evidence that injection of materials into the myocardium alters host cardiomyocyte contractile function independently of the global beneficial effect on heart function. These findings may be important in understanding possible adverse effects.
Abstract:
The momentary, global functional state of the brain is reflected by its electric field configuration. Cluster analytical approaches consistently extracted four head-surface brain electric field configurations that optimally explain the variance of their changes across time in spontaneous EEG recordings. These four configurations are referred to as EEG microstate classes A, B, C, and D and have been associated with verbal/phonological, visual, attention reorientation, and subjective interoceptive-autonomic processing, respectively. The present study tested these associations via an intra-individual and inter-individual analysis approach. The intra-individual approach tested the effect of task-induced increased modality-specific processing on EEG microstate parameters. The inter-individual approach tested the effect of personal modality-specific parameters on EEG microstate parameters. We obtained multichannel EEG from 61 healthy, right-handed, male students during four eyes-closed conditions: object-visualization, spatial-visualization, verbalization (6 runs each), and resting (7 runs). After each run, we assessed participants' degrees of object-visual, spatial-visual, and verbal thinking using subjective reports. Before and after the recording, we assessed modality-specific cognitive abilities and styles using nine cognitive tests and two questionnaires. The EEG of all participants, conditions, and runs was clustered into four classes of EEG microstates (A, B, C, and D). RMANOVAs, ANOVAs and post-hoc paired t-tests compared microstate parameters between conditions. TANOVAs compared microstate class topographies between conditions. Differences were localized using eLORETA. Pearson correlations assessed interrelationships between personal modality-specific parameters and EEG microstate parameters during no-task resting. As hypothesized, verbal as opposed to visual conditions consistently affected the duration, occurrence, and coverage of microstate classes A and B. 
Contrary to associations suggested by previous reports, parameters were increased for class A during visualization and for class B during verbalization. In line with previous reports, microstate D parameters were increased during no-task resting compared to the three internal, goal-directed tasks. Topographic differences between conditions concerned particular sub-regions of components of the metabolic default mode network. Modality-specific personal parameters did not consistently correlate with microstate parameters, except for verbal cognitive style, which correlated negatively with microstate class A duration and positively with class C occurrence. This is the first study that aimed to induce EEG microstate class parameter changes based on their hypothesized functional significance. Beyond the associations of microstate classes A and B with visual and verbal processing, respectively, and of microstate class D with interoceptive-autonomic processing, our results suggest that a finely tuned interplay between all four EEG microstate classes is necessary for the continuous formation of visual and verbal thoughts, as well as for interoceptive-autonomic processing. Our results point to the possibility that the EEG microstate classes may represent the head-surface-measured activity of intra-cortical sources primarily exhibiting inhibitory functions. However, additional studies are needed to verify and elaborate on this hypothesis.
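The microstate parameters compared across conditions (duration, occurrence, coverage) can be derived from a back-fitted sequence of per-sample class labels. Below is a minimal sketch using the common definitions (mean segment length, segments per second, fraction of total time); the function name and label format are illustrative, not the authors' pipeline:

```python
from itertools import groupby

def microstate_parameters(labels, sfreq):
    """Mean duration (s), occurrence (segments per second) and coverage
    (fraction of total time) per microstate class, computed from a
    sequence of per-sample class labels at sampling frequency `sfreq` (Hz)."""
    total = len(labels)
    # Collapse the label sequence into (class, run-length) segments.
    segments = [(cls, sum(1 for _ in run)) for cls, run in groupby(labels)]
    params = {}
    for cls in set(labels):
        runs = [n for c, n in segments if c == cls]
        params[cls] = {
            "duration": (sum(runs) / len(runs)) / sfreq,  # mean segment length in seconds
            "occurrence": len(runs) / (total / sfreq),    # segments per second of recording
            "coverage": sum(runs) / total,                # fraction of samples in this class
        }
    return params
```

With such a function, condition effects on, e.g., class A duration can be tested by computing the parameters per run and comparing them across conditions, as the abstract describes with RMANOVAs and t-tests.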
Abstract:
During intertemporal decisions, the preference for smaller, sooner rewards over larger, delayed rewards (temporal discounting, TD) exhibits substantial inter-subject variability; however, it is currently unclear what mechanisms underlie this apparently idiosyncratic behavior. To answer this question, here we recorded and analyzed mouse movement kinematics during intertemporal choices in a large sample of participants (N = 86). Results revealed a specific pattern of decision dynamics associated with the selection of “immediate” versus “delayed” response alternatives, which discriminated well between a “discounter” and a “farsighted” behavior, thus representing a reliable behavioral marker of TD preferences. By fitting the Drift Diffusion Model to the data, we showed that differences between discounter and farsighted subjects could be explained in terms of different model parameterizations, corresponding to the use of different choice mechanisms in the two groups. While farsighted subjects were biased toward the “delayed” option, discounter subjects were not correspondingly biased toward the “immediate” option. Rather, as shown by the dynamics of evidence accumulation over time, their behavior was characterized by high choice uncertainty.
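The Drift Diffusion Model invoked here can be illustrated with a simple forward simulation: noisy evidence accumulates at a drift rate from a starting bias until it crosses one of two decision boundaries. This is a generic sketch under assumed parameter names (`drift`, `bias`, `threshold`), with the boundary labels chosen only to mirror the paper's response options; it is not the authors' fitting code:

```python
import random

def simulate_ddm_trial(drift, bias=0.0, threshold=1.0, noise=1.0,
                       dt=0.001, seed=0, max_steps=100_000):
    """Simulate one Drift Diffusion Model trial: evidence starts at `bias`
    and accumulates at rate `drift` plus Gaussian noise until it crosses
    +threshold (labelled "delayed") or -threshold (labelled "immediate").
    Returns the chosen boundary and the response time in seconds."""
    rng = random.Random(seed)
    x, t = bias, 0.0
    for _ in range(max_steps):
        # Euler-Maruyama step: deterministic drift plus scaled Gaussian noise.
        x += drift * dt + noise * rng.gauss(0.0, 1.0) * dt ** 0.5
        t += dt
        if x >= threshold:
            return "delayed", t
        if x <= -threshold:
            return "immediate", t
    return None, t  # no boundary reached within max_steps
```

In this picture, a farsighted bias corresponds to a starting point shifted toward the "delayed" boundary, whereas high choice uncertainty corresponds to a low drift rate, producing slow, variable trajectories.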
Abstract:
Background There is concern that non-inferiority trials might be deliberately designed to conceal that a new treatment is less effective than a standard treatment. In order to test this hypothesis, we performed a meta-analysis of non-inferiority trials to assess the average effect of experimental treatments compared with standard treatments. Methods One hundred and seventy non-inferiority treatment trials published in 121 core clinical journals were included. The trials were identified through a search of PubMed (1991 to 20 February 2009). The combined relative risk (RR) from meta-analysis comparing experimental with standard treatments was the main outcome measure. Results The 170 trials contributed a total of 175 independent comparisons of experimental with standard treatments. The combined RR for all 175 comparisons was 0.994 [95% confidence interval (CI) 0.978–1.010] using a random-effects model and 1.002 (95% CI 0.996–1.008) using a fixed-effects model. Of the 175 comparisons, the experimental treatment was considered to be non-inferior in 130 (74%). The combined RR for these 130 comparisons was 0.995 (95% CI 0.983–1.006); the point estimate favoured the experimental treatment in 58% (n = 76) and the standard treatment in 42% (n = 54). The median non-inferiority margin (RR) pre-specified by trialists was 1.31 [inter-quartile range (IQR) 1.18–1.59]. Conclusion In this meta-analysis of non-inferiority trials, the average RR comparing experimental with standard treatments was close to 1. The experimental treatments that gain a verdict of non-inferiority in published trials do not appear to be systematically less effective than the standard treatments. Importantly, publication bias and bias in the design and reporting of the studies cannot be ruled out and may have skewed the study results in favour of the experimental treatments. Further studies are required to examine the importance of such bias.
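The fixed-effects pooling reported in such meta-analyses is conventionally done by inverse-variance weighting of log relative risks (a random-effects model additionally estimates between-trial variance, which this sketch omits). A hedged illustration of that standard calculation; the function and argument names are invented for the example:

```python
import math

def fixed_effect_pooled_rr(rrs, log_rr_ses):
    """Inverse-variance fixed-effect pooling of relative risks on the log
    scale. `rrs` are per-comparison relative risks; `log_rr_ses` are the
    standard errors of log(RR). Returns the pooled RR and its 95% CI."""
    weights = [1.0 / se ** 2 for se in log_rr_ses]       # w_i = 1 / SE_i^2
    log_pooled = sum(w * math.log(rr)
                     for w, rr in zip(weights, rrs)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    ci = (math.exp(log_pooled - 1.96 * se_pooled),
          math.exp(log_pooled + 1.96 * se_pooled))
    return math.exp(log_pooled), ci
```

A pooled RR near 1 with a CI straddling 1, as found in the abstract, indicates no systematic advantage of either treatment arm across the included comparisons.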
Abstract:
The Default Mode Network (DMN) is a higher-order functional neural network that displays activation during passive rest and deactivation during many types of cognitive tasks. Accordingly, the DMN is viewed as representing the neural correlate of internally generated, self-referential cognition. This hypothesis implies that the DMN requires the involvement of cognitive processes such as declarative memory. The present study thus examines the spatial and functional convergence of the DMN and the semantic memory system. Using an active block-design functional Magnetic Resonance Imaging (fMRI) paradigm and Independent Component Analysis (ICA), we trace the DMN and fMRI signal changes evoked by semantic, phonological and perceptual decision tasks on visually presented words. Our findings show less deactivation during the semantic task compared to the two non-semantic tasks for the entire DMN unit and within left-hemispheric DMN regions, i.e., the dorsal medial prefrontal cortex, the anterior cingulate cortex, the retrosplenial cortex, the angular gyrus, the middle temporal gyrus and the anterior temporal region, as well as the right cerebellum. These results demonstrate that well-known semantic regions are spatially and functionally involved in the DMN. The present study further supports the hypothesis of the DMN as an internal mentation system that involves declarative memory functions.
Abstract:
When it comes to helping to shape sustainable development, research is most useful when it bridges the science–implementation/management gap and when it brings development specialists and researchers into a dialogue (Hurni et al. 2004); can a peer-reviewed journal contribute to this aim? In the classical system for validation and dissemination of scientific knowledge, journals focus on knowledge exchange within the academic community and do not specifically address a ‘life-world audience’. Within a North-South context, another knowledge divide is added: the peer review process excludes a large proportion of scientists from the South from participating in the production of scientific knowledge (Karlsson et al. 2007). Mountain Research and Development (MRD) is a journal whose mission is based on an editorial strategy to build the bridge between research and development and ensure that authors from the global South have access to knowledge production, ultimately with a view to supporting sustainable development in mountains. In doing so, MRD faces a number of challenges that we would like to discuss with the td-net community, after having presented our experience and strategy as editors of this journal. MRD was launched in 1981 by mountain researchers who wanted mountains to be included in the 1992 Rio process. In the late 1990s, MRD realized that the journal needed to go beyond addressing only the scientific community. It therefore launched a new section addressing a broader audience in 2000, with the aim of disseminating insights into, and recommendations for, the implementation of sustainable development in mountains. In 2006, we conducted a survey among MRD’s authors, reviewers, and readers (Wymann et al. 2007): respondents confirmed that MRD had succeeded in bridging the gap between research and development. 
But we realized that MRD could become an even more efficient tool for sustainability if development knowledge were validated: in 2009, we began submitting ‘development’ papers (‘transformation knowledge’) to external peer review of a kind different from the scientific-only peer review (for ‘systems knowledge’). At the same time, the journal became open access in order to increase the permeability between science and society, and ensure greater access for readers and authors in the South. We are currently rethinking our review process for development papers, with a view to creating more space for communication between science and society, and enhancing the co-production of knowledge (Roux 2008). Hopefully, these efforts will also contribute to the urgent debate on the ‘publication culture’ needed in transdisciplinary research (Kueffer et al. 2007).
Abstract:
Little is known about the learning of the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance versus residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience with ultrasound received ultrasound training, and another ten residents with no previous experience with nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique in combination with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster and lead to a higher final success rate compared to nerve stimulator-guided axillary brachial plexus block.
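The percentile bootstrap underlying such pooled success-rate estimates can be sketched as follows, with block outcomes coded 1 (success) / 0 (failure). This is a generic illustration of the resampling idea, not a reproduction of the authors' Monte Carlo procedure:

```python
import random

def bootstrap_success_ci(outcomes, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for a pooled block success
    rate. `outcomes` is a list of 1 (successful block) / 0 (failed block)."""
    rng = random.Random(seed)
    # Resample with replacement n_boot times and record each success rate.
    rates = sorted(
        sum(rng.choice(outcomes) for _ in outcomes) / len(outcomes)
        for _ in range(n_boot)
    )
    # Take the alpha/2 and 1 - alpha/2 percentiles of the bootstrap distribution.
    return rates[int(alpha / 2 * n_boot)], rates[int((1 - alpha / 2) * n_boot) - 1]
```

Applied to cumulative windows of consecutive blocks per trainee, intervals like these trace out a learning curve with uncertainty bands.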
Abstract:
Three distinct categories of marginal zone lymphomas (MZLs) are currently recognized, principally based on their site of occurrence. They are thought to represent unique entities, but the relationship of one subtype with another is poorly understood. We investigated 17 non-splenic MZLs (seven nodal, 10 extranodal) by gene expression profiling to distinguish between subtypes and determine their cell of origin. Our findings suggest biological inter-relatedness of these entities despite occurrence at different locations and associations with possibly different aetiologies. Furthermore, the expression profiles of non-splenic MZL were similar to memory B cells.
Abstract:
The objective of this study was to investigate whether it is possible to pool diffusion spectrum imaging (DSI) data from four different scanners located at three different sites. Two of the scanners had identical configurations, whereas two did not. To measure the variability, we extracted three scalar maps (ADC, FA and GFA) from the DSI data and used a region-based and a tract-based analysis. Additionally, a phantom study was performed to rule out potential factors arising from scanner performance in case systematic bias occurred in the subject study. This work was split into three experiments: intra-scanner reproducibility, reproducibility with twin-scanner settings, and reproducibility with other configurations. Overall, for the intra-scanner and twin-scanner experiments, the coefficient of variation (CV) in the region-based analysis ranged from 1% to 4.2% and was below 3% for almost every bundle in the tract-based analysis. The uncinate fasciculus showed the worst reproducibility, especially for FA and GFA values (CV 3.7-6%). For the GFA and FA maps, an ICC value of 0.7 or above was observed in almost all regions/tracts. In the last experiment, the outcomes from the two scanners with identical settings were highly similar; however, this was not the case for the other two imagers. Given that the overall variation in our study is low for the imagers with identical settings, our findings support the feasibility of cross-site pooling of DSI data from identical scanners.
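The coefficient of variation used as the reproducibility metric here is simply the sample standard deviation expressed relative to the mean; a minimal sketch of that textbook definition:

```python
import statistics

def coefficient_of_variation(values):
    """Scan-rescan variability as CV (%) = sample SD / mean * 100,
    computed over repeated measurements of the same region or tract."""
    return statistics.stdev(values) / statistics.mean(values) * 100.0
```

A CV of a few per cent, as reported for most regions and bundles, means repeated scans of the same subject differ from their mean by only a small fraction of the measured value.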
Abstract:
The Long Term Evolution (LTE) cellular technology is expected to extend the capacity and improve the performance of current 3G cellular networks. Among the key mechanisms in LTE responsible for traffic management is the packet scheduler, which handles the allocation of resources to active flows in both the frequency and time dimensions. This paper investigates how various scheduling schemes affect the inter-cell interference characteristics and how the interference in turn affects the user's performance. A special focus of the analysis is on the impact of flow-level dynamics resulting from random user behaviour. For this we use a hybrid analytical/simulation approach which enables fast evaluation of flow-level performance measures. Most interestingly, our findings show that the scheduling policy significantly affects the inter-cell interference pattern, but that the scheduler-specific pattern has little impact on flow-level performance.
Does published orthodontic research account for clustering effects during statistical data analysis?
Abstract:
In orthodontics, multiple site observations within patients or multiple observations collected at consecutive time points are often encountered. Clustered designs require larger sample sizes than individually randomized trials and special statistical analyses that account for the fact that observations within clusters are correlated. The purpose of this study was to assess to what degree clustering effects are considered during design and data analysis in the three major orthodontic journals. The contents of the most recent 24 issues of the American Journal of Orthodontics and Dentofacial Orthopedics (AJODO), Angle Orthodontist (AO), and European Journal of Orthodontics (EJO), from December 2010 backwards, were hand searched. Articles with clustering effects were identified, and it was recorded whether the authors accounted for them. Additionally, information was collected on: involvement of a statistician, single- or multicenter study, number of authors in the publication, geographical area, and statistical significance. Of the 1584 articles, after exclusions, 1062 were assessed for clustering effects, of which 250 (23.5 per cent) were considered to have clustering effects in the design (kappa = 0.92, 95 per cent CI: 0.67-0.99 for inter-rater agreement). Of the studies with clustering effects, only 63 (25.2 per cent) indicated accounting for clustering effects. There was evidence that the studies published in the AO have higher odds of accounting for clustering effects [AO versus AJODO: odds ratio (OR) = 2.17, 95 per cent confidence interval (CI): 1.06-4.43, P = 0.03; EJO versus AJODO: OR = 1.90, 95 per cent CI: 0.84-4.24, non-significant; and EJO versus AO: OR = 1.15, 95 per cent CI: 0.57-2.33, non-significant]. The results of this study indicate that only about a quarter of the studies with clustering effects account for this in statistical data analysis.
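Why clustered designs require larger sample sizes can be made concrete with the standard design effect, DEFF = 1 + (m - 1) * ICC, where m is the mean cluster size and ICC the intra-cluster correlation: for example, 100 observations in clusters of 5 with an ICC of 0.25 carry the information of only 50 independent observations. A sketch of this textbook formula (not taken from the study itself):

```python
def design_effect(mean_cluster_size, icc):
    """Variance inflation due to clustering: DEFF = 1 + (m - 1) * ICC."""
    return 1.0 + (mean_cluster_size - 1) * icc

def effective_sample_size(n_observations, mean_cluster_size, icc):
    """Number of independent observations a clustered sample is worth:
    divide the raw count by the design effect."""
    return n_observations / design_effect(mean_cluster_size, icc)
```

Analyses that ignore this inflation treat correlated within-patient observations as independent, which understates the variance and can produce spuriously significant results.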
Abstract:
Independent component analysis (ICA) and seed-based approaches (SBA) applied to functional magnetic resonance imaging blood oxygenation level dependent (BOLD) data have become widely used tools to identify functionally connected, large-scale brain networks. Differences between task conditions, as well as specific alterations of the networks in patients compared to healthy controls, have been reported. However, BOLD lacks the possibility of quantifying absolute network metabolic activity, which is of particular interest in the case of pathological alterations. In contrast, arterial spin labeling (ASL) techniques allow quantifying absolute cerebral blood flow (CBF) at rest and in task-related conditions. In this study, we explored the ability to identify networks in ASL data using ICA and to quantify network activity in terms of absolute CBF values. Moreover, we compared the results to SBA and performed a test-retest analysis. Twelve healthy young subjects performed a finger-tapping block-design experiment. During the task, pseudo-continuous ASL was measured. After CBF quantification, the individual datasets were concatenated and subjected to the ICA algorithm. ICA proved capable of identifying the somato-motor and the default mode network. Moreover, absolute network CBF within the separate networks during either condition could be quantified. We demonstrated that functional connectivity analysis using ICA and SBA is feasible and robust in ASL-CBF data. CBF functional connectivity is a novel approach that opens a new strategy to evaluate differences in network activity in terms of absolute network CBF and thus allows quantifying inter-individual differences in the resting state and in task-related activations and deactivations.
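The seed-based approach (SBA) mentioned above reduces, at its core, to correlating a seed region's time course with every other region's time course. A minimal, pure-Python sketch of that Pearson correlation step; real analyses operate voxel-wise on full CBF or BOLD time series rather than toy lists:

```python
def pearson_r(x, y):
    """Pearson correlation between a seed time course `x` and a target
    time course `y` -- the core operation of seed-based functional
    connectivity analysis."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    dx = [a - mx for a in x]           # de-meaned seed signal
    dy = [b - my for b in y]           # de-meaned target signal
    num = sum(a * b for a, b in zip(dx, dy))
    den = (sum(a * a for a in dx) * sum(b * b for b in dy)) ** 0.5
    return num / den
```

Regions whose time courses correlate strongly with the seed are assigned to the same functional network; ICA instead decomposes the whole dataset into spatially independent components without choosing a seed.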
Abstract:
With the advent of high-throughput sequencing (HTS), the emerging science of metagenomics is transforming our understanding of the relationships of microbial communities with their environments. While metagenomics aims to catalogue the genes present in a sample, metatranscriptomics, by assessing which genes are actively expressed, can provide a mechanistic understanding of community inter-relationships. To achieve these goals, several challenges need to be addressed, from sample preparation to sequence processing, statistical analysis and functional annotation. Here we use an inbred non-obese diabetic (NOD) mouse model, in which germ-free animals were colonized with a defined mixture of eight commensal bacteria, to explore methods of RNA extraction and to develop a pipeline for the generation and analysis of metatranscriptomic data. Applying the Illumina HTS platform, we sequenced 12 NOD cecal samples prepared using multiple RNA-extraction protocols. The absence of a complete set of reference genomes necessitated a peptide-based search strategy. Up to 16% of sequence reads could be matched to a known bacterial gene. Phylogenetic analysis of the mapped ORFs revealed a distribution consistent with ribosomal RNA, the majority from Bacteroides or Clostridium species. To place these HTS data within a systems context, we mapped the relative abundance of corresponding Escherichia coli homologs onto metabolic and protein-protein interaction networks. These maps identified bacterial processes with components that were well represented in the datasets. In summary, this study highlights the potential of exploiting the economy of HTS platforms for metatranscriptomics.
Abstract:
BACKGROUND: Left anterior hemiblock (LAHB) is the most frequent conduction abnormality, but its impact on the diagnostic accuracy of the exercise ECG has not been studied. The aim of our study was to determine the diagnostic accuracy of ST depression for predicting ischaemia in the presence of LAHB. PATIENTS: Consecutive patients with known or suspected coronary heart disease undergoing exercise ECG and 99mTc-sestamibi single photon emission computed tomography (SPECT) were included in the analysis. Patients with left bundle branch block, with changes in QRS morphology related to myocardial infarction, and patients who had undergone pharmacological stress testing were excluded. RESULTS: Of 1532 patients assessed, 567 patients qualified for the analysis. In 69 patients with LAHB, ECG stress testing had lower sensitivity (38% vs 86%) and lower negative predictive value (82% vs 92%) than in patients with normal baseline ECG. The reduction of sensitivity appeared to be similar in patients with isolated LAHB (n=43), in patients with right bundle branch block (n=39), and with bifascicular block (n=26). In contrast, the positive predictive value of the test was excellent. CONCLUSION: The diagnostic accuracy of the exercise ECG for prediction of ischaemia is reduced in patients with LAHB.
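The sensitivity and negative predictive value reported above come from the standard 2x2 table of test results against the reference standard (here, SPECT). A sketch with hypothetical counts; the formulas are the textbook definitions, not the study's data:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 confusion table:
    tp/fp/tn/fn are true/false positives and true/false negatives of the
    index test (e.g. exercise-ECG ST depression) against the reference."""
    return {
        "sensitivity": tp / (tp + fn),   # fraction of diseased detected
        "specificity": tn / (tn + fp),   # fraction of healthy cleared
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

A drop in sensitivity with preserved positive predictive value, as found for LAHB patients, means a positive exercise ECG remains informative while a negative one rules out ischaemia less reliably.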