977 results for BIOMEDICAL ANALYSIS
Abstract:
Background Accurate automatic segmentation of the caudate nucleus in magnetic resonance images (MRI) of the brain is of great interest in the analysis of developmental disorders. Segmentation methods based on a single atlas or on multiple atlases have been shown to suitably localize caudate structure. However, the atlas prior information may not represent the structure of interest correctly. It may therefore be useful to introduce a more flexible technique for accurate segmentations. Method We present CaudateCut: a new fully-automatic method of segmenting the caudate nucleus in MRI. CaudateCut combines an atlas-based segmentation strategy with the Graph Cut energy-minimization framework. We adapt the Graph Cut model to make it suitable for segmenting small, low-contrast structures, such as the caudate nucleus, by defining new data and boundary potentials for the energy function. In particular, we exploit intensity and geometry information, and we add supervised energies based on contextual brain structures. Furthermore, we reinforce boundary detection using a new multi-scale edgeness measure. Results We apply the novel CaudateCut method to segment the caudate nucleus in a new set of 39 pediatric attention-deficit/hyperactivity disorder (ADHD) patients and 40 control children, as well as in a public database of 18 subjects. We evaluate the quality of the segmentation using several volumetric and voxel-by-voxel measures. Our results show improved segmentation performance compared to state-of-the-art approaches, obtaining a mean overlap of 80.75%. Moreover, we present a quantitative volumetric analysis of caudate abnormalities in pediatric ADHD, the results of which show strong correlation with expert manual analysis.
Conclusion CaudateCut generates segmentation results that are comparable to gold-standard segmentations and which are reliable in the analysis of differentiating neuroanatomical abnormalities between healthy controls and pediatric ADHD.
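The mean overlap reported above is a volumetric agreement measure; one common choice for comparing an automatic segmentation with a gold-standard mask is the Dice coefficient. A minimal sketch (the function name and toy masks are illustrative, not taken from the paper):

```python
import numpy as np

def dice_overlap(seg_a, seg_b):
    """Dice coefficient between two binary segmentation masks:
    2 * |A ∩ B| / (|A| + |B|), 1.0 for identical masks."""
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 2D "slices": 4-voxel automatic mask vs. 6-voxel manual mask
auto = np.zeros((4, 4), dtype=bool); auto[1:3, 1:3] = True
manual = np.zeros((4, 4), dtype=bool); manual[1:3, 1:4] = True
print(dice_overlap(auto, manual))  # 2*4/(4+6) → 0.8
```

The same formula applies unchanged to 3D MRI volumes, since the masks are flattened by the sums.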
Abstract:
This study aimed to investigate the impact of communication skills training (CST) in oncology on clinicians' linguistic strategies. A verbal communication analysis software program (Logiciel d'Analyse de la Communication Verbale) was used to compare simulated patient interviews of oncology clinicians who participated in CST (N = 57) (pre/post, with a 6-month interval) with those of a control group of oncology clinicians who did not (N = 56) (T1/T2, with a 6-month interval). A significant improvement of linguistic strategies related to biomedical, psychological and social issues was observed. Analysis of linguistic aspects of videotaped interviews might in the future become part of individualised feedback in CST and be utilised as a marker for the evaluation of training.
Abstract:
BACKGROUND: A possible strategy for increasing smoking cessation rates could be to provide smokers who have contact with healthcare systems with feedback on the biomedical or potential future effects of smoking, e.g. measurement of exhaled carbon monoxide (CO), lung function, or genetic susceptibility to lung cancer. OBJECTIVES: To determine the efficacy of biomedical risk assessment provided in addition to various levels of counselling, as a contributing aid to smoking cessation. SEARCH METHODS: For the most recent update, we searched the Cochrane Collaboration Tobacco Addiction Group Specialized Register in July 2012 for studies added since the last update in 2009. SELECTION CRITERIA: Inclusion criteria were: a randomized controlled trial design; subjects participating in smoking cessation interventions; interventions based on a biomedical test to increase motivation to quit; control groups receiving all other components of intervention; an outcome of smoking cessation rate at least six months after the start of the intervention. DATA COLLECTION AND ANALYSIS: Two assessors independently conducted data extraction on each paper, with disagreements resolved by consensus. Results were expressed as a relative risk (RR) for smoking cessation with 95% confidence intervals (CI). Where appropriate, a pooled effect was estimated using a Mantel-Haenszel fixed-effect method. MAIN RESULTS: We included 15 trials using a variety of biomedical tests. Two pairs of trials had sufficiently similar recruitment, setting and interventions to calculate a pooled effect; there was no evidence that carbon monoxide (CO) measurement in primary care (RR 1.06, 95% CI 0.85 to 1.32) or spirometry in primary care (RR 1.18, 95% CI 0.77 to 1.81) increased cessation rates. We did not pool the other 11 trials due to the presence of substantial clinical heterogeneity. 
Of the remaining 11 trials, two trials detected statistically significant benefits: one trial in primary care detected a significant benefit of lung age feedback after spirometry (RR 2.12, 95% CI 1.24 to 3.62) and one trial that used ultrasonography of carotid and femoral arteries and photographs of plaques detected a benefit (RR 2.77, 95% CI 1.04 to 7.41) but enrolled a population of light smokers and was judged to be at unclear risk of bias in two domains. Nine further trials did not detect significant effects. One of these tested CO feedback alone and CO combined with genetic susceptibility as two different interventions; none of the three possible comparisons detected significant effects. One trial used CO measurement, one used ultrasonography of carotid arteries and two tested for genetic markers. The four remaining trials used a combination of CO and spirometry feedback in different settings. AUTHORS' CONCLUSIONS: There is little evidence about the effects of most types of biomedical tests for risk assessment on smoking cessation. Of the fifteen included studies, only two detected a significant effect of the intervention. Spirometry combined with an interpretation of the results in terms of 'lung age' had a significant effect in a single good quality trial but the evidence is not optimal. A trial of carotid plaque screening using ultrasound also detected a significant effect, but a second larger study of a similar feedback mechanism did not detect evidence of an effect. Only two pairs of studies were similar enough in terms of recruitment, setting, and intervention to allow meta-analyses; neither of these found evidence of an effect. Mixed quality evidence does not support the hypothesis that other types of biomedical risk assessment increase smoking cessation in comparison to standard treatment. There is insufficient evidence with which to evaluate the hypothesis that multiple types of assessment are more effective than single forms of assessment.
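The pooled effects above were estimated with a Mantel-Haenszel fixed-effect method; a minimal sketch of the pooled risk-ratio estimator (the trial counts below are illustrative, not data from the review):

```python
def mh_pooled_rr(trials):
    """Mantel-Haenszel fixed-effect pooled risk ratio.

    trials: list of (events_intervention, n_intervention,
                     events_control, n_control) tuples, one per trial.
    """
    num = den = 0.0
    for a, n1, c, n0 in trials:
        N = n1 + n0
        num += a * n0 / N  # weighted intervention-arm risk
        den += c * n1 / N  # weighted control-arm risk
    return num / den

# Two hypothetical trials with similar settings (illustrative numbers)
trials = [(30, 200, 25, 200), (18, 150, 15, 150)]
print(round(mh_pooled_rr(trials), 3))  # → 1.2
```

A confidence interval around the pooled estimate would then determine, as in the review, whether the effect is distinguishable from RR = 1.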
Abstract:
This article summarizes the basic principles of scanning electron microscopy and the capabilities of the technique, with different examples of applications in biomedical and biological research.
Abstract:
BACKGROUND: A possible strategy for increasing smoking cessation rates could be to provide smokers who have contact with healthcare systems with feedback on the biomedical or potential future effects of smoking, e.g. measurement of exhaled carbon monoxide (CO), lung function, or genetic susceptibility to lung cancer. We reviewed systematically data on smoking cessation rates from controlled trials that used biomedical risk assessment and feedback. OBJECTIVES: To determine the efficacy of biomedical risk assessment provided in addition to various levels of counselling, as a contributing aid to smoking cessation. SEARCH STRATEGY: We systematically searched the Cochrane Collaboration Tobacco Addiction Group Specialized Register, Cochrane Central Register of Controlled Trials (CENTRAL), MEDLINE (1966 to 2004), and EMBASE (1980 to 2004). We combined methodological terms with terms related to smoking cessation counselling and biomedical measurements. SELECTION CRITERIA: Inclusion criteria were: a randomized controlled trial design; subjects participating in smoking cessation interventions; interventions based on a biomedical test to increase motivation to quit; control groups receiving all other components of intervention; an outcome of smoking cessation rate at least six months after the start of the intervention. DATA COLLECTION AND ANALYSIS: Two assessors independently conducted data extraction on each paper, with disagreements resolved by consensus. MAIN RESULTS: From 4049 retrieved references, we selected 170 for full text assessment. We retained eight trials for data extraction and analysis. One of the eight used CO alone and CO + genetic susceptibility as two different intervention groups, giving rise to three possible comparisons. Three of the trials isolated the effect of exhaled CO on smoking cessation rates, resulting in the following odds ratios (ORs) and 95% confidence intervals (95% CI): 0.73 (0.38 to 1.39), 0.93 (0.62 to 1.41), and 1.18 (0.84 to 1.64).
Combining CO measurement with genetic susceptibility gave an OR of 0.58 (0.29 to 1.19). Exhaled CO measurement and spirometry were used together in three trials, resulting in the following ORs (95% CI): 0.60 (0.25 to 1.46), 2.45 (0.73 to 8.25), and 3.50 (0.88 to 13.92). Spirometry results alone were used in one other trial, with an OR of 1.21 (0.60 to 2.42). Two trials used other motivational feedback measures, with an OR of 0.80 (0.39 to 1.65) for genetic susceptibility to lung cancer alone, and 3.15 (1.06 to 9.31) for ultrasonography of carotid and femoral arteries performed in light smokers (average 10 to 12 cigarettes a day). AUTHORS' CONCLUSIONS: Due to the scarcity of evidence of sufficient quality, we can make no definitive statements about the effectiveness of biomedical risk assessment as an aid for smoking cessation. Current evidence of lower quality does not, however, support the hypothesis that biomedical risk assessment increases smoking cessation in comparison with standard treatment. Only two studies were similar enough in terms of recruitment, setting, and intervention to allow pooling of data and meta-analysis.
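The odds ratios above come from 2x2 quit/no-quit tables; a minimal sketch of the point estimate with a Woolf-type 95% confidence interval (the counts are illustrative only, not data from any included trial):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf (log-method) 95% CI for a 2x2 table:
    a = quit / intervention,  b = not quit / intervention,
    c = quit / control,       d = not quit / control.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts: 20/100 quit with feedback vs. 12/100 without
print(odds_ratio_ci(20, 80, 12, 88))
```

A CI whose lower bound exceeds 1 would correspond to the "statistically significant benefit" language used in the review.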
Abstract:
BACKGROUND: Scientists have long been trying to understand the molecular mechanisms of diseases in order to design preventive and therapeutic strategies. For some diseases, it has become evident that it is not enough to obtain a catalogue of the disease-related genes; it is also necessary to uncover how disruptions of molecular networks in the cell give rise to disease phenotypes. Moreover, with the unprecedented wealth of information available, even obtaining such a catalogue is extremely difficult. PRINCIPAL FINDINGS: We developed a comprehensive gene-disease association database by integrating associations from several sources that cover different biomedical aspects of diseases. In particular, we focus on the current knowledge of human genetic diseases, including mendelian, complex and environmental diseases. To assess the concept of modularity of human diseases, we performed a systematic study of the emergent properties of human gene-disease networks by means of network topology and functional annotation analysis. The results indicate a highly shared genetic origin of human diseases and show that for most diseases, including mendelian, complex and environmental diseases, functional modules exist. Moreover, a core set of biological pathways is found to be associated with most human diseases. We obtained similar results when studying clusters of diseases, suggesting that related diseases might arise due to dysfunction of common biological processes in the cell. CONCLUSIONS: For the first time, we include mendelian, complex and environmental diseases in an integrated gene-disease association database and show that the concept of modularity applies to all of them. We furthermore provide a functional analysis of disease-related modules, providing important new biological insights which might not be discovered when considering each of the gene-disease association repositories independently.
Hence, we present a suitable framework for the study of how genetic and environmental factors, such as drugs, contribute to diseases. AVAILABILITY: The gene-disease networks used in this study and part of the analysis are available at http://ibi.imim.es/DisGeNET/DisGeNETweb.html#Download
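The disease networks analysed above are projections of a bipartite gene-disease graph; a minimal sketch of such a projection, in which two diseases are linked with a weight equal to the number of genes they share (the gene and disease identifiers are hypothetical):

```python
from itertools import combinations
from collections import defaultdict

# Toy gene-disease associations (hypothetical identifiers)
gene_disease = {
    "GENE1": {"diseaseA", "diseaseB"},
    "GENE2": {"diseaseB", "diseaseC"},
    "GENE3": {"diseaseA", "diseaseB", "diseaseC"},
}

def disease_projection(assoc):
    """Project the bipartite gene-disease graph onto diseases:
    each shared gene adds 1 to the weight of the disease pair."""
    weights = defaultdict(int)
    for diseases in assoc.values():
        for d1, d2 in combinations(sorted(diseases), 2):
            weights[(d1, d2)] += 1
    return dict(weights)

print(disease_projection(gene_disease))
# diseaseA-diseaseB share 2 genes, diseaseB-diseaseC share 2, diseaseA-diseaseC share 1
```

Network topology measures (degree distributions, modules) are then computed on this weighted projection.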
Abstract:
This paper describes a realistic simulator of the Computed Tomography (CT) scanning process for motion analysis. We are currently developing a new framework to detect small motions from CT scans. To validate this framework, or potentially any other algorithm, we present in this paper a simulator that reproduces the whole CT acquisition process with a priori known parameters. In other words, it is a digital phantom for motion analysis that can be used to compare the results of any related algorithm with a ground-truth, realistic analytical model. Such a simulator can be used by the community to test different algorithms in the biomedical imaging domain. The most important features of this simulator are the many factors it takes into account to best simulate the real acquisition process, and its generality.
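In an idealized parallel-beam geometry, the simulated CT acquisition reduces to line integrals through the digital phantom; a minimal sketch restricted to 0° and 90° views, where the integrals are simple row and column sums (a toy illustration of the principle, not the simulator described above):

```python
import numpy as np

def project(phantom, axis):
    """Ideal parallel-beam projection of a 2D digital phantom:
    line integrals along rows (axis=1) or columns (axis=0)."""
    return np.asarray(phantom, dtype=float).sum(axis=axis)

# A tiny digital phantom with fully known ground truth
phantom = np.array([[0, 1, 0],
                    [1, 2, 1],
                    [0, 1, 0]])
print(project(phantom, axis=0))  # column sums: [1. 4. 1.]
print(project(phantom, axis=1))  # row sums:    [1. 4. 1.]
```

Because the phantom is known analytically, any reconstruction or motion-analysis algorithm can be scored against exact projections like these.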
Abstract:
The model plant Arabidopsis thaliana was studied in the search for new metabolites involved in wound signalling. Diverse LC approaches were evaluated in terms of efficiency and analysis time, and a 7-min gradient on a UPLC-TOF-MS system with a short column was chosen for metabolite fingerprinting. This screening step was designed to allow the comparison of a high number of samples over a wide range of time points after stress induction, in both positive and negative ionisation modes. Data treatment provided clear discrimination, yielding lists of potential stress-induced ions. In a second step, the fingerprinting conditions were transferred to a longer column, providing a higher peak capacity able to demonstrate the presence of isomers among the highlighted compounds.
Abstract:
Background. Although peer review is widely considered to be the most credible way of selecting manuscripts and improving the quality of accepted papers in scientific journals, there is little evidence to support its use. Our aim was to estimate the effects on manuscript quality of adding a statistical peer reviewer, of suggesting the use of checklists such as CONSORT or STARD to clinical reviewers, or both. Methodology and Principal Findings. Interventions were defined as 1) the addition of a statistical reviewer to the clinical peer review process, and 2) suggesting reporting guidelines to reviewers; with "no statistical expert" and "no checklist" as controls. The two interventions were crossed in a 2×2 balanced factorial design including original research articles consecutively selected, between May 2004 and March 2005, by the Medicina Clinica (Barc) editorial committee. We randomized manuscripts to minimize differences in terms of baseline quality and type of study (intervention, longitudinal, cross-sectional, others). Sample-size calculations indicated that 100 papers would provide 80% power to test a 0.55 standardized difference. We specified the main outcome as the increment in quality of papers as measured on the Goodman Scale. Two blinded evaluators rated the quality of manuscripts at initial submission and in the final post-peer-review version. Of the 327 manuscripts submitted to the journal, 131 were accepted for further review, and 129 were randomized. Of those, 14 were lost to follow-up; they showed no differences in initial quality from the followed-up papers. Hence, 115 were included in the main analysis, of which 16 were rejected for publication after peer review. Of the 115 included papers, 21 (18.3%) were interventions, 46 (40.0%) longitudinal designs, 28 (24.3%) cross-sectional, and 20 (17.4%) others. The 16 (13.9%) rejected papers had a significantly lower initial score on the overall Goodman scale than accepted papers (difference 15.0, 95% CI: 4.6 to 24.4).
Suggesting a guideline to the reviewers had no effect on the change in overall quality as measured by the Goodman scale (0.9, 95% CI: -0.3 to +2.1). The estimated effect of adding a statistical reviewer was 5.5 (95% CI: 4.3 to 6.7), a significant improvement in quality. Conclusions and Significance. This prospective randomized study shows the positive effect of adding a statistical reviewer to the field-expert peers in improving manuscript quality. We did not find a statistically significant positive effect of suggesting that reviewers use reporting guidelines.
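The trial randomized manuscripts while balancing baseline characteristics (a minimization-style procedure); a simpler balanced-block sketch of assigning papers to the four arms of a 2×2 factorial (illustrative only, not the trial's actual algorithm):

```python
import random
from collections import Counter

def assign_factorial(manuscripts, seed=0):
    """Balanced assignment of manuscripts to the four arms of a
    2x2 factorial: (statistical reviewer yes/no) x (checklist yes/no)."""
    rng = random.Random(seed)
    arms = [(stat, check) for stat in (True, False) for check in (True, False)]
    blocks = arms * -(-len(manuscripts) // 4)  # repeat arms to cover all papers
    rng.shuffle(blocks)
    return dict(zip(manuscripts, blocks))

papers = [f"ms{i:03d}" for i in range(8)]
arm_counts = Counter(assign_factorial(papers).values())
print(sorted(arm_counts.values()))  # → [2, 2, 2, 2]
```

The block structure guarantees the four arms stay equal in size, which simple coin-flip randomization does not.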
Abstract:
In 2008, a Swiss Academies of Arts and Sciences working group chaired by Professor Emilio Bossi issued a "Memorandum on scientific integrity and the handling of misconduct in the scientific context", together with a paper setting out principles and procedures concerning integrity in scientific research. In the Memorandum, unjustified claims of authorship in scientific publications are referred to as a form of scientific misconduct - a view widely shared in other countries. In the Principles and Procedures, the main criteria for legitimate authorship are specified, as well as the associated responsibilities. It is in fact not uncommon for disputes about authorship to arise with regard to publications in fields where research is generally conducted by teams rather than individuals. Such disputes may concern not only the question of who is or is not to be listed as an author but also, frequently, the precise sequence of names, if the list is to reflect the various authors' roles and contributions. Subjective assessments of the contributions made by the individual members of a research group may differ substantially. As scientific collaboration - often across national boundaries - is now increasingly common, ensuring appropriate recognition of all parties is a complex matter and, where disagreements arise, it may not be easy to reach a consensus. In addition, customs have changed over the past few decades; for example, the practice of granting "honorary" authorship to an eminent researcher - formerly not unusual - is no longer considered acceptable. It should be borne in mind that the publications list has become by far the most important indicator of a researcher's scientific performance; for this reason, appropriate authorship credit has become a decisive factor in the careers of young researchers, and it needs to be managed and protected accordingly.
At the international and national level, certain practices have therefore developed concerning the listing of authors and the obligations of authorship. The Scientific Integrity Committee of the Swiss Academies of Arts and Sciences has collated the relevant principles and regulations and formulated recommendations for authorship in scientific publications. These should help to prevent authorship disputes and offer guidance in the event of conflicts.
Abstract:
The objective of this work was to combine the advantages of the dried blood spot (DBS) sampling process with highly sensitive and selective negative-ion chemical ionization tandem mass spectrometry (NICI-MS-MS) to analyze recent antidepressants, including fluoxetine, norfluoxetine, reboxetine, and paroxetine, from micro whole blood samples (i.e., 10 microL). Before analysis, DBS samples were punched out, and antidepressants were simultaneously extracted and derivatized in a single step by use of pentafluoropropionic acid anhydride and 0.02% triethylamine in butyl chloride for 30 min at 60 degrees C under ultrasonication. Derivatives were then separated on a gas chromatograph coupled with a triple-quadrupole mass spectrometer operating in negative selected reaction monitoring mode, for a total run time of 5 min. To establish the validity of the method, trueness, precision, and selectivity were determined on the basis of the guidelines of the "Société Française des Sciences et des Techniques Pharmaceutiques" (SFSTP). The assay was found to be linear in the concentration ranges 1 to 500 ng mL(-1) for fluoxetine and norfluoxetine and 20 to 500 ng mL(-1) for reboxetine and paroxetine. Despite the small sampling volume, the limit of detection was estimated at 20 pg mL(-1) for all the analytes. The stability of DBS was also evaluated at -20 degrees C, 4 degrees C, 25 degrees C, and 40 degrees C for up to 30 days. Furthermore, the method was successfully applied to a pharmacokinetic investigation performed on a healthy volunteer after oral administration of a single 40-mg dose of fluoxetine. Thus, this validated DBS method combines an extractive-derivative single step with a fast and sensitive GC-NICI-MS-MS technique. Using microliter blood samples, this procedure offers a patient-friendly tool in many biomedical fields, such as checking treatment adherence, therapeutic drug monitoring, toxicological analyses, or pharmacokinetic studies.
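Assay linearity over the stated concentration ranges is typically verified by fitting a calibration line of detector response against concentration; a minimal ordinary-least-squares sketch (the calibration points below are synthetic, not measured values from the study):

```python
def linear_fit(x, y):
    """Ordinary least-squares line for a calibration curve
    (x = concentration, y = detector response)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

# Synthetic calibration points over a 1-500 ng/mL range (illustrative)
conc = [1.0, 20.0, 100.0, 250.0, 500.0]
resp = [0.004 * c + 0.01 for c in conc]
slope, intercept = linear_fit(conc, resp)
print(round(slope, 6), round(intercept, 6))  # → 0.004 0.01
```

In practice the fit's residuals (or a correlation coefficient) across the working range support the linearity claim.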
Abstract:
Modelling the shoulder's musculature is challenging given its mechanical and geometric complexity. The use of the ideal fibre model to represent a muscle's line of action cannot always faithfully represent the mechanical effect of each muscle, leading to considerable differences between model-estimated and in vivo measured muscle activity. While the musculo-tendon force coordination problem has been extensively analysed in terms of the cost function, only a few works have investigated the existence and sensitivity of solutions to fibre topology. The goal of this paper is to present an analysis of the solution set using the concepts of torque-feasible space (TFS) and wrench-feasible space (WFS) from cable-driven robotics. A shoulder model is presented and a simple musculo-tendon force coordination problem is defined. The ideal fibre model for representing muscles is reviewed and the TFS and WFS are defined, leading to the necessary and sufficient conditions for the existence of a solution. The shoulder model's TFS is analysed to explain the lack of anterior deltoid (DLTa) activity. Based on the analysis, a modification of the model's muscle fibre geometry is proposed. The performance with and without the modification is assessed by solving the musculo-tendon force coordination problem for quasi-static abduction in the scapular plane. After the proposed modification, the DLTa reaches 20% of activation.
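For a single degree of freedom, the torque-feasible space reduces to an interval of achievable joint torques given each fibre's moment arm and admissible force range; a minimal sketch of that existence condition (the moment arms and maximal forces are hypothetical, and the shoulder problem above is of course multi-dimensional):

```python
def torque_feasible_interval(moment_arms, f_max):
    """1-DOF torque-feasible space: with muscle-tendon forces
    0 <= f_i <= f_max_i, the reachable joint torques form the interval
    [sum of negative contributions, sum of positive contributions]."""
    lo = sum(r * f for r, f in zip(moment_arms, f_max) if r < 0)
    hi = sum(r * f for r, f in zip(moment_arms, f_max) if r > 0)
    return lo, hi

# Hypothetical moment arms (m) and maximal fibre forces (N)
r = [0.02, -0.015, 0.03]
fmax = [500.0, 400.0, 300.0]
lo, hi = torque_feasible_interval(r, fmax)
print(lo, hi)  # → -6.0 19.0

target_torque = 12.0  # Nm; a solution exists iff lo <= target <= hi
print(lo <= target_torque <= hi)  # → True
```

In higher dimensions the same idea generalizes to the zonotope of achievable wrenches, which is what the TFS/WFS analysis of the paper examines.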
Abstract:
Biomedical research is currently facing a new type of challenge: an excess of information, both in terms of raw data from experiments and in the number of scientific publications describing their results. Mirroring the focus on data mining techniques to address the issues of structured data, there has recently been great interest in the development and application of text mining techniques to make more effective use of the knowledge contained in biomedical scientific publications, accessible only in the form of natural human language. This thesis describes research done in the broader scope of projects aiming to develop methods, tools and techniques for text mining tasks in general and for the biomedical domain in particular. The work described here involves more specifically the goal of extracting information from statements concerning relations of biomedical entities, such as protein-protein interactions. The approach taken is one using full parsing - syntactic analysis of the entire structure of sentences - and machine learning, aiming to develop reliable methods that can further be generalized to apply also to other domains. The five papers at the core of this thesis describe research on a number of distinct but related topics in text mining. In the first of these studies, we assessed the applicability of two popular general English parsers to biomedical text mining and, finding their performance limited, identified several specific challenges to accurate parsing of domain text. In a follow-up study focusing on parsing issues related to specialized domain terminology, we evaluated three lexical adaptation methods. We found that the accurate resolution of unknown words can considerably improve parsing performance, and introduced a domain-adapted parser that reduced the error rate of the original by 10% while also roughly halving parsing time.
To establish the relative merits of parsers that differ in the applied formalisms and the representation given to their syntactic analyses, we have also developed evaluation methodology, considering different approaches to establishing comparable dependency-based evaluation results. We introduced a methodology for creating highly accurate conversions between different parse representations, demonstrating the feasibility of unifying diverse syntactic schemes under a shared, application-oriented representation. In addition to allowing formalism-neutral evaluation, we argue that such unification can also increase the value of parsers for domain text mining. As a further step in this direction, we analysed the characteristics of publicly available biomedical corpora annotated for protein-protein interactions and created tools for converting them into a shared form, thus contributing also to the unification of text mining resources. The introduced unified corpora allowed us to perform a task-oriented comparative evaluation of biomedical text mining corpora. This evaluation established clear limits on the comparability of results for text mining methods evaluated on different resources, prompting further efforts toward standardization. To support this and other research, we have also designed and annotated BioInfer, the first domain corpus of its size combining annotation of syntax and biomedical entities with a detailed annotation of their relationships. The corpus represents a major design and development effort of the research group, with manual annotation that identifies over 6,000 entities, 2,500 relationships and 28,000 syntactic dependencies in 1,100 sentences. In addition to combining these key annotations for a single set of sentences, BioInfer was also the first domain resource to introduce a representation of entity relations that is supported by ontologies and able to capture complex, structured relationships.
Part I of this thesis presents a summary of this research in the broader context of a text mining system, and Part II contains reprints of the five included publications.
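The formalism-neutral, dependency-based evaluation described above ultimately compares sets of head-dependent pairs; a minimal sketch of an unlabeled dependency F-score (the toy parses are illustrative):

```python
def dependency_f1(gold, predicted):
    """Unlabeled dependency F-score, treating each parse as a set of
    (head, dependent) token-index pairs."""
    gold, predicted = set(gold), set(predicted)
    if not gold and not predicted:
        return 1.0
    tp = len(gold & predicted)
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Toy 3-token sentence: gold parse vs. a parse with one wrong attachment
gold = {(2, 1), (0, 2), (2, 3)}
pred = {(2, 1), (0, 2), (3, 2)}
print(round(dependency_f1(gold, pred), 3))  # P = R = 2/3 → 0.667
```

Converting different formalisms' outputs into this shared pair representation is what makes parsers with incompatible native schemes directly comparable.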
Abstract:
Connectivity analysis on whole-brain diffusion MRI data suffers from distortions caused by standard echo-planar imaging acquisition strategies. These images show characteristic geometrical deformations and signal destruction that are an important drawback limiting the success of tractography algorithms. Several retrospective correction techniques are readily available. In this work, we use a digital phantom designed for the evaluation of connectivity pipelines. We subject the phantom to a "theoretically correct" and plausible deformation that resembles the artifact under investigation. We then correct the data back with three standard methodologies (namely fieldmap-based, reversed encoding-based, and registration-based). Finally, we rank the methods based on their geometrical accuracy, their dropout compensation, and their impact on the resulting connectivity matrices.
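Of the three correction families, the fieldmap-based one maps each voxel's displacement along the phase-encode axis to the local fieldmap value times the effective readout time; a minimal 1D sketch that ignores intensity (Jacobian) correction (the function name and numbers are illustrative, not from the paper):

```python
import numpy as np

def fieldmap_unwarp_1d(observed, fieldmap_hz, readout_s):
    """Fieldmap-based EPI unwarping along the phase-encode axis (1D sketch).
    The true intensity at position p was observed at p + shift(p), where
    shift = fieldmap (Hz) * effective readout time (s), in voxels.
    Intensity (Jacobian) correction is deliberately omitted here."""
    n = len(observed)
    grid = np.arange(n, dtype=float)
    shift = np.asarray(fieldmap_hz) * readout_s
    # Resample the observed profile back onto the undistorted grid
    return np.interp(grid + shift, grid, observed)

# Uniform +1 voxel shift: 20 Hz fieldmap, 0.05 s effective readout
sig = np.array([0.0, 0.0, 1.0, 2.0, 1.0, 0.0])
corrected = fieldmap_unwarp_1d(sig, np.full(6, 20.0), 0.05)
print(corrected)  # signal shifted back: [0. 1. 2. 1. 0. 0.]
```

Reversed-encoding and registration-based approaches estimate an equivalent displacement field from two opposite-polarity acquisitions or from alignment to an undistorted reference, respectively.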