967 results for Biomedical Research
Abstract:
The scientific development achieved over recent decades finds in clinical research a great opportunity to translate findings into applications for human health. Evidence from clinical trials allows everyone to have access to the best health services. However, the multimillion-dollar world of the pharmaceutical industry has stained clinical research with doubt and uncertainty. Study results (the fruits of controlled clinical trials) and scientific publications (selective, manipulated, or drawing wrong conclusions) have led to inappropriate clinical practice that favors the economic interests involved. In 2005, the International Committee of Medical Journal Editors (ICMJE), supported by the World Association of Medical Editors, began demanding, as a requisite for publication, that all clinical trials be registered in the ClinicalTrials.gov database. In 2006, the World Health Organization (WHO) created the International Clinical Trials Registry Platform (ICTRP), which gathers registry centers from all over the world, and required all researchers and pharmaceutical companies to register their clinical trials. This mandatory registration has advanced and is expected to extend to all scientific journals indexed in databases worldwide. The registration of clinical trials is another step for clinical research towards transparency, ethics and impartiality, providing real evidence for forthcoming changes in clinical practice and in public health.
Abstract:
In this work, a method for the functionalization of biocompatible, poly(lactic acid)-based nanoparticles with charged moieties or fluorescent labels is presented. To this end, a miniemulsion solvent evaporation procedure is used in which prepolymerized poly(L-lactic acid) is combined with either a previously synthesized copolymer of methacrylic acid or a polymerizable dye, together with an oligo(lactic acid) macromonomer. Alternatively, the copolymerization has been carried out in one step with the miniemulsion solvent evaporation. Light scattering experiments showed the increased stability in salty solutions of the carboxyl-modified nanoparticles compared to nanoparticles consisting of poly(lactic acid) only. The properties of the nanoparticles prepared with the separately synthesized copolymer were almost identical to those for which the copolymerization and particle fabrication were carried out simultaneously. During the characterization of the fluorescently labeled nanoparticles, the focus was on the stable bonding between the fluorescent dye and the rest of the polymer chain, to ensure that none of the dye is released from the particles, even after longer storage times or during lengthy experiments. A fluorescence correlation spectroscopy experiment showed that even after two weeks, no dye had been released into the solvent. Besides biomedical research, for which the functionalized nanoparticles described above were optimized, nanoparticles also play a role in coating technology. One way to fabricate coatings is the electrophoretic deposition of particles. In this process, the mobility of nanoparticles near electrode interfaces plays a crucial role. In this thesis, the nanoparticle mobility has been investigated with resonance enhanced dynamic light scattering (REDLS).
A new setup has been developed in which the evanescent electromagnetic field of a surface plasmon propagating along the gold-sample interface serves as the incident beam for the dynamic light scattering experiment. The gold layer that is necessary for the excitation of the plasmon doubles as an electrode. Because the penetration depth of the surface plasmon into the sample layer is limited to ca. 200 nm, insights into the voltage- and frequency-dependent mobility of the nanoparticles near the electrode could be gained. Additionally, simultaneous measurements at four different scattering angles can be carried out with this setup, making the investigation of samples undergoing changes feasible. The results were discussed in the context of the mechanisms of electrophoretic deposition.
Abstract:
OGOLOD is a Linked Open Data dataset derived from different biomedical resources by an automated pipeline, using a tailored ontology as a scaffold. The key contribution of OGOLOD is that it links, in new RDF triples, human genetic diseases and orthologous genes, paving the way for more efficient translational biomedical research that exploits the Linked Open Data cloud.
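The linking idea described here can be sketched with plain Python tuples standing in for RDF triples. The URIs and predicate names below (e.g. `ogolod:associatedGene`, `ogolod:hasOrtholog`) are illustrative assumptions, not the actual OGOLOD vocabulary:

```python
# Toy triple store: disease -> associated gene -> orthologous genes.
# All identifiers and predicates are made up for illustration.
triples = [
    ("omim:143100", "rdf:type", "ogolod:GeneticDisease"),
    ("omim:143100", "ogolod:associatedGene", "gene:HTT"),
    ("gene:HTT", "ogolod:hasOrtholog", "gene:Htt_mouse"),
    ("gene:HTT", "ogolod:hasOrtholog", "gene:htt_zebrafish"),
]

def orthologs_for_disease(disease, triples):
    """Follow disease -> gene -> ortholog links across the triple set."""
    genes = {o for s, p, o in triples
             if s == disease and p == "ogolod:associatedGene"}
    return {o for s, p, o in triples
            if s in genes and p == "ogolod:hasOrtholog"}

print(sorted(orthologs_for_disease("omim:143100", triples)))
# -> ['gene:Htt_mouse', 'gene:htt_zebrafish']
```

In the real dataset this two-hop traversal would be a SPARQL query against the published triples; the point here is only how new ortholog triples make the disease-to-model-organism hop queryable at all.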
Abstract:
Background: One of the main challenges for biomedical research lies in the computer-assisted integrative study of large and increasingly complex combinations of data in order to understand molecular mechanisms. Preserving the materials and methods of such computational experiments with clear annotations is essential for understanding them, and this is increasingly recognized in the bioinformatics community. Our assumption is that offering means of digital, structured aggregation and annotation of the objects of an experiment will provide the metadata necessary for a scientist to understand and recreate its results. To support this, we explored a model for the semantic description of a workflow-centric Research Object (RO), where an RO is defined as a resource that aggregates other resources, e.g., datasets, software, spreadsheets, text, etc. We applied this model to a case study in which we analysed human metabolite variation by workflows. Results: We present the application of the workflow-centric RO model to our bioinformatics case study. Three workflows were produced following recently defined Best Practices for workflow design. By modelling the experiment as an RO, we were able to automatically query the experiment and answer questions such as “which particular data was input to a particular workflow to test a particular hypothesis?” and “which particular conclusions were drawn from a particular workflow?”. Conclusions: Applying a workflow-centric RO model to aggregate and annotate the resources used in a bioinformatics experiment allowed us to retrieve the conclusions of the experiment in the context of the driving hypothesis, the executed workflows and their input data. The RO model is an extensible reference model that can be used by other systems as well.
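The aggregation-plus-annotation idea can be sketched in a few lines: an RO holds resources and typed links between them, and the quoted questions become lookups over those links. The relation names (`inputTo`, `drawnFrom`) and resource names are illustrative assumptions, not the actual RO vocabulary:

```python
# Minimal sketch of a workflow-centric Research Object: a container of
# resources plus (subject, relation, target) annotations linking them.
from dataclasses import dataclass, field

@dataclass
class ResearchObject:
    resources: set = field(default_factory=set)
    annotations: list = field(default_factory=list)  # (subject, relation, target)

    def query(self, relation, target):
        """All subjects linked to `target` by `relation`."""
        return [s for s, r, t in self.annotations if r == relation and t == target]

ro = ResearchObject()
ro.resources |= {"metabolite_data.csv", "wf_variation", "conclusion_1"}
ro.annotations += [
    ("metabolite_data.csv", "inputTo", "wf_variation"),
    ("conclusion_1", "drawnFrom", "wf_variation"),
]

# "Which particular data was input to this workflow?"
print(ro.query("inputTo", "wf_variation"))   # ['metabolite_data.csv']
# "Which particular conclusions were drawn from it?"
print(ro.query("drawnFrom", "wf_variation")) # ['conclusion_1']
```

The actual model expresses such annotations as RDF so they can be queried with SPARQL; the sketch only shows why structured aggregation makes these questions mechanically answerable.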
Abstract:
Introduction – Building on a previous project of the University of Lisbon (UL) – a bibliometric benchmarking analysis of the University of Lisbon for the period 2000-2009 – a database was created to support research information (ULSR). However, this system was not integrated with other existing systems at the University, such as the UL Libraries Integrated System (SIBUL) and the Repository of the University of Lisbon (Repositório.UL). Since libraries were called to be part of the process, the Faculty of Pharmacy Library's team felt that it was very important to get all systems connected or, at least, to use that data in the library systems. Objectives – The main goals were to centralize all the scientific research produced at the Faculty of Pharmacy, make it available to the entire Faculty, involve researchers and the library team, capitalize on and reinforce teamwork by integrating several distinct projects, and reduce task redundancy. Methods – Our starting point was the data collection for the period 2000-2009 imported from the ISI Web of Science (WoS) into ULSR. All researchers and publications indexed in WoS were identified. A first validation was done to identify all researchers and their affiliations (university, faculty, department and unit). The final validation was done by each researcher. In a second round, covering the same period, all Faculty of Pharmacy researchers identified their published scientific work in other databases/resources (NOT WoS). For our strategy, it was important to gather all the references, and it was essential to relate them to the corresponding digital objects. Each previously identified researcher was asked to register all references to their 'NOT WoS' published works in ULSR. At the same time, they were asked to submit all PDF files (for both WoS and NOT WoS works) to a personal area of the Web server.
This effort enabled us to perform a more reliable validation and to prepare the data and metadata to be imported into the Repository and the Library Catalogue. Results – 558 documents related to 122 researchers were added to ULSR. 1378 bibliographic records (WoS + NOT WoS) were converted into UNIMARC and Dublin Core formats. All records were integrated into the catalogue and the repository. Conclusions – Although different strategies could be adopted by each library team, we intend to share this experience and offer some tips on what could be done and how the Faculty of Pharmacy created and implemented its strategy.
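The Dublin Core conversion step mentioned above can be sketched with the standard library alone. The input field names (`title`, `authors`, `year`, `source`) are assumptions about what a ULSR/WoS export might look like, not the project's actual schema:

```python
# Hedged sketch: mapping a minimal bibliographic record into Dublin Core
# XML. Only a few of the 15 DC elements are shown.
import xml.etree.ElementTree as ET

DC = "http://purl.org/dc/elements/1.1/"

def to_dublin_core(record):
    """Serialize a dict-shaped record as Dublin Core XML."""
    ET.register_namespace("dc", DC)
    root = ET.Element("record")
    mapping = {"title": "title", "authors": "creator",
               "year": "date", "source": "publisher"}
    for src, dc_term in mapping.items():
        for value in record.get(src, []):
            ET.SubElement(root, f"{{{DC}}}{dc_term}").text = value
    return ET.tostring(root, encoding="unicode")

xml = to_dublin_core({"title": ["A pharmacokinetics study"],
                      "authors": ["Silva, A.", "Costa, B."],
                      "year": ["2007"]})
print(xml)
```

A parallel mapping into UNIMARC fields (200 for title, 700/701 for authors, and so on) would follow the same record-walking pattern with a different output writer.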
Abstract:
Mode of access: Internet.
Abstract:
The author’s work with a university ethics committee and field research in Pacific New Caledonia is used as a basis to problematise the biomedical research models used by universities in Australia for assessing social research as ethical. The article explores how culturally specific Western emotional bases for ethical decisions are often unexamined. It expresses concerns about gaps in biomedical models by linking the author’s description of field interactions with research participants to debates about the creation of knowledge.
Abstract:
This article represents the proceedings of a symposium at the 2004 International Society for Biomedical Research on Alcoholism meeting in Mannheim, Germany, organized and co-chaired by Susan E. Bergeson and Wolfgang Sommer. The presentations and presenters were: (1) Gene Expression in Brains of Alcohol-Preferring and Non-Preferring Rats, by Howard J. Edenberg; (2) Candidate Treatment Targets for Alcoholism: Leads from Functional Genomics Approaches, by Wolfgang Sommer; (3) Microarray Analysis of Acute and Chronic Alcohol Response in Brain, by Susan E. Bergeson; (4) On the Integration of QTL and Gene Expression Analysis, by Robert J. Hitzemann; and (5) Microarray and Proteomic Analysis of the Human Alcoholic Brain, by Peter R. Dodd.
Abstract:
The premise of this dissertation is to create a highly integrated platform that combines the most current recording technologies for brain research through the development of new algorithms for three-dimensional (3D) functional mapping and 3D source localization. The recording modalities that were integrated include electroencephalography (EEG), optical topographic maps (OTM), magnetic resonance imaging (MRI), and diffusion tensor imaging (DTI). This work can be divided into two parts. The first part involves the integration of OTM with MRI, where the topographic maps are mapped to both the skull and the cortical surface of the brain. This integration is made possible through new algorithms that determine the probe locations on the MRI head model and warp the 2D topographic maps onto the 3D MRI head/brain model. Dynamic changes in brain activation can be visualized on the MRI head model through a graphical user interface. The second part of this research involves augmenting a fiber tracking system with the ability to integrate the source localization results generated by the commercial software Curry. This task involved registering the EEG electrodes and the dipole results to the MRI data. Such integration allows the visualization of fiber tracts, along with the source of the EEG, in a 3D transparent brain structure. The research findings of this dissertation were tested and validated with the participation of patients from Miami Children's Hospital (MCH). Such an integrated platform, presented to medical professionals in the form of a user-friendly graphical interface, is viewed as a major contribution of this dissertation.
It should be emphasized that there are two main aspects to this research endeavor: (1) if a dipole can be situated in time at its different positions, its trajectory may reveal additional information on the extent and nature of the brain malfunction; (2) situating such a dipole trajectory with respect to the fiber tracts could ensure the preservation of those tracts (axons) during surgical interventions, thereby preserving the parts of the brain that are responsible for information transmission.
Abstract:
The rapid progression of biomedical research, coupled with the explosion of scientific literature, has generated an exigent need for efficient and reliable systems of knowledge extraction. This dissertation contends with this challenge through a concentrated investigation of digital health and Artificial Intelligence, specifically the potential of Machine Learning and Natural Language Processing (NLP) to expedite systematic literature reviews and refine the knowledge extraction process. The surge of COVID-19 complicated the efforts of scientists, policymakers, and medical professionals to identify pertinent articles and assess their scientific validity. This thesis presents a substantial solution in the form of the COKE ("COVID-19 Knowledge Extraction framework for next-generation discovery science") Project, an initiative that interlaces machine reading with the rigorous protocols of Evidence-Based Medicine to streamline knowledge extraction. Within this framework, the thesis aims to underscore the capacity of machine reading to create knowledge graphs from scientific texts. The project is notable for its use of NLP techniques such as a BERT + bi-LSTM language model, employed to detect and categorize elements within medical abstracts, thereby enhancing the systematic literature review process. The COKE project's outcomes show that NLP, when used in a judiciously structured manner, can significantly reduce the time and effort required to produce medical guidelines. These findings are particularly salient in times of medical emergency, like the COVID-19 pandemic, when quick and accurate research results are critical.
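To make the task concrete, here is a deliberately naive keyword tagger standing in for the BERT + bi-LSTM model: it only illustrates the *shape* of the problem (labeling abstract sentences so they can feed a knowledge graph), not the actual COKE model, labels, or keywords, all of which are assumptions here:

```python
# Toy stand-in for sentence-level abstract tagging. A real system would
# encode each sentence with BERT and pass the sequence through a bi-LSTM
# classifier; this rule-based version exists only to show the task shape.
def tag_sentence(sentence):
    s = sentence.lower()
    if any(k in s for k in ("we evaluated", "we trained", "methods")):
        return "METHOD"
    if any(k in s for k in ("accuracy", "improved", "results")):
        return "RESULT"
    return "BACKGROUND"

abstract = [
    "COVID-19 has overwhelmed evidence synthesis.",
    "We trained a sentence classifier on annotated abstracts.",
    "The classifier improved review throughput.",
]
print([tag_sentence(s) for s in abstract])
# -> ['BACKGROUND', 'METHOD', 'RESULT']
```

The gap between this sketch and a learned model is exactly where the dissertation's contribution lies: contextual embeddings generalize where keyword lists fail.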
Abstract:
Often in biomedical research, we deal with continuous (clustered) proportion responses ranging between zero and one that quantify the disease status of the cluster units. Interestingly, the study population might consist of relatively disease-free as well as highly diseased subjects, contributing proportion values in the closed interval [0, 1]. Regression on a variety of parametric densities with support in (0, 1), such as beta regression, can assess important covariate effects, but these densities are inappropriate in the presence of zeros and/or ones. To address this, we introduce a class of general proportion densities and further augment the probabilities of zero and one to this density, controlling for the clustering. Our approach is Bayesian and presents a computationally convenient framework amenable to available freeware. Bayesian case-deletion influence diagnostics based on q-divergence measures are automatic from the Markov chain Monte Carlo output. The methodology is illustrated using both simulation studies and application to a real dataset from a clinical periodontology study.
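The zero/one augmentation can be sketched as a mixture: point masses at the boundaries plus a continuous density on (0, 1). A beta density stands in below for the paper's more general proportion density, and the clustering and Bayesian machinery are omitted; parameter values are illustrative:

```python
# Minimal sketch of a zero/one-augmented density on [0, 1]:
#   P(Y = 0) = p0,  P(Y = 1) = p1,
#   density (1 - p0 - p1) * f(y) on (0, 1), here f = Beta(a, b).
import math

def beta_pdf(y, a, b):
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return y ** (a - 1) * (1 - y) ** (b - 1) / B

def augmented_density(y, p0, p1, a, b):
    """Mixture density/mass for the augmented model."""
    if y == 0.0:
        return p0
    if y == 1.0:
        return p1
    return (1 - p0 - p1) * beta_pdf(y, a, b)

# Log-likelihood of a small sample containing exact zeros and ones:
data = [0.0, 0.15, 0.4, 1.0, 0.75]
ll = sum(math.log(augmented_density(y, 0.1, 0.05, 2.0, 2.0)) for y in data)
print(round(ll, 3))
```

A beta regression evaluated on the same data would fail at the first observation, since the beta density is undefined at 0 and 1; the point masses are what make boundary observations contribute finite likelihood.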
Abstract:
Background: Malaria is an important threat to travelers visiting endemic regions. The risk of acquiring malaria is complex, and a number of factors, including transmission intensity, duration of exposure, season of the year and use of chemoprophylaxis, have to be taken into account when estimating it. Materials and methods: A mathematical model was developed to estimate the risk of a non-immune individual acquiring falciparum malaria when traveling to the Amazon region of Brazil. The risk of malaria infection to travelers was calculated as a function of duration of exposure and season of arrival. Results: The results suggest significant variation in risk for non-immune travelers depending on arrival season, duration of the visit and transmission intensity. The calculated risk for visitors staying longer than 4 months during peak transmission was 0.5% per visit. Conclusions: Risk estimates from mathematical models built on accurate data can be a valuable tool in assessing risks/benefits and costs/benefits when deciding on the value of interventions for travelers to malaria-endemic regions.
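The style of model described, risk as a function of exposure duration and season, can be sketched as a constant-hazard survival model per season. The hazard values below are not the paper's fitted parameters; they are assumptions back-fit loosely to the quoted 0.5% figure for a 4-month peak-season stay:

```python
# Hedged sketch: probability of at least one infection during a stay,
# with a season-dependent hazard. P = 1 - exp(-hazard * months).
import math

MONTHLY_HAZARD = {          # infections/month for a non-immune visitor
    "dry": 0.0003,          # low-transmission season (assumed)
    "wet": 0.00125,         # peak-transmission season (assumed)
}

def infection_risk(months, season):
    """P(at least one infection) under a constant seasonal hazard."""
    return 1.0 - math.exp(-MONTHLY_HAZARD[season] * months)

# A 4-month stay spanning peak transmission:
print(f"{infection_risk(4, 'wet'):.4%}")
```

A fuller version would sum month-by-month hazards across a stay that crosses seasons, and scale the hazard by local transmission intensity and chemoprophylaxis efficacy, which is essentially the dependence structure the abstract describes.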
Abstract:
Background: Extracellular vesicles in yeast cells are involved in molecular traffic across the cell wall. In yeast pathogens, these vesicles have been implicated in the transport of proteins, lipids, polysaccharides and pigments to the extracellular space. The cellular pathways required for the biogenesis of yeast extracellular vesicles are largely unknown. Methodology/Principal Findings: We characterized extracellular vesicle production in wild-type (WT) and mutant strains of the model yeast Saccharomyces cerevisiae using transmission electron microscopy in combination with light scattering analysis, lipid extraction and proteomics. WT cells and mutants with defective expression of Sec4p, a secretory vesicle-associated Rab GTPase essential for Golgi-derived exocytosis, or Snf7p, which is involved in multivesicular body (MVB) formation, were analyzed in parallel. Bilayered vesicles with diameters in the 100-300 nm range were found in extracellular fractions from yeast cultures. Proteomic analysis of vesicular fractions from the aforementioned cells and from additional mutants with defects in conventional secretion pathways (sec1-1, fusion of Golgi-derived exocytic vesicles with the plasma membrane; bos1-1, vesicle targeting to the Golgi complex) or in MVB functionality (vps23, late endosomal trafficking) revealed a complex and interrelated protein collection. Semi-quantitative analysis of protein abundance revealed that mutations in both MVB- and Golgi-derived pathways affected the composition of yeast extracellular vesicles, but none abrogated vesicle production. Lipid analysis revealed that mutants with defects in Golgi-related components of the secretory pathway had slower vesicle release kinetics, as inferred from the intracellular accumulation of sterols and the reduced detection of these lipids in vesicle fractions in comparison with WT cells.
Conclusions/Significance: Our results suggest that both conventional and unconventional pathways of secretion are required for the biogenesis of extracellular vesicles, which demonstrates the complexity of this process in the biology of yeast cells.
Abstract:
Glycosylphosphatidylinositol (GPI) anchoring is a common and relevant posttranslational modification of eukaryotic surface proteins. Here, we developed a fast, simple, and highly sensitive (high attomole-low femtomole range) method that uses liquid chromatography-tandem mass spectrometry (LC-MS(n)) for the first large-scale analysis of the GPI-anchored molecules (i.e., the GPIome) of a eukaryote, Trypanosoma cruzi, the etiologic agent of Chagas disease. Our genome-wide prediction analysis revealed that approximately 12% of T. cruzi genes possibly encode GPI-anchored proteins. By analyzing the GPIome of the T. cruzi insect-dwelling epimastigote stage using LC-MS(n), we identified 90 GPI species, of which 79 were novel. Moreover, we determined that mucins encoded by the T. cruzi small mucin-like gene (TcSMUG S) family are the major GPI-anchored proteins expressed on the epimastigote cell surface. TcSMUG S mucin mature sequences are short (56-85 amino acids), highly O-glycosylated, and contain few proteolytic sites, and are therefore less likely to be susceptible to proteases in the midgut of the insect vector. We propose that our approach could be used for high-throughput GPIomic analysis of other lower and higher eukaryotes. Molecular Systems Biology 7 April 2009; doi:10.1038/msb.2009.13
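The genome-wide prediction step relies on recognizing the C-terminal GPI-attachment signal: a small omega-site residue followed by a short hydrophobic tail. The toy heuristic below only illustrates that idea; real predictors are far more elaborate, and the residue sets and thresholds here are assumptions for illustration:

```python
# Toy GPI-anchor signal check: does the C-terminus look like a small
# omega-site residue followed by a mostly hydrophobic tail?
HYDROPHOBIC = set("AVLIFMWC")
SMALL = set("SGANDC")  # residues typically tolerated at the omega site

def looks_gpi_anchored(seq, tail_len=10):
    """Crude screen over a protein sequence's C-terminal region."""
    if len(seq) < tail_len + 1:
        return False
    tail = seq[-tail_len:]
    omega = seq[-tail_len - 1]
    hydrophobic_frac = sum(r in HYDROPHOBIC for r in tail) / tail_len
    return omega in SMALL and hydrophobic_frac >= 0.7

# An invented sequence ending in Ser + a hydrophobic stretch:
print(looks_gpi_anchored("MKTSLLLAVVILFA"))
# -> True
```

Running such a screen over every predicted protein in a genome is what yields figures like the "approximately 12%" estimate above, after which mass spectrometry confirms which candidates actually carry the anchor.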
Abstract:
In recent years, the phrase 'genomic medicine' has increasingly been used to describe a new development in medicine that holds great promise for human health. This new approach to health care uses the knowledge of an individual's genetic make-up to identify those who are at a higher risk of developing certain diseases and to intervene at an earlier stage to prevent these diseases. Identifying genes that are involved in disease aetiology will provide researchers with tools to develop better treatments and cures. A major role within this field is attributed to 'predictive genomic medicine', which proposes screening healthy individuals to identify those who carry alleles that increase their susceptibility to common diseases, such as cancers and heart disease. Physicians could then intervene even before the disease manifests and advise individuals with a higher genetic risk to change their behaviour - for instance, to exercise or to eat a healthier diet - or offer drugs or other medical treatment to reduce their chances of developing these diseases. These promises have fallen on fertile ground among politicians, health-care providers and the general public, particularly in light of the increasing costs of health care in developed societies. Various countries have established databases of the DNA and health information of whole populations as a first step towards genomic medicine. Biomedical research has also identified a large number of genes that could be used to predict someone's risk of developing a certain disorder. But it would be premature to assume that genomic medicine will soon become reality, as many problems remain to be solved. Our knowledge about most disease genes and their roles is far from sufficient to make reliable predictions about a patient's risk of actually developing a disease. In addition, genomic medicine will create new political, social, ethical and economic challenges that will have to be addressed in the near future.