939 results for Open Information Extraction


Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

This study investigated the availability and use of audiovisual and electronic resources by distance learning students at the National Open University of Nigeria (NOUN). A questionnaire was administered to distance learning students selected across the various departments of the NOUN. The findings revealed that even though NOUN made provision for audiovisual and electronic resources for students' use, most of the audiovisual and electronic resources were available through the students' personal provision. The study also revealed regular use of audiovisual and electronic resources by the distance learning students. Constraints on use include poor power supply, poor infrastructure, lack of adequate skill, and high cost of access.

Relevance:

30.00%

Publisher:

Abstract:

This article describes the design, implementation, and experiences with AcMus, an open and integrated software platform for room acoustics research, which comprises tools for measurement, analysis, and simulation of rooms for music listening and production. Through use of affordable hardware, such as laptops, consumer audio interfaces and microphones, the software allows evaluation of relevant acoustical parameters with stable and consistent results, thus providing valuable information in the diagnosis of acoustical problems, as well as the possibility of simulating modifications in the room through analytical models. The system is open-source and based on a flexible and extensible Java plug-in framework, allowing for cross-platform portability, accessibility and experimentation, thus fostering collaboration of users, developers and researchers in the field of room acoustics.
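
AcMus itself is a Java plug-in platform; as a rough sketch of the kind of acoustical-parameter computation such tools perform, the Python snippet below estimates reverberation time (RT60) from a measured impulse response via Schroeder backward integration and a T20 line fit. The function name, decay window, and synthetic impulse response are assumptions for illustration, not AcMus code.

```python
import numpy as np

def schroeder_rt60(impulse_response, fs, db_start=-5.0, db_end=-25.0):
    """Estimate RT60 from an impulse response using Schroeder backward
    integration and a T20 fit (decay slope extrapolated to 60 dB)."""
    # Energy decay curve: backward-integrated squared impulse response, in dB.
    energy = np.cumsum(impulse_response[::-1] ** 2)[::-1]
    edc_db = 10.0 * np.log10(energy / energy.max())

    # Fit a line to the decay between -5 dB and -25 dB.
    idx = np.where((edc_db <= db_start) & (edc_db >= db_end))[0]
    t = idx / fs
    slope, intercept = np.polyfit(t, edc_db[idx], 1)  # dB per second

    # Time needed to decay 60 dB at the fitted rate.
    return -60.0 / slope

# Hypothetical usage with a synthetic exponentially decaying noise burst (~1.2 s):
fs = 44100
t = np.arange(0, 2.0, 1.0 / fs)
ir = np.random.randn(t.size) * np.exp(-6.9 * t / 1.2)
print(f"Estimated RT60: {schroeder_rt60(ir, fs):.2f} s")
```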

Relevance:

30.00%

Publisher:

Abstract:

In this work, a method of computing PD stabilising gains for rotating systems is presented based on the D-decomposition technique, which requires only knowledge of frequency response functions. By applying this method to a rotating system with electromagnetic actuators, it is demonstrated that the stability boundary locus in the plane of feedback gains can be easily plotted and that the most suitable gains can be found to minimise the resonant peak of the system. Experimental results for a Laval rotor show the feasibility not only of controlling lateral shaft vibration and assuring stability, but also of predicting the final vibration level achieved by the closed-loop system. These results are obtained solely from the input-output response information of the system as a whole.
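
A minimal sketch of the D-decomposition step described above, assuming only a measured frequency response G(jω) is available: with PD feedback u = -(kp + kd·s)·y, the closed-loop characteristic equation 1 + (kp + jω·kd)·G(jω) = 0 holds on the stability boundary, so each frequency maps to one boundary point (kp(ω), kd(ω)). The function name and the synthetic one-mode rotor FRF are illustrative, not the authors' implementation.

```python
import numpy as np

def pd_stability_boundary(omega, G):
    """D-decomposition boundary for PD feedback u = -(kp + kd*s)*y.

    On the boundary s = j*omega, 1 + (kp + j*omega*kd) * G(j*omega) = 0,
    i.e. kp + j*omega*kd = -1 / G(j*omega).  Splitting into real and
    imaginary parts gives one (kp, kd) point per frequency.
    """
    inv = -1.0 / G
    kp = inv.real
    kd = np.divide(inv.imag, omega, out=np.zeros_like(omega), where=omega != 0)
    return kp, kd

# Illustrative FRF of a lightly damped one-mode rotor model (not measured data):
omega = np.linspace(0.1, 400.0, 2000)          # rad/s
wn, zeta, k = 120.0, 0.02, 1e-4                # natural frequency, damping, gain
G = k / (wn**2 - omega**2 + 2j * zeta * wn * omega)

kp, kd = pd_stability_boundary(omega, G)
# Plotting kp against kd traces the stability boundary locus in the gain plane.
```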

Relevance:

30.00%

Publisher:

Abstract:

Combining data from multiple analytical platforms is essential for comprehensive study of the molecular phenotype (metabotype) of a given biological sample. The metabolite profiles generated are intrinsically dependent on the analytical platforms, each requiring optimization of instrumental parameters, separation conditions, and sample extraction to deliver maximal biological information. An in-depth evaluation of extraction protocols for characterizing the metabolome of the hepatobiliary fluke Fasciola hepatica, using ultra-performance liquid chromatography (UPLC) and capillary electrophoresis (CE) coupled with mass spectrometry (MS), is presented. The spectrometric methods were characterized by performance, and metrics of merit were established, including precision, mass accuracy, selectivity, sensitivity, and platform stability. Although a core group of molecules was common to all methods, each platform contributed a unique set, whereby 142 metabolites out of 14,724 features were identified. A mixture design revealed that a chloroform:methanol:water proportion of 15:59:26 was globally the best composition for metabolite extraction across the UPLC-MS and CE-MS platforms, accommodating different columns and ionization modes. Despite the general assumption that platform-adapted protocols are necessary for effective metabotype characterization, we show that an appropriately designed single extraction procedure is able to fit the requirements of all technologies. This may constitute a paradigm shift in developing efficient protocols for high-throughput metabolite profiling with more general analytical applicability.
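
As a hedged illustration of the mixture-design analysis mentioned above (made-up design points and yields, not the study's data), a Scheffé quadratic model can be fitted to extraction yields measured at different chloroform:methanol:water proportions and evaluated over the ternary simplex to locate the best-performing composition:

```python
import numpy as np

def scheffe_quadratic(X):
    """Design matrix for a Scheffé quadratic mixture model in 3 components:
    y = b1*x1 + b2*x2 + b3*x3 + b12*x1*x2 + b13*x1*x3 + b23*x2*x3."""
    x1, x2, x3 = X.T
    return np.column_stack([x1, x2, x3, x1 * x2, x1 * x3, x2 * x3])

# Hypothetical chloroform/methanol/water proportions and measured yields.
X = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
              [0.5, 0.5, 0], [0.5, 0, 0.5], [0, 0.5, 0.5],
              [1/3, 1/3, 1/3]])
y = np.array([40.0, 55.0, 20.0, 70.0, 35.0, 50.0, 65.0])

coef, *_ = np.linalg.lstsq(scheffe_quadratic(X), y, rcond=None)

# Evaluate the fitted surface on a grid over the simplex and report the optimum.
grid = [(a, b, 1 - a - b) for a in np.linspace(0, 1, 101)
        for b in np.linspace(0, 1 - a, 101)]
grid = np.array([g for g in grid if g[2] >= 0])
best = grid[np.argmax(scheffe_quadratic(grid) @ coef)]
print("Predicted best chloroform:methanol:water =", np.round(best, 2))
```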

Relevance:

30.00%

Publisher:

Abstract:

Cohabitation for 14 days with Ehrlich tumor-bearing mice was shown to increase locomotor activity, decrease hypothalamic noradrenaline (NA) levels, increase NA turnover, decrease innate immune responses, and decrease the animals' resistance to tumor growth. Cage mates of B16F10 melanoma-bearing mice were also reported to show neuroimmune changes. Chemosignals released by Ehrlich tumor-bearing mice have been reported to be relevant for the neutrophil activity changes induced by cohabitation. The present experiment was designed to further analyze the effects of odor cues on the neuroimmune changes induced by cohabitation with a sick cage mate. Specifically, the relevance of chemosignals released by an Ehrlich tumor-bearing mouse was assessed for the following: behavior (open field and plus maze); hypothalamic NA levels and turnover; plasma adrenaline (A) and NA levels; and host resistance to tumor growth. To this end, devices specifically constructed to analyze the influence of chemosignals released from tumor-bearing mice were employed. The results show that deprivation of odor cues released by Ehrlich tumor-bearing mice reversed the behavioral, neurochemical and immune changes induced by cohabitation. Mice use scents for intraspecies communication in many social contexts. Tumors produce volatile organic compounds that are released through breath, sweat, and urine. Our results strongly suggest that volatile compounds released by Ehrlich tumor-injected mice are perceived by their conspecifics, inducing the neuroimmune changes reported for cohabitation with a sick companion.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: The aim of this prospective clinical study was to investigate the cephalometric changes produced by bonded spurs associated with high-pull chincup therapy in children with Angle Class I malocclusion and anterior open bite. Methods: Thirty patients with an initial mean age of 8.14 years and a mean anterior open bite of -3.93 mm were treated with bonded spurs associated with chincup therapy for 12 months. An untreated control group of 30 subjects with the same malocclusion, an initial mean age of 8.36 years, and a mean anterior open bite of -3.93 mm was followed for 12 months for comparison. Student t tests were used for intergroup comparisons. Results: The treated group demonstrated a significantly greater decrease of the gonial angle, as well as greater increases in overbite, palatal tipping of the maxillary incisors, and vertical dentoalveolar development of the maxillary and mandibular incisors, compared with the control group. Conclusions: The association of bonded spurs with high-pull chincup therapy was efficient for correction of the open bite in 86.7% of the patients, with a 5.23 mm (SD ±1.69) overbite increase. (Am J Orthod Dentofacial Orthop 2012;142:487-93)
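
The intergroup comparisons above used Student t tests; a minimal SciPy sketch of such a comparison is shown below, with made-up overbite-change values rather than the study's data:

```python
import numpy as np
from scipy import stats

# Hypothetical overbite changes (mm) after 12 months; not the study's data.
treated = np.array([5.1, 4.8, 6.2, 5.5, 4.9, 5.7, 6.0, 4.6])
control = np.array([0.4, -0.2, 0.1, 0.6, 0.0, 0.3, -0.1, 0.5])

# Independent two-sample Student t test for the intergroup comparison.
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```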

Relevance:

30.00%

Publisher:

Abstract:

Background: Heavy-flavor production in p+p collisions is a good test of perturbative quantum chromodynamics (pQCD) calculations. Modification of heavy-flavor production in heavy-ion collisions relative to binary-collision scaling from p+p results, quantified with the nuclear-modification factor (R_AA), provides information on both cold- and hot-nuclear-matter effects. Midrapidity heavy-flavor R_AA measurements at the Relativistic Heavy Ion Collider have challenged parton-energy-loss models and resulted in upper limits on the viscosity-to-entropy ratio that are near the quantum lower bound. Such measurements have not been made in the forward-rapidity region. Purpose: Determine transverse-momentum (p_T) spectra and the corresponding R_AA for muons from heavy-flavor meson decay in p+p and Cu+Cu collisions at √s_NN = 200 GeV and y = 1.65. Method: Results are obtained using the semileptonic decay of heavy-flavor mesons into negative muons. The PHENIX muon-arm spectrometers measure the p_T spectra of inclusive muon candidates. Backgrounds, primarily due to light hadrons, are determined with a Monte Carlo calculation using a set of input hadron distributions tuned to match measured hadron distributions in the same detector, and are statistically subtracted. Results: The charm-production cross section in p+p collisions at √s = 200 GeV, integrated over p_T and in the rapidity range 1.4 < y < 1.9, is found to be dσ_{cc̄}/dy = 0.139 ± 0.029 (stat) +0.051/-0.058 (syst) mb. This result is consistent with a perturbative fixed-order-plus-next-to-leading-log calculation within scale uncertainties and is also consistent with expectations based on the corresponding midrapidity charm-production cross section measured by PHENIX. The R_AA for heavy-flavor muons in Cu+Cu collisions is measured in three centrality bins for 1 < p_T < 4 GeV/c. Suppression relative to binary-collision scaling (R_AA < 1) increases with centrality. Conclusions: Within experimental and theoretical uncertainties, the measured charm yield in p+p collisions is consistent with state-of-the-art pQCD calculations. Suppression in central Cu+Cu collisions suggests the presence of significant cold-nuclear-matter effects and final-state energy loss.
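
For reference, the nuclear-modification factor used throughout the abstract has the standard definition below, comparing the heavy-ion yield with the binary-collision-scaled p+p yield:

```latex
R_{AA}(p_T) \;=\; \frac{1}{\langle N_{\mathrm{coll}} \rangle}\,
\frac{\left. dN/dp_T \right|_{AA}}{\left. dN/dp_T \right|_{pp}},
\qquad
R_{AA} = 1 \ \text{under binary-collision scaling}, \quad
R_{AA} < 1 \ \text{indicates suppression.}
```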

Relevance:

30.00%

Publisher:

Abstract:

Landfarm soils are employed in the bioremediation of industrial and petrochemical residues. This process exerts selective pressure toward microorganisms capable of degrading toxic compounds. Detailed description of taxa in these environments is difficult because the culture conditions required by unknown microorganisms are not established. A metagenomic approach permits identification of organisms without the need for culture. However, a DNA extraction step is first required, which can bias taxonomic representativeness and interfere with cloning steps by co-extracting interfering substances. We developed a simplified DNA extraction procedure coupled with metagenomic DNA amplification in an effort to overcome these limitations. The amplified sequences were used to generate a metagenomic data set, and its taxonomic and functional representativeness was evaluated against a data set built from DNA extracted by conventional methods. The simplified and optimized RAPD-based method of accessing metagenomic information provides better representativeness of the taxonomic and metabolic features of the environmental samples.

Relevance:

30.00%

Publisher:

Abstract:

Background: The ongoing efforts to sequence the honey bee genome require additional initiatives to define its transcriptome. Towards this end, we employed the Open Reading Frame ESTs (ORESTES) strategy to generate profiles for the life cycle of Apis mellifera workers. Results: Of the 5,021 ORESTES, 35.2% matched previously deposited Apis ESTs. The analysis of the remaining sequences defined a set of putative orthologs, the majority of which had their best-match hits with Anopheles and Drosophila genes. CAP3 assembly of the Apis ORESTES with the 15,500 already existing Apis ESTs generated 3,408 contigs. BLASTX comparison of these contigs with the protein sets of organisms representing distinct phylogenetic clades revealed a total of 1,629 contigs that Apis mellifera shares with different taxa. Most (41%) represent genes that are common to all taxa, another 21% are shared among metazoans (Bilateria), and 16% are shared only within the Insecta clade. A set of 23 putative genes presented a best match with human genes, many of which encode factors related to cell signaling/signal transduction. 1,779 contigs (52%) did not match any known sequence. Applying a correction factor deduced from a parallel analysis performed with Drosophila melanogaster ORESTES, we estimate that approximately half of these no-match EST contigs (22%) should represent Apis-specific genes. Conclusions: The versatile and cost-efficient ORESTES approach produced minilibraries for honey bee life cycle stages. Such information on central gene regions contributes to genome annotation and also lends itself to cross-transcriptome comparisons that reveal evolutionary trends in insect genomes.
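
As a hedged sketch of the cross-taxon comparison described above (hypothetical file names; not the authors' pipeline), tabular BLASTX output of the contigs against each reference protein set can be reduced to best hits and binned by taxon, assuming the standard 12-column -outfmt 6 layout:

```python
import csv
from collections import defaultdict

def best_hits(blast_tsv):
    """Keep the best (lowest e-value) hit per query from BLAST -outfmt 6."""
    best = {}
    with open(blast_tsv) as handle:
        for row in csv.reader(handle, delimiter="\t"):
            query, evalue = row[0], float(row[10])  # column 11 is the e-value
            if query not in best or evalue < best[query]:
                best[query] = evalue
    return set(best)

# Hypothetical per-taxon BLASTX result files for the assembled contigs.
taxa_files = {"Drosophila": "contigs_vs_dmel.tsv",
              "Anopheles": "contigs_vs_agam.tsv",
              "Homo": "contigs_vs_hsap.tsv"}

shared = defaultdict(set)
for taxon, path in taxa_files.items():
    for contig in best_hits(path):
        shared[contig].add(taxon)

# Contigs hit in every reference set versus contigs with no hit anywhere
# would feed the "common to all taxa" and "no-match" tallies in the text.
```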

Relevance:

30.00%

Publisher:

Abstract:

Background: The development of protocols for RNA extraction from paraffin-embedded samples facilitates gene expression studies on archival samples with known clinical outcome. Older samples are particularly valuable because they are associated with longer clinical follow-up. RNA extracted from formalin-fixed paraffin-embedded (FFPE) tissue is problematic due to chemical modifications and continued degradation over time. We compared the quantity and quality of RNA extracted by four different protocols from 14 ten-year-old and 14 recently archived (three- to ten-month-old) FFPE breast cancer tissues. Using three spin-column purification-based protocols and one magnetic-bead-based protocol, total RNA was extracted in triplicate, generating 336 RNA extraction experiments. RNA fragment size was assayed by reverse transcription-polymerase chain reaction (RT-PCR) for the housekeeping gene glucose-6-phosphate dehydrogenase (G6PD), testing primer sets designed to target RNA fragment sizes of 67 bp, 151 bp, and 242 bp. Results: Biologically useful RNA (minimum RNA integrity number, RIN, of 1.4) was extracted in at least one of three attempts of each protocol in 86-100% of the older and 100% of the recently archived (months-old) samples. Short RNA fragments up to 151 bp were assayable by RT-PCR for G6PD in all ten-year-old and months-old tissues tested, but none of the ten-year-old and only 43% of the months-old samples showed amplification when the targeted fragment was 242 bp. Conclusion: All protocols extracted RNA with a minimum RIN of 1.4 from ten-year-old FFPE samples. Gene expression of G6PD could be measured in all samples, old and recent, using RT-PCR primers designed for RNA fragments up to 151 bp. RNA quality from ten-year-old FFPE samples was similar to that extracted from months-old samples, but quantity and success rate were generally higher for the months-old group. We preferred the magnetic-bead-based protocol because of its speed and the higher quantity of extracted RNA, although it produced RNA of similar quality to the other protocols. If a chosen protocol fails to extract biologically useful RNA from a given sample in a first attempt, another attempt and then another protocol should be tried before excluding the case from molecular analysis.

Relevance:

30.00%

Publisher:

Abstract:

Background: Transformed Escherichia coli DH5-α cells carrying pGFPuv, induced by IPTG (isopropyl-β-D-thiogalactopyranoside), express the green fluorescent protein (gfpuv) during growth phases. For releasing gfpuv from the over-expressing cells, selective permeation by freezing/thawing/sonication cycles followed by three-phase partitioning (TPP) extraction was compared with direct application of TPP to the same E. coli culture. Material and Methods: Cultures (37°C, 100 rpm, 24 h; μ = 0.99-1.10 h⁻¹) of transformed (pGFP) Escherichia coli DH5-α expressing the green fluorescent protein (gfpuv; absorbance at 394 nm and emission at 509 nm) were sonicated in successive intervals of sonication (25 vibrations/pulse) to determine the maximum amount of gfpuv released from the cells. For selective permeation, the transformed, previously frozen (-75°C) cells were subjected to three freeze/thaw (-20°C; 0.83°C/min) cycles interlaid by sonication (3 pulses, 6 seconds, 25 vibrations). The intracellular permeate with gfpuv in extraction buffer (TE) solution (25 mM Tris-HCl, pH 8.0, 1 mM β-mercaptoethanol (β-ME), 0.1 mM PMSF) was subjected to the three-phase partitioning (TPP) method with t-butanol and 1.6 M ammonium sulfate. Sonication efficiency was also verified by applying it to cells previously treated by the TPP method. The intracellular releases were mixed and eluted through a methyl HIC column with a buffer solution (10 mM Tris-HCl, 10 mM EDTA, pH 8.0). Results: The maximum amount released from the cells by sonication was 327.67 μg gfpuv/mL (20.73 μg gfpuv/mg total proteins - BSA) after 9 min of treatment. Through selective permeation by three repeated freezing/thawing/sonication cycles applied to the cells, a comparable amount of 241.19 μg gfpuv/mL (29.74 μg gfpuv/mg BSA) was obtained. The specific mass range of gfpuv released from the same cultures by the three-phase partitioning (TPP) method, relative to total proteins, was higher, between 107.28 μg/mg and 135.10 μg/mg. Conclusions: Selective permeation of gfpuv by freezing/thawing/sonication followed by TPP separation yielded an amount of gfpuv equivalent to that extracted from the cells directly by TPP, although the selective permeation extracts showed better elution through the HIC column.

Relevance:

30.00%

Publisher:

Abstract:

Background: Atherosclerosis causes millions of deaths and billions in healthcare expenses around the world every year. Intravascular Optical Coherence Tomography (IVOCT) is a medical imaging modality that provides high-resolution cross-sectional images of coronary arteries. Nonetheless, quantitative information can only be obtained with segmentation; consequently, more adequate diagnostics, therapies and interventions can be provided. Since it is a relatively new modality, many segmentation methods available in the literature for other modalities could be successfully applied to IVOCT images, improving accuracy and usefulness. Method: An automatic lumen segmentation approach, based on the Wavelet Transform and Mathematical Morphology, is presented. The methodology is divided into three main parts. First, the preprocessing stage attenuates undesirable information and enhances important information. Second, in the feature extraction block, the wavelet transform is combined with an adapted version of Otsu thresholding; hence, tissue information is discriminated and binarized. Finally, binary morphological reconstruction improves the binary information and constructs the binary lumen object. Results: The evaluation was carried out by segmenting 290 challenging images from human and pig coronaries and rabbit iliac arteries; the outcomes were compared with gold standards made by experts. The resulting accuracy was: True Positive (%) = 99.29 ± 2.96, False Positive (%) = 3.69 ± 2.88, False Negative (%) = 0.71 ± 2.96, Max False Positive Distance (mm) = 0.1 ± 0.07, Max False Negative Distance (mm) = 0.06 ± 0.1. Conclusions: By segmenting a number of IVOCT images with various features, the proposed technique proved to be robust and more accurate than published studies; in addition, the method is completely automatic, providing a new tool for IVOCT segmentation.
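
A minimal Python sketch of the same general recipe (wavelet denoising, Otsu thresholding, morphological post-processing) on a Cartesian grayscale IVOCT frame; the wavelet, thresholds, and the centre-seeded region selection are assumptions for illustration, not the authors' implementation, which relies on binary morphological reconstruction:

```python
import numpy as np
import pywt
from skimage import filters, morphology, measure

def segment_lumen(image):
    """Rough lumen mask: wavelet shrinkage -> Otsu -> keep the dark region
    around the image centre -> fill small holes.  Illustrative only."""
    # 1) Wavelet shrinkage to attenuate speckle while keeping edges.
    coeffs = pywt.wavedec2(image, "db4", level=2)
    approx, details = coeffs[0], coeffs[1:]
    details = [tuple(pywt.threshold(d, np.std(d), mode="soft") for d in lvl)
               for lvl in details]
    denoised = pywt.waverec2([approx] + details, "db4")
    denoised = denoised[:image.shape[0], :image.shape[1]]

    # 2) Global Otsu threshold separates bright tissue from the dark lumen.
    tissue = denoised > filters.threshold_otsu(denoised)

    # 3) Morphological cleanup: close gaps in the tissue ring, then keep the
    #    dark connected component at the image centre (assumed to be the lumen).
    lumen = ~morphology.binary_closing(tissue, morphology.disk(3))
    labels = measure.label(lumen)
    center_label = labels[image.shape[0] // 2, image.shape[1] // 2]
    if center_label:
        lumen = labels == center_label
    return morphology.remove_small_holes(lumen, area_threshold=256)
```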

Relevance:

30.00%

Publisher:

Abstract:

Background: The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make it feasible, we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is to promote the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. Results: We have implemented an extension of Chado - the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through the use of ontologies; the application level, to manage clinical databases, ontologies, and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built based on the Entity-Attribute-Value (EAV) model. We also proposed a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a factual clinical research database. Clinical and demographic data, as well as biomaterial data, were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of a model to describe the legacy clinical data, based on an ontology; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as to other applications. Conclusions: Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information and are not integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information from different “omics” technologies with patients' clinical and socio-demographic data. This framework should present some features: flexibility, compression and robustness. The experiments accomplished with a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
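
A hedged sketch of the Entity-Attribute-Value idea the Clinical Module builds on, with illustrative table and column names (not Chado's actual schema), using Python's built-in sqlite3: new clinical fields become vocabulary rows rather than new columns.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Illustrative EAV layout: one row per (patient, attribute) pair,
    -- with attributes drawn from a controlled vocabulary (ontology terms).
    CREATE TABLE attribute (id INTEGER PRIMARY KEY, term TEXT UNIQUE);
    CREATE TABLE patient_value (
        patient_id INTEGER,
        attribute_id INTEGER REFERENCES attribute(id),
        value TEXT
    );
""")

# New clinical fields are added as vocabulary terms, not as new columns.
conn.executemany("INSERT INTO attribute(term) VALUES (?)",
                 [("tumor_site",), ("smoking_status",), ("age_at_diagnosis",)])
rows = [(1, "tumor_site", "larynx"), (1, "smoking_status", "former"),
        (2, "tumor_site", "oral cavity"), (2, "age_at_diagnosis", "57")]
conn.executemany("""
    INSERT INTO patient_value(patient_id, attribute_id, value)
    SELECT ?, id, ? FROM attribute WHERE term = ?
""", [(pid, val, term) for pid, term, val in rows])

# Read one patient's record back as attribute/value pairs.
for term, value in conn.execute("""
        SELECT a.term, v.value FROM patient_value v
        JOIN attribute a ON a.id = v.attribute_id WHERE v.patient_id = 1"""):
    print(term, "=", value)
```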

Relevance:

30.00%

Publisher:

Abstract:

Background: Recent advances in medical and biological technology have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results: This paper describes a formal approach to address this challenge through the implementation of a genetic-testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH), in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent and validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control-flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: (1) process scalability, achieved through the relational database implementation, and (2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic tests without requiring any knowledge of business process notation or process algebra. Conclusions: This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have demonstrated the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end-user interfaces.
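
A minimal illustration of how ACP-style sequential and alternative composition can describe a genetic-testing workflow stored as data; the class names and the example workflow are assumptions, not the CEGH implementation:

```python
from dataclasses import dataclass
from typing import Union

# Tiny ACP-flavored process terms: atomic actions, sequential composition (·)
# and alternative composition (+).  Purely illustrative, not the CEGH model.
@dataclass
class Action:
    name: str

@dataclass
class Seq:
    left: "Process"
    right: "Process"

@dataclass
class Alt:
    left: "Process"
    right: "Process"

Process = Union[Action, Seq, Alt]

def traces(p: Process) -> set[tuple[str, ...]]:
    """Enumerate the complete action traces a process term allows."""
    if isinstance(p, Action):
        return {(p.name,)}
    if isinstance(p, Seq):
        return {a + b for a in traces(p.left) for b in traces(p.right)}
    return traces(p.left) | traces(p.right)  # Alt

# A toy test: extract DNA, then either sequence or run an array, then report.
workflow = Seq(Action("extract_dna"),
               Seq(Alt(Action("sanger_sequencing"), Action("snp_array")),
                   Action("issue_report")))
for trace in sorted(traces(workflow)):
    print(" -> ".join(trace))
```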