21 results for open data capabilities
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
The Amazonian lowlands include large patches of open vegetation which contrast sharply with the rainforest, and the origin of these patches has been debated. This study focuses on a large area of open vegetation in northern Brazil, where δ13C and, in some instances, C/N analyses of the organic matter preserved in late Quaternary sediments were used to achieve floristic reconstructions over time. The main goal was to determine when the modern open vegetation started to develop in this area. The variability in δ13C data derived from nine cores ranges from -32.2 to -19.6‰, but with nearly 60% of the data above -26.5‰. The most enriched values were detected only in ecotone and open vegetated areas. The development of open vegetation communities was asynchronous, varying between estimated ages of 6400 and 3000 cal a BP. This suggests that the origin of the studied patches of open vegetation might be linked to the sedimentary dynamics of a late Quaternary megafan system. As sedimentation ended, this vegetation type became established over the megafan surface. In addition, the data presented here show that the presence of C4 plants must be used carefully as a proxy to interpret dry paleoclimatic episodes in Amazonian areas. Copyright (c) 2012 John Wiley & Sons, Ltd.
Abstract:
The gecko genus Phyllopezus occurs across South America's open biomes: Cerrado, Seasonally Dry Tropical Forests (SDTF, including Caatinga), and Chaco. We generated a multi-gene dataset and estimated phylogenetic relationships among described Phyllopezus taxa and related species. We included exemplars from both described Phyllopezus pollicaris subspecies, P. p. pollicaris and P. p. przewalskii. Phylogenies from the concatenated data as well as species trees constructed from individual gene trees were largely congruent. All phylogeny reconstruction methods showed Bogertia lutzae as the sister species of Phyllopezus maranjonensis, rendering Phyllopezus paraphyletic. We synonymized the monotypic genus Bogertia with Phyllopezus to maintain a taxonomy that is isomorphic with phylogenetic history. We recovered multiple, deeply divergent, cryptic lineages within P. pollicaris. These cryptic lineages possessed mtDNA distances equivalent to distances among other gekkotan sister taxa. Described P. pollicaris subspecies are not reciprocally monophyletic and current subspecific taxonomy does not accurately reflect evolutionary relationships among cryptic lineages. We highlight the conservation significance of these results in light of the ongoing habitat loss in South America's open biomes. (C) 2011 Elsevier Inc. All rights reserved.
Abstract:
Juvenile nasopharyngeal angiofibroma is a rare benign vascular tumor of the nasopharynx. Although the treatment of choice is surgery, there is no consensus on the best approach. Aim: To compare surgical time and intraoperative transfusion requirements in patients undergoing endoscopic versus open/combined surgery, and to relate the need for intraoperative transfusion to the interval between embolization and surgery. Material and Methods: A descriptive, analytical, retrospective study with a quantitative approach, conducted in the Otorhinolaryngology department of a teaching hospital. Thirty-seven patients with angiofibroma who underwent surgical treatment were analyzed, with data obtained from medical records. Data were analyzed with the Fisher-Freeman-Halton and Games-Howell tests; p < 0.05 was considered significant. Study design: Historical cohort study with cross-sectional design. Results: The endoscopic approach had a shorter operative time (p < 0.0001). Less intraoperative transfusion was needed when embolization was performed on the fourth preoperative day. Conclusion: These findings suggest that this interval would be ideal for performing embolization, and that endoscopic surgery, by demanding less operative time, would be associated with lower morbidity. This study, however, could not show which group of patients, according to tumor stage, would benefit from a specific technique.
Abstract:
Metronidazole is a BCS (Biopharmaceutics Classification System) class 1 drug, traditionally considered the drug of choice in the treatment of infections caused by protozoa and anaerobic microorganisms. This study aimed to evaluate bioequivalence between 2 different marketed 250 mg metronidazole immediate-release tablets. A randomized, open-label, 2 x 2 crossover study was performed in healthy Brazilian volunteers under fasting conditions with a 7-day washout period. The formulations were administered as a single oral dose and blood was sampled over 48 h. Metronidazole plasma concentrations were determined by a liquid chromatography mass spectrometry (LC-MS/MS) method. The plasma concentration vs. time profile was generated for each volunteer and the pharmacokinetic parameters C-max, T-max, AUC(0-t), AUC(0-infinity), k(e), and t(1/2) were calculated using a noncompartmental model. Bioequivalence between the pharmaceutical formulations was determined by calculating 90% CIs (confidence intervals) for the test/reference ratios of C-max, AUC(0-t), and AUC(0-infinity) using log-transformed data. 22 healthy volunteers (11 men, 11 women; mean (SD) age, 28 (6.5) years [range, 21-45 years]; mean (SD) weight, 66 (9.3) kg [range, 51-81 kg]; mean (SD) height, 169 (6.5) cm [range, 156-186 cm]) were enrolled in and completed the study. The 90% CIs for C-max (0.92-1.06), AUC(0-t) (0.97-1.02), and AUC(0-infinity) (0.97-1.03) for the test and reference products fell within the 0.80-1.25 interval adopted by most regulatory agencies, including the Brazilian agency ANVISA. No clinically significant adverse effects were reported. After the pharmacokinetic analysis, it was concluded that the test 250 mg metronidazole formulation is bioequivalent to the reference product according to the Brazilian agency's requirements.
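The bioequivalence decision rule described above can be sketched numerically: a 90% CI for the test/reference geometric mean ratio is built from per-subject log-scale differences and compared against the 0.80-1.25 limits. All values below are hypothetical, and the t quantile is hardcoded for this sample size (in practice scipy.stats.t.ppf would be used).

```python
import math
import statistics

# Hypothetical log-transformed Cmax values for the same volunteers
# under test (T) and reference (R) formulations.
ln_test = [1.72, 1.80, 1.65, 1.90, 1.77, 1.84, 1.70, 1.88]
ln_ref  = [1.70, 1.83, 1.68, 1.86, 1.75, 1.80, 1.72, 1.85]

# Per-subject differences on the log scale
diffs = [t - r for t, r in zip(ln_test, ln_ref)]
n = len(diffs)
mean_d = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(n)

# Two-sided 90% CI uses t(0.95, n-1); for df = 7 this is about 1.895.
t_crit = 1.895
lo = math.exp(mean_d - t_crit * se)
hi = math.exp(mean_d + t_crit * se)

bioequivalent = 0.80 <= lo and hi <= 1.25
print(f"90% CI for T/R geometric mean ratio: ({lo:.3f}, {hi:.3f})")
print("Bioequivalent:", bioequivalent)
```

Exponentiating the CI bounds of the log-scale differences converts them back to a ratio of geometric means, which is why the regulatory limits are the multiplicative pair 0.80-1.25 rather than a symmetric interval.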
Abstract:
A semi-autonomous unmanned underwater vehicle (UUV), named LAURS, is being developed at the Laboratory of Sensors and Actuators at the University of Sao Paulo. The vehicle has been designed to provide inspection and intervention capabilities in specific missions of deep water oil fields. In this work, a method for modeling and identification of the yaw motion dynamics of an open-frame underwater vehicle is presented. Using an on-board low-cost magnetic compass sensor, the method is based on an uncoupled 1-DOF (degree of freedom) dynamic system equation and on the application of the integral method, i.e. the classical least squares algorithm applied to the integral form of the dynamic system equations. Experimental trials with the actual vehicle have been performed in a test tank and diving pool. During these experiments, the thrusters responsible for yaw motion are driven by sinusoidal voltage signal profiles. An assessment of the feasibility of the method reveals that the estimated dynamic system models are more reliable when slow and small sinusoidal voltage signal profiles are considered, i.e. profiles with larger periods and with relatively small amplitude and offset.
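The integral method can be sketched as follows, assuming a simple 1-DOF yaw model r_dot = -a*r + b*u with hypothetical parameters (not the LAURS values). Integrating both sides gives r(t) - r(0) = -a∫r + b∫u, which is linear in the unknowns, so classical least squares recovers them from sampled data:

```python
import numpy as np

# Hypothetical yaw-rate dynamics: r_dot = -a*r + b*u
a_true, b_true = 0.8, 0.5

dt = 0.01
t = np.arange(0.0, 20.0, dt)
u = 2.0 * np.sin(0.5 * t) + 0.5      # slow sinusoidal thruster voltage
r = np.zeros_like(t)                 # yaw rate, simulated with Euler steps
for k in range(len(t) - 1):
    r[k + 1] = r[k] + dt * (-a_true * r[k] + b_true * u[k])

# Integral form: r(t) - r(0) = -a * int(r) + b * int(u)
int_r = np.concatenate(([0.0], np.cumsum((r[1:] + r[:-1]) / 2) * dt))
int_u = np.concatenate(([0.0], np.cumsum((u[1:] + u[:-1]) / 2) * dt))

# Classical least squares on the integral equations
A = np.column_stack((-int_r, int_u))
y = r - r[0]
(a_hat, b_hat), *_ = np.linalg.lstsq(A, y, rcond=None)
print(a_hat, b_hat)
```

The appeal of the integral form is that it avoids numerically differentiating noisy sensor data; only cumulative sums of the measured signals are needed.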
Abstract:
Dimensionality reduction is employed for visual data analysis as a way of obtaining reduced spaces for high-dimensional data or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach that handles both dimensionality reduction and visualization of high-dimensional data, taking the user's input into account. It employs Partial Least Squares (PLS), a statistical tool that retrieves latent spaces focusing on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be exhibited using various existing visualization techniques. The training data is important for coding the user's knowledge into the loop. However, this work also devises a strategy for calculating PLS reduced spaces when no training data is available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.
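A minimal sketch of the PLS idea on synthetic data: weight vectors that maximize covariance with the class label are extracted with deflation, yielding a low-dimensional embedding that favors class discriminability. This is a toy PLS1-style implementation under assumed data, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical high-dimensional data: 40 samples, 50 features, 2 classes.
n, p = 40, 50
y = np.array([0] * 20 + [1] * 20, dtype=float)
X = rng.normal(size=(n, p))
X[:, 0] += 3 * y          # only feature 0 discriminates the classes

# Center data and labels
Xc = X - X.mean(axis=0)
yc = y - y.mean()

# Each PLS weight vector maximizes covariance with the label; two
# components are extracted with deflation, giving a 2D embedding.
scores = np.zeros((n, 2))
for comp in range(2):
    w = Xc.T @ yc
    w /= np.linalg.norm(w)
    t = Xc @ w
    scores[:, comp] = t
    pvec = Xc.T @ t / (t @ t)       # loading vector
    Xc = Xc - np.outer(t, pvec)     # deflate before the next component

# Class separation along the first PLS component
sep = scores[y == 1, 0].mean() - scores[y == 0, 0].mean()
print(sep)
```

Unlike PCA, which maximizes variance regardless of labels, the first component here is steered by the labels, which is how the training set codes the user's knowledge into the projection.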
Abstract:
This article describes the development of a visual stimulus generator to be used in neuroscience experiments with invertebrates such as flies. The experiment consists of the visualization of a fixed image that is displaced horizontally according to the stimulus data. The system is capable of displaying 640 x 480 pixels with 256 intensity levels at 200 frames per second (FPS) on conventional raster monitors. To double the possible horizontal positions from 640 to 1280, a novel technique introducing artificial inter-pixel steps is presented. The implementation uses two video frame buffers, each containing a distinct view of the desired image pattern. This generates a visual effect capable of doubling the horizontal positioning capabilities of the visual stimulus generator, allowing more precise and more contiguous movements. (C) 2011 Elsevier Inc. All rights reserved.
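The two-buffer technique can be illustrated in one dimension: the second buffer stores the same pattern shifted by half a pixel (adjacent-pixel averages), so alternating between buffers doubles the number of distinct horizontal positions. The pattern and helper below are illustrative assumptions, not the device's firmware:

```python
# Buffer A holds the original pattern; buffer B holds the half-pixel view.
pattern = [0, 0, 255, 255, 0, 0, 0, 0]

buffer_a = pattern[:]
buffer_b = [(pattern[i] + pattern[i + 1]) // 2 for i in range(len(pattern) - 1)]

def frame_at(position_half_pixels):
    """Return the scanline to display for a position in half-pixel units:
    even positions come from buffer A, odd ones from buffer B."""
    shift, sub = divmod(position_half_pixels, 2)
    buf = buffer_a if sub == 0 else buffer_b
    return buf[shift:]  # whole-pixel horizontal displacement

print(frame_at(4))  # buffer A shifted by 2 pixels
print(frame_at(5))  # buffer B shifted by 2 pixels: the half-step in between
```

Since each buffer only ever scrolls by whole pixels, no extra video bandwidth is needed; the intermediate positions come entirely from the precomputed averaged view.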
Abstract:
Statistical methods have been widely employed to assess the capabilities of credit scoring classification models in order to reduce the risk of wrong decisions when granting credit facilities to clients. The predictive quality of a classification model can be evaluated based on measures such as sensitivity, specificity, predictive values, accuracy, correlation coefficients and information theoretical measures, such as relative entropy and mutual information. In this paper we analyze the performance of a naive logistic regression model (Hosmer & Lemeshow, 1989) and a logistic regression with state-dependent sample selection model (Cramer, 2004) applied to simulated data. Also, as a case study, the methodology is illustrated on a data set extracted from a Brazilian bank portfolio. Our simulation results revealed that there is no statistically significant difference in terms of predictive capacity between the naive logistic regression models and the logistic regression with state-dependent sample selection models. However, there is a strong difference between the distributions of the estimated default probabilities from these two statistical modeling techniques, with the naive logistic regression models always underestimating such probabilities, particularly in the presence of balanced samples. (C) 2012 Elsevier Ltd. All rights reserved.
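The basic predictive-quality measures mentioned above can be sketched directly from a confusion matrix; the labels and predictions below are hypothetical, not the bank portfolio data:

```python
# Labels: 1 = default, 0 = non-default (hypothetical scoring outcomes).
actual    = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]
predicted = [1, 1, 1, 0, 0, 0, 0, 0, 1, 0]

tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)

sensitivity = tp / (tp + fn)        # proportion of defaults caught
specificity = tn / (tn + fp)        # proportion of non-defaults kept
accuracy = (tp + tn) / len(actual)
print(sensitivity, specificity, accuracy)
```

Note that these threshold-based measures can agree across models (the paper's first finding) even when the underlying probability estimates differ substantially (its second finding), since only the ranking around the cutoff matters to them.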
Abstract:
Abstract Background With the development of DNA hybridization microarray technologies, it is nowadays possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses. Therefore, choosing a suitable approach for normalization can be critical, deserving judicious consideration. Results Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods which have yet to be used for normalization, namely Kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. The results indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion In light of our results, Support Vector Regression is favored for microarray normalization due to its robustness in estimating the normalization curve.
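To illustrate why robustness matters when estimating a normalization curve, the sketch below uses a running median as a simple robust stand-in for the regression fits compared above (a full SVR fit needs a QP solver and is not reproduced here); like SVR, it shrugs off a single strong outlier that would drag a mean-based fit. All signals are synthetic:

```python
import statistics

# Synthetic intensity-dependent dye bias with one strong outlier spot.
intensities = list(range(100))                       # average log-intensity A
bias = [0.01 * a for a in intensities]               # true bias in the M values
m_values = [b + (0.02 * (i % 5) - 0.04) for i, b in zip(intensities, bias)]
m_values[50] = 10.0                                  # outlier spot

def running_median(xs, window=11):
    """Robust estimate of the normalization curve."""
    half = window // 2
    out = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        out.append(statistics.median(xs[lo:hi]))
    return out

curve = running_median(m_values)
normalized = [m - c for m, c in zip(m_values, curve)]
# The estimated curve near the outlier stays close to the true bias:
print(abs(curve[50] - bias[50]))
```

A mean-based smoother over the same window would be pulled roughly 0.9 units toward the outlier; the median-based curve moves by about 0.01, which is the behavior the benchmark attributes to SVR.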
Abstract:
Abstract Background The search for enriched (aka over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for a system-level analysis. This procedure tries to summarize the information focussing on classification designs such as Gene Ontology, KEGG pathways, and so on, instead of focussing on individual genes. Although it is well known in statistics that association and significance are distinct concepts, only the former approach has been used to deal with the ontology term enrichment problem. Results BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source-code is freely available at http://blasto.iq.usp.br/~tkoide/BayGO in three versions: Linux, which can be easily incorporated into pre-existent pipelines; Windows, to be controlled interactively; and as a web-tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses. Conclusion The Bayesian model accounts for the fact that, eventually, not all the genes from a given category are observable in microarray data due to low intensity signal, quality filters, genes that were not spotted and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression, instead of working only with the common significance analysis.
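For contrast with BayGO's association-based approach, the classical significance-based enrichment analysis it moves beyond is typically a hypergeometric tail test. A minimal sketch with hypothetical gene counts (this is the standard approach, not BayGO's Bayesian model):

```python
from math import comb

def hypergeom_enrichment_p(total_genes, term_genes, selected, term_selected):
    """One-sided p-value: probability of drawing at least `term_selected`
    term-annotated genes in a random sample of `selected` genes."""
    p = 0.0
    upper = min(term_genes, selected)
    for k in range(term_selected, upper + 1):
        p += (comb(term_genes, k) * comb(total_genes - term_genes, selected - k)
              / comb(total_genes, selected))
    return p

# Hypothetical numbers: 1000 genes on the array, 40 annotated with a term,
# 50 differentially expressed, 10 of which carry the term.
p = hypergeom_enrichment_p(1000, 40, 50, 10)
print(p)
```

Under the null, about 2 of the 50 selected genes would carry the term, so observing 10 yields a tiny p-value; what this test cannot say, and what an association measure adds, is how strong the relationship between the term and differential expression actually is.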
Abstract:
Abstract Background One goal of gene expression profiling is to identify signature genes that robustly distinguish different types or grades of tumors. Several tumor classifiers based on expression profiling have been proposed using the microarray technique. Due to important differences in the probabilistic models of the microarray and SAGE technologies, it is important to develop suitable techniques to select specific genes from SAGE measurements. Results A new framework to select specific genes that distinguish different biological states based on the analysis of SAGE data is proposed. The new framework applies the bolstered error for the identification of strong genes that separate the biological states in a feature space defined by the gene expression of a training set. Credibility intervals defined from a probabilistic model of SAGE measurements are used to identify, among all gene groups selected by the strong genes method, the genes that distinguish the different states with more reliability. A score that takes into account the credibility and the bolstered error values is proposed to rank the considered gene groups. Results obtained using SAGE data from gliomas are presented, corroborating the introduced methodology. Conclusion The model representing counting data, such as SAGE, provides additional statistical information that allows a more robust analysis. The additional statistical information provided by the probabilistic model is incorporated in the methodology described in the paper. The introduced method is suitable to identify signature genes that lead to a good separation of the biological states using SAGE, and may be adapted for other counting methods such as Massive Parallel Signature Sequencing (MPSS) or the recent Sequencing-By-Synthesis (SBS) technique. Some of the genes identified by the proposed method may be useful to generate classifiers.
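The bolstered error can be sketched in one dimension: a Gaussian kernel is placed on each training point and the error is the probability mass that falls on the wrong side of the decision threshold, giving a smoother estimate than plain resubstitution. The data and kernel width below are hypothetical:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def bolstered_error_1d(points, labels, threshold, sigma=0.5):
    """Bolstered resubstitution error of the 1-D rule 'predict 1 if
    x > threshold', with a Gaussian bolstering kernel on each point."""
    total = 0.0
    for x, y in zip(points, labels):
        mass_below = phi((threshold - x) / sigma)
        # A class-1 point errs with the mass below the threshold,
        # a class-0 point with the mass above it.
        total += mass_below if y == 1 else 1.0 - mass_below
    return total / len(points)

# Hypothetical expression of one gene in two biological states:
xs = [0.1, 0.3, 0.5, 1.6, 1.8, 2.0]
ys = [0,   0,   0,   1,   1,   1]

resub = sum(1 for x, y in zip(xs, ys) if (x > 1.0) != (y == 1)) / len(xs)
bolstered = bolstered_error_1d(xs, ys, threshold=1.0)
print(resub, bolstered)
```

Plain resubstitution reports zero error on this perfectly separated sample, which is overly optimistic for small training sets; the bolstered estimate stays positive because points near the threshold still carry error mass.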
Abstract:
Abstract Background Transcript enumeration methods such as SAGE, MPSS, and sequencing-by-synthesis EST "digital northern" are important high-throughput techniques for digital gene expression measurement. Like other counting or voting processes, these measurements constitute compositional data, which exhibit properties particular to the simplex space, where the summation of the components is constrained. These properties are not present in regular Euclidean spaces, on which hybridization-based microarray data is often modeled. Therefore, pattern recognition methods commonly used for microarray data analysis may be non-informative for the data generated by transcript enumeration techniques, since they ignore certain fundamental properties of this space. Results Here we present a software tool, Simcluster, designed to perform clustering analysis for data on the simplex space. We present Simcluster as a stand-alone command-line C package and as a user-friendly on-line tool. Both versions are available at: http://xerad.systemsbiology.net/simcluster. Conclusion Simcluster is designed in accordance with a well-established mathematical framework for compositional data analysis, which provides principled procedures for dealing with the simplex space, and is thus applicable in a number of contexts, including enumeration-based gene expression data.
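The compositional framework can be sketched via the centered log-ratio (clr) transform, under which the Aitchison distance on the simplex becomes ordinary Euclidean distance; this is an illustrative sketch of the underlying mathematics, not Simcluster's actual C implementation:

```python
import math

def clr(counts):
    """Centered log-ratio transform of a composition given as counts."""
    total = sum(counts)
    props = [c / total for c in counts]
    g = math.exp(sum(math.log(p) for p in props) / len(props))  # geometric mean
    return [math.log(p / g) for p in props]

def aitchison_distance(x, y):
    """Euclidean distance between clr-transformed compositions."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(clr(x), clr(y))))

# Two libraries with identical transcript proportions but different
# sequencing depths are the same point on the simplex:
d_same = aitchison_distance([10, 20, 70], [100, 200, 700])
# A library with different proportions is genuinely distant:
d_diff = aitchison_distance([10, 20, 70], [70, 20, 10])
print(d_same, d_diff)
```

A Euclidean distance on raw counts would report the first pair as far apart purely because of sequencing depth, which is exactly the kind of artifact that makes microarray-style methods non-informative for enumeration data.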
Abstract:
Abstract Background Several mathematical and statistical methods have been proposed in the last few years to analyze microarray data. Most of those methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use those methods in their research. Here we present a user-friendly toolbox which allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories and to researchers with no background in Mathematics and Computer Science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms, previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in Molecular Biology.
Abstract:
Abstract Background Smallpox is a lethal disease that was endemic in many parts of the world until eradicated by massive immunization. Due to its lethality, there are serious concerns about its use as a bioweapon. Here we analyze publicly available microarray data to further understand the survival of smallpox-infected macaques, using systems biology approaches. Our goal is to improve the knowledge about the progression of this disease. Results We used KEGG pathway annotations to define groups of genes (or modules), and subsequently compared them to macaque survival times. This technique provided additional insights about the host response to this disease, such as increased expression of cytokines and ECM receptors in the individuals with higher survival times. These results may indicate that these gene groups influence an effective response from the host to smallpox. Conclusion Macaques with higher survival times clearly express some specific pathways previously unidentified using regular gene-by-gene approaches. Our work also shows how third-party analysis of public datasets can be important to support new hypotheses for relevant biological problems.
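The module-versus-survival comparison can be sketched as a correlation between module-level mean expression and survival time; the numbers below are hypothetical, not the macaque data:

```python
import math

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical mean expression of a KEGG-defined module (e.g. cytokines)
# per animal, paired with survival time in days.
module_expr = [2.1, 2.4, 3.0, 3.3, 3.9, 4.2]
survival = [4, 5, 7, 8, 11, 12]

r = pearson(module_expr, survival)
print(r)
```

Averaging over a module before correlating is what lets weak but coordinated shifts across many member genes surface, even when no single gene passes a gene-by-gene test.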
Abstract:
Abstract Background A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to perform an analysis of gene regulatory interactions using the Boolean network model and time-series data. The Boolean network considered is restricted in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full search approach. The problem is modeled as a Constraint Satisfaction Problem (CSP) and CSP techniques are used to solve it. Results We applied the proposed algorithm to two data sets. First, we used an artificial dataset obtained from a model of the budding yeast cell cycle. The second data set is derived from experiments performed using HeLa cells. The results show that some interactions can be fully or, at least, partially determined under the Boolean model considered. Conclusions The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series data of gene expression, and this inference process can be aided by available a priori knowledge.
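The consistency check at the heart of such an inference can be sketched by exhaustively filtering a restricted function class (here, AND/OR of two possibly negated inputs, an assumed restriction) against every observed state transition; this is a toy filter over a tiny hypothetical time series, not the paper's CSP formulation:

```python
from itertools import product

# Hypothetical time series of states for three genes (g0, g1, g2).
series = [
    (0, 0, 1),
    (0, 1, 1),
    (1, 1, 0),
    (1, 0, 0),
    (0, 0, 1),
]

def candidates_for(target):
    """Return every restricted Boolean function (AND/OR of two possibly
    negated inputs) consistent with all transitions of `target`."""
    ops = {"and": lambda x, y: x and y, "or": lambda x, y: x or y}
    found = []
    n_genes = len(series[0])
    for a, b in product(range(n_genes), repeat=2):
        for op_name, op in ops.items():
            for neg_a, neg_b in product([False, True], repeat=2):
                ok = all(
                    op(cur[a] ^ neg_a, cur[b] ^ neg_b) == nxt[target]
                    for cur, nxt in zip(series, series[1:])
                )
                if ok:
                    found.append((op_name, a, neg_a, b, neg_b))
    return found

print(candidates_for(2))
```

Each additional time point acts as a constraint that prunes candidates, so when several functions survive, the interaction is only partially determined, which matches the behavior reported above for the yeast and HeLa data.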