14 results for automated thematic analysis of textual data
in Biblioteca Digital da Produção Intelectual da Universidade de São Paulo
Abstract:
Objective: To review the presentation, treatment, and histology of hyperinsulinemic hypoglycemia of infancy (HHI) in Brazilian pediatric endocrinology centers. Materials and methods: The protocol analyzed data on birth, laboratory results, treatment, surgery, and pancreatic histology. Results: Twenty-five cases of HHI from six centers were analyzed: 15 male, 3/25 born by vaginal delivery. The average age at diagnosis was 10.3 days. Glucose and insulin levels in the critical sample averaged 24.7 mg/dL and 26.3 IU/dL. The intravenous glucose infusion rate was greater than 10 mg/kg/min in all cases (mean: 19.1). Diazoxide was used in 15/25 cases, octreotide in 10, glucocorticoid in 8, growth hormone in 3, nifedipine in 2, and glucagon in 1. Ten of the cases underwent pancreatectomy, and histology showed the diffuse form of the disease. Conclusion: This is the first critical review of a Brazilian sample with congenital HHI. Arq Bras Endocrinol Metab. 2012; 56(9): 666-71
Abstract:
Objective: To carry out an anatomical study of the axis using computed tomography (CT) in children aged two to ten years, measuring lamina angle, lamina and pedicle length and thickness, and lateral mass length. Methods: Sixty-four CTs were studied from patients aged 24 to 120 months, of both sexes and without any cervical anomaly. The measurements obtained were correlated with the patients' age and sex. Statistical analysis was performed using Student's t-test. Results: We found that within the 24-48 month age range, 5.5% of the laminas and 8.3% of the pedicles had thicknesses of less than 3.5mm, the minimum thickness needed for insertion of the screw. Between 49 and 120 months, no lamina thicknesses were less than 3.5mm, and 1.2% of the pedicle thicknesses were less than 3.5mm. Neither age group had any lamina or pedicle lengths of less than 12mm, or lateral mass lengths greater than 12mm. Conclusion: The analysis of the data obtained demonstrates that, in most cases, it is possible to use a 3.5mm screw in the laminas and pedicles of the axis in children. Level of Evidence: II, Development of diagnostic criteria in consecutive patients.
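The group comparison described above rests on a two-sample t statistic. A minimal sketch of such a comparison, using Welch's variant (which does not assume equal variances; the paper itself reports Student's t-test, and the numbers below are invented, not the paper's data):

```python
import math

def welch_t(a, b):
    """Welch's two-sample t statistic and Welch-Satterthwaite degrees
    of freedom for two lists of measurements."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)   # sample variances
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se2 = va / len(a) + vb / len(b)
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / (va ** 2 / (len(a) ** 2 * (len(a) - 1))
                     + vb ** 2 / (len(b) ** 2 * (len(b) - 1)))
    return t, df

# e.g. pedicle thicknesses (mm) in two hypothetical age groups
t, df = welch_t([3.2, 3.6, 3.9, 4.1], [4.0, 4.3, 4.5, 4.8])
```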
Abstract:
A new method for the analysis of scattering data from lamellar bilayer systems is presented. The method employs a form-free description of the cross-section structure of the bilayer, and the fit is performed directly on the scattering data, also introducing a structure factor when required. The cross-section structure (the electron density profile in the case of X-ray scattering) is described by a set of Gaussian functions, and the technique is termed Gaussian deconvolution. The coefficients of the Gaussians are optimized using a constrained least-squares routine that induces smoothness of the electron density profile. The optimization is coupled with the point-of-inflection method for determining the optimal weight of the smoothness constraint. With the new approach it is possible to optimize simultaneously the form factor, the structure factor, and several other parameters of the model. The applicability of the method is demonstrated in a study of a multilamellar system composed of lecithin bilayers, where the form factor and structure factor are obtained simultaneously; the results provide new insight into this well-known system.
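The core numerical ingredient here is a least-squares fit with a smoothness penalty on the coefficients of a Gaussian basis. A minimal sketch of that idea (the smoothness weight is fixed by hand rather than by the point-of-inflection method, and the profile and basis are invented for illustration):

```python
import numpy as np

def smooth_lstsq(A, y, lam):
    """Least squares with a second-difference penalty on the coefficients:
    minimizes ||A c - y||^2 + lam^2 ||D c||^2, a simple stand-in for the
    smoothness-inducing constrained routine described in the abstract."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
    A_aug = np.vstack([A, lam * D])
    y_aug = np.concatenate([y, np.zeros(D.shape[0])])
    c, *_ = np.linalg.lstsq(A_aug, y_aug, rcond=None)
    return c

# Gaussian basis for a density profile rho(z) = sum_i c_i g_i(z)
z = np.linspace(-3, 3, 200)
centers = np.linspace(-3, 3, 25)
sigma = 0.35
A = np.exp(-(z[:, None] - centers[None, :]) ** 2 / (2 * sigma ** 2))

rho_true = np.exp(-z ** 2) * np.cos(2 * z)       # synthetic smooth profile
y = rho_true + 0.02 * np.random.default_rng(0).normal(size=z.size)

c = smooth_lstsq(A, y, lam=0.1)
rho_fit = A @ c
```

The stacked-matrix formulation keeps the problem linear, so the penalized solution comes from a single `lstsq` call.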
Abstract:
In this work, different methods for estimating thin-film residual stresses from instrumented indentation data were analyzed. The study considered procedures proposed in the literature, as well as a modification of one of these methods and a new approach based on the effect of residual stress on the hardness value calculated via the Oliver and Pharr method. The analysis was centered on an axisymmetric two-dimensional finite element model, developed to simulate instrumented indentation testing of thin ceramic films deposited onto hard steel substrates. Simulations were conducted varying the level of film residual stress, the film strain-hardening exponent, the film yield strength, and the film Poisson's ratio. Different ratios of maximum penetration depth h_max over film thickness t were also considered, including h/t = 0.04, for which the contribution of the substrate to the mechanical response of the system is not significant. Residual stresses were then calculated following the procedures mentioned above and compared with the values used as input to the numerical simulations. In general, the results indicate that the deviation of each method from the input values depends on the conditions studied. The method of Suresh and Giannakopoulos consistently overestimated the values when stresses were compressive. The method of Wang et al. showed less dependence on h/t than the others.
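For context, the Oliver and Pharr method referenced above extracts hardness and reduced modulus from the unloading portion of an indentation curve. A minimal sketch with an ideal Berkovich area function (the input numbers are illustrative, not the paper's data):

```python
import math

def oliver_pharr(p_max, h_max, stiffness, eps=0.75):
    """Hardness and reduced modulus via the Oliver & Pharr analysis.
    Units here: load in mN, depth in nm, stiffness in mN/nm, so the
    returned values are in mN/nm^2 (multiply by 1e6 for GPa).
    An ideal Berkovich area function A = 24.5 h_c^2 is assumed."""
    h_c = h_max - eps * p_max / stiffness        # contact depth
    area = 24.5 * h_c ** 2                       # projected contact area
    hardness = p_max / area
    e_reduced = math.sqrt(math.pi) * stiffness / (2.0 * math.sqrt(area))
    return hardness, e_reduced

# illustrative numbers: 10 mN peak load, 300 nm depth, 0.2 mN/nm stiffness
H, Er = oliver_pharr(p_max=10.0, h_max=300.0, stiffness=0.2)
```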
Abstract:
We present an analysis of observations made with the Arcminute Microkelvin Imager (AMI) and the Canada-France-Hawaii Telescope (CFHT) of six galaxy clusters in a redshift range of 0.16-0.41. The cluster gas is modelled using the Sunyaev-Zel'dovich (SZ) data provided by AMI, while the total mass is modelled using the lensing data from the CFHT. In this paper, we (i) find very good agreement between SZ measurements (assuming large-scale virialization and a gas-fraction prior) and lensing measurements of the total cluster masses out to r200; (ii) perform the first multiple-component weak-lensing analysis of A115; (iii) confirm the unusual separation between the gas and mass components in A1914; and (iv) jointly analyse the SZ and lensing data for the relaxed cluster A611, confirming our use of a simulation-derived mass-temperature relation for parametrizing measurements of the SZ effect.
Abstract:
The purpose of this study was to present a spatial analysis of the social vulnerability associated with teenage pregnancy, by geoprocessing data on births and deaths from the Brazilian Ministry of Health databases, in order to support intersectoral management actions and strategies based on spatial analysis of neighborhood areas. Thematic maps of the educational, occupational, birth, and marital status of mothers, from all births and deaths in the city, showed a spatial correlation with teenage pregnancy. These maps were superimposed to produce a social vulnerability map for adolescent pregnancy and for women in general. This process presents itself as a powerful tool for the study of social vulnerability.
Abstract:
A sensitive, selective, and reproducible in-tube solid-phase microextraction and liquid chromatography (in-tube SPME/LC-UV) method for the determination of lidocaine and its metabolite monoethylglycinexylidide (MEGX) in human plasma has been developed, validated, and applied to a pharmacokinetic study in pregnant women with gestational diabetes mellitus (GDM) subjected to epidural anesthesia. Important factors in the optimization of in-tube SPME performance are discussed, including the draw/eject sample volume, the number of draw/eject cycles, the draw/eject flow rate, the sample pH, and the influence of plasma proteins. The limit of quantification of the in-tube SPME/LC method was 50 ng/mL for both lidocaine and its metabolite. Interday and intraday precision had coefficients of variation lower than 8%, and accuracy ranged from 95 to 117%. The response of the method was linear over a dynamic range from 50 to 5000 ng/mL, with correlation coefficients higher than 0.9976. The developed in-tube SPME/LC method was successfully used to analyze lidocaine and its metabolite in plasma samples from these patients.
Abstract:
Dimensionality reduction is employed in visual data analysis as a way of obtaining reduced spaces for high-dimensional data, or of mapping data directly into 2D or 3D spaces. Although techniques have evolved to improve data segregation in reduced or visual spaces, they have limited capabilities for adjusting the results according to the user's knowledge. In this paper, we propose a novel approach to handling both dimensionality reduction and visualization of high-dimensional data that takes the user's input into account. It employs Partial Least Squares (PLS), a statistical tool that retrieves latent spaces focused on the discriminability of the data. The method uses a training set to build a highly precise model that can then be applied very effectively to a much larger data set. The reduced data set can be displayed using various existing visualization techniques. The training data is important for coding the user's knowledge into the loop; however, this work also devises a strategy for computing PLS reduced spaces when no training data is available. The approach produces increasingly precise visual mappings as the user feeds back his or her knowledge, and is capable of working with small and unbalanced training sets.
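The idea of a label-supervised projection can be sketched with a minimal NIPALS-style PLS1 routine. This is not the authors' implementation; the data, the two-component choice, and the function name are illustrative assumptions:

```python
import numpy as np

def pls_project(X, y, n_comp=2):
    """Project rows of X onto `n_comp` PLS latent directions supervised
    by labels y (a minimal NIPALS-style PLS1 sketch, not the method
    proposed in the paper). Returns an (n_samples, n_comp) array of
    scores suitable for 2D scatter-plot visualization."""
    Xr = np.asarray(X, float) - np.asarray(X, float).mean(axis=0)
    yr = np.asarray(y, float) - np.asarray(y, float).mean()
    scores = []
    for _ in range(n_comp):
        w = Xr.T @ yr                    # covariance-maximizing direction
        w /= np.linalg.norm(w)
        t = Xr @ w                       # latent scores
        p = Xr.T @ t / (t @ t)           # loadings
        Xr = Xr - np.outer(t, p)         # deflate X
        yr = yr - t * (yr @ t) / (t @ t) # deflate y
        scores.append(t)
    return np.column_stack(scores)

# two synthetic classes separated along one feature
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 10))
X[50:, 0] += 2.0
y = np.array([0.0] * 50 + [1.0] * 50)
T = pls_project(X, y, n_comp=2)          # 2D coordinates per sample
```

Because the directions are chosen to covary with the labels, the two classes separate along the first component, which is the discriminability property the abstract exploits.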
Abstract:
OBJECTIVE: To identify clusters of the major occurrences of leprosy and their associated socioeconomic and demographic factors. METHODS: Cases of leprosy that occurred between 1998 and 2007 in Sao Jose do Rio Preto (southeastern Brazil) were geocoded, and incidence rates were calculated by census tract. A socioeconomic classification score was obtained using principal component analysis of socioeconomic variables. Thematic maps visualizing the spatial distribution of leprosy incidence with respect to socioeconomic levels and demographic density were constructed using geostatistics. RESULTS: While the incidence rate for the entire city was 10.4 cases per 100,000 inhabitants annually between 1998 and 2007, the incidence rates of individual census tracts were heterogeneous, with values ranging from 0 to 26.9 cases per 100,000 inhabitants per year. Areas with a high leprosy incidence were associated with lower socioeconomic levels. Clusters of leprosy cases were identified; however, there was no association between disease incidence and demographic density. There was a disparity between the places where most of the affected people lived and the location of healthcare services. CONCLUSIONS: The spatial analysis techniques employed identified the poorer neighborhoods of the city as the areas with the highest risk for the disease. These data show that health departments must prioritize politico-administrative policies that minimize the effects of social inequality and improve the population's standards of living, hygiene, and education in order to reduce the incidence of leprosy.
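Two quantities above are easy to make concrete: the composite socioeconomic score (first principal component of standardized tract variables) and the average annual incidence rate. A hedged sketch; the variable matrix and the population/case counts in the example are invented, not the study's data:

```python
import numpy as np

def socioeconomic_score(tract_vars):
    """First principal component of standardized socioeconomic variables,
    used as a composite score per census tract (illustrative; variable
    choice and sign convention are assumptions, not from the paper)."""
    Z = (tract_vars - tract_vars.mean(axis=0)) / tract_vars.std(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    return Z @ Vt[0]          # projection onto the leading component

def annual_incidence(cases, population, years, per=100_000):
    """Average annual incidence per `per` inhabitants."""
    return cases / (population * years) * per

# e.g. 416 cases in a hypothetical population of 400,000 over 10 years
rate = annual_incidence(416, 400_000, 10)   # -> 10.4 per 100,000/year
```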
Abstract:
Abstract Background The search for enriched (aka over-represented or enhanced) ontology terms in a list of genes obtained from microarray experiments is becoming a standard procedure for system-level analysis. This procedure tries to summarize the information by focussing on classification schemes such as Gene Ontology, KEGG pathways, and so on, instead of on individual genes. Although it is well known in statistics that association and significance are distinct concepts, only the latter approach has been used to deal with the ontology term enrichment problem. Results BayGO implements a Bayesian approach to search for enriched terms from microarray data. The R source code is freely available at http://blasto.iq.usp.br/~tkoide/BayGO in three versions: Linux, which can easily be incorporated into pre-existing pipelines; Windows, to be controlled interactively; and a web tool. The software was validated using a bacterial heat shock response dataset, since this stress triggers known system-level responses. Conclusion The Bayesian model accounts for the fact that possibly not all the genes from a given category are observable in microarray data, due to low-intensity signal, quality filters, genes that were not spotted, and so on. Moreover, BayGO allows one to measure the statistical association between generic ontology terms and differential expression, instead of working only with the common significance analysis.
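The "common significance analysis" the abstract contrasts itself with is typically an upper-tail hypergeometric test per term. A minimal sketch of that baseline (BayGO's Bayesian model itself is not reproduced here):

```python
from math import comb

def enrichment_p(N, K, n, k):
    """Upper-tail hypergeometric probability P(X >= k): out of N genes,
    K annotated with a term, a selected list of n genes contains k
    annotated ones. This is the classical significance test for term
    enrichment, not BayGO's association measure."""
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / comb(N, n)

# e.g. all 5 genes of a 5-gene term appear in a 5-gene list from 20 genes
p = enrichment_p(N=20, K=5, n=5, k=5)
```

The abstract's point is that a tiny p-value like this establishes significance but says nothing by itself about the strength of association, which is the gap BayGO addresses.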
Abstract:
Abstract Background Several mathematical and statistical methods have been proposed in the last few years for analyzing microarray data. Most of these methods involve complicated formulas and software implementations that require advanced computer programming skills. Researchers from other areas may experience difficulties when attempting to use these methods in their research. Here we present a user-friendly toolbox which allows large-scale gene expression analysis to be carried out by biomedical researchers with limited programming skills. Results We introduce a user-friendly toolbox called GEDI (Gene Expression Data Interpreter), an extensible, open-source, and freely available tool that we believe will be useful to a wide range of laboratories, and to researchers with no background in mathematics or computer science, allowing them to analyze their own data by applying both classical and advanced approaches developed and recently published by Fujita et al. Conclusion GEDI is an integrated user-friendly viewer that combines the state-of-the-art SVR, DVAR and SVAR algorithms previously developed by us. It facilitates the application of SVR, DVAR and SVAR beyond the mathematical formulas presented in the corresponding publications, and allows one to better understand the results by means of the available visualizations. Both running the statistical methods and visualizing the results are carried out within the graphical user interface, rendering these algorithms accessible to the broad community of researchers in molecular biology.
Abstract:
Abstract Background A popular model for gene regulatory networks is the Boolean network model. In this paper, we propose an algorithm to analyze gene regulatory interactions using the Boolean network model and time-series data. The Boolean networks considered are restricted in the sense that only a subset of all possible Boolean functions is allowed. We explore some mathematical properties of these restricted Boolean networks in order to avoid a full search. The problem is modeled as a Constraint Satisfaction Problem (CSP), and CSP techniques are used to solve it. Results We applied the proposed algorithm to two data sets. First, we used an artificial dataset obtained from a model of the budding yeast cell cycle. The second data set was derived from experiments performed using HeLa cells. The results show that some interactions can be fully or at least partially determined under the Boolean model considered. Conclusions The proposed algorithm can be used as a first step in the detection of gene/protein interactions. It is able to infer gene relationships from time-series gene expression data, and this inference process can be aided by available a priori knowledge.
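The flavor of inference over a restricted Boolean function class can be illustrated with a toy enumeration: keep only the rules consistent with every observed transition. This is not the paper's CSP formulation or its restricted class; the AND/OR-of-literals class below is an assumption chosen for brevity:

```python
from itertools import combinations, product

def consistent_rules(series, target, max_inputs=2):
    """Enumerate Boolean rules, restricted to AND/OR over (possibly
    negated) input literals, that are consistent with every observed
    transition of gene `target` in a binary time series. A toy stand-in
    for the constraint-satisfaction search described in the abstract."""
    n = len(series[0])
    rules = []
    for k in range(1, max_inputs + 1):
        for inputs in combinations(range(n), k):
            for signs in product((1, -1), repeat=k):       # -1 = negated
                for op_name, op in (("and", all), ("or", any)):
                    ok = True
                    for t in range(len(series) - 1):
                        lits = [series[t][i] if s == 1 else 1 - series[t][i]
                                for i, s in zip(inputs, signs)]
                        if int(op(lits)) != series[t + 1][target]:
                            ok = False
                            break
                    if ok:
                        rules.append((op_name, inputs, signs))
    return rules

# gene 2 follows gene0 AND gene1 in this synthetic trajectory
series = [(0, 0, 0), (0, 1, 0), (1, 0, 0), (1, 1, 0), (0, 0, 1)]
rules = consistent_rules(series, target=2)
```

As in the paper's conclusion, more data narrows the consistent set: when several rules survive, the interaction is only partially determined.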
Abstract:
Human endogenous retroviruses (HERVs) arise from ancient infections of host germline cells by exogenous retroviruses, and constitute 8% of the human genome. Elevated levels of envelope transcripts from HERV-W have been detected in CSF, plasma, and brain tissue from patients with Multiple Sclerosis (MS), most of them mapping to the chromosomal loci Xq22.3, 15q21.3, and 6q21. However, since the locus Xq22.3 (ERVWE2) lacks the 5' LTR promoter and the putative protein should be truncated due to a stop codon, we investigated the ERVWE2 genomic loci of 84 individuals, including MS patients with active HERV-W expression detected in PBMC. In addition, an automated search for promoter sequences in the 20 kb region flanking the ERVWE2 reference sequence was performed. Several putative binding sites for cellular cofactors and enhancers were found, suggesting that transcription may occur via alternative promoters. However, ERVWE2 DNA sequencing of MS patients and healthy individuals revealed that all of them harbor a stop codon at site 39, undermining the expression of a full-length protein. Finally, since plaque formation in the central nervous system (CNS) of MS patients is attributed to immunological mechanisms triggered by an autoimmune attack against myelin, we also investigated the level of similarity between the envelope protein and myelin oligodendrocyte glycoprotein (MOG). Comparison of MOG to the envelope identified five retroviral regions similar to the Ig-like domain of MOG. Interestingly, one of them includes T- and B-cell epitopes capable of inducing T effector functions and circulating antibodies in rats. In sum, although no DNA substitutions that would link ERVWE2 to MS pathogenesis were found, the similarity between the envelope protein and MOG supports the idea that ERVWE2 may be involved in the immunopathogenesis of MS, possibly facilitating recognition of MOG by the immune system. Although experimental evidence is still awaited, the data presented here may expand the scope of endogenous retrovirus involvement in MS pathogenesis.
Abstract:
Abstract Background The study and analysis of gene expression measurements is the primary focus of functional genomics. Once expression data are available, biologists are faced with the task of extracting (new) knowledge about the underlying biological phenomenon. Most often, in order to perform this task, biologists execute a number of analysis activities on the available gene expression dataset rather than a single one. The integration of heterogeneous tools and data sources to create an integrated analysis environment represents a challenging and error-prone task. Semantic integration enables the assignment of unambiguous meanings to data shared among different applications in an integrated environment, allowing data to be exchanged in a semantically consistent and meaningful way. This work aims at developing an ontology-based methodology for the semantic integration of gene expression analysis tools and data sources. The proposed methodology relies on software connectors to support not only access to heterogeneous data sources but also the definition of transformation rules on exchanged data. Results We have studied the different challenges involved in the integration of computer systems and the role software connectors play in this task. We have also studied a number of gene expression technologies, analysis tools, and related ontologies in order to devise basic integration scenarios and propose a reference ontology for the gene expression domain. We have then defined a number of activities and associated guidelines to prescribe how the development of connectors should be carried out. Finally, we have applied the proposed methodology in the construction of three different integration scenarios involving the use of different tools for the analysis of different types of gene expression data.
Conclusions The proposed methodology facilitates the development of connectors capable of semantically integrating different gene expression analysis tools and data sources. The methodology can be used in the development of connectors supporting both simple and nontrivial processing requirements, thus assuring accurate data exchange and information interpretation from exchanged data.