874 results for "Filmic approach methods"


Relevance: 30.00%

Abstract Background With the development of DNA hybridization microarray technologies, it is now possible to assess the expression levels of thousands to tens of thousands of genes simultaneously. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses, so choosing a suitable normalization approach can be critical and deserves judicious consideration. Results Here we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely Kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion In view of these results, Support Vector Regression is favored for microarray normalization because of its superior robustness in estimating the normalization curve.
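
A minimal sketch of what intensity-dependent (MA) normalization with Support Vector Regression can look like, on simulated two-channel data; the simulated dye bias and the SVR parameters are illustrative assumptions, not the settings benchmarked in the study.

```python
# Minimal sketch: MA normalization with Support Vector Regression on
# simulated two-channel intensities. Parameter values are illustrative.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n = 2000
signal = rng.lognormal(mean=7, sigma=1, size=n)
red = signal * rng.lognormal(sigma=0.2, size=n)
green = signal * rng.lognormal(sigma=0.2, size=n) * (1.0 + 0.05 * np.log2(signal))  # dye bias

A = 0.5 * np.log2(red * green)   # average log intensity
M = np.log2(red / green)         # log ratio to be normalized

# Fit the normalization curve M = f(A); the epsilon-insensitive loss of the
# SVR makes the fitted curve comparatively robust to outlying spots.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma="scale")
svr.fit(A.reshape(-1, 1), M)

M_normalized = M - svr.predict(A.reshape(-1, 1))
print("median |M| before:", np.median(np.abs(M)),
      "after:", np.median(np.abs(M_normalized)))
```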

Relevance: 30.00%

Abstract Background One goal of gene expression profiling is to identify signature genes that robustly distinguish different types or grades of tumors. Several tumor classifiers based on expression profiling have been proposed using microarray techniques. Due to important differences between the probabilistic models of microarray and SAGE technologies, it is important to develop suitable techniques to select specific genes from SAGE measurements. Results A new framework to select specific genes that distinguish different biological states based on the analysis of SAGE data is proposed. The framework applies the bolstered error to identify strong genes that separate the biological states in a feature space defined by the gene expression of a training set. Credibility intervals defined from a probabilistic model of SAGE measurements are used to identify, among all gene groups selected by the strong-genes method, those that distinguish the different states most reliably. A score that combines the credibility and bolstered error values is proposed to rank the candidate gene groups. Results obtained using SAGE data from gliomas are presented, corroborating the introduced methodology. Conclusion A model representing counting data, such as SAGE, provides additional statistical information that allows a more robust analysis, and this information is incorporated in the methodology described in the paper. The introduced method is suitable for identifying signature genes that lead to a good separation of the biological states using SAGE and may be adapted for other counting methods such as Massive Parallel Signature Sequencing (MPSS) or the recent Sequencing-By-Synthesis (SBS) technique. Some of the genes identified by the proposed method may be useful for building classifiers.
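
A hedged sketch of the bolstered error idea used to rank strong gene groups: a Monte Carlo approximation of the bolstered resubstitution error for a gene pair under a linear classifier. The Gaussian kernel width, the classifier and the toy data are illustrative choices; the SAGE credibility intervals are not reproduced here.

```python
# Hedged sketch: Monte Carlo bolstered resubstitution error for one gene pair.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def bolstered_error(X, y, sigma=0.5, n_mc=200, seed=0):
    """X: (n_samples, 2) expression of a gene pair; y: binary class labels."""
    rng = np.random.default_rng(seed)
    clf = LinearDiscriminantAnalysis().fit(X, y)
    errors = 0.0
    for xi, yi in zip(X, y):
        # Spread a Gaussian "bolstering" kernel around each training point and
        # count how much of its mass falls on the wrong side of the classifier.
        samples = rng.normal(loc=xi, scale=sigma, size=(n_mc, 2))
        errors += np.mean(clf.predict(samples) != yi)
    return errors / len(y)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([0] * 20 + [1] * 20)
print("bolstered resubstitution error:", bolstered_error(X, y))
```

Gene groups yielding a lower bolstered error (together with higher credibility) would be ranked higher by a score of the kind described above.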

Relevance: 30.00%

Abstract Background Despite evidence that health and disease occur in social contexts, the vast majority of studies addressing dental pain have assessed only information gathered at the individual level. Objectives To assess the association between dental pain and contextual and individual characteristics in Brazilian adolescents, and to test whether the contextual Human Development Index (HDI) is independently associated with dental pain after adjusting for individual-level socio-demographic and dental variables. Methods The study used data from an oral health survey carried out in São Paulo, Brazil, which included dental pain, dental exams, individual socioeconomic and demographic conditions, and area-level Human Development Index for 4,249 12-year-old and 1,566 15-year-old schoolchildren. Poisson multilevel analysis was performed. Results Dental pain was found in 25.6% (95%CI = 24.5-26.7) of the adolescents and was 33% less prevalent among those living in more developed areas of the city than among those living in less developed areas. Girls, black adolescents, those whose parents had low income and low schooling, those studying at public schools, and those with dental treatment needs presented higher dental-pain prevalence than their counterparts. Area HDI remained associated with dental pain after adjusting for individual-level socio-demographic and dental variables. Conclusions Girls, students whose parents had low schooling, those with low per capita income, those classified as having black skin color and those with dental treatment needs had higher dental-pain prevalence than their counterparts. Students from areas with a low Human Development Index had a higher prevalence of dental pain than those from more developed areas, regardless of individual characteristics.
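
One common way to approximate the Poisson multilevel analysis described above is a Poisson model with a log link and area-level clustering, whose exponentiated coefficients are prevalence ratios. The sketch below uses a GEE with exchangeable correlation as a stand-in for a full random-intercept multilevel model, on synthetic data; the variable names and effect sizes are invented for illustration.

```python
# Hedged sketch: prevalence ratios for dental pain from a log-link Poisson
# model clustered by area (GEE used as a stand-in for a multilevel model).
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "area": rng.integers(0, 30, n),       # area of residence (cluster)
    "hdi_low": rng.integers(0, 2, n),     # low area-level HDI indicator
    "girl": rng.integers(0, 2, n),
    "low_income": rng.integers(0, 2, n),
})
linpred = -1.6 + 0.3 * df.hdi_low + 0.2 * df.girl + 0.25 * df.low_income
df["dental_pain"] = rng.binomial(1, np.exp(linpred).clip(0, 1))

model = smf.gee("dental_pain ~ hdi_low + girl + low_income",
                groups="area", data=df,
                family=sm.families.Poisson(),
                cov_struct=sm.cov_struct.Exchangeable())
res = model.fit()
print(np.exp(res.params))   # exponentiated coefficients = prevalence ratios
```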

Relevance: 30.00%

Abstract Background It has recently been recognized that the functional connectivity networks estimated from brain-imaging technologies (MEG, fMRI and EEG) can be analyzed by means of graph theory, in which a network is represented mathematically as a set of nodes and the connections between them. Methods We used high-resolution EEG technology to improve the poor spatial resolution of scalp EEG, yielding a measure of the electrical activity on the cortical surface. We then used the Directed Transfer Function (DTF), a multivariate spectral measure, to estimate the directional influences between any given pair of channels in a multivariate dataset. Finally, a graph theoretical approach was used to model the brain networks as graphs. These methods were applied to the structure of cortical connectivity during the attempt to move a paralyzed limb in a group (N=5) of spinal cord injured (SCI) patients and during movement execution in a group (N=5) of healthy subjects. Results Analysis of the cortical networks estimated for the healthy group and the SCI patients revealed that both groups present few nodes with a high out-degree value (i.e. outgoing links). This property holds in the networks estimated for all the frequency bands investigated. In particular, cingulate motor area (CMA) ROIs act as "hubs" for the outflow of information in both groups, SCI and healthy. The results also suggest that spinal cord injuries affect the functional architecture of the cortical network subserving the volition of motor acts mainly in its local-feature properties. In particular, a higher local efficiency El can be observed in the SCI patients for three frequency bands: theta (3-6 Hz), alpha (7-12 Hz) and beta (13-29 Hz). By taking into account all possible pathways between different ROI pairs, we were able to clearly separate the network properties of the SCI group from those of the CTRL group. In particular, we report a sort of compensatory mechanism in the SCI patients for the theta (3-6 Hz) frequency band, indicating a higher level of "activation" Ω within the cortical network during the motor task. The activation index is directly related to diffusion, a type of dynamics that underlies several biological systems, including the possible spreading of neuronal activation across several cortical regions. Conclusions The present study demonstrates possible applications of graph theoretical approaches to the analysis of brain functional connectivity from EEG signals. In particular, the methodological aspects of i) estimating cortical activity from scalp EEG signals, ii) estimating functional connectivity, and iii) computing graph theoretical indexes are emphasized to show their impact in a real application.
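
A minimal sketch of the graph-theoretical step described above: thresholding a DTF connectivity matrix into a directed graph, finding out-degree "hubs", and computing local efficiency. The random matrix, the ROI labels and the threshold are illustrative stand-ins for the estimated cortical networks.

```python
# Minimal sketch: DTF matrix -> directed graph -> out-degree hubs, local efficiency.
import numpy as np
import networkx as nx

rng = np.random.default_rng(0)
rois = ["CMA_L", "CMA_R", "M1_L", "M1_R", "SMA", "PP_L", "PP_R"]  # illustrative ROIs
dtf = rng.random((len(rois), len(rois)))     # stand-in DTF values in [0, 1]
np.fill_diagonal(dtf, 0.0)

threshold = 0.7                              # keep only the strongest connections
G = nx.DiGraph()
G.add_nodes_from(rois)
for i, src in enumerate(rois):
    for j, dst in enumerate(rois):
        if dtf[i, j] > threshold:
            G.add_edge(src, dst, weight=dtf[i, j])

out_deg = dict(G.out_degree())
hubs = sorted(out_deg, key=out_deg.get, reverse=True)[:2]
print("outflow hubs:", hubs)

# Local efficiency is defined on undirected graphs in networkx.
print("local efficiency:", nx.local_efficiency(G.to_undirected()))
```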

Relevance: 30.00%

Background The use of the knowledge produced by the sciences to promote human health is the main goal of translational medicine. To make this feasible we need computational methods to handle the large amount of information that arises from bench to bedside and to deal with its heterogeneity. A computational challenge that must be faced is the integration of clinical, socio-demographic and biological data. In this effort, ontologies play an essential role as a powerful artifact for knowledge representation. Chado is a modular, ontology-oriented database model that gained popularity due to its robustness and flexibility as a generic platform to store biological data; however, it lacks support for representing clinical and socio-demographic information. Results We have implemented an extension of Chado - the Clinical Module - to allow the representation of this kind of information. Our approach consists of a framework for data integration through the use of a common reference ontology. The design of this framework has four levels: the data level, to store the data; the semantic level, to integrate and standardize the data through ontologies; the application level, to manage clinical databases, ontologies and the data integration process; and the web interface level, to allow interaction between the user and the system. The Clinical Module was built based on the Entity-Attribute-Value (EAV) model. We also propose a methodology to migrate data from legacy clinical databases to the integrative framework. A Chado instance was initialized using a relational database management system. The Clinical Module was implemented and the framework was loaded using data from a real clinical research database; clinical and demographic data, as well as biomaterial data, were obtained from patients with tumors of the head and neck. We implemented the IPTrans tool, a complete environment for data migration, which comprises: the construction of an ontology-based model to describe the legacy clinical data; the Extraction, Transformation and Load (ETL) process to extract the data from the source clinical database and load it into the Clinical Module of Chado; and the development of a web tool and a Bridge Layer to adapt the web tool to Chado, as well as to other applications. Conclusions Open-source computational solutions currently available for translational science do not have a model to represent biomolecular information, nor are they integrated with existing bioinformatics tools. On the other hand, existing genomic data models do not represent clinical patient data. A framework was developed to support translational research by integrating biomolecular information from different "omics" technologies with patients' clinical and socio-demographic data. Such a framework should offer flexibility, compression and robustness. The experiments carried out on a use case demonstrated that the proposed system meets the requirements of flexibility and robustness, leading to the desired integration. The Clinical Module can be accessed at http://dcm.ffclrp.usp.br/caib/pg=iptrans.
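
A minimal sketch of the Entity-Attribute-Value (EAV) pattern on which the Clinical Module is based, using an in-memory SQLite database. The table and column names are illustrative and do not reproduce Chado's actual schema.

```python
# Minimal EAV sketch: clinical attributes stored as rows, not columns.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.executescript("""
CREATE TABLE patient   (patient_id   INTEGER PRIMARY KEY, label TEXT);
CREATE TABLE attribute (attribute_id INTEGER PRIMARY KEY, name  TEXT);  -- ideally ontology terms
CREATE TABLE patient_attribute (
    patient_id   INTEGER REFERENCES patient(patient_id),
    attribute_id INTEGER REFERENCES attribute(attribute_id),
    value        TEXT
);
""")
cur.execute("INSERT INTO patient VALUES (1, 'case-001')")
cur.executemany("INSERT INTO attribute VALUES (?, ?)",
                [(1, "tumor site"), (2, "smoking status")])
cur.executemany("INSERT INTO patient_attribute VALUES (?, ?, ?)",
                [(1, 1, "larynx"), (1, 2, "former smoker")])

# New clinical variables need no schema change: just add attribute rows.
for row in cur.execute("""
    SELECT p.label, a.name, pa.value
    FROM patient_attribute pa
    JOIN patient p   ON p.patient_id = pa.patient_id
    JOIN attribute a ON a.attribute_id = pa.attribute_id"""):
    print(row)
```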

Relevance: 30.00%

Abstract Background The ability to successfully identify and incriminate pathogen vectors is fundamental to effective pathogen control and management. This task is confounded by the existence of cryptic species complexes. Molecular markers can offer a highly effective means of species identification in such complexes and are routinely employed in medical entomology. Here we evaluate a multi-locus system for the identification of potential malaria vectors in the Anopheles strodei subgroup. Methods Larvae, pupae and adult mosquitoes (n = 61) from the An. strodei subgroup were collected from 21 localities in nine Brazilian states and sequenced for the COI, ITS2 and white genes. A Bayesian phylogenetic approach was used to describe the relationships within the An. strodei subgroup, and the utility of the COI and ITS2 barcodes was assessed using neighbor-joining trees and the "best close match" approach. Results Bayesian phylogenetic analysis of the COI, ITS2 and white genes found support for seven clades in the An. strodei subgroup. The COI and ITS2 barcodes were individually unsuccessful at resolving and identifying some species in the subgroup. The COI barcode failed to resolve An. albertoi and An. strodei but successfully identified approximately 92% of all species queries, while the ITS2 barcode failed to resolve An. arthuri and successfully identified approximately 60% of all species queries. A multi-locus COI-ITS2 barcode, however, resolved all species in a neighbor-joining tree and successfully identified all species queries using the "best close match" approach. Conclusions Our study corroborates the existence of An. albertoi, An. CP Form and An. strodei in the An. strodei subgroup and identifies four species under An. arthuri, informally named A-D herein. The use of a multi-locus barcode is proposed for species identification, with potentially important utility for vector incrimination. Individuals previously found naturally infected with Plasmodium vivax in the southern Amazon basin and reported as An. strodei are likely to have been An. arthuri C as identified in this study.
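
A small sketch of how a "best close match" criterion can be applied to a concatenated COI+ITS2 barcode using uncorrected p-distances: the query is assigned to the species of its nearest reference, provided the distance falls below a threshold. The sequences and the threshold below are toy values for illustration only, not the study's data or cut-off.

```python
# Toy "best close match" identification on a concatenated COI+ITS2 barcode.
def p_distance(a, b):
    """Proportion of differing sites between two aligned sequences."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b)) / len(a)

def best_close_match(query, references, threshold=0.02):
    """references: list of (species, aligned concatenated COI+ITS2 sequence)."""
    species, seq = min(references, key=lambda r: p_distance(query, r[1]))
    return species if p_distance(query, seq) <= threshold else "no identification"

references = [
    ("An. strodei",  "ACGTACGTACGTACGTACGT" + "TTGACCTTGACCTTGACCTT"),
    ("An. albertoi", "ACGTACGAACGTACGTACGT" + "TTGACCTAGACCTTGACCTT"),
    ("An. arthuri",  "ACCTACGTACGAACGTACGT" + "TTGTCCTTGACCATGACCTT"),
]
query = "ACGTACGTACGTACGAACGT" + "TTGACCTTGACCTTGACCTT"   # closest to An. strodei
print(best_close_match(query, references, threshold=0.10))
```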

Relevance: 30.00%

This paper proposes two new approaches for the sensitivity analysis of multiobjective design optimization problems whose performance functions are highly susceptible to small variations in the design variables and/or design environment parameters. In both methods, the less sensitive design alternatives are preferred over the others during the multiobjective optimization process. In the first approach, the designer chooses the design variables and/or parameters that cause uncertainty, associates a robustness index with each design alternative, and adds each index as an objective function to the optimization problem. In the second approach, the designer must know a priori the interval of variation in the design variables or in the design environment parameters, because this determines the interval of variation accepted in the objective functions. The second method does not require any probability distribution law for the uncontrollable variations. Finally, the authors give two illustrative examples to highlight the contributions of the paper.
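
A numpy sketch of the first approach under simplifying assumptions: a robustness index computed from finite perturbations of the uncertain design variable and appended as an extra objective to be minimized. The performance functions, the perturbation size and the single design variable are all illustrative, not the paper's examples.

```python
# Sketch: robustness index from finite perturbations, added as a third objective.
import numpy as np

def objectives(x):
    """Two illustrative performance functions of one design variable."""
    return np.array([(x - 2.0) ** 2, np.sin(5.0 * x) + 0.1 * x ** 2])

def robustness_index(x, delta=0.05, n=50):
    """Spread of the objectives under small perturbations of x (smaller = more robust)."""
    perturbed = np.array([objectives(x + d) for d in np.linspace(-delta, delta, n)])
    return perturbed.std(axis=0).sum()

candidates = np.linspace(0.0, 4.0, 41)
augmented = [np.append(objectives(x), robustness_index(x)) for x in candidates]
# augmented[i] = [f1, f2, robustness]; a multiobjective optimizer would now
# search for Pareto-optimal designs over these three objectives.
best = min(zip(candidates, augmented), key=lambda t: t[1][2])
print("least sensitive candidate design:", best[0])
```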

Relevance: 30.00%

OBJECTIVE: To understand how nurses view care delivery to elderly women. METHODS: In this phenomenological study, ten nurses working at Primary Health Care Units were interviewed between September 2010 and January 2011. RESULTS: In care delivery, nurses consider the elderly women's knowledge background and biographical situation, and also value the family's participation as a care mediator. These professionals have the acuity to capture these women's specific demands, but face difficulties in delivering care to these clients. Nurses expect to deliver qualified care to these women. CONCLUSION: The theoretical and methodological approach of social phenomenology revealed that nurses design qualified care for elderly women, considering the possibilities of the context. This includes the participation of different social actors and health sectors, implying collective efforts in action strategies and professional training, in line with the particularities and care needs of elderly women that nurses identify.

Relevance: 30.00%

OBJECTIVE: This study proposes a new approach that considers uncertainty in predicting and quantifying the presence and severity of diabetic peripheral neuropathy. METHODS: A rule-based fuzzy expert system was designed by four experts in diabetic neuropathy. The model variables were used to classify neuropathy in diabetic patients as mild, moderate, or severe. System performance was evaluated by means of the Kappa agreement measure, comparing the results of the model with those generated by the experts in an assessment of 50 patients. Accuracy was evaluated by an ROC curve analysis based on 50 other cases; the results of those clinical assessments were considered the gold standard. RESULTS: According to the Kappa analysis, the model was in moderate agreement with expert opinion. The ROC analysis (evaluation of accuracy) determined an area under the curve equal to 0.91, demonstrating very good consistency in classifying patients with diabetic neuropathy. CONCLUSION: The model efficiently classified diabetic patients with different degrees of neuropathy severity. In addition, the model provides a way to quantify diabetic neuropathy severity and allows a more accurate assessment of the patient's condition.
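
A minimal sketch of the kind of rule a fuzzy expert system for neuropathy severity might contain: triangular membership functions combined with max-min inference. The input variables, breakpoints and the rule itself are invented here purely for illustration and are not the system designed by the experts.

```python
# Illustrative fuzzy rule: triangular memberships + min for the AND operator.
import numpy as np

def trimf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

# Hypothetical inputs: symptom score (0-10) and vibration perception threshold (volts).
symptom_score, vibration = 7.0, 30.0

high_symptoms  = trimf(symptom_score, 4, 8, 10)
high_threshold = trimf(vibration, 15, 35, 50)

# Rule: IF symptoms are high AND vibration threshold is high THEN neuropathy is severe.
severe_strength = min(high_symptoms, high_threshold)   # rule firing strength
print(f"membership of 'severe neuropathy': {severe_strength:.2f}")
```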

Relevance: 30.00%

INTRODUCTION: The accurate evaluation of the error of measurement (EM) is extremely important both in growth studies and in clinical research, since the changes involved are usually quantitatively small. In any study it is important to evaluate the EM in order to validate the results and, consequently, the conclusions. Because of its extreme simplicity, the Dahlberg formula is widely used worldwide, mainly in cephalometric studies. OBJECTIVES: (I) To elucidate the formula proposed by Dahlberg in 1940, evaluating it by comparison with linear regression analysis; (II) to propose a simple methodology for analyzing the results, which provides statistical elements to assist researchers in obtaining a consistent evaluation of the EM. METHODS: We applied linear regression analysis, hypothesis tests on its parameters, and a formula involving the standard deviation of the error of measurement and the measured values. RESULTS AND CONCLUSION: We introduce an error coefficient, a proportion related to the scale of the observed values. This provides new parameters that facilitate the evaluation of the impact of random errors on a study's final results.
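
For reference, Dahlberg's 1940 formula estimates the standard deviation of the error of measurement from n double determinations; the regression-based error coefficient proposed in the paper is not reproduced here.

```latex
% Dahlberg's error of measurement from n double determinations,
% where d_i is the difference between the two measurements of case i:
\[
  s_e \;=\; \sqrt{\frac{\sum_{i=1}^{n} d_i^{2}}{2n}}
\]
```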

Relevance: 30.00%

Hermite interpolation is increasingly proving to be a powerful numerical solution tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two and three space dimensions. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov-Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the PG approach, and for the Galerkin formulation as well, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared to velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
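
For context, the Stokes model problem and its standard Galerkin weak formulation read as follows (homogeneous Dirichlet boundary conditions assumed for simplicity; the Petrov-Galerkin force-balance term of the stabilized variant is omitted):

```latex
% Stokes system on a domain \Omega with viscosity \nu:
%   -\nu \Delta u + \nabla p = f,  \quad \nabla\cdot u = 0 \ \text{in } \Omega,
%   \quad u = 0 \ \text{on } \partial\Omega.
% Standard Galerkin weak form: find (u_h, p_h) \in V_h \times Q_h such that
\[
  \nu\,(\nabla u_h, \nabla v_h) - (p_h, \nabla\cdot v_h) = (f, v_h)
  \qquad \forall\, v_h \in V_h,
\]
\[
  (q_h, \nabla\cdot u_h) = 0 \qquad \forall\, q_h \in Q_h .
\]
```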

Relevance: 30.00%

[EN] The lexical approach identifies lexis as the basis of language and focuses on the principle that language consists of grammaticalised lexis. In second language acquisition, over the past few years, this approach has generated great interest as an alternative to traditional grammar-based teaching methods. From a psycholinguistic point of view, the lexical approach consists of the capacity to understand and produce lexical phrases as non-analysed entities (chunks). A growing body of literature concerning spoken fluency is in favour of integrating automaticity and formulaic language units into classroom practice. In line with the latest theories on SLA, we recommend the inclusion of a language awareness component as an integral part of this approach. The purpose is to induce what Schmidt (1990) calls noticing, i.e., registering forms in the input so as to store them in memory. This paper, which is part of the interuniversity research project "Evidentiality in a multidisciplinary corpus of English research papers" of the University of Las Palmas de Gran Canaria, provides a theoretical overview of the research on this approach, taking into account both the methodological foundations of the subject and its pedagogical implications for SLA.

Relevance: 30.00%

[EN] In recent years we have developed several methods for 3D reconstruction. We began with the problem of reconstructing a 3D scene from a stereoscopic pair of images, developing methods based on energy functionals that produce dense disparity maps while preserving discontinuities at image boundaries. We then moved to the problem of reconstructing a 3D scene from multiple views (more than two). The method for multiple-view reconstruction relies on the method for stereoscopic reconstruction: for every pair of consecutive images we estimate a disparity map and then apply a robust method that searches for good correspondences through the sequence of images. Recently we have proposed several methods for 3D surface regularization. This is a post-processing step necessary for smoothing the final surface, which can be affected by noise or mismatched correspondences. These regularization methods are interesting because they use information from the reconstruction process and not only from the 3D surface. We have tackled all of these problems from an energy minimization approach: we investigate the associated Euler-Lagrange equation of the energy functional and approach the solution of the underlying partial differential equation (PDE) using a gradient descent method.
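
A generic example of the energy-minimization setup described above, with a quadratic data term on a rectified stereo pair and a discontinuity-preserving regularizer; the exact functionals used in the cited methods differ, this only illustrates the Euler-Lagrange / gradient-descent machinery.

```latex
% Generic disparity energy for a rectified stereo pair (I_L, I_R):
\[
  E(d) \;=\; \int_{\Omega} \bigl(I_L(x,y) - I_R(x - d(x,y),\, y)\bigr)^{2}\, dx\,dy
  \;+\; \lambda \int_{\Omega} \phi\!\left(\lVert \nabla d \rVert\right)\, dx\,dy .
\]
% Gradient descent on the associated Euler--Lagrange equation:
\[
  \frac{\partial d}{\partial t} \;=\; -\,\frac{\delta E}{\delta d}
  \;=\; -2\bigl(I_L - I_R(x - d, y)\bigr)\,\partial_x I_R(x - d, y)
  \;+\; \lambda\, \operatorname{div}\!\left(\frac{\phi'(\lVert\nabla d\rVert)}{\lVert\nabla d\rVert}\,\nabla d\right).
\]
```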

Relevance: 30.00%

[EN] We analyze the discontinuity-preserving problem in TV-L1 optical flow methods. This type of method typically creates rounded effects at flow boundaries, which usually do not coincide with object contours. A simple strategy to overcome this problem consists in inhibiting the diffusion at high image gradients. In this work, we first introduce a general framework for TV regularizers in optical flow and relate it to some standard approaches. Our survey takes into account several methods that use decreasing functions to mitigate the diffusion at image contours. However, this kind of strategy may produce instabilities in the estimation of the optical flow. We therefore study the problem of instabilities and show that it actually arises from an ill-posed formulation. From this study, it is possible to arrive at different schemes to solve the problem. One of these consists in separating the pure TV process from the mitigating strategy; this has been used in other work, and we demonstrate here that it performs well. Furthermore, we propose two alternatives to avoid the instability problems: (i) a fully automatic approach that solves the problem based on the information of the whole image; (ii) a semi-automatic approach that takes into account the image gradients in a close neighborhood, adapting the parameter at each position. In the experimental results, we present a detailed study and comparison of the different alternatives. These methods provide very good results, especially for sequences with a few dominant gradients. Additionally, a surprising effect of these approaches is that they can cope with occlusions, which can easily be achieved by using strong regularizations and high penalizations at image contours.
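
For reference, a generic form of the TV-L1 optical-flow energy with a decreasing function g weighting the regularizer at image contours is sketched below; the specific variants surveyed in the paper differ in how g enters, and the exponential form shown is only one common choice.

```latex
% Generic TV-L1 energy for a flow field u = (u_1, u_2) between frames I_0, I_1,
% with a decreasing function g attenuating the regularization at image contours:
\[
  E(u) \;=\; \int_{\Omega} \bigl|\, I_1(\mathbf{x} + u(\mathbf{x})) - I_0(\mathbf{x}) \,\bigr|\, d\mathbf{x}
  \;+\; \lambda \int_{\Omega} g\!\left(\lVert \nabla I_0 \rVert\right)
        \bigl(\lVert \nabla u_1 \rVert + \lVert \nabla u_2 \rVert\bigr)\, d\mathbf{x},
  \qquad g(s) = e^{-\alpha s^{\beta}} .
\]
```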

Relevance: 30.00%

The main aim of this Ph.D. dissertation is the study of clustering dependent data by means of copula functions, with particular emphasis on microarray data. Copula functions are a popular multivariate modeling tool in every field where multivariate dependence is of great interest, yet their use in clustering has not yet been investigated. The first part of this work reviews the literature on clustering methods, copula functions and microarray experiments. Attention focuses on the K-means (Hartigan, 1975; Hartigan and Wong, 1979), hierarchical (Everitt, 1974) and model-based (Fraley and Raftery, 1998, 1999, 2000, 2007) clustering techniques, whose performance is later compared. Then, the probabilistic interpretation of Sklar's theorem (Sklar, 1959), estimation methods for copulas such as Inference for Margins (Joe and Xu, 1996), and the Archimedean and Elliptical copula families are presented. Finally, applications of clustering methods and copulas to genetic and microarray experiments are highlighted. The second part contains the original contribution. A simulation study is performed in order to evaluate the performance of the K-means and hierarchical bottom-up clustering methods in identifying clusters according to the dependence structure of the data generating process. Different simulations are performed by varying different conditions (e.g., the kind of margins - distinct, overlapping and nested - and the value of the dependence parameter), and the results are evaluated by means of different measures of performance. In light of the simulation results and the limits of the two investigated clustering methods, a new clustering algorithm based on copula functions ('CoClust' for short) is proposed. The basic idea, the iterative procedure of the CoClust and a description of the R functions written, with their output, are given. The CoClust algorithm is tested on simulated data (varying the number of clusters, the copula models, the dependence parameter value and the degree of overlap of the margins) and is compared with model-based clustering using different measures of performance, such as the percentage of well-identified numbers of clusters and the percentage of non-rejection of H0 on the dependence parameter. It is shown that the CoClust algorithm overcomes all the observed limits of the other investigated clustering techniques and is able to identify clusters according to the dependence structure of the data, independently of the degree of overlap of the margins and the strength of the dependence. The CoClust uses a criterion based on the maximized log-likelihood function of the copula and can virtually account for any possible dependence relationship between observations. Many peculiar characteristics of the CoClust are shown, e.g. its capability of identifying the true number of clusters and the fact that it does not require a starting classification. Finally, the CoClust algorithm is applied to the real microarray data of Hedenfalk et al. (2001), both to the gene expressions observed in three different cancer samples and to the columns (tumor samples) of the whole data matrix.
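
A small sketch of the criterion at the heart of the CoClust idea, the maximized copula log-likelihood of a candidate allocation, shown here for a bivariate Gaussian copula with empirical margins (the inference-for-margins spirit). This is an illustration in Python rather than the dissertation's R implementation, and the Gaussian copula and toy data are assumptions made for the example.

```python
# Sketch: Gaussian copula log-likelihood used as a dependence-based score.
import numpy as np
from scipy import stats

def gaussian_copula_loglik(x, y):
    """Log-likelihood of a Gaussian copula fitted to paired samples x, y."""
    # Pseudo-observations from empirical margins, then probit transform.
    u = stats.rankdata(x) / (len(x) + 1)
    v = stats.rankdata(y) / (len(y) + 1)
    z1, z2 = stats.norm.ppf(u), stats.norm.ppf(v)
    rho = np.corrcoef(z1, z2)[0, 1]          # estimate of the copula parameter
    cov = np.array([[1.0, rho], [rho, 1.0]])
    # Copula density = bivariate normal density / product of standard normal margins.
    joint = stats.multivariate_normal(mean=[0, 0], cov=cov).logpdf(np.column_stack([z1, z2]))
    margins = stats.norm.logpdf(z1) + stats.norm.logpdf(z2)
    return np.sum(joint - margins), rho

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 0.8 * x + 0.6 * rng.normal(size=100)     # strongly dependent profile
z = rng.normal(size=100)                     # unrelated profile

print("dependent pair  :", gaussian_copula_loglik(x, y))
print("independent pair:", gaussian_copula_loglik(x, z))
# A CoClust-style algorithm would allocate observations to the grouping that
# maximizes this copula log-likelihood.
```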