996 results for PIECEWISE VECTOR FIELDS
Abstract:
Descriptors based on Molecular Interaction Fields (MIF) are highly suitable for drug discovery, but their size (thousands of variables) often limits their application in practice. Here we describe a simple and fast computational method that extracts from a MIF a handful of highly informative points (hot spots) which summarize the most relevant information. The method was developed specifically for drug discovery, is fast, and requires no human supervision, making it suitable for application to very large series of compounds. The quality of the results has been tested by running the method on the ligand structures of a large number of ligand-receptor complexes and then comparing the positions of the selected hot spots with actual atoms of the receptor. As an additional test, the hot spots obtained with the novel method were used to obtain GRIND-like molecular descriptors, which were compared with the original GRIND. In both cases the results show that the novel method is highly suitable for describing ligand-receptor interactions and compares favorably with other state-of-the-art methods.
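As a rough, hypothetical illustration of the general idea (this is not the algorithm described above), one can reduce a MIF grid to a few representative points by keeping the most favorable energies and clustering them; the grid data, threshold, and number of hot spots below are invented.

```python
# Generic sketch: reduce a molecular interaction field (MIF) grid to a few
# representative "hot spots". Hypothetical data and parameters; this is NOT
# the algorithm described in the abstract above.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
coords = rng.uniform(0.0, 20.0, size=(5000, 3))      # grid node coordinates (angstroms)
energies = rng.normal(0.0, 1.0, size=5000)           # interaction energies (kcal/mol)

# Keep only the most favorable (lowest-energy) grid points.
favorable = energies < np.percentile(energies, 2)
pts, en = coords[favorable], energies[favorable]

# Group the favorable points into a handful of spatial clusters and keep the
# lowest-energy point of each cluster as a hot spot.
n_hot_spots = 5
labels = KMeans(n_clusters=n_hot_spots, n_init=10, random_state=0).fit_predict(pts)
hot_spots = np.array([pts[labels == k][np.argmin(en[labels == k])]
                      for k in range(n_hot_spots)])
print(hot_spots)
```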
Abstract:
Induced pluripotent stem (iPS) cells have generated keen interest due to their potential use in regenerative medicine. They have been obtained from various cell types of both mice and humans by exogenous delivery of different combinations of Oct4, Sox2, Klf4, c-Myc, Nanog, and Lin28. The delivery of these transcription factors has mostly entailed the use of integrating viral vectors (retroviruses or lentiviruses), carrying the risk of both insertional mutagenesis and oncogenesis due to misexpression of these exogenous factors. Therefore, obtaining iPS cells that do not carry integrated transgene sequences is an important prerequisite for their eventual therapeutic use. Here we report the generation of iPS cell lines from mouse embryonic fibroblasts with no evidence of integration of the reprogramming vector in their genome, achieved by nucleofection of a polycistronic construct coexpressing Oct4, Sox2, Klf4, and c-Myc.
Abstract:
Building a personalized model of the drug concentration inside the human body for each patient is highly important to clinical practice and places strong demands on modeling tools. Instead of using traditional explicit methods, in this paper we propose a machine learning approach to describe the relation between the drug concentration and patients' features. Machine learning has been widely applied to data analysis in various domains, but it is still new to personalized medicine, especially dose individualization. We focus mainly on the prediction of drug concentrations as well as the analysis of the influence of different features. Models are built based on Support Vector Machines, and the prediction results are compared with those of traditional analytical models.
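A minimal sketch of the kind of model described above, fitting a support vector regressor to predict drug concentration from patient features; the feature names, synthetic data, and hyperparameters are placeholders and are not taken from the paper.

```python
# Minimal sketch: support vector regression of drug concentration from patient
# features. Synthetic data and hypothetical feature names, for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

rng = np.random.default_rng(42)
# Hypothetical features: age, body weight, dose, hours since administration.
X = rng.uniform([20, 40, 50, 1], [80, 120, 400, 24], size=(200, 4))
y = 0.02 * X[:, 2] * np.exp(-0.1 * X[:, 3]) + rng.normal(0, 0.05, 200)  # toy kinetics

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X, y)

new_patient = np.array([[55, 70, 200, 12]])
print("predicted concentration:", model.predict(new_patient)[0])
```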
Abstract:
In this article we present a hybrid approach for automatic summarization of Spanish medical texts. There are many systems for automatic summarization based on statistics or on linguistics, but only a few of them combine both techniques. Our idea is that producing a good summary requires using the linguistic aspects of the text while also benefiting from the advantages of statistical techniques. We have integrated the Cortex (Vector Space Model) and Enertex (statistical physics) systems, coupled with the Yate term extractor, and the Disicosum system (linguistics). We have compared these systems and subsequently integrated them into a hybrid approach. Finally, we have applied this hybrid system to a corpus of medical articles and evaluated its performance, obtaining good results.
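As a hedged illustration of the statistical component of such a hybrid, the sketch below ranks sentences in a vector space model by their similarity to the document centroid and extracts the top-ranked ones; it is not the Cortex, Enertex, Yate, or Disicosum implementation, and the toy text is invented.

```python
# Sketch of a simple vector-space-model extractive summarizer: rank sentences
# by similarity to the document centroid. Not the systems cited in the
# abstract; the toy document and parameters are illustrative only.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def summarize(sentences, n_keep=2):
    tfidf = TfidfVectorizer().fit_transform(sentences)        # sentence-term matrix
    centroid = np.asarray(tfidf.mean(axis=0))                  # document centroid
    scores = np.asarray(tfidf @ centroid.T).ravel()            # similarity to centroid
    best = sorted(np.argsort(scores)[::-1][:n_keep])           # keep document order
    return [sentences[i] for i in best]

doc = ["The patient presented with acute chest pain.",
       "An electrocardiogram showed ST-segment elevation.",
       "The weather that day was unremarkable.",
       "Treatment with thrombolytics was started immediately."]
print(summarize(doc))
```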
Abstract:
The paper proposes an approach aimed at detecting optimal model parameter combinations to achieve the most representative description of uncertainty in the model performance. A classification problem is posed to find the regions of good-fitting models according to the values of a cost function. Support Vector Machine (SVM) classification in the parameter space is applied to decide whether a forward model simulation is to be computed for a particular generated model. SVM is particularly designed to tackle classification problems in high-dimensional spaces in a non-parametric and non-linear way. SVM decision boundaries determine the regions that are subject to the largest uncertainty in the cost function classification and therefore provide guidelines for further iterative exploration of the model space. The proposed approach is illustrated by a synthetic example of fluid flow through porous media, which features a highly variable response due to the combination of parameter values.
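A schematic sketch of the screening idea: an SVM classifier is trained on already-evaluated parameter combinations, labelled by whether their cost function value is acceptable, and is then used to decide which new candidates warrant a forward simulation; the toy cost function, labelling rule, and margin threshold are placeholders, not the paper's settings.

```python
# Sketch: SVM classification in parameter space to decide which candidate
# parameter sets deserve a (costly) forward simulation. Toy cost function and
# thresholds; the paper's actual model and settings are not reproduced here.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def proxy_cost(theta):
    # Stand-in for the misfit returned by an already-run forward simulation.
    return np.sum((theta - 0.3) ** 2, axis=1)

# Parameter combinations that have already been simulated, labelled good/bad.
evaluated = rng.uniform(0.0, 1.0, size=(300, 4))
cost = proxy_cost(evaluated)
labels = (cost < np.percentile(cost, 20)).astype(int)   # 1 = among best-fitting 20%

clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(evaluated, labels)

# New candidates: run a forward simulation only where the classifier predicts
# a good fit, or near the decision boundary where uncertainty is largest.
candidates = rng.uniform(0.0, 1.0, size=(1000, 4))
margin = clf.decision_function(candidates)
to_simulate = candidates[(clf.predict(candidates) == 1) | (np.abs(margin) < 0.2)]
print(f"{len(to_simulate)} of {len(candidates)} candidates selected for simulation")
```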
Abstract:
This paper presents a review of methodology for semi-supervised modeling with kernel methods when the manifold assumption is guaranteed to be satisfied. It concerns environmental data modeling on natural manifolds, such as the complex topographies of mountainous regions, where environmental processes are strongly influenced by the relief. These relations, possibly regionalized and nonlinear, can be modeled from data with machine learning by using digital elevation models in semi-supervised kernel methods. The range of tools and methodological issues discussed in the study includes feature selection and semi-supervised Support Vector algorithms. A real case study devoted to data-driven modeling of meteorological fields illustrates the discussed approach.
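A minimal sketch of a kernel-based semi-supervised setup of the kind the review discusses, assuming a label-spreading model with an RBF kernel and terrain features derived from a digital elevation model; the features, labels, and hyperparameters are synthetic placeholders.

```python
# Sketch: kernel-based semi-supervised classification with DEM-derived features
# (elevation, slope) plus coordinates. Synthetic data; illustrative only.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(7)
n = 500
coords = rng.uniform(0.0, 100.0, size=(n, 2))          # x, y location (km)
elevation = rng.uniform(200.0, 3000.0, size=(n, 1))    # from a DEM (m)
slope = rng.uniform(0.0, 45.0, size=(n, 1))            # terrain slope (degrees)
X = StandardScaler().fit_transform(np.hstack([coords, elevation, slope]))

# Only a few monitoring stations are labelled (here, a toy binary climate
# class); -1 marks unlabelled locations, as LabelSpreading expects.
y = -np.ones(n, dtype=int)
labelled = rng.choice(n, size=30, replace=False)
y[labelled] = (elevation[labelled, 0] > 1500).astype(int)

model = LabelSpreading(kernel="rbf", gamma=0.5, alpha=0.2).fit(X, y)
print("class counts over all locations:", np.bincount(model.transduction_))
```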
Abstract:
BACKGROUND AND PURPOSE: MCI was recently subdivided into sd-aMCI, sd-fMCI, and md-aMCI. The current investigation aimed to discriminate between MCI subtypes by using DTI. MATERIALS AND METHODS: Sixty-six prospective participants were included: 18 with sd-aMCI, 13 with sd-fMCI, and 35 with md-aMCI. Statistics included group comparisons using TBSS and individual classification using SVMs. RESULTS: The group-level analysis revealed a decrease in FA in md-aMCI versus sd-aMCI in an extensive bilateral, right-dominant network, and a more pronounced reduction of FA in md-aMCI compared with sd-fMCI in right inferior fronto-occipital fasciculus and inferior longitudinal fasciculus. The comparison between sd-fMCI and sd-aMCI, as well as the analysis of the other diffusion parameters, yielded no significant group differences. The individual-level SVM analysis provided discrimination between the MCI subtypes with accuracies around 97%. The major limitation is the relatively small number of cases of MCI. CONCLUSIONS: Our data show that, at the group level, the md-aMCI subgroup has the most pronounced damage in white matter integrity. Individually, SVM analysis of white matter FA provided highly accurate classification of MCI subtypes.
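As an illustration of the individual-level analysis, the sketch below cross-validates a multi-class SVM on per-subject white-matter FA features; the feature matrix, labels, and hyperparameters are synthetic stand-ins, not the study's data or pipeline.

```python
# Sketch: cross-validated multi-class SVM classification of MCI subtypes from
# per-subject FA features. Synthetic stand-in data; not the study's pipeline.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_subjects, n_features = 66, 50            # e.g. mean FA in 50 white-matter tracts
X = rng.normal(0.45, 0.05, size=(n_subjects, n_features))
y = np.repeat([0, 1, 2], [18, 13, 35])     # sd-aMCI, sd-fMCI, md-aMCI (toy labels)
X[y == 2] -= 0.02                          # inject a small group effect for the demo

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print("cross-validated accuracy: %.2f +/- %.2f" % (scores.mean(), scores.std()))
```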
Abstract:
The network revenue management (RM) problem arises in airline, hotel, media, and other industries where the sale products use multiple resources. It can be formulated as a stochastic dynamic program, but the dynamic program is computationally intractable because of an exponentially large state space, and a number of heuristics have been proposed to approximate it. Notable amongst these, both for their revenue performance and their theoretically sound basis, are approximate dynamic programming methods that approximate the value function by basis functions (both affine functions and piecewise-linear functions have been proposed for network RM) and decomposition methods that relax the constraints of the dynamic program to solve simpler dynamic programs (such as the Lagrangian relaxation methods). In this paper we show that these two seemingly distinct approaches coincide for the network RM dynamic program, i.e., the piecewise-linear approximation method and the Lagrangian relaxation method are one and the same.
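In generic notation (not necessarily that of the paper), the two families of approximations can be sketched as follows: x = (x_1, ..., x_m) is the vector of remaining capacities of the m resources, V_t is the dynamic programming value function at time t, and c is the initial capacity vector.

```latex
% Schematic sketch only; notation is generic and not taken from the paper.
% Separable piecewise-linear value-function approximation:
\[
  V_t(x) \;\approx\; \theta_t \;+\; \sum_{i=1}^{m} v_{t,i}(x_i),
  \qquad v_{t,i}\ \text{piecewise linear and concave in } x_i .
\]
% Lagrangian relaxation: the constraints coupling the resources are relaxed
% with multipliers \lambda, so the dynamic program separates into m
% single-resource dynamic programs and yields an upper bound
\[
  V_t(x) \;\le\; V_t^{\lambda}(x) \;=\; \sum_{i=1}^{m} V^{\lambda}_{t,i}(x_i),
  \qquad \text{with } \lambda \text{ chosen to minimize } V_1^{\lambda}(c).
\]
% The abstract's claim is that, for network RM, these two approaches
% coincide, i.e. the piecewise-linear and Lagrangian methods are the same.
```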
Abstract:
Recently, kernel-based machine learning methods have gained great popularity in many data analysis and data mining fields: pattern recognition, biocomputing, speech and vision, engineering, remote sensing, etc. The paper describes the use of kernel methods to approach the processing of large datasets from environmental monitoring networks. Several typical problems of the environmental sciences and their solutions provided by kernel-based methods are considered: classification of categorical data (soil type classification), mapping of continuous environmental and pollution information (pollution of soil by radionuclides), and mapping with auxiliary information (climatic data from the Aral Sea region). Promising developments, such as automatic emergency hot spot detection and monitoring network optimization, are discussed as well.
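As a small illustration of kernel-based mapping of a continuous environmental variable (with invented data, not the radionuclide or Aral Sea datasets mentioned above), the sketch below fits an RBF-kernel ridge regression to scattered measurements and predicts values on a regular grid.

```python
# Sketch: kernel ridge regression with an RBF kernel for mapping a continuous
# environmental variable from scattered measurements. Synthetic data only.
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(3)
stations = rng.uniform(0.0, 50.0, size=(150, 2))                    # station locations (km)
values = np.sin(stations[:, 0] / 10) + 0.1 * rng.normal(size=150)   # e.g. soil activity

model = KernelRidge(kernel="rbf", gamma=0.05, alpha=0.1).fit(stations, values)

# Predict on a regular grid to produce a map.
gx, gy = np.meshgrid(np.linspace(0, 50, 100), np.linspace(0, 50, 100))
grid = np.column_stack([gx.ravel(), gy.ravel()])
pollution_map = model.predict(grid).reshape(gx.shape)
print("map shape:", pollution_map.shape,
      "value range:", pollution_map.min(), pollution_map.max())
```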
Abstract:
In normal mice, the lentiviral vector (LV) is very efficient at targeting RPE cells, but it transduces retinal neurons well only during development. In the present study, the tropism of LV was investigated in the degenerating retina of mice, given that the structure of the retina changes during degeneration. We postulated that viral transduction would be increased by the alteration of the outer limiting membrane (OLM). Two different LV pseudotypes were tested, using the VSVG and Mokola envelopes, as well as two animal models of retinal degeneration: light-damaged Balb-C and Rhodopsin knockout (Rho-/-) mice. After light damage, the OLM is altered and no significant increase in the number of transduced photoreceptors can be obtained with a LV-VSVG-Rhop-GFP vector. In Rho-/- mice, an alteration of the OLM was also observed, but photoreceptor transduction was decreased, probably by ongoing gliosis. The use of a ubiquitous promoter allows better photoreceptor transduction, suggesting that photoreceptor-specific promoter activity changes during late stages of photoreceptor degeneration. However, the number of targeted photoreceptors remains low. In contrast, LV pseudotyped with the Mokola envelope allows a wide dispersion of the vector into the retina (corresponding to the injection bleb) with preferential targeting of Müller cells, a situation which does not occur in the wild-type retina. Mokola-pseudotyped lentiviral vectors may thus serve to engineer these glial cells to deliver secreted therapeutic factors to a diseased area of the retina.
Abstract:
This paper explores the relationships between noncooperative bargaining games and the consistent value for non-transferable utility (NTU) cooperative games. A dynamic approach to the consistent value for NTU games is introduced: the consistent vector field. The main contribution of the paper is to show that the consistent field is intimately related to the concept of subgame perfection for finite-horizon noncooperative bargaining games, as the horizon goes to infinity and the cost of delay goes to zero. The solutions of the dynamic system associated with the consistent field characterize the subgame perfect equilibrium payoffs of the noncooperative bargaining games. We show that for transferable utility, hyperplane, and pure bargaining games, the dynamics of the consistent fields converge globally to the unique consistent value. However, in the general NTU case, the dynamics of the consistent field can be complex. An example is constructed where the consistent field has cyclic solutions; moreover, the finite-horizon subgame perfect equilibria do not approach the consistent value.
Abstract:
Closely related species may be very difficult to distinguish morphologically, yet sometimes morphology is the only reasonable basis for taxonomic classification. Here we present learning vector quantization artificial neural networks as a powerful tool for classifying specimens on the basis of geometric morphometric shape measurements. As an example, we trained a neural network to distinguish between field and root voles from Procrustes-transformed landmark coordinates on the dorsal side of the skull, which is so similar in these two species that the human eye cannot make the distinction. Properly trained neural networks misclassified only 3% of specimens. We therefore conclude that the capacity of learning vector quantization neural networks to analyse spatial coordinates makes them a powerful addition to the range of pattern recognition procedures available for exploiting the information content of geometric morphometrics.
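To make the classifier concrete, here is a minimal LVQ1-style sketch in plain NumPy: class prototypes are nudged toward training points they classify correctly and away from those they misclassify; the data stand in for Procrustes-aligned landmark coordinates and are purely synthetic, not the authors' trained network.

```python
# Minimal LVQ1 sketch: class prototypes move toward samples they classify
# correctly and away from samples they misclassify. Synthetic stand-in for
# Procrustes-aligned landmark coordinates; not the authors' trained network.
import numpy as np

def train_lvq1(X, y, prototypes_per_class=2, lr=0.05, epochs=50, seed=0):
    rng = np.random.default_rng(seed)
    classes = np.unique(y)
    # Initialise prototypes from random training samples of each class.
    protos, proto_labels = [], []
    for c in classes:
        idx = rng.choice(np.where(y == c)[0], prototypes_per_class, replace=False)
        protos.append(X[idx])
        proto_labels.extend([c] * prototypes_per_class)
    W, wl = np.vstack(protos).astype(float), np.array(proto_labels)
    for _ in range(epochs):
        for i in rng.permutation(len(X)):
            j = np.argmin(np.sum((W - X[i]) ** 2, axis=1))   # nearest prototype
            step = lr * (X[i] - W[j])
            W[j] += step if wl[j] == y[i] else -step          # attract or repel
    return W, wl

def predict_lvq(W, wl, X):
    d = ((X[:, None, :] - W[None, :, :]) ** 2).sum(axis=2)
    return wl[np.argmin(d, axis=1)]

# Toy two-species example with 2-D "shape" features.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 2)), rng.normal(1.5, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
W, wl = train_lvq1(X, y)
print("training accuracy:", (predict_lvq(W, wl, X) == y).mean())
```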