956 results for COMPUTATIONAL APPROACH
Abstract:
Doctoral Thesis (Doctoral Programme in Biomedical Engineering)
Abstract:
We analyze the classical Bertrand model when consumers exhibit some strategic behavior in deciding from which seller they will buy. We use two related but different tools. Both consider a probabilistic learning (or evolutionary) mechanism, and in both of them consumers' behavior influences the competition between the sellers. The results obtained show that, in general, developing some sort of loyalty is a good strategy for the buyers, as it works in their best interest. First, we consider a learning procedure described by a deterministic dynamical system and, using strong simplifying assumptions, we can produce a description of the process behavior. Second, we use finite automata to represent the strategies played by the agents and an adaptive process based on genetic algorithms to simulate the stochastic process of learning. By doing so we can relax some of the strong assumptions used in the first approach and still obtain the same basic results. It is suggested that the limitations of the first approach (analytical) provide a good motivation for the second approach (Agent-Based). Indeed, although both approaches address the same problem, the use of Agent-Based computational techniques allows us to relax hypotheses and overcome the limitations of the analytical approach.
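To make the second tool concrete, here is a minimal sketch of how a buyer strategy can be encoded as a finite automaton of the kind a genetic algorithm would then evolve (by mutating transition tables and selecting on realized prices). The two-state "loyal until overcharged" machine and all names are illustrative assumptions, not the thesis's actual encoding.

```python
# A minimal sketch: buyer strategies as finite (Moore) automata.
# The two-state "loyal until overcharged" machine below is illustrative,
# not the thesis's actual encoding.

class BuyerAutomaton:
    """Moore machine: each state picks a seller; transitions read price observations."""

    def __init__(self, transitions, choice, start=0):
        self.transitions = transitions  # {(state, overcharged?): next_state}
        self.choice = choice            # {state: seller index}
        self.state = start

    def act(self):
        return self.choice[self.state]

    def observe(self, overcharged):
        self.state = self.transitions[(self.state, overcharged)]

# "Loyal until overcharged": stay with the current seller while its price is
# competitive, switch to the rival after paying more than necessary.
loyal = BuyerAutomaton(
    transitions={(0, False): 0, (0, True): 1, (1, False): 1, (1, True): 0},
    choice={0: 0, 1: 1},
)

prices = [(1.0, 1.2), (1.5, 1.2), (1.5, 1.1)]    # (seller0, seller1) per round
for p in prices:
    s = loyal.act()
    loyal.observe(overcharged=p[s] > min(p))     # switch if the rival was cheaper
    print(f"bought from seller {s} at price {p[s]}")
```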
Abstract:
Minimal models for the explanation of decision-making in computational neuroscience are based on the analysis of the evolution of the average firing rates of two interacting neuron populations. While these models typically lead to a multi-stable scenario for the basic derived dynamical systems, noise is an important feature of the model, taking into account finite-size effects and the robustness of the decisions. These stochastic dynamical systems can be analyzed by studying carefully their associated Fokker-Planck partial differential equation. In particular, we discuss the existence, positivity and uniqueness of the solution of the stationary equation, as well as of the time-evolving problem. Moreover, we prove convergence of the solution to the stationary state representing the probability distribution of finding the neuron families in each of the decision states characterized by their average firing rates. Finally, we propose a numerical scheme allowing for simulations performed on the Fokker-Planck equation which are in agreement with those obtained recently by a moment method applied to the stochastic differential system. Our approach leads to a more detailed analytical and numerical study of this decision-making model in computational neuroscience.
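For orientation, equations of this family take the generic drift-diffusion form below, where nu = (nu_1, nu_2) collects the two populations' average firing rates, F is the deterministic firing-rate vector field, and the diffusion coefficient D encodes the finite-size noise; the stationary state p_infinity solves the same equation with the time derivative set to zero. The paper's specific coefficients and boundary conditions are not reproduced here.

```latex
\partial_t p(\nu, t) = -\nabla \cdot \big( F(\nu)\, p(\nu, t) \big) + D\, \Delta p(\nu, t),
\qquad
0 = -\nabla \cdot \big( F(\nu)\, p_\infty(\nu) \big) + D\, \Delta p_\infty(\nu)
```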
Abstract:
We study the properties of the well known Replicator Dynamics when applied to a finitely repeated version of the Prisoners' Dilemma game. We characterize the behavior of such dynamics under strongly simplifying assumptions (i.e. only 3 strategies are available) and show that the basin of attraction of defection shrinks as the number of repetitions increases. After discussing the difficulties involved in trying to relax the 'strongly simplifying assumptions' above, we approach the same model by means of simulations based on genetic algorithms. The resulting simulations describe a behavior of the system very close to the one predicted by the replicator dynamics without imposing any of the assumptions of the mathematical model. Our main conclusion is that mathematical and computational models are good complements for research in social sciences. Indeed, while computational models are extremely useful to extend the scope of the analysis to complex scenarios hard to analyze mathematically, formal models can be useful to verify and to explain the outcomes of computational models.
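For concreteness, the sketch below integrates the replicator dynamics, x_i' = x_i((Ax)_i - x.Ax), for one illustrative three-strategy restriction of a five-fold repeated Prisoners' Dilemma (Always-Defect, Tit-for-Tat, Always-Cooperate, with standard stage payoffs 5/3/1/0). The strategy set, payoffs and horizon are assumptions for illustration; the thesis's exact setup may differ.

```python
import numpy as np

# Replicator dynamics for a finitely repeated Prisoners' Dilemma restricted
# to three strategies. Stage game: temptation=5, reward=3, punishment=1,
# sucker=0; horizon T=5 rounds. Row payoffs of AllD, TFT, AllC vs columns
# in the same order (e.g. AllD vs TFT: 5 + 4*1 = 9).
A = np.array([
    [ 5.0,  9.0, 25.0],   # Always-Defect
    [ 4.0, 15.0, 15.0],   # Tit-for-Tat
    [ 0.0, 15.0, 15.0],   # Always-Cooperate
])

def replicator(x, A, dt=0.01, steps=20_000):
    for _ in range(steps):
        payoff = A @ x
        x = x + dt * x * (payoff - x @ payoff)   # x_i' = x_i((Ax)_i - x.Ax)
        x = np.clip(x, 0, None); x /= x.sum()    # guard against numerical drift
    return x

print(replicator(np.array([0.3, 0.3, 0.4]), A))  # long-run strategy frequencies
```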
Abstract:
We evaluate the performance of different optimization techniques developed in the context of optical flow computation with different variational models. In particular, based on truncated Newton methods (TN), which have been an effective approach for large-scale unconstrained optimization, we develop the use of efficient multilevel schemes for computing the optical flow. More precisely, we compare the performance of a standard unidirectional multilevel algorithm, called multiresolution optimization (MR/OPT), to that of a bidirectional multilevel algorithm, called full multigrid optimization (FMG/OPT). The FMG/OPT algorithm treats the coarse grid correction as an optimization search direction and eventually scales it using a line search. Experimental results on different image sequences using four models of optical flow computation show that the FMG/OPT algorithm outperforms both the TN and MR/OPT algorithms in terms of computational work and quality of the optical flow estimation.
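For orientation, the classical Horn-Schunck functional below is one representative variational model of the kind these optimizers minimize after discretization (the paper evaluates four such models, which are not reproduced here): the flow field (u, v) balances the linearized brightness-constancy term against a smoothness penalty weighted by alpha.

```latex
E(u, v) = \int_{\Omega} \big( I_x u + I_y v + I_t \big)^2
 \;+\; \alpha \left( \lVert \nabla u \rVert^2 + \lVert \nabla v \rVert^2 \right) d\mathbf{x}
```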
Abstract:
Recently, the introduction of second generation sequencing and further advancements in confocal microscopy have enabled system-level studies for the functional characterization of genes. The degree of complexity intrinsic to these approaches requires the development of bioinformatics methodologies and computational models for extracting meaningful biological knowledge from the enormous amount of experimental data that is continuously generated. This PhD thesis presents several novel bioinformatics methods and computational models to address specific biological questions in Plant Biology, using the plant Arabidopsis thaliana as a model system. First, a spatio-temporal qualitative analysis of quantitative transcript and protein profiles is applied to show the role of the BREVIS RADIX (BRX) protein in the auxin-cytokinin crosstalk for root meristem growth. The core of this PhD work is the functional characterization of the interplay between the BRX protein and the plant hormone auxin in the root meristem, using a computational model based on experimental evidence. Hypotheses generated by the model led to the discovery of a differential endocytosis pattern in the root meristem that splits the auxin transcriptional response via the plasma-membrane-to-nucleus partitioning of BRX. This positional information system creates an auxin transcriptional pattern that deviates from the canonical auxin response and is necessary to sustain the expression of a subset of BRX-dependent auxin-responsive genes to drive root meristem growth. In the second part of this PhD thesis, we characterized the genome-wide impact of large-scale deletions on four divergent Arabidopsis natural strains, through the integration of ultra-high-throughput sequencing data with data from genomic hybridizations on tiling arrays. Analysis of the identified deletions revealed that a considerable portion of protein-coding genes is affected, and supported a history of genomic rearrangements shaped by evolution. In the last part of the thesis, we showed that the VIP3 gene in Arabidopsis has an evolutionarily conserved role in the 3' to 5' mRNA degradation machinery, by applying a novel approach to the analysis of mRNA-Seq data from random-primed mRNA. Altogether, this PhD research contains major advancements in the study of natural genomic variation in plants and in the application of computational morphodynamics models for the functional characterization of biological pathways essential for the plant.
Plant development is written in the genetic code. To understand how plants are able to adapt to environmental changes, it is essential to study how their genes govern their formation. The more we try to understand how a plant works, the more we realize the complexity of the biological mechanisms involved, to the point that the use of mathematical tools and models becomes indispensable. In this work, using the model plant Arabidopsis thaliana, we solved specific biological problems through the development and application of concrete computational methods. First, we investigated how the BREVIS RADIX (BRX) gene regulates root development by controlling the response to two hormones, auxin and cytokinin. We used a statistical analysis of quantitative measurements of transcripts and gene products to demonstrate that BRX plays an antagonizing role in the crosstalk between these two hormones. When this molecular dialogue is perturbed, the length of the primary root is dramatically reduced. To understand how BRX responds to auxin, we developed a computational model based on experimental results. The ensuing simulations led to the discovery of a positional signal that controls the root's response to auxin by regulating the intracellular movement of BRX. In the second part of this thesis, we analyzed the whole genome of four natural Arabidopsis strains and found that a large portion of their genes was missing compared to the reference strain. This result indicates that the history of genomic modifications driven by evolution determines a differential availability of functional genes in these plants. In the last part of this work, we analyzed transcriptome data from plants in which the VIP3 gene was non-functional. This allowed us to discover a dual role of VIP3 in the regulation of transcription initiation and in transcript degradation, a dual role that had previously been demonstrated only in humans. This doctoral work supports the development and application of computational methodologies as invaluable tools for unraveling the complexity of biological problems in plant research. The integration of plant biology and computer science is becoming increasingly important for advancing our knowledge of how plants function and develop.
Abstract:
Recognition by the T-cell receptor (TCR) of immunogenic peptides (p) presented by Class I major histocompatibility complexes (MHC) is the key event in the immune response against virus-infected cells or tumor cells. A study of the 2C TCR/SIYR/H-2K(b) system using a computational alanine scanning and a much faster binding free energy decomposition based on the Molecular Mechanics-Generalized Born Surface Area (MM-GBSA) method is presented. The results show that the TCR-p-MHC binding free energy decomposition using this approach and including entropic terms provides a detailed and reliable description of the interactions between the molecules at an atomistic level. Comparison of the decomposition results with experimentally determined activity differences for alanine mutants yields a correlation of 0.67 when the entropy is neglected and 0.72 when the entropy is taken into account. Similarly, comparison of experimental activities with variations in binding free energies determined by computational alanine scanning yields correlations of 0.72 and 0.74 when the entropy is neglected or taken into account, respectively. Some key interactions for TCR-p-MHC binding are analyzed and some possible side-chain replacements are proposed in the context of TCR protein engineering. In addition, a comparison of the two theoretical approaches for estimating the role of each side chain in the complexation is given, and a new ad hoc approach to decompose the vibrational entropy term into atomic contributions, the linear decomposition of the vibrational entropy (LDVE), is introduced. The latter allows the rapid calculation of the entropic contribution to binding of the side chains of interest. This new method is based on the idea that the most important contributions to the vibrational entropy of a molecule originate from residues that contribute most to the vibrational amplitude of the normal modes. The LDVE approach is shown to provide results very similar to those of the exact but highly computationally demanding method.
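Schematically, an MM-GBSA binding free energy estimate has the form below, where the molecular-mechanics energy E_MM, the generalized-Born solvation term G_GB and the surface-area term G_SA are averaged over conformational snapshots of the complex and of the separated TCR and p-MHC partners, and the entropic contribution enters through -T Delta S. The notation is generic rather than the paper's exact protocol; per-residue decomposition then distributes each term over individual side chains.

```latex
\Delta G_{\mathrm{bind}} =
\left\langle E_{\mathrm{MM}} + G_{\mathrm{GB}} + G_{\mathrm{SA}} \right\rangle_{\mathrm{complex}}
- \left\langle E_{\mathrm{MM}} + G_{\mathrm{GB}} + G_{\mathrm{SA}} \right\rangle_{\mathrm{TCR}}
- \left\langle E_{\mathrm{MM}} + G_{\mathrm{GB}} + G_{\mathrm{SA}} \right\rangle_{\mathrm{pMHC}}
- T \Delta S
```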
Abstract:
Methods like Event History Analysis can show the existence of diffusion and part of its nature, but do not study the process itself. Nowadays, thanks to the increasing performance of computers, processes can be studied using computational modeling. This thesis presents an agent-based model of policy diffusion mainly inspired by the model developed by Braun and Gilardi (2006). I first develop a theoretical framework of policy diffusion that presents the main internal drivers of policy diffusion, such as the preference for the policy, the effectiveness of the policy, the institutional constraints, and the ideology, and its main mechanisms, namely learning, competition, emulation, and coercion. Diffusion, expressed by these interdependencies, is therefore a complex process that needs to be studied with computational agent-based modeling. In a second step, computational agent-based modeling is defined along with its most significant concepts: complexity and emergence. Using computational agent-based modeling implies the development of an algorithm and its programming. Once the algorithm has been programmed, we let the different agents interact. Consequently, a phenomenon of diffusion, derived from learning, emerges, meaning that the choice made by an agent is conditional on those made by its neighbors. As a result, learning follows an inverted S-curve, which leads to partial convergence (global divergence and local convergence) that triggers the emergence of political clusters, i.e. the creation of regions with the same policy. Furthermore, the average effectiveness in this computational world tends to follow a J-shaped curve, meaning that not only is time needed for a policy to deploy its effects, but it also takes time for a country to find the best-suited policy. To conclude, diffusion is an emergent phenomenon arising from complex interactions, and the outcomes of my model are in line with the theoretical expectations and the empirical evidence.
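As a concrete illustration of the learning mechanism, the sketch below implements neighbor-conditional adoption on a ring of countries. All numbers and the ring topology are invented for illustration, and the thesis's model is richer (preferences, policy effectiveness, institutional constraints, ideology); still, the cumulative adoption counts trace out the S-shaped diffusion curve described above.

```python
import random

# Minimal sketch of learning-driven policy diffusion on a ring of countries
# (illustrative only; parameters and topology are invented).
random.seed(1)
N, T = 100, 60
adopted = [False] * N
adopted[0] = True                        # a single early adopter

counts = []
for t in range(T):
    nxt = adopted[:]
    for i in range(N):
        if adopted[i]:
            continue
        neighbors = adopted[(i - 1) % N] + adopted[(i + 1) % N]  # 0, 1 or 2
        p = 0.01 + 0.25 * neighbors      # adoption is conditional on neighbors
        if random.random() < p:
            nxt[i] = True
    adopted = nxt
    counts.append(sum(adopted))

print(counts)                            # roughly S-shaped cumulative adoption
```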
Abstract:
This paper provides a new and accessible approach to establishing certain results concerning the discounted penalty function. The direct approach consists of two steps. In the first step, closed-form expressions are obtained in the special case in which the claim amount distribution is a combination of exponential distributions. A rational function is useful in this context. For the second step, one observes that the family of combinations of exponential distributions is dense. Hence, it suffices to reformulate the results of the first step to obtain general results. The surplus process has downward and upward jumps, modeled by two independent compound Poisson processes. If the distribution of the upward jumps is exponential, a series of new results can be obtained with ease. Subsequently, certain results of Gerber and Shiu [H. U. Gerber and E. S. W. Shiu, North American Actuarial Journal 2(1): 48–78 (1998)] can be reproduced. The two-step approach is also applied when an independent Wiener process is added to the surplus process. Certain results are related to Zhang et al. [Z. Zhang, H. Yang, and S. Li, Journal of Computational and Applied Mathematics 233: 1773–1784 (2010)], which uses different methods.
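For reference, the discounted penalty function of Gerber and Shiu (1998) that these results concern is defined by the standard expression below, where U is the surplus process with U(0) = u, T is the time of ruin, delta >= 0 is the force of interest used for discounting, and w is a penalty applied to the surplus immediately prior to ruin and the deficit at ruin:

```latex
\phi(u) = \mathbb{E}\!\left[ e^{-\delta T}\, w\!\left( U(T^-),\, |U(T)| \right) \mathbf{1}_{\{T < \infty\}} \,\middle|\, U(0) = u \right]
```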
Abstract:
Background: The 'database search problem', that is, the strengthening of a case (in terms of probative value) against an individual who is found as a result of a database search, has been approached during the last two decades with substantial mathematical analyses, accompanied by lively debate and centrally opposing conclusions. This represents a challenging obstacle in teaching and also hinders a balanced and coherent discussion of the topic within the wider scientific and legal community. This paper revisits and tracks the associated mathematical analyses in terms of Bayesian networks. Their derivation and discussion for capturing probabilistic arguments that explain the database search problem are outlined in detail. The resulting Bayesian networks offer a distinct view of the main debated issues, along with further clarity.
Methods: As a general framework for representing and analyzing formal arguments in probabilistic reasoning about uncertain target propositions (that is, whether or not a given individual is the source of a crime stain), this paper relies on graphical probability models, in particular Bayesian networks. This graphical probability modeling approach is used to capture, within a single model, a series of key variables, such as the number of individuals in a database, the size of the population of potential crime stain sources, and the rarity of the corresponding analytical characteristics in a relevant population.
Results: This paper demonstrates the feasibility of deriving Bayesian network structures for analyzing, representing, and tracking the database search problem. The output of the proposed models can be shown to agree with existing but exclusively formulaic approaches.
Conclusions: The proposed Bayesian networks allow one to capture and analyze the currently most well-supported but reputedly counter-intuitive and difficult solution to the database search problem in a way that goes beyond traditional, purely formulaic expressions. The method's graphical environment, along with its computational and probabilistic architectures, represents a rich package that offers analysts and discussants additional modes of interaction, concise representation, and coherent communication.
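To make the reputedly counter-intuitive solution concrete, here is a toy numerical reading of it under strongly simplifying assumptions (uniform priors, independence, invented numbers): with N potential sources, a database of n profiles, random-match probability gamma, one matching database member and n-1 exclusions, the posterior probability that the matching person is the source comes out slightly higher after a database search than in a single-suspect case, because the exclusions shrink the pool of alternative sources.

```python
from fractions import Fraction

# Toy posterior computation for the database search problem; all numbers
# are invented and the model is deliberately simplistic (uniform prior over
# N potential sources, independent matching with probability gamma).

def posterior_probable_cause(N, gamma):
    # A single suspect matches; nobody else has been typed.
    return Fraction(1) / (1 + (N - 1) * gamma)

def posterior_database_search(N, n, gamma):
    # The n-1 excluded database members shrink the pool of
    # alternative sources from N-1 to N-n.
    return Fraction(1) / (1 + (N - n) * gamma)

N, n, gamma = 1_000_000, 10_000, Fraction(1, 100_000)
print(float(posterior_probable_cause(N, gamma)))      # ~0.0909
print(float(posterior_database_search(N, n, gamma)))  # ~0.0917, slightly stronger
```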
Abstract:
Colorectal cancer is a heterogeneous disease that manifests through diverse clinical scenarios. For many years, our knowledge about the variability of colorectal tumors was limited to the histopathological analysis from which generic classifications associated with different clinical expectations are derived. Currently, however, we are beginning to understand that beneath the intense pathological and clinical variability of these tumors lies strong genetic and biological heterogeneity. Thus, with the increasing available information on inter-tumor and intra-tumor heterogeneity, the classical pathological approach is being displaced in favor of novel molecular classifications. In the present article, we summarize the most relevant proposals of molecular classifications obtained from the analysis of colorectal tumors using powerful high-throughput techniques and devices. We also discuss the role that cancer systems biology may play in the integration and interpretation of the large amount of data generated, and the challenges to be addressed in the future development of precision oncology. In addition, we review the current state of implementation of these novel tools in the pathological laboratory and in clinical practice.
Abstract:
We developed a procedure that combines three complementary computational methodologies to improve the theoretical description of the electronic structure of nickel oxide. The starting point is a Car-Parrinello molecular dynamics simulation to incorporate vibrorotational degrees of freedom into the material model. By means of complete active space self-consistent field second-order perturbation theory (CASPT2) calculations on embedded clusters extracted from the resulting trajectory, we describe localized spectroscopic phenomena on NiO with an efficient treatment of electron correlation. The inclusion of thermal motion into the theoretical description allows us to study electronic transitions that, otherwise, would be dipole forbidden in the ideal structure, and results in a natural reproduction of the band broadening. Moreover, we improved the embedded cluster model by incorporating self-consistently, at the complete active space self-consistent field (CASSCF) level, a discrete (or direct) reaction field (DRF) in the cluster surroundings. The DRF approach offers an efficient treatment of the electric response effects of the crystalline embedding on the electronic transitions localized in the cluster. We offer accurate theoretical estimates of the absorption spectrum and the density of states around the Fermi level of NiO, and a comprehensive explanation of the source of the broadening and of the relaxation of the charge transfer states due to the adaptation of the environment.
Abstract:
Human arteries affected by atherosclerosis are characterized by altered wall viscoelastic properties. The possibility of noninvasively assessing arterial viscoelasticity in vivo would significantly contribute to the early diagnosis and prevention of this disease. This paper presents a noniterative technique to estimate the viscoelastic parameters of a vascular wall Zener model. The approach requires the simultaneous measurement of flow variations and wall displacements, which can be provided by suitable ultrasound Doppler instruments. Viscoelastic parameters are estimated by fitting the theoretical constitutive equations to the experimental measurements using an ARMA parameter approach. The accuracy and sensitivity of the proposed method are tested using reference data generated by numerical simulations of arterial pulsation, in which the physiological conditions and the viscoelastic parameters of the model can be suitably varied. The estimated values quantitatively agree with the reference values, showing that the only parameter affected by changing the physiological conditions is viscosity, whose relative error was about 27%, even when a poor signal-to-noise ratio was simulated. Finally, the feasibility of the method is illustrated through three measurements made at different flow regimes on a cylindrical vessel phantom, yielding a mean parameter estimation error of 25%.
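For reference, one common parameterization of the Zener (standard linear solid) constitutive law is shown below, relating wall stress sigma and strain epsilon through a relaxed modulus E_0 and two time constants. Discretizing this first-order differential relation at the sampling rate yields a low-order ARMA difference equation between the flow-derived and displacement-derived signals, which is the kind of structure such a fit exploits; the paper's exact parameter mapping is not reproduced here.

```latex
\sigma(t) + \tau_\varepsilon\,\dot{\sigma}(t)
= E_0 \left( \varepsilon(t) + \tau_\sigma\,\dot{\varepsilon}(t) \right),
\qquad \tau_\sigma > \tau_\varepsilon > 0
```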
Abstract:
This research investigates the phenomenon of translationese in two monolingual comparable corpora of original and translated Catalan texts. Translationese has been defined as the dialect, sub-language or code of translated language. This study aims at giving empirical evidence of translation universals regardless of the source language. Traditionally, research conducted on translation strategies has been mainly intuition-based. Computational Linguistics and Natural Language Processing techniques provide reliable information on lexical frequencies and on morphological and syntactic distributions in corpora. Therefore, they have been applied to observe which translation strategies occur in these corpora. Results seem to support the simplification, interference and explicitation hypotheses, whereas no sign of normalization has been detected with the methodology used. The data collected and the resources created for identifying lexical, morphological and syntactic patterns of translations can be useful for Translation Studies teachers, scholars and students: teachers will have more tools to help students avoid the reproduction of translationese patterns. The resources developed will help in detecting non-genuine or inadequate structures in the target language. This may imply an improvement in the stylistic quality of translations. Translation professionals can also take advantage of these resources to improve their translation quality.
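As a minimal illustration of the corpus-frequency methodology, the toy snippet below compares the type/token ratio, one crude lexical-variety indicator relevant to the simplification hypothesis, on two invented token lists; the study itself relies on much richer lexical, morphological and syntactic features.

```python
# Toy illustration of the simplification hypothesis: translated text is
# expected to show lower lexical variety. Token lists are invented.
def type_token_ratio(tokens):
    return len(set(tokens)) / len(tokens)

original   = "gat gos casa arbre cel mar vent foc llum nit".split()
translated = "gat gos casa casa arbre arbre cel cel mar mar".split()
print(type_token_ratio(original))    # 1.0
print(type_token_ratio(translated))  # 0.5, lower variety
```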
Abstract:
The Generalized Assignment Problem consists in assigning a set of tasks to a set of agents with minimum cost. Each agent has a limited amount of a single resource, and each task must be assigned to one and only one agent, requiring a certain amount of the agent's resource. We present new metaheuristics for the Generalized Assignment Problem based on hybrid approaches. One metaheuristic is a MAX-MIN Ant System (MMAS), an improved version of the Ant System, which was recently proposed by Stutzle and Hoos for combinatorial optimization problems; it can be seen as an adaptive sampling algorithm that takes into consideration the experience gathered in earlier iterations of the algorithm. Moreover, this heuristic is combined with local search and tabu search heuristics to improve the search. A greedy randomized adaptive search procedure (GRASP) is also proposed. Several neighborhoods are studied, including one based on ejection chains that produces good moves without increasing the computational effort. We present computational results on the comparative performance, followed by concluding remarks and ideas on future research in generalized assignment related problems.
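For reference, the standard integer-programming formulation of the Generalized Assignment Problem reads as follows, with agents i, tasks j, assignment cost c_ij, resource requirement r_ij and agent capacity b_i:

```latex
\min \sum_{i}\sum_{j} c_{ij}\, x_{ij}
\quad \text{s.t.} \quad
\sum_{i} x_{ij} = 1 \;\; \forall j, \qquad
\sum_{j} r_{ij}\, x_{ij} \le b_i \;\; \forall i, \qquad
x_{ij} \in \{0, 1\}.
```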