Abstract:
This thesis is composed of three main parts. The first consists of a state of the art of the different notions that are significant to understanding the elements surrounding art authentication in general, and signatures in particular, and that the author deemed necessary to fully grasp the microcosm that makes up this particular market. Readers with a solid knowledge of the art and expertise area who are particularly interested in the present study are advised to advance directly to the fourth chapter. The expertise of the signature, its reliability, and the factors impacting the expert's conclusions are brought forward. The final aim of the state of the art is to offer a general list of recommendations based on an exhaustive review of the current literature, given in light of all of the exposed issues. These guidelines are specifically formulated for the expertise of signatures on paintings, but can also be applied to wider themes in the area of signature examination. The second part of this thesis covers the experimental stages of the research. It consists of the method developed to authenticate painted signatures on works of art. This method is articulated around several main objectives: defining measurable features on painted signatures and establishing their relevance in order to determine the separation capacity between groups of authentic and simulated signatures. For the first time, numerical analyses of painted signatures have been obtained and are used to attribute their authorship to given artists. An in-depth discussion of the developed method constitutes the third and final part of this study. It evaluates the opportunities and constraints of the method when applied by signature and handwriting experts in forensic science. The outlines presented below summarize the aims and main themes addressed in each chapter, allowing a rapid overview of the study.
Part I - Theory. Chapter 1 exposes the legal aspects surrounding the authentication of works of art by art experts. The definition of what is legally authentic, the qualifications and types of experts who can express an opinion concerning the authorship of a specific painting, and standard deontological rules are addressed. The practices applied in Switzerland are specifically dealt with. Chapter 2 presents an overview of the different scientific analyses that can be carried out on paintings (from the canvas to the top coat). Scientific examinations of works of art have become more common as more and more museums equip themselves with laboratories, so an understanding of their role in the art authentication process is vital. The added value that a signature expertise can have in comparison to other scientific techniques is also addressed. Chapter 3 provides a historical overview of the signature on paintings throughout the ages, in order to offer the reader an understanding of the origin of the signature on works of art and its evolution through time. An explanation is given of the transitions that the signature went through from the 15th century on, and of how it progressively took on its widely known modern form. Both this chapter and Chapter 2 are presented to show the reader the rich sources of information that can be used to describe a painting, and how the signature is one of these sources. Chapter 4 focuses on the different hypotheses the forensic handwriting expert (FHE) must keep in mind when examining a painted signature, since a number of scenarios can be encountered when dealing with signatures on works of art. The different forms of signatures, as well as the variables that may have an influence on painted signatures, are also presented. Finally, the current state of knowledge of the examination procedure of signatures in forensic science in general, and of painted signatures in particular, is presented.
The state of the art of the assessment of the authorship of signatures on paintings is established and discussed in light of the theoretical facets mentioned previously. Chapter 5 considers key elements that can have an impact on the FHE during his or her examinations. This includes a discussion of elements such as the skill, confidence and competence of an expert, as well as the potential bias effects he or she might encounter. A better understanding of the elements surrounding handwriting examinations, in order to better communicate results and conclusions to an audience, is also sought. Chapter 6 reviews the judicial acceptance of signature analysis in courts and closes the state-of-the-art section of this thesis. This chapter brings forward the current issues pertaining to the appreciation of this expertise by the non-forensic community, and discusses the increasing number of claims of the unscientific nature of signature authentication. The necessity of aiming for more scientific, comprehensive and transparent authentication methods is discussed. The theoretical part of this thesis concludes with a series of general recommendations for forensic handwriting examiners, specifically for the expertise of signatures on paintings. These recommendations stem from the exhaustive review of the literature and the issues it exposed, and can also be applied to the traditional examination of signatures (on paper). Part II - Experimental part. Chapter 7 describes and defines the sampling, extraction and analysis phases of the research. The sampling of artists' signatures and their respective simulations is presented, followed by the steps that were undertaken to extract and determine sets of characteristics, specific to each artist, that describe their signatures. The method is based on a study of five artists and a group of individuals acting as forgers for the sake of this study.
Finally, the procedure for analyzing these characteristics to assess the strength of evidence, based on a Bayesian reasoning process, is presented. Chapter 8 outlines the results concerning both the artist and simulation corpuses after their optical observation, followed by the results of the analysis phase of the research. The feature selection process and the likelihood ratio evaluation are the main themes addressed. The discrimination power between the two corpuses is illustrated through multivariate analysis. Part III - Discussion. Chapter 9 discusses the materials, the methods, and the obtained results of the research. The opportunities, but also the constraints and limits, of the developed method are exposed. Future work that can be carried out following the results of the study is also presented. Chapter 10, the last chapter of this thesis, proposes a strategy for incorporating the model developed in the preceding chapters into traditional signature expertise procedures. Thus, the strength of this expertise is discussed in conjunction with the traditional conclusions reached by forensic handwriting examiners. Finally, this chapter summarizes and advocates a list of formal recommendations for good practice for handwriting examiners. In conclusion, the research highlights the interdisciplinary nature of the examination of signatures on paintings. The current state of knowledge of the judicial standing of art experts, along with the scientific and historical analysis of paintings and signatures, is reviewed to give the reader a feel for the different factors that have an impact on this particular subject. The hesitant acceptance of forensic signature analysis in court, also presented in the state of the art, explicitly demonstrates the necessity of a better recognition of signature expertise by courts of law.
This general acceptance, however, can only be achieved by producing high-quality results through a well-defined examination process. This research offers an original approach to attributing a painted signature to a certain artist: for the first time, a probabilistic model used to measure the discriminative potential between authentic and simulated painted signatures is studied. The opportunities and limits of this method of scientifically establishing the authorship of signatures on works of art are thus presented. In addition, as its second key contribution, this work proposes a procedure for combining the developed method with that traditionally used by signature experts in forensic science. Such an implementation into holistic traditional signature examination casework is a large step toward providing the forensic, judicial and art communities with a solidly grounded reasoning framework for the examination of signatures on paintings. The framework and preliminary results associated with this research have been published (Montani, 2009a) and presented at international forensic science conferences (Montani, 2009b; Montani, 2012).
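The likelihood-ratio evaluation described above can be illustrated with a minimal sketch. This is not the thesis's actual model: it assumes, for illustration only, a two-feature Gaussian description of each group (the feature names in the comments are hypothetical) and computes the ratio of the densities under the "authentic" and "simulated" propositions.

```python
import numpy as np

def gaussian_lr(x, auth_samples, sim_samples):
    """Likelihood ratio of feature vector x under two Gaussian models:
    numerator   = density under the authentic-signature model,
    denominator = density under the simulated-signature model."""
    def logpdf(v, samples):
        mu = samples.mean(axis=0)
        cov = np.cov(samples, rowvar=False) + 1e-9 * np.eye(samples.shape[1])
        diff = v - mu
        _, logdet = np.linalg.slogdet(cov)
        k = len(v)
        return -0.5 * (k * np.log(2 * np.pi) + logdet
                       + diff @ np.linalg.solve(cov, diff))
    return np.exp(logpdf(x, auth_samples) - logpdf(x, sim_samples))

# Synthetic feature vectors (e.g. stroke length, slant) for illustration.
rng = np.random.default_rng(0)
authentic = rng.normal([10.0, 2.0], 0.5, size=(50, 2))
simulated = rng.normal([8.0, 3.0], 1.0, size=(50, 2))
lr = gaussian_lr(np.array([9.9, 2.1]), authentic, simulated)
print(lr)  # LR well above 1 supports the "authentic" proposition
```

An LR above 1 favors the authentic-signature proposition; below 1, the simulation proposition. The actual features and density models used in the thesis are, of course, those defined in its experimental part.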
Abstract:
The aim of the work was to build a simplified Simulink model describing the dynamics of a harbour crane as accurately as possible, after which the model was converted into a form suitable for a real-time simulator. The crane model was simplified to comprise three parts: the crane frame, the trolley, and the container. The modelled forces were the wheel contact forces, the rope forces, and the traversing force. Opal-RT's RT-LAB real-time simulation software, running on ordinary PCs, was used as the real-time simulator. A 3D animation was also connected to the simulation to visualize the crane's movements. The animated graphics were created with the WorldUp software and connected to the RT-LAB simulation via the RT3D interface and the WorldUp Player. The result of the work was a Simulink model describing the dynamic behaviour of a harbour crane that can be used in real-time simulation. The model was tested on the RT-LAB real-time simulator, and the simulation results were compared with results simulated with Adams. Based on the results obtained, the model can be considered successful. The RT-LAB real-time simulator, together with its visualization, also appears to be a well-functioning whole.
Abstract:
This work presents new, efficient Markov chain Monte Carlo (MCMC) simulation methods for statistical analysis in various modelling applications. When using MCMC methods, the model is simulated repeatedly to explore the probability distribution describing the uncertainties in model parameters and predictions. In adaptive MCMC methods based on the Metropolis-Hastings algorithm, the proposal distribution needed by the algorithm learns from the target distribution as the simulation proceeds. Adaptive MCMC methods have been the subject of intensive research lately, as they open a way for an essentially easier use of the methodology; the lack of user-friendly computer programs has been a main obstacle to wider acceptance of the methods. This work provides two new adaptive MCMC methods: DRAM and AARJ. The DRAM method has been built especially to work in high-dimensional and non-linear problems. The AARJ method is an extension of DRAM for model selection problems, where the mathematical formulation of the model is uncertain and we want to fit several different models to the same observations simultaneously. The methods were developed with the needs of modelling applications typical of the environmental sciences in mind, and the development work was pursued in the course of several application projects. The applications presented in this work are: a wintertime oxygen concentration model for Lake Tuusulanjärvi and adaptive control of its aerator; a nutrition model for Lake Pyhäjärvi and lake management planning; and validation of the algorithms of the GOMOS ozone remote sensing instrument on board the Envisat satellite of the European Space Agency, together with a study of the effects of aerosol model selection on the GOMOS algorithm.
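The core idea of adaptive Metropolis sampling can be sketched in a few lines. This is only an illustration of the adaptation half of the approach (the proposal covariance learned from the chain history, with the classical 2.4²/d scaling); DRAM's delayed-rejection stage and the AARJ extension are omitted, and this is not the authors' implementation.

```python
import numpy as np

def adaptive_metropolis(logpost, x0, n_iter=5000, adapt_start=500):
    """Adaptive Metropolis sketch: after a burn-in, the Gaussian proposal
    covariance is re-estimated from the chain so far (adaptation only;
    delayed rejection is not implemented here)."""
    d = len(x0)
    sd = 2.4**2 / d                            # standard AM scaling factor
    chain = np.empty((n_iter, d))
    chain[0] = x0
    cov = np.eye(d) * 0.1                      # initial proposal covariance
    lp = logpost(x0)
    rng = np.random.default_rng(1)
    for i in range(1, n_iter):
        if i > adapt_start:                    # learn covariance from history
            cov = sd * np.cov(chain[:i], rowvar=False) + 1e-8 * np.eye(d)
        prop = rng.multivariate_normal(chain[i - 1], cov)
        lp_prop = logpost(prop)
        if np.log(rng.random()) < lp_prop - lp:   # Metropolis accept/reject
            chain[i], lp = prop, lp_prop
        else:
            chain[i] = chain[i - 1]
    return chain

# Toy target: standard bivariate Gaussian; the sample mean should be near 0.
chain = adaptive_metropolis(lambda x: -0.5 * x @ x, np.zeros(2))
print(chain[2500:].mean(axis=0))
```

In real applications the log-posterior would come from the simulation model and the data likelihood; the adaptation is what spares the user from hand-tuning the proposal.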
Abstract:
Our research aims to analyze the causal relationships in the behavior of public debt issued by peripheral member countries of the European Economic and Monetary Union (EMU), with special emphasis on the recent episodes of crisis triggered in the eurozone sovereign debt markets since 2009. With this goal in mind, we make use of a database of daily-frequency yields on 10-year government bonds issued by five EMU countries (Greece, Ireland, Italy, Portugal and Spain), covering the entire history of the EMU from its inception on 1 January 1999 until 31 December 2010. In the first step, we explore the pair-wise causal relationship between yields, both for the whole sample and for changing subsamples of the data, in order to capture possible time-varying causal relationships. This approach allows us to detect episodes of contagion between yields on bonds issued by different countries. In the second step, we study the determinants of these contagion episodes, analyzing the role played by different factors and paying special attention to instruments that capture the total national debt (domestic and foreign) in each country.
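Pair-wise causality of this kind is commonly assessed with a Granger test: regress one yield on its own lags, then add lags of the other yield and check whether the fit improves. The sketch below is a generic illustration on synthetic data, not the paper's estimation; applied to moving subsamples, the same statistic would trace out a time-varying causal relationship.

```python
import numpy as np

def granger_f(y, x, p=2):
    """F-statistic for 'x Granger-causes y' with p lags: compare OLS of y on
    its own lags (restricted) vs. own lags plus lags of x (unrestricted)."""
    T = len(y)
    Y = y[p:]
    lags_y = np.column_stack([y[p - k:T - k] for k in range(1, p + 1)])
    lags_x = np.column_stack([x[p - k:T - k] for k in range(1, p + 1)])
    const = np.ones((T - p, 1))
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        r = Y - X @ beta
        return r @ r, X.shape[1]
    rss_r, _ = rss(np.hstack([const, lags_y]))
    rss_u, k_u = rss(np.hstack([const, lags_y, lags_x]))
    return ((rss_r - rss_u) / p) / (rss_u / (T - p - k_u))

# Synthetic example: y is driven by lagged x, but not the other way round.
rng = np.random.default_rng(2)
x = rng.normal(size=500)
y = np.zeros(500)
for t in range(2, 500):
    y[t] = 0.3 * y[t - 1] + 0.8 * x[t - 1] + rng.normal()
print(granger_f(y, x))   # large F: x helps predict y
print(granger_f(x, y))   # small F: y does not help predict x
```

Comparing the F-statistic against the F(p, T-p-k) critical value gives the usual accept/reject decision for each window of the rolling sample.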
Abstract:
PURPOSE: To assess baseline predictors and consequences of medication non-adherence in the treatment of pediatric patients with attention-deficit/hyperactivity disorder (ADHD) from Central Europe and East Asia. PATIENTS AND METHODS: Data for this post-hoc analysis were taken from a 1-year prospective, observational study that included a total of 1,068 newly-diagnosed pediatric patients with ADHD symptoms from Central Europe and East Asia. Medication adherence during the week prior to each visit was assessed by treating physicians using a 5-point Likert scale, and then dichotomized into either adherent or non-adherent. Clinical severity was measured by the Clinical Global Impressions-ADHD-Severity (CGI-ADHD) scale and the Child Symptom Inventory-4 (CSI-4) Checklist. Health-Related Quality of Life (HRQoL) was measured using the Child Health and Illness Profile-Child Edition (CHIP-CE). Regression analyses were used to assess baseline predictors of overall adherence during follow-up, and the impact of time-varying adherence on subsequent outcomes: response (defined as a decrease of at least 1 point in CGI), changes in CGI-ADHD, CSI-4, and the five dimensions of CHIP-CE. RESULTS: Of the 860 patients analyzed, 64.5% (71.6% in Central Europe and 55.5% in East Asia) were rated as adherent and 35.5% as non-adherent during follow-up. Being from East Asia was found to be a strong predictor of non-adherence. In East Asia, a family history of ADHD and parental emotional distress were associated with non-adherence, while having no other children living at home was associated with non-adherence in Central Europe as well as in the overall sample. Non-adherence was associated with poorer response and less improvement on CGI-ADHD and CSI-4, but not on CHIP-CE. CONCLUSION: Non-adherence to medication is common in the treatment of ADHD, particularly in East Asia. Non-adherence was associated with poorer response and less improvement in clinical severity. 
A limitation of this study is that medication adherence was assessed by the treating clinician using a single item question.
Abstract:
The size and complexity of software development projects are growing very fast. At the same time, according to previous research, the proportion of successful projects remains quite low. Although almost every project team knows the main areas of responsibility that would help finish a project on time and on budget, this knowledge is rarely used in practice. It is therefore important to evaluate the success of existing software development projects and to suggest a method for evaluating the chances of success that can be used in software development projects. The main aim of this study is to evaluate the success of projects in the selected geographical region (Russia-Ukraine-Belarus). The second aim is to compare existing models of success prediction and to determine their strengths and weaknesses. The research was conducted as an empirical study. A survey with structured forms and theme-based interviews were used as the data collection methods. Information gathering was done in two stages. In the first stage, the project manager or someone with similar responsibilities answered the questions over the Internet. In the second stage, the participant was interviewed, and his or her answers were discussed and refined. This made it possible to obtain accurate information about each project and to avoid errors. It was found that there are many problems in software development projects. These problems are widely known and have been discussed in the literature many times. The research showed that most of the projects have problems with schedule, requirements, architecture, quality, and budget. A comparison of two models of success prediction showed that The Standish Group model overestimates problems in projects, while McConnell's model can help to identify problems in time and avoid trouble in the future. A framework for evaluating the chances of success in distributed projects was suggested. The framework is similar to The Standish Group model but was customized for distributed projects.
Abstract:
The main objective of this study was to create, at a general level, a cost model for industrial maintenance services to support practical decision-making situations. The purpose of the cost model was that it could be used in all decision situations aiming to influence the costs caused by maintenance, whether by the maintenance service provider, the customer, or both. The cost model differs from traditional cost models in its perspective: previously developed cost models for industrial maintenance services have presented either the service provider's own costs or the customer's costs, but not both simultaneously. The cost model was tested in a small business network consisting of a pulp mill, a maintenance company and an equipment supplier. Maintenance processes in the network were studied by means of case studies. A total of three cases were studied, all located in different stages of the pulp production process and very different in nature. The theoretical part of the work is based mainly on articles, studies and books on industrial maintenance; in addition, the theory of pricing and cost accounting is discussed. The empirical part of the work is based on interviews and on cost data obtained from the companies, with which the cost model was tested. After the testing, the results and the cost model were analyzed. The key results of the work concerned which costs should be taken into account in the cost model and what a usable cost model looks like. There were also problems related to the use and development of the cost model, which are brought up at the end of the work. A central problem in using the model was that the parties in the business network were not yet willing to disclose their cost figures in euros to each other. For this reason, the cost model presents only the percentage shares of the cost items in each company's total maintenance costs. Analyzing percentage costs alone was more challenging because, for example, the order of magnitude of the costs is visible only in euros.
Abstract:
The objectives of the work were to study the effect of varying dewatering time on the formation properties of paper sheets, to determine the role of the fines fraction in creating paper with good formation and strength properties, and to study the effect of charge modification of fiber fractions on the formation properties of handsheets. Formation is one of the most important structural properties of paper, affecting its physical and optical characteristics. In this work, the effect of formation on tensile strength was determined. Formation properties were analyzed using the AMBERTEC Beta Formation Tester. The addition of PAM as a flocculant agent changed the formation of the paper. Paper sheets were also made from different furnishes of both birch and pine pulps. Fine fiber particles have a great effect on drainability. The fines fraction plays an important role in papermaking, and the two kinds of pulp (pine and birch) were used to investigate this role. As expected, the fines fraction had a positive effect on paper formation, but when fines were added above the initial fines content, the formation of the paper deteriorated. The effect of paper formation on tensile strength was also determined: in many cases, poor formation had a negative effect on the strength properties of the paper.
Abstract:
In the small context, the big questions about pedagogy, culture and structure, and about the interplay between them, become visible. The study of the smallest units of Finnish education provides knowledge about what is important in pedagogical relationships and in school communities, but it also makes visible the built-in conflicts over goals and values in societal development. The overall aim of the study is to deepen knowledge of the pedagogical, cultural and structural meaning and conditions of small schools. Through in-depth interviews with 12 Finland-Swedish teachers in village schools with fewer than 30 pupils, analysis of the school closure debate and of education policy documents, and theoretically anchored reflections, an understanding is created both of teachers' work and pedagogical thinking and of the school's function in society. The dissertation is structured according to hermeneutic and narrative thinking. The study shows that village school teachers' pedagogical thinking aims at the individual child's optimal and balanced holistic development within a community, where it is central to find the pedagogical balance and possibility along a continuum between, among other things, pupil empowerment and guidance. The pedagogical intentions carry the aim of giving roots and wings, belonging and freedom. The teachers' descriptions express an interweaving of their pedagogical intentions and the contextual possibilities of realizing them. Village school teachers are bearers of a pedagogical professionalism that develops in relation to the conditions of the work. Their practical professional skill involves a balancing act between systematics and organization on the one hand, and flexibility and freedom on the other. That the same teacher is responsible for the pupils over a long time is both a pedagogical opportunity and a challenge. The teachers' narratives raise the question of what the balancing act in a good pedagogical relationship entails. Both teachers and pupils are socialized into a particular school culture and teaching culture, in mutual interplay.
The professional culture is characterized by various forms of collaboration, both with the school's entire staff as teamwork and with the local community. The teachers express good job satisfaction, but the freedom and responsibility of the work can be both stimulating and burdensome. In a meta-analysis, theoretical models are created for what working in the field of tension between closeness and smallness can entail, and for how it can affect teachers' professional development and stamina. The small village school, related to a larger societal context, opens up questions about what quality means and which values guide educational planning. The struggle for continuity in the village school's narrative is interpreted as a struggle for the local space: for community, existence, future, equality and security. The struggle for the village school is a defense of both local quality of life and pedagogical quality for the individual pupil. In the overall cultural context, contradictions become visible: there seems to be a built-in conflict in educational planning's intention to achieve equality, cost-effectiveness and quality simultaneously. A theoretical model makes visible how pedagogy, culture and structure interact within education and the village school, and how they affect teachers' and pupils' room for action.
Abstract:
Cyanobacteria are unicellular, non-nitrogen-fixing prokaryotes that perform photosynthesis similarly to higher plants. The cyanobacterium Synechocystis sp. strain PCC 6803 is used as a model organism in photosynthesis research. My research described herein aims at understanding the function of the photosynthetic machinery and how it responds to changes in the environment. Detailed knowledge of the regulation of photosynthesis in cyanobacteria can be utilized for biotechnological purposes, for example in the harnessing of solar energy for biofuel production. In photosynthesis, iron participates in electron transfer. Here, we focused on iron transport in Synechocystis sp. strain PCC 6803, and particularly on the environmental regulation of the genes encoding the FutA2BC ferric iron transporter, which belongs to the ABC transporter family. A homology model built for the ATP-binding subunit FutC indicates that it has a functional ATP-binding site as well as conserved interactions with the channel-forming subunit FutB in the transporter complex. Polyamines are important for cell proliferation, differentiation and apoptosis in prokaryotic and eukaryotic cells. In plants, polyamines have special roles in the stress response and in plant survival. Polyamine metabolism in cyanobacteria in response to environmental stress is of interest in research on the stress tolerance of higher plants. In this thesis, the potD gene, encoding a polyamine transporter subunit from Synechocystis sp. strain PCC 6803, was characterized for the first time. A homology model built for the PotD protein indicated that it is capable of binding polyamines, with a preference for spermidine. Furthermore, in order to investigate the structural basis of the substrate specificity, polyamines were docked into the binding site. Spermidine was positioned very similarly in Synechocystis PotD as in the template structure and had the most favorable interactions of the docked polyamines.
Based on the homology model, experimental work was conducted, which confirmed the binding preference. Flavodiiron proteins (Flv) are enzymes that protect the cell against the toxicity of oxygen and/or nitric oxide by reducing them. In this thesis, we present a novel type of photoprotection mechanism in cyanobacteria, based on the Flv2/Flv4 heterodimer. The constructed homology model of Flv2/Flv4 suggests a functional heterodimer capable of rapid electron transfer. The protein of unknown function sll0218, encoded by the flv2-flv4 operon, is assumed to facilitate the interaction of the Flv2/Flv4 heterodimer and energy transfer between the phycobilisome and PSII. Flv2/Flv4 provides an alternative electron transfer pathway and functions as an electron sink in PSII electron transfer.
Abstract:
Several papers document that idiosyncratic volatility is time-varying, and many attempts have been made to determine whether idiosyncratic risk is priced. This research studies the behavior of idiosyncratic volatility around information release dates, as well as its relation with returns after public announcements. The results indicate that when a company discloses specific information to the market, the firm-specific volatility level shifts and short-horizon event-induced volatility varies significantly; however, the category to which the announcement belongs does not matter for the magnitude of the change. This event-induced volatility is not small and should not be downplayed in event studies. Moreover, this study shows that stocks with higher contemporaneous realized idiosyncratic volatility earn lower returns after public announcements, consistent with the "divergence of opinion" hypothesis. While no significant relation is found between EGARCH-estimated idiosyncratic volatility and returns, or between one-month-lagged idiosyncratic volatility and returns (presumably due to the significant jump around public announcements), both may provide signals about future idiosyncratic volatility through their correlations with contemporaneous realized idiosyncratic volatility. Finally, the study shows that the positive relation between returns and idiosyncratic volatility based on under-diversification is inadequate to explain all scenarios, and that the negative relation after public announcements may provide a useful trading rule.
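Realized idiosyncratic volatility of the kind discussed above is commonly measured as the dispersion of residuals from a market-model regression. The sketch below shows that common construction on synthetic daily returns; it is a generic illustration, not necessarily the exact specification used in this study.

```python
import numpy as np

def idio_vol(stock_ret, market_ret):
    """Realized idiosyncratic volatility: standard deviation of residuals
    from a single-factor market-model regression r_i = a + b*r_m + e."""
    X = np.column_stack([np.ones_like(market_ret), market_ret])
    beta, *_ = np.linalg.lstsq(X, stock_ret, rcond=None)
    resid = stock_ret - X @ beta
    return resid.std(ddof=2)   # ddof=2 for the two estimated coefficients

# Synthetic daily returns: same market beta, different firm-specific noise.
rng = np.random.default_rng(3)
rm = rng.normal(0, 0.01, 250)                   # one year of market returns
quiet = 1.1 * rm + rng.normal(0, 0.005, 250)    # low firm-specific noise
noisy = 1.1 * rm + rng.normal(0, 0.02, 250)     # high firm-specific noise
print(idio_vol(quiet, rm), idio_vol(noisy, rm))
```

Computed over windows around an announcement date, the shift in this measure is what the event-induced-volatility analysis picks up.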
Abstract:
Osmotic dehydration of cherry tomato as influenced by the osmotic agent (sodium chloride, and a mixed sodium chloride and sucrose solution) and solution concentration (10 and 25% w/w) at room temperature (25°C) was studied. The kinetics of water loss and solids uptake were determined with a two-parameter model based on Fick's second law applied to spherical geometry. The apparent water diffusivity coefficients obtained ranged from 2.17×10⁻¹⁰ to 11.69×10⁻¹⁰ m²/s.
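A model of this type typically rests on the standard series solution for Fickian diffusion in a sphere. The sketch below evaluates that series for illustration; the diffusivity is picked from within the reported range, while the sphere radius is a hypothetical value, not one taken from the study.

```python
import math

def moisture_ratio(t, D, r, n_terms=50):
    """Dimensionless moisture ratio for Fickian diffusion in a sphere
    (series solution): MR = (6/pi^2) * sum_n (1/n^2) exp(-n^2 pi^2 D t / r^2)."""
    s = 0.0
    for n in range(1, n_terms + 1):
        s += (1.0 / n**2) * math.exp(-n**2 * math.pi**2 * D * t / r**2)
    return (6.0 / math.pi**2) * s

D = 5e-10    # apparent diffusivity, m^2/s (within the reported range)
r = 0.01     # sphere radius, m (hypothetical value for illustration)
for minutes in (0, 30, 120):
    print(minutes, moisture_ratio(minutes * 60, D, r))
```

Fitting this expression to measured water-loss data, with D (and an equilibrium term) as free parameters, is how apparent diffusivity coefficients like those reported are obtained.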
Abstract:
In this text, we analyze recent developments in econometrics in light of the theory of statistical testing. We first review some fundamental principles of the philosophy of science and of statistical theory, emphasizing parsimony and falsifiability as criteria for evaluating models, the role of test theory as a formalization of the principle of falsification for probabilistic models, and the logical justification of the basic notions of test theory (such as the level of a test). We then show that some of the most widely used statistical and econometric methods are fundamentally inappropriate for the problems and models considered, while many hypotheses for which testing procedures are commonly proposed are in fact not testable at all. Such situations lead to ill-posed statistical problems. We analyze some particular cases of such problems: (1) the construction of confidence intervals in structural models that raise identification problems; (2) the construction of tests for nonparametric hypotheses, including procedures robust to heteroskedasticity, non-normality, or dynamic specification. We point out that these difficulties often stem from the ambition to weaken the regularity conditions required for any statistical analysis, as well as from an inappropriate use of results from asymptotic distribution theory. Finally, we underscore the importance of formulating testable hypotheses and models, and of proposing econometric techniques whose properties can be demonstrated in finite samples.
Abstract:
Funding support for this doctoral thesis has been provided by the Canadian Institutes of Health Research-Public Health Agency of Canada, QICSS matching grant, and la Faculté des études supérieures et postdoctorales-Université de Montréal.
Abstract:
Continuous-time Markov processes are widely used in attempts to explain the evolution of protein and nucleotide sequences along phylogenies. Probabilistic models based on such assumptions are designed to accommodate the spatial non-homogeneity of the functional and environmental constraints acting on these sequences. Recently, Markov-modulated models have been introduced to describe temporal changes in site-specific evolutionary rates (heterotachy). Studies have also shown that not only the strength but also the nature of the selective constraint acting on a site can vary over time. Here we propose to account for this evolutionary reality with a Markov-modulated model for proteins under which sites are allowed to change their amino-acid preferences over time. Posterior estimation of the various modulating parameters of the stochastic kernel with Monte Carlo methods is a formidable challenge, which we have partially met thanks to parallel programming. Computational adjustments are also envisaged to accelerate convergence toward the global optimum of this relatively complex multidimensional landscape. Qualitatively, our model appears capable of capturing signals of temporal heterogeneity from a data set whose evolutionary history is known to be rich in changes of substitutional regime. Performance tests further suggest that it fits the data better than an equivalent time-homogeneous model. Nevertheless, the substitution histories drawn from the posterior distribution are noisy and remain difficult to interpret from a biological point of view.
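The time-homogeneous building block underlying such models is a continuous-time Markov chain whose transition probabilities follow from its rate matrix. As a minimal illustration (not the Markov-modulated protein model itself), the Jukes-Cantor nucleotide chain, with equal off-diagonal rate lam, has a closed-form solution:

```python
import math

def jc69_transition(t, lam):
    """Transition probabilities of the Jukes-Cantor CTMC after time t,
    where every off-diagonal rate is lam (eigenvalues 0 and -4*lam):
    returns (P[same state], P[any one particular different state])."""
    e = math.exp(-4.0 * lam * t)
    p_same = 0.25 + 0.75 * e
    p_diff = 0.25 - 0.25 * e
    return p_same, p_diff

p_same, p_diff = jc69_transition(t=0.5, lam=0.25)
print(p_same, p_diff)   # each row of P(t) sums to 1: p_same + 3*p_diff
```

A Markov-modulated model layers a second, hidden Markov process on top of this, letting the rate matrix itself (here, the site's amino-acid preferences) switch over time.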