977 results for Binary choice models
Abstract:
Individual learning (e.g., trial-and-error) and social learning (e.g., imitation) are alternative ways of acquiring and expressing the appropriate phenotype in an environment. The optimal choice between using individual learning and/or social learning may be dictated by the life-stage or age of an organism. Of special interest is a learning schedule in which social learning precedes individual learning, because such a schedule is apparently a necessary condition for cumulative culture. Assuming two obligatory learning stages per discrete generation, we obtain the evolutionarily stable learning schedules for the three situations where the environment is constant, fluctuates between generations, or fluctuates within generations. During each learning stage, we assume that an organism may target the optimal phenotype in the current environment by individual learning, and/or the mature phenotype of the previous generation by oblique social learning. In the absence of exogenous costs to learning, the evolutionarily stable learning schedules are predicted to be either pure social learning followed by pure individual learning ("bang-bang" control) or pure individual learning at both stages ("flat" control). Moreover, we find for each situation that the evolutionarily stable learning schedule is also the one that optimizes the learned phenotype at equilibrium.
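To make the two candidate schedules concrete, here is a minimal numerical sketch, not the paper's analytical model: individual learning is modelled as a noisy step toward the current optimum, social learning as copying the previous generation's mature phenotype, and the environment is held constant. All function names and parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def individual_step(x, optimum, alpha=0.7, sigma=0.3):
    """Individual learning: move a fraction alpha toward the optimum, with noise."""
    return x + alpha * (optimum - x) + sigma * rng.standard_normal()

def run_schedule(stage1_social, generations=500, optimum=1.0, naive=0.0):
    """Mean squared deviation of the mature phenotype under a two-stage schedule."""
    mature_prev, errors = naive, []
    for _ in range(generations):
        # stage 1: copy the previous generation (social) or learn individually
        x = mature_prev if stage1_social else individual_step(naive, optimum)
        # stage 2: pure individual learning in both schedules compared here
        x = individual_step(x, optimum)
        errors.append((x - optimum) ** 2)
        mature_prev = x
    return np.mean(errors)

print("bang-bang (social, then individual):", run_schedule(True))
print("flat (individual at both stages)   :", run_schedule(False))
```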
Abstract:
We present a study of binary mixtures of Bose-Einstein condensates confined in a double-well potential within the framework of the mean-field Gross-Pitaevskii (GP) equation. We re-examine both the single-component and the binary-mixture cases for such a potential, and we investigate the situations in which a simpler two-mode approach leads to an accurate description of their dynamics. We also estimate the validity of the most common dimensionality reductions used to solve the GP equations. To this end, we compare both the semi-analytical two-mode approaches and the numerical simulations of the one-dimensional (1D) reductions with the full 3D numerical solutions of the GP equation. Our analysis provides a guide to clarify the validity of several simplified models that describe mean-field nonlinear dynamics, using an experimentally feasible binary mixture: an F = 1 spinor condensate with two of its Zeeman manifolds populated, m = ±1.
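As background for the two-mode discussion, here is a minimal sketch of the standard single-component two-mode (bosonic Josephson) equations for the population imbalance z and relative phase φ in a symmetric double well; this is the textbook reduction, not the paper's full binary-mixture 3D GP model, and the parameter and initial values are illustrative assumptions.

```python
import numpy as np
from scipy.integrate import solve_ivp

Lambda = 5.0  # ratio of interaction to tunnelling energy (illustrative)

def two_mode(t, y):
    # dz/dt = -sqrt(1 - z^2) sin(phi); dphi/dt = Lambda z + z cos(phi)/sqrt(1 - z^2)
    z, phi = y
    dz = -np.sqrt(1 - z**2) * np.sin(phi)
    dphi = Lambda * z + z * np.cos(phi) / np.sqrt(1 - z**2)
    return [dz, dphi]

sol = solve_ivp(two_mode, (0.0, 50.0), [0.6, 0.0], rtol=1e-9, atol=1e-12)
print("final population imbalance:", sol.y[0, -1])
```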
Abstract:
The objective of this study was to evaluate the performance of stacked species distribution models in predicting the alpha and gamma species diversity patterns of two important plant clades along elevation in the Andes. We modelled the distribution of the species in the Anthurium genus (53 species) and the Bromeliaceae family (89 species) using six modelling techniques. We combined all of the predictions for the same species in ensemble models based on two different criteria: the average of the rescaled predictions by all techniques and the average of the best techniques. The rescaled predictions were then reclassified into binary predictions (presence/absence). By stacking either the original predictions or the binary predictions for both ensemble procedures, we obtained four different species richness models per taxon. The gamma and alpha diversity per elevation band (500 m) were also computed. To evaluate the four species richness predictions and the gamma diversity prediction, the models were compared with real data along an elevation gradient that was independently compiled by specialists. Finally, we also tested whether our richness models performed better than a null model of altitudinal changes of diversity based on the literature. Stacking the ensemble predictions of the individual species models generated richness models that proved to be well correlated with the observed alpha diversity patterns along elevation and with the gamma diversity derived from the literature. Overall, these models tend to overpredict species richness. The use of ensemble predictions from species models built with different techniques seems very promising for the modelling of species assemblages. Stacking the binary models reduced the over-prediction, although more research is needed. The randomisation test proved to be a promising method for testing the performance of the stacked models, but other implementations may still be developed.
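The two stacking strategies compared above can be sketched in a few lines: summing the continuous ensemble suitabilities versus thresholding each species map to presence/absence before summing. The array shapes, random suitabilities and the single fixed threshold are illustrative assumptions (in the study, thresholds were derived per species).

```python
import numpy as np

rng = np.random.default_rng(1)
n_species, n_cells = 53, 1000                    # e.g., the Anthurium clade
suitability = rng.random((n_species, n_cells))   # ensemble-averaged predictions
threshold = 0.5                                  # stand-in for per-species cutoffs

richness_continuous = suitability.sum(axis=0)             # stack of raw predictions
richness_binary = (suitability >= threshold).sum(axis=0)  # stack of binary maps

print("first cells, continuous:", richness_continuous[:5].round(2))
print("first cells, binary    :", richness_binary[:5])
```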
Abstract:
A rigorous unit operation model is developed for vapor membrane separation. The new model is able to describe temperature-, pressure-, and concentration-dependent permeation as well as real fluid effects in vapor and gas separation with hydrocarbon-selective rubbery polymeric membranes. Permeation through the membrane is described by a separate treatment of sorption and diffusion within the membrane. Chemical engineering thermodynamics is used to describe the equilibrium sorption of vapors and gases in rubbery membranes with equation-of-state models for polymeric systems. A new modification of the UNIFAC model is also proposed for this purpose. Various thermodynamic models are extensively compared in order to verify the models' ability to predict and correlate experimental vapor-liquid equilibrium data. The penetrant transport through the selective layer of the membrane is described with the generalized Maxwell-Stefan equations, which are able to account for the bulk flux contribution as well as the diffusive coupling effect. A method is described to compute and correlate binary penetrant-membrane diffusion coefficients from the experimental permeability coefficients at different temperatures and pressures. A fluid flow model for spiral-wound modules is derived from the conservation equations of mass, momentum, and energy. The conservation equations are presented in a discretized form by using the control volume approach. A combination of the permeation model and the fluid flow model yields the desired rigorous model for vapor membrane separation. The model is implemented in an in-house process simulator, so that vapor membrane separation may be evaluated as an integral part of a process flowsheet.
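The sorption/diffusion decomposition at the heart of the permeation model can be illustrated with the simplest solution-diffusion relation, where permeability is the product of a sorption coefficient and a diffusivity; the numbers below are illustrative placeholders, not fitted membrane data, and the full model's Maxwell-Stefan coupling terms are omitted.

```python
# Solution-diffusion sketch: P = S * D, flux = P * dp / l (coupling terms omitted)
S = 2.0e-3    # sorption coefficient, mol m^-3 Pa^-1 (assumed)
D = 5.0e-10   # penetrant diffusivity in the polymer, m^2 s^-1 (assumed)
l = 2.0e-6    # selective-layer thickness, m (assumed)
dp = 1.0e5    # partial-pressure driving force, Pa (assumed)

P = S * D                  # permeability, mol m^-1 s^-1 Pa^-1
flux = P * dp / l          # molar flux, mol m^-2 s^-1
print(f"flux = {flux:.3e} mol m^-2 s^-1")
```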
Abstract:
Occupational exposure modeling is widely used in the context of the E.U. regulation on the registration, evaluation, authorization, and restriction of chemicals (REACH). First-tier tools, such as the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) targeted risk assessment (TRA) or Stoffenmanager, are used to screen a wide range of substances. Those of concern are investigated further using second-tier tools, e.g., the Advanced REACH Tool (ART). Local sensitivity analysis (SA) methods are used here to determine the dominant factors for three models commonly used within the REACH framework: ECETOC TRA v3, Stoffenmanager 4.5, and ART 1.5. Based on the results of the SA, the robustness of the models is assessed. For ECETOC, the process category (PROC) is the most important factor, and a failure to identify the correct PROC has severe consequences for the exposure estimate. Stoffenmanager is the most balanced model, and decision-making uncertainties in one modifying factor are less severe in Stoffenmanager. ART requires a careful evaluation of the decisions in the source compartment, since it constitutes ∼75% of the total exposure range, which spans 20-22 orders of magnitude. Our results indicate that there is a trade-off between the accuracy and the precision of the models. Previous studies suggested that ART may lead to more accurate results in well-documented exposure situations. However, the choice of the adequate model should ultimately be determined by the quality of the available exposure data: if the practitioner is uncertain concerning two or more decisions in the entry parameters, Stoffenmanager may be more robust than ART.
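Because these tools are essentially multiplicative chains of modifying factors, a factor's share of the total exposure range can be read off as its span in orders of magnitude, which is the quantity behind the ∼75% figure above. The factor names and ranges below are illustrative assumptions, not the actual values in ECETOC TRA, Stoffenmanager or ART.

```python
import numpy as np

factors = {                          # (low, high) multiplier bounds, assumed
    "source_compartment": (1e-4, 1e4),
    "local_controls":     (0.03, 1.0),
    "dilution":           (0.1, 3.0),
}
spans = {k: np.log10(hi / lo) for k, (lo, hi) in factors.items()}
total = sum(spans.values())
for name, span in spans.items():
    print(f"{name}: {span:.1f} orders of magnitude "
          f"({100 * span / total:.0f}% of the total range)")
```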
Abstract:
Cooperation and coordination are desirable behaviors that are fundamental for the harmonious development of society. People need to rely on cooperation with other individuals in many aspects of everyday life, such as teamwork and economic exchange in anonymous markets. However, cooperation may easily fall prey to exploitation by selfish individuals who only care about short-term gain. For cooperation to evolve, specific conditions and mechanisms are required, such as kinship, direct and indirect reciprocity through repeated interactions, or external interventions such as punishment. In this dissertation we investigate the effect of the network structure of the population on the evolution of cooperation and coordination. We consider several kinds of static and dynamical network topologies, such as Barabási-Albert networks, social network models and spatial networks. We perform numerical simulations and laboratory experiments using the Prisoner's Dilemma and coordination games in order to contrast human behavior with theoretical results. We show by numerical simulations that even a moderate amount of random noise on the links of the Barabási-Albert scale-free network causes a significant loss of cooperation, to the point that cooperation almost vanishes altogether in the Prisoner's Dilemma when the noise rate is high enough. Moreover, when we consider fixed social-like networks, we find that current models of social networks may allow cooperation to emerge and to be at least as robust as in scale-free networks. In the framework of spatial networks, we investigate whether cooperation can evolve and be stable when agents move randomly or perform Lévy flights in a continuous space. We also consider discrete space, adopting purposeful mobility and a binary birth-death process to discover emergent cooperative patterns. The fundamental result is that cooperation may be enhanced when this migration is opportunistic or even when agents follow very simple heuristics. In the experimental laboratory, we investigate the issue of social coordination between individuals located on networks of contacts. In contrast to simulations, we find that human players' dynamics do not converge to the efficient outcome more often in a social-like network than in a random network. In another experiment, we study the behavior of people who play a pure coordination game in a spatial environment in which they can move around and in which changing convention is costly. We find that each convention forms homogeneous clusters and is adopted by approximately half of the individuals. When we provide them with global information, i.e., the number of subjects currently adopting one of the conventions, global consensus is reached in most, but not all, cases. Our results allow us to extract the heuristics used by the participants and to build a numerical simulation model that agrees very well with the experiments. Our findings have important implications for policymakers intending to promote specific, desired behaviors in a mobile population. Furthermore, we carry out an experiment with human subjects playing the Prisoner's Dilemma game in a diluted grid where people are able to move around. In contrast to previous results on purposeful rewiring in relational networks, we find no noticeable effect of mobility in space on the level of cooperation. Clusters of cooperators form momentarily but within a few rounds they dissolve, as cooperators at the boundaries stop tolerating being cheated upon.
Our results highlight the difficulties that mobile agents face in establishing a cooperative environment in a spatial setting without a device such as reputation or the possibility of retaliation, i.e., punishment. Finally, we test experimentally the evolution of cooperation in social networks in a setting where we allow people to make or break links at will. In this work we pay particular attention to whether information on an individual's actions is freely available to potential partners or not. Studying the role of information is relevant because information on other people's actions is often not available for free: a recruiting firm may need to call a job candidate's references, a bank may need to find out about the credit history of a new client, etc. We find that people cooperate almost fully when information on their actions is freely available to their potential partners. Cooperation is less likely, however, if people have to pay about half of what they gain from cooperating with a cooperator. Cooperation declines even further if people have to pay a cost that is almost equivalent to the gain from cooperating with a cooperator. Thus, costly information on potential neighbors' actions can undermine the incentive to cooperate in dynamical networks.
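A minimal sketch of the first kind of simulation described, assuming imitate-the-best updating, a weak Prisoner's Dilemma payoff matrix and a small random link-rewiring rate as the "noise"; all payoff values, rates and sizes are illustrative, not the dissertation's exact protocol.

```python
import random
import networkx as nx

random.seed(0)
G = nx.barabasi_albert_graph(500, m=3)
strategy = {v: random.random() < 0.5 for v in G}   # True = cooperate
T, R, P, S = 1.8, 1.0, 0.0, 0.0                    # weak PD payoffs (assumed)

def payoff(v):
    return sum((R if strategy[u] else S) if strategy[v]
               else (T if strategy[u] else P) for u in G[v])

for _ in range(2000):
    v = random.choice(list(G))
    strategy[v] = strategy[max(list(G[v]) + [v], key=payoff)]  # imitate the best
    if random.random() < 0.01 and G.degree(v) > 0:             # link noise
        u = random.choice(list(G[v]))
        w = random.choice(list(G))
        if w != v and not G.has_edge(v, w):
            G.remove_edge(v, u)
            G.add_edge(v, w)

print("cooperator fraction:", sum(strategy.values()) / len(G))
```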
Abstract:
Aim: Modelling species at the assemblage level is required to make effective forecasts of global change impacts on diversity and ecosystem functioning. Community predictions may be achieved using macroecological properties of communities (MEM), or by stacking individual species distribution models (S-SDMs). To obtain more realistic predictions of species assemblages, the SESAM framework suggests applying successive filters to the initial species source pool by combining different modelling approaches and rules. Here we provide a first test of this framework in mountain grassland communities. Location: The western Swiss Alps. Methods: Two implementations of the SESAM framework were tested: a "Probability ranking" rule based on species richness predictions and raw probabilities from SDMs, and a "Trait range" rule that uses the predicted upper and lower bounds of the community-level distribution of three functional traits (vegetative height, specific leaf area and seed mass) to constrain a pool of environmentally filtered species from binary SDM predictions. Results: We showed that, as expected, all independent constraints contributed to reducing species richness overprediction. Only the "Probability ranking" rule slightly but significantly improved predictions of community composition. Main conclusion: We tested various ways to implement the SESAM framework by integrating macroecological constraints into S-SDM predictions, and report one that is able to improve compositional predictions. We discuss possible improvements, such as refining the causality and precision of the environmental predictors, using other assembly rules and testing other types of ecological or functional constraints.
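The "Probability ranking" rule admits a compact sketch: at each site, admit the SR(site) species with the highest SDM probabilities, where SR is the species richness predicted by a macroecological model. The shapes and the richness estimate below are illustrative assumptions standing in for the study's fitted models.

```python
import numpy as np

rng = np.random.default_rng(2)
prob = rng.random((80, 50))      # species x sites SDM probabilities (assumed)
richness = prob.sum(axis=0).round().astype(int)  # stand-in for the MEM SR model

assemblage = np.zeros_like(prob, dtype=bool)
for site in range(prob.shape[1]):
    top = np.argsort(prob[:, site])[::-1][: richness[site]]
    assemblage[top, site] = True   # keep the SR most probable species

print("species predicted present at site 0:", np.flatnonzero(assemblage[:, 0]))
```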
Abstract:
Ingvaldsen et al. comment on our study assessing global fish interchanges between the North Atlantic and Pacific oceans for more than 500 species over the entire 21st century. They propose that discrepancies between our model projections and observed data for cod in the Barents Sea are the result of the choice of Atmosphere-Ocean General Circulation Models (AOGCMs). We address this assertion here, re-running the cod model with additional observation data from the Barents Sea [1,3], and show that the lack of open-access, archived data for the Barents Sea was the primary cause of the local prediction mismatch. This finding underscores the importance of systematically depositing biodiversity data in global databases.
Abstract:
Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays, we are facing many problems ranging from water prospection to sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost associated with performing complex flow simulations for each realization.

In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulations to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run for this subset, based on which inference is made. Our objective is to increase the performance of this approach by using all of the available information and not solely the subset of exact responses. Error models are proposed to correct the approximate responses following a machine learning approach. For the subset identified by a classical approach (here the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model, correct the ensemble of approximate responses, and predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in the accuracy and robustness of the uncertainty propagation.

The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between the proxy and exact flow responses. In the second part of this thesis, the strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. The individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications.

The concept of a functional error model is useful not only in the context of uncertainty propagation, but also, and maybe even more so, to perform Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice to ensure that the generated realizations are sampled in accordance with the observations. However, this approach suffers from low acceptance rates in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, where the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of the proposal. In the third part of the thesis, a proxy is coupled to an error model to provide the preliminary evaluation for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation.

An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires devising an iterative strategy, such that, as new flow simulations are performed, the error model is improved by incorporating the new information. This is discussed in the fourth part of the thesis, in which we apply the methodology to a problem of saline intrusion in a coastal aquifer.
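The core of the functional error model can be sketched as follows, assuming toy exponential "flow responses": FPCA (here via an SVD) reduces both proxy and exact curves to a few scores, a linear regression is fitted between the two score sets on the training subset, and the remaining proxy curves are corrected through that map. Everything below (curve family, sizes, number of components) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.linspace(0.0, 1.0, 100)
n, n_train, k = 200, 20, 3

exact = np.array([np.exp(-t / s) for s in rng.uniform(0.2, 1.0, n)])
proxy = exact ** 1.2 + 0.01 * rng.standard_normal(exact.shape)  # biased proxy

def fpca(curves, k):
    """Mean, k leading principal functions, and the curves' scores."""
    mean = curves.mean(axis=0)
    _, _, vt = np.linalg.svd(curves - mean, full_matrices=False)
    basis = vt[:k]
    return mean, basis, (curves - mean) @ basis.T

mp, bp, sp = fpca(proxy, k)                 # proxy scores for all realizations
me, be, se = fpca(exact[:n_train], k)       # exact scores on the training set
coef, *_ = np.linalg.lstsq(sp[:n_train], se, rcond=None)   # score regression
corrected = me + (sp @ coef) @ be           # predicted exact curves, all n

print("mean |error| before:", np.abs(proxy - exact).mean().round(4),
      "after:", np.abs(corrected - exact).mean().round(4))
```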
Abstract:
OBJECTIVE: We examined the influence of clinical, radiologic, and echocardiographic characteristics on antithrombotic choice in patients with cryptogenic stroke (CS) and patent foramen ovale (PFO), hypothesizing that features suggestive of paradoxical embolism might lead to greater use of anticoagulation. METHODS: The Risk of Paradoxical Embolism Study combined 12 databases to create the largest dataset of patients with CS and known PFO status. We used generalized linear mixed models with a random effect of component study to explore whether anticoagulation was preferentially selected based on the following: (1) younger age and absence of vascular risk factors, (2) "high-risk" echocardiographic features, and (3) neuroradiologic findings. RESULTS: A total of 1,132 patients with CS and PFO treated with anticoagulation or antiplatelets were included. Overall, 438 participants (39%) were treated with anticoagulation with a range (by database) of 22% to 54%. Treatment choice was not influenced by age or vascular risk factors. However, neuroradiologic findings (superficial or multiple infarcts) and high-risk echocardiographic features (large shunts, shunt at rest, and septal hypermobility) were predictors of anticoagulation use. CONCLUSION: Both antithrombotic regimens are widely used for secondary stroke prevention in patients with CS and PFO. Radiologic and echocardiographic features were strongly associated with treatment choice, whereas conventional vascular risk factors were not. Prior observational studies are likely to be biased by confounding by indication.
Abstract:
The main outcome of this master's thesis is an innovative solution that supports the choice of a business process modeling methodology. The potential users of this tool are people with a background in business process modeling and the ability to collect the required information about an organization's business processes. The thesis establishes the importance of business process modeling in the implementation of an organization's strategic goals by situating the concept within Business Process Management (BPM) and its particular case, Business Process Reengineering (BPR). To support the theoretical outcomes of the thesis, a case study of the Northern Dimension Research Centre (NORDI) at Lappeenranta University of Technology was conducted. This example demonstrates how to apply business process modeling methodologies in practice, how business process models can be useful for BPM and BPR initiatives, and how to apply the proposed solution to the choice of a business process modeling methodology.
Abstract:
Systems biology is a new, emerging and rapidly developing multidisciplinary research field that aims to study biochemical and biological systems from a holistic perspective, with the goal of providing a comprehensive, system-level understanding of cellular behaviour. In this way, it addresses one of the greatest challenges faced by contemporary biology: to comprehend the function of complex biological systems. Systems biology combines various methods that originate from scientific disciplines such as molecular biology, chemistry, engineering sciences, mathematics, computer science and systems theory. Systems biology, unlike “traditional” biology, focuses on high-level concepts such as network, component, robustness, efficiency, control, regulation, hierarchical design, synchronization and concurrency, among many others. The very terminology of systems biology is “foreign” to “traditional” biology; it marks a drastic shift in the research paradigm and indicates the close linkage of systems biology to computer science. One of the basic tools utilized in systems biology is the mathematical modelling of life processes, tightly linked to experimental practice. The studies contained in this thesis revolve around a number of challenges commonly encountered in computational modelling in systems biology. The research comprises the development and application of a broad range of methods, originating in the fields of computer science and mathematics, for the construction and analysis of computational models in systems biology. In particular, the performed research is set up in the context of two biological phenomena chosen as modelling case studies: 1) the eukaryotic heat shock response and 2) the in vitro self-assembly of intermediate filaments, one of the main constituents of the cytoskeleton. The range of presented approaches spans from heuristic, through numerical and statistical, to analytical methods applied in the effort to formally describe and analyse the two biological processes. We note, however, that although applied to certain case studies, the presented methods are not limited to them and can be utilized in the analysis of other biological mechanisms as well as complex systems in general. The full range of developed and applied modelling techniques, as well as model analysis methodologies, constitutes a rich modelling framework. Moreover, the presentation of the developed methods, their application to the two case studies and the discussions concerning their potentials and limitations point to the difficulties and challenges one encounters in the computational modelling of biological systems. The problems of model identifiability, model comparison, model refinement, model integration and extension, the choice of the proper modelling framework and level of abstraction, and the choice of the proper scope of the model run through this thesis.
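As a flavour of the kinetic models such case studies involve, here is a minimal mass-action ODE sketch in the spirit of in vitro filament self-assembly: monomers M dimerise reversibly and dimers elongate filament mass F. The species, reactions and rate constants are invented for illustration and are not the thesis's actual models.

```python
import numpy as np
from scipy.integrate import solve_ivp

k_on, k_off, k_el = 1.0, 0.1, 0.5   # illustrative rate constants

def rhs(t, y):
    M, D, F = y                      # monomer, dimer, filament mass (dimer units)
    dM = -2 * k_on * M**2 + 2 * k_off * D
    dD = k_on * M**2 - k_off * D - k_el * D * F
    dF = k_el * D * F                # filaments grow by incorporating dimers
    return [dM, dD, dF]

sol = solve_ivp(rhs, (0.0, 20.0), [1.0, 0.0, 0.01], rtol=1e-8)
print("final monomer/dimer/filament mass:", np.round(sol.y[:, -1], 4))
```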
Abstract:
The reduction in the number of medical students choosing general surgery as a career is remarkable. In this context, new possibilities in the field of surgical education should be developed to combat this lack of interest. In this study, a surgical training program based on learning with low-fidelity bench models is designed as a complementary alternative to the various methodologies used in the teaching of basic surgical skills during medical education, and as a way to develop personal interest in the career choice.
Abstract:
This doctoral thesis concerns the active galactic nucleus (AGN) most often referred to with the catalogue number OJ287. The publications in the thesis present new discoveries of the system in the context of a supermassive binary black hole model. In addition, the introduction discusses general characteristics of the OJ287 system and the physical fundamentals behind these characteristics. The place of OJ287 in the hierarchy of known types of AGN is also discussed. The introduction presents a large selection of fundamental physics required to have a basic understanding of active galactic nuclei, binary black holes, relativistic jets and accretion disks. In particular, the general relativistic nature of the orbits of close binaries of supermassive black holes is explored in some detail. Analytic estimates of some of the general relativistic effects in such a binary are presented, as well as numerical methods to calculate the effects more precisely. It is also shown how these results can be applied to the OJ287 system. The binary orbit model forms the basis for models of the recurring optical outbursts in the OJ287 system. In the introduction, two physical outburst models are presented in some detail and compared. The radiation hydrodynamics of the outbursts are discussed and optical light curve predictions are derived. The precursor outbursts studied in Paper III are also presented, and tied into the model of OJ287. To complete the discussion of the observable features of OJ287, the nature of the relativistic jets in the system, and in active galactic nuclei in general, is discussed. The basic physics of relativistic jets is presented, with additional detail added in the form of helical jet models. The results of Papers II, IV and V concerning the jet of OJ287 are presented, and their relation to other facets of the binary black hole model is discussed. As a whole, the introduction serves as a guide, though terse, for the physics and numerical methods required to successfully understand and simulate a close binary of supermassive black holes. For this purpose, the introduction necessarily combines a large number of both fundamental and specific results from broad disciplines like general relativity and radiation hydrodynamics. With the material included in the introduction, the publications of the thesis, which present new results with a much narrower focus, can be readily understood. Of the publications, Paper I presents newly discovered optical data points for OJ287, detected on archival astronomical plates from the Harvard College Observatory. These data points show the 1900 outburst of OJ287 for the first time. In addition, new data points covering the 1913 outburst allowed the determination of the start of the outburst with more precision than was possible before. These outbursts were then successfully numerically modelled with an N-body simulation of the OJ287 binary and accretion disc. In Paper II, mechanisms for the spin-up of the secondary black hole in OJ287 via interaction with the primary accretion disc and the magnetic fields in the system are discussed. Timescales for spin-up and alignment via both processes are estimated. It is found that the secondary black hole likely has a high spin. Paper III reports a new outburst of OJ287 in March 2013. The outburst was found to be rather similar to the ones reported in 1993 and 2004. All these outbursts happened just before the main outburst season, and are called precursor outbursts.
In this paper, a mechanism was proposed for the precursor outbursts, in which the secondary black hole collides with a gas cloud in the corona of the primary accretion disc. From this, estimates of the brightness and timescales of the precursors were derived, as well as a prediction of the timing of the next precursor outburst. In Paper IV, observations from the 2004–2006 OJ287 observing program are used to investigate the existence of short periodicities in OJ287. The existence of a ~50 day quasiperiodic component is confirmed. In addition, statistically significant 250 day and 3.5 day periods are found. Primary black hole accretion of a spiral density wave in the accretion disc is proposed as the source of the 50 day period, with numerical simulations supporting these results. Lorentz-contracted jet re-emission is then proposed as the reason for the 3.5 day timescale. Paper V fits optical observations and mm and cm radio observations of OJ287 with a helical jet model. The jet is found to have a spine–sheath structure, with the sheath having a much lower Lorentz gamma factor than the spine. The sheath opening angle and Lorentz factor, as well as the helical wavelength of the jet, are reported for the first time.
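One of the analytic estimates the introduction refers to is the leading-order general-relativistic periastron advance per orbit, Δφ = 6πGM / (c²a(1 − e²)). The sketch below evaluates it for OJ287-like parameters; the mass, period and eccentricity are illustrative assumptions, and the higher-order and spin terms included in the thesis's full model are omitted.

```python
import math

G, c, M_sun, yr = 6.674e-11, 2.998e8, 1.989e30, 3.156e7

M = 1.8e10 * M_sun   # total binary mass, kg (illustrative)
T = 12.0 * yr        # orbital period, s (~12 yr, illustrative)
e = 0.66             # orbital eccentricity (illustrative)

a = (G * M * T**2 / (4 * math.pi**2)) ** (1 / 3)       # Kepler's third law
dphi = 6 * math.pi * G * M / (c**2 * a * (1 - e**2))   # radians per orbit

print(f"a = {a / 1.496e11:.0f} AU, "
      f"periastron advance = {math.degrees(dphi):.1f} deg/orbit")
```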