956 results for Multinomial logit models with random coefficients (RCL)


Relevance: 100.00%

Abstract:

Much attention has been paid to the effects of climate change on species' range reductions and extinctions. There is, however, surprisingly little information on how climate-change-driven threat may impact the tree of life and result in loss of phylogenetic diversity (PD). Some plant families and mammalian orders reveal nonrandom extinction patterns, but many other plant families do not. Do these discrepancies reflect different speciation histories, and does climate-induced extinction result in the same discrepancies among different groups? Answers to these questions require representative taxon sampling. Here, we combine phylogenetic analyses, species distribution modeling, and climate change projections for two of the largest plant families in the Cape Floristic Region (Proteaceae and Restionaceae), the second most diverse mammalian order in Southern Africa (Chiroptera), and an herbivorous insect genus (Platypleura) in the family Cicadidae to answer these questions. We model current and future species distributions to assess species threat levels over the next 70 years, and then compare projected with random PD survival. Results for these animal and plant clades are congruent: PD losses are not significantly higher under predicted extinction than under random extinction simulations. So far the evidence suggests that focusing resources on climate-threatened species alone may not result in disproportionate benefits for the preservation of evolutionary history.
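
The comparison of projected with random PD survival can be sketched as a simple simulation: compute the phylogenetic diversity retained when the climate-threatened species are removed, then compare it with the distribution of PD retained when the same number of species is removed at random. A minimal Python sketch on a toy tree (taxa, branch lengths and the threatened set are invented for illustration, not the study's data):

```python
import random

# Toy tree as nested tuples: (branch_length, subtree-or-tip-name).
# Hypothetical taxa and branch lengths for illustration only.
tree = (0.0, [
    (1.0, [(2.0, "sp1"), (2.0, "sp2")]),
    (3.0, [(1.5, "sp3"), (1.5, [(0.5, "sp4"), (0.5, "sp5")])]),
])

def retained_pd(node, survivors):
    """PD = total branch length subtending at least one surviving tip."""
    length, payload = node
    if isinstance(payload, str):                      # tip
        alive = payload in survivors
        return (length if alive else 0.0), alive
    total, any_alive = 0.0, False
    for child in payload:
        pd_child, alive_child = retained_pd(child, survivors)
        total += pd_child
        any_alive = any_alive or alive_child
    return total + (length if any_alive else 0.0), any_alive

all_tips = {"sp1", "sp2", "sp3", "sp4", "sp5"}
threatened = {"sp4", "sp5"}                           # species projected to lose their range

pd_projected, _ = retained_pd(tree, all_tips - threatened)
random_pd = []
for _ in range(10_000):                               # random extinction simulations
    lost = set(random.sample(sorted(all_tips), len(threatened)))
    pd_r, _ = retained_pd(tree, all_tips - lost)
    random_pd.append(pd_r)

print(f"PD after projected extinctions: {pd_projected:.2f}")
print(f"Mean PD after random extinctions: {sum(random_pd) / len(random_pd):.2f}")
```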

Relevance: 100.00%

Abstract:

Species distribution models (SDMs) are widely used to explain and predict species ranges and environmental niches. They are most commonly constructed by inferring species' occurrence-environment relationships using statistical and machine-learning methods. The variety of methods that can be used to construct SDMs (e.g. generalized linear/additive models, tree-based models, maximum entropy) and the variety of ways that such models can be implemented permit substantial flexibility in SDM complexity. Building models with an appropriate amount of complexity for the study objectives is critical for robust inference. We characterize complexity as the shape of the inferred occurrence-environment relationships and the number of parameters used to describe them, and ask when additional complexity is informative rather than superfluous. By building underfit models, with insufficient flexibility to describe observed occurrence-environment relationships, we risk misunderstanding the factors shaping species distributions. By building overfit models, with excessive flexibility, we risk inadvertently ascribing pattern to noise or building opaque models. However, model selection can be challenging, especially when comparing models constructed under different modeling approaches. Here we argue for a more pragmatic approach: researchers should constrain the complexity of their models based on study objectives, attributes of the data, and an understanding of how these interact with the underlying biological processes. We discuss guidelines for balancing underfitting against overfitting and, consequently, how complexity affects decisions made during model building. Although some generalities are possible, our discussion reflects differences in opinion that favor simpler versus more complex models. We conclude that combining insights from both simple and complex SDM-building approaches best advances our knowledge of current and future species ranges.
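
The underfitting/overfitting trade-off described here can be illustrated by fitting occurrence-environment models of increasing complexity to the same presence/absence data and comparing cross-validated performance. A minimal sketch with scikit-learn on synthetic data (the environmental gradient, the hump-shaped true response and the sample size are all invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Synthetic occurrence data along one environmental gradient:
# the true response is unimodal (a hump-shaped niche).
env = rng.uniform(-3, 3, size=(400, 1))
p_true = 1 / (1 + np.exp(-(1.5 - env[:, 0] ** 2)))
occ = rng.binomial(1, p_true)

# Occurrence-environment models of increasing complexity:
# linear (underfits the hump), quadratic, degree-8 polynomial (may overfit).
for degree in (1, 2, 8):
    model = make_pipeline(PolynomialFeatures(degree),
                          LogisticRegression(max_iter=5000))
    auc = cross_val_score(model, env, occ, cv=5, scoring="roc_auc").mean()
    print(f"degree {degree}: cross-validated AUC = {auc:.3f}")
```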

Relevance: 100.00%

Abstract:

We simulated a meta-population with random dispersal among demes but local mating within demes to investigate conditions under which a dominant female-determining gene W, with no individual selection advantage, can invade and become fixed in females, changing the population from male to female heterogamety. Starting with one mutant W in a single deme, the interaction of sex ratio selection and random genetic drift causes W to be fixed among females more often than a comparable neutral mutation with no influence on sex determination, even when YY males have slightly reduced viability. Meta-population structure and interdeme selection can also favour the fixation of W. The reverse transition from female to male heterogamety can also occur with higher probability than for a comparable neutral mutation. These results help to explain the involvement of sex-determining genes in the evolution of sex chromosomes and in sexual selection and speciation.
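
The relevant benchmark in this abstract is a comparable neutral mutation with no influence on sex determination. A minimal Wright-Fisher sketch of that neutral baseline (fixation probability of a single-copy neutral allele, expected to be about 1/2N) is shown below; the full W-invasion model with sex ratio selection, YY viability effects and deme structure is considerably more involved and is not reproduced here. Population size and replicate count are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def neutral_fixation_prob(n_diploid=100, replicates=20_000):
    """Fraction of replicates in which a single neutral mutant copy fixes."""
    two_n = 2 * n_diploid
    fixed = 0
    for _ in range(replicates):
        count = 1                                          # one mutant copy
        while 0 < count < two_n:
            count = rng.binomial(two_n, count / two_n)     # Wright-Fisher sampling
        fixed += (count == two_n)
    return fixed / replicates

p = neutral_fixation_prob()
print(f"Simulated fixation probability: {p:.4f} (expected ~ {1 / 200:.4f})")
```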

Relevance: 100.00%

Abstract:

BACKGROUND: Acute alcohol consumption has been reported to be an important risk factor for injury, but clear scientific evidence on issues such as injury type is not available. The present study aims to improve knowledge of the importance of alcohol consumption as an injury determinant with regard to two dimensions of injury type, namely the nature of the injury and the body region involved. METHODS: Risk relationships between the two injury type components and acute alcohol use were estimated through multinomial and logistic regression models based on data from 7,529 patients, 3,682 of whom had injury diagnoses, gathered in a Swiss emergency department. RESULTS: Depending on the type of injury, between 31.1% and 48.7% of casualties reported alcohol use before emergency department attendance. The multinomial regression models show that even low alcohol levels are consistently associated with nearly all natures of injury and body regions. A persistent dose-response relationship between alcohol level and risk was observed for almost all injury types. CONCLUSIONS: The results highlight the importance and consistency of the association between low and moderate levels of acute alcohol consumption and all types of injury. No body region or nature of injury was free of an association between alcohol and injury. Public health, prevention, and care implications are considered.
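
The multinomial models described here relate a categorical injury-type outcome to acute alcohol level. A minimal sketch of that model family with statsmodels on synthetic data (variable names, injury categories and coefficients are invented, not the study's data):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000

# Synthetic predictors: blood alcohol level (g/L) and age.
alcohol = rng.exponential(0.4, n)
age = rng.uniform(18, 80, n)

# Synthetic 3-category outcome (e.g. fracture / open wound / contusion),
# with alcohol shifting the odds of the first two categories.
linpred = np.column_stack([0.8 * alcohol, 0.4 * alcohol, np.zeros(n)])
probs = np.exp(linpred) / np.exp(linpred).sum(axis=1, keepdims=True)
injury = np.array([rng.choice(3, p=p) for p in probs])

X = sm.add_constant(pd.DataFrame({"alcohol": alcohol, "age": age}))
model = sm.MNLogit(injury, X).fit(disp=False)
print(model.summary())
print(np.exp(model.params))   # odds ratios relative to the reference category
```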

Relevance: 100.00%

Abstract:

Background: Many studies have found considerable variation in the resource intensity of physical therapy episodes. Although they have identified several patient- and provider-related factors, few studies have examined their relative explanatory power. We sought to quantify the contributions of patients and providers to these differences and to examine how effective Swiss regulations are (a nine-session ceiling per prescription and a bonus for first treatments). Methods: Our sample consisted of 87,866 first physical therapy episodes performed by 3,365 physiotherapists based on referrals by 6,131 physicians. We modeled the number of visits per episode using a multilevel log-linear regression with crossed random effects for physiotherapists and physicians and with fixed effects for cantons. The three levels of explanatory variables were patient, physiotherapist and physician characteristics. Results: The median number of sessions was nine (interquartile range 6-13). Physical therapy use increased with age, female sex, higher health care costs, lower deductibles, surgery and specific conditions. Use rose with the share of nine-session episodes among physiotherapists or physicians, but fell with the share of new treatments. Geographical area had no influence. Most of the variance was explained at the patient level, but the available factors explained only 4% of it. Physiotherapists and physicians explained only 6% and 5% of the variance respectively, although the available factors explained most of this variance. The regulations were the most powerful factors. Conclusion: Against the backdrop of an abundant physical therapy supply, the Swiss financial regulations did not restrict utilization. Given that patient-related factors explained most of the variance, this group should be subject to closer scrutiny. Moreover, further research is needed on the determinants of patient demand.
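
A log-linear model with crossed random effects for physiotherapists and physicians can be approximated in statsmodels by treating the whole sample as a single group and declaring each clustering factor as a variance component. A small synthetic sketch (all variable names, effect sizes and data are invented; the real analysis also included canton fixed effects and many more covariates):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, n_physio, n_md = 1500, 50, 40

df = pd.DataFrame({
    "physio": rng.integers(0, n_physio, n),
    "physician": rng.integers(0, n_md, n),
    "age": rng.uniform(20, 90, n),
    "female": rng.integers(0, 2, n),
})
physio_eff = rng.normal(0, 0.15, n_physio)        # provider-level heterogeneity
md_eff = rng.normal(0, 0.10, n_md)
df["log_visits"] = (2.0 + 0.004 * df["age"] + 0.05 * df["female"]
                    + physio_eff[df["physio"]] + md_eff[df["physician"]]
                    + rng.normal(0, 0.3, n))

# One overall group; physiotherapist and physician enter as crossed variance components.
df["all"] = 1
model = smf.mixedlm("log_visits ~ age + female", data=df, groups="all",
                    re_formula="0",
                    vc_formula={"physio": "0 + C(physio)",
                                "physician": "0 + C(physician)"})
result = model.fit()
print(result.summary())
```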

Relevance: 100.00%

Abstract:

General introduction. The Human Immunodeficiency Virus / Acquired Immunodeficiency Syndrome (HIV/AIDS) epidemic, despite recent encouraging announcements by the World Health Organization (WHO), remains one of the world's major health care challenges. The present work lies in the field of health care management; in particular, we aim to evaluate behavioural and non-behavioural interventions against HIV/AIDS in developing countries through a deterministic simulation model, in both human and economic terms. We focus on assessing the effectiveness of antiretroviral therapies (ART) in heterosexual populations living in less developed countries where the epidemic has generalized (formerly defined by the WHO as pattern II countries). The model is calibrated using Botswana as a case study; however, it can be adapted to other countries with similar transmission dynamics.

The first part of this thesis reviews the main mathematical concepts describing the transmission of infectious agents in general, with a focus on human immunodeficiency virus (HIV) transmission. We also review deterministic models assessing HIV interventions, with a focus on models aimed at African countries. This review helps us to recognize the need for a generic model and allows us to define a typical structure for such a generic deterministic model.

The second part describes the main feedback loops underlying the dynamics of HIV transmission. These loops represent the foundation of our model. This part also provides a detailed description of the model, including the various infected and non-infected population groups, the types of sexual relationships, the infection matrices, and important factors affecting HIV transmission such as condom use, other sexually transmitted diseases (STDs) and male circumcision. We also included in the model a dynamic life expectancy calculator which, to our knowledge, is a unique feature allowing more realistic cost-efficiency calculations. Various intervention scenarios are evaluated using the model, each combining ART with other interventions, namely circumcision, campaigns aimed at behavioural change (Abstain, Be faithful, use Condoms, also called ABC campaigns), and treatment of other STDs. A cost-efficiency analysis (CEA) is performed for each scenario; it consists of measuring the cost per disability-adjusted life year (DALY) averted. This part also describes the model calibration and validation, including a sensitivity analysis.

The third part reports the results and discusses the model's limitations. In particular, we argue that the combinations of ART with ABC campaigns and of ART with treatment of other STDs are the most cost-efficient interventions through 2020. The main limitations include the difficulty of modeling the complexity of sexual relationships, the omission of international migration, and ignoring variability in infectiousness according to AIDS stage.

The fourth part reviews the major contributions of the thesis and discusses the model's generalizability and flexibility. Finally, we conclude that by selecting an adequate mix of interventions, policy makers can significantly reduce adult prevalence in Botswana over the coming twenty years, provided that the country and its donors can bear the cost involved.

Part I: Context and literature review. In this section, after a brief introduction to the general literature, we focus in section two on the key mathematical concepts describing the transmission of infectious agents in general, with a focus on HIV transmission. Section three provides a description of HIV policy models, with a focus on deterministic models. This leads us in section four to envision the need for a generic deterministic HIV policy model and to briefly describe the structure of such a generic model applicable to countries with a generalized HIV/AIDS epidemic, also defined as pattern II countries by the WHO.
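
The deterministic core of such a policy model is a set of compartmental differential equations for susceptible and infected population groups. A heavily simplified sketch (two compartments, constant rates; all parameter values are invented, not the thesis' calibration) of how such a system is typically integrated, with and without a treatment effect that lowers infectiousness:

```python
import numpy as np
from scipy.integrate import odeint

def si_model(y, t, beta, mu, recruitment):
    """Minimal susceptible-infected model of heterosexual HIV transmission."""
    s, i = y
    n = s + i
    new_infections = beta * s * i / n
    ds = recruitment - new_infections - mu * s
    di = new_infections - (mu + 0.05) * i        # extra 0.05: AIDS-related mortality
    return [ds, di]

t = np.linspace(0, 40, 401)                      # years
y0 = [0.99e6, 0.01e6]                            # initial susceptible / infected

for label, beta in [("no ART", 0.35), ("ART lowers infectiousness", 0.20)]:
    sol = odeint(si_model, y0, t, args=(beta, 0.02, 25_000))
    prevalence = sol[-1, 1] / sol[-1].sum()
    print(f"{label}: adult prevalence after 40 years = {prevalence:.1%}")
```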

Relevance: 100.00%

Abstract:

Molecular characterization of Paracoccidioides brasiliensis variant strains that had been preserved under mineral oil for decades was carried out by random amplified polymorphic DNA (RAPD) analysis. For P. brasiliensis variants in the transitional phase and strains with typical morphology, RAPD produced reproducible polymorphic amplification products that differentiated them. A dendrogram based on the generated RAPD patterns placed the 14 P. brasiliensis strains into five groups at a similarity coefficient of 72%. A high correlation between the genotypic and phenotypic characteristics of the strains was observed. A 750 bp RAPD fragment found only in the wild-type phenotype strains was cloned and sequenced. Genetic similarity analysis using BLASTx suggested that this RAPD marker represents a putative domain of a hypothetical flavin-binding monooxygenase (FMO)-like protein of Neurospora crassa.
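
The dendrogram step can be sketched as hierarchical clustering of a binary band-presence matrix with a similarity coefficient. A minimal sketch with SciPy on invented data (strain names and band patterns are placeholders, not the study's profiles):

```python
import numpy as np
from scipy.cluster.hierarchy import dendrogram, linkage
from scipy.spatial.distance import pdist

# Rows = strains, columns = presence/absence of RAPD bands (invented data).
strains = ["Pb01", "Pb02", "Pb03", "Pb04", "Pb05"]
bands = np.array([
    [1, 1, 0, 1, 0, 1, 1, 0],
    [1, 1, 0, 1, 0, 1, 0, 0],
    [0, 1, 1, 0, 1, 0, 1, 1],
    [0, 1, 1, 0, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 1, 1, 0],
])

# Jaccard distance = 1 - similarity coefficient; UPGMA ("average") linkage.
dist = pdist(bands, metric="jaccard")
tree = linkage(dist, method="average")
dn = dendrogram(tree, labels=strains, no_plot=True)
print("Leaf order in the dendrogram:", dn["ivl"])
```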

Relevance: 100.00%

Abstract:

The application of compositional data analysis through log-ratio transformations corresponds to a multinomial logit model for the shares themselves. This model is characterized by the property of Independence of Irrelevant Alternatives (IIA). IIA states that the odds ratio, in this case the ratio of shares, is invariant to the addition or deletion of outcomes to the problem. It is exactly this invariance of the ratio that underlies the commonly used zero-replacement procedure in compositional data analysis. In this paper we investigate using the nested logit model, which does not embody IIA, and an associated zero-replacement procedure, and compare its performance with that of the more usual approach of using the multinomial logit model. Our comparisons exploit a data set that combines voting data by electoral division with corresponding census data for each division for the 2001 Federal election in Australia.
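
The IIA property referred to here can be verified numerically: under a multinomial logit, the ratio of two shares depends only on their own utilities, so it is unchanged when another alternative is added or deleted. A small sketch (the utilities are arbitrary illustrative numbers):

```python
import numpy as np

def mnl_shares(utilities):
    """Multinomial logit shares: softmax of the utilities."""
    e = np.exp(np.asarray(utilities, dtype=float))
    return e / e.sum()

u_three = [1.0, 0.2, -0.5]          # alternatives A, B, C
u_two = [1.0, 0.2]                  # alternative C deleted

s3 = mnl_shares(u_three)
s2 = mnl_shares(u_two)

# IIA: the A/B ratio of shares is invariant to deleting C.
print("A/B ratio with C present:", s3[0] / s3[1])
print("A/B ratio with C removed:", s2[0] / s2[1])

# The same invariance underlies the additive log-ratio view of the shares:
print("log-ratio log(A/B):", np.log(s3[0] / s3[1]), "= u_A - u_B =", 1.0 - 0.2)
```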

Relevance: 100.00%

Abstract:

Interaction effects are usually modeled by means of moderated regression analysis. Structural equation models with non-linear constraints make it possible to estimate interaction effects while correcting for measurement error. Of the various specifications, Jöreskog and Yang's (1996, 1998), likely the most parsimonious, has been chosen and further simplified. Up to now, only direct effects have been specified, thus wasting much of the capability of the structural equation approach. This paper presents and discusses an extension of Jöreskog and Yang's specification that can handle direct, indirect and interaction effects simultaneously. The model is illustrated by a study of the effects of an interactive style of use of budgets on both company innovation and performance.
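
As a point of reference for the simpler approach mentioned first (moderated regression), an interaction effect is estimated by adding a product term to an ordinary regression; the SEM specification discussed in the paper additionally corrects this for measurement error, which is not shown here. A minimal sketch with statsmodels on synthetic data (variable names and coefficients are invented to mirror the illustration in the abstract):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500

# Invented example: interactive use of budgets moderating the effect of
# innovation on performance.
df = pd.DataFrame({
    "interactive_use": rng.normal(0, 1, n),
    "innovation": rng.normal(0, 1, n),
})
df["performance"] = (0.3 * df["interactive_use"] + 0.4 * df["innovation"]
                     + 0.25 * df["interactive_use"] * df["innovation"]
                     + rng.normal(0, 1, n))

# Moderated regression: the product term carries the interaction effect.
model = smf.ols("performance ~ interactive_use * innovation", data=df).fit()
print(model.summary().tables[1])
```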

Relevance: 100.00%

Abstract:

Quantitative or algorithmic trading is the automation of investment decisions obeying a fixed or dynamic set of rules to determine trading orders. It has increasingly made its way up to 70% of the trading volume of some of the biggest financial markets, such as the New York Stock Exchange (NYSE). However, there is not a significant amount of academic literature devoted to it, owing to the private nature of investment banks and hedge funds. This project aims to review the literature and discuss the available models in a subject in which publications are scarce and infrequent. We review the basic and fundamental mathematical concepts needed for modeling financial markets, such as stochastic processes, stochastic integration and basic models for price and spread dynamics, which are necessary for building quantitative strategies. We also contrast these models with real market data sampled at one-minute frequency from the Dow Jones Industrial Average (DJIA). Quantitative strategies try to exploit two types of behavior: trend following or mean reversion. The former is grouped in the so-called technical models and the latter in so-called pairs trading. Technical models have been discarded by financial theoreticians, but we show that they can be properly cast as well-defined scientific predictors if the signal they generate passes the test of being a Markov time; that is, we can tell whether the signal has occurred or not by examining the information up to the current time, or, more technically, whether the event is F_t-measurable. On the other hand, the concept of pairs trading, or market-neutral strategy, is fairly simple. However, it can be cast in a variety of mathematical models, ranging from a method based on a simple Euclidean distance, to a co-integration framework, to stochastic differential equations such as the well-known mean-reverting Ornstein-Uhlenbeck process and its variations. A model for forecasting any economic or financial magnitude could be properly defined with scientific rigor but could also lack any economic value and be considered useless from a practical point of view. This is why this project could not be complete without a backtest of the mentioned strategies. Conducting a useful and realistic backtest is by no means a trivial exercise, since the "laws" that govern financial markets are constantly evolving in time. This is why we emphasize the calibration of the strategies' parameters to the given market conditions. We find that the parameters of technical models are more volatile than their counterparts from market-neutral strategies, and that calibration must be done at a high sampling frequency to constantly track the current market situation. As a whole, the goal of this project is to provide an overview of a quantitative approach to investment, reviewing basic strategies and illustrating them by means of a backtest with real financial market data. The sources of the data used in this project are Bloomberg for intraday time series and Yahoo! for daily prices. All numerical computations and graphics used and shown in this project were implemented in MATLAB from scratch as part of this thesis. No other mathematical or statistical software was used.
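
A core building block of the pairs-trading part is the Ornstein-Uhlenbeck mean-reverting process for the spread; its parameters can be estimated from discretely sampled data by regressing each observation on the previous one (an AR(1) fit). A minimal Python sketch on a simulated spread (all parameter values are invented; the thesis itself worked in MATLAB):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an Ornstein-Uhlenbeck spread: dX = theta*(mu - X) dt + sigma dW
theta, mu, sigma, dt, n = 2.0, 0.0, 0.3, 1 / 252, 5000
x = np.empty(n)
x[0] = 0.5
for t in range(1, n):
    x[t] = (x[t - 1] + theta * (mu - x[t - 1]) * dt
            + sigma * np.sqrt(dt) * rng.standard_normal())

# The exact discretization is an AR(1): X_{t+1} = a + b * X_t + noise,
# with b = exp(-theta*dt) and a = mu*(1 - b). Estimate a, b by least squares.
X = np.column_stack([np.ones(n - 1), x[:-1]])
a_hat, b_hat = np.linalg.lstsq(X, x[1:], rcond=None)[0]

theta_hat = -np.log(b_hat) / dt
mu_hat = a_hat / (1 - b_hat)
print(f"estimated theta = {theta_hat:.2f} (true {theta}), mu = {mu_hat:.3f} (true {mu})")
```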

Relevance: 100.00%

Abstract:

Objective: Converging evidence speaks in favor of an abnormal susceptibility to oxidative stress in schizophrenia. A decreased level of glutathione (GSH), the principal non-protein antioxidant and redox regulator, was observed both in cerebrospinal fluid and in the prefrontal cortex of schizophrenia patients (Do et al., 2000). Results: Schizophrenia patients have an abnormal GSH synthesis, most likely of genetic origin: two independent case-control studies showed a significant association between schizophrenia and a GAG trinucleotide repeat (TNR) polymorphism in the gene for the catalytic subunit (GCLC) of the key GSH-synthesizing enzyme glutamate-cysteine ligase (GCL). The most common TNR genotype, 7/7, was more frequent in controls, whereas the rarest TNR genotype, 8/8, was three times more frequent in patients. The disease-associated genotypes correlated with decreases in GCLC protein expression, GCL activity and GSH content. Such a redox dysregulation during development could underlie the structural and functional anomalies in connectivity: in experimental models, GSH deficit induced anomalies similar to those observed in patients. (a) Morphology: in animal models with GSH deficit during development, we observed in the prefrontal cortex a decreased dendritic spine density in pyramidal cells and an abnormal development of parvalbumin- (but not calretinin-) immunoreactive GABA interneurons in the anterior cingulate cortex. (b) Physiology: GSH depletion in hippocampal slices induces NMDA receptor hypofunction and an impairment of long-term potentiation. In addition, GSH deficit affected the modulation by dopamine of NMDA-induced Ca2+ responses in cultured cortical neurons: while dopamine enhanced NMDA responses in control neurons, it depressed NMDA responses in GSH-depleted neurons. An antagonist of D2, but not of D1, receptors prevented this depression, a mechanism contributing to the efficacy of antipsychotics. The redox-sensitive ryanodine receptors and L-type calcium channels underlie these observations. (c) Cognition: developing rats with low [GSH] and high dopamine show deficits in olfactory integration and in object recognition that appear earlier in males than in females, in analogy to the difference in psychosis onset between men and women. Conclusion: this clinical and experimental evidence, combined with the favorable outcome of a clinical trial of N-acetyl cysteine, a GSH precursor, on both negative symptoms (Berk et al., submitted) and mismatch negativity in an auditory oddball paradigm, supports the proposal that a GSH synthesis impairment of genetic origin represents, among other factors, one major risk factor for schizophrenia.

Relevance: 100.00%

Abstract:

Floor cleaning is a typical robot application. There are several mobile robots available on the market for domestic applications, most of them with random path-planning algorithms. In this paper we study the cleaning coverage performance of a random path-planning mobile robot and propose an optimized control algorithm, together with methods to estimate the area of the room, the evolution of the cleaning and the time needed for complete coverage.
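
The coverage performance of a random path planner can be estimated with a simple Monte Carlo simulation: a random-walking robot on a grid of cells, recording the fraction of the room cleaned as a function of time. A minimal sketch (room size, step rule and horizons are arbitrary, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_coverage(width=20, height=15, steps=5000):
    """Fraction of grid cells visited by a random-walk cleaner after `steps` moves."""
    visited = np.zeros((height, width), dtype=bool)
    x, y = width // 2, height // 2
    visited[y, x] = True
    moves = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for _ in range(steps):
        dx, dy = moves[rng.integers(4)]
        x = min(max(x + dx, 0), width - 1)       # stay inside the walls
        y = min(max(y + dy, 0), height - 1)
        visited[y, x] = True
    return visited.mean()

for steps in (500, 2000, 5000, 20000):
    cov = np.mean([random_coverage(steps=steps) for _ in range(20)])
    print(f"{steps:6d} steps: average coverage = {cov:.0%}")
```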

Relevance: 100.00%

Abstract:

Since ethical concerns are calling for more attention within Operational Research, we present three approaches to combine Operational Research models with ethics. Our intention is to clarify the trade-offs faced by the OR community, in particular the tension between the scientific legitimacy of OR models (ethics outside OR models) and the integration of ethics within models (ethics within OR models). Presenting and discussing an approach that combines OR models with the process of OR (ethics beyond OR models), we suggest rigorous ways to express the relation between ethics and OR models. As our work is exploratory, we are trying to avoid a dogmatic attitude and call for further research. We argue that there are interesting avenues for research at the theoretical, methodological and applied levels and that the OR community can contribute to an innovative, constructive and responsible social dialogue about its ethics.

Relevance: 100.00%

Abstract:

The object of game theory lies in the analysis of situations where different social actors have conflicting requirements and where their individual decisions will all influence the global outcome. In this framework, several games have been invented to capture the essence of various dilemmas encountered in many common and important socio-economic situations. Even though these games often succeed in helping us understand human or animal behavior in interactive settings, some experiments have shown that people tend to cooperate with each other in situations for which classical game theory strongly recommends them to do the exact opposite. Several mechanisms have been invoked to try to explain the emergence of this unexpected cooperative attitude. Among them, repeated interaction, reputation, and belonging to a recognizable group have often been mentioned. However, the work of Nowak and May (1992) showed that the simple fact of arranging the players according to a spatial structure and only allowing them to interact with their immediate neighbors is sufficient to sustain a certain amount of cooperation even when the game is played anonymously and without repetition. Nowak and May's study, and much of the work that followed, was based on regular structures such as two-dimensional grids. Axelrod et al. (2002) showed that by randomizing the choice of neighbors, i.e. by actually giving up a strictly local geographical structure, cooperation can still emerge, provided that the interaction patterns remain stable in time. This is a first step towards a social network structure. However, following pioneering work by sociologists in the sixties, such as that of Milgram (1967), in the last few years it has become apparent that many social and biological interaction networks, and even some technological networks, have particular, and partly unexpected, properties that set them apart from regular or random graphs. Among other things, they usually display broad degree distributions and show a small-world topological structure. Roughly speaking, a small-world graph is a network where any individual is relatively close, in terms of social ties, to any other individual, a property also found in random graphs but not in regular lattices. However, in contrast with random graphs, small-world networks also have a certain amount of local structure, as measured, for instance, by a quantity called the clustering coefficient. In the same vein, many real conflicting situations in economics and sociology are well described neither by a fixed geographical position of the individuals in a regular lattice, nor by a random graph. Furthermore, it is a known fact that network structure can strongly influence dynamical phenomena such as the way diseases spread across a population and the way ideas or information get transmitted. Therefore, in the last decade, research attention has naturally shifted from random and regular graphs towards better models of social interaction structures. The primary goal of this work is to discover whether or not the underlying graph structure of real social networks can explain why one finds higher levels of cooperation in populations of human beings or animals than what is prescribed by classical game theory. To meet this objective, I start by thoroughly studying a real scientific coauthorship network and showing how it differs from biological or technological networks using diverse statistical measurements.
Furthermore, I extract and describe its community structure, taking into account the intensity of collaborations. Finally, I investigate the temporal evolution of the network, from its inception to its state at the time of the study in 2006, also suggesting an effective view of it as opposed to a historical one. Thereafter, I combine evolutionary game theory with several network models, along with the studied coauthorship network, in order to highlight which specific network properties foster cooperation and to shed some light on the various mechanisms responsible for maintaining this cooperation. I point out that, to resist defection, cooperators take advantage, whenever possible, of the degree heterogeneity of social networks and of their underlying community structure. Finally, I show that the level and stability of cooperation depend not only on the game played, but also on the evolutionary dynamic rules used and on the individual payoff calculations.
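
The two network properties emphasized here, high clustering combined with short paths (the small-world effect), are easy to check numerically. A minimal sketch with networkx comparing a Watts-Strogatz small-world graph with a random graph of the same density (sizes and parameters are arbitrary illustration values):

```python
import networkx as nx

n, k, p = 1000, 10, 0.1

# Watts-Strogatz small-world graph vs. an Erdos-Renyi random graph of similar density.
sw = nx.watts_strogatz_graph(n, k, p, seed=1)
er = nx.gnm_random_graph(n, sw.number_of_edges(), seed=1)

for name, g in [("small-world", sw), ("random", er)]:
    # Use the largest connected component for path lengths.
    giant = g.subgraph(max(nx.connected_components(g), key=len))
    print(f"{name:11s}: clustering = {nx.average_clustering(g):.3f}, "
          f"avg shortest path = {nx.average_shortest_path_length(giant):.2f}")
```

The small-world graph should retain a clustering coefficient an order of magnitude above the random graph's while keeping a comparably short average path length.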

Relevance: 100.00%

Abstract:

OBJECTIVE: To examine predictors of stroke recurrence in patients with a high vs a low likelihood of having an incidental patent foramen ovale (PFO) as defined by the Risk of Paradoxical Embolism (RoPE) score. METHODS: Patients in the RoPE database with cryptogenic stroke (CS) and PFO were classified as having a probable PFO-related stroke (RoPE score of >6, n = 647) and others (RoPE score of ≤6 points, n = 677). We tested 15 clinical, 5 radiologic, and 3 echocardiographic variables for associations with stroke recurrence using Cox survival models with component database as a stratification factor. An interaction with RoPE score was checked for the variables that were significant. RESULTS: Follow-up was available for 92%, 79%, and 57% at 1, 2, and 3 years. Overall, a higher recurrence risk was associated with an index TIA. For all other predictors, effects were significantly different in the 2 RoPE score categories. For the low RoPE score group, but not the high RoPE score group, older age and antiplatelet (vs warfarin) treatment predicted recurrence. Conversely, echocardiographic features (septal hypermobility and a small shunt) and a prior (clinical) stroke/TIA were significant predictors in the high but not low RoPE score group. CONCLUSION: Predictors of recurrence differ when PFO relatedness is classified by the RoPE score, suggesting that patients with CS and PFO form a heterogeneous group with different stroke mechanisms. Echocardiographic features were only associated with recurrence in the high RoPE score group.
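
Cox survival models with the component database as a stratification factor, as described here, can be sketched with the lifelines package on synthetic data (column names, the interaction construction and all values are invented for illustration, not the RoPE database):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(0)
n = 800

df = pd.DataFrame({
    "time_to_event": rng.exponential(3.0, n),      # years of follow-up
    "recurrence": rng.binomial(1, 0.15, n),        # observed recurrence vs censored
    "age": rng.uniform(18, 80, n),
    "index_tia": rng.binomial(1, 0.3, n),
    "high_rope": rng.binomial(1, 0.5, n),          # RoPE score > 6
    "database": rng.integers(0, 12, n),            # component database (stratum)
})
# Interaction of a predictor with the RoPE score category, as a product column.
df["age_x_high_rope"] = df["age"] * df["high_rope"]

cph = CoxPHFitter()
cph.fit(df, duration_col="time_to_event", event_col="recurrence", strata=["database"])
cph.print_summary()
```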