955 results for Expectation-conditional Maximization (ECM)


Relevance:

30.00%

Publisher:

Abstract:

In this thesis we implement estimation procedures for the threshold parameters of continuous-time threshold models driven by stochastic differential equations. The first procedure is based on the EM (expectation-maximization) algorithm applied to the threshold model built from the Brownian motion with drift process. The second procedure mimics one of the fundamental ideas in the estimation of thresholds in the time series context, that is, conditional least squares estimation. We implement this procedure not only for the threshold model built from the Brownian motion with drift process but also for more generic models such as the ones built from the geometric Brownian motion or the Ornstein-Uhlenbeck process. Both procedures are implemented for simulated data, and the least squares estimation procedure is also applied to real data of daily prices from a set of international funds. The first fund is the PF-European Sustainable Equities-R fund from the Pictet Funds company and the second is the Parvest Europe Dynamic Growth fund from the BNP Paribas company; the data for both funds are daily prices from the year 2004. The last fund considered is the Converging Europe Bond fund from the Schroder company, with daily prices from the year 2005.
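As a rough illustration of the second (conditional least squares) procedure, the sketch below simulates a two-regime Brownian motion with drift under an Euler discretization and recovers the threshold by a grid search; all parameter values, the grid, and the discretization are illustrative assumptions, not the setup used in the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a two-regime threshold Brownian motion with drift (Euler scheme).
# Drifts mu1/mu2, threshold r, noise sigma and step dt are illustrative.
mu1, mu2, r, sigma, dt, n = 1.0, -1.0, 0.0, 0.5, 0.01, 20_000
x = np.empty(n)
x[0] = 0.0
for t in range(n - 1):
    drift = mu1 if x[t] < r else mu2
    x[t + 1] = x[t] + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()

def cls_objective(r_cand, x, dt):
    """Conditional least squares: fit one drift per regime, return the SSE."""
    dx = np.diff(x)
    below = x[:-1] < r_cand
    sse = 0.0
    for mask in (below, ~below):
        if mask.any():
            mu_hat = dx[mask].mean() / dt            # regime drift estimate
            sse += ((dx[mask] - mu_hat * dt) ** 2).sum()
    return sse

grid = np.linspace(np.quantile(x, 0.1), np.quantile(x, 0.9), 200)
r_hat = grid[np.argmin([cls_objective(g, x, dt) for g in grid])]
print(f"estimated threshold: {r_hat:.3f}")
```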

Relevance:

30.00%

Publisher:

Abstract:

Drug development is not only complex, but the returns on investment are not always those desired or anticipated. Many drugs still fail in Phase III despite the technological progress achieved across several aspects of drug development, which translates into a decreasing number of drugs reaching the market. The traditional drug development process must therefore be improved in order to make new products available to the patients who need them. The goal of this research was to explore and propose changes to the drug development process using the principles of advanced modeling and clinical trial simulation. In the first part of this research, new algorithms available in the ADAPT 5® software were compared with other algorithms already available in order to determine their strengths and weaknesses. The two new algorithms examined are the iterative two-stage (ITS) and the maximum likelihood expectation maximization (MLEM) algorithms. Our results showed that MLEM was superior to ITS, and that MLEM was comparable to the first-order conditional estimation (FOCE) algorithm available in NONMEM®, with fewer shrinkage problems for the variance estimates. These new algorithms were therefore used for the research presented in this thesis. During drug development, noncompartmentally calculated pharmacokinetic parameters are adequate only if the terminal half-life is well established. Well-designed and well-analyzed pharmacokinetic studies are essential during drug development, especially for generic and supergeneric submissions (a formulation whose active ingredient is the same as the brand-name drug's, but whose drug-release profile differs), since they are often the only pivotal studies needed to decide whether a product can be marketed. The second part of the research therefore evaluated whether parameters computed from a half-life estimated over a sampling duration deemed too short for an individual could affect the conclusions of a bioequivalence study, and whether they should be excluded from the statistical analyses. The results showed that such parameters negatively influenced the results when retained in the analysis of variance; the area-under-the-curve-to-infinity parameter for these subjects should therefore be removed from the statistical analysis, and a priori guidelines to that effect are needed. Pivotal pharmacokinetic studies conducted during drug development should follow this recommendation so that the right decisions are made about a product. This information was used in the clinical trial simulations carried out during the research presented in this thesis, in order to obtain the most probable conclusions. In the last part of this thesis, clinical trial simulations improved the clinical development process of a drug.
The results of a pilot clinical study for a supergeneric under development appeared very encouraging. However, questions were raised about the results, and it had to be determined whether the test and reference products would be equivalent in the pivotal fasting and fed studies, after both single and repeated doses. Clinical trial simulations were undertaken to resolve some of the questions raised by the pilot study, and these simulations suggested that the new formulation would not meet the equivalence criteria in the pivotal studies. The simulations also helped determine which modifications to the new formulation were needed to improve the chances of meeting the equivalence criteria. This research provided solutions for improving different aspects of the drug development process. In particular, the clinical trial simulations reduced the number of studies needed for the development of the supergeneric, the number of subjects needlessly exposed to the drug, and development costs. Finally, they allowed us to establish new exclusion criteria for bioequivalence statistical analyses. The research presented in this thesis suggests improvements to the drug development process by evaluating new algorithms for compartmental analyses, by establishing exclusion criteria for pharmacokinetic (PK) parameters in certain analyses, and by demonstrating how clinical trial simulations are useful.
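To make the bioequivalence setting concrete, here is a minimal sketch of the standard 90% confidence interval test on the geometric mean ratio of AUC, under the simplifying assumption of a paired per-subject analysis rather than the full crossover ANOVA of a regulatory submission; the thesis's recommendation is that AUC-to-infinity values from subjects with an unreliably short sampling duration be excluded before this step. All data below are invented.

```python
import numpy as np
from scipy import stats

def be_ratio_ci(auc_test, auc_ref, alpha=0.10):
    """90% CI for the geometric mean ratio of AUC on paired log-scale data.

    Simplified paired analysis (not the full crossover ANOVA of a real
    submission); arrays hold one AUC value per subject and formulation.
    """
    d = np.log(np.asarray(auc_test)) - np.log(np.asarray(auc_ref))
    n = d.size
    se = d.std(ddof=1) / np.sqrt(n)
    t = stats.t.ppf(1 - alpha / 2, n - 1)
    lo, hi = np.exp(d.mean() - t * se), np.exp(d.mean() + t * se)
    return lo, hi, (lo >= 0.80 and hi <= 1.25)     # usual 80-125% criterion

# Invented data: 24 subjects; per the thesis, subjects whose terminal
# half-life came from too short a sampling duration would have their
# AUC-to-infinity removed before this analysis.
rng = np.random.default_rng(1)
ref = rng.lognormal(mean=3.0, sigma=0.2, size=24)
test = ref * rng.lognormal(mean=0.02, sigma=0.1, size=24)
print(be_ratio_ci(test, ref))
```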

Relevance:

30.00%

Publisher:

Abstract:

We introduce a new class of bivariate distributions of the Marshall-Olkin type, the bivariate Erlang distribution. Its Laplace transform, moments, and conditional densities are derived, and potential applications in life insurance and finance are considered. The maximum likelihood estimators of the parameters are computed with the Expectation-Maximization algorithm. Our research project is then devoted to the study of multivariate risk processes, which can be useful in studying ruin problems for insurance companies with dependent lines of business. We apply results from the theory of piecewise deterministic Markov processes to obtain the exponential martingales needed to establish computable upper bounds for the ruin probability, whose exact expressions are intractable.
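For intuition, the following sketch simulates the classical Marshall-Olkin common-shock construction with exponential shocks; the bivariate Erlang distribution introduced above generalizes the marginals, so this is only a simplified stand-in for the dependence mechanism, with invented rates.

```python
import numpy as np

def marshall_olkin_exp(n, lam1, lam2, lam12, rng=None):
    """Classical Marshall-Olkin bivariate exponential via common shocks:
    X = min(E1, E12), Y = min(E2, E12), so the shock E12 hits both lives."""
    rng = rng or np.random.default_rng()
    e1 = rng.exponential(1 / lam1, n)
    e2 = rng.exponential(1 / lam2, n)
    e12 = rng.exponential(1 / lam12, n)
    return np.minimum(e1, e12), np.minimum(e2, e12)

x, y = marshall_olkin_exp(100_000, lam1=1.0, lam2=2.0, lam12=0.5)
print(np.corrcoef(x, y)[0, 1])   # positive dependence induced by the shock
```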

Relevance:

30.00%

Publisher:

Abstract:

This work implements a methodology for including higher-order moments in portfolio selection, using the Generalized Hyperbolic distribution, and then presents a comparative analysis against the Markowitz model.
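A minimal sketch of the idea: draw heavy-tailed, skewed returns (here from scipy's genhyperbolic with invented parameters) and select weights by a fourth-order Taylor approximation of expected utility, which collapses to the Markowitz mean-variance trade-off when the skewness and kurtosis preferences are set to zero. The utility coefficients and data are illustrative assumptions, not the paper's calibration.

```python
import numpy as np
from scipy import stats, optimize

# Toy return panel drawn from scipy's generalized hyperbolic distribution
# with invented parameters, so the higher moments actually matter.
rng = np.random.default_rng(2)
n_obs, n_assets = 2_000, 4
R = np.column_stack([
    0.01 * stats.genhyperbolic.rvs(p=-0.5, a=1.5, b=0.3 * k,
                                   size=n_obs, random_state=rng)
    for k in range(n_assets)
])

def neg_utility(w, R, lam=4.0, gamma=10.0, kappa=40.0):
    """Fourth-order Taylor expected utility of the portfolio return.
    With gamma = kappa = 0 this collapses to the Markowitz trade-off."""
    rp = R @ w
    s = rp.std()
    return -(rp.mean() - lam / 2 * rp.var()
             + gamma / 6 * stats.skew(rp) * s ** 3
             - kappa / 24 * stats.kurtosis(rp) * s ** 4)

cons = ({"type": "eq", "fun": lambda w: w.sum() - 1},)
w0 = np.full(n_assets, 1 / n_assets)
res = optimize.minimize(neg_utility, w0, args=(R,),
                        bounds=[(0, 1)] * n_assets, constraints=cons)
print(res.x.round(3))
```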

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a two-step procedure to back out the conditional alpha of a given stock using high-frequency data. We first estimate the realized factor loadings of the stocks, and then retrieve their conditional alphas by estimating the conditional expectation of their risk-adjusted returns. We start with the underlying continuous-time stochastic process that governs the dynamics of every stock price and then derive the conditions under which we may consistently estimate the daily factor loadings and the resulting conditional alphas. We also contribute empirically to the conditional CAPM literature by examining the main drivers of the conditional alphas of the S&P 100 index constituents from January 2001 to December 2008. In addition, to confirm whether these conditional alphas indeed relate to pricing errors, we assess the performance of both cross-sectional and time-series momentum strategies based on the conditional alpha estimates. The findings are very promising in that these strategies not only perform well in both absolute and relative terms, but also exhibit virtually no systematic exposure to the usual risk factors (namely, the market, size, value and momentum portfolios).
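A toy version of the two-step procedure, under strong simplifications: a single market factor, synthetic intraday returns, and a Gaussian kernel smoother for the conditional expectation. The paper's in-fill asymptotics and multi-factor setup are not reproduced; all sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Step 1: daily realized factor loadings from synthetic intraday returns
# (78 five-minute bars a day, one market factor, true beta = 1.2).
n_days, n_intraday, beta_true = 250, 78, 1.2
m = rng.normal(0, 0.001, (n_days, n_intraday))         # market returns
r = beta_true * m + rng.normal(0, 0.002, m.shape)      # stock returns
beta_hat = (r * m).sum(axis=1) / (m ** 2).sum(axis=1)  # realized beta per day

# Step 2: conditional alpha as a kernel-smoothed conditional expectation of
# the risk-adjusted daily returns (Nadaraya-Watson over days).
risk_adj = r.sum(axis=1) - beta_hat * m.sum(axis=1)

def conditional_alpha(day, h=20.0):
    w = np.exp(-0.5 * ((np.arange(n_days) - day) / h) ** 2)
    return (w * risk_adj).sum() / w.sum()

print([round(conditional_alpha(d), 5) for d in (50, 125, 200)])
```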

Relevance:

30.00%

Publisher:

Abstract:

Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds on conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob'ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and examples show how Gaussian process simulations and the estimated Vorob'ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
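The sketch below illustrates the random set machinery on synthetic Pareto fronts standing in for conditional Gaussian process simulations: it estimates the attainment probability on a grid and extracts the Vorob'ev expectation as the β*-quantile set whose volume matches the mean attained volume. The grid, front generator, and sizes are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def random_front(n=30):
    """Stand-in for one conditional simulation of the Pareto front."""
    f1 = np.sort(rng.uniform(0, 1, n))
    f2 = np.sort(rng.uniform(0, 1, n))[::-1] + 0.1 * rng.standard_normal(n)
    return np.column_stack([f1, f2])

fronts = [random_front() for _ in range(200)]

# Attainment function on a grid: probability that a point in objective space
# is dominated by the random front (minimization convention).
g = np.linspace(0.0, 1.2, 60)
G1, G2 = np.meshgrid(g, g, indexing="ij")
attain = np.zeros(G1.shape)
for F in fronts:
    dom = np.zeros(G1.shape, dtype=bool)
    for f1, f2 in F:
        dom |= (G1 >= f1) & (G2 >= f2)
    attain += dom
attain /= len(fronts)

# Vorob'ev expectation: beta*-quantile set of the attainment function whose
# volume matches the mean volume of the attained set.
mean_vol = attain.mean()
betas = np.linspace(0.01, 0.99, 99)
beta_star = betas[np.argmin([abs((attain >= b).mean() - mean_vol)
                             for b in betas])]
print(f"beta* = {beta_star:.2f}, set covers {(attain >= beta_star).mean():.1%}")
```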

Relevance:

30.00%

Publisher:

Abstract:

Nowadays, with the ongoing and rapid evolution of information technology and computing devices, large volumes of data are continuously collected and stored in different domains and through various real-world applications. Extracting useful knowledge from such a huge amount of data usually cannot be performed manually, and requires adequate machine learning and data mining techniques. Classification is one of the most important such techniques and has been successfully applied to several areas. Roughly speaking, classification consists of two main steps: first, learn a classification model or classifier from available training data; second, classify new incoming unseen data instances using the learned classifier. Classification is supervised when all class values are present in the training data (i.e., fully labeled data), semi-supervised when only some class values are known (i.e., partially labeled data), and unsupervised when all class values are missing in the training data (i.e., unlabeled data). In addition, besides this taxonomy, the classification problem can be categorized into uni-dimensional or multi-dimensional depending on the number of class variables (one or more, respectively), or into stationary or streaming depending on the characteristics of the data and the rate of change underlying them. Throughout this thesis, we deal with the classification problem under three different settings, namely, supervised multi-dimensional stationary classification, semi-supervised uni-dimensional streaming classification, and supervised multi-dimensional streaming classification. To accomplish this task, we basically used Bayesian network classifiers as models.
The first contribution, addressing the supervised multi-dimensional stationary classification problem, consists of two new methods for learning multi-dimensional Bayesian network classifiers from stationary data, proposed from two different points of view. The first method, named CB-MBC, is based on a wrapper greedy forward selection approach, while the second, named MB-MBC, is a filter constraint-based approach built on Markov blankets. Both methods are applied to two important real-world problems, namely, the prediction of human immunodeficiency virus type 1 (HIV-1) reverse transcriptase and protease inhibitors, and the prediction of the European Quality of Life-5 Dimensions (EQ-5D) from the 39-item Parkinson's Disease Questionnaire (PDQ-39). The experimental study includes comparisons of CB-MBC and MB-MBC against state-of-the-art multi-dimensional classification methods, as well as against methods commonly used to solve the Parkinson's disease prediction problem, namely, multinomial logistic regression, ordinary least squares, and censored least absolute deviations. For both case studies, results are promising in terms of classification accuracy as well as in the analysis of the learned MBC graphical structures, which identify known and novel interactions among variables.
The second contribution, addressing the semi-supervised uni-dimensional streaming classification problem, consists of a novel method (CPL-DS) for classifying partially labeled data streams. Data streams differ from stationary data sets in their highly rapid generation process and their concept-drifting aspect: the learned concepts and/or the underlying distribution are likely to change and evolve over time, which makes the current classification model out-of-date and in need of updating. CPL-DS uses the Kullback-Leibler divergence and bootstrapping to quantify and detect three possible kinds of drift: feature, conditional, or dual. If any occurs, a new classification model is learned using the expectation-maximization algorithm; otherwise, the current classification model is kept unchanged. CPL-DS is general in that it can be applied to several classification models. Using two different models, namely, the naive Bayes classifier and logistic regression, CPL-DS is tested with synthetic data streams and applied to the real-world problem of malware detection, where newly received files must be continuously classified as malware or goodware. Experimental results show that our approach is effective for detecting different kinds of drift from partially labeled data streams, while also achieving good classification performance.
Finally, the third contribution, addressing the supervised multi-dimensional streaming classification problem, consists of two adaptive methods, namely, Locally Adaptive-MB-MBC (LA-MB-MBC) and Globally Adaptive-MB-MBC (GA-MB-MBC). Both methods monitor concept drift over time using the average log-likelihood score and the Page-Hinkley test. If a drift is detected, LA-MB-MBC adapts the current multi-dimensional Bayesian network classifier locally around each changed node, whereas GA-MB-MBC learns a new multi-dimensional Bayesian network classifier from scratch. An experimental study carried out using synthetic multi-dimensional data streams shows the merits of both proposed adaptive methods.
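As an illustration of the drift-monitoring step used by LA-MB-MBC and GA-MB-MBC, here is a minimal Page-Hinkley detector on a generic score stream; the thresholds delta and lam are illustrative, and this standard increase-detecting form would be mirrored to catch drops in the average log-likelihood.

```python
import numpy as np

def page_hinkley(values, delta=0.005, lam=50.0):
    """Page-Hinkley detector for an upward shift in the mean of a stream.

    Returns the index of the first alarm, or None. For a score that drops
    under drift (e.g. average log-likelihood), run it on the negated stream.
    """
    mean = cum = cum_min = 0.0
    for t, x in enumerate(values):
        mean += (x - mean) / (t + 1)       # running mean of the score
        cum += x - mean - delta            # cumulated positive deviation
        cum_min = min(cum_min, cum)
        if cum - cum_min > lam:            # deviation exceeds the threshold
            return t
    return None

rng = np.random.default_rng(5)
stream = np.r_[rng.normal(0, 1, 500), rng.normal(3, 1, 500)]  # drift at t=500
print(page_hinkley(stream))
```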

Relevance:

30.00%

Publisher:

Abstract:

Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation-maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their pattern changes dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using batch-mode learning. The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
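To convey the incremental-EM idea (without the mixture-of-experts gating network or the LOS data), the sketch below runs an online EM with stochastic-approximation updates of the sufficient statistics of a two-component Gaussian mixture, processing one observation at a time; the step-size schedule, initial values, and data are assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
data = np.r_[rng.normal(2, 1, 1000), rng.normal(8, 1.5, 1000)]
rng.shuffle(data)

# Initial guesses and running sufficient statistics (weights, means, variances).
w, mu, var = np.array([0.5, 0.5]), np.array([0.0, 10.0]), np.array([4.0, 4.0])
s0, s1, s2 = w.copy(), w * mu, w * (var + mu ** 2)

for t, x in enumerate(data, start=1):
    # E-step on the single new observation: component responsibilities.
    resp = w * stats.norm.pdf(x, mu, np.sqrt(var))
    resp /= resp.sum()
    # Stochastic-approximation update of the sufficient statistics.
    eta = 1.0 / (t + 10) ** 0.7
    s0 = (1 - eta) * s0 + eta * resp
    s1 = (1 - eta) * s1 + eta * resp * x
    s2 = (1 - eta) * s2 + eta * resp * x ** 2
    # M-step from the running statistics.
    w, mu = s0 / s0.sum(), s1 / s0
    var = np.maximum(s2 / s0 - mu ** 2, 1e-3)

print(mu.round(2), np.sqrt(var).round(2), w.round(2))
```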

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: 62G30, 62E10.

Relevance:

30.00%

Publisher:

Abstract:

2000 Mathematics Subject Classification: Primary: 62M10, 62J02, 62F12, 62M05, 62P05, 62P10; secondary: 60G46, 60F15.

Relevance:

30.00%

Publisher:

Abstract:

Conditional Value-at-Risk (equivalent to the Expected Shortfall, Tail Value-at-Risk and Tail Conditional Expectation in the case of continuous probability distributions) is an increasingly popular risk measure in the fields of actuarial science, banking and finance, and arguably a more suitable alternative to the currently widespread Value-at-Risk. In my paper, I present a brief literature survey, and propose a statistical test of the location of the CVaR, which may be applied by practising actuaries to test whether CVaR-based capital levels are in line with observed data. Finally, I conclude with numerical experiments and some questions for future research.
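As a concrete starting point, the following sketch computes the empirical CVaR and a generic percentile-bootstrap p-value for its location; this is not the paper's proposed test, only an illustration of the question it addresses, with toy heavy-tailed losses.

```python
import numpy as np

def cvar(x, level=0.95):
    """Empirical CVaR: mean loss beyond the VaR quantile (losses positive)."""
    return x[x >= np.quantile(x, level)].mean()

def cvar_location_test(losses, cvar0, level=0.95, n_boot=2_000, seed=0):
    """Two-sided percentile-bootstrap p-value for H0: CVaR = cvar0."""
    rng = np.random.default_rng(seed)
    point = cvar(losses, level)
    boots = np.array([cvar(rng.choice(losses, losses.size, replace=True), level)
                      for _ in range(n_boot)]) - point   # centred at estimate
    return np.mean(np.abs(boots) >= abs(point - cvar0))

rng = np.random.default_rng(7)
losses = rng.standard_t(df=4, size=1_000)    # heavy-tailed toy losses
print(round(cvar(losses), 3), cvar_location_test(losses, cvar0=3.0))
```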

Relevance:

30.00%

Publisher:

Abstract:

In this work, the relationship between diameter at breast height (d) and total height (h) of individual trees was modeled with the aim of establishing provisory height-diameter (h-d) equations for maritime pine (Pinus pinaster Ait.) stands in the Lomba ZIF, Northeast Portugal. Using data collected locally, several local and generalized h-d equations from the literature were tested, and adaptations were also considered. Model fitting was conducted with the usual nonlinear least squares (nls) methods. The best local and generalized models selected were also tested as mixed models, applying a first-order conditional expectation (FOCE) approximation procedure and maximum likelihood methods to estimate fixed and random effects. For the calibration of the mixed models, and in order to be consistent with the fitting procedure, the FOCE method was also used to test different sampling designs. The results showed that the local h-d equations with two parameters performed better than the analogous models with three parameters. However, a unique set of parameter values for the local model cannot be used for all maritime pine stands in the Lomba ZIF, so a generalized model including stand covariates, in addition to d, was necessary to obtain adequate predictive performance. No evident superiority of the generalized mixed model over the generalized model with nonlinear least squares parameter estimates was observed. On the other hand, in the case of the local model, the predictive performance greatly improved when random effects were included. The results showed that the mixed model based on the selected local h-d equation is a viable alternative for estimating h if stand variables are not available. Moreover, it was observed that an adequate calibrated response can be obtained using only 2 to 5 additional h-d measurements in quantile (or random) trees from the distribution of d in the plot (stand). Balancing sampling effort, accuracy, and straightforwardness in practical applications, the generalized model from the nls fit is recommended. Examples of applications of the selected generalized equation to forest management are presented, namely how to use it to complete missing information from forest inventory, and how such an equation can be incorporated in a stand-level decision support system that aims to optimize forest management for the maximization of wood volume production in Lomba ZIF maritime pine stands.
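A minimal sketch of fitting a local two-parameter h-d equation by nonlinear least squares; the Michaelis-Menten form and all numbers below are assumptions for illustration, since the paper compares several candidate equations and adds stand-level covariates and random effects.

```python
import numpy as np
from scipy.optimize import curve_fit

def hd_model(d, a, b):
    """Local two-parameter h-d equation; 1.3 m is the breast-height offset."""
    return 1.3 + a * d / (b + d)

# Invented sample: diameters in cm, heights in m with measurement noise.
rng = np.random.default_rng(8)
d = rng.uniform(7.5, 40.0, 120)
h = hd_model(d, 22.0, 12.0) + rng.normal(0, 1.0, d.size)

(a_hat, b_hat), _ = curve_fit(hd_model, d, h, p0=(20.0, 10.0))
print(f"a = {a_hat:.2f}, b = {b_hat:.2f}")
print(f"predicted h at d = 25 cm: {hd_model(25.0, a_hat, b_hat):.1f} m")
```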

Relevance:

20.00%

Publisher:

Abstract:

This article considers the conditions placed on the autonomy of the architectural history discipline that is often understood to be at stake in Manfredo Tafuri's 1968 book Teorie e storia dell'architettura.

Relevance:

20.00%

Publisher:

Abstract:

Carbon monoxide is the chief killer in fires. Dangerous levels of CO can occur when reacting combustion gases are quenched by heat transfer, or by mixing of the fire plume in a cooled under- or overventilated upper layer. In this paper, carbon monoxide predictions for enclosure fires are modeled by the conditional moment closure (CMC) method and are compared with laboratory data. The modeled fire situation is a buoyant, turbulent, diffusion flame burning under a hood. The fire plume entrains fresh air, and the postflame gases are cooled considerably under the hood by conduction and radiation, emulating conditions which occur in enclosure fires and lead to the freezing of CO burnout. Predictions of CO in the cooled layer are presented in the context of a complete computational fluid dynamics solution of velocity, temperature, and major species concentrations. A range of under-hood equivalence ratios, from rich to lean, is investigated. The CMC method predicts CO in very good agreement with data. In particular, CMC is able to correctly predict CO concentrations in lean cooled gases, showing its capability in conditions where reaction rates change considerably.
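The central object CMC transports is the conditional mean of a reactive scalar given the mixture fraction, Q(η) = ⟨Y | ξ = η⟩. The sketch below estimates such a conditional mean from samples by bin averaging; it is a post-processing illustration with synthetic data, not a CMC solver, and the assumed peak location and noise level are invented.

```python
import numpy as np

def conditional_mean(xi, y, n_bins=25):
    """Bin-average estimate of Q(eta) = <Y | xi = eta> from samples of the
    mixture fraction xi and a scalar Y (post-processing, not a CMC solver)."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    idx = np.clip(np.digitize(xi, edges) - 1, 0, n_bins - 1)
    q = np.full(n_bins, np.nan)
    for b in range(n_bins):
        sel = idx == b
        if sel.any():
            q[b] = y[sel].mean()
    return 0.5 * (edges[:-1] + edges[1:]), q

# Toy samples: a CO-like scalar peaking near an assumed stoichiometric
# mixture fraction of 0.3, with noise.
rng = np.random.default_rng(9)
xi = rng.beta(2, 5, 50_000)
y = np.exp(-((xi - 0.3) / 0.08) ** 2) + 0.05 * rng.standard_normal(xi.size)
eta, q = conditional_mean(xi, y)
print(q.round(3))
```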

Relevance:

20.00%

Publisher:

Abstract:

Background: Interleukin 8 (IL-8) is a chemokine related to the initiation and amplification of acute and chronic inflammatory processes. Polymorphisms in the IL8 gene have been associated with inflammatory diseases. We investigated whether the -845(T/C) and -738(T/A) single nucleotide polymorphisms (SNPs) in the IL8 gene, as well as the haplotypes they form together with the previously investigated -353(A/T), are associated with susceptibility to chronic periodontitis. Methods: DNA was extracted from buccal epithelial cells of 400 Brazilian individuals (control n = 182, periodontitis n = 218). SNPs were genotyped by the polymerase chain reaction-restriction fragment length polymorphism (PCR-RFLP) method. Disease associations were analyzed by the chi-square test, Fisher's exact test, and the Clump program. Haplotypes were reconstructed using the expectation-maximization algorithm, and differences in haplotype distribution between the groups were analyzed to estimate genetic susceptibility to chronic periodontitis development. Results: When analyzed individually, no SNP showed a different distribution between the control and chronic periodontitis groups. However, nonsmokers carrying the TTA/CAT (OR = 2.35, 95% CI = 1.03-5.36) and TAT/CTA (OR = 6.05, 95% CI = 1.32-27.7) haplotypes were genetically susceptible to chronic periodontitis. The TTT/TAA haplotype was associated with protection against the development of periodontitis (for nonsmokers, OR = 0.22, 95% CI = 0.10-0.46). Conclusion: Although none of the investigated SNPs in the IL8 gene was individually associated with periodontitis, some haplotypes showed significant association with susceptibility to, or protection against, chronic periodontitis in a Brazilian population.
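As a pointer to the haplotype-reconstruction step, here is the textbook two-SNP version of the expectation-maximization algorithm, in which only double heterozygotes are phase-ambiguous; the study applies the same idea to three SNPs, and all counts below are invented.

```python
import numpy as np

def em_haplotypes(n_double_het, known_counts, n_iter=100):
    """Two-SNP haplotype-frequency EM.

    known_counts are unambiguous haplotype counts in the order
    (AB, Ab, aB, ab); only double heterozygotes are phase-ambiguous
    (AB/ab vs Ab/aB) and get split in the E-step.
    """
    counts = np.asarray(known_counts, dtype=float)
    p = (counts + 1) / (counts + 1).sum()            # initial frequencies
    for _ in range(n_iter):
        # E-step: probability a double heterozygote carries phase AB/ab.
        w = p[0] * p[3] / (p[0] * p[3] + p[1] * p[2])
        expected = counts + n_double_het * np.array([w, 1 - w, 1 - w, w])
        # M-step: renormalize the expected haplotype counts.
        p = expected / expected.sum()
    return p

print(em_haplotypes(n_double_het=30, known_counts=(100, 40, 35, 25)).round(3))
```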