945 results for Estimator standard error and efficiency
Abstract:
About one-third of the carbon dioxide (CO2) released into the atmosphere as a result of human activity has been absorbed by the oceans, where it partitions into the constituent ions of carbonic acid. This leads to ocean acidification, one of the major threats to marine ecosystems and particularly to calcifying organisms such as corals, foraminifera and coccolithophores. Coccolithophores are abundant phytoplankton that are responsible for a large part of modern oceanic carbonate production. Culture experiments investigating the physiological response of coccolithophore calcification to increased CO2 have yielded contradictory results between and even within species. Here we quantified the calcite mass of dominant coccolithophores in the present ocean and over the past forty thousand years, and found a marked pattern of decreasing calcification with increasing partial pressure of CO2 and concomitantly decreasing concentrations of the carbonate ion (CO3²⁻). Our analyses revealed that differentially calcified species and morphotypes are distributed in the ocean according to carbonate chemistry. A substantial impact on the marine carbon cycle might be expected upon extrapolation of this correlation to predicted ocean acidification in the future. However, our discovery of a heavily calcified Emiliania huxleyi morphotype in modern waters with low pH highlights the complexity of assemblage-level responses to environmental forcing factors.
Abstract:
Although climate records from several locations around the world show nearly synchronous and abrupt changes, the nature of the inferred teleconnection is still poorly understood. On the basis of preserved laminations and molybdenum enrichments in open-margin sediments, we demonstrate that the oxygen content of northeast Pacific waters at 800 m depth during the Bølling-Allerød warm period (15-13 kyr) was greatly reduced. Existing oxygen isotopic records of benthic and planktonic foraminifera suggest that this was probably due to suppressed ventilation at higher latitudes of the North Pacific. Comparison with ventilation records for the North Atlantic indicates an antiphased pattern of convection relative to the North Pacific over the past 22 kyr, perhaps due to variations in water vapor transport across Central America.
Abstract:
Calcification in many invertebrate species is predicted to decline due to ocean acidification. The potential effects of elevated CO2 and reduced carbonate saturation state on other species, such as fish, are less well understood. Fish otoliths (earbones) are composed of aragonite, and thus might be susceptible either to the reduced availability of carbonate ions in seawater at low pH, or to changes in extracellular concentrations of bicarbonate and carbonate ions caused by acid-base regulation in fish exposed to high pCO2. We reared larvae of the clownfish Amphiprion percula from hatching to settlement at three pH (NBS scale) and pCO2 levels (control: ~pH 8.15 and 404 µatm CO2; intermediate: pH 7.8 and 1050 µatm CO2; extreme: pH 7.6 and 1721 µatm CO2) to test the possible effects of ocean acidification on otolith development. There was no effect of the intermediate treatment (pH 7.8 and 1050 µatm CO2) on otolith size, shape, symmetry between left and right otoliths, or otolith elemental chemistry, compared with controls. However, in the more extreme treatment (pH 7.6 and 1721 µatm CO2) otolith area and maximum length were larger than in controls, although no other traits were significantly affected. Our results support the hypothesis that pH regulation in the otolith endolymph can lead to increased precipitation of CaCO3 in otoliths of larval fish exposed to elevated CO2, as proposed by an earlier study; however, our results also show that sensitivity varies considerably among species. Importantly, our results suggest that otolith development in clownfishes is robust to even the more pessimistic changes in ocean chemistry predicted to occur by 2100.
Abstract:
Organic carbon-rich shales from localities in England, Italy, and Morocco, which formed during the Cenomanian-Turonian oceanic anoxic event (OAE), have been examined for their total organic carbon (TOC) values together with their carbon, nitrogen, and iron isotope ratios. Carbon isotope stratigraphy (δ13Corg and δ13Ccarb) allows accurate recognition of the strata that record the oceanic anoxic event, in some cases allowing characterization of isotopic species before, during, and after the OAE. Within the black shales formed during the OAE, relatively heavy nitrogen isotope ratios, which correlate positively with TOC, suggest nitrate reduction (leading ultimately to denitrification and/or anaerobic ammonium oxidation). Black shales deposited before the onset of the OAE in Italy have unusually low bulk δ57Fe values, unlike those found in the black shale (Livello Bonarelli) deposited during the oceanic anoxic event itself: the latter conform to the Phanerozoic norm for organic-rich sediments. Pyrite formation in the pre-OAE black shales has apparently taken place via dissimilatory iron reduction (DIR) within the sediment, a suboxic process that causes an approximately -2 per mil fractionation between a lithogenic Fe(III) oxide source and Fe(II)aq. In contrast, bacterial sulfate reduction (BSR), at least partly in the water column, characterized the OAE itself and was accompanied by only minor iron isotope fractionation. This change in the manner of pyrite formation is reflected in a decrease in the average pyrite framboid diameter from ~10 to ~7 µm. The gradual, albeit irregular, increase in Fe isotope values during the OAE, as recorded in the Italian section, is taken to demonstrate limited isotopic evolution of the dissolved iron pool, consequent upon ongoing water-column precipitation of pyrite under euxinic conditions.
Given that evidence exists for both nitrate and sulfate reduction during the OAE, it is evident that redox conditions in the water column were highly variable, in both time and space.
Abstract:
During Ocean Drilling Program (ODP) Leg 180, 11 sites were drilled in the vicinity of the Moresby Seamount to study processes associated with the transition from continental rifting to seafloor spreading in the Woodlark Basin. This paper presents thermochronologic (40Ar/39Ar, 238U/206Pb, and fission track) results from igneous rocks recovered during ODP Leg 180 that help constrain the latest Cretaceous to present-day tectonic development of the Woodlark Basin. Igneous rocks recovered (primarily from Sites 1109, 1114, 1117, and 1118) consist of predominantly diabase and metadiabase, with minor basalt and gabbro. Zircon ion microprobe analyses gave a 238U/206Pb age of 66.4 ± 1.5 Ma, interpreted to date crystallization of the diabase. 40Ar/39Ar plagioclase apparent ages vary considerably according to the degree to which the diabase was altered subsequent to crystallization. The least altered sample (from Site 1109) yielded a plagioclase isochron age of 58.9 ± 5.8 Ma, interpreted to represent cooling following intrusion. The most altered sample (from Site 1117) yielded an isochron age of 31.0 ± 0.9 Ma, interpreted to represent a maximum age for the timing of subsequent hydrothermal alteration. The diabase has not been thermally affected by Miocene-Pliocene rift-related events, supporting our inference that these rocks have remained at shallow and cool levels in the crust (i.e., upper plate) since they were partially reset as a result of middle Oligocene hydrothermal alteration. These results suggest that crustal extension in the vicinity of the Moresby Seamount, immediately west of the active seafloor spreading tip, is being accommodated by normal faulting within latest Cretaceous to early Paleocene oceanic crust. Felsic clasts provide additional evidence for middle Miocene and Pliocene magmatic events in the region. Two rhyolitic clasts (from Sites 1110 and 1111) gave zircon 238U/206Pb ages of 15.7 ± 0.4 Ma and provide evidence for Miocene volcanism in the region. 
40Ar/39Ar total fusion ages on single grains of K-feldspar from these clasts yielded younger apparent ages of 12.5 ± 0.2 and 14.4 ± 0.6 Ma due to variable sericitization of K-feldspar phenocrysts. 238U/206Pb zircon, 40Ar/39Ar K-feldspar and biotite total fusion, and apatite fission track analysis of a microgranite clast (from Site 1108) provide evidence for the existence of a rapidly cooled 3.0 to 1.8 Ma granitic protolith. The clast may have been transported longitudinally from the west (e.g., from the D'Entrecasteaux Islands). Alternatively, it may have been derived from a more proximal, but presently unknown, source in the vicinity of the Moresby Seamount.
Abstract:
This thesis investigates the design of optimal tax systems in dynamic environments. The first essay characterizes the optimal tax system where wages depend on stochastic shocks and work experience. In addition to redistributive and efficiency motives, the taxation of inexperienced workers depends on a second-best requirement that encourages work experience, a social insurance motive, and incentive effects. Calibrations using U.S. data yield expected optimal marginal income tax rates that are higher for experienced workers than for most inexperienced workers. They confirm that the average marginal income tax rate increases (decreases) with age when shocks and work experience are substitutes (complements). Finally, more variability in experienced workers' earnings prospects leads to increasing tax rates, since income taxation acts as a social insurance mechanism. In the second essay, the properties of an optimal tax system are investigated in a dynamic private information economy where labor market frictions create unemployment that destroys workers' human capital. A two-skill-type model is considered where wages and employment are endogenous. I find that the optimal tax system distorts the first-period wages of all workers below their efficient levels, which leads to more employment. The standard no-distortion-at-the-top result no longer holds due to the combination of private information and the destruction of human capital. I show this result analytically under the Maximin social welfare function and confirm it numerically for a general social welfare function. I also investigate the use of a training program and job creation subsidies. The final essay analyzes the optimal linear tax system when there is a population of individuals whose perceptions of savings are linked to their disposable income and their family background through family cultural transmission.
Aside from the standard equity/efficiency trade-off, taxes account for the endogeneity of perceptions through two channels. First, taxing labor decreases income, which decreases the perception of savings over time. Second, taxing savings corrects workers' misperceptions, and thus their savings and labor decisions. Numerical simulations confirm that behavioral issues push labor income taxes upward to finance saving subsidies. Government transfers to individuals are also decreased to finance those same subsidies.
Abstract:
Moisture desorption observations from two bentonite clay mats subjected to ten environmental zones, each with a different laboratory-controlled combination of constant temperature (between 20 °C and 40 °C) and relative humidity (between 15% and 70%), are presented. These laboratory observations are compared with predictions from mathematical models, such as thin-layer drying equations and the kinetic drying models proposed by Page, Wang and Singh, and Henderson and Pabis. The quality of fit of these models is assessed using the standard error (SE) of estimate, the relative percent error, and the coefficient of correlation. The Page model was found to best predict the drying kinetics of the bentonite clay mats for the simulated tropical climates. A critical study of the drying constant and the moisture diffusion coefficient helps assess the efficacy of a polymer in retaining moisture and controlling desorption through water-molecule bonding. This is further substantiated with the Guggenheim–Anderson–De Boer (GAB) desorption isotherm model, which is also presented.
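The Page model named above has the closed form MR = exp(−k·tⁿ). As an illustrative sketch (synthetic data and parameter values of my own, not the paper's measurements), it can be fitted by linearizing to ln(−ln MR) = ln k + n·ln t and scored with the standard error of estimate:

```python
import numpy as np

# Synthetic drying-time/moisture-ratio data generated from known k, n
# (illustrative only -- not the bentonite measurements from the paper).
t = np.array([1.0, 2.0, 4.0, 8.0, 16.0])      # drying time, h
mr = np.exp(-0.05 * t ** 1.3)                 # moisture ratio, Page model

# Linearize MR = exp(-k t^n):  ln(-ln MR) = ln k + n ln t, then OLS.
y = np.log(-np.log(mr))
X = np.column_stack([np.ones_like(t), np.log(t)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
k, n = np.exp(beta[0]), beta[1]

# Standard error of estimate: residual spread per degree of freedom.
pred = np.exp(-k * t ** n)
se = np.sqrt(np.sum((mr - pred) ** 2) / (len(t) - 2))
```

On noise-free synthetic data the fit recovers k and n exactly and the SE is essentially zero; on real desorption data the SE is the figure of merit the abstract uses to rank the competing models.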
Abstract:
OBJECTIVE: To test common genetic variants for association with seasonality (seasonal changes in mood and behavior) and to investigate whether there are shared genetic risk factors between psychiatric disorders and seasonality. METHOD: Genome-wide association studies (GWASs) were conducted in Australian (between 1988 and 1990 and between 2010 and 2013) and Amish (between May 2010 and December 2011) samples in whom the Seasonal Pattern Assessment Questionnaire (SPAQ) had been administered, and the results were meta-analyzed in a total sample of 4,156 individuals. Genetic risk scores based on results from prior large GWAS studies of bipolar disorder, major depressive disorder (MDD), and schizophrenia were calculated to test for overlap in risk between psychiatric disorders and seasonality. RESULTS: The most significant association was with rs11825064 (P = 1.7 × 10⁻⁶, β = 0.64, standard error = 0.13), an intergenic single nucleotide polymorphism (SNP) found on chromosome 11. The evidence for overlap in risk factors was strongest for schizophrenia and seasonality, with the schizophrenia genetic profile scores explaining 3% of the variance in log-transformed global seasonality scores. Bipolar disorder genetic profile scores were also associated with seasonality, although at much weaker levels (minimum P value = 3.4 × 10⁻³), and no evidence for overlap in risk was detected between MDD and seasonality. CONCLUSIONS: Common SNPs of large effect most likely do not exist for seasonality in the populations examined. As expected, there were overlapping genetic risk factors for bipolar disorder (but not MDD) with seasonality. Unexpectedly, the risk for schizophrenia and seasonality had the largest overlap, an unprecedented finding that requires replication in other populations and has potential clinical implications considering overlapping cognitive deficits in seasonal affective disorders and schizophrenia.
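A genetic risk (profile) score of the kind computed above is simply a dosage-weighted sum of effect sizes from a discovery GWAS. The following sketch uses invented betas and genotype dosages (none of these numbers come from the study) to show the mechanics, including the variance-explained figure the abstract reports:

```python
import numpy as np

# Hypothetical per-SNP effect sizes (betas) from a discovery GWAS and
# allele dosages (0/1/2 copies of the effect allele) for five individuals
# at three SNPs -- all values invented for illustration.
betas = np.array([0.64, -0.20, 0.11])
dosages = np.array([
    [0, 1, 2],
    [1, 1, 0],
    [2, 0, 1],
    [0, 2, 2],
    [1, 0, 0],
])

# The profile score is the dosage-weighted sum of effect sizes.
scores = dosages @ betas

# Variance in a target trait explained by the score: squared Pearson r.
phenotype = np.array([1.2, 0.4, 1.5, 0.1, 0.7])   # hypothetical seasonality scores
r = np.corrcoef(scores, phenotype)[0, 1]
variance_explained = r ** 2
```

In the study the same quantity (schizophrenia profile scores regressed on log-transformed global seasonality scores) explained about 3% of the variance.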
Abstract:
Research in the health field, and the use of its results, has served as a basis for improving the quality of care, requiring from health professionals knowledge in the specific area in which they work as well as knowledge of research methodology, including observation techniques and techniques for data collection and analysis, so that they can more easily be capable readers of research results. Health professionals are privileged observers of human responses to health and disease, and can contribute to the development and well-being of individuals, often in situations of great vulnerability. In child health and paediatrics the focus is on family-centred care, favouring the harmonious development of the child and young person and valuing measurable health outcomes that make it possible to determine the effectiveness of interventions and the quality of health and life. In the paediatric context we highlight evidence-based practice, the importance attributed to research and to the application of research results in clinical practice, and the development of standardized measurement instruments, namely assessment scales in wide clinical use, which facilitate the appraisal and evaluation of the development and health of children and young people and result in health gains. Systematic observation of neonatal and paediatric populations with assessment scales has been increasing, which has allowed a better balance in the assessment of children as well as observation grounded in theory and in research results. Some of these aspects formed the basis of this work, which aims to address three fundamental objectives.
To address the first objective, "To identify, in the scientific literature, the statistical tests most frequently used by researchers in child health and paediatrics when using assessment scales", a systematic literature review was carried out, with the aim of analysing scientific articles whose data-collection instruments were assessment scales in the area of child and youth health, developed with ordinal variables, and of identifying the statistical tests applied to these variables. An exploratory analysis of the articles showed that researchers use different instruments with different ordinal measurement formats (with 3, 4, 5, 7 or 10 points) and apply parametric tests, non-parametric tests, or both simultaneously to this type of variable, whatever the sample size. The description of the methodology does not always make explicit whether the assumptions of the tests are met. The articles consulted do not always refer to the frequency distribution of the variables (symmetry/asymmetry) or to the magnitude of the correlations between items. Reading this literature supported the writing of two articles, one a systematic literature review and the other a theoretical reflection. Although some answers were found to the doubts faced by researchers and professionals who work with these instruments, there is a need to develop simulation studies that confirm some real situations and some existing theory, and that address other aspects within which real scenarios can be framed, so as to facilitate decision-making by researchers and clinicians who use assessment scales.
To address the second objective, "To compare the performance, in terms of power and probability of type I error, of the 4 parametric MANOVA statistics with 2 non-parametric MANOVA statistics when using randomly generated correlated ordinal variables", we developed a simulation study, using the Monte Carlo method, carried out in the R software. The design of the simulation study included a vector of 3 dependent variables, one independent variable (a factor with three groups), assessment scales with measurement formats of 3, 4, 5 and 7 points, different marginal probabilities (p1 for a symmetric distribution, p2 for a positively skewed distribution, p3 for a negatively skewed distribution and p4 for a uniform distribution) in each of the three groups, correlations of low, medium and high magnitude (r=0.10, r=0.40 and r=0.70, respectively), and six sample sizes (n=30, 60, 90, 120, 240, 300). Analysis of the results showed that Roy's largest root was the statistic with the highest estimates of type I error probability and of test power. Test power behaves differently depending on the frequency distribution of the responses to the items, the magnitude of the correlations between items, the sample size and the measurement format of the scale.
Based on the frequency distribution, we considered three distinct situations: the first (with marginal probabilities p1,p1,p4 and p4,p4,p1), in which the power estimates were very low across the different scenarios; the second (with marginal probabilities p2,p3,p4; p1,p2,p3 and p2,p2,p3), in which the magnitude of the power is high in samples of 60 or more observations for scales with 3, 4 and 5 points, and lower for scales with 7 points, though of the same magnitude in samples of 120 observations, whatever the scenario; and the third (with marginal probabilities p1,p1,p2; p1,p2,p4; p2,p2,p1; p4,p4,p2 and p2,p2,p4), in which the higher the intensity of the correlations between items and the number of points on the scale, and the smaller the sample size, the lower the power of the tests, with Wilks' lambda applied to the ranks being more powerful than all the other MANOVA statistics, with values immediately below Roy's largest root. However, the magnitude of the power of the parametric and non-parametric tests is similar in samples of more than 90 observations (with correlations of low and medium magnitude between the dependent variables) for scales with 3, 4 and 5 points, and of more than 240 observations, for correlations of low intensity, for scales with 7 points. In the simulation study, and based on the frequency distribution, we concluded that in the first simulation situation, across the different scenarios, power is of low magnitude because MANOVA does not detect differences between groups given their similarity. In the second simulation situation, the magnitude of the power is high in all scenarios with sample sizes above 60 observations, so parametric tests can be applied.
In the third simulation situation, across the different scenarios, the smaller the sample size and the higher the intensity of the correlations and the number of points on the scale, the lower the power of the tests, with the magnitude of the power being highest for Wilks' test applied to the ranks, followed by Pillai's trace applied to the ranks. However, the magnitude of the power of the parametric and non-parametric tests is similar in larger samples with correlations of low and medium magnitude. To address the third objective, "To frame the results of applying parametric MANOVA and non-parametric MANOVA to real data from assessment scales with measurement formats of 3, 4, 5 and 7 points within the results of the statistical simulation study", we used real data from the observation of newborns with the Early Feeding Skills (EFS) oral feeding competence assessment scale, the risk of skin lesions with the Neonatal Skin Risk Assessment Scale (NSRAS), and the assessment of functional independence of children and young people with spina bifida with the Functional Independence Measure (FIM). To analyse these scales, 4 practical applications were carried out that fitted the scenarios of the simulation study. Age, weight and level of spinal cord lesion were the independent variables chosen to select the groups, with newborns grouped by "gestational age classes" and "weight classes", and children and young people with spina bifida by "age classes" and "levels of spinal cord lesion". The results with real data fitted well within the simulation study.
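As a minimal illustration of the simulation design described above (the thesis used R; this is a Python analogue under assumed settings: 3 dependent variables, 3 identical groups of n = 30, 5-point items, inter-item correlation r = 0.40), the type I error of parametric MANOVA's Wilks' lambda can be estimated by Monte Carlo:

```python
import numpy as np

rng = np.random.default_rng(42)

def wilks_lambda(groups):
    """Wilks' lambda for one-way MANOVA: det(E) / det(E + H)."""
    allobs = np.vstack(groups)
    grand = allobs.mean(axis=0)
    E = sum((g - g.mean(0)).T @ (g - g.mean(0)) for g in groups)       # within
    H = sum(len(g) * np.outer(g.mean(0) - grand, g.mean(0) - grand)    # between
            for g in groups)
    return np.linalg.det(E) / np.linalg.det(E + H)

def ordinal_group(n, corr, cuts):
    """n observations of 3 correlated 5-point items: thresholded MVN."""
    cov = np.full((3, 3), corr) + (1.0 - corr) * np.eye(3)
    z = rng.multivariate_normal(np.zeros(3), cov, size=n)
    return np.digitize(z, cuts).astype(float)

cuts = [-1.0, -0.3, 0.3, 1.0]          # 4 thresholds -> 5 ordered categories
reps, hits = 500, 0
for _ in range(reps):
    # Under H0 the three groups share the same marginal distribution.
    groups = [ordinal_group(30, 0.40, cuts) for _ in range(3)]
    lam = wilks_lambda(groups)
    # Bartlett's chi-square approximation: -(N - 1 - (p + g)/2) ln(lam),
    # df = p(g - 1) = 6; the 0.95 quantile of chi2(6) is 12.592.
    stat = -(90 - 1 - (3 + 3) / 2) * np.log(lam)
    if stat > 12.592:
        hits += 1

type1_rate = hits / reps               # should sit near the nominal 0.05
```

The same loop, with unequal marginal probabilities across groups, estimates power instead of type I error; swapping the raw scores for their ranks gives the rank-transformed (non-parametric) variant the thesis compares.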
Abstract:
Fully articulated hand tracking promises to enable fundamentally new interactions with virtual and augmented worlds, but the limited accuracy and efficiency of current systems has prevented widespread adoption. Today's dominant paradigm uses machine learning for initialization and recovery followed by iterative model-fitting optimization to achieve a detailed pose fit. We follow this paradigm, but make several changes to the model-fitting, namely using: (1) a more discriminative objective function; (2) a smooth-surface model that provides gradients for non-linear optimization; and (3) joint optimization over both the model pose and the correspondences between observed data points and the model surface. While each of these changes may actually increase the cost per fitting iteration, we find a compensating decrease in the number of iterations. Further, the wide basin of convergence means that fewer starting points are needed for successful model fitting. Our system runs in real-time on CPU only, which frees up the commonly over-burdened GPU for experience designers. The hand tracker is efficient enough to run on low-power devices such as tablets. We can track up to several meters from the camera to provide a large working volume for interaction, even using the noisy data from current-generation depth cameras. Quantitative assessments on standard datasets show that the new approach exceeds the state of the art in accuracy. Qualitative results take the form of live recordings of a range of interactive experiences enabled by this new approach.
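The fitting loop described above (a smooth model whose gradients drive non-linear least-squares optimization) can be illustrated on a toy problem. This sketch fits a circle to noisy 2D points by Gauss-Newton; the hand tracker's objective, surface model, and joint pose/correspondence optimization are far richer, so this shows only the iteration pattern:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic observations: points on a circle of center (2, -1), radius 3,
# with small Gaussian noise (illustrative stand-in for depth-camera data).
theta = rng.uniform(0.0, 2.0 * np.pi, 50)
pts = np.stack([2.0 + 3.0 * np.cos(theta), -1.0 + 3.0 * np.sin(theta)], axis=1)
pts += rng.normal(0.0, 0.01, pts.shape)

p = np.array([0.0, 0.0, 1.0])          # initial guess: center (a, b), radius R
for _ in range(50):                    # Gauss-Newton iterations
    d = pts - p[:2]
    dist = np.linalg.norm(d, axis=1)
    res = dist - p[2]                  # signed radial residuals
    # Jacobian of the residuals w.r.t. (a, b, R) -- the smooth model's gradients.
    J = np.column_stack([-d[:, 0] / dist, -d[:, 1] / dist, -np.ones(len(pts))])
    step = np.linalg.lstsq(J, -res, rcond=None)[0]
    p = p + step
```

Each iteration solves a linearized least-squares subproblem, which is exactly why a smooth, differentiable model matters: without gradients there is no Jacobian to linearize.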
Abstract:
The goal of this project is to increase the number of successful real estate license renewals while reducing the disruption caused by manual processing and by calls for assistance with renewals and technical issues. The data utilized in this project will demonstrate that the Real Estate Commission renewal process can be improved by utilizing electronic resources such as more detailed website information and repeated e-mail notices, through modifications to the online renewal process to reduce applicant error, and by increasing the visibility of online renewal log-in instructions while decreasing the visibility and use of mail-in applications.
Abstract:
This work deals with the development of calibration procedures and control systems to improve the performance and efficiency of modern spark ignition turbocharged engines. The algorithms developed are used to optimize and manage the spark advance and the air-to-fuel ratio to control knock and the exhaust gas temperature at the turbine inlet. The described work falls within the activity that the research group started in previous years with the industrial partner Ferrari S.p.A. The first chapter deals with the development of a control-oriented engine simulator based on a neural network approach, with which the main combustion indexes can be simulated. The second chapter deals with the development of a procedure to calibrate offline the spark advance and the air-to-fuel ratio to run the engine under knock-limited conditions and with the maximum admissible exhaust gas temperature at the turbine inlet. This procedure is then converted into a model-based control system and validated with a Software in the Loop approach using the engine simulator developed in the first chapter. Finally, it is implemented in rapid control prototyping hardware to manage the combustion in steady-state and transient operating conditions at the test bench. The third chapter deals with the study of an innovative and cheap sensor for in-cylinder pressure measurement: a piezoelectric washer that can be installed between the spark plug and the engine head. The signal generated by this kind of sensor is studied, and a specific algorithm is developed to adjust the value of the knock index in real time. Finally, with the engine simulator developed in the first chapter, it is demonstrated that the innovative sensor can be coupled with the control system described in the second chapter and that the performance obtained could match that reachable with standard in-cylinder pressure sensors.
Abstract:
When it comes to designing a structure, architects and engineers want to join forces in order to create and build the most beautiful and efficient building. From finding new shapes and forms to optimizing stability and resistance, there is a constant link to be made between the two professions. In architecture, there has always been a particular interest in creating new structural shapes and types inspired by many different fields, one of them being nature itself. In engineering, the selection of the optimum has always dictated the way of thinking about and designing structures. This mindset led, through studies, to the current best practices in construction. However, at a certain point both disciplines were limited by traditional manufacturing constraints. Over the last decades, much progress has been made from a technological point of view, making it possible to go beyond today's manufacturing constraints. With the emergence of Wire-and-Arc Additive Manufacturing (WAAM) combined with Algorithmic-Aided Design (AAD), architects and engineers are offered new opportunities to merge architectural beauty and structural efficiency. Both technologies allow for exploring and building unusual and complex structural shapes, in addition to a reduction of costs and environmental impacts. Through this study, the author makes use of the previously mentioned technologies and assesses their potential, first to design an aesthetically appreciated tree-like column, and second to propose a new type of standardized and optimized sandwich cross-section to the construction industry. Parametric algorithms to model the dendriform column and the new sandwich cross-section are developed and presented in detail. A catalog draft of the latter, and methods to establish it, are then proposed and discussed. Finally, the buckling behavior of the cross-section is assessed considering standard steel and WAAM material properties.
Abstract:
The over-production of reactive oxygen species (ROS) can cause oxidative damage to a large number of molecules, including DNA, and has been associated with the pathogenesis of several disorders, such as diabetes mellitus (DM), dyslipidemia and periodontitis (PD). We hypothesise that the presence of these diseases could proportionally increase DNA damage. The aim of this study was to assess the micronucleus frequency (MNF), as a biomarker for DNA damage, in individuals with type 2 DM, dyslipidemia and PD. One hundred and fifty patients were divided into five groups based upon diabetic, dyslipidemic and periodontal status (Group 1 - poorly controlled DM with dyslipidemia and PD; Group 2 - well-controlled DM with dyslipidemia and PD; Group 3 - without DM, with dyslipidemia and PD; Group 4 - without DM, without dyslipidemia and with PD; and Group 5 - without DM, dyslipidemia or PD). Blood analyses were carried out for fasting plasma glucose, HbA1c and lipid profile. Periodontal examinations were performed, and venous blood was collected and processed for the micronucleus (MN) assay. The frequency of micronuclei was evaluated by the cell culture cytokinesis-block MN assay. The general characteristics of each group were described by the mean and standard deviation, and the data were submitted to the Mann-Whitney, Kruskal-Wallis, multiple logistic regression and Spearman tests. Groups 1, 2 and 3 were similarly dyslipidemic, presenting increased levels of total cholesterol, low-density lipoprotein cholesterol and triglycerides. Periodontal tissue destruction and local inflammation were significantly more severe in diabetics, particularly in Group 1. The frequencies of bi-nucleated cells with MN and of MNF, as well as of nucleoplasmic bridges, were significantly higher for poorly controlled diabetics with dyslipidemia and PD in comparison with those systemically healthy, even after adjusting for age and considering Bonferroni's correction.
An elevated frequency of micronuclei was found in patients affected by type 2 diabetes, dyslipidemia and PD. This result suggests that these three pathologies occurring simultaneously play an additive role in producing DNA damage. In addition, the micronucleus assay was useful as a biomarker for DNA damage in individuals with chronic degenerative diseases.
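The rank-based comparisons above (e.g. Mann-Whitney) can be sketched in a few lines. This self-contained computation of the U statistic, with midranks for ties, is applied to hypothetical micronucleus counts (illustrative numbers, not the study's data):

```python
import numpy as np

def mann_whitney_u(x, y):
    """Mann-Whitney U statistic for sample x vs. y, using midranks for ties."""
    combined = np.concatenate([x, y])
    order = combined.argsort(kind="mergesort")
    ranks = np.empty(len(combined))
    ranks[order] = np.arange(1, len(combined) + 1)
    # Replace ranks within each tied group by their average (midrank).
    for v in np.unique(combined):
        mask = combined == v
        ranks[mask] = ranks[mask].mean()
    r1 = ranks[: len(x)].sum()
    return r1 - len(x) * (len(x) + 1) / 2   # U for the first sample

# Hypothetical micronucleus counts per 1000 cells (made-up values):
group_1 = np.array([3.0, 5, 4, 6, 8, 5])    # poorly controlled DM + dyslipidemia + PD
group_5 = np.array([1.0, 2, 2, 3, 1, 4])    # systemically healthy
u = mann_whitney_u(group_1, group_5)        # U near n1*n2 = 36 indicates separation
```

A large U relative to its maximum n1·n2 (here 36) corresponds to the "significantly higher" MN frequencies the abstract reports; in practice one would take the p-value from a statistics library rather than hand-roll the null distribution.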