33 results for FLUCTUATION THEOREM
in Helda - Digital Repository of the University of Helsinki
Abstract:
The present dissertation analyses 36 local vernaculars of villages surrounding the northern Russian city of Vologda with respect to the system of vowels in stressed syllables and in the syllables preceding them, using the available dialectological research. The system in question differs from the corresponding standard Russian system in that the palatalisation of the surrounding consonants affects the vowels much more significantly in the vernaculars, whereas the phonetic difference between stressed and unstressed vowels is less obvious in them. The detailed information on the local vernaculars is retrieved from the Dialektologičeskij Atlas Russkogo Jazyka dialect atlas, the data for which were collected, for the most part, in the 1940s and 1950s. The theoretical framework of the research consists of a brief cross-section of western sociolinguistic theory related to language change and of historical linguistics related to Slavonic vowel development, including some new theories concerning the development of the Russian vowel phonemes. The author has collected dialect data in one of the 36 villages and in three villages surrounding it. During the fieldwork, the speech of nine elderly persons and ten school children was recorded. The speech data were then transcribed with coded information on the corresponding etymological vowels, the phonetic position, and the actual pronunciation at each occurrence of a vowel in the phonetic positions named above. The data from both dialect strata were then systematised into two corresponding systems, which were compared with the information retrievable from the dialect atlas and other dialectological literature on the vowel phoneme system of the traditional local vernacular. As a result, it was found (as hypothesised) that the vernacular vowel phoneme system has approached that of the standard language but has nonetheless not become identical to it.
The traditional vernacular has one vowel phoneme more than the standard language, whereas the vowel phoneme inventory in the speech of the school children coincides with that of the standard language, although the phonetic realisations differ to some extent. The analysis of the speech of the elderly people showed that it is quite difficult to define the exact phoneme inventory of this stratum, due to fluctuation and irregularities in the realisation of the old phoneme that has ceased to exist in the newest stratum. It was noticed that the effect of the quality of the surrounding consonants on the phonetic realisation of the vowel phonemes has diminished, and that the dependence of the phonetic realisation of a vowel phoneme on its position relative to the word stress has become more and more obvious, which is the state of affairs in the standard language as well.
Abstract:
One of the most fundamental questions in the philosophy of mathematics concerns the relation between truth and formal proof. The position according to which the two concepts are the same is called deflationism, and the opposing viewpoint substantialism. In an important result of mathematical logic, Kurt Gödel proved in his first incompleteness theorem that every consistent, axiomatizable formal system containing arithmetic includes sentences that can neither be proved nor disproved within that system. However, such undecidable Gödel sentences can be established to be true once we expand the formal system with Alfred Tarski's semantic theory of truth, as shown by Stewart Shapiro and Jeffrey Ketland in their semantical arguments for the substantiality of truth. According to them, in Gödel sentences we have an explicit case of true but unprovable sentences, and hence deflationism is refuted. Against that, Neil Tennant has shown that instead of Tarskian truth we can expand the formal system with a soundness principle, according to which all provable sentences are assertable, and the assertability of Gödel sentences follows. This way, the relevant question is not whether we can establish the truth of Gödel sentences, but whether Tarskian truth is a more plausible expansion than a soundness principle. In this work I will argue that this problem is best approached once we think of mathematics as the full human phenomenon, and not just as consisting of formal systems. When pre-formal mathematical thinking is included in our account, we see that Tarskian truth is in fact not an expansion at all. I claim that what proof is to formal mathematics, truth is to pre-formal thinking, and the Tarskian account of semantic truth mirrors this relation accurately. However, the introduction of pre-formal mathematics is vulnerable to the deflationist counterargument that while existing in practice, pre-formal thinking could still be philosophically superfluous if it does not refer to anything objective.
Against this, I argue that all truly deflationist philosophical theories lead to the arbitrariness of mathematics. In all other philosophical accounts of mathematics there is room for a referent for pre-formal mathematics, and the expansion to Tarskian truth can be made naturally. Hence, if we reject the arbitrariness of mathematics, I argue in this work, we must accept the substantiality of truth. Related subjects such as neo-Fregeanism will also be covered, and shown not to remove the need for Tarskian truth. The only remaining route for the deflationist is to change the underlying logic so that our formal languages can include their own truth predicates, which Tarski showed to be impossible for classical first-order languages. With such logics we would have no need to expand the formal systems, and the above argument would fail. Of the alternative approaches, I focus in this work mostly on the Independence Friendly (IF) logic of Jaakko Hintikka and Gabriel Sandu. Hintikka has claimed that an IF language can include its own adequate truth predicate. I argue that while this is indeed the case, we cannot recognize the truth predicate as such within the same IF language, and the need for Tarskian truth remains. In addition to IF logic, second-order logic and Saul Kripke's approach using Kleenean logic will also be shown to fail in a similar fashion.
Abstract:
Catechol-O-methyltransferase (COMT) metabolizes catecholamines such as dopamine (DA), noradrenaline (NA) and adrenaline, which are vital neurotransmitters and hormones that play important roles in the regulation of physiological processes. The COMT enzyme has a functional Val158Met polymorphism in humans, which affects the subject's COMT activity. Increasing evidence suggests that this functional polymorphism may play a role in the etiology of various diseases, from schizophrenia to cancers. The aim of this project was to provide novel biochemical information on the physiological and especially the pathophysiological roles of the COMT enzyme, as well as the effects of COMT inhibition in the brain and in the cardiovascular and renal system. To assess the roles of COMT and COMT inhibition in pathophysiology, we used four different study designs. The possible beneficial effects of COMT inhibition were studied in double-transgenic rats (dTGRs) harbouring human angiotensinogen and renin genes. Due to angiotensin II (Ang II) overexpression, these animals exhibit severe hypertension, cardiovascular and renal end-organ damage, and a mortality of approximately 25-40% at the age of 7 weeks. Tissue samples from the dTGRs and their Sprague-Dawley controls were assessed with light microscopy, immunohistochemistry, reverse transcriptase-polymerase chain reaction (RT-PCR) and high-pressure liquid chromatography (HPLC) to evaluate the tissue damage and the possible protective effects of pharmacological intervention with COMT inhibitors. In a second study, the consequences of genetic and pharmacological COMT blockade for blood pressure regulation during normal and high sodium intake were elucidated using COMT-deficient mice. The blood pressure and the heart rate were measured using direct radiotelemetric blood pressure surveillance. In a third study, the effects of acute and subchronic COMT inhibition during combined levodopa (L-DOPA) + dopa decarboxylase inhibitor treatment on homocysteine formation were evaluated.
Finally, we assessed COMT enzyme expression, activity and cellular localization in the CNS during inflammation-induced neurodegeneration using Western blotting, HPLC and various enzymatic assays. The effects of pharmacological COMT inhibition on neurodegeneration were also studied. The COMT inhibitor entacapone protected against the Ang II-induced perivascular inflammation, renal damage and cardiovascular mortality in dTGRs. COMT inhibitors reduced the albuminuria by 85% and prevented the cardiovascular mortality completely. Entacapone treatment was shown to ameliorate oxidative stress and inflammation. Furthermore, we established that genetic and pharmacological blockade of the COMT enzyme protects against the blood pressure-elevating effects of high sodium intake in mice. These effects were mediated via enhanced renal dopaminergic tone and suggest an important role for the COMT enzyme, especially in salt-sensitive hypertension. Entacapone also ameliorated the L-DOPA-induced hyperhomocysteinemia in rats. This is important, since decreased homocysteine levels may decrease the risk of cardiovascular diseases in Parkinson's disease (PD) patients using L-DOPA. Lipopolysaccharide (LPS)-induced inflammation and the subsequent delayed dopaminergic neurodegeneration were accompanied by up-regulation of COMT expression and activity in microglial cells as well as in perivascular cells. Interestingly, a similar perivascular up-regulation of COMT expression in inflamed renal tissue was previously noted in dTGRs. These results suggest that inflammation reactions may up-regulate COMT expression. Furthermore, this increased glial and perivascular COMT activity in the central nervous system (CNS) may decrease the bioavailability of L-DOPA and be related to the motor fluctuations noted during L-DOPA therapy in PD patients.
Abstract:
The forest simulator is a computerized model for predicting forest growth and future development as well as the effects of forest harvests and treatments. The forest planning system is a decision support tool, usually including a forest simulator and an optimisation model, for finding the optimal forest management actions. The information produced by forest simulators and forest planning systems is used for various analytical purposes and in support of decision making. However, the quality and reliability of this information can often be questioned. Natural variation in forest growth and estimation errors in forest inventory, among other things, cause uncertainty in predictions of forest growth and development. This uncertainty, stemming from different sources, has various undesirable effects. In many cases the outcomes of decisions based on uncertain information are something other than desired. The objective of this thesis was to study various sources of uncertainty and their effects in forest simulators and forest planning systems. The study focused on three notable sources of uncertainty: errors in forest growth predictions, errors in forest inventory data, and stochastic fluctuation of timber assortment prices. The effects of uncertainty were studied using two types of forest growth models, individual tree-level models and stand-level models, and with various error simulation methods. A new method for simulating more realistic forest inventory errors was introduced and tested. The three notable sources of uncertainty were also combined and their joint effects on stand-level net present value estimates were simulated. According to the results, the various sources of uncertainty can have distinct effects in different forest growth simulators. The new forest inventory error simulation method proved to produce more realistic errors. The analysis of the joint effects of the various sources of uncertainty provided new insight into uncertainty in forest simulators.
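The joint-effect simulation described above can be illustrated with a minimal Monte Carlo sketch. This is not the simulator used in the thesis; the growth model, error magnitudes, and price distribution below are purely illustrative assumptions:

```python
import random

def npv_with_errors(true_volume=200.0, growth_rate=0.03, years=10,
                    price_mean=55.0, discount=0.04, n_sim=10000, seed=1):
    """Monte Carlo sketch: propagate inventory error, growth-prediction
    error, and stochastic timber price into a net present value estimate."""
    rng = random.Random(seed)
    npvs = []
    for _ in range(n_sim):
        # Inventory error: measured stand volume deviates from the true one.
        volume = true_volume * (1 + rng.gauss(0, 0.10))
        # Growth-prediction error: the annual growth rate is uncertain.
        rate = growth_rate + rng.gauss(0, 0.01)
        volume_at_harvest = volume * (1 + rate) ** years
        # Price fluctuation: the harvest-time price varies stochastically.
        price = max(0.0, rng.gauss(price_mean, 8.0))
        npvs.append(volume_at_harvest * price / (1 + discount) ** years)
    mean = sum(npvs) / n_sim
    var = sum((x - mean) ** 2 for x in npvs) / (n_sim - 1)
    return mean, var ** 0.5

mean_npv, sd_npv = npv_with_errors()
```

Re-running the sketch with one error magnitude at a time set to zero shows how much each source alone contributes to the spread of the net present value estimate, which is the kind of joint-effect decomposition the thesis studies at far greater depth.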
Abstract:
Inadvertent climate modification has led to an increase in urban temperatures compared to the surrounding rural areas. The main reason for the temperature rise is the altered partitioning of input net radiation into heat storage and sensible and latent heat fluxes, in addition to the anthropogenic heat flux. The heat storage flux and the anthropogenic heat flux have not yet been determined for Helsinki, and they are not directly measurable. By contrast, the turbulent fluxes of sensible and latent heat, as well as net radiation, can be measured, and the anthropogenic heat flux together with the heat storage flux can be solved as a residual. As a result, all inaccuracies in the determination of the energy balance components propagate to the residual term, and special attention must be paid to the accurate determination of the components. One cause of error in the turbulent fluxes is the attenuation of fluctuations at high frequencies, which can be accounted for by high-frequency spectral corrections. The aim of this study is twofold: to assess the relevance of high-frequency corrections to water vapor fluxes and to assess the temporal variation of the energy fluxes. Turbulent fluxes of sensible and latent heat have been measured at the SMEAR III station, Helsinki, since December 2005 using the eddy covariance technique. In addition, net radiation measurements have been ongoing since July 2007. The calculation methods used in this study consist of widely accepted eddy covariance post-processing methods in addition to Fourier and wavelet analysis. The high-frequency spectral correction using the traditional transfer function method is highly dependent on relative humidity and has an 11% effect on the latent heat flux. This method is based on an assumption of spectral similarity, which is shown not to be valid. A new correction method using wavelet analysis is therefore introduced, and it seems to account for the high-frequency variation deficit.
Nevertheless, the resulting wavelet correction remains small in contrast to the traditional transfer function correction. The energy fluxes exhibit a behavior characteristic of urban environments: the energy input is channeled to sensible heat as the latent heat flux is restricted by water availability. The monthly mean residual of the energy balance ranges from 30 W m-2 in summer to -35 W m-2 in winter, meaning that heat is stored in the ground during summer. Furthermore, the anthropogenic heat flux is estimated at approximately 50 W m-2 during winter, when residential heating is important.
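The principle behind a transfer-function type high-frequency correction can be sketched as follows. This is a schematic only, not the SMEAR III processing code: the idealised cospectrum shape, the first-order sensor time constant, and the integration band are illustrative assumptions:

```python
import math

def lowpass_gain(f, tau):
    """First-order sensor response: transfer function of a low-pass filter
    with time constant tau at frequency f (attenuates high frequencies)."""
    return 1.0 / (1.0 + (2 * math.pi * f * tau) ** 2)

def model_cospectrum(f, u=3.0, z=30.0):
    """Idealised flux cospectrum shape in normalised units, peaking at an
    intermediate dimensionless frequency n = f z / u and decaying as a
    power law at high frequencies (illustrative, not a fitted spectrum)."""
    n = f * z / u
    return n / (1.0 + 5.3 * n ** (5.0 / 3.0)) / f

def attenuation_factor(tau, f_lo=1e-3, f_hi=5.0, steps=20000):
    """Ratio of filtered to unfiltered flux: integrate the cospectrum with
    and without the transfer function over the measured frequency band."""
    dlnf = (math.log(f_hi) - math.log(f_lo)) / steps
    measured = ideal = 0.0
    for i in range(steps):
        f = math.exp(math.log(f_lo) + (i + 0.5) * dlnf)
        w = model_cospectrum(f) * f * dlnf  # log-spaced integration weight
        ideal += w
        measured += w * lowpass_gain(f, tau)
    return measured / ideal

# Multiplicative correction applied to the measured (attenuated) flux:
corr = 1.0 / attenuation_factor(tau=0.1)
```

Because the method rescales the whole flux by the ratio of two cospectral integrals, it stands or falls with the assumed cospectrum shape; this is exactly the spectral-similarity assumption that the study shows to be invalid for water vapor, motivating the wavelet-based alternative.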
Abstract:
The efforts to combine quantum theory with general relativity have been great and marked by several successes. One field where progress has lately been made is the study of noncommutative quantum field theories, which arise as a low-energy limit in certain string theories. The idea of noncommutativity comes naturally when combining these two extremes and has profound implications for results widely accepted in traditional, commutative, theories. In this work I review the status of one of the most important connections in physics, the spin-statistics relation. The relation is deeply ingrained in our reality in that it gives us the structure of the periodic table and is of crucial importance for the stability of all matter. The dramatic effects of the noncommutativity of space-time coordinates, mainly the loss of Lorentz invariance, call the spin-statistics relation into question. The spin-statistics theorem is first presented in its traditional setting, with a clarifying proof starting from minimal requirements. Next the notion of noncommutativity is introduced and its implications studied. The discussion is essentially based on twisted Poincaré symmetry, the space-time symmetry of noncommutative quantum field theory. The controversial issue of microcausality in noncommutative quantum field theory is settled by showing, for the first time, that the light wedge microcausality condition is compatible with twisted Poincaré symmetry. The spin-statistics relation is considered both from the point of view of braided statistics and in the traditional Lagrangian formulation of Pauli, with the conclusion that Pauli's age-old theorem withstands even this test, so dramatic for the whole structure of space-time.
Abstract:
Whether a statistician wants to complement a probability model for observed data with a prior distribution and carry out fully probabilistic inference, or to base the inference only on the likelihood function, may be a fundamental question in theory, but in practice it may well be of less importance if the likelihood contains much more information than the prior. Maximum likelihood inference can be justified as a Gaussian approximation at the posterior mode, using flat priors. However, in situations where parametric assumptions in standard statistical models would be too rigid, a more flexible model formulation, combined with fully probabilistic inference, can be achieved using hierarchical Bayesian parametrization. This work includes five articles, all of which apply probability modeling to various problems involving incomplete observation. Three of the papers apply maximum likelihood estimation and two of them hierarchical Bayesian modeling. Because maximum likelihood may be presented as a special case of Bayesian inference, but not the other way round, in the introductory part of this work we present a framework for probability-based inference using only Bayesian concepts. We also re-derive some results presented in the original articles using the toolbox developed herein, to show that they are also justifiable under this more general framework. Here the assumption of exchangeability and de Finetti's representation theorem are applied repeatedly to justify the use of standard parametric probability models with conditionally independent likelihood contributions. It is argued that this same reasoning can also be applied under sampling from a finite population. The main emphasis here is on probability-based inference under incomplete observation due to study design. This is illustrated using a generic two-phase cohort sampling design as an example.
The alternative approaches presented for analysis of such a design are full likelihood, which utilizes all observed information, and conditional likelihood, which is restricted to a completely observed set, conditioning on the rule that generated that set. Conditional likelihood inference is also applied for a joint analysis of prevalence and incidence data, a situation subject to both left censoring and left truncation. Other topics covered are model uncertainty and causal inference using posterior predictive distributions. We formulate a non-parametric monotonic regression model for one or more covariates and a Bayesian estimation procedure, and apply the model in the context of optimal sequential treatment regimes, demonstrating that inference based on posterior predictive distributions is feasible also in this case.
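The point that maximum likelihood coincides with the posterior mode under a flat prior can be demonstrated with a toy sketch. This is a Gaussian mean model maximised on a grid; the data and the prior below are illustrative and not taken from the articles:

```python
import math

def log_likelihood(mu, data, sigma=1.0):
    """Gaussian log-likelihood of the mean parameter mu."""
    return sum(-0.5 * ((x - mu) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for x in data)

def posterior_mode(data, log_prior=lambda mu: 0.0, grid=None):
    """Posterior mode on a grid; with a flat prior (log_prior == 0) the
    mode coincides with the maximum likelihood estimate."""
    if grid is None:
        grid = [i / 1000.0 for i in range(-5000, 5001)]
    return max(grid, key=lambda mu: log_likelihood(mu, data) + log_prior(mu))

data = [0.5, 1.2, 0.9, 1.4, 1.0]
mle = posterior_mode(data)  # flat prior: mode equals the sample mean, 1.0
```

With an informative prior, say `log_prior=lambda mu: -0.5 * (mu / 0.5) ** 2`, the mode shifts away from the sample mean toward the prior mean, which illustrates why the choice between the two inference modes matters little only when the likelihood dominates the prior.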
Abstract:
It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion with trajectories running on invariant tori. These tori foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, getting only slightly deformed. The measure of the persisting invariant tori approaches full measure as the size of the perturbation tends to zero. In the first part of the thesis we shall use a Renormalization Group (RG) scheme in order to prove the classical KAM result in the case of a non-analytic perturbation (the latter will only be assumed to have continuous derivatives up to a sufficiently large order). We shall proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one. We will finally show that the approximate solutions converge to a differentiable solution of our original problem. In the second part we will use an RG scheme with continuous scales, so that instead of solving an iterative equation as in the classical RG KAM, we will end up solving a partial differential equation. This will allow us to reduce the complications of treating a sequence of iterative equations to the use of the Banach fixed point theorem in a suitable Banach space.
Abstract:
The monograph dissertation deals with kernel integral operators and their mapping properties on Euclidean domains. The associated kernels are weakly singular, and examples of such kernels are given by the Green functions of certain elliptic partial differential equations. It is well known that the mapping properties of the corresponding Green operators can be used to deduce a priori estimates for the solutions of these equations. In the dissertation, natural size and cancellation conditions are quantified for kernels defined on domains. These kernels induce integral operators which are then composed with any partial differential operator of prescribed order, depending on the size of the kernel. The main object of study in this dissertation being the boundedness properties of such compositions, the main result is the characterization of their Lp-boundedness on suitably regular domains. When the aforementioned kernels are defined on the whole Euclidean space, their partial derivatives of prescribed order turn out to be so-called standard kernels, which arise in connection with singular integral operators. The Lp-boundedness of singular integrals is characterized by the T1 theorem, which is originally due to David and Journé and was published in 1984 (Ann. of Math. 120). The main result of the dissertation can be interpreted as a T1 theorem for weakly singular integral operators. The dissertation also deals with special convolution-type weakly singular integral operators defined on Euclidean spaces.
Abstract:
The concept of an atomic decomposition was introduced by Coifman and Rochberg (1980) for weighted Bergman spaces on the unit disk. By the Riemann mapping theorem, functions in every simply connected domain in the complex plane have an atomic decomposition. However, a decomposition resulting from a conformal mapping of the unit disk tends to be very implicit and often lacks a clear connection to the geometry of the domain that it has been mapped into. The lattice of points at which the atoms of the decomposition are evaluated usually follows the geometry of the original domain, but after mapping the domain onto another, this connection is easily lost and the layout of points becomes seemingly random. In the first article we construct an atomic decomposition directly on a weighted Bergman space on a class of regulated, simply connected domains. The construction uses the geometric properties of the regulated domain, but does not explicitly involve any conformal Riemann map from the unit disk. It is known that the Bergman projection is not bounded on the space L-infinity of bounded measurable functions. Taskinen (2004) introduced the locally convex spaces LV-infinity of measurable and HV-infinity of analytic functions on the unit disk, the latter being a closed subspace of the former. They have the property that the Bergman projection is continuous from LV-infinity onto HV-infinity and, in some sense, the space HV-infinity is the smallest possible substitute for the space H-infinity of analytic functions. In the second article we extend the above result to a smoothly bounded strictly pseudoconvex domain. Here the related reproducing kernels are usually not known explicitly, and thus the proof of the continuity of the Bergman projection is based on generalised Forelli-Rudin estimates instead of integral representations. The minimality of the space LV-infinity is shown by using peaking functions first constructed by Bell (1981).
Taskinen (2003) showed that on the unit disk the space HV-infinity admits an atomic decomposition. This result is generalised in the third article by constructing an atomic decomposition for the space HV-infinity on a smoothly bounded strictly pseudoconvex domain. In this case every function can be represented as a linear combination of atoms such that the coefficient sequence belongs to a suitable Köthe co-echelon space.
Abstract:
Research in model theory has extended from the study of elementary classes to non-elementary classes, i.e. to classes which are not completely axiomatizable in elementary logic. The main theme has been the attempt to generalize tools from elementary stability theory to cover more applications arising in other branches of mathematics. In this doctoral thesis we introduce finitary abstract elementary classes, a non-elementary framework of model theory. These classes are a special case of abstract elementary classes (AEC), introduced by Saharon Shelah in the 1980s. We have collected a set of properties for classes of structures which enable us to develop a 'geometric' approach to stability theory, including an independence calculus, in a very general framework. The thesis studies AECs with amalgamation, joint embedding, arbitrarily large models, countable Löwenheim-Skolem number and finite character. The novel idea is the property of finite character, which enables the use of a notion of a weak type instead of the usual Galois type. Notions of simplicity, superstability, Lascar strong type, primary model and U-rank are introduced for finitary classes. A categoricity transfer result is proved for simple, tame finitary classes: categoricity in any uncountable cardinal transfers upwards and to all cardinals above the Hanf number. Unlike previous categoricity transfer results of equal generality, the theorem does not assume that the categoricity cardinal is a successor. The thesis consists of three independent papers. All three papers are joint work with Tapani Hyttinen.
Abstract:
Stochastic filtering is, in general, the estimation of indirectly observed states given observed data. This means that one is concerned with conditional expected values, which are the most accurate estimates given the observations in the context of a probability space. In my thesis, I present the theory of filtering using two different kinds of observation processes: the first is a diffusion process, discussed in the first chapter, while the third chapter introduces the latter, a counting process. The majority of the fundamental results of stochastic filtering are stated in the form of equations, such as the unnormalized Zakai equation, which leads to the Kushner-Stratonovich equation. The latter, also known as the normalized Zakai equation or, equally, the Fujisaki-Kallianpur-Kunita (FKK) equation, shows the divergence between the estimate using a diffusion process and that using a counting process. I also introduce an example for the linear Gaussian case, which is the main concept behind the so-called Kalman-Bucy filter. As the unnormalized and the normalized Zakai equations are in terms of the conditional distribution, a density of these distributions is developed through these equations and stated in Kushner's theorem. However, Kushner's theorem has the form of a stochastic partial differential equation that needs to be verified in the sense of the existence and uniqueness of its solution, which is covered in the second chapter.
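For the linear Gaussian case mentioned above, a minimal sketch of the discrete-time analogue of the Kalman-Bucy filter may help; the scalar state model and the noise variances below are illustrative assumptions, not taken from the thesis:

```python
import random

def kalman_filter_1d(observations, a=0.9, q=0.1, r=0.5, m0=0.0, p0=1.0):
    """Scalar discrete-time Kalman filter for the model
    x_k = a * x_{k-1} + noise(var q),  y_k = x_k + noise(var r).
    Returns the filtered means, i.e. the conditional expectations
    E[x_k | y_1, ..., y_k]."""
    m, p = m0, p0
    means = []
    for y in observations:
        # Predict step: propagate mean and variance through the dynamics.
        m_pred = a * m
        p_pred = a * a * p + q
        # Update step: weigh the innovation by the Kalman gain.
        k = p_pred / (p_pred + r)
        m = m_pred + k * (y - m_pred)
        p = (1 - k) * p_pred
        means.append(m)
    return means

# Simulate a hidden state and noisy observations, then filter.
rng = random.Random(0)
x = 0.0
xs, ys = [], []
for _ in range(200):
    x = 0.9 * x + rng.gauss(0, 0.1 ** 0.5)
    xs.append(x)
    ys.append(x + rng.gauss(0, 0.5 ** 0.5))
est = kalman_filter_1d(ys)
```

Because the filtered mean is the conditional expectation, its mean-square error is smaller than that of the raw observations; the continuous-time Kalman-Bucy equations arise as the limit of this predict-update recursion as the time step shrinks.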
Abstract:
Tools known as maximal functions are frequently used in harmonic analysis when studying the local behaviour of functions. Typically they measure the suprema of local averages of non-negative functions. It is essential that the size (more precisely, the L^p-norm) of the maximal function is comparable to the size of the original function. When dealing with families of operators between Banach spaces, we are often forced to replace the uniform bound with the larger R-bound. Hence such a replacement is also needed in the maximal function for functions taking values in spaces of operators. More specifically, the suprema of norms of local averages (i.e. their uniform bound in the operator norm) have to be replaced by their R-bound. This procedure gives us the Rademacher maximal function, which was introduced by Hytönen, McIntosh and Portal in order to prove a certain vector-valued Carleson's embedding theorem. They noticed that the sizes of an operator-valued function and its Rademacher maximal function are comparable for many common range spaces, but not for all. Certain requirements on the type and cotype of the spaces involved are necessary for this comparability, henceforth referred to as the “RMF-property”. It was shown that other objects and parameters appearing in the definition, such as the domain of the functions and the exponent p of the norm, make no difference to this. After a short introduction to randomized norms and geometry in Banach spaces, we study the Rademacher maximal function on Euclidean spaces. The requirements on the type and cotype are considered, providing examples of spaces without RMF. L^p-spaces are shown to have RMF not only for p greater than or equal to 2 (when it is trivial) but also for 1 < p < 2. A dyadic version of Carleson's embedding theorem is proven for scalar- and operator-valued functions.
As the analysis with dyadic cubes can be generalized to filtrations on sigma-finite measure spaces, we consider the Rademacher maximal function in this case as well. It turns out that the RMF-property is independent of the filtration and the underlying measure space and that it is enough to consider very simple ones known as Haar filtrations. Scalar- and operator-valued analogues of Carleson's embedding theorem are also provided. With the RMF-property proven independent of the underlying measure space, we can use probabilistic notions and formulate it for martingales. Following a similar result for UMD-spaces, a weak type inequality is shown to be (necessary and) sufficient for the RMF-property. The RMF-property is also studied using concave functions giving yet another proof of its independence from various parameters.
Abstract:
The doctoral thesis identified connections between circadian rhythm disruptions and health problems. Sleep debt, jet lag, shift work, as well as transitions into and out of daylight saving time, may lead to circadian rhythm disruptions. A disturbed circadian rhythm causes sleep deprivation and lowered mood, and these effects may lead to higher accident rates and trigger mental illnesses. Circadian clock genes are involved in the regulation of the cell cycle and metabolism, and thus unstable circadian rhythmicity may also lead to cancer development. In publications I-III it was explored how transitions into and out of daylight saving time impact the sleep efficiency and the rest-activity cycles of healthy individuals. It was also explored whether the effect of the transition differs in fall as compared to spring, and whether there are subgroup-specific differences in the adjustment to transitions into and out of daylight saving time. The healthy participants of studies I-III used actigraphs before and after the transitions and filled in the morningness-eveningness and seasonal pattern assessment questionnaires. In publication IV the incidence of hospital-treated accidents and manic episodes was explored two weeks before and two weeks after the transitions into and out of daylight saving time in the years 1987-2003. In publication V the relationship between circadian rhythm disruption and the prevalence of Non-Hodgkin lymphoma was studied. Study V consisted of all working-aged Finns who participated in the national population census in 1970. For our study, all the cancers diagnosed during the years 1971-1995 were extracted from the Finnish Cancer Register and linked with the 1970 census files. In studies I-III it was noticed that transitions into and out of daylight saving time disturb the sleep-wake cycle and the sleep efficiency of the healthy participants.
We also noticed that short sleepers were more sensitive than long sleepers to sudden changes in the circadian rhythm. Our results further indicated that adaptation to changes in the circadian rhythm may be sex-, age- and chronotype-specific. In study IV, no significant increase in the occurrence of hospital-treated accidents or manic episodes was observed; however, interesting observations were made about the seasonal fluctuation of the occurrence rates of accidents and manic episodes. Study V revealed that there might be a close relationship between circadian rhythm disruption and cancer: the prevalence of non-Hodgkin lymphoma was highest among night workers. Together, the five publications included in this thesis indicate that disturbed circadian rhythms may have adverse effects on health. Disturbed circadian rhythms decrease the quality of sleep and weaken the sleep-wake cycle, and continuous circadian rhythm disruption may also predispose individuals to cancer. Since circadian rhythm disruptions are common in modern society, they may have a considerable impact on public health. It is therefore important to continue circadian rhythm research so that better prevention and treatment methods can be developed. Keywords: circadian rhythm, daylight saving time, manic episodes, accidents, non-Hodgkin lymphoma
Abstract:
Glaucoma, an optic neuropathy with excavation of the optic nerve head and a corresponding visual field defect, is one of the leading causes of blindness worldwide. Visual disability can, however, often be avoided or delayed if the disease is diagnosed at an early stage; recognising the risk factors for the development and progression of glaucoma may therefore prevent further damage. The purpose of the present study was to evaluate factors associated with visual disability caused by glaucoma and the genetic features of two risk factors: exfoliation syndrome (ES) and a positive family history of glaucoma. The study material consisted of three groups: 1) deceased glaucoma patients from the Ekenäs practice, 2) glaucoma families from the Ekenäs region, and 3) population-based families with and without exfoliation syndrome from Kökar Island. For the retrospective study, 106 patients with open-angle glaucoma (OAG) were identified. At the last visit, 17 patients were visually impaired. Glaucoma-induced blindness was found in one or both eyes in 16 patients and in both eyes in six patients. The cumulative incidence of glaucoma-caused blindness in one eye was 6% at 5 years, 9% at 10 years, and 15% at 15 years from the initiation of treatment. The factors associated with blindness caused by glaucoma were an advanced stage of glaucoma at diagnosis, fluctuation in intraocular pressure during treatment, the presence of exfoliation syndrome, and poor patient compliance. A cross-sectional population-based study was performed in 1960-1962 on Kökar Island, and the same population was followed until 2002. In total, 965 subjects (530 over 50 years of age) were examined at least once. The prevalence of exfoliation syndrome was 18% among subjects older than 50 years. Seventy-five of all 78 ES-positive subjects belonged to the same extended pedigree.
According to the segregation and family analysis, exfoliation syndrome appeared to be inherited as an autosomal dominant trait with reduced penetrance. Penetrance was lower in males, but the risk of glaucoma was higher in males than in females. To find the gene or genes associated with exfoliation syndrome, a genome-wide scan was performed for 64 members (28 ES-affected and 36 controls) of the Kökar pedigree. A promising result was found: the highest two-point LOD score was 3.45 (θ = 0.04) on chromosome 18q12.1-21.33. The presence of mutations in the glaucoma genes TIGR/MYOC (myocilin) and OPTN (optineurin) was analysed in eight glaucoma families from the Ekenäs region. An inheritance pattern resembling the autosomal dominant mode was detected in all of these families. Primary open-angle glaucoma or exfoliation glaucoma was found in 35% of the 136 family members, and a further 28% were suspected to have glaucoma. No mutations were detected in these families.