78 results for nearly-stoichiometric LiTaO3


Relevance:

10.00%

Publisher:

Abstract:

An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. The aim of identification is therefore to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
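As a rough illustration of how such a log-linear comparison can work, here is a hypothetical sketch in which each observation set has already been reduced to a low-dimensional feature vector (the feature extraction, the tolerance and all names are placeholder assumptions; the thesis itself builds on statistical orbital inversion):

```python
import numpy as np
from scipy.spatial import cKDTree

def candidate_linkages(features, tolerance):
    """features: (n, d) array, one reduced feature vector per
    observation set. Returns index pairs that lie within `tolerance`
    of each other; each pair would still need verification by a
    full orbit fit against the original astrometry."""
    tree = cKDTree(features)              # construction: O(n log n)
    return tree.query_pairs(r=tolerance)  # range search per point

# Toy usage: five observation sets with 3-dimensional features.
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 3))
print(candidate_linkages(feats, tolerance=1.5))
```

Only the candidate pairs survive to the expensive orbit-fitting stage, which is what keeps the overall identification tractable for large surveys.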

Relevance:

10.00%

Publisher:

Abstract:

Fusion energy is a clean and safe solution to the intricate question of how to produce non-polluting and sustainable energy for a constantly growing population. The fusion process does not result in any harmful waste or greenhouse gases, since small amounts of helium are the only by-product when the hydrogen isotopes deuterium and tritium are used as fuel. Moreover, deuterium is abundant in seawater and tritium can be bred from lithium, a common metal in the Earth's crust, rendering the fuel reservoirs practically bottomless. Due to its enormous mass, the Sun has been able to use fusion as its main energy source ever since it was born, but here on Earth we must find other means to achieve the same. Inertial fusion involving powerful lasers and thermonuclear fusion employing extreme temperatures are the leading approaches; however, neither has yet produced more energy than it consumes. In thermonuclear fusion, the fuel is held inside a tokamak, a doughnut-shaped chamber with strong magnets wrapped around it. Once the fuel is heated up, it is confined with the help of these magnets, since the required temperatures (over 100 million degrees C) separate the electrons from the nuclei, forming a plasma. Once fusion reactions occur, excess binding energy is released as energetic neutrons, which are absorbed in water in order to produce steam that runs turbines. Keeping the power losses from the plasma low, thus allowing for a high number of reactions, is one challenge. Another challenge is related to the reactor materials: since the confinement of the plasma particles is not perfect, the reactor walls and structures are bombarded by particles, and material erosion and activation as well as plasma contamination are expected. In addition, the high-energy neutrons will cause radiation damage in the materials, leading, for instance, to swelling and embrittlement. In this thesis, the behaviour of materials situated in a fusion reactor was studied using molecular dynamics simulations. Simulations of processes in the next-generation fusion reactor ITER include the reactor materials beryllium, carbon and tungsten as well as the plasma hydrogen isotopes. This means that interaction models, i.e. interatomic potentials, for this complicated quaternary system are needed. The task of finding such potentials is nonetheless nearly complete, since models for the beryllium-carbon-hydrogen interactions were constructed in this thesis and, as a continuation of that work, a beryllium-tungsten model is under development. These potentials are combinable with the earlier tungsten-carbon-hydrogen ones. The potentials were used to explain the chemical sputtering of beryllium under deuterium plasma exposure. In experiments, a large fraction of the sputtered beryllium atoms were observed to be released as BeD molecules, and the simulations identified swift chemical sputtering, a mechanism previously not believed to be important in metals, as the underlying mechanism. Radiation damage in the reactor structural materials vanadium, iron and iron chromium, as well as in the wall material tungsten and the mixed alloy tungsten carbide, was also studied in this thesis. Interatomic potentials for vanadium, tungsten and iron were modified to be better suited for simulating the collision cascades that form during particle irradiation, and the potential features affecting the resulting primary damage were identified.
Including the often neglected electronic effects in the simulations was also shown to have an impact on the damage. With proper tuning of the electron-phonon interaction strength, experimentally measured quantities related to ion-beam mixing in iron could be reproduced. The damage in tungsten carbide alloys showed elemental asymmetry, as the major part of the damage consisted of carbon defects. On the other hand, modelling the damage in the iron chromium alloy, essentially representing steel, showed that small additions of chromium do not noticeably affect the primary damage in iron. Since a complete assessment of the response of a material in a future full-scale fusion reactor is not achievable using experimental techniques alone, molecular dynamics simulations are of vital help. This thesis has not only provided insight into complicated reactor processes and improved current methods, but also offered tools for further simulations. It is therefore an important step towards making fusion energy more than a future goal.
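For orientation, here is a toy version of the molecular dynamics machinery such studies rest on: one velocity-Verlet time step with a Lennard-Jones pair potential standing in for the real interaction model (the thesis uses far more complex analytic bond-order potentials for the Be-C-W-H system; units, parameters and names below are illustrative assumptions):

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for an (n, 3) position array,
    with V(r) = 4*eps*((sigma/r)**12 - (sigma/r)**6)."""
    n = len(pos)
    f = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d2 = r @ r
            s6 = (sigma**2 / d2) ** 3
            fij = 24.0 * eps * (2.0 * s6**2 - s6) / d2 * r
            f[i] += fij   # Newton's third law: equal and opposite
            f[j] -= fij
    return f

def verlet_step(pos, vel, dt=1e-3, mass=1.0):
    """One velocity-Verlet update of positions and velocities."""
    f0 = lj_forces(pos)
    pos = pos + vel * dt + 0.5 * f0 / mass * dt**2
    f1 = lj_forces(pos)
    vel = vel + 0.5 * (f0 + f1) / mass * dt
    return pos, vel
```

A production code would add neighbour lists, periodic boundaries and the electronic effects discussed above; the abstract's point is that everything hinges on the quality of the interatomic potential.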

Relevance:

10.00%

Publisher:

Abstract:

Cosmological inflation is the dominant paradigm in explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in trying to establish a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities to lower the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively high level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects depends heavily on the form of the curvaton potential. Future observations that provide more accurate information about non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
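As context for the last point, the baseline prediction of the simplest curvaton model (a quadratic potential) is a standard result in the literature, quoted here for orientation rather than taken from the thesis; r measures the curvaton's share of the energy density at its decay:

```latex
f_{\mathrm{NL}} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6},
\qquad
r = \left.\frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_\gamma}\right|_{\text{decay}}
```

Departures from the quadratic form shift this prediction, which is why more accurate non-Gaussianity measurements constrain the curvaton potential and interactions.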

Relevance:

10.00%

Publisher:

Abstract:

This work focuses on the effects of energetic particle precipitation of solar or magnetospheric origin on the polar middle atmosphere. Energetic charged particles have access to the atmosphere in the polar areas, where they are guided by the Earth's magnetic field. The particles penetrate down to altitudes of 20-100 km (stratosphere and mesosphere), ionising the ambient air. This ionisation leads to the production of odd nitrogen (NOx) and odd hydrogen (HOx) species, which take part in catalytic ozone destruction. NOx has a very long chemical lifetime during polar night conditions; therefore, NOx produced at high altitudes during polar night can be transported down to stratospheric altitudes. Particular emphasis in this work is on the combined use of space-based and ground-based observations: ozone and NO2 measurements from the GOMOS instrument on board the European Space Agency's Envisat satellite are used together with subionospheric VLF radio wave observations from ground stations. Combining the two observation techniques enabled the detection of NOx enhancements throughout the middle atmosphere, including tracking the descent of NOx enhancements of high-altitude origin down to the stratosphere. GOMOS observations of the large solar proton events (SPEs) of October-November 2003 showed the progression of the SPE-initiated NOx enhancements through the polar winter. In the upper stratosphere, nighttime NO2 increased by an order of magnitude, and the effect was observed to last for several weeks after the SPEs. Ozone decreases of up to 60% from pre-SPE values were observed in the upper stratosphere nearly a month after the events. Over several weeks, the GOMOS observations showed the gradual descent of the NOx enhancements to lower altitudes. Measurements from the years 2002-2006 were used to study polar winter NOx increases and their connection to energetic particle precipitation. NOx enhancements were found to correlate well with both increased high-energy particle precipitation and increased geomagnetic activity. The average wintertime polar NOx was found to have a nearly linear relationship with the average wintertime geomagnetic activity. The results of this thesis work show how important energetic particle precipitation from outside the atmosphere is as a source of NOx in the middle atmosphere, and thus how important it is to the chemical balance of the atmosphere.

Relevance:

10.00%

Publisher:

Abstract:

The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances into the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman-alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, in the form of delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high-precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during a fraction of the first second after the Big Bang.

This thesis is concerned with high-precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in a study of an extended cosmological model with correlated isocurvature fluctuations. Currently available data are shown to indicate that future experiments are needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
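A minimal sketch of the destriping idea mentioned above, under the simplifying assumption of white noise plus one unknown constant offset per chunk of time-ordered data; real pipelines solve a large generalized least-squares system, and all names below are toy placeholders:

```python
import numpy as np

def destripe(tod, pix, npix, nchunk, niter=20):
    """tod: 1-D time-ordered data; pix: sky-pixel index per sample.
    Alternately estimates the binned sky map and one baseline offset
    per chunk until the two estimates are mutually consistent."""
    base = np.zeros(nchunk)
    chunks = np.array_split(np.arange(len(tod)), nchunk)
    for _ in range(niter):
        # Remove current baselines, then bin the residual into a map.
        clean = tod.copy()
        for b, idx in zip(base, chunks):
            clean[idx] -= b
        hits = np.bincount(pix, minlength=npix)
        sky = np.bincount(pix, weights=clean, minlength=npix)
        sky = sky / np.maximum(hits, 1)
        # Re-estimate each baseline as the mean map-subtracted residual.
        resid = tod - sky[pix]
        base = np.array([resid[idx].mean() for idx in chunks])
    return sky, base
```

The destriped map is then sky, with the recovered offsets base available for residual-noise estimation.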

Relevance:

10.00%

Publisher:

Abstract:

This ethnographic study investigates encounters between volunteers and older people at the Kerava Municipal Health Centre inpatient ward for chronic care. Volunteer activities have been under development there, in cooperation with the Voluntary Work Center (Talkoorengas), since the start of the 1990s. When my research began in 2003, nine volunteers came to the ward, either on set days each week or according to their own timetables. The volunteers ranged in age from 54 to 78 years; with one exception, all of them were on pension, and nearly all had been volunteers for more than ten years. My study is research on ageing, the focal point being older people, whether volunteers or those receiving assistance. The research questions are: How is volunteer work implemented in the daily routines of the ward? How is interaction created in encounters between the older people and the volunteers? What meanings does volunteer work create for the older people and the volunteers? The core material of my research is observation material, supplemented by interviews, documents and photographs. The materials have been analysed using theme analysis and ethnomethodological conversation analysis. In presenting the research findings, I have structured the materials into three main chapters: space and time; hands and touch; and words and tones. The chapter on space and time examines time and space paths, privacy and publicness, and celebrations as part of daily life. The volunteers open and create social arenas for the older people through chatting and singing together, celebrations in the dayroom or poetry readings at the bedside. The supporting theme of the chapter on hands and touch is bodily closeness in care and the associated concrete physical presence. The chapter highlights the importance of everyday routines, such as meals and rituals, as elements that bring security. Stimuli in daily life, such as handicrafts in groups, pass the time but also give older people the experience of meaningful activity and bring back positive memories of their own lives. The chapter on words and tones focuses on social interaction and identity. The volunteers' identity is constructed as that of a helper and caregiver. The older people's identity is constructed as that of a care recipient, which in different situations is shaped into, among others, the identity of one who listens, remembers, does not remember, defends, composes poetry or is dying. The cornerstones of voluntary social care are participation, activity, trust and presence. Successful volunteer work calls for mutual trust between the older people, the volunteers and the health care personnel, and for clear agreements on questions of responsibility, the status of volunteers and their role alongside professional personnel. This study indicates that volunteer work is a meaningful resource in work with older people.

Relevance:

10.00%

Publisher:

Abstract:

The purpose of this research is to examine whether short-term communication training can improve the communication capacity of working communities, and what the prerequisites for the creation of such capacity are. The subjects of this research were short-term communication trainings aimed at the managerial and expert levels of enterprises and communities. The research endeavours to find out how communication trainings with an impact should be devised and implemented, and what this requires from the client and the provider of the training service. The research data consists mostly of quantitative feedback collected at the end of a training day, as well as delayed interviews. The evaluations are based on a stakeholder approach; those concerned were the participants in the trainings, the clients having commissioned the trainings, and the communication trainers. The principal method of the qualitative analysis is data-driven content analysis. Two research instruments have been constructed for the analysis and the presentation of the results: an evaluation circle for the purposes of holistic evaluation, and a development matrix for structuring an effective training. The core concept of the matrix is the carrier wave effect, which is needed to carry the abstractions from the training into concrete functions in everyday life. The relevance of the results has been tested in a pilot organization. The immediate assessments and the delayed evaluations gave very different pictures of the trainings. The immediate feedback was of nearly commendable level, but the effects carried forward into the everyday situations of the working community were small, and the learning was rarely applied in practice. A training session that receives good feedback does not automatically result in the development of individual competence, let alone that of the community. The results show that even short-term communication training can promote communication competence that eventually changes the working culture on an organizational level, provided that the training is designed as a process and that the connections to the participants' work are ensured. It is essential that all eight elements of the carrier wave effect are taken into account, and the entire purchaser-provider process must function without omitting the contribution of the participants themselves. The research illustrates the so-called bow-tie model of effective communication training based on the carrier wave effect. Testing the results in pilot trainings showed that a rather small change in the training approach may have a significant effect on the outcome of the training as well as on the effects that are carried on into the working community. The evaluation circle proved to be a useful tool, which can be used in planning, executing and evaluating training in practice. The development matrix works as a tool for those producing the training service, those using the service and those deciding on its purchase, in planning and evaluating training that sustainably improves communication capacity. Thus the evaluation circle also works to support and ensure the long-term effects of short-term trainings. In addition to communication trainings, the tools developed in this research are usable in many situations where an organization seeks to improve its operations and profitability through training.

Relevance:

10.00%

Publisher:

Abstract:

The study seeks to find out whether the real burden of personal taxation has increased or decreased. In order to determine this, we investigate how the same real income has been taxed in different years. Whenever the taxes for the same real income in a given year are higher than in the base year, the real tax burden has increased; if they are lower, the real tax burden has decreased. The study thus seeks to estimate how changes in the tax regulations affect the real tax burden. It should be kept in mind that the progression in the central government income tax schedule ensures that a real change in income will bring about a change in the tax ratio. Inflation will likewise increase the real tax burden if the tax schedules are kept nominally the same. In the calculations of the study it is assumed that real income remains constant, so that we get an unbiased measure of the effects of governmental actions in real terms. The main factors influencing the amount of income taxes an individual must pay are as follows:

- Gross income (income subject to central and local government taxes).
- Deductions from gross income and taxes calculated according to tax schedules.
- The central government income tax schedule (progressive income taxation).
- The rates for local taxes and for social security payments (proportional taxation).

In the study we investigate how much a certain group of taxpayers would have paid in taxes according to the actual tax regulations prevailing in different years if their income had been kept constant in real terms. Other factors affecting tax liability are kept strictly unchanged (as constants). The resulting taxes, expressed in fixed prices, are then compared to the taxes levied in the base year (hypothetical taxation). The question we are addressing is thus how much taxes a certain group of taxpayers with the same socioeconomic characteristics would have paid on the same real income according to the actual tax regulations prevailing in different years. This has been suggested as the main way to measure real changes in taxation, although there are several alternative measures with essentially the same aim. Next, an aggregate indicator of changes in income tax rates is constructed. It is designed to show how much the taxation of income has increased or decreased from one year to the next on average. The main question remains: how should aggregation over all income levels be performed? In order to determine the average real changes in the tax scales, the difference functions (differences between the actual and hypothetical taxation functions) were aggregated using taxable income as weights. Besides the difference functions, the relative changes in real taxes can be used as indicators of change. In this case the ratio between the taxes computed according to the new and the old situation indicates whether the taxation has become heavier or lighter. The relative changes in tax scales can be described in a way similar to that used in describing the cost of living, or by means of price indices. For example, we can use Laspeyres' price index formula for computing the ratio between taxes determined by the new tax scales and the old tax scales. The formula answers the question: how much more or less will be paid in taxes according to the new tax scales than according to the old ones when the real income situation corresponds to the old situation? In real terms, the central government tax burden experienced a steady decline from its high post-war level up until the mid-1950s.
The real tax burden then drifted upwards until the mid-1970s; the real level of taxation in 1975 was twice that of 1961. In the 1980s there was a steady phase due to the inflation corrections of tax schedules. In 1989 the tax schedule was lowered drastically, and from the mid-1990s onwards changes in the tax schedules have decreased the real tax burden significantly. Local tax rates have risen continuously, from 10 percent in 1948 to nearly 19 percent in 2008. Deductions have lowered the real tax burden especially in recent years. The aggregate figures indicate how the tax ratio for the same real income has changed over the years according to the prevailing tax regulations. We call the tax ratio calculated in this manner the real income tax ratio; a change in it depicts an increase or decrease in the real tax burden. The real income tax ratio declined for some years after the war. From the beginning of the 1960s to the mid-1970s it nearly doubled, and from the mid-1990s it has fallen by about 35%.
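One way to write out the Laspeyres-type tax index described above (the notation here is mine, not the study's): with T_new and T_old the tax functions of the two years compared, y_i the real taxable incomes of the base-year taxpayers and w_i their taxable-income weights,

```latex
L = \frac{\sum_i w_i \, T_{\mathrm{new}}(y_i)}{\sum_i w_i \, T_{\mathrm{old}}(y_i)}
```

A value L > 1 indicates that taxation of the same real income has become heavier, and L < 1 that it has become lighter.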

Relevance:

10.00%

Publisher:

Abstract:

Part I: Parkinson's disease is a slowly progressive neurodegenerative disorder in which particularly the dopaminergic neurons of the substantia nigra pars compacta degenerate and die. Current conventional treatment is based on alleviating symptoms, but it has no effect on the progression of the disease. Gene therapy research has focused on the possibility of restoring the lost brain function by at least two means: substitution of critical enzymes needed for the synthesis of dopamine, and slowing down the progression of the disease by supporting the functions of the remaining nigral dopaminergic neurons with neurotrophic factors. The striatal levels of enzymes such as tyrosine hydroxylase, dopa decarboxylase and GTP-CH1 decrease as the disease progresses. By replacing one or all of these enzymes, dopamine levels in the striatum may be restored to normal, and the behavioural impairments caused by the disease may be ameliorated, especially in the later stages of the disease. The neurotrophic factors glial cell line-derived neurotrophic factor (GDNF) and neurturin have been shown to protect and restore the functions of dopaminergic cell somas and terminals as well as to improve behaviour in animal lesion models. This therapy may be best suited to the early stages of the disease, when there are more dopaminergic neurons for the neurotrophic factors to reach. Viral vector-mediated gene transfer provides a tool to deliver proteins with complex structures into specific brain locations and provides long-term protein over-expression. Part II: The aim of our study was to investigate the effects of two orally dosed COMT inhibitors, entacapone (10 and 30 mg/kg) and tolcapone (10 and 30 mg/kg), with subsequent administration of a peripheral dopa decarboxylase inhibitor, carbidopa (30 mg/kg), and L-dopa (30 mg/kg), on dopamine and its metabolite levels in the dorsal striatum and nucleus accumbens of freely moving rats, using dual-probe in vivo microdialysis. Earlier, similarly designed studies have only been conducted in the dorsal striatum. We also confirmed the results of earlier ex vivo studies regarding the effects of intraperitoneally dosed tolcapone (30 mg/kg) and entacapone (30 mg/kg) on striatal and hepatic COMT activity. The results obtained from the dorsal striatum were generally in line with earlier studies: tolcapone tended to increase dopamine and DOPAC levels and decrease HVA levels, whereas entacapone tended to keep striatal dopamine and HVA levels elevated longer than in controls and also tended to elevate DOPAC levels. Surprisingly, dopamine levels in the nucleus accumbens were not elevated after either dose of entacapone or tolcapone. Accumbal DOPAC levels, especially in the tolcapone 30 mg/kg group, were elevated nearly to the same extent as in the dorsal striatum. Entacapone 10 mg/kg elevated accumbal HVA levels more than the 30 mg/kg dose did, and the effect was more pronounced in the nucleus accumbens than in the dorsal striatum. This suggests that entacapone 30 mg/kg has minor central effects. Our ex vivo results from the dorsal striatum likewise suggest that entacapone 30 mg/kg has minor and transient central effects, even though central HVA levels were not suppressed below those of the control group in either brain area in the microdialysis study. Both entacapone and tolcapone suppressed hepatic COMT activity more than striatal COMT activity, and tolcapone was more effective than entacapone in the dorsal striatum. The differences in dopamine and metabolite levels between the dorsal striatum and the nucleus accumbens may be due to the different properties of the two brain areas.

Relevance:

10.00%

Publisher:

Abstract:

A straightforward computation of the list of words (the 'tail words' of the list) that are distributionally most similar to a given word (the 'head word' of the list) leads to the question: how semantically similar to the head word are the tail words, that is, how similar are their meanings to its meaning? And can we do better? The experiment was done on the nearly 18,000 most frequent nouns in a Finnish newsgroup corpus. These nouns are considered distributionally similar to the extent that they occur in the same direct dependency relations with the same nouns, adjectives and verbs. The similarity of their computational representations is quantified with the information radius. The semantic classification of head-tail pairs is intuitive: some tail words seem semantically similar to the head word, some do not. Each such pair is also associated with a number of further distributional variables. Individually, their overlap across the semantic classes is large, but the trained classification-tree models have some success in using combinations of them to predict the semantic class. The training data consists of a random sample of 400 head-tail pairs with the tail word ranked among the 20 distributionally most similar to the head word, excluding names. The models are then tested on a random sample of another 100 such pairs. The best success rates range from 70% to 92% of the test pairs, where a success means that the model predicted my intuitive semantic class of the pair. This seems somewhat promising for using distributional similarity to capture semantically similar words. The analysis also includes a general discussion of several different similarity formulas, arranged in three groups: those that apply to sets with graded membership, those that apply to the members of a vector space, and those that apply to probability mass functions.
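For reference, a minimal sketch of the information radius (also known as the Jensen-Shannon divergence) used above to quantify the similarity of two distributional representations; the function name and the toy distributions are mine:

```python
# Information radius between two probability distributions p and q:
# IRad(p, q) = 0.5*KL(p || m) + 0.5*KL(q || m), with m = (p + q)/2.
# Here a word's distribution would range over its dependency contexts.
import numpy as np

def information_radius(p, q, eps=1e-12):
    """p, q: 1-D arrays of probabilities. Returns a value between
    0 (identical) and ln 2 (disjoint support), using natural log."""
    p = np.asarray(p, float) + eps   # smooth to avoid log(0)
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: float(np.sum(a * np.log(a / b)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy usage: two nouns' distributions over four dependency contexts.
print(information_radius([0.4, 0.3, 0.2, 0.1], [0.1, 0.2, 0.3, 0.4]))
```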

Relevance:

10.00%

Publisher:

Abstract:

Abstract (A journey through Danish literature translated into Finnish after 1945): Nearly 80 per cent of all literary translations from Danish into Finnish were done after the Second World War. These translations are obviously only a small selection of the Danish national literature, but they nevertheless capture important trends and currents in it. Based on a selection of translated works, the article offers a broad introduction to the Danish literature available in Finnish. It focuses on children's and youth literature, feminist literature, and realistic, magical and civilization-critical novels.

Relevance:

10.00%

Publisher:

Abstract:

Migraine is a common cause of chronic episodic headache, affecting 12-15% of the Caucasian population (41 million Europeans and some half a million Finns). It causes a considerable loss of quality of life to its sufferers and is linked to increased risk for a wide range of conditions, from depression to stroke. Migraine is the 19th most severe disease in terms of disability-adjusted life years, and 9th among women. It is characterized by attacks of headache accompanied by sensitivity to external stimuli lasting 4-72 hours, and in a third of cases by neurological aura symptoms, such as loss of vision, speech or muscle function. The underlying pathophysiology, including what triggers migraine attacks and why they occur in the first place, is largely unknown. The aim of this study was to identify genetic factors associated with the hereditary susceptibility to migraine, in order to gain a better understanding of migraine mechanisms. In this thesis, we report the results of genetic linkage and association analyses on a Finnish migraine patient collection as well as on migraineurs from Australia, Denmark, Germany, Iceland and the Netherlands. Altogether we studied the genetic information of nearly 7,000 migraine patients and over 50,000 population-matched controls. We also developed a new migraine analysis method, called trait component analysis, which is based on individual patient responses instead of the clinical diagnosis. Using this method, we detected a number of new genetic loci for migraine, including loci on chromosomes 17p13 (HLOD 4.65) and 10q22-q23 (female-specific HLOD 7.68) with significant evidence of linkage, along with five other loci (2p12, 8q12, 4q28-q31, 18q12-q22, and Xp22) detected with suggestive evidence of linkage. The 10q22-q23 locus was the first genetic finding in migraine to show linkage to the same locus and markers in multiple populations, with consistent detection in six different scans. Traditionally, ion channels have been thought to play a role in migraine susceptibility, but we were able to exclude any significant role for common variants in a candidate gene study of 155 ion transport genes. This was followed by the first genome-wide association study in migraine, conducted on 2,748 migraine patients and 10,747 matched controls and followed by a replication in 3,209 patients and 40,062 controls. In this study, we found results with genome-wide significance, providing targets for future genetic and functional studies. Overall, we found several promising genetic loci for migraine, providing a solid base for future studies.

Relevance:

10.00%

Publisher:

Abstract:

This dissertation is a synchronic description of adnominal person in the highly synthetic morphological system of Erzya, as attested in extensive Erzya-language written-text corpora consisting of nearly 140 publications with over 4.5 million words and over 285,000 unique lexical items. Insight for this description has been obtained from several source grammars in German, Russian, Erzya, Finnish, Estonian and Hungarian, as well as from extensive discussions with native speakers and grammarians in 1993-2010. Introductory information includes a discussion of the status of Erzya as a language, an enumeration of the phonemes generally used in the transliteration of texts, and an in-depth description of adnominal morphology. The reader is then made aware of typological and Erzya-specific work in the study of adnominal-type person. The methods of description draw upon the prerequisite information required in the development of a two-level morphological analyzer, as can be obtained in the typological description of allomorphic variation in the target language. Indication of the original author or dialect background is considered important in the attestation of linguistic phenomena, so that variation can be plotted for a synchronic description of the language. The phonological description includes the establishment of a 6-vowel, 29-consonant phoneme system for use in the transliteration of annotated texts, i.e. two phonemes more than are generally recognized, and numerous rules governing allophonic variation in the language. Erzya adnominal morphology is demonstrated to have a three-way split in stem types and a three-layer system of non-derivative affixation. The adnominal-affixation layers are broken into (a) declension (the categories of case, number and deictic marking); (b) nominal conjugation (non-verb grammatical and oblique-case items can be conjugated); and (c) clitic marking. Each layer is given statistical detail with regard to concatenability. Finally, individual subsections are dedicated to: possessive declension compatibility in the distinction of sublexica; genitive and dative-case paradigmatic defectivity in the possessive declension, where it is demonstrated to be parametrically diverse; and secondary declension, a proposed typology of 'modifiers without nouns', as compatible with adnominal person.

Relevance:

10.00%

Publisher:

Abstract:

Plexins (plxn) are receptors of semaphorins (sema), which were originally characterized as axon guidance cues. Semaphorin-plexin signalling has now been implicated in many other developmental and pathological processes. In this thesis, my first aim was to study the expression of plexins during mouse development. My second aim was to study the function of Plexin B2 in the development of the kidney. Thirdly, my objective was to elucidate the evolutionary conservation of Plexin B2 by investigating its sequence, expression and function in developing zebrafish. I show by in situ hybridisation that plexins are widely expressed also in non-neuronal tissues during mouse development. Plxnb1 and Plxnb2, for example, are expressed in the ureteric epithelium, developing glomeruli and undifferentiated metanephric mesenchyme of the developing kidney. Plexin B2-deficient (Plxnb2-/-) mice die before birth and have severe defects in the nervous system. I demonstrate that they develop morphologically normal but hypoplastic kidneys. The ureteric epithelium of Plxnb2-/- kidneys has fewer branches and a lower rate of proliferating cells, and 10% of the embryos show unilateral double ureters and kidneys. The defect in branching is intrinsic to the epithelium, as the isolated ureteric epithelium grown in vitro fails to respond to glial-cell-line-derived neurotrophic factor (Gdnf). We prove by co-immunoprecipitation that Plexin B2 interacts with the Gdnf receptor Ret. Sema4C, the Plexin B2 ligand, increases branching of the ureteric epithelium in control but not in Plxnb2-/- kidney explants. These results suggest that Sema4C-Plexin B2 signalling modulates ureteric branching in a positive manner, possibly through directly regulating the activation of Ret. I cloned the zebrafish orthologs of Plexin B2, Plexin B2a and B2b. The corresponding proteins contain the conserved domains of the B-subfamily plexins. Especially the expression pattern of plxnb2b recapitulates many aspects of the expression pattern of Plxnb2 in mouse. Plxnb2a and plxnb2b are expressed, for example, in the pectoral fins and in the midbrain-hindbrain region during zebrafish development. The nearly complete knockdown of Plexin B2a, alone or together with a 45% knockdown of Plexin B2b, did not interfere with the normal development of the zebrafish. In conclusion, my thesis reveals that plexins are broadly expressed during mouse embryogenesis. It also shows that Sema4C-Plexin B2 signalling modulates the branching of the ureteric epithelium during kidney development, perhaps through a direct interaction with Ret. Finally, I show that the sequence and expression of Plexin B2a and B2b are conserved in zebrafish. Their knockdown does not, however, result in the exencephaly phenotype of Plxnb2-/- mice.

Relevance:

10.00%

Publisher:

Abstract:

The open access (OA) model for journals is compared to the open source principle for computer software. Since the early 1990s, nearly 1,000 OA scientific journals have emerged, mostly as voluntary community efforts, although recently some professionally operating publishers have used author charges or institutional membership. This study of OA journals without author charges shows that their impact is still relatively small, but awareness of them is increasing. The average number of research articles per year is lower than for major scientific journals, but the publication times are shorter.