16 results for DYNAMICAL REALIZATIONS
Abstract:
Transport plays an important role in the distribution of long-lived gases such as ozone and water vapour in the atmosphere. Understanding the observed variability in these gases, as well as predicting their future changes, therefore depends on our knowledge of the relevant atmospheric dynamics. This dissertation studies dynamical processes in the stratosphere and upper troposphere that influence the distribution of ozone and water vapour in the atmosphere. Planetary waves originating in the troposphere drive the stratospheric circulation. They influence both the meridional transport of substances and the parameters of the polar vortices. In turn, temperatures inside the polar vortices influence the abundance of polar stratospheric clouds (PSCs) and therefore the chemical destruction of ozone. Wave forcing of the stratospheric circulation is not uniform during winter. The November-December averaged stratospheric eddy heat flux shows a significant anticorrelation with the January-February averaged eddy heat flux in the midlatitude stratosphere and troposphere. These intraseasonal variations are attributable to internal stratospheric vacillations. In the period 1979-2002, the wave forcing exhibited a negative trend confined to the second half of winter. In the period 1958-2002, the area, strength, and longevity of the Arctic polar vortices exhibit no significant long-term changes, while the area with temperatures below the threshold temperature for PSC formation shows a statistically significant increase. However, the Arctic vortex parameters show significant decadal changes, which are mirrored in the ozone variability. Monthly ozone tendencies in the Northern Hemisphere show significant correlations (|r| = 0.7) with proxies of the stratospheric circulation.
In the Antarctic, the springtime vortex in the lower stratosphere shows statistically significant trends in temperature, longevity, and strength (but not in area) over the period 1979-2001. Analysis of the vertical distributions of ozone and water vapour in the Arctic UTLS shows that layering below and above the tropopause is often associated with poleward Rossby wave breaking. These observations, together with calculations of cross-tropopause fluxes, emphasize the importance of poleward Rossby wave breaking for stratosphere-troposphere exchange in the Arctic.
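The reported intraseasonal anticorrelation can be illustrated with a minimal sketch: pair each winter's November-December mean eddy heat flux with the following January-February mean and correlate across winters. The data layout and function name here are hypothetical, showing only the form of the calculation, not the thesis's actual analysis.

```python
import numpy as np

def seasonal_anticorrelation(monthly_flux, years):
    """Correlate the Nov-Dec mean eddy heat flux with the following
    Jan-Feb mean, one pair per winter.

    monthly_flux: dict mapping (year, month) -> monthly-mean eddy heat flux
    years: years marking the Nov-Dec half of each winter
    """
    early, late = [], []
    for y in years:
        early.append(np.mean([monthly_flux[(y, 11)], monthly_flux[(y, 12)]]))
        late.append(np.mean([monthly_flux[(y + 1, 1)], monthly_flux[(y + 1, 2)]]))
    # Pearson correlation across winters; a negative value reflects
    # the early-winter / late-winter anticorrelation described above.
    return np.corrcoef(early, late)[0, 1]
```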
Abstract:
In humans, who lack uricase, the final oxidation product of purine catabolism is uric acid (UA). The prevalence of hyperuricemia has been increasing around the world, accompanied by a rapid rise in obesity and diabetes. Since hyperuricemia was first described in association with hyperglycemia and hypertension by Kylin in 1923, there has been growing interest in the association between elevated UA and the other metabolic abnormalities of hyperglycemia, abdominal obesity, dyslipidemia, and hypertension. The direction of causality between hyperuricemia and metabolic disorders, however, is uncertain, and the association of UA with metabolic abnormalities still needs to be delineated in population samples. Our overall aims were to study the prevalence of hyperuricemia and the metabolic factors clustering with it, to explore the dynamical changes in blood UA levels with the deterioration of glucose metabolism, and to estimate the predictive capability of UA for the development of diabetes. Four population-based surveys for diabetes and other non-communicable diseases were conducted in 1987, 1992, and 1998 in Mauritius, and in 2001-2002 in Qingdao, China. The Qingdao study comprised 1 288 Chinese men and 2 344 women aged 20-74, and the Mauritius study consisted of 3 784 Mauritian Indian and Mauritian Creole men and 4 442 women aged 25-74. In Mauritius, re-examinations were conducted in 1992 and/or 1998, using the same study protocol, for 1 941 men (1 409 Indians and 532 Creoles) and 2 318 non-pregnant women (1 645 Indians and 673 Creoles) who were free of diabetes, cardiovascular diseases, and gout at the baseline examinations in 1987 or 1992. A questionnaire was used to collect demographic details, and physical examinations and standard 75 g oral glucose tolerance tests were performed in all cohorts. Fasting blood UA and lipid profiles were also determined.
The age-standardized prevalence of hyperuricemia (defined as fasting serum UA > 420 μmol/l in men and > 360 μmol/l in women) in Chinese adults aged 20-74 living in Qingdao was 25.3%, and that of gout was 0.36%. Hyperuricemia was more prevalent in men than in women. A one-standard-deviation increase in UA concentration was associated with the clustering of metabolic risk factors in both men and women in all three ethnic groups. Waist circumference, body mass index, and serum triglycerides appeared to be independently associated with hyperuricemia in both sexes and in all ethnic groups except Chinese women, in whom triglycerides, high-density lipoprotein cholesterol, and total cholesterol were associated with hyperuricemia. In mainland Chinese, serum UA increased with increasing fasting plasma glucose up to 7.0 mmol/l but decreased significantly thereafter, and an inverse relationship between 2-h plasma glucose and serum UA appeared when 2-h plasma glucose was higher than 8.0 mmol/l. In the prospective study in Mauritius, 337 (17.4%) men and 379 (16.4%) women developed diabetes during the follow-up. Elevated baseline UA levels were associated with a 1.14-fold increase in the risk of incident diabetes in Indian men and a 1.37-fold increase in Creole men, but no significant risk was observed in women. In conclusion, the prevalence of hyperuricemia was high in Chinese in Qingdao; blood UA was associated with the clustering of metabolic risk factors in Mauritian Indians, Mauritian Creoles, and Chinese living in Qingdao; and a high baseline UA level independently predicted the development of diabetes in Mauritian men. The clinical use of UA as a marker of hyperglycemia and other metabolic disorders needs to be studied further. Keywords: Uric acid, Hyperuricemia, Risk factors, Type 2 Diabetes, Incidence, Mauritius, Chinese
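The prevalence calculation with sex-specific cutoffs, and the "one standard deviation increase in UA" scoring used for the risk-factor clustering analysis, can be sketched as follows. The cutoffs are those stated in the abstract; the function names and data layout are hypothetical.

```python
import numpy as np

# Sex-specific cutoffs from the study definition:
# UA > 420 umol/l (men), UA > 360 umol/l (women)
CUTOFF = {"M": 420.0, "F": 360.0}

def hyperuricemia_prevalence(ua, sex):
    """Crude prevalence of hyperuricemia given serum UA (umol/l)
    and sex codes ('M'/'F').  (The study reports an age-standardized
    figure; this sketch shows only the crude calculation.)"""
    flags = np.array([u > CUTOFF[s] for u, s in zip(ua, sex)])
    return flags.mean()

def per_sd_score(ua):
    """Express UA in standard-deviation units, as used for the
    'one SD increase' association with risk-factor clustering."""
    ua = np.asarray(ua, dtype=float)
    return (ua - ua.mean()) / ua.std()
```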
Abstract:
Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. Merging these two complementary aspects of nature, the quantum and gravity, is one of the greatest goals of modern fundamental physics; achieving it would help us understand the short-distance structure of spacetime, shedding light on the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of Nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, troublesome divergences to which our well-developed renormalization methods for quantum field theories do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this setting, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results, to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem; the final solution remains a challenge for the future.
Abstract:
The problem of recovering information from measurement data has been studied for a long time. Early methods were mostly empirical, but by the late 1960s Backus and Gilbert had begun developing mathematical methods for the interpretation of geophysical data. The problem of recovering information about a physical phenomenon from measurement data is an inverse problem. Throughout this work, the statistical inversion method is used to obtain a solution. Assuming that the measurement vector is a realization of fractional Brownian motion, the goal is to retrieve the amplitude and the Hurst parameter. We prove that, under some conditions, the solution of the discretized problem coincides with the solution of the corresponding continuous problem as the number of observations tends to infinity. Measurement data are usually noisy, and we assume the data to be the sum of two vectors: the trend and the noise. Both vectors are taken to be realizations of fractional Brownian motions, and the goal is to retrieve their parameters using the statistical inversion method. We prove partial uniqueness of the solution. Moreover, with the support of numerical simulations, we show that in certain cases the solution is reliable and the reconstruction of the trend vector is quite accurate.
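A minimal sketch of the forward model and a naive parameter estimate, assuming the exact fBm covariance C(s, t) = (a²/2)(s^{2H} + t^{2H} − |s−t|^{2H}) and a simple variogram regression in place of the thesis's statistical inversion method:

```python
import numpy as np

def fbm_sample(n, hurst, amplitude=1.0, seed=0):
    """Draw one realization of fractional Brownian motion on t = 1..n
    via the Cholesky factor of its covariance matrix
    C(s, t) = (a^2/2) * (s^{2H} + t^{2H} - |s - t|^{2H})."""
    t = np.arange(1, n + 1, dtype=float)
    s, tt = np.meshgrid(t, t)
    cov = 0.5 * amplitude**2 * (
        s**(2 * hurst) + tt**(2 * hurst) - np.abs(s - tt)**(2 * hurst)
    )
    rng = np.random.default_rng(seed)
    return np.linalg.cholesky(cov) @ rng.standard_normal(n)

def estimate_hurst(x, max_lag=20):
    """Naive Hurst estimate from the increment scaling
    E|X_{t+k} - X_t|^2 ~ k^{2H}, by log-log regression over lags."""
    lags = np.arange(1, max_lag + 1)
    v = [np.mean((x[k:] - x[:-k])**2) for k in lags]
    slope, _ = np.polyfit(np.log(lags), np.log(v), 1)
    return slope / 2.0
```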
Abstract:
In the course of my research for my thesis, The Q Gospel and Psychohistory, I moved on from accounts of the Cynics' ideals to psychohistorical explanations. Studying the texts dealing with the Cynics and the Q Gospel, I was struck by the fact that these texts portrayed people living in greater poverty than they had to. I paid particular attention to the fact that the Q Gospel was born in traumatising, warlike circumstances. Psychiatric traumatology helped me understand the Q Gospel and other ancient documents through historical approaches in a way that complies with modern behavioural science. Even though I found some answers to the questions I had posed in my research, the main result of my work is the justification of the question: is it important to ask whether there is a connection between the ethos expressed in the religious language of the Q Gospel and the predominantly war-related life experiences typical of Palestine at the time? As a number of studies have convincingly shown, traumatic events contribute to the development of psychotic experiences. I approached the problematic nature, significance, and complexity of the ideal of poverty and this warlike environment by clarifying the history of psychohistorical literary research and the interpretative contexts associated with Sigmund Freud, Jacques Lacan, and Melanie Klein. It is justifiable to inquire into abnormal mentality, but there is no reliable path back from the abnormal mentality described in a particular text to a single causal factor. The popular research tendency based on the Oedipus complex is just as controversial as the Oedipus complex itself. The sociological frameworks concerning moral panics and political paranoia about outer and inner dangers fit quite well with the construction of the Q Gospel. Jerrold M. Post, M.D., Professor of Psychiatry, Political Psychology and International Affairs at George Washington University, and founder and director of the Center for the Analysis of Personality and Political Behavior for the Central Intelligence Agency, has focused on the role played by charisma in attracting followers and has detailed the psychological styles of a "charismatic" leader. His books include Political Paranoia and Leaders and Their Followers in a Dangerous World: The Psychology of Political Behavior, among others. His psychoanalytic vocabulary was useful for my understanding of the minds and motivations involved in the Q Gospel's formation. The Q sect began to live in a predestined future, the reality and safety of this world having collapsed in both their experience and their fantasies. The deep and clear-cut divisions into good and evil expressed in the Q Gospel reveal the powerful nature of destructive impulses, envy, and overwhelming anxiety. The people responsible for the Q Gospel's origination tried to mount an ascetic defense against anxiety, denying their own needs and focusing their efforts on another objective (God's Kingdom) and on a regressive, submissive earlier phase of development (a child's carelessness). This spiritual process was primarily an ecclesiastic or group-dynamical tactic to support the power of group leaders.
Abstract:
Economic and Monetary Union can be characterised as a complicated set of legislation and institutions governing monetary and fiscal responsibilities. Fiscal responsibility is to be guided by the Stability and Growth Pact, which sets rules for fiscal policy and makes discretionary fiscal policy virtually impossible. To analyse the effects of the fiscal and monetary policy mix, we modified the New Keynesian framework to allow for supply-side effects of fiscal policy. We show that defining a supply-side channel for fiscal policy through an endogenous output gap changes the stabilising properties of monetary policy rules. The stability conditions are affected by fiscal policy, so that the dichotomy between active (passive) monetary policy and passive (active) fiscal policy as stabilising regimes does not hold, and an active monetary/active fiscal policy regime can be consistent with dynamical stability of the economy. We show that taking supply-side effects into account yields more persistent inflation and output reactions. We also show that the dichotomy fails for a variety of fiscal policy rules based on government debt and the budget deficit, using the tax smoothing hypothesis and formulating the tax rules as difference equations. A debt rule with active monetary policy results in indeterminacy, while a deficit rule produces a determinate solution with active monetary policy, even when fiscal policy is also active. Combining the fiscal requirements in a single rule results in cyclical responses to shocks; the amplitude of the cycle is larger the more weight is placed on debt relative to the deficit. Combining optimised monetary policy with fiscal policy rules shows that, under discretionary monetary policy, the fiscal policy regime affects the size of the inflation bias. We also show that commitment to an optimal monetary policy not only corrects the inflation bias but also increases the persistence of output reactions.
With fiscal policy rules based on the deficit, the tax smoothing hypothesis can be retained even in a sticky-price model.
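The determinacy analysis behind the active/passive dichotomy can be illustrated with the textbook New Keynesian block alone (without the supply-side fiscal channel the thesis adds): write the system in forward-looking form E[z_{t+1}] = A z_t and count eigenvalues outside the unit circle (Blanchard-Kahn). The parameter values and the function below are illustrative, not the thesis's model.

```python
import numpy as np

def determinacy(phi_pi, beta=0.99, kappa=0.1, sigma=1.0):
    """Forward-looking NK system E[z_{t+1}] = A z_t with z = (pi, x),
    Phillips curve pi = beta*E[pi'] + kappa*x, IS curve
    x = E[x'] - (1/sigma)*(i - E[pi']), and Taylor rule i = phi_pi*pi.
    Determinacy (Blanchard-Kahn) requires as many eigenvalues outside
    the unit circle as non-predetermined variables -- here, both."""
    A = np.array([
        [1.0 / beta,                      -kappa / beta],
        [(phi_pi - 1.0 / beta) / sigma,   1.0 + kappa / (beta * sigma)],
    ])
    n_unstable = np.sum(np.abs(np.linalg.eigvals(A)) > 1.0)
    return bool(n_unstable == 2)
```

With these values, an active Taylor rule (phi_pi > 1) yields determinacy, while a passive one does not, which is the baseline dichotomy the thesis shows can break once fiscal rules enter.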
Abstract:
The output of a laser is a high-frequency propagating electromagnetic field with superior coherence and brightness compared to that emitted by thermal sources. A multitude of different types of lasers exist, which also translates into large differences in the properties of their output. Moreover, the characteristics of the electromagnetic field emitted by a laser can be influenced from the outside, e.g., by injecting an external optical field or by optical feedback. In the case of free-running solitary class-B lasers, such as semiconductor and Nd:YVO4 solid-state lasers, the phase space is two-dimensional, the dynamical variables being the population inversion and the amplitude of the electromagnetic field. This two-dimensional phase space means that no complex dynamics can occur: if a class-B laser is perturbed from its steady state, the steady state is restored after a short transient. However, as discussed in part (i) of this Thesis, the static properties of class-B lasers, as well as their artificially or noise-induced dynamics around the steady state, can be studied experimentally to gain insight into laser behaviour and to determine model parameters that are not known ab initio. In this Thesis particular attention is given to the linewidth enhancement factor, which describes the coupling between the gain and the refractive index in the active material. A highly desirable attribute of an oscillator is stability, in both frequency and amplitude. Nowadays, however, instabilities in coupled lasers have become an active area of research, motivated not only by the interesting complex nonlinear dynamics but also by potential applications. In part (ii) of this Thesis the complex dynamics of unidirectionally coupled, i.e., optically injected, class-B lasers is investigated. An injected optical field increases the dimensionality of the phase space to three by turning the phase of the electromagnetic field into an important variable.
This has a radical effect on laser behaviour, since very complex dynamics, including chaos, can be found in a nonlinear system with three degrees of freedom. The output of the injected laser can be controlled in experiments by varying the injection rate and the frequency of the injected light. In this Thesis the dynamics of unidirectionally coupled semiconductor and Nd:YVO4 solid-state lasers is studied numerically and experimentally.
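The two-dimensional phase space of a solitary class-B laser, and the decay back to steady state after a perturbation, can be sketched with dimensionless rate equations for the intensity I and inversion n. The normalized form and all parameter values below are assumptions chosen for illustration, not the models used in the Thesis.

```python
def class_b_transient(p=2.0, gamma=0.01, dt=0.01, steps=200000, kick=0.5):
    """Dimensionless class-B rate equations (a common normalized form,
    assumed here):
        dI/dt = (n - 1) * I            # field intensity
        dn/dt = gamma * (p - n - n*I)  # population inversion
    Steady state: n = 1, I = p - 1.  A kick to the intensity decays
    back through damped relaxation oscillations, illustrating that the
    2-D phase space admits no complex dynamics."""
    n, I = 1.0, (p - 1.0) + kick
    for _ in range(steps):
        # forward Euler is enough for an illustrative transient
        dI = (n - 1.0) * I
        dn = gamma * (p - n - n * I)
        I += dt * dI
        n += dt * dn
    return n, I
```

Adding an injected field would promote the optical phase to a third dynamical variable, opening the door to the chaotic regimes discussed above.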
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity, at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, such as black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate in new gravitational physics. Thus, although there has so far been no direct experimental evidence contradicting general relativity - on the contrary, it has passed a variety of observational tests - it is worth asking why the effective theory of gravity should take the exact form of general relativity. If general relativity is modified, how do the predictions of the theory change? How far can we go with the changes before we are faced with contradictions with experiments? Could the changes bring new phenomena that we could measure to find hints of the form of the quantum theory of gravity? This thesis is concerned with a class of modified gravity theories called f(R) models, and in particular with the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models that do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars such as neutron stars.
Due to the nature of f(R) models, the role of the independent connection of spacetime is emphasized throughout the thesis.
Abstract:
New stars in galaxies form in dense molecular clouds of the interstellar medium. Measuring how mass is distributed in these clouds is of crucial importance for current theories of star formation, because several open issues in those theories, such as the strength of the different mechanisms regulating star formation and the origin of stellar masses, can be addressed using detailed information on cloud structure. Unfortunately, quantifying the mass distribution in molecular clouds accurately over a wide spatial and dynamical range is a fundamental problem in modern astrophysics. This thesis presents studies examining the structure of dense molecular clouds and the distribution of mass in them, with emphasis on nearby clouds that are sites of low-mass star formation. In particular, the thesis concentrates on investigating mass distributions using the near-infrared dust extinction mapping technique, in which the gas column densities towards molecular clouds are determined by examining radiation from stars that shine through the clouds. In addition, the thesis examines the feasibility of using a similar technique to derive the masses of molecular clouds in nearby external galaxies. The papers presented in this thesis demonstrate how near-infrared dust extinction mapping can be used to extract detailed information on the mass distribution in nearby molecular clouds, and such information is used to examine characteristics crucial for star formation in the clouds. Regarding the use of extinction mapping in nearby galaxies, the papers show that deriving the masses of molecular clouds with the technique suffers from strong biases. However, it is shown that some structural properties can still be examined with the technique.
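A sketch of the color-excess step at the heart of near-infrared extinction mapping: each background star's (H-K) color excess is converted into a visual extinction along its line of sight. The intrinsic color and reddening-law coefficient below are assumed, illustrative values for a standard reddening law, not the calibration used in the thesis.

```python
import numpy as np

# Assumed illustrative values: typical intrinsic (H-K) color of
# background stars, and A_V per magnitude of E(H-K) for a
# standard near-infrared reddening law.
INTRINSIC_HK = 0.15   # mag
AV_PER_EHK = 15.9     # A_V / E(H-K)

def extinction_from_colors(h_mag, k_mag):
    """Per-star visual extinction from the (H-K) color excess of
    stars shining through the cloud; averaging many such pencil-beam
    measurements yields the column-density (extinction) map."""
    ehk = (np.asarray(h_mag) - np.asarray(k_mag)) - INTRINSIC_HK
    return AV_PER_EHK * ehk
```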
Abstract:
Time-dependent backgrounds in string theory provide a natural testing ground for dynamical phenomena that cannot be reliably addressed in the usual quantum field theories and cosmology. A good, tractable example is the rolling tachyon background, which describes the decay of an unstable brane in bosonic and supersymmetric Type II string theories. In this thesis I use boundary conformal field theory, along with random matrix theory and Coulomb gas thermodynamics techniques, to study open and closed string scattering amplitudes off the decaying brane. The simplest example, the tree-level amplitude of n open strings, would give the emission rate of open strings; however, even this result has remained unknown. I organize the open string scattering computations in a more coherent manner and argue how further progress can be made.
Abstract:
This thesis deals with theoretical modeling of the electrodynamics of auroral ionospheres. The five research articles forming the main part of the thesis concentrate on two themes: the development of new data-analysis techniques and the study of inductive phenomena in ionospheric electrodynamics. The introductory part provides the background for these new results and places them in the wider context of ionospheric research. We have developed a new tool (called 1D SECS) for analysing ground-based magnetic measurements from a one-dimensional magnetometer chain (usually aligned in the north-south direction) and a new method for obtaining the ionospheric electric field from combined ground-based magnetic measurements and an estimate of the ionospheric conductance. Both methods build on earlier work but contain important new features: 1D SECS respects the spherical geometry of large-scale ionospheric electrojet systems, and thanks to an innovative implementation of the boundary conditions, the new electric-field method can also be applied in local-scale studies. These methods have been tested using both simulated and real data, and the tests indicate that they are more reliable than previous techniques. Inductive phenomena are intimately related to temporal changes in electric currents. As the large-scale ionospheric current systems change relatively slowly, on time scales of several minutes or hours, inductive effects are usually assumed to be negligible. During the past ten years, however, it has been realised that induction can play an important part in some ionospheric phenomena. In this thesis we have studied the role of inductive electric fields and currents in ionospheric electrodynamics. We have formulated the induction problem so that only ionospheric electric parameters are used in the calculations.
This is in contrast to previous studies, which require knowledge of magnetosphere-ionosphere coupling. We have applied our technique to several realistic models of typical auroral phenomena. The results indicate that inductive electric fields and currents are locally important during the most dynamical phenomena (such as the westward travelling surge, WTS), where induction may locally contribute up to 20-30% of the total ionospheric electric field and currents. Inductive phenomena also change the field-aligned currents flowing between the ionosphere and the magnetosphere, thus modifying the coupling between the two regions.
Abstract:
Earlier work has suggested that large-scale dynamos can reach and maintain equipartition field strengths on a dynamical time scale only if the magnetic helicity of the fluctuating field can be shed from the domain through open boundaries. We test this scenario in convection-driven dynamos by comparing results for open and closed boundary conditions. Three-dimensional numerical simulations of turbulent compressible convection with shear and rotation are used to study the effects of boundary conditions on the excitation and saturation level of large-scale dynamos. Open (vertical-field) and closed (perfect-conductor) boundary conditions are used for the magnetic field. The contours of shear are vertical and cross the outer surface, and are thus ideally suited for driving a shear-induced magnetic helicity flux. We find that, for a given shear and rotation rate, the growth rate of the magnetic field is larger when open boundary conditions are used. The growth rate first increases for small magnetic Reynolds number, Rm, but then levels off at an approximately constant value for intermediate Rm. For large enough Rm, a small-scale dynamo is excited, and the growth rate in this regime increases in proportion to Rm^(1/2). In the nonlinear regime, the saturation level of the energy of the mean magnetic field is independent of Rm when open boundaries are used. With perfect-conductor boundaries, the saturation level first increases as a function of Rm but then decreases in proportion to Rm^(-1) for Rm > 30, indicative of catastrophic quenching. These results suggest that the shear-induced magnetic helicity flux is efficient in alleviating catastrophic quenching when open boundaries are used. The horizontally averaged mean field is, however, still weakly decreasing as a function of Rm even for open boundaries.
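The quoted scalings (growth rate ∝ Rm^(1/2) for the small-scale dynamo, saturation energy ∝ Rm^(-1) under catastrophic quenching) are power laws, typically extracted from a set of runs by log-log regression. A minimal sketch, with a hypothetical set of (Rm, growth-rate) pairs:

```python
import numpy as np

def fit_power_law(rm, y):
    """Fit y ~ C * Rm^alpha by linear regression in log-log space,
    as one would test the Rm^(1/2) or Rm^(-1) scalings reported above."""
    alpha, log_c = np.polyfit(np.log(rm), np.log(y), 1)
    return alpha, np.exp(log_c)
```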
Abstract:
We show that the dynamical Wigner functions for noninteracting fermions and bosons can have complex singularity structures, with a number of new solutions accompanying the usual mass-shell dispersion relations. These new shell solutions are shown to encode the information of the quantum coherence between particles and antiparticles, between left- and right-moving chiral states, and/or between different flavour states. Analogously to the usual derivation of the Boltzmann equation, we impose this extended phase-space structure on the full interacting theory. This extension of the quasiparticle approximation gives rise to a self-consistent equation of motion for a density matrix that combines the quantum mechanical coherence evolution with a well-defined collision integral responsible for decoherence. Several applications of the method are given, for example to coherent particle production, electroweak baryogenesis, and the study of decoherence and thermalization.
Abstract:
According to certain arguments, computation is observer-relative, either in the sense that many physical systems implement many computations (Hilary Putnam) or in the sense that almost all physical systems implement all computations (John Searle). If sound, these arguments have a potentially devastating consequence for the computational theory of mind: if arbitrary physical systems can be seen to implement arbitrary computations, the notion of computation seems to lose all explanatory power as far as brains and minds are concerned. David Chalmers and B. Jack Copeland have attempted to counter these relativist arguments by placing certain constraints on the definition of implementation. In this thesis, I examine their proposals and find both wanting in some respects. In the course of this examination, I give a formal definition of the class of combinatorial-state automata, upon which Chalmers's account of implementation is based. I show that this definition implies two theorems (one an observation due to Curtis Brown) concerning the computational power of combinatorial-state automata, theorems which speak against founding the theory of implementation upon this formalism. Towards the end of the thesis, I sketch a definition of the implementation of Turing machines in dynamical systems and offer it as an alternative to Chalmers's and Copeland's accounts of implementation. I demonstrate that this definition does not imply Searle's claim of the universal implementation of computations. However, it may support claims that are weaker than Searle's yet still troubling to the computationalist. A kernel of relativity remains in implementation in any case, since the interpretation of physical systems seems itself to be an observer-relative matter, at least to some degree. This observation helps clarify the role the notion of computation can play in cognitive science.
Specifically, I will argue that the notion should be conceived as an instrumental rather than as a fundamental or foundational one.
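For concreteness, a toy version of a combinatorial-state automaton can be written down: a total state is a vector of substates, and a single transition updates every component as a function of the whole state vector and the current input vector. This is an illustrative formalization only, not Chalmers's exact definition.

```python
class CSA:
    """A toy combinatorial-state automaton: total states are vectors
    of substates, and the transition function maps (state vector,
    input vector) to the next state vector."""
    def __init__(self, transition):
        # transition: (state_tuple, input_tuple) -> state_tuple
        self.transition = transition

    def run(self, state, inputs):
        for inp in inputs:
            state = self.transition(state, inp)
        return state

# Example substate dynamics: a 2-bit counter driven by a 1-bit input,
# so each component's update depends on the whole state vector.
def step(state, inp):
    lo, hi = state      # two dependent substates
    (bit,) = inp        # one input element
    s = lo + bit
    return (s % 2, (hi + s // 2) % 2)
```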
Abstract:
Aerosol particles deteriorate air quality, atmospheric visibility, and our health. They affect the Earth's climate by absorbing and scattering sunlight, by forming clouds, and via several feedback mechanisms. Their net effect on the radiative balance is negative, i.e. cooling, which means that particles counteract the effect of greenhouse gases; even so, particles remain one of the poorly known pieces of the climate puzzle. Some airborne particles are natural, some anthropogenic; some enter the atmosphere in particle form, while others form by gas-to-particle conversion. Unless the sources and the dynamical processes shaping the particle population are quantified, they cannot be incorporated into climate models. The molecular-level understanding of new particle formation is still inadequate, mainly because of the lack of measurement techniques suitable for detecting the smallest particles and their precursors. This thesis has contributed to our ability to measure newly formed particles. Three new condensation particle counter applications for measuring the concentration of nanoparticles were developed. The suitability of the methods for detecting both charged and electrically neutral particles and molecular clusters as small as 1 nm in diameter was thoroughly tested in both laboratory and field conditions. It was shown that condensation particle counting has reached the size scale of individual molecules, and that besides measuring concentrations, the counters can be used to obtain size information. In addition to atmospheric research, the particle counters could find various applications in other fields, especially in nanotechnology. Using the new instruments, the first continuous time series of neutral sub-3 nm particle concentrations were measured at two field sites representing two different kinds of environments: the boreal forest and the Atlantic coastline, both of which are known hot-spots for new particle formation.
The contribution of ions to the total concentrations in this size range was estimated, and the fraction of ions was found to be usually minor, especially in boreal forest conditions. Since the ionization rate is connected to the amount of cosmic rays entering the atmosphere, the relative contribution of neutral and charged nucleation mechanisms extends beyond academic interest and links the research directly to the current climate debate.