19 results for gravity
in Helda - Digital Repository of the University of Helsinki
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity, at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, such as black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has so far been no direct experimental evidence contradicting general relativity - on the contrary, it has passed a variety of observational tests - it is a question worth asking why the effective theory of gravity should be of the exact form of general relativity. If general relativity is modified, how do the predictions of the theory change? Furthermore, how far can we go with the changes before we are faced with contradictions with experiments? Along with the changes, could there be new phenomena that we could measure to find hints of the form of the quantum theory of gravity? This thesis is about a class of modified gravity theories called f(R) models, and in particular about the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars such as neutron stars. Due to the nature of f(R) models, the role of the independent connection of spacetime is emphasized throughout the thesis.
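For orientation (a standard textbook form, not taken from the abstract itself), f(R) models generalize the Einstein-Hilbert action by replacing the Ricci scalar R with a function f(R); in the Palatini variant emphasized above, the curvature is built from an independent connection. A minimal sketch in common conventions:

\[
  S \;=\; \frac{1}{2\kappa}\int \mathrm{d}^4x\,\sqrt{-g}\,f(R) \;+\; S_{\mathrm{m}}[g_{\mu\nu},\psi],
  \qquad \kappa = 8\pi G .
\]
% General relativity is recovered for f(R) = R - 2*Lambda; in the Palatini
% (independent-connection) formulation R = g^{mu nu} R_{mu nu}(Gamma).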
Abstract:
The description of quarks and gluons in terms of quantum chromodynamics (QCD) has been known for a long time. Nevertheless, many fundamental questions in QCD remain unanswered. This is mainly due to problems in solving the theory at low energies, where it is strongly interacting. AdS/CFT is a duality between a specific string theory and a conformal field theory. The duality provides new tools for solving the conformal field theory in the strong coupling regime. There is also some evidence that, using the duality, one can gain at least a qualitative understanding of how QCD behaves at strong coupling. In this thesis, we try to address some issues related to QCD and heavy-ion collisions by applying the duality in various ways.
Abstract:
This dissertation analyzes the interrelationship between death, the conditions of (wo)man's social being, and the notion of value as it emerges in the fiction of the American novelist Thomas Pynchon (b. 1937). Pynchon's work to date comprises six novels, V. (1963), The Crying of Lot 49 (1966), Gravity's Rainbow (1973), Vineland (1990), Mason & Dixon (1997), and Against the Day (2006), as well as several short stories. Death constitutes a central thematic in Pynchon's work, and it emerges through recurrent questions of mortality, suicide, mass destruction, sacrifice, afterlife, entropy, the relationship between the animate and the inanimate, and the limits of representation. In Pynchon, death is never a mere biological given (or event); it is always determined within a certain historical, cultural, and ideological context. Throughout his work, Pynchon questions the strict ontological separation of life and death by showing the relationship between this separation and social power. Conceptual divisions also reflect the relationship between society and its others, and death becomes that through which lines of social demarcation are articulated. Determined as a conceptual and social "other side", death in Pynchon forms a challenge to modern culture, and makes an unexpected return: the dead return to haunt the living, the inanimate and the animate fuse, and technoscientific attempts at overcoming and controlling death result in its re-emergence in mass destruction and ecological damage. The questioning of the ontological line also affects the structuration of Pynchon's prose, where the recurrent narrated and narrative desire to reach the limits of representation is openly associated with death. Textualized, death appears in Pynchon's writing as a sudden rupture within the textual functioning, when the "other side", that is, the bare materiality of the signifier, is foregrounded. In this study, Pynchon's cultural criticism and his poetics come together, and I analyze the subversive role of death in his fiction through Jean Baudrillard's genealogy of the modern notion of death in L'échange symbolique et la mort (1976). Baudrillard sees an intrinsic bond between the social repression of death in modernity and the emergence of modern political economy, and in his analysis economy and language appear as parallel systems for generating value (exchange value / sign-value). For Baudrillard, the modern notion of death as negativity in relation to the positivity of life, and the fact that death cannot be given a proper meaning, betray an antagonistic relation between death and the notion of value. As a mode of negativity (that is, non-value), death becomes a moment of rupture in relation to value-based thinking, in short, rationalism. Through this rupture emerges a form of thinking Baudrillard labels the symbolic, characterized by ambivalence and the subversion of conceptual opposites.
Abstract:
This master's thesis explores some of the most recent developments in noncommutative quantum field theory. This old theme, first suggested by Heisenberg in the late 1940s, has had a renaissance during the last decade, due to the firmly held belief that spacetime becomes noncommutative at small distances, and also due to the discovery that string theory in a background field gives rise to noncommutative field theory as an effective low-energy limit. This has led to interesting attempts to create a noncommutative standard model, a noncommutative minimal supersymmetric standard model, noncommutative gravity theories, etc. This thesis reviews themes and problems such as UV/IR mixing, charge quantization, how to deal with noncommutative symmetries, how to solve the Seiberg-Witten map, its connection to fluid mechanics, and the problem of constructing general coordinate transformations to obtain a theory of noncommutative gravity. Emphasis has been put on presenting both the group-theoretical and the string-theoretical results, so that the two can be compared.
Abstract:
Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. The merging of these two complementary aspects of nature, quantum theory and gravity, is one of the greatest goals of modern fundamental physics; its achievement would help us understand the short-distance structure of spacetime, thus shedding light on the events in the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, troublesome divergences to which the well-developed renormalization methods of ordinary quantum field theory do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this context, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results, to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem. The final solution remains a challenge for the future.
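As background (standard textbook conventions, not necessarily those used in the thesis), the simplest noncommutative spacetime is defined by a constant commutator between the coordinate operators, and field theories on it are built by replacing ordinary products of fields with the Moyal star product:

\[
  [\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu},
  \qquad
  (f \star g)(x) = f(x)\,
  \exp\!\Big(\tfrac{i}{2}\,\theta^{\mu\nu}\,
  \overleftarrow{\partial}_{\mu}\,\overrightarrow{\partial}_{\nu}\Big)\, g(x).
\]
% theta^{mu nu} is a constant antisymmetric matrix of dimension (length)^2;
% the momentum-dependent phases it introduces in loop integrals are the
% origin of the UV/IR mixing discussed above.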
Abstract:
We study a Hamiltonian describing a pendulum coupled to several anisochronous oscillators, giving a simple construction of unstable KAM tori and their stable and unstable manifolds for analytic perturbations. When the coupling takes place through an even trigonometric polynomial in the angle variables, we analytically extend the solutions of the equations of motion, order by order in the perturbation parameter, to a large neighbourhood of the real line representing time. Subsequently, we devise an asymptotic expansion for the splitting (matrix) associated with a homoclinic point. By a shift-of-contour argument, this expansion consists of contributions that are manifestly exponentially small in the limit of vanishing gravity. Hence, we infer a similar upper bound for the splitting itself. In particular, the derivation of the result does not call for a tree expansion with explicit cancellation mechanisms.
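A minimal sketch of a Hamiltonian of this type, written purely for orientation under common conventions (an assumption; the thesis's precise model and notation may differ): a pendulum with gravity parameter g coupled to N anisochronous rotators through an even trigonometric polynomial f,

\[
  H(\mathbf{A}, \boldsymbol{\alpha}, p, q)
  = \tfrac{1}{2}\,|\mathbf{A}|^{2} + \tfrac{1}{2}\,p^{2}
    + g\,(\cos q - 1)
    + \varepsilon\, f(\boldsymbol{\alpha}, q).
\]
% A, alpha: actions and angles of the rotators (anisochronous: the rotator
% frequencies depend on the actions); p, q: pendulum momentum and angle;
% epsilon: perturbation parameter; the splitting bound is exponentially
% small in the limit g -> 0.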
Abstract:
Maltose and maltotriose are the two most abundant sugars in brewer's wort, and thus brewer's yeast's ability to utilize them efficiently is of major importance in the brewing process. The increasing tendency to utilize high- and very-high-gravity worts, containing increased concentrations of maltose and maltotriose, renders the need for efficient transport of these sugars even more pronounced. Residual maltose, and especially maltotriose, is quite often present after high- and very-high-gravity fermentations. Sugar uptake capacity has been shown to be the rate-limiting factor for maltose and maltotriose utilization. The main aim of the present study was to find novel ways to improve maltose and maltotriose utilization during the main fermentation. The maltose and maltotriose uptake characteristics of several ale and lager strains were studied. Genotype determination of the genes needed for maltose and maltotriose utilization was performed. Maltose uptake inhibition studies were performed to reveal the dominant transporter types actually functioning in each of the strains. The temperature dependence of maltose transport was studied for ale and lager strains as well as for each of the single sugar transporter proteins Agt1p, Malx1p and Mtt1p. The AGT1 promoter regions of one ale and two lager strains were sequenced by chromosome walking, and the promoter elements were searched for using computational methods. The results showed that ale and lager strains predominantly use different transporter types for maltose and maltotriose uptake. The Agt1 transporter was found to be the dominant maltose/maltotriose transporter in the ale strains, whereas Malx1- and Mtt1-type transporters dominated in the lager strains. All lager strains studied were found to possess a non-functional Agt1 transporter. The ale strains were observed to be more sensitive to temperature decrease in their maltose uptake than the lager strains. Single transporters were observed to differ in their sensitivity to temperature decrease, and their temperature dependence was shown to decrease in the order Agt1 ≥ Malx1 > Mtt1. The different temperature dependence of the ale and lager strains was shown to be due to the different dominant maltose/maltotriose transporters they possess. The AGT1 promoter regions of the ale and lager strains were found to differ markedly from the corresponding regions of laboratory strains. The ale strain was found to possess an extra MAL-activator binding site compared to the lager strains. Improved maltose and maltotriose uptake capacity was obtained with a modified lager strain in which the AGT1 gene was repaired and put under the control of a strong promoter. Modified strains fermented wort faster and more completely, producing beers containing more ethanol and less residual maltose and maltotriose. Significant savings in the main fermentation time were obtained when modified strains were used: 8-20% in high-gravity wort fermentations and even 11-37% in very-high-gravity wort fermentations. These are economically significant changes and would cause a marked increase in annual output from brewhouse and fermentor facilities of the same size.
Abstract:
Objectives: To evaluate the applicability of visual feedback posturography (VFP) for the quantification of postural control, and to characterize the horizontal angular vestibulo-ocular reflex (AVOR) using a novel motorized head impulse test (MHIT). Methods: In VFP, subjects standing on a platform were instructed to move their center of gravity to symmetrically placed peripheral targets as fast and as accurately as possible. The active postural control movements were measured in healthy subjects (n = 23), and in patients with vestibular schwannoma (VS) before surgery (n = 49), one month (n = 17), and three months (n = 36) after surgery. In MHIT we recorded head and eye position during motorized head impulses (mean velocity 170°/s, acceleration 1550°/s²) in healthy subjects (n = 22), and in patients with VS before surgery (n = 38) and about four months afterwards (n = 27). The gain, asymmetry and latency in MHIT were calculated. Results: The intraclass correlation coefficient for VFP parameters during repeated tests was significant (r = 0.78-0.96; p < 0.01), although two of the four VFP parameters improved slightly over five test sessions in controls. At least one VFP parameter was abnormal pre- and postoperatively in almost half the patients, and these abnormal preoperative VFP results correlated significantly with abnormal postoperative results. The mean accuracy of postural control in patients was reduced pre- and postoperatively. A significant side difference in VFP was evident in 10% of patients. In the MHIT, the normal gain was close to unity, the asymmetry in gain was within 10%, and the latency was 3.4 ± 6.3 milliseconds (mean ± standard deviation). Ipsilateral gain or asymmetry in gain was abnormal preoperatively in 71% of patients, and abnormal in every patient after surgery. Preoperative gain (mean ± 95% confidence interval) was significantly lowered to 0.83 ± 0.08 on the ipsilateral side, compared with 0.98 ± 0.06 on the contralateral side. The ipsilateral postoperative mean gain of 0.53 ± 0.05 was significantly different from the preoperative gain. Conclusion: The VFP is a repeatable, quantitative method for assessing active postural control within individual subjects. The mean postural control in patients with VS was disturbed before and after surgery, although not severely. A side difference in postural control in the VFP was rare. The horizontal AVOR results in healthy subjects and in patients with VS, measured with the MHIT, were in agreement with published data obtained using other head impulse techniques. The MHIT is a non-invasive method which allows reliable clinical assessment of the horizontal AVOR.
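For reference, one common convention for the quantities reported above (the thesis may define them differently): the AVOR gain is the ratio of eye velocity to head velocity during the impulse, and the asymmetry compares the gains measured towards the two sides,

\[
  G = \frac{|\dot{\theta}_{\mathrm{eye}}|}{|\dot{\theta}_{\mathrm{head}}|},
  \qquad
  \text{asymmetry (\%)} =
  \frac{G_{\mathrm{contra}} - G_{\mathrm{ipsi}}}{G_{\mathrm{contra}} + G_{\mathrm{ipsi}}}\times 100 .
\]
% A normal gain is close to 1 (the eye counter-rotates to compensate for the
% head movement); a unilateral deficit lowers the gain towards the affected
% side and increases the asymmetry.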
Abstract:
Interstellar clouds are not featureless, but show quite complex internal structures of filaments and clumps when observed with high enough resolution. These structures have been generated by 1) turbulent motions driven mainly by supernovae, 2) magnetic fields acting on the ions and, through neutral-ion collisions, on the neutral gas as well, and 3) self-gravity pulling a dense clump together to form a new star. The study of cloud structure gives us information on the relative importance of each of these mechanisms, and helps us to gain a better understanding of the details of the star formation process. Interstellar dust is often used as a tracer of the interstellar gas, which forms the bulk of the interstellar matter. Some of the methods used to derive the column density are summarized in this thesis. A new method, which uses scattered light to map the column density in large fields with high spatial resolution, is introduced. This thesis also examines grain alignment with respect to the magnetic field. The aligned grains give rise to the polarization of starlight and of dust emission, thus revealing the magnetic field. The alignment mechanisms have been debated for the last half century; the strongest candidate at present is the radiative torques mechanism. In the first four papers included in this thesis, the scattered light method of column density estimation is formulated, tested in simulations, and finally used to obtain a column density map from observations. They demonstrate that the scattered light method is a very useful and reliable tool for column density estimation, and is able to provide higher resolution than the near-infrared color excess method. These two methods are complementary. The derived column density maps are also used to gain information on the dust emissivity within the observed cloud. The two final papers present simulations of polarized thermal dust emission, assuming that the alignment happens by the radiative torques mechanism. We show that radiative torques can explain the observed decline of the polarization degree towards dense cores. Furthermore, the results indicate that the dense cores themselves might not contribute significantly to the polarized signal, and hence one needs to be careful when interpreting the observations and deriving the magnetic field.
Abstract:
The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, an effect ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters with time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since the developing voids expand the faster the lower their matter density becomes, the expansion can then accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides a natural solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial for accounting for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
Abstract:
The superconducting (or cryogenic) gravimeter (SG) is based on the levitation of a superconducting sphere in a stable magnetic field created by currents in superconducting coils. Depending on frequency, it is capable of detecting gravity variations as small as 10⁻¹¹ m s⁻². For a single event, the detection threshold is higher, conservatively about 10⁻⁹ m s⁻². Due to its high sensitivity and low drift rate, the SG is eminently suitable for the study of geodynamical phenomena through their gravity signatures. I present investigations of Earth dynamics with the superconducting gravimeter GWR T020 at Metsähovi from 1994 to 2005. The history and key technical details of the installation are given. The data processing methods and the development of the local tidal model at Metsähovi are presented. The T020 is part of the worldwide GGP (Global Geodynamics Project) network, which consists of 20 working stations. The data of the T020, and of the other participating SGs, are available to the scientific community. The SG T020 has been used as a long-period seismometer to study microseismicity and the Earth's free oscillations. The annual variation, spectral distribution, amplitude and sources of microseisms at Metsähovi were presented. Free oscillations excited by three large earthquakes were analyzed: the spectra, attenuation and rotational splitting of the modes. The lowest modes of all the different oscillation types are studied, i.e. the radial mode ₀S₀, the "football mode" ₀S₂, and the toroidal mode ₀T₂. The very low level (0.01 nm s⁻¹) incessant excitation of the Earth's free oscillations was detected with the T020. The recovery of global and regional variations in gravity with the SG requires the modelling of local gravity effects, the most important of which is hydrology. The variation in the groundwater level at Metsähovi, as measured in a borehole in the fractured bedrock, correlates significantly (0.79) with gravity. The influence of local precipitation, soil moisture and snow cover is detectable in the gravity record. The gravity effect of the variation in atmospheric mass and that of the non-tidal loading by the Baltic Sea were investigated together, as sea level and air pressure are correlated. Using Green's functions it was calculated that a 1 metre uniform layer of water in the Baltic Sea increases the gravity at Metsähovi by 31 nm s⁻² and produces a vertical deformation of -11 mm. The regression coefficient for sea level is 27 nm s⁻² m⁻¹, which is 87% of the uniform model. These studies are combined with temporal height variations derived from the GPS data of the Metsähovi permanent station. Results from the long time series at Metsähovi demonstrated the high quality of the data and correctly applied offset and drift corrections. The superconducting gravimeter T020 has proved to be an excellent and versatile tool for studies of Earth dynamics.
Abstract:
In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may allow for the possibility to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early universe. To this end, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with the relevant strengths and length scales, provided we use a high string tension and an efficient dynamo.
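As background (the standard delta-N expressions in a common convention, not the thesis's specific extension), the curvature perturbation is expanded in the field perturbations at horizon crossing, and the local non-Gaussianity parameter follows from the second-order term:

\[
  \zeta = \delta N
        = \sum_{I} N_{,I}\,\delta\varphi^{I}
        + \tfrac{1}{2}\sum_{I,J} N_{,IJ}\,\delta\varphi^{I}\delta\varphi^{J}
        + \dots,
  \qquad
  \frac{6}{5}\,f_{\mathrm{NL}}
  = \frac{\sum_{I,J} N_{,I}\, N_{,J}\, N_{,IJ}}{\big(\sum_{K} N_{,K}\, N_{,K}\big)^{2}} .
\]
% N is the number of e-folds from horizon crossing to a final uniform-density
% slice; N_{,I} denotes its derivative with respect to the field varphi^I.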
Abstract:
Acceleration of the universe has been established but not explained. During the past few years, precise cosmological experiments have confirmed the standard big bang scenario of a flat universe undergoing an inflationary expansion in its earliest stages, during which the perturbations are generated that eventually form galaxies and other structure in matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered another period of acceleration. This is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large-scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. The vacuum energy density that the constant describes should then be about a dozen times the present energy density in visible matter, but particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration, generically called dark energy. If some symmetry turns off the vacuum energy, its value is not a problem, but one still needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled by just dark matter and baryons. Such possibilities are taken here under investigation. The main goal is to uncover the observational consequences of different models of dark energy, the emphasis being on their implications for the formation of the large-scale structure of the universe. Possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is given to modifications of gravity that result in second-order field equations. It is shown that in a general class of such models the viable ones effectively represent the cosmological constant, while in another class one might find interesting modifications of the standard cosmological scenario that are still allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
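As a point of reference (a standard sketch, not a claim about the specific models studied in the thesis), a homogeneous scalar field rolling in a potential V(φ) acts as dark energy when the potential term dominates, since its equation-of-state parameter then approaches -1:

\[
  \rho_{\varphi} = \tfrac{1}{2}\dot{\varphi}^{2} + V(\varphi),
  \qquad
  p_{\varphi} = \tfrac{1}{2}\dot{\varphi}^{2} - V(\varphi),
  \qquad
  w \equiv \frac{p_{\varphi}}{\rho_{\varphi}}
    \;\longrightarrow\; -1
  \quad \text{when } \dot{\varphi}^{2}\ll V(\varphi).
\]
% Accelerated expansion requires the dominant component to have w < -1/3;
% a slowly rolling field therefore mimics a cosmological constant, while its
% dynamics can differ over time.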
Abstract:
The doctoral thesis deals with Finnish and foreign experts' analyses of Finland's military-strategic position and defence capability, dating back to the early years of the Cold War. Finland's military high command prepared assessments of the country's strategic position and of the capability of the Defence Forces as grounds for defence planning. Since Finland was located on the Cold War dividing line, foreign powers were also monitoring the development of Finland's situation. The research had access to the armed forces' internal assessments, as well as to analyses prepared by the military intelligence services of Sweden, Britain and the United States. One of the working hypotheses was that after WWII the ability of the military leadership to assess the security-political needs of the country and the organisation of its defence was severely weakened, so that the dangers of international developments were not perceived and the gradual erosion of defence capability went partly unnoticed. This hypothesis proved to be wrong. Even if Finnish military intelligence was much weaker than during the war, it was for the most part able to provide the military leadership with information on international military developments. The military leadership was also fully aware of the weakening of the defence capability of the country. They faced the difficult task of making the country's political leadership, i.e. President Paasikivi and the government, also understand the gravity of the situation. Only in the last years of his term in office did Paasikivi start to believe the warnings of the military. According to another hypothesis, outside observers considered the Finnish armed forces to act primarily as reinforcements for the Soviet Red Army, and believed that, in the event of a full-scale war, the Finns would not have been able or even willing to resist a Soviet invasion of Sweden and Norway through Finland. The study confirmed that this was approximately the view the Swedes, the British and the Americans had of the Finnish forces. Western and Swedish intelligence assessments did not show confidence in Finland's defence ability, and the country was regarded almost as a Soviet satellite. Finland's strategic position was, however, considered slightly different from that of the Soviet-occupied Eastern European countries. Finland had been forced to become part of the Soviet sphere of interest and security system, and this was sealed by the Finno-Soviet Treaty of Friendship, Cooperation, and Mutual Assistance in 1948. Finland had little importance for the military interests of the Western powers. In Sweden's defence planning, however, Finland played a significant role as an alarm bell for a possible Soviet surprise attack, as well as a defensive frontline and buffer zone.