79 results for "cosmology scalar-tensor theories induced gravity"


Relevance:

100.00%

Publisher:

Abstract:

In this thesis the current status and some open problems of noncommutative quantum field theory are reviewed. The introduction aims to put these theories in their proper context as a part of the larger program to model the properties of quantized space-time. Throughout the thesis, special focus is put on the role of noncommutative time and how its nonlocal nature presents us with problems. Applications in scalar field theories as well as in gauge field theories are presented. The infinite nonlocality of space-time introduced by the noncommutative coordinate operators leads to interesting structure and new physics. High energy and low energy scales are mixed, causality and unitarity are threatened and in gauge theory the tools for model building are drastically reduced. As a case study in noncommutative gauge theory, the Dirac quantization condition of magnetic monopoles is examined with the conclusion that, at least in perturbation theory, it cannot be fulfilled in noncommutative space.
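
For orientation, the noncommutative coordinates referred to above are most commonly realized in the canonical (Moyal-type) form, and the Dirac condition examined in the case study is the standard one; neither formula is spelled out in the abstract itself:

```latex
% Canonical noncommutative spacetime: coordinate operators fail to commute,
% with \theta^{\mu\nu} a constant antisymmetric matrix (dimension length^2).
[\hat{x}^{\mu}, \hat{x}^{\nu}] = i\,\theta^{\mu\nu}

% Ordinary Dirac quantization condition for electric charge e and magnetic
% charge g (Heaviside-Lorentz units, \hbar = c = 1); the thesis concludes it
% cannot be fulfilled perturbatively once \theta^{\mu\nu} \neq 0.
e\,g = 2\pi n, \qquad n \in \mathbb{Z}
```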

Relevance:

40.00%

Publisher:

Abstract:

Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, cf. black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has been no direct experimental evidence contradicting general relativity so far - on the contrary, it has passed a variety of observational tests - it is worth asking why the effective theory of gravity should take the exact form of general relativity. If general relativity is modified, how do the predictions of the theory change? Furthermore, how far can we go with the changes before we are faced with contradictions with the experiments? Along with the changes, could there be new phenomena which we could measure to find hints of the form of the quantum theory of gravity? This thesis is on a class of modified gravity theories called f(R) models, and in particular on the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars like neutron stars. Due to the nature of f(R) models, the role of the independent connection of the spacetime is emphasized throughout the thesis.
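
For concreteness (the abstract itself quotes no formulas), f(R) models generalize the Einstein-Hilbert action by an arbitrary function of the Ricci scalar; in the Palatini formulation, to which the closing remark on the independent connection refers, the connection is varied independently of the metric:

```latex
% Generic f(R) action (\kappa = 8\pi G); general relativity is recovered
% for f(R) = R - 2\Lambda.
S = \frac{1}{2\kappa} \int d^4x \,\sqrt{-g}\, f(R) + S_m[g_{\mu\nu}, \psi]

% Metric variation gives fourth-order field equations:
f'(R)\, R_{\mu\nu} - \tfrac{1}{2} f(R)\, g_{\mu\nu}
  + \left( g_{\mu\nu} \Box - \nabla_\mu \nabla_\nu \right) f'(R)
  = \kappa\, T_{\mu\nu}

% In the Palatini variant, R_{\mu\nu} is built from an independent
% connection, and the resulting field equations are second order.
```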

Relevance:

30.00%

Publisher:

Abstract:

Our present-day understanding of the fundamental constituents of matter and their interactions is based on the Standard Model of particle physics, which relies on quantum gauge field theories. On the other hand, the large-scale dynamical behaviour of spacetime is understood via Einstein's general theory of relativity. Merging these two complementary aspects of nature, quantum and gravity, is one of the greatest goals of modern fundamental physics; its achievement would help us understand the short-distance structure of spacetime, thus shedding light on the events in the singular states of general relativity, such as black holes and the Big Bang, where our current models of nature break down. The formulation of quantum field theories in noncommutative spacetime is an attempt to realize the idea of nonlocality at short distances, which our present understanding of these different aspects of Nature suggests, and consequently to find testable hints of the underlying quantum behaviour of spacetime. The formulation of noncommutative theories encounters various unprecedented problems, which derive from their peculiar inherent nonlocality. Arguably the most serious of these is the so-called UV/IR mixing, which makes the derivation of observable predictions especially hard by causing new, troublesome divergences to which our well-developed renormalization methods for quantum field theories do not apply. In the thesis I review the basic mathematical concepts of noncommutative spacetime, different formulations of quantum field theories in this setting, and the theoretical understanding of UV/IR mixing. In particular, I put forward new results, to be published, which show that quantum electrodynamics in noncommutative spacetime defined via the Seiberg-Witten map also suffers from UV/IR mixing. Finally, I review some of the most promising ways to overcome the problem; the final solution remains a challenge for the future.
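
As standard background (not quoted from the abstract), noncommutative field theories are usually defined by replacing ordinary products of fields with the Moyal star product; UV/IR mixing then appears in nonplanar loop diagrams, whose effective cutoff is tied to the external momentum:

```latex
% Moyal star product implementing [x^\mu, x^\nu] = i\theta^{\mu\nu}:
(f \star g)(x) = f(x)\,
  \exp\!\Big( \tfrac{i}{2}\, \overleftarrow{\partial}_{\mu}\, \theta^{\mu\nu}\,
              \overrightarrow{\partial}_{\nu} \Big)\, g(x)

% Schematic one-loop effect: in nonplanar diagrams the UV cutoff \Lambda is
% replaced by an effective scale that stays finite as \Lambda \to \infty but
% diverges as the external momentum p \to 0 -- the UV/IR mixing.
\Lambda_{\rm eff}^2 = \frac{1}{\,1/\Lambda^2 + (\theta p)^2/4\,}
```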

Relevance:

30.00%

Publisher:

Abstract:

The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters with time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since a developing void expands the faster the lower its matter density becomes, the expansion can accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides a smart solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Future work includes quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to accounting for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
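
A minimal toy model (a standard illustration, not taken from the thesis) shows how the volume-averaged expansion of a universe split into an underdense and an overdense region can accelerate even though both regions decelerate locally:

```latex
% Two disjoint regions with scale factors a_1, a_2; the total volume defines
% the average scale factor a and expansion rate H:
a^3 = a_1^3 + a_2^3, \qquad
H \equiv \frac{\dot a}{a} = \frac{a_1^3 H_1 + a_2^3 H_2}{a^3}

% Differentiating once more yields a positive variance term, so \ddot a > 0
% is possible even with \ddot a_1 \le 0 and \ddot a_2 \le 0:
\frac{\ddot a}{a} = \frac{a_1^3}{a^3}\frac{\ddot a_1}{a_1}
  + \frac{a_2^3}{a^3}\frac{\ddot a_2}{a_2}
  + 2\,\frac{a_1^3 a_2^3}{a^6}\,(H_1 - H_2)^2
```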

Relevance:

30.00%

Publisher:

Abstract:

In this thesis we consider the phenomenology of supergravity, and in particular the particle called the "gravitino". We begin with an introductory part, where we discuss the theories of inflation, supersymmetry and supergravity. Gravitino production is then investigated in detail, through the research papers included here. First we study the scattering of massive W bosons in the thermal bath of particles during the period of reheating. We show that the process generates nontrivial contributions to the cross section, which eventually lead to unitarity breaking above a certain scale. This happens because, in the annihilation diagram, the longitudinal degrees of freedom in the propagator of the gauge bosons disappear from the amplitude by virtue of the supergravity vertex. Accordingly, the longitudinal polarizations of the on-shell W become strongly interacting in the high-energy limit. By studying the process with both gauge and mass eigenstates, it is shown that the inclusion of diagrams with off-shell scalars of the MSSM does not cancel the divergences. Next, we approach cosmology more closely and study the decay of a scalar field S into gravitinos at the end of inflation. Once its mass is comparable to the Hubble rate, the field starts coherent oscillations about the minimum of its potential and decays perturbatively. We embed S in a model of gauge mediation with metastable vacua, where the hidden sector is of the O'Raifeartaigh type. First we discuss the dynamics of the field in the expanding background; then radiative corrections to the scalar potential V(S) and to the Kähler potential are calculated. Constraints on the reheating temperature are accordingly obtained by demanding that the gravitinos thus produced provide the observed Dark Matter density. We consistently modify earlier results in the literature, and find that the gravitino number density and T_R are extremely sensitive to the parameters of the model. This means that it is easy to account for gravitino Dark Matter with an arbitrarily low reheating temperature.
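
For scale (standard relations, not specific to this thesis), a coherently oscillating scalar begins to oscillate when the Hubble rate drops to its mass, and its perturbative decay fixes the reheating temperature T_R constrained above:

```latex
% Onset of coherent oscillations of S about the minimum of its potential:
H(t_{\rm osc}) \simeq m_S

% Perturbative decay with width \Gamma_S completes when H \simeq \Gamma_S;
% with g_* relativistic degrees of freedom and M_P the reduced Planck mass:
T_R \simeq \left( \frac{90}{\pi^2 g_*} \right)^{1/4} \sqrt{\Gamma_S\, M_P}
```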

Relevance:

30.00%

Publisher:

Abstract:

Cosmological inflation is the dominant paradigm for explaining the origin of structure in the universe. According to the inflationary scenario, there was a period of nearly exponential expansion in the very early universe, long before nucleosynthesis. Inflation is commonly considered a consequence of some scalar field or fields whose energy density starts to dominate the universe. The inflationary expansion converts the quantum fluctuations of the fields into classical perturbations on superhorizon scales, and these primordial perturbations are the seeds of the structure in the universe. Moreover, inflation also naturally explains the high degree of homogeneity and spatial flatness of the early universe. The real challenge of inflationary cosmology lies in establishing a connection between the fields driving inflation and theories of particle physics. In this thesis we concentrate on inflationary models at scales well below the Planck scale. The low scale allows us to seek candidates for the inflationary matter within extensions of the Standard Model, but typically also implies fine-tuning problems. We discuss a low-scale model where inflation is driven by a flat direction of the Minimal Supersymmetric Standard Model. The relation between the potential along the flat direction and the underlying supergravity model is studied. The low inflationary scale requires an extremely flat potential, but we find that in this particular model the associated fine-tuning problems can be solved in a rather natural fashion in a class of supergravity models. For this class of models, the flatness is a consequence of the structure of the supergravity model and is insensitive to the vacuum expectation values of the fields that break supersymmetry. Another low-scale model considered in the thesis is the curvaton scenario, where the primordial perturbations originate from quantum fluctuations of a curvaton field, which is different from the fields driving inflation. The curvaton gives a negligible contribution to the total energy density during inflation, but its perturbations become significant in the post-inflationary epoch. The separation between the fields driving inflation and the fields giving rise to primordial perturbations opens up new possibilities for lowering the inflationary scale without introducing fine-tuning problems. The curvaton model typically gives rise to a relatively large level of non-Gaussian features in the statistics of primordial perturbations. We find that the level of non-Gaussian effects is heavily dependent on the form of the curvaton potential. Future observations that provide more accurate information on the non-Gaussian statistics can therefore place constraining bounds on the curvaton interactions.
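
For reference (the standard curvaton relations for a quadratic potential, not quoted from the abstract), the curvature perturbation and its non-Gaussianity are controlled by the curvaton's energy fraction at decay:

```latex
% r measures the curvaton's share of the energy density at its decay:
r \equiv \left. \frac{3\rho_\sigma}{3\rho_\sigma + 4\rho_{\rm rad}} \right|_{\rm decay}

% Curvature perturbation generated from curvaton fluctuations \delta\sigma:
\zeta \simeq \frac{2r}{3}\, \frac{\delta\sigma}{\sigma}

% Local non-Gaussianity, large when the curvaton decays while subdominant:
f_{\rm NL} \simeq \frac{5}{4r} - \frac{5}{3} - \frac{5r}{6}
```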

Relevance:

30.00%

Publisher:

Abstract:

The acceleration of the universe has been established but not explained. During the past few years, precise cosmological experiments have confirmed the standard big bang scenario of a flat universe undergoing an inflationary expansion in its earliest stages, during which the perturbations are generated that eventually form into galaxies and other structure in matter, most of which is non-baryonic dark matter. Curiously, the universe has presently entered another period of acceleration. Such a result is inferred from observations of extra-galactic supernovae and is independently supported by the cosmic microwave background radiation and large-scale structure data. It seems there is a positive cosmological constant speeding up the universal expansion of space. But then the vacuum energy density the constant describes should be about a dozen times the present energy density in visible matter, whereas particle physics scales are enormously larger than that. This is the cosmological constant problem, perhaps the greatest mystery of contemporary cosmology. In this thesis we explore alternative agents of the acceleration, generically called dark energy. If some symmetry turns off vacuum energy, its value is not a problem, but one still needs some dark energy. Such could be a scalar field dynamically evolving in its potential, or some other exotic constituent exhibiting negative pressure. Another option is to assume that gravity at cosmological scales is not well described by general relativity. In a modified theory of gravity one might find the expansion rate increasing in a universe filled by just dark matter and baryons. Such possibilities are taken under investigation here. The main goal is to uncover observational consequences of different models of dark energy, the emphasis being on their implications for the formation of the large-scale structure of the universe. Possible properties of dark energy are investigated using phenomenological parameterizations, but several specific models are also considered in detail. Difficulties in unifying dark matter and dark energy into a single concept are pointed out. Considerable attention is given to modifications of gravity resulting in second-order field equations. It is shown that in a general class of such models the viable ones effectively represent the cosmological constant, while in another class one might find interesting modifications of the standard cosmological scenario still allowed by observations. The thesis consists of seven research papers preceded by an introductory discussion.
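
For concreteness (a common convention, not taken from the thesis), phenomenological studies of dark energy often parameterize its equation of state and read off the condition for acceleration from the Friedmann equations:

```latex
% CPL parameterization of w = p/\rho as a function of the scale factor a
% (a = 1 today); the pair (w_0, w_a) is fitted to data:
w(a) = w_0 + w_a\,(1 - a)

% Acceleration condition from \ddot a / a = -\tfrac{\kappa}{6}(\rho + 3p),
% with \kappa = 8\pi G:
\ddot a > 0 \;\Longleftrightarrow\; \rho + 3p < 0
  \;\Longleftrightarrow\; w < -\tfrac{1}{3} \quad \text{(single dominant component)}
```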

Relevance:

20.00%

Publisher:

Abstract:

This study examines, both theoretically and empirically, how well the theories of Norman Holland, David Bleich, Wolfgang Iser and Stanley Fish can explain readers' interpretations of literary texts. The theoretical analysis concentrates on their views on language from the point of view of Wittgenstein's Philosophical Investigations. This analysis shows that many of the assumptions related to language in these theories are problematic. The empirical data show that readers often form very similar interpretations. Thus the study challenges the common assumption that literary interpretations tend to be idiosyncratic. The empirical data consist of freely worded written answers to questions on three short stories. The interpretations were made by 27 Finnish university students. Some of the questions addressed issues that were discussed in large parts of the texts; some referred to issues that were mentioned only in passing or implied. The short stories were "The Witch à la Mode" by D. H. Lawrence, "Rain in the Heart" by Peter Taylor and "The Hitchhiking Game" by Milan Kundera. According to Fish, readers create both the formal features of a text and their interpretation of it according to an interpretive strategy. People who agree form an interpretive community. However, a typical answer usually contains ideas repeated by several readers as well as observations not mentioned by anyone else. Therefore it is very difficult to determine which readers belong to the same interpretive community. Moreover, readers with opposing opinions often seem to pay attention to the same textual features and even acknowledge the possibility of an opposing interpretation; therefore they do not seem to create the formal features of the text in different ways. Iser suggests that an interpretation emerges from the interaction between the text and the reader when the reader determines the implications of the text and in this way fills the "gaps" in the text. Iser believes that the text guides the reader, but as he also believes that meaning is on a level beyond words, he cannot explain how the text directs the reader. The similarity in the interpretations, and the fact that the agreement is strongest on issues that are discussed broadly in the text, do however support his assumption that readers are guided by the text. In Bleich's view, all interpretations have personal motives and each person has an idiosyncratic language system. The situation in which a person learns a word determines the most important meaning it has for that person. In order to uncover the personal etymologies of words, Bleich asks his readers to associate freely on the basis of a text and note down all the personal memories and feelings that the reading experience evokes. Bleich's theory of the idiosyncratic language system seems to rely on a misconceived notion of the role that ostensive definitions have in language use. The readers' responses show that spontaneous associations to personal life seem to colour the readers' interpretations, but such instances are rather rare. According to Holland, an interpretation reflects the reader's identity theme. Language use is regulated by shared rules, but everyone follows the rules in his or her own way. Words mean different things to different people. The problem with this view is that if there is any basis for language use, it seems to be the shared way of following linguistic rules. Wittgenstein suggests that our understanding of words is related to the shared ways of using words and our understanding of human behaviour. This view seems to give better grounds for understanding similarity and differences in literary interpretations than the theories of Holland, Bleich, Fish and Iser.

Relevance:

20.00%

Publisher:

Abstract:

Space in Musical Semiosis is a study of musical meaning, spatiality and composition. Earlier studies on musical composition have not adequately treated the problems of musical signification. Here, composition is considered an epitomic process of musical signification; hence the core problems of composition theory are core problems of musical semiotics. The study employs a framework of naturalist pragmatism, based on C. S. Peirce's philosophy. It operates on concepts such as subject, experience, mind and inquiry, and incorporates relevant ideas of Aristotle, Peirce and John Dewey into a synthetic view of esthetic, practic, and semiotic, for the benefit of grasping the musical signification process as a case of semiosis in general. Based on expert accounts, music is depicted as real, communicative, representational, useful, embodied and non-arbitrary. These features describe how music and the musical composition process are mental processes. Peirce's theories are combined with current morphological theories of cognition into a view of mind in which space is central. This requires an analysis of space, and the acceptance of a relativist understanding of spatiality. This approach to signification suggests that mental processes are spatially embodied, by virtue of hard facts of the world, literal representations of objects, as well as primary and complex metaphors, each sharing identities of spatial structures. Consequently, music and the musical composition process are spatially embodied. Composing music appears as a process of constructing metaphors: a praxis of shaping and reshaping features of sound, representable from simple quality dimensions to complex domains. In principle, any conceptual space, metaphorical or literal, may set off and steer elaboration, depending on the practical bearings on the habits of feeling, thinking and action induced in musical communication. In this sense, it is evident that music helps us to reorganize our habits of feeling, thinking, and action. These habits, in turn, constitute our existence. The combination of Peircean and morphological approaches to cognition serves well for understanding musical and general signification. It appears both possible and worthwhile to address a variety of issues central to musicological inquiry in the framework of naturalist pragmatism. The study may also contribute to the development of Peircean semiotics.

Relevance:

20.00%

Publisher:

Abstract:

This study concentrates on the contested concept of pastiche in literary studies. It offers the first detailed examination of the history of the concept from its origins in the seventeenth century to the present, showing how pastiche emerged as a critical concept in interaction with the emerging conception of authorial originality and the copyright laws protecting it. One of the key results of this investigation is the contextualisation of the postmodern debate on pastiche. Even though postmodern critics often emphasise the radical novelty of pastiche, they in fact resuscitate older positions and arguments without necessarily reflecting on their historical conditions. This historical background is then used to analyse the distinction between the primarily French conception of pastiche as the imitation of style and the postmodern notion of it as the compilation of different elements. The latter's vagueness and inclusiveness detracts from its value as a critical concept. The study thus concentrates on the notion of stylistic pastiche, challenging the widespread prejudice that it is merely an indication of lack of talent. Because it is multiply based on repetition, pastiche is in fact a highly ambiguous or double-edged practice that calls into question the distinction between repetition and original, thereby undermining the received notion of individual unique authorship as a fundamental aesthetic value. Pastiche does not, however, constitute a radical upheaval of the basic assumptions on which the present institution of literature relies, since, in order to mark its difference, pastiche always refers to a source outside itself against which its difference is measured. Finally, the theoretical analysis of pastiche is applied to literary works. The pastiches written by Marcel Proust demonstrate how it can become an integral part of a writer's poetics: imitation of style is shown to provide Proust with a way of exploring the role of style as a connecting point between inner vision and reality. The pastiches of the Sherlock Holmes stories by Michael Dibdin, Nicholas Meyer and the duo Adrian Conan Doyle and John Dickson Carr illustrate the functions of pastiche within a genre, detective fiction, that is itself fundamentally repetitive. A.S. Byatt's Possession and D.M. Thomas's Charlotte use Victorian pastiches to investigate the conditions of literary creation in the age of postmodern suspicion of creativity and individuality. The study thus argues that the concept of pastiche has valuable insights to offer to literary criticism and theory, and that literary pastiches, though often dismissed in reviews and criticism, are a particularly interesting object of study precisely because of their characteristic ambiguity.

Relevance:

20.00%

Publisher:

Abstract:

The main goal of this study was to explore experiences induced by playing digital games (i.e. the meanings of playing). In addition, the study aimed at structuring the larger entities of gaming experience. This was done by using theory-driven and data-grounded approaches. Previously, gaming experiences have not been explored as a whole, and the consideration of gaming experiences on the basis of psychological theories and studies has also been rare. The secondary goal of this study was to clarify whether the individual meanings of playing are connected with flow experience in an occasional gaming situation. Flow is an enjoyable experience, and activities that induce flow are usually gladly repeated. Flow has previously been shown to be an essential concept in the context of playing, but the relations between meanings of playing and flow have not been studied. The relations between gender and gaming experiences were examined throughout the study, as well as the relationship between gaming frequency and experiences. The study was divided into two sections, of which the first was composed according to the main goals. Its data were gathered using an Internet questionnaire. The second section covered the themes formulated on the basis of the secondary aims. In that section, the participants played a driving game for 40 minutes and then filled in a questionnaire which measured flow-related experiences. In both sections, the participants were mainly young Finnish adults. All the participants in the second section (n = 60) had already participated in the first section (n = 267). Both qualitative and quantitative research techniques were used in the study. In the first section, freely described gaming experiences were classified using grounded theory. After that, the most common categories were further classified into the basic structures of gaming experience, some according to existing theories of experience structure and some according to the data (i.e. grounded theory). In the second section, flow constructs were measured and used as grouping variables in a cluster analysis. Three meaningful groups were compared with regard to the meanings of gaming explored in the first section. The descriptions of gaming experiences were classified into four main categories: conceptions of the gaming process, emotions, motivations and focused attention. All the theory-driven categories were found in the data. This frame of reference can be utilized in the future when the reliability and validity of existing methods for measuring gaming experiences are assessed or new methods are developed. The connection between the individual relevance of gaming and flow was minor. However, when the scope was narrowed to the relations between the primary meanings of playing and flow, it was noticed that attributing enjoyment to gaming did not lead to the strongest flow experiences. This implies that the issue should be studied further in the future. As a whole, this study shows that gamer-related research from numerous vantage points can benefit from concentrating on gaming experiences.
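
A minimal sketch of the clustering step described above, under assumed names (the abstract specifies neither the software nor the variables): flow-construct scores for the 60 players are standardized and grouped into three clusters, which can then be compared on the section-one meaning categories.

```python
# Hypothetical sketch of the second section's analysis: cluster n = 60 players
# on their flow-construct scores. File and column names are assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

flow = pd.read_csv("flow_scores.csv")  # one row per participant (n = 60)
constructs = ["concentration", "sense_of_control",
              "merging_of_action_and_awareness", "autotelic_experience"]

# Standardize the construct scores and form three groups, as in the study.
z = StandardScaler().fit_transform(flow[constructs])
flow["cluster"] = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(z)

# Profile the clusters; in the study the three groups were then compared on
# the meanings-of-playing categories obtained in the first section.
print(flow.groupby("cluster")[constructs].mean())
```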

Relevance:

20.00%

Publisher:

Abstract:

A diet high in dairy products is inversely associated with body mass index, risk of metabolic syndrome and prevalence of type 2 diabetes in several populations. A number of intervention studies also support the role of increased dairy intake in the prevention and treatment of obesity. Dairy calcium has been suggested to account for the effect of dairy on body weight, but it has been repeatedly shown that the effect of dairy is superior to that of supplemental calcium. Dairy proteins are postulated either to enhance the effect of calcium or to have an independent effect on body weight, but studies in the area are scarce. The aim of this study was to evaluate the potential of dairy proteins and calcium in the prevention and treatment of diet-induced obesity in C57Bl/6J mice. The effect of dairy proteins and calcium on the liver and adipose tissue was also investigated in order to characterise the potential mechanisms explaining the reduction of risk for metabolic syndrome and type 2 diabetes. A high-calcium diet (1.8%) in combination with dietary whey protein inhibited body weight and fat gain and accelerated body weight and fat loss in high-fat-fed C57Bl/6J mice during long-term studies of 14 to 21 weeks. α-lactalbumin, one of the major whey proteins, was the most effective whey protein fraction, showing significantly accelerated weight and fat loss during energy restriction and reduced visceral fat gain during ad libitum feeding after weight loss. The microarray data suggest sensitisation of insulin signalling in the adipose tissue as a result of a calcium-rich whey protein diet. Lipidomic analysis revealed that weight loss on a whey protein-based high-calcium diet was characterised by significant decreases in diabetogenic diacylglycerols and lipotoxic ceramide species. The calcium supplementation led to a small but statistically significant decrease in fat absorption, independent of the protein source of the diet. This augments, but does not fully explain, the effects of the studied diets on body weight. A whey protein-containing high-calcium diet had a protective effect against the high-fat diet-induced decline of β3-adrenergic receptor expression in adipose tissue. In addition, a high-calcium diet with whey protein increased adipose tissue leptin expression, which is decreased in this obesity-prone mouse strain. These changes are likely to contribute to the inhibition of weight gain. The potential sensitisation of insulin signalling in adipose tissue, together with the less lipotoxic and diabetogenic hepatic lipid profile, suggests a novel mechanistic link to explain why increased dairy intake is associated with a lower prevalence of metabolic syndrome and type 2 diabetes in epidemiological studies. Taken together, the intake of a high-calcium diet with dairy proteins has a body-weight-lowering effect in high-fat-fed C57Bl/6J mice. High-calcium diets containing whey protein prevent weight gain and enhance weight loss, α-lactalbumin being the most effective whey protein fraction. Whey proteins and calcium also have beneficial effects on the hepatic lipid profile and adipose tissue gene expression, suggesting a novel mechanistic link to explain the epidemiological findings on dairy intake and metabolic syndrome. The clinical relevance of these findings and the precise mechanisms of action remain an intriguing field for future research.