24 results for Observational

in Helda - Digital Repository of the University of Helsinki


Relevance: 10.00%

Abstract:

Fluid bed granulation is a key pharmaceutical process which improves many of the powder properties required for tablet compression. The fluid bed granulation process comprises dry mixing, wetting and drying phases. Granules of high quality can be obtained by understanding and controlling the critical process parameters through timely measurements. Process analytical technologies (PAT) include the integrated analysis of physical process measurements and particle size data from a fluid bed granulator. Recent regulatory guidelines strongly encourage the pharmaceutical industry to apply scientific and risk management approaches to the development of a product and its manufacturing process. The aim of this study was to utilise PAT tools to increase the process understanding of fluid bed granulation and drying. Inlet air humidity levels and granulation liquid feed affect powder moisture during fluid bed granulation, and moisture influences many process, granule and tablet qualities. The approach in this thesis was to identify sources of variation that are mainly related to moisture, to determine correlations and relationships, and to utilise the PAT and design space concepts for fluid bed granulation and drying. Monitoring the material behaviour in a fluidised bed has traditionally relied on the observational ability and experience of an operator. There has been a lack of good criteria for characterising material behaviour during the spraying and drying phases, even though the entire performance of the process and the end-product quality depend on it. The granules were produced in an instrumented bench-scale Glatt WSG5 fluid bed granulator. The effect of inlet air humidity and granulation liquid feed on the temperature measurements at different locations of the fluid bed granulator system was determined. This revealed dynamic changes in the measurements and enabled the optimal sites for process control to be identified. The moisture originating from the granulation liquid and the inlet air affected the temperature of the mass and the pressure difference over the granules. Moreover, the effects of inlet air humidity and granulation liquid feed rate on granule size were evaluated, and compensatory techniques were used to optimise particle size. Various end-point indication techniques for drying were compared. The ∆T method, which is based on thermodynamic principles, eliminated the effects of humidity variations and gave the most precise estimate of the drying end-point. The influence of fluidisation behaviour on drying end-point detection was determined: the feasibility of the ∆T method, and thus the similarity of end-point moisture contents, was found to depend on the variation in fluidisation between manufacturing batches. A novel parameter describing the behaviour of material in a fluid bed was developed; it is calculated from the flow rate of the process air and the turbine fan speed, and it was compared with the fluidisation behaviour and the particle size results. Design space process trajectories for smooth fluidisation were determined from the fluidisation parameters. With this design space it is possible to avoid excessive fluidisation, improper fluidisation and bed collapse. Furthermore, various process phenomena and failure modes were observed with the in-line particle size analyser: both a rapid increase and a decrease in granule size could be monitored in a timely manner. The fluidisation parameter and the pressure difference over the filters were also found to reflect particle size once the granules had formed. The various physical parameters evaluated in this thesis give valuable information on fluid bed process performance and increase process understanding.
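A minimal sketch of the ∆T end-point idea described above, in Python. The abstract gives only the principle, so the signal names, threshold value and synthetic temperature profiles below are illustrative assumptions, not the thesis's actual algorithm.

import numpy as np

def drying_endpoint_index(t_inlet, t_bed, threshold=2.0):
    # ∆T = inlet air temperature minus bed temperature. During constant-rate
    # drying the bed stays near the wet-bulb temperature, so ∆T is large;
    # once free moisture is exhausted, the bed temperature approaches the
    # inlet temperature and ∆T collapses below the (assumed) threshold.
    delta_t = np.asarray(t_inlet) - np.asarray(t_bed)
    below = np.nonzero(delta_t < threshold)[0]
    return int(below[0]) if below.size else None

# Synthetic example: constant 60 °C inlet air, bed warming as drying completes.
time = np.arange(0.0, 60.0, 1.0)                            # minutes
t_inlet = np.full_like(time, 60.0)
t_bed = 35.0 + 24.0 / (1.0 + np.exp(-(time - 40.0) / 3.0))
print(drying_endpoint_index(t_inlet, t_bed))                # index of the end-point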

Relevance: 10.00%

Abstract:

Distraction in the workplace is increasingly common in the information age. Several tasks and sources of information compete for a worker's limited cognitive capacities in human-computer interaction (HCI). In some situations even very brief interruptions can have detrimental effects on memory, yet in other situations where persons are continuously interrupted, virtually no interruption costs emerge. This dissertation attempts to reveal the mental conditions and causalities differentiating the two outcomes. The explanation, building on the theory of long-term working memory (LTWM; Ericsson and Kintsch, 1995), focuses on the active, skillful aspects of human cognition that enable the storage of task information beyond the temporary and unstable storage provided by short-term working memory (STWM). Its key postulate is the retrieval structure: an abstract, hierarchical knowledge representation built into long-term memory that can be utilized to encode, update, and retrieve the products of cognitive processes carried out during skilled task performance. If certain criteria of practice and task processing are met, LTWM allows for the storage of large representations for long periods of time, yet these representations can be accessed with the accuracy, reliability, and speed typical of STWM. The main thesis of the dissertation is that the ability to endure interruptions depends on the efficiency with which LTWM can be recruited for maintaining information. An observational study and a field experiment provide ecological evidence for this thesis. Mobile users were found to be able to carry out heavy interleaving and sequencing of tasks while interacting, and they exhibited several intricate time-sharing strategies to orchestrate interruptions in a way sensitive to both external and internal demands. Interruptions are inevitable, because they arise as natural consequences of the top-down and bottom-up control of multitasking. In this process the function of LTWM is to keep some representations ready for reactivation and others in a more passive state to prevent interference. The psychological reality of the main thesis received confirmatory evidence in a series of laboratory experiments. They indicate that after encoding into LTWM, task representations are safeguarded from interruptions, regardless of their intensity, complexity, or pacing. However, when LTWM cannot be deployed, the problems posed by interference in long-term memory and the limited capacity of STWM surface. A major contribution of the dissertation is the analysis of when users must resort to poorer maintenance strategies, such as temporal cues and STWM-based rehearsal. First, one experiment showed that task orientations can be associated with radically different patterns of retrieval cue encodings; thus the nature of the processing of the interface determines which features will be available as retrieval cues and which must be maintained by other means. In another study it was demonstrated that if the speed of encoding into LTWM, a skill-dependent parameter, is slower than the processing speed allowed for by the task, interruption costs emerge. Contrary to the predictions of competing theories, these costs turned out to involve intrusions in addition to omissions. Finally, it was learned that in rapid visually oriented interaction, perceptual-procedural expectations guide task resumption, and neither STWM nor LTWM is utilized because access is too slow. These findings imply a change in thinking about the design of interfaces. Several novel principles of design are presented, based on the idea of supporting the deployment of LTWM in the main task.

Relevance: 10.00%

Abstract:

Haptices and haptemes: A case study of the developmental process in touch-based communication of acquired deafblind people

This research is the first systematic, longitudinal process and development description of communication using touch and body with a person with acquired deafblindness. The research consists of observational and analysed written and video materials, mainly from two informants' experiences over a period of 14 years. The research describes the adaptation of Social-Haptic methods between a couple, together with other informants' experiences collated from biographies and from national and international courses. When hearing and sight deteriorate as a result of an acquired deafblind condition, communication comes to rely on multi-systematic and adaptive methods. A person's expressive language, spoken or Sign Language, usually remains unchanged, but the methods of receiving information may change many times during a person's lifetime. Haptices are composed of haptemes, which determine the rules by which they are analysed. In defining haptemes, the definition, classification and varied meanings of touch were established. Haptices involve sharing a personal body space, the meaning of touch-contact, context and the use of different communication channels. Communication distances are classified as exact distance, estimated distance and touch distance. Physical distance can be termed very long, long, medium or very close. Social body space includes the body areas involved in sending and receiving haptices and applying different types of contact. One or two hands can produce messages using different hand shapes and orientations. This research classifies the body into different areas, covering body orientation, varied body postures, body position levels, social actions and which side of the body is used. Spatial body space includes environmental and situational elements. Haptemes of movement are recognised as the direction of movements, change of direction on the body, directions between people, pressure, speed, frequency, size, length, duration, pause, change of rhythm, shape, and macro and micro movements. Haptices convey multidimensional meanings and emotions. The research describes haptices in different situations, enhancing sensory information and also functioning as an independent language. Haptices include the social-haptic confirmation system, social quick messages, body drawing, contact with people and the environment, guiding, and sharing art experiences through movement. Five stages of emotional differentiation were identified: very light, light, medium, heavy and very heavy touch. Haptices make it possible to share different art, hobby and game experiences. The development of the new communication system, based on the analysis of the research data, is classified into phases: experimental initiation, social deconstruction, developing the description of Social-Haptic communication, generalisation of the theory, and finding and conceptualising the haptices and haptemes. The use and description of haptices is a social innovation which illustrates the adaptive function of the body and perceptual senses and can be taught to a third party.

Keywords: deafblindness, hapteme, haptic, haptices, movement, social-haptic communication, social-haptic confirmation system, tactile, touch

Relevance: 10.00%

Abstract:

This thesis utilises an evidence-based approach to critically evaluate and summarise effectiveness research on physiotherapy, physiotherapy-related motor-based interventions and orthotic devices in children and adolescents with cerebral palsy (CP). It aims to assess the methodological challenges of the systematic reviews and trials, to evaluate the effectiveness of interventions in current use, and to make suggestions for future trials.

Methods: Systematic reviews were searched from computerised bibliographic databases up to August 2007 for physiotherapy and physiotherapy-related interventions, and up to May 2003 for orthotic devices. Two reviewers independently identified, selected, and assessed the quality of the reviews using the Overview Quality Assessment Questionnaire complemented with decision rules. From a sample of 14 randomised controlled trials (RCTs) published between January 1990 and June 2003 we analysed the methods of sampling, recruitment, and comparability of groups; defined the components of a complex intervention; identified outcome measures based on the International Classification of Functioning, Disability and Health (ICF); analysed the clinical interpretation of score changes; and analysed trial reporting using a modified 33-item CONSORT (Consolidated Standards of Reporting Trials) checklist. The effectiveness of physiotherapy and physiotherapy-related interventions in children with diagnosed CP was evaluated in a systematic review of randomised controlled trials searched from computerised databases from January 1990 up to February 2007. Two reviewers independently assessed the methodological quality, extracted the data, classified the outcomes using the ICF, and graded the level of evidence according to van Tulder et al. (2003).

Results: We identified 21 reviews on physiotherapy and physiotherapy-related interventions and five on orthotic devices. These reviews summarised 23 and 5 randomised controlled trials and 104 and 27 observational studies, respectively. Only six reviews were of high quality. These found some evidence supporting strength training, constraint-induced movement therapy or hippotherapy, and insufficient evidence on comprehensive interventions. Based on the original studies included in the reviews on orthotic devices, we found some short-term effects of lower limb casting on passive range of movement, and of ankle-foot orthoses on equinus walk. Long-term effects of lower limb orthoses have not been studied. Evidence on upper limb casting or orthoses is conflicting. In the sample of 14 RCTs, most trials used simple randomisation, complemented with matching or stratification, but only three specified concealed allocation. Numerous studies provided sufficient details on the components of a complex intervention, but the overlap of outcome measures across studies was poor and the clinical interpretation of observed score changes was mostly missing. Almost half (48%) of the applicable CONSORT-based items (range 28–32) were reported adequately. Most reporting inadequacies concerned outcome measures, sample size determination, details of the sequence generation, allocation concealment and implementation of the randomisation, success of assessor blinding, recruitment and follow-up dates, intention-to-treat analysis, precision of the effect size, co-interventions, and adverse events. The systematic review identified 22 trials on eight intervention categories. Four trials were of high quality. Moderate evidence of effectiveness was established for upper extremity treatments on attained goals, active supination and developmental status, and for constraint-induced therapy on the amount and quality of hand use and new emerging behaviours. Moderate evidence of ineffectiveness was found for strength training's effect on walking speed and stride length. Conflicting evidence was found for strength training's effect on gross motor function. For the other intervention categories the evidence was limited owing to the low methodological quality and the statistically insignificant results of the studies.

Conclusions: The high-quality reviews provide both supportive and insufficient evidence on some physiotherapy interventions. The poor quality of most reviews calls for caution, although most reviews drew no conclusions on effectiveness because of the poor quality of the primary studies. A considerable number of RCTs of good to fair methodological and reporting quality indicate that informative and well-reported RCTs on complex interventions in children and adolescents with CP are feasible. Nevertheless, methodological improvement is needed in certain areas of trial design and conduct, and trial authors are encouraged to follow the CONSORT criteria. Based on the RCTs we established moderate evidence for some effectiveness of upper extremity training. Owing to limitations in methodological quality and variations in populations, interventions and outcomes, mostly limited evidence is available on the effectiveness of most physiotherapy interventions to guide clinical practice. Well-designed trials are needed, especially for focused physiotherapy interventions.

Relevance: 10.00%

Abstract:

One of the unanswered questions of modern cosmology is the issue of baryogenesis: why does the universe contain a huge amount of baryons but no antibaryons, and what kind of mechanism can produce such an asymmetry? One theory proposed to explain this problem is leptogenesis. In this theory, right-handed neutrinos with heavy Majorana masses are added to the standard model, which introduces explicit lepton number violation. Instead of producing the baryon asymmetry directly, these heavy neutrinos decay in the early universe, and if these decays are CP-violating, they produce a net lepton number. This lepton number is then partially converted into baryon number by the electroweak sphaleron process. In this work we start by reviewing the current observational data on the amount of baryons in the universe. We also introduce the Sakharov conditions, the necessary criteria for any theory of baryogenesis. We review the current data on neutrino oscillations, and explain why these require the existence of neutrino mass. We introduce the different kinds of mass terms that can be added for neutrinos, and explain how the see-saw mechanism naturally explains the observed neutrino mass scales, motivating the addition of the Majorana mass term. After introducing leptogenesis qualitatively, we derive the Boltzmann equations governing leptogenesis and give analytical approximations for them. Finally, we review the numerical solutions of these equations, demonstrating the capability of leptogenesis to explain the observed baryon asymmetry. In the appendix, simple Feynman rules are given for theories with interactions between both Dirac and Majorana fermions, and these are applied at tree level to calculate the parameters relevant for the theory.
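For context, the two central relations the abstract refers to can be written out in the standard notation of the leptogenesis literature (the thesis's own conventions may differ). The see-saw mechanism gives light neutrino masses suppressed by the heavy Majorana scale:

\[
  m_\nu \simeq -\, m_D \, M_R^{-1} \, m_D^{T}, \qquad m_D \ll M_R ,
\]

and the Boltzmann equations track the abundance of the lightest heavy neutrino N_1 and the B−L asymmetry as functions of z = M_1/T:

\[
  \frac{dN_{N_1}}{dz} = -D\,\bigl(N_{N_1}-N_{N_1}^{\mathrm{eq}}\bigr), \qquad
  \frac{dN_{B-L}}{dz} = -\varepsilon_1\, D\,\bigl(N_{N_1}-N_{N_1}^{\mathrm{eq}}\bigr) - W\, N_{B-L},
\]

where D and W are the decay and washout terms and ε_1 is the CP asymmetry in N_1 decays; the sphaleron process then converts a fraction of N_{B−L} into baryon number.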

Relevance: 10.00%

Abstract:

Habitat fragmentation is currently affecting many species throughout the world. As a consequence, an increasing number of species are structured as metapopulations, i.e. as local populations connected by dispersal. While excellent studies of metapopulations have accumulated over the past 20 years, the focus has recently shifted from single species to studies of multiple species. This has given rise to the concept of metacommunities, in which local communities are connected by the dispersal of one or several of their member species. To understand this higher level of organisation, we need to address not only the properties of single species, but also establish the importance of interspecific interactions. However, studies of metacommunities are so far heavily biased towards laboratory-based systems, and empirical data from natural systems are urgently needed. My thesis focuses on a metacommunity of insect herbivores on the pedunculate oak Quercus robur, a tree species known for its high diversity of host-specific insects. Taking advantage of the amenability of this system to both observational and experimental studies, I quantify and compare the importance of local and regional factors in structuring herbivore communities. Most importantly, I contrast the impact of direct and indirect competition, host plant genotype and local adaptation (i.e. local factors) with that of regional processes (as reflected by the spatial context of the local community). As a key approach, I use general theory to generate testable hypotheses, controlled experiments to establish causal relations, and observational data to validate the role played by the pinpointed processes in nature. As the central outcome of my thesis, I am able to relegate local forces to a secondary role in structuring oak-based insect communities. While controlled experiments show that direct competition does occur among both conspecifics and heterospecifics, that indirect interactions can be mediated by both the host plant and the parasitoids, and that host plant genotype may affect local adaptation, the size of these effects is much smaller than that of spatial context. Hence, I conclude that dispersal between habitat patches plays a prime role in structuring the insect community, and that the distribution and abundance of the target species can only be understood in a spatial framework. By extension, I suggest that the majority of herbivore communities are dependent on the spatial structure of their landscape, and I urge fellow ecologists working on other herbivore systems to either support or refute my generalization.

Relevance: 10.00%

Abstract:

Vasomotor hot flushes affect approximately 75% of postmenopausal women, but their frequency and severity show great individual variation. Hot flushes have been present in the women attending the observational studies that showed cardiovascular benefit associated with hormone therapy use, whereas they have been absent or very mild in the randomized hormone therapy trials that showed cardiovascular harm. Therefore, if hot flushes are a factor connected with vascular health, they could perhaps be one explanation for the divergence between the cardiovascular data of observational and randomized studies. For the present study, 150 healthy, recently postmenopausal women showing a large variation in hot flushes were studied with regard to cardiovascular health by means of pulse wave analysis, ambulatory blood pressure and several biochemical vascular markers. In addition, the possible impact of hot flushes on the outcomes of hormone therapy was studied. This study shows that women with severe hot flushes exhibit greater vasodilatory reactivity, as assessed by pulse wave analysis, than do women without vasomotor symptoms. This can be seen as a hot flush-related vascular benefit. Although severe night-time hot flushes seem to be accompanied by transient increases in blood pressure and heart rate, the diurnal blood pressure and heart rate profiles show no significant differences between women with no, mild, moderate or severe hot flushes. The levels of vascular markers, such as lipids, lipoproteins, C-reactive protein and sex hormone-binding globulin, show no association with hot flush status. In the 6-month hormone therapy trial the women were classified as having either tolerable or intolerable hot flushes. These groups were treated in a randomized order with transdermal estradiol gel, oral estradiol alone or in combination with medroxyprogesterone acetate, or with placebo. In women with only tolerable hot flushes, oral estradiol led to a reduced vasodilatory response and to increases in 24-hour and daytime blood pressures compared with women with intolerable hot flushes receiving the same therapy. No such effects were observed with the other treatment regimens or in women with intolerable hot flushes. The responses of vascular biomarkers to hormone therapy were unaffected by hot flush status. In conclusion, hot flush status contributes to cardiovascular health before and during hormone therapy. Severe hot flushes are associated with an increased vasodilatory, and thus beneficial, vascular status. Oral estradiol leads to vasoconstrictive changes and increases in blood pressure, and thus to possible vascular harm, but only in women whose hot flushes are so mild that they would probably not lead to the initiation of hormone therapy in clinical practice. Healthy, recently postmenopausal women with moderate to severe hot flushes should be given the opportunity to use hormone therapy to alleviate their hot flushes, and if estrogen is prescribed for indications other than the control of hot flushes, the transdermal route of administration should be favored.

Relevance: 10.00%

Abstract:

Venous thromboembolism (VTE) is the greatest single cause of maternal mortality among pregnant women in developed countries. Pregnancy is a hypercoagulable state and brings an enhanced risk of deep venous thrombosis (DVT) even in otherwise healthy women. Traditionally, unfractionated heparin (UFH) has been used for the treatment of DVT during pregnancy. We showed in our observational study that low molecular weight heparin (LMWH) is as effective and safe as UFH in the treatment of DVT during pregnancy. Although DVT during pregnancy is often massive, increasing the risk of long-term consequences, namely post-thrombotic syndrome (PTS), only 11% of all patients had confirmed PTS 3–4 years after DVT. In our studies the prevalence of PTS did not depend on the treatment (UFH vs LMWH). Low molecular weight heparin is more easily administered, requires few laboratory controls and shortens the hospital stay, factors that lower the costs of treatment. Cervical insufficiency is defined as repeated very preterm delivery during the second or early third trimester. Infection is a well-known risk factor for preterm delivery. We found an overrepresentation of thrombophilic mutations (FV Leiden, prothrombin G20210A) among 42 patients with cervical insufficiency compared with controls (OR 6.7, CI 2.7–18.4). Thus, thrombophilia might be a risk factor for cervical insufficiency, possibly explained by an interaction of coagulation and inflammation processes. The presence of antiphospholipid (aPL) antibodies increases the risk of recurrent miscarriage (RM). Annexins are proteins that bind to anionic phospholipids (PLs), preventing clotting on vascular phospholipid surfaces. Plasma concentrations of circulating annexin IV and V were investigated in 77 pregnancies at the beginning of pregnancy among women with a history of RM, and in relation to their aPL antibody status. The control group consisted of unselected pregnant patients (n=25) without a history of adverse pregnancy outcome. Plasma levels of annexin V were significantly higher at the beginning (≤5th week) of pregnancy in women with aPL antibodies than in those without aPL antibodies (P=0.03). Levels of circulating annexin V were also higher at the 6th (P=0.01) and 8th week of pregnancy in subjects with aPL antibodies (P=0.01). These results support the hypothesis that aPL antibodies could displace annexin from the anionic phospholipid surfaces of syncytiotrophoblasts (STBs) and may exert procoagulant activities on the surfaces of STBs. Recurrent miscarriage has also been suggested to be caused by mutations in genes coding for various coagulation factors, resulting in thrombophilia. In the last study of my thesis, the prevalence of thrombomodulin (TM) and endothelial protein C receptor (EPCR) polymorphisms was investigated among 40 couples and six women suffering from RM. This study showed that mutations in the TM or EPCR genes are not a major cause of RM in Finnish patients.

Relevance: 10.00%

Abstract:

The cosmological observations of light from type Ia supernovae, the cosmic microwave background and the galaxy distribution seem to indicate that the expansion of the universe has accelerated during the latter half of its age. Within standard cosmology, this is ascribed to dark energy, a uniform fluid with large negative pressure that gives rise to repulsive gravity but also entails serious theoretical problems. Understanding the physical origin of the perceived accelerated expansion has been described as one of the greatest challenges in theoretical physics today. In this thesis, we discuss the possibility that, instead of dark energy, the acceleration is caused by an effect of nonlinear structure formation on light, an effect ignored in standard cosmology. A physical interpretation of the effect goes as follows: as the initially smooth matter clusters with time into filaments of opaque galaxies, the regions through which the detectable light travels become emptier and emptier relative to the average. Since a developing void expands the faster the lower its matter density becomes, the expansion can then accelerate along our line of sight without local acceleration, potentially obviating the need for the mysterious dark energy. In addition to offering a natural physical interpretation of the acceleration, we have further shown that an inhomogeneous model is able to match the main cosmological observations without dark energy, resulting in a concordant picture of the universe with 90% dark matter, 10% baryonic matter and 15 billion years as the age of the universe. The model also provides an elegant solution to the coincidence problem: if induced by the voids, the onset of the perceived acceleration naturally coincides with the formation of the voids. Additional future tests include quantitative predictions for angular deviations and a theoretical derivation of the model to reduce the required phenomenology. A spin-off of the research is a physical classification of the cosmic inhomogeneities according to how they could induce accelerated expansion along our line of sight. We have identified three physically distinct mechanisms: global acceleration due to spatial variations in the expansion rate, a faster local expansion rate due to a large local void, and biased light propagation through voids that expand faster than the average. A general conclusion is that the physical properties crucial to accounting for the perceived acceleration are the growth of the inhomogeneities and the inhomogeneities in the expansion rate. The existence of these properties in the real universe is supported by both observational data and theoretical calculations. However, better data and more sophisticated theoretical models are required to vindicate or disprove the conjecture that the inhomogeneities are responsible for the acceleration.
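As a worked equation for the first mechanism listed (acceleration of the average expansion from spatial variations in the expansion rate), the standard Buchert averaging relation is illustrative; it is given here as common background in the field, not quoted from the thesis:

\[
  3\,\frac{\ddot{a}_{\mathcal{D}}}{a_{\mathcal{D}}}
  = -4\pi G \,\langle \rho \rangle_{\mathcal{D}} + \mathcal{Q}_{\mathcal{D}},
  \qquad
  \mathcal{Q}_{\mathcal{D}}
  = \frac{2}{3}\Bigl(\langle \theta^{2} \rangle_{\mathcal{D}} - \langle \theta \rangle_{\mathcal{D}}^{2}\Bigr)
    - 2\,\langle \sigma^{2} \rangle_{\mathcal{D}} .
\]

A large enough variance of the local expansion rate θ over the averaging domain D (the kinematical backreaction Q_D) can thus make the average scale factor a_D accelerate even though every local region decelerates.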

Relevance: 10.00%

Abstract:

Accurate and stable time series of geodetic parameters can be used to help understand the dynamic Earth and its response to global change. The Global Positioning System, GPS, has proven invaluable in modern geodynamic studies. In Fennoscandia the first GPS networks were set up in 1993. These networks form the basis of the national reference frames in the area, but they also provide long and important time series for crustal deformation studies. These time series can be used, for example, to better constrain the ice history of the last ice age and the Earth's structure via existing glacial isostatic adjustment models. To improve the accuracy and stability of the GPS time series, the possible nuisance parameters and error sources need to be minimized. We have analysed GPS time series to study two phenomena: first, the refraction of the GPS signal in the neutral atmosphere, and second, the surface loading of the crust by environmental factors, namely the non-tidal Baltic Sea, the atmospheric load and varying continental water reservoirs. We studied the atmospheric effects on the GPS time series by comparing the standard method with slant delays derived from a regional numerical weather model, and we have presented a method for correcting the atmospheric delays at the observational level. The results show that both standard atmosphere modelling and atmospheric delays derived from a numerical weather model by ray-tracing provide a stable solution. The advantage of the latter is that the number of unknowns used in the computation decreases, and thus the computation may become faster and more robust. The computation can also be done with any processing software that allows the atmospheric correction to be turned off. The crustal deformation due to loading was computed by convolving Green's functions with surface load data, that is to say, global hydrology models, global numerical weather models and a local model for the Baltic Sea. The result was that the loading effects can be seen in the GPS coordinate time series. Subtracting the computed deformation from the vertical time series of GPS coordinates reduces the scatter of the time series; the long-term trends, however, are not influenced. We show that global hydrology models and the local sea surface can explain up to 30% of the variation in the GPS time series. On the other hand, the atmospheric loading admittance in the GPS time series is low, and the different hydrological surface load models could not be validated in the present study. In order to be used for GPS corrections in the future, both the atmospheric loading and the hydrological models need further analysis and improvement.
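A minimal sketch of the loading computation described above (convolving Green's functions with surface load data), in Python. The Green's function here is a crude placeholder; real kernels are tabulated from an Earth model, and the grid, masses and magnitudes below are illustrative assumptions only.

import numpy as np

def vertical_loading_deformation(mass_kg, lat, lon, site_lat, site_lon, greens_fn):
    # Farrell-type convolution u_z = sum_i G(psi_i) * m_i, where psi_i is the
    # angular distance from the GPS site to load cell i and m_i the cell mass.
    la1, lo1 = np.radians(site_lat), np.radians(site_lon)
    la2, lo2 = np.radians(lat), np.radians(lon)
    cos_psi = (np.sin(la1) * np.sin(la2) +
               np.cos(la1) * np.cos(la2) * np.cos(lo2 - lo1))
    psi = np.arccos(np.clip(cos_psi, -1.0, 1.0))
    return float(np.sum(greens_fn(psi) * mass_kg))

# Toy stand-in Green's function (m per kg), decaying with angular distance.
toy_green = lambda psi: -1e-14 / np.maximum(psi, 1e-3)
lat = np.array([60.0, 61.0]); lon = np.array([25.0, 24.0])   # load cell centres
mass = np.array([2.0e9, 1.5e9])                               # kg of water per cell
print(vertical_loading_deformation(mass, lat, lon, 60.2, 24.9, toy_green))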

Relevance: 10.00%

Abstract:

The increased accuracy of cosmological observations, especially of the measurements of the cosmic microwave background, allows us to study the primordial perturbations in greater detail. In this thesis, we allow for the possibility of correlated isocurvature perturbations alongside the usual adiabatic perturbations. Thus far the simplest six-parameter ΛCDM model has been able to accommodate all the observational data rather well. However, we find that the 3-year WMAP data and the 2006 Boomerang data favour a nonzero nonadiabatic contribution to the CMB angular power spectrum: a primordial isocurvature perturbation that is positively correlated with the primordial curvature perturbation. Compared with the adiabatic ΛCDM model, we have four additional parameters describing the increased complexity of the primordial perturbations. Our best-fit model has a 4% nonadiabatic contribution to the CMB temperature variance, and the fit is improved by Δχ² = 9.7. We can attribute this preference for isocurvature to a feature in the peak structure of the angular power spectrum, namely the widths of the second and third acoustic peaks. Along the way, we have improved our analysis methods by identifying some issues with the parametrisation of the primordial perturbation spectra and suggesting ways to handle them. Owing to these improvements, the convergence of our Markov chains is improved. The change of parametrisation affects the MCMC analysis because of the change in priors; we have checked our results against this and find only marginal differences between the parametrisations.
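A hedged sketch of how such an analysis is usually parametrised (a standard form from the literature; the thesis's exact parametrisation is not given in the abstract). The temperature angular power spectrum splits into adiabatic, isocurvature and correlation parts, and one natural reading of the quoted 4% is the nonadiabatic share of the temperature variance:

\[
  C_\ell = C_\ell^{\mathrm{ad}} + C_\ell^{\mathrm{iso}} + C_\ell^{\mathrm{cor}},
  \qquad
  f_{\mathrm{nad}}
  = \frac{\sum_\ell (2\ell+1)\bigl(C_\ell^{\mathrm{iso}} + C_\ell^{\mathrm{cor}}\bigr)}
         {\sum_\ell (2\ell+1)\, C_\ell}
  \approx 4\% .
\]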

Relevance: 10.00%

Abstract:

Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, cf. black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has been no direct experimental evidence contradicting general relativity so far - on the contrary, it has passed a variety of observational tests - it is a question worth asking: why should the effective theory of gravity be of the exact form of general relativity? If general relativity is modified, how do the predictions of the theory change? Furthermore, how far can we go with the changes before we are faced with contradictions with experiment? Along with the changes, could there be new phenomena that we could measure to find hints of the form of the quantum theory of gravity? This thesis is about a class of modified gravity theories called f(R) models, and in particular about the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars like neutron stars. Due to the nature of f(R) models, the role of the independent connection of the spacetime is emphasized throughout the thesis.
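For reference, the action of f(R) gravity in the Palatini form, where the connection Γ is an independent variable as emphasised in the thesis (standard notation; general relativity is recovered for f(R) = R − 2Λ):

\[
  S = \frac{1}{2\kappa} \int d^{4}x \, \sqrt{-g}\; f\!\bigl(R(g,\Gamma)\bigr) + S_{\mathrm{m}}[g,\psi],
  \qquad \kappa = 8\pi G .
\]

In the metric formulation, by contrast, Γ is fixed to the Levi-Civita connection of g; the two formulations coincide only when f(R) is linear in R.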

Relevance: 10.00%

Abstract:

An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task which has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a log-linear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces within the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
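The abstract does not spell out the dimensionality-reduction scheme or data structures used, so as a hedged sketch of the general log-linear linking idea: map each observation set to a low-dimensional feature vector (here a placeholder) and find candidate identifications as near neighbours in that space with a k-d tree, running full statistical orbital inversion only on the candidates.

import numpy as np
from scipy.spatial import cKDTree

def candidate_linkages(features, radius):
    # features: (n, d) array, one reduced orbit representation per observation
    # set. Tree construction and neighbour queries are O(n log n) overall,
    # replacing the O(n^2) all-pairs comparison; each returned pair would then
    # be confirmed or rejected by full statistical orbital inversion.
    tree = cKDTree(features)
    return tree.query_pairs(r=radius)

rng = np.random.default_rng(0)
feats = rng.uniform(size=(10_000, 3))   # placeholder 3-D "addresses"
pairs = candidate_linkages(feats, 0.01)
print(len(pairs), "candidate pairs for full inversion")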

Relevance: 10.00%

Abstract:

This thesis consists of four research papers and an introduction providing some background. The structure in the universe is generally considered to originate from quantum fluctuations in the very early universe. The standard lore of cosmology states that the primordial perturbations are almost scale-invariant, adiabatic, and Gaussian. A snapshot of the structure from the time when the universe became transparent can be seen in the cosmic microwave background (CMB). For a long time, mainly the power spectrum of the CMB temperature fluctuations has been used to obtain observational constraints, especially on deviations from scale-invariance and pure adiabaticity. Non-Gaussian perturbations provide a novel and very promising way to test theoretical predictions. They probe beyond the power spectrum, or two-point correlator, since non-Gaussianity involves higher-order statistics. The thesis concentrates on the non-Gaussian perturbations arising in several situations involving two scalar fields, namely hybrid inflation and various forms of preheating. First we go through some basic concepts -- such as cosmological inflation, reheating and preheating, and the role of scalar fields during inflation -- which are necessary for understanding the research papers. We also review the standard linear cosmological perturbation theory. The second-order perturbation theory formalism for two scalar fields is developed. We explain what is meant by non-Gaussian perturbations, and discuss some difficulties in their parametrisation and observation. In particular, we concentrate on the nonlinearity parameter. The prospects of observing non-Gaussianity are briefly discussed. We apply the formalism and calculate the evolution of the second-order curvature perturbation during hybrid inflation. We estimate the amount of non-Gaussianity in the model and find that there is a possibility for an observational effect. The non-Gaussianity arising in preheating is also studied. We find that the level produced by the simplest model of instant preheating is insignificant, whereas standard preheating with parametric resonance as well as tachyonic preheating can easily saturate and even exceed the observational limits. We also mention other approaches to the study of primordial non-Gaussianities, which differ from the perturbation-theory method chosen in the thesis work.
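For context, the nonlinearity parameter mentioned above is conventionally defined through the local expansion of the curvature perturbation ζ about its Gaussian part ζ_g (sign and normalisation conventions vary between papers):

\[
  \zeta = \zeta_g + \frac{3}{5}\, f_{\mathrm{NL}} \left( \zeta_g^{2} - \langle \zeta_g^{2} \rangle \right),
\]

so that f_NL controls the leading three-point correlator (the bispectrum), the first statistic beyond the power spectrum.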