20 results for "approximate"

in Helda - Digital Repository of the University of Helsinki


Relevance:

10.00%

Abstract:

Tove Jansson (1914--2001) was a Finnish illustrator, author, artist, caricaturist and comic artist. She is best known for her Moomin books, written in Swedish, which she illustrated herself and published between 1945 and 1977. My study focuses on the interweaving of images and words in Jansson's picturebooks, novels and short stories situated in the fantasy world of Moomin Valley. In particular, it concentrates on Jansson's development of a special kind of aesthetics of movement and stasis, based upon both illustration and text. The conventions of picturebook art and illustration are significant to both Jansson's visual art and her writing, and she was acutely conscious of them. My analysis of Jansson's work begins by discussing her first published picturebooks and less familiar illustrations (before she began her Moomin books); I then proceed to discuss her three Moomin picturebooks, The Book about Moomin, Mymble and Little My; Who Will Comfort Toffle?; and The Dangerous Journey.

The discussion moves from images to words and from words to images: Barthes's (1982) concept of anchoring and, in particular, what he calls "relaying" form a point for reading and viewing Moomin texts and illustrations in a complementary relation, in which the message's unity occurs on a higher level: "that of the story, the anecdote, the diegesis". The eight illustrated Moomin novels and one collection of short stories are analysed in a similar manner, taking into account the academic discourse about picturebooks developed in the last decade of the 20th century and the beginning of the 21st century by, among others, scholars such as Nodelman, Rhedin, Doonan, Thiele, Stephens, Lewis, Nikolajeva and Scott.

In her Moomin books, Jansson uses a wide variety of narrative and illustrative styles which complement each other. Each book is different and unique in its own way, but a certain development or progression of mood and representation can be seen when assessing the series as a whole. Jansson's early stories are happy and adventurous, but her later Moomin novels, beginning with Moominland Midwinter, focus more on the interiority of the characters, placing them in difficult situations which approximate social reality. This orientation is also reflected in the representation of movement and space. The books published first include more obviously descriptive passages, exemplifying the tradition of literary pictorialism, whereas in Jansson's later work the space develops into something alive, which can have an enduring effect on the characters' personalities and behaviour.

This study shows how the idea of an image -- a dynamic image -- forms a holistic foundation for Jansson's imagination and work. The idea of central perspective, or frame, for instance, provided inspiration for whole stories or for the way that she developed her characters, as in the case of the Fillyjonk, who is a complex female figure, simultaneously frantic and prim. The idea of movement is central to the narrative art of picturebooks and illustrated texts, particularly in relation to the way that action is depicted. Jansson, however, also develops a specific choreography of characters in which poses and postures signify action, feelings and relationships. Here, I use two ideas from modern dance, contraction and release (Graham), to characterise the language of movement which is evident in Jansson's words and images. In Jansson's final Moomin novels and short stories, the idea of space becomes more and more dynamic and closely linked with characterisation. My study also examines a number of Jansson's early sketches for her Moomin novels, in which movement is performed much more dramatically than in the illustrations which appeared in the last novels to be published.

Relevance:

10.00%

Abstract:

An overwhelming majority of all the research on soil phosphorus (P) has been carried out with samples taken from surface soils only, and our understanding of the forms and reactions of P at the soil profile scale is based on few observations. In Finland, interest in studying P in complete soil profiles has been particularly limited because of the lack of a tradition of studying soil genesis, morphology, or classification. In this thesis, the P reserves and the retention of orthophosphate phosphorus (PO4-P) were examined in four cultivated mineral soil profiles in Finland (three Inceptisols and one Spodosol). The soils were classified according to the U.S. Soil Taxonomy, and soil samples were taken from the genetic horizons in the profiles. The samples were analyzed for total P concentration, Chang and Jackson P fractions, P sorption properties, concentrations of water-extractable P, and concentrations of oxalate-extractable Al and Fe. Theoretical P sorption capacities and degrees of P saturation were calculated from the oxalate-extraction and P-fractionation data.

The studied profiles can be divided into sections with clearly differing P characteristics by their master horizons Ap, B and C. The C (or transitional BC) horizons below an approximate depth of 70 cm were dominated by presumably apatitic, H2SO4-soluble P. The concentration of total P in the C horizons ranged from 729 to 810 mg kg-1. In the B horizons, between the depths of 30 and 70 cm, a significant part of the primary acid-soluble P had been weathered and transformed to secondary P forms. The mean weathering rate of the primary P in the soils was estimated to vary between 230 and 290 g ha-1 year-1. The degrees of P saturation in the B and C horizons were smaller than 7%, and the solubility of PO4-P was negligible. The P conditions in the Ap horizons differed drastically from those in the subsurface horizons. The high concentrations of total P (689-1870 mg kg-1) in the Ap horizons are most likely attributable to long-term cultivation with positive P balances. A significant proportion of the P in the Ap horizons occurred in NH4F- and NaOH-extractable forms and as organic P. These three P pools, together with the concentrations of oxalate-extractable Al and Fe, seem to control the dynamics of PO4-P in the soils. The degrees of P saturation in the Ap horizons were greater (8-36%) than in the subsurface horizons. This was also reflected in the sorption experiments: only the Ap horizons were able to maintain elevated PO4-P concentrations in the solution phase, whereas all the subsoil horizons acted as sinks for PO4-P.

Most of the available sorption capacity in the soils is located in the B horizons. The results suggest that this capacity could be utilized to reduce the losses of soluble P from excessively fertilized soils by mixing highly sorptive material from the B horizons with the P-enriched surface soil. The drastic differences in P characteristics observed between adjoining horizons have to be taken into consideration when conducting soil sampling. Sampling of subsoils must be done according to the genetic horizons or at small depth increments; otherwise, contrasting materials are likely to be mixed in the same sample, and the results from such samples are not representative of any material present in the studied profile. Air-drying of soil samples was found to alter the results of the sorption experiments and the water extractions, which indicates that studies of the most labile P forms in soil should be carried out with moist samples.
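The degree of P saturation referred to above is conventionally derived from the oxalate-extraction data. The abstract does not spell out the exact formula used, so the following is a standard textbook formulation, given for orientation only, with the empirical scaling factor alpha commonly taken as about 0.5:

```latex
% Degree of phosphorus saturation (DPS) from ammonium-oxalate data;
% a standard formulation, not necessarily the exact variant of the thesis.
\[
  \mathrm{DPS} \;=\; 100\% \times \frac{P_{\mathrm{ox}}}{\mathrm{PSC}},
  \qquad
  \mathrm{PSC} \;=\; \alpha \,\bigl( \mathrm{Al}_{\mathrm{ox}} + \mathrm{Fe}_{\mathrm{ox}} \bigr),
\]
% where P_ox, Al_ox and Fe_ox are the oxalate-extractable P, Al and Fe
% (in mmol kg^-1), PSC is the theoretical P sorption capacity, and
% alpha ~ 0.5 is an empirical scaling factor.
```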

Relevance:

10.00%

Abstract:

Finland has moved from growing vegetables by natural light to year-round greenhouse production using artificial lighting. Determining the sensory effects on greenhouse-grown vegetables is important, as sensory evaluation provides information which chemical methods cannot: it can tell us about the quality of the samples, which affects consumers' behaviour. There are different opinions on how the quality of vegetables should be determined, and consumers are dissatisfied with the quality of vegetables and fruits, although the variety of products is larger than ever. The aim of this study was to find out how artificial lighting contributes to the sensory quality of greenhouse tomatoes and cucumbers compared to traditional natural lighting, and how storage affects the sensory attributes of the samples. The study comprised two sets of tomatoes and two sets of cucumbers, representing two different harvest seasons. Sensory evaluation involved two steps: first the samples were sorted, and then a profile was generated using descriptive analysis. Sorting was found to give approximate information on differences between tomato and cucumber samples. The dimensions of the resulting MDS maps could be interpreted in terms of age and lighting technique, and the reliability of the sorting results was quite good. The quality of the natural-light products was inconsistent. Production technology had more of an effect on cucumber samples than on tomato samples: natural-light cucumbers were, for example, sweeter and softer than artificial-light cucumbers, and age had an especially large effect on cucumber appearance characteristics. There were fewer differences between tomato samples than between cucumber samples. Production technology had less of an effect on tomato samples than age; hardness, for example, decreased during storage. In this study, it was found that artificial lighting has little effect on the sensory quality of Finnish greenhouse tomatoes compared with tomatoes grown under natural light.

Relevance:

10.00%

Abstract:

In Finland, peat harvesting sites are utilized almost down to the mineral soil. The properties of the mineral subsoil are therefore likely to have a considerable influence on a site's suitability for the various forms of after-use. The aims of this study were to identify the chemical and physical properties of mineral subsoils that may limit the after-use of cut-over peatlands, to define a minimum practice for mineral subsoil studies, and to describe the role of different geological areas. The future percentages of the different after-use forms were predicted, which also made it possible to predict carbon accumulation in this future situation. Mineral subsoils of 54 different peat production areas were studied. Their general features and grain size distributions were analysed, along with pH, electrical conductivity, organic matter, water-soluble nutrients (P, NO3-N, NH4-N, S and Fe) and exchangeable nutrients (Ca, Mg and K). In some cases other elements were analysed as well. In an additional case study, the carbon accumulation effectiveness before intervention was evaluated on three sites in the Oulu area (representing sites typically considered for peat production).

Areas with relatively sulphur-rich mineral subsoil and pool-forming areas with very fine and compact mineral subsoil together covered approximately one fifth of all areas. These areas were unsuitable for commercial use and were recommended, for example, for mire regeneration. Another approximately one fifth of the areas consisted of very coarse or very fine sediments; commercial use of these areas would demand special techniques, such as using the remaining peat layer to compensate for properties missing from the mineral subsoil. A single after-use form was seldom suitable for a whole released peat production area, and three typical distribution patterns (models) of different mineral subsoils within individual peatlands were found. Of the studied cut-over peatlands, 57% were well suited for forestry. In a conservative calculation, 26% of the areas were clearly suitable for agriculture, horticulture or energy crop production; if till without large boulders is included, the percentage of areas suitable for field crop production rises to 42%. Of all areas, 9-14% were well suited for mire regeneration or bird sanctuaries, but all areas were considered possible for mire regeneration with the correct techniques. A further 11% were recommended for mire regeneration to avoid disturbing the mineral subsoil, so in total 20-25% of the areas would be used for rewetting.

High sulphur concentrations and acidity were typical of the areas below the highest shoreline of the ancient Litorina Sea and of the Lake Ladoga-Bothnian Bay zone. Differences related to nutrient status were also detected: in coarse sediments, the natural nutrient concentration was clearly higher in the Lake Ladoga-Bothnian Bay zone and in the areas of Svecokarelian schists and gneisses than in the granitoid area of central Finland and in the Archaean gneiss areas. Based on this study, the recommended minimum analysis for after-use planning covers pH, sulphur content and the percentage of fine material (<0.06 mm); nutrient capacity can be assessed using the natural concentrations of calcium, magnesium and potassium. Carbon accumulation scenarios were developed based on the land-use predictions. These scenarios were calculated for the areas currently in peat production and the areas already released from peat production (59,300 ha + 15,671 ha). The carbon accumulation of the scenarios varied between 0.074 and 0.152 million t C a-1. In the three peatlands considered for peat production, the long-term carbon accumulation rates varied between 13 and 24 g C m-2 a-1. The natural annual carbon accumulation had been decreasing towards the time of possible intervention.
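As a rough plausibility check of these figures (our own back-of-the-envelope arithmetic, not a calculation from the thesis), the scenario totals spread over the combined area of 74,971 ha correspond to the following average areal rates:

```latex
% Average areal accumulation rates implied by the scenario totals
% (illustrative unit check only; 1 t ha^-1 = 100 g m^-2):
\[
  \frac{0.074 \times 10^{6}\ \mathrm{t\,C\,a^{-1}}}{74\,971\ \mathrm{ha}}
  \;\approx\; 0.99\ \mathrm{t\,C\,ha^{-1}\,a^{-1}}
  \;\approx\; 99\ \mathrm{g\,C\,m^{-2}\,a^{-1}},
  \qquad
  \frac{0.152 \times 10^{6}\ \mathrm{t\,C\,a^{-1}}}{74\,971\ \mathrm{ha}}
  \;\approx\; 203\ \mathrm{g\,C\,m^{-2}\,a^{-1}}.
\]
% For comparison, the pre-intervention rates quoted above for the three
% case-study peatlands were 13-24 g C m^-2 a^-1.
```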

Relevance:

10.00%

Abstract:

In this thesis the use of the Bayesian approach to statistical inference in fisheries stock assessment is studied. The work was conducted in collaboration with the Finnish Game and Fisheries Research Institute, using the monitoring and prediction of the juvenile salmon population in the River Tornionjoki, the largest salmon river flowing into the Baltic Sea, as an example application. The thesis tackles the issues of model formulation and model checking, as well as computational problems related to Bayesian modelling, in the context of fisheries stock assessment. Each article of the thesis provides a novel method, either for extracting information from data obtained via a particular type of sampling system or for integrating the information about the fish stock from multiple sources by means of a population dynamics model. Mark-recapture and removal sampling schemes and a random catch sampling method are covered for the estimation of the population size; in addition, a method for estimating the stock composition of a salmon catch based on DNA samples is presented. For most of the articles, Markov chain Monte Carlo (MCMC) simulation has been used as a tool to approximate the posterior distribution. Problems arising from the sampling method are also briefly discussed and potential solutions for these problems are proposed. Special emphasis in the discussion is given to the philosophical foundation of the Bayesian approach in the context of fisheries stock assessment: it is argued that the role of the subjective prior knowledge needed in practically all parts of a Bayesian model should be recognized and consequently fully utilised in the process of model formulation.
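To illustrate the MCMC machinery mentioned above, here is a minimal random-walk Metropolis sketch for a toy mark-recapture posterior; the model, the numbers and the function names are our own simplified illustration, not the models used in the thesis:

```python
import math
import random

def log_posterior(p, recaptured, caught):
    """Unnormalized log posterior for a recapture probability p with a
    uniform prior: recaptured ~ Binomial(caught, p). Toy model only."""
    if not 0.0 < p < 1.0:
        return -math.inf
    return recaptured * math.log(p) + (caught - recaptured) * math.log(1.0 - p)

def metropolis(n_iter=20000, step=0.05, recaptured=42, caught=400):
    """Random-walk Metropolis sampler over the scalar parameter p."""
    p = 0.5                                   # starting state
    lp = log_posterior(p, recaptured, caught)
    samples = []
    for _ in range(n_iter):
        proposal = p + random.gauss(0.0, step)
        lp_new = log_posterior(proposal, recaptured, caught)
        # Accept with probability min(1, posterior ratio), computed in logs.
        if random.random() < math.exp(min(0.0, lp_new - lp)):
            p, lp = proposal, lp_new
        samples.append(p)
    return samples[n_iter // 2:]              # crude burn-in: drop first half

if __name__ == "__main__":
    draws = metropolis()
    print("posterior mean of p: %.3f" % (sum(draws) / len(draws)))
```

The same accept/reject logic carries over to the multi-parameter population dynamics models the thesis integrates; only the log-posterior and the proposal mechanism change.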

Relevance:

10.00%

Abstract:

It is well known that an integrable (in the sense of Arnold-Jost) Hamiltonian system gives rise to quasi-periodic motion, with trajectories running on invariant tori that foliate the whole phase space. If we perturb an integrable system, the Kolmogorov-Arnold-Moser (KAM) theorem states that, provided a non-degeneracy condition holds and the perturbation is sufficiently small, most of the invariant tori carrying quasi-periodic motion persist, becoming only slightly deformed; the measure of the set of persisting invariant tori grows as the size of the perturbation shrinks. In the first part of the thesis we use a Renormalization Group (RG) scheme to prove the classical KAM result in the case of a non-analytic perturbation (which is only assumed to have continuous derivatives up to a sufficiently large order). We proceed by solving a sequence of problems in which the perturbations are analytic approximations of the original one, and we finally show that the approximate solutions converge to a differentiable solution of the original problem. In the second part we use an RG scheme with continuous scales, so that instead of solving an iterative equation, as in the classical RG approach to KAM, we end up solving a partial differential equation. This allows us to reduce the complications of treating a sequence of iterative equations to an application of the Banach fixed point theorem in a suitable Banach space.
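The classical setting can be summarised as follows (a generic statement of the framework for orientation; the thesis's precise hypotheses and constants differ):

```latex
% Near-integrable Hamiltonian in action-angle variables:
\[
  H_\varepsilon(I, \varphi) \;=\; H_0(I) + \varepsilon\, f(I, \varphi),
  \qquad (I, \varphi) \in \mathbb{R}^n \times \mathbb{T}^n .
\]
% For eps = 0 the tori {I = const} carry quasi-periodic motion with
% frequencies omega(I) = \nabla H_0(I). The KAM theorem asserts that,
% under a non-degeneracy condition on H_0 and for eps small, the tori
% whose frequencies satisfy a Diophantine condition
%   |\langle \omega, k \rangle| \ge \gamma |k|^{-\tau}
%   for all k in Z^n \ {0}
% persist in slightly deformed form, and the measure of the set of
% destroyed tori tends to zero as eps -> 0.
```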

Relevance:

10.00%

Abstract:

In this thesis we study a few games related to non-wellfounded and stationary sets. Games have turned out to be an important tool in mathematical logic, ranging from semantic games that define the truth of a sentence in a given logic to, for example, games on real numbers whose determinacy has important consequences for the consistency of certain large cardinal assumptions. The equality of non-wellfounded sets can be determined by a so-called bisimulation game, already used to identify processes in theoretical computer science and possible-world models in modal logic. Here we present a game to classify non-wellfounded sets according to their branching structure. We also describe a way to approximate non-wellfounded sets with hereditarily finite wellfounded sets; the framework used for this is domain theory. Moving back to classical wellfounded set theory, we also study games on stationary sets. In the Banach-Mazur game, also called the ideal game, the players play a descending sequence of stationary sets, and the second player tries to keep their intersection stationary; the game is connected to the precipitousness of the corresponding ideal. In the pressing-down game, the first player plays regressive functions defined on stationary sets, and the second player responds with a stationary set on which the function is constant, again trying to keep the intersection stationary. This game has applications in model theory to the determinacy of the Ehrenfeucht-Fraïssé game. We show that it is consistent that these two games are not equivalent.

Relevance:

10.00%

Abstract:

The analysis of sequential data is required in many diverse areas such as telecommunications, stock market analysis, and bioinformatics. A basic problem related to the analysis of sequential data is the sequence segmentation problem. A sequence segmentation is a partition of the sequence into a number of non-overlapping segments that cover all data points, such that each segment is as homogeneous as possible. This problem can be solved optimally using a standard dynamic programming algorithm.

In the first part of the thesis, we present a new approximation algorithm for the sequence segmentation problem. This algorithm has a smaller running time than the optimal dynamic programming algorithm, while guaranteeing a bounded approximation ratio. The basic idea is to divide the input sequence into subsequences, solve the problem optimally in each subsequence, and then appropriately combine the solutions to the subproblems into one final solution.

In the second part of the thesis, we study alternative segmentation models that are devised to better fit the data. More specifically, we focus on clustered segmentations and segmentations with rearrangements. While in the standard segmentation of a multidimensional sequence all dimensions share the same segment boundaries, in a clustered segmentation the multidimensional sequence is segmented in such a way that dimensions are allowed to form clusters; each cluster of dimensions is then segmented separately. We formally define the problem of clustered segmentations and experimentally show that segmenting sequences using this model leads to solutions with smaller error for the same model cost. Segmentation with rearrangements is a novel variation of the segmentation problem: in addition to partitioning the sequence, we also seek to apply a limited amount of reordering so that the overall representation error is minimized. We formulate the problem of segmentation with rearrangements and show that it is NP-hard to solve or even to approximate. We devise effective algorithms for the proposed problem, combining ideas from dynamic programming and outlier-detection algorithms for sequences.

In the final part of the thesis, we discuss the problem of aggregating the results of segmentation algorithms on the same set of data points. In this case, we are interested in producing a partitioning of the data that agrees as much as possible with the input partitions. We show that this problem can be solved optimally in polynomial time using dynamic programming. Furthermore, we show that not all data points are candidates for segment boundaries in the optimal solution.
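The standard dynamic programming algorithm mentioned above can be sketched as follows for a one-dimensional sequence, using squared deviation from the segment mean as the homogeneity criterion (a minimal illustration; the variable names and the error measure are our choices, not necessarily those of the thesis):

```python
import math

def optimal_segmentation(x, k):
    """Split sequence x into k contiguous segments minimizing the total
    squared deviation from each segment's mean; O(n^2 k) time."""
    n = len(x)
    # Prefix sums give any segment's squared error in O(1).
    s = [0.0] * (n + 1)   # s[i] = sum of x[:i]
    q = [0.0] * (n + 1)   # q[i] = sum of squares of x[:i]
    for i, v in enumerate(x):
        s[i + 1] = s[i] + v
        q[i + 1] = q[i] + v * v

    def sq_err(i, j):
        """Squared error of segment x[i:j] around its mean."""
        m = j - i
        return q[j] - q[i] - (s[j] - s[i]) ** 2 / m

    # dp[p][j] = minimal error of splitting x[:j] into p segments.
    dp = [[math.inf] * (n + 1) for _ in range(k + 1)]
    cut = [[0] * (n + 1) for _ in range(k + 1)]
    dp[0][0] = 0.0
    for p in range(1, k + 1):
        for j in range(p, n + 1):
            for i in range(p - 1, j):
                cand = dp[p - 1][i] + sq_err(i, j)
                if cand < dp[p][j]:
                    dp[p][j], cut[p][j] = cand, i
    # Recover the segment boundaries by walking the cut table backwards.
    bounds, j = [], n
    for p in range(k, 0, -1):
        bounds.append((cut[p][j], j))
        j = cut[p][j]
    return dp[k][n], bounds[::-1]

if __name__ == "__main__":
    err, segs = optimal_segmentation([1, 1, 1, 5, 5, 5, 9, 9], 3)
    print(err, segs)   # expect zero error with boundaries at 3 and 6
```

The thesis's approximation algorithm runs this optimal routine on shorter subsequences and then combines the partial solutions, which is what makes the smaller running time possible.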

Relevance:

10.00%

Abstract:

The Minimum Description Length (MDL) principle is a general, well-founded theoretical formalization of statistical modeling. The most important notion of MDL is the stochastic complexity, which can be interpreted as the shortest description length of a given sample of data relative to a model class. The exact definition of the stochastic complexity has gone through several evolutionary steps. The latest instantiation is based on the so-called Normalized Maximum Likelihood (NML) distribution, which has been shown to possess several important theoretical properties. However, applications of this modern version of MDL have been quite rare because of computational complexity: for discrete data the definition of NML involves an exponential sum, and in the case of continuous data a multi-dimensional integral that is usually infeasible to evaluate or even approximate accurately. In this doctoral dissertation, we present mathematical techniques for computing NML efficiently for some model families involving discrete data. We also show how these techniques can be used to apply MDL in two practical applications: histogram density estimation and clustering of multi-dimensional data.
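For orientation, the standard definition of the NML distribution and the stochastic complexity for discrete data of length n reads (a textbook statement rather than a quotation from the dissertation):

```latex
% Normalized Maximum Likelihood distribution for a model class M with
% maximum-likelihood estimator \hat{\theta}(.); the stochastic
% complexity is its negative logarithm.
\[
  P_{\mathrm{NML}}(x^n \mid \mathcal{M})
  \;=\;
  \frac{P\bigl(x^n \mid \hat{\theta}(x^n)\bigr)}
       {\sum_{y^n} P\bigl(y^n \mid \hat{\theta}(y^n)\bigr)},
  \qquad
  \mathrm{SC}(x^n \mid \mathcal{M})
  \;=\; -\log P_{\mathrm{NML}}(x^n \mid \mathcal{M}).
\]
% The normalizing sum ranges over all possible data sequences y^n of
% length n, which is exactly the exponential sum whose efficient
% computation the dissertation addresses.
```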

Relevance:

10.00%

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application.

A sensor node consumes energy both when it is transmitting or forwarding data and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; this leads to the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake. In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links. Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are interested not only in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating-dominating codes are more appropriate.

This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating-dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs, which are geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating-dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
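For reference, the generic shape of a max-min linear program is as follows (a schematic formulation; the thesis's precise constraint structure may differ):

```latex
% Max-min linear program: maximize the worst-off objective among a
% family of linear objectives, subject to linear packing constraints.
\[
  \begin{aligned}
    \text{maximize}   \quad & \min_{k \in K} \; c_k^{\mathsf T} x \\
    \text{subject to} \quad & A x \le b, \qquad x \ge 0 .
  \end{aligned}
\]
% In the lifetime-maximisation application, x encodes the data flows,
% the packing constraints Ax <= b model the per-node energy budgets,
% and each objective c_k^T x measures the utility delivered to one of
% the competing agents (for example, one source of sensor data).
```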

Relevance:

10.00%

Abstract:

The first part of this work investigates the molecular epidemiology of a human enterovirus (HEV), echovirus 30 (E-30). This project is part of a series of studies performed in our research team analyzing the molecular epidemiology of HEV-B viruses. A total of 129 virus strains had been isolated in different parts of Europe. The sequence analysis was performed in three different genomic regions: 420 nucleotides (nt) in the VP4/VP2 capsid protein coding region, the entire VP1 capsid protein coding gene of 876 nt, and 150 nt in the VP1/2A junction region. The analysis revealed a succession of dominant sublineages within a major genotype. The temporally earlier genotypes had been replaced by a genetically homogeneous lineage that has been circulating in Europe since the late 1970s. The same genotype was found by other research groups in North America and Australia; globally, other co-circulating genetic lineages also exist. The prevalence of a dominant genotype makes E-30 different from other previously studied HEVs, such as polioviruses and coxsackieviruses B4 and B5, for which several coexisting genetic lineages have been reported.

The second part of this work deals with the molecular epidemiology of human rhinoviruses (HRVs). A total of 61 field isolates were studied in the 420-nt stretch of the VP4/VP2 capsid coding region. The isolates were collected from children under two years of age in Tampere, Finland. Sequences from the clinical isolates clustered in the two previously known phylogenetic clades. Seasonal clustering was found, and several distinct serotype-like clusters were found to co-circulate during the same epidemic season. Reappearance of a cluster after disappearing for a season was also observed. The molecular epidemiology of the analyzed strains turned out to be complex, and we decided to continue our studies of HRV. Only five previously published complete genome sequences of HRV prototype strains were available for analysis. Therefore, all designated HRV prototype strains (n=102) were sequenced in the VP4/VP2 region, and the possibility of genetic typing of HRV was evaluated. Seventy-six of the 102 prototype strains clustered in HRV genetic group A (HRV-A) and 25 in group B (HRV-B); serotype 87 clustered separately from the other HRVs, with HEV species D. The field strains of HRV represented as many as 19 different genotypes, as judged by an approximate demarcation of 20% nt difference in the VP4/VP2 region. The interserotypic differences of HRV were generally similar to those reported between different HEV serotypes (i.e., about 20%), but smaller differences, of less than 10%, were also observed. Because some HRV serotypes are genetically so closely related, we suggest that genetic typing be performed using the criterion of "the closest prototype strain". This study is the first systematic genetic characterization of all known HRV prototype strains, providing a further taxonomic proposal for the classification of HRV: we proposed dividing the genus Human rhinovirus into the species HRV-A and HRV-B.

The final part of the work comprises a phylogenetic analysis of a subset (48) of HRV prototype strains and field isolates (12) in the nonstructural part of the genome coding for the RNA-dependent RNA polymerase (3D). The proposed division of the HRV strains into the species HRV-A and HRV-B was also supported by the 3D region. HRV-B clustered closer to HEV species B and C, and also to polioviruses, than to HRV-A. Intraspecies variation within both HRV-A and HRV-B was greater in the 3D coding region than in the VP4/VP2 coding region, in contrast to HEV; moreover, the diversity of HRV in 3D exceeded that of HEV. One group of HRV-A, designated HRV-A', formed a separate cluster outside the other HRV-A strains in the 3D region; it also formed a cluster in the capsid region, but one located within HRV-A. This may reflect a different evolutionary history of distinct genomic regions among HRV-A. Furthermore, the tree topology within HRV-A in the 3D region differed from that in VP4/VP2, suggesting possible recombination events in the evolution of the strains. No conflicting phylogenies were observed in any of the 12 field isolates. Possible recombination was further studied using Similarity and Bootscanning analyses of the complete HRV genome sequences available in public databases. Evidence for recombination among HRV-A was found, as HRV2 and HRV39 showed higher similarity to each other in the nonstructural part of the genome. Whether HRV2 and HRV39 -- and perhaps also some other HRV-A strains not yet completely sequenced -- are recombinants remains to be determined.
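The "closest prototype strain" typing criterion can be illustrated with a small sketch (toy sequences stand in for real aligned 420-nt VP4/VP2 data, and the function names are ours):

```python
def p_distance(a, b):
    """Proportion of differing nucleotides between two aligned sequences."""
    if len(a) != len(b):
        raise ValueError("sequences must be aligned to equal length")
    return sum(1 for x, y in zip(a, b) if x != y) / len(a)

def assign_type(query, prototypes, demarcation=0.20):
    """Type a query strain by its closest prototype strain.

    prototypes: mapping of serotype name -> aligned VP4/VP2 sequence.
    Returns the closest prototype, the p-distance to it, and whether the
    query falls within the ~20% nucleotide-difference demarcation used
    for genotype assignment in the study.
    """
    best, best_d = min(
        ((name, p_distance(query, seq)) for name, seq in prototypes.items()),
        key=lambda item: item[1],
    )
    return best, best_d, best_d <= demarcation

# Toy example with 20-nt stand-ins for real 420-nt alignments:
prototypes = {"HRV2": "ACGTACGTACGTACGTACGT",
              "HRV39": "ACGTACGTTCGTACGTTCGA"}
print(assign_type("ACGTACGTACGTACGTACGA", prototypes))
# -> ('HRV2', 0.05, True): within the demarcation, typed as HRV2-like.
```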

Relevance:

10.00%

Abstract:

Technical or contaminated ethanol products are sometimes ingested, either accidentally or on purpose. Typical misused products are black-market liquor and automotive products, e.g., windshield washer fluids. In addition to less toxic solvents, these liquids may contain deadly methanol. Symptoms of even lethal solvent poisoning are often non-specific at an early stage. The present series of studies was carried out to develop a method for breath diagnostics of solvent intoxication, to speed up a diagnostic procedure conventionally based on blood tests. Especially in the case of methanol ingestion, the analysis method should be sufficiently sensitive and accurate to detect even small amounts of methanol in a mixture of ethanol and other, less toxic components. In addition to the studies on the FT-IR method, the Dräger 7110 evidential breath analyzer was examined to determine its ability to reveal a coexisting toxic solvent.

An industrial Fourier transform infrared (FT-IR) analyzer was modified for breath testing: the sample cell fittings were widened and the cell size was reduced in order to obtain an alveolar sample directly from a single exhalation. The performance and feasibility of the Gasmet FT-IR analyzer were tested in clinical settings and in the laboratory. Human breath screening studies were carried out with healthy volunteers, inebriated homeless men, emergency room patients and methanol-intoxicated patients. A number of the breath analysis results were compared to blood test results in order to approximate the blood-breath relationship. In the laboratory experiments, the analytical performance of the Gasmet FT-IR analyzer and the Dräger 7110 evidential breath analyzer was evaluated by means of artificial samples resembling exhaled breath.

The investigations demonstrated that a successful breath ethanol analysis by the Dräger 7110 evidential breath analyzer could exclude any significant methanol intoxication. In contrast, the device did not detect very high levels of acetone, 1-propanol and 2-propanol in simulated breath, as it was not equipped to recognize the interfering component. According to the studies, the Gasmet FT-IR analyzer was adequately sensitive, selective and accurate for the diagnostics of solvent intoxication. In addition to diagnostics, the fast breath solvent analysis proved feasible for controlling the ethanol and methanol concentrations during haemodialysis treatment. Because of the simplicity of the sampling and analysis procedure, non-laboratory personnel, such as police officers or social workers, could also operate the analyzer for screening purposes.

Relevance:

10.00%

Abstract:

The first quarter of the 20th century witnessed a rebirth of cosmology, the study of our Universe, as a field of scientific research with testable theoretical predictions. The amount of available cosmological data grew slowly from a few galaxy redshift measurements, rotation curves and local light element abundances to the first detection of the cosmic microwave background (CMB) in 1965. By the turn of the century the amount of data had exploded, incorporating new, exciting cosmological observables such as lensing, Lyman alpha forests, type Ia supernovae, baryon acoustic oscillations and Sunyaev-Zeldovich regions, to name a few.

The CMB, the ubiquitous afterglow of the Big Bang, carries with it a wealth of cosmological information. Unfortunately, that information, encoded in delicate intensity variations, turned out to be hard to extract from the overall temperature. After the first detection, it took nearly 30 years before the first evidence of fluctuations in the microwave background was presented. At present, high precision cosmology is solidly based on precise measurements of the CMB anisotropy, making it possible to pinpoint cosmological parameters to one-in-a-hundred level precision. This progress has made it possible to build and test models of the Universe that differ in the way the cosmos evolved during a fraction of the first second after the Big Bang.

This thesis is concerned with high precision CMB observations. It presents three selected topics along a CMB experiment analysis pipeline. Map-making and residual noise estimation are studied using an approach called destriping. The approximate methods studied are invaluable for the large datasets of any modern CMB experiment and will undoubtedly become even more so when the next generation of experiments reaches the operational stage.

We begin with a brief overview of cosmological observations and describe the general relativistic perturbation theory. Next we discuss the map-making problem of a CMB experiment and the characterization of the residual noise present in the maps. Finally, the use of modern cosmological data is presented in the study of an extended cosmological model with correlated isocurvature fluctuations. The currently available data are shown to indicate that future experiments are certainly needed to provide more information on these extra degrees of freedom. Any solid evidence of isocurvature modes would have a considerable impact due to their power in model selection.
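For orientation, destriping is conventionally formulated as the following linear model of the time-ordered data (a textbook-level summary; the thesis's own notation and noise model may differ):

```latex
% Destriping data model for the time-ordered data (TOD) vector d:
\[
  d \;=\; P\,m \;+\; F\,a \;+\; n' ,
\]
% where P is the pointing matrix mapping the sky map m into the TOD,
% F spreads a vector a of constant offsets ("baselines") over the TOD,
% and n' is white noise. The baselines are estimated by maximum
% likelihood and subtracted, after which the map is obtained by binning:
\[
  \hat{a} \;=\; \bigl(F^{\mathsf T} Z F\bigr)^{-1} F^{\mathsf T} Z\, d,
  \qquad
  \hat{m} \;=\; \bigl(P^{\mathsf T} P\bigr)^{-1} P^{\mathsf T}
                \bigl(d - F \hat{a}\bigr),
\]
% with Z = I - P (P^T P)^{-1} P^T projecting out the sky signal.
% Residual noise estimation then characterizes the error left in m-hat
% by the imperfectly recovered baselines.
```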

Relevance:

10.00%

Abstract:

When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, the strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions; it confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed.

One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory is described using a simpler three-dimensional effective theory, which has only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. In addition, numerical lattice simulations have shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them.

In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required for computing the g^6 term in the weak coupling expansion of the pressure of QCD is computed in the effective theory framework for an arbitrary number of colors. The two last papers, on the other hand, focus on the construction of center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show --- in sharp contrast to the perturbative setup --- that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective theory coupling constant that is consistent with the full theory coupling at the critical temperature.
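The dimensionally reduced effective theory of hot QCD alluded to here is commonly written in the following form, known in the literature as EQCD (quoted for orientation from standard references, not from the thesis itself):

```latex
% Three-dimensional effective theory (EQCD) for the static modes of hot
% QCD: a 3d gauge field A_i coupled to an adjoint scalar A_0, the
% remnant of the temporal gauge field.
\[
  S_{\mathrm{E}} \;=\; \int \mathrm{d}^3 x \,\Bigl[
      \tfrac{1}{2}\, \operatorname{Tr} F_{ij}^2
    + \operatorname{Tr} \bigl[ D_i, A_0 \bigr]^2
    + m_{\mathrm{E}}^2 \operatorname{Tr} A_0^2
    + \lambda_{\mathrm{E}} \bigl( \operatorname{Tr} A_0^2 \bigr)^2
  \Bigr],
\]
% where the Debye mass m_E and the couplings are matched perturbatively
% to the full four-dimensional theory at high temperature. This action
% is not invariant under the Z(N) center symmetry of the full theory,
% which is precisely what the center-symmetric variants constructed in
% the thesis are designed to restore.
```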