Abstract:
XML documents are becoming increasingly common in various environments. In particular, enterprise-scale document management is commonly centred around XML, and desktop applications as well as online document collections are soon to follow. The growing number of XML documents increases the importance of appropriate indexing methods and search tools in keeping the information accessible. We therefore focus on content stored in XML format as we develop such indexing methods. Because XML is used for kinds of content ranging all the way from records of data fields to narrative full texts, methods for Information Retrieval face a new challenge: identifying which content is subject to data queries and which should be indexed for full-text search. In response to this challenge, we analyse the relation between character content and XML tags in order to separate full text from data. As a result, we are able both to reduce the size of the index by 5-6% and to improve retrieval precision by selecting the XML fragments to be indexed. Besides being challenging, XML offers many unexplored opportunities that have received little attention in the literature. For example, authors often tag the content they want to emphasise by using a typeface that stands out. The tagged content constitutes phrases that are descriptive of the content and useful for full-text search. Such phrases are simple to detect in XML documents, but easy to confuse with other inline-level text. Nonetheless, the search results seem to improve when the detected phrases are given additional weight in the index. Similar improvements are reported when related content, including titles, captions, and references, is associated with the indexed full text. Experimental results show that, at least for certain types of document collections, the proposed methods help us find the relevant answers. Even when we know nothing about the document structure beyond the XML syntax, we are able to take advantage of that structure when the content is indexed for full-text search.
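As a minimal illustration of the phrase-weighting idea, the Python sketch below parses an XML fragment and gives terms inside emphasis-style inline tags extra weight in a toy term-weight index. The tag set and the boost factor are assumptions for illustration, not the selection heuristics of the thesis.

import xml.etree.ElementTree as ET
from collections import defaultdict

EMPHASIS_TAGS = {"b", "i", "em", "strong"}  # assumed inline emphasis tags
EMPHASIS_BOOST = 2.0                        # assumed extra weight for emphasized phrases

def index_fragment(xml_text):
    """Return a term -> weight map for one XML fragment."""
    weights = defaultdict(float)

    def add_terms(text, w):
        for term in text.lower().split():
            weights[term] += w

    def walk(elem, inherited):
        # An emphasis tag raises the weight for everything nested inside it.
        w = EMPHASIS_BOOST if elem.tag in EMPHASIS_TAGS else inherited
        if elem.text:
            add_terms(elem.text, w)
        for child in elem:
            walk(child, w)
            if child.tail:  # tail text sits directly inside elem, not child
                add_terms(child.tail, w)

    walk(ET.fromstring(xml_text), 1.0)
    return dict(weights)

print(index_fragment("<p>plain text with <em>descriptive phrase</em> inside</p>"))
# {'plain': 1.0, 'text': 1.0, 'with': 1.0, 'descriptive': 2.0, 'phrase': 2.0, 'inside': 1.0}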
Abstract:
Aerosol particles can cause detrimental environmental and health effects. The particles and their precursor gases are emitted from various anthropogenic and natural sources. It is important to know the origin and properties of aerosols in order to reduce their harmful effects efficiently. The diameter of aerosol particles (Dp) varies between ~0.001 and ~100 μm. Fine particles (PM2.5: Dp < 2.5 μm) are especially interesting because they are the most harmful and can be transported over long distances. The aim of this thesis is to study the impact on air quality of pollution episodes of long-range-transported aerosols, which affect the composition of the boundary-layer atmosphere in remote and relatively unpolluted regions of the world. The sources and physicochemical properties of aerosols were investigated in detail, based on various measurements (1) in southern Finland during selected long-range transport (LRT) pollution episodes and unpolluted periods and (2) over the Atlantic Ocean between Europe and Antarctica during a voyage. Furthermore, the frequency of LRT pollution episodes of fine particles in southern Finland was investigated over a period of 8 years, using long-term air quality monitoring data. In southern Finland, the annual mean PM2.5 mass concentrations were low, but LRT caused high peaks of daily mean concentrations every year. At an urban background site in Helsinki, the updated WHO guideline value (24-h PM2.5 mean of 25 μg/m3) was exceeded during 1-7 LRT episodes each year during 1999-2006. The daily mean concentrations varied between 25 and 49 μg/m3 during the episodes, 3-6 times the long-term mean concentration. The in-depth studies of selected LRT episodes in southern Finland revealed that biomass burning in agricultural fields and wildfires, occurring mainly in Eastern Europe, degraded air quality on a continental scale. The strongest LRT episodes of fine particles resulted from open biomass-burning fires, but the emissions from other anthropogenic sources in Eastern Europe also caused significant LRT episodes. Particle mass and number concentrations increased strongly in the accumulation mode (Dp ~ 0.09-1 μm) during the LRT episodes. However, the concentrations of smaller particles (Dp < 0.09 μm) remained low or even decreased due to the uptake of vapours and molecular clusters by LRT particles. The chemical analysis of individual particles showed that the proportions of several anthropogenic particle types (e.g. tar balls, metal oxides/hydroxides, spherical silicate fly ash particles and various calcium-rich particles) increased in southern Finland during an LRT episode, when aerosols originated from the polluted regions of Eastern Europe and some open biomass-burning smoke was also brought in by LRT. During unpolluted periods, when air masses arrived from the north, the proportions of marine aerosols increased. In unpolluted rural regions of southern Finland, both accumulation mode particles and small-sized (Dp ~ 1-3 μm) coarse mode particles originated mostly from LRT. However, the composition of particles differed completely between these two size fractions. In both size fractions, strong internal mixing of chemical components was typical of LRT particles. Thus, the aging of particles has significant impacts on their chemical, hygroscopic and optical properties, which can largely alter the environmental and health effects of LRT aerosols.
Over the Atlantic Ocean, the individual particle composition of small-sized (Dp ~ 1-3 μm) coarse mode particles was affected by continental aerosol plumes to distances of at least 100-1000 km from the coast (e.g. pollutants from industrialized Europe, desert dust from the Sahara and biomass-burning aerosols near the Gulf of Guinea). The rate of chloride depletion from sea-salt particles was high near the coasts of Europe and Africa when air masses arrived from polluted continental regions. Thus, the LRT of continental aerosols had significant impacts on the composition of the marine boundary-layer atmosphere and seawater. In conclusion, integration of the results obtained using different measurement techniques captured the large spatial and temporal variability of aerosols as observed at terrestrial and marine sites, and assisted in establishing the causal link between land-bound emissions, LRT and air quality.
Abstract:
Mitochondrial diseases are caused by disturbances of the energy metabolism. The disorders range from severe childhood neurological diseases to muscle diseases of adults. Recently, mitochondrial dysfunction has also been found in Parkinson's disease, diabetes, certain types of cancer and premature aging. Mitochondria are the power plants of the cell, but they also participate in the regulation of cell growth, signaling and cell death. Mitochondria have their own genetic material, mtDNA, which contains the genetic instructions for cellular respiration. A single cell may host thousands of mitochondria, and several mtDNA molecules may reside inside a single mitochondrion. All proteins needed for mtDNA maintenance are, however, encoded by the nuclear genome, and therefore mutations of the corresponding genes can also cause mitochondrial disease. We have here studied the function of the mitochondrial helicase Twinkle. Our research group has previously identified nuclear Twinkle gene mutations underlying an inherited adult-onset disorder, progressive external ophthalmoplegia (PEO). Characteristic of the PEO disease is the accumulation of multiple mtDNA deletions in tissues such as the muscle and brain. In this study, we have shown that the Twinkle helicase is essential for mtDNA maintenance and that it is capable of regulating mtDNA copy number. Our results support the role of Twinkle as the mtDNA replication helicase. No cure is available for mitochondrial disease. Good disease models are needed for studies of the cause of disease and its progression, and for treatment trials. Such a disease model, which replicates the key features of the PEO disease, has been generated in this study. The model allows for careful inspection of how Twinkle mutations lead to mtDNA deletions and further cause the PEO disease. This model will be utilized in a range of studies addressing the delay of the disease onset and progression, and in subsequent treatment trials. In conclusion, in this thesis fundamental knowledge of the function of the mitochondrial helicase Twinkle was gained. In addition, the first model for adult-onset mitochondrial disease was generated.
Abstract:
In order to predict the current state and future development of the Earth's climate, detailed information on atmospheric aerosols and aerosol-cloud interactions is required. Furthermore, these interactions need to be expressed in such a way that they can be represented in large-scale climate models. The largest uncertainties in the estimate of radiative forcing on the present-day climate are related to the direct and indirect effects of aerosols. In this work, aerosol properties were studied at Pallas and Utö in Finland, and at Mount Waliguan in Western China. Approximately two years of data from each site were analyzed. In addition, data from two intensive measurement campaigns at Pallas were used. The measurements at Mount Waliguan were the first long-term aerosol particle number concentration and size distribution measurements conducted in this region. They revealed that the number concentrations of aerosol particles at Mount Waliguan were much higher than those measured at similar altitudes in other parts of the world. The particles were concentrated in the Aitken size range, indicating that they were produced within a couple of days prior to reaching the site, rather than being transported over thousands of kilometers. Aerosol partitioning between cloud droplets and cloud interstitial particles was studied at Pallas during two measurement campaigns, the First Pallas Cloud Experiment (First PaCE) and the Second Pallas Cloud Experiment (Second PaCE). The method of using two differential mobility particle sizers (DMPS) to calculate the number concentration of activated particles was found to agree well with direct measurements of cloud droplets. Several parameters important in cloud droplet activation were found to depend strongly on the air mass history; the effects of these parameters partially cancelled each other out. The aerosol number-to-volume concentration ratio was studied at all three sites using long time-series data sets. The ratio was found to vary more than in earlier studies, but less than either the aerosol particle number concentration or the volume concentration alone. Both an air mass dependency and a seasonal pattern were found at Pallas and Utö, but only a seasonal pattern at Mount Waliguan. The number-to-volume concentration ratio was found to follow the seasonal temperature pattern well at all three sites. A new parameterization for the partitioning between cloud droplets and cloud interstitial particles was developed. The parameterization uses the aerosol particle number-to-volume concentration ratio and the aerosol particle volume concentration as the only information on the aerosol number and size distribution. The new parameterization is computationally more efficient than the more detailed parameterizations currently in use, though its accuracy is slightly lower. The new parameterization was also compared to directly observed cloud droplet number concentration data, and good agreement was found.
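As a minimal illustration of the quantity the parameterization relies on, the following Python sketch integrates an assumed lognormal number size distribution to obtain the number concentration N, the volume concentration V, and their ratio N/V. The mode parameters are placeholders for illustration, not values from the thesis.

import numpy as np

def lognormal_mode(dp, n_tot, dp_g, sigma_g):
    """dN/dlog10(Dp) for one lognormal mode (n_tot in cm^-3, dp in um)."""
    return (n_tot / (np.log10(sigma_g) * np.sqrt(2 * np.pi))
            * np.exp(-(np.log10(dp / dp_g))**2 / (2 * np.log10(sigma_g)**2)))

dp = np.logspace(-3, 1, 500)                  # diameters, 0.001-10 um
dlogdp = np.diff(np.log10(dp)).mean()         # constant log-spaced bin width
dndlog = lognormal_mode(dp, n_tot=1000.0, dp_g=0.1, sigma_g=1.8)

n_conc = np.sum(dndlog) * dlogdp                        # cm^-3
v_conc = np.sum(dndlog * np.pi / 6 * dp**3) * dlogdp    # um^3 cm^-3
print(f"N = {n_conc:.0f} cm^-3, V = {v_conc:.2f} um^3 cm^-3, "
      f"N/V = {n_conc / v_conc:.1f} um^-3")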
Abstract:
Einstein's general relativity is a classical theory of gravitation: it is a postulate on the coupling between the four-dimensional, continuous spacetime and the matter fields in the universe, and it yields their dynamical evolution. It is believed that general relativity must be replaced by a quantum theory of gravity at least at the extremely high energies of the early universe and in regions of strong spacetime curvature, such as black holes. Various attempts to quantize gravity, including conceptually new models such as string theory, have suggested that modifications to general relativity might show up even at lower energy scales. On the other hand, the late-time acceleration of the expansion of the universe, known as the dark energy problem, might also originate from new gravitational physics. Thus, although there has been no direct experimental evidence contradicting general relativity so far (on the contrary, it has passed a variety of observational tests), it is worth asking why the effective theory of gravity should be of the exact form of general relativity. If general relativity is modified, how do the predictions of the theory change? How far can we go with the changes before we are faced with contradictions with experiment? Could the changes bring new phenomena that we could measure to find hints of the form of the quantum theory of gravity? This thesis is on a class of modified gravity theories called f(R) models, and in particular on the effects of changing the theory of gravity on stellar solutions. It is discussed how experimental constraints from measurements in the Solar System restrict the form of f(R) theories. Moreover, it is shown that models which do not differ from general relativity at the weak-field scale of the Solar System can produce very different predictions for dense stars such as neutron stars. Due to the nature of f(R) models, the role of the independent connection of the spacetime is emphasized throughout the thesis.
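For reference, the class of theories in question is conventionally defined by replacing the Einstein-Hilbert Lagrangian density R with a function f(R) in the gravitational action (a standard textbook form, not specific to this thesis):

S = \frac{1}{16\pi G}\int \mathrm{d}^4x\,\sqrt{-g}\,f(R) + S_{\mathrm{m}}[g_{\mu\nu},\psi],

where general relativity is recovered for f(R) = R. In the metric formulation the connection is fixed to be the Levi-Civita connection of g_{\mu\nu}, whereas in the Palatini formulation, where the independent connection mentioned above enters, one sets R = g^{\mu\nu}R_{\mu\nu}(\Gamma) and varies \Gamma independently of the metric, which changes the field equations whenever f is nonlinear.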
Abstract:
An efficient and statistically robust solution for the identification of asteroids among numerous sets of astrometry is presented. In particular, numerical methods have been developed for the short-term identification of asteroids at discovery, and for the long-term identification of scarcely observed asteroids over apparitions, a task that has lacked a robust method until now. The methods are based on the solid foundation of statistical orbital inversion, properly taking into account the observational uncertainties, which allows for the detection of practically all correct identifications. Through the use of dimensionality-reduction techniques and efficient data structures, the exact methods have a loglinear, that is, O(n log n), computational complexity, where n is the number of included observation sets. The methods developed are thus suitable for future large-scale surveys, which anticipate a substantial increase in the astrometric data rate. Due to the discontinuous nature of asteroid astrometry, separate sets of astrometry must be linked to a common asteroid from the very first discovery detections onwards. The reason for the discontinuity in the observed positions is the rotation of the observer with the Earth as well as the motion of the asteroid and the observer about the Sun. Therefore, the aim of identification is to find a set of orbital elements that reproduces the observed positions with residuals similar to the inevitable observational uncertainty. Unless the astrometric observation sets are linked, the corresponding asteroid is eventually lost as the uncertainty of the predicted positions grows too large to allow successful follow-up. Whereas the presented identification theory and the numerical comparison algorithm are generally applicable, that is, also in fields other than astronomy (e.g., in the identification of space debris), the numerical methods developed for asteroid identification can immediately be applied to all objects on heliocentric orbits with negligible effects due to non-gravitational forces in the time frame of the analysis. The methods developed have been successfully applied to various identification problems. Simulations have shown that the methods are able to find virtually all correct linkages despite challenges such as numerous scarce observation sets, astrometric uncertainty, numerous objects confined to a limited region on the celestial sphere, long linking intervals, and substantial parallaxes. Tens of previously unknown main-belt asteroids have been identified with the short-term method in a preliminary study to locate asteroids among numerous unidentified sets of single-night astrometry of moving objects, and scarce astrometry obtained nearly simultaneously with Earth-based and space-based telescopes has been successfully linked despite a substantial parallax. Using the long-term method, thousands of realistic 3-linkages, typically spanning several apparitions, have so far been found among designated observation sets each spanning less than 48 hours.
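The thesis's exact comparison algorithms are not reproduced here, but the loglinear scaling can be illustrated with a spatial data structure: if each observation set is mapped to a point in a low-dimensional space (hypothetically, some reduced orbital-element representation), a kd-tree finds nearby candidate pairs without an O(n^2) all-against-all comparison. A Python sketch under these assumptions:

import numpy as np
from scipy.spatial import cKDTree

# Placeholder "reduced representations" of n observation sets; in reality
# these would come from statistical orbital inversion plus dimensionality
# reduction, which this sketch does not attempt.
rng = np.random.default_rng(0)
points = rng.random((10_000, 3))

tree = cKDTree(points)            # construction scales as O(n log n)
pairs = tree.query_pairs(r=0.01)  # candidate linkages within radius r
print(f"{len(pairs)} candidate pairs to verify with full orbital inversion")

Each surviving candidate pair would then be verified by an attempted common-orbit fit, keeping the expensive step away from the vast majority of non-matching combinations.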
Abstract:
In this thesis we examine multi-field inflationary models of the early Universe. Since non-Gaussianities may make it possible to discriminate between models of inflation, we compute deviations from a Gaussian spectrum of primordial perturbations by extending the delta-N formalism. We use N-flation as a concrete model; our findings show that these models are generically indistinguishable as long as the slow-roll approximation is valid. Besides computing non-Gaussianities, we also investigate preheating after multi-field inflation. Within the framework of N-flation, we find that preheating via parametric resonance is suppressed, an indication that it is the old theory of preheating that is applicable. In addition to studying non-Gaussianities and preheating in multi-field inflationary models, we study magnetogenesis in the early universe. To this end, we propose a mechanism to generate primordial magnetic fields via rotating cosmic string loops. Magnetic fields in the micro-Gauss range have been observed in galaxies and clusters, but their origin has remained elusive. We consider a network of strings and find that rotating cosmic string loops, which are continuously produced in such networks, are viable candidates for magnetogenesis with the relevant strength and length scales, provided we use a high string tension and an efficient dynamo.
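For context, in the standard delta-N formalism that the thesis extends, the curvature perturbation \zeta equals the perturbation of the number of e-folds N(\varphi^I) from an initial flat hypersurface to a final uniform-density hypersurface, and the local non-linearity parameter follows from the derivatives N_{,I} = \partial N/\partial\varphi^I with respect to the field values at horizon exit (standard expressions, quoted here for orientation):

\zeta = \delta N = \sum_I N_{,I}\,\delta\varphi^I + \frac{1}{2}\sum_{I,J} N_{,IJ}\,\delta\varphi^I\delta\varphi^J + \dots,
\qquad
\frac{6}{5} f_{\mathrm{NL}} = \frac{\sum_{I,J} N_{,I}\,N_{,J}\,N_{,IJ}}{\bigl(\sum_K N_{,K}^2\bigr)^2}.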
Abstract:
When ordinary nuclear matter is heated to a high temperature of ~ 10^12 K, it undergoes a deconfinement transition to a new phase, the strongly interacting quark-gluon plasma. While the color-charged fundamental constituents of the nuclei, the quarks and gluons, are at low temperatures permanently confined inside color-neutral hadrons, in the plasma the color degrees of freedom become dominant over nuclear, rather than merely nucleonic, volumes. Quantum Chromodynamics (QCD) is the accepted theory of the strong interactions and confines quarks and gluons inside hadrons. The theory was formulated in the early seventies, but deriving first-principles predictions from it still remains a challenge, and novel methods of studying it are needed. One such method is dimensional reduction, in which the high-temperature dynamics of static observables of the full four-dimensional theory is described using a simpler three-dimensional effective theory, having only the static modes of the various fields as its degrees of freedom. A perturbatively constructed effective theory is known to provide a good description of the plasma at high temperatures, where asymptotic freedom makes the gauge coupling small. In addition, numerical lattice simulations have shown that the perturbatively constructed theory gives a surprisingly good description of the plasma all the way down to temperatures a few times the transition temperature. Near the critical temperature, however, the effective theory ceases to give a valid description of the physics, since it fails to respect the approximate center symmetry of the full theory. The symmetry plays a key role in the dynamics near the phase transition, and thus one expects that the regime of validity of the dimensionally reduced theories can be significantly extended towards the deconfinement transition by incorporating the center symmetry in them. In the introductory part of the thesis, the status of dimensionally reduced effective theories of high-temperature QCD is reviewed, placing emphasis on the phase structure of the theories. In the first research paper included in the thesis, the non-perturbative input required in computing the g^6 term in the weak-coupling expansion of the pressure of QCD is computed in the effective-theory framework for an arbitrary number of colors. The last two papers, on the other hand, focus on the construction of center-symmetric effective theories, and subsequently the first non-perturbative studies of these theories are presented. Non-perturbative lattice simulations of a center-symmetric effective theory for SU(2) Yang-Mills theory show, in sharp contrast to the perturbative setup, that the effective theory accommodates a phase transition in the correct universality class of the full theory. This transition is seen to take place at a value of the effective-theory coupling constant that is consistent with the full-theory coupling at the critical temperature.
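For orientation, the perturbatively constructed effective theory referred to here (EQCD) has the standard three-dimensional form

\mathcal{L}_{\mathrm{E}} = \frac{1}{2}\,\mathrm{Tr}\,F_{ij}F_{ij} + \mathrm{Tr}\,[D_i, A_0][D_i, A_0] + m_3^2\,\mathrm{Tr}\,A_0^2 + \lambda_3\bigl(\mathrm{Tr}\,A_0^2\bigr)^2,

in which the static temporal gauge-field mode A_0 appears as an adjoint scalar, and the couplings g_3^2, m_3^2 and \lambda_3 are matched perturbatively to the four-dimensional theory (at leading order g_3^2 = g^2 T, and m_3^2 is the Debye mass squared, of order g^2 T^2). This form does not respect the center symmetry of the full theory, which is what motivates the center-symmetric constructions studied in the thesis.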
Abstract:
We consider an obstacle scattering problem for linear Beltrami fields. A vector field is a linear Beltrami field if its curl is a constant times the field itself. We study obstacles of Neumann type, that is, those on whose boundary the normal component of the total field vanishes. We prove the unique solvability of the corresponding exterior boundary value problem, in other words, of the direct obstacle scattering model. For the inverse obstacle scattering problem, we derive the formulas needed to apply the singular sources method. Numerical examples are computed for both the direct and the inverse scattering problem.
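In symbols (the notation here is assumed for illustration), with D the obstacle, \nu the outward unit normal, and the total field \mathbf{u} = \mathbf{u}^i + \mathbf{u}^s split into incident and scattered parts, the setting reads

\nabla \times \mathbf{u} = \kappa\,\mathbf{u} \quad \text{in } \mathbb{R}^3 \setminus \overline{D},
\qquad
\boldsymbol{\nu} \cdot \mathbf{u} = 0 \quad \text{on } \partial D,

for a nonzero constant \kappa, with the scattered part \mathbf{u}^s additionally subject to a suitable radiation condition at infinity.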
Abstract:
Common scab is one of the most important soil-borne diseases of potato (Solanum tuberosum L.) in many potato production areas. It is caused by a number of Streptomyces species; in Finland, the causal agents are Streptomyces scabies (Thaxter) Lambert & Loria and S. turgidiscabies Takeuchi. The scab-causing Streptomyces spp. are well-adapted, successful plant pathogens that also survive in soil as saprophytes. Control of these pathogens has proved difficult. Most of the methods used to manage potato common scab are aimed at controlling S. scabies, the most common of the scab-causing pathogens. The studies in this thesis investigated S. scabies and S. turgidiscabies as causal organisms of common scab and explored new approaches for control of common scab that would be effective against both species. S. scabies and S. turgidiscabies are known to co-occur in the same fields and in the same tuber lesions in Finland. The present study showed that both pathogens cause similar symptoms on potato tubers, and that the types of symptoms varied depending on the cultivar rather than the pathogen species. Pathogenic strains of S. turgidiscabies were antagonistic to S. scabies in vitro, indicating that these two species may be competing for the same ecological niche. In addition, strains of S. turgidiscabies were highly virulent in potato and tolerated lower pH than those of S. scabies. Taken together, these results suggest that S. turgidiscabies has become a major problem in potato production in Finland. The bacterial phytotoxins, thaxtomins, are produced by the scab-causing Streptomyces spp. and are essential for the induction of scab symptoms. In this study, thaxtomins were produced in vitro, and four thaxtomin compounds were isolated and characterized. All four thaxtomins induced similar symptoms of reduced root and shoot growth, root swelling or necrosis on micro-propagated potato seedlings. The main phytotoxin, thaxtomin A, was used as a selective agent in an in vitro bioassay to screen F1 potato progeny from a single cross. Tolerance to thaxtomin A in vitro and scab resistance in the field were correlated, indicating that the in vitro bioassay could be used in the early stages of a resistance breeding program to discard scab-susceptible genotypes and elevate the overall levels of common scab resistance in potato breeding populations. The potential for biological control of S. scabies and S. turgidiscabies was studied using a non-pathogenic Streptomyces strain (346) isolated from a scab lesion and an S. griseoviridis strain (K61) from a commercially available biocontrol product. Both strains showed antagonistic activity against S. scabies and S. turgidiscabies in vitro and suppressed the development of common scab disease caused by S. turgidiscabies in the glasshouse. Furthermore, strain 346 reduced the incidence of S. turgidiscabies in scab lesions on potato tubers in the field. These results demonstrated for the first time the potential for biological control of S. turgidiscabies in the glasshouse and under field conditions, and may be applied to enhance control of common scab in the future.
Abstract:
We present a measurement of the top quark mass and of the top-antitop pair production cross section using p-pbar data collected with the CDF II detector at the Tevatron Collider at the Fermi National Accelerator Laboratory, corresponding to an integrated luminosity of 2.9 fb^-1. We select events with six or more jets satisfying a number of kinematical requirements imposed by means of a neural network algorithm. At least one of these jets must originate from a b quark, as identified by the reconstruction of a secondary vertex inside the jet. The mass measurement is based on a likelihood fit incorporating reconstructed mass distributions representative of signal and background, in which the absolute jet energy scale (JES) is measured simultaneously with the top quark mass. The measurement yields a value of 174.8 ± 2.4 (stat+JES) +1.2/-1.0 (syst) GeV/c^2, where the uncertainty from the absolute jet energy scale is evaluated together with the statistical uncertainty. The procedure also measures the amount of signal, from which we derive a cross section, sigma_ttbar = 7.2 ± 0.5 (stat) ± 1.0 (syst) ± 0.4 (lum) pb, for the measured values of the top quark mass and JES.