44 results for Personal data
Abstract:
Analyzing statistical dependencies is a fundamental problem in all empirical science. Dependencies help us understand causes and effects, create new scientific theories, and devise cures for problems. Nowadays, large amounts of data are available, but efficient computational tools for analyzing the data are missing. In this research, we develop efficient algorithms for a commonly occurring search problem - searching for the statistically most significant dependency rules in binary data. We consider dependency rules of the form X->A or X->not A, where X is a set of positive-valued attributes and A is a single attribute. Such rules describe which factors either increase or decrease the probability of the consequent A. A classical example is genetic and environmental factors, which can either cause or prevent a disease. The emphasis in this research is that the discovered dependencies should be genuine - i.e. they should also hold in future data. This is an important distinction from traditional association rules, which - in spite of their name and a similar appearance to dependency rules - do not necessarily represent statistical dependencies at all, or represent only spurious connections that occur by chance. Therefore, the principal objective is to search for rules with statistical significance measures. Another important objective is to search for only non-redundant rules, which express the real causes of dependence without any incidental extra factors. The extra factors do not add any new information on the dependence, but can only blur it and make it less accurate in future data. The problem is computationally very demanding, because the number of all possible rules increases exponentially with the number of attributes. In addition, neither statistical dependency nor statistical significance is a monotonic property, which means that the traditional pruning techniques do not work. As a solution, we first derive the mathematical basis for pruning the search space with any well-behaved statistical significance measure. The mathematical theory is complemented by a new algorithmic invention, which enables an efficient search without any heuristic restrictions. The resulting algorithm can be used to search for both positive and negative dependencies with any commonly used statistical measure, such as Fisher's exact test, the chi-squared measure, mutual information, and z scores. According to our experiments, the algorithm scales well, especially with Fisher's exact test. It can easily handle even the densest data sets with 10000-20000 attributes. Still, the results are globally optimal, which is a remarkable improvement over the existing solutions. In practice, this means that the user does not have to worry whether the dependencies hold in future data or whether the data still contains better, as yet undiscovered, dependencies.
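As a concrete illustration of the kind of significance testing involved, the sketch below scores a single candidate rule X -> A with Fisher's exact test on binary data; the data and helper function are hypothetical, and the thesis algorithm searches and prunes over all candidate rules rather than testing one.

```python
# A minimal sketch of scoring one dependency rule X -> A with Fisher's
# exact test, assuming binary data in a NumPy array (rows = records,
# columns = attributes). Names and data are illustrative only.
import numpy as np
from scipy.stats import fisher_exact

def rule_p_value(data, x_cols, a_col):
    """One-sided p-value for a positive dependency X -> A on 0/1 data."""
    x_holds = data[:, x_cols].all(axis=1)       # rows where all of X are 1
    a_holds = data[:, a_col] == 1               # rows where A is 1
    # 2x2 contingency table: [[X&A, X&not A], [not X&A, not X&not A]]
    table = [
        [np.sum(x_holds & a_holds),  np.sum(x_holds & ~a_holds)],
        [np.sum(~x_holds & a_holds), np.sum(~x_holds & ~a_holds)],
    ]
    # 'greater' tests whether A is over-represented when X holds
    _, p = fisher_exact(table, alternative="greater")
    return p

rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(1000, 5))
print(rule_p_value(data, x_cols=[0, 2], a_col=4))
```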
Abstract:
Cell transition data is obtained from a cellular phone that switches its current serving cell tower. The data consists of a sequence of transition events, which are pairs of cell identifiers and transition times. The focus of this thesis is applying data mining methods to such data, developing new algorithms, and extracting knowledge that will be a solid foundation on which to build location-aware applications. In addition to a thorough exploration of the features of the data, the tools and methods developed in this thesis provide solutions to three distinct research problems. First, we develop clustering algorithms that produce a reliable mapping between cell transitions and physical locations observed by users of mobile devices. The main clustering algorithm operates in an online fashion, and we also consider a number of offline clustering methods for comparison. Second, we define the concept of significant locations, known as bases, and give an online algorithm for determining them. Finally, we consider the task of predicting the movement of the user based on historical data. We develop a prediction algorithm that considers paths of movement in their entirety, instead of just the most recent movement history. All of the presented methods are evaluated with a significant body of real cell transition data, collected from about one hundred different individuals. The algorithms developed in this thesis are designed to be implemented on a mobile device, and require no extra hardware sensors or network infrastructure. By not relying on external services and keeping the user information as much as possible on the user's own personal device, we avoid privacy issues and let the users control the disclosure of their location information.
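To make the online clustering idea concrete, the hypothetical sketch below merges cells between which the phone oscillates within a short time window, approximating one physical location; the window length and merging rule are invented for illustration and are not the thesis algorithm.

```python
# A hypothetical sketch of online clustering of cell transitions: an
# oscillation A -> B -> A within a short time window suggests the phone
# is stationary and both cells cover the same place, so their clusters
# are merged. WINDOW and the rule itself are illustrative assumptions.
WINDOW = 60.0  # seconds

class OnlineCellClusterer:
    def __init__(self):
        self.cluster_of = {}   # cell id -> cluster id
        self.members = {}      # cluster id -> set of cell ids
        self.history = []      # past (cell, time) transition events
        self.next_id = 0

    def _cluster(self, cell):
        if cell not in self.cluster_of:
            self.cluster_of[cell] = self.next_id
            self.members[self.next_id] = {cell}
            self.next_id += 1
        return self.cluster_of[cell]

    def _merge(self, a, b):
        if a == b:
            return
        for cell in self.members.pop(b):
            self.cluster_of[cell] = a
            self.members[a].add(cell)

    def observe(self, cell, time):
        self._cluster(cell)
        if len(self.history) >= 2:
            (c1, t1), (c2, _) = self.history[-2], self.history[-1]
            if cell == c1 and time - t1 < WINDOW:  # A -> B -> A oscillation
                self._merge(self.cluster_of[cell], self.cluster_of[c2])
        self.history.append((cell, time))
        return self.cluster_of[cell]

clusterer = OnlineCellClusterer()
for cell, t in [("A", 0.0), ("B", 20.0), ("A", 35.0), ("C", 500.0)]:
    print(cell, "-> cluster", clusterer.observe(cell, t))
```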
Abstract:
The increased accuracy of cosmological observations, especially measurements of the cosmic microwave background (CMB), allows us to study the primordial perturbations in greater detail. In this thesis, we allow for correlated isocurvature perturbations alongside the usual adiabatic perturbations. Thus far the simplest six-parameter \Lambda CDM model has been able to accommodate all the observational data rather well. However, we find that the 3-year WMAP data and the 2006 Boomerang data favour a nonzero nonadiabatic contribution to the CMB angular power spectrum. This is a primordial isocurvature perturbation that is positively correlated with the primordial curvature perturbation. Compared with the adiabatic \Lambda CDM model, we have four additional parameters describing the increased complexity of the primordial perturbations. Our best-fit model has a 4% nonadiabatic contribution to the CMB temperature variance, and the fit is improved by \Delta\chi^2 = 9.7. We can attribute this preference for isocurvature to a feature in the peak structure of the angular power spectrum, namely, the widths of the second and third acoustic peaks. Along the way, we have improved our analysis methods by identifying some issues with the parametrisation of the primordial perturbation spectra and suggesting ways to handle them. Owing to these improvements, the convergence of our Markov chains is improved. The change of parametrisation has an effect on the MCMC analysis because of the change in priors. We have checked our results against this and find only marginal differences between the parametrisations.
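Schematically, under the kind of amplitude parametrisation common in this literature (the exact convention used in the thesis may differ), the additional degrees of freedom enter the angular power spectrum roughly as follows.

```latex
% Schematic decomposition of the temperature angular power spectrum into
% adiabatic, isocurvature and correlation parts; the amplitudes and sign
% conventions here are illustrative, not the thesis's own definitions.
\begin{equation*}
  C_\ell \;=\; A_{\mathrm{ad}}^{2}\,\hat{C}_\ell^{\mathrm{ad}}
        \;+\; A_{\mathrm{iso}}^{2}\,\hat{C}_\ell^{\mathrm{iso}}
        \;+\; 2\,A_{\mathrm{ad}}A_{\mathrm{iso}}\,\hat{C}_\ell^{\mathrm{cor}}
\end{equation*}
```

In such a scheme, the nonadiabatic contribution to the temperature variance quoted in the abstract is the share of the total variance $\sum_\ell (2\ell+1)C_\ell/4\pi$ carried by the isocurvature and correlation terms.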
Abstract:
This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in the Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we estimate the ability of the Bertini cascade to simulate the Compact Muon Solenoid (CMS) hadron calorimeter (HCAL). LHC test beam activity has a tightly coupled cycle of simulation-to-data analysis. Typically, a Geant4 computer experiment is used to understand test beam measurements. Thus, another aspect of this thesis is a description of studies related to developing new CMS H2 test beam data analysis tools and performing data analysis on the basis of CMS Monte Carlo events. These events have been simulated in detail using Geant4 physics models, a full CMS detector description, and event reconstruction. Using the ROOT data analysis framework, we have developed an offline ANN-based approach to tag b-jets associated with heavy neutral Higgs particles, and we show that this kind of neural-network methodology can be successfully used to separate the Higgs signal from the background in the CMS experiment.
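The sketch below illustrates the general ANN-based signal/background separation idea with scikit-learn rather than the ROOT-based tools actually used in the thesis; the two toy discriminating variables and the simulated data are invented for illustration.

```python
# A hypothetical sketch of ANN-based signal vs. background separation.
# The feature columns stand in for b-jet discriminating variables (e.g.
# impact-parameter significance, secondary-vertex mass); all values are
# simulated, not CMS data.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 2000
signal = rng.normal(loc=[2.0, 1.5], scale=1.0, size=(n, 2))
background = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(n, 2))
X = np.vstack([signal, background])
y = np.concatenate([np.ones(n), np.zeros(n)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
net.fit(X_tr, y_tr)
print("test accuracy:", net.score(X_te, y_te))
```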
Abstract:
Solar flares were first observed with the naked eye in white light by Richard Carrington in England in 1859. Since then these eruptions in the solar corona have intrigued scientists. It is known that flares influence the space weather experienced by the planets in a multitude of ways, for example by causing aurora borealis. Understanding flares is central to human survival in space, as astronauts cannot survive high doses of the highly energetic particles associated with large flares without contracting serious radiation disease symptoms, unless they shield themselves effectively during space missions. Flares may have been central to survival in the past as well: it has been suggested that giant flares might have played a role in exterminating many of the large species on Earth, including the dinosaurs. On the other hand, prebiotic synthesis studies have shown lightning to be a decisive requirement for amino acid synthesis on the primordial Earth, and increased lightning activity could be attributed to space weather and flares. This thesis studies flares in two ways: in the spectral and the spatial domain. We have extracted solar spectra for the same flares using three different instruments, namely GOES (Geostationary Operational Environmental Satellite), RHESSI (Reuven Ramaty High Energy Solar Spectroscopic Imager) and XSM (X-ray Solar Monitor). The GOES spectra are low-resolution, obtained with a gas proportional counter; the RHESSI spectra are higher-resolution, obtained with germanium detectors; and the XSM spectra are very high-resolution, observed with a silicon detector. It turns out that the detector technology and response substantially influence the spectra we see, and are important for understanding what conclusions to draw from the data. With imaging data, there was no such luxury of choice available. We used RHESSI imaging data to observe the spatial size of solar flares. In the present work the focus was primarily on current solar flares. However, we did make use of our improved understanding of solar flares to observe young suns in NGC 2547. The same techniques used with solar monitors were applied with XMM-Newton, a stellar X-ray monitor, and coupled with ground-based H-alpha observations these techniques yielded estimates for flare parameters in young suns. The material in this thesis is therefore structured from technology to application, covering the full processing path from raw data and detector responses to concrete physical parameter results, such as the first measurement of the length of plasma flare loops in young suns.
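The toy calculation below (not the thesis's pipeline) illustrates why the detector response matters: the recorded count spectrum is roughly the photon spectrum folded through a response matrix, so detectors with different energy resolution report visibly different spectra for the same flare. The spectrum, response model and resolution values are assumptions made for illustration.

```python
# Toy forward-folding of a steep photon spectrum through Gaussian
# response matrices of different energy resolution: poorer resolution
# smears the spectrum and flattens its apparent slope.
import numpy as np

energies = np.linspace(3.0, 25.0, 200)        # keV, illustrative grid
photon_flux = energies ** -4.0                # steep, thermal-like spectrum

def gaussian_response(e_grid, fwhm_kev):
    """Toy response matrix: Gaussian energy redistribution."""
    sigma = fwhm_kev / 2.355
    diff = e_grid[:, None] - e_grid[None, :]
    R = np.exp(-0.5 * (diff / sigma) ** 2)
    return R / R.sum(axis=1, keepdims=True)   # each row is normalised

for name, fwhm in [("gas proportional counter", 4.0),
                   ("germanium detector", 1.0),
                   ("silicon detector", 0.2)]:
    counts = gaussian_response(energies, fwhm) @ photon_flux
    # crude apparent power-law slope between 5 and 20 keV
    i, j = np.searchsorted(energies, [5.0, 20.0])
    slope = np.log(counts[j] / counts[i]) / np.log(energies[j] / energies[i])
    print(f"{name}: apparent power-law index {slope:.2f}")
```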
Abstract:
Solar UV radiation is harmful to life on planet Earth, but fortunately atmospheric oxygen and ozone absorb the most energetic UVC photons almost entirely. However, part of the UVB radiation and much of the UVA radiation reach the surface of the Earth, where they affect human health, the environment and materials, and drive atmospheric and aquatic photochemical processes. In order to quantify these effects and processes, there is a need for ground-based UV measurements and radiative transfer modelling to estimate the amounts of UV radiation reaching the biosphere. Satellite measurements, with their near-global spatial coverage and long-term data continuity, offer an attractive option for estimating the surface UV radiation. This work focuses on radiative transfer theory based methods used for estimating the UV radiation reaching the surface of the Earth. The objectives of the thesis were to implement the surface UV algorithm originally developed at NASA Goddard Space Flight Center for estimating the surface UV irradiance from the measurements of the Dutch-Finnish built Ozone Monitoring Instrument (OMI), to improve the original surface UV algorithm especially in relation to snow cover, to validate the OMI-derived daily surface UV doses against ground-based measurements, and to demonstrate how the satellite-derived surface UV data can be used to study the effects of UV radiation. The thesis consists of seven original papers and a summary. The summary includes an introduction to the OMI instrument, a review of the methods used for modelling the surface UV using satellite data, and the conclusions of the main results of the original papers. The first two papers describe the algorithm used for estimating surface UV amounts from the OMI measurements, as well as the unique Very Fast Delivery processing system developed for processing the OMI data received at the Sodankylä satellite data centre. The third and fourth papers present algorithm improvements related to the surface UV albedo of snow-covered land. The fifth paper presents the results of a comparison of the OMI-derived daily erythemal doses with those calculated from ground-based measurement data. It gives an estimate of the expected accuracy of the OMI-derived surface UV doses under various atmospheric and other conditions, and discusses the causes of the differences between the satellite-derived and ground-based data. The last two papers demonstrate the use of the satellite-derived surface UV data. The sixth paper presents an assessment of photochemical decomposition rates in aquatic environments. The seventh paper presents the use of satellite-derived daily surface UV doses for planning outdoor material weathering tests.
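As a small illustration of how an erythemally weighted dose is obtained from a spectral irradiance, the sketch below applies the CIE erythemal action spectrum and integrates over wavelength; the irradiance values are invented, whereas the OMI algorithm derives the actual spectrum by radiative transfer modelling.

```python
# A sketch of computing an erythemal dose rate: weight each wavelength
# of a spectral UV irradiance with the CIE (McKinlay-Diffey) erythemal
# action spectrum and integrate. The toy irradiance is an assumption.
import numpy as np

def cie_erythemal_weight(wl):
    """CIE erythemal action spectrum; wl in nm."""
    wl = np.asarray(wl, dtype=float)
    return np.where(wl <= 298.0, 1.0,
           np.where(wl <= 328.0, 10.0 ** (0.094 * (298.0 - wl)),
                    10.0 ** (0.015 * (139.0 - wl))))

dw = 0.5
wl = np.arange(290.0, 400.0, dw)              # nm
irradiance = 1e-3 * np.sqrt(wl - 289.0)       # W m^-2 nm^-1, toy spectrum
dose_rate = np.sum(cie_erythemal_weight(wl) * irradiance) * dw
print(f"erythemal dose rate: {dose_rate:.4f} W m^-2")
# Integrating this rate over the day gives the daily dose in J m^-2.
```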
Abstract:
Accelerator mass spectrometry (AMS) is an ultrasensitive technique for measuring the concentration of a single isotope. The electric and magnetic fields of an electrostatic accelerator system are used to filter out other isotopes from the ion beam. The high velocity means that molecules can be destroyed and removed from the measurement background. As a result, concentrations down to one atom in 10^16 atoms are measurable. This thesis describes the construction of the new AMS system in the Accelerator Laboratory of the University of Helsinki. The system is described in detail along with the relevant ion optics, and its performance and some of the 14C measurements made with it are reported. In the second part of the thesis, a novel statistical model for the analysis of AMS data is presented. Bayesian methods are used in order to make the best use of the available information. In the new model, instrumental drift is modelled with a continuous first-order autoregressive process. This enables rigorous normalization to standards measured at different times. The Poisson statistical nature of a 14C measurement is also taken into account properly, so that uncertainty estimates are much more stable. It is shown that, overall, the new model improves both the accuracy and the precision of AMS measurements. In particular, the results improve for samples with very low 14C concentrations or samples measured only a few times.
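The simulation below sketches the structure of such a model with hypothetical parameter values: instrument drift follows a continuous first-order autoregressive process evaluated at irregular measurement times, and each measurement is a Poisson count whose rate is the sample's true level times the drift. It only illustrates the generative side; the thesis fits the corresponding Bayesian model to real data.

```python
# A minimal sketch of the generative model: AR(1)-type drift at
# irregular times, Poisson counts. All parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
times = np.sort(rng.uniform(0.0, 10.0, size=20))   # measurement times (h)
phi, sigma = 0.8, 0.05                              # drift memory and scale

drift = np.empty_like(times)
drift[0] = 1.0
for i in range(1, len(times)):
    rho = phi ** (times[i] - times[i - 1])          # correlation decays with gap
    drift[i] = (1.0 + rho * (drift[i - 1] - 1.0)
                + sigma * np.sqrt(1.0 - rho ** 2) * rng.standard_normal())

true_rate = 120.0                                   # expected counts per run
counts = rng.poisson(true_rate * drift)
# Because the AR(1) structure links drift values across time, unknowns
# can be normalised to standards measured at *different* times.
print(counts)
```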
Abstract:
Changes in alcohol pricing have been documented as inversely associated with changes in consumption and alcohol-related problems. Evidence of the association between price changes and health problems is nevertheless patchy, and is based to a large extent on cross-sectional state-level data or on time series of such cross-sectional analyses. Natural experimental studies have been called for. There was a substantial reduction in the price of alcohol in Finland in 2004 due to a reduction in alcohol taxes of one third, on average, and the abolition of duty-free allowances for travellers from the EU. These changes in Finnish alcohol policy can be considered a natural experiment, which offered a good opportunity to study what happens to alcohol-related problems when prices go down. The present study investigated the effects of this reduction in alcohol prices on (1) alcohol-related and all-cause mortality, and mortality due to cardiovascular diseases, (2) alcohol-related morbidity in terms of hospitalisation, (3) socioeconomic differentials in alcohol-related mortality, and (4) small-area differences in interpersonal violence in the Helsinki Metropolitan area. Differential trends in alcohol-related mortality prior to the price reduction were also analysed. A variety of population-based register data were used in the study. Time-series intervention analysis modelling was applied to monthly aggregations of deaths and hospitalisations for the period 1996-2006. These and other mortality analyses were carried out for men and women aged 15 years and over. Socioeconomic differentials in alcohol-related mortality were assessed on a before/after basis, mortality being followed up in 2001-2003 (before the price reduction) and 2004-2005 (after). In all of the mortality studies, alcohol-related mortality was defined on the basis of information on both underlying and contributory causes of death. Hospitalisation was considered alcohol-related when there was a reference to alcohol in the primary diagnosis. Data on interpersonal violence were gathered from 86 administrative small areas in the Helsinki Metropolitan area and were also assessed on a before/after basis, with follow-up in 2002-2003 and 2004-2005. The statistical methods employed to analyse these data sets included time-series analysis, and Poisson and linear regression. The results of the study indicate that alcohol-related deaths increased substantially among men aged 40-69 years and among women aged 50-69 after the price reduction, when trends and seasonal variation were taken into account. The increase was mainly attributable to chronic causes, particularly liver diseases. Mortality due to cardiovascular diseases and all-cause mortality, on the other hand, decreased considerably among the over-69-year-olds. The increase in alcohol-related mortality in absolute terms among the 30-59-year-olds was largest among the unemployed and early-age pensioners, and among those with a low level of education, social class or income. The relative differences in change between the education and social class subgroups were small. The employed and those under the age of 35 did not suffer from increased alcohol-related mortality in the two years following the price reduction. The gap between the age and education groups, which was substantial in the 1980s, thus broadened further.
With regard to alcohol-related hospitalisation, there was an increase in both chronic and acute causes among men under the age of 70, and among women in the 50-69-year age group, when trends and seasonal variation were taken into account. Alcohol dependence and other alcohol-related mental and behavioural disorders constituted the largest category both in the total number of chronic-cause hospitalisations and in the increase. There was no increase in the rate of interpersonal violence in the Helsinki Metropolitan area, and there was even a decrease in domestic violence. There was a significant relationship between area-level measures of social disadvantage and interpersonal violence, although the differences in the effects of the price reduction between the areas were small. The findings of the present study suggest that a reduction in alcohol prices may lead to a substantial increase in alcohol-related mortality and morbidity. However, large population-group differences were observed in responsiveness to the price changes. In particular, the less privileged, such as the unemployed, were the most sensitive. In contrast, at least in the Finnish context, the younger generations and the employed do not appear to be adversely affected, and those in the older age groups may even benefit from cheaper alcohol in terms of decreased rates of cardiovascular mortality. The results also suggest that reductions in alcohol prices do not necessarily affect interpersonal violence. The population-group differences in the effects of the price changes on alcohol-related harm should be acknowledged, and policy actions should therefore focus on the population subgroups that are most responsive to the price reduction.
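The sketch below shows the general shape of a time-series intervention analysis of the kind the study applies: monthly counts regressed on a linear trend, seasonal dummies and a step intervention at the 2004 price reduction, with a Poisson GLM. The simulated data and effect size are purely illustrative, not the study's estimates.

```python
# A hypothetical intervention analysis on simulated monthly counts:
# Poisson GLM with trend, seasonal dummies and a step at March 2004.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(7)
months = pd.period_range("1996-01", "2006-12", freq="M")
t = np.arange(len(months))
step = (months >= pd.Period("2004-03", freq="M")).astype(float)
season = pd.get_dummies(months.month, drop_first=True).astype(float)

# simulate counts with an assumed 15% jump after the intervention
log_mu = 4.0 + 0.001 * t + np.log(1.15) * step
y = rng.poisson(np.exp(log_mu))

X = sm.add_constant(np.column_stack([t, step, season]))
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
print("estimated step effect (rate ratio):", np.exp(fit.params[2]))
```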
Abstract:
Increasing antimicrobial resistance in bacteria has led to the need for a better understanding of antimicrobial usage patterns. In 1999, the World Organisation for Animal Health (OIE) recommended that an international ad hoc group be established to address the human and animal health risks related to antimicrobial resistance and the contribution of antimicrobial usage in veterinary medicine. In European countries, the need for continuous recording of the usage of veterinary antimicrobials, as well as for animal species-specific and indication-based data on usage, has been acknowledged. Finland has been among the first countries to develop prudent use guidelines in veterinary medicine, as the Ministry of Agriculture and Forestry issued the first animal species-specific, indication-based recommendations for antimicrobial use in animals in 1996. These guidelines were revised in 2003 and 2009. However, surveillance of the species-specific use of antimicrobials in animals has not been performed in Finland. This thesis provides animal species-specific information on indication-based antimicrobial usage. Different methods of data collection have been utilized. Information on antimicrobial usage in animals was gathered in four studies (studies A-D). Material from studies A, B and C has been used in an overlapping manner in the original publications I-IV. Study A (original publications I & IV) presents a retrospective cross-sectional survey of prescriptions for small animals at the Veterinary Teaching Hospital of the University of Helsinki. Prescriptions for antimicrobial agents (n = 2281) were collected, and usage patterns, such as the indication and length of treatment, were reviewed. Most of the prescriptions were for dogs (78%), primarily for the treatment of skin and ear infections, most of which were treated with cephalexin for a median period of 14 days. Prescriptions for cats (18%) were most often for the treatment of urinary tract infections with amoxicillin, for a median length of 10 days. Study B (original publication II) was a retrospective cross-sectional survey in which prescriptions for animals were collected from 17 University Pharmacies nationwide. Antimicrobial prescriptions (n = 1038), mainly for dogs (65%) and cats (19%), were investigated. In this study, too, cephalexin and amoxicillin were the most frequently used drugs for dogs and cats, respectively. In study C (original publications III & IV), the indication-based usage of antimicrobials by practicing veterinarians was analyzed using a prospective questionnaire. Randomly selected practicing veterinarians in Finland (n = 262) recorded all their antimicrobial usage during a 7-day study period. Cattle (46%) with mastitis were the most common patients receiving antimicrobial treatment, generally intramuscular penicillin G or intramammary treatment with ampicillin and cloxacillin. The median length of treatment was four days, regardless of the route of administration. Antimicrobial use in horses was evaluated in study D, the results of which were previously unpublished. Firstly, data collected with the prospective questionnaire from the practicing veterinarians showed that horses (n = 89) were frequently treated for skin or wound infections using penicillin G or trimethoprim-sulfadiazine. The mean duration of treatment was five to seven days.
Secondly, according to retrospective data collected from patient records, horses (n = 74) that underwent colic surgery at the Veterinary Teaching Hospital of the University of Helsinki were generally treated according to national and hospital recommendations; penicillin G and gentamicin were administered preoperatively, and treatment was continued for a median of three days postoperatively. In conclusion, Finnish veterinarians followed the national prudent use guidelines well. Narrow-spectrum antimicrobials were preferred and, for instance, fluoroquinolones were used sparingly. Prescription studies seemed to give good information on antimicrobial usage, especially when combined with complementary information from patient records. A prospective questionnaire study provided a fair amount of valuable data on several animal species. Electronic surveys are worth exploiting in the future.
Abstract:
The core aim of machine learning is to make a computer program learn from experience. Learning from data is usually defined as the task of learning regularities or patterns in data in order to extract useful information, or to learn the underlying concept. An important sub-field of machine learning is called multi-view learning, where the task is to learn from multiple data sets or views describing the same underlying concept. A typical example of such a scenario would be to study a biological concept using several biological measurements, such as gene expression, protein expression and metabolic profiles, or to classify web pages based on their content and the contents of their hyperlinks. In this thesis, novel problem formulations and methods for multi-view learning are presented. The contributions include a linear data fusion approach for exploratory data analysis, a new measure to evaluate different kinds of representations for textual data, and an extension of multi-view learning to novel scenarios where the correspondence of samples in the different views or data sets is not known in advance. In order to infer the one-to-one correspondence of samples between two views, a novel concept of multi-view matching is proposed. The matching algorithm is completely data-driven and is demonstrated in several applications, such as matching of metabolites between humans and mice, and matching of sentences between documents in two languages.
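To make the matching idea concrete, the hypothetical sketch below projects two views into a shared space with CCA and then solves a one-to-one assignment with the Hungarian algorithm; it illustrates the general concept only, not the thesis's exact algorithm, and the simulated data are invented.

```python
# A hypothetical multi-view matching sketch: CCA projection followed by
# optimal one-to-one assignment on distances in the shared space.
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(3)
n, d1, d2 = 30, 10, 8
latent = rng.standard_normal((n, 4))
view1 = latent @ rng.standard_normal((4, d1)) + 0.1 * rng.standard_normal((n, d1))
view2 = latent @ rng.standard_normal((4, d2)) + 0.1 * rng.standard_normal((n, d2))
perm = rng.permutation(n)              # unknown correspondence to recover

# NOTE: fitting CCA on aligned rows is a simplification; with unknown
# correspondence one would alternate between matching and projection.
z1, z2 = CCA(n_components=4).fit_transform(view1, view2)
z2 = z2[perm]                          # shuffle view-2 projections
cost = ((z1[:, None, :] - z2[None, :, :]) ** 2).sum(-1)
rows, cols = linear_sum_assignment(cost)
print("fraction correctly matched:", np.mean(perm[cols] == rows))
```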
Abstract:
The purpose of this paper is to test for the effect of uncertainty in a model of real estate investment in Finland during the highly cyclical period of 1975 to 1998. We use two alternative measures of uncertainty. The first measure is the volatility of stock market returns, and the second is the heterogeneity in the answers to the quarterly business survey of the Confederation of Finnish Industry and Employers. The econometric analysis is based on the autoregressive distributed lag (ADL) model, and the paper applies a 'general-to-specific' modelling approach. We find that the measure of heterogeneity is significant in the model, but the volatility of stock market returns is not. The empirical results give some evidence of an uncertainty-induced threshold slowing down real estate investment in Finland.
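The sketch below shows the general form of such an ADL regression: investment regressed on its own lags and lags of an uncertainty proxy. The variable names and simulated quarterly series are illustrative; a 'general-to-specific' approach would start from a generous lag structure and drop insignificant terms.

```python
# A minimal ADL(2,2) sketch on simulated quarterly data; all series and
# coefficients are invented for illustration.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(5)
T = 96                                   # quarterly observations, 1975-1998
uncertainty = rng.standard_normal(T)
invest = np.zeros(T)
for t in range(2, T):
    invest[t] = (0.6 * invest[t - 1] - 0.3 * uncertainty[t - 1]
                 + rng.standard_normal())

df = pd.DataFrame({"invest": invest, "unc": uncertainty})
for lag in (1, 2):
    df[f"invest_l{lag}"] = df["invest"].shift(lag)
    df[f"unc_l{lag}"] = df["unc"].shift(lag)
df = df.dropna()

X = sm.add_constant(df[["invest_l1", "invest_l2", "unc_l1", "unc_l2"]])
print(sm.OLS(df["invest"], X).fit().summary())
```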
Abstract:
The study of soil microbiota and their activities is central to the understanding of many ecosystem processes such as decomposition and nutrient cycling. The collection of microbiological data from soils generally involves several sequential steps of sampling, pretreatment and laboratory measurements, and the reliability of the results depends on reliable methods at every step. The aim of this thesis was to critically evaluate some central methods and procedures used in soil microbiological studies in order to increase our understanding of the factors that affect measurement results, and to provide guidance and new approaches for the design of experiments. The thesis focuses on four major themes: 1) soil microbiological heterogeneity and sampling, 2) storage of soil samples, 3) DNA extraction from soil, and 4) quantification of specific microbial groups by the most-probable-number (MPN) procedure. Soil heterogeneity and sampling are discussed as a single theme because knowledge of spatial (horizontal and vertical) and temporal variation is crucial when designing sampling procedures. Comparison of adjacent forest, meadow and cropped field plots showed that land use has a strong impact on the degree of horizontal variation of soil enzyme activities and bacterial community structure. However, regardless of the land use, the variation of microbiological characteristics appeared to have no predictable spatial structure at scales of 0.5-10 m. Temporal and soil depth-related patterns were studied in relation to plant growth in cropped soil. The results showed that most enzyme activities and the microbial biomass have a clear decreasing trend in the top 40 cm of the soil profile and a temporal pattern during the growing season. A new procedure for sampling of soil microbiological characteristics, based on stratified sampling and pre-characterisation of samples, was developed. A practical example demonstrated the potential of the new procedure to reduce the analysis effort involved in laborious microbiological measurements without loss of precision. The investigation of storage of soil samples revealed that freezing (-20 °C) of small sample aliquots retains the activity of hydrolytic enzymes and the structure of the bacterial community in different soil matrices relatively well, whereas air-drying cannot be recommended as a storage method for soil microbiological properties due to large reductions in activity. Freezing below -70 °C was the preferred method of storage for samples with a high organic matter content. Comparison of different direct DNA extraction methods showed that the cell lysis treatment has a strong impact on the molecular size of the DNA obtained and on the bacterial community structure detected. An improved MPN method for the enumeration of soil naphthalene degraders was introduced as an alternative to more complex MPN protocols or DNA-based quantification approaches. The main advantages of the new method are its simple protocol and the possibility to analyse a large number of samples and replicates simultaneously.
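For readers unfamiliar with the MPN procedure, the sketch below shows the underlying calculation: given a dilution series with several tubes per level and the number of tubes showing growth, the cell density is estimated by maximum likelihood under a Poisson inoculation model. The dilution scheme and counts are invented for illustration and are not the thesis's protocol.

```python
# MPN estimation by maximum likelihood: P(tube positive) = 1 - exp(-c*v)
# for density c and inoculated volume v; maximise the binomial likelihood.
import numpy as np
from scipy.optimize import minimize_scalar

volumes = np.array([0.1, 0.01, 0.001])   # mL of original sample per tube
n_tubes = np.array([5, 5, 5])            # tubes inoculated per dilution
positives = np.array([5, 3, 1])          # tubes showing growth (toy data)

def neg_log_lik(log_density):
    density = np.exp(log_density)        # cells per mL
    p = 1.0 - np.exp(-density * volumes)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(positives * np.log(p)
                   + (n_tubes - positives) * np.log(1.0 - p))

res = minimize_scalar(neg_log_lik, bounds=(-5, 15), method="bounded")
print(f"MPN estimate: {np.exp(res.x):.1f} cells/mL")
```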
Abstract:
An inverse problem for the wave equation is a mathematical formulation of the problem of converting measurements of sound waves into information about the wave speed governing the propagation of the waves. This doctoral thesis extends the theory of inverse problems for the wave equation to cases with partial measurement data, and also considers the detection of discontinuous interfaces in the wave speed. A possible application of the theory is obstetric sonography, in which ultrasound measurements are transformed into an image of the fetus in its mother's uterus. The wave speed inside the body cannot be directly observed, but sound waves can be produced outside the body and their echoes from the body can be recorded. The present work contains five research articles. In the first and the fifth articles we show that it is possible to determine the wave speed uniquely by using far-apart sound sources and receivers. This extends a previously known result which requires the sound waves to be produced and recorded in the same place. Our result is motivated by a possible application to reflection seismology, which seeks to create an image of the Earth's crust from recordings of echoes stimulated, for example, by explosions. For this purpose, the receivers typically cannot lie near the powerful sound sources. In the second article we present a sound source that allows us to recover many essential features of the wave speed from the echo produced by the source. Moreover, these features are known to determine the wave speed under certain geometric assumptions. Previously known results permitted the same features to be recovered only by sequential measurement of echoes produced by multiple different sources. The reduced number of measurements could increase the number of possible applications of acoustic probing. In the third and fourth articles we develop an acoustic probing method to locate discontinuous interfaces in the wave speed. These interfaces typically correspond to interfaces between different materials, and their locations are of interest in many applications. There are many previous approaches to this problem, but none of them exploits sound sources varying freely in time. Our use of more variable sources could allow a more robust implementation of the probing.
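In rough terms (a schematic statement, not the thesis's precise formulation), the partial-data inverse problem can be written as follows.

```latex
% Schematic statement of the inverse problem with partial boundary data:
% sources act on one part of the boundary, echoes are recorded on
% another, and the task is to recover the wave speed c(x).
\begin{align*}
  &\partial_t^2 u - c(x)^2 \Delta u = f
    &&\text{in } (0,T) \times \Omega,\\
  &\text{sources } f \text{ supported on } \Gamma_s \subset \partial\Omega,
    \quad \text{echoes } u \text{ recorded on } \Gamma_r \subset \partial\Omega,\\
  &\text{task: recover } c(x) \text{ in } \Omega,
\end{align*}
```

where allowing the source set $\Gamma_s$ and the receiver set $\Gamma_r$ to be disjoint corresponds to the far-apart sources and receivers of the first and fifth articles.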