893 results for statistical method
Abstract:
Reliable budget/cost estimates for road maintenance and rehabilitation are subject to uncertainty and variability in road asset condition and in the characteristics of road users. The CRC CI research project 2003-029-C ‘Maintenance Cost Prediction for Road’ developed a method for assessing variation and reliability in budget/cost estimates for road maintenance and rehabilitation. The method is based on probability-based reliability theory and statistical methods. The next stage of the current project is to apply the developed method to predict maintenance/rehabilitation budgets/costs of large networks for strategic investment. The first task is to assess the variability of road data. This report presents initial results of the analysis in assessing the variability of road data. A case study of the analysis for dry non-reactive soil is presented to demonstrate the concept of analysing the variability of road data for large road networks. In assessing the variability of road data, large road networks were divided into categories with common characteristics according to soil and climatic conditions, pavement conditions, pavement types, surface types and annual average daily traffic. The probability distributions, statistical means and standard deviations of asset conditions and annual average daily traffic for each category were quantified. The probability distributions and statistical information obtained in this analysis will be used to assess the variation and reliability in budget/cost estimates at a later stage. Conventionally, the mean values of asset data for each category are used as inputs to investment analysis, so the variability of asset data within each category is not taken into account. The case study demonstrates that the method is practical for analysing large road networks for maintenance/rehabilitation investment while taking the variability of road data into account.
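As a rough illustration of the categorisation and quantification step described above, the following sketch groups a hypothetical road-asset table by category and computes the mean, standard deviation and a fitted probability distribution of a condition measure per category; the column names, category definitions and the choice of a normal distribution are assumptions for illustration only.

```python
import pandas as pd
from scipy import stats

# Hypothetical asset table: one row per road segment (column names are illustrative).
roads = pd.DataFrame({
    "soil_climate":  ["dry_non_reactive"] * 3 + ["wet_reactive"] * 3,
    "pavement_type": ["granular", "granular", "granular", "asphalt", "asphalt", "asphalt"],
    "roughness_iri": [2.1, 2.6, 2.4, 3.4, 2.9, 3.1],        # asset condition (IRI, m/km)
    "aadt":          [1200, 950, 1100, 4300, 8800, 6100],   # annual average daily traffic
})

# Group segments into categories with common characteristics and quantify the
# mean, standard deviation and a fitted distribution of the condition data.
for (soil, pavement), group in roads.groupby(["soil_climate", "pavement_type"]):
    mean, std = group["roughness_iri"].mean(), group["roughness_iri"].std()
    mu, sigma = stats.norm.fit(group["roughness_iri"])  # normal distribution assumed
    print(f"{soil}/{pavement}: mean={mean:.2f}, std={std:.2f}, fitted N({mu:.2f}, {sigma:.2f})")
```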
Abstract:
The measurement error model is a well-established statistical method for regression problems in the medical sciences, although it is rarely used in ecological studies. While the situations in which it is appropriate may be less common in ecology, there are instances in which there may be benefits in its use for prediction and for estimation of parameters of interest. We have chosen to explore this topic using a conditional independence model in a Bayesian framework fitted with a Gibbs sampler, as this gives a great deal of flexibility, allowing us to analyse a number of different models without losing generality. Using simulations and two examples, we show how the conditional independence model can be used in ecology, and when it is appropriate.
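A minimal sketch of such a measurement error model is given below, assuming a latent true covariate observed with error and an outcome that is conditionally independent of the noisy measurement given the true value. The original work used a Gibbs sampler; here PyMC's generic MCMC sampler stands in, and the variable names and prior choices are illustrative assumptions.

```python
import numpy as np
import pymc as pm

# Simulated data: w is an error-prone measurement of the true covariate x.
rng = np.random.default_rng(1)
n = 50
x_true = rng.normal(0.0, 1.0, n)
w = x_true + rng.normal(0.0, 0.3, n)                    # measurement error
y = 1.0 + 2.0 * x_true + rng.normal(0.0, 0.5, n)        # ecological response

with pm.Model() as model:
    # Latent true covariate (conditional independence: y and w are linked only through x).
    x = pm.Normal("x", mu=0.0, sigma=1.0, shape=n)
    sigma_w = pm.HalfNormal("sigma_w", sigma=1.0)
    sigma_y = pm.HalfNormal("sigma_y", sigma=1.0)
    alpha = pm.Normal("alpha", mu=0.0, sigma=10.0)
    beta = pm.Normal("beta", mu=0.0, sigma=10.0)

    pm.Normal("w_obs", mu=x, sigma=sigma_w, observed=w)                      # measurement model
    pm.Normal("y_obs", mu=alpha + beta * x, sigma=sigma_y, observed=y)       # outcome model

    trace = pm.sample(1000, tune=1000, chains=2)
```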
Abstract:
The effect of sample geometry on the melting rates of burning iron rods was assessed. Promoted-ignition tests were conducted with rods having cylindrical, rectangular, and triangular cross-sectional shapes over a range of cross-sectional areas. The regression rate of the melting interface (RRMI) was assessed using a statistical approach which enabled the quantification of confidence levels for the observed differences in RRMI. Statistically significant differences in RRMI were observed for rods with the same cross-sectional area but different cross-sectional shape. The magnitude of the proportional difference in RRMI increased with the cross-sectional area. Triangular rods had the highest RRMI, followed by rectangular rods, and then cylindrical rods. The dependence of RRMI on rod shape is shown to relate to the action of molten metal at corners. The corners of the rectangular and triangular rods melted faster than the faces due to their locally higher surface area to volume ratios. This phenomenon altered the attachment geometry between liquid and solid phases, increasing the surface area available for heat transfer, causing faster melting. Findings relating to the application of standard flammability test results in industrial situations are also presented.
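The report's full statistical procedure is not detailed in this abstract; as a hedged illustration of quantifying confidence in a difference between mean melting rates, the sketch below applies a Welch two-sample t-test and an approximate confidence interval to made-up RRMI values.

```python
import numpy as np
from scipy import stats

# Hypothetical RRMI measurements (mm/s) for rods of equal cross-sectional area
# but different cross-sectional shape; values are invented for illustration.
rrmi_triangular = np.array([8.1, 8.4, 7.9, 8.6, 8.2])
rrmi_cylindrical = np.array([6.9, 7.2, 7.0, 7.4, 6.8])

# Welch's t-test: is the difference in mean RRMI statistically significant?
t_stat, p_value = stats.ttest_ind(rrmi_triangular, rrmi_cylindrical, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# Approximate 95% confidence interval for the difference in mean RRMI.
diff = rrmi_triangular.mean() - rrmi_cylindrical.mean()
se = np.sqrt(rrmi_triangular.var(ddof=1) / len(rrmi_triangular)
             + rrmi_cylindrical.var(ddof=1) / len(rrmi_cylindrical))
print(f"difference = {diff:.2f} mm/s, 95% CI ~ ({diff - 1.96 * se:.2f}, {diff + 1.96 * se:.2f})")
```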
Abstract:
Approximate Bayesian computation has become an essential tool for the analysis of complex stochastic models when the likelihood function is numerically unavailable. However, the well-established statistical method of empirical likelihood provides another route to such settings that bypasses simulations from the model and the choices of the approximate Bayesian computation parameters (summary statistics, distance, tolerance), while being convergent in the number of observations. Furthermore, bypassing model simulations may lead to significant time savings in complex models, for instance those found in population genetics. The Bayesian computation with empirical likelihood algorithm we develop in this paper also provides an evaluation of its own performance through an associated effective sample size. The method is illustrated using several examples, including estimation of standard distributions, time series, and population genetics models.
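A minimal sketch of the Bayesian computation with empirical likelihood idea for a single mean parameter is shown below: parameters drawn from the prior are weighted by their empirical likelihood, and the associated effective sample size is reported as a self-diagnostic. The function names and prior are illustrative, and the simple mean constraint stands in for the problem-specific estimating equations used in practice.

```python
import numpy as np
from scipy.optimize import brentq

def log_empirical_likelihood_mean(data, mu):
    """Log empirical likelihood of a candidate mean mu (Owen's EL for a mean)."""
    d = data - mu
    if d.min() >= 0 or d.max() <= 0:
        return -np.inf                      # mu outside the convex hull of the data: EL is zero
    # Solve sum(d_i / (1 + lam * d_i)) = 0 for the Lagrange multiplier lam.
    lo = -1.0 / d.max() + 1e-10
    hi = -1.0 / d.min() - 1e-10
    lam = brentq(lambda l: np.sum(d / (1.0 + l * d)), lo, hi)
    return -np.sum(np.log(len(data) * (1.0 + lam * d)))

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.0, size=50)

# Bayesian computation with empirical likelihood: draw from the prior,
# weight each draw by its empirical likelihood, then reweight the sample.
prior_draws = rng.normal(0.0, 5.0, size=5000)
log_w = np.array([log_empirical_likelihood_mean(data, mu) for mu in prior_draws])
w = np.exp(log_w - log_w.max())
w /= w.sum()

posterior_mean = np.sum(w * prior_draws)
ess = 1.0 / np.sum(w ** 2)                  # effective sample size as a self-diagnostic
print(f"posterior mean ~ {posterior_mean:.3f}, ESS ~ {ess:.0f}")
```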
Abstract:
This paper presents the application of a statistical method for model structure selection of lift-drag and viscous damping components in ship manoeuvring models. The damping model is posed as a family of linear stochastic models, which is postulated based on previous work in the literature. A nested hypothesis-testing problem is then considered. The testing reduces to a recursive comparison of two competing models, for which optimal tests in the Neyman sense exist. The method yields a preferred model structure and its initial parameter estimates. Alternatively, the method can give a reduced set of likely models. Using simulated data, we study how the selection method performs when there is both uncorrelated and correlated noise in the measurements. The first case is related to instrumentation noise, whereas the second is related to spurious wave-induced motion often present during sea trials. We then consider the model structure selection of a modern high-speed trimaran ferry from full-scale trial data.
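The Neyman-optimal tests used in the paper are not reproduced here; as a hedged illustration of choosing between two nested damping model structures, the sketch below fits a reduced and a full linear regression to simulated data and compares them with a standard F-test (regressor names and the candidate terms are assumptions).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500
v = rng.uniform(1.0, 10.0, n)                    # e.g. surge speed
r = rng.normal(0.0, 0.2, n)                      # e.g. yaw rate
# Simulated damping force: linear term only (the quadratic term is truly absent).
force = 3.0 * v + rng.normal(0.0, 1.0, n)

def rss(X, y):
    """Residual sum of squares of an ordinary least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

# Reduced model: linear damping only.  Full model: adds a |v|*v (quadratic) term.
X_reduced = np.column_stack([v, r])
X_full = np.column_stack([v, r, np.abs(v) * v])

rss_r, rss_f = rss(X_reduced, force), rss(X_full, force)
df_extra = X_full.shape[1] - X_reduced.shape[1]
df_resid = n - X_full.shape[1]
F = ((rss_r - rss_f) / df_extra) / (rss_f / df_resid)
p_value = stats.f.sf(F, df_extra, df_resid)
print(f"F = {F:.2f}, p = {p_value:.3f}  (large p: keep the reduced structure)")
```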
Abstract:
Introduction: Natural product provenance is important in the food, beverage and pharmaceutical industries, for consumer confidence and for its health implications. Raman spectroscopy has powerful molecular fingerprinting abilities, and the sharp peaks of Surface Enhanced Raman Spectroscopy (SERS) allow distinction between minimally different molecules, so it should be suitable for this purpose. Methods: Naturally caffeinated beverages containing Guarana extract, coffee, and Red Bull energy drink as a synthetic caffeinated beverage for comparison (20 µL each) were reacted 1:1 with gold nanoparticles functionalised with anti-caffeine antibody (ab15221) for 10 minutes, air dried and analysed in a micro-Raman instrument. The spectral data were processed using Principal Component Analysis (PCA). Results: The PCA showed that Guarana-sourced caffeine differed significantly from synthetic caffeine (Red Bull) on component 1 (containing 76.4% of the variance in the data); see Figure 1. The coffee-containing beverages, in particular Robert Timms (instant coffee), were very similar on component 1, but the barista espresso showed minor variance on component 1. Both coffee-sourced caffeine samples differed from Red Bull on component 2 (20% of the variance). [Figure 1: PCA comparing a naturally caffeinated beverage containing Guarana with coffee.] Discussion: PCA is an unsupervised multivariate statistical method that determines patterns within data. Figure 1 shows that caffeine in Guarana is notably different from synthetic caffeine. Other researchers have shown that caffeine in Guarana plants is complexed with tannins. In Figure 1, naturally sourced/lightly processed caffeine (Monster Energy, espresso) is more distinct from synthetic (Red Bull)/highly processed (Robert Timms) caffeine, which is consistent with this finding and demonstrates the technique's applicability. Guarana provenance is important because it is still largely hand-produced and demand is escalating as its benefits gain recognition. This could be a powerful technique for Guarana provenance, and may extend to other industries where provenance/authentication is required, e.g. the wine or natural pharmaceutical industries.
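As an illustration of the PCA step on spectra (with entirely synthetic data standing in for the SERS measurements), a minimal sketch using scikit-learn might look as follows.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical matrix of baseline-corrected SERS spectra: one row per sample
# (guarana drink, espresso, instant coffee, Red Bull), one column per Raman shift.
rng = np.random.default_rng(0)
spectra = rng.normal(size=(12, 1024))            # real spectra would replace the random data
labels = ["guarana"] * 3 + ["espresso"] * 3 + ["instant"] * 3 + ["red_bull"] * 3

pca = PCA(n_components=2)
scores = pca.fit_transform(spectra)

for label, (pc1, pc2) in zip(labels, scores):
    print(f"{label:>8}: PC1 = {pc1:6.2f}, PC2 = {pc2:6.2f}")
print("explained variance ratio:", pca.explained_variance_ratio_)
```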
Abstract:
Objectives Directly measuring disease incidence in a population is difficult and not feasible to do routinely. We describe the development and application of a new method of estimating at a population level the number of incident genital chlamydia infections, and the corresponding incidence rates, by age and sex using routine surveillance data. Methods A Bayesian statistical approach was developed to calibrate the parameters of a decision-pathway tree against national data on numbers of notifications and tests conducted (2001-2013). Independent beta probability density functions were adopted for priors on the time-independent parameters; the shape parameters of these beta distributions were chosen to match prior estimates sourced from peer-reviewed literature or expert opinion. To best facilitate the calibration, multivariate Gaussian priors on (the logistic transforms of) the time-dependent parameters were adopted, using the Matérn covariance function to favour changes over consecutive years and across adjacent age cohorts. The model outcomes were validated by comparing them with other independent empirical epidemiological measures i.e. prevalence and incidence as reported by other studies. Results Model-based estimates suggest that the total number of people acquiring chlamydia per year in Australia has increased by ~120% over 12 years. Nationally, an estimated 356,000 people acquired chlamydia in 2013, which is 4.3 times the number of reported diagnoses. This corresponded to a chlamydia annual incidence estimate of 1.54% in 2013, increased from 0.81% in 2001 (~90% increase). Conclusions We developed a statistical method which uses routine surveillance (notifications and testing) data to produce estimates of the extent and trends in chlamydia incidence.
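The sketch below illustrates one piece of this setup: constructing a Matérn covariance over a year-by-age grid and drawing the logistic transform of a time-dependent parameter from the resulting Gaussian prior. The grid, length scales and prior mean are illustrative assumptions, not the values used in the study.

```python
import numpy as np
from sklearn.gaussian_process.kernels import Matern

# Grid of (year, age-group midpoint) combinations for a time-dependent parameter.
years = np.arange(2001, 2014)
age_groups = np.array([17.0, 22.0, 27.0, 32.0])
grid = np.array([(y, a) for y in years for a in age_groups], dtype=float)

# Matérn covariance favouring smooth changes over consecutive years and across
# adjacent age cohorts, applied to the logistic transform of the parameter
# (e.g. the probability that an infected person gets tested).
kernel = Matern(length_scale=[3.0, 5.0], nu=1.5)
K = kernel(grid) + 1e-8 * np.eye(len(grid))      # small jitter for numerical stability

# One draw from the multivariate Gaussian prior, mapped back to the probability scale.
rng = np.random.default_rng(0)
logit_draw = rng.multivariate_normal(mean=np.full(len(grid), -1.0), cov=0.25 * K)
prob_draw = 1.0 / (1.0 + np.exp(-logit_draw))
print(prob_draw.reshape(len(years), len(age_groups)).round(2))
```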
Abstract:
Some statistical procedures already available in the literature are employed in developing the water quality index, WQI. The complexity and interdependency of the physical and chemical processes in water can be explained more easily if statistical approaches are applied to water quality indexing. The most popular statistical method used in developing the WQI is principal component analysis (PCA). In the literature, WQI development based on classical PCA has mostly used water quality data that have been transformed and normalized. Outliers may be retained in or eliminated from the analysis. However, the classical mean and sample covariance matrix used in the classical PCA methodology are not reliable if outliers exist in the data. Since the presence of outliers may affect the computation of the principal components, robust principal component analysis (RPCA) should be used. Focusing on the Langat River, the RPCA-WQI was introduced for the first time in this study to re-calculate the DOE-WQI. Results show that the RPCA-WQI is capable of capturing a distribution similar to the existing DOE-WQI.
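One common way to obtain a robust PCA is to replace the classical mean and sample covariance with a robust estimate such as the Minimum Covariance Determinant (MCD) and take principal components of that; whether this matches the exact RPCA variant used in the study is an assumption. A minimal sketch:

```python
import numpy as np
from sklearn.covariance import MinCovDet

# Hypothetical matrix of standardized water-quality parameters
# (e.g. DO, BOD, COD, NH3-N, SS, pH), one row per monitoring sample.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 6))
X[:5] *= 8.0                                     # a few gross outliers

# Robust location and scatter via the Minimum Covariance Determinant estimator,
# then principal components from the robust covariance matrix.
mcd = MinCovDet(random_state=0).fit(X)
eigvals, eigvecs = np.linalg.eigh(mcd.covariance_)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Robust principal component scores (centred on the robust location).
scores = (X - mcd.location_) @ eigvecs
print("explained variance ratio:", (eigvals / eigvals.sum()).round(3))
```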
Abstract:
Genome-wide association studies (GWASs) have been successful at identifying single-nucleotide polymorphisms (SNPs) highly associated with common traits; however, a great deal of the heritable variation associated with common traits remains unaccounted for within the genome. Genome-wide complex trait analysis (GCTA) is a statistical method that applies a linear mixed model to estimate the phenotypic variance of complex traits explained by genome-wide SNPs, including those not associated with the trait in a GWAS. We applied GCTA to 8 cohorts containing 7096 case and 19 455 control individuals of European ancestry in order to examine the missing heritability present in Parkinson's disease (PD). We meta-analyzed our initial results to produce robust heritability estimates for PD types across cohorts. Our results identify 27% (95% CI 17-38, P = 8.08E-08) of phenotypic variance associated with all types of PD, 15% (95% CI -0.2 to 33, P = 0.09) of phenotypic variance associated with early-onset PD and 31% (95% CI 17-44, P = 1.34E-05) of phenotypic variance associated with late-onset PD. This is a substantial increase over the genetic variance identified by top GWAS hits alone (between 3 and 5%) and indicates that there are substantially more risk loci to be identified. Our results suggest that although GWASs are a useful tool for identifying the most common variants associated with complex disease, many common variants of small effect remain to be discovered.
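GCTA estimates the SNP-based variance component by REML in a linear mixed model; as a lightweight stand-in, the sketch below builds the genetic relationship matrix (GRM) from standardized genotypes and applies Haseman-Elston regression, a simpler moment-based estimator of the same quantity, on simulated data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_snp = 500, 2000

# Simulate genotypes (0/1/2 allele counts) and a phenotype with true h2 = 0.3.
freqs = rng.uniform(0.1, 0.9, n_snp)
genotypes = rng.binomial(2, freqs, size=(n_ind, n_snp)).astype(float)
Z = (genotypes - 2 * freqs) / np.sqrt(2 * freqs * (1 - freqs))   # standardized genotypes
effects = rng.normal(0, np.sqrt(0.3 / n_snp), n_snp)
pheno = Z @ effects + rng.normal(0, np.sqrt(0.7), n_ind)

# Genetic relationship matrix (GRM), as used in the GCTA linear mixed model.
grm = Z @ Z.T / n_snp

# Haseman-Elston regression: regress phenotype cross-products on off-diagonal GRM entries.
y = (pheno - pheno.mean()) / pheno.std()
iu = np.triu_indices(n_ind, k=1)
h2_est = np.sum(grm[iu] * np.outer(y, y)[iu]) / np.sum(grm[iu] ** 2)
print(f"estimated SNP heritability ~ {h2_est:.2f}")
```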
Abstract:
Remote sensing provides a lucid and effective means of crop coverage identification. Crop coverage identification is a very important technique, as it provides vital information on the type and extent of crops cultivated in a particular area. This information has immense potential in planning further cultivation activities and in optimal usage of the available fertile land. As the frontiers of space technology advance, the knowledge derived from satellite data has also grown in sophistication. Further, image classification forms the core of the solution to the crop coverage identification problem. No single classifier can satisfactorily solve all the basic crop cover mapping problems of a cultivated region. We present in this paper the experimental results of multiple classification techniques for the problem of crop cover mapping of a cultivated region. A detailed comparison of algorithms inspired by the social behaviour of insects with a conventional statistical method for crop classification is presented. These include the Maximum Likelihood Classifier (MLC), Particle Swarm Optimisation (PSO) and Ant Colony Optimisation (ACO) techniques. A high-resolution satellite image was used for the experiments.
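The Maximum Likelihood Classifier in this context is the classic Gaussian per-class likelihood rule; under that Gaussian assumption it coincides with quadratic discriminant analysis, so a hedged sketch can reuse scikit-learn's implementation (band values, crop labels and the image below are invented).

```python
import numpy as np
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

# Hypothetical training pixels: spectral band values and crop labels
# digitised from ground-truth polygons of the cultivated region.
rng = np.random.default_rng(0)
rice  = rng.normal([0.12, 0.35, 0.45], 0.03, size=(100, 3))
wheat = rng.normal([0.20, 0.30, 0.55], 0.03, size=(100, 3))
cane  = rng.normal([0.15, 0.40, 0.65], 0.03, size=(100, 3))
X = np.vstack([rice, wheat, cane])
y = np.array(["rice"] * 100 + ["wheat"] * 100 + ["sugarcane"] * 100)

# Gaussian maximum likelihood classification (equivalent to QDA with equal priors).
mlc = QuadraticDiscriminantAnalysis(priors=[1/3, 1/3, 1/3]).fit(X, y)

# Classify every pixel of a (hypothetical) 3-band image.
image = rng.uniform(0.1, 0.7, size=(50, 50, 3))
crop_map = mlc.predict(image.reshape(-1, 3)).reshape(50, 50)
print(np.unique(crop_map, return_counts=True))
```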
Abstract:
Socioeconomic health inequalities have been widely documented, with a lower social position being associated with poorer physical and general health and higher mortality. For mental health the results have been more varied. However, the mechanisms by which the various dimensions of socioeconomic circumstances are associated with different domains of health are not yet fully understood. This is related to a lack of studies tackling the interrelations and pathways between multiple dimensions of socioeconomic circumstances and domains of health. In particular, evidence from comparative studies of populations from different national contexts that consider the complexity of the causes of socioeconomic health inequalities is needed. The aim of this study was to examine the associations of multiple socioeconomic circumstances with physical and mental health, more specifically physical functioning and common mental disorders. This was done in a comparative setting of two cohorts of white-collar public sector employees, one from Finland and one from Britain. The study also sought to find explanations for the observed associations between economic difficulties and health by analysing the contribution of health behaviours, living arrangements and work-family conflicts. The survey data were derived from the Finnish Helsinki Health Study baseline surveys in 2000-2002 among the City of Helsinki employees aged 40-60 years, and from the fifth phase of the London-based Whitehall II study (1997-9) which is a prospective study of civil servants aged 35-55 years at the time of recruitment. The data collection in the two countries was harmonised to safeguard maximal comparability. Physical functioning was measured with the Short Form (SF-36) physical component summary and common mental disorders with the General Health Questionnaire (GHQ-12). Socioeconomic circumstances were parental education, childhood economic difficulties, own education, occupational class, household income, housing tenure, and current economic difficulties. Further explanatory factors were health behaviours, living arrangements and work-family conflicts. The main statistical method used was logistic regression analysis. Analyses were conducted separately for the two sexes and two cohorts. Childhood and current economic difficulties were associated with poorer physical functioning and common mental disorders generally in both cohorts and sexes. Conventional dimensions of socioeconomic circumstances i.e. education, occupational class and income were associated with physical functioning and mediated each other’s effects, but in different ways in the two cohorts: education was more important in Helsinki and occupational class in London. The associations of economic difficulties with health were partly explained by work-family conflicts and other socioeconomic circumstances in both cohorts and sexes. In conclusion, this study on two country-specific cohorts confirms that different dimensions of socioeconomic circumstances are related but not interchangeable. They are also somewhat differently associated with physical and mental domains of health. In addition to conventionally measured dimensions of past and present socioeconomic circumstances, economic difficulties should be taken into account in studies and attempts to reduce health inequalities. Further explanatory factors, particularly conflicts between work and family, should also be considered when aiming to reduce inequalities and maintain the health of employees.
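A minimal sketch of the kind of logistic regression analysis described, using simulated data and invented variable names (a binary indicator of poor physical functioning regressed on education and current economic difficulties), might look as follows; the actual study fitted such models separately by sex and cohort.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
education = rng.choice(["basic", "mid", "high"], size=n)
econ_difficulties = rng.binomial(1, 0.3, size=n)

# Simulated outcome: poor physical functioning is more likely with lower education
# and current economic difficulties (coefficients are invented for illustration).
logit_p = -1.5 + 0.8 * (education == "basic") + 0.4 * (education == "mid") + 0.9 * econ_difficulties
poor_functioning = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit_p)))

df = pd.DataFrame({"poor_functioning": poor_functioning,
                   "education": education,
                   "econ_difficulties": econ_difficulties})

# Logistic regression of poor functioning on education and current economic difficulties.
model = smf.logit("poor_functioning ~ C(education, Treatment(reference='high')) + econ_difficulties",
                  data=df).fit(disp=False)
print(model.params.round(2))
```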
Abstract:
The Baltic countries share public health problems typical of most Eastern European transition economies: morbidity and mortality from non-communicable diseases are higher than in Western European countries. This situation has many similarities to that of a neighbouring country, Finland, during the late 1960s. There are reasons to expect that health disadvantage may be increasing among the less advantaged population groups in the Baltic countries. The evidence on social differences in health in the Baltic countries is, however, scattered across studies using different methodologies, making comparisons difficult. This study aims to bridge the evidence gap by providing comparable, standardized cross-sectional and time-trend analyses of the social patterning of health and two key health behaviours, i.e. smoking and drinking, in Estonia, Latvia, Lithuania and Finland in 1994-2004, representing Eastern European transition countries and a stable Western European country. The data consisted of similar cross-sectional postal surveys conducted in 1994, 1996, 1998, 2000, 2002 and 2004 on adult populations (aged 20-64 years) in Estonia (n=9049), Latvia (n=7685), Lithuania (n=11634) and Finland (n=18821) in connection with the Finbalt Health Monitor project. The main statistical method was logistic regression analysis. Perceived health was found to be worse among both men and women in the Baltic countries than in Finland. Poor health was associated with older age and lower education in all countries studied. Urbanisation and marital status were not consistently related to health. The existing educational inequalities in health remained generally stable over time from 1994 to 2004. In the Baltic countries, however, improvement in perceived health was found mainly among the better educated men and women. Daily smoking was associated with young age, lower education and psychological distress in all countries. Among women, smoking was also associated with urbanisation in all countries except Estonia. Among Lithuanian women the educational gradient in smoking was weakest, and the overall prevalence of smoking increased over time. Drinking was generally associated with young age among men and women, and with education among women: better educated women were more often frequent drinkers and less educated women binge drinkers. The exception was Latvia, where both frequent drinking and binge drinking were associated with low education among men and women. In conclusion, the Baltic countries are likely to resemble Western European countries rather than other transition societies. While health inequalities did not markedly change, substantial inequalities remain, and there were indications of favourable developments mainly among the better educated. Pressures towards increasing health inequalities may therefore become visible in the future, which would be in accordance with the results on smoking and drinking in this study.
Abstract:
Efficient and reliable diagnostic tools for the routine indexing and certification of clean propagating material are essential for the management of pospiviroid diseases in horticultural crops. This study describes the development of a truly multiplexed diagnostic method for the detection and identification of all nine currently recognized pospiviroid species in one assay using Luminex bead-based suspension array technology. In addition, a new data-driven statistical method is presented for establishing thresholds for positivity for individual assays within multiplexed arrays. When applied to the multiplexed array data generated in this study, the new method was shown to have better control of false positive and false negative results than two other commonly used approaches for setting thresholds. The 11-plex Luminex MagPlex-TAG pospiviroid array described here has a unique hierarchical assay design, incorporating a near-universal assay in addition to nine species-specific assays, and a co-amplified plant internal control assay for quality assurance purposes. All assays of the multiplexed array were shown to be 100% specific, sensitive and reproducible. The multiplexed array described herein is robust, easy to use, displays unambiguous results and has strong potential for use in routine pospiviroid indexing to improve disease management strategies.
Abstract:
A vast amount of public services and goods are contracted through procurement auctions. Therefore it is very important to design these auctions in an optimal way. Typically, we are interested in two different objectives. The first objective is efficiency. Efficiency means that the contract is awarded to the bidder that values it the most, which in the procurement setting means the bidder that has the lowest cost of providing a service with a given quality. The second objective is to maximize public revenue. Maximizing public revenue means minimizing the costs of procurement. Both of these goals are important from the welfare point of view. In this thesis, I analyze field data from procurement auctions and show how empirical analysis can be used to help design the auctions to maximize public revenue. In particular, I concentrate on how competition, which means the number of bidders, should be taken into account in the design of auctions. In the first chapter, the main policy question is whether the auctioneer should spend resources to induce more competition. The information paradigm is essential in analyzing the effects of competition. We talk of a private values information paradigm when the bidders know their valuations exactly. In a common value information paradigm, the information about the value of the object is dispersed among the bidders. With private values more competition always increases the public revenue but with common values the effect of competition is uncertain. I study the effects of competition in the City of Helsinki bus transit market by conducting tests for common values. I also extend an existing test by allowing bidder asymmetry. The information paradigm seems to be that of common values. The bus companies that have garages close to the contracted routes are influenced more by the common value elements than those whose garages are further away. Therefore, attracting more bidders does not necessarily lower procurement costs, and thus the City should not implement costly policies to induce more competition. In the second chapter, I ask how the auctioneer can increase its revenue by changing contract characteristics like contract sizes and durations. I find that the City of Helsinki should shorten the contract duration in the bus transit auctions because that would decrease the importance of common value components and cheaply increase entry which now would have a more beneficial impact on the public revenue. Typically, cartels decrease the public revenue in a significant way. In the third chapter, I propose a new statistical method for detecting collusion and compare it with an existing test. I argue that my test is robust to unobserved heterogeneity unlike the existing test. I apply both methods to procurement auctions that contract snow removal in schools of Helsinki. According to these tests, the bidding behavior of two of the bidders seems consistent with a contract allocation scheme.
Abstract:
In a statistical downscaling model, it is important to remove the bias of General Circulation Model (GCM) outputs resulting from various assumptions about the geophysical processes. One conventional method for correcting such bias is standardisation, which is used prior to statistical downscaling to reduce systematic bias in the means and variances of GCM predictors relative to observations or National Centers for Environmental Prediction/National Center for Atmospheric Research (NCEP/NCAR) reanalysis data. A major drawback of standardisation is that, while it may reduce the bias in the mean and variance of the predictor variable, it is much harder to accommodate biases in large-scale patterns of atmospheric circulation in GCMs (e.g. shifts in the dominant storm track relative to observed data) or unrealistic inter-variable relationships. When predicting hydrologic scenarios, such uncorrected bias must be addressed; otherwise it will propagate through the computations for subsequent years. A statistical method based on equi-probability transformation is applied in this study after downscaling to remove the bias in the predicted hydrologic variable relative to the observed hydrologic variable for a baseline period. The model is applied to the prediction of monsoon streamflow of the Mahanadi River in India from GCM-generated large-scale climatological data.
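The abstract does not spell out the exact form of the equi-probability transformation; a common reading is quantile mapping between the downscaled and observed distributions over the baseline period, sketched below with made-up flow values.

```python
import numpy as np

def quantile_map(predicted, obs_baseline, model_baseline):
    """Equi-probability transformation (quantile mapping): map each predicted value
    through the model's baseline CDF and back through the observed baseline CDF."""
    # Non-exceedance probability of each predicted value under the model baseline.
    probs = np.searchsorted(np.sort(model_baseline), predicted) / len(model_baseline)
    # Corresponding quantiles of the observed baseline distribution.
    return np.quantile(obs_baseline, np.clip(probs, 0.0, 1.0))

rng = np.random.default_rng(0)
obs_baseline = rng.gamma(shape=2.0, scale=500.0, size=360)        # observed baseline flows
model_baseline = obs_baseline * 0.7 + 150.0                       # biased downscaled baseline flows
predicted_future = rng.gamma(shape=2.0, scale=550.0, size=120) * 0.7 + 150.0

corrected = quantile_map(predicted_future, obs_baseline, model_baseline)
print("raw mean:", predicted_future.mean().round(1), "corrected mean:", corrected.mean().round(1))
```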