832 results for accuracy analysis


Relevance:

30.00%

Publisher:

Abstract:

This thesis describes the development of a simple and accurate method for estimating the quantity and composition of household waste arisings. The method is based on the fundamental tenet that waste arisings can be predicted from information on the demographic and socio-economic characteristics of households, thus reducing the need for the direct measurement of waste arisings to that necessary for the calibration of a prediction model. The aim of the research is twofold: firstly to investigate the generation of waste arisings at the household level, and secondly to devise a method for supplying information on waste arisings to meet the needs of waste collection and disposal authorities, policy makers at both national and European level, and the manufacturers of plant and equipment for waste sorting and treatment. The research was carried out in three phases: theoretical, empirical and analytical. In the theoretical phase, specific testable hypotheses were formulated concerning the process of waste generation at the household level. The empirical phase of the research involved an initial questionnaire survey of 1277 households to obtain data on their socio-economic characteristics, and the subsequent sorting of waste arisings from each of the households surveyed. The analytical phase was divided between (a) the testing of the research hypotheses by matching each household's waste against its demographic/socio-economic characteristics, (b) the development of statistical models capable of predicting the waste arisings from an individual household, and (c) the development of a practical method for obtaining area-based estimates of waste arisings using readily available data from the national census. The latter method was found to represent a substantial improvement over conventional methods of waste estimation in terms of both accuracy and spatial flexibility.
The research therefore represents a substantial contribution both to scientific knowledge of the process of household waste generation, and to the practical management of waste arisings.
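The calibration idea described above, fitting a prediction model of waste arisings to household socio-economic data, can be sketched as an ordinary least-squares regression. The predictor variables and all figures below are invented for illustration and are not the thesis's calibrated model:

```python
import numpy as np

# Illustrative predictors per household: occupants, rooms, car ownership (0/1).
# Response: weekly waste arisings in kg. All figures are invented for the sketch.
X = np.array([
    [1, 3, 0],
    [2, 4, 1],
    [4, 5, 1],
    [3, 4, 0],
    [5, 6, 1],
], dtype=float)
y = np.array([6.1, 9.8, 15.2, 11.0, 18.4])

# Add an intercept column and fit by ordinary least squares.
A = np.hstack([np.ones((X.shape[0], 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict_waste(occupants, rooms, has_car):
    """Predict weekly waste (kg) for one household from the fitted model."""
    return float(coef @ np.array([1.0, occupants, rooms, has_car]))
```

In the area-based step, the same fitted coefficients could be applied to census counts of household types rather than to individual households.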

Relevance:

30.00%

Publisher:

Abstract:

Due to the failure of PRARE, the orbital accuracy of ERS-1 is typically 10-15 cm radially, compared with 3-4 cm for TOPEX/Poseidon. To gain the most from these simultaneous datasets it is necessary to improve the orbital accuracy of ERS-1 so that it is commensurate with that of TOPEX/Poseidon. For the integration of these two datasets it is also necessary to determine the altimeter and sea state biases for each of the satellites. Several models for the sea state bias of ERS-1 are considered by analysis of the ERS-1 single-satellite crossovers. The model adopted expresses the sea state bias as a percentage of the significant wave height, namely 5.95%. The removal of ERS-1 orbit error and the recovery of a relative bias between ERS-1 and TOPEX/Poseidon are both achieved by analysis of dual-crossover residuals. The gravitational-field-based radial orbit error is modelled by a finite Fourier expansion series, with the dominant frequencies determined by analysis of the JGM-2 covariance matrix. Periodic and secular terms to model the errors due to atmospheric density, solar radiation pressure and initial state vector mis-modelling are also solved for. Validation of the dataset unification consists of comparing the mean sea surface topographies and annual variabilities derived from both the corrected and uncorrected ERS-1 orbits with those derived from TOPEX/Poseidon. The global and regional geographically fixed/variable orbit errors are also analysed pre- and post-correction, and a significant reduction is noted. Finally, the use of dual/single-satellite crossovers and repeat-pass data for the calibration of ERS-2 with respect to ERS-1 and TOPEX/Poseidon is shown by calculating the ERS-1/2 sea state and relative biases.
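A finite Fourier expansion of the kind described above can be fitted to orbit-error residuals by linear least squares once the dominant frequencies are chosen. The frequencies and the synthetic "residual" signal below are invented for illustration; in the thesis the frequencies come from the JGM-2 covariance analysis:

```python
import numpy as np

# Synthetic residuals: a once-per-revolution signal plus noise (illustrative only).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 500)           # time in orbital revolutions
freqs = [1.0, 2.0]                        # assumed dominant frequencies (cycles/rev)
resid = 0.12 * np.cos(2 * np.pi * t) + 0.05 * np.sin(2 * np.pi * t) \
        + 0.01 * rng.standard_normal(t.size)

# Design matrix: one cosine/sine pair per frequency, plus a constant term.
cols = [np.ones_like(t)]
for f in freqs:
    cols += [np.cos(2 * np.pi * f * t), np.sin(2 * np.pi * f * t)]
A = np.column_stack(cols)

# Solve for the Fourier coefficients and remove the modelled error.
coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
corrected = resid - A @ coef

rms_before = float(np.std(resid))
rms_after = float(np.std(corrected))
```

The fitted cosine coefficient at one cycle per revolution recovers the injected 0.12 amplitude, and the residual RMS drops once the modelled terms are removed.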

Relevance:

30.00%

Publisher:

Abstract:

Visual perception is dependent on both light transmission through the eye and neuronal conduction through the visual pathway. Advances in clinical diagnostics and treatment modalities over recent years have increased the opportunities to improve the optical path and retinal image quality. Higher-order aberrations and retinal straylight are two major factors that influence light transmission through the eye and, ultimately, visual outcome. Recent technological advancements have brought these important factors into the clinical domain; however, the potential applications of these tools, and the considerations involved in interpreting their data, are widely underestimated. The purpose of this thesis was to validate and optimise wavefront analysers and a new clinical tool for the objective evaluation of intraocular scatter. The application of these methods in a clinical setting involving a range of conditions was also explored. The work was divided into two principal sections.

1. Wavefront aberrometry: optimisation, validation and clinical application. The main findings of this work were:

• Observer manipulation of the aberrometer increases variability by a factor of 3.
• Ocular misalignment can profoundly affect reliability, notably for off-axis aberrations.
• Aberrations measured with wavefront analysers using different principles are not interchangeable, with poor relationships and significant differences between values.
• Instrument myopia of around 0.30 D is induced when performing wavefront analysis in non-cyclopleged eyes; values can be as high as 3 D, being higher as the baseline level of myopia decreases. Associated accommodation changes may result in relevant changes to the aberration profile, particularly with respect to spherical aberration.
• Young adult healthy Caucasian eyes have significantly more spherical aberration than Asian eyes when matched for age, gender, axial length and refractive error. Axial length is significantly correlated with most components of the aberration profile.

2. Intraocular light scatter: evaluation of subjective measures, and validation and application of a new objective method utilising clinically derived wavefront patterns. The main findings of this work were:

• Subjective measures of clinical straylight are highly repeatable. Three measurements are suggested as the optimum number for increased reliability.
• Significant differences in straylight values were found for contact lenses designed for contrast enhancement compared with clear lenses of the same design and material specifications. Specifically, grey/green tints induced significantly higher values of retinal straylight.
• Wavefront patterns from a commercial Hartmann-Shack device can be used to obtain objective measures of scatter and are well correlated with subjective straylight values.
• Perceived retinal straylight was similar in groups of patients implanted with monofocal and multifocal intraocular lenses. Correlation between objective and subjective measurements of scatter is poor, possibly due to different illumination conditions between the testing procedures, or to a neural component which may alter with age.

Careful acquisition results in highly reproducible in vivo measures of higher-order aberrations; however, data from different devices are not interchangeable, which brings the accuracy of measurement into question. Objective measures of intraocular straylight can be derived from clinical aberrometry and may be of great diagnostic and management importance in the future.
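Aberrometer outputs are conventionally reported as Zernike coefficients, and a standard summary metric is the root-mean-square (RMS) wavefront error: for an orthonormal Zernike expansion it is simply the root sum of squares of the coefficients. A minimal sketch, with invented coefficient values:

```python
import math

# Hypothetical Zernike coefficients (microns) for higher-order terms,
# keyed by (radial order n, azimuthal frequency m). Values are invented.
coeffs = {
    (3, -1): 0.12,   # vertical coma
    (3,  1): -0.05,  # horizontal coma
    (4,  0): 0.20,   # primary spherical aberration
}

def rms_wavefront_error(zernike):
    """RMS wavefront error: root sum of squares of orthonormal coefficients."""
    return math.sqrt(sum(c * c for c in zernike.values()))

hoa_rms = rms_wavefront_error(coeffs)
```

Because the metric collapses the whole profile into one number, two devices can disagree on individual coefficients while reporting similar RMS values, which is one reason interchangeability has to be checked term by term.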

Relevance:

30.00%

Publisher:

Abstract:

This thesis demonstrates that the use of finite elements need not be confined to space alone, but that they may also be used in the time domain. It is shown that finite element methods may be used successfully to obtain the response of systems to applied forces, including, for example, the accelerations in a tall structure subjected to an earthquake shock. It is further demonstrated that at least one of these methods may be considered a practical alternative to more usual methods of solution. A detailed investigation of the accuracy and stability of finite element solutions is included, and methods of application to both single- and multi-degree-of-freedom systems are described. Solutions using two different temporal finite elements are compared with those obtained by conventional methods, and a comparison of computation times for the different methods is given. The application of finite element methods to distributed systems is described, using both separate discretizations in space and time and a combined space-time discretization. The inclusion of both viscous and hysteretic damping is shown to add little to the difficulty of the solution. Temporal finite elements are also seen to be of considerable interest when applied to non-linear systems, both when the system parameters are time-dependent and when they are functions of displacement. Solutions are given for many different examples, and the computer programs used for the finite element methods are included in an Appendix.
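For a single-degree-of-freedom system, a weighted-residual discretization in time with linear shape functions yields a step-by-step recurrence. The sketch below uses a trapezoidal recurrence on the first-order state form, purely as an illustration of time-stepping a damped oscillator; it is not presented as the thesis's own temporal-element formulation:

```python
import numpy as np

# SDOF oscillator: m x'' + c x' + k x = f(t), written as z' = A z + b(t)
# with state z = [x, x']. Parameters are illustrative.
m, c, k = 1.0, 0.1, 4.0
A = np.array([[0.0, 1.0], [-k / m, -c / m]])

def step(z, f0, f1, h):
    """One trapezoidal step of size h; f0, f1 are forces at the step ends."""
    b0 = np.array([0.0, f0 / m])
    b1 = np.array([0.0, f1 / m])
    I = np.eye(2)
    rhs = (I + 0.5 * h * A) @ z + 0.5 * h * (b0 + b1)
    return np.linalg.solve(I - 0.5 * h * A, rhs)

# Free vibration from x(0)=1, x'(0)=0: with c > 0 the energy must decay.
h, z = 0.01, np.array([1.0, 0.0])
for _ in range(1000):
    z = step(z, 0.0, 0.0, h)

energy = 0.5 * (k * z[0] ** 2 + m * z[1] ** 2)
```

The recurrence is unconditionally stable for this linear problem, which is the kind of accuracy and stability property the thesis investigates in detail for its temporal elements.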

Relevance:

30.00%

Publisher:

Abstract:

The finite element process is now used almost routinely as a tool of engineering analysis. From early days, a significant effort has been devoted to developing simple, cost-effective elements which adequately fulfill accuracy requirements. In this thesis we describe the development and application of one of the simplest elements available for the statics and dynamics of axisymmetric shells. A semi-analytic truncated-cone stiffness element has been formulated and implemented in a computer code: it has two nodes with five degrees of freedom at each node, circumferential variations in the displacement field are described in terms of trigonometric series, transverse shear is accommodated by means of a penalty function, and rotary inertia is allowed for. The element has been tested in a variety of applications in the statics and dynamics of axisymmetric shells subjected to a variety of boundary conditions. Good results have been obtained for both thin and thick shell cases.

Relevance:

30.00%

Publisher:

Abstract:

In this study, a new entropy measure known as kernel entropy (KerEnt), which quantifies the irregularity in a series, was applied to nocturnal oxygen saturation (SaO2) recordings. A total of 96 subjects suspected of suffering from sleep apnea-hypopnea syndrome (SAHS) took part in the study: 32 SAHS-negative and 64 SAHS-positive subjects. Their SaO2 signals were separately processed by means of KerEnt. Our results show that a higher degree of irregularity is associated with SAHS-positive subjects. Statistical analysis revealed significant differences between the KerEnt values of the SAHS-negative and SAHS-positive groups. The diagnostic utility of this parameter was studied by means of receiver operating characteristic (ROC) analysis. A classification accuracy of 81.25% (81.25% sensitivity and 81.25% specificity) was achieved. Repeated apneas during sleep increase irregularity in SaO2 data. This effect can be measured by KerEnt in order to detect SAHS. This non-linear measure can provide useful information for the development of alternative diagnostic techniques in order to reduce the demand for conventional polysomnography (PSG). © 2011 IEEE.
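The accuracy, sensitivity and specificity figures quoted above follow directly from a confusion matrix of test outcomes against the reference diagnosis. A generic sketch, with invented labels for eight hypothetical subjects:

```python
def diagnostic_metrics(y_true, y_pred):
    """Sensitivity, specificity and accuracy for binary labels (1 = SAHS-positive)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / len(y_true)
    return sensitivity, specificity, accuracy

# Invented outcomes, just to exercise the function.
sens, spec, acc = diagnostic_metrics([1, 1, 1, 1, 0, 0, 0, 0],
                                     [1, 1, 1, 0, 0, 0, 0, 1])
```

Sweeping a decision threshold on the KerEnt value and recomputing these metrics at each point is what produces the ROC curve described in the study.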

Relevance:

30.00%

Publisher:

Abstract:

Applying direct error counting, we compare the accuracy and evaluate the validity of different available numerical approaches to the estimation of the bit-error rate (BER) in 40-Gb/s return-to-zero differential phase-shift-keying transmission. As a particular example, we consider a system with in-line semiconductor optical amplifiers. We demonstrate that none of the existing models has an absolute superiority over the others. We also reveal the impact of the duty cycle on the accuracy of the BER estimates through the differently introduced Q-factors.
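The Q-factors referred to above map to BER estimates, under the usual Gaussian-noise assumption, via BER = (1/2) erfc(Q/√2); differently introduced Q-factors therefore yield different BER estimates for the same signal. A minimal sketch of the standard conversion:

```python
import math

def ber_from_q(q):
    """Gaussian-approximation BER for a given linear Q-factor."""
    return 0.5 * math.erfc(q / math.sqrt(2.0))

def q_from_db(q_db):
    """Convert a Q-factor quoted in dB (20 log10 Q) to linear units."""
    return 10.0 ** (q_db / 20.0)

# Q = 6 corresponds to the textbook BER of roughly 1e-9.
ber = ber_from_q(6.0)
```

Direct error counting, as used in the paper, sidesteps this formula entirely, which is what makes it a reference against which the Q-factor-based estimates can be judged.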

Relevance:

30.00%

Publisher:

Abstract:

Objective: Much recent research has applied nature-inspired algorithms to complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm, based on swarm intelligence and derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of natural ant behaviors such as cooperation and adaptation to allow a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared with classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithm such as K-means or agglomerative hierarchical clustering. For associative classification, while well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
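Associative classification of the kind Ant-ARM performs rests on the support and confidence of candidate rules, which are straightforward to compute once expression values are discretized into items. The sketch below shows only this scoring step, on invented discretized data; it is not the Ant-ARM search itself, whose rule discovery is ant-based:

```python
# Each transaction: a set of discretized gene items plus a class label (invented).
transactions = [
    ({"g1_high", "g2_low"}, "ALL"),
    ({"g1_high", "g3_high"}, "ALL"),
    ({"g1_high", "g2_low"}, "ALL"),
    ({"g2_low", "g3_high"}, "AML"),
    ({"g1_low", "g3_high"}, "AML"),
]

def rule_support_confidence(antecedent, label, data):
    """Support and confidence of the rule: antecedent -> class label."""
    covers = [t for t in data if antecedent <= t[0]]
    correct = [t for t in covers if t[1] == label]
    support = len(correct) / len(data)
    confidence = len(correct) / len(covers) if covers else 0.0
    return support, confidence

sup, conf = rule_support_confidence({"g1_high"}, "ALL", transactions)
```

Exhaustive miners such as Apriori enumerate all frequent antecedents before scoring them, which is what becomes intractable on high-dimensional gene expression data; a stochastic search only ever scores the candidates it visits.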

Relevance:

30.00%

Publisher:

Abstract:

Sentiment analysis over Twitter offers organisations a fast and effective way to monitor the public's feelings towards their brand, business, directors, etc. A wide range of features and methods for training sentiment classifiers for Twitter datasets has been researched in recent years, with varying results. In this paper, we introduce a novel approach of adding semantics as additional features into the training set for sentiment analysis. For each entity extracted from tweets (e.g. iPhone), we add its semantic concept (e.g. Apple product) as an additional feature, and measure the correlation of the representative concept with negative/positive sentiment. We apply this approach to predict sentiment for three different Twitter datasets. Our results show an average increase in the harmonic-mean F score for identifying both negative and positive sentiment of around 6.5% and 4.8% over the unigram and part-of-speech baselines, respectively. We also compare against an approach based on sentiment-bearing topic analysis, and find that semantic features produce better recall and F score when classifying negative sentiment, and better precision with lower recall and F score in positive sentiment classification.
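The semantic-augmentation idea above, appending an entity's concept as an extra feature alongside the unigrams, can be sketched with a hand-written entity-to-concept map. The map and tweet below are invented; in the paper the concepts come from entity-extraction tools, not a fixed dictionary:

```python
# Hypothetical entity -> semantic concept mapping (illustrative only).
concept_map = {"iphone": "APPLE_PRODUCT", "ipad": "APPLE_PRODUCT",
               "vista": "MICROSOFT_PRODUCT"}

def features(tweet):
    """Unigram features plus one semantic-concept feature per known entity."""
    tokens = tweet.lower().split()
    feats = list(tokens)
    for tok in tokens:
        if tok in concept_map:
            feats.append("CONCEPT=" + concept_map[tok])
    return feats

f = features("loving my new iPhone")
```

Because many distinct entities share one concept, the concept feature accumulates sentiment evidence across tweets that mention different products, which is where the generalisation gain over plain unigrams comes from.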

Relevance:

30.00%

Publisher:

Abstract:

The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.
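The final mapping stage, from selected speech features to UPDRS via non-parametric regression, can be illustrated with a minimal k-nearest-neighbour regressor. The feature vectors and scores below are invented; the study's actual feature set and learners are as described in the text:

```python
import numpy as np

# Invented training data: rows are speech-feature vectors, targets are UPDRS scores.
X_train = np.array([[0.1, 2.0], [0.2, 2.2], [0.8, 5.0], [0.9, 5.5], [0.5, 3.5]])
y_train = np.array([10.0, 12.0, 40.0, 44.0, 25.0])

def knn_predict(x, k=3):
    """Predict UPDRS as the mean score of the k nearest training recordings."""
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = np.argsort(d)[:k]
    return float(y_train[nearest].mean())

pred = knn_predict(np.array([0.15, 2.1]))
```

A non-parametric learner like this imposes no functional form on the feature-to-UPDRS relationship, which matters when the selected signal-processing outputs relate to symptom severity in unknown, nonlinear ways.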

Relevance:

30.00%

Publisher:

Abstract:

OBJECTIVE: To determine the accuracy, acceptability and cost-effectiveness of polymerase chain reaction (PCR) and optical immunoassay (OIA) rapid tests for maternal group B streptococcal (GBS) colonisation at labour. DESIGN: A test accuracy study was used to determine the accuracy of rapid tests for GBS colonisation of women in labour. Acceptability of testing to participants was evaluated through a questionnaire administered after delivery, and acceptability to staff through focus groups. A decision-analytic model was constructed to assess the cost-effectiveness of various screening strategies. SETTING: Two large obstetric units in the UK. PARTICIPANTS: Women booked for delivery at the participating units other than those electing for a Caesarean delivery. INTERVENTIONS: Vaginal and rectal swabs were obtained at the onset of labour and the results of vaginal and rectal PCR and OIA (index) tests were compared with the reference standard of enriched culture of combined vaginal and rectal swabs. MAIN OUTCOME MEASURES: The accuracy of the index tests, the relative accuracies of tests on vaginal and rectal swabs and whether test accuracy varied according to the presence or absence of maternal risk factors. RESULTS: PCR was significantly more accurate than OIA for the detection of maternal GBS colonisation. Combined vaginal or rectal swab index tests were more sensitive than either test considered individually [combined swab sensitivity for PCR 84% (95% CI 79-88%); vaginal swab 58% (52-64%); rectal swab 71% (66-76%)]. The highest sensitivity for PCR came at the cost of lower specificity [combined specificity 87% (95% CI 85-89%); vaginal swab 92% (90-94%); rectal swab 92% (90-93%)]. The sensitivity and specificity of rapid tests varied according to the presence or absence of maternal risk factors, but not consistently. PCR results were determinants of neonatal GBS colonisation, but maternal risk factors were not. 
Overall levels of acceptability for rapid testing amongst participants were high. Vaginal swabs were more acceptable than rectal swabs. South Asian women were least likely to have participated in the study and were less happy with the sampling procedure and with the prospect of rapid testing as part of routine care. Midwives were generally positive towards rapid testing but had concerns that it might lead to overtreatment and unnecessary interference in births. Modelling analysis revealed that the most cost-effective strategy was to provide routine intravenous antibiotic prophylaxis (IAP) to all women without screening. Removing this strategy, which is unlikely to be acceptable to most women and midwives, resulted in screening, based on a culture test at 35-37 weeks' gestation, with the provision of antibiotics to all women who screened positive being most cost-effective, assuming that all women in premature labour would receive IAP. The results were sensitive to very small increases in costs and changes in other assumptions. Screening using a rapid test was not cost-effective based on its current sensitivity, specificity and cost. CONCLUSIONS: Neither rapid test was sufficiently accurate to recommend it for routine use in clinical practice. IAP directed by screening with enriched culture at 35-37 weeks' gestation is likely to be the most acceptable cost-effective strategy, although it is premature to suggest the implementation of this strategy at present.
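Confidence intervals like those quoted for the swab sensitivities can be computed from the observed counts; one common choice is the Wilson score interval, sketched below. The counts are invented for illustration and are not the study's data:

```python
import math

def wilson_interval(successes, n, z=1.96):
    """Wilson score 95% CI for a proportion such as sensitivity."""
    p = successes / n
    denom = 1.0 + z * z / n
    centre = (p + z * z / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n))
    return centre - half, centre + half

# Invented example: 84 true positives out of 100 colonised women.
lo, hi = wilson_interval(84, 100)
```

The interval width shrinks with the number of colonised women tested, which is why sensitivity comparisons between swab types need overlapping CIs to be interpreted cautiously.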

Relevance:

30.00%

Publisher:

Abstract:

Sentiment analysis or opinion mining aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called the joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification, which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify review sentiment polarity, and minimal prior information has also been explored to further improve sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.

Relevance:

30.00%

Publisher:

Abstract:

This study presents some quantitative evidence, from a number of simulation experiments, on the accuracy of the productivity growth estimates derived from growth accounting (GA) and frontier-based methods (namely data envelopment analysis-, corrected ordinary least squares-, and stochastic frontier analysis-based Malmquist indices) under various conditions. These include the presence of technical inefficiency, measurement error, misspecification of the production function (for the GA and parametric approaches) and increased input and price volatility from one period to the next. The study finds that the frontier-based methods usually outperform GA, but the overall performance varies by experiment. Parametric approaches generally perform best when there is no functional form misspecification, but their accuracy diminishes greatly otherwise. The results also show that the deterministic approaches perform adequately even under conditions of (modest) measurement error; when measurement error becomes larger, the accuracy of all approaches (including stochastic approaches) deteriorates rapidly, to the point that their estimates could be considered unreliable for policy purposes.
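The growth-accounting (GA) approach benchmarked above computes TFP growth as a Solow residual: output growth minus factor-share-weighted input growth. A minimal sketch with invented figures:

```python
def tfp_growth_ga(dy, dk, dl, alpha):
    """Solow residual: output growth minus share-weighted capital and labour growth.

    dy, dk, dl are (log) growth rates; alpha is the capital share of income.
    """
    return dy - alpha * dk - (1.0 - alpha) * dl

# Invented figures: 3% output growth, 4% capital growth, 1% labour growth,
# capital share of one third.
g = tfp_growth_ga(0.03, 0.04, 0.01, 1.0 / 3.0)
```

Because the residual absorbs everything not explained by measured inputs, any measurement error or inefficiency lands directly in the TFP estimate, which is the vulnerability the simulations probe.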

Relevance:

30.00%

Publisher:

Abstract:

Productivity at the macro level is a complex concept but also arguably the most appropriate measure of economic welfare. Currently, there is limited research available on the various approaches that can be used to measure it, and especially on the relative accuracy of those approaches. This thesis has two main objectives: firstly, to detail some of the most common productivity measurement approaches and assess their accuracy under a number of conditions, and secondly, to present an up-to-date application of productivity measurement and provide some guidance on selecting between sometimes conflicting productivity estimates. With regard to the first objective, the thesis provides a discussion of the issues specific to macro-level productivity measurement and of the strengths and weaknesses of the three main types of approach available, namely index-number approaches (represented by growth accounting), non-parametric distance functions (DEA-based Malmquist indices) and parametric production functions (COLS- and SFA-based Malmquist indices). The accuracy of these approaches is assessed through simulation analysis, which provided some interesting findings. Probably the most important were that deterministic approaches are quite accurate even when the data is moderately noisy; that no approach was accurate when noise was more extensive; that functional-form misspecification has a severe negative effect on the accuracy of the parametric approaches; and that increased volatility in inputs and prices from one period to the next adversely affects all approaches examined. The application was based on the EU KLEMS (2008) dataset and revealed that the different approaches do in fact result in different productivity change estimates, at least for some of the countries assessed. 
To assist researchers in selecting between conflicting estimates, a new three-step selection framework is proposed, based on the findings of the simulation analyses and on established diagnostics/indicators. An application of this framework is also provided, based on the EU KLEMS dataset.
