906 results for accuracy of estimation


Relevance: 100.00%

Abstract:

We have developed an alignment-free method that calculates phylogenetic distances using a maximum-likelihood approach for a model of sequence change on patterns that are discovered in unaligned sequences. To evaluate the phylogenetic accuracy of our method, and to conduct a comprehensive comparison of existing alignment-free methods (freely available as the Python package decaf+py at http://www.bioinformatics.org.au), we have created a data set of reference trees covering a wide range of phylogenetic distances. Amino acid sequences were evolved along the trees and input to the tested methods; from their calculated distances we inferred trees whose topologies we compared to the reference trees. We find our pattern-based method statistically superior to all other tested alignment-free methods. We also demonstrate the general advantage of alignment-free methods over an approach based on automated alignments when sequences violate the assumption of collinearity. Similarly, we compare methods on empirical data from an existing alignment benchmark set that we used to derive reference distances and trees. Our pattern-based approach yields distances that show a linear relationship to reference distances over a substantially longer range than other alignment-free methods. The pattern-based approach outperforms other alignment-free methods, and its phylogenetic accuracy is statistically indistinguishable from alignment-based distances.
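
For readers unfamiliar with alignment-free distances, the sketch below illustrates the general idea with a simple k-mer frequency comparison between two unaligned sequences; it is not the pattern-based maximum-likelihood method described above (implemented in decaf+py), and the function names and sequences are invented for illustration.

    # Minimal, generic alignment-free distance: compare k-mer frequency
    # profiles of two unaligned amino acid sequences (illustration only).
    from collections import Counter
    import math

    def kmer_profile(seq, k=3):
        """Return normalised k-mer frequencies of a sequence."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        return {kmer: n / total for kmer, n in counts.items()}

    def profile_distance(p, q):
        """Euclidean distance between two k-mer frequency profiles."""
        kmers = set(p) | set(q)
        return math.sqrt(sum((p.get(m, 0.0) - q.get(m, 0.0)) ** 2 for m in kmers))

    d = profile_distance(kmer_profile("MKVLITGAGSGIG"), kmer_profile("MKVIITGGASGLG"))
    print(f"alignment-free distance: {d:.4f}")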

Relevance: 100.00%

Abstract:

Techniques are developed for the visual interpretation of drainage features from satellite imagery. The process of interpretation is formalised by the introduction of objective criteria. Problems of assessing the accuracy of maps are recognized, and a method is developed for quantifying the correctness of an interpretation, in which the more important features are given an appropriate weight. A study was made of imagery from a variety of landscapes in Britain and overseas, from which maps of drainage networks were drawn. The accuracy of the mapping was assessed in absolute terms, and also in relation to the geomorphic parameters used in hydrologic models. Results are presented relating the accuracy of interpretation to image quality, subjectivity and the effects of topography. It is concluded that the visual interpretation of satellite imagery gives maps of sufficient accuracy for the preliminary assessment of water resources, and for the estimation of geomorphic parameters. An examination is made of the use of remotely sensed data in hydrologic models. It is proposed that the spectral properties of a scene are holistic, and are therefore more efficient than conventional catchment characteristics. Key hydrologic parameters were identified, and were estimated from streamflow records. The correlation between hydrologic variables and spectral characteristics was examined, and regression models for streamflow were developed, based solely on spectral data. Regression models were also developed using conventional catchment characteristics, whose values were estimated using satellite imagery. It was concluded that models based primarily on variables derived from remotely sensed data give results which are as good as, or better than, models using conventional map data. The holistic properties of remotely sensed data are realised only in undeveloped areas. In developed areas an assessment of current land-use is a more useful indication of hydrologic response.
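
The regression modelling step can be pictured with a minimal sketch: fitting a streamflow parameter against spectral predictors by ordinary least squares. All variable names and values below are hypothetical and are not taken from the thesis.

    # Hypothetical sketch of regressing a streamflow parameter on spectral
    # predictors derived from satellite imagery (values invented).
    import numpy as np

    # Each row: mean reflectance in two spectral bands for one catchment.
    spectral = np.array([[0.12, 0.31],
                         [0.18, 0.27],
                         [0.25, 0.22],
                         [0.30, 0.19]])
    mean_annual_flow = np.array([4.1, 3.6, 2.9, 2.4])  # illustrative values

    # Ordinary least squares with an intercept term.
    X = np.column_stack([np.ones(len(spectral)), spectral])
    coeffs, *_ = np.linalg.lstsq(X, mean_annual_flow, rcond=None)
    predicted = X @ coeffs
    print("coefficients:", coeffs)
    print("predicted flows:", predicted)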

Relevance: 100.00%

Abstract:

OBJECTIVE: To determine the accuracy, acceptability and cost-effectiveness of polymerase chain reaction (PCR) and optical immunoassay (OIA) rapid tests for maternal group B streptococcal (GBS) colonisation at labour. DESIGN: A test accuracy study was used to determine the accuracy of rapid tests for GBS colonisation of women in labour. Acceptability of testing to participants was evaluated through a questionnaire administered after delivery, and acceptability to staff through focus groups. A decision-analytic model was constructed to assess the cost-effectiveness of various screening strategies. SETTING: Two large obstetric units in the UK. PARTICIPANTS: Women booked for delivery at the participating units other than those electing for a Caesarean delivery. INTERVENTIONS: Vaginal and rectal swabs were obtained at the onset of labour and the results of vaginal and rectal PCR and OIA (index) tests were compared with the reference standard of enriched culture of combined vaginal and rectal swabs. MAIN OUTCOME MEASURES: The accuracy of the index tests, the relative accuracies of tests on vaginal and rectal swabs and whether test accuracy varied according to the presence or absence of maternal risk factors. RESULTS: PCR was significantly more accurate than OIA for the detection of maternal GBS colonisation. Combined vaginal or rectal swab index tests were more sensitive than either test considered individually [combined swab sensitivity for PCR 84% (95% CI 79-88%); vaginal swab 58% (52-64%); rectal swab 71% (66-76%)]. The highest sensitivity for PCR came at the cost of lower specificity [combined specificity 87% (95% CI 85-89%); vaginal swab 92% (90-94%); rectal swab 92% (90-93%)]. The sensitivity and specificity of rapid tests varied according to the presence or absence of maternal risk factors, but not consistently. PCR results were determinants of neonatal GBS colonisation, but maternal risk factors were not. Overall levels of acceptability for rapid testing amongst participants were high. Vaginal swabs were more acceptable than rectal swabs. South Asian women were least likely to have participated in the study and were less happy with the sampling procedure and with the prospect of rapid testing as part of routine care. Midwives were generally positive towards rapid testing but had concerns that it might lead to overtreatment and unnecessary interference in births. Modelling analysis revealed that the most cost-effective strategy was to provide routine intravenous antibiotic prophylaxis (IAP) to all women without screening. When this strategy, which is unlikely to be acceptable to most women and midwives, was excluded, the most cost-effective option was screening based on a culture test at 35-37 weeks' gestation, with antibiotics provided to all women who screened positive, assuming that all women in premature labour would receive IAP. The results were sensitive to very small increases in costs and changes in other assumptions. Screening using a rapid test was not cost-effective based on its current sensitivity, specificity and cost. CONCLUSIONS: Neither rapid test was sufficiently accurate to recommend it for routine use in clinical practice. IAP directed by screening with enriched culture at 35-37 weeks' gestation is likely to be the most acceptable cost-effective strategy, although it is premature to suggest the implementation of this strategy at present.
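
For orientation, the headline accuracy figures above follow from simple confusion-matrix arithmetic; the counts below are invented, but chosen so that the combined-swab PCR figures (84% sensitivity, 87% specificity) are reproduced.

    # Sensitivity and specificity from a 2x2 table of index test vs.
    # enriched-culture reference standard. Counts are illustrative only.
    def test_accuracy(tp, fp, fn, tn):
        sensitivity = tp / (tp + fn)   # proportion of colonised women detected
        specificity = tn / (tn + fp)   # proportion of non-colonised women correctly negative
        return sensitivity, specificity

    sens, spec = test_accuracy(tp=168, fp=104, fn=32, tn=696)
    print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")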

Relevance: 100.00%

Abstract:

Background: Screening for congenital heart defects (CHDs) relies on antenatal ultrasound and postnatal clinical examination; however, life-threatening defects often go undetected. Objective: To determine the accuracy, acceptability and cost-effectiveness of pulse oximetry as a screening test for CHDs in newborn infants. Design: A test accuracy study determined the accuracy of pulse oximetry. Acceptability of testing to parents was evaluated through a questionnaire, and to staff through focus groups. A decision-analytic model was constructed to assess cost-effectiveness. Setting: Six UK maternity units. Participants: These were 20,055 asymptomatic newborns at ≥ 35 weeks’ gestation, their mothers and health-care staff. Interventions: Pulse oximetry was performed prior to discharge from hospital and the results of this index test were compared with a composite reference standard (echocardiography, clinical follow-up and follow-up through interrogation of clinical databases). Main outcome measures: Detection of major CHDs – defined as causing death or requiring invasive intervention up to 12 months of age (subdivided into critical CHDs causing death or intervention before 28 days, and serious CHDs causing death or intervention between 1 and 12 months of age); acceptability of testing to parents and staff; and the cost-effectiveness in terms of cost per timely diagnosis. Results: Fifty-three of the 20,055 babies screened had a major CHD (24 critical and 29 serious), a prevalence of 2.6 per 1000 live births. Pulse oximetry had a sensitivity of 75.0% [95% confidence interval (CI) 53.3% to 90.2%] for critical cases and 49.1% (95% CI 35.1% to 63.2%) for all major CHDs. When 23 cases were excluded, in which a CHD was already suspected following antenatal ultrasound, pulse oximetry had a sensitivity of 58.3% (95% CI 27.7% to 84.8%) for critical cases (12 babies) and 28.6% (95% CI 14.6% to 46.3%) for all major CHDs (35 babies). False-positive (FP) results occurred in 1 in 119 babies (0.84%) without major CHDs (specificity 99.2%, 95% CI 99.0% to 99.3%). However, of the 169 FPs, there were six cases of significant but not major CHDs and 40 cases of respiratory or infective illness requiring medical intervention. The prevalence of major CHDs in babies with normal pulse oximetry was 1.4 (95% CI 0.9 to 2.0) per 1000 live births, as 27 babies with major CHDs (6 critical and 21 serious) were missed. Parent and staff participants were predominantly satisfied with screening, perceiving it as an important test to detect ill babies. There was no evidence that mothers given FP results were more anxious after participating than those given true-negative results, although they were less satisfied with the test. White British/Irish mothers were more likely to participate in the study, and were less anxious and more satisfied than those of other ethnicities. The incremental cost-effectiveness ratio of pulse oximetry plus clinical examination compared with examination alone is approximately £24,900 per timely diagnosis in a population in which antenatal screening for CHDs already exists. Conclusions: Pulse oximetry is a simple, safe, feasible test that is acceptable to parents and staff and adds value to existing screening. It is likely to identify cases of critical CHDs that would otherwise go undetected. It is also likely to be cost-effective given current acceptable thresholds. The detection of other pathologies, such as significant CHDs and respiratory and infective illnesses, is an additional advantage. Other pulse oximetry techniques, such as perfusion index, may enhance detection of aortic obstructive lesions.
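
The cost-effectiveness figure quoted above is an incremental cost-effectiveness ratio (ICER) per timely diagnosis; a minimal sketch of the calculation is shown below with placeholder numbers, not the study's model inputs.

    # Sketch of an incremental cost-effectiveness ratio (ICER) per timely
    # diagnosis. All numbers are placeholders, not the study's model inputs.
    def icer(cost_new, cost_old, diagnoses_new, diagnoses_old):
        return (cost_new - cost_old) / (diagnoses_new - diagnoses_old)

    # e.g. the screening strategy adds cost but yields extra timely diagnoses
    value = icer(cost_new=1_250_000.0, cost_old=1_000_000.0,
                 diagnoses_new=30, diagnoses_old=20)
    print(f"ICER: £{value:,.0f} per additional timely diagnosis")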

Relevance: 100.00%

Abstract:

Productivity at the macro level is a complex concept but also arguably the most appropriate measure of economic welfare. Currently, there is limited research available on the various approaches that can be used to measure it, and especially on the relative accuracy of said approaches. This thesis has two main objectives: firstly, to detail some of the most common productivity measurement approaches and assess their accuracy under a number of conditions and, secondly, to present an up-to-date application of productivity measurement and provide some guidance on selecting between sometimes conflicting productivity estimates. With regard to the first objective, the thesis provides a discussion on the issues specific to macro-level productivity measurement and on the strengths and weaknesses of the three main types of approaches available, namely index-number approaches (represented by Growth Accounting), non-parametric distance functions (DEA-based Malmquist indices) and parametric production functions (COLS- and SFA-based Malmquist indices). The accuracy of these approaches is assessed through simulation analysis, which provided some interesting findings. Probably the most important were that deterministic approaches are quite accurate even when the data is moderately noisy; that no approach was accurate when noise was more extensive; that functional form misspecification has a severe negative effect on the accuracy of the parametric approaches; and, finally, that increased volatility in inputs and prices from one period to the next adversely affects all approaches examined. The application was based on the EU KLEMS (2008) dataset and revealed that the different approaches do in fact result in different productivity change estimates, at least for some of the countries assessed. To assist researchers in selecting between conflicting estimates, a new, three-step selection framework is proposed, based on the findings of the simulation analyses and established diagnostics/indicators. An application of this framework is also provided, based on the EU KLEMS dataset.
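
For illustration, the index-number (growth accounting) approach mentioned above can be sketched as output growth minus cost-share-weighted input growth (a Törnqvist-style calculation); the figures below are invented and are not drawn from EU KLEMS.

    # Growth-accounting sketch: total factor productivity (TFP) growth as
    # output growth minus share-weighted input growth (Tornqvist weights).
    import math

    def tfp_growth(y0, y1, inputs0, inputs1, shares0, shares1):
        """Log TFP growth between two periods.

        inputs*: dicts of input quantities; shares*: dicts of cost shares."""
        dy = math.log(y1 / y0)
        dx = sum(0.5 * (shares0[k] + shares1[k]) * math.log(inputs1[k] / inputs0[k])
                 for k in inputs0)
        return dy - dx

    g = tfp_growth(
        y0=100.0, y1=103.0,
        inputs0={"labour": 50.0, "capital": 200.0},
        inputs1={"labour": 50.5, "capital": 206.0},
        shares0={"labour": 0.6, "capital": 0.4},
        shares1={"labour": 0.6, "capital": 0.4},
    )
    print(f"TFP growth: {g:.3%}")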

Relevance: 100.00%

Abstract:

Short text messages, a.k.a. Microposts (e.g. Tweets), have proven to be an effective channel for revealing information about trends and events, ranging from those related to Disaster (e.g. hurricane Sandy) to those related to Violence (e.g. the Egyptian revolution). Being informed about such events as they occur could be extremely important to authorities and emergency professionals by allowing such parties to immediately respond. In this work we study the problem of topic classification (TC) of Microposts, which aims to automatically classify short messages based on the subject(s) discussed in them. Accurate TC of Microposts, however, is a challenging task, since the limited number of tokens in a post often implies a lack of sufficient contextual information. In order to provide contextual information to Microposts, we present and evaluate several graph structures surrounding concepts present in linked knowledge sources (KSs). Traditional TC techniques enrich the content of Microposts with features extracted only from the Microposts' content. In contrast, our approach relies on the generation of different weighted semantic meta-graphs extracted from linked KSs. We introduce a new semantic graph, called the category meta-graph. This novel meta-graph provides a finer-grained categorisation of concepts, yielding a set of novel semantic features. Our findings show that such category meta-graph features effectively improve the performance of a topic classifier of Microposts. Furthermore, our goal is also to understand which semantic features contribute to the performance of a topic classifier. For this reason, we propose an approach for automatic estimation of the accuracy loss of a topic classifier on new, unseen Microposts. We introduce and evaluate novel topic similarity measures, which capture the similarity between the KS documents and Microposts at a conceptual level, considering the enriched representation of these documents. Extensive evaluation in the context of Emergency Response (ER) and Violence Detection (VD) revealed that our approach outperforms previous approaches that use a single KS without linked data, or Twitter data only, by up to 31.4% in terms of F1 measure. Our main findings indicate that the new category graph contains useful information for TC and achieves comparable results to previously used semantic graphs. Furthermore, our results also indicate that the accuracy of a topic classifier can be accurately predicted using the enhanced text representation, outperforming previous approaches that consider content-based similarity measures. © 2014 Elsevier B.V. All rights reserved.
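
A minimal sketch of the enrichment idea is shown below: concept labels looked up in a knowledge source are appended to the Micropost text before training a standard classifier. The concept dictionary, example posts and labels are invented, and the paper's weighted semantic meta-graphs are not reproduced here.

    # Sketch of Micropost topic classification with knowledge-source
    # enrichment. The concept lookup is a stub for illustration only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    CONCEPTS = {"sandy": "Hurricane_Sandy", "flood": "Natural_disaster"}  # hypothetical

    def enrich(post):
        """Append knowledge-source concept labels to the raw post text."""
        extra = [CONCEPTS[t] for t in post.lower().split() if t in CONCEPTS]
        return post + " " + " ".join(extra)

    posts = ["sandy flood hits the coast", "great match last night"]
    labels = ["emergency", "other"]

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
    clf.fit([enrich(p) for p in posts], labels)
    print(clf.predict([enrich("flood warning downtown")]))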

Relevance: 100.00%

Abstract:

Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choice of initial sample number (N0), number of replicates (M), and number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H^2, is a useful measure of the accuracy of Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H^2 are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, is found to follow a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, M × N0 determines the accuracy of the MC prediction; if b > 1, then the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
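
The accuracy measure used above, the squared Hellinger distance between an MC-estimated and a reference distribution discretised over n bins, can be computed as in the sketch below (values are illustrative).

    # Squared Hellinger distance between a Monte Carlo estimate of a
    # distribution and a reference distribution, both given as
    # probabilities over n bins. Values are illustrative.
    import numpy as np

    def squared_hellinger(p, q):
        """H^2 = 1 - sum(sqrt(p_i * q_i)) for discrete distributions."""
        p, q = np.asarray(p, float), np.asarray(q, float)
        return 1.0 - np.sum(np.sqrt(p * q))

    reference = np.array([0.1, 0.2, 0.4, 0.2, 0.1])
    mc_estimate = np.array([0.12, 0.18, 0.41, 0.19, 0.10])
    print(f"H^2 = {squared_hellinger(reference, mc_estimate):.5f}")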

Relevance: 100.00%

Abstract:

We propose a robust adaptive time synchronization and frequency offset estimation method for coherent optical orthogonal frequency division multiplexing (CO-OFDM) systems by applying electrical dispersion pre-compensation (pre-EDC) to the pilot symbol. This technique effectively eliminates the timing error due to fiber chromatic dispersion, thus significantly increasing the accuracy of the frequency offset estimation process and improving the overall system performance. In addition, a simple design of the pilot symbol is proposed for full-range frequency offset estimation. This pilot symbol can also be used to carry useful data, effectively reducing the overhead due to time synchronization by a factor of 2.
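
To illustrate the general principle of pilot-aided frequency offset estimation (not the pre-EDC pilot design proposed above), the sketch below estimates the offset from the phase of the correlation between two identical halves of a pilot symbol; all parameters are invented.

    # Generic frequency-offset estimate from a pilot symbol made of two
    # identical halves: the phase of the correlation between the halves is
    # proportional to the offset. Illustrates the principle only.
    import numpy as np

    rng = np.random.default_rng(0)
    N = 256                                          # samples per half
    half = np.exp(2j * np.pi * rng.random(N))        # arbitrary unit-modulus pilot half
    pilot = np.concatenate([half, half])

    f_off = 1.7e-3                                   # true offset, cycles per sample
    rx = pilot * np.exp(2j * np.pi * f_off * np.arange(2 * N))
    rx += 0.05 * (rng.standard_normal(2 * N) + 1j * rng.standard_normal(2 * N))

    corr = np.sum(np.conj(rx[:N]) * rx[N:])
    f_est = np.angle(corr) / (2 * np.pi * N)         # cycles per sample
    print(f"true {f_off:.4e}, estimated {f_est:.4e}")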

Relevance: 100.00%

Abstract:

Large-scale mechanical products, such as aircraft and rockets, consist of large numbers of small components, which makes assembly accuracy analysis and error estimation more difficult. Planar surfaces, as key product characteristics, are usually utilised for positioning small components in the assembly process. This paper focuses on assembly accuracy analysis of small components with planar surfaces in large-scale volume products. To evaluate the accuracy of the assembly system, an error propagation model for measurement error and fixture error is proposed, based on the assumption that all errors are normally distributed. In this model, the general coordinate vector is adopted to represent the position of the components. The error transmission functions are simplified into a linear model, and the coordinates of the reference points are composed of a theoretical value and a random error. The installation of a Head-Up Display is taken as an example to analyse the assembly error of small components based on the propagation model. The result shows that the final coordination accuracy is mainly determined by the measurement error of the planar surface in small components. To reduce the uncertainty of the plane measurement, an evaluation index of measurement strategy is presented. This index reflects the distribution of the sampling point set and can be calculated by an inertia moment matrix. Finally, a practical application is introduced for validating the evaluation index.
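
The linearised error propagation step can be illustrated with a short sketch: once the error transmission function is reduced to a linear map J, normally distributed input errors propagate as Sigma_out = J · Sigma_in · J^T. The matrices below are illustrative, not taken from the Head-Up Display case study.

    # Linearised error propagation for an assembly chain: with the error
    # transmission function reduced to a linear map J, the covariance of
    # the assembled position is J @ Sigma_in @ J.T. Matrices are illustrative.
    import numpy as np

    J = np.array([[1.0, 0.0, 0.2],       # sensitivity of output coordinates to
                  [0.0, 1.0, -0.1]])     # measurement/fixture error sources

    sigma_in = np.diag([0.05**2, 0.05**2, 0.02**2])   # input error variances (mm^2)
    sigma_out = J @ sigma_in @ J.T

    print("output covariance (mm^2):\n", sigma_out)
    print("std devs (mm):", np.sqrt(np.diag(sigma_out)))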

Relevance: 100.00%

Abstract:

This study examined the effect of schemas on consistency and accuracy of memory across interviews, providing theoretical hypotheses explaining why inconsistencies may occur. The design manipulated schema-typicality of items (schema-typical and atypical), question format (free-recall, cued-recall and recognition) and retention interval (immediate/2 week and 2 week/4 week). Consistency, accuracy and experiential quality of memory were measured.

All independent variables affected accuracy and experiential quality of memory, while question format was the only variable affecting consistency. These results challenge the commonly held notion in the legal arena that consistency is a proxy for accuracy. The study also demonstrates that other variables, such as item-typicality and retention interval, have different effects on consistency and accuracy in memory.

Relevance: 100.00%

Abstract:

Historically, memory has been evaluated by examining how much is remembered; however, a more recent conception of memory focuses on the accuracy of memories. When using this accuracy-oriented conception of memory, unlike with the quantity-oriented approach, memory does not always deteriorate over time. A possible explanation for this seemingly surprising finding lies in the metacognitive processes of monitoring and control. Use of these processes allows people to withhold responses of which they are unsure, or to adjust the precision of responses to a level that is broad enough to be correct. The ability to accurately report memories has implications for investigators who interview witnesses to crimes, and those who evaluate witness testimony.

This research examined the amount of information provided, accuracy, and precision of responses provided during immediate and delayed interviews about a videotaped mock crime. The interview format was manipulated such that a single free narrative response was elicited, or a series of either yes/no or cued questions were asked. Instructions provided by the interviewer indicated to the participants that they should either stress being informative, or being accurate. The interviews were then transcribed and scored.

Results indicate that accuracy rates remained stable and high after a one-week delay. Compared to those interviewed immediately, after a delay participants provided less information and responses that were less precise. Participants in the free narrative condition were the most accurate. Participants in the cued questions condition provided the most precise responses. Participants in the yes/no questions condition were most likely to say “I don’t know”. The results indicate that people are able to monitor their memories and modify their reports to maintain high accuracy. When control over precision was not possible, such as in the yes/no condition, people said “I don’t know” to maintain accuracy. However, when withholding responses and adjusting precision were both possible, people utilized both methods. It seems that concerns that memories reported after a long retention interval might be inaccurate are unfounded.

Relevance: 100.00%

Abstract:

This dissertation aimed to improve travel time estimation for the purpose of transportation planning by developing a travel time estimation method that incorporates the effects of signal timing plans, which were difficult to consider in planning models. For this purpose, an analytical model has been developed. The model parameters were calibrated based on data from CORSIM microscopic simulation, with signal timing plans optimized using the TRANSYT-7F software. Independent variables in the model are link length, free-flow speed, and traffic volumes from the competing turning movements. The developed model has three advantages compared to traditional link-based or node-based models. First, the model considers the influence of signal timing plans for a variety of traffic volume combinations without requiring signal timing information as input. Second, the model describes the non-uniform spatial distribution of delay along a link, making it possible to estimate the impacts of queues at different upstream locations of an intersection and to attribute delays to a subject link and its upstream link. Third, the model shows promise of improving the accuracy of travel time prediction. The mean absolute percentage error (MAPE) of the model is 13% for a set of field data from the Minnesota Department of Transportation (MDOT); this is close to the MAPE of uniform delay in the HCM 2000 method (11%). The HCM is the industry-accepted analytical model in the existing literature, but it requires signal timing information as input for calculating delays. The developed model also outperforms the HCM 2000 method for a set of Miami-Dade County data that represent congested traffic conditions, with a MAPE of 29%, compared to 31% for the HCM 2000 method. The advantages of the proposed model make it feasible for application to a large network without the burden of signal timing input, while improving the accuracy of travel time estimation. An assignment model with the developed travel time estimation method has been implemented in a South Florida planning model, which improved assignment results.
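
The MAPE criterion used above to compare estimated and observed link travel times is computed as in the sketch below, with illustrative values.

    # Mean absolute percentage error (MAPE) between observed and estimated
    # link travel times. Values are illustrative only.
    import numpy as np

    def mape(observed, estimated):
        observed, estimated = np.asarray(observed, float), np.asarray(estimated, float)
        return np.mean(np.abs((estimated - observed) / observed)) * 100.0

    obs = np.array([42.0, 55.0, 61.0, 38.0])     # seconds
    est = np.array([45.0, 50.0, 66.0, 41.0])
    print(f"MAPE = {mape(obs, est):.1f}%")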

Relevance: 100.00%

Abstract:

This study investigated the effects of self-monitoring on the homework completion and accuracy rates of four fourth-grade students with disabilities in an inclusive general education classroom. A multiple baseline across subjects design was utilized to examine four dependent variables: completion of spelling homework, accuracy of spelling homework, completion of math homework, and accuracy of math homework. Data were collected and analyzed during baseline, three phases of intervention, and maintenance.

Throughout baseline and all phases, participants followed typical classroom procedures, brought their homework to school each day and gave it to the general education teacher. During Phase I of the intervention, participants self-monitored with a daily sheet at home and on the computer at school in the morning using KidTools (Fitzgerald & Koury, 2003), a student-friendly self-monitoring program. They also participated in brief daily conferences to review their self-monitoring sheets with the investigator, their special education teacher. Phase II followed the same steps except that conferencing was reduced to two days a week, randomly selected by the researcher; in Phase III, conferencing took place on one random day a week. Maintenance data were taken over a two-to-three week period subsequent to the end of the intervention.

Results of this study demonstrated that self-monitoring substantially improved the spelling and math homework completion and accuracy rates of students with disabilities in an inclusive, general education classroom. On average, completion and accuracy rates were highest over baseline in Phase III. Self-monitoring led to higher percentages of completion and accuracy during each phase of the intervention compared to baseline; group percentages also rose slightly during maintenance. Therefore, results suggest self-monitoring leads to short-term maintenance in spelling and math homework completion and accuracy.

This study adds to the existing literature by investigating the effects of self-monitoring of homework for students with disabilities included in general education classrooms. Future research should consider selecting participants with other demographic characteristics, using peers for conferencing instead of the teacher, and the use of self-monitoring with other academic subjects (e.g., science, history). Additionally, future research could investigate the effects of each of the two self-monitoring components used alone, with or without the conferencing.

Relevance: 100.00%

Abstract:

Interferometric synthetic aperture radar (InSAR) techniques can successfully detect phase variations related to the water level changes in wetlands and produce spatially detailed high-resolution maps of water level changes. Despite the vast detail, the usefulness of wetland InSAR observations is rather limited, because hydrologists and water resources managers need information on absolute water level values and not on relative water level changes. We present an InSAR technique called Small Temporal Baseline Subset (STBAS) for monitoring absolute water level time series using radar interferograms acquired successively over wetlands. The method uses stage (water level) observations for calibrating the relative InSAR observations and tying them to the stage's vertical datum. We tested the STBAS technique with two years of Radarsat-1 data acquired during 2006–2008 over the Water Conservation Area 1 (WCA1) in the Everglades wetlands, south Florida (USA). The InSAR-derived water level data were calibrated using 13 stage stations located in the study area to generate 28 successive high spatial resolution maps (50 m pixel resolution) of absolute water levels. We evaluated the quality of the STBAS technique using a root mean square error (RMSE) criterion of the difference between InSAR observations and stage measurements. The average RMSE is 6.6 cm, which provides an estimate of the uncertainty of the STBAS technique in monitoring absolute water levels. About half of the uncertainty is attributed to the accuracy of the InSAR technique in detecting relative water levels. The other half reflects uncertainties derived from tying the relative levels to the stage stations' datum.
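
The calibration step, tying relative InSAR water-level changes to the stage stations' vertical datum and reporting an RMSE, can be sketched as below; the numbers are illustrative and are not the WCA1 data.

    # Sketch of tying relative InSAR water-level changes to a stage datum:
    # estimate a constant offset from the stage stations, then report the
    # RMSE between calibrated InSAR levels and stage measurements.
    import numpy as np

    insar_relative = np.array([0.00, 0.04, 0.09, 0.05])   # m, relative change
    stage_absolute = np.array([3.21, 3.26, 3.29, 3.27])   # m above datum

    offset = np.mean(stage_absolute - insar_relative)      # least-squares constant shift
    insar_absolute = insar_relative + offset

    rmse = np.sqrt(np.mean((insar_absolute - stage_absolute) ** 2))
    print(f"offset {offset:.3f} m, RMSE {rmse * 100:.1f} cm")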

Relevance: 100.00%

Abstract:

Hearing of the news of the death of Diana, Princess of Wales, in a traffic accident, is taken as an analogue for being a percipient but uninvolved witness to a crime, or a witness to another person's sudden confession to some illegal act. This event (known in the literature as a “reception event”) has previously been hypothesized to cause one to form a special type of memory commonly known as a “flashbulb memory” (FB) (Brown and Kulik, 1977). FBs are hypothesized to be especially resilient against forgetting, highly detailed (including peripheral details), clear, and inspiring great confidence in the individual as to their accuracy. FBs are dependent for their formation upon surprise, emotional valence, and impact, or consequentiality, to the witness of the initiating event. FBs are thought to be enhanced by frequent rehearsal. FBs are very important in the context of criminal investigation and litigation in that investigators and jurors usually place great store in witnesses, regardless of their actual accuracy, who claim to have a clear and complete recollection of an event, and who express this confidently. Therefore, the lives, or at least the freedom, of criminal defendants, and the fortunes of civil litigants, hang on the testimony of witnesses professing to have FBs.

In this study, which includes a large and diverse sample (N = 305), participants were surveyed within 2–4 days after hearing of the fatal accident, and again at intervals of 2 and 4 weeks, 6, 12, and 18 months. Contrary to the FB hypothesis, I found that participants' FBs degraded over time, beginning at least as early as two weeks post event. At about 12 months the memory trace stabilized, resisting further degradation. Repeated interviewing did not have any negative effect upon accuracy, contrary to concerns in the literature. Analysis by correlation and regression indicated no effect or predictive power for participant age, emotionality, confidence, or student status, as related to accuracy of recall; nor was participant confidence in accuracy predicted by emotional impact as hypothesized. Results also indicate that, contrary to the notions of investigators and jurors, witnesses become more inaccurate over time regardless of their confidence in their memories, even for highly emotional events.