315 results for accuracy of estimation


Relevance:

90.00%

Publisher:

Abstract:

Subsampling is a common method for estimating the abundance of species in trawl catches. However, the accuracy of subsampling in representing the total catch has not been assessed. To estimate one possible source of bias due to subsampling, we tested whether the position on trawler sorting trays from which subsamples were taken affected their ability to represent species in catches. This was done by sorting catches into 10 kg subsamples and comparing subsamples taken from different positions on the sorting tray. Comparisons were made after species were grouped into three categories of abundance: 'rare', 'common' or 'abundant'. A generalised linear model analysis showed that taking subsamples from different positions on the sorting tray had no major effect on estimates of the total numbers or weights of fish or invertebrates, or the total number of fish or invertebrate taxa, recorded in each position. Some individual taxa showed differences between positions on the sorting tray (11.5% of taxa in a three-position design; 25% in a five-position design), but consistent and meaningful patterns in the position of these taxa on the sorting tray could be seen only for the ponyfish Leiognathus moretoniensis and the saucer scallop Amusium pleuronectes. Because most bycatch taxa are well mixed throughout the catch, subsamples can be taken from any position on trawler sorting trays without introducing bias.
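
As an illustration of the analysis described, the following is a minimal sketch of a position-effect test using a Poisson GLM; the data frame, counts, and position labels are hypothetical, not the study's data.

```python
# Minimal sketch of the GLM comparison described above, assuming a Poisson
# model for subsample counts; the records below are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per 10 kg subsample: total count and the tray position it came from.
df = pd.DataFrame({
    "n_indiv":  [132, 117, 141, 125, 128, 139, 120, 133, 126],
    "position": ["left", "centre", "right"] * 3,
})

# Poisson GLM of count on tray position; a non-significant position term
# supports the 'no major effect' conclusion.
fit = smf.glm("n_indiv ~ C(position)", data=df,
              family=sm.families.Poisson()).fit()
print(fit.summary())
```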

Relevance:

90.00%

Publisher:

Abstract:

For a wide class of semi-Markov decision processes the optimal policies are expressible in terms of the Gittins indices, which have been found useful in sequential clinical trials and pharmaceutical research planning. In general, the indices can be approximated via calibration based on finite-horizon dynamic programming. This paper provides some results on the accuracy of such approximations and, in particular, gives the error bounds for some well-known processes (Bernoulli reward processes, normal reward processes and exponential target processes).
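
To make the calibration idea concrete, here is a minimal sketch for a Bernoulli reward process: the Gittins index is approximated as the indifference point between the risky arm (with a Beta posterior) and a safe arm paying a fixed reward, computed by a truncated-horizon recursion. The discount factor, horizon, and prior are illustrative assumptions, not the paper's exact construction.

```python
# Minimal sketch of Gittins-index approximation by calibration for a Bernoulli
# reward process, using a truncated-horizon recursion; BETA, HORIZON and the
# Beta(a, b) prior are illustrative assumptions.
from functools import lru_cache

BETA = 0.9      # discount factor
HORIZON = 50    # truncation depth; deeper gives a tighter approximation

@lru_cache(maxsize=None)
def risky_value(a, b, lam, t=0):
    """Value of the Bernoulli arm under Beta(a, b), when a safe arm paying
    `lam` per step forever may be chosen instead at any time."""
    if t == HORIZON:
        return lam / (1.0 - BETA)           # truncate with the safe arm's value
    p = a / (a + b)                         # posterior mean success probability
    cont = p * (1.0 + BETA * risky_value(a + 1, b, lam, t + 1)) \
         + (1.0 - p) * BETA * risky_value(a, b + 1, lam, t + 1)
    return max(lam / (1.0 - BETA), cont)    # retire to the safe arm, or continue

def gittins_index(a, b, tol=1e-4):
    """Bisect on lam to find the indifference point: the (approximate) index."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        lam = 0.5 * (lo + hi)
        if risky_value(a, b, lam) > lam / (1.0 - BETA) + 1e-12:
            lo = lam                        # risky arm still preferred
        else:
            hi = lam
    return 0.5 * (lo + hi)

print(gittins_index(1, 1))                  # index for an arm with uniform prior
```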

Relevance:

90.00%

Publisher:

Abstract:

At present, the most reliable method to obtain end-user perceived quality is through subjective tests. In this paper, the impact of automatic region-of-interest (ROI) coding on the perceived quality of mobile video is investigated. The evidence, based on perceptual comparison analysis, shows that the coding strategy improves perceptual quality, particularly in low bit rate situations. The ROI detection used in this paper is based on two approaches: (1) automatic ROI, obtained by analyzing the visual content automatically; and (2) eye-tracking-based ROI, obtained by aggregating eye-tracking data across many users and used both to evaluate the accuracy of automatic ROI detection and to assess the subjective quality of automatic-ROI-encoded video. The perceptual comparison analysis is based on subjective assessments with 54 participants, across different content types, screen resolutions, and target bit rates, comparing the two ROI detection methods. The results from the user study demonstrate that ROI-based video encoding yields higher perceived quality than normal video encoded at a similar bit rate, particularly in the lower bit rate range.

Relevance:

90.00%

Publisher:

Abstract:

We study which trading-environment factors and trader characteristics determine individual information acquisition in experimental asset markets. Traders with larger endowments, existing inconclusive information, lower risk aversion, and less experience in financial markets tend to acquire more information. Overall, we find that traders overacquire information, so that informed traders on average obtain negative profits net of information costs. Information acquisition and the associated losses do not diminish over time. This overacquisition phenomenon is inconsistent with the predictions of rational expectations equilibrium, and we argue it resembles the overdissipation results from the contest literature. We find that more acquired information in the market leads to smaller differences between fundamental asset values and prices. Thus, the overacquisition phenomenon is a novel explanation for the high forecasting accuracy of prediction markets.

Relevance:

90.00%

Publisher:

Abstract:

Background: Optometry students are taught the process of subjective refraction through lectures and laboratory-based practicals before progressing to supervised clinical practice. Simulated learning environments (SLEs) are an emerging technology used in a range of health disciplines; however, there is limited evidence regarding the effectiveness of clinical simulators as an educational tool. Methods: Forty optometry students (20 fourth year and 20 fifth year) were assessed twice by a qualified optometrist (two examinations separated by 4-8 weeks) while completing a monocular non-cycloplegic subjective refraction on the same patient, with an unknown refractive error simulated using contact lenses. Half of the students were granted access to an online SLE, the Brien Holden Vision Institute (BHVI®) Virtual Refractor, and the remaining students formed a control group. The primary outcome measures at each visit were the accuracy of the clinical refraction compared to a qualified optometrist and relative to the Optometry Council of Australia and New Zealand (OCANZ) subjective refraction examination criteria. Secondary measures of interest included descriptors of student SLE engagement, student self-reported confidence levels, and correlations between performance in the simulated and real-world clinical environments. Results: Eighty percent of students in the intervention group interacted with the SLE (for an average of 100 minutes); however, there was no correlation between measures of student engagement with the BHVI® Virtual Refractor and the speed or accuracy of clinical subjective refractions. Fifth year students were typically more confident and refracted more accurately and quickly than fourth year students. A year group by experimental group interaction (p = 0.03) was observed for the accuracy of the spherical component of refraction, and post hoc analysis revealed that less experienced students exhibited greater gains in clinical accuracy following exposure to the SLE intervention. Conclusions: Short-term exposure to an SLE can positively influence clinical subjective refraction outcomes for less experienced optometry students and may be of benefit in increasing the skills of novice refractionists to levels appropriate for commencing supervised clinical interactions.

Relevance:

90.00%

Publisher:

Abstract:

Sonography is an important clinical tool in diagnosing appendicitis in children, as it can obviate both exposure to potentially harmful ionising radiation from computed tomography scans and the need for unnecessary appendicectomies. This review examines the diagnostic accuracy of ultrasound in the identification of acute appendicitis, with a particular focus on the utility of secondary sonographic signs as an adjunct or corollary to traditionally examined criteria. These secondary signs can be important in cases where the appendix cannot be identified with ultrasound, and a more meaningful finding may be made by incorporating their presence or absence. There is evidence that integrating these secondary signs into the final ultrasound diagnosis can improve the utility of ultrasound in cases where appendicitis is suspected, though there remains some conjecture about whether they play a more important role in negative or positive prediction in the absence of an identifiable appendix.

Relevance:

90.00%

Publisher:

Abstract:

Successful healing of long bone fractures is dependent on the mechanical environment created within the fracture, which in turn is dependent on the fixation strategy. Recent literature reports have suggested that locked plating devices are too stiff to reliably promote healing. However, in vitro testing of these devices has been inconsistent in both method of constraint and reported outcomes, making comparisons between studies and the assessment of construct stiffness problematic. Each of the constraint methods previously used in the literature was assessed for its effect on the bending of the sample and the resulting stiffness. The choice of outcome measures used in in vitro fracture studies was also assessed. Mechanical testing was conducted on seven-hole locked plate constructs under each method for comparison. Based on the assessment of each method, the use of spherical bearings, ball joints or similar at both ends of the sample is suggested. The use of near and far cortex movement was found to be more comprehensive and more accurate than traditional centrally calculated interfragmentary movement values; stiffness was found to be highly susceptible to the accuracy of deformation measurements and the constraint method, and should only be used as a within-study comparison measure. The reported stiffness values of locked plate constructs from in vitro mechanical testing are highly susceptible to testing constraints and output measures, with many standard techniques overestimating the stiffness of the construct. This raises the need for further investigation into the actual mechanical behaviour within the fracture gap of these devices.
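
To illustrate the distinction between the outcome measures discussed, here is a minimal sketch contrasting near- and far-cortex movement with a single centrally calculated interfragmentary movement (IFM) value; the loads and displacements are illustrative numbers, not the study's data.

```python
# Minimal sketch contrasting near/far cortex movement with a central IFM
# value; all numbers are illustrative.

def central_ifm(near_mm: float, far_mm: float) -> float:
    """Traditional centrally calculated IFM: midpoint of the cortex movements."""
    return 0.5 * (near_mm + far_mm)

def construct_stiffness(load_n: float, deflection_mm: float) -> float:
    """Axial stiffness (N/mm); sensitive to how deflection is measured."""
    return load_n / deflection_mm

near, far = 0.2, 1.4                       # mm; asymmetric gap motion under bending
print(central_ifm(near, far))              # 0.8 mm: hides the near/far asymmetry
print(construct_stiffness(400.0, central_ifm(near, far)))   # 500.0 N/mm
```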

Relevance:

90.00%

Publisher:

Abstract:

PURPOSE: 1) To develop and test decision tree (DT) models to classify physical activity (PA) intensity from accelerometer output and Gross Motor Function Classification System (GMFCS) level in ambulatory youth with cerebral palsy (CP); and 2) to compare the classification accuracy of the new DT models to that achieved by previously published cut-points for youth with CP. METHODS: Youth with CP (GMFCS levels I-III) (N=51) completed seven activity trials of increasing PA intensity while wearing a portable metabolic system and ActiGraph GT3X accelerometers. DT models were used to identify vertical axis (VA) and vector magnitude (VM) count thresholds corresponding to sedentary (SED) (<1.5 METs), light PA (LPA) (≥1.5 and <3 METs) and moderate-to-vigorous PA (MVPA) (≥3 METs). Models were trained and cross-validated using the 'rpart' and 'caret' packages within R. RESULTS: For the VA (VA_DT) and VM decision trees (VM_DT), a single threshold differentiated LPA from SED, while the threshold differentiating MVPA from LPA decreased as the level of impairment increased. The average cross-validation accuracy for the VA_DT was 81.1%, 76.7%, and 82.9% for GMFCS levels I, II, and III, respectively. The corresponding cross-validation accuracy for the VM_DT was 80.5%, 75.6%, and 84.2%, respectively. Within each GMFCS level, the decision tree models achieved better PA intensity recognition than previously published cut-points. The accuracy differential was greatest among GMFCS level III participants, in whom the previously published cut-points misclassified 40% of the MVPA activity trials. CONCLUSION: GMFCS-specific cut-points provide more accurate assessments of MVPA levels in youth with CP across the full spectrum of ambulatory ability.
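
As a rough analogue of the modelling step (the study used 'rpart' and 'caret' in R), the following Python sketch fits a shallow decision tree whose split points act as count cut-points; the counts, METs and labels are synthetic placeholders, not the study's data.

```python
# Hedged Python analogue of the rpart-style approach described above: a
# shallow decision tree over accelerometer counts yields cut-points separating
# SED, LPA and MVPA. All data below are synthetic placeholders.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
counts = rng.uniform(0, 4000, size=(300, 1))            # vertical-axis counts
mets = 1.0 + counts.ravel() / 1200 + rng.normal(0, 0.3, 300)
labels = np.where(mets < 1.5, "SED", np.where(mets < 3.0, "LPA", "MVPA"))

# A depth-2 tree: its split thresholds are the SED/LPA and LPA/MVPA cut-points.
tree = DecisionTreeClassifier(max_depth=2).fit(counts, labels)
print(export_text(tree, feature_names=["VA_counts"]))
```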

Relevance:

90.00%

Publisher:

Abstract:

A central tenet in the theory of reliability modelling is the quantification of the probability of asset failure. In general, reliability depends on asset age and the maintenance policy applied. Usually, failure and maintenance times are the primary inputs to reliability models. However, for many organisations, different aspects of these data are often recorded in different databases (e.g. work order notifications, event logs, condition monitoring data, and process control data). These recorded data cannot be interpreted individually, since they typically do not have all the information necessary to ascertain failure and preventive maintenance times. This paper presents a methodology for the extraction of failure and preventive maintenance times using commonly available, real-world data sources. A text-mining approach is employed to extract keywords indicative of the source of the maintenance event. Using these keywords, a Naïve Bayes classifier is then applied to attribute each machine stoppage to one of two classes: failure or preventive. The accuracy of the algorithm is assessed and the classified failure time data are then presented. The applicability of the methodology is demonstrated on a maintenance data set from an Australian electricity company.
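
To illustrate the classification step, here is a minimal sketch of keyword-based Naïve Bayes labelling of stoppages; the work-order snippets and labels are hypothetical, and scikit-learn stands in for whatever implementation the paper used.

```python
# Minimal sketch of the two-step approach described above: keyword features
# from work-order text, then a Naive Bayes classifier labelling each stoppage
# as 'failure' or 'preventive'. The training snippets are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = [
    "bearing seized pump tripped",          "scheduled lubrication service",
    "motor burnt out replaced",             "routine inspection no fault found",
    "unexpected shutdown broken coupling",  "planned overhaul of gearbox",
]
labels = ["failure", "preventive", "failure", "preventive", "failure", "preventive"]

clf = make_pipeline(CountVectorizer(), MultinomialNB()).fit(texts, labels)
print(clf.predict(["compressor failed on start-up"]))   # -> ['failure']
```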

Relevance:

90.00%

Publisher:

Abstract:

This paper presents a novel crop detection system applied to the challenging task of field sweet pepper (capsicum) detection. The field-grown sweet pepper crop presents several challenges for robotic systems, such as the high degree of occlusion and the fact that the crop can have a similar colour to the background (green on green). To overcome these issues, we propose a two-stage system that performs per-pixel segmentation followed by region detection. The output of the segmentation is used to search for highly probable regions, which are then declared to be sweet pepper. We propose the novel use of the local binary pattern (LBP) to perform crop segmentation. This feature improves the accuracy of crop segmentation from an AUC of 0.10, for previously proposed features, to 0.56. Using the LBP feature as the basis for our two-stage algorithm, we are able to detect 69.2% of field-grown sweet peppers across three sites. This is an impressive result given that the average detection accuracy of people viewing the same colour imagery is 66.8%.
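
As an illustration of per-pixel LBP texture features, here is a minimal sketch using scikit-image's local_binary_pattern as a stand-in for the paper's exact feature pipeline; the image and parameters are illustrative.

```python
# Minimal sketch of per-pixel LBP features for crop segmentation; skimage's
# local_binary_pattern stands in for the paper's exact pipeline, and the
# image here is synthetic.
import numpy as np
from skimage.feature import local_binary_pattern

image = np.random.rand(64, 64)     # placeholder for a grey-level field image
P, R = 8, 1                        # 8 neighbours sampled at radius 1
lbp = local_binary_pattern(image, P, R, method="uniform")

# Each pixel now carries a texture code; a per-pixel classifier over such
# features can label pixels as sweet pepper vs background before the
# region-detection stage.
print(lbp.shape, lbp.min(), lbp.max())
```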

Relevance:

90.00%

Publisher:

Abstract:

The emergence of multiple satellite navigation systems, including BDS, Galileo, modernized GPS, and GLONASS, brings great opportunities and challenges for precise point positioning (PPP). We study the contributions of various GNSS combinations to PPP performance based on undifferenced or raw observations, in which the signal delays and ionospheric delays must be considered. A priori ionospheric knowledge, such as regional or global corrections, strengthens the estimation of ionospheric delay parameters. The undifferenced models are generally more suitable for single-, dual-, or multi-frequency data processing for single or combined GNSS constellations. Another advantage over ionospheric-free PPP models is that undifferenced models avoid noise amplification by linear combinations. Extensive performance evaluations are conducted with multi-GNSS data sets collected from 105 MGEX stations in July 2014. Dual-frequency PPP results from each single constellation show that the convergence time of undifferenced PPP solution is usually shorter than that of ionospheric-free PPP solutions, while the positioning accuracy of undifferenced PPP shows more improvement for the GLONASS system. In addition, the GLONASS undifferenced PPP results demonstrate performance advantages in high latitude areas, while this impact is less obvious in the GPS/GLONASS combined configuration. The results have also indicated that the BDS GEO satellites have negative impacts on the undifferenced PPP performance given the current “poor” orbit and clock knowledge of GEO satellites. More generally, the multi-GNSS undifferenced PPP results have shown improvements in the convergence time by more than 60 % in both the single- and dual-frequency PPP results, while the positioning accuracy after convergence indicates no significant improvements for the dual-frequency PPP solutions, but an improvement of about 25 % on average for the single-frequency PPP solutions.
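
To make the noise-amplification point concrete, here is a minimal sketch of the ionospheric-free combination for GPS L1/L2; the roughly threefold noise inflation it computes is what undifferenced (raw-observation) processing avoids by estimating the ionospheric delay instead.

```python
# Minimal sketch of the noise amplification attributed above to the
# ionospheric-free (IF) linear combination, using GPS L1/L2 frequencies.
import math

f1, f2 = 1575.42e6, 1227.60e6      # GPS L1 and L2 carrier frequencies (Hz)
a1 = f1**2 / (f1**2 - f2**2)       # IF combination coefficients:
a2 = -f2**2 / (f1**2 - f2**2)      #   P_IF = a1 * P1 + a2 * P2

# If P1 and P2 carry independent noise of equal sigma, the IF combination
# inflates it by sqrt(a1^2 + a2^2), roughly a factor of 3.
print(round(a1, 3), round(a2, 3), round(math.hypot(a1, a2), 3))
# -> 2.546 -1.546 2.978
```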

Relevance:

90.00%

Publisher:

Abstract:

This research constructed a readability measure for French speakers for whom English is a second language. It identified true cognates, words with similar forms in the two languages, as an indicator of the difficulty of an English text for French people. A multilingual lexical resource is used to detect true cognates in text, and Statistical Language Modelling is used to predict the readability level. The proposed enhanced statistical language model is a step in the right direction, improving the accuracy of readability predictions for French speakers by up to 10% compared to state-of-the-art approaches. The outcome of this study could accelerate the learning process for French speakers who are studying English. More importantly, this study also benefits the readability estimation research community by presenting an approach and evaluation at the sentence level, and by innovating with the use of cognates as a new text feature.
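
To illustrate the cognate feature, here is a minimal sketch in which a simple surface-similarity heuristic stands in for the paper's multilingual lexical resource; the word list and threshold are illustrative assumptions.

```python
# Minimal sketch of true-cognate detection as a text feature; a surface
# similarity heuristic stands in for the paper's multilingual lexical
# resource, and the lexicon below is illustrative.
from difflib import SequenceMatcher

french_lexicon = {"table", "important", "nation", "accident", "train"}

def is_true_cognate(english_word: str, threshold: float = 0.85) -> bool:
    """Flag an English word as a likely French cognate by surface similarity."""
    w = english_word.lower()
    return any(SequenceMatcher(None, w, fr).ratio() >= threshold
               for fr in french_lexicon)

sentence = "the nation faced an important accident".split()
cognate_ratio = sum(is_true_cognate(w) for w in sentence) / len(sentence)
print(cognate_ratio)   # a higher ratio suggests an easier text for French readers
```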

Relevance:

90.00%

Publisher:

Abstract:

In modern evolutionary divergence analysis the role of geological information extends beyond providing a timescale, to informing molecular rate variation across the tree. Here I consider the implications of this development. I use fossil calibrations to test the accuracy of models of molecular rate evolution for placental mammals, and reveal substantial misspecification associated with life history rate correlates. Adding further calibrations to reduce dating errors at specific nodes unfortunately tends to transfer underlying rate errors to adjacent branches. Thus, tight calibration across the tree is vital to buffer against rate model errors. I argue that this must include allowing maximum bounds to be tight when good fossil records permit, otherwise divergences deep in the tree will tend to be inflated by the interaction of rate errors and asymmetric confidence in minimum and maximum bounds. In the case of placental mammals I sought to reduce the potential for transferring calibration and rate model errors across the tree by focusing on well-supported calibrations with appropriately conservative maximum bounds. The resulting divergence estimates are younger than others published recently, and provide the long-anticipated molecular signature for the placental mammal radiation observed in the fossil record near the 66 Ma Cretaceous–Paleogene extinction event.

Relevance:

90.00%

Publisher:

Abstract:

The most difficult operation in flood inundation mapping using optical flood images is to separate fully inundated areas from 'wet' areas where trees and houses are partly covered by water. This can be regarded as a typical instance of the mixed-pixel problem in such images. A number of automatic information extraction and image classification algorithms have been developed over the years for flood mapping using optical remote sensing images. Most classification algorithms assign each pixel to the class label with the greatest likelihood. However, these hard classification methods often fail to generate a reliable flood inundation map because of the presence of mixed pixels in the images. To address the mixed-pixel problem, advanced image processing techniques are adopted, and linear spectral unmixing is one of the most popular soft classification techniques used for mixed-pixel analysis. The good performance of linear spectral unmixing depends on two important issues: the method of selecting endmembers, and the method of modelling the endmembers for unmixing. This paper presents an improvement in the adaptive selection of the endmember subset for each pixel in spectral unmixing for reliable flood mapping. Using a fixed set of endmembers to unmix all pixels in an entire image can cause overestimation of the endmember spectra residing in a mixed pixel and hence reduce the performance of spectral unmixing. By contrast, applying an adaptively estimated subset of endmembers for each pixel can decrease the residual error in unmixing results and provide reliable output. The paper also shows that the proposed method can improve the accuracy of conventional linear unmixing methods and is easy to apply. Three different linear spectral unmixing methods were applied to test the improvement in unmixing results. Experiments were conducted on three sets of Landsat-5 TM images from three different flood events in Australia to examine the method under different flooding conditions, and satisfactory flood mapping outcomes were achieved.
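
For concreteness, here is a minimal sketch of the linear mixing model itself, solved by non-negative least squares for a single pixel with a fixed endmember set (the baseline the paper improves on by selecting an adaptive endmember subset per pixel); the spectra are synthetic placeholders.

```python
# Minimal sketch of linear spectral unmixing for one pixel: the pixel spectrum
# is modelled as a non-negative combination of endmember spectra and solved
# with non-negative least squares. Spectra are synthetic placeholders.
import numpy as np
from scipy.optimize import nnls

# Endmember matrix E: columns are pure spectra (water, vegetation, soil)
# over six reflective bands (e.g. Landsat-5 TM).
E = np.array([[0.05, 0.04, 0.40],
              [0.04, 0.07, 0.42],
              [0.03, 0.05, 0.45],
              [0.02, 0.45, 0.48],
              [0.01, 0.25, 0.50],
              [0.01, 0.12, 0.52]])

pixel = 0.6 * E[:, 0] + 0.4 * E[:, 1]      # a mixed 'wet vegetation' pixel

fractions, residual = nnls(E, pixel)       # abundance estimate per endmember
print(fractions.round(3), round(residual, 6))   # ~[0.6, 0.4, 0.], ~0.0
```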

Relevance:

90.00%

Publisher:

Abstract:

Background: Clinicians frequently use their own judgement to assess a patient's hydration status, although there is limited evidence for the diagnostic utility of any individual clinical symptom. Hence, the aim of this study was to compare the diagnostic accuracy of clinically assessed dehydration in older hospital patients (using multiple symptoms) versus dehydration measured using serum calculated osmolality (CO) as the reference standard. Method: Participants were 44 hospital patients aged ≥ 60 years. Dehydration was assessed clinically and pathologically (CO) within 24 hours of admission and at study exit. Indicators of diagnostic accuracy were calculated. Results: Clinicians identified 27% of patients as dehydrated at admission and 19% at exit, compared to 19% and 16% using CO. Agreement between the measures was fair at admission and poor at exit. Clinical assessment showed poor sensitivity for predicting dehydration with reasonable specificity. Conclusions: Compared to the use of CO, clinical assessment of dehydration in older patients was poor. Given that failure to identify dehydration in this population may have serious consequences, we recommend that clinicians do not rely upon their own assessments without also using the reference standard.
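
To illustrate the indicators of diagnostic accuracy referred to, here is a minimal sketch computing sensitivity and specificity from a 2x2 confusion matrix; the counts are hypothetical, chosen only to be consistent with the reported admission proportions, not taken from the study.

```python
# Minimal sketch of the diagnostic-accuracy indicators described above,
# computed from a 2x2 confusion matrix; the counts are hypothetical.
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    sensitivity = tp / (tp + fn)   # dehydrated patients correctly flagged
    specificity = tn / (tn + fp)   # non-dehydrated patients correctly cleared
    return sensitivity, specificity

# Hypothetical admission counts for 44 patients: clinical assessment
# against the calculated-osmolality reference standard.
sens, spec = diagnostic_accuracy(tp=4, fp=8, fn=4, tn=28)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")   # 0.50, 0.78
```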