7 results for Sequential error ratio

in Deakin Research Online - Australia


Relevance:

90.00%

Publisher:

Abstract:

Researchers typically tackle questions by constructing powerful, highly replicated sampling protocols or experimental designs. Such approaches often demand large sample sizes and are usually conducted only once. In contrast, many industries need to continually monitor phenomena such as equipment reliability, water quality, or the abundance of a pest. In such instances, the costs and time inherent in sampling preclude the use of highly intensive methods. Ideally, one wants to collect the absolute minimum number of samples needed to make an appropriate decision. Sequential sampling, wherein the sample size is a function of the results of the sampling process itself, offers a practicable solution. But smaller sample sizes equate to less knowledge about the population, and thus an increased risk of making an incorrect management decision. There are various statistical techniques to account for and measure risk in sequential sampling plans. We illustrate and assess these methods using examples relating to the management of arthropod pests in commercial crops, but they can be applied to any situation where sequential sampling is used.
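One common way to formalise the risk trade-off in sequential sampling is Wald's sequential probability ratio test (SPRT), sketched below for a binomial infestation rate. The thresholds, error rates and observations are hypothetical, and the sketch is illustrative rather than a reproduction of the methods assessed in the paper.

```python
import math

def sprt_decision(successes, n, p0=0.05, p1=0.20, alpha=0.10, beta=0.10):
    """Wald's SPRT for a binomial proportion (e.g., fraction of infested plants).

    p0/p1 are hypothetical 'acceptable' and 'action' infestation levels;
    alpha/beta are the tolerated probabilities of each wrong decision.
    Returns 'accept H0', 'accept H1', or 'continue sampling'.
    """
    # Log-likelihood ratio after observing `successes` infested plants in `n` samples.
    llr = (successes * math.log(p1 / p0)
           + (n - successes) * math.log((1 - p1) / (1 - p0)))
    lower = math.log(beta / (1 - alpha))    # cross below: accept H0 (infestation acceptable)
    upper = math.log((1 - beta) / alpha)    # cross above: accept H1 (infestation above threshold)
    if llr <= lower:
        return "accept H0"
    if llr >= upper:
        return "accept H1"
    return "continue sampling"

# Example: sample plants one at a time until a decision boundary is crossed.
counts = [0, 0, 1, 0, 1, 1, 0, 1, 1]   # hypothetical infested (1) / clean (0) plants
infested = 0
for n, x in enumerate(counts, start=1):
    infested += x
    decision = sprt_decision(infested, n)
    if decision != "continue sampling":
        print(f"Stop after {n} samples: {decision}")
        break
```

Because the stopping rule depends on the running tally, the expected sample size is typically much smaller than in a fixed-size design with the same error probabilities.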

Relevance:

80.00%

Publisher:

Abstract:

Communication via email is one of the most popular services on the Internet. Email has brought great convenience to our daily work and life. However, unsolicited messages, or spam, flood our mailboxes, wasting bandwidth, time and money. To this end, this paper presents a rough set based model that classifies emails into three categories - spam, non-spam and suspicious - rather than the two classes (spam and non-spam) used by most current approaches. Compared with popular classification methods such as naive Bayes, the proposed model reduces the error ratio of non-spam emails being misclassified as spam.
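The three-way outcome can be pictured with a simple thresholding rule on a classifier's spam score. The cut-offs below are hypothetical, and the sketch stands in for, rather than reproduces, the paper's rough set model (which derives its positive, negative and boundary regions from lower and upper approximations).

```python
def triage(spam_probability, lower=0.2, upper=0.8):
    """Three-way email triage on a classifier's spam score.

    Thresholds are hypothetical; the cited paper derives its regions from
    rough set approximations rather than fixed probability cut-offs.
    """
    if spam_probability >= upper:
        return "spam"            # confident positive region
    if spam_probability <= lower:
        return "non-spam"        # confident negative region
    return "suspicious"          # boundary region: defer to the user or further checks

print(triage(0.95))  # spam
print(triage(0.50))  # suspicious
print(triage(0.05))  # non-spam
```

The point of the boundary region is that a borderline message is deferred rather than forced into spam or non-spam, which is how the misclassification of legitimate mail can be reduced.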

Relevance:

30.00%

Publisher:

Abstract:

This paper draws on empirical evidence to demonstrate that a heuristic framework signals collapse with significantly higher accuracy than the traditional static approach. Using a sample of 494 US publicly listed companies, comprising 247 collapsed firms matched with 247 financially healthy ones, the heuristic framework proves decisively superior the closer one gets to the event of collapse, culminating in 12.5% higher overall accuracy than the static approach during the year of collapse. An even more dramatic improvement occurs in the reduction of Type I error, with the heuristic framework delivering a 66.7% improvement over its static counterpart.
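As an illustration of the metrics being compared, the sketch below computes overall accuracy and Type I error (taken, per the usual convention in failure-prediction studies, as classifying a collapsed firm as healthy) from hypothetical confusion counts; the figures are not the paper's results.

```python
# Illustrative only: hypothetical confusion counts for a matched sample of
# collapsed vs. healthy firms, showing how overall accuracy and Type I error
# are computed and compared between two classifiers.

def accuracy_and_type1(collapsed_correct, collapsed_missed,
                       healthy_correct, healthy_flagged):
    total = collapsed_correct + collapsed_missed + healthy_correct + healthy_flagged
    accuracy = (collapsed_correct + healthy_correct) / total
    type1 = collapsed_missed / (collapsed_correct + collapsed_missed)
    return accuracy, type1

# Hypothetical numbers, not taken from the paper.
static_acc, static_t1 = accuracy_and_type1(175, 72, 190, 57)
heuristic_acc, heuristic_t1 = accuracy_and_type1(223, 24, 198, 49)

print(f"static:    accuracy={static_acc:.3f}, Type I error={static_t1:.3f}")
print(f"heuristic: accuracy={heuristic_acc:.3f}, Type I error={heuristic_t1:.3f}")
print(f"Type I error reduction: {(static_t1 - heuristic_t1) / static_t1:.1%}")
```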

Relevance:

30.00%

Publisher:

Abstract:

This thesis uses computer simulation to study the effect of the normal distribution assumption on the power of several many-sample location and scale test procedures. It also proposes an almost-robust parametric test, the numerical likelihood ratio test (NLRT), for non-normal situations. The NLRT is found to perform better than all of the other tests considered. Some real-life data sets are used as examples.
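The general simulation strategy can be sketched as follows: repeatedly draw samples from a non-normal distribution with genuinely different locations and record how often each test rejects. The tests and distribution below are stand-ins (classical ANOVA F and Kruskal-Wallis under exponential data), not the thesis's NLRT.

```python
import numpy as np
from scipy import stats

# Estimate power by Monte Carlo: proportion of simulated data sets in which
# each many-sample location test rejects at alpha = 0.05, when the three
# groups are drawn from a skewed (exponential) distribution with different means.
rng = np.random.default_rng(1)
alpha, reps, n = 0.05, 2000, 20
group_means = [1.0, 1.3, 1.6]            # hypothetical location shifts

rejections = {"ANOVA F": 0, "Kruskal-Wallis": 0}
for _ in range(reps):
    groups = [rng.exponential(scale=m, size=n) for m in group_means]
    if stats.f_oneway(*groups).pvalue < alpha:
        rejections["ANOVA F"] += 1
    if stats.kruskal(*groups).pvalue < alpha:
        rejections["Kruskal-Wallis"] += 1

for name, count in rejections.items():
    print(f"{name}: estimated power = {count / reps:.2f}")
```

Setting all group means equal turns the same loop into a check of each test's actual size (Type I error rate) under non-normality, which is the other quantity the thesis examines.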

Relevance:

30.00%

Publisher:

Abstract:

There is currently no universally recommended and accepted method of data processing within the science of indirect calorimetry for either mixing chamber or breath-by-breath systems of expired gas analysis. Exercise physiologists were first surveyed to determine methods used to process oxygen consumption (V̇O2) data, and current attitudes to data processing within the science of indirect calorimetry. Breath-by-breath datasets obtained from indirect calorimetry during incremental exercise were then used to demonstrate the consequences of commonly used time, breath and digital filter post-acquisition data processing strategies. Assessment of the variability in breath-by-breath data was determined using multiple regression based on the independent variables ventilation (VE) and the expired gas fractions for oxygen and carbon dioxide, FEO2 and FECO2, respectively. Based on the results of explanation of variance of the breath-by-breath V̇O2 data, methods of processing to remove variability were proposed for time-averaged, breath-averaged and digital filter applications. Among exercise physiologists, the strategy used to remove the variability in sequential V̇O2 measurements varied widely, and consisted of time averages (30 sec [38%], 60 sec [18%], 20 sec [11%], 15 sec [8%]), a moving average of five to 11 breaths (10%), and the middle five of seven breaths (7%). Most respondents indicated that they used multiple criteria to establish maximum V̇O2 (V̇O2max), including the attainment of age-predicted maximum heart rate (HRmax) [53%], respiratory exchange ratio (RER) >1.10 (49%) or RER >1.15 (27%), and a rating of perceived exertion (RPE) of >17, 18 or 19 (20%). The reasons stated for these strategies included their own beliefs (32%), what they were taught (26%), what they read in research articles (22%), tradition (13%) and the influence of their colleagues (7%). The combination of VE, FEO2 and FECO2 removed 96-98% of V̇O2 breath-by-breath variability in incremental and steady-state exercise V̇O2 data sets, respectively. Reduction of residual error in V̇O2 datasets to 10% of the raw variability results from application of a 30-second time average, a 15-breath running average, or a 0.04 Hz low cut-off digital filter. Thus, we recommend that once these data processing strategies are used, the peak or maximal value becomes the highest processed datapoint. Exercise physiologists need to agree on, and continually refine through empirical research, a consistent process for analysing data from indirect calorimetry.
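A minimal sketch of the recommended 30-second time-average strategy, applied to synthetic breath-by-breath data, with the peak taken as the highest processed datapoint. The data and variable names are illustrative, not from the study.

```python
import numpy as np
import pandas as pd

# Bin breath-by-breath VO2 into 30-second time averages and take the highest
# processed value as VO2peak. The data below are synthetic; only the
# 30-second averaging rule and the "highest processed datapoint" rule come
# from the abstract.
rng = np.random.default_rng(0)
breath_times = np.cumsum(rng.uniform(1.5, 3.0, size=400))            # seconds
vo2 = np.linspace(0.8, 3.5, breath_times.size) + rng.normal(0, 0.25, breath_times.size)

breaths = pd.DataFrame({"time_s": breath_times, "vo2_l_min": vo2})
breaths["bin"] = (breaths["time_s"] // 30).astype(int)                # 30-second bins
binned = breaths.groupby("bin")["vo2_l_min"].mean()

vo2_peak = binned.max()
print(f"VO2peak from 30-second averages: {vo2_peak:.2f} L/min")
```

Swapping the binning step for a 15-breath rolling mean of the vo2_l_min column would give the breath-averaged alternative described in the abstract; the peak is still read from the processed series, not the raw breaths.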

Relevance:

30.00%

Publisher:

Abstract:

The prevalence of visual impairment due to uncorrected refractive error has not previously been studied in Canada. A population-based study was conducted in Brantford, Ontario. The target population included all people 40 years of age and older. Study participants were selected using a randomized sampling strategy based on postal codes. Presenting distance and near visual acuities were measured with habitual spectacle correction, if any, in place. Best corrected visual acuities were determined for all participants who had a presenting distance visual acuity of less than 20/25. The population-weighted prevalence of distance visual impairment (visual acuity <20/40 in the better eye) was 2.7% (n = 768, 95% confidence interval [CI] 1.8–4.0%), with 71.8% correctable by refraction. The population-weighted prevalence of near visual impairment (visual acuity <20/40 with both eyes) was 2.2% (95% CI 1.4–3.6%), with 69.1% correctable by refraction. Multivariable-adjusted analysis showed that the odds of having distance visual impairment were independently associated with increased age (odds ratio [OR] 3.56, 95% CI 1.22–10.35; ≥65 years compared to those 39–64 years) and time since last eye examination (OR 4.93, 95% CI 1.19–20.32; ≥5 years compared to ≤2 years). The same factors appear to be associated with increased prevalence of near visual impairment, but the associations were not statistically significant. The majority of visual impairment found in Brantford was due to uncorrected refractive error. Factors that increased the prevalence of visual impairment were the same for distance and near visual acuity measurements.
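For readers unfamiliar with the reporting format, the sketch below computes an unadjusted odds ratio with a 95% Wald confidence interval from a 2×2 table using hypothetical counts; the study's estimates are multivariable-adjusted and are not reproduced here.

```python
import math

def odds_ratio_ci(exposed_cases, exposed_noncases, unexposed_cases, unexposed_noncases):
    """Unadjusted odds ratio with a 95% Wald confidence interval from a 2x2 table."""
    or_ = (exposed_cases * unexposed_noncases) / (exposed_noncases * unexposed_cases)
    se_log_or = math.sqrt(1 / exposed_cases + 1 / exposed_noncases
                          + 1 / unexposed_cases + 1 / unexposed_noncases)
    lo = math.exp(math.log(or_) - 1.96 * se_log_or)
    hi = math.exp(math.log(or_) + 1.96 * se_log_or)
    return or_, lo, hi

# Hypothetical counts: visual impairment (cases) by age group, not the study's data.
or_, lo, hi = odds_ratio_ci(exposed_cases=15, exposed_noncases=285,
                            unexposed_cases=6, unexposed_noncases=462)
print(f"OR = {or_:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```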

Relevance:

30.00%

Publisher:

Abstract:

Capability indices for both univariate and multivariate processes are extensively employed in quality control to assess the quality status of production batches before their release for operational use. A capability index is traditionally a measure of the ratio of the allowable process spread to the actual process spread. In this paper, we adopt bootstrap and sequential sampling procedures to determine the optimal sample size for estimating the multivariate capability index introduced by Pearn et al. [12]. Bootstrap techniques have the distinct advantage of placing very minimal requirements on the distributions of the underlying quality characteristics, thereby rendering them relevant under a wide variety of situations. Finally, we provide several numerical examples in which the sequential sampling procedures are evaluated and compared.
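A minimal sketch of a percentile bootstrap interval for a capability index. For brevity it uses the univariate Cp = (USL - LSL) / (6σ) with hypothetical specification limits and data, rather than the multivariate index treated in the paper.

```python
import numpy as np

# Percentile bootstrap for a capability index: resample the batch with
# replacement, recompute the index, and read the interval from the
# percentiles of the bootstrap distribution. Limits and data are hypothetical.
rng = np.random.default_rng(42)
usl, lsl = 10.6, 9.4                                   # hypothetical spec limits
sample = rng.normal(loc=10.0, scale=0.15, size=50)     # hypothetical batch measurements

def cp(x):
    return (usl - lsl) / (6 * np.std(x, ddof=1))

boot = np.array([cp(rng.choice(sample, size=sample.size, replace=True))
                 for _ in range(2000)])
lower, upper = np.percentile(boot, [2.5, 97.5])
print(f"Cp estimate = {cp(sample):.2f}, 95% bootstrap CI = ({lower:.2f}, {upper:.2f})")
```

In a sequential version of this idea, sampling continues batch by batch until the bootstrap interval is narrow enough to support a release decision, which is the kind of stopping rule the paper evaluates numerically.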