2 results for Computing Classification Systems
in eResearch Archive - Queensland Department of Agriculture
Abstract:
Quality and safety evaluation of agricultural products has become an increasingly important consideration in market and commercial viability, and systems for such evaluations are now demanded by customers, including distributors and retailers. Unfortunately, most horticultural products struggle to deliver adequate and consistent quality to the consumer. Removing inconsistencies and providing what the consumer expects is a key factor in retaining and expanding both domestic and international markets. Most commercial quality classification systems for fruit and vegetables are based on external features of the product, for example: shape, colour, size, weight and blemishes. However, the external appearance of most fruit is generally not an accurate guide to the internal or eating quality of the fruit. Internal quality of fruit is currently judged subjectively on attributes such as volatiles, firmness, and appearance. Destructive subjective measures such as internal flesh colour, or objective measures such as extraction of juice to measure sweetness (°Brix) or assessment of dry matter (DM) content are also used, although obviously not for every fruit, only a sample to represent the whole consignment. For avocado fruit, external colour is not a maturity characteristic, and its smell is too weak and appears only later in its maturity stage (Gaete-Garreton et al., 2005). Since maturity is a major component of avocado quality and palatability, it is important to harvest mature fruit to ensure that fruit will ripen properly and have acceptable eating quality. Currently, commercial avocado maturity estimation is based on destructive assessment of the %DM, and sometimes percent oil, both of which are highly correlated with maturity (Clark et al., 2003; Mizrach & Flitsanov, 1999).
Avocados Australia Limited (AAL, 2008) recommends a minimum maturity standard for its growers of 23 %DM (greater than 10% oil content) for the ‘Hass’ cultivar, although consumer studies indicate a preference for at least 25 %DM (Harker et al., 2007).
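The destructive %DM assessment described above amounts to a simple acceptance check on a sampled subset of a consignment. A minimal sketch, assuming a mean-based pass/fail rule (the function name, decision rule and sample values are illustrative, not taken from AAL):

```python
def consignment_meets_standard(dm_samples, min_dm=23.0):
    """Return True if the mean %DM of the destructively sampled fruit
    meets the minimum maturity standard (23 %DM for 'Hass' per AAL, 2008)."""
    mean_dm = sum(dm_samples) / len(dm_samples)
    return mean_dm >= min_dm

# Hypothetical %DM measurements from a small sample of a consignment.
sample = [24.1, 22.8, 25.0, 23.6, 24.4]   # mean = 23.98 %DM
print(consignment_meets_standard(sample))        # → True  (meets 23 %DM)
print(consignment_meets_standard(sample, 25.0))  # → False (below consumer-preferred 25 %DM)
```

In practice the decision rule (mean, minimum of the sample, or a tolerance on the proportion below standard) would follow the relevant industry protocol rather than this sketch.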
Abstract:
Many statistical forecast systems are available to interested users. In order to be useful for decision-making, these systems must be based on evidence of underlying mechanisms. Once causal connections between the mechanisms and their statistical manifestation have been firmly established, the forecasts must also provide some quantitative evidence of ‘quality’. However, the quality of statistical climate forecast systems (forecast quality) is an ill-defined and frequently misunderstood property. Often, providers and users of such forecast systems are unclear about what ‘quality’ entails and how to measure it, leading to confusion and misinformation. Here we present a generic framework to quantify aspects of forecast quality using an inferential approach to calculate nominal significance levels (p-values), which can be obtained either by directly applying non-parametric statistical tests such as Kruskal-Wallis (KW) or Kolmogorov-Smirnov (KS), or by using Monte-Carlo methods (in the case of forecast skill scores). Once converted to p-values, these forecast quality measures provide a means to objectively evaluate and compare temporal and spatial patterns of forecast quality across datasets and forecast systems. Our analysis demonstrates the importance of providing p-values rather than adopting some arbitrarily chosen significance level such as p < 0.05 or p < 0.01, which is still common practice. This is illustrated by applying non-parametric tests (such as KW and KS) and skill scoring methods (LEPS and RPSS) to the 5-phase Southern Oscillation Index classification system using historical rainfall data from Australia, The Republic of South Africa and India. The selection of quality measures is solely based on their common use and does not constitute endorsement. We found that non-parametric statistical tests can be adequate proxies for skill measures such as LEPS or RPSS.
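The direct route the abstract describes, deriving a nominal p-value from a non-parametric test on rainfall grouped by forecast phase, can be sketched in pure Python. This is a minimal illustration with made-up rainfall values, using a permutation loop rather than the asymptotic KS distribution to obtain the p-value:

```python
import random

def ks_statistic(a, b):
    """Two-sample KS statistic: maximum distance between the empirical CDFs."""
    values = sorted(set(a) | set(b))
    d = 0.0
    for v in values:
        cdf_a = sum(x <= v for x in a) / len(a)
        cdf_b = sum(x <= v for x in b) / len(b)
        d = max(d, abs(cdf_a - cdf_b))
    return d

def ks_pvalue(a, b, n_perm=2000, seed=1):
    """Permutation p-value: how often a random relabelling of the pooled
    data yields a KS statistic at least as large as the observed one."""
    rng = random.Random(seed)
    observed = ks_statistic(a, b)
    pooled = list(a) + list(b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        if ks_statistic(pooled[:len(a)], pooled[len(a):]) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one correction keeps p > 0

# Hypothetical monthly rainfall (mm) in two SOI phases.
phase1 = [12, 30, 25, 8, 17, 22, 15, 28]
phase2 = [45, 60, 38, 52, 41, 70, 55, 48]
print(round(ks_pvalue(phase1, phase2), 4))  # small p: the distributions differ
```

The same structure applies to a Kruskal-Wallis comparison across all five SOI phases; only the test statistic changes.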
The framework can be implemented anywhere, regardless of dataset, forecast system or quality measure. Eventually such inferential evidence should be complemented by descriptive statistical methods in order to fully assist in operational risk management.
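The Monte-Carlo route for skill scores follows the same pattern: score the actual forecasts, then compare that score against a null distribution generated by randomly reassigning the forecasts to outcomes. The hit-rate score and the tercile data below are illustrative placeholders, not LEPS or RPSS:

```python
import random

def hit_rate(forecasts, outcomes):
    """Toy skill score: fraction of categorical forecasts that verify."""
    return sum(f == o for f, o in zip(forecasts, outcomes)) / len(outcomes)

def monte_carlo_pvalue(forecasts, outcomes, n_sim=5000, seed=7):
    """Nominal p-value for the score: probability that shuffled (skill-free)
    forecasts score at least as well as the actual forecasts."""
    rng = random.Random(seed)
    observed = hit_rate(forecasts, outcomes)
    shuffled = list(forecasts)
    hits = 0
    for _ in range(n_sim):
        rng.shuffle(shuffled)            # destroys any forecast-outcome link
        if hit_rate(shuffled, outcomes) >= observed:
            hits += 1
    return (hits + 1) / (n_sim + 1)

# Hypothetical tercile forecasts (0=dry, 1=normal, 2=wet) and observed outcomes.
fcst = [0, 2, 1, 2, 0, 1, 2, 0, 1, 2, 2, 0]
obs  = [0, 2, 1, 2, 0, 1, 2, 1, 1, 2, 2, 0]
print(monte_carlo_pvalue(fcst, obs))  # small p: skill unlikely to arise by chance
```

Substituting LEPS or RPSS for `hit_rate` leaves the resampling logic unchanged, which is what makes the resulting p-values comparable across quality measures.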