70 results for Arithmetic mean

in Deakin Research Online - Australia


Relevance:

70.00%

Publisher:

Abstract:

A retrospective assessment of exposure to benzene was carried out for a nested case-control study of lympho-haematopoietic cancers, including leukaemia, in the Australian petroleum industry. Each job or task in the industry was assigned a Base Estimate (BE) of exposure derived from task-based personal exposure assessments carried out by the company occupational hygienists. The BEs corresponded to the estimated arithmetic mean exposure to benzene for each job or task and were used in a deterministic algorithm to estimate the exposure of subjects in the study. Nearly all of the data sets underlying the BEs were found to contain some values below the limit of detection (LOD) of the sampling and analytical methods, and some were very heavily censored; up to 95% of the data were below the LOD in some data sets. It was therefore necessary to use a method of calculating the arithmetic mean exposures that took the censored data into account. Three different methods were employed in an attempt to select the most appropriate method for the particular data in the study. A common method is to replace the missing (censored) values with half the detection limit. This method has been recommended for data sets where much of the data are below the limit of detection or where the data are highly skewed, with a geometric standard deviation of 3 or more. Another method, which replaces the censored data with the limit of detection divided by the square root of 2, has been recommended when relatively few data are below the detection limit or where the data are not highly skewed. A third method that was examined is Cohen's method. This involves mathematical extrapolation of the left-hand tail of the distribution, based on the distribution of the uncensored data, and calculation of the maximum likelihood estimate of the arithmetic mean. When these three methods were applied to the data in this study, it was found that the first two simple methods gave similar results in most cases.
Cohen's method, on the other hand, gave results that were generally, but not always, higher than the simpler methods, and in some cases gave extremely high and even implausible estimates of the mean. It appears that if the data deviate substantially from a simple log-normal distribution, particularly if high outliers are present, Cohen's method produces erratic and unreliable estimates. After examining these results, and both the distributions and proportions of censored data, it was decided that the half-limit-of-detection method was the most suitable in this particular study.
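The two substitution methods described above (half the LOD, and LOD divided by the square root of 2) can be sketched as follows; the readings and LOD here are hypothetical values for illustration only, not data from the study.

```python
import math

def substitute_censored(samples, lod, replacement):
    # Replace every reading below the limit of detection with `replacement`
    return [x if x >= lod else replacement for x in samples]

def arithmetic_mean(values):
    return sum(values) / len(values)

# Hypothetical benzene readings (ppm); values below the LOD are censored
readings = [0.02, 0.05, 0.01, 0.30, 0.02, 0.08]
lod = 0.05

half_lod_mean = arithmetic_mean(substitute_censored(readings, lod, lod / 2))
root2_lod_mean = arithmetic_mean(substitute_censored(readings, lod, lod / math.sqrt(2)))
```

Because LOD/√2 ≈ 0.71 × LOD exceeds LOD/2, the second estimate is always at least as large whenever any readings are censored. Cohen's maximum-likelihood method additionally requires fitting a log-normal distribution to the uncensored data and is not sketched here.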

Relevance:

60.00%

Publisher:

Abstract:

The effects of animal species (AS; Angora goats, Merino sheep, mixed-grazed goats and sheep at the ratio of 1:1) and stocking rate (SR; 7.5, 10 and 12.5 animals/ha) on fibre production and quality were determined in a replicated experiment on improved annual temperate pastures in southern Australia from 1981 to 1984. Separately grazed sheep produced the most total clean fibre/ha at each SR. Mixed-grazed treatments produced amounts of clean fibre/ha similar to the arithmetic mean of sheep and goat treatments at 7.5/ha (21.9 versus 21.3 kg/ha), 10% more at 10/ha (28.3 versus 25.3 kg/ha, P < 0.05) and 7% more at 12.5/ha (31.6 versus 29.6 kg/ha, P < 0.10). Clean wool production/head was affected by AS and SR but not year. Clean mohair production was affected by SR and year but not AS. Variation in mean fibre diameter (MFD) accounted for 67 and 71%, respectively, of the variation in clean wool and clean mohair production/head. There was an AS × SR interaction for clean fibre production/t pasture. Growth rate of mohair was highest in autumn and least in summer. In each season, an increase in the SR reduced the clean mohair growth rate. Growth rate of wool was highest in spring and least in summer. Wool and mohair MFD were affected by an AS × SR interaction. Mohair MFD was also affected by year and season. At 10/ha, wool from mixed-grazed sheep had a greater MFD than wool from separately grazed sheep (20.2 versus 18.9 μm) and mixed-grazed goats grew mohair 1 μm coarser than separately grazed goats. At 12.5/ha mixed-grazed goats grew mohair 1.9 μm finer than separately grazed goats. Mohair MFD was predicted by a multiple regression that included average liveweight for the period of fleece growth, season of growth (summer 1 μm finer than winter) and year (range 1.27 μm). Mohair MFD increased 4.7 μm per 10 kg increase in average fleece-free liveweight (P = 6.4 × 10⁻¹⁴). Fleece-free liveweight alone accounted for 76.4% of the variation in mohair MFD.
There was an AS × SR interaction for the incidence of kemp and medullated fibres; under severe grazing pressure their incidence was suppressed. This experiment indicated that the principles associated with the effects of SR on wool production on annual temperate pastures apply to mohair production. Mixed grazing of Merino sheep and Angora goats produced complementary and competitive effects depending on the SR. Angora goats should not be grazed alone or mixed-grazed with sheep on annual temperate pastures at SR greater than that recommended for Merino sheep.

Relevance:

60.00%

Publisher:

Abstract:

We investigate the problem of averaging values on lattices, and in particular on discrete product lattices. This problem arises in image processing when several color values, given in RGB, HSL, or another coding scheme, need to be combined. We show how the arithmetic mean and the median can be constructed by minimizing appropriate penalties. We also discuss which of them coincide with the Cartesian product of the standard mean and median.
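A minimal sketch of this penalty-based construction on a single discrete channel (assuming candidate outputs 0–255, as in one RGB component): the quadratic penalty yields a mean-like output and the absolute-value penalty a median-like one.

```python
def penalty_minimizer(inputs, penalty, candidates):
    # Return the candidate output with the smallest total penalty to all inputs
    return min(candidates, key=lambda y: sum(penalty(y, x) for x in inputs))

values = [2, 3, 3, 10]      # hypothetical values of one color channel
candidates = range(256)     # the discrete lattice of channel values

mean_like = penalty_minimizer(values, lambda y, x: (y - x) ** 2, candidates)
median_like = penalty_minimizer(values, lambda y, x: abs(y - x), candidates)
```

On a product lattice such as RGB, a separable penalty minimized coordinate-wise reproduces the Cartesian product of the channel-wise aggregates; non-separable penalties need not, which is the distinction the abstract refers to.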

Relevance:

60.00%

Publisher:

Abstract:

We review various representations of the median and related aggregation functions. An advantage of the median is that it discards extreme values of the inputs, and hence exhibits a better central tendency than the arithmetic mean. However, the value of the median depends on only one or two central inputs. Our aim is to design median-like aggregation functions whose value depends on several central inputs. Such functions will preserve the stability of the median against extreme values, but will take more inputs into account. A method based on graduation curves is presented.

Relevance:

60.00%

Publisher:

Abstract:

We investigate the problem of averaging values on lattices, and in particular on discrete product lattices. This problem arises in image processing when several color values, given in RGB, HSL, or another coding scheme, need to be combined. We show how the arithmetic mean and the median can be constructed by minimizing appropriate penalties, and we discuss which of them coincide with the Cartesian product of the standard mean and median. We apply these functions in image processing. We present three algorithms for color image reduction based on minimizing penalty functions on discrete product lattices.

Relevance:

60.00%

Publisher:

Abstract:

Extensions of aggregation functions to Atanassov orthopairs (often referred to as intuitionistic fuzzy sets or AIFS) usually involve replacing the standard arithmetic operations with those defined for the membership and non-membership orthopairs. One problem with such constructions is that the usual choice of operations has led to formulas which do not generalize the aggregation of ordinary fuzzy sets (where the membership and non-membership values add to 1). Previous extensions of the weighted arithmetic mean and ordered weighted averaging operator also have the absorbing element 〈1,0〉, which becomes particularly problematic in the case of the Bonferroni mean, whose generalizations are useful for modeling mandatory requirements. As well as considering the consistency and interpretability of the operations used for their construction, we hold that it is also important for aggregation functions over higher order fuzzy sets to exhibit analogous behavior to their standard definitions. After highlighting the main drawbacks of existing Bonferroni means defined for Atanassov orthopairs and interval data, we present two alternative methods for extending the generalized Bonferroni mean. Both lead to functions with properties more consistent with the original Bonferroni mean, and which coincide in the case of ordinary fuzzy values.
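For reference, the ordinary Bonferroni mean over real inputs in [0,1], whose behavior the proposed extensions aim to preserve, is under its standard definition B^{p,q}(x) = (1/(n(n−1)) Σ_{i≠j} x_i^p x_j^q)^{1/(p+q)}. A direct sketch:

```python
def bonferroni_mean(x, p, q):
    # Standard Bonferroni mean: average x_i^p * x_j^q over all pairs i != j
    n = len(x)
    s = sum(x[i] ** p * x[j] ** q
            for i in range(n) for j in range(n) if i != j)
    return (s / (n * (n - 1))) ** (1.0 / (p + q))
```

With q = 0 this reduces to the power mean ((1/n) Σ x_i^p)^{1/p}, and with p, q > 0 the output is 0 whenever all but one input are 0: every pair then contains a zero factor. This is the "mandatory requirement" behavior mentioned above.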

Relevance:

60.00%

Publisher:

Abstract:

This chapter gives an overview of aggregation functions toward their use in recommender systems. Simple aggregation functions such as the arithmetic mean are often employed to aggregate user features, item ratings, measures of similarity, etc.; however, many other aggregation functions exist that could deliver increased accuracy and flexibility to many systems. We provide definitions of some important families and properties, sophisticated methods of construction, and various examples of aggregation functions in the domain of recommender systems.

Relevance:

60.00%

Publisher:

Abstract:

In image processing, particularly in image reduction, averaging aggregation functions play an important role. In this work we study the aggregation of color values (RGB) and we present an image reduction algorithm for RGB color images. For this purpose, we define and study aggregation functions and penalty functions in product lattices. We show how the arithmetic mean and the median can be obtained by minimizing specific penalty functions. Moreover, we study other penalty functions and we show that, in general, aggregation functions on product lattices do not coincide with the Cartesian product of the corresponding aggregation functions. Finally, we present an experimental study in which we test our reduction algorithm and analyze the stability of the penalty functions in images affected by noise.

Relevance:

60.00%

Publisher:

Abstract:

Density-based means have recently been proposed as a method for dealing with outliers in the stream processing of data. Derived from a weighted arithmetic mean with variable weights that depend on the location of all data samples, these functions are not monotonic and hence cannot be classified as aggregation functions. In this article we establish the weak monotonicity of this class of averaging functions and use this to establish robust generalisations of these means. Specifically, we find that, as proposed, the density-based means are only robust to isolated outliers. However, by using penalty-based formalisms of averaging functions and applying more sophisticated and robust density estimators, we are able to define a broader family of density-based means that are more effective at filtering both isolated and clustered outliers.
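A density-based mean, in the spirit described above, weights each sample by a density estimate at its location, so isolated outliers receive small weights. This sketch uses a simple Gaussian kernel estimator with an assumed bandwidth; it illustrates the idea only and is not the estimator family of the article.

```python
import math

def density_based_mean(x, bandwidth=1.0):
    # Weight each sample by a Gaussian-kernel density estimate at its location
    n = len(x)
    def density(xi):
        return sum(math.exp(-((xi - xj) / bandwidth) ** 2) for xj in x) / n
    w = [density(xi) for xi in x]
    return sum(wi * xi for wi, xi in zip(w, x)) / sum(w)
```

An isolated outlier contributes little density at its own location and is down-weighted relative to the ordinary mean; clustered outliers reinforce each other's density, which is exactly why the article argues for more robust estimators.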

Relevance:

60.00%

Publisher:

Abstract:

In the case of real-valued inputs, averaging aggregation functions have been studied extensively with results arising in fields including probability and statistics, fuzzy decision-making, and various sciences. Although much of the behavior of aggregation functions when combining standard fuzzy membership values is well established, extensions to interval-valued fuzzy sets, hesitant fuzzy sets, and other new domains pose a number of difficulties. The aggregation of non-convex or discontinuous intervals is usually approached in line with the extension principle, i.e. by aggregating all real-valued input vectors lying within the interval boundaries and taking the union as the final output. Although this is consistent with the aggregation of convex interval inputs, in the non-convex case such operators are not idempotent and may result in outputs which do not faithfully summarize or represent the set of inputs. After giving an overview of the treatment of non-convex intervals and their associated interpretations, we propose a novel extension of the arithmetic mean based on penalty functions that provides a representative output and satisfies idempotency.

Relevance:

60.00%

Publisher:

Abstract:

Averaging is ubiquitous in many sciences, engineering, and everyday practice. The notions of the arithmetic, geometric, and harmonic means developed by the ancient Greeks are in widespread use today. When thinking of an average, most people would use the arithmetic mean, “the average”, or perhaps its weighted version in order to associate the inputs with degrees of importance. While this is certainly the simplest and most intuitive averaging function, its use is often not warranted. For example, when averaging interest rates, it is the geometric and not the arithmetic mean which is the right method. On the other hand, the arithmetic mean can be biased by a few extreme inputs, and hence can convey a false impression. This is the reason why real estate markets report the median and not the average price (which could be biased by one or a few outliers), and why judges’ marks in some Olympic sports are trimmed of the smallest and the largest values.
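The interest-rate example can be made concrete. Averaging three hypothetical yearly returns of +10%, +50%, and −20%, only the geometric mean of the growth factors reproduces the actual compound growth:

```python
import math

returns = [0.10, 0.50, -0.20]       # hypothetical yearly returns
growth = [1 + r for r in returns]   # growth factors: 1.10, 1.50, 0.80

arith = sum(growth) / len(growth)               # arithmetic mean of factors
geo = math.prod(growth) ** (1 / len(growth))    # geometric mean of factors

# geo ** 3 equals the true compound growth 1.10 * 1.50 * 0.80 = 1.32,
# while arith ** 3 overstates it
```

Compounding the arithmetic mean for three years would predict about 46% total growth, while the portfolio actually grew 32%; the geometric mean recovers the correct figure.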

Relevance:

60.00%

Publisher:

Abstract:

We consider an optimization problem in ecology where our objective is to maximize biodiversity with respect to different land-use allocations. As it turns out, the main problem can be framed as learning the weights of a weighted arithmetic mean where the objective is the geometric mean of its outputs. We propose methods for approximating solutions to this and similar problems, which are non-linear by nature, using linear and bilevel techniques.
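A toy version of this setup, with assumed data: each row holds the biodiversity scores of two land-use options at one site, the weighted arithmetic mean allocates between the options, and the objective is the geometric mean of the per-site outputs. A naive grid search over the weight simplex stands in for the linear and bilevel techniques of the paper.

```python
import math

def weighted_mean(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def objective(w, X):
    # Geometric mean of the weighted arithmetic means over all sites
    outputs = [weighted_mean(w, row) for row in X]
    return math.prod(outputs) ** (1 / len(outputs))

# Hypothetical per-site biodiversity scores for two land-use options
X = [[0.2, 0.8], [0.9, 0.1], [0.5, 0.5]]

# Naive grid search over the two-weight simplex (w, 1 - w)
best = max(((k / 100, 1 - k / 100) for k in range(101)),
           key=lambda w: objective(w, X))
```

Taking logarithms turns the geometric-mean objective into a sum of concave functions of the weights, which is what makes the linearization and bilevel reformulations mentioned above tractable.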

Relevance:

30.00%

Publisher:

Abstract:

We present in this paper some properties of k-Lipschitz quasi-arithmetic means. Lipschitz aggregation operations are stable with respect to input inaccuracies, which is a very important property for applications. Moreover, we provide sufficient conditions to determine when a quasi-arithmetic mean satisfies the k-Lipschitz property, and these allow us to calculate the Lipschitz constant k.
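A quasi-arithmetic mean is M_g(x) = g⁻¹((1/n) Σ g(x_i)) for a continuous strictly monotone generator g. A sketch with two assumed example generators, plus a crude numerical probe of the Lipschitz constant (the paper's sufficient conditions are analytic; this probe only gives a lower bound from sampled pairs):

```python
import math

def quasi_arithmetic_mean(x, g, g_inv):
    # M_g(x) = g^{-1}( average of g(x_i) )
    return g_inv(sum(g(xi) for xi in x) / len(x))

# g = identity gives the arithmetic mean; g = log gives the geometric mean
arith = quasi_arithmetic_mean([1.0, 4.0], lambda t: t, lambda t: t)
geom = quasi_arithmetic_mean([1.0, 4.0], math.log, math.exp)

def lipschitz_probe(mean, pairs):
    # Crude lower bound on the Lipschitz constant w.r.t. the 1-norm
    return max(abs(mean(x) - mean(y)) / sum(abs(a - b) for a, b in zip(x, y))
               for x, y in pairs)

k = lipschitz_probe(lambda x: quasi_arithmetic_mean(x, lambda t: t, lambda t: t),
                    [([1.0, 4.0], [1.5, 4.0]), ([2.0, 2.0], [2.0, 3.0])])
```

For the arithmetic mean of n inputs, the Lipschitz constant with respect to the 1-norm is 1/n, which the probe recovers here (k = 0.5 for n = 2).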

Relevance:

20.00%

Publisher:

Abstract:

A nuclear exclusion appears in all general insurance policies. Since its introduction to Australia and New Zealand in the 1960s this exclusion has seen almost no change. So why this article? There are two reasons. First, there has been a misunderstanding on the part of some in the industry about the scope of this exclusion, which results in unnecessary alterations to the policy. Second, a new wording is emerging in some sections of the market which could be far-reaching in its effect. The purpose of this article is to examine several aspects of the exclusion. The first section examines the nature and extent of exposures in relation to radiation and nuclear energy and serves as background to understanding the exclusion wording. Section two provides the reasons for the inclusion of the clause and its historical origins. Section three addresses the intended scope of the current exclusion, and the final section examines the scope of the new wording that is appearing and the possible implications that may result.