31 results for Ordered weighted average
at University of Queensland eSpace - Australia
Abstract:
The integration of geo-information from multiple sources and of diverse nature in developing mineral favourability indexes (MFIs) is a well-known problem in mineral exploration and mineral resource assessment. Fuzzy set theory provides a convenient framework to combine and analyse qualitative and quantitative data independently of their source or characteristics. A novel, data-driven formulation for calculating MFIs based on fuzzy analysis is developed in this paper. Different geo-variables are considered as fuzzy sets and their appropriate membership functions are defined and modelled. A new weighted average-type aggregation operator is then introduced to generate a new fuzzy set representing mineral favourability. The membership grades of the new fuzzy set are considered as the MFI. The weights for the aggregation operation combine the individual membership functions of the geo-variables, and are derived using information from training areas and L1 regression. The technique is demonstrated in a case study of skarn tin deposits and is used to integrate geological, geochemical and magnetic data. The study area covers a total of 22.5 km² and is divided into 349 cells, which include nine control cells. Nine geo-variables are considered in this study. Depending on the nature of the various geo-variables, four different types of membership functions are used to model the fuzzy membership of the geo-variables involved. (C) 2002 Elsevier Science Ltd. All rights reserved.
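As a rough illustration of the weighted average-type aggregation described above (a minimal sketch, not the authors' formulation; the membership grades, weights, and cell count are invented):

```python
import numpy as np

# Hypothetical membership grades for 3 geo-variables over 5 cells (rows = cells).
# In the paper these would come from fuzzified geological, geochemical and magnetic data.
membership = np.array([
    [0.9, 0.7, 0.8],
    [0.2, 0.4, 0.1],
    [0.6, 0.5, 0.9],
    [0.1, 0.2, 0.3],
    [0.8, 0.9, 0.7],
])

# Hypothetical aggregation weights; the paper derives its weights from training
# (control) cells rather than fixing them a priori.
weights = np.array([0.5, 0.2, 0.3])

# Weighted-average aggregation: each cell's mineral favourability index (MFI)
# is the weighted mean of its membership grades.
mfi = membership @ weights / weights.sum()
print(mfi)
```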
Abstract:
The data structure of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. This research develops a methodology for evaluating, ex ante, the relative desirability of alternative data structures for end user queries. This research theorizes that the data structure that yields the lowest weighted average complexity for a representative sample of information requests is the most desirable data structure for end user queries. The theory was tested in an experiment that compared queries from two different relational database schemas. As theorized, end users querying the data structure associated with the less complex queries performed better. Complexity was measured using three different Halstead metrics. Each of the three metrics provided excellent predictions of end user performance. This research supplies strong evidence that organizations can use complexity metrics to evaluate, ex ante, the desirability of alternative data structures. Organizations can use these evaluations to enhance the efficient and effective retrieval of information by creating data structures that minimize end user query complexity.
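For readers unfamiliar with the metrics mentioned above, the sketch below computes the standard Halstead length, difficulty, and effort from operator/operand counts; the counts are invented and this is not the instrument used in the study.

```python
import math

def halstead(n1, n2, N1, N2):
    """Standard Halstead measures from operator/operand counts.
    n1, n2: distinct operators / operands; N1, N2: total operators / operands."""
    length = N1 + N2                        # program length N
    vocabulary = n1 + n2
    volume = length * math.log2(vocabulary)
    difficulty = (n1 / 2) * (N2 / n2)
    effort = difficulty * volume
    return length, difficulty, effort

# Invented token counts for a single SQL query against one schema.
print(halstead(n1=8, n2=12, N1=20, N2=25))
```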
Abstract:
The schema of an information system can significantly impact the ability of end users to efficiently and effectively retrieve the information they need. Obtaining quickly the appropriate data increases the likelihood that an organization will make good decisions and respond adeptly to challenges. This research presents and validates a methodology for evaluating, ex ante, the relative desirability of alternative instantiations of a model of data. In contrast to prior research, each instantiation is based on a different formal theory. This research theorizes that the instantiation that yields the lowest weighted average query complexity for a representative sample of information requests is the most desirable instantiation for end-user queries. The theory was validated by an experiment that compared end-user performance using an instantiation of a data structure based on the relational model of data with performance using the corresponding instantiation of the data structure based on the object-relational model of data. Complexity was measured using three different Halstead metrics: program length, difficulty, and effort. For a representative sample of queries, the average complexity using each instantiation was calculated. As theorized, end users querying the instantiation with the lower average complexity made fewer semantic errors, i.e., were more effective at composing queries. (c) 2005 Elsevier B.V. All rights reserved.
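A minimal sketch of the decision rule the abstract describes: compute a weighted average complexity over a representative sample of information requests for each candidate instantiation and prefer the lower one (all effort scores and request weights below are hypothetical).

```python
# Hypothetical per-query effort scores for the same information requests
# written against two instantiations, plus weights reflecting how often
# each request occurs in the representative sample.
relational_effort        = [120.0, 340.0, 95.0, 410.0]
object_relational_effort = [100.0, 310.0, 90.0, 380.0]
request_weights          = [0.4, 0.3, 0.2, 0.1]

def weighted_avg(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

scores = {
    "relational": weighted_avg(relational_effort, request_weights),
    "object-relational": weighted_avg(object_relational_effort, request_weights),
}
# The theory predicts the instantiation with the lower weighted average
# complexity is the more desirable one for end-user queries.
print(min(scores, key=scores.get), scores)
```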
Abstract:
The aim of the study was to perform a genetic linkage analysis for eye color using comparative data. Similarity in eye color of mono- and dizygotic twins was rated by the twins' mother, their father and/or the twins themselves. For 4748 twin pairs the similarity in eye color was available on a three-point scale (not at all alike, somewhat alike, completely alike); absolute eye color of individuals was not assessed. The probability that twins were alike for eye color was calculated as a weighted average of the different responses of all respondents at several different time points. The mean probability of being alike for eye color was 0.98 for MZ twins (2167 pairs), whereas the mean probability for DZ twins was 0.46 (2537 pairs), suggesting very high heritability for eye color. For 294 DZ twin pairs genome-wide marker data were available. The probability of being alike for eye color was regressed on the average amount of IBD sharing. We found a peak LOD score of 2.9 at chromosome 15q, overlapping with the region recently implicated for absolute ratings of eye color in Australian twins [Zhu, G., Evans, D. M., Duffy, D. L., Montgomery, G. W., Medland, S. E., Gillespie, N. A., Ewen, K. R., Jewell, M., Liew, Y. W., Hayward, N. K., Sturm, R. A., Trent, J. M., and Martin, N. G. (2004). Twin Res. 7:197-210] and containing the OCA2 gene, which is the major candidate gene for eye color [Sturm, R. A., Teasdale, R. D., and Box, N. F. (2001). Gene 277:49-62]. Our results demonstrate that comparative measures on relatives can be used in genetic linkage analysis.
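The linkage step amounts to regressing a pair-level similarity score on the estimated proportion of alleles shared identical by descent (IBD); the sketch below uses invented data and plain least squares purely to show the shape of that calculation, not the LOD-score machinery actually used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented DZ-pair data: probability of being alike for eye color (0..1)
# and estimated proportion of alleles shared IBD at a marker (0, 0.5, 1).
p_alike = rng.uniform(0.0, 1.0, size=200)
ibd = rng.choice([0.0, 0.5, 1.0], size=200)
# Inject a weak positive relationship for illustration only.
p_alike = np.clip(p_alike + 0.2 * ibd, 0.0, 1.0)

# Simple regression of pairwise similarity on mean IBD sharing.
slope, intercept = np.polyfit(ibd, p_alike, 1)
print(f"slope={slope:.3f}, intercept={intercept:.3f}")
# A positive slope (greater sharing -> greater similarity) is the pattern that,
# under a proper linkage model, would translate into an elevated LOD score.
```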
Abstract:
Occupational standards concerning allowable concentrations of chemical compounds in the ambient air of workplaces have been established in several countries worldwide. With the integration of the European Union (EU), there has been a need to establish harmonised Occupational Exposure Limits (OELs). European Commission Directive 95/320/EC of 12 July 1995 gave the Scientific Committee for Occupational Exposure Limits (SCOEL) the task of proposing, based on scientific data and where appropriate, occupational limit values, which may include the 8-h time-weighted average (TWA), short-term limits/excursion limits (STEL) and Biological Limit Values (BLVs). In 2000, the European Union issued a list of 62 chemical substances with Occupational Exposure Limits. Of these, 25 substances received a skin notation, indicating that toxicologically significant amounts may be taken up via the skin. For such substances, monitoring of concentrations in ambient air may not be sufficient, and biological monitoring strategies appear of potential importance in the medical surveillance of exposed workers. Recent progress has been made with respect to the formulation of a strategy related to health-based BLVs. (c) 2005 Elsevier Ireland Ltd. All rights reserved.
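The 8-h time-weighted average (TWA) mentioned above is a duration-weighted mean concentration over the working day; a minimal example with invented exposure intervals:

```python
# Invented exposure record: (concentration in mg/m3, duration in hours).
exposures = [(2.0, 3.0), (5.0, 1.0), (1.0, 4.0)]  # totals 8 h

# 8-h time-weighted average: sum of concentration x duration over the
# reference period of 8 hours.
twa = sum(c * t for c, t in exposures) / 8.0
print(f"8-h TWA = {twa:.2f} mg/m3")  # compare against the OEL for the substance
```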
Abstract:
Calibration of a groundwater model requires that hydraulic properties be estimated throughout a model domain. This generally constitutes an underdetermined inverse problem, for which a solution can only be found when some kind of regularization device is included in the inversion process. Inclusion of regularization in the calibration process can be implicit, for example through the use of zones of constant parameter value, or explicit, for example through solution of a constrained minimization problem in which parameters are made to respect preferred values, or preferred relationships, to the degree necessary for a unique solution to be obtained. The cost of uniqueness is this: no matter which regularization methodology is employed, the inevitable consequence of its use is a loss of detail in the calibrated field. This, in turn, can lead to erroneous predictions made by a model that is ostensibly well calibrated. Information made available as a by-product of the regularized inversion process allows the reasons for this loss of detail to be better understood. In particular, it is easily demonstrated that the estimated value for a hydraulic property at any point within a model domain is, in fact, a weighted average of the true hydraulic property over a much larger area. This averaging process causes loss of resolution in the estimated field. Where hydraulic conductivity is the hydraulic property being estimated, high averaging weights exist in areas that are strategically disposed with respect to measurement wells, while other areas may contribute very little to the estimated hydraulic conductivity at any point within the model domain, possibly making the detection of hydraulic conductivity anomalies in these latter areas almost impossible. A study of the post-calibration parameter field covariance matrix allows further insights to be gained into the loss of system detail incurred through the calibration process. A comparison of pre- and post-calibration parameter covariance matrices shows that the latter often possess a much smaller spectral bandwidth than the former. It is also demonstrated that, as an inevitable consequence of the fact that a calibrated model cannot replicate every detail of the true system, model-to-measurement residuals can show a high degree of spatial correlation, a fact which must be taken into account when assessing these residuals either qualitatively, or quantitatively in the exploration of model predictive uncertainty. These principles are demonstrated using a synthetic case in which spatial parameter definition is based on pilot points, and calibration is implemented using both zones of piecewise constancy and constrained minimization regularization. (C) 2005 Elsevier Ltd. All rights reserved.
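The claim that each estimated value is a weighted average of the true field can be made concrete via the resolution matrix of a regularized linear inverse problem. The sketch below uses a generic Tikhonov-regularized least-squares estimator with a random sensitivity matrix; it is an illustration of the principle, not the pilot-point setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

n_obs, n_par = 15, 40                  # fewer observations than parameters: underdetermined
J = rng.normal(size=(n_obs, n_par))    # sensitivity (Jacobian) of observations to parameters
lam = 1.0                              # Tikhonov regularization weight (assumed value)

# Regularized least-squares estimate: p_hat = G d, with
# G = (J^T J + lam^2 I)^-1 J^T. The resolution matrix R = G J maps the
# true parameters onto the estimates: p_hat = R p_true in the noise-free case.
G = np.linalg.solve(J.T @ J + lam**2 * np.eye(n_par), J.T)
R = G @ J

# Each row of R holds the averaging weights: the estimate of parameter i is a
# weighted average of the true values over many parameters, and the weights
# sum to well below 1 where the data constrain that parameter poorly
# (loss of detail / resolution).
print(R[0].round(2))
print("row sums:", R.sum(axis=1).round(2)[:5])
```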
Abstract:
Several different approaches to texture segmentation exist. In this paper, we propose fuzzy features for the segmentation of texture images. For this purpose, a membership function is constructed to represent the effect of the neighboring pixels on the current pixel in a window. Using these membership function values, we compute a feature for the current pixel by a weighted average method. This is repeated for all pixels in the window, treating each pixel in turn as the current pixel. Using these fuzzy-based features, we derive three descriptors, namely maximum, entropy, and energy, for each window. To segment the texture image, the modified mountain clustering, which is unsupervised, and fuzzy c-means clustering are used. The performance of the proposed features is compared with that of fractal features.
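A simplified sketch of the windowed, weighted-average feature extraction outlined above; the Gaussian-type membership function, window size, and normalization are generic choices assumed for illustration, not necessarily those of the authors.

```python
import numpy as np

def window_descriptors(window, sigma=25.0):
    """For each pixel in the window, weight its neighbours by a Gaussian-type
    membership of grey-level similarity and take the weighted average as the
    pixel's fuzzy feature; then summarize the window by max, entropy, energy."""
    feats = []
    flat = window.astype(float).ravel()
    for centre in flat:
        mu = np.exp(-((flat - centre) ** 2) / (2 * sigma ** 2))  # membership of neighbours
        feats.append(np.sum(mu * flat) / np.sum(mu))             # weighted-average feature
    feats = np.asarray(feats)
    p = feats / feats.sum()                                      # normalize for entropy/energy
    entropy = -np.sum(p * np.log2(p + 1e-12))
    energy = np.sum(p ** 2)
    return feats.max(), entropy, energy

window = np.random.default_rng(2).integers(0, 256, size=(9, 9))
print(window_descriptors(window))
```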
Abstract:
Height, weight, and tissue accrual were determined in 60 male and 53 female adolescents measured annually over six years using standard anthropometry and dual-energy X-ray absorptiometry (DXA). Annual velocities were derived, and the ages and magnitudes of peak height and peak tissue velocities were determined using a cubic spline fit to individual data. Individuals were rank ordered on the basis of sex and age at peak height velocity (PHV) and then divided into quartiles: early (lowest quartile), average (middle two quartiles), and late (highest quartile) maturers. Sex- and maturity-related comparisons in ages and magnitudes of peak height and peak tissue velocities were made. Males reached peak velocities significantly later than females for all tissues and had significantly greater magnitudes at peak. The age at PHV was negatively correlated with the magnitude of PHV in both sexes. At a similar maturity point (age at PHV) there were no differences in weight or fat mass among maturity groups in both sexes. Late maturing males, however, accrued more bone mineral and lean mass and were taller at the age of PHV compared to early maturers. Thus, maturational status (early, average, or late maturity), as indicated by age at PHV, is inversely related to the magnitude of PHV, while maturity groups did not differ for weight and fat mass in boys and girls. Am. J. Hum. Biol. 13:1-8, 2001. (C) 2001 Wiley-Liss, Inc.
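A minimal sketch of extracting the age at peak height velocity (PHV) from annual velocities with a cubic spline, using invented measurements (the study applied its own spline-fitting procedure to individual data):

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Invented annual height velocities (cm/year) at mid-year ages for one child.
ages = np.array([10.5, 11.5, 12.5, 13.5, 14.5, 15.5])
velocity = np.array([5.1, 5.8, 8.9, 7.2, 4.0, 2.1])

spline = CubicSpline(ages, velocity)

# Evaluate the spline on a fine grid and take the maximum as PHV.
grid = np.linspace(ages[0], ages[-1], 1000)
v = spline(grid)
age_at_phv = grid[np.argmax(v)]
phv = v.max()
print(f"age at PHV = {age_at_phv:.2f} y, PHV = {phv:.2f} cm/y")
```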
Abstract:
Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions to many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished “features” for a “cluster” based on weighted proximity relationships between the cluster and features. We measure proximity in an average fashion to address possible nonuniform data distribution in a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results not only give a comparison among these modes but also illustrate the efficiency of the algorithm.
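A stripped-down sketch of ranking features for a cluster by weighted average proximity; the multi-step filtering with lower and upper proximity bounds that makes the real algorithm efficient is omitted, and all data are invented.

```python
import numpy as np

rng = np.random.default_rng(3)

cluster_points = rng.normal(loc=0.0, scale=1.0, size=(500, 2))            # points of one cluster
features = {f"feature_{i}": rng.uniform(-5, 5, size=2) for i in range(20)}  # point features nearby
weights = {name: rng.uniform(0.5, 2.0) for name in features}                # per-feature importance

def weighted_avg_proximity(feature_xy, weight):
    # Average distance from the feature to the cluster's points, scaled by the
    # feature's weight; smaller = more "distinguished" for this cluster.
    d = np.linalg.norm(cluster_points - feature_xy, axis=1).mean()
    return weight * d

k = 5
ranked = sorted(features, key=lambda f: weighted_avg_proximity(features[f], weights[f]))
print(ranked[:k])
```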
Abstract:
X-ray diffraction is reported from mesoporous silicate films grown at the air/water interface. The films were studied both as powdered films and oriented on silicon or mica sheets. At early stages of growth we observe Bragg diffraction from a highly ordered cubic phase, with both long and short d-spacing peaks. We have assigned this as a discontinuous micellar Pm3n phase in which the silica is partly ordered. Later films retain only the known hexagonal p6m peaks and have lost both the order at short d-spacings and the longer d-spacing Bragg peaks characteristic of the cubic structure. The silica framework is considerably expanded from that in bulk amorphous silica; average Si-Si distances are some 30% greater. Incorporation of glycerol or polyethylene glycol preserves the earlier cubic structure. To be consistent with earlier, in situ, X-ray and neutron reflectivity data we infer that both structures are produced after a phase transition from a less-ordered film structure late in the induction phase. The structural relations between the film Pm3n and p6m phase(s) and the known bulk SBA-1 and MCM-41 phases are briefly discussed.
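For orientation only: d-spacings follow from Bragg's law, and for a cubic phase such as Pm3n the (hkl) reflection has d = a / sqrt(h² + k² + l²). The sketch below assumes a Cu K-alpha laboratory wavelength and an arbitrary lattice parameter, neither taken from the films studied.

```python
import math

wavelength = 1.5406e-10   # Cu K-alpha in metres (an assumed laboratory source)

def d_from_two_theta(two_theta_deg):
    """Bragg's law: lambda = 2 d sin(theta)."""
    theta = math.radians(two_theta_deg / 2.0)
    return wavelength / (2.0 * math.sin(theta))

def d_cubic(a, h, k, l):
    """d-spacing of the (hkl) reflection of a cubic lattice with parameter a."""
    return a / math.sqrt(h**2 + k**2 + l**2)

a = 8.5e-9  # invented cubic lattice parameter (8.5 nm)
for hkl in [(1, 1, 0), (2, 0, 0), (2, 1, 0), (2, 1, 1)]:   # a few low-index reflections
    print(hkl, f"{d_cubic(a, *hkl) * 1e9:.2f} nm")
print(f"d at 2theta = 1.2 deg: {d_from_two_theta(1.2) * 1e9:.2f} nm")
```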
Abstract:
Diffusion- and perfusion-weighted magnetic resonance imaging provides important pathophysiological information in acute brain ischemia. We performed a prospective study in 19 sub-6-hour stroke patients using serial diffusion- and perfusion-weighted imaging before intravenous thrombolysis, with repeat studies both subacutely and at outcome. For comparison of ischemic lesion evolution and clinical outcome, we used a historical control group of 21 sub-6-hour ischemic stroke patients studied serially with diffusion- and perfusion-weighted imaging. The two groups were well matched for the baseline National Institutes of Health Stroke Scale and magnetic resonance parameters. Perfusion-weighted imaging-diffusion-weighted imaging mismatch was present in 16 of 19 patients treated with tissue plasminogen activator, and 16 of 21 controls. Mismatch patients treated with tissue plasminogen activator had higher recanalization rates and enhanced reperfusion at day 3 (81% vs 47% in controls), and a greater proportion of severely hypoperfused acute mismatch tissue not progressing to infarction (82% vs -25% in controls). Despite similar baseline diffusion-weighted imaging lesions, infarct expansion was less in the recombinant tissue plasminogen activator group (14 cm³ vs 56 cm³ in controls). The positive effect of thrombolysis on lesion growth in mismatch patients translated into a greater improvement from baseline to outcome in the National Institutes of Health Stroke Scale in the group treated with recombinant tissue plasminogen activator, and a significantly larger proportion of patients treated with recombinant tissue plasminogen activator had a clinically meaningful improvement in the National Institutes of Health Stroke Scale of ≥7 points. The natural evolution of acute perfusion-weighted imaging-diffusion-weighted imaging mismatch tissue may be altered by thrombolysis, with improved stroke outcome. This has implications for the use of diffusion- and perfusion-weighted imaging in selecting and monitoring patients for thrombolytic therapy.
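For orientation, the perfusion-diffusion mismatch referred to above is commonly expressed as hypoperfused tissue beyond the diffusion lesion; a toy volume-based calculation is below, with invented volumes and a ratio threshold that is a common convention rather than a criterion stated in the abstract.

```python
# Invented acute lesion volumes in cm^3 for one patient.
pwi_volume = 120.0   # perfusion-weighted (hypoperfused) lesion
dwi_volume = 30.0    # diffusion-weighted (core) lesion

mismatch_volume = pwi_volume - dwi_volume
mismatch_ratio = pwi_volume / dwi_volume

# A commonly quoted operational criterion is a PWI/DWI ratio > 1.2
# (an assumption here, not taken from the abstract above).
has_mismatch = mismatch_ratio > 1.2
print(mismatch_volume, round(mismatch_ratio, 2), has_mismatch)
```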
Abstract:
The assumption in analytical solutions for flow from surface and buried point sources of an average water content, θ̄, behind the wetting front is examined. Some recent work has shown that this assumption fitted some field data well. Here we calculated θ̄ using a steady-state solution based on the work by Raats [1971] and an exponential dependence of the diffusivity upon the water content. This is compared with a constant value of θ̄ calculated by assuming a hydraulic conductivity at the wetting front of 1 mm day⁻¹ and the water content at saturation. This comparison was made for a wide range of soils. The constant θ̄ generally underestimated θ̄ at small wetted radii and overestimated θ̄ at large radii. The crossover point between under- and overestimation changed with both soil properties and flow rate. The largest variance occurred for coarser-textured soils at low flow rates. At high flow rates in finer-textured soils the use of a constant θ̄ results in underestimation of the time for the wetting front to reach a particular radius. The value of θ̄ is related to the time at which the wetting front reaches a given radius. In coarse-textured soils the use of a constant value of θ̄ can result in an error in the time when the wetting front reaches a particular radius as large as 80% at low flow rates and large radii.
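The role of θ̄ in wetting-front timing can be seen from a simple spherical mass balance for a buried point source: Q·t = (θ̄ − θ_i)·(4/3)πr³, so the predicted arrival time scales directly with the assumed change in water content. The sketch below uses this simplified constant-θ̄ balance with invented numbers; it is not the Raats-based steady-state solution of the paper.

```python
import math

def arrival_time(radius_m, flow_rate_m3_per_day, theta_bar, theta_initial=0.05):
    """Days for the wetting front of a buried point source to reach a given
    radius, from a spherical mass balance:
        Q * t = (theta_bar - theta_initial) * (4/3) * pi * r^3.
    A simplified balance intended only to show how sensitive the timing is
    to the assumed average water content behind the front."""
    stored = (theta_bar - theta_initial) * (4.0 / 3.0) * math.pi * radius_m**3
    return stored / flow_rate_m3_per_day

Q = 0.002  # m3/day, an invented low (trickle-type) flow rate
for theta_bar in (0.20, 0.30, 0.40):   # alternative assumed average water contents
    print(theta_bar, round(arrival_time(0.3, Q, theta_bar), 1), "days to r = 0.3 m")
```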