952 results for Weighted Overlay Analysis
Abstract:
After ingestion of a standardized dose of ethanol, alcohol concentrations were assessed over 3.5 hours from blood (six readings) and breath (10 readings) in a sample of 412 MZ and DZ twins who took part in an Alcohol Challenge Twin Study (ACTS). Nearly all participants were subsequently genotyped at two polymorphic SNPs in the ADH1B and ADH1C loci known to affect in vitro ADH activity. In the DZ pairs, 14 microsatellite markers covering a 20.5 cM region on chromosome 4 that includes the ADH gene family were assessed. Variation in the timed series of autocorrelated blood and breath alcohol readings was studied using a bivariate simplex design. The contribution of a quantitative trait locus (QTL), or QTLs, linked to the ADH region was estimated via a mixture of likelihoods weighted by identity-by-descent probabilities. The effects of allelic substitution at the ADH1B and ADH1C loci were estimated in the means part of the model simultaneously with the effects of sex and age. There was a major contribution to variance in alcohol metabolism from a QTL, which accounted for about 64% of the additive genetic covariation common to both blood and breath alcohol readings at the first time point. No effects of the ADH1B*47His or ADH1C*349Ile alleles on in vivo metabolism were observed, although these have been shown to have major effects in vitro. This implies that there is a major determinant of variation in in vivo alcohol metabolism in the ADH region that is not accounted for by these polymorphisms. Earlier analyses of these data suggested that alcohol metabolism is related to drinking behavior, implying that this QTL may be protective against alcohol dependence.
Abstract:
Spatial data mining has recently emerged from a number of real applications, such as real-estate marketing, urban planning, weather forecasting, medical image analysis, and road traffic accident analysis. It demands efficient solutions to many new, expensive, and complicated problems. In this paper, we investigate the problem of evaluating the top k distinguished “features” for a “cluster” based on weighted proximity relationships between the cluster and the features. We measure proximity in an average fashion to address possibly nonuniform data distribution within a cluster. Combining a standard multi-step paradigm with new lower and upper proximity bounds, we present an efficient algorithm to solve the problem. The algorithm is implemented in several different modes. Our experimental results not only compare these modes but also illustrate the efficiency of the algorithm.
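The core measure described above can be sketched in a few lines. This is a hedged illustration of average weighted proximity only, not the paper's multi-step algorithm or its proximity bounds; all feature names, weights, and coordinates are invented:

```python
# Toy sketch: rank "features" by average weighted proximity to a
# "cluster" of points, then keep the top k.  All data are invented.
import math
import heapq

cluster = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
features = {
    "school": ((0.5, 0.5), 2.0),   # (location, importance weight)
    "lake":   ((5.0, 5.0), 1.0),
    "mall":   ((1.0, 1.0), 1.5),
}

def avg_weighted_proximity(feat):
    """Feature weight divided by the mean distance from cluster points."""
    (fx, fy), w = feat
    mean_d = sum(math.hypot(px - fx, py - fy) for px, py in cluster) / len(cluster)
    return w / mean_d

def top_k(k):
    """The k features with the highest average weighted proximity."""
    return heapq.nlargest(k, features, key=lambda n: avg_weighted_proximity(features[n]))

print(top_k(2))
```

Averaging over all cluster points, rather than using a single representative point, is what makes the measure robust to nonuniform point distributions within the cluster.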
Abstract:
Data envelopment analysis defines the relative efficiency of a decision making unit (DMU) as the ratio of the sum of its weighted outputs to the sum of its weighted inputs, allowing the DMUs to freely allocate weights to their inputs/outputs. However, this measure may not reflect a DMU's true efficiency, as some inputs/outputs may not contribute reasonably to the efficiency measure. Traditionally, weight restrictions have been imposed to overcome this problem. This paper offers a new approach for the case where DMUs operate a constant returns to scale technology in a single-input multi-output context. The approach is based on introducing unobserved DMUs, created by adjusting the output levels of certain observed relatively efficient DMUs, reflecting a combination of technical information on feasible production levels and the DM's value judgments. Its main advantage is that the information conveyed by the DM is local, with reference to a specific observed DMU. The approach is illustrated on a real life application. © 2003 Elsevier B.V. All rights reserved.
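The ratio definition above lends itself to a toy sketch. This is a hedged illustration only: real DEA solves a linear program per DMU to find each unit's most favourable weights, whereas here a coarse grid of candidate weights is searched, and all DMU data are invented:

```python
# Toy DEA sketch: each DMU picks the output weights that maximise its
# own weighted-output / weighted-input ratio, scaled so that no DMU in
# the reference set scores above 1.  Grid search stands in for the LP.
from itertools import product

# One input, two outputs per DMU (hypothetical data).
dmus = {
    "A": {"input": 2.0, "outputs": (4.0, 2.0)},
    "B": {"input": 3.0, "outputs": (3.0, 6.0)},
    "C": {"input": 2.5, "outputs": (2.0, 2.0)},
}

# Candidate output-weight vectors on a coarse grid (zero vector excluded).
grid = [w for w in product([i / 10 for i in range(11)], repeat=2) if any(w)]

def ratio(d, w):
    """Weighted outputs over the (single) input for one DMU."""
    return sum(u * y for u, y in zip(w, d["outputs"])) / d["input"]

def efficiency(name):
    """Best achievable score for `name` under the normalising constraint."""
    best = 0.0
    for w in grid:
        peer_max = max(ratio(d, w) for d in dmus.values())  # normalising peer
        best = max(best, ratio(dmus[name], w) / peer_max)
    return round(best, 3)

for name in dmus:
    print(name, efficiency(name))
```

DMUs A and B each find weights under which they define the frontier, while C is dominated under every weight choice, which is exactly the "free weight allocation" behaviour the weight-restriction literature tries to temper.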
Abstract:
The purpose of this study is to develop econometric models to better understand the economic factors affecting inbound tourist flows from each of six origin countries that contribute to Hong Kong’s international tourism demand. To this end, we test alternative cointegration and error correction approaches to examine the economic determinants of tourist flows to Hong Kong and to produce accurate econometric forecasts of inbound tourism demand. Our empirical findings show that permanent income is the most significant determinant of tourism demand in all models. The variables of own price, weighted substitute prices, trade volume, the share price index (as an indicator of changes in wealth in origin countries), and a dummy variable representing the Beijing incident (1989) are also found to be important determinants for some origin countries. The average long-run income and own price elasticities were measured at 2.66 and -1.02, respectively. It was hypothesised that permanent income is a better explanatory variable of long-haul tourism demand than current income. A novel approach (a grid search process) was used to empirically derive the weights to be attached to the lagged income variable for estimating permanent income. The results indicate that permanent income, estimated with empirically determined, relatively small weighting factors, was capable of producing better results than the current income variable in explaining long-haul tourism demand. This finding suggests that the use of current income in previous empirical tourism demand studies may have produced inaccurate results. The share price index, as a measure of wealth, was also found to be significant in two models. Studies of tourism demand rarely include wealth as an explanatory variable when forecasting long-haul tourism demand. However, finding a satisfactory proxy for wealth common to different countries is problematic.
This study indicates that ECMs (error correction models) based on the Engle-Granger (1987) approach produce more accurate forecasts than ECMs based on the Pesaran and Shin (1998) and Johansen (1988, 1991, 1995) approaches for all of the long-haul markets and Japan. Overall, the ECMs produce better forecasts than the OLS, ARIMA and naïve models, indicating the superiority of the cointegration approach for tourism demand forecasting. The results show that permanent income is the most important explanatory variable for tourism demand from all countries, but there are substantial variations between countries, with the long-run elasticity ranging between 1.1 for the U.S. and 5.3 for the U.K. Price is the next most important variable, with long-run elasticities ranging between -0.8 for Japan and -1.3 for Germany, and short-run elasticities ranging between -0.14 for Germany and -0.7 for Taiwan. The fastest growing market is Mainland China. The findings have implications for policies and strategies on investment, marketing promotion and pricing.
Abstract:
This paper introduces a new mathematical method for improving the discrimination power of data envelopment analysis and for completely ranking the efficient decision-making units (DMUs). The fuzzy concept is utilised. For this purpose, all DMUs are first evaluated with the CCR model. Thereafter, the resulting weights for each output are considered as fuzzy sets and are then converted to fuzzy numbers. The introduced model is a multi-objective linear model whose endpoints are the highest and lowest of the weighted values. An added advantage of the model is its ability to handle the infeasibility situations sometimes faced by previously introduced models.
Abstract:
Purpose: Dynamic contact angle (DCA) methods have advantages over other contact angle methodologies, not least that they can provide more than single contact angle values. Here we illustrate the use of DCA analysis to provide “fingerprint” characterisation of contact lens surfaces, and of the way that different materials change in the early stages of wear. Method: The DCA method involves attaching weighted strips cut from a lens to a microbalance. The strips are then cyclically inserted into and removed from an aqueous solution. Conventionally, readings of force taken from linear portions of the resultant dipping curves are translated into advancing (CAa) and receding (CAr) contact angles. Additionally, analysis of the force versus immersion profile provides a “fingerprint” characterisation of the state of the lens surface. Results: CAa and CAr values from DCA traces provide a useful means of differentiating gross differences in hydrophilicity and molecular mobility of surfaces under particular immersion and emersion conditions, such as dipping rate and dwell times. Typical values for etafilcon A (CAa: 63.1; CAr: 37) and balafilcon B (CAa: 118.4; CAr: 36.4) illustrate this. Surface modifications induced in lens manufacture are observed to produce not only changes in these values, which may be small, but also changes in the DCA “fingerprint” (slope, undulations, length of plateau). Interestingly, similar changes are induced in the first few hours of lens wear with some lens-patient combinations. Conclusions: Although single-parameter contact angles are useful for material characterisation, information of potential clinical interest can be obtained from more detailed analysis of DCA traces.
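The translation from force readings to contact angles in dipping experiments of this kind conventionally uses the Wilhelmy balance relation F = γ · P · cos θ. The abstract does not give its exact procedure (real analyses also apply buoyancy corrections), so the following is a generic sketch with invented numbers:

```python
# Generic Wilhelmy-balance sketch (buoyancy correction omitted):
# cos(theta) = F / (P * gamma), where F is the measured wetting force,
# P the wetted perimeter of the strip, and gamma the liquid's surface
# tension.  The input values below are invented for illustration.
import math

def contact_angle_deg(force_n, perimeter_m, gamma_n_per_m):
    """Contact angle (degrees) from a force reading on the microbalance."""
    return math.degrees(math.acos(force_n / (perimeter_m * gamma_n_per_m)))

# Hypothetical 10 mm wetted perimeter in water (gamma ~ 0.0728 N/m).
print(round(contact_angle_deg(3.295e-4, 0.01, 0.0728), 1))
```

Separate force readings from the insertion and withdrawal portions of the dipping curve yield the advancing (CAa) and receding (CAr) angles respectively.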
Abstract:
Objective: To determine the efficacy of cholinesterase inhibitors (ChEIs) in improving the behavioral and psychological symptoms of dementia (BPSD) in patients with Alzheimer’s disease (AD). Data sources: We searched MEDLINE, the Cochrane Registry, and the Cumulative Index to Nursing and Allied Health Literature (CINAHL) from 1966 to 2007. We limited our search to English-language, full-text, published articles and human studies. Data extraction: We included randomized, double-blind, placebo-controlled trials evaluating the efficacy of donepezil, rivastigmine, or galantamine in managing BPSD displayed by AD patients. Using the United States Preventive Services Task Force (USPSTF) guidelines, we critically appraised all studies and included only those with an attrition rate of less than 40%, concealed measurement of the outcomes, and intention-to-treat analysis of the collected data. All data were entered into pre-defined evidence-based tables and were pooled using the Review Manager 4.2.1 software for data synthesis. Results: We found 12 studies that met our inclusion criteria, but only nine of them provided sufficient data for the meta-analysis. Among patients with mild to severe AD and in comparison to placebo, ChEIs as a class had a beneficial effect on reducing BPSD, with a standardized mean difference (SMD) of -0.10 (95% confidence interval [CI]: -0.18, -0.01) and a weighted mean difference (WMD) of -1.38 Neuropsychiatric Inventory points (95% CI: -2.30, -0.46). In studies with mild AD patients, the WMD was -1.92 (95% CI: -3.18, -0.66); and in studies with severe AD patients, the WMD was -0.06 (95% CI: -2.12, +0.57). Conclusion: Cholinesterase inhibitors lead to a statistically significant reduction in BPSD among patients with AD, yet the clinical relevance of this effect remains unclear.
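Pooled WMD figures like those quoted above are conventionally produced by fixed-effect inverse-variance pooling; a minimal sketch follows, with invented trial numbers (not the review's data):

```python
# Fixed-effect inverse-variance pooling of mean differences: each
# trial is weighted by 1/SE^2, so precise trials count for more.
# The (mean difference, standard error) pairs below are invented.
import math

trials = [(-1.8, 0.6), (-1.1, 0.5), (-0.9, 0.8)]

def pooled_wmd(studies):
    """Fixed-effect pooled weighted mean difference and its 95% CI."""
    weights = [1.0 / se ** 2 for _, se in studies]          # w_i = 1/SE_i^2
    est = sum(w * d for w, (d, _) in zip(weights, studies)) / sum(weights)
    se = math.sqrt(1.0 / sum(weights))                      # SE of the pooled estimate
    return est, (est - 1.96 * se, est + 1.96 * se)

est, (lo, hi) = pooled_wmd(trials)
print(round(est, 2), round(lo, 2), round(hi, 2))
```

A CI that excludes zero, as in the review's overall result, is what makes the reduction statistically significant even when the effect size itself is modest.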
Abstract:
Magnetoencephalography (MEG), a non-invasive technique for characterizing brain electrical activity, is gaining popularity as a tool for assessing group-level differences between experimental conditions. One method for assessing task-condition effects involves beamforming, where a weighted sum of field measurements is used to estimate activity on a voxel-by-voxel basis. However, this method has been shown to produce inhomogeneous smoothness differences as a function of signal-to-noise across a volumetric image, which can then produce false positives at the group level. Here we describe a novel method for group-level analysis with MEG beamformer images that utilizes the peak locations within each participant's volumetric image to assess group-level effects. We compared our peak-clustering algorithm with SnPM using simulated data. We found that our method was immune to artefactual group effects that can arise as a result of inhomogeneous smoothness differences across a volumetric image. We also used our peak-clustering algorithm on experimental data and found that regions were identified that corresponded with task-related regions identified in the literature. These findings suggest that our technique is a robust method for group-level analysis with MEG beamformer images.
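The "weighted sum of field measurements" can be illustrated in miniature. This sketch shows only the linear combination step; real beamformers derive the weights from the data covariance and a forward model, and all numbers here are invented:

```python
# Beamformer reconstruction in miniature: one voxel's time course is a
# fixed weighted sum of the sensor measurements.  Weights and sensor
# data are invented; a real pipeline computes w per voxel from the
# sensor covariance matrix and the lead field.
sensors = [
    [1.0, 0.5, -0.2],   # sensor 1 over 3 time points
    [0.4, 0.9, 0.1],    # sensor 2
    [-0.3, 0.2, 0.8],   # sensor 3
]
w = [0.6, 0.3, 0.1]     # hypothetical spatial-filter weights for one voxel

# voxel_ts[t] = sum_i w[i] * sensors[i][t]
voxel_ts = [sum(wi * s[t] for wi, s in zip(w, sensors)) for t in range(3)]
print(voxel_ts)
```

Because the weights are estimated per voxel from noisy data, the resulting images have the spatially varying smoothness that motivates the peak-based group statistics described in the abstract.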
Abstract:
Lack of discrimination power and poor weight dispersion remain major issues in Data Envelopment Analysis (DEA). Since the initial multiple criteria DEA (MCDEA) model was developed in the late 1990s, only goal programming approaches, namely GPDEA-CCR and GPDEA-BCC, have been introduced for solving these problems in a multi-objective framework. We found the GPDEA models to be invalid and demonstrate that our proposed bi-objective multiple criteria DEA (BiO-MCDEA) outperforms the GPDEA models in terms of discrimination power and weight dispersion, while requiring less computational code. An application to energy dependency among 25 European Union member countries is further used to describe the efficacy of our approach. © 2013 Elsevier B.V. All rights reserved.
Abstract:
This paper seeks to advance the theory and practice of the dynamics of complex networks in relation to direct and indirect citations. It applies social network analysis (SNA) and the ordered weighted averaging (OWA) operator to study a patent citation network. So far, SNA studies investigating long chains of patent citations have rarely been undertaken, and the importance of a node in a network has been associated mostly with its number of direct ties. In this research, OWA is used to analyse complex networks, assess the role of indirect ties, and provide guidance for reducing complexity for decision makers and analysts. An empirical example of a set of European patents published in 2000 in the renewable energy industry is provided to show the usefulness of the proposed approach for the preference ranking of patent citations.
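For readers unfamiliar with the operator, a minimal OWA sketch follows (the input values and weight vector are arbitrary examples, not taken from the paper):

```python
# Ordered weighted averaging (OWA): the arguments are sorted in
# descending order before weighting, so the weights attach to rank
# positions rather than to particular arguments.
def owa(values, weights):
    """OWA aggregation; weights must sum to 1."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    ordered = sorted(values, reverse=True)          # descending order step
    return sum(w, ) if False else sum(w * v for w, v in zip(weights, ordered))

# Position weights emphasising the largest inputs (hypothetical choice):
# 0.5*9 + 0.3*6 + 0.2*3 = 6.9 up to float rounding.
print(round(owa([3, 9, 6], [0.5, 0.3, 0.2]), 2))
```

Choosing the weight vector moves the operator continuously between `max` (all weight on the first position), the plain mean (equal weights), and `min` (all weight on the last position), which is what makes OWA useful for tuning how much indirect ties count.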
Abstract:
We consider an uncertain version of the scheduling problem of sequencing a set of jobs J on a single machine to minimize the weighted total flow time, provided that the processing time of a job can take any real value from a given closed interval. It is assumed that a job's processing time is an unknown random variable before its actual realisation, and that the probability distribution of such a variable between the given lower and upper bounds is unknown before scheduling. We develop dominance relations on the set of jobs J. The necessary and sufficient conditions for a job domination may be tested in time polynomial in the number n = |J| of jobs. If there is no domination within some subset of J, a heuristic procedure to minimize the weighted total flow time is used for sequencing the jobs from that subset. Computational experiments on randomly generated single-machine scheduling problems with n ≤ 700 show that the developed dominance relations are quite helpful in minimizing the weighted total flow time of n jobs with uncertain processing times.
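A brief sketch of the objective and of an interval dominance test. The dominance condition below is an illustrative sufficient condition in the spirit of the abstract, not necessarily the paper's exact relation; the WSPT rule applies only to the deterministic case:

```python
# Weighted total flow time of a fixed sequence, the classical WSPT rule
# for deterministic times, and a simple interval dominance test: if job
# i's worst-case p/w ratio is no larger than job j's best case, i can
# safely precede j whatever the realised processing times.
def weighted_flow_time(seq):
    """Sum of w_j * C_j for jobs given as (p, w) in processing order."""
    t, total = 0.0, 0.0
    for p, w in seq:
        t += p              # completion time C_j of this job
        total += w * t
    return total

def wspt(jobs):
    """Deterministic optimum: sort by p/w, smallest ratio first."""
    return sorted(jobs, key=lambda pw: pw[0] / pw[1])

def dominates(i, j):
    """Jobs as (p_lo, p_hi, w): True if i precedes j in every realisation."""
    return i[1] / i[2] <= j[0] / j[2]

jobs = [(4.0, 1.0), (1.0, 2.0), (3.0, 3.0)]   # invented (p, w) pairs
print(weighted_flow_time(wspt(jobs)))
```

When the intervals of two jobs' p/w ratios overlap, no such pairwise ordering is certain, which is exactly the situation in which the abstract's heuristic procedure takes over.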
Abstract:
Mathematics Subject Classification: 26A16, 26A33, 46E15.
Abstract:
Mathematics Subject Classification: 26D10.
Abstract:
This study surveys the ordered weighted averaging (OWA) operator literature using citation network analysis. The main goals are the historical reconstruction of the scientific development of the OWA field, the identification of the dominant direction of knowledge accumulation that has emerged since the publication of the first OWA paper, and the discovery of the most active lines of research. The results suggest, as expected, that Yager's paper (IEEE Trans. Systems Man Cybernet., 18(1), 183-190, 1988) is the most influential paper and the starting point of all other research using OWA. Starting from his contribution, other lines of research have developed, and we describe them.
Abstract:
MSC 2010: 26A33