63 results for Similarity measure


Relevance: 20.00%

Abstract:

Given that there is no universal approach for measuring brand performance, we show how a consumer-based brand measure was developed for corporate financial services brands. Churchill's paradigm was adopted. A literature review and 20 in-depth interviews with experts suggested that brand loyalty, consumer satisfaction and reputation constitute the brand performance measure. Ten financial services organisations provided access to their consumers. Following a postal survey, 600 questionnaires were analysed through principal components analysis to identify the consumer-based measure. Further testing revealed this to be a valid and reliable brand performance measure.
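The item-reduction step described above hinges on principal components analysis. As a rough illustration of that step (not the authors' actual analysis), the sketch below runs PCA on a hypothetical 600-respondent questionnaire matrix and lists the items loading most strongly on the first three components; the data, item count and component labels are all assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical response matrix: 600 respondents x 20 Likert-scale items.
rng = np.random.default_rng(0)
responses = rng.integers(1, 8, size=(600, 20)).astype(float)

# Standardise items so each contributes equally to the components.
scaled = StandardScaler().fit_transform(responses)

# Retain three components (loyalty, satisfaction and reputation in the
# paper; here simply the three largest-variance axes of the toy data).
pca = PCA(n_components=3)
pca.fit(scaled)

# Items with the highest absolute loadings on each component are the
# candidates to keep in the final brand-performance measure.
loadings = pca.components_
for k in range(3):
    top_items = np.argsort(np.abs(loadings[k]))[::-1][:5]
    print(f"Component {k + 1}: items {top_items}, "
          f"explained variance {pca.explained_variance_ratio_[k]:.2f}")
```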

Relevance: 20.00%

Abstract:

Hocaoglu MB, Gaffan EA, Ho AK. The Huntington's disease health-related quality of life questionnaire: a disease-specific measure of health-related quality of life.

Huntington's disease (HD) is a genetic neurodegenerative disorder characterized by motor, cognitive and psychiatric disturbances, yet there is no disease-specific patient-reported health-related quality of life outcome measure for patients. Our aim was to develop and validate such an instrument, the Huntington's Disease health-related Quality of Life questionnaire (HDQoL), to capture the true impact of living with this disease. Semi-structured interviews were conducted with the full spectrum of people living with HD to form a pool of items, which were then examined in a larger sample prior to data-driven item reduction. We provide the statistical basis for the extraction of three different sets of scales from the HDQoL, and present validation and psychometric data on these scales using a sample of 152 participants living with HD. These new patient-derived scales provide promising patient-reported outcome measures for HD.

Relevance: 20.00%

Abstract:

Salmonella are closely related to commensal Escherichia coli but have gained virulence factors enabling them to behave as enteric pathogens. Less well studied are the similarities and differences between the metabolic properties of these organisms that may contribute toward niche adaptation of Salmonella pathogens. To address this, we have constructed a genome-scale Salmonella metabolic model (iMA945). The model comprises 945 open reading frames or genes, 1964 reactions, and 1036 metabolites. There was significant overlap with genes present in the E. coli MG1655 model iAF1260. In silico growth predictions were simulated using the model on different carbon, nitrogen, phosphorus, and sulfur sources. These were compared with substrate utilization data gathered from high-throughput phenotyping microarrays, revealing good agreement. Of the compounds tested, the majority were utilizable by both Salmonella and E. coli. Nevertheless, a number of differences were identified, both between Salmonella and E. coli and within the Salmonella strains included. These differences provide valuable insight into the distinctions between a commensal and a closely related pathogen, and between different pathogenic strains, opening new avenues for future exploration.
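Growth predictions from a genome-scale metabolic model of this kind are typically obtained with flux balance analysis: maximise a biomass flux subject to steady-state mass balance and uptake bounds. The sketch below shows the idea on a three-reaction toy network using scipy.optimize.linprog; the stoichiometry, bounds and reaction names are invented for illustration and are not part of iMA945.

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix S (rows: metabolites A and B; columns: reactions).
# v1: substrate uptake -> A,  v2: A -> B,  v3: B -> biomass (objective).
S = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

# Steady state requires S v = 0.  Uptake is capped at 10 units to mimic a
# limited carbon source; internal fluxes are effectively unbounded.
bounds = [(0, 10), (0, 1000), (0, 1000)]

# Maximise the biomass flux v3 (linprog minimises, so negate the objective).
c = np.array([0.0, 0.0, -1.0])
result = linprog(c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds,
                 method="highs")

print("Predicted growth (biomass flux):", -result.fun)
print("Flux distribution:", result.x)
```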

Relevance: 20.00%

Abstract:

Attempts to estimate photosynthetic rate or gross primary productivity from remotely sensed absorbed solar radiation depend on knowledge of the light use efficiency (LUE). Early models assumed LUE to be constant, but most researchers now adjust it for variations in temperature and moisture stress; however, more exact methods are required. Hyperspectral remote sensing offers the possibility of sensing changes in the xanthophyll cycle, which is closely coupled to photosynthesis. Several studies have shown that an index (the photochemical reflectance index) based on the reflectance at 531 nm is strongly correlated with the LUE over hours, days and months. A second hyperspectral approach relies on the remote detection of fluorescence, which is directly related to the efficiency of photosynthesis. We discuss the state of the art of the two approaches. Both have been demonstrated to be effective, but we specify seven conditions required before the methods can become operational.
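The photochemical reflectance index mentioned here is conventionally computed from the reflectance at 531 nm together with a reference band at 570 nm. A minimal sketch, assuming reflectances at those two wavebands are already available:

```python
import numpy as np

def photochemical_reflectance_index(r531, r570):
    """PRI = (R531 - R570) / (R531 + R570), using 570 nm as the
    xanthophyll-insensitive reference band."""
    r531 = np.asarray(r531, dtype=float)
    r570 = np.asarray(r570, dtype=float)
    return (r531 - r570) / (r531 + r570)

# Example: two canopy spectra sampled at the two wavebands.
print(photochemical_reflectance_index([0.045, 0.052], [0.050, 0.050]))
```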

Relevance: 20.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In this paper, a systematic study is presented in which three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), are evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. First, a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models are proposed: one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions. All models are evaluated using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
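Of the two published reference models mentioned, the variance ratio method is easy to state: the slope of the site-reference relation is set to the ratio of the concurrent-period standard deviations, and the intercept preserves the concurrent-period means. A minimal sketch with synthetic data (the series and sample sizes are placeholders, not the paper's experiment):

```python
import numpy as np

def variance_ratio_mcp(site_concurrent, ref_concurrent, ref_longterm):
    """Variance-ratio MCP: slope = sigma_site / sigma_ref, with an intercept
    that preserves the concurrent-period means, applied to the long-term
    reference record."""
    slope = np.std(site_concurrent, ddof=1) / np.std(ref_concurrent, ddof=1)
    intercept = np.mean(site_concurrent) - slope * np.mean(ref_concurrent)
    return intercept + slope * np.asarray(ref_longterm, dtype=float)

# Synthetic illustration: correlated site / reference wind speed series.
rng = np.random.default_rng(1)
ref = rng.weibull(2.0, 8760) * 8.0                  # one year of hourly data
site = np.clip(0.9 * ref + rng.normal(0, 1.0, ref.size), 0, None)

# Use the first ~3 months as the concurrent measurement campaign.
longterm_prediction = variance_ratio_mcp(site[:2000], ref[:2000], ref)
print("Predicted long-term mean wind speed:", longterm_prediction.mean())
```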

Relevance: 20.00%

Abstract:

Drought characterisation is an intrinsically spatio-temporal problem. A limitation of previous approaches to characterisation is that they discard much of the spatio-temporal information by reducing events to a lower-order subspace. To address this, an explicit 3-dimensional (longitude, latitude, time) structure-based method is described in which drought events are defined by a spatially and temporally coherent set of points displaying standardised precipitation below a given threshold. Geometric methods can then be used to measure similarity between individual drought structures. Groupings of these similarities provide an alternative to traditional methods for extracting recurrent space-time signals from geophysical data. The explicit consideration of structure encourages the construction of summary statistics which relate to the event geometry; example measures considered are the event volume, centroid, and aspect ratio. The utility of a 3-dimensional approach is demonstrated by application to the analysis of European droughts (15°W to 35°E and 35°N to 70°N) for the period 1901–2006. Large-scale structure is found to be abundant, with 75 events identified that last for more than 3 months and span at least 0.5 × 10⁶ km². Near-complete dissimilarity is seen between the individual drought structures, and little or no regularity is found in the time evolution of even the most spatially similar drought events. The spatial distribution of the event centroids and the time evolution of the geographic cross-sectional areas strongly suggest that large-area, sustained droughts result from the combination of multiple small-area (∼10⁶ km²), short-duration (∼3 months) events. The small events are not found to occur independently in space. This leads to the hypothesis that local water feedbacks play an important role in the aggregation process.
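The core step, identifying spatially and temporally coherent sets of below-threshold points, amounts to 3-D connected-component labelling. The sketch below illustrates this with scipy.ndimage on a synthetic standardised-precipitation array; the grid size, the threshold of -1 and the connectivity are assumptions, not the study's settings.

```python
import numpy as np
from scipy import ndimage

# Hypothetical standardised precipitation index on a (time, lat, lon) grid.
rng = np.random.default_rng(2)
spi = rng.normal(size=(120, 70, 100))   # 120 months, 70 x 100 grid cells

# Drought cells: SPI below a fixed threshold (here -1, an assumption).
drought_mask = spi < -1.0

# Label spatio-temporally connected drought structures; default 3-D
# connectivity links neighbouring cells in space and consecutive time steps.
labels, n_events = ndimage.label(drought_mask)
print("Number of 3-D drought structures:", n_events)

# Simple per-event summaries analogous to the event volume and centroid.
volumes = ndimage.sum(drought_mask, labels, index=np.arange(1, n_events + 1))
largest = int(np.argmax(volumes)) + 1
centroid = ndimage.center_of_mass(drought_mask, labels, largest)
print("Largest event volume (cell count):", volumes.max())
print("Centroid of largest event (time, lat, lon indices):", centroid)
```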

Relevance: 20.00%

Abstract:

In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. The application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT) and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem is also part of the classification problem on magnetic field measurements; this is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay behavior of the singular values for three examples of fault classes.
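Fisher's linear discriminant, used above to illustrate the classification of fault classes, can be written down in a few lines: project onto the direction that maximises between-class relative to within-class scatter. A minimal two-class sketch on synthetic measurement vectors (the data and dimensions are invented, not magnetic field measurements from a real cell):

```python
import numpy as np

# Synthetic "measurement" vectors for a healthy class and a faulty class.
rng = np.random.default_rng(3)
healthy = rng.normal(0.0, 1.0, size=(200, 30))
faulty = rng.normal(0.5, 1.0, size=(200, 30))

# Fisher's linear discriminant: w = Sw^{-1} (mu1 - mu0), where Sw is the
# pooled within-class scatter matrix.
mu0, mu1 = healthy.mean(axis=0), faulty.mean(axis=0)
Sw = np.cov(healthy, rowvar=False) + np.cov(faulty, rowvar=False)
w = np.linalg.solve(Sw, mu1 - mu0)

# Classify by projecting onto w and thresholding at the midpoint of the
# projected class means.
threshold = 0.5 * ((healthy @ w).mean() + (faulty @ w).mean())
predictions = (np.vstack([healthy, faulty]) @ w) > threshold
truth = np.array([False] * 200 + [True] * 200)
print("Training accuracy:", (predictions == truth).mean())
```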

Relevance: 20.00%

Abstract:

Background: Jargon aphasia is one of the most intractable forms of aphasia, with limited recommendations for ameliorating the associated naming difficulties and neologisms. The few naming therapy studies that exist in jargon aphasia have utilized either semantic or phonological approaches, but the results have been equivocal. Moreover, the effect of therapy on the characteristics of neologisms is less explored. Aims: This study investigates the effectiveness of a phonological naming therapy (i.e., phonological component analysis, PCA) on picture naming abilities and on quantitative and qualitative changes in neologisms for an individual with jargon aphasia (FF). Methods: FF showed evidence of jargon aphasia with severe naming difficulties and produced a very high proportion of neologisms. A single-subject multiple probe design across behaviors was employed to evaluate the effects of PCA therapy on accuracy for three sets of words. In therapy, a phonological component analysis chart was used to identify five phonological components (i.e., rhymes, first sound, first sound associate, final sound, number of syllables) for each target word. Generalization effects (change in percent accuracy and error pattern) were examined by comparing pre- and post-therapy responses on the Philadelphia Naming Test, and these responses were analyzed to explore the characteristics of the neologisms. The quantitative change in neologisms was measured by the change in the proportion of neologisms from pre- to post-therapy, and the qualitative change was indexed by the phonological overlap between target and neologism. Results: As a consequence of PCA therapy, FF showed a significant improvement in his ability to name the treated items. His performance in the maintenance and follow-up phases remained comparable to his performance during the therapy phases. Generalization to other naming tasks did not show a change in accuracy, but distinct differences in error pattern (an increase in the proportion of real-word responses and a decrease in the proportion of neologisms) were observed. Notably, the decrease in neologisms occurred with a corresponding trend towards increased phonological similarity between the neologisms and the targets. Conclusions: This study demonstrated the effectiveness of a phonological therapy for improving naming abilities and reducing the amount of neologisms in an individual with severe jargon aphasia. The positive outcome of this research is encouraging, as it provides evidence for effective therapies for jargon aphasia and also emphasizes that the quality and quantity of errors may provide a sensitive outcome measure to determine therapy effectiveness, in particular for client groups who are difficult to treat.
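The abstract indexes qualitative change by the phonological overlap between target and neologism without specifying the metric. One plausible way to operationalise such an overlap (an assumption for illustration, not the authors' measure) is one minus a normalised edit distance between the two phoneme strings:

```python
def phonological_overlap(target, response):
    """Similarity between two phoneme sequences as 1 - normalised
    Levenshtein distance.  A crude stand-in for the overlap measure
    described in the abstract; segments here are plain characters."""
    n, m = len(target), len(response)
    dist = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dist[i][0] = i
    for j in range(m + 1):
        dist[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if target[i - 1] == response[j - 1] else 1
            dist[i][j] = min(dist[i - 1][j] + 1,         # deletion
                             dist[i][j - 1] + 1,         # insertion
                             dist[i - 1][j - 1] + cost)  # substitution
    return 1.0 - dist[n][m] / max(n, m)

# Hypothetical target vs. neologism (broad phonemic transcriptions).
print(phonological_overlap("kaemel", "kaebel"))   # high overlap
print(phonological_overlap("kaemel", "trosnip"))  # low overlap
```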

Relevance: 20.00%

Abstract:

As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in demand on low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required which can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect incurred by forecasts whose features are displaced in space or time, compared to traditional point-wise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the point-wise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and discuss the effect of the permutation restriction.
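The proposed measure searches over permutations of the forecast that are restricted to small temporal displacements and keeps the one that minimises the point-wise error. A sketch of that idea, posed as an assignment problem (the window size, the use of linear_sum_assignment and the toy series are assumptions about one possible implementation, not the paper's code):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def adjusted_mae(forecast, actual, window):
    """Minimum MAE over permutations of the forecast that move each value
    by at most `window` time steps: a sketch of the restricted-permutation
    idea, solved here as an assignment problem."""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    n = forecast.size
    cost = np.abs(actual[:, None] - forecast[None, :])
    # Forbid assignments that displace a forecast value beyond the window.
    shift = np.abs(np.arange(n)[:, None] - np.arange(n)[None, :])
    cost[shift > window] = 1e9
    rows, cols = linear_sum_assignment(cost)
    return cost[rows, cols].mean()

# A sharp demand peak forecast one step late: the point-wise MAE penalises
# it twice, while the adjusted measure does not.
actual   = np.array([0.1, 0.1, 3.0, 0.1, 0.1])
forecast = np.array([0.1, 0.1, 0.1, 3.0, 0.1])
print("Point-wise MAE:", np.abs(actual - forecast).mean())
print("Adjusted MAE (window=1):", adjusted_mae(forecast, actual, window=1))
```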

Relevance: 20.00%

Abstract:

Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach that is an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a similar level of classification accuracy compared with decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is that of overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be solved by pruning methods. For the Prism method, two pruning algorithms have been introduced recently for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means for quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
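All three pruning algorithms rest on the J-measure of a rule IF X=x THEN Y=y, which weights the rule's coverage by the information gained about the class. A minimal sketch of that quantity, with hypothetical rule statistics (the probabilities are illustrative, not taken from the paper):

```python
import math

def j_measure(p_x, p_y, p_y_given_x):
    """Smyth & Goodman J-measure of a rule IF X=x THEN Y=y:
    J = p(x) * [ p(y|x) log2(p(y|x)/p(y))
               + (1 - p(y|x)) log2((1 - p(y|x)) / (1 - p(y))) ]."""
    def term(p, q):
        # Contribution of one outcome; 0 * log(0/q) is taken as 0.
        return 0.0 if p == 0.0 else p * math.log2(p / q)
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))

# Hypothetical rule statistics: the rule covers 30% of the examples, the
# target class has prior probability 0.4, and the rule predicts it with
# 0.9 accuracy on the examples it covers.
print(j_measure(p_x=0.3, p_y=0.4, p_y_given_x=0.9))
```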

Relevance: 20.00%

Abstract:

Aim: Species distribution models (SDMs) based on current species ranges underestimate the potential distribution when projected in time and/or space. A multi-temporal model calibration approach has been suggested as an alternative, and we evaluate this using 13,000 years of data.

Location: Europe.

Methods: We used fossil-based records of presence for Picea abies, Abies alba and Fagus sylvatica and six climatic variables for the period 13,000 to 1000 yr BP. To measure the contribution of each 1000-year time step to the total niche of each species (the niche measured by pooling all the data), we employed a principal components analysis (PCA) calibrated with data over the entire range of possible climates. We then projected both the total niche and the partial niches from single time frames into the PCA space, and tested whether the partial niches were more similar to the total niche than expected by chance. Using an ensemble forecasting approach, we calibrated SDMs for each time frame and for the pooled database. We projected each model to current climate and evaluated the results against current pollen data. We also projected all models into the future.

Results: Niche similarity between the partial and the total SDMs was almost always statistically significant and increased through time. SDMs calibrated from single time frames gave different results when projected to current climate, providing evidence of a change in the species' realized niches through time. Moreover, they predicted limited climate suitability when compared with the total SDMs. The same results were obtained when projected to future climates.

Main conclusions: The realized climatic niche of species differed for current and future climates when SDMs were calibrated considering different past climates. Building the niche as an ensemble through time represents a way forward to a better understanding of a species' range and its ecology in a changing climate.
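The methods paragraph describes calibrating a PCA over the full climate space and comparing a single time frame's (partial) niche with the pooled (total) niche. The sketch below illustrates that comparison; the climate table, occurrence flags and grid-based overlap measure are illustrative assumptions rather than the paper's exact similarity test.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical climate table: rows are grid cells, columns stand in for the
# six climatic variables; occurrence flags are simulated for the pooled data
# and for a single 1000-year time frame.
rng = np.random.default_rng(4)
climate = rng.normal(size=(5000, 6))
occ_total = rng.random(5000) < 0.2                    # pooled occurrences
occ_partial = occ_total & (rng.random(5000) < 0.5)    # one time frame

# Calibrate the PCA on the entire range of available climates and project.
scores = PCA(n_components=2).fit_transform(climate)

# Grid the 2-D climate space with shared bin edges so both niches are
# compared on the same cells.
edges = [np.linspace(scores[:, d].min(), scores[:, d].max(), 21) for d in (0, 1)]

def occupied_cells(mask):
    hist, _, _ = np.histogram2d(scores[mask, 0], scores[mask, 1], bins=edges)
    return hist > 0

total_cells = occupied_cells(occ_total)
partial_cells = occupied_cells(occ_partial)

# Fraction of the total (pooled) niche covered by the partial niche.
overlap = (total_cells & partial_cells).sum() / total_cells.sum()
print("Partial / total niche overlap:", round(float(overlap), 2))
```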