41 results for preventive measure


Relevance: 20.00%

Abstract:

With no universal approach to measuring brand performance, we show how a consumer-based brand measure was developed for corporate financial services brands. Churchill's paradigm was adopted. A literature review and 20 in-depth interviews with experts suggested that brand loyalty, consumer satisfaction and reputation constitute the brand performance measure. Ten financial services organisations provided access to their consumers. Following a postal survey, 600 questionnaires were analysed through principal components analysis to identify the consumer-based measure. Further testing revealed this to be a valid and reliable brand performance measure.
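
A minimal sketch of the kind of principal components step described above, assuming the questionnaire responses sit in a table of numeric item ratings; the file name and column names are hypothetical, not from the study.

```python
# Illustrative PCA on survey items (file and column names are hypothetical).
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

responses = pd.read_csv("survey_responses.csv")   # e.g. 600 returned questionnaires
items = responses.filter(like="item_")            # hypothetical Likert-scale item columns

# Standardise the items, then extract three components; three dimensions would
# correspond to loyalty, satisfaction and reputation if the data behave as reported.
pca = PCA(n_components=3).fit(StandardScaler().fit_transform(items))
print(pca.explained_variance_ratio_)
```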

Relevance: 20.00%

Abstract:

Hocaoglu MB, Gaffan EA, Ho AK. The Huntington's disease health-related quality of life questionnaire: a disease-specific measure of health-related quality of life. Huntington's disease (HD) is a genetic neurodegenerative disorder characterized by motor, cognitive and psychiatric disturbances, and yet there is no disease-specific patient-reported health-related quality of life outcome measure for patients. Our aim was to develop and validate such an instrument, i.e. the Huntington's Disease health-related Quality of Life questionnaire (HDQoL), to capture the true impact of living with this disease. Semi-structured interviews were conducted with the full spectrum of people living with HD, to form a pool of items, which were then examined in a larger sample prior to data-driven item reduction. We provide the statistical basis for the extraction of three different sets of scales from the HDQoL, and present validation and psychometric data on these scales using a sample of 152 participants living with HD. These new patient-derived scales provide promising patient-reported outcome measures for HD.

Relevance: 20.00%

Abstract:

Attempts to estimate photosynthetic rate or gross primary productivity from remotely sensed absorbed solar radiation depend on knowledge of the light use efficiency (LUE). Early models assumed LUE to be constant; most researchers now adjust it for variations in temperature and moisture stress, but more exact methods are required. Hyperspectral remote sensing offers the possibility of sensing changes in the xanthophyll cycle, which is closely coupled to photosynthesis. Several studies have shown that an index (the photochemical reflectance index) based on the reflectance at 531 nm is strongly correlated with the LUE over hours, days and months. A second hyperspectral approach relies on the remote detection of fluorescence, which is directly related to the efficiency of photosynthesis. We discuss the state of the art of the two approaches. Both have been demonstrated to be effective, but we specify seven conditions that must be met before the methods can become operational.
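
The index referred to is normally formed as a normalised difference between the 531 nm band and a reference band; the sketch below uses the 570 nm reference band common in the literature, which is an assumption, as the abstract only names the 531 nm band.

```python
def photochemical_reflectance_index(r531: float, r570: float) -> float:
    """Standard PRI formulation: normalised difference of the 531 nm reflectance
    against a 570 nm reference band (the reference band is assumed here from
    common usage, not stated in the abstract)."""
    return (r531 - r570) / (r531 + r570)

# Example with made-up reflectance values:
print(photochemical_reflectance_index(0.048, 0.052))  # -> -0.04
```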

Relevance: 20.00%

Abstract:

The estimation of the long-term wind resource at a prospective site based on a relatively short on-site measurement campaign is an indispensable task in the development of a commercial wind farm. The typical industry approach is based on the measure-correlate-predict (MCP) method, where a relational model between the site wind velocity data and the data obtained from a suitable reference site is built from concurrent records. In a subsequent step, a long-term prediction for the prospective site is obtained from a combination of the relational model and the historic reference data. In the present paper, a systematic study is presented in which three new MCP models, together with two published reference models (a simple linear regression and the variance ratio method), are evaluated based on concurrent synthetic wind speed time series for two sites, simulating the prospective and the reference site. The synthetic method has the advantage of generating time series with the desired statistical properties, including Weibull scale and shape factors, required to evaluate the five methods under all plausible conditions. In this work, first a systematic discussion of the statistical fundamentals behind MCP methods is provided, and three new models, one based on a nonlinear regression and two (termed kernel methods) derived from the use of conditional probability density functions, are proposed. All models are evaluated using five metrics under a wide range of values of the correlation coefficient, the Weibull scale, and the Weibull shape factor. Only one of the models, a kernel method based on bivariate Weibull probability functions, is capable of accurately predicting all performance metrics studied.
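
A minimal sketch of the two published reference models named above (simple linear regression and the variance ratio method), assuming concurrent site and reference wind speed records as NumPy arrays; it is only an illustration of those baselines, not the kernel methods proposed in the paper.

```python
import numpy as np

def mcp_linear_regression(ref, site, ref_longterm):
    """Fit site = a*ref + b on the concurrent period, then predict the site
    series from the long-term reference record."""
    a, b = np.polyfit(ref, site, 1)
    return a * ref_longterm + b

def mcp_variance_ratio(ref, site, ref_longterm):
    """Variance ratio method: the slope is the ratio of standard deviations,
    so the prediction reproduces the site variance as well as its mean."""
    slope = site.std() / ref.std()
    intercept = site.mean() - slope * ref.mean()
    return slope * ref_longterm + intercept
```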

Relevance: 20.00%

Abstract:

The UK Department for Environment, Food and Rural Affairs (Defra) identified practices to reduce the risk of animal disease outbreaks. We report on the response of sheep and pig farmers in England to the promotion of these practices. A conceptual framework was established from research on factors influencing adoption of animal health practices, linking knowledge, attitudes, social influences and perceived constraints to the implementation of specific practices. Qualitative data were collected from nine sheep and six pig enterprises in 2011. Thematic analysis explored attitudes and responses to the proposed practices, and the factors influencing the likelihood of implementation. Most farmers feel they are doing all they reasonably can to minimise disease risk and that practices not being implemented are either not relevant or ineffective. There is little awareness of, or concern about, risk from unseen threats. Pig farmers place more emphasis than sheep farmers on controlling wildlife, staff and visitor management, and staff training. The main factors that influence livestock farmers’ decisions on whether or not to implement a specific disease risk measure are: attitudes to, and perceptions of, disease risk; attitudes towards the specific measure and its efficacy; characteristics of the enterprise which they perceive as making a measure impractical; previous experience of a disease or of the measure; and the credibility of information and advice. Great importance is placed on access to authoritative information, with most seeing vets as the prime source to interpret generic advice from national bodies in the local context. Uptake of disease risk measures could be increased by: improved risk communication through the farming press and vets to encourage farmers to recognise hidden threats; dissemination of credible early-warning information to sharpen farmers’ assessment of risk; and targeted information through training events, the farming press, vets and other advisers, and farmer groups, tailored to the different categories of livestock farmer.

Relevance: 20.00%

Abstract:

As low carbon technologies become more pervasive, distribution network operators are looking to support the expected changes in demand on the low voltage networks through the smarter control of storage devices. Accurate forecasts of demand at the single-household level, or for small aggregations of households, can improve the peak demand reduction brought about through such devices by helping to plan the appropriate charging and discharging cycles. However, before such methods can be developed, validation measures are required that can assess the accuracy and usefulness of forecasts of volatile and noisy household-level demand. In this paper we introduce a new forecast verification error measure that reduces the so-called “double penalty” effect, incurred by forecasts whose features are displaced in space or time, compared to traditional pointwise metrics such as Mean Absolute Error and p-norms in general. The measure that we propose is based on finding a restricted permutation of the original forecast that minimises the pointwise error according to a given metric. We illustrate the advantages of our error measure using half-hourly domestic household electrical energy usage data recorded by smart meters, and we discuss the effect of the permutation restriction.
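
A minimal sketch of a restricted-permutation error of the kind described, under the assumption that the restriction limits each forecast point to moving at most w time steps; the window size, the absolute-error metric and the use of an assignment solver are illustrative choices, not necessarily those of the paper.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def restricted_permutation_error(forecast, actual, w=2):
    """Smallest mean absolute error achievable by permuting the forecast,
    where each point may be displaced by at most w time steps."""
    f = np.asarray(forecast, dtype=float)
    a = np.asarray(actual, dtype=float)
    n = len(f)
    cost = np.abs(np.subtract.outer(f, a))      # cost[i, j] = |forecast_i - actual_j|
    i, j = np.indices((n, n))
    cost[np.abs(i - j) > w] = 1e12              # forbid displacements beyond the window
    rows, cols = linear_sum_assignment(cost)    # optimal restricted matching
    return cost[rows, cols].mean()
```

With w = 0 this reduces to the ordinary pointwise Mean Absolute Error, so widening the window shows directly how much of the error is due to features being displaced in time.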

Relevance: 20.00%

Abstract:

Prism is a modular classification rule generation method based on the ‘separate and conquer’ approach, an alternative to the rule induction approach using decision trees, also known as ‘divide and conquer’. Prism often achieves a similar level of classification accuracy to decision trees, but tends to produce a more compact, noise-tolerant set of classification rules. As with other classification rule generation methods, a principal problem arising with Prism is that of overfitting due to over-specialised rules. In addition, over-specialised rules increase the associated computational complexity. These problems can be addressed by pruning methods. For the Prism method, two pruning algorithms have recently been introduced for reducing overfitting of classification rules: J-pruning and Jmax-pruning. Both algorithms are based on the J-measure, an information-theoretic means of quantifying the theoretical information content of a rule. Jmax-pruning attempts to exploit the J-measure to its full potential, because J-pruning does not actually achieve this and may even lead to underfitting. A series of experiments has shown that Jmax-pruning may outperform J-pruning in reducing overfitting. However, Jmax-pruning is computationally relatively expensive and may also lead to underfitting. This paper reviews the Prism method and the two existing pruning algorithms above. It also proposes a novel pruning algorithm called Jmid-pruning. The latter is based on the J-measure and reduces overfitting to a similar level as the other two algorithms, but is better at avoiding underfitting and unnecessary computational effort. The authors conduct an experimental study of the performance of the Jmid-pruning algorithm in terms of classification accuracy and computational efficiency. The algorithm is also evaluated comparatively with the J-pruning and Jmax-pruning algorithms.
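
The J-measure underlying all three pruning algorithms is usually given in Smyth and Goodman's formulation; the sketch below computes it from a rule's coverage, the class prior and the rule's confidence, as a generic illustration rather than code taken from the Prism papers.

```python
from math import log2

def j_measure(p_x: float, p_y: float, p_y_given_x: float) -> float:
    """J-measure of a rule 'IF x THEN y' (Smyth & Goodman formulation):
    p(x) times the cross-entropy between the posterior p(y|x) and the prior p(y)."""
    def term(post: float, prior: float) -> float:
        return post * log2(post / prior) if post > 0 else 0.0
    return p_x * (term(p_y_given_x, p_y) + term(1 - p_y_given_x, 1 - p_y))

# A rule covering 20% of the examples that raises the class probability from 0.5 to 0.9:
print(round(j_measure(0.2, 0.5, 0.9), 4))
```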

Relevance: 20.00%

Abstract:

Purpose – The development of marketing strategies optimally adjusted to export markets has been a vitally important topic for both managers and academics for about five decades. However, there is no agreement in the literature about which elements make up a marketing strategy and which components of domestic strategies should be adapted to export markets. The purpose of this paper is to develop a new scale – STRATADAPT. Design/methodology/approach – Results from a sample of small and medium-sized industrial exporting firms support a four-dimensional scale – product, promotion, price, and distribution strategies – of 30 items. The scale presents evidence of composite reliability as well as discriminant and nomological validity. Findings – Findings reveal that all four dimensions of marketing strategy adaptation are positively associated with the amount of the firm's financial resources allocated to export activity. Practical implications – The STRATADAPT scale may assist managers in developing better international marketing strategies as well as in planning more accurate and efficient marketing programs across markets. Originality/value – This study develops a new scale, the STRATADAPT scale, which is a broad measure of export marketing strategy adaptation.

Relevance: 20.00%

Abstract:

An alternative procedure to that of Lo is proposed for assessing whether there is significant evidence of persistence in time series. The technique estimates the Hurst exponent itself, and significance testing is based on an application of bootstrapping using surrogate data. The method is applied to a set of 10 daily pound exchange rates. A general lack of long-term memory is found to characterize all the series tested, in sympathy with the findings of a number of other recent papers which have used Lo's techniques.
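
A minimal sketch of the general approach described: estimate the Hurst exponent and compare it against a null distribution built from shuffled (memory-free) surrogate series. The rescaled-range estimator and the shuffling scheme below are generic textbook choices, not necessarily the estimator or bootstrap used in the paper, and the series should be longer than the largest window.

```python
import numpy as np

def rs_hurst(x, window_sizes=(16, 32, 64, 128, 256)) -> float:
    """Rescaled-range (R/S) estimate of the Hurst exponent: the slope of
    log(R/S) against log(window size)."""
    x = np.asarray(x, dtype=float)
    rs = []
    for w in window_sizes:
        ratios = []
        for i in range(0, len(x) - w + 1, w):
            c = x[i:i + w]
            dev = np.cumsum(c - c.mean())           # cumulative deviations from the mean
            if c.std() > 0:
                ratios.append((dev.max() - dev.min()) / c.std())
        rs.append(np.mean(ratios))
    return np.polyfit(np.log(window_sizes), np.log(rs), 1)[0]

def surrogate_p_value(x, n_surrogates: int = 200) -> float:
    """Proportion of shuffled surrogates whose Hurst estimate is at least as
    large as the observed one (a simple one-sided significance test)."""
    h_obs = rs_hurst(x)
    h_null = [rs_hurst(np.random.permutation(x)) for _ in range(n_surrogates)]
    return float(np.mean([h >= h_obs for h in h_null]))
```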

Relevance: 20.00%

Abstract:

Aim: To develop a brief, parent-completed instrument (‘ERIC’) for the detection of cognitive delay in 10- to 24-month-olds born preterm, or with low birth weight, or with perinatal complications, and to establish its diagnostic properties. Method: Scores were collected from parents of 317 children meeting ≥1 inclusion criterion (birth weight <1500 g; gestational age <34 completed weeks; 5-minute Apgar <7; presence of hypoxic-ischemic encephalopathy) and meeting no exclusion criteria. Children were assessed for cognitive delay using a criterion score on the Bayley Scales of Infant and Toddler Development Cognitive Scale (Bayley-III) <80. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated, and a truncated ERIC was developed for use below 14 months. Results: ERIC detected 17 of the 18 delayed children in the sample, with 94.4% sensitivity (95% CI [confidence interval] 83.9-100%), 76.9% specificity (72.1-81.7%), 19.8% positive predictive value (11.4-28.2%), 99.6% negative predictive value (98.7-100%), a positive likelihood ratio of 4.09, and a negative likelihood ratio of 0.07; the associated area under the curve was 0.909 (0.829-0.960). Interpretation: ERIC has potential value as a quickly administered diagnostic instrument for the absence of early cognitive delay in preterm or premature infants of 10-24 months, and as a screen for cognitive delay. Further research may be needed before ERIC can be recommended for wide-scale use.
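
The headline figures quoted above follow directly from the underlying 2x2 classification counts. The sketch below reproduces them from the counts implied by the abstract: 17 of 18 delayed children were detected, and the 230/69 split of the 299 non-delayed children is back-calculated from the reported 76.9% specificity rather than reported directly.

```python
# Diagnostic metrics recomputed from the counts implied by the abstract.
tp, fn = 17, 1        # delayed children detected / missed
tn, fp = 230, 69      # non-delayed split back-calculated from 76.9% specificity

sensitivity = tp / (tp + fn)               # ~0.944
specificity = tn / (tn + fp)               # ~0.769
ppv = tp / (tp + fp)                       # ~0.198
npv = tn / (tn + fn)                       # ~0.996
lr_positive = sensitivity / (1 - specificity)   # ~4.09
lr_negative = (1 - sensitivity) / specificity   # ~0.07
print(sensitivity, specificity, ppv, npv, lr_positive, lr_negative)
```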

Relevance: 20.00%

Abstract:

Recent gravity missions have produced a dramatic improvement in our ability to measure the ocean’s mean dynamic topography (MDT) from space. To fully exploit this oceanic observation, however, we must quantify its error. To establish a baseline, we first assess the error budget for an MDT calculated using a third-generation GOCE geoid and the CLS01 mean sea surface (MSS). With these products, we can resolve MDT spatial scales down to 250 km with an accuracy of 1.7 cm, with the MSS and geoid making similar contributions to the total error. For spatial scales within the range 133–250 km the error is 3.0 cm, with the geoid making the greatest contribution. For the smallest resolvable spatial scales (80–133 km) the total error is 16.4 cm, with geoid error accounting for almost all of this. Relative to this baseline, the most recent versions of the geoid and MSS fields reduce the long- and short-wavelength errors by 0.9 and 3.2 cm, respectively, but they have little impact in the medium-wavelength band. The newer MSS is responsible for most of the long-wavelength improvement, while for the short-wavelength component it is the geoid. We find that, while the formal geoid errors have reasonable global mean values, they fail to capture the regional variations in error magnitude, which depend on the steepness of the sea floor topography.
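
If the geoid and MSS contributions are treated as independent they combine in quadrature, which is the usual way such per-band error budgets are built up; the split used below for the 250 km band is an illustrative back-of-envelope choice consistent with "similar contributions", not a figure taken from the paper.

```python
import math

def combine_errors(*components_cm: float) -> float:
    """Combine error components in quadrature (assumes the geoid and MSS
    errors are uncorrelated)."""
    return math.sqrt(sum(c ** 2 for c in components_cm))

# Two similar contributions of ~1.2 cm each reproduce the 1.7 cm total quoted
# for scales down to 250 km (illustrative values only):
print(round(combine_errors(1.2, 1.2), 1))   # ~1.7
```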