848 results for Preference-based measure
Abstract:
Background: At present, prostate cancer screening (PCS) guidelines require a discussion of risks, benefits, alternatives, and personal values, making decision aids an important tool for conveying information and clarifying values. Objective: The overall goal of this study is to provide evidence of the reliability and validity of a PCS anxiety measure and the Decisional Conflict Scale (DCS). Methods: Using data from a randomized, controlled PCS decision aid trial that measured PCS anxiety at baseline and the DCS at baseline (T0) and at two weeks (T2), four psychometric properties were assessed: (1) internal consistency reliability, indicated by factor analysis, intraclass correlations, and Cronbach's α; (2) construct validity, indicated by patterns of Pearson correlations among subscales; (3) discriminant validity, indicated by the measure's ability to discriminate between undecided men and those with a definite screening intention; and (4) factor validity and invariance, assessed using confirmatory factor analyses (CFA). Results: The PCS anxiety measure had adequate internal consistency reliability and good construct and discriminant validity. CFAs indicated that the 3-factor model did not have adequate fit. CFAs for a general PCS anxiety measure and a PSA anxiety measure indicated adequate fit. The general PCS anxiety measure was invariant across clinics. The DCS had adequate internal consistency reliability except for the support subscale and had adequate discriminant validity. Good construct validity was found at the private clinic, but only for the feeling-informed subscale at the public clinic. The traditional DCS did not have adequate fit at T0 or at T2. The alternative DCS had adequate fit at T0 but was not identified at T2. Factor loadings indicated that two subscales, feeling informed and feeling clear about values, were not distinct factors. Conclusions: Our general PCS anxiety measure can be used in PCS decision aid studies. The alternative DCS may be appropriate for men eligible for PCS. Implications: More emphasis needs to be placed on the development of PCS anxiety items relating to testing procedures. We recommend that the two DCS versions be validated in other samples of men eligible for PCS and in other health care decisions that involve uncertainty.
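As a concrete illustration of the internal consistency coefficient named above, here is a minimal Python sketch of Cronbach's α; the Likert item scores are hypothetical, not data from the trial.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total score
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical 4-item anxiety subscale scored by 6 respondents (1-5 Likert).
scores = np.array([
    [4, 4, 5, 4],
    [2, 3, 2, 2],
    [5, 4, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
    [2, 2, 2, 3],
])
print(f"alpha = {cronbach_alpha(scores):.3f}")
```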
Abstract:
This work is a contribution to the definition and assessment of structural robustness, with special emphasis on the reliability of reinforced concrete structures under corrosion of the longitudinal reinforcement. In this communication, several authors' proposals for defining and measuring structural robustness are analyzed and discussed. A probability-based robustness index is defined, considering the decrease of the reliability index over all possible damage levels. Damage is taken as the corrosion level of the longitudinal reinforcement, expressed as rebar weight loss, and produces changes in both the cross-sectional area of the rebar and the bond strength. The proposed methodology is illustrated by means of an application example. To capture the impact of reinforcement corrosion on the growth of the failure probability, an advanced methodology based on the strong discontinuities approach and an isotropic continuum damage model for concrete is adopted. The methodology consists of a two-step analysis: in the first step, a cross-sectional analysis is performed to capture phenomena such as expansion of the reinforcement due to the accumulation of corrosion products and damage and cracking in the concrete surrounding the reinforcement; in the second step, a 2D deteriorated structural model is built from the results of the first step. This methodology, combined with Monte Carlo simulation, is then used to compute the failure probability and the reliability index of the structure for different corrosion levels. Finally, structural robustness is assessed using the proposed probabilistic index.
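A minimal sketch of the Monte Carlo step described above: estimate the failure probability for a given corrosion level and convert it to a reliability index via β = -Φ⁻¹(Pf). The limit state and the distribution parameters below are placeholders for illustration, not values from the study.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 1_000_000

def failure_probability(corrosion_level: float) -> float:
    """Crude limit-state check R - S < 0; corrosion shrinks rebar area.

    The lognormal resistance and Gumbel load parameters are placeholders.
    """
    area_factor = 1.0 - corrosion_level  # rebar weight-loss ratio
    R = rng.lognormal(mean=np.log(500.0 * area_factor), sigma=0.10, size=n)
    S = rng.gumbel(loc=300.0, scale=30.0, size=n)
    return np.mean(R - S < 0.0)

for d in (0.0, 0.1, 0.2, 0.3):
    pf = failure_probability(d)
    beta = -norm.ppf(pf) if pf > 0 else float("inf")
    print(f"corrosion {d:.0%}: Pf = {pf:.2e}, beta = {beta:.2f}")
```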
Abstract:
Bandura (1986) developed the concept of moral disengagement to explain how individuals can engage in detrimental behavior while experiencing low levels of negative feelings such as guilt. Most research on moral disengagement has investigated it as a global concept (e.g., Bandura, Barbaranelli, Caprara, & Pastorelli, 1996; Moore, Detert, Klebe Treviño, Baker, & Mayer, 2012), although Bandura (1986, 1990) initially described eight distinct mechanisms of moral disengagement grouped into four categories representing the various means through which moral disengagement can operate. In this work, we propose to develop measures of the concept based on these categories, namely rightness of actions, rejection of personal responsibility, distortion of negative consequences, and negative perception of the victims, measures that are not specific to a particular area of research. Through these measures, we aim to better understand the cognitive process leading individuals to behave unethically by investigating which category explains unethical behavior depending on the situation in which individuals find themselves. To this end, we conducted five studies to develop the measures and test their predictive validity. In particular, we assessed the ability of the newly developed measures to predict two types of unethical behavior: discriminatory behavior and cheating behavior. Confirmatory factor analyses demonstrated a good fit of the model, and the findings generally supported our predictions.
Abstract:
The analysis of short segments of noise-contaminated, multivariate real-world data constitutes a challenge. In this paper we compare several techniques of analysis that are supposed to correctly extract the amount of genuine cross-correlations from a multivariate data set. To test the quality of their performance, we derive time series from a linear test model that allows the analytical derivation of the genuine correlations, and we compare the numerical estimates of the four measures with the analytical results for different correlation patterns. In the bivariate case, all but one of the measures perform similarly well. In the multivariate case, however, measures based on the eigenvalues of the equal-time cross-correlation matrix do not extract exclusively information about the amount of genuine correlations; rather, they also reflect the spatial organization of the correlation pattern. This may lead to failures when interpreting the numerical results, as illustrated by an application to electroencephalographic recordings from three patients suffering from pharmacoresistant epilepsy.
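As a concrete example of the eigenvalue-based class of measures discussed above, the following sketch computes a spectral-entropy synchronization index from the equal-time cross-correlation matrix. This is one common variant of such measures, not necessarily the exact ones compared in the paper, and the data are synthetic.

```python
import numpy as np

def eigenvalue_sync_measure(data: np.ndarray) -> float:
    """Entropy-based index of the eigenvalue spectrum of the equal-time
    cross-correlation matrix. data: (n_channels, n_samples).
    Returns a value in [0, 1]; 0 = uncorrelated, 1 = fully correlated.
    """
    z = (data - data.mean(axis=1, keepdims=True)) / data.std(axis=1, keepdims=True)
    C = z @ z.T / z.shape[1]            # equal-time correlation matrix
    lam = np.linalg.eigvalsh(C)
    lam_n = lam / lam.sum()             # normalized eigenvalue spectrum
    lam_n = lam_n[lam_n > 0]
    H = -(lam_n * np.log(lam_n)).sum()  # spectral entropy
    return 1.0 - H / np.log(C.shape[0])

# Toy test: a common driver induces genuine cross-correlations.
rng = np.random.default_rng(1)
driver = rng.standard_normal(2000)
channels = np.stack(
    [0.7 * driver + 0.3 * rng.standard_normal(2000) for _ in range(5)]
)
print(f"sync index = {eigenvalue_sync_measure(channels):.2f}")
```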
Abstract:
Data envelopment analysis (DEA) has gained a wide range of applications in measuring the comparative efficiency of decision making units (DMUs) with multiple incommensurate inputs and outputs. The standard DEA method requires that the status of every variable as input or output be known exactly. In many real applications, however, the status of some measures is not clearly input or output; these are referred to as flexible measures. This paper proposes a flexible slacks-based measure (FSBM) of efficiency in which each flexible measure can play an input role for some DMUs and an output role for others, so as to maximize the relative efficiency of the DMU under evaluation. Further, we show that when an operational unit is efficient in a specific flexible measure, that measure can play both input and output roles for the unit; in this case, the optimal input/output designation for the flexible measure is the one that optimizes the efficiency of the artificial average unit. An application to assessing UK higher education institutions is used to show the applicability of the proposed approach.
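For reference, here is a sketch of the standard input-oriented DEA envelopment model that a slacks-based measure builds on; the paper's FSBM additionally treats the input/output role of each flexible measure as a decision, which is not reproduced here. The data are hypothetical.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X: np.ndarray, Y: np.ndarray, o: int) -> float:
    """Input-oriented CCR efficiency of DMU o.

    X: (m_inputs, n_dmus), Y: (s_outputs, n_dmus).
    Solves: min theta s.t. sum_j lam_j x_j <= theta x_o,
                           sum_j lam_j y_j >= y_o, lam >= 0.
    """
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]                    # variables: [theta, lam]
    A_in = np.hstack([-X[:, [o]], X])              # lam.x - theta*x_o <= 0
    A_out = np.hstack([np.zeros((s, 1)), -Y])      # -lam.y <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[:, o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun

# Hypothetical data: 2 inputs, 1 output, 4 DMUs.
X = np.array([[2.0, 4.0, 3.0, 5.0],
              [3.0, 1.0, 2.0, 4.0]])
Y = np.array([[1.0, 1.0, 1.0, 1.0]])
for o in range(4):
    print(f"DMU {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```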
Abstract:
In the majority of production processes, noticeable amounts of undesirable byproducts, or bad outputs, are produced. The standard Malmquist index for measuring productivity change over time cannot handle the negative effect of bad outputs on efficiency. To this end, the Malmquist-Luenberger index (MLI) was introduced for settings in which undesirable outputs are present. In this paper, we introduce a Data Envelopment Analysis (DEA) model, together with an algorithm, that successfully eliminates a common infeasibility problem encountered in MLI mixed-period problems. The model incorporates the best endogenous direction among all possible directions, increasing desirable outputs and decreasing undesirable outputs at the same time. A simple example illustrates the new algorithm, and a real application to steam power plants shows the applicability of the proposed model.
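A sketch of the directional distance function that underlies the Malmquist-Luenberger index: desirable outputs expand and undesirable outputs contract along a chosen direction g. The paper's contribution, choosing that direction endogenously to avoid mixed-period infeasibility, is not reproduced; the fixed direction and plant data below are illustrative only.

```python
import numpy as np
from scipy.optimize import linprog

def directional_distance(X, Y, B, o, gx, gy, gb):
    """Directional distance beta for DMU o: inputs contract along gx,
    desirable outputs (Y) expand along gy, undesirable outputs (B)
    contract along gb, with equality on bads (weak disposability).
    """
    n = X.shape[1]
    c = np.r_[-1.0, np.zeros(n)]                    # maximize beta
    A_ub = np.vstack([
        np.hstack([gx.reshape(-1, 1), X]),          # beta*gx + lam.x <= x_o
        np.hstack([gy.reshape(-1, 1), -Y]),         # beta*gy - lam.y <= -y_o
    ])
    b_ub = np.r_[X[:, o], -Y[:, o]]
    A_eq = np.hstack([gb.reshape(-1, 1), B])        # beta*gb + lam.b = b_o
    b_eq = B[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                  bounds=[(None, None)] + [(0, None)] * n)
    return -res.fun

# Hypothetical plants: 1 input (fuel), electricity (good), emissions (bad).
X = np.array([[10.0, 12.0, 9.0, 15.0]])
Y = np.array([[5.0, 6.0, 4.0, 5.0]])
B = np.array([[3.0, 4.0, 2.0, 6.0]])
for o in range(4):
    beta = directional_distance(X, Y, B, o,
                                gx=np.zeros(1), gy=Y[:, o], gb=B[:, o])
    print(f"plant {o}: beta = {beta:.3f}")
```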
Abstract:
There is evidence showing that individual behavior often deviates from the classical principle of maximization. This evidence raises at least two important questions: (i) how severe the deviations are, and (ii) which method is best for extracting relevant information from choice behavior for the purposes of welfare analysis. In this paper we address these two questions by identifying, from a foundational analysis, a new measure of the rationality of individuals that enables the analysis of individual welfare in potentially inconsistent subjects, all based on standard revealed preference data. We call this measure the minimal index.
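For context, the raw consistency test behind rationality measures of this kind is a check of the generalized axiom of revealed preference (GARP) on price/bundle data. The sketch below counts GARP violations; it is not the paper's minimal index, only the kind of revealed-preference input from which such an index is built.

```python
import numpy as np
from itertools import product

def garp_violations(prices: np.ndarray, bundles: np.ndarray) -> int:
    """Count GARP violations. prices, bundles: (n_obs, n_goods).

    Bundle x_i is directly revealed weakly preferred to x_j when
    p_i.x_i >= p_i.x_j. A violation is a pair with x_i (transitively)
    revealed preferred to x_j while p_j.x_j > p_j.x_i.
    """
    n = len(prices)
    exp = prices @ bundles.T                     # exp[i, j] = p_i . x_j
    R = exp.diagonal()[:, None] >= exp           # direct revealed preference
    for k, i, j in product(range(n), repeat=3):  # Warshall closure
        if R[i, k] and R[k, j]:
            R[i, j] = True
    strict = exp.diagonal()[:, None] > exp       # strict affordability
    return int(sum(R[i, j] and strict[j, i]
                   for i in range(n) for j in range(n)))

# Toy data: the two observations form a revealed-preference cycle.
p = np.array([[1.0, 1.0], [1.0, 2.0]])
x = np.array([[3.0, 0.0], [0.0, 2.0]])
print(garp_violations(p, x))  # -> 2
```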
Abstract:
In many product categories, unit prices facilitate price comparisons across brands and package sizes, enabling consumers to identify the products that provide the greatest value. In other product categories, however, unit prices may be confusing, because there are two types of unit pricing: measure-based and usage-based. Measure-based unit prices are what the name implies: the price is expressed in cents or dollars per unit of measure (e.g., ounce). Usage-based unit prices, on the other hand, are expressed in cents or dollars per use (e.g., wash load or serving). The results of this study show that in two different product categories (laundry detergent and dry breakfast cereal), measure-based unit prices reduced consumers' ability to identify higher-value products, whereas providing a usage-based unit price increased it. When provided with both a measure-based and a usage-based unit price, respondents did not perform as well as when they were provided only the usage-based unit price, additional evidence that the measure-based unit price hindered consumers' comparisons. Finally, two potential moderators, education about the meaning of the two measures and having to rank-order the options in the choice set by value before choosing, did not eliminate these effects.
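A worked numeric illustration, with hypothetical prices, of how the two unit-price types can rank the same two products differently:

```python
# Hypothetical detergents: measure-based vs usage-based unit prices.
brands = {
    # name: (package price $, ounces, wash loads)
    "Concentrated": (12.00, 50.0, 50),
    "Regular":      (10.00, 100.0, 40),
}
for name, (price, oz, loads) in brands.items():
    print(f"{name}: {100 * price / oz:.1f} cents/oz, "
          f"{100 * price / loads:.1f} cents/load")
# Concentrated: 24.0 cents/oz, 24.0 cents/load
# Regular:      10.0 cents/oz, 25.0 cents/load
# The measure-based price favors Regular; per wash load,
# Concentrated is actually the better value.
```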
Abstract:
Partial moments are extensively used in the literature for the modeling and analysis of lifetime data. In this paper, we study properties of partial moments using quantile functions. The quantile-based measure determines the underlying distribution uniquely. We then characterize certain lifetime quantile function models. The proposed measure provides alternative definitions for ageing criteria. Finally, we explore the utility of the measure for comparing the characteristics of two lifetime distributions.
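For concreteness, the standard quantile-function form of the nth upper partial moment, obtained by the substitution x = Q(p); evaluating it at a threshold t = Q(u) yields a measure indexed by the quantile level u, which is presumably close to the form studied in the paper:

```latex
% nth upper partial moment about threshold t, with distribution
% function F and quantile function Q = F^{-1}; the last expression
% is the quantile-based form (substitute x = Q(p), t = Q(u)).
\alpha_n(t) = E\big[(X - t)_+^{\,n}\big]
            = \int_t^{\infty} (x - t)^n \, dF(x)
            = \int_{F(t)}^{1} \big(Q(p) - t\big)^n \, dp,
\qquad
P_n(u) = \int_u^1 \big(Q(p) - Q(u)\big)^n \, dp .
```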
Abstract:
Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. Its point of departure from previous models is that this contrast measure is marginalized over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from the 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived; one of these is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method.
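A schematic sketch of the iterated least-squares idea described above: an E-step replaces each contour point by a posterior-mean (Bayes least-squares) estimate over nearby image features, and an M-step refits a 2D similarity pose by least squares. This mirrors the structure of the EM Contour algorithm but is not the paper's exact MLR formulation; points are represented as complex numbers for brevity.

```python
import numpy as np

def em_contour(model: np.ndarray, features: np.ndarray,
               sigma: float = 0.3, n_iter: int = 20) -> np.ndarray:
    """Fit pose [scale*rotation, translation] of a complex-valued
    contour template to complex-valued feature locations."""
    pose = np.array([1.0 + 0.0j, 0.0 + 0.0j])
    for _ in range(n_iter):
        pts = pose[0] * model + pose[1]         # current contour points
        # E-step: posterior-mean contour estimates, marginalizing over
        # which feature generated each point (Gaussian likelihood).
        d2 = np.abs(pts[:, None] - features[None, :]) ** 2
        w = np.exp(-d2 / (2 * sigma ** 2))
        w /= w.sum(axis=1, keepdims=True)
        est = w @ features
        # M-step: least-squares 2D similarity transform model -> est.
        mu_m, mu_e = model.mean(), est.mean()
        a, b = model - mu_m, est - mu_e
        rot = (np.conj(a) @ b) / (np.conj(a) @ a)
        pose = np.array([rot, mu_e - rot * mu_m])
    return pose

# Toy usage: unit-square template; features are a mildly transformed,
# noisy copy of its corners.
rng = np.random.default_rng(2)
square = np.array([0, 1, 1 + 1j, 1j], dtype=complex)
feats = 1.1 * np.exp(0.2j) * square + (0.3 + 0.2j)
feats += 0.02 * (rng.standard_normal(4) + 1j * rng.standard_normal(4))
print(em_contour(square, feats))  # ~ [1.1*exp(0.2j), 0.3+0.2j]
```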
Abstract:
With no universal approach to measuring brand performance, we show how a consumer-based brand measure was developed for corporate financial services brands. Churchill's paradigm was adopted. A literature review and 20 in-depth interviews with experts suggested that brand loyalty, consumer satisfaction, and reputation constitute the brand performance measure. Ten financial services organisations provided access to their consumers. Following a postal survey, 600 questionnaires were analysed through principal components analysis to identify the consumer-based measure. Further testing revealed this to be a valid and reliable brand performance measure.
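A minimal sketch of the principal components step mentioned above, using synthetic ratings in which three latent constructs (loyalty, satisfaction, reputation) each drive two questionnaire items; the data and item structure are hypothetical, not the survey's.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Synthetic respondents: 3 latent constructs, each loading on 2 items.
rng = np.random.default_rng(3)
latent = rng.standard_normal((200, 3))
items = np.repeat(latent, 2, axis=1) + 0.5 * rng.standard_normal((200, 6))

scores = StandardScaler().fit_transform(items)
pca = PCA(n_components=3).fit(scores)
print("explained variance ratios:", pca.explained_variance_ratio_.round(2))
print("loadings:\n", pca.components_.round(2))
```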