961 results for Data Utility
Abstract:
The joint and alternative uses of attribute non-attendance and importance ranking data within discrete choice experiments are investigated using data from Lebanon on consumers' preferences for safety certification in food. We find that both types of information, attribute non-attendance and importance rankings, improve estimates of respondent utility. We introduce a method of integrating both types of information simultaneously and find that it outperforms models in which either importance ranking or non-attendance data are used alone. As in previous studies, stated non-attendance of attributes was not found to be consistent with respondents having zero marginal utility for those attributes.
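For concreteness, here is a minimal sketch (all data and names synthetic, not from the Lebanon study) of the most literal way stated non-attendance is folded into a conditional logit: zeroing out a respondent's coefficients on attributes they report ignoring. Note that this hard-zero restriction is exactly what the abstract's final finding cautions against taking literally.

```python
# Hypothetical sketch: conditional logit in which each respondent's stated
# non-attendance mask zeroes out the utility contribution of ignored
# attributes. Shapes, values, and names are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n_resp, n_alt, n_attr = 200, 3, 4
X = rng.normal(size=(n_resp, n_alt, n_attr))        # attribute levels
attend = rng.integers(0, 2, size=(n_resp, n_attr))  # 1 = attribute attended to
beta_true = np.array([1.0, -0.5, 0.8, 0.3])

def utilities(beta):
    # Mask coefficients respondent-by-respondent before forming utility.
    return np.einsum('nak,nk->na', X, attend * beta)

probs = np.exp(utilities(beta_true))
probs /= probs.sum(axis=1, keepdims=True)
choice = np.array([rng.choice(n_alt, p=p) for p in probs])

def neg_loglik(beta):
    v = utilities(beta)
    v -= v.max(axis=1, keepdims=True)               # numerical stability
    logp = v - np.log(np.exp(v).sum(axis=1, keepdims=True))
    return -logp[np.arange(n_resp), choice].sum()

fit = minimize(neg_loglik, np.zeros(n_attr), method='BFGS')
print(fit.x)  # recovered coefficients for attended attributes
```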
Abstract:
Using the Pricing Equation in a panel-data framework, we construct a novel consistent estimator of the stochastic discount factor (SDF) which relies on the fact that its logarithm is the serial-correlation "common feature" in every asset return of the economy. Our estimator is a simple function of asset returns, does not depend on any parametric function representing preferences, is suitable for testing different preference specifications or investigating intertemporal substitution puzzles, and can serve as a basis for constructing an estimator of the risk-free rate. For post-war data, our estimator is close to unity most of the time, yielding an average annual real discount rate of 2.46%. In formal testing, we cannot reject standard preference specifications used in the literature, and estimates of the relative risk-aversion coefficient are between 1 and 2 and statistically equal to unity. Using our SDF estimator, we find few signs of the equity-premium puzzle for the U.S.
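The abstract does not spell out the estimator's formula, so the following is only an illustrative sketch: a naive SDF proxy built from asset returns (a cross-sectional harmonic mean of gross returns, which is an assumption of this sketch, not the authors' common-feature estimator), checked against the Pricing Equation E[M_{t+1} R_{i,t+1}] = 1.

```python
# Illustrative only: a naive SDF proxy built from asset returns, checked
# against the Pricing Equation E[M_{t+1} R_{i,t+1}] = 1. The harmonic-mean
# construction below is an assumption of this sketch, NOT the paper's
# common-feature estimator.
import numpy as np

rng = np.random.default_rng(1)
T, N = 240, 10
R = 1.01 + 0.05 * rng.standard_normal((T, N))   # synthetic gross returns

# Proxy: cross-sectional harmonic mean of gross returns in each period.
M = (1.0 / R).mean(axis=1)

pricing_errors = (M[:, None] * R).mean(axis=0) - 1.0
print(pricing_errors.round(4))   # near zero if the proxy prices the assets
print(M.mean())                  # close to one, cf. the paper's estimate
```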
Abstract:
This paper uses 1992:1-2004:2 quarterly data and two different methods (approximation under lognormality and calibration) to evaluate the existence of an equity-premium puzzle in Brazil. In contrast with some previous works in the Brazilian literature, I conclude that the model used by Mehra and Prescott (1985), with either additive or recursive preferences, is not able to satisfactorily rationalize the equity premium observed in the Brazilian data. The second contribution of the paper is to call attention to the fact that the utility function may not exist if the data (as is the case with the Brazilian time series) imply the existence of states in which high negative rates of consumption growth are attained with relatively high probability.
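As a worked example of the lognormal/CRRA approximation behind such calibration exercises: the equity premium is approximately gamma times the covariance of log consumption growth with the log market return, so the implied risk aversion is their ratio. The numbers below are hypothetical, not the Brazilian series from the paper.

```python
# Worked example of the lognormal/CRRA approximation used in calibration
# exercises: equity premium ~ gamma * cov(consumption growth, market return).
# The moments below are made up for illustration.
premium = 0.06          # observed mean equity premium (hypothetical)
cov_cg_rm = 0.002       # cov of log consumption growth and log market return
gamma_implied = premium / cov_cg_rm
print(gamma_implied)    # 30.0 -> implausibly high risk aversion = a "puzzle"
```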
Abstract:
The objective of this paper is to test for optimality of consumption decisions at the aggregate level (representative consumer), taking into account popular deviations from the canonical CRRA utility model: rule-of-thumb and habit behavior. First, we show that rule-of-thumb behavior in consumption is observationally equivalent to behavior obtained from the optimizing model of King, Plosser and Rebelo (Journal of Monetary Economics, 1988), casting doubt on how reliable standard rule-of-thumb tests are. Second, although Carroll (2001) and Weber (2002) have criticized the linearization and testing of Euler equations for consumption, we provide a deeper critique directly applicable to current rule-of-thumb tests. Third, we show that there is no reason why return aggregation cannot be performed in the nonlinear setting of the Asset-Pricing Equation, since the latter is a linear function of individual returns. Fourth, aggregation of the nonlinear Euler equation forms the basis of a novel test of deviations from the canonical CRRA model of consumption in the presence of rule-of-thumb and habit behavior. We estimated 48 Euler equations using GMM, with encouraging results vis-à-vis the optimality of consumption decisions. At the 5% level, we rejected optimality only twice out of 48 times. Empirical-test results show that we can still rely on the canonical CRRA model so prevalent in macroeconomics: out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant at the 5% level only twice, and the habit parameter to be statistically significant on four occasions. The main message of this paper is that proper return aggregation is critical to study intertemporal substitution in a representative-agent framework. In this case, we find little evidence of a lack of optimality in consumption decisions, and deviations from the CRRA utility model along the lines of rule-of-thumb behavior and habit in preferences represent the exception, not the rule.
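The aggregation argument in the third point rests on the Pricing Equation being linear in returns; a one-line derivation makes this explicit (notation assumed here: M is the SDF, R_i are gross asset returns, w_i are portfolio weights).

```latex
% If E_t[M_{t+1} R_{i,t+1}] = 1 holds for every asset i, then for any
% portfolio with weights w_i summing to one,
\[
  E_t\Big[ M_{t+1} \sum_i w_i R_{i,t+1} \Big]
  = \sum_i w_i \, E_t\big[ M_{t+1} R_{i,t+1} \big]
  = \sum_i w_i = 1 ,
\]
% so the aggregated (portfolio) return satisfies the same nonlinear Euler
% equation as each individual return, which is the sense in which return
% aggregation is unproblematic in the nonlinear setting.
```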
Abstract:
This paper tests the optimality of consumption decisions at the aggregate level, taking into account popular deviations from the canonical constant-relative-risk-aversion (CRRA) utility model: rule of thumb and habit. First, based on the critique in Carroll (2001) and Weber (2002) of the linearization and testing strategies using Euler equations for consumption, we provide extensive empirical evidence of their inappropriateness, a drawback for standard rule-of-thumb tests. Second, we propose a novel approach to test for consumption optimality in this context: nonlinear estimation coupled with return aggregation, where rule-of-thumb behavior and habit are special cases of an all-encompassing model. We estimated 48 Euler equations using GMM. At the 5% level, we rejected optimality only twice out of 48 times. Moreover, out of 24 regressions, we found the rule-of-thumb parameter to be statistically significant only twice. Hence, lack of optimality in consumption decisions represents the exception, not the rule. Finally, we found the habit parameter to be statistically significant on four occasions out of 24.
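Below is a first-step GMM sketch of the kind of nonlinear Euler equation estimated in this line of work, for the pure CRRA special case E[(beta * g^{-gamma} * R - 1) z] = 0 with lagged variables as instruments. The data, starting values, and instrument set are all illustrative assumptions, not any of the paper's 48 specifications.

```python
# First-step GMM sketch for the CRRA Euler equation
# E[(beta * g_{t+1}^{-gamma} * R_{t+1} - 1) z_t] = 0, with lagged variables
# as instruments z_t. Synthetic data for illustration only.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
T = 400
g = 1.02 + 0.02 * rng.standard_normal(T)   # gross consumption growth
R = 1.05 + 0.10 * rng.standard_normal(T)   # gross asset return

Z = np.column_stack([np.ones(T - 1), g[:-1], R[:-1]])  # instruments z_t
g1, R1 = g[1:], R[1:]                                  # dated t+1

def moments(theta):
    beta, gamma = theta
    u = beta * g1 ** (-gamma) * R1 - 1.0   # Euler-equation residual
    return (Z * u[:, None]).mean(axis=0)   # sample moment conditions

def J(theta, W):
    m = moments(theta)
    return m @ W @ m

W = np.eye(3)                              # first step: identity weighting
step1 = minimize(J, x0=[0.95, 2.0], args=(W,), method='Nelder-Mead')
print(step1.x)     # (beta, gamma) minimizing the first-step GMM objective
```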
Abstract:
Atmospheric conditions at the site of a cosmic ray observatory must be known for reconstructing observed extensive air showers. The Global Data Assimilation System (GDAS) is a global atmospheric model based on meteorological measurements and numerical weather predictions. GDAS provides altitude-dependent profiles of the main state variables of the atmosphere, such as temperature, pressure, and humidity. The original data and their application to the air shower reconstruction of the Pierre Auger Observatory are described. The utility of the GDAS data is shown by comparisons with radiosonde and weather station measurements obtained on-site in Malargüe and with averaged monthly models.
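As a minimal sketch of how such altitude-dependent profiles enter a reconstruction, the snippet below linearly interpolates a GDAS-style temperature profile at an arbitrary height; the level values are invented for illustration, not GDAS data.

```python
# Hypothetical sketch: interpolating an altitude-dependent, GDAS-style
# profile (altitude in m, temperature in K) at an arbitrary height, as an
# air-shower reconstruction would need. All values are made up.
import numpy as np

altitude_m = np.array([0, 1000, 3000, 5000, 10000, 20000])
temperature_K = np.array([293.0, 287.0, 274.0, 261.0, 228.0, 217.0])

def temperature_at(h_m):
    # Piecewise-linear interpolation between tabulated profile levels.
    return np.interp(h_m, altitude_m, temperature_K)

print(temperature_at(4200.0))
```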
Abstract:
Trichoepithelioma is a benign neoplasm that shares both clinical and histological features with basal cell carcinoma. It is important to distinguish these neoplasms because they require different clinical management and therapeutic planning. Many studies have addressed the use of immunohistochemistry to improve the differential diagnosis of these tumors, but they present conflicting results for the same markers, probably owing to the small number of basaloid tumors included in those studies, which generally did not exceed 50 cases. We built a tissue microarray with 162 trichoepithelioma and 328 basal cell carcinoma biopsies and tested a panel of immune markers composed of CD34, CD10, epithelial membrane antigen, Bcl-2, cytokeratins 15 and 20, and D2-40. The results were analyzed using multiple linear and logistic regression models. This analysis revealed a model that could differentiate trichoepithelioma from basal cell carcinoma in 36% of the cases. The panel of immunohistochemical markers required to differentiate between these tumors was composed of CD10, cytokeratin 15, cytokeratin 20, and D2-40. The results, generated from a large number of biopsies, confirm the overlapping epithelial and stromal immunohistochemical profiles of these basaloid tumors. They also corroborate the view that trichoepithelioma and basal cell carcinoma represent two different points in the differentiation of a single cell type. Despite the use of panels of immune markers, histopathological criteria associated with clinical data certainly remain the best guideline for the differential diagnosis of trichoepithelioma and basal cell carcinoma. Modern Pathology (2012) 25, 1345-1353; doi: 10.1038/modpathol.2012.96; published online 8 June 2012
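Here is a sketch, on synthetic data, of the kind of multiple logistic regression used for such a marker panel: binary immunomarker scores predicting trichoepithelioma versus basal cell carcinoma. The effect sizes are hypothetical; only the marker names and the approximate case counts come from the abstract.

```python
# Illustrative logistic-regression sketch in the spirit of the marker-panel
# analysis: binary immunomarker scores predicting trichoepithelioma (1)
# versus basal cell carcinoma (0). Synthetic data, hypothetical weights.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
n = 490                                   # ~162 TE + ~328 BCC, as in the study
markers = ['CD10', 'CK15', 'CK20', 'D2-40']
X = rng.integers(0, 2, size=(n, len(markers)))        # 1 = marker positive
logit = -1.0 + X @ np.array([0.8, 1.1, 0.6, 0.9])     # hypothetical weights
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

model = LogisticRegression().fit(X, y)
print(dict(zip(markers, model.coef_[0].round(2))))    # per-marker log-odds
```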
Abstract:
We propose a new method for fitting proportional hazards models with error-prone covariates. Regression coefficients are estimated by solving an estimating equation that is the average of the partial likelihood scores based on imputed true covariates. For the purpose of imputation, a linear spline model is assumed on the baseline hazard. We discuss consistency and asymptotic normality of the resulting estimators and propose a stochastic approximation scheme to obtain the estimates. The algorithm is easy to implement and reduces to the ordinary Cox partial likelihood approach when the measurement error has a degenerate distribution. Simulations indicate high efficiency and robustness. We consider the special case in which error-prone replicates are available for the unobserved true covariates. As expected, increasing the number of replicates for the unobserved covariates increases efficiency and reduces bias. We illustrate the practical utility of the proposed method with an Eastern Cooperative Oncology Group clinical trial in which a genetic marker, c-myc expression level, is subject to measurement error.
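The spline-based imputation itself is not reproduced here; the sketch below shows only the simplest replicate-based alternative (regression-calibration-style averaging of replicates before a standard Cox fit), which illustrates why replicates help. The lifelines package, the data, and the error model are all assumptions of the sketch, not the authors' method.

```python
# Sketch of the simplest replicate-based correction: average the error-prone
# replicates of the true covariate before fitting a Cox model. This is
# regression-calibration-style imputation, NOT the paper's spline-based
# estimating-equation method. Requires `pip install lifelines`.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(4)
n = 300
x_true = rng.standard_normal(n)                   # unobserved covariate
w1 = x_true + 0.5 * rng.standard_normal(n)        # error-prone replicate 1
w2 = x_true + 0.5 * rng.standard_normal(n)        # error-prone replicate 2

time = rng.exponential(scale=np.exp(-0.7 * x_true))
event = (rng.random(n) < 0.8).astype(int)         # ~20% flagged censored

df = pd.DataFrame({'T': time, 'E': event, 'x_hat': (w1 + w2) / 2})
cph = CoxPHFitter().fit(df, duration_col='T', event_col='E')
print(cph.params_)     # estimate of -0.7, still attenuated toward zero
```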
Abstract:
BACKGROUND: There is little evidence on differences across health care systems in the choice and outcome of treatment for chronic low back pain (CLBP), with spinal surgery and conservative treatment as the main options. At least six randomised controlled trials comparing these two options have been performed; they show conflicting results without clear-cut evidence for the superior effectiveness of any of the evaluated interventions, and they could not address whether the treatment effect varied across patient subgroups. Cost-utility analyses display inconsistent results when comparing surgical and conservative treatment of CLBP. Due to its higher feasibility, we chose to conduct a prospective observational cohort study.
METHODS: This study aims to examine whether:
1. Differences across health care systems result in different outcomes of surgical and conservative treatment of CLBP;
2. Patient characteristics (work-related and psychological factors, etc.) and co-interventions (physiotherapy, cognitive behavioural therapy, return-to-work programs, etc.) modify the outcome of treatment for CLBP;
3. Cost-utility in terms of quality-adjusted life years differs between surgical and conservative treatment of CLBP.
The study will recruit 1000 patients from orthopaedic spine units, rehabilitation centres, and pain clinics in Switzerland and New Zealand. Effectiveness will be measured by the Oswestry Disability Index (ODI) at baseline and after six months; the change in ODI will be the primary endpoint. Multiple linear regression models will be used, with the change in ODI from baseline to six months as the dependent variable and the type of health care system, type of treatment, patient characteristics, and co-interventions as independent variables. Interactions will be incorporated between type of treatment and the different co-interventions and patient characteristics. Cost-utility will be measured with an index based on EQ-5D in combination with cost data.
CONCLUSION: This study will provide evidence on whether differences across health care systems in the outcome of treatment of CLBP exist. It will classify patients with CLBP into clinical subgroups and help to identify target groups that might benefit from specific surgical or conservative interventions. Furthermore, cost-utility differences will be identified for different groups of patients with CLBP. The main results of this study should be replicated in future studies on CLBP.
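A sketch of the stated regression plan in statsmodels formula syntax follows; all column names (odi_change, system, physio, and so on) are hypothetical placeholders, not variables from the study protocol.

```python
# Sketch of the analysis plan: six-month change in ODI regressed on health
# care system, treatment, patient characteristics, and co-interventions,
# with treatment interactions. Column names are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

def fit_odi_model(df: pd.DataFrame):
    """OLS for six-month change in ODI with treatment interactions."""
    formula = ('odi_change ~ system + treatment*physio + treatment*cbt'
               ' + treatment*rtw_program + treatment*psych_score'
               ' + age + work_status')
    return smf.ols(formula, data=df).fit()
```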
Abstract:
Turrialba is one of the largest and most active stratovolcanoes in the Central Cordillera of Costa Rica and an excellent target for validating satellite data against ground-based measurements owing to its high elevation, relative ease of access, and persistent elevated SO2 degassing. The Ozone Monitoring Instrument (OMI) aboard the Aura satellite makes daily global observations of atmospheric trace gases, and it is used in this investigation to obtain volcanic SO2 retrievals in the Turrialba volcanic plume. We present and evaluate the relative accuracy of two OMI SO2 data analysis procedures: the automatic Band Residual Index (BRI) technique and the manual Normalized Cloud-mass (NCM) method. We find a linear correlation and good quantitative agreement between SO2 burdens derived from the BRI and NCM techniques, with an improved correlation when wet-season data are excluded. We also present the first comparisons between volcanic SO2 emission rates obtained from ground-based mini-DOAS measurements at Turrialba and three new OMI SO2 data analysis techniques: the MODIS smoke estimation, OMI SO2 lifetime, and OMI SO2 transect techniques. A robust validation of the OMI SO2 retrievals was achieved, with both qualitative and quantitative agreement under specific atmospheric conditions, demonstrating the utility of satellite measurements for estimating accurate SO2 emission rates and monitoring passively degassing volcanoes.
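A sketch of the BRI-versus-NCM comparison step: ordinary least-squares correlation between the two burden series, with and without wet-season scenes. All numbers are synthetic; only the pattern of an improved correlation when the wet season is excluded mirrors the abstract.

```python
# Sketch of the retrieval comparison: linear correlation between SO2 burdens
# from two analysis procedures (here called `bri` and `ncm`), with the wet
# season excluded as in the reported improvement. Synthetic numbers.
import numpy as np
from scipy.stats import linregress

rng = np.random.default_rng(5)
bri = rng.gamma(2.0, 1.5, size=120)                 # SO2 burden, arb. units
ncm = 0.9 * bri + 0.3 * rng.standard_normal(120)
wet_season = rng.random(120) < 0.3                  # flag for exclusion

fit_all = linregress(bri, ncm)
fit_dry = linregress(bri[~wet_season], ncm[~wet_season])
print(fit_all.rvalue, fit_dry.rvalue)
```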
Abstract:
BACKGROUND Contact force (CF) is an important determinant of lesion formation in atrial endocardial radiofrequency ablation, but there are minimal published data on CF and ventricular lesion formation. We studied the impact of CF on lesion formation in an ovine model, both endocardially and epicardially. METHODS AND RESULTS Twenty sheep received 160 epicardial and 160 endocardial ventricular radiofrequency applications via percutaneous access, using either a 3.5-mm irrigated-tip catheter (Thermocool, Biosense-Webster; n=160) or a 3.5-mm irrigated-tip catheter with CF assessment (Tacticath, Endosense; n=160). Power was delivered at 30 watts for 60 seconds, either when catheter/tissue contact was felt to be good or, with the Tacticath, when CF>10 g. After completion of all lesions, acute lesion dimensions were measured at pathology. Identifiable lesion formation from radiofrequency application improved with the aid of CF information, from 78% to 98% on the endocardium (P<0.001) and from 90% to 100% on the epicardium (P=0.02). The mean total force was greater on the endocardium (39±18 g versus 21±14 g for the epicardium; P<0.001), mainly because of axial force. Despite the force-time integral being greater endocardially, epicardial lesions were larger (231±182 mm³ versus 209±131 mm³; P=0.02), probably because of the absence of the heat-sink effect of circulating blood, and covered a greater area (41±27 mm² versus 29±17 mm²; P=0.03) because of catheter orientation. CONCLUSIONS In the absence of CF feedback, 22% of endocardial radiofrequency applications thought to have good contact did not result in lesion formation. Epicardial ablation is associated with larger lesions.
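As a rough check of one reported contrast, the sketch below runs a chi-square test on the endocardial lesion-formation rates (78% vs. 98%), assuming, which the abstract does not state, that the 160 endocardial applications split evenly (80 per catheter).

```python
# Sketch of one reported contrast: endocardial lesion formation with vs.
# without contact-force feedback (78% vs. 98%), under the assumption of
# 80 endocardial applications per catheter arm.
import numpy as np
from scipy.stats import chi2_contingency

n_per_arm = 80
formed_no_cf = round(0.78 * n_per_arm)   # 62 lesions formed
formed_cf = round(0.98 * n_per_arm)      # 78 lesions formed
table = np.array([[formed_no_cf, n_per_arm - formed_no_cf],
                  [formed_cf, n_per_arm - formed_cf]])
chi2, p, dof, _ = chi2_contingency(table)
print(p)   # small p-value, consistent with the reported P<0.001
```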
Abstract:
Increasing amounts of clinical research data are collected by manual data entry into electronic source systems and directly from research subjects. For such manually entered source data, common data-cleaning methods such as post-entry identification and resolution of discrepancies, or double data entry, are not feasible. However, the data accuracy achieved without these mechanisms may be lower than desired for a particular research use. We evaluated a heuristic usability method for its utility as a tool to independently and prospectively identify data collection form questions associated with data errors. The method evaluated had a promising sensitivity of 64% and a specificity of 67%. The method was used as described in the usability literature, with no further adaptations or specialization for predicting data errors. We conclude that usability evaluation methodology should be further investigated for use in data quality assurance.
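For reference, the reported operating characteristics reduce to the standard confusion-matrix definitions; the counts below are hypothetical values chosen only to reproduce the quoted 64% sensitivity and 67% specificity.

```python
# Sensitivity and specificity from a confusion matrix:
# sensitivity = TP/(TP+FN), specificity = TN/(TN+FP).
# The counts are hypothetical, chosen to reproduce the quoted figures.
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    return tp / (tp + fn), tn / (tn + fp)

print(sens_spec(tp=16, fn=9, tn=33, fp=16))  # ~ (0.64, 0.67)
```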
Abstract:
The selection of a model to guide the understanding and resolution of community problems is an important issue relating to the foundations of public health practice: assessment, policy development, and assurance. Many assessment models produce a diagnosis of community weaknesses but fail to promote planning and interventions. Rapid Participatory Appraisal (RPA) is a participatory action research model that regards assessment as the first step in the problem-solving process and claims to achieve assessment and policy development within limited resources of time and money. Literature documenting the fulfillment of these claims, and thereby supporting the utility of the model, is relatively sparse and difficult to obtain. Very few articles discuss the changes resulting from RPA assessments in urban areas, and those that do describe studies conducted outside the U.S.A. This study examines the utility of the RPA model and its underlying theories (systems theory, grounded theory, and principles of participatory change), as illustrated by the case study of a community assessment conducted for the Texas Diabetes Institute (TDI), San Antonio, Texas, and its subsequent outcomes. Diabetes has a high prevalence and is a major issue in San Antonio. Faculty and students conducted the assessment through informal collaboration between two nursing and public health assessment courses, providing practical student experiences. The study area was large, and the flexibility of the model was tested by its use in contiguous sub-regions, with aggregated results reanalyzed for the whole study area. Official TDI reports and a mail survey of agency employees described the policy development resulting from the community diagnoses revealed by the assessment. The RPA model met the criteria for utility from the perspectives of merit, worth, efficiency, and effectiveness: it best met the agencies' criteria (merit), met the data needs of TDI in this particular situation (worth), provided valid results within budget, time, and personnel constraints (efficiency), and stimulated policy development by TDI (effectiveness). The RPA model thus appears to have utility for community assessment, diagnosis, and policy development in circumstances similar to the TDI diabetes study.
Abstract:
Research has shown that disease-specific health-related quality of life (HRQoL) instruments are more responsive than generic instruments to particular disease conditions. However, only a few studies have used disease-specific instruments to measure HRQoL in hemophilia. The goal of this project was to develop a disease-specific utility instrument that measures patient preferences for various hemophilia health states. The visual analog scale (VAS), a ranking method, and the standard gamble (SG), a choice-based method incorporating risk, were used to measure patient preferences. Study participants (n = 128) were recruited from the UT/Gulf States Hemophilia and Thrombophilia Center and stratified by age: 0–18 years and 19+. Test-retest reliability was demonstrated for both VAS and SG instruments: overall within-subject correlation coefficients were 0.91 and 0.79, respectively. Results showed statistically significant differences in responses between pediatric and adult participants when using the SG (p = .045), but no significant differences between these groups when using the VAS (p = .636). When responses to the VAS and SG instruments were compared, statistically significant differences were observed in both the pediatric (p < .0001) and adult (p < .0001) groups. Data from this study also demonstrated that persons with hemophilia of varying disease severity, as well as those who were HIV infected, were able to evaluate a range of health states for hemophilia. This has important implications for the study of quality of life in hemophilia and the development of disease-specific HRQoL instruments. The utility measures obtained from this study can be applied in economic evaluations that analyze the cost-utility of alternative hemophilia treatments. Results derived from the SG indicate that age can influence patients' preferences regarding their state of health, which may have implications for considering treatment options based on the mean age of the population under consideration. Although both instruments independently demonstrated reliability and validity, the results indicate that the two measures may not be interchangeable.
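Finally, a sketch of the test-retest computation: within-subject correlation of two administrations of the same instrument on a 0-1 utility scale. The data are synthetic; the study's reported coefficients were 0.91 (VAS) and 0.79 (SG).

```python
# Sketch of a test-retest reliability check: Pearson correlation between two
# administrations of the same preference instrument (e.g., VAS utilities on
# a 0-1 scale). Synthetic data for n = 128 participants.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(6)
true_pref = rng.uniform(0.2, 1.0, size=128)            # latent preferences
vas_t1 = np.clip(true_pref + 0.05 * rng.standard_normal(128), 0, 1)
vas_t2 = np.clip(true_pref + 0.05 * rng.standard_normal(128), 0, 1)
r, p = pearsonr(vas_t1, vas_t2)
print(r)   # high within-subject correlation indicates good reliability
```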