983 results for Standard information


Relevance:

30.00%

Publisher:

Abstract:

This paper studies the implications of correlation of private signals about the liquidation value of a risky asset in a variation of a standard noisy rational expectations model in which traders receive endowment shocks which are private information and have a common component. We find that a necessary condition to generate multiple linear partially revealing rational expectations equilibria is the existence of several sources of information dispersion. In this context equilibrium multiplicity tends to occur when information is more dispersed. A necessary condition to have strategic complementarity in information acquisition is to have multiple equilibria. When the equilibrium is unique there is strategic substitutability in information acquisition, corroborating the result obtained in Grossman and Stiglitz (1980).
JEL Classification: D82, D83, G14
Keywords: Multiplicity of equilibria, strategic complementarity, asymmetric information.

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: In patients with malignant pleural mesothelioma undergoing multimodality therapy, treatment toxicity may outweigh the benefit of progression-free survival. The subjective experience across different treatment phases is an important clinical outcome. This study compares a standard with an individual quality of life (QoL) measure used in a multi-center phase II trial. PATIENTS AND METHODS: Sixty-one patients with stage I-III technically operable pleural mesothelioma were treated with preoperative chemotherapy, followed by pleuropneumonectomy and subsequent radiotherapy. QoL was assessed at baseline, at day 1 of cycle 3, and at 1, 3 and 6 months post-surgery using the Rotterdam Symptom Checklist (RSCL) and the Schedule for the Evaluation of Individual Quality of Life-Direct Weighting (SEIQoL-DW), a measure based on five individually nominated and weighted QoL domains. RESULTS: Completion rates were 98% (RSCL) and 92% (SEIQoL) at baseline and 98%/89% at cycle 3, respectively. Of the operated patients (N=45), RSCL and SEIQoL data were available from 86%/72%, 93%/74%, and 94%/76% at months 1, 3, and 6 post-surgery. Average assessment time was 24 min for the SEIQoL compared with 8 min for the RSCL. Median changes from baseline indicate that both the RSCL QoL overall score and the SEIQoL index remained stable during chemotherapy, with a clinically significant deterioration (change ≥ 8 points) 1 month after surgery (median change of -66 and -14 for RSCL and SEIQoL, respectively). RSCL QoL overall scores improved thereafter, but remained below baseline level until 6 months after surgery. SEIQoL scores returned to baseline level at month 3 after surgery, but worsened again at month 6. The RSCL QoL overall score and SEIQoL index were moderately correlated at baseline (r=.30; p≤.05) and at 6-month follow-up (r=.42; p≤.05) but not at the other time points.
CONCLUSION: The SEIQoL assessment seems feasible within a phase II clinical trial, but may require more effort from staff. More distinctive QoL changes in accordance with clinical changes were measured with the RSCL. Our findings suggest that the two measures are not interchangeable: the RSCL is to be favored when information related mainly to the course of disease and treatment is of interest.

Relevance:

30.00%

Publisher:

Abstract:

The National Obesity Observatory was established to provide a single point of contact for wide-ranging authoritative information on data and evidence related to obesity, overweight, underweight and their determinants. The Standard Evaluation Framework is a list of data collection criteria and supporting guidance for collecting high-quality information to support the evaluation of weight management interventions. This is a quick reference guide to the core criteria of the Standard Evaluation Framework. Essential criteria are presented as the minimum recommended data for evaluating a weight management intervention; desirable criteria are additional data that would enhance the evaluation.

Relevance:

30.00%

Publisher:

Abstract:

Background. The use of hospital discharge administrative data (HDAD) has been recommended for automating, improving, or even substituting for population-based cancer registries. The frequency of false positive and false negative cases makes local validation advisable. Methods. The aim of this study was to detect newly diagnosed, false positive and false negative cases of cancer from hospital discharge claims, using four Spanish population-based cancer registries as the gold standard. Prostate cancer was used as a case study. Results. A total of 2286 incident cases of prostate cancer registered in 2000 were used for validation. In the most sensitive algorithm (that using five diagnostic codes), sensitivity estimates ranged from 14.5% (95% CI 10.3-19.6) to 45.7% (95% CI 41.4-50.1). In the most predictive algorithm (that using five diagnostic and five surgical codes), positive predictive value estimates ranged from 55.9% (95% CI 42.4-68.8) to 74.3% (95% CI 67.0-80.6). The most frequent reason for false positive cases was prevalent cases inadequately considered as newly diagnosed cancers, accounting for 61.1% to 82.3% of false positive cases. The most frequent reason for false negative cases was cases not attended in hospital settings; in the most predictive algorithm, these accounted for 34.4% to 69.7% of false negative cases. Conclusions. HDAD might be a helpful tool for cancer registries to reach their goals. The findings suggest that, for automating cancer registries, algorithms combining diagnoses and procedures are the best option. However, for cancer surveillance purposes, in cancers like prostate cancer in which care is not only hospital-based, combining inpatient and outpatient information will be required.
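The sensitivity and positive predictive value figures above follow from a 2x2 comparison of algorithm output against the registry gold standard. A minimal sketch of that arithmetic, with invented counts rather than the study's data:

```python
# Hypothetical illustration of the validation arithmetic behind a claims-based
# algorithm evaluated against a cancer-registry gold standard.
# The counts below are invented, not the study's data.

def sensitivity(tp: int, fn: int) -> float:
    """Fraction of registry (gold-standard) cases the algorithm detects."""
    return tp / (tp + fn)

def positive_predictive_value(tp: int, fp: int) -> float:
    """Fraction of algorithm-flagged cases that are true incident cancers."""
    return tp / (tp + fp)

# Invented example counts for one algorithm in one registry.
tp, fp, fn = 320, 110, 380
print(f"Sensitivity: {sensitivity(tp, fn):.1%}")  # 320/700
print(f"PPV: {positive_predictive_value(tp, fp):.1%}")  # 320/430
```

A false-negative-heavy table like this one reproduces the study's pattern of low sensitivity alongside a usable PPV.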

Relevance:

30.00%

Publisher:

Abstract:

The automatic interpretation of conventional traffic signs is very complex and time consuming. The paper concerns an automatic warning system for driving assistance. It does not interpret the standard traffic signs on the roadside; the proposal is to incorporate into the existing signs another type of traffic sign whose information can be more easily interpreted by a processor. The information to be added is profuse, and therefore the most important objective is the robustness of the system. The basic proposal of this new philosophy is that the co-pilot system for automatic warning and driving assistance can interpret with greater ease the information contained in the new sign, whilst the human driver only has to interpret the "classic" sign. One coding that has been tested with good results, and which seems easy to implement, is a rectangular sign with four vertical bars of different colours. The size of these signs is equivalent to that of conventional signs (approximately 0.4 m²). The colour information from the sign can be easily interpreted by the proposed processor, and the interpretation is much easier and quicker than that of the pictographs of the classic signs.
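As a rough illustration of why a bar-colour code is simple for a processor to read, the sketch below treats each vertical bar as one digit in a small alphabet. The palette and the colour-to-digit mapping are assumptions for illustration; the paper does not specify the actual assignment.

```python
# Sketch of decoding a 4-bar colour sign, assuming a hypothetical 4-colour
# palette. The colour-to-value assignment is invented for illustration.

PALETTE = ["red", "green", "blue", "yellow"]  # assumed colour alphabet

def decode_sign(bars):
    """Map each bar's colour to a base-4 digit and combine into one code."""
    code = 0
    for colour in bars:
        code = code * len(PALETTE) + PALETTE.index(colour)
    return code

# Four bars over a 4-colour palette can encode 4**4 = 256 distinct messages.
print(decode_sign(["blue", "red", "green", "yellow"]))  # → 135
```

Decoding reduces to a colour classification per bar, which is far more robust than recognising an arbitrary pictograph.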

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the evaluation report after introducing a Standard-based Interoperability Framework (SIF) between the Virgen del Rocío University Hospital (VRUH) Haemodialysis (HD) Unit and 5 outsourced HD centres, in order to improve integrated care by automatically sharing patients' Electronic Health Records (EHR) and lab test reports. A pre-post study was conducted over fourteen months. The number of lab test reports, of both emergency and routine nature, for 379 outpatients was computed before and after the integration of the SIF. Before the SIF, 19.38 lab tests per patient were shared between VRUH and the HD centres, 5.52 of them of emergency nature and 13.85 routine. After integrating the SIF, 17.98 lab tests per patient were shared, 3.82 of emergency nature and 14.16 routine. The inclusion of the SIF in the HD Integrated Care Process has led to an average reduction of 1.39 (p=0.775) lab test requests per patient, including a reduction of 1.70 (p=0.084) in those of emergency nature, whereas an increase of 0.31 (p=0.062) was observed in routine lab tests. This strategy has led to a reduction in emergency lab test requests, which implies a potential improvement in integrated care.
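The pre-post comparison reduces to a difference of per-patient means. A minimal sketch with invented per-patient counts (the study reports only the aggregate figures):

```python
# Toy pre-post comparison of lab-test requests per patient.
# The per-patient totals below are invented; the study reports only means.

def mean(xs):
    """Arithmetic mean of a list of numbers."""
    return sum(xs) / len(xs)

pre = [21, 18, 20, 19]    # hypothetical per-patient totals before the SIF
post = [18, 17, 19, 17]   # hypothetical per-patient totals after the SIF

reduction = mean(pre) - mean(post)
print(f"Average reduction: {reduction:.2f} lab tests per patient")
```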

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND AND STUDY AIMS: The current gold standard in Barrett's esophagus monitoring consists of four-quadrant biopsies every 1-2 cm in accordance with the Seattle protocol. Adding brush cytology processed by digital image cytometry (DICM) may further increase the detection of patients with Barrett's esophagus who are at risk of neoplasia. The aim of the present study was to assess the additional diagnostic value and accuracy of DICM when added to the standard histological analysis in a cross-sectional multicenter study of patients with Barrett's esophagus in Switzerland. METHODS: One hundred sixty-four patients with Barrett's esophagus underwent 239 endoscopies with biopsy and brush cytology. DICM was carried out on 239 cytology specimens. Measures of the test accuracy of DICM (relative risk, sensitivity, specificity, likelihood ratios) were obtained by dichotomizing the histopathology results (high-grade dysplasia or adenocarcinoma vs. all others) and DICM results (aneuploidy/intermediate pattern vs. diploidy). RESULTS: DICM revealed diploidy in 83% of 239 endoscopies, an intermediate pattern in 8.8%, and aneuploidy in 8.4%. An intermediate DICM result carried a relative risk (RR) of 12, and aneuploidy an RR of 27, for high-grade dysplasia/adenocarcinoma. Adding DICM to the standard biopsy protocol, a pathological cytometry result (aneuploid or intermediate) was found in 25 of 239 endoscopies (11%; 18 patients) with low-risk histology (no high-grade dysplasia or adenocarcinoma). During follow-up of 14 of these 18 patients, histological deterioration was seen in 3 (21%). CONCLUSION: DICM from brush cytology may add important information to a standard biopsy protocol by identifying a subgroup of BE patients with high-risk cellular abnormalities.
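All of the accuracy measures reported here (relative risk, sensitivity, specificity, likelihood ratios) can be derived from a single dichotomized 2x2 table. A sketch with invented counts, not the study's data:

```python
# Accuracy measures from a dichotomized 2x2 table: test result
# (e.g. aneuploid/intermediate vs. diploid) against outcome
# (e.g. HGD/adenocarcinoma vs. other). Counts below are invented.

def accuracy_measures(tp, fp, fn, tn):
    """Standard diagnostic-accuracy measures from 2x2 cell counts."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    return {
        "sensitivity": sens,
        "specificity": spec,
        "LR+": sens / (1 - spec),            # positive likelihood ratio
        "LR-": (1 - sens) / spec,            # negative likelihood ratio
        # risk of outcome given positive test / risk given negative test
        "relative_risk": (tp / (tp + fp)) / (fn / (fn + tn)),
    }

m = accuracy_measures(tp=10, fp=30, fn=4, tn=195)
print({k: round(v, 2) for k, v in m.items()})
```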

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to compare the diagnostic efficiency of plain film and spiral CT examinations with 3D reconstructions in 42 tibial plateau fractures, and to assess the accuracy of the two techniques for pre-operative surgical planning in 22 cases. Forty-two tibial plateau fractures were examined with plain film (anteroposterior, lateral, two obliques) and spiral CT with surface-shaded-display 3D reconstructions. The Swiss AO-ASIF classification system of bone fractures from Müller was used. In 22 cases the surgical plans and the sequence of reconstruction of the fragments were prospectively determined with both techniques, successively, and then correlated with the surgical reports and post-operative plain film. The fractures were underestimated with plain film in 18 of 42 cases (43%). Owing to the precise pre-operative information provided by the spiral CT 3D reconstructions, the surgical plans based on plain film were modified and adjusted in 13 of 22 cases (59%). Spiral CT 3D reconstructions give a better and more accurate demonstration of the tibial plateau fracture and allow a more precise pre-operative surgical plan.

Relevance:

30.00%

Publisher:

Abstract:

There are many situations in which individuals have a choice of whether or not to observe eventual outcomes. In these instances, individuals often prefer to remain ignorant. These contexts are outside the scope of analysis of the standard von Neumann-Morgenstern (vNM) expected utility model, which does not distinguish between lotteries for which the agent sees the final outcome and those for which he does not. I develop a simple model that admits preferences for making an observation or for remaining in doubt. I then use this model to analyze the connection between preferences of this nature and risk-attitude. This framework accommodates a wide array of behavioral patterns that violate the vNM model, and that may not seem related, prima facie. For instance, it admits self-handicapping, in which an agent chooses to impair his own performance. It also accommodates a status quo bias without having recourse to framing effects, or to an explicit definition of reference points. In a political economy context, voters have strict incentives to shield themselves from information. In settings with other-regarding preferences, this model predicts observed behavior that seems inconsistent with either altruism or self-interested behavior.

Relevance:

30.00%

Publisher:

Abstract:

We study a novel class of noisy rational expectations equilibria in markets with a large number of agents. We show that, as long as noise increases with the number of agents in the economy, the limiting competitive equilibrium is well-defined and leads to non-trivial information acquisition, perfect information aggregation, and partially revealing prices, even if per capita noise tends to zero. We find that in such an equilibrium risk sharing and price revelation play different roles than in the standard limiting economy in which per capita noise is not negligible. We apply our model to study information sales by a monopolist, information acquisition in multi-asset markets, and derivatives trading. The limiting equilibria are shown to be perfectly competitive, even when a strategic solution concept is used.

Relevance:

30.00%

Publisher:

Abstract:

We show that unconditionally efficient returns neither achieve the maximum unconditional Sharpe ratio nor display zero unconditional Jensen's alphas when returns are predictable. Next, we define a new type of efficient returns that is characterized by those unconditional properties. We also study a different type of efficient returns that is rationalized by standard mean-variance preferences and motivates new Sharpe ratios and Jensen's alphas. We revisit the testable implications of asset pricing models from the perspective of the three sets of efficient returns. We also revisit the empirical evidence on the conditional variants of the CAPM and the Fama-French model from a portfolio perspective.

Relevance:

30.00%

Publisher:

Abstract:

This paper describes a project led by the Instituto Brasileiro de Informação em Ciência e Tecnologia (Ibict), a government institution, to build a national digital library for electronic theses and dissertations, the Biblioteca Digital de Teses e Dissertações (BDTD). The project has been a collaborative effort among Ibict, universities and other research centers in Brazil. The developers adopted a system architecture based on the Open Archives Initiative (OAI), in which universities and research centers act as data providers and Ibict as a service provider. A Brazilian metadata standard for electronic theses and dissertations was developed for the digital library. A toolkit including an open-source package was also developed by Ibict to be distributed to potential data providers. The BDTD has been integrated with an international initiative, the Networked Digital Library of Theses and Dissertations (NDLTD). Discussions in the paper address various issues related to project design, development and management, as well as the role played by Ibict. Conclusions highlight some important lessons learned to date and challenges for the future in expanding the BDTD project.
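In the OAI architecture described, the service provider harvests metadata from each data provider over the OAI-PMH protocol. A minimal sketch of building such a harvest request; the base URL and set name are hypothetical, while `verb=ListRecords` and the `oai_dc` (Dublin Core) prefix are standard OAI-PMH parameters:

```python
# Sketch of an OAI-PMH harvest request as a service provider would issue it.
# The base URL and set name are hypothetical examples.
from urllib.parse import urlencode

def list_records_url(base_url, metadata_prefix="oai_dc", set_spec=None):
    """Build an OAI-PMH ListRecords request URL."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    if set_spec:
        params["set"] = set_spec  # optional selective-harvesting argument
    return f"{base_url}?{urlencode(params)}"

print(list_records_url("http://example.edu/oai", set_spec="theses"))
```

The data provider answers with an XML list of metadata records, which the service provider indexes centrally.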

Relevance:

30.00%

Publisher:

Abstract:

Background: Conventional magnetic resonance imaging (MRI) techniques are highly sensitive to detect multiple sclerosis (MS) plaques, enabling a quantitative assessment of inflammatory activity and lesion load. In quantitative analyses of focal lesions, manual or semi-automated segmentations have been widely used to compute the total number of lesions and the total lesion volume. These techniques, however, are both challenging and time-consuming, and are also prone to intra-observer and inter-observer variability.
Aim: To develop an automated approach to segment brain tissues and MS lesions from brain MRI images. The goal is to reduce user interaction and to provide an objective tool that eliminates the inter- and intra-observer variability.
Methods: Based on the recent methods developed by Souplet et al. and de Boer et al., we propose a novel pipeline which includes the following steps: bias correction, skull stripping, atlas registration, tissue classification, and lesion segmentation. After the initial pre-processing steps, an MRI scan is automatically segmented into 4 classes: white matter (WM), grey matter (GM), cerebrospinal fluid (CSF) and partial volume. An expectation maximisation method which fits a multivariate Gaussian mixture model to T1-w, T2-w and PD-w images is used for this purpose. Based on the obtained tissue masks, and using the estimated GM mean and variance, we apply an intensity threshold to the FLAIR image, which provides the lesion segmentation. With the aim of improving this initial result, spatial information coming from the neighbouring tissue labels is used to refine the final lesion segmentation.
Results: The experimental evaluation was performed using real 1.5T data sets and the corresponding ground truth annotations provided by expert radiologists. The following values were obtained: 64% true positive (TP) fraction, 80% false positive (FP) fraction, and an average surface distance of 7.89 mm. The results of our approach were quantitatively compared to our implementations of the works of Souplet et al. and de Boer et al., obtaining higher TP and lower FP values.
Conclusion: Promising MS lesion segmentation results have been obtained in terms of TP. However, the high number of FP, which is still a well-known problem of all automated MS lesion segmentation approaches, has to be reduced before such methods can be used in standard clinical practice. Our future work will focus on tackling this issue.
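The GM-based intensity-thresholding step can be sketched as follows: voxels on the FLAIR image brighter than the grey-matter mean plus a multiple of its standard deviation become lesion candidates. The multiplier `k` and the toy array are assumptions for illustration, not the paper's values:

```python
# Sketch of GM-derived intensity thresholding on a FLAIR image.
# The threshold multiplier k and the toy data are illustrative only.
import numpy as np

def lesion_mask(flair, gm_mean, gm_std, k=3.0):
    """Binary mask of FLAIR voxels above the GM-derived threshold."""
    return flair > gm_mean + k * gm_std

flair = np.array([[100.0, 180.0], [95.0, 210.0]])  # toy 2x2 "FLAIR slice"
mask = lesion_mask(flair, gm_mean=100.0, gm_std=20.0)
print(mask)  # only voxels above 160.0 are flagged
```

In the pipeline described, this initial mask is then refined using the labels of neighbouring tissue voxels.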

Relevance:

30.00%

Publisher:

Abstract:

The Vertical Clearance Log is prepared for the purpose of providing vertical clearance restrictions by route on the primary road system. This report is used by the Iowa Department of Transportation’s Motor Carrier Services to route oversize vehicles around structures with vertical restrictions too low for the cargo height. The source of the data is the Geographic Information Management System (GIMS) that is managed by the Office of Research & Analytics in the Performance & Technology Division. The data is collected by inspection crews and through the use of LiDAR technology to reflect changes to structures on the primary road system. This log is produced annually.
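A hypothetical sketch of how such a log supports routing decisions: filter out any route whose lowest structure clearance falls below the cargo height plus a safety margin. The field names, margin, and numbers are invented, not taken from the log:

```python
# Toy routing filter over vertical-clearance records.
# Field names, clearances, and the safety margin are invented.

structures = [
    {"route": "US 30", "clearance_ft": 14.2},
    {"route": "US 30", "clearance_ft": 16.5},
    {"route": "IA 1", "clearance_ft": 15.8},
]

def passable_routes(structures, cargo_height_ft, margin_ft=0.25):
    """Routes where every structure clears the load by at least the margin."""
    needed = cargo_height_ft + margin_ft
    blocked = {s["route"] for s in structures if s["clearance_ft"] < needed}
    return sorted({s["route"] for s in structures} - blocked)

print(passable_routes(structures, cargo_height_ft=14.5))  # → ['IA 1']
```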

Relevance:

30.00%

Publisher:

Abstract:

We present the most comprehensive comparison to date of the predictive benefit of genetics in addition to currently used clinical variables, using genotype data for 33 single-nucleotide polymorphisms (SNPs) in 1,547 Caucasian men from the placebo arm of the REduction by DUtasteride of prostate Cancer Events (REDUCE®) trial. Moreover, we conducted a detailed comparison of three techniques for incorporating genetics into clinical risk prediction. The first method was a standard logistic regression model, which included separate terms for the clinical covariates and for each of the genetic markers. This approach ignores a substantial amount of external information concerning effect sizes for these Genome-Wide Association Study (GWAS)-replicated SNPs. The second and third methods investigated two possible approaches to incorporating meta-analysed external SNP effect estimates: one via a weighted PCa 'risk' score based solely on the meta-analysis estimates, and the other incorporating both the current and prior data via informative priors in a Bayesian logistic regression model. All methods demonstrated a slight improvement in predictive performance upon incorporation of genetics. The two methods that incorporated external information showed the greatest increase in receiver-operating-characteristic AUC, from 0.61 to 0.64. The value of our methods comparison is likely to lie in observations of performance similarities, rather than differences, between three approaches of very different resource requirements. The two methods that included external information performed best, but only marginally so, despite substantial differences in complexity.
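The weighted risk-score approach can be sketched as a dot product of per-SNP risk-allele counts with externally estimated log odds ratios. The effect sizes and genotypes below are invented for illustration, not REDUCE data:

```python
# Sketch of a weighted genetic risk score: risk-allele counts (0/1/2)
# weighted by externally estimated log odds ratios. All numbers invented.
import math

def risk_score(allele_counts, log_odds_ratios):
    """Weighted sum of risk-allele counts and external effect sizes."""
    return sum(g * b for g, b in zip(allele_counts, log_odds_ratios))

genotypes = [0, 1, 2, 1]          # risk-allele counts at 4 hypothetical SNPs
betas = [0.10, 0.25, 0.05, 0.18]  # hypothetical meta-analysed log odds ratios

score = risk_score(genotypes, betas)
print(f"score = {score:.2f}, odds multiplier = {math.exp(score):.2f}")
```

Such a score can then enter a clinical model as a single covariate, which is what distinguishes this approach from fitting a separate term per SNP.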