118 results for panel estimates
Abstract:
BACKGROUND: The quality of colon cleansing is a major determinant of the quality of colonoscopy. To our knowledge, the impact of bowel preparation on the quality of colonoscopy has not been assessed prospectively in a large multicenter study. Therefore, this study assessed the factors that determine colon-cleansing quality and the impact of cleansing quality on the technical performance and diagnostic yield of colonoscopy. METHODS: Twenty-one centers from 11 countries participated in this prospective observational study. Colon-cleansing quality was assessed on a 5-point scale and was categorized on 3 levels. The clinical indication for colonoscopy, diagnoses, and technical parameters related to colonoscopy were recorded. RESULTS: A total of 5832 patients were included in the study (48.7% men; mean age 57.6 [15.9] years). Cleansing quality was lower in elderly patients and in hospitalized patients. Procedures in poorly prepared patients were longer, more difficult, and more often incomplete. The detection of polyps of any size depended on cleansing quality: odds ratio (OR) 1.73, 95% confidence interval (CI) [1.28, 2.36], for intermediate-quality compared with low-quality preparation; and OR 1.46, 95% CI [1.11, 1.93], for high-quality compared with low-quality preparation. For polyps >10 mm in size, the corresponding ORs were 1.0 for low-quality cleansing, 1.83 (95% CI [1.11, 3.05]) for intermediate-quality cleansing, and 1.72 (95% CI [1.11, 2.67]) for high-quality cleansing. Cancers were not detected less frequently in the case of poor preparation. CONCLUSIONS: Cleansing quality critically determines the quality, difficulty, speed, and completeness of colonoscopy, and is lower in hospitalized patients and in patients with higher levels of comorbid conditions. The proportion of patients who undergo polypectomy increases with higher cleansing quality, whereas colon cancer detection does not seem to depend critically on the quality of bowel preparation.
Abstract:
The aim of this study was to evaluate and compare organ doses delivered to patients in wrist and petrous bone examinations using a multislice spiral computed tomography (CT) scanner and a C-arm cone-beam CT system equipped with a flat-panel detector (XperCT). For this purpose, doses to the target organ, i.e. wrist or petrous bone, together with those to the most radiosensitive nearby organs, i.e. thyroid and eye lens, were measured and compared. Furthermore, image quality was compared for both imaging systems and different acquisition modes using a Catphan phantom. Results show that both systems guarantee adequate accuracy for diagnostic purposes in wrist and petrous bone examinations. Compared with the CT scanner, the XperCT system slightly reduces the dose to target organs and shortens the overall duration of the wrist examination. In addition, using the XperCT enables a reduction of the dose to the eye lens during head scans (skull base and ear examinations).
Abstract:
OBJECTIVE: To assess the suitability of a hot-wire anemometer infant monitoring system (Florian, Acutronic Medical Systems AG, Hirzel, Switzerland) for measuring flow and tidal volume (Vt) proximal to the endotracheal tube during high-frequency oscillatory ventilation. DESIGN: In vitro model study. SETTING: Respiratory research laboratory. SUBJECT: In vitro lung model simulating moderate to severe respiratory distress. INTERVENTION: The lung model was ventilated with a SensorMedics 3100A ventilator. Vt was recorded from the monitor display (Vt-disp) and compared with the gold standard (Vt-adiab), which was calculated using the adiabatic gas equation from pressure changes inside the model. MEASUREMENTS AND MAIN RESULTS: A range of Vt (1-10 mL), frequencies (5-15 Hz), pressure amplitudes (10-90 cm H2O), inspiratory times (30% to 50%), and FiO2 (0.21-1.0) was used. Accuracy was determined by using modified Bland-Altman plots (95% limits of agreement). An exponential decrease in Vt was observed with increasing oscillatory frequency. Mean ΔVt-disp was 0.6 mL (limits of agreement, -1.0 to 2.1) with a linear frequency dependence. Mean ΔVt-disp was -0.2 mL (limits of agreement, -0.5 to 0.1) with increasing pressure amplitude and -0.2 mL (limits of agreement, -0.3 to -0.1) with increasing inspiratory time. Humidity and heating did not affect error, whereas increasing FiO2 from 0.21 to 1.0 increased mean error by 6.3% (±2.5%). CONCLUSIONS: The Florian infant hot-wire flowmeter and monitoring system provides reliable measurements of Vt at the airway opening during high-frequency oscillatory ventilation when employed at frequencies of 8-13 Hz. The bedside application could improve monitoring of patients receiving high-frequency oscillatory ventilation, favor a better understanding of the physiologic consequences of different high-frequency oscillatory ventilation strategies, and therefore optimize treatment.
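As a minimal sketch of how the Vt-adiab reference values can be obtained (assuming the standard adiabatic gas law, not the authors' exact derivation): for a rigid model of gas volume $V$ at pressure $P$, an adiabatic compression satisfies $PV^{\gamma} = \text{const}$, so a small pressure swing maps to a volume swing as

$$\Delta V \approx -\frac{V}{\gamma P}\,\Delta P,$$

which allows a reference tidal volume to be computed from the measured pressure changes inside the model ($\gamma \approx 1.4$ for air).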
Batch effect confounding leads to strong bias in performance estimates obtained by cross-validation.
Abstract:
BACKGROUND: With the large amount of biological data that is currently publicly available, many investigators combine multiple data sets to increase the sample size and potentially also the power of their analyses. However, technical differences ("batch effects") as well as differences in sample composition between the data sets may significantly affect the ability to draw generalizable conclusions from such studies. FOCUS: The current study focuses on the construction of classifiers, and the use of cross-validation to estimate their performance. In particular, we investigate the impact of batch effects and differences in sample composition between batches on the accuracy of the classification performance estimate obtained via cross-validation. The focus on estimation bias is a main difference compared to previous studies, which have mostly focused on the predictive performance and how it relates to the presence of batch effects. DATA: We work with simulated data sets. To have realistic intensity distributions, we use real gene expression data as the basis for our simulation. Random samples from this expression matrix are selected and assigned to group 1 (e.g., 'control') or group 2 (e.g., 'treated'). We introduce batch effects and select some features to be differentially expressed between the two groups. We consider several scenarios for our study, most importantly different levels of confounding between groups and batch effects. METHODS: We focus on well-known classifiers: logistic regression, Support Vector Machines (SVM), k-nearest neighbors (kNN) and Random Forests (RF). Feature selection is performed with the Wilcoxon test or the lasso. Parameter tuning and feature selection, as well as the estimation of the prediction performance of each classifier, are performed within a nested cross-validation scheme. The estimated classification performance is then compared to what is obtained when applying the classifier to independent data.
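As a minimal sketch of the nested cross-validation scheme described above (not the authors' code: make_classification stands in for the resampled gene expression data, scikit-learn's ANOVA F-test stands in for the Wilcoxon/lasso feature selection, and the grid values are illustrative):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

# Synthetic two-group data with many features, few informative ones.
X, y = make_classification(n_samples=100, n_features=500,
                           n_informative=10, random_state=0)

# Feature selection lives inside the pipeline, so it is refit on every
# training fold and never sees the held-out samples.
pipe = Pipeline([("select", SelectKBest(f_classif)), ("clf", SVC())])
param_grid = {"select__k": [10, 50], "clf__C": [0.1, 1.0, 10.0]}

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)  # tuning
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)  # estimation

# Inner loop tunes hyperparameters; outer loop estimates performance.
search = GridSearchCV(pipe, param_grid, cv=inner)
scores = cross_val_score(search, X, y, cv=outer)
print(f"nested CV accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```

The point of the nesting is exactly the paper's concern: any selection or tuning step performed outside the outer loop leaks information and biases the performance estimate, and batch-group confounding can inflate it further even when the nesting is done correctly.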
Abstract:
The OLS estimator of the intergenerational earnings correlation is biased towards zero, while the instrumental variables estimator is biased upwards. The first of these results arises because of measurement error, while the latter rests on the presumption that parental education is an invalid instrument. We propose a panel data framework for quantifying the asymptotic biases of these estimators, as well as a mis-specification test for the IV estimator.
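As a hedged sketch of the textbook intuition behind both biases (standard errors-in-variables and IV algebra, not the paper's panel framework): with a true earnings relation $y = \beta x^{*} + \varepsilon$ and observed parental earnings $x = x^{*} + u$ (classical measurement error, $u$ independent of $x^{*}$),

$$\operatorname{plim}\hat{\beta}_{\mathrm{OLS}} = \beta\,\frac{\sigma^{2}_{x^{*}}}{\sigma^{2}_{x^{*}} + \sigma^{2}_{u}} < \beta,$$

so OLS is attenuated toward zero, while an instrument $z$ (here, parental education) that is itself correlated with the error term yields

$$\operatorname{plim}\hat{\beta}_{\mathrm{IV}} = \beta + \frac{\operatorname{Cov}(z,\varepsilon)}{\operatorname{Cov}(z,x)},$$

which is biased upward whenever the second term is positive.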
Abstract:
BACKGROUND: Prospective data describing the appropriateness of use of colonoscopy based on detailed panel-based clinical criteria are not available. METHODS: In a cohort of 553 consecutive patients referred for colonoscopy to two university-based Swiss outpatient clinics, the percentage of patients who underwent colonoscopy for appropriate, equivocal, and inappropriate indications and the relationship between appropriateness of use and the presence of relevant endoscopic lesions was prospectively assessed. This assessment was based on criteria of the American Society for Gastrointestinal Endoscopy and explicit American and Swiss criteria developed in 1994 by a formal panel process using the RAND/UCLA appropriateness method. RESULTS: The procedures were rated appropriate or equivocal in 72.2% by criteria of the American Society for Gastrointestinal Endoscopy, in 68.5% by explicit American criteria, and in 74.4% by explicit Swiss criteria (not statistically significant, NS). Inappropriate use (overuse) of colonoscopy was found in 27.8%, 31.5%, and 25.6%, respectively (NS). The proportion of appropriate procedures was higher with increasing age. Almost all reasons for using colonoscopy could be assessed by the two explicit criteria sets, whereas 28.4% of reasons for using colonoscopy could not be evaluated by the criteria of the American Society for Gastrointestinal Endoscopy (p < 0.0001). The probability of finding a relevant endoscopic lesion was distinctly higher in the procedures rated appropriate or equivocal than in procedures judged inappropriate. CONCLUSIONS: The rate of inappropriate use of colonoscopy is substantial in Switzerland. Explicit criteria allow assessment of almost all indications encountered in clinical practice. In this study, all sets of appropriateness criteria significantly enhanced the probability of finding a relevant endoscopic lesion during colonoscopy.
Abstract:
The objective of this study is to evaluate how young physicians in training perceive the cardiovascular risk of their hypertensive patients, based on medical guidelines and on their clinical judgment. This is a cross-sectional observational study carried out at the Policlinique Médicale Universitaire de Lausanne (PMU). 200 hypertensive patients were included in the study, together with a control group of 50 non-hypertensive patients presenting at least one cardiovascular risk factor. We compared the 10-year cardiovascular risk calculated by a computer program based on the Framingham equation, as adapted for physicians by the WHO-ISH, with the perceived risk estimated clinically by the physicians. The results of our study showed that physicians underestimate the 10-year cardiovascular risk of their patients compared with the risk calculated using the Framingham equation. Agreement between the two methods was 39% for the hypertensive patients and 30% for the control group of non-hypertensive patients. Underestimation of cardiovascular risk in hypertensive patients was correlated with a stabilized systolic blood pressure below 140 mmHg (OR = 2.1 [1.1; 4.1]). In conclusion, the results of this study show that young physicians in training often have an incorrect perception of their patients' cardiovascular risk, with a tendency to underestimate it. The calculated risk could, however, also be slightly overestimated when the Framingham equation is applied to the Swiss population. To put systematic risk-factor assessment into practice in primary care, greater emphasis should be placed on teaching cardiovascular risk assessment and on implementing quality-improvement programs.
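As a hedged sketch of the general form of Framingham-type risk calculators (the standard survival-model formulation; the exact coefficients and the WHO-ISH adaptation used by the program are not reproduced here): the 10-year risk for a patient with risk-factor vector $x$ is typically computed as

$$\text{risk} = 1 - S_0(10)^{\exp\left(\sum_i \beta_i x_i - \sum_i \beta_i \bar{x}_i\right)},$$

where $S_0(10)$ is the baseline 10-year survival and $\bar{x}$ is the mean risk profile of the derivation cohort. Because $S_0$ and $\bar{x}$ come from the Framingham cohort, applying the equation to a lower-risk population such as Switzerland can overestimate absolute risk, as the conclusion notes.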
Abstract:
Social scientists often estimate models from correlational data, where the independent variable has not been exogenously manipulated; they also make implicit or explicit causal claims based on these models. When can such claims be made? We answer this question by first discussing design and estimation conditions under which model estimates can be interpreted, using the randomized experiment as the gold standard. We show how endogeneity (which includes omitted variables, omitted selection, simultaneity, common-methods bias, and measurement error) renders estimates causally uninterpretable. Second, we present methods that allow researchers to test causal claims in situations where randomization is not possible or where causal interpretation is confounded, including fixed-effects panel, sample selection, instrumental variable, regression discontinuity, and difference-in-differences models. Third, we take stock of the methodological rigor with which causal claims are being made in a social sciences discipline by reviewing a representative sample of 110 articles on leadership published in the previous 10 years in top-tier journals. Our key finding is that researchers fail to address at least 66% and up to 90% of the design and estimation conditions that render causal claims invalid. We conclude by offering 10 suggestions on how to improve non-experimental research.
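As a hedged illustration of one of these threats (the standard omitted-variable result, not drawn from the reviewed articles): if the true model is $y = \beta_1 x + \beta_2 w + \varepsilon$ but $w$ is omitted, the estimate from regressing $y$ on $x$ alone satisfies

$$\operatorname{plim}\hat{\beta}_1 = \beta_1 + \beta_2\,\frac{\operatorname{Cov}(x, w)}{\operatorname{Var}(x)},$$

so a causal reading of $\hat{\beta}_1$ implicitly asserts that every omitted $w$ is either uncorrelated with $x$ or has no effect on $y$.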
Abstract:
General Summary: Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.

The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide, and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries, which can be classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated.

The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labour that vary across countries, periods, and manufacturing sectors. We then use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique, and composition effects). We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (abstracting from price effects) to compute a static first-order trade effect. This effect increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, it is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour across sectors within each country (subject to country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are 90% lower than in the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings of this chapter go together with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past.

Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables.

The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. When using data at the sectoral level, it appears that positive cross-sector and negative own-sector externalities are present in manufacturing, while financial services display strong positive own-sector effects.

The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic, and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia.

To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation-and-environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
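As a minimal sketch of the center-of-mass computation described in the last chapter (assumed approach: map each city's latitude/longitude to 3D Cartesian coordinates, take the output-weighted mean, and project the result back to the sphere; the cities and weights below are illustrative, not the thesis data):

```python
import numpy as np

def center_of_gravity(lats_deg, lons_deg, weights):
    """Weighted center of mass of points on the unit sphere,
    projected back to latitude/longitude in degrees."""
    lats, lons = np.radians(lats_deg), np.radians(lons_deg)
    # Convert to 3D Cartesian coordinates on the unit sphere.
    x = np.cos(lats) * np.cos(lons)
    y = np.cos(lats) * np.sin(lons)
    z = np.sin(lats)
    w = np.asarray(weights, dtype=float)
    cx, cy, cz = (np.average(v, weights=w) for v in (x, y, z))
    # The mean lies inside the sphere; project it back to the surface.
    lat_c = np.degrees(np.arctan2(cz, np.hypot(cx, cy)))
    lon_c = np.degrees(np.arctan2(cy, cx))
    return lat_c, lon_c

# Illustrative example: three cities with made-up output weights.
print(center_of_gravity([48.9, 40.7, 31.2],      # Paris, New York, Shanghai
                        [2.35, -74.0, 121.5],
                        [1.0, 1.5, 1.2]))
```

Working in 3D rather than averaging latitudes and longitudes directly is what makes the measure geometrically rigorous: it avoids distortions near the date line and the poles, and shifting weight from European to Asian cities moves the projected point eastward along the surface.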
Abstract:
Thirty monoclonal antibodies from eight laboratories, exchanged after the First Workshop on Monoclonal Antibodies to Human Melanoma held in March 1981 at the NIH, were tested in an antibody-binding radioimmunoassay using a panel of 28 different cell lines. This panel included 12 melanomas, three neuroblastomas, four gliomas, one retinoblastoma, four colon carcinomas, one lung carcinoma, one cervical carcinoma, one endometrial carcinoma, and one breast carcinoma. The reactivity pattern of the 30 monoclonal antibodies showed that none of them were directed against antigens strictly restricted to melanoma, but that several of them recognized antigenic structures preferentially expressed on melanoma cells. A large number of antibodies were found to cross-react with gliomas and neuroblastomas; thus, they seem to recognize neuroectoderm-associated differentiation antigens. Four monoclonal antibodies produced in our laboratory were further studied for the immunohistological localization of melanoma-associated antigens on fresh tumor material. In a three-layer biotin-avidin-peroxidase system, each antibody showed a different staining pattern with the tumor cells, suggesting that they were directed against different antigens.
Abstract:
Amphibians display wide variations in life-history traits and life cycles that should prove useful to explore the evolution of sex-biased dispersal, but quantitative data on sex-specific dispersal patterns are scarce. Here, we focused on Salamandra atra, an endemic alpine species showing peculiar life-history traits. Strictly terrestrial and viviparous, the species has a promiscuous mating system, and females reproduce only every 3 to 4 years. In the present study, we provide quantitative estimates of asymmetries in male vs. female dispersal using both field-based (mark-recapture) and genetic approaches (detection of sex-biased dispersal and estimates of migration rates based on the contrast in genetic structure across sexes and age classes). Our results revealed a high level of gene flow among populations, which stems exclusively from male dispersal. We hypothesize that philopatric females benefit from being familiar with their natal area for the acquisition and defence of an appropriate shelter, while male dispersal has been secondarily favoured by inbreeding avoidance. Together with other studies on amphibians, our results indicate that a species' mating system alone is a poor predictor of sex-linked differences in dispersal, in particular for promiscuous species. Further studies should focus more directly on the proximate forces that favour or limit dispersal to refine our understanding of the evolution of sex-biased dispersal in animals.
Abstract:
Quantitative knowledge of the turnover of different leukocyte populations is a key to our understanding of immune function in health and disease. Much progress has been made thanks to the introduction of stable isotope labeling, the state-of-the-art technique for in vivo quantification of cellular life spans. Yet, even leukocyte life span estimates on the basis of stable isotope labeling can vary up to 10-fold among laboratories. We investigated whether these differences could be the result of variances in the length of the labeling period among studies. To this end, we performed deuterated water-labeling experiments in mice, in which only the length of label administration was varied. The resulting life span estimates were indeed dependent on the length of the labeling period when the data were analyzed using a commonly used single-exponential model. We show that multiexponential models provide the necessary tool to obtain life span estimates that are independent of the length of the labeling period. Use of a multiexponential model enabled us to reduce the gap between human T-cell life span estimates from 2 previously published labeling studies. This provides an important step toward unambiguous understanding of leukocyte turnover in health and disease.
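As a hedged sketch of the kinetics at issue (the standard label-enrichment models from this literature, written in generic form): under a single-exponential model, the labeled fraction during label administration grows as

$$l(t) = 1 - e^{-pt},$$

where $p$ is a single average turnover rate, so the fitted $p$ must summarize the whole population. A multiexponential model instead allows kinetically distinct subpopulations $i$ with fractions $\alpha_i$ and rates $p_i$,

$$l(t) = \sum_i \alpha_i \left(1 - e^{-p_i t}\right), \qquad \sum_i \alpha_i = 1.$$

Short labeling periods preferentially label the fast subpopulations, which is why single-exponential estimates of the average life span depend on the length of the labeling period; the multiexponential form absorbs this heterogeneity and yields estimates that are independent of labeling duration.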