980 results for Proxy Respondents


Relevance: 10.00%

Abstract:

In this paper, we produce a new estimate of Spanish industrial GVA at the territorial disaggregation level of the provinces (NUTS III) and the Autonomous Communities (NUTS II). To this end, we propose a new methodology for estimating historical figures of regional industrial GVA. In contrast to traditional approaches, which rely on fiscal sources as a way of approximating industrial productive capacity, this paper offers an estimate that also draws on the incomes generated by the industrial production of the regions. For this purpose, we apply the methodology proposed by Geary and Stark (2002) together with the improvements proposed by Crafts (2005). This methodology yields a new retrospective estimate of the industrial GVA of the Spanish regions at several benchmark years over the period 1860-1930.
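The core of the Geary-Stark allocation can be sketched in a few lines: national output is distributed across regions in proportion to their wage bills, with relative wages proxying for labour productivity. The region names and figures below are invented for illustration, and this shows only the basic allocation step, not the Crafts (2005) refinements:

```python
# Minimal sketch of a Geary-Stark style allocation (illustrative numbers,
# not the paper's data).

def geary_stark(national_vab, regions):
    """regions: dict region -> (employment, average wage).
    Regional output is assumed proportional to employment times the
    relative wage; shares are rescaled to exhaust the national total."""
    total_emp = sum(e for e, _ in regions.values())
    national_wage = sum(e * w for e, w in regions.values()) / total_emp
    raw = {r: e * (w / national_wage) for r, (e, w) in regions.items()}
    scale = national_vab / sum(raw.values())
    return {r: v * scale for r, v in raw.items()}

# Hypothetical example: three provinces, national industrial VAB of 100.
est = geary_stark(100.0, {"A": (10, 1.2), "B": (20, 1.0), "C": (5, 0.8)})
# Regional estimates sum to the national total by construction.
```

By construction the regional figures add up to the national benchmark, which is why the method only needs employment and wage data at the regional level plus a national output series.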

Relevance: 10.00%

Abstract:

BACKGROUND AND AIMS: Previous studies suggest that the new DSM-5 criteria for alcohol use disorder (AUD) will increase the apparent prevalence of AUD. This study estimates the 12-month prevalence of AUD using both DSM-IV and DSM-5 criteria and compares the characteristics of men in a high-risk sample who meet both, only one, or neither set of diagnostic criteria. DESIGN, SETTING AND PARTICIPANTS: 5943 Swiss men aged 18-25 years who participated in the Cohort Study on Substance Use Risk Factors (C-SURF), a population-based cohort study recruited from three of the six military recruitment centres in Switzerland (response rate = 79.2%). MEASUREMENTS: DSM-IV and DSM-5 criteria, alcohol use patterns, and other substance use were assessed. FINDINGS: Approximately 31.7% (30.5-32.8) of individuals met DSM-5 AUD criteria [21.2% mild (20.1-22.2); 10.5% moderate/severe (9.7-11.3)], which was less than the total rate when DSM-IV criteria for alcohol abuse (AA) and alcohol dependence (AD) were combined [36.8% overall (35.5-37.9); 26.6% AA (25.4-27.7); 10.2% AD (9.4-10.9)]. Of 2479 respondents meeting criteria for either diagnosis, 1585 (63.9%) met criteria for both. For those meeting DSM-IV criteria only (n = 598, 24.1%), hazardous use was most prevalent, whereas the criteria "larger/longer use than intended" and "tolerance to alcohol" were most prevalent for respondents meeting DSM-5 criteria only (n = 296, 11.9%). Two in five DSM-IV alcohol abuse cases and one-third of DSM-5 mild AUD individuals fulfilled the diagnostic criteria due to the hazardous use criterion. The addition of the craving criterion and the exclusion of the legal-problems criterion, respectively, did not affect estimated AUD prevalence. CONCLUSIONS: In a high-risk sample of young Swiss males, prevalence of alcohol use disorder as diagnosed by DSM-5 was slightly lower than prevalence of DSM-IV diagnosis of dependence plus abuse; 63.9% of those who met either criterion met criteria for both.
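The two diagnostic rules being compared can be sketched with the textbook thresholds (DSM-IV: abuse if at least 1 of 4 abuse criteria, dependence if at least 3 of 7 dependence criteria; DSM-5: a single AUD diagnosis from 2 or more of 11 pooled criteria, graded mild/moderate/severe). The per-respondent criterion counts would come from the survey instrument; here they are just function arguments:

```python
def dsm_iv(abuse_count, dependence_count):
    """DSM-IV: dependence (>=3 of 7 dependence criteria) takes
    precedence over abuse (>=1 of 4 abuse criteria)."""
    if dependence_count >= 3:
        return "dependence"
    if abuse_count >= 1:
        return "abuse"
    return None

def dsm_5(criteria_count):
    """DSM-5 AUD: >=2 of 11 criteria; 2-3 mild, 4-5 moderate, >=6 severe."""
    if criteria_count >= 6:
        return "severe"
    if criteria_count >= 4:
        return "moderate"
    if criteria_count >= 2:
        return "mild"
    return None

# A respondent meeting only a single abuse criterion (e.g. hazardous use)
# is a DSM-IV "abuse" case but falls below the DSM-5 two-criterion bar:
only_hazardous_iv = dsm_iv(1, 0)   # "abuse"
only_hazardous_5 = dsm_5(1)        # None
```

This single-criterion asymmetry is exactly why some DSM-IV abuse cases in the abstract meet no DSM-5 diagnosis, while criteria counted only under DSM-5 (e.g. craving) can pull respondents the other way.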

Relevance: 10.00%

Abstract:

OBJECTIVE: Although dual-energy X-ray absorptiometry (DEXA) is the preferred method to estimate adiposity, body mass index (BMI) is often used as a proxy. However, the ability of BMI to measure adiposity change among youth is poorly evidenced. This study explored which metrics of BMI change have the highest correlations with different metrics of DEXA change. METHODS: Data were from the Quebec Adipose and Lifestyle Investigation in Youth cohort, a prospective cohort of children (8-10 years at recruitment) from Québec, Canada (n = 557). Height and weight were measured by trained nurses at baseline (2008) and follow-up (2010). Metrics of BMI change were raw (ΔBMIkg/m²), adjusted for median BMI (ΔBMIpercentage) and age-sex-adjusted with the Centers for Disease Control and Prevention growth curves expressed as centiles (ΔBMIcentile) or z-scores (ΔBMIz-score). Metrics of DEXA change were raw (total fat mass; ΔFMkg), per cent (ΔFMpercentage), height-adjusted (fat mass index; ΔFMI) and age-sex-adjusted z-scores (ΔFMz-score). Spearman's rank correlations were derived. RESULTS: Correlations ranged from modest (0.60) to strong (0.86). ΔFMkg correlated most highly with ΔBMIkg/m² (r = 0.86), ΔFMI with ΔBMIkg/m² and ΔBMIpercentage (r = 0.83-0.84), ΔFMz-score with ΔBMIz-score (r = 0.78), and ΔFMpercentage with ΔBMIpercentage (r = 0.68). Correlations with ΔBMIcentile were consistently among the lowest. CONCLUSIONS: In 8-10-year-old children, absolute or per cent change in BMI is a good proxy for change in fat mass or FMI, and BMI z-score change is a good proxy for FM z-score change. However, change in BMI centile and change in per cent fat mass perform less well and are not recommended.
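Spearman's rank correlation, the statistic used throughout this abstract, is just Pearson correlation applied to average ranks. A small library-free sketch (the tie handling mirrors the usual average-rank convention):

```python
def rank(xs):
    """Average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# A perfectly monotone but non-linear relationship -> rho = 1.0
rho = spearman([1, 2, 3, 4], [1, 4, 9, 16])
```

Because it depends only on ranks, the coefficient is insensitive to the scale of each change metric, which is why it suits comparisons across raw, per-cent, and z-score versions of BMI and fat-mass change.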

Relevance: 10.00%

Abstract:

Objective: Health status measures usually have an asymmetric distribution and present a high percentage of respondents with the best possible score (ceiling effect), especially when they are assessed in the general population. Different methods that take the ceiling effect into account have been proposed to model this type of variable: tobit models, Censored Least Absolute Deviations (CLAD) models, and two-part models, among others. The objective of this work was to describe the tobit model and compare it with the Ordinary Least Squares (OLS) model, which ignores the ceiling effect. Methods: Two data sets were used to compare the models: (a) real data from the European Study of Mental Disorders (ESEMeD), used to model the EQ-5D index, one of the utility measures most commonly used in the evaluation of health status; and (b) simulated data. Cross-validation was used to compare the predicted values of the tobit and OLS models. The following estimators were compared: the percentage of absolute error (R1), the percentage of squared error (R2), the Mean Squared Error (MSE) and the Mean Absolute Prediction Error (MAPE). Different data sets were created for different values of the error variance and different percentages of individuals with the ceiling effect. The coefficient estimates, the percentage of explained variance and the plots of residuals versus predicted values obtained under each model were compared. Results: In the ESEMeD study, the predicted values obtained with the OLS model and with the tobit model were very similar. The regression coefficients of the linear model were consistently smaller than those of the tobit model. In the simulation study, we observed that when the error variance was small (s = 1), the tobit model produced unbiased coefficient estimates and accurate predicted values, especially when the percentage of individuals with the highest possible score was small. However, when the error variance was larger (s = 10 or s = 20), the percentage of explained variance and the predicted values of the tobit model were more similar to those obtained with the OLS model. Conclusions: The proportion of variability accounted for by the models and the percentage of individuals with the highest possible score have an important effect on the performance of the tobit model in comparison with the linear model.
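The attenuation described here (OLS coefficients shrinking toward zero under a ceiling) is easy to reproduce in a minimal simulation. The latent model, coefficients, and sample size below are invented for illustration, not taken from ESEMeD:

```python
import random

random.seed(42)

def ols_slope(x, y):
    """Ordinary least squares slope for a single predictor."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

# Hypothetical latent model: y* = 0.2 + 0.5 x + e, with the observed
# score capped at 1.0, mimicking the ceiling of a utility index.
x = [random.uniform(0.0, 2.0) for _ in range(2000)]
y_latent = [0.2 + 0.5 * xi + random.gauss(0.0, 0.2) for xi in x]
y_observed = [min(yi, 1.0) for yi in y_latent]

b_latent = ols_slope(x, y_latent)      # close to the true slope 0.5
b_censored = ols_slope(x, y_observed)  # attenuated toward zero by the ceiling
```

A tobit model would instead maximize the censored-normal likelihood and so recover the latent slope; the point of the sketch is only that OLS on the capped scores is biased toward zero, consistent with the smaller linear-model coefficients the abstract reports.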

Relevance: 10.00%

Abstract:

Objective: To describe the methodology of Confirmatory Factor Analysis for categorical items and to apply it to evaluate the factor structure and invariance of the WHO Disability Assessment Schedule (WHODAS-II) questionnaire, developed by the World Health Organization. Methods: Data used for the analysis come from the European Study of Mental Disorders (ESEMeD), a cross-sectional interview of a representative sample of the general population of 6 European countries (n = 8796). Respondents were administered a modified version of the WHODAS-II, which measures functional disability in the previous 30 days in 6 dimensions: Understanding and Communicating, Self-Care, Getting Around, Getting Along with Others, Life Activities, and Participation. The questionnaire includes two types of items: 22 severity items (5-point Likert) and 8 frequency items (continuous). An Exploratory Factor Analysis (EFA) with promax rotation was conducted on a random 50% of the sample. The remaining half of the sample was used to perform a Confirmatory Factor Analysis (CFA) comparing three models: (a) the model suggested by the EFA results; (b) the theoretical 6-dimension model suggested by the WHO; (c) a reduced model, equivalent to model (b), in which 4 of the frequency items are excluded. Moreover, a second-order factor was also evaluated. Finally, a CFA with covariates was estimated in order to evaluate measurement invariance of the items between Mediterranean and non-Mediterranean countries. Results: The solution that provided the best results in the EFA contained 7 factors. Two of the frequency items presented high factor loadings on the same factor, and one of them presented factor loadings smaller than 0.3 on all factors. In the CFA, the reduced model (model c) presented the best goodness of fit (CFI = 0.992, TLI = 0.996, RMSEA = 0.024). The second-order factor structure presented adequate goodness of fit (CFI = 0.987, TLI = 0.991, RMSEA = 0.036). Measurement non-invariance was detected for one item of the questionnaire (FD20, embarrassment due to health problems). Conclusions: The CFA confirmed the initial hypothesis of a 6-factor structure for the WHODAS-II. The second-order factor supports the existence of a global dimension of disability. The use of 4 of the frequency items in the scoring of the corresponding dimensions is not recommended.

Relevance: 10.00%

Abstract:

Lay perceptions of collectives (e.g., groups, organizations, countries) implicated in the 2009 H1N1 outbreak were studied. Collectives serve symbolic functions to help laypersons make sense of the uncertainty involved in a disease outbreak. We argue that lay representations are dramatized, featuring characters like heroes, villains and victims. In interviews conducted soon after the outbreak, 47 Swiss respondents discussed the risk posed by H1N1, its origins and effects, and protective measures. Countries were the most frequent collectives mentioned. Poor, underdeveloped countries were depicted as victims, albeit ambivalently, as they were viewed as partly responsible for their own plight. Experts (physicians, researchers) and political and health authorities were depicted as heroes. Two villains emerged: the media (viewed as fear mongering or as a puppet serving powerful interests) and private corporations (e.g., the pharmaceutical industry). Laypersons' framing of disease threat diverges substantially from official perspectives.

Relevance: 10.00%

Abstract:

The Family Attitude Scale (FAS) is a self-report measure of critical or hostile attitudes and behaviors towards another family member, and demonstrates an ability to predict relapse in psychoses. Data are not currently available on a French version of the scale. The present study developed a French version of the FAS, using a large general population sample to test its internal structure, criterion validity and relationships with the respondents' symptoms and psychiatric diagnoses, and examined the reciprocity of FAS ratings by respondents and their partners. A total of 2072 adults from an urban population undertook a diagnostic interview and completed self-report measures, including an FAS about their partner. A subset of participants had partners who also completed the FAS. Confirmatory factor analyses revealed an excellent fit by a single-factor model, and the FAS demonstrated a strong association with dyadic adjustment. FAS scores of respondents were affected by their anxiety levels and mood, alcohol and anxiety diagnoses, and moderate reciprocity of attitudes and behaviors between the partners was seen. The French version of the FAS has similarly strong psychometric properties to the original English version. Future research should assess the ability of the French FAS to predict relapse of psychiatric disorders.

Relevance: 10.00%

Abstract:

Background: Medical errors have recently been recognized as a relevant concern in public health, and increasing research efforts have been made to find ways of improving patient safety. In palliative care, however, studies on errors are scant. Objective: Our aim was to gather pilot data concerning the experiences and attitudes of palliative care professionals on this topic. Methods: We developed a questionnaire consisting of questions on the relevance, estimated frequency, kinds and severity of errors, their causes and consequences, and the way palliative care professionals handle them. The questionnaire was sent to all specialist palliative care institutions in the region of Bavaria, Germany (n = 168; 12.5 million inhabitants), reaching a response rate of 42% (n = 70). Results: Errors in palliative care were regarded as a highly relevant problem (median 8 on a 10-point numeric rating scale). Most respondents experienced a moderate frequency of errors (1-10 per 100 patients). Errors in communication were estimated to be more common than those in symptom control. The causes most often mentioned were deficits in communication or organization. Moral and psychological problems for the person committing the error were seen as more frequent than consequences for the patient. Ninety percent of respondents declared that they disclose errors to the harmed patient. For 78% of the professionals, the issue was not a part of their professional training. Conclusion: Professionals acknowledge errors, in particular errors in communication, to be a common and relevant problem in palliative care, one that has, however, been neglected in training and research.

Relevance: 10.00%

Abstract:

BACKGROUND: Blood sampling is a frequent medical procedure, very often considered a stressful experience by children. Local anesthetics have been developed, but are expensive and not reimbursed by insurance companies in our country. We wanted to assess parents' willingness to pay (WTP) for this kind of drug. PATIENTS AND METHODS: Over 6 months, all parents of children presenting for a general (GV) or specialized visit (SV) with blood sampling were surveyed. WTP was assessed through three scenarios [avoiding blood sampling (ABS), using the drug on prescription (PD), or over the counter (OTC)], with a payment card system randomized to ascending or descending order of prices (AO or DO). RESULTS: Fifty-six responses were collected (34 GV, 22 SV, 27 AO and 29 DO), for a response rate of 40%. The response distribution was wide, with median WTP of 40 for ABS, 25 for PD and 10 for OTC, which is close to the drug's real price. Responses were similar for GV and SV. Median WTP amounted to 0.71%, 0.67% and 0.20% of respondents' monthly income for the three scenarios, respectively, with a maximum at 10%. CONCLUSIONS: Assessing parents' WTP in an outpatient setting is difficult, with a wide distribution of results, but median WTP is close to the real drug price. This finding could be used to promote insurance coverage for this drug.

Relevance: 10.00%

Abstract:

The current research project is both a process and impact evaluation of community policing in Switzerland's five major urban areas - Basel, Bern, Geneva, Lausanne, and Zurich. Community policing is both a philosophy and an organizational strategy that promotes a renewed partnership between the police and the community to solve problems of crime and disorder. The process evaluation data on police internal reforms were obtained through semi-structured interviews with key administrators from the five police departments as well as from police internal documents and additional public sources.
The impact evaluation uses official crime records and census statistics as contextual variables as well as Swiss Crime Survey (SCS) data on fear of crime, perceptions of disorder, and public attitudes towards the police as outcome measures. The SCS is a standing survey instrument that has polled residents of the five urban areas repeatedly since the mid-1980s. The process evaluation produced a "Calendar of Action" to create panel data to measure community policing implementation progress over six evaluative dimensions in intervals of five years between 1990 and 2010. The impact evaluation, carried out ex post facto, uses an observational design that analyzes the impact of the different community policing models between matched comparison areas across the five cities. Using ZIP code districts as proxies for urban neighborhoods, geospatial data mining algorithms serve to develop a neighborhood typology in order to match the comparison areas. To this end, both unsupervised and supervised algorithms are used to analyze high-dimensional data on crime, the socio-economic and demographic structure, and the built environment in order to classify urban neighborhoods into clusters of similar type. In a first step, self-organizing maps serve as tools to develop a clustering algorithm that reduces the within-cluster variance in the contextual variables and simultaneously maximizes the between-cluster variance in survey responses. The random forests algorithm then serves to assess the appropriateness of the resulting neighborhood typology and to select the key contextual variables in order to build a parsimonious model that makes a minimum of classification errors. 
Finally, for the impact analysis, propensity score matching methods are used to match the survey respondents of the pretest and posttest samples on age, gender, and their level of education for each neighborhood type identified within each city, before conducting a statistical test of the observed difference in the outcome measures. Moreover, all significant results were subjected to a sensitivity analysis to assess the robustness of these findings in the face of potential bias due to some unobserved covariates. The study finds that over the last fifteen years, all five police departments have undertaken major reforms of their internal organization and operating strategies and forged strategic partnerships in order to implement community policing. The resulting neighborhood typology reduced the within-cluster variance of the contextual variables and accounted for a significant share of the between-cluster variance in the outcome measures prior to treatment, suggesting that geocomputational methods help to balance the observed covariates and hence to reduce threats to the internal validity of an observational design. Finally, the impact analysis revealed that fear of crime dropped significantly over the 2000-2005 period in the neighborhoods in and around the urban centers of Bern and Zurich. These improvements are fairly robust in the face of bias due to some unobserved covariate and covary temporally and spatially with the implementation of community policing. The alternative hypothesis that the observed reductions in fear of crime were at least in part a result of community policing interventions thus appears at least as plausible as the null hypothesis of absolutely no effect, even if the observational design cannot completely rule out selection and regression to the mean as alternative explanations.
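The propensity-score matching step described above can be sketched compactly: estimate P(treated | covariates) with a logistic model, then pair each treated unit with the control whose score is closest. The covariates, sample, and hand-rolled gradient-ascent fit below are all invented for illustration, standing in for the SCS respondents and the study's actual estimation:

```python
import math
import random

random.seed(7)

def standardize(X):
    """Column-wise z-scores so the gradient step is well scaled."""
    cols = list(zip(*X))
    scaled = []
    for col in cols:
        m = sum(col) / len(col)
        s = (sum((v - m) ** 2 for v in col) / len(col)) ** 0.5 or 1.0
        scaled.append([(v - m) / s for v in col])
    return [list(row) for row in zip(*scaled)]

def fit_logit(X, t, lr=0.5, steps=300):
    """Plain gradient ascent on the logistic log-likelihood."""
    w = [0.0] * (len(X[0]) + 1)  # intercept first
    for _ in range(steps):
        grad = [0.0] * len(w)
        for xi, ti in zip(X, t):
            z = w[0] + sum(a * b for a, b in zip(w[1:], xi))
            err = ti - 1.0 / (1.0 + math.exp(-z))
            grad[0] += err
            for j, a in enumerate(xi):
                grad[j + 1] += err * a
        w = [wj + lr * g / len(X) for wj, g in zip(w, grad)]
    return w

# Hypothetical respondents: (age, sex, education level); older people are
# made more likely to fall in the "treated" (posttest) sample.
X = [(random.uniform(18.0, 80.0), float(random.randint(0, 1)),
      float(random.randint(1, 3))) for _ in range(200)]
t = [1 if random.random() < 1.0 / (1.0 + math.exp(-(xi[0] - 50.0) / 15.0))
     else 0 for xi in X]

Xs = standardize(X)
w = fit_logit(Xs, t)
scores = [1.0 / (1.0 + math.exp(-(w[0] + sum(a * b for a, b in zip(w[1:], xi)))))
          for xi in Xs]

# 1:1 nearest-neighbour matching (with replacement) on the score.
treated = [i for i, ti in enumerate(t) if ti == 1]
controls = [i for i, ti in enumerate(t) if ti == 0]
pairs = [(i, min(controls, key=lambda j: abs(scores[i] - scores[j])))
         for i in treated]
```

After matching, the pretest-posttest difference in an outcome is computed over the matched pairs only, which is what balances age, sex, and education between the samples before the statistical test.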

Relevance: 10.00%

Abstract:

In forensic science, there is a strong interest in determining the post-mortem interval (PMI) of human skeletal remains up to 50 years after death. Currently, there are no reliable methods to resolve PMI, the determination of which relies almost exclusively on the experience of the investigating expert. Here we measured ⁹⁰Sr and ²¹⁰Pb (²¹⁰Po) incorporated into bones through a biogenic process as indicators of the time elapsed since death. We hypothesised that the activity of radionuclides incorporated into trabecular bone will more accurately match the activity in the environment and the food chain at the time of death than the activity in cortical bone, because of a higher remodelling rate. We found that determining ⁹⁰Sr can yield reliable PMI estimates as long as a calibration curve exists for ⁹⁰Sr covering the studied area and the last 50 years. We also found that adding the activity of ²¹⁰Po, a proxy for naturally occurring ²¹⁰Pb incorporated through ingestion, to the ⁹⁰Sr dating increases the reliability of the PMI value. Our results also show that trabecular bone is subject to both ⁹⁰Sr and ²¹⁰Po diagenesis. Accordingly, we used a solubility profile method to determine the biogenic radionuclide only, and we are proposing a new method of bone decontamination to be used prior to ⁹⁰Sr and ²¹⁰Pb dating.
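The arithmetic behind radionuclide dating is the standard exponential decay law, inverted for time. The half-life constant below is the commonly cited ~28.8 years for ⁹⁰Sr; in practice the activity at death would come from the regional calibration curve the abstract calls for:

```python
import math

SR90_HALF_LIFE = 28.8  # years, approximate literature value

def pmi_from_sr90(a_measured, a_at_death):
    """Time since death by inverting the decay law:
    A(t) = A0 * exp(-ln2 * t / T_half)  =>  t = (T_half / ln2) * ln(A0 / A).
    a_at_death (A0) is read off a regional calibration curve."""
    return SR90_HALF_LIFE / math.log(2) * math.log(a_at_death / a_measured)

# A sample at half its at-death activity is one half-life old (~28.8 y).
elapsed = pmi_from_sr90(0.5, 1.0)
```

Combining two radionuclides with different half-lives, as the abstract does with ²¹⁰Po, gives two such equations whose agreement (or disagreement) is itself a check on diagenetic contamination.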

Relevance: 10.00%

Abstract:

Chemical pollution is known to affect microbial community composition, but it is poorly understood how toxic compounds influence the physiology of single cells, which may lie at the basis of the loss of reproductive fitness. Here we analyze physiological disturbances caused by a variety of chemical pollutants at the single-cell level using the bacterium Pseudomonas fluorescens in an oligotrophic growth assay. As a proxy for physiological disturbance we measured changes in geometric mean ethidium bromide (EB) fluorescence intensities in subpopulations of live and dividing cells exposed or not exposed to different dosages of tetradecane, 4-chlorophenol, 2-chlorobiphenyl, naphthalene, benzene, mercury chloride, or water-dissolved oil fractions. Because ethidium bromide efflux is an energy-dependent process, any disturbance in cellular energy generation is visible as increased cytoplasmic fluorescence. Interestingly, all pollutants, even at the lowest dosage of 1 nmol/mL culture, produced significantly increased ethidium bromide fluorescence compared to nonexposed controls. Ethidium bromide fluorescence intensities increased with pollutant exposure dosage up to a saturation level, and were weakly (r² = 0.3905) inversely correlated to the proportion of live cells at that time point in the culture. A temporal increase in EB fluorescence of growing cells is indicative of toxic but reversible effects. Cells displaying continued high EB fluorescence levels experience constant and permanent damage, and no longer contribute to population growth. The procedure developed here using bacterial ethidium bromide efflux pump activity may be a useful complement for screening sublethal toxicity effects of chemicals.

Relevance: 10.00%

Abstract:

General Summary Although the chapters of this thesis address a variety of issues, the principal aim is common: test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and making use of the most appropriate empirical techniques. This thesis can roughly be divided into two parts: the first one, corresponding to the first two chapters, investigates the link between trade and the environment; the second one, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one of the arguments put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and the fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster-formation might be productivity enhancing. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.
The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH, comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE, comparative advantage in dirty, capital intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity for all manufacturing sectors) provided by the World Bank and use a gravity-type framework to isolate the two above mentioned effects. Our study covers 48 countries that can be classified into 29 Southern and 19 Northern countries and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and being of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Therefore, popular fears about the trade effects of differences in environmental regulations might be exaggerated. The second chapter is entitled "Is trade bad for the Environment? Decomposing worldwide SO2 emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit labor that vary across countries, periods and manufacturing sectors. Then we use these original data (covering 31 developed and 31 developing countries) to decompose the worldwide SO2 emissions into the three well known dynamic effects (scale, technique and composition effect). We find that the positive scale (+9.5%) and the negative technique (-12.5%) effect are the main driving forces of emission changes.
Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual and this no-trade world allows us (abstracting from price effects) to compute a static first-order trade effect. The latter now increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller (3.5%) in 2000 than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible level of SO2 emissions. These hypothetical levels of emissions are obtained by reallocating labour accordingly across sectors within each country (under the country-employment and the world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced further by another 80% if emissions were to be minimized. The findings from this chapter go together with those from chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past. Going now to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", consists of a short note that derives the theoretical model estimated in the fourth chapter. The derivation is directly based on the multi-regional framework by Ciccone (2002) but extends it in order to include sectoral disaggregation and a temporal dimension.
This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables. The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Using dynamic panel techniques allows us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. At the sectoral level, positive cross-sector and negative own-sector externalities appear to be present in manufacturing, while financial services display strong positive own-sector effects. The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world's economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of the center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia. To sum up, this thesis makes three main contributions. First, it provides new estimates of orders of magnitude for the role of trade in the globalisation and environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it tracks, in a geometrically rigorous way, the path of the world's economic center of gravity.
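The center-of-mass calculation described in the fifth chapter can be sketched as follows: each city's location is converted to Cartesian coordinates on the unit sphere, a weighted mean is taken with economic mass as the weight, and the interior point is projected back onto the surface. The cities and weights below are illustrative placeholders, not the thesis data.

```python
import math

def center_of_gravity(cities):
    """cities: list of (lat_deg, lon_deg, weight). Returns (lat, lon) in degrees."""
    x = y = z = 0.0
    total = 0.0
    for lat, lon, w in cities:
        phi, lam = math.radians(lat), math.radians(lon)
        # Unit-sphere Cartesian coordinates, weighted by economic mass.
        x += w * math.cos(phi) * math.cos(lam)
        y += w * math.cos(phi) * math.sin(lam)
        z += w * math.sin(phi)
        total += w
    x, y, z = x / total, y / total, z / total
    # Project the interior center of mass back onto the sphere's surface.
    lat = math.degrees(math.atan2(z, math.hypot(x, y)))
    lon = math.degrees(math.atan2(y, x))
    return lat, lon

# Hypothetical GDP-style weights for three large cities.
cities = [(51.5, -0.1, 3.0),   # London
          (40.7, -74.0, 4.0),  # New York
          (35.7, 139.7, 3.5)]  # Tokyo
lat, lon = center_of_gravity(cities)
```

Repeating the computation for successive years with updated weights traces the path of the center of gravity over time.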

Resumo:

Background: Chronic disease management initiatives emphasize patient-centered care, and quality of life (QoL) is increasingly considered a representative outcome in that context. In this study we evaluated the association between receipt of diabetic care processes and QoL. Methods: This cross-sectional population-based study (2011) used self-reported data from non-institutionalized adult diabetics recruited from randomly selected community pharmacies in Vaud. Outcomes included the physical and mental composites of the SF-36 (PCS, MCS) and the disease-specific Audit of Diabetes-Dependent QoL (ADDQoL). The main exposure variables were receipt of six diabetes processes of care in the past 12 months. We also evaluated whether the association between care received and QoL was congruent with the chronic care model, as assessed by the Patient Assessment of Chronic Illness Care (PACIC). We used linear regressions to examine the association between process measures and the three composites of health-related QoL. Analyses were adjusted for age, gender, socioeconomic status, living companion, BMI, alcohol, smoking, physical activity, co-morbidities and diabetes mellitus (DM) characteristics (type, insulin use, complications, duration). Results: Mean age of the 519 diabetic patients was 64.4 years (SD 11.3); 60% were male and 73% had a living companion; 87% reported type 2 DM, half of respondents required insulin treatment, 48% had at least one DM complication, and 48% had had DM for over 10 years. Crude overall mean QoL scores were PCS: 43.4 (SD 10.5), MCS: 47.0 (SD 11.2) and ADDQoL: -1.56 (SD 1.6). In bivariate analyses, patients who received the influenza vaccine had lower ADDQoL and PCS scores than those who did not; there were no other indicator differences. In adjusted models including all processes, receipt of influenza vaccine was associated with lower ADDQoL (β = -0.41, p = .01); there were no other associations between process indicators and QoL composites.
No process association emerged even when processes were combined into composite measures of care. The PACIC score was associated only with the MCS (β = 1.57, p = .004). Conclusions: Process indicators for diabetes care did not show an association with QoL. This may reflect a lag between receipt of a care process and its effect on quality of life, or the possibility that treatment is associated with inconvenience and patient worry. Further research is needed to explore these unexpected findings.
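The adjusted models described above are ordinary least squares regressions of a QoL composite on a process-of-care indicator plus covariates. A minimal pure-Python sketch of that kind of adjusted regression is given below; the data, the single covariate and all coefficients are invented for illustration and bear no relation to the study's estimates.

```python
def ols(X, y):
    """Solve the normal equations (X'X) beta = X'y by Gaussian elimination."""
    n, k = len(X), len(X[0])
    # Build the augmented system [X'X | X'y].
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)]
         + [sum(X[i][r] * y[i] for i in range(n))] for r in range(k)]
    for col in range(k):  # forward elimination with partial pivoting
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k
    for r in range(k - 1, -1, -1):  # back substitution
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c]
                                 for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical rows: [intercept, process received (0/1), age (centred)]
X = [[1, 0, -5], [1, 1, -2], [1, 0, 0], [1, 1, 3], [1, 0, 6], [1, 1, 8]]
y = [48.0, 44.5, 45.0, 42.0, 41.4, 38.2]  # made-up QoL-like scores
beta = ols(X, y)
# beta[1] is the association of the process indicator with QoL,
# adjusted for the covariate in the third column.
```

In practice one would use a statistical package rather than hand-rolled normal equations, but the estimand, the covariate-adjusted coefficient on the process indicator, is the same.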

Resumo:

BACKGROUND: Influenza vaccination remains below the federally targeted levels outlined in Healthy People 2020. Compared to non-Hispanic whites, racial and ethnic minorities are less likely to be vaccinated for influenza, despite being at increased risk for influenza-related complications and death. Also, vaccinated minorities are more likely to receive influenza vaccinations in office-based settings and less likely to use non-medical vaccination locations compared to non-Hispanic white vaccine users. OBJECTIVE: To assess the number of "missed opportunities" for influenza vaccination in office-based settings by race and ethnicity and the magnitude of potential vaccine uptake and reductions in racial and ethnic disparities in influenza vaccination if these "missed opportunities" were eliminated. DESIGN: National cross-sectional Internet survey administered between March 4 and March 14, 2010 in the United States. PARTICIPANTS: Non-Hispanic black, Hispanic and non-Hispanic white adults living in the United States (N = 3,418). MAIN MEASURES: We collected data on influenza vaccination, frequency and timing of healthcare visits, and self-reported compliance with a potential provider recommendation for vaccination during the 2009-2010 influenza season. "Missed opportunities" for seasonal influenza vaccination in office-based settings were defined as the number of unvaccinated respondents who reported at least one healthcare visit in the Fall and Winter of 2009-2010 and indicated their willingness to get vaccinated if a healthcare provider strongly recommended it. "Potential vaccine uptake" was defined as the sum of actual vaccine uptake and "missed opportunities." KEY RESULTS: The frequency of "missed opportunities" for influenza vaccination in office-based settings was significantly higher among racial and ethnic minorities than non-Hispanic whites. 
Eliminating these "missed opportunities" could have cut racial and ethnic disparities in influenza vaccination by roughly one half. CONCLUSIONS: Improved office-based practices regarding influenza vaccination could significantly impact Healthy People 2020 goals by increasing influenza vaccine uptake and reducing corresponding racial and ethnic disparities.
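The study's definitions reduce to simple counting: a "missed opportunity" is an unvaccinated respondent with at least one healthcare visit who would comply with a strong provider recommendation, and potential uptake is actual uptake plus missed opportunities. A toy illustration with invented survey rows:

```python
# Each row: (vaccinated, had_visit_in_fall_winter, would_comply_if_recommended)
respondents = [
    (True,  True,  True),
    (False, True,  True),   # missed opportunity
    (False, True,  False),  # visited, but would not comply
    (False, False, True),   # willing, but no visit
    (True,  False, False),
    (False, True,  True),   # missed opportunity
]

actual = sum(v for v, _, _ in respondents)
missed = sum((not v) and visit and comply
             for v, visit, comply in respondents)
potential = actual + missed
print(actual, missed, potential)  # -> 2 2 4
```

Applied per racial and ethnic group, the gap between groups' potential uptake rates is what shrinks when missed opportunities are eliminated.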