772 results for Random Sample Size
Abstract:
Background Chronic illness and premature mortality from malaria, water-borne diseases, and respiratory illnesses have long been known to diminish the welfare of individuals and households in developing countries. Previous research has also shown that chronic diseases among farming populations suppress labor productivity and agricultural output. As the illness and death toll from HIV/AIDS continues to climb in most of sub-Saharan Africa, concern has arisen that the loss of household labor it causes will reduce crop yields, impoverish farming households, intensify malnutrition, and suppress growth in the agricultural sector. If chronic morbidity and premature mortality among individuals in farming households have substantial impacts on household production, and if a large number of households are affected, it is possible that an increase in morbidity and mortality from HIV/AIDS or other diseases could affect national aggregate output and exports. If, on the other hand, the impact at the household farm level is modest, or if relatively few households are affected, there is likely to be little effect on aggregate production across an entire country. Which of these outcomes is more likely in West Africa is unknown. Little rigorous, quantitative research has been published on the impacts of AIDS on smallholder farm production, particularly in West Africa. The handful of studies that have been conducted have looked mainly at small populations in areas of very high HIV prevalence in southern and eastern Africa. Conclusions about how HIV/AIDS, and other causes of chronic morbidity and mortality, are affecting agriculture across the continent cannot be drawn from these studies. In view of the importance of agriculture, and particularly smallholder agriculture, in the economies of most African countries and the scarcity of resources for health interventions, it is valuable to identify, describe, and quantify the impact of chronic morbidity and mortality on smallholder production of important crops in West Africa. One such crop is cocoa. In Ghana, cocoa is a crop of national importance that is produced almost exclusively by smallholder households. In 2003, Ghana was the world’s second-largest producer of cocoa. Cocoa accounted for a quarter of Ghana’s export revenues that year and generated 15 percent of employment. The success and growth of the cocoa industry is thus vital to the country’s overall social and economic development. Study Objectives and Methods In February and March 2005, the Center for International Health and Development of Boston University (CIHD) and the Department of Agricultural Economics and Agribusiness (DAEA) of the University of Ghana, with financial support from the Africa Bureau of the U.S. Agency for International Development and from Mars, Inc., which is a major purchaser of West African cocoa, conducted a survey of a random sample of cocoa farming households in the Western Region of Ghana. The survey documented the extent of chronic morbidity and mortality in cocoa growing households in the Western Region of Ghana, the country’s largest cocoa growing region, and analyzed the impact of morbidity and mortality on cocoa production. It aimed to answer three specific research questions. (1) What is the baseline status of the study population in terms of household size and composition, acute and chronic morbidity, recent mortality, and cocoa production? 
(2) What is the relationship between household size and cocoa production, and how can this relationship be used to understand the impact of adult mortality and chronic morbidity on the production of cocoa at the household level? The study population was the approximately 42,000 cocoa farming households in the southern part of Ghana’s Western Region. A random sample of households was selected from a roster of eligible households developed from existing administrative information. Under the supervision of the University of Ghana field team, enumerators were graduate students of the Department of Agricultural Economics and Agribusiness or employees of the Cocoa Services Division. A total of 632 eligible farmers participated in the survey. Of these, 610 provided complete responses to all questions needed to complete the multivariate statistical analysis reported here.
Abstract:
Estimation of the skeleton of a directed acyclic graph (DAG) is of great importance for understanding the underlying DAG, and causal effects can be assessed from the skeleton when the DAG is not identifiable. We propose a novel two-step method, named PenPC, to estimate the skeleton of a high-dimensional DAG. We first estimate the nonzero entries of a concentration matrix using penalized regression, and then fix the difference between the concentration matrix and the skeleton by evaluating a set of conditional independence hypotheses. For high-dimensional problems where the number of vertices p is polynomial or exponential in the sample size n, we study the asymptotic properties of PenPC on two types of graphs: traditional random graphs, where all vertices have the same expected number of neighbors, and scale-free graphs, where a few vertices may have a large number of neighbors. As illustrated by extensive simulations and applications to gene expression data of cancer patients, PenPC has higher sensitivity and specificity than the state-of-the-art method, the PC-stable algorithm.
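A minimal sketch of the two-step idea may help fix intuition: step one estimates the support of the concentration matrix by lasso neighborhood selection, and step two prunes edges with conditional independence tests. This is an illustration under simplifying assumptions (a single Fisher's-z test conditioning on all remaining variables, which requires n > p), not the authors' algorithm, which evaluates a set of smaller conditioning sets.

```python
# Sketch of the two-step PenPC idea (not the authors' implementation).
import numpy as np
from itertools import combinations
from sklearn.linear_model import LassoCV
from scipy.stats import norm

def skeleton_sketch(X, alpha=0.01):
    n, p = X.shape
    # Step 1: neighborhood selection. Keep edge (j, k) if either lasso
    # coefficient is nonzero ("OR" rule), approximating the support of
    # the concentration matrix.
    adj = np.zeros((p, p), dtype=bool)
    for j in range(p):
        others = [k for k in range(p) if k != j]
        coef = LassoCV(cv=5).fit(X[:, others], X[:, j]).coef_
        for k, c in zip(others, coef):
            if c != 0.0:
                adj[j, k] = adj[k, j] = True
    # Step 2: prune edges by testing conditional independence. For brevity
    # this conditions on all remaining variables (assumes n > p); PenPC
    # instead evaluates a set of smaller conditioning sets.
    prec = np.linalg.pinv(np.corrcoef(X, rowvar=False))
    for j, k in combinations(range(p), 2):
        if not adj[j, k]:
            continue
        pc = -prec[j, k] / np.sqrt(prec[j, j] * prec[k, k])  # partial corr.
        z = 0.5 * np.log((1 + pc) / (1 - pc)) * np.sqrt(n - p - 1)
        if 2 * (1 - norm.cdf(abs(z))) > alpha:  # fail to reject independence
            adj[j, k] = adj[k, j] = False
    return adj
```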
Abstract:
Background: Postal and electronic questionnaires are widely used for data collection in epidemiological studies, but non-response reduces the effective sample size and can introduce bias. Finding ways to increase response to postal and electronic questionnaires would improve the quality of health research. Objectives: To identify effective strategies to increase response to postal and electronic questionnaires. Search strategy: We searched 14 electronic databases to February 2008 and manually searched the reference lists of relevant trials and reviews, and all issues of two journals. We contacted the authors of all trials or reviews to ask about unpublished trials. Where necessary, we also contacted authors to confirm methods of allocation used and to clarify results presented. We assessed the eligibility of each trial using pre-defined criteria. Selection criteria: Randomised controlled trials of methods to increase response to postal or electronic questionnaires. Data collection and analysis: We extracted data on the trial participants, the intervention, the number randomised to intervention and comparison groups, and allocation concealment. For each strategy, we estimated pooled odds ratios (OR) and 95% confidence intervals (CI) in a random-effects model. We assessed evidence for selection bias using Egger's weighted regression method, Begg's rank correlation test, and funnel plots. We assessed heterogeneity among trial odds ratios using a Chi² test and quantified the degree of inconsistency between trial results using the I² statistic. Main results: Postal: We found 481 eligible trials. The trials evaluated 110 different ways of increasing response to postal questionnaires. We found substantial heterogeneity among trial results in half of the strategies. The odds of response were roughly doubled or more using monetary incentives (odds ratio 1.87; 95% CI 1.73 to 2.04; heterogeneity P < 0.00001, I² = 84%), recorded delivery (1.76; 95% CI 1.43 to 2.18; P = 0.0001, I² = 71%), a teaser on the envelope, e.g. a comment suggesting to participants that they may benefit if they open it (3.08; 95% CI 1.27 to 7.44), and a more interesting questionnaire topic (2.00; 95% CI 1.32 to 3.04; P = 0.06, I² = 80%). The odds of response were substantially higher with pre-notification (1.45; 95% CI 1.29 to 1.63; P < 0.00001, I² = 89%), follow-up contact (1.35; 95% CI 1.18 to 1.55; P < 0.00001, I² = 76%), unconditional incentives (1.61; 95% CI 1.36 to 1.89; P < 0.00001, I² = 88%), shorter questionnaires (1.64; 95% CI 1.43 to 1.87; P < 0.00001, I² = 91%), providing a second copy of the questionnaire at follow-up (1.46; 95% CI 1.13 to 1.90; P < 0.00001, I² = 82%), mentioning an obligation to respond (1.61; 95% CI 1.16 to 2.22; P = 0.98, I² = 0%) and university sponsorship (1.32; 95% CI 1.13 to 1.54; P < 0.00001, I² = 83%). The odds of response were also increased with non-monetary incentives (1.15; 95% CI 1.08 to 1.22; P < 0.00001, I² = 79%), personalised questionnaires (1.14; 95% CI 1.07 to 1.22; P < 0.00001, I² = 63%), use of hand-written addresses (1.25; 95% CI 1.08 to 1.45; P = 0.32, I² = 14%), use of stamped return envelopes as opposed to franked return envelopes (1.24; 95% CI 1.14 to 1.35; P < 0.00001, I² = 69%), an assurance of confidentiality (1.33; 95% CI 1.24 to 1.42) and first class outward mailing (1.11; 95% CI 1.02 to 1.21; P = 0.78, I² = 0%). The odds of response were reduced when the questionnaire included questions of a sensitive nature (0.94; 95% CI 0.88 to 1.00; P = 0.51, I² = 0%).
Electronic: We found 32 eligible trials. The trials evaluated 27 different ways of increasing response to electronic questionnaires. We found substantial heterogeneity among trial results in half of the strategies. The odds of response were increased by more than half using non-monetary incentives (1.72; 95% CI 1.09 to 2.72; heterogeneity P < 0.00001, I² = 95%), shorter e-questionnaires (1.73; 95% CI 1.40 to 2.13; P = 0.08, I² = 68%), including a statement that others had responded (1.52; 95% CI 1.36 to 1.70), and a more interesting topic (1.85; 95% CI 1.52 to 2.26). The odds of response increased by a third using a lottery with immediate notification of results (1.37; 95% CI 1.13 to 1.65), an offer of survey results (1.36; 95% CI 1.15 to 1.61), and a white background (1.31; 95% CI 1.10 to 1.56). The odds of response were also increased with personalised e-questionnaires (1.24; 95% CI 1.17 to 1.32; P = 0.07, I² = 41%), a simple header (1.23; 95% CI 1.03 to 1.48), textual representation of response categories (1.19; 95% CI 1.05 to 1.36), and giving a deadline (1.18; 95% CI 1.03 to 1.34). The odds of response tripled when a picture was included in an e-mail (3.05; 95% CI 1.84 to 5.06; P = 0.27, I² = 19%). The odds of response were reduced when "Survey" was mentioned in the e-mail subject line (0.81; 95% CI 0.67 to 0.97; P = 0.33, I² = 0%), and when the e-mail included a male signature (0.55; 95% CI 0.38 to 0.80; P = 0.96, I² = 0%). Authors' conclusions: Health researchers using postal and electronic questionnaires can increase response using the strategies shown to be effective in this systematic review. Copyright © 2009 The Cochrane Collaboration. Published by John Wiley & Sons, Ltd.
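For readers unfamiliar with the pooling machinery, a minimal sketch of the random-effects pooling of odds ratios used per strategy, with DerSimonian-Laird weights, Cochran's Q, and the I² statistic; the 2x2 counts are illustrative, not data from the review.

```python
# Random-effects pooling of log odds ratios (DerSimonian-Laird) with Q and I^2.
# The (responders, non-responders) counts per arm below are illustrative.
import numpy as np
from scipy.stats import chi2

trials = [(120, 80, 90, 110), (45, 55, 30, 70), (200, 300, 150, 350)]
y = np.array([np.log((a * d) / (b * c)) for a, b, c, d in trials])  # log ORs
v = np.array([1/a + 1/b + 1/c + 1/d for a, b, c, d in trials])      # variances

w = 1 / v                                              # fixed-effect weights
Q = np.sum(w * (y - np.sum(w * y) / w.sum())**2)       # Cochran's Q
k = len(y)
tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - np.sum(w**2) / w.sum()))  # DL tau^2
I2 = max(0.0, (Q - (k - 1)) / Q) * 100                 # inconsistency, %

w_re = 1 / (v + tau2)                                  # random-effects weights
mu = np.sum(w_re * y) / w_re.sum()
se = np.sqrt(1 / w_re.sum())
print(f"pooled OR {np.exp(mu):.2f} "
      f"(95% CI {np.exp(mu - 1.96*se):.2f} to {np.exp(mu + 1.96*se):.2f}), "
      f"Q p = {chi2.sf(Q, k - 1):.3f}, I2 = {I2:.0f}%")
```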
Abstract:
Background: Pharmacogenetics is a rapidly growing field that aims to identify the genes that influence drug response. This science can be used as a powerful tool to tailor drug treatment to the genetic makeup of individuals. The present study explores UK newsprint media coverage of pharmacogenetics and its potential benefits for personalised medicine.
Methods: The LexisNexis database was used to identify and retrieve full text articles from the 10 highest circulation national daily newspapers and their Sunday equivalents in the UK. Content analysis of newspaper articles which referenced pharmacogenetic testing was carried out. A second researcher coded a random sample (21%) of newspaper articles to establish the inter-rater reliability of coding.
Results: Of the 256 articles captured by the search terms, 96 articles (with pharmacogenetics as a major component) met the study inclusion criteria. The majority of articles overstated the benefits of pharmacogenetic testing while paying less attention to the associated risks; overall, beneficial effects were mentioned 5.3 times more frequently than risks (P < 0.001). The most common illnesses for which pharmacogenetically based personalised medicine was discussed were cancer, cardiovascular disease and CNS diseases. Only 13% of newspaper articles that cited a specific scientific study mentioned this link in the article. There was a positive correlation between the size of the article and both the number of benefits and risks stated (P < 0.01).
Conclusion: More comprehensive coverage of personalised medicine within the print media is needed to inform public debate on the inclusion of pharmacogenetic testing in routine practice.
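As a minimal sketch of the reliability check described in the methods (a second researcher double-coding a 21% random subsample), Cohen's kappa is one standard statistic for quantifying inter-rater agreement; all codes below are illustrative assumptions, not the study's data.

```python
# Inter-rater reliability on a double-coded random subsample (illustrative).
import random
from sklearn.metrics import cohen_kappa_score

random.seed(0)
articles = list(range(96))                                        # 96 included articles
subsample = random.sample(articles, round(0.21 * len(articles)))  # ~21% double-coded

# categories each researcher assigned to the double-coded articles (illustrative)
coder1 = (["benefit", "risk", "benefit", "neutral", "benefit"] * 4)[:len(subsample)]
coder2 = (["benefit", "risk", "risk", "neutral", "benefit"] * 4)[:len(subsample)]

print(f"Cohen's kappa: {cohen_kappa_score(coder1, coder2):.2f}")
```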
Abstract:
Purpose
The purpose of this paper is to investigate the impact of employees’ perceptions of high involvement work practices (HIWPs) on burnout (emotional exhaustion and depersonalisation) via the mediating role of role overload and procedural justice. Further, perceived colleague support was hypothesised to moderate the effects of role overload and procedural justice on these outcomes.
Design/Methodology
The study was conducted on a random sample of unionised registered nurses (RNs) working in the Canadian public health care sector, stratified by mission and size of institution to ensure representativeness. Of the 6546 nurses solicited, 2174 returned a completed questionnaire, a response rate of 33.2%. To test our hypotheses, we conducted structural equation modelling (SEM) with maximum likelihood (ML) estimation in Mplus version 6.0 (Muthén and Muthén, 1998–2010).
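As an illustration of the mediation logic only (the actual analysis was an SEM fit in Mplus), a regression-based product-of-coefficients check on simulated data might look as follows; all variable names and coefficients are hypothetical.

```python
# Illustrative check of the mediation paths HIWPs -> procedural justice /
# role overload -> emotional exhaustion, on simulated data (hypothetical names).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2174
df = pd.DataFrame({"hiwp": rng.normal(size=n)})
df["justice"] = 0.5 * df["hiwp"] + rng.normal(size=n)
df["overload"] = -0.4 * df["hiwp"] + rng.normal(size=n)
df["exhaustion"] = -0.3 * df["justice"] + 0.5 * df["overload"] + rng.normal(size=n)

a1 = smf.ols("justice ~ hiwp", df).fit().params["hiwp"]    # a-paths
a2 = smf.ols("overload ~ hiwp", df).fit().params["hiwp"]
m = smf.ols("exhaustion ~ hiwp + justice + overload", df).fit()
print("indirect via justice: ", a1 * m.params["justice"])  # a*b products
print("indirect via overload:", a2 * m.params["overload"])
print("direct effect:        ", m.params["hiwp"])          # c' path
```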
Results
The results showed that procedural justice and role overload fully mediated the influence of HIWPs on burnout. Moreover, colleague support moderated the effects of procedural justice and role overload on emotional exhaustion but not depersonalisation.
Limitations
The study used a cross-sectional research design and was conducted among a single occupational group (i.e., nurses).
Research/Practical Implications
The findings question the dark side of HRM in the health care context. They also help address the lack of theoretical and empirical work dedicated to understanding the 'black box' problem (Castanheira and Chambel, 2010).
Originality/Value
The study applies a well-known theoretical perspective from the occupational health psychology literature to the HR field, addressing the shortage of theorising on the HR–well-being link.
Abstract:
In recent years, the number of road traffic accident victims per million inhabitants in Portugal has been higher than the European Union average. At the national level, a better understanding of accident data and of the effect of the vehicle on accident severity is urgently needed. The main objective of this research was to develop accident severity prediction models for single-vehicle accidents and for collisions involving two vehicles. In addition, this research included the development of an integrated analysis to evaluate vehicle performance in terms of safety, energy efficiency, and pollutant emissions. Accident data were collected from the Portuguese Guarda Nacional Republicana (National Republican Guard) for the Porto metropolitan area for the period 2006-2010. A total of 1,374 accidents were collected: 500 single-vehicle accidents and 874 collisions. For the safety analysis, logistic regression models were used. For single-vehicle accidents, the effect of vehicle characteristics on the risk of serious injuries and/or fatalities (a binary response variable) was explored. For two-vehicle collisions, two additional binary variables were created: one to predict the probability of serious injuries and/or fatalities in one of the vehicles (designated vehicle V1) and another to predict the probability of serious injuries and/or fatalities in the other vehicle involved (designated vehicle V2). To overcome the challenge and limitations of the sample size and the imbalance between the analysed cases (only 5.1% severe accidents), a methodology based on a resampling strategy was developed, and 10 randomly and stratified generated samples were used for model validation. During the modelling phase, the effect of vehicle characteristics such as weight, engine displacement, wheelbase, and vehicle age was analysed. For the fuel consumption and emissions analysis, the CORINAIR methodology was applied. The emissions data were then modelled with linear regressions. Finally, an integrated analysis indicator (called "SEG") was developed, providing a classification method to evaluate vehicle performance in terms of road safety, fuel consumption, and pollutant emissions. Based on the results obtained, for single-vehicle accidents, the severity risk prediction model identified vehicle age and engine displacement as statistically significant for predicting the occurrence of serious injuries and/or fatalities at the 5% significance level. Model accuracy was 58.0% (standard deviation (S.D.) 3.1). For two-vehicle collisions, when predicting the probability of serious injuries and/or fatalities in vehicle V1, the engine displacement of the opposing vehicle (V2) increased the risk for the occupants of vehicle V1, at the 10% significance level. The model predicting severity risk in vehicle V1 performed well, with an accuracy of 61.2% (S.D. 2.4). When predicting the probability of serious injuries and/or fatalities in vehicle V2, the engine displacement of vehicle V1 increased the risk for the occupants of vehicle V2, at the 5% significance level. The model predicting severity risk in vehicle V2 also performed satisfactorily, with an accuracy of 40.5% (S.D. 2.1).
The results of the integrated SEG indicator showed that newer vehicles achieve a better classification in all three domains: safety, fuel consumption, and emissions. This research demonstrates that there is no conflict between the safety component, energy efficiency, and emissions in vehicle performance.
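A hedged sketch of the kind of resampling-and-validation loop described above: balance the rare severe class, fit a logistic regression on vehicle characteristics, and validate over 10 stratified random splits. The feature names and simulated data are illustrative assumptions, not the thesis's actual variables.

```python
# Logistic regression on imbalanced severity data with stratified resampling.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.utils import resample

rng = np.random.default_rng(1)
n = 1374                                       # one row per accident (simulated)
df = pd.DataFrame({
    "vehicle_age": rng.integers(0, 25, n),
    "engine_cc": rng.integers(900, 3000, n),
    "severe": rng.random(n) < 0.051,           # ~5.1% severe, as in the data
})

X, y = df[["vehicle_age", "engine_cc"]], df["severe"]
splits = StratifiedShuffleSplit(n_splits=10, test_size=0.3, random_state=1)
accs = []
for train, test in splits.split(X, y):
    tr = df.iloc[train]
    # upsample the severe minority class inside the training fold only
    minority = resample(tr[tr["severe"]], n_samples=int((~tr["severe"]).sum()),
                        replace=True, random_state=1)
    bal = pd.concat([tr[~tr["severe"]], minority])
    model = LogisticRegression().fit(bal[["vehicle_age", "engine_cc"]], bal["severe"])
    accs.append(model.score(X.iloc[test], y.iloc[test]))
print(f"accuracy {np.mean(accs):.3f} (S.D. {np.std(accs):.3f})")
```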
Abstract:
Aims: Evaluation of cough in patients with Chronic Obstructive Pulmonary Disease (COPD), and identification of the predictive factors that contribute to the deterioration of cough capacity in these patients. Type of study: Descriptive, observational, cross-sectional study. Case definition: The COPD diagnostic criteria were the clinical presentation and the gold standard for COPD diagnosis, spirometry. Target population: All patients with a primary diagnosis of COPD who attended the respiratory function service of Viseu Hospital for testing. Sampling method: A random sample consisting of all conscious and cooperative individuals who met the inclusion criteria and agreed to participate in the study. Sample size: 55 individuals who attended the respiratory function service between January and June 2009 for respiratory function tests. Study procedure: Patients who agreed to participate completed a clinical data questionnaire and performed five assessments: body mass index (BMI), respiratory function study, arterial blood gas analysis, evaluation of respiratory muscle strength (maximal inspiratory pressure (MIP) and maximal expiratory pressure (MEP)), and Peak Cough Flow (PCF) evaluation. Statistical analysis: Descriptive data characterising the sample were obtained, and the Peak Cough Flow value was then correlated with the results of the BMI, respiratory function study, MIP and MEP, arterial blood gas analysis, cough capacity evaluation, and the number of hospitalisations in the previous year for COPD exacerbations. Correlations between Peak Cough Flow and the remaining parameters were computed. Results: Peak Cough Flow values in the COPD population were lower than normal population values, with higher PCF values in males than in females. No relation was found between PCF and age, weight, height, or BMI, since cough does not vary with anthropometric parameters, unlike its relation with spirometric values. Regarding respiratory function parameters, significant positive relations were found between PCF and FEV1, FVC, and PEF: higher values of these parameters were correlated with higher cough peaks. PCF was negatively related to RAW and RV: greater airway resistance, or more hyperinflated patients, led to lower PCF values. No relation was found between PCF and FRC or TLC.
Regarding respiratory muscle strength, a significant relation was found with MIP and MEP: weakness of the respiratory muscles contributed to lower PCF values. For arterial blood gas values, PCF was positively related to PaO2, with hypoxaemic patients presenting lower cough values, and negatively related to PaCO2, with hypercapnic patients presenting lower PCF values; relations were also found between PCF and pH and sO2. A significant relation was found between the number of hospitalisations for COPD exacerbation in the previous year and PCF: lower PCF values contributed to a higher rate of hospitalisation for COPD exacerbation. Conclusion: These findings support the hypothesis first formulated, that Peak Cough Flow is decreased in individuals with Chronic Obstructive Pulmonary Disease, with PCF variation directly related to respiratory function parameters, respiratory muscle strength, and arterial blood gas values.
Abstract:
Stroke is one of the most common conditions requiring rehabilitation, and its motor impairments are a major cause of permanent disability. Hemiparesis is observed in 80% of patients after acute stroke. Neuroimaging studies have shown that real and imagined movements produce similar patterns of brain activation, supplying evidence that both rely on the same processes. Within this context, combining mental practice (MP) with physical and occupational therapy appears to be a natural complement based on neurorehabilitation concepts. Our study investigates whether MP is an effective adjunct therapy for upper limb stroke rehabilitation. Searches of PubMed (Medline), ISI Web of Knowledge (Institute for Scientific Information) and SciELO (Scientific Electronic Library) were completed on 20 February 2015. Data were collected on the following variables: sample size, type of supervision, configuration of mental practice, setting of the physical practice (intensity, number of sets and repetitions, duration of contractions, rest interval between sets, weekly and total duration), measures of sensorimotor deficits used in the main studies, and significant results. Random-effects models were used to take into account the variance within and between studies. Seven articles were selected. There was no statistically significant difference between the two groups (MP vs control): the pooled effect size was -0.6 (95% CI: -1.27 to 0.04) for upper limb motor restoration after stroke. The present meta-analysis concludes that MP is not effective as an adjunct therapeutic strategy for upper limb motor restoration after stroke.
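For reference, the standardized mean difference pooled in such a meta-analysis is typically computed per study as a bias-corrected Hedges' g with its sampling variance, which then feeds a random-effects pool like the one sketched earlier. The summary statistics below are illustrative, not data from the included trials.

```python
# Hedges' g (bias-corrected standardized mean difference) and its variance.
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """SMD between groups 1 and 2 with small-sample correction."""
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                       # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)          # small-sample correction factor
    g = j * d
    var = (n1 + n2) / (n1 * n2) + g**2 / (2 * (n1 + n2))
    return g, var

g, var = hedges_g(m1=42.0, sd1=10.0, n1=15, m2=48.0, sd2=11.0, n2=15)
print(f"g = {g:.2f}, 95% CI {g - 1.96*np.sqrt(var):.2f} to {g + 1.96*np.sqrt(var):.2f}")
```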
Abstract:
What role do social networks play in determining migrant labor market outcomes? We examine this question using data from a random sample of 1500 immigrants living in Ireland. We propose a theoretical model formally predicting that immigrants with more contacts have access to more job offers, and are therefore better able to become employed and to choose higher-paid jobs. Our empirical analysis confirms these predictions while, unlike most of the literature, examining the relationship between migrants' social networks and a broad set of labor market outcomes (namely wages, employment, occupational choice and job security). We find evidence that having one more contact in the network is associated with an increase of 11 percentage points in the probability of being employed and with an increase of about 100 euros in the average salary. However, our data are not suggestive of a network-size effect on occupational choice or job security. Our findings are robust to sample selection and other endogeneity concerns.
Abstract:
In this thesis we develop bootstrap methods for high-frequency financial data. The first two essays focus on bootstrap methods applied to the "pre-averaging" approach that are robust to the presence of microstructure noise. Pre-averaging reduces the influence of microstructure effects before realized volatility is computed. Building on this approach to estimating integrated volatility in the presence of microstructure noise, we develop several bootstrap methods that preserve the dependence structure and the heterogeneity in the mean of the original data. The third essay develops a bootstrap method under the assumption of local Gaussianity of high-frequency financial data. The first chapter is entitled "Bootstrap inference for pre-averaged realized volatility based on non-overlapping returns". In this chapter, we propose bootstrap methods that are robust to the presence of microstructure noise. In particular, we focus on realized volatility based on the pre-averaged returns proposed by Podolskij and Vetter (2009), where pre-averaged returns are constructed over non-overlapping blocks of consecutive high-frequency returns. Pre-averaging reduces the influence of microstructure effects before realized volatility is computed. Because the blocks do not overlap, the pre-averaged returns are asymptotically independent, but possibly heteroskedastic, which motivates the application of the wild bootstrap in this context. We establish the theoretical validity of the bootstrap for constructing percentile and percentile-t intervals. Monte Carlo simulations show that the bootstrap can improve the finite-sample properties of the integrated volatility estimator relative to the asymptotic results, provided the external random variable is chosen appropriately. We illustrate these methods using real financial data. The second chapter is entitled "Bootstrapping pre-averaged realized volatility under market microstructure noise". In this chapter, we develop a block bootstrap method based on the pre-averaging approach of Jacod et al. (2009), where pre-averaged returns are constructed over overlapping blocks of consecutive high-frequency returns. The overlap induces strong dependence in the structure of the pre-averaged returns: they are m-dependent, with m growing more slowly than the sample size n. This motivates a specific block bootstrap. We show that the block bootstrap suggested by Bühlmann and Künsch (1995) is valid only when volatility is constant, because of the heterogeneity in the mean of the squared pre-averaged returns when volatility is stochastic. We therefore propose a new bootstrap procedure that combines the wild bootstrap and the block bootstrap, so that the serial dependence of the pre-averaged returns is preserved within blocks while the homogeneity condition required for bootstrap validity is satisfied. Under conditions on the block size, we show that this method is consistent.
Monte Carlo simulations show that the bootstrap improves the finite-sample properties of the integrated volatility estimator relative to the asymptotic results. We illustrate the method using real financial data. The third chapter is entitled "Bootstrapping realized covolatility measures under local Gaussianity assumption". In this chapter, we show how, and to what extent, the distributions of estimators of covolatility measures can be approximated under the assumption of local Gaussianity of returns. In particular, we propose a new bootstrap method under these assumptions, focusing on realized volatility and the realized beta. We show that the new bootstrap method applied to the realized beta replicates the second-order cumulants, while it provides a third-order improvement when applied to realized volatility. These results improve upon the existing results in this literature, notably those of Gonçalves and Meddahi (2009) and Dovonon, Gonçalves and Meddahi (2013). Monte Carlo simulations show that the bootstrap improves the finite-sample properties of the integrated volatility estimator relative to the asymptotic results and to existing bootstrap results. We illustrate the method using real financial data.
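A minimal sketch of the wild bootstrap idea from the first chapter, applied for brevity to plain (not pre-averaged) returns and using a standard normal external variable with E[eta^2] = 1; the data are simulated and the thesis's optimal choice of external variable is not reproduced here.

```python
# Wild bootstrap percentile interval for realized volatility (simplified).
import numpy as np

rng = np.random.default_rng(42)
r = rng.normal(scale=0.001, size=390)    # one day of 1-minute returns (simulated)
rv = np.sum(r**2)                        # realized volatility estimator

B = 999
# multiply each return by an i.i.d. external variable and recompute RV
rv_star = np.array([np.sum((rng.standard_normal(r.size) * r)**2)
                    for _ in range(B)])
lo, hi = np.percentile(rv_star, [2.5, 97.5])   # bootstrap percentile interval
print(f"RV = {rv:.3e}, 95% percentile interval [{lo:.3e}, {hi:.3e}]")
```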
Abstract:
Background: An important challenge in conducting social research of specific relevance to harm reduction programs is locating hidden populations of consumers of substances like cannabis who typically report few adverse or unwanted consequences of their use. Much of the deviant, pathologized perception of drug users is historically derived from, and empirically supported by, a research emphasis on gaining ready access to users in drug treatment or in prison populations with a higher incidence of problems of dependence and misuse. Because they are less visible, responsible recreational users of illicit drugs have been more difficult to study. Methods: This article investigates Respondent Driven Sampling (RDS) as a method of recruiting experienced marijuana users representative of users in the general population. Based on sampling conducted in a multi-city study (Halifax, Montreal, Toronto, and Vancouver), and compared with samples gathered using other research methods, we assess the strengths and weaknesses of RDS recruitment as a means of gaining access to illicit substance users who experience few harmful consequences of their use. Demographic characteristics of the Toronto sample are compared with those of users in a recent household survey and in a pilot study of Toronto, where the latter used nonrandom self-selection of respondents. Results: A modified approach to RDS was necessary to attain the target sample size in all four cities (i.e., 40 'users' from each site). The final Toronto sample was, however, largely similar to marijuana users in a random household survey carried out in the same city. Whereas well-educated, married, white, and female respondents in the survey were all somewhat overrepresented, the two samples, overall, were more alike than different with respect to economic status and employment. Furthermore, comparison with a self-selected sample suggests that (even modified) RDS recruitment is a cost-effective way of gathering respondents who are more representative of users in the general population than nonrandom methods of recruitment ordinarily produce. Conclusions: Research on marijuana use, and other forms of drug use hidden in the general population of adults, is important for informing and extending harm reduction beyond its current emphasis on 'at-risk' populations. Expanding harm reduction in a normalizing context, through innovative research on users often overlooked, further challenges assumptions about reducing harm through prohibition of drug use and urges consideration of alternative policies such as decriminalization and legal regulation.
Abstract:
This note considers variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully, which also allows us to identify the sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. The technique is applied to estimators typically used in capture–recapture experiments in continuous time, including the estimators of Zelterman and Chao, and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators of Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources also allows a new understanding of how resampling techniques like the bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures such as the observed-to-hidden ratio or the completeness-of-identification proportion when approaching the question of sample size choice.
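For concreteness, conditioning decomposes the variance as Var(N̂) = E[Var(N̂ | n)] + Var(E[N̂ | n]), separating parameter-estimation variance from binomial sampling variance. A minimal sketch of the Chao lower-bound estimator with its classical (Chao 1987) variance approximation, which the note's decomposition refines; the counts are illustrative.

```python
# Chao lower-bound population size estimate from capture-recapture frequency
# counts, with the classical Chao (1987) variance approximation.
def chao_estimate(f1, f2, n):
    """f1, f2: units captured exactly once/twice; n: total units observed."""
    n_hat = n + f1**2 / (2 * f2)              # Chao's lower-bound estimator
    r = f1 / f2
    var = f2 * (r**4 / 4 + r**3 + r**2 / 2)   # Chao (1987) variance approximation
    return n_hat, var

n_hat, var = chao_estimate(f1=50, f2=20, n=150)   # illustrative counts
print(f"N_hat = {n_hat:.1f}, SE = {var**0.5:.1f}")
```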
Abstract:
Cost-sharing, which involves a government–farmer partnership in the funding of agricultural extension services, is one of the reforms aimed at achieving sustainable funding for extension systems. This study examined the perceptions of farmers and extension professionals regarding this reform agenda in Nigeria. The study was carried out in the six geopolitical zones of Nigeria. A multi-stage random sampling technique was applied in the selection of respondents: a sample of 268 farmers and 272 Agricultural Development Programme (ADP) extension professionals participated in the study. Both descriptive and inferential statistics were used in analysing the data generated from this research. The results show that the majority of farmers (80.6%) and extension professionals (85.7%) held favourable perceptions of cost-sharing. Furthermore, the overall difference in their perceptions was not significant (t = 0.03). The study concludes that the strong favourable perceptions held by the respondents point towards acceptance of the reform. It therefore recommends that government, extension administrators and policymakers design and formulate effective strategies and regulations for the introduction and use of cost-sharing as an alternative approach to financing agricultural technology transfer in Nigeria.
Abstract:
In the forecasting of binary events, verification measures that are “equitable” were defined by Gandin and Murphy to satisfy two requirements: 1) they award all random forecasting systems, including those that always issue the same forecast, the same expected score (typically zero), and 2) they are expressible as the linear weighted sum of the elements of the contingency table, where the weights are independent of the entries in the table, apart from the base rate. The authors demonstrate that the widely used “equitable threat score” (ETS), as well as numerous others, satisfies neither of these requirements and only satisfies the first requirement in the limit of an infinite sample size. Such measures are referred to as “asymptotically equitable.” In the case of ETS, the expected score of a random forecasting system is always positive and only falls below 0.01 when the number of samples is greater than around 30. Two other asymptotically equitable measures are the odds ratio skill score and the symmetric extreme dependency score, which are more strongly inequitable than ETS, particularly for rare events; for example, when the base rate is 2% and the sample size is 1000, random but unbiased forecasting systems yield an expected score of around −0.5, reducing in magnitude to −0.01 or smaller only for sample sizes exceeding 25 000. This presents a problem since these nonlinear measures have other desirable properties, in particular being reliable indicators of skill for rare events (provided that the sample size is large enough). A potential way to reconcile these properties with equitability is to recognize that Gandin and Murphy’s two requirements are independent, and the second can be safely discarded without losing the key advantages of equitability that are embodied in the first. This enables inequitable and asymptotically equitable measures to be scaled to make them equitable, while retaining their nonlinearity and other properties such as being reliable indicators of skill for rare events. It also opens up the possibility of designing new equitable verification measures.
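The asymptotic-equitability claim is easy to check numerically. With the standard definition ETS = (a − a_r)/(a + b + c − a_r), where a, b, c are hits, false alarms, and misses and a_r = (a + b)(a + c)/n is the number of hits expected by chance, a random unbiased forecasting system has a positive expected score at small n, as the abstract states. A minimal simulation sketch with illustrative parameters:

```python
# Expected ETS of a random, unbiased forecasting system at small sample size.
import numpy as np

rng = np.random.default_rng(0)

def ets(obs, fcst):
    a = np.sum(fcst & obs)               # hits
    b = np.sum(fcst & ~obs)              # false alarms
    c = np.sum(~fcst & obs)              # misses
    a_r = (a + b) * (a + c) / obs.size   # hits expected from a random forecast
    denom = a + b + c - a_r
    return (a - a_r) / denom if denom else float("nan")

base_rate, n, trials = 0.2, 30, 20000    # illustrative parameters
scores = [ets(rng.random(n) < base_rate, rng.random(n) < base_rate)
          for _ in range(trials)]
print(f"mean ETS of random forecasts at n={n}: {np.nanmean(scores):.3f}")  # > 0
```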