965 results for Panel Studies
Abstract:
We review all journal articles based on “PSED-type” research, i.e., longitudinal, empirical studies of large probability samples of ongoing business start-up efforts. We conclude that the research stream has yielded interesting findings; sometimes by confirming prior research with a less bias-prone methodology and at other times by challenging whether prior conclusions are valid for the early stages of venture development. Most importantly, the research has addressed new, process-related research questions that prior research has shunned or been unable to study in a rigorous manner. The research has revealed an enormous and fascinating variability in new venture creation that also makes it challenging to arrive at broadly valid generalizations. An analysis of the findings across studies, as well as an examination of those studies that have been relatively more successful at explaining outcomes, gives good guidance regarding what is required in order to achieve strong and credible results. We compile and present such advice to users of existing data sets and designers of new projects in the following areas: statistically representative and/or theoretically relevant sampling; level-of-analysis issues; dealing with process heterogeneity; dealing with other heterogeneity issues; and choice and interpretation of dependent variables.
Abstract:
Longitudinal panel studies of large, random samples of business start-ups captured at the pre-operational stage allow researchers to address core issues for entrepreneurship research, namely, the processes of creation of new business ventures as well as their antecedents and outcomes. Here, we perform a methods-orientated review of all 83 journal articles that have used this type of data set, our purpose being to assist users of current data sets as well as designers of new projects in making the best use of this innovative research approach. Our review reveals a number of methods issues that are largely particular to this type of research. We conclude that amidst exemplary contributions, much of the reviewed research has not adequately managed these methods challenges, nor has it made use of the full potential of this new research approach. Specifically, we identify and suggest remedies for context-specific and interrelated methods challenges relating to sample definition, choice of level of analysis, operationalization and conceptualization, use of longitudinal data and dealing with various types of problematic heterogeneity. In addition, we note that future research can make further strides towards full utilization of the advantages of the research approach through better matching (from either direction) between theories and the phenomena captured in the data, and by addressing some under-explored research questions for which the approach may be particularly fruitful.
Abstract:
Accompanying the call for increased evidence-based policy, the developed world is implementing more longitudinal panel studies, which periodically gather information about the same people over a number of years. Panel studies distinguish between transitory and persistent states (e.g. poverty, unemployment) and facilitate causal explanations of relationships between variables. However, they are complex and costly. A growing number of developing countries are now implementing or considering starting panel studies. The objectives of this paper are to identify challenges that arise in panel studies, and to give examples of how these have been addressed in resource-constrained environments. The main issues considered are: the development of a conceptual framework which links macro and micro contexts; sampling the cohort in a cost-effective way; tracking individuals; ethics; and data management and analysis. Panel studies require long-term funding, a stable institution, and an acceptance that there will be limited value for money in terms of results from the early stages, with greater benefits accruing in the study's mature years. Copyright © 2003 John Wiley & Sons, Ltd.
Abstract:
This case study deals with the role of time series analysis in sociology, and its relationship with the wider literature and methodology of comparative case study research. Time series analysis is now well represented in top-ranked sociology journals, often in the form of ‘pooled time series’ research designs. These studies typically pool multiple countries together into a pooled time series cross-section panel, in order to provide a larger sample for more robust and comprehensive analysis. This approach is well suited to exploring transnational phenomena, and to elaborating useful macro-level theories specific to social structures, national policies, and long-term historical processes. It is less suited, however, to understanding how these global social processes work in different countries. As such, the complexities of individual countries, which often display very different or even contradictory dynamics from those suggested in pooled studies, are subsumed. Meanwhile, a robust literature on comparative case-based methods exists in the social sciences, where researchers focus on differences between cases, and the complex ways in which they co-evolve or diverge over time. A good example of this is the inequality literature: although panel studies suggest a general trend of rising inequality driven by the weakening power of labour, the marketisation of welfare, and the rising power of capital, some countries have still managed to remain resilient. This case study takes a closer look at what can be learned by applying the insights of case-based comparative research to the method of time series analysis. Taking international income inequality as its point of departure, it argues that we have much to learn about the viability of different combinations of policy options by examining how they work in different countries over time.
By taking representative cases from different welfare systems (liberal, social democratic, corporatist, or antipodean), we can better sharpen our theories of how policies can be more specifically engineered to offset rising inequality. This involves a fundamental realignment of the strategy of time series analysis, grounding it instead in a qualitative appreciation of the historical context of cases, as a basis for comparing effects between different countries.
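As a toy illustration of the pooled time-series cross-section design this abstract discusses, the following sketch (all data and variable names invented, not from the study) estimates a single pooled effect across a country-year panel using the within transformation to absorb country fixed effects:

```python
import numpy as np

# Hypothetical country-year panel: inequality depends on a covariate
# (e.g. union density) plus an unobserved country-level effect.
rng = np.random.default_rng(0)
countries, years = 4, 20
alpha = rng.normal(0, 2, countries)         # unobserved country effects
x = rng.normal(size=(countries, years))     # covariate, one row per country
beta = -0.5                                 # true pooled effect
y = alpha[:, None] + beta * x + rng.normal(0, 0.1, (countries, years))

# Within transformation: demean each country's series, then pool
# all country-years into one regression.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
beta_hat = (xd * yd).sum() / (xd ** 2).sum()
print(round(beta_hat, 2))  # close to the true -0.5
```

The demeaning step is exactly what makes the pooled estimate blind to stable between-country differences, which is the limitation the abstract's case-based critique targets.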
Abstract:
This thesis investigates the acute effects of air pollution on the peak expiratory flow (PEF) of schoolchildren aged 6 to 15 years living in municipalities of the Brazilian Amazon. The first article evaluated the effects of fine particulate matter (PM2.5) on the PEF of 309 schoolchildren in the municipality of Alta Floresta, Mato Grosso (MT), during the 2006 dry season. Mixed-effects models were estimated for the whole sample and stratified by school shift and presence of asthma symptoms. The second article presents the strategies used to determine the variance function of the random error in the mixed-effects models. The third article analyzes data from the panel study of 234 schoolchildren conducted in the 2008 dry season in Tangará da Serra, MT. Linear and polynomial distributed lag (PDLM) effects of inhalable particulate matter (PM10), PM2.5 and black carbon (BC) on PEF were assessed for all schoolchildren and stratified by age group. In all three articles, the mixed-effects models were adjusted for temporal trend, temperature, humidity and individual characteristics. The models also accounted for residual autocorrelation and for the variance function of the random error. Regarding exposure, the effects of 5-h, 6-h, 12-h and 24-h exposures were evaluated for the current day, at lags of 1 to 5 days, and as 2- and 3-day moving averages. For the Alta Floresta results, the models for all children indicated reductions in PEF ranging from 0.26 L/min (95% CI: 0.49; 0.04) to 0.38 L/min (95% CI: 0.71; 0.04) for each 10 µg/m³ increase in PM2.5. No significant pollution effects were observed in the group of asthmatic children. The 24-h exposure had a significant effect in the afternoon-shift group and in the non-asthmatic group. The exposure from 0:00 to 5:30 was significant for both morning- and afternoon-shift students.
In Tangará da Serra, the results showed significant reductions in PEF for 10-unit increases in the pollutant, mainly at lags of 3, 4 and 5 days. For PM10, the reductions ranged from 0.15 (95% CI: 0.29; 0.01) to 0.25 L/min (95% CI: 0.40; 0.10). For PM2.5, the reductions were between 0.46 L/min (95% CI: 0.86; 0.06) and 0.54 L/min (95% CI: 0.95; 0.14). For BC, the reduction was approximately 0.014 L/min. Regarding the PDLM, the most important effects were observed in models based on exposure from the current day up to 5 days earlier. The overall effect was significant only for PM10, with a PEF reduction of 0.31 L/min (95% CI: 0.56; 0.05). This approach also indicated significant lagged effects for all pollutants. Finally, the study identified children aged 6 to 8 years as the group most sensitive to pollution effects. The thesis findings suggest that air pollution from biomass burning is associated with reduced PEF in children and adolescents aged 6 to 15 years living in the Brazilian Amazon.
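The lagged and moving-average exposure metrics this abstract describes (lags of 1 to 5 days; 2- and 3-day moving averages) can be sketched as follows, with toy concentration values rather than the thesis data:

```python
import numpy as np

# Toy daily PM2.5 series (µg/m³); values are invented for illustration.
pm25 = np.array([12.0, 35.0, 28.0, 60.0, 44.0, 31.0, 25.0])

def lagged(series, lag):
    """Exposure `lag` days before each day (NaN where undefined)."""
    out = np.full_like(series, np.nan)
    if lag == 0:
        return series.copy()
    out[lag:] = series[:-lag]
    return out

def moving_average(series, window):
    """Mean of the current day and the previous `window - 1` days."""
    out = np.full_like(series, np.nan)
    for i in range(window - 1, len(series)):
        out[i] = series[i - window + 1 : i + 1].mean()
    return out

print(lagged(pm25, 3))          # lag-3 exposure series
print(moving_average(pm25, 2))  # 2-day moving average
```

Each of these derived series would then enter a mixed-effects regression on PEF as a candidate exposure term, one model per metric.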
Abstract:
A study was made to determine the maximum permissible time lag, under both iced and non-iced storage conditions, between the catching of mackerel (Rastrelliger kanagurta) and its curing, so that the quality of the finished product remains within tolerable limits. Based on physical, chemical, bacteriological and taste panel studies, the maximum permissible time lag is fixed at 8 hours under non-iced conditions and 3 days under iced conditions. Icing of fish is also found to affect the tasting qualities of the finished product.
Abstract:
Within Western societies, women and girls now outperform men and boys with regard to attainment in primary and secondary education. For example, concerning upper secondary degrees, the share of females attaining the Matura approaches two thirds in Switzerland, while the share of females attaining the Baccalaureate exceeds fifty per cent in France. However, if transitions to higher education are considered, the share of eligible females entering such institutions in Switzerland is significantly lower than among men. The opposite pattern is observed in France, where females outperform males at this educational stage as well. With regard to migrant background, previous research focussing on secondary effects of ethnic origin has shown that such youths enter the more demanding educational tracks (e.g. higher education) more often than their non-migrant peers once eligibility, their lower socioeconomic status and performance are controlled for. However, so far only a few studies address the question of a possible gender gap in the secondary effects of ethnic origin (e.g. Fleischmann et al., mimeo). Thus, with regard to a possible interaction between migrant background and, for example, female gender, it is important to note that in both countries many migrant groups have their origins in countries and regions where male advantage remains very strong. This is in particular the case for migrant groups from non-Western countries, e.g. Turkey, Algeria, Morocco or Tunisia, where gender gaps in literacy rates of up to 18 per cent are still observed. In order to investigate the possible disadvantage of women with a migrant background from such countries, compared to non-migrant females, two panel studies, the TREE data in Switzerland and the Panel d'élèves 1995 in France, are analysed.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend lots of resources getting high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage the information available from other sources of data. For example, in longitudinal studies that suffer from attrition, they can use the information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when the analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot reveal the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing-data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be utilized to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias-correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
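As a minimal simulated illustration, not the thesis's Bayesian method, the following sketch shows why a refreshment sample is informative about nonignorable attrition: when dropout depends on the wave-2 outcome itself, the completers' mean is biased, while a fresh random sample drawn at wave 2 recovers the population value:

```python
import numpy as np

rng = np.random.default_rng(1)

# True wave-2 outcome in the population: mean 0.
pop = rng.normal(0.0, 1.0, 100_000)

# Nonignorable attrition: respondents with low outcome values
# drop out more often (retention depends on the outcome itself).
keep_prob = 1 / (1 + np.exp(-2 * pop))
completers = pop[rng.random(pop.size) < keep_prob]

# Refreshment sample: a new random draw at wave 2, free of attrition.
refresh = rng.normal(0.0, 1.0, 5_000)

print(round(completers.mean(), 2))  # biased upward
print(round(refresh.mean(), 2))     # near the population truth
```

The gap between the two means is exactly the signal the refreshment sample contributes; the correction methods the abstract reviews exploit it formally.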
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data.
We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
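The data-augmentation idea described above (synthetic records whose margin matches the prior belief, with all other variables left missing) can be sketched as follows; the variables, prior, and counts are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)

# Original categorical data: variable 0 coded 0/1/2, variable 1 coded 0/1.
original = np.column_stack([
    rng.integers(0, 3, 200),
    rng.integers(0, 2, 200),
]).astype(float)

# Prior belief about the margin of variable 0: P = (0.5, 0.3, 0.2).
prior_margin = np.array([0.5, 0.3, 0.2])
n_aug = 100  # number of synthetic records controls prior strength

# Build synthetic records reproducing the prior margin exactly,
# with variable 1 left missing (NaN).
counts = np.round(prior_margin * n_aug).astype(int)
var0 = np.repeat(np.arange(3), counts)
synthetic = np.column_stack([var0, np.full(var0.size, np.nan)])

augmented = np.vstack([original, synthetic])

# The synthetic rows carry the prior margin into the concatenated data.
emp = np.bincount(synthetic[:, 0].astype(int), minlength=3) / synthetic.shape[0]
print(emp)  # [0.5 0.3 0.2]
```

In the actual method, `augmented` would be passed to an MCMC sampler for the latent class model, with the NaN entries treated as ordinary missing data; increasing `n_aug` tightens the prior on the margin.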
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
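As a hedged illustration of the third method's premise, not the thesis model itself (all rates and variables invented), a simple over-reporting mechanism shows how a misreporting rate learned from a gold-standard source can correct a biased margin:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical truth: 30% of respondents hold a degree; 10% of
# non-graduates over-report one (e.g. claiming a higher attainment).
true_grad_rate = 0.30
overreport_rate = 0.10

truth = rng.random(50_000) < true_grad_rate
reported = truth | (~truth & (rng.random(truth.size) < overreport_rate))

# Observed margin is inflated: P(report) = p + (1 - p) * q.
observed = reported.mean()

# With q estimated from a gold-standard survey, invert the relation.
corrected = (observed - overreport_rate) / (1 - overreport_rate)
print(round(observed, 2), round(corrected, 2))
```

The thesis's approach is far more general (full reporting-error models and multiply imputed corrected values), but the same logic applies: the gold-standard source pins down the error process that the error-prone survey alone cannot identify.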
Abstract:
BACKGROUND: Since the publication of the 2006 American College of Chest Physicians (CHEST) cough guidelines, a variety of tools has been developed or further refined for assessing cough. The purpose of the present committee was to evaluate instruments used by investigators performing clinical research on chronic cough. The specific aims were to (1) assess the performance of tools designed to measure cough frequency, severity, and impact in adults, adolescents, and children with chronic cough and (2) make recommendations or suggestions related to these findings.
METHODS: By following the CHEST methodologic guidelines, the CHEST Expert Cough Panel based its recommendations and suggestions on a recently published comparative effectiveness review commissioned by the US Agency for Healthcare Research and Quality, a corresponding summary published in CHEST, and an updated systematic review through November 2013. Recommendations or suggestions based on these data were discussed, graded, and voted on during a meeting of the Expert Cough Panel.
RESULTS: We recommend for adults, adolescents (≥ 14 years of age), and children complaining of chronic cough that validated and reliable health-related quality-of-life (QoL) questionnaires be used as the measurement of choice to assess the impact of cough, such as the Leicester Cough Questionnaire and the Cough-Specific Quality-of-Life Questionnaire in adult and adolescent patients and the Parent Cough-Specific Quality of Life Questionnaire in children. We recommend acoustic cough counting to assess cough frequency but not cough severity. Limited data exist regarding the performance of visual analog scales, numeric rating scales, and tussigenic challenges.
CONCLUSIONS: Validated and reliable cough-specific health-related QoL questionnaires are recommended as the measurement of choice to assess the impact of cough on patients. How they compare is yet to be determined. When used, the reporting of cough severity by visual analog or numeric rating scales should be standardized. Previously validated QoL questionnaires or other cough assessments should not be modified unless the new version has been shown to be reliable and valid. Finally, in research settings, tussigenic challenges play a role in understanding mechanisms of cough.
Abstract:
BACKGROUND: Successful management of chronic cough has varied in the primary research studies in the reported literature. One of the potential reasons relates to a lack of intervention fidelity to the core elements of the diagnostic and/or therapeutic interventions that were meant to be used by the investigators.
METHODS: We conducted a systematic review to summarize the evidence supporting intervention fidelity as an important methodologic consideration in assessing the effectiveness of clinical practice guidelines used for the diagnosis and management of chronic cough. We developed and used a tool to assess for five areas of intervention fidelity. Medline (PubMed), Scopus, and the Cochrane Database of Systematic Reviews were searched from January 1998 to May 2014. Guideline recommendations and suggestions for those conducting research using guidelines or protocols to diagnose and manage chronic cough in the adult were developed and voted upon using CHEST Organization methodology.
RESULTS: A total of 23 studies (17 uncontrolled prospective observational, two randomized controlled, and four retrospective observational) met our inclusion criteria. These articles included 3,636 patients. Data could not be pooled for meta-analysis because of heterogeneity. Findings related to the five areas of intervention fidelity included three areas primarily related to the provider and two primarily related to the patients. In the area of study design, 11 of 23 studies appeared to be underpinned by a single guideline/protocol; for training of providers, two of 23 studies reported training, and zero of 23 reported the use of an intervention manual; and for the area of delivery of treatment, when assessing the treatment of gastroesophageal reflux disease, three of 23 studies appeared consistent with the most recent guideline/protocol referenced by the authors. For receipt of treatment, zero of 23 studies mentioned measuring concordance of patient-interventionist understanding of the treatment recommended, and zero of 23 mentioned measuring enactment of treatment, with three of 23 measuring side effects and two of 23 measuring adherence. The overall average intervention fidelity score for all 23 studies was poor (20.74 out of 48).
CONCLUSIONS: Only low-quality evidence supports that intervention fidelity strategies were used when conducting primary research in diagnosing and managing chronic cough in adults. This supports the contention that some of the variability in the reporting of patients with unexplained or unresolved chronic cough may be due to lack of intervention fidelity. By following the recommendations and suggestions in this article, researchers will likely be better able to incorporate strategies to address intervention fidelity, thereby strengthening the validity and generalizability of their results that provide the basis for the development of trustworthy guidelines.
Abstract:
The continued global spread and evolution of HIV diversity pose significant challenges to diagnostics and vaccine strategies. NIAID partnered with the FDA, WRAIR, academia, and industry to form a Viral Panel Working Group to design and prepare a panel of well-characterized current and diverse HIV isolates. Plasma samples that had screened positive for HIV infection and had evidence of recently acquired infection were donated by blood centers in North and South America, Europe, and Africa. A total of 80 plasma samples were tested by quantitative nucleic acid tests, p24 antigen, EIA, and Western blot to assign a Fiebig stage indicative of approximate time from initial infection. Evaluation of viral load using FDA-cleared assays showed excellent concordance when subtype B virus was tested, but lower correlations for subtype C. Plasma samples were cocultivated with phytohemagglutinin (PHA)-stimulated peripheral blood mononuclear cells (PBMCs) from normal donors to generate 30 viral isolates (50-80% success rate for samples with viral load >10,000 copies/ml), which were then expanded to 10⁷-10⁹ virus copies per ml. Analysis of env sequences showed that sequences derived from cultured PBMCs were not distinguishable from those obtained from the original plasma. The pilot collection includes 30 isolates representing subtypes B, C, B/F, CRF04_cpx, and CRF02_AG. These studies will serve as a basis for the development of a comprehensive panel of highly characterized viral isolates that reflects the current dynamic and complex HIV epidemic, and will be made available through the External Quality Assurance Program Oversight Laboratory (EQAPOL).