915 results for Secondary Data Analysis


Relevance:

100.00%

Publisher:

Abstract:

A time series is a sequence of observations made over time. Examples in public health include daily ozone concentrations, weekly admissions to an emergency department or annual expenditures on health care in the United States. Time series models are used to describe the dependence of the response at each time on predictor variables including covariates and possibly previous values in the series. Time series methods are necessary to account for the correlation among repeated responses over time. This paper gives an overview of time series ideas and methods used in public health research.
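The serial dependence such models capture can be illustrated with a first-order autoregressive (AR(1)) model, in which each observation depends on its predecessor; the simulation below is a generic sketch, not an example from the paper, and the parameter values are made up.

```python
import numpy as np

# AR(1) model: y[t] = phi * y[t-1] + e[t]. The coefficient phi measures
# how strongly each observation depends on the previous one.
rng = np.random.default_rng(0)
phi_true, n = 0.7, 5000
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi_true * y[t - 1] + rng.normal()

# Regressing y[t] on y[t-1] recovers phi as the lag-1 slope.
phi_hat = np.polyfit(y[:-1], y[1:], 1)[0]
```

Ignoring this correlation (e.g., by applying ordinary regression to such data) understates the uncertainty of estimates, which is why dedicated time series methods are needed.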

Relevance:

100.00%

Publisher:

Abstract:

Nitrogen and water are essential for plant growth and development. In this study, we designed experiments to produce gene expression data from poplar roots under nitrogen starvation and water deprivation conditions. We found that a low concentration of nitrogen led first to increased root elongation, followed by lateral root proliferation and eventually increased root biomass. To identify genes regulating root growth and development under nitrogen starvation and water deprivation, we designed a series of data analysis procedures through which we successfully identified biologically important genes. Differentially Expressed Gene (DEG) analysis identified the genes that are differentially expressed under nitrogen starvation or drought. Protein domain enrichment analysis identified enriched themes (in the same domains) that are highly interactive during the treatment. Gene Ontology (GO) enrichment analysis allowed us to identify biological processes changed during nitrogen starvation. Based on these analyses, we examined the local Gene Regulatory Network (GRN) and identified a number of transcription factors; after testing, one of them, a transcription factor ranked high in the network hierarchy, was shown to affect root growth under nitrogen starvation. Because analyzing gene expression data manually is tedious and time-consuming, we automated a computational pipeline that can now perform DEG identification and protein domain analysis in a single run. It is implemented in Perl and R scripts.
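The DEG step of such a pipeline can be sketched in a few lines. This is a generic fold-change-plus-t-statistic screen, not the authors' Perl/R implementation; the gene names, expression values, and cutoffs are made up, and a real pipeline would also convert t-statistics to multiple-testing-corrected p-values.

```python
import numpy as np

def find_degs(genes, control, treated, fc_cutoff=1.0, t_cutoff=4.0):
    """Flag genes by |log2 fold change| and a Welch t-statistic cutoff.

    A toy stand-in for a DEG step: rows are genes, columns replicates.
    """
    c = np.asarray(control, dtype=float)
    t = np.asarray(treated, dtype=float)
    log2fc = np.log2(t.mean(axis=1) / c.mean(axis=1))
    se = np.sqrt(t.var(axis=1, ddof=1) / t.shape[1]
                 + c.var(axis=1, ddof=1) / c.shape[1])
    tstat = (t.mean(axis=1) - c.mean(axis=1)) / se
    return [g for g, fc, ts in zip(genes, log2fc, tstat)
            if abs(fc) >= fc_cutoff and abs(ts) >= t_cutoff]

# Hypothetical expression values, 3 replicates per condition:
degs = find_degs(["geneA", "geneB"],
                 control=[[1.0, 1.1, 0.9], [2.0, 2.1, 1.9]],
                 treated=[[4.0, 4.2, 3.8], [2.05, 1.95, 2.0]])
# degs == ["geneA"]
```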

Relevance:

100.00%

Publisher:

Abstract:

The present article describes and analyses youth criminality in the city of Rosario, Argentina, between 2003 and 2006. Key actors' understandings of and responses to the conflict were investigated by means of semi-structured interviews, observations, discourse analysis of policy documents, and analysis of secondary data; the study also draws heavily on the experience of the author, a citizen of and youth worker in Rosario. The actors examined were the police, the local government, young delinquents, and youth organisations. Youth criminality is analysed from a conflict transformation approach using conflict analysis tools. Whereas the provincial police understand the issue as a delinquency problem, other actors perceive it as an expression of a wider urban social conflict between those who are "included" and those who are "excluded", and as one of the negative effects of globalisation processes. The results suggest that police responses addressing only direct violence are ineffective, even contributing to increased tension and polarisation, whereas strategies addressing cultural and structural violence are more suitable for this type of urban social conflict. Finally, recommendations for local youth policy are proposed, both to facilitate the participation and inclusion of youth and as a tool for peaceful conflict transformation.

Relevance:

100.00%

Publisher:

Abstract:

Cloud computing provides a promising solution to the genomics data deluge caused by the advent of next-generation sequencing (NGS) technology. Through "resources-on-demand" and "pay-as-you-go" models, scientists with limited or no infrastructure can access scalable, cost-effective computational resources. However, the large size of NGS data causes significant data transfer latency from the client's site to the cloud, which is a bottleneck for using cloud computing services. In this paper, we present a streaming-based scheme to overcome this problem, in which NGS data are processed while being transferred to the cloud. Our scheme targets the wide class of NGS data analysis tasks in which the NGS sequences can be processed independently of one another. We also provide the elastream package, which supports the use of this scheme with individual analysis programs or with workflow systems. Experiments presented in this paper show that our solution mitigates the effect of data transfer latency and saves both computation time and cost.
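The general idea of overlapping transfer with computation can be sketched as a producer/consumer pattern. This is a minimal illustration of the concept, not the elastream implementation: a background thread stands in for the network transfer, feeding chunks of reads into a bounded queue so that per-chunk processing starts before the full dataset has arrived.

```python
import queue
import threading

def stream_and_process(chunks, process):
    """Overlap 'transfer' and computation: a transfer thread feeds chunks
    into a bounded queue while the main thread processes each chunk as it
    arrives, instead of waiting for the whole upload to finish."""
    q = queue.Queue(maxsize=4)   # bound limits buffering, as in streaming

    def transfer():
        for chunk in chunks:     # simulates chunks arriving at the cloud
            q.put(chunk)
        q.put(None)              # sentinel: transfer complete

    threading.Thread(target=transfer, daemon=True).start()
    results = []
    while (chunk := q.get()) is not None:
        results.append(process(chunk))  # per-chunk work is independent
    return results

# Counting reads per chunk stands in for a real per-read analysis task:
counts = stream_and_process([["ACGT", "GGCC"], ["TTAA"]], len)
# counts == [2, 1]
```

This pattern only works because, as the abstract notes, each sequence (here, each chunk) can be processed independently of the others.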

Relevance:

100.00%

Publisher:

Abstract:

OBJECTIVES To identify the timing of significant arch dimensional increases during orthodontic alignment with round and rectangular nickel-titanium (NiTi) wires and rectangular stainless steel (SS) wires. A secondary aim was to compare the timing of changes occurring with conventional and self-ligating fixed appliance systems. METHODS In this non-primary publication, additional data from a multicenter randomised trial initially involving 96 patients, aged 16 years and above, were analysed. The main pre-specified outcome measures were the magnitude and timing of changes in maxillary intercanine, interpremolar, and intermolar dimensions. Each participant underwent alignment with a standard Damon (Ormco, Orange, CA) wire sequence for a minimum of 34 weeks. Blinding of clinicians and patients was not possible; however, outcome assessors and data analysts were kept blind to the appliance type during data analysis. RESULTS Complete data were obtained from 71 subjects. Significant arch dimensional changes were observed relatively early in treatment. In particular, changes in maxillary inter-first and inter-second premolar dimensions occurred after alignment with a 0.014-in NiTi wire (P<0.05). No statistically significant differences were found between rectangular NiTi and working SS wires for any transverse dimension (P>0.05). Bracket type had no significant effect on the timing of the transverse dimensional changes. CONCLUSIONS Arch dimensional changes occurred relatively early in treatment, irrespective of appliance type. CLINICAL SIGNIFICANCE On the basis of this research, orthodontic expansion may occur relatively early in treatment. Nickel-titanium wires may have a more profound effect on transverse dimensions than previously believed.

Relevance:

100.00%

Publisher:

Abstract:

In the Practice Change Model, physicians act as key stakeholders: people who have both an investment in the practice and the capacity to influence how the practice performs. This leadership role is critical to the development and change of the practice, and leadership roles and effectiveness are an important factor in quality improvement in primary care practices.
The study involved a comparative case study analysis to identify leadership roles and the relationship between leadership roles and the number and type of quality improvement strategies adopted during a Practice Change Model-based intervention study. The research used secondary data from four primary care practices with various leadership styles. The practices are located in the San Antonio region and serve a large Hispanic population. The data were collected by two ABC Project Facilitators from each practice over a 12-month period and include Key Informant Interviews (all staff members), a Multi-method Assessment Process (MAP), and practice facilitation field notes. These data were used to evaluate leadership styles, management within the practice, and the intervention tools that were implemented.
The chief steps were (1) to analyze whether leader-member relations contributed to the type of quality improvement strategy or strategies selected; (2) to investigate whether leader-position power contributed to the number and type of strategies selected; and (3) to explore whether task structure varied across the four primary care practices. The research found that involving more members of the clinic staff in decision-making, building bridges between organizational staff and clinical staff, and task structure all directly influence the number and type of quality improvement strategies implemented in primary care practice. Although this research investigated the leadership styles of only four practices, it offers future guidance on how to set priorities for, and implement, the quality improvement strategies that will have the greatest impact on patient care.

Relevância:

100.00% 100.00%

Publicador:

Resumo:

The purpose of this comparative analysis of CHIP Perinatal policy (42 CFR § 457) was to provide a basis for understanding the variation in policy outputs across the twelve states that, as of June 2007, had implemented the Unborn Child rule. In 2002, this Department of Health and Human Services regulation expanded the definition of "child" to include the period from conception to birth, allowing states to consider an unborn child a "targeted low-income child" and therefore eligible for SCHIP coverage. Specific study aims were to (1) describe typologically the structural and contextual features of the twelve states that adopted a CHIP Perinatal policy; (2) describe and differentiate among the various designs of CHIP Perinatal policy implemented in the states; and (3) develop a conceptual model that links the structural and contextual features of the adopting states to differences in the forms the policy assumed once it was implemented. Secondary data were collected from publicly available information sources to describe characteristics of each state's political system, health system, economic system, sociodemographic context, and implemented policy attributes. I posited that socio-demographic, political system, and health system differences would directly account for the observed differences in policy output among the states. Exploratory data analysis techniques, including median polishing and multidimensional scaling (MDS), were employed to identify compelling patterns in the data. Scaled results across model components showed that the economic system was most closely related to policy output, followed by the health system; political system and socio-demographic characteristics were only weakly associated with policy output. Goodness-of-fit measures for the MDS solutions, implemented across states and model components in one and two dimensions, were very good.
This comparative policy analysis of the twelve states that adopted and implemented HHS regulation 42 C.F.R. § 457 contributes to existing knowledge in three areas: CHIP Perinatal policy, public health policy, and the policy sciences. First, the framework allows for the identification of CHIP Perinatal program design possibilities and provides a basis for future studies that evaluate policy impact or performance. Second, studies of policy determinants are not well represented in the health policy literature, so this study contributes to the development of the literature in public health policy. Finally, the conceptual framework for policy determinants developed in this study suggests new ways for policy makers and practitioners to frame policy arguments, encouraging policy change or reform.

Relevance:

100.00%

Publisher:

Abstract:

This study was a retrospective design using secondary data from the National Child Abuse and Neglect Data System (NCANDS), provided by the National Data Archive on Child Abuse and Neglect at the Family Life Development Center administered by Cornell University. The dataset contained information for the year 2005 on children from birth to 18 years of age. Child abuse and neglect of disabled children was evaluated in depth, with descriptive and statistical analyses comparing children with and without disabilities. It was found that children with disabilities have a lower rate of substantiation, which likely indicates that their disability interferes with reporting. The results of this research demonstrate the important need to teach professionals and laypersons alike how to recognize and substantiate abuse among disabled children.

Relevance:

100.00%

Publisher:

Abstract:

Clinical Research Data Quality Literature Review and Pooled Analysis

We present a literature review and secondary analysis of data accuracy in clinical research and related secondary data uses. A total of 93 papers meeting our inclusion criteria were categorized according to the data processing methods used. Quantitative data accuracy information was abstracted from the articles and pooled. Our analysis demonstrates that the accuracy associated with data processing methods varies widely, with error rates ranging from 2 to 5,019 errors per 10,000 fields. Medical record abstraction was associated with the highest error rates (70-5,019 errors per 10,000 fields). Data entered and processed at healthcare facilities had error rates comparable to data processed at central data processing centers, and error rates for data processed with single entry in the presence of on-screen checks were comparable to those for double-entered data. While data processing and cleaning methods may explain a significant amount of the variability in data accuracy, additional factors not resolvable here likely exist.

Defining Data Quality for Clinical Research: A Concept Analysis

Despite notable previous attempts by experts to define data quality, the concept remains ambiguous and subject to the vagaries of natural language, and this lack of clarity continues to hamper research related to data quality issues. We present a formal concept analysis of data quality that builds on and synthesizes previously published work. We further posit that discipline-level specificity may be required to achieve the desired definitional clarity. To this end, we combine work from the clinical research domain with findings from the general data quality literature to produce a discipline-specific definition and operationalization of data quality for clinical research. While the results are helpful to clinical research, the methodology of concept analysis may be useful in other fields to clarify data quality attributes and to achieve operational definitions.

Medical Record Abstractors' Perceptions of Factors Impacting the Accuracy of Abstracted Data

Medical record abstraction (MRA) is known to be a significant source of data errors in secondary data uses, yet the factors impacting the accuracy of abstracted data are not reported consistently in the literature. Two Delphi processes were conducted with experienced medical record abstractors to assess abstractors' perceptions of these factors. The Delphi process identified 9 factors not found in the literature and differed with the literature on 5 factors in the top 25%. The Delphi results also refuted 7 factors reported in the literature as impacting the quality of abstracted data. The results provide insight into, and indicate content validity of, a significant number of the factors reported in the literature. Further, the results indicate general consistency between the perceptions of clinical research medical record abstractors and those of registry and quality improvement abstractors.

Distributed Cognition Artifacts on Clinical Research Data Collection Forms

Medical record abstraction, a primary mode of data collection in secondary data use, is associated with high error rates, but distributed cognition in medical record abstraction has not been studied as a possible explanation for abstraction errors. We employed the theory of distributed representation and representational analysis to systematically evaluate cognitive demands in medical record abstraction and the extent of external cognitive support in a sample of clinical research data collection forms. We show that the cognitive load required for abstraction was high for 61% of the sampled data elements, and extremely so for 9%. Further, the data collection forms did not support external cognition for the most complex data elements. High working memory demands are a possible explanation for the association of data errors with data elements requiring abstractor interpretation, comparison, mapping, or calculation. The representational analysis used here can be applied to identify data elements with high cognitive demands.

Relevance:

100.00%

Publisher:

Abstract:

Quantitative real-time polymerase chain reaction (qPCR) is a sensitive gene quantitation method that has been widely used in the biological and biomedical fields. The methods currently used for qPCR data analysis, including the threshold cycle (CT) method and linear and non-linear model fitting methods, all require subtracting background fluorescence. However, the removal of background fluorescence is usually inaccurate and can therefore distort results. Here, we propose a new method, the taking-difference linear regression method, to overcome this limitation. Briefly, for each pair of consecutive PCR cycles, we subtracted the fluorescence in the earlier cycle from that in the later cycle, transforming n cycles of raw data into n-1 differenced values. Linear regression was then applied to the natural logarithm of the transformed data, and amplification efficiencies and initial DNA copy numbers were calculated for each PCR run. To evaluate the new method, we compared it, in terms of accuracy and precision, with the original linear regression method under three background corrections: the mean of cycles 1-3, the mean of cycles 3-7, and the minimum. Three criteria, namely threshold identification, maximum R2, and maximum slope, were employed to search for target data points; because PCR data are time series data, we also applied linear mixed models. Collectively, when the threshold identification criterion was applied and the linear mixed model was adopted, the taking-difference linear regression method was superior, giving an accurate estimation of the initial DNA amount and a reasonable estimation of PCR amplification efficiencies. When the maximum R2 and maximum slope criteria were used, the original linear regression method gave an accurate estimation of the initial DNA amount. Overall, the taking-difference linear regression method avoids the error introduced by subtracting an unknown background and is therefore theoretically more accurate and reliable.
This method is easy to perform, and the taking-difference strategy can be extended to all current methods for qPCR data analysis.
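The core differencing idea can be sketched as follows. This is a simplified illustration on synthetic exponential-phase data, not the authors' full procedure (which adds threshold identification and model comparison); the parameter values are invented. The key point is that differencing consecutive cycles cancels any constant background term before the regression.

```python
import numpy as np

def taking_difference_fit(fluorescence):
    """Estimate per-cycle amplification factor E and initial signal F0.

    With F[n] = b + F0 * E**n, the consecutive difference
        D[n] = F[n+1] - F[n] = F0 * (E - 1) * E**n
    no longer contains the background b, and ln D[n] is linear in n
    with slope ln(E).
    """
    f = np.asarray(fluorescence, dtype=float)
    d = np.diff(f)                             # n cycles -> n-1 differences
    slope, intercept = np.polyfit(np.arange(d.size), np.log(d), 1)
    efficiency = np.exp(slope)                 # amplification factor E
    f0 = np.exp(intercept) / (efficiency - 1.0)
    return efficiency, f0

# Synthetic exponential-phase data with an unknown constant background:
cycles = np.arange(12)
signal = 0.35 + 2.0e-3 * 1.9 ** cycles         # b=0.35, F0=2e-3, E=1.9
E_hat, F0_hat = taking_difference_fit(signal)
# E_hat ~ 1.9 and F0_hat ~ 2e-3, with the background never estimated
```

Because the background drops out algebraically, the fit recovers E and F0 without any background-correction step, which is the advantage the abstract claims over the original linear regression method.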

Relevance:

100.00%

Publisher:

Abstract:

Based mainly on secondary data, and partly on primary information obtained through field surveys in selected rural areas of Bihar in 2011, this paper first argues for the critical importance of agricultural growth to overall economic development, then reviews the sluggish growth of agriculture in Bihar and examines its major causes. The state government's long-term neglect of agricultural research (especially of efforts to develop and diffuse improved rice varieties suited to Bihar's local conditions) and a certain 'backwardness' in tube-well irrigation technology stand out as important constraints. In particular, there is a 'paradox' in Bihar agriculture: rice and wheat yields have remained low despite relatively well-developed tube-well irrigation. Finally, by tracing the rapid increase in autumn and winter rice yields in West Bengal during the 1990s, the paper suggests that Bihar's farmers and policy-makers should learn from West Bengal's experience to draw lessons for the development of the rice sector in Bihar.