7 results for Emerging Challenges in offshoring
at Duke University
Abstract:
The global prevalence of obesity in the older adult population is growing, an increasing concern in both developed and developing countries. The study of geriatric obesity and its management is a relatively new area of research, especially pertaining to those with elevated health risks. This review characterizes the state of the science for this "fat and frail" population and identifies the many gaps in knowledge where future study is urgently needed. In community-dwelling older adults, opportunities to improve both body weight and nutritional status are hampered by inadequate programs to identify and treat obesity, but where support programs exist, there are proven benefits. Nutritional status of the hospitalized older adult should be optimized to overcome the stressors of chronic disease, acute illness, and/or surgery. The least restrictive diets tailored to individual preferences while meeting each patient's nutritional needs will facilitate the energy required for mobility, respiratory sufficiency, immunocompetence, and wound healing. Complications of care due to obesity in the nursing home setting, especially in those with advanced physical and mental disabilities, are becoming increasingly common; in almost all of these situations, weight stability is advocated, as some evidence links weight loss with increased mortality. High-quality interdisciplinary studies in a variety of settings are needed to identify standards of care and effective treatments for the most vulnerable obese older adults.
Abstract:
A large increase in natural gas production occurred in western Colorado's Piceance basin in the mid- to late 2000s, generating a surge in population, economic activity, and heavy truck traffic in this rural region. We describe the fiscal effects of this development for two county governments (Garfield and Rio Blanco) and two city governments (Grand Junction and Rifle). Counties maintain rural road networks in Colorado, and Garfield County's ability to fashion agreements with operators to repair roads damaged during operations helped prevent the types of large new costs seen in Rio Blanco County, a neighboring county with less government capacity where such agreements were not made. Rifle and Grand Junction experienced substantial oil- and gas-driven population growth, with greater challenges in the smaller, more isolated, and less economically diverse city of Rifle. Lessons from this case study include the value of crafting road maintenance agreements, fiscal risks for small and geographically isolated communities experiencing rapid population growth, challenges associated with limited infrastructure, and the desirability of flexibility in the allocation of oil- and gas-related revenue.
Abstract:
Human genetics has been experiencing a wave of discoveries thanks to the development of several technologies, such as genome-wide association studies (GWAS), whole-exome sequencing, and whole-genome sequencing. Despite the massive discovery of new variants associated with human diseases, several key challenges emerge after a genetic discovery is made. GWAS is good at identifying loci associated with a patient phenotype; however, the actual causal variants responsible for the phenotype are often elusive. Another challenge is that even when the causal mutations are already known, their underlying biological effects may remain largely ambiguous. Functional evaluation plays a key role in addressing both challenges: identifying the causal variants responsible for a phenotype, and developing biological insight from disease-causing mutations.
We adopted various methods to characterize the effects of variants identified in human genetic studies, including patient genetic and phenotypic data, RNA chemistry, molecular biology, virology, and multi-electrode array and primary neuronal culture systems. Chapter 1 is a broad introduction to the motivation for, and challenges of, functional evaluation in human genetic studies, and to the background of several genetic discoveries, such as hepatitis C treatment response, for which we performed functional characterization.
Chapter 2 focuses on the characterization of causal variants following the GWAS for hepatitis C treatment response. We characterized a non-coding SNP (rs4803217) of IL28B (IFNL3) in high linkage disequilibrium (LD) with the discovery SNP identified in the GWAS. In this chapter, we used interdisciplinary approaches to characterize the effects of rs4803217 on RNA structure, disease association, and protein translation.
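For reference, a standard definition not specific to this dissertation: for the discovery SNP with alleles A/a and the candidate SNP with alleles B/b, LD between the two sites is commonly summarized by

\[
D = p_{AB} - p_A\,p_B, \qquad r^2 = \frac{D^2}{p_A(1-p_A)\,p_B(1-p_B)},
\]

where \(p_{AB}\) is the frequency of the A-B haplotype and \(p_A\), \(p_B\) are the allele frequencies. When \(r^2\) is close to 1, the two SNPs carry nearly identical association signals, so the GWAS alone cannot determine which is causal; this is what motivates the functional characterization of rs4803217 itself.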
Chapter 3 describes another avenue of functional characterization following GWAS, focusing on the novel transcripts and proteins identified near the IL28B (IFNL3) locus. It has recently been proposed that this novel protein, named IFNL4, may affect HCV treatment response and clearance. In this chapter, we used molecular biology, virology, and patient genetic and phenotypic data to further characterize and understand the biology of IFNL4. The efforts in chapters 2 and 3 provided new insights into the candidate causal variant(s) underlying the GWAS association for HCV treatment response; however, more evidence is still required to establish the exact causal roles of these variants.
Chapter 4 aims to characterize a mutation already known to cause a disease (seizures) in a mouse model. We demonstrate the potential of a multi-electrode array (MEA) system for functional characterization and drug testing of mutations found in neurological diseases such as seizure disorders. Functional characterization in neurological disease is relatively challenging, and available systematic tools are limited. This chapter presents exploratory research and an example of establishing a system for broader use in functional characterization and for translational opportunities involving mutations found in neurological diseases.
Overall, this dissertation spans a range of challenges in the functional evaluation of human genetic findings. Functional characterization of human mutations is expected to become more central to human genetics, because many biological questions remain to be answered after the explosion of genetic discoveries. Recent advances in several technologies, including genome editing and pluripotent stem cells, are also expected to provide new tools for functional studies of human disease.
Abstract:
Constant technological advances have caused a data explosion in recent years. Accordingly, modern statistical and machine learning methods must be adapted to deal with complex and heterogeneous data types. This is particularly true for analyzing biological data. For example, DNA sequence data can be viewed as categorical variables, with each nucleotide taking one of four categories. Gene expression data, depending on the measurement technology, may be continuous values or counts. With the advancement of high-throughput technology, the abundance of such data has become unprecedentedly rich. Efficient statistical approaches are therefore crucial in this big data era.
Previous statistical methods for big data often aim to find low-dimensional structures in the observed data. For example, a factor analysis model assumes a latent Gaussian-distributed multivariate vector; with this assumption, the factor model produces a low-rank estimate of the covariance of the observed variables. Another example is the latent Dirichlet allocation model for documents, which assumes Dirichlet-distributed mixture proportions of topics. This dissertation proposes several novel extensions of these statistical methods, developed to address challenges in big data. The novel methods are applied in multiple real-world applications, including the construction of condition-specific gene co-expression networks, estimation of shared topics among newsgroups, analysis of promoter sequences, analysis of political-economic risk data, and estimation of population structure from genotype data.
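As a point of reference for the low-rank structure described above (standard factor analysis, not a contribution of this dissertation): with a \(k\)-dimensional latent Gaussian factor \(z\), the observed vector \(x \in \mathbb{R}^p\) is modeled as

\[
x = \Lambda z + \epsilon, \qquad z \sim \mathcal{N}(0, I_k), \qquad \epsilon \sim \mathcal{N}(0, \Psi),
\]

which implies

\[
\operatorname{Cov}(x) = \Lambda \Lambda^{\top} + \Psi,
\]

where \(\Lambda\) is a \(p \times k\) loading matrix with \(k \ll p\) and \(\Psi\) is diagonal, so the estimated covariance is a rank-\(k\) matrix plus a diagonal term.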
Abstract:
Globalization has contributed to rapid economic development and major lifestyle changes in Madre de Dios, Peru, both of which have influenced the health of local people in direct and indirect ways. The high rate of overweight and obesity has become one of the biggest health challenges in this region. This study quantitatively analyzed the impact of household economic status and food consumption patterns on overweight and obesity, and sought to relate them to local economic activities. People living in mining communities are more likely to be overweight or obese; increased family income and lack of health knowledge are two important reasons. High consumption of soda and alcohol is positively associated with overweight and obesity. In addition, lack of physical activity is also a risk factor for overweight and obesity.
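The abstract does not state which statistical model was used; a minimal sketch of the kind of analysis it describes, assuming a logistic regression of obesity status on household and dietary covariates (the file name and all variable names are hypothetical), might look like:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical survey data; the file and column names are illustrative,
# not those of the actual study.
df = pd.read_csv("household_survey.csv")

# Logistic regression of overweight/obesity status (0/1) on household
# economic status, food consumption, and physical activity, mirroring
# the factors the abstract discusses qualitatively.
result = smf.logit(
    "overweight_or_obese ~ household_income + soda_servings"
    " + alcohol_servings + mining_community + physical_activity",
    data=df,
).fit()

print(result.summary())
print(np.exp(result.params))  # odds ratios for each covariate
```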
Abstract:
Precision medicine is an emerging approach to disease treatment and prevention that considers variability in patient genes, environment, and lifestyle. However, little has been written about how such research impacts emergency care. Recent advances in analytical techniques have made it possible to characterize patients in a more comprehensive and sophisticated fashion at the molecular level, promising highly individualized diagnosis and treatment. Among these techniques are various systematic molecular phenotyping analyses (e.g., genomics, transcriptomics, proteomics, and metabolomics). Although a number of emergency physicians use such techniques in their research, widespread discussion of these approaches has been lacking in the emergency care literature and many emergency physicians may be unfamiliar with them. In this article, we briefly review the underpinnings of such studies, note how they already impact acute care, discuss areas in which they might soon be applied, and identify challenges in translation to the emergency department (ED). While such techniques hold much promise, it is unclear whether the obstacles to translating their findings to the ED will be overcome in the near future. Such obstacles include validation, cost, turnaround time, user interface, decision support, standardization, and adoption by end-users.
Abstract:
Surveys can collect important data that inform policy decisions and drive social science research. Large government surveys collect information from the U.S. population on a wide range of topics, including demographics, education, employment, and lifestyle. Analysis of survey data presents unique challenges. In particular, one needs to account for missing data, for complex sampling designs, and for measurement error. Conceptually, a survey organization could spend substantial resources obtaining high-quality responses from a simple random sample, resulting in survey data that are easy to analyze. However, this scenario often is not realistic. To address these practical issues, survey organizations can leverage information available from other data sources. For example, in longitudinal studies that suffer from attrition, they can use information from refreshment samples to correct for potential attrition bias. They can use information from known marginal distributions or from the survey design to improve inferences. They can use information from gold standard sources to correct for measurement error.
This thesis presents novel approaches to combining information from multiple sources that address the three problems described above.
The first method addresses nonignorable unit nonresponse and attrition in a panel survey with a refreshment sample. Panel surveys typically suffer from attrition, which can lead to biased inference when analysis is based only on cases that complete all waves of the panel. Unfortunately, the panel data alone cannot inform the extent of the bias due to attrition, so analysts must make strong and untestable assumptions about the missing data mechanism. Many panel studies also include refreshment samples, which are data collected from a random sample of new individuals during some later wave of the panel. Refreshment samples offer information that can be used to correct for biases induced by nonignorable attrition while reducing reliance on strong assumptions about the attrition process. To date, these bias correction methods have not dealt with two key practical issues in panel studies: unit nonresponse in the initial wave of the panel and in the refreshment sample itself. As we illustrate, nonignorable unit nonresponse can significantly compromise the analyst's ability to use the refreshment samples for attrition bias correction. Thus, it is crucial for analysts to assess how sensitive their inferences, corrected for panel attrition, are to different assumptions about the nature of the unit nonresponse. We present an approach that facilitates such sensitivity analyses, both for suspected nonignorable unit nonresponse in the initial wave and in the refreshment sample. We illustrate the approach using simulation studies and an analysis of data from the 2007-2008 Associated Press/Yahoo News election panel study.
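For background, one well-known specification from the refreshment-sample literature is the additive nonignorable model of Hirano, Imbens, Ridder, and Rubin (2001); it is stated here as context rather than as the exact model developed in this thesis:

\[
\Pr(W_i = 1 \mid Y_{i1}, Y_{i2}) = g\left(\beta_0 + \beta_1 Y_{i1} + \beta_2 Y_{i2}\right),
\]

where \(Y_{i1}\) and \(Y_{i2}\) are the wave 1 and wave 2 outcomes, \(W_i\) indicates remaining in the panel at wave 2, and \(g\) is a link function such as the logistic CDF. The attrited panel alone cannot identify \(\beta_2\); the refreshment sample, by providing an independent draw of \(Y_2\) in the later wave, supplies the missing information, provided the \(Y_{i1} Y_{i2}\) interaction is excluded from the model.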
The second method incorporates informative prior beliefs about marginal probabilities into Bayesian latent class models for categorical data. The basic idea is to append synthetic observations to the original data such that (i) the empirical distributions of the desired margins match those of the prior beliefs, and (ii) the values of the remaining variables are left missing. The degree of prior uncertainty is controlled by the number of augmented records. Posterior inferences can be obtained via typical MCMC algorithms for latent class models, tailored to deal efficiently with the missing values in the concatenated data. We illustrate the approach using a variety of simulations based on data from the American Community Survey, including an example of how augmented records can be used to fit latent class models to data from stratified samples.
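A minimal sketch of the augmentation step described above, assuming a prior belief about the margin of a single categorical variable (the function and variable names are hypothetical):

```python
import numpy as np
import pandas as pd

def append_margin_prior(df, var, prior_probs, n_aug):
    """Append roughly n_aug synthetic records whose empirical distribution
    of `var` matches the prior marginal probabilities, leaving every other
    column missing. More augmented records encode a stronger (less
    uncertain) prior; rounding may shift the total by a record or two."""
    counts = {cat: int(round(p * n_aug)) for cat, p in prior_probs.items()}
    values = [cat for cat, n in counts.items() for _ in range(n)]
    aug = pd.DataFrame({col: [np.nan] * len(values) for col in df.columns})
    aug[var] = values
    return pd.concat([df, aug], ignore_index=True)

# Hypothetical usage: encode a prior belief that 52% of the population is
# female with strength equivalent to 500 records, then run the usual MCMC
# for the latent class model on the concatenated data.
# df_aug = append_margin_prior(df, "sex", {"female": 0.52, "male": 0.48}, 500)
```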
The third method leverages the information from a gold standard survey to model reporting error. Survey data are subject to reporting error when respondents misunderstand the question or accidentally select the wrong response. Sometimes survey respondents knowingly select the wrong response, for example, by reporting a higher level of education than they actually have attained. We present an approach that allows an analyst to model reporting error by incorporating information from a gold standard survey. The analyst can specify various reporting error models and assess how sensitive their conclusions are to different assumptions about the reporting error process. We illustrate the approach using simulations based on data from the 1993 National Survey of College Graduates. We use the method to impute error-corrected educational attainments in the 2010 American Community Survey using the 2010 National Survey of College Graduates as the gold standard survey.
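The abstract leaves the form of the reporting error model to the analyst; one simple specification of this kind, given here as an illustration rather than as a model from the thesis, is a misclassification model:

\[
\Pr\left(Y^{\mathrm{rep}} = k \mid Y^{\mathrm{true}} = j\right) =
\begin{cases}
1 - \pi_j, & k = j,\\
\pi_j\, q_{jk}, & k \neq j,
\end{cases}
\]

where \(Y^{\mathrm{true}}\) is the educational attainment measured by the gold standard survey, \(Y^{\mathrm{rep}}\) is the attainment reported in the error-prone survey, \(\pi_j\) is the probability that a respondent with true level \(j\) misreports, and \(q_{jk}\) (with \(\sum_{k \neq j} q_{jk} = 1\)) distributes the misreports across the other levels; concentrating \(q_{jk}\) on \(k > j\) captures the over-reporting of education mentioned above.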