3 results for Selection and implementation methodology
at Duke University
Abstract:
This dissertation interrogates existing scholarly paradigms regarding aetiology in the Histories of Herodotus in order to open up new avenues to approach a complex and varied topic. Since aetiology has mostly been treated as the study of cause and effect in the Histories, this work expands the purview of aetiology to include Herodotus’ explanations of origins more generally. The overarching goal in examining the methodological principles of Herodotean aetiology is to show the extent to which they resonate across the Histories according to their initial development in the proem, especially in those places that seem to deviate from the work’s driving force (i.e. the Persian Wars). Though the focus is on correlating the principles espoused in the proem with their deployment in Herodotus’ ethnographies and other seemingly divergent portions of his work, the dissertation also demonstrates the influence of these principles on some of the more “historical” aspects of the Histories where the struggle between Greeks and barbarians is concerned. The upshot is to make a novel case not only for the programmatic significance of the proem, but also for the cohesion of Herodotean methodology from cover to cover, a perennial concern for scholars of Greek history and historiography.
Chapter One illustrates how the proem to the Histories (1.1.0-1.5.3) prefigures Herodotus’ engagement with aetiological discussions throughout the Histories. Chapter Two indicates how the reading of the proem laid out in Chapter One allows Herodotus’ deployment of aetiology in the Egyptian logos (especially where the pharaoh Psammetichus’ investigation of the origins of Egyptian language, nature, and custom is concerned) to be viewed within the methodological continuum of the Histories at large. Chapter Three connects Herodotus’ programmatic interest in the origins of erga (i.e. “works” or “achievements” manifested as monuments and deeds of abstract and concrete sorts) with the patterns addressed in Chapters One and Two. Chapter Four examines aetiological narratives in the Scythian logos and argues through them that this logos is as integral to the Histories as the analogous Egyptian logos studied in Chapter Two. Chapter Five demonstrates how the aetiologies associated with the Greeks’ collaboration with the Persians (i.e. medism) in the lead-up to the battle of Thermopylae recapitulate programmatic patterns isolated in previous chapters and thereby extend the methodological continuum of the Histories beyond the “ethnographic” logoi to some of the most representative “historical” logoi of Herodotus’ work. Chapter Six concludes the dissertation and makes one final case for methodological cohesion by showing the inextricability of the end of the Histories from its beginning.
Abstract:
Background: The burden of mental health conditions is increased in humanitarian settings and needs to be addressed in emergency situations. The World Health Organization recently released the Mental Health Gap Action Programme Humanitarian Intervention Guide (mhGAP-HIG) to scale up mental health service delivery in humanitarian settings through task-shifting. This study aims to evaluate and contextualize the mhGAP-HIG manual content and training, and to identify possible barriers and challenges to its implementation in post-earthquake Nepal.
Methods: This qualitative study was conducted in Kathmandu, Nepal. Key informant interviews were conducted with fourteen psychiatrists involved in an mhGAP-HIG Training of Trainers and Supervisors (ToTS) in order to assess the mhGAP-HIG, the ToTS training, and potential challenges and barriers to mhGAP-HIG implementation. Themes identified by informants were supplemented by process notes taken by the researcher during observed training sessions and meetings.
Results: Key themes emerging from the key informant interviews include the need to account for three factors in manual contextualization: culture, health systems, and the humanitarian setting. This entails translating the manual into the local language, adding or expanding upon conditions prevalent in Nepal, and giving more consideration to the feasibility of manual use by non-specialists.
Conclusion: The mhGAP-HIG must be tailored to specific humanitarian settings for effective implementation. This study shows the importance of conducting a manual contextualization workshop prior to training in order to maximize the feasibility and success of training health care workers in mhGAP.
Abstract:
Fitting statistical models is computationally challenging when the sample size or the dimension of the dataset is huge. An attractive approach for down-scaling the problem is to first partition the dataset into subsets and then fit them using distributed algorithms. The dataset can be partitioned either horizontally (in the sample space) or vertically (in the feature space), and the challenge lies in defining an algorithm with low communication cost, theoretical guarantees, and excellent practical performance in general settings. For sample space partitioning, I propose a MEdian Selection Subset AGgregation Estimator (message) algorithm to address these issues. The algorithm applies feature selection in parallel to each subset using regularized regression or Bayesian variable selection methods, calculates the 'median' feature inclusion index, estimates coefficients for the selected features in parallel for each subset, and then averages these estimates. The algorithm is simple, requires minimal communication, scales efficiently in sample size, and comes with theoretical guarantees. I provide extensive experiments showing excellent performance in feature selection, estimation, prediction, and computation time relative to the usual competitors.
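The four steps of message described above (parallel selection, median inclusion index, parallel refitting, averaging) can be sketched in a few lines. This is a hypothetical simplification, not the dissertation's implementation: the function name message_estimator, the use of the lasso as the per-subset selector, and the exact inclusion threshold are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression

def message_estimator(X, y, n_subsets=4, alpha=0.1):
    """Illustrative sketch of the 'message' idea (not the thesis code)."""
    n, p = X.shape
    subsets = np.array_split(np.random.permutation(n), n_subsets)
    # Step 1: feature selection per subset (run serially here; in
    # practice each subset would be handled by a separate worker).
    inclusion = np.zeros((n_subsets, p))
    for k, rows in enumerate(subsets):
        fit = Lasso(alpha=alpha).fit(X[rows], y[rows])
        inclusion[k] = np.abs(fit.coef_) > 1e-8
    # Step 2: 'median' feature inclusion index = majority vote.
    selected = np.where(np.median(inclusion, axis=0) > 0.5)[0]
    # Step 3: refit on the selected features per subset, then average
    # the coefficient estimates (the only cross-worker communication
    # is the binary inclusion vectors and the small coefficient vectors).
    coefs = np.zeros((n_subsets, len(selected)))
    for k, rows in enumerate(subsets):
        coefs[k] = LinearRegression().fit(X[rows][:, selected], y[rows]).coef_
    return selected, coefs.mean(axis=0)
```

The majority-vote median makes the selection step robust: a feature picked up spuriously in a minority of subsets is discarded, while consistently selected features survive.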
While sample space partitioning is useful for handling datasets with a large sample size, feature space partitioning is more effective when the data dimension is high. Existing methods for partitioning features, however, are either vulnerable to high correlations or inefficient in reducing the model dimension. In this thesis, I propose a new embarrassingly parallel framework named DECO for distributed variable selection and parameter estimation. In DECO, variables are first partitioned and allocated to m distributed workers. The decorrelated subset data within each worker are then fitted via any algorithm designed for high-dimensional problems. We show that by incorporating the decorrelation step, DECO achieves consistent variable selection and parameter estimation on each subset with (almost) no assumptions. In addition, the convergence rate is nearly minimax optimal for both sparse and weakly sparse models and does not depend on the partition number m. Extensive numerical experiments illustrate the performance of the new framework.
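The decorrelation step at the heart of DECO can be illustrated as premultiplying the data by an inverse square root of the row Gram matrix, so that the column blocks handed to different workers are approximately uncorrelated. The sketch below is a hypothetical rendering under a high-dimensional (p > n) setting; the helper names decorrelate and deco_select, the ridge jitter for numerical stability, and the per-block lasso fit are all illustrative assumptions rather than the thesis implementation.

```python
import numpy as np
from numpy.linalg import svd
from sklearn.linear_model import Lasso

def decorrelate(X, y, ridge=1e-6):
    """Premultiply (X, y) by (X X^T / p + ridge*I)^(-1/2)."""
    n, p = X.shape
    # Symmetric inverse square root via SVD for numerical stability.
    U, s, _ = svd(X @ X.T / p + ridge * np.eye(n))
    F = U @ np.diag(1.0 / np.sqrt(s)) @ U.T
    return F @ X, F @ y

def deco_select(X, y, m=2, alpha=0.05):
    """Split columns over m workers and select variables per block."""
    Xt, yt = decorrelate(X, y)
    selected = []
    for cols in np.array_split(np.arange(X.shape[1]), m):
        # Each worker fits only its own (decorrelated) column block.
        fit = Lasso(alpha=alpha).fit(Xt[:, cols], yt)
        selected.extend(cols[np.abs(fit.coef_) > 1e-8])
    return sorted(selected)
```

After the transform, Xt @ Xt.T / p is (up to the ridge term) the identity, which is what lets each worker's block regression behave almost as if the other blocks' variables were absent.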
For datasets with both large sample sizes and high dimensionality, I propose a new divide-and-conquer framework, DEME (DECO-message), that leverages both the DECO and the message algorithms. The new framework first partitions the dataset in the sample space into row cubes using message and then partitions the feature space of each cube using DECO. This procedure is equivalent to partitioning the original data matrix into multiple small blocks, each with a feasible size that can be stored and fitted on a single machine in parallel. The results are then synthesized via the DECO and message algorithms in reverse order to produce the final output. The whole framework is extremely scalable.
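The two-level partitioning that DEME performs (row cubes first, then column blocks within each cube) amounts to tiling the data matrix into small sub-matrices. The helper below is a hypothetical illustration of just that blocking step; the name deme_blocks and the interface are assumptions for exposition, not part of the thesis.

```python
import numpy as np

def deme_blocks(X, n_row_cubes, n_col_blocks):
    """Tile X into an n_row_cubes x n_col_blocks grid of sub-matrices."""
    blocks = []
    for rows in np.array_split(np.arange(X.shape[0]), n_row_cubes):
        row_cube = []  # one 'row cube' (message-style sample split)
        for cols in np.array_split(np.arange(X.shape[1]), n_col_blocks):
            # DECO-style feature split within the row cube.
            row_cube.append(X[np.ix_(rows, cols)])
        blocks.append(row_cube)
    return blocks  # blocks[i][j] is the (i, j) sub-matrix
```

Each sub-matrix can then be stored and fitted on a separate machine; reassembling the grid with np.block recovers the original matrix, which mirrors the "reverse order" synthesis of the final output.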