14 results for SSSP
Abstract:
Graph algorithms have been shown to possess enough parallelism to keep several computing resources busy, even hundreds of cores on a GPU. Unfortunately, tuning their implementation for efficient execution on a particular hardware configuration of heterogeneous systems consisting of multicore CPUs and GPUs is challenging, time consuming, and error prone. To address these issues, we propose a domain-specific language (DSL), Falcon, for implementing graph algorithms that (i) abstracts the hardware, (ii) provides constructs to write explicitly parallel programs at a higher level, and (iii) can work with general algorithms that may change the graph structure (morph algorithms). We illustrate the usage of our DSL to implement local computation algorithms (that do not change the graph structure) and morph algorithms such as Delaunay mesh refinement, survey propagation, and dynamic SSSP on GPUs and multicore CPUs. Using a set of benchmark graphs, we illustrate that the generated code performs close to the state-of-the-art hand-tuned implementations.
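Falcon's concrete syntax is not reproduced in the abstract; purely as an illustration of the kind of worklist-driven SSSP relaxation that such DSL-generated code parallelizes, here is a minimal sequential Python sketch (the graph representation and all names are assumptions, not Falcon code). For the dynamic variant, the worklist would be re-seeded with the endpoints of changed edges instead of only the source.

    from collections import deque

    def sssp(num_nodes, adj, source):
        """Worklist-based SSSP relaxation (Bellman-Ford style).
        adj maps each node to a list of (neighbor, edge_weight) pairs.
        Assumes no negative-weight cycles, else the loop never drains."""
        dist = [float("inf")] * num_nodes
        dist[source] = 0
        worklist = deque([source])
        while worklist:
            u = worklist.popleft()
            for v, w in adj.get(u, []):
                if dist[u] + w < dist[v]:   # relax edge (u, v)
                    dist[v] = dist[u] + w
                    worklist.append(v)      # v's out-edges must be re-examined
        return dist

    adj = {0: [(1, 4), (2, 1)], 2: [(1, 2)]}
    print(sssp(3, adj, 0))   # [0, 3, 1]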
Abstract:
Paramedics are trained to use specialized medical knowledge and a variety of medical procedures and pharmaceutical interventions to “save patients and prevent further damage” in emergency situations, both as members of “health-care teams” in hospital emergency departments (Swanson, 2005: 96) and on the streets – unstandardized contexts “rife with chaotic, dangerous, and often uncontrollable elements” (Campeau, 2008: 3). The paramedic’s unique skill-set and ability to function in diverse situations have resulted in the occupation becoming ever more important to health care systems (Alberta Health and Wellness, 2008: 12).
Today, prehospital emergency services, while varying, exist in every major city and many rural areas throughout North America (Paramedics Association of Canada, 2008) and other countries around the world (Roudsari et al., 2007). Services in North America, for instance, treat and/or transport 2 million Canadians (over 250,000 in Alberta alone) and between 25 and 30 million Americans annually (Emergency Medical Services Chiefs of Canada, 2006; National EMS Research Agenda, 2001). In Canada, paramedics make up one of the largest groups of health care professionals, with numbers exceeding 20,000 (Pike and Gibbons, 2008; Paramedics Association of Canada, 2008). However, little is known about the work practices of paramedics, especially in light of recent changes to how their work is organized, making the profession “rich with unexplored opportunities for research on the full range of paramedic work” (Campeau, 2008: 2).
This presentation reports on findings from an institutional ethnography that explored the work of paramedics and the different technologies of knowledge and governance that intersect with and organize their work practices. More specifically, the tentative focus of this presentation is on discussing some of the ruling discourses central to many of the technologies used on the front lines of EMS in Alberta and the consequences of such governance practices for both front-line workers and their patients. In doing so, I will demonstrate how IE can be used to answer Rankin and Campbell’s (2006) call for additional research into “the social organization of information in health care and attention to the (often unintended) ways ‘such textual products may accomplish…ruling purposes but otherwise fail people and, moreover, obscure that failure’ (p. 182)” (cited in McCoy, 2008: 709).
Abstract:
Health reform practices in Canada and elsewhere have restructured the purpose and use of diagnostic labels and the processes of naming such labels. Diagnoses are no longer only a means to tell doctors and patients what may be wrong and indicate potential courses of treatment; some diagnoses activate specialized services and supports for persons with a disability and those who provide care for them. In British Columbia, a standardized process of diagnosis with the outcome of an autism spectrum disorder gives access to government-provided health care and educational services and supports. Such processes enter individuals into a complex of text-mediated relations, regulated by the principles of evidence-based medicine. However, the diagnosis of autism in children is notoriously uncertain. Because of this ambiguity, standardizing the diagnostic process creates a hurdle in gaining help and support for parents who have children with problems that could lead to a diagnosis on the autism spectrum. Such processes and their organizing relations are problematized, explored and explicated below. Grounded in the epistemological and ontological shift offered by Dorothy E. Smith (1987; 1990a; 1999; 2005), this article reports on the findings of an institutional ethnographic study that explored the diagnostic process of autism in British Columbia. More specifically, this article focuses on the processes involved in going from mothers talking from their experience about their children’s problems to the formalized and standardized, and thus “virtually” produced, diagnoses that may or may not give access to services and supports in different systems of care. Two psychologists, a developmental pediatrician, a social worker – members of a specialized multidisciplinary assessment team – and several mothers of children with a diagnosis on the autism spectrum were interviewed. The implications of standardizing the diagnostic process of a disability that is not clear-cut and has funding attached are discussed. This ethnography also provides a glimpse of the implications of current and ongoing reforms in the state-supported health care system in British Columbia, and more generally in Canada, for people’s everyday doings.
Abstract:
Solid-state shear pulverization (SSSP) is a unique processing technique for mechanochemical modification of polymers, compatibilization of polymer blends, and exfoliation and dispersion of fillers in polymer nanocomposites. A systematic parametric study of the SSSP technique is conducted to elucidate the detailed mechanism of the process and establish the basis for a range of current and future operation scenarios. Using neat, single component polypropylene (PP) as the model material, we varied machine type, screw design, and feed rate to achieve a range of shear and compression applied to the material, which can be quantified through specific energy input (Ep). As a universal processing variable, Ep reflects the level of chain scission occurring in the material, which correlates well to the extent of the physical property changes of the processed PP. Additionally, we compared the operating cost estimates of SSSP and conventional twin screw extrusion to determine the practical viability of SSSP.
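The abstract treats specific energy input (Ep) as a universal processing variable but does not give its formula; a hedged sketch of the definition commonly used for extrusion-style processing, with generic symbols that are assumptions rather than quotations from this study:

    E_p = P / ṁ    (LaTeX: E_p = P / \dot{m})

where P is the mechanical (shaft) power delivered to the material and ṁ is the mass feed rate, so that harsher shear and compression at a fixed feed rate, or a lower feed rate at fixed power, both raise Ep (often reported in kJ/g).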
Abstract:
In this communication, solid-state/melt extrusion (SSME) is introduced as a novel technique that combines solid-state shear pulverization (SSSP) and conventional twin screw extrusion (TSE) in a single extrusion system. The morphology and property enhancements in a model linear low-density polyethylene/organically modified clay nanocomposite sample fabricated via SSME were compared to those fabricated via SSSP and TSE. The results show that SSME is capable of exfoliating and dispersing the nanofillers similarly to SSSP, while achieving a desirable output rate and producing extrudate similar in form to that from TSE.
Abstract:
Investigates multiple processing parameters of solid-state shear pulverization (SSSP), including polymer type, filler type, processing technique, severity of SSSP processing, and postprocessing. HDPE and LLDPE polymers with pristine clay and organo-clay samples are explored. Effects on crystallization, high-temperature behavior, mechanical properties, and gas barrier properties are examined. Thermal, mechanical, and morphological characterization is conducted to determine polymer/filler compatibility and superior processing methods for the polymer-clay nanocomposites.
Abstract:
The blending of common polymers allows for the rapid and facile synthesis of new materials with highly tunable properties at a fraction of the cost of new monomer development and synthesis. Most blends of polymers, however, are completely immiscible and separate into distinct phases with minimal phase interaction, severely degrading the performance of the material. Cross-phase interactions and property enhancement can be achieved in these blends through reactive processing or compatibilizer addition. A new class of blend compatibilization relies on mechanochemical reactions between polymer chains via solid-state, high-energy processing. Two contrasting mechanochemical processing techniques are explored in this thesis: cryogenic milling and solid-state shear pulverization (SSSP). Cryogenic milling is a batch process in which a milling rod rapidly impacts the blend sample while submerged in a bath of liquid nitrogen. In contrast, SSSP is a continuous process in which blend components are subjected to high shear and compressive forces while progressing down a chilled twin-screw barrel. In the cryogenic milling study, through the application of a synthesized labeled polymer, in situ formation of copolymers was observed for the first time. The microstructures of polystyrene/high-density polyethylene (PS/HDPE) blends fabricated via cryomilling followed by intimate melt-state mixing and static annealing were found to be morphologically stable over time. PS/HDPE blends fabricated via SSSP also showed compatibilization by way of ideal blend morphology, with growth mechanisms exhibiting slightly different behavior compared to the cryomilled blends. The new Bucknell University SSSP instrument was carefully analyzed and optimized to produce compatibilized polymer blends through a full-factorial experiment. Finally, blends of varying levels of compatibilization were subjected to common material tests to determine alternative means of measuring and quantifying compatibilization.
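The abstract mentions a full-factorial experiment without listing its factors; as a hedged illustration of what full-factorial enumeration means (one run for every combination of every factor level), here is a small Python sketch whose SSSP operating factors are hypothetical, not the thesis's actual design:

    from itertools import product

    # Hypothetical factors and levels (illustrative assumptions only).
    factors = {
        "feed_rate_kg_per_h": [0.5, 1.0],
        "screw_design": ["mild", "harsh"],
        "barrel_cooling": ["ambient", "chilled"],
    }

    # Full-factorial design: one experimental run per combination.
    runs = [dict(zip(factors, combo)) for combo in product(*factors.values())]
    print(len(runs))   # 2 x 2 x 2 = 8 runs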
Abstract:
Biodegradable polymer/clay nanocomposites were prepared with pristine and organically modified montmorillonite in polylactic acid (PLA) and polycaprolactone (PCL) polymer matrices. Nanocomposites were fabricated using extrusion and SSSP to compare the effects of melt-state and solid-state processing on the morphology of the final nanocomposite. Characterization of various material properties was performed on the prepared biodegradable polymer/clay nanocomposites to evaluate property enhancements from different clays and/or processing methods.
Abstract:
Polylactic acid (PLA) is a bio-derived, biodegradable polymer with a number of mechanical properties similar to commodity plastics like polyethylene (PE) and polyethylene terephthalate (PETE). There has recently been great interest in using PLA to replace these typical petroleum-derived polymers because of the developing trend toward more sustainable materials and technologies. However, PLA's inherently slow crystallization behavior is not compatible with prototypical polymer processing techniques such as molding and extrusion, which in turn inhibits its widespread use in industrial applications. In order to make PLA a commercially viable material, there is a need to process it in such a way that its tendency to form crystals is enhanced. The industry standard for producing PLA products is twin screw extrusion (TSE), where polymer pellets are fed into a heated extruder, mixed at a temperature above the melting temperature, and molded into a desired shape. A relatively novel processing technique called solid-state shear pulverization (SSSP) processes the polymer in the solid state so that nucleation sites can develop and fast crystallization can occur. SSSP has also been found to enhance the mechanical properties of a material, but its powder output form is undesirable in industry. A new process called solid-state/melt extrusion (SSME), developed at Bucknell University, combines the TSE and SSSP processes in one instrument. This technique has proven to produce moldable polymer products with increased mechanical strength. This thesis first investigated the effects of the TSE, SSSP, and SSME polymer processing techniques on PLA. The study sought to determine the process that yields products with the most enhanced thermal and mechanical properties. For characterization, percent crystallinity, crystallization half-time, storage modulus, softening temperature, degradation temperature, and molecular weight were analyzed for all samples. Through these characterization techniques, it was observed that SSME-processed PLA had enhanced properties relative to TSE- and SSSP-processed PLA. Because of these findings, an optimization study for SSME-processed PLA was conducted in which throughput and screw design were varied. The optimization study determined that PLA processed with a low flow rate and a moderate screw design in an SSME process produced the polymer product with the largest increase in thermal properties and the highest retention of polymer structure relative to TSE-, SSSP-, and all other SSME-processed PLA. It was concluded that the SSSP stage of processing scissions polymer chains, creating defects within the material, while the TSE stage allows these defects to be mixed thoroughly throughout the sample. The study showed that a proper SSME setup allows for both an increase in nucleation sites within the polymer and sufficient mixing, which in turn leads to the development of a large number of crystals in a short period of time.
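The abstract lists crystallization half-time among the measured quantities without stating how it is obtained; assuming the standard Avrami treatment of isothermal crystallization kinetics (an assumption, since the abstract does not name the model), the relative crystallinity X(t) and the half-time t_{1/2} are

    X(t) = 1 - \exp(-k t^{n}),    so    t_{1/2} = (\ln 2 / k)^{1/n}

with k the temperature-dependent rate constant and n the Avrami exponent; faster nucleation of the kind promoted by SSSP or SSME shows up as a larger k and hence a shorter t_{1/2}.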
Abstract:
Due to the ever-growing size of the data held in many current information systems, many of the algorithms that traverse these structures lose performance when performing searches over them. Because these data are in many cases represented as node-vertex structures (graphs), the Graph500 challenge was created in 2009. Previously, other challenges such as Top500 measured performance based on the computational capacity of systems, using LINPACK tests. In Graph500, the measurement is performed by executing a breadth-first search (BFS) algorithm over graphs. The BFS algorithm is one of the pillars of many other graph algorithms, such as SSSP, shortest path, or betweenness centrality; an improvement to BFS would help improve the others that build on it. Problem analysis: the BFS algorithm used in high-performance computing (HPC) systems is usually a distributed-systems version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each distributed processor then computes its part and distributes its results to the other systems. Because the speed gap between the processing at each node and the data transfers over the interconnection network is very large (with the interconnection network at a disadvantage), many approaches have been taken to reduce the performance lost to transfers. Regarding the initial partitioning of the graph, the traditional approach (called 1D-partitioned graph) consists of assigning each node a fixed set of vertices that it will process. To decrease data traffic, another partitioning (2D) was proposed, in which the distribution is based on the edges of the graph rather than the vertices. This partitioning reduced network traffic from a proportion of O(N×M) to O(log(N)). While there have been other approaches to reducing transfers, such as initial reordering of the vertices to add locality at the nodes, or dynamic partitioning, the approach proposed in this work consists of applying recent compression techniques from large-scale data systems, such as high-volume databases or internet search engines, to compress the data transferred between nodes.---ABSTRACT---The Breadth First Search (BFS) algorithm is the foundation and building block of many higher-level graph-based operations such as spanning trees, shortest paths, and betweenness centrality. The importance of this algorithm increases each day because it is a key requirement of many data structures that are becoming popular nowadays; these data structures turn out to be graph structures internally. When the BFS algorithm is parallelized and the data is distributed across several processors, some research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. This work presents an alternative compression mechanism. It differs from existing methods in that it is aware of characteristics of the data that may benefit compression. In addition, we perform another test to see how this algorithm (in a distributed scenario) benefits from traditional instruction-based optimizations.
Lastly, we review current supercomputing techniques and the related work being done in the area.
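The abstract does not specify the compression scheme; as a hedged, minimal sketch of the general idea (a level-synchronous BFS whose frontier is packed into a bitmap and compressed before being exchanged between partitions), the following Python uses zlib; the single-process framing and all names are assumptions, not the thesis's actual mechanism:

    import zlib

    def bfs_levels(adj, source, num_nodes):
        """Level-synchronous BFS; yields the frontier at each level."""
        visited = [False] * num_nodes
        visited[source] = True
        frontier = [source]
        while frontier:
            yield frontier
            next_frontier = []
            for u in frontier:
                for v in adj.get(u, []):
                    if not visited[v]:
                        visited[v] = True
                        next_frontier.append(v)
            frontier = next_frontier

    def pack_frontier(frontier, num_nodes):
        """Encode a frontier as a bitmap, then compress it for transfer."""
        bitmap = bytearray((num_nodes + 7) // 8)
        for v in frontier:
            bitmap[v >> 3] |= 1 << (v & 7)
        return zlib.compress(bytes(bitmap))  # payload sent to peer partitions

In a real distributed run, each rank would pack only the frontier vertices owned by remote partitions and decompress incoming payloads symmetrically before starting the next level.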