888 results for Parallel processing (Electronic computers) - Research


Relevance: 40.00%

Publisher:

Abstract:

The research reported in this dissertation investigates the processes required to mechanically alloy Pb1-xSnxTe and AgSbTe2 and a method of combining these two end compounds to produce (y)(AgSbTe2)–(1-y)(Pb1-xSnxTe) thermoelectric materials for power generation applications. Traditional melt processing of these alloys has generally employed high-purity materials subjected to time- and energy-intensive processes, yielding highly functional material that is not easily reproducible. This research reports the development of mechanical alloying processes using commercially available 99.9% pure elemental powders, in order to provide a basis for the economical production of highly functional thermoelectric materials. Though there have been reports of both high- and low-ZT materials fabricated by melt alloying and by mechanical alloying, the processing-structure-properties-performance relationship connecting how the material is made to its resulting functionality is poorly understood. This is particularly true for mechanically alloyed material, motivating an effort to investigate bulk material within the (y)(AgSbTe2)–(1-y)(Pb1-xSnxTe) system using the mechanical alloying method. This research adds to the body of knowledge concerning how mechanical alloying can be used to efficiently produce high-ZT thermoelectric materials. The processes required to mechanically alloy elemental powders to form Pb1-xSnxTe and AgSbTe2, and to subsequently consolidate the alloyed powder, are described. The composition of each alloy, the phases present, and the volume percent, size and spacing of those phases are reported. The room-temperature electronic transport properties of electrical conductivity, carrier concentration and carrier mobility are reported for each alloy, and the effect of any secondary phase on the electronic transport properties is described.
A mechanical mixing approach for combining the end compounds into (y)(AgSbTe2)–(1-y)(Pb1-xSnxTe) is described; when 5 vol.% AgSbTe2 was incorporated, it was found to form a solid solution with the Pb1-xSnxTe phase. An initial attempt to change the carrier concentration of the Pb1-xSnxTe phase was made by adding excess Te; it was found that the carrier density of the alloys in this work is not sensitive to excess Te. It has been demonstrated that, using the processing techniques reported in this research, this material system, when appropriately doped, has the potential to perform as a highly functional thermoelectric material.
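The thermoelectric functionality discussed throughout is normally quantified by the dimensionless figure of merit ZT = S²σT/κ. A minimal sketch of this standard formula follows, using illustrative values typical of PbTe-class alloys rather than measured data from this work:

```python
def figure_of_merit(seebeck_v_per_k, conductivity_s_per_m, thermal_cond_w_per_mk, temp_k):
    """Dimensionless thermoelectric figure of merit: ZT = S^2 * sigma * T / kappa."""
    return seebeck_v_per_k**2 * conductivity_s_per_m * temp_k / thermal_cond_w_per_mk

# Illustrative (hypothetical) values, not measurements from this dissertation:
zt = figure_of_merit(seebeck_v_per_k=200e-6,      # 200 uV/K Seebeck coefficient
                     conductivity_s_per_m=1.0e5,  # 1000 S/cm electrical conductivity
                     thermal_cond_w_per_mk=2.0,   # total thermal conductivity
                     temp_k=700)                  # hot-side temperature
print(zt)  # 1.4, a plausible mid-temperature value for this material class
```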


The performance of memory-guided saccades with two different delays (3 s and 30 s of memorisation) was studied in eight subjects. Single-pulse transcranial magnetic stimulation (TMS) was applied simultaneously over the left and right dorsolateral prefrontal cortex (DLPFC) 1 s after target presentation. For both delays, stimulation significantly increased the percentage error in the amplitude of memory-guided saccades. Furthermore, the interfering effect of TMS was significantly higher in the short-delay paradigm than in the long-delay paradigm. The results are discussed in the context of a mixed model of spatial working memory control comprising two components: first, serial information processing with a predominant role of the DLPFC during the early period of memorisation and, second, parallel information processing, independent of the DLPFC, operating during longer delays.


Background: Patients' health-related quality of life (HRQoL) has rarely been systematically monitored in general practice. Electronic tools and practice training might facilitate the routine application of HRQoL questionnaires. Thorough piloting of innovative procedures is strongly recommended before conducting large-scale studies. Therefore, we aimed i) to assess the feasibility and acceptance of HRQoL assessment using tablet computers in general practice, ii) to evaluate the perceived practical utility of HRQoL results, and iii) to identify possible barriers hindering wider application of this approach. Methods: Two HRQoL questionnaires (St. George's Respiratory Questionnaire SGRQ and EORTC QLQ-C30) were presented electronically on portable tablet computers. Wireless network (WLAN) integration into the practice computer systems of 14 German general practices with varying infrastructure allowed automatic data exchange and the generation of a printout or a PDF file. General practitioners (GPs) and practice assistants were trained in a 1-hour course, after which they could invite patients with chronic diseases to fill in the electronic questionnaire during their waiting time. We surveyed patients, practice assistants and GPs regarding their acceptance of this tool in semi-structured telephone interviews. The number of assessments, HRQoL results and interview responses were analysed using quantitative and qualitative methods. Results: Over the course of 1 year, 523 patients filled in the electronic questionnaires (1-5 times; 664 total assessments). On average, results showed specific HRQoL impairments, e.g. with respect to fatigue, pain and sleep disturbances. The number of electronic assessments varied substantially between practices. A total of 280 patients, 27 practice assistants and 17 GPs participated in the telephone interviews.
Almost all GPs (16/17 = 94%; 95% CI = 73-99%), most practice assistants (19/27 = 70%; 95% CI = 50-86%) and the majority of patients (240/280 = 86%; 95% CI = 82-91%) indicated that they would welcome the use of electronic HRQoL questionnaires in the future. GPs mentioned the availability of local health services (e.g. supportive, physiotherapy) (mean: 9.4 ± 1.0 SD; scale: 1-10), sufficient extra time (8.9 ± 1.5) and easy interpretation of HRQoL results (8.6 ± 1.6) as the most important prerequisites for their use. They believed HRQoL assessment facilitated both communication and follow-up of patients' conditions. Practice assistants emphasised that this process demonstrated an extra commitment to patient-centred care; patients viewed it as a tool that contributed to the physicians' understanding of their personal condition and circumstances. Conclusion: This pilot study indicates that electronic HRQoL assessment is technically feasible in general practices. It can provide clinically significant information, which can be used either in the consultation for routine care or for research purposes. While GPs, practice assistants and patients were generally positive about the electronic procedure, several barriers (e.g. practices' lack of time and routine in HRQoL assessment) need to be overcome to enable broader application of electronic questionnaires in everyday medical practice.
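The quoted proportions with 95% confidence intervals can be approximated with standard binomial interval methods. The abstract does not state which method was used; as one plausible sketch, the Wilson score interval happens to reproduce the 73%-99% quoted for the 16/17 GP figure:

```python
from math import sqrt

def wilson_ci(successes, n, z=1.96):
    """Wilson score confidence interval for a binomial proportion.
    z = 1.96 gives an approximate 95% interval."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

lo, hi = wilson_ci(16, 17)
print(f"{lo:.0%}-{hi:.0%}")  # 73%-99%, matching the GP figure quoted above
```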


Identifying accurate numbers of soldiers determined to be medically not ready after completing soldier readiness processing (SRP) may help inform Army leadership about the ongoing pressures on a military engaged in a long conflict with regular deployments. Among Army soldiers screened for deployment using the SRP checklist, what is the prevalence of soldiers determined to be medically not ready? Study group: 15,289 soldiers screened at all 25 Army deployment platform sites with the eSRP checklist over a 4-month period (June 20, 2009 to October 20, 2009). The data included for analysis were age, rank, component, gender and final deployment medical readiness status from the MEDPROS database. Methods: This information was compiled, and univariate analysis using chi-square was conducted for each of the key variables by medical readiness status. Results: Of the total sample, 1,548 (9.7%) were female and 14,319 (90.2%) were male. Enlisted soldiers made up 13,543 (88.6%) of the sample and officers 1,746 (11.4%). In the sample, 1,533 (10.0%) of soldiers were over the age of 40 and 13,756 (90.0%) were age 18-40. Reserve, National Guard and Active Duty made up 1,931 (12.6%), 2,942 (19.2%) and 10,416 (68.1%), respectively. Overall, 1,226 (8.0%) of the soldiers screened were determined to be medically not ready for deployment. The strongest predictive factor was female gender (OR 2.8; 95% CI 2.57-3.28; p<0.001), followed by enlisted rank (OR 2.01; 1.60-2.53; p<0.001), Reserve component (OR 1.33; 1.16-1.53; p<0.001) and Guard component (OR 0.37; 0.30-0.46; p<0.001); age over 40 demonstrated OR 1.2 (1.09-1.50; p<0.003). Overall, the results underscore that there may be key demographic groups related to medical readiness that can be targeted with programs and funding to improve overall military medical readiness.
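The odds ratios with 95% confidence intervals reported above are standard 2x2-table statistics. A sketch using the usual Woolf (log-odds-ratio) interval follows; the cell counts are hypothetical, since the abstract does not report the full tables:

```python
from math import exp, log, sqrt

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a = exposed & not ready, b = exposed & ready,
    c = unexposed & not ready, d = unexposed & ready."""
    or_ = (a * d) / (b * c)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # standard error of log(OR)
    return or_, exp(log(or_) - z * se), exp(log(or_) + z * se)

# Hypothetical counts for illustration only (not the study's actual table):
or_, lo, hi = odds_ratio_ci(a=230, b=1318, c=996, d=13323)
print(round(or_, 2), round(lo, 2), round(hi, 2))
```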


In recent years, applications in domains such as telecommunications, network security and large-scale sensor networks have exposed the limits of the traditional store-then-process paradigm. In this context, Stream Processing Engines emerged as a candidate solution for all these applications demanding high processing capacity with low processing latency guarantees. With Stream Processing Engines, data streams are not persisted but rather processed on the fly, producing results continuously. Current Stream Processing Engines, either centralized or distributed, do not scale with the input load due to single-node bottlenecks. Moreover, they are based on static configurations that lead to either under- or over-provisioning. This Ph.D. thesis discusses StreamCloud, an elastic parallel-distributed stream processing engine that enables the processing of large data stream volumes. StreamCloud minimizes the distribution and parallelization overhead by introducing novel techniques that split queries into parallel subqueries and allocate them to independent sets of nodes. Moreover, StreamCloud's elastic and dynamic load balancing protocols enable effective adjustment of resources depending on the incoming load. Together with the parallelization and elasticity techniques, StreamCloud defines a novel fault tolerance protocol that introduces minimal overhead while providing fast recovery. StreamCloud has been fully implemented and evaluated using several real-world applications, such as fraud detection and network analysis. The evaluation, conducted on a cluster with more than 300 cores, demonstrates the large scalability, elasticity and fault tolerance effectiveness of StreamCloud.
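The query-splitting idea described above rests on key-based stream partitioning: every tuple with the same key value must be routed to the same sub-query instance so that per-key state stays local to one node. A minimal, hypothetical sketch (not StreamCloud's actual implementation):

```python
from collections import defaultdict
import zlib

def partition(stream, key, n_nodes):
    """Hash-partition stream tuples across n_nodes sub-query instances,
    so that all tuples sharing a key value go to the same node."""
    buckets = defaultdict(list)
    for tup in stream:
        node = zlib.crc32(str(tup[key]).encode()) % n_nodes
        buckets[node].append(tup)
    return buckets

# Toy stream of call records, as in a fraud-detection query:
stream = [{"caller": f"user{i % 5}", "dur": i} for i in range(20)]
buckets = partition(stream, key="caller", n_nodes=3)
```

Elasticity then amounts to changing `n_nodes` and migrating the affected key ranges, which is where StreamCloud's load-balancing and reconfiguration protocols come in.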



This paper describes a particular knowledge acquisition tool for the construction and maintenance of the knowledge model of an intelligent system for emergency management in the field of hydrology. The tool has been developed following an innovative approach directed at end users unfamiliar with computer-oriented terminology. According to this approach, the tool is conceived as a document processor specialized in a particular domain (hydrology), in such a way that the whole knowledge model is viewed by the user as an electronic document. The paper first describes the characteristics of the knowledge model of the intelligent system and summarizes the problems that we found during the development and maintenance of this type of model. Then, the paper describes the KATS tool, a software application that we have designed to help in this task, intended for users who are not experts in computer programming. Finally, the paper presents a comparison between KATS and other approaches to knowledge acquisition.


With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is characterized only by purely mechanistic criteria, expressed as functions of quantities such as stress, strain or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative mechanically based model of electrophysiological impairment in neuronal cells has only very recently been proposed (Jerusalem et al., 2013). In this paper, we present the implementation details of Neurite: the finite difference parallel program used in that reference. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations whose complexity increases as the network of simulated cells grows. The solvers implemented in Neurite, both explicit and implicit, were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiologically passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite's mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics (Jerusalem et al., 2013). This paper provides the details of the parallel implementation of Neurite, along with three application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large-scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted.
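A minimal illustration of the kind of computation such a program performs for passive regions: an explicit finite-difference solution of the passive cable equation, tau dV/dt = lambda^2 d2V/dx2 - V. This sketch omits the Hodgkin-Huxley kinetics, the mechanical coupling and the GPU parallelization described above, and all parameter values are illustrative:

```python
import numpy as np

def passive_cable(n=100, steps=2000, dx=1e-4, dt=1e-6,
                  tau=1e-3, lam=1e-3, v_inj=10.0):
    """Explicit finite-difference solution of the passive cable equation
    tau dV/dt = lam^2 d2V/dx2 - V, with the voltage clamped at the left end.
    Explicit stability requires dt <= tau * dx^2 / (2 * lam^2)."""
    v = np.zeros(n)
    for _ in range(steps):
        v[0] = v_inj                                   # clamped (injected) end
        d2v = np.zeros(n)
        d2v[1:-1] = (v[2:] - 2 * v[1:-1] + v[:-2]) / dx**2
        v = v + dt / tau * (lam**2 * d2v - v)          # diffusion + leak
        v[0] = v_inj
    return v

v = passive_cable()  # voltage decays monotonically away from the clamped end
```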


The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms need to be parallel, so that they can run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. However, we show that by parallelizing subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for the fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on median filters, we show that these techniques are adequate for free-viewpoint video applications. In addition, we show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a background subtraction system based on kernel density estimation techniques. These techniques are very well suited to modelling complex scenes featuring multimodal backgrounds, but have not been widely used due to their large computational and memory complexity. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared to other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for the approximation of arbitrarily complex functions using continuous piecewise-linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. Our proposal features a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the domain of the function to minimize the approximation error.
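The continuous piecewise-linear approximation described above amounts to linear interpolation between stored samples, which is exactly the operation GPU texture filtering units evaluate in hardware. A CPU sketch of the approximation and its error behaviour (illustrative only, not the thesis implementation, and using a uniform rather than optimized partition):

```python
import numpy as np

def pwl_max_error(f, a, b, n_samples, n_test=10_001):
    """Max absolute error when f is replaced by linear interpolation
    between n_samples uniform samples on [a, b] -- the same operation
    a GPU texture filtering unit performs on a 1D texture."""
    xs = np.linspace(a, b, n_samples)      # sample grid (the "texture")
    xt = np.linspace(a, b, n_test)         # dense evaluation points
    approx = np.interp(xt, xs, f(xs))      # piecewise-linear reconstruction
    return float(np.max(np.abs(approx - f(xt))))

# For a smooth f the error decays roughly quadratically with the grid spacing:
e16 = pwl_max_error(np.exp, 0.0, 1.0, 16)
e32 = pwl_max_error(np.exp, 0.0, 1.0, 32)
print(e16 / e32)  # roughly 4: halving the spacing quarters the error
```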


Development of transparent oxide semiconductors (TOS) from Earth-abundant materials is of great interest for cost-effective thin film device applications, such as solar cells, light-emitting diodes (LEDs), touch-sensitive displays, electronic paper, and transparent thin film transistors. The need for inexpensive, high-performance electrodes may be even greater for organic photovoltaics (OPV), where the goal is to harvest renewable energy with inexpensive, lightweight, and cost-competitive materials. The natural abundance of zinc and the wide bandgap (~3.3 eV) of its oxide make it an ideal candidate. In this dissertation, I introduce various concepts on the modulation of surface, interface and bulk opto-electronic properties of ZnO-based semiconductors for charge transport, charge selectivity and optimal device performance. I categorize transparent semiconductors into two subgroups depending on their role in a device. Electrodes, usually 200 to 500 nm thick, are optimized for good transparency and for transporting charges to the external circuit; here, the electrical conductivity in the direction parallel to the thin film, i.e. the bulk conductivity, is important. Contacts, usually 5 to 50 nm thick, are optimized in solar cells to provide charge selectivity and the asymmetry needed to manipulate the built-in field inside the device for charge separation and collection, whereas in organic LEDs (OLEDs), contacts provide optimum energy level alignment at the organic-oxide interface for improved charge injection. For optimal solar cell performance, transparent electrodes are designed for maximum transparency in the spectral region of interest, so that light passes through to the absorber layer for photo-generation, and for minimum sheet resistance for efficient charge collection and transport. As such, there is a need for materials with both high conductivity and high transparency.
Doping ZnO with common elements such as B, Al, Ga, In, Ge, Si, and F results in n-type material with an increased carrier concentration, yielding high-conductivity electrodes with opto-electronic properties better than or comparable to the current industry-standard indium tin oxide (ITO). Furthermore, mobility improvements arising from better crystallographic structure provide an alternative path to high-conductivity ZnO TCOs. Building on these two aspects, various studies were conducted on gallium-doped zinc oxide (GZO) transparent electrodes, a very promising indium-free alternative. The dynamics of superimposed RF and DC power sputtering was utilized to improve the microstructure during thin film growth, resulting in GZO electrodes with conductivity greater than 4000 S/cm and transparency greater than 90%. Similarly, studies on the research and development of indium zinc tin oxide (IZTO) and indium zinc oxide (IZO) thin films, which can be applied to flexible substrates for next-generation solar cells, are presented. In these new TCO systems, the role of crystallographic structure, ranging from poly-crystalline to amorphous, and its influence on charge transport and optical transparency are examined, along with important surface passivation and surface charge transport properties. Implementing these ZnO-based electrodes in opto-electronic devices such as OLEDs and OPVs is complicated by chemical interaction over time with the organic layer or with the ambient. The problem of inefficient charge collection/injection, due to poor understanding of the interface and/or bulk properties of the oxide electrode, exists at several oxide-organic interfaces. The surface conductivity, the work function, the formation of dipoles and the band bending at the interfacial sites can positively or negatively impact the device performance.
Detailed characterization of the surface composition, both before and after various chemical treatments of the oxide electrodes, can therefore provide insight into the optimization of device performance. Some of the work related to controlling the interfacial chemistry associated with charge transport in transparent electrodes is discussed. The roles of various pre-treatments on poly-crystalline GZO and amorphous indium zinc oxide (IZO) electrodes are compared and contrasted. From this study, we found that removing the insulating, self-passivating defects caused by the accumulation of hydroxides at the surface of both poly-crystalline GZO and amorphous IZO is critical for improving surface conductivity and charge transport. Further insight into how these insulating and self-passivating defects cause charge accumulation and recombination in a device is discussed. With the recent rapid development of bulk-heterojunction organic photovoltaic active materials, devices employing ZnO and ZnO-based electrodes provide air-stable and cost-competitive alternatives to traditional inorganic photovoltaics. Organic light-emitting diodes (OLEDs) have already been commercialized; to follow in the footsteps of this technology, OPV devices need further improvements in power conversion efficiency and in material stability to achieve long device lifetimes. Low work-function metals such as Ca/Al in the standard geometry do provide good electrodes for electron collection, but a serious problem with low work-function metal electrodes is the formation of non-conductive metal oxide upon oxidation, resulting in rapid device failure. Hence, the use of low work-function, air-stable, conductive metal oxides such as ZnO as electron-collecting electrodes, together with high work-function, air-stable metals such as silver for collecting holes, has been on the rise.
Devices with degenerately doped ZnO functioning as a transparent conductive electrode, or as a charge-selective layer in a polymer/fullerene heterojunction, provide useful device structures for investigating the functional mechanisms within OPV devices and a possible pathway toward improved air-stable, high-efficiency devices. Furthermore, the physical properties of ZnO layers of varying thickness, crystallographic structure, surface chemistry and grain size, deposited via techniques such as atomic layer deposition, sputtering and solution processing, are analyzed together with the corresponding OPV device performance. The similarities and differences between the electrode properties needed for good charge injection in OLEDs and good charge collection in OPV devices prove insightful for understanding the physics behind device failures and successes. In general, the self-passivating surfaces of the amorphous TCOs IZO, ZTO and IZTO form an insulating layer that hinders charge collection. Similarly, modulating the carrier concentration and mobility of the electron transport layer, namely the zinc oxide thin film, is very important for optimizing device performance.
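For a uniform film, the sheet resistance that governs electrode performance follows Rs = 1/(sigma t). A small sketch relating the >4000 S/cm GZO conductivity reported above to a hypothetical 200 nm film thickness (the thickness here is an assumed illustrative value within the 200-500 nm electrode range mentioned earlier):

```python
def sheet_resistance(conductivity_s_per_cm, thickness_nm):
    """Sheet resistance (ohm/sq) of a uniform film: Rs = 1 / (sigma * t)."""
    sigma = conductivity_s_per_cm * 100    # S/cm -> S/m
    t = thickness_nm * 1e-9                # nm -> m
    return 1.0 / (sigma * t)

# Hypothetical 200 nm GZO film at the reported >4000 S/cm:
print(sheet_resistance(4000, 200))  # 12.5 ohm/sq
```

A sheet resistance in the low tens of ohm/sq at >90% transparency is the regime where such an electrode becomes competitive with ITO.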