936 results for Thread safe parallel run-time
Abstract:
BACKGROUND Oesophageal clearance has been scarcely studied. AIMS Oesophageal clearance in endoscopy-negative heartburn was assessed to detect differences in bolus clearance time among patients sub-grouped according to impedance-pH findings. METHODS In 118 consecutive endoscopy-negative heartburn patients impedance-pH monitoring was performed off-therapy. Acid exposure time, number of refluxes, baseline impedance, post-reflux swallow-induced peristaltic wave index and both automated and manual bolus clearance time were calculated. Patients were sub-grouped into pH/impedance positive (abnormal acid exposure and/or number of refluxes) and pH/impedance negative (normal acid exposure and number of refluxes), the former further subdivided on the basis of abnormal/normal acid exposure time (pH+/-) and abnormal/normal number of refluxes (impedance+/-). RESULTS Poor correlation (r=0.35) between automated and manual bolus clearance time was found. Manual bolus clearance time progressively decreased from pH+/impedance+ (42.6s), pH+/impedance- (27.1s), pH-/impedance+ (17.8s) to pH-/impedance- (10.8s). There was an inverse correlation between manual bolus clearance time and both baseline impedance and post-reflux swallow-induced peristaltic wave index, and a direct correlation between manual bolus clearance and acid exposure time. A manual bolus clearance time value of 14.8s had an accuracy of 93% to differentiate pH/impedance positive from pH/impedance negative patients. CONCLUSIONS When manually measured, bolus clearance time reflects reflux severity, confirming the pathophysiological relevance of oesophageal clearance in reflux disease.
Abstract:
Orbital forcing not only exerts direct insolation effects, but also alters climate indirectly through feedback mechanisms that modify atmosphere and ocean dynamics and meridional heat and moisture transfers. We investigate the regional effects of these changes by detailed analysis of atmosphere and ocean circulation and heat transports in a coupled atmosphere-ocean-sea ice-biosphere general circulation model (ECHAM5/JSBACH/MPI-OM). We perform long-term quasi-equilibrium simulations under pre-industrial, mid-Holocene (6000 years before present - yBP), and Eemian (125 000 yBP) orbital boundary conditions. Compared to pre-industrial climate, Eemian and Holocene temperatures show generally warmer conditions at higher and cooler conditions at lower latitudes. Changes in sea-ice cover, ocean heat transports, and atmospheric circulation patterns lead to pronounced regional heterogeneity. Over Europe, the warming is most pronounced over the north-eastern part, in accordance with recent reconstructions for the Holocene. We attribute this warming to enhanced ocean circulation in the Nordic Seas and enhanced ocean-atmosphere heat flux over the Barents Shelf, in conjunction with the retreat of sea ice and intensified winter storm tracks over northern Europe.
Abstract:
Membrane systems are computationally equivalent to Turing machines. However, their distributed and massively parallel nature yields polynomial-time solutions to problems whose traditional solutions are non-polynomial. It is therefore very important to develop dedicated hardware and software implementations that exploit these two features of membrane systems. In distributed implementations of P systems, a communication bottleneck arises: as the number of membranes grows, the network becomes congested. The purpose of distributed architectures is to reach a compromise between the massively parallel character of the system and the evolution step time needed to move from one configuration of the system to the next, thereby solving the communication bottleneck problem. The goal of this paper is twofold. Firstly, to survey in a systematic and uniform way the main results regarding how membranes can be placed on processors in order to obtain a software/hardware simulation of P systems in a distributed environment. Secondly, we improve some results about the membrane dissolution problem, prove that it is connected, and discuss the possibility of simulating this property in the distributed model. All this improves the implementation of system parallelism, since it increases the parallelism of the external communication among processors. The proposed ideas improve on previous architectures that tackle the communication bottleneck problem, by reducing the total time of an evolution step, increasing the number of membranes that can run on a processor, and reducing the number of processors.
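To make the placement problem concrete, here is a minimal Python sketch (an illustration only, with a hypothetical membrane tree and a simple greedy heuristic, not any of the architectures surveyed in the paper) that assigns membranes to processors and counts the parent-child edges that become external, inter-processor communication:

```python
# Illustrative sketch only: a greedy placement of membranes onto processors.
# The membrane structure, processor count, and heuristic are hypothetical.

def parent_of(tree, membrane):
    for parent, children in tree.items():
        if membrane in children:
            return parent
    return None

def greedy_placement(tree, num_procs):
    """Assign each membrane to a processor, keeping processor loads balanced
    and preferring the parent's processor to reduce external communication."""
    load = [0] * num_procs
    placement = {}
    # Breadth-first traversal so parents are placed before their children.
    order = ["skin"]
    i = 0
    while i < len(order):
        order.extend(tree.get(order[i], []))
        i += 1
    for membrane in order:
        parent_proc = placement.get(parent_of(tree, membrane))
        # Prefer the parent's processor unless it is already overloaded.
        if parent_proc is not None and load[parent_proc] <= min(load) + 1:
            proc = parent_proc
        else:
            proc = load.index(min(load))
        placement[membrane] = proc
        load[proc] += 1
    return placement, load

def external_edges(tree, placement):
    """Parent-child edges whose endpoints sit on different processors
    require inter-processor communication at every evolution step."""
    return sum(1 for p, cs in tree.items() for c in cs
               if placement[p] != placement[c])

# Toy membrane structure: a skin membrane with two nested branches.
tree = {"skin": ["m1", "m2"], "m1": ["m3", "m4"], "m2": ["m5"],
        "m3": [], "m4": [], "m5": []}
placement, load = greedy_placement(tree, num_procs=2)
print(placement, load, external_edges(tree, placement))
```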
Abstract:
There are a number of research and development activities exploring Time and Space Partitioning (TSP) to implement safe and secure flight software. This approach allows different real-time applications with different levels of criticality to be executed on the same computer board. In order to do that, flight applications must be isolated from each other in the temporal and spatial domains. This paper presents the first results of a partitioning platform based on the Open Ravenscar Kernel (ORK+) and the XtratuM hypervisor. ORK+ is a small, reliable real-time kernel supporting the Ada Ravenscar computational model that is central to the ASSERT development process. XtratuM supports multiple virtual machines, i.e. partitions, on a single computer and is being used in the Integrated Modular Avionics for Space study. ORK+ executes in an XtratuM partition, enabling Ada applications to share the computer board with other applications.
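As a purely conceptual illustration of time partitioning (a toy Python model with made-up partition names and window lengths, not the XtratuM API or the ORK+ implementation), the sketch below runs a static cyclic schedule in which each partition owns a fixed time window inside a repeating major frame:

```python
# Toy model of time partitioning: a static cyclic schedule in which each
# partition receives a fixed time window within a repeating major frame.
# This illustrates the concept only; it is not the XtratuM API.
import time

MAJOR_FRAME = [                      # (partition name, window in seconds)
    ("ORK+ / Ada application", 0.02),
    ("payload manager", 0.01),
    ("telemetry", 0.01),
]

def run_major_frames(schedule, frames, workload):
    for _ in range(frames):
        for partition, window in schedule:
            deadline = time.monotonic() + window
            # A hypervisor would also switch address spaces here (spatial
            # isolation); this sketch only models the temporal dimension.
            while time.monotonic() < deadline:
                workload(partition)

def noop_workload(partition):
    # Placeholder for the partition's application code.
    pass

run_major_frames(MAJOR_FRAME, frames=3, workload=noop_workload)
```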
Abstract:
The evolution of smartphones, all equipped with digital cameras, is driving a growing demand for ever more complex applications that need to rely on real-time computer vision algorithms. However, video signals are only increasing in size, whereas the performance of single-core processors has stagnated in recent years. Consequently, new computer vision algorithms will need to be parallel in order to run on multiple processors and be computationally scalable. One of the most promising classes of processors nowadays can be found in graphics processing units (GPU). These are devices offering a high degree of parallelism, excellent numerical performance and increasing versatility, which makes them attractive for scientific computing. In this thesis, we explore two computer vision applications whose high computational complexity precludes them from running in real time on traditional uniprocessors. We show that by parallelizing the subtasks and implementing them on a GPU, both applications attain their goal of running at interactive frame rates. In addition, we propose a technique for fast evaluation of arbitrarily complex functions, specially designed for GPU implementation. First, we explore the application of depth-image-based rendering techniques to the unusual configuration of two convergent, wide-baseline cameras with colour and depth information, in contrast to the narrow-baseline, parallel cameras usually used in 3D TV. Using a backward mapping approach with a depth inpainting scheme based on modified median filters, we show that these techniques are adequate for free viewpoint video applications. We also show that referring depth information to a global reference system is ill-advised and should be avoided. Then, we propose a moving object detection system based on kernel density estimation techniques. These techniques are well suited to modelling complex scenes featuring multimodal backgrounds, but have seen little use because of their high computational and memory cost. The proposed system, implemented in real time on a GPU, features novel proposals for dynamic kernel bandwidth estimation for the background model, selective update of the background model, update of the position of the reference samples of the foreground model using a multi-region particle filter, and automatic selection of regions of interest to reduce computational cost. The results, evaluated on several databases and compared with other state-of-the-art algorithms, demonstrate the high quality and versatility of our proposal. Finally, we propose a general method for approximating arbitrarily complex functions using continuous piecewise linear functions, specially formulated for GPU implementation by leveraging the texture filtering units, normally unused for numerical computation. The proposal includes a rigorous mathematical analysis of the approximation error as a function of the number of samples, as well as a method to obtain a quasi-optimal partition of the function's domain that minimizes the approximation error.
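As a toy illustration of the piecewise linear approximation idea (uniform sampling on the CPU with a hypothetical example function, rather than the thesis's quasi-optimal partition evaluated through the GPU texture filtering units), the following sketch builds a continuous piecewise linear interpolant from N samples and measures its maximum error:

```python
# Minimal sketch: approximate an arbitrary 1-D function with a continuous
# piecewise linear interpolant built from N uniformly spaced samples, and
# measure the maximum approximation error. Uniform sampling stands in for
# the quasi-optimal domain partition discussed in the thesis; on a GPU the
# same linear interpolation comes for free from the texture filtering units.
import numpy as np

def f(x):
    return np.exp(-x) * np.sin(4 * x)   # arbitrary example function

def piecewise_linear_approx(f, a, b, n_samples):
    xs = np.linspace(a, b, n_samples)    # breakpoints of the partition
    ys = f(xs)                           # one sample (texture fetch) each
    return lambda x: np.interp(x, xs, ys)

for n in (8, 32, 128):
    approx = piecewise_linear_approx(f, 0.0, 2.0, n)
    grid = np.linspace(0.0, 2.0, 10001)
    max_err = np.max(np.abs(f(grid) - approx(grid)))
    print(f"{n:4d} samples: max error = {max_err:.2e}")
```

For a smooth function the maximum error of linear interpolation decays roughly quadratically with the number of samples, which is the kind of behaviour a rigorous error analysis makes precise and a well-chosen partition of the domain can exploit.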
Abstract:
The activation of the silent endogenous progesterone receptor (PR) gene by 17-β-estradiol (E2) in cells stably transfected with estrogen receptor (ER) was used as a model system to study the mechanism of E2-induced transcription. The time course of the E2-induced PR transcription rate was determined by nuclear run-on assays. No marked effect on specific PR gene transcription rates was detected at 0 and 1 h of E2 treatment. After 3 h of E2 treatment, the PR mRNA synthesis rate increased 2.0 ± 0.2-fold and continued to increase to 3.5 ± 0.4-fold by 24 h as compared with 0 h. The transcription rate increase was followed by PR mRNA accumulation. No PR mRNA was detectable at 0, 1, and 3 h of E2 treatment. PR mRNA accumulation was detected at 6 h of E2 treatment and continued until 18 h, the longest time point examined. Interestingly, this slow and gradual increase in the transcription rate of the endogenous PR gene did not parallel binding of E2 to ER, which was maximal within 30 min. Furthermore, the E2–ER level was down-regulated to 15% at 3 h as compared with 30 min of E2 treatment and remained low at 24 h of E2 exposure. These paradoxical observations indicate that E2-induced transcription activation is more complicated than just an association of the occupied ER with the transcription machinery.
Abstract:
BACKGROUND The application of therapeutic hypothermia (TH) for 12 to 24 hours following out-of-hospital cardiac arrest (OHCA) has been associated with decreased mortality and improved neurological function. However, the optimal duration of cooling is not known. We aimed to investigate whether targeted temperature management (TTM) at 33 ± 1 °C for 48 hours compared to 24 hours results in a better long-term neurological outcome. METHODS The TTH48 trial is an investigator-initiated pragmatic international trial in which patients resuscitated from OHCA are randomised to TTM at 33 ± 1 °C for either 24 or 48 hours. Inclusion criteria are: age older than 17 and below 80 years; presumed cardiac origin of arrest; and Glasgow Coma Score (GCS) <8 on admission. The primary outcome is neurological outcome at 6 months using the Cerebral Performance Category score (CPC), assessed by an assessor blinded to treatment allocation and dichotomised to good (CPC 1-2) or poor (CPC 3-5) outcome. Secondary outcomes are: 6-month mortality; incidence of infection, bleeding and organ failure; and CPC at hospital discharge, at day 28 and at day 90 following OHCA. Assuming that 50 % of the patients treated for 24 hours will have a poor outcome at 6 months, a study including 350 patients (175/arm) will have 80 % power (with a significance level of 5 %) to detect an absolute 15 % difference in the primary outcome between treatment groups. A safety interim analysis was performed after the inclusion of 175 patients. DISCUSSION This is the first randomised trial to investigate the effect of the duration of TTM at 33 ± 1 °C in adult OHCA patients. We anticipate that the results of this trial will add significant knowledge regarding the management of cooling procedures in OHCA patients. TRIAL REGISTRATION NCT01689077.
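As a back-of-the-envelope check of the sample-size statement above, the following sketch applies the standard normal-approximation formula for comparing two proportions (not necessarily the exact method used by the trial statisticians):

```python
# Sample-size check for comparing two proportions (poor outcome 50% vs 35%,
# i.e. an absolute 15% difference) with two-sided alpha = 0.05 and 80% power.
# Standard normal-approximation formula; the trial may have used a slightly
# different method or added an allowance, hence ~170 rather than exactly 175.
from math import ceil, sqrt
from scipy.stats import norm

p1, p2 = 0.50, 0.35          # poor-outcome proportions in the two arms
alpha, power = 0.05, 0.80

z_alpha = norm.ppf(1 - alpha / 2)
z_beta = norm.ppf(power)
p_bar = (p1 + p2) / 2

n_per_arm = (
    (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
     + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    / (p1 - p2) ** 2
)
print(ceil(n_per_arm))       # ~170 patients per arm
```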
Abstract:
Wireless sensor networks have been identified as one of the key technologies for the 21st century. In order to overcome their limitations, such as fault tolerance and energy conservation, we propose a middleware solution, In-Motes. In-Motes is a fault-tolerant platform for deploying and monitoring applications in real time that offers a number of possibilities to the end user, while giving the freedom to experiment with various parameters so that the deployed applications run in an energy-efficient manner inside the network. The proposed scheme is evaluated through the In-Motes EYE application, an agent-based real-time In-Motes application developed for sensing acceleration variations in an environment, with the aim of testing its merits under real-time conditions. The application was tested in a prototype road-like area for a period of four months.
Abstract:
Large read-only or read-write transactions with a large read set and a small write set constitute an important class of transactions used in such applications as data mining, data warehousing, statistical applications, and report generators. Such transactions are best supported with optimistic concurrency, because locking large amounts of data for extended periods of time is not an acceptable solution. The abort rate of regular optimistic concurrency algorithms increases exponentially with the size of the transaction. The algorithm proposed in this dissertation solves this problem by using a new transaction scheduling technique that allows a large transaction to commit safely with a probability that can exceed that of regular optimistic concurrency algorithms by several orders of magnitude. A performance simulation study and a formal proof of serializability and external consistency of the proposed algorithm are also presented. This dissertation also proposes a new query optimization technique (lazy queries). Lazy Queries is an adaptive query execution scheme that optimizes itself as the query runs. Lazy queries can be used to find an intersection of sub-queries in a very efficient way, which requires neither full execution of large sub-queries nor any statistical knowledge about the data. An efficient optimistic concurrency control algorithm used in a massively parallel B-tree with variable-length keys is also introduced. B-trees with variable-length keys can be used effectively in a variety of database types. In particular, we show how such a B-tree was used in our implementation of a semantic object-oriented DBMS. The concurrency control algorithm uses semantically safe optimistic virtual "locks" that achieve very fine granularity in conflict detection. This algorithm ensures serializability and external consistency by using logical clocks and backward validation of transactional queries. A formal proof of correctness of the proposed algorithm is also presented.
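For readers unfamiliar with the underlying mechanism, here is a minimal Python sketch of optimistic concurrency with backward validation over versioned records (a generic illustration only; the dissertation's scheduling technique, semantic virtual locks, and logical-clock details are not reproduced here):

```python
# Generic sketch of optimistic concurrency control with backward validation:
# a transaction reads versioned records without locking, then at commit time
# re-checks that every record it read is still at the version it saw.
import threading

class Store:
    def __init__(self):
        self._data = {}            # key -> (value, version)
        self._commit_lock = threading.Lock()

    def read(self, key):
        return self._data.get(key, (None, 0))

    def commit(self, read_set, write_set):
        """read_set: {key: version seen}; write_set: {key: new value}."""
        with self._commit_lock:
            # Backward validation: abort if anything we read has changed.
            for key, seen_version in read_set.items():
                if self._data.get(key, (None, 0))[1] != seen_version:
                    return False                     # conflict -> abort
            for key, value in write_set.items():
                _, version = self._data.get(key, (None, 0))
                self._data[key] = (value, version + 1)
            return True                              # committed

store = Store()
store.commit({}, {"x": 1})                           # initialise x

# A read-mostly transaction: read x, then try to commit a derived value.
value, version = store.read("x")
store.commit({}, {"x": 2})                           # concurrent writer
ok = store.commit({"x": version}, {"summary": value + 100})
print("committed" if ok else "aborted")              # prints "aborted"
```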
Abstract:
This paper addresses three questions: (1) How severe were the episodes of banking instability experienced by the UK over the past two centuries? (2) What have been the macroeconomic indicators of UK banking instability? and (3) What have been the consequences of UK banking instability for the cost of credit? Using a unique dataset of bank share prices from 1830 to 2010 to assess the stability of the UK banking system, we find that banking instability has grown more severe since the 1970s. We also find that interest rates, inflation, lending growth, and equity prices are consistent macroeconomic indicators of UK banking instability over the long run. Furthermore, utilising a unique dataset of corporate-bond yields for the period 1860 to 2010, we find that there is a significant long-run relationship between banking instability and the credit-risk premium faced by businesses.
Abstract:
A large class of computational problems is characterised by frequent synchronisation and computational requirements that change as a function of time. When such a problem is solved on a message-passing multiprocessor machine [5], the combination of these characteristics leads to system performance that deteriorates over time. As the communication performance of parallel hardware steadily improves, load balance becomes a dominant factor in obtaining high parallel efficiency. Performance can be improved with periodic redistribution of computational load; however, redistribution can sometimes be very costly. We study the issue of deciding when to invoke a global load re-balancing mechanism. Such a decision policy must actively weigh the costs of remapping against the performance benefits, and should be general enough to apply automatically to a wide range of computations. This paper discusses a generic strategy for Dynamic Load Balancing (DLB) in unstructured mesh computational mechanics applications. The strategy is intended to handle varying levels of load change throughout the run. The major issues involved in a generic dynamic load balancing scheme are investigated, together with techniques to automate the implementation of a dynamic load balancing mechanism within the Computer Aided Parallelisation Tools (CAPTools) environment, which is a semi-automatic tool for the parallelisation of mesh-based FORTRAN codes.
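The following Python sketch illustrates one possible form of such a decision policy (with a hypothetical cost model and workload, not the CAPTools strategy itself): a global remap is invoked only once the time lost to load imbalance since the last remap exceeds the estimated cost of remapping.

```python
# Illustrative decision policy for dynamic load balancing: remap only when
# the time lost to imbalance since the last remap outweighs the estimated
# remapping cost. The cost model and the workload are hypothetical.

def imbalance_loss(step_times):
    """Time lost in one synchronised step: every processor waits for the
    slowest one, so the loss is the gap between max and mean step time."""
    return max(step_times) - sum(step_times) / len(step_times)

def run(steps, num_procs, remap_cost, step_time_fn):
    accumulated_loss = 0.0
    remaps = 0
    for step in range(steps):
        times = step_time_fn(step, num_procs)
        accumulated_loss += imbalance_loss(times)
        if accumulated_loss > remap_cost:
            remaps += 1                 # invoke the global re-balancer
            accumulated_loss = 0.0      # load is (nearly) balanced again
    return remaps

# Hypothetical workload whose imbalance grows as the mesh adapts over time.
def drifting_load(step, num_procs):
    return [1.0 + 0.002 * step * p for p in range(num_procs)]

print(run(steps=500, num_procs=8, remap_cost=5.0, step_time_fn=drifting_load))
```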
Abstract:
Universidade Estadual de Campinas. Faculdade de Educação Física