973 results for multi-threaded program


Relevance:

100.00%

Publisher:

Abstract:

In this paper we describe a distributed object oriented logic programming language in which an object is a collection of threads deductively accessing and updating a shared logic program. The key features of the language, such as static and dynamic object methods and multiple inheritance, are illustrated through a series of small examples. We show how we can implement object servers, allowing remote spawning of objects, which we can use as staging posts for mobile agents. We give as an example an information gathering mobile agent that can be queried about the information it has so far gathered whilst it is gathering new information. Finally we define a class of co-operative reasoning agents that can do resource bounded inference for full first order predicate logic, handling multiple queries and information updates concurrently. We believe that the combination of the concurrent OO and the LP programming paradigms produces a powerful tool for quickly implementing rational multi-agent applications on the internet.
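The paper's central idea, multiple threads deductively accessing and updating a shared logic program, can be sketched in miniature. The following Python fragment is a hypothetical illustration (not the language described in the paper): a shared fact store guarded by a lock, with one "gatherer" thread asserting facts while another thread answers queries against the same store, mirroring the information-gathering agent that can be queried while it gathers.

```python
import threading

class SharedFactStore:
    """Minimal stand-in for a shared dynamic logic database."""
    def __init__(self):
        self._facts = set()
        self._lock = threading.Lock()

    def assert_fact(self, fact):
        with self._lock:
            self._facts.add(fact)

    def query(self, predicate):
        # Return all facts whose functor matches `predicate`.
        with self._lock:
            return [f for f in self._facts if f[0] == predicate]

store = SharedFactStore()

def gatherer():
    # One "agent thread" keeps adding newly gathered information.
    for city in ["lisbon", "london", "madrid"]:
        store.assert_fact(("visited", city))

def responder(results):
    # Another thread answers queries against the same shared store.
    results.extend(store.query("visited"))

t1 = threading.Thread(target=gatherer)
t1.start()
t1.join()
answers = []
t2 = threading.Thread(target=responder, args=(answers,))
t2.start()
t2.join()
print(len(answers))  # → 3
```

In the actual language the store holds clauses and access is deductive rather than a simple functor match; the lock-per-store discipline here only illustrates the shared-database shape of the design.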

Relevance:

100.00%

Publisher:

Abstract:

This Master's thesis examines threaded programming at the upper hierarchy level of parallel programming, focusing in particular on hyper-threading technology. The thesis examines the advantages and disadvantages of hyper-threading and its effects on parallel algorithms. The goal of the work was to understand the implementation of hyper-threading in the Intel Pentium 4 processor and to enable exploiting it where it yields a performance advantage. Performance data was collected and analysed by running a large set of benchmarks under varying conditions (memory handling, compiler settings, environment variables, ...). Two types of algorithms were examined: matrix operations and sorting. These applications have a regular memory access pattern, which is a double-edged sword: it is an advantage in arithmetic-logic processing, but on the other hand it degrades memory performance. The reason is that modern processors have very good raw performance when processing regular data, whereas the memory architecture is limited by cache sizes and various buffers. Once the problem size exceeds a certain limit, the actual performance can drop to a fraction of the peak performance.

Relevance:

100.00%

Publisher:

Abstract:

In this tutorial paper we summarise the key features of the multi-threaded Qu-Prolog language for implementing multi-threaded communicating agent applications. Internal threads of an agent communicate using the shared dynamic database, which serves as a generalisation of a Linda tuple store. Threads in different agents, perhaps on different hosts, communicate using either a thread-to-thread store-and-forward communication system, or a publish-and-subscribe mechanism in which messages are routed to their destinations based on content-test subscriptions. We illustrate the features using an auction house application. This is fully distributed, with multiple auctioneers and bidders participating in simultaneous auctions. The application makes essential use of the three forms of inter-thread communication in Qu-Prolog. The agent bidding behaviour is specified graphically as a finite state automaton, and its implementation is essentially the execution of its state transition function. The paper assumes familiarity with Prolog and the basic concepts of multi-agent systems.
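The Linda-style generalisation mentioned above can be sketched as a blocking tuple store. The Python class below is an illustrative toy, not Qu-Prolog's actual API: `out` deposits a tuple and `take` blocks until a tuple matches a template (with `None` as a wildcard), which is the coordination primitive an auction thread would use to wait for bids.

```python
import threading

class TupleSpace:
    """Toy Linda-style tuple store (illustrative; not Qu-Prolog's API)."""
    def __init__(self):
        self._tuples = []
        self._cv = threading.Condition()

    def out(self, tup):
        # Deposit a tuple and wake any thread blocked on a template.
        with self._cv:
            self._tuples.append(tup)
            self._cv.notify_all()

    def take(self, template):
        # Block until some tuple matches; None in the template is a wildcard.
        def matches(t):
            return len(t) == len(template) and all(
                p is None or p == v for p, v in zip(template, t))
        with self._cv:
            while True:
                for t in self._tuples:
                    if matches(t):
                        self._tuples.remove(t)
                        return t
                self._cv.wait()

space = TupleSpace()

def bidder():
    # A bidder thread publishes a bid into the shared space.
    space.out(("bid", "lot42", 100))

threading.Thread(target=bidder).start()
# The auctioneer blocks until a bid for lot42 arrives.
winning = space.take(("bid", "lot42", None))
print(winning)  # → ('bid', 'lot42', 100)
```

Qu-Prolog's dynamic database additionally supports unification-based matching and the two inter-agent channels (store-and-forward, publish/subscribe) described in the abstract; the condition-variable loop here captures only the blocking-read semantics.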

Relevance:

100.00%

Publisher:

Abstract:

Femtosecond laser microfabrication has emerged over the last decade as a flexible 3D technology in photonics. Numerical simulations provide important insight into spatial and temporal beam and pulse shaping during the course of extremely intricate nonlinear propagation (see e.g. [1,2]). The electromagnetics of such propagation is typically described by the generalized Non-Linear Schrödinger Equation (NLSE) coupled with a Drude model for the plasma [3]. In this paper we consider a multi-threaded parallel numerical solution for a specific model which describes femtosecond laser pulse propagation in transparent media [4,5]; however, our approach can be extended to similar models. The numerical code is implemented on an NVIDIA Graphics Processing Unit (GPU), which provides an efficient hardware platform for multi-threaded computing. We compare the performance of the parallel code described below, implemented for the GPU using the CUDA programming interface [3], with the serial CPU version used in our previous papers [4,5]. © 2011 IEEE.
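A common numerical scheme for this class of NLSE models is the split-step Fourier method; the abstract does not name the authors' scheme, so the following is only an assumed illustration of one propagation step for a basic scalar NLSE (no Drude plasma term), using a naive DFT in place of the GPU FFTs. Because every sub-step multiplies the field by a unit-modulus phase, total power is conserved, which is a convenient sanity check on the implementation.

```python
import cmath
import math

def dft(x, sign=-1):
    # Naive O(n^2) discrete Fourier transform (stand-in for a GPU FFT).
    n = len(x)
    return [sum(x[k] * cmath.exp(sign * 2j * math.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(X):
    n = len(X)
    return [v / n for v in dft(X, sign=+1)]

def split_step(u, dz, beta2, gamma, dt):
    """One symmetric split-step for the basic scalar NLSE
    i u_z = (beta2/2) u_tt - gamma |u|^2 u  (illustrative model only)."""
    n = len(u)
    # Nonlinear half-step in the time domain.
    u = [a * cmath.exp(1j * gamma * abs(a) ** 2 * dz / 2) for a in u]
    # Full linear (dispersion) step in the frequency domain.
    U = dft(u)
    freqs = [(k if k < n // 2 else k - n) * 2 * math.pi / (n * dt)
             for k in range(n)]
    U = [U[k] * cmath.exp(-1j * beta2 / 2 * freqs[k] ** 2 * dz)
         for k in range(n)]
    u = idft(U)
    # Second nonlinear half-step.
    return [a * cmath.exp(1j * gamma * abs(a) ** 2 * dz / 2) for a in u]

# Propagate a Gaussian pulse one step and check power conservation.
u0 = [cmath.exp(-((k - 8) / 3.0) ** 2) for k in range(16)]
u1 = split_step(u0, dz=0.01, beta2=-1.0, gamma=1.0, dt=0.1)
power_before = sum(abs(a) ** 2 for a in u0)
power_after = sum(abs(a) ** 2 for a in u1)
print(abs(power_before - power_after) < 1e-9)  # → True
```

On a GPU, each of the per-sample loops above maps naturally onto one CUDA thread per sample, with the FFTs delegated to a library; that data-parallel structure is what makes the multi-threaded implementation profitable.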

Relevance:

100.00%

Publisher:

Abstract:

We describe a parallel multi-threaded approach for high performance modelling of wide class of phenomena in ultrafast nonlinear optics. Specific implementation has been performed using the highly parallel capabilities of a programmable graphics processor. © 2011 SPIE.

Relevance:

100.00%

Publisher:

Abstract:

Concurrent software executes multiple threads or processes to achieve high performance. However, concurrency results in a huge number of different system behaviors that are difficult to test and verify. The aim of this dissertation is to develop new methods and tools for modeling and analyzing concurrent software systems at the design and code levels. This dissertation consists of several related results. First, a formal model of Mondex, an electronic purse system, is built using Petri nets from user requirements and formally verified using model checking. Second, Petri net models are automatically mined from the event traces generated by scientific workflows. Third, partial order models are automatically extracted from instrumented concurrent program executions, and potential atomicity violation bugs are automatically verified based on the partial order models using model checking. Our formal specification and verification of Mondex have contributed to the worldwide effort of developing a verified software repository. Our method to mine Petri net models automatically from provenance offers a new approach to building scientific workflows. Our dynamic prediction tool, named McPatom, can predict several known bugs in real-world systems, including one that evades several other existing tools. McPatom is efficient and scalable, as it takes advantage of the nature of atomicity violations and considers only a pair of threads and accesses to a single shared variable at a time. However, predictive tools need to consider the trade-offs between precision and coverage. Based on McPatom, this dissertation presents two methods for improving the coverage and precision of atomicity violation predictions: 1) a post-prediction analysis method to increase coverage while ensuring precision; 2) a follow-up replaying method to further increase coverage. Both methods are implemented in a completely automatic tool.
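McPatom itself verifies atomicity violations by model checking over partial-order models; as a much simpler, assumed illustration of the kind of pattern it targets, the scan below searches a linear event trace for one classic unserializable interleaving, read-write-read on a single shared variable between a pair of threads, matching the abstract's restriction to thread pairs and one variable at a time.

```python
# Each event is (thread_id, op, var). An atomicity violation candidate is an
# unserializable pattern such as read-write-read: thread T reads x, another
# thread writes x, then T reads x again inside what T assumed was atomic.
def find_rwr_violations(trace):
    violations = []
    for i, (t1, op1, v1) in enumerate(trace):
        if op1 != "read":
            continue
        for j in range(i + 1, len(trace)):
            t2, op2, v2 = trace[j]
            if v2 == v1 and t2 != t1 and op2 == "write":
                # Look for a later read of the same variable by the same thread.
                for k in range(j + 1, len(trace)):
                    t3, op3, v3 = trace[k]
                    if t3 == t1 and v3 == v1 and op3 == "read":
                        violations.append((i, j, k))
                        break
                break
    return violations

trace = [
    ("T1", "read", "x"),   # T1 checks a condition on x ...
    ("T2", "write", "x"),  # ... T2 updates x in between ...
    ("T1", "read", "x"),   # ... T1 reads x again: stale assumption
]
print(find_rwr_violations(trace))  # → [(0, 1, 2)]
```

A real predictor works on partial orders rather than one observed interleaving, so it can report violations that did not occur in the recorded run; the trade-off between such coverage and precision is exactly what the two follow-up methods in the dissertation address.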

Relevance:

90.00%

Publisher:

Abstract:

BACKGROUND: The emergency department has been identified as an area within the health care sector with the highest reports of violence. The best way to control violence is to prevent it before it becomes an issue; ideally, to prevent violent episodes we should eliminate all triggers of frustration and violence. Our study aims to assess the impact of a multi-faceted quality improvement program aimed at preventing incivility and violence against healthcare professionals working at the ophthalmological emergency department of a teaching hospital. METHODS/DESIGN: This study is a single-center prospective, controlled time-series study with an alternate-month design. The prevention program is based on the successive implementation of five complementary interventions: a) an organizational approach with a standardized triage algorithm and a patient waiting-number screen; b) an environmental approach with clear signage of the premises; c) an educational approach with informational videos for patients and accompanying persons in waiting rooms; d) a human approach with a mediator in waiting rooms; and e) a security approach with surveillance cameras linked to hospital security. The primary outcome is the rate of incivility or violence against healthcare staff by patients or those accompanying them. All patients admitted to the ophthalmological emergency department, and those accompanying them, will be enrolled. In all, 45,260 patients will be included over a 24-month period. The unit of analysis will be the patient admitted to the emergency department. Data analysis will be blinded to allocation, but due to the nature of the intervention, physicians and patients will not be blinded. DISCUSSION: The strengths of this study include the active solicitation of event reporting, its prospective design, and the fact that it enables assessment of each of the interventions that make up the program. The challenge lies in identifying effective interventions, adapting them to the context of care in an emergency department, and thoroughly assessing their efficacy with a high level of proof. The study has been registered as a cRCT at clinicaltrials.gov (identifier: NCT02015884).

Relevance:

90.00%

Publisher:

Abstract:

Handwriting Without Tears (HWT) is a multi-sensory program that provides a simpler approach to the instruction of cursive handwriting. It was administered to a sample of third graders to assess the effectiveness of the program and determine if it would be a viable option for handwriting instruction at CID.

Relevance:

80.00%

Publisher:

Abstract:

In the past few years, Tabling has emerged as a powerful logic programming model. The integration of concurrency features into the implementation of Tabling systems is demanded by the need to use recently developed tabling applications within distributed systems, where a process has to respond concurrently to several requests. Support for sharing tables among the concurrent threads of a Tabling process is a desirable feature: it enables one of Tabling's virtues, the re-use of computations, across threads, and allows efficient usage of available memory. However, the incremental completion of tables which are evaluated concurrently is not a trivial problem. In this dissertation we describe the integration of concurrency mechanisms, by way of multi-threading, in a state-of-the-art Tabling and Prolog system, XSB. We begin by reviewing the main concepts of a formal description of tabled computations, called SLG resolution, and of the implementation of Tabling under the SLG-WAM, the abstract machine supported by XSB. We describe the different scheduling strategies provided by XSB and introduce some new properties of local scheduling, a scheduling strategy for SLG resolution. We proceed to describe our implementation work: the process of integrating multi-threading in a Prolog system supporting Tabling, without yet addressing the problem of shared tables. We describe the trade-offs and implementation decisions involved. We then describe an optimistic algorithm for the concurrent sharing of completed tables, Shared Completed Tables, which allows the sharing of tables without incurring deadlocks under local scheduling. This method relies on the execution properties of local scheduling and includes full support for negation. We provide a theoretical framework and discuss the implementation's correctness and complexity.
After that, we describe a method for the sharing of tables among threads that allows parallelism in the computation of inter-dependent subgoals, which we name Concurrent Completion, and we argue informally for its correctness. We give detailed performance measurements of the multi-threaded XSB system over a variety of machines and operating systems, for both the Shared Completed Tables and the Concurrent Completion implementations. We focus our measurements on the overhead over the sequential engine and on the scalability of the system. We finish with a comparison of XSB with other multi-threaded Prolog systems, and we compare our approach to concurrent tabling with parallel and distributed methods for the evaluation of tabling. Finally, we identify future research directions.
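The optimistic flavour of Shared Completed Tables, threads reuse a table once it is complete and the first thread to complete a subgoal wins, can be sketched as lock-guarded memoization. This is a hypothetical Python analogue, not XSB's SLG-WAM machinery: `answers` returns the completed table if one exists, otherwise evaluates the subgoal outside the lock (optimistically, so duplicated work is possible) and installs the result atomically.

```python
import threading

class TableStore:
    """Toy "shared completed tables": once a subgoal's answer table is
    complete, any thread may reuse it without recomputation."""
    def __init__(self):
        self._tables = {}      # subgoal -> completed answer list
        self._lock = threading.Lock()

    def answers(self, subgoal, compute):
        with self._lock:
            if subgoal in self._tables:      # completed table: reuse it
                return self._tables[subgoal]
        result = compute(subgoal)            # evaluate outside the lock
        with self._lock:
            # Optimistic: the first completer wins; a racing thread's
            # duplicate result is discarded, avoiding deadlock entirely.
            return self._tables.setdefault(subgoal, result)

store = TableStore()
calls = []

def reach(node):
    calls.append(node)                       # record actual evaluations
    edges = {"a": ["b"], "b": ["c"], "c": []}
    return edges[node]

def worker(out):
    out.append(store.answers("reach(a)", lambda s: reach("a")))

r1, r2 = [], []
t1 = threading.Thread(target=worker, args=(r1,))
t2 = threading.Thread(target=worker, args=(r2,))
t1.start(); t1.join()                        # first thread computes the table
t2.start(); t2.join()                        # second thread reuses it
print(r1 == r2, calls)  # → True ['a']
```

The hard part the dissertation actually addresses, incremental completion of mutually dependent subgoals evaluated by different threads (Concurrent Completion), has no counterpart in this sketch, which only captures the reuse-after-completion discipline.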

Relevance:

80.00%

Publisher:

Abstract:

In this paper, we propose the Distributed using Optimal Priority Assignment (DOPA) heuristic, which finds a feasible partitioning and priority assignment for distributed applications based on the linear transactional model. DOPA partitions the tasks and messages in the distributed system and uses the Optimal Priority Assignment (OPA) algorithm, known as Audsley's algorithm, to find the priorities for that partition. The experimental results show how the use of the OPA algorithm increases, on average, the number of schedulable tasks and messages in a distributed system when compared to the Deadline Monotonic (DM) assignment usually favoured in other works. Afterwards, we extend these results to the assignment of parallel/distributed applications and present a second heuristic named Parallel-DOPA (P-DOPA). In that case, we show how the partitioning process can be simplified by using the Distributed Stretch Transformation (DST), a parallel transaction transformation algorithm introduced in [1].
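Audsley's OPA algorithm assigns priorities from the lowest level upward: at each level it looks for any unassigned task that is schedulable there assuming all remaining tasks have higher priority. The sketch below pairs it with standard response-time analysis for single-processor fixed-priority tasks with implicit deadlines; the paper applies OPA to transactions over a distributed system, so this is a deliberately simplified, assumed setting (the task set is invented for illustration).

```python
import math

# A task is (name, C, T): worst-case execution time C, period T, deadline D = T.
def response_time_ok(task, higher):
    """Response-time analysis: fixed-point iteration on
    R = C + sum(ceil(R / T_h) * C_h) over higher-priority tasks."""
    _, c, t = task
    r = c
    while True:
        nxt = c + sum(math.ceil(r / th) * ch for _, ch, th in higher)
        if nxt == r:
            return r <= t        # converged: schedulable iff R <= D
        if nxt > t:
            return False         # response time already exceeds the deadline
        r = nxt

def audsley(tasks):
    """Audsley's OPA: fill priority levels from lowest to highest.
    Returns tasks ordered lowest-priority first, or None if infeasible."""
    remaining = list(tasks)
    order = []
    while remaining:
        for cand in remaining:
            others = [t for t in remaining if t is not cand]
            # Is `cand` schedulable at the current (lowest free) level,
            # with every other unassigned task assumed higher-priority?
            if response_time_ok(cand, others):
                remaining.remove(cand)
                order.append(cand)
                break
        else:
            return None          # no task fits this level: infeasible
    return order

tasks = [("t1", 1, 4), ("t2", 2, 6), ("t3", 3, 12)]
order = audsley(tasks)
print([name for name, _, _ in order])  # → ['t3', 't1', 't2']
```

OPA is optimal in the sense that if any fixed-priority assignment is feasible under an OPA-compatible schedulability test (response-time analysis with constrained deadlines qualifies), this procedure finds one; that optimality is what DOPA exploits when compared against Deadline Monotonic.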

Relevance:

80.00%

Publisher:

Abstract:

Chagas disease, named after Carlos Chagas, who first described it in 1909, exists only on the American continent. It is caused by a parasite, Trypanosoma cruzi, transmitted to humans by blood-sucking triatomine bugs and by blood transfusion. Chagas disease has two successive phases, acute and chronic. The acute phase lasts 6 to 8 weeks. Several years into the chronic phase, 20% to 35% of infected individuals, depending on the geographical area, will develop irreversible lesions of the autonomic nervous system in the heart, esophagus and colon, and of the peripheral nervous system. Data on the prevalence and distribution of Chagas disease improved in quality during the 1980s as a result of demographically representative cross-sectional studies carried out in countries where accurate information was not available. A group of experts met in Brasília in 1979 and devised standard protocols to carry out countrywide prevalence studies on human T. cruzi infection and triatomine house infestation. Thanks to a coordinated multi-country program in the Southern Cone countries, the transmission of Chagas disease by vectors and by blood transfusion was interrupted in Uruguay in 1997, in Chile in 1999, and in 8 of the 12 endemic states of Brazil in 2000, and so the incidence of new infections by T. cruzi across the whole continent has decreased by 70%. Similar multi-country control initiatives have been launched in the Andean countries and in Central America, and rapid progress has been recorded towards interrupting the transmission of Chagas disease by 2005, as requested by a Resolution of the World Health Assembly approved in 1998. The cost-benefit analysis of the investments in the vector control program in Brazil indicates savings of US$17 in medical care and disability costs for each dollar spent on prevention, showing that the program is a health investment with a good return.
Since the inception in 1979 of the Steering Committee on Chagas Disease of the Special Program for Research and Training in Tropical Diseases of the World Health Organization (TDR), the objective has been to promote and finance research aimed at the development of new methods and tools to control this disease. Well-known research institutions in Latin America were the key elements of a worldwide network of laboratories that received, on a competitive basis, financial support for projects in line with the established priorities. We present the timeline of the different milestones that successively and logically answered the outstanding scientific questions identified by the Scientific Working Group in 1978, and that influenced the development and industrial production of practical solutions for the diagnosis of the infection and the control of the disease.

Relevance:

80.00%

Publisher:

Abstract:

This Report is an update of the Cape Verde Diagnostic Trade Integration Study, titled Cape Verde's Insertion into the Global Economy, produced and validated by the Government of Cape Verde in December 2008. Like the previous 2008 study, this Cape Verde Diagnostic Trade Integration Study Update provides a critical examination of the major institutional and production constraints that hinder Cape Verde's ability to capitalize fully on the growth and welfare gains from its integration into the world economy. As a policy report, this study offers a set of priority policies and measures that can be implemented by both the public and private sectors to mitigate and surmount these supply-side and institutional constraints. These recommendations are summarized in an Action Matrix. The Report is the fruit of the generous support of the multi-donor Enhanced Integrated Framework (EIF) program. In every crisis there is an opportunity. Four years after the validation of the country's first Diagnostic Trade Integration Study in 2008, Cape Verde finds itself in a drastically altered external environment, worse than four years ago, when it was already traversing years of crisis as global food and energy prices escalated. Just as the country was validating its first trade study in late 2008, and celebrating its graduation from the list of Least Developed Countries, the onset of the deepest global recession in recent memory triggered an even worse external situation as the country's principal source of markets, investments, remittances and aid, the Eurozone, unraveled economically and politically. As the Eurozone crisis spread, it was Cape Verde's misfortune that the crisis contaminated precisely its biggest Eurozone partners and donors, such as Portugal, Spain and Italy. 
For a highly dependent and exposed economy like Cape Verde's, the deteriorating external sector has had a substantial negative impact on macroeconomic performance. At the time of the validation workshop and graduation in 2008, no one could have foreseen or predicted the severity of the global crisis that followed. Despite traversing these years of adversity and external shocks, and suffering palpable setbacks, Cape Verde's economy has proven surprisingly resilient, especially its principal sector, tourism. To its great credit, the country's economic fundamentals are solid, and have been carefully and prudently managed over the years. For this reason alone, the country has thus far weathered the global and Eurozone crises. Yet the near- and medium-term future remains uncertain. The country's margin for maneuver has narrowed, its options are far more limited, and hard choices lie ahead. Thus, there is no better time than now to analyze Cape Verde's position in the global economy, and to examine the many challenges and opportunities it faces. The first diagnostic trade study outlined an ambitious agenda and set of policy strategies to enhance Cape Verde's participation in the global economy. Written prior to the global crisis, the study did not, and could not, anticipate the scope and depth of the subsequent global and Eurozone crises. A few short months before the validation of the first DTIS, Cape Verde joined the World Trade Organization (WTO). It has spent these four years adjusting to this status and implementing its commitments. At the same time, the country seeks greater economic integration with the European Union. Since 2008 the government has been investing heavily in the country's economic infrastructure, focusing especially on fostering transformation in key sectors like agriculture, fisheries, tourism and creative industries. For these and many other reasons, it is both timely and urgent to review the road traveled since 2008. 
It is an opportune moment to reassess the country's options, to rethink strategies, and to chart a new way forward that is practical, implementable, and that builds on the country's competitive advantages and current successes.

Relevance:

80.00%

Publisher:

Abstract:

Remote sensing spatial, spectral, and temporal resolutions of images, acquired over a reasonably sized image extent, result in imagery that can be processed to represent land cover over large areas with an amount of spatial detail that is very attractive for monitoring, management, and scientific activities. With Moore's Law alive and well, more and more parallelism is introduced into all computing platforms, at all levels of integration and programming, to achieve higher performance and energy efficiency. Since geometric calibration is one of the most time-consuming processes when using remote sensing images, the aim of this work is to accelerate it by taking advantage of new computing architectures and technologies, focusing especially on exploiting computation over shared-memory multi-threading hardware. A parallel implementation of the most time-consuming process in remote sensing geometric correction has been implemented using OpenMP directives. This work compares the performance of the original serial binary versus the parallelized implementation, using several multi-threaded modern CPU architectures, and discusses how to find the optimum hardware for a cost-effective execution.
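The work above uses OpenMP directives in the original serial binary's language; as an assumed, language-neutral sketch of the same shared-memory pattern, the Python fragment below statically partitions image rows across a thread pool, analogous to an OpenMP `parallel for` with static scheduling. The per-pixel kernel here is a hypothetical stand-in, not the paper's geometric resampling code.

```python
from concurrent.futures import ThreadPoolExecutor

def correct_row(row):
    # Hypothetical stand-in for the per-pixel geometric correction kernel.
    return [2 * px + 1 for px in row]

def correct_image(image, workers=4):
    """Process disjoint sets of rows in parallel over shared memory,
    like an OpenMP `parallel for` with static scheduling."""
    out = [None] * len(image)
    def work(chunk):
        for i in chunk:            # each worker owns a disjoint set of rows
            out[i] = correct_row(image[i])
    # Strided static partition: worker w handles rows w, w+workers, ...
    chunks = [range(w, len(image), workers) for w in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(work, chunks))   # force completion of all chunks
    return out

image = [[r * 10 + c for c in range(4)] for r in range(8)]
result = correct_image(image)
print(result[0])  # → [1, 3, 5, 7]
```

Because each worker writes only its own rows of the shared output, no locking is needed; in CPython the thread pool mainly illustrates the partitioning scheme, whereas the OpenMP version obtains true parallel speedup from it.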

Relevance:

80.00%

Publisher:

Abstract:

MOTIVATION: The detection of positive selection is widely used to study gene and genome evolution, but its application remains limited by the high computational cost of existing implementations. We present a series of computational optimizations for more efficient estimation of the likelihood function on large-scale phylogenetic problems. We illustrate our approach using the branch-site model of codon evolution. RESULTS: We introduce novel optimization techniques that substantially outperform both CodeML from the PAML package and our previously optimized sequential version SlimCodeML. These techniques can also be applied to other likelihood-based phylogeny software. Our implementation scales well for large numbers of codons and/or species. It can therefore analyse substantially larger datasets than CodeML. We evaluated FastCodeML on different platforms and measured average sequential speedups of FastCodeML (single-threaded) versus CodeML of up to 5.8, average speedups of FastCodeML (multi-threaded) versus CodeML on a single node (shared memory) of up to 36.9 for 12 CPU cores, and average speedups of the distributed FastCodeML versus CodeML of up to 170.9 on eight nodes (96 CPU cores in total). AVAILABILITY AND IMPLEMENTATION: ftp://ftp.vital-it.ch/tools/FastCodeML/. CONTACT: selectome@unil.ch or nicolas.salamin@unil.ch.