8 results for Distributed shared memory
in CentAUR: Central Archive, University of Reading - UK
Abstract:
With the transition to multicore processors almost complete, the parallel processing community is seeking efficient ways to port legacy message passing applications to shared memory and multicore processors. MPJ Express is our reference implementation of Message Passing Interface (MPI)-like bindings for the Java language. Starting with the current release, the MPJ Express software can be configured in two modes: the multicore mode and the cluster mode. In the multicore mode, parallel Java applications execute on shared memory or multicore processors. In the cluster mode, Java applications parallelized using MPJ Express can be executed on distributed memory platforms such as compute clusters and clouds. The multicore device has been implemented using Java threads in order to satisfy the two main design goals of portability and performance. We also discuss the challenges of integrating the multicore device into the MPJ Express software. This turned out to be a challenging task because the parallel application executes in a single JVM in the multicore mode, whereas in the cluster mode the parallel user application executes in multiple JVMs. Due to these inherent architectural differences between the two modes, the MPJ Express runtime is modified to ensure correct semantics of the parallel program. Towards the end, we compare the performance of MPJ Express (multicore mode) with other C and Java message passing libraries---including mpiJava, MPJ/Ibis, MPICH2, and MPJ Express (cluster mode)---on shared memory and multicore processors. We found that MPJ Express performs significantly better in the multicore mode than in the cluster mode. Moreover, when used in the multicore mode on shared memory or multicore processors, MPJ Express also outperforms the other Java messaging libraries, including mpiJava and MPJ/Ibis. We also demonstrate the effectiveness of the MPJ Express multicore device in Gadget-2, a massively parallel astrophysics N-body simulation code.
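To make the programming model concrete, here is a minimal point-to-point example in the mpiJava 1.2 style of API that MPJ Express implements; this is our illustrative sketch, not code from the paper. The intent of the two-mode design is that the same source runs unchanged whether the device is multicore or cluster.

    import mpi.*;

    public class HelloMPJ {
        public static void main(String[] args) throws Exception {
            MPI.Init(args);                        // start the MPJ Express runtime
            int rank = MPI.COMM_WORLD.Rank();      // id of this process
            int size = MPI.COMM_WORLD.Size();      // total number of processes
            if (rank == 0) {
                int[] msg = {42};
                // send one int to rank 1 with message tag 0
                MPI.COMM_WORLD.Send(msg, 0, 1, MPI.INT, 1, 0);
            } else if (rank == 1) {
                int[] buf = new int[1];
                MPI.COMM_WORLD.Recv(buf, 0, 1, MPI.INT, 0, 0);
                System.out.println("rank 1 of " + size + " received " + buf[0]);
            }
            MPI.Finalize();                        // shut the runtime down
        }
    }

With the bundled launcher this would typically be started as, for example, mpjrun.sh -np 2 -dev multicore HelloMPJ to select the multicore device (launcher and flag names as we recall them from the MPJ Express documentation; treat them as indicative).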
Abstract:
Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVM), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
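As a small illustration of the setting of the first contribution, the sketch below counts the support of a candidate itemset over a partitioned transaction database using thread-level parallelism; only a single count per partition crosses the combine step, which is the kind of communication frugality distributed association rule mining aims for. This is our own hypothetical example, with invented class and variable names, not code from any of the papers.

    import java.util.*;
    import java.util.concurrent.*;

    public class ParallelSupportCount {
        // Count how many transactions contain the candidate itemset, splitting
        // the database across threads; each partition returns one local count.
        static long support(List<Set<String>> db, Set<String> candidate, int threads)
                throws InterruptedException, ExecutionException {
            ExecutorService pool = Executors.newFixedThreadPool(threads);
            int chunk = (db.size() + threads - 1) / threads;
            List<Future<Long>> parts = new ArrayList<>();
            for (int t = 0; t < threads; t++) {
                int lo = t * chunk, hi = Math.min(db.size(), lo + chunk);
                parts.add(pool.submit(() ->
                    db.subList(lo, hi).stream()
                      .filter(txn -> txn.containsAll(candidate))
                      .count()));
            }
            long total = 0;
            for (Future<Long> f : parts) total += f.get();  // one number per partition
            pool.shutdown();
            return total;
        }

        public static void main(String[] args) throws Exception {
            List<Set<String>> db = List.of(
                Set.of("bread", "milk"), Set.of("bread", "beer"),
                Set.of("milk", "beer"), Set.of("bread", "milk", "beer"));
            System.out.println(support(db, Set.of("bread", "milk"), 2));  // prints 2
        }
    }

In a genuinely distributed setting the same shape applies, except the per-partition counts travel over the network rather than between threads, which is why keeping the exchanged state small matters.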
Abstract:
In this paper, we study the periodic oscillatory behavior of a class of bidirectional associative memory (BAM) networks with finite distributed delays. A set of criteria is proposed for determining the global exponential periodicity of these BAM networks; the criteria assume neither differentiability nor monotonicity of the activation function of each neuron and are easily checkable. (c) 2005 Elsevier Inc. All rights reserved.
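For orientation, a representative BAM system with finite distributed delays, of the kind the abstract refers to, can be written as follows (our generic notation; the paper's exact formulation may differ):

    \begin{aligned}
    \dot{x}_i(t) &= -a_i x_i(t) + \sum_{j=1}^{m} p_{ji}\, f_j\!\left( \int_0^{\tau} k_{ji}(s)\, y_j(t-s)\, \mathrm{d}s \right) + I_i(t), \quad i = 1, \dots, n,\\
    \dot{y}_j(t) &= -b_j y_j(t) + \sum_{i=1}^{n} q_{ij}\, g_i\!\left( \int_0^{\tau} h_{ij}(s)\, x_i(t-s)\, \mathrm{d}s \right) + J_j(t), \quad j = 1, \dots, m,
    \end{aligned}

where a_i, b_j > 0 are decay rates, the kernels k_{ji}, h_{ij} model the finite distributed delays over [0, \tau], and the inputs I_i, J_j are periodic. Global exponential periodicity then means the system admits a unique periodic solution which every other solution approaches exponentially fast.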
Abstract:
Although many examples exist of shared neural representations of self and other, it is unknown how such shared representations interact with the rest of the brain. Furthermore, do high-level inference-based shared mentalizing representations interact with lower-level embodied/simulation-based shared representations? We used functional neuroimaging (fMRI) and a functional connectivity approach to assess these questions during high-level inference-based mentalizing. Shared mentalizing representations in ventromedial prefrontal cortex, posterior cingulate/precuneus, and temporo-parietal junction (TPJ) all exhibited identical functional connectivity patterns during mentalizing of both self and other. Connectivity patterns were distributed across low-level embodied neural systems such as the frontal operculum/ventral premotor cortex, the anterior insula, the primary sensorimotor cortex, and the presupplementary motor area. These results demonstrate that identical neural circuits implement processes involved in mentalizing of both self and other, and that the nature of such processes may be the integration of low-level embodied processes within higher-level inference-based mentalizing.
Abstract:
This article reviews current technological developments, particularly Peer-to-Peer technologies and Distributed Data Systems, and their value to community memory projects, particularly those concerned with the preservation of the cultural, literary and administrative data of cultures which have suffered genocide or are at risk of genocide. It draws attention to the comparatively good online representation of genocide denial groups and to changes in the technological strategies of Holocaust denial and other far-right groups. It draws on the author's work in providing IT support for a UK-based Non-Governmental Organization supporting survivors of genocide in Rwanda.
Abstract:
Decoding emotional prosody is crucial for successful social interactions, and continuous monitoring of emotional intent via prosody requires working memory. It has been proposed by Ross and others that emotional prosody cognitions in the right hemisphere are organized in an analogous fashion to propositional language functions in the left hemisphere. This study aimed to test the applicability of this model in the context of prefrontal cortex (PFC) working memory functions. BOLD response data were therefore collected during performance of two emotional working memory tasks by participants undergoing fMRI. In the prosody task, participants identified the emotion conveyed in pre-recorded sentences, and working memory load was manipulated in the style of an N-back task. In the matched lexico-semantic task, participants identified the emotion conveyed by sentence content. Block-design neuroimaging data were analyzed parametrically with SPM5. At first, working memory for emotional prosody appeared to be right-lateralized in the PFC; however, further analyses revealed that it shared much bilateral prefrontal functional neuroanatomy with working memory for lexico-semantic emotion. Supplementary separate analyses of males and females suggested that these language functions were less bilateral in females, but their inclusion did not alter the direction of laterality. It is concluded that Ross et al.'s model is not applicable to prefrontal cortex working memory functions, that the evidence that working memory in the prefrontal cortex cannot be subdivided according to material type is strengthened, and that incidental working memory demands may explain the frontal lobe involvement in emotional prosody comprehension revealed by neuroimaging studies. (c) 2007 Elsevier Inc. All rights reserved.
Abstract:
Existing research on synchronous remote working in CSCW has highlighted the troubles that can arise because actions at one site are (partially) unavailable to remote colleagues. Such ‘local action’ is routinely characterised as a nuisance, a distraction, subordinate and the like. This paper explores interconnections between ‘local action’ and ‘distributed work’ in the case of a research team virtually collocated through ‘MiMeG’. MiMeG is an e-Social Science tool that facilitates ‘distributed data sessions’ in which social scientists are able to remotely collaborate on the real-time analysis of video data. The data are visible and controllable in a shared workspace and participants are additionally connected via audio conferencing. The findings reveal that whilst the (partial) unavailability of local action is at times problematic, it is also used as a resource for coordinating work. The paper considers how local action is interactionally managed in distributed data sessions and concludes by outlining implications of the analysis for the design and study of technologies to support group-to-group collaboration.
Abstract:
Advances in hardware and software technology enable us to collect, store and distribute large quantities of data on a very large scale. Automatically discovering and extracting hidden knowledge in the form of patterns from these large data volumes is known as data mining. Data mining technology is not only a part of business intelligence, but is also used in many other application areas such as research, marketing and financial analytics. For example, medical scientists can use patterns extracted from historic patient data to determine whether a new patient is likely to respond positively to a particular treatment; marketing analysts can use patterns extracted from customer data for future advertisement campaigns; and finance experts have an interest in patterns that forecast the development of certain stock market shares for investment recommendations. However, extracting knowledge in the form of patterns from massive data volumes imposes a number of computational challenges in terms of processing time, memory, bandwidth and power consumption. These challenges have led to the development of parallel and distributed data analysis approaches and the utilisation of Grid and Cloud computing. This chapter gives an overview of parallel and distributed computing approaches and how they can be used to scale up data mining to large datasets.
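As a hedged illustration of the data-parallel pattern such approaches rely on, the Java sketch below performs one k-means update step over one-dimensional points: each parallel partition contributes only a (sum, count) pair per cluster, so the combine step stays cheap regardless of data volume. The names and the 1-D simplification are ours, not from the chapter.

    import java.util.*;

    public class ParallelKMeansStep {
        // One data-parallel k-means iteration over 1-D points: partitions
        // accumulate per-cluster partial sums and counts, which are then merged.
        static double[] step(double[] points, double[] centroids) {
            int k = centroids.length;
            double[][] acc = Arrays.stream(points).parallel()
                .collect(() -> new double[k][2],          // per-partition accumulator
                         (a, p) -> {
                             int best = 0;                // nearest centroid for p
                             for (int c = 1; c < k; c++)
                                 if (Math.abs(p - centroids[c]) < Math.abs(p - centroids[best]))
                                     best = c;
                             a[best][0] += p;             // partial sum
                             a[best][1] += 1;             // partial count
                         },
                         (a, b) -> {                      // cheap combine: 2k numbers
                             for (int c = 0; c < k; c++) {
                                 a[c][0] += b[c][0];
                                 a[c][1] += b[c][1];
                             }
                         });
            double[] next = new double[k];
            for (int c = 0; c < k; c++)
                next[c] = acc[c][1] > 0 ? acc[c][0] / acc[c][1] : centroids[c];
            return next;
        }

        public static void main(String[] args) {
            double[] pts = {1.0, 1.2, 0.8, 9.9, 10.1, 10.0};
            System.out.println(Arrays.toString(step(pts, new double[]{0.0, 5.0})));
            // prints roughly [1.0, 10.0]
        }
    }

The same sum-and-count summaries are what a Grid or Cloud deployment would ship between nodes, which is why this style of partial aggregation scales to datasets that exceed a single machine's memory.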