74 results for Parallel or distributed processing


Relevance:

100.00%

Publisher:

Abstract:

Traditionally, applications and tools supporting collaborative computing have been designed only with personal computers in mind and support a limited range of computing and network platforms. These applications are therefore not well equipped to deal with network heterogeneity and, in particular, do not cope well with dynamic network topologies. Progress in this area must be made if we are to fulfil the needs of users and support the diversity, mobility, and portability that are likely to characterise group work in the future. This paper describes a groupware platform called Coco that is designed to support collaboration in a heterogeneous network environment. The work demonstrates that progress in the development of generic supporting groupware is achievable, even in the context of heterogeneous and dynamic networks. In particular, it demonstrates the progress made in the development of an underlying communications infrastructure, building on peer-to-peer concepts and topologies to improve scalability and robustness.

Relevance:

100.00%

Publisher:

Abstract:

Synchronous collaborative systems allow geographically distributed users to form a virtual work environment, enabling cooperation between peers and enriching human interaction. The technology facilitating this interaction has been studied for several years and various solutions are available at present. In this paper, we discuss our experiences with one such widely adopted technology, namely the Access Grid [1]. We describe our experiences with using this technology, identify key problem areas, and propose our solution to tackle these issues appropriately. Moreover, we propose the integration of Access Grid with an Application Sharing tool developed by the authors. Our approach allows these integrated tools to utilise the enhanced features provided by our underlying dynamic transport layer.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the development of an autonomous surveillance UAV that competed in the Ministry of Defence Grand Challenge 2008. In order to focus on higher-level mission control, the UAV is built upon an existing commercially available stabilised R/C helicopter platform. The hardware architecture is developed to allow for non-invasive integration with the existing stabilised platform, and to enable distributed processing of closed-loop control and mission goals. The resulting control system proved highly successful and was capable of flying in 40-knot gusts. The software and safety architectures were key to the success of the research and also hold the potential for use in the development of more complex systems comprising multiple UAVs.

Relevance:

100.00%

Publisher:

Abstract:

Since its introduction in 1993, the Message Passing Interface (MPI) has become a de facto standard for writing High Performance Computing (HPC) applications on clusters and Massively Parallel Processors (MPPs). The recent emergence of multi-core processor systems presents a new challenge for established parallel programming paradigms, including those based on MPI. This paper presents a new Java messaging system called MPJ Express. Using this system, we exploit multiple levels of parallelism - messaging and threading - to improve application performance on multi-core processors. We refer to our approach as nested parallelism. This MPI-like Java library can support nested parallelism by using Java or Java OpenMP (JOMP) threads within an MPJ Express process. The practicality of this approach is assessed by porting to Java a massively parallel structure formation code from cosmology called Gadget-2. We introduce nested parallelism in the Java version of the simulation code and report good speed-ups. To the best of our knowledge, this is the first time this kind of hybrid parallelism has been demonstrated in a high performance Java application.
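To make the nested-parallelism idea concrete, the following minimal sketch combines MPJ Express-style message passing between processes with plain Java threads inside each process. It assumes the mpiJava 1.2-style API (mpi.MPI, MPI.COMM_WORLD) that MPJ Express implements, uses a java.util.concurrent thread pool instead of JOMP, and performs only a placeholder computation; it is not the Gadget-2 code described in the abstract.

```java
// Sketch of nested parallelism: message passing between processes plus
// Java threads inside each process. Assumes the mpiJava 1.2-style API
// (mpi.MPI, MPI.COMM_WORLD) provided by MPJ Express; the work itself is
// a placeholder, not the Gadget-2 force computation.
import mpi.MPI;
import java.util.concurrent.*;

public class NestedParallelSketch {
    public static void main(String[] args) throws Exception {
        MPI.Init(args);                               // message-passing level
        int rank = MPI.COMM_WORLD.Rank();
        int size = MPI.COMM_WORLD.Size();

        int nThreads = Runtime.getRuntime().availableProcessors();
        ExecutorService pool = Executors.newFixedThreadPool(nThreads);

        // Thread level: split this process's share of the work across cores.
        double[] partial = new double[nThreads];
        CountDownLatch done = new CountDownLatch(nThreads);
        for (int t = 0; t < nThreads; t++) {
            final int tid = t;
            pool.submit(() -> {
                partial[tid] = doLocalWork(rank, tid); // placeholder kernel
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();

        double local = 0.0;
        for (double p : partial) local += p;

        // Combine per-process results with an MPI-style reduction.
        double[] in = { local };
        double[] out = new double[1];
        MPI.COMM_WORLD.Reduce(in, 0, out, 0, 1, MPI.DOUBLE, MPI.SUM, 0);
        if (rank == 0) System.out.println("global sum = " + out[0]);

        MPI.Finalize();
    }

    private static double doLocalWork(int rank, int tid) {
        return rank + 0.1 * tid;                      // stand-in for real computation
    }
}
```

The point of this structure is that communication between nodes stays at the process (message-passing) level, while the cores within a node are used through shared-memory threads.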

Relevance:

100.00%

Publisher:

Abstract:

The K-Means algorithm for cluster analysis is one of the most influential and popular data mining methods. Its straightforward parallel formulation is well suited for distributed memory systems with reliable interconnection networks, such as massively parallel processors and clusters of workstations. However, in large-scale geographically distributed systems the straightforward parallel algorithm can be rendered useless by a single communication failure or high latency in communication paths. The lack of scalable and fault-tolerant global communication and synchronisation methods in large-scale systems has hindered the adoption of the K-Means algorithm for applications in large networked systems such as wireless sensor networks, peer-to-peer systems and mobile ad hoc networks. This work proposes a fully distributed K-Means algorithm (EpidemicK-Means) which does not require global communication and is intrinsically fault tolerant. The proposed distributed K-Means algorithm provides a clustering solution which can approximate the solution of an ideal centralised algorithm over the aggregated data as closely as desired. A comparative performance analysis is carried out against state-of-the-art sampling methods and shows that the proposed method overcomes the limitations of the sampling-based approaches for skewed cluster distributions. The experimental analysis confirms that the proposed algorithm is very accurate and fault tolerant under unreliable network conditions (message loss and node failures) and is suitable for asynchronous networks of very large and extreme scale.
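The abstract does not spell out the protocol, but epidemic aggregation schemes of this kind are typically built on gossip-based averaging. The sketch below (hypothetical names, simulated in a single process) only illustrates that primitive: each node keeps a (weighted sum, count) pair for a centroid coordinate and repeatedly averages it with a random peer, so every node converges to the same global estimate without any global communication or synchronisation step.

```java
// Illustrative sketch (not the paper's EpidemicK-Means code): push-pull gossip
// averaging. Pairwise averaging preserves the global totals, so each node's
// ratio sum/count converges to the globally aggregated value.
import java.util.Random;

public class GossipAveragingSketch {
    public static void main(String[] args) {
        int nodes = 50;
        Random rnd = new Random(42);

        // Each node's local estimate of one centroid coordinate as (sum, count).
        double[] sum = new double[nodes];
        double[] cnt = new double[nodes];
        for (int i = 0; i < nodes; i++) {
            cnt[i] = 1 + rnd.nextInt(20);            // local data sizes differ
            sum[i] = cnt[i] * (5.0 + rnd.nextGaussian());
        }

        // Gossip rounds: each node averages its state with one random peer.
        for (int round = 0; round < 30; round++) {
            for (int i = 0; i < nodes; i++) {
                int j = rnd.nextInt(nodes);
                double s = (sum[i] + sum[j]) / 2;
                double c = (cnt[i] + cnt[j]) / 2;
                sum[i] = s; sum[j] = s;
                cnt[i] = c; cnt[j] = c;
            }
        }

        // Every node can now form (nearly) the same centroid estimate locally.
        System.out.println("node 0 estimate: " + sum[0] / cnt[0]);
        System.out.println("node 1 estimate: " + sum[1] / cnt[1]);
    }
}
```

In a fully distributed K-Means of the kind the abstract describes, each node would presumably alternate a local assignment step over its own data with gossip rounds like this one for every cluster's sums and counts; those details follow the paper itself rather than this sketch.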

Relevance:

100.00%

Publisher:

Abstract:

ERPs were elicited to (1) words, (2) pseudowords derived from these words, and (3) nonwords with no lexical neighbors, in a task involving listening to immediately repeated auditory stimuli. There was a significant early (P200) effect of phonotactic probability in the first auditory presentation, which discriminated words and pseudowords from nonwords, and a significant, somewhat later (N400) effect of lexicality, which discriminated words from pseudowords and nonwords. There was no reliable effect of lexicality in the ERPs to the second auditory presentation. We conclude that early sublexical phonological processing differed according to the phonotactic probability of the stimuli, and that lexically based redintegration occurred for words but not for pseudowords or nonwords. Thus, in online word recognition and immediate retrieval, phonological and/or sublexical processing plays a more important role than lexical-level redintegration.

Relevance:

100.00%

Publisher:

Abstract:

Past studies have revealed that encountering negative events interferes with cognitive processing of subsequent stimuli. The present study investigates whether negative events affect semantic and perceptual processing differently. Presentation of negative pictures produced slower reaction times than neutral or positive pictures in tasks that require semantic processing, such as natural or man-made judgments about drawings of objects, commonness judgments about objects, and categorical judgments about pairs of words. In contrast, negative picture presentation did not slow down judgments in subsequent perceptual processing (e.g., color judgments about words, size judgments about objects). The subjective arousal level of negative pictures did not modulate the interference effects on semantic or perceptual processing. These findings indicate that encountering negative emotional events interferes with semantic processing of subsequent stimuli more strongly than perceptual processing, and that not all types of subsequent cognitive processing are impaired by negative events.

Relevance:

100.00%

Publisher:

Abstract:

The use of virtualization in high-performance computing (HPC) has been suggested as a means to provide tailored services and added functionality that many users expect from full-featured Linux cluster environments. The use of virtual machines in HPC can offer several benefits, but maintaining performance is a crucial factor. In some instances the performance criteria are placed above the isolation properties. This selective relaxation of isolation for performance is an important characteristic when considering resilience for HPC environments that employ virtualization. In this paper we consider some of the factors associated with balancing performance and isolation in configurations that employ virtual machines. In this context, we propose a classification of errors based on the concept of “error zones”, as well as a detailed analysis of the trade-offs between resilience and performance based on the level of isolation provided by virtualization solutions. Finally, a set of experiments is performed using different virtualization solutions to elucidate the discussion.

Relevance:

100.00%

Publisher:

Abstract:

Milk is the largest source of iodine in UK diets, and an earlier study showed that organic summer milk had a significantly lower iodine concentration than conventional milk. There are no comparable studies of winter milk, or of the effects of milk fat class or heat-processing method. Two retail studies with winter milk are reported. Study 1 showed no effect of fat class, but organic milk was 32.2% lower in iodine than conventional milk (404 vs. 595 μg/L; P < 0.001). Study 2 found no difference between conventional and Channel Island milk, but organic milk contained 35.5% less iodine than conventional milk (306 vs. 474 μg/L; P < 0.001). UHT and branded organic milk also had lower iodine concentrations than conventional milk (331 μg/L, P < 0.001, and 268 μg/L, P < 0.0001, respectively). The results indicate that replacement of conventional milk by organic or UHT milk will increase the risk of sub-optimal iodine status, especially for pregnant/lactating women.

Relevance:

100.00%

Publisher:

Abstract:

We present an integrative review of the development of child anxiety, drawing on a number of strands of research. Family aggregation and genetic studies indicate raised vulnerability to anxiety in the offspring of adults with the disorder (e.g. the temperamental style of behavioural inhibition, or information processing biases). Environmental factors are also important; these include adverse life events and exposure to negative information or modelling. Parents are likely to be key, although not unique, sources of such influences, particularly if they are anxious themselves. Some parenting behaviours associated with child anxiety, such as overprotection, may be elicited by child characteristics, especially in the context of parental anxiety, and these may serve to maintain child disorder. Emerging evidence emphasizes the importance of taking the nature of child and parental anxiety into account, of constructing assessments and interventions that are disorder specific, and of considering bidirectional influences.

Relevance:

100.00%

Publisher:

Abstract:

The aim of this study was to investigate the widely held, but largely untested, view that implicit memory (repetition priming) reflects an automatic form of retrieval. Specifically, in Experiment 1 we explored whether a secondary task (syllable monitoring), performed during retrieval, would disrupt performance on explicit (cued recall) and implicit (stem completion) memory tasks equally. Surprisingly, despite substantial memory and secondary task costs to cued recall when performed with the syllable-monitoring task, the same manipulation had no effect on stem completion priming or on secondary task performance. In Experiment 2 we demonstrated that even when using a particularly demanding version of the stem completion task that incurred secondary task costs, the corresponding disruption to implicit memory performance was minimal. Collectively, the results are consistent with the view that implicit memory retrieval requires little or no processing capacity and does not appear to be susceptible to the effects of dividing attention at retrieval.

Relevance:

100.00%

Publisher:

Abstract:

Fully connected cubic networks (FCCNs) are a class of newly proposed hierarchical interconnection networks for multicomputer systems, which enjoy the strengths of constant node degree and good expandability. The shortest path routing in FCCNs is an open problem. In this paper, we present an oblivious routing algorithm for an n-level FCCN with N = 8^n nodes, and prove that this algorithm creates a shortest path from the source to the destination. At the cost of an O(N)-parallel-step off-line preprocessing phase and a list of size N stored at each node, the proposed algorithm is carried out at each related node in O(n) time. In some cases the proposed algorithm is superior to the one proposed by Chang and Wang in terms of the length of the routing path. This justifies the utility of our routing strategy.
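The routing algorithm itself is not reproduced in the abstract, so the following is only a hypothetical helper that makes the N = 8^n node count concrete: a node index in an n-level FCCN can be written as n base-8 digits, one per level of the hierarchy, with digit 0 indexing the node within its innermost 8-node cube.

```java
// Hypothetical helper, not the paper's routing algorithm: it only illustrates
// the N = 8^n addressing by splitting a node index into n octal digits,
// one digit per FCCN level.
public class FccnAddressSketch {
    static int[] toLevelDigits(long nodeId, int levels) {
        int[] digit = new int[levels];
        for (int l = 0; l < levels; l++) {
            digit[l] = (int) (nodeId & 7);   // base-8 digit for this level
            nodeId >>= 3;
        }
        return digit;
    }

    public static void main(String[] args) {
        int n = 3;                            // 3-level FCCN: 8^3 = 512 nodes
        long node = 345;
        int[] d = toLevelDigits(node, n);
        System.out.printf("node %d -> level digits %d, %d, %d%n",
                          node, d[0], d[1], d[2]);
    }
}
```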

Relevance:

100.00%

Publisher:

Abstract:

The objective of a Visual Telepresence System is to provide the operator with a high-fidelity image from a remote stereo camera pair linked to a pan/tilt device, such that the operator may reorient the camera position by use of head movement. Systems such as these, which utilise virtual reality style helmet mounted displays, have a number of limitations. The geometry of the camera positions and of the displays is generally fixed and is most suitable only for viewing elements of a scene at a particular distance. To address such limitations, a prototype system has been developed in which the geometry of the displays and cameras is dynamically controlled by the eye movement of the operator. This paper explores why it is necessary to actively adjust the display system as well as the cameras, and justifies the use of mechanical adjustment of the displays as an alternative to adjustment by electronic or image processing methods. The electronic and mechanical design is described, including optical arrangements and control algorithms. The performance and accuracy of the system are assessed with respect to eye movement.

Relevance:

100.00%

Publisher:

Abstract:

This study is concerned with a series of acrylate-based side-chain liquid crystalline (LC) polymers. Previous studies have shown that these LC polymers have a preference for parallel or perpendicular alignment with respect to the polymer chain, depending on the length of the coupling chain joining the mesogenic unit to the polymer backbone. On the other hand, the dielectric relaxation of these side-chain LC polymers shows a strong relaxation associated with the dynamics of the mesogenic unit. For samples with parallel alignment, it was found that the dielectric relaxation in the nematic phase is weaker and broader than in the isotropic phase. By contrast, for samples with perpendicular alignment, the isotropic-to-nematic transition reduces the broadening of the relaxation and increases the relaxation strength. These two features are more evident for samples with short coupling units, for which the observed dielectric relaxation appears to be strongly coupled with the backbone dynamics.
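The abstract refers to the strength and broadening of the relaxation without giving a formula; these quantities are commonly read off an empirical fit such as the Havriliak–Negami function, shown below only to make the terms concrete (the abstract does not state which model was actually fitted).

```latex
% Havriliak--Negami form often used to parameterise a dielectric relaxation:
% \Delta\varepsilon is the relaxation strength; the shape exponents \alpha and
% \beta describe the symmetric and asymmetric broadening of the loss peak.
\varepsilon^{*}(\omega) = \varepsilon_{\infty}
  + \frac{\Delta\varepsilon}{\left(1 + (i\omega\tau)^{\alpha}\right)^{\beta}}
```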

Relevance:

100.00%

Publisher:

Abstract:

Recently, major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, but also clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing systems, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive.

To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism. The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism for shared memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed memory systems; this approach is based on a hierarchical communication topology to address issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning the parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on very efficient feature selection: it describes a parallel algorithm for feature selection from random subsets.

Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.