36 results for IGBTs in parallel


Relevance: 90.00%

Abstract:

A novel two-step paradigm was used to investigate the parallel programming of consecutive, stimulus-elicited ('reflexive') and endogenous ('voluntary') saccades. The mean latency of voluntary saccades, made following the first reflexive saccades in two-step conditions, was significantly reduced compared to that of voluntary saccades made in the single-step control trials. The latency of the first reflexive saccades was modulated by the requirement to make a second saccade: first saccade latency increased when a second voluntary saccade was required in the opposite direction to the first saccade, and decreased when a second saccade was required in the same direction as the first reflexive saccade. A second experiment confirmed the basic effect and also showed that a second reflexive saccade may be programmed in parallel with a first voluntary saccade. The results support the view that voluntary and reflexive saccades can be programmed in parallel on a common motor map. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

Research shows that poor indoor air quality (IAQ) in school buildings can reduce students' performance, as assessed by short-term computer-based tests, whereas good air quality in classrooms can enhance children's concentration and teachers' productivity. Investigating air quality in classrooms helps us characterise pollutant levels and implement corrective measures. Outdoor pollution, ventilation equipment, furnishings, and human activities all affect IAQ. In school classrooms, occupancy density is high (1.8–2.4 m²/person) compared to offices (10 m²/person). Ventilation systems expend energy, and there is a trend to save energy by reducing ventilation rates; we therefore need to establish the minimum acceptable level of fresh air required for the health of the occupants. This paper describes a project that aims to investigate the effect of IAQ and ventilation rates on pupils' performance and health using psychological tests. The aim is to recommend suitable ventilation rates for classrooms and to examine the suitability of existing air quality guidelines for classrooms. Air quality, ventilation rates and pupils' performance in classrooms will be evaluated in parallel measurements. In addition, Visual Analogue Scales will be used to assess subjective perception of the classroom environment and SBS symptoms. Pupil performance will be measured with Computerised Assessment Tests (CAT) and pen-and-paper performance tasks, while physical parameters of the classroom environment will be recorded using an advanced data-logging system. A total of 20 primary schools in the Reading area are expected to participate in the present investigation, and the pupils taking part will be aged 9–11 years. On completion of the project, recommendations for suitable ventilation rates for schools will be formulated based on the overall data. (C) 2006 Elsevier Ltd. All rights reserved.

Relevance: 90.00%

Abstract:

The work reported in this paper proposes 'Intelligent Agents', a Swarm-Array computing approach that applies autonomic computing concepts to parallel computing systems in order to build reliable systems for space applications. Swarm-array computing is a novel computing approach inspired by swarm robotics, considered a path to achieving autonomy in parallel computing systems. In the intelligent agent approach, a task to be executed on parallel computing cores is treated as a swarm of autonomous agents. A task is carried to a computing core by carrier agents and can be seamlessly transferred between cores in the event of a predicted failure, thereby achieving the self-* objectives of autonomic computing. The approach is validated on a multi-agent simulator.
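The carrier-agent behaviour described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the class names, the random failure signal and the migration policy below are all hypothetical stand-ins for the multi-agent simulator.

```python
import random

class Core:
    """A compute core with a fault-prediction flag (hypothetical model)."""
    def __init__(self, cid):
        self.cid = cid
        self.predicted_failure = False

class CarrierAgent:
    """Carries one task and migrates it off a core whose failure is predicted."""
    def __init__(self, task, core):
        self.task = task
        self.core = core

    def step(self, healthy_cores):
        if self.core.predicted_failure:
            # self-healing: seamlessly move the task to a healthy core
            self.core = random.choice(healthy_cores)

def simulate(num_cores=4, num_tasks=6, steps=3, seed=1):
    random.seed(seed)
    cores = [Core(i) for i in range(num_cores)]
    agents = [CarrierAgent(f"task-{i}", cores[i % num_cores])
              for i in range(num_tasks)]
    for _ in range(steps):
        cores[0].predicted_failure = True        # fault predicted on core 0
        healthy = [c for c in cores if not c.predicted_failure]
        for a in agents:
            a.step(healthy)
    # True if every task ended up on a core not predicted to fail
    return all(not a.core.predicted_failure for a in agents)
```

After the simulated fault prediction, no task remains on the failing core, which is the self-* property the abstract claims.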

Relevance: 90.00%

Abstract:

How can a bridge be built between autonomic computing approaches and parallel computing systems? How can autonomic computing approaches be extended towards building reliable systems? How can existing technologies be merged to provide a solution for self-managing systems? The work reported in this paper aims to answer these questions by proposing Swarm-Array Computing, a novel technique inspired by swarm robotics and built on the foundations of the autonomic and parallel computing paradigms. Two approaches, based on intelligent cores and intelligent agents, are proposed to achieve autonomy in parallel computing systems. The feasibility of the proposed approaches is validated on a multi-agent simulator.

Relevance: 90.00%

Abstract:

This paper addresses the need for accurate predictions of the fault inflow, i.e. the number of faults found in consecutive project weeks, in highly iterative processes. In such processes, in contrast to waterfall-like processes, fault repair and the development of new features run almost in parallel. Given accurate predictions of fault inflow, managers could dynamically re-allocate resources between these tasks more adequately. Furthermore, managers could react with process improvements when the expected fault inflow is higher than desired. This study suggests software reliability growth models (SRGMs) for predicting fault inflow. Although these models were originally developed for traditional processes, their performance in highly iterative processes is investigated here. Additionally, a simple linear model is developed and compared to the SRGMs. The paper provides results from applying these models to fault data from three different industrial projects. One key finding of this study is that some SRGMs are applicable for predicting fault inflow in highly iterative processes. Moreover, the results show that the simple linear model is a valid alternative to the SRGMs, as it provides reasonably accurate predictions and performs better in many cases.
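The abstract does not specify the form of its simple linear model. A minimal sketch, assuming an ordinary least-squares straight line fitted to weekly fault counts and extrapolated one week ahead (the data below is invented for illustration):

```python
def fit_line(weeks, faults):
    """Ordinary least-squares fit of faults ~ a + b * week."""
    n = len(weeks)
    mx = sum(weeks) / n
    my = sum(faults) / n
    b = sum((x - mx) * (y - my) for x, y in zip(weeks, faults)) / \
        sum((x - mx) ** 2 for x in weeks)
    a = my - b * mx
    return a, b

def predict_next(weeks, faults):
    """Predict the fault inflow for the week after the last observed one."""
    a, b = fit_line(weeks, faults)
    return a + b * (weeks[-1] + 1)

# hypothetical declining fault inflow over five project weeks
weeks = [1, 2, 3, 4, 5]
faults = [30, 26, 21, 18, 14]
# predict_next(weeks, faults) ≈ 9.8 faults expected in week 6
```

Such a one-step-ahead forecast is what would let a manager shift effort between fault repair and feature development before the week begins.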

Relevance: 90.00%

Abstract:

This paper addresses the numerical solution of the rendering equation in realistic image creation. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model determines the kernel of the equation under consideration. Monte Carlo methods are now widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation (systematic) error, obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. The domain partitioning allows easy parallel realisation and improves the convergence of the Monte Carlo method. The high performance and Grid computing of the corresponding Monte Carlo scheme are discussed.
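To make the stratified-hemisphere idea concrete, here is a minimal sketch that stratifies the unit square and maps each jittered sample to a direction uniform in solid angle over the hemisphere. It uses the standard mapping cos θ = u, φ = 2πv rather than the paper's orthogonal-spherical-triangle partition, so it illustrates the principle of stratified hemispherical integration, not the authors' specific scheme.

```python
import math
import random

def stratified_hemisphere_estimate(f, n_u=16, n_v=16, seed=0):
    """Stratified Monte Carlo estimate of the integral of f over the
    unit hemisphere: jitter one sample per stratum of the unit square,
    map it to a direction uniform in solid angle, and average."""
    rng = random.Random(seed)
    total = 0.0
    for i in range(n_u):
        for j in range(n_v):
            u = (i + rng.random()) / n_u     # jittered stratum sample
            v = (j + rng.random()) / n_v
            cos_t = u                        # uniform in solid angle
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * v
            d = (sin_t * math.cos(phi), sin_t * math.sin(phi), cos_t)
            total += f(d)
    n = n_u * n_v
    return (2.0 * math.pi / n) * total       # divide by the pdf 1/(2*pi)

# sanity check: the integral of cos(theta) over the hemisphere is pi
est = stratified_hemisphere_estimate(lambda d: d[2])
```

Because each stratum is sampled exactly once, the variance is far lower than plain Monte Carlo with the same sample count, which is the convergence improvement the abstract refers to.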

Relevance: 90.00%

Abstract:

The sampling of a given solid angle is a fundamental operation in realistic image synthesis, where the rendering equation describing light propagation in closed domains is solved. Monte Carlo methods for solving the rendering equation sample the solid angle subtended by the unit hemisphere or unit sphere in order to perform the numerical integration of the rendering equation. In this work we consider the problem of generating uniformly distributed random samples over the hemisphere and sphere. Our aim is to construct and study a parallel sampling scheme for the hemisphere and sphere. First we apply the symmetry property to partition the hemisphere and sphere: the solid angle subtended by a hemisphere is divided into a number of equal sub-domains, each representing the solid angle subtended by an orthogonal spherical triangle with fixed vertices and computable parameters. We then introduce two new algorithms for sampling orthogonal spherical triangles. Both algorithms are based on a transformation of the unit square and, like Arvo's algorithm for sampling an arbitrary spherical triangle, accommodate stratified sampling. We derive the necessary transformations for the algorithms. The first algorithm generates a sample by mapping the unit square onto the orthogonal spherical triangle; the second directly computes the unit radius vector of a sampling point inside the orthogonal spherical triangle. The sampling of the total hemisphere and sphere is performed in parallel for all sub-domains simultaneously by using the symmetry property of the partitioning. The applicability of the corresponding parallel sampling scheme to Monte Carlo and Quasi-Monte Carlo solution of the rendering equation is discussed.

Relevance: 90.00%

Abstract:

As consumers demand more functionality from their electronic devices and manufacturers supply that demand, electrical power and clock requirements tend to increase; fortunately, reassessing the system architecture can lead to suitable reductions. To maintain low clock rates and thereby reduce electrical power, this paper presents a parallel convolutional coder for the transmit side of many wireless consumer devices. The coder accepts a parallel data input and directly computes punctured convolutional codes without the need for a separate puncturing operation, while the coded bits are available at the output of the coder in parallel. Because the computation is performed in parallel, the coder can be clocked seven times slower than the conventional shift-register based convolutional coder (using the DVB 7/8 rate). The presented coder is directly relevant to the design of modern low-power consumer devices.
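As background for the combined encode-and-puncture operation, the sketch below shows a serial reference model of it (the paper's contribution is the hardware-parallel formulation, which is not reproduced here). The generator polynomials are the standard DVB rate-1/2 mother code (171, 133 octal, K = 7); the 7/8 puncturing masks follow a commonly cited DVB pattern and should be verified against the DVB specification before use.

```python
G1, G2 = 0o171, 0o133              # DVB mother code generators, K = 7
PUNCT_X = [1, 0, 0, 0, 1, 0, 1]    # keep/drop mask, X branch (assumed)
PUNCT_Y = [1, 1, 1, 1, 0, 1, 0]    # keep/drop mask, Y branch (assumed)

def parity(x):
    """Parity of the set bits of x (modulo-2 sum of register taps)."""
    return bin(x).count("1") & 1

def encode_punctured(bits):
    """Rate-1/2 convolutional encoding with 7/8 puncturing applied
    on the fly, so no separate puncturing stage is needed."""
    state = 0
    out = []
    for i, b in enumerate(bits):
        state = ((state << 1) | b) & 0x7F      # 7-bit shift register
        x, y = parity(state & G1), parity(state & G2)
        if PUNCT_X[i % 7]:
            out.append(x)
        if PUNCT_Y[i % 7]:
            out.append(y)
    return out
```

Per period of 7 input bits, the masks keep 3 X bits and 5 Y bits, i.e. 8 coded bits, giving the 7/8 rate; a parallel coder computes all 8 outputs of a period in one slow clock cycle instead of 7 fast ones.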


Relevance: 90.00%

Abstract:

Recent developments in instrumentation and facilities for sample preparation have resulted in sharply increased interest in the application of neutron diffraction. Of particular interest are combined approaches in which neutron methods are used in parallel with X-ray techniques. Two distinct examples are given. The first is a single-crystal study of an A-DNA structure formed by the oligonucleotide d(AGGGGCCCCT)2, showing evidence of unusual base protonation that is not visible by X-ray crystallography. The second is a solution scattering study of the interaction of a bisacridine derivative with the human telomeric sequence d(AGGGTTAGGGTTAGGGTTAGGG) and illustrates the differing effects of NaCl and KCl on this interaction.

Relevance: 90.00%

Abstract:

A unique cell histogram architecture is proposed that processes k data items in parallel to compute 2q histogram bins per time step. An array of m/2q cells computes an m-bin histogram with a speed-up factor of k; k ⩾ 2 makes it faster than current dual-ported memory implementations. Furthermore, simple mechanisms for conflict-free storage of the histogram bins in an external memory array are discussed.
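A rough software analogue of the cell idea, under the assumption that each cell owns a contiguous range of bins so that cells never write to the same counter: this models only the bin partitioning, not the hardware's parallel timing or its conflict-resolution logic.

```python
def parallel_histogram(data, m, bins_per_cell, k):
    """Sketch: m / bins_per_cell cells each own a contiguous range of
    bins, so updates to different cells are conflict-free by construction.
    Items are consumed k at a time, mimicking k items per time step."""
    cells = [[0] * bins_per_cell for _ in range(m // bins_per_cell)]
    for step in range(0, len(data), k):
        for item in data[step:step + k]:       # the k items of one step
            cell, offset = divmod(item, bins_per_cell)
            cells[cell][offset] += 1
    return [count for cell in cells for count in cell]   # m bins
```

In hardware, the k items of a step would update their cells simultaneously; the sequential inner loop here stands in for that.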

Relevance: 90.00%

Abstract:

Faecal microbial changes associated with ageing include reduced bifidobacteria numbers, and these changes coincide with an increased risk of disease development. Prebiotics have been observed to increase bifidobacteria numbers in humans. The present study aimed to determine whether prebiotic galacto-oligosaccharides (GOS) could benefit a population of men and women aged 50 years and above, through modulation of faecal microbiota, fermentation characteristics and faecal water genotoxicity. A total of thirty-seven volunteers completed this randomised, double-blind, placebo-controlled crossover trial. The treatments, a juice containing 4 g GOS and a placebo, were consumed twice daily for 3 weeks, each preceded by a 3-week washout period. To study the effect of GOS on different large bowel regions, three-stage continuous culture systems were run in parallel using faecal inocula from three volunteers. Faecal samples were microbially enumerated by quantitative PCR. In vivo, bifidobacteria numbers were significantly higher following GOS intervention than post-placebo (P = 0·02). Accordingly, GOS supplementation had a bifidogenic effect in all vessels of the in vitro systems. Furthermore, in vessel 1 (corresponding to the proximal colon), GOS fermentation led to more lactobacilli and increased butyrate. No changes in faecal water genotoxicity were observed. To conclude, GOS supplementation significantly increased bifidobacteria numbers in vivo and in vitro. Increased butyrate production and elevated bifidobacteria numbers may constitute beneficial modulation of the gut microbiota in a maturing population.

Relevance: 90.00%

Abstract:

The Distributed Rule Induction (DRI) project at the University of Portsmouth is concerned with distributed data mining algorithms for automatically generating rules of all kinds. In this paper we present a system architecture and its implementation for inducing modular classification rules in parallel in a local area network using a distributed blackboard system. We present initial results of a prototype implementation based on the Prism algorithm.
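For readers unfamiliar with Prism, the core of the algorithm is inducing one rule at a time by greedily adding the attribute-value test with the highest conditional probability of the target class. The sketch below covers only this single-rule step on a tiny invented dataset (instances as dicts with a 'class' key); the DRI system induces complete rule sets and distributes the work across a blackboard, which is not shown.

```python
def induce_rule(instances, target):
    """Greedily build one Prism-style rule (a dict of attribute -> value
    tests) that covers only instances of the target class."""
    rule = {}
    subset = instances
    while any(i["class"] != target for i in subset):
        best, best_p = None, -1.0
        for attr in subset[0]:
            if attr == "class" or attr in rule:
                continue
            for val in {i[attr] for i in subset}:
                covered = [i for i in subset if i[attr] == val]
                # probability of the target class given this test
                p = sum(i["class"] == target for i in covered) / len(covered)
                if p > best_p:
                    best, best_p = (attr, val), p
        if best is None:
            break                       # no test left to refine the rule
        rule[best[0]] = best[1]
        subset = [i for i in subset if i[best[0]] == best[1]]
    return rule

# invented toy data: play outside only when sunny and not windy
data = [
    {"outlook": "sunny", "windy": "n", "class": "play"},
    {"outlook": "sunny", "windy": "y", "class": "stop"},
    {"outlook": "rain",  "windy": "n", "class": "stop"},
]
```

Because each rule is induced independently for its class, rules for different classes can be induced in parallel, which is what makes Prism a natural fit for the distributed blackboard architecture.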

Relevance: 90.00%

Abstract:

The nature of armed conflict has changed dramatically in recent decades. In particular, it is increasingly the case that hostilities now occur alongside ‘everyday’ situations. This has led to a pressing need to determine when a ‘conduct of hostilities’ model (governed by international humanitarian law—IHL) applies and when a ‘law enforcement’ model (governed by international human rights law—IHRL) applies. This in turn raises the question of whether these two legal regimes are incompatible or whether they might be applied in parallel. It is on this question that the current paper focuses, examining it at the level of principle. Whilst most accounts of the principles underlying these two areas of law focus on humanitarian considerations, few have compared the role played by necessity in each. This paper seeks to address this omission. It demonstrates that considerations of necessity play a prominent role in both IHL and IHRL, albeit with differing consequences. It then applies this necessity-based analysis to suggest a principled basis for rationalising the relationship between IHL and IHRL, demonstrating how this approach would operate in practice. It is shown that, by emphasising the role of necessity in IHL and IHRL, an approach can be adopted that reconciles the two in a manner that is sympathetic to their object and purpose.

Relevance: 90.00%

Abstract:

Exascale systems are the next frontier in high-performance computing and are expected to deliver performance of the order of 10^18 operations per second using massive multicore processors. Very large- and extreme-scale parallel systems pose critical algorithmic challenges, especially related to concurrency, locality and the need to avoid global communication patterns. This work investigates a novel protocol for dynamic group communication that can be used to remove the global communication requirement and to reduce the communication cost in parallel formulations of iterative data mining algorithms. The protocol is used to provide a communication-efficient parallel formulation of the k-means algorithm for cluster analysis. The approach is based on a collective communication operation for dynamic groups of processes and exploits non-uniform data distributions, which can either be found in real-world distributed applications or induced by means of multidimensional binary search trees. Analysis of the proposed dynamic group communication protocol has shown that it does not introduce significant communication overhead. The parallel clustering algorithm has also been extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested on a parallel computing system with 64 processors and in simulations with 1024 processing elements.
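The communication pattern that parallel k-means needs to optimise can be sketched as follows: each process reduces its data partition to per-cluster (sum, count) pairs, so only k small tuples per process are exchanged rather than the raw points. This is a plain reduction; the paper's dynamic-group protocol would further restrict the exchange to the processes whose local data can still affect a given centroid, which this sketch does not model.

```python
def assign(point, centroids):
    """Index of the nearest centroid by squared Euclidean distance."""
    return min(range(len(centroids)),
               key=lambda c: sum((p - q) ** 2
                                 for p, q in zip(point, centroids[c])))

def kmeans_step(partitions, centroids):
    """One k-means iteration over data split into per-process partitions.
    Only per-cluster sums and counts cross process boundaries."""
    k, dim = len(centroids), len(centroids[0])
    sums = [[0.0] * dim for _ in range(k)]
    counts = [0] * k
    for part in partitions:            # each loop body = one "process"
        for point in part:
            c = assign(point, centroids)
            counts[c] += 1
            for d in range(dim):
                sums[c][d] += point[d]
    # global reduction of (sum, count) pairs yields the new centroids
    return [[s / counts[c] if counts[c] else centroids[c][d]
             for d, s in enumerate(sums[c])] for c in range(k)]
```

With k clusters and p processes, the exchanged volume is O(k * p * dim) per iteration, independent of the number of data points, which is why restricting even this exchange to dynamic groups matters only at extreme scale.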