957 results for Parallel system


Relevance: 30.00%

Abstract:

The furious pace of Moore's Law is driving computer architecture into a realm where the speed of light is the dominant factor in system latencies. The number of clock cycles needed to span a chip is increasing, while the number of bits that can be accessed within a clock cycle is decreasing. Hence, it is becoming more difficult to hide latency. One alternative is to reduce latency by migrating threads and data, but the overhead of existing implementations has made migration an unserviceable solution. I present an architecture, implementation, and mechanisms that reduce the overhead of migration to the point where migration becomes a viable supplement to other latency-hiding mechanisms, such as multithreading. The architecture is abstract, and presents programmers with a simple, uniform, fine-grained multithreaded parallel programming model with implicit memory management. In other words, the spatial nature and implementation details (such as the number of processors) of a parallel machine are entirely hidden from the programmer. Compiler writers are encouraged to devise programming languages for the machine that guide programmers to express their ideas in terms of objects, since objects exhibit an inherent physical locality of data and code. The machine implementation can then leverage this locality to automatically distribute data and threads across the physical machine by using a set of high-performance migration mechanisms. An implementation of this architecture can migrate a null thread in 66 cycles -- over a factor of 1000 improvement over previous work. Performance also scales well; the time required to move a typical thread is only 4 to 5 times that of a null thread. Data migration performance is similar, and scales linearly with data block size.
Since the performance of the migration mechanism is on par with that of an L2 cache, the implementation simulated in my work has no data caches and relies instead on multithreading and the migration mechanism to hide and reduce access latencies.
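As a back-of-the-envelope illustration of the figures quoted above: the 66-cycle null-thread cost and the 4-5x typical-thread factor come from the abstract, while the formulas and the linear data-migration constants below are assumptions made purely for illustration.

```python
# Hypothetical cost model illustrating the migration figures quoted above.
# Only the two constants below are taken from the abstract; the formulas
# and the data-migration parameters are illustrative assumptions.

NULL_THREAD_CYCLES = 66          # measured cost to migrate a null thread
TYPICAL_THREAD_FACTOR = 4.5      # a typical thread costs ~4-5x a null thread

def thread_migration_cycles(typical=True):
    """Estimated cycles to migrate a thread."""
    return NULL_THREAD_CYCLES * (TYPICAL_THREAD_FACTOR if typical else 1)

def data_migration_cycles(block_words, per_word=1.0, fixed=66):
    """Assumed linear model: fixed setup cost plus a per-word transfer cost."""
    return fixed + per_word * block_words

print(thread_migration_cycles(typical=False))  # 66
print(thread_migration_cycles())               # 297.0
```

Under such a model, migrating even a typical thread costs a few hundred cycles, which is why the abstract can compare migration cost to an L2 cache access rather than to a far more expensive remote memory access.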

Relevance: 30.00%

Abstract:

Identifying and diagnosing back injuries during the transition from the Taylor model to the flexible model of production organization demands a parallel intervention by prevention actors at work. This study uses three intervention models simultaneously (structured action analysis, musculoskeletal symptom questionnaires, and musculoskeletal assessment) for work activities in a packaging plant. Seventy-two (72) operative workers participated in the study (28 workers with musculoskeletal evaluation). Over an intervention period of 10 months, the physical, cognitive, and organizational components and the dynamics of the production process were evaluated in terms of musculoskeletal demands. The differences established between objective risk exposure, back-injury risk perception, appreciation, and a vertebral spine evaluation, before and after the intervention, determine the structure for a musculoskeletal risk management system. This study shows that back-injury symptoms can be reduced more efficiently among operative workers by combining the recorded measures with the adjustment between dynamics, changes at work, and the development of efficient gestures. Relevance: the results of this study can be used to prevent back injuries in workers in flexible production processes.

Relevance: 30.00%

Abstract:

Almost one hundred years ago, the Carnegie Foundation for the Advancement of Teaching authorized a study and report on the medical education of the United States and Canada, directed by Mr. Abraham Flexner, an education expert of the time. This report turned out to be one of the most important documents of the medical education revolution that took place in North America at that time and that led it to become what it is today. Almost a century later, Colombian medical education has reached an outstanding similarity to the system described in the Flexner report. The present article highlights the parallel between North America's medical education situation a hundred years ago and Colombia's current medical education situation. We present here some notions about the current education system, based on what was described in 1910, which we consider constitutes the current medical education situation in our country and possibly in many Latin American countries.

Relevance: 30.00%

Abstract:

Although climate models have been improving in accuracy and efficiency over the past few decades, it now seems that these incremental improvements may be slowing. As tera/petascale computing becomes massively parallel, our legacy codes are less suitable, and even with the increased resolution that we are now beginning to use, these models cannot represent the multiscale nature of the climate system. This paper argues that it may be time to reconsider the use of adaptive mesh refinement for weather and climate forecasting in order to achieve good scaling and representation of the wide range of spatial scales in the atmosphere and ocean. Furthermore, the challenge of introducing living organisms and human responses into climate system models is only just beginning to be tackled. We do not yet have a clear framework in which to approach the problem, but it is likely to cover such a huge number of different scales and processes that radically different methods may have to be considered. The challenges of multiscale modelling and petascale computing provide an opportunity to consider a fresh approach to numerical modelling of the climate (or Earth) system, which takes advantage of the computational fluid dynamics developments in other fields and brings new perspectives on how to incorporate Earth system processes. This paper reviews some of the current issues in climate (and, by implication, Earth) system modelling, and asks the question whether a new generation of models is needed to tackle these problems.

Relevance: 30.00%

Abstract:

We present a general Multi-Agent System framework for distributed data mining based on a Peer-to-Peer model. Agent protocols are implemented through message-based asynchronous communication. The framework adopts a dynamic load balancing policy that is particularly suitable for irregular search algorithms. A modular design allows a separation of the general-purpose system protocols and software components from the specific data mining algorithm. The experimental evaluation was carried out on a parallel frequent subgraph mining algorithm, which showed good scalability.
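A minimal sketch of the kind of receiver-initiated dynamic load balancing described above, where an idle peer pulls work from a busier one; the peer structure, names, and half-splitting rule below are illustrative assumptions, not the paper's actual protocol:

```python
import queue
import threading

# Illustrative sketch of receiver-initiated load balancing for an irregular
# search: an idle peer asks a busier peer for work, and the busy peer donates
# roughly half of its local task pool. All names here are assumptions.

class Peer:
    def __init__(self, pid, tasks):
        self.pid = pid
        self.tasks = list(tasks)      # local pool of search subproblems
        self.inbox = queue.Queue()    # asynchronous message channel (sketch)
        self.lock = threading.Lock()

    def request_work(self, donor):
        """Pull roughly half of a busier peer's tasks into our own pool."""
        with donor.lock:
            half = len(donor.tasks) // 2
            donated, donor.tasks = donor.tasks[:half], donor.tasks[half:]
        with self.lock:
            self.tasks.extend(donated)
        return len(donated)

a = Peer("a", range(10))   # busy peer with 10 subproblems
b = Peer("b", [])          # idle peer
moved = b.request_work(a)
print(moved, len(a.tasks), len(b.tasks))   # 5 5 5
```

Splitting the pool in half (rather than donating one task at a time) keeps both peers busy even when subproblem sizes are highly uneven, which is the usual motivation for dynamic rather than static balancing in irregular search.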

Relevance: 30.00%

Abstract:

The EP2025 EDS project develops a highly parallel information server that supports established high-value interfaces. We describe the motivation for the project, the architecture of the system, and the design and application of its database and language subsystems. The Elipsys logic programming language, its advanced applications, EDS Lisp, and the Metal machine translation system are examined.

Relevance: 30.00%

Abstract:

An eddy current testing system consists of a multi-sensor probe, a computer, and a special expansion card and software for data collection and analysis. The probe incorporates an excitation coil and sensor coils; at least one sensor coil is a lateral current-normal coil and at least one is a current perturbation coil.

Relevance: 30.00%

Abstract:

The expression of proteins using recombinant baculoviruses is a mature and widely used technology. However, some aspects of the technology continue to detract from high-throughput use, and the basis of the final observed expression level is poorly understood. Here, we describe the design and use of a set of vectors developed around a unified cloning strategy that allow parallel expression of target proteins in the baculovirus system as N-terminal or C-terminal fusions. Using several protein kinases as test cases, we found that amino-terminal fusion to maltose binding protein rescued expression of the poorly expressed human kinase Cot but had only a marginal effect on expression of a well-expressed kinase, IKK-2. In addition, MBP fusion proteins were found to be secreted from the expressing cell. Use of a carboxyl-terminal GFP tagging vector showed that fluorescence measurement paralleled expression level and was a convenient readout in the context of insect cell expression, an observation that was further supported with additional non-kinase targets. Expression of the target proteins from the same vectors in vitro showed that differences in expression level were wholly dependent on the environment of the expressing cell, and an investigation of the time course of expression showed that it could substantially affect the observed expression level for poorly, but not well-expressed, proteins. Our vector suite approach shows that a rapid expression survey can be achieved within the baculovirus system and, in addition, goes some way towards identifying the underlying basis of the expression level obtained. (c) 2006 Elsevier Inc. All rights reserved.

Relevance: 30.00%

Abstract:

A number of strategies are emerging for the high-throughput (HTP) expression of recombinant proteins to enable structural and functional study. Here we describe a workable HTP strategy based on parallel protein expression in E. coli and insect cells. Using this system, we provide comparative expression data for five proteins derived from the Autographa californica polyhedrosis virus genome that vary in amino acid composition and in molecular weight. Although the proteins are part of a set of factors known to be required for viral late gene expression, the precise function of three of the five, late expression factors (lefs) 6, 7 and 10, is unknown. Rapid expression and characterisation have allowed the determination of their ability to bind DNA and shown a cellular location consistent with their properties. Our data point to the utility of a parallel expression strategy for rapidly obtaining workable protein expression levels from many open reading frames (ORFs).

Relevance: 30.00%

Abstract:

Uncertainty plays a major part in the accuracy of a decision-making process, while its inconsistency is always difficult to resolve with existing decision-making tools. Entropy has been proved useful for evaluating the inconsistency of uncertainty among different respondents. The study demonstrates an entropy-based financial decision support system called e-FDSS. This integrated system provides decision support for evaluating attributes (funding options and multiple risks) available in projects. Fuzzy logic theory is included in the system to deal with the qualitative aspect of these options and risks. An adaptive genetic algorithm (AGA) is also employed to solve the decision algorithm in the system in order to assign optimal and consistent rates to these attributes. Seven simplified and parallel projects from a Hong Kong construction small and medium enterprise (SME) were assessed to evaluate the system. The results show that the system calculates risk-adjusted discount rates (RADR) of projects in an objective way. These rates discount project cash flow impartially. Inconsistency of uncertainty is also successfully evaluated by the use of the entropy method. Finally, the system identifies the favourable funding options that are managed by a scheme called the SME Loan Guarantee Scheme (SGS). Based on these results, resource allocation can then be optimized and the best time to start a new project identified throughout the overall project life cycle.
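A minimal sketch of how entropy can quantify the inconsistency of uncertainty among respondents, as in the entropy method mentioned above; the normalization and the rating scale are assumptions for illustration, not e-FDSS's exact formulation:

```python
import math

def inconsistency_entropy(ratings):
    """Normalized Shannon entropy of a set of respondent ratings.

    Returns 0.0 when all respondents agree (no inconsistency) and
    approaches 1.0 when ratings are spread uniformly over the observed
    categories. A generic illustration, not e-FDSS's decision algorithm.
    """
    counts = {}
    for r in ratings:
        counts[r] = counts.get(r, 0) + 1
    n = len(ratings)
    # Shannon entropy: H = sum_i p_i * log2(1 / p_i), with p_i = c_i / n
    h = sum((c / n) * math.log2(n / c) for c in counts.values())
    max_h = math.log2(len(counts)) if len(counts) > 1 else 1.0
    return h / max_h

print(inconsistency_entropy(["high", "high", "high"]))   # 0.0 (agreement)
print(inconsistency_entropy(["high", "medium", "low"]))  # ~1.0 (maximal spread)
```

An attribute whose ratings score near 0 needs no reconciliation, while one near 1 signals that respondents' uncertainty is inconsistent and the rate assigned to that attribute should be treated with caution.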

Relevance: 30.00%

Abstract:

Prebiotics are nondigestible food ingredients that encourage proliferation of selected groups of the colonic microflora, thereby altering the composition toward a more beneficial community. In the present study, the prebiotic potential of a novel galactooligosaccharide (GOS) mixture, produced by the activity of galactosyltransferases from Bifidobacterium bifidum 41171 on lactose, was assessed in vitro and in a parallel continuous randomized pig trial. Fluorescence in situ hybridization with 16S rRNA-targeted probes was used to investigate changes in total bacteria, bifidobacteria, lactobacilli, bacteroides, and the Clostridium histolyticum group in response to supplementing the novel GOS mixture. In a 3-stage continuous culture system, the bifidobacterial numbers in the first 2 vessels, which represented the proximal and transverse colon, increased (P < 0.05) after the addition of the oligosaccharide mixture. In addition, the oligosaccharide mixture strongly inhibited the attachment of enterohepatic Escherichia coli (P < 0.01) and Salmonella enterica serotype Typhimurium (P < 0.01) to HT29 cells. Addition of the novel mixture at 4% (wt:wt) to a commercial diet increased the density of bifidobacteria (P < 0.001) and the acetate concentration (P < 0.001), and decreased the pH (P < 0.001) compared with the control diet and the control diet supplemented with inulin, suggesting a great prebiotic potential for the novel oligosaccharide mixture. J. Nutr. 135: 1726-1731, 2005.

Relevance: 30.00%

Abstract:

The Java language first came to public attention in 1995. Within a year, it was being speculated that Java might be a good language for parallel and distributed computing. Its core features, including being object oriented and platform independent, as well as having built-in network support and threads, have encouraged this view. Today, Java is being used in almost every type of computer-based system, ranging from sensor networks to high performance computing platforms, and from enterprise applications through to complex research-based simulations. In this paper, the key features that make Java a good language for parallel and distributed computing are first discussed. Two Java-based middleware systems, namely MPJ Express, an MPI-like Java messaging system, and Tycho, a wide-area asynchronous messaging framework with an integrated virtual registry, are then discussed. The paper concludes by highlighting the advantages of using Java as middleware to support distributed applications.

Relevance: 30.00%

Abstract:

Resource monitoring in distributed systems is required to understand the 'health' of the overall system and to help identify particular problems, such as dysfunctional hardware or faulty system or application software. Desirable characteristics for monitoring systems are the ability to connect to any number of different types of monitoring agents and to provide different views of the system based on a client's particular preferences. This paper outlines and discusses the ongoing activities within the GridRM wide-area resource-monitoring project.

Relevance: 30.00%

Abstract:

Tycho was conceived in 2003 in response to a need by the GridRM [1] resource-monitoring project for a "light-weight", scalable, and easy to use wide-area distributed registry and messaging system. Since Tycho's first release in 2006, a number of modifications have been made to the system to make it easier to use and more flexible. Since its inception, Tycho has been utilised across a number of application domains, including wide-area resource monitoring, distributed queries across archival databases, providing services for the nodes of a Cray supercomputer, and transferring multi-terabyte scientific datasets across the Internet. This paper provides an overview of the initial Tycho system, describes a number of applications that utilise Tycho, discusses a number of new utilities, and explains how the Tycho infrastructure has evolved in response to experience of building applications with it.

Relevance: 30.00%

Abstract:

In this paper we introduce a new algorithm for matrix inversion and for solving systems of linear algebraic equations, based on the successful work of Fathi and Alexandrov on hybrid Monte Carlo algorithms. This algorithm consists of two parts: approximate inversion by Monte Carlo, and iterative refinement using a deterministic method. Here we present a parallel hybrid Monte Carlo algorithm, which uses Monte Carlo to generate an approximate inverse and then improves the accuracy of that inverse with an iterative refinement. The new algorithm is applied efficiently to sparse non-singular matrices. When we are solving a system of linear algebraic equations, Bx = b, the inverse matrix is used to compute the solution vector x = B^(-1)b. We present results that show the efficiency of the parallel hybrid Monte Carlo algorithm in the case of sparse matrices.
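A small sketch in the spirit of this two-part scheme: a Monte Carlo estimate of the inverse via random sampling of the Neumann series B^(-1) = sum_k (I - B)^k (valid when ||I - B|| < 1), followed by a deterministic refinement step. The sampling scheme and the use of Newton-Schulz as the refinement method are assumptions for illustration; the paper's actual estimators and refinement differ in detail.

```python
import numpy as np

# Part 1: Monte Carlo approximate inverse. Random walks sample terms of the
# Neumann series B^-1 = sum_k C^k with C = I - B; uniform transitions are
# reweighted by n * C[s, j] so each walk is an unbiased (if noisy) estimate.
# Part 2: deterministic iterative refinement (here Newton-Schulz, assumed).

rng = np.random.default_rng(0)

def mc_approx_inverse(B, walks=4000, length=12):
    n = B.shape[0]
    C = np.eye(n) - B
    X = np.zeros((n, n))
    for i in range(n):                        # each row estimated independently
        for _ in range(walks):                #   (embarrassingly parallel)
            s, w = i, 1.0
            X[i, s] += w                      # k = 0 term (identity)
            for _ in range(length):
                j = rng.integers(n)           # uniform transition
                w *= n * C[s, j]              # importance-sampling weight
                s = j
                X[i, s] += w
    return X / walks

def refine(B, X, iters=40):
    """Newton-Schulz: X <- X (2I - B X); converges when ||I - B X|| < 1."""
    I = np.eye(B.shape[0])
    for _ in range(iters):
        X = X @ (2 * I - B @ X)
    return X

# Small diagonally dominant test matrix with ||I - B|| < 1.
C = np.array([[0.1, 0.2, 0.0],
              [0.0, 0.1, 0.2],
              [0.2, 0.0, 0.1]])
B = np.eye(3) - C

X0 = mc_approx_inverse(B)                 # rough, noisy Monte Carlo inverse
X = refine(B, X0)                         # deterministic refinement
print(np.linalg.norm(B @ X - np.eye(3)))  # residual near machine precision
```

The division of labour mirrors the abstract: the Monte Carlo phase is cheap and embarrassingly parallel (each row's walks are independent), while the deterministic refinement removes the stochastic error, so the final solution of Bx = b via x = B^(-1)b is accurate.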