839 results for Simplex. CPLEX. Parallel Efficiency. Parallel Scalability. Linear Programming


Relevance: 50.00%

Abstract:

We study a brightening of the Lyman-alpha emission in the cusp which occurred in response to a short-lived southward turning of the interplanetary magnetic field (IMF) during a period of strongly enhanced solar wind plasma concentration. The cusp proton emission is detected using the SI-12 channel of the FUV imager on the IMAGE spacecraft. Analysis of the IMF observations recorded by the ACE and Wind spacecraft reveals that the assumption of a constant propagation lag from the upstream spacecraft to the Earth is not adequate for these high time-resolution studies. The variations of the southward IMF component observed by ACE and Wind allow for the calculation of the ACE-to-Earth lag as a function of time. Application of the derived propagation delays reveals that the intensity of the cusp emission varied systematically with the IMF clock angle, the relationship being particularly striking when the intensity is normalised to allow for the variation in the upstream solar wind proton concentration. The latitude of the cusp migrated equatorward while the lagged IMF pointed southward, confirming the lag calculation and indicating ongoing magnetopause reconnection. Dayside convection, as monitored by the SuperDARN network of radars, responded rapidly to the IMF changes but lagged behind the cusp proton emission response: this is shown to be as predicted by the model of flow excitation by Cowley and Lockwood (1992). We use the numerical cusp ion precipitation model of Lockwood and Davis (1996), along with modelled Lyman-alpha emission efficiency and the SI-12 instrument response, to investigate the effect of the sheath field clock angle on the acceleration of ions on crossing the dayside magnetopause. This modelling reveals that the emission commences on each reconnected field line 2–2.5 min after it is opened and peaks 3–5 min after it is opened. We discuss how comparison of the Lyman-alpha intensities with oxygen emissions observed simultaneously by the SI-13 channel of the FUV instrument offers an opportunity to test whether or not the clock angle dependence is consistent with the “component” or the “anti-parallel” reconnection hypothesis.
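
A rough idea of the scale of such lags can be obtained with the simplest "flat propagation" estimate, in which the delay is the monitor's upstream distance divided by the measured solar wind speed; the study itself derives the time-varying lag by matching IMF variations between spacecraft, which the sketch below does not attempt. All numbers and names in the sketch are illustrative assumptions.

```python
# Minimal sketch (not the paper's method) of the simplest "flat propagation"
# lag estimate, recomputed at each sample so the delay follows the measured
# solar wind speed. Distances and speeds below are illustrative assumptions.
def propagation_lag_seconds(x_monitor_km, x_target_km, v_sw_km_s):
    """Time for solar wind seen at the upstream monitor to reach the target,
    assuming purely anti-sunward propagation at the measured speed."""
    return (x_monitor_km - x_target_km) / v_sw_km_s

# ACE orbits the L1 point, roughly 1.5 million km sunward of Earth (assumed here).
lag = propagation_lag_seconds(x_monitor_km=1.5e6, x_target_km=0.0, v_sw_km_s=450.0)
print(f"estimated lag: {lag / 60:.1f} minutes")   # about 56 minutes at 450 km/s
```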

Relevance: 50.00%

Abstract:

Advances in hardware technologies allow data to be captured and processed in real time, and the resulting high-throughput data streams require novel data mining approaches. The research area of Data Stream Mining (DSM) is developing data mining algorithms that allow us to analyse these continuous streams of data in real time. The creation and real-time adaptation of classification models from data streams is one of the most challenging DSM tasks. Current classifiers for streaming data address this problem by using incremental learning algorithms. However, even though these algorithms are fast, they are challenged by high-velocity data streams, where data instances arrive at a fast rate. This is problematic if the application requires little or no delay between changes in the patterns of the stream and the absorption of these patterns by the classifier. Scalability problems of traditional data mining algorithms on static (non-streaming) Big Data have been addressed through the development of parallel classifiers. However, there is very little work on the parallelisation of data stream classification techniques. In this paper we investigate K-Nearest Neighbours (KNN) as the basis for a real-time adaptive and parallel methodology for scalable data stream classification tasks.
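
As a rough illustration of the idea, and not the paper's methodology, the sketch below keeps a sliding window of recent labelled instances as the adaptive model and farms the KNN distance computations out to worker processes; the class name, window size and K are hypothetical choices.

```python
# Hypothetical sketch (not the paper's methodology): window-based KNN over a
# stream, with the distance computations split across worker processes.
# Class name, window size and K are illustrative assumptions.
from collections import Counter, deque
from concurrent.futures import ProcessPoolExecutor
import math

WINDOW = 1000   # most recent labelled instances kept as the adaptive "model"
K = 5

def _distances(chunk, query):
    """Distance of the query to every (instance, label) pair in one chunk."""
    return [(math.dist(x, query), y) for x, y in chunk]

class StreamingKNN:
    def __init__(self, workers=4):
        self.window = deque(maxlen=WINDOW)   # forgetting old data adapts to drift
        self.pool = ProcessPoolExecutor(max_workers=workers)
        self.workers = workers

    def learn(self, x, y):
        self.window.append((x, y))           # incremental, constant-time update

    def predict(self, query):
        data = list(self.window)
        chunks = [data[i::self.workers] for i in range(self.workers)]
        parts = self.pool.map(_distances, chunks, [query] * self.workers)
        scored = [pair for part in parts for pair in part]
        if not scored:
            return None
        nearest = sorted(scored, key=lambda t: t[0])[:K]
        return Counter(label for _, label in nearest).most_common(1)[0][0]
```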

Relevance: 50.00%

Abstract:

A parallel formulation for the simulation of a branch prediction algorithm is presented. This parallel formulation identifies independent tasks in the algorithm which can be executed concurrently. The parallel implementation is based on the multithreading model and two parallel programming platforms: pthreads and Cilk++. Improvement in execution performance by up to 7 times is observed for a generic 2-bit predictor in a 12-core multiprocessor system.
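
For reference, a minimal serial 2-bit saturating-counter predictor can be written in a few lines; the paper's implementation uses pthreads and Cilk++, and its task decomposition is not reproduced here.

```python
# Minimal serial 2-bit saturating-counter predictor, for reference only (the
# paper's implementation uses pthreads and Cilk++, and its task decomposition
# is not reproduced here). Counter states 0-1 predict "not taken", 2-3 "taken".
def simulate_2bit(trace, table_bits=10):
    table = [1] * (1 << table_bits)            # start weakly not-taken
    mask = (1 << table_bits) - 1
    hits = 0
    for pc, taken in trace:                    # trace: (branch address, outcome)
        idx = pc & mask
        hits += (table[idx] >= 2) == taken
        table[idx] = min(3, table[idx] + 1) if taken else max(0, table[idx] - 1)
    return hits / len(trace)                   # prediction accuracy

# Branches mapping to different table entries are mutually independent, which is
# one plausible source of concurrent tasks (not necessarily the paper's choice).
print(simulate_2bit([(0x40, True), (0x40, True), (0x44, False), (0x40, True)]))
```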

Relevance: 50.00%

Abstract:

Applying optimization algorithms to PDE-based models of groundwater remediation can greatly reduce remediation cost. However, groundwater remediation analysis requires a computationally expensive simulation, so effective parallel optimization could greatly reduce the computational expense. The optimization algorithm used in this research is the Parallel Stochastic Radial Basis Function (RBF) method. It is designed for global optimization of computationally expensive functions with multiple local optima and does not require derivatives. In each iteration of the algorithm, an RBF surrogate is updated based on all the evaluated points in order to approximate the expensive function. The new RBF surface is then used to generate the next set of points, which are distributed to multiple processors for evaluation. Candidate evaluation points are selected according to their estimated function value and their distance from all previously evaluated points. Algorithms created for serial computing are not necessarily efficient in parallel, so Parallel Stochastic RBF is a different algorithm from its serial ancestor. The method is applied to two Groundwater Superfund Remediation sites, the Umatilla Chemical Depot and the former Blaine Naval Ammunition Depot. In this study, the formulation adopted treats pumping rates as decision variables in order to remove the plume of contaminated groundwater. Groundwater flow and contaminant transport are simulated with MODFLOW-MT3DMS. For both problems, the computation takes a large amount of CPU time, especially for the Blaine problem, which requires nearly fifty minutes for a simulation of a single set of decision variables. Thus, an efficient algorithm and powerful computing resources are essential in both cases. The results are discussed in terms of parallel computing metrics, i.e. speedup and efficiency. We find that with up to 24 parallel processors, the results of the parallel Stochastic RBF algorithm are excellent, with speedup efficiencies close to or exceeding 100%.
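
The candidate-selection step can be sketched as follows, under the assumption of a Gaussian RBF surrogate and a simple weighted combination of the two criteria; function names, weights and candidate counts are illustrative and not taken from the paper.

```python
# Illustrative sketch of one surrogate-guided selection step, in the spirit of
# stochastic RBF methods (not the authors' code): fit a Gaussian RBF on the
# evaluated points, then score random candidates by predicted value and by
# distance to already-evaluated points. Weights and counts are made up.
import numpy as np

def rbf_fit(X, f, eps=1.0):
    """Solve for Gaussian-RBF weights so the surrogate interpolates (X, f)."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-eps * d2)
    return np.linalg.solve(Phi + 1e-10 * np.eye(len(X)), f)

def rbf_predict(X, w, Xnew, eps=1.0):
    d2 = ((Xnew[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-eps * d2) @ w

def select_batch(X, f, bounds, n_candidates=500, batch=24, w_value=0.7, rng=None):
    """Return `batch` candidate points to distribute to parallel workers."""
    rng = rng or np.random.default_rng()
    lo, hi = bounds
    C = rng.uniform(lo, hi, size=(n_candidates, X.shape[1]))
    pred = rbf_predict(X, rbf_fit(X, f), C)
    dist = np.min(np.linalg.norm(C[:, None, :] - X[None, :, :], axis=-1), axis=1)
    # normalise both criteria; a low predicted value and a large distance are good
    v = (pred - pred.min()) / (np.ptp(pred) + 1e-12)
    d = 1.0 - (dist - dist.min()) / (np.ptp(dist) + 1e-12)
    score = w_value * v + (1.0 - w_value) * d
    return C[np.argsort(score)[:batch]]       # evaluate these in parallel
```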

Relevance: 50.00%

Abstract:

Recent years have seen increasing acceptance and adoption of parallel processing, both for high-performance scientific computing and for general-purpose applications. This acceptance has been favoured mainly by the development of massively parallel processing (MPP) environments and of distributed computing. A common point between distributed systems and MPP architectures is the notion of message passing, which allows communication between processes. A message-passing environment consists basically of a communication library that acts as an extension of the programming languages used to develop parallel applications, such as C, C++ and Fortran. A basic aspect in the development of parallel applications is performance analysis. Several metrics can be used in this analysis: execution time, efficiency in the use of the processing elements, and scalability of the application with respect to the increase in the number of processors or in the size of the problem instance. Establishing models or mechanisms that allow this analysis can be quite complicated, given the parameters and degrees of freedom involved in implementing a parallel application. An alternative that has been adopted is the use of tools for collecting and visualising performance data, which allow the user to identify bottlenecks and sources of inefficiency in an application. Efficient visualisation requires identifying and collecting data on the execution of the application, a stage called instrumentation. This work first presents a study of the main techniques used to collect performance data, followed by a detailed analysis of the main available tools that can be used on Beowulf-type cluster architectures running Linux on the x86 platform with communication libraries based on MPI (Message Passing Interface), such as LAM and MPICH. This analysis is validated on parallel applications dealing with the training of perceptron-type neural networks using back-propagation. The conclusions obtained show the potential and ease of use of the analysed tools.
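
For reference, the metrics discussed above reduce to a small calculation once execution times have been collected by the instrumentation stage; the helper below is illustrative only and the timing figures in the example are made up.

```python
# Illustrative helper for the metrics discussed above; the timings in the
# example are made-up numbers, not measured data.
def speedup_and_efficiency(times):
    """times maps processor count p -> measured execution time T(p) in seconds.
    Returns p -> (speedup S(p) = T(1)/T(p), efficiency E(p) = S(p)/p)."""
    t1 = times[1]
    return {p: (t1 / tp, t1 / tp / p) for p, tp in sorted(times.items())}

# Hypothetical timings for a back-propagation training run on a small cluster:
print(speedup_and_efficiency({1: 840.0, 2: 430.0, 4: 225.0, 8: 120.0}))
```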

Relevance: 50.00%

Abstract:

Although cluster environments have an enormous potential processing power, real applications that take advantage of this power remain an elusive goal. This is due, in part, to the lack of understanding about the characteristics of the applications best suited for these environments. This paper focuses on Master/Slave applications for large heterogeneous clusters. It defines application, cluster and execution models to derive an analytic expression for the execution time. It defines speedup and derives speedup bounds based on the inherent parallelism of the application and the aggregated computing power of the cluster. The paper derives an analytical expression for efficiency and uses it to define scalability of the algorithm-cluster combination based on the isoefficiency metric. Furthermore, the paper establishes necessary and sufficient conditions for an algorithm-cluster combination to be scalable, which are easy to verify and use in practice. Finally, it covers the impact of network contention as the number of processors grows. (C) 2007 Elsevier B.V. All rights reserved.
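
For reference, the standard definitions behind the metrics named above are given below; the paper's exact expressions depend on its application, cluster and execution models, which are not reproduced here.

```latex
% Standard definitions (notation illustrative; the paper's exact expressions
% depend on its application, cluster and execution models):
%   speedup      S(p) = T_1 / T_p
%   efficiency   E(p) = S(p) / p = T_1 / (p T_p)
%   overhead     T_o(W, p) = p T_p - T_1
% Writing the problem size W as the serial work T_1 gives E = W / (W + T_o),
% so keeping the efficiency constant requires the isoefficiency relation:
\[
  W \;=\; \frac{E}{1-E}\, T_o(W, p).
\]
```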

Relevance: 50.00%

Abstract:

In this work, the behaviour of a system of N massive parallel rigid wires is analysed. The aim is to explore its resemblance to a system of multiple cosmic strings. Assuming that it behaves like a 'gas' of massive rigid wires, we use a thermodynamic approach to describe the system. We obtain a constraint relating the linear mass density of the massive wires, the number of massive wires in the system and the dispersion velocity of the system. © 1996 IOP Publishing Ltd.

Relevance: 50.00%

Abstract:

For a typical non-symmetrical system with two parallel three-phase transmission lines, modal transformation is applied using several candidate single real transformation matrices, in search of an adequate single real transformation matrix for two parallel three-phase transmission line systems. The analyses start with eigenvector and eigenvalue studies, using Clarke's transformation or linear combinations of Clarke's elements. The Z_C and propagation parameters are analysed for the case that presents the smallest errors between the exact eigenvalues and the results of applying the single real transformation matrix. The single real transformation determined for this case is based on Clarke's matrix, and its main characteristic is the use of a single homopolar reference. The homopolar mode thus becomes a connector mode between the two three-phase circuits of the analysed system. ©2005 IEEE.
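
A minimal numerical sketch of the underlying idea is given below for a single balanced three-phase circuit, where the power-invariant Clarke matrix exactly diagonalises the impedance matrix; the 6x6 two-circuit case analysed in the paper, built from combinations of Clarke's elements with a single homopolar reference, is not reproduced. All impedance values are made up for illustration.

```python
# Sketch (illustrative values, single circuit only): the power-invariant Clarke
# matrix applied as a similarity transform to a balanced three-phase series
# impedance matrix. For a perfectly transposed line the result is diagonal and
# matches the exact eigenvalues.
import numpy as np

s = np.sqrt
T = s(2 / 3) * np.array([[1.0,      -0.5,      -0.5],
                         [0.0,   s(3) / 2, -s(3) / 2],
                         [1 / s(2), 1 / s(2), 1 / s(2)]])   # alpha, beta, zero

Zs, Zm = 0.3 + 1.0j, 0.1 + 0.4j          # self / mutual impedance (made-up values)
Z = np.array([[Zs, Zm, Zm],
              [Zm, Zs, Zm],
              [Zm, Zm, Zs]])

Z_mode = T @ Z @ T.T                     # Clarke (modal) domain
print(np.round(np.diag(Z_mode), 4))      # alpha, beta: Zs - Zm; zero: Zs + 2*Zm
print(np.round(np.linalg.eigvals(Z), 4)) # exact eigenvalues for comparison
```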

Relevance: 50.00%

Abstract:

The increasing amount of sequences stored in genomic databases has made sequential analysis unfeasible. Parallel computing has therefore brought its power to bioinformatics through parallel algorithms to align and analyse sequences, providing improvements mainly in the running time of these algorithms. In many situations, a parallel strategy contributes to reducing the computational complexity of large problems. This work shows some results obtained by an implementation of a parallel score estimating technique for the score matrix calculation stage, which is the first stage of a progressive multiple sequence alignment. The performance and quality of the parallel score estimation are compared with the results of a dynamic programming approach also implemented in parallel. This comparison shows a significant reduction in running time. Moreover, the quality of the final alignment using the new strategy is analysed and compared with the quality of the dynamic programming approach.
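
A minimal sketch of the parallel structure of this first stage is shown below: all sequence pairs are scored independently, so the pairwise score matrix can be filled by a pool of workers. The scorer used here is a trivial stand-in; neither the paper's estimation technique nor its dynamic-programming baseline is reproduced.

```python
# Sketch of stage one (the pairwise score matrix) computed with a process pool.
# The scorer is a trivial stand-in; neither the paper's score-estimating
# technique nor its dynamic-programming baseline is reproduced here.
from concurrent.futures import ProcessPoolExecutor
from itertools import combinations

def quick_score(job):
    """Crude ungapped identity score between two sequences (illustrative only)."""
    i, j, a, b = job
    return i, j, sum(x == y for x, y in zip(a, b)) / min(len(a), len(b))

def score_matrix(seqs, workers=4):
    n = len(seqs)
    jobs = [(i, j, seqs[i], seqs[j]) for i, j in combinations(range(n), 2)]
    matrix = [[0.0] * n for _ in range(n)]
    with ProcessPoolExecutor(max_workers=workers) as pool:
        for i, j, s in pool.map(quick_score, jobs):   # pairs scored in parallel
            matrix[i][j] = matrix[j][i] = s
    return matrix

if __name__ == "__main__":
    print(score_matrix(["ACGTACGT", "ACGAACGT", "TTGTACGA"]))
```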

Relevance: 50.00%

Abstract:

Parallel mechanisms show desirable characteristics such as a large payload-to-robot-weight ratio, considerable stiffness, low inertia and high dynamic performance. In particular, parallel manipulators with fewer than six degrees of freedom have recently attracted researchers’ attention, as their use may prove valuable in applications in which higher mobility is uncalled-for. The attention of this dissertation is focused on translational parallel manipulators (TPMs), that is, on parallel manipulators whose output link (platform) is provided with a pure translational motion with respect to the frame. The first part deals with the general problem of the topological synthesis and classification of TPMs, that is, it identifies the architectures that TPM legs must possess for the platform to be able to freely translate in space without altering its orientation. The second part studies both constraint and direct singularities of TPMs. In particular, special families of fully-isotropic mechanisms are identified. Such manipulators exhibit outstanding properties, as they are free from singularities and show a constant orthogonal Jacobian matrix throughout their workspace. As a consequence, both the direct and the inverse position problems are linear and the kinematic analysis proves straightforward.
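
The last claim can be stated compactly; the relations below are the standard ones and the notation is illustrative, not taken from the thesis.

```latex
% With a constant, orthogonal Jacobian J mapping actuated-joint rates to
% platform velocity, \dot{x} = J \dot{q}, integration gives affine position maps:
\[
  \mathbf{x}-\mathbf{x}_0 = J\,(\mathbf{q}-\mathbf{q}_0),
  \qquad
  \mathbf{q}-\mathbf{q}_0 = J^{T}(\mathbf{x}-\mathbf{x}_0),
\]
% so both the direct and the inverse position problems are linear, and
% orthogonality (J^T J = I) means actuation errors are not amplified.
```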

Relevance: 50.00%

Abstract:

Five different methods were critically examined to characterize the pore structure of silica monoliths. The mesopore characterization was performed using: a) the classical BJH method on nitrogen sorption data, which showed overestimated values in the mesopore distribution and was improved by using the NLDFT method; b) the ISEC method implementing the PPM and PNM models, which were especially developed for monolithic silicas, which, contrary to particulate supports, show two inflection points in the ISEC curve, enabling the calculation of pore connectivity, a measure of the mass transfer kinetics in the mesopore network; c) mercury porosimetry using newly recommended mercury contact angle values.

The results of the characterization of the mesopores of monolithic silica columns by the three methods indicated that all methods were useful with respect to the pore size distribution by volume, but only the ISEC method with the implemented PPM and PNM models gave the average pore size and distribution based on the number average, as well as the pore connectivity values.

The characterization of the flow-through pores was performed by two different methods: a) mercury porosimetry, which was used not only to estimate the average flow-through pore value but also to assess entrapment. It was found that mass transfer from the flow-through pores to the mesopores was not hindered in the case of small flow-through pores with a narrow distribution; b) liquid penetration, where the average flow-through pore values were obtained via existing equations and improved by additional methods developed according to Hagen-Poiseuille rules. The result was that it is not the flow-through pore size that influences the column back pressure, but the surface-area-to-volume ratio of the silica skeleton that is most decisive. Thus the monolith with the lowest ratio values will be the most permeable.

The flow-through pore characterization results obtained by mercury porosimetry and liquid permeability were compared with those from imaging and image analysis. All of these methods enable a reliable characterization of the flow-through pore diameters of monolithic silica columns, but special care should be taken with the chosen theoretical model.

The measured pore characterization parameters were then linked with the mass transfer properties of monolithic silica columns. As indicated by the ISEC results, no restrictions in mass transfer resistance were noticed in the mesopores due to their high connectivity. The mercury porosimetry results also gave evidence that no restrictions occur for mass transfer from flow-through pores to mesopores in small-scaled silica monoliths with a narrow distribution.

The prediction of the optimum regimes of the pore structural parameters for given target parameters in HPLC separations was performed. It was found that a low mass transfer resistance in the mesopore volume is achieved when the nominal diameter of the number-average size distribution of the mesopores is approximately an order of magnitude larger than the molecular radius of the analyte. The effective diffusion coefficient of an analyte molecule in the mesopore volume is strongly dependent on the value of the nominal pore diameter of the number-averaged pore size distribution. The mesopore size has to be adapted to the molecular size of the analyte, in particular for peptides and proteins.

The study on the flow-through pores of silica monoliths demonstrated that the surface-to-volume ratio of the skeletons and the external porosity are decisive for the column efficiency; the latter is independent of the flow-through pore diameter. The flow-through pore characteristics were assessed by direct and indirect approaches and theoretical column efficiency curves were derived. The study showed that, next to the surface-to-volume ratio, the total porosity and its distribution between the flow-through pores and mesopores have a substantial effect on the column plate number, especially as the extent of adsorption increases. The column efficiency increases with decreasing flow-through pore diameter, decreases with external porosity, and increases with total porosity, though this tendency has a limit due to the heterogeneity of the studied monolithic samples. We found that the maximum efficiency of the studied monolithic research columns could be reached at a skeleton diameter of ~0.5 µm. Furthermore, when the intention is to maximize column efficiency, more homogeneous monoliths should be prepared.
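
For reference, the standard Hagen-Poiseuille relation referenced above is given below; the additional methods developed in the work adapt it to the monolith geometry and are not reproduced here.

```latex
% Standard Hagen-Poiseuille relation for laminar flow through a cylindrical
% channel (Q volumetric flow rate, r channel radius, \Delta P pressure drop,
% \eta dynamic viscosity, L length):
\[
  Q = \frac{\pi r^{4}\,\Delta P}{8\,\eta\,L}
  \qquad\Longleftrightarrow\qquad
  \Delta P = \frac{8\,\eta\,L\,Q}{\pi r^{4}}.
\]
```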

Relevance: 50.00%

Abstract:

Modern embedded systems embrace many-core shared-memory designs. Due to constrained power and area budgets, most of them feature software-managed scratchpad memories instead of data caches to increase data locality. It is therefore the programmers’ responsibility to explicitly manage the memory transfers, and this makes programming these platforms cumbersome. Moreover, complex modern applications must be adequately parallelized before they can turn the parallel potential of the platform into actual performance. To support this, programming languages were proposed which work at a high level of abstraction and rely on a runtime whose cost hinders performance, especially in embedded systems, where resources and power budget are constrained. This dissertation explores the applicability of the shared-memory paradigm on modern many-core systems, focusing on ease of programming. It focuses on OpenMP, the de-facto standard for shared-memory programming. In the first part, the cost of algorithms for synchronization and data partitioning is analyzed, and these algorithms are adapted to modern embedded many-cores. Then, the original design of an OpenMP runtime library is presented, which supports complex forms of parallelism such as multi-level and irregular parallelism. In the second part of the thesis, the focus is on heterogeneous systems, where hardware accelerators are coupled to (many-)cores to implement key functional kernels with orders of magnitude of speedup and energy efficiency compared to the “pure software” version. However, three main issues arise, namely i) platform design complexity, ii) architectural scalability and iii) programmability. To tackle them, a template for a generic hardware processing unit (HWPU) is proposed, which shares the memory banks with the cores, and a template for a scalable architecture is shown, which integrates them through the shared-memory system. Then, a full software stack and toolchain are developed to support platform design and to let programmers exploit the accelerators of the platform. The OpenMP frontend is extended to interact with it.

Relevance: 50.00%

Abstract:

Despite the several issues faced in the past, the evolutionary trend of silicon has kept its constant pace. Today an ever-increasing number of cores is integrated onto the same die. Unfortunately, the extraordinary performance achievable by the many-core paradigm is limited by several factors. Memory bandwidth limitations, combined with inefficient synchronization mechanisms, can severely undermine the potential computation capabilities. Moreover, the huge HW/SW design space requires accurate and flexible tools to perform architectural explorations and to validate design choices. In this thesis we focus on the aforementioned aspects: a flexible and accurate Virtual Platform has been developed, targeting a reference many-core architecture. This tool has been used to perform architectural explorations, focusing on the instruction caching architecture and on hybrid HW/SW synchronization mechanisms. Besides architectural implications, another issue of embedded systems is considered: energy efficiency. Near-Threshold Computing (NTC) is a key research area in the Ultra-Low-Power domain, as it promises a tenfold improvement in energy efficiency compared to super-threshold operation and mitigates thermal bottlenecks. The physical implications of modern deep sub-micron technology are severely limiting the performance and reliability of modern designs. Reliability becomes a major obstacle when operating in NTC; in particular, memory operation becomes unreliable and can compromise system correctness. In the present work a novel hybrid memory architecture is devised to overcome reliability issues and at the same time improve energy efficiency by means of aggressive voltage scaling when allowed by workload requirements. Variability is another great drawback of near-threshold operation. The greatly increased sensitivity to threshold voltage variations is today a major concern for electronic devices. We introduce a variation-tolerant extension of the baseline many-core architecture: by means of micro-architectural knobs and a lightweight runtime control unit, the baseline architecture becomes dynamically tolerant to variations.

Relevance: 50.00%

Abstract:

In the past two decades the work of a growing portion of researchers in robotics has focused on a particular group of machines belonging to the family of parallel manipulators: cable robots. Although these robots share several theoretical elements with the better-known parallel robots, they still present completely (or partly) unsolved issues. In particular, the study of their kinematics, already a difficult subject for conventional parallel manipulators, is further complicated by the non-linear nature of cables, which can exert only tensile forces. The work presented in this thesis therefore focuses on the study of the kinematics of these robots and on the development of numerical techniques able to address some of the related problems. Most of the work is focused on the development of an interval-analysis-based procedure for the solution of the direct geometric problem of a generic cable manipulator. This technique, besides allowing for a rapid solution of the problem, also guarantees the results obtained against rounding and elimination errors, and can take into account any uncertainties in the model of the problem. The developed code has been tested with the help of a small manipulator whose realization is described in this dissertation, together with the auxiliary work done during its design and simulation phases.
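
A minimal sketch of the interval-arithmetic idea behind such guarantees is given below; it is not the dissertation's solver, and the rigorous outward rounding needed for certified floating-point bounds is omitted for brevity.

```python
# Minimal interval-arithmetic sketch: evaluating an expression with intervals
# yields an enclosure guaranteed to contain the true result, which is the
# property an interval-analysis solver relies on. (Outward rounding, needed
# for rigorous floating-point bounds, is omitted here for brevity.)
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

# A cable length known only within a tolerance still gives certified bounds:
l1, l2 = Interval(0.98, 1.02), Interval(1.49, 1.53)
print(l1 * l1 - l2)          # an enclosure of l1**2 - l2
```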

Relevance: 50.00%

Abstract:

An important problem in computational biology is finding the longest common subsequence (LCS) of two nucleotide sequences. This paper examines the correctness and performance of a recently proposed parallel LCS algorithm that uses successor tables and pruning rules to construct a list of sets from which an LCS can be easily reconstructed. Counterexamples are given for two of the pruning rules proposed with the original algorithm. Because of these errors, the performance measurements originally reported cannot be validated. The work presented here shows that speedup can be reliably achieved by an implementation in Unified Parallel C that runs on an InfiniBand cluster. This performance is partly facilitated by exploiting the software cache of the MuPC runtime system. In addition, this implementation achieved speedup without bulk memory copy operations and the associated programming complexity of message passing.
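
For readers unfamiliar with the problem, the classic dynamic-programming formulation of LCS is sketched below as a reference point only; the successor-table algorithm examined in the paper and its Unified Parallel C implementation are not reproduced.

```python
# Classic dynamic-programming LCS, shown only as a reference point; the
# successor-table algorithm examined in the paper and its UPC implementation
# are not reproduced here.
def lcs(a: str, b: str) -> str:
    m, n = len(a), len(b)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + 1
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j])
    # backtrack to recover one longest common subsequence
    out, i, j = [], m, n
    while i and j:
        if a[i - 1] == b[j - 1]:
            out.append(a[i - 1]); i -= 1; j -= 1
        elif dp[i - 1][j] >= dp[i][j - 1]:
            i -= 1
        else:
            j -= 1
    return "".join(reversed(out))

print(lcs("ACCGGTCGAGTG", "GTCGTTCGGAATGC"))   # prints one LCS of the two strings
```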