459 results for Parallelism


Relevance: 10.00%

Abstract:

Recently major processor manufacturers have announced a dramatic shift in their paradigm to increase computing power over the coming years. Instead of focusing on faster clock speeds and more powerful single-core CPUs, the trend clearly goes towards multi-core systems. This will also result in a paradigm shift for the development of algorithms for computationally expensive tasks, such as data mining applications. Obviously, work on parallel algorithms is not new per se, but concentrated efforts in the many application domains are still missing. Multi-core systems, as well as clusters of workstations and even large-scale distributed computing infrastructures, provide new opportunities and pose new challenges for the design of parallel and distributed algorithms. Since data mining and machine learning systems rely on high performance computing, research on the corresponding algorithms must be at the forefront of parallel algorithm research in order to keep pushing data mining and machine learning applications to be more powerful and, especially for the former, interactive. To bring together researchers and practitioners working in this exciting field, a workshop on parallel data mining was organized as part of PKDD/ECML 2006 (Berlin, Germany). The six contributions selected for the program describe various aspects of data mining and machine learning approaches featuring low to high degrees of parallelism: The first contribution addresses the classic problem of distributed association rule mining and focuses on communication efficiency to improve on the state of the art. After this, a parallelization technique for speeding up decision tree construction by means of thread-level parallelism on shared-memory systems is presented. The next paper discusses the design of a parallel approach to the frequent subgraph mining problem for distributed-memory systems. This approach is based on a hierarchical communication topology to solve issues related to multi-domain computational environments. The fourth paper describes the combined use and customization of software packages to facilitate top-down parallelism in the tuning of Support Vector Machines (SVMs), and the next contribution presents an interesting idea concerning parallel training of Conditional Random Fields (CRFs) and motivates their use in labeling sequential data. The last contribution focuses on highly efficient feature selection; it describes a parallel algorithm for feature selection from random subsets. Selecting the papers included in this volume would not have been possible without the help of an international Program Committee that provided detailed reviews for each paper. We would also like to thank Matthew Otey, who helped with publicity for the workshop.
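
As a rough illustration of the thread-level parallelism mentioned for decision tree construction, the sketch below scores every candidate feature of a node concurrently and keeps the best split. All names are hypothetical and the code is not taken from any of the workshop papers; note also that CPython's interpreter lock limits true thread concurrency, so a production version would use native threads or processes.

```python
# Hypothetical sketch: evaluating a tree node's candidate splits in parallel,
# one task per feature.  Not drawn from any of the workshop papers.
from concurrent.futures import ThreadPoolExecutor

def gini_impurity(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    counts = {}
    for y in labels:
        counts[y] = counts.get(y, 0) + 1
    return 1.0 - sum((c / n) ** 2 for c in counts.values())

def best_split_for_feature(rows, labels, feature):
    """Scan all thresholds of one feature; return (feature, (impurity, threshold))."""
    best = (float("inf"), None)
    for threshold in sorted({r[feature] for r in rows}):
        left = [y for r, y in zip(rows, labels) if r[feature] <= threshold]
        right = [y for r, y in zip(rows, labels) if r[feature] > threshold]
        score = (len(left) * gini_impurity(left) +
                 len(right) * gini_impurity(right)) / len(rows)
        best = min(best, (score, threshold))
    return feature, best

def parallel_best_split(rows, labels, n_features, workers=4):
    """Evaluate every feature concurrently and keep the best split overall."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(lambda f: best_split_for_feature(rows, labels, f),
                           range(n_features))
    return min(results, key=lambda item: item[1][0])
```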

Relevance: 10.00%

Abstract:

Generally, classifiers tend to overfit if there is noise in the training data or there are missing values. Ensemble learning methods are often used to improve a classifier's classification accuracy. Most ensemble learning approaches aim to improve the classification accuracy of decision trees. However, alternative classifiers to decision trees exist. The recently developed Random Prism ensemble learner for classification aims to improve an alternative classification rule induction approach, the Prism family of algorithms, which addresses some of the limitations of decision trees. However, Random Prism suffers, like any ensemble learner, from a high computational overhead due to replication of the data and the induction of multiple base classifiers. Hence even modest-sized datasets may pose a computational challenge to ensemble learners such as Random Prism. Parallelism is often used to scale up algorithms to deal with large datasets. This paper investigates parallelisation for Random Prism, implements a prototype and evaluates it empirically using a Hadoop computing cluster.
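
A minimal sketch of the coordination pattern behind an ensemble learner such as Random Prism, assuming a toy, picklable base learner in place of a real Prism rule inducer: each worker trains on its own bootstrap sample and predictions are combined by majority vote. The paper itself distributes the induction step over a Hadoop cluster rather than a local process pool.

```python
# Illustrative ensemble coordination only; the base learner is a placeholder.
import random
from collections import Counter
from multiprocessing import Pool

class MajorityRule:
    """Trivial, picklable stand-in for a Prism rule set."""
    def __init__(self, label):
        self.label = label
    def predict(self, example):
        return self.label

def train_one(args):
    data, seed = args                      # data: list of (features, label) pairs
    rng = random.Random(seed)
    sample = [rng.choice(data) for _ in range(len(data))]   # bootstrap sample
    majority = Counter(label for _, label in sample).most_common(1)[0][0]
    return MajorityRule(majority)          # a real learner would induce rules here

def train_ensemble(data, n_learners=10, workers=4):
    """Induce the base classifiers in parallel, one bootstrap sample each."""
    with Pool(workers) as pool:
        return pool.map(train_one, [(data, seed) for seed in range(n_learners)])

def predict(ensemble, example):
    """Combine the base classifiers by majority vote."""
    votes = Counter(clf.predict(example) for clf in ensemble)
    return votes.most_common(1)[0][0]
```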

Relevance: 10.00%

Abstract:

The time to process each of the W/B processing blocks of a median calculation method on a set of N W-bit integers is improved here by a factor of three compared to the literature. Parallelism uncovered in blocks containing B-bit slices is exploited by independent accumulative parallel counters, so that the median is calculated faster than any previously known method for any values of N and W. The improvements to the method are discussed in the context of calculating the median for a moving set of N integers, for which a pipelined architecture is developed. A smaller circuit area for the architecture is also reported as an additional benefit.
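
The underlying bit-serial selection idea can be sketched in software: the median (or any rank) is fixed one bit at a time, most significant bit first, using a count of zero-bits at each position. This plain sequential version is only illustrative; the paper's contribution is the hardware organisation, in which independent accumulative parallel counters process B-bit slices concurrently.

```python
def bit_serial_select(values, k, width=8):
    """Return the k-th smallest of `values` (0-indexed), fixing one bit per pass."""
    prefix = 0                                    # answer bits chosen so far
    for bit in reversed(range(width)):
        # candidates still matching the chosen prefix in all higher bit positions
        cands = [v for v in values if v >> (bit + 1) == prefix]
        zeros = sum(1 for v in cands if not (v >> bit) & 1)   # one "counter" per slice
        if k < zeros:
            prefix = prefix << 1                  # the answer has a 0 in this position
        else:
            prefix = (prefix << 1) | 1            # the answer has a 1; skip the zeros
            k -= zeros
    return prefix

def median(values, width=8):
    return bit_serial_select(values, len(values) // 2, width)

# median([13, 2, 7, 40, 9])  ->  9
```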

Relevance: 10.00%

Abstract:

Mobile Network Optimization (MNO) technologies have advanced at a tremendous pace in recent years. The Dynamic Network Optimization (DNO) concept emerged years ago, aiming to continuously optimize the network in response to variations in network traffic and conditions. Yet DNO development is still in its infancy, mainly hindered by the significant bottleneck of lengthy optimization runtimes. This paper identifies parallelism in greedy MNO algorithms and presents an advanced distributed parallel solution. The solution is designed, implemented and applied to real-life projects, yielding significant, highly scalable and nearly linear speedups of up to 6.9 and 14.5 on distributed 8-core and 16-core systems respectively. Meanwhile, the optimization outputs exhibit self-consistency and high precision compared to their sequential counterparts. This is a milestone in realizing DNO. Furthermore, the techniques may be applied to other applications based on similar greedy optimization algorithms.
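
The parallelism identified in greedy algorithms of this kind is sketched below: within one greedy iteration the candidate evaluations are mutually independent and can be distributed across workers, while the outer loop stays sequential. The cost function and all names are placeholders, not the paper's MNO objective or implementation.

```python
# Illustrative-only sketch of distributing one greedy step's evaluations.
from concurrent.futures import ProcessPoolExecutor

def cost(candidate):
    """Placeholder objective; the real DNO cost model is far more involved."""
    return sum((x - 0.5) ** 2 for x in candidate)

def greedy_step(current, candidates, workers=8):
    """One greedy iteration: score all candidate moves in parallel, keep the best."""
    with ProcessPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(cost, candidates))
    best = min(range(len(candidates)), key=lambda i: scores[i])
    return candidates[best] if scores[best] < cost(current) else current
```

In practice the worker pool would be created once and reused across iterations; creating it per step is kept here only to keep the sketch self-contained.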

Relevance: 10.00%

Abstract:

We describe infinitely scalable pipeline machines with perfect parallelism, in the sense that every instruction of an inline program is executed, on successive data, on every clock tick. Programs with shared data effectively execute in less than a clock tick. We show that pipeline machines are faster than single- or multi-core von Neumann machines for sufficiently many program runs of a sufficiently time-consuming program. Our pipeline machines exploit the totality of transreal arithmetic and the known waiting time of statically compiled programs to deliver the interesting property that they need no hardware or software exception handling.
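
A toy software model of the stated property, that every stage executes its instruction on every tick on successive data, might look as follows; it is purely conceptual and does not model transreal arithmetic or the hardware described in the paper.

```python
def run_pipeline(stages, inputs):
    """Toy model: on each tick every stage applies its instruction to the value
    its predecessor produced on the previous tick."""
    n = len(stages)
    regs = [None] * n                         # regs[i]: stage i's output last tick
    outputs = []
    for item in list(inputs) + [None] * n:    # extra ticks flush the pipeline
        new_regs = [None] * n
        for i in range(n):                    # all stages fire on this tick
            src = item if i == 0 else regs[i - 1]
            new_regs[i] = stages[i](src) if src is not None else None
        if regs[-1] is not None:
            outputs.append(regs[-1])
        regs = new_regs
    return outputs

# run_pipeline([lambda x: x + 1, lambda x: x * 2], [1, 2, 3])  ->  [4, 6, 8]
```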

Relevance: 10.00%

Abstract:

The simulated annealing approach to crystal structure determination from powder diffraction data, as implemented in the DASH program, is readily amenable to parallelization at the individual run level. Very large increases in execution speed can be achieved by distributing individual DASH runs over a network of computers. The CDASH program delivers this by using scalable on-demand computing clusters built on the Amazon Elastic Compute Cloud service. By way of example, a 360 vCPU cluster returned the crystal structure of racemic ornidazole (Z′ = 3, 30 degrees of freedom) ca. 40 times faster than a typical modern quad-core desktop CPU. Whilst used here specifically for DASH, this approach is of general applicability to other packages that are amenable to coarse-grained parallelism strategies.
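
The coarse-grained, run-level parallelism described here can be sketched as a set of independent annealing runs started with different seeds, with the best result kept. The annealing routine below is a generic toy minimiser, not the DASH structure-solution engine, and in CDASH the workers are cloud instances rather than local processes.

```python
# Run-level parallelism sketch: many independent simulated-annealing runs.
import math
import random
from multiprocessing import Pool

def anneal(seed, steps=20000):
    """Toy simulated-annealing run on a 1-D test function with minimum at x = 3."""
    rng = random.Random(seed)
    x = rng.uniform(-10, 10)
    best, best_cost = x, (x - 3.0) ** 2
    for step in range(steps):
        temperature = (1.0 - step / steps) + 1e-9
        candidate = x + rng.gauss(0, 0.5)
        delta = (candidate - 3.0) ** 2 - (x - 3.0) ** 2
        if delta < 0 or rng.random() < math.exp(-delta / temperature):
            x = candidate
            if (x - 3.0) ** 2 < best_cost:
                best, best_cost = x, (x - 3.0) ** 2
    return best_cost, best

def parallel_runs(n_runs=32, workers=8):
    """Distribute independent runs over worker processes; keep the best result."""
    with Pool(workers) as pool:
        results = pool.map(anneal, range(n_runs))
    return min(results)
```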

Relevance: 10.00%

Abstract:

Although radiotelemetry is considered a valuable technique for ornithological field studies, several assumptions have been made about the impact that transmitters may have on the estimation of behavioral, ecological, and reproductive parameters. To assess the potential effects of backpack radiotransmitters, we captured and assigned 8 male American kestrels (Falco sparverius) to 2 groups: radiotagged (n = 6) and control individuals (leg-banded, n = 2). Thereafter, we collected feces approximately 2 hours after capture (day -1), and subsequently on days 0 (release day), 4, 7, 15, 30, 40, and 55. Prior to fecal analysis, we validated the corticosterone enzyme immunoassay using standard procedures (e.g., parallelism, dose-response curve), and we confirmed the physiological significance of fecal glucocorticoid metabolites through an adrenocorticotropin challenge, which induced a 4-fold increase (446.10 ± 60.73 ng/g) above baseline (114.27 ± 15.23 ng/g) within 4 hours (P < 0.001). Both groups exhibited a significant increase in fecal glucocorticoids on day 0 (P < 0.001), but concentrations returned to preattachment values within 4 days. Fecal glucocorticoid concentrations did not differ between samples of radiotagged and leg-banded kestrels (P > 0.05). In spite of the small number of monitored subjects, these findings suggested that radiotransmitters did not affect adrenocortical activity in these male American kestrels. (Journal of Wildlife Management 73(5): 772-778; 2009)

Relevance: 10.00%

Abstract:

Magnetic fabric and rock magnetism studies were performed on apparently isotropic granite facies from the main intrusion of the Lavras do Sul Intrusive Complex pluton (LSIC, Rio Grande do Sul, South Brazil). This intrusion is roughly circular (~12 x 13.5 km) and composed of alkali-calcic and alkaline granitoids, with the latter occupying the margin of the pluton. Magnetic fabrics were determined by applying both anisotropy of low-field magnetic susceptibility (AMS) and anisotropy of anhysteretic remanent magnetization (AARM). The two fabrics are coaxial. The parallelism between the AMS and AARM tensors excludes the presence of a single-domain (SD) effect on the AMS fabric of the granites. Several rock-magnetism experiments performed on one specimen from each sampled site show that at all sites the magnetic susceptibility is dominantly carried by ferromagnetic minerals, while mainly magnetite carries the magnetic fabrics. Lineations and foliations in the granite facies were successfully determined by applying magnetic methods. Magnetic lineations are gently plunging and roughly parallel to the boundaries of the pluton facies, except at the few sites in the central facies which have a radial orientation pattern. In contrast, the magnetic foliations tend to follow the contacts between the different granite facies. They are gently outward-dipping inside the pluton, and become either steeply southwesterly dipping or vertical towards its margin. The lack of solid-state and subsolidus deformation at outcrop scale and in thin sections precludes deformation after full crystallization of the pluton. This evidence allows us to interpret the observed magnetic fabrics as primary (magmatic) in origin, acquired as the rocks solidified, reflecting magma flow. The foliation pattern displays a dome-shaped form for the main LSIC pluton. However, the alkaline granites which crop out in the southern part of the studied area have an inward-dipping foliation, and the steeply plunging magnetic lineation suggests that this area could be part of a feeder zone. The magma ascent probably occurred by ring-diking. © 2008 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

In this paper I have attempted to explore "covenant" in faith and history, as it extends throughout the entire framework of the Bible and the entire history of the people who produced it. With such a monstrous topic, a comprehensive analysis of the material could take a lifetime to do it justice. Therefore, I have taken a very specific approach to the material in order to investigate the evolution of covenant from the Hebrew Bible (Old Testament) to the Christian Scriptures (New Testament). I have made every effort to approach this thesis as a text-based, non-doctrinal discussion. However, having my own religious convictions, it has, at times, been difficult to recognize and escape my biases. Nevertheless, I am confident that this final product is, for the most part, objective and free from dogmatism. Of course, I have brought my own perspective and understanding to the material, which may be different from the reader's, so there may be matters of interpretation on which we differ, but c'est la vie in the world of religious dialogue. The structure of this paper is symmetrical: Part I examines the traditions of the Torah and the Prophets; Part II, the Gospels and Paul's letters. I have balanced the Old Testament against the New Testament (the Torah against the Gospels; the Prophets against Paul) in order to give approximately equal weight to the two traditions, and establish a sense of parallelism in the structure of my overall work. A word should also be said about three matters of style. First, instead of the customary Christian designation of time as B.C. or A.D., I have opted to use the more modern B.C.E. (Before the Common Era) and C.E. (Common Era) notations. This more recent system is less traditional; however, it is more acceptable in academic contexts and, certainly, more appropriate for a non-doctrinal discussion. Second, in the body of this paper I have chosen to highlight several texts using a variety of colors. This highlighting serves (1) to call the reader's attention to specific passages, and (2) to compare the language and imagery of similar texts. All highlighting has been added to the texts at my own discretion. Finally, the divine name, traditionally vocalized as "Yahweh," is a verbal form of the Hebrew "to be," and means, approximately, "I am who I am." This name was considered too holy to pronounce by the ancient Israelites, and the word adonai ("My LORD") was used in its stead. In respect of this tradition, I have left the divine name in its original Hebrew form. Accordingly, it should be read as "the LORD" throughout this paper. All Hebrew and Greek translations, where they occur, are my own. The Greek translations are based on the New Revised Standard Version (NRSV) of the Bible.

Relevance: 10.00%

Abstract:

This study discusses a new organizational model for Criminal Forensics (Perícia Criminal) that allows, at the same time, an integrated, harmonious and independent relationship with the Police Investigation, helping to change the current model, in which Criminal Forensics acts only in a limited and occasional way, into a model that allows parallelism between the two, and highlighting the importance of criminalistics as a tool of excellence in criminal investigation and in fighting impunity in investigative proceedings. First, the topic under study, its objectives, scope and relevance are presented. Next, an overview of previous work relevant to the topic is given, including a brief exposition of the fundamental concepts of criminalistics and their interrelations within the Criminal Justice System, in order to demonstrate its potential in the investigative process. The research methodology is then presented, followed by an analysis of the police investigation and criminal forensics processes and of the results of the exploratory research, seeking to identify problems and to propose mechanisms for improving the organizational model. Cases involving different areas of forensic knowledge are also presented, in which effective results were achieved thanks to the application of the proposed model. Finally, the study aims to demonstrate that implementing an organizational model with parallelism, integration and independence between the field investigation and technical forensics processes will optimize the development and outcome of criminal investigations, contributing to a more efficient criminal prosecution and criminal justice system.

Relevance: 10.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 10.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance: 10.00%

Abstract:

This study presents the implementation and embedding of an Artificial Neural Network (ANN) in hardware, i.e. in a programmable device such as a field programmable gate array (FPGA). The work explores different implementations, described in VHDL, of multilayer perceptron ANNs. Because of the parallelism inherent in ANNs, software implementations suffer from the sequential nature of Von Neumann architectures. As an alternative, a hardware implementation makes it possible to exploit all the parallelism implicit in this model. There is currently growing use of FPGAs as a platform for implementing neural networks in hardware, exploiting their high processing power, low cost, ease of programming and ability to reconfigure the circuit, allowing the network to adapt to different applications. In this context, the aim is to develop neural network arrays in hardware with a flexible architecture, in which it is possible to add or remove neurons and, above all, to modify the network topology, in order to enable a modular fixed-point network on an FPGA. Five VHDL descriptions were synthesized: two for neurons with one or two inputs, and three for different ANN architectures. The descriptions of the architectures used are highly modular, easily allowing the number of neurons to be increased or decreased. As a result, several complete neural networks were implemented on an FPGA, in fixed-point arithmetic, with high-capacity parallel processing.
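
As a rough software model of what one hardware neuron computes, the sketch below performs the multiply-accumulate in fixed-point (Q-format) arithmetic; the fractional width and the hard-limit activation are arbitrary illustrative choices, not the parameters of the VHDL descriptions in the study.

```python
# Fixed-point neuron sketch; parameters are illustrative assumptions only.
FRAC_BITS = 8          # Q8 fixed point: value = integer / 2**FRAC_BITS

def to_fixed(x):
    return int(round(x * (1 << FRAC_BITS)))

def fixed_mul(a, b):
    return (a * b) >> FRAC_BITS        # rescale after the multiply

def neuron(inputs, weights, bias):
    """One fixed-point neuron: weighted sum plus bias, then a step activation."""
    acc = to_fixed(bias)
    for x, w in zip(inputs, weights):
        acc += fixed_mul(to_fixed(x), to_fixed(w))   # one MAC per input
    return 1 if acc >= 0 else 0                      # hard-limit activation

def layer(inputs, weight_rows, biases):
    """Each neuron in a layer is independent of the others, which is the
    parallelism an FPGA exploits by instantiating them side by side."""
    return [neuron(inputs, w, b) for w, b in zip(weight_rows, biases)]
```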

Relevance: 10.00%

Abstract:

A challenge that remains in the robotics field is how to make a robot react in real time to visual stimuli. Traditional computer vision algorithms used to tackle this problem are still computationally expensive, taking too long on common processors. Even very simple algorithms, such as image filtering or mathematical morphology operations, may take too long. Researchers have implemented image processing algorithms on highly parallel hardware devices in order to cut down processing time, with good results. By using hardware-implemented image processing techniques and a platform-oriented system based on the Nios II processor, we propose an approach that combines hardware processing and event-based programming to simplify vision-based systems while accelerating parts of the algorithms used.
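
A tiny reference version of a mathematical morphology operation illustrates why such algorithms map well onto parallel hardware: each output pixel of a 3x3 binary erosion depends only on its own neighbourhood, so all pixels can in principle be computed simultaneously. This is a plain sequential sketch, not the Nios II / FPGA implementation discussed above.

```python
def erode3x3(image):
    """Binary erosion of a 2-D list of 0/1 pixels with a full 3x3 window."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):          # each (y, x) is an independent task
            out[y][x] = int(all(image[y + dy][x + dx]
                                for dy in (-1, 0, 1)
                                for dx in (-1, 0, 1)))
    return out
```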