853 results for Tutorial on Computing
Peer reviewed
Abstract:
We describe Janus, a massively parallel FPGA-based computer optimized for the simulation of spin glasses, theoretical models for the behavior of glassy materials. FPGAs (as compared to GPUs or many-core processors) provide a complementary approach to massively parallel computing. In particular, our model problem is formulated in terms of binary variables, and floating-point operations can be (almost) completely avoided. The FPGA architecture allows us to run many independent threads with almost no latencies in memory access, thus updating up to 1024 spins per cycle. We describe Janus in detail and we summarize the physics results obtained in four years of operation of this machine; we discuss two types of physics applications: long simulations on very large systems (which try to mimic and provide understanding of the experimental non-equilibrium dynamics), and low-temperature equilibrium simulations using an artificial parallel tempering dynamics. The time scale of our non-equilibrium simulations spans eleven orders of magnitude (from picoseconds to a tenth of a second). On the other hand, our equilibrium simulations are unprecedented both because of the low temperatures reached and because of the large systems that we have brought to equilibrium. A finite-time scaling ansatz emerges from the detailed comparison of the two sets of simulations. Janus has made it possible to perform spin glass simulations that would take several decades on more conventional architectures. The paper ends with an assessment of the potential of possible future versions of the Janus architecture, based on state-of-the-art technology.
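For readers unfamiliar with the model, the Edwards-Anderson spin glass simulated by Janus reduces in software to a very small Metropolis kernel over binary spins with quenched ±1 couplings. The sketch below is only a didactic single-threaded Python illustration of that kernel, on a 2D lattice with periodic boundaries and illustrative parameter choices; it is not the Janus implementation, which packs many spins per word, tabulates the few possible Boltzmann factors and updates up to 1024 spins per clock cycle.

```python
import numpy as np

def metropolis_sweep(spins, Jh, Jv, beta, rng):
    """One Metropolis sweep of a 2D +/-1 Edwards-Anderson spin glass.

    spins  : (L, L) array of +/-1 spins
    Jh, Jv : (L, L) arrays of +/-1 couplings to the right / bottom neighbour
    """
    L = spins.shape[0]
    for i in range(L):
        for j in range(L):
            s = spins[i, j]
            # local field from the four nearest neighbours (periodic boundaries)
            h = (Jh[i, j] * spins[i, (j + 1) % L]
                 + Jh[i, (j - 1) % L] * spins[i, (j - 1) % L]
                 + Jv[i, j] * spins[(i + 1) % L, j]
                 + Jv[(i - 1) % L, j] * spins[(i - 1) % L, j])
            dE = 2 * s * h  # energy change of flipping spins[i, j]
            # dE only takes the values -8, -4, 0, 4, 8, so production codes
            # tabulate the acceptance probabilities instead of calling exp()
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[i, j] = -s
    return spins

# minimal usage: a 32x32 lattice with quenched random +/-1 couplings
rng = np.random.default_rng(0)
L = 32
spins = rng.choice(np.array([-1, 1]), size=(L, L))
Jh = rng.choice(np.array([-1, 1]), size=(L, L))
Jv = rng.choice(np.array([-1, 1]), size=(L, L))
for _ in range(10):
    metropolis_sweep(spins, Jh, Jv, beta=0.5, rng=rng)
```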
Abstract:
We present Tethered Monte Carlo, a simple, general purpose method of computing the effective potential of the order parameter (Helmholtz free energy). This formalism is based on a new statistical ensemble, closely related to the micromagnetic one, but with an extended configuration space (through Creutz-like demons). Canonical averages for arbitrary values of the external magnetic field are computed without additional simulations. The method is put to work in the two-dimensional Ising model, where the existence of exact results enables us to perform high precision checks. A rather peculiar feature of our implementation, which employs a local Metropolis algorithm, is the total absence, within errors, of critical slowing down for magnetic observables. Indeed, high accuracy results are presented for lattices as large as L = 1024.
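As a very rough sketch of the spirit of the final step, assuming (as tethered formalisms suggest) that the effective potential is recovered by integrating the mean conjugate ("tethered") field measured in independent simulations at fixed values of the constrained magnetization, the reconstruction amounts to a simple numerical quadrature. Function and variable names are illustrative, and the sign convention follows the stated assumption rather than the paper.

```python
import numpy as np

def effective_potential(m_grid, b_mean):
    """Reconstruct the effective potential Omega(m) from the mean tethered
    field b(m) measured at a grid of fixed (tethered) magnetizations.

    Assumes dOmega/dm = b(m); Omega then follows by trapezoidal integration,
    with the additive constant fixed by Omega(m_grid[0]) = 0.
    """
    m_grid = np.asarray(m_grid, dtype=float)
    b_mean = np.asarray(b_mean, dtype=float)
    steps = 0.5 * (b_mean[1:] + b_mean[:-1]) * np.diff(m_grid)
    return np.concatenate(([0.0], np.cumsum(steps)))
```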
Abstract:
This paper describes JANUS, a modular, massively parallel and reconfigurable FPGA-based computing system. Each JANUS module has a computational core and a host. The computational core is a 4x4 array of FPGA-based processing elements with nearest-neighbor data links. Processors are also directly connected to an I/O node attached to the JANUS host, a conventional PC. JANUS is tailored for, but not limited to, the requirements of a class of hard scientific applications characterized by regular code structure, unconventional data-manipulation instructions and a not-too-large database size. We discuss the architecture of this configurable machine, and focus on its use in Monte Carlo simulations of statistical mechanics. On this class of applications JANUS achieves impressive performance: in some cases one JANUS processing element outperforms high-end PCs by a factor of ≈1000. We also discuss the role of JANUS in other classes of scientific applications.
Open business intelligence: on the importance of data quality awareness in user-friendly data mining
Abstract:
Citizens demand more and more data for making decisions in their daily lives. Therefore, mechanisms that allow citizens to understand and analyze linked open data (LOD) in a user-friendly manner are greatly needed. To this aim, the concept of Open Business Intelligence (OpenBI) is introduced in this position paper. OpenBI enables non-expert users to (i) analyze and visualize LOD, thus generating actionable information by means of reporting, OLAP analysis, dashboards or data mining; and to (ii) share the newly acquired information as LOD to be reused by anyone. One of the most challenging issues of OpenBI is related to data mining, since non-experts (such as citizens) need guidance during preprocessing and application of mining algorithms, due to the complexity of the mining process and the low quality of the data sources. This is even worse when dealing with LOD, not only because of the different kinds of links among the data, but also because of its high dimensionality. As a consequence, in this position paper we advocate that data mining for OpenBI requires data-quality-aware mechanisms for guiding non-expert users in obtaining and sharing the most reliable knowledge from the available LOD.
Abstract:
This paper presents a new approach to the delineation of local labor markets based on evolutionary computation. The aim of the exercise is the division of a given territory into functional regions based on travel-to-work flows. Such regions are defined so as to guarantee a high degree of inter-regional separation and of intra-regional integration, both measured in terms of commuting flows. Additional requirements include the absence of overlap between delineated regions and the exhaustive coverage of the whole territory. The procedure is based on the maximization of a fitness function that measures aggregate intra-region interaction under constraints of inter-region separation and minimum size. In the experimentation stage, two variations of the fitness function are used, and the process is also applied as a final stage for the optimization of the results from one of the most successful existing methods, which is used by the British authorities for the delineation of travel-to-work areas (TTWAs). The empirical exercise is conducted using real data for a sufficiently large territory that is considered representative, given the density and variety of travel-to-work patterns that it embraces. The paper includes a quantitative comparison with alternative traditional methods, an assessment of the performance of the set of operators that has been specifically designed to handle the regionalization problem, and an evaluation of the convergence process. The robustness of the solutions, something crucial in a research and policy-making context, is also discussed in the paper.
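To make the objective concrete, a candidate partition of territorial units into regions can be scored from the commuting-flow matrix roughly as follows. This is only a hedged stand-in for the kind of fitness the paper maximizes; its actual function and separation/minimum-size constraints are more elaborate, and all names and thresholds here are illustrative.

```python
import numpy as np

def intra_region_share(flows, region_of):
    """Share of all commuting flows that stay inside their region.

    flows     : (n, n) matrix, flows[i, j] = commuters living in unit i, working in unit j
    region_of : length-n integer array assigning each territorial unit to a region
    """
    region_of = np.asarray(region_of)
    same_region = region_of[:, None] == region_of[None, :]
    return flows[same_region].sum() / flows.sum()

def fitness(flows, region_of, min_size=3):
    """Illustrative fitness: intra-region interaction, zeroed out when any
    region falls below a minimum number of units (a crude size constraint)."""
    _, counts = np.unique(np.asarray(region_of), return_counts=True)
    if counts.min() < min_size:
        return 0.0
    return intra_region_share(flows, region_of)
```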
Abstract:
Several recent works deal with 3D data in mobile robotics problems, e.g., mapping. The data come from various kinds of sensors (time-of-flight cameras, Kinect or 3D lasers) that provide a huge amount of unorganized 3D data. In this paper we detail an efficient approach to build complete 3D models using a soft computing method, the Growing Neural Gas (GNG). As neural models deal easily with noise, imprecision, uncertainty or partial data, the GNG provides better results than other approaches. The GNG obtained is then applied to a sequence. We present a comprehensive study on GNG parameters to ensure the best result at the lowest time cost. From this GNG structure, we propose to calculate planar patches, thus obtaining a fast method to compute the movement performed by a mobile robot by means of a 3D model registration algorithm. Final results of 3D mapping are also shown.
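As a reminder of how GNG adapts to each incoming 3D point, the core per-sample step moves the closest node and its topological neighbours toward the sample while ageing and pruning edges. The sketch below omits the periodic node insertion/removal of the full algorithm, and the learning rates and maximum edge age are illustrative placeholders, not the parameterization studied in the paper.

```python
import numpy as np

def gng_adapt(nodes, edges, error, x, eps_b=0.05, eps_n=0.006, age_max=50):
    """One Growing Neural Gas adaptation step for a single input sample x.

    nodes : (k, d) array of node positions
    edges : dict mapping frozenset({i, j}) -> age of the edge between nodes i and j
    error : (k,) array of accumulated squared error per node
    """
    d2 = ((nodes - x) ** 2).sum(axis=1)
    s1, s2 = (int(v) for v in np.argsort(d2)[:2])   # winner and runner-up
    error[s1] += d2[s1]                             # accumulate error on the winner
    nodes[s1] += eps_b * (x - nodes[s1])            # pull the winner toward the sample
    for e in list(edges):
        if s1 in e:
            edges[e] += 1                           # age edges emanating from the winner
            neighbour = next(iter(e - {s1}))
            nodes[neighbour] += eps_n * (x - nodes[neighbour])  # drag neighbours slightly
            if edges[e] > age_max:
                del edges[e]                        # prune stale edges
    edges[frozenset((s1, s2))] = 0                  # (re)create winner/runner-up edge
    return nodes, edges, error
```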
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study on the use of the CUDA platform to remove impulsive noise using this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and a comparison of the implementations parallelized on multi-core CPUs, GPUs and the combination of both is conducted. A performance analysis with large images is conducted in order to identify the number of pixels to allocate to the CPU and the GPU. The observed times show that both devices should share the work, with most of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-core CPUs are very advisable, and they open the door to using such algorithms for real-time processing.
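A sequential sketch of the peer-group idea with a fuzzy similarity measure is given below. The particular product-form similarity, the 3x3 window and the thresholds are assumptions chosen for illustration (the paper's exact fuzzy metric and parameters may differ); the CUDA/OpenMP versions essentially parallelize the per-pixel loop.

```python
import numpy as np

def fuzzy_similarity(a, b, K=1024.0):
    """Fuzzy similarity in (0, 1] between two RGB vectors (1 means identical).
    This product form is one common choice in the peer-group literature (assumed here)."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    return float(np.prod((np.minimum(a, b) + K) / (np.maximum(a, b) + K)))

def peer_group_filter(img, sim_thresh=0.95, min_peers=3):
    """Detect-and-replace impulsive noise using 3x3 peer groups (sequential sketch).

    A pixel is kept if at least `min_peers` neighbours are similar enough to it;
    otherwise it is replaced by the channel-wise median of its 3x3 neighbourhood.
    """
    out = img.copy()
    H, W, _ = img.shape
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            centre = img[i, j]
            window = img[i - 1:i + 2, j - 1:j + 2].reshape(-1, 3)
            # count peers; subtract 1 because the centre always matches itself
            peers = sum(fuzzy_similarity(centre, p) >= sim_thresh for p in window) - 1
            if peers < min_peers:                     # centre flagged as impulsive noise
                out[i, j] = np.median(window, axis=0)
    return out
```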
Abstract:
In this paper, parallel Relaxed and Extrapolated algorithms based on the Power method for accelerating the PageRank computation are presented. Different parallel implementations of the Power method and the proposed variants are analyzed using different data distribution strategies. The reported experiments show the behavior and effectiveness of the designed algorithms for realistic test data using either OpenMP, MPI or a hybrid OpenMP/MPI approach to exploit the benefits of shared memory inside the nodes of current SMP supercomputers.
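For reference, the baseline Power method that the Relaxed and Extrapolated variants accelerate is a few lines of linear algebra. This sketch uses a dense column-stochastic matrix and a crude renormalization for dangling nodes, whereas realistic PageRank codes (and the paper's parallel versions) operate on sparse data distributed with OpenMP/MPI.

```python
import numpy as np

def pagerank_power(A, alpha=0.85, tol=1e-10, max_iter=1000):
    """Baseline Power method for PageRank.

    A     : (n, n) column-stochastic link matrix, A[i, j] = 1/outdeg(j) if j links to i
    alpha : damping factor
    """
    n = A.shape[0]
    x = np.full(n, 1.0 / n)           # initial uniform rank vector
    v = np.full(n, 1.0 / n)           # teleportation vector
    for _ in range(max_iter):
        x_new = alpha * (A @ x) + (1.0 - alpha) * v
        x_new /= x_new.sum()          # renormalise (crude handling of dangling nodes)
        if np.abs(x_new - x).sum() < tol:
            break
        x = x_new
    return x_new
```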
Abstract:
A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to identify the number of pixels to be allocated to the multi-core CPU and the GPU, a performance analysis using large images is presented. A comparison of the parallel implementations on multi-core CPUs, GPUs and a combination of both is performed. Performance has been evaluated in terms of execution time and Megapixels/second. We present several optimization strategies that are especially effective for the multi-core environment, and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
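The heterogeneous part of such a scheme ultimately comes down to deciding how many rows each device processes. A minimal sketch of a static row split is shown below; the GPU fraction is a placeholder to be tuned empirically (as the paper's performance analysis does), and the one-row halo keeps 3x3 filter windows valid at the chunk boundary.

```python
def split_rows(img, gpu_fraction=0.8, halo=1):
    """Split an image's rows into a GPU chunk and a CPU chunk for joint filtering.

    img          : array-like image of shape (H, W, 3)
    gpu_fraction : tunable share of rows handed to the GPU (placeholder value)
    halo         : extra boundary rows in each chunk so 3x3 windows stay valid
    """
    H = img.shape[0]
    cut = int(round(H * gpu_fraction))
    gpu_chunk = img[:min(cut + halo, H)]   # rows processed on the GPU
    cpu_chunk = img[max(cut - halo, 0):]   # rows processed on the CPU cores
    return gpu_chunk, cpu_chunk, cut
```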
Abstract:
The nursing degree at the Universitat Jaume I (UJI) maintains continuity of learning through an integrated learning methodology (theory, simulated practice and clinical practice). The objective of this methodology is to achieve consistency between the knowledge, abilities and skills acquired in the classroom, the laboratory and the clinic, so as to ensure the related competences. The Reference Nurse is a key figure in this process: she or he receives accredited training in educational methods, competence assessment and Evidence-Based Practice, plays an evaluating role in conjunction with the course subjects, and receives no economic remuneration. The main objective of this study is to determine the level of satisfaction of clinical nurses with the Reference Nurse Training Program at the UJI (Castellón, Spain). A cross-sectional study was conducted on 150 nurses; 112 questionnaires were completed, collected and analysed at the end of the training. The survey consists of 12 items measured on a 5-level Likert scale and two open questions regarding the positive and negative aspects of the course and what should be added to this training. The training is always delivered by the same faculty, and four sessions were held in 2012. We performed a quantitative analysis of the variables under study using measures of central tendency. The completion rate of the survey was 95.53% (n = 107), with an anonymity rate of 54.14%. The overall satisfaction level with the training was 3.65 (SD = 0.89) out of 5 points. 54.2% (n = 58) of the reference nurses made a contribution in the open questions described in the overall results. The overall satisfaction level can be considered acceptable. It is considered necessary to develop a specific survey to detect areas for improvement in the Reference Nurse Training Program and future recruitment strategies. The main objective of the present work is the selection and integration of different methodologies, among those applicable within the framework of the European Higher Education Area, to combine teaching methods with a high degree of involvement from both lecturers and students.
Abstract:
Plane model extraction from three-dimensional point clouds is a necessary step in many different applications such as planar object reconstruction, indoor mapping and indoor localization. Different RANdom SAmple Consensus (RANSAC)-based methods have been proposed for this purpose in recent years. In this study, we propose a novel RANSAC-based method called Multiplane Model Estimation, which can estimate multiple plane models simultaneously from a noisy point cloud, using the knowledge extracted from a scene (or an object) in order to reconstruct it accurately. This method comprises two steps: first, it clusters the data into planar faces that preserve some constraints defined by knowledge related to the object (e.g., the angles between faces); and second, the models of the planes are estimated based on these data using a novel multi-constraint RANSAC. We performed experiments on the clustering and RANSAC stages, which showed that the proposed method performed better than state-of-the-art methods.
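As background, standard single-plane RANSAC, the baseline that the multi-constraint variant extends, can be sketched as follows. The sampling count and distance threshold are illustrative, and the paper's method additionally clusters points into faces and fits several constrained planes jointly.

```python
import numpy as np

def ransac_plane(points, n_iter=500, dist_thresh=0.01, rng=None):
    """Fit a single plane to an (N, 3) point cloud with vanilla RANSAC.

    Returns (normal, d), the plane defined by normal . p + d = 0,
    together with the boolean inlier mask of the best model found.
    """
    rng = rng if rng is not None else np.random.default_rng()
    best_model, best_inliers = None, None
    for _ in range(n_iter):
        # pick three distinct points and build the plane through them
        p0, p1, p2 = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(normal)
        if norm < 1e-12:
            continue                       # degenerate (collinear) sample, resample
        normal /= norm
        d = -normal @ p0
        inliers = np.abs(points @ normal + d) < dist_thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_model, best_inliers = (normal, d), inliers
    return best_model, best_inliers
```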
Abstract:
The sustainability strategy in urban spaces arises from reflecting on how to achieve a more habitable city and is materialized in a series of sustainable transformations aimed at humanizing different environments so that they can be used and enjoyed by everyone without exception and regardless of their ability. Modern communication technologies offer new opportunities to analyze the efficiency of use of urban spaces from several points of view: adequacy of facilities, usability, and social integration capabilities. The research presented in this paper proposes a method to perform an analysis of movement accessibility in sustainable cities based on radio frequency technologies and the ubiquitous computing possibilities of the new Internet of Things paradigm. The proposal can be deployed in both indoor and outdoor environments to check specific locations of a city. Finally, a case study in a controlled context has been simulated to validate the proposal as a pre-deployment step in urban environments.
Abstract:
In this work, we propose the use of the neural gas (NG), a neural network that uses an unsupervised Competitive Hebbian Learning (CHL) rule, to develop a reverse engineering process. This is a simple and accurate method to reconstruct objects from point clouds obtained from multiple overlapping views using low-cost sensors. In contrast to other methods that may need several stages, including downsampling, noise filtering and many other tasks, the NG automatically obtains the 3D model of the scanned objects. To demonstrate the validity of our proposal, we tested our method with several models and performed a study of the neural network parameterization, computing the quality of representation and comparing the results with other neural methods, such as the growing neural gas and Kohonen maps, and with classical methods such as Voxel Grid. We also reconstructed models acquired by low-cost sensors that can be used in virtual and augmented reality environments for redesign or manipulation purposes. Since the NG algorithm has a high computational cost, we propose its acceleration. We have redesigned and implemented the NG learning algorithm to fit it onto Graphics Processing Units using CUDA. A speed-up of 180× is obtained compared to the sequential CPU version.
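The cost that the CUDA port attacks comes from the NG learning rule itself: every sample requires ranking all nodes by distance and moving each of them, which is naturally parallel over nodes. A compact sequential CPU sketch of that rule is given below; the annealing schedules and node count are illustrative assumptions, not the parameterization studied in the paper.

```python
import numpy as np

def neural_gas_fit(points, n_nodes=500, n_iter=20000, rng=None):
    """Plain Neural Gas fit to an (N, 3) point cloud (sequential CPU sketch).

    Each sample ranks every node by distance and pulls it toward the sample with
    a weight decaying exponentially in the rank; this all-nodes update per sample
    is what makes NG expensive and well suited to GPU acceleration.
    """
    rng = rng if rng is not None else np.random.default_rng()
    nodes = points[rng.choice(len(points), size=n_nodes, replace=False)].astype(float)
    eps_i, eps_f = 0.5, 0.01               # illustrative step-size schedule
    lam_i, lam_f = n_nodes / 2.0, 0.1      # illustrative neighbourhood schedule
    for t in range(n_iter):
        frac = t / n_iter
        eps = eps_i * (eps_f / eps_i) ** frac
        lam = lam_i * (lam_f / lam_i) ** frac
        x = points[rng.integers(len(points))]
        order = np.argsort(((nodes - x) ** 2).sum(axis=1))
        ranks = np.empty(n_nodes)
        ranks[order] = np.arange(n_nodes)  # rank 0 = closest node
        nodes += (eps * np.exp(-ranks / lam))[:, None] * (x - nodes)
    return nodes
```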