818 results for LDPC, CUDA, GPGPU, computing, GPU, DVB, S2, SDR
Abstract:
The multi-relational data mining approach has emerged as an alternative for the analysis of structured data, such as relational databases. Unlike traditional algorithms, multi-relational proposals can mine multiple tables directly, avoiding costly join operations. This paper presents a comparative study between the traditional Patricia Mine algorithm and its proposed multi-relational counterpart, MR-Radix, in order to evaluate the performance of the two approaches when mining association rules from relational databases. This study makes two original contributions: the proposition of the multi-relational algorithm MR-Radix, which is efficient for relational databases both in execution time and in memory usage; and empirical evidence of the performance advantage of the multi-relational approach over several tables, since it avoids costly join operations across multiple tables. © 2011 IEEE.
Abstract:
Multi-relational data mining enables pattern mining from multiple tables. Existing multi-relational association rule mining algorithms are not able to process large volumes of data, because the amount of memory required exceeds the amount available. The proposed algorithm MR-Radix provides a framework that optimizes memory usage and uses partitioning to handle large volumes of data. The original contribution of this proposal is superior performance when compared to related algorithms, successfully completing the task of mining association rules in large databases while bypassing the problem of available memory. One of the tests showed that MR-Radix uses fourteen times less memory than GFP-growth. © 2011 IEEE.
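MR-Radix itself is not reproduced in the abstract, but the partitioning idea it relies on is the classic two-pass scheme: any globally frequent itemset must be locally frequent in at least one partition, so each partition can be mined within available memory and a single second pass verifies the candidates. A minimal sketch of that scheme (function names are illustrative, not from the paper):

```python
from itertools import combinations
from collections import Counter

def local_frequent(partition, min_frac):
    """Itemsets frequent within one partition (candidate generation)."""
    minsup = min_frac * len(partition)
    counts = Counter()
    for t in partition:
        for k in range(1, len(t) + 1):
            for iset in combinations(sorted(t), k):
                counts[iset] += 1
    return {i for i, c in counts.items() if c >= minsup}

def partitioned_frequent(transactions, n_parts, min_frac):
    """Two-pass partitioning: the union of locally frequent itemsets is a
    complete candidate set; a second pass counts exact global support."""
    size = (len(transactions) + n_parts - 1) // n_parts
    candidates = set()
    for i in range(0, len(transactions), size):   # pass 1: per partition
        candidates |= local_frequent(transactions[i:i + size], min_frac)
    minsup = min_frac * len(transactions)         # pass 2: global counts
    return {c for c in candidates
            if sum(set(c) <= t for t in transactions) >= minsup}

txns = [{"a", "b"}, {"a", "b", "c"}, {"a", "c"}, {"a"}]
print(partitioned_frequent(txns, 2, 0.75))  # → {('a',)}
```

Only each partition's counters need to fit in memory at once, which is the property the abstract credits for handling large databases.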
Abstract:
Aiming to ensure greater reliability and consistency of the data stored in a database, the data cleaning stage is set early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and adjusting the data for the later stages, especially data mining. Such problems occur at both the instance and schema levels, namely missing values, null values, duplicate tuples, and out-of-domain values, among others. Several algorithms have been developed to perform the cleaning step in databases; some of them work specifically with the phonetics of words, since a word can be written in different ways. Within this perspective, this work presents as its original contribution a multithreaded optimization of an algorithm for detecting duplicate tuples in databases through phonetics, requiring neither training data nor a dependency on the language being supported. © 2011 IEEE.
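The abstract does not name its phonetic encoder; a common choice for this task is Soundex. A minimal sketch, assuming Soundex bucketing plus a thread pool (names and structure are illustrative, not the paper's implementation):

```python
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def soundex(word):
    """Classic Soundex: first letter + 3 digits from consonant classes."""
    codes = {c: d for d, letters in enumerate(
        ("bfpv", "cgjkqsxz", "dt", "l", "mn", "r"), 1) for c in letters}
    word = word.lower()
    out, prev = word[0].upper(), codes.get(word[0])
    for c in word[1:]:
        d = codes.get(c)
        if d and d != prev:
            out += str(d)
        if c not in "hw":          # h/w do not reset the previous code
            prev = d
    return (out + "000")[:4]

def phonetic_duplicates(names, workers=4):
    """Encode names in parallel, bucket by phonetic code, then report
    buckets holding more than one spelling (likely duplicate tuples)."""
    buckets = defaultdict(set)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for name, code in zip(names, pool.map(soundex, names)):
            buckets[code].add(name)
    return [sorted(b) for b in buckets.values() if len(b) > 1]

print(phonetic_duplicates(["Smith", "Smyth", "Jones", "Jonas"]))
# → [['Smith', 'Smyth'], ['Jonas', 'Jones']]
```

Because each name is encoded independently, the encoding step parallelizes trivially across threads, which is the multithreading angle the abstract emphasizes.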
Abstract:
The development of new technologies that use peer-to-peer networks grows every day, driven by the need to share information, resources and database services around the world. Among them are peer-to-peer databases, which take advantage of peer-to-peer networks to manage distributed knowledge bases, allowing the sharing of information that is semantically related but syntactically heterogeneous. However, given the structural characteristics of these networks, it is a challenge to ensure efficient search for information without compromising the autonomy of each node or the flexibility of the network. On the other hand, some studies propose using ontology semantics to assign a standardized categorization to the information. The main original contribution of this work is to approach this problem with a proposal for query optimization supported by the Ant Colony algorithm and classification through ontologies. The results show that this strategy provides semantic support for searches in peer-to-peer databases, expanding the results without compromising network performance. © 2011 IEEE.
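The abstract does not detail its Ant Colony variant; the generic mechanism it builds on is pheromone-guided route selection with evaporation. A toy sketch of that mechanism for routing a query between peers (graph, parameters and function names are all hypothetical):

```python
import random

def ant_colony_route(graph, src, dst, ants=50, rounds=20, rho=0.3, seed=1):
    """Toy Ant Colony Optimization for query routing: ants walk from src
    to dst, shorter walks deposit more pheromone, and evaporation (rho)
    forgets stale paths. Returns the best path found."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}  # pheromone per edge
    best = None
    for _ in range(rounds):
        for _ in range(ants):
            path, node = [src], src
            while node != dst and len(path) <= len(graph):
                nxt = [v for v in graph[node] if v not in path]
                if not nxt:
                    break
                node = rng.choices(nxt, [tau[node, v] for v in nxt])[0]
                path.append(node)
            if node == dst:
                if best is None or len(path) < len(best):
                    best = path
                for e in zip(path, path[1:]):   # shorter paths deposit more
                    tau[e] += 1.0 / len(path)
        for e in tau:                            # evaporation
            tau[e] *= 1 - rho
    return best

net = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
print(ant_colony_route(net, "A", "D"))  # a 3-hop path such as ['A', 'B', 'D']
```

In the paper's setting the graph would be the peer overlay and the pheromone would bias queries toward peers whose ontological categories matched previous answers; that semantic layer is omitted here.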
Abstract:
Non-conventional database management systems are used to achieve better performance when dealing with complex data. One fundamental concept in these systems is object identity (OID): each object in the database has a unique identifier that is used to access it and to reference it in relationships with other objects. Two approaches can be used to implement OIDs: physical or logical OIDs. In order to manage complex data, the Multimedia Data Manager Kernel (NuGeM) was proposed, which uses a logical technique named Indirect Mapping. This paper proposes an improvement to the technique used by NuGeM, whose original contribution is the management of OIDs with fewer disc accesses and less processing, thus reducing page management time and eliminating the problem of OID exhaustion. The technique presented here can also be applied to other OODBMSs. © 2011 IEEE.
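The core of indirect (logical) mapping can be illustrated in a few lines: references carry an immutable OID, and a single table translates it to a physical location, so relocating an object or recycling a freed identifier never invalidates references. A minimal in-memory sketch (class and method names are illustrative, not NuGeM's API):

```python
class IndirectOIDMap:
    """Logical OIDs: a mapping table translates OID -> physical (page, slot).
    Relocation touches only the table entry, and freed OIDs are recycled
    so the identifier space is not exhausted."""
    def __init__(self):
        self._table = {}   # oid -> (page, slot)
        self._free = []    # recycled oids
        self._next = 0

    def insert(self, page, slot):
        oid = self._free.pop() if self._free else self._next
        if oid == self._next:
            self._next += 1
        self._table[oid] = (page, slot)
        return oid

    def locate(self, oid):
        return self._table[oid]          # single table lookup

    def relocate(self, oid, page, slot):
        self._table[oid] = (page, slot)  # references stay valid

    def delete(self, oid):
        del self._table[oid]
        self._free.append(oid)           # avoid OID exhaustion

m = IndirectOIDMap()
oid = m.insert(page=3, slot=0)
m.relocate(oid, page=7, slot=2)   # object moved during page reorganization
print(m.locate(oid))              # → (7, 2)
```

The trade-off versus physical OIDs is the extra lookup per access; the paper's contribution is reducing the disc accesses that this extra indirection costs.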
Abstract:
The significant number of work accidents in cities causes expressive losses to society. The development of Spatial Data Mining technologies presents a new perspective for the extraction of knowledge from the correlation between conventional and spatial attributes. One of the most important techniques in Spatial Data Mining is Spatial Clustering, which groups similar spatial objects to find a distribution of patterns, taking the geographical position of the objects into account. Applying this technique to the health area provides information that can contribute to the planning of more adequate strategies for the prevention of work accidents. The original contribution of this work is an application of tools developed for Spatial Clustering that supply a set of graphic resources which have helped to discover knowledge and to support management in the work accident area. © 2011 IEEE.
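The abstract does not name its clustering algorithm; a common density-based choice for geographic points is DBSCAN, which finds accident "hot spots" of arbitrary shape and marks isolated records as noise. A minimal sketch under that assumption:

```python
def dbscan(points, eps, min_pts):
    """Minimal DBSCAN: density-reachable points share a cluster id,
    sparse points are labeled noise (-1)."""
    def near(i):
        return [j for j, q in enumerate(points)
                if (points[i][0] - q[0]) ** 2 + (points[i][1] - q[1]) ** 2 <= eps * eps]
    labels, cid = [None] * len(points), 0
    for i in range(len(points)):
        if labels[i] is not None:
            continue
        seeds = near(i)
        if len(seeds) < min_pts:
            labels[i] = -1            # noise (may later become a border point)
            continue
        labels[i] = cid
        queue = [j for j in seeds if j != i]
        while queue:
            j = queue.pop()
            if labels[j] == -1:
                labels[j] = cid       # claimed as border point; do not expand
            if labels[j] is not None:
                continue
            labels[j] = cid
            nbrs = near(j)
            if len(nbrs) >= min_pts:  # core point: keep expanding the cluster
                queue += [k for k in nbrs if labels[k] is None]
        cid += 1
    return labels

# two dense accident hot-spots plus one isolated record
pts = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10), (50, 50)]
print(dbscan(pts, eps=2, min_pts=2))  # → [0, 0, 0, 1, 1, 1, -1]
```

Feeding the resulting labels to a map renderer gives exactly the kind of graphic resource the abstract describes.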
Abstract:
This paper introduces a methodology for predicting the surface roughness of advanced ceramics using an Adaptive Neuro-Fuzzy Inference System (ANFIS). To this end, a grinding machine was used, equipped with an acoustic emission sensor and a power transducer connected to the electric motor that rotates the diamond grinding wheel. The alumina workpieces used in this work were pressed and sintered into rectangular bars. Acoustic emission and cutting power signals were collected during the tests and digitally processed to calculate the mean, the standard deviation, and two other statistics. These statistics, as well as the root mean square of the acoustic emission and cutting power signals, were used as input data for the ANFIS. The surface roughness values measured during the tests were used for training and validation of the model. The results indicate that an ANFIS network is an excellent tool for predicting the surface roughness of ceramic workpieces in the grinding process.
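The ANFIS network itself is beyond a short sketch, but the signal statistics the abstract feeds into it are simple to state precisely. A minimal example of extracting mean, standard deviation and RMS from one acquisition window (the window values are invented):

```python
import math

def feature_vector(signal):
    """Mean, standard deviation and RMS of one acquisition window --
    the kind of statistics the paper feeds to the ANFIS model."""
    n = len(signal)
    mean = sum(signal) / n
    var = sum((x - mean) ** 2 for x in signal) / n
    rms = math.sqrt(sum(x * x for x in signal) / n)
    return mean, math.sqrt(var), rms

# toy acoustic-emission window
print(feature_vector([0.0, 1.0, 0.0, -1.0]))  # mean 0.0, std and RMS ≈ 0.7071
```

One such vector per grinding pass, for both the acoustic emission and the cutting power channels, forms an ANFIS input row; the measured roughness is the training target.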
Abstract:
The effect of snoring on the cardiovascular system is not well known. In this study we analyzed the differences in Heart Rate Variability (HRV) between light and heavy snorers. The experiments were carried out on full-night polysomnography (PSG) recordings with ECG and audio channels from a patient group (heavy snorers) and a control group (light snorers), gender- and age-matched, totaling 30 subjects. A Snoring Density (SND) feature of the audio signal was computed as the classification criterion, along with HRV features. A Mann-Whitney statistical test and Support Vector Machine (SVM) classification were performed to assess the correlation. The results show that snoring has a close impact on the HRV features, providing a deeper insight into the physiological understanding of snoring. © 2011 CCAL.
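The abstract does not list which HRV features it used; two standard time-domain ones are SDNN and RMSSD, computed from the RR-interval series extracted from the ECG. A minimal sketch (the RR values are invented):

```python
import math

def hrv_features(rr_ms):
    """Two standard time-domain HRV features from RR intervals (ms):
    SDNN (overall variability) and RMSSD (beat-to-beat variability)."""
    n = len(rr_ms)
    mean = sum(rr_ms) / n
    sdnn = math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / (n - 1))
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    rmssd = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return sdnn, rmssd

sdnn, rmssd = hrv_features([800, 810, 790, 820, 805])
print(round(sdnn, 2), round(rmssd, 2))  # → 11.18 20.16
```

Feature vectors of this kind, one per subject, are what a Mann-Whitney test or SVM classifier would then compare across the heavy- and light-snorer groups.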
Abstract:
This study aimed to evaluate the Y-TZP surface after different airborne-particle abrasion protocols. Seventy-six Y-TZP ceramic blocks (5×4×4 mm³) were sintered and polished. Specimens were randomly divided into 19 groups (n=4) according to a control group and 3 factors: a) protocol duration (2 and 4 s); b) particle size (30 μm alumina-coated silica particles; 45 μm alumina particles; and 145 μm alumina particles); and c) pressure (1.5, 2.5 and 4.5 bar). Airborne-particle abrasion was performed following a strict protocol. For qualitative and quantitative results, the surface topography was analyzed in a digital optical profilometer (interference microscope), using different roughness parameters (Ra, Rq, Rz, X-crossing, Mr1, Mr2 and Sdr) and 3D images. Surface roughness was also analyzed after primer and silane application on the Y-TZP surfaces. One-way ANOVA revealed that the treatments (application period, particle size and blasting pressure) produced significant differences for all roughness parameters. The Tukey test determined that the significant differences between groups varied among the roughness parameters. In the qualitative analysis, the bonding agent application reduced roughness, filling the valleys in the surface. The protocols performed in this study verified that application period, particle size and pressure influence the topographic pattern and the amplitude of roughness.
Abstract:
This paper proposes a simple and powerful architecture for the publication of, and universal access to, smart transducers through existing, established open standards. Smart transducers are put to work on standards and styles already present on the Web, exploiting Cloud Computing resources and simplifying access to the data. © 2012 IEEE.
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools like iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling of distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that best improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Comparisons between iSPD and Simgrid simulations, including runs of the simulated environment on a real cluster, are also presented. © 2012 IEEE.
Abstract:
This paper presents vectorized methods for the construction and descent of quadtrees that can be easily adapted to message-passing parallel computing. A time complexity analysis of the present approach is also discussed. The proposed method of tree construction requires a hash table to index the nodes of a linear quadtree in breadth-first order. The hash is performed in two steps: an internal hash to index child nodes and an external hash to index nodes at the same level (depth). The quadtree descent is performed by considering each level as a vector segment of the linear quadtree, so that nodes of the same level can be processed concurrently. © 2012 Springer-Verlag.
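The level-wise organization the abstract describes can be sketched in a few lines: store each depth of a region quadtree as a flat, hash-indexed collection so every node of one level is available for concurrent processing. This is an illustrative stand-in (plain dicts for the paper's two-step hash, 'GRAY' for internal nodes), not the authors' message-passing implementation:

```python
def build_levels(grid):
    """Level-wise (breadth-first) construction of a linear quadtree over a
    2^k x 2^k boolean grid. Each level is a flat dict keyed by the (row, col)
    of the quadrant, so all nodes of one level can be processed together."""
    size = len(grid)
    levels = [{(r, c): grid[r][c] for r in range(size) for c in range(size)}]
    while size > 1:
        prev, size = levels[-1], size // 2
        level = {}
        for r in range(size):
            for c in range(size):
                kids = [prev[2 * r + i, 2 * c + j] for i in (0, 1) for j in (0, 1)]
                # 'GRAY' marks an internal node whose four children differ
                level[r, c] = kids[0] if len(set(map(str, kids))) == 1 else "GRAY"
        levels.append(level)
    return levels[::-1]   # root level first

grid = [[0, 0, 1, 1],
        [0, 0, 1, 1],
        [0, 0, 0, 1],
        [0, 0, 0, 0]]
levels = build_levels(grid)
print(levels[0][0, 0])   # root: 'GRAY' (mixed quadrant)
print(levels[1][0, 1])   # NE quadrant at depth 1 is uniformly 1
```

Descent then walks `levels` from the root outward, visiting one dict per depth; since entries within a level are independent, each level can be split across processes, which is the property the paper exploits.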
Abstract:
Riemann surfaces, cohomology and homology groups, Cartan's spinors and triality, and octonionic projective geometry are all well supported by complex structures [1], [2], [3], [4]. Furthermore, in theoretical physics, mainly in general relativity, supersymmetry and particle physics, complex theory plays a key role [5], [6], [7], [8]. In this context it is expected that generalizations of concepts and main results from classical complex theory, like conformal and quasiconformal mappings [9], [10], to both quaternionic and octonionic algebras may be useful for other fields of research, such as graphical computing environments [11]. In this note, following recent works by the authors [12], [13], the Cauchy theorem is extended to octonions in a way analogous to what has recently been done for quaternions [14]. Finally, an octonionic treatment of the wave equation is given, describing a wave produced by a hyper-string with initial conditions similar to the one-dimensional case.
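For reference, the classical result being generalized is the Cauchy integral theorem; the note's contribution is a version of this statement with the complex variable replaced by an octonionic one (the octonionic formulation itself is in the paper, not reproduced here):

```latex
% Classical Cauchy integral theorem: for f holomorphic on a simply
% connected domain D and any closed contour \gamma contained in D,
\[
  \oint_{\gamma} f(z)\, \mathrm{d}z = 0 .
\]
```

In the quaternionic and octonionic settings the non-commutativity (and, for octonions, non-associativity) of the algebra forces a careful choice of which side the differential multiplies on, which is the technical point such extensions must address.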
Abstract:
Simulating large and complex systems, such as computing grids, is a difficult task. Current simulators, despite providing accurate results, are significantly hard to use. They usually demand strong programming knowledge, which is not a standard pattern among today's users of grids and high-performance computing. The need for computer expertise prevents these users from simulating how the environment will respond to their applications, which may result in a large loss of efficiency, wasting precious computational resources. In this paper we introduce iSPD, the iconic Simulator of Parallel and Distributed Systems, a simulator in which grid models are produced through an iconic interface. We describe the simulator and its intermediate model languages. The results presented here provide insight into its ease of use and accuracy.
Abstract:
This paper presents a new approach for damage detection in Structural Health Monitoring (SHM) systems based on the Electromechanical Impedance (EMI) principle and Autoregressive (AR) models. Typical applications of EMI in SHM are based on computing the Frequency Response Function (FRF). In this work the procedure is based on the EMI principle, but the results are determined through the coefficients of AR models, which are computed from the time response of PZT transducers bonded to the monitored structure and acting as actuators and sensors at the same time. The procedure consists of exciting the PZT transducers with a wide-band chirp signal and acquiring their time response. The AR models are obtained in both healthy and damaged conditions and used to compute statistical damage indices. Practical tests were carried out on an aluminum plate, and the results demonstrated the effectiveness of the proposed method. © 2012 IEEE.
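The abstract does not specify the AR order or the damage index; a minimal sketch of the idea, assuming an order-2 AR model fitted by least squares and a Euclidean distance between coefficient vectors as the index (both choices are illustrative, not the paper's):

```python
import math

def ar2_coeffs(x):
    """Least-squares fit of an order-2 AR model x[t] = a1*x[t-1] + a2*x[t-2]
    (the 2x2 normal equations are solved by Cramer's rule)."""
    s11 = sum(v * v for v in x[1:-1])
    s22 = sum(v * v for v in x[:-2])
    s12 = sum(a * b for a, b in zip(x[1:-1], x[:-2]))
    b1 = sum(a * b for a, b in zip(x[2:], x[1:-1]))
    b2 = sum(a * b for a, b in zip(x[2:], x[:-2]))
    det = s11 * s22 - s12 * s12
    return ((b1 * s22 - b2 * s12) / det, (b2 * s11 - b1 * s12) / det)

def damage_index(baseline, response):
    """Distance between AR coefficients of the healthy baseline and the
    current response -- one plausible statistical damage index."""
    return math.dist(ar2_coeffs(baseline), ar2_coeffs(response))

# toy transducer responses: damage shifts the dominant resonance frequency
healthy = [math.sin(0.30 * t) for t in range(200)]
shifted = [math.sin(0.35 * t) for t in range(200)]
print(damage_index(healthy, healthy) < damage_index(healthy, shifted))  # → True
```

The appeal of the time-domain route is visible even in this toy: a small frequency shift moves the AR coefficients, so thresholding the index flags damage without ever computing an FRF.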