819 results for rule-based algorithms
Abstract:
A significant amount of information stored in databases around the world can be shared through peer-to-peer databases. This yields a large knowledge base without the need for large investments, since existing databases and the infrastructure already in place are reused. However, the structural characteristics of peer-to-peer networks make the process of finding such information complex. In addition, these databases are often heterogeneous in their schemas but semantically similar in their content. A good peer-to-peer database system should allow the user to access information from databases scattered across the network and receive only the information truly related to the topic of interest. This paper proposes the use of ontologies in peer-to-peer database queries to represent the semantics inherent to the data. The main contributions of this work are enabling integration between heterogeneous databases, improving the performance of such queries, and using the Ant Colony Optimization algorithm to solve the problem of locating information in peer-to-peer networks, which improves results by 18%. © 2011 IEEE.
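A minimal sketch of how ant-colony-based lookup in a peer-to-peer overlay might work, assuming a simple pheromone-biased random walk; the graph, parameters, and update rule below are illustrative and not the paper's exact formulation.

```python
# Pheromone-biased random-walk lookup over a peer graph (illustrative assumptions).
import random

def aco_search(graph, source, has_resource, n_ants=20, n_iters=30,
               evaporation=0.1, deposit=1.0):
    """graph: dict peer -> list of neighbour peers; has_resource: predicate on a peer."""
    pheromone = {(u, v): 1.0 for u in graph for v in graph[u]}
    best_path = None
    for _ in range(n_iters):
        for _ in range(n_ants):
            node, path, visited = source, [source], {source}
            while not has_resource(node) and len(path) < len(graph):
                candidates = [v for v in graph[node] if v not in visited]
                if not candidates:
                    break
                weights = [pheromone[(node, v)] for v in candidates]
                node = random.choices(candidates, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if has_resource(node) and (best_path is None or len(path) < len(best_path)):
                best_path = path
        # Evaporate pheromone everywhere, then reinforce edges on the best path so far.
        for edge in pheromone:
            pheromone[edge] *= (1.0 - evaporation)
        if best_path:
            for u, v in zip(best_path, best_path[1:]):
                pheromone[(u, v)] += deposit / len(best_path)
    return best_path

# Example: peer 'E' holds the requested data.
peers = {'A': ['B', 'C'], 'B': ['A', 'D'], 'C': ['A', 'D'],
         'D': ['B', 'C', 'E'], 'E': ['D']}
print(aco_search(peers, 'A', lambda p: p == 'E'))
```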
Abstract:
Aiming to ensure greater reliability and consistency of the data stored in a database, the data cleaning stage takes place early in the process of Knowledge Discovery in Databases (KDD) and is responsible for eliminating problems and adjusting the data for the later stages, especially for the data mining stage. Such problems occur at the instance and schema levels, namely missing values, null values, duplicate tuples, and values outside the domain, among others. Several algorithms have been developed to perform the cleaning step in databases; some of them were developed specifically to work with the phonetics of words, since a word can be written in different ways. Within this perspective, this work presents as its original contribution an optimized algorithm for detecting duplicate tuples in databases through phonetics, based on multithreading, which requires no training data and is independent of the language being supported. © 2011 IEEE.
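A rough sketch of multithreaded phonetic duplicate detection; Soundex is used here only as a stand-in phonetic key, since the abstract does not specify the paper's encoding or matching rules.

```python
# Group tuples whose phonetic keys collide; keys are computed in a thread pool.
from concurrent.futures import ThreadPoolExecutor
from collections import defaultdict

def soundex(word):
    """Simplified Soundex code: first letter plus up to three digit classes."""
    codes = {**dict.fromkeys("bfpv", "1"), **dict.fromkeys("cgjkqsxz", "2"),
             **dict.fromkeys("dt", "3"), "l": "4",
             **dict.fromkeys("mn", "5"), "r": "6"}
    word = word.lower()
    encoded, prev = word[0].upper(), codes.get(word[0], "")
    for ch in word[1:]:
        code = codes.get(ch, "")
        if code and code != prev:
            encoded += code
        prev = code
    return (encoded + "000")[:4]

def phonetic_key(tuple_values):
    return " ".join(soundex(v) for v in tuple_values if v)

def find_duplicates(tuples, n_threads=4):
    with ThreadPoolExecutor(max_workers=n_threads) as pool:
        keys = list(pool.map(phonetic_key, tuples))
    groups = defaultdict(list)
    for row, key in zip(tuples, keys):
        groups[key].append(row)
    return [rows for rows in groups.values() if len(rows) > 1]

rows = [("Jon", "Smith"), ("John", "Smyth"), ("Mary", "Jones")]
print(find_duplicates(rows))
```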
Abstract:
The use of QoS parameters to evaluate the quality of service in a mesh network is essential, mainly when providing multimedia services. This paper proposes an algorithm for planning wireless mesh networks that satisfies a set of QoS parameters, given a set of test points (TPs) and potential access points (APs). Examples of QoS parameters include the probability of packet loss and the mean delay in responding to a request. The proposed algorithm uses a Mathematical Programming model to determine an adequate topology for the network and Monte Carlo simulation to verify whether the QoS parameters are satisfied. The results obtained show that the proposed algorithm is able to find satisfactory solutions.
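A hedged sketch of the Monte Carlo verification step: estimating the packet-loss probability at one access point under an assumed Poisson traffic model. The traffic model, AP capacity, and QoS threshold are illustrative assumptions, not the paper's formulation.

```python
# Monte Carlo estimate of packet-loss probability for one candidate AP.
import math
import random

def estimate_packet_loss(ap_capacity, arrival_rate, n_trials=10_000):
    """Fraction of simulated slots in which the offered load exceeds AP capacity."""
    losses = 0
    for _ in range(n_trials):
        # Poisson-distributed number of packet arrivals in one slot (Knuth's method).
        threshold, k, p = math.exp(-arrival_rate), 0, 1.0
        while p > threshold:
            k += 1
            p *= random.random()
        if (k - 1) > ap_capacity:
            losses += 1
    return losses / n_trials

# A candidate placement is accepted only if the estimated loss probability
# stays below the QoS target (5% here, purely illustrative).
loss = estimate_packet_loss(ap_capacity=12, arrival_rate=8.0)
print(f"estimated packet-loss probability: {loss:.3f}",
      "OK" if loss < 0.05 else "violated")
```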
Abstract:
Wireless Sensor Networks (WSN) are a special kind of ad-hoc network, usually deployed in a monitoring field in order to detect some physical phenomenon. Due to the low dependability of individual nodes, small radio coverage, and large areas to be monitored, the organization of nodes in small clusters is generally used. Moreover, a large number of WSN nodes is usually deployed in the monitoring area to increase WSN dependability. Therefore, the best cluster head positioning is a desirable characteristic in a WSN. In this paper, we propose a hybrid clustering algorithm based on community detection in complex networks and the traditional K-means clustering technique: the QK-Means algorithm. Simulation results show that QK-Means detects communities and sub-communities; thus, the lost-message rate is decreased and WSN coverage is increased. © 2012 IEEE.
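A minimal sketch of the K-means half of QK-Means, placing cluster heads at the centroids of sensor positions; the modularity-based community-detection stage of the actual algorithm is omitted and all values are illustrative.

```python
# Plain K-means over 2-D sensor coordinates; centroids act as cluster-head positions.
import math
import random

def kmeans(points, k, n_iters=50):
    centroids = random.sample(points, k)
    for _ in range(n_iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: math.dist(p, centroids[c]))
            clusters[i].append(p)
        for i, members in enumerate(clusters):
            if members:
                centroids[i] = (sum(x for x, _ in members) / len(members),
                                sum(y for _, y in members) / len(members))
    return centroids

# Sensor nodes scattered over a 100 m x 100 m field; 3 candidate cluster heads.
nodes = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(60)]
print(kmeans(nodes, k=3))
```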
Abstract:
Increased accessibility to high-performance computing resources has created a demand for user support through performance evaluation tools like the iSPD (iconic Simulator for Parallel and Distributed systems), a simulator based on iconic modelling for distributed environments such as computer grids. It was developed to make it easier for general users to create their grid models, including allocation and scheduling algorithms. This paper describes how schedulers are managed by iSPD and how users can easily adopt the scheduling policy that improves the system being simulated. A thorough description of iSPD is given, detailing its scheduler manager. Some comparisons between iSPD and Simgrid simulations, including runs of the simulated environment in a real cluster, are also presented. © 2012 IEEE.
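iSPD itself is a Java tool with its own iconic modelling interface; the sketch below only illustrates, in Python, the kind of list-scheduling policy a user might model with it. The task sizes and machine powers are hypothetical.

```python
# Greedy list scheduling: dispatch each task to the machine with the smallest
# predicted completion time (largest tasks first).
def schedule(tasks, machines):
    """tasks: list of (name, size); machines: dict name -> processing power."""
    finish_time = {m: 0.0 for m in machines}
    plan = []
    for name, size in sorted(tasks, key=lambda t: -t[1]):
        best = min(machines, key=lambda m: finish_time[m] + size / machines[m])
        finish_time[best] += size / machines[best]
        plan.append((name, best))
    return plan, max(finish_time.values())

tasks = [("t1", 400.0), ("t2", 250.0), ("t3", 100.0), ("t4", 300.0)]
machines = {"node-a": 100.0, "node-b": 60.0}
print(schedule(tasks, machines))
```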
Abstract:
Dental recognition is very important for forensic human identification, mainly in mass disasters, which have frequently happened due to tsunamis, airplane crashes, etc. Algorithms for automatic, precise, and robust teeth segmentation from radiograph images are crucial for dental recognition. In this work we propose the use of a graph-based algorithm to extract teeth contours from panoramic dental radiographs, which are then used as dental features. In order to assess our proposal, we carried out experiments using a database of 1126 tooth images, obtained from 40 panoramic dental radiograph images from 20 individuals. The results of the graph-based algorithm were qualitatively assessed by a human expert, who reported excellent scores. For dental recognition we propose the use of the teeth shapes as biometric features, by means of BAS (Beam Angle Statistics) and Shape Context descriptors. The BAS descriptors showed, on the same database, a better performance (EER of 14%) than the Shape Context (EER of 20%). © 2012 IEEE.
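A simplified sketch of a Beam-Angle-Statistics-style contour descriptor; the offsets, the statistic kept per point, and the toy contour are assumptions, and the paper's exact BAS formulation and matching scheme may differ.

```python
# Mean beam angle at each contour point, over a few neighbour offsets.
import math

def bas_descriptor(contour, offsets=(3, 6, 9)):
    n = len(contour)
    descriptor = []
    for i, (px, py) in enumerate(contour):
        angles = []
        for k in offsets:
            ax, ay = contour[(i - k) % n]
            bx, by = contour[(i + k) % n]
            diff = abs(math.atan2(ay - py, ax - px) - math.atan2(by - py, bx - px))
            angles.append(min(diff, 2 * math.pi - diff))   # interior beam angle
        descriptor.append(sum(angles) / len(angles))
    return descriptor

# Toy contour: a circle sampled at 40 points stands in for a tooth contour.
contour = [(math.cos(2 * math.pi * i / 40), math.sin(2 * math.pi * i / 40))
           for i in range(40)]
print([round(a, 2) for a in bas_descriptor(contour)[:5]])
```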
Abstract:
This article presents a new method to detect damage in structures based on the electromechanical impedance principle. The system follows the variations in the output voltage of piezoelectric transducers and does not compute the impedance itself. The proposed system is portable, autonomous, and versatile, and could efficiently replace commercial instruments in different structural health monitoring applications. The identification of damage is performed by simply comparing the variations in the root mean square voltage of the response signals of piezoelectric transducers, such as lead zirconate titanate patches bonded to the structure, obtained for different frequencies of the excitation signal. The proposed system is not limited by the sampling rate of analog-to-digital converters, dispenses with Fourier transform algorithms, and does not require a computer for processing, operating autonomously. A low-cost prototype based on a microcontroller and a digital synthesizer was built, experiments were carried out on an aluminum structure, and excellent results were obtained. © The Author(s) 2012.
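A small sketch of the damage-metric idea: the RMS output voltage of the transducer response at each excitation frequency is compared with a healthy baseline. The deviation index and threshold used here are illustrative, not the authors' exact metric.

```python
# Compare per-frequency RMS voltages of the PZT response against a healthy baseline.
import math

def rms(samples):
    """Root-mean-square value of a sampled response signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def damage_index(baseline_rms, current_rms):
    """Relative RMS deviation accumulated over all excitation frequencies."""
    return math.sqrt(sum(((c - b) / b) ** 2 for b, c in zip(baseline_rms, current_rms))
                     / len(baseline_rms))

# RMS voltages at two excitation frequencies (volts), healthy vs. current state.
baseline = [rms(sig) for sig in ([0.80, -0.80, 0.81, -0.79], [0.90, -0.91, 0.90, -0.89])]
current  = [rms(sig) for sig in ([0.70, -0.71, 0.70, -0.70], [0.90, -0.90, 0.91, -0.89])]
index = damage_index(baseline, current)
print(f"damage index = {index:.3f}",
      "-> possible damage" if index > 0.05 else "-> healthy")
```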
Abstract:
Community ecology seeks to understand and predict the characteristics of communities that can develop under different environmental conditions, but most theory has been built on analytical models that are limited in the diversity of species traits that can be considered simultaneously. We address that limitation with an individual-based model to simulate assembly of fish communities characterized by life history and trophic interactions with multiple physiological tradeoffs as constraints on species performance. Simulation experiments were carried out to evaluate the distribution of 6 life history and 4 feeding traits along gradients of resource productivity and prey accessibility. These experiments revealed that traits differ greatly in importance for species sorting along the gradients. Body growth rate emerged as a key factor distinguishing community types and defining patterns of community stability and coexistence, followed by egg size and maximum body size. Dominance by fast-growing, relatively large, and fecund species occurred more frequently in cases where functional responses were saturated (i.e. high productivity and/or prey accessibility). Such dominance was associated with large biomass fluctuations and priority effects, which prevented richness from increasing with productivity and may have limited selection on secondary traits, such as spawning strategies and relative size at maturation. Our results illustrate that the distribution of species traits and the consequences for community dynamics are intimately linked and strictly dependent on how the benefits and costs of these traits are balanced across different conditions. © 2012 Elsevier B.V.
Abstract:
This paper tackles a Nurse Scheduling Problem, which consists of generating work schedules for a set of nurses while considering their shift preferences and other requirements. The objective is to maximize the satisfaction of nurses' preferences and minimize the violation of soft constraints. This paper presents a new deterministic heuristic algorithm, called MAPA (multi-assignment problem-based algorithm), which is based on successive resolutions of the assignment problem. The algorithm has two phases: a constructive phase and an improvement phase. The constructive phase builds a full schedule by solving successive assignment problems, one for each day in the planning period. The improvement phase uses a couple of procedures that re-solve assignment problems to produce a better schedule. Given the deterministic nature of this algorithm, the same schedule is obtained each time the algorithm is applied to the same problem instance. The performance of MAPA is benchmarked against published results for almost 250,000 instances from the NSPLib dataset. In most cases, particularly on large instances of the problem, the results produced by MAPA are better than the best-known solutions from the literature. The experiments reported here also show that the MAPA algorithm finds more feasible solutions than other algorithms in the literature, which suggests that the proposed approach is effective and robust. © 2013 Springer Science+Business Media New York.
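A sketch of the constructive idea behind MAPA: treat one day as an assignment problem over a nurse-by-shift preference cost matrix and solve it exactly. The costs below are hypothetical and omit coverage requirements and soft-constraint penalties.

```python
# Solve one day's nurse-to-shift assignment with the Hungarian algorithm.
import numpy as np
from scipy.optimize import linear_sum_assignment

# Preference cost of each nurse for each shift on one day (lower = preferred).
# Columns: early, late, night, day off.
preferences = np.array([
    [1, 3, 4, 2],   # nurse 0
    [2, 1, 4, 3],   # nurse 1
    [4, 2, 1, 3],   # nurse 2
    [3, 4, 2, 1],   # nurse 3
])

rows, cols = linear_sum_assignment(preferences)
for nurse, shift in zip(rows, cols):
    print(f"nurse {nurse} -> shift {shift} (cost {preferences[nurse, shift]})")
print("total dissatisfaction:", preferences[rows, cols].sum())
```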
Abstract:
Image restoration is a research field that attempts to recover a blurred and noisy image. Since the problem can be modeled as a linear system, we propose in this paper to use the meta-heuristic optimization algorithm Harmony Search (HS) to find near-optimal solutions in a Projections Onto Convex Sets-based formulation. The experiments using HS and four of its variants have shown that we can obtain near-optimal and faster restored images than other evolutionary optimization approaches. © 2013 IEEE.
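A generic Harmony Search sketch minimising a toy objective; in the paper's setting the objective would score a POCS-based restoration, which is not reproduced here, and the HMCR/PAR/bandwidth values are illustrative.

```python
# Basic Harmony Search: memory consideration, pitch adjustment, random selection.
import random

def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                   bandwidth=0.05, n_iters=2000):
    lo, hi = bounds
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(n_iters):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                     # take a value from memory
                value = random.choice(memory)[d]
                if random.random() < par:                  # pitch adjustment
                    value += random.uniform(-bandwidth, bandwidth) * (hi - lo)
            else:                                          # random re-initialisation
                value = random.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if score < scores[worst]:                          # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

# Toy objective: sphere function, optimum at the origin.
print(harmony_search(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10)))
```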
Abstract:
This article presents an application of a method for the simultaneous spectrophotometric determination of divalent copper, manganese, and zinc ions to the analysis of a polyvitamin/polymineral medicine. The method uses 4-(2-pyridylazo)resorcinol (PAR), multivariate calibration, and variable selection techniques, and was optimized using the successive projections algorithm (APS) and the genetic algorithm (AG) to choose the most informative wavelengths for the analysis. With these techniques, it was possible to build calibration models by multiple linear regression (RLM-APS and RLM-AG). The results obtained were compared with principal component regression (PCR) and partial least squares (PLS) models. Based on the root mean square error of prediction (RMSEP), the models show similar performance in predicting the concentrations of the three analytes in the medicine. However, the RLM models are simpler, since they require a much smaller number of wavelengths, and are easier to interpret than those based on latent variables.
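A sketch of the comparison criterion: fit a multiple linear regression on a few selected wavelengths and report the RMSEP on a prediction set. The spectra and the selected wavelength indices are synthetic placeholders standing in for APS/AG output.

```python
# MLR on selected wavelengths, evaluated by RMSEP on a held-out prediction set.
import numpy as np

rng = np.random.default_rng(0)
n_cal, n_pred, n_wavelengths = 30, 10, 200
true_coef = np.zeros(n_wavelengths)
true_coef[[12, 57, 140]] = [0.8, -0.5, 1.2]          # informative wavelengths

X_cal = rng.normal(size=(n_cal, n_wavelengths))
X_pred = rng.normal(size=(n_pred, n_wavelengths))
y_cal = X_cal @ true_coef + rng.normal(scale=0.05, size=n_cal)
y_pred = X_pred @ true_coef + rng.normal(scale=0.05, size=n_pred)

selected = [12, 57, 140]                              # stand-in for APS/AG selection
coef, *_ = np.linalg.lstsq(X_cal[:, selected], y_cal, rcond=None)
residuals = y_pred - X_pred[:, selected] @ coef
rmsep = np.sqrt(np.mean(residuals ** 2))
print(f"RMSEP of the MLR model on {len(selected)} wavelengths: {rmsep:.4f}")
```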
Abstract:
We have developed an algorithm that uses a Design of Experiments technique to reduce the search space in global optimization problems. Our approach is called the Domain Optimization Algorithm. This approach can efficiently eliminate search-space regions with a low probability of containing a global optimum. The Domain Optimization Algorithm is based on eliminating non-promising search-space regions, which are identified using simple (linear) models fitted to the data. Then, we run a global optimization algorithm with its population initialized inside the promising region. The proposed approach, with this heuristic criterion for population initialization, has shown relevant results in tests on hard benchmark functions.
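A rough sketch of the domain-reduction idea, assuming a simple random design and a per-dimension linear fit to decide which part of the box to keep; the design, shrink factor, and stopping rule are assumptions rather than the paper's exact procedure.

```python
# Shrink the search box toward the side a linear model predicts as lower (minimisation).
import random

def shrink_domain(objective, bounds, n_samples=40, keep=0.6):
    samples = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_samples)]
    values = [objective(x) for x in samples]
    new_bounds = []
    for d, (lo, hi) in enumerate(bounds):
        # Fit y = a*x_d + b by least squares to estimate the promising direction.
        xs = [x[d] for x in samples]
        mean_x, mean_y = sum(xs) / len(xs), sum(values) / len(values)
        slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, values))
                 / sum((x - mean_x) ** 2 for x in xs))
        width = (hi - lo) * keep
        if slope > 0:          # objective grows with x_d: keep the lower part
            new_bounds.append((lo, lo + width))
        else:                  # objective shrinks with x_d: keep the upper part
            new_bounds.append((hi - width, hi))
    return new_bounds

# Toy use: shrink the domain for a shifted sphere, then report the promising region
# in which a global optimizer's population would be initialized.
objective = lambda x: sum((v - 3.0) ** 2 for v in x)
bounds = [(-10.0, 10.0)] * 4
for _ in range(3):
    bounds = shrink_domain(objective, bounds)
print("promising region:", [(round(lo, 2), round(hi, 2)) for lo, hi in bounds])
```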