872 results for Particle swarm optimization algorithm PSO
Abstract:
A simplified procedure for the preparation of immobilized beta-amylase from non-purified extract of fresh sweet potato tubers is established in this paper, using differently activated agarose supports. The beta-amylase glutaraldehyde derivative was the preparation with the best features, presenting improved temperature and pH stability and activity. The possibility of reusing the amylase was also shown: the immobilized enzyme remained fully active for five cycles of use. However, immobilization decreased the enzyme activity to around 15% of that of the free enzyme. This seems to be mainly due to diffusion limitations of the starch inside the pores of the biocatalyst particles. A fifteen-fold increase in the Km was noticed, while the decrease in Vmax was only 30% (10.1 U mg-1 protein and 7.03 U mg-1 protein for the free and immobilized preparations, respectively). © 2013 Elsevier Ltd.
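The reported kinetic shift can be made concrete with the Michaelis-Menten equation v = Vmax·S/(Km + S). A minimal sketch, assuming a hypothetical free-enzyme Km of 1.0 in arbitrary units (the abstract gives only the fifteen-fold ratio and the two Vmax values):

```python
# Minimal Michaelis-Menten comparison of free vs. immobilized beta-amylase.
# KM_FREE is a hypothetical placeholder; the abstract reports only the
# fifteen-fold Km increase and the two Vmax values.

def mm_rate(s, vmax, km):
    """Michaelis-Menten rate: v = Vmax * S / (Km + S)."""
    return vmax * s / (km + s)

KM_FREE = 1.0                        # hypothetical, arbitrary units
KM_IMMOB = 15 * KM_FREE              # fifteen-fold increase (from abstract)
VMAX_FREE, VMAX_IMMOB = 10.1, 7.03   # U mg-1 protein (from abstract)

for s in (0.5, 5.0, 50.0):           # increasing substrate concentration
    ratio = mm_rate(s, VMAX_IMMOB, KM_IMMOB) / mm_rate(s, VMAX_FREE, KM_FREE)
    print(f"S = {s:5.1f}: immobilized/free activity ratio = {ratio:.2f}")
```

At low substrate concentrations the ratio falls toward (7.03/10.1)/15 ≈ 0.05, illustrating how a large Km increase combined with a modest Vmax loss can produce the strong apparent activity drop reported above.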
Abstract:
Feature selection has been actively pursued in recent years, since finding the most discriminative set of features can enhance recognition rates and also make feature extraction faster. In this paper, we propose a new feature selection technique called Binary Cuckoo Search, which is based on the behavior of cuckoo birds. The experiments were carried out in the context of theft detection in power distribution systems, on two datasets obtained from a Brazilian electrical power company, and demonstrated the robustness of the proposed technique in comparison with several other nature-inspired optimization techniques. © 2013 IEEE.
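For illustration, here is a minimal sketch of a binary cuckoo search in the spirit described above: Lévy-flight steps are squashed through a sigmoid and thresholded to 0/1 feature masks. The operators, parameters, and the placeholder `evaluate` function are assumptions, not the authors' exact formulation:

```python
import numpy as np
from math import gamma, sin, pi

rng = np.random.default_rng(0)

def levy_step(shape, beta=1.5):
    """Levy-flight steps drawn via Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    return rng.normal(0, sigma, shape) / np.abs(rng.normal(0, 1, shape)) ** (1 / beta)

def binarize(x):
    """Sigmoid transfer function followed by stochastic thresholding to {0, 1}."""
    return (rng.random(x.shape) < 1 / (1 + np.exp(-x))).astype(int)

def binary_cuckoo_search(evaluate, n_features, n_nests=20, n_iter=100,
                         pa=0.25, alpha=0.1):
    """Minimize evaluate(mask) over binary feature-selection masks."""
    nests = rng.integers(0, 2, (n_nests, n_features))
    fitness = np.array([evaluate(n) for n in nests])
    best = nests[fitness.argmin()].copy()
    for _ in range(n_iter):
        # Levy flight biased toward the best nest, then re-binarized
        new = binarize(nests + alpha * levy_step(nests.shape) * (nests - best))
        new_fit = np.array([evaluate(n) for n in new])
        improved = new_fit < fitness
        nests[improved], fitness[improved] = new[improved], new_fit[improved]
        # a fraction pa of the worst nests is abandoned and rebuilt at random
        worst = fitness.argsort()[-max(1, int(pa * n_nests)):]
        nests[worst] = rng.integers(0, 2, (len(worst), n_features))
        fitness[worst] = np.array([evaluate(n) for n in nests[worst]])
        best = nests[fitness.argmin()].copy()
    return best
```

In practice, `evaluate` would train a classifier of choice on the columns selected by the mask and return a validation error to be minimized.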
Abstract:
Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational methodology to measure the maximum diameter of the tumor through the contrast-enhanced area of the lesions. Sixty-three computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. After that, we compared the algorithm's maximum-diameter findings for the target lesions against radiologist measurements. The computed maximum diameters are in good agreement with those obtained by radiologist evaluation, indicating that the algorithm was able to properly detect the tumor limits. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for small-sized tumors, whereas for large-sized tumors (diameter > 5 cm) agreement was within 1.0 cm. Differences between algorithm and radiologist measurements were small for small-sized tumors, with a trend to a small increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more correct evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
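The measurement step itself can be sketched simply: given a binary mask of the contrast-enhanced (viable) lesion on a slice, the maximum diameter is the largest pairwise distance between boundary pixels. This is an illustrative sketch only; the paper's segmentation of the contrasted area and its phantom-based calibration are not reproduced here:

```python
import numpy as np
from scipy import ndimage
from scipy.spatial.distance import pdist

def max_diameter_cm(mask, pixel_spacing_cm):
    """Largest distance between any two boundary pixels of the lesion mask."""
    boundary = mask & ~ndimage.binary_erosion(mask)    # one-pixel contour
    coords = np.argwhere(boundary) * pixel_spacing_cm  # convert to cm
    if len(coords) < 2:
        return 0.0
    return pdist(coords).max()  # O(n^2), fine for a single lesion contour

# Hypothetical usage: a circular 3 cm lesion on a 0.1 cm/pixel grid.
yy, xx = np.mgrid[:64, :64]
lesion = (yy - 32) ** 2 + (xx - 32) ** 2 <= 15 ** 2
print(f"{max_diameter_cm(lesion, 0.1):.2f} cm")  # ~3.0 cm
```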
Abstract:
Both Semi-Supervised Learning and Active Learning are techniques used when unlabeled data is abundant but the process of labeling it is expensive and/or time-consuming. In this paper, these two machine learning techniques are combined into a single nature-inspired method. It features particles walking on a network built from the data set, using a unique random-greedy rule to select neighbors to visit. The particles, which have both competitive and cooperative behavior, are created on the network as the result of label queries. They may be created as the algorithm executes, and only the nodes affected by the new particles have to be updated. Therefore, it saves execution time compared to traditional active learning frameworks, in which the learning algorithm has to be executed several times. The data items to be queried are selected based on information extracted from the temporal dynamics of the nodes and particles. Two different query rules are explored in this paper: one is based on query-by-uncertainty approaches and the other on the distribution of the data and the labeled nodes. Each of them may perform better than the other depending on the peculiarities of the data set. Experimental results on some real-world data sets are provided, and the proposed method outperforms the semi-supervised learning method from which it is derived on all of them.
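The random-greedy movement rule can be illustrated as follows. This is a minimal sketch under assumed ingredients typical of particle competition and cooperation models (per-node class domination levels, one particle per queried label); the exact rule and update equations in the paper may differ:

```python
import numpy as np

rng = np.random.default_rng(0)

# Minimal sketch of a random-greedy particle step. domination[v, c] is node
# v's domination level for class c; a particle of class c prefers neighbors
# already dominated by its own class (greedy) but keeps a uniform random
# component for exploration. Both the mixture and p_greedy are assumptions.

def random_greedy_step(neighbors, domination, particle_class, p_greedy=0.5):
    neighbors = np.asarray(neighbors)
    if rng.random() < p_greedy:
        weights = domination[neighbors, particle_class] + 1e-12  # greedy walk
    else:
        weights = np.ones(len(neighbors))                        # random walk
    return rng.choice(neighbors, p=weights / weights.sum())
```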
Abstract:
Concept drift, which refers to learning problems that are non-stationary over time, has increasing importance in machine learning and data mining. Many concept drift applications require fast response, which means an algorithm must always be (re)trained with the latest available data. But the process of data labeling is usually expensive and/or time-consuming compared to the acquisition of unlabeled data, so usually only a small fraction of the incoming data can be effectively labeled. Semi-supervised learning methods may help in this scenario, as they use both labeled and unlabeled data in the training process. However, most of them are based on the assumption that the data is static. Therefore, semi-supervised learning with concept drift is still an open and challenging task in machine learning. Recently, a particle competition and cooperation approach was developed to realize graph-based semi-supervised learning from static data. We have extended that approach to handle data streams and concept drift. The result is a passive algorithm using a single classifier, which naturally adapts to concept changes without any explicit drift detection mechanism. It has built-in mechanisms that provide a natural way of learning from new data, gradually "forgetting" older knowledge as older data items are no longer useful for the classification of newer data items. The proposed algorithm is applied to the KDD Cup 1999 network intrusion data, showing its effectiveness.
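The gradual-forgetting idea can be sketched as a simple decay of the per-node evidence; the decay factor and update rule below are illustrative assumptions, not the paper's exact formulation:

```python
FORGET = 0.95  # hypothetical decay factor per stream step

def stream_step(domination, node, particle_class, strength=1.0):
    """Decay all accumulated class evidence, then reinforce the visited node.

    domination[v, c] accumulates how strongly class c dominates node v;
    uniform decay makes old evidence fade, so the model passively tracks
    drift without an explicit change detector.
    """
    domination *= FORGET
    domination[node, particle_class] += strength
    return domination
```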
Abstract:
Wireless sensor networks (WSNs) are generally used to monitor hazardous events in inaccessible areas. Thus, on the one hand, it is preferable to adopt the minimum transmission power in order to extend the WSN lifetime as much as possible. On the other hand, it is crucial to guarantee that the transmitted data is correctly received by the other nodes. Trading off power optimization against reliability assurance has therefore become one of the most important concerns when dealing with modern systems based on WSNs. In this context, we present a transmission power self-optimization (TPSO) technique for WSNs. The TPSO technique consists of an algorithm able to guarantee connectivity as well as an equally high quality of service (QoS), concentrating on the WSN's efficiency (Ef) while optimizing the transmission power necessary for data communication. Thus, the main idea behind the proposed approach is to trade off the WSN's Ef against energy consumption in an environment with inherent noise. Experimental results with different types of noise and electromagnetic interference (EMI) demonstrate the effectiveness of the TPSO technique.
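The self-optimization loop can be sketched as a simple per-node feedback controller: power is nudged down while the measured efficiency stays above a target and up when it falls below. The target, step size, and power bounds are illustrative assumptions, not the paper's values:

```python
# Hypothetical radio limits and QoS target for the sketch.
P_MIN, P_MAX, STEP = -25.0, 0.0, 1.0  # dBm
EF_TARGET = 0.95                      # e.g., fraction of acknowledged packets

def adjust_power(power_dbm, ef_measured):
    """One control step of the transmission-power feedback loop."""
    if ef_measured >= EF_TARGET and power_dbm > P_MIN:
        return power_dbm - STEP   # QoS satisfied: save energy
    if ef_measured < EF_TARGET and power_dbm < P_MAX:
        return power_dbm + STEP   # QoS violated: restore reliability
    return power_dbm
```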
Abstract:
This paper describes a new methodology for urban traffic stream optimization. By using Petri net analysis as the fitness function of a Genetic Algorithm, an entire urban road network is controlled in real time. With the advent of new technologies, particularly those focusing on communication among vehicles and road infrastructure, we consider that vehicles can provide their positions and destinations to a central server, so that it is able to calculate the best route for each of them. Our tests concentrate on comparisons between the proposed approach and other algorithms currently used for the same purpose, making it possible to conclude that our algorithm optimizes traffic in a relevant manner.
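A minimal skeleton of such a search might look as follows; the toy fitness below merely stands in for the paper's Petri net analysis (which would evaluate a timed Petri net model of the road network and return a delay measure to minimize), and all names and parameters are illustrative:

```python
import random

random.seed(0)
N_INTERSECTIONS, POP, GENS = 8, 30, 50
GREEN_RANGE = (10, 90)  # hypothetical green-time bounds per intersection (s)
DEMAND = [random.random() for _ in range(N_INTERSECTIONS)]  # toy demand levels

def toy_fitness(plan):
    """Toy stand-in: score a green-time plan against demand-proportional
    targets (lower is better). The paper would instead analyze a Petri net
    model of the road network here."""
    targets = (GREEN_RANGE[0] + d * (GREEN_RANGE[1] - GREEN_RANGE[0])
               for d in DEMAND)
    return sum((g - t) ** 2 for g, t in zip(plan, targets))

def evolve(fitness=toy_fitness):
    pop = [[random.randint(*GREEN_RANGE) for _ in range(N_INTERSECTIONS)]
           for _ in range(POP)]
    for _ in range(GENS):
        pop.sort(key=fitness)            # elitist selection: keep best half
        elite = pop[:POP // 2]
        children = []
        while len(elite) + len(children) < POP:
            a, b = random.sample(elite, 2)
            cut = random.randrange(1, N_INTERSECTIONS)   # one-point crossover
            child = a[:cut] + b[cut:]
            child[random.randrange(N_INTERSECTIONS)] = \
                random.randint(*GREEN_RANGE)             # point mutation
            children.append(child)
        pop = elite + children
    return min(pop, key=fitness)

print(evolve())
```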
Abstract:
A self-learning simulated annealing algorithm is developed by combining the characteristics of simulated annealing and domain elimination methods. The algorithm is validated using a standard mathematical test function and by optimizing the end region of a practical power transformer. The numerical results show that the CPU time required by the proposed method is about one third of that of the conventional simulated annealing algorithm.
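The combination can be sketched as an ordinary annealing loop whose search box is recentred and shrunk around the incumbent best point after each cooling stage, so later stages explore a smaller domain. The shrink factor, schedule, and the sphere test function are illustrative assumptions:

```python
import math
import random

random.seed(0)

def sphere(x):
    """Standard test function: global minimum 0 at the origin."""
    return sum(v * v for v in x)

def sa_domain_elimination(f, lo, hi, dim=2, t0=1.0, cool=0.9,
                          stages=40, moves=50, shrink=0.85):
    lo, hi = [lo] * dim, [hi] * dim
    x = [random.uniform(l, h) for l, h in zip(lo, hi)]
    fx = f(x)
    best, fbest, t = x[:], fx, t0
    for _ in range(stages):
        for _ in range(moves):
            # Metropolis move, scaled to temperature and current box size
            y = [min(h, max(l, xi + random.gauss(0.0, t * (h - l))))
                 for xi, l, h in zip(x, lo, hi)]
            fy = f(y)
            if fy < fx or random.random() < math.exp((fx - fy) / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = x[:], fx
        # domain elimination: shrink the box around the best point found
        half = [shrink * (h - l) / 2.0 for l, h in zip(lo, hi)]
        lo = [max(l, b - w) for l, b, w in zip(lo, best, half)]
        hi = [min(h, b + w) for h, b, w in zip(hi, best, half)]
        t *= cool
    return best, fbest

print(sa_domain_elimination(sphere, -5.0, 5.0))
```

Because later stages sample a progressively smaller box, fewer moves are wasted far from the optimum, which is the intuition behind the reported CPU-time savings.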