866 results for the Fuzzy Colour Segmentation Algorithm


Relevance: 100.00%

Abstract:

Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
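
The abstract does not reproduce the solver itself; as a point of reference, a minimal preconditioned conjugate gradient (CG) loop for a sparse symmetric positive definite forward-problem system K v = f is sketched below. The Jacobi preconditioner and the toy matrix are our own placeholders, not the recycling solver or the preconditioner proposed in the paper.

```python
# Minimal preconditioned CG for a sparse SPD system K v = f, as a baseline
# against which a recycling solver would be compared. Illustrative only.
import numpy as np
import scipy.sparse as sp

def preconditioned_cg(K, f, M_inv, tol=1e-8, max_iter=500):
    """Solve K v = f, where M_inv(r) applies an approximate inverse preconditioner."""
    v = np.zeros_like(f)
    r = f - K @ v
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Kp = K @ p
        alpha = rz / (p @ Kp)
        v += alpha * p
        r -= alpha * Kp
        if np.linalg.norm(r) <= tol * np.linalg.norm(f):
            break
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return v

# Toy SPD system standing in for the FEM admittivity matrix of one current pattern.
n = 2000
K = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
f = np.ones(n)
diag = K.diagonal()
v = preconditioned_cg(K, f, lambda r: r / diag)   # Jacobi preconditioner
print(np.linalg.norm(K @ v - f))
```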

Relevance: 100.00%

Abstract:

Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased toward the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics - an RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model - was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across 28 sites where painted sensors were installed, and the degree-of-agreement kappa (κ) statistic between the model and the painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the κ statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a smaller mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after the model was calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation in comparison with physical models under a wider range of climate conditions. (C) 2010 Elsevier B.V. All rights reserved.
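
The agreement measures reported here are standard; a small sketch of how the fraction of correct estimates and the kappa statistic could be computed from hourly wet/dry model estimates and sensor readings follows. The variable names and toy data are ours, not the study's.

```python
# Fraction correct and Cohen's kappa between hourly wet/dry model estimates
# and leaf wetness sensor readings (1 = wet, 0 = dry). Toy data for illustration.
import numpy as np

def agreement(model, sensor):
    model, sensor = np.asarray(model), np.asarray(sensor)
    frac_correct = np.mean(model == sensor)
    # Expected agreement by chance, from the marginal wet/dry frequencies.
    p_wet_m, p_wet_s = model.mean(), sensor.mean()
    p_chance = p_wet_m * p_wet_s + (1 - p_wet_m) * (1 - p_wet_s)
    kappa = (frac_correct - p_chance) / (1 - p_chance)
    return frac_correct, kappa

model  = [0, 0, 1, 1, 1, 1, 0, 0, 0, 1, 1, 0]
sensor = [0, 0, 1, 1, 1, 0, 0, 0, 1, 1, 1, 0]
fc, k = agreement(model, sensor)
print(f"fraction correct = {fc:.2f}, kappa = {k:.2f}")
```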

Relevance: 100.00%

Abstract:

We discuss the expectation propagation (EP) algorithm for approximate Bayesian inference using a factorizing posterior approximation. For neural network models, we use a central limit theorem argument to make EP tractable when the number of parameters is large. For two types of models, we show that EP can achieve optimal generalization performance when data are drawn from a simple distribution.
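
The central-limit argument amounts to treating the net input of a unit with many inputs as approximately Gaussian under a factorizing posterior over the weights, so only its first two moments need to be tracked. A minimal sketch of those moments (notation and data are our own illustration) is given below.

```python
# Under a factorizing posterior with means m_i and variances v_i for the
# weights, the net input h = sum_i w_i x_i of a unit with many inputs is
# approximately Gaussian (central limit theorem), with moments:
import numpy as np

def net_input_moments(m, v, x):
    mean = np.dot(m, x)          # E[h]   = sum_i m_i x_i
    var = np.dot(v, x ** 2)      # Var[h] = sum_i v_i x_i^2
    return mean, var

rng = np.random.default_rng(0)
d = 1000
m, v = rng.normal(size=d) / np.sqrt(d), np.full(d, 1.0 / d)
x = rng.normal(size=d)
print(net_input_moments(m, v, x))
```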

Relevance: 100.00%

Abstract:

Objective: To determine intraocular pressure (IOP)-dependent and IOP-independent variables associated with visual field (VF) progression in treated glaucoma. Design: Retrospective cohort of the Glaucoma Progression Study. Methods: Consecutive, treated glaucoma patients with repeatable VF loss who had 8 or more VF examinations of either eye, using the Swedish Interactive Threshold Algorithm (24-2 SITA-Standard, Humphrey Field Analyzer II; Carl Zeiss Meditec, Inc, Dublin, California), during the period between January 1999 and September 2009 were included. Visual field progression was evaluated using automated pointwise linear regression. Evaluated data included age, sex, race, central corneal thickness, baseline VF mean deviation, mean follow-up IOP, peak IOP, IOP fluctuation, a detected disc hemorrhage, and presence of β-zone parapapillary atrophy. Results: We selected 587 eyes of 587 patients (mean [SD] age, 64.9 [13.0] years). The mean (SD) number of VFs was 11.1 (3.0), spanning a mean (SD) of 6.4 (1.7) years. In the univariable model, older age (odds ratio [OR], 1.19 per decade; P = .01), baseline diagnosis of exfoliation syndrome (OR, 1.79; P = .01), decreased central corneal thickness (OR, 1.38 per 40 µm thinner; P < .01), a detected disc hemorrhage (OR, 2.31; P < .01), presence of β-zone parapapillary atrophy (OR, 2.17; P < .01), and all IOP parameters (mean follow-up, peak, and fluctuation; P < .01) were associated with increased risk of VF progression. In the multivariable model, peak IOP (OR, 1.13; P < .01), thinner central corneal thickness (OR, 1.45 per 40 µm thinner; P < .01), a detected disc hemorrhage (OR, 2.59; P < .01), and presence of β-zone parapapillary atrophy (OR, 2.38; P < .01) were associated with VF progression. Conclusions: IOP-dependent and IOP-independent risk factors affect disease progression in treated glaucoma. Peak IOP is a better predictor of progression than mean IOP or IOP fluctuation.
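
Pointwise linear regression, as used here, fits a separate trend of sensitivity over time at each visual field test location and flags locations whose decline is steep and statistically significant. The sketch below illustrates the idea only; the slope and significance cut-offs and the simulated data are ours, not the study's criteria.

```python
# Pointwise linear regression: per-location sensitivity-versus-time slopes,
# flagging locations whose decline is steep and statistically significant.
# The slope/p-value cut-offs below are placeholders, not the study's criteria.
import numpy as np
from scipy import stats

def progressing_points(times, sensitivities, slope_cut=-1.0, p_cut=0.01):
    """times: (n_visits,) in years; sensitivities: (n_visits, n_points) in dB."""
    flags = []
    for j in range(sensitivities.shape[1]):
        slope, _, _, p_value, _ = stats.linregress(times, sensitivities[:, j])
        flags.append(slope < slope_cut and p_value < p_cut)  # dB/year decline
    return np.array(flags)

rng = np.random.default_rng(1)
times = np.arange(11) * 0.6                  # ~11 fields over ~6 years
base = rng.uniform(25, 33, size=52)          # 52 test points of a 24-2 grid
trend = np.zeros(52)
trend[:5] = -1.5                             # a few progressing locations
fields = base + np.outer(times, trend) + rng.normal(0, 1.0, (11, 52))
print(progressing_points(times, fields).sum(), "points flagged")
```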

Relevance: 100.00%

Abstract:

It has been argued that, beyond software engineering and process engineering, ontological engineering is the third capability needed if successful e-commerce is to be realized. In our experience of building an ontology-based tendering system, we faced the problem of building an ontology. In this paper, we demonstrate how to build ontologies in the tendering domain. The ontology life cycle is identified, and the extraction of concepts from existing resources such as on-line catalogs is described. We have reused electronic data interchange (EDI) to build conceptual structures in the tendering domain, and an algorithm to extract abstract ontological concepts from these structures is proposed.

Relevance: 100.00%

Abstract:

This article presents several characteristics of the public- and private-sector labor markets, highlighting the disparities between them and, more specifically, the distortions observed in the public sector, in order to demonstrate the degree of segmentation between the two. The comparisons concern employment behavior, worker profiles and wage dynamics. The analysis shows that the fiscal crisis and the rigidity of the legislation are fundamental determinants of the characteristics and distortions (and consequently of the segmentation) observed in the public sector, and that making the current rules more flexible, as well as the constitutional reforms under way, is an important condition for bringing the two markets closer together and for improving human resource management in the public administration.

Relevance: 100.00%

Abstract:

The main goal of this master's research was to investigate the mental calculation strategies used by students of a 5th grade/6th year class of elementary school when solving addition and subtraction calculations. To reach this goal, we sought to answer the following questions: Which mental calculation strategies do 5th grade/6th year students use when solving addition and subtraction calculations? What relationships exist between the type of calculation involved and the strategy adopted to solve it? To answer these questions, we followed a qualitative methodology, configured as an ethnographic case study. The fieldwork was carried out in a 5th grade/6th year class of a public school of the state school system of the municipality of Serra. The research took place from May to December 2013. Eight students solved a diagnostic activity composed of four sequences of mental calculations, namely, basic facts of the numbers 5, 10, 20 and 100, among additions and subtractions close to these results. All students took part in the interview stage. Of the eight students, we selected data from three who took part in the other stages of the research. The records produced by the students in the classroom observation stage, in the diagnostic stage and in the didactic intervention stage, the field notebook notes and some audio recordings served as data sources. We used the strategies identified by Beishuizen (1997), Klein and Beishuizen (1998), Thompson (1999, 2000) and Lucangeli et al. (2003) as analysis categories. Through the data analysis, we found that the students' choices of mental calculation strategies varied according to the type of calculation sequence, the arithmetic operation (addition or subtraction) and their emotional state during the activity. It was possible to identify the use of two combined strategies, the mental algorithm and finger-counting strategies, in most of the calculations. The use of the mental algorithm proved to be a procedure with a heavy mental load and, in some addition calculations without carrying, it served only as support for numerical visualization, being executed by the student from left to right, similarly to the numerical decomposition strategy. The data from this study point to: (i) the need to work on basic addition and subtraction number facts through mental calculation in a systematic way in the classroom; (ii) the need to teach authentic mental calculation strategies so that students do not become dependent on strategies such as counting and the mental algorithm, which are harder to execute successfully; (iii) the importance of interviewing students individually in order to understand and assess their development in mental calculation tasks.

Relevance: 100.00%

Abstract:

Using autonomous robots capable of planning their own path is a challenge that attracts many researchers in the field of robot navigation. In this context, this work aims to implement a hybrid PSO algorithm for path planning in static environments for holonomic and non-holonomic vehicles. The proposed algorithm has two phases: the first uses the A* algorithm to find a feasible initial trajectory, which the PSO algorithm optimizes in the second phase. Finally, a post-planning phase can be applied to the path in order to adapt it to the kinematic constraints of the non-holonomic vehicle. The Ackerman model was considered for the experiments. The CARMEN (Carnegie Mellon Robot Navigation Toolkit) robotics simulation environment was used to carry out all the computational experiments, considering five artificially generated map instances with obstacles. The performance of the developed algorithm, A*PSO, was compared with the A*, conventional PSO and Hybrid State A* algorithms. The analysis of the results indicated that the developed hybrid A*PSO algorithm outperformed the conventional PSO in solution quality. Although it found better solutions in 40% of the instances when compared with A*, A*PSO produced trajectories with fewer turning points. Examining the results obtained for the non-holonomic model, A*PSO obtained longer paths that were, however, smoother and safer.
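
Both phases of the hybrid rely on standard components; as a rough illustration of the second phase, the sketch below uses PSO to refine a given initial path against a cost that penalizes length and intrusion into circular obstacles. The initial path here is a straight-line placeholder rather than an A* result, and the cost function and parameter values are our own assumptions.

```python
# PSO refinement of a 2-D path between fixed start and goal points: particles
# encode the intermediate waypoints, and the cost penalizes path length plus
# intrusion into circular obstacles. In the hybrid approach the initial path
# would come from A*; here it is simply a straight line.
import numpy as np

start, goal = np.array([0.0, 0.0]), np.array([10.0, 10.0])
obstacles = [((4.0, 4.5), 1.5), ((7.0, 8.0), 1.2)]      # (center, radius)
n_way, n_particles, n_iter = 6, 30, 300
rng = np.random.default_rng(42)

def cost(flat):
    pts = np.vstack([start, flat.reshape(n_way, 2), goal])
    length = np.sum(np.linalg.norm(np.diff(pts, axis=0), axis=1))
    penalty = 0.0
    for (cx, cy), r in obstacles:
        d = np.linalg.norm(pts - np.array([cx, cy]), axis=1)
        penalty += np.sum(np.maximum(0.0, r - d))        # depth of intrusion
    return length + 50.0 * penalty

# Initial guess: straight line between start and goal, perturbed per particle.
line = np.linspace(start, goal, n_way + 2)[1:-1].reshape(-1)
pos = line + rng.normal(0, 1.0, size=(n_particles, line.size))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

w, c1, c2 = 0.7, 1.5, 1.5                                # inertia and pulls
for _ in range(n_iter):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos += vel
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("best cost:", cost(gbest))
```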

Relevance: 100.00%

Abstract:

Medium voltage (MV) load diagrams were defined based on the knowledge discovery in databases process. Clustering techniques were used to support agents in the electric power retail markets in obtaining specific knowledge of their customers' consumption habits. Each customer class resulting from the clustering operation is represented by its load diagram. The Two-step clustering algorithm and the WEACS approach, based on evidence accumulation (EAC), were applied to electricity consumption data from a utility's client database in order to form the customer classes and to find a set of representative consumption patterns. WEACS is a clustering ensemble combination approach that uses subsampling and weights the partitions in the co-association matrix differently. As a complementary step, all the final data partitions produced by the different variations of the method are combined and the Ward-link algorithm is used to obtain the final data partition. Experimental results showed that the WEACS approach led to better accuracy than many other clustering approaches. In this paper, the WEACS approach separates the customer population better than the Two-step clustering algorithm.
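
Evidence accumulation combines many base partitions into a co-association matrix whose entries record how often two samples are clustered together; a final partition is then extracted with a hierarchical linkage. The sketch below shows the plain, unweighted EAC pipeline with k-means base partitions and scikit-learn/SciPy stand-ins, not the WEACS subsampling and weighting described in the abstract.

```python
# Plain evidence accumulation clustering (EAC): build a co-association matrix
# from many k-means partitions and cut a hierarchical tree on it. This is the
# unweighted baseline; WEACS additionally subsamples and weights partitions.
import numpy as np
from sklearn.cluster import KMeans
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def eac(X, n_partitions=50, k_range=(10, 30), final_k=4, seed=0):
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    coassoc = np.zeros((n, n))
    for _ in range(n_partitions):
        k = rng.integers(k_range[0], k_range[1] + 1)
        labels = KMeans(n_clusters=k, n_init=1,
                        random_state=int(rng.integers(1 << 30))).fit_predict(X)
        coassoc += (labels[:, None] == labels[None, :])
    coassoc /= n_partitions
    dist = 1.0 - coassoc
    np.fill_diagonal(dist, 0.0)
    # Average linkage on the co-association distances (Ward link is another option).
    Z = linkage(squareform(dist, checks=False), method="average")
    return fcluster(Z, t=final_k, criterion="maxclust")

# Toy daily load diagrams: 200 "consumers" x 24 hourly values, four shapes.
rng = np.random.default_rng(1)
profiles = np.vstack([np.roll(np.sin(np.linspace(0, 2 * np.pi, 24)), s)
                      for s in (0, 6, 12, 18)])
X = np.repeat(profiles, 50, axis=0) + rng.normal(0, 0.15, (200, 24))
print(np.bincount(eac(X)))
```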

Relevance: 100.00%

Abstract:

With the liberalization of the electricity market, distribution and retail companies are looking for better market strategies based on adequate information about the consumption patterns of their electricity consumers. A fair insight into consumer behavior will permit the definition of specific contract aspects based on the different consumption patterns. In order to form the different consumer classes and find a set of representative consumption patterns, we use electricity consumption data from a utility's client database and two approaches: the Two-step clustering algorithm and the WEACS approach, based on evidence accumulation (EAC), for combining partitions in a clustering ensemble. While EAC uses a voting mechanism to produce a co-association matrix from the pairwise associations obtained from N partitions, with each partition having equal weight in the combination process, the WEACS approach uses subsampling and weights the partitions differently. As a complementary step, we combine the partitions obtained in the WEACS approach with the ALL clustering ensemble construction method and use the Ward-link algorithm to obtain the final data partition. The characterization of the resulting consumer clusters was performed using the C5.0 classification algorithm. Experimental results showed that the WEACS approach leads to better results than many other clustering approaches.
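
The C5.0 step characterizes each cluster through interpretable features of the consumption patterns; a rough equivalent using a CART decision tree as a stand-in for C5.0 is sketched below. The load-diagram features and the toy labels are invented for illustration.

```python
# Characterizing consumption clusters with a decision tree: fit a tree that
# predicts the cluster label from simple, interpretable features of each
# consumer's load diagram, then read the learned rules. CART is used here as a
# stand-in for C5.0; the features are illustrative, not the paper's.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

def describe_clusters(load_diagrams, cluster_labels):
    feats = np.column_stack([
        load_diagrams.mean(axis=1),                                       # mean daily load
        load_diagrams.max(axis=1) / (load_diagrams.mean(axis=1) + 1e-9),  # peak factor
        load_diagrams[:, 8:20].sum(axis=1) / (load_diagrams.sum(axis=1) + 1e-9),  # day share
        np.argmax(load_diagrams, axis=1),                                 # hour of peak
    ])
    names = ["mean_load", "peak_factor", "daytime_share", "peak_hour"]
    tree = DecisionTreeClassifier(max_depth=3, random_state=0)
    tree.fit(feats, cluster_labels)
    return export_text(tree, feature_names=names)

rng = np.random.default_rng(2)
X = np.abs(rng.normal(1.0, 0.3, size=(200, 24)))
labels = (X[:, 8:20].mean(axis=1) > X.mean(axis=1)).astype(int)  # toy "clusters"
print(describe_clusters(X, labels))
```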

Relevance: 100.00%

Abstract:

This work describes a methodology to extract symbolic rules from trained neural networks. In our approach, patterns in the network are codified using formulas of a Lukasiewicz logic. For this, we take advantage of the fact that every connective in this multi-valued logic can be evaluated by a neuron in an artificial network whose activation function is the identity truncated to zero and one. This fact simplifies symbolic rule extraction and allows formulas to be easily injected into a network architecture. We trained this type of neural network using a back-propagation algorithm based on the Levenberg-Marquardt algorithm, in which, at each learning iteration, we restricted the knowledge dissemination in the network structure. This makes the descriptive power of the produced neural networks similar to that of the Lukasiewicz logic language, minimizing the information loss in the translation between connectionist and symbolic structures. To avoid redundancy in the generated network, the method simplifies it in a pruning phase, using the "Optimal Brain Surgeon" algorithm. We tested this method on the task of finding the formula used to generate a given truth table. For tests on real data, we selected the Mushroom data set, available from the UCI Machine Learning Repository.
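
The fact exploited here, that every Lukasiewicz connective can be evaluated by a neuron whose activation is the identity truncated to [0, 1], is easy to verify directly; the small check below (our own code, not the paper's extraction method) does so for conjunction, disjunction, implication and negation.

```python
# Each Lukasiewicz connective is computed exactly by one neuron whose
# activation is the identity truncated to [0, 1]: f(z) = min(1, max(0, z)).
import numpy as np

def neuron(weights, bias, inputs):
    return np.clip(np.dot(weights, inputs) + bias, 0.0, 1.0)

def luk_and(a, b):      # strong conjunction: max(0, a + b - 1)
    return neuron([1.0, 1.0], -1.0, [a, b])

def luk_or(a, b):       # strong disjunction: min(1, a + b)
    return neuron([1.0, 1.0], 0.0, [a, b])

def luk_implies(a, b):  # implication: min(1, 1 - a + b)
    return neuron([-1.0, 1.0], 1.0, [a, b])

def luk_not(a):         # negation: 1 - a
    return neuron([-1.0], 1.0, [a])

for a in np.linspace(0, 1, 5):
    for b in np.linspace(0, 1, 5):
        assert np.isclose(luk_and(a, b), max(0.0, a + b - 1))
        assert np.isclose(luk_or(a, b), min(1.0, a + b))
        assert np.isclose(luk_implies(a, b), min(1.0, 1 - a + b))
        assert np.isclose(luk_not(a), 1 - a)
print("all Lukasiewicz connectives reproduced by clipped-identity neurons")
```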

Relevance: 100.00%

Abstract:

This paper presents a new and efficient methodology for distribution network reconfiguration integrated with optimal power flow (OPF), based on a Benders decomposition approach. The objective is to minimize power losses and balance the load among feeders, subject to the following constraints: branch capacity limits, minimum and maximum power limits of substations or distributed generators, minimum deviation of bus voltages, and optimal radial operation of the network. The generalized Benders decomposition algorithm is applied to solve the problem. The formulation comprises two stages. The first stage is the master problem, formulated as a mixed-integer non-linear programming problem; it determines the radial topology of the distribution network. The second stage is the slave problem, formulated as a non-linear programming problem; it is used to determine the feasibility of the master problem solution by means of an OPF and provides the information needed to formulate the linear Benders cuts that connect the two problems. The model is programmed in GAMS. The effectiveness of the proposal is demonstrated through two examples taken from the literature.
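
The decomposition follows the classical Benders scheme; for reference, in the purely linear case the slave problem for a fixed master decision and the optimality cut it returns take the standard form below. The paper's slave is a non-linear OPF, so its cuts are the corresponding generalized versions.

```latex
% Classical linear Benders decomposition (for reference). Full problem:
\[
\min_{x \ge 0,\; y \in Y} \; c^{\top}x + d^{\top}y
\qquad \text{s.t.} \qquad A x + B y \ge b .
\]
% Slave problem for a fixed master decision \(\hat{y}\), with dual optimum \(\hat{\lambda} \ge 0\):
\[
z(\hat{y}) \;=\; \min_{x \ge 0} \bigl\{\, c^{\top}x \;:\; A x \ge b - B\hat{y} \,\bigr\}
\;=\; \max_{\lambda \ge 0} \bigl\{\, \lambda^{\top}(b - B\hat{y}) \;:\; A^{\top}\lambda \le c \,\bigr\} .
\]
% Benders optimality cut returned to the master problem:
\[
\eta \;\ge\; \hat{\lambda}^{\top} (b - B y) ,
\]
% so the master minimizes \(d^{\top}y + \eta\) over \(y \in Y\) subject to all
% accumulated cuts; alternating the master and slave problems closes the gap.
```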

Relevance: 100.00%

Abstract:

In real optimization problems, the analytical expression of the objective function is usually not known, nor are its derivatives, or they are too complex to handle. In these cases it becomes essential to use optimization methods in which the calculation of the derivatives, or the verification of their existence, is not necessary: direct search methods, or derivative-free methods, are one solution. When the problem has constraints, penalty functions are often used. Unfortunately, the choice of the penalty parameters is frequently very difficult, because most strategies for choosing them are heuristic. Filter methods appeared as an alternative to penalty functions. A filter algorithm introduces a function that aggregates the constraint violations and constructs a biobjective problem; in this problem a step is accepted if it reduces either the objective function or the constraint violation. This implies that filter methods are less parameter dependent than a penalty function approach. In this work, we present a new direct search method for general constrained optimization, based on simplex methods, that combines the features of the simplex method and of filter methods. This method does not compute or approximate any derivatives, penalty constants or Lagrange multipliers. The basic idea of the simplex filter algorithm is to construct an initial simplex and use the simplex to drive the search. We illustrate the behavior of our algorithm through some examples. The proposed methods were implemented in Java.
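
The filter mechanism keeps a list of (objective value, aggregated constraint violation) pairs and accepts a trial point only if it is not dominated by any stored pair. A bare-bones version of that acceptance test, without the sufficient-decrease margins a production filter would add, is sketched below; the toy problem is our own.

```python
# A bare-bones filter for constrained optimization: store (f, h) pairs, where
# h aggregates the constraint violation, and accept a trial point only if no
# stored pair has both smaller-or-equal f and smaller-or-equal h.
# Practical filter methods add sufficient-decrease margins; omitted here.

class Filter:
    def __init__(self):
        self.entries = []                           # list of (f, h) pairs

    def acceptable(self, f, h):
        return all(f < fk or h < hk for fk, hk in self.entries)

    def add(self, f, h):
        if self.acceptable(f, h):
            # Drop entries dominated by the new pair, then store it.
            self.entries = [(fk, hk) for fk, hk in self.entries
                            if fk < f or hk < h]
            self.entries.append((f, h))
            return True
        return False

def violation(constraints, x):
    """Aggregate violation h(x): sum of positive parts of c_i(x) <= 0 constraints."""
    return sum(max(0.0, c(x)) for c in constraints)

# Toy usage: minimize f(x) = (x - 2)^2 subject to x - 1 <= 0.
f = lambda x: (x - 2.0) ** 2
cons = [lambda x: x - 1.0]
filt = Filter()
for x in [3.0, 1.5, 1.0, 0.5]:
    print(x, "accepted" if filt.add(f(x), violation(cons, x)) else "rejected")
```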

Relevance: 100.00%

Abstract:

Master's degree in Electrical and Computer Engineering