32 results for the Fuzzy Colour Segmentation Algorithm
Abstract:
A novel methodology is presented to assess the risk of power transformer failures caused by external faults, such as short-circuits, taking the condition of the paper insulation into account. The risk index is obtained by contrasting the insulation paper condition with the probability that the transformer withstands the short-circuit current flowing along the winding during an external fault. In order to assess the risk, this probability and the value of the degree of polymerization of the insulating paper are regarded as inputs of a type-2 fuzzy logic system (T2-FLS), which computes the fuzzy risk level. A Monte Carlo simulation has been used to find the survival function of the currents flowing through the transformer winding during a single-phase or a three-phase short-circuit. The Roy Billinton Test System and a real power system have been used to test the results. (C) 2008 Elsevier B.V. All rights reserved.
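The abstract gives no implementation details for the Monte Carlo step; the sketch below is only a rough Python illustration of estimating a survival function of fault currents and the probability of exceeding a withstand limit. The lognormal current model, its parameters and the 12 pu limit are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model: per-unit short-circuit current through the winding
# during an external fault (the paper derives this from power-system
# simulations; a lognormal spread is assumed here only for illustration).
i_sc = rng.lognormal(mean=np.log(8.0), sigma=0.25, size=100_000)

# Empirical survival function S(i) = P(I > i).
i_grid = np.linspace(i_sc.min(), i_sc.max(), 200)
survival = np.array([(i_sc > i).mean() for i in i_grid])

# Probability that the current exceeds a hypothetical withstand limit.
withstand_limit = 12.0  # per unit, illustrative only
p_exceed = (i_sc > withstand_limit).mean()
print(f"P(I > {withstand_limit} pu) = {p_exceed:.4f}")
```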
Abstract:
The exact vibration modes and natural frequencies of planar structures and mechanisms, composed of Euler-Bernoulli beams, are obtained by solving a transcendental, nonlinear eigenvalue problem stated by the dynamic stiffness matrix (DSM). To solve this kind of problem, the most widely employed technique is the Wittrick-Williams algorithm, developed in the early seventies. By formulating a new type of eigenvalue problem, which preserves the internal degrees of freedom for all members in the model, the present study offers an alternative to the use of this algorithm. The newly proposed eigenvalue problem presents no poles, so the roots of the problem can be found by any suitable iterative numerical method. By avoiding a standard formulation for the DSM, the local mode shapes are directly calculated and any extension to the beam theory can be easily incorporated. It is shown that the method adopted here leads to exact solutions, as confirmed by various examples. Extensions of the formulation are also given, where rotary inertia, end release, skewed edges and rigid offsets are all included. (C) 2008 Elsevier Ltd. All rights reserved.
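As a toy illustration of solving a transcendental frequency equation by a simple iterative method (bisection, not the DSM-based formulation of the paper), the sketch below finds the first three roots of the classical cantilever Euler-Bernoulli characteristic equation cos(beta*L)*cosh(beta*L) + 1 = 0; the beam case and the root brackets are assumptions chosen only for illustration.

```python
import math

def f(x):
    # Characteristic equation of a cantilever Euler-Bernoulli beam,
    # cos(beta*L) * cosh(beta*L) + 1 = 0, written in x = beta*L.
    return math.cos(x) * math.cosh(x) + 1.0

def bisect(a, b, tol=1e-10):
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if fa * f(m) <= 0:
            b = m
        else:
            a, fa = m, f(m)
    return 0.5 * (a + b)

# Brackets chosen by inspecting sign changes of f on a coarse grid.
for lo, hi in [(1.0, 2.0), (4.0, 5.0), (7.0, 8.0)]:
    x = bisect(lo, hi)
    print(f"beta*L = {x:.4f}")   # 1.8751, 4.6941, 7.8548
```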
Abstract:
As is well known, Hessian-based adaptive filters (such as the recursive-least squares algorithm (RLS) for supervised adaptive filtering, or the Shalvi-Weinstein algorithm (SWA) for blind equalization) converge much faster than gradient-based algorithms [such as the least-mean-squares algorithm (LMS) or the constant-modulus algorithm (CMA)]. However, when the problem is tracking a time-variant filter, the issue is not so clear-cut: there are environments for which each family presents better performance. Given this, we propose the use of a convex combination of algorithms of different families to obtain an algorithm with superior tracking capability. We show the potential of this combination and provide a unified theoretical model for the steady-state excess mean-square error for convex combinations of gradient- and Hessian-based algorithms, assuming a random-walk model for the parameter variations. The proposed model is valid for algorithms of the same or different families, and for supervised (LMS and RLS) or blind (CMA and SWA) algorithms.
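The following Python sketch illustrates the general idea of a convex combination of a gradient-based (LMS) and a Hessian-based (RLS) filter in a supervised system-identification setting; the sigmoid-parameterized mixing weight adapted by stochastic gradient follows the standard convex-combination scheme, and the synthetic data, step sizes and forgetting factor are illustrative assumptions rather than the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic supervised setting: identify an unknown FIR filter from noisy data.
M = 8                                   # filter length
w_true = rng.standard_normal(M)
N = 5000
x = rng.standard_normal(N + M)
d = np.array([w_true @ x[n:n + M][::-1] for n in range(N)]) \
    + 0.01 * rng.standard_normal(N)

# Component 1: LMS (gradient-based).
w_lms = np.zeros(M)
mu = 0.05

# Component 2: RLS (Hessian-based) with forgetting factor lam.
w_rls = np.zeros(M)
P = 100.0 * np.eye(M)
lam = 0.995

# Convex mixing weight eta = sigmoid(a), adapted to minimize the squared
# error of the combined output (the step size mu_a is illustrative).
a, mu_a = 0.0, 0.5

for n in range(N):
    u = x[n:n + M][::-1]                # regressor, most recent sample first
    y1, y2 = w_lms @ u, w_rls @ u
    eta = 1.0 / (1.0 + np.exp(-a))
    e = d[n] - (eta * y1 + (1.0 - eta) * y2)

    # LMS update
    w_lms += mu * (d[n] - y1) * u

    # RLS update
    k = P @ u / (lam + u @ P @ u)
    w_rls += k * (d[n] - y2)
    P = (P - np.outer(k, u @ P)) / lam

    # Mixing-weight update (stochastic gradient on the combined error)
    a += mu_a * e * (y1 - y2) * eta * (1.0 - eta)

eta = 1.0 / (1.0 + np.exp(-a))
print("final mixing weight eta:", round(float(eta), 3))
print("parameter error of the combined filter:",
      float(np.linalg.norm(eta * w_lms + (1 - eta) * w_rls - w_true)))
```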
Abstract:
A warning system for sooty blotch and flyspeck (SBFS) of apple, developed in the southeastern United States, uses cumulative hours of leaf wetness duration (LWD) to predict the timing of the first appearance of signs. In the Upper Midwest United States, however, this warning system has resulted in sporadic disease control failures. The purpose of the present study was to determine whether the warning system's algorithm could be modified to provide a more reliable assessment of SBFS risk. Hourly LWD, rainfall, relative humidity (RH), and temperature data were collected from orchards in Iowa, North Carolina, and Wisconsin in 2005 and 2006. Timing of the first appearance of SBFS signs was determined by weekly scouting. Preliminary analysis using scatterplots and boxplots suggested that cumulative hours of RH >= 97% could be a useful predictor of SBFS appearance. Receiver operating characteristic curve analysis was used to compare the predictive performance of cumulative LWD and cumulative hours of RH >= 97%. Cumulative hours of RH >= 97% was a more conservative and accurate predictor than cumulative LWD for 15 site-years in the Upper Midwest, but not for four site-years in North Carolina. Performance of the SBFS warning system in the Upper Midwest and climatically similar regions may be improved if cumulative hours of RH >= 97% were substituted for cumulative LWD to predict the first appearance of SBFS.
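As a rough sketch of how such an RH-based predictor could be implemented (the warning threshold of 192 cumulative hours and the synthetic humidity series below are hypothetical, not values calibrated in the study):

```python
import numpy as np

def cumulative_hours_rh(rh_hourly, rh_threshold=97.0):
    """Cumulative count of hours with relative humidity >= rh_threshold (%)."""
    rh_hourly = np.asarray(rh_hourly, dtype=float)
    return np.cumsum(rh_hourly >= rh_threshold)

# Hypothetical season of hourly RH readings and a purely illustrative
# warning threshold (the study calibrates this from orchard data).
rng = np.random.default_rng(42)
rh = np.clip(rng.normal(85, 10, size=24 * 90), 0, 100)   # 90 days of data
cum = cumulative_hours_rh(rh)
WARNING_HOURS = 192        # illustrative only
first_warning = int(np.argmax(cum >= WARNING_HOURS)) if cum[-1] >= WARNING_HOURS else None
print("warning issued at hour index:", first_warning)
```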
Abstract:
Human leukocyte antigen (HLA) haplotypes are frequently evaluated for population history inferences and association studies. However, the available typing techniques for the main HLA loci usually do not allow the determination of the allele phase and the constitution of a haplotype, which may be obtained by a very time-consuming and expensive family-based segregation study. Without the family-based study, computational inference by probabilistic models is necessary to obtain haplotypes. Several authors have used the expectation-maximization (EM) algorithm to determine HLA haplotypes, but high levels of erroneous inferences are expected because of the genetic distance among the main HLA loci and the presence of several recombination hotspots. In order to evaluate the efficiency of computational inference methods, 763 unrelated individuals stratified into three different datasets had their haplotypes manually defined in a family-based study of HLA-A, -B, -DRB1 and -DQB1 segregation, and these haplotypes were compared with the data obtained by the following three methods: the expectation-maximization (EM) and Excoffier-Laval-Balding (ELB) algorithms using the Arlequin 3.11 software, and the PHASE method. When comparing the methods, we observed that all algorithms showed poor performance for haplotype reconstruction with distant loci, estimating incorrect haplotypes for 38%-57% of the samples across all algorithms and datasets. We suggest that computational haplotype inferences involving low-resolution HLA-A, HLA-B, HLA-DRB1 and HLA-DQB1 haplotypes should be considered with caution.
Abstract:
Cosmic shear requires high precision measurement of galaxy shapes in the presence of the observational point spread function (PSF) that smears out the image. The PSF must therefore be known for each galaxy to a high accuracy. However, for several reasons, the PSF is usually wavelength dependent; therefore, the differences between the spectral energy distribution of the observed objects introduce further complexity. In this paper, we investigate the effect of the wavelength dependence of the PSF, focusing on instruments in which the PSF size is dominated by the diffraction limit of the telescope and which use broad-band filters for shape measurement. We first calculate biases on cosmological parameter estimation from cosmic shear when the stellar PSF is used uncorrected. Using realistic galaxy and star spectral energy distributions and populations and a simple three-component circular PSF, we find that the colour dependence must be taken into account for the next generation of telescopes. We then consider two different methods for removing the effect: (i) the use of stars of the same colour as the galaxies and (ii) estimation of the galaxy spectral energy distribution using multiple colours and using a telescope model for the PSF. We find that both of these methods correct the effect to levels below the tolerances required for per cent level measurements of dark energy parameters. Comparison of the two methods favours the template-fitting method because its efficiency is less dependent on galaxy redshift than the broad-band colour method and takes full advantage of deeper photometry.
Abstract:
A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data by the use of data analysis techniques. Clustering plays an important role in data analysis by organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature. However, each algorithm has its own bias and is better suited to particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm to solve this formulation that uses GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms. The proposed algorithm presented the best clustering results, a finding confirmed statistically. (C) 2009 Elsevier Ltd. All rights reserved.
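GRASP itself is a generic metaheuristic that alternates a greedy randomized construction with a local search over many restarts; the Python sketch below applies it to a simple k-medoid clustering objective purely as an illustration of the metaheuristic, not of the mathematical formulation proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def total_cost(X, medoids):
    # Sum of distances from every point to its nearest medoid.
    d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :], axis=2)
    return d.min(axis=1).sum()

def grasp_kmedoids(X, k, iters=20, alpha=0.3):
    """GRASP for k-medoid clustering: randomized greedy construction
    followed by a swap-based local search; the best solution is kept."""
    n = len(X)
    best, best_cost = None, np.inf
    for _ in range(iters):
        # --- Construction: restricted candidate list (RCL) seeding ---
        medoids = [int(rng.integers(n))]
        while len(medoids) < k:
            d = np.linalg.norm(X[:, None, :] - X[medoids][None, :, :],
                               axis=2).min(axis=1)
            cutoff = d.max() - alpha * (d.max() - d.min())
            rcl = np.flatnonzero(d >= cutoff)
            medoids.append(int(rng.choice(rcl)))
        # --- Local search: improving medoid swaps until no move helps ---
        improved = True
        while improved:
            improved = False
            cost = total_cost(X, medoids)
            for i in range(k):
                for cand in range(n):
                    if cand in medoids:
                        continue
                    trial = medoids.copy()
                    trial[i] = cand
                    c = total_cost(X, trial)
                    if c < cost:
                        medoids, cost, improved = trial, c, True
        if cost < best_cost:
            best, best_cost = medoids, cost
    return best, best_cost

# Tiny synthetic example with two well-separated groups.
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
medoids, cost = grasp_kmedoids(X, k=2)
print("medoid indices:", medoids, "cost:", round(cost, 3))
```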
Abstract:
The evolution of commodity computing led to the possibility of efficiently using interconnected machines to solve computationally intensive tasks, which were previously solvable only by using expensive supercomputers. This, however, required new methods for process scheduling and distribution, considering network latency, communication cost, heterogeneous environments and distributed computing constraints. An efficient distribution of processes over such environments requires an adequate scheduling strategy, as the cost of inefficient process allocation is unacceptably high. Therefore, knowledge and prediction of application behavior are essential to perform effective scheduling. In this paper, we overview the evolution of scheduling approaches, focusing on distributed environments. We also evaluate the current approaches for process behavior extraction and prediction, aiming at selecting an adequate technique for online prediction of application execution. Based on this evaluation, we propose a novel model for application behavior prediction, considering the chaotic properties of such behavior and the automatic detection of critical execution points. The proposed model is applied and evaluated for process scheduling in cluster and grid computing environments. The obtained results demonstrate that prediction of process behavior is essential for efficient scheduling in large-scale and heterogeneous distributed environments, outperforming conventional scheduling policies by a factor of 10, and even more in some cases. Furthermore, the proposed approach proves to be efficient for online predictions due to its low computational cost and good precision. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
In 2006 the Route load-balancing algorithm was proposed and compared with other techniques aiming at optimizing process allocation in grid environments. This algorithm schedules tasks of parallel applications considering computer neighborhoods (where the distance is defined by the network latency). Route presents good results for large environments, although there are cases where neighbors have neither enough computational capacity nor a communication system capable of serving the application. In those situations Route migrates tasks until they stabilize in a grid area with enough resources. This migration may take a long time, which reduces the overall performance. In order to improve this stabilization time, this paper proposes RouteGA (Route with Genetic Algorithm support), which considers historical information on parallel application behavior as well as computer capacities and loads to optimize the scheduling. This information is extracted by monitors and summarized in a knowledge base used to quantify the occupation of tasks. Afterwards, this information is used to parameterize a genetic algorithm responsible for optimizing the task allocation. Results confirm that RouteGA outperforms the load balancing carried out by the original Route, which had previously outperformed other scheduling algorithms from the literature.
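The abstract does not specify RouteGA's chromosome encoding or genetic operators; purely as a sketch of how a genetic algorithm can search task-to-machine allocations, the Python example below evolves assignments against a made-up makespan objective with synthetic task costs and machine speeds.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical load-balancing instance: task costs and machine speeds
# (RouteGA itself uses monitored application behaviour and latencies).
n_tasks, n_machines = 40, 6
task_cost = rng.uniform(1, 10, n_tasks)
machine_speed = rng.uniform(0.5, 1.5, n_machines)

def makespan(assign):
    # Completion time of the most loaded machine under this assignment.
    load = np.zeros(n_machines)
    np.add.at(load, assign, task_cost)
    return (load / machine_speed).max()

def genetic_allocation(pop_size=60, generations=200, p_mut=0.05):
    pop = rng.integers(0, n_machines, size=(pop_size, n_tasks))
    for _ in range(generations):
        fitness = np.array([makespan(ind) for ind in pop])
        # Tournament selection (lower makespan wins).
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        parents = pop[np.where(fitness[idx[:, 0]] < fitness[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # Uniform crossover between consecutive parents.
        mask = rng.random((pop_size, n_tasks)) < 0.5
        children = np.where(mask, parents, np.roll(parents, 1, axis=0))
        # Mutation: reassign a task to a random machine.
        mut = rng.random(children.shape) < p_mut
        children[mut] = rng.integers(0, n_machines, size=mut.sum())
        # Elitism: carry over the best individual of this generation.
        children[0] = pop[fitness.argmin()]
        pop = children
    fitness = np.array([makespan(ind) for ind in pop])
    return pop[fitness.argmin()], fitness.min()

best, span = genetic_allocation()
print("best makespan found:", round(float(span), 2))
```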
Abstract:
The latest version of CATH (class, architecture, topology, homology) (version 3.2), released in July 2008 (http://www.cathdb.info), contains 114,215 domains, 2,178 homologous superfamilies and 1,110 fold groups. We have assigned 20,330 new domains, 87 new homologous superfamilies and 26 new folds since CATH release version 3.1. A total of 28,064 new domains have been assigned since our NAR 2007 database publication (CATH version 3.0). The CATH website has been completely redesigned and includes more comprehensive documentation. We have revisited the CATH architecture level as part of the development of a 'Protein Chart' and present information on the population of each architecture. The CATHEDRAL structure comparison algorithm has been improved and used to characterize structural diversity in CATH superfamilies and structural overlaps between superfamilies. Although the majority of superfamilies in CATH are not structurally diverse and do not overlap significantly with other superfamilies, approximately 4% of superfamilies are very diverse, and these are the superfamilies that are most highly populated in both the PDB and in the genomes. Information on the degree of structural diversity in each superfamily and structural overlaps between superfamilies can now be downloaded from the CATH website.
Abstract:
A combination of the trajectory sensitivity method and master-slave synchronization was proposed for the parameter estimation of nonlinear systems. It was shown that master-slave coupling increases the robustness of the trajectory sensitivity algorithm with respect to the initial guess of the parameters. Since synchronization alone is no guarantee that the estimation process converges to the correct parameters, a conditional test that guarantees that the new combined methodology estimates the true values of the parameters was proposed. This conditional test was successfully applied to the Lorenz and Chua systems, and the proposed parameter estimation algorithm has proven to be very robust with respect to parameter initial guesses and measurement noise in these examples. Copyright (C) 2009 Elmer P. T. Cari et al.
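Only the master-slave synchronization ingredient is easy to illustrate in isolation. The Python sketch below couples a slave Lorenz system to a master through a diffusive term on the x-component and shows the synchronization error decaying; the coupling gain, integration scheme and initial conditions are illustrative assumptions, and the trajectory-sensitivity parameter-estimation step of the paper is not reproduced here.

```python
import numpy as np

def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

dt, steps, gain = 0.001, 40_000, 20.0     # coupling gain is illustrative
master = np.array([1.0, 1.0, 1.0])
slave = np.array([-5.0, 7.0, 30.0])       # different initial condition

for n in range(steps):
    dm = lorenz(master)
    ds = lorenz(slave)
    # Diffusive coupling drives the slave's x-component toward the master's.
    ds[0] += gain * (master[0] - slave[0])
    master += dt * dm
    slave += dt * ds
    if n % 10_000 == 0:
        err = np.linalg.norm(master - slave)
        print(f"t = {n * dt:5.1f}   synchronization error = {err:.3e}")
```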
Abstract:
This work presents the development and implementation of an artificial neural network (ANN) based algorithm for transmission line distance protection. This algorithm was developed to be used in any transmission line regardless of its configuration or voltage level. The described ANN-based algorithm does not need any topology adaptation or ANN parameter adjustment when applied to different electrical systems. This feature makes the solution unique, since all ANN-based solutions presented so far were developed for particular transmission lines, which means that those solutions cannot be implemented in commercial relays. (c) 2011 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents a reliability-based analysis for calculating critical tool life in machining processes. It is possible to determine the running time for each tool involved in the process by obtaining the operations sequence for the machining procedure. Usually, the reliability of an operation depends on three independent factors: operator, machine tool and cutting tool. The reliability of a part manufacturing process is mainly determined by the cutting time of each job and by the sequence of operations, defined by the series configuration. An algorithm is presented to define when the cutting tool must be changed. The proposed algorithm is used to evaluate the reliability of a manufacturing process composed of turning and drilling operations. The reliability of the turning operation is modeled from data presented in the literature; for the drilling operation, a statistical distribution of tool wear was defined from experimental results and used to model the reliability of the drilling process. (C) 2010 Elsevier Ltd. All rights reserved.
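As a minimal illustration of the series-reliability idea (the process succeeds only if every operation's tool does), the Python sketch below accumulates cutting time per tool and flags a tool change once the product of assumed Weibull tool reliabilities drops below a target; all Weibull parameters, cutting times and the 0.90 target are hypothetical, not the values fitted in the paper.

```python
import numpy as np

def weibull_reliability(t, beta, eta):
    """R(t) = exp(-(t/eta)**beta): probability the tool survives cutting time t."""
    return np.exp(-(t / eta) ** beta)

# Hypothetical Weibull parameters for each operation's cutting tool
# (shape beta, characteristic life eta in minutes) and per-part cutting times.
operations = {
    "turning":  dict(beta=2.5, eta=120.0, minutes_per_part=3.0),
    "drilling": dict(beta=3.0, eta=90.0,  minutes_per_part=1.5),
}
TARGET = 0.90   # illustrative minimum acceptable process reliability

parts, t = 0, {op: 0.0 for op in operations}
while True:
    # Accumulate cutting time for the next part on every tool in the sequence.
    trial = {op: t[op] + p["minutes_per_part"] for op, p in operations.items()}
    # Series configuration: the process reliability is the product over operations.
    R = np.prod([weibull_reliability(trial[op], p["beta"], p["eta"])
                 for op, p in operations.items()])
    if R < TARGET:
        break
    t, parts = trial, parts + 1

print(f"change tools after {parts} parts (process reliability >= {TARGET})")
```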
Abstract:
Electrical impedance tomography (EIT) captures images of internal features of a body. Electrodes are attached to the boundary of the body, low intensity alternating currents are applied, and the resulting electric potentials are measured. Then, based on the measurements, an estimation algorithm obtains the three-dimensional internal admittivity distribution that corresponds to the image. One of the main goals of medical EIT is to achieve high resolution and an accurate result at low computational cost. However, when the finite element method (FEM) is employed and the corresponding mesh is refined to increase resolution and accuracy, the computational cost increases substantially, especially in the estimation of absolute admittivity distributions. Therefore, we consider in this work a fast iterative solver for the forward problem, which was previously reported in the context of structural optimization. We propose several improvements to this solver to increase its performance in the EIT context. The solver is based on the recycling of approximate invariant subspaces, and it is applied to reduce the EIT computation time for a constant and high resolution finite element mesh. In addition, we consider a powerful preconditioner and provide a detailed pseudocode for the improved iterative solver. The numerical results show the effectiveness of our approach: the proposed algorithm is faster than the preconditioned conjugate gradient (CG) algorithm. The results also show that even on a standard PC without parallelization, a high mesh resolution (more than 150,000 degrees of freedom) can be used for image estimation at a relatively low computational cost. (C) 2010 Elsevier B.V. All rights reserved.
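The recycling solver itself is not described in the abstract; for context, the preconditioned conjugate gradient baseline it is compared against can be sketched as below, where the Jacobi (diagonal) preconditioner and the random SPD test matrix stand in, purely for illustration, for the FEM system of the forward problem.

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-8, max_iter=1000):
    """Preconditioned conjugate gradients for an SPD matrix A; M_inv applies
    the inverse preconditioner (here a simple Jacobi/diagonal preconditioner)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# Small SPD test system standing in for a FEM stiffness matrix.
rng = np.random.default_rng(3)
n = 500
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)
b = rng.standard_normal(n)
diag = np.diag(A)
x, iters = pcg(A, b, M_inv=lambda r: r / diag)
print("iterations:", iters, "residual norm:", float(np.linalg.norm(b - A @ x)))
```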
Abstract:
Leaf wetness duration (LWD) models based on empirical approaches offer practical advantages over physically based models in agricultural applications, but their spatial portability is questionable because they may be biased to the climatic conditions under which they were developed. In our study, the spatial portability of three LWD models with empirical characteristics - a RH threshold model, a decision tree model with wind speed correction, and a fuzzy logic model - was evaluated using weather data collected in Brazil, Canada, Costa Rica, Italy and the USA. The fuzzy logic model was more accurate than the other models in estimating LWD measured by painted leaf wetness sensors. The fraction of correct estimates for the fuzzy logic model was greater (0.87) than for the other models (0.85-0.86) across 28 sites where painted sensors were installed, and the degree-of-agreement kappa statistic between the model and painted sensors was greater for the fuzzy logic model (0.71) than for the other models (0.64-0.66). Values of the kappa statistic for the fuzzy logic model were also less variable across sites than those of the other models. When model estimates were compared with measurements from unpainted leaf wetness sensors, the fuzzy logic model had a smaller mean absolute error (2.5 h day⁻¹) than the other models (2.6-2.7 h day⁻¹) after the model was calibrated for the unpainted sensors. The results suggest that the fuzzy logic model has greater spatial portability than the other models evaluated and merits further validation in comparison with physical models under a wider range of climate conditions. (C) 2010 Elsevier B.V. All rights reserved.
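For reference, an RH-threshold model of the kind compared here reduces to counting hours above a humidity threshold. The sketch below uses a 90% threshold, a common choice for such models, although the calibration used in the study is not given in the abstract, and the humidity series is synthetic.

```python
import numpy as np

def lwd_rh_threshold(rh_hourly, threshold=90.0):
    """Estimate daily leaf wetness duration (hours) as the number of hours
    in each 24-h block with relative humidity at or above the threshold."""
    rh = np.asarray(rh_hourly, dtype=float).reshape(-1, 24)  # one row per day
    return (rh >= threshold).sum(axis=1)

# Three hypothetical days of hourly RH (%).
rng = np.random.default_rng(5)
rh = np.clip(rng.normal(82, 12, size=3 * 24), 0, 100)
print("estimated LWD per day (h):", lwd_rh_threshold(rh))
```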