989 results for Algorithm Comparison


Relevance:

30.00%

Publisher:

Abstract:

Most active-contour methods are based either on maximizing the image contrast under the contour or on minimizing the sum of squared distances between contour and image 'features'. The Marginalized Likelihood Ratio (MLR) contour model uses a contrast-based measure of goodness-of-fit for the contour and thus falls into the first class. The point of departure from previous models consists in marginalizing this contrast measure over unmodelled shape variations. The MLR model naturally leads to the EM Contour algorithm, in which pose optimization is carried out by iterated least-squares, as in feature-based contour methods. The difference with respect to other feature-based algorithms is that the EM Contour algorithm minimizes squared distances from Bayes least-squares (marginalized) estimates of contour locations, rather than from 'strongest features' in the neighborhood of the contour. Within the framework of the MLR model, alternatives to the EM algorithm can also be derived: one of these alternatives is the empirical-information method. Tracking experiments demonstrate the robustness of pose estimates given by the MLR model, and support the theoretical expectation that the EM Contour algorithm is more robust than either feature-based methods or the empirical-information method. (c) 2005 Elsevier B.V. All rights reserved.
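
The abstract describes pose optimization by iterated least squares against marginalized (Bayes least-squares) estimates of contour locations. As a rough illustration of that EM-style loop, the sketch below treats a rigid 2-D contour under a translation-only pose with a simple Gaussian responsibility model; all names are hypothetical and the measurement model is a stand-in for the paper's marginalized likelihood ratio, not a reproduction of it.

```python
import numpy as np

def em_contour_translation(template, candidates, sigma=2.0, n_iter=20):
    """Toy EM-style pose estimation for a contour under a translation-only pose.

    template   : (N, 2) array of model contour points.
    candidates : list of (M_i, 2) arrays of candidate image locations near each point.
    Returns the estimated 2-D translation.
    """
    t = np.zeros(2)  # current pose (translation) estimate
    for _ in range(n_iter):
        targets = np.zeros((len(template), 2))
        for i, cand in enumerate(candidates):
            pred = template[i] + t                      # predicted contour location
            d2 = np.sum((cand - pred) ** 2, axis=1)
            w = np.exp(-0.5 * d2 / sigma ** 2)          # Gaussian responsibilities
            w /= w.sum() + 1e-12
            # E-step: posterior-mean (Bayes least-squares) estimate of the true
            # contour location, rather than the single "strongest feature".
            targets[i] = w @ cand
        # M-step: least-squares pose update (closed form for pure translation).
        t = np.mean(targets - template, axis=0)
    return t
```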

Relevance:

30.00%

Publisher:

Abstract:

A novel radix-3/9 algorithm for the type-III generalized discrete Hartley transform (GDHT) is proposed, which applies to length-3^p sequences. This algorithm is especially efficient when multiplication is much more time-consuming than addition. A comparative analysis shows that the proposed algorithm outperforms a known algorithm when one multiplication is more time-consuming than five additions. When combined with any known radix-2 type-III GDHT algorithm, the new algorithm also applies to length-2^q·3^p sequences.
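
The quoted crossover ("better when one multiplication costs more than five additions") follows from a simple operation-count argument: the new algorithm trades multiplications for extra additions, so it wins once the multiplication/addition cost ratio exceeds a break-even point. The sketch below illustrates that reasoning only; the operation counts used in the example are placeholders, not the counts of the radix-3/9 algorithm.

```python
def break_even_ratio(muls_old, adds_old, muls_new, adds_new):
    """Multiplication/addition cost ratio above which the new algorithm is faster.

    Cost model: cost = muls * t_mul + adds * t_add.  If the new algorithm uses
    fewer multiplications but more additions, it wins when
    t_mul / t_add > (adds_new - adds_old) / (muls_old - muls_new).
    """
    return (adds_new - adds_old) / (muls_old - muls_new)

# Hypothetical counts purely for illustration (not taken from the paper):
# saving 100 multiplications at the price of 500 extra additions gives a
# break-even ratio of 500 / 100 = 5, the ratio quoted in the abstract.
print(break_even_ratio(muls_old=300, adds_old=1000, muls_new=200, adds_new=1500))  # 5.0
```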

Relevance:

30.00%

Publisher:

Abstract:

A numerical scheme is presented for the solution of the Euler equations of compressible flow of a gas in a single spatial co-ordinate. This includes flow in a duct of variable cross-section as well as flow with slab, cylindrical or spherical symmetry and can prove useful when testing codes for the two-dimensional equations governing compressible flow of a gas. The resulting scheme requires an average of the flow variables across the interface between cells and for computational efficiency this average is chosen to be the arithmetic mean, which is in contrast to the usual ‘square root’ averages found in this type of scheme. The scheme is applied with success to five problems with either slab or cylindrical symmetry and a comparison is made in the cylindrical case with results from a two-dimensional problem with no sources.
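
The contrast drawn between the arithmetic-mean interface average and the usual 'square root' (Roe-type) average can be made concrete with a short sketch. The Roe average shown is the standard density-weighted form; the arithmetic mean simply averages the left and right states. This is an illustrative comparison of the two averaging choices, not the paper's scheme.

```python
import numpy as np

def arithmetic_average(uL, uR):
    """Plain arithmetic mean of the left/right cell states at an interface."""
    return 0.5 * (uL + uR)

def roe_average_velocity(rhoL, uL, rhoR, uR):
    """Standard Roe ('square root') average of the velocity, weighted by sqrt(rho)."""
    wL, wR = np.sqrt(rhoL), np.sqrt(rhoR)
    return (wL * uL + wR * uR) / (wL + wR)

# Example: across a strong density jump the two averages differ noticeably.
print(arithmetic_average(1.0, 3.0))           # 2.0
print(roe_average_velocity(1.0, 1.0, 100.0, 3.0))  # ~2.82
```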

Relevance:

30.00%

Publisher:

Abstract:

The optimal and the zero-forcing beamformers are two commonly used algorithms in subspace-based blind beamforming. The optimal beamformer is regarded as the algorithm with the best output SINR, while the zero-forcing algorithm emphasizes co-channel interference cancellation. This paper compares the performance of the two algorithms under practical conditions: finite data length and angle-estimation error. The investigation reveals that the zero-forcing algorithm can be more robust in practical environments than the optimal algorithm.
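
For readers unfamiliar with the two weight designs being compared, the sketch below computes both from an estimated covariance matrix and (possibly erroneous) steering-vector estimates. It is a generic narrowband formulation, not the specific subspace-based construction of the paper.

```python
import numpy as np

def steering_vector(theta_deg, n_antennas, spacing=0.5):
    """Uniform linear array steering vector (element spacing in wavelengths)."""
    n = np.arange(n_antennas)
    return np.exp(2j * np.pi * spacing * n * np.sin(np.deg2rad(theta_deg)))

def optimal_beamformer(R, a_desired):
    """MVDR-style 'optimal' weights maximizing output SINR for covariance R."""
    Ri_a = np.linalg.solve(R, a_desired)
    return Ri_a / (a_desired.conj() @ Ri_a)

def zero_forcing_beamformer(a_desired, a_interferers):
    """Zero-forcing weights: unit gain on the desired steering vector and
    exact nulls on the interferer steering vectors."""
    A = np.column_stack([a_desired] + list(a_interferers))
    e1 = np.zeros(A.shape[1]); e1[0] = 1.0
    # Constraint w^H A = e1^T  =>  minimum-norm solution w = A (A^H A)^{-1} e1
    return A @ np.linalg.solve(A.conj().T @ A, e1)
```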

Relevance:

30.00%

Publisher:

Abstract:

The A-Train constellation of satellites provides a new capability to measure vertical cloud profiles that leads to more detailed information on ice-cloud microphysical properties than has been possible up to now. A variational radar–lidar ice-cloud retrieval algorithm (VarCloud) takes advantage of the complementary nature of the CloudSat radar and Cloud–Aerosol Lidar and Infrared Pathfinder Satellite Observations (CALIPSO) lidar to provide a seamless retrieval of ice water content, effective radius, and extinction coefficient from the thinnest cirrus (seen only by the lidar) to the thickest ice cloud (penetrated only by the radar). In this paper, several versions of the VarCloud retrieval are compared with the CloudSat standard ice-only retrieval of ice water content, two empirical formulas that derive ice water content from radar reflectivity and temperature, and retrievals of vertically integrated properties from the Moderate Resolution Imaging Spectroradiometer (MODIS) radiometer. The retrieved variables typically agree to within a factor of 2, on average, and most of the differences can be explained by the different microphysical assumptions. For example, the ice water content comparison illustrates the sensitivity of the retrievals to the assumed ice particle shape. If ice particles are modeled as oblate spheroids rather than spheres for radar scattering, the retrieved ice water content is reduced by 50% on average in clouds with a reflectivity factor larger than 0 dBZ. VarCloud retrieves optical depths that are on average a factor of 2 lower than those from MODIS, which can be explained by the different assumptions on particle mass and area; if VarCloud mimics the MODIS assumptions, better agreement is found in effective radius, although optical depth is then overestimated. However, MODIS predicts the mean vertically integrated ice water content to be around a factor of 3 lower than that from VarCloud for the same retrievals, because the MODIS algorithm assumes that its retrieved effective radius (which is mostly representative of cloud top) is constant throughout the depth of the cloud. These comparisons highlight the need to refine microphysical assumptions in all retrieval algorithms, and the need for future studies to compare not only mean values but also full probability density functions.
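
The "empirical formulas that derive ice water content from radar reflectivity and temperature" are typically expressed as a regression of log10(IWC) on reflectivity and temperature. The sketch below shows only that generic functional form; the coefficients are placeholders for illustration and are not the fitted values used in the paper.

```python
import numpy as np

def iwc_from_z_t(Z_dBZ, T_celsius, a=0.06, b=-0.02, c=-1.7):
    """Generic empirical ice water content retrieval of the form
    log10(IWC) = a*Z + b*T + c, with IWC in g m^-3, Z in dBZ, T in deg C.

    Placeholder coefficients only; published Z-T regressions supply
    fitted values for a, b and c.
    """
    return 10.0 ** (a * np.asarray(Z_dBZ) + b * np.asarray(T_celsius) + c)

# Example: a 0 dBZ echo at -40 deg C.
print(iwc_from_z_t(0.0, -40.0))
```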

Relevance:

30.00%

Publisher:

Abstract:

Recent research has shown that Lighthill–Ford spontaneous gravity wave generation theory, when applied to numerical model data, can help predict areas of clear-air turbulence. It is hypothesized that this is the case because spontaneously generated atmospheric gravity waves may initiate turbulence by locally modifying the stability and wind shear. As an improvement on the original research, this paper describes the creation of an ‘operational’ algorithm (ULTURB) with three modifications to the original method: (1) extending the altitude range for which the method is effective downward to the top of the boundary layer, (2) adding turbulent kinetic energy production from the environment to the locally produced turbulent kinetic energy production, and (3) transforming turbulent kinetic energy dissipation to eddy dissipation rate, which is becoming the worldwide ‘standard’ turbulence metric. In a comparison of ULTURB with the original method and with the Graphical Turbulence Guidance second version (GTG2) automated procedure for forecasting mid- and upper-level aircraft turbulence, ULTURB performed better for all turbulence intensities. Since ULTURB, unlike GTG2, is founded on a self-consistent dynamical theory, it may offer forecasters better insight into the causes of clear-air turbulence and may ultimately enhance its predictability.
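
The conversion in modification (3) is commonly taken as the cube root of the turbulent kinetic energy dissipation rate, EDR = ε^(1/3), the aviation-standard turbulence metric. A one-line sketch of that transformation (assuming this is the conversion intended):

```python
def eddy_dissipation_rate(epsilon):
    """Convert TKE dissipation rate epsilon (m^2 s^-3) to eddy dissipation
    rate EDR = epsilon**(1/3) (m^(2/3) s^-1), the standard aviation
    turbulence metric."""
    return epsilon ** (1.0 / 3.0)

print(eddy_dissipation_rate(0.01))  # ~0.215
```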

Relevance:

30.00%

Publisher:

Abstract:

We present an efficient graph-based algorithm for quantifying the similarity of household-level energy use profiles, using a notion of similarity that allows for small time-shifts when comparing profiles. Experimental results on a real smart meter data set demonstrate that in cases of practical interest our technique is far faster than the existing method for computing the same similarity measure. Having a fast algorithm for measuring profile similarity improves the efficiency of tasks such as clustering of customers and cross-validation of forecasting methods using historical data. Furthermore, we apply a generalisation of our algorithm to produce substantially better household-level energy use forecasts from historical smart meter data.
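
The fast graph-based algorithm itself is not given in the abstract. As a point of reference, the sketch below shows a naive way to compute a shift-tolerant distance between two profiles by taking the minimum distance over small circular time-shifts; it illustrates the notion of similarity being computed, not the paper's method.

```python
import numpy as np

def shift_tolerant_distance(p, q, max_shift=2):
    """Minimum Euclidean distance between profiles p and q over circular
    time-shifts of at most max_shift slots (naive O(S*N) reference version)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return min(np.linalg.norm(p - np.roll(q, s))
               for s in range(-max_shift, max_shift + 1))

# Two half-hourly profiles that differ only by a one-slot shift are "close".
a = np.array([0, 0, 1, 5, 1, 0, 0, 0], float)
b = np.roll(a, 1)
print(shift_tolerant_distance(a, b))  # 0.0
```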

Relevance:

30.00%

Publisher:

Abstract:

In this paper a modified algorithm is suggested for developing polynomial neural network (PNN) models. Optimal partial description (PD) modeling is introduced at each layer of the PNN expansion, a task accomplished using the orthogonal least squares (OLS) method. Based on the initial PD models determined by the polynomial order and the number of PD inputs, OLS selects the most significant regressor terms, reducing the output error variance. The method produces PNN models exhibiting a high level of accuracy and superior generalization capabilities. Additionally, parsimonious models are obtained, comprising considerably fewer parameters than those generated by the conventional PNN algorithm. Three benchmark examples are considered: modeling of the gas furnace process and the iris and wine classification problems. Extensive simulation results and comparisons with other methods in the literature demonstrate the effectiveness of the suggested modeling approach.
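
A simplified version of the OLS regressor-selection step can be sketched as a greedy forward selection that, at each step, adds the candidate regressor giving the largest reduction in residual sum of squares. The classical OLS algorithm does this efficiently via orthogonalization and error-reduction ratios; the plain-least-squares version below only mimics that behaviour and is not the paper's implementation.

```python
import numpy as np

def greedy_ols_selection(X, y, n_terms):
    """Greedy forward selection of regressors (columns of X) that most reduce
    the output error variance -- a simplified stand-in for classical OLS
    selection based on error-reduction ratios."""
    selected, remaining = [], list(range(X.shape[1]))
    for _ in range(n_terms):
        best_j, best_rss = None, np.inf
        for j in remaining:
            cols = selected + [j]
            theta, rss, *_ = np.linalg.lstsq(X[:, cols], y, rcond=None)
            rss = rss[0] if len(rss) else np.sum((y - X[:, cols] @ theta) ** 2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
        remaining.remove(best_j)
    return selected
```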

Relevance:

30.00%

Publisher:

Abstract:

Model studies do not agree on future changes in tropical cyclone (TC) activity on regional scales. We aim to shed further light on the distribution, frequency, intensity, and seasonality of TCs that society can expect at the end of the twenty-first century in the Southern Hemisphere (SH). Therefore, we investigate TC changes simulated by the atmospheric model ECHAM5 with T213 (~60 km) horizontal resolution. We identify TCs in present-day (20C; 1969–1990) and future (21C; 2069–2100) time slice simulations, using a tracking algorithm based on vorticity at 850 hPa. In contrast to the Northern Hemisphere (NH), where tropical storm numbers decrease by 6%, there is a more dramatic 22% reduction in the SH, mainly in the South Indian Ocean. While an increase of static stability in 21C may partly explain the reduction in tropical storm numbers, stabilization alone cannot explain the larger SH drop. Large-scale circulation changes associated with a weakening of the Tropical Walker Circulation are hypothesized to cause the strong decrease of cyclones in the South Indian Ocean. In contrast, the decrease found over the South Pacific appears to be partly related to increased vertical wind shear, which is possibly associated with an enhanced meridional sea surface temperature gradient. We find that the main difference between the hemispheres lies in the changes of tropical cyclones of intermediate strength, with an increase in the NH and a decrease in the SH. In both hemispheres the frequency of the strongest storms increases and the frequency of the weakest storms decreases, although the increase in SH intense storms is marginal.
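
The tracking step ("based on vorticity at 850 hPa") can be illustrated schematically: candidate storm centres are local maxima of the 850 hPa relative vorticity above a threshold, linked in time by nearest-neighbour association. The sketch below is a generic outline with hypothetical thresholds, not the specific tracker used in the study.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_candidate_centres(vort850, threshold=3.5e-5, window=5):
    """Grid indices of local maxima of 850 hPa relative vorticity above a
    threshold (Southern Hemisphere sign convention ignored for simplicity)."""
    local_max = vort850 == maximum_filter(vort850, size=window)
    return np.argwhere(local_max & (vort850 > threshold))

def link_tracks(prev_centres, new_centres, max_dist=5.0):
    """Nearest-neighbour association of centres between consecutive time steps."""
    links = []
    for c in new_centres:
        if len(prev_centres):
            d = np.linalg.norm(prev_centres - c, axis=1)
            j = int(np.argmin(d))
            links.append((j, tuple(c)) if d[j] <= max_dist else (None, tuple(c)))
        else:
            links.append((None, tuple(c)))
    return links
```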

Relevance:

30.00%

Publisher:

Abstract:

Observations from the Heliospheric Imager (HI) instruments aboard the twin STEREO spacecraft have enabled the compilation of several catalogues of coronal mass ejections (CMEs), each characterizing the propagation of CMEs through the inner heliosphere. Three such catalogues are the Rutherford Appleton Laboratory (RAL)-HI event list, the Solar Stormwatch CME catalogue, and, presented here, the J-tracker catalogue. Each catalogue uses a different method to characterize the location of CME fronts in the HI images: manual identification by an expert, the statistical reduction of the manual identifications of many citizen scientists, and an automated algorithm. We provide a quantitative comparison of the differences between these catalogues and techniques, using 51 CMEs common to each catalogue. The time-elongation profiles of these CME fronts are compared, as are the estimates of the CME kinematics derived from application of three widely used single-spacecraft-fitting techniques. The J-tracker and RAL-HI profiles are most similar, while the Solar Stormwatch profiles display a small systematic offset. Evidence is presented that these differences arise because the RAL-HI and J-tracker profiles follow the sunward edge of CME density enhancements, while Solar Stormwatch profiles track closer to the antisunward (leading) edge. We demonstrate that the method used to produce the time-elongation profile typically introduces more variability into the kinematic estimates than differences between the various single-spacecraft-fitting techniques. This has implications for the repeatability and robustness of these types of analyses, arguably especially so in the context of space weather forecasting, where it could make the results strongly dependent on the methods used by the forecaster.
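
One of the widely used single-spacecraft-fitting techniques is the Fixed-Phi approximation, in which a point-like CME front travelling radially at constant speed, at a fixed angle from the observer–Sun line, produces the elongation–time profile below. The sketch reproduces only that standard geometric relation; the catalogue-specific front identification and fitting procedures are not shown.

```python
import numpy as np

def fixed_phi_elongation(t, V, phi_deg, d_obs=1.496e8, r0=0.0):
    """Elongation (deg) versus time for Fixed-Phi geometry: a point front moving
    radially at constant speed V (km/s) at angle phi from the observer-Sun line,
    with the observer at heliocentric distance d_obs (km).

        tan(eps) = r sin(phi) / (d_obs - r cos(phi)),   r = r0 + V * t
    """
    phi = np.deg2rad(phi_deg)
    r = r0 + V * np.asarray(t, float)   # heliocentric distance of the front
    eps = np.arctan2(r * np.sin(phi), d_obs - r * np.cos(phi))
    return np.rad2deg(eps)

# Example: a 400 km/s front at phi = 60 deg, sampled every 2 hours for 3 days.
t = np.arange(0, 3 * 86400, 7200)
print(fixed_phi_elongation(t, V=400.0, phi_deg=60.0)[:5])
```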

Relevance:

30.00%

Publisher:

Abstract:

Empirical mode decomposition (EMD) is a data-driven method used to decompose data into oscillatory components. This paper examines to what extent the EMD algorithm is sensitive to the numerical data format. Two key issues with EMD are its stability and computational speed. This paper shows that for a given signal there is no significant difference between results obtained with single (binary32) and double (binary64) floating-point precision. This implies that there is no benefit in increasing floating-point precision when performing EMD on devices optimised for the single-precision format, such as graphics processing units (GPUs).
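
To make the precision question concrete, the sketch below runs one sifting step of a bare-bones EMD implementation on the same signal in binary32 and binary64 and compares the results. It uses cubic-spline envelopes of the extrema and omits the stopping criteria; it is an illustrative minimal sift, not the implementation studied in the paper.

```python
import numpy as np
from scipy.interpolate import CubicSpline
from scipy.signal import argrelextrema

def sift_once(x):
    """One EMD sifting step: subtract the mean of the upper and lower
    cubic-spline envelopes of the local extrema."""
    n = np.arange(len(x))
    imax = argrelextrema(x, np.greater)[0]
    imin = argrelextrema(x, np.less)[0]
    upper = CubicSpline(imax, x[imax])(n)
    lower = CubicSpline(imin, x[imin])(n)
    return x - (upper + lower) / 2

t = np.linspace(0, 1, 2000)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 40 * t)

h32 = sift_once(signal.astype(np.float32))
h64 = sift_once(signal.astype(np.float64))
# Difference is on the order of float32 round-off of the input signal.
print(np.max(np.abs(h32.astype(np.float64) - h64)))
```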

Relevance:

30.00%

Publisher:

Abstract:

This thesis work concentrates on a very interesting problem, the Vehicle Routing Problem (VRP). In this problem, customers or cities have to be visited and packages have to be transported to each of them, starting from a base point on the map. The goal is to solve the transportation problem: to deliver the packages on time, with enough packages for each customer, using the available resources, and, of course, as efficiently as possible.

Although this problem may seem easy to solve for a small number of cities or customers, it is not. The algorithm has to deal with several constraints, for example opening hours, package delivery times and truck capacities, which makes it a so-called Multi-Constraint Optimization Problem (MCOP). What is more, the problem is intractable with the amount of computational power available to most of us: as the number of customers grows, the amount of computation grows exponentially, because all constraints have to be checked for every customer, and it should not be forgotten that a solution that is good enough has to be found before the time available for the calculation runs out. The first chapter introduces the problem from its basics, the Travelling Salesman Problem, and uses some theoretical and mathematical background to show why it is so hard to optimize and why, even though no best algorithm is known for large numbers of customers, it is still worth tackling. Just think of a huge transportation company with tens of thousands of trucks and millions of customers: how much money could be saved if the optimal path for all packages were known?

Although no best algorithm is known for this kind of optimization problem, the second and third chapters attempt to give an acceptable solution by describing two algorithms: the Genetic Algorithm and Simulated Annealing. Both are inspired by processes from nature and materials science. These algorithms will hardly ever find the best solution to the problem, but in many cases they can give a very good solution within acceptable calculation time. In these chapters the Genetic Algorithm and Simulated Annealing are described in detail, from their basis in the “real world” through their terminology to a basic implementation. The work puts stress on the limits of these algorithms, their advantages and disadvantages, and a comparison between them.

Finally, after the theory has been presented, a simulation is executed in an artificial VRP environment with both Simulated Annealing and the Genetic Algorithm. They solve the same problem in the same environment and are compared to each other. The environment and the implementation are also described, as are the test results obtained. Lastly, possible improvements of these algorithms are discussed, and the work tries to answer the “big” question, “Which algorithm is better?”, if such a question even exists.
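
As a taste of the second of the two approaches discussed, here is a minimal simulated annealing loop for a single closed routing tour: a random 2-opt-style neighbour is accepted when it is better, or with probability exp(-Δ/T) when it is worse, and the temperature is decayed geometrically. The capacities, time windows and other VRP constraints treated in the thesis are deliberately left out of this sketch.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def simulated_annealing(dist, T=100.0, cooling=0.995, steps=20000, seed=0):
    """Minimal SA for a single closed tour over the nodes of a distance matrix."""
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    rng.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    cur_len = best_len
    for _ in range(steps):
        i, j = sorted(rng.sample(range(len(tour)), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # 2-opt style reversal
        cand_len = tour_length(cand, dist)
        delta = cand_len - cur_len
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / T):
            tour, cur_len = cand, cand_len
            if cur_len < best_len:
                best, best_len = tour[:], cur_len
        T *= cooling  # geometric cooling schedule
    return best, best_len
```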

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a new method for solving large-scale p-median problem instances based on real data. We compare different approaches in terms of runtime, memory footprint and quality of solutions obtained. In order to test the different methods on real data, we introduce a new benchmark for the p-median problem based on real Swedish data. Because of the size of the problem addressed, up to 1938 candidate nodes, a number of algorithms, both exact and heuristic, are considered. We also propose an improved hybrid version of a genetic algorithm called impGA. Experiments show that impGA behaves as well as other methods for the standard set of medium-size problems taken from Beasley’s benchmark, but produces comparatively good results in terms of quality, runtime and memory footprint on our specific benchmark based on real Swedish data.
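
For reference, the p-median objective that every compared algorithm (exact, heuristic, or the impGA variant) must evaluate is simply the sum, over demand points, of the distance to the nearest chosen median. A sketch of that evaluation plus a naive greedy baseline (not any of the paper's methods) is shown below.

```python
import numpy as np

def pmedian_cost(dist, medians):
    """Sum over demand points of the distance to the closest chosen median.
    dist is an (n_demand, n_candidate) distance matrix."""
    return dist[:, list(medians)].min(axis=1).sum()

def greedy_pmedian(dist, p):
    """Naive greedy baseline: repeatedly add the candidate that lowers the cost most."""
    chosen = []
    for _ in range(p):
        best = min((j for j in range(dist.shape[1]) if j not in chosen),
                   key=lambda j: pmedian_cost(dist, chosen + [j]))
        chosen.append(best)
    return chosen, pmedian_cost(dist, chosen)
```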

Relevance:

30.00%

Publisher:

Abstract:

The paper presents an extended genetic algorithm for solving the optimal transmission network expansion planning problem. Two main improvements have been introduced in the genetic algorithm: (a) an initial population obtained by conventional optimisation-based methods; (b) a mutation approach inspired by the simulated annealing technique. The proposed method is general in the sense that it does not assume any particular property of the problem being solved, such as linearity or convexity. Excellent performance is reported in the test results section of the paper for a difficult large-scale real-life problem: a substantial reduction in investment costs has been obtained with regard to previous solutions obtained via conventional optimisation methods and simulated annealing algorithms, and statistical comparison procedures have been employed in benchmarking different versions of the genetic algorithm and simulated annealing methods.
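
Improvement (b), a mutation step "inspired by the simulated annealing technique", can be read as allowing a mutated individual to replace its parent even when it is worse, with a probability controlled by a temperature that is lowered over the generations. The sketch below shows only that acceptance rule in isolation, under that reading; the actual encoding and fitness of the transmission-expansion problem are not modelled.

```python
import math
import random

def annealed_mutation_accept(parent_cost, mutant_cost, temperature, rng=random):
    """Accept a mutated individual if it is better, or with Boltzmann probability
    exp(-(mutant_cost - parent_cost) / temperature) if it is worse."""
    delta = mutant_cost - parent_cost
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)

# Example cooling over generations: T_k = T0 * alpha**k
T0, alpha = 10.0, 0.95
for generation in range(3):
    T = T0 * alpha ** generation
    print(generation, annealed_mutation_accept(100.0, 103.0, T))
```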

Relevance:

30.00%

Publisher:

Abstract:

In this work, genetic algorithm concepts, along with a rotamer library for protein side chains, are used to optimize the tertiary structure of the hydrophobic core of Cytochrome b562, starting from the known PDB structure of its backbone, which is kept fixed while the side chains of the hydrophobic core are allowed to adopt the conformations present in the rotamer library. The atoms of the side chains forming the core interact via a van der Waals energy. Besides the prediction of the native core structure, a set of different amino acid sequences for this core is also suggested. Comparisons between these new cores and the native one are made in terms of their volumes, van der Waals energy values and the numbers of contacts made by the side chains forming the cores. This paper shows that genetic algorithms are efficient for designing new sequences for the protein core. (C) 2007 Elsevier B.V. All rights reserved.
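
The van der Waals interaction used to score the core is typically evaluated as a pairwise 12-6 Lennard-Jones sum over the side-chain atoms selected by a rotamer assignment. The sketch below shows that evaluation for hypothetical atom coordinates; the single epsilon/sigma pair and the rotamer-library layout are placeholders, not the parameters used in the paper.

```python
import numpy as np
from itertools import combinations

def lennard_jones_energy(coords, epsilon=0.2, sigma=3.5):
    """Pairwise 12-6 Lennard-Jones energy over all atom pairs in `coords`,
    an (N, 3) array of side-chain atom positions.  A single epsilon/sigma
    is used for every atom type, purely for illustration."""
    energy = 0.0
    for i, j in combinations(range(len(coords)), 2):
        r = np.linalg.norm(coords[i] - coords[j])
        sr6 = (sigma / r) ** 6
        energy += 4.0 * epsilon * (sr6 ** 2 - sr6)
    return energy

def core_energy(rotamer_choice, rotamer_library):
    """Energy of one GA individual: a choice of one rotamer (a small array of
    atom coordinates) per core residue, scored by the LJ sum over all atoms."""
    atoms = np.vstack([rotamer_library[res][idx] for res, idx in rotamer_choice.items()])
    return lennard_jones_energy(atoms)
```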