989 results for Algorithm Comparison
Abstract:
The third primary production algorithm round robin (PPARR3) compares output from 24 models that estimate depth-integrated primary production from satellite measurements of ocean color, as well as seven general circulation models (GCMs) coupled with ecosystem or biogeochemical models. Here we compare the global primary production fields corresponding to eight months of 1998 and 1999 as estimated from common input fields of photosynthetically available radiation (PAR), sea-surface temperature (SST), mixed-layer depth, and chlorophyll concentration. We also quantify the sensitivity of the ocean-color-based models to perturbations in their input variables. The pair-wise correlation between ocean-color models was used to cluster them into groups of related output, which reflect the regions and environmental conditions under which they respond differently. The groups do not follow model complexity with regard to wavelength or depth dependence, though they are related to the manner in which temperature is used to parameterize photosynthesis. Global average primary production varies by a factor of two between models. The models diverged the most for the Southern Ocean, SST under 10 degrees C, and chlorophyll concentrations exceeding 1 mg Chl m(-3). Based on the conditions under which the model results diverge most, we conclude that current ocean-color-based models are challenged by high-nutrient low-chlorophyll conditions and by extreme temperatures or chlorophyll concentrations. The GCM-based models predict primary production comparable to that of the ocean-color models: they estimate higher values in the Southern Ocean, at low SST, and in the equatorial band, while they estimate lower values in eutrophic regions (probably because the area of high chlorophyll concentrations is smaller in the GCMs). Further progress in primary production modeling requires improved understanding of the effect of temperature on photosynthesis and better parameterization of the maximum photosynthetic rate. (c) 2006 Elsevier Ltd. All rights reserved.
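The pairwise-correlation clustering step described above can be illustrated with a short sketch. This is not the study's code: the hierarchical clustering method, the 1 − correlation distance, and the cut-off threshold are illustrative assumptions.

```python
# Illustrative sketch (not the study's procedure): cluster primary-production
# models by the pairwise correlation of their global output fields.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def cluster_models(pp_fields, threshold=0.3):
    """pp_fields: array of shape (n_models, n_grid_cells), each row a model's
    global primary-production field flattened to one vector."""
    corr = np.corrcoef(pp_fields)               # pairwise correlation between models
    dist = 1.0 - corr                           # turn correlation into a distance
    np.fill_diagonal(dist, 0.0)
    condensed = squareform(dist, checks=False)  # condensed form required by linkage
    tree = linkage(condensed, method="average")
    return fcluster(tree, t=threshold, criterion="distance")

# Toy usage: 5 hypothetical models on 1000 grid cells
rng = np.random.default_rng(0)
fields = rng.lognormal(size=(5, 1000))
print(cluster_models(fields))
```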
Abstract:
This paper presents a comparison of reactive power support in distribution networks provided by switched Capacitor Banks (CBs) and Distributed Generators (DGs). Regarding switched CBs, a Tabu Search metaheuristic algorithm is developed to determine their optimal operation with the objective of reducing the power losses in the lines of the system while meeting network constraints. On the other hand, the optimal operation of DGs is analyzed through an evolutionary Multi-Objective (MO) programming approach. The objectives of this approach are the minimization of power losses and of the operation cost of the DGs. The comparison of the reactive power support provided by switched CBs and DGs is carried out using a modified IEEE 34-bus distribution test system.
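As an illustration of the Tabu Search idea applied to switched capacitor banks, the sketch below searches over on/off bank states with a one-bank-flip neighborhood and a short-term tabu list. The loss() function, tabu tenure, and neighborhood are illustrative assumptions; a real implementation would evaluate losses with a power-flow solver and enforce the network constraints.

```python
# Minimal Tabu Search sketch for choosing switched capacitor bank states.
# The loss() callable is a placeholder for a real power-flow loss evaluation.
import random

def tabu_search_cb(n_banks, loss, iters=200, tenure=7, seed=0):
    rng = random.Random(seed)
    state = [rng.randint(0, 1) for _ in range(n_banks)]   # 1 = bank switched on
    best, best_cost = state[:], loss(state)
    tabu = {}                                             # bank index -> iteration until tabu
    for it in range(iters):
        candidates = []
        for i in range(n_banks):                          # neighborhood: flip one bank
            neigh = state[:]
            neigh[i] ^= 1
            cost = loss(neigh)
            is_tabu = tabu.get(i, -1) > it
            if not is_tabu or cost < best_cost:           # aspiration criterion
                candidates.append((cost, i, neigh))
        if not candidates:
            continue
        cost, i, state = min(candidates)
        tabu[i] = it + tenure
        if cost < best_cost:
            best, best_cost = state[:], cost
    return best, best_cost

# Toy usage with a made-up quadratic loss (stand-in for network losses)
target = [1, 0, 1, 1, 0, 0, 1, 0]
toy_loss = lambda s: sum((a - b) ** 2 for a, b in zip(s, target))
print(tabu_search_cb(len(target), toy_loss))
```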
Abstract:
We introduce a new hybrid approach to determine the ground-state geometry of molecular systems. First, we compared the ability of a genetic algorithm (GA) and of simulated annealing (SA) to find the lowest-energy geometry of silicon clusters with six and ten atoms. This comparison showed that the GA exhibits fast initial convergence, but its performance deteriorates as it approaches the desired global minimum. Interestingly, SA showed a complementary convergence pattern, in addition to high accuracy. Our new procedure combines selected features from GA and SA to achieve weak dependence on initial parameters, a parallel search strategy, fast convergence, and high accuracy. This hybrid algorithm outperforms GA and SA by one order of magnitude for small silicon clusters (Si6 and Si10). Next, we applied the hybrid method to study the geometry of a 20-atom silicon cluster. It was able to find an original geometry, apparently lower in energy than those previously described in the literature. In principle, our procedure can be applied successfully to any molecular system. © 1998 Elsevier Science B.V.
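A schematic of the hybrid strategy, assuming the simplest reading of it (a GA phase for fast initial convergence followed by an SA phase that refines the best individual). The objective function, operators, and parameters below are illustrative, not the paper's silicon-cluster energy model.

```python
# Schematic hybrid GA + SA sketch: GA explores broadly and converges quickly,
# then simulated annealing refines the best individual for accuracy.
# The objective is a toy function, not a silicon-cluster energy model.
import math
import random

def hybrid_ga_sa(energy, dim, pop_size=30, ga_gens=50, sa_steps=2000, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]

    # --- GA phase: tournament selection, blend crossover, Gaussian mutation ---
    for _ in range(ga_gens):
        new_pop = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=energy)
            b = min(rng.sample(pop, 3), key=energy)
            child = [(x + y) / 2 + rng.gauss(0, 0.1) for x, y in zip(a, b)]
            new_pop.append(child)
        pop = sorted(pop + new_pop, key=energy)[:pop_size]   # elitism

    # --- SA phase: refine the best GA individual ---
    x, e = pop[0], energy(pop[0])
    temp = 1.0
    for _ in range(sa_steps):
        cand = [xi + rng.gauss(0, 0.05) for xi in x]
        ec = energy(cand)
        if ec < e or rng.random() < math.exp(-(ec - e) / temp):
            x, e = cand, ec
        temp *= 0.999                                        # geometric cooling
    return x, e

# Toy usage: minimize a shifted quadratic
print(hybrid_ga_sa(lambda v: sum((vi - 1.0) ** 2 for vi in v), dim=6)[1])
```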
Abstract:
The risk for venous thromboembolism (VTE) in medical patients is high, but risk assessment is rarely performed because there is not yet a good method to identify candidates for prophylaxis. Purpose: To perform a systematic review of VTE risk factors (RFs) in hospitalized medical patients and generate recommendations (RECs) for prophylaxis that can be implemented in practice. Data sources: A multidisciplinary group of experts from 12 Brazilian Medical Societies searched MEDLINE, Cochrane, and LILACS. Study selection: Two experts independently classified the evidence for each RF by its scientific quality in a standardized manner. A risk-assessment algorithm was created based on the results of the review. Data synthesis: Several VTE RFs have enough evidence to support RECs for prophylaxis in hospitalized medical patients (e.g., increasing age, heart failure, and stroke). Other factors are considered adjuncts of risk (e.g., varices, obesity, and infections). According to the algorithm, hospitalized medical patients ≥40 years old with decreased mobility and ≥1 RF should receive chemoprophylaxis with heparin, provided they have no contraindications. High prophylactic doses of unfractionated heparin or low-molecular-weight heparin must be administered and maintained for 6-14 days. Conclusions: A multidisciplinary group generated evidence-based RECs and an easy-to-use algorithm to facilitate VTE prophylaxis in medical patients. © 2007 Rocha et al, publisher and licensee Dove Medical Press Ltd.
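The decision rule stated in the abstract (age ≥40, decreased mobility, ≥1 risk factor, no contraindication) can be written down directly; the function and argument names below are illustrative, and the published algorithm contains more detail than this sketch.

```python
# Sketch of the risk-assessment rule stated in the abstract: hospitalized medical
# patients >= 40 years old, with decreased mobility and at least one risk factor,
# should receive heparin chemoprophylaxis unless contraindicated.
# Field names and this function are illustrative, not the published algorithm.
def recommend_vte_prophylaxis(age, decreased_mobility, n_risk_factors, contraindication):
    if contraindication:
        return "no chemoprophylaxis (contraindication present)"
    if age >= 40 and decreased_mobility and n_risk_factors >= 1:
        return "chemoprophylaxis with UFH or LMWH, maintained for 6-14 days"
    return "reassess; criteria for chemoprophylaxis not met"

print(recommend_vte_prophylaxis(age=72, decreased_mobility=True,
                                n_risk_factors=2, contraindication=False))
```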
Abstract:
This chapter studies a two-level production planning problem where, on each level, a lot sizing and scheduling problem with parallel machines, capacity constraints and sequence-dependent setup costs and times must be solved. The problem can be found in soft drink companies where the production process involves two interdependent levels with decisions concerning raw material storage and soft drink bottling. Models and solution approaches proposed so far are surveyed and conceptually compared. Two different approaches have been selected to perform a series of computational comparisons: an evolutionary technique comprising a genetic algorithm and its memetic version, and a decomposition and relaxation approach. © 2008 Springer-Verlag Berlin Heidelberg.
Abstract:
Hepatocellular carcinoma (HCC) is a primary tumor of the liver. After local therapies, tumor evaluation is based on the mRECIST criteria, which involve measuring the maximum diameter of the viable lesion. This paper describes a computational methodology to measure the maximum diameter of the tumor through the contrast-enhanced area of the lesions. Sixty-three computed tomography (CT) slices from 23 patients were assessed. Non-contrasted liver and typical HCC nodules were evaluated, and a virtual phantom was developed for this purpose. Detection and quantification by the algorithm were optimized using the virtual phantom. We then compared the algorithm's maximum-diameter measurements of the target lesions against the radiologist's measurements. The computed maximum diameters are in good agreement with the radiologist's evaluation, indicating that the algorithm was able to properly detect the tumor limits. A comparison of the maximum diameter estimated by the radiologist versus the algorithm revealed differences on the order of 0.25 cm for large tumors (diameter > 5 cm), whereas agreement within 1.0 cm was found for small tumors. Differences between algorithm and radiologist measurements were small for small tumors, with a trend toward a slight increase for tumors greater than 5 cm. Therefore, traditional methods for measuring lesion diameter should be complemented with non-subjective measurement methods, which would allow a more accurate evaluation of the contrast-enhanced areas of HCC according to the mRECIST criteria.
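As a sketch of how a maximum diameter can be obtained computationally, the code below takes a binary mask of the contrast-enhanced lesion (the segmentation step itself is not shown and is assumed to exist) and returns the largest pairwise distance between boundary pixels, scaled by an assumed pixel spacing. This is an illustration, not the paper's algorithm.

```python
# Sketch: maximum diameter of a segmented lesion as the largest pairwise distance
# between its boundary pixels, scaled by the pixel spacing.
import numpy as np

def max_diameter_cm(mask, pixel_spacing_cm):
    ys, xs = np.nonzero(mask)
    if ys.size == 0:
        return 0.0
    # Keep only boundary pixels: on the image border or with a background 4-neighbour.
    boundary = []
    for y, x in zip(ys, xs):
        if (y == 0 or x == 0 or y == mask.shape[0] - 1 or x == mask.shape[1] - 1
                or not (mask[y - 1, x] and mask[y + 1, x]
                        and mask[y, x - 1] and mask[y, x + 1])):
            boundary.append((y, x))
    pts = np.array(boundary, dtype=float) * pixel_spacing_cm
    diffs = pts[:, None, :] - pts[None, :, :]          # all pairwise displacement vectors
    return float(np.sqrt((diffs ** 2).sum(-1)).max())  # largest pairwise distance

# Toy usage: a filled disc of radius 20 pixels at 0.05 cm/pixel -> diameter ~2 cm
yy, xx = np.mgrid[:64, :64]
disc = (yy - 32) ** 2 + (xx - 32) ** 2 <= 20 ** 2
print(round(max_diameter_cm(disc, 0.05), 2))
```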
Abstract:
The algorithm creates a buffer area around the cartographic features of interest in one of the images and compares it with the other image. During the comparison, the algorithm counts the matching and non-matching points and uses these counts to calculate the statistical values of the analysis. One of these values is the correctness, which shows the user the percentage of extracted points that were correctly extracted. Another is the completeness, which shows the percentage of the points of the feature of interest that were actually captured. The third value expresses the quality obtained by the extraction method, computed from the previously calculated correctness and completeness. In all tests performed with this algorithm, the calculated statistical values could be used to represent quantitatively the quality obtained by the extraction method. It is therefore possible to say that the developed algorithm can be used to analyze extraction methods for cartographic features of interest, since the results obtained were promising.
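A minimal sketch of this buffer-based evaluation is given below. The abstract does not spell out the exact formulas, so the standard definitions are assumed: correctness = TP/(TP+FP), completeness = matched reference points / all reference points, and quality = TP/(TP+FP+FN), with matches decided by a point-to-feature buffer distance.

```python
# Sketch of the buffer-based comparison: count extracted points that fall inside a
# buffer around the reference feature (and vice versa), then compute correctness,
# completeness, and quality. The exact formulas are assumed, not quoted from the paper.
import math

def within_buffer(p, feature, buffer_dist):
    return any(math.dist(p, q) <= buffer_dist for q in feature)

def evaluate_extraction(extracted, reference, buffer_dist=1.0):
    tp = sum(within_buffer(p, reference, buffer_dist) for p in extracted)      # matched extracted points
    fp = len(extracted) - tp                                                   # extracted but wrong
    fn = sum(not within_buffer(q, extracted, buffer_dist) for q in reference)  # missed reference points
    correctness  = tp / (tp + fp) if tp + fp else 0.0
    completeness = (len(reference) - fn) / len(reference) if reference else 0.0
    quality      = tp / (tp + fp + fn) if tp + fp + fn else 0.0
    return correctness, completeness, quality

# Toy usage: reference road points vs. a slightly noisy, partial extraction
reference = [(x, 0.0) for x in range(10)]
extracted = [(x, 0.3) for x in range(8)] + [(20.0, 20.0)]
print(evaluate_extraction(extracted, reference))
```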
Abstract:
The increasing amount of sequences stored in genomic databases has made purely sequential analysis unfeasible. Parallel computing has therefore been brought to Bioinformatics through parallel algorithms for aligning and analyzing sequences, improving mainly their running time. In many situations, the parallel strategy helps to cope with the computational cost of large problems. This work shows results obtained with an implementation of a parallel score-estimating technique for the score matrix calculation stage, which is the first stage of a progressive multiple sequence alignment. The performance and quality of the parallel score estimation are compared with the results of a dynamic programming approach, also implemented in parallel. This comparison shows a significant reduction in running time. Moreover, the quality of the final alignment obtained with the new strategy is analyzed and compared with the quality of the dynamic programming approach.
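A minimal illustration of the parallel score-matrix stage is sketched below with Python's multiprocessing: each sequence pair is scored in a worker process and the symmetric matrix is assembled afterwards. The pair_score() function here is a crude identity count standing in for the paper's score-estimating technique (or for a full dynamic-programming alignment).

```python
# Minimal sketch of filling the pairwise score matrix of a progressive MSA in parallel.
from itertools import combinations
from multiprocessing import Pool

def pair_score(args):
    i, j, a, b = args
    matches = sum(x == y for x, y in zip(a, b))   # naive stand-in scoring
    return i, j, matches

def score_matrix(seqs, processes=4):
    n = len(seqs)
    tasks = [(i, j, seqs[i], seqs[j]) for i, j in combinations(range(n), 2)]
    matrix = [[0] * n for _ in range(n)]
    with Pool(processes) as pool:
        for i, j, s in pool.map(pair_score, tasks):   # pairs scored in worker processes
            matrix[i][j] = matrix[j][i] = s
    return matrix

if __name__ == "__main__":
    seqs = ["ACGTACGT", "ACGTTCGT", "TTGTACGA", "ACGAACGT"]
    for row in score_matrix(seqs, processes=2):
        print(row)
```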
Abstract:
This paper presents a performance analysis of a baseband multiple-input single-output ultra-wideband system over scenarios CM1 and CM3 of the IEEE 802.15.3a channel model, incorporating four different schemes of pre-distortion: time reversal, zero-forcing pre-equaliser, constrained least squares pre-equaliser, and minimum mean square error pre-equaliser. For the third case, a simple solution based on the steepest-descent (gradient) algorithm is adopted and compared with theoretical results. The channel estimations at the transmitter are assumed to be truncated and noisy. Results show that the constrained least squares algorithm has a good trade-off between intersymbol interference reduction and signal-to-noise ratio preservation, providing a performance comparable to the minimum mean square error method but with lower computational complexity. Copyright (C) 2011 John Wiley & Sons, Ltd.
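To illustrate the steepest-descent (gradient) idea used for the pre-equaliser, the sketch below adapts a pre-filter g so that the cascade channel * g approximates a delayed unit impulse, i.e. it minimizes the least-squares cost ||Hg − d||². The channel taps, filter length, delay, and step size are illustrative assumptions, and the power constraint of the actual constrained least squares scheme is omitted.

```python
# Sketch of a least-squares pre-equaliser adapted by steepest descent: choose g so
# that (channel * g) approximates a delayed impulse, reducing intersymbol interference.
import numpy as np

def steepest_descent_preeq(h, g_len=16, delay=8, mu=0.05, iters=2000):
    L = len(h) + g_len - 1
    # Convolution matrix H such that H @ g == np.convolve(h, g)
    H = np.zeros((L, g_len))
    for k in range(g_len):
        H[k:k + len(h), k] = h
    d = np.zeros(L)
    d[delay] = 1.0                      # desired overall response: delayed impulse
    g = np.zeros(g_len)
    for _ in range(iters):
        err = H @ g - d
        g -= mu * (H.T @ err)           # gradient of ||H g - d||^2 (up to a factor of 2)
    return g

# Toy multipath channel (illustrative taps, not an IEEE 802.15.3a realization)
h = np.array([1.0, 0.5, -0.3, 0.1])
g = steepest_descent_preeq(h)
print(np.round(np.convolve(h, g), 2))   # should be close to a delayed unit impulse
```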
Abstract:
A deep theoretical analysis of the graph cut image segmentation framework presented in this paper simultaneously translates into important contributions in several directions. The most important practical contribution of this work is a full theoretical description, and implementation, of a novel powerful segmentation algorithm, GC(max). The output of GC(max) coincides with a version of a segmentation algorithm known as Iterative Relative Fuzzy Connectedness, IRFC. However, GC(max) is considerably faster than the classic IRFC algorithm, which we prove theoretically and show experimentally. Specifically, we prove that, in the worst case scenario, the GC(max) algorithm runs in linear time with respect to the variable M = |C| + |Z|, where |C| is the image scene size and |Z| is the size of the allowable range, Z, of the associated weight/affinity function. For most implementations, Z is identical to the set of allowable image intensity values, and its size can be treated as small with respect to |C|, meaning that O(M) = O(|C|). In such a situation, GC(max) runs in linear time with respect to the image size |C|. We show that the output of GC(max) constitutes a solution of a graph cut energy minimization problem, in which the energy is defined as the ℓ∞ norm ||F_P||_∞ of the map F_P that associates, with every element e from the boundary of an object P, its weight w(e). This formulation brings IRFC algorithms to the realm of the graph cut energy minimizers, with energy functions ||F_P||_q for q ∈ [1, ∞]. Of these, the best known minimization problem is for the energy ||F_P||_1, which is solved by the classic min-cut/max-flow algorithm, often referred to as the Graph Cut algorithm. We notice that a minimization problem for ||F_P||_q, q ∈ [1, ∞), is identical to that for ||F_P||_1 when the original weight function w is replaced by w^q. Thus, any algorithm GC(sum) solving the ||F_P||_1 minimization problem also solves the one for ||F_P||_q with q ∈ [1, ∞), so just two algorithms, GC(sum) and GC(max), are enough to solve all ||F_P||_q-minimization problems. We also show that, for any fixed weight assignment, the solutions of the ||F_P||_q-minimization problems converge to a solution of the ||F_P||_∞-minimization problem (the fact that ||F_P||_∞ = lim_{q→∞} ||F_P||_q is not enough to deduce this). An experimental comparison of the performance of the GC(max) and GC(sum) algorithms is included. This concentrates on comparing the actual (as opposed to provable worst-case) running times of the algorithms, as well as the influence of the choice of seeds on the output.
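Restated in conventional notation, purely as a readability aid for the energies named above (bd(P) denotes the boundary of object P and w the weight/affinity function):

```latex
% The family of graph-cut energies discussed in the abstract.
\varepsilon_q(P) = \lVert F_P \rVert_q
  = \Bigl(\sum_{e \in \mathrm{bd}(P)} w(e)^q\Bigr)^{1/q}, \qquad q \in [1,\infty),
\qquad
\varepsilon_\infty(P) = \lVert F_P \rVert_\infty
  = \max_{e \in \mathrm{bd}(P)} w(e).
```

Replacing w by w^q turns the ||F_P||_q problem into a ||F_P||_1 problem, which is why GC(sum) (the classic min-cut/max-flow minimizer of ||F_P||_1) and GC(max) (the minimizer of ||F_P||_∞) together cover the whole family.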
Abstract:
XML similarity evaluation has become a central issue in the database and information communities, its applications ranging over document clustering, version control, data integration and ranked retrieval. Various algorithms for comparing hierarchically structured data, XML documents in particular, have been proposed in the literature. Most of them make use of techniques for finding the edit distance between tree structures, XML documents being commonly modeled as Ordered Labeled Trees. Yet, a thorough investigation of current approaches led us to identify several similarity aspects, i.e., sub-tree related structural and semantic similarities, which are not sufficiently addressed while comparing XML documents. In this paper, we provide an integrated and fine-grained comparison framework to deal with both structural and semantic similarities in XML documents (detecting the occurrences and repetitions of structurally and semantically similar sub-trees), and to allow the end-user to adjust the comparison process according to her requirements. Our framework consists of four main modules for (i) discovering the structural commonalities between sub-trees, (ii) identifying sub-tree semantic resemblances, (iii) computing tree-based edit operations costs, and (iv) computing tree edit distance. Experimental results demonstrate higher comparison accuracy with respect to alternative methods, while timing experiments reflect the impact of semantic similarity on overall system performance.
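For reference, the tree edit distance that most of the surveyed approaches build on can be written compactly as a memoized recursion over ordered forests with unit insert/delete/relabel costs. This is a textbook-style sketch, not the framework described in the paper, and is only practical for small trees without the usual dynamic-programming optimizations.

```python
# Compact memoized sketch of the edit distance between ordered labeled trees
# (unit insert/delete/relabel costs). Trees are (label, children_tuple).
from functools import lru_cache

def size(t):
    label, children = t
    return 1 + sum(size(c) for c in children)

@lru_cache(maxsize=None)
def forest_dist(f1, f2):
    if not f1 and not f2:
        return 0
    if not f1:
        return sum(size(t) for t in f2)          # insert everything in f2
    if not f2:
        return sum(size(t) for t in f1)          # delete everything in f1
    (l1, c1), rest1 = f1[-1], f1[:-1]            # rightmost tree of each forest
    (l2, c2), rest2 = f2[-1], f2[:-1]
    return min(
        forest_dist(rest1 + c1, f2) + 1,                  # delete root l1
        forest_dist(f1, rest2 + c2) + 1,                  # insert root l2
        forest_dist(c1, c2) + forest_dist(rest1, rest2)   # match the two roots
        + (0 if l1 == l2 else 1),                         # relabel if labels differ
    )

def tree_edit_distance(t1, t2):
    return forest_dist((t1,), (t2,))

# Toy usage: <a><b/><c/></a> vs. <a><b/><d/></a> -> distance 1 (relabel c -> d)
t1 = ("a", (("b", ()), ("c", ())))
t2 = ("a", (("b", ()), ("d", ())))
print(tree_edit_distance(t1, t2))
```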
Abstract:
The meccano method is a novel and promising mesh generation technique for simultaneously creating adaptive tetrahedral meshes and volume parameterizations of a complex solid. The method combines several earlier procedures: a mapping from the meccano boundary to the solid surface, a 3-D local refinement algorithm, and simultaneous mesh untangling and smoothing. In this paper we present the main advantages of our method over other standard mesh generation techniques. We show that our method constructs meshes that can be locally refined using the Kossaczky bisection rule while maintaining high mesh quality. Finally, we generate a volume T-mesh for isogeometric analysis, based on the volume parameterization obtained by the method…