814 results for Benchmark


Relevance: 10.00%

Abstract:

Recent research has shown that the performance of metaheuristics can be affected by population initialization. Opposition-based Differential Evolution (ODE), Quasi-Oppositional Differential Evolution (QODE), and Uniform-Quasi-Opposition Differential Evolution (UQODE) are three state-of-the-art methods that improve the performance of the Differential Evolution algorithm through population initialization and different search strategies. Taking a different approach to achieve similar results, this paper presents a technique to discover promising regions in the continuous search space of an optimization problem. Using machine-learning techniques, the algorithm, named Smart Sampling (SS), finds regions with a high probability of containing a global optimum. A metaheuristic can then be initialized inside each region to find that optimum. SS and DE were combined (giving rise to the SSDE algorithm) to evaluate our approach, and experiments were conducted on the same set of benchmark functions used by the ODE, QODE and UQODE authors. Results show that the total number of function evaluations required by DE to reach the global optimum can be significantly reduced and that the success rate improves when SS is employed first. These results are also consistent with the literature, which stresses the importance of an adequate starting population. Moreover, SS is more effective at finding high-quality initial populations than the other three algorithms, which employ oppositional learning. Finally, and most importantly, the performance of SS in finding promising regions is independent of the metaheuristic with which it is combined, making SS suitable for improving the performance of a large variety of optimization techniques. (C) 2012 Elsevier Inc. All rights reserved.
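
The overall flow described above (sample the search space, identify promising regions, then initialize DE inside them) can be illustrated with a minimal sketch. The selection step below is a plain best-of-sample filter standing in for the machine-learning step of SS, and every function and parameter name is hypothetical; only the use of DE with a user-supplied initial population follows SciPy's documented interface.

```python
import numpy as np
from scipy.optimize import differential_evolution

def sphere(x):
    """Benchmark objective (sum of squares); global optimum at the origin."""
    return float(np.sum(x ** 2))

def promising_initial_population(func, bounds, n_samples=500, pop_size=20, seed=None):
    """Crude stand-in for Smart Sampling: uniform sampling followed by selection of
    the best-evaluated points as a DE population concentrated in promising regions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    samples = rng.uniform(lo, hi, size=(n_samples, len(bounds)))
    scores = np.apply_along_axis(func, 1, samples)
    return samples[np.argsort(scores)[:pop_size]]

bounds = [(-5.0, 5.0)] * 10
init_pop = promising_initial_population(sphere, bounds, seed=0)
result = differential_evolution(sphere, bounds, init=init_pop, tol=1e-8)
print(result.x, result.fun)
```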

Relevance: 10.00%

Abstract:

This article is the product of research that analyzed the work of bus drivers at a public transportation company considered a benchmark in its field, one in which it strives to achieve operating excellence. Within this context, the authors sought to understand how such a company has managed to maintain a policy capable of reconciling quality public transport with working conditions compatible with the professional development, comfort and health of its workers. Ergonomic work analysis and activity analysis were the guiding elements of this study. Initial analyses indicate that the drivers' activity consists of serving a population and providing mobility for it, which depends both on driving the vehicle itself and on relationships with colleagues, users, pedestrians, other drivers, and so on.

Relevance: 10.00%

Abstract:

The solution of structural reliability problems by the First Order method requires optimization algorithms to find the smallest distance between a limit state function and the origin of standard Gaussian space. The Hasofer-Lind-Rackwitz-Fiessler (HLRF) algorithm, developed specifically for this purpose, has been shown to be efficient but not robust, as it fails to converge for a significant number of problems. On the other hand, recent developments in general (augmented Lagrangian) optimization techniques have not been tested in application to structural reliability problems. In the present article, three new optimization algorithms for structural reliability analysis are presented. One algorithm is based on the HLRF, but uses a new differentiable merit function with Wolfe conditions to select the step length in line search. It is shown in the article that, under certain assumptions, the proposed algorithm generates a sequence that converges to the local minimizer of the problem. Two new augmented Lagrangian methods are also presented, which use quadratic penalties to solve nonlinear problems with equality constraints. The performance and robustness of the new algorithms are compared to the classical augmented Lagrangian method, to HLRF and to the improved HLRF (iHLRF) algorithms in the solution of 25 benchmark problems from the literature. The new HLRF-based algorithm is shown to be more robust than HLRF or iHLRF, and as efficient as the iHLRF algorithm. The two augmented Lagrangian methods proposed herein are shown to be more robust and more efficient than the classical augmented Lagrangian method.
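
As background for the search the abstract refers to, the classical HLRF recursion can be sketched as follows (an assumed textbook form, not the improved algorithm proposed in the article); the limit state function and its gradient are illustrative.

```python
import numpy as np

def hlrf(g, grad_g, n_dim, tol=1e-8, max_iter=100):
    """Classical HLRF iteration: search for the design point u* minimizing ||u||
    subject to g(u) = 0 in standard Gaussian space; beta = ||u*||."""
    u = np.zeros(n_dim)  # start at the origin of standard Gaussian space
    for _ in range(max_iter):
        grad = grad_g(u)
        u_new = (grad @ u - g(u)) / (grad @ grad) * grad  # HLRF recursion
        converged = np.linalg.norm(u_new - u) < tol
        u = u_new
        if converged:
            break
    return u, np.linalg.norm(u)  # design point and reliability index beta

# Example limit state g(u) = 3 - u1 - u2 (linear, so beta = 3 / sqrt(2) ~= 2.121)
g = lambda u: 3.0 - u[0] - u[1]
grad_g = lambda u: np.array([-1.0, -1.0])
u_star, beta = hlrf(g, grad_g, n_dim=2)
print(u_star, beta)
```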

Relevance: 10.00%

Abstract:

Transportation planning is currently being confronted with a broader planning view, given by the concept of mobility. The Index of Sustainable Urban Mobility (I_SUM) is among the tools developed to support the implementation of this new concept. It is a tool to assess the current mobility conditions of any city, and it can also be applied to policy formulation. This study focuses on the application of I_SUM to the city of Curitiba, Brazil. Since the city is known worldwide as a reference for successful urban and transportation planning, the index application was expected to confirm this reputation. An additional objective of the study was to evaluate the index itself, that is, the underlying assessment method and reference values. A global I_SUM value of 0.747 confirmed that the city indeed has very positive characteristics regarding sustainable mobility policies. However, some deficiencies were also detected, particularly with respect to non-motorized transport modes. The application also showed that a few I_SUM indicators were not able to capture some of the positive aspects of the city, which may suggest the need for changes in their formulation. Finally, applying the index to parts of the city suggests that it provides fair and equitable mobility conditions to all citizens throughout its territory. This is certainly a good attribute for becoming a benchmark of sustainable mobility, even if the city is not yet the ideal model. (C) 2012 Elsevier Ltd. All rights reserved.

Relevance: 10.00%

Abstract:

This work studies the optimization and control of a styrene polymerization reactor. The proposed strategy addresses the case where, because of market conditions and equipment deterioration, the optimal operating point of the continuous reactor changes significantly over the operation time, so the control system has to search for this optimum point while keeping the reactor stable at any possible operating point. The approach considered here consists of three layers: Real-Time Optimization (RTO), Model Predictive Control (MPC) and a Target Calculation (TC) layer that coordinates the communication between the other two layers and guarantees the stability of the whole structure. The proposed algorithm is simulated with the phenomenological model of a styrene polymerization reactor, which has been widely used as a benchmark for process control. The complete optimization structure for the styrene process, including disturbance rejection, is developed. The simulation results show the robustness of the proposed strategy and its capability to handle disturbances while the economic objective is optimized.
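
The three-layer hierarchy described above can be illustrated with a schematic sketch (an assumed toy structure, not the paper's controller): a slow RTO layer proposes an economically optimal setpoint, a target-calculation layer maps it to a reachable steady-state target, and a fast MPC-like layer tracks that target at every sampling instant.

```python
import numpy as np

def rto_layer(economic_gradient, setpoint, step=0.1):
    """Slow layer: nudge the operating setpoint along the economic objective."""
    return setpoint - step * economic_gradient(setpoint)

def target_calculation(setpoint, y_min, y_max):
    """Intermediate layer: clip the RTO proposal to a reachable steady-state target."""
    return np.clip(setpoint, y_min, y_max)

def mpc_layer(y, target, gain=0.5):
    """Fast layer stand-in: a one-step proportional move toward the target
    (a real MPC would solve a constrained finite-horizon problem here)."""
    return gain * (target - y)

# Toy closed loop: scalar output y, economic optimum at y = 2.0, actuator range [0, 1.5]
economic_gradient = lambda y: 2.0 * (y - 2.0)
y, setpoint = 0.0, 0.5
for k in range(50):
    if k % 10 == 0:                      # RTO runs at a slower rate than the MPC
        setpoint = rto_layer(economic_gradient, setpoint)
    target = target_calculation(setpoint, 0.0, 1.5)
    y += mpc_layer(y, target)            # plant modeled as a pure integrator for brevity
print(round(y, 3), round(target, 3))
```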

Relevance: 10.00%

Abstract:

Background: With the development of DNA hybridization microarray technologies, it is nowadays possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses. Therefore, choosing a suitable approach for normalization can be critical and deserves judicious consideration. Results: Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that have yet to be used for normalization, namely kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion: In light of our results, Support Vector Regression is favored for microarray normalization because of its robustness in estimating the normalization curve compared with the other methods.
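
The idea of SVR-based normalization can be sketched as an intensity-dependent bias fit on an MA-plot (a generic illustration with simulated data, not the authors' pipeline; the scikit-learn SVR parameters are arbitrary).

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
A = rng.uniform(6, 14, 1000)                      # average log-intensity per spot
M = 0.3 * np.sin(A) + rng.normal(0, 0.2, 1000)    # simulated intensity-dependent bias

model = SVR(kernel="rbf", C=1.0, epsilon=0.05)
model.fit(A.reshape(-1, 1), M)                    # estimate the normalization curve
M_normalized = M - model.predict(A.reshape(-1, 1))
print(np.round(M_normalized.mean(), 3))           # bias removed on average
```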

Relevance: 10.00%

Abstract:

Background: A large number of probabilistic models used in sequence analysis assign non-zero probability values to most input sequences. To decide whether a given probability is sufficient, the most common approach is Bayesian binary classification, in which the probability of the model characterizing the sequence family of interest is compared to that of an alternative probability model. A null model can be used as the alternative model. This is the scoring technique used by sequence analysis tools such as HMMER, SAM and INFERNAL. The most prevalent null models are position-independent residue distributions, including the uniform distribution, the genomic distribution, the family-specific distribution and the target sequence distribution. This paper presents a study evaluating the impact of the choice of null model on the final result of classifications. In particular, we are interested in minimizing the number of false predictions in a classification, a crucial issue for reducing the costs of biological validation. Results: In all tests with random sequences, the target null model produced the lowest number of false positives. The study was performed on DNA sequences using GC content as the measure of compositional bias, but the results should also be valid for protein sequences. To broaden the applicability of the results, the study was performed using randomly generated sequences. Previous studies were performed on amino acid sequences, used only one probabilistic model (HMM) and a specific benchmark, and lacked more general conclusions about the performance of null models. Finally, a benchmark test with P. falciparum confirmed these results. Conclusions: Of the evaluated models, the best suited for classification are the uniform model and the target model. However, the uniform model presents a GC bias that can cause more false positives for candidate sequences with extreme compositional bias, a characteristic not described in previous studies. In these cases the target model is more dependable for biological validation due to its higher specificity.
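
The log-odds scoring against a null model that the abstract refers to can be sketched as follows (an illustrative formulation with made-up compositions; the "family model" is reduced to a position-independent distribution for brevity).

```python
import math

def log_prob(seq, composition):
    """Log-probability of a DNA sequence under a position-independent residue distribution."""
    return sum(math.log(composition[base]) for base in seq)

family = {"A": 0.15, "C": 0.35, "G": 0.35, "T": 0.15}     # GC-rich family composition
uniform_null = {b: 0.25 for b in "ACGT"}                  # uniform null model
target_null = lambda seq: {b: max(seq.count(b) / len(seq), 1e-6) for b in "ACGT"}

seq = "GCGCGGCCGCATGCGG"
# A sequence is accepted when the log-odds score exceeds a chosen threshold.
score_uniform = log_prob(seq, family) - log_prob(seq, uniform_null)
score_target = log_prob(seq, family) - log_prob(seq, target_null(seq))
print(round(score_uniform, 2), round(score_target, 2))
```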

Relevance: 10.00%

Abstract:

Background: In epidemiological surveys, good reliability among examiners regarding the caries detection method is essential. However, training and calibrating those examiners is an arduous task, because it involves several patients who are examined many times. To facilitate this step, we aimed to propose a laboratory methodology to simulate the examinations performed to detect caries lesions using the International Caries Detection and Assessment System (ICDAS) in epidemiological surveys. Methods: A benchmark examiner conducted all training sessions. A total of 67 exfoliated primary teeth, ranging from sound to extensively cavitated, were set in seven arch models to simulate complete mouths in the primary dentition. Sixteen examiners (graduate students) evaluated all surfaces of the teeth under illumination, using buccal mirrors and a ball-ended probe, on two occasions, applying only the coronal primary caries scores of the ICDAS. As the reference standard, two different examiners assessed the proximal surfaces by direct visual inspection, classifying them as sound, with non-cavitated lesions or with cavitated lesions. Afterwards, the teeth were sectioned in the bucco-lingual direction and the examiners assessed the sections under a stereomicroscope, classifying the occlusal and smooth surfaces according to lesion depth. Inter-examiner reproducibility was evaluated using weighted kappa. Sensitivities and specificities were calculated at two thresholds: all lesions, and advanced lesions (cavitated lesions on proximal surfaces and lesions reaching the dentine on occlusal and smooth surfaces). Conclusion: The methodology proposed for training and calibration of several examiners designated for epidemiological surveys of dental caries in preschool children using the ICDAS is feasible, permitting the assessment of the examiners' reliability and accuracy prior to the survey's development.
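
The reliability and validity statistics named above (weighted kappa, and sensitivity and specificity at a lesion threshold) can be computed as in the following sketch, which uses made-up scores rather than the study's data.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score, confusion_matrix

examiner_a = np.array([0, 1, 2, 2, 0, 3, 1, 0, 2, 3])    # ICDAS-like ordinal scores
examiner_b = np.array([0, 1, 2, 1, 0, 3, 1, 0, 3, 3])
kappa_w = cohen_kappa_score(examiner_a, examiner_b, weights="quadratic")  # inter-examiner agreement

reference = np.array([0, 0, 1, 1, 0, 1, 0, 0, 1, 1])     # 1 = lesion present (reference standard)
detected = (examiner_a >= 1).astype(int)                  # "all lesions" threshold
tn, fp, fn, tp = confusion_matrix(reference, detected).ravel()
sensitivity, specificity = tp / (tp + fn), tn / (tn + fp)
print(round(kappa_w, 2), round(sensitivity, 2), round(specificity, 2))
```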

Relevance: 10.00%

Abstract:

The present study evaluated the interchangeability of prosthetic components for external hexagon implants by measuring the precision of the implant/abutment (I/A) interface with scanning electron microscopy. Ten implants of each of three brands (SIN, Conexão, Neodent) were tested with their respective abutments (milled CoCr collar, rotational and non-rotational) and with an abutment from an alternative manufacturer (Microplant), in randomly arranged I/A combinations. The degree of interchangeability between the various brands of components was defined using the interface gap of the original abutment on its respective implant as the benchmark dimension. Accordingly, when the gap for a given component placed on an implant was equal to or smaller than that measured with the original component of the same brand as the implant, interchangeability was considered valid. Data were compared with the Kruskal-Wallis test at a 5% significance level. Some degree of misfit was observed in all specimens. In general, the non-rotational component was more accurate than its rotational counterpart: the rotational samples ranged from 0.6 to 16.9 µm, with a 4.6 µm median, and the non-rotational ones from 0.3 to 12.9 µm, with a 3.4 µm median. Specimens with both abutment and fixture from Conexão had a larger microgap than the original sets from SIN and Neodent (p<0.05). Even though the latter systems had similar results with their respective components, their interchanged abutments did not reproduce the original accuracy. The results suggest that the alternative-brand abutment would be compatible with all systems, while the other brands were not completely interchangeable.
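
The group comparison mentioned in the abstract can be reproduced in outline with a Kruskal-Wallis test (illustrative microgap values in micrometres, not the study's measurements).

```python
from scipy.stats import kruskal

# Hypothetical microgap readings (µm) for three I/A groups
sin_gaps     = [3.1, 2.8, 4.0, 3.5, 2.6]
conexao_gaps = [6.2, 7.8, 5.9, 8.4, 7.1]
neodent_gaps = [3.4, 2.9, 4.2, 3.8, 3.0]

h_statistic, p_value = kruskal(sin_gaps, conexao_gaps, neodent_gaps)
print(f"H = {h_statistic:.2f}, p = {p_value:.4f}")  # p < 0.05 -> at least one group differs
```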

Relevance: 10.00%

Abstract:

In deterministic optimization, the uncertainties of the structural system (e.g. dimensions, model, material, loads) are not explicitly taken into account. Hence, the resulting optimal solutions may lead to reduced reliability levels. The objective of reliability-based design optimization (RBDO) is to optimize structures while guaranteeing that a minimum level of reliability, chosen a priori by the designer, is maintained. Since reliability analysis using the First Order Reliability Method (FORM) is itself an optimization procedure, RBDO in its classical version is a double-loop strategy: the reliability analysis (inner loop) and the structural optimization (outer loop). The coupling of these two loops leads to very high computational costs. To reduce the computational burden of FORM-based RBDO, several authors propose decoupling the structural optimization and the reliability analysis. These procedures may be divided into two groups: (i) serial single-loop methods and (ii) unilevel methods. The basic idea of serial single-loop methods is to decouple the two loops and solve them sequentially until some convergence criterion is achieved. Unilevel methods, on the other hand, employ different strategies to obtain a single optimization loop that solves the RBDO problem. This paper presents a review of such RBDO strategies. A comparison of the performance (computational cost) of the main strategies is presented for several variants of two benchmark problems from the literature and for a structure modeled using the finite element method.
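
The classical double-loop structure can be sketched with a deliberately tiny toy problem (assumed for illustration, not taken from the paper): an outer design optimization subject to beta(d) >= beta_target, where the inner reliability analysis would normally be an iterative FORM search; here the limit state is linear, so the inner loop collapses to a closed-form reliability index.

```python
import numpy as np
from scipy.optimize import minimize

BETA_TARGET = 3.0

def inner_form_beta(d):
    """Inner loop stand-in: reliability index of g(u; d) = d - (u1 + u2) with u standard
    normal. For this linear limit state the design point is analytic: beta = d / sqrt(2);
    a general case would run an iterative FORM search here."""
    return float(d[0]) / np.sqrt(2.0)

def outer_cost(d):
    """Outer loop objective: minimize the design variable (e.g. a member size)."""
    return float(d[0])

result = minimize(
    outer_cost,
    x0=[10.0],
    constraints=[{"type": "ineq", "fun": lambda d: inner_form_beta(d) - BETA_TARGET}],
    bounds=[(0.0, None)],
    method="SLSQP",
)
print(result.x, inner_form_beta(result.x))  # expect d close to 3*sqrt(2) ~= 4.243
```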

Relevance: 10.00%

Abstract:

Semi-supervised learning is a classification paradigm in which only a few labeled instances are available for the training process. To compensate for this small amount of initial label information, the information provided by the unlabeled instances is also considered. In this paper, we propose a nature-inspired semi-supervised learning technique based on attraction forces. Instances are represented as points in a k-dimensional space, and the movement of data points is modeled as a dynamical system. As the system runs, data items with the same label cooperate with each other, while data items with different labels compete with each other to attract unlabeled points by applying a specific force function. In this way, all unlabeled data items can be classified once the system reaches its stable state. A stability analysis of the proposed dynamical system is performed and some heuristics are proposed for parameter setting. Simulation results show that the proposed technique achieves good classification results on artificial data sets and is comparable to well-known semi-supervised techniques on benchmark data sets.
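
A loose illustration of the attraction idea (a simplified reading of the description above, not the paper's force function or dynamics) is sketched below: labeled points pull unlabeled points through a bounded kernel force, and each unlabeled point inherits the label of the nearest labeled instance once the movement settles.

```python
import numpy as np

def attraction_classify(X_labeled, y_labeled, X_unlabeled, steps=200, lr=0.05):
    """Move unlabeled points under attraction from labeled points, then label them
    by the nearest labeled instance after the dynamics have (approximately) settled."""
    points = X_unlabeled.astype(float).copy()
    for _ in range(steps):
        diff = X_labeled[None, :, :] - points[:, None, :]       # (n_unlabeled, n_labeled, dim)
        dist2 = np.sum(diff ** 2, axis=2, keepdims=True)
        points += lr * np.sum(diff * np.exp(-dist2), axis=1)    # bounded kernel attraction
    d = np.linalg.norm(points[:, None, :] - X_labeled[None, :, :], axis=2)
    return y_labeled[np.argmin(d, axis=1)]

X_l = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.2, 2.9]])
y_l = np.array([0, 0, 1, 1])
X_u = np.array([[0.5, 0.4], [2.6, 2.7], [1.4, 1.6]])
print(attraction_classify(X_l, y_l, X_u))   # expected labels roughly [0, 1, 0]
```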

Relevance: 10.00%

Abstract:

The accuracy and performance of current variational optical flow methods have increased considerably in recent years. These techniques are complex and their implementation requires care. The aim of this work is to present a comprehensible implementation of recent variational optical flow methods. We start with an energy model that relies on brightness and gradient constancy terms and a flow-based smoothness term. We minimize this energy model and derive an efficient implicit numerical scheme. In the experimental results, we evaluate the accuracy and performance of this implementation on the Middlebury benchmark database. We show that it is a competitive solution with respect to current methods in the literature. In order to increase performance, we use a simple strategy to parallelize the execution on multi-core processors.
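
An energy of the kind described (brightness and gradient constancy data terms plus a flow-driven smoothness term) typically takes the following standard form, reproduced here as background rather than quoted from the article, with w = (u, v) the flow field and Psi a robust penalizer:

```latex
E(u,v) = \int_{\Omega} \Psi\big( |I_1(\mathbf{x}+\mathbf{w}) - I_0(\mathbf{x})|^2
         + \gamma\, |\nabla I_1(\mathbf{x}+\mathbf{w}) - \nabla I_0(\mathbf{x})|^2 \big)\, d\mathbf{x}
       + \alpha \int_{\Omega} \Psi\big( |\nabla u|^2 + |\nabla v|^2 \big)\, d\mathbf{x},
\qquad \Psi(s^2) = \sqrt{s^2 + \epsilon^2}.
```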

Relevance: 10.00%

Abstract:

The aim of my dissertation is to study the gender wage gap, with a specific focus on developing and transition countries. In the first chapter I present the main theories proposed to analyse the gender wage gap and review the empirical literature on the gender wage gap in developing and transition countries and its main findings. I then discuss the general empirical issues related to the estimation of the gender wage gap and the issues specific to developing and transition countries. The second chapter is an empirical analysis of the gender wage gap in a developing country, the Union of Comoros, using data from the multidimensional household budget survey “Enquete integrale auprès des ménages” (EIM) run in 2004. The interest of my work is to provide a benchmark analysis for further studies on the situation of women in the Comorian labour market and to contribute to the literature on the gender wage gap in Africa by making available more information on the dynamics and mechanisms of the gender wage gap, given the limited attention the topic has received in this area of the world. The third chapter is an applied analysis of the gender wage gap in a transition country, Poland, using data from the Labour Force Survey (LFS) collected for the years 1994 and 2004. I provide a detailed examination of how gender earnings differentials changed over the period from 1994 to a more advanced transition phase in 2004, when market elements had become much more important in the functioning of the Polish economy than in the earlier phase. The main contribution of my dissertation is the application of the econometric methodology that I describe at the beginning of the second chapter. First, I run a preliminary OLS and quantile regression analysis to estimate and describe the raw and conditional wage gaps along the distribution. Second, I estimate quantile regressions separately for males and females, in order to allow for different rewards to characteristics. Third, I decompose the raw wage gap estimated at the mean through the Oaxaca-Blinder (1973) procedure; in the second chapter I also run a two-step Heckman procedure, estimating a model of labour market participation that shows a significant selection bias for females. Fourth, I apply the Machado-Mata (2005) technique to extend the decomposition analysis to all points of the distribution. For Poland I can also implement the Juhn, Murphy and Pierce (1991) decomposition over the period 1994-2004, to account for effects on the pay gap due to changes in overall wage dispersion, beyond Oaxaca's standard decomposition.
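
The mean-level Oaxaca-Blinder step can be sketched as follows (a textbook two-fold decomposition on simulated data, not the dissertation's estimates or variable set): the raw log-wage gap is split into an explained part, due to differences in characteristics, and an unexplained part, due to differences in returns, taking the male coefficients as the reference wage structure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 2000
educ_m, educ_f = rng.normal(12, 3, n), rng.normal(11.5, 3, n)     # years of schooling
logw_m = 0.5 + 0.08 * educ_m + rng.normal(0, 0.4, n)              # simulated male wage equation
logw_f = 0.4 + 0.07 * educ_f + rng.normal(0, 0.4, n)              # simulated female wage equation

Xm, Xf = sm.add_constant(educ_m), sm.add_constant(educ_f)
beta_m = sm.OLS(logw_m, Xm).fit().params
beta_f = sm.OLS(logw_f, Xf).fit().params

gap = logw_m.mean() - logw_f.mean()
explained = (Xm.mean(axis=0) - Xf.mean(axis=0)) @ beta_m          # characteristics effect
unexplained = Xf.mean(axis=0) @ (beta_m - beta_f)                 # coefficients effect
print(round(gap, 3), round(explained, 3), round(unexplained, 3))  # gap = explained + unexplained
```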

Relevance: 10.00%

Abstract:

In this paper, we present the Kinship Verification in the Wild Competition, the first kinship verification competition, held in conjunction with the International Joint Conference on Biometrics 2014 in Clearwater, Florida, USA. The key goal of this competition is to compare the performance of different methods on a newly collected dataset under the same evaluation protocol and to develop the first standardized benchmark for kinship verification in the wild.