80 results for "Efficacy of algorithms"
Abstract:
This work presents a new model for the Heterogeneous p-median Problem (HPM), proposed to recover the hidden category structures present in the data produced by a sorting task procedure, a popular approach to understanding how heterogeneous individuals perceive products and brands. The new model is named the Penalty-free Heterogeneous p-median Problem (PFHPM), a single-objective version of the original HPM. It also eliminates the main parameter of the HPM, the penalty factor, which weights the terms of the objective function: adjusting this parameter controls how the model recovers the hidden category structures in the data and requires broad knowledge of the problem. Additionally, two complementary formulations for the PFHPM are presented, both mixed-integer linear programs, from which lower bounds for the PFHPM were obtained. These bounds were used to validate a specialized Variable Neighborhood Search (VNS) algorithm proposed to solve the PFHPM. The algorithm provided good-quality solutions for the PFHPM, solving artificial instances generated by Monte Carlo simulation as well as real-data instances, even with limited computational resources. Statistical analyses presented in this work suggest that the new model and algorithm (PFHPM) recover the original category structures underlying heterogeneous individual perceptions more accurately than the original model and algorithm (HPM). Finally, an illustrative application of the PFHPM is presented, along with some insights into new possibilities, such as extending the model to fuzzy environments.
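For context, the classical p-median objective and a basic VNS loop (shaking followed by swap-based local search) can be sketched as below. This is a generic illustration, not the thesis's PFHPM-specific algorithm; all function names and parameters here are illustrative.

```python
import random

def pmedian_cost(medians, dist):
    # total distance from every point to its nearest median
    return sum(min(dist[i][m] for m in medians) for i in range(len(dist)))

def local_search(medians, dist):
    # first-improvement single swap, repeated until no improving swap exists
    best = set(medians)
    cost = pmedian_cost(best, dist)
    improved = True
    while improved:
        improved = False
        for out in list(best):
            for inn in set(range(len(dist))) - best:
                cand = (best - {out}) | {inn}
                c = pmedian_cost(cand, dist)
                if c < cost:
                    best, cost, improved = cand, c, True
                    break
            if improved:
                break
    return best

def vns_pmedian(dist, p, k_max=2, iters=30, seed=0):
    # basic VNS: shake k medians, local search, restart neighborhoods on improvement
    rng = random.Random(seed)
    n = len(dist)
    cur = local_search(rng.sample(range(n), p), dist)
    for _ in range(iters):
        k = 1
        while k <= min(k_max, p, n - p):
            # shaking: swap k random medians for k random non-medians
            outs = rng.sample(sorted(cur), k)
            ins = rng.sample(sorted(set(range(n)) - cur), k)
            cand = local_search((cur - set(outs)) | set(ins), dist)
            if pmedian_cost(cand, dist) < pmedian_cost(cur, dist):
                cur, k = cand, 1
            else:
                k += 1
    return cur, pmedian_cost(cur, dist)
```

On a toy instance with points at positions 0, 1, 10, and 11 and p = 2, the search settles on one median per cluster, with total cost 2.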
Abstract:
Home refrigerators are generally developed with simplified configurations for reasons of compactness and cost. The thermodynamic coefficient of performance (COP) is limited mainly by the condenser design, due to the size and layout constraints of the product and the climatic characteristics of the region where it will operate. This latter limitation is very significant in a country of continental dimensions like Brazil, with its diverse climatic conditions. The COP of the cycle depends crucially on the heat-rejection capacity of the condenser, so in hot regions such as the Northeast, North, and Center-West this capacity is strongly attenuated compared with the South and Southeast, which have subtropical climates. Heat rejection in compact condensers for domestic refrigeration has been the focus of several studies, owing to its impact on reducing cost and power consumption and on better use of the space occupied by the refrigeration system components. This space should be kept to a minimum to allow an increase in the useful storage volume of the refrigerator without changing the external dimensions of the product. Due to their low manufacturing cost, wire-on-tube condensers remain the most advantageous option for domestic refrigeration. Traditionally, these heat exchangers are designed to operate under natural convection; the benefits of the greater compactness allowed by forced convection do not always outweigh the cost of pumping air through the external heat exchanger. This work proposes an improvement to the convective condenser, changing it to a combined, in-series transfer mechanism: conduction from the tubes and wires to a moist porous medium, and convection from the porous medium to the environment.

The porous coating was composed of gypsum plaster impregnated with natural cellulosic fiber molded over a tubular wire mesh fitted to the original structure of the condenser, then dried and calcined for better adherence and increased porosity. The proposed configuration was installed in a domestic refrigeration system (trough type) and tested under the same conditions as the original configuration. It was also evaluated dry and humidified by dripping water, under both natural convection and forced convection with an electric fan. The tests were performed with the same R-134a refrigerant charge and under the same thermal cooling load. The performance was evaluated in the various configurations, showing an improvement of about 72% over the original configuration for the humidified, natural-convection arrangement.
Abstract:
Nanoemulsions are emulsified systems characterized by reduced droplet size (50-500 nm), whose main characteristics are kinetic stability and thermodynamic instability. They are promising systems in the cosmetic area because their droplet size provides several advantages over conventional systems, among them a larger surface area and better permeability. Opuntia ficus-indica (L.) Mill is a plant cultivated in the Brazilian Caatinga biome, of great socioeconomic importance to the region. Its chemical composition contains carbohydrates used by the cosmetic industry as a moisturizing active. The aim of this study was to develop and characterize cosmetic nanoemulsions containing Opuntia ficus-indica (L.) Mill extract and to evaluate their stability and moisturizing efficacy. The nanoemulsions were prepared by a low-energy method. Different nanoemulsions were formulated by varying the ratio of the oil, water, and surfactant phases, the xanthan gum content (0.5% and 1%), and the addition of Opuntia ficus-indica (L.) Mill hydroglycolic extract at 1% and 3%. The obtained nanoemulsions were submitted to preliminary and accelerated stability tests. The monitored parameters were macroscopic aspect, pH value, droplet size, zeta potential, and polydispersity index, over 60 days at different temperatures. Stable formulations were submitted to moisturizing-efficacy assessment by capacitance and transepidermal water loss measurements over 5 hours. Stable samples were white with a homogeneous, fluid aspect; the pH was within the ideal range for topical application (4.5-6.0), and droplet sizes below 200 nm characterized the systems as nanoemulsions. The developed nanoemulsions did not decrease transepidermal water loss but did increase the water content of the stratum corneum, most notably the nanoemulsion containing 0.5% xanthan gum and 1% hydroglycolic extract.

This work thus presents cosmetic moisturizing nanoemulsions based on a vegetal raw material from the Brazilian Caatinga, with potential for use in the cosmetic area.
Abstract:
We present indefinite integration algorithms for rational functions over subfields of the complex numbers, through an algebraic approach. We study the local algorithm of Bernoulli and rational algorithms for the class of functions in question, namely the algorithms of Hermite; Horowitz-Ostrogradsky; Rothstein-Trager; and Lazard-Rioboo-Trager. We also study Rioboo's algorithm for converting logarithms involving complex extensions into real arctangent functions, when these logarithms arise from the integration of rational functions with real coefficients. We conclude by presenting pseudocode and code for implementation in the software Maxima for the algorithms studied in this work, as well as for side algorithms such as polynomial gcd computation, partial fraction decomposition, squarefree factorization, and subresultant computation. We also present the Zeilberger-Almkvist algorithm for integration of hyperexponential functions, together with its pseudocode and Maxima code. As alternatives to the algorithms of Rothstein-Trager and Lazard-Rioboo-Trager, we also present code for Bernoulli's algorithm for squarefree denominators, and for Czichowski's algorithm, although the latter is not studied in detail in the present work, since the theoretical basis necessary to understand it is beyond this work's scope. Several examples are provided to illustrate the working of the integration algorithms in this text.
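As a flavor of the side algorithms mentioned, polynomial division and gcd by the Euclidean algorithm over the rationals can be sketched in a few lines. This is a generic illustration in Python rather than Maxima; coefficient lists are highest-degree first, and the function names are illustrative.

```python
from fractions import Fraction

def trim(p):
    # drop leading zeros (coefficients are highest-degree first)
    i = 0
    while i < len(p) - 1 and p[i] == 0:
        i += 1
    return p[i:]

def poly_divmod(a, b):
    # polynomial division with remainder over the rationals
    b = trim([Fraction(x) for x in b])
    r = [Fraction(x) for x in a]
    q = [Fraction(0)] * max(len(r) - len(b) + 1, 1)
    while len(trim(r)) >= len(b) and any(r):
        r = trim(r)
        d = len(r) - len(b)          # degree of the next quotient term
        c = r[0] / b[0]
        q[len(q) - 1 - d] = c
        for i in range(len(b)):
            r[i] -= c * b[i]
    return trim(q), trim(r)

def poly_gcd(a, b):
    # Euclidean algorithm; the result is normalized to be monic
    a = trim([Fraction(x) for x in a])
    b = trim([Fraction(x) for x in b])
    while any(b):
        _, rem = poly_divmod(a, b)
        a, b = b, rem
    return [c / a[0] for c in a]
```

For example, the gcd of x^2 - 1 and x^3 - x^2 - x + 1 = (x - 1)^2 (x + 1) is x^2 - 1.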
Abstract:
Inaccurate diagnosis of vulvovaginitis leads to inadequate treatments that damage women's health. Objective: to evaluate the effectiveness of methods for diagnosing vulvovaginitis. Method: a cross-sectional study was performed with 200 women complaining of vaginal discharge. Vaginal smears were collected for microbiological tests, with the Gram stain method as the gold standard. The efficacy of the available methods for diagnosing vaginal discharge was assessed through sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV). Data were entered into GraphPad Prism 6 for statistical analysis. Results: wet mount for vaginal candidiasis: sensitivity = 31%, specificity = 97%, PPV = 54%, NPV = 93%, accuracy = 91%. Wet mount for bacterial vaginosis: sensitivity = 80%, specificity = 95%, PPV = 80%, NPV = 95%, accuracy = 92%. Syndromic approach for bacterial vaginosis: sensitivity = 95%, specificity = 43%, PPV = 30%, NPV = 97%, accuracy = 54%. Syndromic approach for vaginal candidiasis: sensitivity = 75%, specificity = 91%, PPV = 26%, NPV = 98%, accuracy = 90%. Pap smear for vaginal candidiasis: sensitivity = 68%, specificity = 98%, PPV = 86%, NPV = 96%, accuracy = 96%. Pap smear for bacterial vaginosis: sensitivity = 75%, specificity = 100%, PPV = 100%, NPV = 94%, accuracy = 95%. There was only one reported case of vaginal trichomoniasis, diagnosed by oncological cytology and wet mount and confirmed by Gram stain; the syndromic approach diagnosed it as bacterial vaginosis.

From the data generated, and with support from the world literature, the Maternidade Escola Januário Cicco vulvovaginitis protocol was constructed. Conclusion: Pap smear and wet mount showed low and very low sensitivity, respectively, for vaginal candidiasis. The syndromic approach presented very low specificity and accuracy for bacterial vaginosis, which implies a large number of patients being diagnosed or treated incorrectly.
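All of the reported measures derive from a 2x2 confusion matrix against the gold standard. A small generic sketch follows, run on hypothetical counts (the study's raw counts are not given in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    # standard 2x2 confusion-matrix measures against a gold standard:
    # tp/fp/fn/tn = true positives, false positives, false negatives, true negatives
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# illustrative counts only, not the study's data
m = diagnostic_metrics(tp=8, fp=2, fn=2, tn=88)
```

With these hypothetical counts, sensitivity is 8/10 = 80% and accuracy is 96/100 = 96%.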
Abstract:
An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, and sending and receiving limits, among others, must be satisfied. Some researchers deal with this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, costs incurred due to losses at interfaces cannot be disregarded, and the cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. Hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA-II, and SPEA2. Three architectures named MOTA/D, NSTA, and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyse the results, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
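Comparing solutions under three simultaneous minimization objectives relies on Pareto dominance. A minimal, generic sketch of the dominance test and of non-dominated filtering (not tied to any of the specific algorithms above):

```python
def dominates(a, b):
    # a dominates b (minimization): a is no worse in every objective
    # and strictly better in at least one
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    # keep only the non-dominated objective vectors
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

For instance, among the vectors (1, 2, 3), (2, 2, 3), (3, 1, 1), and (2, 3, 4), only (1, 2, 3) and (3, 1, 1) are non-dominated.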
Abstract:
The performance of algorithms for fault location in transmission lines is directly related to the accuracy of their input data. Thus, factors such as errors in the line parameters, failures in the synchronization of oscillographic records, and errors in voltage and current measurements can significantly influence the accuracy of algorithms that use such bad data to indicate the fault location. This work presents a new methodology for fault location in transmission lines based on the theory of state estimation, designed to determine the location of faults more accurately by considering realistic systematic errors that may be present in voltage and current measurements. The methodology was implemented in two stages: pre-fault and post-fault. In the first stage, assuming non-synchronized data, the synchronization angle and the positive-sequence line parameters are estimated; in the second, the fault distance is estimated. Besides calculating the most likely fault distance given the measurement errors, the variance associated with that distance is also determined, using error theory. This is one of the main contributions of this work, since with the proposed algorithm it is possible to determine a most likely zone of fault incidence with approximately 95.45% confidence. Tests for evaluation and validation of the proposed algorithm were carried out using actual fault records and simulations of fictitious transmission systems in the ATP software. The results show that the proposed estimation approach works even when adopting realistic variances, compatible with the errors of real equipment.
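The core of a state-estimation stage like those described is weighted least squares, which also yields the variance of the estimate, and hence a roughly 95.45% confidence zone at plus or minus two standard deviations. A minimal single-parameter sketch, under a linear measurement model, and not the thesis's actual two-stage formulation:

```python
import math

def wls_estimate(H, z, var):
    # single-parameter weighted least squares for the model z_i = H_i * x + e_i,
    # where e_i is zero-mean noise with variance var_i
    w = [1.0 / v for v in var]
    den = sum(wi * hi * hi for wi, hi in zip(w, H))
    x = sum(wi * hi * zi for wi, hi, zi in zip(w, H, z)) / den
    return x, 1.0 / den  # most likely estimate and its variance

# illustrative measurements only
x, v = wls_estimate([1.0, 1.0], [2.0, 4.0], [1.0, 1.0])
# ~95.45% zone: the estimate plus/minus two standard deviations
zone = (x - 2 * math.sqrt(v), x + 2 * math.sqrt(v))
```

With two equally weighted direct measurements of the same quantity, the estimate is their mean and the variance is halved.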
Abstract:
The aim of this study was to evaluate the influence of remineralizing agents on the susceptibility of bleached enamel to coffee pigmentation during in-office bleaching. Fifty bovine incisors were selected and randomly assigned to 5 groups (n = 10) on the basis of the remineralizing agent: G1, 35% hydrogen peroxide gel (control group); G2, 35% hydrogen peroxide gel plus 2% neutral fluoride gel; G3, 35% hydrogen peroxide gel plus nanostructured calcium phosphate gel; G4, 35% hydrogen peroxide gel plus casein phosphopeptide-amorphous calcium phosphate paste; G5, 35% hydrogen peroxide gel without remineralizing agent. All groups except G1 (control) were subjected to pigmentation with soluble coffee prepared according to the manufacturer's instructions. The samples were immersed in coffee at a temperature of 55 °C, once a day for 4 minutes. Color measurements were performed with an Easyshade spectrophotometer using the CIELab method before and after 3 whitening sessions. Data were analyzed by ANOVA. The results showed statistically significant differences among the remineralizing substances for the parameters L*, a*, b*, and ΔE (p < 0.0001). The L* values for group G5, and the b* values for groups G2 and G5, differed from the control group. After the 3rd whitening session, the fluoride group (G2) and the group without remineralizing agent (G5) showed ΔE values lower than the control group, which did not undergo pigmentation. It was concluded that only the remineralizing agents casein phosphopeptide-amorphous calcium phosphate and nanostructured amorphous calcium phosphate were able to reduce the interference of coffee with the whitening efficacy of hydrogen peroxide.
Abstract:
The great amount of data generated as a result of automation and process supervision in industry implies two problems: a large demand for disk storage and the difficulty of streaming these data through a telecommunications link. Lossy data compression algorithms emerged in the 1990s to solve these problems and, as a consequence, industries started to use them in industrial supervision systems to compress data in real time. These algorithms were designed to eliminate redundant and undesired information in an efficient and simple way. However, their parameters must be set for each process variable, which becomes impracticable for systems that monitor thousands of variables. In this context, this work proposes the Adaptive Swinging Door Trending algorithm, an adaptation of the Swinging Door Trending in which the main parameters are adjusted dynamically by analyzing the signal trends in real time. A comparative performance analysis of lossy data compression algorithms applied to process-variable time series and dynamometer cards is also presented; the algorithms used for comparison were piecewise linear methods and transform-based methods.
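For reference, the classical (non-adaptive) Swinging Door Trending archives a point only when a new sample no longer fits inside a corridor of half-width `dev` anchored at the last archived point. A minimal sketch of this classical version with a fixed deviation parameter (the adaptive variant proposed here would tune such parameters online); the function name is illustrative:

```python
def sdt_compress(points, dev):
    # points: list of (t, v) pairs with strictly increasing t
    # dev: maximum allowed deviation (the "door" half-width)
    out = [points[0]]
    t0, v0 = points[0]                    # current anchor (last archived point)
    hi, lo = float("inf"), float("-inf")  # upper/lower door slopes
    prev = points[0]
    for t, v in points[1:]:
        # narrow the corridor of admissible slopes through the anchor
        hi = min(hi, (v + dev - v0) / (t - t0))
        lo = max(lo, (v - dev - v0) / (t - t0))
        if lo > hi:
            # the doors have closed: archive the previous point, restart there
            out.append(prev)
            t0, v0 = prev
            hi = (v + dev - v0) / (t - t0)
            lo = (v - dev - v0) / (t - t0)
        prev = (t, v)
    out.append(prev)                      # always keep the last sample
    return out
```

A straight-line signal compresses to its two endpoints, while a sharp step forces an intermediate point to be archived.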
Abstract:
The Quadratic Minimum Spanning Tree (QMST) problem is a generalization of the Minimum Spanning Tree problem in which, beyond the linear costs associated with each edge, quadratic costs associated with each pair of edges must be considered. The quadratic costs are due to interaction costs between the edges. When interactions occur only between adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree (AQMST). Both QMST and AQMST are NP-hard and model a number of real-world applications involving infrastructure network design. Linear and quadratic costs are summed in the mono-objective versions of the problems. However, real-world applications often deal with conflicting objectives. In those cases, considering linear and quadratic costs separately is more appropriate, and multi-objective optimization provides a more realistic modelling. Exact and heuristic algorithms are investigated in this work for the Bi-objective Adjacent Only Quadratic Spanning Tree Problem. The following techniques are proposed: backtracking, branch-and-bound, Pareto Local Search, Greedy Randomized Adaptive Search Procedure, Simulated Annealing, NSGA-II, Transgenetic Algorithm, Particle Swarm Optimization, and a hybridization of the Transgenetic Algorithm with the MOEA/D technique. Pareto-compliant quality indicators are used to compare the algorithms on a set of benchmark instances proposed in the literature.
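In the bi-objective setting, a candidate tree is evaluated by its two cost terms separately. A small sketch of such an evaluator for the adjacent-only case, on illustrative edge and interaction costs (not the thesis's benchmark instances); the function name and data layout are illustrative:

```python
def tree_costs(tree_edges, lin, quad):
    # tree_edges: list of edges (u, v); lin[e]: linear cost of edge e;
    # quad[(e, f)]: interaction cost for an adjacent edge pair (e, f)
    linear = sum(lin[e] for e in tree_edges)
    quadratic = 0
    for i, e in enumerate(tree_edges):
        for f in tree_edges[i + 1:]:
            if set(e) & set(f):  # adjacent only: the edges share an endpoint
                quadratic += quad.get((e, f), 0) + quad.get((f, e), 0)
    return linear, quadratic
```

On a path 0-1-2-3 with linear costs 1, 2, 3 and interaction costs 5 and 7 for the two adjacent pairs, the objective vector is (6, 12).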
Abstract:
Cryptography is the main way to obtain security in any network, and this is no different even in networks with severe restrictions on energy consumption, processing, and memory, such as Wireless Sensor Networks (WSNs). Aiming to improve cryptographic performance, security, and the lifetime of these networks, we propose a new cryptographic algorithm developed through Genetic Programming (GP) techniques. To establish the fitness criteria for the GP, nine cryptographic algorithms were tested: AES, Blowfish, DES, RC6, Skipjack, Twofish, T-DES, XTEA, and XXTEA. From these tests, fitness functions were built taking into account execution time, occupied memory space, maximum deviation, irregular deviation, and correlation coefficient. The resulting GP produced CRYSEED and CRYSEED2, algorithms for 8-bit devices optimized for WSNs, i.e., with low complexity, low memory consumption, and good security for sensing and instrumentation applications.
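One of the listed fitness ingredients, the correlation coefficient between plaintext and ciphertext bytes (values closer to zero suggest better diffusion), amounts to a plain Pearson correlation. A generic sketch, not the CRYSEED fitness function itself:

```python
import math

def correlation(xs, ys):
    # Pearson correlation coefficient between two equal-length sequences
    # (e.g. plaintext bytes vs. ciphertext bytes)
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0
```

Perfectly linearly related sequences give +1 or -1; a strong cipher should drive the coefficient toward 0.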
Abstract:
The Traveling Salesman Problem with Multiple Ridesharing (TSP-MR) is a variant of the Capacitated Traveling Salesman Problem in which the salesman may share seats with passengers along the paths he travels in his cycle, splitting the cost of each path with the boarded passengers. This model portrays a real situation in which, for example, drivers are willing to share parts of a trip with tourists who wish to move between two locations on the driver's route, and the tourists accept sharing the vehicle with other individuals visiting other locations within the cycle. This work proposes a mathematical formulation for the problem, as well as exact and metaheuristic algorithms for its solution, and compares them.
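The cost-sharing idea can be illustrated by a toy evaluator in which each leg's cost is split equally among the driver and the passengers aboard. This is one illustrative reading of the model, not the thesis's formulation; names and data layout are hypothetical:

```python
def driver_cost(leg_costs, rides):
    # leg_costs[i]: cost of leg i of the salesman's cycle
    # rides: list of (board_leg, alight_leg) half-open intervals, one per passenger
    total = 0.0
    for i, c in enumerate(leg_costs):
        occupants = 1 + sum(1 for b, a in rides if b <= i < a)
        total += c / occupants  # each leg's cost split equally among occupants
    return total
```

With three legs costing 10 each and one passenger riding the first two legs, the driver pays 5 + 5 + 10 = 20 instead of 30.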