1000 results for: Problem of induction. Inductive logic. Probability
Abstract:
This work presents a hybrid approach to the supplier selection problem in Supply Chain Management. We combined the decision-making philosophies of researchers from business schools and from engineering in order to address the problem more comprehensively. We used traditional multicriteria decision-making methods, namely AHP and TOPSIS, to evaluate alternatives according to the decision maker's preferences. Both techniques were modeled using definitions from Fuzzy Sets Theory in order to handle imprecise data. Additionally, we proposed a multiobjective GRASP algorithm to perform an order allocation procedure among all pre-selected alternatives. These alternatives must be pre-qualified on the basis of the AHP and TOPSIS methods before entering the LCR. Our allocation procedure presented low CPU times for five pseudorandom instances containing up to 1000 alternatives, as well as good values for all considered objectives. We therefore consider the proposed model appropriate for solving the supplier selection problem in the SCM context. It can help decision makers reduce lead times, costs and risks in their supply chains. According to decision makers, the proposed model can also improve a firm's efficiency with respect to business strategies, even when a large number of alternatives must be considered, differently from classical models in the purchasing literature.
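For illustration only, the sketch below shows the ranking step of a classic (crisp, not fuzzy) TOPSIS, which is the core of the evaluation phase described above; the decision matrix, weights and criteria are invented, not the thesis data.

```python
import numpy as np

def topsis(decision_matrix, weights, benefit_criteria):
    """Rank alternatives by closeness to the ideal solution (classic TOPSIS)."""
    X = np.asarray(decision_matrix, dtype=float)
    # Vector-normalize each criterion column, then apply the weights.
    R = X / np.linalg.norm(X, axis=0)
    V = R * weights
    # Ideal and anti-ideal points depend on whether a criterion is benefit or cost.
    ideal = np.where(benefit_criteria, V.max(axis=0), V.min(axis=0))
    anti  = np.where(benefit_criteria, V.min(axis=0), V.max(axis=0))
    d_plus  = np.linalg.norm(V - ideal, axis=1)
    d_minus = np.linalg.norm(V - anti, axis=1)
    return d_minus / (d_plus + d_minus)   # closeness coefficient, higher is better

# Illustrative data: 3 suppliers scored on cost (lower is better), quality and delivery.
scores = topsis([[200, 7, 8], [180, 6, 9], [220, 9, 7]],
                weights=[0.5, 0.3, 0.2],
                benefit_criteria=[False, True, True])
print(scores.argsort()[::-1])  # supplier indices from best to worst
```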
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that they cannot be solved in polynomial time. Initially, these solutions were based on heuristics. Currently, metaheuristics are used more often for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called "Operon", for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search in the solution space. The Traveling Salesman Problem (TSP) is the intended application, addressed by a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first uses Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third uses a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four evaluate the performance of ProtoG in comparison with the three algorithms adopted. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than these three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which the performance of the algorithms was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its largest average, 3.52%, for one instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
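For reference, the PES metric used above reduces to a simple relative excess over the best-known tour cost; the function below is a sketch of that definition, not the thesis code, and the numbers are invented.

```python
def percent_error_of_solution(found_cost: float, best_known_cost: float) -> float:
    """Percent Error of the Solution (PES): how far the found tour exceeds the best-known one."""
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

# Example: a tour of length 4,310 against a best-known optimum of 4,223 gives PES ≈ 2.06%.
print(round(percent_error_of_solution(4310, 4223), 2))
```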
Abstract:
In this work, Markov chains are the tool used to model and analyze the convergence of the genetic algorithm, both in its standard version and in the other versions that the genetic algorithm admits. In addition, we intend to compare the performance of the standard version with a fuzzy version, believing that the latter gives the genetic algorithm a greater ability to find a global optimum, as expected of global optimization algorithms. The choice of this algorithm is due to the fact that, over the past thirty years, it has become one of the most important tools for solving optimization problems. This choice is also due to its effectiveness in finding good-quality solutions, considering that a good-quality solution is acceptable given that there may be no other algorithm able to obtain the optimal solution for many of these problems. However, the algorithm can be configured in several ways: it depends not only on how the problem is represented but also on how some of the operators are defined, ranging from the standard version, in which the parameters are kept fixed, to versions with variable parameters. Therefore, to achieve good performance with this algorithm it is necessary to have an adequate criterion for choosing its parameters, especially the mutation rate and the crossover rate, or even the population size. It is important to remember that in implementations in which the parameters are kept fixed throughout the execution, modeling the algorithm by a Markov chain results in a homogeneous chain, whereas when the parameters are allowed to vary during the execution, the Markov chain that models it becomes non-homogeneous. Therefore, in an attempt to improve the algorithm's performance, some studies have tried to set the parameters through strategies that capture intrinsic characteristics of the problem. These characteristics are extracted from the current state of the execution, in order to identify and preserve patterns related to good-quality solutions while discarding low-quality patterns. Strategies for feature extraction can use either crisp or fuzzy techniques, the latter implemented through a fuzzy controller. A Markov chain is used for the modeling and convergence analysis of the algorithm, both in its standard version and in the others. In order to evaluate the performance of the non-homogeneous algorithm, tests are applied comparing the standard genetic algorithm with the fuzzy genetic algorithm, in which the mutation rate is adjusted by a fuzzy controller. For this purpose, optimization problems whose number of solutions grows exponentially with the number of variables are chosen.
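A minimal sketch of the idea of adapting the mutation rate from the state of the population; here a crude crisp rule on the coefficient of variation of the fitness stands in for the thesis's fuzzy controller, and the OneMax objective, encoding and thresholds are purely illustrative.

```python
import random

def fitness(bits):                      # illustrative objective: OneMax
    return sum(bits)

def adapted_mutation_rate(pop_fitness, lo=0.01, hi=0.10):
    """Raise the mutation rate when the population has converged (low fitness diversity),
    lower it while fitness values are still spread out -- a crisp stand-in for a fuzzy
    controller acting on the coefficient of variation of the fitness."""
    mean = sum(pop_fitness) / len(pop_fitness)
    var = sum((f - mean) ** 2 for f in pop_fitness) / len(pop_fitness)
    cv = (var ** 0.5) / mean if mean else 0.0
    return lo if cv > 0.05 else hi

def evolve(n_bits=40, pop_size=30, generations=200):
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        pm = adapted_mutation_rate(fits)
        # Tournament selection, one-point crossover, bit-flip mutation at rate pm.
        def pick():
            a, b = random.sample(range(pop_size), 2)
            return pop[a] if fits[a] >= fits[b] else pop[b]
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            cut = random.randrange(1, n_bits)
            child = p1[:cut] + p2[cut:]
            child = [1 - g if random.random() < pm else g for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

print(fitness(evolve()))
```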
Abstract:
This work presents a proposal for a voltage and frequency control system for a wind-power induction generator. An experimental structure was developed, composed basically of a three-phase induction machine, a three-phase capacitor bank and a static reactive power compensator controlled by hysteresis. Control algorithms were developed using conventional methods (PI control) and linguistic methods (using concepts of fuzzy logic and fuzzy control) in order to compare their performance in the variable-speed generation system. The control loop was implemented using an AD/DA PCL-818 board in a Pentium 200 MHz computer. The mathematical model of the induction generator was studied through the Park transformation. Simulations were carried out in the PSpice software to verify the system characteristics in transient and steady-state conditions. The real-time control program was developed in C language, making it possible to verify the performance of the algorithms in the 2.2 kW didactic experimental system.
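As a sketch of the conventional branch mentioned above, the snippet below implements a generic discrete PI control law with output clamping; the gains, sampling period, limits and 220 V setpoint are invented for illustration and are not the values used on the experimental rig.

```python
class DiscretePI:
    """Discrete PI controller: u[k] = Kp*e[k] + Ki*Ts*sum(e)."""
    def __init__(self, kp: float, ki: float, ts: float, u_min: float, u_max: float):
        self.kp, self.ki, self.ts = kp, ki, ts
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint: float, measurement: float) -> float:
        error = setpoint - measurement
        self.integral += error * self.ts
        u = self.kp * error + self.ki * self.integral
        # Clamp the output and undo the integration when saturated (simple anti-windup).
        if u > self.u_max or u < self.u_min:
            self.integral -= error * self.ts
            u = max(self.u_min, min(self.u_max, u))
        return u

# Illustrative use: regulate the generator terminal voltage toward 220 V.
pi = DiscretePI(kp=0.8, ki=5.0, ts=1e-3, u_min=0.0, u_max=1.0)
duty = pi.update(setpoint=220.0, measurement=205.0)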
Abstract:
This work presents a proposal to automate the identification of energy theft in metering systems through fuzzy logic and a SCADA-like supervisory system. The solution collects data from the meters at the customer units - voltage, current, power demand, and the angles of the phasor diagrams of voltages and currents - and processes these data with fuzzy logic, incorporating expert knowledge into a fuzzy system. The collected parameters are processed by the fuzzy inference algorithm, and the output indicates to the user whether the investigated customer may be consuming electrical energy without paying for it, each diagnosis carrying its own membership grade. The value of this solution lies in the need to reduce losses that already exceed twenty percent. It is thus an expert system that supports assertive decision-making and seeks to identify which problems exist on site, so that relationship problems between the utility and the customer unit do not arise. The database of an electrical utility was used, its data were processed by the proposed fuzzy algorithm, and the results were confirmed.
Abstract:
Every day, water scarcity becomes a more serious problem and directly affects global society. Studies aim to raise awareness about the rational use of this natural asset that is essential to our survival. Only 0.007% of the water available in the world is easily accessible and fit for human consumption; it is found in rivers, lakes, etc. To make better use of the water consumed in homes and small businesses, reuse projects are often implemented, resulting in savings for the customers of water utilities. Reuse projects involve several areas of engineering, such as Environmental, Chemical, Electrical and Computer Engineering. The last two are responsible for the control of the process, which aims to make grey water (soapy water) and clear blue water (rainwater) suitable for consumption, or for use in watering gardens, flushing toilets, and other applications. Water has several characteristics that should be taken into consideration when dealing with its reuse, among them turbidity, temperature, electrical conductivity and pH. This document proposes controlling the pH (hydrogen potential) through a microcontroller, using fuzzy logic as the control strategy. The controller was developed in the fuzzy toolbox of Matlab®.
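The abstract's controller was built in Matlab's fuzzy toolbox; the sketch below only illustrates, in plain Python, the general shape of such a rule-based pH controller (triangular memberships, three rules, weighted-average defuzzification). The membership breakpoints and output scaling are invented for illustration.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fuzzy_ph_controller(ph: float) -> float:
    """Return a dosing command in [-1, 1]: positive adds base, negative adds acid."""
    # Fuzzify the measured pH (illustrative breakpoints around the neutral target 7).
    acidic   = tri(ph, 0.0, 4.0, 7.0)
    neutral  = tri(ph, 6.0, 7.0, 8.0)
    alkaline = tri(ph, 7.0, 10.0, 14.0)
    # Rules: acidic -> add base (+1), neutral -> do nothing (0), alkaline -> add acid (-1).
    # Defuzzify with a weighted average of the rule outputs (a simple Sugeno-style step).
    weights = acidic + neutral + alkaline
    return (acidic * 1.0 + neutral * 0.0 + alkaline * -1.0) / weights if weights else 0.0

print(fuzzy_ph_controller(5.2))   # acidic sample -> positive command (dose base)
```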
Abstract:
Electric motors transform electrical energy into mechanical energy in a relatively easy way. In some specific applications, electric motors need to operate with non-contaminated fluids, in high-speed systems, under inhospitable conditions, or in places of difficult access and considerable depth. In these cases, motors with mechanical bearings are not adequate, since their wear gives rise to maintenance. A possible solution for these problems stems from two different alternatives: motors with magnetic bearings, which increase the length of the machine (not convenient), and bearingless motors, which add compactness. Induction motors have been used more and more in research, as they make bearingless motors more robust than those built with other types of machines. The research already carried out on bearingless induction motors used prototypes whose stator/rotor structures were modified, differing most of the time from conventional induction motors. The goal of this work is to study the viability of using conventional induction motors in bearingless motor applications, pointing out which types of motors in this category can be more useful. The study uses the Finite Element Method (FEM). As a means of validation, a conventional induction motor with a squirrel-cage rotor was successfully used in a bearingless motor application of the divided-winding type, confirming the proposed thesis. The control system was implemented in a Digital Signal Processor (DSP).
Abstract:
Lithium (Li) is a chemical element with atomic number 3 and is among the lightest known elements in the Universe. In nature, lithium is generally found in the form of two stable isotopes, 6Li and 7Li. The latter is the dominant one and accounts for about 93% of the Li found in the Universe. Due to its fragility, this element is widely used in astrophysics, especially in the understanding of the physical processes that have occurred since the Big Bang, through the evolution of galaxies and stars. In the primordial nucleosynthesis at the time of the Big Bang (BBN), theoretical calculations predict the production of Li along with the other light elements, such as Deuterium and Beryllium. For Li, BBN theory predicts a primordial abundance of log ε(Li) = 2.72 dex on a logarithmic scale relative to H. The Li abundance found in metal-poor stars, or Pop II stars, is taken as the primordial Li abundance and is measured as log ε(Li) = 2.27 dex. In the ISM (interstellar medium), which reflects the current value, the lithium abundance is log ε(Li) = 3.2 dex. This value is of great importance for our comprehension of the chemical evolution of the Galaxy. The process responsible for the increase from the primordial Li value is still not clearly understood. In fact there is a real contribution of Li from low-mass giant stars, and this contribution needs to be well constrained if we want to understand our Galaxy. The main objection in this logical sequence is the appearance of some low-mass giant stars of G and K spectral types whose atmospheres are highly enriched in Li. Such elevated values are exactly the opposite of what is expected for the typical abundances of low-mass giants, in which the convective envelope undergoes a deepening in mass during which all the Li should be diluted, leading to abundances around log ε(Li) ∼ 1.4 dex according to stellar evolution models. Three suggestions are found in the literature that try to reconcile the theoretical and observed Li abundances in these Li-rich giants, but none of them brings conclusive answers. In the present work, we propose a qualitative study of the evolutionary state of the Li-rich stars in the literature, together with the recent discovery of the first Li-rich star observed by the Kepler satellite. The main objective of this work is to promote a solid discussion about the evolutionary state based on the characteristics obtained from the seismic analysis of the object observed by Kepler. We used evolutionary tracks and simulations carried out with the population synthesis code TRILEGAL in order to evaluate as precisely as possible the evolutionary state and internal structure of these groups of stars. The results indicate a very short characteristic time, when compared to the evolutionary scale, related to the enrichment of these stars.
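For reference, the logarithmic abundance scale used above is the standard spectroscopic convention, defined relative to hydrogen (this definition is general, not specific to the thesis):

log ε(Li) = log10( N(Li) / N(H) ) + 12,

where N(Li) and N(H) are number densities; the ISM value log ε(Li) = 3.2 thus corresponds to roughly one Li atom per 10^8.8 hydrogen atoms.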
Abstract:
The deficit of water and sewerage services is a historic problem in Brazil. The introduction of a new regulatory framework in 2007 presented ways intended to overcome these deficits, among them the improvement of providers' efficiency. This thesis aims to analyze the regulators' performance regarding their ability to induce efficiency in the Brazilian water and sewerage service providers. To this end, an analytical approach based on a sequential explanatory strategy was used, consisting of three steps. In the first step, Data Envelopment Analysis (DEA) was used to measure the providers' efficiency in 2006 and 2011. The results show that the average efficiency may be considered high; however, significant inefficiencies were detected among the 29 analyzed providers. Those in the Southeast region showed the best performance level and those in the Northeast the lowest. The local and private providers were, on average, more efficient. In 2006 and 2011 the average performance was higher among non-regulated providers. In 2006 the group regulated by local agencies had the best average performance; in 2011, the best performance came from the group regulated by consortium agencies. In the second step the Malmquist Index was used, and it indicated that productivity dropped between 2006 and 2011. The decomposition of the Malmquist Index showed a shift of the technical efficiency frontier to a lower level; however, a small advance of the providers towards the frontier was detected. Only the Midwest region recorded progress in overall productivity. The deterioration in total factor productivity was greater among regional providers, but the local and private ones moved more quickly towards the frontier. The providers regulated from 2007 onwards showed a smaller decrease in total productivity, and the results of the catch-up effect were more meaningful. In the last step, the analysis of the regulators' standardization activity showed that some agencies had not issued any rules until 2011. The topics most discussed in the issued rules were tariff adjustments and the setting of general conditions for the provision and use of services; on the other hand, the least covered topics were incentives for new technologies and the introduction of efficiency-inducing regulatory mechanisms and productivity gains in price reviews. Regulators created from 2007 onwards were proportionately more active. Even with the advent of the regulatory framework and the creation of new regulatory bodies, the evidence points to a reality in which the actions of these agencies have not been ensuring that the water and sewerage providers regulated by them achieve better performance. The non-achievement of regulatory goals can be explained by the incipient level of performance of the Brazilian regulatory agencies, which should be strengthened because of their potential contribution to the Brazilian basic sanitation sector.
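As a sketch of the first analytical step, the snippet below solves the input-oriented CCR envelopment model (one linear program per provider) with SciPy; the four "providers", their inputs and outputs are invented for illustration and are not the thesis data.

```python
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y):
    """Input-oriented CCR efficiency for each DMU.
    X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs). Returns scores in (0, 1]."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    n, m = X.shape
    s = Y.shape[1]
    scores = []
    for o in range(n):
        # Decision variables: [theta, lambda_1, ..., lambda_n], minimize theta.
        c = np.r_[1.0, np.zeros(n)]
        # Inputs:  sum_j lambda_j * x_ij - theta * x_io <= 0
        A_in = np.c_[-X[o].reshape(m, 1), X.T]
        b_in = np.zeros(m)
        # Outputs: -sum_j lambda_j * y_rj <= -y_ro
        A_out = np.c_[np.zeros((s, 1)), -Y.T]
        b_out = -Y[o]
        res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.r_[b_in, b_out],
                      bounds=[(None, None)] + [(0, None)] * n, method="highs")
        scores.append(res.x[0])
    return np.array(scores)

# Illustrative data: 4 providers, inputs = (staff, network km), output = connections served.
X = [[20, 150], [35, 300], [15, 120], [40, 200]]
Y = [[1000], [1500], [900], [1200]]
print(dea_ccr_input(X, Y).round(3))
```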
Abstract:
This work presents improvement strategies for a successful evolutionary metaheuristic for solving the Asymmetric Traveling Salesman Problem, namely a Memetic Algorithm designed mainly for this problem. Basically, the improvement applies the optimization techniques known as Path-Relinking and Vocabulary Building. Furthermore, the latter was used in two different ways, in order to evaluate the effects of the improvement on the evolutionary metaheuristic. These methods were implemented in C++ and the experiments were carried out on instances from the TSPLIB library, and it was possible to observe that the proposed procedures were successful in the tests performed.
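A minimal illustration of the path-relinking idea on tour permutations (walking from an initial solution toward a guiding solution and keeping the best intermediate tour); the asymmetric distance matrix and tours are invented, and this is not the thesis's C++ implementation.

```python
def tour_cost(tour, dist):
    """Cost of a directed (asymmetric) tour given a distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def path_relinking(start, guide, dist):
    """Move from `start` toward `guide` one position at a time, keeping the best tour seen."""
    current = list(start)
    best, best_cost = list(current), tour_cost(current, dist)
    for i in range(len(guide)):
        if current[i] != guide[i]:
            j = current.index(guide[i])
            current[i], current[j] = current[j], current[i]   # fix position i to match the guide
            cost = tour_cost(current, dist)
            if cost < best_cost:
                best, best_cost = list(current), cost
    return best, best_cost

# Illustrative 4-city asymmetric instance.
dist = [[0, 2, 9, 10],
        [1, 0, 6, 4],
        [15, 7, 0, 8],
        [6, 3, 12, 0]]
print(path_relinking([0, 1, 2, 3], [0, 2, 1, 3], dist))
```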
Abstract:
Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers, and for this reason it faces a problem of an operational nature when we want to solve interval equations as extensions of real equations using the usual equality and the interval arithmetic: intervals have no additive inverse, and the distributivity of multiplication over addition does not hold for every triple of intervals. The lack of these properties prevents the use of equational logic, both for solving an interval equation, for representing a real equation, and for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and of approximation on intervals, introduced by Acióly [6] in 1991, the idea arises of an interval equation that represents a real equation satisfactorily, since the terms of the interval equation carry the information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later on, of local equality for intervals [8] and [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to (Hennessy [31], 1988) and (Santiago [7], 1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation E ⊢ t ≈ t′ is deduced in the proposed system SDedLoc(E), the interpretations of t and t′ are locally equal in any local Σ-algebra that satisfies the set E of fixed local equations, whenever t and t′ have meaning in A. This ensures a kind of soundness between the local equational logic and the local algebras.
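A minimal sketch in Python of the two algebraic failures mentioned above, using Moore's interval addition, subtraction and multiplication; the concrete intervals are chosen only to exhibit the failures.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        products = (self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi)
        return Interval(min(products), max(products))

x = Interval(1, 2)
# No additive inverse: X - X contains 0 but is not the degenerate interval [0, 0].
print(x - x)                       # Interval(lo=-1, hi=1)

a, b, c = Interval(1, 2), Interval(1, 1), Interval(-1, -1)
# Only subdistributivity holds: A*(B+C) is contained in A*B + A*C, not equal in general.
print(a * (b + c), a * b + a * c)  # Interval(0, 0) vs Interval(-1, 1)
```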
Abstract:
Two-level factorial designs are widely used in industrial experimentation. However, a design with many factors requires a large number of runs, and many replications of the treatments may not be feasible, considering limitations of resources and of time, making the experiment expensive. In these cases, unreplicated designs are used. But, with only one replicate, there is no internal estimate of the experimental error with which to judge the significance of the observed effects. One possible solution to this problem is to use normal plots or half-normal plots of the effects. Many experimenters use the normal plot, while others prefer the half-normal plot and, often, in both cases, without justification. The controversy about the use of these two graphical techniques motivates this work, since there is no record of a formal procedure or statistical test that indicates which one is better. The choice between the two plots seems to be a subjective issue. The central objective of this master's thesis is, then, to perform an experimental comparative study of the normal plot and the half-normal plot in the context of the analysis of unreplicated 2^k factorial experiments. This study involves the construction of simulated scenarios, in which the performance of the plots in detecting significant effects and identifying outliers is evaluated, in order to answer the following questions: Can one plot be better than the other? In which situations? What kind of information does one plot add to the analysis of the experiment that might complement the information provided by the other? What are the restrictions on the use of the plots? With this, the work intends to confront the two techniques, examining them simultaneously in order to identify similarities, differences or relationships that contribute to the construction of a theoretical reference to justify, or to aid, the experimenter's decision about which of the two graphical techniques to use and the reason for this use. The simulation results show that the half-normal plot is better for assisting in the judgment of the effects, while the normal plot is recommended for detecting outliers in the data.
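As a sketch of the graphical technique under comparison, the snippet below computes the effects of an unreplicated 2^3 design and draws a half-normal plot of their absolute values; the design response values are invented for illustration, not taken from the thesis's simulated scenarios.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm

# Illustrative unreplicated 2^3 design: columns A, B, C in standard order, invented response.
A = np.array([-1, 1, -1, 1, -1, 1, -1, 1])
B = np.array([-1, -1, 1, 1, -1, -1, 1, 1])
C = np.array([-1, -1, -1, -1, 1, 1, 1, 1])
y = np.array([45, 71, 48, 65, 68, 60, 80, 65], float)

# Estimated effect of each factor/interaction = contrast / 2^(k-1).
columns = {"A": A, "B": B, "C": C, "AB": A*B, "AC": A*C, "BC": B*C, "ABC": A*B*C}
effects = {name: float(col @ y) / 4 for name, col in columns.items()}

# Half-normal plot: ordered |effect| against half-normal quantiles;
# effects that fall far off the near-zero line are judged significant.
names, values = zip(*sorted(effects.items(), key=lambda kv: abs(kv[1])))
m = len(values)
quantiles = norm.ppf(0.5 + (np.arange(1, m + 1) - 0.5) / (2 * m))
plt.scatter(quantiles, np.abs(values))
for q, v, n in zip(quantiles, np.abs(values), names):
    plt.annotate(n, (q, v))
plt.xlabel("Half-normal quantiles")
plt.ylabel("|effect|")
plt.show()
```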
Abstract:
In the work reported here, we present theoretical and numerical results for a risk model with interest rate and proportional reinsurance, based on the article "Inequalities for the ruin probability in a controlled discrete-time risk process" by Rosário Romera and Maikol Diasparra (see [5]). Recursive and integral equations, as well as upper bounds for the ruin probability, are given considering three different approaches, namely the classical Lundberg inequality, the inductive approach and the martingale approach. Non-parametric density estimation techniques are used to derive upper bounds for the ruin probability, and the algorithms used in the simulation are presented.
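A rough Monte Carlo sketch of a ruin probability in a simplified discrete-time surplus process with interest and proportional retention of claims; the recursion, claim distribution and every parameter below are illustrative simplifications and do not reproduce the model of the cited article.

```python
import random

def ruin_probability(u0, premium, retention, interest, horizon, n_paths=20_000, seed=1):
    """Monte Carlo estimate of the finite-horizon ruin probability for the
    simplified surplus recursion (purely illustrative):
        X_{n+1} = (1 + interest) * X_n + retention * premium - retention * Z_{n+1},
    where Z_{n+1} is the aggregate claim of period n+1 (exponential, mean 1, here)."""
    rng = random.Random(seed)
    ruins = 0
    for _ in range(n_paths):
        x = u0
        for _ in range(horizon):
            claim = rng.expovariate(1.0)          # arbitrary claim distribution
            x = (1 + interest) * x + retention * premium - retention * claim
            if x < 0:
                ruins += 1
                break
    return ruins / n_paths

# Illustrative run: initial surplus 5, premium with 20% loading, 70% retention, 2% interest.
print(ruin_probability(u0=5.0, premium=1.2, retention=0.7, interest=0.02, horizon=50))
```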
Abstract:
The objective of this work was to evaluate the effect of a plant growth regulator and of a biostimulant on the floral induction of the yellow passion fruit under non-inductive conditions, in Araguari-MG, Brazil. Twelve tertiary branches per plot were identified and pruned (02-04-05), 6 of them exposed on one side of the trellis, with predominant light in the morning, and 6 on the other side of the trellis, with light in the afternoon. The experimental design was a split plot, with 7 main treatments (plots): 0 mg L-1 (control), 100 mg L-1, 200 mg L-1 and 300 mg L-1 of the plant growth regulator GA3 (a.i.), and 2.08 mL L-1, 4.17 mL L-1 and 6.25 mL L-1 of the biostimulant Stimulate® (a.i.), in two foliar applications (09-04-02 and 09-05-02), with the adhesive spreader Silwett® added at 0.05%. In addition, 2 secondary treatments (subplots) were used: exposure of the branches to morning light and to afternoon light, with 4 replicates of 3 plants per plot. Each subplot was one of the two sides of the trellis. The means were compared by Tukey's test at 5% probability. At 75 days, the length of the branches and internodes and the number of nodes, leaves and flower buds were evaluated on both sides of the trellis. The studied variables were not influenced by the use of GA3 and Stimulate®; however, there was a difference when the branches were exposed to morning light compared with those exposed to afternoon light.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)