855 results for Pareto Frontier
Abstract:
This work aims to analyze the cost efficiency of producers in the Baixo-Açu irrigation project and to identify the factors that determine this efficiency. To achieve these goals, a cost frontier was first estimated with the non-parametric method of Data Envelopment Analysis (DEA), and the producers' efficiency scores were measured. In a second stage, a Tobit regression model was used to estimate a cost-inefficiency function and to identify the factors associated with resource waste. Among the findings, a high level of resource waste was observed, representing more than 54% of effective cost. The inputs with the greatest waste were energy, herbicides, pesticides, and chemical fertilizers. In general, the producers showed low efficiency levels, and only two of the seventy-five surveyed reached the cost-minimization frontier. These results indicate that producers in irrigated fruit growing in the Baixo-Açu project do not seek to minimize production costs. It was also found that the reduction of resource waste, and thus of cost inefficiency, is associated with the farmer's education, experience in agriculture, and access to technical assistance and credit.
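For illustration only, the sketch below solves a generic input-oriented, constant-returns DEA envelopment problem as a linear program, a simplification of the cost-frontier model described above; the producer data and variable names are toy placeholders, not the Baixo-Açu data.

```python
# Minimal sketch of an input-oriented CCR (constant returns) DEA model solved as an LP
# with scipy. Decision variables are [theta, lambda_1..lambda_n]; theta is the score.
import numpy as np
from scipy.optimize import linprog

def dea_ccr_input(X, Y, o):
    """Efficiency score for DMU `o`. X: (m inputs x n DMUs), Y: (s outputs x n DMUs)."""
    m, n = X.shape
    s, _ = Y.shape
    c = np.zeros(n + 1)
    c[0] = 1.0                                    # minimise theta
    A_in = np.hstack([-X[:, [o]], X])             # sum_j lam_j x_ij - theta x_io <= 0
    b_in = np.zeros(m)
    A_out = np.hstack([np.zeros((s, 1)), -Y])     # -sum_j lam_j y_rj <= -y_ro
    b_out = -Y[:, o]
    res = linprog(c, A_ub=np.vstack([A_in, A_out]), b_ub=np.hstack([b_in, b_out]),
                  bounds=[(None, None)] + [(0, None)] * n, method="highs")
    return res.x[0]

X = np.array([[2., 4., 6.], [3., 1., 5.]])        # toy input (cost) data for 3 producers
Y = np.array([[1., 1., 1.]])                      # toy single output
print([round(dea_ccr_input(X, Y, o), 3) for o in range(X.shape[1])])
```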
Abstract:
This Master's thesis proposes the application of Data Envelopment Analysis (DEA) to evaluate economies of scale and economies of scope in the performance of service teams involved in the installation of data communication circuits, based on a study of a major telecommunications company in Brazil. Data were collected from the company's Operational Performance Division. An initial analysis of a data set covering nineteen installation teams was performed with input-oriented methods. Subsequently, the need for weight restrictions was analyzed using the Assurance Region method, checking for the existence of zero-valued weights, and the resulting returns to scale were verified. Further analyses using the constant (AR-I-C) and variable (AR-I-V) returns-to-scale Assurance Region models confirmed the existence of variable, rather than constant, returns to scale; therefore, all final comparisons use scores obtained with the AR-I-V model. We then verify whether the system exhibits economies of scope by analyzing the behavior of the scores with individual versus multiple outputs. Finally, the conventional results used by the company under study to evaluate team performance are compared with those generated by the DEA methodology. The results show that DEA is a useful methodology for assessing team performance and that it may contribute to improving the quality of the goal-setting procedure.
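For reference, a generic ratio (multiplier) form of the DEA model with assurance-region bounds on the input weights is sketched below; the bounds L_ik and U_ik stand for whatever expert limits a study adopts, and analogous bounds can be placed on ratios of output weights.

```latex
\max_{u,v}\; \frac{\sum_r u_r\, y_{ro}}{\sum_i v_i\, x_{io}}
\quad \text{s.t.} \quad
\frac{\sum_r u_r\, y_{rj}}{\sum_i v_i\, x_{ij}} \le 1 \;\; \forall j,
\qquad
L_{ik} \le \frac{v_i}{v_k} \le U_{ik},
\qquad u_r,\, v_i \ge 0 .
```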
Abstract:
This Master's thesis proposes the application of Data Envelopment Analysis (DEA) to evaluate the performance of sales teams, based on a study of their coverage areas. Data were collected from the company contracted to distribute the products in the state of Ceará. Thirteen sales coverage areas were analyzed, first with the output-oriented constant-returns-to-scale method (CCR-O), then with this method under an assurance region (AR-O-C), and finally with the variable-returns-to-scale method under an assurance region (AR-O-V). The first approach proved inappropriate for this study, since it generates zero-valued weights, allowing an area under evaluation to obtain the maximal score while producing nothing. Using weight restrictions through the assurance-region methods AR-O-C and AR-O-V, decreasing returns to scale were identified, meaning that the improvement in performance is not proportional to the size of the areas analyzed. Based on the data generated by the analysis, a study was carried out to design improvement goals for the inefficient areas. Complementing this study, GDP data for each area were compared with the scores obtained from the AR-O-V analysis. The results presented in this work show that DEA is a useful methodology for assessing sales team performance and that it may contribute to improving the quality of the management process.
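A minimal sketch, under illustrative figures rather than the Ceará data, of how output-oriented scores can be turned into radial improvement goals for inefficient areas: with a score phi >= 1, the target output is the current output scaled by phi.

```python
# Output-oriented DEA: phi >= 1 is the factor by which outputs could be expanded with
# inputs held fixed; phi == 1 means the area is efficient. Values are illustrative.
areas = {"area_01": (1.00, 120.0),   # (AR-O-V score, current sales volume)
         "area_02": (1.18, 95.0),
         "area_03": (1.42, 60.0)}

for name, (phi, sales) in areas.items():
    target = phi * sales              # radial projection onto the efficient frontier
    print(f"{name}: current {sales:.0f}, goal {target:.0f}")
```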
Abstract:
Copper is one of the metals most widely used in the plating processes of galvanic industries. The presence of copper, a heavy metal, in galvanic effluents is harmful to the environment. The main objective of this research was the removal of copper from galvanic effluents using anionic surfactants. The removal process is based on the interaction between the polar head group of the anionic surfactant and the divalent copper in solution. The surfactants used in this study were derived from soybean oil (OSS), coconut oil (OCS), and sunflower oil (OGS). A synthetic copper solution (280 ppm Cu²⁺) was used to simulate the rinse water from an acid copper bath in a galvanic plant. Factorial designs (2³ and 3²) were developed to evaluate the parameters that influence the removal process. For each surfactant (OSS, OCS, and OGS), the independent variables evaluated were surfactant concentration (1.25 to 3.75 g/L), pH (5 to 9), and the presence of an anionic polymer (0 to 0.0125 g/L). Based on the results of the 2³ factorial design and on the estimated stoichiometric ratio between surfactant and copper in solution, new experimental tests were carried out, varying the surfactant concentration from 1.25 to 6.8 g/L (3² factorial design). The results of the experimental designs were subjected to statistical evaluation to obtain Pareto charts and mathematical models for copper removal efficiency (%). The statistical evaluation of the 2³ and 3² factorial designs using saponified coconut oil (OCS) produced the mathematical model that best described the copper removal process. It can be concluded that OCS was the most efficient anionic surfactant, removing 100% of the copper present in the synthetic galvanic solution.
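As a rough illustration of the design-of-experiments step, the sketch below estimates main effects from a 2³ factorial design and ranks them by absolute size, the ordering behind a Pareto chart of effects; the response values are hypothetical, not the measurements reported above.

```python
# Main effects from a 2^3 factorial design (coded -1/+1 levels), ranked for a Pareto
# chart of effects. Factors: surfactant concentration, pH, anionic polymer presence.
import itertools
import numpy as np

design = np.array(list(itertools.product([-1, 1], repeat=3)))   # 8 runs, coded units
removal = np.array([62., 71., 60., 75., 80., 95., 78., 100.])   # hypothetical removal %

effects = {}
for k, name in enumerate(["concentration", "pH", "polymer"]):
    effects[name] = removal[design[:, k] == 1].mean() - removal[design[:, k] == -1].mean()

# Pareto ordering: largest absolute effect first
for name, eff in sorted(effects.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:>13s}: {eff:+.1f}")
```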
Abstract:
We investigated the two-photon absorption (2PA) spectrum of a family of perylene tetracarboxylic derivatives (PTCDs): bis(benzimidazo)perylene (AzoPTCD), bis(benzimidazo)thioperylene (Monothio BZP), n-pentylimidobenzimidazoperylene (PazoPTCD), and bis(n-butylimido)perylene (BuPTCD). These compounds present extremely high two-photon absorption, which makes them attractive for applications in photonic devices. The two-photon absorption cross-section spectra of the perylene derivatives, obtained via the Z-scan technique, were fitted by means of a sum-over-states (SOS) model, which accurately described the different regions of the 2PA cross-section spectra. Frontier molecular orbital calculations show that all molecules present similar features, indicating that the nonlinear optical properties of PTCDs are mainly determined by the central portion of the molecule, with minimal effect from the lateral side groups. In general, our results indicate that the differences in 2PA cross-sections among the compounds are mainly due to resonance enhancement of the nonlinearity.
Abstract:
In this work we studied, by Monte Carlo computer simulation, several properties that characterize damage spreading in the Ising model, defined on Bravais lattices (the square and triangular lattices) and on the Sierpinski gasket. First, we investigated the antiferromagnetic model on the triangular lattice with a uniform magnetic field, using Glauber dynamics; the chaotic-frozen critical frontier that we obtained coincides, within error bars, with the paramagnetic-ferromagnetic frontier of the static transition. Using heat-bath dynamics, we studied the ferromagnetic model on the Sierpinski gasket and showed that two characteristic times govern the relaxation of the damage: one of them satisfies the generalized scaling theory proposed by Henley (critical exponent z ~ A/T at low temperatures), while the other does not obey any of the known scaling theories. Finally, we used time-series analysis methods to study, under Glauber dynamics, the damage in the ferromagnetic Ising model on a square lattice. We obtained a Hurst exponent of 0.5 at high temperatures, growing toward 1 close to the temperature TD that separates the chaotic and frozen phases.
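A minimal sketch of the damage-spreading setup for the square-lattice ferromagnet: two replicas that differ in a single spin are evolved with the heat-bath form of the Glauber rule using the same random numbers, and the Hamming distance between them (the damage) is measured. Lattice size, temperature and run length are illustrative choices, not the parameters used in the work.

```python
# Damage spreading in the 2D ferromagnetic Ising model (J = k_B = 1).
import numpy as np

L, T, sweeps = 16, 3.0, 200
rng = np.random.default_rng(0)
A = rng.choice([-1, 1], size=(L, L))
B = A.copy()
B[0, 0] *= -1                                    # initial damage: one flipped spin

def glauber_sweep(S1, S2):
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        r = rng.random()                         # same random number for both replicas
        for S in (S1, S2):
            h = S[(i+1) % L, j] + S[(i-1) % L, j] + S[i, (j+1) % L] + S[i, (j-1) % L]
            p_up = 1.0 / (1.0 + np.exp(-2.0 * h / T))
            S[i, j] = 1 if r < p_up else -1

for _ in range(sweeps):
    glauber_sweep(A, B)
print(f"damage after {sweeps} sweeps at T={T}: {np.mean(A != B):.3f}")
```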
Abstract:
A new technique for the automatic search of order parameters and critical properties is applied to several well-known physical systems, testing the efficiency of the procedure with a view to applying it to complex systems in general. The automatic-search method is combined with Monte Carlo simulations, which use a given dynamical rule for the time evolution of the system. In the problems investigated, the Metropolis and Glauber dynamics produced essentially equivalent results. We present a brief introduction to critical phenomena and phase transitions, describe the automatic-search method, and discuss previous works in which the method has been applied successfully. We apply the method to the ferromagnetic Ising model, computing the critical frontiers and the magnetization exponent β for several lattice geometries. We also apply the method to the site-diluted ferromagnetic Ising model on a square lattice, computing its critical frontier as well as the magnetization exponent β and the susceptibility exponent γ. We verify that the universality class of the system remains unchanged when site dilution is introduced. We study the problem of long-range bond percolation in a diluted linear chain and discuss the non-extensivity questions inherent to systems with long-range interactions. Finally, we present our conclusions and possible extensions of this work.
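For reference, the two single-spin-flip update rules mentioned above can be written side by side; dE is the energy change of the proposed flip and T the temperature (k_B = 1). This is a generic sketch of the dynamical rules, not of the automatic-search procedure itself.

```python
# Generic single-spin-flip acceptance rules for Monte Carlo Ising simulations.
import math, random

def metropolis_accept(dE, T):
    # accept downhill moves always, uphill moves with probability exp(-dE/T)
    return dE <= 0 or random.random() < math.exp(-dE / T)

def glauber_accept(dE, T):
    # flip with probability 1 / (1 + exp(dE/T))
    return random.random() < 1.0 / (1.0 + math.exp(dE / T))
```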
Abstract:
The deficit of water and sewerage services is a historic problem in Brazil. The new regulatory framework introduced in 2007 presented ways of overcoming these deficits, among them improving the efficiency of providers. This thesis analyzes the performance of regulators with respect to their ability to induce efficiency in Brazilian water and sewerage service providers. To this end, an analytical approach based on a sequential explanatory strategy was used, consisting of three steps. In the first step, Data Envelopment Analysis (DEA) was used to measure provider efficiency in 2006 and 2011. The results show that average efficiency may be considered high; however, significant inefficiencies were detected among the 29 providers analyzed. Providers in the Southeast region showed the best performance and those in the Northeast the lowest; local and private providers were, on average, more efficient. In both 2006 and 2011, average performance was higher among non-regulated providers. In 2006 the group regulated by local agencies had the best average performance; in 2011, the best performance belonged to the group regulated by consortium agencies. In the second step, the Malmquist index was used; it indicated that productivity dropped between 2006 and 2011. Decomposing the Malmquist index showed a shift of the technical-efficiency frontier to a lower level, although a small advance of providers toward the frontier was detected. Only the Midwest region recorded progress in overall productivity. The deterioration in total factor productivity was greater among regional providers, but local and private providers moved more quickly toward the frontier. Providers regulated from 2007 onward showed a smaller decrease in total productivity, and their catch-up effect was more significant. In the last step, the analysis of the regulators' standardization activity showed that some agencies had not issued any rules by 2011. The topics most frequently addressed in the issued rules were tariff adjustments and the setting of general conditions for the provision and use of services; the least covered topics were incentives for new technologies and the introduction of efficiency-inducing regulatory mechanisms and productivity gains in price reviews. Regulators created from 2007 onward were proportionately more active. Even with the advent of the regulatory framework and the creation of new regulatory bodies, the evidence points to a reality in which the actions of these agencies have not ensured that the water and sewage providers they regulate achieve better performance. The failure to achieve regulatory goals can be explained by the incipient level of performance of the Brazilian regulatory authorities, which should be strengthened given their potential contribution to the Brazilian basic sanitation sector.
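For reference, the standard decomposition behind the second step can be written as below, where D^t denotes the distance function measured against the period-t frontier; the first factor is the catch-up (technical-efficiency change) and the bracketed term the frontier shift.

```latex
M_o\!\left(x^{t},y^{t},x^{t+1},y^{t+1}\right)
= \frac{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}{D^{t}\!\left(x^{t},y^{t}\right)}
\left[
\frac{D^{t}\!\left(x^{t+1},y^{t+1}\right)}{D^{t+1}\!\left(x^{t+1},y^{t+1}\right)}
\cdot
\frac{D^{t}\!\left(x^{t},y^{t}\right)}{D^{t+1}\!\left(x^{t},y^{t}\right)}
\right]^{1/2}
```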
Abstract:
Present-day weather forecast models usually cannot provide realistic descriptions of local, and particularly extreme, weather conditions. However, for lead times of a few days, they provide reliable forecasts of the atmospheric circulation that encompasses the sub-scale processes leading to extremes. Hence, forecasts of extreme events can only be achieved through a combination of dynamical and statistical analysis methods, in which a stable and significant statistical model, based on prior physical reasoning, establishes a statistical-dynamical relationship between the local extremes and the large-scale circulation. Here we present the development and application of such a statistical model calibration on the basis of extreme value theory, in order to derive probabilistic forecasts for extreme local temperature. The downscaling is applied to NCEP/NCAR reanalysis data in order to derive estimates of daily temperature at weather stations in the Brazilian Northeast region.
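A minimal sketch of the extreme-value building block: fitting a Generalized Extreme Value (GEV) distribution to block maxima of daily temperature and converting it into an exceedance probability. The data, threshold and station setup below are synthetic assumptions, not NCEP/NCAR values.

```python
# Fit a GEV distribution to annual maxima of (synthetic) daily temperature and compute
# the probability of exceeding a hypothetical local extreme threshold.
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(1)
daily_tmax = 30 + 3 * rng.standard_normal((40, 365))   # 40 synthetic "years" of Tmax (C)
annual_max = daily_tmax.max(axis=1)                     # block maxima

shape, loc, scale = genextreme.fit(annual_max)
threshold = 38.0                                        # hypothetical extreme threshold
p_exceed = genextreme.sf(threshold, shape, loc=loc, scale=scale)
print(f"P(annual max > {threshold} C) = {p_exceed:.3f}")
```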
Abstract:
In survival analysis, long-duration (cure) models allow estimation of the cured fraction, which represents the portion of the population immune to the event of interest. Here we address classical and Bayesian estimation based on mixture models and promotion-time models, using different distributions (exponential, Weibull, and Pareto) to model failure time. The database used to illustrate the implementations, described in Kersey et al. (1987), consists of a group of leukemia patients who underwent a certain type of transplant. The specific implementations used were numerical optimization by BFGS as implemented in R (base::optim), the Laplace approximation (own implementation), and Gibbs sampling as implemented in WinBUGS. We describe the main features of the models used, the estimation methods, and the computational aspects, and we discuss how different prior information can affect the Bayesian estimates.
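As a rough sketch of the classical-estimation route, the code below fits a mixture cure model with an exponential latency distribution, S(t) = pi + (1 - pi) exp(-rate t), by maximum likelihood with BFGS; the data are simulated for illustration, not the Kersey et al. (1987) leukemia data.

```python
# Maximum-likelihood fit of an exponential mixture cure model with right censoring.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, true_pi, true_rate = 300, 0.3, 0.5
cured = rng.random(n) < true_pi
t_event = np.where(cured, np.inf, rng.exponential(1 / true_rate, n))
censor = rng.exponential(5.0, n)
time = np.minimum(t_event, censor)
status = (t_event <= censor).astype(float)       # 1 = observed failure, 0 = censored

def neg_loglik(params):
    pi = 1 / (1 + np.exp(-params[0]))            # logit-transformed cure fraction
    rate = np.exp(params[1])                     # log-transformed rate
    f = (1 - pi) * rate * np.exp(-rate * time)   # sub-density of susceptibles
    S = pi + (1 - pi) * np.exp(-rate * time)     # population survival function
    return -np.sum(status * np.log(f) + (1 - status) * np.log(S))

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="BFGS")
print(f"cure fraction: {1 / (1 + np.exp(-fit.x[0])):.2f}, rate: {np.exp(fit.x[1]):.2f}")
```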
Abstract:
Energy policies and technological progress in the development of wind turbines have made wind power the fastest-growing renewable power source worldwide. The inherent variability of this resource requires special attention when analyzing the impacts of high penetration on the distribution network. A time-series steady-state analysis is proposed that assesses technical issues such as energy export, losses, and short-circuit levels. A multiobjective programming approach based on the non-dominated sorting genetic algorithm (NSGA) is applied to find configurations that maximize the integration of distributed wind power generation (DWPG) while satisfying voltage and thermal limits. The approach has been applied to a medium-voltage distribution network considering hourly demand and wind profiles for part of the U.K. The Pareto-optimal solutions obtained highlight the drawbacks of using a single demand and generation scenario and indicate the importance of appropriate substation voltage settings for maximizing the connection of DWPG.
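For illustration, the non-dominated sorting idea at the core of NSGA reduces, for the first front, to filtering out dominated candidates; the sketch below does this for hypothetical (energy exported, energy losses) pairs, maximizing export while minimizing losses.

```python
# Extract the Pareto-optimal front from a set of (export, losses) evaluations.
def dominates(a, b):
    """a = (export, losses); higher export and lower losses are better."""
    return a[0] >= b[0] and a[1] <= b[1] and a != b

def pareto_front(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

candidates = [(10.0, 2.0), (12.0, 3.5), (9.0, 1.5), (12.0, 4.0), (11.0, 2.2)]
print(pareto_front(candidates))
```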
Abstract:
This work presents the application of a multiobjective evolutionary algorithm (MOEA) to the solution of the optimal power flow (OPF) problem. The OPF is modeled as a constrained nonlinear optimization problem, non-convex and large-scale, with continuous and discrete variables. Violated inequality constraints are treated as objective functions of the problem; this strategy makes it possible to satisfy the physical and operational restrictions without compromising the quality of the solutions found. The MOEA developed is based on Pareto theory and employs a diversity-preserving mechanism to overcome premature convergence and local optima. Fuzzy set theory is employed to extract the best compromise solutions from the Pareto set. Results for the IEEE-30, RTS-96, and IEEE-354 test systems are presented to validate the efficiency of the proposed model and solution technique.
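A minimal sketch of the fuzzy-set step commonly used to pick a best compromise from a Pareto set: each objective receives a linear membership (1 at its best value, 0 at its worst) and the solution with the largest normalized membership sum is selected. The Pareto set and objective names below are illustrative, not OPF results.

```python
# Fuzzy best-compromise selection over a (toy) Pareto set of minimization objectives.
import numpy as np

pareto = np.array([[0.12, 30.0],     # columns: [losses, constraint-violation index]
                   [0.10, 38.0],
                   [0.15, 25.0],
                   [0.11, 34.0]])

f_min, f_max = pareto.min(axis=0), pareto.max(axis=0)
mu = (f_max - pareto) / (f_max - f_min)          # per-objective membership
score = mu.sum(axis=1) / mu.sum()                # normalized over the whole Pareto set
best = int(np.argmax(score))
print("best compromise solution:", pareto[best])
```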
Abstract:
The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure that models interaction effects between pairs of edges. Linear and quadratic costs are added up to form the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is called the Adjacent-Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems in the design of transport and distribution networks; in general, AQMST is the more suitable model for real problems. Although in the literature the linear and quadratic costs are added together, in real applications they may be conflicting, in which case it may be interesting to consider them separately. In this sense, multiobjective optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent-Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: QMST and AQMST. Backtracking and branch-and-bound exact algorithms are proposed for the target problem. The heuristic algorithms developed are Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two, called NSTA. The proposed algorithms are compared through computational experiments with instances adapted from the QMST literature. For the exact algorithms, the analysis focuses on execution time; for the heuristics, the quality of the generated approximation sets is evaluated in addition to execution time, using quality indicators and appropriate statistical tools. Considering the set of instances adopted and the criteria of execution time and approximation-set quality, the experiments showed that the Tabu Search with ejection chains obtained the best results, with the Transgenetic Algorithm ranking second. The PLS algorithm obtained good-quality solutions, but at a much higher computational cost than the other (meta)heuristics, placing third. The NSTA and NSGA-II algorithms took the last positions.
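For illustration, the sketch below evaluates a spanning tree in the adjacent-only quadratic setting, keeping the linear and quadratic components separate as in the biobjective formulation; the edge and interaction costs are toy values, not instances from the QMST literature.

```python
# Biobjective evaluation of a spanning tree: (linear cost, adjacent-pair quadratic cost).
from itertools import combinations

linear_cost = {("a", "b"): 3, ("b", "c"): 2, ("a", "c"): 4, ("c", "d"): 1}
quad_cost = {(("a", "b"), ("b", "c")): 5, (("b", "c"), ("c", "d")): 2}

def tree_cost(tree_edges):
    lin = sum(linear_cost[e] for e in tree_edges)
    quad = 0
    for e, f in combinations(tree_edges, 2):
        if set(e) & set(f):                      # adjacent-only interaction
            quad += quad_cost.get((e, f), 0) + quad_cost.get((f, e), 0)
    return lin, quad                             # kept separate in the biobjective model

print(tree_cost([("a", "b"), ("b", "c"), ("c", "d")]))   # -> (6, 7)
```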
Abstract:
This work performs an algorithmic study of the optimization of a conformal radiotherapy treatment plan. We begin with an overview of cancer, radiotherapy, and the physics of the interaction of ionizing radiation with matter. A proposal for optimizing a radiotherapy treatment plan is then developed in a systematic way: we present the multicriteria problem paradigm and the concepts of Pareto optimality and Pareto dominance, and propose a generic optimization model for radiotherapy treatment. We construct the model input, estimate the dose delivered by the radiation using the dose matrix, and define the objective function of the model. The complexity of optimization models in radiotherapy treatment is typically NP, which justifies the use of heuristic methods. We propose three distinct methods: MOGA, MOSA, and MOTS. The design of these three metaheuristic procedures is presented; for each one we give a brief motivation, the algorithm itself, and the method for tuning its parameters. The three methods are applied to a concrete case and their performances are compared. Finally, for each method, the quality of the Pareto sets, some of the solutions, and the respective Pareto curves are analyzed.
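For reference, with objectives f_1, ..., f_k to be minimized, the Pareto dominance relation used throughout multicriteria optimization can be stated as below; a solution is Pareto optimal when no feasible solution dominates it.

```latex
x \prec x' \;\Longleftrightarrow\;
f_i(x) \le f_i(x') \;\; \forall i \in \{1,\dots,k\}
\;\;\text{and}\;\; \exists\, j : f_j(x) < f_j(x') .
```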
Abstract:
In this study, the methodological procedures involved in the digital imaging of collapsed paleocaves in tufa using GPR are presented. These carbonate deposits occur in the Quixeré region, Ceará State (NE Brazil), on the western border of the Potiguar Basin. The collapsed paleocaves are exposed along a state road and were selected for this study. We chose a portion of the so-called Quixeré outcrop for making a photomosaic and carrying out a GPR test section, in order to compare and parameterize the karst geometries on the geophysical line. The results were satisfactory and led to the adoption of criteria for the interpretation of other GPR sections acquired in the region of the Quixeré outcrop. Two grids of GPR lines were acquired; the first was wider and more widely spaced and guided the location of the second, which was denser and located in the southern part of the outcrop. The radargrams of the second grid satisfactorily reveal the geometries of the collapsed paleocaves. A digital solid model of the Quixeré outcrop was developed for each grid. The first model allows the recognition of the general distribution and location of collapsed paleocaves in the tufa deposits, while the second, more detailed digital model provides not only the 3D individualization of the major paleocaves but also estimates of their respective volumes. The digital solid models are presented here as a new frontier in the study of outcrop analogues of reservoirs (for groundwater and hydrocarbons), in which the volumetric parameterization and characterization of geological bodies become essential for composing the databases that, together with petrophysical property information, are used in more realistic computer simulations of sedimentary reservoirs.