31 results for cost functions
at Instituto Politécnico do Porto, Portugal
Abstract:
In an effort to optimize the manufacturing process of a water-based paint (TBA), in order to minimize the observed deviations in final viscosity, and to develop a new plasticizing admixture for concrete, statistical methods and tools were used to carry out the project. Regarding the TBA, the manufacturing process was first monitored in order to gather all the relevant data that could influence the final viscosity of the paint. A capability analysis of the viscosity parameter showed that it was not always within the customer's specifications, the process Cpk being below 1. Monitoring the process led to the selection of 4 factors, culminating in a 2^4 factorial design. After the trials were carried out, a regression analysis was performed on a first-order model, which proved not significant, requiring 8 additional runs at the axial points. A stepwise regression then produced a viable fit to a second-order model, which yielded the best levels of the 4 factors that keep the viscosity response at the midpoint of the specification interval (1400 mPa·s). As for the concrete admixture, the objective is the use of SIKA polymers instead of the raw material common in this type of product, taking into account the final cost of the formulation. Three important formulation factors were chosen (polymer blend, hydrocarbon blend and % solids), resulting in a 2^3 factorial matrix. The trials were carried out in triplicate, on cement paste, one for each of the cement types most used in Portugal. Statistical analysis of the data produced first-order models for each cement type. The optimization process consisted of optimizing a cost function associated with the formulation, while always guaranteeing a response higher than that observed for the product considered the standard. The results were encouraging, since costs below the required level and spread above that observed for the standard were obtained for all 3 cement types.
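As context for the response-surface step described above, the following is a minimal sketch of fitting a second-order model to coded factorial and axial runs by ordinary least squares. Only two of the four factors are shown, and all factor levels and viscosity values are hypothetical placeholders, not the thesis data.

```python
# Minimal sketch: fit a second-order response-surface model to coded
# factor levels by least squares. Illustrative data only.
import numpy as np

# Coded levels (-1/+1 factorial points plus axial points) for 2 factors,
# and a viscosity response in mPa.s -- hypothetical values.
X_raw = np.array([[-1, -1], [1, -1], [-1, 1], [1, 1],
                  [-1.414, 0], [1.414, 0], [0, -1.414], [0, 1.414],
                  [0, 0], [0, 0]])
y = np.array([1320., 1450., 1380., 1500., 1300., 1480., 1350., 1440.,
              1400., 1405.])

def quadratic_design(X):
    """Design matrix for a full second-order model: intercept,
    linear, interaction and pure quadratic terms."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.column_stack([np.ones(len(X)), x1, x2, x1 * x2,
                            x1 ** 2, x2 ** 2])

beta, *_ = np.linalg.lstsq(quadratic_design(X_raw), y, rcond=None)
print("fitted coefficients:", beta.round(2))

# The fitted surface can then be searched for factor levels whose
# predicted viscosity sits at the specification midpoint (1400 mPa.s).
```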
Abstract:
This paper presents a methodology that aims to increase the probability of delivering power to any load point of the electrical distribution system by identifying new investments in distribution components. The methodology is based on statistical failure and repair data of the distribution power system components and uses fuzzy-probabilistic modelling of the system component outage parameters. Fuzzy membership functions of the system component outage parameters are obtained from statistical records. A mixed integer non-linear optimization technique is developed to identify adequate investments in distribution network components that increase the availability level for any customer in the distribution system at minimum cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a real distribution network.
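To make the "fuzzy membership functions from statistical records" step concrete, here is a minimal sketch of building a triangular fuzzy number for a component outage parameter. The failure-rate records and the min/mean/max construction are assumptions for illustration, not the paper's actual fitting procedure.

```python
# Minimal sketch: a triangular fuzzy membership function for a component
# outage parameter (e.g. failure rate), built from hypothetical records.
import numpy as np

failure_rates = np.array([0.12, 0.15, 0.11, 0.18, 0.14, 0.16, 0.13])

# One common choice: support from the observed min/max, peak at the mean.
a, m, b = failure_rates.min(), failure_rates.mean(), failure_rates.max()

def triangular_membership(x, a, m, b):
    """Degree of membership of x in the triangular fuzzy number (a, m, b)."""
    if x <= a or x >= b:
        return 0.0
    return (x - a) / (m - a) if x <= m else (b - x) / (b - m)

for x in (0.10, 0.14, 0.17):
    print(x, round(triangular_membership(x, a, m, b), 3))
```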
Abstract:
The paper proposes a methodology to increase the probability of delivering power to any load point by identifying new investments in distribution energy systems. The proposed methodology is based on statistical failure and repair data of distribution components and uses fuzzy-probabilistic modeling of the component outage parameters. The fuzzy membership functions of the outage parameters of each component are based on statistical records. A mixed integer nonlinear programming optimization model is developed to identify the adequate investments in distribution energy system components that increase the probability of delivering power to any customer in the distribution system at the minimum possible cost for the system operator. To illustrate the application of the proposed methodology, the paper includes a case study that considers a 180-bus distribution network.
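The combinatorial core of the investment-selection problem can be illustrated with a toy stand-in: enumerate binary investment decisions and keep the cheapest portfolio that meets an availability target. The candidate components, costs, availability gains and target below are hypothetical, and the additive availability model is a deliberate simplification (the real coupling is nonlinear, hence the paper's MINLP formulation).

```python
# Toy stand-in for investment selection: brute-force enumeration of
# binary decisions, cheapest portfolio meeting an availability target.
from itertools import product

candidates = [  # (name, cost in k-euro, availability gain at the load point)
    ("recloser", 40.0, 0.010),
    ("tie-switch", 25.0, 0.006),
    ("redundant feeder", 90.0, 0.020),
]
base_availability, target = 0.970, 0.985

best = None
for decision in product([0, 1], repeat=len(candidates)):
    cost = sum(d * c for d, (_, c, _) in zip(decision, candidates))
    # Additive gains are an illustration only; real availability
    # composition is nonlinear.
    avail = base_availability + sum(d * g for d, (_, _, g) in zip(decision, candidates))
    if avail >= target and (best is None or cost < best[0]):
        best = (cost, decision)

print("cheapest feasible portfolio:", best)
```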
Abstract:
Electricity markets are complex environments, involving a large number of different entities playing in a dynamic scene to obtain the best advantages and profits. MASCEM is a multi-agent electricity market simulator that models market players and simulates their operation in the market. Market players are entities with specific characteristics and objectives, making their own decisions and interacting with other players. MASCEM provides several dynamic strategies for agents' behavior. This paper presents a method that aims to provide market players with strategic bidding capabilities, allowing them to obtain the highest possible gains from the market. The method uses a reinforcement learning algorithm to learn from experience how to choose the best from a set of possible bids. These bids are defined according to the cost function that each producer presents.
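The abstract describes learning the best choice from a discrete set of bids by reinforcement. Below is a minimal stateless Q-learning sketch of that idea; the bid set, the marginal cost of 25 euro/MWh and the market_profit stub are hypothetical stand-ins, not MASCEM's clearing mechanism or the paper's actual algorithm.

```python
# Minimal sketch: stateless Q-learning (bandit style) over a discrete
# set of candidate bids, with a hypothetical market-response stub.
import random

bids = [30.0, 35.0, 40.0, 45.0]          # candidate bid prices (euro/MWh)
q = {b: 0.0 for b in bids}                # estimated value of each bid
alpha, epsilon = 0.1, 0.2                 # learning rate, exploration rate

def market_profit(bid):
    """Hypothetical market: higher bids earn more per MWh but clear less often."""
    clearing_price = random.gauss(38.0, 3.0)
    return bid - 25.0 if bid <= clearing_price else 0.0   # 25 = marginal cost

random.seed(1)
for _ in range(5000):
    bid = random.choice(bids) if random.random() < epsilon \
          else max(q, key=q.get)
    q[bid] += alpha * (market_profit(bid) - q[bid])       # Q-learning update

print({b: round(v, 2) for b, v in q.items()})
```

After enough episodes, the bid with the highest Q-value approximates the most profitable choice against this (assumed) market behaviour.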
Abstract:
This paper presents a game-theory-based methodology to allocate transmission costs, considering cooperation and competition between producers. As an original contribution, it determines each producer's degree of participation in the additional costs according to demand behavior. A comparative study was carried out between the results obtained using the Nucleolus and the Shapley Value and those of other techniques, such as the Averages Allocation method and the Generalized Generation Distribution Factors (GGDF) method. As an example, a six-node network was used in the simulations. The results demonstrate the ability to find adequate solutions in an open-access network environment.
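The Shapley value named above has a compact definition: each player's share is its average marginal contribution over all player orderings. The sketch below computes it for a hypothetical 3-producer cost game, not the paper's six-node case study.

```python
# Minimal sketch: Shapley value of a 3-player cost game by enumerating
# all orderings and averaging marginal contributions.
from itertools import permutations

players = ("P1", "P2", "P3")
cost = {  # hypothetical cost of serving each coalition
    frozenset(): 0.0,
    frozenset({"P1"}): 60.0, frozenset({"P2"}): 50.0, frozenset({"P3"}): 40.0,
    frozenset({"P1", "P2"}): 95.0, frozenset({"P1", "P3"}): 85.0,
    frozenset({"P2", "P3"}): 75.0,
    frozenset({"P1", "P2", "P3"}): 120.0,
}

shapley = {p: 0.0 for p in players}
for order in permutations(players):
    coalition = frozenset()
    for p in order:
        shapley[p] += cost[coalition | {p}] - cost[coalition]
        coalition = coalition | {p}
for p in shapley:
    shapley[p] /= 6  # 3! orderings

print(shapley)  # shares sum to the grand-coalition cost (120.0)
```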
Abstract:
In this paper we present a new methodology, based on game theory, to obtain the market balancing between Distribution Generation Companies (DGENCO) in liberalized electricity markets. The new contribution of this methodology is the verification of the participation rate of each agent, based on the Nucleolus and on the Shapley Value. To validate the results we use the Zaragoza Distribution Network, with 42 buses and 5 DGENCO.
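The nucleolus, the other solution concept used here, minimizes coalition "dissatisfaction" lexicographically. Computing it exactly requires a sequence of linear programs; the sketch below only shows the core idea, comparing the sorted excess vectors of two candidate allocations for a hypothetical 3-player cost game.

```python
# Minimal sketch of the nucleolus criterion: compare sorted excess
# vectors of two allocations; the lexicographically smaller one is
# closer to the nucleolus. Game data is hypothetical.
from itertools import combinations

players = ("A", "B", "C")
cost = {frozenset({"A"}): 60.0, frozenset({"B"}): 50.0, frozenset({"C"}): 40.0,
        frozenset({"A", "B"}): 95.0, frozenset({"A", "C"}): 85.0,
        frozenset({"B", "C"}): 75.0, frozenset(players): 120.0}

def sorted_excesses(allocation):
    """Excess of coalition S: allocated cost of S minus its stand-alone
    cost. In a cost game, larger excess = more reason for S to defect."""
    ex = []
    for r in (1, 2):
        for S in combinations(players, r):
            ex.append(sum(allocation[p] for p in S) - cost[frozenset(S)])
    return sorted(ex, reverse=True)

x = {"A": 45.0, "B": 40.0, "C": 35.0}
y = {"A": 50.0, "B": 40.0, "C": 30.0}
print(sorted_excesses(x))
print(sorted_excesses(y))  # here y's vector is lexicographically smaller
```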
Abstract:
This research and development work is founded on the concept of fuzzy logic control. Using the tools of the Matlab software, it was possible to develop a controller based on fuzzy inference capable of controlling any type of real physical system, regardless of its characteristics. Fuzzy control is a very particular type of control, as it allows numerical data to be used simultaneously with linguistic variables based on heuristic knowledge of the systems to be controlled. In this way, one can quantify, for example, whether a glass is "half full" or "half empty", whether a person is "tall" or "short", whether it is "cold" or "very cold". PID control is, without any doubt, the controller most widely used in systems control. Owing to its simple construction, its low application and maintenance costs and the results it delivers, this controller is the first option when implementing a control loop in a given system. It is characterized by three tuning parameters, namely the proportional, integral and derivative components, which together allow effective tuning of any type of system. To automate the controller tuning process, and taking advantage of the best of both fuzzy control and PID control, the two controllers were combined; together, as will be seen later, they produced results that meet the stated objectives. With the aid of Matlab's Simulink, the block diagram of the control system was developed, in which the fuzzy controller supervises the response of the PID controller, correcting it over the simulation time. The developed controller is called the FuzzyPID controller. During the practical development of the work, the response of several systems to a unit step input was simulated. The systems studied are mostly real physical systems, representing mechanical, thermal, pneumatic, electrical and other systems, which can easily be described by first-, second- and higher-order transfer functions, with and without delay.
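As a rough illustration of the supervision idea (not the thesis's Simulink model or its fuzzy inference rules), the sketch below runs a PID loop on a first-order plant and scales the proportional action with a crude rule-based stand-in for the fuzzy supervisor. Plant parameters and gains are assumed values.

```python
# Minimal sketch: PID control of a first-order plant with a rule-based
# ("fuzzy-style") supervisor scaling the proportional action.
dt, T, K = 0.01, 1.0, 1.0          # time step; plant time constant and gain
kp, ki, kd = 2.0, 1.0, 0.05        # base PID gains (illustrative)

y = 0.0                            # plant output
integral, prev_error = 0.0, 0.0
for step in range(int(5.0 / dt)):  # 5 s of simulated time, unit step input
    error = 1.0 - y
    # Supervisor stand-in: boost the action for large errors,
    # soften it near the setpoint to reduce overshoot.
    scale = 1.5 if abs(error) > 0.5 else (0.6 if abs(error) < 0.1 else 1.0)
    integral += error * dt
    derivative = (error - prev_error) / dt
    u = scale * kp * error + ki * integral + kd * derivative
    prev_error = error
    y += dt * (K * u - y) / T      # first-order plant: T*dy/dt = K*u - y

print(f"output after 5 s: {y:.3f} (setpoint 1.0)")
```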
Abstract:
Screening of topologies developed by hierarchical heuristic procedures can be carried out by comparing their optimal performance. In this work we explore mono-objective process optimization using two algorithms, simulated annealing and tabu search, and four different objective functions: two of the net present value type, one of them including environmental costs, and two of the global potential impact type. The hydrodealkylation of toluene to produce benzene was used as a case study, considering five topologies with different complexities, mainly obtained by including or not liquid recycling and heat integration. The performance of the algorithms together with the objective functions was observed, analyzed and discussed from various perspectives: average deviation of results for each algorithm, capacity for producing high-purity product, screening of topologies, robustness of the objective functions in screening of topologies, trade-offs between economic and environmental objective functions, and variability of the optimum solutions.
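For readers unfamiliar with simulated annealing, the sketch below shows its accept-worse-moves-with-decaying-probability mechanism on a toy one-dimensional objective. The objective function, cooling schedule and step size are placeholders, not the flowsheet NPV models or settings used in the study.

```python
# Minimal sketch: simulated annealing on a multimodal toy objective.
import math, random

def objective(x):
    return (x - 2.0) ** 2 + 3.0 * math.sin(5.0 * x)   # toy stand-in

random.seed(0)
x = random.uniform(-5.0, 5.0)
best_x, temperature = x, 10.0
for _ in range(20000):
    candidate = x + random.gauss(0.0, 0.5)
    delta = objective(candidate) - objective(x)
    # Accept improvements always; accept worse moves with Boltzmann probability.
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        x = candidate
        if objective(x) < objective(best_x):
            best_x = x
    temperature *= 0.9995                              # geometric cooling

print(f"best x = {best_x:.4f}, objective = {objective(best_x):.4f}")
```

Tabu search differs mainly in replacing the temperature mechanism with a memory of recently visited moves that are temporarily forbidden.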
Abstract:
In this paper we consider a differentiated Stackelberg model in which the leader firm engages in an R&D process that yields an endogenous cost-reducing innovation. The aim is to study the licensing of the cost reduction by a two-part tariff. Using comparative static analysis, we conclude that the degree of differentiation of the goods plays an important role in the results. We also make a direct comparison between our model and the Cournot duopoly model.
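As a numeric illustration of the underlying duopoly structure (the paper's model adds the R&D stage and the two-part tariff licence on top of it), the sketch below solves a differentiated Stackelberg game with linear inverse demand p_i = a - q_i - g*q_j and constant marginal costs, and shows how a cost reduction for the leader shifts both outputs. All parameter values are assumptions.

```python
# Minimal sketch: differentiated-goods Stackelberg duopoly with linear
# demand p_i = a - q_i - g*q_j and constant marginal costs.
def stackelberg(a, g, c_leader, c_follower):
    """Leader picks q1 anticipating the follower's best reply
    q2(q1) = (a - c_follower - g*q1) / 2; solving the leader's first-order
    condition gives the closed form below."""
    q1 = (a * (2 - g) - 2 * c_leader + g * c_follower) / (2 * (2 - g * g))
    q2 = (a - c_follower - g * q1) / 2
    return q1, q2

# Before and after a cost-reducing innovation for the leader:
print(stackelberg(a=10.0, g=0.5, c_leader=4.0, c_follower=4.0))
print(stackelberg(a=10.0, g=0.5, c_leader=2.0, c_follower=4.0))
```

With these assumed numbers the innovation raises the leader's output and depresses the follower's, and the size of the shift depends on the differentiation parameter g, in line with the comparative statics the abstract refers to.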
Abstract:
This technical report describes the probability density functions (PDFs) that have been implemented to model the behaviour of certain parameters of the Repeater-Based Hybrid Wired/Wireless PROFIBUS Network Simulator (RHW2PNetSim) and the Bridge-Based Hybrid Wired/Wireless PROFIBUS Network Simulator (BHW2PNetSim).
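The general technique behind such a report can be sketched briefly: fit a PDF to observed timing data and sample from it to drive the simulator. The exponential model and the sample values below are hypothetical; the report itself specifies which distributions the RHW2PNetSim and BHW2PNetSim actually use.

```python
# Minimal sketch: fit a PDF to observed delays and sample from it to
# feed a discrete-event simulator. Hypothetical data and model.
import random
import statistics

observed_delays_ms = [1.9, 2.3, 2.1, 2.6, 1.8, 2.4, 2.2]  # hypothetical

# Fit an exponential PDF by maximum likelihood (rate = 1 / sample mean).
rate = 1.0 / statistics.mean(observed_delays_ms)

random.seed(7)
simulated = [random.expovariate(rate) for _ in range(5)]
print([round(d, 2) for d in simulated])  # delays fed to the simulator
```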
Abstract:
Direct Search Optimization methods are needed to solve optimization problems where the objective function and/or constraint functions may be non-differentiable or non-convex, or where their analytical expressions cannot be determined, either due to complexity or cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes or can be modelled by functions with complex expressions or by noise functions, and it is impossible or very difficult to calculate their derivatives. Direct Search Optimization methods use only function values and do not need derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access to the API, using Web Services, are both presented. Remote access has the advantage of always providing the latest version of the API. For users who simply want a tool to solve nonlinear optimization problems and do not want to integrate these methods in applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.
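One representative of the derivative-free family such an API bundles is compass (coordinate) search, which probes the objective along each axis and halves the step when no probe improves. The sketch below is a generic illustration of that method, not the API's actual implementation or interface.

```python
# Minimal sketch: compass search, a direct-search method that uses only
# function values (no derivatives).
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x, n = list(x0), len(x0)
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for i in range(n):
            for direction in (+1, -1):
                trial = x[:]
                trial[i] += direction * step
                ft = f(trial)
                if ft < fx:                 # only function values are used
                    x, fx, improved = trial, ft, True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                     # no axis improved: refine the mesh
            if step < tol:
                break
    return x, fx

rosenbrock = lambda v: (1 - v[0]) ** 2 + 100 * (v[1] - v[0] ** 2) ** 2
print(compass_search(rosenbrock, [-1.2, 1.0]))
```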
Abstract:
This paper presents a novel approach to WLAN propagation models for use in indoor localization. The major goal of this work is to eliminate the need for in situ data collection to generate the fingerprinting map; instead, the map is generated using analytical propagation models such as COST Multi-Wall, COST 231 average wall and Motley-Keenan. The kNN (K-Nearest Neighbour) and WkNN (Weighted K-Nearest Neighbour) location estimation algorithms were used to determine the accuracy of the proposed technique. This work relies on analytical and measurement tools to determine which path-loss propagation models are better suited to location estimation applications based on the Received Signal Strength Indicator (RSSI). The study presents different proposals for choosing the most appropriate values for the model parameters, such as obstacle attenuations and coefficients. Some adjustments to these models, particularly to Motley-Keenan, considering the thickness of walls, are proposed. The best solution found is based on the adjusted Motley-Keenan and COST models, which allows the propagation loss estimation to be obtained for several environments. Results obtained from two testing scenarios showed the reliability of the adjustments, providing smaller errors between measured and predicted values.
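The two building blocks of the approach can be sketched together: a Motley-Keenan-style multi-wall path-loss model predicts RSSI on a grid, and kNN over that predicted fingerprint map estimates a position. All constants (reference power, path-loss exponent, per-wall attenuation) and the single-AP, uniform-grid setup are illustrative assumptions; real fingerprinting compares RSSI vectors from several APs.

```python
# Minimal sketch: multi-wall path-loss prediction plus kNN localization
# over the predicted fingerprint map. Illustrative constants only.
import math

def predicted_rssi(d, n_walls, p0=-40.0, n_exp=2.0, wall_att=3.5):
    """Path loss: P0 at 1 m, log-distance decay, plus a per-wall penalty."""
    return p0 - 10 * n_exp * math.log10(max(d, 1.0)) - wall_att * n_walls

# Hypothetical fingerprint map: grid position -> predicted RSSI from one AP
# at the origin, with a wall every 3 m along x.
fingerprints = {(x, y): predicted_rssi(math.hypot(x, y), n_walls=x // 3)
                for x in range(10) for y in range(10)}

def knn_locate(measured, k=3):
    """Average the k grid positions whose predicted RSSI is closest."""
    ranked = sorted(fingerprints, key=lambda p: abs(fingerprints[p] - measured))
    xs, ys = zip(*ranked[:k])
    return sum(xs) / k, sum(ys) / k

print(knn_locate(measured=-62.0))
```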
Abstract:
With not just the emergence but also the growth of the electronic market, that is, the growth of online suppliers of services and products and of Internet users (potential consumers), the conditions necessary for the affirmation of agile/virtual enterprises (A/VE) as a present and future enterprise organizational model are created. In this context, it is our understanding that the broker may have an important role in their development, namely if the broker performs its functions for the A/VE with better efficacy and efficiency. In this article we first present a structured review of broker models. We then present a taxonomy of possible broker functions for the broker's action with the A/VE, followed by a classification of the broker models in the literature. This classification permits the analysis of a broker model and establishes a framework for our broker model according to the BM_Virtual Enterprise Architecture Reference Model (BM_VEARM).
Abstract:
Master's dissertation presented to the Instituto Politécnico do Porto in fulfilment of the requirements for the degree of Master in Management of Organizations – Business Management branch. Supervisor: Professor Doutor Pedro Nunes. Supervisor: Professor Henrique Curado.