55 results for Distributed Material Flow Control
at Instituto Politécnico do Porto, Portugal
Abstract:
This work addresses the problem of traction control in wheeled mobile robots, in the particular case of the RoboCup Middle Size League (MSL). The slip control problem is formulated using simple friction models for the ISePorto Team robots, which have a differential wheel configuration. Traction was also characterized experimentally in the MSL scenario for relevant game events. This work proposes a hierarchical traction control architecture that relies on local slip detection and control at each wheel, with relevant information being relayed to a higher level responsible for global robot motion control. A dedicated single-axis embedded control hardware subsystem, allowing complex local control, high-frequency current sensing and odometry processing, was developed. This local axis control board is integrated in a distributed system using CAN bus communications. The slip observer was implemented in the axis control hardware nodes integrated in the ISePorto robots and was used to control traction and to detect its loss. An external vision system was used to perform a qualitative analysis of the slip detection and of the observer performance, and the corresponding results are presented.
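As a rough illustration of the local, per-wheel slip detection idea described above (this is not the ISePorto implementation; the wheel radius, threshold and reference-velocity source are assumptions), a minimal sketch:

```python
# Minimal slip-detection sketch for one wheel of a differential-drive robot.
# Assumptions (not from the paper): the reference linear velocity comes from an
# external estimate (e.g., fused odometry/vision), and SLIP_THRESHOLD is arbitrary.

WHEEL_RADIUS = 0.04      # m (assumed)
SLIP_THRESHOLD = 0.15    # slip ratio above which slip is flagged (assumed)

def slip_ratio(wheel_omega: float, v_ref: float) -> float:
    """Longitudinal slip: (wheel surface speed - reference speed) / wheel surface speed."""
    v_wheel = wheel_omega * WHEEL_RADIUS
    denom = max(abs(v_wheel), 1e-3)          # avoid division by zero at standstill
    return (v_wheel - v_ref) / denom

def local_slip_controller(wheel_omega: float, v_ref: float, torque_cmd: float) -> float:
    """If slip is detected, scale down the torque command (and report to the higher level)."""
    s = slip_ratio(wheel_omega, v_ref)
    if abs(s) > SLIP_THRESHOLD:
        # back off the torque proportionally to the excess slip (simple heuristic)
        return torque_cmd * max(0.0, 1.0 - (abs(s) - SLIP_THRESHOLD))
    return torque_cmd
```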
Abstract:
Aerodynamic drag is known to be one of the factors that contributes most to increased aircraft fuel consumption. The primary source of skin friction drag during flight is boundary layer separation; the boundary layer is the layer of air moving smoothly in the immediate vicinity of the aircraft. In this paper we discuss a cyber-physical system approach capable of efficiently suppressing the turbulent flow by using a dense sensing deployment to detect the low-pressure region and a similarly dense deployment of actuators to manage the turbulent flow. With this concept, only the actuators in the vicinity of a separation layer are activated, minimizing power consumption and also the induced drag.
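A minimal sketch of the "activate only the actuators near the detected separation region" concept; the co-located sensor/actuator layout, pressure threshold and neighbourhood size are illustrative assumptions, not the paper's design:

```python
# Sketch: detect low-pressure sensors and switch on only the nearby actuators.

PRESSURE_THRESHOLD = 0.4   # normalized pressure below which separation is suspected (assumed)
NEIGHBOURHOOD = 1          # activate actuators within this index distance of a low-pressure sensor

def actuators_to_activate(pressures: list[float]) -> set[int]:
    """Return indices of actuators to switch on, assuming sensor i is co-located with actuator i."""
    active = set()
    for i, p in enumerate(pressures):
        if p < PRESSURE_THRESHOLD:
            for j in range(max(0, i - NEIGHBOURHOOD), min(len(pressures), i + NEIGHBOURHOOD + 1)):
                active.add(j)
    return active

print(actuators_to_activate([0.9, 0.8, 0.35, 0.3, 0.85, 0.9]))  # -> {1, 2, 3, 4}
```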
Abstract:
IEEE International Conference on Cyber Physical Systems, Networks and Applications (CPSNA'15), Hong Kong, China.
Abstract:
Environmental problems such as acid rain, eutrophication and global warming are discussed daily, yet we rarely find discussions about the phosphorus problem. Over the years phosphorus has become a real problem and should be discussed more. In this thesis a global material flow analysis of phosphorus was carried out, based on data from 2004. The production of phosphate rock in that year was 18.9 million tonnes; almost all of this amount was applied to the soil as fertilizer, but plants can only take up, on average, 20% of the fertilizer input to grow, and the remainder is lost to the soil phosphorus pool. In the soil there is an equilibrium between the phosphorus available for uptake by plants and the phosphorus associated with other compounds; this equilibrium depends on the kind of soil and is related to the soil pH. A reserve inventory was also carried out: the reserve, i.e. the amount that is economically available, is 15,000 million tonnes, and the reserve base is estimated at 47,000 million tonnes. The major reserves are found in Morocco and Western Sahara, the United States, China and South Africa. The reserve estimated in 2009 was 15,000 million tonnes of phosphate rock, or 1,963 million tonnes of P. If around 22 Mt of phosphorus is mined every year (2008 production, USGS 2009), and consumption increases each year because of food demand, the phosphate rock reserves will be exhausted in about 90 years, or maybe even less. For the value/impact assessment a qualitative analysis was carried out: if in the future there is no more phosphate rock to produce fertilizers, a drop in crop yields is expected, the extent of which depends on the kind of soil, and the impact on human food and animal production will not be a relevant problem. Phosphorus can be recovered from different waste streams, such as ploughing crop residues back into the soil, food processing plants and food retailers, human and animal excreta, meat and bone meal, manure fibre, sewage sludge and wastewater. Some of these examples are developed in the paper.
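The roughly 90-year figure follows directly from the abstract's own numbers; a short check of the arithmetic (static estimate, ignoring the stated consumption growth):

```python
# Static depletion estimate implied by the abstract's own figures
# (1,963 Mt of P in reserve, ~22 Mt of P mined per year); no consumption growth assumed.

reserve_p_mt = 1_963         # phosphorus content of the 2009 reserve estimate, million tonnes
production_p_mt_per_yr = 22  # annual phosphorus production (2008, USGS 2009)

print(f"Static reserve lifetime: {reserve_p_mt / production_p_mt_per_yr:.0f} years")  # ~89 years
```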
Abstract:
Broadcast networks that are characterised by having different physical layers (PhL) demand some kind of traffic adaptation between segments in order to avoid traffic congestion in the linking devices. In many LANs, this problem is solved by the linking devices themselves, which use some kind of flow control mechanism that either tells transmitting stations to pause (the transmission) or simply discards frames. In this paper, we address the case of token-passing fieldbus networks operating in a broadcast fashion and involving message transactions over heterogeneous (wired or wireless) physical layers. For the addressed case, real-time and reliability requirements demand a different solution to the traffic adaptation problem. Our approach relies on the insertion of an appropriate idle time before a station issues a request frame. In this way, we guarantee that the linking devices' queues do not grow to the point where the timeliness properties of the overall system become unsuitable for the targeted applications.
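A highly simplified sketch of the idle-time idea; the timing model below (transmission time proportional to frame length over the slowest physical layer) is an assumption for illustration, not the paper's analysis:

```python
# Before issuing a new request, the initiator waits long enough for the slowest physical
# layer to finish relaying the previous frame, so linking-device queues do not build up.

def frame_duration(frame_bits: int, bit_rate_bps: float) -> float:
    """Time to transmit a frame on a given physical layer (overheads ignored)."""
    return frame_bits / bit_rate_bps

def inserted_idle_time(frame_bits: int, initiator_rate: float, segment_rates: list[float]) -> float:
    """Idle time to insert before the next request frame (clamped at zero)."""
    slowest = min(segment_rates)
    return max(0.0, frame_duration(frame_bits, slowest) - frame_duration(frame_bits, initiator_rate))

# Example: a 1000-bit request issued on a 2.5 Mbit/s wired segment that must also
# traverse a 1 Mbit/s wireless segment.
print(f"{inserted_idle_time(1000, 2.5e6, [2.5e6, 1e6]) * 1e6:.0f} us")  # -> 600 us
```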
Abstract:
IEEE Robótica 2007 - 7th Conference on Mobile Robots and Competitions, Paderne, Portugal 2007
Abstract:
The European Union's 2020 targets for biofuels and bioliquids have, over the last decade, brought the biodiesel industry in Portugal into prominence. Inherent to the biodiesel production process is a by-product, crude glycerol, whose study has been attracting interest in the scientific community. The main objective of this work was to study the gasification of technical glycerol and of crude glycerol, using steam as the oxidising agent. The aim was to evaluate the composition of the product gas obtained and the gasification parameters, such as the carbon and hydrogen conversion percentages, the dry gas yield, the cold gas efficiency and the heating value of the gas produced. In the study of technical glycerol gasification, the effect of temperature on the performance of the process was evaluated between 750 and 1000 ºC, and the effect of the feed flow rate to the reactor (3.8 mL/min, 6.5 mL/min and 10.0 mL/min) was also studied. For the lowest flow rate, the effect of the glycerol/water mixture ratio (25/75, 40/60, 60/40 and 75/25) was studied, and for the 60/40 mixture ratio the influence of adding air as a gasifying agent was evaluated. The study of crude glycerol gasification was carried out through gasification tests over a temperature range of 750 ºC to 1000 ºC, for a glycerol/water mixture ratio of 60/40, a flow rate of 3.8 mL/min, and using only steam as the gasification agent. The tests were performed in a fixed-bed reactor 500 mm long and 90 mm in internal diameter, containing an alumina bed with 5 mm diameter particles. Heating was provided by a 4 kW electric furnace. The product gas samples collected were analysed by gas chromatography with a thermal conductivity detector. The results obtained in the gasification of technical glycerol revealed that temperature is a preponderant variable in the performance of the gasification process. With the exception of the higher heating value, which decreased slightly with increasing temperature, the highest values of the gasification parameters were obtained for temperatures above 900 ºC. This temperature seems to be decisive in the kinetic model of glycerol gasification, conditioning the composition of the product gas obtained. It was also concluded that, within the range of flow rates tested, the feed flow rate to the reactor had no influence on the gasification process. The tests carried out to evaluate the effect of the mixture ratio showed that increasing the amount of water added to the feed reduces the CO and CH4 content and increases the H2 and CO2 content of the product gas. For the 25/75 mixture ratio, H2/CO ratios of 1.3 were obtained at temperatures above 900 ºC. The influence of water addition became more evident in the gasification tests carried out at temperatures above 900 ºC: an increase in carbon conversion, dry gas yield and cold gas efficiency was observed, along with a slight decrease in the heating value and in the available power of the product gas. For the 60/40 and 40/60 mixture ratios, the gasification parameters obtained were of the same order of magnitude, with values intermediate between those obtained for the 25/75 and 75/25 mixture ratios. However, the higher the water content of the feed, the higher the energy consumption associated with water vaporisation. Thus, increasing the water content of the mixture will only be of industrial interest if the objective is hydrogen production.
Regarding the effect of adding air as a gasification agent, the results obtained indicate that some exothermic reactions may be promoted, which should contribute to reducing the overall energy consumption of the process. In addition, the product gas presented an H2/CO ratio that is interesting from the point of view of industrial application, 35% higher than that observed for gasification carried out with steam alone. With the exception of the decrease in the higher heating value of the product gas, the remaining parameters studied were of the same order of magnitude as those obtained for the same mixture ratio in the absence of air. As for the gasification of crude glycerol, higher H2/CO ratios and cold gas efficiencies were obtained than for the same mixture ratio using technical glycerol. The other gasification parameters evaluated were similar for the two feedstocks, with only a slight decrease in the higher heating value of the gas produced with crude glycerol. The results obtained demonstrate the possibility of energy recovery from the crude glycerol resulting from biodiesel production.
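For reference, two of the figures of merit mentioned above can be computed from standard textbook definitions; the numbers below are illustrative placeholders, not the thesis data:

```python
# Cold gas efficiency and H2/CO ratio, computed from standard definitions.

def cold_gas_efficiency(gas_yield_nm3_per_kg: float, gas_lhv_mj_per_nm3: float,
                        feed_lhv_mj_per_kg: float) -> float:
    """Cold gas efficiency = chemical energy in the product gas / chemical energy in the feed."""
    return (gas_yield_nm3_per_kg * gas_lhv_mj_per_nm3) / feed_lhv_mj_per_kg

def h2_co_ratio(h2_vol_frac: float, co_vol_frac: float) -> float:
    """Molar (volumetric) H2/CO ratio of the product gas."""
    return h2_vol_frac / co_vol_frac

# Placeholder values: 1.2 Nm3 of gas per kg of feed, gas LHV 10 MJ/Nm3, feed LHV 16 MJ/kg.
print(cold_gas_efficiency(1.2, 10.0, 16.0))  # -> 0.75
print(h2_co_ratio(0.40, 0.30))               # -> ~1.33
```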
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach. Not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. The paper proposes heuristics to dynamically determine which components to replicate, based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The results show that the proposed heuristics achieve reasonably higher system availability than static offline decisions when lower replication ratios are imposed due to resource or cost limitations. The paper also introduces a novel approach to coordinate the activation of passive replicas in interdependent distributed environments. The proposed distributed coordination model reduces the complexity of the needed interactions among nodes and converges to a globally acceptable solution faster than a traditional centralised approach.
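A rough sketch of the kind of significance-driven heuristic described above (the paper's actual ranking and placement rules may differ): components with higher significance get passive replicas first, placed on the remaining nodes with the most spare capacity:

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    significance: float   # importance of the component to the overall system (assumed metric)
    primary_node: str

def place_passive_replicas(components: list[Component], spare: dict[str, int],
                           replica_budget: int) -> dict[str, str]:
    """Return a mapping component -> node chosen to host its passive replica."""
    placement = {}
    for comp in sorted(components, key=lambda c: c.significance, reverse=True):
        if replica_budget == 0:
            break
        # candidate nodes: anywhere except the primary's node, preferring spare capacity
        candidates = [n for n, cap in spare.items() if cap > 0 and n != comp.primary_node]
        if not candidates:
            continue
        node = max(candidates, key=lambda n: spare[n])
        placement[comp.name] = node
        spare[node] -= 1           # note: mutates the caller's capacity map
        replica_budget -= 1
    return placement

comps = [Component("ctrl", 0.9, "n1"), Component("log", 0.2, "n2"), Component("ui", 0.5, "n1")]
print(place_passive_replicas(comps, {"n1": 1, "n2": 2, "n3": 1}, replica_budget=2))
```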
Abstract:
Replication is a proven concept for increasing the availability of distributed systems. However, actively replicating every software component in distributed embedded systems may not be a feasible approach. Not only are the available resources often limited, but the imposed overhead could also significantly degrade the system's performance. This paper proposes heuristics to dynamically determine which components to replicate, based on their significance to the system as a whole, the consequent number of passive replicas, and where to place those replicas in the network. The activation of passive replicas is coordinated through a fast-convergence protocol that reduces the complexity of the needed interactions among nodes until a new collective global service solution is determined.
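As an illustration of coordinated activation of passive replicas (a sketch of the general idea only, not the paper's protocol; the node names, the deterministic tie-break rule and the round loop are assumptions):

```python
# Round-based coordination sketch: each node proposes to activate the replicas it hosts for
# failed components, conflicting proposals are resolved deterministically, and rounds repeat
# until the proposal set stops changing.

def coordinate_activation(replicas: dict[str, list[str]],
                          failed_components: set[str]) -> dict[str, str]:
    """replicas: node -> components for which that node holds a passive replica.
    Returns component -> node elected to activate its replica."""
    decisions: dict[str, str] = {}
    changed = True
    while changed:                      # each iteration stands in for one coordination round
        changed = False
        for node, hosted in replicas.items():
            for comp in hosted:
                if comp not in failed_components:
                    continue
                # deterministic tie-break: the lexicographically smallest node wins
                if comp not in decisions or node < decisions[comp]:
                    decisions[comp] = node
                    changed = True
    return decisions

print(coordinate_activation({"n2": ["ctrl", "ui"], "n3": ["ctrl"]}, {"ctrl", "ui"}))
# -> {'ctrl': 'n2', 'ui': 'n2'}
```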
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation traditionally used in transmission networks should be adapted to distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists in an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
Abstract:
Fractional Calculus (FC) goes back to the beginning of the theory of differential calculus. Nevertheless, applications of FC have emerged only in the last two decades, due to progress in the area of chaos, which revealed subtle relationships with FC concepts. In the field of dynamical systems theory some work has been carried out, but the proposed models and algorithms are still at a preliminary stage of establishment. Having these ideas in mind, the paper discusses an FC perspective in the study of the dynamics and control of some distributed parameter systems.
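For context, one standard formulation of a fractional derivative often used when discretising fractional-order operators (not necessarily the definition adopted in the paper) is the Grünwald-Letnikov derivative:

D_a^{\alpha} f(t) = \lim_{h \to 0} h^{-\alpha} \sum_{k=0}^{\lfloor (t-a)/h \rfloor} (-1)^{k} \binom{\alpha}{k} f(t - kh)

For \alpha = 1 this reduces to the usual backward-difference limit of the first derivative, while non-integer \alpha yields an operator with memory of past values of f.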
Abstract:
The high penetration of distributed energy resources (DER) in distribution networks and the competitive environment of electricity markets impose the use of new approaches in several domains. The network cost allocation traditionally used in transmission networks should be adapted to distribution networks, considering the specifications of the connected resources. The main goal is to develop a fairer methodology that distributes the distribution network use costs among all players using the network in each period. In this paper, a model considering different types of costs (fixed, losses, and congestion costs) is proposed, comprising the use of a large set of DER, namely distributed generation (DG), demand response (DR) of the direct load control type, energy storage systems (ESS), and electric vehicles with the capability of discharging energy to the network, known as vehicle-to-grid (V2G). The proposed model includes three distinct phases of operation. The first phase consists in an economic dispatch based on an AC optimal power flow (AC-OPF); in the second phase, Kirschen's and Bialek's tracing algorithms are used and compared to evaluate the impact of each resource on the network. Finally, the MW-mile method is used in the third phase of the proposed model. A distribution network of 33 buses with a large penetration of DER is used to illustrate the application of the proposed model.
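A small sketch of the MW-mile idea used in the third phase: each resource is charged for every line in proportion to the flow attributed to it by the tracing phase, weighted by line length and unit cost. The data and the exact cost split below are illustrative assumptions, not the paper's model:

```python
def mw_mile_allocation(line_length_km: dict[str, float],
                       line_cost_per_mw_km: dict[str, float],
                       usage_mw: dict[str, dict[str, float]]) -> dict[str, float]:
    """usage_mw: resource -> {line -> MW attributed to that resource by the tracing phase}."""
    charges = {}
    for resource, flows in usage_mw.items():
        charges[resource] = sum(mw * line_length_km[line] * line_cost_per_mw_km[line]
                                for line, mw in flows.items())
    return charges

usage = {"DG_1": {"L1": 0.8, "L2": 0.2}, "V2G_3": {"L2": 0.5}}
print(mw_mile_allocation({"L1": 2.0, "L2": 1.0}, {"L1": 10.0, "L2": 8.0}, usage))
# -> {'DG_1': 17.6, 'V2G_3': 4.0}
```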
Abstract:
In recent years the use of several new resources in power systems, such as distributed generation, demand response and, more recently, electric vehicles, has increased significantly. Power systems aim at lowering operational costs, which requires adequate energy resource management. In this context, load consumption management plays an important role, and optimization strategies are needed to adjust consumption to the supply profile. These optimization strategies can be integrated in demand response programs. The control of the energy consumption of an intelligent house aims at optimizing the load consumption. This paper presents a genetic algorithm approach to manage the consumption of a residential house, making use of a SCADA system developed by the authors. Consumption management is done by reducing or curtailing loads to keep the power consumption at, or below, a specified energy consumption limit. This limit is determined according to the consumer's strategy and takes into account the renewable-based microgeneration, energy price, supplier solicitations, and the consumer's preferences. The proposed approach is compared with a mixed-integer non-linear approach.
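A minimal sketch of the genetic algorithm idea described above (not the authors' implementation): a chromosome is a 0/1 vector saying which controllable loads to curtail, and the fitness penalizes exceeding the consumption limit and, more lightly, curtailing loads the consumer prefers to keep. All loads, priorities and GA parameters are illustrative:

```python
import random

LOADS_KW = [2.0, 1.5, 1.0, 0.5, 3.0]     # controllable loads (assumed)
PRIORITY = [0.9, 0.3, 0.5, 0.1, 0.7]     # consumer preference to keep each load (assumed)
LIMIT_KW = 4.0                           # consumption limit from the consumer strategy (assumed)

def fitness(chromosome: list[int]) -> float:
    consumption = sum(p for p, cut in zip(LOADS_KW, chromosome) if not cut)
    over_limit = max(0.0, consumption - LIMIT_KW)
    discomfort = sum(pr for pr, cut in zip(PRIORITY, chromosome) if cut)
    return -(100.0 * over_limit + discomfort)      # higher is better

def evolve(pop_size: int = 30, generations: int = 50) -> list[int]:
    pop = [[random.randint(0, 1) for _ in LOADS_KW] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[:pop_size // 2]                 # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(LOADS_KW))  # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.1:                 # mutation
                i = random.randrange(len(child))
                child[i] ^= 1
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

print(evolve())   # e.g. a curtailment pattern keeping consumption at or below 4 kW
```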
Abstract:
The future scenarios for the operation of smart grids are likely to include a large diversity of players of different types and sizes. With control and decision making being decentralized over the network, intelligence should also be decentralized, so that every player is able to act in the market environment. In this new context, aggregator players, enabling medium, small, and even micro-sized players to act in a competitive environment, will be very relevant. Virtual Power Players (VPP) and single players must optimize their energy resource management in order to accomplish their goals. This is relatively easy for larger players, which have the financial means to access adequate decision support tools for their optimal resource scheduling. However, smaller players have difficulties in accessing this kind of tool, so they must be offered alternative methods to support their decisions. This paper presents a methodology, based on Artificial Neural Networks (ANN), intended to support smaller players' resource scheduling. The methodology uses a training set built from the energy resource scheduling solutions obtained with a reference optimization methodology, in this case mixed-integer non-linear programming (MINLP). The trained network is able to achieve good scheduling results while requiring modest computational means.
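A sketch of the ANN surrogate idea: train a small network on (scenario features, schedule) pairs produced offline by the reference MINLP optimizer, then use it for fast scheduling. The feature/target layout and the random data below are placeholders, not the paper's dataset:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Placeholder training set: columns = [forecast load, forecast PV, energy price];
# targets = dispatch of two resources (in practice these come from the MINLP solutions).
X_train = rng.uniform(0, 1, size=(200, 3))
y_train = np.column_stack([0.6 * X_train[:, 0] - 0.3 * X_train[:, 1],
                           0.4 * X_train[:, 0] + 0.2 * X_train[:, 2]])

model = MLPRegressor(hidden_layer_sizes=(20,), max_iter=5000, random_state=0)
model.fit(X_train, y_train)

# Fast schedule estimate for a new scenario (load, PV, price):
print(model.predict([[0.7, 0.2, 0.5]]))
```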
Abstract:
Cyber-Physical Systems and Ambient Intelligence are two of the most important emerging paradigms of our days. The introduction of renewable sources gave origin to a completely different dimension of the distributed generation problem. On the other hand, Electricity Markets introduced a different dimension of complexity: the economic dimension. Our goal is to study how to proceed with the Intelligent Training of Operators in Power Systems Control Centres, considering the new reality of Renewable Sources, Distributed Generation, and Electricity Markets, under the emerging paradigms of Cyber-Physical Systems and Ambient Intelligence. We propose Intelligent Tutoring Systems as the approach to deal with the intelligent training of operators in these new circumstances.