31 results for Efficient policies
at Instituto Politécnico do Porto, Portugal
Abstract:
Within a country-size asymmetric monetary union, idiosyncratic shocks and national fiscal stabilization policies cause asymmetric cross-border effects. These effects are a source of strategic interactions between non-coordinated fiscal and monetary policies: on the one hand, because they impose larger externalities on the union, large countries face fewer incentives to engage in free-riding fiscal policies; on the other hand, a stronger strategic position vis-à-vis the central bank creates an incentive to use fiscal policy deliberately to influence monetary policy. Additionally, the existence of non-distortionary government financing may also shape policy interactions. As a result, optimal policy regimes may diverge not only across the union members, but also between the members and the monetary union as a whole. In a two-country micro-founded New-Keynesian model of a monetary union, we consider two fiscal policy scenarios: (i) lump-sum taxes are raised to fully finance the government budget and (ii) lump-sum taxes do not ensure balanced budgets in each period, so fiscal and monetary policies are expected to impinge on debt sustainability. For several degrees of country-size asymmetry, we compute optimal discretionary and dynamic non-cooperative policy games and compare their stabilization performance using a union-wide welfare measure. We also assess whether these outcomes could be improved for the monetary union through institutional policy arrangements. We find that, in the presence of government indebtedness, monetary policy optimally shifts from macroeconomic stabilization to debt stabilization. We also find that policy cooperation is always welfare-increasing for the monetary union as a whole; however, indebted large countries may strongly oppose this arrangement in favour of fiscal leadership. In this case, delegating monetary policy to a conservative central bank proves fruitful in improving the union's welfare.
Abstract:
Publisher link: http://www.igi-global.com/chapter/role-lifelong-learning-creation-european/13314
Abstract:
The attached document is the post-print version (the version corrected by the publisher).
Abstract:
This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and by the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem is formulated as a large-scale mixed-integer linear problem, suitable for solution with a widely available commercial package. Results of the proposed optimization method are compared with those of another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to the compensation of an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22298 iterations demonstrates the ability of the proposed methodology to handle large-scale compensation problems efficiently.
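As a rough illustration of how such a linearized placement-and-sizing problem can be posed as a mixed-integer linear program, the sketch below builds a toy instance with PuLP. The candidate nodes, bank sizes, cost figures and linearized savings coefficients are illustrative assumptions only, not the formulation or data used in the paper.

# Minimal sketch (not the authors' formulation): a linearized capacitor
# placement/sizing problem posed as a mixed-integer linear program with PuLP.
import pulp

nodes = ["n3", "n6", "n9"]          # candidate nodes (illustrative)
bank_kvar = 150                     # standard capacitor bank size in kvar (assumed)
max_banks = 4                       # maximum banks per node (assumed)
savings_per_kvar = {"n3": 1.8, "n6": 2.3, "n9": 1.1}  # assumed linearized loss savings, $/kvar-year
cost_per_bank = 120.0               # annualized cost per bank (assumed)
fixed_install_cost = 300.0          # fixed cost per compensated node (assumed)

prob = pulp.LpProblem("capacitor_placement", pulp.LpMaximize)
banks = {n: pulp.LpVariable(f"banks_{n}", 0, max_banks, cat="Integer") for n in nodes}
placed = {n: pulp.LpVariable(f"placed_{n}", cat="Binary") for n in nodes}

# Objective: loss-reduction savings minus investment cost.
prob += pulp.lpSum(savings_per_kvar[n] * bank_kvar * banks[n]
                   - cost_per_bank * banks[n]
                   - fixed_install_cost * placed[n] for n in nodes)

# A node only carries banks if it is selected for compensation.
for n in nodes:
    prob += banks[n] <= max_banks * placed[n]

# Cap on total installed compensation (illustrative system constraint).
prob += pulp.lpSum(bank_kvar * banks[n] for n in nodes) <= 900

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n in nodes:
    print(n, int(banks[n].value()), "banks")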
Abstract:
In many countries the use of renewable energy is increasing due to the introduction of new energy and environmental policies. Thus, the focus on the efficient integration of renewable energy into electric power systems is becoming extremely important. Several European countries have already achieved high penetration of wind-based electricity generation and are gradually evolving towards intensive use of this generation technology. The introduction of wind-based generation in power systems poses new challenges for power system operators, mainly because of the variability and uncertainty in weather conditions and, consequently, in the wind-based generation. In order to deal with this uncertainty and to improve power system efficiency, adequate wind forecasting tools must be used. This paper proposes a data-mining-based methodology for very short-term wind forecasting that is suitable for dealing with large real-world databases. The paper includes a case study based on a real database covering the last three years of wind speed measurements, and presents results for wind speed forecasting at 5-minute intervals.
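The following minimal sketch illustrates the general idea of very short-term forecasting from lagged 5-minute readings, here with a regression tree trained on a synthetic series. It is not the paper's data-mining methodology; the series, lag depth and model choice are assumptions made only for illustration.

# Minimal sketch, not the paper's methodology: forecast wind speed at the
# next 5-minute interval from the previous 30 minutes of readings.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
# Synthetic 5-minute wind-speed series standing in for the real database.
t = np.arange(5000)
speed = 8 + 2 * np.sin(2 * np.pi * t / 288) + rng.normal(0, 0.5, t.size)

lags = 6  # six 5-minute readings = 30 minutes of history as features
X = np.column_stack([speed[i:i + t.size - lags] for i in range(lags)])
y = speed[lags:]

split = int(0.8 * len(y))
model = DecisionTreeRegressor(max_depth=6).fit(X[:split], y[:split])
pred = model.predict(X[split:])
mae = np.mean(np.abs(pred - y[split:]))
print(f"mean absolute error on held-out 5-minute forecasts: {mae:.2f} m/s")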
Abstract:
Master's degree in Electrical and Computer Engineering (Mestrado em Engenharia Electrotécnica e de Computadores).
Abstract:
The premise of this paper is that a model for communicating the national value system must start from a strategy aimed at identifying, cultivating and communicating the values that give consistency to that system. The analysis concentrates on the elements of such strategies and on the implications of applying a value communication program for the identity architecture of the community. The paper also discusses the role of the national value system in the context of the emerging global culture, in which the individual has the power to create his or her own hybrid cultural model.
Abstract:
Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared, band-limited digital communication network. However, the use of a shared communication network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network for use in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data offers a good trade-off between the accuracy of the input-signal measurements and the delay until actuation, both of which are important for the quality of control. We introduce a variation of the state-of-the-art algorithms that we prove performs better because it takes the changes of the input signal over time into account when computing the approximate interpolation.
Abstract:
Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). This paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on a cyclic extension of the RCPS/TC (Resource-Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in the opposite direction. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
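The small sketch below illustrates only the delay observation made in the abstract: a flow relayed through clusters that are each active once per TDCS period can accumulate a delay of several periods when a cluster on the path is scheduled "behind" the previous hop in the cycle. The period, slot lengths and path are invented numbers, not output of the TDCS tool.

# Minimal sketch (illustrative numbers, not the TDCS tool): end-to-end delay of a
# flow relayed through a sequence of clusters, each active once per TDCS period.
period = 100                                   # TDCS period in time units (assumed)
active_start = {"C1": 10, "C2": 70, "C3": 40}  # start of each cluster's active slot (assumed)
slot = 10                                      # duration of a cluster's active portion (assumed)
path = ["C1", "C2", "C3"]                      # clusters the flow traverses, in order

t = 0.0                                        # message released at time 0
for c in path:
    offset = active_start[c] % period
    wait = (offset - t) % period               # wait within the cycle for c's active slot
    t += wait + slot                           # forwarded during c's active portion
print(f"end-to-end delay: {t} time units ({t / period:.2f} TDCS periods)")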
Abstract:
The system for managing personal data protection and clinical studies in Portugal raises controversy and differing interpretations, given the ethical sensitivity of the subject: human integrity. Beyond this fact, we face a problem that involves multiple interests and, therefore, a clash of positions. Throughout this article, we intend to address how health professionals, in their day-to-day work, perceive and handle the processing of clinical data, in an attempt to harmonize points of view and content, and to verify whether hospital institutions genuinely make an effort to facilitate this process and to ensure that users are universally protected and well treated. The results obtained from the consultation document answered by health professionals indicate a concern with confidentiality among 100% of respondents, even though distinct clinical data management systems are in use (six different ones). An upward trend is expected in the demand for this useful information and in the interest in holding it, on the part of health professionals, health institutions, insurers and others. The problem arises from the confrontation between the protection of private life, the specific interests of users, the public interest, and the institutional and governmental policies in force. Assuming that the guarantee of confidentiality is a reality in terms of security, it is necessary to determine whether the means used to achieve it are the most efficient and allow a sustainable management of health data.
Abstract:
Consider a wireless sensor network (WSN) in which a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take individual sensor readings; in many cases, however, it is relevant to compute aggregated quantities of these readings. In fact, the minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior; for example, a very high maximum temperature may indicate that a fire has broken out. In this context, we propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property that its time complexity does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of the sensor readings matter.
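A minimal simulation of the basic idea, on an invented line topology (this is not the paper's algorithm or code): every node repeatedly exchanges its current minimum with its one-hop neighbours, and after a number of rounds equal to the network diameter each node holds the global MIN, independently of how many nodes the network contains. Only the diameter dependence is illustrated here; the dependence on the value domain comes from how each broadcast is encoded on the channel, which the sketch abstracts away.

# Minimal simulation sketch (illustrative topology and readings):
neighbours = {            # a 5-node line network: 0 - 1 - 2 - 3 - 4
    0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3],
}
readings = {0: 23.4, 1: 19.1, 2: 30.7, 3: 17.8, 4: 25.0}  # sensor readings
diameter = 4              # longest shortest path in the line network above

current = dict(readings)
for _ in range(diameter):
    # Synchronous round: every node broadcasts, then keeps the smallest value heard.
    heard = {n: [current[m] for m in neighbours[n]] for n in neighbours}
    current = {n: min([current[n]] + heard[n]) for n in neighbours}

assert all(v == min(readings.values()) for v in current.values())
print("global MIN known at every node:", current[0])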
Abstract:
We focus on large-scale and dense deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function that approximates the sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to enable highly scalable aggregate computations in wireless systems; however, no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with characteristics that make dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) using this implementation of the dominance-based MAC protocol, and (iv) performing experiments showing that such highly scalable aggregate computations are indeed achievable.
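To illustrate why a dominance-based MAC computes MIN so cheaply, the sketch below simulates CAN-style bitwise arbitration on a shared channel: every contending node transmits its reading most-significant bit first, a '0' bit is dominant, and nodes that send a recessive '1' while the channel carries a '0' withdraw. The bit pattern left on the channel is exactly the minimum reading, after a number of bit slots given by the word length rather than by the number of nodes. The readings and word length are illustrative; this is not the hardware platform or protocol implementation described in the paper.

# Minimal sketch of CAN-style dominance arbitration computing MIN.
BITS = 8
readings = [142, 37, 91, 200, 55]        # example 8-bit sensor readings

active = list(range(len(readings)))      # nodes still contending
channel_value = 0
for bit in reversed(range(BITS)):        # MSB first
    sent = [(readings[i] >> bit) & 1 for i in active]
    channel_bit = min(sent)              # dominant 0 wins on the shared channel
    channel_value = (channel_value << 1) | channel_bit
    # Nodes that sent a recessive '1' while the channel carried a dominant '0' back off.
    active = [i for i, b in zip(active, sent) if b == channel_bit or channel_bit == 1]

print("value carried by the channel:", channel_value)   # equals min(readings)
assert channel_value == min(readings)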
Abstract:
The availability of small, inexpensive sensor elements enables the employment of large wired or wireless sensor networks to feed control systems. Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loop. This paper presents a solution to this problem by representing the sensor measurements with an approximate representation: an interpolation of the sensor measurements as a function of the space coordinates. A priority-based medium access control (MAC) protocol is used to select the sensor messages with high information content, so that the information from a large number of sensor measurements is conveyed within a few messages. This approach greatly reduces the time needed to obtain a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.
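The sketch below illustrates the selection idea with invented data and a simple inverse-distance interpolation (the paper does not specify this particular interpolation): in each round, the sensor whose reading deviates most from the current interpolation "wins", mimicking a priority-based MAC in which the priority encodes information content, so a handful of messages already approximate the whole field.

# Minimal sketch (assumed data and interpolation scheme, not the paper's protocol).
sensors = {                               # (x, y) -> reading (illustrative field)
    (0, 0): 20.1, (0, 4): 21.0, (4, 0): 24.8, (4, 4): 25.5,
    (2, 2): 22.9, (1, 3): 21.7, (3, 1): 24.0,
}

def interpolate(points, x, y):
    """Inverse-distance-weighted estimate built from the already-transmitted points."""
    if not points:
        return 0.0
    num = den = 0.0
    for (px, py), v in points.items():
        d2 = (px - x) ** 2 + (py - y) ** 2
        if d2 == 0:
            return v
        num += v / d2
        den += 1.0 / d2
    return num / den

transmitted = {}
for _ in range(3):                        # only 3 messages are actually sent
    # Each node's "priority" is its deviation from the current interpolation.
    errors = {p: abs(v - interpolate(transmitted, *p))
              for p, v in sensors.items() if p not in transmitted}
    winner = max(errors, key=errors.get)  # highest information content wins the MAC round
    transmitted[winner] = sensors[winner]

residual = max(abs(v - interpolate(transmitted, *p)) for p, v in sensors.items())
print(f"{len(transmitted)} messages sent, worst remaining error: {residual:.2f}")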
Abstract:
Simulation analysis is an important approach to developing and evaluating systems in terms of development time and cost. This paper demonstrates the application of the Time Division Cluster Scheduling (TDCS) tool to the configuration of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs through simulation analysis, as an illustrative example that confirms the practical applicability of the tool. The simulation study analyses how the number of retransmissions impacts the reliability of data transmission, the energy consumption of the nodes and the end-to-end communication delay, based on a simulation model implemented in Opnet Modeler. The configuration parameters of the network are obtained directly from the TDCS tool. The simulation results show that the number of retransmissions affects the reliability, the energy consumption and the end-to-end delay in such a way that improving one of them may degrade the others.
Abstract:
The purpose of this paper is to study the effects of environmental and trade policies in an international mixed duopoly serving two markets. We suppose that the firm in the home country is a welfare-maximizing public firm, while the firm in the foreign country is a profit-maximizing private firm. We find that the environmental tax can be a strategic instrument for the home government to shift production from the foreign private firm to the home public firm. An additional effect of the home environmental tax is the reduction of the foreign private firm's output for local consumption, thereby expanding the foreign market for the home public firm.