21 results for efficient vulcanisation (EV)

at Instituto Politécnico do Porto, Portugal


Relevance:

20.00%

Publisher:

Abstract:

The attached document is the post-print version (the version corrected by the publisher).

Relevance:

20.00%

Publisher:

Abstract:

This paper proposes a computationally efficient methodology for the optimal location and sizing of static and switched shunt capacitors in large distribution systems. The problem is formulated as the maximization of the savings produced by the reduction in energy losses and the avoided costs due to investment deferral in the expansion of the network. The proposed method selects the nodes to be compensated, as well as the optimal capacitor ratings and their operational characteristics, i.e. fixed or switched. After an appropriate linearization, the optimization problem was formulated as a large-scale mixed-integer linear problem, suitable for solution with a widely available commercial package. Results of the proposed optimization method are compared with those of another recent methodology reported in the literature using two test cases: a 15-bus and a 33-bus distribution network. For both test cases, the proposed methodology delivers better solutions, indicated by higher loss savings achieved with lower amounts of capacitive compensation. The proposed method has also been applied to compensate an actual large distribution network served by AES-Venezuela in the metropolitan area of Caracas. A convergence time of about 4 seconds after 22,298 iterations demonstrates the ability of the proposed methodology to handle large-scale compensation problems efficiently.
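
As an illustration of how such a linearized placement problem can be cast as a mixed-integer linear program, the sketch below uses the open-source PuLP package; the buses, savings coefficients and cost figures are invented placeholders, and the model is far simpler than the formulation proposed in the paper.

```python
# Simplified capacitor placement/sizing MILP sketch (not the paper's exact model).
# Loss-saving coefficients and costs below are illustrative placeholders.
import pulp

nodes = ["n1", "n2", "n3"]                  # candidate buses
sizes_kvar = [150, 300, 450]                # available capacitor ratings
saving_per_kvar = {"n1": 0.9, "n2": 1.4, "n3": 0.7}  # $/kvar-year from linearized losses
cost_per_kvar = 0.35                        # annualized capacitor cost, $/kvar-year
max_total_kvar = 600                        # global compensation limit

prob = pulp.LpProblem("capacitor_placement", pulp.LpMaximize)
x = pulp.LpVariable.dicts("install", (nodes, sizes_kvar), cat="Binary")

# Objective: energy-loss savings minus investment cost.
prob += pulp.lpSum(
    x[n][s] * s * (saving_per_kvar[n] - cost_per_kvar)
    for n in nodes for s in sizes_kvar
)

# At most one capacitor bank per node.
for n in nodes:
    prob += pulp.lpSum(x[n][s] for s in sizes_kvar) <= 1

# Limit the total installed reactive power.
prob += pulp.lpSum(x[n][s] * s for n in nodes for s in sizes_kvar) <= max_total_kvar

prob.solve(pulp.PULP_CBC_CMD(msg=False))
for n in nodes:
    for s in sizes_kvar:
        if x[n][s].value() == 1:
            print(f"Install {s} kvar at {n}")
```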

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Electrical and Computer Engineering

Relevance:

20.00%

Publisher:

Abstract:

Master's degree in Chemical Engineering, Environmental Protection Technologies branch.

Relevance:

20.00%

Publisher:

Abstract:

The introduction of electricity markets and the integration of Distributed Generation (DG) have been driving changes in the power system's structure. Recently, the smart grid concept has been introduced to guarantee a more efficient operation of the power system by exploiting the advantages of this new paradigm. Basically, a smart grid is a structure that integrates different players and assumes constant communication between them to improve power system operation and management. One of the players of major importance in this context is the Virtual Power Player (VPP). In the transportation sector, the Electric Vehicle (EV) is emerging as an alternative to conventional vehicles propelled by fossil fuels. The power system can benefit from a massive introduction of EVs by taking advantage of the EVs' ability to connect to the electric network to charge, and of the expected future ability of EVs to discharge to the network using the Vehicle-to-Grid (V2G) capability. This thesis proposes alternative strategies to control these two EV modes with the objective of enhancing the management of the power system. Moreover, the power system must ensure the trips of the EVs connected to the electric network: each EV user specifies the amount of energy that must be charged to cover the distance to be travelled. The introduction of EVs turns Energy Resource Management (ERM) in a smart grid environment into a complex problem that can take several minutes or hours to solve to optimality. Adequate optimization techniques are required to accommodate this kind of complexity while solving the ERM problem in a reasonable execution time. This thesis presents a tool that solves the ERM considering the intensive use of EVs in the smart grid context. The objective is to obtain the minimum ERM cost considering the operation cost of DG, the cost of the energy acquired from external suppliers, the payments of EV users, and remuneration and penalty costs. The tool is directed at VPPs that manage specific network areas where a high penetration level of EVs is expected. The ERM is solved using two methodologies: the adaptation of a deterministic technique proposed in a previous work, and the adaptation of the Simulated Annealing (SA) technique. To improve the SA performance for this case, three heuristics are additionally proposed, taking advantage of the particularities and specificities of an ERM with these characteristics. A set of case studies is presented in this thesis, considering a 32-bus distribution network and up to 3000 EVs. The first case study solves the scheduling without EVs, to be used as a reference case for comparison with the proposed approaches. The second case study evaluates the complexity of the ERM with the integration of EVs. The third case study evaluates the performance of the scheduling with different control modes for EVs. These control modes, combined with the proposed SA approach and with the developed heuristics, aim at improving the quality of the ERM while drastically reducing its execution time. The proposed control modes are uncoordinated charging, smart charging and V2G capability. The fourth and final case study applies the ERM approach to consecutive days.
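
As a rough illustration of the metaheuristic side of the tool, the sketch below applies plain Simulated Annealing to a toy EV charging schedule; the tariff, energy requirements and neighbourhood move are invented, and the thesis' network constraints, V2G mode and dedicated heuristics are not modelled.

```python
# Minimal simulated annealing sketch for EV charging scheduling (illustrative only;
# the costs, constraints and heuristics of the thesis are not reproduced here).
import math
import random

random.seed(0)
n_evs, n_periods = 10, 24
required_kwh = [8.0] * n_evs                 # energy each EV must receive (assumed)
price = [0.05 + 0.04 * math.sin(t / 4) for t in range(n_periods)]  # assumed tariff
max_rate_kw = 3.0                            # per-EV, per-period charging limit

def cost(schedule):
    # Energy cost plus a penalty for not meeting each EV's requirement.
    c = sum(schedule[e][t] * price[t] for e in range(n_evs) for t in range(n_periods))
    shortfall = sum(max(0.0, required_kwh[e] - sum(schedule[e])) for e in range(n_evs))
    return c + 10.0 * shortfall

# Start from uncoordinated charging: charge at full rate until the requirement is met.
schedule = [[max_rate_kw if t * max_rate_kw < required_kwh[e] else 0.0
             for t in range(n_periods)] for e in range(n_evs)]

temperature, current = 1.0, cost(schedule)
while temperature > 1e-3:
    # Neighbourhood move: swap the charging of one EV between two periods.
    e, t1, t2 = random.randrange(n_evs), random.randrange(n_periods), random.randrange(n_periods)
    schedule[e][t1], schedule[e][t2] = schedule[e][t2], schedule[e][t1]
    candidate = cost(schedule)
    if candidate < current or random.random() < math.exp((current - candidate) / temperature):
        current = candidate                   # accept the neighbour
    else:
        schedule[e][t1], schedule[e][t2] = schedule[e][t2], schedule[e][t1]  # undo
    temperature *= 0.999

print(f"final scheduling cost: {current:.2f}")
```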

Relevance:

20.00%

Publisher:

Abstract:

Copper zinc tin sulfide (CZTS) is a promising Earth-abundant thin-film solar cell material; it has an appropriate band gap of ~1.45 eV and a high absorption coefficient. The most efficient CZTS cells tend to be slightly Zn-rich and Cu-poor. However, growing Zn-rich CZTS films can sometimes result in phase decomposition of CZTS into ZnS and Cu2SnS3, which is generally deleterious to solar cell performance. Cubic ZnS is difficult to detect by XRD because its diffraction pattern is similar to that of CZTS. We hypothesize that synchrotron-based extended X-ray absorption fine structure (EXAFS), which is sensitive to the local chemical environment, may be able to determine the quantity of the ZnS phase in CZTS films by detecting differences in the second-nearest neighbor shell of the Zn atoms. Films of varying stoichiometries, from Zn-rich to Cu-rich (Zn-poor), were examined using the EXAFS technique. Differences in the spectra are detected as a function of the Cu/Zn ratio. Linear combination analysis suggests an increasing ZnS signal as the CZTS films become more Zn-rich. We demonstrate that the sensitive EXAFS technique could be used to quantify the amount of ZnS present and to guide the crystal growth of highly phase-pure films.
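
The linear combination analysis mentioned above can be illustrated with a small non-negative least-squares fit of a spectrum against two reference signals; the synthetic curves below merely stand in for measured chi(k) data and are not taken from the paper.

```python
# Illustrative linear-combination fit of an EXAFS-like spectrum as a mixture of
# two reference spectra (CZTS and ZnS). The synthetic signals are placeholders;
# this is not the authors' analysis code.
import numpy as np
from scipy.optimize import nnls

k = np.linspace(2, 12, 200)                        # photoelectron wavenumber grid
ref_czts = np.sin(4.7 * k) * np.exp(-0.02 * k**2)  # assumed CZTS reference signal
ref_zns = np.sin(4.2 * k) * np.exp(-0.025 * k**2)  # assumed ZnS reference signal

# Synthetic "measurement": 80% CZTS + 20% ZnS plus noise.
rng = np.random.default_rng(1)
measured = 0.8 * ref_czts + 0.2 * ref_zns + rng.normal(0, 0.02, k.size)

# Non-negative least squares gives the fractional weight of each phase.
A = np.column_stack([ref_czts, ref_zns])
weights, residual = nnls(A, measured)
fractions = weights / weights.sum()
print(f"CZTS fraction: {fractions[0]:.2f}, ZnS fraction: {fractions[1]:.2f}")
```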

Relevance:

20.00%

Publisher:

Abstract:

Networked control systems (NCSs) are spatially distributed systems in which the communication between sensors, actuators and controllers occurs through a shared band-limited digital communication network. The use of a shared communication network, in contrast to several dedicated independent connections, introduces new challenges, which are even more acute in large-scale and dense networked control systems. In this paper we investigate a recently introduced technique for gathering information from a dense sensor network to be used in networked control applications. Efficiently obtaining an approximate interpolation of the sensed data is exploited as offering a good trade-off between accuracy in the measurement of the input signals and the delay to actuation, both important aspects for the quality of control. We introduce a variation of the state-of-the-art algorithms which we prove performs better because it takes into account the changes of the input signal over time within the process of obtaining the approximate interpolation.
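
For readers unfamiliar with the idea, the sketch below shows one simple way to build an approximate interpolation of a sensed field from a handful of received messages (inverse-distance weighting); the coordinates and readings are invented, and the actual algorithm and its proposed variation are described in the paper itself.

```python
# Sketch of building an approximate spatial interpolation from a small subset of
# sensor messages (inverse-distance weighting). In the cited work, which messages
# get transmitted is decided by the MAC protocol; here the subset is simply given.
import math

# (x, y, reading) for the few sensor messages that actually reached the controller.
received = [(0.0, 0.0, 21.5), (4.0, 1.0, 23.0), (1.0, 5.0, 20.2)]

def interpolate(x, y, samples, power=2.0):
    """Approximate the field value at (x, y) from the received samples."""
    num, den = 0.0, 0.0
    for sx, sy, value in samples:
        d2 = (x - sx) ** 2 + (y - sy) ** 2
        if d2 == 0.0:
            return value                      # exact sample point
        w = 1.0 / d2 ** (power / 2.0)
        num += w * value
        den += w
    return num / den

print(f"estimated reading at (2, 2): {interpolate(2.0, 2.0, received):.2f}")
```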

Relevance:

20.00%

Publisher:

Abstract:

Cluster scheduling and collision avoidance are crucial issues in large-scale cluster-tree Wireless Sensor Networks (WSNs). This paper presents a methodology that provides a Time Division Cluster Scheduling (TDCS) mechanism based on the cyclic extension of the RCPS/TC (Resource Constrained Project Scheduling with Temporal Constraints) problem for a cluster-tree WSN, assuming bounded communication errors. The objective is to meet all end-to-end deadlines of a predefined set of time-bounded data flows while minimizing the energy consumption of the nodes by setting the TDCS period as long as possible. Since each cluster is active only once during the period, the end-to-end delay of a given flow may span several periods when there are flows in opposite directions. The scheduling tool enables system designers to efficiently configure all required parameters of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs at network design time. The performance evaluation of the scheduling tool shows that problems with dozens of nodes can be solved using optimal solvers.
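
The remark about flows spanning several periods can be illustrated with a small delay estimate: if the next cluster on the path has already had its active slot in the current period, the message must wait for the following period. The slot times and path below are invented, not output of the TDCS tool.

```python
# Sketch of estimating the end-to-end delay of a flow in a time-division cluster
# schedule. Slot start times within the period and the routing path are assumed.

period = 100                                  # TDCS period (time units)
cluster_slot = {"C0": 0, "C1": 20, "C2": 40, "C3": 70}  # assumed slot start times

def end_to_end_delay(path, period, cluster_slot):
    """Delay from the start of the first cluster's slot until the last hop."""
    t = cluster_slot[path[0]]
    for nxt in path[1:]:
        slot = cluster_slot[nxt]
        if slot <= t % period:
            # Next cluster already had its slot this period: wait for the next one.
            t = (t // period + 1) * period + slot
        else:
            t = (t // period) * period + slot
    return t - cluster_slot[path[0]]

# The downstream flow follows increasing slot times; the reverse flow spans periods.
print(end_to_end_delay(["C0", "C1", "C2", "C3"], period, cluster_slot))  # 70
print(end_to_end_delay(["C3", "C2", "C1", "C0"], period, cluster_slot))  # 230
```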

Relevance:

20.00%

Publisher:

Abstract:

Consider a wireless sensor network (WSN) where a broadcast from a sensor node does not reach all sensor nodes in the network; such networks are often called multihop networks. Sensor nodes take individual sensor readings; in many cases, however, it is relevant to compute aggregated quantities of these readings. In fact, the minimum and maximum of all sensor readings at an instant are often interesting because they indicate abnormal behavior; for example, if the maximum temperature is very high, a fire may have broken out. In this context, we propose an algorithm for computing the min or max of sensor readings in a multihop network. This algorithm has the particularly interesting property that its time complexity does not depend on the number of sensor nodes; only the network diameter and the range of the value domain of the sensor readings matter.
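
One way to obtain a time complexity that depends only on the diameter and the value domain (and not on the node count) is to binary-search the value domain, answering each threshold query with a flood bounded by the diameter; the sketch below simulates that idea and is not necessarily the algorithm proposed in the paper.

```python
# Sketch of computing the network-wide MIN in a number of rounds that depends only
# on the network diameter and the size of the value domain, not on the node count.
# Illustrative idea: binary-search the value domain; each probe "is there a reading
# <= threshold?" is flooded for at most `diameter` hops.

readings = [37, 41, 29, 55, 33]   # current sensor readings (assumed)
value_min, value_max = 0, 255     # known value domain of the readings
diameter = 4                      # known bound on the network diameter

def exists_reading_at_most(threshold):
    """Models a flooded query that takes `diameter` rounds to answer."""
    return any(r <= threshold for r in readings)

low, high, rounds = value_min, value_max, 0
while low < high:
    mid = (low + high) // 2
    rounds += diameter            # each probe costs one bounded flood
    if exists_reading_at_most(mid):
        high = mid                # the minimum is <= mid
    else:
        low = mid + 1             # the minimum is > mid

print(f"MIN = {low}, communication rounds = {rounds}")
```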

Relevance:

20.00%

Publisher:

Abstract:

We focus on large-scale and dense deeply embedded systems where, due to the large amount of information generated by all nodes, even simple aggregate computations such as the minimum value (MIN) of the sensor readings become notoriously expensive to obtain. Recent research has exploited a dominance-based medium access control (MAC) protocol, the CAN bus, for computing aggregated quantities in wired systems. For example, MIN can be computed efficiently, and an interpolation function which approximates the sensor data in an area can be obtained efficiently as well. Dominance-based MAC protocols have recently been proposed for wireless channels, and these protocols can be expected to be used for achieving highly scalable aggregate computations in wireless systems, but no experimental demonstration is currently available in the research literature. In this paper, we demonstrate that highly scalable aggregate computations in wireless networks are possible. We do so by (i) building a new wireless hardware platform with appropriate characteristics for making dominance-based MAC protocols efficient, (ii) implementing dominance-based MAC protocols on this platform, (iii) implementing distributed algorithms for aggregate computations (MIN, MAX, interpolation) using the new implementation of the dominance-based MAC protocol and (iv) performing experiments to prove that such highly scalable aggregate computations in wireless networks are possible.
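
The key mechanism behind such MIN computations is CAN-style bitwise arbitration: every node transmits its reading most-significant-bit first, a dominant '0' wins over a recessive '1', and losing nodes withdraw, so the value left on the channel after one arbitration round is the minimum. The sketch below simulates this with invented 6-bit readings.

```python
# Sketch of how a dominance-based MAC (CAN-style bitwise arbitration) yields the
# network-wide MIN in a single arbitration round.

readings = [0b101101, 0b100110, 0b110001]   # example sensor readings (6-bit domain)
n_bits = 6

def arbitrate_min(values, n_bits):
    active = list(values)                   # nodes still contending
    channel_value = 0
    for bit in range(n_bits - 1, -1, -1):
        sent = [(v >> bit) & 1 for v in active]
        channel_bit = min(sent)             # wired-AND: 0 (dominant) wins over 1
        channel_value = (channel_value << 1) | channel_bit
        # Nodes that sent a recessive bit while a dominant bit was on the channel back off.
        active = [v for v, b in zip(active, sent) if b == channel_bit]
    return channel_value

print(bin(arbitrate_min(readings, n_bits)))  # 0b100110, the minimum reading
```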

Relevance:

20.00%

Publisher:

Abstract:

The availability of small, inexpensive sensor elements enables the deployment of large wired or wireless sensor networks for feeding control systems. Unfortunately, the need to transmit a large number of sensor measurements over a network negatively affects the timing parameters of the control loop. This paper presents a solution to this problem by representing sensor measurements with an approximate representation: an interpolation of sensor measurements as a function of the space coordinates. A priority-based medium access control (MAC) protocol is used to select the sensor messages with high information content, so the information from a large number of sensor measurements is conveyed within a few messages. This approach greatly reduces the time for obtaining a snapshot of the environment state and therefore supports the real-time requirements of feedback control loops.
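
A simple way to realise "high information content" priorities, sketched below with invented numbers, is to let each node use the error between its own reading and the value predicted by the currently known interpolation as its MAC priority, so that only the most informative messages are transmitted; the paper's actual priority scheme may differ.

```python
# Sketch of priority assignment for an "information content" MAC: a node's
# priority is the error between its reading and the current interpolation,
# so the messages that improve the model most win the medium first.

# Current model known to every node (e.g., broadcast by the controller).
def current_model(x, y):
    return 20.0 + 0.5 * x - 0.2 * y           # assumed interpolation of the field

# (node id, x, y, reading)
sensors = [("a", 0, 0, 20.1), ("b", 3, 1, 24.9), ("c", 1, 4, 18.0), ("d", 2, 2, 20.7)]

# Each node computes its own priority = model error (larger error, higher priority).
priorities = [(abs(r - current_model(x, y)), node) for node, x, y, r in sensors]

# Only the k highest-priority messages get transmitted in this MAC round.
k = 2
winners = sorted(priorities, reverse=True)[:k]
print("transmitted:", [node for _, node in winners])
```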

Relevance:

20.00%

Publisher:

Abstract:

Simulation analysis is an important approach to developing and evaluating systems in terms of development time and cost. This paper demonstrates the application of the Time Division Cluster Scheduling (TDCS) tool to the configuration of IEEE 802.15.4/ZigBee beacon-enabled cluster-tree WSNs using simulation analysis, as an illustrative example that confirms the practical applicability of the tool. The simulation study analyses how the number of retransmissions impacts the reliability of data transmission, the energy consumption of the nodes and the end-to-end communication delay, based on a simulation model implemented in the Opnet Modeler. The configuration parameters of the network are obtained directly from the TDCS tool. The simulation results show that the number of retransmissions impacts the reliability, the energy consumption and the end-to-end delay in such a way that improving one may degrade the others.

Relevance:

20.00%

Publisher:

Abstract:

Empowered by virtualisation technology, cloud infrastructures enable the construction of flexible and elastic computing environments, providing an opportunity for energy and resource cost optimisation while enhancing system availability and achieving high performance. A crucial requirement for effective consolidation is the ability to efficiently utilise system resources for high-availability computing and energy-efficiency optimisation, reducing operational costs and carbon footprints. Additionally, failures in highly networked computing systems can substantially degrade system performance, preventing the system from achieving its objectives. In this paper, we propose algorithms to dynamically construct and readjust virtual clusters to enable the execution of users' jobs. Allied with an energy-optimising mechanism to detect and mitigate energy inefficiencies, our decision-making algorithms leverage virtualisation tools to provide proactive fault-tolerance and energy-efficiency to virtual clusters. We conducted simulations by injecting random synthetic jobs and jobs from the latest version of the Google cloud tracelogs. The results indicate that our strategy improves the work per Joule ratio by approximately 12.9% and the working efficiency by almost 15.9% compared with other state-of-the-art algorithms.

Relevance:

20.00%

Publisher:

Abstract:

The need to increase agricultural yield has led, among other things, to an increase in the consumption of nitrogen-based fertilizers. As a consequence, there are excessive concentrations of nitrates, the most abundant of the reactive nitrogen (Nr) species, in several areas of the world. Demographic changes, the population growth projected for the coming decades, and the economic shifts already shaping the near future are powerful drivers of a further intensification in the use of fertilizers, with a predicted increase of the nitrogen loads in soils. Nitrate diffuses easily in subsurface environments and shows high mobility in soils. Moreover, high nitrate loads in water have the potential to cause an array of health dysfunctions, such as methemoglobinemia and several cancers. Permeable Reactive Barriers (PRBs) placed strategically relative to the nitrate source constitute an effective technology to tackle nitrate pollution; they thus avoid various adverse impacts resulting from the displacement of reactive nitrogen downstream along water bodies. A four-stage literature review was carried out in 34 databases. Initially, a set of pertinent keywords was identified to perform the initial database searches. Then, synonyms of those initial keywords were used to carry out a second set of database searches. The third stage comprised the identification of additional relevant terms from the research papers identified in the previous two stages; again, database searches were performed with this third set of keywords. The final step consisted of identifying relevant papers from the bibliographies of the relevant papers found in the previous three stages. The set of papers identified as relevant for in-depth analysis was then assessed against a set of characterization variables.

Relevance:

20.00%

Publisher:

Abstract:

Task scheduling is one of the key mechanisms to ensure timeliness in embedded real-time systems. Such systems often need to execute not only application tasks but also some urgent routines (e.g. error-detection actions, consistency checkers, interrupt handlers) with minimum latency. Although fixed-priority schedulers such as Rate-Monotonic (RM) are in line with this need, they usually make a low processor utilization available to the system; moreover, this availability usually decreases with the number of considered tasks. If dynamic-priority schedulers such as Earliest Deadline First (EDF) are applied instead, high system utilization can be guaranteed, but the minimum latency for executing urgent routines may not be ensured. In this paper we describe a scheduling model according to which urgent routines are executed at the highest priority level and all other system tasks are scheduled by EDF. We show that the guaranteed processor utilization for the assumed scheduling model is at least as high as the one provided by RM for two tasks, namely 2(√2 - 1) ≈ 0.83. Seven polynomial-time tests for checking the system timeliness are derived and proved correct. The proposed tests are compared against each other and against an exact, but exponential running time, test.
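
The utilization figure quoted above is the classical Liu and Layland bound for RM with two tasks; the short calculation below just evaluates it and is not one of the seven tests derived in the paper.

```python
# Quick check of the utilization bounds mentioned above: the RM bound for n tasks
# is n*(2**(1/n) - 1), which for two tasks equals 2*(sqrt(2) - 1), while EDF
# guarantees full utilization (bound 1.0).
from math import log, sqrt

def rm_bound(n):
    return n * (2 ** (1.0 / n) - 1)

print(f"RM bound, n=2:      {rm_bound(2):.4f}")        # 0.8284
print(f"2*(sqrt(2) - 1):    {2 * (sqrt(2) - 1):.4f}")  # same value
print(f"RM bound, n -> inf: {log(2):.4f}")             # tends to ln(2)
print("EDF bound:          1.0000")
```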