79 results for Power reduction
Abstract:
Functional Data Analysis (FDA) deals with samples in which a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several dimensionality-reduction methods for this particular type of data: functional principal component analysis (PCA), with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of which takes into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
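As a rough illustration of the methods compared in this abstract, the sketch below runs PCA on discretized densities with and without a centred log-ratio (clr) transform, and classical MDS on plain L2 inter-density distances. The synthetic densities, the evaluation grid and the choice of distance are assumptions made for illustration, not the paper's data or code.

```python
# Sketch (not the paper's code): compare PCA on raw discretized densities,
# PCA after a centred log-ratio (clr) transform, and MDS on pairwise
# L2 distances. The densities below are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.manifold import MDS
from scipy.spatial.distance import pdist, squareform

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 200)                      # common evaluation grid
# synthetic "income-like" curves, renormalized to integrate to 1
raw = np.array([np.exp(-(np.log(grid + 1e-3) - mu) ** 2 / (2 * 0.3 ** 2))
                for mu in rng.normal(-1.0, 0.2, size=50)])
densities = raw / np.trapz(raw, grid)[:, None]

def clr(f, eps=1e-12):
    """Centred log-ratio transform, treating each density as compositional."""
    logf = np.log(f + eps)
    return logf - logf.mean(axis=1, keepdims=True)

scores_raw = PCA(n_components=2).fit_transform(densities)
scores_clr = PCA(n_components=2).fit_transform(clr(densities))

# MDS on plain L2 inter-density distances (other metrics could be plugged in)
d_l2 = squareform(pdist(densities, metric="euclidean"))
scores_mds = MDS(n_components=2, dissimilarity="precomputed",
                 random_state=0).fit_transform(d_l2)
```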
Abstract:
Our purpose in this article is to define a network structure based on two egos instead of the egocentered (one ego) or the complete network (n egos). We describe the characteristics and properties of this kind of network, which we call a “nosduocentered network”, comparing it with complete and egocentered networks. The key point of this kind of network is that relations exist between the two main egos and all alters, but relations among the alters themselves are not observed. After that, we use new social network measures adapted to the nosduocentered network, some of which are based on measures for complete networks such as degree, betweenness, closeness centrality or density, while others are tailor-made for nosduocentered networks. We specify three regression models to predict the research performance of PhD students based on these social network measures for different networks such as advice, collaboration, emotional support and trust. The data used are from Slovenian PhD students and their supervisors.
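A minimal sketch of the kind of structure described here, using a toy two-ego graph in which alter-alter ties are simply absent because they are not observed. The standard complete-network measures named in the abstract are computed with networkx; node names and the tie pattern are hypothetical, and the paper's tailor-made nosduocentered measures are not reproduced.

```python
# Toy "nosduocentered" network: two egos tied to each other and to alters;
# ties among alters are not observed, so they do not appear in the graph.
import networkx as nx

G = nx.Graph()
alters = [f"alter{i}" for i in range(1, 6)]        # hypothetical alters
G.add_edge("ego1", "ego2")                         # relation between the two egos
for a in alters:
    G.add_edge("ego1", a)                          # ego1 is tied to every alter
for a in alters[:3]:
    G.add_edge("ego2", a)                          # ego2 is tied to some alters

measures = {
    "degree": nx.degree_centrality(G),
    "betweenness": nx.betweenness_centrality(G),
    "closeness": nx.closeness_centrality(G),
    "density": nx.density(G),
}
print(measures["degree"]["ego1"], measures["density"])
```

In a real analysis these measures would be computed per relation type (advice, collaboration, emotional support, trust) and fed into the regression models as predictors.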
Abstract:
The objective of traffic engineering is to optimize network resource utilization. Although several works have been published on minimizing network resource utilization in MPLS networks, few of them have focused on LSR label space reduction. This letter studies Asymmetric Merged Tunneling (AMT) as a new method for reducing the label space in MPLS networks. The proposed method may be regarded as a combination of label merging (proposed in the MPLS architecture) and asymmetric tunneling (proposed recently in our previous works). Finally, simulation results comparing AMT with both of its ancestors are presented. They show a great improvement in the label space reduction factor.
Abstract:
Most network operators have considered reducing LSR label spaces (the number of labels used) as a way of simplifying the management of underlying virtual private networks (VPNs) and therefore reducing operational expenditure (OPEX). The IETF outlined the label merging feature in MPLS, which allows the configuration of multipoint-to-point (MP2P) connections, as a means of reducing label space in LSRs. We found two main drawbacks in this label space reduction: a) it must be applied separately to each set of LSPs with the same egress LSR, which decreases the options for better reductions, and b) LSRs close to the edge of the network experience a greater label space reduction than those close to the core. The latter implies that MP2P connections reduce the number of labels asymmetrically.
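A toy illustration of the label merging idea mentioned in these two abstracts: counting the incoming labels each LSR needs when every LSP keeps its own label versus when LSPs towards the same egress are merged into one MP2P tree. The topology and paths are invented for illustration and this is not the papers' AMT or reduction algorithm.

```python
# Count incoming labels per LSR: one label per LSP vs. label merging, where
# all LSPs towards the same egress LSR share a single label at each hop.
from collections import defaultdict

# each LSP is (name, path of LSRs from ingress to egress); invented topology
lsps = [
    ("lsp1", ["A", "C", "E"]),
    ("lsp2", ["B", "C", "E"]),
    ("lsp3", ["A", "C", "D", "E"]),
    ("lsp4", ["B", "C", "D", "E"]),
]

def labels_per_lsr(lsps, merge=False):
    """Count incoming labels at every LSR except the ingress of each path."""
    table = defaultdict(set)
    for name, path in lsps:
        egress = path[-1]
        for hop in path[1:]:                 # an incoming label is needed per hop
            key = egress if merge else name  # merged LSPs share the egress label
            table[hop].add(key)
    return {lsr: len(keys) for lsr, keys in sorted(table.items())}

print("per-LSP labels:", labels_per_lsr(lsps, merge=False))
print("merged (MP2P): ", labels_per_lsr(lsps, merge=True))
```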
Abstract:
Fault location has been studied in depth for transmission lines because of its importance in power systems. Nowadays the problem of fault location in distribution systems is receiving special attention, mainly because of power quality regulations. In this context, this paper presents an application, developed in Matlab, that automatically calculates the location of a fault in a distribution power system, starting from the voltages and currents measured at the line terminal and the model of the distribution power system. The application is based on an N-ary tree structure, which is well suited to the highly branched and non-homogeneous nature of distribution systems, and has been developed for single-phase, two-phase, two-phase-to-ground and three-phase faults. The implemented application is tested using fault data from a real electrical distribution power system.
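A minimal sketch of what an N-ary tree representation of a radial feeder might look like, with a depth-first walk enumerating candidate paths from the measuring terminal to every downstream section. The class, attribute names and the tiny feeder are assumptions for illustration, not the paper's data model.

```python
# Assumed data model: an N-ary tree whose nodes are line sections of a radial
# distribution feeder; children are the laterals branching from a section.
class Section:
    def __init__(self, name, length_km=0.0, impedance=complex(0, 0)):
        self.name = name
        self.length_km = length_km
        self.impedance = impedance        # per-section series impedance (ohm)
        self.children = []                # laterals branching from this section

    def add(self, child):
        self.children.append(child)
        return child

def candidate_paths(node, prefix=()):
    """Yield every root-to-section path, i.e. every candidate fault path."""
    path = prefix + (node.name,)
    yield path
    for child in node.children:
        yield from candidate_paths(child, path)

# tiny feeder: substation -> main section -> two laterals, one with a sub-lateral
root = Section("substation")
main = root.add(Section("main", 2.0, complex(0.4, 0.8)))
main.add(Section("lateral1", 1.0, complex(0.2, 0.4)))
lat2 = main.add(Section("lateral2", 1.5, complex(0.3, 0.6)))
lat2.add(Section("lateral2a", 0.5, complex(0.1, 0.2)))

for p in candidate_paths(root):
    print(" -> ".join(p))
```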
Abstract:
Ceramic shell is a material mainly used for making foundry molds. This research demonstrates that ceramic shell can also be used for making sculptures with exceptional definition in their finish. The research has identified a number of advantages of the material in meeting the challenges an artist faces while making a sculpture. The research was developed in six stages. In the first stage, data were collected on ceramic shell as the process material; this was the starting point of the research. In the second stage, we set the appropriate composition of the slurry, both in percentage and type of binder, and the firing curve. To this end, we evaluated the application characteristics, thickness, drying, mechanical strength, the reduction coefficient and porosity. In the third stage it was observed that ceramic shell is suitable for all types of materials acting as support. It was also found that the slurry can be used with various sculptural processes: modeling, molding using a silicone or plaster mold, shuttering, with an internal metal frame, and so on. In addition, we established methods to repair and modify the ceramic shell with hand and power tools. In the fourth stage we found ways to modify the surface of the ceramic shell with other minerals that affect its structure: introducing copper, bronze and iron filings into the ceramic slurry, different staining procedures applied hot or cold, enamel slip, and so on. In the fifth stage, sculptures were made using the methods established in the previous stages in order to verify the hypothesis. The sixth stage, which is annexed, contains a new method for processing ceramic shell as a casting mold that emerged from the methods proven in the investigation.
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures based on large numbers of high-core-count processors. Commercially available processors now have 8 and 16 cores, and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC), provide much higher core counts. These trends present new sets of challenges to HPC applications, including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints show that, for specific operations, the potential for energy savings in PGAS is large, and that power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
Abstract:
This Technical Report presents a tentative protocol for assessing the viability of power-supply systems. The viability of power-supply systems can be assessed by looking at the production factors (e.g. paid labor, power capacity, fossil fuels) needed for the system to operate and maintain itself, in relation to the internal constraints set by the energetic metabolism of societies. By using this protocol it becomes possible to link assessments of technical coefficients performed at the level of the power-supply systems with assessments of benchmark values performed at the societal level across the relevant sectors. In particular, the example provided here for the case of France in the year 2009 shows that nuclear energy is not viable in terms of labor requirements (both direct and indirect inputs) or in terms of requirements of power capacity, especially when reprocessing operations are included.
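A schematic sketch of the kind of check the protocol performs for one production factor: comparing the labor hours a power-supply system requires (direct plus indirect) with the hours society can allocate to its energy sector. All inputs are placeholders for illustration, not the report's France 2009 figures, and the function name is hypothetical.

```python
# Schematic viability check for the labor production factor. Every number
# below is a placeholder, NOT a figure taken from the report.
def labour_viability(gwh_per_year, hours_per_gwh_direct, hours_per_gwh_indirect,
                     sector_labour_hours_available):
    required = gwh_per_year * (hours_per_gwh_direct + hours_per_gwh_indirect)
    return {
        "required_hours": required,
        "available_hours": sector_labour_hours_available,
        "viable": required <= sector_labour_hours_available,
    }

print(labour_viability(gwh_per_year=400_000,              # placeholder output
                       hours_per_gwh_direct=150,          # placeholder coefficient
                       hours_per_gwh_indirect=250,        # placeholder coefficient
                       sector_labour_hours_available=1.5e8))
```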
Abstract:
Many factors influence the day-ahead market bidding strategies of a generation company (GenCo) in the current energy market framework. Environmental policy issues have become more and more important for fossil-fuelled power plants and must be considered in their management, giving rise to emission limitations. This work investigates the influence of both the allowances and emission reduction plan and the incorporation of medium-term derivative commitments on the optimal generation bidding strategy in the day-ahead electricity market. Two different technologies have been considered: coal thermal units, a high-emission technology, and combined cycle gas turbine units, a low-emission technology. The Iberian Electricity Market and the Spanish National Emissions and Allocation Plans are the framework used to deal with the environmental issues in the day-ahead market bidding strategies. To address emission limitations, some of the standard risk management methodologies developed for financial markets, such as Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), have been extended. This study offers electricity generation utilities a mathematical model to determine, for each of their generation units, the individual optimal generation bid to the wholesale electricity market that maximizes the long-run profits of the utility while abiding by the Iberian Electricity Market rules, the environmental restrictions set by the EU Emission Trading Scheme, and the restrictions set by the Spanish National Emissions Reduction Plan. The economic implications for a GenCo of including the environmental restrictions of these National Plans are analyzed and the most remarkable results are presented.
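A minimal sketch of how VaR and CVaR are usually computed from a set of simulated profit scenarios, since the abstract extends these measures. The scenario generation below is a random placeholder, not the paper's market or emissions model.

```python
# Value-at-Risk and Conditional Value-at-Risk from simulated profit scenarios.
# The scenarios are random placeholders, not the paper's model output.
import numpy as np

rng = np.random.default_rng(1)
profits = rng.normal(loc=1.0e6, scale=3.0e5, size=10_000)   # EUR, placeholder

def var_cvar(profits, alpha=0.95):
    """VaR/CVaR computed on losses (negative profits) at confidence alpha."""
    losses = -np.asarray(profits)
    var = np.quantile(losses, alpha)           # loss exceeded with prob 1 - alpha
    cvar = losses[losses >= var].mean()        # expected loss beyond the VaR
    return var, cvar

var95, cvar95 = var_cvar(profits)
print(f"VaR 95%: {var95:,.0f}  CVaR 95%: {cvar95:,.0f}")
```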
Abstract:
This paper surveys the techniques and methods described in the literature to analyse and characterise voltage sags, together with the objectives of these works. The study has been performed from a data mining point of view.
Abstract:
Three multivariate statistical tools (principal component analysis, factor analysis and discriminant analysis) have been tested to characterize and model the sags registered in distribution substations. These models use several features representing the magnitude, duration and unbalance grade of the sags, obtained from voltage and current waveforms. The techniques are tested and compared using 69 sag records. The advantages and drawbacks of each technique are listed.
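A sketch of how the three tools could be applied to a table of sag features (magnitude, duration, unbalance grade). The feature values and class labels below are synthetic placeholders, not the 69 recorded sags.

```python
# Apply PCA, factor analysis and linear discriminant analysis to a toy table
# of sag features. Values and class labels are synthetic placeholders.
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 69
X = np.column_stack([
    rng.uniform(0.1, 0.9, n),      # residual voltage magnitude (p.u.)
    rng.uniform(0.02, 1.0, n),     # duration (s)
    rng.uniform(0.0, 0.5, n),      # unbalance grade (arbitrary index)
])
y = rng.integers(0, 3, n)          # placeholder sag class (e.g. fault type)

Xs = StandardScaler().fit_transform(X)
pca_scores = PCA(n_components=2).fit_transform(Xs)
fa_scores = FactorAnalysis(n_components=2).fit_transform(Xs)
lda_scores = LinearDiscriminantAnalysis(n_components=2).fit_transform(Xs, y)
print(pca_scores.shape, fa_scores.shape, lda_scores.shape)
```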
Abstract:
The objective of traffic engineering is to optimize network resource utilization. Although several works have been published on minimizing network resource utilization, few have focused on LSR (label switched router) label space. This paper proposes an algorithm that takes advantage of the MPLS label stack features in order to reduce the number of labels used in LSPs. Some tunnelling methods and their MPLS implementation drawbacks are also discussed. The described algorithm sets up NHLFE (next hop label forwarding entry) tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the described algorithm achieves a great reduction factor in the label space. The presented work applies to both types of connections: P2MP (point-to-multipoint) and P2P (point-to-point).
Abstract:
The aim of traffic engineering is to optimise network resource utilization. Although several works on minimizing network resource utilization have been published, few have focused on LSR label space. This paper proposes an algorithm that uses MPLS label stack features to reduce the number of labels used in LSP forwarding. Some tunnelling methods and their MPLS implementation drawbacks are also discussed. The algorithm described sets up the NHLFE tables in each LSR, creating asymmetric tunnels when possible. Experimental results show that the algorithm achieves a large reduction factor in the label space. The work presented here applies to both types of connections: P2MP and P2P.
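A generic illustration of the MPLS label-stack operations these two abstracts build on (push, swap, pop), showing why a tunnel lets interior LSRs forward many LSPs with a single top label. This is textbook label-stack behaviour, not the paper's NHLFE-building or asymmetric-tunnel algorithm; label values are placeholders.

```python
# Label-stack operations on a packet traversing a tunnel. Interior LSRs only
# touch the top label, so one entry serves every LSP carried by the tunnel.
def push(stack, label):   return [label] + stack
def swap(stack, label):   return [label] + stack[1:]
def pop(stack):           return stack[1:]

packet = [101]                         # inner (per-LSP) label, placeholder value

packet = push(packet, 900)             # tunnel ingress pushes the tunnel label
print("inside tunnel:", packet)        # [900, 101]

packet = swap(packet, 901)             # interior LSR swaps only the top label
print("after core hop:", packet)       # [901, 101]

packet = pop(packet)                   # tunnel egress pops the tunnel label,
print("at tunnel egress:", packet)     # exposing the inner label again: [101]
```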
Abstract:
This short paper addresses the problem of designing a QFT (quantitative feedback theory) based controller for vibration reduction in a 6-story building structure equipped with shear-mode magnetorheological (MR) dampers. A new methodology is proposed for characterizing the nonlinear hysteretic behavior of the MR damper through an uncertainty template in the Nichols chart. The QFT control design procedure is briefly presented.
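The abstract centres on the damper's nonlinear hysteretic behavior; a model commonly used to describe MR damper hysteresis is the Bouc-Wen model, simulated below under a sinusoidal displacement. The parameters are assumed for illustration only; the paper characterizes the damper through QFT uncertainty templates, not necessarily this exact model.

```python
# Simulate a simple Bouc-Wen hysteresis model (often used for MR dampers)
# under a sinusoidal displacement. All parameters are assumed placeholders.
import numpy as np

dt = 1e-3
t = np.arange(0, 2.0, dt)
x = 0.01 * np.sin(2 * np.pi * 2.0 * t)            # displacement (m), assumed
xdot = np.gradient(x, dt)                         # velocity (m/s)

alpha, c0, k0 = 800.0, 300.0, 500.0               # assumed model parameters
beta, gamma, A, n = 2000.0, 2000.0, 120.0, 2

z = np.zeros_like(t)                              # evolutionary variable
for i in range(1, len(t)):
    zd = (A * xdot[i-1]
          - beta * abs(xdot[i-1]) * abs(z[i-1]) ** (n - 1) * z[i-1]
          - gamma * xdot[i-1] * abs(z[i-1]) ** n)
    z[i] = z[i-1] + dt * zd                       # explicit Euler step

force = alpha * z + c0 * xdot + k0 * x            # damper force (N)
print("peak force (N):", float(np.max(np.abs(force))))
```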
Abstract:
Monitoring a distribution network implies working with a huge amount of data coming from the different elements that interact in the network. This paper presents a visualization tool that simplifies the task of searching the database for useful information applicable to fault management or preventive maintenance of the network.