991 results for transmission usage allocation


Relevance:

30.00%

Publisher:

Abstract:

Optimized allocation of Phasor Measurement Units (PMUs) enables control, monitoring, and accurate operation of electric power distribution systems, improving reliability and service quality. Fault-location techniques based on voltage measurements have produced good results in transmission systems. Building on these techniques, optimized PMU allocation makes it possible to develop an accurate fault locator for electric power distribution systems. The PMU allocation problem is combinatorial in nature, involving both the number of devices that can be allocated and the possible places for their allocation. A tabu search algorithm is proposed to carry out the PMU allocation. Applied to a real-life 141-bus urban distribution feeder, the technique significantly improved fault-location results. © 2004 IEEE.
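For concreteness, a minimal sketch of a tabu search over PMU placements in the spirit of the approach described above. The `cost` objective (e.g., an estimated fault-location error over simulated faults), the swap neighborhood, and the tabu tenure are illustrative assumptions; the abstract does not specify them.

```python
import random

def tabu_search_pmu(n_buses, n_pmus, cost, iters=500, tabu_len=20):
    """Tabu search over PMU placements (sets of bus indices).

    cost(placement) -> float is a user-supplied objective, standing in
    for the paper's fault-location quality measure. Neighborhood:
    swap one allocated bus for one unallocated bus.
    """
    current = frozenset(random.sample(range(n_buses), n_pmus))
    best, best_cost = current, cost(current)
    tabu = []  # recently swapped buses may not move again for a while

    for _ in range(iters):
        candidates = []
        for out_bus in current:
            for in_bus in set(range(n_buses)) - current:
                if out_bus in tabu or in_bus in tabu:
                    continue  # aspiration criteria omitted for brevity
                move = (current - {out_bus}) | {in_bus}
                candidates.append((cost(move), move, out_bus, in_bus))
        if not candidates:
            break
        move_cost, move, out_bus, in_bus = min(candidates,
                                               key=lambda t: t[0])
        current = move
        tabu.extend([out_bus, in_bus])
        del tabu[:-tabu_len]  # keep only the most recent entries
        if move_cost < best_cost:
            best, best_cost = move, move_cost
    return best, best_cost
```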

Relevance:

30.00%

Publisher:

Abstract:

Distribution systems with distributed generation require new analysis methods, since the networks are no longer passive. Two of the main problems in this new scenario are network reconfiguration and loss allocation. This work presents a graphic simulator for distribution systems, developed with reconfiguration functions and a special focus on loss allocation, both considering the presence of distributed generation. The simulator uses a fast and robust power flow algorithm based on the current-summation backward-forward technique. The reconfiguration problem is solved through a heuristic methodology, and the loss allocation function, based on the Zbus method, is reported for each obtained configuration. Results are presented and discussed, highlighting the ease of analysis the graphic simulator provides, making it an excellent tool for planning and operation engineers and very useful for training. © 2004 IEEE.
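As background, a minimal sketch of the current-summation backward/forward sweep the simulator is said to build on; the radial-network representation, per-unit values, and convergence settings here are illustrative assumptions, not the simulator's own code.

```python
import numpy as np

def backward_forward_sweep(lines, s_load, v_source=1.0 + 0j,
                           tol=1e-8, max_iter=50):
    """Current-summation backward/forward sweep for a radial feeder.

    lines  : branches (from_bus, to_bus, z) in p.u., listed from the
             substation (bus 0) outward; the network must be radial.
    s_load : complex power demand per bus in p.u. (index 0 = substation).
    Returns the complex bus-voltage vector.
    """
    n = len(s_load)
    v = np.full(n, v_source, dtype=complex)
    for _ in range(max_iter):
        i_acc = np.conj(s_load / v)        # backward sweep: load currents
        for f, t, z in reversed(lines):    # leaves first
            i_acc[f] += i_acc[t]           # accumulate downstream current
        v_new = v.copy()
        for f, t, z in lines:              # forward sweep: voltage drops
            v_new[t] = v_new[f] - z * i_acc[t]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v

# Example: 4-bus feeder with the substation at bus 0
lines = [(0, 1, 0.01 + 0.02j), (1, 2, 0.02 + 0.04j), (1, 3, 0.015 + 0.03j)]
s_load = np.array([0, 0.10 + 0.05j, 0.08 + 0.03j, 0.06 + 0.02j])
voltages = backward_forward_sweep(lines, s_load)
```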

Relevance:

30.00%

Publisher:

Abstract:

A metaheuristic technique is presented for simultaneously solving the short-term transmission network expansion and reactive power planning problems in regulated power systems using the AC model. The problem is solved with a real genetic algorithm (RGA). For each topology proposed by the RGA, an indicator identifies the weak buses where new reactive power sources should be allocated. The fitness function combines the cost of each configuration with the constraint deviations of an AC optimal power flow (OPF), in which the minimum reactive generation of the new reactive sources and the active power losses are the objectives. By allocating reactive power sources at load buses, circuit capacity increases and the installation cost can be decreased. The method is tested on a well-known test system and gives good results compared with other approaches. © 2011 IEEE.
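One plausible way to write the fitness function described above, shown only for concreteness; the penalty form and weight $\alpha$ are our assumptions, since the abstract does not give the exact expression:

$$\mathrm{fitness}(c) \;=\; \sum_{k \in c} \mathrm{cost}_k \;+\; \alpha \sum_{j} \max\bigl(0,\, g_j(c)\bigr),$$

where $c$ is a candidate configuration, $\mathrm{cost}_k$ the investment cost of its components, and $g_j(c)$ the constraint deviations reported by the AC OPF.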

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a novel mathematical model for the transmission network expansion planning problem. The main idea is to consider phase-shifter (PS) transformers as a new expansion element of the transmission system, alongside traditional components such as transmission lines and conventional transformers. PS transformers are added to redistribute active power flows in the system and, consequently, to reduce the total investment cost of new transmission lines. The proposed mathematical model has the structure of a mixed-integer nonlinear programming (MINLP) problem and is based on the standard DC model. A specialized genetic algorithm is also applied to optimize the allocation of candidate components in the network. Results from computational simulations on the IEEE 24-bus system show outstanding performance of the proposed methodology and model, indicating the technical viability of using these nonconventional devices during the planning process. Copyright © 2012 Celso T. Miasaki et al.
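In the standard DC model the abstract builds on, a phase shifter enters the branch flow equation as an added controllable angle; in common textbook notation (ours, not necessarily the paper's):

$$f_{ij} \;=\; \frac{\theta_i - \theta_j + \phi_{ij}}{x_{ij}},$$

where $\theta_i$, $\theta_j$ are the bus voltage angles, $x_{ij}$ the branch reactance, and $\phi_{ij}$ the phase-shifter angle (zero on branches without a PS). Adjusting $\phi_{ij}$ redistributes active power flows, which is what allows investment in new lines to be deferred.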

Relevance:

30.00%

Publisher:

Abstract:

Background: The in vitro production (IVP) of embryos by in vitro fertilization or cloning procedures is known to cause epigenetic changes in the conceptus that are, in turn, associated with abnormalities in pre- and postnatal development. Handmade cloning (HMC) procedures and the culture of zona-free embryos in individual microwells provide excellent tools for studies in developmental biology, since embryo development and cell allocation patterns can be evaluated under a wide range of embryo reconstruction arrangements and in vitro embryo culture conditions. As disturbances in embryonic cell allocation after in vitro embryo manipulations and unusual in vivo conditions during the first third of pregnancy appear to be associated with large offspring, embryo aggregation procedures may allow compensation for epigenetic defects between aggregated embryos, or may even promote more favorable cell allocation in embryonic lineages, favoring subsequent development. Thus, the aim of this study was to evaluate in vitro embryo developmental potential and the pattern of cell allocation in blastocysts developed after the aggregation of handmade cloned embryos produced using syngeneic wild-type and/or transgenic somatic cells. Materials, Methods & Results: In vitro-matured bovine cumulus-oocyte complexes (COC) were manually bisected after cumulus and zona pellucida removal; then, two enucleated hemi-oocytes were paired and fused with either a wild-type (WT) or a GFP-expressing (GFP) fetal skin cell at the 11th and 19th passages, respectively. Following chemical activation, reconstructed cloned embryos and zona-free parthenote embryos were cultured in vitro in microwells for 7 days, either individually (1 x 100%) or after the aggregation of two structures (2 x 100%) per microwell, as follows: (G1) one WT cloned embryo; (G2) two aggregated WT embryos; (G3) one GFP cloned embryo; (G4) two aggregated GFP embryos; (G5) aggregation of a WT embryo and a GFP embryo; (G6) one parthenote embryo; or (G7) two aggregated parthenote embryos. Fusion (clones), cleavage (Day 2), and blastocyst (Day 7) rates, and embryonic cell allocation, were compared by the χ² or Fisher exact tests. Total cell number (TCN) in blastocysts was analyzed by Student's t-test (P < 0.05). Fusion and cleavage rates and cell allocation were similar between groups. On a per-microwell (WOW) basis, development to the blastocyst stage was similar between groups, except for the lower rates of development seen in G3. However, when based on the number of embryos per group (one or two), blastocyst development was higher in G1 than in all other groups, which were similar to one another. Cloned GFP embryos had lower in vitro development to the blastocyst stage than WT embryos, which had higher TCN than parthenote or aggregated chimeric WT/GFP embryos. Aggregated GFP embryos had fewer cells than the other embryo groups. Discussion: The in vitro development of GFP cloned embryos was lower than that of WT embryos, with no effects on cell allocation in the resulting blastocysts. Differences in blastocyst rate between groups were likely due to lower viability of the GFP-expressing cells, as the GFP donor cells had undergone a high number of population doublings when used for cloning. On a per-embryo basis, embryo aggregation on Day 1 resulted in blastocyst development on Day 7 similar to that of non-aggregated embryos, with no differences in cell proportion between groups. The use of GFP-expressing cells proved a promising strategy for the study of cell allocation during embryo development, which may assist in elucidating the mechanisms of abnormalities after in vitro embryo manipulations and lead to improved protocols for the in vitro production (IVP) of bovine embryos.

Relevance:

30.00%

Publisher:

Abstract:

We have realized a data acquisition chain for the use and characterization of APSEL4D, a 32 x 128 monolithic active pixel sensor developed as a prototype for frontier experiments in high-energy particle physics. In particular, a transition board was built to convert between the chip and FPGA voltage levels and to enhance signal quality. A Xilinx Spartan-3 FPGA was used for real-time data processing, chip control, and communication with a personal computer through a USB 2.0 port. For this purpose, firmware was written in VHDL. Finally, a graphical user interface for online system monitoring, hit display, and chip control, based on windows and widgets, was developed in C++ using the dedicated Qt and Qwt libraries. APSEL4D and the full acquisition chain were characterized for the first time with the electron beam of a transmission electron microscope and with 55Fe and 90Sr radioactive sources. In addition, a beam test was performed at the T9 station of the CERN PS, where hadrons with a momentum of 12 GeV/c are available. The very high time resolution of APSEL4D (up to 2.5 Mfps, but used at 6 kfps) was fundamental in realizing a single-electron Young experiment using nanometric double slits fabricated by a focused ion beam (FIB) technique. On high-statistics samples, it was possible to observe the interference and diffraction of single, isolated electrons traveling inside a transmission electron microscope. For the first time, information on the distribution of the arrival times of the single electrons has been extracted.

Relevance:

30.00%

Publisher:

Abstract:

Distributed real-time embedded systems are becoming increasingly important to society. More demands will be made on them, and greater reliance will be placed on the delivery of their services. A relevant subset of them is high-integrity or hard real-time systems, where failure can cause loss of life, environmental harm, or significant financial loss. Additionally, the evolution of communication networks and paradigms, as well as the need for greater processing power and fault tolerance, has motivated the interconnection of electronic devices; many of these communication links can transfer data at high speed. The concept of distributed systems emerged to describe systems whose different parts execute on several nodes that interact with each other via a communication network. Java's popularity, facilities, and platform independence have made it an interesting language for the real-time and embedded community. This was the motivation for the development of the RTSJ (Real-Time Specification for Java), a language extension intended to allow the development of real-time systems. The use of Java in the development of high-integrity systems requires strict development and testing techniques. However, the RTSJ includes a number of language features that are forbidden in such systems. In the context of the HIJA project, the HRTJ (Hard Real-Time Java) profile was developed to define a robust subset of the language that is amenable to static analysis for high-integrity system certification. Currently, a specification is being developed under the Java Community Process (JSR-302); its purpose is to define the capabilities needed to create safety-critical applications with Java technology, called Safety Critical Java (SCJ). However, neither the RTSJ nor its profiles provide facilities to develop distributed real-time applications. This is an important issue, as most current and future systems will be distributed. The Distributed RTSJ (DRTSJ) Expert Group was created under the Java Community Process (JSR-50) to define appropriate abstractions to overcome this problem; currently there is no formal specification. The aim of this thesis is to develop a communication middleware suitable for the development of distributed hard real-time systems in Java, based on the integration of the RMI (Remote Method Invocation) model and the HRTJ profile. It has been designed and implemented with the main requirements in mind: predictability and reliability of the timing behavior and the resource usage. The design starts with the definition of a computational model which identifies, among other things, the communication model, the most appropriate underlying network protocols, the analysis model, and a subset of Java for hard real-time systems. In the design, remote references are the basic means for building distributed applications; they are associated with all the non-functional parameters and resources needed to implement synchronous or asynchronous remote invocations with real-time attributes. The proposed middleware separates resource allocation from the execution itself by defining two phases and a specific threading mechanism that guarantees suitable timing behavior. It also includes mechanisms to monitor the functional and timing behavior, and it provides independence from the network protocol by defining a network interface and modules. The JRMP protocol was modified to include the two phases, non-functional parameters, and message-size optimizations.
Although serialization is one of the fundamental operations for ensuring proper data transmission, current implementations are not suitable for hard real-time systems, and there are no alternatives. This thesis proposes a predictable serialization approach that introduces a new compiler to generate optimized code according to the computational model. The proposed solution has the advantage of allowing the communications to be scheduled and the memory usage to be adjusted at compilation time. In order to validate the design and the implementation, a demanding validation process was carried out with emphasis on the functional behavior, the memory usage, the processor usage (the end-to-end response time and the response time in each functional block), and the network usage (real consumption compared with the calculated consumption). The results obtained in an industrial application developed by Thales Avionics (a Flight Management System) and in exhaustive tests show that the design and the prototype are reliable for industrial applications with strict timing requirements.

Relevance:

30.00%

Publisher:

Abstract:

Singular-value decomposition (SVD)-based multiple-input multiple-output (MIMO) systems, in which the whole MIMO channel is decomposed into a number of unequally weighted single-input single-output (SISO) channels, have attracted a lot of attention in the wireless community. The unequal weighting of the SISO channels has led to intensive research on bit and power allocation, particularly for MIMO channels with poor scattering conditions, identified as the antenna correlation effect; in this situation, the weighting of the SISO channels becomes even more unequal. In comparison to SVD-assisted MIMO transmission, geometric mean decomposition (GMD)-based MIMO systems are able to compensate for the drawback of unequally weighted SISO channels under SVD, since the decomposition result is nearly independent of the antenna correlation effect. The interference remaining after GMD-based signal processing can easily be removed by dirty-paper precoding, as demonstrated in this work. Our results show that GMD-based MIMO transmission has the potential to significantly simplify the bit- and power-loading processes, and that it outperforms SVD-based MIMO transmission as long as the same QAM constellation size is used on all equally weighted SISO channels.
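For reference, the two decompositions being contrasted, in standard notation (ours): the SVD produces unequally weighted subchannels, whereas the GMD equalizes the diagonal at the geometric mean of the $K$ nonzero singular values:

$$\mathbf{H} = \mathbf{U}\,\boldsymbol{\Sigma}\,\mathbf{V}^{\mathrm{H}}, \qquad \mathbf{H} = \mathbf{Q}\,\mathbf{R}\,\mathbf{P}^{\mathrm{H}}, \qquad r_{ii} = \Bigl(\prod_{k=1}^{K} \sigma_k\Bigr)^{1/K},$$

with $\mathbf{R}$ upper triangular; its off-diagonal entries are exactly the residual interference that the dirty-paper precoder removes.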

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a transmission and wheeling pricing method based on tracing monetary flow along power flow paths: the monetary flow-monetary path method. Active and reactive power flows are converted into monetary flows using nodal prices. The method introduces a uniform measure of transmission service usage by active and reactive power. Because monetary flows are tied to nodal prices, the impacts of generators and loads on operating constraints, and the interactive impacts between active and reactive power, can be taken into account. The total transmission service cost is separated into more practical line-related costs and a system-wide cost, and can be flexibly distributed between generators and loads. The method is able to reconcile transmission service costs fairly and to optimize transmission system operation and development. A case study on the IEEE 30-bus test system shows that the proposed pricing method is effective in creating economic signals towards efficient use and operation of the transmission system. (c) 2005 Elsevier B.V. All rights reserved.
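A small sketch of the conversion step described above; pricing each line flow at its sending-end nodal price is our assumption, and the tracing of monetary flows along paths is not reproduced here.

```python
def monetary_flows(line_flows, nodal_price):
    """Convert active/reactive line flows into monetary flows.

    line_flows  : {(from_bus, to_bus): (p_mw, q_mvar)}
    nodal_price : {bus: (price_p, price_q)} in $/MWh and $/MVArh
    Returns {(from_bus, to_bus): (p_dollars, q_dollars)} per hour.
    """
    return {
        (f, t): (nodal_price[f][0] * p, nodal_price[f][1] * q)
        for (f, t), (p, q) in line_flows.items()
    }

# Example: one line carrying 80 MW / 20 MVAr, priced at the sending bus
flows = {(1, 2): (80.0, 20.0)}
prices = {1: (30.0, 2.0), 2: (32.0, 2.5)}
print(monetary_flows(flows, prices))   # {(1, 2): (2400.0, 40.0)}
```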

Relevance:

30.00%

Publisher:

Abstract:

In spite of the increasing significance of broadband Internet, few research papers explicitly address issues pertaining to its adoption and postadoption. Previous research on broadband has mainly focused on supply-side aspects at the national level, ignoring the importance of the demand side, which requires looking more deeply into usage as well as the factors shaping organizational and individual uptake. In an attempt to fill this gap, the current study empirically verifies an integrated theoretical model, comprising the theory of planned behavior and the IS continuance model, to examine factors influencing the broadband Internet adoption and postadoption behavior of some 1,500 organizations in Singapore. Overall, our results provide strong support for the integrated model and insight into the influential factors. At the adoption stage, perceived behavioral control has the greatest impact on behavioral intention. Our findings also suggest that, compared with attitude, subjective norms and perceived behavioral control more significantly affect the broadband Internet adoption decision. At the postadoption stage, intention is no longer the only determinant of broadband Internet continuance; rather, initial usage was found to significantly affect continuance.

Relevance:

30.00%

Publisher:

Abstract:

This thesis presents several advanced optical techniques that are crucial for improving high-capacity transmission systems. The basic theory of optical fibre communications is introduced before optical solitons and their use in optically amplified fibre systems are discussed. The design, operation, limitations, and importance of the recirculating loop are illustrated. The crucial role of dispersion management in transmission systems is then considered, with emphasis on two of the most popular dispersion compensation methods: dispersion compensating fibres and fibre Bragg gratings. A tunable dispersion compensator is fabricated using linearly chirped fibre Bragg gratings and a bending rig. Results show that it is capable of compensating not only second-order dispersion but also higher-order dispersion. Stimulated Raman scattering (SRS) is studied and discussed. Different dispersion maps are evaluated for an all-Raman-amplified standard-fibre link to obtain maximum transmission distance. Raman amplification is used in most of our loop experiments, since it improves the optical signal-to-noise ratio (OSNR) and significantly reduces the nonlinear intrachannel effects of the transmission systems. The main body of the experimental work concerns nonlinear optical switching using nonlinear optical loop mirrors (NOLMs). A number of different types of optical loop mirror are built, tested, and implemented in the transmission systems for noise suppression and 2R regeneration. The results show that, for 2R regeneration, the NOLM does improve system performance, while the NILM degrades it owing to its sensitivity to the input pulse width, and the NALM built is unstable and therefore impairs system performance.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we propose a resource allocation scheme to minimize transmit power in multicast orthogonal frequency division multiple access (OFDMA) systems. The proposed scheme allows users to have different symbol error rates (SER) across subcarriers and guarantees an average bit error rate and transmission rate for all users. We first provide an algorithm to determine the optimal bits and target SER on each subcarrier. Because the worst-case complexity of the optimal algorithm is exponential, we further propose a suboptimal algorithm that assigns bits and adjusts the SER separately, with lower complexity. Numerical results show that the proposed algorithms effectively improve the performance of multicast OFDMA systems and that the performance of the suboptimal algorithm is close to that of the optimal one. Copyright © 2012 John Wiley & Sons, Ltd.
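As a point of comparison, a classic greedy (Hughes-Hartogs-style) margin-adaptive bit-loading sketch. It is a stand-in for the kind of suboptimal bit/SER assignment the paper proposes; the paper's actual algorithm and its SER-adjustment step are not specified in the abstract, and the SNR-gap power model below is an assumption.

```python
import heapq

def greedy_bit_loading(gains, target_bits, gamma=4.0, max_bits=8):
    """Greedy margin-adaptive bit loading over OFDM subcarriers.

    gains       : per-subcarrier channel power gains
    target_bits : total number of bits to place per OFDM symbol
    gamma       : SNR gap implied by the error-rate target (assumed)
    Power to carry b bits on gain g is modelled as gamma*(2**b - 1)/g.
    """
    def extra_power(b, g):              # marginal power for bit b+1
        return gamma * (2 ** (b + 1) - 2 ** b) / g

    bits = [0] * len(gains)
    heap = [(extra_power(0, g), k) for k, g in enumerate(gains)]
    heapq.heapify(heap)
    for _ in range(target_bits):
        if not heap:                    # all subcarriers saturated
            break
        _, k = heapq.heappop(heap)      # cheapest next bit
        bits[k] += 1
        if bits[k] < max_bits:
            heapq.heappush(heap, (extra_power(bits[k], gains[k]), k))
    power = sum(gamma * (2 ** b - 1) / g for b, g in zip(bits, gains))
    return bits, power
```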

Relevance:

30.00%

Publisher:

Abstract:

The energy-balancing capability of cooperative communication is utilized to solve the energy hole problem in wireless sensor networks. We first propose a cooperative transmission strategy in which intermediate nodes participate in two cooperative multi-input single-output (MISO) transmissions: one with the node at the previous hop and one with a selected node at the next hop. We then study optimization problems for the power allocation of the cooperative transmission strategy through two different approaches: network lifetime maximization (NLM) and energy consumption minimization (ECM). For NLM, the numerical optimal solution is derived, and a search algorithm for a suboptimal solution is provided for when the optimal solution does not exist. For ECM, a closed-form solution is obtained. Numerical and simulation results show that both approaches yield much longer network lifetime than SISO transmission strategies and other cooperative communication schemes. Moreover, NLM, which features energy balancing, outperforms ECM, which focuses on energy efficiency, in the network lifetime sense.
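One standard way to formalize the two objectives, shown here only for orientation (the paper's exact formulation, including its cooperative MISO energy model, is not given in the abstract): with residual energy $E_i$ and per-node power draw $p_i$ under allocation $\mathbf{p}$,

$$\text{NLM:}\;\; \max_{\mathbf{p}} \, \min_i \frac{E_i}{p_i}, \qquad \text{ECM:}\;\; \min_{\mathbf{p}} \, \sum_i p_i,$$

which makes the trade-off visible: NLM balances depletion across nodes (max-min lifetime), while ECM minimizes total consumption even if a single bottleneck node drains early.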

Relevance:

30.00%

Publisher:

Abstract:

This chapter discusses network protection of high-voltage direct current (HVDC) transmission systems for large-scale offshore wind farms, where the HVDC system utilizes voltage-source converters. The multi-terminal HVDC network topology and the protection allocation and configuration are discussed, with DC circuit breaker and protection relay configurations studied for different fault conditions. A detailed protection scheme is designed with a solution that does not require relay communication. Advanced understanding of protection system design and operation is necessary for reliable and safe operation of the meshed HVDC system under fault conditions. Meshed HVDC systems are important as they will be used to interconnect large-scale offshore wind generation projects. Offshore wind generation is growing rapidly and offers a means of securing energy supply and addressing emissions targets whilst minimising community impacts. There are ambitious plans for such projects in Europe and in the Asia-Pacific region, all of which will require a reliable yet economic system to generate, collect, and transmit electrical power from renewable resources. Collective offshore wind farms are efficient and have potential as a significant low-carbon energy source; however, this requires a reliable collection and transmission system. Offshore wind power generation is a relatively new area and lacks the systematic fault analysis and associated operational experience needed to support further development. Appropriate fault protection schemes are required, and this chapter highlights the process of developing and assessing them. The chapter illustrates the basic meshed topology, identifies the need for distance evaluation and appropriate cable models, then details the design and operation of the protection scheme, with simulation results used to illustrate its operation. © Springer Science+Business Media Singapore 2014.

Relevance:

30.00%

Publisher:

Abstract:

The increase in renewable energy generators introduced into the electricity grid is putting pressure on its stability and management, as renewable energy sources cannot be accurately predicted or fully controlled. This, together with the additional pressure of fluctuations in demand, presents a problem more complex than the one the current methods of controlling electricity distribution were designed for. A global, approximate, and distributed optimisation method for power allocation that accommodates uncertainties and volatility is suggested and analysed. It is based on a probabilistic method known as message passing [1], which has deep links to statistical physics methodology. This principled optimisation method relies on local calculations and inherently accommodates uncertainties; it is of modest computational complexity and provides good approximate solutions. We consider uncertainty and fluctuations drawn from a Gaussian distribution and incorporate them into the message-passing algorithm. We examine the effect that increasing uncertainty has on the transmission cost, and how the placement of volatile nodes within a grid, such as renewable generators or consumers, affects it.
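As a toy illustration of why growing uncertainty raises transmission cost (our simplification, not the paper's model): if a line carries a planned flow $f$ plus a Gaussian fluctuation $\xi \sim \mathcal{N}(0, \sigma^2)$ and the cost is quadratic in the total flow, then

$$\mathbb{E}\bigl[(f + \xi)^2\bigr] \;=\; f^2 + \sigma^2,$$

so each unit of variance adds directly to the expected cost, and the penalty concentrates on the lines serving the most volatile nodes, which is why their placement within the grid matters.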