943 results for Data transmission systems.
Resumo:
The project consists of an experimental and numerical modelling study of the application of ultra-long Raman fibre laser (URFL) based amplification techniques to high-speed multi-wavelength optical communications systems. The research focuses on 40 Gb/s transmission data rates in the telecommunications C-band, with direct and coherent detection. The optical transmission performance of URFL-based systems is evaluated in terms of optical noise, gain bandwidth and gain flatness for different system configurations. Systems with different overall span lengths, transmission fibre types and data modulation formats are investigated. Performance is compared with that of conventional erbium-doped fibre amplifier based systems to identify configurations in which URFL-based amplification provides performance or commercial advantages.
Resumo:
Multilevel power converters have been introduced as the solution for high-power, high-voltage switching applications, where they have well-known advantages. Recently, full back-to-back connected multilevel neutral point diode clamped (NPC) converters have been used in high-voltage direct current (HVDC) transmission systems. Bipolar-connected back-to-back NPC converters have advantages over the full back-to-back connection in long-distance HVDC transmission systems, but greater difficulty in balancing the dc capacitor voltage divider on both the sending and receiving end NPC converters. This study shows that power flow control and dc capacitor voltage balancing are feasible using fast optimum-predictive-based controllers in HVDC systems using bipolar back-to-back-connected five-level NPC multilevel converters. For both converter sides, the control strategy takes into account active and reactive power, which establishes the ac grid currents at both ends, and guarantees the balancing of the dc bus capacitor voltages in both NPC converters. Additionally, the semiconductor switching frequency is minimised to reduce switching losses. The performance and robustness of the new fast predictive control strategy, and its capability to solve the dc capacitor voltage balancing problem of bipolar-connected back-to-back NPC converters, are evaluated.
Resumo:
Voltage source multilevel power converter structures are being considered for high-power, high-voltage applications, where they have well-known advantages. Recently, full back-to-back connected multilevel neutral point diode clamped converters (NPCs) have been used in high-voltage direct current (HVDC) transmission systems. Bipolar back-to-back connection of NPCs has advantages in long-distance HVDC transmission systems, but greatly increased difficulty in balancing the dc capacitor voltage dividers on both the sending and receiving end NPCs. This paper proposes a fast optimum-predictive controller to balance the dc capacitor voltages and to control the power flow in a long-distance HVDC system using bipolar back-to-back connected NPCs. For both converter sides, the control strategy considers active and reactive power to establish the ac grid currents at the sending and receiving ends, while guaranteeing the balancing of both NPC dc bus capacitor voltages. Furthermore, the fast predictive controller minimizes the semiconductor switching frequency to reduce global switching losses. The performance and robustness of the new fast predictive control strategy and the associated dc capacitor voltage balancing are evaluated. (C) 2011 Elsevier B.V. All rights reserved.
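As background to the predictive strategy described in the two abstracts above, the following minimal Python sketch illustrates the general finite-control-set predictive idea: every candidate switching level is evaluated with a cost that weights current tracking, dc-capacitor balancing and switching effort, and the cheapest one is applied. The models, levels and weights are illustrative placeholders, not the converter model used in the papers.

# Illustrative finite-control-set predictive step for one five-level NPC leg.
# All models and weights below are toy placeholders.
LEVELS = [-2, -1, 0, 1, 2]
W_I, W_BAL, W_SW = 1.0, 0.5, 0.05            # weighting factors (illustrative)

def predict(i_now, v_unbal_now, level, dt=1e-4):
    # Toy one-step predictions; a real controller would use the converter
    # and ac grid models to predict currents and capacitor voltages.
    i_next = i_now + dt * (level * 100.0 - i_now)
    v_unbal_next = v_unbal_now + dt * level * 10.0
    return i_next, v_unbal_next

def choose_level(i_now, i_ref, v_unbal_now, prev_level):
    best, best_cost = prev_level, float("inf")
    for level in LEVELS:
        i_next, v_unbal_next = predict(i_now, v_unbal_now, level)
        cost = (W_I   * (i_ref - i_next) ** 2        # current tracking
              + W_BAL * v_unbal_next ** 2            # dc capacitor balancing
              + W_SW  * abs(level - prev_level))     # switching effort
        if cost < best_cost:
            best, best_cost = level, cost
    return best

print(choose_level(i_now=5.0, i_ref=10.0, v_unbal_now=2.0, prev_level=0))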
Resumo:
Retail services are a major contributor to municipal budgets and an activity that affects perceived quality of life, especially for those with mobility difficulties (e.g. the elderly and low-income citizens). However, there is evidence of a decline in some of the services market towns provide to their citizens, and this decline has been reported across the western world, from North America to Australia. The aim of this research was to understand retail decline and to shed light on ways of addressing it, using a case study of Thornbury, a small town in the Southwest of England. Data were collected through two participatory approaches: photo-surveys and multicriteria mapping. The data were interpreted by using participants as analysts, but also by using systems thinking (systems diagramming and social trap theory) for theory building. This research moves away from mainstream economic and town-planning perspectives by drawing on methods and concepts from anthropology and visual sociology (photo-surveys) and from decision-making and ecological economics (multicriteria mapping and social trap theory). In sum, this research has experimented with different methods, outside their usual context, to analyse retail decline in a small town. It developed a conceptual model of retail decline and identified the existence of conflicting goals and interests, their implications for retail decline, and their causes; most of these potential causes have received little attention in the literature. It also found that some of the measures commonly used to deal with retail decline may themselves be contributing to its causes. Finally, the research reviewed measures that can be used to address retail decline, discussed implications for policy-making, and reflected on the use of the data collection and analysis methods in the context of small to medium-sized towns.
Resumo:
During the last few years, many research efforts have been made to improve the design of ETL (Extract-Transform-Load) systems. ETL systems are considered time-consuming, error-prone and complex, involving several participants from different knowledge domains. ETL processes are one of the most important components of a data warehousing system and are strongly influenced by the complexity of business requirements and by their change and evolution. These aspects influence not only the structure of the data warehouse but also the structures of the data sources involved. To minimize the negative impact of such variables, we propose the use of ETL patterns to build specific ETL packages. In this paper, we formalize this approach using BPMN (Business Process Model and Notation) for modelling ETL workflows at a conceptual level, mapping them to real execution primitives through a domain-specific language that allows the generation of specific instances that can be executed in a commercial ETL tool.
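The pattern-to-primitive mapping can be pictured with a small Python sketch; every pattern and primitive name below is invented for illustration and does not come from the paper or from any particular ETL tool.

# Invented pattern catalogue: each conceptual ETL pattern (as it might be
# drawn in BPMN) expands to a sequence of execution primitives understood
# by some target ETL tool.
PATTERNS = {
    "surrogate_key_pipeline": ["extract_source", "lookup_key", "assign_key", "load_dim"],
    "slowly_changing_dim":    ["extract_source", "detect_changes", "expire_row", "insert_row"],
}

def instantiate(pattern, **params):
    """Expand a conceptual pattern into parameterised primitive calls."""
    return [{"primitive": p, "params": params} for p in PATTERNS[pattern]]

package = instantiate("slowly_changing_dim",
                      source_table="stg_customer", target_table="dim_customer")
for step in package:
    print(step)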
Resumo:
Analog single event transients (ASETs) are produced by the interaction of a heavy ion or a high-energy proton with a sensitive device in an analog circuit. The interaction of the ion with a bipolar or MOS field-effect transistor induces electron-hole pairs that cause spikes which can propagate to the output of the analog component, producing transients that may induce failures at the system level. The most serious problems due to this kind of phenomenon occur in the space environment, which is very rich in heavy ions; typical cases are the on-board computers of satellites and other spacecraft. However, owing to the continuous shrinking of transistor dimensions (which brings with it an increase in sensitivity), this phenomenon has begun to be observed at sea level, caused mainly by the impact of atmospheric neutrons. These effects can cause severe problems for computing systems with analog interfaces from which they obtain data for processing, and they have become one of the most serious issues faced by designers of highly integrated systems; typical cases are Systems-on-Chip that include high-performance processing modules together with analog interfaces. The general objective of the project is to study the susceptibility of computing systems to ASETs in their analog sections, proposing strategies for error mitigation. The specific objectives are: to propose new ASET models based on device-level simulations solved by the finite element method; to use these models to identify the sections most likely to produce errors and, consequently, the candidates for radiation-hardening techniques; to use these models to study the nature of the errors produced in data processing systems; to propose novel solutions for mitigating these effects within the analog circuits themselves, preventing their propagation to the digital sections; and to propose solutions for mitigating the effects at the system level. To carry out the project, a bottom-up procedure is planned, starting from physical-level descriptions and subsequently raising the level of abstraction at which the circuit is modelled. Physical modelling of the MOS devices and their solution by the finite element method is proposed. Charge injection into the sensitive regions of the models will make it possible to determine the profiles of the current pulses that must be injected at the circuit level to emulate these effects. These procedures will be carried out for the different building blocks of the analog interfaces, proposing error-mitigation strategies at different levels. The expected results of the project include hardware for error detection and tolerance to this type of event, increasing the reliability of information-processing systems, as well as new data on radiation effects in semiconductors, new transient fault models that allow these events to be simulated at the circuit level, and the identification of the sensitive regions of typical analog interfaces that must be radiation-hardened.
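The current pulses mentioned above, injected at circuit level to emulate single-event transients, are commonly modelled as a double-exponential waveform. A minimal Python sketch with purely illustrative charge and time-constant values:

import numpy as np

def aset_current_pulse(t, q_coll=0.5e-12, tau_fall=200e-12, tau_rise=50e-12):
    """Double-exponential current pulse commonly used to emulate a
    single-event transient at circuit level.
    q_coll   : collected charge [C] (illustrative value)
    tau_fall : fall (collection) time constant [s]
    tau_rise : rise time constant [s]
    """
    i_peak = q_coll / (tau_fall - tau_rise)
    return i_peak * (np.exp(-t / tau_fall) - np.exp(-t / tau_rise))

t = np.linspace(0, 2e-9, 500)          # 0 to 2 ns
i = aset_current_pulse(t)
print(f"peak current = {i.max() * 1e3:.2f} mA")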
Resumo:
This paper is devoted to the evaluation of performance bottlenecks and algorithm deficiencies in contemporary reliable multicast networking. In particular, the impact of packet delay jitter on the end-to-end performance of multicast IP data transport is investigated. A series of tests with the two most significant open-source implementations of reliable multicast, the UDP-based File Transfer Protocol (UFTP) and NACK-Oriented Reliable Multicast (NORM), is performed and analyzed. The tests were designed to simulate a content-distribution scenario in WAN-sized Content Delivery Networks (CDNs). The results were then grouped and averaged by round-trip time (RTT) and packet loss, which made it possible to observe the influence of jitter independently of RTT and packet loss rate. The influence of jitter was revealed for different network conditions, confirming that even small jitter causes a significant reduction in data rate.
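The grouping and averaging step can be illustrated with a short Python/pandas sketch; the column names and numbers below are invented placeholders, not the measured data.

import pandas as pd

# Hypothetical per-run measurements: protocol, injected jitter, RTT, loss, achieved rate.
runs = pd.DataFrame({
    "protocol":  ["UFTP", "UFTP", "NORM", "NORM"],
    "jitter_ms": [0, 10, 0, 10],
    "rtt_ms":    [50, 50, 50, 50],
    "loss_pct":  [1, 1, 1, 1],
    "rate_mbps": [90.1, 41.3, 85.7, 47.9],
})

# Group by RTT and loss so the jitter effect can be read off independently
# of the other network conditions, then average the achieved data rate.
summary = (runs
           .groupby(["protocol", "rtt_ms", "loss_pct", "jitter_ms"])["rate_mbps"]
           .mean()
           .reset_index())
print(summary)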
Resumo:
Magdeburg, University, Faculty of Computer Science, Dissertation, 2014
Resumo:
The design of control, estimation or diagnosis algorithms most often assumes that all available process variables represent the system state at the same instant of time. However, this is never true in current networked systems, because of the unknown deterministic or stochastic transmission delays introduced by the communication network. During the diagnosis stage, this will often generate false alarms: under nominal operation, the different transmission delays associated with the variables that appear in the computation form cause the residuals to deviate from zero. A technique aiming at minimising the resulting false-alarm rate, based on explicit modelling of the communication delays and on their best-case estimation, is proposed.
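A toy Python sketch of the underlying effect (not the paper's technique): an unmodelled transmission delay on one of two redundant variables pushes the residual away from zero, and compensating with a best-case delay estimate reduces the discrepancy. All signals and delay values are illustrative.

import numpy as np

# Toy variable observed by two channels; channel 2 arrives with an unknown
# network delay of `true_delay` samples (illustrative values throughout).
n, true_delay, best_case_delay = 500, 7, 5
t = np.arange(n)
x = np.sin(2 * np.pi * t / 50)

y1 = x                                        # locally measured variable
y2 = np.roll(x, true_delay)                   # same variable received over the network

r_naive = y1 - y2                             # residual ignoring the delay
r_comp = y1 - np.roll(y2, -best_case_delay)   # compensated with the best-case estimate

print("naive residual RMS      :", np.sqrt(np.mean(r_naive ** 2)))
print("compensated residual RMS:", np.sqrt(np.mean(r_comp ** 2)))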
Resumo:
Communications play a key role in modern smart grids. New functionalities that make the grids 'smart' require the communication network to function properly. Data transmission between intelligent electronic devices (IEDs) in the rectifier and the customer-end inverters (CEIs) used for power conversion is also required in the smart grid concept of the low-voltage direct current (LVDC) distribution network. Smart grid applications such as smart metering, demand side management (DSM), and grid protection applied with communications are all installed in the LVDC system. Thus, besides a remote connection to the databases of the grid operators, a local communication network in the LVDC network is needed. One solution for implementing the communication medium in power distribution grids is power line communication (PLC): power cables already exist in the distribution grids, and hence they may be applied as a communication channel for distribution-level data. This doctoral thesis proposes an IP-based high-frequency (HF) band PLC data transmission concept for the LVDC network. A general method to implement the Ethernet-based PLC concept between the public distribution rectifier and the customer-end inverters in the LVDC grid is introduced. Low-voltage cables are studied as the communication channel in the frequency band of 100 kHz–30 MHz. The communication channel characteristics and the noise in the channel are described. All individual components in the channel are presented in detail, and a channel model, comprising models for each channel component, is developed and verified by measurements. The channel noise is also studied by measurements. Theoretical signal-to-noise ratio (SNR) and channel capacity analyses and practical data transmission tests are carried out to evaluate the applicability of the PLC concept against the requirements set by the smart grid applications in the LVDC system. The main results concerning the applicability of the PLC concept and its limitations are presented, and suggestions for future research are proposed.
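The theoretical capacity analysis mentioned above can be sketched as a Shannon-capacity sum over narrow sub-bands of the 100 kHz–30 MHz range; the SNR profile in this Python sketch is an invented placeholder, not the measured channel.

import numpy as np

# Frequency grid covering the studied 100 kHz - 30 MHz band.
f = np.linspace(100e3, 30e6, 1000)
df = f[1] - f[0]

# Illustrative SNR profile in dB (real values would come from the channel
# model and noise measurements described in the thesis).
snr_db = 35.0 - 25.0 * (f / 30e6)
snr = 10 ** (snr_db / 10)

# Shannon capacity summed over narrow sub-bands: C = sum(df * log2(1 + SNR)).
capacity_bps = np.sum(df * np.log2(1 + snr))
print(f"theoretical capacity = {capacity_bps / 1e6:.1f} Mbit/s")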
Resumo:
Considerable progress has taken place in numerical weather prediction over the last decade. It has been possible to extend predictive skill in the extra-tropics of the Northern Hemisphere during winter from less than five days to seven days. Similar improvements, albeit at a lower level, have taken place in the Southern Hemisphere. Another example of improvement in the forecasts is the prediction of intense synoptic phenomena such as cyclogenesis, which on the whole is quite successful with the most advanced operational models (Bengtsson (1989), Gadd and Kruze (1988)). A careful examination shows that there is no single cause for the improvements in predictive skill; instead they are due to several different factors encompassing the forecasting system as a whole (Bengtsson, 1985). In this paper we focus our attention on the role of data assimilation and the effect it may have on reducing the initial error and hence improving the forecast. The first part of the paper contains a theoretical discussion of error growth in simple data assimilation systems, following Leith (1983). In the second part we apply the results to actual forecast data from ECMWF. The potential for further forecast improvements within the framework of the present observing system in the two hemispheres is discussed.
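As general background to how data assimilation reduces the initial error, the standard scalar result for the minimum-variance combination of a forecast with an observation is recalled below (offered only as background, not as the specific formulation of Leith (1983)):

\[
  x_a = x_f + K\,(y - x_f), \qquad
  K = \frac{\sigma_f^2}{\sigma_f^2 + \sigma_o^2},
\]
\[
  \sigma_a^2 = (1 - K)\,\sigma_f^2
             = \frac{\sigma_f^2\,\sigma_o^2}{\sigma_f^2 + \sigma_o^2}
             \;\le\; \min\!\left(\sigma_f^2, \sigma_o^2\right),
\]

where \(x_f\) and \(y\) are the forecast and the observation with error variances \(\sigma_f^2\) and \(\sigma_o^2\). The analysis error variance \(\sigma_a^2\) is smaller than both, which is the sense in which assimilation reduces the initial error that subsequently grows during the forecast.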
Resumo:
Liquid-liquid equilibrium experimental data for systems containing refined sunflower seed oil, artificially acidified with commercial oleic acid or commercial linoleic acid, and a solvent (ethanol + water) were determined at 298.2 K. This set of experimental data and the experimental data from Cuevas et al.(1), obtained from (283.2 to 333.2) K for degummed sunflower seed oil-containing systems, were correlated using the NRTL and UNIQUAC models with temperature-dependent binary parameters. The deviations between experimental and calculated compositions presented average values of (1.13 and 1.41) % for the NRTL and UNIQUAC equations, respectively, indicating that the models are able to correctly describe the behavior of the compounds at different temperatures and solvent hydration levels.
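The average deviations quoted above are, in studies of this kind, typically root-mean-square differences between experimental and calculated mass fractions in the two phases; a commonly used form (the notation here is assumed, not taken from the paper) is:

\[
  \Delta w = 100 \sqrt{\frac{\sum_{n=1}^{N}\sum_{i=1}^{C}
      \left[\left(w^{\mathrm{I,exp}}_{i,n}-w^{\mathrm{I,calc}}_{i,n}\right)^{2}
          + \left(w^{\mathrm{II,exp}}_{i,n}-w^{\mathrm{II,calc}}_{i,n}\right)^{2}\right]}
      {2NC}},
\]

where \(N\) is the number of tie lines, \(C\) the number of components, and I and II denote the two liquid phases.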
Resumo:
In this paper, an efficient genetic algorithm (GA) is presented to solve the problem of multistage and coordinated transmission expansion planning. This is a mixed-integer nonlinear programming problem that is difficult for systems of medium and large size and high complexity. The proposed GA has a set of specialized genetic operators and an efficient procedure for generating the initial population, which finds high-quality suboptimal topologies for large, highly complex systems. In these systems, multistage and coordinated planning requires a lower investment than static planning. Test results are shown for one medium-complexity system and one large, highly complex system.
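A minimal Python sketch of the kind of GA skeleton described above: a seeded initial population, selection, crossover and mutation, and a fitness combining investment cost with a feasibility penalty. Every number, encoding and cost term is an illustrative placeholder rather than the paper's formulation.

import random

# Minimal GA skeleton for transmission expansion planning (illustrative only).
N_CANDIDATES, POP, GENS = 20, 40, 100           # candidate lines, population, generations

def fitness(plan):
    # Hypothetical cost: investment in added lines plus a penalty term that a
    # real implementation would obtain from a power-flow / load-shedding model.
    investment = sum(plan)
    penalty = max(0, 8 - sum(plan)) * 10         # placeholder for unserved demand
    return investment + penalty

def seeded_individual():
    # "Efficient initial population": bias toward plans with a minimum number
    # of reinforcements instead of purely random bit strings.
    plan = [0] * N_CANDIDATES
    for idx in random.sample(range(N_CANDIDATES), 8):
        plan[idx] = 1
    return plan

population = [seeded_individual() for _ in range(POP)]
for _ in range(GENS):
    parents = sorted(population, key=fitness)[:POP // 2]   # keep the best half
    children = []
    for _ in range(POP - len(parents)):
        a, b = random.sample(parents, 2)
        cut = random.randrange(1, N_CANDIDATES)             # one-point crossover
        child = a[:cut] + b[cut:]
        if random.random() < 0.1:                            # mutation
            i = random.randrange(N_CANDIDATES)
            child[i] ^= 1
        children.append(child)
    population = parents + children

best = min(population, key=fitness)
print("best plan:", best, "cost:", fitness(best))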