943 results for Data transmission systems.
Abstract:
This chapter discusses network protection of high-voltage direct current (HVDC) transmission systems for large-scale offshore wind farms in which the HVDC system utilizes voltage-source converters. The multi-terminal HVDC network topology and the allocation and configuration of protection are discussed, with DC circuit breaker and protection relay configurations studied for different fault conditions. A detailed protection scheme is designed with a solution that does not require relay communication. Advanced understanding of protection system design and operation is necessary for reliable and safe operation of the meshed HVDC system under fault conditions. Meshed HVDC systems are important as they will be used to interconnect large-scale offshore wind generation projects. Offshore wind generation is growing rapidly and offers a means of securing energy supply and addressing emissions targets whilst minimising community impacts. There are ambitious plans for such projects in Europe and in the Asia-Pacific region, all of which will require a reliable yet economic system to generate, collect, and transmit electrical power from renewable resources. Collectively, offshore wind farms are efficient and have potential as a significant low-carbon energy source; however, this requires a reliable collection and transmission system. Offshore wind power generation is a relatively new area and lacks the systematic fault analysis and operational experience needed to support further development. Appropriate fault protection schemes are required, and this chapter highlights the process of developing and assessing such schemes. The chapter illustrates the basic meshed topology, identifies the need for distance evaluation and appropriate cable models, then details the design and operation of the protection scheme, with simulation results used to illustrate operation. © Springer Science+Business Media Singapore 2014.
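As a hedged illustration of how a communication-free DC relay might discriminate faults from purely local measurements, the sketch below trips when the pole voltage collapses while the current derivative exceeds a limit. The criterion and all thresholds are hypothetical, chosen for illustration only; they are not the chapter's actual scheme.

```python
# Hypothetical local-measurement trip criterion for a DC relay:
# undervoltage AND high di/dt, evaluated sample by sample.
import numpy as np

def relay_trip(v, i, dt, v_min=0.8, didt_max=5.0):
    """Return sample indices where the local trip criterion holds.

    v, i : per-unit pole voltage and current samples
    dt   : sampling interval in seconds
    """
    didt = np.gradient(i, dt)  # local current derivative (pu/s)
    return np.where((v < v_min) & (didt > didt_max))[0]

# Synthetic pole-to-ground fault at t = 1 ms: voltage collapse plus
# a rapidly rising fault current.
t = np.arange(0, 2e-3, 1e-6)
v = np.where(t < 1e-3, 1.0, 0.3)
i = np.where(t < 1e-3, 1.0, 1.0 + 8e3 * (t - 1e-3))
print(relay_trip(v, i, 1e-6)[:3])  # first samples flagged for tripping
```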
Abstract:
We present the design of nonlinear regenerative communication channels that have capacity above the classical Shannon capacity of the linear additive white Gaussian noise channel. The upper bound for regeneration efficiency is found and the asymptotic behavior of the capacity in the saturation regime is derived. © 2013 IEEE.
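For reference, the linear AWGN baseline against which the regenerative channel is measured is the classical Shannon capacity, for bandwidth W, signal power P and noise spectral density N_0:

```latex
C = W \log_2\!\left(1 + \frac{P}{N_0 W}\right)
```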
Abstract:
We examine data transmission during the interval immediately after wavelength switching of a tunable laser and, through simulation, we demonstrate how the choice of modulation format can improve the efficacy of an optical burst/packet switched network. © 2013 Optical Society of America.
Abstract:
In this Letter, we theoretically and numerically analyze the performance of coherent optical transmission systems that deploy inline or transceiver-based nonlinearity compensation techniques. For systems where signal-signal nonlinear interactions are fully compensated, we find that beyond the performance peak the signal-to-noise ratio degradation has a slope of 3 dB(SNR)/dB(Power), suggesting a quartic rather than quadratic dependence on signal power. This is directly related to the fact that signals in a given span will interact not only with linear amplified spontaneous emission noise, but also with the nonlinear four-wave mixing products generated from signal-noise interaction in previous, hitherto uncompensated, spans. The performance of optical systems employing different nonlinearity compensation schemes was numerically simulated and compared against analytical predictions, showing good agreement within a 0.4 dB margin of error.
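A minimal numerical sketch of the scaling argument, assuming a toy model in which nonlinear noise grows as P^k on top of a flat ASE floor; the coefficients are hypothetical and chosen only to reproduce the slopes (about 1 dB/dB beyond the peak for quadratic noise, 3 dB/dB for quartic):

```python
import numpy as np

def snr_db(p_db, k, n_ase=1.0, eta=1e-3):
    """Toy SNR model in dB: SNR = P / (N_ase + eta * P**k), arbitrary units."""
    p = 10 ** (p_db / 10)
    return 10 * np.log10(p / (n_ase + eta * p ** k))

p = np.linspace(0, 40, 401)
for k in (2, 4):  # k=2: quadratic nonlinear noise; k=4: quartic (signal-noise)
    s = snr_db(p, k)
    peak_p = p[np.argmax(s)]
    slope = (s[-1] - s[-41]) / (p[-1] - p[-41])  # dB_SNR per dB_Power, far side
    print(f"k={k}: SNR peak at {peak_p:.1f} dB, far-side slope {slope:.2f} dB/dB")
```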
Abstract:
We report on a theoretical study of polarization impairments in periodically spun fiber Raman amplifiers. Based on the stochastic generator approach, we have derived equations for polarization-dependent gain and mean-square gain fluctuations. We show that a periodically spun fiber can work as a Raman polarizer, but it suffers from increased polarization-dependent gain and gain fluctuations. In contrast, the application of a depolarizer can result in the suppression of polarization-dependent gain and gain fluctuations. We demonstrate that it is possible to design a new fiber Raman polarizer by combining a short unspun fiber with properly chosen parameters and a long periodically spun fiber. This polarizer provides almost the same polarization pulling for all input signal states of polarization and so has very small polarization-dependent gain. The obtained results can be used in high-speed fiber-optic communication for the design of quasi-isotropic, spatially and spectrally transparent media with increased Raman gain. © 2011 IEEE.
Abstract:
The thesis presents a detailed study of different Raman fibre laser (RFL) based amplification techniques and their applications in long-haul/unrepeatered coherent transmission systems. RFL based amplification techniques were characterised from different aspects, including signal/noise power distributions, relative intensity noise (RIN), and the mode structures of the induced Raman fibre lasers. It was found for the first time that RFL based amplification techniques could be divided into three categories in terms of the fibre laser regime: a Fabry-Perot fibre laser with two FBGs, a weak Fabry-Perot fibre laser with one FBG and very low reflection near the input, and a random distributed feedback (DFB) fibre laser with one FBG. It was also found that lowering the reflection near the input could mitigate the RIN of the signal significantly, thanks to the reduced efficiency of the Stokes shift from the forward-propagated pump. In order to evaluate the transmission performance, different RFL based amplifiers were evaluated and optimised in long-haul coherent transmission systems. The results showed that the Fabry-Perot fibre laser based amplifier with two FBGs gave >4.15 dB Q factor penalty using symmetrical bidirectional pumping, as the RIN of the signal was increased significantly. However, the random distributed feedback fibre laser based amplifier with one FBG could mitigate the RIN of the signal, which enabled the use of bidirectional second-order pumping and consequently gave the best transmission performance, up to 7915 km. Furthermore, the random DFB fibre laser based amplifier proved effective in combating nonlinear impairments, and the maximum reach was enhanced by >28% in mid-link single/dual band optical phase conjugator (OPC) transmission systems. In addition, unrepeatered transmission over >350 km of fibre using RFL based amplification techniques was demonstrated experimentally using DP-QPSK and DP-16QAM transmitters.
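Q factor penalties such as those quoted above are conventionally obtained from measured bit error rates via the standard Gaussian relation Q = sqrt(2) * erfcinv(2 * BER); a minimal sketch of that conversion follows (the thesis's exact measurement procedure may differ):

```python
from math import log10, sqrt
from scipy.special import erfcinv

def q_factor_db(ber):
    """Convert a BER to the equivalent Gaussian Q factor in dB."""
    q = sqrt(2) * erfcinv(2 * ber)  # inverse of BER = 0.5*erfc(Q/sqrt(2))
    return 20 * log10(q)

# A Q penalty is the difference between two such values, e.g. between
# back-to-back and after-transmission BER measurements.
print(q_factor_db(1e-3) - q_factor_db(1e-2))  # penalty in dB
```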
Abstract:
The Data Processing Department of ISHC has developed coding forms for the data to be entered into the program. The Highway Planning and Programming and the Design Departments are responsible for coding and submitting the necessary data forms to Data Processing for noise prediction on the highway sections.
Abstract:
In today’s big data world, data is being produced in massive volumes, at great velocity and from a variety of different sources such as mobile devices, sensors, a plethora of small devices hooked to the internet (Internet of Things), social networks, communication networks and many others. Interactive querying and large-scale analytics are being increasingly used to derive value out of this big data. A large portion of this data is being stored and processed in the Cloud due to the several advantages provided by the Cloud, such as scalability, elasticity, availability, low cost of ownership and the overall economies of scale. There is thus a growing need for large-scale cloud-based data management systems that can support real-time ingest, storage and processing of large volumes of heterogeneous data. However, in the pay-as-you-go Cloud environment, the cost of analytics can grow linearly with the time and resources required. Reducing the cost of data analytics in the Cloud thus remains a primary challenge. In my dissertation research, I have focused on building efficient and cost-effective cloud-based data management systems for different application domains that are predominant in cloud computing environments. In the first part of my dissertation, I address the problem of reducing the cost of transactional workloads on relational databases to support database-as-a-service in the Cloud. The primary challenges in supporting such workloads include choosing how to partition the data across a large number of machines, minimizing the number of distributed transactions, providing high data availability, and tolerating failures gracefully. I have designed, built and evaluated SWORD, an end-to-end scalable online transaction processing system that utilizes workload-aware data placement and replication to minimize the number of distributed transactions, and that incorporates a suite of novel techniques to significantly reduce the overheads incurred both during the initial placement of data and during query execution at runtime. In the second part of my dissertation, I focus on sampling-based progressive analytics as a means to reduce the cost of data analytics in the relational domain. Sampling has been traditionally used by data scientists to get progressive answers to complex analytical tasks over large volumes of data. Typically, this involves manually extracting samples of increasing data size (progressive samples) for exploratory querying. This provides the data scientists with user control, repeatable semantics, and result provenance. However, such solutions result in tedious workflows that preclude the reuse of work across samples. On the other hand, existing approximate query processing systems report early results, but do not offer the above benefits for complex ad-hoc queries. I propose a new progressive data-parallel computation framework, NOW!, that provides support for progressive analytics over big data. In particular, NOW! enables progressive relational (SQL) query support in the Cloud using unique progress semantics that allow efficient and deterministic query processing over samples, providing meaningful early results and provenance to data scientists. NOW! enables the provision of early results using significantly fewer resources, thereby enabling a substantial reduction in the cost incurred during such analytics. Finally, I propose NSCALE, a system for efficient and cost-effective complex analytics on large-scale graph-structured data in the Cloud.
The system is based on the key observation that a wide range of complex analysis tasks over graph data require processing and reasoning about a large number of multi-hop neighborhoods or subgraphs in the graph; examples include ego network analysis, motif counting in biological networks, finding social circles in social networks, personalized recommendations, link prediction, etc. These tasks are not well served by existing vertex-centric graph processing frameworks, whose computation and execution models limit the user program to directly accessing the state of a single vertex, resulting in high execution overheads. Further, the lack of support for extracting the relevant portions of the graph that are of interest to an analysis task and loading them into distributed memory leads to poor scalability. NSCALE allows users to write programs at the level of neighborhoods or subgraphs rather than at the level of vertices, and to declaratively specify the subgraphs of interest. It enables the efficient distributed execution of these neighborhood-centric complex analysis tasks over large-scale graphs, while minimizing resource consumption and communication cost, thereby substantially reducing the overall cost of graph data analytics in the Cloud. The results of our extensive experimental evaluation of these prototypes with several real-world data sets and applications validate the effectiveness of our techniques, which provide orders-of-magnitude reductions in the overheads of distributed data querying and analysis in the Cloud.
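A hedged sketch of the neighborhood-centric idea using networkx (an illustration only, not the NSCALE API): each analysis task receives a whole k-hop ego network rather than the state of a single vertex.

```python
# Extract each node's k-hop ego network and run an analysis task on the
# subgraph as a whole, instead of programming individual vertices.
import networkx as nx

def ego_analysis(g, radius=1):
    results = {}
    for n in g.nodes():
        ego = nx.ego_graph(g, n, radius=radius)  # k-hop neighborhood subgraph
        results[n] = nx.density(ego)             # any per-subgraph metric
    return results

g = nx.karate_club_graph()                       # toy social network
print(sorted(ego_analysis(g).items())[:3])
```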
Abstract:
This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building from well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by the optimization of the performance versus complexity tradeoff, envisioning near-future practical application in commercial real-time transceivers. The work is initially focused on the mitigation of intra-channel nonlinear impairments relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications are identified, culminating in the development of reduced-complexity nonlinear equalization algorithms formulated both in time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide an enhanced signal reach while requiring only marginal added complexity.
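As a hedged illustration of the digital backpropagation concept the thesis builds on, the sketch below inverts one span per step by undoing chromatic dispersion in the frequency domain and the lumped Kerr phase rotation in the time domain. The fibre parameters are typical textbook SSMF values, the sign convention is one common form of the NLSE, and this is a simplified illustration, not the thesis's reduced-complexity Volterra equalizers.

```python
import numpy as np

def dbp(rx, fs, n_spans=10, span_len=80e3,
        beta2=-21.7e-27, gamma=1.3e-3, alpha_db_per_m=0.2e-3):
    """Backpropagate a received complex baseband field, one step per span.

    rx: complex samples; fs: sampling rate (Hz). Assumed SSMF parameters:
    beta2 (s^2/m), gamma (1/(W m)), loss (dB/m).
    """
    n = len(rx)
    w = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)       # angular frequency grid
    alpha = alpha_db_per_m * np.log(10) / 10          # loss in 1/m
    l_eff = (1 - np.exp(-alpha * span_len)) / alpha   # nonlinear effective length
    e = rx.astype(complex)
    for _ in range(n_spans):
        # inverse linear step: undo one span of chromatic dispersion
        e = np.fft.ifft(np.fft.fft(e) * np.exp(-0.5j * beta2 * w**2 * span_len))
        # inverse nonlinear step: undo the lumped Kerr phase rotation
        e *= np.exp(-1j * gamma * l_eff * np.abs(e) ** 2)
    return e
```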
Abstract:
Pop-up archival tags (PATs) provide summary and high-resolution time series data at predefined temporal intervals. The limited battery capacity of PATs often restricts transmission success and thus the temporal coverage of both data products. While summary data are usually less affected by this problem as a result of their smaller size, they may be less informative. We here investigate the accuracy and feasibility of using the temperature-at-depth summary data provided by PATs to describe encountered oceanographic conditions. Interpolated temperature-at-depth summary data were found to provide accurate estimates of three major indicators of thermal water column structure: thermocline depth, stratification and ocean heat content. Such indicators are useful for the interpretation of the tagged animal's horizontal and vertical behaviour. The accuracy of these indicators was found to be particularly sensitive to the number of data points available in the first 100 m, which in turn depends on the vertical behaviour of the tagged animal. Based on our results, we recommend the use of temperature-at-depth summary data rather than temperature time series data for PAT studies; doing so during tag programming will help to maximize the amount of transmitted time series data for other key data types such as light levels and depth.
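A minimal sketch of how such indicators can be derived from sparse temperature-at-depth points, assuming simple working definitions (thermocline depth as the depth of the steepest temperature drop, stratification as the surface-to-100 m temperature difference, heat content as the vertically integrated temperature of the upper 100 m); the study's exact definitions and the example profile are illustrative and may differ.

```python
import numpy as np

depth_obs = np.array([0.0, 25.0, 60.0, 120.0, 250.0])  # m, sparse summary bins
temp_obs = np.array([27.0, 26.5, 22.0, 15.0, 11.0])    # deg C

z = np.arange(0.0, 251.0)                  # 1 m interpolation grid
t = np.interp(z, depth_obs, temp_obs)      # interpolated vertical profile

thermocline_depth = z[np.argmax(-np.gradient(t, z))]   # steepest drop (m)
stratification = t[0] - t[z == 100.0][0]                # deg C over top 100 m
rho, cp = 1025.0, 3995.0                   # seawater density, specific heat
ohc = rho * cp * np.sum(t[z <= 100.0] + 273.15)         # J m^-2 (1 m spacing)

print(thermocline_depth, stratification, ohc)
```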
Abstract:
The thesis represents the conclusive outcome of the European Joint Doctorate programme in Law, Science & Technology funded by the European Commission through the Marie Skłodowska-Curie Innovative Training Networks actions within H2020, grant agreement no. 814177. The tension between data protection and privacy on one side, and the need to grant further uses of processed personal data on the other, is investigated, tracing the technological development of the de-anonymization/re-identification risk with an exploratory survey. After acknowledging its extent, it is questioned whether a certain degree of anonymity can still be guaranteed, focusing on a double perspective: an objective and a subjective one. The objective perspective focuses on the data processing models per se, while the subjective perspective investigates whether the distribution of roles and responsibilities among stakeholders can ensure data anonymity.
Abstract:
The signal-to-interference ratio (SIR) performance of a multiband orthogonal frequency division multiplexing ultra-wideband system with residual timing offset is investigated. To this end, an exact mathematical expression for the SIR of this system is derived. The analysis shows that, unlike a cyclic-prefix based system, a zero-padding based system is sensitive to residual timing offset.
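A minimal sketch of the cyclic-prefix side of this contrast, for an ideal flat channel: advancing the FFT window by m samples inside the CP leaves each subcarrier multiplied by a pure phase ramp that a one-tap correction removes, so no interference arises; per the analysis above, the zero-padded receiver lacks an equivalent margin. All parameters below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, L, m = 64, 16, 3                      # FFT size, CP length, timing offset
X = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], N)   # QPSK subcarriers
x = np.fft.ifft(X)
tx = np.concatenate([x[-L:], x])         # prepend cyclic prefix

rx = tx[L - m : L - m + N]               # FFT window advanced by m samples
Y = np.fft.fft(rx)
k = np.arange(N)
Y_corrected = Y * np.exp(1j * 2 * np.pi * k * m / N)    # one-tap phase ramp
print(np.max(np.abs(Y_corrected - X)))   # ~1e-15: phase rotation only, no ICI
```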
Abstract:
Certification authority (CA) for issuing digital certificates in a public key infrastructure (PKI), with a basic web interface.
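As a hedged illustration of the core operation of such a CA (not the project's actual code), the sketch below uses the Python cryptography library to create a self-signed root certificate, which could then be used to sign end-entity certificate requests.

```python
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "Demo Root CA")])
now = datetime.datetime.now(datetime.timezone.utc)

cert = (
    x509.CertificateBuilder()
    .subject_name(name)                  # self-signed: subject == issuer
    .issuer_name(name)
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=3650))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                   critical=True)        # mark the certificate as a CA
    .sign(key, hashes.SHA256())          # CA signs its own certificate
)
with open("ca.pem", "wb") as f:
    f.write(cert.public_bytes(serialization.Encoding.PEM))
```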
Abstract:
One of the most effective techniques for offering QoS routing is minimum interference routing. However, it is complex in terms of computation time and is not oriented toward improving the network protection level. In order to include better levels of protection, new minimum interference routing algorithms are necessary. Minimizing the failure recovery time is also a complex process involving different failure recovery phases. Some of these phases depend completely on correct routing selection, such as minimizing the failure notification time. The level of protection also involves other aspects, such as the amount of resources used; in this case, shared backup techniques should be considered. Therefore, minimum interference techniques should also be modified to include resource sharing for protection among their objectives. These aspects are reviewed and analyzed in this article, and a new proposal combining minimum interference with fast protection using shared segment backups is introduced. Results show that our proposed method improves both the request rejection ratio and the percentage of bandwidth allocated to backup paths in networks with low and medium protection requirements.
Abstract:
Quantitatively assessing the importance or criticality of each link in a network is of practical value to operators, as it can help them to increase the network's resilience, provide more efficient services, or improve other aspects of the service. Betweenness is a graph-theoretical measure of centrality that can be applied to communication networks to evaluate link importance. However, as we illustrate in this paper, the basic definition of betweenness centrality produces inaccurate estimations because it does not take into account aspects relevant to networking, such as heterogeneity in link capacity or the differences between node pairs in their contribution to the total traffic. A new algorithm for discovering link centrality in transport networks is proposed in this paper. It requires only static or semi-static network and topology attributes, yet produces estimations of good accuracy, as verified through extensive simulations. Its potential value is demonstrated by an example application, in which the simple shortest-path routing algorithm is improved in such a way that it outperforms other, more advanced algorithms in terms of blocking ratio.
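A hedged sketch of the underlying idea (not the paper's exact algorithm): weight each path by the traffic its node pair offers and normalize each link's accumulated load by that link's capacity, instead of counting all shortest paths equally as plain betweenness does. The function name and the toy demand matrix are illustrative.

```python
import networkx as nx

def link_criticality(g, traffic, capacity):
    """g: graph; traffic[(s, t)]: offered demand; capacity[(u, v)]: link capacity."""
    load = {tuple(sorted(e)): 0.0 for e in g.edges()}
    for (s, t), demand in traffic.items():
        path = nx.shortest_path(g, s, t)          # route the demand
        for u, v in zip(path, path[1:]):
            load[tuple(sorted((u, v)))] += demand # accumulate traffic per link
    # criticality = carried traffic relative to the link's capacity
    return {e: load[e] / capacity[e] for e in load}

g = nx.cycle_graph(4)                             # toy 4-node ring
traffic = {(0, 2): 10.0, (1, 3): 2.0}             # heterogeneous demands
capacity = {tuple(sorted(e)): 40.0 for e in g.edges()}
print(link_criticality(g, traffic, capacity))
```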