5 results for DBR (distributed Bragg reflector)
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
Photonic integration has become an important topic of research for applications in the telecommunications industry. Current optical internet infrastructure has reached capacity, with current-generation dense wavelength division multiplexing (DWDM) systems fully occupying the low-absorption region of optical fibre from 1530 nm to 1625 nm (the C and L bands). This is due both to an increase in the number of users worldwide and to existing users demanding more bandwidth. Therefore, current research is focussed on using the available telecommunication spectrum more efficiently. To this end, coherent communication systems are being developed. Advanced coherent modulation schemes can be quite complex in terms of the number and arrangement of devices required for implementation. In order to make these systems viable both logistically and commercially, photonic integration is required. In traditional DWDM systems, arrayed waveguide gratings (AWG) are used to both multiplex and demultiplex the multi-wavelength signal involved. AWGs are widely used as they allow simultaneous filtering of the many DWDM wavelengths. However, when moving to coherent telecommunication systems such as coherent optical frequency division multiplexing (OFDM), smaller free spectral ranges (FSRs) are required from the AWG. This increases the size of the device, which runs counter to the miniaturisation that integration is trying to achieve. Much work was done with active filters during the 1980s. This involved using a laser device (usually below threshold) to allow selective wavelength filtering of input signals. By using devices with more complicated cavity geometries, such as distributed feedback (DFB) lasers and sampled grating distributed Bragg reflector (SG-DBR) lasers, narrowband filtering is achievable with high suppression (>30 dB) of spurious wavelengths. The active nature of the devices also means that, through carrier injection, the refractive index can be altered, resulting in tunability of the filter. Used above threshold, active filters become useful in filtering coherent combs. Through injection locking, the coherence of the filtered wavelengths with the original comb source is retained. This gives active filters potential application in coherent communication systems as demultiplexers. This work focuses on the use of slotted Fabry-Pérot (SFP) semiconductor lasers as active filters. Experiments were carried out to verify that SFP lasers are useful as tunable active filters. In all experiments in this work the SFP lasers were operated above threshold, and so injection locking was the mechanism by which the filters operated. Performance of the lasers under injection locking was examined using both single-wavelength and coherent comb injection. In another experiment, two discrete SFP lasers were used simultaneously to demultiplex a two-line coherent comb. The relative coherence of the comb lines was retained after demultiplexing. After showing that SFP lasers could be used to successfully demultiplex coherent combs, a photonic integrated circuit was designed and fabricated. This involved monolithic integration of an MMI power splitter with an array of single-facet SFP lasers. This device was tested in much the same way as the discrete devices. The integrated device was used to successfully demultiplex a two-line coherent comb signal whilst retaining the relative coherence between the filtered comb lines.
A series of modelling systems was then employed in order to understand the resonance characteristics of the fabricated devices and their performance under injection locking. Using this information, alterations were made to the SFP laser designs which were theoretically shown to provide improved performance and suitability for use in filtering coherent comb signals.
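The point in the abstract above that moving to smaller AWG free spectral ranges enlarges the device follows from the standard relation FSR = c / (n_g * dL) between the FSR and the incremental path length dL of the arrayed waveguides. Below is a minimal back-of-the-envelope sketch; the group index of 3.6 and the FSR values are illustrative assumptions, not figures taken from the thesis.

```python
# Back-of-the-envelope AWG sizing, assuming the usual relation
# FSR = c / (n_g * dL), where dL is the incremental path length between
# adjacent arrayed waveguides. n_g = 3.6 is an assumed, typical value
# for an InP-based waveguide; the thesis devices may differ.
C = 299_792_458.0   # speed of light, m/s
N_G = 3.6           # assumed group index

def awg_increment(fsr_hz):
    """Path-length increment needed to obtain a given free spectral range."""
    return C / (N_G * fsr_hz)

# DWDM-like versus OFDM-like frequency grids (illustrative values)
for fsr_ghz in (1000, 100, 10):
    dl = awg_increment(fsr_ghz * 1e9)
    print(f"FSR = {fsr_ghz:5d} GHz  ->  dL = {dl * 1e6:8.1f} um")
```

Dropping the FSR from a DWDM-like 1000 GHz to an OFDM-like 10 GHz increases the required path-length increment by two orders of magnitude, which is the size penalty the abstract refers to.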
Abstract:
The multiquantum barrier (MQB), proposed by Iga et al. in 1986, has been shown by several researchers to be an effective structure for improving the operating characteristics of laser diodes. These improvements include a reduction in the laser threshold current and increased characteristic temperatures. The operation of the MQB has been described as providing an increased barrier to electron overflow by reflecting high-energy electrons trying to escape from the active region of the laser. This is achieved in a manner analogous to a Bragg reflector in optics. This thesis presents an investigation of the effectiveness of the MQB as an electron reflector. Numerical models have been developed for calculating the electron reflection due to the MQB. Novel optical and electrical characterisation techniques have been used to try to measure an increase in barrier height due to the MQB in AlGaInP. It has been shown that the inclusion of MQB structures in bulk double heterostructure visible laser diodes can halve the threshold current above room temperature, and the characteristic temperature of these lasers can be increased by up to 20 K. These improvements are shown to occur in visible laser diodes even with the inclusion of theoretically ineffective MQB structures; hence the observed improvement in the characteristics of the laser diodes described above cannot be uniquely attributed to an increased barrier height due to enhanced electron reflection. It is proposed here that the MQB improves the performance of laser diodes by preventing the diffusion of zinc into the active region of the laser. It is also proposed that the trapped zinc in the MQB region of the laser diode locally increases the p-type doping, bringing the quasi-Fermi level for holes closer to the valence band edge and thus increasing the barrier to electron overflow in the conduction band.
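As a rough illustration of the "electronic Bragg reflector" picture of the MQB described above, the transfer-matrix sketch below computes the reflection probability of an electron incident on a periodic stack of thin barriers. The barrier height, layer thicknesses, and effective mass are illustrative assumptions, not the AlGaInP parameters or numerical models from the thesis.

```python
import numpy as np

HBAR = 1.054571817e-34   # J s
M0   = 9.1093837015e-31  # electron rest mass, kg
Q    = 1.602176634e-19   # J per eV

def wavevector(E_eV, V_eV, m_eff):
    """Complex wavevector in a layer; becomes evanescent when E < V."""
    return np.sqrt(2 * m_eff * M0 * (E_eV - V_eV) * Q + 0j) / HBAR

def reflectance(E_eV, layers, m_eff=0.11):
    """Electron reflection probability from a stack of (thickness_m, V_eV) layers.
    Incident and exit regions are taken at V = 0 with a single effective mass."""
    potentials  = [0.0] + [V for _, V in layers] + [0.0]
    thicknesses = [0.0] + [d for d, _ in layers] + [0.0]
    k = [wavevector(E_eV, V, m_eff) for V in potentials]

    T = np.eye(2, dtype=complex)
    for j in range(len(potentials) - 1):
        # free propagation across region j (zero width for the incident region)
        P = np.array([[np.exp(1j * k[j] * thicknesses[j]), 0],
                      [0, np.exp(-1j * k[j] * thicknesses[j])]])
        r = k[j] / k[j + 1]
        # interface matrix from continuity of psi and psi'
        I = 0.5 * np.array([[1 + r, 1 - r],
                            [1 - r, 1 + r]])
        T = I @ P @ T
    # no wave incident from the right: B_N = 0  =>  r = -T21 / T22
    return abs(T[1, 0] / T[1, 1]) ** 2

# Five-period "electron Bragg mirror": 2 nm barriers (0.3 eV) alternating with 2 nm wells
mqb = [(2e-9, 0.3), (2e-9, 0.0)] * 5
for E in (0.25, 0.35, 0.45):
    print(f"E = {E:.2f} eV  ->  R = {reflectance(E, mqb):.2f}")
```

With a suitably chosen period, interference between the repeated barriers can give substantial reflection even for electrons with energies above the single-barrier height; this is the enhanced effective barrier the MQB is intended to provide, and it is this reflection that the thesis sets out to quantify and test experimentally.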
Abstract:
A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations which were controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources which are spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term used to denote this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with particular attention to iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared when applied to a highly interconnected, nonlinear, MIMO testbed based on part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
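As a toy illustration of the consensus form of the Alternating Direction Method of Multipliers on which such iterative distributed MPC schemes are built, the sketch below has two "local controllers" with quadratic costs (stand-ins for subsystem MPC problems) iterate to agreement on a shared decision vector. The cost matrices and penalty parameter are arbitrary assumptions for illustration; they are not the thesis's testbed or proof.

```python
import numpy as np

# Two subsystem costs f_i(x) = 0.5 x'Q_i x + c_i'x that must agree on a
# shared decision vector z (e.g. a coupling quantity between grid areas).
Q = [np.array([[4.0, 1.0], [1.0, 3.0]]), np.array([[2.0, 0.5], [0.5, 5.0]])]
c = [np.array([-1.0, 2.0]), np.array([3.0, -4.0])]

rho = 1.0                       # ADMM penalty parameter
z = np.zeros(2)                 # consensus (coordinating) variable
x = [np.zeros(2), np.zeros(2)]  # local copies held by each controller
u = [np.zeros(2), np.zeros(2)]  # scaled dual variables

for k in range(100):
    # 1) local solves, carried out in parallel by each distributed controller:
    #    argmin f_i(x) + (rho/2) ||x - z + u_i||^2
    for i in range(2):
        x[i] = np.linalg.solve(Q[i] + rho * np.eye(2), rho * (z - u[i]) - c[i])
    # 2) coordination step: average the local proposals
    z = np.mean([x[i] + u[i] for i in range(2)], axis=0)
    # 3) dual update: each controller penalises its disagreement with z
    for i in range(2):
        u[i] = u[i] + x[i] - z

# Compare with the centralised solution of min f_1(x) + f_2(x)
x_central = np.linalg.solve(Q[0] + Q[1], -(c[0] + c[1]))
print("ADMM consensus:", np.round(z, 4))
print("centralised   :", np.round(x_central, 4))
```

The structure mirrors the distributed setting: step 1 uses only local information, and only the coordinating variable and duals are exchanged each iteration, which is the kind of communication overhead that the tuning algorithm in the thesis trades off against closed-loop performance.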
Abstract:
We review recent advances in all-optical OFDM technologies and discuss the performance of a field trial of a 2 Tbit/s Coherent WDM system over 124 km with distributed Raman amplification. The results indicate that careful optimisation of the Raman pumps is essential. We also consider how all-optical OFDM systems compare favourably in terms of energy consumption with alternative coherent detection schemes. We argue that, in an energy-constrained high-capacity transmission system, direct-detection all-optical OFDM with 'ideal' Raman amplification is an attractive candidate for metro-area datacentre interconnects with ~100 km fibre spans, with an overall energy requirement at least three times lower than that of coherent detection techniques.
Abstract:
It is estimated that the quantity of digital data being transferred, processed or stored at any one time currently stands at 4.4 zettabytes (4.4 × 2^70 bytes), and this figure is expected to have grown by a factor of 10 to 44 zettabytes by 2020. Exploiting this data is, and will remain, a significant challenge. At present there is the capacity to store 33% of the digital data in existence at any one time; by 2020 this capacity is expected to fall to 15%. These statistics suggest that, in the era of Big Data, the identification of important, exploitable data will need to be done in a timely manner. Systems for the monitoring and analysis of data, e.g. stock markets, smart grids and sensor networks, can be made up of massive numbers of individual components. These components can be geographically distributed yet may interact with one another via continuous data streams, which in turn may affect the state of the sender or receiver. This introduces a dynamic causality, which further complicates the overall system by introducing a temporal constraint that is difficult to accommodate. Practical approaches to realising the system described above have led to a multiplicity of analysis techniques, each of which concentrates on specific characteristics of the system being analysed and treats these characteristics as the dominant component affecting the results being sought. The multiplicity of analysis techniques introduces another layer of heterogeneity, that is, heterogeneity of approach, partitioning the field to the extent that results from one domain are difficult to exploit in another. The question asked is whether a generic solution can be identified for the monitoring and analysis of data that accommodates temporal constraints, bridges the gap between expert knowledge and raw data, and enables data to be effectively interpreted and exploited in a transparent manner. The approach proposed in this dissertation acquires, analyses and processes data in a manner that is free of the constraints of any particular analysis technique, while at the same time facilitating these techniques where appropriate. Constraints are applied by defining a workflow based on the production, interpretation and consumption of data. This supports the application of different analysis techniques to the same raw data without the danger of incorporating hidden bias. To illustrate and to realise this approach, a software platform has been created that allows for the transparent analysis of data, combining analysis techniques with a maintainable record of provenance so that independent third-party analysis can be applied to verify any derived conclusions. To demonstrate these concepts, a complex real-world example involving the near real-time capture and analysis of neurophysiological data from a neonatal intensive care unit (NICU) was chosen. A system was engineered to gather raw data, analyse that data using different analysis techniques, uncover information, incorporate that information into the system and curate the evolution of the discovered knowledge. The application domain was chosen for three reasons: firstly, because it is complex and no comprehensive solution exists; secondly, because it requires tight interaction with domain experts, thus requiring the handling of subjective knowledge and inference; and thirdly, because, given the dearth of neurophysiologists, there is a real-world need to provide a solution for this domain.
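To make the workflow idea above concrete, here is a hypothetical sketch of the kind of provenance-carrying data structure described: raw data is wrapped together with an append-only log of how it was acquired and which analysis techniques were applied with which parameters, so a third party can audit any derived conclusion. The class, function names, and example source label are illustrative assumptions, not the platform's actual API.

```python
import hashlib
import json
from datetime import datetime, timezone

class ProvenancedSeries:
    """Hypothetical sketch: data plus an append-only provenance log, so any
    derived result can be traced back to its inputs and the analyses applied."""

    def __init__(self, data, log):
        self.data = data
        self.log = log

    @classmethod
    def acquire(cls, raw, source):
        # record where the raw data came from and a hash of its content
        entry = {"step": "acquisition",
                 "source": source,
                 "sha256": hashlib.sha256(json.dumps(raw).encode()).hexdigest(),
                 "at": datetime.now(timezone.utc).isoformat()}
        return cls(raw, [entry])

    def apply(self, name, fn, **params):
        """Run one analysis technique and record what was applied, with which parameters."""
        entry = {"step": name,
                 "params": params,
                 "at": datetime.now(timezone.utc).isoformat()}
        return ProvenancedSeries(fn(self.data, **params), self.log + [entry])

def moving_average(xs, window=3):
    """Toy analysis technique: trailing moving average."""
    out = []
    for i in range(len(xs)):
        chunk = xs[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical usage with a toy trace standing in for a neurophysiological signal
raw = [1.0, 4.0, 2.0, 8.0, 3.0, 5.0]
series = ProvenancedSeries.acquire(raw, source="cotside monitor, channel C3")  # illustrative label
smoothed = series.apply("moving_average", moving_average, window=3)
print(json.dumps(smoothed.log, indent=2))
```

Because each analysis step is recorded alongside the data rather than baked into it, different techniques can be applied to the same raw stream without hiding which assumptions produced which result, which is the transparency and verifiability the dissertation argues for.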