20 results for Bandwidth Broadening Techniques
in CORA - Cork Open Research Archive - University College Cork - Ireland
Abstract:
For pt. I see ibid., vol. 44, p. 927-36 (1997). In a digital communications system, data are transmitted from one location to another by mapping bit sequences to symbols, and symbols to sample functions of analog waveforms. The analog waveform passes through a bandlimited (possibly time-varying) analog channel, where the signal is distorted and noise is added. In a conventional system the analog sample functions sent through the channel are weighted sums of one or more sinusoids; in a chaotic communications system the sample functions are segments of chaotic waveforms. At the receiver, the symbol may be recovered by means of coherent detection, where all possible sample functions are known, or by noncoherent detection, where one or more characteristics of the sample functions are estimated. In a coherent receiver, synchronization is the most commonly used technique for recovering the sample functions from the received waveform. These sample functions are then used as reference signals for a correlator. Synchronization-based coherent receivers have advantages over noncoherent receivers in terms of noise performance, bandwidth efficiency (in narrow-band systems) and/or data rate (in chaotic systems). These advantages are lost if synchronization cannot be maintained, for example, under poor propagation conditions. In these circumstances, communication without synchronization may be preferable. The theory of conventional telecommunications is extended to chaotic communications, chaotic modulation techniques and receiver configurations are surveyed, and chaotic synchronization schemes are described.
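The correlator-based coherent decision described above reduces, once the reference sample functions are recovered, to picking the reference that correlates best with the received segment. The sketch below is a minimal illustration of that decision rule, assuming perfect synchronization; the logistic-map references and all names are illustrative, not taken from the source.

```python
import numpy as np

def coherent_detect(received, references):
    """Choose the symbol whose reference waveform correlates best
    with the received segment (toy chaos-shift-keying decision)."""
    scores = [np.dot(received, ref) for ref in references]
    return int(np.argmax(scores))

def logistic_segment(x0, n):
    """Generate a roughly zero-mean chaotic segment from the logistic map."""
    x, out = x0, []
    for _ in range(n):
        x = 3.99 * x * (1.0 - x)
        out.append(x - 0.5)  # remove the DC offset before correlating
    return np.array(out)

refs = [logistic_segment(0.3, 64), logistic_segment(0.7, 64)]
rx = refs[1] + 0.1 * np.random.randn(64)  # symbol 1 plus channel noise
print(coherent_detect(rx, refs))          # 1, with high probability
```

A noncoherent receiver would instead estimate a characteristic of the segment (e.g. its energy) without needing the references at all, which is what makes it usable when synchronization fails.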
Abstract:
A massive change is currently taking place in the manner in which power networks are operated. Traditionally, power networks consisted of large power stations which were controlled from centralised locations. The trend in modern power networks is for generated power to be produced by a diverse array of energy sources which are spread over a large geographical area. As a result, controlling these systems from a centralised controller is impractical. Thus, future power networks will be controlled by a large number of intelligent distributed controllers which must work together to coordinate their actions. Smart Grid is the umbrella term used to denote this combination of power systems, artificial intelligence, and communications engineering. This thesis focuses on the application of optimal control techniques to Smart Grids, with a particular focus on iterative distributed MPC. A novel convergence and stability proof for iterative distributed MPC based on the Alternating Direction Method of Multipliers is derived. The performance of distributed MPC, centralised MPC, and an optimised PID controller is then compared when applied to a highly interconnected, nonlinear, MIMO testbed based on a part of the Nordic power grid. Finally, a novel tuning algorithm is proposed for iterative distributed MPC which simultaneously optimises both the closed-loop performance and the communication overhead associated with the desired control.
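Since the thesis builds on the Alternating Direction Method of Multipliers (ADMM), a minimal consensus-ADMM sketch may help fix ideas: each local controller solves a cheap private subproblem, and agreement is enforced through an averaging step and dual updates. The quadratic costs and all names here are illustrative toys, not the thesis's power-system formulation.

```python
import numpy as np

def consensus_admm(a, rho=1.0, iters=50):
    """Consensus ADMM for min_x sum_i 0.5*(x - a_i)^2.
    Each agent i updates x_i locally; z is the shared consensus
    variable and u holds the scaled dual variables."""
    x = np.zeros(len(a))
    u = np.zeros(len(a))
    z = 0.0
    for _ in range(iters):
        x = (a + rho * (z - u)) / (1.0 + rho)  # local (distributed) steps
        z = np.mean(x + u)                     # coordination / averaging
        u = u + x - z                          # dual update on consensus gap
    return z

a = np.array([1.0, 2.0, 6.0])
print(consensus_admm(a))  # converges to mean(a) = 3.0
```

In a distributed MPC setting each local variable would be a controller's trajectory plan and the averaging step would be the communication round, which is exactly where the tuning trade-off between closed-loop performance and communication overhead arises.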
Abstract:
There is much common ground between the areas of coding theory and systems theory. Fitzpatrick has shown that a Gröbner basis approach leads to efficient algorithms in the decoding of Reed-Solomon codes and in scalar interpolation and partial realization. This thesis simultaneously generalizes and simplifies that approach and presents applications to discrete-time modeling, multivariable interpolation and list decoding. Gröbner basis theory has come into its own in the context of software and algorithm development. By generalizing the concept of polynomial degree, term orders are provided for multivariable polynomial rings and free modules over polynomial rings. The orders are not, in general, unique and this adds, in no small way, to the power and flexibility of the technique. As well as being generating sets for ideals or modules, Gröbner bases always contain an element which is minimal with respect to the corresponding term order. Central to this thesis is a general algorithm, valid for any term order, that produces a Gröbner basis for the solution module (or ideal) of elements satisfying a sequence of generalized congruences. These congruences, based on shifts and homomorphisms, are applicable to a wide variety of problems, including key equations and interpolations. At the core of the algorithm is an incremental step. Iterating this step lends a recursive/iterative character to the algorithm. As a consequence, not all of the input to the algorithm need be available from the start and different "paths" can be taken to reach the final solution. The existence of a suitable chain of modules satisfying the criteria of the incremental step is a prerequisite for applying the algorithm.
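As a concrete instance of such a congruence, the key equation for decoding a Reed-Solomon code with error-correcting radius t fits this mould (a standard textbook form, not quoted from the thesis):

```latex
% Given the syndrome polynomial S(x), find the error-locator \Lambda(x)
% and error-evaluator \Omega(x) satisfying
\Omega(x) \equiv \Lambda(x)\,S(x) \pmod{x^{2t}},
\qquad \deg \Omega(x) < \deg \Lambda(x) \le t .
% A Gröbner basis of the solution module of pairs (\Lambda, \Omega),
% computed with respect to a suitable term order, contains a minimal
% solution, which is the decoder's output.
```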
Abstract:
In this thesis a novel transmission format, named Coherent Wavelength Division Multiplexing (CoWDM), for use in high information spectral density optical communication networks is proposed and studied. In Chapter 1, a historical view of fibre optic communication systems, as well as an overview of state-of-the-art technology, is presented to provide an introduction to the subject area. We see that, in general, the aim of modern optical communication system designers is to provide high bandwidth services while reducing the overall cost per transmitted bit of information. In the remainder of the thesis a range of investigations, both of a theoretical and an experimental nature, is carried out using the CoWDM transmission format. These investigations are designed to consider features of CoWDM such as its dispersion tolerance, compatibility with forward error correction and suitability for use in currently installed long-haul networks, amongst others. A high bit rate optical test bed constructed at the Tyndall National Institute facilitated most of the experimental work outlined in this thesis, and a collaboration with France Telecom enabled long-haul transmission experiments using the CoWDM format to be carried out. Research was also carried out on ancillary topics such as optical comb generation, forward error correction and phase stabilisation techniques. The aim of these investigations is to verify the suitability of CoWDM as a cost-effective solution for use in both current and future high bit rate optical communication networks.
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power to operate, and they may be unable to continue to satisfy consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. To implement optical signal processing functionalities, it is necessary to exploit the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. However, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates as all-optical switching elements vis-à-vis ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics in a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using 3 ps pulses from mode-locked laser sources was utilized to accurately measure the carrier dynamics in the device(s) under test. The research work falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom epitaxy design EAM with fast carrier sweep-out dynamics. The principal aim was to identify the optimum operating conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
Abstract:
Error correcting codes are combinatorial objects designed to enable reliable transmission of digital data over noisy channels. They are ubiquitously used in communication, data storage, etc. Error correction allows reconstruction of the original data from the received word. Classical decoding algorithms are constrained to output just one codeword. However, in the late 1950s researchers proposed a relaxed error correction model for potentially large error rates, known as list decoding. The research presented in this thesis focuses on reducing the computational effort and enhancing the efficiency of decoding algorithms for several codes, from both algorithmic and architectural standpoints. The codes in consideration are linear block codes closely related to Reed-Solomon (RS) codes. A high-speed, low-complexity algorithm and architecture are presented for encoding and decoding RS codes based on evaluation. The implementation results show that the hardware resources and the total execution time are significantly reduced as compared to the classical decoder. The evaluation-based encoding and decoding schemes are modified and extended for shortened RS codes, and software implementation shows a substantial reduction in memory footprint at the expense of latency. Hermitian codes can be seen as concatenated RS codes and are much longer than RS codes over the same alphabet. A fast, novel and efficient VLSI architecture for Hermitian codes is proposed based on interpolation decoding. The proposed architecture is proven to perform better than Kötter's decoder for high-rate codes. The thesis work also explores a method of constructing optimal codes by computing the subfield subcodes of Generalized Toric (GT) codes, a natural extension of RS codes to several dimensions. The polynomial generators, or evaluation polynomials, for subfield subcodes of GT codes are identified, from which the dimension and a bound on the minimum distance are computed. The algebraic structure of the polynomials evaluating to the subfield is used to simplify the list decoding algorithm for BCH codes. Finally, an efficient and novel approach is proposed for exploiting powerful codes having complex decoding but a simple encoding scheme (comparable to RS codes) for multihop wireless sensor network (WSN) applications.
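The evaluation-based encoding mentioned above amounts to viewing the k message symbols as the coefficients of a polynomial and taking the codeword to be its values at n distinct field points. The toy below uses a small prime field for brevity, rather than the binary extension fields usual in RS hardware; the parameters and names are illustrative only.

```python
def rs_encode_eval(msg, n, p=929):
    """Evaluation-style RS encoding over the prime field GF(p).
    msg holds k symbols, read as ascending coefficients of a
    degree-<k polynomial; the codeword is its evaluations at 1..n."""
    def poly_eval(coeffs, x):
        y = 0
        for c in reversed(coeffs):  # Horner's rule, reduced mod p
            y = (y * x + c) % p
        return y
    return [poly_eval(msg, x) for x in range(1, n + 1)]

# A (7, 3) toy code: any 3 error-free symbols of the 7 determine the
# degree-<3 message polynomial by interpolation.
print(rs_encode_eval([5, 17, 42], n=7))
```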
Abstract:
Modern neuroscience relies heavily on sophisticated tools that allow us to visualize and manipulate cells with precise spatial and temporal control. Transgenic mouse models, for example, can be used to manipulate cellular activity in order to draw conclusions about the molecular events responsible for the development, maintenance and refinement of healthy and/or diseased neuronal circuits. Although it is fairly well established that circuits respond to activity-dependent competition between neurons, we have yet to understand either the mechanisms underlying these events or the higher-order plasticity that synchronizes entire circuits. In this thesis we aimed to develop and characterize transgenic mouse models that can be used to directly address these outstanding biological questions in different ways. We present SLICK-H, a Cre-expressing mouse line that can achieve drug-inducible, widespread, neuron-specific manipulations in vivo. This model is a clear improvement over existing models because of its particularly strong, widespread, and even distribution pattern that can be tightly controlled in the absence of drug induction. We also present SLICK-V::Ptox, a mouse line that, through expression of the tetanus toxin light chain, allows long-term inhibition of neurotransmission in a small subset (<1%) of fluorescently labeled pyramidal cells. This model, which can be used to study how a silenced cell performs in a wildtype environment, greatly facilitates the in vivo study of activity-dependent competition in the mammalian brain. As an initial application we used this model to show that tetanus toxin-expressing CA1 neurons experience a 15% - 19% decrease in apical dendritic spine density. Finally, we also describe the attempt to create additional Cre-driven mouse lines that would allow conditional alteration of neuronal activity either by hyperpolarization or inhibition of neurotransmission. Overall, the models characterized in this thesis expand upon the wealth of tools available that aim to dissect neuronal circuitry by genetically manipulating neurons in vivo.
Abstract:
In this thesis I theoretically study quantum states of ultracold atoms. The majority of the chapters focus on engineering specific quantum states of single atoms with high fidelity in experimentally realistic systems. In Chapter 6, I investigate the stability and dynamics of new multidimensional solitonic states that can be created in inhomogeneous atomic Bose-Einstein condensates. In Chapter 3 I present two papers in which I demonstrate how the coherent tunnelling by adiabatic passage (CTAP) process can be implemented in an experimentally realistic atom chip system, to coherently transfer the centre-of-mass of a single atom between two spatially distinct magnetic waveguides. In these works I also utilise GPU (Graphics Processing Unit) computing, which offers a significant performance increase in the numerical simulation of the Schrödinger equation. In Chapter 4 I investigate the CTAP process for a linear arrangement of radio frequency traps where the centre-of-mass of both single atoms and clouds of interacting atoms can be coherently controlled. In Chapter 5 I present a theoretical study of adiabatic radio frequency potentials, where I use Floquet theory to more accurately model situations where frequencies are close and/or field amplitudes are large. I also show how one can create highly versatile 2D adiabatic radio frequency potentials using multiple radio frequency fields with arbitrary field orientation and demonstrate their utility by simulating the creation of ring vortex solitons. In Chapter 6 I discuss the stability and dynamics of a family of multidimensional solitonic states created in harmonically confined Bose-Einstein condensates. I demonstrate that these solitonic states have interesting dynamical instabilities, where a continuous collapse and revival of the initial state occurs. Through Bogoliubov analysis, I determine the modes responsible for the observed instabilities of each solitonic state and also extract information related to the time at which instability can be observed.
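Schrödinger-equation simulations of the kind mentioned above are commonly performed with a split-step Fourier method, which is dominated by FFTs and therefore maps naturally onto GPUs. The sketch below is a minimal CPU version in NumPy (a GPU array library could be substituted for the FFT calls); the harmonic trap and all parameters are illustrative, not those of the atom chip systems studied in the thesis.

```python
import numpy as np

def split_step(psi, x, dt, steps, potential):
    """Split-step Fourier evolution of the 1D Schroedinger equation
    (units with hbar = m = 1): half a potential step in x-space,
    a full kinetic step in k-space, then another half potential step."""
    dx = x[1] - x[0]
    k = 2.0 * np.pi * np.fft.fftfreq(len(x), d=dx)
    kinetic = np.exp(-0.5j * k**2 * dt)
    half_pot = np.exp(-0.5j * potential * dt)
    for _ in range(steps):
        psi = half_pot * psi
        psi = np.fft.ifft(kinetic * np.fft.fft(psi))
        psi = half_pot * psi
    return psi

x = np.linspace(-10.0, 10.0, 512)
psi0 = np.exp(-x**2)                           # Gaussian initial state
psi0 /= np.sqrt(np.trapz(np.abs(psi0)**2, x))  # normalise
V = 0.5 * x**2                                 # harmonic trap (toy)
psi = split_step(psi0, x, dt=0.005, steps=2000, potential=V)
print(np.trapz(np.abs(psi)**2, x))             # norm stays ~1.0
```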
Abstract:
This PhD thesis investigates the potential use of science communication models to engage a broader swathe of actors in decision making in relation to scientific and technological innovation, in order to address possible democratic deficits in science and technology policy-making. A four-pronged research approach has been employed to examine different representations of the public(s) and different modes of engagement. The first case study investigates whether patient groups could represent an alternative needs-driven approach to biomedical and health sciences R&D. This is followed by an enquiry into the potential for Science Shops to represent a bottom-up approach to promote research and development of local relevance. The barriers and opportunities for the involvement of scientific researchers in science communication are next investigated via a national survey which is comparable to a similar survey conducted in the UK. The final case study investigates to what extent opposition or support regarding nanotechnology (as an emerging technology) is reflected amongst the YouTube user community, and the findings are considered in the context of how support or opposition to new or emerging technologies can be addressed using conflict resolution based approaches to manage potential conflict trajectories. The research indicates that the majority of communication exercises of relevance to science policy and planning take the form of a one-way flow of information with little or no facility for public feedback. This thesis proposes that a more bottom-up approach to research and technology would help broaden acceptability and accountability for decisions made relating to new or existing technological trajectories. This approach could be better integrated with, and complementary to, the activities of government, institutions (e.g. universities) and research funding agencies, and help ensure that public needs and issues are addressed more directly by the research community. Such approaches could also facilitate the empowerment of societal stakeholders regarding scientific literacy and agenda-setting. One-way information relays could be adapted to facilitate feedback from representative groups, e.g. non-governmental organisations or civil society organisations (such as patient groups), in order to enhance the functioning and socio-economic relevance of knowledge-based societies to the betterment of human livelihoods.
Abstract:
Existing work in Computer Science and Electronic Engineering demonstrates that Digital Signal Processing techniques can effectively identify the presence of stress in the speech signal. These techniques use datasets containing real stress samples, i.e. real-life stress such as 911 calls. Studies that use simulated or laboratory-induced stress have been less successful and inconsistent. Pervasive, ubiquitous computing is increasingly moving towards voice-activated and voice-controlled systems and devices. Speech recognition and speaker identification algorithms will have to improve and take emotional speech into account. Modelling the influence of stress on speech and voice is of interest to researchers from many different disciplines, including security, telecommunications, psychology, speech science, forensics and Human Computer Interaction (HCI). The aim of this work is to assess the impact of moderate stress on the speech signal. In order to do this, a dataset of laboratory-induced stress is required. While attempting to build this dataset it became apparent that reliably inducing measurable stress in a controlled environment, when speech is a requirement, is a challenging task. This work focuses on the use of a variety of stressors to elicit a stress response during tasks that involve speech content. Biosignal analysis (commercial Brain Computer Interfaces, eye tracking and skin resistance) is used to verify and quantify the stress response, if any. This thesis explains the basis of the author's hypotheses on the elicitation of affectively-toned speech and presents the results of several studies carried out throughout the PhD research period. These results show that the elicitation of stress, particularly the induction of affectively-toned speech, is not a simple matter, and that many modulating factors influence the stress response process. A model is proposed to reflect the author's hypothesis on the emotional response pathways relating to the elicitation of stress with a required speech content. Finally, the author provides guidelines and recommendations for future research on speech under stress. Further research paths are identified and a roadmap for future research in this area is defined.
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media, e-government, etc. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred metres) optical interconnect. Today, transceivers for these applications achieve up to 100Gb/s by multiplexing 10x 10Gb/s or 4x 25Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400Gb/s up to 1Tb/s. The crucial challenge is to achieve this in the same footprint (same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: indeed, a 1Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnect), 40 photodiodes and the electronics operating at 25Gb/s in the same module as today's 100Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100Gb/s/fibre will require new generations of VCSELs and their driver and receiver electronics. This work looks into a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth using deep submicron (65nm and 28nm) CMOS technology are explored in this work, while also maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis of the rising and falling edges of the signal, and bandwidth extension by inductive peaking and different local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format can increase the throughput per individual channel, which helps to overcome the challenges mentioned above to realize 400Gb/s to 1Tb/s transceivers.
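PAM-4, the modulation format targeted by this work, carries two bits per symbol by using four amplitude levels instead of two, doubling throughput at a fixed baud rate. A minimal sketch of the bit-to-level mapping follows; the Gray-coded level assignment is a common convention, and nothing here reflects the thesis's actual circuits.

```python
import numpy as np

# Gray-coded PAM-4: each pair of bits selects one of four levels, and
# adjacent levels differ by a single bit, limiting error multiplication.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_modulate(bits):
    """Map an even-length bit sequence to PAM-4 amplitude levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return np.array([GRAY_PAM4[p] for p in pairs], dtype=float)

print(pam4_modulate([1, 0, 0, 1, 1, 1, 0, 0]))  # [ 3. -1.  1. -3.]
```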
Abstract:
There are difficulties with utilising self-report and physiological measures of assessment amongst forensic populations. This study investigates implicit measures amongst sexual offenders, nonsexual offenders and low-risk samples. Implicit measurement is a term applied to measurement methods that make it difficult to influence responses through conscious control. The test battery includes the Implicit Association Test (IAT), Rapid Serial Visual Presentation (RSVP), Viewing Time (VT) and the Structured Clinical Interview for Disorders (SCID). The IAT proposes that people will perform better on a task when they depend on well-practiced cognitive associations. The RSVP task requires participants to identify a single target image that is presented amongst a series of rapidly presented visual images. RSVP operates on the premise that if two target images are presented within 500 milliseconds of each other, the possibility that the participant will recognize the second target is significantly reduced when the first target is of salience to the individual. This is the attentional blink phenomenon. VT is based on the principle that people will look longer at images that are of salience. Results showed that on the VT task, child sexual offenders took longer to view images of children than low-risk groups. Nude images induced a greater attentional blink than clothed images amongst low-risk and offending samples on the RSVP task. Sexual offenders took longer than low-risk groups on word-pairing tasks where sexual words were paired with adult words on the IAT. The SCID highlighted differences between the offending and non-offending groups on the subscales for personality disorders. More erotic stimulus items on the VT and RSVP measures are recommended to better differentiate sexual preference between offending and non-offending samples. A pictorial IAT is recommended. Findings provide the basis for further development of implicit measures within the assessment of sexual offenders.
Abstract:
The demand for optical bandwidth continues to increase year on year, driven primarily by entertainment services and video streaming to the home. Current photonic systems are coping with this demand by increasing data rates through faster modulation techniques, spectrally efficient transmission systems and an increasing number of modulated optical channels per fibre strand. Such photonic systems are large and power hungry due to the high number of discrete components required in their operation. Photonic integration offers excellent potential for combining otherwise discrete system components together on a single device to provide robust, power-efficient and cost-effective solutions. In particular, the design of optical modulators has been an area of immense interest in recent times. Not only has research been aimed at developing modulators with faster data rates, but there has also been a push towards making modulators as compact as possible. Mach-Zehnder modulators (MZMs) have proven to be highly successful in many optical communication applications. However, due to the relatively weak electro-optic effect on which they are based, they remain large, with typical device lengths of 4 to 7 mm, while requiring a travelling-wave structure for high-speed operation. Nested MZMs have been extensively used in the generation of advanced modulation formats, where multi-symbol transmission can be used to increase data rates at a given modulation frequency. Such nested structures have high losses and require both complex fabrication and packaging. In recent times, it has been shown that electro-absorption modulators (EAMs) can be used in a specific arrangement to generate Quadrature Phase Shift Keying (QPSK) modulation. EAM-based QPSK modulators have increased potential for integration and can be made significantly more compact than MZM-based modulators. Such modulator designs suffer from losses in excess of 40 dB, which limits their use in practical applications. The work in this thesis has focused on how these losses can be reduced by using photonic integration. In particular, the integration of multiple lasers with the modulator structure was considered as an excellent means of reducing fibre coupling losses while maximising the optical power on chip. A significant difficulty when using multiple integrated lasers in such an arrangement was to ensure coherence between the integrated lasers. The work investigated in this thesis demonstrates for the first time how optical injection locking between discrete lasers on a single photonic integrated circuit (PIC) can be used in the generation of coherent optical signals. This was done by first considering the monolithic integration of lasers and optical couplers to form an on-chip optical power splitter, before then examining the behaviour of a mutually coupled system of integrated lasers. By operating the system in a highly asymmetric coupling regime, a stable phase-locking region was found between the integrated lasers. It was then shown that in this stable phase-locked region the optical outputs of each laser were coherent with each other and phase locked to a common master laser.
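QPSK, the format generated by the EAM arrangement discussed above, encodes two bits per symbol as one of four carrier phases. The baseband mapping is sketched below as a generic illustration; it says nothing about the EAM-based optical realisation itself.

```python
import numpy as np

def qpsk_modulate(bits):
    """Map bit pairs to Gray-coded QPSK constellation points
    (phases at 45, 135, 225 and 315 degrees, unit symbol energy)."""
    i = 1 - 2 * np.asarray(bits[0::2])  # even bits -> in-phase branch
    q = 1 - 2 * np.asarray(bits[1::2])  # odd bits  -> quadrature branch
    return (i + 1j * q) / np.sqrt(2.0)

print(qpsk_modulate([0, 0, 1, 0, 1, 1]))
# [ 0.707+0.707j  -0.707+0.707j  -0.707-0.707j ]
```

Generating such a constellation optically requires mutually coherent carriers, which is why the coherence between integrated lasers demonstrated in the thesis is the critical enabler.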
Abstract:
Recent years have witnessed a rapid growth in the demand for streaming video over the Internet, exposing challenges in coping with heterogeneous device capabilities and varying network throughput. When we couple this rise in streaming with the growing number of portable devices (smartphones, tablets, laptops), we see an ever-increasing demand for high-definition video online while on the move. Wireless networks are inherently characterised by restricted shared bandwidth and relatively high error loss rates, thus presenting a challenge for the efficient delivery of high quality video. Additionally, mobile devices can support/demand a range of video resolutions and qualities. This demand for mobile streaming highlights the need for adaptive video streaming schemes that can adjust to available bandwidth and heterogeneity, and can provide us with graceful changes in video quality, all while respecting our viewing satisfaction. In this context the use of well-known scalable media streaming techniques, commonly known as scalable coding, is an attractive solution and the focus of this thesis. In this thesis we investigate the transmission of existing scalable video models over a lossy network and determine how the variation in viewable quality is affected by packet loss. This work focuses on leveraging the benefits of scalable media while reducing the effects of data loss on achievable video quality. The overall approach is focused on the strategic packetisation of the underlying scalable video and how best to utilise error resiliency to maximise viewable quality. In particular, we examine the manner in which scalable video is packetised for transmission over lossy networks and propose new techniques that reduce the impact of packet loss on scalable video by selectively choosing how to packetise the data and which data to transmit. We also exploit redundancy techniques, such as error resiliency, to enhance the stream quality by ensuring a smooth play-out with fewer changes in achievable video quality. The contributions of this thesis are in the creation of new segmentation and encapsulation techniques which increase the viewable quality of existing scalable models by fragmenting and re-allocating the video sub-streams based on user requirements, available bandwidth and variations in loss rates. We offer new packetisation techniques which reduce the effects of packet loss on viewable quality by leveraging the increase in the number of frames per group of pictures (GOP) and by providing equality of data in every packet transmitted per GOP. These provide novel mechanisms for packetisation and error resiliency, as well as providing new applications for existing techniques such as Interleaving and Priority Encoded Transmission. We also introduce three new scalable coding models, which offer a balance between transmission cost and the consistency of viewable quality.
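Interleaving, one of the existing techniques mentioned above, spreads consecutive data units of a GOP across several packets so that losing one packet removes every n-th unit rather than a contiguous burst, which is far easier to conceal. A toy sketch, with packet counts and naming purely illustrative:

```python
def interleave_packets(units, n_packets):
    """Distribute a GOP's data units round-robin over n_packets, so a
    single lost packet costs scattered units instead of a burst."""
    packets = [[] for _ in range(n_packets)]
    for i, unit in enumerate(units):
        packets[i % n_packets].append(unit)
    return packets

gop_units = [f"frag{i}" for i in range(12)]  # fragments of one GOP
for packet in interleave_packets(gop_units, 4):
    print(packet)
# Losing one packet drops frag k, frag k+4, frag k+8: a spread-out,
# more concealable loss than three consecutive fragments.
```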
Abstract:
Quantitative analysis of penetrative deformation in sedimentary rocks of fold and thrust belts has largely been carried out using clast-based strain analysis techniques. These methods analyse the geometric deviations from an original state that populations of clasts, or strain markers, have undergone. The characterisation of these geometric changes, or strain, in the early stages of rock deformation is not entirely straightforward. This is in part due to the paucity of information on the original state of the strain markers, but also to the uncertainty in the relative rheological properties of the strain markers and their matrix during deformation, as well as the interaction of two competing fabrics, such as bedding and cleavage. Furthermore, one of the largest setbacks for accurate strain analysis has been the methods themselves: they are traditionally time-consuming and labour-intensive, and results can vary between users. A suite of semi-automated techniques has been tested and found to work very well, but in low-strain environments the problems discussed above persist. Additionally, these techniques have been compared to Anisotropy of Magnetic Susceptibility (AMS) analysis, a particularly sensitive tool for the characterisation of low strain in sedimentary lithologies.
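For flavour, one simple first-pass clast-based estimate takes the harmonic mean of the measured clast axial ratios (Rf) as an approximation of the finite strain ratio (Rs), valid only when the original markers were roughly equant. This is a generic textbook-style illustration, not the semi-automated workflow tested in the thesis.

```python
import numpy as np

def harmonic_mean_rf(axial_ratios):
    """Harmonic mean of clast axial ratios (Rf): a rough estimate of
    the strain ratio (Rs) if the undeformed clasts were near-circular."""
    r = np.asarray(axial_ratios, dtype=float)
    return len(r) / np.sum(1.0 / r)

# Long/short axis ratios measured on a population of deformed clasts
rf_values = [1.8, 2.1, 1.6, 2.4, 1.9, 2.0]
print(round(harmonic_mean_rf(rf_values), 2))
```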