934 results for speed discounting
Abstract:
Electronic signal processing systems currently employed at core internet routers require huge amounts of power and may be unable to continue satisfying consumer demand for more bandwidth without an inordinate increase in cost, size and/or energy consumption. Optical signal processing techniques may be deployed in next-generation optical networks for simple tasks such as wavelength conversion, demultiplexing and format conversion at high speed (≥100 Gb/s) to alleviate the pressure on existing core router infrastructure. Implementing optical signal processing functionalities requires exploiting the nonlinear optical properties of suitable materials such as III-V semiconductor compounds, silicon, periodically-poled lithium niobate (PPLN), highly nonlinear fibre (HNLF) or chalcogenide glasses. Of the devices based on these materials, nonlinear optical (NLO) components such as semiconductor optical amplifiers (SOAs), electroabsorption modulators (EAMs) and silicon nanowires are the most promising candidates for all-optical switching elements in terms of ease of integration, device footprint and energy consumption. This PhD thesis presents the amplitude and phase dynamics of a range of device configurations containing SOAs, EAMs and/or silicon nanowires to support the design of all-optical switching elements for deployment in next-generation optical networks. Time-resolved pump-probe spectroscopy using 3 ps pulses from mode-locked laser sources was used to accurately measure the carrier dynamics in the device(s) under test. The research work falls into four main topics: (a) a long SOA, (b) the concatenated SOA-EAM-SOA (CSES) configuration, (c) silicon nanowires embedded in SU8 polymer and (d) a custom-epitaxy EAM with fast carrier sweep-out dynamics. The principal aim was to identify the optimum operating conditions for each of these NLO device configurations to enhance their switching capability and to assess their potential for various optical signal processing functionalities. All of the NLO device configurations investigated in this thesis are compact and suitable for monolithic and/or hybrid integration.
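For orientation, the sketch below shows the kind of single-exponential gain-recovery model often used to summarise pump-probe traces of this sort; the modulation depth and time constant are placeholder values, not measurements from the thesis.

```python
import numpy as np

# Toy model of a pump-probe trace: the pump depletes the device gain at zero delay
# and the probe transmission recovers roughly exponentially with a carrier lifetime.
# "depth" and "tau_ps" are placeholder values, not measured results from the thesis.
def gain_recovery(delay_ps, depth=0.3, tau_ps=25.0):
    """Normalised probe transmission versus pump-probe delay (ps)."""
    delay_ps = np.asarray(delay_ps, dtype=float)
    recovering = 1.0 - depth * np.exp(-delay_ps / tau_ps)
    return np.where(delay_ps < 0, 1.0, recovering)   # probe ahead of pump: unperturbed

delays = np.linspace(-5, 100, 8)
print(np.round(gain_recovery(delays), 3))
```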
Abstract:
In the last decade, we have witnessed the emergence of large, warehouse-scale data centres which have enabled new internet-based software applications such as cloud computing, search engines, social media and e-government. Such data centres consist of large collections of servers interconnected using short-reach (up to a few hundred meters) optical interconnects. Today, transceivers for these applications achieve up to 100 Gb/s by multiplexing 10 x 10 Gb/s or 4 x 25 Gb/s channels. In the near future, however, data centre operators have expressed a need for optical links which can support 400 Gb/s up to 1 Tb/s. The crucial challenge is to achieve this in the same footprint (the same transceiver module) and with similar power consumption as today's technology. Straightforward scaling of the currently used space or wavelength division multiplexing may be difficult to achieve: a 1 Tb/s transceiver would require integration of 40 VCSELs (vertical cavity surface emitting laser diodes, widely used for short-reach optical interconnects), 40 photodiodes and the associated electronics operating at 25 Gb/s in the same module as today's 100 Gb/s transceiver. Pushing the bit rate on such links beyond today's commercially available 100 Gb/s per fibre will require new generations of VCSELs and their driver and receiver electronics. This work examines a number of state-of-the-art technologies, investigates their performance constraints and recommends different sets of designs, specifically targeting multilevel modulation formats. Several methods to extend the bandwidth in deep-submicron (65 nm and 28 nm) CMOS technology are explored, while maintaining a focus on reducing power consumption and chip area. The techniques used were pre-emphasis on the rising and falling edges of the signal and bandwidth extension through inductive peaking and various local feedback techniques. These techniques have been applied to a transmitter and receiver developed for advanced modulation formats such as PAM-4 (4-level pulse amplitude modulation). Such a modulation format increases the throughput per channel, which helps to overcome the challenges mentioned above and realize 400 Gb/s to 1 Tb/s transceivers.
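As a small illustration of the PAM-4 format mentioned above, the sketch below maps bit pairs onto four amplitude levels using a common Gray-coded assignment; the level map and symbol values are generic choices, not taken from the thesis.

```python
# Gray-coded mapping of bit pairs to the four PAM-4 amplitude levels;
# two bits per symbol doubles the throughput at a given symbol rate.
GRAY_PAM4 = {(0, 0): -3, (0, 1): -1, (1, 1): +1, (1, 0): +3}

def pam4_encode(bits):
    """Map a flat bit sequence (even length) onto PAM-4 symbol levels."""
    pairs = zip(bits[0::2], bits[1::2])
    return [GRAY_PAM4[pair] for pair in pairs]

print(pam4_encode([1, 0, 0, 1, 1, 1, 0, 0]))   # -> [3, -1, 1, -3]
```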
Abstract:
High volumes of data traffic, along with bandwidth-hungry applications such as cloud computing and video on demand, are driving core optical communication links closer and closer to their maximum capacity. The research community has clearly identified the approaching nonlinear Shannon limit for standard single-mode fibre [1,2]. It is in this context that the work on modulation formats, contained in Chapter 3 of this thesis, was undertaken. The work investigates proposed energy-efficient four-dimensional modulation formats. It begins by studying a new visualisation technique for four-dimensional modulation formats, akin to constellation diagrams, and then carries out one of the first implementations of one such format, polarisation-switched quadrature phase-shift keying (PS-QPSK). This thesis also studies two potential next-generation fibres: few-mode fibre and hollow-core photonic band-gap fibre. Chapter 4 studies ways to experimentally quantify the nonlinearities in few-mode fibre and assess the potential benefits and limitations of such fibres. It carries out detailed experiments to measure the effects of stimulated Brillouin scattering, self-phase modulation and four-wave mixing and compares the results to numerical models, along with capacity limit calculations. Chapter 5 investigates hollow-core photonic band-gap fibre, which is predicted to have a low-loss minimum at a wavelength of 2 μm. Benefiting from this potential low-loss window requires the development of telecoms-grade subsystems and components, and the chapter outlines some of the development and characterisation of these components. The world's first wavelength division multiplexed (WDM) subsystem implemented directly at 2 μm is presented, along with WDM transmission over hollow-core photonic band-gap fibre at 2 μm. References: [1] P. P. Mitra and J. B. Stark, Nature, 411, 1027-1030, 2001. [2] A. D. Ellis et al., JLT, 28, 423-433, 2010.
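To make the PS-QPSK format concrete, the sketch below builds its four-dimensional constellation in the usual way: each symbol carries three bits, two selecting a QPSK point and one selecting which polarisation carries it. The bit ordering and normalisation are arbitrary illustrative choices, not the mapping used in the thesis.

```python
import numpy as np

# PS-QPSK sketch: 3 bits per symbol, 8 points in 4D (one polarisation carries a
# QPSK point, the other is dark). Bit ordering and scaling are illustrative only.
QPSK = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)

def ps_qpsk_symbol(b0, b1, b2):
    """Return the (Ex, Ey) field pair for one 3-bit PS-QPSK symbol."""
    point = QPSK[2 * b0 + b1]
    return (point, 0j) if b2 == 0 else (0j, point)

for bits in [(0, 0, 0), (1, 0, 1), (1, 1, 1)]:
    print(bits, ps_qpsk_symbol(*bits))
```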
Abstract:
The recognition that early breast cancer is a spectrum of diseases, each requiring a specific systemic therapy, guided the 13th St Gallen International Breast Cancer Consensus Conference [1]. The meeting assembled 3600 participants from nearly 90 countries worldwide. The educational content centred on the primary and multidisciplinary treatment of early breast cancer. The meeting culminated on the final day with the St Gallen Breast Cancer Treatment Consensus, established by 40-50 of the world's most experienced opinion leaders in the field of breast cancer treatment. The major issue that arose during the consensus conference was the increasing gap between what is theoretically feasible in patient risk stratification and treatment, and daily practice management. We need to find new paths to bring innovations into clinical research and daily practice. To ensure that continued innovation meets the needs of patients, the therapeutic alliance between patients and academic-led research should be extended to include relevant pharmaceutical companies and drug regulators in a concerted effort to bring innovation into clinical practice. We need to bring together the major players in breast cancer research to map out a coordinated strategy on an international scale, to address disease fragmentation, to share financial resources, and to integrate scientific data. The final goal is to improve access to an affordable, best standard of care for all patients in each country.
Abstract:
Simultaneous measurements of high-altitude optical emissions and magnetic fields produced by sprite-associated lightning discharges enable a close examination of the link between low-altitude lightning processes and high-altitude sprite processes. We report results of a coordinated analysis of high-speed sprite video and wideband magnetic field measurements recorded simultaneously at Yucca Ridge Field Station and Duke University. From June to August 2005, sprites were detected following 67 lightning strokes, all of which had positive polarity. Our data showed that 46% of the 83 discrete sprite events in these sequences initiated more than 10 ms after the lightning return stroke, and we focus on these delayed sprites in this work. All delayed sprites were preceded by continuing current moments that averaged at least 11 kA km between the return stroke and the sprites. The total lightning charge moment change at sprite initiation varied from 600 to 18,600 C km, and the minimum value required to initiate long-delayed sprites ranged from 600 C km for a 15 ms delay to 2000 C km for delays of more than 120 ms. We numerically simulated electric fields at altitudes above these lightning discharges and found that the maximum normalized electric fields are essentially the same as the fields that produce short-delayed sprites. Both estimated and simulation-predicted sprite initiation altitudes indicate that long-delayed sprites generally initiate around 5 km lower than short-delayed sprites. The simulation results also reveal that slow (5-20 ms) intensifications in continuing current can play a major role in initiating delayed sprites. Copyright 2008 by the American Geophysical Union.
Abstract:
Evaluating environmental policies, such as the mitigation of greenhouse gases, frequently requires balancing near-term mitigation costs against long-term environmental benefits. Conventional approaches to valuing such investments hold interest rates constant, but the authors contend that there is a real degree of uncertainty in future interest rates. This leads to a higher valuation of future benefits relative to conventional methods that ignore interest rate uncertainty.
Abstract:
We demonstrate that when the future path of the discount rate is uncertain and highly correlated, the distant future should be discounted at significantly lower rates than suggested by the current rate. We then use two centuries of US interest rate data to quantify this effect. Using both random walk and mean-reverting models, we compute the "certainty-equivalent rate" that summarizes the effect of uncertainty and measures the appropriate forward rate of discount in the future. Under the random walk model we find that the certainty-equivalent rate falls continuously from 4% to 2% after 100 years, 1% after 200 years, and 0.5% after 300 years. At horizons of 400 years, the discounted value increases by a factor of over 40,000 relative to conventional discounting. Applied to climate change mitigation, we find that incorporating discount rate uncertainty almost doubles the expected present value of mitigation benefits. © 2003 Elsevier Science (USA). All rights reserved.
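A minimal numerical sketch of the certainty-equivalent idea described above, assuming an illustrative random-walk calibration rather than the paper's fitted models: simulate many rate paths, average the resulting discount factors, and read off the implied forward rate.

```python
import numpy as np

# Illustrative certainty-equivalent discounting under a random-walk rate.
# The calibration (start rate, step volatility) is made up for this sketch.
rng = np.random.default_rng(0)
n_paths, horizon = 10_000, 400                      # simulated paths, years
r0, sigma = 0.04, 0.005                             # illustrative start rate and step size

shocks = rng.normal(0.0, sigma, size=(n_paths, horizon))
rates = np.clip(r0 + np.cumsum(shocks, axis=1), 0.0, None)   # keep rates non-negative

# Expected discount factor at each horizon, averaged across paths
exp_df = np.exp(-np.cumsum(rates, axis=1)).mean(axis=0)

# Certainty-equivalent forward rate implied by the expected discount factor
ce_forward = -np.diff(np.log(exp_df), prepend=0.0)

for t in (1, 100, 200, 300, 400):
    print(f"year {t:3d}: certainty-equivalent forward rate ≈ {ce_forward[t - 1]:.2%}")
```

Because uncertain rate paths are averaged through the discount factor rather than the rate itself, the low-rate paths dominate at long horizons, which is why the certainty-equivalent forward rate drifts downward over time.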
Abstract:
Elevated delay discounting, in which delayed rewards quickly lose value as a function of time, is associated with substance use and abuse. Currently, the direction of causation is unclear: while some research indicates that elevated delay discounting leads to future substance use, it is also possible that chronic substance use, and specifically the rate of reinforcement associated with drug use, leads to elevated delay discounting. This project aims to examine the latter possibility. Forty-seven participants completed ten 30-minute daily sessions of a visual attention task and were reinforced at a rate intended to model drug use (fixed ratio 1) or drug abstinence (fixed ratio 10). Baseline and post-training rates of delay discounting were assessed for hypothetical $50 and $1000 rewards. The area under the curve of the indifference points as a function of delay was calculated; a greater area under the curve suggests more self-control, whereas a lower value represents more impulsiveness. Results at both the $50 and $1000 values showed increased impulsivity relative to the control for both the FR1 and FR10 groups, indicating that the two schedules may both model drug use.
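A short sketch of the area-under-the-curve measure described above, using the common normalisation of delays and indifference points before applying the trapezoid rule; the delays and indifference points below are hypothetical, not data from the study.

```python
import numpy as np

def discounting_auc(delays, indifference_points, amount):
    """Area under the normalised indifference-point curve (range 0..1).

    Values near 1 indicate shallow discounting (more self-control);
    values near 0 indicate steep discounting (more impulsivity).
    """
    x = np.asarray(delays, dtype=float) / max(delays)           # normalise delays
    y = np.asarray(indifference_points, dtype=float) / amount   # normalise values
    return float(np.sum(np.diff(x) * (y[1:] + y[:-1]) / 2.0))   # trapezoid rule

delays_days = [1, 7, 30, 90, 180, 365]      # hypothetical delays
indiff_50 = [48, 42, 35, 25, 18, 12]        # hypothetical $50 indifference points

print(f"AUC for $50: {discounting_auc(delays_days, indiff_50, 50):.3f}")
```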
Abstract:
The issues surrounding the collision of projectiles with structures have gained a high profile since the events of 11th September 2001. In such collision problems, the projectile penetrates the structure, so that tracking the interface between one material and another becomes very complex, especially if the projectile is essentially a vessel containing a fluid, e.g. a fuel load. The subsequent combustion, heat transfer, melting and re-solidification processes in the structure render this a very challenging computational modelling problem. The conventional approach to the analysis of collision processes involves a Lagrangian-Lagrangian, contact-driven methodology. This approach suffers from a number of disadvantages in its implementation, most of which are associated with the challenges of the contact analysis component of the calculations. This paper describes a 'two-fluid' approach to high-speed impact between solid structures, where the objective is to overcome the problems of penetration and re-meshing. The work has been carried out using the finite volume, unstructured mesh multi-physics code PHYSICA+, in which the three-dimensional fluid flow, free surface, heat transfer, combustion, melting and re-solidification algorithms are approximated using cell-centred finite volume, unstructured mesh techniques on a collocated mesh. The basic procedure is illustrated for two cases of Newtonian and non-Newtonian flow to test several of its component capabilities in the analysis of problems of industrial interest.
Abstract:
Particle degradation can be a significant issue in particulate solids handling and processing, particularly in pneumatic conveying systems, in which high-speed impact is usually the main contributory factor leading to changes in particle size distribution (compared with the material in its virgin state). However, other factors, such as particle concentration, bend geometry and the hardness of the pipe material, may also strongly influence particle breakage. Because of such complex influences, it is often very difficult to predict particle degradation accurately and rapidly for industrial processes. In this article, a general method for evaluating particle degradation due to high-speed impacts is described, in which the breakage properties of particles are quantified using what are known as "breakage matrices". Rather than a pilot-size test facility, a bench-scale degradation tester has been used, and some advantages of the bench-scale tester are briefly explored. Experimental degradation testing of adipic acid has been carried out for a range of impact velocities across four particle size categories, and particle breakage matrices of adipic acid have subsequently been established for these impact velocities. The experimental results show that the breakage-matrix approach is an effective and straightforward method for evaluating particle degradation due to high-speed impacts. The possibility of applying the breakage-matrix approach to a pneumatic conveying system is also explored through a simulation example.
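To illustrate the breakage-matrix idea described above, the sketch below applies a made-up breakage matrix for one impact velocity to an inlet particle size distribution: column j holds the size distribution produced when particles in size class j break, so the outlet distribution is a simple matrix-vector product. None of the numbers are measured adipic acid data.

```python
import numpy as np

# Made-up breakage matrix for a single impact velocity: column j gives the mass
# fractions produced when size class j impacts the target (columns sum to 1).
size_classes = ["coarse", "medium", "fine", "dust"]
B = np.array([
    [0.70, 0.00, 0.00, 0.00],
    [0.20, 0.75, 0.00, 0.00],
    [0.07, 0.18, 0.85, 0.00],
    [0.03, 0.07, 0.15, 1.00],
])

feed = np.array([0.50, 0.30, 0.15, 0.05])   # inlet mass fractions (sum to 1)

outlet = B @ feed                           # predicted size distribution after impact
for name, fraction in zip(size_classes, outlet):
    print(f"{name:>6}: {fraction:.3f}")
```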